
[bug]: mocking=true does not reliably isolate downstream HTTP dependencies during replay in a simple docker-compose Java microservice chain #3909

@akinnya

Description

👀 Is there an existing issue for this?

  • I have searched and didn't find a similar issue

👍 Current behavior

We tested mocking=true in a reduced Java/Spring Boot multi-service setup and found that downstream HTTP dependencies are not reliably isolated by mocks during replay.

The application topology is simple:

  • one entry service
  • one main business service
  • multiple downstream HTTP services
  • one database container

Our expectation was:

  • replay request reaches the main service
  • downstream HTTP calls are intercepted before hitting real downstream services
  • recorded mocks are returned
  • replay becomes deterministic
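
The expected interception behavior can be sketched as a tiny, self-contained illustration (not Keploy's actual implementation; the class name, the `GET /inventory/42` request key, and the response body are all hypothetical): during replay, a downstream call should be answered from the recorded mock store, and should fail loudly rather than fall through to a live service when no mock matches.

```java
import java.util.Map;

// Hypothetical sketch of the isolation we expected from mocking=true:
// downstream calls are answered from recorded mocks during replay,
// never forwarded to (or influenced by) live downstream services.
public class MockReplaySketch {
    // stands in for the recorded entries in mocks.yaml (hypothetical data)
    static final Map<String, String> recordedMocks = Map.of(
            "GET /inventory/42", "{\"stock\":7}"
    );

    static String replayDownstream(String request) {
        String mock = recordedMocks.get(request);
        if (mock == null) {
            // expected behavior when no mock matches: fail fast,
            // not silently reach a live downstream service
            throw new IllegalStateException("no recorded mock for: " + request);
        }
        return mock; // deterministic: live downstream state never consulted
    }

    public static void main(String[] args) {
        System.out.println(replayDownstream("GET /inventory/42"));
        // → {"stock":7}
    }
}
```

Both failure modes we observed violate this sketch: Scenario A consults live downstream state, and Scenario B neither serves a mock nor fails fast.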

What we actually confirmed is:

Scenario A: downstream services are running

Replay can return HTTP 200, but the final response still reflects live downstream state.

This confirms that mocks did not fully isolate downstream services, because real downstream state still affected the replay result.

Scenario B: downstream services are NOT running

The replay request still reaches the main service, but the top-level request times out instead of being satisfied by mocks.

Observed error:

Post "http://127.0.0.1:/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

This confirms that mocks did not reliably serve the downstream calls when the real downstream services were absent.

Confirmed conclusion

In this simplified setup:

  • when downstream services are running, mocks do not reliably prevent real downstream usage
  • when downstream services are not running, mocks do not reliably satisfy downstream calls

So mocking=true did not provide reliable downstream isolation in this multi-service replay scenario.

👟 Steps to Replicate

  1. Prepare a small Java/Spring Boot docker-compose project with:
    • one entry service
    • one main service
    • multiple downstream HTTP services
    • one database
  2. Record a request to a business endpoint that triggers downstream HTTP calls.
  3. Confirm that mocks.yaml is generated for the test set.
  4. Replay with mocking=true.
  5. Case A:
    • start all downstream services
    • replay succeeds but response values drift because real downstream state is still used
  6. Case B:
    • start only the entry service, main service, and database
    • do NOT start downstream HTTP services
    • replay request reaches the main service but then times out instead of being completed by mocks
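
For context on Case B, the symptom inside the main service can be reproduced in isolation with a plain Java HTTP client pointed at a downstream address with nothing listening and nothing intercepting (the port, path, and timeout here are hypothetical, chosen only for illustration): the call fails at the client rather than being served from a recording.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Minimal stand-in for the main service's downstream HTTP call in Case B:
// no live downstream service and no mock answering, so the client-side
// timeout/connection error surfaces instead of recorded data.
public class DownstreamTimeoutSketch {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1:59999/inventory/42")) // hypothetical downstream
                .timeout(Duration.ofSeconds(2))
                .GET()
                .build();
        try {
            client.send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println("downstream call: answered (mock or live)");
        } catch (Exception e) {
            // with no listener and no interception, this branch is taken
            System.out.println("downstream call: failed (" + e.getClass().getSimpleName() + ")");
        }
    }
}
```

In a working replay, this call would be answered from mocks.yaml regardless of whether anything listens on the downstream port.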

📜 Logs (if any)

Case A: replay reaches business layer but still reflects real downstream state

EXPECT:
stable business fields from recorded response

ACTUAL:
different business values caused by live downstream state

Case B: replay enters main service but downstream calls are not satisfied by mocks

... DispatcherServlet ... Completed initialization in 1 ms
🐰 Keploy: ... ERROR failed to send testcase request to app {"error": "Post "http://127.0.0.1:/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"}
🐰 Keploy: ... ERROR failed to simulate request {"error": "Post "http://127.0.0.1:/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"}
Testrun failed for testcase with id: "test-1"

💻 Operating system

Linux

🧾 System Info (uname -a)

No response

📦 OS Release Info (cat /etc/os-release)

No response

🐳 Docker Info (if applicable)

Yes, running inside Docker.

  • Build image: custom Spring Boot app images built on Linux VM
  • Runtime image: generic JRE base image + database image

🧱 Your Environment

  • Linux VM
  • Docker Compose based Java microservice chain
  • Spring Boot services
  • Replay with mocking=true
  • Main request enters entry service successfully
  • Problem is specifically with downstream mock isolation during replay

🎲 Version

Keploy 3.3.34 / 3.3.35

📦 Repository

keploy

🤔 What use case were you trying? (optional)

We want deterministic replay of a Java service after code changes:

  • same top-level input
  • mocked downstream HTTP dependencies
  • stable top-level output

The specific expectation is:
the main service should execute normally, while downstream HTTP dependencies should be intercepted and served from recorded mocks before hitting real downstream services.
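
The determinism contract we are after can be stated as a trivial property check (again a hedged sketch, with a hypothetical `POST /order` input and recorded body): replaying the same top-level input twice must produce identical output, because nothing outside the recording is consulted.

```java
import java.util.Map;

// Hypothetical sketch of the determinism contract expected from replay:
// same top-level input + mocked downstream dependencies = stable output.
public class ReplayContractSketch {
    // stands in for the recorded top-level response (hypothetical data)
    static final Map<String, String> recorded = Map.of(
            "POST /order", "{\"status\":\"OK\"}"
    );

    static String replay(String topLevelInput) {
        // in a faithful replay this depends only on the recording,
        // never on live downstream state
        return recorded.get(topLevelInput);
    }

    public static void main(String[] args) {
        String first = replay("POST /order");
        String second = replay("POST /order");
        System.out.println(first.equals(second)); // → true
    }
}
```

Scenario A breaks this property (output drifts with live downstream state), and Scenario B breaks it differently (no output at all).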

If useful, I can provide a reduced reproducible docker-compose example of this setup.

Metadata

Labels

bug (Something isn't working), keploy
