Replace Live Services with OpenAPI Mocks Generated from Real HTTP Traffic Using Specmatic Proxy

By Naresh Jain


API proxy recording: Capture traffic, generate mocks, and simulate faults

When you need to test how a system behaves when a downstream API misbehaves, API proxy recording is a practical, low-friction approach. Instead of relying on the live service, you record actual traffic, generate a machine-readable contract, and spin up a mock that reproduces real responses and examples. This lets you simulate faults such as timeouts or slow responses for a single request while keeping other requests behaving normally—without changing production services.

Why use API proxy recording?

API proxy recording bridges the gap between brittle, environment-dependent tests and deterministic simulations. It captures real interactions between clients and services, producing an OpenAPI specification and example payloads automatically. From that starting point you can:

  • Replace the real service with a deterministic mock quickly.
  • Replay realistic responses in CI and local development.
  • Apply fault simulations—like timeouts—on a per-request basis.
  • Create an accurate contract to guide integration testing and consumer development.

(Screenshot: Record API Specification UI with target service http://order-api:8090 and proxy port 8080 highlighted.)

How the workflow looks in practice

The core idea of API proxy recording is straightforward: put a proxy between the client and the downstream service, capture the traffic, then convert that traffic into both a specification and a mock server. The mock becomes a controllable stand-in for the downstream API.
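To make the idea concrete, here is a minimal sketch of a recording proxy in Python's standard library. This is an illustration of the capture step only, not Specmatic's implementation: it forwards GET requests to a downstream service and accumulates request/response pairs, the raw material a real tool turns into an OpenAPI specification.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Recorded request/response pairs accumulate here; a real proxy recorder
# (such as Specmatic's) turns pairs like these into an OpenAPI spec.
recordings = []

def make_proxy_handler(target_base):
    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Forward the request to the downstream service unchanged.
            upstream = urlopen(Request(target_base + self.path))
            body = upstream.read()
            # Capture the interaction for later spec and mock generation.
            recordings.append({
                "method": "GET",
                "path": self.path,
                "status": upstream.status,
                "response": body.decode("utf-8"),
            })
            self.send_response(upstream.status)
            self.send_header(
                "Content-Type",
                upstream.headers.get("Content-Type", "application/json"),
            )
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass  # keep the demo quiet

    return ProxyHandler

def start_proxy(port, target_base):
    # port=0 asks the OS for any free port; read it back from server_address.
    server = HTTPServer(("127.0.0.1", port), make_proxy_handler(target_base))
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A real recorder also captures other HTTP methods, headers, and error responses; the shape of the workflow is the same.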

Step-by-step API proxy recording workflow

Below is a condensed, practical sequence you can follow to convert live interactions into repeatable tests and fault scenarios.

  1. Configure a proxy to listen on a chosen port. For example, run the proxy on port 9090 while your downstream service listens on port 8090. Starting the proxy begins capturing request/response pairs as traffic flows through. (Screenshot: proxy configuration form showing Target Service URL http://order-api:8090, Proxy Port 9090, and a Start button.)
  2. Redirect the client to the proxy. Change the client’s configuration so it calls the proxy instead of the real service. In a microservice architecture, this can be as simple as changing an environment variable or a host:port setting.
  3. Run your tests or exercise the client. With the proxy in place, execute the tests or flows that interact with the downstream API. The proxy records every request and response, including headers and body examples. (Screenshot: proxy recording UI showing a POST /products request payload and response details.)
  4. Stop the proxy and generate artifacts. When you stop the recording, the proxy generates an OpenAPI specification that describes the recorded endpoints and includes example payloads. This specification serves as both a contract and a source of truth. (Screenshot: Studio app showing a 'Proxy Recording Success' notification and the generated OpenAPI YAML in an editor.)
  5. Switch to a mock server. Use the generated specification and examples to spin up a mock server on the same port the proxy used. The mock replays the recorded responses and can be configured to reproduce faults such as single-request timeouts or custom error codes. (Screenshot: Specmatic Studio Mock tab showing recorded endpoints and 'Mock started on http://0.0.0.0:9090' with a Stop button.)
  6. Validate and iterate. Once the mock is running, you can stop the real downstream service. Run your tests again to verify behavior under both normal and fault conditions. The mock gives you precise control over timing and error scenarios, so you can exercise resilience paths, observe retry logic, and confirm fallbacks. (Screenshot: Studio UI showing the mock server running at 0.0.0.0:9090 with covered endpoints and a Stop button.)
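Step 2 above can be as small as reading the downstream base URL from the environment. A minimal sketch, where ORDER_API_URL is a hypothetical variable name chosen for this example:

```python
import os

# The default matches the real service; setting ORDER_API_URL (an assumed
# variable name for this sketch) repoints the client at the proxy or the
# mock with no code change.
DEFAULT_ORDER_API_URL = "http://order-api:8090"

def order_api_base_url():
    return os.environ.get("ORDER_API_URL", DEFAULT_ORDER_API_URL)
```

With this in place, pointing the client at the recording proxy is just `ORDER_API_URL=http://localhost:9090` in the client's environment.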

Key benefits and practical tips

Replacing a dependency with a recorded mock is most powerful when paired with a few best practices:

  • Record meaningful interactions: Capture a representative set of requests and edge cases so the generated OpenAPI and examples reflect real usage.
  • Keep the mock configurable: Make it easy to change response codes, delays, and payloads for targeted fault injection.
  • Use the mock in CI: Running tests against a mock makes CI deterministic and removes flakiness caused by network or downstream instability.
  • Version the generated specification: Treat the OpenAPI output as an artifact and store it in the repo or an artifact store so it can be reviewed and reused.
  • Limit scope for safety: Apply fault simulations only to the requests you want to test. The proxy-to-mock workflow makes it simple to reproduce a timeout for a single request while keeping all other traffic normal.
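The "limit scope" tip is easiest to see in code. The sketch below models a mock's routing table with illustrative data (not Specmatic's configuration format): faults are keyed by method and path, so only the targeted request is delayed or degraded while everything else serves the recorded response.

```python
import time

# Canned responses derived from recorded traffic (illustrative data only).
RECORDED = {
    ("GET", "/products"): (200, '[{"id": 1, "name": "Laptop"}]'),
    ("GET", "/products/1"): (200, '{"id": 1, "name": "Laptop"}'),
}

# Faults apply only to the requests listed here; all other traffic stays normal.
FAULTS = {
    ("GET", "/products/1"): {"delay_seconds": 0.2, "status": 503},
}

def serve(method, path):
    """Return (status, body) for a mocked request, injecting targeted faults."""
    status, body = RECORDED.get((method, path), (404, "{}"))
    fault = FAULTS.get((method, path))
    if fault:
        time.sleep(fault.get("delay_seconds", 0))  # simulate a slow response
        status = fault.get("status", status)       # simulate an error code
    return status, body
```

Because the fault table is separate from the recorded data, a test can flip one entry on, run, and flip it off without disturbing the rest of the mock.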

A good setup for API proxy recording also includes a short feedback loop: record, mock, test, refine. With each iteration the mock becomes more accurate and the tests become more trustworthy.

When to reach for API proxy recording

Use this approach when you need to:

  • Introduce fault scenarios that are hard to reproduce on production systems, like intermittent timeouts.
  • Enable parallel developer workflows without needing the real downstream to be available.
  • Create a living contract for teams working concurrently on consumers and providers.
  • Ensure CI environments are isolated from external service instability.
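The client-side logic you would exercise against such a mock is typically a deadline-plus-fallback wrapper. One common pattern (not Specmatic-specific) sketched in Python:

```python
from concurrent.futures import ThreadPoolExecutor

def call_with_fallback(call, fallback, timeout_seconds=1.0):
    # Run the downstream call with a hard deadline; on timeout or any
    # error, return the fallback so the caller degrades gracefully.
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(call).result(timeout=timeout_seconds)
    except Exception:
        return fallback
    finally:
        pool.shutdown(wait=False)
```

Pointing `call` at a mock configured to delay a single request is exactly how you verify the fallback actually fires.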

Common pitfalls and how to avoid them

Be mindful of a few traps that can undermine the value of API proxy recording:

  • Recording sensitive data. Filter or redact credentials and personal data during recording and before committing generated artifacts.
  • Overfitting to recorded examples. Include varied examples that cover success and failure cases so tests are robust.
  • Forgetting to version contracts. Keep the generated OpenAPI under source control to avoid drift between mock and real service behavior.
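For the first pitfall, a simple redaction pass over recordings before they are committed can look like this sketch (the header list and recording shape are assumptions for illustration):

```python
# Header names whose values must never end up in committed artifacts.
SENSITIVE_HEADERS = {"authorization", "cookie", "x-api-key"}

def redact(recording):
    """Return a copy of a recorded interaction with credential headers masked."""
    clean = dict(recording)
    clean["headers"] = {
        name: ("***" if name.lower() in SENSITIVE_HEADERS else value)
        for name, value in recording.get("headers", {}).items()
    }
    return clean
```

Run a pass like this (plus body-field masking appropriate to your payloads) before the generated spec and examples go into source control.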

Conclusion

API proxy recording is a pragmatic technique for creating realistic, controllable mocks from real traffic. It shortens the path from integration testing to reliable, repeatable simulations of both normal and faulty downstream behavior. By recording traffic, generating an OpenAPI-based contract, and running a mock server, teams can simulate timeouts, errors, and other edge cases without touching production services.

What is API proxy recording and how does it differ from manual mock creation?

API proxy recording captures actual request and response traffic between a client and a service, then generates a specification and examples automatically. Manual mock creation requires handcrafting endpoints and payloads, which can miss subtle behaviors that recorded traffic captures.

Can I simulate a timeout for a single request while keeping other requests normal?

Yes. After recording interactions and generating a mock server, you can configure the mock to apply a timeout or delay to just the targeted request while leaving other responses unchanged.

Do I need the real downstream service after creating the mock?

No. Once the mock is running and validated, you can stop the real service and run tests against the mock. This makes tests stable and reproducible in CI and developer environments.

How do I avoid recording sensitive data?

Filter or redact sensitive fields during recording. Many proxy tools provide options to exclude headers or mask fields in bodies. Treat the generated artifacts as code and review them before committing.
