Guardrail is an open-source tool that generates regression tests for stateless non-persisted microservices using recorded production traffic. It combines traffic replay and service virtualization to test a microservice in isolation.
Guardrail provides three core functions:
- Record traffic in the production environment
- Replay traffic in the testing environment
- Report the results from the testing environment
Let’s walk through a typical workflow for a developer using Guardrail.
The first step is to record traffic upstream and downstream of the production microservice we are changing.
There are a few requirements an architecture must meet before Guardrail can be deployed.
- The network traffic between microservices must be unencrypted. This scenario typically involves a firewall and a gateway separating the private network from the public internet; Nginx acting as an API gateway that terminates TLS is a basic example. It is possible to use Guardrail with encrypted traffic, but the developer must add their own TLS termination proxy.
- The application must use the “correlation ID” pattern to trace requests. A “correlation ID” is a unique HTTP header value attached to a request when it passes into the application’s private network (a minimal middleware sketch follows this list).
- The use case is limited to non-persisted, stateless services that purely transform data.
- Guardrail works with architectures that use synchronous HTTP communication with REST and JSON. It does not work with asynchronous communication patterns such as HTTP polling or message queues.
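For concreteness, the snippet below is a minimal sketch of the correlation ID pattern as Go HTTP middleware. The X-Correlation-ID header name and the random-hex ID format are common conventions assumed here for illustration; Guardrail does not mandate a particular header name or implementation.

package middleware

import (
    "crypto/rand"
    "encoding/hex"
    "net/http"
)

// newCorrelationID returns a random 16-byte hex string to use as a request ID.
func newCorrelationID() string {
    b := make([]byte, 16)
    rand.Read(b) // error ignored for brevity in this sketch
    return hex.EncodeToString(b)
}

// WithCorrelationID attaches an X-Correlation-ID header (an assumed convention)
// to any request entering the private network that does not already carry one,
// so the request can be traced across downstream services.
func WithCorrelationID(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.Header.Get("X-Correlation-ID") == "" {
            r.Header.Set("X-Correlation-ID", newCorrelationID())
        }
        next.ServeHTTP(w, r)
    })
}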
Install GoReplay, Mountebank, and Guardrail on the production machine that hosts the microservice you plan to change.
Traffic between the microservice and its downstream dependencies is recorded using a proxy, so the URLs the microservice uses to address those dependencies must be changed to the URLs of the proxies.
Declare a list of dependency URLs in a file called dependency.json with the following format, then run the command guardrail init. Specify these details for each dependency:
- destinationURL: the "origin" of the destination host. An origin is composed of the scheme, hostname or IP, and port.
- proxyPort: the port on which the proxy will receive requests.
- varName: a name that identifies each proxy.
[
{
"varName": "DEPENDENCY_1",
"destinationURL": "https://downstream-service1",
"proxyPort": 5002
},
{
"varName": "DEPENDENCY_2",
"destinationURL": "http://localhost:9000",
"proxyPort": 5003
}
]
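How the microservice picks up the proxy addresses depends on how it is configured; Guardrail only needs the service to send downstream requests to the proxy ports instead of the real dependencies. As a purely illustrative example, if the service reads each downstream URL from an environment variable (named after each varName here only for the sake of the example), repointing it at the recording proxies on the same host might look like this:

export DEPENDENCY_1=http://localhost:5002   # was https://downstream-service1
export DEPENDENCY_2=http://localhost:5003   # was http://localhost:9000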
When ready, run guardrail record.
From that point on, GoReplay is recording upstream traffic, and Mountebank is recording downstream traffic. Traffic is recorded to the production host’s file system.
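Guardrail drives Mountebank for you, but it helps to picture what each downstream recording proxy is doing. Conceptually, it behaves like a Mountebank imposter with a proxy response: every request is forwarded to the real dependency, and each request/response pair is recorded as a new stub. The snippet below uses Mountebank's published imposter format with values from the earlier dependency.json; it is an illustration, not Guardrail's exact internal configuration.

{
  "port": 5002,
  "protocol": "http",
  "stubs": [
    {
      "responses": [
        { "proxy": { "to": "https://downstream-service1", "mode": "proxyAlways" } }
      ]
    }
  ]
}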
Stop upstream and downstream traffic recording by quitting Guardrail (^C). Then revert the URLs addressing the downstream recording proxies back to the URLs that point directly at the downstream dependencies.
Follow the same installation steps on the host where the test will run, then spin up the updated microservice on that machine. The microservice should be configured to address the proxies declared in dependency.json, just as it was during recording. If it is instead configured with the addresses of the actual dependencies, the service under test will issue requests to production dependencies.
Transfer the files of recorded traffic from the production host to the testing host.
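Any file-transfer method will do. For example, with scp (the host and path below are hypothetical; copy whichever files Guardrail wrote on the production host):

# hypothetical host and path; substitute the location Guardrail used on your production machine
scp -r user@production-host:/path/to/recorded-traffic ./recorded-traffic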
Run the guardrail replay command on the testing host. This starts up the Mountebank virtualized services using the data collected from production and then replays the upstream requests against the service under test using GoReplay. It also starts up a component of Guardrail called the “Reporting Service,” which becomes relevant in the next section.
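Under the hood, replaying the upstream requests uses GoReplay's file input and HTTP output. Guardrail runs this for you; the command below is only a sketch, and the recording file name and the port of the service under test are hypothetical:

# requests_0.gor and port 3000 are placeholders
gor --input-file requests_0.gor --output-http http://localhost:3000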
The same traffic recording can be replayed multiple times, allowing developers to iterate on the service under test without having to re-record traffic.
In addition to storing traffic data in a database, the Reporting Service calculates the results of a replay session and serves the results to the Guardrail user interface. Results are a comparison of the actual and expected HTTP response status, headers, and JSON body.
Once a replay session finishes, view the results hosted at http://localhost:9001. Requests whose responses differed from what was recorded in production are listed, along with the expected and actual responses. The interface also shows a few performance-parity metrics comparing response times, error rates, and timeouts.