What is the Sidecar Pattern?
In this pattern, the supporting features of an application are deployed as a separate process or container, which provides isolation and encapsulation.
The sidecar pattern is often used with containers, where the sidecar is referred to as a sidecar container or sidekick container.
This pattern is named Sidecar because it resembles a sidecar attached to a motorcycle.
The sidecar also shares the same lifecycle as the parent application, being created and killed alongside the parent.
What is the problem context?
- Applications generally require additional functionality such as monitoring, logging, configuration, and health checking.
- When such functionality is tightly integrated into the application, it runs in the same process, which has drawbacks:
  - An outage in one of these components can affect other components or the entire application.
  - The functionality must be implemented using the same language as the parent application.
What is the solution?
The solution is a sidecar pattern:
- Additional functionality is implemented as separate components or services.
- The application is decomposed into services.
- Sidecar services are not necessarily part of the application but are connected to it.
- The sidecar shares the same fate as the parent application.
- For each instance of the application, an instance of the sidecar is deployed and hosted alongside it.
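To make the solution concrete, here is a minimal sketch of the idea: the "application" and a logging "sidecar" run side by side on the same host and talk over localhost. For simplicity the sidecar runs in a thread standing in for a separate process or container, and names like `LogSidecar` and `app_log` are illustrative, not from any real library.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

collected = []  # what the sidecar has received (stands in for a log store)

class LogSidecar(BaseHTTPRequestHandler):
    """Sidecar endpoint: accepts log records from the parent application."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        collected.append(json.loads(body))
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):  # silence the default request logging
        pass

# The sidecar is hosted alongside the application instance (here: a thread).
server = HTTPServer(("127.0.0.1", 0), LogSidecar)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

def app_log(message):
    """The parent application delegates logging to its co-located sidecar."""
    req = urllib.request.Request(
        f"http://127.0.0.1:{server.server_port}/log",
        data=json.dumps({"msg": message}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

app_log("order 42 processed")
server.shutdown()
print(collected[0]["msg"])  # -> order 42 processed
```

Because both sides share a host, the call never leaves the loopback interface; in a container orchestrator the same shape appears as two containers in one pod sharing a network namespace.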
What are the advantages of using this pattern?
- The sidecar is independent of its primary application in terms of:
  - Runtime environment
  - Programming language
- The sidecar sits in close proximity to the primary application, so there is no significant latency when communicating between them.
- For legacy applications where extending the application itself is difficult, you can use a sidecar to add functionality.
What are important things to consider when using this pattern?
- Consider the deployment and packaging of the sidecar with the application.
- Decide on the interprocess communication mechanism between the application and its sidecars.
- Deploying these features as separate services can add latency to the application.
- For small applications, the cost of deploying a sidecar service for each instance may not be worth the isolation it provides.
- Consider if functionality could be implemented as a library or using a traditional mechanism.
- Language-specific libraries may have a deeper level of integration and less network overhead.
- Managing the code and dependencies for these language-specific interfaces can also add considerable complexity, especially for hosting, deployment, and management.
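One of the considerations above is the interprocess communication mechanism between the application and its sidecar. For same-host communication, a Unix domain socket avoids the TCP stack entirely; the sketch below shows the idea, with a hypothetical newline-delimited message protocol and a thread standing in for the sidecar process.

```python
import os
import socket
import tempfile
import threading

# Socket file shared between the two co-located processes.
sock_path = os.path.join(tempfile.mkdtemp(), "sidecar.sock")
ready = threading.Event()
received = []

def sidecar():
    """Sidecar side: accept one connection and read one message."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(sock_path)
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn:
            received.append(conn.recv(1024).decode())

t = threading.Thread(target=sidecar)
t.start()
ready.wait()

# Application side: send a log record to the co-located sidecar.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as c:
    c.connect(sock_path)
    c.sendall(b"order 42 processed\n")
t.join()
print(received[0].strip())
```

The trade-off named in the considerations applies directly: a domain socket is cheaper than localhost TCP but still incurs serialization, whereas an in-process library call would incur neither, at the cost of language coupling.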
When should you use this pattern?
Use this pattern when:
- Your primary application uses a heterogeneous set of languages and frameworks. A component located in a sidecar service can be consumed by applications written in different languages using different frameworks.
- A component is owned by a remote team or a different organization.
- A component or feature must be co-located on the same host as the application.
- You need a service that shares the overall lifecycle of your main application, but can be independently updated.
- You need fine-grained control over resource limits for a particular resource or component. For example, you may want to restrict the amount of memory a specific component uses. You can deploy the component as a sidecar and manage memory usage independently of the main application.
Examples
The sidecar pattern is applicable to many scenarios. Some common examples:
- Infrastructure API. The infrastructure development team creates a service that’s deployed alongside each application, instead of a language-specific client library to access the infrastructure. The service is loaded as a sidecar and provides a common layer for infrastructure services, including logging, environment data, configuration store, discovery, health checks, and watchdog services. The sidecar also monitors the parent application’s host environment and process (or container) and logs the information to a centralized service.
- Manage NGINX/HAProxy. Deploy NGINX with a sidecar service that monitors environment state, then updates the NGINX configuration file and recycles the process when a change in state is needed.
- Ambassador sidecar. Deploy an ambassador service as a sidecar. The application calls through the ambassador, which handles request logging, routing, circuit breaking, and other connectivity-related features.
- Offload proxy. Place an NGINX proxy in front of a Node.js service instance to handle serving static file content for the service.
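To make the ambassador example above concrete: the application sends all outbound requests through a co-located ambassador, which transparently adds connectivity features, here request logging and a simple retry. This is a minimal sketch with hypothetical names (`Ambassador`, `FlakyUpstream`); a real ambassador would be a separate proxy process rather than an in-process object.

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FlakyUpstream(BaseHTTPRequestHandler):
    """Stand-in backend: fails the first request, then succeeds."""
    calls = 0

    def do_GET(self):
        FlakyUpstream.calls += 1
        if FlakyUpstream.calls == 1:
            self.send_response(500)
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    def log_message(self, *args):
        pass

upstream = HTTPServer(("127.0.0.1", 0), FlakyUpstream)
threading.Thread(target=upstream.serve_forever, daemon=True).start()

class Ambassador:
    """Ambassador sketched as an in-process object: the application calls
    through it, and it adds logging and retries without the app knowing."""
    def __init__(self, retries=3):
        self.retries = retries
        self.request_log = []

    def get(self, url):
        for attempt in range(1, self.retries + 1):
            self.request_log.append((url, attempt))  # connectivity logging
            try:
                with urllib.request.urlopen(url) as resp:
                    return resp.read()
            except urllib.error.HTTPError:
                if attempt == self.retries:
                    raise  # retries exhausted; surface the failure

ambassador = Ambassador()
body = ambassador.get(f"http://127.0.0.1:{upstream.server_port}/orders")
upstream.shutdown()
print(body.decode(), len(ambassador.request_log))  # -> ok 2
```

The application code stays free of retry and logging logic; because the ambassador is co-located, routing every call through it adds no significant latency, which is exactly the proximity advantage listed earlier.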