Using Envoy as a Front Proxy

Mike Dunn
CloudX at Fidelity
8 min read · Jul 1, 2021


This is the first Envoy & Open Policy Agent (OPA) Getting Started Guide. Each guide is intended to explore a single Envoy or OPA feature and walk through a simple implementation. Each guide builds on the concepts explored in the previous guide with the end goal of building a very powerful authorization service by the end of the series.

The source code for this getting started example is located on GitHub.

Overview

Envoy is an open-source edge and service proxy that has become extremely popular as the backbone underneath most of the leading service mesh products (both open-source and commercial). This article is intended to demystify it a bit and help people understand how to use it on its own in a minimalist fashion.

Envoy is just like any other proxy: it receives requests and forwards them to services located behind it. The two ways to deploy Envoy are:

  1. Front Proxy
A drawing of the front proxy approach, with the internet on the left, a cluster of Envoy servers in the middle, and three clusters of servers for different upstream services on the right.

In a front proxy deployment, Envoy is very similar to NGINX, HAProxy, or an Apache web server. The Envoy server has its own IP address and is a separate server on the network from the services that it protects. Traffic comes in and gets forwarded to a number of different services located behind it. Envoy supports a variety of methods for making routing decisions.

Path based routing: One mechanism is to use path-based routing to determine the service of interest. For instance, a couple of requests coming in as:

someserver.com/service1/some/other/stuff

someserver.com/service2/my/application/path

The first URI path element can be used as a routing key. /service1/ can be used to route requests to service 1. Additionally, /service2/ can be used to route the request to a set of servers that support service 2. The path can be rewritten as it goes through Envoy to trim off the routing prefix (e.g., /service1/).
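As a sketch of what this looks like in an Envoy route configuration, the fragment below (an illustration, not the exact file from this project; the cluster names are assumptions) matches on the routing prefix and trims it off with prefix_rewrite:

```yaml
route_config:
  virtual_hosts:
  - name: services
    domains: ["*"]
    routes:
    - match: { prefix: "/service1/" }
      route:
        cluster: service1
        prefix_rewrite: "/"   # trim the routing prefix before forwarding
    - match: { prefix: "/service2/" }
      route:
        cluster: service2
        prefix_rewrite: "/"
```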

Server Name Indication (SNI): Another mechanism is Server Name Indication (SNI), a TLS extension in which the client states the hostname it is trying to reach. Envoy can use that hostname to determine where to forward a request.

service1.com/some/other/stuff

service2.com/my/application/path

Using this technique, the first request above would use the server name service1.com (presented by the client during the TLS handshake) to route the request to the upstream servers for service1. Likewise, the second request would use the server name service2.com to route the request to the upstream servers for service2.
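In Envoy, SNI-based routing is expressed with a filter chain match on server names; the listener also needs the TLS inspector listener filter so Envoy can read the SNI value before terminating or forwarding the connection. A hedged sketch (cluster wiring elided, names assumed):

```yaml
listener_filters:
- name: envoy.filters.listener.tls_inspector
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.listener.tls_inspector.v3.TlsInspector
filter_chains:
- filter_chain_match:
    server_names: ["service1.com"]
  # ... filters that forward to the service1 cluster
- filter_chain_match:
    server_names: ["service2.com"]
  # ... filters that forward to the service2 cluster
```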

2. Sidecar Proxy

A drawing of the sidecar proxy pattern, where each service has its own Envoy instance that is the only way in or out of that service.

In a sidecar deployment, the Envoy server shares an IP address with the service that it protects and has only that single service instance behind it. The sidecar can intercept all inbound traffic and, optionally, all outbound traffic on behalf of the service. iptables rules are typically used to configure the operating system to capture and redirect this traffic to Envoy.
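For illustration, rules along these lines are the typical shape of that redirection (the Envoy listener port 15001 is an assumption here, and running these requires root; this is a sketch, not this project's setup):

```shell
# Redirect all inbound TCP traffic to the sidecar Envoy listener (port assumed)
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 15001

# Optionally capture outbound traffic from the service as well
iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-port 15001
```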

In this article and example project, we will create the simplest possible Envoy deployment. This example just uses Docker Compose to show how to get Envoy up and running. A number of subsequent articles will expand on this simple approach to demonstrate more Envoy capabilities. Later in the series, Open Policy Agent will also be introduced to handle more complex authorization use cases that cannot be handled by Envoy alone.

The diagram below shows the environment that will be built and deployed locally.

Building an Envoy Front Proxy

The code for the complete working example can be found on GitHub. The Envoy container image is specified on line 1. The Envoy images are pulled from Docker Hub. Docker Compose keeps local setup dependencies minimal. It also allows us to configure some Envoy behavior through environment variables.

The Dockerfile

A picture of the Dockerfile program listing.

In the Dockerfile, a custom image is created based on the official Envoy container image. A custom bash script acts as the entry point when the container runs. That entrypoint.sh file is where the magic happens.
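Since the listing above is only pictured, here is a minimal sketch of what such a Dockerfile typically looks like (the base image tag and file paths are assumptions, not the project's exact values):

```dockerfile
# Base the custom image on the official Envoy image from Docker Hub
FROM envoyproxy/envoy:v1.18-latest

# Copy in the templated Envoy configuration and the startup script
COPY envoy.yaml /etc/envoy/envoy.yaml
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# The custom bash script substitutes environment variables and starts Envoy
ENTRYPOINT ["/entrypoint.sh"]
```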

Entrypoint.sh

Looking at the entrypoint.sh bash script below, you can see that environment variables are used to determine which service (SERVICE_NAME) Envoy will route to and the port (SERVICE_PORT) required for that service. Additionally, the amount of detail to capture in the logs is determined by the DEBUG_LEVEL environment variable. As you can see from the script below on line 3, sed replaces those environment variables in Envoy’s configuration file before starting Envoy.

A picture of the Entrypoint.sh bash script
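The substitution technique itself can be sketched in a self-contained way. The snippet below mimics what entrypoint.sh does (the placeholder names, the /tmp path, and the final envoy command are assumptions for illustration; the real script rewrites Envoy's actual configuration file and then starts Envoy):

```shell
# Environment variables normally supplied by docker-compose
SERVICE_NAME=app
SERVICE_PORT=80
DEBUG_LEVEL=info

# Stand-in for Envoy's templated configuration file
cat > /tmp/envoy.yaml <<'EOF'
address: SERVICE_NAME
port_value: SERVICE_PORT
EOF

# Replace the placeholders with the environment variable values, as the script does
sed -i "s/SERVICE_NAME/${SERVICE_NAME}/g; s/SERVICE_PORT/${SERVICE_PORT}/g" /tmp/envoy.yaml
cat /tmp/envoy.yaml

# The real script would then start Envoy, e.g.:
#   envoy -c /etc/envoy/envoy.yaml --log-level "${DEBUG_LEVEL}"
```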

Envoy Configuration (via envoy.yaml)

There is no configuration file yet, so we will create that next. Envoy is very flexible and powerful. There is an enormous amount of expressiveness that the Envoy API and its configuration files support. With this flexibility and power, Envoy configuration files can become quite complicated, with deep nesting of configuration sections in the YAML hierarchy. Additionally, each feature has a lot of parameters available. With an open-source community of volunteers, the documentation can only cover so much of that functionality.

One of the challenges that I have when reading through the documentation and trying to apply it is that the documentation offers a variety of YAML snippets, but there are very few places where those snippets are pulled together into a functioning example. There are a few examples in the source code's examples directory, but they are far from comprehensive. That leaves a lot of tinkering for engineers to figure out how to compose a functional configuration while interpreting sometimes unclear error messages. That is the reason I am writing this series of getting started guides. These articles are intended to give folks a known-to-work starting point for Envoy authorization features and extensions like Open Policy Agent.

The Envoy configuration (shown below) starts with defining a listener on line 2. The listener’s first property is the address and port to accept traffic on (lines 3 through 6). The next property is a filter chain. Filter chains are very powerful and enable a wide variety of possible behaviors. The filter chain in the example is as simple as it gets. It accepts any HTTP traffic with any URI pattern and routes it to the cluster named service.

The http_connection_manager component does this for us. Its configuration starts on line 9 and extends to line 24. The execution order of the filters is determined by the order in which they are listed in the configuration file. The important part for this discussion begins on line 14 with the route_config. This sets up routing requests for any domain (line 18) and any request URI that begins with a slash (line 20) to the cluster named service. The cluster definitions are in a separate section so they can be reused as destinations across a variety of rules.

A picture of the envoy.yaml configuration file

In the configuration file above, the cluster definitions begin on line 25. You can see that there is only a single cluster defined, named service. Envoy uses DNS to find server instances and round robin to direct traffic across multiple instances. The hostname is on line 32 and the port is on line 33. These are the placeholders that will be replaced with environment variable values when the entrypoint.sh script runs.

The last section of the configuration file tells Envoy where to listen for admin traffic. The admin GUI is a handy little tool that is not covered in this guide. It is definitely worth poking around in to observe what is going on inside an individual Envoy instance.
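Pulling the described pieces together, a working v3-style configuration would look roughly like the sketch below. The structure (listener, filter chain, route_config, cluster named service, admin listener on port 8001) follows the description above; the listener port, stat_prefix, and exact line layout are assumptions and will not match the pictured file's line numbers exactly:

```yaml
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 8080          # listener port is an assumption
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]                 # any domain
              routes:
              - match: { prefix: "/" }       # any URI
                route: { cluster: service }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: service
    type: STRICT_DNS                # resolve endpoints via DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: SERVICE_NAME    # placeholder replaced by entrypoint.sh
                port_value: SERVICE_PORT # placeholder replaced by entrypoint.sh
admin:
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
```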

Docker Compose Configuration

Now that we understand the Envoy configuration, let's move on to understanding the rest of the simple environment. Line 4 in the configuration file shown below triggers Docker to build the custom Envoy image. Docker will only build the custom Envoy image the first time (when it sees that the image does not exist). If you want to force rebuilding the Envoy container on subsequent runs, add the --build parameter to your docker compose command. Envoy is exposed to the host network on lines 6 and 7. The Envoy configuration file created in the previous step of this article is specified on line 9.

This is a screen capture of the docker-compose configuration file.

The upstream service name and the service's port are defined in the environment variables on lines 12 and 13. Notice that the service name 'app' matches the name of the container created by the configuration starting at line 15. This example uses httpbin to reflect any inbound request back to the client.
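Since the compose file is only pictured, here is a sketch of its likely shape (the published ports, volume mount path, and httpbin image name are assumptions, and the line numbering here will not match the pictured file exactly):

```yaml
version: "3"
services:
  envoy:
    build: .                 # build the custom Envoy image from the Dockerfile
    ports:
      - "8080:8080"          # proxy traffic
      - "8001:8001"          # admin GUI
    volumes:
      - ./envoy.yaml:/etc/envoy/envoy.yaml
    environment:
      - SERVICE_NAME=app     # must match the service name below
      - SERVICE_PORT=80
      - DEBUG_LEVEL=info
  app:
    image: kennethreitz/httpbin
```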

Running The Example

The last step to getting our front proxy up is running the included script. The test.sh script builds and runs the example. The script explains what it is about to do, so you know what you are about to see scrolling across the terminal screen.

  • Line 3 starts the environment.
  • Line 8 lets you check to make sure both containers are running before trying to send a request.
  • Line 10 simply calls Envoy using a curl command with the --verbose parameter to show the headers and request details.
  • Line 12 tears down the whole environment.
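The steps above can be sketched as a script like the following. This is a hypothetical reconstruction, not the project's exact test.sh (the port and request path are assumptions); it is written to a file rather than executed here, since running it requires Docker on the host:

```shell
# Write a hypothetical reconstruction of test.sh to a file
cat > /tmp/test.sh <<'EOF'
#!/usr/bin/env bash
echo "Starting the environment..."
docker-compose up -d

echo "Verify that both containers are running before sending a request..."
docker-compose ps

echo "Sending a request through Envoy (port 8080 is an assumption)..."
curl --verbose http://localhost:8080/anything

echo "Tearing the whole environment down..."
docker-compose down
EOF
chmod +x /tmp/test.sh

# Confirm the reconstructed script parses cleanly
bash -n /tmp/test.sh && echo "syntax OK"
```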
screen capture of the test.sh bash script

Results

After you run the test.sh script, you should see something like this if everything works correctly:

screen capture of the results when the test.sh script is run

😊 Congratulations!!! You have successfully stood up your first Envoy instance and configured it to forward traffic! This is the simplest possible Envoy configuration. It doesn't have any security configured yet, or any of the other features that Envoy is famous for. Those will be covered in future articles. Feel free to explore further by using Postman to send other requests to Envoy. Additionally, don't forget to try Envoy's admin console by pointing your web browser to http://localhost:8001.

While our project is still very simple, this is a great time to add tools that make it easier to see Envoy's logs. Check out the next article to add Elasticsearch and Kibana for log aggregation.

Originally published at https://helpfulbadger.github.io on August 31, 2020.


Mike Dunn
CloudX at Fidelity

Lead Developer Experience and Engineering Standards for Fidelity’s Personal Investing Business.