Tutorial: Managing Serverless GCP Load Balancers Dynamically with Terraform

Austen Novis
Engineering at Premise
7 min read · Dec 14, 2021


Google Cloud Platform (GCP) supports many different serverless services, including App Engine, Cloud Run, Cloud Functions, and API Gateway. Each service has its pros and cons, and companies often host applications in several, if not all, of these services. One difficulty in managing multiple types of services is maintaining a robust networking layer.

The simplest solution for managing your networking layer is to not use a load balancer at all and instead call each service directly. While this is certainly the easiest approach, it has many problems, especially if you would like to migrate an application to a new backend service, since doing so requires updating every reference to its service name in the external applications that call it. Additionally, GCP’s global load balancers offer many important features, including DDoS protection, rate limiting and throttling, and improved network latency, that make them a no-brainer to use.

Another option for managing the networking layer for serverless is to create a load balancer for each application in each backend service, with its own unique DNS address. The benefit of this approach is that each service is easy to update and manage, as they are all maintained independently. However, as the number of services grows, maintaining individual load balancers, certificates, and DNS records can become a significant operations problem.

Alternatively, the approach that we at Premise took is to maintain one single load balancer that then routes requests to all of our individual services. The difficulty with this approach is that the singular load balancer can become quite complex and tough to maintain. To solve this problem, we leveraged Terraform’s GCP serverless load balancer module combined with dynamic blocks. The result is a simple, scalable, and customizable solution to maintain a singular load balancer for any number of serverless applications.

Dynamic Serverless Load Balancer Diagram

This tutorial will go step by step through the different pieces of the Terraform script required to deploy and maintain a dynamic serverless load balancer. We will begin with installing and setting up Terraform, followed by setting local variables. Next we will dive into the different dynamic blocks and illustrate options to further customize your load balancer resources. Finally, we will end with future work and how we plan on expanding this load balancer, including several Terraform pull requests we have in progress.

The code for this tutorial can be found in our gcp-tutorials GitHub repository.

Getting Started

Getting up and running with Terraform is quick and easy. To install it, simply download it from the Terraform website. To get started you will need to set up your Terraform provider, which will be google in our case, and your Terraform backend, which is where your Terraform state file will be saved. The easiest way to start is to keep state locally using the local backend.

provider "google" {}

terraform {
  backend "local" {}
}

Once you have verified that it works locally, you should store your state file in GCS using the gcs backend.

provider "google" {}

terraform {
  backend "gcs" {
    bucket = "tf-state"
    prefix = "lb"
  }
}

The first command you will need to run is terraform init, which installs the google provider libraries. You will need to run this command again once we have defined our modules later in this tutorial, but for now you are up and running with Terraform!

Setting Local Variables

The next step is setting up local variables, which will make it easy to add new backend services as well as maintain multiple load balancers for different environments, such as development and production. We use locals, but you can certainly use regular Terraform variables as well. The main variables used are domain, project, region, environment, and services. The services variable is the only place where you define your different backends and any customizations they require. The rest of the Terraform script will dynamically create all resources based on what you have defined in services.
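For reference, the same structure could be exposed as a typed input variable instead of a local; a minimal sketch (the type shape below mirrors the service objects used in this tutorial):

```terraform
variable "services" {
  description = "Backend services to attach to the load balancer."
  type = list(object({
    service             = string
    type                = string
    path                = string
    path_prefix_rewrite = string
  }))
}
```

You would then reference var.services wherever this tutorial references local.services.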

For example, the definition below will create a load balancer with paths to three different services: DOMAIN.com/service1/* will route to an App Engine backend, DOMAIN.com/service2/* will route to an additional App Engine backend, and DOMAIN.com/service3/* will route to a Cloud Run backend. All three of these routes have path_prefix_rewrite set so the applications themselves do not need any logic to handle the incoming base paths.

To extend this Terraform script, you can add additional fields to each service and use dynamic blocks to either further customize the currently defined resources or even create additional resources such as Cloud Armor security policies.

locals {
  domain  = "DOMAIN.COM"
  project = "PROJECT"
  region  = "REGION"
  env     = "dev"
  services = [
    {
      "service" : "service1",
      "type" : "app_engine",
      "path" : "/service1/*",
      "path_prefix_rewrite" : "/"
    },
    {
      "service" : "service2",
      "type" : "app_engine",
      "path" : "/service2/*",
      "path_prefix_rewrite" : "/"
    },
    {
      "service" : "service3",
      "type" : "cloud_run",
      "path" : "/service3/*",
      "path_prefix_rewrite" : "/"
    }
  ]
}
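As a sketch of the extension point described above, a service entry could carry additional fields that later dynamic blocks key off of. The extra field below is purely illustrative, not something the module requires:

```terraform
# Hypothetical extended entry for the services list above.
{
  "service" : "service4",
  "type" : "cloud_run",
  "path" : "/service4/*",
  "path_prefix_rewrite" : "/",
  # Illustrative extra field a dynamic block could inspect:
  "enable_cdn" : true
}
```

Any resource that iterates over local.services can then branch on the new field, just as the network endpoint group below branches on type.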

Dynamic Serverless Network Endpoint Groups

We can keep all of our logic in the services local variable because we utilize dynamic blocks in various Terraform resources, the first example being google_compute_region_network_endpoint_group. Here we use a for_each expression to iterate through each service and then use a dynamic block to set values for app_engine or cloud_run, since only one type can be set at a time in this resource.

resource "google_compute_region_network_endpoint_group" "neg" {
  for_each              = { for service in local.services : "${service.service}" => service }
  name                  = "${each.value.service}-${local.env}"
  network_endpoint_type = "SERVERLESS"
  region                = local.region
  project               = local.project

  dynamic "app_engine" {
    for_each = each.value.type == "app_engine" ? [{ "service" : each.value.service }] : []
    content {
      service = app_engine.value.service
    }
  }

  dynamic "cloud_run" {
    for_each = each.value.type == "cloud_run" ? [{ "service" : each.value.service }] : []
    content {
      service = cloud_run.value.service
    }
  }
}

Dynamic Compute URL Map

We again utilize dynamic blocks to generate path rules for each of our services, and take it one step further with a nested dynamic block to customize those rules. In the example below, the nested dynamic route_action block lets us customize the path rule for each service. In this case we only expose a customization option for path_prefix_rewrite, but it is easy to extend this with other customization options based on the service definitions you provide in the local variable.

resource "google_compute_url_map" "url-map" {
  name            = "${local.env}-url-map"
  description     = "${local.env} url mapping for ${local.domain}"
  project         = local.project
  default_service = module.lb-http.backend_services["default"].self_link

  host_rule {
    hosts        = ["${local.env}.${local.domain}.${local.domain_suffix}"]
    path_matcher = "main"
  }

  path_matcher {
    name            = "main"
    default_service = module.lb-http.backend_services["default"].self_link

    dynamic "path_rule" {
      for_each = local.services
      content {
        paths   = [path_rule.value.path]
        service = module.lb-http.backend_services[path_rule.value.service].self_link

        dynamic "route_action" {
          for_each = can(path_rule.value.path_prefix_rewrite) ? [{ "path_prefix_rewrite" : path_rule.value.path_prefix_rewrite }] : []
          content {
            url_rewrite {
              path_prefix_rewrite = route_action.value.path_prefix_rewrite
            }
          }
        }
      }
    }
  }
}

Serverless Load Balancer

Finally, you will need to define your serverless load balancer using the source GoogleCloudPlatform/lb-http/google//modules/serverless_negs and reference the resources created above. This is done by first referencing the url-map resource and then using a for expression in the backends section of the load balancer to iterate through each service. During each iteration you reference the endpoint group using google_compute_region_network_endpoint_group.neg[serviceObj.service].self_link, dynamically selecting each service's network endpoint group resource and retrieving its self link.

module "lb-http" {
  source  = "GoogleCloudPlatform/lb-http/google//modules/serverless_negs"
  version = "~> 6.1.1"

  name    = "${local.env}-${local.domain}"
  project = local.project
  ssl     = true
  managed_ssl_certificate_domains = ["${local.env}.${local.domain}.${local.domain_suffix}"]

  https_redirect = true
  create_url_map = false
  url_map        = google_compute_url_map.url-map.self_link

  backends = {
    for serviceObj in local.services :
    serviceObj.service => {
      description = serviceObj.service
      groups = [
        {
          group = google_compute_region_network_endpoint_group.neg[serviceObj.service].self_link
        }
      ]
      enable_cdn              = false
      security_policy         = null
      custom_request_headers  = null
      custom_response_headers = null
      timeout_sec             = 300
      iap_config = {
        enable               = false
        oauth2_client_id     = ""
        oauth2_client_secret = ""
      }
      log_config = {
        enable      = false
        sample_rate = null
      }
    }
  }
}

Deploying Your Load Balancer

Once you have defined all of your Terraform resources, you can test your configuration with terraform plan, which will show all of the resources that will be created.

Once you have verified that the plan is what you expect, you can deploy all resources by running terraform apply.

Further Applications and Future Work

While this tutorial goes through the basics to set up a dynamic serverless load balancer, there are many additional components you can add to extend this example. One option is to add a DNS resource that can reference the IP address from the load balancer output.
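As a sketch of the DNS option, the record below publishes the load balancer's IP through Cloud DNS. The managed zone name is a placeholder, and this assumes the serverless_negs module's external_ip output; adjust both to match your setup:

```terraform
resource "google_dns_record_set" "lb" {
  name         = "${local.env}.${local.domain}."
  managed_zone = "my-zone" # Hypothetical zone; replace with your own.
  project      = local.project
  type         = "A"
  ttl          = 300
  # The lb-http module exposes the load balancer's address as an output.
  rrdatas = [module.lb-http.external_ip]
}
```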

Another exciting opportunity is to add various Cloud Armor policies to different routes and endpoints. This feature is still in beta, but as it becomes more mature we will publish an additional blog post on it, so stay tuned. An example resource could look like the policy below, which we could add in a dynamic block and reference in the backend block via the security_policy field.

resource "google_compute_security_policy" "service_sec_policy" {
  provider    = google-beta
  name        = "service-sec-policy-${local.domain}"
  description = "mobileproxy-sec-policy"
  project     = local.project

  adaptive_protection_config {
    layer_7_ddos_defense_config {
      enable = true
    }
  }
}
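To wire such a policy into the load balancer, the security_policy field in the backends map above could point at the policy's self link instead of null. A sketch of just the relevant fragment:

```terraform
backends = {
  for serviceObj in local.services :
  serviceObj.service => {
    # ...other backend settings as shown earlier...
    security_policy = google_compute_security_policy.service_sec_policy.self_link
  }
}
```

Combined with a per-service field in local.services, a conditional expression here could also apply different policies to different backends.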

We also have several pull requests pending against Google’s Terraform provider to add additional functionality, including adding the api-gateway resource as an option in the google_compute_region_network_endpoint_group resource, as well as adding full support for Cloud Armor policies in google_compute_security_policy, including rate limiting and throttling options.

If you enjoyed this blog, then check out our other tutorials or explore additional content on our newly launched engineering blog!

Premise is constantly looking to hire top-tier engineering talent to help us improve our front-end, mobile offerings, data and cloud infrastructure. Come find out why we’re consistently named among the top places to work in Silicon Valley by visiting our Careers page.
