Moving to the BeyondCorp Model With Cloud IAP and IAP Connector

Context of the Engagement

As a Google Cloud Premier Partner, SADA helps customers get the most out of Google Cloud products by building and recommending solutions based on Google’s best practices and methodologies. Recently, I worked with a Google Cloud Professional Services Organization (PSO) customer who wanted to manage and secure access to a couple of their applications (running both on premises and in AWS) using Cloud IAP, Google’s Identity-Aware Proxy product. For the purposes of this post, we will refer to those applications as app1 and app2.

Cloud IAP

Cloud IAP is a building block towards BeyondCorp, Google’s implementation of a zero trust security model designed to enable employees to work from untrusted networks without the use of a VPN.

Cloud IAP enables companies and application admins to control internet access to applications running in Google Cloud Platform (GCP) that are accessed over HTTPS. It does this by verifying the user’s identity and the context of each request to determine whether the request should be allowed through. This provides an application-level access control model instead of relying on network-level firewalls. In other words, you can set granular access control policies for applications based on user identity (e.g., employees versus contractors), device security status and IP address.

IAP Connector

Cloud IAP handles authentication and authorization for requests to Google App Engine or to an HTTP(S) load balancer (Cloud Load Balancing). When your application runs outside of GCP, Google’s solution is to deploy a middleware proxy in Google Kubernetes Engine (GKE) based on Ambassador, a Kubernetes-native API gateway built on the Envoy proxy. The proxy then forwards authenticated and authorized requests to your application running outside of GCP. Because the proxy runs in GCP, you can apply Cloud IAP access policies to it, and then allow access to your on-prem application only from the proxy over private IP, using one of the Google Cloud hybrid connectivity products such as Cloud VPN or Cloud Interconnect. This setup is referred to as the IAP connector.

The first version of the IAP connector is deployed using a Deployment Manager template that creates the required resources (a GKE cluster, an Ambassador deployment, and an ingress resource).

The Deployment Manager template is not very customizable and has some limitations:

  • The GKE cluster created is not a private cluster
  • The template does not let you use a shared VPC or specify the IP ranges for pods and services
  • Only one Ambassador deployment and one ingress resource per cluster are supported
  • Ambassador services cannot be customized beyond route mappings; settings such as connection timeouts, retry policies and other optional Ambassador configurations are not exposed

Additionally, the customer, a Terraform Enterprise shop, wanted a solution that integrates well with their existing technologies and their employees’ skill sets. As such, we started working on a deployment solution based on Terraform and Helm.

IAP Connector With Terraform and Helm Flavor

The deployment process of the new IAP connector is divided into four logical phases and will result in the structure shown in the graph below:

[Figure: IAP Connector Terraform graph]

A. Terraform

In this phase, we create all infrastructure needed to deploy the IAP connector in GCP using Terraform.

First, we start by creating the VPC. We chose a shared VPC because it is the best way to separate network administration from the application itself while letting many projects share the network infrastructure and communicate privately over RFC 1918 addresses. This removes the need to interconnect many VPCs with the on-prem network while keeping centralized control over network resources.
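
A minimal Terraform sketch of this step could look like the following; the project variables, resource names and CIDR ranges are placeholders rather than the customer’s actual values:

resource "google_compute_network" "shared_vpc" {
  project                 = var.host_project_id
  name                    = "iap-connector-net"
  auto_create_subnetworks = false
}

# Mark the host project and attach the service project that will run the IAP connector.
resource "google_compute_shared_vpc_host_project" "host" {
  project = var.host_project_id
}

resource "google_compute_shared_vpc_service_project" "service" {
  host_project    = google_compute_shared_vpc_host_project.host.project
  service_project = var.service_project_id
}

# Subnet with secondary ranges that the VPC-native GKE cluster will use for pods and services.
resource "google_compute_subnetwork" "gke" {
  project       = var.host_project_id
  name          = "iap-connector-subnet"
  region        = var.region
  network       = google_compute_network.shared_vpc.id
  ip_cidr_range = "10.10.0.0/20"

  secondary_ip_range {
    range_name    = "pods"
    ip_cidr_range = "10.20.0.0/16"
  }

  secondary_ip_range {
    range_name    = "services"
    ip_cidr_range = "10.30.0.0/20"
  }
}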

Second, we create the necessary Cloud Routers in the VPC and set up connectivity to the customer’s corporate network using one of Google’s hybrid connectivity solutions, enabling private communication between the Ambassador pods and the backend applications we want to protect.
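
The exact resources depend on which connectivity product is chosen. As one illustration, a Cloud Router and a single HA VPN tunnel could be declared roughly as below; the gateway names, ASN, variables and shared secret are assumptions, and the BGP session configuration is omitted for brevity:

resource "google_compute_router" "onprem" {
  name    = "iap-connector-router"
  project = var.host_project_id
  region  = var.region
  network = google_compute_network.shared_vpc.id

  bgp {
    asn = 64514   # placeholder private ASN for the GCP side
  }
}

resource "google_compute_ha_vpn_gateway" "gcp" {
  name    = "iap-connector-vpn-gw"
  project = var.host_project_id
  region  = var.region
  network = google_compute_network.shared_vpc.id
}

# Represents the customer's on-prem VPN device.
resource "google_compute_external_vpn_gateway" "onprem" {
  name            = "onprem-vpn-gw"
  project         = var.host_project_id
  redundancy_type = "SINGLE_IP_INTERNALLY_REDUNDANT"

  interface {
    id         = 0
    ip_address = var.onprem_vpn_public_ip
  }
}

resource "google_compute_vpn_tunnel" "tunnel0" {
  name                            = "iap-connector-tunnel0"
  project                         = var.host_project_id
  region                          = var.region
  vpn_gateway                     = google_compute_ha_vpn_gateway.gcp.id
  vpn_gateway_interface           = 0
  peer_external_gateway           = google_compute_external_vpn_gateway.onprem.id
  peer_external_gateway_interface = 0
  shared_secret                   = var.vpn_shared_secret
  router                          = google_compute_router.onprem.id
}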

Third, we deploy NAT gateways so that instances and pods in the private GKE cluster can reach the internet, and we create the firewall rules needed to allow or block specific ports, protocols and IP ranges between the on-prem network and the GCP network, according to the setup.
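
A sketch of the NAT gateway plus one illustrative ingress rule; names and the on-prem CIDR are placeholders, and the real rule set should match the ports and protocols your applications actually need:

resource "google_compute_router_nat" "nat" {
  name                               = "iap-connector-nat"
  project                            = var.host_project_id
  region                             = var.region
  router                             = google_compute_router.onprem.name
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}

# Example: allow HTTPS traffic from the on-prem range into the VPC.
resource "google_compute_firewall" "allow_onprem_https" {
  name          = "allow-from-onprem-https"
  project       = var.host_project_id
  network       = google_compute_network.shared_vpc.id
  direction     = "INGRESS"
  source_ranges = ["192.168.0.0/16"]   # placeholder on-prem CIDR

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }
}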

Finally, we deploy the GKE cluster by calling the Terraform module here. The cluster is a VPC-native, regional, private cluster with autoscaling enabled on each of its two node pools.
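
The module handles the details, but the cluster it produces is roughly equivalent to the sketch below; resource names, CIDRs, machine types and node counts are placeholders, not the module’s actual interface:

resource "google_container_cluster" "iap_connector" {
  name       = "iap-connector"
  project    = var.service_project_id
  location   = var.region                              # a region, so the cluster is regional
  network    = google_compute_network.shared_vpc.id
  subnetwork = google_compute_subnetwork.gke.id

  remove_default_node_pool = true
  initial_node_count       = 1

  # VPC-native: pods and services draw IPs from the subnet's secondary ranges.
  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }

  # Private nodes; the control plane keeps a public endpoint for administration.
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }
}

# One of the two node pools; the second follows the same pattern.
resource "google_container_node_pool" "ambassador" {
  name               = "ambassador-pool"
  project            = var.service_project_id
  location           = var.region
  cluster            = google_container_cluster.iap_connector.name
  initial_node_count = 1

  autoscaling {
    min_node_count = 1
    max_node_count = 5
  }

  node_config {
    machine_type = "e2-standard-4"
  }
}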

Before moving to the next section, we allocate a global static IP address and create the SSL certificate resources for app1 and app2 in GCP, also with Terraform, so that the deployed IAP connector’s HTTP(S) load balancer serves traffic on a static public IP and encrypts it with the certificates of the applications we are protecting.
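
A sketch of those resources, assuming the application certificates are uploaded as self-managed SSL certificates (file paths and resource names are placeholders; Google-managed certificates are an alternative):

resource "google_compute_global_address" "iap_lb" {
  name    = "iap-connector-lb-ip"
  project = var.service_project_id
}

resource "google_compute_ssl_certificate" "app1" {
  name        = "app1-cert"
  project     = var.service_project_id
  private_key = file("certs/app1.key")
  certificate = file("certs/app1.crt")
}

resource "google_compute_ssl_certificate" "app2" {
  name        = "app2-cert"
  project     = var.service_project_id
  private_key = file("certs/app2.key")
  certificate = file("certs/app2.crt")
}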

B. Helm

In this phase, we deploy the IAP connector Helm chart into the GKE cluster created earlier. The Helm chart will create the following resources:

  • A GKE ingress: the entry point for traffic into Google Cloud. It creates a global HTTP(S) load balancer in GCP that uses the single static anycast IP we allocated earlier. The DNS hostnames of your applications will later point to this load balancer IP.
  • A Kubernetes service for each routing rule, where Ambassador configurations such as retry policies and connection timeouts can be applied as annotations. See here for more information on how to configure these annotations.
  • A configured number of Ambassador replicas, containing both the Ambassador control plane and Envoy proxy instances, which handle the routing of traffic.
  • An optional Horizontal Pod Autoscaler to scale the Ambassador deployment based on the CPU and memory consumption of its pods.

The definition of the Helm chart we deploy is here; the values passed to the chart define the mapping between the external hostname of the application (coming through the IAP-protected load balancer) and the internal hostname of the application we want to protect. For example, for the app1 application, the mapping configuration will look similar to the following:

mapping:
   - name: host
     source: app1.domain.com
     destination: app1-int.domain.com 

See this example values file for how to configure the mapping and the Helm chart in general.
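
If you want the chart deployment to live in the same Terraform pipeline, one option is a helm_release resource pointed at the cluster created earlier; the chart path and values-file location below are assumptions, not the chart’s canonical source:

data "google_client_config" "default" {}

provider "helm" {
  kubernetes {
    host                   = "https://${google_container_cluster.iap_connector.endpoint}"
    token                  = data.google_client_config.default.access_token
    cluster_ca_certificate = base64decode(google_container_cluster.iap_connector.master_auth[0].cluster_ca_certificate)
  }
}

resource "helm_release" "iap_connector" {
  name  = "iap-connector"
  chart = "./charts/iap-connector"   # placeholder path to a local copy of the chart

  # Routing rules and ingress settings (static IP name, certificates) come from
  # a values file like the example referenced above.
  values = [file("${path.module}/values.yaml")]
}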

C. Configure OAuth Consent Screen and Enable IAP

Once you deploy the Helm chart, if you go to the Identity-Aware Proxy page, you will notice there is a backend service for each routing rule you created through Helm. Before you can enable IAP on those backends, you have to configure an OAuth consent screen, as described here.

Go to the OAuth consent screen page in your project, add a support email for your applications and the application name you want displayed on the OAuth consent screen, then click “Save.”

Once done, return to the IAP page to set up Cloud IAP access policies for your backend applications. More specifically, you can define which users or groups have access to those services and which access levels, if any, they are assigned. The procedure is well documented here.
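
If you prefer to manage those access policies as code, an IAM binding on one of the IAP-protected backend services is one way to do it; the backend service name and group below are placeholders:

resource "google_iap_web_backend_service_iam_member" "app1_access" {
  project             = var.service_project_id
  web_backend_service = "k8s-be-30001--app1"   # placeholder: the backend service the ingress created for app1
  role                = "roles/iap.httpsResourceAccessor"
  member              = "group:app1-users@domain.com"
}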

D. Cutover

Once everything is deployed, it’s time to point your applications’ DNS hostnames to the IAP-protected load balancer IP. Two DNS records need to be created for each backend application. For example, for app1 (a Terraform sketch follows the list):

  • An external DNS hostname for your application (app1.domain.com) pointing to the HTTP(S) load balancer IP.
  • An internal DNS hostname for the associated on-premises application (app1-int.domain.com) pointing to the backend application IP.
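
Assuming both zones are hosted in Cloud DNS (the internal zone may just as well live on another DNS system, as noted later), the two app1 records could be declared like this, with placeholder zone names and a placeholder internal IP:

resource "google_dns_record_set" "app1_external" {
  project      = var.service_project_id
  managed_zone = "public-zone"                              # placeholder public zone
  name         = "app1.domain.com."
  type         = "A"
  ttl          = 300
  rrdatas      = [google_compute_global_address.iap_lb.address]
}

resource "google_dns_record_set" "app1_internal" {
  project      = var.host_project_id
  managed_zone = "private-zone"                             # placeholder private zone
  name         = "app1-int.domain.com."
  type         = "A"
  ttl          = 300
  rrdatas      = ["192.168.10.20"]                          # placeholder backend application IP
}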

Once you create these two records, Ambassador applies the mapping rules and proxies authenticated external traffic to the internal application endpoint. Your app should now be securely accessible from the internet using the external hostname.

Notes

After implementing the solution, a good practice is to limit access to your backend applications to the range of IP addresses allocated to the pods. You can do this by setting firewall rules on your on-premises applications that allow traffic originating from pod IPs only.

Another thing we encountered is that the Ambassador pods need to resolve internal DNS records such as app1-int.domain.com. If these records live on a private DNS server running on prem, one option is to create a stub domain in GKE that forwards DNS queries for domain.com to that private name server.
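
As a sketch, and assuming the cluster uses kube-dns, the stub domain can be defined through the kube-dns ConfigMap (the kubernetes provider is configured against the cluster in the same way as the Helm provider above; the on-prem name server IP is a placeholder):

resource "kubernetes_config_map" "kube_dns_stub_domain" {
  metadata {
    name      = "kube-dns"
    namespace = "kube-system"
  }

  data = {
    # Forward lookups for domain.com to the on-prem name server.
    stubDomains = jsonencode({
      "domain.com" = ["192.168.10.53"]
    })
  }
}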

Conclusion

We went through the steps above to secure access in production to a couple of applications running on prem using Cloud IAP and the IAP connector. The IAP connector was tested both under synthetic load and under production load from more than 1,000 customers, without a single recorded issue.

The customer highlighted the benefits: improved security, easier user access and reduced latency for employees and partners working away from the corporate network.
