Multi-region deployment is the new frontier for businesses aiming to achieve global reach while ensuring low latency and high availability. Google Cloud Platform (GCP) offers the infrastructure to facilitate such deployments, and when coupled with CTO.ai, the process becomes seamless.

This article will guide you through deploying multi-region applications in GCP using CTO.ai.

Prerequisites

Set Up Google Cloud Service Account

Create a service account on GCP to give CTO.ai access:

  • Go to the GCP console > IAM & Admin > Service Accounts.
  • Click Create Service Account.
  • Add the relevant permissions for Compute Engine (or use the CLI sketch below).
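If you prefer the command line, the same setup can be scripted with gcloud. The sketch below is illustrative: the service account name (cto-ai-deployer), the project ID placeholder (YOUR_PROJECT_ID), and the roles/compute.admin role are assumptions; grant whichever roles your deployment actually needs.

# Create a service account for CTO.ai (the name is an example)
gcloud iam service-accounts create cto-ai-deployer \
  --display-name="CTO.ai deployer"

# Grant it Compute Engine access on your project
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="serviceAccount:cto-ai-deployer@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/compute.admin"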

Configure and Set up GCP Workflows

Before getting started with this guide, install the GCP GKE Pulumi Py workflow from the workflows-sh/gcp-gke-pulumi-py repository. If you don't have access to the repository, contact us at [email protected]. The repo includes complete IaC for deploying infrastructure on GCP: Kubernetes, Container Registry, database clusters, load balancers, and project resource management, all built using Python + Pulumi + CTO.ai.

Clone the repository with:

git clone https://github.com/workflows-sh/gcp-gke-pulumi-py.git

cd gcp-gke-pulumi-py

Run and Set up your Infrastructure

Next, you need to build and set up the infrastructure that will deploy each resource to GCP using the GCP workflow stack. Set up your infrastructure with the ops run command shown below; this will provision your stack and set up your infrastructure.
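Assuming the ops CLI is installed and you are signed in to your CTO.ai account, run the following from the root of the cloned repository:

# -b builds the workflow's Docker image locally before running it
ops run -b .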

  • Select the setup infrastructure over GCP option.
  • This process will build your Docker image and start provisioning your GCP infra resources.


  • Next, select the services you want to deploy from the CLI. We will select the “all” service to install all the dependencies, which will also provision our GCP Container Registry.
  • Back in the GCP console, open Container Registry and you will see the registry that was created for you.
  • Once your resources are deployed and your infra is created, you can view your VM instances, databases, GKE cluster, and the other resources you will use in your GCP console.
  • We can now see our machine configuration.
  • Back in your GCP console, you can also see your GKE cluster.



  • When you click on it, you can see the machine configuration and network configuration; the same details are available from the CLI, as shown below.
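If you prefer the terminal to the console, a gcloud sketch like the following surfaces the same details; the cluster name and zone are placeholders and should match whatever the stack provisioned.

# List the clusters the stack created
gcloud container clusters list

# Show the machine and network configuration of a specific cluster
gcloud container clusters describe YOUR_CLUSTER_NAME --zone us-central1-c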

Setting Up Your Environment

Configure GCP Service Account

To grant CTO.ai access to GCP:

  • Go to the GCP Console > IAM & Admin > Service Accounts.
  • Create a service account, and grant it the necessary permissions for App Engine and Compute Engine.
  • Download the JSON key for this service account; the credentials will be stored in CTO.ai Secrets and referenced in your ops.yml file (see the sketch below).
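As a sketch, the JSON key can also be generated with gcloud; the service account name and project ID below are placeholders from the earlier example.

# Generate a JSON key for the service account
gcloud iam service-accounts keys create gcloud-service-key.json \
  --iam-account=cto-ai-deployer@YOUR_PROJECT_ID.iam.gserviceaccount.com

Store the contents of gcloud-service-key.json in CTO.ai Secrets as GCLOUD_SERVICE_KEY so that the load balancer job shown later can authenticate with it.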

Configuring CTO.ai for Multi-region Deployment

Your ops.yml should have the necessary configurations for multi-region deployment:

version: "1"
pipelines:
  - name: sample-expressjs-pipeline-gcp-gke-pulumi-py:0.1.1
    description: Build and Publish an image in a GCP Container Registry
    env:
      static:
        - DEBIAN_FRONTEND=noninteractive
        - STACK_TYPE=gcp-gke-pulumi-py
        - ORG=cto-ai
        - GH_ORG=workflows-sh
        - REPO=sample-expressjs-gcp-gke-pulumi-py
        - BIN_LOCATION=/tmp/tools
      secrets:
        - GITHUB_TOKEN
        - PULUMI_TOKEN
    events:
      - "github:workflows-sh/sample-gcp-gke-pulumi-py:pull_request.opened"
      - "github:workflows-sh/sample-gcp-gke-pulumi-py:pull_request.synchronize"
      - "github:workflows-sh/sample-gcp-gke-pulumi-py:pull_request.merged"
    jobs:
      - name: sample-expressjs-build-gcp-gke-pulumi-py
        description: Build step for sample-expressjs-gcp-gke-pulumi-py
        packages:
          - git
          - unzip
          - wget
          - tar
          - gcloud
          - kubectl
        steps:
          - mkdir -p $BIN_LOCATION
          - export PATH=$PATH:$BIN_LOCATION
          - ls -asl $BIN_LOCATION

multi-region-deployment:
  jobs: 
    - deploy-region:
        region: us-central
        filters:
          branches:
            only: main
    - deploy-region:
        region: europe-west
        filters:
          branches:
            only: main
    - deploy-region:
        region: asia-east
        filters:
          branches:
            only: main

jobs:
  - name: update load balancer
    description: set up load balancer and utils
    steps:
      - echo $GCLOUD_SERVICE_KEY > ${HOME}/gcloud-service-key.json
      - gcloud auth activate-service-account --key-file=${HOME}/gcloud-service-key.json
      - gcloud container clusters get-credentials YOUR_CLUSTER_NAME
  - name: Deploy to different regions
    steps:
      - gcloud app deploy app.yaml --region=<< parameters.region >>

This CTO.ai configuration specifies a workflow that deploys the application to three different regions (us-central, europe-west, and asia-east) whenever changes are pushed to the main branch.


You can also update the __main__.py file in the GCP workflow stack to support multi-region deployments.

import os
from pulumi import export
from src.stack.main import Stack

STACK_ENV = os.getenv("STACK_ENV")
STACK = os.getenv("STACK")
STACK_TYPE = os.getenv("STACK_TYPE")
PULUMI_ORG = os.getenv("PULUMI_ORG")
GOOGLE_ZONE = os.getenv("GOOGLE_ZONE")
GOOGLE_PROJECT = os.getenv("GOOGLE_PROJECT")
CLOUD_SQL_REGION = os.getenv("CLOUD_SQL_REGION", "us-central1")
GCR_LOCATION = os.getenv("GCR_LOCATION", "US")
GCS_BUCKET_LOCATION = os.getenv("GCS_BUCKET_LOCATION", "us-central1")
MEMORY_STORE_LOCATION = os.getenv("MEMORY_STORE_LOCATION", "us-central1-c")
CA_LOCATION = os.getenv("CA_LOCATION", "us-central1")

def main():
    stack = Stack(
        # ... [other parameters]
        gke_cluster_location=GOOGLE_ZONE,
        # ... [other parameters]
        cloud_sql_region=CLOUD_SQL_REGION,
        # ... [other parameters]
        gcr_location=GCR_LOCATION,
        # ... [other parameters]
        gcs_bucket_location=GCS_BUCKET_LOCATION,
        # ... [other parameters]
        memory_store_location=MEMORY_STORE_LOCATION,
        ca_location=CA_LOCATION,
        # ... [other parameters]
    )
    export("stack_outputs", stack.outputs)

if __name__ == "__main__":
    main()

With this, you can deploy to different regions by changing the environment variables needed for your resources.
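For example, a second stack could target Europe by overriding the region-specific variables before re-running the workflow. This is a sketch only; the europe-west1 values are illustrative, and which variables you override depends on which resources you deploy.

# Target a European region instead of the us-central1 defaults (values are examples)
export GOOGLE_ZONE=europe-west1-b
export CLOUD_SQL_REGION=europe-west1
export GCR_LOCATION=EU
export GCS_BUCKET_LOCATION=europe-west1
export MEMORY_STORE_LOCATION=europe-west1-b
export CA_LOCATION=europe-west1

# Re-run the workflow to provision the stack in the new region
ops run -b .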

Execution

When you push code to the main branch of your repository, CTO.ai:

  • Authenticates with GCP
  • Deploys the application to each region specified in your workflow, so users in different geographic locations see reduced latency. You can spot-check the result from the CLI, as shown below.
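Once the pipeline finishes, you can verify each region from the terminal; the commands below simply list what was provisioned and assume you are authenticated against the same project.

# Confirm the clusters and the regions/zones they run in
gcloud container clusters list

# Confirm the deployed instances and their zones
gcloud compute instances list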

Benefits of Multi-region Deployment via CTO.ai

  • Reduced Latency: Serving applications closer to the user base ensures faster load times.
  • High Availability: Even if one region faces an outage, your application remains accessible from other regions.
  • Streamlined Process: CTO.ai simplifies multi-region deployment, making it consistent and less error-prone.

Wrapping Up: Multi-region Deployments with CTO.ai

Deploying applications across multiple regions is an essential step toward ensuring high availability, low latency, and disaster recovery. With CTO.ai and GCP, the process becomes not just efficient but also easily repeatable. If you're looking to achieve global reach without the complexities traditionally associated with such deployments, this combination offers a streamlined path.