Continuous integration and continuous deployment (CI/CD) have become cornerstones of modern software development. CTO.ai, a popular CI/CD tool, in tandem with Google Cloud Platform (GCP), can help automate your application deployments. In this guide, we'll explore how to deploy a web application to GCP using CTO.ai.
Prerequisites
- GCP account
- A created project on GCP
- Google Cloud SDK installed and initialized
- Docker installed on your local machine
- A CTO.ai account
- The CTO.ai Ops CLI installed, and the CTO.ai GCP workflow set up on your machine
- A Pulumi access token configured for your project
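Before moving on, it can help to confirm the local tooling is actually available. A quick check (the `ops --version` line assumes the CTO.ai Ops CLI is already on your PATH):
# Confirm the local tooling is installed
gcloud --version
docker --version
ops --version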
Set Up Google Cloud Service Account
Create a service account on GCP to give CTO.ai access:
- Go to GCP console > IAM & admin > Service accounts.
- Click Create Service Account.
- Add the relevant permissions for your Compute Engine resources.
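If you prefer the command line, the same service account can be created with the gcloud CLI. A sketch, where the account name cto-ai-deployer, the project ID placeholder, and the roles/compute.admin role are assumptions to adjust for your setup:
# Create the service account (name is an example)
gcloud iam service-accounts create cto-ai-deployer --display-name="CTO.ai deployer"
# Grant it access to Compute Engine in your project
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="serviceAccount:cto-ai-deployer@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/compute.admin"
# Download a key file the workflow can authenticate with
gcloud iam service-accounts keys create key.json \
  --iam-account="cto-ai-deployer@YOUR_PROJECT_ID.iam.gserviceaccount.com"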
Configure and Set up GCP Workflows
Before getting started with this guide, install the GCP GKE Pulumi Py workflow from the workflows-sh/gcp-gke-pulumi-py repository. If you don't have access to the repository, kindly contact us at [email protected]
The repo includes complete IaC for deploying infrastructure on GCP: Kubernetes, Container Registry, Database Clusters, Load Balancers, and Project Resource Management, all built with Python + Pulumi + CTO.ai.
Clone the repository with:
git clone https://github.com/workflows-sh/gcp-gke-pulumi-py.git
cd gcp-gke-pulumi-py
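Before running the workflow, your Pulumi token and GCP credentials need to be available to the stack. A minimal sketch, assuming the workflow reads the standard PULUMI_ACCESS_TOKEN and GOOGLE_APPLICATION_CREDENTIALS environment variables (your setup may store these as CTO.ai secrets instead):
# Pulumi access token from app.pulumi.com (placeholder value)
export PULUMI_ACCESS_TOKEN=<your-pulumi-token>
# Path to the service account key downloaded earlier
export GOOGLE_APPLICATION_CREDENTIALS=$(pwd)/key.json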
Run and Set up your Infrastructure
Next, you need to build and set up the infrastructure that will deploy each resource to GCP using the GCP workflow stack. Kick off the build with `ops run -b .`
This will provision your stack and set up your infrastructure.
- Select setup infrastructure over GCP
- This process will build your Docker image and start provisioning your GCP infra resources.
- Next, select the services you want to deploy from the CLI. We will select `all` to install every dependency, which also provisions our GCP Container Registry.
- Back in the GCP console, open Container Registry, and you will see the registry created and ready to use.
- When your resources are deployed and your infra is created, you can view your VM instances in the GCP console (or from the command line, as shown below); we will use one of these instances to deploy our web application.
- We can now see our machine configuration.
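As a quick sanity check from your local machine, the provisioned resources can also be listed with the gcloud CLI, assuming gcloud is authenticated against the same project:
# List the Compute Engine instances the workflow created
gcloud compute instances list
# List images pushed to the project's Container Registry
gcloud container images list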
Accessing the VM
Once your VM is running, you can access it:
- In the VM instances list, click SSH in the row of your VM.
- This opens a terminal window connected to your instance. Alternatively, you can SSH from your own machine with the gcloud CLI, as shown below.
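If you prefer to connect from your own terminal, the gcloud CLI opens the same SSH session; the instance name and zone below are placeholders:
# SSH into the instance created by the workflow
gcloud compute ssh YOUR_INSTANCE_NAME --zone=YOUR_ZONE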
Backend Deployment with Node.js
# Update packages
sudo apt update
# Install Node.js
sudo apt install -y nodejs npm
# Navigate to your backend directory
cd /path/to/your/backend
# Install the necessary npm packages
npm install
# Install pm2 and use it to keep the app running
sudo npm install -g pm2
pm2 start npm --name backend -- start
Your backend should now be running on localhost.
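To confirm it is listening, run a quick check from the same VM, substituting the port your backend actually uses:
# Expect a response from your app
curl http://localhost:YOUR_BACKEND_PORT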
Frontend Deployment with React
You will need Node.js installed, which we already covered in the backend setup.
- Navigate to your frontend directory, install the npm packages, and build your application:
cd /path/to/your/frontend
npm install
npm run build
- Serve the build using a web server. We will use serve for this:
sudo npm install -g serve
serve -s build
Your frontend will be live at http://localhost:5000 by default.
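Running serve in the foreground ties it to your SSH session. One option, if you installed pm2 in the backend step, is pm2's built-in static file server (a sketch; keep the port in sync with whatever your Nginx config proxies to):
# Serve the production build in the background on port 5000
pm2 serve build 5000 --spa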
Setting up Nginx for Reverse Proxy
With both the frontend and backend on the same VM, you can use Nginx to manage routes and serve the frontend.
Install Nginx:
sudo apt install nginx
- Create a configuration file for your application in /etc/nginx/sites-available/:
sudo nano /etc/nginx/sites-available/myapp
Add the following configuration:
server {
listen 80;
server_name yourdomain.com;
location / {
proxy_pass http://localhost:5000; # Frontend address
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
location /api/ {
proxy_pass http://localhost:YOUR_BACKEND_PORT; # Backend address
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
- Link your configuration to sites-enabled:
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
- Test Nginx configuration and restart:
sudo nginx -t
sudo systemctl restart nginx
Your frontend should now be accessible at your domain’s root, and the backend via /api/.
Conclusion
This setup provides a basic deployment structure for a frontend and backend application on GCP’s Compute Engine. Be sure to configure firewall rules, implement monitoring using CTO.ai Insights, and set up logging for a production-ready environment.
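For example, if your instance isn't already tagged to allow web traffic, a firewall rule can be added with the gcloud CLI (the rule name and target tag below are placeholders):
# Allow inbound HTTP traffic to instances tagged http-server
gcloud compute firewall-rules create allow-http \
  --allow=tcp:80 \
  --target-tags=http-server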