How to Do Serverless the Right Way at an Early-Stage Startup
Fatih Acet is the CTO and co-founder of Superpeer and was employee number 40 at GitLab.
He recently came on The Ops Show with me to discuss how he used a serverless strategy to scale his engineering team to 17 developers.
The video below gives his entire explanation, but I'll break it down into steps here for easier reading.
So, how do you do serverless right and save money doing it? Here we go:
Step 1
Fatih describes himself as a T-shaped Frontend Engineer: his expertise goes deep while his collaboration goes wide. He initially wanted to use the MEAN stack but then decided to go serverless. From his previous experience with AWS, he found it confusing and hard to manage, so he went with GCP instead. In place of MongoDB he chose Firestore, with Cloud Functions for the backend. The first version of the site launched with around 30 Cloud Functions.
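To make that first version concrete, here's a minimal sketch of an HTTPS Cloud Function backed by Firestore. The package choices (firebase-functions, firebase-admin) and the `peers` collection are assumptions for illustration, not Superpeer's actual code:

```typescript
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

// One HTTPS endpoint per backend operation; the first launch shipped ~30 of these.
export const getPeerProfile = functions.https.onRequest(async (req, res) => {
  const id = req.query.id as string;
  if (!id) {
    res.status(400).json({ error: "missing id" });
    return;
  }

  const doc = await db.collection("peers").doc(id).get();
  if (!doc.exists) {
    res.status(404).json({ error: "not found" });
    return;
  }

  res.json(doc.data());
});
```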
Step 2
He used Cloud Functions for the application backend. However, because of cold starts, accepting a booking (Superpeer lets users book mentoring calls) was taking around 8 seconds, roughly 6 of which were spent on the cold start. A Cloud Function instance can only handle one request at a time, so under any real traffic cold starts were unavoidable.
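Cold starts happen before your handler runs, so they never show up in handler timings. Below is a minimal sketch of one way to confirm the split between startup cost and actual work; the `acceptBooking` endpoint and `bookings` collection are hypothetical, not Superpeer's actual API:

```typescript
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

// Module-scope code re-runs on every cold start; this flag marks the first request
// an instance serves, i.e. the one that paid the startup cost.
let isColdStart = true;

export const acceptBooking = functions.https.onRequest(async (req, res) => {
  const handlerStart = Date.now();
  const servedColdStart = isColdStart;
  isColdStart = false;

  await db.collection("bookings").add({ ...req.body, acceptedAt: handlerStart });

  // Comparing handler time here with end-to-end latency in the request logs for
  // cold-start requests is one way to attribute ~6 of the 8 seconds to startup.
  console.log({ servedColdStart, handlerMs: Date.now() - handlerStart });
  res.status(200).json({ ok: true });
});
```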
Step 3
Eventually they built up to 85 Cloud Functions, and deployment became an issue too. Cloud Functions are deployed one by one, and a full rollout took around 25 minutes. The application frontend could only be deployed after every Cloud Function deployment had finished, yet there was a gap of more than 20 minutes between the first and the last function going live.
Step 4
They started migrating to Cloud Run. A Cloud Run instance can handle up to 80 concurrent requests. Deployment is also much easier: since it's a single containerized application, everything ships in one deployment instead of 85 sequential ones.
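For contrast with the one-function-per-endpoint model, here's a minimal sketch of the Cloud Run service shape, assuming a containerized Express app in TypeScript. The routes are hypothetical; the point is that every endpoint lives in one container and ships in a single deploy:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// All routes live in one service, so one deploy updates the whole backend.
app.post("/bookings/accept", async (req, res) => {
  // ...write the booking to Firestore, same logic as the old Cloud Function...
  res.json({ ok: true });
});

app.get("/peers/:id", async (req, res) => {
  // ...fetch the profile from Firestore...
  res.json({ id: req.params.id });
});

// Cloud Run injects PORT; each instance can serve up to 80 requests concurrently,
// so one warm container absorbs a burst that would have spawned many cold functions.
const port = Number(process.env.PORT) || 8080;
app.listen(port, () => console.log(`listening on ${port}`));
```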
Step 5
Cloud Run recently added the ability to keep a number of instances always on. Fatih set the minimum instance count to 5, which eliminated cold starts entirely. The maximum instance count is set to 1,000, allowing near-infinite scaling.
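For reference, these are deploy-time settings: with the gcloud CLI they map to flags like `--min-instances=5` and `--max-instances=1000` on `gcloud run deploy` (or `gcloud run services update`). Keeping instances warm isn't free, so the minimum is a trade-off between idle cost and zero cold starts.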
Step 6
After the move to Cloud Run, accepting a booking dropped to around 300 milliseconds, down from 8 seconds with Cloud Functions. This made a huge difference to the application's perceived performance.
Step 7
About 20 Cloud Functions remain for asynchronous work, such as resizing uploaded images, transcribing uploaded videos, and sending email. The entire application backend now runs on Cloud Run. This split, for Fatih, is where serverless truly delivers.
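To make the split concrete, here's a hedged sketch of the kind of event-driven function that stays on Cloud Functions. The storage trigger, the sharp library, and the thumbnail naming are illustrative assumptions, not Superpeer's actual pipeline:

```typescript
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import * as path from "path";
import * as os from "os";
import sharp from "sharp";

admin.initializeApp();

// Runs whenever a file finishes uploading to the default bucket; nobody is waiting
// on the response, so cold starts don't matter here.
export const resizeUpload = functions.storage.object().onFinalize(async (object) => {
  if (!object.name || !object.contentType?.startsWith("image/")) return;

  const bucket = admin.storage().bucket(object.bucket);
  const localPath = path.join(os.tmpdir(), path.basename(object.name));
  await bucket.file(object.name).download({ destination: localPath });

  // Produce a thumbnail next to the original.
  const thumbPath = localPath.replace(/(\.\w+)$/, "_thumb$1");
  await sharp(localPath).resize(400).toFile(thumbPath);
  await bucket.upload(thumbPath, {
    destination: object.name.replace(/(\.\w+)$/, "_thumb$1"),
  });
});
```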
Extras
Fatih's serverless approach helped him scale his engineering team to 17 members because he no longer needed to worry about scaling the infrastructure. He could be confident the system would perform as expected during larger launches, like the one they did on ProductHunt.
Expenses
Serverless expenses are low for Superpeer right now: last month, staging and production together cost around $200. GCP's free quota is also very generous, he adds.
So, that's it. Fatih's serverless approach is saving him hundreds, if not thousands, of dollars while improving velocity and observability.
This is the type of approach that we believe in at CTO.ai. Simple workflows that are tracked, observable, and collaborative. If you’d like to try out The Ops Platform to make your serverless journey easier, let us know by sending us a note below: