Building scalable infrastructure

July 13, 2021

Even before launching, I'd had a great deal of experience running scalable systems on the backbone of Firestore. Previously I'd relied on pure Cloud Functions to interact with Firestore via Node. Over time, however, this led to one clear issue: speed. Customers using Hyra demand rapid response times, in the low hundreds of milliseconds. That requires not only a lot of globally distributed infrastructure, but also a fast runtime. Cloud Functions' cold start time was a serious performance bottleneck, and it hit customer experience hard, especially on rarely used endpoints, where some customers would see 10 second load times - crazy!

Cold start: when a function hasn't been used in a while, an entire new instance has to be spun up before it can serve the request
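To make the problem concrete, here's a minimal sketch of the kind of endpoint we were running. The function name and collection are illustrative, not Hyra's real API, but the shape is the same: every handler lived inside a Cloud Function, and each one could pay the cold start penalty.

```js
// The old setup: a pure Cloud Function talking to Firestore.
// "getShifts" and the "shifts" collection are hypothetical examples.
const functions = require("firebase-functions");
const admin = require("firebase-admin");

admin.initializeApp();
const db = admin.firestore();

// Whenever no warm instance was available, a request like this
// had to wait for an entire instance to spin up first.
exports.getShifts = functions.https.onRequest(async (req, res) => {
  const snapshot = await db.collection("shifts").limit(20).get();
  res.json(snapshot.docs.map((doc) => ({ id: doc.id, ...doc.data() })));
});
```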

Cold starts were costing time and causing customers frustration. After debating how we could securely manage our APIs, I decided on an infrastructure that would run off the back of Docker containers, allowing our API to be run as a microservice.
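The container itself doesn't need to be complicated. Here's a sketch of the sort of Dockerfile that packages a Node API like ours; the file names are illustrative, not our production config.

```dockerfile
# Illustrative Dockerfile: builds the Node API into a
# self-contained image any container platform can run.
FROM node:14-alpine

WORKDIR /app

# Install dependencies first so Docker caches this layer
COPY package*.json ./
RUN npm ci --only=production

COPY . .

# The platform injects PORT; the server binds to it at startup
EXPOSE 8080
CMD ["node", "server.js"]
```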

Our API now runs in Docker containers across two datacentres, hosted on Heroku, with standbys on other platforms. Heroku was chosen for its ease of deployment and monitoring. The uptime of the underlying AWS infrastructure, delivered through Heroku's PaaS, has led to serious improvements in speed, performance and load times for all customers.
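Heroku can build and run a Docker image directly once an app is switched to its container stack (via `heroku stack:set container`). Roughly, the deployment config looks like this:

```yaml
# heroku.yml - tells Heroku to build and run the Docker image
# for the web process instead of using a buildpack
build:
  docker:
    web: Dockerfile
```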

The flexibility of running a standalone Express server compared to Cloud Functions is still a true liberty, and Cloud Functions have a long way to go before they can fully replace the standard web server.
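For contrast, here's a minimal sketch of a standalone Express server serving the same hypothetical endpoint. The key difference is that the process stays warm between requests, so the Firestore connection is initialised once and reused rather than rebuilt on a cold start.

```js
// server.js - a sketch of the standalone Express setup.
// Assumes Firestore credentials are supplied via the environment
// (e.g. GOOGLE_APPLICATION_CREDENTIALS).
const express = require("express");
const admin = require("firebase-admin");

admin.initializeApp();
const db = admin.firestore();

const app = express();
app.use(express.json());

// Same illustrative endpoint as before, now served by a warm process
app.get("/shifts", async (req, res) => {
  const snapshot = await db.collection("shifts").limit(20).get();
  res.json(snapshot.docs.map((doc) => ({ id: doc.id, ...doc.data() })));
});

// Heroku supplies the port via the PORT environment variable
const port = process.env.PORT || 8080;
app.listen(port, () => console.log(`API listening on ${port}`));
```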
