When I started at Thinker, we decided to build our own web server on Amazon. Our initial tests suggested that this server could respond to traffic much faster than our previous hosts could. We set about designing a Linux-based server that would give us everything we needed to host our clients’ websites, with an emphasis on speed and security.
This worked well for a few months, until one of our clients launched a smartphone app. The app’s first task was to download from their website all the icons and theme graphics that drive it. They launched the app in a meeting of more than 400 people, telling them to “get out your smartphone, go to your app store and download our app called … .” However, the server was not set up to handle that much traffic at once; the flood of simultaneous requests amounted to an accidental denial-of-service attack, and the server crashed.
The next day we realized that we could make the server larger, with more memory and CPU (scale up), in case one of our clients did this again. That has been the traditional route for web servers that handle large amounts of traffic. But all of that extra computing power and memory would sit idle most of the time, and the hourly cost of such a server would be far more than our client would be willing to pay.
After much research, we decided to migrate to a scale-out model instead. For years Google has run an in-house cluster manager called Borg; the lessons learned from it shaped Kubernetes, the open-source system Google released and now offers on its own cloud infrastructure. Kubernetes is designed to create mini-servers, which live, die and are regenerated depending on the quantity of traffic. The server and all its pieces are managed and grown together as a single web stack deployment.
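To give a flavour of what that looks like in practice (the names here are illustrative, not our actual configuration), a Kubernetes Deployment simply declares how many copies of a mini-server should exist, and Kubernetes keeps that many running:

```yaml
# Hypothetical sketch: ask Kubernetes to keep three copies of a
# PHP web container running; any copy that dies is regenerated.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-site            # illustrative name
spec:
  replicas: 3                  # desired number of mini-servers
  selector:
    matchLabels:
      app: client-site
  template:
    metadata:
      labels:
        app: client-site
    spec:
      containers:
      - name: php-web
        image: php:7-apache    # a public PHP image; ours differs
        ports:
        - containerPort: 80
```

The key design idea is that you declare the desired state rather than issuing commands: Kubernetes continuously compares reality against this file and corrects any drift on its own.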
We decided to move all our websites to this new infrastructure, which we did over a few months. We created two distinct platforms, based on the version of PHP our clients needed. We have some clients who need an older version of PHP for plugins that have not been updated to the faster PHP 7.
All our website requests enter at the same point, or IP address. A Kubernetes load-balancer deployment handles any certificates the websites use, then selects which backend services should render each site. Any database calls are also directed through an API (application programming interface) instead of a direct connection.
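As a sketch of how that routing is expressed (hostnames and service names below are made up, not our real setup), a Kubernetes Ingress ties a site’s hostname and its TLS certificate to the backend service that renders it:

```yaml
# Hypothetical sketch: route one client's hostname, with its
# certificate, to the backend service that serves the site.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: client-site
spec:
  tls:
  - hosts:
    - www.example-client.com        # illustrative hostname
    secretName: example-client-tls  # certificate stored as a Secret
  rules:
  - host: www.example-client.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: client-site-php7  # backend chosen per client
            port:
              number: 80
```

Because the certificate and the routing rule live together in one declaration, adding a new client site is a matter of adding one more entry rather than reconfiguring a shared server.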
The beauty of this system is that each service has its CPU load measured. If load goes up, another instance, or mini-server, is brought online, often within a few seconds, and placed into active service. And if a mini-server stops responding for any reason, it is killed off and re-created automatically. In other words, our stack is self-healing.
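That CPU-driven scaling can itself be declared in a few lines. The sketch below uses made-up names and thresholds rather than our production values: a HorizontalPodAutoscaler watches a deployment’s CPU load and adds or removes mini-servers to stay near a target.

```yaml
# Hypothetical sketch: keep average CPU near 70%, scaling the
# deployment between 2 and 100 mini-servers as traffic changes.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: client-site
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: client-site
  minReplicas: 2
  maxReplicas: 100
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```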
We’ve put a lot of time and effort into testing the scaling of our web stack over the past few months. We have successfully handled traffic spikes of several thousand percent without any downtime for our clients, and we can now grow our infrastructure to 100 times its normal scale within three minutes.
We’re always looking for ways to speed up our current services. We’ve invested in several content delivery networks (CDNs), which means that individual website images and scripts are saved on servers around the world. When a visitor in Europe or Australia opens one of our hosted sites that uses a CDN, all images and scripts are loaded from a server much closer to them geographically.
If you’re interested in the nitty-gritty technical details of how this actually works, please come and talk to me about it. I find this endlessly fascinating. If you’d rather skip the explanation, the takeaway of this post is simply this: if you want a secure, fast web service that can scale when your website suddenly starts trending, hosting with Thinker is your solution. We offer all sorts of options, from keeping WordPress up to date to handling all your SSL certificates. We can even migrate your business-critical applications to a secure web-based platform, so your staffers are no longer tied to their desks.