Valued at $12.43 billion in 2022, the global serverless market is projected to reach $193.42 billion by 2035, growing at a CAGR of 25.70% over the forecast period.
Serverless is now emerging as a mainstay of modern cloud computing. Over the last year, serverless adoption on Google Cloud and Microsoft Azure has increased by 6-7%. A 2021 IBM survey on serverless architecture found that 85% of respondents believe the technology is worth the money and effort.
However, along with its many benefits, serverless technology has its share of downsides. Among them is the infamous issue of serverless cold starts, which are extremely detrimental to the user experience. The good news is that their impact can be largely mitigated with the right cloud strategy.
So, what exactly are serverless cold starts, and how can the latency they cause be reduced? Let's address the problem (along with the solutions).
What Are Serverless Cold Starts?
A cold start generally describes a situation where an application takes longer than expected to start up and respond to a request. In the serverless context, a cold start is the extra time a function request takes while the platform provisions and initializes a new execution environment before the code can run.
Typically, in a serverless runtime environment, incoming requests are executed inside containers; serverless functions are usually served by one or more micro-containers. On receiving a request, the platform checks whether a container is available to serve it. If none is available, it has to initialize a new one, and the resulting delay is referred to as a cold start.
Effectively, when a serverless function is invoked in a cold state, the request takes additional time to process, resulting in high latency (see the sketch after this list). Driving factors for serverless cold starts include:
- The runtime of the underlying programming language
- The size of the deployment package that must be loaded to run the serverless application code
- Additional startup connections and initializations that are external to the main function handler
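To make this concrete, here is a minimal sketch of an AWS Lambda-style Python handler that reports whether it landed on a cold or a warm container. The handler name and response shape are illustrative; the point is that module-level code runs once per container, so only the first invocation on a given container observes the cold start:

```python
import time

# Module-level code runs once per container, when it is first initialized;
# its timestamp approximates when the cold start happened.
_CONTAINER_STARTED_AT = time.time()
_is_cold = True

def handler(event, context):
    """Report whether this invocation hit a cold or a warm container."""
    global _is_cold
    cold, _is_cold = _is_cold, False
    return {
        "cold_start": cold,
        "container_age_seconds": round(time.time() - _CONTAINER_STARTED_AT, 3),
    }
```

Invoking the function twice in quick succession typically shows cold_start as true, then false; after an idle period, the platform may recycle the container and the cold start repeats.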
Strategies for Reducing Latency
So, how can organizations reduce latency caused by serverless cold starts? Here are six effective strategies:
1. Select a Faster Runtime Environment
Some cloud workloads are sensitive to startup duration, which adds to overall latency. To address this, consider a lightweight language like Python for serverless functions. Interpreted languages like Python and Ruby typically start much faster than compiled, VM-based runtimes like Java and C#.
For instance, Python's startup time has been benchmarked at up to 100x faster than heavier runtimes. Lower startup latency can also shorten billed execution time and reduce cloud expenses.
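As a rough, local way to compare bare runtime startup (this ignores platform overheads such as container provisioning, so treat the numbers as directional only), the sketch below times how long each interpreter takes to launch and exit. It assumes python3 and node are on the PATH; swap in whichever runtimes you are evaluating:

```python
import subprocess
import time

def startup_time(cmd, runs=5):
    """Average wall-clock seconds to launch a runtime that exits immediately."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        total += time.perf_counter() - start
    return total / runs

# Assumes these binaries are installed; add or remove entries as needed.
candidates = {
    "python3": ["python3", "-c", "pass"],
    "node": ["node", "-e", ""],
}
for name, cmd in candidates.items():
    print(f"{name}: {startup_time(cmd) * 1000:.1f} ms")
```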
2. Use Smaller Serverless Functions
Serverless functions with larger codebases tend to exhibit higher latency during cold starts, and they require more setup configuration from cloud vendors. The serverless approach favors breaking large monolithic functions down into smaller ones; the only question is the right level of granularity.
For example, serverless functions don't perform optimally when a heavyweight synchronous call blocks the rest of the code from executing.

As a best practice, use small, granular serverless functions that load faster and are easier to manage. Leverage serverless monitoring tools to observe performance and surface valuable insights.
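A minimal sketch of such a decomposition follows. The function and queue names are hypothetical, and the enqueue stub stands in for whatever asynchronous hand-off your platform provides (an SQS queue, a Pub/Sub topic, and so on); the idea is to keep the synchronous handler small and defer heavyweight work to dedicated functions:

```python
import json

def validate(order):
    """Stand-in for real input validation."""
    return order

def enqueue(queue, payload):
    """Stand-in for publishing to a queue that triggers another function."""
    print(f"enqueued to {queue}: {json.dumps(payload)}")

# Before: one monolithic handler that blocks on every slow step
# (validation, payment, notification) within a single invocation.
def handle_order_monolith(event, context=None):
    order = validate(event["order"])
    # ... charge payment and send email synchronously: slow ...
    return {"status": "ok"}

# After: the synchronous handler stays small and fast; slow steps are
# handed off to smaller, dedicated functions via a queue.
def handle_order(event, context=None):
    order = validate(event["order"])
    enqueue("payments", order)
    enqueue("notifications", order)
    return {"status": "accepted"}

if __name__ == "__main__":
    print(handle_order({"order": {"id": 1}}))
```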
3. Observe Application Performance
Besides the serverless infrastructure, application code is also a contributing factor to high latency. Through observability practices for serverless applications, developers can identify performance bottlenecks and constraints. As a strategy, log timestamps while executing serverless functions; the stored logs help pinpoint the code responsible for the drop in performance.
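As a minimal sketch (the phases and sleep calls are placeholders for real work), timestamped log lines around each phase of a handler make it easy to see where the time goes:

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger(__name__)

def handler(event, context=None):
    t0 = time.perf_counter()
    log.info("handler start")

    time.sleep(0.1)  # placeholder for loading data or calling a service
    log.info("data phase done after %.1f ms", (time.perf_counter() - t0) * 1000)

    time.sleep(0.05)  # placeholder for transforming the response
    log.info("handler done after %.1f ms", (time.perf_counter() - t0) * 1000)
    return {"status": "ok"}

if __name__ == "__main__":
    handler({})
```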
4. Increase the Memory Allocation
Serverless functions with higher allocated memory can initialize new containers more quickly. As long as it does not inflate cloud costs, consider assigning more memory to functions for a faster startup. In AWS Lambda in particular, the CPU cycles dedicated to a function scale in direct proportion to its allocated memory.
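For AWS Lambda, the memory setting can be raised with a one-off boto3 call, as sketched below. The function name and memory size are illustrative, and the call assumes AWS credentials with permission to update the function; measure latency before and after, since the per-millisecond price rises with memory:

```python
import boto3

lambda_client = boto3.client("lambda")

# Raise the memory allocation; Lambda scales CPU proportionally to memory.
lambda_client.update_function_configuration(
    FunctionName="my-latency-sensitive-fn",  # hypothetical function name
    MemorySize=1024,                         # MB
)
```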
5. Maintain Shared Data Outside the Main Event Handling Functions
Most serverless functions feature a handler function that interfaces between the code and the infrastructure. If every invocation has to import third-party libraries or retrieve objects from external storage inside that handler, latency suffers.

To rectify this, maintain shared data and connections in the container's memory, outside the handler. The data then does not have to be fetched or imported on every request, enabling faster code execution. Though this strategy cannot prevent cold starts themselves, it is effective in reducing response times for subsequent (warm) requests.
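A minimal sketch of this pattern follows, assuming AWS Lambda with boto3; the bucket environment variable and object key are hypothetical. The S3 client is created once per container at module load, the configuration object is fetched lazily on the first invocation, and both are reused by every warm invocation afterwards:

```python
import os
import boto3

# Created once at container startup and reused across warm invocations.
s3 = boto3.client("s3")
_config_cache = None

def handler(event, context=None):
    global _config_cache
    if _config_cache is None:
        # Fetched only on the first invocation in this container.
        obj = s3.get_object(
            Bucket=os.environ["CONFIG_BUCKET"],  # hypothetical bucket
            Key="config.json",                   # hypothetical key
        )
        _config_cache = obj["Body"].read()
    return {"config_bytes": len(_config_cache)}
```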
6. Keep Serverless Functions Warm
With this strategy, organizations can effectively reduce (or even eliminate) cold starts. The idea is to keep serverless functions "warm" by invoking them at regular intervals. At the same time, it's important to fine-tune the number of invocations so that it does not inflate the cloud costs incurred under the "pay-as-you-use" model.
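A minimal sketch of the function side follows; the warmup flag and the scheduling interval are assumptions, not platform conventions. A scheduler (for example, a periodic EventBridge rule) sends an event like {"warmup": true}, and the handler returns early so each ping costs almost nothing:

```python
def handler(event, context=None):
    # Scheduled keep-warm pings (event shape is an assumption) short-circuit
    # here, keeping the container alive at minimal execution cost.
    if event.get("warmup"):
        return {"warmed": True}

    # ... normal request handling ...
    return {"status": "ok"}
```

Managed alternatives exist as well; AWS Lambda's provisioned concurrency, for instance, keeps a configured number of execution environments initialized without any scheduled pings.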
Conclusion
All in all, serverless frameworks are perhaps the best approach to developing high-performing applications on the cloud. However, it's critical for enterprises to be cognizant of the impact of problems like serverless cold starts, especially how they can increase latency.

At Wissen, we enable our customers to leverage the benefits of serverless frameworks and can help you implement the best strategies for making the most of serverless architecture. Our range of cloud services also includes developing an effective cloud strategy, driving successful cloud migration, and facilitating effective cloud management. Contact us for more information.