Cloud Architecture

Serverless Computing is Great, Except When it’s Not


Serverless computing is an innovative way of operating your application in the cloud. Essentially, it allows you to run an application without allocating server instances to the program. This creates a flexible and highly scalable solution, since server instance resourcing can effectively be ignored. If you need to run your program once per second, that's easy. If you need to run it one million times per second, that's just as easy. The underlying serverless computing infrastructure handles all of the necessary scaling and logistics.

AWS Lambda is the most widely used platform for serverless computing. It's a groundbreaking innovation that has taken the world of cloud-native computing by storm, and it powers many production workloads today.

Serverless computing in general, and AWS Lambda in particular, is the savior of scalable, cloud-native applications, or so the proponents would tell you. It’s so popular I’ve had people come up to me at trade shows and proudly announce, “We built our entire application using AWS Lambda! Isn’t that great?”

Well, no, actually. It’s not great.

Serverless computing, such as AWS Lambda, is great. But like any tool, it can be overused. There are places where serverless computing is wonderful and places where it shouldn’t be used. When used appropriately, serverless computing provides huge benefits to your application architecture. When used inappropriately, it can cause headaches, huge technical debt and poor performance.

Issue One: Complexity

Overusing serverless computing can dramatically over-complicate the architecture of your application. A common question that comes up when creating an application architecture based on services and microservices architecture patterns is this: How large should I make my services? Microservices architecture patterns tend to encourage an application to be split into many smaller services. The very name of the pattern—microservices architecture—encourages the creation of smaller services. The thinking is that smaller services are easier to design, construct and understand, and hence easier to support.

Yet as you create smaller services, you also need more of them to accomplish a given task. As a result, your overall application complexity goes up. You haven't removed complexity from your application. Instead, you've moved it from inside each service into the interconnections between the many services. The complexity added by those interconnections often exceeds the complexity saved by making the individual services smaller.

In short, if you make your services too small, your overall application complexity can actually increase. How do you architect your application so your services aren’t too large (and too complex) or too small (which makes the interaction between them too complex)? The goal is to create services that are just the right size. I call this the Goldilocks Calculation.

The problem with serverless computing platforms such as AWS Lambda is that they tend to push the size of your services smaller than is otherwise necessary. Overusing serverless computing can therefore dramatically increase the complexity of your overall application architecture.

So, while the individual services are simpler—simpler to build, operate and scale—the overall application complexity has increased.

Issue Two: Fluctuating Performance

Serverless computing platforms such as AWS Lambda depend on large numbers of customers sharing the service to level out resource needs efficiently and effectively. This means that, as usage varies across the system, so does the performance of individual instances.

Combined with issues involving the time it takes to cold-start instances and how best to use pre-warmed instances, the performance of an individual Lambda instance can vary wildly. It may take hundreds of milliseconds to start up, or it may execute extremely rapidly. The performance of an individual Lambda instance can vary by an order of magnitude or more from one call to another.
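To make the cold-start cost concrete, here is a minimal sketch of a Python Lambda function (assuming the standard Python Lambda runtime, where boto3 is available; the DynamoDB table name is hypothetical). Everything at module scope runs only when a new execution environment is created, which is where the extra hundreds of milliseconds on a cold invocation come from; warm invocations skip it entirely.

import json
import time

# Module-scope code runs once per execution environment, i.e. only on a
# cold start. Heavy imports and client setup here are the main source of
# cold-start latency; warm invocations reuse the already-initialized module.
import boto3  # available in the standard AWS Lambda Python runtime

_init_start = time.monotonic()
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-table")  # hypothetical table name
INIT_MS = (time.monotonic() - _init_start) * 1000


def lambda_handler(event, context):
    # Runs on every invocation, warm or cold.
    start = time.monotonic()
    # ... per-request work would go here ...
    handler_ms = (time.monotonic() - start) * 1000
    return {
        "statusCode": 200,
        "body": json.dumps({
            "init_ms": round(INIT_MS, 1),      # one-time setup cost paid on cold start
            "handler_ms": round(handler_ms, 1),
        }),
    }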

This performance variability means that AWS Lambda is best suited for cases where deterministic performance is not required. Examples of use cases that do not need deterministic performance include backend queue processing. Handling asynchronous backend processing is something that often requires high scale (something serverless computing is good at) but does not require consistent performance (which serverless computing is not good at).
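As a rough illustration of that backend-queue use case, here is a minimal sketch of a Python handler attached to an SQS event source mapping. The process_order business logic is hypothetical; the point is that throughput scales with queue depth, and a slow cold start on any one invocation is invisible to end users.

import json


def process_order(payload):
    # Placeholder for the actual asynchronous backend work.
    print(f"processing {payload}")


def lambda_handler(event, context):
    # Entry point for an SQS-triggered Lambda. The event source mapping
    # delivers a batch of queue messages, and Lambda scales the number of
    # concurrent invocations with the depth of the queue.
    failures = []
    for record in event.get("Records", []):
        try:
            payload = json.loads(record["body"])
            process_order(payload)
        except Exception:
            # With ReportBatchItemFailures enabled on the event source
            # mapping, only the failed messages are retried.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}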

However, it also means that AWS Lambda and similar services are not well suited for cases where deterministic performance is required, such as a frontend website API. When performance is inconsistent and non-deterministic, it's hard to rely on it to deliver a consistent frontend user experience.

It’s important to understand that serverless computing, such as AWS Lambda, is neither good nor bad. It’s good for some things and not so good for other things. Make sure your use of it leverages its strengths and minimizes its weaknesses. This implies that you don’t simply use it for everything, as many people want to do.

Be smart about your use of serverless computing and your application will reap the benefits. But overuse serverless computing and your application will become overly complex, accumulate technical debt and suffer from inconsistent and often unacceptable variations in performance.


