Serverless: The Next Major Shift In Cloud Computing

Seems like these days, every time someone talks about cloud technology, they’re talking about containers. I’m sure you’re going to hear a lot about them over the course of Cloud Expo.

And that’s great! Because containers are, without question, a nearly perfect fix for the challenges around DevOps.

Let me illustrate what a serverless solution might look like and how it’s different from traditional and containerized applications.

N-Tier Web
A typical n-tier Web application, made up of front-end, publicly exposed Web servers; a back-end, highly isolated data tier; and a middleware tier.

First, let’s consider a traditional n-tier application. You have a Web server farm on the front end, and a series of databases on the back end. In the middle is your middleware, with application servers that process the inbound web requests and work with your back-end data store.

In this model, each of your web servers and app servers is an actual bare-metal server or a virtual machine. But we can replicate this same pattern in the cloud, fairly easily, using containers.

Microservices

The next step is to use microservices to break apart these monoliths.

In this pattern, rather than having servers that render all our user interfaces and handle all of our business objects and logic within a single application, we would create several application programming interfaces, or APIs.

Microservice Architecture
A microservices-based approach to a modern web and mobile application; in this case, for a hotel.

Suppose I run a hotel chain. I need public-facing services to allow users to log in to my website, to reserve rooms, to sign up for promotional emails through a third-party mailing list provider, and a general means of delivering web and mobile app pages.

I could create APIs that would handle user data requests, shown here in white. Rather than having a traditional, server-rendered user interface, we could create single-page web applications and mobile apps that use this API to get the data they need and present it in a way that’s appropriate for each platform.

That removes entire development chains from my workload: My designers and front-end developers can make their interfaces look nice and perform efficiently, and my back-end team needs to focus only on creating a single means of providing them both with the same information!

We might further break our application up into additional APIs: One that handles user logins. Another that handles reservation requests. A third that leverages a partner API to sign people up for promotional emails. And so on.
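To give you a feel for what one of these units might look like, here is a minimal sketch of a hypothetical reservations microservice, written in Python with Flask. The routes and fields are purely illustrative, not any particular vendor’s API.

```python
# A minimal sketch of a hypothetical reservations microservice.
# An in-memory dict stands in for the real back-end data tier.
from flask import Flask, jsonify, request

app = Flask(__name__)
reservations = {}  # stand-in for the isolated data store

@app.route("/reservations", methods=["POST"])
def create_reservation():
    booking = request.get_json()  # e.g. {"room": 204, "nights": 2}
    reservation_id = len(reservations) + 1
    reservations[reservation_id] = booking
    return jsonify({"id": reservation_id, **booking}), 201

@app.route("/reservations/<int:reservation_id>", methods=["GET"])
def get_reservation(reservation_id):
    booking = reservations.get(reservation_id)
    if booking is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"id": reservation_id, **booking})

if __name__ == "__main__":
    app.run(port=5000)
```

The point isn’t the framework; it’s that the reservations concern lives behind its own small, independently deployable API.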

That would give me complete control over each of these units. I can manage them independently. If I need to completely overhaul the reservation system, I don’t need to worry about how that affects the user interface API or email API or authentication API, because each of them is separate from the reservations API.

This also simplifies development because I can focus on only those features that need improvement, without having to test and QA the entire functionality of my application.

And I can reuse these parts, too. I could develop an entirely different solution that needs authentication, and use this authentication API microservice to provide it. Or I could build automated systems that seek out unreserved, soon-to-expire room availability in my reservation API and sell the remaining inventory at a discount. And so on.

Each of these smaller components is a microservice. It would be wildly impractical, not to mention expensive, for each of these services to live on their own bare-metal servers, or even their own virtual machines. But because I can size containers to the work they need to do, I can deliver this kind of architecture via containers, reduce my infrastructure footprint and lower costs significantly.

“You simply pack your code and its dependencies into a container that can then run anywhere — and because they are usually pretty small, you can pack lots of containers onto a single computer.” — TechCrunch

Containers: Sensible For Business

What’s not to love about containers from the business viewpoint?

Containers are great because they let us package up everything — operating system, code, services, all the stuff our application needs to work — and run it just about anywhere. This not only streamlines the DevOps pipeline significantly, saving a great deal of time and staffing cost; it also gives us options on how we deploy solutions, many of which no longer tie us to a specific vendor.

Plus, as TechCrunch notes here, we can run several containers on a single host machine, which means we’re wasting far fewer resources and, in turn, money!

So I can use containers to power these microservices. That is, my user interface API could be running in Container A, my reservations API in Container B, my email newsletter API in Container C, and my authentication API in Container D.

As you’ll learn throughout this conference, containers make all of this highly portable and, if properly built, highly scalable and even fairly resilient to failure.

Containers: Sensible For DevOps

And what’s not to love about container technology from the DevOps perspective?

  • You can deploy them quickly.
  • From some base configurations, you can install the software and services you need, configure the environment to support your application, and script the entire process.
  • It doesn’t take a team of engineers entire days to get each new environment set up; you script it once and it’s done.
  • That makes them relatively inexpensive to operate.

Provided your application is properly engineered to scale to demand, scaling with containers is as simple as starting up another instance of your application image. You can even automate this process, so your applications respond to real-time demand and scale to meet it.
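As a sketch of that idea, here is what demand-based scale-out can look like using the Docker SDK for Python. The image name and instance count are hypothetical, and in a real deployment an orchestrator’s autoscaler would do this for you.

```python
# A sketch of scaling out a containerized microservice on demand.
# Starting another instance of the application image is all it takes.
import docker

client = docker.from_env()
IMAGE = "myhotel/reservations-api:latest"  # hypothetical image name

def scale_to(desired: int):
    # Count the containers already running from this image...
    running = client.containers.list(filters={"ancestor": IMAGE})
    # ...and start more until we reach the desired instance count.
    for _ in range(desired - len(running)):
        client.containers.run(IMAGE, detach=True)

scale_to(4)  # e.g., respond to a demand spike with four instances
```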

But more importantly, you can automate the build and deployment processes. Containers work great within automated build processes and application lifecycle management.

You can version containers easily, making reversion simple and again, simplifying your build process. And all of this is easily orchestrated using open-source or low-cost tools and processes.

Containers have their limits. Photo via Hans on Pixabay, in the public domain.

The Downside To Containers

But there are downsides to containers, not the least of them being that containerization is still a young technology, so there’s a lot of change. I don’t think you need to worry about Docker or Mesosphere going away anytime soon, but they could change significantly. And there’s anything but consensus on the best way to manage container images within a deployment pipeline.

By their nature, containers need to run with elevated permissions on their host machines. This makes a successful exploit of a container more dangerous than, for example, the compromise of a single virtual machine on a host server.

It’s also easy to wind up with more containers than you need, and orphaned containers that once performed some workload, but are no longer needed and were never cleaned up.

There are some studies out there that estimate a quarter to a third of all cloud-based virtual machines are zombies — and it’s not difficult to see containers having the same problem. If anything, because containers are so quick and easy to create, and are designed in large part to handle disposable workloads, the problem could be even greater.

Most containers require external assemblies — that is, “helper” software that enables their primary applications to run. It can be difficult to ensure the repositories where these assemblies live are online when a container is created from an image, and we can run into circumstances where a container’s dependency on these assemblies breaks.

Finally, making secure and efficient network connections with certain container technologies, such as Docker, requires a practiced hand.

Serverless Is The Future

So what if I told you there is a third way to the cloud — one that eliminates almost all of the problems with traditional infrastructure and most of the problems with containers, and does so for a total cost of ownership that represents a fraction of the costs of either?

That’s the promise of serverless technology.

Let’s be clear: Serverless technology doesn’t mean there aren’t any servers running your code. Of course there are!

Rather, the promise of serverless is that you no longer need to manage servers. Instead, you focus on your code alone — and the provider handles the rest.

Like all buzzwords, serverless computing has some shifting definitions, depending on whom you’re speaking to. But the central concepts are the same among all the vendors (a minimal code sketch follows this list):

  • A serverless computing offering provides a generalized and anonymous operating system on which your code will run. We’ll get into this more in a moment.
  • These instances are completely managed by the cloud provider. They do all the patching, tuning, service installations, whatever. You just write code that runs on this service, and the cloud provider makes sure that your code will run in that environment.
  • Each of these anonymous, generalized environments is provisioned only when it’s needed. And as soon as your code is no longer needed, that environment is deprovisioned.
  • Finally, you pay only for your code’s actual use: The number of times your code has been executed, and the amount of computing resources it required.
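To make those concepts concrete, here is about the smallest function you can write, sketched with AWS Lambda’s Python handler signature; the event field is hypothetical.

```python
# A minimal function-as-a-service handler, using AWS Lambda's
# Python signature. The provider supplies 'event' (the input that
# triggered this execution) and 'context' (details about the
# managed runtime); everything around the handler is managed for you.
def handler(event, context):
    name = event.get("name", "world")  # hypothetical input field
    return {"message": f"Hello, {name}!"}
```

Note what’s missing: no web server, no OS configuration, no process management. The provider supplies all of that around your code.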

Microservices Aren’t Monoliths

But if I have all these relatively simple tasks, or microservices, that I mix and match into solutions, what sense does it make to continue to deliver them as though they are entire server applications?

Sure, a container gives us flexibility in how we create and manage our server-based solutions — but it still treats the architecture as though a single instance handles the entire workflow.

That’s where serverless technology comes in.

Serverless is a new approach to workflows. It says, “There will be some event that invokes my code. And when that event happens, I will likely retrieve input from something, and likely create some sort of output. And that’s all I need to concern myself with.”

Because of this, a serverless application can scale quickly to demand. And serverless is, because of its design, highly available. Let me delve into this with an example.

Typical workflow for a serverless function listening on an HTTP endpoint.

Here’s a typical example of how a serverless function works. In serverless technology, the on-demand running of code is called functions as a service, because each of these on-demand executions serves a specific purpose, or function.

In this example, we have created a function that handles a web request. First, when the inbound request is received, the cloud provider looks to see whether there is an available instance of our function already running. If so, it hands the request off to that instance; if not, it creates one.

The function code then parses the web request, processes it somehow, and returns a response to the requesting client.
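Here is a sketch of that flow in code, using the event and response shapes an AWS Lambda function sees behind API Gateway’s proxy integration; the “processing” step is just a hypothetical echo.

```python
import json

# A sketch of an HTTP-triggered serverless function.
def handler(event, context):
    # Parse the inbound web request...
    body = json.loads(event.get("body") or "{}")
    # ...process it somehow (a hypothetical echo, in this sketch)...
    result = {"path": event.get("path"), "received": body}
    # ...and return a response to the requesting client.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }
```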

Typically, the cloud provider will then leave that instance running for a few minutes, to speed up the processing of the next request by eliminating the need to spawn another instance. But after five minutes or so, if there isn’t another request for the same function, the cloud provider will deprovision the instance.

One For All, All For One

I mentioned earlier that serverless functions are hosted on anonymous, generalized operating system instances.

That’s the key to how they work. The cloud provider customizes a base operating system — such as Linux or Windows — with environment and service settings designed to work with a specific set of programming languages, such as Node.js, Java, Python, .NET Core, or the like.

The cloud provider can create the instances very quickly because they are all exactly the same. There’s nothing special about them; each works exactly the same as any other instance.

When you deploy your code, it’s saved to the cloud provider’s storage service.

When your code needs to run, the provider retrieves the code, starts one of these instances, drops your code onto it, and then executes that code. And as we noted before, once the code is done executing, the provider usually leaves the instance running for a little bit, to handle any subsequent requests, but then deprovisions the instance once demand falls off.

Serverless costs less than most pricing models, including containers. Photo via Joshua_Willson on Pixabay, in the public domain.

A Simpler, Cheaper Pricing Model

This methodology leads to a different pricing model than that used for virtual machines or containers.

In the case of VMs, you generally pay an hourly fee based on the capacity of the VM in terms of CPU cores and memory. You’ll often have software licensing fees bundled in there, too. And you have to pay to store your data and OS disks.

The same is generally true of container pricing: You pay for the VMs that host the containers. The difference is, where traditional code on a VM might leave a lot of unused resources, you can pack several containers onto that same VM, to handle multiple workloads for the same price.

In the case of serverless functions, you pay for the number of times your code runs, and the amount of computing resources each execution uses.

Which brings us to the payoff: What’s in it for you?

In short, lower actual infrastructure costs, a drastically simplified deployment pipeline, and a total cost of ownership that’s a fraction of your current costs.

Let’s look at the direct hosting costs for functions versus virtual machines.

Cost of handling 500,000 executions per month, at 4 GB-s (gigabyte-seconds) per execution

Here, I am going to assume a workload that includes about a half-million executions every month. That’s roughly one request every five seconds. Each execution takes about 4 gigabyte-seconds of compute: 4 gigabytes of memory, in use for about a second, to get its work done.

For the virtual machines, I want headroom for concurrent executions at 4 gigabytes apiece, so I need to provision at least eight gigabytes of memory on each. In Azure, the cheapest option that meets that need is a D3v2 VM, which costs about one hundred and four dollars per month to operate, if I am running Linux. For AWS, the least expensive EC2 instance is a t2.large, which costs about seventy dollars per month.

But I can cut my costs dramatically if I use serverless functions. In the case of Azure Functions, I can cut my actual infrastructure costs to about a quarter. And using Lambda, I can better than halve my costs versus EC2.
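To show where numbers like these come from, here is a back-of-the-envelope sketch of the Lambda side. It assumes Lambda’s list prices at the time of writing: twenty cents per million requests and $0.00001667 per gigabyte-second, with a monthly free grant of one million requests and 400,000 gigabyte-seconds. (Azure Functions pricing is structured almost identically.)

```python
# Back-of-the-envelope Lambda cost for 500,000 executions per month
# at 4 GB-s apiece, under the assumed list prices and free grants.
requests = 500_000
gb_seconds_each = 4.0

request_cost = max(requests - 1_000_000, 0) / 1_000_000 * 0.20
compute_cost = max(requests * gb_seconds_each - 400_000, 0) * 0.00001667

print(f"${request_cost + compute_cost:.2f} per month")  # ≈ $26.67
```

About twenty-seven dollars, versus roughly seventy dollars for the t2.large: better than half off.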

Cost of handling 2 million executions per month, at 4 GB-s (gigabyte-seconds) per execution.

If I increase the number of executions to 2 million, I get even better savings. My VM costs increase significantly to meet the 32 GB memory requirement.

My cost for serverless also increases, narrowing the percentage savings I realize over VMs; but the actual dollar savings are even greater.

In the case of Azure, I save nearly one hundred dollars per month by using functions. And in the case of AWS, I save almost one hundred and fifty dollars per month.

Cost of handling 20,000 executions per month, at 512 MB-s (megabyte-seconds) per execution.

The savings are also pronounced if I have a smaller workload. Let’s change the assumption to 20,000 requests per month, or about one request every two minutes, each of which can be processed with a half-gigabyte of RAM in about a second.

In that case, I can use an Azure Standard A1 v2 VM for about thirty-two dollars per month, or a t2.small EC2 instance in AWS for about seventeen dollars per month.

My cost for Azure functions for that workload? Nothing. Zip. Zilch. Free. And the same is true for Lambda.

‘Long Tail’ Pricing

You’re probably asking yourself, “How can AWS and Azure make this service free?”

Again, it goes back to the fact that serverless instances are all the same. Because the underlying images are exactly alike, the cloud providers can provision them easily, and each instance actually becomes cheaper to provide than the previous one. So there’s huge scale at play.

Much as it is with Facebook, in that each new user costs virtually nothing to create and thus is actually profitable just by virtue of creating the account, the same is true, ultimately, of serverless instances.

Each workload costs virtually nothing to deploy, so giving you a large amount of free compute time and instances is profitable, because getting even a minority of users to exceed that free service threshold, just periodically, pays off handsomely.

And users of serverless technology almost always need additional services: Database as a Service. Message queues. Memory caches. API gateways and the like. It’s the modern equivalent of giving away the razor handle and selling the blades at a markup.

It’s a long tail, but one where the profit curve is very steep. You get a lot of free executions and compute time because it only takes going over those limits a little for the cloud provider to realize a profit.

So, we get significant price savings when we go serverless.

Regardless of workloads, big or small, it costs less to host the same amount of work on a serverless function versus a virtual machine.

And better yet, with serverless, there are no zombies.

Because the service provider handles provisioning and deprovisioning automatically, and you only pay for the number of times your code runs and how much compute resource it uses when running, you never pay for server instances that are doing nothing but eating electricity.

Serverless doesn’t mean NoOps, but it does mean a significantly leaner DevOps team. Photo via alehildago on Pixabay, in the public domain.

Not Quite NoOps, But Close

Which brings us to the practical benefits of serverless computing: If there are no servers to manage, isn’t there less work on your end for building and deploying your solutions?

There sure is. It’s not really NoOps, but your lead times will be drastically reduced, and the overall work-hours needed to get your solution changes pushed to production will be starkly lower.

So what do I mean by NoOps?

Again, it’s a buzzword, so definitions vary. But I think most people would agree that at the core of NoOps is the idea that we can use automation, service abstraction and vendor-provided services to reduce or eliminate the tasks we traditionally need to perform in DevOps.

For example, you may have an agile process today, in which work is assigned to sprints. Every week or two weeks or whatever, your team implements its changes, which are then built and tested. If they test OK, they get deployed.

This involves the work of QA engineers, build engineers, systems engineers, the development team, and so on. And it tends to generate a lot of anxiety, especially when builds fail.

In NoOps, we incorporate a much faster pace of development. Using continuous integration and continuous deployment, we deploy changes immediately. Automated builds and testing are used to ensure each feature or fix does not break the solution, and if it does, we quickly adjust the change, repeating the automated build-and-test process until the feature or fix works. It’s then pushed immediately to staging and production.

This fast pace works because our architecture, through microservices, is intentionally distributed.

By treating our application as a combination of several small tasks, we can safely work on each of those small tasks without significant concern that a change in one microservice will break a different microservice. That lowers our downtime.

Also, because functions are automatically provisioned and deprovisioned to meet demand, applications based on them tend to scale quickly and be highly available.

That is, we don’t really need to monitor a microservice to see if it’s online or meeting a demand spike. If the cloud provider’s underlying serverless service is running properly, it is taking care of demand spikes and sick instances.

So by definition, there cannot be downtime in a serverless application, unless the cloud provider is experiencing a service outage. Even that we can protect against by deploying the exact same code base to a different region, and using DNS-based routing to ensure failover protection.
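As a sketch of what that protection might look like on the DNS side, here is Route 53’s failover routing policy driven from boto3. The zone ID, domain and addresses are hypothetical placeholders.

```python
import boto3

# A sketch of DNS-based regional failover using Route 53's
# failover routing policy. Traffic goes to the secondary region
# only when the primary's health check fails.
route53 = boto3.client("route53")

def upsert_failover_record(role, ip, health_check_id=None):
    record = {
        "Name": "api.example.com.",
        "Type": "A",
        "SetIdentifier": f"{role.lower()}-region",
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:  # the primary should carry a health check
        record["HealthCheckId"] = health_check_id
    route53.change_resource_record_sets(
        HostedZoneId="ZHYPOTHETICAL",  # hypothetical zone ID
        ChangeBatch={"Changes": [
            {"Action": "UPSERT", "ResourceRecordSet": record}
        ]},
    )

upsert_failover_record("PRIMARY", "203.0.113.10", "hc-hypothetical")
upsert_failover_record("SECONDARY", "198.51.100.20")
```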

The Benefits Of Serverless: A Recap

So to recap the benefits of serverless functions:

  • Actual infrastructure costs are lower.
  • They work through a microservices architecture, which is intentionally designed to separate business logic concerns from one another. This makes your software development processes simpler, because you can manage your solution’s functionality as independent units.
  • Because there are no servers to manage, we can focus solely on code, reducing the staffing and lead times for each code change.
  • In fact, this allows us to adopt a very fast development process focused on automation, abstraction, continuous integration and continuous delivery.
  • And because the cloud provider is handling when instances are provisioned and deprovisioned, we effectively have high availability and disaster recovery built into our serverless-based solution.

True, the underlying service could encounter problems, but we can use traditional cloud-based business continuity techniques to manage that circumstance.

Sold On Serverless

It’s not surprising at all that, given the benefits to cloud provider and customer alike, the big cloud providers, several industry leaders and many startups are all pushing serverless tech.

“With AWS Lambda, we eliminate the need to worry about operations. We just write code, deploy it, and it scales infinitely; no one really has to deal with infrastructure management. The size of our team is half of what is normally needed to build and operate a site of this scale.” — Tyler Love, CTO, Bustle

Here’s a quote from Tyler Love, the CTO of Bustle, a news and entertainment website. Bustle rebuilt its monolithic websites to be powered by AWS Lambda, using a microservices architecture similar to the one I described earlier.

Note the key points: No longer does his team focus on operations. They focus on code. It goes into production and it just plain works, regardless of demand. That’s led to staffing savings and productivity far beyond what similarly sized teams are managing.

“In 5 years, every modern business will have a substantial portion of their systems running in the cloud. But that’s only the first step. The next step comes when you free your developers from the tedious work of configuring and deploying even virtual cloud-based servers.” — Greg DeMichillie, Head of Developer Platform and Infrastructure, Adobe

And here’s Greg DeMichillie, Adobe’s head of developer platform and infrastructure, who notes that moving to the cloud is inevitable. But it’s also only the beginning.

Like him, I believe the days of having to configure and deploy even virtual machines and containers are numbered, because they inhibit productivity and progress. And if there’s anything technology does well on the whole, it’s eliminating anything that slows or prevents innovation and speed.

What If Everyone Could Code?

In fact, I see serverless computing as a stepping stone, itself: One that promises to make everyone a programmer. Because once we’ve done away with the arcane work of configuring and deploying servers, we can focus on the automation of code itself.

Microservices architecture is, in itself, just the chaining together of several small tasks into a workflow. We can combine these tasks in new ways, based on new inputs, to create all-new solutions to problems.

Already, with artificial intelligence and intelligent devices such as Siri, Google search and Alexa, we are reaching a point where machines can understand relatively unstructured requests and produce meaningful responses. Why can’t these same kinds of services be used to create code or, at least, intelligent workflows?

Why shouldn’t creative people who have visions of producing new and important solutions be able to create these workflows themselves, using the power of modern computing?

Microsoft’s Three Attributes of Modern Solutions: Telemetry from “intelligent edge” devices is consolidated and interpreted in an “intelligent cloud,” using artificial intelligence, with serverless technology handling computation in the cloud and at the edge.

I wish I could take credit for what I just said. But it’s largely borrowed from Satya Nadella’s keynote presentation at Microsoft Build, the annual developer conference held in May.

Nadella unveiled a vision for the future of computing that consists of two realms: The “intelligent edge” and the “intelligent cloud.”

The intelligent edge is all the devices we have that are connected to the Internet and, usually, are powerful enough to think on their own, too. Not only our computers and smart phones and tablets. But our televisions, our appliances, our cars, our medical monitors, our toys, even our tools.

From all the input of all these devices, there is a unifying force: The intelligent cloud.

As each of our devices speaks to the cloud about us and our lives, the cloud uses artificial intelligence and big data to derive insights and actions, which it then feeds back to each of our devices on the edge, instructing them on how, and why, to act.

In Microsoft’s vision, this cloud processing is conducted on serverless platforms, which are infinitely scalable.

“The combination of multi-device, AI everywhere and serverless computing is driving this new era of intelligent cloud and intelligent edge.” — Microsoft

And that’s a powerful thought, from a pretty powerful thought leader.

Tearing down your monoliths and rebuilding them as microservices might not be worth it. Photo via CyberComputers on Pixabay, in the public domain.

Not For Everything

So, now that I’ve told you serverless is the inevitable robot overlord and your resistance is futile … let’s talk about what it can’t do, and why virtual machines, containers and the like aren’t going away anytime soon.

Obviously, changing your monolith to a microservices architecture is no small thing.

There are huge up-front costs. And chances are you have partnerships or other obligations and requirements, such as data privacy and sovereignty, that limit or prevent your ability to simply abandon the way you’ve been building software to date.

Even if you have good basic n-tier architecture, the benefits you’d get simply from wrapping it in containers could vastly outweigh what you’d reap from rebuilding your application into smaller, portable units.

Also, not every workload is appropriate for microservices. If your task doesn’t really need to scale — if it has a predictable workload — there’s a strong argument against using microservices.

That’s especially true if your solution is heavily dependent on other services, or you need highly specialized control over your runtime environment. Don’t try to work around the limitations of serverless runtime environments or compensate for a lot of external requests and responses; just build your solution as a package and deploy it inside a container.

The same goes if your software needs massive computing resources running all the time to accomplish its work. Sure, you might save a little money in terms of hosting, but the reliability of constantly provisioned VMs or containers could outweigh that saving, given the code limitations inherent to serverless functions.

The Limitations Of Serverless

Which is a good segue to talk about the limitations of serverless solutions.

An obvious problem is that because instances are provisioned only when they are needed, there can be some lag dealing with cold starts on rarely used serverless code. You can fix this to some degree by running probes to keep that code warm, but that’s kind of a hack. Maybe it’s best to simply put infrequently accessed code into an always-warm container, especially if performance is critical on those rare occasions when the code is called.
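If you do go the keep-warm route, the handler side of that hack is simple enough; here is a sketch, with a hypothetical probe event shape. Real schedulers, such as CloudWatch Events or Azure timer triggers, each define their own payloads.

```python
# A sketch of the "keep it warm" hack: a scheduled probe pings the
# function every few minutes so an instance stays provisioned.
def handler(event, context):
    if event.get("source") == "warmup-probe":  # hypothetical shape
        return {"status": "warm"}  # short-circuit: do no real work
    return do_real_work(event)

def do_real_work(event):
    # ...the function's actual job goes here...
    return {"status": "done"}
```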

Additionally, you need to prepare your solution to deal with lag and dropped connections between microservices. For example, if you have an authentication API that runs as a microservice for all your solutions, even a 400-microsecond lag in its responses can be amplified, call after call, into a crippling systemwide slowdown.
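Defending against that generally means tight timeouts and bounded retries on every inter-service call. Here is a sketch using Python’s requests library, with a hypothetical internal auth endpoint.

```python
import requests
from urllib3.util.retry import Retry
from requests.adapters import HTTPAdapter

# A sketch of a defensive inter-service call: a tight timeout and
# bounded retries keep one slow microservice from stalling the rest.
session = requests.Session()
retries = Retry(total=3, backoff_factor=0.2,
                status_forcelist=[502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))

def validate_token(token):
    resp = session.get(
        "https://auth.internal/validate",  # hypothetical auth API
        headers={"Authorization": f"Bearer {token}"},
        timeout=0.5,  # fail fast instead of letting lag cascade
    )
    resp.raise_for_status()
    return resp.json()
```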

While containers might be an immature technology, serverless is even more so. Azure Functions aren’t yet a year old in general availability, Google’s serverless solution is still in beta, and IBM’s effort to create an open serverless standard is still in its infancy.

Which leads to another concern: Whatever serverless approach you take, it will be somewhat wedded to a vendor at this time. You can pretty much run a container anywhere, but how you get serverless code to run will be somewhat dependent on the languages supported by your cloud provider and the means they use to create serverless instances, as well as supporting services.

You are limited to the runtimes and versions your cloud provider supports, and it can be difficult to bring in certain libraries or assemblies as a result, which might further complicate programming.

Finally, the serverless programming model — of an event that triggers the receipt of some input and the creation of some output — might not be the right solution for every need.

To Recap

So, to recap: Serverless is the next wave in cloud computing.

It offers you huge time and cost savings, and an exceptionally low total cost of ownership, even over containers. You pay only for the compute you use, and there are no more zombie services sucking up your profits.

Cloud providers also receive significant benefits by provisioning what are essentially identical servers to everyone who needs them, which promises to keep costs low and improve performance.

Because of the automated, generic aspects of serverless technology, high availability and disaster recovery are built in. And you can use traditional cloud-based business continuity techniques to keep running in the event of regional service outages.

The entire microservice concept is built around fast deployment, continuous integration, continuous delivery and separated concerns, making serverless an ideal approach to improved service delivery and fast adaptation.

But serverless isn’t for every workload. It may well be your monolith is better off in a container, or you’re better off building a solution in containers. But five years hence, expect serverless to be as hot as containers are now.
