The term “serverless” is used ambiguously, so let’s start off with a specific definition. Serverless apps are applications that are not written as software servers. “Server-less” code is code that responds to events (such as HTTP requests), but does not run as a daemon process listening on a socket. The networking part of serverless architectures is relegated to the infrastructure. One key feature of most serverless offerings is that they run in multi-tenant environments. That is, I can deploy my code to someone else’s infrastructure, where they may (safely) run that code in the same service in which they run other people’s code.
The most prominent example of first-wave serverless was AWS Lambda. Now a new wave of serverless is emerging, with some key differentiators from the first wave. A few companies have broken away from the pack, showing off these new features (spoiler: it’s Fermyon, Vercel, and Deno).
Let’s start from the beginning (a very good place to start), because back before AWS, there were a couple of technologies that set the course for serverless.
Before Serverless: CGI and PHP
In the early days of the web, web servers could deliver only static assets: HTML, images, and files. But developers wanted a way to write bits of code that could run on-demand and produce HTML (or other web file types) as output.
The Common Gateway Interface (CGI) protocol defined a simple standard for how a web server could run such programs. It also defined how an HTTP request could be passed to a program, and how the program could then return a response. The key innovation was that a single program could be run in many different web servers (Apache, IIS, and Jigsaw were a few examples back then).
CGI programs were not servers. They didn’t start a socket listener. Instead, the web server listened for requests, and then invoked the CGI program directly. There was no security sandbox, and CGI was definitely not safe for multi-tenancy. But the programming model was simple: programs received a request and returned a response.
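To make the CGI contract concrete, here is a minimal sketch in Rust (purely illustrative; early CGI programs were usually written in C, shell, or Perl). The web server passes the request through environment variables and reads the response from the program’s standard output:

```rust
use std::env;

// Build an HTTP response from the CGI environment variables the
// web server sets before invoking the program.
fn respond(method: &str, query: &str) -> String {
    // A CGI program writes headers, a blank line, then the body.
    format!(
        "Content-Type: text/html\n\n<html><body><p>You sent a {} request with query '{}'</p></body></html>",
        method, query
    )
}

fn main() {
    // No socket listener, no daemon: the server invokes this program
    // once per request and tears it down afterward.
    let method = env::var("REQUEST_METHOD").unwrap_or_else(|_| "GET".into());
    let query = env::var("QUERY_STRING").unwrap_or_default();
    print!("{}", respond(&method, &query));
}
```

The whole lifecycle is one invocation: receive a request, return a response, exit. That is the model serverless functions would later revive.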
Perl was arguably the most popular language for writing CGI programs initially. What made it great was its combination of expressive syntax, great web libraries, and neat language features (by which I mean built-in regular expressions — a novelty at the time).
But a challenger emerged. PHP started as a template language: One wrote HTML files and embedded executable snippets. The language grew and matured. More libraries were added. And soon, major applications were being written in the language. While PHP has gotten a bad reputation, shunned by “purists,” it has been in the top 10 most popular languages for decades.
While PHP’s runtime implementation changed from a CGI shim to an Apache module and on from there, its core programming model is a reified version of CGI’s programming model: A program is invoked on request, and returns a response. Only this time, the code is the response (and the request is presented as a nicely structured object).
Like CGI, PHP was never multi-tenant safe. But the model is a remarkable precursor to serverless functions.
Historical Interlude: The Rise of Cloud
I used to pay for a web hosting provider to run a chroot’ed virtual environment with an instance of Apache where I could dump my PHP files. The very thought of this setup now makes me cringe. It was not a particularly safe way to run a site. And, yes, like many others, I woke up to find my site hacked via another user’s account, or my files deleted by a glitch in a backup script, and so on.
Then along came two technologies that changed things: AWS’s Elastic Compute Cloud (EC2) and Heroku’s PaaS.
EC2 let me run an entire server on Amazon’s hardware. That was neat, provided I wanted to spend as much time administering my system as I did writing my code.
Heroku made it possible for me to run just a server on someone else’s hardware, and in a truly multi-tenant way.
Both were cool technologies. But both introduced complexity into the developer’s world (at varying degrees). Heroku made developer self-service a thing, and that was excellent. But the code I had to write to run there was definitely more complex than the code I wrote back in the CGI and PHP days. And, yes, I ran PHP on both of these platforms, though in both cases I had to manage the web server as well as the code.
In large part, the cloud took a turn toward devops and platform engineering. Infrastructure as a service, database as a service, EVERYTHING as a service… and everything needing wiring up and monitoring and administration and… work. Kubernetes promised to make all of this easier… and then ended up making it harder. But a tiny experiment in making use of spare compute capacity turned into a revolution for AWS.
As the story goes, Amazon had spare compute capacity that they wanted to put to productive use. So they created a product that ran small bits of code for a short period of time. Users could upload these tiny code bits. Events, such as an inbound HTTP request, could then trigger the small bit of code, which would be started, run to completion, and then shut down. This was AWS Lambda, described more generically as serverless functions.
This spawned a wave of copycat technologies: Azure Functions, IBM Cloud Functions, Google Cloud Functions, and so on. Even edge companies like Cloudflare got on board (though in a more limited fashion).
This first generation of serverless functions was built on one of two existing technologies:
- Virtual machines, where each virtual machine ran exactly one serverless function
- Containers, where each container ran exactly one serverless function
The cloud runtime, be it a VM or a container, provided a secure single-use wrapper for a function. But because neither compute type is designed to start quickly, an elaborate dance of pre-warming compute capacity and loading a workload just in time made this first generation of serverless slow and inefficient. At the time of this writing, Lambda functions incur cold start times of 200 milliseconds or more.
Furthermore, the developer experience of this first generation of serverless functions was less than ideal. With no common packaging format, no standard set of APIs, and no strong day 2 operational story, developers had to write platform-specific code packaged in bespoke ways. And often the debugging and troubleshooting story was convoluted and frustrating.
The attributes of each of these systems meant that developers were locked into one particular platform as they wrote their serverless functions.
Finally, because the infrastructure was costly to operate (with pre-warmed compute power sitting around), the more the serverless functions were executed, the pricier the platform became.
As we at Fermyon looked into the situation, we were convinced that a new technology, WebAssembly, could vastly improve these aspects of serverless applications.
Defining Features of Next Wave Serverless
To summarize the four issues with serverless v1 listed in the previous section:
- Serverless functions are slow
- The developer experience for serverless functions is sub-par
- Serverless functions come with vendor lock-in
- Cost eventually gets in the way
As we looked at this list back when Fermyon started, we wondered if there was a single technology that could make progress in all of these areas. And WebAssembly seemed like the right tool.
WebAssembly binaries can start up far faster than containers and VMs. Orders upon orders of magnitude faster. And with some optimizations, it is possible to cold start serverless functions in one or two milliseconds.
Cold start performance is only one reason WebAssembly is a better fit for the next generation of serverless. Its isolation model and security sandbox mean we can safely run multiple tenants (each in its own sandbox) in the same WebAssembly supervisor. And that, in turn, means we can have thousands of serverless functions on a single virtual machine (and cluster VMs to run tens of thousands without breaking a sweat). This translates to reduced cost since we don’t need queues of warmed VMs or containers sitting idly by waiting for requests.
That takes care of both the performance and cost stories above. What about developer experience?
Since WebAssembly is just a compile target, many languages already have support for building this brand of serverless app, and the list of supported languages continues to grow rapidly. This means the developer is already in their comfort zone, using their regular development tools. There was still plenty of room for improvement, though. Spin uses the OCI packaging format (used by the container ecosystem), and applications deployed to Fermyon Cloud can access the dashboard for visibility into each execution of each serverless function.
Spin also allows multiple functions to be grouped together into a serverless app. And this is a critical feature of a new wave of serverless functions: A group of functions is easily deployed in concert, ameliorating the difficult procedural roll-outs one must use with Lambda.
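As an illustration of that grouping, a Spin application manifest declares several functions (components) and their routes in one file, so the whole app deploys as a unit. This is a hypothetical sketch; component names and paths are invented, and manifest fields vary by Spin version:

```toml
spin_manifest_version = 2

[application]
name = "example-app"
version = "0.1.0"

# Two serverless functions, deployed together as one app.
[[trigger.http]]
route = "/api/..."
component = "api"

[[trigger.http]]
route = "/..."
component = "web"

[component.api]
source = "target/wasm32-wasi/release/api.wasm"

[component.web]
source = "web.wasm"
```

A single deploy command rolls out both functions in concert, rather than coordinating separate per-function deployments.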
The future of debugging and analyzing is even more exciting. Since all apps, regardless of source language, are represented as WebAssembly bytecode, a new generation of analysis tools is emerging. With these, developers do not need to instrument their code manually (at development time) in order to investigate the application when it is running. We imagine a future in which a user can log into their dashboard and say, “This function is misbehaving. I would like to enable function tracing right now.” The serverless function runtime can then immediately begin tracing without any sort of recompile on the user’s side.
Finally, there is the matter of vendor lock-in. The first-generation serverless function environments were each designed to run in a specific cloud, use specific APIs, and have access to specific services. Effectively, a Lambda function cannot simply be run in Azure. Nor can either be easily run on-prem, on an IoT device, or even in a Kubernetes cluster. The developer must pick (or at least know) their production environment from day 1, and must write code specific to that environment.
Our view is that this is contrary to the spirit of serverless, in which a developer should be required to know as little as possible about the deployment environment. Deployment is an operations issue. And an operations team should be free to choose a deployment environment that suits their needs without requiring the developer to rewrite an app.
Spin is designed to run in a wide variety of environments. Whether Kubernetes or Fermyon Cloud or on-prem, it should be possible to run the same app. WebAssembly is, of course, a big part of this puzzle. The format itself is OS-neutral and architecture-neutral. The same binary can run on Windows with an Intel processor or macOS with an Arm processor… and various other permutations. But another piece of the puzzle is providing standard interfaces to frequently required services.
That brings us to the next point.
Data Services are Part of Next Generation Serverless
Heroku was the paradigmatic case of a developer self-service platform. Bring your own language and deploy it to Heroku’s platform in just a few commands. But if you need a database, things get more complicated.
The story is the same for first-generation serverless. When it came time to add a database, a pub/sub, key/value storage, or object storage… it was up to the developer to do all the heavy lifting. Local development required setting up local data services. One’s laptop was suddenly home to a Postgres server, a Redis instance, and so on. And the developer had to inject connection info, manage local accounts, and do all of this without leaking credentials into Git.
As it came time to deploy into staging, and then production, the same operational dance had to be done on the cloud side. Account management. Connection management. Configuration management. And with one more addition: feature parity between the development environment (of every developer) and staging and production.
That’s a lot of ops.
What if the next generation of serverless eliminates all of that?
When we introduced key/value storage earlier this year, our goal was to provide such an experience. All the developer needs to do to get key/value storage is say so in spin.toml. Spin itself automatically creates a local key/value store for local use, and the developer doesn’t need to do anything at all to manage it. No process. No connection string. No accounts or usernames or passwords or permissions… it’s just there.
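The declaration amounts to a single line in the component’s section of spin.toml. A hedged sketch (the component name is invented; the exact field syntax depends on the Spin version):

```toml
[component.cart]
source = "cart.wasm"
# Granting access to the default store is the only step the developer
# takes; Spin provisions a local store automatically during development.
key_value_stores = ["default"]
```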
And when the developer deploys to Fermyon Cloud? Same thing. Fermyon Cloud creates an in-cloud highly available key/value store for the app. And once more, the developer needs to do nothing at all to manage it.
When the operations team sets up a Spin runtime in another environment, the team can choose to bring their own backend, be it Redis or CosmosDB or a custom one. And it doesn’t require a single change to the code. It’s all backend configuration that the developer needs never touch.
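That backend swap happens entirely in the runtime’s configuration. As a sketch (assuming Spin’s runtime configuration file; field names may differ across versions), an operations team might point the default store at Redis like this:

```toml
# Runtime configuration supplied by operations, not by the developer:
# the app still opens the "default" store, but it is now backed by Redis.
[key_value_store.default]
type = "redis"
url = "redis://cache.internal:6379"
```

The application binary is unchanged; only the host’s view of what “default” means is different.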
This makes things much simpler for the developer. Using key/value storage is suddenly reduced to just a handful of API calls such as get and set.
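In the Rust Spin SDK, for example, those calls look roughly like the sketch below. This only runs inside a Spin component (it depends on the spin_sdk crate and the Spin host), and exact signatures vary between SDK versions, so treat it as illustrative:

```rust
use spin_sdk::key_value::Store;

// Inside a Spin handler: open the store declared in spin.toml and
// read/write values. No connection strings, no accounts, no servers.
fn remember_visitor(name: &str) -> anyhow::Result<()> {
    let store = Store::open_default()?;
    store.set("last-visitor", name.as_bytes())?;
    if let Some(previous) = store.get("last-visitor")? {
        println!("last visitor: {}", String::from_utf8_lossy(&previous));
    }
    Ok(())
}
```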
A few weeks after Fermyon released key/value storage, Deno and then Vercel announced their own offerings. This is a trend marking those who break away from serverless v1.
In addition to being faster, easier, cheaper, and more portable, the next generation of serverless functions includes ops-less cloud services that free the developer.
Serverless applications have a surprisingly long lineage from CGI and PHP on through the evolution of cloud. But this new generation of serverless is not only more powerful, but also easier to use. Freeing the developer from operational concerns, and the platform engineer from developer concerns, this approach to serverless reduces friction on both sides of the deployment divide.