Serverless Aurora: What it means and why it's the future of data

Dec 4, 2017

AWS had their annual re:Invent conference last week (missed it? Check out our full recap).

AWS started the Serverless movement by releasing Lambda at re:Invent 2014. But this year's Lambda releases were run-of-the-mill incremental improvements: higher memory limits, concurrency controls, and of course, Golang support (coming soon!).

All this to say, there was nothing game-changing in the functions-as-a-service (FaaS) world itself.

Well then. Does this mean that AWS is slowing down on serverless?

Hardly.

We saw AWS asserting that serverless is more than just functions.

For a deeper explanation of this, check out Ben Kehoe's excellent post on The Serverless Spectrum.

In five years when we look back at re:Invent 2017, we won't be talking about the different managed container offerings. We'll be talking about this:

That's right. Serverless Aurora.

Why is Serverless Aurora so important? We first need to understand two things: the technology-driven changes in software architectures in the cloud era, and the current state of the data layer in serverless architectures.

The Architectural Evolution

Earlier this year, Adrian Cockcroft wrote a piece on the Evolution of Business Logic from Monoliths through Microservices, to Functions that blew my mind. It showed how changes in technology are driving changes in development patterns and processes. Adrian has had a front row seat for these changes over the years from his work at eBay, Netflix, and now AWS.

A bunch of unrelated technologies combined to drive these changes. Faster networks and better serialization protocols enabled compute that was distributed rather than centralized. This enabled API-driven architecture patterns that used managed services from SaaS providers and broke monoliths into microservices.

Chef, Puppet, EC2, Docker, and eventually Lambda combined to enable and promote ephemeral compute environments that reduced time to value and increased utilization. These tools were combined with the necessary process improvements from the DevOps movement to increase velocity. We're seeing smaller teams deliver features faster at lower cost.

These changes have been huge, but the data layer has been lagging. Adrian touched on database improvements, but they aren't as mind-blowing; they come with an explicit tradeoff of supporting only simple query patterns:

Compared to relational databases, NoSQL databases provide simple but extremely cost effective, highly available and scalable databases with very low latency.

The lagging data layer is particularly problematic in Serverless architectures.

The Problem of the Serverless Data Layer

I spoke on this problem at ServerlessConf NYC in October. In short, there are two approaches you can take with databases with serverless compute: server-full or serverless.

Server-full databases

A server-full approach uses instance-based solutions such as MySQL, Postgres, or MongoDB. I classify them as instance-based when you can tell me how many instances you have running and what their hostnames are.

I like Postgres + Mongo because of their popularity, which means data design patterns are well-known and language libraries are mature.

However, these instance-based solutions were designed for a pre-serverless world with long-running compute instances. This leads to the following problems:

Connection limits

Postgres and MySQL have limits on the number of active connections (e.g., 100) you can have at any one time. This can cause problems if a spike in traffic fires a large number of Lambda functions, each of which wants its own connection.
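To make the failure mode concrete, here is a minimal sketch (not production code) of a Lambda handler that opens a Postgres connection on every invocation; the environment variable names and the orders table are hypothetical. Each concurrent invocation holds its own connection for its entire duration, so a burst of a few hundred invocations blows straight past a 100-connection limit.

```python
# Minimal sketch: one database connection per Lambda invocation.
# Assumes psycopg2 is packaged with the function; env var names are hypothetical.
import os

import psycopg2


def handler(event, context):
    # Each concurrent invocation opens its own connection, which counts
    # against the database's max_connections for the life of the invocation.
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        connect_timeout=5,
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM orders")
            (order_count,) = cur.fetchone()
        return {"order_count": order_count}
    finally:
        conn.close()
```

Connection poolers like PgBouncer can soften this, but that's one more server-full component to run and scale.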

Networking issues

Your database instances will often have strict firewall rules about which IP addresses can access them. This is problematic with ephemeral compute: putting your Lambda functions inside the database's VPC means attaching elastic network interfaces, which adds latency to your functions' initialization.

Provisioning issues

Serverless architectures fit well with defining Infrastructure as Code. This is harder with something like Postgres roles (users), which aren't easily scriptable in your CloudFormation or Terraform templates; your configuration ends up spread across multiple tools.
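As a rough illustration, here is a hypothetical out-of-band script that provisions a Postgres role; every name in it is made up. The point is simply that this logic has to live in a script or migration tool rather than in the same CloudFormation or Terraform templates that define the rest of your stack.

```python
# Hypothetical provisioning script: role creation happens outside your
# CloudFormation/Terraform templates and must be kept in sync by hand.
import os

import psycopg2
from psycopg2 import sql


def provision_app_role(role_name: str, password: str) -> None:
    # Connect as an admin user; credential env var names are hypothetical.
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_ADMIN_USER"],
        password=os.environ["DB_ADMIN_PASSWORD"],
    )
    conn.autocommit = True
    try:
        with conn.cursor() as cur:
            cur.execute(
                sql.SQL("CREATE ROLE {} LOGIN PASSWORD %s").format(
                    sql.Identifier(role_name)
                ),
                (password,),
            )
            cur.execute(
                sql.SQL(
                    "GRANT SELECT, INSERT ON ALL TABLES IN SCHEMA public TO {}"
                ).format(sql.Identifier(role_name))
            )
    finally:
        conn.close()


if __name__ == "__main__":
    provision_app_role("orders_service", os.environ["ORDERS_SERVICE_PASSWORD"])
```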

Scaling issues

This is one of the most important problems. Instance-based databases aren't designed to scale up and down quickly. If you have variable traffic during the week, you're likely paying for the database you need at peak rather than adjusting throughout the week.
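For an instance-based database, "scaling" usually means changing the instance class, a coarse-grained operation that takes minutes and can interrupt connections. Here's a hedged sketch with boto3 (the identifiers are hypothetical); this is about all the knob-turning you get, which is why most teams simply provision for peak.

```python
# Hedged sketch: scaling an instance-based database means resizing the
# instance, not paying for throughput. Identifiers are hypothetical.
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",
    DBInstanceClass="db.r4.2xlarge",  # scale up for the weekly peak
    ApplyImmediately=True,  # otherwise the change waits for the maintenance window
)
```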

Serverless databases

In contrast to server-full, instance-based databases, there is a class of serverless databases. Serverless databases are different in that you're usually paying for throughput rather than a particular number and size of instances.

There are a few options for serverless databases, including Firebase and FaunaDB. However, the most common of these databases is DynamoDB from AWS.

DynamoDB addresses most of the problems listed above with server-full databases. There are no connection limits, just the general throughput limits from your provisioned capacity. Further, DynamoDB is mostly easy to scale up and down, with some caveats. The networking and provisioning issues are mitigated as well: all access is over HTTP, and authentication and authorization are done with IAM permissions. This makes it much easier to use in a world of ephemeral compute.
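Here's a minimal sketch of the same write path against DynamoDB from a Lambda function; the table name and item shape are hypothetical. Every call is a signed HTTPS request authorized by the function's IAM role, so there is no connection to pool and no firewall rule to punch.

```python
# Minimal sketch: DynamoDB access is a signed HTTP request per call,
# authorized by the function's IAM role. Table and attributes are hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")


def handler(event, context):
    # A single request; throughput is governed by provisioned capacity,
    # not by a connection limit.
    table.put_item(
        Item={
            "order_id": event["order_id"],
            "customer_id": event["customer_id"],
            "total": event["total"],
        }
    )
    return {"status": "ok"}
```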

However, DynamoDB isn't perfect as a database. You should really read Forrest Brazeal's excellent piece on Why Amazon DynamoDB isn't for everyone. In particular, the query patterns can be very difficult to get correct. DynamoDB is essentially a key-value store, which means you need to tailor your data design very closely to your expected query patterns.

To me, the biggest problem is the loss of flexibility in moving from a relational database to DynamoDB. With a relational model, it's usually easy to query the data in a new way for a new use case. There isn't that same flexibility for DynamoDB.
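A small sketch of what that rigidity looks like, assuming a hypothetical orders table keyed on customer_id (partition key) and order_date (sort key): queries that match the key design are cheap, while a new question the keys didn't anticipate means scanning the whole table or adding an index and migrating data.

```python
# Sketch of DynamoDB query-pattern rigidity; table and key names are hypothetical.
import boto3
from boto3.dynamodb.conditions import Attr, Key

table = boto3.resource("dynamodb").Table("orders")

# Efficient: this matches the key design we committed to up front.
recent_for_customer = table.query(
    KeyConditionExpression=Key("customer_id").eq("cust-123")
    & Key("order_date").begins_with("2017-11")
)

# Inefficient: an access pattern the keys don't anticipate falls back to
# scanning (and filtering) the entire table.
big_orders = table.scan(FilterExpression=Attr("total").gt(100))
```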

Developer agility is one of the key benefits of serverless architectures. Having to migrate and rewrite data is a major blocker to this agility.

The Future of Data

Ben Kehoe loves to hammer the point that to be truly serverless, your compute should not exist when it's not handling data. This hyper-ephemeral compute requires a new type of database. Highly-scalable, automation-friendly, global, with a flexible data model to boot.

Distributed databases are hard. The NoSQL movement, including the Dynamo paper that describes the principles behind DynamoDB and influenced its cousins (Apache Cassandra, Riak, etc.), was a first step in the database revolution.

The second step is in motion now. AWS announced multi-master Aurora, which allows your Aurora cluster to have masters that accept writes in different Availability Zones. Similarly, they announced DynamoDB Global Tables, which sync data across DynamoDB tables in different regions (!). Writes in São Paulo will be replicated to your copies in Ohio, Dublin, and Tokyo, seamlessly. These services take on the difficulty of running multi-master, global databases for you.
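Once the regional tables exist, wiring them into a Global Table is a single API call. Here's a hedged sketch with boto3 against the 2017 API, assuming a hypothetical orders table already created, with streams enabled, in each listed region:

```python
# Hedged sketch of creating a DynamoDB Global Table (2017 API). Assumes the
# "orders" table already exists, with streams enabled, in every listed region.
import boto3

client = boto3.client("dynamodb", region_name="sa-east-1")

client.create_global_table(
    GlobalTableName="orders",
    ReplicationGroup=[
        {"RegionName": "sa-east-1"},       # São Paulo
        {"RegionName": "us-east-2"},       # Ohio
        {"RegionName": "eu-west-1"},       # Dublin
        {"RegionName": "ap-northeast-1"},  # Tokyo
    ],
)
```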

The next step is Serverless Aurora, due sometime in 2018. It checks all the boxes for a serverless database:

✔︎ Easy scaling.

✔︎ Pay-per-use.

✔︎ Accessible over HTTP.

✔︎ Authentication & authorization over tightly-scoped IAM roles rather than database roles.

✔︎ A flexible relational data model that most developers know.

This is a big deal.

We've seen hints that Amazon recognizes the issues with existing relational solutions in the cloud-native paradigm. They've already implemented IAM authentication for MySQL and Aurora MySQL databases. Further, the Aurora design paper notes how they have changed the relational database for a cloud-native world.
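IAM authentication swaps the long-lived database password for a short-lived, IAM-signed token. A minimal sketch for a MySQL-compatible endpoint, assuming a hypothetical hostname and a database user created with the AWSAuthenticationPlugin; PyMySQL stands in for whatever MySQL driver you use, and TLS is required for IAM auth:

```python
# Minimal sketch of IAM database authentication. Hostname, user, database,
# and CA bundle path are hypothetical.
import boto3
import pymysql

HOST = "orders.cluster-abc123.us-east-1.rds.amazonaws.com"
USER = "app_user"  # a DB user created with the AWSAuthenticationPlugin

rds = boto3.client("rds", region_name="us-east-1")
token = rds.generate_db_auth_token(DBHostname=HOST, Port=3306, DBUsername=USER)

conn = pymysql.connect(
    host=HOST,
    user=USER,
    password=token,  # the IAM token stands in for a password
    database="orders",
    ssl={"ca": "/opt/rds-combined-ca-bundle.pem"},  # TLS is required for IAM auth
    connect_timeout=5,
)
```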

I believe this is only the first step in Amazon's plan to push the database further. With the rise of social networks and recommendation engines, graph databases have become more popular. Amazon's new Neptune graph database is a foray into another data area. Graph databases are notoriously hard to shard, so it may be a while before we see a Serverless Neptune. I wouldn't bet against it coming eventually.

re:Invent is about the future, and that's why it's my favorite conference of the year. When we look back on re:Invent 2017, I have a feeling the data layer improvements will be the most important of all.
