Choosing Consistency


Amazon SimpleDB today launched a new set of features that give customers more control over which consistency and concurrency models to use in their database operations. There are now conditional put and delete operations, as well as a new "consistent read" option. These new features will make it easier to transition applications that were designed with traditional database tools in mind to SimpleDB.

Revisiting the Consistency Challenges

Architecting distributed systems that need to operate reliably at world-wide scale is not a simple task. There are many factors that come into play when you need to meet stringent availability and performance requirements under ultra-scalable conditions. I laid out some of these challenges in an article explaining the concept of eventual consistency. If you need to achieve high availability and scalable performance, you will need to resort to data replication techniques. But replication itself brings a whole set of challenges that need to be addressed. For example, updates to data now need to happen in several locations, so what do you do if one or more of those locations is (temporarily) not accessible?

A whole field of computer science is dedicated to finding solutions for the hard problems of building reliable distributed systems. Eric Brewer of UC Berkeley summarized these challenges in what has been called the CAP Theorem, which states that of the three properties of shared-data systems--data consistency, system availability, and tolerance to network partitions--only two can be achieved at any given time. In practice the possibility of network partitions is a given, so the real trade-off is between availability and consistency: in a system that needs to be highly available there are a number of scenarios under which consistency cannot be guaranteed, and in a system that needs to be strongly consistent there are a number of scenarios under which availability may need to be sacrificed. These trade-off scenarios generally involve edge conditions that almost never happen, but in a system that needs to operate at massive scale, serving trillions and trillions of requests on a daily basis, one needs to be prepared to handle these cases.

Alongside this fundamental tension between availability and consistency, there is also a trade-off between concurrency, performance, and consistency. Achieving strict consistency can come at a cost in update or read latency, and may result in lower throughput. Developers should be aware of these trade-offs, as they are not simply an esoteric concept that only affects the internals of the system. Rather, the consistency model of the underlying service has explicit downstream effects on the developer's application.

Data Consistency Models in the Amazon Services

Within Amazon we strongly favor the use of services that are highly-available and as such we are willing to deal with a relaxed form of consistency called eventual consistency. Our applications are designed to handle this consistency model and there are many cases where this has allowed us to architect services and applications that can meet the most stringent reliability requirements.

Amazon S3 and Amazon SimpleDB are examples of AWS services that are designed using these high-availability principles and an eventual consistency model. Our customers have been able to handle this consistency model as well and have built scalable and reliable applications on top of these services. Now that these services have been in operation for quite some time, we have received feedback from a number of our advanced customers asking whether they could get more control over the trade-offs made by the service architects, specifically over the consistency of the read operations. Under the eventual consistency model there is the possibility of stale reads, while in a strongly consistent model reads always return the latest update, though with the potential for a performance penalty and lower availability under certain rare partition scenarios.

Two new features released today by Amazon SimpleDB put control over the consistency model in the hands of the customers. The first is the ability to perform a Consistent Read, which prevents the read operation from returning stale data. The second is Conditional Put and Delete, which ensure that these operations are only performed if certain attribute values match.

These new features will make it easier for traditional database application scenarios such as concurrency control, item counters, conditional update/delete, etc., to be implemented using Amazon SimpleDB.

Consistent Read

Until recently SimpleDB only supported eventually consistent reads. This meant that under certain conditions a read might, for a small period of time, not reflect the result of a recently completed write. Yet there is a class of applications that would be much easier to implement if reads reflected the outcome of writes performed prior to the read. To facilitate building such applications, SimpleDB now also supports a strong consistency option called consistent read, which reflects all writes that returned successfully prior to that read. In particular, applications that were originally designed with a traditional database in mind will be easier to adapt to SimpleDB.

The Select and GetAttributes request parameters now include an optional Boolean flag, ConsistentRead, which is set to false by default. If ConsistentRead is absent or set to false, SimpleDB will perform an eventually consistent read. If ConsistentRead is set to true, SimpleDB will return a consistent read.
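For illustration, here is a minimal sketch of what the two read options might look like from Python, using boto3's SimpleDB client; the domain, item, and attribute names are hypothetical, and it assumes boto3 is installed with AWS credentials configured.

```python
import boto3

# Assumptions: boto3 is installed, AWS credentials are configured, and the
# domain "mydomain" with item "item0001" already exists (hypothetical names).
sdb = boto3.client("sdb", region_name="us-east-1")

# Eventually consistent read: ConsistentRead absent (or false) is the default,
# so the result may not yet reflect a very recent write.
maybe_stale = sdb.get_attributes(
    DomainName="mydomain",
    ItemName="item0001",
)

# Consistent read: reflects all writes that returned successfully before this call.
latest = sdb.get_attributes(
    DomainName="mydomain",
    ItemName="item0001",
    ConsistentRead=True,
)

# Select accepts the same flag.
rows = sdb.select(
    SelectExpression="select * from mydomain where category = 'book'",
    ConsistentRead=True,
)
```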

The table below summarizes the characteristics of the two SimpleDB read consistency options:

Eventually consistent read | Consistent read
Stale reads possible       | No stale reads
Lowest read latency        | Higher read latency
Highest read throughput    | Lower read throughput

Since a consistent read may incur higher latency and lower read throughput, it should only be used when an application scenario mandates returning the very latest value of an item or attribute. For all other scenarios an eventually consistent read will yield the best performance.

Conditional Put and Delete

Many developers have asked for primitives in SimpleDB for implementing optimistic concurrency control, item-level transactions, locks, counters, etc. The new conditional put and delete operations in SimpleDB enable building these primitives.

Conditional put allows inserting or replacing the values of one or more attributes of an item if the current value of a specified single-valued attribute matches the expected value. Conditional puts are a mechanism to eliminate lost updates caused by concurrent writers writing to the same item, as long as all concurrent writers use conditional puts.
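As a concrete illustration, here is a sketch of an optimistic-concurrency update built on conditional put, again using boto3's SimpleDB client; the domain name "mydomain" and the "value" and "version" attributes are hypothetical.

```python
import boto3
from botocore.exceptions import ClientError

# Assumptions: boto3 and AWS credentials are configured; "mydomain", "value",
# and "version" are hypothetical names used only for illustration.
sdb = boto3.client("sdb", region_name="us-east-1")

def put_with_occ(item_name, new_value, max_retries=3):
    """Update an item using conditional put as an optimistic lock on 'version'."""
    for _ in range(max_retries):
        # Consistent read, so the condition is based on the latest successful write.
        resp = sdb.get_attributes(DomainName="mydomain", ItemName=item_name,
                                  ConsistentRead=True)
        attrs = {a["Name"]: a["Value"] for a in resp.get("Attributes", [])}

        if "version" in attrs:
            expected = {"Name": "version", "Value": attrs["version"], "Exists": True}
            next_version = str(int(attrs["version"]) + 1)
        else:
            # First write: succeed only if no concurrent writer created 'version'.
            expected = {"Name": "version", "Exists": False}
            next_version = "1"

        try:
            sdb.put_attributes(
                DomainName="mydomain",
                ItemName=item_name,
                Attributes=[
                    {"Name": "value", "Value": new_value, "Replace": True},
                    {"Name": "version", "Value": next_version, "Replace": True},
                ],
                Expected=expected,  # the put succeeds only if this condition holds
            )
            return True
        except ClientError as err:
            if err.response["Error"]["Code"] not in ("ConditionalCheckFailed",
                                                     "AttributeDoesNotExist"):
                raise
            # Another writer won the race; re-read and retry with the new version.
    return False
```

The retry loop is the essence of optimistic concurrency control: rather than locking the item, each writer conditions its put on the version it last read and simply retries when it loses the race.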

Conditional delete allows deleting an item, an attribute, or an attribute value for a given item if the current value of a single-valued attribute of that item matches the specified expected value. If the current value does not match the expected value, or if the attribute is missing altogether, the delete is rejected.
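A conditional delete can be sketched along the same lines; the names below are again hypothetical, and leaving out the Attributes parameter asks SimpleDB to delete the entire item.

```python
import boto3

# Assumptions as before; "mydomain", "item0001", and "status" are hypothetical.
sdb = boto3.client("sdb", region_name="us-east-1")

# Delete the whole item only if its "status" attribute currently reads "cancelled".
# If "status" holds a different value, or is missing, SimpleDB rejects the delete.
sdb.delete_attributes(
    DomainName="mydomain",
    ItemName="item0001",
    Expected={"Name": "status", "Value": "cancelled", "Exists": True},
)
```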

What about availability?

Even though SimpleDB now offers operations that support a stronger consistency model, under the covers SimpleDB remains the same highly-scalable, highly-available, and highly-durable structured data store. Even under extreme failure scenarios, such as complete datacenter failures, SimpleDB is architected to continue to operate reliably. However, when one of these extreme failure conditions occurs, the stronger consistency options may briefly become unavailable while the software reorganizes itself to ensure that it can continue to provide strong consistency. Under those conditions the default, eventually consistent read remains available to use.

Summary

By giving developers control over the consistency model, SimpleDB now empowers them to make their own consistency and performance trade-offs. This flexibility is important in scenarios where they need to adapt existing applications that were designed with a traditional database operational model in mind. Developers can now choose a consistent read approach if their application can accept the trade-offs, and with conditional put and delete, they have the ability to use an optimistic concurrency control approach to managing their database.

You can find more details on the Amazon SimpleDB product pages and Jeff Barr's posting at the Amazon Developer's blog.