It’s been a few months since I last wrote about Amazon Redshift and I thought I’d update you on some of the things we are hearing from customers. Since we launched, we’ve been adding over a hundred customers a week and are well over a thousand today. That’s pretty stunning. As far as I know, it’s unprecedented for this space. We’ve enabled our customers to save tens of millions of dollars in up-front capital expenses by using Amazon Redshift.
It’s clear that Amazon Redshift’s message of price, performance and simplicity has resonated with our customers. That’s no surprise – these are core principles for every AWS service. But when we launched Amazon Redshift, a number of people asked me, “Aren’t data warehouses enterprise products? Do you really do enterprise? How do you handle availability, security, and integration?” My first reaction to that was, “Wait, doesn’t everybody care about these things? These aren’t enterprise-specific.” But, even if we accept that framing, we don’t have to limit ourselves to the enterprise. A lot of the power of AWS comes from the fact that we can invest in expertise in these sorts of areas and spread the benefits across all our customers. I’ve talked about performance and availability in Amazon Redshift before. This time, let’s take a look at how Amazon Redshift secures your data. I suspect it’s a lot more than what most people are doing on-premises today. This is especially timely since Amazon Redshift has just been included in our SOC1 and SOC2 compliance reports.
Amazon Redshift starts with the security foundation underlying all AWS services. There are physical controls for datacenter access. Machines are wiped before they are provisioned. Physical media is destroyed before leaving our facilities. Drivers, BIOS, and NICs are protected. Access by our staff is monitored, logged and audited.
As a managed service, Amazon Redshift builds on this foundation by automatically configuring multiple firewalls, known as security groups, to control access to customer data warehouse clusters. Customers can explicitly set up ingress and egress rules or place their SQL endpoint inside their own VPC, isolating it from the rest of the AWS cloud. For multi-node clusters, the nodes storing customer data are isolated in their own security group, preventing direct access by the customer or the rest of the AWS network. Customers can require SSL for their own access to their cluster, while the AWS operations that monitor and manage the cluster are always secured by SSL.
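As a hedged illustration (one of several ways to configure this), authorizing a single client address on a classic cluster security group with boto3 might look like the following; the group name and CIDR are made up.

```python
# Sketch: allow one client CIDR to reach a Redshift cluster through a
# cluster security group. Names and addresses are illustrative.
import boto3

redshift = boto3.client("redshift")
redshift.create_cluster_security_group(
    ClusterSecurityGroupName="analytics-clients",   # hypothetical group
    Description="SQL clients allowed to reach the cluster",
)
redshift.authorize_cluster_security_group_ingress(
    ClusterSecurityGroupName="analytics-clients",
    CIDRIP="203.0.113.10/32",                       # example client address
)
```

In a VPC deployment you would instead control ingress and egress with VPC security groups.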
Amazon Redshift can also be set up to encrypt all data at rest using hardware-accelerated AES-256. “All data” means all data blocks, system metadata, partial query results and backups stored in S3. Each data block gets its own unique, randomly generated key. These block keys are further encrypted with a randomly generated cluster-specific key; the cluster key is itself stored encrypted off-cluster and is only ever held in plaintext in memory on the cluster. By using unique keys per block and per cluster, Amazon Redshift dramatically reduces the cost of key rotation and helps prevent unauthorized splicing of data from different blocks or clusters. We plan to support customer-managed hardware security modules (HSMs) for further protecting the cluster key, using Amazon CloudHSM or an on-premises HSM environment.
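To make the layering concrete, here is a toy sketch of that envelope scheme in Python, using the cryptography package; it is illustrative only and does not claim to be Redshift’s actual implementation.

```python
# Toy envelope encryption: a unique AES-256 key per data block, wrapped by a
# cluster-specific key. Illustrative only; not Redshift's actual code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

cluster_key = AESGCM.generate_key(bit_length=256)  # stored encrypted off-cluster

def encrypt_block(block: bytes):
    block_key = AESGCM.generate_key(bit_length=256)   # unique per-block key
    nonce = os.urandom(12)
    ciphertext = AESGCM(block_key).encrypt(nonce, block, None)
    # Wrap the block key under the cluster key; only the wrapped form is stored.
    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(cluster_key).encrypt(wrap_nonce, block_key, None)
    return ciphertext, nonce, wrapped_key, wrap_nonce
```

Rotating keys then means re-wrapping small keys rather than re-encrypting the data blocks themselves, which is what keeps rotation cheap.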
I always tell developers that they should obsess not only on the things that our customers ask for but also on the things they just expect. Chief amongst those is security. Every day, we need to come in and work hard to maintain the trust our customers have placed in us. I’m delighted to see how much work the Amazon Redshift team put into this area out of the gate.
We love getting feedback so we can deliver the improvements and new features that really matter to our customers. You can see from the pace at which we roll out new functionality that teams across AWS take this very seriously. One of the teams that’s iterating quickly is DynamoDB. They recently launched Local Secondary Indexes and today they are releasing several new features that will help customers build faster, cheaper, and more flexible applications:
Parallel Scans – To increase the throughput of table scans, the team has introduced new functionality that allows you to scan a table with multiple threads concurrently. Until now, scans could only be performed sequentially; with this new feature, a scan can be split into multiple segments, each retrieved by its own thread (see the sketch after this list).
Provisioned Throughput – To allow customers to respond to changes in load more rapidly, DynamoDB now allows provisioned throughput to be decreased up to four times per day (previously twice per day).
Read Capacity Metering – The size of a read capacity unit will increase from 1 KB to 4 KB. As a result, many read scenarios will cost ¼ of what they did before. This also makes the DynamoDB/Redshift integration even more cost-effective, as exporting data from DynamoDB into Redshift can be up to four times cheaper (a worked example follows below).
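To make the parallel scan concrete, here is a hedged sketch using boto3 against a hypothetical Events table; the segment count and names are illustrative, not prescribed values.

```python
# A sketch of a parallel scan: split the scan into segments and retrieve
# each segment on its own thread. Table name and segment count are made up.
from concurrent.futures import ThreadPoolExecutor
import boto3

table = boto3.resource("dynamodb").Table("Events")
TOTAL_SEGMENTS = 4

def scan_segment(segment):
    items, kwargs = [], {"Segment": segment, "TotalSegments": TOTAL_SEGMENTS}
    while True:
        page = table.scan(**kwargs)
        items.extend(page["Items"])
        if "LastEvaluatedKey" not in page:
            return items
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as pool:
    results = [item for seg in pool.map(scan_segment, range(TOTAL_SEGMENTS))
               for item in seg]
```

The read-metering change is simple arithmetic: a read is billed in whole capacity units of the item size divided by the unit size, rounded up. A quick sketch, assuming strongly consistent single-item reads:

```python
import math

def read_capacity_units(item_size_kb, unit_kb):
    # One unit covers a read of up to unit_kb; partial units round up.
    return math.ceil(item_size_kb / unit_kb)

# A 3.5 KB item: 4 units under the old 1 KB metering, 1 unit under the new 4 KB metering.
print(read_capacity_units(3.5, 1), read_capacity_units(3.5, 4))  # -> 4 1
```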
We’re excited to give DynamoDB customers better read performance, lower costs, and more provisioning flexibility. Please keep the feedback coming – we’re listening.
I have returned from a great series of AWS Summits in NYC and in Europe so it is time to get back to some weekend reading.
During the nineties, much operating systems research focused on microkernels, which resulted in a large collection of prototype systems: Mach 3.0, L3/L4, Plan 9, Exokernel, Minix and others. Not many of those made it into production; the version of Mach that rolled into Mac OS X through the XNU integration was an earlier, monolithic version. I believe that commercially QNX has been the most successful microkernel.
There was a wealth of interesting, fundamental research triggered by the concept of microkernels: new communication paradigms, memory management structures, schedulers, etc. It resulted in many publications that go back to the roots of OS research. For this weekend’s reading I picked a more esoteric paper. As part of the Mach 3.0 research, Rich Draves implemented continuations inside the operating system as a fundamental structuring component for communication, thread management and exception handling. There were performance improvements, but more importantly, and my reason for reading the paper again, it had an impact on how the OS was structured, leading to a reduction in complexity.
Using Continuations to Implement Thread Management and Communication in Operating Systems, Richard P. Draves, Brian N. Bershad, Richard F. Rashid, and Randall W. Dean, Proceedings of the Thirteenth ACM Symposium on Operating Systems Principles (SOSP ’91), Pages 122-136
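To give a flavor of the idea, here is a purely illustrative toy in Python, far simpler than the kernel machinery in the paper: a blocking operation saves only a small continuation (a function plus a bit of state) instead of a full stack, and wakeup simply queues that continuation to run.

```python
# Toy illustration of continuation-based blocking, inspired by (but much
# simpler than) the paper: a blocked "thread" is represented by a
# continuation closure rather than by a saved kernel stack.
class Event:
    def __init__(self):
        self.waiters = []

run_queue = []

def block_on(event, continuation, state):
    # Instead of retaining a stack, remember only how to resume.
    event.waiters.append((continuation, state))

def wakeup(event, result):
    while event.waiters:
        cont, state = event.waiters.pop()
        run_queue.append(lambda c=cont, s=state, r=result: c(s, r))

msgs = Event()
block_on(msgs, lambda state, msg: print("thread", state, "received", msg), 1)
wakeup(msgs, "hello")
while run_queue:
    run_queue.pop(0)()   # prints: thread 1 received hello
```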
Today, I’m thrilled to announce that we have expanded the query capabilities of DynamoDB. We call the newest capability Local Secondary Indexes (LSI). While DynamoDB already allows you to perform low-latency queries based on your table’s primary key, even at tremendous scale, LSI will now give you the ability to perform fast queries against other attributes (or columns) in your table. This gives you the ability to perform richer queries while still meeting the low-latency demands of responsive, scalable applications.
Our customers have been asking us to expand the query capabilities of DynamoDB, and we’re excited to see how they use LSI. Milo Milovanovic, Principal Systems Architect at the Washington Post, reports that “database performance and scalability are critical for delivering new services to our 34+ million readers on any device. For this reason, we chose DynamoDB to power our popular Social Reader app and site experience on socialreader.com. The fast and flexible query performance that local secondary indexes provide will allow us to further optimize our social intelligence, and continue to improve our readers’ experiences.”
As I discussed in a recent blog post, after years of building highly scalable and highly available e-commerce and cloud computing services, Amazon has come to realize that relational databases should only be used when an application truly needs the complex query, table join and transaction capabilities of a full-blown relational database. In all other cases, when such relational features are not needed, we default to DynamoDB as it offers a more available, more scalable and ultimately a lower cost solution.
When DynamoDB launched last year, it offered simple but powerful query capabilities. Customers could choose from two types of keys for primary index querying: Simple Hash Keys and Composite Hash Key / Range Keys:
- Simple Hash Key gives DynamoDB the Distributed Hash Table abstraction. The key is hashed over the different partitions to optimize workload distribution. For more background on this, please read the original Dynamo paper.
- Composite Hash Key with Range Key allows the developer to create a primary key that is the composite of two attributes, a “hash attribute” and a “range attribute.” When querying against a composite key, the hash attribute needs to be matched exactly, but a range operation can be specified for the range attribute: e.g. all orders from Werner in the past 24 hours, or all games played by an individual player in the past 24 hours (see the sketch after this list).
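As a sketch of such a composite-key range query with boto3 (table and attribute names are the illustrative ones used later in this post):

```python
# Sketch: all games a player started in the past 24 hours, using the
# hash attribute (PlayerName) plus a range condition (GameStartTime).
from datetime import datetime, timedelta
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("PlayerActivity")
cutoff = (datetime.utcnow() - timedelta(hours=24)).isoformat()

recent = table.query(
    KeyConditionExpression=Key("PlayerName").eq("Werner")
    & Key("GameStartTime").gt(cutoff)
)["Items"]
```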
With LSI we expand DynamoDB’s existing query capabilities with support for more complex queries. Customers can now create indexes on non-primary key attributes and quickly retrieve records within a hash partition (i.e., items that share the same hash value in their primary key).
Since we launched DynamoDB, we have seen many database customers migrate their apps from traditional sharded relational database deployments to DynamoDB. Some of these developers who were used to the broad query flexibility offered by relational databases asked us to add more query functionality to DynamoDB. These developers will now find LSI to be useful and familiar, as it enables them to index non-primary key attributes and quickly query records within a hash partition. LSI enables more applications to benefit from DynamoDB’s scalability, availability, resilience, low cost and minimal operational overhead.
What are Local Secondary Indexes (LSI)?
As an example, let’s say that your social gaming application tracks player activity. Database scalability is important for social games, which can attract tens of millions of players soon after launch. Consistent, rock solid low-latency database performance is important too, because social games are highly interactive. Let’s examine how DynamoDB would support a social game, and then add the benefit of local secondary indexes.
DynamoDB stores information as database tables, which are collections of individual items. Each item is a collection of data attributes. The items are analogous to rows in a spreadsheet, and the attributes are analogous to columns. Each item is uniquely identified by a primary key, which is composed of its first two attributes, called the hash and range.
DynamoDB queries refer to the hash and range attributes of the items you’d like to access. Local secondary indexes let you query on the hash key together with attributes other than the range key. LSI queries are local in the sense that they are always scoped to a single hash key, just like standard queries.
Based on the design of your game, you might decide to record each player’s final score for each game he completes. You would track at least three pieces of data:

- PlayerName – the player who completed the game
- GameStartTime – when the game began
- Score – the player’s final score for the game

In DynamoDB, your Player Activity table might look like this (sample items for illustration):

PlayerName | GameStartTime        | Score
John       | 2013-03-28T09:10:24Z | 112
John       | 2013-03-27T18:45:10Z | 98
Ana        | 2013-03-28T11:01:42Z | 145
Suppose you always want to show players a history of the last 10 games they played. This is a natural fit for DynamoDB. By setting up a DynamoDB table with PlayerName as the hash key and GameStartTime as the range key, you can quickly run queries like: “Show me the last 10 games played by John”. However, once you set up your table like this, you couldn’t run efficient queries on other attributes like “Score”. That was before LSI. Now, you can use LSI to define a secondary index on the “Score” attribute and quickly run queries like “Show me John’s all-time top 5 scores.” The query result is automatically ordered by score.
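As a hedged sketch of what that query could look like with boto3 (the index name is assumed; attribute names match the example above):

```python
# Query an LSI on Score for John's all-time top five scores, highest first.
# "ScoreIndex" is an assumed index name.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("PlayerActivity")

top5 = table.query(
    IndexName="ScoreIndex",
    KeyConditionExpression=Key("PlayerName").eq("John"),
    ScanIndexForward=False,  # descending order on the index range key (Score)
    Limit=5,
)["Items"]
```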
With LSI, your application can get the data it needs much more quickly and efficiently than ever before. No more downloading results and sorting through them in your application: you can now push that work to DynamoDB. Crucially, LSI does this while protecting the scalability and performance that our customers demand. Tables with one or more LSIs will exhibit the same latency and throughput performance as those without any indexes.
Start with DynamoDB
The enhanced query flexibility that local secondary indexes provide means DynamoDB can support an even broader range of workloads. As I mentioned earlier, because scalability and availability are of critical importance for our applications at Amazon, we have made DynamoDB the default choice for every application that does not require the flexibility of relational databases like Oracle or MySQL. Customers tell us they’re adopting the same practice, particularly in digital advertising, social gaming and connected-device applications, where high availability, seamless scalability, predictable performance and low latency are critical.
Valentino Volonghi, Chief Architect of retargeting platform AdRoll, says “we use DynamoDB to bid on more than 7 billion impressions per day on the Web and FBX. AdRoll’s bidding system accesses more than a billion cookie profiles stored in DynamoDB, and sees uniform low-latency response. In addition, the availability of DynamoDB in all AWS regions allows our lean team to meet the rigorous low latency demands of real-time bidding in countries across the world without having to worry about infrastructure management.” In the past I have also highlighted other advertising applications from customers like Madwell and Shazam where seamless scale, high availability, predictable performance and low latency are very important.
Ankur Bulsara, CTO of the Scopely social gaming platform, says LSI will enable his team to deploy DynamoDB even more broadly. “We default to DynamoDB wherever we can, and also use MySQL for some query types,” he says. “We’re very excited that local secondary indexes will allow us to further remove traditional RDBMSes from our ever-growing stack. DynamoDB is the future, and with LSI, the future is very bright.” In the past, I have highlighted many other gaming customers such as Electronic Arts and Halfbrick Studios. Gaming customers value DynamoDB’s seamless scale, since successful games can scale from a few users to tens of millions of users in a matter of weeks.
Today, local secondary indexes must be defined at the time you create your DynamoDB table. In the future, we plan to provide the ability to add or drop LSIs on existing tables. If you want to equip an existing DynamoDB table with local secondary indexes right away, you can export the data from your existing table using Elastic MapReduce and import it into a new table with LSI.
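For completeness, here is a hedged boto3 sketch of defining the Player Activity table with its LSI at creation time; names and throughput values are illustrative.

```python
# Sketch: create the (hypothetical) PlayerActivity table with an LSI on Score.
import boto3

dynamodb = boto3.client("dynamodb")
dynamodb.create_table(
    TableName="PlayerActivity",
    AttributeDefinitions=[
        {"AttributeName": "PlayerName", "AttributeType": "S"},
        {"AttributeName": "GameStartTime", "AttributeType": "S"},
        {"AttributeName": "Score", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "PlayerName", "KeyType": "HASH"},
        {"AttributeName": "GameStartTime", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[{
        "IndexName": "ScoreIndex",
        "KeySchema": [
            {"AttributeName": "PlayerName", "KeyType": "HASH"},
            {"AttributeName": "Score", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```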
You can get started with DynamoDB and Local Secondary Indexes right away with the DynamoDB free tier – LSI is available today in all AWS regions except GovCloud.
For more information, please see the appropriate topics in the Amazon DynamoDB developer guide.
Joins are one of the fundamental relational database query operations. It is very hard to implement the join operation efficiently, as there are many unknowns in the execution of the operation. In the early days, much relational database research went into understanding the complexity of performing joins: what exactly impacted their performance and which approach performed better under which conditions. In 1992, Priti Mishra and Margaret Eich published a survey of what had been achieved until then in join processing, describing the algorithms, their implementation complexity and their performance in detail. That makes it a good back-to-basics paper to read this weekend.
Join Processing in Relational Databases, Priti Mishra and Margaret H. Eich, ACM Computing Surveys (CSUR), Volume 24, Issue 1, March 1992, Pages 63-113
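For a taste of the material, here is a minimal in-memory hash join, one of the classic algorithm families the survey analyzes: build a hash table on the smaller relation, then probe it with the larger one.

```python
# A toy hash join: build a hash table on the smaller relation's join key,
# then probe it while streaming the larger relation. Illustrative only.
from collections import defaultdict

def hash_join(build_rows, probe_rows, key):
    buckets = defaultdict(list)
    for row in build_rows:                 # build phase
        buckets[row[key]].append(row)
    for row in probe_rows:                 # probe phase
        for match in buckets.get(row[key], []):
            yield {**match, **row}

customers = [{"cust": "Werner", "city": "Seattle"}]
orders = [{"id": 1, "cust": "Werner"}, {"id": 2, "cust": "Ana"}]
print(list(hash_join(customers, orders, "cust")))
# -> [{'cust': 'Werner', 'city': 'Seattle', 'id': 1}]
```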
At the end of the ’80s, Ceri and Widom were researching the fundamentals of integrity constraints in databases. In 2000 they were invited by the VLDB conference to review ten years of work on constraints and triggers, with an eye on the practical application of both abstractions. The resulting paper gives a good overview of the fundamentals of both concepts.
Practical Applications of Triggers and Constraints: Success Stories and Lingering Issues, Stefano Ceri, Roberta Cochrane, and Jennifer Widom, Proceedings of the 26th International Conference on Very Large Data Bases (VLDB 2000), Cairo, September 2000, Pages 254-262
I have been reading mainly newer papers at the beginning of this year, but it is time to get back to the basics and start reading some more historical papers again, from the time when researchers and engineers were laying the foundations for our current systems. A good early paper to start with again is the survey that Härder and Reuter did on database recovery in 1983.
Principles of Transaction-Oriented Database Recovery, Theo Härder and Andreas Reuter, ACM Computing Surveys, Volume 15 Issue 4, December 1983, Pages 287-317
Netflix has over the years become one of the absolute best engineering powerhouses for building cloud-native applications. At AWS we are very proud to be their infrastructure partner and every day we learn from how they use our cloud services. Many of the observations I talk about in my “21st Century Application Architectures” presentation come from seeing Netflix architects at work.
Netflix has gone beyond just building great applications; they have made fundamental pieces of their cloud platform available as open source and many in the industry have responded to that with great enthusiasm, evidenced by the packed Netflix House in February where people came to hear more about NetflixOSS.
But Netflix has even gone a step beyond this by sponsoring a contest for the best open source contributions to the NetflixOSS platform. The Netflix OSS Cloud Prize carries a cash reward of $100K distributed over 10 categories. Each of the category winners will also receive $5K worth of AWS credits. The contest will run through September 15, 2013, after which a Judging Panel, which I am excited to be part of, will pick the winners. The winners will be announced on October 16 and the trophies will be presented at the AWS re:Invent conference in November in Las Vegas.
The ten categories are:
- Best example application mash-up
- Best new monkey
- Best contribution to code quality
- Best new feature
- Best contribution to operational tools, availability and manageability
- Best portability enhancement
- Best contribution to performance improvements
- Best datastore integration
- Best usability enhancement
- Judges’ choice award
Then go fork the repo and start building! I am very much looking forward to the results.
I spend a lot of time talking to AWS developers, many of them working in the gaming and mobile space, and most of them have found Node.js well suited for their web applications. With its asynchronous, event-driven programming model, Node.js allows these developers to handle large numbers of concurrent connections with low latency. These developers typically use EC2 instances combined with one of our database services to create web services for data retrieval or to build dynamic mobile interfaces.
Today, AWS Elastic Beanstalk just added support for Node.js to help developers easily deploy and manage these web applications on AWS. Elastic Beanstalk automates the provisioning, monitoring, and configuration of many underlying AWS resources such as Elastic Load Balancing, Auto Scaling, and EC2. Elastic Beanstalk also provides automation to deploy your application, rotate your logs, and customize your EC2 instances. To get started, visit the AWS Elastic Beanstalk Developer Guide.
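As a hedged sketch of provisioning such an environment programmatically with boto3 (application and environment names are made up, and the solution stack string is an assumption; list_available_solution_stacks() returns the currently valid ones):

```python
# Sketch: create a Node.js Elastic Beanstalk application and environment.
# Beanstalk then provisions the load balancer, Auto Scaling group and EC2
# instances for you. Names and the stack string are illustrative.
import boto3

eb = boto3.client("elasticbeanstalk")
eb.create_application(ApplicationName="my-node-app")
eb.create_environment(
    ApplicationName="my-node-app",
    EnvironmentName="my-node-app-env",
    SolutionStackName="64bit Amazon Linux running Node.js",  # assumed stack name
)
```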
Two years, lots of progress, and more to come…
Almost two years ago, we launched Elastic Beanstalk to help developers deploy and manage web applications on AWS. The team has made significant progress and continues to iterate at a phenomenal rate.
Elastic Beanstalk now supports Java, PHP, Python, Ruby, Node.js, and .NET. You can deploy and manage your applications in any AWS region (except for GovCloud). Many tools are available for you to deploy and manage your application, just choose your favorite flavor. If you’re building Java applications, you can use the AWS Toolkit for Eclipse. If you’re building .NET applications, you can use the AWS Toolkit for Visual Studio. If you prefer to work in a terminal, you can use a command line tool called ‘eb’ along with Git. Partners like eXoCloud IDE also offer integration with Elastic Beanstalk.
Elastic Beanstalk seamlessly connects your application to an RDS database, secures your application inside a VPC, and allows you to integrate with any AWS resource using a new mechanism called configuration files. Simply put, Elastic Beanstalk is highly customizable to meet the needs of your applications.
Who is using Elastic Beanstalk?
Companies of all sizes are using Elastic Beanstalk. Intuit for example uses Elastic Beanstalk for a mobile application backend called txtweb. Peel uses Elastic Beanstalk to host a real-time web service that interacts with DynamoDB.
The one commonality for all these customers is the time savings and the productivity increase that they get when using Elastic Beanstalk. Elastic Beanstalk helps them focus on their applications, on scaling their applications, and on meeting tight deadlines. Productivity gains don’t always mean that you merely deliver things faster. Sometimes with increased productivity, you can also do more with less.
The Elastic Beanstalk team helped me get some data around how small teams can build large scale applications. One company with less than 10 employees runs a mobile backend on Elastic Beanstalk that handles an average of 17,000 requests per second and peak traffic of more than 20,000 requests per second. The company drove the project from development to delivery in less than 2 weeks. It’s very impressive to see the innovation and scale that Elastic Beanstalk can provide even for small teams.
Time passes very quickly around here and I hadn’t realized until recently that over a year has gone by since we launched DynamoDB. As I sat down with the DynamoDB team to review our progress over the last year, I realized that DynamoDB had surpassed even my own expectations for how easily applications could achieve massive scale and high availability with DynamoDB. Many of our customers have, with the click of a button, created DynamoDB deployments in a matter of minutes that are able to serve trillions of database requests per year. I’ve written about it before, but I continue to be impressed by Shazam’s use of DynamoDB, which is an extreme example of how DynamoDB’s fast and easy scalability can be quickly applied to building high scale applications. Shazam’s mobile app was integrated with Super Bowl ads, which allowed advertisers to run highly interactive advertising campaigns during the event. Shazam needed to handle an enormous increase in traffic for the duration of the Super Bowl and used DynamoDB as part of their architecture. After working with DynamoDB for only three days, they had already managed to go from the design phase to a fully production-ready deployment that could handle the biggest advertising event of the year.
In the year since DynamoDB launched, we have seen widespread adoption by customers building everything from e-commerce platforms and real-time advertising exchanges to mobile applications, Super Bowl advertising campaigns, Facebook applications, and online games. This rapid adoption has allowed us to benefit from the scale economies inherent in our architecture. We have also reduced our underlying costs through significant technical innovations from our engineering team. I’m thrilled that we are able to pass along these cost savings to our customers in the form of significantly lower prices – as much as 85% lower than before.
The details of our price drop are as follows:
Throughput costs: We are dropping our provisioned throughput costs for both read requests and write requests by 35%. We are also introducing a Reserved Capacity model that offers customers discounted pricing if they reserve read and write capacity for one or three years. For customers reserving capacity for three years, the price of throughput will drop from today’s prices by 85%. For customers reserving capacity for one year, the price of throughput will drop from today’s prices by 70%. For more details on reserved capacity, please read the DynamoDB FAQs.
Indexed Storage costs: We are lowering the price of indexed storage by 75%. For example, in our US East (N. Virginia) Region, the price of data storage will drop from $1 per GB per month to $0.25. All data items continue to be stored on Solid State Drives (SSDs) and are automatically replicated across multiple distinct Availability Zones to provide very high durability and availability.
How are we able to do this?
DynamoDB runs on a fleet of SSD-backed storage servers that are specifically designed to support DynamoDB. This allows us to tune both our hardware and our software to ensure that the end-to-end service is both cost-efficient and highly performant. We’ve been working hard over the past year to improve storage density and bring down the costs of our underlying hardware platform. We have also made significant improvements to our software by optimizing our storage engine, replication system and various other internal components. The DynamoDB team has a mandate to keep finding ways to reduce the cost and I am glad to see them delivering in a big way. DynamoDB has also benefited from its rapid growth, which allows us to take advantage of economies of scale. As with our other services, as we’ve made advancements that allow us to reduce our costs, we are happy to pass the savings along to you.
When is it appropriate to use DynamoDB?
I am often asked: When is it appropriate to use DynamoDB instead of a relational database?
We used relational databases when designing the Amazon.com ecommerce platform many years ago. As Amazon’s business grew from being a startup in the mid-1990s to a global multi-billion-dollar business, we came to realize the scaling limitations of relational databases. A number of high profile outages at the height of the 2004 holiday shopping season can be traced back to scaling relational database technologies beyond their capabilities. In response, we began to develop a collection of storage and database technologies to address the demanding scalability and reliability requirements of the Amazon.com ecommerce platform. This was the genesis of NoSQL databases like Dynamo at Amazon. From our own experience designing and operating a highly available, highly scalable ecommerce platform, we have come to realize that relational databases should only be used when an application really needs the complex query, table join and transaction capabilities of a full-blown relational database. In all other cases, when such relational features are not needed, a NoSQL database service like DynamoDB offers a simpler, more available, more scalable and ultimately a lower cost solution.
We now believe that when it comes to selecting a database, no single database technology – not even one as widely used and popular as a relational database like Oracle, Microsoft SQL Server or MySQL – will meet all database needs. A combination of NoSQL and relational databases may better serve the needs of a complex application. Today, DynamoDB has become very widely used within Amazon and is used every place where we don’t need the power and flexibility of relational databases like Oracle or MySQL. As a result, we have seen enormous cost savings, on the order of 50% to 90%, while achieving higher availability and scalability as our internal teams have moved many of their workloads onto DynamoDB.
So, what should you do when you’re building a new application and looking for the right database option? My recommendation is as follows: Start by looking at DynamoDB and see if that meets your needs. If it does, you will benefit from its scalability, availability, resilience, low cost, and minimal operational overhead. If a subset of your database workload requires features specific to relational databases, then I recommend moving that portion of your workload into a relational database engine like those supported by Amazon RDS. In the end, you’ll probably end up using a mix of database options, but you will be using the right tool for the right job in your application.