London Calling! An AWS Region is coming to the UK!


Yesterday, AWS evangelist Jeff Barr wrote that AWS will be opening a region in South Korea in early 2016, which will be our 5th region in Asia Pacific. Customers can choose from 11 regions around the world today and, in addition to Korea, we are adding regions in India, Ohio, and a second region in China in 2016.

Today, I am excited to add the United Kingdom to that list! The AWS UK region will be our third in the European Union (EU), and we're shooting to have it ready by the end of 2016 (or early 2017). This region will provide even lower latency and strong data sovereignty to local users.

More startups, small and medium businesses, large enterprises, universities, and government organizations all over the world are moving to the AWS Cloud faster than ever before. We are committed to meeting our customers’ increasing needs for capacity and for powerful AWS services that eliminate the heavy lifting of the underlying IT infrastructure -- allowing them to focus more of their precious resources on their core business.

Leading UK organizations were among the early adopters of the cloud when we first started AWS back in 2006, and we continue to help them drive increased agility, lower their IT costs, and scale globally with ease.

The new region, coupled with the existing AWS regions in Dublin and Frankfurt, will provide customers with quick, low-latency access to websites, mobile applications, games, SaaS applications, big data analysis, Internet of Things (IoT) applications, and more.

Expanding the Cloud: Introducing Amazon QuickSight


We live in a world where massive volumes of data are being generated from websites, connected devices, and mobile apps. In such a data-intensive environment, making key business decisions such as running marketing and sales campaigns, planning logistics, performing financial analysis, and targeting ads requires deriving insights from this data. However, the data infrastructure to collect, store, and process data is geared primarily towards developers and IT professionals (e.g., Amazon Redshift, Amazon DynamoDB, Amazon EMR), whereas insights need to be derived not just by technical professionals but also by non-technical business users.

In our quest to enable the best data storage options for customers, over the years we have built several innovative database solutions such as Amazon RDS, Amazon RDS for Aurora, Amazon DynamoDB, and Amazon Redshift. Not surprisingly, customers are using them to collect and store massive amounts of data. Yet the process of deriving actionable insights out of this wide variety of data sources is not easy. Traditionally, companies have had to invest in complex tools to discover their data sets, ETL tools to prepare the data for analysis, and separate tools for analyzing it and building visually interactive dashboards.

Today, I am excited to share with you a brand new service called Amazon QuickSight that aims to simplify the process of deriving insights from a wide variety of data sources quickly, easily, and at a low cost. QuickSight is a very fast, cloud-powered business intelligence (BI) service, available at one-tenth the cost of old-guard BI solutions.

Big data challenges

Over the last several years, AWS has delivered on a comprehensive set of services to help customers collect, store, and process their growing volume of data. Today, many thousands of companies—from large enterprises such as Johnson & Johnson, Samsung, and Philips to established technology companies such as Netflix and Adobe to innovative startups such as Airbnb, Yelp, and Foursquare—use Amazon Web Services for their big data needs.

Every day, large amounts of data are generated from customer applications running on top of AWS infrastructure, collected and streamed using services like Amazon Kinesis, and stored in AWS relational data sources such as Amazon RDS, Amazon Aurora, and Amazon Redshift; NoSQL data sources such as Amazon DynamoDB; and file-based data sources such as Amazon S3. Customers also use a variety of different tools, including Amazon EMR for Hadoop, Amazon Machine Learning, AWS Data Pipeline, and AWS Lambda, to process and analyze their data.

There’s an inherent gap between the data that is collected, stored, and processed and the key decisions that business users make on a daily basis. Put simply, data is not always readily available and accessible to organizational end users. Most business users continue to struggle answering key business questions such as “Who are my top customers and what are they buying?”, “How is my marketing campaign performing?”, and “Why is my most profitable region not growing?” While BI solutions have existed for decades, customers have told us that it takes an enormous amount of time, IT effort, and money to bridge this gap.

Traditional BI solutions typically require teams of data engineers to spend several months building complex data models and synthesizing the data before they can generate their first report. These solutions lack interactive data exploration and visualization capabilities, limiting most business users to canned reports and pre-selected queries.

On-premises BI tools also require companies to provision and maintain complex hardware infrastructure and invest in expensive software licenses, maintenance fees, and support fees that cost upwards of thousands of dollars per user per year. To scale to a larger number of users and support the growth in data volume spurred by social media, web, mobile, IoT, ad-tech, and ecommerce workloads, these tools require customers to invest in even more infrastructure to maintain reasonable query performance. The cost and complexity of implementing and scaling BI make it difficult for most companies to make BI ubiquitous across their organizations.

Enter Amazon QuickSight

QuickSight is a cloud-powered BI service built from the ground up to address the big data challenges around speed, complexity, and cost. QuickSight puts data scattered across various big data sources such as relational data sources, NoSQL data sources, and streaming data sets at the fingertips of your business users in an easy-to-use interface, at one-tenth the cost of traditional BI solutions. Getting started with QuickSight is straightforward. Let me walk you through some of the core experiences of QuickSight that make it so easy to set up, connect to your data sources, and build visualizations in minutes.

QuickSight is built on a number of innovative technologies to get business users to their first insights fast. Here are a few of the key innovations that power QuickSight:

SPICE: One of the key ingredients that make QuickSight so powerful is the Super-fast, Parallel, In-memory Calculation Engine (SPICE). SPICE is a new technology built from the ground up by the same team that built technologies such as DynamoDB, Amazon Redshift, and Amazon Aurora. SPICE enables QuickSight to scale to many terabytes of analytical data and deliver response times in milliseconds for most visualization queries. When you point QuickSight to a data source, data is automatically ingested into SPICE for optimal analytical query performance. SPICE uses a combination of columnar storage, in-memory technologies enabled through the latest hardware innovations, machine code generation, and data compression to allow users to run interactive queries on large datasets and get rapid responses. SPICE supports rich calculations that help customers derive valuable insights as they explore their data, without having to worry about provisioning or managing infrastructure. SPICE automatically replicates data for high availability and performance, enabling organizations to scale to thousands of users who can all perform fast, interactive analysis across a wide variety of AWS data sources. In addition to powering QuickSight, we are also enabling our AWS BI partners to integrate with SPICE, so that customers who use our partner tools can visualize their data quickly with a user interface that they are already familiar with.

Auto discovery: One of the challenges with BI and analytics is discovering the data and curating it for analytics. This requires an IT department to build a data catalog and make it discoverable with an analytics engine and tools. When a user logs in to QuickSight, it automatically discovers the list of data sources that a customer has access to and analyzes them without database configuration, setup, and so on. For instance, customers can visualize their data on an Amazon Redshift cluster by picking a table and then get to a visualization in less than 3 clicks. To enable this, we have built a live metadata catalog service that builds a catalog of data sources (e.g., Amazon Redshift, RDS, S3, Amazon EMR, and DynamoDB) to which the customer has access.

AutoGraph: Picking the right visualization is not easy, and there is a lot of science behind it. For instance, the optimal visualization depends on various factors: the type of data field one has selected (e.g., “Is it time, number, or string?”), the cardinality of the data (e.g., “Does this field have only 4 unique values or 1 million values?”), and the number of data fields one is trying to visualize. While QuickSight supports multiple graph types (e.g., bar charts, line graphs, scatter plots, box plots, pie charts, and so on), one of the things we have tried to simplify is picking the right visualization for the selected data automatically, using a technology called AutoGraph. With this, users pick which data fields they want to visualize and QuickSight automatically selects the right visual type for them.
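To make the idea concrete, here is a toy heuristic in Python. It is not QuickSight's actual AutoGraph logic; it simply illustrates how field type, cardinality, and the number of selected fields can drive the choice of visual:

```python
# Illustration only: a toy stand-in for the kind of heuristic AutoGraph applies.
# QuickSight's real selection logic is more sophisticated than this sketch.
def pick_visual(field_type, cardinality, num_fields):
    """Suggest a chart type from the selected fields' characteristics."""
    if field_type == "time":
        return "line chart"                    # trends over time read best as lines
    if field_type == "string":
        # A handful of categories suits a bar chart; huge cardinality needs grouping
        return "bar chart" if cardinality <= 20 else "grouped bar chart"
    if field_type == "number":
        return "scatter plot" if num_fields >= 2 else "histogram"
    return "table"                             # fall back to a plain table

print(pick_visual("time", 365, 1))    # line chart
print(pick_visual("string", 4, 1))    # bar chart
```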

Suggestions: Often the sheer volume of data can be overwhelming; many users just want to explore their data to learn interesting characteristics. For example, the most common query for sales data in an Amazon Redshift cluster might be “How do overall sales grow over time across different categories?” With QuickSight, we have built an engine that provides suggestions for interesting analyses when users pick a data source to analyze. The engine derives its suggestions by analyzing the metadata of the data source, its most accessed queries, and several other parameters. We believe this provides a simple way for users to derive valuable insights without too much work.

Collaboration and sharing of live analytics: Often users want to slice and dice their data and share their analysis in a secure manner. With QuickSight, users can build a “storyboard” that contains multiple analyses with appropriate annotations, and share it with others in their organization. Unlike traditional tools, they can share live analysis instead of static images so that recipients can also derive insights on the storyboard that was shared. For enterprises, we are also providing Active Directory integration so that customers can share insights using their existing credentials.

I have highlighted only some of the key innovations behind QuickSight in this post. For detailed information about this product, visit the AWS Blog, the QuickSight Detail Page and the FAQ page.

What our customers are saying about QuickSight

As I mentioned earlier, many innovations at Amazon and AWS, including QuickSight, are driven by customer feedback. We actively listen to your pain points and take on the undifferentiated heavy lifting across the various dimensions of infrastructure, data management, and analytics. This strategy of constantly listening to customer feedback and iterating rapidly on our capabilities has been a virtuous cycle that has consistently worked well for us. QuickSight started from similar roots, and during the final stages of launch I have been pleased to hear such positive feedback from customers. We have heard great excitement from customers like Nasdaq and Intuit.

Nasdaq enables their customers to plan, optimize, and execute their business vision with confidence, using proven technologies to provide transparency and insight for navigating today's global capital markets. Their technology powers more than 100 marketplaces, clearinghouses, and central securities depositories in 50 countries, and so generates a lot of data. Nate Simmons, Principal Architect of Nasdaq Inc., tells us that they are always interested in new tools to analyze the data they have stored in Amazon Redshift, Amazon S3, and other sources. For him, having super-fast performance as data volumes and usage grow is critical to their users. Based on their preview of QuickSight, they found the SPICE in-memory calculation engine combined with an easy-to-use UI to be appealing for their use cases.

Similarly, Troy Otillio, Director of Public Cloud at Intuit, tells us that based on their initial preview of QuickSight, they think this service is going to challenge the status quo. He mentions that it appears to be intuitive for their business users, particularly those in marketing who need an easy-to-use tool with super-fast performance.

Summing it all up

We are excited about the launch of Amazon QuickSight and the early feedback it has received. We believe this is one of the critical parts of our big data offerings. If you are interested in trying the product during our preview, you can sign up for the preview today.

The Startup Experience at AWS re:Invent


AWS re:Invent is just over one week away—as I prepare to head to Vegas, I’m pumped up about the chance to interact with AWS-powered startups from around the world. One of my favorite parts of the week is being able to host three startup-focused sessions Thursday afternoon:

The Startup Scene in 2016: a Visionary Panel [Thursday, 2:45PM]
In this session, I’ll moderate a diverse panel of technology experts who’ll discuss emerging trends all startups should be aware of, including how local governments, microeconomic trends, evolving accelerator programs, and the AWS cloud are influencing the global startup scene. This panel will include:

  • Tracy DiNunzio, Founder & CEO, Tradesy
  • Michael DeAngelo, Deputy CIO, State of Washington
  • Ben Whaley, Founder & Principal Consultant, WhaleTech LLC
  • Jason Seats, Managing Director (Austin), & Partner, Techstars

CTO-to-CTO Fireside Chat [Thursday, 4:15 PM]
This is one of my favorite sessions as I get a chance to sit down and get inside the minds of technical leaders behind some of the most innovative and disruptive startups in the world. I’ll have 1:1 chats with the following CTOs:

  • Laks Srini, CTO and Co-founder, Zenefits
  • Mackenzie Kosut, Head of Technical Operations, Oscar Health
  • Jason MacInnes, CTO, DraftKings
  • Gautam Golwala, CTO and Co-founder, Poshmark

4th Annual Startup Launches [Thursday, 5:30 PM]
To wrap up our startup track, in the 4th Annual Startup Launches event we’ll invite five AWS-powered startups to launch their companies on stage, immediately followed by a happy hour. I can’t share the lineup as some of these startups are in stealth mode, but I can promise you this will be an exciting event with each startup sharing a special offer, exclusive to those of you in attendance.

Other startup activities

Startup Insights from a Venture Capitalist’s Perspective [Thursday, 1:30 PM]
Immediately before I take the stage, you can join a group of venture capitalists as they share insights and observations about the global startup ecosystem: each panelist will share the most significant insight they’ve gained in the past 12 months and what they believe will be the most impactful development in the coming year.

The AWS Startup Pavilion [Tuesday – Thursday]
If you’re not able to join the startup sessions Thursday afternoon, I encourage you to swing by the AWS Startup Pavilion (within re:Invent Central, booth 1062) where you can meet the AWS startup team, mingle with other startups, chat 1:1 with an AWS architect, and learn about AWS Activate.

Startup Stop on the re:Invent Pub Crawl [Wednesday evening]
And to relax and unwind in the evening, you won’t want to miss the startup stop on the re:Invent pub crawl, at the Rockhouse within The Grand Canal Shoppes at The Venetian. This is the place to be for free food, drinks, and networking during the Wednesday night re:Invent pub crawl.

Look forward to seeing you in Vegas!

The AWS Pop-up Lofts are opening in London and Berlin


Amazon Web Services (AWS) has been working closely with the startup community in London, and across Europe, since we launched back in 2006. We have grown substantially in that time and today more than two thirds of the UK’s startups with valuations of over a billion dollars, including Skyscanner, JustEat, Powa, Fanduel and Shazam, are leveraging our platform to deliver innovative services to customers around the world.

This week I will have the pleasure of meeting up with our startup customers as we celebrate the opening of the first AWS Pop-up Loft outside of the US, in one of the greatest cities in the world: London. The London Loft opening will be followed in quick succession by our fourth Pop-up Loft opening its doors in Berlin. Both London and Berlin are vibrant cities with a concentration of innovative startups building their businesses on AWS. The Lofts will give them a physical place not only to learn about our services but also to help cultivate a community of AWS customers who can learn from each other.

Every time I’ve visited the Lofts in San Francisco and New York there has been a great buzz, with people getting advice from our solutions architects, getting training, or attending talks and demos. By opening the London and Berlin Lofts we’re hoping to cultivate that same community and expand on the base of loyal startups we have, such as Hailo, YPlan, SwiftKey, Mendeley, GoSquared, Playmob and Yoyo Wallet, to help them grow their companies globally and be successful.

You can expect some of the brightest and most creative minds in the industry to be on hand in the Lofts to help, and I’d encourage all local startups to make the most of the resources at your fingertips. These range from technology resources to access to our vast network of customers, partners, accelerators, incubators, and venture capitalists, who will all be in the Loft to help you gain the insight you need, provide advice on how to secure funding, and build the ‘softer skills’ needed to grow your businesses.

The AWS Pop-up Loft in London will be located in Moorgate and will be open Monday through Friday from September 10 to October 29, between 10am and 6pm, and later for evening events. You can go online now to make one-on-one appointments with an AWS expert, and register for boot camps and technical sessions, including:

  • Ask an Architect: an hour-long session that can be scheduled with a member of the AWS technical team. Bring your questions about AWS architecture, cost optimisation, services and features, or anything else AWS related. You can also drop in if you don’t have an appointment.
  • Technical Bootcamps: one-day training sessions taught by experienced AWS instructors and solutions architects. You will get hands-on experience using a live environment with the AWS Management Console. There is a ‘Getting Started with AWS’ bootcamp as well as a Chef bootcamp, which will show customers how they can safeguard their infrastructure, manage complexity, and accelerate time to market.
  • Self-paced Hands-on Labs: beginners through advanced users can attend these labs, which help sharpen AWS technical skills at your own pace and are available for free in the Loft during operating hours.

The London Loft will also feature an IoT Lab with a range of devices running on AWS services, many of which have been developed by our Solutions Architects. Visitors to the Loft will be able to participate in live demos and Q&A opportunities, as our technical team demonstrates what is possible with IoT on AWS.

You are all invited to join us for the grand opening party at the Loft in London on September 10 at 6PM. There will be food, drinks, a DJ, and free swag. The event will be packed, so RSVP today if you want to come and mingle with hot startups, accelerators, incubators, VCs, and our AWS technical experts. Entrance is on a first-come, first-served basis.

Look out for more details on the Berlin Loft, which will follow soon. I look forward to seeing you in new European Lofts in the coming weeks!

Today, we are releasing a plugin that allows customers to use the Titan graph engine with Amazon DynamoDB as the backend storage layer. It opens up the possibility to enjoy the value that graph databases bring to relationship-centric use cases, without worrying about managing the underlying storage.

The importance of relationships

Relationships are a fundamental aspect of both the physical and virtual worlds. Modern applications need to quickly navigate connections in the physical world of people, cities, and public transit stations as well as the virtual world of search terms, social posts, and genetic code, for example. Developers need efficient methods to store, traverse, and query these relationships. Social media apps navigate relationships between friends, photos, videos, pages, and followers. In supply chain management, connections between airports, warehouses, and retail aisles are critical for cost and time optimization. Similarly, relationships are essential in many other use cases such as financial modeling, risk analysis, genome research, search, gaming, and others. Traditionally, these connections have been stored in relational databases, with each object type requiring its own table. When using relational databases, traversing relationships requires expensive table JOIN operations, causing significantly increased latency as table size and query complexity grow.

Enter graph databases

Graph databases belong to the NoSQL family, and are optimized for storing and traversing relationships. A graph consists of vertices, edges, and associated properties. Each vertex contains a list of properties and edges, which represent the relationships to other vertices. This structure is optimized for fast relationship query and traversal, without requiring expensive table JOIN operations.
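To see why this structure avoids JOINs, consider a minimal sketch in Python (a toy in-memory graph, not Titan's data model): each vertex carries its properties and an adjacency list of labeled edges, so a traversal is just a pointer chase.

```python
# Toy property graph: vertices hold properties plus an adjacency list of labeled
# edges, so traversals follow references directly instead of JOINing tables.
graph = {
    "alice": {"props": {"city": "London"}, "edges": [("friend", "bob")]},
    "bob":   {"props": {"city": "Berlin"}, "edges": [("friend", "carol")]},
    "carol": {"props": {"city": "Dublin"}, "edges": []},
}

def neighbors(vertex_id, label):
    """Follow all edges with the given label from one vertex."""
    return [dst for (lbl, dst) in graph[vertex_id]["edges"] if lbl == label]

# Friends-of-friends for alice: two hops, touching only the relevant vertices.
fof = {v for friend in neighbors("alice", "friend") for v in neighbors(friend, "friend")}
print(fof)  # {'carol'}
```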

In this way, graphs can scale to billions of vertices and edges, while allowing efficient queries and traversal of any subset of the graph with consistent low latency that doesn’t grow proportionally to the overall graph size. This is an important benefit for many use cases that involve accessing and traversing small subsets of a large graph. A concrete example is generating a product recommendation based on purchase interests of a user’s friends, where the relevant social connections are a small subset of the total network. Another example is for tracking inventory in a vast logistics system, where only a subset of its locations is relevant for a specific item. For us at Amazon, the challenge of tracking inventory at massive scale is not just theoretical, but very real.

Graph databases at Amazon

Like many AWS innovations, the desire to build a solution for a scalable graph database came from Amazon’s retail business. Amazon runs one of the largest fulfillment networks in the world, and we need to optimize our systems to quickly and accurately track the movement of vast amounts of inventory. This requires a database that can quickly traverse the logistics history for a given item or order. Graph databases are ideal for the task, since they make it easy to store and retrieve each item’s logistics history.

Our criteria for choosing the right graph engine were:

  1. The ability to support a graph containing billions of vertices and edges.
  2. The ability to scale with the accelerating pace of new items added to the catalog, and new objects and locations in the company’s expanding fulfillment network.

After evaluating different technologies, we decided to use Titan, a distributed graph database engine optimized for creating and querying large graphs. Titan has a pluggable storage architecture, using existing NoSQL databases as underlying storage for the graph data. While the Titan-based solution worked well for our needs, the team quickly found itself having to devote an increasing amount of time to provisioning, managing, and scaling the database cluster behind Titan, instead of focusing on their original task of optimizing the fulfillment inventory tracking.

Thus, the idea was born for a robust, highly available, and scalable backend solution that wouldn’t require the burden of managing a massive storage layer. As I wrote in the past, I believe DynamoDB is a natural choice for such needs, providing developers flexibility and minimal operational overhead without compromising scale, availability, durability, or performance. Making use of Titan’s flexible architecture, we created a plugin that uses DynamoDB as the storage backend for Titan. The combination of Titan with DynamoDB is now powering Amazon’s fulfillment network, with a multi-terabyte dataset.

Sharing it with you

Today, we are happy to bring the result of this effort to customers by releasing the DynamoDB Storage Backend for Titan plugin on GitHub. The plugin provides a flexible data model for each Titan backend table, allowing developers to optimize for simplicity (single-item model) or scalability (multi-item model).

The single-item model uses a single DynamoDB item to store the edges and properties of a vertex. In DynamoDB, the vertex ID is stored as the hash key of an item, vertex property and edge identifiers are attribute names, and the vertex property values and edge property values are stored in the respective attribute values. While the single-item data model is simpler, due to DynamoDB’s 400 KB item size limit, you should only use it for graphs with fairly low vertex degree and a small number of properties per vertex.

For graphs with higher vertex degrees, the multi-item model uses multiple DynamoDB items to store properties and edges of a single vertex. In the multiple-item data model, the vertex ID remains the DynamoDB hash key, but unlike the single-item model, each column becomes the range key in its own item. Each column value is stored in its own attribute. While requiring more writes to initially load the graph, the multiple-item model allows you to store large graphs without limiting vertex degree.
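To make the two layouts concrete, here is a rough sketch using boto3. The table names, attribute names, and encodings below are invented for illustration; the actual plugin defines and manages its own backend tables.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Single-item model (illustrative layout): one item per vertex, with properties
# and edges stored as attributes. Simple, but bounded by the 400 KB item limit.
dynamodb.put_item(
    TableName="titan_single_item_demo",            # hypothetical table
    Item={
        "vertex_id": {"S": "order-1234"},          # hash key
        "prop:status": {"S": "shipped"},           # a vertex property
        "edge:contains:item-9": {"S": "qty=2"},    # an edge and its property
    },
)

# Multi-item model (illustrative layout): one item per property or edge, with the
# column identifier as the range key. More writes, but no vertex-degree limit.
for column, value in [("prop:status", "shipped"), ("edge:contains:item-9", "qty=2")]:
    dynamodb.put_item(
        TableName="titan_multi_item_demo",         # hypothetical table
        Item={
            "vertex_id": {"S": "order-1234"},      # hash key
            "column": {"S": column},               # range key
            "value": {"S": value},
        },
    )
```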

Amazon’s need for a hassle-free, scalable Titan solution is not unique. Many of our customers told us they have used Titan as a scalable graph solution, but setting up and managing the underlying storage are time-consuming chores. Several of them participated in a preview program for the plugin and are excited to offload their graph storage management to AWS. Brian Sweatt, Technical Advisor at AdAgility, explained:

“At AdAgility, we store data pertaining to advertisers and publishers, as well as transactional data about customers who view and interact with our offers. The relationships between these stakeholders lend themselves naturally to a graph database, and we plan to leverage our experience with Titan and Groovy for our next-generation ad targeting platform. Amazon's integration between Titan and DynamoDB will allow us to do that without spending time on setting up and managing the storage cluster, a no brainer for an agile, fast-growing startup.”

Another customer says that AWS makes it easier to analyze large graphs of data and relationships within the data. According to Tom Soderstrom, Chief Technology Officer at NASA’s Jet Propulsion Laboratory:

“We have begun to leverage graph databases extensively at JPL and running deep machine learning on these. The open sourced plugin for Titan over DynamoDB will help us expand our use cases to larger data sets, while enjoying the power of cloud computing in a fully managed NoSQL database. It is exciting to see AWS integrate DynamoDB with open sourced projects like Elasticsearch and Titan, while open sourcing the integrations.”

Bringing it all together

When building applications that are centered on relationships (such as social networks or master data management) or auxiliary relationship-focused use cases for existing applications (such as a recommendation engine for matching players in a game or fraud detection for a payment system), a graph database is an intuitive and effective way to achieve fast performance at scale, and should be on your database options shortlist. With this launch of the DynamoDB storage backend for Titan, you no longer need to worry about managing the storage layer for your Titan graphs, making it easy to manage even very large graphs like the ones we have here at Amazon. I am excited to hear how you are leveraging graph databases for your applications. Please share your thoughts in the comment section below.

For more information about the DynamoDB storage backend plug-in for Titan, see Jeff Barr’s blog and the Amazon DynamoDB Storage Backend for Titan topic in the Amazon DynamoDB Developer Guide.

Under the Hood of Amazon EC2 Container Service


In my last post about Amazon EC2 Container Service (Amazon ECS), I discussed the two key components of running modern distributed applications on a cluster: reliable state management and flexible scheduling. Amazon ECS makes building and running containerized applications simple, but how that happens is what makes Amazon ECS interesting. Today, I want to explore the Amazon ECS architecture and what this architecture enables. Below is a diagram of the basic components of Amazon ECS:

How we coordinate the cluster

Let’s talk about what Amazon ECS is actually doing. The core of Amazon ECS is the cluster manager, a backend service that handles the tasks of cluster coordination and state management. On top of the cluster manager sit various schedulers. Cluster management and container scheduling are decoupled from each other, allowing customers to use and build their own schedulers. A cluster is just a pool of compute resources available to a customer’s applications. The pool of resources, at this time, is the CPU, memory, and networking resources of Amazon EC2 instances as partitioned by containers. Amazon ECS coordinates the cluster through the Amazon ECS Container Agent running on each EC2 instance in the cluster. The agent allows Amazon ECS to communicate with the EC2 instances in the cluster to start, stop, and monitor containers as requested by a user or scheduler. The agent is written in Go, has a minimal footprint, and is available on GitHub under an Apache license. We encourage contributions, and feedback is most welcome.

How we manage state

To coordinate the cluster, we need to have a single source of truth on the clusters themselves: the EC2 instances in the clusters, the tasks running on the EC2 instances, the containers that make up a task, and the resources available or occupied (e.g., network ports, memory, CPU, etc.). There is no way to successfully start and stop containers without accurate knowledge of the state of the cluster. To solve this, state needs to be stored somewhere, so at the heart of any modern cluster manager is a key/value store.

This key/value store acts as the single source of truth for all information on the cluster: state, and all changes to state transitions, are entered and stored here. To be robust and scalable, this key/value store needs to be distributed for durability and availability, to protect against network partitions and hardware failures. But because the key/value store is distributed, making sure data is consistent and handling concurrent changes becomes more difficult, especially in an environment where state constantly changes (e.g., containers stopping and starting). As such, some form of concurrency control has to be put in place to make sure that multiple state changes don’t conflict. For example, if two developers request all the remaining memory resources from a certain EC2 instance for their container, only one container can actually receive those resources and the other would have to be told their request could not be completed.

To achieve concurrency control, we implemented Amazon ECS using one of Amazon’s core distributed systems primitives: a Paxos-based transactional journal data store that keeps a record of every change made to a data entry. Any write to the data store is committed as a transaction in the journal with a specific order-based ID. The current value in the data store is the sum of all transactions made, as recorded by the journal. Any read from the data store is only a snapshot in time of the journal. For a write to succeed, the write proposed must be the latest transaction since the last read. This primitive allows Amazon ECS to store its cluster state information with optimistic concurrency, which is ideal in environments where constantly changing data is shared (such as when representing the state of a shared pool of compute resources such as Amazon ECS). This architecture affords Amazon ECS high availability, low latency, and high throughput because the data store is never pessimistically locked.
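The journal itself is an internal Amazon primitive, but the optimistic concurrency idea is easy to sketch: every write names the version it was based on, and the store rejects writes whose basis is stale instead of taking a lock. A minimal in-memory illustration (not the actual ECS data store):

```python
# Minimal sketch of optimistic concurrency control: writes carry the version they
# read, stale writes are rejected, and nothing is ever pessimistically locked.
class OptimisticStore:
    def __init__(self):
        self._data = {}  # key -> (version, value)

    def read(self, key):
        return self._data.get(key, (0, None))

    def write(self, key, expected_version, value):
        current_version, _ = self._data.get(key, (0, None))
        if current_version != expected_version:
            raise RuntimeError("conflict: re-read the latest state and retry")
        self._data[key] = (current_version + 1, value)

store = OptimisticStore()
version, _ = store.read("instance-1/memory")
store.write("instance-1/memory", version, "reserved by scheduler A")  # succeeds
try:
    store.write("instance-1/memory", version, "reserved by scheduler B")  # stale basis
except RuntimeError as err:
    print(err)  # only one of the two competing requests wins
```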

Programmatic access through the API

Now that we have a key/value store, we can successfully coordinate the cluster and ensure that the desired number of containers is running because we have a reliable method to store and retrieve the state of the cluster. As mentioned earlier, we decoupled container scheduling from cluster management because we want customers to be able to take advantage of Amazon ECS’ state management capabilities. We have opened up the Amazon ECS cluster manager through a set of API actions that allow customers to access all the cluster state information stored in our key/value store in a structured manner.
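As a rough sketch of what that access looks like from the outside, here are the equivalent calls with boto3 (the task definition name is a placeholder and error handling is omitted); the next paragraph walks through each family of actions.

```python
import boto3

ecs = boto3.client("ecs")

# 'list' actions: enumerate clusters, their EC2 container instances, and tasks.
cluster = ecs.list_clusters()["clusterArns"][0]
instances = ecs.list_container_instances(cluster=cluster)["containerInstanceArns"]
tasks = ecs.list_tasks(cluster=cluster)["taskArns"]

# 'describe' actions: drill into the resources available on each instance.
for inst in ecs.describe_container_instances(
    cluster=cluster, containerInstances=instances
)["containerInstances"]:
    print(inst["ec2InstanceId"], inst["remainingResources"])

# Start and stop tasks anywhere in the cluster ("my-task-def" is hypothetical).
started = ecs.run_task(cluster=cluster, taskDefinition="my-task-def", count=1)
ecs.stop_task(cluster=cluster, task=started["tasks"][0]["taskArn"])
```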

Through ‘list’ commands, customers can retrieve the clusters under management, the EC2 instances running in a specific cluster, the running tasks, and the container configuration that makes up the tasks (i.e., the task definition). Through ‘describe’ commands, customers can retrieve details of specific EC2 instances and the resources available on each. Lastly, customers can start and stop tasks anywhere in the cluster. We recently ran a series of load tests on Amazon ECS, and we wanted to share some of the performance characteristics customers should expect when building applications on Amazon ECS.

The above graph shows a load test where we added and removed instances from an Amazon ECS cluster and measured the 50th and 99th percentile latencies of the API call ‘DescribeTask’ over a seventy-two hour period. As you can see, the latency remains relatively jitter-free despite large fluctuations in the cluster size. Amazon ECS is able to scale with you no matter how large your cluster size – all without you needing to operate or scale a cluster manager.

This set of API actions forms the basis of solutions that customers can build on top of Amazon ECS. A scheduler just provides logic around how, when, and where to start and stop containers. Amazon ECS’ architecture is designed to share the state of the cluster and allow customers to run as many varieties of schedulers (e.g., bin packing, spread, etc.) as needed for their applications. The architecture enables the schedulers to query the exact state of the cluster and allocate resources from a common pool. The optimistic concurrency control in place allows each scheduler to receive the resources it requested without the possibility of resource conflicts. Customers have already created a variety of interesting solutions on top of Amazon ECS and we want to share a few compelling examples.

Hailo – Custom scheduling atop an elastic resource pool

Hailo is a free smartphone app, which allows people to hail licensed taxis directly to their location. Hailo has a global network of over 60,000 drivers and more than a million passengers. Hailo was founded in 2011 and has been built on AWS since Day 1. Over the past few years, Hailo has evolved from a monolithic application running in one AWS region to a microservices-based architecture running across multiple regions. Previously, each microservice ran atop a cluster of instances that was statically partitioned. The problem Hailo experienced was low resource utilization across each partition. This architecture wasn’t very scalable, and Hailo didn’t want its engineers to worry about the details of the infrastructure or the placement of the microservices.

Hailo decided it wanted to schedule containers based on service priority and other runtime metrics atop an elastic resource pool. They chose Amazon ECS as the cluster manager because it is a managed service that can easily enforce task state and fully exposes the cluster state via API calls. This allowed Hailo to build a custom scheduler with logic that met their specific application needs.

Remind – Platform as a service

Remind is a web and mobile application that enables teachers to text message students and stay in touch with parents. Remind has 24M users and over 1.5M teachers on its platform. It delivers 150M messages per month. Remind initially used Heroku to run its entire application infrastructure from message delivery engine, front-end API, and web client, to chat backends. Most of this infrastructure was deployed as a large monolithic application.

As its user base grew, Remind wanted the ability to scale horizontally. So around the end of 2014, the engineering team started to explore moving towards a microservices architecture using containers. The team wanted to build a platform as a service (PaaS) that was compatible with the Heroku API on top of AWS. At first, the team looked to a few open-source solutions (e.g., CoreOS and Kubernetes) to handle the cluster management and container orchestration, but the engineering team was small, so they didn’t have the time to manage the cluster infrastructure and keep the cluster highly available.

After briefly evaluating Amazon ECS, the team decided to build their PaaS on top of this service. Amazon ECS is fully managed and provides operational efficiency, allowing engineering resources to focus on developing and deploying applications; there are no clusters to manage or scale. In June, Remind open-sourced their PaaS solution on ECS as “Empire”. Remind saw large improvements in performance (e.g., latency and stability) with Empire, as well as security benefits. Their plan over the next few months is to migrate over 90% of the core infrastructure onto Empire.

Amazon ECS – a fully managed platform

These are just a couple of the use cases we have seen from customers. The Amazon ECS architecture allows us to deliver a highly scalable, highly available, low latency container management service. The ability to access shared cluster state with optimistic concurrency through the API empowers customers to create whatever custom container solution they need. We have focused on removing the undifferentiated heavy lifting for customers. With Amazon ECS, there is no cluster manager to install or operate: customers can and should just focus on developing great applications.

We have delivered a number of features since our preview last November. Head over to Jeff Barr’s blog for a recap of the features we have added over the past year. Read our documentation and visit our console to get started. We still have a lot more on our roadmap and we value your feedback: please post questions and requests to our forum or on /r/aws.

Back-to-Basics Weekend Reading - Data Compression


Data compression today is still as important as it was in the early days of computing. Although in those days all computer and storage resources were very limited, the objects in use were much smaller than today. We have seen a shift from generic compression to compression for specific file types, especially for images, audio, and video. In this weekend's back-to-basics reading we go back in time, to 1987 to be specific, when Lelewer and Hirschberg wrote a survey paper covering 40 years of data compression research. It has all the qualities we like in a back-to-basics paper: it does not present the most modern results, but it gives you a great understanding of the fundamentals. It is a substantial paper but easy to read.

Data Compression, D.A. Lelewer and D.S. Hirschberg, Computing Surveys 19, 3 (1987), 261-297.

In just three short years, Amazon DynamoDB has emerged as the backbone for many powerful Internet applications such as AdRoll, Druva, DeviceScape, and Battlecamp. Many happy developers are using DynamoDB to handle trillions of requests every day. I am excited to share with you that today we are expanding DynamoDB with streams, cross-region replication, and database triggers. In this blog post, I will explain how these three new capabilities empower you to build applications with distributed systems architecture and create responsive, reliable, and high-performance applications using DynamoDB that work at any scale.

DynamoDB Streams enables your application to get real-time notifications of your tables’ item-level changes. Streams provide you with the underlying infrastructure to create new applications, such as continuously updated free-text search indexes, caches, or other creative extensions requiring up-to-date table changes. DynamoDB Streams is the enabling technology behind two other features announced today: cross-region replication maintains identical copies of DynamoDB tables across AWS regions with push-button ease, and triggers execute AWS Lambda functions on streams, allowing you to respond to changing data conditions. Let me expand on each one of them.

DynamoDB Streams

DynamoDB Streams provides you with a time-ordered sequence, or change log, of all item-level changes made to any DynamoDB table. The stream is exposed via the familiar Amazon Kinesis interface. Using streams, you can apply the changes to a full-text search data store such as Elasticsearch, push incremental backups to Amazon S3, or maintain an up-to-date read cache.

I have heard from many of you that one of the common challenges you have is keeping DynamoDB data in sync with other data sources, such as search indexes or data warehouses. In traditional database architectures, a small search engine or data warehouse engine often runs on the same hardware as the database. However, the model of collocating all engines in a single database turns out to be cumbersome because the scaling characteristics of a transactional database are different from those of a search index or data warehouse. A more scalable option is to decouple these systems and build a pipe that connects these engines and feeds all change records from the source database to the data warehouse (e.g., Amazon Redshift) and Elasticsearch machines.

The velocity and variety of data that you are managing continues to increase, making your task of keeping up with the change more challenging as you want to manage the systems and applications in real time and respond to changing conditions. A common design pattern is to capture transactional and operational data (such as logs) that require high throughput and performance in DynamoDB, and provide periodic updates to search clusters and data warehouses. However, in the past, you had to write code to manage the data changes and deal with keeping the search engine and data warehousing engines in sync. For cost and manageability reasons, some developers have collocated the extract job, the search cluster, and data warehouses on the same box, leading to performance and scalability compromises. DynamoDB Streams simplifies and improves this design pattern with a distributed systems approach.

You can enable the DynamoDB Streams feature for a table with just a few clicks using the AWS Management Console, or you can use the DynamoDB API. Once configured, you can use an Amazon EC2 instance to read the stream using the Amazon Kinesis interface, and apply the changes in parallel to the search cluster, the data warehouse, and any number of data consumers. You can read the changes as they occur in real time or in batches as per your requirements. At launch, an item’s change record is available in the stream for 24 hours after it is created. An AWS Lambda function is a simpler option that you can use, as it only requires you to code the logic, set it, and forget it.
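As a sketch of the polling path, here is what reading a stream looks like with the low-level API in boto3 (the table name is a placeholder; in practice most applications use the Kinesis Client Library adapter or a Lambda function rather than hand-rolled polling):

```python
import boto3

dynamodb = boto3.client("dynamodb")
streams = boto3.client("dynamodbstreams")

# Find the stream for a table (table name is hypothetical) and walk its shards.
stream_arn = dynamodb.describe_table(TableName="orders")["Table"]["LatestStreamArn"]
shards = streams.describe_stream(StreamArn=stream_arn)["StreamDescription"]["Shards"]

for shard in shards:
    iterator = streams.get_shard_iterator(
        StreamArn=stream_arn,
        ShardId=shard["ShardId"],
        ShardIteratorType="TRIM_HORIZON",  # start from the oldest available change
    )["ShardIterator"]
    for record in streams.get_records(ShardIterator=iterator)["Records"]:
        # Each record is an item-level change: INSERT, MODIFY, or REMOVE.
        print(record["eventName"], record["dynamodb"].get("Keys"))
```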

No matter which mechanism you choose to use, we make the stream data available to you instantly (latency in milliseconds) and how fast you want to apply the changes is up to you. Also, you can choose to program post-commit actions, such as running aggregate analytical functions or updating other dependent tables. This new design pattern allows you to keep your remote data consumers current with the core transactional data residing in DynamoDB at a frequency you desire and scale them independently, thereby leading to better availability, scalability, and performance. The Amazon Kinesis API model gives you a unified programming experience between your streaming apps written for Amazon Kinesis and DynamoDB Streams.

DynamoDB Cross-region Replication

Many modern database applications rely on cross-region replication for disaster recovery, minimizing read latencies (by making data available locally), and easy migration. Today, we are launching cross-region replication support for DynamoDB, enabling you to maintain identical copies of DynamoDB tables across AWS regions with just a few clicks. We have provided you with an application with a simple UI to set up and manage cross-region replication groups and build globally-distributed applications with ease. When you set up a replication group, DynamoDB automatically configures a stream between the tables, bootstraps the original data from source to target, and keeps the two in sync as the data changes. We have publicly shared the source code for the cross-region replication utility, which you can extend to build your own versions of data replication, search, or monitoring applications.

A great example of the application of this cross-region replication functionality is Mapbox, a popular mapping platform that enables developers to integrate location information into their mobile or online applications. Mapbox deals with location data from all over the globe, and their key focus areas have been availability and performance. Having been part of the preview program, Jake Pruitt, Software Developer at Mapbox, told us, “DynamoDB Streams unlocks cross-region replication - a critical feature that enabled us to fully migrate to DynamoDB. Cross-region replication allows us to distribute data across the world for redundancy and speed.” The new feature enables them to deliver better availability and improved performance because they can access all needed data from the nearest data center.

DynamoDB Triggers

From the dawn of databases, the pull method has been the preferred model for interaction with a database. To retrieve data, applications are expected to make API calls and read the data. To get updates from a table, customers have to constantly poll the database with another API call. Relational databases use triggers as a mechanism to enable applications to respond to data changes. However, the execution of the triggers happens on the same machine as the one that runs the database and an errant trigger can wreak havoc on the whole database. In addition, such mechanisms do not scale well for fast-moving data sets and large databases.

To achieve a truly scalable, high-performance, and flexible system, we need to decouple the execution of triggers from the database and bring the data changes to the applications as they occur. Enter DynamoDB Triggers—an event-driven mechanism that enables developers to define Java or JavaScript functions that run outside the database in response to specific data changes in your DynamoDB tables. Specifically, these functions are configured and executed as AWS Lambda functions, giving you the ability to scale on the fly and only pay for the fractions of the computing seconds consumed. All you need to do is register the AWS Lambda function that needs to be executed in response to a specific data change in the DynamoDB table. Lambda and DynamoDB take care of the rest. DynamoDB creates a stream and pushes the data to the trigger code. Lambda automatically creates and manages the resources needed to handle the trigger. Since the Lambda function executes on hosts that are different from that of the DynamoDB table, both the DynamoDB table and Lambda function scale independently, thus isolating the risk of errant triggers.
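The shape of a trigger is easiest to see in code. Here is a bare-bones sketch of a handler that reacts to item-level changes; the attribute names and the low-inventory rule are invented for illustration, and the sketch is written in Python for consistency with the other examples in this post (the runtimes described above are Java and JavaScript):

```python
# Sketch of a DynamoDB trigger: Lambda invokes the handler with a batch of
# stream records, and the function reacts to the item-level changes it sees.
def handler(event, context):
    for record in event["Records"]:
        change = record["dynamodb"]
        if record["eventName"] in ("INSERT", "MODIFY"):
            new_image = change.get("NewImage", {})
            # Hypothetical business rule: flag items whose stock drops below 10.
            quantity = int(new_image.get("quantity", {}).get("N", "0"))
            if quantity < 10:
                print("Low inventory detected for", change["Keys"])
    return {"records_processed": len(event["Records"])}
```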

Triggers are powerful mechanisms that react to events dynamically and in real time. Here is a practical real-world example of how triggers can be very useful to businesses: TOKYU HANDS is a fast-growing business with over 70 shops all over Japan. Their cloud architecture has two main components: a point-of-sales system and a merchandising system. The point-of-sales system records changes from all the purchases and stores them in DynamoDB. The merchandising system is used to manage the inventory and identify the right timing and quantity for refilling the inventory. The key challenge for them has been keeping these systems constantly in sync. After previewing DynamoDB Triggers, Naoyuki Yamazaki, Cloud Architect, told us, “TOKYU HANDS is running in-store point-of-sales system backed by DynamoDB and various AWS services. We really like the full-managed service aspect of DynamoDB. With DynamoDB Streams and DynamoDB Triggers, we would now make our systems more connected and automated to respond faster to changing data such as inventory.” This new feature will help them manage inventory better to deliver a good customer experience while gaining more business efficiency.

You can also use triggers to power many modern Internet of Things (IoT) use cases. For example, you can program home sensors to write the state of temperature, water, gas, and electricity directly to DynamoDB. Then, you can set up Lambda functions to listen for updates on the DynamoDB tables and automatically notify users via mobile devices whenever specific levels of changes are detected.

Summing It All Up

If you are building mobile, ad-tech, gaming, web, or IoT applications, you can use DynamoDB to build globally distributed applications that deliver consistent and fast performance at any scale. With the three new features that I mentioned, you can now enrich those applications to consume high-velocity data changes and react to updates in near real time. What this means is that, with DynamoDB, you are now empowered to create unique applications that have been difficult and expensive to build and manage before. Let me illustrate with an example.

Let’s say that you are managing a supply-chain system with suppliers all over the globe. We all know the advantages that real-time inventory management can provide to such a system. Yet, building such a system that provides speed, scalability, and reliability at a low cost is not easy. On top of this, adding real-time updates for inventory management or extending the system with custom business logic with your own IT infrastructure is complex and costly.

This is where AWS and DynamoDB with cross-region replication, DynamoDB Triggers, and DynamoDB Streams can serve as a one-stop solution that handles all your requirements of scale, performance, and manageability, leaving you to focus on your business logic. All you need to do is to write the data from your products into DynamoDB. As illustrated in this example, if you use RFID tags on your products, you can directly feed the data from the scanners into DynamoDB. Then, you can use cross-region replication to sync the data across multiple AWS regions and bring the data close to your supply base. You can use triggers to monitor for inventory changes and send notifications in real time. To top it all off, you will have the flexibility to extend the DynamoDB Streams functionality for your custom business requirements. For example, you can feed the updates from the stream into a search index and use it for a custom search solution, thereby enabling your internal systems to locate the updates to inventory based on text searches. When you put it all together, you have a powerful business solution that scales to your needs, lets you pay only for what you provision, and helps you differentiate your offerings in the market and drive your business forward faster than before.

The combination of DynamoDB Streams, cross-region replication, and DynamoDB Triggers certainly offers immense potential to enable new and intriguing user scenarios with significantly less effort. You can learn more about these features on Jeff Barr’s blog. I am definitely eager to hear how each of you will use streams to drive more value for your businesses. Feel free to add a comment below and share your thoughts.

Today, Amazon announced the Alexa Skills Kit (ASK), a collection of self-service APIs and tools that make it fast and easy for developers to create new voice-driven capabilities for Alexa. With a few lines of code, developers can easily integrate existing web services with Alexa or, in just a few hours, they can build entirely new experiences designed around voice. No experience with speech recognition or natural language understanding is required—Amazon does all the work to hear, understand, and process the customer’s spoken request so you don’t have to. All of the code runs in the cloud — nothing is installed on any user device.

The easiest way to build a skill for Alexa is to use AWS Lambda, an innovative compute service that runs a developer’s code in response to triggers and automatically manages the compute resources in the AWS Cloud, so there is no need for a developer to provision or continuously run servers. Developers simply upload the code for the new Alexa skill they are creating, and AWS Lambda does the rest, executing the code in response to Alexa voice interactions and automatically managing the compute resources on the developer’s behalf.

Using a Lambda function for your service also eliminates some of the complexity around setting up and managing your own endpoint:

  • You do not need to administer or manage any of the compute resources for your service.
  • You do not need an SSL certificate.
  • You do not need to verify that requests are coming from the Alexa service yourself. Access to execute your function is controlled by permissions within AWS instead.
  • AWS Lambda runs your code only when you need it and scales with your usage, so there is no need to provision or continuously run servers.
  • For most developers, the Lambda free tier is sufficient for the function supporting an Alexa skill. The first one million requests each month are free. Note that the Lambda free tier does not automatically expire, but is available indefinitely.

AWS Lambda supports code written in Node.js (JavaScript) and Java. You can copy JavaScript code directly into the inline code editor in the AWS Lambda console or upload it in a zip file. For basic testing, you can invoke your function manually by sending it JSON requests in the Lambda console.
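To give a feel for the shape of a skill, here is a bare-bones handler sketch. The intent name is invented, the response follows the Alexa Skills Kit JSON format, and the sketch is in Python only to stay consistent with the other examples in this post (the supported runtimes are the Node.js and Java ones described above):

```python
# Minimal sketch of an Alexa skill handler: inspect the request type and intent,
# then return plain-text speech in the Alexa Skills Kit response format.
def handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        text = "Welcome! Ask me to say hello."
    elif request["type"] == "IntentRequest" and request["intent"]["name"] == "HelloIntent":
        text = "Hello from your first Alexa skill."   # "HelloIntent" is hypothetical
    else:
        text = "Sorry, I did not understand that."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```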

In addition, Amazon announced today that the Alexa Voice Service (AVS), the same service that powers Amazon Echo, is now available to third party hardware makers who want to integrate Alexa into their devices—for free. For example, a Wi-Fi alarm clock maker can create an Alexa-enabled clock radio, so a customer can talk to Alexa as they wake up, asking “What’s the weather today?” or “What time is my first meeting?” Read the press release here.

Got an innovative idea for how voice technology can improve customers’ lives? The Alexa Fund was also announced today and will provide up to $100 million in investments to fuel voice technology innovation. Whether that’s creating new Alexa capabilities with the Alexa Skills Kit, building devices that use Alexa for new and novel voice experiences using the Alexa Voice Service, or something else entirely, if you have a visionary idea, Amazon would love to hear from you.

For more details about Alexa you can check out today’s announcements on the AWS blog and Amazon Appstore blog.

This weekend we go back in time all the way to the beginning of operating systems research. At the first SOSP conference in 1967, several papers were presented that laid the foundation for the development of structured operating systems. There was, of course, the lauded paper on the THE operating system by Dijkstra, but for this weekend I picked the paper on memory locality by Peter Denning, as this work laid the groundwork for the development of virtual memory systems.

The Working Set Model for Program Behavior, Peter J. Denning, Proceedings of the First ACM Symposium on Operating Systems Principles, October 1967, Gatlinburg, TN, USA.