Transforming Development with AWS


In my keynote at AWS re:Invent today, I announced 13 new features and services (in addition to the 15 we announced yesterday).

My favorite parts of James Bond movies are where 007 gets to visit Q to pick up and learn about new tools of the trade: super-powered tools with special features that he can use to complete his missions, and, in some cases, get out of some nasty scrapes. Bond always seems to have the perfect tool for every situation that he finds himself in. *

At AWS, we want to be the Q for developers, giving them the super-powered tools and services with deep features in the Cloud. In the hands of builders, the impact of these services has been to completely transform the way applications are developed, debugged, and delivered to customers.

I was joined by 32,000 James Bonds at the conference today from all around the world, and we introduced new services focused on accelerating this transformation across development, testing and operations, data and analytics, and computation itself.

Transformation in Development, Testing, & Operations

Although development and operations are often overlooked, they are the engines of agility for most organizations. Companies cannot afford to wait two or three years between releases; customers have found that frequently releasing incremental functionality to customers reduces risk and improves quality.

Today, we're making available broad new services that let builders prepare and operate their applications more quickly and efficiently, and respond swiftly to changes in both their business and their operating environment. We launched the following new services and features today to help.

AWS OpsWorks for Chef Automate: A fully managed Chef Automate environment, available through AWS OpsWorks to fuel even more automation and reduce the heavy lifting associated with continuous deployment.

Amazon EC2 Systems Manager: A collection of tools for package installation, patching, resource configuration, and task automation on Amazon EC2.

AWS CodeBuild: A new, fully managed, extensible service for compiling source code and running unit tests, integrated with other application lifecycle management services, such as AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline, to dramatically decrease the time between iterations of software.

AWS X-Ray: A new service to analyze, visualize, and debug distributed applications, allowing builders to identify performance bottlenecks and errors.

Personal Health Dashboard: A new personalized view of AWS service health for all customers, allowing developers to gain visibility into service health issues that may be affecting their applications.

AWS Shield: Protective armor against distributed denial of service (DDoS) attacks, available as Shield Standard and Shield Advanced. Shield Standard gives DDoS protection to all customers using API Gateway, Elastic Load Balancing, Route 53, CloudFront, and EC2. Shield Advanced protects against more sophisticated DDoS attacks, with access to help through a 24x7 AWS DDoS response team.

Transformation in Data

In the old world, access to infrastructure resources was a big differentiator for big, wealthy companies. No more. Today, any developer can have access to a wealth of infrastructure technology services that bring advanced technology to their fingertips in the Cloud. The days of differentiation through infrastructure are behind us; the technology is now evenly distributed.

Instead, most companies today and in the future will differentiate themselves through the data that they collect and have access to, and the way in which they can put that data to work for the benefit of their customers. We rolled out three new services today to make that easier:

Amazon Pinpoint: A data-driven engagement service for mobile apps. Define which segment of customers to engage with, schedule a push notification engagement campaign, and track the results in real time.

AWS Batch: Fully managed batch processing at any scale, with no batch processing software to install or servers to manage. AWS Batch dynamically provisions compute resources and optimizes task distribution based on volume and resource requirements.

AWS Glue: A fully managed data catalog and ETL service that makes it easy to move data between data stores, while also simplifying and automating time-consuming data discovery, conversion, mapping, and job scheduling tasks.

Transformation in Compute

Amazon EC2 made it possible to build application architectures in a way we had always wanted to, and over the past decade it has given us the opportunity to build secure, resilient, available applications with decoupled application components that can be scaled independently and updated more frequently. When I talk to our customers, I hear time and again how they are taking these transformative principles to the next level by building smaller, more discrete, distributed components using containers and AWS Lambda.

Today, we're accelerating this transformation with a new distributed application coordination service, new Lambda functionality, and an open source container framework:

AWS Step Functions: Coordinate the components of distributed applications using visual workflows. Step through functions at scale.

Lambda@Edge: Enable Lambda functions in Edge locations, and run functions in response to CloudFront events. We also added C# support for AWS Lambda.

Blox: A collection of open source projects for container management and orchestration.

Thirteen new services and major features focused on developers. We're excited to see how customers will put these new features to work.

*: Sean Connery is the definitive Bond.

Bringing the Magic of Amazon AI and Alexa to Apps on AWS


From the early days of Amazon, machine learning (ML) has played a critical role in the value we bring to our customers. Around 20 years ago, we used machine learning in our recommendation engine to generate personalized recommendations for our customers. Today, there are thousands of machine learning scientists and developers applying machine learning in various places, from recommendations to fraud detection, from inventory levels to book classification to abusive review detection. There are many more application areas where we use ML extensively: search, autonomous drones, robotics in fulfillment centers, text processing, speech recognition (such as in Alexa), and more.

Among machine learning algorithms, a class of algorithms called deep learning has come to represent those algorithms that can absorb huge volumes of data and learn elegant and useful patterns within that data: faces inside photos, the meaning of a text, or the intent of a spoken word.

After over 20 years of developing these machine learning and deep learning algorithms and the end user services listed above, we understand the needs of both the machine learning scientist community that builds these algorithms and the app developers who use them. We also have a great deal of machine learning technology that can benefit machine learning scientists and developers working outside Amazon. Last week, I wrote a blog post about helping the machine learning scientist community select the right deep learning framework from among the many we support on AWS, such as MXNet, TensorFlow, and Caffe.

Today, I want to focus on helping app developers who have chosen to develop their apps on AWS. These developers have built some of the seminal apps of our times on AWS, such as Netflix, Airbnb, and Pinterest, and created internet-connected devices powered by AWS, such as Alexa and Dropcam. Many app developers have been intrigued by the magic of Alexa and other AI-powered products they see being offered or used by Amazon, and want our help in developing their own magical apps that can hear, see, speak, and understand the world around them.

For example, they want us to help them develop chatbots that understand natural language, build Alexa-style conversational experiences for mobile apps, dynamically generate speech without using expensive voice actors, and recognize concepts and faces in images without requiring human annotators. However, until now, very few developers have been able to build, deploy, and broadly scale applications with AI capabilities because doing so required specialized expertise (with Ph.D.s in ML and neural networks) and access to vast amounts of data. Effectively applying AI involves extensive manual effort to develop and tune many different types of machine learning and deep learning algorithms (e.g. automatic speech recognition, natural language understanding, image classification), collect and clean the training data, and train and tune the machine learning models. And this process must be repeated for every object, face, voice, and language feature in an application.

Today, I am excited to announce that we are launching three new Amazon AI services that eliminate all of this heavy lifting, making AI broadly accessible to all app developers by offering Amazon's powerful and proven deep learning algorithms and technologies as fully managed services that any developer can access through an API call or a few clicks in the AWS Management Console. These services, Amazon Lex, Amazon Polly, and Amazon Rekognition, will help AWS app developers build the next generation of magical, intelligent apps. Amazon AI services make the full power of Amazon's natural language understanding, speech recognition, text-to-speech, and image analysis technologies available at any scale, for any app, on any device, anywhere.

Amazon Lex

After the launch of the Alexa Skills Kit (ASK), customers loved the ability to build voice bots, or skills, for Alexa. They also started asking us to give them access to the technology that powers Alexa, so that they could add a conversational interface (using voice or text) to their mobile apps. They also wanted the capability to publish their bots on chat services like Facebook Messenger and Slack.

Amazon Lex is a new service for building conversational interfaces using voice and text. The same conversational engine that powers Alexa is now available to any developer, making it easy to bring sophisticated, natural language 'chatbots' to new and existing applications. The power of Alexa in the hands of every developer, without having to know deep learning technologies like speech recognition, has the potential to spark innovation in entirely new categories of products and services. Developers can now quickly and easily build powerful conversational interfaces that operate at any scale, on any device.

The speech recognition and natural language understanding technology behind Amazon Lex and Alexa is powered by deep learning models that have been trained on massive amounts of data. Developers can simply specify a few sample phrases and the information required to complete a user's task, and Lex builds the deep learning based intent model, guides the conversation, and executes the business logic using AWS Lambda. Developers can build, test, and deploy chatbots directly from the AWS Management Console. These chatbots can be accessed anywhere: from web applications, chat and messenger apps such as Facebook Messenger (with support for exporting to Alexa Skills Kit and Slack support coming soon), or connected devices. Developers can also effortlessly include their Amazon Lex bots in their own iOS and Android mobile apps using the new Conversational Bots feature in AWS Mobile Hub.
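To make that concrete, here is a minimal sketch of calling a published Lex bot from Python with boto3. The bot name, alias, and user ID below are hypothetical placeholders for the values you would get from a bot you have built and published yourself.

```python
# A minimal sketch of calling a published Lex bot from Python with boto3.
# The bot name, alias, and user ID are hypothetical placeholders.
import boto3

lex = boto3.client('lex-runtime', region_name='us-east-1')

response = lex.post_text(
    botName='OrderFlowers',    # hypothetical bot built in the console
    botAlias='prod',           # hypothetical published alias
    userId='demo-user-42',     # identifies this conversation
    inputText='I would like to order some flowers')

# Lex returns the recognized intent, the slot values it extracted so far,
# and the next prompt to show the user
print(response.get('intentName'), response.get('slots'))
print(response.get('dialogState'), response.get('message'))
```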

Recently, a few selected customers participated in a private beta of Amazon Lex. They provided us with valuable feedback as we readied Amazon Lex for its preview launch. I am excited to share some of the feedback from our beta customers HubSpot and Capital One.

HubSpot, a marketing and sales software leader, uses a chatbot called GrowthBot to help marketers and sales personnel be more productive by providing access to relevant data and services. Dharmesh Shah, HubSpot CTO and Founder, tells us that Amazon Lex enabled sophisticated natural language processing capabilities on GrowthBot to provide a more intuitive UI for customers. HubSpot could take advantage of advanced AI and ML capabilities provided by Amazon Lex, without having to code the algorithms.

Capital One offers a broad spectrum of financial products and services to consumers, small businesses, and commercial clients through a variety of channels. Firoze Lafeer, CTO Capital One Labs, tells us that Amazon Lex enables customers to query for information through voice or text in natural language and derive key insights into their accounts. Because Amazon Lex is powered by Alexa's technology, it provides Capital One with a high level of confidence that customer interactions are accurate, allowing easy deployment and scaling of bots.

Amazon Polly

The concept of a computer being able to speak with a human-like voice goes back almost as far as ENIAC (the first electronic programmable computer). The concept has been explored by many popular science fiction movies and TV shows, such as "2001: A Space Odyssey" with HAL 9000, or the Star Trek computer and Commander Data, which have defined the perception of computer-generated speech.

Text-to-speech (TTS) systems have been widely adopted in a variety of real-life scenarios, such as telephony systems with automated speech responses or aids for visually or speech-impaired people. Prof. Stephen Hawking's voice is probably the most famous example of synthetic speech used to help the disabled.

TTS systems have continuously evolved through the last few decades and are nowadays capable of delivering fairly natural-sounding speech. Today, TTS is used in a large variety of use cases and is turning into a ubiquitous element of user interfaces. Alexa and its TTS voice are yet another step towards building an intuitive and natural language interface that follows the pattern of human communication.

With Amazon Polly, we are making the same TTS technology used to build Alexa's voice available to AWS customers. It is now available to any developer aiming to power their apps with high-quality spoken output.

In order to mimic human speech, we needed to address a variety of challenges. We needed to learn how to interpret various text structures such as acronyms, abbreviations, numbers, or homographs (words spelled the same but pronounced differently and having different meanings). For example:

I heard that Outlander is a good read, though I haven't read it yet, or

St. Mary's Church is at 226 St. Mary's St.

Last but not least, as the quality of TTS gets better and better, we expect natural intonation matching the semantics of synthesized texts. Traditional rule-based models and ML techniques, such as classification and regression trees (CART) and hidden Markov models (HMM), have limitations in modeling the complexity of this process. Deep learning has shown its capacity to represent complex and nonlinear relationships at different levels of the speech synthesis process. The TTS technology behind Amazon Polly takes advantage of bidirectional long short-term memory (LSTM) networks, using a massive amount of data to train models that convert letters to sounds and predict the intonation contour. This technology enables high naturalness, consistent intonation, and accurate processing of texts.
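As a quick illustration, here is a short sketch that uses boto3 to synthesize one of the example sentences above to an MP3 file; 'Joanna' is one of Polly's built-in voices.

```python
# A short sketch of synthesizing speech with Amazon Polly via boto3 and
# saving the result as an MP3 file.
import boto3

polly = boto3.client('polly', region_name='us-east-1')

response = polly.synthesize_speech(
    Text="St. Mary's Church is at 226 St. Mary's St.",
    OutputFormat='mp3',
    VoiceId='Joanna')

# AudioStream is a streaming body; write it out to disk
with open('speech.mp3', 'wb') as f:
    f.write(response['AudioStream'].read())
```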

Amazon Polly customers have confirmed the high quality of generated speech for their use cases. Duolingo uses Amazon Polly voices for language learning applications, where quality is critical. Severin Hacker, the CTO of Duolingo, acknowledged that Amazon Polly voices are not just high in quality, but are as good as natural human speech for teaching a language.

The Royal National Institute of Blind People uses the Amazon TTS technology to support the visually impaired through its library of books, the largest of its kind in the UK. John Worsfold, Solutions Implementation Manager at RNIB, confirmed that Amazon Polly's incredibly lifelike voices captivate and engage RNIB readers.

Amazon Rekognition

We live in a world that is undergoing digital transformation at a rapid rate. One key outcome of this is the explosive growth of images generated and consumed by applications and services across different segments and industries. Whether it is a consumer app for photo sharing or printing, or the organization of images in the archives of media and news organizations, or filtering images for public safety and security, the need to derive insight from the visual content of the images continues to grow rapidly.

There is an inherent gap between the number of images created and stored, and the ability to capture the insight that can be derived from these images. Put simply, most image stores are not searchable, organized, or actionable. While a few solutions exist, customers have told us that they don't scale well, are not reliable, are too expensive, rely on complex pipelines to annotate, verify, and process massive amounts of data for training and testing algorithms, need a team of highly specialized and skilled data scientists, and require costly and highly specialized hardware. For companies that have successfully built a pipeline for image analysis, maintaining and improving it while keeping up with the research in this space proves to be high friction. Amazon Rekognition solves these problems.

Amazon Rekognition is a fully managed, deep-learning–based image analysis service, built by our computer vision scientists with the same proven technology that already analyzes billions of images daily for Amazon Prime Photos. Amazon Rekognition democratizes the application of deep learning techniques for detecting objects, scenes, concepts, and faces in your images, comparing faces between two images, and performing search across millions of facial feature vectors that your business can store with Amazon Rekognition. Amazon Rekognition's easy-to-use API, which is integrated with Amazon S3 and AWS Lambda, brings deep learning to your object store.

Getting started with Rekognition is simple. Let's walk through some of the core features of Rekognition that help you build powerful search, filter, organization, and verification applications for images.

Object and scene detection

Given an image, Amazon Rekognition detects objects, scenes, and concepts, and then generates labels, each with a confidence score. Businesses can use this metadata to create searchable indexes for social sharing and printing apps, categorization for news and media image archives, or filters for targeted advertising. If you are uploading your images to Amazon S3, it is easy to invoke an AWS Lambda function that passes the image to Amazon Rekognition and persists the labels with confidence scores into an Elasticsearch index.
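As a rough sketch of that pipeline, the hypothetical Lambda handler below reads the bucket and key from the S3 event and calls Rekognition's DetectLabels; the Elasticsearch indexing step is left as a comment, since cluster endpoints vary.

```python
# A sketch of the pipeline described above: a Lambda function triggered by
# an S3 upload that asks Amazon Rekognition for labels.
import boto3

rekognition = boto3.client('rekognition')

def handler(event, context):
    # S3 put events carry the bucket and key of the newly uploaded image
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = record['object']['key']

    result = rekognition.detect_labels(
        Image={'S3Object': {'Bucket': bucket, 'Name': key}},
        MaxLabels=10,
        MinConfidence=70)

    labels = [(label['Name'], label['Confidence'])
              for label in result['Labels']]
    # ...persist `labels` into your Elasticsearch index here...
    return labels
```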

Facial analysis

Given any image, you can now detect the faces present and derive face attributes like demographic information, sentiment, and key landmarks from the face. With this fast and accurate API, retail businesses can respond to their customers online or in store immediately by delivering targeted ads. These attributes can also be stored in Amazon Redshift to generate deeper insights into their customers.
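A minimal sketch of that call with boto3 might look like the following; the bucket and object names are hypothetical.

```python
# A minimal sketch of facial analysis with boto3. Attributes=['ALL'] asks
# Rekognition for the full set of face attributes it supports.
import boto3

rekognition = boto3.client('rekognition')

response = rekognition.detect_faces(
    Image={'S3Object': {'Bucket': 'my-photo-bucket',   # hypothetical bucket
                        'Name': 'storefront.jpg'}},    # hypothetical key
    Attributes=['ALL'])

for face in response['FaceDetails']:
    # Each face carries an estimated age range, emotions with confidence
    # scores, and facial landmark positions
    print(face['AgeRange'], face['Emotions'][0], len(face['Landmarks']))
```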

Face recognition

Amazon Rekognition's face comparison and face search features can provide businesses with face-based authentication, verification of identity, and the ability to detect the presence of a specific person in a collection of images. Whether simply comparing faces present in two images using the CompareFaces API, or creating a collection of faces by invoking Amazon Rekognition's IndexFaces API, businesses can rely on our focus on security and privacy, as no images are stored by Rekognition. Each detected face is transformed into an irreversible vector representation, and this feature vector (and not the underlying image itself) is used for comparison and search.
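The sketch below strings together the calls just described; the collection ID, bucket, and image names are hypothetical.

```python
# A sketch of face comparison and face search with boto3. IndexFaces
# stores only feature vectors, never the images themselves.
import boto3

rekognition = boto3.client('rekognition')
bucket = 'my-photo-bucket'

# Compare the largest face in a source image against faces in a target image
match = rekognition.compare_faces(
    SourceImage={'S3Object': {'Bucket': bucket, 'Name': 'id-photo.jpg'}},
    TargetImage={'S3Object': {'Bucket': bucket, 'Name': 'gate-camera.jpg'}},
    SimilarityThreshold=90)
print(match['FaceMatches'])

# Build a searchable collection of face vectors, then search it by image
rekognition.create_collection(CollectionId='employees')
rekognition.index_faces(
    CollectionId='employees',
    Image={'S3Object': {'Bucket': bucket, 'Name': 'badge-photo.jpg'}})
hits = rekognition.search_faces_by_image(
    CollectionId='employees',
    Image={'S3Object': {'Bucket': bucket, 'Name': 'new-visitor.jpg'}})
print(hits['FaceMatches'])
```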

I am pleased to share some of the positive feedback from our beta customers.

Redfin is a full-service brokerage that uses modern technology to help people buy and sell houses. Yong Huang, Director of Big Data & Analytics at Redfin, tells us that Redfin users love to browse images of properties on its site and mobile apps, and Redfin wants to make it easier for users to sift through hundreds of millions of listings and images. He also added that Amazon Rekognition generates a rich set of tags directly from images of properties. This makes it relatively simple for them to build a smart search feature that helps customers discover houses based on their specific needs. And, because Amazon Rekognition accepts Amazon S3 URLs, it is a huge time-saver for them to detect objects, scenes, and faces without having to move images around.

Summing it all up

We are in the early days of machine learning and artificial intelligence. As we say at Amazon, we are still in Day 1. Yet, we are already seeing the tremendous value and magical experience Amazon AI can bring to everyday apps. We want to enable all types of developers to build intelligence into their applications. Data scientists can use our P2 instances, Amazon EMR Spark MLlib, deep learning AMIs, MXNet, and Amazon ML to build their own ML models. For app developers, we believe that these three Amazon AI services enable them to build next-generation apps that can hear, see, and speak with humans and the world around us.

We'll also be hosting a Machine Learning "State of the Union" that covers all three new Amazon AI services announced today, along with demos from Motorola Solutions and Ohio Health; head over to the Mirage (we added more seating!). We also have a series of breakout sessions on using MXNet at AWS re:Invent on November 30th at the Mirage Hotel in Las Vegas.

MXNet - Deep Learning Framework of Choice at AWS


Machine learning is playing an increasingly important role in many areas of our businesses and our lives and is being employed in a range of computing tasks where programming explicit algorithms is infeasible.

At Amazon, machine learning has been key to many of our business processes, from recommendations to fraud detection, from inventory levels to book classification to abusive review detection. And there are many more application areas where we use machine learning extensively: search, autonomous drones, robotics in fulfillment centers, text and speech recognition, etc.

Among machine learning algorithms, a class of algorithms called deep learning has come to represent those algorithms that can absorb huge volumes of data and learn elegant and useful patterns within that data: faces inside photos, the meaning of a text, or the intent of a spoken word. A set of programming models has emerged to help developers define and train AI models with deep learning, along with open source frameworks that put deep learning in the hands of mere mortals. Some examples of popular deep learning frameworks that we support on AWS include Caffe, CNTK, MXNet, TensorFlow, Theano, and Torch.

Among all these popular frameworks, we have concluded that MXNet is the most scalable framework. We believe that the AI community would benefit from putting more effort behind MXNet. Today, we are announcing that MXNet will be our deep learning framework of choice. AWS will contribute code and improved documentation as well as invest in the ecosystem around MXNet. We will partner with other organizations to further advance MXNet.

AWS and Support for Deep Learning Frameworks

At AWS, we believe in giving choice to our customers. Our goal is to support our customers with tools, systems, and software of their choice by providing the right set of instances, software (AMIs), and managed services. Just as Amazon RDS supports multiple open source engines like MySQL, PostgreSQL, and MariaDB, in the area of deep learning frameworks we will support all popular deep learning frameworks by providing the best set of EC2 instances and appropriate software tools for them.

Amazon EC2, with its broad set of instance types and GPUs with large amounts of memory, has become the center of gravity for deep learning training. To that end, we recently made a set of tools available to make it as easy as possible to get started: a Deep Learning AMI, which comes pre-installed with the popular open source deep learning frameworks mentioned earlier; GPU-acceleration through CUDA drivers which are already installed, pre-configured, and ready to rock; and supporting tools such as Anaconda and Jupyter. Developers can also use the distributed Deep Learning CloudFormation template to spin up a scale-out, elastic cluster of P2 instances using this AMI for even larger training runs.
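If you prefer to script the launch rather than use the console, a hedged sketch with boto3 might look like the following; the AMI ID and key pair name are placeholders, not real values, and you would look up the current Deep Learning AMI ID for your region first.

```python
# A hedged sketch of launching a single GPU instance from the Deep Learning
# AMI with boto3. ImageId and KeyName are placeholders.
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

ec2.run_instances(
    ImageId='ami-xxxxxxxx',    # placeholder: current Deep Learning AMI ID
    InstanceType='p2.xlarge',  # one NVIDIA K80 GPU
    KeyName='my-keypair',      # hypothetical key pair for SSH access
    MinCount=1,
    MaxCount=1)
```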

As Amazon and AWS continue to invest in several technologies powered by deep learning, we will continue to improve all of these frameworks in terms of usability, scalability, and features. However, we plan to contribute significantly to one in particular, MXNet.

Choosing a Deep Learning Framework

Developers, data scientists, and researchers consider three major factors when selecting a deep learning framework:

  • The ability to scale to multiple GPUs (across multiple hosts) to train larger, more sophisticated models with larger, more sophisticated datasets. Deep learning models can take days or weeks to train, so even modest improvements here make a huge difference in the speed at which new models can be developed and evaluated.
  • Development speed and programmability, especially the opportunity to use languages they are already familiar with, so that they can quickly build new models and update existing ones.
  • Portability to run on a broad range of devices and platforms, because deep learning models have to run in many, many different places: from laptops and server farms with great networking and tons of computing power to mobiles and connected devices which are often in remote locations, with less reliable networking and considerably less computing power.

The same three things are important to developers at AWS and many of our customers. After a thorough evaluation, we have selected MXNet as our deep learning framework of choice, and we plan to use it broadly in existing and upcoming new services.

As part of that commitment, we will be actively promoting and supporting open source development through code contributions (we've made quite a few already), improving the developer experience and documentation online and on AWS, and investing in supporting tools for visualization, development, and migration from other frameworks.

Background on MXNet

MXNet is a fully featured, flexibly programmable, and ultra-scalable deep learning framework supporting state-of-the-art deep learning models, including convolutional neural networks (CNNs) and long short-term memory networks (LSTMs). MXNet has its roots in academia and came about through the collaboration and contributions of researchers at several top universities. Founding institutions include the University of Washington and Carnegie Mellon University.

"MXNet, born and bred here at CMU, is the most scalable framework for deep learning I have seen, and is a great example of what makes this area of computer science so beautiful - that you have different disciplines which all work so well together: imaginative linear algebra working in a novel way with massive distributed computation leading to a whole new ball game for deep learning. We're excited about Amazon's investment in MXNet, and can't wait to see MXNet go from strength to strength" Andrew Moore – Dean of Computer Science at Carnegie Mellon University.

Scaling MXNet

The efficiency with which a deep learning framework scales out across multiple cores is one of its defining features. More efficient scaling allows you to significantly increase the rate at which you can train new models, or dramatically increase the sophistication of your model for the same amount of training time.

This is an area where MXNet shines: we trained a popular image analysis algorithm, Inception v3 (implemented in MXNet and running on P2 instances), using an increasing number of GPUs. Not only did MXNet have the fastest throughput of any library we evaluated (as measured by the number of images trained per second), but the throughput rose by almost the same rate as the number of GPUs used for training (with a scaling efficiency of 85%).
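To give a sense of how little code multi-GPU training takes, here is a minimal sketch using MXNet's Module API on a toy network (not the Inception v3 setup described above); the synthetic data and network are purely illustrative, and the example assumes a machine with four GPUs, such as an EC2 p2.8xlarge.

```python
# A minimal sketch of data-parallel training in MXNet. Listing several GPU
# contexts is all that is needed to split each batch across the devices.
import numpy as np
import mxnet as mx

# Declarative symbol for a small feed-forward network
data = mx.sym.Variable('data')
fc1 = mx.sym.FullyConnected(data, num_hidden=128, name='fc1')
act1 = mx.sym.Activation(fc1, act_type='relu', name='relu1')
fc2 = mx.sym.FullyConnected(act1, num_hidden=10, name='fc2')
net = mx.sym.SoftmaxOutput(fc2, name='softmax')

# Synthetic data, purely for illustration
train_iter = mx.io.NDArrayIter(
    data=np.random.rand(1000, 100).astype('float32'),
    label=np.array([i % 10 for i in range(1000)], dtype='float32'),
    batch_size=50)

# Data-parallel training across four GPUs; MXNet aggregates the gradients
module = mx.mod.Module(symbol=net, context=[mx.gpu(i) for i in range(4)])
module.fit(train_iter, num_epoch=5,
           optimizer='sgd', optimizer_params={'learning_rate': 0.1})
```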

Developing With MXNet

In addition to scalability, MXNet offers the ability both to mix programming models (imperative and declarative) and to write code in a wide range of programming languages, including Python, C++, R, Scala, Julia, Matlab, and JavaScript.
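Here is a small illustrative sketch of what those two models look like side by side in Python.

```python
# Contrasting MXNet's two programming models.
import mxnet as mx

# Imperative: NDArray operations execute immediately, NumPy-style
a = mx.nd.ones((2, 3))
b = a * 2 + 1
print(b.asnumpy())

# Declarative: symbols describe a computation graph that MXNet can
# optimize before execution
x = mx.sym.Variable('x')
y = x * 2 + 1
executor = y.bind(ctx=mx.cpu(), args={'x': mx.nd.ones((2, 3))})
print(executor.forward()[0].asnumpy())
```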

Efficient Models & Portability In MXNet

Computational efficiency is important (and goes hand in hand with scalability) but nearly as important is the memory footprint. MXNet can consume as little as 4 GB of memory when serving deep networks with as many as 1000 layers. It is also portable across platforms, and the core library (with all dependencies) fits into a single C++ source file and can be compiled for both Android and iOS. You can even run it in your browser using the JavaScript extensions!

Learn more about MXNet

We're excited about MXNet. If you would like to learn more, check out the MXNet home page or the GitHub repository, and get started right now using the Deep Learning AMI or on your own machine. We'll also be hosting a Machine Learning "State of the Union" and a series of breakout sessions and workshops on using MXNet at AWS re:Invent on November 30th at the Mirage Hotel in Las Vegas.

It's still day one for this new era of machine intelligence; in fact, we probably haven't even woken up and had our first cup of coffee yet. With tools like MXNet (and the other deep learning frameworks), and services such as EC2, it's going to be an exciting time.

Previously, I wrote about Amazon QuickSight, a new service targeted at business users that aims to simplify the process of deriving insights from a wide variety of data sources quickly, easily, and at a low cost. QuickSight is a very fast, cloud-powered business intelligence service, at one-tenth the cost of old-guard BI solutions. Today, I am very happy to announce that QuickSight is now generally available in the N. Virginia, Oregon, and Ireland regions.

When we announced QuickSight last year, we set out to help all customers—regardless of their technical skills—make sense out of their ever-growing data. As I mentioned, we live in a world where massive volumes of data are being generated, every day, from connected devices, websites, mobile apps, and customer applications running on top of AWS infrastructure. This data is collected and streamed using services like Amazon Kinesis and stored in AWS relational data sources such as Amazon RDS, Amazon Aurora, and Amazon Redshift; NoSQL data sources such as Amazon DynamoDB; and file-based data sources such as Amazon S3. Along with data generated in the cloud, customers also have legacy data sitting in on-premises datacenters, scattered on user desktops, or stored in SaaS applications.

There’s an inherent gap between the data that is collected, stored, and processed and the key decisions that business users make on a daily basis. Put simply, data is not always readily available and accessible to organizational end users. The data infrastructure to collect, store, and process data is geared primarily towards developers and IT professionals whereas insights need to be derived by not just technical professionals but also non-technical business users. Most business users continue to struggle to answer key business questions such as, “Who are my top customers and what are they buying?”, “How is my marketing campaign performing?”, and “Why is my most profitable region not growing?” While BI solutions have existed for decades, customers have told us that it takes an enormous amount of time, IT effort, and money to bridge this gap.

The reality is that many traditional BI solutions are built on top of legacy desktop and on-premises architectures that are decades old. They require companies to provision and maintain complex hardware infrastructure and invest in expensive software licenses, maintenance fees, and support fees that cost upwards of thousands of dollars per user per year. They require teams of data engineers to spend months building complex data models and synthesizing the data before they can generate their first report. To scale to a larger number of users and support the growth in data volume spurred by social media, web, mobile, IoT, ad-tech, and ecommerce workloads, these tools require customers to invest in even more infrastructure to maintain performance. Finally, their complex user experiences are designed for power users and not suitable for the fast-growing segment of business users. The cost and complexity to implement, scale, and use BI makes it difficult for most companies to make data analysis ubiquitous across their organizations.

Enter Amazon QuickSight

QuickSight is a cloud-powered BI service built from the ground up to address the big data challenges around speed, complexity, and cost. QuickSight puts data at the fingertips of your business users in an easy-to-use user interface and at one-tenth the cost of traditional BI solutions, even if that data is scattered across various sources such as Amazon Redshift, Amazon RDS, Amazon S3, or Salesforce.com; legacy databases running on-premises; or even user desktops in Microsoft Excel or CSV file formats.

Getting started with QuickSight is simple. Let’s walk through some of the core experiences of QuickSight that make it so easy to set up, connect to your data sources, and build visualizations in minutes.

Powered by Innovation

QuickSight is built on a large number of innovative technologies to get a business user their first insights fast. Here are a few of the key innovations that power QuickSight:

SPICE: One of the key ingredients that makes QuickSight so powerful is the Super-fast, Parallel, In-memory Calculation Engine (SPICE). SPICE is a new technology built by the same team that created technologies such as DynamoDB, Amazon Redshift, and Amazon Aurora. It is the underlying engine that allows QuickSight to deliver blazing fast response times on large data sets. SPICE sits between the user interface and the data source and can rapidly ingest all or part of the data into its fast, in-memory, columnar-based data store that’s optimized for analytical queries. SPICE is cloud-native, which means that customers don’t need to provision, manage, or scale infrastructure manually. Data is automatically replicated across multiple Availability Zones for redundancy and also backed up to S3 for durability. This allows us to enable organizations to reliably and securely scale to support thousands of users who can all perform fast, interactive analysis across a wide variety of AWS data sources.

Auto-discovery: One of the challenges with BI is discovering and accessing the data. As a native offering from AWS, QuickSight comes deeply integrated with AWS data sources such as Amazon Redshift, RDS, and S3. For instance, QuickSight auto-discovers all RDS instances and Amazon Redshift clusters to which any logged-in user has access. Customers can visualize their data by picking a table and then getting to a visualization in just a few clicks. In addition to AWS data sources, QuickSight also lets customers connect to third-party databases running on Amazon EC2 or on-premises and popular business applications like Excel and Salesforce.

AutoGraph: Picking the right visualization is not easy, and there is a lot of science behind it. For instance, the optimal visualization depends on various factors: the type of data field selected ("Is it time, number, or string?"), the cardinality of the data ("Does this field have only 4 unique values or 1 million values?"), and the number of data fields that you are trying to visualize. While QuickSight supports multiple graph types (e.g., bar charts, line graphs, scatter plots, box plots, pie charts, and so on), one of the things we have tried to simplify is picking the right visualization for selected data, with a capability called AutoGraph. Users pick the data fields to visualize, and QuickSight automatically selects the most optimal visual type.

Collaboration and sharing of live analytics: Users often want to slice and dice their data and share it in various ways. With QuickSight, you can collaborate on analyses, which are visual explorations of your data, and allow others to modify the analyses in any way. You can also share your analyses as read-only dashboards and allow your viewers to interact and filter the visualizations without modification. QuickSight lets you combine visualizations into guided tours, or stories, that you can share with other users to tell the story of your data.

What our customers are saying about QuickSight

In the past months, thousands of AWS customers participated in the preview of QuickSight, including global enterprises and startups from a range of industries. Many worked closely with the team to provide early feedback and helped us rapidly iterate on the product. I am pleased to share some of the positive feedback from our preview customers like MLB Advanced Media, Infor, and Hotelbeds.com.

MLB Advanced Media (MLBAM) is a digital media and content infrastructure provider that powers an ever-growing number of massively popular media and entertainment properties. Brandon SanGiovanni, who is a traffic manager at MLBAM, tells us that QuickSight made it easy for them to explore and analyze their data in a fraction of the time expected, and provided them with a comprehensive view of their business without being constrained by pre-built dashboards and metrics.

Infor is a business application provider with more than 90,000 customers and 58 million cloud users. Steve Stahl, who is a Senior BI Development Manager at Infor, tells us that they found QuickSight’s SPICE engine to be fast and let them easily and quickly process and visualize datasets on RDS and Amazon Redshift. They’ve been using QuickSight during preview to analyze their customer data and look forward to continuing to use it after launch.

Miguel Iza is the Head of Data and Analytics at Hotelbeds, a global distributor of accommodations serving more than 185 destination countries worldwide and more than 25 million room nights annually. He tells us that QuickSight simplifies the way their users access data to perform self-service analysis and share insights with others. Hotelbeds plans to adopt QuickSight for its new data solution and looks forward to QuickSight democratizing business analytics in the company.

How you can get started

I’m excited that QuickSight is now generally available in N. Virginia, Oregon, and Ireland, with other regions coming soon. Get started by signing up for free at Amazon QuickSight, with 1 user and 1 GB of SPICE capacity. For more details, see Amazon QuickSight Now Generally Available: Fast, Easy to Use Business Analytics for Big Data on the AWS Blog.

Meet the Teams Competing for the Alexa Prize


On September 29, 2016, Amazon announced the Alexa Prize, a $2.5 million university competition to advance conversational AI through voice. We received applications from leading universities across 22 countries. Each application was carefully reviewed by senior Amazon personnel against a rigorous set of criteria covering scientific contribution, technical merit, novelty, and ability to execute. Teams of scientists, engineers, user experience designers, and product managers read, evaluated, discussed, argued, and finally selected the twelve teams who would be invited to participate in the competition.

Today, we’re excited to announce the 12 teams selected to compete with an Amazon sponsorship. In alphabetical order, they are:

  • Carnegie Mellon University: CMU Magnus
  • Carnegie Mellon University: TBD
  • Czech Technical University, Prague: eClub Prague
  • Heriot-Watt University, UK: WattSocialBot
  • Princeton University: Princeton Alexa
  • Rensselaer Polytechnic Institute: BAKAbot
  • University of California, Berkeley: Machine Learning @ Berkeley
  • University of California, Santa Cruz: SlugBots
  • University of Edinburgh, UK: Edina
  • University of Montreal, Canada: MILA Team
  • University of Trento, Italy: Roving Minds
  • University of Washington, Seattle: HuskyBot

These teams will each receive a $100,000 research grant as a stipend, Alexa-enabled devices, free Amazon Web Services (AWS) services to support their development efforts, access to new Alexa Skills Kit (ASK) APIs, and support from the Alexa team. Teams invited to participate without sponsorship will be announced on December 12, 2016.

We have challenged these teams to create a socialbot, a conversational AI skill for Alexa that converses engagingly and coherently with humans for 20 minutes on popular topics and news events such as Entertainment, Fashion, Politics, Sports, and Technology. This seemingly intuitive task continues to be one of the ultimate challenges for AI.

Teams will need to advance several areas of conversational AI including knowledge acquisition, natural language understanding, natural language generation, context modeling, common sense reasoning, and dialog planning. We will provide students with data and technical support to help them tackle these problems at scale, and live interactions and feedback from Alexa’s large user base to help them test ideas and iterate their algorithms much faster than previously possible.

As teams gear up for the challenge, we invite all of you to think about what you’d like to chat with Alexa about. In April, you and millions of other Alexa customers will be able to test the socialbots and provide feedback to the teams to help them create a socialbot you’ll want to chat with every day. Your feedback will also help select the finalists. In the meantime, follow the #AlexaPrize hashtag and bookmark the Alexa Prize site for updates.

Welcoming Adrian Cockcroft to the AWS Team


I am excited that Adrian Cockcroft will be joining AWS as VP of Cloud Architecture. Adrian has played a crucial role in developing the cloud ecosystem as Cloud Architect at Netflix and later as a Technology Fellow at Battery Ventures. Prior to this, he held positions as Distinguished Engineer at eBay and Sun Microsystems. One theme that has been consistent throughout his career is that Adrian has a gift for seeing the bigger engineering picture.

At Netflix, Adrian played a key role in the company's much-discussed migration to a "cloud native" architecture, and the open sourcing of the widely used (and award-winning) NetflixOSS platform. AWS customers around the world are building more scalable, reliable, efficient and well-performing systems thanks to Adrian and the Netflix OSS effort.

Combine Adrian's big thinking with his excellent educational skills, and you understand why Adrian deserves the respect he receives around the world for helping others be successful on AWS. I'd like to share a few of Adrian's own words about his decision to join us...

"After working closely with many folks at AWS over the last seven years, I am thrilled to be joining the clear leader in cloud computing.The state of the art in infrastructure, software packages, and services is nowadays a combination of AWS and open source tools. -- and they are available to everyone. This democratization of access to technology levels the playing field, and means anyone can learn and compete to be the best there is."

I am excited about welcoming Adrian to the AWS team, where he will work closely with AWS executives and product groups and consult with customers on their cloud architectures, from start-ups that were born in the cloud to large web-scale companies and enterprises that have an "all-in" migration strategy. Adrian will also spend time engaging with developers in the Amazon-sponsored and supported open source communities. I am really looking forward to working with Adrian again and seeing the positive impact he will have on AWS customers around the world.

Expanding the AWS Cloud: Introducing the AWS US East (Ohio) Region


Today I am very happy to announce the opening of the new US East (Ohio) Region. The Ohio Region is the fifth AWS region in the US. It brings the worldwide total of AWS Availability Zones (AZs) to 38, and the number of regions globally to 14. The pace of expansion at AWS is accelerating, and Ohio is our third region launch this year. In the remainder of 2016 and in 2017, we will launch another four AWS regions in Canada, China, the United Kingdom, and France, adding another nine AZs to our global infrastructure footprint.

We strive to place customer feedback first in our considerations for where to open new regions. The Ohio Region is no different. Now customers who have been requesting a second US East region have more infrastructure options for running workloads, storing files, running analytics, and managing databases. The Ohio Region launches with three AZs so that customers can create high-availability environments and architect for fault tolerance and scalability. As with all AWS AZs, the AZs in Ohio each have redundant power, networking, and connectivity, which are designed to be resilient to issues in another AZ.

We are also glad to offer low transfer rates between both US East Regions. Data transfer between the Ohio Region and the Northern Virginia Region is priced the same as data transfer between AZs within either of these regions. We hope this will be helpful for customers who want to implement backup or disaster recovery architectures and need to transfer large amounts of data between these regions. It will also be useful for developers who simply want to use services in both regions and move resources back and forth between them. The Ohio Region also has a broad set of services comparable to our Northern Virginia Region, including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon Relational Database Service (Amazon RDS), and AWS Marketplace. Check out the Regional Products and Services page for the full list.

We’ll continue to add new infrastructure to grow our footprint and make AWS as useful as possible for all of our customers around the world. You can learn more about our growing global infrastructure footprint at https://aws.amazon.com/about-aws/global-infrastructure/.

Accelerating Data: Faster and More Scalable ElastiCache for Redis


Fast Data is an emerging industry term for information that is arriving at high volume and incredible rates, faster than traditional databases can manage. Three years ago, as part of our AWS Fast Data journey we introduced Amazon ElastiCache for Redis, a fully managed in-memory data store that operates at sub-millisecond latency. Since then we’ve introduced Amazon Kinesis for real-time streaming data, AWS Lambda for serverless processing, Apache Spark analytics on EMR, and Amazon QuickSight for high performance Business Intelligence.

While caching continues to be a dominant use of ElastiCache for Redis, we see customers increasingly using it as an in-memory NoSQL database. Developers love the blazing fast performance and in-memory capabilities provided by Redis, making it among the most popular NoSQL key-value stores. However, until now, ElastiCache for Redis customers could only run single-shard Redis. This limited the workload size and write throughput to that of a single VM, or required application-level sharding. Today, as a next step in our Fast Data journey, we have extended the ElastiCache for Redis service to support "Redis Cluster," the sharding capability of Redis. Customers can now scale a single deployment to include up to 15 shards, making each Redis-compatible data store up to 3.5 terabytes in size, operating on microsecond time scales. We also do this at very high rates: up to 4.5 million writes per second and 20 million reads per second. Each shard can include up to five read replicas to ensure high availability, so that both planned and unforeseen outages of the infrastructure do not cause application outages.

Building upon Redis

There are some great examples and use cases for Redis, which you can see at companies like Hudl, which offers mobile and desktop video analytics solutions to sports teams and athletes. Hudl is using ElastiCache for Redis to provide millions of coaches and sports analysts with the near real-time data feeds they need to help drive their teams to victory. Another example is Trimble, a global leader in location services, which is using ElastiCache for Redis as its primary database for workforce location, helping customers like DirecTV get the right technician to the right location as quickly and inexpensively as possible, enabling both reduced costs and increased satisfaction for their own subscribers.

Increasingly, ElastiCache for Redis has become a mission critical in-memory database for our customers whose availability, durability, performance and scale matter to their business. We have therefore been enhancing the Redis engine running on ElastiCache for the last few years using our own expertise in making enterprise infrastructure scalable and reliable. Amazon’s enhancements address many day-to-day challenges with running Redis. By utilizing techniques such as granular memory management, dynamic I/O throttling and fine grained replica synchronization, ElastiCache for Redis delivers a more robust Redis experience. It enables customers to run their Redis nodes at higher memory utilization without risking swap usage during events such as snapshotting and replica synchronization. It also offers improved synchronization of replicas under load. In addition, ElastiCache for Redis provides smoother Redis failovers by combining our Multi-AZ automated failover with streamlined synchronization of read replicas. Replicas now recover faster as they no longer need to flush their data to do a full resynchronization with the primary. All these capabilities are available to customers at no additional charge, and maintain open-source Redis compatibility.

With this launch, we augmented the client-based failover logic of Redis 3.2 with ElastiCache for Redis Multi-AZ. If a customer is running a self-managed Redis environment on EC2 instead of using ElastiCache for Redis and the primary node fails, the cluster relies on a majority of primaries to determine and execute a failover. If such a majority doesn’t exist, the cluster will go into failed state, rejecting any further reads and writes. This could lead to a major availability impact on the application, requiring human intervention to manually salvage the cluster. This does not happen with ElastiCache for Redis. ElastiCache for Redis Multi-AZ capability is built to handle any failover case for Redis Cluster with robustness and efficiency. The combination of ElastiCache for Redis with the intelligent Redis 3 clients leads to maximum performance and availability of your Redis environment. The client keeps a map of Redis nodes, which is updated in case of failover. This allows for faster failover times while minimizing latency. Alternative solutions frequently use proxy layers to achieve failover and sharding, which slow down your application by requiring requests to do double the network hops.
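From the application side, here is what connecting to a sharded deployment with a cluster-aware client can look like; this sketch uses the open source redis-py-cluster Python client, and the configuration endpoint is a placeholder for your own cluster's.

```python
# A minimal sketch of talking to a sharded ElastiCache for Redis deployment
# with the cluster-aware redis-py-cluster client (pip install
# redis-py-cluster). The endpoint below is a placeholder.
from rediscluster import StrictRedisCluster

startup_nodes = [{'host': 'my-cluster.xxxxxx.clustercfg.use1.cache.amazonaws.com',
                  'port': 6379}]

# The client fetches the slot-to-node map up front and routes each key to
# the right shard, refreshing the map automatically after a failover
rc = StrictRedisCluster(startup_nodes=startup_nodes,
                        decode_responses=True,
                        skip_full_coverage_check=True)

rc.set('user:42:last_seen', '2016-10-12T08:00:00Z')
print(rc.get('user:42:last_seen'))
```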

Redis and Fast Data

Fast data can have a transformational impact on the data-driven development that underlies a lot of the interesting things happening in the Cloud today. It enables "here-and-now" real-time processing and dashboards, as well as predictions that enable smart applications. As data sizes grow and expectations move from daily analytics to real-time analytics, the need to process data quickly increases. With the latest enhancement we have made to ElastiCache for Redis, we are excited to help these customers with a more robust, high-performance, highly scalable in-memory database solution.

Many of our customers share my excitement:

Interactive Intelligence, Inc. is a software company providing unified business communications solutions for call centers, including real-time reporting and analytics. "We have been eagerly awaiting ElastiCache for Redis support for Redis Cluster, and are excited to take advantage of it for easy-to-set-up redundancy, fast failure recovery, and ultra-high scalability," said Anthony Roach, Chief Architect. "We are heavy users of ElastiCache for Redis for both caching and fast data structure storage due to its ease of management and reliability, and the addition of Redis Cluster makes it even more compelling."

Team Internet AG is an ad tech company with a focus on domain monetization and real-time bidding. “In the last years we have moved quite a significant workload over to ElastiCache for Redis for caching and ephemeral data,” said Markus Ostertag, Head of Development. “Now with the support for Redis Cluster, we’re very happy to be able to scale out much more easily, get higher performance and better reliability for our whole Redis infrastructure.”

This is a great time to be watching the rapid development of AWS Cloud capabilities for Fast Data management and I urge you to take a few minutes to take a look at the new ElastiCache for Redis and see how you might be able to use it for your own projects.

Introducing the Alexa Prize: It's Day One for Voice


In the past, voice interfaces were seen as gimmicks, or a nuisance for driving "hands-free." The Amazon Echo and Alexa have completely changed that perception. Voice is now seen as potentially the most important interface to interact with the digitally connected world. From home automation to commerce, from news organizations to government agencies, from financial services to healthcare, everyone is working on what the best way is to interact with their services if voice is the interface, especially for the exciting case where voice is the only interface.

Voice makes access to digital services far more inclusive than traditional screen-based interaction. For example, an aging population may be much more comfortable interacting with voice-based systems than through tablets or keyboards.

Alexa has propelled the conversational interface forward given how natural the interactions are with Alexa-enabled devices. However, it is still Day One, and a lot of innovation is underway in this world. Given the tremendous impact of voice on how we interact with the digital world, it influences how we will build products and services that can support conversations in ways that we have never done before. As such there is also a strong need for fundamental research on these interactions, best described as “Conversational Artificial Intelligence.”

Today, we are pleased to announce the Alexa Prize, a $2.5 million university competition to accelerate advancements in conversational AI. With this challenge, we aim to advance several areas of conversational AI including knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning and dialog planning. The goal is that through the innovative work of students, Alexa users will experience novel, engaging conversational experiences.  

Teams of university students around the world are invited to participate in a conversational AI challenge (see contest rules for details). The challenge is to create a socialbot, an Alexa skill that converses with users on popular topics. Social conversation can occur naturally on any topic, and teams will need to create an engaging experience while maintaining relevance and coherence throughout the interaction. For the grand challenge we ask teams to invent a socialbot smart enough to engage in a fun, high quality conversation on popular societal topics for 20 minutes.

As part of the research and judging process, millions of Alexa customers will have the opportunity to converse with the socialbots on popular topics by saying, “Alexa, let’s chat about (a topic, for example, baseball playoffs, celebrity gossip, scientific breakthroughs, etc.).” Following the conversation, Alexa users will give feedback on the experience to provide valuable input to the students for improving their socialbots. The feedback from Alexa users will also be used to help select the best socialbots to advance to the final, live judging phase.

The team with the highest-performing socialbot will win a $500,000 prize. Additionally, a prize of $1 million will be awarded to the winning team’s university if their socialbot achieves the grand challenge of conversing coherently and engagingly with humans for 20 minutes.

Teams of university students can submit applications now and the contest will conclude at AWS re:Invent in November 2017, where the winners will be announced. Up to ten teams will be sponsored by Amazon and receive a $100,000 stipend, Alexa-enabled devices, free AWS services and support from the Alexa team.

Participating teams will receive special access to new Alexa Skills Kit (ASK) APIs to build their skills. Registration opened today and teams have until October 28, 2016 to submit their applications. The competition will officially start on November 14, 2016 and run until November 2017, concluding with an award ceremony to be held at AWS re:Invent in Las Vegas, NV.

For more information, check out the Alexa Prize page. And remember: it is still Day One!

Allez, rendez-vous à Paris – An AWS Region is coming to France!


Today, I am very excited to announce our plans to open a new AWS Region in France! Based in the Paris area, the region will provide even lower latency and will allow users who want to store their content in datacenters in France to easily do so. The new region in France will be ready for customers to use in 2017.

Over the past 10 years, we have seen tremendous growth at AWS. As a result, we have opened 35 Availability Zones (AZs), across 13 AWS Regions worldwide. We have announced several additional regions in Canada, China, Ohio, and the United Kingdom – all expected in the coming months. We don’t plan to slow down or stop there. We are actively working to open new regions in the locations our customers need them most.

French organizations were amongst the first to use AWS when we launched in 2006. Since we opened the first AWS EU Region in Ireland in November 2007, we have seen an acceleration of companies adopting the AWS Cloud. To support our customers’ growth, their digital transformation, and to speed up their innovation and lower the cost of running their IT, we continue to build out additional European infrastructure. Our CDN and DNS network now has 18 points of presence across Europe, we have added a third AZ in Ireland, a second infrastructure region in Frankfurt and a third region in the UK (due in coming months). After the launch of the French region there will be 10 Availability Zones in Europe.

We have also expanded our presence in France over the last ten years. We have launched three points of presence, with two in Paris and one in Marseille, and also opened offices in the country, employing account managers, solutions architects, trainers, Business Development and Professional Services teams, as well as other job functions. Our teams are helping companies of all sizes, operating in various industries, such as finance, business, media, and many others, move to the cloud. As a result, more than 80 percent of companies listed on the CAC 40, the French stock market index, are now using AWS Cloud technology to speed their time-to-market, lower their costs, and support their businesses globally.

Among the thousands of businesses using AWS in France, we count enterprises such as Schneider Electric, Lafarge, and Dassault Systemes as customers, as well as the CAC 40 multinational bank Societe Generale Group. When we first talked to Societe Generale Group about opening the AWS region, Carlos Goncalves, Head of Global Technology Services, said, "We are delighted to learn that Amazon Web Services will open a region in France. Using the AWS Cloud, and the extended services offered by the platform, is an opportunity for us to accelerate our transformation and focus on how we can better serve our clients."

Another CAC 40 company using the cloud to support its digital transformation is Veolia Water France, a subsidiary of Veolia specialized in the distribution and treatment of water. In the past, we have had Benito Diz, CIO of Veolia Water France, speak at our events, where he has talked about how they have been able to achieve important cost reductions while improving security and agility by moving to AWS. He has said, "By moving a large part of our IT system from our old IBM mainframe to AWS, we have adopted a cloud first strategy, boosting our power of innovation. By launching a new platform to analyze the terabytes of data collected by the sensors located in our thousands of water meters and water vats, we are creating an Internet of Things (IoT) system that helps us to reduce the maintenance intervention time, anticipate the refills, and have in real time the information on the key indicators (temperature, water purity, pH level...). We couldn't have launched this industrial IoT project without the AWS flexibility."

In other sectors, government organizations, as well as French charities such as Les Restos du Coeur, are also adopting the AWS Cloud to innovate and better serve the citizens of France. We are also seeing a vibrant start-up community growing in the country thanks to the cloud. This is producing some very innovative and disruptive companies that use AWS to launch, rapidly scale their businesses, and go global, such as Aldebaran Robotics, Captain Dash, Payplug, and Leboncoin. Another of these exciting start-ups is Teads, which runs video advertising for publishers and advertisers. What makes Teads interesting is the rapid growth it has been able to achieve: in four years of existence, it has expanded its business to touch over 1.3 billion users across the web. When we informed him of the new region, Loïc Jaurès, Teads CTO, told us, "Without AWS we would have had to focus our time and efforts on the infrastructure instead of growing and innovating in our core business. By offloading the running of the infrastructure to AWS, today we have customers all over the US, in Asia, and also in Europe. A new region will help us to better serve our French customers which have high expectations in terms of content delivery, such as Le Monde, Condé Nast, Les Echos, and more."

The new European region, coupled with the existing AWS Regions in Dublin and Frankfurt, and a future one in London, will provide customers with quick, low-latency access to websites, mobile applications, games, SaaS applications, Big Data analysis, Internet of Things applications, and more.
