The global pandemic has been like nothing many of us in Europe have ever known. During this time, many organizations have been contemplating their role in the COVID-19 crisis, and how they can best serve their communities. I can tell you it has been no different for us at Amazon Web Services (AWS). We are focused on where we can make the biggest difference, to help the global communities in which we all live and work. This is why today we are announcing that the AWS Europe (Milan) Region is now open. The opening of the AWS Europe (Milan) Region demonstrates our ongoing commitment to the people of Italy and the long-term potential we believe there is in the country.
As COVID-19 has disrupted life as we know it, I have been inspired by the stories of organizations around the world using AWS in very important ways to help combat the virus and its impact. Whether it is supporting the medical relief effort, advancing scientific research, spinning up remote learning programs, or standing up remote working platforms, we have seen how providing access to scalable, dependable, and highly secure computing power is vital to keep organizations moving forward. This is why, today, we are announcing that the AWS Africa (Cape Town) Region is now open.
On March 16, 2020, at 9:26 PM, I received an urgent email from my friend DJ Patil, former White House Chief Data Scientist, Head of Technology for Devoted Health, a Senior Fellow at the Belfer Center at the Harvard Kennedy School, and Advisor to Venrock Partners. You don’t get that many titles after your name unless you’re pretty good at something. For DJ, that “something” is math and computer science.
DJ was writing to me from the California crisis command center. He explained that he was working with governors from across the country to model the potential impact of COVID-19 for scenario planning. He wanted to help them answer critical questions, like “How many hospital beds will we need?” and “Can we reduce the spread if we temporarily close places where people gather?” and “Should we issue a shelter-in-place order and for how long?” While nobody can predict the future, modeling the virus with all the factors they did know was their best shot at helping leaders make informed decisions, which would impact hundreds of thousands of lives.
Back when Jeff Bezos filled orders in his garage and drove packages to the post office himself, crunching the numbers on costs, tracking inventory, and forecasting future demand was relatively simple. Fast-forward 25 years, and Amazon's retail business has more than 175 fulfillment centers (FCs) worldwide with over 250,000 full-time associates shipping millions of items per day.
Amazon's worldwide financial operations team has the incredible task of tracking all of that data (think petabytes). At Amazon's scale, a miscalculated metric, like cost per unit, or delayed data can have a huge impact (think millions of dollars). The team is constantly looking for ways to get more accurate data, faster.
That's why, in 2019, they had an idea: build a data lake that could support one of the largest logistics networks on the planet. It became known internally as the Galaxy data lake. Galaxy was built that same year, and teams across the organization are now working on moving their data into it.
A data lake is a centralized, secure repository that allows you to store, govern, discover, and share all of your structured and unstructured data at any scale. Data lakes don't require a pre-defined schema, so you can process raw data without having to know what insights you might want to explore in the future. The following figure shows the key components of a data lake.
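The "no pre-defined schema" point is worth making concrete. The sketch below illustrates schema-on-read with plain Python: raw records land in their original form, with fields that vary between records, and structure is only interpreted when a question is asked. The record shapes and field names here are invented for illustration, not Galaxy's actual data model.

```python
import json

# Hypothetical raw event records, as they might land in a data lake.
# No schema was defined up front, and fields vary between records.
raw_records = [
    '{"event": "ship", "fc": "SEA1", "units": 12, "cost_per_unit": 1.85}',
    '{"event": "ship", "fc": "MXP5", "units": 7}',
    '{"event": "receive", "fc": "SEA1", "units": 40}',
]

# Schema-on-read: structure is interpreted only at query time, so new
# questions can be asked of old data without reprocessing or migrating it.
parsed = [json.loads(r) for r in raw_records]

# An ad-hoc query invented after the data was stored:
# total units shipped per fulfillment center.
shipped = {}
for rec in parsed:
    if rec.get("event") == "ship":
        shipped[rec["fc"]] = shipped.get(rec["fc"], 0) + rec["units"]

print(shipped)  # {'SEA1': 12, 'MXP5': 7}
```

In a warehouse, the second record's missing `cost_per_unit` would have to be reconciled against a schema before loading; in a lake, it is simply stored and handled (or ignored) at read time.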
Have you ever received a call from your bank because they suspected fraudulent activity? Most banks can automatically identify when spending patterns or locations have deviated from the norm and then act immediately. Many times, this happens before victims even notice that something was off. As a result, the impact of identity theft on a person's bank account and life can be managed before it's even an issue.
Having a deep understanding of the relationships in your data is powerful like that.
Consider the relationships between diseases and gene interactions. By understanding these connections, you can search for patterns within protein pathways to find other genes that may be associated with a disease. This kind of information could help advance disease research.
The deeper the understanding of the relationships, the more powerful the insights. With enough relationship data points, you can even make predictions about the future (like with a recommendation engine). But as more data is connected, and the size and complexity of the connected data increases, the relationships become more complicated to store and query.
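The disease-gene example above can be sketched with a simple graph traversal. In a real system this is the kind of query a graph database handles, but the idea fits in a few lines: store relationships as an adjacency list and walk outward a bounded number of hops. The nodes and edges below are entirely hypothetical.

```python
from collections import deque

# A toy graph of associations: diseases, genes, and protein pathways.
# Edges are stored both ways in a plain adjacency list.
graph = {
    "DiseaseX": ["GeneA"],
    "GeneA": ["DiseaseX", "Pathway1"],
    "Pathway1": ["GeneA", "GeneB"],
    "GeneB": ["Pathway1"],
}

def related_within(start, max_hops):
    """Return every node reachable from `start` in at most `max_hops` edges."""
    seen = {start: 0}          # node -> distance in hops
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue            # don't expand beyond the hop limit
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                queue.append(neighbor)
    seen.pop(start)
    return set(seen)

# GeneB has no direct link to DiseaseX; it surfaces only through the
# shared pathway, which is exactly the kind of indirect insight described above.
print(sorted(related_within("DiseaseX", 3)))  # ['GeneA', 'GeneB', 'Pathway1']
```

The catch the paragraph above points at is visible even here: each extra hop multiplies the neighbors to visit, which is why deeply connected data quickly outgrows hand-rolled traversals and relational joins.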
During AWS re:Invent 2019, we announced a number of High Performance Computing (HPC) innovations including the Amazon EC2 M6g, C6g, and R6g instances powered by next-generation Arm-based AWS Graviton2 Processors. We also recently announced that new AMD-powered, compute-optimized EC2 instances are in the works.
Today, I'm happy to share some exciting news about our HPC solutions. On November 18, AWS won six HPCwire Readers' and Editors' Choice Awards at SC19, the International Conference for High Performance Computing, Networking, Storage, and Analysis.
Today, I am happy to announce our plans to open a new AWS Region in Spain in late 2022 or early 2023! I'm excited by the opportunities the availability of hyperscale infrastructure will bring to Spanish organizations of all sizes. When the AWS Europe (Spain) Region is launched, developers, startups, and enterprises, as well as government, education, and non-profit organizations will be able to run their applications and serve end users across the region from data centers located in Spain.
Currently, AWS provides 69 Availability Zones across 22 infrastructure regions worldwide, with announced plans for thirteen more Availability Zones and four more Regions in Indonesia, Italy, South Africa, and Spain in the next few years. The new AWS Europe (Spain) Region will consist of three Availability Zones (AZs) at launch, and will be AWS's seventh region in Europe, joining existing regions in Dublin, Frankfurt, London, Paris, and Stockholm, as well as the upcoming Milan region launching in early 2020. AZs are data centers in separate, distinct locations within a single Region that are engineered to be operationally independent of other AZs, with independent power, cooling, and physical security, and are connected via a low-latency network. AWS customers focused on running highly available applications can architect their applications to run in multiple AZs to achieve even higher fault tolerance.
Today is another milestone for us in Spain. This Region adds to the other investments we have made over the past few years to provide customers with advanced and secure cloud technologies.
There are places so remote, so harsh that humans can't safely explore them (for example, hundreds of miles below the earth, areas that experience extreme temperatures, or other planets). These places might hold important data that could help us better understand earth and its history, as well as life on other planets. But they usually have little to no internet connection, making the challenge of exploring these inhospitable environments seem even more daunting.
How do we push the boundaries of what's possible?
The answer to this question is actually on your phone, your smartwatch, and billions of other places on earth—it's the Internet of Things (IoT). Connected devices allow us to extend our senses to remote locations, such as a robot carrying out work on Mars or monitoring remote oil wells.
This is the exciting future for IoT, and it's closer than you think. Already, IoT is delivering deep and precise insights to improve virtually every aspect of our lives. Here are a few examples:
- IoT sensors in a factory can monitor equipment and predict failures before an accident occurs.
- Healthcare providers can monitor patient health remotely—improving patient care.
- Security cameras can better protect people with real-time notifications.
Because these IoT devices are powered by microprocessors or microcontrollers that have limited processing power and memory, they often rely heavily on AWS and the cloud for processing, analytics, storage, and machine learning. But as the number of IoT devices and use cases grow, people are finding that managing these connected devices presents new challenges. Sometimes an internet connection is weak or not available at all, as is often the case in remote locations. For some applications, a trip to the cloud and back isn't possible because of latency requirements (for example, an autonomous car interpreting its environment in real time).
There's also the cost to send data to the cloud to consider. Some sensors, like those in factories, are collecting an incredible amount of data and sending it all to the cloud could get expensive. These barriers are driving some people to the edge—literally.
In this post, I want to talk about edge computing: the power to have compute resources and decision-making capabilities in disparate locations, often with intermittent or no connectivity to the cloud. In other words, processing data closer to where it's created.
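The pattern described above can be sketched in a few lines: a device makes latency-critical decisions locally and sends only alerts or cheap aggregates upstream, instead of streaming every raw reading to the cloud. Everything here is illustrative: the temperature threshold, the batch size, and the `upload` stub, which stands in for whatever transport a real device would use (and would queue messages when connectivity drops).

```python
THRESHOLD_C = 90.0  # hypothetical overheat limit for a factory sensor

def upload(payload):
    # Stand-in for a cloud call; a real edge device would buffer these
    # locally and retry when its intermittent connection comes back.
    print("uploading:", payload)

def process_reading(temp_c, buffer, batch_size=4):
    """Decide locally; only contact the cloud when it's worth the trip."""
    if temp_c > THRESHOLD_C:
        # Latency-critical decision made at the edge: act first,
        # then notify the cloud. No round trip in the control path.
        upload({"alert": "overheat", "temp_c": temp_c})
        return "shutdown"
    buffer.append(temp_c)
    if len(buffer) == batch_size:
        # Send one cheap aggregate instead of every raw reading,
        # addressing the data-transfer cost described above.
        upload({"avg_temp_c": sum(buffer) / len(buffer)})
        buffer.clear()
    return "ok"

buf = []
for t in [70.0, 72.0, 71.0, 73.0, 95.5]:
    status = process_reading(t, buf)
print(status)  # shutdown
```

The design choice is the point: the cloud still gets the data it needs for analytics and training, but the decision that can't wait for a round trip happens on the device.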
Innovation has always been part of the Amazon DNA, but about 20 years ago, we went through a radical transformation with the goal of making our iterative process—"invent, launch, reinvent, relaunch, start over, rinse, repeat, again and again"—even faster. The changes we made affected both how we built applications and how we organized our company.
Back then, we had only a small fraction of the number of customers that Amazon serves today. Still, we knew that if we wanted to expand the products and services we offered, we had to change the way we approached application architecture.
The giant, monolithic "bookstore" application and giant database that we used to power Amazon.com limited our speed and agility. Whenever we wanted to add a new feature or product for our customers, like video streaming, we had to edit and rewrite vast amounts of code on an application that we'd designed specifically for our first product—the bookstore. This was a long, unwieldy process requiring complicated coordination, and it limited our ability to innovate fast and at scale.