Today, I'm excited to announce the general availability of Amazon DynamoDB Accelerator (DAX), a fully managed, highly available, in-memory cache that can speed up DynamoDB response times from milliseconds to microseconds, even at millions of requests per second. You can add DAX to your existing DynamoDB applications with just a few clicks in the AWS Management Console – no application rewrites required.

DynamoDB has come a long way in the 5 years since we announced its availability in January 2012. As we said at the time, DynamoDB was a result of 15 years of learning in the area of large scale non-relational databases and cloud services. Based on this experience and learning, we built DynamoDB to be a fast, highly scalable NoSQL database to meet the needs of Internet-scale applications.

DynamoDB was the first service at AWS to use SSD storage. Development of DynamoDB was guided by the core set of distributed systems principles outlined in the Dynamo paper, resulting in an ultra-scalable and highly reliable database system. DynamoDB delivers predictable performance and single-digit millisecond latencies for reads and writes to your application, whether you're just getting started and want to perform hundreds of reads or writes per second in dev and test, or you're operating at scale in production performing millions of reads and writes per second.

Saving crucial milliseconds

Having been closely involved in the design and development of DynamoDB over the years, I find it gratifying to see DynamoDB being used by more than 100,000 customers - including the likes of Airbnb, Amazon, Expedia, Lyft, Redfin, and Supercell. It delivers predictable performance, consistently in the single-digit milliseconds, to users of some of the largest, most popular, iconic applications in use today. I've had a chance to interact with many of these customers on the design of their apps. These interactions allow me to understand their emerging needs, which I take back to our development teams to further iterate on our services. Many of these customers have apps with near real-time requirements for accessing data that need even faster performance than single-digit milliseconds. These are the apps that have motivated us to develop DAX.

To give you some examples of my interactions, I've been talking to a few ad-tech companies lately, and their conversations are about how they can save milliseconds of performance. For their applications, they have 20-50 ms to decide whether or not to place a bid for an ad. Every millisecond that is spent querying a database and waiting for a key piece of data is time that they could otherwise use to make better decisions, process more data, or improve calculations to place a more accurate bid.

These high-throughput, low-latency requirements need caching, not as a consideration, but as a best practice. Caches reduce latencies to microseconds, increase throughput, and, in many cases, help customers save money by reducing the amount of resources they would otherwise have to overprovision for their databases.

Caching is not a new concept, and I have always wondered, why doesn't everyone cache?

I think the reasons are many, but most follow a similar trend. Although many developers are aware of the patterns and benefits of adding a cache to an application, it's not easy to implement such functionality correctly. It's also time consuming and costly. When you write an application, you might not need or design for caching on day one. Thus, caching has to be shoehorned into an application that is already operational and experiencing enough load to warrant the benefits of a cache. Adding caching when your app is already under load is not easy. As a result, we see many folks trying to squeeze out every last drop of performance, or significantly overprovisioning their database resources, to avoid adding a cache.

Fully managed cache for DynamoDB

What if you could seamlessly add caching to your application without requiring a re-write?

Enter DynamoDB Accelerator. With the launch of DAX, you now get microsecond access to data that lives in DynamoDB. DAX is an in-memory cache that sits in front of DynamoDB and exposes the same API as DynamoDB. There's no need to rewrite your applications to access your cache. You just point your existing application at the DAX endpoint, and as a read-through/write-through cache, DAX seamlessly handles caching for you. Microsecond response times, millions of requests per second – and of course, it's a fully managed environment that is highly available across multiple Availability Zones, so you no longer have to worry about managing your cache.

With DAX, we've created a fully managed caching service that is API-compatible with DynamoDB. What this means to you as a developer is that you don't have to rewrite your DynamoDB application to use DAX. Instead, using the DAX SDK for Java, you just point your existing application at a DAX endpoint, and DAX handles the rest. As a read-through/write-through cache, DAX intercepts both reads and writes to DynamoDB. For read-through caching, when a read is issued to DAX, it first checks to see if that item is in the cache. If it is, DAX returns the value with response times in microseconds. If the item is not in the cache, DAX automatically fetches it from DynamoDB, caches the result for subsequent reads, and returns the value to the application. This is done transparently to the developer. Similarly, for writes, DAX first writes the value to DynamoDB, then caches it in DAX, and returns success to the application. This way, values written through DAX are immediately available as cache hits for subsequent reads, which further simplifies the application. With cache eviction handled by time-to-live (TTL) and write-through evictions, you no longer need to write code to perform this task. DAX provides all the benefits of a cache, with a much simpler developer experience.

The following is code for an application that talks to DynamoDB:
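
(A minimal sketch of such an application, using the AWS SDK for Java document API; the table name, key names, and region below are hypothetical.)

    import com.amazonaws.regions.Regions;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.document.DynamoDB;
    import com.amazonaws.services.dynamodbv2.document.Item;
    import com.amazonaws.services.dynamodbv2.document.Table;

    public class DynamoDBApp {
        public static void main(String[] args) {
            // Standard DynamoDB client: every read is served by the DynamoDB service.
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
                    .withRegion(Regions.US_WEST_2)
                    .build();

            DynamoDB dynamoDB = new DynamoDB(client);
            Table table = dynamoDB.getTable("Music");

            // Read one item by its (hypothetical) partition key and sort key.
            Item item = table.getItem("Artist", "No One You Know", "SongTitle", "Call Me Today");
            System.out.println(item.toJSONPretty());
        }
    }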

All you have to do is point your application at the DAX endpoint with three lines of code. You've added in-memory caching without performing brain surgery on the application.

Adding DAX is as simple as the following code:
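
(A minimal sketch, assuming the AmazonDaxClientBuilder class and fluent setters provided by the DAX SDK for Java; the cluster endpoint is hypothetical. Because the DAX client implements the same AmazonDynamoDB interface as the standard client, only the client construction changes.)

    import com.amazon.dax.client.dynamodbv2.AmazonDaxClientBuilder;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.document.DynamoDB;
    import com.amazonaws.services.dynamodbv2.document.Item;
    import com.amazonaws.services.dynamodbv2.document.Table;

    public class DaxApp {
        public static void main(String[] args) {
            // The only change from the previous sketch: build a DAX client
            // (which implements the AmazonDynamoDB interface) instead of the
            // standard DynamoDB client. The cluster endpoint is hypothetical.
            AmazonDaxClientBuilder daxClientBuilder = AmazonDaxClientBuilder.standard();
            daxClientBuilder.withRegion("us-west-2")
                            .withEndpointConfiguration("my-dax-cluster.example.com:8111");
            AmazonDynamoDB dax = daxClientBuilder.build();

            // Everything below is identical to the plain DynamoDB version;
            // reads are now served from the DAX cache whenever possible.
            DynamoDB dynamoDB = new DynamoDB(dax);
            Table table = dynamoDB.getTable("Music");
            Item item = table.getItem("Artist", "No One You Know", "SongTitle", "Call Me Today");
            System.out.println(item.toJSONPretty());
        }
    }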

Why doesn't everyone cache? Many times, it is too costly in terms of time and complexity because developers have to alter some of their most critical code paths. With DAX, you get faster reads, more throughput, and cost savings - without having to write any new code.

What's not to like? This is a fantastic addition for our DynamoDB customers. To get started with DAX today, see Amazon DynamoDB Accelerator (DAX).

Many of our customers share my excitement:

10 billion matches later, Tinder has changed the way people meet around the world. "For Tinder, performance is absolutely key. We are major users of DynamoDB. We love its simplicity and ability to scale with consistent performance," said Maria Zhang, VP of Engineering at Tinder. "With DAX, AWS has taken performance to a new level, with response times in microseconds. We really like how DAX integrates seamlessly with DynamoDB, is API-compatible, and doesn't require us to write any new code. We are excited for the General Availability of DAX."

Careem is a car-booking service and app that serves more than 40 cities and 11 countries in the broader Middle East. The company uses a number of AWS services, including Amazon DynamoDB to store locations of its captains, promotions, and configurations. "We have been involved early on during the DAX public preview, and have been running our production workload on DAX with no issues," said Tafseer-ul-Islam Siddiqui, Software Architect at Careem. "We are using DAX to scale our reads across our network of services. As a write-through cache, DAX has simplified our application stack and has removed the need for building a central service for our caching needs. A key feature that motivated our adoption of DAX was that it is API-compatible with DynamoDB and thus required minimal changes to use with our existing app - you only need to change the DynamoDB client to the DAX client. Our team was really impressed with the built-in failover and replication support."

Canon INC. Office Imaging Products Development Planning & Management Center provides mission-critical cloud services connecting to business machines for worldwide customers across four continents. "Amazon DynamoDB Accelerator (DAX) is a very wonderful service to improve the user experience of Amazon DynamoDB," said Takashi Yagita, Principal Engineer, Office Imaging Products Development Planning & Management Center, Canon INC. "Our developers like the excellent design concept of DAX SDK, which enables us to switch from DynamoDB and start using DAX seamlessly. Our team has succeeded in keeping the DynamoDB capacity units far lower while improving the data access speed by DAX. We welcome that DAX is generally available."

This is a really fantastic addition for our DynamoDB customers. To get started with DAX today, please see https://aws.amazon.com/dynamodb/dax/.

Expanding the Cloud – An AWS Region is coming to Hong Kong

Today, I am very excited to announce our plans to open a new AWS Region in Hong Kong! The new region will give Hong Kong-based businesses, government organizations, non-profits, and global companies with customers in Hong Kong, the ability to leverage AWS technologies from data centers in Hong Kong. The new AWS Asia Pacific (Hong Kong) Region will have three Availability Zones and be ready for customers to use in 2018.

Over the past decade, we have seen tremendous growth at AWS. As a result, we have opened 43 Availability Zones across 16 AWS Regions worldwide. Last year, we opened new regions in Korea, India, the US, Canada, and the UK. Throughout the next year, we will see another eight zones come online, across three AWS Regions (France, China, and Sweden). However, we do not plan to slow down and we are not stopping there. We are actively working to open new regions in the locations where our customers need them most.

In Asia Pacific, we have been constantly expanding our footprint. In 2010, we opened our first AWS Region in Singapore and since then have opened additional regions: Japan, Australia, China, Korea, and India. After the launch of the AWS APAC (Hong Kong) Region, there will be 19 Availability Zones in Asia Pacific for customers to build flexible, scalable, secure, and highly available applications.

As well as AWS Regions, we also have 21 AWS Edge Network Locations in Asia Pacific. This enables customers to serve content to their end users with low latency, giving them the best application experience. This continued investment in Asia Pacific has led to strong growth as many customers across the region move to AWS.

Organizations in Hong Kong have been increasingly moving their mission-critical applications to AWS. This has led us to steadily increase our investment in Hong Kong to serve our growing base of enterprise, public sector, and startup customers.

In 2008, AWS opened a point of presence (PoP) in Hong Kong to enable customers to serve content to their end users with low latency. Since then, AWS has added two more PoPs in Hong Kong, the latest in 2016. In 2013, AWS opened an office in Hong Kong. Today we have local teams in Hong Kong to help customers of all sizes as they move to AWS, including account managers, solutions architects, business developers, partner managers, professional services consultants, technology evangelists, start-up community developers, and more.

Some of the most successful startups in the world—including 8 Securities, 9GAG, and GoAnimate—are already using AWS to deliver highly reliable, scalable, and secure applications to customers.

9GAG is a Hong Kong-based company responsible for 9gag.com, one of the top traffic websites in the world. It's an entertainment website where users can post content or "memes" that they find amusing and share them across social media networks. 9GAG generates millions of Facebook shares and likes per month, attracts over 78 million global unique visitors, and receives more than 1 billion page views per month. 9GAG has a small team of nine people, including three engineers to support the business, and uses AWS to service their global visitors.

GoAnimate is a Hong Kong-based company that allows companies and individuals to tell great visual stories via its online animation platform. GoAnimate uses many AWS services, including Amazon Polly, to allow users to make their visual animations speak. They chose to use AWS in order to focus on developing their platform, instead of managing infrastructure. They believe that doing so has reduced their development time by 20 to 30 percent.

Some of the largest, and most well respected, enterprises in Hong Kong are also using AWS to power their businesses, enabling them to be more agile and responsive to their customers. These companies include Cathay Pacific, CLSA, HSBC, Gibson Innovations, Kerry Logistics, Ocean Park, Next Digital, and TownGas.

Hong Kong's largest listed multimedia group, Next Digital, operates businesses spanning Hong Kong, Taiwan, Japan, and the United States. They operate in an industry where malicious groups frequently launch distributed denial-of-service (DDoS) attacks to disrupt availability. Moreover, Internet service providers can shut down their services at any time if they feel threatened by these DDoS attacks. Next Digital operates on AWS in a more highly available and fault-tolerant environment than their previous colocation solution. Beyond running their web properties and applications, Next Digital also uses Amazon RDS (database), Amazon ElastiCache (caching), and Amazon Redshift (data warehousing). Further, taking advantage of the local AWS Hong Kong-based team, Next Digital uses AWS Enterprise Support for Infrastructure Event Management and other high-touch support services.

Kerry Logistics, a global logistics company based in Hong Kong, runs a number of corporate IT applications on AWS, including its Infor Sun Accounting Environment and Kewill Freight Forwarding Systems across multiple regions on AWS globally. Their goal has been to ensure that their IT infrastructure sits as closely to their customers and users as possible.

In addition to established enterprises, government organizations, and rapidly growing startups, AWS also has a vibrant ecosystem in Hong Kong, including partners that have built cloud practices and innovative technology solutions on AWS. AWS Partner Network (APN) Consulting Partners in Hong Kong help customers migrate to the cloud. APN Consulting Partners include global partners such as Accenture, Datapipe, Deloitte, Infosys, KPMG and Rackspace, and local partners such as ICG, eCloudValley, Masterson, and Nextlink, among many others.

The new AWS Asia Pacific (Hong Kong) Region, coupled with the existing AWS Regions in Singapore, Tokyo, Sydney, Beijing, Seoul, and Mumbai, and a future one in Ningxia, will provide customers with quick, low-latency access to websites, mobile applications, games, SaaS applications, big data analysis, Internet of Things (IoT) applications, and more. I'm excited to see the new and innovative use cases coming from our customers in Hong Kong and across Asia Pacific, all enabled by AWS.

Unlocking the Value of Device Data with AWS Greengrass

Unlocking the value of data is a primary goal that AWS helps our customers to pursue. In recent years, an explosion of intelligent devices has created oceans of new data across many industries. We have seen that such devices can benefit greatly from the elastic resources of the cloud. This is because data gets more valuable when it can be processed together with other data.

At the same time, it can be valuable to process some data right at the source where it is generated. Some applications – medical equipment, industrial machinery, and building automation are just a few – can't rely exclusively on the cloud for control, and require some form of local storage and execution. Such applications are often mission-critical: safeties must operate reliably, even if connectivity drops. Some applications may also rely on timely decisions: when maneuvering heavy machinery, an absolute minimum of latency is critical. Some use cases have privacy or regulatory constraints: medical data might need to be stored on site at a hospital for years even if also stored in the cloud. When you can't address scenarios such as these, the value of data you don't process is lost.

As it turns out, there are three broad reasons that local data processing is important, in addition to cloud-based processing. At AWS we refer to these broad reasons as "laws" because we expect them to hold even as technology improves:

  1. Law of Physics. Customers want to build applications that make the most interactive and critical decisions locally, such as safety-critical control. This is determined by basic laws of physics: it takes time to send data to the cloud, and networks don't have 100% availability. Customers in physically remote environments, such as mining and agriculture, are more affected by these issues.

  2. Law of Economics. In many industries, data production has grown more quickly than bandwidth, and much of this data is low value. Local aggregation and filtering of data allows customers to send only high-value data to the cloud for storage and analysis.

  3. Law of the Land. In some industries, customers have regulatory or compliance requirements to isolate or duplicate data in particular locations. Some governments impose data sovereignty restrictions on where data may be stored and processed.

Today, we are announcing the general availability of AWS Greengrass, a new service that helps unlock the value of data from devices that are subject to the three laws described above.

AWS Greengrass extends AWS onto your devices, so they can act locally on the data they generate while still taking advantage of the cloud. AWS Greengrass takes advantage of your devices' onboard capabilities, and extends them to the cloud for management, updates, and elastic compute and storage.

AWS Greengrass provides the following features:

  • Local execution of AWS Lambda functions written in Python 2.7 and deployed down from the cloud.
  • Local device shadows to maintain state for the stateless functions, including sync and conflict resolution.
  • Local messaging between functions and peripherals on the device that hosts the AWS Greengrass core, and also between the core and other local devices that use the AWS IoT Device SDK (see the sketch after this list).
  • Security of communication between the AWS Greengrass group and the cloud. AWS Greengrass uses the same certificate-based mutual authentication that AWS IoT uses. Local communication within an AWS Greengrass group is also secured by using a unique private CA for every group.
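
To make the local-messaging path more concrete, here is a minimal sketch of a local device publishing telemetry over MQTT with the AWS IoT Device SDK for Java. The core's address, keystore paths, and topic name are hypothetical, and in practice a device would typically obtain the Greengrass core's connectivity information and group CA through the Greengrass discovery API rather than hardcoding them.

    import java.io.FileInputStream;
    import java.security.KeyStore;

    import com.amazonaws.services.iot.client.AWSIotMqttClient;
    import com.amazonaws.services.iot.client.AWSIotQos;

    public class LocalTelemetryPublisher {
        public static void main(String[] args) throws Exception {
            // Hypothetical keystore holding the device certificate and private key;
            // the Greengrass group CA must also be trusted for the local TLS connection.
            KeyStore keyStore = KeyStore.getInstance("JKS");
            keyStore.load(new FileInputStream("device-keystore.jks"),
                          "keystorePassword".toCharArray());

            // Hypothetical local address of the AWS Greengrass core (MQTT over TLS).
            AWSIotMqttClient client = new AWSIotMqttClient(
                    "192.168.1.10", "sensor-device-01", keyStore, "privateKeyPassword");

            client.connect();

            // Publish to a local topic; a Lambda function running on the core, or another
            // local device, can subscribe to this topic without a round trip to the cloud.
            client.publish("sensors/room1/temperature", AWSIotQos.QOS0, "{\"celsius\": 21.5}");

            client.disconnect();
        }
    }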

Before AWS Greengrass, device builders often had to choose between the low latency of local execution, and the flexibility, scale, and ease of the cloud. AWS Greengrass removes that trade-off—manufacturers and OEMs can now build solutions that use the cloud for management, analytics, and durable storage, while keeping critical functionality on-device or nearby.

AWS Greengrass makes it easier for customers to build systems of devices (including heterogeneous devices) that work together with the AWS Cloud. Our goal is not to provide an alternative for the cloud, but to provide tools for customers to use the cloud to build applications and systems that can't be moved entirely to the cloud. Using AWS Greengrass for local execution, customers can identify the most valuable data to process, analyze, and store in the cloud.

With AWS Greengrass, we can begin to extend AWS into customer systems—from small devices to racks of servers—in a way that makes it easy to do the things locally that are best done locally, and to amplify those workloads with the cloud.

Getting started: AWS Greengrass is available today to all customers, in US East (N. Virginia) and US West (Oregon). You can get started by visiting http://aws.amazon.com/greengrass.

In many high-throughput OLTP-style applications, the database plays a crucial role in achieving scale, reliability, high performance, and cost efficiency. For a long time, these requirements were almost exclusively served by commercial, proprietary databases. Soon after the launch of the Amazon Relational Database Service (RDS), customers gave us feedback that they would love to migrate to RDS. Yet what they desired even more was to be unshackled from the high-cost, punitive licensing schemes that came with proprietary databases.

They would love to migrate to an open-source database like MySQL or PostgreSQL, if such a database could meet the enterprise-grade reliability and performance that these high-scale applications required.

We decided to use our inventive powers to design and build a new database engine that would give database systems such as MySQL and PostgreSQL reliability and performance at scale, at a level that could serve even the most demanding OLTP applications. It gave us the opportunity to invent a new database architecture that would address the needs of modern cloud-scale applications, departing from the traditional approaches that had their roots in the databases of the nineties. That database engine is now known as "Amazon Aurora"; it launched in 2014 for MySQL, and in 2016 for PostgreSQL.

Amazon Aurora has become the fastest-growing service in the history of AWS and is frequently the target of migrations from on-premises proprietary databases.

In a paper published this week at SIGMOD'17, the Amazon Aurora team presents the design considerations for the new database engine and how they addressed them. From the abstract:

Amazon Aurora is a relational database service for OLTP workloads offered as part of Amazon Web Services (AWS). In this paper, we describe the architecture of Aurora and the design considerations leading to that architecture. We believe the central constraint in high throughput data processing has moved from compute and storage to the network. Aurora brings a novel architecture to the relational database to address this constraint, most notably by pushing redo processing to a multi-tenant scaleout storage service, purpose-built for Aurora. We describe how doing so not only reduces network traffic, but also allows for fast crash recovery, failovers to replicas without loss of data, and fault-tolerant, self-healing storage. We then describe how Aurora achieves consensus on durable state across numerous storage nodes using an efficient asynchronous scheme, avoiding expensive and chatty recovery protocols. Finally, having operated Aurora as a production service for over 18 months, we share lessons we have learned from our customers on what modern cloud applications expect from their database tier.

I hope you will enjoy this weekend's reading, as it contains many gems about modern database design.

"Amazon Aurora: Design Considerations for HighThroughput Cloud-Native Relational Databases", Alexandre Verbitski, Anurag Gupta, Debanjan Saha, Murali Brahmadesam, Kamal Gupta, Raman Mittal, Sailesh Krishnamurthy, Sandor Maurice, Tengiz Kharatishvili, Xiaofeng Bao, in SIGMOD '17 Proceedings of the 2017 ACM International Conference on Management of Data, Pages 1041-1052 May 14 – 19, 2017, Chicago, IL, USA.

This article titled "Wie die Digitalisierung Wertschöpfung neu definiert" appeared in German last week in the "Größer, höher, weiter (bigger, higher, further)" column of Wirtschaftwoche.

Germany's "hidden champions" – family-owned companies, engineering companies, specialists – are unique in the world. They stand for quality, reliability and a high degree of know-how in manufacturing. Hidden champions play a significant role in the German economy; as a result, Germany has become one of the few countries in Western Europe where manufacturing accounts for more than 20% of GDP. By contrast, neighboring countries have seen a continuous decline in their manufacturing base. What's more, digital technologies and business models that are focused on Industry 4.0 (i.e., the term invented in Germany to refer to the digitalization of production) have the potential to reinforce Germany's lead even more. According to estimates by Bitkom, the German IT industry association, and the Fraunhofer Institute of Industrial Engineering IAO, Germany's hidden champions will contribute a substantial portion to the country's economic growth by 2025 and create new jobs. At the same time, many experts believe the fundamental potential of Industry 4.0 has not even been fully leveraged yet.

The power of persistence versus the speed of adjustment

Most of Germany's hidden champions have earned their reputation through hard work: they have been optimizing their processes over decades. They have invested the time to perfect their processes and develop high-quality products for their customers. This has paid off – and continues to do so.

However, digital technologies are now ushering in a paradigm change in value creation. Manufacturing can be fully digitalized to become part of a connected "Internet of Things" (IoT), controlled via the cloud. And control is not the only change: IoT creates many new data streams that, through cloud analytics, provide companies with much deeper insight into their operations and customer engagement. This is forcing Mittelstand companies to break down silos between departments, think beyond their traditional activities, and develop new business models.

In fact, almost every industrial company in Germany already has a digitalization project in place. Most of these companies are extracting additional efficiency gains in their production by using digital technologies. Other companies have established start-ups for certain activities, or pilot projects aimed at creating showcases. But many of these initiatives never get beyond that point. The core business, which is doing well, remains untouched by all this. One of the main reasons is that the people with the necessary IT expertise in Mittelstand companies are not sitting at the strategy table as often as they should.

Will these initiatives be enough to secure the pole position for Germany's Mittelstand? Probably not. Companies in growth markets are catching up. China's industry, for example, is making huge progress – something that took years to achieve in other places. The role of Chinese manufacturers in the worldwide market is changing: from low-cost workbench to global provider of advanced technology. Market leaders from Germany therefore realize they cannot afford to rest on their laurels.

Competitors from the software side are also reshuffling the balance of power, because their offerings will create a completely new market alongside the traditional business of Mittelstand toolmakers and mechanical and systems engineering companies. If new and innovative companies, such as providers of data analytics, specialized software providers, or companies that can bundle complementary offerings, appear on the scene, traditional manufacturing could suddenly become just one module among many – namely, manufacturing-as-a-service.

Creating added value in an Industry 4.0 environment often happens when B2B companies integrate B2C approaches, in turn sparking change in their own industry. This requires using agile development processes for continuous improvement and creating a broader portfolio of solutions, for example by increasingly connecting the shop floor with data "from the outside" such as logistics and inventory management. Software that plays an ever-greater role in the "digital factory" of the future will continuously expand its functionalities. Already today, traditional components used in automated industry and made by companies such as Beckhoff, Harting, WAGO, etc. can connect seamlessly to the cloud. Hidden champions from the field of automation technology digitalize their products, enabling their customers to easily join the "smart factory", an environment in which manufacturing facilities and logistics systems are interconnected without the need for a person to operate them. A great example of this kind of digital transformation outside of Germany is General Electric. It is best captured in the words of their CEO Jeffrey Immelt: "If you went to bed last night as an industrial company, you're going to wake up today as a software and analytics company."

Efficient individualization

The example of Stölzle Oberglas, a leading Austrian glass producer, shows how an industrial company is able to weave the laws of the consumer industry into its own industry. If a customer decides at the last moment (for example, due to a large upcoming sports event) to sell a special edition with the name of the winning team on it, Stölzle needs to deliver at short notice. In the past, this would have been cost-prohibitive, but in today's digital age such a highly customized product must not cost more than an off-the-shelf product. Stölzle can afford that because, with the help of software provider Actyx, it has consolidated data from its entire production process, can analyze that data intelligently, and can make it available to the user. In this way, changing specifications can flow into the production process practically in real time using cloud technology. But client-driven innovation in an Industry 4.0 environment doesn't stop here. Actyx uses the insights gained in this project, continues to develop the solution based on those insights, and makes it available to a broader group of users through its solution portfolio. It is similar to what we do at Amazon Web Services: we develop new features and services based on concrete feedback from customers and then make them available to all our users.

Ecosystem of additional services

Knowing how to connect the knowledge of digital natives in a meaningful way with engineering knowledge will be critical for hidden champions' future success. Almost daily, new start-ups are formed in Silicon Valley, Tel Aviv, London, and Berlin. The business model of many of these new firms is about creating even greater added value for the user of a machine or device: using sensors that connect machines and products in the "Internet of Things," other services can now be created that are no longer limited to the assembly line. At the same time, this kind of experimentation poses only a small risk, because in the cloud, services and the exact server capacity can be reserved for each individual application purpose and then paid for per use.

These kinds of services are being developed by the Berlin-based company WATTx, an independent spin-off from the 100-year-old heating technology specialist Viessmann. WATTx was created by the company owners to supplement Viessmann's standard products with intelligent digital services, such as an IoT platform for commercial buildings. Based on data from sensors inside and outside a building, heating, lighting, and window shades can be managed remotely. In the meantime, WATTx is doing much more than that. It brings together all of its digital talent in Berlin and gives them unlimited access to new technologies, such as the cloud. This allows ideas to be realized very quickly – or discarded quickly if they prove unachievable. Ideas are also developed and tested here before they hit the market as new companies. In the meantime, Viessmann is developing services of its own that offer added value related to its basic products of heating and thermostats. By working in this way, this traditional German company is able to maintain contact with end customers and explore completely new markets.

Keeping an eye on the big changes

Software and services are areas where a producer of a machine or device initially does not feel at home, simply because software and services were never part of its core business. Changing processes that already work seems to be a high risk, at least in the short term. But if the strategic dimension is lacking in Industry 4.0 projects, many companies may not generate any innovative added value at all — neither at the micro- nor macroeconomic level. In the long term, there is a high risk that more agile competitors will take the lead over 'traditional' industrial companies if the latter fail to develop a new path through the global ecosystem of machines, products and (digital) services. But those industrial companies that do take the bold step of implementing new approaches and solutions by embracing cloud technologies will maintain their hard-won status in the German economy. And there's a good chance they will play a more important role in the future.

Coming to STATION F: The first Mentor's Office powered by AWS!

I am excited to announce that AWS is opening its first Mentor's Office at STATION F in Paris! The Mentor's Office is a workspace exclusively dedicated to meetings between AWS experts and startups. STATION F is the world's biggest startup campus. With this offering, which starts at the end of June when the campus opens, AWS increases the support already available to startup customers in France.

All year long, AWS experts will deliver technical and business assistance to startups based on campus. AWS Solutions Architects will meet startup members for face-to-face sessions, to share guidance on how cloud services can be used for their specific use cases, workloads, or applications. Startup members will also have the possibility to meet with AWS business experts such as account managers, business developers, and consultants. They can explore the possibilities of the AWS Cloud and take advantage of our IT experience and business expertise. With these 1:1 meetings, AWS delivers mentoring to startups to help them bring their ideas to life and accelerate their business using the cloud.

AWS will also provide startups with all of the benefits of the AWS Activate program, including AWS credits, training, technical support, and a special startup community forum to help them successfully build their business. For more details about the Mentor's Office at STATION F, feel free to contact the AWS STATION F team.

With this opening, Amazon continues to build out global programs to support startup growth and to speed up innovation. Startups can also apply to other Amazon programs to boost their businesses, such as:

  • Amazon Launchpad, which makes it easy for startups to launch, market, and distribute their products to hundreds of millions of Amazon customers across the globe.
  • Alexa Fund, which provides up to $100 million in venture capital funding to fuel voice technology innovation.

After the launch of AWS in 2006, we saw an acceleration of French startups adopting the cloud. Successful French startups already using AWS to grow their businesses, across Europe and around the world, include Captain Dash, Dashlane, Botify, Sketchfab, Predicsis, Yomoni, BidMotion, Teads, FrontApp, Iconosquare, and many others. They all get benefits from AWS's highly flexible, scalable, and secure global platform. AWS eliminates the undifferentiated heavy lifting of managing underlying infrastructure and provides elastic, pay-as-you-go IT resources.

We have also seen start-ups in France using AWS to grow and become household names in their market segment, such as Aldebaran Robotics (SoftBank Robotics Europe). This startup uses AWS to develop new technologies. They are able to concentrate their engineering resources on innovation, rather than maintaining technology infrastructure, which is leading to the development of autonomous and programmable humanoid robots.

Cloud is also an opportunity for startups to reach security standards that were not accessible to them before. For example, PayPlug is an online credit card payment solution that enables e-merchants to enrich the customer experience by reinventing the payment experience. Such a service requires suppliers to obtain PCI DSS certification at the "Service Provider" level, a very demanding certification. Using AWS's PCI DSS Level 1 compliant infrastructure, PayPlug has been certified by L'ACPR (L'Autorité de contrôle prudentiel et de résolution, the French prudential supervision and resolution authority) as a financial institution, a major step in their development.

I look forward to meeting the builders of tomorrow at STATION F in the near future.

Go French Startups!

I will be returning this weekend to the US from a very successful AWS Summit in Sydney, so I have ample time to read during my travels. This weekend, however, I would like to take a break from reading historical computer science material to catch up on another technology I find fascinating: functional Magnetic Resonance Imaging, aka fMRI.

fMRI is a functional imaging technology, meaning that it does not just record the state of the brain at one particular point in time, but the changing state over a period of time. The basic technology records brain activity by measuring changes in blood flow through the brain. The technology relies on the fact that cerebral blood flow and neuronal activation are coupled: when an area of the brain is in use, blood flow to that region also increases.

There have been significant advances in the use of fMRI technology, but mostly in research. It also comes with significant ethical questions: if you can "read" someone's brain, what are you allowed to do with that knowledge?

For my flight back to the US this weekend, I will read two papers: one by Peter Bandettini, published in NeuroImage, about the history of fMRI, and one from Poldrack and Farah on the state of the art in fMRI and its applications, published in Nature.

"Twenty years of functional MRI: The science and the stories, Peter A. Bandettini, Neuroimage 62, 575–588 (2012)

"Progress and challenges in probing the human brain", Russell A. Poldrack and Martha J. Farah, Nature 526, 371–379 (15 October 2015)

Today, I am very excited to announce our plans to open a new AWS Region in the Nordics! The new region will give Nordic-based businesses, government organizations, non-profits, and global companies with customers in the Nordics, the ability to leverage the AWS technology infrastructure from data centers in Sweden. The new AWS EU (Stockholm) Region will have three Availability Zones and will be ready for customers to use in 2018.

Over the past decade, we have seen tremendous growth at AWS. As a result, we have opened 42 Availability Zones across 16 AWS Regions worldwide. Last year, we opened new regions in Canada, India, Korea, the UK, and the US. Throughout the next year we will see another five zones, across two AWS Regions, come online in France and China. However, we do not plan to slow down and we are not stopping there. We are actively working to open new regions in the locations our customers need them most.

In Europe, we have been constantly expanding our footprint. In 2007, we opened our first AWS Region in Ireland and since then have opened additional regions, in Germany and the UK, with France still to come. After the launch of the AWS EU (Stockholm) Region, there will be 13 Availability Zones in Europe for customers to build flexible, scalable, secure, and highly available applications. It will also give customers another region where they can store their data with the knowledge that it will not leave the EU unless they move it.

As well as AWS Regions, we also have 24 AWS Edge Network Locations in Europe. This enables customers to serve content to their end users with low latency, giving them the best application experience. This continued investment in Europe has led to strong growth as many customers across the region move to AWS.

Organizations across the Nordics—Denmark, Finland, Iceland, Norway, and Sweden—have been increasingly moving their mission-critical applications to AWS. This has led us to steadily increase our investment in the Nordics to serve our growing base of enterprise, public sector, and startup customers.

In 2011, AWS opened a Point of Presence (PoP) in Stockholm to enable customers to serve content to their end users with low latency. In 2014 and 2015 respectively, AWS opened offices in Stockholm and Espoo, Finland. We have also added teams in the Nordics to help customers of all sizes as they move to AWS, including account managers, solutions architects, business developers, partner managers, professional services consultants, technology evangelists, start-up community developers, and more.

Some of the most successful startups in the world, including Bambora, iZettle, King, Mojang, and Supercell are already using AWS to deliver highly reliable, scalable, and secure applications to customers.

Supercell is responsible for several of the highest grossing mobile games in history, and they rely on AWS for their entire infrastructure. With titles like Boom Beach, Clash of Clans, Clash Royale, and Hay Day, Supercell has 100 million people playing their games every single day.

iZettle, a mobile payments startup, is also ‘all-in’ on AWS. After finding it cost prohibitive to use colocation centers in local markets where their users are based, iZettle decided to give up hardware. They migrated their IT infrastructure, including mission-critical payments platforms, to AWS in just six weeks. After migrating, database queries that took six seconds now take three seconds in their AWS infrastructure. That’s 100% faster.

Some of the largest, and most well respected, enterprises in the Nordics also depend on AWS to power their businesses, enabling them to be more agile and responsive to their customers. These companies include ASSA ABLOY, Finnair, Husqvarna Group, IKEA, Kauppalehti, Kesko, Sanoma, Scania, Schibsted, Telenor, and WOW Air.

Scania, a world leading manufacturer of commercial vehicles, is using AWS to bring advanced technologies to their trucks, buses, coaches, and diesel engines. AWS is helping them reach their goal of becoming the leader in sustainable transport. Scania is planning to use AWS for their connected vehicle systems, which allow truck owners to track their vehicles, collect real-time running data, and run diagnostics to understand when maintenance is needed to reduce vehicle downtime.

Icelandic low-cost airline carrier WOW air is using AWS for its Internet-facing IT infrastructure, including its booking engine, development platforms, and web servers. In making the switch to AWS, WOW air has saved between $30,000 and $45,000 on hardware and software licensing. The airline has also been able to scale quickly to cope with spikes in seasonal traffic, cutting application latency and improving the overall customer experience. AWS was crucial to the successful launch of WOW air's U.S. flights, allowing the airline to expand twelvefold to cope with the spike in traffic that it experienced at the time.

In addition to established enterprises, government organizations, and rapidly growing startups, AWS also has a vibrant ecosystem in the Nordics, including partners that have built cloud practices and innovative technology solutions on AWS. AWS Partner Network (APN) Consulting Partners in the Nordics help customers migrate to the cloud. APN Partners include Accenture, Capgemini, Crayon Group, CSC, Cybercom, Dashsoft, Enfo Group, Evry, Jayway, Nordcloud, Proact IT Group, Solita, Tieto, Wipro, and many others. Among the APN Technology Partners and independent software vendors (ISVs) in the Nordics using AWS to deliver their software to customers around the world are Basware, eBuilder, F-Secure, Queue-it, Xstream, and many others.

The new AWS EU (Stockholm) Region, coupled with the existing AWS Regions in Dublin, Frankfurt, and London, and a future one in France, will provide customers with quick, low-latency access to websites, mobile applications, games, SaaS applications, big data analysis, Internet of Things (IoT) applications, and more. I’m excited to see the new and innovative use cases coming from our customers in the Nordics and across Europe, all enabled by AWS.

This weekend I am travelling to Australia for the first AWS Summit of 2017. I find that on such a long trip, to keep from getting distracted, I need an exciting paper that is easy to read. Last week's 'Deep Learning' overview would not have met those requirements.

One topic that always gets me excited is how to take computer science research and implement it in production systems. There are often so many obstacles that we do not see much of this work happening. For example, when building Dynamo, where we put a collection of different research technologies together in production, we struggled with all the assumptions the researchers had made. At times, those assumptions make research unsuitable for production (e.g., real systems do not fail by stopping in a nice and clean way).

In the early nineties, Mendel Rosenblum and John Ousterhout made a major breakthrough in the design of file systems with "The Design and Implementation of a Log-Structured File System." That alone is an interesting paper to read, but this weekend we will be looking at the actual implementation of an LFS by Margo Seltzer and other members of the BSD team.

It is one of the first papers to describe the implementation of a research design within a production system, and to measure the results. I hope you will also enjoy it!

"An implementation of a log-structured file system for UNIX.", Margo Seltzer, Keith Bostic, Marshall Kirk Mckusick, and Carl Staelin. 1993, In Proceedings of the USENIX Winter 1993 Conference Proceedings on USENIX Winter 1993 Conference Proceedings (USENIX'93). USENIX Association, Berkeley, CA, USA, 3-3.

In the past few years, we have seen an explosion in the use of 'Deep Learning' as its software platforms and the supporting hardware mature, especially as GPUs with larger memories become widely available. Even though this is a recent development, 'Deep Learning' has deep historical roots, tracing back all the way to the sixties or possibly earlier.

By reading up on its history, we get a better understanding of the current state of the art of 'Deep Learning' algorithms and the 'Neural Networks' that you build with them.

There is a broad set of papers to read if we want to dive deep into the history. It would take us multiple weekends. Instead, we will be reading an excellent overview paper from 2014 by Jürgen Schmidhuber. Jürgen evaluates the current state of the art in 'Deep Learning' by tracing it back to its roots. Ergo, we get excellent historical context.

Enjoy!

"Deep Learning in Neural Networks: An Overview." Jürgen Schmidhuber, in Neural Networks, Volume 61, January 2015, Pages 85-117 (DOI: 10.1016/j.neunet.2014.09.003)