How AI coding companions will change the way developers work


Werner, Doug, and Sandeep behind the scenes

This is the third installment of the Hello World series, where I discuss the broad landscape of generative AI with AI and ML experts at Amazon. If you haven’t already, I encourage you to watch my conversations with Swami Sivasubramanian, and with Sudipta Sengupta and Dan Roth.

(The picture above is me doing my homework in 1988 when I went back to school to study computer science…. :-))

I like to think that as developers, we have one of the most creative jobs in the world. Every day we work towards building something new. And some of the greatest joy as a developer comes from knowing that you’ve solved a complex problem or created a delightful product for your customers. But writing code is only one part of the job (albeit an important one); there’s also brainstorming with product teams, designing the user experience, determining implementation details, and drafting system designs. I would argue, and I hope you would as well, that a developer’s time is better spent on these creative tasks than on writing boilerplate code to upload a file to Amazon S3.
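To make that concrete, here is roughly the kind of boilerplate I have in mind, sketched in Python with boto3 (the bucket and key names are placeholders):

```python
import boto3
from botocore.exceptions import ClientError

def upload_file_to_s3(file_path: str, bucket: str, key: str) -> bool:
    """Upload a local file to an S3 bucket, returning True on success."""
    s3 = boto3.client("s3")
    try:
        s3.upload_file(file_path, bucket, key)
    except ClientError as err:
        print(f"Upload failed: {err}")
        return False
    return True

# Example usage with placeholder names:
# upload_file_to_s3("report.csv", "my-example-bucket", "reports/report.csv")
```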

Developer tools are one area where generative AI is already having a tangible impact on productivity and speed, and it’s the reason I’m excited about Amazon CodeWhisperer: a coding companion that uses a large language model (LLM) trained on open-source projects, technical documentation, and AWS services to do a lot of the undifferentiated heavy lifting that comes along with building new applications and services.

I recently met with Doug Seven, GM of Amazon CodeWhisperer, and Sandeep Pokkunuri, a senior principal engineer at AWS, to learn more about the impact that generative AI is having on software development — and to find out if AI coding companions make the job less fun.

Coding companions and code completion software aren’t new. We’ve been able to iterate through properties and methods using popular IDEs for well over a decade. What’s fundamentally different this time is that LLMs offer the potential not only to predict the next line of code, but to understand your intent and infer context from what you’ve already written (including comments) to generate syntactically valid, idiomatic code. Not to mention, they make mundane and time-consuming tasks, like writing unit tests or translating code from one language to another, much easier.
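As an illustration of the unit test case: given a small function and a one-line comment stating the intent, a coding companion can draft the tests for you. The sketch below is only illustrative, not actual CodeWhisperer output; the function and test names are invented:

```python
import unittest

# Existing function the developer has already written.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# The developer writes a comment like "unit tests for slugify" and the
# companion drafts something along these lines.
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_already_lowercase(self):
        self.assertEqual(slugify("already-slugged"), "already-slugged")

if __name__ == "__main__":
    unittest.main()
```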

As Doug said during our conversation, this is not a replacement for expertise. It’s a tool that allows developers to spend more time on the fun part of their job — solving hard problems.

The entire transcript of my conversation with Doug and Sandeep is available below. If you want to try out CodeWhisperer, installation instructions are available here.

Now, go build!



Transcription

This transcript has been lightly edited for flow and readability.

***

Werner Vogels: Doug, Sandeep, thank you for meeting with me here today. We’re going to talk a bit about the tech behind how we are helping developers with Generative AI. But can you first tell me a bit, what is your role within Amazon and in this world?

Doug Seven: Sure. So I’m the general manager for CodeWhisperer, which is our large language model product for developers. And I came here by way of about two decades in developer tools, focused on developer productivity and how to help developers do what they do faster and better, and have more fun doing it.

WV: Did you used to be a developer yourself?

DS: I have been a developer for a very long time, which is how I got into it. I spent a lot of time writing code and figuring things out.

WV: Sandeep?

Sandeep Pokkunuri: I’ve been a developer at Amazon for twelve years; in fact, today marks exactly twelve years. I worked on distributed systems products like DynamoDB and SQS. For the past six or seven years, I’ve been working in the machine learning organization, building various services like Lex and Voice ID, and I’m actually working on large language models myself now.

WV: So, we hear a lot about all this Generative AI stuff and large language models and things like that. And the word “language” in there suggests that it’s all about text – writing poetry or news articles or things like that. What are we doing using this technology to help developers?

DS: Well, language isn’t all about text, right? That’s just one expression of language. But certainly when you’re a developer, you’re writing code, and that’s a form of text. So think of the process a developer goes through: I’m going to write some code, I’m going to think about what I’m doing, I’m trying to solve a problem. The idea is to back that up with a large language model that says, hey, let me understand what you’re doing, and from what I understand of that, let me infer what I think you want to do next and suggest that to you. Maybe I’m just going to offer you the completion of the line of code you’re working on: you’re writing a method signature, and I’m going to give you the parameters that you want to fill in.

WV: But didn’t we have this completion already in IDEs and things like that for particular signatures, for example?

DS: Yeah, code completion has been around for a long time. It evolved from something as simple as I type a class name, I hit a period, and we just iterate through the methods and properties that are available and list them, which is a really simple form of code completion. The evolution of that is to not just say, “here are the properties and methods that are available to you,” but to say, “I think I know what you’re doing, let me suggest even more code that would help you complete that task.”

WV: It’s almost like continuous pair programming.

DS: Yes, exactly.

WV: Your peer here is not a human, but it’s…

DS: We phrase it as your AI coding companion. It’s like we’re sitting next to each other, writing code, solving this problem.

WV: And it doesn’t need to read the documentation.

DS: It’s already read it all.

WV: So where does the inference happen? On your laptop? Or do you need to be connected to the CodeWhisperer backend?

SP: Inference is just one part of the story; the full story is more complex. For example, in the IDE, the plugin is doing a lot of work. It is seeing, okay, what programming language is the developer using? Where are they in the current context? Are they opening a function? Are they trying to finish a comment? Are they trying to write a block, a for loop, an if condition, or something like that? It figures out the exact moment when you might need a code recommendation. That logic is embedded in the plugin, whichever IDE it is in, and then it makes an API request to the service, where the model inference happens. And even while it is showing you one recommendation, the service is still working, so all of that logic lives on the service side. And of course, we also have some cutting-edge features in the response, such as the reference tracker. All of those also reside on the service side, trying to help the developer make the best decision for their customers and their applications.
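For illustration, the plugin-side logic Sandeep describes might be sketched roughly like this. This is a hypothetical simplification: the trigger rules, types, and function names are invented, and the real plugin is far more sophisticated.

```python
from dataclasses import dataclass

# Hypothetical simplification of an IDE plugin deciding when to ask for a suggestion.
@dataclass
class EditorContext:
    language: str        # detected from the file type, e.g. "python"
    current_line: str    # text to the left of the cursor
    preceding_code: str  # surrounding file contents sent along as context

def should_request_suggestion(ctx: EditorContext) -> bool:
    # Ask only at natural pause points (opening a block, a call, writing a comment),
    # not on every keystroke.
    line = ctx.current_line
    return line.rstrip().endswith((":", "{", "(", ",")) or line.lstrip().startswith("#")

def request_suggestion(ctx: EditorContext) -> str:
    # Placeholder for the API request to the backend, where the model inference happens.
    return ""
```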

WV: So tell me a bit about sort of how these models are created? I mean, it’s not all the text in the World Wide Web, I mean, because that won’t help you as a developer. So what sits inside the model?

SP: Generally when we train large language models, we collect a lot of data from the public Internet. We clean it up and make sure that we train these models such that they understand the vocabulary and the structure of the language. How do you make meaningful sentences and paragraphs in the language?

WV: If you look at sort of the imperative programming languages, let’s say you have example code that you’ve found in Java. Would the model be able to translate that into C++? So you don’t need to have the original C++ code in the model?

SP: Yeah, the models that we build, the transformer architecture, absolutely allow for that. So very soon we will be seeing automatic translation from one language to another, especially from some of the legacy languages of older times, where teams want to upgrade to a newer language. Even between more recent languages, you may want to go from one to another because your development team is more familiar with it or it’s more efficient. For example, Rust is quite popular these days for high-performance applications. So absolutely, it’s going to be possible with large language models.

WV: So I always thought that as engineers or as programmers, we have one of the most creative jobs in the world. You can go to work every morning and create something new, and it’s fun. Does this take the fun away?

DS: The way I look at this is: the idea behind CodeWhisperer is that if you and I were going to sit down and write an application together, you bring a knowledge set to the problem, I bring a knowledge set to the problem, and together we’re going to solve this problem and figure it out. You might have some suggestions for how to do things that I wasn’t aware of; I’m like, oh, I never thought of doing it that way, and vice versa. CodeWhisperer and these generative tools work largely in the same way. We’re just going to suggest things, and sometimes you’re like, yes, that’s exactly what I would have done, but now I don’t have to type it. And other times it’s like, oh, well, that’s interesting, I maybe wouldn’t have done it that way. One of the most interesting things for me was the ability to approach something that I’m not familiar with. So in my case, I wanted to just try something: I wanted to go use an API that I didn’t have a lot of experience with, and a programming language I hadn’t really worked in before, just to see what the experience would be like.

WV: Okay, so there’s a lot of work that goes in there.

DS: A tremendous amount of work.

WV: And it’s truly augmenting my skills as a developer because quite a few of those things I would maybe by myself not be aware of.

SP: I love coding, okay? The part of the job that I do that is the most fun is actually writing code. But to me, my job is actually a lot of creation. It is a creative profession. So it’s a lot about brainstorming with the product managers about what we want for our customers, what is the desired customer experience, what makes our customers delighted? And then the implementation part is, okay, how do I convert that into designs? How do I make sure that this is highly available, highly scalable, all of that. And then finally, the last part is actually writing code. I don’t measure my self-esteem based on the amount of code that I write. I measure my self-esteem based on how happy the customer is.

DS: Some of my favorite comments are when we talk to people who say, “this is bringing the fun back!” Because you think about the day in the life of a developer, and the process a developer goes through: like I said, fundamentally you’re problem solving, but a part of your day is sort of mundane. A really trivial example is, oh, I’ve got to write a class to represent a data object. That’s just like, I’m going to spend the next three or four minutes typing getters and setters to represent the things that it needs to do. Or I can just type a comment that says, “a class to represent this data object,” and it’s going to start generating that code, and I’m going to be done with it in like 30 seconds.
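A sketch of what Doug describes, with an invented class and fields; the comment is the only thing the developer types, and the rest is the kind of code a companion might generate:

```python
# a class to represent a customer order data object
class CustomerOrder:
    def __init__(self, order_id: str, customer_id: str, total: float):
        self._order_id = order_id
        self._customer_id = customer_id
        self._total = total

    # the getters and setters Doug would otherwise spend minutes typing by hand
    def get_order_id(self) -> str:
        return self._order_id

    def get_customer_id(self) -> str:
        return self._customer_id

    def get_total(self) -> float:
        return self._total

    def set_total(self, total: float) -> None:
        self._total = total
```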

WV: So that’s the way you interact with it. Basically, you give it a regular text prompt and it will go and try and find out whether it can help you with that.

DS: There are essentially two ways. One is, as I’m writing code. So like I was saying earlier, I’m writing a method signature and it’s understanding what I’m doing and inferring from that that I’m going to maybe want some parameters, or here’s what the function is going to look like. So as I’m writing code, it’s completing the code, sort of code completion. The other is, before I’m writing the code, I’m documenting my intent: here’s what I want. I’m going to write a comment that describes what I want, and the language model can look at that comment and say, okay, I understand what you’re describing, and then it’ll go through and start producing that code.
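In code, those two modes might look something like the following sketch. The functions are invented for illustration and are not verbatim CodeWhisperer output:

```python
from datetime import datetime

# Mode 1: completion while writing code.
# The developer types the signature and the companion suggests the body.
def average_response_time(samples: list[float]) -> float:
    if not samples:
        return 0.0
    return sum(samples) / len(samples)

# Mode 2: documenting intent first.
# The developer writes only the comment below; the companion proposes the function.

# parse an ISO 8601 timestamp string and return a datetime object
def parse_timestamp(value: str) -> datetime:
    return datetime.fromisoformat(value)
```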

WV: Okay.

SP: Let’s say you’re writing a Lambda function and you’re inside the Lambda console, the Lambda editor, and you say, hey, I just want to read a message from the Kinesis stream and I want to send an SMS to the customer through Twilio. So that’s your comment at the top of the Lambda function. From there you just write def read_message or something, and from the context, CodeWhisperer can figure out that, okay, this person is trying to read a Kinesis message. Let me read it, let me parse it, let me pick out the interesting thing, and it’ll fill that in for me. And if I need to change something, I can just do the last bit; the last mile I’ll take care of. Don’t get me wrong, ultimately the developer is in control. They are the ones who decide whether this code is good. They’re the ones that will run and verify that it is working as expected. They’re the ones that will ship. What generative AI based tools like CodeWhisperer are helping with is that you don’t have to spend a lot of time reading documentation pages. They’re just saying, hey, this is stuff that is easy to get. You as an application developer should be focusing on creating value for your customer by doing higher level things, not boilerplate undifferentiated heavy lifting.
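A rough sketch of the Lambda handler Sandeep describes might look like this. The environment variable names and payload fields are placeholders and error handling is omitted; it assumes the standard Kinesis event shape that Lambda delivers and the Twilio Python helper library:

```python
import base64
import json
import os

from twilio.rest import Client

# Placeholder credentials and numbers, read from environment variables.
twilio_client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
FROM_NUMBER = os.environ["TWILIO_FROM_NUMBER"]

def lambda_handler(event, context):
    # Each Kinesis record arrives base64-encoded in the Lambda event.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # "message" and "phone_number" are assumed payload fields for illustration.
        twilio_client.messages.create(
            body=payload["message"],
            from_=FROM_NUMBER,
            to=payload["phone_number"],
        )
    return {"statusCode": 200}
```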

DS: So you’re saying the fun part of being a developer is not reading the documentation?

SP: Yeah, absolutely. Reading documentation is not the fun part of being a developer. For sure.

WV: You’ve been using CodeWhisperer probably much longer than we have. So what is it that you really like about it?

SP: To me, the most compelling part of CodeWhisperer is the reference tracker feature, which was there on the day CodeWhisperer launched. The idea is that you’re training on a lot of public code, and it’s possible that the large language models may repeat something that they have seen at training time. The person who is using the assistant may just accept the recommendation and move on. But that may not be the ideal thing to do, because there may be a license associated with the repository from which the training data was procured, and the person who is using that code should know: this belongs to a certain license, so there are obligations that I must meet, and so on and so forth. And the developer may choose to say, hey, I looked at the license, I’m good with it, I’ll proceed; or say, oh, I don’t want to pick any software that looks like this license, I’m going to just edit it myself. Or pick a different recommendation from the list of…

WV: Or your company made.

SP: Yeah, exactly.

WV: This changes life for developers dramatically. So does this mean that the skill sets of developers are going to change? The requirements? I mean, you no longer need a four-year computer science degree to actually do these things.

DS: We’re making the developer more productive. We’re helping them do the same things faster. They still have to know what they’re doing. They still have to be able to look at the suggestion they’re getting, understand what it’s doing, and say, yes, that’s what I want, or maybe, yes, that’s what I want, but I just want to change one or two things. To some degree, I always equate this to mathematics class. As you’re learning mathematics, you have to learn the fundamentals. You have to learn addition, subtraction, multiplication, division. Then you move on to learning some basic algorithms and some basic algebra. And eventually you get to a point where your teacher says, okay, you can bring a calculator to class now, and you’re going to use that to speed yourself up in doing the things that you already learned how to do by hand. And that’s what CodeWhisperer is. It’s the calculator for a developer.

WV: Sometimes this is looked at as a paradigm shift, but I think it’s much more in the tooling space than in the kind of shifts we saw with object orientation or functional programming. Where do you see this going? What is the Holy Grail?

SP: The paradigm shift is going to happen not in the core software development process. We are traveling on the same road; instead of going on a bicycle, you’re going on a Ferrari or something. That’s what we are doing here.

DS: It is a huge change in how developers work. And Generative AI has become so important in our conversations, and in everything we’re doing about how this is going to affect what we do, that we want to get it into as many hands as possible, give as many people the ability to use this tool, get the productivity gains, and do more.

SP: It is part of our democratizing AI story. Usually, big companies can pay for these productivity tools for their developers. But at the same time, there are a lot of app developers and freelancers who are just beginning. They don’t have big companies to pay for these licenses. They’re just starting to build a mobile app; they want to do a quick POC and get feedback from their customers. They should be moving at the same pace as a person working for a very big company who can afford those licenses.

WV: You guys are building amazing tools and I hope that we can build a lot more to make our developers much more successful.