To Work Well with GenAI, You Need to Learn How to Talk to It


As Chief Scientist at Microsoft, I conduct research that makes work better, and it's an incredibly exciting time to have that job. Study after study shows that recent AI advances will enable people to work in substantially new and more productive ways, and we've only begun to scratch the surface of what is possible with AI. In the 17 years I've spent at Microsoft, I've never been more optimistic about the opportunity for computing to change work for the better.

One reason this latest generation of AI tools holds so much promise is that it lets people interact with computers the same way we've interacted with other humans for millennia: through natural language. This is a huge change. Previously, the only way to communicate complex ideas to computers was to use languages designed for computers — writing a spreadsheet formula exactly right, remembering the exact keywords for an email search, or learning a programming language. Much of that restriction is now gone; you can tell an AI what to do simply by writing a natural-language prompt.

While this ability to communicate with AI systems using prompts is, overall, an enormous advance, research shows it also introduces an understandable learning curve. Even for me, it's a strange new thing to be able to talk to a computer in plain English. Scientists around the world are working hard to flatten this learning curve and are making lots of progress (e.g., work on prompt optimization). However, the research is clear that you can get a lot more out of AI right now with a little training in how to write good prompts. A recent study with management consultants at BCG, for example, found that consultants who received some prompt training were better at leveraging the power of AI than those who didn't.

Fortunately, research by Microsoft and the academic community has produced early findings that can help people accelerate their prompting journey. Most of these findings ladder up to one key insight: even though you're using natural language, you have to remember that an AI system needs to know different things than a human does.

Mastering the Language of AI

What does a computer need to know in a prompt that a human might not? How can you use that information to improve your prompts? Below are a few theories and results from the scientific literature that can help answer those questions.

Provide more context than you do with a person

Psycholinguistics, which studies the psychological aspects of language, has long taught us about the central role of grounding in any communication. Roughly speaking, grounding is the process of coming to a mutually understood meaning through conversation — put simply, making sure you’re on the same page. For example, if the people attending a meeting possess a shared understanding of the actions they need to take at the end of that meeting (and know they have that shared understanding), it’s probably because they spent a lot of that time grounding on what the next steps are.

The process of grounding with a large language model is different from the grounding you do with another person because the model typically has less shared context. Making that context explicit in the prompt helps you get better results. For example, when I'm talking to a researcher on my team, we both know about all the brainstorming sessions we've had on the topic in the past, the skills that person has, and so on, but LLMs don't, at least not yet. So, if I use AI to help me write an email to that person, it helps to provide the most important pieces of context that I know and the LLM might not. A person might find it rude to be told exactly what background they need; the LLM, of course, won't.

Thanks to techniques like "retrieval-augmented generation" and other recent technological advances, the amount of context you need to provide will drop considerably; AI can, for example, search your past emails and documents for useful context. Your current context also provides grounding material: some systems ground the questions you ask about a meeting in the meeting transcript. That said, given how important grounding is to effective cooperation, giving the LLM the right context will remain crucial.
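
To make the idea concrete, here's a minimal sketch of what making context explicit can look like when calling a model programmatically. It uses the OpenAI Python SDK as one possible client; the model name, context strings, and task are illustrative assumptions, not anything from this article:

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    # Context a colleague would already share with you, but the model does not.
    context = (
        "Recipient: a researcher on my team who co-led our past brainstorming "
        "sessions on meeting summarization and prefers short, direct emails."
    )
    task = "Draft a brief email asking them to share the session notes by Friday."

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": "You help me draft workplace emails."},
            {"role": "user", "content": f"Context: {context}\n\nTask: {task}"},
        ],
    )
    print(response.choices[0].message.content)

The point is simply that everything the model should take into account has to be stated; nothing about the recipient is assumed.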

Use the “wisdom of the crowd”

Research suggests people can capture the "wisdom of the crowd" by approaching a problem from different perspectives — now we can do that with AI. I often find it useful to ask for at least three replies (e.g., "generate at least three titles" or "tell me three ways you'd rewrite this paragraph"), sometimes even giving some structure for the ideas ("make at least one funny and one formal"). And when you have a good sense of exactly what you're looking for from the model, give it a few examples. That process, called "few-shot learning," helps the LLM model its reply on what you want.
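
As a rough sketch of how both tactics can be combined in a single request (again assuming the OpenAI Python SDK as the client; the example titles and constraints are invented for illustration):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    # Two "shots" showing the desired style, followed by a request for
    # several structured alternatives (the "wisdom of the crowd" tactic).
    prompt = """Here are two titles in the style I like:
    1. Grounding: Getting on the Same Page with Your AI
    2. Prompts Are Conversations, Not Commands

    Now write at least three titles for my article on prompt training.
    Make at least one funny and at least one formal."""

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)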

Rely on recognition, not recall

One core principle in human-computer interaction is that it's much easier for people to recognize the commands they want to issue than to recall them. Think about how much easier it is to choose something from a list than to come up with it from scratch. This is why almost all of us use graphical interfaces instead of command lines like DOS; it's so much easier to, for instance, double-click an app's icon than to remember the command that opens the app along with the app's formal name.

Helping people recognize the prompt they might want, rather than having to develop it from scratch, is a motivating factor behind lots of new AI features. In some advanced AI systems, you can access a large library of pre-written prompts, save prompts you like so you don't have to remember them, and so on. These options surface directly in the user experience, and over time the best prompts you and your organization use will be included as well. In the meantime, I keep a file of favorite personal prompts that aren't yet in the library. For example, here's one I use a lot:

I’m a researcher sending this email to a bunch of teams that I collaborate with. Please tell me the red flags I might raise when I send this.

Make it a conversation, not a single request

One key finding in the literature is that breaking complex requests down into multiple steps can help people get more of what they want out of LLMs. There are more formal ways to do this (e.g., "chain-of-thought" prompting), but informal strategies are likely to be successful too. For instance, I've found that it's best to first ask for a summary of an article I want to understand, and then separately ask for insights. For example:

  • Please outline the article in bullets, with a focus on what a Microsoft executive (with a particular interest in research) might be interested in.
  • What questions should that exec ask of the article? Please include answers to the questions, with quotes from the article as often as possible. And if the answer to a question is not contained in the article, please answer with: "The answer is not included in the article." (Or "The question is only partially answered" — with a description of the partial answer and what else needs to be known to provide a full answer.)
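
Here is a minimal sketch of how that two-step exchange might look as an actual multi-turn API conversation, feeding the model's first answer back in before asking the follow-up. The OpenAI Python SDK, the model name, and the placeholder article text are all illustrative assumptions:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set
    article = "..."  # paste the article text here

    # Step 1: ask only for the outline.
    messages = [{
        "role": "user",
        "content": "Please outline the article in bullets, with a focus on "
                   "what a research-minded executive might care about.\n\n" + article,
    }]
    outline = client.chat.completions.create(model="gpt-4o", messages=messages)
    summary = outline.choices[0].message.content

    # Step 2: keep the outline in the conversation, then ask for insights separately.
    messages += [
        {"role": "assistant", "content": summary},
        {"role": "user", "content": (
            "What questions should that exec ask of the article? Answer each, "
            "quoting the article where possible, and say explicitly when an "
            "answer is missing or only partial."
        )},
    ]
    insights = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(insights.choices[0].message.content)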

If at first you don’t succeed, try again (a different way)

LLMs are relatively new technologies, and there's a lot we don't understand about them. This means that sometimes we can't explain why a prompt phrased one way works well while one phrased another way doesn't, and this can vary across different versions and applications of the same AI model. So, if you try a prompt and it doesn't work, experiment with rephrasing it to find what does work; you can even ask the LLM how it would suggest phrasing your question. It's not easy to wipe the slate clean and try something new with a person, but you can do that easily with AI. Have fun seeing where different strategies lead you.
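
If you want to compare phrasings systematically rather than one at a time, a small loop does the job. This sketch makes the same SDK assumption as the examples above, and the three rephrasings are hypothetical stand-ins for whatever variants you want to test:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set
    memo = "..."  # the text you are asking about

    # Hypothetical rephrasings of the same underlying request.
    variants = [
        "Summarize this memo in three bullets.",
        "You are a careful editor. List the three key points of this memo.",
        "What three things should a busy reader take away from this memo?",
    ]

    for variant in variants:
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[{"role": "user", "content": variant + "\n\n" + memo}],
        )
        print("PROMPT:", variant)
        print("REPLY:", response.choices[0].message.content, "\n")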

Prompt Support Will Be the “Ribbon” of the LLM

If LLMs represent a new paradigm for interacting with computers, good prompt support can be like the "ribbon" of common commands that appears in applications like Word and Excel and lets people unlock more of an application's functionality. We are doing research, for example, on how to automatically recommend personalized prompts that help you take the next step in your workflow or address an item on your to-do list. And there's a lot more interesting research ahead of us. The best prompt strategies will change over time as new functionality becomes available and we start to unlock what is truly exceptional about integrating LLMs into our work. Increased personalization, for example, will make it less necessary to specify context, while the ability of LLMs to take actions will make prompts that support planning more important.

Thus far, many of the best prompt strategies have been developed by researchers. They’ve shown that a good prompt can often provide even more benefit than improving the underlying model. For example, a very recent paper out of Microsoft showed one can get huge performance boosts from LLMs in the medical domain just by changing one’s prompting strategy, challenging assumptions about the need for new model training processes. But as clever as these researchers are, the space they can explore is relatively limited. Increasingly we’re going to be able to learn the best strategies from the millions of people using LLMs. As more people use these tools and the tools evolve, we’ll continue to learn more.

Natural language conversations are at the foundation of how people work. Historically that's been true for how we work together, and now it's also true for how we work with our computers. These conversations contain a lot of knowledge that LLMs will unlock. Your conversation with PowerPoint, for example, can now become an amazing presentation. But conversations aren't just about facts and data; the grounding and structure matter as well. That is what prompt engineering is about. And as we discover the new ways of working that LLMs unlock, this structure will evolve as well. I'm excited to keep practicing and learning — and I hope you are too.

Written by: Jaime Teevan, Chief Scientist at Microsoft, for Harvard Business Review.