
Best prompts for summarizing online meetings with large language models
Published on Jul 2024

Online meetings generate copious amounts of information, much of it "noise": important takeaways end up interspersed with less relevant details and dead-end discussions. The sheer abundance of information deliberated during online meetings can be overwhelming, and your users may want a solution to that.

Modern speech recognition tools coupled with large language models (LLMs) now allow you to deliver exactly that feature for your users, in the form of tools that automatically craft summaries and minutes of virtual online meetings, or even of in-person meetings if they are recorded.

In particular, LLMs like those from the ChatGPT/GPT, Bard and LLaMA families are very good at distilling texts, including meeting transcripts, into concise and insightful summaries. Moreover, speech recognition APIs have become so robust and accessible that you can readily write desktop programs and web apps that listen, summarize and rephrase automatically, essentially in real time.

Arguably, the most critical part of any virtual meeting summarization software is crafting effective prompts that instruct the language model on what summary to create and how.

In this article we'll explore how to do just that, specifically focusing on the best prompts for summarizing online meetings.

Key elements of the best prompts for summarizing online meetings via language models

Prompts are the link between human intention and AI execution: in this case, the instructions that tell the language model what to summarize and how. Crafting the right prompt, from deciding where to place the input text to phrasing the instructions themselves, is essential to obtain a coherent, brief yet complete summary of the input text, here the online meeting transcript. But what makes an effective prompt for virtual meeting summarization?

1. Clarity and specificity

The cardinal rule of prompts, not only for summarization but for prompting in general, holds true here: be clear and specific.

Instead of using a vague instruction like "Summarize the meeting", opt for something more precise, such as: "Provide a concise summary of the key decisions made during the meeting whose transcript I provide below."

Perhaps you want the output to consist of a list of items rather than free text, in which case your prompt can be something like: "Provide a list of the key decisions taken during the meeting whose transcript I provide below."
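
As a minimal sketch of how such a prompt can be sent to a language model, here is an example using the OpenAI Python client; the model name, transcript file and prompt wording are placeholders for illustration, and any comparable LLM API would work the same way.

```python
# Minimal sketch: send a summarization prompt plus the meeting transcript
# to an LLM. Model name and transcript file are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("meeting_transcript.txt") as f:  # hypothetical transcript file
    transcript = f.read()

prompt = (
    "Provide a list of the key decisions taken during the meeting "
    "whose transcript I provide below between triple quotes.\n\n"
    f'"""{transcript}"""'
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Placing the instruction before the transcript and delimiting the transcript with triple quotes makes it unambiguous to the model which part is the instruction and which part is the material to summarize.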

2. Context

Virtual meetings vary widely in content, from project updates to brainstorming sessions to sales presentations. Providing context in your prompts is paramount, especially if the topic isn't entirely clear from the transcript itself: perhaps because it was agreed upon before the meeting, because the transcript doesn't begin at the very start of the meeting, or because the discussion involves technical terms that the speech recognition program missed.

One of our previous articles explains how prompt injection in speech recognition is used to enhance the quality of transcription. Capitalizing on Whisper's attention mechanism, our API allows you to add context, including in real time, that helps the system capture, identify, and extract specific pieces of information, optimizing transcript quality for the summarization that follows. To learn more about this, check this blog post.

To sum up, prompts are there to guide the language model, so it definitely helps to mention the purpose of the meeting, who the participants were, and any relevant background information. This information can be supplied via statements like: "Summarize the marketing strategy discussion from yesterday's team meeting attended by the marketing, sales, and product teams."
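
Gladia's API has its own mechanism for supplying such context to the recognizer; purely as an illustration of the idea, here is a sketch using the OpenAI Whisper transcription endpoint's prompt parameter to bias the transcript toward domain vocabulary before summarization. The vocabulary list, audio file name and model names are assumptions made for the example.

```python
# Illustrative sketch: bias the speech recognizer toward domain vocabulary
# with a context prompt, then summarize the resulting transcript.
# Vocabulary, file name and model names are placeholders.
from openai import OpenAI

client = OpenAI()

# Technical terms the recognizer might otherwise mis-transcribe.
context_prompt = "Gladia, Whisper, OKRs, churn rate, Q3 roadmap"

with open("meeting_audio.mp3", "rb") as audio:
    transcription = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio,
        prompt=context_prompt,  # nudges recognition toward these terms
    )

summary_prompt = (
    "Summarize the marketing strategy discussion from yesterday's team "
    "meeting attended by the marketing, sales, and product teams.\n\n"
    f'"""{transcription.text}"""'
)
summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": summary_prompt}],
)
print(summary.choices[0].message.content)
```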

3. Level of detail

Just like in any other kind of summarization, specify the level of detail you want in the summary. Depending on your needs, you might request a high-level overview or a detailed breakdown of discussion points. For instance, “Summarize the quarterly sales review meeting in detail, focusing on revenue figures and the key takeaways.”

4. Desired language style

Large language models can adapt their summaries to different styles and perspectives, and you can exploit this to control both tone (more or less formal, persuasive, etc.) and layout (for example, a free-text summary versus bullet points). For example, you can ask the language model to "Summarize the presentation in a persuasive tone suitable for our stakeholders."
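
One way to fix tone and layout in code is to put them in a system message, as in this minimal sketch (the wording, model name and transcript file are again illustrative placeholders):

```python
# Sketch: control tone and layout through a system message; the model
# name, wording and transcript file are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

with open("meeting_transcript.txt") as f:
    transcript = f.read()

system_msg = (
    "You summarize meeting transcripts as 5 to 8 bullet points, written "
    "in a persuasive tone suitable for external stakeholders."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": f'Transcript:\n"""{transcript}"""'},
    ],
)
print(response.choices[0].message.content)
```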

5. Including relevant questions

If there are specific questions that were answered in the meeting, or that could reasonably be answered from its contents, include them explicitly in your prompts. For instance, "Summarize the Q&A session of the town hall meeting, addressing the top three questions raised by employees." for a meeting built around questions and answers, or "Summarize the meeting transcript and determine whether the predominant view is that we should post more, or rather less, on the company's blog."

Some more examples of prompts for virtual meeting summarization using LLMs

• After a project status meeting: "Provide a detailed summary of the project status meeting held today, whose transcript I provide below between triple quotes, including updates on milestones, challenges discussed, and action items assigned to team members."

• After a presentation discussing a project with a client and the associated costs: "Summarize the presentation I gave to prospective client John Smith (transcript provided at the end) on how to work on ghostwriting for his company. Make sure you include a section listing the different pricing alternatives we discussed, and another section summarizing any objections raised by the client."

• After a brainstorming session: "Generate a concise summary of the brainstorming session whose transcript I provide below between triple quotes, focusing on marketing ideas for the upcoming product launch. Highlight the top 3 to 5 suggestions that look most innovative based on the discussion."
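
In an application, prompts like these can be kept as reusable templates keyed by meeting type, with the transcript spliced in at call time. Here is a small sketch; the meeting types and template wording are illustrative.

```python
# Sketch: reusable prompt templates keyed by meeting type; the transcript
# is inserted between triple quotes when the prompt is built.
PROMPT_TEMPLATES = {
    "project_status": (
        "Provide a detailed summary of the project status meeting, whose "
        "transcript I provide below between triple quotes, including "
        "updates on milestones, challenges discussed, and action items "
        'assigned to team members.\n\n"""{transcript}"""'
    ),
    "brainstorming": (
        "Generate a concise summary of the brainstorming session whose "
        "transcript I provide below between triple quotes, highlighting "
        'the top 3 to 5 most innovative suggestions.\n\n"""{transcript}"""'
    ),
}

def build_prompt(meeting_type: str, transcript: str) -> str:
    """Return a ready-to-send summarization prompt for this meeting type."""
    return PROMPT_TEMPLATES[meeting_type].format(transcript=transcript)
```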

Key takeaways

Summarizing online meetings with LLMs, using carefully designed prompts that consider the points discussed here, can greatly enhance productivity and speed up decision-making, saving valuable time and helping your users extract meaningful insights from the deluge of information produced during meetings.

By being clear, specific and detailed in your prompts, and including context information when needed, you can ensure that the AI-generated summaries are both reliable and tailored to your users’ specific needs.

Now go put these tips into practice and write the next-generation virtual meeting summarizer app!


Article written for Gladia by Luciano Abriata, PhD.

About Gladia

At Gladia, we built an optimized version of Whisper in the form of an enterprise-grade API, adapted to real-life professional use cases and distinguished by exceptional accuracy, speed, extended multilingual capabilities and state-of-the-art features.
