Get insights from customer interactions with Azure Communication Services and Azure OpenAI Service
Customer conversations are a gold mine of information that can help service providers create a better customer experience. When a customer contacts a service provider, they are often routed between different departments and speak to different agents to address their concerns. During the conversation, the customer is usually asked to provide specifics about their issue, and service providers do their best to solve it. With Azure Communication Services and Azure OpenAI Service, we can automatically generate insights from the customer call, including the topic, a summary, highlights, and more. These insights can give service providers the necessary context during the call, and help organizations learn about their customer support experience.
You can access the completed code for this blog on GitHub.
Architecture
Before we get started, let's quickly review the proposed architecture of our solution. We will use a Python console application as our “orchestrating service”, with access to an Azure Communication Services resource. This console application will create the chat threads from which we will generate insights. In a more complex architecture, the console application would be replaced by a service that clients use to manage the creation of, and access to, chat threads. The console application then connects to Azure OpenAI Service to generate insights based on the transcribed thread and the provided prompts.
Pre-requisites
To get started we will need to:
- Create an Azure account with an active subscription. For details, see Create an account for free.
- Create an Azure Communication Services resource. For details, see Quickstart: Create and manage Communication Services resources. You'll need to record your resource endpoint and connection string for this quickstart.
- Create an Azure OpenAI Service resource. See instructions.
- Deploy an Azure OpenAI Service model (you can use a GPT-3, ChatGPT, or GPT-4 model). See instructions.
- Install Python 3.11.2
- Install dependencies with pip:
pip install openai azure.communication.chat azure.communication.identity
Setting up the console application
Open your terminal or command window, create a new directory called chatInsights for your application, and navigate to it.
In your favorite text editor, create a file called chatInsights.py and save it to the chatInsights directory you just created. In the file, configure the imports you will need for this project, and add the connection string and endpoint for Azure Communication Services, which you can find in the Azure Portal under your Azure Communication Services resource.
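The setup might look like the following sketch. The environment variable names and placeholder values are assumptions for this sample; substitute the values from your own resource:

```python
# chatInsights.py -- imports and configuration
import os

from azure.communication.identity import CommunicationIdentityClient
from azure.communication.chat import (
    ChatClient,
    ChatParticipant,
    CommunicationTokenCredential,
)

# Found in the Azure Portal under your Azure Communication Services
# resource (Keys page). The environment variable names are just a
# convention used in this sample.
ACS_CONNECTION_STRING = os.environ.get("ACS_CONNECTION_STRING", "<your-connection-string>")
ACS_ENDPOINT = os.environ.get("ACS_ENDPOINT", "https://<your-resource>.communication.azure.com")
```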
Configuring the chat client
For this example, we will generate a summary from a pre-populated chat thread. In a real-world scenario, you would be accessing existing threads that have been created on your resource. To create the sample thread, we will start by creating two identities that will be mapped to two users. We will then initialize our chat client with those identities. Finally, we will populate the chat thread with some sample messages we have created.
To set up the chat clients, we will create two chat participants using the identities we generated, and add both participants to a chat thread.
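A sketch of these steps follows, assuming the azure-communication-identity and azure-communication-chat packages from the prerequisites. The function and display names are our own, and the SDK imports are kept local so the snippet stands alone:

```python
from datetime import datetime, timezone

def create_users_and_thread(connection_string, endpoint):
    """Create two identities, a chat client for each user, and a thread containing both."""
    # Local imports so this snippet stands alone; in chatInsights.py
    # they would sit at the top of the file.
    from azure.communication.identity import CommunicationIdentityClient
    from azure.communication.chat import (
        ChatClient,
        ChatParticipant,
        CommunicationTokenCredential,
    )

    identity_client = CommunicationIdentityClient.from_connection_string(connection_string)

    # One identity for the support agent, one for the customer
    agent_user, agent_token = identity_client.create_user_and_token(scopes=["chat"])
    customer_user, customer_token = identity_client.create_user_and_token(scopes=["chat"])

    # A chat client per user, authenticated with that user's access token
    agent_chat_client = ChatClient(endpoint, CommunicationTokenCredential(agent_token.token))
    customer_chat_client = ChatClient(endpoint, CommunicationTokenCredential(customer_token.token))

    # Add both users as participants of a new thread
    participants = [
        ChatParticipant(identifier=agent_user, display_name="Agent",
                        share_history_time=datetime.now(timezone.utc)),
        ChatParticipant(identifier=customer_user, display_name="Customer",
                        share_history_time=datetime.now(timezone.utc)),
    ]
    result = agent_chat_client.create_chat_thread(
        topic="Customer support", thread_participants=participants)
    return agent_chat_client, customer_chat_client, result.chat_thread.id
```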
Next, we will populate the chat thread with a sample conversation between a customer support agent and a customer chatting about a dishwasher.
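The sample dialogue below is illustrative (the wording, and the populate_thread helper name, are our own); each message is sent through the chat client of the user who wrote it:

```python
# (sender, message) pairs for a short dishwasher-support exchange
SAMPLE_CONVERSATION = [
    ("agent", "Hello, thank you for contacting support. How can I help you today?"),
    ("customer", "Hi, my dishwasher stopped draining after its last cycle."),
    ("agent", "Sorry to hear that. Have you checked whether the filter is clogged?"),
    ("customer", "Yes, I cleaned the filter, but water still pools at the bottom."),
    ("agent", "Understood. I will schedule a technician visit for this week."),
    ("customer", "That works, thank you!"),
]

def populate_thread(agent_chat_client, customer_chat_client, thread_id):
    """Send the scripted conversation into the thread, one message at a time."""
    for sender, content in SAMPLE_CONVERSATION:
        # Pick the chat client that belongs to the message's author
        client = agent_chat_client if sender == "agent" else customer_chat_client
        thread_client = client.get_chat_thread_client(thread_id)
        thread_client.send_message(
            content=content,
            sender_display_name="Agent" if sender == "agent" else "Customer",
        )
```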
Generating a conversation transcript
Now that the thread is populated, we will convert it into a transcript that can be fed to the Azure OpenAI Service model. We will use the list_messages method from our chat client to get the messages from the thread, optionally restricted to a date range; in this case, we will get the messages from the last day. We will then append each message to a transcript string.
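A sketch of that step (build_transcript and format_line are our own helper names; list_messages yields the newest messages first, so we reverse them into chronological order):

```python
from datetime import datetime, timedelta, timezone

def format_line(display_name, text):
    """One transcript line, e.g. 'Agent: How can I help you today?'."""
    return f"{display_name}: {text}"

def build_transcript(chat_client, thread_id):
    """Collect the last day of text messages from the thread into one string."""
    thread_client = chat_client.get_chat_thread_client(thread_id)
    start_time = datetime.now(timezone.utc) - timedelta(days=1)

    lines = []
    for message in thread_client.list_messages(start_time=start_time):
        # Skip system messages such as "participant added"
        if message.type == "text":
            lines.append(format_line(message.sender_display_name, message.content.message))

    # Newest-first from the service; reverse into chronological order
    return "\n".join(reversed(lines))
```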
Summarize the chat thread
Finally, we will use the transcript of the conversation to ask the Azure OpenAI Service to generate insights. As part of the prompt to the GPT model, we will ask for:
- A generated topic for the thread
- A generated summary for the thread
- A generated sentiment for the thread
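Put together, the prompt and the call might look like this sketch. It uses the openai package's AzureOpenAI client (openai 1.x); the environment variable names, API version, and default deployment name are assumptions to replace with your own:

```python
import os

def build_prompt(transcript):
    """The instruction sent to the model, followed by the transcript."""
    return (
        "Analyze the following customer support conversation and provide:\n"
        "1. Topic: a short topic for the thread\n"
        "2. Summary: a brief summary of the thread\n"
        "3. Sentiment: the customer's overall sentiment\n\n"
        f"Conversation:\n{transcript}"
    )

def generate_insights(transcript, deployment_name="gpt-35-turbo"):
    """Ask the deployed model for the topic, summary, and sentiment."""
    # Local import so the prompt helper above is usable without the SDK
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_KEY"],
        api_version="2024-02-01",
    )
    response = client.chat.completions.create(
        model=deployment_name,  # the name you gave your model deployment
        messages=[{"role": "user", "content": build_prompt(transcript)}],
    )
    return response.choices[0].message.content
```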
Run it
Now that everything is ready, we will run it. The application will create a thread, populate it with messages, and then export the messages to summarize them. Remember, in a production environment the threads will be populated directly by your users, so you will only need to focus on extracting the messages and summarizing them. To run the full code, run the following command within the chatInsights folder.
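Assuming the file is named chatInsights.py as above and Python 3 is on your PATH:

```shell
python chatInsights.py
```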
Once you run the application, you will see insights generated by Azure OpenAI Service from the conversation. You can leverage these insights to give your service providers better context during a conversation, or review them after the fact to improve your customer support experience.
If you want to access the completed code find it on GitHub. Now it is your turn to try it! Let us know how it goes in the comments.