Transmit custom information
When interacting with a Conversational AI agent, you can transmit custom context information from the client, such as the user's speaking status, selected text, personal signature, or score, enabling the agent to generate responses tailored to the user's needs.
This document explains how to use the capabilities of the Signaling SDK to include custom information in interactions with the Conversational AI agent.
Understand the tech
The Agora Signaling SDK allows users in a channel to set custom temporary status information and notifies other online users in the channel of status changes through event notifications.
If your app integrates both Voice SDK and Signaling services, you can leverage the Signaling features when creating a Conversational AI agent. This allows the agent to retrieve temporary status information from the Signaling channel before invoking the LLM. The information is used as context to guide the agent in generating responses that better align with user needs.
Prerequisites
Before you begin, ensure that you have:
- Implemented the basic logic for interacting with a Conversational AI agent by following the REST Quickstart.
- Integrated the Signaling SDK into your app and implemented basic messaging functionality by following the Signaling SDK Quickstart.
Implementation
Take the following steps to transmit custom context information from the client to the LLM.
Enable Signaling
To enable Signaling integration, set `advanced_features.enable_rtm` to `true` in the POST request when creating a Conversational AI agent. Refer to the following sample request:
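The sketch below is based on the agent-creation request from the REST Quickstart; apart from `advanced_features.enable_rtm`, the field values are placeholders to replace with your own configuration from that quickstart.

```json
{
  "name": "unique_agent_name",
  "properties": {
    "channel": "your_channel_name",
    "token": "your_rtc_token",
    "agent_rtc_uid": "0",
    "remote_rtc_uids": ["*"],
    "advanced_features": {
      "enable_rtm": true
    },
    "llm": {
      "url": "https://your-llm-endpoint/chat/completions",
      "api_key": "your_llm_api_key"
    },
    "tts": {
      "vendor": "your_tts_vendor"
    }
  }
}
```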
Set custom information
To set temporary status information for users in the channel, refer to User status management in the Signaling documentation. A sketch of this step follows.
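As a rough illustration, the following snippet sketches how a client using the Signaling Web SDK (2.x) might publish a temporary `selection` state before the user talks to the agent. The package name, credentials, and channel name are placeholders, and the exact `setState` usage should be verified against the User status management guide for your platform.

```ts
import AgoraRTM from "agora-rtm";

const { RTM } = AgoraRTM;

// Placeholders: replace with your own App ID, user ID, token, and channel.
const rtm = new RTM("your_app_id", "UserA");

async function publishSelection(selectedText: string): Promise<void> {
  await rtm.login({ token: "your_signaling_token" });

  // Join the channel the agent is listening to.
  await rtm.subscribe("your_channel_name", { withPresence: true });

  // Set a temporary user state field named `selection`; the agent reads
  // this state before each LLM call.
  await rtm.presence.setState("your_channel_name", "MESSAGE", {
    selection: selectedText,
  });
}

publishSelection("Pythagorean theorem").catch(console.error);
```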
Transmit custom information
Before invoking the LLM, the agent automatically retrieves the active user's temporary status information and transmits it as context to the model. This temporary status information is stored in the `context.presence` field.
The following example illustrates a scenario where UserA selects the text "Pythagorean theorem" in the app and asks the agent, "What does this mean?" The JSON example shows how the agent retrieves a temporary status field named `selection` from Signaling before calling the LLM. The request structure is as follows:
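An illustrative request body is shown below. Only the `context.presence` field and the `selection` key come from this scenario; the surrounding structure, including whether states are keyed by user ID, is an assumption and may differ depending on your agent and LLM configuration.

```json
{
  "messages": [
    {
      "role": "user",
      "content": "What does this mean?"
    }
  ],
  "context": {
    "presence": {
      "UserA": {
        "selection": "Pythagorean theorem"
      }
    }
  }
}
```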
Adapt the LLM
Adapt the LLM to process the temporary status information in the `context.presence` field and generate content that better meets user needs. For implementation details, refer to the Custom LLM documentation.
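As one possible adaptation, the hypothetical gateway below reads `context.presence` from the incoming request and folds it into a system message before the messages are forwarded to the model. The request shape and the response format you return must follow the Custom LLM documentation; all names here are illustrative.

```ts
import { createServer } from "node:http";

// Request shape assumed for this sketch; verify against the Custom LLM docs.
interface ChatRequest {
  messages: { role: string; content: string }[];
  context?: { presence?: Record<string, Record<string, string>> };
}

createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const request = JSON.parse(body) as ChatRequest;

    // Fold temporary status (for example, the user's `selection`) into a
    // system message so the model can use it as conversational context.
    const presence = request.context?.presence;
    if (presence) {
      request.messages.unshift({
        role: "system",
        content: `User status from Signaling: ${JSON.stringify(presence)}`,
      });
    }

    // Forward request.messages to your model here and return the completion
    // in the response format required by the Custom LLM documentation.
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
  });
}).listen(8080);
```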