
Display live transcripts

When users interact with conversational AI in real time, you can display live transcripts of the conversation. This page explains how to implement live transcripts in your app.

Understand the tech

Agora provides a flexible, scalable, and standardized conversational AI engine toolkit. The toolkit supports the iOS, Android, and Web platforms and encapsulates scenario-based APIs that combine the capabilities of the Agora Signaling SDK and Video SDK to enable features such as live transcripts.

The toolkit receives transcript content through the onTranscriptUpdated callback and supports monitoring the following types of transcript data:

  • Agent transcript: Transcribes the agent’s speech. Includes real-time updates and final results.

  • User transcript: Transcribes the user’s speech. Supports real-time display and status management.

  • Transcript status: Reports status updates such as in progress, completed, or interrupted.
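
To make this data concrete, the following sketch shows roughly what a transcript update can carry. The type, field, and enum names here are illustrative assumptions, not the toolkit's actual definitions; see IConversationalAIAPI.kt for those.

    // Illustrative sketch only: these names are assumptions, not the toolkit's API.
    // The actual data structures are defined in IConversationalAIAPI.kt.
    enum class TranscriptStatus { IN_PROGRESS, END, INTERRUPTED }

    data class TranscriptUpdate(
        val userId: String,            // speaker: the agent or the local user
        val text: String,              // the transcribed text produced so far
        val status: TranscriptStatus   // in progress, completed, or interrupted
    )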

The following diagram outlines the step-by-step process to integrate live transcript functionality into your application:

Transcript rendering workflow

Prerequisites

Before you begin, ensure the following:

  • You have implemented the Conversational AI Engine REST quickstart.
  • Your app integrates Video SDK v4.5.1 or later and includes the video quickstart implementation.
  • You have enabled Signaling in the Agora Console and completed Signaling quickstart for basic messaging.
  • You maintain active and authenticated RTC and Signaling instances that persist beyond the component's lifecycle. The toolkit does not manage the initialization, lifecycle, or authentication of RTC or Signaling.
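
As a minimal sketch of that last point, you create the engine instances once, outside the transcript component, and pass them in later. This sketch assumes the Android Video SDK v4.x and Signaling SDK v2.x; replace the placeholder IDs and add token and login handling as shown in the quickstarts.

    // Sketch: create long-lived engine instances before using the toolkit.
    // Assumes Video SDK v4.x and Signaling SDK v2.x for Android; token and
    // login handling are omitted (see the quickstarts).
    val rtcEngineInstance = RtcEngine.create(RtcEngineConfig().apply {
        mContext = applicationContext
        mAppId = "YOUR_APP_ID"
        mEventHandler = object : IRtcEngineEventHandler() {}
    })

    val rtmClientInstance = RtmClient.create(
        RtmConfig.Builder("YOUR_APP_ID", "YOUR_USER_ID").build()
    )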

Implementation

This section describes how to receive transcript content from the transcript processing module and display it on your app UI.

  1. Integrate the toolkit

    Copy the convoaiApi folder to your project and import the toolkit before calling the toolkit API. Refer to Folder structure to understand the role of each file.
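
    For example, if you copy the folder under your application package, the imports might look like the following. The package path is an assumption; adjust it to wherever you place the folder.

    // Assumed package path; match it to your project structure.
    import com.example.app.convoaiApi.ConversationalAIAPIConfig
    import com.example.app.convoaiApi.ConversationalAIAPIImpl
    import com.example.app.convoaiApi.IConversationalAIAPIEventHandler
    import com.example.app.convoaiApi.TranscriptRenderMode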

  2. Create a toolkit instance

    Create a configuration object with the Video SDK and Signaling engine instances. Set the transcript rendering mode, then use the configuration to create a toolkit instance.

    // Create a configuration object from the RTC and RTM instances
    val config = ConversationalAIAPIConfig(
        rtcEngine = rtcEngineInstance,
        rtmClient = rtmClientInstance,
        // Set the transcript rendering mode. Options:
        // - TranscriptRenderMode.Word: Renders the transcript word by word.
        // - TranscriptRenderMode.Text: Renders the full sentence at once.
        renderMode = TranscriptRenderMode.Word,
        enableLog = true
    )
    // Create the toolkit instance
    val api = ConversationalAIAPIImpl(config)
  3. Subscribe to the channel

    Transcript data is delivered through Signaling channel messages. To receive transcript data, call subscribeMessage before starting the agent session.

    api.subscribeMessage("channelName") { error ->
        if (error != null) {
            // Handle error
        }
    }
  4. Receive transcript

    Call the addHandler method to register your implementation of the transcription callback.

    api.addHandler(covEventHandler)
  5. Implement UI rendering logic

    Implement the IConversationalAIAPIEventHandler interface in your UI module. In the onTranscriptUpdated method, add the logic that renders the transcript to the UI.

    private val covEventHandler = object : IConversationalAIAPIEventHandler {
        override fun onTranscriptUpdated(agentUserId: String, transcription: Transcription) {
            // Handle transcript data and update the UI here
        }
    }
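
    For example, one common approach is to keep a message list keyed by turn, replace the entry for a turn as partial updates arrive, and touch views only on the main thread. In this sketch, upsertTranscript and transcriptAdapter are hypothetical helpers, not part of the toolkit:

    private val covEventHandler = object : IConversationalAIAPIEventHandler {
        override fun onTranscriptUpdated(agentUserId: String, transcription: Transcription) {
            // Callbacks may arrive off the main thread; post UI work to it.
            runOnUiThread {
                // Hypothetical helper: replaces the in-progress entry for this
                // turn, or appends a new entry when a new turn starts.
                upsertTranscript(agentUserId, transcription)
                transcriptAdapter.notifyDataSetChanged()
            }
        }
    }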
  6. Add a Conversational AI agent to the channel

    To start a Conversational AI agent, configure the following parameters in your POST request:

    Parameter                                Description                                          Required
    advanced_features.enable_rtm: true       Starts the Signaling service                         Yes
    parameters.data_channel: "rtm"           Enables Signaling as the data transmission channel   Yes
    parameters.enable_metrics: true          Enables agent performance data collection            Optional
    parameters.enable_error_message: true    Enables reporting of agent error events              Optional
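
    These settings correspond to the following fragment of the request body. Only the transcript-related fields are shown; merge them into your full join request from the REST quickstart, which also carries required fields such as the channel name and tokens.

    {
      "advanced_features": {
        "enable_rtm": true
      },
      "parameters": {
        "data_channel": "rtm",
        "enable_metrics": true,
        "enable_error_message": true
      }
    }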

    After a successful response, the agent joins the specified Video SDK channel and is ready to interact with the user.

  7. Unsubscribe from the channel

    After an agent session ends, unsubscribe from channel messages to release transcription resources:

    api.unsubscribeMessage("channelName") { error ->
        if (error != null) {
            // Handle the error
        }
    }
  8. Release resources

    At the end of each call, use the destroy method to clean up the cache.

    api.destroy()
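
    Putting steps 7 and 8 together, a teardown sequence might look like the following. The removeHandler call is an assumed counterpart to addHandler; confirm the exact method name in IConversationalAIAPI.kt.

    api.unsubscribeMessage("channelName") { error ->
        if (error != null) {
            // Handle the error
        }
    }
    // Assumed counterpart to addHandler; verify in IConversationalAIAPI.kt.
    api.removeHandler(covEventHandler)
    api.destroy()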

Reference

This section contains content that completes the information on this page, or points you to documentation that explains other aspects of this product.

Folder structure

  • IConversationalAIAPI.kt: API interface with related data structures and enumerations
  • ConversationalAIAPIImpl.kt: Main implementation logic of the ConversationalAI API
  • ConversationalAIUtils.kt: Utility functions and event callback management
  • subRender/
    • v3/: Transcription module
      • TranscriptionController.kt: Transcription controller
      • MessageParser.kt: Message parser

API Reference

This section provides API reference documentation for the transcript module.