By default, the Agora SDK uses its built-in audio and video modules for capturing and rendering in real-time communications.

However, the default modules might not meet your development requirements, such as in the following scenarios:

  • Your app has its own audio or video module.
  • You want to use a non-camera source, such as recorded screen data.
  • You need to process the captured video with a pre-processing library for functions such as image enhancement.
  • You need flexible device resource allocation to avoid conflicts with other services.

This page explains how to customize the audio source with the Agora Web SDK.


Before customizing the audio source, ensure that you have implemented the basic real-time communication functions. For details, see Start a call or Start Live Interactive Streaming.

When creating a stream with the createStream method, you can specify a customized audio source through the audioSource property.
For example, you can call the getAudioTracks method on a MediaStream object to get an audio track (a MediaStreamTrack object), and then set it as audioSource:

    var audioSource = mediaStream.getAudioTracks()[0];
    // After processing audioSource
    var localStream = AgoraRTC.createStream({
        video: false,
        audio: true,
        audioSource: audioSource
    });
    localStream.init(function() {
        client.publish(localStream, function(err) {
            console.error("Publish local stream error: " + err);
        });
    });
MediaStreamTrack refers to the MediaStreamTrack object supported by the browser. See the MediaStreamTrack API for details.
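As an illustration of where such a track might come from, the following sketch captures microphone audio with getUserMedia, routes it through a Web Audio GainNode as a stand-in for real pre-processing, and extracts the resulting MediaStreamTrack. The variable names and the gain value are illustrative assumptions, not part of the Agora API; only getUserMedia, the Web Audio nodes, and the Agora calls shown earlier are real APIs.

```javascript
// Sketch: pre-process microphone audio with the Web Audio API before
// handing the processed track to AgoraRTC.createStream as audioSource.
navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(function(micStream) {
        var audioCtx = new AudioContext();
        // Feed the captured stream into the Web Audio graph
        var source = audioCtx.createMediaStreamSource(micStream);
        // Example processing step: a simple gain (volume) adjustment
        var gainNode = audioCtx.createGain();
        gainNode.gain.value = 0.8;
        // Collect the processed audio back into a MediaStream
        var dest = audioCtx.createMediaStreamDestination();
        source.connect(gainNode);
        gainNode.connect(dest);
        // The processed track can now be used as the custom audio source
        var audioSource = dest.stream.getAudioTracks()[0];
        var localStream = AgoraRTC.createStream({
            video: false,
            audio: true,
            audioSource: audioSource
        });
        localStream.init(function() {
            client.publish(localStream, function(err) {
                console.error("Publish local stream error: " + err);
            });
        });
    });
```

Any Web Audio node chain (filters, compressors, worklet-based effects) can replace the GainNode here, as long as the chain ends in a MediaStreamAudioDestinationNode whose track is passed to createStream.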

We also provide an open-source AgoraAudioIO-Web-Webpack demo project on GitHub. You can try the demo, or view the source code in the rtc-client.js file.