By default, the Agora SDK uses its built-in audio and video modules for capturing and rendering in real-time communications.

However, the default modules might not meet your development requirements, such as in the following scenarios:

  • Your app has its own audio or video module.
  • You want to use a non-camera source, such as recorded screen data.
  • You need to process the captured video with a pre-processing library for functions such as image enhancement.
  • You need flexible device resource allocation to avoid conflicts with other services.

This article explains how to customize the video source and renderer with the Agora Web SDK.

Try the online demo to experience this feature.


Before customizing the video source and renderer, ensure that you have implemented the basic real-time communication functions. For details, see Start a Video Call or Start Interactive Video Streaming.

Customize the audio/video source

When creating a stream with the createStream method, you can specify customized audio/video sources by the audioSource and videoSource properties.

For example, you can call getUserMedia to get a MediaStream, extract the audio and video MediaStreamTrack objects from it, and then pass them as audioSource and videoSource:

    navigator.mediaDevices.getUserMedia(
        {video: true, audio: true}
    ).then(function(mediaStream) {
        var videoSource = mediaStream.getVideoTracks()[0];
        var audioSource = mediaStream.getAudioTracks()[0];
        // After processing videoSource and audioSource
        var localStream = AgoraRTC.createStream({
            video: true,
            audio: true,
            videoSource: videoSource,
            audioSource: audioSource
        });
        localStream.init(function() {
            client.publish(localStream, function(e) {
                console.error("Publish failed", e);
            });
        });
    });
MediaStreamTrack refers to the MediaStreamTrack object supported by the browser. See MediaStreamTrack API for details.
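The same pattern applies to non-camera sources such as screen capture. Below is a minimal sketch that publishes a screen-sharing track as the custom video source, assuming an already-initialized Agora client (v3.x API); the function name `publishScreenShare` and the error handling are illustrative, not part of the SDK.

```javascript
// Sketch: publish recorded screen data as a custom video source.
// Assumes AgoraRTC (Web SDK v3.x) and an initialized, joined client.
function publishScreenShare(AgoraRTC, client) {
  // getDisplayMedia must be called from a user gesture in a secure context.
  return navigator.mediaDevices.getDisplayMedia({ video: true }).then(function (screenStream) {
    // Use the captured screen track as the videoSource.
    var screenTrack = screenStream.getVideoTracks()[0];
    var localStream = AgoraRTC.createStream({
      video: true,
      audio: false,
      videoSource: screenTrack
    });
    localStream.init(function () {
      client.publish(localStream, function (err) {
        console.error('Publish failed', err);
      });
    });
    return localStream;
  });
}
```

Note that the browser stops screen capture when the user ends sharing, so you may also want to listen for the track's `ended` event and unpublish the stream.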

Customize the video renderer

Call the Stream.getVideoTrack method to get the video track, and then render it yourself, for example by drawing it onto a local canvas.
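One way to do this is to wrap the track in a MediaStream, decode it through a hidden video element, and draw frames onto a canvas. The sketch below assumes a local or remote Agora stream (v3.x API) and a canvas element you manage yourself; the helper name `startCanvasRenderer` is illustrative.

```javascript
// Sketch: render an Agora stream's video track onto a <canvas>.
// Assumes `stream` is an Agora Stream (Web SDK v3.x) and `canvas`
// is an HTMLCanvasElement already in the page.
function startCanvasRenderer(stream, canvas) {
  // Wrap the MediaStreamTrack in a MediaStream so a hidden
  // <video> element can decode it for us.
  var track = stream.getVideoTrack();
  var video = document.createElement('video');
  video.srcObject = new MediaStream([track]);
  video.muted = true;
  video.play();

  var ctx = canvas.getContext('2d');
  function draw() {
    // Only draw once the video has decodable frame data.
    if (video.readyState >= 2) {
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    }
    requestAnimationFrame(draw);
  }
  requestAnimationFrame(draw);
}
```

Rendering through a canvas gives you a hook for custom processing (overlays, filters) before display; if you only need plain playback, assigning the MediaStream directly to a video element's srcObject is simpler.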

Sample code

We provide an open-source Agora-Custom-VideoSource-Web-Webpack demo project on GitHub. You can try the demo, or view the source code in the rtc-client.js file.