Introduction

By default, the Agora SDK uses built-in audio and video modules for capturing and rendering in real-time communications.

However, the default modules might not meet your development requirements, such as in the following scenarios:

  • Your app has its own audio or video module.
  • You want to use a non-camera source, such as recorded screen data.
  • You need to process the captured video with a pre-processing library for functions such as image enhancement.
  • You need flexible device resource allocation to avoid conflicts with other services.

This article describes how to use the Agora Native SDK to customize the audio source and sink.

Implementation

Before customizing the audio source or sink, ensure that you have implemented the basic real-time communication functions in your project. For details, see the following documents:

Custom audio source

Refer to the following steps to customize the audio source in your project:

  1. Call the enableExternalAudioSourceWithSampleRate method to enable the external audio source before joining a channel (see the sketch after this list).
  2. Record and process the audio data on your own.
  3. Push the audio data to the SDK using either the pushExternalAudioFrameRawData or the pushExternalAudioFrameSampleBuffer method, depending on the format of the audio data.
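
The snippet below is a minimal sketch of step 1, assuming a 44100 Hz, mono external audio source; adjust the sample rate and channel count to match your own pipeline. The channel name is a placeholder, and the Swift form of enableExternalAudioSourceWithSampleRate shown here may differ slightly depending on your SDK version.

// Swift
// Enable the external audio source before joining the channel.
// The sample rate (Hz) and channel count must match the audio data you push later.
agoraKit.enableExternalAudioSource(withSampleRate: 44100, channelsPerFrame: 1)
// Join the channel as usual; "demoChannel" is a placeholder channel name.
agoraKit.joinChannel(byToken: nil, channelId: "demoChannel", info: nil, uid: 0, joinSuccess: nil)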

API call sequence

Refer to the following diagram to customize the audio source in your project.

Sample code

Refer to the following code to customize the audio source in your project.

// Swift
// Push the audio frame in the rawData format.
// rawData points to the PCM data; samples is the number of samples per push.
agoraKit.pushExternalAudioFrameRawData(rawData, samples: samples, timestamp: 0)

// Push the audio frame in the CMSampleBuffer format.
agoraKit.pushExternalAudioFrameSampleBuffer(sampleBuffer)

// Objective-C
// Push the audio frame in the rawData format.
// rawData points to the PCM data; samples is the number of samples per push.
[agoraKit pushExternalAudioFrameRawData:rawData samples:samples timestamp:0];

// Push the audio frame in the CMSampleBuffer format.
[agoraKit pushExternalAudioFrameSampleBuffer:sampleBuffer];
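
As additional context, the sketch below shows one common place the CMSampleBuffer variant fits: the audio delegate callback of an AVCaptureSession-based capture pipeline. The AVFoundation wiring (the session, the audio data output, and the ExternalAudioCapture class) is an assumption for illustration and is not part of the Agora SDK.

// Swift
import AVFoundation
import AgoraRtcKit // AgoraRtcEngineKit in earlier SDK versions

// Illustrative only: a minimal capture controller that forwards audio sample buffers
// delivered by an AVCaptureAudioDataOutput to the SDK.
class ExternalAudioCapture: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    // The engine instance created elsewhere in your project.
    var agoraKit: AgoraRtcEngineKit!

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Push each captured audio buffer to the SDK.
        agoraKit.pushExternalAudioFrameSampleBuffer(sampleBuffer)
    }
}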

API reference

Custom audio sink

Refer to the following steps to customize the audio sink in your project:

  1. Call the enableExternalAudioSink method to enable the external audio sink before joining a channel (see the sketch after this list).
  2. After joining the channel, call either the pullPlaybackAudioFrameRawData or the pullPlaybackAudioFrameSampleBufferByLengthInByte method to pull the remote audio data, depending on the format of the audio data.
  3. Play the remote audio data on your own.
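
The snippet below is a minimal sketch of step 1, assuming you want to pull 44100 Hz, mono remote audio; adjust the values to match the format your playback module expects.

// Swift
// Enable the external audio sink before joining the channel.
// The sample rate (Hz) and channel count describe the format of the audio you pull later.
agoraKit.enableExternalAudioSink(44100, channels: 1)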

API call sequence

Refer to the following diagram to customize the audio sink in your project.

Sample code

Refer to the following code to customize the audio sink in your project.

// Swift
// Pull the audio frame in the rawData format.
// data points to the buffer that receives the PCM data; lengthInByte is the length of the audio data, in bytes.
agoraKit.pullPlaybackAudioFrameRawData(data, lengthInByte: lengthInByte)

// Pull the audio frame in the CMSampleBuffer format.
let sampleBuffer = agoraKit.pullPlaybackAudioFrameSampleBufferByLengthInByte(lengthInByte: lengthInByte)

// Objective-C
// Pull the audio frame in the rawData format.
// data points to the buffer that receives the PCM data; lengthInByte is the length of the audio data, in bytes.
[agoraKit pullPlaybackAudioFrameRawData:data lengthInByte:lengthInByte];

// Pull the audio frame in the CMSampleBuffer format.
CMSampleBufferRef sampleBuffer = [agoraKit pullPlaybackAudioFrameSampleBufferByLengthInByte:lengthInByte];
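
As additional context, the sketch below pulls a fixed 10 ms chunk of 16-bit, 44100 Hz, mono audio (882 bytes) each time your own playback module asks for data. The surrounding playback clock (an audio unit render callback, audio queue, or timer) is an assumption; only the pull call belongs to the Agora SDK.

// Swift
// Illustrative only: pull 10 ms of remote audio on each tick of your own playback clock.
// 44100 samples/s x 0.01 s x 2 bytes per 16-bit sample = 882 bytes per pull.
// agoraKit is the engine instance created elsewhere in your project.
func renderNextAudioChunk(into buffer: UnsafeMutableRawPointer) {
    agoraKit.pullPlaybackAudioFrameRawData(buffer, lengthInByte: 882)
    // Hand the PCM data now in buffer to your own playback module here.
}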

API reference

Considerations

Customizing the audio source and sink requires you to manage audio data recording and playback on your own.

  • When customizing the audio source, you need to record and process the audio data on your own.
  • When customizing the audio sink, you need to process and play back the audio data on your own.