Raw Data API

RtcEngine Interface Class

  • For Android/Windows, the APIs in this section are under the RtcEngine Interface Class.
  • For iOS/Mac, the APIs in this section are under AgoraRtcEngineKit Interface Class.

Set Recording Audio Format (setRecordingAudioFrameParameters)

This method sets the format of the callback data in onRecordAudioFrame.

Android/Windows

int setRecordingAudioFrameParameters(int sampleRate,
                                     int channel,
                                     RAW_AUDIO_FRAME_OP_MODE_TYPE mode,
                                     int samplesPerCall);
Name Description
sampleRate It specifies the sampling rate of the callback data returned by onRecordAudioFrame, which can be set to 8000, 16000, 32000, 44100, or 48000.
channel It specifies the number of channels in the callback data returned by onRecordAudioFrame, which can be set to 1 or 2:
1: Mono
2: Stereo
mode It specifies the use mode of the onRecordAudioFrame callback:
  • RAW_AUDIO_FRAME_OP_MODE_READ_ONLY = 0: Read-only mode. Users only read the AudioFrame data without modifying it. For example, users who acquire the data with the Agora SDK and then push RTMP streams by themselves can use this mode.
  • RAW_AUDIO_FRAME_OP_MODE_WRITE_ONLY = 1: Write-only mode. Users replace the AudioFrame data with their own data for the Agora SDK to encode and transmit. For example, users who acquire the audio data by themselves can use this mode.
  • RAW_AUDIO_FRAME_OP_MODE_READ_WRITE = 2: Read-and-write mode. Users read the data from AudioFrame, modify it, and the modified data is then played. For example, users who have their own sound-effect processing module and want to do voice pre-processing, such as voice changing, can use this mode.
samplesPerCall It specifies the number of samples in the callback data returned by onRecordAudioFrame; it is usually set to 1024 for RTMP stream pushing.

iOS/Mac

- (int)setRecordingAudioFrameParametersWithSampleRate:(NSInteger)sampleRate
                                              channel:(NSInteger)channel
                                                 mode:(AgoraRtcRawAudioFrameOpMode)mode
                                       samplesPerCall:(NSInteger)samplesPerCall;
Name Description
sampleRate It specifies the sampling rate of the callback data returned by onRecordAudioFrame, which can be set to 8000, 16000, 32000, 44100, or 48000.
channel It specifies the number of channels in the callback data returned by onRecordAudioFrame, which can be set to 1 or 2:
1: Mono
2: Stereo
mode It specifies the use mode of the onRecordAudioFrame callback:
  • AgoraRtc_RawAudioFrame_OpMode_ReadOnly = 0: Read-only mode. Users only read the AudioFrame data without modifying it. For example, users who acquire the data with the Agora SDK and then push RTMP streams by themselves can use this mode.
  • AgoraRtc_RawAudioFrame_OpMode_WriteOnly = 1: Write-only mode. Users replace the AudioFrame data with their own data for the Agora SDK to encode and transmit. For example, users who acquire the audio data by themselves can use this mode.
  • AgoraRtc_RawAudioFrame_OpMode_ReadWrite = 2: Read-and-write mode. Users read the data from AudioFrame, modify it, and the modified data is then played. For example, users who have their own sound-effect processing module and want to do voice pre-processing, such as voice changing, can use this mode.
samplesPerCall It specifies the number of samples in the callback data returned by onRecordAudioFrame; it is usually set to 1024 for RTMP stream pushing.

Set Playback Audio Format (setPlaybackAudioFrameParameters)

This method sets the format of the callback data in onPlaybackAudioFrame.

Android/Windows

int setPlaybackAudioFrameParameters(int sampleRate,
                                    int channel,
                                    RAW_AUDIO_FRAME_OP_MODE_TYPE mode,
                                    int samplesPerCall);
Name Description
sampleRate It specifies the sampling rate of the callback data returned by onPlaybackAudioFrame, which can be set to 8000, 16000, 32000, 44100, or 48000.
channel It specifies the number of channels in the callback data returned by onPlaybackAudioFrame, which can be set to 1 or 2:
1: Mono
2: Stereo
mode It specifies the use mode of the onPlaybackAudioFrame callback:
  • RAW_AUDIO_FRAME_OP_MODE_READ_ONLY = 0: Read-only mode. Users only read the AudioFrame data without modifying it. For example, users who acquire the data with the Agora SDK and then push RTMP streams by themselves can use this mode.
  • RAW_AUDIO_FRAME_OP_MODE_WRITE_ONLY = 1: Write-only mode. Users replace the AudioFrame data with their own data for the Agora SDK to encode and transmit. For example, users who acquire the audio data by themselves can use this mode.
  • RAW_AUDIO_FRAME_OP_MODE_READ_WRITE = 2: Read-and-write mode. Users read the data from AudioFrame, modify it, and the modified data is then played. For example, users who have their own sound-effect processing module and want to do voice post-processing, such as voice changing, can use this mode.
samplesPerCall It specifies the number of samples in the callback data returned by onPlaybackAudioFrame; it is usually set to 1024 for RTMP stream pushing.

iOS/Mac

- (int)setPlaybackAudioFrameParametersWithSampleRate:(NSInteger)sampleRate
                                             channel:(NSInteger)channel
                                                mode:(AgoraRtcRawAudioFrameOpMode)mode
                                      samplesPerCall:(NSInteger)samplesPerCall;
Name Description
sampleRate It specifies the sampling rate of the callback data returned by onPlaybackAudioFrame, which can be set to 8000, 16000, 32000, 44100, or 48000.
channel It specifies the number of channels in the callback data returned by onPlaybackAudioFrame, which can be set to 1 or 2:
1: Mono
2: Stereo
mode It specifies the use mode of the onPlaybackAudioFrame callback:
  • AgoraRtc_RawAudioFrame_OpMode_ReadOnly = 0: Read-only mode. Users only read the AudioFrame data without modifying it. For example, users who acquire the data with the Agora SDK and then push RTMP streams by themselves can use this mode.
  • AgoraRtc_RawAudioFrame_OpMode_WriteOnly = 1: Write-only mode. Users replace the AudioFrame data with their own data for the Agora SDK to encode and transmit. For example, users who acquire the audio data by themselves can use this mode.
  • AgoraRtc_RawAudioFrame_OpMode_ReadWrite = 2: Read-and-write mode. Users read the data from AudioFrame, modify it, and the modified data is then played. For example, users who have their own sound-effect processing module and want to do voice post-processing, such as voice changing, can use this mode.
samplesPerCall It specifies the number of samples in the callback data returned by onPlaybackAudioFrame; it is usually set to 1024 for RTMP stream pushing.

Set Mixed Data Format of Recording and Playback (setMixedAudioFrameParametersWithSampleRate)

This method sets the format of the callback data in onMixedAudioFrame.

Android/Windows

int setMixedAudioFrameParametersWithSampleRate(int sampleRate,
                                               int samplesPerCall);
Name Description
sampleRate It specifies the sampling rate of the callback data returned by onMixedAudioFrame, which can be set to 8000, 16000, 32000, 44100, or 48000.
samplesPerCall It specifies the number of samples in the callback data returned by onMixedAudioFrame; it is usually set to 1024 for RTMP stream pushing.

iOS/Mac

- (int)setMixedAudioFrameParametersWithSampleRate:(NSInteger)sampleRate
                                   samplesPerCall:(NSInteger)samplesPerCall;
Name Description
sampleRate It specifies the sampling rate of the callback data returned by onMixedAudioFrame, which can be set to 8000, 16000, 32000, 44100, or 48000.
samplesPerCall It specifies the number of samples in the callback data returned by onMixedAudioFrame; it is usually set to 1024 for RTMP stream pushing.

IAudioFrameObserver Interface Class

Get Recorded Audio Frame (onRecordAudioFrame)

This method gets the recorded audio frame.

virtual bool onRecordAudioFrame(AudioFrame& audioFrame) override {
    return true;
}
Name Description
AudioFrame samples: number of samples in the frame
bytesPerSample: number of bytes per sample (2 for PCM16)
channels: number of channels (data is interleaved if stereo)
samplesPerSec: sampling rate
buffer: data buffer
renderTimeMs: the timestamp for rendering the audio stream. Use this timestamp to synchronize the rendering of the audio stream. [1]

Footnotes

[1] This timestamp is for rendering the audio stream, not the timestamp of capturing the audio stream.
struct AudioFrame {
    AUDIO_FRAME_TYPE type;
    int samples;
    int bytesPerSample;
    int channels;
    int samplesPerSec;
    void* buffer;
    int64_t renderTimeMs;
};

Get the Playback Audio Frame (onPlaybackAudioFrame)

This method gets the playback audio frame. The parameter description is the same as that of onRecordAudioFrame.

virtual bool onPlaybackAudioFrame(AudioFrame& audioFrame) override {
    return true;
}

Get the Playback Audio Frame of a Specific User (onPlaybackAudioFrameBeforeMixing)

This method gets the playback audio frame of a specific user. The parameter description is the same as onRecordAudioFrame.

virtual bool onPlaybackAudioFrameBeforeMixing(unsigned int uid, AudioFrame& audioFrame) override {
    return true;
}
Name Description
uid The UID of the specified user.

Get the Mixed Data of Recording and Playback Audio (onMixedAudioFrame)

This method gets the mixed data of the recorded and playback audio. It returns only single-channel data.

virtual bool onMixedAudioFrame(AudioFrame& audioFrame) override {
    return true;
}
Name Description
AudioFrame samples: number of samples in the frame
bytesPerSample: number of bytes per sample (2 for PCM16)
channels: number of channels (data is interleaved if stereo)
samplesPerSec: sampling rate
buffer: data buffer
renderTimeMs: the timestamp for rendering the audio stream. Use this timestamp to synchronize the rendering of the audio stream. [1]

Register Audio Observer Object (registerAudioFrameObserver)

int IMediaEngine::registerAudioFrameObserver(IAudioFrameObserver* observer);

This method registers the audio observer object. When you need the engine to return the onRecordAudioFrame, onPlaybackAudioFrame, onPlaybackAudioFrameBeforeMixing, or onMixedAudioFrame callback, call this method to register the observer.

Name Description
observer The interface object instance.
Set it to NULL to cancel the registration if necessary.

IVideoFrameObserver Interface Class

Get Captured Video Frame (onCaptureVideoFrame)

virtual bool onCaptureVideoFrame(VideoFrame& videoFrame)

This method gets the image captured by the camera.

Name Description
VideoFrame yBuffer: pointer to the Y buffer in the YUV data
uBuffer: pointer to the U buffer in the YUV data
vBuffer: pointer to the V buffer in the YUV data
width: video pixel width
height: video pixel height
yStride: line span of the Y buffer in the YUV data
uStride: line span of the U buffer in the YUV data
vStride: line span of the V buffer in the YUV data
rotation: the rotation to apply to this frame before rendering the video; supported values are 0, 90, 180, and 270 degrees.
renderTimeMs: the timestamp for rendering the video stream. Use this timestamp to synchronize the rendering of the video stream. [2]
Return Value None

Footnotes

[2] This timestamp is for rendering the video stream, not the timestamp of capturing the video stream.

The video data format is YUV420. The callback exposes each plane through a buffer pointer; the interface user cannot reassign the buffer pointers and can only modify the contents of the buffers.

struct VideoFrame {
    VIDEO_FRAME_TYPE type;
    int width;
    int height;
    int yStride;
    int uStride;
    int vStride;
    void* yBuffer;
    void* uBuffer;
    void* vBuffer;
    int rotation; // rotation of this frame (0, 90, 180, 270)
    int64_t renderTimeMs;
};

Get Video Frame of Another User (onRenderVideoFrame)

virtual bool onRenderVideoFrame(unsigned int uid, VideoFrame& videoFrame)

This method processes the received image of another user (post-processing).

Name Description
uid The UID of the specified user.
VideoFrame yBuffer: pointer to the Y buffer in the YUV data
uBuffer: pointer to the U buffer in the YUV data
vBuffer: pointer to the V buffer in the YUV data
width: video pixel width
height: video pixel height
yStride: line span of the Y buffer in the YUV data
uStride: line span of the U buffer in the YUV data
vStride: line span of the V buffer in the YUV data
Return Value None
struct VideoFrame {
    VIDEO_FRAME_TYPE type;
    int width;
    int height;
    int yStride;
    int uStride;
    int vStride;
    void* yBuffer;
    void* uBuffer;
    void* vBuffer;
};

Register Video Observer Object (registerVideoFrameObserver)

int registerVideoFrameObserver(agora::media::IVideoFrameObserver *observer);

This method registers the video observer object. When you need the engine to return the onCaptureVideoFrame or onRenderVideoFrame callback, call this method to register the observer.

Name Description
observer The interface object instance.
Set it to NULL to cancel the registration if necessary.

For the Android platform, registerVideoFrameObserver is defined in libHDACEngine.so, which you need to load yourself.