
Migrate from Voice SDK 3.x

Migration steps

This section introduces the main steps to upgrade the SDK from v2.x or v3.7.x to v4.x.

  1. Integrate the SDK

    See Get started for more information about integrating the v4.x SDK into your project.

  2. Rename imported classes

    After successfully integrating the SDK, update the imported classes in the /app/java/com.example.<projectname>/MainActivity file of your project from the io.agora.rtc prefix to the io.agora.rtc2 prefix, as shown in the sketch at the end of this step.

    Notice

    The location of AgoraRtcChannelMediaOptions.java in the 4.x SDK is also changed from /models to the root directory.
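    The change is typically limited to import statements. A minimal sketch of the rename, assuming your activity imports RtcEngine, IRtcEngineEventHandler, and ChannelMediaOptions (adjust the list to the classes your project actually uses):

    ```java
    // Before (v3.7.x):
    //   import io.agora.rtc.RtcEngine;
    //   import io.agora.rtc.IRtcEngineEventHandler;
    //   import io.agora.rtc.models.ChannelMediaOptions;

    // After (4.x): same class names under the io.agora.rtc2 prefix;
    // ChannelMediaOptions now sits in the package root (see the notice above).
    import io.agora.rtc2.RtcEngine;
    import io.agora.rtc2.IRtcEngineEventHandler;
    import io.agora.rtc2.ChannelMediaOptions;
    ```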

  3. Update the Agora code in your app

    The 4.x SDK has optimized or modified the implementation of some functions, resulting in incompatibility with the v3.7.x SDK. In order to retain Agora functionality in your app, update the code in your app according to What has changed.

What has changed

This section introduces the main changes of 4.x compared to v3.7.x in the following categories. You need to update the code of your app according to your business scenario.

  • Breaking changes: Introduces API compatibility changes that have a big impact. You need to spend significant time modifying the related implementation.

  • Behavior changes: Introduces changes caused by reasonable optimization of the SDK default behavior and API behavior. Less time is required to modify the related implementation, if any.

  • Function gaps: Introduces functions that were supported in v3.7.x but are not supported in 4.x. However, these functions are intended to be added in a future release.

  • Removed APIs: Introduces APIs that were supported in v3.7.x but removed in 4.x. Most of these APIs have alternatives in 4.x. Modifying the related implementation should require less time.

  • Naming and data type changes: Introduces the naming and data type changes of the main APIs. You can update the relevant implementation according to the error message in the IDE, which is expected to take less time.

Breaking changes

After upgrading from v3.7.x to 4.x, the way the APIs implement some functions is different. This section introduces compatibility changes for these APIs and the logic for updating the code of your app.

Multiple channels

In v3.7.x, the SDK provides the RtcChannel and IRtcChannelEventHandler classes to implement multi-channel control. The v3.7.x SDK supports subscribing to the audio and video streams of multiple channels, but only supports publishing one group of audio and video streams in one channel.

4.x introduces the following changes:

  • The SDK provides an RtcEngineEx class to implement multi-channel functions. Combined with the multi-channel capabilities, an RtcEngine instance can simultaneously capture multiple audio and video streams and publish them to the remote end to adapt to various business scenarios.

    After calling joinChannel to join the first channel, call joinChannelEx multiple times to join additional channels and publish the specified streams to different channels using different user IDs (localUid) and ChannelMediaOptions settings, as shown in the sketch after this list.

  • A new RtcConnection class is added to represent the connection established by joinChannel. A connection is determined by the channel name (channelId) and localUid. You can control the publishing and subscribing state of different connections through RtcConnection. The SDK appends Ex to the names of all APIs that take a connection parameter (corresponding to the RtcConnection class) to distinguish them, and gathers these APIs in the RtcEngineEx class to implement more multi-stream functions.
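The following sketch illustrates this flow. It is a minimal illustration rather than the sample project: the App ID, tokens, channel names, and user IDs are placeholders, and only a subset of ChannelMediaOptions fields is shown.

```java
import android.content.Context;

import io.agora.rtc2.ChannelMediaOptions;
import io.agora.rtc2.Constants;
import io.agora.rtc2.IRtcEngineEventHandler;
import io.agora.rtc2.RtcConnection;
import io.agora.rtc2.RtcEngine;
import io.agora.rtc2.RtcEngineConfig;
import io.agora.rtc2.RtcEngineEx;

public class MultiChannelSketch {
    // Joins a first channel with joinChannel, then a second one with joinChannelEx.
    public static RtcEngineEx joinTwoChannels(Context context) throws Exception {
        RtcEngineConfig config = new RtcEngineConfig();
        config.mContext = context;
        config.mAppId = "yourAppId";                     // placeholder
        config.mEventHandler = new IRtcEngineEventHandler() {};
        RtcEngineEx engine = (RtcEngineEx) RtcEngine.create(config);

        // First channel: publish microphone and camera with localUid 1001.
        ChannelMediaOptions first = new ChannelMediaOptions();
        first.clientRoleType = Constants.CLIENT_ROLE_BROADCASTER;
        first.publishMicrophoneTrack = true;
        first.publishCameraTrack = true;
        engine.joinChannel("token1", "channel-1", 1001, first);

        // Second channel: a different localUid and its own options,
        // identified by an RtcConnection (channelId + localUid).
        RtcConnection connection = new RtcConnection();
        connection.channelId = "channel-2";
        connection.localUid = 1002;
        ChannelMediaOptions second = new ChannelMediaOptions();
        second.clientRoleType = Constants.CLIENT_ROLE_BROADCASTER;
        second.publishScreenCaptureVideo = true;         // for example, a screen-sharing stream
        engine.joinChannelEx("token2", connection, second, new IRtcEngineEventHandler() {});
        return engine;
    }
}
```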

By setting ChannelMediaOptions, 4.x supports using one RtcEngine instance to capture audio and video streams from multiple sources at the same time and publish them to the remote user, adapting to various business scenarios. For example:

  • Simultaneously publish video streams collected by multiple cameras or multiple screen sharing streams.
  • Simultaneously publish a media player stream, a screen-sharing stream, and a video stream captured by the front and rear cameras.
  • Simultaneously publish one audio stream captured by the microphone, one captured by a custom audio source, and one media player stream.

Combined with the multi-channel capability, you can also experience the following functions:

  • Publish multiple groups of audio and video streams to the remote user through different localUids.

  • Mix multiple audio streams and publish them to the remote user through one localUid.

  • Mix multiple video streams and publish them to the remote user through one localUid.

The functionality of RtcChannel and RtcEngine in v3.7.x partially overlaps, so 4.x hides the RtcChannel and IRtcChannelEventHandler classes. See the JoinMultiChannel sample project for more details on how to replace RtcChannel with joinChannel and ChannelMediaOptions. The expected migration cost is one day or less.

If you need to continue to use the RtcChannel and IRtcChannelEventHandler classes, contact support@agora.io. The decision whether to maintain compatibility in a future release is based on your feedback.

Media stream publishing control

In v3.7.x, the SDK uses the publishLocalAudio and publishLocalVideo members in ChannelMediaOptions to control the audio and video publishing state in the channel.

In 4.x, the SDK gathers more channel-related settings into ChannelMediaOptions, including publishing of audio and video streams from different sources, automatic subscribing of audio and video streams, user role switching, token updating, and default dual stream options. You can determine the media stream publishing and subscribing behavior by calling joinChannel or joinChannelEx when joining a channel, or you can flexibly update the media options by calling updateChannelMediaOptions after joining a channel, such as switching video sources.
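For example, the following sketch joins a channel publishing the microphone and camera, then switches the published video source to screen sharing by updating the media options. It is a minimal illustration, not taken from the sample project; the token, channel name, and uid are placeholders.

```java
import io.agora.rtc2.ChannelMediaOptions;
import io.agora.rtc2.Constants;
import io.agora.rtc2.RtcEngine;

public class MediaOptionsSketch {
    // Join while publishing the microphone and camera, with auto-subscription enabled.
    public static void joinWithOptions(RtcEngine engine) {
        ChannelMediaOptions options = new ChannelMediaOptions();
        options.clientRoleType = Constants.CLIENT_ROLE_BROADCASTER;
        options.publishMicrophoneTrack = true;
        options.publishCameraTrack = true;
        options.autoSubscribeAudio = true;
        options.autoSubscribeVideo = true;
        engine.joinChannel("yourToken", "yourChannel", 0, options);
    }

    // Later, switch the published video source from the camera to screen sharing.
    public static void switchToScreenShare(RtcEngine engine) {
        ChannelMediaOptions options = new ChannelMediaOptions();
        options.publishCameraTrack = false;
        options.publishScreenCaptureVideo = true;
        engine.updateChannelMediaOptions(options);
    }
}
```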

See the JoinChannelVideo sample project to update the code in your app.

Custom video source and renderer

In v3.7.x, the SDK provides the following ways to implement the custom video source and renderer:

  • Push mode for custom video source
  • Raw video data mode for custom video renderer
  • MediaIO mode (IVideoSource) for custom video source
  • MediaIO mode (IVideoSink) for custom video renderer

4.x unifies the audio and video processing pipeline internally. Push mode and raw video data mode are simpler to integrate, so Agora recommends using them for custom video sources and renderers and removes the following MediaIO-mode APIs:

  • IVideoSource
  • IVideoSink
  • IVideoFrameConsumer
  • setVideoSource
  • setLocalVideoRenderer
  • setRemoteVideoRenderer

If you use the MediaIO mode in v3.7.x to implement a custom video source, a custom video renderer, video source switching, or other functions, Agora recommends updating the code of your app by referring to the corresponding sample projects.
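For orientation, a push-mode video source in 4.x roughly follows the sketch below. Treat it as an assumption-laden outline: the setExternalVideoSource signature and the location of the ExternalVideoSourceType enum reflect the 4.x Android API as commonly documented, but verify them against your SDK version and the sample projects.

```java
import io.agora.base.VideoFrame;
import io.agora.rtc2.Constants;
import io.agora.rtc2.RtcEngine;

public class PushModeSketch {
    // Enable the external (push-mode) video source before joining a channel.
    public static void enablePushMode(RtcEngine engine) {
        engine.setExternalVideoSource(true, /* useTexture = */ false,
                Constants.ExternalVideoSourceType.VIDEO_FRAME);
    }

    // Feed each frame produced by your own capture pipeline to the SDK.
    public static void onFrameCaptured(RtcEngine engine, VideoFrame frame) {
        engine.pushExternalVideoFrame(frame);
    }
}
```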

Error codes and warning codes

In v3.7.x, the SDK returns warning codes through the onWarning callback.

To facilitate locating and troubleshooting issues, 4.x reports problems and causes through the return values of APIs or different callbacks for listening to states. For example:

  • getConnectionState: Reports the network connection state.

  • onLocalAudioStateChanged: Reports the local audio state.

  • onLocalVideoStateChanged: Reports the local video state.

  • onRemoteAudioStateChanged: Reports the remote audio state.

  • onRemoteVideoStateChanged: Reports the remote video state.

As a consequence, 4.x removes the onWarning callback.
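In practice, this means checking API return values and listening to the state callbacks listed above instead of handling onWarning. A minimal sketch (the token and channel name are placeholders):

```java
import android.util.Log;

import io.agora.rtc2.ChannelMediaOptions;
import io.agora.rtc2.RtcEngine;

public class ErrorHandlingSketch {
    public static void joinAndCheck(RtcEngine engine, String token, String channel) {
        // 4.x reports most problems through return values and state callbacks,
        // so check the return code rather than waiting for a warning callback.
        int result = engine.joinChannel(token, channel, 0, new ChannelMediaOptions());
        if (result != 0) {
            Log.e("Agora", "joinChannel failed with code " + result);
        }
        // The connection state can also be queried directly.
        Log.d("Agora", "connection state: " + engine.getConnectionState());
    }
}
```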

Behavior changes

This section introduces changes caused by reasonable optimization of the SDK default behavior and API behavior.

Channel profile

In v3.7.x, the default channel profile is CHANNEL_PROFILE_COMMUNICATION (the communication profile).

Because the interactive streaming profile supports seamless switching from one-to-one calls to multi-user interaction, Agora has, since v3.0.0, aligned the internal transmission protocol and weak-network resilience of the communication profile with those of the interactive streaming profile. In 4.x, Agora also changes the default channel profile to CHANNEL_PROFILE_LIVE_BROADCASTING (the interactive streaming profile).
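If your app depends on the communication profile, set it explicitly instead of relying on the default. A minimal sketch using setChannelProfile:

```java
import io.agora.rtc2.Constants;
import io.agora.rtc2.RtcEngine;

public class ChannelProfileSketch {
    // 4.x defaults to CHANNEL_PROFILE_LIVE_BROADCASTING, so opt back in to the
    // communication profile explicitly if that is what your app expects.
    public static void useCommunicationProfile(RtcEngine engine) {
        engine.setChannelProfile(Constants.CHANNEL_PROFILE_COMMUNICATION);
    }
}
```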

Default log file

In v3.7.x, when the SDK creates multiple log files, the earlier files are named in an agorasdk_x.log format, such as agorasdk_1.log. 4.x changes the naming format to agorasdk.x.log, such as agorasdk.1.log. Additionally, 4.x adds the agoraapi.log file to record API logs.

Fast channel switching

In v3.7.x, you need to call switchChannel to quickly switch a channel.

In 4.x, you can achieve the same switching speed as switchChannel in v3.7.x by switching a channel through leaveChannel and joinChannel. Therefore, 4.x removes switchChannel. If you call switchChannel to quickly switch a channel in v3.7.x, see the VideoQuickSwitch sample project to update the code in your app.
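A minimal sketch of the replacement flow (the token, channel name, and media options are placeholders):

```java
import io.agora.rtc2.ChannelMediaOptions;
import io.agora.rtc2.RtcEngine;

public class QuickSwitchSketch {
    // 4.x: leave the current channel and join the new one; this matches the
    // switching speed of switchChannel in v3.7.x.
    public static void switchTo(RtcEngine engine, String token, String newChannel) {
        engine.leaveChannel();
        ChannelMediaOptions options = new ChannelMediaOptions();
        options.autoSubscribeAudio = true;
        options.autoSubscribeVideo = true;
        engine.joinChannel(token, newChannel, 0, options);
    }
}
```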

Agora self-developed extensions

Building on v4.0.0 Beta, v4.0.0 adds automatic loading of Agora self-developed dynamic libraries. As of this release, when you use an Agora self-developed extension, you do not need to manually integrate the dynamic library into the project: the SDK loads it automatically during RtcEngine initialization, and you can directly call the corresponding method of the extension to enable the feature.

The APIs and the extension types they correspond to are as follows:

  • enableVirtualBackground: Virtual background extension
  • setBeautyEffectOptions, setVideoDenoiserOptions, setLowlightEnhanceOptions, setColorEnhanceOptions: Video enhancement extension
  • enableRemoteSuperResolution: Super resolution extension
  • setAudioEffectPreset, setVoiceBeautifierPreset, setVoiceConversionPreset: Voice beautifier extension
  • enableSpatialAudio: Spatial audio extension
  • enableContentInspect: Content moderation extension

    Virtual metronome

    When you call startRhythmPlayer, the SDK publishes the sound of the virtual metronome to remote users by default. If you do not want remote users to hear the virtual metronome, refer to the following operations:

    • In v3.7.0, call the configRhythmPlayer, and set publish to false.
    • In v4.0.0, set publishRhythmPlayerTrack in ChannelMediaOptions to false.
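    A minimal 4.x sketch of the second option; the rhythm player setup itself is omitted, and updateChannelMediaOptions is used on the assumption that you are already in a channel:

    ```java
    import io.agora.rtc2.ChannelMediaOptions;
    import io.agora.rtc2.RtcEngine;

    public class RhythmPlayerSketch {
        // Keep the virtual metronome audible locally without publishing it.
        public static void keepMetronomeLocal(RtcEngine engine) {
            ChannelMediaOptions options = new ChannelMediaOptions();
            options.publishRhythmPlayerTrack = false;
            engine.updateChannelMediaOptions(options);
        }
    }
    ```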

    Volume indication

    You can call the enableAudioVolumeIndication method to enable the user's volume indication function. There is a difference in the definition of the interval parameter in the enableAudioVolumeIndication method between v3.7.0 and v4.0.0, as follows:

    • In v3.7.0, Agora recommends that you set the interval to be greater than 200 ms. The minimum is 10 ms; otherwise, the onAudioVolumeIndication callback is not received.
    • In v4.0.0, you must set the interval to an integer that is a multiple of 200 ms. If the value of interval is lower than 200, the SDK automatically adjusts it to 200.

    When the user's volume indication is enabled, the SDK triggers the onAudioVolumeIndication callback at the time interval set in this method. If the local user calls muteLocalAudioStream to mute themselves, the SDK behaves inconsistently between v3.7.0 and v4.0.0:

    • In v3.7.0, the SDK immediately stops reporting the local user's volume indication callback.
    • In v4.0.0, the SDK continues to report the local user's volume indication callback.
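    A minimal 4.x sketch; the interval of 400 ms is illustrative and must be a multiple of 200:

    ```java
    import io.agora.rtc2.RtcEngine;

    public class VolumeIndicationSketch {
        // Report volume roughly every 400 ms; values below 200 are raised to 200 by the SDK.
        public static void enableVolumeIndication(RtcEngine engine) {
            engine.enableAudioVolumeIndication(400, /* smooth = */ 3, /* reportVad = */ true);
        }
    }
    ```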

    Device permissions

    • In v3.7.0, LOCAL_AUDIO_STREAM_ERROR_DEVICE_NO_PERMISSION in onLocalAudioStateChanged reports that there is no permission to start the capture device, and LOCAL_VIDEO_STREAM_ERROR_DEVICE_NO_PERMISSION in onLocalVideoStateChanged reports that there is no permission to start the video capture device.

    • In v4.0.0, the permission statuses of the audio and video capture devices are both reported in the onPermissionError callback.

    Pre-call network test

    If you need to start or stop the network connection quality test, note the following:

    • In v3.7.0, you can call enableLastmileTest to start the network quality test. If you want to stop the network test, you need to call disableLastmileTest.

    • In v4.0.0, you can call startLastmileProbeTest to enable network quality testing. If you want to stop network testing, you need to call stopLastmileProbeTest.
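    A minimal 4.x sketch of the probe test; the LastmileProbeConfig package path and the expected bitrates are assumptions to verify against your SDK, and results arrive through the onLastmileProbeResult callback:

    ```java
    import io.agora.rtc2.RtcEngine;
    import io.agora.rtc2.internal.LastmileProbeConfig; // package path assumed; verify in your SDK

    public class NetworkProbeSketch {
        public static void runProbe(RtcEngine engine) {
            LastmileProbeConfig config = new LastmileProbeConfig();
            config.probeUplink = true;
            config.probeDownlink = true;
            config.expectedUplinkBitrate = 100000;   // bps, illustrative
            config.expectedDownlinkBitrate = 100000; // bps, illustrative
            engine.startLastmileProbeTest(config);
        }

        public static void stopProbe(RtcEngine engine) {
            engine.stopLastmileProbeTest();
        }
    }
    ```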

    Remote media event triggering mechanism

    In the following scenarios, the mechanism of triggering remote media events is changed:

    • Scenario 1: When the host calls muteLocalAudioStream or muteLocalVideoStream outside the channel to change the publishing status of the local audio or video stream and then joins the channel.
    • Scenario 2: When the host calls muteLocalAudioStream or muteLocalVideoStream within the channel to change the publishing status of the local audio or video stream, and then other users join the channel.

    The behavior differences of the Agora SDK between v3.7.0 and v4.0.0 are as follows:

    • In v3.7.0, the local user receives the onRemoteAudioStateChanged or onRemoteVideoStateChanged callback, which reports the status changes of the remote host's audio or video streams.
    • In v4.0.0, instead of the onRemoteAudioStateChanged or onRemoteVideoStateChanged callback, the local user receives the onUserMuteAudio or onUserMuteVideo callback, which reports the changes in the remote host's publishing status.

    Media options

    There are differences in the behavior of the SDK when setting channel media options while joining a channel between v3.7.x and 4.x:

    • In v3.7.x, if you set publishLocalAudio in ChannelMediaOptions to false, it will stop publishing the local audio stream within the channel.
    • In 4.x, if you set publishMicrophoneTrack in ChannelMediaOptions to false, it will not only stop publishing the local audio stream within the channel but also stop local microphone capture.

    Function gaps

    This section introduces functions that were supported in v3.7.x but are no longer supported or behave differently in 4.x. However, Agora plans to support them or align their behavior in a future release.

    Audio application scenarios

    4.x reconstructs the audio application scenarios, which can replace most of the audio application scenarios of v3.7.x. The following table shows the correspondence of audio application scenarios in the two releases:

    v3.7.x                                   4.x
    AUDIO_SCENARIO_DEFAULT                   AUDIO_SCENARIO_DEFAULT
    AUDIO_SCENARIO_CHATROOM_ENTERTAINMENT    AUDIO_SCENARIO_CHATROOM
    AUDIO_SCENARIO_EDUCATION                 AUDIO_SCENARIO_DEFAULT
    AUDIO_SCENARIO_GAME_STREAMING            AUDIO_SCENARIO_GAME_STREAMING
    AUDIO_SCENARIO_SHOWROOM                  AUDIO_SCENARIO_DEFAULT
    AUDIO_SCENARIO_CHATROOM_GAMING           AUDIO_SCENARIO_CHATROOM
    AUDIO_SCENARIO_IOT                       AUDIO_SCENARIO_DEFAULT
    AUDIO_SCENARIO_MEETING                   AUDIO_SCENARIO_MEETING

    The following shows the differences in the behavior of APIs related to the audio route between v3.7.x and 4.x:

    setDefaultAudioRouteToSpeakerphone

    • In v3.7.x:

      • You can only set the audio route before joining a channel.

      • This method only controls the initial state of the audio route and does not change the default audio route of the system. For example, regardless of whether you set the parameter of setDefaultAudioRouteToSpeakerphone to true or false, calling setEnableSpeakerphone(false) changes the audio route to the earpiece.

    • In 4.x:

      • You can set the audio route either before or after joining a channel.

      • This method is a steady API and can change the default audio route of the system. For example, after calling setDefaultAudioRouteToSpeakerphone(true) to set the initial audio route to the speakerphone, calling setEnableSpeakerphone(false) cannot change the audio route to the earpiece.

    setEnableSpeakerphone

    • In v3.7.x: After connecting an external playback device such as a Bluetooth or wired headset, calling setEnableSpeakerphone cannot switch the audio route to the speakerphone or earpiece.

    • In 4.x: Not recommended.

    Also, when an external playback device is removed, for example, by disconnecting a Bluetooth headset, the audio route change differs between v3.7.x and 4.x:

    • In v3.7.x, the audio route changes as follows (in order of priority): the external device connected next to last (if any) > … > the external device connected first > setEnableSpeakerphone > setDefaultAudioRouteToSpeakerphone > the default audio route.

    • In 4.x, the audio route changes as follows (in order of priority): the external device connected next to last (if any) > … > the external device connected first > setDefaultAudioRouteToSpeakerphone > the default audio route.

    Default video bitrate

    In v3.7.x, if you set the video bitrate in VideoEncoderConfiguration as STANDARD_BITRATE, the default video bitrate in the CHANNEL_PROFILE_LIVE_BROADCASTING profile is twice that of the CHANNEL_PROFILE_COMMUNICATION profile.

    In 4.x, the video bitrate in the CHANNEL_PROFILE_COMMUNICATION profile is the same as that in the CHANNEL_PROFILE_LIVE_BROADCASTING profile, which means the video bitrate in the CHANNEL_PROFILE_COMMUNICATION profile is doubled.

    Virtual background

    See Virtual Background to update the code in your app.

    Image enhancement

    4.x modifies the calling logic of setBeautyEffectOptions. Before calling setBeautyEffectOptions, you need to do the following:

    1. Call addExtension(agora_video_process) during RtcEngine initialization to specify the extension’s library path.

    2. Call enableExtension(agora, beauty, true) to enable the extension.

    3. Call enableVideo to enable the video module.

    See the VideoProcessExtension sample project to update the code in your app.

    Unsupported functions

    Compared to v3.7.x, some features are not supported or only partially supported in 4.x. This section shows the APIs currently unsupported but for which support is planned for a future release.

    Remote video stream fallback:

    • setRemoteUserPriority

    Screen sharing:

    • onScreenCaptureInfoUpdated

    Removed APIs

    4.x removes deprecated or unrecommended APIs. Alternatives to the removed APIs, or the reasons for their removal, are as follows:

    • onVirtualBackgroundSourceEnabled: Use the return value of enableVirtualBackground instead.

    • onUserSuperResolutionEnabled: Use the superResolutionType member of the RemoteVideoStats class instead.

    • setAudioMixingPlaybackSpeed: Use setPlaybackSpeed instead.

    • setExternalAudioSourceVolume: Use adjustCustomAudioPublishVolume instead.

    • setAudioMixingDualMonoMode: Use setAudioDualMonoMode instead.

    • getEffectCurrentPosition: Use getPosition instead.

    • setEffectPosition: Use seek instead.

    • getEffectDuration: Use getDuration instead.

    • setAgoraLibPath: Use the mNativeLibPath member in RtcEngineConfig instead when calling create [2/2].

    • getAudioFileInfo and onRequestAudioFileInfo: Use getDuration instead.

    • onAudioDeviceTestVolumeIndication: Use onAudioVolumeIndication instead.

    • onFirstLocalAudioFrame: Use onFirstLocalAudioFramePublished instead.

    • getRecordAudioParams: Use setRecordingAudioFrameParameters instead.

    • getMixedAudioParams: Use setMixedAudioFrameParameters instead.

    • getPlaybackAudioParams: Use setPlaybackAudioFrameParameters instead.

    • The pushMode parameter in setExternalVideoSource: The default value of this parameter was true, and the parameter only took effect when set to true. Its removal does not affect the function.

    • The channel parameter in takeSnapshot and onSnapshotTaken has been removed: This change was made as the channel parameter was found to be redundant and was not essential for the functionality of these APIs.

    • setLocalPublishFallbackOption and onLocalPublishFallbackToAudioOnly: Rarely used in v3.7.x.

    • RENDER_MODE_FILL(4) in RENDER_MODE_TYPE: This mode might cause image overstretch and is not recommended.

    • The following enumerations of audio mixing: Rarely used in v3.7.x.

      • AUDIO_MIXING_REASON_STARTED_BY_USER

      • AUDIO_MIXING_REASON_ONE_LOOP_COMPLETED

      • AUDIO_MIXING_REASON_START_NEW_LOOP

      • AUDIO_MIXING_REASON_ALL_LOOPS_COMPLETED

      • AUDIO_MIXING_REASON_STOPPED_BY_USER

      • AUDIO_MIXING_REASON_PAUSED_BY_USER

      • AUDIO_MIXING_REASON_RESUMED_BY_USER

    • onAudioMixingFinished: Use onAudioMixingStateChanged instead.

    • The info parameter in joinChannel [2/2]: This parameter is optional and rarely used in v3.7.x.

    • enableDeepLearningDenoise: The SDK will provide deep-learning noise reduction as one of its capabilities in a future release instead of implementing it through an API.

    • setDefaultMuteAllRemoteVideoStreams: Use autoSubscribeVideo in ChannelMediaOptions.

    • setDefaultMuteAllRemoteAudioStreams: Use autoSubscribeAudio in ChannelMediaOptions.

    • The replace parameter in startAudioMixing: Use publishMicrophoneTrack in the ChannelMediaOptions instead.

    Naming and data type changes

    The naming and data type changes in 4.x cause error messages in the IDE when you compile your project, and you need to update the code of your app according to each error message.

    Naming changes

    The main API and parameter name changes are as follows:

    • adjustLoopbackRecordingSignalVolume is changed to adjustLoopbackRecordingVolume.

    • The fileSize member in LogConfig is renamed to fileSizeInKB.

    • The options parameter in joinChannel[2/2] is changed to mediaOptions.

    • The report_vad parameter in enableAudioVolumeIndication is changed to reportVad.

    • registerVideoEncodedFrameObserver is changed to registerVideoEncodedImageReceiver.

    Data type changes

    The main API data type changes are as follows:

    • The state and reason parameters in onRemoteAudioStateChanged are changed from integer to enumeration.

    • The oldState and newState parameters in onAudioPublishStateChanged, onVideoPublishStateChanged, onAudioSubscribeStateChanged, and onVideoSubscribeStateChanged are changed from integer to enumeration.

    • The state and error parameters in onLocalAudioStateChanged are changed from integer to enumeration.

    • The state and error parameters in onRtmpStreamingStateChanged are changed from integer to enumeration.
