This page provides the release notes for Video Calling 4.x.
If your target platform is Android 12 or higher, add the
android.permission.BLUETOOTH_CONNECT permission to the
AndroidManifest.xml file of the Android project to enable the Bluetooth function of the Android system.
v4.1.1 was released on February 8, 2023.
As of this release, the SDK optimizes the video encoder algorithm and upgrades the default video encoding resolution from 640 × 360 to 960 × 540 to accommodate improvements in device performance and network bandwidth, providing users with a full-link HD experience in various audio and video interaction scenarios.
If the default resolution does not meet your needs, call the setVideoEncoderConfiguration method to set the expected video encoding resolution in the video encoding parameters configuration.
1. Instant frame rendering
This release adds the
enableInstantMediaRendering method to enable instant rendering mode for audio and video frames, which can speed up the first video or audio frame rendering after the user joins the channel.
2. Video rendering tracing
This release adds the
startMediaRenderingTracingEx method. The SDK starts tracing the rendering status of the video frames in the channel from the moment this method is called, and reports information about the event through a callback.
Agora recommends that you use this method in conjunction with UI elements, such as buttons and sliders, in your app. For example, call this method when the user clicks Join Channel, and then obtain the indicators of the video frame rendering process through the reported information. This enables developers to optimize those indicators and improve the user experience.
1. Video frame observer
As of this release, the SDK optimizes the
onRenderVideoFrame callback, and the meaning of the return value is different depending on the video processing mode:
- When the video processing mode is
PROCESS_MODE_READ_ONLY, the return value is reserved for future use.
- When the video processing mode is
PROCESS_MODE_READ_WRITE, the SDK receives the video frame when the return value is
true. The video frame is discarded when the return value is false.
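The return-value handling above can be pictured with a small sketch (illustrative Python, not SDK code; the constants are hypothetical stand-ins for the enumerations):

```python
# Video processing modes, mirroring the enumerations described above.
PROCESS_MODE_READ_ONLY = "read_only"
PROCESS_MODE_READ_WRITE = "read_write"

def frame_kept(mode, callback_return):
    """Whether the SDK keeps a frame after onRenderVideoFrame returns."""
    if mode == PROCESS_MODE_READ_ONLY:
        return True  # return value reserved; the frame is always used
    return callback_return  # read-write mode: False discards the frame

print(frame_kept(PROCESS_MODE_READ_WRITE, False))  # -> False
```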
2. Super resolution
This release improves the performance of super resolution. To optimize the usability of super resolution, this release removes the
enableRemoteSuperResolution method. Super resolution is now included in the online strategies of video quality enhancement and does not require extra configuration.
This release fixes the following issues:
- Playing audio files with a sample rate of 48 kHz failed.
- Crashes occurred after users set the video resolution as 3840 × 2160 and started CDN streaming on Xiaomi Redmi 9A devices.
- In real-time chorus scenarios, remote users heard noises and echoes when an OPPO R11 device joined the channel in loudspeaker mode.
- When the playback of the local music finished, the
onAudioMixingFinished callback was not properly triggered.
- When using a video frame observer, the first video frame was occasionally missed on the receiver's end.
- When sharing screens in scenarios involving multiple channels, remote users occasionally saw black screens.
- Switching to the rear camera with the virtual background enabled occasionally caused the background to be inverted.
- When there were multiple video streams in a channel, calling some video enhancement APIs occasionally failed.
v4.1.0 was released on December 15, 2022.
1. Headphone equalization effect
This release adds the
setHeadphoneEQParameters method, which is used to adjust the low- and high-frequency parameters of the headphone EQ. This is mainly useful in spatial audio scenarios. If you cannot achieve the expected headphone EQ effect after calling
setHeadphoneEQPreset, you can call
setHeadphoneEQParameters to adjust the EQ.
2. Encoded video frame observer
This release adds the
setRemoteVideoSubscriptionOptionsEx method. When you call the
registerVideoEncodedFrameObserver method to register a video frame observer for the encoded video frames, the SDK subscribes to the encoded video frames by default. If you want to change the subscription options, you can call these new methods to set them.
For more information about registering video observers and subscription options, see the API reference.
3. MPUDP (MultiPath UDP) (Beta)
As of this release, the SDK supports the MPUDP protocol, which enables you to connect and use multiple paths to maximize the use of channel resources based on the UDP protocol. You can use different physical NICs on both mobile and desktop devices and aggregate them to effectively combat network jitter and improve transmission quality.
To enable this feature, contact firstname.lastname@example.org.
4. Camera capture options
This release adds the
followEncodeDimensionRatio member in
CameraCapturerConfiguration, which enables you to set whether to follow the video aspect ratio already set in
setVideoEncoderConfiguration when capturing video with the camera.
5. Multi-channel management
This release adds a series of multi-channel related methods that you can call to manage audio and video streams in multi-channel scenarios.
- The muteLocalVideoStreamEx methods are used to stop or resume publishing a local audio or video stream, respectively.
- The muteAllRemoteVideoStreamsEx methods are used to stop or resume subscribing to the audio or video streams of all remote users, respectively.
- The stopRtmpStreamEx methods are used to implement Media Push in multi-channel scenarios.
- The stopChannelMediaRelayEx methods are used to relay media streams across channels in multi-channel scenarios.
- Adds the
leaveChannelEx[2/2] method. Compared with the
leaveChannelEx[1/2] method, it adds a new
options parameter, which is used to choose whether to stop microphone recording when leaving a channel in a multi-channel scenario.
6. Video encoding preferences
In general scenarios, the default video encoding configuration meets most requirements. For certain specific scenarios, this release adds the
advanceOptions member in
VideoEncoderConfiguration for advanced settings of video encoding properties:
- compressionPreference: The compression preference for video encoding, which is used to choose between low-latency and high-quality video.
- encodingPreference: The video encoder preference, which is used to choose among adaptive, software encoder, or hardware encoder preferences.
7. Client role switching
To enable users to know whether the switched user role uses low latency or ultra-low latency, this release adds the
newRoleOptions parameter to the
onClientRoleChanged callback. The value of this parameter is as follows:
- AUDIENCE_LATENCY_LEVEL_LOW_LATENCY(1): Low latency.
- AUDIENCE_LATENCY_LEVEL_ULTRA_LOW_LATENCY(2): Ultra-low latency.
8. Brand-new AI Noise Suppression
The SDK supports a new version of Noise Suppression (in comparison to the basic Noise Suppression in v3.7.x). The new AI Noise Suppression has better vocal fidelity, cleaner noise suppression, and adds a dereverberation option. To enable this feature, contact email@example.com.
9. Spatial audio effect
This release adds the following features applicable to spatial audio effect scenarios, which can effectively enhance the user's sense of presence in virtual interactive scenarios.
- Sound insulation area: You can set a sound insulation area and a sound attenuation parameter by calling
setZones. When the sound source (which can be a user or the media player) and the listener are on opposite sides of the sound insulation area, the listener experiences an attenuation effect similar to that of sound encountering a building partition in a real environment. You can also set the sound attenuation parameter for the media player and the user, respectively, by calling
setRemoteAudioAttenuation, and specify whether that setting forcibly overrides the sound attenuation parameter.
- Doppler sound: You can enable Doppler sound through
SpatialAudioParams; the receiver experiences noticeable tonal changes when there is high-speed relative movement between the sound source and the receiver (such as in a racing game scenario).
- Headphone equalizer: You can use a preset headphone equalization effect by calling the
setHeadphoneEQPreset method to improve the listening experience with headphones.
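The sound insulation behavior described above can be pictured with a toy attenuation model (illustrative only; the formula and constants are assumptions, not the SDK's algorithm):

```python
def attenuated_gain(distance, attenuation, crosses_zone_boundary):
    """Toy model: volume falls off with distance, scaled by the
    attenuation parameter; crossing a sound insulation boundary
    applies an extra partition loss (all constants illustrative)."""
    gain = 1.0 / (1.0 + attenuation * distance)
    if crosses_zone_boundary:
        gain *= 0.2  # extra loss, like sound through a building partition
    return gain

# A source 4 m away sounds much quieter when it is inside an
# insulation zone and the listener is outside:
print(attenuated_gain(4.0, 0.5, False))  # louder: no partition in the way
print(attenuated_gain(4.0, 0.5, True))   # quieter: partition loss applied
```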
1. Bluetooth permissions
To simplify integration, as of this release, you can use the SDK to enable Android users to use Bluetooth normally without adding the android.permission.BLUETOOTH_CONNECT permission manually.
2. CDN streaming
To improve user experience during CDN streaming, when your camera does not support the video resolution you set for streaming, the SDK automatically adjusts the resolution to the closest supported value that has the same aspect ratio as the original video resolution you set. The actual video resolution used by the SDK for streaming is reported through the corresponding streaming callback.
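The adjustment logic can be sketched like this (a hypothetical model; the function name, mode list, and fallback behavior are illustrative, not SDK API):

```python
def closest_supported(requested, supported):
    """Pick the supported camera mode closest to the requested
    resolution, preferring modes with the same aspect ratio."""
    req_w, req_h = requested
    same_ratio = [(w, h) for (w, h) in supported
                  if w * req_h == h * req_w]  # same aspect ratio
    candidates = same_ratio or supported  # fall back if none match
    # Closest by pixel count among the candidates.
    return min(candidates, key=lambda m: abs(m[0] * m[1] - req_w * req_h))

# A camera that offers one 4:3 mode and two 16:9 modes:
modes = [(640, 480), (1280, 720), (1920, 1080)]
print(closest_supported((960, 540), modes))  # -> (1280, 720)
```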
3. Relaying media streams across channels
This release optimizes the
updateChannelMediaRelay method as follows:
- Before v4.1.0: If the target channel update fails due to internal reasons in the server, the SDK returns the error code
RELAY_EVENT_PACKET_UPDATE_DEST_CHANNEL_REFUSED(8), and you need to call the updateChannelMediaRelay method again.
- v4.1.0 and later: If the target channel update fails due to internal server reasons, the SDK retries the update until the target channel update is successful.
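The behavioral difference can be sketched as follows (illustrative Python, not SDK code; the retry cap is an assumption — the text above says the SDK retries until the update succeeds):

```python
def update_with_retry(update_fn, max_attempts=10):
    """v4.1.0+ behavior model: retry a failed target-channel update
    internally instead of surfacing error code 8 to the caller."""
    for attempt in range(1, max_attempts + 1):
        if update_fn():
            return attempt  # number of attempts it took
    raise RuntimeError("target channel update kept failing")

# A server that fails twice before accepting the update:
responses = iter([False, False, True])
print(update_with_retry(lambda: next(responses)))  # -> 3
```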
4. Reconstructed AIAEC algorithm
This release reconstructs the AEC algorithm based on the AI method. Compared with the traditional AEC algorithm, the new algorithm can preserve the complete, clear, and smooth near-end vocals under poor echo-to-signal conditions, significantly improving the system's echo cancellation and dual-talk performance. This gives users a more comfortable call and live-broadcast experience. AIAEC is suitable for conference calls, chats, karaoke, and other scenarios.
5. Virtual background
This release optimizes the virtual background algorithm. Improvements include the following:
- The boundaries of virtual backgrounds are handled in a more nuanced way, and image matting is now extremely fine.
- The stability of the virtual background is improved whether the portrait is still or moving, effectively eliminating background flickering and the background exceeding the bounds of the picture.
- More application scenarios are now supported, and a user obtains a good virtual background effect day or night, indoors or out.
- A larger variety of postures is now recognized: when half the body is motionless, the body is shaking, the hands are swinging, or there is fine finger movement, a good virtual background effect can still be achieved in conjunction with many different gestures.
This release includes the following additional improvements:
- Reduces the latency when pushing external audio sources.
- Improves the performance of echo cancellation in specific audio scenarios.
- Improves the smoothness of SDK video rendering.
- Enhances the ability to identify different network protocol stacks and improves the SDK's access capabilities in multiple-operator network scenarios.
This release fixed the following issues:
- Audience members heard buzzing noises when the host switched between speakers and earphones during live streaming.
- Calling
getExtensionProperty failed and returned an empty string.
- When entering, as an audience member, a live streaming room that had been playing for a long time, the time for the first frame to be rendered was shortened.
- Removes the deprecated callback
onApiCallExecuted. Use the callbacks triggered by specific methods instead.
- Removes deprecated member parameters.
v4.0.1 was released on September 29, 2022.
This release deletes the
sourceType parameter in
enableDualStreamMode [3/3] and
enableDualStreamModeEx, as well as the
enableDualStreamMode [2/3] method. Because the SDK now supports enabling dual-stream mode for video sources captured either by the SDK or by custom capture, you no longer need to specify the video source type.
1. In-ear monitoring
This release adds the
getEarMonitoringAudioParams callback to set the audio data format of in-ear monitoring. You can use your own audio effect processing module to pre-process the audio frame data of in-ear monitoring to implement custom audio effects. After calling
registerAudioFrameObserver to register the audio observer, set the audio data format in the return value of the
getEarMonitoringAudioParams callback. The SDK calculates the sampling interval based on the return value of the callback, and triggers the
onEarMonitoringAudioFrame callback based on the sampling interval.
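The sampling interval mentioned above follows directly from the returned audio format; roughly (a simplified model, not SDK internals):

```python
def sampling_interval(samples_per_call, sample_rate):
    """Seconds between onEarMonitoringAudioFrame callbacks for a
    format delivering samples_per_call samples at sample_rate Hz
    (a simplified model of the calculation described above)."""
    return samples_per_call / sample_rate

# 480 samples per callback at 48 kHz -> one callback every 10 ms:
print(sampling_interval(480, 48000))  # -> 0.01
```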
2. Audio capture device test
This release adds support for testing local audio capture devices before joining a channel. You can call
startRecordingDeviceTest to start the audio capture device test. After the test is complete, call the
stopRecordingDeviceTest method to stop the audio capture device test.
3. Local network connection types
To make it easier for users to know the connection type of the local network at any stage, this release adds the
getNetworkType method. You can use this method to get the type of network connection in use, including UNKNOWN, DISCONNECTED, LAN, WIFI, 2G, 3G, 4G, 5G. When the local network connection type changes, the SDK triggers the
onNetworkTypeChanged callback to report the current network connection type.
4. Audio stream filter
This release introduces filtering audio streams based on volume. Once this function is enabled, the Agora server ranks all audio streams by volume and transports 3 audio streams with the highest volumes to the receivers by default. The number of audio streams to be transported can be adjusted; you can contact firstname.lastname@example.org to adjust this number according to your scenarios.
Meanwhile, Agora allows publishers to choose whether or not the audio streams they publish are filtered based on volume. Streams that are not filtered bypass this filter mechanism and are transported directly to the receivers. In scenarios with many publishers, enabling this function helps reduce the bandwidth and device system pressure for the receivers.
To enable this function, contact email@example.com.
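The filtering described above can be modeled in a few lines (an illustrative sketch; the names and tuple layout are hypothetical, and the top-3 default comes from the text above):

```python
def select_streams(streams, top_n=3):
    """streams: list of (publisher_id, volume, filtered) tuples.
    Unfiltered streams always pass; the rest are ranked by volume
    and only the top_n loudest are forwarded to receivers."""
    bypass = [s for s in streams if not s[2]]
    ranked = sorted((s for s in streams if s[2]),
                    key=lambda s: s[1], reverse=True)
    return bypass + ranked[:top_n]

streams = [("a", 10, True), ("b", 80, True), ("c", 55, True),
           ("d", 70, True), ("e", 5, False)]
# "e" bypasses the filter; the 3 loudest filtered streams are kept.
print([s[0] for s in select_streams(streams)])  # -> ['e', 'b', 'd', 'c']
```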
5. Dual-stream mode
This release optimizes the dual-stream mode: you can call
enableDualStreamModeEx both before and after joining a channel.
The implementation of subscribing to the low-quality video stream is expanded. The SDK enables the low-quality video stream auto mode on the sender by default (the SDK does not send low-quality video streams); you can follow these steps to enable sending low-quality video streams:
- The host at the receiving end calls
setRemoteDefaultVideoStreamType to initiate a low-quality video stream request.
- After receiving the request, the sender automatically switches to sending the low-quality video stream.
If you want to modify the default behavior above, you can call
setDualStreamMode [1/2] or
setDualStreamMode [2/2] and set the
mode parameter to
DISABLE_SIMULCAST_STREAM (never send low-quality video streams) or
ENABLE_SIMULCAST_STREAM (always send low-quality video streams).
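The sender-side behavior under the three modes can be summarized in a small sketch (illustrative Python; the constants are stand-ins for the enumerations above):

```python
# Illustrative constants mirroring the modes described above.
AUTO_SIMULCAST_STREAM = "auto"
DISABLE_SIMULCAST_STREAM = "disable"
ENABLE_SIMULCAST_STREAM = "enable"

def sends_low_quality_stream(mode, receiver_requested):
    """Whether the sender publishes the low-quality video stream."""
    if mode == DISABLE_SIMULCAST_STREAM:
        return False  # never send the low-quality stream
    if mode == ENABLE_SIMULCAST_STREAM:
        return True   # always send the low-quality stream
    # Default auto mode: send only after a receiver requests it.
    return receiver_requested

print(sends_low_quality_stream(AUTO_SIMULCAST_STREAM, False))  # -> False
print(sends_low_quality_stream(AUTO_SIMULCAST_STREAM, True))   # -> True
```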
1. Video information change callback
This release optimizes the trigger logic of
onVideoSizeChanged, which can now also be triggered to report the local video size change when
startPreview is called separately.
This release fixed the following issues.
- When calling
setVideoEncoderConfigurationEx in the channel to increase the resolution of the video, it occasionally failed.
- In online meeting scenarios, the local user and the remote user might not hear each other after the local user is interrupted by a call.
- After calling
setCloudProxy to set the cloud proxy, calling
joinChannelEx to join multiple channels failed.
- When using the Agora media player to play videos, after you played and paused the video, and then called the seek method to specify a new playback position, the video image occasionally remained unchanged; if you called the resume method to resume playback, the video was occasionally played at a faster speed than the original.
v4.0.0 was released on September 15, 2022.
1. Integration change
This release has optimized the implementation of some features, resulting in incompatibility with v3.7.x. The following are the main features with compatibility changes:
- Multiple channel
- Media stream publishing control
- Custom video capture and rendering (Media IO)
- Warning codes
After upgrading the SDK, you need to update the code in your app according to your business scenarios. For details, see Migrate from v3.7.x to v4.0.0.
2. Callback exception handling
To facilitate troubleshooting, as of this release, the SDK no longer catches exceptions that are thrown by your own code implementation when triggering callbacks in the
IRtcEngineEventHandler class. You need to catch and handle the exceptions yourself; otherwise, it can cause a crash.
1. Multiple media tracks
This release supports one
RtcEngine instance capturing multiple audio and video sources at the same time and publishing them to remote users by setting
ChannelMediaOptions. After calling
joinChannel to join the first channel, call
joinChannelEx multiple times to join multiple channels, and publish the specified stream to different channels through different user IDs.
In addition, this release adds the
createCustomVideoTrack method to implement custom video capture. You can refer to the following steps to publish multiple custom captured videos in the channel:
- Create a custom video track: Call this method to create a video track, and get the video track ID.
- Set the custom video track to be published in the channel: In each channel's
ChannelMediaOptions, set the
customVideoTrackId parameter to the ID of the video track you want to publish, and enable publishing of the custom video track.
- Pushing an external video source: Call
pushVideoFrame, and specify
videoTrackId as the ID of the custom video track in step 2 in order to publish the corresponding custom video source in multiple channels.
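The steps above can be modeled as a small track registry (an illustrative sketch, not SDK code; class and method names are hypothetical):

```python
# Toy model of the custom-track flow: tracks are created for an ID,
# each channel selects a track ID to publish, and pushed frames are
# routed to every channel publishing that track.
class CustomVideoTracks:
    def __init__(self):
        self._next_id = 1
        self.channel_track = {}  # channel name -> published track ID

    def create_track(self):
        """Step 1: create a track and return its ID."""
        track_id, self._next_id = self._next_id, self._next_id + 1
        return track_id

    def publish(self, channel, track_id):
        """Step 2: set the track to publish in a channel's options."""
        self.channel_track[channel] = track_id

    def push_frame(self, track_id, frame):
        """Step 3: route a pushed frame; returns the channels reached
        (only routing is modeled, the frame itself is not processed)."""
        return [ch for ch, t in self.channel_track.items() if t == track_id]

tracks = CustomVideoTracks()
tid = tracks.create_track()
tracks.publish("room-a", tid)
tracks.publish("room-b", tid)
print(tracks.push_frame(tid, b"frame"))  # -> ['room-a', 'room-b']
```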
You can also experience the following features with the multi-channel capability:
- Publish multiple sets of audio and video streams to remote users through different user IDs.
- Mix multiple audio streams and publish them to remote users through a single user ID.
- Combine multiple video streams and publish them to remote users through a single user ID.
2. Ultra HD resolution (Beta)
In order to improve the interactive video experience, the SDK optimizes the whole process of video capture, encoding, decoding and rendering, and now supports 4K resolution. The improved FEC (Forward Error Correction) algorithm enables adaptive switches according to the frame rate and number of video frame packets, which further reduces the video stuttering rate in 4K scenes.
Additionally, you can set the encoding resolution to 4K (3840 × 2160) and the frame rate to 60 fps when calling
setVideoEncoderConfiguration. The SDK supports automatic fallback to the appropriate resolution and frame rate if your device does not support 4K.
This feature has certain requirements with regards to device performance and network bandwidth, and the supported upstream and downstream frame rates vary on different platforms. To enable this feature, contact firstname.lastname@example.org.
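The automatic fallback can be pictured as a simple clamp (a hypothetical model; the actual SDK strategy may differ):

```python
def fallback(requested, device_max):
    """Clamp a requested (width, height, fps) to what the device
    supports, preserving the aspect ratio (a simplified model of
    the automatic fallback described above)."""
    w, h, fps = requested
    max_w, max_h, max_fps = device_max
    scale = min(1.0, max_w / w, max_h / h)  # never upscale
    return (int(w * scale), int(h * scale), min(fps, max_fps))

# Requesting 4K at 60 fps on a device that tops out at 1080p/30:
print(fallback((3840, 2160, 60), (1920, 1080, 30)))  # -> (1920, 1080, 30)
```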
3. Agora media player
To make it easier for users to integrate the Agora SDK and reduce the SDK's package size, this release introduces the Agora media player. After calling the
createMediaPlayer method to create a media player object, you can then call the methods in the
IMediaPlayer class to experience a series of functions, such as playing local and online media files, preloading a media file, changing the CDN route for playing according to your network conditions, or sharing the audio and video streams being played with remote users.
4. Ultra-high audio quality
To make the audio clearer and restore more details, this release adds the
ULTRA_HIGH_QUALITY_VOICE enumeration. In scenarios that mainly feature the human voice, such as chat or singing, you can call
setVoiceBeautifierPreset and use this enumeration to experience ultra-high audio quality.
5. Spatial audio
This feature is experimental. To enable this feature, contact email@example.com; contact technical support if needed.
You can set the spatial audio for the remote user as follows:
- Local Cartesian Coordinate System Calculation: This solution uses the
ILocalSpatialAudioEngine class to implement spatial audio by calculating the spatial coordinates of the remote user. You need to call
updateRemotePosition to update the spatial coordinates of the local and remote users, respectively, so that the local user can hear the spatial audio effect of the remote user.
You can also set the spatial audio for the media player as follows:
- Local Cartesian Coordinate System Calculation: This solution uses the
ILocalSpatialAudioEngine class to implement spatial audio. You need to call
updatePlayerPositionInfo to update the spatial coordinates of the local user and the media player, respectively, so that the local user can hear the spatial audio effect of the media player.
6. Real-time chorus
This release gives real-time chorus the following abilities:
- Choruses of two or more singers are supported.
- Singers are independent of one another; if one singer fails or quits the chorus, the other singers can continue to sing.
- A very low latency experience: each singer can hear the others in real time, and the audience can hear each singer in real time.
This release adds the
AUDIO_SCENARIO_CHORUS enumeration. With this enumeration, users can experience ultra-low latency in real-time chorus when the network conditions are good.
7. Extensions from the Agora extensions marketplace
In order to enhance real-time audio and video interactive activities based on the Agora SDK, this release supports a one-stop solution for extensions from the Agora extensions marketplace:
- Easy to integrate: The integration of modular functions can be achieved simply by calling an API, and the integration efficiency is improved by nearly 95%.
- Extensibility design: The modular and extensible SDK design style endows the Agora SDK with good extensibility, which enables developers to quickly build real-time interactive apps based on the Agora extensions marketplace ecosystem.
- Build an ecosystem: A community of real-time audio and video apps has developed that can accommodate a wide range of developers, offering a variety of extension combinations. After integrating the extensions, developers can build richer real-time interactive functions. For details, see Use an Extension.
- Become a vendor: Vendors can integrate their products with Agora SDK in the form of extensions, display and publish them in the Agora extensions marketplace, and build a real-time interactive ecosystem for developers together with Agora. For details on how to develop and publish extensions, see Become a Vendor.
8. Enhanced channel management
To meet the channel management requirements of various business scenarios, this release adds the following functions to ChannelMediaOptions:
- Sets or switches the publishing of multiple audio and video sources.
- Sets or switches channel profile and user role.
- Sets or switches the stream type of the subscribed video.
- Controls audio publishing delay.
Set ChannelMediaOptions when calling
joinChannelEx to specify the publishing and subscription behavior of a media stream, for example, whether to publish video streams captured by cameras or screen sharing, and whether to subscribe to the audio and video streams of remote users. After joining the channel, call
updateChannelMediaOptions to update the settings in
ChannelMediaOptions at any time, for example, to switch the published audio and video sources.
9. Screen sharing
This release optimizes the screen sharing function. You can enable this function in the following ways.
- Call the
startScreenCapture method before joining a channel, and then call
joinChannel[2/2] to join a channel and set the screen-sharing options in ChannelMediaOptions to publish the screen-sharing stream.
- Call the
startScreenCapture method after joining a channel, and then call updateChannelMediaOptions to publish the screen-sharing stream.
10. Subscription allowlists and blocklists
This release introduces subscription allowlists and blocklists for remote audio and video streams. You can add a user ID that you want to subscribe to in your allowlist, or add a user ID for the streams you do not wish to see to your blocklist. You can experience this feature through the following APIs; in scenarios that involve multiple channels, you can call the Ex versions of the following methods:
- setSubscribeAudioBlacklist: Sets the audio subscription blocklist.
- setSubscribeAudioWhitelist: Sets the audio subscription allowlist.
- setSubscribeVideoBlacklist: Sets the video subscription blocklist.
- setSubscribeVideoWhitelist: Sets the video subscription allowlist.
If a user is added to both a blocklist and an allowlist at the same time, only the blocklist takes effect.
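The precedence rule can be stated in a few lines (an illustrative sketch, not SDK code; the empty-allowlist behavior is an assumption):

```python
def is_subscribed(uid, allowlist, blocklist):
    """Blocklist takes precedence when a user appears in both."""
    if uid in blocklist:
        return False
    if allowlist:  # a non-empty allowlist restricts subscriptions
        return uid in allowlist
    return True    # no lists configured: subscribe as usual

# A user in both lists is not subscribed to:
print(is_subscribed(42, allowlist={42}, blocklist={42}))  # -> False
```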
11. Set audio scenarios
To make it easier to change audio scenarios, this release adds the
setAudioScenario method. For example, if you want to switch to the
AUDIO_SCENARIO_GAME_STREAMING audio scenario while you are in a channel, you can call this method.
1. Fast channel switching
This release achieves the same channel-switching speed as
switchChannel in v3.7.x through the
joinChannel methods, so you no longer need to call a separate channel-switching method.
2. Push external video frames
This release supports pushing video frames in I422 format. You can call the
pushExternalVideoFrame [1/2] method to push such video frames to the SDK.
3. Voice pitch of the local user
This release adds voice pitch information to
onAudioVolumeIndication. You can use the
voicePitch value to get the local user's voice pitch and implement business functions such as rating for singing.
4. Device permission management
This release adds the
onPermissionError callback, which is automatically triggered when the audio capture device or the camera fails to obtain the appropriate permission. You can enable the corresponding device permission according to the prompt of the callback.
5. Video preview
This release improves the implementation logic of
startPreview. You can call the
startPreview method to enable video preview at any time.
6. Video types of subscription
You can call the
setRemoteDefaultVideoStreamType method to choose the video stream type when subscribing to streams.
- After you enable Notifications, your server receives the events that you subscribe to in the form of HTTPS requests.
- To improve communication security between Notifications and your server, Agora SD-RTN™ uses signatures for identity verification.
- As of this release, you can use Notifications in conjunction with this product.