
Release notes

This page provides the release notes for the following:

Video SDK

If your target platform is Android 12 or higher, add the android.permission.BLUETOOTH_CONNECT permission to the AndroidManifest.xml file of the Android project to enable the Bluetooth function of the Android system.
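For reference, the declaration might look like this (a minimal manifest sketch; place it inside the `<manifest>` element of AndroidManifest.xml alongside your other permission entries):

```xml
<!-- Needed on Android 12+ so the SDK can use system Bluetooth audio routing -->
<uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />
```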

Known issues

This list of known issues is continuously updated as the systems evolve. Agora suggests you regularly upgrade to the latest version of the SDK, which includes new features, bug fixes, and improvements.

  • Android SDK v4.2.3

    Android 14 screen sharing issue

    On Android 14 devices, such as the OnePlus 11, screen sharing may not work properly when targetSdkVersion is set to 34. For example, half of the shared screen may appear black. To avoid this issue, Agora recommends setting targetSdkVersion to 33 or below. However, this may cause the screen-sharing process to be interrupted when switching between portrait and landscape mode: a window pops up on the device asking whether to start recording the screen, and screen sharing resumes after you confirm.
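A sketch of the corresponding Gradle setting (module-level build.gradle; values are illustrative):

```groovy
android {
    defaultConfig {
        // Keeping targetSdkVersion at 33 or lower works around the Android 14
        // screen-sharing issue described above; revisit once the issue is resolved.
        targetSdkVersion 33
    }
}
```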

v4.5.0

v4.5.0 was released on November 27, 2024.

Compatibility changes

This version includes optimizations to some features, including changes to SDK behavior and API renaming and deletion. To ensure normal operation of the project, update the code in the app after upgrading to this release.

Note
As of v4.5.0, both Video SDK and Signaling SDK (v2.2.0 and above) include the libaosl.so library. If you manually integrate Video SDK via CDN and also use Signaling SDK, delete the earlier version of the libaosl.so library to avoid conflicts. The libaosl.so library version in Video SDK v4.5.0 is 1.2.13.
  1. Changes in strong video noise suppression

    The VIDEO_DENOISER_LEVEL_STRENGTH enumeration is removed.

    Instead, after enabling video noise suppression by calling setVideoDenoiserOptions [1/2], you can call the setBeautyEffectOptions [1/2] method to enable the beauty skin smoothing feature. Using both together will help achieve better video noise suppression effects. For strong noise suppression, it is recommended to set the skin smoothing parameters as detailed in setVideoDenoiserOptions [1/2].

    Additionally, due to this adjustment, to achieve the best low-light enhancement effect with a focus on image quality, you need to enable video noise suppression first and use specific settings as detailed in setLowlightEnhanceOptions [1/2].
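A minimal sketch of combining the two calls, assuming an initialized RtcEngine instance named engine (the option values here are illustrative, not the documented strong-denoising settings; consult the API reference for those):

```java
// Enable video noise suppression first.
VideoDenoiserOptions denoiser = new VideoDenoiserOptions();
engine.setVideoDenoiserOptions(true, denoiser);

// With VIDEO_DENOISER_LEVEL_STRENGTH removed, approximate strong suppression
// by also enabling beauty skin smoothing.
BeautyOptions beauty = new BeautyOptions();
beauty.smoothnessLevel = 0.5f; // illustrative value; see the API reference
engine.setBeautyEffectOptions(true, beauty);
```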

  2. Changes in video encoding preferences

    To enhance the user’s video interaction experience, this version optimizes the default preferences for video encoding:

    • In the COMPRESSION_PREFERENCE enumeration class, a new PREFER_COMPRESSION_AUTO (-1) enumeration is added, replacing the original PREFER_QUALITY (1) as the default value. In this mode, the SDK will automatically choose between PREFER_LOW_LATENCY or PREFER_QUALITY based on your video scene settings to achieve the best user experience.
    • In the DEGRADATION_PREFERENCE enumeration class, a new MAINTAIN_AUTO (-1) enumeration is added, replacing the original MAINTAIN_QUALITY (1) as the default value. In this mode, the SDK will automatically choose between MAINTAIN_FRAMERATE, MAINTAIN_BALANCED, or MAINTAIN_RESOLUTION based on your video scene settings to achieve the optimal overall quality of experience (QoE).
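The new automatic defaults can be left as-is, or set explicitly; a minimal sketch, assuming an initialized RtcEngine named engine (field locations follow the Android SDK but may vary by version):

```java
// Sketch: set the new automatic preferences explicitly on the encoder config.
VideoEncoderConfiguration config = new VideoEncoderConfiguration();
config.advanceOptions.compressionPreference =
        VideoEncoderConfiguration.COMPRESSION_PREFERENCE.PREFER_COMPRESSION_AUTO;
config.degradationPrefer =
        VideoEncoderConfiguration.DEGRADATION_PREFERENCE.MAINTAIN_AUTO;
engine.setVideoEncoderConfiguration(config);
```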
  3. 16 KB memory page size

    Starting from Android 15, the system supports a 16 KB memory page size, as detailed in Support 16 KB page sizes. To ensure app stability and performance, the SDK supports a 16 KB memory page size starting from this version, running seamlessly on devices with either 4 KB or 16 KB memory page sizes, which enhances compatibility and prevents crashes.

New features

  1. Live show scenario

    This version adds the APPLICATION_SCENARIO_LIVESHOW(3) (Live Show) enumeration to VideoScenario. You can call setVideoScenario to set the video business scenario to live show. In this scenario, fast video rendering and high image quality are crucial, so the SDK applies several performance optimizations: for example, it automatically enables accelerated audio and video frame rendering to minimize first-frame latency (no separate call to enableInstantMediaRendering is needed), delivering better image quality and bandwidth efficiency.
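A minimal sketch, assuming an initialized RtcEngine named engine (the exact location of the enumeration may differ by SDK version):

```java
// Mark the session as a live-show scenario before joining the channel so the
// SDK can apply its first-frame and image-quality optimizations.
engine.setVideoScenario(Constants.VideoScenario.APPLICATION_SCENARIO_LIVESHOW);
```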

  2. Maximum frame rate for video rendering

    This version adds the setLocalRenderTargetFps and setRemoteRenderTargetFps methods, which support setting the maximum frame rate for video rendering locally and remotely. The actual frame rate for video rendering by the SDK will be as close to this value as possible.

    In scenarios where the frame rate requirement for video rendering is not high (for example, screen sharing, online education) or when the remote end uses mid-to-low-end devices, you can use this set of methods to limit the video rendering frame rate, thereby reducing CPU consumption and improving system performance.
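For example, a screen-sharing view could be capped at 15 fps to save CPU; a hedged sketch, assuming an initialized RtcEngine named engine (method signatures are inferred from the names above; check the API reference):

```java
// Cap local rendering of the shared screen, and remote rendering, at 15 fps.
engine.setLocalRenderTargetFps(Constants.VideoSourceType.VIDEO_SOURCE_SCREEN_PRIMARY, 15);
engine.setRemoteRenderTargetFps(15);
```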

  3. Filter effects

    This version introduces the setFilterEffectOptions [1/2] method. You can pass a cube map file (.cube) in the config parameter to apply custom filter effects such as whitening, vivid, cool, black and white, and others. Additionally, the SDK provides a built-in built_in_whiten_filter.cube file to quickly apply a whitening filter effect.
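A hedged sketch of applying a custom filter, assuming an initialized RtcEngine named engine (the option field names and the file path are assumptions; consult the API reference):

```java
// Enable a custom filter effect from a cube map (.cube) file.
FilterEffectOptions options = new FilterEffectOptions();
options.path = "/assets/custom_filter.cube"; // hypothetical path to your cube map file
options.strength = 0.7f;                     // hypothetical strength field, 0.0-1.0
engine.setFilterEffectOptions(true, options);
```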

  4. Local audio mixing

    This version introduces the local audio mixing feature. You can call the startLocalAudioMixer method to mix the audio streams from the local microphone, media player, sound card, and remote audio streams into a single audio stream, which can then be published to the channel. When you no longer need audio mixing, you can call the stopLocalAudioMixer method to stop local audio mixing. During the mixing process, you can call the updateLocalAudioMixerConfiguration method to update the configuration of the audio streams being mixed.

    Example use cases for this feature include:

    • When using the local video mixing feature, the associated audio streams of the mixed video streams can be simultaneously captured and published.
    • In live streaming scenarios, users can receive audio streams within the channel, mix multiple audio streams locally, and then forward the mixed audio stream to other channels.
    • In educational scenarios, teachers can mix the audio from interactions with students locally and then forward the mixed audio stream to other channels.
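The start/update/stop lifecycle described above can be sketched as follows, assuming an initialized RtcEngine named engine (the configuration fields are assumptions; see the API reference for your version):

```java
// Mix the local microphone and other local sources into one publishable track.
LocalAudioMixerConfiguration mixerConfig = new LocalAudioMixerConfiguration();
// ... populate mixerConfig with the audio sources to mix (field names vary).
engine.startLocalAudioMixer(mixerConfig);

// During mixing, adjust the set of sources being mixed as needed:
engine.updateLocalAudioMixerConfiguration(mixerConfig);

// Stop when audio mixing is no longer needed:
engine.stopLocalAudioMixer();
```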
  5. External MediaProjection

    This version introduces the setExternalMediaProjection method, which allows you to set an external MediaProjection and replace the MediaProjection applied by the SDK.

    If you have the capability to apply for MediaProjection on your own, you can use this feature to achieve more flexible screen capture.

  6. EGL context

    This version introduces the setExternalRemoteEglContext method, which is used to set the EGL context for rendering remote video streams. When using the Texture format video data for remote video self-rendering, you can use this method to replace the SDK's default remote EGL context, resulting in unified EGL context management.

  7. Color space settings

    This version adds getColorSpace and setColorSpace to VideoFrame. You can use getColorSpace to obtain the color space properties of the video frame and use setColorSpace to customize the settings. By default, the color space uses Full Range and BT.709 standard configuration. Developers can flexibly adjust according to their own capture or rendering needs, further enhancing the customization capabilities of video processing.

Improvements

  1. Virtual background algorithm optimization

    This version upgrades the virtual background algorithm, making the segmentation between the portrait and the background more accurate: there is no background exposure, the body contour of the portrait is complete, and finger details are recognized significantly better. Additionally, the edges between the portrait and the background are more stable, reducing edge jumping and flickering across consecutive video frames.

  2. Snapshot at specified video observation points

    This version introduces the takeSnapshot [2/2] and takeSnapshotEx [2/2] methods. You can use the config parameter when calling these methods to take snapshots at specified video observation points, such as before encoding, after encoding, or before rendering, to achieve more flexible snapshot effects.

  3. Custom audio capture improvements

    This version adds the enableAudioProcessing member to AudioTrackConfig, which controls whether to enable 3A audio processing for custom audio capture tracks of the AUDIO_TRACK_DIRECT type. The default value is false, meaning audio processing is not enabled. Users can enable it as needed, enhancing the flexibility of custom audio processing.
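A minimal sketch, assuming an initialized RtcEngine named engine (the createCustomAudioTrack signature shown is an assumption):

```java
// Enable 3A processing for a direct custom audio capture track.
AudioTrackConfig trackConfig = new AudioTrackConfig();
trackConfig.enableAudioProcessing = true; // default is false
int trackId = engine.createCustomAudioTrack(
        Constants.AudioTrackType.AUDIO_TRACK_DIRECT, trackConfig);
```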

  4. Other improvements

    • In scenarios where Alpha transparency effects are achieved by stitching video frames and Alpha data, the rendering performance on the receiving end has been improved, effectively reducing stuttering and latency.
    • This version optimizes the logic for calling queryDeviceScore to obtain device score levels, improving the accuracy of the score results.
    • When calling switchSrc to switch between live streams or on-demand streams of different resolutions, smooth and seamless switching can be achieved. An automatic retry mechanism has been added in case of switching failures. The SDK will automatically retry 3 times after a failure. If it still fails, the onPlayerEvent callback will report the PLAYER_EVENT_SWITCH_ERROR event, indicating that an error has occurred during media resource switching.
    • When calling setPlaybackSpeed to set the playback speed of an audio file, the minimum supported speed is 0.3x.

Bug fixes

This version fixes the following issues:

  • When the video source of the sender is in the JPEG format, the frame rate on the receiving end occasionally falls below expectations.
  • Occasional noise and stuttering when playing music resources from the music content center.
  • During audio and video interaction, after being interrupted by a system call, the user volume reported by the onAudioVolumeIndication callback was incorrect.
  • When the receiving end subscribes to the low-quality video stream by default and does not automatically subscribe to any video stream when joining the channel, calling muteRemoteVideoStream(uid, false) after joining the channel to resume receiving the video stream results in receiving the high-quality stream.
  • Calling startAudioMixing [1/2] and then immediately calling pauseAudioMixing to pause the music file playback does not take effect.
  • Occasional crashes during audio and video interaction.

v4.4.1

v4.4.1 was released on August 8, 2024.

Issues fixed

This release fixes the issue where io.agora.rtc:full-rtc-basic:4.4.0 and io.agora.rtc:voice-rtc-basic:4.4.0 were not working properly on Maven Central due to an upload error.

v4.4.0

v4.4.0 was released on August 5, 2024.

Compatibility changes

This version includes optimizations to some features, including changes to the SDK behavior and API renaming and deletion. To ensure normal operation of the project, update the code in the app after upgrading to this release.

Note
Starting from v4.4.0, the SDK provides an API sunset notice, which includes information about deprecated and removed APIs in each version. See API Sunset Notice.
  1. To distinguish context information in different extension callbacks, this version removes the original extension callbacks and adds new corresponding callbacks that contain context information (see table below). You can identify the extension name, the user ID, and the service provider name through ExtensionContext in each callback.

    Original callback    New callback
    onEvent              onEventWithContext
    onStarted            onStartedWithContext
    onStopped            onStoppedWithContext
    onError              onErrorWithContext
  2. This version removes the buffer, uid, and timeStampMs parameters of the onMetadataReceived callback. You can get metadata-related information, including timeStampMs (timestamp of the sent data), uid (user ID), and channelId (channel name) through the newly-added metadata parameter.

New features

  1. Lite SDK

    Starting from this version, Agora introduces the Lite SDK, which includes only the basic audio and video capabilities and omits some advanced features, effectively reducing the app size after integrating the SDK.

  2. Alpha transparency effects

    This version introduces the Alpha transparency effects feature, supporting the transmission and rendering of Alpha channel data in video frames for SDK capture and custom capture scenarios, enabling transparent gift effects, custom backgrounds on the receiver end, and so on:

    • VideoFrame and AgoraVideoFrame add the alphaBuffer member, which sets the Alpha channel data.
    • AgoraVideoFrame adds the fillAlphaBuffer member. For BGRA or RGBA formatted video data, it sets whether to automatically extract the Alpha channel data and fill it into alphaBuffer.
    • VideoFrame and AgoraVideoFrame add the alphaStitchMode member, which sets the relative position of alphaBuffer and video frame stitching.

    Additionally, AdvanceOptions adds a new member encodeAlpha, which is used to set whether to encode and send Alpha information to the remote end. By default, the SDK does not encode and send Alpha information; if you need to encode and send Alpha information to the remote end (for example, when virtual background is enabled), explicitly call setVideoEncoderConfiguration to set the video encoding properties and set encodeAlpha to true.
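A minimal sketch of opting in to Alpha encoding, assuming an initialized RtcEngine named engine:

```java
// Opt in to encoding and sending Alpha channel data to the remote end
// (required, for example, when virtual background is enabled).
VideoEncoderConfiguration config = new VideoEncoderConfiguration();
config.advanceOptions.encodeAlpha = true;
engine.setVideoEncoderConfiguration(config);
```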

  3. Voice AI tuner

    This version introduces the voice AI tuner feature, which can enhance the sound quality and tone, similar to a physical sound card. You can enable the voice AI tuner feature by calling the enableVoiceAITuner method and passing in the sound effect types supported in the VOICE_AI_TUNER_TYPE enum to achieve effects like deep voice, cute voice, husky singing voice, and so on.
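A hedged sketch, assuming an initialized RtcEngine named engine (the specific VOICE_AI_TUNER_TYPE constant below is a hypothetical name; see the enum reference for the actual values):

```java
// Enable the voice AI tuner with a deep-voice style effect.
engine.enableVoiceAITuner(true,
        Constants.VOICE_AI_TUNER_TYPE.VOICE_AI_TUNER_DEEP_VOICE); // hypothetical constant
```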

Improvements

  1. Adaptive hardware decoding support

    This release introduces adaptive hardware decoding support, enhancing rendering smoothness on low-end devices and effectively reducing system load.

  2. Facial region beautification

    To avoid losing details in non-facial areas during heavy skin smoothing, this version improves the skin smoothing algorithm. The SDK now recognizes various parts of the face, applying smoothing to facial skin areas excluding the mouth, eyes, and eyebrows. In addition, the SDK supports smoothing up to two faces simultaneously.

  3. Other improvements

    This version also includes the following improvements:

    • Optimizes the parameter types of the following APIs. These improvements enhance code readability, reduce potential errors, and facilitate future maintenance.
      • Deprecates the option parameter of type int in setRemoteSubscribeFallbackOption [1/2], and adds an overloaded function setRemoteSubscribeFallbackOption [2/2] with the option parameter of type StreamFallbackOptions.
      • Deprecates the streamType parameter of type int in setRemoteVideoStreamType [1/2], setRemoteDefaultVideoStreamType [1/2], and setRemoteVideoStreamTypeEx [1/2], and adds overloaded functions setRemoteVideoStreamType [2/2], setRemoteDefaultVideoStreamType [2/2], and setRemoteVideoStreamTypeEx [2/2] with the streamType parameter of type VideoStreamType.
    • Optimizes the transmission strategy: Calling enableInstantMediaRendering no longer impacts the security of the transmission link.
    • Deprecates redundant enumerations CLIENT_ROLE_CHANGE_FAILED_REQUEST_TIME_OUT and CLIENT_ROLE_CHANGE_FAILED_CONNECTION_FAILED.

Issues fixed

This release fixes the following issue:

  • Audio playback failed when pushing external audio data using pushExternalAudioFrame with the sample rate set to a non-recommended value, such as 22050 Hz or 11025 Hz.

v4.3.2

v4.3.2 was released on June 4, 2024.

Improvements

  1. This release enhances the usability of the setRemoteSubscribeFallbackOption method by removing the timing requirements for invocation. It can now be called both before and after joining the channel to dynamically switch audio and video stream fallback options in weak network conditions.

  2. The Agora media player now supports playing MP4 files with an Alpha channel.

  3. The Agora media player now fully supports playing music files located in the /assets/ directory or accessed through URIs starting with content://.

Issues fixed

This version fixes the following issues:

  • Occasional video smoothness issues during audio and video interactions.
  • The app occasionally crashed when the decoded video resolution on the receiving end was an odd number.
  • The app crashed when opening the app and starting screen sharing after the first installation or system reboot.
  • Local audio capture failed after joining a channel while answering a system phone call and hanging up, preventing remote users from hearing any sound.
  • During the interaction process on certain devices (for example, Redmi Note8), after answering and hanging up a system call, local media files were played without sound and no sound was heard from the remote end (Android).
  • The app occasionally crashed when remote users left the channel.
  • The values of cameraDirection and focalLengthType returned by queryCameraFocalLengthCapability could not be read directly.

v4.3.1

v4.3.1 was released on April 29, 2024.

Compatibility changes

To ensure parameter naming consistency, this version renames channelName to channelId and optionalUid to uid in joinChannel [1/2]. Update your app's code after upgrading to this version to ensure normal project operations.

New features

  1. Wide and ultra-wide cameras

    To allow users to capture a broader field of view and more complete scene content, this release introduces support for wide and ultra-wide cameras. You can first call queryCameraFocalLengthCapability to check the device's focal length capabilities, and then call setCameraCapturerConfiguration and set cameraFocalLengthType to the supported focal length types, including wide and ultra-wide.
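The two-step flow described above can be sketched as follows, assuming an initialized RtcEngine named engine (the focal-length enum name shown is an assumption; inspect the result of the capability query before setting it):

```java
// Step 1: query which focal-length types the device's cameras support.
engine.queryCameraFocalLengthCapability();

// Step 2: request an ultra-wide rear camera if supported.
CameraCapturerConfiguration config = new CameraCapturerConfiguration(
        CameraCapturerConfiguration.CAMERA_DIRECTION.CAMERA_REAR);
config.cameraFocalLengthType =
        Constants.CameraFocalLengthType.CAMERA_FOCAL_LENGTH_ULTRA_WIDE; // assumed name
engine.setCameraCapturerConfiguration(config);
```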

  2. Multi-camera capture

    This release introduces additional functionalities for Android camera capture:

    1. Support for capturing and publishing video streams from the third and fourth cameras:
      • New enumerators VIDEO_SOURCE_CAMERA_THIRD(11) and VIDEO_SOURCE_CAMERA_FOURTH(12) are added to VideoSourceType, specifically for the third and fourth camera sources. This change allows you to specify up to four camera streams when initiating camera capture by calling startCameraCapture.
      • New parameters publishThirdCameraTrack and publishFourthCameraTrack are added to ChannelMediaOptions. Set these parameters to true when joining a channel with joinChannel [2/2] to publish video streams captured from the third and fourth cameras.
    2. Support for specifying cameras by camera ID:
      • A new parameter cameraId is added to CameraCapturerConfiguration. For devices with multiple cameras, where cameraDirection cannot identify or access all available cameras, you can obtain the camera ID through Android's native system APIs and specify the desired camera by calling startCameraCapture with the specific cameraId.
      • A new method switchCamera [2/2] supports switching cameras by cameraId, allowing apps to dynamically adjust camera usage during runtime based on available cameras.
  3. Data stream encryption

    This version adds datastreamEncryptionEnabled to EncryptionConfig for enabling data stream encryption. You can set this when you activate encryption with enableEncryption. If there are issues causing failures in data stream encryption or decryption, these can be identified by the newly added ENCRYPTION_ERROR_DATASTREAM_DECRYPTION_FAILURE and ENCRYPTION_ERROR_DATASTREAM_ENCRYPTION_FAILURE enumerations.
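A minimal sketch of activating both media and data stream encryption, assuming an initialized RtcEngine named engine (key and salt values are placeholders):

```java
// Enable media-stream encryption together with data-stream encryption.
EncryptionConfig config = new EncryptionConfig();
config.encryptionMode = EncryptionConfig.EncryptionMode.AES_128_GCM2;
config.encryptionKey = "<your-encryption-key>"; // placeholder
// config.encryptionKdfSalt = ...;              // required by GCM2 modes
config.datastreamEncryptionEnabled = true;      // new in this version
engine.enableEncryption(true, config);
```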

  4. Local video rendering

    This version adds the following members to VideoCanvas to support more local rendering capabilities:

    • surfaceTexture: Set a native Android SurfaceTexture object as the container providing video imagery, then use SDK external methods to perform OpenGL texture rendering.
    • enableAlphaMask: This member enables the receiving end to initiate Alpha mask rendering. Alpha mask rendering can create images with transparent effects or extract human figures from video content.
  5. Adaptive configuration for low-quality video streams

    This version introduces adaptive configuration for low-quality video streams. When you activate dual-stream mode and set up low-quality video streams on the sending side using setDualStreamMode, the SDK defaults to the following behaviors:

    • The default encoding resolution for low-quality video streams is set to 50% of the original video encoding resolution.
    • The bitrate for the small streams is automatically matched based on the video resolution and frame rate, eliminating the need for manual specification.
  6. Other features

    • A new method enableEncryptionEx is added for enabling media stream or data stream encryption in multi-channel scenarios.
    • A new method setAudioMixingPlaybackSpeed is introduced for setting the playback speed of audio files.
    • A new method getCallIdEx is introduced for retrieving call IDs in multi-channel scenarios.
  7. Beta features

Improvements

  1. Optimization of local video status callbacks

    This version introduces the following enumerations, allowing you to understand more about the reasons behind changes in local video status through the onLocalVideoStateChanged callback:

    • LOCAL_VIDEO_STREAM_REASON_DEVICE_INTERRUPT (14): Video capture is interrupted due to the camera being occupied by another app or the app moving to the background.
    • LOCAL_VIDEO_STREAM_REASON_DEVICE_FATAL_ERROR (15): Video capture device errors, possibly due to camera equipment failure.
  2. Camera capture improvements

    Improvements have been made to the video processing mechanism of camera capture, reducing noise, enhancing brightness, and improving color, making the captured images clearer, brighter, and more realistic.

  3. Virtual background algorithm optimization

    To enhance the accuracy and stability of human segmentation when activating virtual backgrounds against solid colors, this version optimizes the green screen segmentation algorithm:

    • Supports recognition of any solid color background, no longer limited to green screens.
    • Improves accuracy in recognizing background colors and reduces the background exposure during human segmentation.
    • After segmentation, the edges of the human figure, especially around the fingers, are more stable, significantly reducing flickering at the edges.
  4. CPU consumption reduction of in-ear monitoring

    This release adds the EAR_MONITORING_FILTER_REUSE_POST_PROCESSING_FILTER enumerator. For complex audio processing scenarios, you can specify this option to reuse, in in-ear monitoring, the audio filter applied in sender-side post-processing, thereby reducing CPU consumption. Note that this option may increase in-ear monitoring latency, making it suitable for latency-tolerant scenarios that require low CPU consumption.

  5. Other improvements

    This version also includes the following improvements:

    • Enhanced performance and stability of the local compositing feature, reducing its CPU usage.
    • Enhanced media player capabilities to handle WebM format videos, including support for rendering Alpha channels.
    • New chorus effect ROOM_ACOUSTICS_CHORUS is added to enhance the spatial presence of vocals in chorus scenarios.
    • In RemoteAudioStats, a new e2eDelay field is added to report the delay from when the audio is captured on the sending end to when the audio is played on the receiving end.

Issues fixed

This version fixes the following issues:

  • Fixed an issue where SEI data output did not synchronize with video rendering when playing media streams containing SEI data using the media player.
  • After joining a channel and calling disableAudio, audio playback did not immediately stop.
  • Broadcasters using certain models of devices under speaker mode experienced occasional local audio capture failures when switching the app process to the background and then back to the foreground, causing remote users to not hear the broadcaster's audio.
  • On devices with Android 8.0, enabling screen sharing occasionally caused the app to crash.
  • In scenarios using camera capture for local video, when the app was moved to the background and disableVideo or stopPreview was called to stop video capture, camera capture was unexpectedly activated when the app was brought back to the foreground.
  • When the network conditions of the sender deteriorated (for example, in poor network environments), the receiver occasionally experienced a decrease in video smoothness and an increase in lag.

v4.3.0

v4.3.0 was released on February 22, 2024.

Compatibility changes

This release has optimized the implementation of some functions, which involved renaming or deletion of some APIs. To ensure normal operation of the project, update the code in the app after upgrading to this release.

  1. Raw video data callback behavior change

    As of this release, the callback processing related to raw video data changes from the previous fixed single thread to a random thread, meaning that callback processing can occur on different threads. Due to limitations in the Android system, OpenGL must be tightly bound to a specific thread. Therefore, Agora suggests that you make one of the following modifications to your code:

    • (Recommended) Use the TextureBufferHelper class to create a dedicated OpenGL thread for video pre-processing or post-processing (for example, image enhancement, stickers, and so on).
    • Use the eglMakeCurrent method to associate the necessary OpenGL context for each video frame with the current thread.
  2. Renaming parameters in callbacks

    In order to make the parameters in some callbacks and the naming of enumerations in enumeration classes easier to understand, the following modifications have been made in this release. Modify the parameter settings in the callbacks after upgrading to this release.

    Callback                            Original parameter name    New parameter name
    onLocalAudioStateChanged            error                      reason
    onLocalVideoStateChanged            error                      reason
    onDirectCdnStreamingStateChanged    error                      reason
    onPlayerStateChanged                error                      reason
    onRtmpStreamingStateChanged         errCode                    reason

    Original enumeration class          New enumeration class
    DirectCdnStreamingError             DirectCdnStreamingReason
    MediaPlayerError                    MediaPlayerReason
    MusicContentCenterStatusCode        MusicContentCenterStateReason

    Note: For specific renaming of enumerations, refer to API changes.

  3. Channel media relay

    To improve interface usability, this release removes some methods and callbacks for channel media relay. Use the alternative options listed in the table below:

    Deleted methods and callbacks                          Alternative methods and callbacks
    startChannelMediaRelay, updateChannelMediaRelay        startOrUpdateChannelMediaRelay
    startChannelMediaRelayEx, updateChannelMediaRelayEx    startOrUpdateChannelMediaRelayEx
    onChannelMediaRelayEvent                               onChannelMediaRelayStateChanged
  4. Custom video source

    Since this release, pushExternalVideoFrameEx[1/2] and pushExternalVideoFrameEx[2/2] are renamed to pushExternalVideoFrameById[1/2] and pushExternalVideoFrameById[2/2], and are migrated from RtcEngineEx to RtcEngine.

  5. Audio route

    Since this release, RouteBluetooth is renamed to AUDIO_ROUTE_BLUETOOTH_DEVICE_HFP, representing a Bluetooth device using the HFP protocol. AUDIO_ROUTE_BLUETOOTH_DEVICE_A2DP(10) is added to represent a Bluetooth device using the A2DP protocol.

  6. The state of the remote video

    To make the name of the enumeration easier to understand, this release changes the name of the enumeration from REMOTE_VIDEO_STATE_PLAYING to REMOTE_VIDEO_STATE_DECODING, while the meaning of the enumeration remains unchanged.

  7. Reasons for local video state changes

    The LOCAL_VIDEO_STREAM_ERROR_ENCODE_FAILURE enumeration has been changed to LOCAL_VIDEO_STREAM_REASON_CODEC_NOT_SUPPORT.

  8. Log encryption behavior changes

    For security and performance reasons, as of this release, the SDK encrypts logs and no longer supports printing plaintext logs via the console.

    Refer to the following solutions for different needs:

    • If you need to know the API call status, please check the API logs and print the SDK callback logs yourself.
    • For any other special requirements, please contact technical support and provide the corresponding encrypted logs.
  9. Removing IAgoraEventHandler interface

    This release deletes the IAgoraEventHandler interface class. All callback events that were previously managed under this class are now processed through the IRtcEngineEventHandler interface class.

New features

  1. Custom mixed video layout on the receiving end

    To facilitate customized layout of mixed video stream at the receiver end, this release introduces the onTranscodedStreamLayoutInfo callback. When the receiver receives the channel's mixed video stream sent by the video mixing server, this callback is triggered, reporting the layout information of the mixed video stream and the layout information of each sub-video stream in the mixed stream. The receiver can set a separate view for rendering the sub-video stream (distinguished by subviewUid) in the mixed video stream when calling the setupRemoteVideo method, achieving a custom video layout effect.

    When the layout of the sub-video streams in the mixed video stream changes, this callback will also be triggered to report the latest layout information in real time.

    Through this feature, the receiver end can flexibly adjust the local view layout. When applied in a multi-person video scenario, the receiving end only needs to receive and decode a mixed video stream, which can effectively reduce the CPU usage and network bandwidth when decoding multiple video streams on the receiving end.

  2. Local preview with multiple views

    This release supports local preview with the simultaneous display of multiple views, where each view shows the video as it appears at a different observation point along the video link. Example usage:

    1. Call setupLocalVideo to set the first view: Set the position parameter to VIDEO_MODULE_POSITION_POST_CAPTURER_ORIGIN (introduced in this release) in VideoCanvas. This corresponds to the position after local video capture and before preprocessing. The video observed here does not have preprocessing effects.
    2. Call setupLocalVideo to set the second view: Set the position parameter to VIDEO_MODULE_POSITION_POST_CAPTURER in VideoCanvas, the video observed here has the effect of video preprocessing.
    3. Observe the local preview effect: The first view shows the original video of a real person; the second view shows the virtual portrait after the video preprocessing effects (image enhancement, virtual background, and local watermark preview) are applied.
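The steps above can be sketched as follows, assuming an initialized RtcEngine named engine and two existing view objects (the VideoCanvas position field location may vary by SDK version):

```java
// First view: observe the raw video before preprocessing.
VideoCanvas raw = new VideoCanvas(rawView, Constants.RENDER_MODE_HIDDEN, 0);
raw.position = Constants.VIDEO_MODULE_POSITION_POST_CAPTURER_ORIGIN; // before preprocessing
engine.setupLocalVideo(raw);

// Second view: observe the video after preprocessing effects are applied.
VideoCanvas processed = new VideoCanvas(processedView, Constants.RENDER_MODE_HIDDEN, 0);
processed.position = Constants.VIDEO_MODULE_POSITION_POST_CAPTURER;  // after preprocessing
engine.setupLocalVideo(processed);
```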
  3. Query device score

    This release adds the queryDeviceScore method to query the device's score level, to ensure that user-set parameters do not exceed the device's capabilities. For example, in HD or UHD video scenarios, you can first call this method to query the device's score. If the returned score is low (for example, below 60), lower the video resolution to avoid affecting the video experience. The minimum device score required varies by business scenario. For specific score recommendations, contact technical support.

  4. Select different audio tracks for local playback and streaming

    This release introduces the selectMultiAudioTrack method that allows you to select different audio tracks for local playback and streaming to remote users. For example, in scenarios like online karaoke, the host can choose to play the original sound locally and publish the accompaniment in the channel. Before using this function, you need to open the media file through the openWithMediaSource method and enable this function by setting the enableMultiAudioTrack parameter in MediaPlayerSource.
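The karaoke routing above can be sketched with a stub that only records the selection, to show the call shape. The method name selectMultiAudioTrack is from the SDK, but the parameter names, the 0-on-success return, and the track indices are assumptions for illustration:

```java
// Stub media player modeling the karaoke scenario above: one track index
// is played locally (playout) and a different one is published. Not the
// real IMediaPlayer; it only records the chosen indices.
class MultiAudioTrackSketch {
    int playoutTrack = -1;
    int publishTrack = -1;

    int selectMultiAudioTrack(int playoutTrackIndex, int publishTrackIndex) {
        this.playoutTrack = playoutTrackIndex;
        this.publishTrack = publishTrackIndex;
        return 0; // assumed 0-on-success convention
    }

    static MultiAudioTrackSketch hostKaraokeSetup() {
        MultiAudioTrackSketch player = new MultiAudioTrackSketch();
        // Hypothetical indices: track 0 = original song (heard locally),
        // track 1 = accompaniment (published to the channel).
        player.selectMultiAudioTrack(0, 1);
        return player;
    }
}
```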

  5. Audio playback device test

    This release introduces the startPlaybackDeviceTest method to allow you to test whether your local audio device for playback works properly. You can specify the audio file to be played through the testAudioFilePath parameter and see if your audio device works properly. After the test is completed, you need to call the newly added stopPlaybackDeviceTest method to stop the test.

  6. Others

    This release has passed test verification for the following APIs, which can be applied across the entire RTC 4.x SDK series.

Improvements

  1. SDK task processing scheduling optimization

    This release optimizes the scheduling mechanism for internal tasks within the SDK, with improvements in the following aspects:

    • The speed of video rendering and audio playback for both remote and local first frames improves by 10% to 20%.
    • The API call duration and response time are reduced by 5% to 50%.
    • The SDK's parallel processing capability significantly improves, delivering higher video quality (720P, 24 FPS) even on lower-end devices. Additionally, image processing remains more stable in scenarios involving high resolutions and frame rates.
    • The stability of the SDK is further enhanced, leading to a noticeable decrease in the crash rate across various specific scenarios.
  2. In-ear monitoring volume boost

    This release provides users with more flexible in-ear monitoring audio adjustment options, supporting the ability to set the in-ear monitoring volume to four times the original volume by calling setInEarMonitoringVolume.
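The "four times the original volume" ceiling above can be modeled with a clamp helper. This sketch assumes the volume parameter is a percentage where 100 means the original level (so 400 means 4x); that interpretation is an assumption, not a confirmed SDK contract:

```java
// Clamp a requested gain multiplier (e.g. 2.5x) into the assumed
// percentage range 0..400, where 100 is the original volume, before
// passing it to setInEarMonitoringVolume.
class EarMonitorVolume {
    static int toVolumeParam(double gain) {
        int pct = (int) Math.round(gain * 100);
        return Math.max(0, Math.min(400, pct)); // 400 = 4x, the new ceiling
    }

    public static void main(String[] args) {
        System.out.println(toVolumeParam(4.0)); // request the maximum boost
    }
}
```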

  3. Spatial audio effects usability improvement

    • This release optimizes the design of the setZones method, supporting the ability to set the zones parameter to NULL, indicating the clearing of all echo cancellation zones.
    • As of this release, it is no longer necessary to unsubscribe from the audio streams of all remote users within the channel before calling methods in ILocalSpatialAudioEngine.
  4. Optimization of video pre-processing methods

    This release adds overloaded methods with the sourceType parameter for the following 5 video preprocessing methods, which support specifying the media source type for applying video preprocessing effects by passing in sourceType (for example, applying on a custom video capture media source):

  5. Other improvements

    This release also includes the following improvements:

    • Adds codecType in VideoEncoderConfiguration to set the video encoding type.
    • Adds the allowCaptureCurrentApp member in AudioCaptureParameters, which sets whether to capture audio from the current app during screen sharing. The default value is true, meaning the current app's audio is captured. In certain scenarios, the shared audio captured from the app may cause echo on the remote side due to signal delay and other reasons. Agora suggests setting this member to false to eliminate the remote echo introduced during screen sharing.
    • This release optimizes the SDK's domain name resolution strategy, improving the stability of calling setLocalAccessPoint to resolve domain names in complex network environments.
    • When passing in an image with transparent background as the virtual background image, the transparent background can be filled with customized color.
    • This release adds the earMonitorDelay and aecEstimatedDelay members in LocalAudioStats to report ear monitor delay and acoustic echo cancellation (AEC) delay, respectively.
    • The onPlayerCacheStats callback is added to report the statistics of the media file being cached. This callback is triggered once per second after file caching is started.
    • The onPlayerPlaybackStats callback is added to report the statistics of the media file being played. This callback is triggered once per second after the media file starts playing. You can obtain information like the audio and video bitrate of the media file through PlayerPlaybackStats.
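The allowCaptureCurrentApp trade-off noted above can be modeled as follows. The member and type names mirror the release note, but the class here is a stand-in stub, not the SDK's AudioCaptureParameters:

```java
// Stand-in model of the screen-share audio option described above:
// true (the default) includes the current app's audio in the capture;
// false drops it to avoid remote echo.
class ScreenAudioCaptureSketch {
    static class AudioCaptureParameters {
        boolean allowCaptureCurrentApp = true; // default per the note above
    }

    static boolean capturesOwnAppAudio(AudioCaptureParameters p) {
        return p.allowCaptureCurrentApp;
    }

    public static void main(String[] args) {
        AudioCaptureParameters p = new AudioCaptureParameters();
        p.allowCaptureCurrentApp = false; // suggested when remote echo appears
        System.out.println(capturesOwnAppAudio(p));
    }
}
```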

Issues fixed

This release fixed the following issues:

  • When sharing two screen sharing video streams simultaneously, the reported captureFrameRate in the onLocalVideoStats callback is 0, which is not as expected.
  • In an online meeting scenario, occasional audio freezes occurred when the local user was listening to remote users.

API changes

Added

Modified

  • pushExternalVideoFrameEx[1/2] and pushExternalVideoFrameEx[2/2] are renamed to pushExternalVideoFrameById[1/2] and pushExternalVideoFrameById[2/2], and are migrated from RtcEngineEx to RtcEngine.
  • The REMOTE_VIDEO_STATE_PLAYING enumeration is renamed to REMOTE_VIDEO_STATE_DECODING.
  • ROUTE_BLUETOOTH is renamed to AUDIO_ROUTE_BLUETOOTH_DEVICE_HFP.
  • All ERROR fields in the following enumerations are changed to REASON:
    • LOCAL_AUDIO_STREAM_ERROR_OK
    • LOCAL_AUDIO_STREAM_ERROR_FAILURE
    • LOCAL_AUDIO_STREAM_ERROR_DEVICE_NO_PERMISSION
    • LOCAL_AUDIO_STREAM_ERROR_DEVICE_BUSY
    • LOCAL_AUDIO_STREAM_ERROR_CAPTURE_FAILURE
    • LOCAL_AUDIO_STREAM_ERROR_ENCODE_FAILURE
    • LOCAL_VIDEO_STREAM_ERROR_OK
    • LOCAL_VIDEO_STREAM_ERROR_FAILURE
    • LOCAL_VIDEO_STREAM_ERROR_DEVICE_NO_PERMISSION
    • LOCAL_VIDEO_STREAM_ERROR_DEVICE_BUSY
    • LOCAL_VIDEO_STREAM_ERROR_CAPTURE_FAILURE
    • LOCAL_VIDEO_STREAM_ERROR_CODEC_NOT_SUPPORT
    • LOCAL_VIDEO_STREAM_ERROR_DEVICE_NOT_FOUND
    • PLAYER_ERROR_NONE
    • PLAYER_ERROR_INVALID_ARGUMENTS
    • PLAYER_ERROR_INTERNAL
    • PLAYER_ERROR_NO_RESOURCE
    • PLAYER_ERROR_INVALID_MEDIA_SOURCE
    • PLAYER_ERROR_UNKNOWN_STREAM_TYPE
    • PLAYER_ERROR_OBJ_NOT_INITIALIZED
    • PLAYER_ERROR_CODEC_NOT_SUPPORTED
    • PLAYER_ERROR_VIDEO_RENDER_FAILED
    • PLAYER_ERROR_INVALID_STATE
    • PLAYER_ERROR_URL_NOT_FOUND
    • PLAYER_ERROR_INVALID_CONNECTION_STATE
    • PLAYER_ERROR_SRC_BUFFER_UNDERFLOW
    • PLAYER_ERROR_INTERRUPTED
    • PLAYER_ERROR_NOT_SUPPORTED
    • PLAYER_ERROR_TOKEN_EXPIRED
    • PLAYER_ERROR_UNKNOWN
    • RTMP_STREAM_PUBLISH_ERROR_OK
    • RTMP_STREAM_PUBLISH_ERROR_INVALID_ARGUMENT
    • RTMP_STREAM_PUBLISH_ERROR_ENCRYPTED_STREAM_NOT_ALLOWED
    • RTMP_STREAM_PUBLISH_ERROR_CONNECTION_TIMEOUT
    • RTMP_STREAM_PUBLISH_ERROR_INTERNAL_SERVER_ERROR
    • RTMP_STREAM_PUBLISH_ERROR_RTMP_SERVER_ERROR
    • RTMP_STREAM_PUBLISH_ERROR_TOO_OFTEN
    • RTMP_STREAM_PUBLISH_ERROR_REACH_LIMIT
    • RTMP_STREAM_PUBLISH_ERROR_NOT_AUTHORIZED
    • RTMP_STREAM_PUBLISH_ERROR_STREAM_NOT_FOUND
    • RTMP_STREAM_PUBLISH_ERROR_FORMAT_NOT_SUPPORTED
    • RTMP_STREAM_PUBLISH_ERROR_NOT_BROADCASTER
    • RTMP_STREAM_PUBLISH_ERROR_TRANSCODING_NO_MIX_STREAM
    • RTMP_STREAM_PUBLISH_ERROR_NET_DOWN
    • RTMP_STREAM_PUBLISH_ERROR_INVALID_PRIVILEGE
    • RTMP_STREAM_UNPUBLISH_ERROR_OK

Deleted

  • startChannelMediaRelay
  • updateChannelMediaRelay
  • startChannelMediaRelayEx
  • updateChannelMediaRelayEx
  • onChannelMediaRelayEvent

v4.2.6

v4.2.6 was released on November 17, 2023.

Issues fixed

This release fixed the following issues:

  • Issues occurring when using Android 14:

    • When switching between portrait and landscape modes during screen sharing, the screen sharing process was interrupted. To restart screen sharing, users needed to confirm recording the screen in the pop-up window.
    • When integrating the SDK, setting the Android targetSdkVersion to 34 could cause screen sharing to be unavailable or even the app to crash.
    • Calling startScreenCapture without sharing video, that is, setting captureVideo to false, and then calling updateScreenCaptureParameters to share video, that is, setting captureVideo to true, resulted in a frozen shared screen on the receiving end.
    • When screen sharing in a landscape mode, the shared screen seen by the audience was divided into two parts: One side of the screen was compressed, and the other side was black.
  • In live streaming scenarios, the video on the audience end was occasionally distorted.

  • In specific scenarios, such as when the network packet loss rate was high or when the broadcaster left the channel without destroying the engine and then re-joined the channel, the video on the receiving end stuttered or froze.

v4.2.3

v4.2.3 was released on October 11, 2023.

Compatibility changes

This version optimizes the management of Texture Buffer in the SDK capture and custom video capture scenarios, effectively eliminating the potential for frame loss and crashes. Starting from this version, the texture format of the TextureBuffer type no longer includes the OES format, only the RGB format. Add adaptation to the I420 and RGB texture formats when processing video data.

New features

  1. Update video screenshot and upload

    To facilitate the integration of third-party video moderation services from Agora Extensions Marketplace, this version has the following changes:

    • The CONTENT_INSPECT_TYPE_IMAGE_MODERATION enumeration is added in the type parameter of ContentInspectModule, which means using video moderation extensions from Agora Extensions Marketplace to take video screenshots and upload them.
    • An optional parameter serverConfig is added in ContentInspectConfig, which is for server-side configuration related to video screenshot and upload via extensions from Agora Extensions Marketplace. By configuring this parameter, you can integrate multiple third-party moderation extensions and achieve flexible control over extension switches and other features. For more details, please contact technical support.

    In addition, this version also introduces the enableContentInspectEx method, which supports taking screenshots for multiple video streams and uploading them.

  2. Check device support for advanced features

    This version adds the isFeatureAvailableOnDevice method to check whether the capability of the current device meets the requirements of the specified advanced feature, such as virtual background and image enhancement.

    Before using advanced features, you can check whether the current device supports these features based on the call result. This helps to avoid performance degradation or unavailable features when enabling advanced features on low-end devices. Based on the return value of this method, you can decide whether to display or enable the corresponding feature button, or notify the user when the device's capabilities are insufficient.

    In addition, as of this version, calling enableVirtualBackground and setBeautyEffectOptions automatically triggers a test of the current device's capability. When the device is considered underperforming, the error code -4: ERR_NOT_SUPPORTED is returned, indicating the device does not support the feature.

Improvements

  1. Optimize virtual background memory usage

    This version has upgraded the virtual background algorithm, reducing the memory usage of the virtual background feature. Compared to the previous version, the memory consumption of the app during the use of the virtual background feature on low-end devices has been reduced by approximately 4% to 10% (specific values may vary depending on the device model and platform).

  2. Screen sharing scenario optimization

    This release also optimizes the video encoding configuration in screen sharing scenarios. When users customize the width and height properties of the video, the SDK rounds down the actual encoding resolution while maintaining the aspect ratio of the video and the screen, ensuring that the final encoding resolution does not exceed the user-defined encoding resolution, thereby improving the accuracy of billing for screen sharing streams.
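The rounding described above can be illustrated with a small fitting function: scale the captured screen to fit inside the user-defined encode bounds, preserve the aspect ratio, and round down so neither dimension exceeds the configured size. This is illustrative math only; the SDK's exact rounding rule is internal:

```java
// Toy version of the encode-resolution fitting described above. Integer
// cross-multiplication avoids floating-point error; integer division
// rounds the scaled dimension down.
class EncodeFitSketch {
    static int[] fit(int srcW, int srcH, int maxW, int maxH) {
        long widthLimited = (long) maxW * srcH;  // compare maxW/srcW vs
        long heightLimited = (long) maxH * srcW; // maxH/srcH without floats
        if (widthLimited <= heightLimited) {
            // Width is the limiting dimension.
            return new int[] { maxW, (int) ((long) srcH * maxW / srcW) };
        }
        return new int[] { (int) ((long) srcW * maxH / srcH), maxH };
    }

    public static void main(String[] args) {
        int[] r = fit(2400, 1080, 1280, 720); // ultra-wide screen into 720p bounds
        System.out.println(r[0] + "x" + r[1]);
    }
}
```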

Other improvements

This release includes the following additional improvements:

  • Optimizes the management method of Texture Buffer for SDK capture and custom video capture scenarios, effectively eliminating frame dropping and crash risks.
  • Optimizes the logic of handling invalid parameters. When you call the setPlaybackSpeed method to set the playback speed of audio files, if you pass an invalid parameter, the SDK returns the error code -2, which means that you need to reset the parameter.
  • Optimizes the logic of token parsing to prevent the app from crashing when an invalid token is passed in.

Issues fixed

This release fixed the following issues:

  • When using the H.265 encoding mode, a Web client joining the interaction caused a redundant onUserEnableLocalVideo callback on the native side: when the host called enableLocalVideo(true), the receiving end first received an onUserEnableLocalVideo callback with enabled as false before receiving one with enabled as true.
  • Occasional failure of joining a channel when the local system time was not set correctly.
  • When calling the playEffect [2/2] method to play two audio files using the same soundId, the first audio file was sometimes played repeatedly.
  • When the host called the startAudioMixing [2/2] method to play music, sometimes the host couldn't hear the music while the remote users could hear it.
  • Occasional crashes occurred on certain Android devices.
  • Calling takeSnapshotEx once triggered the onSnapshotTaken callback multiple times.
  • In channels joined by calling joinChannelEx exclusively, calling setEnableSpeakerphone could not switch the audio route from the speaker to the headphone.

API changes

Added

  • enableContentInspectEx
  • CONTENT_INSPECT_TYPE_IMAGE_MODERATION in type of ContentInspectModule.
  • serverConfig in ContentInspectConfig
  • isFeatureAvailableOnDevice
  • FeatureType

v4.2.2

v4.2.2 was released on July 27, 2023.

New features

  1. Wildcard token

    This release introduces wildcard tokens. Agora supports setting the channel name used for generating a token as a wildcard character. The generated token can then be used to join any channel with the same user ID. In scenarios involving multiple channels, such as switching between different channels, using a wildcard token avoids repeatedly requesting tokens every time a user joins a new channel, which reduces the pressure on your token server. See Secure authentication with tokens.

    All 4.x SDKs support using wildcard tokens.
  2. Preloading channels

    This release adds the preloadChannel[1/2] and preloadChannel[2/2] methods, which allow a user whose role is set as audience to preload channels before joining one. Calling these methods helps shorten the time of joining a channel, thus reducing the time it takes for audience members to hear and see the host.

    When preloading more than one channel, Agora recommends that you use a wildcard token for preloading to avoid requesting a new token each time you join a channel, thus saving the time needed to switch between channels. See Secure authentication with tokens.
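The pattern above can be sketched without the SDK: one wildcard token is reused across every preloadChannel call instead of fetching a per-channel token. The preloadChannel stand-in below only records calls; the real method lives on the Agora RtcEngine:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of channel preloading with a single wildcard token. The engine
// here is a stub that records which channels were preloaded.
class WildcardPreloadSketch {
    final List<String> preloaded = new ArrayList<>();

    void preloadChannel(String token, String channel, int uid) {
        preloaded.add(channel); // stand-in for RtcEngine.preloadChannel
    }

    static List<String> preloadAll(String wildcardToken, int uid, String... channels) {
        WildcardPreloadSketch engine = new WildcardPreloadSketch();
        for (String ch : channels) {
            // Same token and same uid for every channel: no per-channel
            // round trip to the token server.
            engine.preloadChannel(wildcardToken, ch, uid);
        }
        return engine.preloaded;
    }
}
```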

  3. Customized background color of video canvas

    In this release, the backgroundColor member has been added to VideoCanvas, which allows you to customize the background color of the video canvas when setting the properties of local or remote video display.

Improvements

  1. Improved camera capture effect

    Since this release, camera exposure adjustment is supported. This release adds isCameraExposureSupported to query whether the device supports exposure adjustment and setCameraExposureFactor to set the exposure ratio of the camera.

  2. Virtual Background Algorithm Upgrade

    This version has upgraded the portrait segmentation algorithm of the virtual background, which comprehensively improves the accuracy of portrait segmentation, the smoothness of the portrait edge with the virtual background, and the fit of the edge when the person moves. In addition, it optimizes the precision of the person's edge in scenarios such as meetings, offices, homes, and under backlight or weak light conditions.

  3. Channel media relay

    The number of target channels for media relay has been increased to 6. When calling startOrUpdateChannelMediaRelay and startOrUpdateChannelMediaRelayEx, you can specify up to 6 target channels.

  4. Enhancement in video codec query capability

    To improve the video codec query capability, this release adds the codecLevels member in CodecCapInfo. After successfully calling queryCodecCapability, you can obtain the hardware and software decoding capability levels of the device for H.264 and H.265 video formats through codecLevels.

This release includes the following additional improvements:

  1. To improve the switching experience between multiple audio routes, this release adds the setRouteInCommunicationMode method. This method can switch the audio route from a Bluetooth headphone to the earpiece, wired headphone or speaker in communication volume mode (MODE_IN_COMMUNICATION).
  2. The SDK automatically adjusts the frame rate of the sending end based on the screen sharing scenario. Especially in document sharing scenarios, this feature avoids exceeding the expected video bitrate on the sending end to improve transmission efficiency and reduce network burden.
  3. To help users understand the reasons for more types of remote video state changes, the REMOTE_VIDEO_STATE_REASON_CODEC_NOT_SUPPORT enumeration has been added to the onRemoteVideoStateChanged callback, indicating that the local video decoder does not support decoding the received remote video stream.

Issues fixed

This release fixed the following issues:

  • Slow channel reconnection after the connection was interrupted due to network reasons.
  • In screen sharing scenarios, the delay of seeing the shared screen was occasionally higher than expected on some devices.
  • In custom video capturing scenarios, setBeautyEffectOptions, setLowlightEnhanceOptions, setVideoDenoiserOptions, and setColorEnhanceOptions could not load extensions automatically.

API changes

Added

  • setCameraExposureFactor
  • isCameraExposureSupported
  • preloadChannel[1/2]
  • preloadChannel[2/2]
  • updatePreloadChannelToken
  • setRouteInCommunicationMode
  • CodecCapLevels
  • VideoCodecCapLevel
  • backgroundColor in VideoCanvas
  • codecLevels in CodecCapInfo
  • REMOTE_VIDEO_STATE_REASON_CODEC_NOT_SUPPORT

v4.2.1

This version was released on June 21, 2023.

Improvements

This version improves the network transmission strategy, enhancing the smoothness of audio and video interactions.

Issues fixed

This version fixed the following issues:

  • Inability to join channels caused by SDK's incompatibility with some older versions of AccessToken.
  • After the sending end called setAINSMode to activate AI noise reduction, occasional echo was observed by the receiving end.
  • Brief noise occurred while playing media files using the media player.
  • In screen sharing scenarios, some Android devices experienced choppy video on the receiving end.

v4.2.0

v4.2.0 was released on May 24, 2023.

Compatibility changes

If you use the features mentioned in this section, ensure that you modify the implementation of the relevant features after upgrading the SDK.

1. Video data acquisition

The onCaptureVideoFrame and onPreEncodeVideoFrame callbacks are added with a new parameter called sourceType, which is used to indicate the specific video source type.

2. Channel media options

  • publishCustomAudioTrackEnableAec in ChannelMediaOptions is deleted. Use publishCustomAudioTrack instead.
  • publishTrancodedVideoTrack in ChannelMediaOptions is renamed to publishTranscodedVideoTrack.
  • publishCustomAudioSourceId in ChannelMediaOptions is renamed to publishCustomAudioTrackId.

3. Miscellaneous

  • onApiCallExecuted is deleted. Agora recommends getting the results of the API implementation through relevant channels and media callbacks.
  • enableDualStreamMode[1/2] and enableDualStreamMode[2/2] are deprecated. Use setDualStreamMode[1/2] and setDualStreamMode[2/2] instead.
  • startChannelMediaRelay, updateChannelMediaRelay, startChannelMediaRelayEx, and updateChannelMediaRelayEx are deprecated. Use startOrUpdateChannelMediaRelay and startOrUpdateChannelMediaRelayEx instead.

New features

1. AI Noise Suppression

This release introduces public APIs for the AI Noise Suppression function. Once enabled, the SDK automatically detects and reduces background noises. Whether in bustling public venues or real-time competitive arenas that demand lightning-fast responsiveness, this function guarantees optimal audio clarity, providing users with an elevated audio experience. You can enable this function through the newly-introduced setAINSMode method and set the noise suppression mode to balance, aggressive, or low latency according to your scenarios.

Agora charges separately for this function. See AI Noise Suppression unit pricing.

2. Enhanced Virtual Background

To increase the fun of real-time video calls and protect user privacy, this version has enhanced the Virtual Background function. You can now set custom backgrounds of various types by calling the enableVirtualBackground method, including:

  • Process the background as Alpha information without replacement, only separating the portrait and the background. This can be combined with the local video mixing feature to achieve a portrait-in-picture effect.
  • Replace the background with various formats of local videos.

See Virtual Background documentation.

3. Video scenario settings

This release introduces setVideoScenario for setting the video application scene. The SDK will automatically enable the best practice strategy based on different scenes, adjusting key performance indicators to optimize video quality and improve user experience. Whether it is a formal business meeting or a casual online gathering, this feature ensures that the video quality meets the requirements.

Currently, this feature provides targeted optimizations for real-time video conferencing scenarios, including:

  • Automatically activate multiple anti-weak-network technologies to enhance the capability and performance of low-quality video streams in meeting scenarios where high bitrate is required, ensuring smoothness when multiple streams are subscribed by the receiving end.
  • Monitor the number of subscribers for the high-quality and low-quality video streams in real time, dynamically adjusting the configuration of the high-quality stream and dynamically enabling or disabling the low-quality stream, to save uplink bandwidth and consumption.

4. Local video mixing

This release adds the local video mixing feature. You can use the startLocalVideoTranscoder method to mix and render multiple video streams locally, such as camera-captured video, screen sharing streams, video files, images, etc. This allows you to achieve custom layouts and effects, making it easy to create personalized video display effects to meet various scenario requirements, such as remote meetings, live streaming, online education, while also supporting features like portrait-in-picture effect.

Additionally, the SDK provides the updateLocalTranscoderConfiguration method and the onLocalVideoTranscoderError callback. After enabling local video mixing, you can use the updateLocalTranscoderConfiguration method to update the video mixing configuration. When an error occurs in starting the local video mixing or updating the configuration, you can get the reason for the failure through the onLocalVideoTranscoderError callback.

Local video mixing requires more CPU resources. Therefore, Agora recommends enabling this function on devices with higher performance.

5. Cross-device synchronization

In real-time collaborative singing scenarios, network issues can cause inconsistencies in the downlinks of different client devices. To address this, this release introduces getNtpWallTimeInMs for obtaining the current Network Time Protocol (NTP) time. By using this method to synchronize lyrics and music across multiple client devices, users can achieve synchronized singing and lyrics progression, resulting in a better collaborative experience.

Improvements

1. Improved voice changer

This release introduces the setLocalVoiceFormant method that allows you to adjust the formant ratio to change the timbre of the voice. This method can be used together with the setLocalVoicePitch method to adjust the pitch and timbre of voice at the same time, enabling a wider range of voice transformation effects.

2. Enhanced screen share

This release adds the queryScreenCaptureCapability method, which is used to query the screen capture capabilities of the current device. To ensure optimal screen sharing performance, particularly in enabling high frame rates like 60 fps, Agora recommends you to query the device's maximum supported frame rate using this method beforehand.

This release also adds the setScreenCaptureScenario method, which is used to set the scenario type for screen sharing. The SDK automatically adjusts the smoothness and clarity of the shared screen based on the scenario type you set.

3. Improved compatibility with audio file types

As of v4.2.0, you can use the following methods to open files with a URI starting with content://:

  • startAudioMixing [2/2]
  • playEffect [3/3]
  • open [2/2]
  • openWithMediaSource

4. Audio and video synchronization

For custom video and audio capture scenarios, this release introduces getCurrentMonotonicTimeInMs for obtaining the current Monotonic Time. By passing this value into the timestamps of audio and video frames, developers can accurately control the timing of their audio and video streams, ensuring proper synchronization.
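The synchronization idea above can be sketched without the SDK: stamp audio and video frames from one shared monotonic clock so the receiver can align them. Here System.nanoTime() stands in for the SDK's getCurrentMonotonicTimeInMs:

```java
// Illustration of A/V sync via a shared monotonic time source. In a real
// integration the timestamp would come from getCurrentMonotonicTimeInMs
// and be attached to each pushed external audio/video frame.
class MonotonicStampSketch {
    static long monotonicMs() {
        return System.nanoTime() / 1_000_000L; // monotonic, not wall-clock
    }

    // Stamp an audio frame and a video frame from the same time source,
    // so later frames always carry equal-or-larger timestamps.
    static long[] stampPair() {
        long audioTs = monotonicMs();
        long videoTs = monotonicMs();
        return new long[] { audioTs, videoTs };
    }
}
```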

5. Multi-camera capture

This release introduces startCameraCapture. By calling this method multiple times and specifying the sourceType parameter, developers can start capturing video streams from multiple cameras for local video mixing or multi-channel publishing. This is particularly useful for scenarios such as remote medical care and online education, where multiple cameras need to be connected.

6. Channel media relay

This release introduces startOrUpdateChannelMediaRelay and startOrUpdateChannelMediaRelayEx, allowing for a simpler and smoother way to start and update media relay across channels. With these methods, developers can easily start the media relay across channels and update the target channels for media relay with a single method. Additionally, the internal interaction frequency has been optimized, effectively reducing latency in function calls.

7. Custom audio tracks

To better meet the needs of custom audio capture scenarios, this release adds createCustomAudioTrack and destroyCustomAudioTrack for creating and destroying custom audio tracks. Two types of audio tracks are also provided for users to choose from, further improving the flexibility of capturing external audio source:

  • Mixable audio track: Supports mixing multiple external audio sources and publishing them to the same channel, suitable for multi-channel audio capture scenarios.
  • Direct audio track: Only supports publishing one external audio source to a single channel, suitable for low-latency audio capture scenarios.

Issues fixed

This release fixed the following issues:

  • Occasional crashes occurred on Android devices when users joined or left a channel.
  • When the host frequently switched the user role between broadcaster and audience in a short period of time, the audience members could not hear the audio of the host.
  • Occasional failure when enabling in-ear monitoring.
  • Occasional echo.
  • Abnormal client status caused by an exception in the onRemoteAudioStateChanged callback.

API changes

Added

  • startCameraCapture
  • stopCameraCapture
  • startOrUpdateChannelMediaRelay
  • startOrUpdateChannelMediaRelayEx
  • getNtpWallTimeInMs
  • setVideoScenario
  • getCurrentMonotonicTimeInMs
  • startLocalVideoTranscoder
  • updateLocalTranscoderConfiguration
  • onLocalVideoTranscoderError
  • queryScreenCaptureCapability
  • setScreenCaptureScenario
  • setAINSMode
  • createCustomAudioTrack
  • destroyCustomAudioTrack
  • AudioTrackConfig
  • AudioTrackType
  • VideoScenario
  • The mDomainLimit and mAutoRegisterAgoraExtensions members in RtcEngineConfig
  • The sourceType parameter in onCaptureVideoFrame and onPreEncodeVideoFrame callbacks
  • BACKGROUND_NONE(0)
  • BACKGROUND_VIDEO(4)

Deprecated

  • enableDualStreamMode[1/2]
  • enableDualStreamMode[2/2]
  • startChannelMediaRelay
  • startChannelMediaRelayEx
  • updateChannelMediaRelay
  • updateChannelMediaRelayEx
  • onChannelMediaRelayEvent

Deleted

  • onApiCallExecuted
  • publishCustomAudioTrackEnableAec in ChannelMediaOptions

v4.1.1

v4.1.1 was released on February 8, 2023.

Compatibility changes

As of this release, the SDK optimizes the video encoder algorithm and upgrades the default video encoding resolution from 640 × 360 to 960 × 540 to accommodate improvements in device performance and network bandwidth, providing users with a full-link HD experience in various audio and video interaction scenarios.

Call the setVideoEncoderConfiguration method to set the expected video encoding resolution in the video encoding parameters configuration.

The increase in the default resolution affects the aggregate resolution and thus the billing rate. See Pricing.

New features

1. Instant frame rendering

This release adds the enableInstantMediaRendering method to enable instant rendering mode for audio and video frames, which can speed up the first video or audio frame rendering after the user joins the channel.

2. Video rendering tracing

This release adds the startMediaRenderingTracing and startMediaRenderingTracingEx methods. The SDK starts tracing the rendering status of the video frames in the channel from the moment this method is called and reports information about the event through the onVideoRenderingTracingResult callback.

Agora recommends that you use this method in conjunction with the UI settings, such as buttons and sliders, in your app. For example, call this method when the user clicks Join Channel and then get the indicators in the video frame rendering process through the onVideoRenderingTracingResult callback. This enables developers to optimize the indicators and improve the user experience.

Improvements

1. Video frame observer

As of this release, the SDK optimizes the onRenderVideoFrame callback, and the meaning of the return value is different depending on the video processing mode:

  • When the video processing mode is PROCESS_MODE_READ_ONLY, the return value is reserved for future use.
  • When the video processing mode is PROCESS_MODE_READ_WRITE, the SDK receives the video frame when the return value is true. The video frame is discarded when the return value is false.
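The return-value semantics above can be summarized in a small model. The mode names mirror the SDK enums, but the function itself is illustrative, not an SDK API:

```java
// Minimal model of the onRenderVideoFrame return-value rules described
// above: in READ_WRITE mode a false return discards the frame; in
// READ_ONLY mode the return value is reserved and the frame is kept.
class FrameObserverSemantics {
    enum Mode { PROCESS_MODE_READ_ONLY, PROCESS_MODE_READ_WRITE }

    // Returns true when the SDK should keep (render) the frame.
    static boolean keepFrame(Mode mode, boolean observerReturn) {
        if (mode == Mode.PROCESS_MODE_READ_ONLY) {
            return true; // return value reserved for future use
        }
        return observerReturn; // READ_WRITE: false drops the frame
    }
}
```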

2. Super resolution

This release improves the performance of super resolution. To optimize the usability of super resolution, this release removes enableRemoteSuperResolution. Super resolution is now included in the online strategies of video quality enhancement which does not require extra configuration.

Issues fixed

This release fixes the following issues:

  • Playing audio files with a sample rate of 48 kHz failed.
  • Crashes occurred after users set the video resolution as 3840 × 2160 and started CDN streaming on Xiaomi Redmi 9A devices.
  • In real-time chorus scenarios, remote users heard noises and echoes when an OPPO R11 device joined the channel in loudspeaker mode.
  • When the playback of the local music finished, the onAudioMixingFinished callback was not properly triggered.
  • When using a video frame observer, the first video frame was occasionally missed on the receiver's end.
  • When sharing screens in scenarios involving multiple channels, remote users occasionally saw black screens.
  • Switching to the rear camera with the virtual background enabled occasionally caused the background to be inverted.
  • When there were multiple video streams in a channel, calling some video enhancement APIs occasionally failed.
  • When a user left a channel, the leave request was occasionally not sent to the server, so the server incorrectly determined that the user had timed out.

API changes

Added

  • enableInstantMediaRendering
  • startMediaRenderingTracing
  • startMediaRenderingTracingEx
  • onVideoRenderingTracingResult
  • MEDIA_RENDER_TRACE_EVENT
  • VideoRenderingTracingInfo

Deleted

  • enableRemoteSuperResolution
  • superResolutionType in RemoteVideoStats

v4.1.0

v4.1.0 was released on December 15, 2022.

New features

1. Headphone equalization effect

This release adds the setHeadphoneEQParameters method, which is used to adjust the low- and high-frequency parameters of the headphone EQ. This is mainly useful in spatial audio scenarios. If you cannot achieve the expected headphone EQ effect after calling setHeadphoneEQPreset, you can call setHeadphoneEQParameters to adjust the EQ.

2. Encoded video frame observer

This release adds the setRemoteVideoSubscriptionOptions and setRemoteVideoSubscriptionOptionsEx methods. When you call the registerVideoEncodedFrameObserver method to register a video frame observer for the encoded video frames, the SDK subscribes to the encoded video frames by default. If you want to change the subscription options, you can call these new methods to set them.

For more information about registering video observers and subscription options, see the API reference.

3. MPUDP (MultiPath UDP) (Beta)

As of this release, the SDK supports the MPUDP protocol, which enables you to connect over multiple paths and maximize the use of channel resources based on the UDP protocol. You can use different physical NICs on both mobile and desktop devices and aggregate them to effectively combat network jitter and improve transmission quality.

To enable this feature, contact support@agora.io.

4. Camera capture options

This release adds the followEncodeDimensionRatio member in CameraCapturerConfiguration, which enables you to set whether to follow the video aspect ratio already set in setVideoEncoderConfiguration when capturing video with the camera.

5. Multi-channel management

This release adds a series of multi-channel related methods that you can call to manage audio and video streams in multi-channel scenarios.

  • The muteLocalAudioStreamEx and muteLocalVideoStreamEx methods are used to cancel or resume publishing a local audio or video stream, respectively.
  • The muteAllRemoteAudioStreamsEx and muteAllRemoteVideoStreamsEx methods are used to cancel or resume the subscription to all remote users' audio or video streams, respectively.
  • The startRtmpStreamWithoutTranscodingEx, startRtmpStreamWithTranscodingEx, updateRtmpTranscodingEx, and stopRtmpStreamEx methods are used to implement Media Push in multi-channel scenarios.
  • The startChannelMediaRelayEx, updateChannelMediaRelayEx, pauseAllChannelMediaRelayEx, resumeAllChannelMediaRelayEx, and stopChannelMediaRelayEx methods are used to relay media streams across channels in multi-channel scenarios.
  • Adds the leaveChannelEx [2/2] method. Compared with the leaveChannelEx [1/2] method, a new options parameter is added, which is used to choose whether to stop recording with the microphone when leaving a channel in a multi-channel scenario.
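For illustration, controlling streams in one of several joined channels might look like the following. This assumes an `RtcEngineEx` instance named `engine` and a channel previously joined via `joinChannelEx`; the channel name and user ID are placeholders:

```java
// Sketch: an RtcConnection identifies the (channel, localUid) pair that
// was used when joining the extra channel with joinChannelEx.
RtcConnection connection = new RtcConnection("secondChannel", 1001);

engine.muteLocalAudioStreamEx(true, connection);   // stop publishing audio there
engine.muteLocalVideoStreamEx(false, connection);  // keep publishing video there
```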

6. Video encoding preferences

In general scenarios, the default video encoding configuration meets most requirements. For certain specific scenarios, this release adds the advanceOptions member in VideoEncoderConfiguration for advanced settings of video encoding properties:

  • compressionPreference: The compression preferences for video encoding, which is used to select low-latency or high-quality video preferences.
  • encodingPreference: The video encoder preference, which is used to select adaptive preference, software encoder preference, or hardware encoder video preferences.
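A minimal sketch of these advanced settings follows; the field and enumeration spellings are assumptions based on the names in this note, so check the API reference for the exact forms. `engine` is an initialized `RtcEngine`:

```java
// Sketch: prefer low latency and hardware encoding via advanceOptions.
VideoEncoderConfiguration config = new VideoEncoderConfiguration();
VideoEncoderConfiguration.AdvanceOptions advance =
        new VideoEncoderConfiguration.AdvanceOptions();
// Compression preference: trade quality for lower latency.
advance.compressionPreference =
        VideoEncoderConfiguration.COMPRESSION_PREFERENCE.PREFER_LOW_LATENCY;
// Encoder preference: favor the hardware encoder when available.
advance.encodingPreference =
        VideoEncoderConfiguration.ENCODING_PREFERENCE.PREFER_HARDWARE;
config.advanceOptions = advance;
engine.setVideoEncoderConfiguration(config);
```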

7. Client role switching

To let users know whether the switched user role is low-latency or ultra-low-latency, this release adds the newRoleOptions parameter to the onClientRoleChanged callback. The values of this parameter are as follows:

  • AUDIENCE_LATENCY_LEVEL_LOW_LATENCY (1): Low latency.
  • AUDIENCE_LATENCY_LEVEL_ULTRA_LOW_LATENCY (2): Ultra-low latency.

8. Brand-new AI Noise Suppression

The SDK supports a new version of Noise Suppression (in comparison to the basic Noise Suppression in v3.7.x). The new AI Noise Suppression has better vocal fidelity, cleaner noise suppression, and adds a dereverberation option. To enable this feature, contact support@agora.io.

9. Spatial audio effect

This release adds the following features applicable to spatial audio scenarios, which can effectively enhance the user's sense of presence in virtual interactive scenarios.

  • Sound insulation area: You can set a sound insulation area and sound attenuation parameter by calling setZones. When the sound source (which can be a user or the media player) and the listener are on opposite sides of the sound insulation area, the listener experiences an attenuation effect similar to that of sound encountering a building partition in the real world. You can also set the sound attenuation parameter for the media player and the user, respectively, by calling setPlayerAttenuation and setRemoteAudioAttenuation, and specify whether to use that setting to force an override of the sound attenuation parameter in setZones.
  • Doppler sound: You can enable Doppler sound by setting the enable_doppler parameter in SpatialAudioParams. The receiver experiences noticeable tonal changes in the event of a high-speed relative displacement between the sound source and the receiver (such as in a racing game scenario).
  • Headphone equalizer: You can use a preset headphone equalization effect by calling the setHeadphoneEQPreset method to improve the listening experience with headphones.

Improvements

1. Bluetooth permissions

To simplify integration, as of this release, you can use the SDK to enable Android users to use Bluetooth normally without adding the BLUETOOTH_CONNECT permission.

2. CDN streaming

To improve user experience during CDN streaming, when your camera does not support the video resolution you set when streaming, the SDK automatically adjusts the resolution to the closest value that is supported by your camera and has the same aspect ratio as the original video resolution you set. The actual video resolution used by the SDK for streaming can be obtained through the onDirectCdnStreamingStats callback.

3. Relaying media streams across channels

This release optimizes the updateChannelMediaRelay method as follows:

  • Before v4.1.0: If the target channel update fails due to internal reasons in the server, the SDK returns the error code RELAY_EVENT_PACKET_UPDATE_DEST_CHANNEL_REFUSED(8), and you need to call the updateChannelMediaRelay method again.
  • v4.1.0 and later: If the target channel update fails due to internal server reasons, the SDK retries the update until the target channel update is successful.

4. Reconstructed AIAEC algorithm

This release reconstructs the AEC algorithm based on the AI method. Compared with the traditional AEC algorithm, the new algorithm can preserve the complete, clear, and smooth near-end vocals under poor echo-to-signal conditions, significantly improving the system's echo cancellation and dual-talk performance. This gives users a more comfortable call and live-broadcast experience. AIAEC is suitable for conference calls, chats, karaoke, and other scenarios.

5. Virtual background

This release optimizes the virtual background algorithm. Improvements include the following:

  • The boundaries of virtual backgrounds are handled in a more nuanced way, and edge matting is now much finer.
  • The stability of the virtual background is improved whether the portrait is still or moving, effectively eliminating background flickering and overflow beyond the edges of the picture.
  • More application scenarios are now supported, and a user obtains a good virtual background effect day or night, indoors or out.
  • A larger variety of postures is now recognized: when half the body is motionless, the body is shaking, the hands are swinging, or there is fine finger movement. This helps to achieve a good virtual background effect in conjunction with many different gestures.

Other improvements

This release includes the following additional improvements:

  • Reduces the latency when pushing external audio sources.
  • Improves the performance of echo cancellation when using the AUDIO_SCENARIO_MEETING scenario.
  • Improves the smoothness of SDK video rendering.
  • Enhances the ability to identify different network protocol stacks and improves the SDK's access capabilities in multiple-operator network scenarios.

Issues fixed

This release fixed the following issues:

  • Audience members heard buzzing noises when the host switched between speakers and earphones during live streaming.
  • The call getExtensionProperty failed and returned an empty string.
  • When a user entered, as an audience member, a live streaming room that had been streaming for a long time, the first frame took too long to render.

API changes

Added

  • setHeadphoneEQParameters
  • setRemoteVideoSubscriptionOptions
  • setRemoteVideoSubscriptionOptionsEx
  • VideoSubscriptionOptions
  • leaveChannelEx [2/2]
  • muteLocalAudioStreamEx
  • muteLocalVideoStreamEx
  • muteAllRemoteAudioStreamsEx
  • muteAllRemoteVideoStreamsEx
  • startRtmpStreamWithoutTranscodingEx
  • startRtmpStreamWithTranscodingEx
  • updateRtmpTranscodingEx
  • stopRtmpStreamEx
  • startChannelMediaRelayEx
  • updateChannelMediaRelayEx
  • pauseAllChannelMediaRelayEx
  • resumeAllChannelMediaRelayEx
  • stopChannelMediaRelayEx
  • followEncodeDimensionRatio in CameraCapturerConfiguration
  • hwEncoderAccelerating in LocalVideoStats
  • advanceOptions in VideoEncoderConfiguration
  • newRoleOptions in onClientRoleChanged
  • adjustUserPlaybackSignalVolumeEx
  • IAgoraMusicContentCenter interface class and methods in it
  • IAgoraMusicPlayer interface class and methods in it
  • IMusicContentCenterEventHandler interface class and callbacks in it
  • Music class
  • MusicChartInfo class
  • MusicContentCenterConfiguration class
  • MvProperty class
  • ClimaxSegment class

Deprecated

  • onApiCallExecuted. Use the callbacks triggered by specific methods instead.

Deleted

  • Removes deprecated member parameters backgroundImage and watermark in LiveTranscoding class.
  • Removes RELAY_EVENT_PACKET_UPDATE_DEST_CHANNEL_REFUSED(8) in onChannelMediaRelayEvent callback.

v4.0.1

v4.0.1 was released on September 29, 2022.

Compatibility changes

This release deletes the sourceType parameter in enableDualStreamMode [3/3] and enableDualStreamModeEx, as well as the enableDualStreamMode [2/3] method. Because the SDK now supports enabling dual-stream mode for video sources captured by either the SDK or custom capture, you no longer need to specify the video source type.

New features

1. In-ear monitoring

This release adds the getEarMonitoringAudioParams callback to set the audio data format of in-ear monitoring. You can use your own audio effect processing module to pre-process the audio frame data of the in-ear monitoring to implement custom audio effects. After calling registerAudioFrameObserver to register the audio observer, set the audio data format in the return value of the getEarMonitoringAudioParams callback. The SDK calculates the sampling interval based on the return value of the callback, and triggers the onEarMonitoringAudioFrame callback based on the sampling interval.

2. Audio capture device test

This release adds support for testing local audio capture devices before joining a channel. You can call startRecordingDeviceTest to start the audio capture device test. After the test is complete, call the stopRecordingDeviceTest method to stop it.

3. Local network connection types

To make it easier for users to know the connection type of the local network at any stage, this release adds the getNetworkType method. You can use this method to get the type of network connection in use, including UNKNOWN, DISCONNECTED, LAN, WIFI, 2G, 3G, 4G, 5G. When the local network connection type changes, the SDK triggers the onNetworkTypeChanged callback to report the current network connection type.
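For illustration, both the query and the change notification can be sketched as follows, assuming an initialized `RtcEngine` instance named `engine`; the integer values correspond to the connection types listed above:

```java
// Sketch: query the current network type once...
int networkType = engine.getNetworkType();

// ...and observe subsequent changes through the event handler.
IRtcEngineEventHandler handler = new IRtcEngineEventHandler() {
    @Override
    public void onNetworkTypeChanged(int type) {
        // Triggered whenever the local network connection type changes.
        Log.d("Network", "Connection type changed to: " + type);
    }
};
```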

4. Audio stream filter

This release introduces filtering audio streams based on volume. Once this function is enabled, the Agora server ranks all audio streams by volume and transports the 3 audio streams with the highest volumes to the receivers by default. The number of audio streams to be transported can be adjusted; contact support@agora.io to adjust this number according to your scenarios.

Meanwhile, Agora allows publishers to choose whether the audio streams they publish are filtered based on volume. Streams that are not filtered bypass this filter mechanism and are transported directly to the receivers. In scenarios with many publishers, enabling this function helps reduce the bandwidth and device system pressure on the receivers.

To enable this function, contact support@agora.io.

5. Dual-stream mode

This release optimizes dual-stream mode: you can now call enableDualStreamMode and enableDualStreamModeEx both before and after joining a channel.

The implementation of subscribing to the low-quality video stream is expanded. By default, the SDK enables low-quality video stream auto mode on the sender (the sender does not send low-quality video streams until they are requested). You can follow these steps to enable sending low-quality video streams:

  1. The host at the receiving end calls setRemoteVideoStreamType or setRemoteDefaultVideoStreamType to initiate a low-quality video stream request.
  2. After receiving the request, the sender automatically switches to sending the low-quality video stream.

If you want to modify the default behavior above, you can call setDualStreamMode [1/2] or setDualStreamMode [2/2] and set the mode parameter to DISABLE_SIMULCAST_STREAM (always do not send low-quality video streams) or ENABLE_SIMULCAST_STREAM (always send low-quality video streams).
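Both sides of this flow can be sketched as follows, assuming an initialized `RtcEngine` instance named `engine` and a placeholder `remoteUid`; the `SimulcastStreamMode` naming is an assumption based on the SIMULCAST_STREAM_MODE enumeration added in this release:

```java
// Receiver side: request the low-quality stream of a remote user.
engine.setRemoteVideoStreamType(remoteUid, Constants.VIDEO_STREAM_LOW);

// Sender side (optional): override the default auto mode and always send
// low-quality streams instead of waiting for a request.
engine.setDualStreamMode(Constants.SimulcastStreamMode.ENABLE_SIMULCAST_STREAM);
```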

Improvements

1. Video information change callback

This release optimizes the trigger logic of onVideoSizeChanged, so that it is also triggered to report the local video size change when startPreview is called on its own.

Issues fixed

This release fixed the following issues.

  1. Calling setVideoEncoderConfigurationEx in the channel to increase the video resolution occasionally failed.
  2. In online meeting scenarios, the local user and the remote user might not hear each other after the local user was interrupted by a phone call.
  3. After calling setCloudProxy to set the cloud proxy, calling joinChannelEx to join multiple channels failed.
  4. When using the Agora media player, after you played and paused a video and then called the seek method to specify a new playback position, the video image occasionally remained unchanged; if you then called the resume method to resume playback, the video might play faster than its original speed.

API changes

Added

  • getEarMonitoringAudioParams
  • startRecordingDeviceTest
  • stopRecordingDeviceTest
  • getNetworkType
  • isAudioFilterable in the ChannelMediaOptions
  • setDualStreamMode [1/2]
  • setDualStreamMode [2/2]
  • setDualStreamModeEx
  • SIMULCAST_STREAM_MODE
  • setZones
  • setPlayerAttenuation
  • setRemoteAudioAttenuation
  • muteRemoteAudioStream
  • SpatialAudioParams
  • setHeadphoneEQPreset
  • HEADPHONE_EQUALIZER_PRESET

Modified

  • enableDualStreamMode [1/3]

  • enableDualStreamMode [3/3]

  • enableDualStreamModeEx

Deprecated

  • startEchoTest [2/3]

Deleted

  • enableDualStreamMode [2/3]

v4.0.0

v4.0.0 was released on September 15, 2022.

Compatibility changes

1. Integration change

This release has optimized the implementation of some features, resulting in incompatibility with v3.7.x. The following are the main features with compatibility changes:

  • Multiple channel
  • Media stream publishing control
  • Custom video capture and rendering (Media IO)
  • Warning codes

After upgrading the SDK, you need to update the code in your app according to your business scenarios. For details, see Migrate from v3.7.x to v4.0.0.

2. Callback exception handling

To facilitate troubleshooting, as of this release, the SDK no longer catches exceptions that are thrown by your own code implementation when triggering callbacks in the IRtcEngineEventHandler class. You need to catch and handle the exceptions yourself; otherwise, it can cause a crash.

New features

1. Multiple media tracks

This release allows one RtcEngine instance to capture multiple audio and video sources at the same time and publish them to remote users by setting RtcEngineEx and ChannelMediaOptions.

After calling joinChannel to join the first channel, call joinChannelEx multiple times to join multiple channels, and publish the specified stream to different channels through different user IDs (localUid) and ChannelMediaOptions settings.

In addition, this release adds the createCustomVideoTrack method to implement custom video capture. You can follow these steps to publish multiple custom captured video streams in a channel:

  1. Create a custom video track: Call this method to create a video track, and get the video track ID.
  2. Set the custom video track to be published in the channel: In each channel's ChannelMediaOptions, set the customVideoTrackId parameter to the ID of the video track you want to publish, and set publishCustomVideoTrack to true.
  3. Pushing an external video source: Call pushVideoFrame, and specify videoTrackId as the ID of the custom video track in step 2 in order to publish the corresponding custom video source in multiple channels.
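The three steps above might be sketched as follows. This assumes an `RtcEngineEx` instance named `engine`, a prepared external `videoFrame`, and a registered `eventHandler`; `token` and the channel name are placeholders, and `pushVideoFrame` is used as named in this note, so check the API reference for the exact signature:

```java
// 1. Create a custom video track and obtain its ID.
int videoTrackId = engine.createCustomVideoTrack();

// 2. Publish that track when joining a channel.
ChannelMediaOptions options = new ChannelMediaOptions();
options.customVideoTrackId = videoTrackId;
options.publishCustomVideoTrack = true;
engine.joinChannelEx(token, new RtcConnection("demoChannel", 1001),
        options, eventHandler);

// 3. Push external frames tagged with the track ID, so the corresponding
//    custom video source is published in every channel that uses it.
engine.pushVideoFrame(videoFrame, videoTrackId);
```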

You can also experience the following features with the multi-channel capability:

  • Publish multiple sets of audio and video streams to the remote users through different user IDs (uid).
  • Mix multiple audio streams and publish to the remote users through a user ID (uid).
  • Combine multiple video streams and publish them to the remote users through a user ID (uid).

2. Full HD and Ultra HD resolution (Beta)

In order to improve the interactive video experience, the SDK optimizes the whole process of video capturing, encoding, decoding, and rendering. Starting from this version, it supports Full HD (FHD) and Ultra HD (UHD) video resolutions. You can set the dimensions parameter to 1920 × 1080 or higher when calling the setVideoEncoderConfiguration method. If your device does not support high resolutions, the SDK automatically falls back to an appropriate resolution.
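For example, requesting 1080p encoding is a small configuration change, assuming an initialized `RtcEngine` instance named `engine`; on devices that cannot support the resolution, the SDK falls back automatically as described above:

```java
// Sketch: request Full HD (1920 x 1080) encoding at 30 fps.
VideoEncoderConfiguration config = new VideoEncoderConfiguration(
        new VideoEncoderConfiguration.VideoDimensions(1920, 1080),
        VideoEncoderConfiguration.FRAME_RATE.FRAME_RATE_FPS_30,
        VideoEncoderConfiguration.STANDARD_BITRATE,
        VideoEncoderConfiguration.ORIENTATION_MODE.ORIENTATION_MODE_ADAPTIVE);
engine.setVideoEncoderConfiguration(config);
```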

The UHD resolution (4K, 60 fps) is currently in beta and requires certain device performance and network bandwidth. If you want to enable this feature, contact technical support.

High resolution typically means higher performance consumption. To avoid a decrease in experience due to insufficient device performance, Agora recommends that you enable FHD and UHD video resolutions on devices with better performance.

The increase in the default resolution affects the aggregate resolution and thus the billing rate. See Pricing.

3. Agora media player

To make it easier for users to integrate the Agora SDK and reduce the SDK's package size, this release introduces the Agora media player. After calling the createMediaPlayer method to create a media player object, you can then call the methods in the IMediaPlayer class to experience a series of functions, such as playing local and online media files, preloading a media file, changing the CDN route for playing according to your network conditions, or sharing the audio and video streams being played with remote users.
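A minimal sketch of the player workflow follows, assuming an initialized `RtcEngine` instance named `engine`; the media URL is a placeholder:

```java
// Sketch: create a media player and open a media file.
IMediaPlayer player = engine.createMediaPlayer();
player.open("https://example.com/media/sample.mp4", 0); // start position in ms

// open() is asynchronous: wait for the player to report the open-completed
// state via its observer before starting playback.
player.play();
```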

4. Ultra-high audio quality

To make the audio clearer and restore more details, this release adds the ULTRA_HIGH_QUALITY_VOICE enumeration. In scenarios that mainly feature the human voice, such as chat or singing, you can call setVoiceBeautifierPreset and use this enumeration to experience ultra-high audio quality.

5. Spatial audio

This feature is experimental. To enable it, contact support@agora.io.

You can set the spatial audio for the remote user as follows:

  • Local Cartesian Coordinate System Calculation: This solution uses the ILocalSpatialAudioEngine class to implement spatial audio by calculating the spatial coordinates of the remote user. You need to call updateSelfPosition and updateRemotePosition to update the spatial coordinates of the local and remote users, respectively, so that the local user can hear the spatial audio effect of the remote user.

You can also set the spatial audio for the media player as follows:

  • Local Cartesian Coordinate System Calculation: This solution uses the ILocalSpatialAudioEngine class to implement spatial audio. You need to call updateSelfPosition and updatePlayerPositionInfo to update the spatial coordinates of the local user and the media player, respectively, so that the local user can hear the spatial audio effect of the media player.
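The coordinate updates above can be sketched as follows. This assumes an `ILocalSpatialAudioEngine` instance named `spatial` and a placeholder `remoteUid`; the parameter layouts are simplified and should be checked against the API reference:

```java
// Sketch: place the local listener at the origin, facing along the x-axis.
float[] selfPosition = {0f, 0f, 0f};
float[] forward = {1f, 0f, 0f};
float[] right   = {0f, 1f, 0f};
float[] up      = {0f, 0f, 1f};
spatial.updateSelfPosition(selfPosition, forward, right, up);

// Place a remote user 3 m ahead of the listener, facing back toward them.
RemoteVoicePositionInfo info = new RemoteVoicePositionInfo();
info.position = new float[]{3f, 0f, 0f};
info.forward  = new float[]{-1f, 0f, 0f};
spatial.updateRemotePosition(remoteUid, info);
```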

6. Real-time chorus

This release gives real-time chorus the following abilities:

  • Two or more choruses are supported.
  • Each singer is independent of each other. If one singer fails or quits the chorus, the other singers can continue to sing.
  • Very low latency experience. Each singer can hear each other in real time, and the audience can also hear each singer in real time.

This release adds the AUDIO_SCENARIO_CHORUS enumeration. With this enumeration, users can experience ultra-low latency in real-time chorus when the network conditions are good.

7. Extensions from the Agora extensions marketplace

In order to enhance the real-time audio and video interactive activities based on the Agora SDK, this release supports the one-stop solution for the extensions from the Agora extensions marketplace:

  • Easy to integrate: The integration of modular functions can be achieved simply by calling an API, and the integration efficiency is improved by nearly 95%.
  • Extensibility design: The modular and extensible SDK design style endows the Agora SDK with good extensibility, which enables developers to quickly build real-time interactive apps based on the Agora extensions marketplace ecosystem.
  • Build an ecosystem: A community of real-time audio and video apps has developed that can accommodate a wide range of developers, offering a variety of extension combinations. After integrating the extensions, developers can build richer real-time interactive functions. For details, see Use an Extension.
  • Become a vendor: Vendors can integrate their products with Agora SDK in the form of extensions, display and publish them in the Agora extensions marketplace, and build a real-time interactive ecosystem for developers together with Agora. For details on how to develop and publish extensions, see Become a Vendor.

8. Enhanced channel management

To meet the channel management requirements of various business scenarios, this release adds the following functions to the ChannelMediaOptions structure:

  • Sets or switches the publishing of multiple audio and video sources.
  • Sets or switches channel profile and user role.
  • Sets or switches the stream type of the subscribed video.
  • Controls audio publishing delay.

Set ChannelMediaOptions when calling joinChannel or joinChannelEx to specify the publishing and subscription behavior of a media stream, for example, whether to publish video streams captured by cameras or screen sharing, and whether to subscribe to the audio and video streams of remote users. After joining the channel, call updateChannelMediaOptions to update the settings in ChannelMediaOptions at any time, for example, to switch the published audio and video sources.
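As a sketch of that join-then-update flow, assuming an initialized `RtcEngine` instance named `engine` and placeholder `token` and channel values:

```java
// Declare publishing and subscription behavior when joining.
ChannelMediaOptions options = new ChannelMediaOptions();
options.channelProfile = Constants.CHANNEL_PROFILE_LIVE_BROADCASTING;
options.clientRoleType = Constants.CLIENT_ROLE_BROADCASTER;
options.publishCameraTrack = true;      // publish camera video
options.publishMicrophoneTrack = true;  // publish microphone audio
options.autoSubscribeAudio = true;
options.autoSubscribeVideo = true;
engine.joinChannel(token, "demoChannel", 0, options);

// Later, switch the published video source from camera to screen sharing.
ChannelMediaOptions update = new ChannelMediaOptions();
update.publishCameraTrack = false;
update.publishScreenCaptureVideo = true;
engine.updateChannelMediaOptions(update);
```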

9. Screen sharing

This release optimizes the screen sharing function. You can enable this function in the following ways.

  • Call the startScreenCapture method before joining a channel, and then call joinChannel [2/2] to join a channel and set publishScreenCaptureVideo as true.
  • Call the startScreenCapture method after joining a channel, and then call updateChannelMediaOptions to set publishScreenCaptureVideo as true.
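The second approach above might look like this, assuming an initialized `RtcEngine` instance named `engine` that has already joined a channel; `ScreenCaptureParameters` is used with defaults for brevity:

```java
// Sketch: start capturing the screen while already in a channel...
engine.startScreenCapture(new ScreenCaptureParameters());

// ...then publish the captured screen video.
ChannelMediaOptions options = new ChannelMediaOptions();
options.publishScreenCaptureVideo = true;
options.publishCameraTrack = false; // optional: stop the camera while sharing
engine.updateChannelMediaOptions(options);
```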

10. Subscription allowlists and blocklists

This release introduces subscription allowlists and blocklists for remote audio and video streams. You can add a user ID that you want to subscribe to in your allowlist, or add a user ID for the streams you do not wish to see to your blocklists. You can experience this feature through the following APIs, and in scenarios that involve multiple channels, you can call the following methods in the RtcEngineEx interface:

  • setSubscribeAudioBlacklist: Sets the audio subscription blocklist.
  • setSubscribeAudioWhitelist: Sets the audio subscription allowlist.
  • setSubscribeVideoBlacklist: Sets the video subscription blocklist.
  • setSubscribeVideoWhitelist: Sets the video subscription allowlist.

If a user is added to both a blocklist and an allowlist at the same time, only the blocklist takes effect.

11. Set audio scenarios

To make it easier to change audio scenarios, this release adds the setAudioScenario method. For example, if you want to change the audio scenario from AUDIO_SCENARIO_DEFAULT to AUDIO_SCENARIO_GAME_STREAMING when you are in a channel, you can call this method.
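The example above amounts to a single call, assuming an initialized `RtcEngine` instance named `engine` and scenario constants from the Constants class:

```java
// Switch the audio scenario from the default to game streaming in-channel.
engine.setAudioScenario(Constants.AUDIO_SCENARIO_GAME_STREAMING);
```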

Improvements

1. Fast channel switching

This release achieves the same switching speed as switchChannel in v3.7.x through the leaveChannel and joinChannel methods, so you no longer need a separate switchChannel call.

2. Push external video frames

This release supports pushing video frames in I422 format. You can call the pushExternalVideoFrame [1/2] method to push such video frames to the SDK.

3. Voice pitch of the local user

This release adds voicePitch in AudioVolumeInfo of onAudioVolumeIndication. You can use voicePitch to get the local user's voice pitch and perform business functions such as rating for singing.

4. Device permission management

This release adds the onPermissionError callback, which is triggered automatically when the audio capture device or camera fails to obtain the appropriate permission. You can enable the corresponding device permission according to the prompt of the callback.

5. Video preview

This release improves the implementation logic of startPreview. You can call the startPreview method to enable video preview at any time.

6. Video types of subscription

You can call the setRemoteDefaultVideoStreamType method to choose the video stream type when subscribing to streams.

Notifications

2022.10

  • After you enable Notifications, your server receives the events that you subscribe to in the form of HTTPS requests.
  • To improve communication security between the Notifications and your server, Agora SD-RTN™ uses signatures for identity verification.
  • As of this release, you can use Notifications in conjunction with this product.

AI Noise Suppression

Agora charges additionally for this extension. See Pricing.

v1.1.0

Improvement

This release improves the calculation performance of the AI-powered noise suppression algorithm.

New features

This release adds the following APIs and parameters:

  • APIs:
    • checkCompatibility: Checks whether the AI Noise Suppression extension is supported on the current browser.
    • setMode: Sets the noise suppression mode as AI noise suppression or stationary noise suppression.
    • setLevel: Sets the AI noise suppression level.
  • Parameters:
    • elapsedTime in onoverload: Reports the time in ms that the extension needs to process one audio frame.

For API details, see AI Noise Suppression.

Compatibility changes

This release brings the following changes:

  • AI Noise Suppression supports Agora Video SDK for Web v4.15.0 or later.
  • The extension has Wasm dependencies only. Because JS dependencies are removed, you need to publish the Wasm files located in the node_modules/agora-extension-ai-denoiser/external directory again. If you have enabled the Content Security Policy (CSP), you need to modify the CSP configuration. See AI Noise Suppression for details.
  • The audio data is dumped in PCM format instead of WAV format.
  • To adjust the intensity of noise suppression, best practice is to call setLevel.

v1.0.0

First release.

Virtual Background

v1.2.0

v1.2.0 was released on December 10, 2023.

Compatibility changes

As of this version, the Virtual Background extension incorporates the necessary Wasm module. You no longer need to publish the Wasm file separately or pass the wasmDir parameter when calling the init method to initialize the extension.

After upgrading to this version, modify your code accordingly.

Improvements

This release upgrades the background segmentation algorithm of the extension, optimizing the segmentation effects on the subject, edges, and fingers in complex static and dynamic backgrounds.

Fixed issues

This release fixed the issue that checkCompatibility could return inaccurate results on specific devices.

API changes

The wasmDir parameter of the init method is now optional.

v1.1.3

Fixed issues

This release fixes the occasional issue of jagged background images on Chrome for Android.

v1.1.2

New features

You can now specify the fit property when calling setOptions. This sets how the background is resized to fit the container. For API details, see Virtual background.

Compatibility changes

Virtual Background supports Agora Video SDK for Web v4.15.0 or later.

v1.1.1

New features

You can now call checkCompatibility to test whether the Virtual Background extension is supported on the current browser. For API details, see Virtual background.

Fixed issues

A black bar is no longer displayed to the left of the virtual background.

v1.1.0

New features

You can create multiple VirtualBackgroundProcessor instances to process multiple video streams.

v1.0.0

First release.

MetaKit

v2.2.0

v2.2.0 was released on September 13, 2024.

New features

Stickers

This version adds the sticker feature that uses the face capture driver and follows the character's head movements. The SDK provides subpackages for glasses (material_sticker_glass), masks (material_sticker_facemask), veils (material_sticker_veil), and dragon head hats (material_sticker_dragonhat). You can load the corresponding scene resources to quickly experience the effects of different stickers.

Portrait edge light

This release adds a new portrait edge light effect. To experience it, specify the subpackage resource as material_effect_ray when loading scene resources.

Advertising light text-to-image

This version adds a text-to-image feature for the advertising lights. When setting the advertising light effects, you can specify the text content through the text parameter and add animation effects, such as jumping, waving, or swaying, through the animation parameter to make the text appear dynamically on the screen.

Improvements

This update adds support for loadMaterial and unloadMaterial in setExtensionProperty.

  • When using loadMaterial to load scene resources, you can specify the resource path of the required subpackage. The engine will automatically request the scene texture and render the scene.
  • If you need to switch to another scene, just use loadMaterial again to pass in another subpackage resource path. The engine will automatically switch to the new scene.
  • When you no longer need the scene, use unloadMaterial to unload the scene resources.

This improvement significantly enhances the usability of the extension. You can quickly experience different functional scenarios by simply specifying different subpackage resource paths when loading.

API changes

Added

  • loadMaterial
  • unloadMaterial

v2.1.0

v2.1.0 was released on March 6, 2024.

Improvements

This update optimizes the performance of the MetaKit extension and adds support for two new keys in setExtensionProperty: requestTexture and switchTextureAvatarMode. These replace the original addSceneView and switchAvatarMode functions, respectively. The new interface supports automatic generation and return of texture data, which is then directly rendered, previewed, encoded, and transmitted through the SDK. This enhancement improves rendering performance, reduces latency, and ensures more efficient operation and a better user experience.

API changes

Added

  • requestTexture and switchTextureAvatarMode keys in setExtensionProperty

v2.0.0

v2.0.0 was released on February 23, 2024.

This is the first release of the MetaKit extension. This extension integrates multiple AI technologies to provide users with diverse video enhancement functions in audio and video interaction scenarios.

Camera Movement

v1.2.0

v1.2.0 was released on April 29, 2024.

Improvements

  • Added support for Video SDK v4.3.x.
  • Improved the performance of the Camera Movement extension, reducing potential stuttering issues on low-end devices.

v1.0.0

v1.0.0 was released on February 23, 2024.

This is the initial release of the Agora Camera Movement extension.

The extension uses AI technology and intelligent camera movement algorithms to provide various features for audio and video interaction scenarios.

Video Calling