Agora Native SDK for iOS Reference Manual

Introduction

The Agora Native SDK is optimized for smartphones and other mobile devices, providing access to the Agora Global Network along with device-specific mobile optimizations. The Agora Native SDK for iOS allows you to perform the following operations:

  • Session setup: Join and leave shared Agora conferencing sessions (identified by unique channel names), which may conference many global users together or connect just one other user for one-to-one communication. Your application code should create and manage unique channel names; these are usually derived from user, session, and date/time information that your application already manages.
  • Media control: Enable and disable voice and video (allowing muting) and set various voice and video parameters that help the Agora Native SDK optimize communications. Many of the SDK operations are automated and do not require developer intervention if these parameter settings are not provided.
  • Device control: Access the microphone or speakerphone, set the volume, select from alternative cameras, and set the video window display.
  • Session control: Receive events when other users join or leave a conferencing session (channel), and learn who is speaking, who is muted, and who is viewing the session. These events allow the application to decide when to change the delivery of video and audio streams, as well as other application-specific user and status information.
  • Quality management: Obtain statistics and quality indicators about network conditions, run built-in tests, submit ratings and complaints about sessions, and enable different levels of error logging.
  • Recording: Record audio, or video and audio, in one or multiple channels simultaneously. This function applies only to users who have adopted the Recording Key schema. For details, refer to Agora Recording Server.
  • Data management: Encrypt the audio and video packets, modify the raw audio or video data, and receive reliable, ordered packets via data channels.

The Agora Native SDK for iOS provides multiple Objective-C classes to deliver the following features. For detailed API usage, see API Reference - Agora Native SDK for iOS.

  • The AgoraRtcEngineKit class provides all the methods that your application can invoke.
  • The AgoraRtcEngineDelegate protocol enables callback event notifications to your application.

As of version 1.1 of the SDK, delegate methods replace some of the Block callbacks. The Block callbacks are still documented below; however, where appropriate, we recommend replacing them with delegate methods. If a callback is defined in both a Block and a delegate, the SDK calls the Block method.

The Agora Native SDK for iOS provides two C++ classes to deliver the following features. For details, see Raw Data API.

  • IAudioFrameObserver class modifies the audio raw data.
  • IVideoFrameObserver class modifies the video raw data.

The Agora Native SDK may return error or warning codes during API calls or at runtime. For details, see the Agora Native SDK section in Error and Warning Messages.

Required Development Environments

  • Apple Xcode 6.0 or later
  • iOS simulator or a real device with audio functionality

Note

If you want to use the startAudioMixing API, ensure that the iOS device runs iOS 8.0 or later.

Required Libraries

The Agora Native SDK for iOS requires the iOS 6.0 SDK or later. Make sure that your main project links with the following libraries in the SDK:

  • AgoraRtcEngineKit.framework (the Agora Native SDK)
  • AudioToolbox.framework
  • VideoToolbox.framework
  • AVFoundation.framework
  • CoreMedia.framework
  • CoreTelephony.framework
  • CoreMotion.framework
  • SystemConfiguration.framework
  • libc++.dylib

Note

By default, the Agora Native SDK uses libc++ (LLVM). Contact support@agora.io if you prefer to use libstdc++ (GNU).

In your source file, use the following directive to import AgoraRtcEngineKit:

#import <AgoraRtcEngineKit/AgoraRtcEngineKit.h>

The SDK provides FAT image libraries with multi-architecture support: 32-bit/64-bit simulators (audio only) and 32-bit/64-bit real devices (audio and video).

Getting Started

Obtaining SDK

Download the latest SDK from http://www.agora.io/downloads/ or contact sales@agora.io.

About Keys

This section describes the concepts and use of App ID, App Certificate, Dynamic Key, Channel Key and Recording Key. For details, see Agora Keys User Guide.

Creating a New Project

  1. Create a new project in Xcode.

    ../_images/ios-project-1.png
  2. After you create a project, the development environment interface is displayed.

  3. Navigate to libs/AgoraRtcEngineKit.framework in the Agora Native SDK for iOS package.

  4. Right-click AgoraRtcEngineKit.framework and select Add Files to Project from the pop-up menu to add it to the project.

  5. AgoraRtcEngineKit.framework requires the system frameworks listed in Required Libraries. Add them under Link Binary with Libraries in the Build Phases tab.

  6. Include the header file in your source code with #import <AgoraRtcEngineKit/AgoraRtcEngineKit.h> to enable the use of the Agora Native SDK for iOS.

Refer to the API Reference - Agora Native SDK for iOS for a complete reference of the SDK methods.

Using the Demo Application

Agora provides the following demo application in the Agora Native SDK for iOS zip file (this section describes the full package, which includes both video and voice):

AgoraDemo includes basic methods for entering and leaving a call, demonstrates the core operations of Agora Native SDK for iOS, and enables audio and video calls with simple method calls.

  • AgoraDemo/AgoraDemo: The source file folder of the demo program
  • AgoraDemo/AgoraDemo.xcodeproj: The project file of the demo program
  • AgoraDemo/AgoraDemoTests: The test file folder of the demo program
  • libs/AgoraRtcEngineKit.framework: The SDK framework

Note

Visit https://github.com/AgoraLab/ for more open source demo applications.

Compiling the Demo Application

  1. Run Xcode.
  2. Open the AgoraDemo.xcodeproj file.
../_images/ios-demo-2.png
  3. Open the project file. The Navigation panel on the left displays the file structure of AgoraDemo, highlighted as follows:
../_images/ios-demo-3.png
  4. Select the project.
  5. Click the Build and Run button to start compiling.
../_images/ios-demo-4.png

Note

  • AgoraDemo uses both video and voice and only supports real devices. It does not support simulators.
  • If a “Failed to code sign” error occurs, change the code signing to your activated device, or use your Apple developer account.
../_images/ios-demo-5.png

Executing the Demo Application

Run the AgoraDemo program after deploying it to your iOS device.

  1. The following shows the first page of the demo application. You need two devices to run the demo for both voice and video calls.
../_images/ios-demo-6.png
  2. Make a voice call:

    1. On each device, enter your App ID and meeting room name (channel name), for example, 2804. The entries on both devices must match, as shown above.

      ../_images/ios-demo-7.png
    2. Click or tap the Join Voice Call button to enter the audio call page.

      ../_images/ios-demo-8.png
    3. The call page features the following functions: mute/unmute, enable/disable speaker, and leave the room.

  3. Make a video call:

    1. Click or tap the Video Call button to enter the main call page.

    2. On each device, enter your App ID and meeting room name (channel name). The entries on both devices must match.

      ../_images/ios-demo-9.png
    3. Click or tap Join Video Call to enter the video call page.

    The call page features the following functions: mute/unmute, enable/disable speaker, open/close camera, switch cameras, and leave the room.

  4. Switch between video and voice calls if necessary, using the Video Call and Voice Call buttons at the bottom of the screen, as shown in the above figure.

Note

The SDK and the AgoraDemo application support up to 5-way group video sessions, as shown in the following figure. Multiple users join using the same App ID and room number (channel name in API).

../_images/ios-demo-10.png

Encrypting Data

The Agora Native SDK allows your application to encrypt audio and video packets in one of the following ways:

Note

If your application has integrated the built-in encryption of the Agora SDK, users can record an encrypted channel.

Using Agora Native SDK Built-in Encryption

Agora Native SDKs support built-in encryption using the AES-128 or AES-256 encryption algorithm. Call setEncryptionSecret to enable encryption, and then call setEncryptionMode to set the encryption mode. For details, see API Reference - Agora Native SDK for iOS.

The following diagram depicts the built-in encryption/decryption process:

../_images/agora-encryption.png

Implementing Your Customized Data Encryption Algorithm

The Agora Native SDK allows your application to encrypt audio and video packets by implementing your own data encryption algorithm.

The following diagram depicts the data encryption/decryption process:

../_images/developer-encryption.png

These are the steps:

  1. Registering a Packet Observer
  2. Implementing a Customized Data Encryption Algorithm
  3. Registering the Instance

Registering a Packet Observer

The Agora Native SDK allows your application to register a packet observer to receive events whenever an audio or video packet is transmitted.

Register a packet observer in your application using the following API:

virtual int registerPacketObserver(IPacketObserver* observer);

The observer must inherit from agora::rtc::IPacketObserver and be implemented in C++. The IPacketObserver class is defined as follows:

class IPacketObserver
{
public:

    struct Packet
    {
        const unsigned char* buffer;  // pointer to the packet data
        unsigned int size;            // size of the packet data
    };
    /**
    * Called by the SDK before an audio packet is sent to the other participants.
    * @param [in,out] packet: buffer points to the data to be sent;
    *                         size is the length of that data
    * @return true to send the packet, or false to discard it
    */
    virtual bool onSendAudioPacket(Packet& packet) = 0;
    /**
    * Called by the SDK before a video packet is sent to the other participants.
    * @param [in,out] packet: buffer points to the data to be sent;
    *                         size is the length of that data
    * @return true to send the packet, or false to discard it
    */
    virtual bool onSendVideoPacket(Packet& packet) = 0;
    /**
    * Called by the SDK when an audio packet is received from the other participants.
    * @param [in,out] packet: buffer points to the received data;
    *                         size is the length of that data
    * @return true to process the packet, or false to discard it
    */
    virtual bool onReceiveAudioPacket(Packet& packet) = 0;
    /**
    * Called by the SDK when a video packet is received from the other participants.
    * @param [in,out] packet: buffer points to the received data;
    *                         size is the length of that data
    * @return true to process the packet, or false to discard it
    */
    virtual bool onReceiveVideoPacket(Packet& packet) = 0;
};

Implementing a Customized Data Encryption Algorithm

Inherit from agora::rtc::IPacketObserver to implement your customized data encryption algorithm in your application. The following example uses XOR for data processing. In the Agora Native SDK, sending and receiving packets are handled by different threads, which is why encryption and decryption must use separate buffers:

class AgoraPacketObserver : public agora::rtc::IPacketObserver
 {
 public:
     AgoraPacketObserver()
     {
         m_txAudioBuffer.resize(2048);
         m_rxAudioBuffer.resize(2048);
         m_txVideoBuffer.resize(2048);
         m_rxVideoBuffer.resize(2048);
     }
     virtual bool onSendAudioPacket(Packet& packet)
     {
         int i;
         //encrypt the packet
         const unsigned char* p = packet.buffer;
         const unsigned char* pe = packet.buffer+packet.size;
         for (i = 0; p < pe && i < m_txAudioBuffer.size(); ++p, ++i)
         {
             m_txAudioBuffer[i] = *p ^ 0x55;
         }
         //assign new buffer and the length back to SDK
         packet.buffer = &m_txAudioBuffer[0];
         packet.size = i;
         return true;
     }

     virtual bool onSendVideoPacket(Packet& packet)
     {
         int i;
         //encrypt the packet
         const unsigned char* p = packet.buffer;
         const unsigned char* pe = packet.buffer+packet.size;
         for (i = 0; p < pe && i < m_txVideoBuffer.size(); ++p, ++i)
         {
             m_txVideoBuffer[i] = *p ^ 0x55;
         }
         //assign new buffer and the length back to SDK
         packet.buffer = &m_txVideoBuffer[0];
         packet.size = i;
         return true;
     }

     virtual bool onReceiveAudioPacket(Packet& packet)
     {
         int i = 0;
         //decrypt the packet
         const unsigned char* p = packet.buffer;
         const unsigned char* pe = packet.buffer+packet.size;
         for (i = 0; p < pe && i < m_rxAudioBuffer.size(); ++p, ++i)
         {
             m_rxAudioBuffer[i] = *p ^ 0x55;
         }
         //assign new buffer and the length back to SDK
         packet.buffer = &m_rxAudioBuffer[0];
         packet.size = i;
         return true;
     }

     virtual bool onReceiveVideoPacket(Packet& packet)
     {
         int i = 0;
         //decrypt the packet
         const unsigned char* p = packet.buffer;
         const unsigned char* pe = packet.buffer+packet.size;
         for (i = 0; p < pe && i < m_rxVideoBuffer.size(); ++p, ++i)
         {
             m_rxVideoBuffer[i] = *p ^ 0x55;
         }
         //assign new buffer and the length back to SDK
         packet.buffer = &m_rxVideoBuffer[0];
         packet.size = i;
         return true;
     }

 private:
     std::vector<unsigned char> m_txAudioBuffer; //buffer for sending audio data
     std::vector<unsigned char> m_txVideoBuffer; //buffer for sending video data

     std::vector<unsigned char> m_rxAudioBuffer; //buffer for receiving audio data
     std::vector<unsigned char> m_rxVideoBuffer; //buffer for receiving video data
 };

Registering the Instance

Call registerPacketObserver to register the instance of the agora::rtc::IPacketObserver class implemented by your application.

Modifying Agora Raw Data

The Agora raw data interface is an advanced feature of the SDK library that lets you obtain the raw audio/video data of the SDK engine. As a developer, you can modify this audio or video data to create special effects that meet the needs of your application.

You can insert a pre-processing stage before the data is sent to the encoder, modifying the captured video frames or audio signals. You can also insert a post-processing stage after the received data is decoded, modifying the received video frames or audio signals.

The Agora raw data interface is a C++ interface.

Modifying Audio Raw Data

See the IAudioFrameObserver interface class in the Raw Data API for a detailed API description.

  1. Define AgoraAudioFrameObserver by inheriting from IAudioFrameObserver (defined in IAgoraMediaEngine.h). You need to implement the following virtual interfaces:

For example,

class AgoraAudioFrameObserver : public agora::media::IAudioFrameObserver
{
  public:
    virtual bool onRecordAudioFrame(AudioFrame& audioFrame) override
    {
      return true;
    }
    virtual bool onPlaybackAudioFrame(AudioFrame& audioFrame) override
    {
      return true;
    }
    virtual bool onPlaybackAudioFrameBeforeMixing(unsigned int uid, AudioFrame& audioFrame) override
    {
      return true;
    }
};

The above example simply returns true from the audio pre-processing and post-processing interfaces; you can modify the data in these callbacks if necessary. The IAudioFrameObserver class is defined as follows:

class IAudioFrameObserver
{
  public:
    enum AUDIO_FRAME_TYPE {
      FRAME_TYPE_PCM16 = 0,  //PCM 16bit little endian
    };
    struct AudioFrame {
      AUDIO_FRAME_TYPE type;
      int samples;  //number of samples in this frame
      int bytesPerSample;  //number of bytes per sample: 2 for PCM16
      int channels;  //number of channels (data is interleaved if stereo)
      int samplesPerSec;  //sampling rate
      void* buffer;  //data buffer
    };
  public:
    virtual bool onRecordAudioFrame(AudioFrame& audioFrame) = 0;
    virtual bool onPlaybackAudioFrame(AudioFrame& audioFrame) = 0;
    virtual bool onPlaybackAudioFrameBeforeMixing(unsigned int uid, AudioFrame& audioFrame) = 0;
};
  2. Register the audio frame observer with the SDK engine. After creating the IRtcEngine object, and before joining a channel, register the audio observer object:
AgoraAudioFrameObserver s_audioFrameObserver;

agora::util::AutoPtr<agora::media::IMediaEngine> mediaEngine;
mediaEngine.queryInterface(*engine, agora::rtc::AGORA_IID_MEDIA_ENGINE);
if (mediaEngine)
{
  mediaEngine->registerAudioFrameObserver(&s_audioFrameObserver);
}

Note

Use the following method to obtain the engine pointer; kit is an AgoraRtcEngineKit* instance.

agora::rtc::IRtcEngine* rtc_engine = (agora::rtc::IRtcEngine*)kit.getNativeHandle;
agora::util::AutoPtr<agora::media::IMediaEngine> mediaEngine;
mediaEngine.queryInterface(*rtc_engine, agora::rtc::AGORA_IID_MEDIA_ENGINE);

Modifying Video Raw Data

See the IVideoFrameObserver interface class in the Raw Data API for a detailed API description.

  1. Define AgoraVideoFrameObserver by inheriting from IVideoFrameObserver (defined in IAgoraMediaEngine.h). You need to implement the following virtual interfaces:

    For example,

    class AgoraVideoFrameObserver : public agora::media::IVideoFrameObserver
    {
      public:
      virtual bool onCaptureVideoFrame(VideoFrame& videoFrame) override
      {
        return true;
      }
      virtual bool onRenderVideoFrame(unsigned int uid, VideoFrame& videoFrame) override
      {
        return true;
      }
    };
    

The above example simply returns true from the video pre-processing and post-processing interfaces; you can modify the data in these callbacks if necessary. The IVideoFrameObserver class is defined as follows:

class IVideoFrameObserver
{
  public:
    enum VIDEO_FRAME_TYPE {
      FRAME_TYPE_YUV420 = 0,  //YUV 420 format
    };
    struct VideoFrame {
      VIDEO_FRAME_TYPE type;
      int width;  //width of video frame
      int height;  //height of video frame
      int yStride;  //stride of Y data buffer
      int uStride;  //stride of U data buffer
      int vStride;  //stride of V data buffer
      void* yBuffer;  //Y data buffer
      void* uBuffer;  //U data buffer
      void* vBuffer;  //V data buffer
    };
  public:
    virtual bool onCaptureVideoFrame(VideoFrame& videoFrame) = 0;
    virtual bool onRenderVideoFrame(unsigned int uid, VideoFrame& videoFrame) = 0;
};
  2. Register the video frame observer with the SDK engine. After creating the IRtcEngine object and enabling video mode, and before joining a channel, register the video observer object:
AgoraVideoFrameObserver s_videoFrameObserver;

agora::util::AutoPtr<agora::media::IMediaEngine> mediaEngine;
mediaEngine.queryInterface(*engine, agora::rtc::AGORA_IID_MEDIA_ENGINE);
if (mediaEngine)
{
  mediaEngine->registerVideoFrameObserver(&s_videoFrameObserver);
}

Note

Use the following method to obtain the engine pointer; kit is an AgoraRtcEngineKit* instance.

agora::rtc::IRtcEngine* rtc_engine = (agora::rtc::IRtcEngine*)kit.getNativeHandle;
agora::util::AutoPtr<agora::media::IMediaEngine> mediaEngine;
mediaEngine.queryInterface(*rtc_engine, agora::rtc::AGORA_IID_MEDIA_ENGINE);