# Overview
This CL chain exposes a new API in the ObjC WebRTC SDK for injecting
custom means of playing and recording audio. The goal of the CLs is
achieved by adding another implementation of `webrtc::AudioDeviceModule`
called `ObjCAudioDeviceModule`. The distinguishing feature of
`ObjCAudioDeviceModule` is that it does not directly use any OS-provided
audio API such as AudioUnit, AVAudioEngine, AudioQueue or
AVCaptureSession. Instead, it delegates communication with the specific
system audio API to a user-injectable audio device instance which
implements the `RTCAudioDevice` protocol.
`RTCAudioDevice` is a new API added to the ObjC WebRTC SDK in this CL
chain.
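A minimal sketch of what an injected audio device could look like on the
user side, assuming the protocol exposes initialization and playout
hooks along the lines described above (the method set shown is trimmed
and the exact names are assumptions):

```objc
#import <AVFoundation/AVFoundation.h>
#import <WebRTC/WebRTC.h>  // assumed umbrella header exposing RTCAudioDevice

// Sketch of a user-injectable audio device backed by AVAudioEngine. The
// selector names are assumptions modeled on the description above; the real
// protocol surface may differ in details.
@interface MyAVAudioEngineDevice : NSObject <RTCAudioDevice>
@end

@implementation MyAVAudioEngineDevice {
  id<RTCAudioDeviceDelegate> _delegate;  // provided by ObjCAudioDeviceModule
  AVAudioEngine *_engine;                // any system audio API could back this
}

- (BOOL)initializeWithDelegate:(id<RTCAudioDeviceDelegate>)delegate {
  _delegate = delegate;
  _engine = [[AVAudioEngine alloc] init];
  return YES;
}

- (BOOL)initializePlayout { /* wire up _engine output chain */ return YES; }
- (BOOL)startPlayout      { return [_engine startAndReturnError:nil]; }
- (BOOL)stopPlayout       { [_engine stop]; return YES; }
// Recording counterparts and audio-parameter properties omitted for brevity.
@end
```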
# AudioDeviceBuffer
`ObjCAudioDeviceModule` conforms to the heavyweight `AudioDeviceModule`
interface, providing stubs for unrelated methods. It also implements the
common low-level management of the audio device buffer, which glues the
audio PCM flow to/from WebRTC.
`ObjCAudioDeviceModule` owns a single `webrtc::AudioDeviceBuffer` which,
with the help of two `FineAudioBuffer`s (one for recording and one for
playout), exchanges audio PCM with the user-provided `RTCAudioDevice`
instance.
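For playout, for example, the path from the device's real-time callback
down to the `AudioDeviceBuffer` could look roughly like the sketch
below; the delegate selector and its signature are assumptions modeled
on the description (the real exchange is in terms of Core Audio
`AudioBufferList`s), and recording works symmetrically by delivering
captured PCM to the delegate:

```objc
#import <AudioToolbox/AudioToolbox.h>

// Sketch of an AudioUnit render callback inside an RTCAudioDevice
// implementation. WebRTC fills `ioData` from its AudioDeviceBuffer via the
// playout FineAudioBuffer. Selector name and signature are assumptions.
static OSStatus PlayoutCallback(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData) {
  id<RTCAudioDeviceDelegate> delegate = (__bridge id)inRefCon;
  return [delegate getPlayoutData:ioActionFlags
                        timestamp:inTimeStamp
                        busNumber:inBusNumber
                       frameCount:inNumberFrames
                       outputData:ioData];
}
```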
`webrtc::AudioDeviceBuffer` is configured for a specific audio format:
it has to know the sample rate and channel count of the audio being
played and recorded. These formats may differ between playout and
recording. `ObjCAudioDeviceModule` stores the current audio parameters
applied to `webrtc::AudioDeviceBuffer` as fields of type
`webrtc::AudioParameters`. `RTCAudioDevice` has its own, possibly
changing, audio parameters such as sample rate, channel count and IO
buffer duration. The audio parameters of `RTCAudioDevice` must be kept
in sync with the audio parameters applied to
`webrtc::AudioDeviceBuffer`; otherwise audio playout and recording will
be corrupted: audio is sent only partially over the wire and/or played
back with artifacts.
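On the `RTCAudioDevice` side these parameters are typically reported via
readonly properties; the property names below are assumptions used for
illustration, with values backed by `AVAudioSession`:

```objc
// Sketch: values an RTCAudioDevice implementation reports so that
// ObjCAudioDeviceModule can build matching webrtc::AudioParameters for its
// webrtc::AudioDeviceBuffer. Property names are assumptions.
- (double)deviceOutputSampleRate {
  return [AVAudioSession sharedInstance].sampleRate;        // e.g. 48000 Hz
}
- (NSTimeInterval)outputIOBufferDuration {
  return [AVAudioSession sharedInstance].IOBufferDuration;  // e.g. 0.02 s
}
- (NSInteger)outputNumberOfChannels {
  return 1;  // recording-side counterparts are reported the same way
}
```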
`ObjCAudioDeviceModule` reads the current `RTCAudioDevice` audio
parameters when playout or recording is initialized. Whenever the
`RTCAudioDevice` audio parameters change, `ObjCAudioDeviceModule` must
be notified so that it can reconfigure its `webrtc::AudioDeviceBuffer`.
The notification is performed via the `RTCAudioDeviceDelegate` object,
which is provided by `ObjCAudioDeviceModule` during initialization of
`RTCAudioDevice`.
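For example, a device reacting to an audio route change might notify the
module roughly like this (the delegate selector names are assumptions
modeled on the description):

```objc
// Sketch: the playout/recording format changed (e.g. after a route change),
// so ask ObjCAudioDeviceModule to re-read the device's audio parameters and
// reconfigure its AudioDeviceBuffer. Selector names are assumptions.
- (void)handleRouteChange:(NSNotification *)notification {
  [_delegate notifyAudioOutputParametersChange];
  [_delegate notifyAudioInputParametersChange];
}
```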
# Threading
`ObjCAudioDeviceModule` sticks to the same thread between initialization
and termination. The only exception is the two IO functions, which are
invoked by SDK user code, presumably from a real-time audio IO thread.
An implementation of `RTCAudioDevice` may rely on the fact that all the
methods of `RTCAudioDevice` are called on that same thread between
initialization and termination. `ObjCAudioDeviceModule` also expects
that the `RTCAudioDevice` implementation invokes the methods related to
notification of audio parameter changes and audio interruptions on the
`ObjCAudioDeviceModule` thread. To facilitate this requirement,
`RTCAudioDeviceDelegate` provides two functions to execute a block
synchronously or asynchronously on the `ObjCAudioDeviceModule` thread.
The async variant can be useful when handling audio session
notifications, to dispatch a whole block that reconfigures the audio
objects used by the `RTCAudioDevice` implementation, as sketched below.
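A sketch of that usage, assuming the async hook is exposed as
`dispatchAsync:` and that the device has a hypothetical
`rebuildAudioEngine` helper:

```objc
// Sketch: hop onto the ObjCAudioDeviceModule thread and reconfigure the
// device's audio objects there. `dispatchAsync:` and `rebuildAudioEngine`
// are assumed/hypothetical names.
- (void)handleMediaServicesWereReset:(NSNotification *)notification {
  __weak __typeof(self) weakSelf = self;
  [_delegate dispatchAsync:^{
    [weakSelf rebuildAudioEngine];
  }];
}
```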
The sync variant can be used to make sure that changes to the audio
parameters of the `AudioDeviceBuffer` owned by `ObjCAudioDeviceModule`
have been propagated before interrupted playout/recording is restarted.
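A corresponding sketch, with the sync hook assumed to be named
`dispatchSync:` and the notification selectors assumed as above:

```objc
// Sketch: before restarting IO after an interruption, synchronously push the
// (possibly changed) audio parameters to ObjCAudioDeviceModule so its
// AudioDeviceBuffer is reconfigured before the first audio callback fires.
- (void)handleInterruptionEnded {
  [_delegate dispatchSync:^{
    [_delegate notifyAudioInputParametersChange];
    [_delegate notifyAudioOutputParametersChange];
  }];
  // ... then restart the interrupted playout/recording.
}
```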
Bug: webrtc:14193
Change-Id: I5587ec6bbee3cf02bad70dd59b822feb0ada7f86
Reviewed-on: https://webrtc-review.googlesource.com/c/src/+/269006
Reviewed-by: Henrik Andreasson <henrika@google.com>
Commit-Queue: Yury Yarashevich <yura.yaroshevich@gmail.com>
Reviewed-by: Peter Hanspers <peterhanspers@webrtc.org>
Reviewed-by: Henrik Andreassson <henrika@webrtc.org>
Reviewed-by: Tomas Gunnarsson <tommi@webrtc.org>
Cr-Commit-Position: refs/heads/main@{#37928}