This CL contains major modifications to the audio output path for WebRTC on iOS:
- general code cleanup
- improves thread handling (adds thread checks, removes critical sections, uses atomic ops, etc.)
- reduces loopback latency on iPhone 6 from ~90ms to ~60ms ;-)
- improves selection of audio parameters on iOS
- reduces complexity by removing redundant delay estimates
- instead uses fixed delay estimates if, for some reason, the software AEC must be used
- adds FineAudioBuffer to compensate for the difference between the native output buffer size and
the 10ms size used by WebRTC. This is the same class as is used on Android today, and it has unit
tests (the old code was buggy and we have several issue reports of crashes related to it). A minimal
sketch of the buffering idea follows this list.
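The gist of the approach, as a hedged C++ sketch (assumed names, not the actual FineAudioBuffer interface): pull fixed 10ms chunks from WebRTC, hand out whatever block size the native audio unit requests, and cache the remainder.

  #include <algorithm>
  #include <cstddef>
  #include <cstdint>
  #include <functional>
  #include <utility>
  #include <vector>

  class PlayoutChunker {
   public:
    // |request_10ms| is expected to fill exactly 10ms of samples per call.
    PlayoutChunker(int sample_rate_hz,
                   std::function<void(int16_t*, size_t)> request_10ms)
        : samples_per_10ms_(static_cast<size_t>(sample_rate_hz / 100)),
          request_10ms_(std::move(request_10ms)) {}

    // Fills |dest| with |samples| samples for the native output callback,
    // pulling new 10ms chunks from WebRTC whenever the cache runs dry.
    void GetPlayoutData(int16_t* dest, size_t samples) {
      while (cache_.size() < samples) {
        const size_t old_size = cache_.size();
        cache_.resize(old_size + samples_per_10ms_);
        request_10ms_(cache_.data() + old_size, samples_per_10ms_);
      }
      std::copy(cache_.begin(), cache_.begin() + samples, dest);
      cache_.erase(cache_.begin(), cache_.begin() + samples);
    }

   private:
    const size_t samples_per_10ms_;
    std::function<void(int16_t*, size_t)> request_10ms_;
    std::vector<int16_t> cache_;  // Pulled from WebRTC but not yet played out.
  };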
Similar improvements will be done for the recording side as well in a separate CL.
I will also add support for 48kHz in an upcoming CL since that will improve Opus performance.
BUG=webrtc:4796,webrtc:4817,webrtc:4954,webrtc:4212
TEST=AppRTC demo and iOS modules_unittests using --gtest_filter=AudioDevice*
R=pbos@webrtc.org, tkchin@webrtc.org
Review URL: https://codereview.webrtc.org/1254883002 .
Cr-Commit-Position: refs/heads/master@{#9875}
* In AudioCodingModuleImpl::PlayoutData10Ms, don't reset the timestamp obtained from GetAudio.
* When there is more than one participant, set the AudioFrame's RTP timestamp to 0.
* Copy ntp_time_ms_ in AudioFrame::CopyFrom method.
* In RemixAndResample, pass src frame's timestamp_ and ntp_time_ms_ to the dst frame.
* Fix how |elapsed_time_ms| is computed in channel.cc by adding GetPlayoutFrequency (sketched below).
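The core of the |elapsed_time_ms| fix, as a sketch with illustrative names (the helper and parameters below are assumptions, not the real channel.cc symbols): derive milliseconds from the RTP timestamp delta and the actual playout rate instead of a hard-coded rate.

  #include <cstdint>

  // |playout_frequency_hz| stands in for whatever GetPlayoutFrequency() returns.
  int64_t ElapsedTimeMs(uint32_t rtp_timestamp,
                        uint32_t capture_start_rtp_timestamp,
                        int playout_frequency_hz) {
    const uint32_t sample_delta = rtp_timestamp - capture_start_rtp_timestamp;
    return static_cast<int64_t>(sample_delta) / (playout_frequency_hz / 1000);
  }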
Tweaks on ntp_time_ms_:
* Init ntp_time_ms_ to -1 in AudioFrame ctor.
* When there is more than one participant, set the AudioFrame's ntp_time_ms_ to an invalid value, i.e. we don't support ntp_time_ms_ in the multiple-participant case until the mixing is moved to Chrome.
Added elapsed_time_ms to AudioFrame and passed it to Chrome, which doesn't know the RTP timestamp's sample rate and therefore can't convert the RTP timestamp to milliseconds. A trimmed-down sketch of the affected fields follows.
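Sketch of the AudioFrame bookkeeping described above (only the fields discussed here, with illustrative defaults; the real class carries much more):

  #include <cstdint>

  struct AudioFrameSketch {
    uint32_t timestamp_ = 0;        // RTP timestamp of the first sample; set to 0 when mixing.
    int64_t elapsed_time_ms_ = -1;  // Elapsed time in ms, passed on to Chrome.
    int64_t ntp_time_ms_ = -1;      // -1 marks "no valid NTP time", e.g. for mixed audio.

    void CopyFrom(const AudioFrameSketch& src) {
      // ntp_time_ms_ and elapsed_time_ms_ must travel with the frame, both here
      // and when RemixAndResample() fills a destination frame.
      timestamp_ = src.timestamp_;
      elapsed_time_ms_ = src.elapsed_time_ms_;
      ntp_time_ms_ = src.ntp_time_ms_;
    }
  };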
BUG=3111
R=henrik.lundin@webrtc.org, turaj@webrtc.org, xians@webrtc.org
TBR=andrew
andrew to take another look at audio_conference_mixer_impl.cc
Review URL: https://webrtc-codereview.appspot.com/14559004
git-svn-id: http://webrtc.googlecode.com/svn/trunk@6346 4adac7df-926f-26a2-2b94-8c16560cd09d
r4326 was mistakenly committed to stable, so this is to re-merge back to trunk.
Fixed the AGC and interface problems on the new path.
In order to make the AGC work properly, we need to cache the volume value passed
by the callback and compare it with the value returned by
shared->transmit_mixer()->CaptureLevel(). If they are the same, we return 0 to
indicate that no volume change is needed; otherwise we return the new volume.
This avoids setting the same volume over and over, which allows users to change
the volume manually. The rule is sketched below.
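In code form, the rule looks roughly like this (a sketch with assumed names, not the actual VoE implementation):

  #include <cstdint>

  // Returns 0 ("leave the volume alone") when the AGC's suggested capture level
  // equals the cached level reported by the callback; otherwise returns the new
  // level. This keeps manual user volume changes from being overwritten.
  uint32_t NewMicVolume(uint32_t cached_callback_volume,
                        uint32_t agc_capture_level) {
    return (agc_capture_level == cached_callback_volume) ? 0u : agc_capture_level;
  }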
This patch also fixes some minor interface issues: it makes the int channel[]
parameter const and corrects the order of the input parameters in
Channel::Demultiplex.
R=tommi@webrtc.org
BUG=2134
TEST=compile && manual AGC test
Review URL: https://webrtc-codereview.appspot.com/1921004
git-svn-id: http://webrtc.googlecode.com/svn/trunk@4450 4adac7df-926f-26a2-2b94-8c16560cd09d
r4326 was mistakenly committed to stable, so this is to re-merge back to trunk.
Add a new interface to support multiple sources in WebRTC.
CaptureData() will be called by Chrome with a flag |need_audio_processing| that
indicates whether the data needs to be processed by the APM. Unlike the old
interface, which sends the data to all VoE channels, the new interface
specifies a list of VoE channels that the data is demultiplexed to. A rough
sketch of the interface shape follows.
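The shape of the new entry point, sketched with approximate names (an illustration under the assumptions above, not a verbatim copy of the actual interface):

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  class CaptureDataSinkSketch {
   public:
    virtual ~CaptureDataSinkSketch() = default;

    // Called by Chrome with captured audio. |voe_channels| lists the channels
    // the data should be demultiplexed to, and |need_audio_processing| says
    // whether the data still has to run through the APM before encoding.
    virtual void CaptureData(const std::vector<int>& voe_channels,
                             const int16_t* audio_data,
                             int sample_rate_hz,
                             size_t number_of_channels,
                             size_t number_of_frames,
                             bool need_audio_processing) = 0;
  };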
R=tommi@webrtc.org
Review URL: https://webrtc-codereview.appspot.com/1919004
git-svn-id: http://webrtc.googlecode.com/svn/trunk@4449 4adac7df-926f-26a2-2b94-8c16560cd09d