Mirko Bonadei 65ce31158f Removing useless dependencies on //testing/gmock.
If a WebRTC build target requires gmock, it has to include
test/gmock.h and just depend on //test:test_support.

Unfortunately, //test:test_support was a leaky abstraction because it
was not propagating the correct -I compiler flag. To make everything
work, all the targets that used gmock also started to depend on
//testing/gmock (even though they were not including any gmock header
directly).

This CL makes //test:test_support propagate the include path up the
dependency chain, so it is possible to remove the unused dependencies.

Note: all_dependent_configs should probably be used in the original
gmock target. There is an ongoing discussion about it. This CL solves
the problem on the WebRTC side and is forward compatible.

TBR=phoglund@webrtc.org

Bug: webrtc:8603
Change-Id: If08daf2ce9a6431a6e881a236743b4ec33b59ea7
Reviewed-on: https://webrtc-review.googlesource.com/44340
Commit-Queue: Mirko Bonadei <mbonadei@webrtc.org>
Reviewed-by: Oleh Prypin <oprypin@webrtc.org>
Cr-Commit-Position: refs/heads/master@{#21776}
2018-01-26 13:34:12 +00:00

Conversational Speech generator tool

Tool to generate multiple-end audio tracks to simulate conversational speech with two or more participants.

The input to the tool is a directory containing a number of audio tracks and a text file indicating how to time the sequence of speech turns (see the Example section).

Since the timing of the speaking turns is specified by the user, the generated tracks may not be suitable for testing scenarios in which there is unpredictable network delay (e.g., end-to-end RTC assessment).

Instead, the generated pairs can be used when the delay is constant (obviously including the case in which there is no delay). For instance, echo cancellation in the APM module can be evaluated using two-end audio tracks as input and reverse input.

By indicating negative and positive time offsets, one can reproduce cross-talk (aka double-talk) and silence in the conversation.

Example

For each end, there is a set of audio tracks, e.g., a1, a2 and a3 (speaker A) and b1, b2 (speaker B). The text file with the timing information may look like this:

A a1 0
B b1 0
A a2 100
B b2 -200
A a3 0
A a4 0

The first column indicates the speaker name, the second the audio track file name, and the third the offset (in milliseconds) used to concatenate the chunks. An optional fourth column contains a positive or negative integer gain in dB that is applied to the track. The gain can be specified for some turns and omitted for others; when it is left out, no gain is applied.
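
This is not the tool's actual parser, but a minimal Python sketch may make the format concrete; the Turn type and the parse_timing_file name are made up for illustration:

  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class Turn:
      speaker: str                   # e.g. "A" or "B"
      track: str                     # audio track file name, e.g. "a1"
      offset_ms: int                 # positive gap or negative overlap
      gain_db: Optional[int] = None  # optional per-turn gain in dB

  def parse_timing_file(path):
      """Parse lines with 3 or 4 whitespace-separated columns."""
      turns = []
      with open(path) as f:
          for line in f:
              fields = line.split()
              if not fields:
                  continue  # skip blank lines
              gain = int(fields[3]) if len(fields) > 3 else None
              turns.append(Turn(fields[0], fields[1], int(fields[2]), gain))
      return turns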

Assume that all the audio tracks in the example above are 1000 ms long. The tool will then generate two tracks (A and B) that look like this:

Track A

  a1 (1000 ms)
  silence (1100 ms)
  a2 (1000 ms)
  silence (800 ms)
  a3 (1000 ms)
  a4 (1000 ms)

Track B

  silence (1000 ms)
  b1 (1000 ms)
  silence (900 ms)
  b2 (1000 ms)
  silence (2000 ms)
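
The schedule above follows a simple rule: each turn starts where the previous turn ended, shifted by its offset, and everything else in a speaker's track is silence. Below is a sketch of this scheduling logic (one plausible reading, consistent with the example; it reuses the hypothetical Turn type from the parser sketch above):

  def schedule(turns, durations_ms):
      """Compute (speaker, track, start_ms, end_ms) for every turn."""
      cursor = 0  # end of the previously scheduled turn
      spans = []
      for t in turns:
          start = cursor + t.offset_ms  # negative offset => cross-talk
          end = start + durations_ms[t.track]
          spans.append((t.speaker, t.track, start, end))
          cursor = end
      return spans

  # The example above, with every track 1000 ms long:
  turns = [Turn("A", "a1", 0), Turn("B", "b1", 0), Turn("A", "a2", 100),
           Turn("B", "b2", -200), Turn("A", "a3", 0), Turn("A", "a4", 0)]
  durations = {t.track: 1000 for t in turns}
  spans = schedule(turns, durations)
  # => a1: 0-1000, b1: 1000-2000, a2: 2100-3100, b2: 2900-3900,
  #    a3: 3900-4900, a4: 4900-5900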

The two tracks can also be visualized as follows (one character represents 100 ms, "." is silence and "*" is speech).

t: 0         1         2         3         4         5         6 (s)
A: **********...........**********........********************
B: ..........**********.........**********....................
                                ^ 200 ms cross-talk
        100 ms silence ^
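
The ASCII view itself can be derived from the same spans; a small sketch, reusing schedule, turns and durations from the sketch above:

  def render(spans, speaker, total_ms, step_ms=100):
      """One character per step_ms: '*' while speaker talks, '.' otherwise."""
      cells = ["."] * (total_ms // step_ms)
      for spk, _track, start, end in spans:
          if spk == speaker:
              for i in range(start // step_ms, end // step_ms):
                  cells[i] = "*"
      return "".join(cells)

  total = max(end for _, _, _, end in spans)
  print("A:", render(spans, "A", total))
  print("B:", render(spans, "B", total))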