NetEq: Guarding against reading outside of memory
In rare and pathological circumstances, the input length to the merge function can be very short. This CL avoids one of the out-of-bounds reads that could result from that.

Bug: chromium:799499
Change-Id: I6bde105ae88f9d130764b6dfb3d25443d07e214b
Reviewed-on: https://webrtc-review.googlesource.com/57582
Reviewed-by: Ivo Creusen <ivoc@webrtc.org>
Commit-Queue: Henrik Lundin <henrik.lundin@webrtc.org>
Cr-Commit-Position: refs/heads/master@{#22180}
parent 132e28e6aa
commit 8b84365c81
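The underlying hazard is unsigned underflow: input_length and signal_offset are size_t, so when the input is shorter than the filter offset, the difference wraps around to an enormous value, and any loop bounded by it reads far outside the input buffer. The following is a minimal standalone sketch of the unguarded versus guarded computation; the values and variable names are chosen for illustration and this is not the actual NetEq code.

#include <cstddef>
#include <cstdio>

int main() {
  // Illustrative values, not taken from NetEq: a pathologically short input
  // and a filter offset larger than the input.
  const size_t input_length = 2;
  const size_t signal_offset = 6;  // e.g. num_coefficients - 1

  // Unguarded subtraction: wraps around to SIZE_MAX - 3, i.e. a huge "length"
  // that would drive reads far past the end of the buffer.
  const size_t unguarded_len = input_length - signal_offset;

  // Guarded computation, mirroring the fix below: only subtract the offset
  // when it fits, otherwise fall back to the plain input length.
  const size_t guarded_len =
      input_length > signal_offset ? input_length - signal_offset : input_length;

  std::printf("unguarded: %zu\nguarded:   %zu\n", unguarded_len, guarded_len);
  return 0;
}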
@@ -291,7 +291,12 @@ void Merge::Downsample(const int16_t* input, size_t input_length,
                              decimation_factor, kCompensateDelay);
   if (input_length <= length_limit) {
     // Not quite long enough, so we have to cheat a bit.
-    size_t temp_len = input_length - signal_offset;
+    // If the input is really short, we'll just use the input length as is, and
+    // won't bother with correcting for the offset. This is clearly a
+    // pathological case, and the signal quality will suffer.
+    const size_t temp_len = input_length > signal_offset
+                                ? input_length - signal_offset
+                                : input_length;
     // TODO(hlundin): Should |downsamp_temp_len| be corrected for round-off
     // errors? I.e., (temp_len + decimation_factor - 1) / decimation_factor?
     size_t downsamp_temp_len = temp_len / decimation_factor;
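The TODO retained in the hunk asks whether downsamp_temp_len should round up instead of truncating. A small illustrative comparison of the two divisions, using example values that are not part of the commit:

#include <cstddef>
#include <cstdio>

int main() {
  const size_t temp_len = 79;          // example sample count before decimation
  const size_t decimation_factor = 4;  // e.g. fs_hz_ / 4000 at 16 kHz

  // Current behavior: truncating division drops the trailing partial frame.
  const size_t floor_len = temp_len / decimation_factor;  // 19

  // Variant the TODO contemplates: round up so the last partial frame counts.
  const size_t ceil_len =
      (temp_len + decimation_factor - 1) / decimation_factor;  // 20

  std::printf("floor: %zu, ceil: %zu\n", floor_len, ceil_len);
  return 0;
}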