This method should be used when the SurfaceTextureHelper is created to use a specific handler.
This now guarantees that the looper used by the handler is destroyed only after a frame has been returned.
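A minimal sketch of the idea, not the actual SurfaceTextureHelper code; the class and method names are made up for illustration. The caller supplies a Handler, and its Looper is only quit once the last frame has been returned:

    // Illustrative sketch only: quitting the supplied handler's looper is
    // deferred until the outstanding frame has been returned.
    class TextureFrameOwner {
      private final android.os.Handler handler;
      private boolean frameOutstanding = false;
      private boolean disposePending = false;

      TextureFrameOwner(android.os.Handler handler) {
        this.handler = handler;
      }

      synchronized void onFrameDelivered() {
        frameOutstanding = true;
      }

      // Called when the consumer releases the frame.
      synchronized void returnFrame() {
        frameOutstanding = false;
        if (disposePending) {
          handler.getLooper().quit();  // Safe now: no frame is in use.
        }
      }

      synchronized void dispose() {
        if (frameOutstanding) {
          disposePending = true;  // Defer quitting until returnFrame().
        } else {
          handler.getLooper().quit();
        }
      }
    }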
Review URL: https://codereview.webrtc.org/1465163003
Cr-Commit-Position: refs/heads/master@{#10767}
It does the following:
The SurfaceTexture.updateTexImage() calls are moved from the video renderers into MediaCodecVideoDecoder, and the destructor of the texture frames will signal MediaCodecVideoDecoder that the frame has returned. This CL also removes the SurfaceTexture from the native handle and only exposes the texture matrix instead, because only the video source should access the SurfaceTexture.
It moves the responsibility of calculating the decode time to Java.
Patchset 2: Refactor MediaCodecVideoDecoder to drop frames if a texture is not released.
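A minimal sketch of the Patchset 2 drop policy; the class, field, and method names are hypothetical, not the actual MediaCodecVideoDecoder members:

    // Illustrative sketch: a decoded frame is only rendered to the output
    // Surface if the previous texture frame has been returned; otherwise it is
    // dropped so the decoder never stalls on the renderer.
    class TextureDropPolicy {
      private boolean textureFrameOutstanding = false;

      void deliverDecodedFrame(android.media.MediaCodec codec, int outputBufferIndex) {
        synchronized (this) {
          if (textureFrameOutstanding) {
            // The renderer still holds the previous texture: drop this frame.
            codec.releaseOutputBuffer(outputBufferIndex, false /* render */);
            return;
          }
          textureFrameOutstanding = true;
        }
        // Render to the SurfaceTexture; updateTexImage() is then called on the
        // decoder side, and the frame's release path calls returnTextureFrame().
        codec.releaseOutputBuffer(outputBufferIndex, true /* render */);
      }

      synchronized void returnTextureFrame() {
        textureFrameOutstanding = false;
      }
    }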
R=magjed@webrtc.org
Review URL: https://codereview.webrtc.org/1440343002 .
Cr-Commit-Position: refs/heads/master@{#10706}
Reason for revert:
Causes fallback to SW decoder if a renderer is put in the background.
Original issue's description:
> Patchset 1 is a pure
> revert of "Revert of "Android MediaCodecVideoDecoder: Manage lifetime of texture frames" https://codereview.webrtc.org/1378033003/
>
> Following patchsets move the responsibility of calculating the decode time to Java.
>
> TESTED= Apprtc loopback using H264 and VP8 on N5, N6, N7, S5
>
> Committed: https://crrev.com/9cb8982e64f08d3d630bf7c3d2bcc78c10db88e2
> Cr-Commit-Position: refs/heads/master@{#10597}
TBR=magjed@webrtc.org,glaznev@webrtc.org
NOPRESUBMIT=true
NOTREECHECKS=true
Review URL: https://codereview.webrtc.org/1441363002 .
Cr-Commit-Position: refs/heads/master@{#10637}
This reverts commit 12f680214e28dc5f0a13ac8afc0d1445f89e67e6.
Original CL in https://codereview.webrtc.org/1396073003/
Prepare MediaCodecVideoEncoder for surface textures.
This refactors MediaCodecVideoEncoder to prepare for adding support for encoding from textures. The C++ layer does not have any functional changes.
- Moves ResetEncoder to always work on the codec thread (sketched below).
- Adds use of ThreadChecker.
- Changes Java MediaEncoder.Init to return true or false and introduces the method getInputBuffers.
- Adds a simple unit test for Java MediaCodecVideoEncoder.
The pure revert of the revert is in patchset 1.
Patchset 2 moves getting the input buffer to before storing pending timestamps etc. to fix b/24984012.
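A minimal sketch of the "always run on the codec thread" pattern from the first bullet above; the class, thread name, and method bodies are illustrative, not the actual MediaCodecVideoEncoder code:

    // Illustrative sketch: MediaCodec is only ever touched from one thread, so
    // calls from other threads hop onto the codec thread first.
    class CodecThreadRunner {
      private final android.os.HandlerThread codecThread = new android.os.HandlerThread("EncoderThread");
      private android.os.Handler codecHandler;

      void start() {
        codecThread.start();
        codecHandler = new android.os.Handler(codecThread.getLooper());
      }

      void resetEncoder() {
        if (Thread.currentThread() != codecThread) {
          // Re-post onto the codec thread instead of touching the codec here.
          codecHandler.post(new Runnable() {
            @Override
            public void run() {
              resetEncoder();
            }
          });
          return;
        }
        // ... release and re-create the MediaCodec here ...
      }
    }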
BUG=webrtc:4993 b/24984012
Review URL: https://codereview.webrtc.org/1406203002
Cr-Commit-Position: refs/heads/master@{#10622}
revert of "Revert of "Android MediaCodecVideoDecoder: Manage lifetime of texture frames" https://codereview.webrtc.org/1378033003/
Following patchsets move the responsibility of calculating the decode time to Java.
TESTED=AppRTC loopback using H264 and VP8 on N5, N6, N7, S5
Review URL: https://codereview.webrtc.org/1422963003
Cr-Commit-Position: refs/heads/master@{#10597}
This changes VideoCapturerAndroid to attempt to open the camera up to 3 times, with a period of 300 ms between attempts, if opening fails.
This is so that if another application already has the camera open, it gets more time to release it.
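A minimal sketch of the retry policy described above; the class, constants, and helper are illustrative, and the real logic lives inside VideoCapturerAndroid:

    // Illustrative sketch: retry Camera.open() a few times before giving up,
    // so another app holding the camera has time to release it.
    class CameraOpener {
      private static final int OPEN_CAMERA_ATTEMPTS = 3;
      private static final int OPEN_CAMERA_DELAY_MS = 300;

      static android.hardware.Camera openWithRetry(int cameraId) {
        RuntimeException openError = null;
        for (int attempt = 0; attempt < OPEN_CAMERA_ATTEMPTS; ++attempt) {
          try {
            return android.hardware.Camera.open(cameraId);
          } catch (RuntimeException e) {
            openError = e;
            if (attempt < OPEN_CAMERA_ATTEMPTS - 1) {
              // Another application may still hold the camera; wait a bit.
              android.os.SystemClock.sleep(OPEN_CAMERA_DELAY_MS);
            }
          }
        }
        throw openError;
      }
    }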
BUG=b/25190234
Review URL: https://codereview.webrtc.org/1422023007
Cr-Commit-Position: refs/heads/master@{#10559}
Also distinguishes between camera failures and failures due to buffers not having been returned.
Adds unit tests to make sure CameraEventHandler.onError is triggered if frames are not returned.
BUG=b/25514149
Review URL: https://codereview.webrtc.org/1415013006
Cr-Commit-Position: refs/heads/master@{#10555}
The purpose of this change is to support older API levels by replacing EGL14 (API level 17) with EGL10 (API level 1). The main goal is to lower the API level requirement for SurfaceViewRenderer from 17 to 15. Camera texture capture will also work on API levels below 17 (as will texture encode/decode in MediaCodec, but we don't use MediaCodec below API level 18?).
GLSurfaceView/VideoRendererGui is already using EGL10.
EGL 1.1 - 1.4 added new functionality, but it won't affect performance. We don't need that functionality, so there should be no reason not to use EGL 1.0; a minimal EGL10 context-creation sketch follows the list below.
I have profiled AppRTCDemo with Qualcomm Trepn Profiler on a Nexus 5 and Nexus 6 and couldn't see any difference.
Specifically, this CL:
* Update EglBase to use EGL10 instead of EGL14.
* Update imports from EGL14 to EGL10 in a lot of files (plus changing import order in some cases).
* Update VideoCapturerAndroid to always support texture capture.
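A minimal sketch of creating an OpenGL ES 2.0 context through EGL10 (javax.microedition.khronos.egl), with no API level 17 dependency; constants missing from the EGL 1.0 Java interface are defined locally, and error checking is omitted:

    import javax.microedition.khronos.egl.EGL10;
    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.egl.EGLContext;
    import javax.microedition.khronos.egl.EGLDisplay;

    class Egl10ContextSketch {
      // Not declared in the EGL 1.0 Java interface, so defined here.
      private static final int EGL_CONTEXT_CLIENT_VERSION = 0x3098;
      private static final int EGL_RENDERABLE_TYPE = 0x3040;
      private static final int EGL_OPENGL_ES2_BIT = 0x0004;

      static EGLContext createEs2Context() {
        EGL10 egl = (EGL10) EGLContext.getEGL();
        EGLDisplay display = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
        egl.eglInitialize(display, new int[2]);

        // Pick any ES2-capable config; real code would be stricter about sizes.
        int[] configAttributes = {
            EGL10.EGL_RED_SIZE, 8, EGL10.EGL_GREEN_SIZE, 8, EGL10.EGL_BLUE_SIZE, 8,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL10.EGL_NONE};
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        egl.eglChooseConfig(display, configAttributes, configs, 1, numConfigs);

        int[] contextAttributes = {EGL_CONTEXT_CLIENT_VERSION, 2, EGL10.EGL_NONE};
        return egl.eglCreateContext(
            display, configs[0], EGL10.EGL_NO_CONTEXT, contextAttributes);
      }
    }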
Review URL: https://codereview.webrtc.org/1396013004
Cr-Commit-Position: refs/heads/master@{#10378}
Add events to track when the camera is requested to open, when the first camera frame is available, and when the camera is closed.
BUG=b/24271359
R=perkj@webrtc.org
Review URL: https://codereview.webrtc.org/1398793005 .
Cr-Commit-Position: refs/heads/master@{#10306}
This reverts commit 90754174d98d6b71fd4aaed897bd54980f7e59c4.
Revert "Fix use of scaler in MediaCodecVideoEncoder"
This reverts commit ec93628e75fdb81f23635b39b5f3da846bcefd21.
R=magjed@webrtc.org
TBR=glaznev@webrtc.org
BUG=webrtc:4993 b/24984012
Review URL: https://codereview.webrtc.org/1407263002 .
Cr-Commit-Position: refs/heads/master@{#10300}
The code that depends on the reverted CL is disabled but not removed. NativeHandleImpl is reverted to the previous implementation, and the new implementation is renamed to NativeTextureHandleImpl. Texture capture cannot be used anymore, because it will crash in peerconnection_jni.cc.
Reason for revert:
Increased HW decoder latency and crashes related to that. Also suspected cause of video tearing.
Original issue's description:
> This CL should be the last one in a series to finally
> unblock camera texture capture.
>
> The SurfaceTexture.updateTexImage() calls are moved from
> the video renderers into MediaCodecVideoDecoder, and the
> destructor of the texture frames will signal
> MediaCodecVideoDecoder that the frame has returned. This
> CL also removes the SurfaceTexture from the native handle
> and only exposes the texture matrix instead, because only
> the video source should access the SurfaceTexture.
>
> BUG=webrtc:4993
> R=glaznev@webrtc.org, perkj@webrtc.org
>
> Committed: https://crrev.com/91b348c7029d843e06868ed12b728a809c53176c
> Cr-Commit-Position: refs/heads/master@{#10203}
TBR=glaznev
BUG=webrtc:4993
Review URL: https://codereview.webrtc.org/1394103005
Cr-Commit-Position: refs/heads/master@{#10288}
SurfaceTextureHelper requires EGL14, which was added in API level 17. Since the SurfaceTextureHelper is only needed when we capture to textures, this CL changes back to not using it when we are capturing to byte buffers.
Also, thread.quitSafely() was added in API level 18. Instead, a new ThreadUtil method has been added for this.
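A minimal sketch of a pre-API-18 replacement for quitSafely(): posting the quit() call itself so that everything already queued on the thread runs before the looper stops. The helper name is illustrative and not necessarily the method added in this CL:

    // Illustrative sketch: messages posted before this call still run; the
    // looper quits once the posted runnable is reached.
    class LooperHelper {
      static void quitLooperSafely(final android.os.Handler handler) {
        handler.post(new Runnable() {
          @Override
          public void run() {
            handler.getLooper().quit();
          }
        });
      }
    }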
BUG=b/24782220
TEST=Run
ninja -C out/Debug libjingle_peerconnection_android_unittest && CHECKOUT_SOURCE_ROOT=`pwd` build/android/adb_install_apk.py --debug out/Debug/apks/libjingle_peerconnection_android_unittest.apk && ./third_party/android_tools/sdk/platform-tools/adb shell am instrument -w -e class org.webrtc.VideoCapturerAndroidTest org.webrtc.test/android.test.InstrumentationTestRunner
on a device running Android 4.1 (I tried the Nexus 7, the first version)
Review URL: https://codereview.webrtc.org/1401023003
Cr-Commit-Position: refs/heads/master@{#10265}
This makes small refactorings to MediaCodecVideoEncoder to prepare for adding support for encoding from textures. The C++ layer does not have any functional changes.
- Moves ResetEncoder to always work on the codec thread.
- Adds use of ThreadChecker (a minimal sketch of the idea follows this list).
- Changes Java MediaEncoder.Init to return true or false and introduces the method getInputBuffers.
- Adds a simple unit test for Java MediaCodecVideoEncoder.
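A minimal sketch of the ThreadChecker idea from the list above; WebRTC ships its own helper, and this illustrative class only shows the concept:

    // Illustrative sketch: binds itself to the first calling thread and throws
    // if any later call comes from a different thread.
    class ThreadCheckerSketch {
      private Thread expectedThread;

      synchronized void checkIsOnValidThread() {
        if (expectedThread == null) {
          expectedThread = Thread.currentThread();
        } else if (expectedThread != Thread.currentThread()) {
          throw new IllegalStateException(
              "Called from wrong thread: " + Thread.currentThread().getName());
        }
      }
    }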
BUG=webrtc:4993
Review URL: https://codereview.webrtc.org/1396073003
Cr-Commit-Position: refs/heads/master@{#10250}
These are the C++ changes related to the video capturer that are necessary to capture to a surface texture.
It does not handle scaling/cropping yet, though.
BUG=
R=magjed@webrtc.org
Review URL: https://codereview.webrtc.org/1395673003 .
Cr-Commit-Position: refs/heads/master@{#10218}
This adds support for capturing to a texture in the Java part of VideoCapturerAndroid.
After this CL, the C++ part also needs modification.
https://codereview.webrtc.org/1375953002/ contains the idea and has a working version where textures can be rendered in the local preview.
BUG=webrtc:4993
R=magjed@webrtc.org
Review URL: https://codereview.webrtc.org/1383413002 .
Cr-Commit-Position: refs/heads/master@{#10213}
Fixed a problem where eglBase.makeCurrent() could be called after the context had been released if the SurfaceTextureHelper was first created and then immediately disconnected.
Also adds the possibility to inject a thread to use instead of creating a new one.
BUG= webrtc:4993
R=magjed@webrtc.org
Review URL: https://codereview.webrtc.org/1384923002 .
Cr-Commit-Position: refs/heads/master@{#10174}
This CL refactors RendererCommon.getSamplingMatrix() so it does not have any dependency on SurfaceTexture. The purpose is to prepare for a change in how texture frames are represented - only the texture matrix will be exposed, not the SurfaceTexture itself. This CL also adds an extra test for RendererCommon.rotateTextureMatrix().
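For reference, the texture matrix in question is the standard SurfaceTexture transform, which can be read once per frame with the plain Android API; this sketch is illustrative, not the refactored RendererCommon code:

    import android.graphics.SurfaceTexture;

    class TransformMatrixSketch {
      // Reads the current 4x4 texture transform; call after updateTexImage().
      // The returned float[16] is all a renderer needs, so the SurfaceTexture
      // itself never has to be handed out.
      static float[] readSamplingMatrix(SurfaceTexture surfaceTexture) {
        float[] samplingMatrix = new float[16];
        surfaceTexture.getTransformMatrix(samplingMatrix);
        return samplingMatrix;
      }
    }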
R=hbos@webrtc.org
Review URL: https://codereview.webrtc.org/1375593002 .
Cr-Commit-Position: refs/heads/master@{#10118}
Reason for revert:
The top row in the video stream from the camera is messed up. Apparently the byte[] pointer is not the same as the address returned by GetDirectBufferAddress().
Original issue's description:
> Android VideoCapturer: Send ByteBuffer instead of byte[]
>
> The purpose with this CL is to replace GetByteArrayElements() and ReleaseByteArrayElements() with GetDirectBufferAddress().
>
> R=hbos@webrtc.org
>
> Committed: https://crrev.com/cb3649b40b3fd6d5bbb0a92003b717e46ce90924
> Cr-Commit-Position: refs/heads/master@{#10091}
TBR=hbos@webrtc.org
NOPRESUBMIT=true
NOTREECHECKS=true
NOTRY=true
Review URL: https://codereview.webrtc.org/1377783002
Cr-Commit-Position: refs/heads/master@{#10103}
The purpose of this CL is to replace GetByteArrayElements() and ReleaseByteArrayElements() with GetDirectBufferAddress().
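On the Java side, the key requirement for this to work is that frames live in direct ByteBuffers, since GetDirectBufferAddress() only returns a usable pointer for buffers created with allocateDirect(). A minimal sketch, with a hypothetical native method name for illustration:

    class DirectBufferFrameSink {
      // Hypothetical native method, shown only to illustrate the calling
      // convention: the C++ side reads the pixels via GetDirectBufferAddress(),
      // so the buffer must be direct (a wrapped byte[] has no direct address).
      private static native void nativeOnFrameCaptured(
          java.nio.ByteBuffer frame, int width, int height, long timestampNs);

      static void deliverFrame(int width, int height, long timestampNs) {
        java.nio.ByteBuffer frame =
            java.nio.ByteBuffer.allocateDirect(width * height * 3 / 2);  // I420.
        // ... fill |frame| with camera data here ...
        nativeOnFrameCaptured(frame, width, height, timestampNs);
      }
    }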
R=hbos@webrtc.org
Review URL: https://codereview.webrtc.org/1372813002 .
Cr-Commit-Position: refs/heads/master@{#10091}
This CL makes the following changes:
* Instead of creating a new thread per startCapture()/stopCapture() cycle, VideoCapturerAndroid has a single thread that is initialized in the constructor and kept during the lifetime of the instance. This is more convenient because then it is always possible to post runnables without if-checks. This way, a lot of synchronized statements can be removed. Also, when the camera thread is preserved after stopCapture(), it is possible to post late returnBuffer() calls to the correct thread.
* FramePool now enforces single-thread use and returnBuffer() calls are posted to the camera thread (see the sketch after this list). This is important because the camera should only be used from one thread, and we call camera.addCallbackBuffer() in returnBuffer().
* switchCamera() no longer returns false on failure, but instead signals the result via the callback.
* Update the test testCaptureAndAsyncRender() - it's not a valid use case to have outstanding frames when calling PeerConnectionFactory.dispose(). Instead, the renderer implementations should have release() functions that block until all frames are returned. The release() functions need to be called in the correct order with PeerConnectionFactory.dispose() last.
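A minimal sketch of the single camera-thread pattern referenced in the list above; the class and method names are illustrative, not the actual VideoCapturerAndroid code:

    // Illustrative sketch: all Camera calls, including the addCallbackBuffer()
    // done from returnBuffer(), are posted to one long-lived handler thread.
    class CameraThreadSketch {
      private final android.os.HandlerThread cameraThread = new android.os.HandlerThread("CameraThread");
      private final android.os.Handler cameraHandler;
      private android.hardware.Camera camera;

      CameraThreadSketch() {
        cameraThread.start();
        cameraHandler = new android.os.Handler(cameraThread.getLooper());
      }

      void startCapture(final int cameraId) {
        cameraHandler.post(new Runnable() {
          @Override
          public void run() {
            camera = android.hardware.Camera.open(cameraId);
            // ... configure and start preview here ...
          }
        });
      }

      // May be called from any thread, also after stopCapture(); the posted
      // runnable is simply a no-op if the camera is already released.
      void returnBuffer(final byte[] buffer) {
        cameraHandler.post(new Runnable() {
          @Override
          public void run() {
            if (camera != null) {
              camera.addCallbackBuffer(buffer);
            }
          }
        });
      }
    }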
BUG=webrtc:4909
R=hbos@webrtc.org, perkj@webrtc.org
Review URL: https://codereview.webrtc.org/1350863002 .
Cr-Commit-Position: refs/heads/master@{#10025}
Enumerating using android.hardware.camera2 is 10x faster than enumerating using android.hardware.camera, but they don't list exactly the same formats. android.hardware.camera2 supports higher resolutions for some cameras, and also different framerates.
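For reference, a minimal sketch of enumerating capture formats through android.hardware.camera2 (API level 21+); this is the general platform pattern, not necessarily the exact code added here:

    import android.content.Context;
    import android.graphics.SurfaceTexture;
    import android.hardware.camera2.CameraCharacteristics;
    import android.hardware.camera2.CameraManager;
    import android.hardware.camera2.params.StreamConfigurationMap;
    import android.util.Size;

    class Camera2FormatEnumerator {
      static void listFormats(Context context) throws Exception {
        CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        for (String id : manager.getCameraIdList()) {
          CameraCharacteristics characteristics = manager.getCameraCharacteristics(id);
          StreamConfigurationMap map =
              characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
          if (map == null) {
            continue;
          }
          // No need to open the camera: supported sizes and frame durations are
          // available directly from the characteristics.
          for (Size size : map.getOutputSizes(SurfaceTexture.class)) {
            long minFrameDurationNs = map.getOutputMinFrameDuration(SurfaceTexture.class, size);
            System.out.println(id + ": " + size + " minFrameDuration=" + minFrameDurationNs + "ns");
          }
        }
      }
    }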
R=tommi@webrtc.org
Review URL: https://codereview.webrtc.org/1321893003 .
Cr-Commit-Position: refs/heads/master@{#9861}
The purpose of this CL is to remove some code bloat. A subtle change is that GL_TEXTURE_MIN_FILTER in MediaCodecVideoDecoder is changed from GL_NEAREST to GL_LINEAR. This may lead to slightly worse performance when the decoded video is rendered minified, but with better visual quality. After reading https://crbug.com/351458 and the fix https://codereview.chromium.org/713603002, I think this is the right choice.
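For reference, a sketch of the described one-line behavior change, assuming the decoder samples from an external OES texture fed by a SurfaceTexture; the class and method here are illustrative:

    import android.opengl.GLES11Ext;
    import android.opengl.GLES20;

    class TextureFilterSketch {
      // Switch minification filtering from GL_NEAREST to GL_LINEAR on the
      // decoder's external texture; the magnification filter is untouched.
      static void setLinearMinFilter(int oesTextureId) {
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, oesTextureId);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);  // Was GL_NEAREST.
      }
    }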
BUG=webrtc:4742
R=hbos@webrtc.org, tommi@webrtc.org
Review URL: https://codereview.webrtc.org/1303373005 .
Cr-Commit-Position: refs/heads/master@{#9845}
of decoder factory class.
- Add a new peer connection factory method to initialize a shared EGL context.
This provides an option to use a single peer connection factory in the application, create peer connections from the same factory, and reinitialize the shared EGL context for video decoding HW acceleration.
R=wzh@webrtc.org
Review URL: https://codereview.webrtc.org/1304063011 .
Cr-Commit-Position: refs/heads/master@{#9838}
Enumerating camera capabilities through the deprecated android.hardware.Camera interface is really slow because of the need to open and release the camera. By putting getSupportedFormats() behind an interface, we give apps the opportunity to inject their own implementation, such as one that stores the supported formats offline in the device's internal storage. It will also be possible to add an implementation of getSupportedFormats() using the new android.hardware.Camera2 interface in a follow-up CL.
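A minimal sketch of the injection point described above; the interface, class, and format type names are illustrative stand-ins, not necessarily the ones used in WebRTC:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Minimal stand-in for the capture format descriptor (size and fps range).
    class CaptureFormat {
      final int width, height, minFps, maxFps;
      CaptureFormat(int width, int height, int minFps, int maxFps) {
        this.width = width;
        this.height = height;
        this.minFps = minFps;
        this.maxFps = maxFps;
      }
    }

    // The injection point: apps can provide their own enumerator implementation.
    interface CameraFormatEnumerator {
      List<CaptureFormat> getSupportedFormats(int cameraId);
    }

    // Example injected implementation: wrap the slow Camera-based enumerator and
    // cache its results so each camera is only queried once per process.
    class CachingEnumerator implements CameraFormatEnumerator {
      private final CameraFormatEnumerator delegate;
      private final Map<Integer, List<CaptureFormat>> cache = new HashMap<Integer, List<CaptureFormat>>();

      CachingEnumerator(CameraFormatEnumerator delegate) {
        this.delegate = delegate;
      }

      @Override
      public synchronized List<CaptureFormat> getSupportedFormats(int cameraId) {
        if (!cache.containsKey(cameraId)) {
          cache.put(cameraId, delegate.getSupportedFormats(cameraId));
        }
        return cache.get(cameraId);
      }
    }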
R=tommi@webrtc.org
Review URL: https://codereview.webrtc.org/1321903002 .
Cr-Commit-Position: refs/heads/master@{#9819}
This CL makes the Java render interface asynchronous by requiring every call to renderFrame() to be followed by an explicit renderFrameDone() call. In JNI, this is implemented with cricket::VideoFrame::Copy() before calling renderFrame(), and a corresponding call to delete in renderFrameDone(). This CL is primarily done to prepare for a new renderer implementation.
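A minimal sketch of the renderFrame()/renderFrameDone() contract; the types below are simplified stand-ins for org.webrtc.VideoRenderer.Callbacks and its frame class, so only the pairing is meaningful:

    class Frame {}  // Stand-in for the frame type.

    interface Callbacks {
      // Must eventually be followed by renderFrameDone(frame).
      void renderFrame(Frame frame);
    }

    class AsyncRendererSketch implements Callbacks {
      private final android.os.HandlerThread renderThread = new android.os.HandlerThread("RenderThread");
      private final android.os.Handler renderHandler;

      AsyncRendererSketch() {
        renderThread.start();
        renderHandler = new android.os.Handler(renderThread.getLooper());
      }

      @Override
      public void renderFrame(final Frame frame) {
        renderHandler.post(new Runnable() {
          @Override
          public void run() {
            // ... draw |frame| here ...
            renderFrameDone(frame);  // Releases the copy made before renderFrame().
          }
        });
      }

      // Stand-in for the new done-callback; in the real CL this is what lets
      // JNI delete the cricket::VideoFrame copy.
      private void renderFrameDone(Frame frame) {}
    }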
BUG=webrtc:4742, webrtc:4909
R=glaznev@webrtc.org
Review URL: https://codereview.webrtc.org/1313563002 .
Cr-Commit-Position: refs/heads/master@{#9814}
Reason for revert:
AppRTCDemo often crashes in loopback mode, and the layout is incorrect when a connection is established.
BUG=webrtc:4909,webrtc:4910
Original issue's description:
> AppRTCDemo: Render each video in a separate SurfaceView
>
> This CL introduces a new org.webrtc.VideoRenderer.Callbacks implementation called SurfaceViewRenderer that renders each video stream in its own SurfaceView. AppRTCDemo is updated to use this new rendering.
>
> This CL also does the following changes:
> * Make the VideoRenderer.Callbacks interface asynchronous and require that renderFrameDone() is called for every renderFrame(). In JNI, this is implemented with cricket::VideoFrame::Copy()/delete.
> * Make public static helper functions: convertScalingTypeToVisibleFraction(), getDisplaySize(), and getTextureMatrix().
> * Introduces new helper functions surfaceWidth()/surfaceHeight() in EGlBase that allows to query the surface size.
> * Introduce PercentFrameLayout that implements the percentage layout that is used by AppRTCDemo.
>
> BUG=webrtc:4742
>
> Committed: https://crrev.com/05bfbe47ef6bcc9ca731c0fa0d5cd15a4f21e93f
> Cr-Commit-Position: refs/heads/master@{#9699}
TBR=glaznev@webrtc.org,wzh@webrtc.org
NOPRESUBMIT=true
NOTREECHECKS=true
NOTRY=true
BUG=webrtc:4742
Review URL: https://codereview.webrtc.org/1286133002
Cr-Commit-Position: refs/heads/master@{#9703}
This CL introduces a new org.webrtc.VideoRenderer.Callbacks implementation called SurfaceViewRenderer that renders each video stream in its own SurfaceView. AppRTCDemo is updated to use this new rendering.
This CL also does the following changes:
* Makes the VideoRenderer.Callbacks interface asynchronous and requires that renderFrameDone() is called for every renderFrame(). In JNI, this is implemented with cricket::VideoFrame::Copy()/delete.
* Makes public static helper functions: convertScalingTypeToVisibleFraction(), getDisplaySize(), and getTextureMatrix().
* Introduces new helper functions surfaceWidth()/surfaceHeight() in EglBase that allow querying the surface size.
* Introduces PercentFrameLayout, which implements the percentage layout used by AppRTCDemo.
BUG=webrtc:4742
Review URL: https://codereview.webrtc.org/1257043004
Cr-Commit-Position: refs/heads/master@{#9699}