Allow notifying the upper app of subtitles through events #14176
Comments
Similar to how Android MediaPlayer handles MEDIA_TIMED_TEXT, in this approach the player framework takes care of demuxing and decoding each subtitle track and syncing it with the video frames, but instead of overlaying the subtitles directly inside the player, it only notifies other display controls of the subtitle data and duration when they need to be rendered. If we want to implement this feature, can the current mpv architecture handle it relatively easily?
Why not
I think vo_gpu offers better feature support and quality. Do you have to use vo_mediacodec_embed?
Yes. Compared to other OpenGL-based VOs, mediacodec_embed has superior performance and can easily achieve video rendering at 60 fps with little CPU overhead. Moreover, it has an advantage the others lack: on SurfaceView, hardware manufacturers usually use a dedicated video panel layer to play videos. This layer is specifically designed for video display and offers better picture quality (PQ) optimization and superior display performance, especially on the large screens of TV devices, where the advantages are more pronounced.
I'm afraid they can also add some stupid "optimizations" that change the color, sharpen the picture, etc.
Rendering ASS is hard. I don't think you can use TextView to render ASS. Even if mpv provides you with the image, synchronization is also difficult. You could modify your own mpv to expose such interfaces, but I'm afraid upstream won't add them for the benefit of this single VO. The extra maintenance effort is not worth it.
On MTK Android TV devices, it seems we're stuck with no other option if we want better PQ, MEMC, etc.; those features only work through the VDP (video display panel), which means using SurfaceView. In practice, SurfaceView really does perform better: it has superior real-world performance and delivers higher image quality (with richer, more vibrant colors).
My understanding is that we just pass the subtitle overlay part of the video frame to the app to handle, while the synchronization is done inside the player, right? Of course, there might be some latency in the event callbacks to the app due to their asynchronous nature, maybe tens of milliseconds up to a hundred milliseconds, but that feels relatively acceptable. For example, something like the following pseudo-code:
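The pseudo-code referenced above did not survive in the thread. As a stand-in, here is a minimal, hypothetical sketch of the proposed split (the names `SubtitleNotifier`, `Cue`, and `Listener` are made up for illustration and are not an existing mpv interface): the player side keeps the playback clock and decides when each already-decoded cue should appear, and the app side only receives show/clear callbacks.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class SubtitleNotifier {
    /** One decoded subtitle cue: plain text plus its display window. */
    public static final class Cue {
        public final String text;
        public final long startMs;
        public final long durationMs;
        public Cue(String text, long startMs, long durationMs) {
            this.text = text;
            this.startMs = startMs;
            this.durationMs = durationMs;
        }
    }

    /** Callback the app layer implements, in the spirit of MediaPlayer's timed-text listener. */
    public interface Listener {
        void onCue(Cue cue);   // show this cue now
        void onCueCleared();   // hide the overlay once the cue expires
    }

    private final Queue<Cue> pending = new ArrayDeque<>();
    private final Listener listener;
    private Cue active;

    public SubtitleNotifier(Listener listener) { this.listener = listener; }

    /** The demux/decode stage enqueues cues in presentation order. */
    public void queue(Cue cue) { pending.add(cue); }

    /** Driven by the player's video clock on every tick; fires callbacks at cue boundaries. */
    public void onClock(long positionMs) {
        if (active != null && positionMs >= active.startMs + active.durationMs) {
            active = null;
            listener.onCueCleared();
        }
        Cue next = pending.peek();
        if (next != null && positionMs >= next.startMs) {
            active = pending.poll();
            listener.onCue(active);
        }
    }
}
```

The point of the sketch is that all timing decisions stay inside the player; the app never compares timestamps itself, so the tens-of-milliseconds callback latency only shifts when the overlay updates, not what it shows.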
I'm currently stuck on this and have no idea how to continue.
Expected behavior of the wanted feature
Currently, on the Android platform, if --vo=mediacodec_embed is used, the subtitle feature becomes completely unavailable. Is it possible to notify the app layer via events or callbacks with the already decoded subtitles (image or text), so that the Android app layer can easily receive each frame of subtitle data (timed text or image) and display it using a separate TextView or TextureView?
If subtitles could be sent to the client as an event or callback just before rendering, apps wouldn't have to worry about synchronizing with low-level primitives; they would only have to focus on displaying the caption data. This wouldn't just benefit mediacodec_embed, but all other video outputs (VOs) that don't support subtitles.
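To illustrate how thin the app side could be under such a design, here is a hedged sketch (hypothetical names; this is not a real mpv or Android API): the callback delivers already decoded, already synchronized text with a duration, and the app merely shows it and clears it when it expires.

```java
public class AppSideOverlay {
    /** Payload a hypothetical subtitle event would carry to the app. */
    public static final class TimedText {
        public final String text;      // decoded timed text (image subs would carry a bitmap instead)
        public final long durationMs;  // how long the overlay should stay visible
        public TimedText(String text, long durationMs) {
            this.text = text;
            this.durationMs = durationMs;
        }
    }

    private String visible = "";   // stand-in for TextView.setText(...)
    private long hideAtMs = -1;    // deadline after which the overlay is cleared

    /** Invoked by the player exactly when the cue should appear. */
    public void onTimedText(TimedText t, long nowMs) {
        visible = t.text;
        hideAtMs = nowMs + t.durationMs;
    }

    /** Called from the app's regular UI tick; clears the overlay once the cue expires. */
    public void onUiTick(long nowMs) {
        if (hideAtMs >= 0 && nowMs >= hideAtMs) {
            visible = "";
            hideAtMs = -1;
        }
    }

    public String visibleText() { return visible; }
}
```

Note there is no timestamp comparison against the video clock anywhere in the app: that is exactly the "apps only focus on displaying" property the request asks for.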
Alternative behavior of the wanted feature
No response
Log File
No response
Sample Files
No response