This fixes the build in mingw-w64/Clang on MSYS2. It also disables the
use of gnu_printf in Clang, which was what was causing most of the
warnings. The Clang-compiled mpv binary appears to work, but there are
no guarantees yet, since until now mpv has only been tested with
mingw-w64/GCC on Windows.
Fixes #3800
This fixes waf setting the wrong LIBDIR for DEST_OS=win32 in
waflib/Tools/c_config.py:get_cc_version()
Any scripts that assumed the implib and pkgconfig files were in the
wrong place should be changed to move the .dll instead.
Thread-local storage in GCC is platform-specific, and some platforms that
are otherwise perfectly capable of running mpv may lack TLS support in GCC.
This change adds a test for the GCC variant of TLS and relies on its
result instead of an assumption.
Since LLVM's `__thread` support is assumed to be similar to GCC's, the
test is called "GCC/LLVM TLS".
Signed-off-by: wm4 <wm4@nowhere>
We always want to use __declspec(selectany) to declare GUIDs, but
manually including <initguid.h> in every file that used GUIDs was
error-prone. Since all <initguid.h> does is define INITGUID and include
<guiddef.h>, we can remove all references to <initguid.h> and just
compile with -DINITGUID to get the same effect.
Also, this partially reverts 622bcb0 by re-adding libuuid.a to the
build, since apparently some GUIDs (such as GUID_NULL) are not defined
in the source file, even when INITGUID is set.
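For illustration, this is roughly what the flag changes (the GUID value
below is a made-up placeholder): with INITGUID defined, DEFINE_GUID from
<guiddef.h> emits an actual definition instead of an extern declaration.

    #include <guiddef.h>

    /* Compiled with -DINITGUID, this defines the GUID in this object
       file, and the linker folds duplicate definitions thanks to
       __declspec(selectany). Without INITGUID, this would only be an
       extern declaration. */
    DEFINE_GUID(GUID_example, /* placeholder value */
                0x00000000, 0x0000, 0x0000,
                0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01);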
'cuda-gl' isn't right - you can turn this on without any GL and
get some non-zero benefit (with the cuda-copy hwaccel). So
'cuda-hwaccel' seems more consistent with everything else.
Minimal support just for testing.
Only the window surface creation (including size determination) is
really platform specific, so this could be some generic thing with
platform-specific support as some sort of sub-driver, but on the other
hand I don't see much of a need for such a thing.
While most of the fbdev usage is done by the EGL driver, using this
fbdev ioctl is apparently the only way to get the display resolution.
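The ioctl in question is FBIOGET_VSCREENINFO; a minimal standalone
sketch of querying the resolution this way:

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        struct fb_var_screeninfo info;
        int fd = open("/dev/fb0", O_RDONLY);
        if (fd < 0)
            return 1;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &info) == 0)
            printf("display resolution: %ux%u\n", info.xres, info.yres);
        close(fd);
        return 0;
    }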
Nvidia's "NvDecode" API (up until recently called "cuvid" is a cross
platform, but nvidia proprietary API that exposes their hardware
video decoding capabilities. It is analogous to their DXVA or VDPAU
support on Windows or Linux but without using platform specific API
calls.
As a rule, you'd rather use DXVA or VDPAU as these are more mature
and well supported APIs, but on Linux, VDPAU is falling behind the
hardware capabilities, and there's no sign that nvidia are making
the investments to update it.
Most concretely, this means that there is no VP8/9 or HEVC Main10
support in VDPAU. On the other hand, NvDecode does export VP8/9 and
partial support for HEVC Main10 (more on that below).
ffmpeg already has support in the form of the "cuvid" family of
decoders. Due to the design of the API, it is best exposed as a full
decoder rather than an hwaccel. As such, there are decoders like
h264_cuvid, hevc_cuvid, etc.
These decoders support two output paths today - in both cases, NV12
frames are returned, either in CUDA device memory or regular system
memory.
In the case of the system memory path, the decoders can be used
as-is in mpv today with a command line like:
mpv --vd=lavc:h264_cuvid foobar.mp4
Doing this will take advantage of hardware decoding, but the cost
of the memcpy to system memory adds up, especially for high
resolution video (4K etc).
To avoid that, we need an hwdec that takes advantage of CUDA's
OpenGL interop to copy from device memory into OpenGL textures.
That is what this change implements.
The process is relatively simple, as only basic device context
acquisition needs to be done by us - the CUDA buffer pool is managed by
the decoder, thankfully.
The hwdec looks a bit like the vdpau interop one - the hwdec
maintains a single set of plane textures and each output frame
is repeatedly mapped into these textures to pass on.
The frames are always in NV12 format, at least until 10bit output
support emerges.
The only slightly interesting part of the copying process is that
CUDA's GL interop works in terms of PBOs, so we need to define one for
each of the textures.
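A rough sketch of that per-texture setup and per-frame copy, using the
CUDA driver API (names and error handling are illustrative, not the
actual hwdec code):

    /* Init: register the GL PBO with CUDA, once per texture. */
    CUgraphicsResource res;
    cuGraphicsGLRegisterBuffer(&res, pbo,
                               CU_GRAPHICS_REGISTER_FLAGS_WRITE_DISCARD);

    /* Per frame: map the PBO and copy the decoded plane into it. */
    CUdeviceptr dst;
    size_t dst_size;
    cuGraphicsMapResources(1, &res, 0);
    cuGraphicsResourceGetMappedPointer(&dst, &dst_size, res);
    CUDA_MEMCPY2D cpy = {
        .srcMemoryType = CU_MEMORYTYPE_DEVICE,
        .srcDevice     = frame_plane, // device memory owned by the decoder
        .srcPitch      = frame_pitch,
        .dstMemoryType = CU_MEMORYTYPE_DEVICE,
        .dstDevice     = dst,
        .dstPitch      = plane_width, // tightly packed in the PBO
        .WidthInBytes  = plane_width,
        .Height        = plane_height,
    };
    cuMemcpy2D(&cpy);
    cuGraphicsUnmapResources(1, &res, 0);
    /* Then upload PBO -> texture with glTexSubImage2D while bound. */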
TODO Items:
* I need to add a download_image function for screenshots. This
would do the same copy to system memory that the decoder's
system memory output does.
* There are items to investigate on the ffmpeg side. There appears
to be a problem with timestamps for some content.
Final note: I mentioned HEVC Main10. While there is no 10bit output
support, NvDecode can return dithered 8bit NV12 so you can take
advantage of the hardware acceleration.
This particular mode requires compiling ffmpeg with a modified
header (or possibly the CUDA 8 RC) and is not upstream in ffmpeg
yet.
Usage:
You will need to specify vo=opengl and hwdec=cuda.
Note that hwdec=auto will probably not work as it will try to use
vdpau first.
mpv --hwdec=cuda --vo=opengl foobar.mp4
If you want to use filters that require frames in system memory,
just use the decoder directly without the hwdec, as documented
above.
This time it's emulation that's supposed to work (not just dummied out).
Unlike the previous emulation, no mpv code has to be disabled, and
everything should work (albeit possibly a bit slowly). On the other
hand, it's not possible to implement this kind of emulation without
compiler support. We use GNU statement expressions and __typeof__ in
this case.
This code is inactive if stdatomic.h is available.
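As a flavor of what this compiler-assisted emulation looks like, here
is a simplified sketch with a single global lock (not the actual
header):

    #include <pthread.h>

    static pthread_mutex_t atomic_lock = PTHREAD_MUTEX_INITIALIZER;

    /* __typeof__ gives the macro the right result type; the GNU
       statement expression lets it return a value, like the real
       atomic_load(). */
    #define atomic_load(p) ({                   \
        __typeof__(*(p)) val_;                  \
        pthread_mutex_lock(&atomic_lock);       \
        val_ = *(p);                            \
        pthread_mutex_unlock(&atomic_lock);     \
        val_;                                   \
    })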
The current stdatomic check verifies the availability of the
functionality by calling atomic_load(). It also uses this test to check
whether linking against libatomic is needed or not.
Unfortunately, on specific architectures (namely SPARC), using
atomic_load() does *not* require linking against libatomic, while other
atomic operations do. Due to this, mpv's wscript concludes that
stdatomic is available, and that linking against libatomic is not
needed, causing the following link failure:
[190/190] Linking build/mpv
audio/out/ao.c.13.o: In function `ao_query_and_reset_events':
/home/peko/autobuild/instance-0/output/build/mpv-0.18.1/build/../audio/out/ao.c:399: undefined reference to `__atomic_fetch_and_4'
In order to fix this, the stdatomic check is adjusted to call
atomic_fetch_add() instead, which does require libatomic. Thanks to
this, the wscript realizes that linking against libatomic is needed, and
the build works fine.
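The shape of the adjusted test program is roughly this (a sketch; the
actual check lives in the wscript):

    #include <stdatomic.h>

    int main(void)
    {
        atomic_int i = ATOMIC_VAR_INIT(0);
        /* Unlike a bare atomic_load(), this pulls in a libcall such as
           __atomic_fetch_add_4 on architectures where it isn't
           inlined, forcing -latomic onto the link line. */
        atomic_fetch_add(&i, 1);
        return atomic_load(&i) == 1 ? 0 : 1;
    }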
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Always require them, instead of just for some components which have hard
requirements on correct atomic semantics. They should be widely
available, and are supported by all recent gcc and clang compiler
versions. We even have the fallback builtins, which should keep this
working on very old gcc releases.
In particular, w32_common.c recently added a hard requirement on
atomics, but checking this properly in the build system would have been
messy. This commit makes sure it always works.
The fallback that assumes weak atomic semantics are always fine is
rather questionable in theory as well.
This greatly improves the result when decoding typical (ST.2084) HDR
content, since the job of tone mapping gets significantly easier when
you're only mapping from 1000 to 250, rather than 10000 to 250.
The difference is so drastic that we can now even reasonably use
`hdr-tone-mapping=linear` and get a very perceptually uniform result
that is only slightly darker than normal (to compensate for the extra
dynamic range).
Due to weird implementation details, this only seems to be present on
keyframes (or something like that), so we have to cache the last seen
value for the frames in between.
Also, in some files the metadata is just completely broken /
nonsensical, so I decided to apply a simple heuristic to detect
completely broken metadata.
This HDR function is unique in that it's still display-referred, it just
allows for values above the reference peak (super-highlights). The
official standard doesn't actually document this very well, but the
nominal peak turns out to be exactly 12.0 - so we normalize to this
value internally in mpv. (This lets us preserve the property that the
textures are encoded in the range [0,1], preventing clipping and making
the best use of an integer texture's range)
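The constants below are from the HLG curve published in ARIB STD-B67,
which is the function this appears to describe; a sketch of the
normalization (not the actual shader code):

    #include <math.h>

    /* Inverse OETF: maps the encoded signal in [0,1] to linear light
       in [0,12]; dividing by 12.0 yields the internal [0,1] range. */
    static double hlg_inv_oetf(double x)
    {
        const double a = 0.17883277, b = 0.28466892, c = 0.55991073;
        return x <= 0.5 ? 4.0 * x * x : exp((x - c) / a) + b;
    }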
This was grouped together with SMPTE ST2084 when checking libavutil
compatibility since they were added in the same release window, in a
similar timeframe.
This reverts commit b51957fab5.
Breaks big time. It appears to ignore explicitly configured paths within
the libav* .pc files, which for example breaks mpv-build.
Distros and users alike should be made aware of the fact that old
FFmpeg versions are bad. When users come to us with FFmpeg-related
trouble, the answer is “update FFmpeg” more often than not
(and no further support will be provided until they have done so),
so instead we just nag them about it here.
This now lets us auto-detect appropriately tagged HDR content using
FFmpeg's new TRC entries (when available).
Hidden behind an #if because Libav stable doesn't have it yet.
We don't have any reason to disable either. Both are loaded dynamically
at runtime anyway. There is also no reason why dxva2 would disappear
from libavcodec any time soon.
ANGLE is _really_ annoying to build. (Requires a special toolchain and
a recent MSVC version.) This results in various issues with people
having trouble building mpv against ANGLE (apparently linking against a
prebuilt binary doesn't count, or using binaries from potentially
untrusted sources is not wanted).
Dynamically loading ANGLE is going to be a huge convenience. This commit
implements this, with special focus on keeping it source compatible with
a normal build with ANGLE linked at build-time.
Including initguid.h at the top of a file that uses references to GUIDs
causes the GUIDs to be declared globally with __declspec(selectany). The
'selectany' attribute tells the linker to consolidate multiple
definitions of each GUID, which would be great except that, in Cygwin
and MinGW GCC 6.1, this method of linking makes the GUIDs conflict with
the ones declared in libuuid.a.
Since initguid.h obsoletes libuuid.a in modern compilers that support
__declspec(selectany), add initguid.h to all files that use GUIDs and
remove libuuid.a from the build.
Fixes #3097
FFmpeg partially merged the API change. It added the AVCodecParameters
definition, but not the AVCodecContext.codecpar field. The new code
compiles only with the API fully merged, so adjust the check.
Encoding mode uses deprecated API. See previous commit. Encoding mode
will stop working/compiling at some point in the future, so unless
someone fixes the encoding code, it will stay disabled by default.
(Note that the deprecations are not merged in FFmpeg yet, but they will
soon. They've been deprecated in Libav for a while now.)
AVStream.codec is deprecated now, and you're supposed to use
AVStream.codecpar instead.
Handle this for all of the normal playback code.
Encoding mode isn't touched.
This commit adds the d3d11va-copy hwdec mode using the ffmpeg d3d11va
API. Functions in common with dxva2 are handled in a separate
decode/d3d.c file. A future commit will rewrite decode/dxva2.c to share
this code.
OpenSL ES is used on Android. At the moment only stereo output is
supported. Two options are supported: 'frames-per-buffer' and
'sample-rate'. To get better latency the user of libmpv should pass
values obtained from AudioManager.getProperty(PROPERTY_OUTPUT_FRAMES_PER_BUFFER)
and AudioManager.getProperty(PROPERTY_OUTPUT_SAMPLE_RATE).
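From the libmpv side, wiring those values through could look roughly
like this (a sketch: the suboption syntax is an assumption, and fpb/rate
are the strings queried from AudioManager on the Java side):

    #include <mpv/client.h>
    #include <stdio.h>

    void set_opensles_ao(mpv_handle *mpv, const char *fpb, const char *rate)
    {
        char buf[128];
        snprintf(buf, sizeof(buf),
                 "opensles:frames-per-buffer=%s:sample-rate=%s", fpb, rate);
        mpv_set_option_string(mpv, "ao", buf);
    }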
To be more specific, enable vo_opengl and vo_opengl_cb, if libmpv is
compiled, and the GL headers happen to be in the default search paths.
Although platform specific code can be useful for libmpv (for window
embedding, and even with vo_opengl_cb for certain forms of hardware
decoding), it's not a requirement to use the opengl_cb API.
Enabling vo_opengl is not useful here, but the rest of the build system
doesn't distinguish vo_opengl and vo_opengl_cb, and I see no reason to.
We either need to be on Windows, or have posix_spawn available.
If someone can come up with a system that is POSIX, but does not provide
posix_spawn, we could make it optional.
The complex filter support that will be added makes much more complex
use of libavfilter, and I'm not going to bother with adding hacks to
keep libavfilter optional.
It existed for XP compatibility only. There was also a time when
ao_wasapi caused issues, but we're relatively confident that ao_wasapi
works better than, or at least as well as, ao_dsound on Windows Vista
and later.
Commit 1a6f3c56 added a fallback for the case when C11 TLS was not
available, but GCC TLS was. But it forgot to enable VAAPI EGL interop in
the build system in this case.
Just remove the build system check. Should someone find a compiler that
works on Linux and does not support GCC extensions or C11, it will still
compile and just fail to init at runtime.
Actually fixes #2631 (hopefully).
libwaio was added due to the complete inability to cancel synchronous
I/O cleanly using the public Windows API in Windows XP. Even calling
TerminateThread on the thread performing I/O was a bad solution, because
the TerminateThread function in XP would leak the thread's stack.
In Vista and up, however, this is no longer a problem. CancelIoEx can
cancel synchronous I/O running on other threads, allowing the thread to
exit cleanly, so replace libwaio usage with native Vista API functions.
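The pattern this enables looks roughly like the following (a sketch,
not the actual mpv code; requires _WIN32_WINNT >= 0x0600):

    #include <windows.h>

    /* Handle shared with a reader thread blocked in ReadFile(). */
    static HANDLE pipe_handle;

    static void cancel_read(void)
    {
        /* With a NULL OVERLAPPED, all pending I/O on the handle is
           cancelled, including synchronous reads on other threads;
           the blocked ReadFile() fails with ERROR_OPERATION_ABORTED
           and the thread can exit cleanly. */
        CancelIoEx(pipe_handle, NULL);
    }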
It should be noted that this change also removes the hack added in
8a27025 for preventing a deadlock that only seemed to happen in Windows
XP. KB2009703 says that Vista and up are not affected by this, due to a
change in the implementation of GetFileType, so the hack should not be
needed anymore.
This is used for dithering, although I'm not aware of anyone who got
higher than 8 bit depth support to work on Linux.
Also put this into egl_helpers.c. Since EGL is pseudo-portable at best I
have no hope that the EGL context creation code in all the backends can
be fully shared. But some self-contained functionality can definitely be
shared.
WGL_NV_DX_interop is widely supported by Nvidia and AMD drivers. It
allows a texture to be shared between Direct3D and WGL, so that
rendering can be done with WGL and presentation can be done with
Direct3D. This should allow us to work around some persistent WGL
issues, such as dropped frames with some driver/OS combos, drivers that
buffer frames to increase performance at the cost of latency, and the
inability to disable exclusive fullscreen mode when using WGL to render
to a fullscreen window.
The addition of a DX_interop backend might also enable some cool
Direct3D-specific enhancements in the future, such as using the
GetPresentStatistics API to get accurate frame presentation timestamps.
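In outline, the interop handshake would look something like this (a
sketch; the wglDX* entry points have to be loaded via wglGetProcAddress,
and error handling is omitted):

    /* Init: share a D3D9 device and surface with GL. */
    HANDLE dev = wglDXOpenDeviceNV(d3d9_device);
    HANDLE obj = wglDXRegisterObjectNV(dev, d3d9_surface, gl_texture,
                                       GL_TEXTURE_2D,
                                       WGL_ACCESS_WRITE_DISCARD_NV);

    /* Per frame: lock, render with GL into the shared texture,
       unlock, then present through Direct3D. */
    wglDXLockObjectsNV(dev, 1, &obj);
    /* ... GL rendering ... */
    wglDXUnlockObjectsNV(dev, 1, &obj);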
Note that due to a driver bug, this backend is currently broken on
Intel. It will appear to work as long as the window is not resized too
often, but after a few changes of size it will be unable to share the
newly created renderbuffer with GL. See:
https://software.intel.com/en-us/forums/graphics-driver-bug-reporting/topic/562051
This is only for specific Hauppauge cards. According to the comments in
the relevant discussion, there seems to be nobody who is actively using
this feature. Get it out of the way.
Anyone who still wants to use this should complain. Keeping this code
would not cause terribly much additional work, and it could be restored
again. (But not if the request comes months later.)
There are claims that nnedi3.c doesn't constitute its own new
implementation, but is derived from existing HLSL or OpenCL shaders
distributed under the LGPLv3 license.
Until these claims are resolved, do the "correct" thing and require
--enable-gpl3 to build nnedi.
ANGLE is a GLES2 implementation for Windows that uses Direct3D 11 for
rendering, enabling vo_opengl to work on systems with poor OpenGL
drivers and bypassing some of the problems with native GL, such as VSync
in fullscreen mode.
Unfortunately, using GLES2 means that most of vo_opengl's advanced
features will not work. However, ANGLE is under rapid development, and
GLES3 support is supposed to be coming soon.
Notes:
- Unfortunately the only way to talk to EGL from within DRM I could find
involves linking with GBM (generic buffer management for Mesa); a sketch
of this bring-up follows these notes. Because of this, I'm pretty sure
it won't work with proprietary NVidia drivers, but then again, last time
I checked NVidia didn't offer proper screen resolution for VT.
- VT switching doesn't seem to work at all. It's worth mentioning that
using vo_drm before the introduction of the VT switcher had an anomaly
where the user could switch to another VT and input text into it, while
video played on top of that VT. However, that isn't the case with
drm_egl: I can't switch to another VT during playback like this. This
makes me think that it's either a limitation coming from my firmware or
from EGL/KMS itself rather than a bug in my code. Nonetheless, I still
left the (untestable) VT switching code in place, in case it's useful
to someone else.
- The mode_id, connector_id and device_path should be configurable for
power users and people who wish to watch videos on a non-primary screen.
Unfortunately I didn't see anything that would allow OpenGL backends to
register their own set of options. At the same time, adding them to the
global namespace is pointless.
- A few dozen lines could be shared with vo_drm (setting up VT
switching, most of the code behind page flipping). I don't have any
strong opinion on this.
- Sometimes I get minor visual glitches. I'm not sure if there's a race
condition of some sort, an uninitialized variable (doubtful), or a buggy
driver. (I'm using integrated Intel HD Graphics 4400 with Mesa.)
- .config and .control are very minimal.
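For reference, the GBM/EGL bring-up mentioned in the first note is
roughly the following (a sketch; attribute lists and error handling are
trimmed):

    #include <gbm.h>
    #include <EGL/egl.h>

    static EGLSurface create_drm_egl_surface(int drm_fd, int w, int h)
    {
        struct gbm_device *gbm = gbm_create_device(drm_fd);
        struct gbm_surface *gbm_surf = gbm_surface_create(gbm, w, h,
                GBM_FORMAT_XRGB8888,
                GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);
        EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm);
        eglInitialize(dpy, NULL, NULL);
        EGLConfig config;
        EGLint n;
        EGLint attrs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_NONE };
        eglChooseConfig(dpy, attrs, &config, 1, &n);
        return eglCreateWindowSurface(dpy, config,
                                      (EGLNativeWindowType)gbm_surf, NULL);
    }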
Signed-off-by: wm4 <wm4@nowhere>
0.34 and 0.35 don't have the buffer API, such as vaAcquireBufferHandle.
This is only needed for the EGL interop, but why bother staying
compatible for such old things (0.36 was released over a year ago).
We also can drop some minor compatibility ifdeffery.
This AVPacket field was a hack against the fact that the duration field
was merely an int (too small for things like subtitle durations). Newer
libavcodec drops this field and makes duration 64 bit.
VideoToolbox is preferred. Now that FFmpeg has released 2.8, there's no
reason to support VDA anymore. In fact, we had a bug that made VDA not
usable with older FFmpeg versions in some newer mpv releases.
VideoToolbox is supported even on slightly older OSX versions, and if
not, you still can run mpv without hw decoding.
There are at least 2 ways of using VAAPI without X11 (Wayland, DRM).
Remove the X11 requirement from the decoder part and the EGL interop.
This will be used by a following commit, which adds Wayland support.
The worst of it is the decoder part, which includes a bad hack for
using the decoder without any VO interop (also known as "vaapi-copy"
mode). Separate the X11 parts so that they're self-contained. For the
EGL interop code we do something similar (it's kept slightly simpler,
because it essentially only has to translate between our silly
MPGetNativeDisplay abstraction and the vaGetDisplay...() call).
It looks like my hope that we can unconditionally include EGL headers in
the OpenGL code is not coming true, because OSX does not support EGL at
all. So I prefer loading the VAAPI EGL/GL specific extensions manually,
because it's less of a mess. Partially reverts commit d47dff3f.
This makes it much faster if the surface is really mapped from GPU
memory. It's slightly slower than the system memcpy if used on system
memory. Since we can't know for sure which type of memory it's located
in, we use the GPU memcpy in all cases.
Fixes#2317.
Make the GPU memcpy from the dxva2 code generally useful to other parts
of the player.
We need to check at configure time whether SSE intrinsics work at all.
(At least in this form, they won't work on clang, for example. It also
won't work on non-x86.)
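The core of such a GPU memcpy is the SSE 4.1 streaming load, roughly
like this (a simplified sketch that assumes 16-byte alignment and a
size divisible by 16; the real code also has to handle tails and
unaligned buffers):

    #include <smmintrin.h>
    #include <stddef.h>

    static void gpu_memcpy_sse4(void *dst, const void *src, size_t size)
    {
        __m128i *d = dst;
        const __m128i *s = src;
        for (size_t i = 0; i < size / 16; i++) {
            /* MOVNTDQA: a streaming load, much faster than a plain
               load on uncacheable write-combined (GPU-mapped) memory. */
            __m128i v = _mm_stream_load_si128((__m128i *)&s[i]);
            _mm_store_si128(&d[i], v);
        }
    }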
Introduce a mp_image_copy_gpu(), and make the dxva2 code use it. Do some
awkward stuff to share the existing code used by mp_image_copy(). I'm
hoping that FFmpeg will sooner or later provide a function like this, so
we can remove most of this again. (There is a patch, but it's stuck in
limbo since forever.)
All this is used by the following commit.
Broken by commit d47dff3f. If something is going to include EGL.h,
header_fixes.h has to know. This definitely affected vo_rpi, and
probably affects wayland builds (with x11egl disabled) as well.
Should work much better than the old GLX interop code. Requires Mesa 11,
and explicitly selecting the X11 EGL backend with:
--vo=opengl:backend=x11egl
Should it turn out that the new interop works well, we will try to
autodetect EGL by default.
This code still uses some bad assumptions, like expecting surfaces to be
in NV12. (This is probably ok, because virtually all HW will use this
format. But we should at least check this on init or so, instead of
failing to render an image if our assumption doesn't hold up.)
This repo was a lot of help: https://github.com/gbeauchesne/ffvademo
The kodi code was also helpful (the magic FourCCs it uses for
EGL_LINUX_DRM_FOURCC_EXT are documented nowhere, and
EGL_IMAGE_INTERNAL_FORMAT_EXT as used in ffvademo does not actually
exist).
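The import path for one plane looks roughly like this (a sketch
assuming EGL_EXT_image_dma_buf_import, with the fd coming from
vaAcquireBufferHandle; DRM_FORMAT_R8 is the sort of "magic" FourCC meant
above, mapping the luma plane of NV12):

    #include <drm_fourcc.h>

    EGLint attribs[] = {
        EGL_WIDTH, width,
        EGL_HEIGHT, height,
        EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_R8,
        EGL_DMA_BUF_PLANE0_FD_EXT, fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, offset,
        EGL_DMA_BUF_PLANE0_PITCH_EXT, pitch,
        EGL_NONE
    };
    EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
            EGL_LINUX_DMA_BUF_EXT, NULL, attribs);
    /* Then: glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image); */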
(This is the 3rd VAAPI GL interop that was implemented in this player.)
The VAAPI EGL interop code will need access to the X11 Display. While
GLX could return it from the current GLX context, EGL has no such
mechanism. (At least no standard one supported by all implementations.)
So mpv makes up such a mechanism.
For internal purposes, this is a rather awkward solution, but it's
needed for libmpv anyway.
These extensions use a bunch of EGL types, so we need to include the EGL
headers in common.h to use our GL function loader with this.
In the future, we should probably require the presence of the EGL
headers to reduce the hacks. This might not be so simple, at least on
OSX, so for now this has to do.
It's possible to build vf_vapoursynth with either the Python or Lua
backend (or both or none). The check for the vapoursynth core itself was
hidden away and couldn't be disabled, which would link mpv with
vapoursynth even if all backends were disabled.
Rearrange the checks so that the core will be disabled if no backend is
found (or both are disabled). This duplicates the check for
vapoursynth.pc, but since it's trivial, this is not that bad.
These were normalized and are saner now. We want to use the new fields,
and also get rid of the deprecation warnings, so use them. There's no
release yet which uses these, so some ifdeffery is unfortunately needed.
Some users still use this filter, so the filter was going to be kept.
But I overlooked that libavfilter provides this filter. Remove the
redundant wrapper from mpv. Something like --af=lavfi=bs2b should work
and give exactly the same results.
All of these filters are considered not useful anymore by us. Some have
replacements in libavfilter (useable through af_lavfi).
af_center, af_extrastereo, af_karaoke, af_sinesuppress, af_sub,
af_surround, af_sweep: pretty simple and useless filters which probably
nobody ever wants.
af_ladspa: has a replacement in libavfilter.
af_hrtf: the algorithm doesn't work properly on most sources, and the
implementation was buggy and complicated. (The filter was inherited from
MPlayer; but even in mpv times we had to apply fixes that fixed major
issues with added noise.) There is a ladspa filter if you still want to
use it.
af_export: I'm not even sure what this is supposed to do. Possibly it
was meant for GUIs rendering audio visualizations, but it couldn't
really work well. For example, the size of the audio depended on the
samplerate (fixed number of samples only), and it couldn't retrieve the
complete audio, only fragments. If this is really needed for GUIs, mpv
should add native visualization, or a proper API for it.
Slightly faster than using the dispmanx mess (perhaps largely due to
the rather stupid C-only unoptimized ASS->RGBA blending code).
Do this by reusing vo_opengl's subtitle renderer, and vo_opengl's RPI
backend.
This works similar to the existing .rar support, but uses libarchive.
libarchive supports a number of formats, including zip and (most of)
rar.
Unfortunately, seeking does not work too well. Most libarchive readers
do not support seeking, so it's emulated by skipping data until the
target position. On backwards seek, the file is reopened. This works
fine on a local machine (and if the file is not too large), but will
not perform so well over network connections.
This is disabled by default for now. One reason is that we try
libarchive on every file we open, before trying libavformat, and I'm not
sure if I trust libarchive that much yet. Another reason is that this
breaks multivolume rar support. While libarchive supports seeking in
rar, and (probably) supports multivolume archives, our usage of
libarchive (probably) does not. I don't care about multivolume rar, but
vocal users do.
While the "old" libavcodec vdpau API is not deprecated (only the very-
old API is), it's still relatively complicated code that badly
duplicates the much simpler newer vdpau code. It exists only for the
sake of older FFmpeg releases; get rid of it.