Movie playback usually consists of two crucial parts: 1) Video rendering and 2) Audio rendering. Both tasks are performed by dedicated renderers. madVR is a video renderer, so it concentrates on just that: video. It doesn't care at all about the audio side of things. In the same way the audio renderer typically doesn't care about the video side of things. However, if both renderers flat out ignore each other, then why do audio and video appear to stay in sync? The reason is that there's a "master clock" which both audio and video renderers have to strictly follow.
GPUs and soundcards have hardware clock circuits, which you can think of as the "engine" that decides the rhythm/speed at which audio and video frames are transmitted to the display or amplifier. Unfortunately, these hardware clock circuits don't always run at the perfect speed, and if video is driven by a different clock than audio, the two clocks can drift apart, which means either video or audio is rendered faster than the other. In this situation audio/video sync would get lost if we didn't have a master clock.
In DirectShow based movie players, usually the audio clock is declared to be the master clock. As a result, audio playback is usually perfect, without any frame drops or repeats. On the other hand, the video renderer has to make sure that it stays in sync with the master (= audio) clock. Which means that if the video clock is running slightly faster or slower than the audio clock, the video renderer has to repeat or drop video frames. Otherwise video and audio would lose sync.
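To get a feel for the numbers involved, here's a small Python sketch (the 20 ppm drift is a made-up example value) showing how quickly even a tiny clock drift forces the video renderer to drop or repeat a frame:

```python
# Hypothetical example: how long can playback run before clock drift
# forces the video renderer to drop or repeat one frame?

def drop_repeat_interval(fps, drift_ppm):
    """Seconds of playback per forced frame drop/repeat, for a given
    relative drift between video and audio clocks (in parts per million)."""
    frame_duration = 1.0 / fps     # seconds per video frame
    drift = drift_ppm * 1e-6       # relative speed difference of the clocks
    return frame_duration / drift  # time until the error adds up to one frame

# a (made-up) 20 ppm drift at 59.94 fps already forces a correction
# roughly every 14 minutes:
interval = drop_repeat_interval(60000 / 1001, 20)
print(round(interval / 60, 1))
```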
Not happy with the state of things, in 2002 doom9 developer "Ogo" created the ingenious "ReClock" DirectShow Audio Renderer. This renderer is a very clever piece of software which basically measures the VSync interrupt and adjusts the audio clock (which is the master clock) in such a way that the video renderer doesn't ever have to drop or repeat any frames. Obviously this means that video playback has the potential to be perfect. But now the audio renderer has the problem of having to adjust to a different clock.
ReClock solves this problem by resampling audio on the fly. This works reasonably well, but it does cost a little bit of audio quality. Furthermore, it doesn't work really well with bitstreaming. Also, Ogo stopped development at some point. SlySoft took over and has updated ReClock now and then, but it's not really actively developed anymore, either. There are alternative audio renderers available now which do similar things to ReClock, e.g. J.River MC's "VideoClock" renderer, or Sanear. But both share the problems of losing audio quality due to resampling, and of not working well with bitstreaming.
On the quest to find the "perfect" solution, let's remember what the original problem was: If audio and video clocks run at different speeds, playback can't be perfect, because either the audio renderer has to compromise (e.g. by resampling audio), or the video renderer does (e.g. by dropping or repeating frames). So let's stop trying to beautify compromises and instead solve the root of the problem. We do that by trying to modify the video clock in such a way that it perfectly matches the audio clock!
Sounds simple enough, doesn't it? GPUs have a so-called "pixel clock" which is actually programmable. And it gets even better: The VESA DMT timing standard asks displays to accept pixel clock variances of up to 5% - which is much bigger than we ever need! So if the video renderer could simply do infinitesimally small corrections to the pixel clock on the fly, all problems would be solved! Unfortunately neither the OS nor the GPU manufacturers offer any way to do small pixel clock corrections on the fly. Maybe nobody has thought that this might be useful? Or maybe there's the fear that if we exceed a certain amount of pixel clock modification, the display might consider that a refresh rate change and might resync? Anyway, sadly on the fly pixel clock corrections are currently not technically possible! (Let's not talk about FreeSync/GSync here, please.)
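To put that tolerance figure into perspective, here's a quick Python back-of-the-envelope calculation (the 146.25 MHz pixel clock is just an example value): the correction needed to keep video synced to audio is typically on the order of 0.1% (e.g. the 60 Hz → 59.94 Hz factor of 1.001), tiny compared to the allowed window:

```python
pclk = 146_250_000                  # example pixel clock, in Hz
tolerance_hz = pclk * 0.05          # 5% headroom (per the VESA DMT figure above)
needed_hz = pclk - pclk / 1.001     # ~0.1% shift for the 1.001 factor

print(round(tolerance_hz), round(needed_hz))  # the tolerance dwarfs the need
```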
So if we can't do pixel clock adjustments on the fly, let's do them statically. And that's actually possible. So basically what we'll be doing is to measure the exact speed of the audio and video clocks and then tell the GPU to use a corrected/optimized pixel clock instead in the future. Easy peasy, no? Unfortunately, the pixel clock has a rather coarse raster. So while we can perform some improvements, we can't reach the desired perfection this way.
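Here's a sketch of the coarse-raster problem, with assumed numbers: if the GPU could only program the pixel clock in (say) 10 kHz steps — the real granularity depends on the GPU — the ideal value gets rounded, and a residual error of several ppm remains, which still causes occasional frame drops or repeats:

```python
STEP_HZ = 10_000   # assumed pixel clock granularity (hypothetical value)

def nearest_pixel_clock(ideal_hz):
    # snap the ideal clock to the coarse raster the hardware supports
    return round(ideal_hz / STEP_HZ) * STEP_HZ

ideal = 146_103_896                 # desired pixel clock in Hz (example value)
actual = nearest_pixel_clock(ideal)
error_ppm = (actual - ideal) / ideal * 1e6
print(actual, round(error_ppm, 2))  # a residual error of roughly -27 ppm remains
```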
The final solution to the problem is that we don't stop at modifying the pixel clock, but we're modifying some other timing parameters, too. Specifically, we're modifying the horizontal and vertical "back porch" in addition to the pixel clock. You don't really need to know what a "back porch" is, it's enough to know that it's another timing parameter of a GPU output mode, and modifying it in addition to the pixel clock allows us to get near enough to perfection for our needs.
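To illustrate the idea (this is just a sketch, not madVR's actual algorithm): the refresh rate equals pixel_clock / (htotal × vtotal), and the back porches are part of those totals. So by combining a coarse pixel clock raster (assumed here as 10 kHz) with small back-porch variations, we can land very close to any target refresh rate. The 2240×1089 totals are CVT-style values for 1680x1050:

```python
def refresh_hz(pixel_clock_hz, htotal, vtotal):
    # the refresh rate follows directly from the mode's timing totals
    return pixel_clock_hz / (htotal * vtotal)

target = 60000 / 1001   # 59.94006 Hz, locked to the (ideal) audio clock

# brute-force search: pixel clock on an assumed 10 kHz raster, vertical
# total varied by a few lines of back porch around the CVT value
best = min(
    (abs(refresh_hz(pclk, 2240, vt) - target), pclk, vt)
    for pclk in range(146_000_000, 146_500_001, 10_000)
    for vt in range(1085, 1095)
)
err_hz, best_pclk, best_vt = best
print(best_pclk, best_vt, err_hz)
```

In this example, the pixel clock raster alone (vtotal fixed at 1089) leaves us tens of ppm off target; allowing the vertical total to vary by a single line closes most of that gap.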
Let's take a moment to talk about how we can convince the GPU to use our modified timing parameters (pixel clock + back porches). We could use EDID overrides for that, which is what the "CRU" (Custom Resolution Utility) does. However, this method has some disadvantages which I'd like to avoid. So instead madVR uses private APIs provided by AMD, Intel and Nvidia, which allow madVR to define custom timing parameters for any GPU output mode.
Unfortunately there's a catch to this whole concept: While finetuning the pixel clock is something that virtually any display will accept without hesitation, using different "back porches" is a gray area. Some displays will accept slightly modified back porches, others might not. In my experience there's a good chance it will work for most displays, but there's no guarantee. It's also somewhat of a "trial and error" approach, because it's hard to predict which combinations of back porches the display might like or dislike. So it's quite possible that you'll try some modes which don't work (the display will likely simply go black or report a non-supported mode).
DISCLAIMER: Digital displays should usually not break if you drive them with a mode they don't support. But this is not something I can guarantee, so use this functionality at your own risk!
Before we get busy, let's first have a quick look at the EDID (Extended Display Identification Data) block of my LCD monitor, which we will be working with in this tutorial:
Important things to note are the native resolution of 1680x1050, the max supported refresh rate of 75Hz and the max pixel clock of 150MHz, which is rather low.
Starting with madVR v0.92.0 the settings dialog has a new tab named "custom modes" under "devices\YourDisplay\display modes". This tab only appears if you open the settings dialog on the same PC the display is actually connected to. Furthermore, the display must be "active", which means it must be listed as an existing display in the OS display configuration control panel. Otherwise the tab won't show. The tab looks like this:
The "missing mode"s are modes which would be very useful for video playback, but which the OS currently doesn't list as supported for this display. *Probably* the OS is right in that the display doesn't support these modes, BUT it won't hurt to try. So this is why madVR lists such modes for you, to make it easier for you to try adding them.
It seems the only mode (at native resolution) that the OS lists as supported is 1680x1050 at 60Hz. Let's double click this mode to get more information:
As you can see here, the EDID contains information about this mode, and interestingly, the EDID timing information matches almost exactly the "CVT CRT" standard. So it seems that this specific LCD monitor likes "CVT CRT" timings. That's good to know!
Let's go ahead and try adding the missing 23p mode. We do that by double clicking on the mode in the list, or by single clicking it, and then pressing the "add" button. Doing that opens the following window:
We know that this LCD monitor seems to like "CVT CRT" timings, so we use these to create the new 23p mode. We click on "CVT CRT", then on "apply", which gives us this window:
And what the window says is correct, because when we go back to the mode list, and double check the OS control panel, we can see that the OS doesn't know about the new 23p mode yet:
I'm too lazy to reboot the OS, or to disconnect the display, so I usually simply press the "reset gpu" button, which restarts the GPU driver. There's a small risk that it might leave your GPU stuck, in which case you'd have to reboot, so use the button at your own risk. But in my experience, at least in Windows 10, it works very reliably. Please note that some users have reported that the "reset gpu" button doesn't do the trick for them. They have to reboot for the changes to take effect! So if things don't work as expected for you, try rebooting instead of pressing "reset gpu". Anyway, afterwards we get this:
Looks good, right? So I tried actually switching to the new 23p mode, but unfortunately, my monitor went black and reported "mode not supported". So I waited for the OS to revert to 60p, which it did automatically after a couple of seconds. Too bad, it seems my LCD monitor doesn't support 23p. I guess I could try other timings instead of "CVT CRT", but I don't have much hope, so I probably won't.
As mentioned before, the VESA DMT standard asks displays to tolerate a 5% pixel clock spread. So we already know for (almost) sure that if our LCD monitor supports 60Hz, then it will *very* likely also support 59.940Hz. So let's double click the missing 59p mode now:
The EDID usually contains information for just 60p or 59p, but not for both. The other mode you can simply get by applying a 1.001 factor to the pixel clock. So in order to create the 59p mode, we use the EDID timing information, with the appropriate pixel clock, and we're very confident it will work. Don't worry, you don't have to do any math yourself. madVR does everything for you. All you have to do is click on the "EDID" list entry, then "apply", and then the "reset gpu" button. Afterwards we get this, as expected:
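The 1.001 factor itself is simple arithmetic — a Python one-liner, really (the 146.25 MHz clock below is just an example value, not necessarily what this monitor's EDID contains):

```python
pclk_60 = 146_250_000      # pixel clock of the 60p mode, in Hz (example value)
pclk_59 = pclk_60 / 1.001  # identical timings, clock slowed by ~0.1%

print(round(pclk_59))        # 146103896
print(round(60 / 1.001, 5))  # 59.94006 Hz
```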
And (drum-roll)... Switching to 59p works just fine - yey! :) Oh, and the "optimize" button is available now! Let's press it - it opens up this window:
Alright. So this is what we need to do now: Make sure 59p is the active mode, play any 59p video you like (it must have an audio track), and let it play for at least 30 minutes. The longer the better. Actually, madVR is already satisfied with 10 minutes, but I really *really* recommend letting the video run for at least 30 minutes, to get more accurate measurements. Also, please be careful not to touch/interrupt playback in any way. Otherwise measurement might have to restart, and you'll have to wait another 30 minutes!
So, 10 minutes later (ok, I admit it, I didn't want to wait 30 minutes ;) we get this:
Nice! Let's press the "optimize" button once more. Now we get this:
At the top of the window you can see that the measured mode resulted in about "1 frame repeat every 5.57 minutes". That's pretty bad, really, but it was to be expected. After all, we've not done any optimizations just yet!
There are multiple sections in the mode list: First is always the "current timings", which are the timings currently used by the GPU for this mode. Second is the "measured timings", which are the timings that were used when you played the 59p video for 10+ minutes. Right now "current" and "measured" timings are identical. But when we later try different modes from this big list, the "current" timings will change to whatever timings we're evaluating, while the "measured" timings will stay identical until we do another measurement run for 10+ minutes. So by applying the "measured" timings in the list, you can always go back to the timings that were measured last, as a safe fallback option.
Next in the list are standard timings, starting with EDID timings (if any), followed by CVT and GTF timings. All these already have a slightly tweaked pixel clock, based on the 10+ minutes measurements we just did! So if you apply any of those, we should already get an improvement. But really, this is just meant to be used if all the other modes that follow below don't work.
Current Intel GPU drivers don't handle pixel clock adjustments well at all, which makes it impossible for madVR to calculate good custom modes for you, with modified pixel clocks. So for Intel GPUs you can use the modes listed as "same pixel clock". These modes are optimized by modified "back porches", while keeping the pixel clock identical. You can achieve an improvement this way, but probably not achieve perfection.
Finally we get to the "optimized pixel clock" modes. These are our best hope for perfection. Try one of the modes listed as "no frame drops/repeats expected". If the display likes the mode, there's a great chance to get a very big improvement! However, these modes have a modified pixel clock *and* modified back porches. So there's a chance the display might dislike them and refuse to sync. To make your life a bit easier, the modes are classified with a "compatibility" rating. The higher the rating, the higher the chance that the display might actually like the mode. This is just a guess, though, so don't take it as gospel. It's possible that a highly rated mode won't work, but a badly rated mode might work just fine.
Now let's try applying the "optimized pixel clock #1" mode:
Oooops. The "apply" button refuses to work, because 1680x1050p59 is still the active mode! There's a good reason for this. Because let's imagine that the "optimized pixel clock #1" mode doesn't actually work. If we had applied the timing changes, and then restarted the GPU driver, we might have ended up with a black screen, with no way to revert to 1680x1050p60! We'd have to boot into safe mode then to get the PC to work again, which is of course very ugly. Because of that madVR's custom mode functionality requires us to first switch to a safe mode, before we can press the "apply" button. Makes sense?
So of course now I switched the GPU to 60p, then hit the "apply" button. This time it worked. Then I hit the "reset gpu" button once more, just to be extra safe that the new timings are actually going to be used (I'm not actually sure if this is necessary at this stage). Then I switched the GPU back to 59p. If the display had rejected the mode at this point, the OS would automatically have reverted to 60p after a few seconds, so the whole process is much safer this way. Fortunately, the display liked this mode just fine!
After re-doing the measurement with my 59fps test video for another 10+ minutes, I finally got this result:
The new measurement got "1 frame drop every 9.37 hours" - wow, that's pretty good! It's not perfect, but actually good enough for all intents and purposes. Of course you can iteratively continue to try new modes, and you're likely to get closer to perfection each time. Once you reach "1 frame drop/repeat every 7 days", madVR just reports "no frame drops/repeats expected" instead, because having 1 full week of playback without any frame drop/repeats is simply beyond good enough. If you reach that measure of perfection, actually the "optimize" button will be disabled... :)
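For the curious, here's what these intervals mean in terms of residual clock error, assuming 59.94 fps playback:

```python
fps = 60000 / 1001   # 59.94 fps

def drift_ppm(drop_interval_s):
    # relative clock error implied by one dropped/repeated frame per interval
    return (1.0 / fps) / drop_interval_s * 1e6

print(round(drift_ppm(9.37 * 3600), 3))    # "1 drop every 9.37 hours" ~ 0.495 ppm
print(round(drift_ppm(7 * 24 * 3600), 4))  # the 7-day threshold ~ 0.0276 ppm
```

So a clock that is accurate to roughly half a ppm already gets you into "good enough" territory, and "no frame drops/repeats expected" corresponds to an error in the hundredths of a ppm.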
There are various bugs in the different drivers. I've reported these as best as I could, but I'm not sure how much hope there is for quick fixes. Highest hope for fixes would be for Nvidia. Next best Intel. My experience with AMD is that it's really hard to get them to fix any bugs. But let's see, maybe we'll get lucky this time.
There are currently 2 bigger issues I'm aware of: With my 4K TV, the GPU driver refuses to accept custom modes for anything higher than 30Hz. Probably the GPU driver thinks that my TV can't handle such high refresh rates, but ironically, a 60Hz mode is available and working well. The other issue is that I can't get a custom mode for 4kp23 to work, while a custom mode for 24p works just fine. The 23p mode "seems" to work, but the measurements don't report any changes.
Current Intel drivers don't seem to like any refresh rates below 24.000Hz, which is of course a dramatic problem. Another problem is that we can't seem to define different custom modes for 23p and 24p (or 59p and 60p). Finally, pixel clock adjustments are not working well at all, currently. Basically the Intel functionality is the most broken one atm.
Nvidia has a nice API that allows me to offer a "try" button, but unfortunately sometimes the "try" feature doesn't activate the custom mode we asked for, but something different instead. Also sometimes the GPU driver completely refuses to install a specific custom mode, for unknown reasons.