This is a systematic test and review of the quality differences between different settings in Blender's VSE, so that you can find the best FFMPEG codec settings to render video in Blender.
What options do you have?
With FFMPEG video as a rendering option in Blender, you get several container choices, but the relevant ones for our purpose of generating YouTube videos are these:
- MPEG4 = .mp4 (Tip: use this for reels and other stuff you want to upload from your phone)
- Matroska = .mkv (my personal favorite)
- WebM = .webm
- Quicktime = .mov
- MPEG4 (divx) (This is the worst, do not use this, ever!)
- H.264 (this is okay for most purposes)
- (WebM container with) VP9 (Google's alternative to H.265) (also okay)
- Perceptually lossless
Everything after that, I won’t even list, because it is all eye-cancer-inducing.
- choose “slowest” if you can render overnight and want a small file
- choose “realtime” if you want it asap
Use AAC or MP3 with the same bit rate as your original file. If you don’t know how to look that up, just open the file in VLC player and look in the info menu; it’s somewhere in there. Honestly, most of the time I just use MP3 at 320 kbps, because that is already insane and I have never even met a person who could tell the difference between that and 128 kbps.
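If you’d rather not dig through VLC’s menus, `ffprobe` (which ships with FFmpeg) can report the audio bit rate directly. Here is a minimal sketch; the file name `input.mp4` and the helper function names are my own placeholders, not anything Blender-specific:

```python
import json

def build_bitrate_cmd(path):
    """Assemble an ffprobe call that reports the first audio stream's bit rate."""
    return [
        "ffprobe", "-v", "error",
        "-select_streams", "a:0",            # first audio stream only
        "-show_entries", "stream=bit_rate",  # just the bit_rate field
        "-of", "json",                       # machine-readable output
        path,
    ]

def parse_bitrate(ffprobe_json):
    """Extract the bit rate in kbps from ffprobe's JSON output."""
    streams = json.loads(ffprobe_json)["streams"]
    return int(streams[0]["bit_rate"]) // 1000

# To actually run it (requires ffmpeg/ffprobe installed):
#   import subprocess
#   out = subprocess.run(build_bitrate_cmd("input.mp4"),
#                        capture_output=True, text=True).stdout
#   print(parse_bitrate(out), "kbps")

# Canned sample so you can see the shape of the data:
sample = '{"streams": [{"bit_rate": "320000"}]}'
print(parse_bitrate(sample))  # 320
```

If the output shows, say, 320 kbps, just pick the matching rate in Blender’s audio settings.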
What problems can occur?
In general, a problem that I have encountered a lot (and also read about in multiple sources) is color banding, sometimes called “fringes”. Color banding is the correct term. It is a phenomenon that occurs when you compress a video or an image: neighboring areas are lumped together into one color to make the palette, and therefore the file, smaller. More colors = more information = more disk space needed. At least for the formats we are talking about here. (There are formats where this doesn’t matter, but since you are not an image processing nerd writing your thesis with this knowledge, don’t worry about it.)
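To see how that lumping produces bands, here is a toy sketch. This is not what a real codec does internally, just coarse quantization of an 8-bit gradient, which is enough to show the effect:

```python
# A smooth 0..255 gradient has 256 distinct values; collapsing it to a
# handful of levels (roughly what aggressive compression does to flat,
# slowly changing areas) merges neighbors into one value -> visible bands.
gradient = list(range(256))  # one row of a smooth grayscale gradient

def quantize(values, levels):
    """Snap each value to the nearest of `levels` evenly spaced steps."""
    step = 255 / (levels - 1)
    return [round(round(v / step) * step) for v in values]

banded = quantize(gradient, 8)  # keep only 8 levels

print(len(set(gradient)))  # 256 distinct shades: smooth
print(len(set(banded)))    # 8 distinct shades: staircase = banding
```

In a real sky shot the “steps” are exactly those abrupt jumps between otherwise smooth shades.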
Bad news: Color banding occurs in all video formats that are available in Blender’s VSE, at least all that I have rendered on my Debian machine.
If you want to avoid color banding, do this:
- export every frame as a PNG file
- render the audio on its own (at its native quality, into the same file format your source audio has)
- combine the PNGs and audio with an external tool, not with Blender’s FFMPEG output
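The last step can be done with the ffmpeg command line outside Blender. A sketch of what that call could look like, assuming frames named `frame_0001.png` etc. and an `audio.flac` next to them (all hypothetical names); I build the argument list in Python so it is easy to tweak:

```python
def build_mux_cmd(fps=30, frames="frame_%04d.png",
                  audio="audio.flac", out="out.mkv"):
    """Assemble an ffmpeg call that muxes a PNG sequence with an audio file.
    -crf 0 asks libx264 for lossless encoding, so no new banding is
    introduced (but the file will be huge)."""
    return [
        "ffmpeg",
        "-framerate", str(fps), "-i", frames,  # image sequence in
        "-i", audio,                           # audio in
        "-c:v", "libx264", "-crf", "0",        # lossless H.264
        "-c:a", "copy",                        # don't re-encode the audio
        out,
    ]

cmd = build_mux_cmd()
print(" ".join(cmd))
# To actually run it (requires ffmpeg installed):
#   import subprocess
#   subprocess.run(cmd, check=True)
```

Swap `-crf 0` for a small positive value (e.g. 16–18) if you want a playable file size and can accept slight loss.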
Yes, as of 2022, this is the “solution”. Otherwise you just have to live with sub-optimal output—or use the tricks I will show you in this article.
There are no true lossless options. The only proven lossless output in Blender is PNG or raw (neither includes audio). “Lossless” always results in huge files, with little to no benefit to visual perception.
Which FFMPEG encoding leads to the fewest color banding effects?
The honest answer would be: “None; get a Windows PC, install Premiere Pro, and do everything with that, because open-source codecs and rendering software are trash.” But I am trying to help you, because I have myself sifted through countless useless Stackexchange posts. So here is my test run:
I analyzed part of a cross-section of the first frame from a short video I rendered with these different settings and compared it to the original. My conclusion is: Nothing but PNG output (images, not video; there is no “PNG video”) looks like the original. And you can tell. You clearly see bands in both “lossless” and “perceptually lossless” video output. However, this might not be much of a problem if you don’t film large areas of similar color with a slow gradient, e.g. skies, building walls or other, preferably light, surfaces. But since I film skies a lot, it annoys me.
Now here is the output I generated with different settings:
All of these snapshots are from videos created with different rendering settings. Can you tell the difference? I can’t, but I know there are slight differences. And even if this is a bit of a lost cause, I still want the least shitty result…
A systematic overview of the best FFMPEG codecs to render video in Blender
So here is a table of the options I used and the images, along with the median absolute deviation from the original value, per pixel. In case you are now thinking: Value? But this is an RGB image! Yes, you are right. I converted it to grayscale, which is absolutely okay and sufficient for this example. It always comes down to value in the end; I could have just taken a channel mean or anything else really, but this is not my job, so you do that, if you have nothing better to do, and then brag about it on your own blog.
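For reference, the metric is simple enough to sketch in a few lines of Python. This is not my exact script, just the idea: convert each pixel to a grayscale value, take the absolute difference to the original per pixel, then the median of those differences:

```python
from statistics import median

def to_value(rgb):
    """Standard luma-style grayscale conversion of one (R, G, B) pixel."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def median_abs_deviation(original, rendered):
    """Median over the per-pixel absolute grayscale deviation between two
    frames, given as flat lists of (R, G, B) tuples."""
    return median(abs(to_value(a) - to_value(b))
                  for a, b in zip(original, rendered))

# Tiny sanity check: identical frames deviate by exactly 0.
frame = [(10, 20, 30), (200, 200, 200)]
print(median_abs_deviation(frame, frame))  # 0.0
```

A lower number means the rendered frame stays closer to the PNG original; 0 would be a pixel-perfect match.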
Different export settings for the different FFMPEG codecs to render video in Blender
Which Codec looks most visually pleasing?
Theoretically, the smoothest-looking curve should be the most visually pleasing… but well, you tell me… I really can’t say. I see fringes in all of them and I don’t really find any of them okay. The last one has definitely lost a lot of information, but it is smooth, in a way; mathematically, though… not really. In my opinion, of the lossy options, WebM/VP9 was best. It is, after all, the Google version of H.265 (which is unavailable with FFMPEG/Blender at the moment). So I will use this in the future, I think.
“Lossless” export is also possible, but will lead to barely playable video, aka huge lag. I had the idea of just doing that and letting YouTube do the encoding for me. They do that when you upload a video, but this might also result in more losses. I wasn’t eager to try that, to be honest. So I didn’t. But feel free to check it out. My guess would be that you get a laggy YouTube video doing that.
My solution to reduce color banding without messing with the Codec
I personally don’t have the perfectionism in me to worry about three fringy frames. I don’t think there is one best codec to render video in Blender. So what I will do is simply flatten the value curve, and with it the relative distances between these unwanted steps in the cross-section are flattened as well. That reduces the apparent depth of whatever surface it is applied to. E.g. if you have a sky with a gradient, like in my example, you will lose some of the 3D look of that sky. It will look more moody and less far away, less cloudy, etc. To me that does not matter much, because it was a moody day and it matches the color grading. But if you have a gradient that you want to show off, “swallowing” it is not really an option. I would also say you should look into your camera options and choose a higher-quality recording format. That alone has worked pretty well for me with my Sony, because due to the smaller sampling rate the footage is kind of “bandy” already, and that can only get worse after rendering it a second time.
You might not see it well in these sample images. But when I looked at the final footage, it really did make a difference. After all these bands look most annoying when you move the camera “across” the gradient and it changes. That results in the bands moving and it looks very cheap and sad. So that effect can definitely be reduced drastically by applying a profile like the above.
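The flattening trick boils down to compressing values toward a midpoint, which shrinks the step between adjacent bands by the same factor. A toy sketch; the midpoint and factor here are made-up example numbers, not settings from my actual grade:

```python
def flatten_values(values, mid=128, factor=0.6):
    """Pull every value toward `mid`; a factor < 1 compresses the value
    curve, shrinking the height of each banding step proportionally."""
    return [mid + factor * (v - mid) for v in values]

# Two neighboring bands 36 levels apart...
bands = [109, 145]
flat = flatten_values(bands)
# ...end up only 0.6 * 36 = 21.6 levels apart, so the edge between
# them is less visible (at the cost of a flatter-looking gradient).
print(round(flat[1] - flat[0], 1))  # 21.6
```

In practice you would do this with a curves or contrast adjustment in the color grade rather than in code; the math is the same.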
Conclusion: What is the best codec to render video in Blender?
Right now, there is no optimal or best FFMPEG codec to render video in Blender. They are all flawed. If you don’t have the option to use other codecs or proprietary software, this is what you have to deal with. H.265 could be a solution in the future, but I do not have much hope here, because the VSE is not the most important part of Blender. (VP9 also seems to be a decent alternative!) But yeah, VSE… It’s more like the unwanted stepchild of the 3D animation editor… And the FFMPEG integration is notoriously outdated and a bit primitive. So if you are desperate to get good video quality for free, get DaVinci Resolve and stop messing around with FFMPEG and its sad, sad Blender module. 🙂