Some thoughts on log curves, color correction, white balance and exposure adjustment
Recently I’ve talked to a lot of professionals and prosumers about color grading, and I was surprised by how many misunderstandings there are about color grading workflows. I think this comes partly from the conventional RAW workflow that many high-end productions are used to, and partly from a misunderstanding of the log curve.
Let me start with RAW.
The good aspect of RAW is that it retains the original sensor information for post, and it’s linear. The input/output response is illustrated below.
This is VERY important for color grading. For example, when adjusting white balance we are essentially changing the R/G and B/G ratios globally, meaning that all pixels are multiplied by the same per-channel coefficients. If this is not performed in a linear space like the one above, it will distort the colors of the image: get it right at one brightness level and it will go wrong at another. That’s why I’m against color grading in a non-linear space. Some people say it’s fast and easy, but I believe you end up spending far more time tweaking the details afterwards.
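To make this concrete, here is a small numpy sketch. The log curve and the gain value are purely illustrative assumptions, not any vendor’s actual numbers; the point is only that the same white-balance gain applied to log code values turns into a different effective linear gain at different brightness levels, which is exactly the hue shift described above:

```python
import numpy as np

A = 100.0  # curvature of a toy log curve -- an assumption, not a real vendor curve

def log_encode(lin):
    return np.log10(1 + A * lin) / np.log10(1 + A)

def log_decode(code):
    return ((1 + A) ** code - 1) / A

red_gain = 1.25               # a white-balance gain for the R channel
dark_r, light_r = 0.04, 0.40  # the same hue at two brightness levels

# Correct: multiply in linear space -- the effective gain is 1.25 everywhere.
# Wrong: multiply the log code values. Decode the result and see what linear
# gain we actually applied at each brightness level:
eff_dark  = log_decode(log_encode(dark_r)  * red_gain) / dark_r
eff_light = log_decode(log_encode(light_r) * red_gain) / light_r
print(eff_dark, eff_light)  # the two effective gains differ substantially,
                            # so the hue shifts with brightness
```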
If we can retain all the information from the sensor output, then with the power of a PC or server and no time constraints, RAW is obviously the best choice. With unconstrained data storage, time, processing power and adequate knowledge of each step of the pipeline, RAW beats every other format. Big productions don’t have these constraints, so RAW is their obvious choice.
But in reality we always have constraints. How do we work within them?
1. Dynamic range
A log curve is designed to compress dynamic range; it’s designed to be a container. You can see the response curve in the diagram below, and log curves from different vendors are quite similar. There are some interesting numbers about log:
a. Taking the diagram below as an example, the log curve compresses a linear input range of 8 down to a little more than 1.2.
b. The beginning of the curve is very steep: it rises sharply at first and then turns very flat. Let’s limit the range to 0–1 and look at the curve again.
c. The highlight area is very flat and close to linear; the 0.5–1 range occupies only around 10% of the curve.
d. Exposing middle grey to around 0.4 is widely adopted. You can see it’s much less than 0.1 in linear space.
So from the above you can see how log compresses dynamic range: it takes code values away from the highlight and lowlight areas and gives more of them to the region the eye is most sensitive to, the brightness levels around middle grey. Since human eyes are less sensitive in the highlights and deep shadows, a log curve is very efficient at dynamic range compression. It’s no surprise that a 10-bit log curve has much better dynamic range performance than 12-bit or even 14-bit linear RAW. For example, the WDR mode of Z CAM E2 can reach 16 stops of dynamic range in the current release while recording with 10-bit compression, which even 12-bit linear encoding cannot achieve in theory.
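The arithmetic behind this is easy to check. Below is a toy pure-log curve (the 14-stop range and the split around middle grey are assumptions chosen to make the numbers come out near the article’s 0.4 figure, not any vendor’s real curve). It shows the key property: every stop of scene brightness gets an equal share of the code values, so highlights are not wasteful the way they are in a linear encode:

```python
import numpy as np

STOPS_BELOW_MG = 6.0  # assumed stops of range below middle grey
STOPS_ABOVE_MG = 8.0  # assumed stops above middle grey
MG_LINEAR = 0.18      # scene-linear middle grey

def log_encode(x):
    """Toy pure-log curve: maps the assumed 14-stop scene range into 0..1,
    placing middle grey at STOPS_BELOW_MG / 14, i.e. about 0.43."""
    stops_from_black = np.log2(x / MG_LINEAR) + STOPS_BELOW_MG
    return np.clip(stops_from_black / (STOPS_BELOW_MG + STOPS_ABOVE_MG), 0, 1)

print(log_encode(MG_LINEAR))                              # ~0.43
print(log_encode(MG_LINEAR * 2) - log_encode(MG_LINEAR))  # 1 stop = 1/14 of range
# Every stop gets the same share of code values, so a 10-bit encode spends
# about 1024/14 ~= 73 codes per stop, in shadows and highlights alike --
# whereas a linear encode spends half of all its codes on the single
# brightest stop.
```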
2. ISP (Image Signal Processing)
ISP comes into play whenever we don’t output RAW, whether for live view or recording. Everything from simple debayering to advanced noise reduction falls under ISP. Over the past decade, driven by the rapid evolution cycles of consumer electronics, ISPs have improved greatly, since consumer products always demand the best possible quality at the lowest possible cost. Let’s dive into a few interesting areas.
Since we are talking about professional video, I presume all footage will require color grading and post processing, so I’ll focus on a log-based workflow. In this workflow color correction is done via LUT in post, so let’s talk about white balance.
About White Balance
Professionals always use manual white balance settings. With RAW that’s the right thing to do, since RAW lets you adjust white balance later without any sacrifice (provided you can get it right). With 8-bit compression (nearly all DSLRs) it’s a big problem: 8 bits give you only 256 grey levels, and banding is a constant risk in flat areas like a plain wall or the sky. Shooting log in 8 bit is difficult and often meaningless: log requires a curve conversion in post, and banding is greatly amplified by the log → linear → Rec709 process.
When you already have banding and then adjust white balance, the banding gets worse. So when shooting in 8-bit, get the white balance right in camera first. My suggestion is to use Auto White Balance and then lock the setting for the rest of the recording; it’s an efficient approach.
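You can simulate the banding problem directly. The sketch below (gradient range and the 1.3x post gain are arbitrary illustrative choices) quantizes a smooth gradient at 8 and 10 bits, applies a gain in post, and counts how many distinct levels survive:

```python
import numpy as np

def quantize(x, bits):
    """Round to the nearest representable code value at the given bit depth."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

# A smooth sky-like gradient spanning a narrow range of code values
grad = np.linspace(0.55, 0.62, 2000)

counts = {}
for bits in (8, 10):
    q = quantize(grad, bits)                          # what the camera recorded
    pushed = quantize(np.clip(q * 1.3, 0, 1), bits)   # a 1.3x gain in post
    counts[bits] = len(np.unique(pushed))
print(counts)  # 8-bit leaves only a handful of distinct levels across the
               # gradient (visible bands); 10-bit gives several times as many
```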
10-bit is another world: in our tests, 99% of the time you won’t see banding at all, so adjusting white balance in post isn’t a problem if you do it right.
By applying a curve like the one below, log footage is converted to linear for color correction and white balance adjustment. Getting everything into linear space is the baseline for any adjustment. If you think WB adjustment on non-RAW footage is a challenging task, convert the footage to linear space and do it there; you will be amazed. Accurate color adjustment is only possible in linear space. There is no mystery to it.
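That roundtrip (decode to linear, adjust, re-encode) can be sketched in a few lines. The toy log curve and the gain values are assumptions for illustration, not a real camera curve:

```python
import numpy as np

A = 100.0  # curvature of a toy log curve (an assumption, not a vendor spec)

def log_encode(lin):
    return np.log10(1 + A * lin) / np.log10(1 + A)

def log_decode(code):
    return ((1 + A) ** code - 1) / A

def white_balance(log_rgb, gains):
    """Decode log footage to linear, apply per-channel WB gains there,
    then re-encode -- the gains act on scene-linear light, just as they
    would on RAW data."""
    return log_encode(np.clip(log_decode(log_rgb) * gains, 0, None))

pixel = log_encode(np.array([0.30, 0.40, 0.55]))       # a log-encoded pixel
warmer = white_balance(pixel, np.array([1.18, 1.00, 0.86]))
```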
To simplify the process for color grading E2 log footage, we have two approaches:
1. Plug-in. It offers more flexibility than a LUT, and you can adjust many things within it. Every adjustment the plug-in makes is performed in linear space. The only questions are getting used to it and the plug-in’s multi-platform compatibility. Getting color, exposure and WB all right takes just a couple of minutes.
2. LUT. This is the widely adopted approach, and we provide LUTs for all kinds of output. The most interesting one is Rec709-linear: it’s color corrected but with gamma 1.0, which means that by applying a standard gamma you can convert it to any color space you want. If you create nodes in DaVinci Resolve as shown below, you can adjust white balance and exposure in node 2 the same way you would with RAW.
To summarize the white balance section: with 10-bit color depth and a proper workflow, adjusting white balance in post is simple and easy.
About Exposure Adjustment
This is a very interesting topic. Engineers speak a different language from the artists of the film world: engineers deal with gains, analog and digital, and care about noise and SNR, but most of them are unaware of the aesthetic results artists want.
There are a few situations where we may need to do exposure adjustment:
1. We don’t have correct lighting when shooting, for all kinds of reasons. For example, we shoot a concert in automatic exposure mode with a log curve; in our tests, ISO may run between 100 and 1600 at a typical event. We can’t control the lighting in the moment, so relying on auto exposure is the only choice. Auto exposure generally works well, but it adjusts exposure according to the metering, and sometimes the result isn’t what we wished for, so adjusting exposure afterwards is necessary.
2. We want to achieve an aesthetic goal. This is similar to EI (I’ll talk about it in another article): basically, we shoot at noon but want the image to appear darker, as if it were evening.
There’s a ‘gain’ control in all post production software, but by default it seems to operate in the current color space, meaning the adjustment is applied on top of the curve. Doing it that way makes it easy to break the color: the human eye isn’t terribly sensitive to color variation, yet you can still easily tell the difference. If you instead apply gain in linear space before color correction, it behaves like digital gain in camera (changing ISO).
So if you do exposure adjustment in linear space it will maintain the color, and done correctly it works like EI: capturing at ISO 400 but processing it like ISO 200.
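In linear space this really is a one-liner, which is the whole point. A minimal sketch (the pixel values are made up for illustration):

```python
import numpy as np

def exposure_adjust(linear_rgb, stops):
    """Exposure change performed in linear space: +1 stop doubles every
    sample, -1 stop halves it -- the same thing digital gain (an ISO
    change) does in camera, so channel ratios and hence colors are
    preserved exactly."""
    return np.asarray(linear_rgb, dtype=float) * (2.0 ** stops)

# EI-style example: footage captured at ISO 400, rendered one stop darker
# as if it had been shot at ISO 200
pixel = np.array([0.20, 0.30, 0.16])
darker = exposure_adjust(pixel, -1.0)
print(darker)  # [0.1  0.15 0.08] -- same ratios, one stop darker
```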
If you want more detailed tweaks, it’s better to do them in the plug-in. For example, you may want to apply gain to the whole image because it’s too dark, but not to the dark areas, where it would bring up a lot of noise. For that we have a ‘gain anchor’: it defines the level from which gain starts to be applied, so the dark areas stay dark while shadows and bright areas get brighter. I believe standard post production software can do this too, but remember to do it in linear space.
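One plausible way to implement such an anchor is sketched below. This is my own illustration of the idea, not the plug-in’s actual code, and the anchor value is an arbitrary assumption:

```python
import numpy as np

def anchored_gain(linear, gain, anchor=0.05):
    """Sketch of a 'gain anchor': linear values at or below 'anchor' are
    left untouched, values above it are scaled away from the anchor point,
    so shadow noise is not amplified while midtones and highlights are
    lifted."""
    linear = np.asarray(linear, dtype=float)
    return np.where(linear > anchor,
                    anchor + (linear - anchor) * gain,
                    linear)
```

Because the transfer is continuous at the anchor point, there is no visible seam between the untouched shadows and the lifted midtones.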
About Noise Reduction
This is the part I like best. The whole evolution of ISPs over the past 10 years has been about noise reduction. Consumer electronic products require the ISP to deal with color and noise in real time, for instant consumption. An advanced, modern ISP is equipped with thousands of filters to detect what kind of noise is present and decide how to deal with it.
When we made our previous generation of camera products we often got questions about noise reduction; many customers asked us to “turn off the noise reduction, please.” We didn’t do that. Instead we talked to them to understand their concerns, compared with other cameras, and improved our image tuning strategies.
Let me introduce some basic noise reduction methods and then our strategy on the E2:
Intra-frame noise reduction is employed in every ISP and is the easiest method to adopt. Basically it uses filters within a single frame to detect noise and applies different blur parameters to weaken it. The tradeoff is some loss of sharpness and detail, and I think this is where previous complaints about our noise reduction came from. Given enough bit depth, PC software can always do a better job at this stage, so our strategy is to minimize this kind of noise reduction for log and perform fine tuning only on sRGB output.
We want the best detail and the best SNR in the final output, so what can we do? There are two steps:
In camera, we do advanced MCTF. In short, we detect noise using one crucial factor: time. Noise is always a high-frequency signal on the time axis. E2 has the world’s most advanced ISP inside, buffering multiple frames and analysing them pixel by pixel in real time, even at 4K@120fps! In this step we use a conservative and sophisticated strategy: we only flag pixels that are unlikely to be genuine image detail, and when we apply noise reduction to them we do it gently. Some people dislike noise reduction simply because of bad experiences with other cameras whose tuning strategies are too aggressive or naive. The resulting artifact is ghosting: in the worst case you see ghost edges around objects, and in slightly better cases, lights in dark areas leave ghost trails. E2 will give you a brand new impression with our new technology and implementation.
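The temporal idea can be sketched in a few lines. This is a deliberately simplified model of my own, not the E2 pipeline (which, among other things, motion-compensates each pixel before comparing frames); the noise threshold and blend strength are arbitrary assumptions:

```python
import numpy as np

def temporal_nr(prev, curr, noise_sigma=0.01, max_blend=0.5):
    """Simplified sketch of conservative temporal (MCTF-style) noise
    reduction. Frame-to-frame changes within the expected noise level are
    blended gently toward the previous frame; larger changes are treated
    as real motion and left almost untouched, which is what avoids
    ghosting."""
    prev, curr = np.asarray(prev, float), np.asarray(curr, float)
    diff = np.abs(curr - prev)
    # blend weight falls off quickly once the change exceeds the noise level
    weight = max_blend * np.exp(-(diff / noise_sigma) ** 2)
    return (1 - weight) * curr + weight * prev

prev = np.array([0.500, 0.500])
curr = np.array([0.505, 0.900])  # first pixel: noise flicker; second: motion
print(temporal_nr(prev, curr))   # noisy pixel pulled toward the previous
                                 # frame; moving pixel essentially untouched
```

An aggressive tuning would raise `noise_sigma` or `max_blend`, which is exactly how the ghosting described above creeps in.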
Even when in-camera noise reduction is this conservative (and conservative as it is, I believe it’s still far better than other cameras in the same circumstances), what if you want to reduce noise further but lack the expert knowledge? That’s where our optical-flow based noise reduction software comes in. Thanks to GPU technology, we can calculate the movement of every single pixel accurately and then apply a different noise reduction strength accordingly. Most importantly, it is fully automatic: you don’t need to understand how it works; it simply works for you.