Jason Zhang
6 min read · Oct 1, 2018


WDR of E2

Obviously a lot of people have had bad experiences with HDR. Because of all kinds of limitations and artifacts, they prefer not to use it. Several times when I talked to people about WDR, they just said they didn’t want it. This article gives an introduction to the popular HDR technologies so that you can understand their limitations and how to avoid them. Yes, it’s an article to encourage you to use WDR.

HDRI (high dynamic range imaging) and WDRI (wide dynamic range imaging) here refer to the same technology. It has a long history, going back to its first use in photography by Gustave Le Gray in the 1850s. With smartphones’ huge demand for better dynamic range, this technology has become more and more mature, especially on phones. Video HDR and photo HDR differ only in their real-time processing requirements; they share the same basics. So let me start this topic by borrowing some pictures from Wikipedia:

Normally HDR starts with two or more pictures taken at different exposures, which are then combined. The example from Wikipedia uses four different exposures, -2 EV / -1 EV / +1 EV / +2 EV, as follows:

Here are two results with different tone mapping methods:

It’s very easy to see that the final result merges all the best details together to form an amazing picture, with detail in both the bright areas and the dark areas.
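If you want to try this kind of bracketed merge yourself, here is a minimal sketch using OpenCV’s Debevec merge and Drago tone mapping; the file names, base exposure, and parameters are made up for illustration:

```python
import cv2
import numpy as np

# Four bracketed shots at -2/-1/+1/+2 EV around a 1/30 s base exposure
# (hypothetical files; any aligned bracket works).
images = [cv2.imread(p) for p in ["ev-2.jpg", "ev-1.jpg", "ev+1.jpg", "ev+2.jpg"]]
times = np.array([1/120, 1/60, 1/15, 2/15], dtype=np.float32)  # seconds

# Recover the camera response curve, then merge into one radiance map.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# Tone-map the 32-bit radiance map back to a displayable 8-bit image.
ldr = cv2.createTonemapDrago(gamma=1.0).process(hdr)
cv2.imwrite("hdr_result.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```

So with a static scene the pipeline is simple and the result is great. Then why don’t we just use it?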

ARTIFACTS!

It’s PERFECT when the view is static. As long as nothing moves, the merge will be perfect. BUT there are always moving objects, so let’s take a look at those artifacts before we get into the technical details:

These are different kinds of artifacts created by HDR merging, each caused by a different technical solution. The root cause is motion, but some of them can be avoided with the right implementation.

The ideal implementation is dual gain (maybe triple or quad gain sensors exist, but I haven’t seen one so far). It means that for a single pixel you can have two different gains (ISOs) at the same time, so no matter how the object moves, you can start and stop the exposure at the same moment with different ISOs. When merging these two images there is no motion difference or position difference between them, so an ideal merge can be achieved. As far as I know it was invented more than ten years ago but has only been used in very few cameras. I’m not sure why it’s not popular, but in theory it’s the best solution.
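As a minimal sketch of why this works, assume a 12-bit sensor that reads every pixel out twice, at a low and a high gain (the 8x ratio is made up):

```python
import numpy as np

GAIN_RATIO = 8      # high gain = 8x low gain, roughly 3 stops (an assumption)
FULL_SCALE = 4095   # 12-bit ADC full scale (an assumption)

def merge_dual_gain(low_read, high_read):
    # Both reads come from the SAME photon capture, so there is no motion
    # disparity to reconcile; we just pick the better-quantized sample.
    low = low_read.astype(np.float32) * GAIN_RATIO   # scale to common units
    high = high_read.astype(np.float32)
    clipped = high_read >= FULL_SCALE                # high-gain read saturated
    return np.where(clipped, low, high)
```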

The other approach is dual/triple exposure. Since dual gain sensors aren’t popular, the only way we can do HDR is with different exposures. This technology has evolved over the past 10 years, and there have been a few generations of sensor-side HDR:

1. DOL-HDR (digital overlap): take multiple shots with different exposure times, then digitally scale and combine them to form HDR. This is achieved by doubling the frame rate and using different exposures on even/odd frames. The biggest problem is that with this method EVERY PIXEL of the image is exposed at a different time, so every moving pixel will ghost: the gap between the two exposure starts is a whole frame period. That’s a huge difference. This was adopted in the early days as the video HDR solution, and it’s why people don’t like HDR. In my opinion it can only be used for photography.
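A toy version of that merge might look like the sketch below; the 8x ratio and 12-bit full scale are made-up numbers, and I’m ignoring the Bayer pattern. The key point is that long_frame and short_frame are separate captures, one frame period apart:

```python
import numpy as np

RATIO = 8           # long/short exposure ratio (an assumption)
FULL_SCALE = 4095   # 12-bit sensor full scale (an assumption)

def merge_dol(long_frame, short_frame):
    # The two frames were captured a whole frame period apart, so anything
    # that moved in between sits at different positions in the two inputs;
    # blending them around clipped highlights is what creates the ghosts.
    long_f = long_frame.astype(np.float32)
    short_f = short_frame.astype(np.float32) * RATIO  # scale to common units
    return np.where(long_frame >= FULL_SCALE, short_f, long_f)

# With the sensor running at double rate, even frames carry the long
# exposure and odd frames the short one:
# long_frames, short_frames = frames[0::2], frames[1::2]
```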

2. BME-HDR (binned multiplexed exposure): to improve on that, people started using even/odd line pairs within a frame for the different exposures. The starting-time difference between the two exposures is then limited to one line, so as long as the motion is slight you can’t sense the difference between the two exposures. This gives videographers more freedom to shoot moving objects. The problem with this method is resolution loss and jaggies, just like interlaced vs. progressive video: vertically, two lines are binned into one.
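A toy single-channel sketch of the idea (same made-up numbers as before; real sensors expose lines in pairs to preserve the Bayer pattern):

```python
import numpy as np

RATIO = 8           # long/short exposure ratio (an assumption)
FULL_SCALE = 4095   # 12-bit full scale (an assumption)

def merge_bme(frame):
    # Even rows hold the long exposure, odd rows the short one; the two
    # exposures start at most one line apart, so motion ghosting is tiny.
    long_rows = frame[0::2].astype(np.float32)
    short_rows = frame[1::2].astype(np.float32) * RATIO
    # Each output row is built from a row PAIR, so the result has half the
    # vertical resolution of the input, which is where the jaggies come from.
    return np.where(frame[0::2] >= FULL_SCALE, short_rows, long_rows)
```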

3. SME-HDR (spatially multiplexed exposure): the image contains different arrays of RGB pixels, where each RGB color has a lower, standard, or higher exposure value, and Sony’s unique algorithm then generates an HDR image without compromising resolution or frame rate. However, Sony doesn’t reveal the details of the algorithm; the general understanding is that it uses a special debayer method to achieve the pixel merge. The good aspect is obviously that it doesn’t lose resolution or frame rate like the methods above, but since there are different exposure groups (3 different exposures), each output pixel will somehow contain information from its neighbor pixels, so I worry about fine details. The other problem is that all pixels get merged: no matter how close the pixels are, motion will still create problems, and the faster the motion, the worse it gets.
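Since Sony doesn’t publish the algorithm, the sketch below is only my generic reading of spatially multiplexed exposure, not Sony’s method. Every pixel is normalized by a made-up exposure weight, and clipped pixels borrow from their horizontal neighbors, which is exactly where the detail loss and motion sensitivity creep in:

```python
import numpy as np

FULL_SCALE = 4095   # 12-bit full scale (an assumption)
# A made-up 2x2 tiling of three exposure weights; the real spatial pattern
# is different and unpublished.
EXPOSURE_TILE = np.array([[1.0, 4.0],
                          [4.0, 8.0]], dtype=np.float32)

def merge_sme(raw):
    h, w = raw.shape                                 # assumes even dimensions
    exposure = np.tile(EXPOSURE_TILE, (h // 2, w // 2))
    radiance = raw.astype(np.float32) / exposure     # per-pixel normalization
    # Clipped long-exposure pixels cannot be trusted, so fill them from
    # their left/right neighbors (a crude stand-in for a smart debayer).
    clipped = raw >= FULL_SCALE
    filled = radiance.copy()
    filled[:, 1:-1] = np.where(clipped[:, 1:-1],
                               0.5 * (radiance[:, :-2] + radiance[:, 2:]),
                               radiance[:, 1:-1])
    return filled
```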

4. QBC-HDR (Quad Bayer Coding): this is what the E2 adopts. It’s a special sensor design where each pixel is subdivided into a 2x2 sub-pixel group under the same Bayer filter color. Below is the CFA structure of the sensor the E2 uses; you can see 4 sub-pixels per pixel. In HDR mode, sub-pixels 1–4 and 2–3 form two exposure groups. The centers of the two groups sit at exactly the same position, and that creates a lot of benefits: there is no ghosting from different exposure positions, and we actually get two complete images exposed at the same tick with the same resolution and position. Then only one kind of artifact is left: the motion blur of the long exposure being merged with the short exposure, which has no motion blur. In our solution, pixel merging only happens when the short exposure is used to compensate overexposed areas, which means that across most of the dynamic range there is no chance of artifacts. Thus it leads to a close-to-perfect result.
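To make the geometry concrete, here is a toy single-color-plane version of that diagonal merge (the 8x exposure ratio and 12-bit full scale are assumptions; real processing works on the full quad-Bayer mosaic):

```python
import numpy as np

RATIO = 8           # long/short exposure ratio (an assumption)
FULL_SCALE = 4095   # 12-bit full scale (an assumption)

def merge_qbc(plane):
    # One color plane of the quad-Bayer mosaic: in every 2x2 sub-pixel
    # group, one diagonal carries the long exposure and the other the
    # short one, and both diagonals share the same geometric center.
    raw = plane.astype(np.float32)
    long_img = 0.5 * (raw[0::2, 0::2] + raw[1::2, 1::2])
    short_img = 0.5 * (raw[0::2, 1::2] + raw[1::2, 0::2])
    # Substitute the scaled short exposure ONLY where the long one clipped,
    # so most of the tonal range never touches the merge at all.
    clipped = (raw[0::2, 0::2] >= FULL_SCALE) | (raw[1::2, 1::2] >= FULL_SCALE)
    return np.where(clipped, short_img * RATIO, long_img)
```

Because the fallback to the short exposure is gated on clipping, correctly exposed pixels are never merged, which is exactly why the artifacts stay confined to the highlights.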

Then how does it compare to the dual gain solution?

1. From the HDR merge point of view, I think the dual gain solution is ideal. It won’t create artifacts, because the two gain reads share the exact same exposure.
2. From the SNR point of view, QBC-HDR may have better performance. To get 3 more stops of dynamic range, the dual gain solution may need to boost its ISO 8x (3 stops = 2³), which is a problem in poorly lit environments. I haven’t had a chance to look into real footage, so this is just my theory.
3. QBC-HDR has artifacts, but I do think it is usable. If you care about artifacts and want zero artifacts, you can set the long exposure to 1/200 s, which freezes almost all normal-speed objects. In the worst case, artifacts only happen in highlight areas where there are no details anyway, and only when dark objects cross the bright area, because the long exposure carries motion blur that changes the image there. In the normal brightness range (roughly stops 0–12), there is no pixel merging and therefore no artifacts.
