
Is the GH4 worth it if I won't be using 4K?


James J. Yoo

Recommended Posts

4K edited in a 1080 timeline has so many uses I don't think I could go back. Here's an example of a video I did last week, shot and edited in one day. No color correction; it was to be viewed at the wedding. There are many shots where I cropped into the 4K image, and it gave me a two-camera-shoot look without doing a darn thing. The chase-around-the-tree scene here, for example.

 

Great video, and good point. However, can I ask what frame rate you edited this in? Because it looks 30-ish. I assume, though, it's 24fps?



30p for this, Perplex. I did this with the GH4, but the videographer I was working with for the wedding uses Canon and wanted to shoot all 30p, so I decided to go with 30p here to match him. Generally I like 24p better, but you know how it goes. Compromises.

I figured so. It looks good. 30p gives it more of a personal feel, IMO. Interesting choice.


Hi Andrgl, dunno where you got that info, but looking at 4K files on my computer, they are definitely not 1080. There really are 8.8 megapixels of information there, or very darn close to it.

 
Maybe. But I can take an 8x8 pixel image and scale it up to 8640x8640 using an algorithm to make it seem like there are more pixels than 64.




For the data to truly contain 4K worth of RGB, the GH4 would have to read out its full sensor. But we know it doesn't, because of the greater crop factor when shooting 4K. So we know it's using ROI (region-of-interest readout).
 
So why isn't 4K really 4K? One pixel is made up of red, green, and blue. Bayer CMOS sensors have more green photosites than blue or red; the ratio is 2:1:1. Meaning, for every green photosite, the GH4 has only 0.5 red and 0.5 blue photosites.
 
[Image: Bayer pattern mosaic]

This is what the GH4 sensor really captures: data from a Bayer pattern. This "image" then has to be debayered. Essentially, a block of 2x2 photosites is pulled together to form a single pixel.

[Image: result after 2x2 debayering, at half the original resolution]

Once the data is debayered, you're left with half of the original resolution (like the image above). That frame is then scaled 2x using interpolation.
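A minimal sketch of the simplified pipeline described here (2x2-block debayer, then 2x nearest-neighbor upscale). Real in-camera demosaicing is far more sophisticated, so treat this as an illustration of the argument, not the GH4's actual processing:

```python
import numpy as np

def naive_debayer_and_upscale(raw):
    """raw: 2D array of RGGB photosite values, even dimensions."""
    r = raw[0::2, 0::2]                          # one red sample per 2x2 block
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2  # two greens, averaged
    b = raw[1::2, 1::2]                          # one blue sample per 2x2 block
    half = np.stack([r, g, b], axis=-1)          # half-resolution RGB frame
    return half.repeat(2, axis=0).repeat(2, axis=1)  # scale back up 2x

full = naive_debayer_and_upscale(np.random.rand(8, 8))
print(full.shape)  # (8, 8, 3): original pixel count, half the real detail
```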



Anyway, I ripped those pictures off Wikipedia. You can find out more by searching for Active pixel sensor, Bayer and demosaicing.

Down-converting the resolution from 4K to 1080 is a nice advantage. You don't necessarily need a great computer to handle that job.

 

And once it's 1080, converted to an edit-friendly codec like ProRes 422, you can cut (and color) your stuff easily and effectively.


GH4 CODEC INFORMATION:

(GH3 1080p is Level 4.2, CBR, about 50 Mbps)

GH4 1080p is Level 5.0, VBR, about 80 Mbps average, 106 Mbps max

GH4 4K is Level 5.1, VBR, about 80 Mbps average, 106 Mbps max

Bitrate in bits/pixel/frame: GH3 1080/50p = 0.457, GH4 1080/60p = 0.652, GH4 4K/30p = 0.313

GH4 4K has a relatively low bitrate per pixel, but I think the more capable encoding level partly compensates for that.

GH4 files are more difficult to decompress because of the more complex encoding.
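For reference, bits/pixel/frame is just bitrate divided by (width × height × frame rate). A quick sketch that reproduces the figures above; the average bitrates used here are assumptions back-solved from those numbers, not official specs:

```python
# Bits per pixel per frame = bitrate / (width * height * fps).
# Bitrates below are approximate averages, back-solved from the figures above.
modes = [
    ("GH3 1080/50p", 47_400_000, 1920, 1080, 50),   # ~0.457
    ("GH4 1080/60p", 81_100_000, 1920, 1080, 60),   # ~0.652
    ("GH4 4K/30p",   77_900_000, 3840, 2160, 30),   # ~0.313
]
for name, bitrate, w, h, fps in modes:
    print(f"{name}: {bitrate / (w * h * fps):.3f} bits/pixel/frame")
```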

 

I also found an interesting thing:

When shooting 1080p video in Creative Video mode, it uses Level 5.0.

When shooting in photo mode using the red video button, it uses Level 4.2.


An advantage of 4K is that when it is downscaled to 1080 (1920x1080), it has greater color accuracy. As Andrgl says, each 4-pixel block of a Bayer camera image is really 1 red, 1 blue, and 2 green sensor pixels. Through debayering, each pixel borrows color data from its neighbors to create 4 RGB values. Put another way, in 1080, the camera has sampled 25% of the image's red channel, 25% of blue, and 50% of green (because we are most sensitive to green in luma).

 

In 4K, consider a box of 16 photosites (4x4). Again, they are 25% red, 25% blue, and 50% green. However, when you downsample, you're actually feeding a bigger sample of each channel's data into the same image area; that is, you have 4 samples of red, 4 of blue, and 8 of green.
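A minimal sketch of the sample-count argument, assuming an RGGB Bayer layout (the helper function below is mine, just for illustration):

```python
import numpy as np

def channel_samples(block):
    """Count R/G/B photosites in a block x block sensor region,
    assuming an RGGB Bayer tile."""
    tile = np.array([["R", "G"],
                     ["G", "B"]])
    region = np.tile(tile, (block // 2, block // 2))
    return {c: int((region == c).sum()) for c in "RGB"}

print(channel_samples(2))  # one 2x2 debayer block:    {'R': 1, 'G': 2, 'B': 1}
print(channel_samples(4))  # the 16-photosite box:     {'R': 4, 'G': 8, 'B': 4}
```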

 

My GM1 takes very sharp video.  I suspect they're already doing some form of 4K down to 1080 in the camera; that is, doing pixel binning of 8 pixels for every 1.  But I'm just guessing.

 

In any case, from all the samples I've seen, GH4 downsampled 4K produces the best 1080 you can get at the moment (in sharpness). However, I'd still rather have 1080 from RAW, all things equal ;)


Per Nyquist sampling theory, we need to sample at 2x the frequency (or resolution, for images) to get alias-free results. A 4K Bayer sensor can't actually capture 4K worth of detail; it's some percentage less, perhaps 3.2K-3.8K. Adobe Camera Raw does an excellent job with Canon 5D3 1920x1080 RAW debayered to 1920x1080; the price for this super-reconstruction algorithm is that it's very slow. 4K from the GH4 resolves more than 1080p; see test charts shot with the GH4 for how much resolution it can resolve before aliasing and extinction of detail. Watching GH4 4K material on my 2560x1600 monitor clearly shows more resolution than 4K downscaled to 1080p.
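A rough back-of-envelope of why a Bayer sensor can't reach its stated resolution in every channel (a sketch, assuming an RGGB layout and ignoring the optical low-pass filter):

```python
# Rough Nyquist arithmetic for one row of a 3840-photosite-wide RGGB sensor.
width = 3840                    # photosites across
red_per_row = width // 2        # red appears on every other photosite
nyquist_red = red_per_row // 2  # max line pairs red can represent alias-free
print(red_per_row, nyquist_red)  # 1920 red samples -> ~960 line pairs
```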

 

The only mathematically true (almost!) 4K video camera is the Sony F65, with an 8K sensor. The RED Dragon at 6K can produce very good 4K, especially with high-quality post-debayering. The GH4's internal debayer is pretty good; it looks great compared to post-debayered RED Epic.


Correct me if I'm wrong, but I was under the impression that debayering does not require any downsizing from the original pixel density of the sensor. It uses algorithms that look at neighboring pixels and can recreate an image at the full pixel density of the chip, only with more "accuracy" in the green channel and less in the red and blue channels, depending on the end gamut.

 

From what I'm reading above, there needs to be oversampling to prevent aliasing, which forces a reduction in the pixel density of the encoded image?

 

http://videsignwire.com/cmos-image-sensor-processing-with-fpgas/4/

 

 

[Figure 5. Bayer Patterns]

 

When interpolating the missing values of R and B on a green pixel, as in Figures 5(a) and 5(b), the average values of the two nearest neighbors of the same color are used. For example, in Figure 5(a), the value for the blue component on a shaded G pixel will be the average of the blue pixels above and below the G pixel, while the value for the red component will be the average of the two red pixels to the left and right of the G pixel.

In Figure 6(a), the value of the green component is to be interpolated on an R pixel. The value used for the G component here is:

G(x, y) = [G(x, y-1) + G(x-1, y) + G(x+1, y) + G(x, y+1)] / 4

To determine the value of the red component on a B pixel in Figure 6(b), take the average of the four nearest red pixels cornering the B pixel.

[Figure 6. Two Bayer Pattern Cases]
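A hedged sketch of the bilinear rules just described, assuming an RGGB layout (R at even row, even column). This mirrors the Figure 5/6 cases but is not the article's actual FPGA implementation:

```python
import numpy as np

def bilinear_demosaic(raw):
    """Minimal bilinear demosaic for an RGGB Bayer mosaic.

    raw: 2D float array of photosite values. Returns an HxWx3 RGB array.
    Edges use reflective padding for simplicity (this breaks Bayer phase
    at the border, which a real implementation would handle properly)."""
    p = np.pad(raw, 1, mode="reflect")
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    for y in range(h):
        for x in range(w):
            py, px = y + 1, x + 1              # coordinates in padded array
            v = p[py, px]
            even_row, even_col = y % 2 == 0, x % 2 == 0
            if even_row and even_col:          # R site
                rgb[y, x, 0] = v
                rgb[y, x, 1] = (p[py-1, px] + p[py+1, px] +
                                p[py, px-1] + p[py, px+1]) / 4  # 4 nearest G
                rgb[y, x, 2] = (p[py-1, px-1] + p[py-1, px+1] +
                                p[py+1, px-1] + p[py+1, px+1]) / 4  # corner B
            elif not even_row and not even_col:  # B site
                rgb[y, x, 2] = v
                rgb[y, x, 1] = (p[py-1, px] + p[py+1, px] +
                                p[py, px-1] + p[py, px+1]) / 4  # 4 nearest G
                rgb[y, x, 0] = (p[py-1, px-1] + p[py-1, px+1] +
                                p[py+1, px-1] + p[py+1, px+1]) / 4  # corner R
            elif even_row:                     # G site on an R-G row
                rgb[y, x, 1] = v
                rgb[y, x, 0] = (p[py, px-1] + p[py, px+1]) / 2  # R left/right
                rgb[y, x, 2] = (p[py-1, px] + p[py+1, px]) / 2  # B above/below
            else:                              # G site on a G-B row
                rgb[y, x, 1] = v
                rgb[y, x, 2] = (p[py, px-1] + p[py, px+1]) / 2  # B left/right
                rgb[y, x, 0] = (p[py-1, px] + p[py+1, px]) / 2  # R above/below
    return rgb
```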

 

In order to improve the resulting image quality, an adaptive interpolation algorithm must be employed. Here, a 5×5 grid is used to estimate the missing pixel colors. Consider the Bayer pattern in Figure 8.

[Figure 8. 5×5 Bayer Pattern Grid]

 

First, a horizontal Laplacian is determined using the following equation (for green at a red site at (x, y)):

deltaH = |G(x-1, y) - G(x+1, y)| + |2*R(x, y) - R(x-2, y) - R(x+2, y)|

Second, a corresponding vertical Laplacian is determined:

deltaV = |G(x, y-1) - G(x, y+1)| + |2*R(x, y) - R(x, y-2) - R(x, y+2)|

Using these horizontal and vertical metrics, the interpolation is adapted according to the following rules:

If deltaH < deltaV: G(x, y) = [G(x-1, y) + G(x+1, y)] / 2 + [2*R(x, y) - R(x-2, y) - R(x+2, y)] / 4
If deltaH > deltaV: G(x, y) = [G(x, y-1) + G(x, y+1)] / 2 + [2*R(x, y) - R(x, y-2) - R(x, y+2)] / 4
If deltaH = deltaV: average the two estimates above.

 

A similar algorithm is applied to the remaining mosaic colors. These rules detect pixel locations where a color change has taken place and adapt the interpolation accordingly. The result of this adaptation on output image quality can be seen in Figures 9(a), (b), and (c). The adaptive algorithm greatly reduces the blur and hue artifacts.

[Figure 9. Bilinear vs. adaptive interpolation results]
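A minimal sketch of the adaptive rule for estimating green at a red photosite, following the Laplacian equations above (a Hamilton-Adams-style method; the function and variable names are mine, not the article's):

```python
def adaptive_green_at_red(p, y, x):
    """Estimate G at a red photosite (y, x) of a 2D raw array p, assumed
    at least 2 photosites from every edge.

    Picks the interpolation direction with the smaller gradient metric and
    adds a Laplacian correction term from the red channel."""
    dH = abs(p[y, x-1] - p[y, x+1]) + abs(2*p[y, x] - p[y, x-2] - p[y, x+2])
    dV = abs(p[y-1, x] - p[y+1, x]) + abs(2*p[y, x] - p[y-2, x] - p[y+2, x])
    gH = (p[y, x-1] + p[y, x+1]) / 2 + (2*p[y, x] - p[y, x-2] - p[y, x+2]) / 4
    gV = (p[y-1, x] + p[y+1, x]) / 2 + (2*p[y, x] - p[y-2, x] - p[y+2, x]) / 4
    if dH < dV:
        return gH        # smoother horizontally: interpolate along the row
    if dV < dH:
        return gV        # smoother vertically: interpolate along the column
    return (gH + gV) / 2  # no dominant direction: average both estimates
```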


In simple terms, it's possible to get decent reconstruction using statistical analysis (vs. simpler methods such as bilinear interpolation, which is easy to code and very fast). If one starts with super-high resolution, such as RED at 5K, and is targeting 2K, the quality of the debayer step is less important than with, say, 5D3 RAW debayered from 2K to 2K (ACR and Resolve do a much better job than the on-camera hardware and firmware: much sharper and more detailed images when debayered in post). Interestingly, RED Epic at 5K debayered (and without any sharpening) looks softer and less detailed than GH4 4K H.264 compressed (100Mbps).


I'm sorry, it sounded like you were saying you couldn't get an actual 4K-sized image after debayering. I think I took the parts about "Nyquist 2x sampling" and "a 4K Bayer sensor can't actually capture 4K; it's some percentage less, perhaps 3.2K-3.8K" a little too literally. I can see now that you were just saying it's not true trichroic RGB, and that with 5K to 2K, the debayering quality matters less than with 2K to 2K.

 

Got spammed with an interesting article on 4K today from RedShark News.

 

"Before we all leap into 4K, maybe we need to understand resolution better!"

 

http://www.redsharknews.com/technology/item/1799-before-we-all-leap-into-4k,-we-need-to-understand-resolution-better?utm_source=www.lwks.com+subscribers&utm_campaign=fe5b3d60da-RSN_June27_2014&utm_medium=email&utm_term=0_079aaa3026-fe5b3d60da-76206849

 

Essentially he's making a point about motion and how higher resolutions change the perception of motion.


I find it best to put your imagination in the place of the sensor: each photosite sees either red, green, or blue.

 

Keep this grid in mind for a 20x5 sensor (R = red, G = green, B = blue):

 

RGRGRGRGRGRGRGRGRGRG

GBGBGBGBGBGBGBGBGBGB

RGRGRGRGRGRGRGRGRGRG

GBGBGBGBGBGBGBGBGBGB

RGRGRGRGRGRGRGRGRGRG

 

 

 

If you're out in a field and shoot a landscape with a power line glinting in the sun, how does the sensor "see" it? If the line is perfectly horizontal to the sensor and two pixels high, then each 2x2 block of sensels debayers using the same light: the 4 sensels get full values for red, green, and blue, and each comes up with white light. The power line will look perfect. These are the pixels that "see" the line:

 

RGRGRGRGRGRGRGRGRGRG

GBGBGBGBGBGBGBGBGBGB

 

Now let's imagine that the line is only one pixel high. That means you might have full red, green, red, green, and NO light from the line on the green, blue, green, blue row below. So what happens when you debayer each block of 4 pixels? Naturally, you get a reddish tint, because the power line didn't register on the blue pixels.

 

RGRGRGRGRGRGRGRGRGRG

 

If the line is on the green/blue row of the sensor, you get bluish (because it's missing red). Keep in mind the field is green, so the sensor isn't going to figure out the missing red or blue if it doesn't register it.

 

These are the sensels that register:

 

GBGBGBGBGBGBGBGBGBGB

 

Usually, "lines" fall above and below sensor pixels. If you look at any video from these cameras and look at thin lines, you will usually see some form of chromatic distortion because, as jcs says, the camera isn't resolving enough detail (color samples).

 

In general, there are issues with missing color at all stated resolutions for Bayer sensors. It blends in when sharp edges don't fall across the minimum of 3 pixels we need to properly ascertain the color.
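A quick simulation of that one-pixel-high line, assuming an RGGB mosaic and the naive 2x2-block debayer discussed earlier in the thread (a sketch of the argument, not any real camera pipeline):

```python
import numpy as np

# White line (value 1.0) one photosite high on a black background; it lands
# only on the R/G row of each 2x2 RGGB block.
h, w = 4, 8
scene = np.zeros((h, w))
scene[0, :] = 1.0   # the power line

tile = np.array([["R", "G"], ["G", "B"]])
layout = np.tile(tile, (h // 2, w // 2))
# White light has equal energy in all channels, so each photosite's raw
# reading just equals the scene value at its location.
raw = scene

for by in range(0, h, 2):
    for bx in range(0, w, 2):
        vals, cols = raw[by:by+2, bx:bx+2], layout[by:by+2, bx:bx+2]
        rgb = tuple(round(float(vals[cols == c].mean()), 2) for c in "RGB")
        print((by // 2, bx // 2), rgb)
# Blocks under the line print (1.0, 0.5, 0.0): a reddish-orange line,
# because the blue photosites on the next row never saw it.
```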


Maxotics-

 

Haven't seen ASCII art in a while. I understand what you are saying. I think this applies more to a bilinear method, though? If you were using a rules-based algorithm, like the adaptive Laplacian method, that might help prevent the perfectly horizontal (or perfectly vertical) problem?

 

I'm interested in learning more about FPGA programming and experimenting with debayering, but this all makes me think that 3-chip cameras are not dead yet... unless someone invents a chip that has substrates like color film.


Isn't the GH4's 4K really 1080?

CMOS sensors have RGBG arrays. The 2:1:1 RGB ratio effectively halves resolution.

By downscaling a 4K file, the image is truer to how much detail the sensor captures.

It isn't in the exact ratio of two to one, because the processing pulls luminance information out of all the color channels. Resolution tests are conducted with black-and-white images, so they do not take into account chroma resolution or chroma artifacts.

 

Reviewed.com measured the GH4's 4K video resolution in good light at 975 lph (lines per picture height) horizontal and 1100 lph vertical, which is essentially full 1080 resolution. You may get higher resolution than this with better lenses. A GH3 measured 850/825, which is far above most other 1080 DSLRs and at the level of a top camcorder. Most DSLRs are 600-700 lph, and in low light, even lower.

 

So 4K out of a GH4 is going to look super sharp downsampled to 1080, because most cameras and content delivery systems can't produce images at full 1080 resolution.

 

BTW, a lot of great posts in this thread. :)

 

Michael


@Aaron, was that a good barf or a bad barf? :) @Sunyata, I believe someone could come up with a rules-based debayering algo that could deal with these issues. All the formula-based algos have strengths and weaknesses. If I had another 24 hours in a day, I would have tried creating my own algo for the EOS-M RAW, or other Canon RAW, where there is a lot of line-skipping and this problem is aggravated.

 

In the end, I'd have to yawn, or maybe like Aaron, barf and go to sleep.  In the real world, moire is not noticed by audiences unless someone is wearing some crazy shirt, and even then, no one ever walked out of a theater from it.  


Most of these debayer methods would be easy to test with a Nuke expression node, or with a Flame Matchbox shader; the latter are written in GLSL. Unfortunately you can't do anything temporal in Matchbox, but the trade-off is that you're doing near-real-time rendering.

