Leaderboard
Popular Content
Showing content with the highest reputation on 10/24/2012 in all areas
-
Interview with Canon's Mike Burnhill on the Canon 1D C 4K DSLR
OzNimbus reacted to Andrew Reid for a topic
[url="http://www.eoshd.com/wp-content/uploads/2012/10/canon-1d-c.jpg"][img]http://www.eoshd.com/wp-content/uploads/2012/10/canon-1d-c.jpg[/img][/url] At Photokina Canon had their latest build of the EOS 1D C on display, a $15,000 DSLR which is heavily based on the $6000 1D X but has extra recording modes for 4K/24p and 1080/50/60p. At the time, Canon's product manager at Photokina (not a hired hand or a sales rep) to my surprise told me the main difference to the 1D X was just firmware and that the hardware was identical other than a slightly different selection of connectivity ports. Canon contacted me a couple of weeks ago to expand on and correct this from their side - there are other subtle hardware changes to enable 4K video, other than just firmware.1 point -
Interview with Canon's Mike Burnhill on the Canon 1D C 4K DSLR
ScreensPro reacted to jessekorgemaa for a topic
I mean no offense, but all of the spots you put under "eoshd remark" are the tough questions you should have asked, rather than slapping on some backhanded remarks with no option for them to actually answer you. You asked questions in this article, not tough questions. Mind you, I am not great on my feet either.
Full-frame 4:2:2 8-bit in ProRes seems pretty good. Plus you get the Canon look. Plus it's an amazing still camera. Steeper than the BMD overall, but no focal length crop to worry about!
-
Recording direct to ProRes on a Ninja 2 would make editing quicker and nicer. It should also evade the 29'59" clip length limit, as that only applies to recordings made in the camera. The clean HDMI out should have no time limitation, since it could just as well be feeding a monitor rather than a recorder.
-
I know you said nothing slower than f2.8, but the Super Takumar 35mm f3.5 is pretty cinematic when slightly underexposed. It's dirt cheap as well; you could probably find one for less than US$50 because it's so slow.
-
5DtoRGB VS GH2 banding issue
John Twigt reacted to see ya for a topic
The extra detail and brightness 'gained' has a simple explanation: the mapping your NLE or media player does between the YCbCr (YCC for short) color model of the GH2 video and the RGB preview and color processing done by the NLE / media player. The 'correct' way is to take the 8bit 16 - 235 luma range and 16 - 240 chroma range and convert to 0 - 255 RGB, so 16 YCC (black) and 235 YCC (white) are mapped to 0 RGB (black) and 255 RGB (white).

YCC video has a luma channel (a kind of greyscale) and two chroma channels, Cb & Cr, chroma blue and chroma red. They are all kept separate and combined using a 'standard' calculation to give color, saturation and contrast, and on a computer that's the RGB preview. RGB color space on the other hand entombs the brightness value in every pixel value rather than keeping it separate in a luma channel. So YCC gives us a choice in how we combine luma and chroma to create RGB, but the combination has to be done to recreate the RGB data the camera started out with before encoding to AVCHD.

Checking the histogram, which is an RGB tool, can help establish correct conversion. Something might look good visually, but if combing or gaps show up in the histogram it indicates a bad conversion, which becomes evident when trying to grade, certainly at lower bit depths. A luma waveform can highlight problems too. Saying something 'looks' good also depends on the calibration of the monitor; it might look great on one person's and bad on another's. Histograms can lie, but they are a better illustration of the state of an image. The key point though is the weighted calculation done to generate the RGB values that you see, because those are the values on which you judge the quality of the image and therefore the performance of the camera.

Important is the fact that a YCC color space can generate a wider range of values than can be held or displayed by '8bit' RGB, so 32bit float RGB processing is offered to negate this. But what you perceive to be the 'quality' of the camera file depends on how the NLE interprets the YCC, how the RGB values are calculated and stored in the memory of the computer, and on the interpolation of YCC values into RGB pixels with regard to edges, stepping etc. Which algorithm your NLE uses in the conversion determines the perceived smoothness of edges, i.e. does it do nearest neighbour, bilinear, bicubic etc. 5DToRGB offers a custom algorithm, I believe.

But to concentrate on levels handling... It's important to understand that the RGB display you see is not necessarily the extent of what is actually in the YCC, just how the NLE is previewing it by default. We need to separate what is displayed from what is really in the file, and a simple levels filter can make detail appear. This really needs to be done at 32bit processing though, as clipping may well occur otherwise. Our monitors are 8bit RGB displays; they can't display 32bit as 32bit or a wider dynamic range, but that's not to say the RGB values calculated in the YCC to RGB conversion by the NLE or media player don't contain values greater than can be displayed at 8bit - and that includes negative RGB values that would have been clipped in an 8bit YCC to RGB conversion. Being able to store and manipulate negative values helps black levels / shadows, and whites greater than a value of 1 can be held and manipulated above the 8bit clipping level for white. 0 - 255 is described as 0 - 1 in 32bit terms. So back to YCC to RGB conversion.
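To make that conversion concrete, here's a minimal numpy sketch of the standard limited-range YCC to RGB mapping just described. It's not the code of any particular NLE or of 5DToRGB, and the BT.709 coefficients are my assumption for HD GH2 footage; it only shows how 16 - 235 luma is stretched to 0 - 255 RGB and where 8bit clipping happens.

[code]
import numpy as np

def ycc_limited_to_rgb(y, cb, cr, clip_8bit=True):
    """y, cb, cr: arrays of 8bit code values (0..255). BT.709 assumed."""
    y  = np.asarray(y,  dtype=np.float64) - 16.0    # remove the luma offset
    cb = np.asarray(cb, dtype=np.float64) - 128.0   # centre chroma on zero
    cr = np.asarray(cr, dtype=np.float64) - 128.0

    r = 1.1644 * y + 1.7927 * cr
    g = 1.1644 * y - 0.2132 * cb - 0.5329 * cr
    b = 1.1644 * y + 2.1124 * cb

    rgb = np.stack([r, g, b], axis=-1)
    if clip_8bit:
        # what an 8bit conversion does: negative and >255 results are lost
        rgb = np.clip(rgb, 0.0, 255.0).round()
    return rgb

# 16 YCC (black) and 235 YCC (white) land on RGB 0 and 255, as described above
print(ycc_limited_to_rgb([16.0, 235.0], [128.0, 128.0], [128.0, 128.0]))
[/code]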
Your GH2 captures luma in the 16 - 235 range; it's not full range, it's limited, but it's the 'correct' range for YCC to RGB conversion: 16 - 235 mapped to 0 - 255 RGB based on the standard calculation. This is all in 8bit terms, with 8bit considerations like clipping any generated values that end up negative or greater than white.

What 5DToRGB offers is for you to say 'nah, I don't want the standard YCC to RGB mapping based on 16 - 235 luma, I want you to calculate RGB values on the assumption that luma levels in my files are full range 0 - 255'. Doing that means that instead of 16 YCC being treated as 0 RGB black, it's treated as 16 RGB (a dark grey), and 235 YCC is treated as 235 RGB rather than full white. The result is that the levels of the original file don't get stretched to full range, so the image looks washed out or not so contrasty, and you can see more shadow and highlight detail as a result. That's all 8bit world, and if your NLE is 8bit then you may have to resort to that sort of workflow.

10bit and 16bit just provide finer gradients and precision; they do not, however, provide the ability to store negative RGB or RGB values over 1. A 32bit float workspace is required for that - 32bit is not all about compositing and higher precision. Apps such as Premiere CS5 and AE 7 onwards with a 32bit float workspace work differently to 8bit and 16bit. 16bit just spreads 8bit over a wider range of levels pro rata, so 8bit black level 0 hits the bottom of the 16bit range and 255 hits the top of the 16bit range. Whereas in Adobe's 32bit float workspace 8bit black doesn't sit at the floor of the representable range (0 - 255 becomes 0 - 1), so you have room to go negative as well, blacker than black. The importance of this is that where your shadows would have been clipped to black and detail lost or hidden deep, 32bit allows the values to be generated and held onto. Our 8bit display still can't show them, but they are there, safe in memory, and the same goes for brighter than 8bit white. We can be reassured that we can shift levels about freely until what we see in our 8bit preview is what we want - those details you miraculously see appear with 5DToRGB, which is really just a remapping of levels from YCC to RGB. In 32bit the default GH2 import levels and detail will appear and disappear depending on your grading, but they are not lost; they slide in and out of the 8bit preview window into the 32bit world, no need to transcode. This makes the 5DToRGB process pointless with regard to levels handling and gamma when you have a 32bit float workspace. 16bit doesn't offer this. Just import the GH2 source into 32bit and grade.

You can see whether your NLE handles this by importing the fullrangetest files in one of my posts above. Try grading the unflagged file: it's full range luma, so initially the display will show black and white horizontal bars - this is the default mapping for 8bit RGB preview I mentioned above; relate that to your default contrasty, crushed-blacks, lost-detail preview in the NLE. Put a levels filter on it or grade and pull the luma levels up and down. If you see the 16 & 235 text appear, your NLE has not clipped the values converting from YCC to RGB, and you'll be safe in the knowledge that you haven't been shafted by Apple and lost detail, and that 5DToRGB isn't doing magic. If you don't see any 16 & 235 text and conclude that your NLE is doing an 8bit conversion going YCC to RGB, then options like 5DToRGB transcoding etc. may be the only options. Not sure why anyone would want to transcode to 4444 though; the last '4' is for the alpha channel.
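As a purely hypothetical illustration of the difference between the two interpretations, and of why float headroom matters, here are the same luma code values run through both mappings using the 0 - 1 convention mentioned above. None of this is Adobe's or 5DToRGB's actual code; it just mirrors the arithmetic in the post.

[code]
import numpy as np

luma = np.array([10.0, 16.0, 128.0, 235.0, 245.0])   # 8bit luma code values

# Standard (limited-range) interpretation: 16..235 -> 0.0..1.0.
# Note the first and last values land below 0.0 and above 1.0.
limited = (luma - 16.0) / 219.0

# "Treat it as full range" interpretation (the 5DToRGB option described above):
# 0..255 -> 0.0..1.0, so 16 becomes ~0.063 grey and 235 becomes ~0.922 grey.
full = luma / 255.0

print("limited:", limited)
print("full:   ", full)

# An 8bit-style pipeline clips the out-of-range values for good...
clipped = np.clip(limited, 0.0, 1.0)

# ...whereas a 32bit float workspace just keeps them; a later levels tweak
# (an arbitrary gentle one here) can pull them back inside the 0.0 - 1.0
# preview window without anything having been lost.
graded = np.clip(limited * 0.9 + 0.05, 0.0, 1.0)

print("clipped:", clipped)
print("graded: ", graded)
[/code]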
The conversion from 4:2:0 YCC, i.e. subsampled chroma, to 444 ProRes is a big interpolation process, creating 'fake' chroma values by interpolating the half-size chroma captured in camera. It's possible that some dithering or noise is added to help with that, so 444 is manufactured from very little - again it's interpolation via an algorithm like bilinear or bicubic, smoothing chroma a bit, but it does nothing to levels and gamma. It just manufactures a bit of extra color.

444 is similar to RGB, and the process of generating RGB in the NLE for preview on our RGB monitors is done very similarly: as soon as you import the YCC sources into the NLE (preferably at higher precision than 8bit), chroma is interpolated, mixed with the non-subsampled luma, and RGB values are created. The higher the bit depth, the better the gradients and edges created. I don't think transcoding to 444 or even 4444 for import into a 32bit NLE or grading package is worthwhile.

All this goes back to a simple workflow: I suggest using a 32bit-enabled NLE / grading tool and a slight, gentle denoise to handle the interpolation of values at higher bit depth, rather than interpolating to 444 using algorithms that, if overdone or pushed in the grade, can over-sharpen edges and create black, white or colored halos and fringes at edges depending on pixel values and lower bit depth. Gentle denoise will give other benefits too, obviously. But I suggest doing your own tests, and whatever suits an individual is the way to go of course.
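For what it's worth, here is a rough sketch of what that chroma interpolation boils down to. Nearest-neighbour repetition is used purely for brevity (real converters use bilinear, bicubic or better filters), and the toy clip and function names are mine, not anything from 5DToRGB; the point is only that the extra chroma samples in a 444 transcode are manufactured, not captured.

[code]
import numpy as np

def upsample_chroma_nn(chroma_half, out_h, out_w):
    """Nearest-neighbour 2x upsample of a half-size chroma plane."""
    up = np.repeat(np.repeat(chroma_half, 2, axis=0), 2, axis=1)
    return up[:out_h, :out_w]

# A toy 4x4-pixel clip: full-resolution luma plus half-resolution (2x2)
# Cb and Cr planes, i.e. 4:2:0.
y  = np.arange(16, dtype=np.float64).reshape(4, 4)   # luma, full resolution
cb = np.array([[100.0, 140.0],
               [120.0, 160.0]])                       # chroma blue, half size
cr = np.full((2, 2), 128.0)                           # neutral chroma red

cb_444 = upsample_chroma_nn(cb, 4, 4)
cr_444 = upsample_chroma_nn(cr, 4, 4)

print(cb_444)   # each captured chroma sample now just covers 4 pixels
[/code]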