Everything posted by cpc
-
sunyata's analogy is quite good. A small correction only: prints don't have a linear representation of the scene light, not at all. Prints are heavily gamma corrected for projection in dark environments, much more so than material meant to be shown on emissive displays. Print-through film curves (that is, scene-to-projection) typically have a gamma in the range 2.5-2.8.

Now, first it is important where the "log" comes from. It is there because humans perceive exponential light changes as linear changes. This is a logarithmic relationship, hence "log". Log curves mimic this: exponential scene light changes are recorded as linear changes. In other words, each increase of exposure by a stop (a doubling of the light) takes the same number of coding values to encode, rather than double the values of the previous stop (as with linear encodings). There are a couple of technical benefits:

1) Much more effective and economical utilization of the available coding space. This is why log curves encode wide dynamic ranges effectively in a smaller bitdepth. Cineon was developed to capture the huge DR of negative film in only 10 bits.

2) (Related to 1) Increased tonal precision in the dark parts of the picture, compared to a physically correct linear encoding using the same coding space.

Since sensors work linearly, purely logarithmic curves would waste some coding space in the blacks, because there is not enough density there. That's why practically all log curves are pseudo-log, with some compression in the black end. Arri's Log-C is probably the closest to pure log; Canon's C-log is the furthest from it. The other reason is, as mentioned, mimicking Cineon. This is also, I believe, one of the main reasons all log curves have a raised pure black level: it mimics the base density (D-min) of film, as encoded in the Cineon curve to accommodate scanning film densities.
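Here is a minimal sketch of the codes-per-stop point, assuming a made-up pure log curve over 10 stops in a 10-bit space (no real camera curve is used):

```python
import numpy as np

# Toy comparison (no real camera curve assumed): how many 10-bit code
# values each stop of scene light occupies under a linear encoding vs.
# a pure log encoding covering the same 10-stop range.

stops = 10                       # dynamic range to encode, in stops
levels = 1024                    # 10-bit coding space

# Normalized linear scene light at each stop boundary: 2^-10 .. 2^0
light = 2.0 ** np.arange(-stops, 1)

linear_codes = light * (levels - 1)                          # linear encoding
log_codes = (np.log2(light) + stops) / stops * (levels - 1)  # pure log encoding

for i in range(stops):
    lin = linear_codes[i + 1] - linear_codes[i]
    log = log_codes[i + 1] - log_codes[i]
    print(f"stop {i - stops + 1:+3d}: linear {lin:7.1f} values, log {log:6.1f} values")
```

Linear doubles the code count with every stop (the top stop alone eats half the coding space), while log gives every stop the same ~102 values.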
-
You can increase the DRO setting if you are going to use Autumn Leaves for grading. DRO +3 flattens the low-mid range of all the styles.
-
It doesn't have more highlights; what it does is pull mid grey lower and clip the signal at IRE 100. The lower mid grey point may create the illusion of more highlight latitude, but it is actually the opposite. The other three Cine curves go to IRE 109. Cine2 is supposed to be used "as is" and be broadcast safe (hence the clip at IRE 100). I don't think any peaking setting will work well with s-log. Peaking relies on contrast, and contrast is very low with s-log. One way to sort of work around this is to pre-focus with a contrasty PP and switch to s-log for recording.
-
Seems to be clipping quite a lot? There is no reason to ever use Cine2 if you are doing grades. Cine2 has no superbrights.
-
Interesting insights into the new Sony A7R II and RX sensor technology
cpc replied to Andrew Reid's topic in Cameras
Not gonna happen, because m43 lenses can't cover a sensor that big. -
Interesting insights into the new Sony A7R II and RX sensor technology
cpc replied to Andrew Reid's topic in Cameras
Actually, it is highly unlikely the A7rII sensor ends up in a video camera. The A7sII (whenever that comes), sure, but the 42mp A7rII sensor? Not really. It would be a huge waste of sensor area, since the full frame video mode is, by Sony's own admission, not as good as the s35 mode. It simply doesn't make sense (yet) to use that many megapixels for 4K video. -
Well, I have the A7s, and s-log2 is unusable in 8-bit if you care one bit about image quality. Plastic skin everywhere once you restore contrast; there simply isn't enough tonal precision in 8 bits for the flatness of this curve. Besides, the s-log2 setting leaves a huge chunk of the coding space unused, which hurts tonal precision further. Canon C-log is a different story. Its curve is not really log, just log-like in the upper end, and it has less DR than s-log. Much more 8-bit friendly. The codec is important, but not as important as the bitdepth. Try working with 16 stops of log in uncompressed 8-bit and see what happens, even with no codec at play.
-
Resolve, Scratch, Nuke Studio, Premiere, After Effects, Edius (slow), Lightworks latest release. All work with DNG footage.
-
This is probably for historical reasons, related to interlaced video. With interlaced, each of the two fields is subsampled separately (because the fields represent different moments in time, and subsampling them the same way as progressive images would introduce motion-related chroma artifacts). Now, because each field is subsampled separately, using one of the subsampling schemes with only half the vertical chroma resolution, for example 4:2:0 or 4:4:0, would result in gaps of two lines with no chroma samples. Here is how a column of 4 neighboring frame rows looks in this case:

Field 1 top row (chroma sample)
Field 2 top row (chroma sample)
Field 1 bottom row (no chroma sample)
Field 2 bottom row (no chroma sample)

But with full vertical sampling, as in 4:2:2, there is no such issue. You only get 1-sample gaps horizontally, and no gaps vertically, when applying 4:2:2 to interlaced video.
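A toy sketch of that vertical pattern, assuming per-field subsampling as described (the helper is hypothetical, not any codec's actual logic):

```python
# Hypothetical sketch of the vertical chroma pattern described above.
# Frame rows alternate fields: even rows are field 1, odd rows field 2.
# Per-field 4:2:0 keeps chroma on every 2nd row *within each field*,
# i.e. pairs of frame rows with chroma followed by pairs without.
# Per-field 4:2:2 keeps full vertical chroma resolution, so no gaps.

def has_chroma(frame_row, scheme):
    field_row = frame_row // 2        # row index within its own field
    if scheme == "4:2:2":
        return True                   # full vertical chroma resolution
    if scheme == "4:2:0":
        return field_row % 2 == 0     # half vertical resolution per field
    raise ValueError(scheme)

for y in range(8):
    field = 1 if y % 2 == 0 else 2
    mark = "chroma sample" if has_chroma(y, "4:2:0") else "no chroma sample"
    print(f"frame row {y} (field {field}): {mark}")
```

Running it prints exactly the two-rows-on, two-rows-off pattern from the column above.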
-
Well, since you've been renting lights you should know best what works for you? I'm not sure how the 1x1 lights can be enough for lighting a normal scale set/location from scratch. Are you only shooting closeups or using the LEDs to augment daylight?
-
1) Convert your ML raw/mlv files to DNG sequences using one of the many available GUI batch tools. This is essentially a simple and fast rewrapping job.
2) Delete the raw/mlv files.
3) Raw process, edit, color and finish in Resolve Lite, non-destructively, using the DNG files. No generations, proxies or whatever. Full original quality at each step.

That's what I've used. Now, if you need compositing, you'll have to export to a compositing app using whatever format suits you; I've used TIFF sequences for that.

(I have an extra step, using my compressor http://www.slimraw.com/ to losslessly compress the DNGs (usually 50+% size reduction while retaining full original quality), but this is optional. /end of plug)

And apparently there are now ways to mount the ML raw files directly so that they are visible as DNG sequences in video apps. You should consult the Magic Lantern forum; tons of workflows there.

I haven't tried CC 2015 yet, but it looks promising. CC 2014 was way too slow for my liking when working with raw.
-
First Sony A7R II user experiences - global shutter and native ISO 800?
cpc replied to Andrew Reid's topic in Cameras
Yeah, they put a global shutter in there, then forgot to mention it in all the marketing blurb. Not really. -
Well, I wrote this last year: http://www.shutterangle.com/2014/shooting-4k-video-for-2k-delivery-bitdepth-advantage/

Downscaling 4:2:0 8-bit video from 4K to 2K will give you 4:4:4 video with 10-bit luma and 8-bit chroma. But keep in mind that these are the theoretical limits. In practice, what you gain depends on pixel variation and the compression used on the source image. The more detailed the 4K image, the more true color precision you gain in the downscaled image. In any case, downscaling 4K in post delivers the best looking 2K/1080p from the current generation of 4K cameras.
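A minimal numeric sketch of where the extra luma precision comes from, assuming a plain 2x2 box average (real scalers use fancier filters):

```python
import numpy as np

# Averaging a 2x2 block of 8-bit samples can land on quarter-step
# values, i.e. 10-bit luma precision -- provided the source pixels
# actually vary within the block.

block = np.array([[100, 101],
                  [101, 101]], dtype=np.float64)  # four 8-bit luma samples

avg = block.mean()
print(avg)   # 100.75 -- needs 10 bits (403 on a 0..1023 scale), not 8
```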
-
Looks good. I would probably have him turned a little more to the left, and also move the camera a bit left to offset the window and put his head against the blue columns. The light would probably be slightly better on him too. The changing sun is messing a bit with the brightness of these interview shots; they get progressively darker. Not a big deal, but you may want to match them.
-
You better ask here: http://www.magiclantern.fm/forum/index.php I have tons of ML footage but it is all converted to DNG. No reason to keep the original raw files since they aren't standard.
-
You are downscaling 4K to 1080p, then upscaling the 1080p back to compare it to 4K. Downscaling is not a reversible operation. Why? Because different groupings of pixels can produce the same output pixel once downscaled. When upscaling back, the software can't recreate the original pixels out of thin air, because there are many possibilities for what they were. In the end, you are comparing a 1080p image to a 4K image, and of course the 4K image will have more resolution. So a proper downscale loses you resolution and gains you bitdepth/color precision at the lower resolution, in the sense that each new pixel is essentially "quantized" at a higher bitdepth.
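A trivial sketch of why the operation can't be inverted:

```python
# Two different 2x2 pixel groupings that average (downscale) to the
# same output value -- which is why an upscaler can't know which
# original pixels produced it.
block_a = [100, 100, 100, 100]
block_b = [ 98, 102,  99, 101]

print(sum(block_a) / 4, sum(block_b) / 4)   # both print 100.0
```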
-
4K for $899 with the Panasonic FZ1000 - but beware the quirks!
cpc replied to Andrew Reid's topic in Cameras
What leads you to believe that the 4K video is a center crop and not a sensor downscale? -
But there is usually some variance, more so with multiple samples per output pixel. Yes, noise provides this, but so does detail, and even gradients can have it (the steeper, the better), especially without compression on top. The thing is, in the blacks and dark grays, where precision is most lacking, noise is at its strongest. So in practice you gain the most exactly where you need the gain.

(On a remotely related note: I've been playing with some Kinefinity footage lately; the noise is nice and the image scales down to 2K beautifully.)

@Axel: The HDMI out is not necessarily subsampled the same as the in-camera recorded video. Lots of cameras record 4:2:0 but output 4:2:2 over HDMI. And some go 4:4:4 (over SDI); the F3 comes to mind.
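A quick sketch of the dither effect, with made-up numbers:

```python
import numpy as np

# A "true" level of 100.3 is unrepresentable in 8-bit. Without noise,
# quantization pins every sample to 100 and averaging gains nothing;
# with ~1 LSB of noise acting as dither, the average of many quantized
# samples recovers the fractional level.

rng = np.random.default_rng(0)
true_level = 100.3

clean = np.round(np.full(10000, true_level))              # all exactly 100
noisy = np.round(true_level + rng.normal(0, 1.0, 10000))  # noise dithers the rounding

print(clean.mean())   # 100.0 -- no gain from averaging
print(noisy.mean())   # ~100.3 -- sub-LSB level recovered
```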
-
Not sure about that, John. The native fine noise in the 4K image is probably mostly killed by compression, much as the downsampled grain in the original 2K image is killed by compression. I don't think NLEs dither; Resolve doesn't seem to do it. But the scaling algorithms are surely involved, and they affect the output because quite a lot of source pixels are sampled per output pixel (a generic cubic spline filter would sample 16 source pixels). @sunyata: It is not true 10-bit, and in your example chroma is still 8-bit (though with no subsampling), but it surely is better than a 2K 8-bit subsampled source from camera, even if not as good as a true 10-bit source.
-
The numbers above should read 20-25 and 60-65 tonal values per stop.
-
Thanks to John for pointing me here, it is an interesting discussion. :)

As one of the people who think there is a free tonal precision lunch to be had in the 4K to 2K downscale, and the one who wrote the ShutterAngle article linked on the previous page, I think the 8-bit display argument is a bit beside the point. The whole idea of shooting 4K for 2K (for me, at least) is in using a flat profile like s-log2 on the A7S, then working it in post to the appropriate contrast. As my idea of a good looking image is generally inspired by film, and includes strong fat mids and nice contrast, the source image is going to take quite a beating before getting to the place I want it. And here is where the increased precision is going to help.

To simplify it a bit: when you stretch an 8-bit image on an 8-bit display, you are effectively looking at, say, a 6-7 bit image on an 8-bit display, depending on how flat the source image is. That's why starting with more precision is helpful. Starting with 20-25 values in the mids (which is the case with 8-bit s-log2) is just not gonna handle it, when you are aiming at, say, 60-65 values there in delivery.

Compression and dirty quantization to begin with surely affect the result and limit precision gains. But they don't entirely cancel them, and the better the codec used on the HDMI feed, the cleaner the downscale.
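A small sketch of the stretching point, assuming a simple linear contrast stretch:

```python
import numpy as np

# A flat 8-bit image occupying only part of the code range is stretched
# to full contrast. The count of distinct levels doesn't grow -- only
# the gaps between them do, which is the "6-7 bit image on an 8-bit
# display" effect.

flat = np.arange(90, 175, dtype=np.float64)   # flat image: 85 distinct 8-bit codes
stretched = np.round((flat - flat.min()) / (flat.max() - flat.min()) * 255)

print(len(np.unique(flat)))        # 85 levels packed into 90..174
print(len(np.unique(stretched)))   # still 85 levels, now spread over 0..255
```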
-
My guess is the FC process goes like this:

1) Linearize the source data. The source can be any transfer curve, with varying colorimetry. The tonal curve is linearized, and color is transformed (corrected) to some reference color space. This step equalizes the input.

2) An idealized "printed" film transfer function is applied (with the film negative gamma/contrast index expanded to 1, hence "printed"), which pushes colors around, possibly also tweaking contrast (based on the film negative contrast index values).

3) The result is gamma encoded for display.

In theory, 1) and 2) (well, and 3, for that matter) can be done in a single composite step (a composite LUT for each possible source type and each possible target film, for example).
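A toy 1D sketch of that composite idea, with entirely made-up curves standing in for the real transforms (FilmConvert's actual internals are not public):

```python
import numpy as np

# Three stages -- linearize source, apply an idealized "printed" film
# curve, re-encode for display -- baked into a single lookup table.
# All three curves below are invented for illustration only.

def linearize(v):       return v ** 2.2        # stand-in source decode
def film_print(v):      return v ** 1.6        # stand-in printed-film response
def display_encode(v):  return v ** (1 / 2.4)  # stand-in display gamma

codes = np.linspace(0.0, 1.0, 256)
composite_lut = display_encode(film_print(linearize(codes)))  # one LUT, three steps

# Applying the baked LUT to an 8-bit image is a single indexed lookup:
image = (np.random.default_rng(1).random((4, 4)) * 255).astype(int)
out = composite_lut[image]
print(out.round(3))
```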
-
Yes, but that's not what I am asking. Any negative film is extremely low contrast; we are talking film gammas like 0.5-0.6. The image is never meant to be used at this contrast level: the audience never sees the negative, and printing on paper or on release film restores contrast. Even higher contrast release stocks help battle theater projection issues (stray light, low theater screen luminance, lateral eye adaptation) which tend to make darks appear brighter. The gamma of the whole system, from scene to projection, is higher than 1, often higher than 1.5, depending on the release stock.

Hence the question about FC. There is an abundance of FC footage floating around the internets which looks unnaturally flat (apparently, for many people, lifted blacks = filmlike), and definitely not the way a specific stock really looks when printed. I doubt that FC defaults to digital cinema contrast; digital cinema gamma curves actually have higher contrast than computer displays (sRGB, 2.2), exactly due to the projection issues mentioned above.
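Rough numbers, purely illustrative:

```python
# Arithmetic behind "system gamma higher than 1" (illustrative values,
# not measured from any particular stock): scene-to-screen gamma is the
# product of the gammas in the chain.

negative_gamma = 0.55   # low-contrast camera negative
print_gamma = 3.0       # higher-contrast release print stock

print(negative_gamma * print_gamma)   # 1.65 scene-to-projection
```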
-
Looks very neat. I have a question about FilmConvert: do you select a print/release stock in combination with the negative film stock? In reality, there is NO usable look to a film negative until you print it. And a negative can look one way when printed on a low contrast release stock, and different when printed on a higher contrast release stock, etc.