
Everything posted by cpc
-
I recall something like 115 for mid grey, but it's been 4 years since I last shot the 5D3, so I may be wrong. Having raw white clip referred values is pretty cool; we didn't have these back then. IMO, the problem with using a histogram for exposure is that it kind of promotes post-unfriendly habits like ETTR. The spotmeter, on the other hand, is all about consistency.
-
You can use the spotmeter for this. This simple tool is faster/better than a waveform for judging skin exposure and not nearly as obtrusive as false color (you can have it on ALL the time). All you need to know to make good use of it is the mapping between the numbers you see in the profile you are monitoring with (say, you have the camera set to Standard while shooting raw) and the numbers you'll get in post after doing your raw import routine. Shoot a grey chip chart, record what goes where in live view (or just record a clip in Standard), import the raw footage and make a table with two columns. Voila, you now know that +1 is ~175 in "spotmeter values" and falls wherever in your imported footage. You don't really need to memorize the mapping with great precision. All you need is to know where the -3 to +3 range falls, as this is where the important stuff is in an image. Knowing your tonal curves is useful in most situations anyway. But it happens to be priceless when shooting raw and monitoring an image which you know is different from what you'll be seeing in post.
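As a sketch of that two-column table, here is one way to keep and use the lookup; every numeric value below is a hypothetical placeholder, not a measurement from any real camera:

```python
# Hypothetical lookup from chart stops (relative to mid grey) to the
# 8-bit values the camera's spotmeter shows in the monitoring profile.
# You populate this yourself by shooting a grey chip chart.
spotmeter_map = {
    -3: 28, -2: 52, -1: 82, 0: 118, 1: 175, 2: 222, 3: 247,
}

def stops_from_spotmeter(value, table=spotmeter_map):
    """Return the chart stop whose recorded spotmeter value is closest."""
    return min(table, key=lambda s: abs(table[s] - value))
```

In practice you would keep a second column with the values measured after your raw import routine, so the same spotmeter reading also tells you where a tone will land in post.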
-
Nice shots @Gregormannschaft But this also illustrates the technical problem with s-log + low bitdepth + compression. Skin is really thin on the last image, for example, and the wall is on the wrong side of the banding limit.
-
One distinguishing characteristic of log curves compared to negative film is that (on most of them) there is no shoulder. (Well, the shoulder on Vision3 series films is very high, so not much of a practical consideration unless you overexpose severely.) An effect of this lack of shoulder is that you can generally re-rate slower without messing up color relations through the range, as long as clipping is accounted for. Arri's Log-C has so much latitude over nominal mid grey that rating it at 400 still leaves tons for highlights. I don't think any other camera has similar (or more) latitude above the nominal mid point? Pretty much all the other camera manufacturers inflate reported numbers by counting the noise fest at the bottom in the overall DR. No wonder a camera with "more" DR than an Alexa looks like trash in a side-by-side latitude comparison.
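The headroom arithmetic behind re-rating is simple. The sketch below uses an illustrative Alexa-like figure of ~7.4 stops above mid grey at EI 800; that number is my assumption for the example, not manufacturer data:

```python
import math

def headroom_after_rerate(stops_above_at_nominal, nominal_ei, rated_ei):
    """Stops of highlight headroom left after re-rating the camera.

    Rating slower (lower EI) gives more exposure, so each halving of EI
    costs one stop of headroom and buys one stop over the noise floor.
    """
    return stops_above_at_nominal + math.log2(rated_ei / nominal_ei)

# Rating a nominally EI 800 camera at 400 trades one stop of headroom
# for shadows, leaving ~6.4 stops over mid grey in this example.
```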
-
Banding is a combination of bitdepth precision, chroma subsampling, compression and stretching the image in post. S-log3 is so flat (and doesn't even use the full 8-bit precision) that pretty much all grades qualify as aggressive tonal changes. S-log2 is a bit better, but still needs more exposure than nominal in most cases. Actually, I can't think of any non-Arri cameras that don't need some amount of overexposure in log, even at higher bitdepths. These curves are ISO rated to maximize a technical notion of SNR which doesn't always (if ever) coincide with what we consider a clean image after tone mapping for display. That said, ETTR isn't usually the best way to expose log (or any curve): too much normalizing work in post on a shot-by-shot basis. Better to re-rate the camera slower and expose consistently. In the case of the Sony A series it is probably best to just shoot one of the Cine curves. They have decent latitude without entirely butchering mids density. Perhaps the only practical exception is shooting 4k and delivering 1080p, which restores a bit of density after the downscale.
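To put a number on how flat S-Log3 is in 8 bits, here is the S-Log3 OETF from Sony's published white paper, used to count the code values available per stop around mid grey; treat this as a back-of-the-envelope sketch:

```python
import math

def slog3(x):
    """Sony S-Log3 OETF: linear reflectance (0.18 = mid grey) to a
    normalized 0..1 signal, per Sony's published formula."""
    if x >= 0.01125:
        return (420.0 + math.log10((x + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
    return (x * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0

# 8-bit code values spanning the one-stop range centered on mid grey:
lo = slog3(0.18 / math.sqrt(2))
hi = slog3(0.18 * math.sqrt(2))
codes_per_stop = (hi - lo) * 255  # roughly 18-19 codes per stop
```

Around 18 code values per stop leaves very little room before a modest stretch in post opens visible gaps between codes, which is exactly where the banding comes from.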
-
You will still get the best HD/2K delivery quality from a 4K camera even if you never deliver in 4K. A Sony FS700 + Odyssey 7Q goes for $3-4k nowadays and can shoot great 4K 12-bit raw. And it can also do high fps for slow-mo, which may be appealing for your music video/slow movement scenes.
-
Well, shooting negative film is certainly much closer to shooting raw than to shooting baked (as in JPEG). The negative is unusable until printed, and there are a great many choices that need to be made during exposure, development and printing -- choices which command the tonal and color characteristics of the image. There may be "immediate patina and abstraction" in the result, but getting the result isn't immediate by any means.
-
In the ASC mag interview Yedlin readily acknowledges there is nothing new in the video (for technically minded people). It is deliberately narrated using language targeting non-technical people. His agenda is that decision makers lack technical skills but impose technical decisions, easily buying into marketing talk. Also, in the video he explicitly acknowledges the benefits of high res capture for VFX and cropping.
-
Testing camera profiles comes with the territory of being a videographer/cinematographer. The easiest (cheapest) way to get an idea of DR distribution is to shoot a chip chart at a few exposures with a constant increment and use these to make a DR curve. Here is an example I did years ago using a Canon DSLR. Also, it has been mentioned already, but log profiles do not introduce noise. They raise the blacks, making the noise visible. If you see more noise in the graded image, you are not exposing the profile properly. Most log profiles need to be exposed slower than nominal, because the nominal ISO of the profile is chosen to maximize some notion of SNR, which is not necessarily the rating you want for a clean image (after grading). In fact, cameras do it all the time. Take a Sony A7 series camera, for example, and look at the minimum ISOs of its video profiles. See how the minimum ISO moves around between 100 and 3200 depending on the profile? That's because the camera is rated differently, depending on how it is supposed to redistribute tones according to the profile.
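A minimal sketch of turning such chip-chart readings into a usable-stops count; the readings below are invented for illustration, and the noise/clip thresholds are arbitrary choices, not standards:

```python
# Recorded 8-bit values of the same grey patch shot at 1-stop exposure
# increments (stop offset relative to metered mid grey -> code value).
readings = {
    -6: 4, -5: 6, -4: 11, -3: 22, -2: 42, -1: 74,
     0: 118, 1: 171, 2: 219, 3: 248, 4: 255,
}

def usable_stops(readings, noise_floor=8, clip=254):
    """Count the stops that land above the noise floor and below clipping."""
    return sum(1 for v in readings.values() if noise_floor < v < clip)
```

Plotting stop offset against code value from the same table gives you the DR curve itself, and shows at a glance how the profile distributes its precision.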
-
High detail, deep focus, and bright exposure will produce the highest bitrate. Movement is irrelevant because the compression is intra-frame.
-
The human eye does not perceive bits so such a claim makes no sense by itself. The necessary precision depends on many factors. There are multiple experiments on the tonal resolution of the eye. Perhaps the most applicable are the ones done during the research and setup of the digital cinema specifications, which determined that for movie theater viewing conditions and for the typical tonal curves dialed in by film colorists during the DI process and for the encoding gamma of the digital cinema spec (2.6 power gamma), you'd need around 11 bits to avoid noticeable posterization/banding artifacts after discretization (seen in gradients in the darks, by the way). This was rounded up to 12 bits in the actual specification. In an office viewing environment (brighter environments) you can do fine with less precision. This is about delivery though -- meant for images for display. You'd usually need more tonal precision when you capture an image, because it needs to withstand the abuse -- pushing and stretching -- of post production. The precision will mostly depend on transfer curves -- you need relatively more for linear images, and relatively less for logarithmic images. With today's DRs 8 bits is absolutely not enough anymore for log curves (and not even remotely close for linear images). It usually does fine for delivery for consumer devices (some of these displays are 8-bit, some are 6-bit; likely moving to 10 bits in the future).
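The ~11-bit figure can be sanity-checked with a back-of-the-envelope model: compare the worst-case luminance jump between adjacent code values on a 2.6 power gamma display against a roughly 1% visibility threshold. The black level and the 1% threshold below are my assumptions for illustration, not values from the DCI spec:

```python
def worst_step(bits, gamma=2.6, black_rel=0.0005):
    """Largest relative luminance jump between adjacent code values on a
    power-gamma display with black level black_rel (relative to white)."""
    n = 2 ** bits
    worst, prev = 0.0, black_rel
    for c in range(1, n):
        lum = (c / (n - 1)) ** gamma + black_rel
        worst = max(worst, (lum - prev) / prev)
        prev = lum
    return worst

# With these assumptions, 12 bits keeps every step under ~1% while
# 10 bits does not, matching the "around 11, rounded up to 12" figure.
```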
-
The true power of raw shooting is that you don't need to fix anything in advance and you can raw develop, edit and color at the same time in a liquid, flexible and creative post workflow. At least in Resolve you can. Proxies shouldn't be necessary. I don't see how a proxy workflow can be simpler -- you add a step (possibly two steps, if you round trip).
-
Well, this kind of makes my argument irrelevant then.
-
Well, I am the source. I don't need official channels for this; I trust math better than any channel. The ~11 bits effectively used by the BMDFilm curve are orthogonal to how their pipeline constructs a 12-bit digital image. They are the result of the lack of a linear toe (which creates sparsities in the darks, i.e. values which never occur), as well as unused values at the top end of the range. I haven't looked at the specs recently, but 60 fps 4.6K lossless raw would be in excess of 700 MB/s, which I am not sure is even supported by CFast cards currently. edit: FWIW, I've written this, I guess it should be ok as far as credentials go: http://www.slimraw.com/
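The data-rate claim is simple arithmetic. I'm assuming a 4608x2592 photosite count for the 4.6K sensor and a typical ~1.5:1 lossless compression ratio (both are assumptions on my part):

```python
def raw_rate_mb_s(width, height, fps, bits, compression=1.0):
    """Bayer raw data rate in MB/s (1 MB = 10**6 bytes)."""
    return width * height * fps * bits / 8 / compression / 1e6

uncompressed = raw_rate_mb_s(4608, 2592, 60, 12)               # ~1075 MB/s
lossless = raw_rate_mb_s(4608, 2592, 60, 12, compression=1.5)  # ~717 MB/s
```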
-
12 bits is 2 bits more than 10 bits: 1.2 times the number of bits (and 4 times the values that can be encoded). These are nominal bits though. You can think of this as the available coding space. What you put in there is another story. First, the tone curve might not use all the available coding space (for example, the BM 4.6K log curve is closer to 11 bits). Then compression comes in. A BM camera may reduce actual precision to as low as 7-8 bits in HFR (4:1 compression), depending on image complexity. You can read the paper on Canon Log (C-Log); Canon raw uses the same curve sans debayer.
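The coding-space arithmetic, plus the average storage budget that compression leaves per photosite (the 4:1 figure is from above; how that budget maps to effective precision depends on the codec and the image):

```python
bits_ratio = 12 / 10              # 1.2x the nominal bits
values_ratio = 2 ** 12 / 2 ** 10  # 4.0x the encodable values

def stored_bits_per_pixel(bit_depth, compression_ratio):
    """Average bits actually stored per photosite after compression."""
    return bit_depth / compression_ratio

# 12-bit raw at 4:1 stores an average of 3 bits per photosite.
```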
-
Don't get fooled by nominal bitdepth. A 10-bit raw frame from the Varicam LT or the Canon C500 contains much more color info than a 12-bit 4:1 Blackmagic raw frame.
-
It is actually the other way around. For the same compression algorithm, higher bit rates need more processing power than lower bit rates in both encoding and decoding.
-
Raw from Canon cinema cameras is logarithmic, like ARRIRAW and REDCODE RAW (which are both 12-bit). 12-bit is plenty for log raw, and 10 bits is still pretty good with a log curve.
-
24-bit audio, timecode, genlock. Possibly other niceties. Also, the C300 II is priced at $12k, so it is $3k more, not $7k. At 1 Gbps their raw is less data than ProRes 4444; it is around 3:1 compressed. Withholding the XF-AVC codec until 2018 is actually kind of smart: they buy some time to see what comes next from Pana/Sony, and at the same time don't kill the C300 II immediately.
-
No profile -- flat or otherwise -- will be able to properly capture what you will be able to recover in post, so you might as well use it to help with other things. A punchy profile like Standard will help with manual focusing. Couple this with digic peaking + magic zoom, and you'll be surprised at what focusing precision is possible with just the camera display. Also, learning how the monitoring profile relates to your specific post workflow pays good dividends, especially if you are in the habit of exposing consistently. And if you know which profile value ends up where in post, then all you need for exposure is the digital spotmeter (which is really the ultimate digital exposure tool anyway). For B&W, Monochrome might be useful in that you get to view the tones only, without color distractions, and this might help with lighting choices and judging separation. But it can also be a detriment, since color cues can help with spatial orientation and framing if there is movement in the shot. And the profile's mapping to grey values will not necessarily match the channel mix you will do in post, so it might be misleading.
-
It shouldn't really matter even if you are on auto WB. You can always adjust one shot in Resolve and replicate the raw settings to all the others. And not bothering with WB while shooting is one of the advantages of shooting raw. In most cases leaving it at 5600K or 3200K should be fine for monitoring.
-
I am sure you can link to a hundred examples like these. Also sure that you don't need me to produce links to 900 focus tracking shots for you. These are cool and all, but they are certainly not the norm. It is likely that you remember them because they stand out. Do we really need to establish that a huge part of focusing is entirely technical, with no creative intent whatsoever? You wouldn't use AF so that it gets creative on you; software has no intent of its own and can't conceive anything (well, unless we dive into some metaphysical depths, which we shouldn't). You'd use it to do the chores, not the thinking.
-
If you think you can track focus in a moving shot better than decent AF you are overestimating your focus pulling skills. In a few years even pro pullers won't come close to AF success rates. Focus is simply one of these things an algorithm can do vastly better than humans. Yes, there are some specific shots where doing it through AF would require more work, but how often do you actually rack or do fancy creative focus work? 90% of pulling is tracking objects.
-
No idea. I haven't attempted upscaling to 4K. I don't think it makes much sense. People used to upscale 2.8K Alexa to 4K before Arri enabled open gate, and it looks fine, but 1080p might be a stretch.
-
At 2K you'll be giving up the full-frame benefits for a minuscule increase in pixel count (I deliberately don't say "resolution"; it is likely you won't get true higher res at 1:1 2K, as 1:1 puts much higher demands on lens sharpness). It is almost certainly better to just upscale 1080p to 2K.
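For scale, the pixel-count difference between the DCI 2K container and 1080p really is tiny:

```python
pixels_1080p = 1920 * 1080
pixels_2k = 2048 * 1080                   # DCI 2K container
increase = pixels_2k / pixels_1080p - 1   # under 7% more pixels
```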