Everything posted by HockeyFan12
-
I'm sure they're both great, I just love the colors in this shootout on the Panasonic: https://www.zacuto.com/canon-c200-vs-panasonic-eva1-camera-shootout-2018 Ironically, the Panasonic's colors resemble the C300's, and the C200 looks like a more sterile, magenta-tinted Alexa to me? Fwiw I too am surprised to learn the EVA1 has more dynamic range, but maybe that's not counting raw. The C200's dynamic range is outstanding. It doesn't have the great "look" of the earlier Canon cameras, though, and you need more talent as a colorist IMO. I think he just wrote "not appropriate for broadcast," which is a fair criticism if you're shooting tv–but then you're using an Alexa 95% of the time anyway. The flip side is, what's cooler than a C100-style body that shoots 60fps 4k raw internally with autofocus, that no longer needs to be built out (the XLRs and onboard mic are on the body and the EVF is usable, no need for the LCD), that gains up to ISO 102,400, and that has great battery life? For web videos or weddings that's pretty cool. Imagine trying to get that from an Alexa Mini, which eats batteries like crazy, needs a big EVF, lacks AF, can't gain up past ISO 3200, doesn't shoot 4k, etc. I mean you can use a C200 without any rig at all, and it fits fine on a Ronin M with pancake lenses that AF great with it. The heavier body balances lenses better handheld, and it's really not much bigger than a C100 in the first place, so that's super cool. I like the Amira more than the Mini (ergonomically), but even that needs an assistant to operate properly. So "not appropriate for broadcast" seems like an arbitrary distinction to me. For super low end reality tv, or for someone who just wants a really small documentary-style camera, I get the complaint; an FS7 or C300 Mk II would be way more appropriate there. I think Vice uses a lot of those, but that's mostly web anyway. "Not appropriate for enthusiasts," though, is I think the more accurate description of most cameras that are appropriate for broadcast. Even if you can afford an Alexa or F65, who has the money to hire a camera crew every time they want to shoot something? So if you know your needs, you know your needs, and the C200 should meet or exceed them. I do love the EVA1's colors, though...
-
Poor monitoring options (you can't monitor in Canon Log 2, or output HDMI and SDI at the same time), lack of timecode sync, and an unwieldy codec (that last one's debatable–it's no bigger than 3.2k XQ). But that's silly; no one is buying a $7k camera for narrative tv. They're renting Alexas, maybe the occasional F55 or Epic for 4k. So whether something is commonly used in tv is a bizarre and arbitrary criterion, at least without getting into more detail, unless your goal is renting to tv productions. Though it's not a big deal for a colorist, I think I like the colors from the EVA1 more–though I haven't even worked with EVA1 footage yet. The C200 has a magenta bias in the skin tones and an overall magenta tint, and it reverses some of the "magic" of the C300 or 1DC in favor of accuracy and a fake Alexa look, whereas the EVA1 seems to go for richer colors.
-
I bet you're right that it's at least 95% MX. Where did you get the editorial information? Was it really cut with Vegas? I guess it doesn't matter, but that's an unusual NLE. The grade was done by Dave Hussey at Co3, one of the best colorists working at what's imo the best color grading company, and I discussed it for a while with a senior colorist at another post house (who'd graded an Oscar-winning feature but is generally more freelance/indie) as one of the grades he was most impressed by, and I think it won some awards the year it came out. It should be good: I think an hour there is something like $5000. What's interesting to me is that it's a really good "film look," with the reds really rich and saturated and darker than they would be on video, but it does seem pretty digital nonetheless. The Zapruder film part (which is kind of tasteless, but whatever) looks a lot like film to me, though. I legit can't tell if it's video or film there. But the video as a whole represents an artist's attempt to make digital look like film, and maybe I'm just a snob, but I swear it still has hints of Red MX in it. Looks awesome nonetheless, really bold work. Maybe now, with more advanced profiling and software, it's easier to make one camera look like another. That piece Yedlin shot looks pretty much just like film to me: http://www.yedlin.net/OnColorScience/ Of course the Alexa always looked pretty good to me...
-
Yeah, that's a fair point. I think saying something is "bs" without saying why it's bs isn't always helpful. I'm mostly just curious about whether the thickness of the CFA matters and in what ways. Arri points out that most sensors can see the whole visible spectrum, but that doesn't mean they can differentiate all the colors within it cleanly. I found this thread, which I'm going to read when I get the chance lol: http://www.openphotographyforums.com/forums/showthread.php?t=19894
-
Looks nice. I love how foliage looks with this camera, and greens and blues (and yellows) in general. How did you find it compared with the C200? Most sample footage I've seen from the EVA1 looks nicer, even if the technical performance seems similar. Seems like they really have something good going here.
-
Fair enough. I shouldn't speak for the experts on the forum, just taking a guess at what they mean. And now we're getting into semantics anyway.
-
I think it's mostly the age-old canard (meme?) about using the word "science" when it's really about engineering. Anyhow, this stuff is above my head, too, and I think we should leave it to the experts. That is to say, the engineers. (Not scientists. Or marketing departments.) There's a lot of contradictory information online: Sony claims the F65 and F55 have the widest color gamut of any sensor; Canon's cinema gamut is far wider than that, though; and Arri claims that sensors don't have inherent gamuts because they can all "see" any visible color, which makes sense to me. But this is all marketing, so it's difficult to get to the truth. So I'd just defer to @Mako Sports and @Mokara here. And it makes sense that they're right (and so is Arri)–any Bayer color sensor in use today can see the entire visible spectrum, which is a vastly larger gamut than anything is mastered in.
I'm still hoping someone can clarify whether metamerism error at the sensor level matters, to what extent it matters, where it comes from, and how it can be addressed. In theory, it seems you could make an extremely thin CFA that's barely red or green or blue at all and improve noise performance dramatically; in theory the Phase One Trichromatic back is totally bunk, and I'm wondering if I'm a total sucker for wanting one. Engineers I've spoken with claim even the thinnest CFAs we see today are still really excellent, but why are the dyes as thick or thin as they are? If they could be virtually infinitely thin so long as they still have some color, why aren't they? Thinner CFAs would, if anything, see a wider gamut, but I'm guessing they'd differentiate hues worse? I'm really curious about that, and it would be nice if someone with an engineering background could explain it to us.
The Venice footage I've seen so far looks a lot better to me than C700 footage. And I really liked the F35, even the F3. Something about SLOG2 and the way color channels clipped just looked very "video" to me. But I might just be crazy or biased against those cameras–I didn't particularly like the F55 raw footage I worked with, and yet it's supposed to have the same CFA as the F65, which produces really nice images. Who knows... it was still pretty good! I'm not wild about the new Canons; I preferred the original C300's colors. Blues, greens, and skin tones were darker and the look was closer to color negative film, whereas the new ones seem very accurate to me but have more of a "video" feel, with a weird magenta tint. I think it's an engineering/market choice to try to emulate the Alexa better, but the old one had charm and wasn't just a "poor man's Alexa with a strange magenta cast."
I think we forget how primitive most colorists are (myself included). I loved the C100 because the look was great out of the box. Same with the Alexa. Loved the 5D Mark II. I'm not claiming one can't get a much better look from plenty of other technically superior cameras (than the C100, at least), and the F55 raw trounces most of those cameras. I'm just saying 99% of colorists can't get that super flat image to look as good as settings that look good out of the box. I include some of the top post houses in that, even. I think people here forget how incredibly skilled they are compared with the general public and even journeyman professional colorists. For most people, the out-of-camera look matters; I'm no engineer and I'm not a great colorist. Neither are most consumers.
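Going back to the thin-CFA question above, here's the rough linear-algebra intuition as a toy numpy sketch. The mixing numbers are invented, not real sensor data; the point is just that the more the channels overlap (thin dyes), the bigger the coefficients in the correction matrix that restores saturated primaries, and those coefficients amplify noise:

```python
import numpy as np

# Toy channel-mixing matrices: rows are the camera's R/G/B response to pure
# scene R/G/B. Invented numbers standing in for dye thickness, not real data.
thick = np.array([[0.9, 0.1, 0.0],    # saturated dyes: channels barely overlap
                  [0.1, 0.8, 0.1],
                  [0.0, 0.1, 0.9]])
thin = np.array([[0.5, 0.3, 0.2],     # thin dyes: every channel sees everything
                 [0.3, 0.4, 0.3],
                 [0.2, 0.3, 0.5]])

for name, mix in [("thick CFA", thick), ("thin CFA", thin)]:
    correction = np.linalg.inv(mix)   # matrix that restores saturated primaries
    # Independent per-channel noise passes through the correction matrix and
    # is amplified by the root-sum-square of each row's coefficients.
    gain = np.sqrt((correction ** 2).sum(axis=1))
    print(f"{name}: noise amplification per channel = {np.round(gain, 2)}")
```

On these made-up numbers the thin-CFA gains come out several times larger, which, if the intuition is right, is the trade-off: thinner dyes pass more light per channel but pay it back with interest in the matrix, and differentiate nearby hues worse.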
My big issue is that the hue vs luminance curve introduces a lot of noise into the image, so trying to use it to turn a video look into more of a color negative look is difficult. But I know there are those who can do it. Still, I think this piece was graded expertly by Company 3, and you can see some parts that are clearly video trying to look like film. At 1:51 the grass and picnic blanket have the color and saturation of film, but the luma values still look more like an additive model than a subtractive one, and partly as a result the grass still looks more green than blue. Art Adams wrote about how the DXL darkened saturated green values and stretched out the colors between yellow/green and blue/green to get rid of the "Red look," and to me there are some scenes here that still have the "Red look." The American flag during the Zapruder video portion looks more like a subtractive model to me, though; the red there really pops. I've been told parts of this are shot on film and parts on video, but I lack the eye to differentiate all of it. It's probably all video lol. I bet it's 99% Red MX.
-
I think this is basically the idea behind ACES. But it still doesn't account for metamerism error, which I believe increases as the spectral response curves of a given dye become thinner and as the sensor's inherent gamut becomes smaller. So that's where I think it's not BS. I suspect the Phase One Trichromatic back does render better color than the standard Phase One back, even in sRGB images, even though both sensors cover more than rec709–just as Velvia 50 renders better color than faster slide films. But there's a lot of debate about it, and the promotional videos for that back are full of demonstrable half-truths, lies, and marketing talk. Do you think it makes a difference? Unfortunately I'm not very technical, so I leave it to the experts like you and @Mako Sports for the final word.
I agree most raw files of decent quality can be transformed to match if you have the software. I just think it's more true for stills than for video, because ACR exists for stills but not for most video raw formats. I haven't compared Canon and Sony side-by-side in ACR myself, but DPreview's test charts I suspect are mostly processed through ACR (does anyone know otherwise?), and the color rendering between systems seems nearly identical. I do suspect there are some subtle differences that show up more in faces and foliage than they do in color charts, but in theory are measurable in both. That might just be me being a sucker for marketing, though.
What I'm curious about, and what I wonder if either of you could help me understand, is this: the F5 and F55 have different CFAs; one covers an exceptionally wide gamut, the other is designed for rec709. Assuming a rec709 deliverable, is there any difference in color between the two? I'm guessing there isn't unless the colorist wants to bring colors from outside the rec709 gamut into the graded image–but for most work, none. Does a camera that covers rec709 inherently have low enough metamerism error that it can be transformed losslessly to match any other such camera? Does the sensor's gamut encompass all color values that can be derived without substantial error, or simply all color values that can be derived at all (with some error or without)? This is where I'm still confused. It's my understanding that metamerism error is a significant problem for fine hue differentiation (and the reason thinner CFAs can be problematic), which is difficult to see in color charts but easier to see in fine skin and foliage details, but I could be tripping up on advertising terminology. As I mentioned, I'm not very technical and do get drawn into ad speak. I've also read that even where it's a problem, it's very overrated, and that the only sensors with material metamerism problems are Foveon sensors. Someone here did some simple math to derive the chromaticities of the Red sensor at different development settings and found that they all cover more than rec709/sRGB, but that the green chromaticity is pushed toward red, possibly accounting for the ruddy "red look."
I can tell from the URL that you probably think this is BS, but it seems like their methodology makes sense: https://web.archive.org/web/20160310022131/http://colour-science.org/posts/red-colourspaces-derivation/ An author cited that article when he later claimed that the DXL largely solves this problem with its custom software, but that the DXL still doesn't provide color that looks as good as the Alexa's, for instance: https://www.provideocoalition.com/panavision-millenium-dxl-light-iron-red-create-winning-new-look-large-format-cinematography/ Was he monitoring in rec2020 (or another wide gamut), or is it possible that part of the "red look" is inherent to the sensor? Maybe some of the values we like in certain memory colors come from outside the rec709 gamut? Rec709 is VERY VERY lacking in greens relative to human vision, after all. I'd like to understand more about this, because the F55, F65, C300 Mk II, etc. all have incredibly wide gamut sensors. Does this matter for rec709 viewing? I suppose one could map out-of-gamut colors into rec709, which though mathematically wrong might be aesthetically pleasing, and only certain sensors would have gamuts wide enough for that... but in real-world use is there any difference or not? Any insight would be much appreciated. I've long been confused about this but was ashamed it was a dumb question. I'll just embrace being dumb and ask it.
-
I assume it's because, so long as the chromaticities defined by a given sensor's filter array encompass at least rec709 or sRGB (same chromaticities/gamut, different decoding gammas), transforms exist to map an image acquired with that sensor to an image acquired with a different sensor that also covers sRGB (assuming the final output is sRGB or rec709). And all color sensors should cover at least sRGB. This is sort of the idea behind ACES, if I'm not mistaken.
The flip side of the above argument is that things like dynamic range, noise performance, and metamerism error can result in an image clipping, being overly noisy, or having poor tonality, colors that bleed into each other, or simply false or indistinct colors. Those are inherent to the raw file, and metamerism error cannot be accounted for in software, so far as I know. As I wrote above, Phase One's Trichromatic back and their standard backs should both more than cover rec709, and yet the images taken with one appear more saturated and with better-looking (imo) color than those taken with the other, even on an sRGB monitor. And, for instance, Red's too-close red and green chromaticities make it more difficult to get punchy green foliage in post (though the DXL proves it's possible, and I've also seen work from Company 3 that looks amazing with the Red). So for an expert colorist, I could see making the argument that raw files might as well all be the same... to a point.
With raster, however, each company is baking things in quite differently, and I think that really does matter, just maybe not where one expects. And that's where Red applies their "color science" label: in the debayer process. So personally I don't think it's entirely BS, even if I think it's mostly a marketing term. (No offense to @Mako Sports; I don't mean to speak for him or disagree with someone more experienced, just hoping to contribute to the discussion even though I'm a real neophyte with this kind of thing. I suspect you have better reasons for claiming it's entirely BS than I do for thinking it's somewhere in between BS and material.) Also, a lot of raw isn't really raw. Canon Raw Light has a lot baked in. I suspect ARRIRAW does, too.
As regards color being more important in raster than raw: a friend worked with Stefan Sonnenfeld (founder of Company 3) on a project he cut, and even he wasn't able to fully account for chroma clipping on cheaper cameras, though I'm sure what he did still looked absolutely amazing. But it's specific bugaboos like chroma clipping that ended up being the hardest things to address, and with raw that's not so much an issue. Again, this is just my uninformed opinion, and I don't mean to speak for anyone or insult their abilities. I'm a fan of Sonnenfeld's work, so my bias is to agree with what he says, but it's totally possible someone more technical has figured this out better than he could. Back when I was shooting with the F5 I couldn't grade out clipping color channels in SLOG2; then I worked with it again with a different LUT (Sony now has Arri-emulating and Kodak-emulating LUTs) that addressed that, and the image was much easier to work with. So I think for raster images the pipeline makes a big difference, but I'm not much of a colorist.
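For what it's worth, the "transforms exist" idea is easy to show in miniature. Here's a hedged numpy sketch–the matrices are invented placeholders, not real IDTs: if each camera's linear RGB relates to XYZ by a 3x3 matrix, matching camera A to camera B is just a round trip through the common space.

```python
import numpy as np

# Invented 3x3 matrices taking each camera's linear RGB to CIE XYZ -- the kind
# of data an ACES IDT encodes. Placeholder values, not real cameras.
cam_a_to_xyz = np.array([[0.49, 0.31, 0.20],
                         [0.18, 0.81, 0.01],
                         [0.00, 0.01, 0.99]])
cam_b_to_xyz = np.array([[0.41, 0.36, 0.18],
                         [0.21, 0.72, 0.07],
                         [0.02, 0.12, 0.95]])

# Match camera A footage to camera B by round-tripping through the common space.
a_to_b = np.linalg.inv(cam_b_to_xyz) @ cam_a_to_xyz

pixel_a = np.array([0.18, 0.18, 0.18])   # 18% grey as camera A recorded it
print(a_to_b @ pixel_a)                  # the same scene colour in camera B's space
```

The catch is exactly the metamerism point: the round trip is only exact if each camera really is a linear re-mix of the same underlying color measurement. Two spectra that one sensor merges and the other separates can't be un-merged by any matrix, which, as I understand it, is where the residual per-sensor "look" lives.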
-
You can get this data from most manufacturers; much of it is freely available online in white papers. I worked with someone who helped develop the ACES profiles, and every major camera manufacturer (except Red, I believe) sent gamut (chromaticity) information for their sensors to help create the ACES profiles. They varied significantly. And just compare the Sony spectral response curves above to Velvia: http://www.fujifilm.com/products/professional_films/pdf/velvia_50_datasheet.pdf (Note where 0 relative response is.) I love the look of Velvia, and I assume the extremely narrow response curves are responsible for it (and for its low ISO). I even prefer it substantially to Velvia 100. Back in the film days, spectral response curves were obviously a big thing–you'd shoot different stocks even if you were doing a DI. So it's no surprise I've always believed a portion of the look is baked into raw images, even if the difference is much smaller than between film stocks.
There's been a big change in image characteristics over the course of Canon's dSLR line, even comparing raw files, and I read an article showing much better color (richer yellow/orange) from the earlier CCD models. Even the original 5D has its own look compared with the newer cameras (imo). Canon has changed its CFAs many times. If there were no functional difference among CFAs, Phase One would never have released the Trichromatic back (slower, but designed for better color) and Sony wouldn't tout the wider gamut of the F55 over the F5. At the design stage it seems to be a trade-off between color and low light... On the other hand, Red (apparently) has green and red chromaticities closer together than most manufacturers, which is said to account for the ruddy look you'd see in earlier Red footage, but the DXL seems to time that out well with its color profiling, despite having the same sensor. I've been very impressed with the DXL's images, even though I think Arri still has the edge with foliage. So a lot can be done in post to account for different filter arrays. That's the whole point of ACES...
Despite all the evidence that CFAs result in materially different gamuts, and thus materially different looks, I can't conclude anything with any confidence because I'm not an engineer. I suppose in theory, if they all cover rec709, the rest can be done with transforms? The story the guy on the ACES board told me was complicated, and way over my head. Those online who claim raw is raw and it's all the same seem very confident in their belief, so, while I choose to believe what I believe (it doesn't really matter that I'm misinformed if I keep my incorrect beliefs to myself), I'd generally tend to side with those who say all raw is the same, since they seem to have done the research and are very, very confident. Maybe the Phase One back is for suckers, or I misunderstand its appeal. I do think it's an interesting topic, and I'm a bit ashamed to admit that my belief that different sensors and manufacturers have different imaging and color characteristics has informed my purchasing decisions. There's a sucker born every minute...
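To make the spectral-response point concrete, here's a toy forward model in numpy–every curve and number below is invented, nothing is measured data: a raw channel value is just illuminant x surface reflectance x dye sensitivity, summed over wavelength. Narrow, Velvia-like curves pull two similar greens further apart than broad, thin-dye curves do, at the cost of throwing away more light.

```python
import numpy as np

wl = np.arange(400, 701, 10)                     # wavelength samples in nm

def bump(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

illuminant = np.ones_like(wl, dtype=float)       # flat stand-in for daylight

# Two invented sensitivity sets: narrow Velvia-ish dyes vs broad thin-dye curves.
narrow = np.stack([bump(610, 15), bump(540, 15), bump(460, 15)])
broad = np.stack([bump(610, 45), bump(540, 45), bump(460, 45)])

# Two different "leaf" reflectances that peak at nearly the same place.
leaf_a = 0.2 + 0.6 * bump(550, 30)
leaf_b = 0.2 + 0.6 * bump(565, 50)

for name, sens in [("narrow", narrow), ("broad", broad)]:
    rgb_a = (sens * illuminant * leaf_a).sum(axis=1)
    rgb_b = (sens * illuminant * leaf_b).sum(axis=1)
    # Normalize out brightness and compare how far apart the two greens land.
    delta = np.abs(rgb_a / rgb_a.sum() - rgb_b / rgb_b.sum()).sum()
    print(f"{name} curves: hue separation = {delta:.3f}")
```

On these toy numbers the narrow set separates the two leaves roughly two to three times more, which at least matches the intuition: narrow curves buy hue differentiation and pay for it in sensitivity.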
-
In my experience, Red is very generous with its ratings. The MX was rated very high, but I remember in practice it had about one stop less DR than the C300 (which is a more recent sensor design, so it makes sense) when I used it, which in turn had two to three stops less DR than the Alexa. Then the Dragon was noisier in the shadows than the MX but had more highlight detail, still trailing the Alexa by a lot. That was with the original OLPF; I think they switched it up. For the time it was pretty good, but today's mirrorless cameras have more DR than the MX ever did. Dragon looks great exposed to the left, though. Good tonality. Recently, Red's gotten a lot better. My friends who've used the Gemini think it's just great. Super clean, good resolution, great DR, too.
I've found CML does really good tests that correlate closely with real world use: https://cinematography.net/CineRant/2018/07/30/personal-comments-on-the-2018-cml-camera-evaluations/ They give the Gemini half a stop less than the Alexa–not bad. They also post Vimeo links, where you can see skin tones, etc. Venice looks awesome.
Not sure about the Alexa and Amira being any different. Same sensor design, and both to me seem leagues beyond anything else I've used. Not just the best DR but the best tonality, texture, and color in the highlights. I remember that the original Alexa had worse performance than the Alexa Mini (pretty subtle, but it was there), and Arri confirmed that they did little tweaks that push the newer models to 15+ stops. But that should favor the Amira, if anything. In my experience the Amira is just as good as the Mini: 15+ stops. I think Cinema5D changed their testing methodology, so their results are inconsistent, and they've always seemed pretty careless to me.
-
What I didn't like about his methodology is that he showed four images labeled 1, 2, 3, 4, of which one was the clear favorite (the one with the warmest white balance). Then he assigned a brand name erroneously to each, and the brand whose owners switched least was Fuji... which was, coincidentally, also the warmest image–the one people preferred to start with. That would make you think Fuji owners have the least bias, as he claims, but they're also the ones who had the most popular image assigned erroneously to their system in the first place... Maybe I'm wrong and he mixed it up when he assigned brand names. Regardless, the big difference in white balance is what stood out to me more than anything. Conspicuous problems like this get people riled up and make them more likely to post something–a minor form of trolling–so I'm not saying he did anything wrong, just that the methodology is intentionally provocative and more focused on making a point (and thus getting attention) than on getting at the truth... The Venice looks incredible.
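Here's a quick simulation of that confound, with made-up numbers: give every brand's owners the identical switching tendency, and the brand whose label happened to land on the crowd-favorite image still scores lowest on switching, i.e. looks "least biased."

```python
import random

random.seed(1)
images = ["A", "B", "C", "D"]                  # A is the warm crowd favorite
pref = [0.4, 0.2, 0.2, 0.2]                    # everyone's first-pick distribution
label = {"Fuji": "A", "Canon": "B", "Sony": "C", "Nikon": "D"}  # invented labels
SWITCH_P = 0.3                                 # identical brand loyalty for all groups

for brand, labelled in label.items():
    switched = 0
    for _ in range(10_000):                    # 10k simulated owners per brand
        pick = random.choices(images, weights=pref)[0]
        if pick != labelled and random.random() < SWITCH_P:
            switched += 1                      # abandons the favorite for "their" image
    print(brand, switched / 10_000)            # Fuji lowest despite identical loyalty
```

Fuji comes out around 0.18 and the rest around 0.24, purely because fewer Fuji owners had anywhere to switch from.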
-
No surprise, I just bought my fourth (fifth?) Canon camera. Ah well.
-
The methodology is terrible. Fuji shows the "least bias" because everyone goes for the warm Fuji-labelled photograph first, so even if an equal percentage of each group switches to their own brand, more Fuji owners landed on the "right" image to start with. I hate to agree with this, but I do. I've always struggled a lot with grading Red footage, particularly before Dragon Color, and always found Alexa footage easy. But I recently watched Maniac on Netflix. The show has problems, but the look is not one of them. The color is fantastic, and it's shot on Red. The Red sensors place the green and red chromaticities too close, so muddy brown is very common, but the foliage and skin tones are fantastic on that show. Now I recognize that the biases I had were largely frustrations with my own weaknesses. That said, by accepting those weaknesses, I can still justify choosing a camera system I find easier to grade than another; I just know the fault is more mine than the hardware's. I have found the Alexa exceptionally easy to grade, and I think that's what people mean when they say they like its color. But when I watch something really good shot on another system, it looks better than the Alexa footage I've shot myself. For me it can be complicated and difficult to match between systems, but after watching some really good work, mostly from Light Iron and Co3, I know the fault is my own. I do think you should go easy on people like me and tolerate brand preference–not everyone is an experienced colorist, and for us, just choosing a system we like and are familiar with makes a huge difference. And it's much cheaper than bringing in a great colorist. The one exception, I think, is color channel clipping. That can be a real bugaboo for almost anyone under certain lighting conditions (imo).
-
Any recommendations to properly learn correction/grading?
HockeyFan12 replied to Gregormannschaft's topic in Cameras
This book is very old and only covers the very very basics, but I believe it's the most well-established text on the subject, or was when I purchased it: https://www.amazon.com/Color-Correction-Handbook-Professional-Techniques-ebook/dp/B004KKXNTQ It's more a theoretical and aesthetic approach than a technical one. I should dig up my old copy. On the technical side, I think the Ripple tutorial is well-regarded.
-
I am at Photokina. What would you like me to ask?
HockeyFan12 replied to Andrew Reid's topic in Cameras
When's the C200 price drop coming?
-
I shot 6x7 for a while... I found the look similar to a wide-open Otus on FF, I'd guess? Good micro contrast, a surprising lack of aberrations, low amounts of grain (for film), shallow depth of field. Look for Hasselblad 500 photos on Flickr for reference. Or The Revenant has that look, but highly processed. I really liked it. The lenses split the difference between super sharp and vintage, with a lot of depth and micro-contrast to their rendering: few aberrations and very sharp (overall, not per given mm), but also good contrast and unfussy bokeh, thanks to simpler designs with fewer air-to-glass surfaces and less exotic glass. I found I needed much more light to get enough depth of field and the right shutter speed (on film I used the reciprocal rule, which doesn't quite hold up on digital, and which disadvantages the longer focal lengths needed for the same FOV on MF). It's not great for photojournalism; better for studio work. Hasselblads were popular for fashion. Okay for landscape and architectural/real estate, but you really want a tech camera for that stuff to get lens movements, and that gets $$$. There's no magic to MF, though, particularly on digital, where the sensors are much, much smaller than they are on film and barely larger than full frame. MF digital backs are often 33mm x 44mm; by contrast, Fuji rangefinders exposed nearly 60mm x 90mm of film. Just think one step beyond FF, but without a lot of lenses available. If you shoot a lot of fashion or high-end portraiture, I think you'll like the look a lot. Another big advantage is the leaf shutter, which syncs at higher shutter speeds (I think–I forget the details) and is also much lighter. So despite the massive film plane, the shutter has very little mass, like a rangefinder.
-
@kye, yeah, I would never auto-white balance, but that's just me. What if your white balance changes mid-shot, or what if you're mixing color temperatures? I used to do that a lot for night exteriors, and auto WB would at the very least be inconsistent across the coverage. Often I would put half CTB and half plus-green on tungsten lights (or half CTO and half plus-green on HMIs) and mix that with an Urban Vapor gel on tungsten to emulate street lights (mercury vapor and low-pressure sodium, respectively)–a mix of wildly divergent color temperatures, neither of which is meant to read as white. That's my concern with self-scaling contrast, too: what if your scene is mostly flat but then you pan against the sun? Will the contrast change on the fly? Will it change fast enough? It just seems like a lot of work for very minimal benefit.
But I feel a little old school with white balance. I usually shoot either 5600K or 3200K (or very, very rarely 4300K), and on the Red MX I used to shoot 4000K with an 80D filter, which I later learned Cronenweth was also doing, so I felt pretty cool about that. Probably I was inadvertently copying him, based on something I overheard or read somewhere. On the very, very low end, I know of people who white balance each shot to a white card because they don't know what they're doing and read some tutorial about white balancing wrong (in my opinion, but just my opinion). On the super high end, there's a vfx supervisor I talked with who, given the choice, would only shoot Red raw; he'd put color filters on cameras to maximize the dynamic range of each channel, then white balance in post to a white point he set carefully and consistently in the scene based on a true white source, be it tungsten- or daylight-balanced. So he'd maximize (and ETTR) the exposure of each channel, not just the image overall. Kind of genius, and weirdly similar to what I perhaps unfairly dismissed beginners for doing (white balancing every shot). Sort of an extreme version of the Cronenweth 80D technique.
But shooting with everything on auto is totally fair. Most directors don't think about any of that stuff–it's just their crew that does. I feel like that's a very Malick-like approach, and I envy people who are able to focus on the content to that extent rather than worrying about technical stuff as I would. I'm planning to buy a C200 later this year and might fool around with the autofocus function for the first time. And with stills I've started using the meter in the camera rather than a spot meter. Otherwise I sort of enjoy the technical stuff. But I do wish there were an "auto" for the sound department; haven't found one yet other than paying a good sound mixer.
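For what it's worth, here's what I understand that post white balance to reduce to, as a minimal numpy sketch (the patch values are invented): per-channel gains in linear light, normalized so a known-neutral patch reads equal in all three channels. The filter trick just means each channel arrives at the sensor already close to full, so the post gains–and the noise they drag up with them–stay small.

```python
import numpy as np

# Linear raw values of a patch known to be neutral in the scene (invented numbers).
neutral = np.array([0.31, 0.52, 0.22])        # R, G, B as captured, green strongest

# Per-channel gains that force the neutral patch to grey, normalized to green.
gains = neutral[1] / neutral
print("WB gains:", np.round(gains, 2))        # roughly [1.68, 1.0, 2.36]

frame = np.random.default_rng(0).random((4, 4, 3)) * neutral  # toy frame, same cast
balanced = frame * gains                      # apply in linear, before any gamma/LUT
print(balanced[0, 0])
```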
-
@kye, fair enough. That's a little over my head but makes sense. Which I think is the bigger issue: it might confuse prosumer-level shooters like me who are used to something more WYSIWYG. The idea makes sense, though jumping up to 10 bit gets you four times more color information, and at best you'd get maybe 30-50% more stretching 8 bit out for low-DR scenes. I did that math wrong once myself, equating a stop with a bit, but that only holds in true log space, and most pseudo-and-so-called-log gammas aren't actually that flat. And, regardless, I've never found the bit depth to be the issue (in-camera noise reduction and macroblocking usually are). Tbh, I've only had banding issues rarely, with A7S footage and 5D Mark III footage, and it's been manageable in post.
Usually I set my ISO at native and expose with an incident meter as I used to when I shot film, and I haven't had any problems with anything else. So I'm a big luddite here, probably using an antiquated approach that works only for me–I think the key is that it encourages me to underexpose compared with ETTR, which I know many more technical members advocate, but I find ETTR problematic with log profiles, as it seems to increase banding and change color/saturation/tonality in the highlights (dramatically less so with the Alexa than with other systems, but I rarely have the budget to shoot with one!).
I did work with someone who helped develop, or at least worked a lot with, ACES, and he actually shot on an A7S AND exposed to the right. He used a Q7 recorder with a two-stop pull and a custom LUT to clean up shadow noise and improve tonality, and he did some of what you suggested–taking the log signal (fed via HDMI to the recorder), compressing it into linear space with a custom LUT he developed after contacting Sony for information on the gamma and chromaticities, and recording the final signal externally to ProRes. The footage looked really good, had nice tonality, and intercut well with a higher-end system. The approach wasn't for me, though. Too technical, too many moving parts, the recorder was too big, and it did nothing to mitigate the chroma clipping issues in the A7 line (which the F5 and F55 finally fixed, but which plagued them when I first used them), so you had to watch your highlights carefully, because they clipped two stops sooner and not always to white. But for a technical shooter on a budget it was great, and the footage looked really good. So you're not the first to try something like this–and it does work. I just found it... complicated.
But I'm a real luddite–probably one of the least technical shooters here. My entire approach is: set to native ISO (or a stop or two faster if needed, but changing it in camera rather than in post), set to the flattest log space, use an incident meter to set the stop as I would if shooting film, use false color as I would a spot meter if I were shooting film, and uhh... that's it. And it works for me to get a pretty adequate/average result, and I think also for other old school shooters who are more concerned with repeatability than perfect image quality. I suspect on bigger shows like Game of Thrones they pull a couple stops in-camera for green screen work, though, and I've heard of Fincher using HDRX selectively depending on scene DR (as you mention, switching formats to suit the scene), so this simple approach has its flaws.
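The bit/stop arithmetic from the first paragraph, as a toy calculation (ballpark math, not any camera's actual curve): a linear encoding spends half of all its code values on the top stop, while a true log encoding spends the same number on every stop, so intuitions about bits and stops depend entirely on the transfer curve.

```python
# Toy code-value budget: how many distinct values each stop gets, 10-bit container.
codes = 2 ** 10                               # 1024 code values
stops = 12                                    # scene dynamic range to cover

# Linear: each stop down from clipping has half the codes of the one above it.
linear = [codes // 2 ** (n + 1) for n in range(stops)]
print(linear)                                 # [512, 256, 128, ...] shadows starve

# True log: every stop gets an equal share of the codes.
print(codes // stops)                         # ~85 codes per stop, top to bottom
```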
-
I sometimes wonder why Canon Log doesn't have any information below 9 IRE and Arri Log C doesn't have any above 93 IRE (or something), but that's not as bad as 40-90 IRE. A similar solution to what you propose is just to change color space and gamma when you don't have high DR in a given scene, rather than using log footage. The issue with a format that changes the black and white point per scene or per shot is that you can't apply a single LUT or grade as a starting point, and you're left with a lot more work correcting everything–far more cost in man-hours than simply renting a better camera.
Fwiw, I know this conflicts with the conventional wisdom, but I've done really, really extensive tests with 8 bit and 10 bit footage, and with a high quality codec the difference is extremely small. With externally captured C100 footage I couldn't get any significant banding under any circumstances, despite Canon Log being 8 bit and having around a 9 IRE black level. With AVCHD I could get a bit of banding. With the Alexa I AB'd 8 bit transcodes against 10 bit internal ProRes, and the only difference, even with an aggressive grade, is more contrast in the noise from quantization rounding when you zoom in to 800%. Not a hint of banding in either–but remember both of these cameras are cinema cameras that are very, very noisy. There WILL be a lot of banding in an 8 bit gradient without dithering that there won't be in a 10 bit gradient without dithering, but that's because of the source. However, with older F5 footage (10 bit but a low-bitrate codec) I was able to find a bit of banding... mostly due to macroblocking.
Macroblocking and in-camera noise reduction (which smooths out the noise the Alexa has so much of) are MUCH bigger culprits than 8 bit space alone. I think a poor codec will sometimes look worse in 10 bit at the same bit rate, since it's trying to compress more data into less data. Just my opinion, though. Try transcoding some 10 bit Alexa footage into an 8 bit uncompressed/lossless codec (Apple Animation or similar), then bring both into Resolve and apply the same LUT. There will be no increase in banding in the 8 bit footage, just an increase in noise from the rounding error. Now try the same thing with an aggressive noise reduction algorithm. I expect both of the noise-reduced clips would have banding, but the 8 bit one would have much more. So banding isn't exclusively about bit depth; it's also (mostly) about dithering. Not trying to disagree with you outright, I just think this is more complicated than it seems, and including a convoluted technique in consumer cameras isn't the answer. Besides, this is already possible (and trivially easy) by using different looks for different scenes or by renting a different camera. Resolve also has some pretty good anti-banding tools already.
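That dithering point is easy to test synthetically. A minimal sketch with toy numbers: quantize a low-contrast ramp to 8 bits with and without camera-like noise, push the contrast hard, and measure the widest run of identical pixels–effectively the band width you'd see on screen.

```python
import numpy as np

rng = np.random.default_rng(7)
ramp = np.linspace(0.4, 0.6, 1920)                 # low-contrast gradient, one scanline
noise = rng.normal(0, 1.5 / 255, ramp.size)        # sensor noise ~1.5 eight-bit codes

def quantize(x, bits):
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0, 1) * levels) / levels

def widest_band(line):
    # Longest run of identical consecutive pixels = widest visible band.
    best = run = 1
    for a, b in zip(line, line[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

clean = (quantize(ramp, 8) - 0.4) * 5              # denoised source, hard contrast push
dithered = (quantize(ramp + noise, 8) - 0.4) * 5   # noisy source, same push

print(widest_band(clean), widest_band(dithered))   # wide bands vs. pixel-level grain
```

The clean ramp quantizes to ~50 levels across 1920 pixels, so bands dozens of pixels wide survive the grade; the noise breaks them into runs a pixel or two long. Same mechanism at 10 bit–the noise is doing the dithering, not the bit depth.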
-
I know–I've worked on a few Netflix shows, as I mentioned before, and even mixed in a limited amount of footage from 1080p sources. However, they do limit the amount of footage from those cameras. I was just guessing that the OP had a show he was pitching to Netflix, or that Netflix had just greenlit, and that he wanted to shoot in a more handheld/verité style and was researching whether he could pitch using a crash cam as an A cam–maybe a sort of Blair Witch style thing, or something closer to Crank. But in that case I'd look at the Venice as an A cam and maybe a bunch of GH5s for B cams–or just talk with Netflix directly instead of randos like us on a message board. They're flexible, but not that flexible. They can and will say no, even to the biggest names. I remember one feature where the director wanted to use an Alexa and was forced to use an Alexa 65 to meet their standards. Which doesn't seem like a compromise, but I'm sure it added to the budget and made some camera moves more difficult. Regardless, it wasn't his first choice. Netflix is flexible, but they still take the 4k-native thing really seriously, and they're putting up the money. If you get a show there you can try it, but I would have a backup plan (Hulu?).
-
Heh, I probably suffer from the opposite problem because of abusive people I worked with. But that makes sense. Imo, screenwriting is the hardest part.
-
Fair enough! I certainly don't have access to that kind of money either. I'm just confused why someone would be concerned with requirements for Netflix originals unless they were already in talks with Netflix and were looking for recommendations from lower-budget filmmakers for crash cams or smaller cameras, etc.
-
Fair enough. He didn't mention his circumstances, so either one of us could be right, but I wouldn't assume that just because you're working with low budgets, he is too. I'm just saying that, other than this one thing I happened to work on, I haven't heard of a series being picked up as a Netflix original that wasn't shot specifically as an original. And the irony there is that this series was shot in 1080p! So I definitely wouldn't recommend shooting a series hoping for Netflix to acquire it as an original, since that's exceptionally rare, and I especially wouldn't recommend letting that dictate the camera you use. Having a good cast and IP will help more. To me, going by Netflix standards seems a little arbitrary anyway. 98% of features and tv are still finished at 2k, and most of that is Alexa. All of it is still eligible for Netflix licensing under their normal (non-original series) terms. If I could afford an Alexa for a production, I would never rule it out just because Netflix does for the content they produce themselves. Netflix also released its recommended post requirements, and the only vfx suite they include is After Effects. They don't include FCPX for editing or Nucoda for finishing. What if I want to work in Nuke and FCPX and finish in Nucoda? (I don't, but what if I did?) The fact that they even publish these requirements publicly seems weird to me, like a marketing ploy of some sort. But maybe I'm just frustrated that I'm not at the level where I can pitch to Netflix and I'm still mostly making personal projects on spec.
-
Absolutely true. I actually worked on a project that Netflix acquired as an original despite its being shot in 1080p... because the content was good and there was a market for it. For the following seasons, though, they're going with 4k acquisition. Netflix will still buy films that aren't 4k if they aren't originals, but generally they want some creative control over originals. The above experience is VERY rare. It was even more unusual in that each episode cost (so far as I know) less than a million dollars to produce, due to the nature of the show, so the risk was relatively low. Even if you can keep the budget ultra low like that, I still wouldn't throw that kind of money at a series and shoot the whole thing on spec; I would just produce a pilot and go from there to mitigate risk. So I have to assume Dan has already had his series picked up, or there's already interest from different networks based on a script or spec, but that it requires smaller cameras for the filming style.