cpc

Members · 211 posts
Everything posted by cpc

  1. cpc

    Sony A7S III

     For determining the clip point it doesn't matter if the footage is overexposed; overexposure doesn't move the clip point, and if anything it makes the point easier to find. All you need is to locate a hard-clipped area (like the sun).

     re: exposing
     While a digital spotmeter would be the perfect tool for exposing log, the A7s II does have "gamma assist", whereby you record s-log but preview an image properly tone mapped for display. The A7s III likely has this too.

     You don't really need perfect white balance in-camera when shooting a fully reversible curve like s-log3. It can be white balanced in post in a mathematically correct way, similarly to how you balance raw in post. You only need the in-camera WB to be in the ballpark, to maximize utilization of the available tonal precision.
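To make "white balanced in post in a mathematically correct way" concrete, here is a minimal sketch using Sony's published s-log3 transfer function: decode to linear, apply per-channel gains, re-encode. The gain values are invented for the example; in practice you'd derive them from a grey reference.

```python
import numpy as np

# Sony's published s-log3 transfer function (signal and reflectance in 0..1).
def slog3_to_linear(y):
    y = np.asarray(y, dtype=np.float64)
    return np.where(
        y >= 171.2102946929 / 1023.0,
        (10.0 ** ((y * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01,
        (y * 1023.0 - 95.0) * 0.01125000 / (171.2102946929 - 95.0),
    )

def linear_to_slog3(x):
    x = np.asarray(x, dtype=np.float64)
    return np.where(
        x >= 0.01125000,
        (420.0 + np.log10((x + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0,
        (x * (171.2102946929 - 95.0) / 0.01125000 + 95.0) / 1023.0,
    )

# White balance in post: gains act on linear light, exactly like with raw.
def white_balance_slog3(rgb, gains=(1.18, 1.0, 0.92)):  # gains are made up
    lin = slog3_to_linear(rgb)          # rgb: float array shaped (..., 3)
    return linear_to_slog3(lin * np.asarray(gains))
```

Because both directions are exact (up to quantization), nothing is lost by balancing after the fact, which is what makes the curve "fully reversible".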
  2. cpc

    Sony A7S III

     The sun is so bright that you'd need significant underexposure to bring it down below clip levels (on any camera). And these images don't look underexposed to me. A clipping value of 0.87106 is still very respectable: on the s-log3 curve, this is slightly more than 6 stops above middle gray. With "metadata ISO" cameras like the Alexa, the clip point in Log-C moves up with ISOs higher than base, and down with ISOs lower than base. But on Sony A7s cameras you can't rate lower than base in s-log (well, on the A7s you can't, at least), so this is likely shot at the base s-log3 ISO of 640. In any case, the s-log3 curve has a nominal range of around 9 stops below mid gray (the usable range is obviously significantly lower), so this ties in with the boasted 15 stops of DR in video. You can think of the camera as shooting 10 - log2(1024 / (0.87 * 1024 - 95)) bit footage in s-log3. That is, as a 9.64 bit camera. 🙂
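Spelling out the arithmetic (same published s-log3 formula as in the earlier post):

```python
import math

# Stops above middle gray for an s-log3 code value of 0.87106:
lin = (10.0 ** ((0.87106 * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01
print(math.log2(lin / 0.18))               # ~6.06 stops above mid gray

# Effective bit depth: codes below the s-log3 black offset (95) and above
# the clip point are never used by the camera.
usable = 0.87 * 1024.0 - 95.0
print(10.0 - math.log2(1024.0 / usable))   # ~9.64 bits
```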
  3. cpc

    Sony A7S III

     With the middle point mapped as per the specification, the camera simply lacks the highlight latitude to fill all the available s-log3 range. Basically, it clips lower than what s-log3 can handle. You should still be importing as data levels: this is not a bug, it is expected. Importing as video levels simply stretches the signal: you are importing it wrong, increasing the gamma of the straight portion of the curve (it is no longer the s-log3 curve) and throwing off any subsequent processing that relies on the curve being correct.
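A minimal sketch of what a video-levels import does to full-range codes (standard 10-bit video levels assumed), which is why the result no longer matches the s-log3 curve:

```python
# What an (incorrect) video-levels import does to full-range 10-bit codes:
def video_levels_stretch(code):
    return (code - 64.0) * 1023.0 / (940.0 - 64.0)   # maps 64..940 to 0..1023

# s-log3 puts middle gray at code ~420; every code moves and slopes are
# multiplied by 1023/876 ~ 1.17, so the curve is no longer s-log3:
print(video_levels_stretch(420.0))                   # ~415.7, off the curve
```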
  4. @Lensmonkey: Raw is really the same as shooting film. Something that you should take into consideration is that middle gray practically never falls in the middle of the exposure range on negative film. You have tons of overexposure latitude, and very little underexposure latitude, so overexposing a negative for a denser image is very common. With raw on lower end cameras it is quite the opposite: you don't really have much (if any) latitude for overexposure, because of the hard clip at sensor saturation levels, but you can often rate faster (higher ISO) and underexpose a bit. This is the case, provided that ISO is merely a metadata label, which is true for most cinema cameras, and looking at the chart it is likely true for the Sigma up to around 1600, where some analog gain change kicks in.
  5. Your "uncompressed reference" has already lost information that the 4:4:4 codecs retain, hence the difference. You should use uncompressed RGB for reference, not YUV, and certainly not 4:2:2 subsampled. Remember, 4:2:2 subsampling is a form of compression.
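A quick way to convince yourself that 4:2:2 subsampling loses information; naive averaging and duplication stand in for a real codec's filters:

```python
import numpy as np

# Round-trip one chroma plane through 4:2:2 subsampling: halve horizontally,
# reconstruct by duplication. The round-trip does not return the original.
chroma = np.random.default_rng(0).random((4, 8))
sub = (chroma[:, 0::2] + chroma[:, 1::2]) / 2.0   # half the horizontal samples
up = np.repeat(sub, 2, axis=1)                    # naive reconstruction
print(np.abs(up - chroma).max())                  # > 0: information is gone
```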
  6. Can't argue with this, I am using manual lenses almost exclusively myself. On the other hand, ML does provide by far the best exposure and (manual) focusing tools available in-camera south of 10 grand, maybe more, so this offsets the lack of IBIS somewhat. I am amazed these tools aren't matched by newer cameras 8 years later.
  7. A 2012 5D Mark III shoots beautiful 1080p full-frame 14-bit lossless compressed raw with more sharpness than you'll ever need for YT, at a bit rate comparable to ProRes XQ. If IS lenses will do instead of IBIS, I don't think you'll find a better deal.
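A rough sanity check of the bit rate claim; the ~55% lossless compression ratio is a typical figure, not a measurement:

```python
# Rough numbers for 1080p24: 14-bit lossless raw vs ProRes 4444 XQ.
raw_mbps = 1920 * 1080 * 14 * 24 / 1e6    # ~697 Mbps uncompressed
lossless_mbps = raw_mbps * 0.55           # ~383 Mbps at an assumed ~55% ratio
prores_xq_mbps = 500.0 * 24 / 29.97       # ~400 Mbps (XQ is ~500 Mbps at 1080p30)
print(raw_mbps, lossless_mbps, prores_xq_mbps)
```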
  8. The problem is missing timecode in the audio files recorded by the camera. Resolve needs it to auto sync audio and image and present them as a single entity. Paul has posted a workaround here: As a general rule, if the uncompressed image and audio don't auto sync in Resolve, the compressed image won't auto sync either.
  9. I will be surprised if Resolve rescales in anything other than the image's native gamma, that is, in whatever gamma the values are in at the point of the rescale operation. If anything, some apps convert from sRGB or a power gamma to linear for scaling, and then back.

You can do various transforms to shrink the ratio between extremes, and this will generally reduce ringing artifacts. I know people deliberately gamma/log transform linear rendered images for rescaling. But it is mathematically and physically incorrect. There are examples and lengthy write-ups on the web about what might go wrong if you scale in a non-linear gamma, but perhaps most intuitively you can think about it in an "energy conserving" manner: if you don't scale in linear, you are altering the (locally) average brightness of the scene. You may not see this easily in real life images, because it will often be masked by detail, but do a thought experiment about, say, a grayscale synthetic 2x1 image scaled down to a 1x1 image and see what happens (see the sketch below).

I have a strong dislike for ringing artifacts myself, but I believe the correct approach to reduce them would be to pre-blur to band limit the signal and/or use a different filter: for example, Lanczos with fewer lobes, or Lanczos with pre-weighted samples; or go to splines/cubics; and sometimes bilinear is fine for downscales between 1x and 2x, since it has only positive weights. On the other hand, as we all know very well, theory and practice can diverge, so whatever produces good looking results is fine.

Rescaling Bayer data is certainly more artifact prone, because of the missing samples and the unknown of the subsequent debayer algorithm. This is also the main reason slimRAW only downscales precisely 2x for DNG proxies. It is actually possible to do Bayer-aware interpolation and scale 3 layers instead of 4; this way the green channel will benefit from double the information compared to the others. You can think of this as interpolating "in place", rather than scaling with a subsequent Bayer rearrangement. Similar to how you can scale a full color image in dozens of ways, you can do the same with a Bayer mosaic, and I don't think there is a "proper" way to do this. It is all a matter of managing trade-offs, with the added complexity that you have no control over exactly how the image will then be debayered in post. It is in this sense that rescaling Bayer is worse -- you are creating an intermediate image which will need to endure some serious additional reconstruction.

Ideally, you should resize after debayering, because an advanced debayer method will try to use all channels simultaneously (also, see below). This is possible, and you can definitely extract more information and get better results by using neighboring pixels of a different color, because channels correlate somewhat. Exploiting this correlation is at the heart of many debayer algorithms, and, in some sense, memorizing many patterns of correlating samples is how recent NN based debayering models work. But if you go this way, you may just as well compress and record the debayered image with enough additional metadata to allow WB tweaks and exposure compensation in post, or simply go the partially debayered route similar to BRAW or Canon Raw Light.

In any case, we should also keep in mind that the higher the resolution, the less noticeable the artifacts. And 4K is quite a lot of pixels. In real life images I don't think it is very likely that there will be noticeable problems, other than the occasional no-OLPF aliasing issues.
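The 2x1 thought experiment, spelled out (a plain 2.2 power gamma assumed for illustration):

```python
# A 2x1 image [black, white] downscaled to 1x1, with a 2.2 power gamma.
avg_linear = (0.0 + 1.0) / 2.0                 # average in linear light
correct = avg_linear ** (1.0 / 2.2)            # ~0.73 when encoded for display

wrong = (0.0 ** (1 / 2.2) + 1.0 ** (1 / 2.2)) / 2.0   # averaged in gamma: 0.5
print(correct, wrong)   # 0.5 encoded is linear 0.5**2.2 ~ 0.22: the scene
                        # got darker, i.e. local average brightness changed
```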
  10. Binning is also scaling. Hardware binning will normally just group per-channel pixels together without further spatial considerations, but a weighted binning technique is basically bilinear interpolation (when halving resolution), as in the sketch below. Mathematically, scaling should be done in linear, assuming samples are in an approximately linear space, which may or may not be the case. Digital sensors, in general, have good linearity of light intensity levels (certainly way more consistent than film), but the native sensor gamut is not a clean linear tri-color space. If you recall the rules of proper compositing, scaling itself is very similar -- you do it in linear to preserve the way light behaves. You sometimes may get better results with non-linear data, but this is likely related to idiosyncrasies of the specific case and is not the norm.

re: Sigma's downscale
I assume, yes, they simply downsample per channel and arrange into a Bayer mosaic. Bayer reconstruction itself is a process of interpolation; you need to conjure samples out of thin air. No matter how advanced the method, and there are some really involved methods, it is really just that, divination of sample values. So anything that loses information beforehand, including a per-channel downsample, will hinder reconstruction. Depending on the way the downscale is done, you can obstruct reconstruction of some shapes more than others, so you might need to prioritize this or that. A simple example of trade-offs: binning may have better SNR than some interpolation methods but will result in worse diagonal detail.
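For illustration, per-channel 2x2 binning of a Bayer mosaic (a box average per color plane, which is the bilinear case mentioned above); an RGGB layout is assumed and dimensions must be divisible by 4:

```python
import numpy as np

# Halve an RGGB mosaic by averaging each color plane's 2x2 neighborhoods.
# Grouping same-color pixels like this is a per-channel box filter, i.e.
# bilinear interpolation for the 50% downscale case.
def bin_bayer_2x(mosaic):                  # mosaic: (H, W), H and W divisible by 4
    out = np.empty((mosaic.shape[0] // 2, mosaic.shape[1] // 2))
    for dy in (0, 1):
        for dx in (0, 1):
            p = mosaic[dy::2, dx::2]       # one color plane of the mosaic
            out[dy::2, dx::2] = (p[0::2, 0::2] + p[0::2, 1::2] +
                                 p[1::2, 0::2] + p[1::2, 1::2]) / 4.0
    return out                             # a valid half-resolution RGGB mosaic
```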
  11. Only because I have code lying around that does this in multiple ways, and it shows various ways of producing artifacts without doing weird things. It is not necessary to do crazy weird things to break the image. Even the fanciest way of Bayer downscale will produce an image that's noticeably worse than debayering in full res and then downscaling to the target resolution; there's no way around it, even in the conceptually easiest case of a 50% downscale.
  12. Thanks. Here is the same file scaled down to 3000x2000 Bayer in four different ways (lineskipping, two types of binning and a bit more fancy interpolation). Not the same as 6K-to-4K Bayer, but it might be interesting anyway. _SDI2324_bin.DNG _SDI2324_interp.DNG _SDI2324_skip.DNG _SDI2324_wbin.DNG
  13. Thanks, these gears look really nice. I don't think there is a single Leica R that can hold a candle to the Contaxes in terms of flare resistance. The luxes also flare a lot (here is the 50), but I haven't found this to be a problem in controlled shoots. Sorry, I meant the DNG file.
  14. I've had both Contax and Leica R, and Contax is technically better, usually sharper and with significantly better flare resistance. The Contax 50/1.7 is likely the sharpest "old" SLR 50mm I've seen, and I still use the 28/2.8 for stills when travelling, at f4 or smaller it pops in that popular Zeiss way. Contax lenses are also much lighter. That said, I find the Leica R's more pleasing with digital cameras, in particular the 50mm and 80mm Summiluxes are gorgeous: they provide excellent microcontrast at low lpmm, but not too strong MTF at high lpmm, which actually seems to work quite well with video resolutions. They draw in a way that's both smooth and with well defined detail (perhaps reminiscent of Cookes), and focus fall-off is very nice. The focus rings are also a bit better for pulling I think (compared to Contax). Are you using M lenses with focus gears? None of the Voigt lenses I have (or had) can be geared, they either have tabs or thin curvy rings. I find that pulling sharpness down to 0 in Resolve's raw settings helps a bit with tricky shots from cameras with no OLPFs. In the case of the fp, these weird pixels might be a result from interactions between debayer algorithm and the way in-camera Bayer scaling is done.
  15. No idea, but I won't be surprised if it does. I don't recall any arcane Windows trickery that would hinder Wine.
  16. As promised, the Sigma fp centered release of slimRAW is now out, so make sure to update. slimRAW now works around Resolve's lack of affection for 8-bit compressed CinemaDNG, and slimRAW-compressed Sigma fp CinemaDNG will work in Premiere even though the uncompressed originals don't. There is also another peculiar use: even though Sigma fp raw stills are compressed, you can still (losslessly) shrink them significantly through slimRAW. It discards the huge embedded previews and re-compresses the raw data, shaving off around 30% of the original size. (Of course, don't do this if you want the embedded previews.)
  17. It does honor settings, and it is most useful when pointed at various parts of the scene to get a reading off different zones without changing exposure, pretty much the same way you'd use a traditional spot meter. The main difference is that the digital meter doesn't have (or need) a notion of mid grey: you get the average (raw) value of the spot region directly, while a traditional spot meter is always mid grey referenced. You can certainly use a light meter very successfully while shooting raw (I always have one on me), but the digital spotmeter gives you a spot reading directly off the sensor, which is very convenient because you see what is being recorded. Since you'd normally aim to overexpose for dense skin when shooting raw, seeing the actual values is even more useful. Originally, the ML spotmeter was only showing tone mapped values, but these could also be used for raw exposure once you knew the (approximate) mapping. Of course, showing either the linear raw values or EV below the clip point is optimal for raw.
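For example, converting a raw spot reading into EV below the clip point is a one-liner; the black and white levels here are made-up, camera-specific numbers:

```python
import math

# Turn a spot region's average raw value into stops below the clip point.
# black_level and white_level are camera specific; these numbers are made up.
def ev_below_clip(raw_value, black_level=512, white_level=15000):
    return math.log2((white_level - black_level) / (raw_value - black_level))

print(ev_below_clip(2000))   # ~3.3 stops of highlight headroom left
```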
  18. Well, it should be quite obvious that this camera is at its best (video) in 12-bit. The 8-bit image is probably derived from the 12-bit image anyway, so it can't be better than that. I think any raw camera should offer a digital spotmeter similar to Magic Lantern's. It is really the simplest exposure tool to implement and possibly the only thing one needs for consistent exposure. I don't need zebras, I don't need raw histograms, I don't need false color. It baffles me that ML had it 7+ years ago and it isn't ubiquitous yet. I mean, just steal the damn thing.
  19. Both Resolve and Premiere have problems with 8-bit DNG. I believe Resolve 14 broke support for both compressed and uncompressed 8-bit. Then at some later point uncompressed 8-bit was working again, but compressed 8-bit was still sketchy. This wasn't much of an issue since no major camera was recording 8-bit anyway, but now that the Sigma fp is out, it is worked around in the upcoming release of slimRAW.
  20. If you recall the linearisation mapping that you posted before, there is a steep upward slope at the end. Highlights reconstruction happens after linearisation, so it would have the clipped red channel curving up more strongly than the non-clipped green and blue; it needs to preserve the trend. This hypothesis should be easy to test with a green or blue biased light: if the non-linear curve causes this, you will get respectively green or blue biased reconstructed highlights. I don't think the tint in the post-raised shadows is related to incorrect black levels. It is more likely a result of limited tonality, although it might be exaggerated a little by value truncation (instead of rounding); see the sketch below. This can also be checked by underexposing the 12 bit image an additional 2 stops compared to the underexposed 10 bit image, and then comparing shadow tints after exposure correction in post.
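The truncation-versus-rounding effect is easy to demonstrate with a toy quantizer (this is not Sigma's actual pipeline, just the principle):

```python
import numpy as np

# Quantize a dim, neutral signal by truncation vs rounding, as a raw pipeline
# might when crunching 12 bits down to 10.
signal = np.random.default_rng(1).uniform(0.0, 4.0, 100_000)  # a few code levels

print((np.floor(signal) - signal).mean())   # ~-0.5: a systematic offset
print((np.round(signal) - signal).mean())   # ~0: unbiased
# A fixed code offset like this gets scaled by per-channel WB gains when
# shadows are raised, which can show up as a tint.
```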
  21. I've done a few things that ended up both in theaters and online. 1.85:1 is a good ratio to go for if you are targeting both online and festivals, and you can always crop a 1920x1080 video from a 1998x1080 flat DCP if you happen to need to send a video file somewhere. Going much wider may compromise the online version; contrary to popular belief, a cinemascope ratio on a tablet or computer display is not particularly cinematic, what with those huge black strips. Is there a reason you'd want to avoid making a DCP for festivals, or am I misunderstanding? Don't bother with a 4K release unless you are really going to benefit from the resolution; many festivals don't like 4K anyway. Master and grade in a common color gamut (rec709/sRGB). DCP creation software will fix the gamma for the DCP if you grade to sRGB gamma for online. Also, most (all?) media servers in current cinemas handle 23.976 (and other frame rates like 25, 29.97, 30) fine, but if you can shoot 24 fps you might just as well do so.
  22. The linearisation table is used by the raw processing software to invert the non-linear pixel values back to linear space. This is why you can have any non-linear curve applied to the raw values (with the purpose of sticking higher dynamic range into limited coding space), and your raw processor still won't get confused and will show the image properly. The actual raw processing happens after this linearisation.
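Mechanically it is just a per-pixel lookup, similar to how the DNG LinearizationTable tag is applied; the square-root curve here is an arbitrary example:

```python
import numpy as np

# A camera stores sensor values through a non-linear curve into 8 bits; the
# raw processor inverts that with the embedded table before any demosaicing.
table = ((np.arange(256) / 255.0) ** 2 * 4095.0).astype(np.uint16)

raw_codes = np.array([[10, 200], [128, 255]], dtype=np.uint8)
linear = table[raw_codes]       # per-pixel lookup back to linear 12-bit values
print(linear)
```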
  23. 12-bit is linear. 10-bit is linear. 8-bit is non-linear. No idea why Sigma didn't do 10-bit non-linear, seeing as they already do it for 8-bit. Here is how 10-bit non-linear can look (made from your 12-bit linear sample with slimRAW). In particular, note how the darks are indistinguishable from the 12-bit original. 10-bit non linear (made from the 12-bit).DNG
  24. You are comparing 6K Bayer-to-4K Bayer downscale + 4K debayer to 6K debayer + 6K RGB-to-4K RGB downscale. The first will never look as good as the second. The 10-bit file is linear and the 8-bit file is non-linear. That's why the 10-bit looks suspicious to you: it has lost a lot of precision in the darks. Yeah, well, the main difference with offsetting in log is that you are moving your "zero" around (a "log" curve is never a direct log conversion in the blacks), so you'd need to readjust the black point, whereas with multiplication (gain) in linear there is no such problem. Still, offsetting is handy with log footage or film scans.
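For reference, the offset/gain equivalence, assuming a pure log curve (which, as noted, real log curves are not in the blacks):

```python
import math

# For a pure log encoding y = log2(x), an offset d in log space is exactly
# a gain of 2**d in linear space:
x, d = 0.18, 1.0                      # mid gray, +1 stop
y = math.log2(x) + d                  # offset applied to the log signal
print(2.0 ** y, x * 2.0 ** d)         # identical: 0.36 == 0.36
# Real log curves have a near-linear toe, so an offset there also moves the
# black point, hence the readjustment mentioned above.
```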