Everything posted by cpc
-
Can't do this if you want constant image quality (noise size, detail and no-crop equivalent DOF). But yes, if you only care about perspective, you can do this.
-
Focal lengths have no "perspective". There is no such thing as "50mm perspective", so you can't maintain this with a 50mm lens on a bigger sensor. Relative sizes of objects depend entirely on the position of the camera. Closer viewpoints exaggerate perspective and more distant viewpoints flatten it. Hence, some people say that wider lenses have stronger perspective, which is incorrect. What they actually mean is that with a wide lens you move forward for a similar object size in the frame (compared to a longer lens), and this movement forward decreases the camera-subject distance and exaggerates perspective distortion. Surely everyone has seen one of these:
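A quick numeric sketch of the point, using a simple pinhole model with made-up numbers: the ratio of the projected sizes of two objects depends only on their distances from the camera, and focal length cancels out of that ratio.

```python
# Pinhole model: image size = f * H / d
# (f: focal length, H: object height, d: camera-subject distance).
# Hypothetical numbers, purely illustrative.

def projected_size(f_mm, height_m, distance_m):
    return f_mm * height_m / distance_m  # size on the sensor, in mm

# A face at 1 m and a building at 20 m, shot on a 24mm lens:
print(projected_size(24, 0.25, 1.0) / projected_size(24, 10.0, 20.0))  # 0.5

# Same camera position, 50mm lens:
print(projected_size(50, 0.25, 1.0) / projected_size(50, 10.0, 20.0))  # still 0.5
```

Change the focal length and both sizes scale together; move the camera and their ratio changes. That ratio is the "perspective".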
-
You can use the gain control in linear spaces. With linear material, gain is equivalent to an exposure adjustment (although you don't get the intuitive numerical interpretability of a raw processing exposure control).
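A minimal sketch of why, assuming plain linear values: exposure scales the light hitting the sensor, and a gain multiply scales the recorded values by the same factor.

```python
import numpy as np

# On linear data, a 2x gain is exactly +1 stop of exposure.
linear = np.array([0.02, 0.18, 0.45])  # hypothetical linear scene values
print(linear * 2.0)                    # [0.04 0.36 0.9] -- one stop up

# The same multiply applied to gamma- or log-encoded values would not
# correspond to an exposure change, which is why this only works in
# linear spaces.
```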
-
Historically, the reason for using larger formats was resolution. When the change to widescreen happened, the need for higher horizontal resolutions came naturally: you need more negative to cover the wide screen without reducing vertical resolution. But with film stocks getting better and better, this became less of a necessity. Optically, larger formats can do fine with technically worse lenses, because you need fewer lp/mm for the same (perceived) microcontrast after magnification. As a result, larger formats generally have better delineation, which promotes a feeling of three-dimensionality in the image. Of course, actual FOV has little to do with format size, and perspective depends entirely on viewpoint (camera-subject distance). Anamorphic does have peculiarities though, since only one FOV (usually vertical) matches the spherical equivalent and the other axis' FOV is wider than nominal.
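To put rough numbers on the lens requirement (a sketch with approximate gate heights; the exact figures matter less than the scaling):

```python
# Required lens resolution for the same perceived detail scales
# inversely with format size: a bigger negative is magnified less
# on the way to the screen. Gate heights below are approximate.

target = 800  # desired line pairs per picture height, hypothetical

for name, height_mm in [("S16", 7.4), ("S35", 18.7), ("65mm", 23.0)]:
    print(f"{name}: ~{target / height_mm:.0f} lp/mm needed from the lens")
# S16: ~108 lp/mm, S35: ~43 lp/mm, 65mm: ~35 lp/mm
```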
-
You should be able to do the same with any intraframe codec in a container, no? In any case, whether your intermediate is a sequence (DPX or EXR or whatever) has little to do with whether your source media is a sequence. Are you talking source metadata or intermediate (post) metadata? The latter shouldn't be related to what your source is.

I see. If you are actually streaming individual frames from a network, that makes sense.

It is not for one man bands only though. I've done it on a couple of indie productions where I shared dng proxies with the editor (they did edit in Resolve). I also know of at least two production houses that work this way. But yes, bigger productions will likely prefer a more traditional workflow. Yet I think film post can gain as much from utilizing raw processing controls for correction/grading as any other production, as it is in some ways more intuitive and more mathematically/physically correct than the common alternative. Edit: I see CaptainHook has expressed a similar and more detailed opinion above on this point.

It isn't necessary to debayer in order to create a downscaled Bayer mosaic; you can simply resample per channel.
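A sketch of what per-channel resampling means, assuming an RGGB mosaic, dimensions divisible by 4, and a naive 2x box filter (a real resampler would be more careful):

```python
import numpy as np

def downscale_bayer_2x(mosaic):
    # split the RGGB mosaic into its four color planes
    planes = [mosaic[0::2, 0::2], mosaic[0::2, 1::2],
              mosaic[1::2, 0::2], mosaic[1::2, 1::2]]
    # naive 2x2 box filter per plane, in wider math to avoid overflow
    small = [(p[0::2, 0::2].astype(np.uint32) + p[0::2, 1::2]
              + p[1::2, 0::2] + p[1::2, 1::2]) // 4 for p in planes]
    # re-interleave the downscaled planes into a half-size mosaic
    out = np.empty((mosaic.shape[0] // 2, mosaic.shape[1] // 2),
                   dtype=mosaic.dtype)
    out[0::2, 0::2], out[0::2, 1::2] = small[0], small[1]
    out[1::2, 0::2], out[1::2, 1::2] = small[2], small[3]
    return out
```

The output is still a Bayer mosaic, so it drops straight back into a raw (DNG) pipeline with no debayer pass anywhere.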
-
You seem to have misunderstood this part. What I am saying is that the choice of the actual curve handicapped the results. It is a curve with lots of holes (unused values) at the low end, which is good for quantization (e.g. quantized DCT), but bad for entropy coded predictors (as in lossless DNG). Also, my point was that if you ditched byte stuffing altogether (which is a trivial mod), without changing anything else, this would speed up both decoding and encoding, as well as give a small bonus in size. For all practical purposes, lossy BM DNG was proprietary, because there was zero publicly available information about it, so BM was in a position to simplify and optimize. Of course I am just theorizing a parallel future. Certainly in hindsight, having an alternative ready would have helped tremendously when the patent thing came up.

Sigma have no option but to go the sequence way, since there is no support for MXF CinemaDNG anywhere. BM were in the unique position of making cameras AND Resolve. Certainly there are some advantages to discrete frames; the biggest one might be that you can easily swap bad frames. I can't think of a case where you need frame-specific metadata (other than time codes and such), but you can have this in the MXF version of the CinemaDNG spec too. Also, with frame indexing you can have your file working fine with bad frames in it. And your third point I am not sure I understand: individual frames mean more bytes, which means more load on the network, there is no way around this. And certainly reading more files puts more load on the file system, as you need to access the file index for every frame.

On a related note, there are benefits to keeping the raw files all the way through DI: raw controls can actually be used creatively in grading, and raw files are significantly smaller than DPX frames. And if you edit in Resolve, you might as well edit raw for anything that doesn't need vfx, as long as your hardware can handle the resolution. After all, working on the raw base all the way is the beauty of the non-destructive raw workflow.
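For readers unfamiliar with byte stuffing: the JPEG bitstream inside DNG escapes every 0xFF byte with a following 0x00 so decoders can find real markers. A toy sketch of the mechanism and its cost:

```python
# JPEG-style byte stuffing: every 0xFF in the entropy-coded data gets a
# stuffed 0x00 after it. A de-facto proprietary stream could skip this,
# saving the extra bytes and the unstuffing scan on decode.

def stuff(data: bytes) -> bytes:
    return data.replace(b"\xff", b"\xff\x00")

def unstuff(data: bytes) -> bytes:
    return data.replace(b"\xff\x00", b"\xff")

coded = b"\x12\xff\x7a\xff\xff\x01"        # toy entropy-coded payload
assert unstuff(stuff(coded)) == coded
print(len(stuff(coded)) - len(coded))      # 3 stuffed bytes added
```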
-
I think there are a few upgrades BM could have done to DNG. First, perhaps introduce MXF containers, as per the CinemaDNG spec. This would have decreased filesystem load, reduced wasted storage space, and eliminated the need to replicate metadata in each frame. With the rare power of producing both the cameras and Resolve, this sounds like a no-brainer. Also, there were some choices which handicapped BM cameras in terms of achievable file sizes. First, the non-linear curve used was not friendly to lossless compression, which basically resulted in ~20-25% bigger lossless files from the start (~1.5:1 vs. ~2:1). Then, BM essentially had the freedom to introduce a lossy codec of their choice; it was a proprietary thing anyway, even though it was wrapped in DNG clothes. Even with the choice BM did make, if they hadn't treated the Bayer image as monochrome during compression, they would likely have been able to push 5:1 with acceptable quality (as it stands, 4:1 could drop to as low as 7 bits of actual precision, never mind the nominal tag of 12 bits). And finally, BM could have ditched byte stuffing in lossy modes (remember, it is essentially a proprietary thing, you could have done anything!), which would have boosted decoding (and encoding) speed significantly. Of course, reproducibility of results across apps is a valid argument, and is something that the likes of Arri did bet on from the beginning. But you need an SDK for this anyway, and it is by no means bound to the actual format. To promote a uniform look, BM could have done an SDK/plugins/whatever for the BM DNG image, the same way they did with BRAW.
-
It is not that the files are reported as such. Rather, this is the internal working precision of ACR. Just set it to 16 bits and you are good.
-
Here are the two C5D raw samples, but made Premiere CC compatible, if anyone wants to play with the image in Lumetri: https://drive.google.com/open?id=1c3DAPF_heybRJ1oL_ohgjRolBcKWZkwt
-
It is not linear. I haven't looked at the exact curve, but it does non-linear companding for the 8-bit raw. The 12-bit image is linear. The DNG spec allows for color tables to be applied on the developed image. The Sigma sample images do include such tables. No idea if they replicate the internal picture profiles though. AFAIK, only Adobe Camera Raw based software (e.g. Photoshop) honors these. Unless Resolve has gained support for these tables (I am on an old Resolve version), it is very likely that the cinema5d review is mistaken on this point.
-
Without detracting from Graeme's work, it should be made clear that none of the algorithmic REDCODE specifics described in the text are non-trivial for "skilled artisans". I don't think any of this will hold in court as a significant innovation. A few notes:

Re: "pre-emphasis curve" used to discard excessive whites and preserve blacks. Everyone here knows it very well, because every log curve does this. Panalog, S-Log, Log-C, you name it, do that. In fact, non-linear curves are (and were) so widely used as a pre-compression step that some camera companies managed to shoot themselves in the foot by applying them indiscriminately even before entropy coding (where a pure log/power curve can be non-optimal). JPEG has been used since the early 90's to compress images. Practically all images compressed with JPEG were gamma encoded. Gamma encoding is a "simple power law curve". Anyone who has ever compressed a linear image knows what happens (not a pretty picture) to linear signal after a DCT or wavelet transform, followed by quantization. And there is nothing special, technically speaking, about raw: it is linear signal in native camera space. But you don't need to look far for encoding alternatives: film has been around since the 19th century, and it does a non-linear transform (more precisely, log with toe and shoulder) on the captured light. In an even more relevant connection, Cineform RAW was developed in 2005 and presented at NAB 2006. It uses a "pre-emphasis" non-linear curve (more precisely, a tunable log curve) to discard excessive whites and preserve blacks. You may also want to consult this blog post from David@Cineform from 2007 about REDCODE and Cineform: http://cineform.blogspot.com/2007/09/10-bit-log-vs-12-bit-linear.html

Re: "green average subtraction": Using nearby pixels for prediction/entropy reduction goes at least as far back as JPEG, which specifies 7 such predictors. In a Bayer mosaic, red and blue pixels will always neighbor a green pixel, hence using the brightness-correlating green channel for prediction of red and blue channels is a tiny step.

Re: using a Bayer sensor as an "unconventional avenue": The Dalsa Origin, presented at NAB 2003 and available for renting since 2006, was producing Bayer raw (uncompressed). The Arri Arriflex D-20, introduced in November 2005, was doing Bayer raw (uncompressed). I can't recall the SI-2K release year, but it was doing compressed Bayer raw (Cineform RAW, externally) in 2006.
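To show how small a step that green prediction is, here is a toy numpy sketch (RGGB layout assumed; real codecs differ in the details and edge handling):

```python
import numpy as np

# A red (or blue) sample is predicted from the average of nearby green
# samples, and only the residual goes to the entropy coder. Because the
# channels correlate in brightness, residuals are small and cheap to code.

def red_residuals(mosaic):
    reds = mosaic[0::2, 0::2].astype(np.int32)    # red sites in RGGB
    greens = mosaic[0::2, 1::2].astype(np.int32)  # green right of each red
    left = np.roll(greens, 1, axis=1)             # green left (wraps at edge)
    prediction = (greens + left) // 2             # "green average"
    return reds - prediction                      # small, entropy-friendly
```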
-
Jinni Tech claims RED Compressed RAW patent filing is invalid
cpc replied to Andrew Reid's topic in Cameras
Red do have multiple patents assigned. Some include claims that are worded very broadly, and some do include quite a lot of specifics. Compare the claims in these two (they use the typical obfuscated language and structure that make patents look impenetrable): https://patents.google.com/patent/US9596385B2/ https://patents.google.com/patent/US8872933B2/ Note, for example, how claim 1 in the "electronic apparatus" patent lists the very specific way a blue or red channel is predicted from nearby green values. Contrast this with the wording in the "video camera" patent's claim 1, which pretty much covers any raw camera with a resolution of 4K or more (compressed or not). Red do have patents on codec specifics, but most of Red's camera/apparatus/device patents mentioning compression explicitly list a bunch of compression approaches as possible means to achieve said compression. They are, informally speaking, patenting the idea of implementing the (compressed) RAW recording camera, rather than any specific compression technique. It is hardly a coincidence that Blackmagic's own "raw" codec, a response to Red's patent violation claims, appears to be designed so that it isn't actually raw.

Now, compression itself is an extensively studied field. It is very, VERY hard to come up with significant innovations in this field. A "raw" codec will use any of a few well known image compression techniques and adapt it for Bayer data. That is all there is to raw compression. Raw codecs are universally rehashing old ideas, and (slightly, if at all) differing in the details of data formatting and layout, which has little to do with actual compression technique. Yes, you can pre-process raw data in a bunch of ways, and these are usually (and I use this as a euphemism for "always") trivial for anyone "skilled in the art" (with being "non-trivial" assumed as a prerequisite for patentability). For the curious, probably the biggest advancement in compression in the last two decades is ANS which, incidentally, was explicitly released into the public domain by its creator, Jarek Duda, with the intention to prevent any patents around it.
-
It depends; variable bit rate means size will vary with image complexity. It is usually (but not always) between 3:1 and 4:1, might be around 600GB for a couple of hours of 4K. I wouldn't bother shrinking a 4:1 source any further, but if you really have the inclination, you can try 5:1 or 7:1 on top of it, which will shave off ~20% and ~40% respectively. Ofc, always test and judge results yourself.
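For a sanity check of those numbers (assuming 4K DCI, 12-bit Bayer, 24 fps; actual cameras and frame rates vary):

```python
# Back-of-the-envelope raw data rates, hypothetical but typical settings.
width, height, bits, fps = 4096, 2160, 12, 24
gb_per_hour = width * height * bits / 8 * fps * 3600 / 1e9
print(gb_per_hour)            # ~1147 GB/hour uncompressed
print(2 * gb_per_hour / 4)    # 2 hours at 4:1 -> ~573 GB
print(2 * gb_per_hour / 3)    # 2 hours at 3:1 -> ~764 GB
```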
-
Yes, a couple of hours of 4K at 5:1 should be somewhere under 500GB. I usually recommend 5:1 for oversampled delivery only (i.e. when shooting 4K or higher, but going for a 2K DCP). I know some users routinely use 5:1 for 4K material and are happy with it, but I am a bit conservative about this. I'd imagine most indie work ends up with a 2K DCP anyway (well, at least anything I've shot that ended in a theater has always been 2K).
-
Only if you will be doing more lossy compression on the same video down the line, and the methods used in the different compression passes differ in some significant way. If you are going to use the same method (only with different amounts of quantization), it doesn't matter much. So if you'd be doing compression after acquisition with, say, slimraw, there are enough differences between lossy slimraw and lossy in-camera to warrant doing lossless in-camera.

Well, it is normal. Not only does BRAW need to happen in-camera, which imposes some limits (power, memory, real-time, etc), but it is likely hindered by its attempt to avoid Bayer-level compression (possibly due to the patent thing). On the other hand, denoising (which often goes together with debayering) does have advantages when done before very high compression.

More precisely, lower resolution images can withstand less compression abuse. It should be fairly intuitive: if you have a fixed delivery resolution, let's say 2K, and you arrive at this delivery resolution from a 2K image, you can't afford messing with the original image much. But if you deliver to 2K from a 4K source, you can surely afford doing more compression to the 4K image.

BM raw is already tonally remapped through a log curve. The 10-bit log mode in slimraw is only intended for linear raw.

No. Size will always go up when transcoding from lossy back to the lossless scheme: this works by decompressing from lossy to uncompressed, then doing lossless compression on the decompressed image; you can't do the lossless pass straight on top of the original lossy raw, it doesn't work like this. So going this route only makes sense when people need to maximize recording times (and shoot lossy), but still want to use Premiere in post.

If you insist on using DNG, you'll get best quality per size from shooting lossless in-camera, then going through any of the lossy modes in slimraw: which one depends entirely on what target data size you are after. I honestly wouldn't bother doing it for a camera that has in-camera lossy DNG, unless I really, really wanted to shrink down to 5:1 or more.
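To illustrate the idea behind a 10-bit log mode for linear raw (the curve below is made up for illustration; it is not slimraw's actual curve):

```python
import numpy as np

# Compress 12-bit linear values through a log-like curve into 10 bits,
# spending code values where vision is most sensitive (the shadows).

def to_10bit_log(linear12):
    x = linear12.astype(np.float64) / 4095.0
    return np.round(1023 * np.log2(1 + 255 * x) / 8.0).astype(np.uint16)

def from_10bit_log(log10):
    y = log10.astype(np.float64) / 1023.0
    return np.round(4095 * (2.0 ** (8 * y) - 1) / 255).astype(np.uint16)

vals = np.array([0, 16, 256, 2048, 4095], dtype=np.uint16)
print(to_10bit_log(vals))                  # [0 128 522 896 1023]
print(from_10bit_log(to_10bit_log(vals)))  # round trip is close, not exact
```

Note how the bottom ~0.4% of the linear range gets ~128 of the 1024 codes: that is why log at 10 bits can hold up where linear at 10 bits falls apart.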
-
This is very resolution dependent, but assuming 4K, the corresponding settings would be: lossless, 5:1, and 7:1 / VBR LT.

Only if you'd use slimraw in a lossy mode afterwards. It is generally better to avoid multiple generations of lossy compression, and there are a few significant differences in how in-camera DNG lossy compression works in comparison to slimraw's lossy compression.

Yes. Well, 5:1 is matched by 5:1. The meaning of these ratios is that you get down to around 1/5 of the original data size, which is the same no matter what format you are going to use. "Safety" is something only the user can judge. You are always losing something with lossy compression. It is "safe" in the sense that it is reliable, and it will work. VBR HQ will normally produce files between 4:1 and 3:1, but since it's constant quality/variable bit rate, it depends somewhat on image complexity.

Now, it is important to note that it is probably not a good idea to swap a BRAW workflow for a DNG workflow, unless you need truly lossless files (for VFX work, for example). Even though a low compression lossy DNG file will very likely look better than an equally sized BRAW frame (because by (partially) debayering in BRAW you increase the data size, and then shrink it back down through compression, while there is no such initial step in DNG; remember: debayering triples your data size!), this quality loss is progressively less important with resolution going up. Competing with BRAW is certainly not a goal for slimraw.

There are basically 4 types of slimraw users:

1) People shooting uncompressed DNG raw: Bolex D16, Sony FS series, Canon Magic Lantern raw, Panasonic Varicam LT, Ikonoskop, etc. The go-to compression mode for these users is the lossless 10-bit log mode for 2K video, or one of the lossy modes for higher resolution video.

2) People shooting losslessly compressed DNG on an early BM camera (Pocket, original BMCC, Production 4K) or on a DJI camera: these users normally offload with one of the lossy modes to reduce their footprint (often 3:1 or VBR HQ for the Pocket and BMCC, and 4:1/5:1 for the 4K). Lossless 10-bit log is also popular with DJI cameras.

3) People doing DNG proxies for use in post with Resolve. They are usually using 7:1 compression and 2x downscale for a blazing fast, entirely DNG based workflow in Resolve (relinking in Resolve is a one-click affair and you can go back and forth between originals and proxies all the time during post).

4) People shooting BM cameras and recording 3:1 or 4:1 CDNG for longer recording times, who do their post in Premiere. They use slimraw to transcode back to lossless CinemaDNG in post and import the footage in Premiere.

Of course, there are other uses (like timelapses, or doing lossless-to-lossy on a more recent BM camera: if you are a quality freak (a few users are), slimraw will beat in-camera for the same output sizes, which is expected, as it doesn't have the limitations of doing processing in-camera), but these are less common.

So yeah, if you don't need VFX, it is likely best to just stick to BRAW and not complicate your life.
-
This is most likely uncompressed source to losslessly compressed output. It also looks like a rather old version of slimraw. But if you want to know more about the various types of compression in the dng format, here is an overview: http://www.slimraw.com/article-cdngmodes.html (@Emanuel I am around, just not following the discussion closely )
-
Up till BRAW the only consistently present characteristic of raw video from a camera manufacturer claiming "raw" was a Bayer image. There's been lossy compressed raw (Cineform, Red, BM), there's been tonally remapped raw (Arri, BM, Red, Panasonic, Canon), there's been white balanced raw (Canon), there's been baked ISO raw (Canon, Sony), etc. But all "raw" has always been Bayer. In this sense BRAW is a stretch of the term "raw" as we know it: it is not a Bayer image. I wouldn't call it "raw", but obviously there are market reasons for naming it this way. This is similar to how "visually lossless" is being abused as marketing speak for "lossy". "Visually lossless" can only be applied to delivery images viewed in well defined viewing environments (that's how it is used in any scientific paper that takes itself seriously). By definition, it is not applicable to acquisition formats (raw or anything else) meant to be hammered in post: you can't claim "visually lossless", because you have no knowledge about what will be done to the image, nor where it will end up.
-
I am pretty sure there is nothing patent breaching in BM's lossy take on DNG: it's all common techniques which have been out there for ages. The problem is likely with another company's patents, which are so broad that they cover a lot of ground in in-camera raw compression (no matter what method or format you use), and if anything, BM's DNG specifics actually appear to be circumventing some details in these patents. I am not a lawyer, and I haven't read all the patents of that other company, but I think BM doesn't actually breach the ones I've read (due to a certain important detail in BM's implementation). Whether BM are aware of this, or this is simply a battle they don't want to pick, is a different story. In any case, it is definitely not a coincidence that BRAW is not raw in the first place, despite its name: it is a debayered image with metadata, think ProRes + metadata.
-
No need to feel sorry for us PC users. Resolve has been cutting through 4K raw like butter for years. I've been shooting raw exclusively since 2013. Stopped using proxies in 2015. I've only ever used regular consumer hardware for post. Frankly, raw is old news for PC users.
-
It should be fine in terms of coverage, but Blackmagic cameras have thinner filters, so likely more aberrations.
-
Premiere has issues with missing metadata. It doesn't care if the values are stretched. What happens is that it can infer the missing metadata correctly when the values are shifted, i.e. zero padded to 16 bits. Also, there is 14-bit CinemaDNG, but Premiere has trouble with 14-bit uncompressed (not with compressed though!). Now that all ML DNG is compressed, it should work fine with Premiere at 14 bits.
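A tiny sketch of the difference between the two scalings (hypothetical 14-bit values):

```python
import numpy as np

raw14 = np.array([0, 5000, 16383], dtype=np.uint16)

# Shifted / zero padded: multiply by 4, low two bits stay zero.
shifted = raw14 << 2
print(shifted)     # [    0 20000 65532]

# Stretched: remapped over the full 16-bit range.
stretched = (raw14.astype(np.uint32) * 65535 // 16383).astype(np.uint16)
print(stretched)   # [    0 20001 65535]
```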
-
FYI, there is no reason to ever use any of the "maximized" variants in raw2cdng. 10-bit raw can be all you need, if it isn't linear. ML raw is linear though, and 10-bit is noticeably worse than 12- and 14-bit even though it is better than, for example, 10-bit raw from DJI cameras. 12-bit is actually quite good.
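A quick illustration of why bit depth matters so much more for linear raw: linear encoding spends half of all code values on the single brightest stop, so the shadows starve first.

```python
# Code values available in each of the top 5 stops below clipping,
# for linear encodings of various bit depths.
for bits in (10, 12, 14):
    codes = 2 ** bits
    per_stop = [codes // 2 ** (s + 1) for s in range(5)]
    print(bits, "bit:", per_stop)
# 10 bit: [512, 256, 128, 64, 32] -- only ~32 codes 5 stops down
# 12 bit: [2048, 1024, 512, 256, 128]
# 14 bit: [8192, 4096, 2048, 1024, 512]
```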
-
There is support for MXF containers in the CinemaDNG specification. AFAIK, no support in cameras and apps though. CinemaDNG 3:1 is similar size to ProRes Raw HQ and CinemaDNG 4:1 is similar size to ProRes Raw. DNG performance in Resolve is excellent.