Everything posted by Llaasseerr
-
Yeah, it seems the Cinematics lens may be good because it "just works", which on set, with limited time, is valuable. Correct me if I'm wrong, but even if you can use shims on an Ursa to get the regular Sigma lens just so, that's something you have to think about redoing if you want to change lenses. Right, purchasing anything is all theoretical as of right now.
-
Well, that's great to know. Choice is good. Before, you said that neither of them was parfocal, and now you're saying they are both "near parfocal". Which is it? My information was based on Sigma's own specs, plus Cinematics saying "The lens has been changed into parfocal after modification" (quoted from their website). I don't really care either way; I was just throwing out some talking points in connection with maybe buying the Ursa Mini 4K, and also seeing if anyone had any DNG frames to share 👍 Instead you've gone out of your way to tell me my choices of sensor, lens mount, lens length, lens type and lens features are wrong or misinformed. Okay, thanks.
-
The original isn't parfocal. I have a Cooke S16 zoom for my Digital Bolex and am kind of spoilt by that. A lot of cine lenses are rehoused photo lenses - Duclos and others rehouse vintage photo lenses, and even Arri's original 65mm lenses were rehoused Hasselblad V lens elements.
-
Don't assume I have no knowledge of the pros and cons of camera sensor DR. I realise it's lower DR than the 4.6K, but I'm a big fan of global shutter, among other things. As a personal challenge, I'm interested to see what I can eke out of this largely ignored camera that keeps getting cheaper. I'm not going for the Pro version; if that were my budget I would get an Alexa Classic. Tbh I'm not a fan of Blackmagic cameras, but this old Ursa 4K intrigues me.

The Cinematics rehoused Sigma lens I mentioned in EF mount is "a few hundred cheaper" than the PL version. You can check the pricing: http://www.pchood.com/index.php?route=common/home I think you're referring to the original Sigma photo lens, but I want something parfocal.

There are no rules beyond my individual filmmaking needs. I'm not a gigging camera person who needs a stack of equipment to meet a client's whims. I work in the film industry as a VFX supervisor, but this is for my own no-budget creative projects where I determine the aesthetic, including the lens lengths. And also maybe for shooting the odd VFX element, where global shutter is very useful.

I never said I wouldn't rent an entire set. I said I was interested in maybe getting a cheap, great-looking Chinese 35 like the 7Artisans Leica copy, and I name-checked a movie I loved the look of that was shot on a single 35. For the majority of what I would do, the 18-35 is great as a "variable prime", and if I needed more I would buy the 50-100 later. On a 35mm film back like the Ursa 4K, going longer than a 35 focal length has not been a need of mine so far. Cheers!
-
Thanks for your feedback on this camera, and yes, if you have any sample DNG frames that would be great! For sure, the constrained DR is an issue. It may be a dealbreaker, but I want to do more research. I have a feeling that the extra DR in raw may only be because of highlight reconstruction, but it's still useful. The trick is in how I choose to create the highlight rolloff from the sensor's captured linear data. Even with one less stop, a graceful rolloff created in post goes a long way (a sketch of the idea follows below).

I'm concerned about FPN, but it seems that, if handled correctly with fill and maybe some Neat Video, it's manageable. What I'm not clear on is whether any firmware tweaks and the V2 sensor have improved this since the BMPC4K. But it seems the answer is to just shoot ISO 400 in most situations.

Agreed, global shutter is HUGE for me. I own a Digital Bolex. And CFast cards are much cheaper now, too. I just want this as an experimental camera right now. No one wants this camera and it's going to keep getting cheaper. And what you're saying about just adding a V-mount - exactly. I'm looking at getting an FXLion Nano that I can use on both my Digital Bolex and this. Yes, this would be great for shooting fashion, especially if adding the RawLite OLPF. And for dealing with flashes/strobes, which even an Alexa handles badly.
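As an aside, here's a minimal sketch of the kind of highlight rolloff I mean, using a simple exponential shoulder applied to scene-linear data. The knee point and shape are arbitrary choices for illustration, not anything a camera or Resolve actually uses:

```python
import numpy as np

def highlight_rolloff(linear, knee=0.8):
    """Ease linear values above `knee` toward 1.0 instead of hard clipping.

    Below the knee the image is untouched; above it, an exponential
    shoulder compresses the remaining highlight range gracefully.
    """
    x = np.asarray(linear, dtype=np.float64)
    out = x.copy()
    over = x > knee
    scale = 1.0 - knee
    # Continuous (and slope-matched) at the knee, asymptotic to 1.0.
    out[over] = knee + scale * (1.0 - np.exp(-(x[over] - knee) / scale))
    return out

# A hard clip discards everything above 1.0; the shoulder keeps
# separation in values that would otherwise slam into the ceiling.
print(highlight_rolloff([0.5, 0.9, 1.0, 1.2]))
```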
-
Like some others, I'm looking into a second-hand Ursa Mini 4K now that the price has come down. I'm not sure about the EF vs PL mount. The EF is more practical, but the PL feels more legit, and one of the Chinese Cinematics rehoused Sigma 18-35 zooms could be the go. However, that also comes in EF, which is a few hundred cheaper. The decision would rest on whether I'm likely to want to rent a Cooke lens or something. I also like the idea of a 7Artisans 35mm lens on the EF mount - shout out to "Call Me By Your Name"!

My main concern is DR. However, I noticed how much better the raw footage is on the BMPC4K (earlier-gen sensor) than the ProRes, so I plan to shoot raw. I've been searching online for some sample DNGs from this sensor (preferably the Ursa Mini 4K), but so far I haven't found much, since a lot of the stuff posted a few years ago has expired Dropbox links. If anyone has any raw footage from this camera, I'd love to download a few DNG frames! Hit me up! The alternative is to rent one for a day.
-
Raw over HDMI - just force the Bayer image down HDMI, then debayer later. It's a neat idea, but not really useful for this camera since it will record raw natively. There may be some nice advantages that become apparent later. I didn't mention it with regard to monitoring, though.

What I'm saying is that to monitor raw in a practical, usable way, you need a log image to compress the full dynamic range to a normalised space that can be sent over HDMI/SDI to any typical screen; then you add a transform from log to your final look. The final look is typically something like an Arri look LUT when shooting Alexa, or the ACES RRT/ODT combo (they are both similar anyway). I make the LUT in Nuke or Lattice and then upload it to the monitor (see the sketch after this post). So yes, to confirm: not monitoring literally in log, but sending a log signal over HDMI and putting the LUT on it. You have to think of log in this case as being the same as raw, but with the sensor's whole dynamic range compressed to fit down the HDMI with no clipping, via a log transfer function. That way, you get to monitor the raw image. Sure, some things are baked in. But you get the dynamic range. You get a very good, close representation of what you will see in Resolve with the raw images later.

What you are asking for - to see the clipping point of the raw channels - you can do by monitoring a log signal, since the Sigma log curve will encompass the entire dynamic range of the sensor. However, there is a neat view mode on the Digital Bolex that is just the output of the Bayer sensor, and you can clearly see where it's clipping the highlights and adjust the exposure accordingly. That would be nice to add. It's actually very similar to what ProRes RAW is doing - just outputting the Bayer image over HDMI.

Yes, the log monitoring approach doesn't account for the extra white balance or highlight reconstruction flexibility you have with raw. That's a nice little bonus you get to play with in Resolve.
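To make the LUT step concrete, here's a rough sketch of baking a 1D monitoring LUT (log in, display out) to a .cube file. The log parameters are invented placeholders for illustration - a real curve would come from the vendor's published spec, which is the whole point of asking for one:

```python
import numpy as np

# Hypothetical log encode: ~12 stops around mid grey mapped into 0-1.
MID_GREY = 0.18
STOPS_ABOVE, STOPS_BELOW = 6.0, 6.0

def log_decode(code):
    # Invert the encode: 0-1 code value back to scene-linear.
    stops = code * (STOPS_ABOVE + STOPS_BELOW) - STOPS_BELOW
    return MID_GREY * 2.0 ** stops

def display_encode(lin):
    # Simple display transform: clip to 0-1, then 2.4 gamma.
    return np.clip(lin, 0.0, 1.0) ** (1.0 / 2.4)

size = 1024
code = np.linspace(0.0, 1.0, size)
out = display_encode(log_decode(code))

# Standard 1D .cube layout: a header line, then one R G B triple per entry.
with open("log_to_display.cube", "w") as f:
    f.write("LUT_1D_SIZE %d\n" % size)
    for v in out:
        f.write("%.6f %.6f %.6f\n" % (v, v, v))
```

Upload the resulting file to any monitor that takes a custom LUT and you're previewing the full captured range through your chosen display transform.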
-
I'm glad they're adding log, but I would only use it for monitoring raw recording. For that, it's crucial. The log image with a LUT will then match the raw image with a LUT in Resolve (different input but same output image). So, as long as:

- They publish the log curve and wide gamut they are using. They need a conventional Cineon/Alexa LogC/ACEScct type of log curve that will hold all the highlight information of the raw sensor.
- It can send a log signal over HDMI, so that you can record DNG raw and monitor with a LUT on the HDMI log image. If it's sending out raw over HDMI, that's quite cool, but as far as monitoring goes, you are limited to the Atomos Ninja V vs. every other EVF or external monitor on the market that allows a custom LUT.

As for exposing raw, the only two things you need to think about are:

1. Expose with a grey card and a light meter, like you are exposing film. You can also use false colour and a grey card, but just make sure you're aware of what gamma setting the false colour is expecting, or make your own false colour LUT.
2. Check where the highlights are clipping. If you need to protect the highlights further, underexpose by one or two stops and push by the same amount in post (a quick sketch of the math follows below). Use Neat Video to clean up the noise floor if necessary.

Zebras are for a video world. They can be semi-relevant for checking highlights in log, since it puts everything in a 0-1 range. Again, we need log output for monitoring, and if, for example, shooting for ACES, we can put a log-to-ACES RRT/ODT LUT on the camera monitor/EVF and see how the full dynamic range of the highlights is rolling off - then get the exact same result on the ingested DNG raw images in ACES in Resolve.

Log for monitoring raw recording is crucial, as it allows any Chinese 8-bit monitor with a custom LUT option to display the full raw image dynamic range and get a very close match to what you will see in Resolve with your beautiful DNGs - as long as you have your technical LUTs set up correctly. A published log and gamut is a MUST though. Sigma, please do a white paper documenting this. That way, we can do a direct correlation with the linear raw image. And allow sending log over HDMI while recording raw.
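On point 2 above, the push in post is just a multiply in scene-linear: one stop is a doubling, so an N-stop push is a gain of 2^N. A trivial sketch with illustrative values:

```python
import numpy as np

def push_stops(linear, stops):
    # One stop = a doubling of linear light, so pushing N stops
    # is a single gain of 2**N on the scene-linear data.
    return np.asarray(linear) * 2.0 ** stops

# Shot underexposed by 2 stops to protect highlights:
underexposed = np.array([0.045, 0.25])  # grey card, bright skin
print(push_stops(underexposed, 2))      # -> [0.18, 1.0], back where intended
```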
-
Agreed, slimRAW is a great workaround even if it does add a step to the ingest process. But it's a shame the Sigma probably won't see a lossless or lossy DNG variant that may have allowed 12 bit internal recording. Still, external SSDs are cheap and it allows for fast offloads. I do hope Apple ultimately prevails here though.
-
Yes there are - I know of a few approaches already; we are really talking OOB though. Yes, Resolve is powerful for colour, but it does not make that power explicit and precise the way Nuke does. It's dumbed down in some ways. Using Gain is the obvious method if you can't use the Raw tab (ProRes footage?). As I mentioned, I think using an ACEScc log curve will make the printer lights behave the same as a Gain - I need to check, it's been a while (a quick check is sketched below). I personally don't know what a one-stop increment is with printer lights. Using Gain is easier: you double it for each stop up, or halve it for each stop down. But of course, there should just be a linear Exposure mode in stops that can be toggled right there on the panel as a fully fledged part of the UI. It would require that you tell it what the input transfer function is, and then it would bracket that behind the scenes.
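That check is quick to run. ACEScc is a pure log2 curve over its working range, (log2(lin) + 9.72) / 17.52 for values above a small threshold, so adding N/17.52 in ACEScc should be identical to a gain of 2^N in linear. A sketch (ignoring the curve's low-end linear segment):

```python
import numpy as np

def acescc_encode(lin):
    # ACEScc forward transform for lin >= 2**-15; the tiny linear
    # toe below that is ignored for this check.
    return (np.log2(lin) + 9.72) / 17.52

def acescc_decode(cc):
    return 2.0 ** (cc * 17.52 - 9.72)

lin = np.array([0.02, 0.18, 1.0, 4.0])
stops = 1.5

via_gain = lin * 2.0 ** stops                                   # gain in linear
via_offset = acescc_decode(acescc_encode(lin) + stops / 17.52)  # offset in log

print(np.allclose(via_gain, via_offset))  # True: the two are equivalent
```

So printer-light-style offsets in ACEScc really do behave like a linear gain, with one stop corresponding to an offset of 1/17.52 ≈ 0.057.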
-
Right. It would be great if there were an option to use the "printer lights" feature as a multiply (gain) operation in linear space in Resolve. That's how it is in Nuke, for example. I can't remember, to be honest, but I think ACEScc may be a direct log conversion in the blacks, because the bottom of the curve is pinned at black - which is why ACEScct was developed: to feel more like a traditional log curve when grading. I'd need to check, but I have been able to get the exact same results with Offset in log (probably ACEScc) vs Gain in linear.
-
Exactly - it's not intuitive, or consistent across different working scenarios. I do use that. There is also the option to do it with Offset in log space, but that is not particularly transparent either. In theory, the more recently added right-click option to bracket a grade operation within a certain colour space should make this work a little better, but in practice I've found it doesn't work as consistently and explicitly as it does in, say, Nuke. The presumption in the design of Resolve that one would only want to do a linear exposure adjustment on raw footage, in the Raw tab, is a little odd.
-
Well said. To speak to part of what you're saying: I'm still not clear on why Blackmagic can't publish their log curve and gamut, so we can go to linear. I mean, it's one of the main reasons I never ended up buying a Blackmagic camera.
-
If you mean some kind of container with still frames inside, then yes. I'm not clear on how MXF works, but I'd like it if it were the best of both worlds: just a wrapper around the DNG frames that you can right-click on and then go inside. I'll agree this is all pretty old tech, but I'm not as opposed to the format as others. Personally, I do think that the same general advantages I talked about with frames apply to intermediate sequences and source media as far as file handling. I'm not 100% on this, but if you record a movie, can't a couple of bad frames corrupt the whole mov file, rather than allowing you to salvage most of what was shot, as a frame-based format would?

I'm talking about source metadata that is carried all the way through from shoot to ingest, through post and VFX, to DI. It could be matrices, CDL, lens information, etc. So if you go to an intermediate format like EXR, then all the footage metadata from the shoot comes along for the ride (a sketch follows below). But I'll admit that this workflow is not something that most people on this forum are considering.

That's cool. You wrote slimRAW, right? I own it. I have a Digital Bolex and it's 100% necessary for that camera. I'm sure it will be gold for the Sigma fp. I personally don't believe it's worth the overhead of debayering raw on the timeline, but then again, I haven't tried out ProRes RAW. BRAW seems lossy in the chroma channels, so I'm not going there. I just feel it's better to ingest to full dynamic range floating point RGB (EXR) or log DPX/ProRes. And I can assure you that raw processing controls are utilised on big productions, but at the ingest stage. There is always the option to go back and reprocess the raw if the debayer algorithm needs changing - or the colour temperature, but in that case the data bucket of the EXR is so huge that a temperature shift on the RGB image is 99% of the time totally fine. The reality is that when you capture the whole thing to an EXR or DPX, there is very little that is baked in.

As for what you are saying about grading at a raw level in a more precise way - I do agree that grading software can seem kind of slapdash in ways that are just really odd. I personally use Nuke for things like exposure and temperature changes, and Nuke's grading nodes are much more mathematically oriented than Resolve's. Not that Nuke is a good grading tool per se, but it's more suited to making precise changes. It bothers me that if a DP says "can you push this plate +1 stop", there is no obvious go-to linear exposure control in Resolve - except in the raw controls tab. Also, it's really weird that Resolve does not allow direct colour matrix input. Mate, you're preaching to the choir. I'm 100% with you on exposure in linear - then why doesn't Resolve offer a linear exposure adjustment tool except on the Raw tab? This baffles me. You work for Blackmagic, right?

I'm not actually a colorist, though. I mainly use Nuke (all linear, all the time) and am reluctantly learning Resolve. For me, going to and from XYZ to do some comp operation is a lot more intuitive than it is in Resolve, so that concept is not foreign to me either. I tech-proof things in Nuke before trying to rebuild them with Resolve nodes. I appreciate that Resolve 16 added the colour temperature adjustment node, but I do agree that white balance in raw is the best way to do it.

A friend who worked on Rogue One told me the DP Greig Fraser apparently shot the Alexa 65 all at 6500K, since it was all raw capture, and then of course that can be adjusted on raw ingest. He may be wrong about this, but it's what I heard: he is a UI designer and he wanted the white point of the UI elements to match the footage, so that came out of a conversation with the DP on set. So yes, we are talking about exposure balance, white balance/colour temperature and debayer at the raw ingest stage. I.e., you are proving my point - this is best done at ingest, as a kind of tech-grade first pass that can always be revisited if need be. What you say about this Light Iron guy further backs up my point: if you need to "match shots" in DI, you should already be 90% of the way there with your first pass and CDL, since by the DI stage the film is 90% complete.
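For a feel of what "metadata comes along for the ride" means in practice, here's a sketch reading per-frame attributes with the OpenEXR Python bindings. The attribute names are hypothetical examples of what an ingest tool might write, not a standard:

```python
import OpenEXR

# Every frame in the sequence carries its own header; arbitrary
# attributes (matrices, CDL values, lens info) ride along with it.
exr = OpenEXR.InputFile("plate_0001.exr")
header = exr.header()

for key in ("cameraModel", "lensFocalLength", "cdl.slope"):
    # Hypothetical attribute names - whatever was written at ingest.
    if key in header:
        print(key, "=", header[key])
```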
-
Are you talking about Sigma adding DNG metadata? Obviously that's great, and I like Resolve's ability to create an IDT on the fly using that data. If it's something else you're talking about, I'd be interested to know. What I was describing is a higher-precision IDT based on spectral sensitivity measurements - not "just" a matrix and a 1D delog LUT. If you look at the rawtoaces GitHub page, the intermediate method is, as you describe, the way Resolve works. It only falls back to that method if the aforementioned spectral sensitivity data is not available.
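For contrast, the "matrix and 1D delog" style of IDT amounts to something like this sketch. The matrix and delog parameters here are made up for illustration; real values would come from the DNG metadata or a vendor IDT:

```python
import numpy as np

# Hypothetical camera-RGB -> ACES AP0 matrix (rows sum to 1.0 so
# neutrals stay neutral); not any real camera's values.
CAM_TO_AP0 = np.array([
    [0.75, 0.15, 0.10],
    [0.05, 0.85, 0.10],
    [0.02, 0.10, 0.88],
])

def delog(code):
    # Placeholder 1D delog: 0-1 code value -> scene-linear,
    # 12 stops centred on mid grey.
    return 0.18 * 2.0 ** (code * 12.0 - 6.0)

def simple_idt(image_code):
    """image_code: (..., 3) array of log-encoded camera RGB."""
    linear = delog(np.asarray(image_code))
    return linear @ CAM_TO_AP0.T  # apply the 3x3 per pixel
```

The spectral-sensitivity method aims for a more accurate transform across arbitrary illuminants than any single fixed 3x3 can give, which is why rawtoaces prefers it when the data exists.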
-
Per-frame metadata is a big part of feature film production and is not going anywhere. But at the lower end of the market, it is probably not that relevant. Have you actually tried it? There are many reasons that VFX and DI facilities work with frames. A single frame is much smaller to read across the network than a large movie file, and the software will display a random frame much faster when scrubbing around - not factoring in CPU overhead for things like compression or debayering. As for my example of uploading to the cloud, a multithreaded command-line upload of a frame sequence is much faster than a movie file (see the sketch below), and I'm able to stream frames from cloud to timeline with a fast enough internet connection. But in a small setup where you are just making your own movies at home, this may all be a moot point.

In a film post-production pipeline, raw controls are really for ingest at the start of post, not grading at the end. But I agree that if you are a one-person band who doesn't need to share files with anyone, then ingesting raw, editing and finishing in Resolve would be possible. Our workflows are very different because of different working environments, and for what you are doing, your way of working may be best for you.
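The upload point in sketch form, assuming a hypothetical bucket path and the aws CLI as the uploader (swap in gsutil or whatever the facility uses):

```python
import concurrent.futures
import pathlib
import subprocess

def upload(path):
    # Stand-in uploader; the destination bucket is hypothetical.
    subprocess.run(
        ["aws", "s3", "cp", str(path), "s3://my-bucket/plates/"],
        check=True,
    )

frames = sorted(pathlib.Path("plate_v001").glob("*.exr"))

# Independent frames parallelise trivially across threads; a single
# multi-gigabyte .mov offers no such easy split.
with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
    list(pool.map(upload, frames))
```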
-
I prefer file sequences, so I'm fine with what the Sigma fp is doing there. Fingers crossed DNG compression gets added eventually. Coming from using software like Nuke and Resolve in film production: all shows are ingested from raw or ProRes as DPX or OpenEXR and go all the way through to DI that way (editing is DNx in Avid). Indie features and second-tier TV shows are typically ProRes or Avid DNx, and I understand that at the DSLR level it's all movie formats, not frames. It's only at the DSLR level that anyone is actually trying to edit raw. IMO it would be like trying to edit with the original camera negative on a Steenbeck. Maybe we can just say that it's a cultural difference. But discrete frames allow the following:

- Varying metadata to be stored per frame.
- The whole file isn't hosed because of one bad frame (see the sketch below).
- File transfer is easier. There's way less load on the network, and this is a biggy with transfer to/from the cloud.

Having the option of MXF is fine though. I remember testing a CinemaDNG MXF when the spec was just released, but no one actually used it.
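On the second bullet: because every DNG stands alone, a damaged frame is trivially detectable and only that frame is lost. A sketch using the rawpy library, assuming it can open the camera's DNGs:

```python
import pathlib
import rawpy

bad = []
for frame in sorted(pathlib.Path("shot_001").glob("*.dng")):
    try:
        with rawpy.imread(str(frame)) as raw:
            raw.raw_image  # touch the sensor data to force a full read
    except Exception:
        bad.append(frame.name)  # this frame is lost; the shot survives

print("corrupt frames:", bad or "none")
```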
-
It would be great if Sigma made an ACES IDT based on measuring the sensor's spectral sensitivity data. From the rawtoaces documentation on GitHub:

"The preferred, and most accurate, method of converting RAW image files to ACES is to use camera spectral sensitivities and illuminant spectral power distributions, if available. If spectral sensitivity data is available for the camera, rawtoaces uses the method described in Academy document P-2013-001 (.pdf download)."
-
Thank you for talking some sanity on this oft-misunderstood subject.
-
My "raw workflow" is just expose with a light meter, also get a grey card and then import to Resolve. In the raw settings I may use highlight reconstruction and do an exposure adjustment. Generally from there, if not working in ACES I do a CST to log (Cineon or Alexa LogC) then put the PFE lut on it to see how it will look before any log space grading. Edit: to be clear the PFE lut is the last thing in the chain but I'll put it on before grading (time-wise, not in the node graph). This gives me a nice quick one-light and should match the on-set monitor. I didn't see anything about these images that would make me deviate from that. They seem like regular raw images to me once they are in Resolve, but I'll have to see when more becomes available.
-
Thanks for clarifying that there is some kind of curve on the 8-bit image - nice to know they thought about that! I'm not sure what you mean as far as colour tables in DNG (I will need to give the spec another look), but I'm mainly referring to Resolve interpreting the colour matrices stored as DNG metadata - which it does. So that should be the crux of the colour transform decisions it's making on the raw image. Resolve actually does the best mainstream job of interpreting a CinemaDNG image, because it puts it into a high dynamic range space with no highlight clipping (if you have the Resolve project set up correctly), and it exposes it more or less correctly for middle grey.
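Those embedded matrices are easy to inspect yourself. A sketch using rawpy (LibRaw bindings), assuming it can parse the fp's DNGs; the filename is just an example:

```python
import rawpy

with rawpy.imread("SDIM0001.dng") as raw:
    # Camera RGB -> XYZ matrix LibRaw derived from the file's colour
    # metadata - the kind of data Resolve builds its transform from.
    print(raw.rgb_xyz_matrix)
    print(raw.camera_whitebalance)  # as-shot white balance multipliers
```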
-
I took a look at the CinemaDNGs in Resolve with just a basic ACES setup, and they both seem pretty decent - even the 8-bit one, which is interesting, because an 8-bit linear image should be practically useless (see the sketch below). Maybe it's got a log curve under the hood? That lakeside image has a lot of dynamic range variation between highlight and shadow. It might just be the right kind of image to camouflage potential issues. I'll try a non-ACES workflow where I would just interpret to Alexa LogC/AWG. From there you would write out ProRes or something.

I like cinema5D, but if I understood their review correctly, they seem to be saying the camera's internal picture profiles influence the raw recording. I don't see how that is possible; those profiles should only be baked into the H.264s. They are also advocating for a log profile to interpret the raw image, as well as an option for the baked H.264 footage. Resolve should do a decent job of translating the DNG frames based on metadata, and then if you want, you can convert to your chosen log profile with CST nodes or similar. If Sigma did create a log profile and gamut and put out a white paper, that would be nice, but I'm assuming the DNG metadata currently there isn't garbage, because the images seem to look okay. A known, published log profile and gamut is essential, though, when recording raw and monitoring over HDMI - assuming you can add a custom LUT to your monitor. That way, you can get a decent "one light" match to what you will eventually see in Resolve after ingesting your raw footage and applying your LUT.
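Why 8-bit linear should be useless is simple arithmetic: each stop down gets half the code values of the one above, so the shadows end up with almost none. A quick count:

```python
import math

# In 8-bit linear, the top stop (128-255) takes half of all code values.
for stop in range(8):
    hi = 255 / (2 ** stop)
    lo = 255 / (2 ** (stop + 1))
    codes = math.floor(hi) - math.floor(lo)
    print(f"stop {-stop:>2}: {codes} code values")
# -> 128, 64, 32, 16, 8, 4, 2, 1: the bottom stops are starved,
#    which is exactly what a log curve under the hood would fix.
```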
-
Thanks for letting me know that you are getting consistent behaviour. I think there's some issue just on my Mac, where Resolve full range is not matching H.265 full range, because when I rolled back to Resolve 15, I suddenly had the same issue there, where previously 15 had behaved the way you're describing. I have no idea what has happened on my computer, because I haven't updated the system software. I've reported this to BMD, but they have no idea either.
-
I just tried turning off GPU decoding of HEVC files, and the H.265 clip is now way too lifted and flat when set to Full levels. Clearly this is buggy. So doing it on the GPU yields the "correct" result, except for the clipping. Transcoding to ProRes 422 HQ in EditReady seems to be the best workaround for me right now to get the full image displaying correctly in Resolve.
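For anyone hitting the same thing: that "lifted and flat" look is the classic signature of a range mismatch, i.e. data sitting in video/legal range (16-235 in 8-bit) being displayed without the expansion back to full range (0-255). A sketch of the arithmetic:

```python
import numpy as np

def video_to_full(x):
    # Expand video/legal range (16-235 in 8-bit) to full range (0-255).
    return np.clip((x - 16.0) * 255.0 / (235.0 - 16.0), 0, 255)

# Full-range data wrongly squeezed into legal range and left there:
full = np.array([0.0, 128.0, 255.0])
mistagged = full * (235.0 - 16.0) / 255.0 + 16.0
print(mistagged)                 # [ 16.  125.9  235. ] - lifted blacks, dull whites
print(video_to_full(mistagged))  # back to [0, 128, 255] once expanded
```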