Everything posted by androidlad
-
Blackmagic casually announces 12K URSA Mini Pro Camera
androidlad replied to Andrew Reid's topic in Cameras
This is the CFA pattern of the URSA 12K sensor: a 6x6 block instead of the traditional 2x2, with 18 RGB pixels and 18 white/transparent pixels per block, which improves SNR somewhat but reduces resolution. So the optimal shooting mode will be 4K (full RGB information without interpolation, via 3x3 pixel binning), 8K will be softer, and 12K 1:1 will be very, very soft.
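To make the binning argument concrete, here's a minimal numpy sketch. The 6x6 arrangement below is purely hypothetical (Blackmagic hasn't published the layout); the only constraints carried over from the post are 18 color sites, 18 white sites, and at least one R, G and B in every 3x3 tile, so that 3x3 binning yields full RGB with no interpolation:

```python
import numpy as np

# Hypothetical 6x6 CFA block: 18 color sites + 18 white (W) sites.
# The exact arrangement is a guess for illustration only.
cfa = np.array([
    ["W","R","W",  "R","W","B"],
    ["G","W","B",  "W","G","W"],
    ["W","G","W",  "B","W","R"],

    ["R","W","B",  "W","R","W"],
    ["W","G","W",  "G","W","B"],
    ["B","W","R",  "W","G","W"],
])

def bin_3x3(cfa, raw):
    """Average same-color sites in each 3x3 tile -> one full RGB pixel per tile."""
    h, w = cfa.shape[0] // 3, cfa.shape[1] // 3
    out = np.zeros((h, w, 3))
    for ty in range(h):
        for tx in range(w):
            tile = cfa[ty*3:(ty+1)*3, tx*3:(tx+1)*3]
            vals = raw[ty*3:(ty+1)*3, tx*3:(tx+1)*3]
            for c, ch in enumerate("RGB"):
                # every tile contains each color, so no spatial interpolation
                out[ty, tx, c] = vals[tile == ch].mean()
    return out
```

At 12K 1:1, by contrast, each output pixel has to be interpolated from much sparser same-color neighbours, which is why it should look soft.
-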
Blackmagic casually announces 12K URSA Mini Pro Camera
androidlad replied to Andrew Reid's topic in Cameras
Yes, I invented this 12K sensor, which is precisely the same size as the RED Komodo's at 27.03mm x 14.25mm, and exactly double its horizontal and vertical resolution. And who made the RED Komodo sensor?? 😉 OK fine, this 12K sensor is a custom version of Canon's 120MXSC, operating in an ROI mode: https://canon-cmos-sensors.com/canon-120mxs-cmos-sensor/ -
Canon 9th July "Reimagine" event for EOS R5 and R6 unveiling
androidlad replied to Andrew Reid's topic in Cameras
I can confirm that the 4K HFR modes (50 - 120p) are achieved using 2x2 pixel binning. It's a relatively elegant solution because there's no line skipping involved. The image will be very soft, but with lower moiré and lower noise compared to other subsampling methods.
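A minimal numpy sketch of that trade-off, ignoring the Bayer pattern for simplicity: binning averages neighbours, while skipping discards them.

```python
import numpy as np

rng = np.random.default_rng(0)
full = rng.normal(100, 10, (8, 8))   # stand-in for full-res samples (CFA ignored)

# 2x2 binning: average each 2x2 block -> half resolution, ~2x less noise,
# and the averaging doubles as a crude low-pass filter against moire
binned = full.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# Skipping: keep every other row/column -> half resolution,
# but noise and aliasing are untouched
skipped = full[::2, ::2]

print(binned.std(), skipped.std())   # ~5 vs ~10
```
-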
Free download of F-log ACES IDT DCTL for use in Resolve: https://blog.dehancer.com/articles/fuji-f-log-aces-idt-for-davinci-resolve-download-dctl/
-
Canon 9th July "Reimagine" event for EOS R5 and R6 unveiling
androidlad replied to Andrew Reid's topic in Cameras
R5/R6 both have 10bit internal recording. BTW the new look of the forum, especially the new font, is obnoxious. -
Sony A7S II successor 9m dot 4K native EVF panel?
androidlad replied to Andrew Reid's topic in Cameras
The actual resolution in pixels is 2048 x 1536, versus 1600 x 1200 for current high-end EVFs.
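The arithmetic behind the dot counts, since panel "dots" are sub-pixels (three per full pixel):

```python
# "dots" = pixels x 3 sub-pixels (R, G, B)
print(2048 * 1536 * 3)  # 9,437,184 -> marketed as a "9.44M dot" panel
print(1600 * 1200 * 3)  # 5,760,000 -> today's "5.76M dot" high-end EVFs
```
-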
That's their 100MP X-Trans project that has since been shelved.
-
It requires far more aggressive line skipping to read out the full height of the sensor, which is 8736 pixels. Currently the GFX100 uses 2/3 subsampling vertically to derive a 4352-pixel Bayer image from a 6528-pixel-high crop, and that already saturates the maximum readout time of ~32ms required to achieve a 30fps video frame rate.
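The rough numbers behind that claim (the 32ms figure is from the post; the frame budget is just 1/30s):

```python
full_height  = 8736               # full sensor height in pixels
crop_height  = 6528               # height of the current video crop
lines_read   = crop_height * 2/3  # 2-of-3 vertical subsampling -> 4352 lines
frame_budget = 1000 / 30          # ~33.3 ms per frame at 30 fps

print(lines_read, frame_budget)   # 4352.0 lines already take ~32 of ~33.3 ms
```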
-
Related, but not directly correlated. Dynamic range is a measure of a camera system: how far it can see into the shadows and how far it can see into the highlights. Dynamic range can be measured objectively, but even then there's a subjective component, as each viewer has their own noise tolerance threshold, which governs how much of the shadow part of the dynamic range they find actually usable.

Latitude is related to dynamic range, but it is also scene-dependent. Latitude is the degree to which you can over- or under-expose a scene and still bring it back to a usable exposure value after recording. It depends on dynamic range, which sets the overall boundary on how much you can over- and under-expose, but it's limited by the scene too, and how bright and dark the scene itself goes.

Say a scene has a brightness range of 5 stops (a typical Macbeth chart, for instance), and let's use a camera with a 12-stop dynamic range. If we place the scene in the middle of that camera's recordable range, we have 7 stops to play with, so we could over- or under-expose by 3.5 stops and still recover the scene. But if it were a real-world scene of an actor against a sunlit window, with a brightness range of 15 stops, you'd have no latitude at all: no matter how you expose that scene, you're going to lose shadow or highlight information.

So yes, latitude and dynamic range are related, but different. Latitude can't really be used to infer how much total dynamic range a camera has.
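The worked example from that paragraph as a tiny function, assuming the scene is centred in the camera's range so the headroom splits evenly:

```python
def latitude_stops(camera_dr, scene_range):
    """Stops of over/under exposure recoverable with the scene centred."""
    headroom = camera_dr - scene_range
    return max(headroom, 0) / 2

print(latitude_stops(12, 5))   # 3.5 -> the Macbeth-chart case
print(latitude_stops(12, 15))  # 0.0 -> scene exceeds the sensor, no latitude
```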
-
Great, this aligns nicely with what C5D did with their over/under tests. But this is testing latitude, not dynamic range.
-
The Sigma FP, as well as the Panasonic S1, has no OLPF and is indeed very prone to moiré in real-world shooting scenarios. This is exactly why the S1H has an OLPF.
-
It's a shame the FP doesn't have an OLPF; with such a low pixel density it's very prone to moiré/aliasing.
-
ProRes HQ can't compare to ProRes RAW for adjusting white balance or ISO. Because RAW is linear and scene-referred, the results are much better than with gamma-encoded color spaces. You can linearise gamma-encoded footage, but it adds quite a few additional steps.
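A small sketch of why linear matters (gamma 2.4 here is just a stand-in encoding curve, not any specific camera log): on linear RAW, white balance is a single per-channel gain; on encoded footage you must decode, gain, and re-encode, and anything baked in before that step (clipping, quantisation, grading) can't be undone.

```python
import numpy as np

gamma  = 2.4                               # stand-in encoding curve
linear = np.array([0.18, 0.18, 0.18])      # scene-linear RGB (grey patch)
gains  = np.array([1.8, 1.0, 1.4])         # white-balance gains for R, G, B

# On RAW: one multiply in linear light
wb_raw = linear * gains

# On encoded footage: decode -> gain -> re-encode
encoded    = linear ** (1 / gamma)
wb_encoded = ((encoded ** gamma) * gains) ** (1 / gamma)

# Equal here only because nothing was clipped or quantised in between
print(wb_raw, wb_encoded ** gamma)
```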
-
"Supports HDR in movie shooting" Anyone tested this? I wonder what it does.
-
If this were true, there would be accompanying empirical evidence to support it. For now, it's only your subjective opinion. Also, in BRAW the 6K scored 11.8 stops and the URSA Mini G2 scored 12.1.
-
The Big EOSHD interview: Metabones lens adapters
androidlad replied to Andrew Reid's topic in Cameras
Is it difficult to make a 0.65x Speed Booster? That way full-frame glass would have a true and precise full-frame equivalent field of view on APS-C cameras. With the current 0.71x Speed Booster there's still a crop factor of 1.09x, so a 24mm lens has a 26mm field of view.
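The arithmetic, using roughly 1.53x for the APS-C crop (the exact factor varies slightly by brand):

```python
aps_c_crop = 1.53                 # approximate APS-C crop factor
for reducer in (0.71, 0.65):
    eff = aps_c_crop * reducer    # effective crop = sensor crop x reducer ratio
    print(reducer, round(eff, 2), round(24 * eff, 1))
# 0.71 -> ~1.09x crop, a 24mm frames like ~26mm
# 0.65 -> ~0.99x crop, essentially true full-frame framing
```
-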
I know what you meant, but it's worded incorrectly; what you wanted to say is that it would lose SNR. Note that pure pixel binning actually increases SNR (the 2x2, 3x3 etc. you see on smartphone sensors).
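A quick Monte Carlo check of that point, assuming shot-noise-limited pixels: summing N same-color pixels grows the signal N-fold but the noise only sqrt(N)-fold, so SNR improves by sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 100.0                            # mean photoelectrons per pixel
for n in (1, 4, 9):                       # no binning, 2x2, 3x3
    samples = rng.poisson(signal, (100_000, n)).sum(axis=1)
    print(n, samples.mean() / samples.std())   # ~10, ~20, ~30 = sqrt(n*signal)
```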
-
Oh yeah it's already in the hands of many influencers/industry pros, who are anxiously awaiting the NDA lift.
-
Most cameras that output ProRes RAW at the moment are mirrorless cameras with HDMI output. Atomos developed the RAW-over-HDMI protocol, and they license it to camera manufacturers for free. For cameras that output RAW over SDI, BMD needs to develop support for each manufacturer's RAW spec (the EVA1 outputs 10bit log-encoded RAW, Sony CineAlta cameras output 16bit linear RAW). The same applies to Atomos, but since its RAW-over-HDMI protocol is being widely adopted, Atomos pretty much has full control over the RAW spec. So instead of saying BRAW is sensor-specific, you could say it's brand-specific.
-
Nope. BRAW is just a codec; it has nothing to do with specific sensors or camera models, but it requires BMD's FPGA for the encoding. Same for ProRes RAW: Apple has licensed the encoder to Atomos and DJI, and it can encode any incoming RAW signal.
-
They did their best to optimise the DR. For a charge-domain global shutter it's doing OK, but it's poor compared to conventional rolling-shutter sensors. It's positioned primarily as a high-end crash cam; only a global shutter can guarantee zero skew and zero flash banding.
-
According to a source, X-H2 is likely to have a conventional Bayer CFA.
-
It was a custom-made Petzval 85mm lens, used sparingly in the film, only for some portrait shots of Mulan. All other shots used Panavision Sphero 65 lenses: very vintage, but with some modern touches, and noticeable CA. Mandy Walker simplified it a bit too much in the interview. It's not really about the CA; it's the distinctive field curvature and sharpness roll-off from centre to edge, another way of isolating the subject instead of just using a very shallow depth of field. This is one of the shots with the Petzval 85mm lens: