Leaderboard
Popular Content
Showing content with the highest reputation on 12/19/2021 in all areas
-
This is pretty neat. Unfortunately, since the Sony IMX377 and other 1/2.3" sensors, most smartphone sensors have abandoned 12-bit capture for both photo and video (staggered HDR sensors do 12-bit HDR for photos instead), so this is limited to 10-bit video for now. But the video quality looks pretty stellar: unprocessed, and I am guessing the highlight and shadow recovery may be pretty decent. It apparently does 10-bit 4:2:2, whereas most smartphones that do 10-bit video use 4:2:0. (The new Snapdragon and MediaTek SoCs talk about 18-bit imaging, which I am guessing may be some aggressive HDR that upscales bit depth, though that may be limited solely to photos.) This RAW video looks pretty neat and doesn't look over-sharpened or over-processed like most smartphone footage.
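To put those numbers in context, here is a small Python sketch (the 4K UHD resolution is an assumed example, not something from the post) showing how many code values each bit depth gives per channel, and how much chroma information 4:2:2 retains compared with 4:2:0:

```python
# Compare bit-depth code values and chroma-subsampling budgets.
# The 4K UHD resolution below is an assumed example, not from the post.

WIDTH, HEIGHT = 3840, 2160
luma_samples = WIDTH * HEIGHT

for bits in (8, 10, 12):
    print(f"{bits}-bit: {2 ** bits} code values per channel")

# 4:4:4 keeps full chroma resolution, 4:2:2 halves it horizontally,
# and 4:2:0 halves it both horizontally and vertically.
schemes = {"4:4:4": 1.0, "4:2:2": 0.5, "4:2:0": 0.25}
for name, fraction in schemes.items():
    print(f"{name}: {int(luma_samples * fraction):,} chroma samples per plane per frame")
```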
-
...a matter of time, or a further business toss-up? Well, there's a reason people end their questions with question marks ; ) Here's PetaPixel's take, from Jaron Schneider: «Sony doesn’t specify when it plans to manufacture sensors at scale using this technology but does say it will continue to iterate on the design to further increase image quality in sensors large and small.» (source) Given Sony's large-sensor tradition, we tend to read the one straight into the other, but it seems the waters can't be kept apart much longer... ;- )
-
I had a closer look at the picture, and yeah, it's not what I thought. Funnily enough, I thought "dual layer" meant two layers. No. Those words in the tweet are complete bullshit. There are already two (or more) layers, and they're making it three (or more), but it's still a single sensor. For years they've been taking things that were taking up real estate on the front of the sensor (and therefore blocking light) and putting them behind the photosite. This is an incremental upgrade that some numpty decided was the ultimate breakthrough, as if none of the previous ones mattered or counted in this "FIRST DUAL LAYER" spasm they posted. I thought the implication that 'one became two' meant a second photodiode under the first, presumably with the ability to be operated independently. Sadly, no.
-
Sorry, that's not how it works. All these developments are about increasing full well capacity, which is very low in small sensors, and decreasing noise, which is relatively high in small sensors, to extend DR by maybe 1/3 of a stop. Here is the formula: DR (dB) = 20 × log10(full well capacity ÷ noise). You can play with it to estimate how much room for improvement is still there. Full well capacity is already capped at 3,500 electrons per µm², and noise is already very low; the A1 is at 1.8 e- now. The only way to extend DR by several stops in a conventional CMOS sensor is a multi-exposure solution. Nikon is playing with a dual-layer idea, but it's in the photodiode section, not the electronics: they want to place a CMY layer on top of an RGB layer, then set the upper layer's exposure differently from the lower layer's.
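A quick worked example of that formula in Python, using the figures quoted above (3,500 e- per µm² capacity, 1.8 e- read noise); the pixel areas are assumed values for illustration only:

```python
import math

# Worked example of the DR formula from the post:
#   DR (dB)    = 20 * log10(full_well_capacity / read_noise)
#   DR (stops) = log2(full_well_capacity / read_noise)
# The per-area capacity (3500 e-/um^2) and read noise (1.8 e-) are the
# figures quoted in the post; the pixel areas below are assumed examples.

CAPACITY_PER_UM2 = 3500.0   # electrons per square micron (from the post)
READ_NOISE = 1.8            # electrons RMS (A1 figure quoted in the post)

for pixel_area_um2 in (1.0, 4.0, 16.0):   # assumed pixel sizes, small to large
    full_well = CAPACITY_PER_UM2 * pixel_area_um2
    ratio = full_well / READ_NOISE
    dr_db = 20.0 * math.log10(ratio)
    dr_stops = math.log2(ratio)
    print(f"{pixel_area_um2:4.1f} um^2 pixel: FWC={full_well:8.0f} e-, "
          f"DR={dr_db:5.1f} dB ({dr_stops:4.1f} stops)")
```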
-
The regular metadata tags containing the color space do indeed indicate it is a Rec.709 recording. I suspect V-Log isn't an allowed or known color space for that tag; tagging Rec.709 ensures all players will at least play the file back. MediaInfo also shows a blurb of vendor-specific data containing the text "vlog" and "vgamut" plus some additional readable text. It seems all sorts of metadata is put into the video stream, but its format is specific to Panasonic and most software is unfamiliar with it. However, if you set up your NLE correctly and explicitly specify V-Log/V-Gamut as the input format, all the colors fall into place, particularly when using an ACES workflow.
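If you want to check what the generic tags say for your own clips, a sketch along these lines (assuming ffprobe is installed; clip.mov is a placeholder filename) prints the stream-level color fields; the Panasonic vendor blob won't appear here, which is why a V-Log recording can still report Rec.709:

```python
import json
import subprocess

# Query the generic, standards-level color tags with ffprobe.
# "clip.mov" is a placeholder filename; ffprobe must be on the PATH.
# Panasonic's vendor-specific V-Log/V-Gamut blob is not exposed here,
# only the regular tags, which is why a V-Log clip can still report Rec.709.
result = subprocess.run(
    [
        "ffprobe", "-v", "error", "-select_streams", "v:0",
        "-show_entries",
        "stream=color_space,color_transfer,color_primaries,color_range",
        "-of", "json", "clip.mov",
    ],
    capture_output=True, text=True, check=True,
)
print(json.dumps(json.loads(result.stdout), indent=2))
```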
-
RAW Video on a Smartphone
Andrew Reid reacted to billdoubleu for a topic
So, I recently picked up a Pixel 5a on sale and gave the Motion Cam app a go on it. I've been having some fun and getting fairly consistent results with the full sensor (4032×3024) at 12 fps. I think the sensor size comes close to Super 8mm sizing, and the results definitely have an 8mm vibe.

The screen grab below is from a CinemaDNG sequence dropped into Resolve on a 1080p / Rec.709 / linear timeline -> color space transform (which I didn't even know you could use in the free version) with ARRI LogC output gamma -> ARRI Alexa LogC to Rec.709 LUT -> bumped up the saturation a bit and brought the shadows down with the log wheels. Be gentle; what I know of a RAW workflow has been learned in about two hours this week. There's no noise reduction applied, but it seems like the downscale helped clean things up and give a more film-like quality.

The app is a bit clunky, but it's obviously in beta and can't be fully criticized yet. Manual focus would be amazing, and there were some exposure shifts that may or may not have been my fault. If someone sold this technology with an MFT lens mount on it as a modern 8mm camera, I would buy one for sure. It's been fun and I'm enjoying learning about RAW correction/grading. There seem to be a few people catching on to the app on YouTube; people with more powerful phones seem to be having better luck. Hopefully development continues successfully and RED doesn't interfere!
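For anyone curious what the linear-to-display-gamma step is doing conceptually, here is a minimal Python sketch of the standard Rec.709 transfer curve applied to normalized linear values. It is a simplified stand-in for the Resolve color space transform and LogC/LUT chain described above, not a reproduction of it, and numpy plus the sample values are assumptions:

```python
import numpy as np

def rec709_oetf(linear):
    """Standard Rec.709 opto-electronic transfer function.

    Expects linear scene values normalized to 0..1 and returns the
    gamma-encoded signal. This is the textbook curve, used here as a
    simplified stand-in for an NLE's color space transform.
    """
    linear = np.clip(np.asarray(linear, dtype=np.float64), 0.0, 1.0)
    return np.where(
        linear < 0.018,
        4.5 * linear,
        1.099 * np.power(linear, 0.45) - 0.099,
    )

# Example: a few normalized linear values, e.g. from a debayered DNG frame.
samples = np.array([0.0, 0.01, 0.18, 0.5, 1.0])
print(rec709_oetf(samples))
```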
-
Some DR tests on rare sunny winter days: raw images vs. the native camera app's video recording. No detail enhancements for raw (no NR, sharpening, dehazing and such), just shadows/highlights and gamma curve adjustments in ACR to get a better idea of what's going on. All shots are at native resolution, 1:1, no scaling. I'm still not confident about the exposure settings because of the high contrast of the live feed; I tend to check the overall image with AE on and then dial in manual settings until the picture looks nearly the same. The native video is shot in auto mode, and in terms of contrast it actually represents well the image you see in live view when shooting raw.

So, as you can see, there is no big difference in DR (as you might expect when comparing raw and "baked-in Rec.709" footage from a cinema camera). The "bridge" scene got a bit overexposed in video compared to raw; I have a feeling it would have been possible to bring back the overexposed building in the back without killing the shadows with a minor exposure shift. What makes the difference for me is highlight rolloff: it is natural and color-consistent, just look at the snow in front in the "hotel" shot and the sun spots at the top of the "bridge" shot. Shadows are more manageable for me, too. Yes, there is a lot of chroma noise, but it's easily controlled with native ACR/Resolve tools; luma noise is very fine and even, so you can balance shadow detail and noise suppression to your taste. But the most prominent difference is due to zero sharpening. Just check small details like branches, especially against a contrasting background. It's not DR related, but I can't avoid mentioning it.

Is raw a necessity for such a result? Dunno. Maybe no post-processing, a log profile, and 10-bit 400 Mbps H.26x instead of 8-bit 40 Mbps would give comparable results. If you have a phone capable of such recording modes, I would love to see the comparison.
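As a rough sanity check on that last thought, here is a quick bits-per-pixel comparison of the two modes mentioned; the 4K/30p frame geometry is an assumed example, not something stated in the post:

```python
# Compare how many encoded bits per pixel two recording modes leave after
# compression. Frame size and frame rate are assumed (4K UHD at 30p);
# the bitrates and bit depths are the ones mentioned in the post.

WIDTH, HEIGHT, FPS = 3840, 2160, 30          # assumed capture format
pixels_per_second = WIDTH * HEIGHT * FPS

for label, mbps in (("8-bit 40 Mbps", 40), ("10-bit 400 Mbps", 400)):
    bits_per_pixel = (mbps * 1_000_000) / pixels_per_second
    print(f"{label:16s}: {bits_per_pixel:.2f} encoded bits per pixel")
```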
-
Benefits of Fresnel Lens LED
techie reacted to UncleBobsPhotography for a topic
LED panels are generally not soft enough by themselves, but an LED panel with a 4x4 diffusion frame will be softer and quieter than a COB/Fresnel with the same output. A COB light with interchangeable Fresnel and softbox attachments gives you as much flexibility as possible. I've got a couple of COBs/Fresnels and a couple of panels, and both have their preferred use cases. It's also worth noting that not all Fresnels have the same focusability: old-school Fresnels typically have a larger lens and a larger minimum focus area than specialized COB Fresnels, which can focus very tightly. Some Fresnels will also lose more light than others. A Bowens-mount Fresnel attachment might not be as efficient as an all-in-one COB and lens, since there is more space and the lens has to work with more than one light source.
-
You cannot walk with any camera using IBIS and get acceptable stability; your standards are low if you think you can. If the wobble exists below 20mm for static shots, then there is a problem, but the problem is with specific lenses. In any case, you will never be able to walk, run, or creep with a camera using IBIS and get results that rival a gimbal or would be acceptable to professionals. If you use a gimbal, none of this matters.
-
If you want to move with the camera, then you certainly need a gimbal, and you do not need IBIS or OIS, so there is no wobble induced by the camera or lens, if such wobble even exists. Even panning really does not need camera stabilization. Is it alleged that wobble occurs without IS?
-
Overheating and the time limit are both now irrelevant if the R5 is combined with the Ninja V+. I think the IBIS wobble was confined to an EF lens with IS, the EF 16-35 L. I don't see it in videos I have shot with an RF lens with IS.
-
Great! So it looks like a better deal than the Sony Alpha 1.
-
The R5 comes with a cable holder that locks the HDMI cable to the camera. There is no electronic issue with micro or mini HDMI. You can record 8K 12-bit and 4K 120p ProRes RAW with the Ninja V+, and there should be no overheating. You can record 10-bit 4K 60p ProRes 422 HQ with the Ninja V, and there is no overheating.