
Leaderboard

Popular Content

Showing content with the highest reputation on 12/20/2021 in all areas

  1. I have no clue regarding most of what you said, but just looking at the images, wouldn't this enable higher-resolution sensors for a given sensor size, and potentially higher color sensitivity in all sensors, since with this architecture 100% of the sensor real estate is dedicated to light gathering? (I am assuming, based on the graphic, that current sensors lose 50% of their real estate to the pixel transistor circuitry, since the photodiodes and transistors sit side by side.) This DPReview article says much the same thing, though from a light-gathering standpoint rather than a sensor-resolution one. Like I said, I know nothing about how sensors work, but a graphic showing something doubling its ability to perform a task while taking up the same real estate seems like a good thing in my mind. The Canon DGO sensor already widens dynamic range by using dual gain circuits but still tops out around 16 stops. When I first heard about the DGO sensor I thought it was going to be some unheard-of breakthrough, but as usual it was mostly marketing.
    3 points
  2. I was itching to make another short film about 2 months ago, but I'm not much of a writer. Then I found "Script Revolution", a website where you can get free scripts offered by the writers. I'm glad I found it. I found a great little 3-page script by writer Robert Bruinewoud. As luck would have it, I found out about the Rode Reel contest ("make a short film 3 minutes or shorter") right after that. Mine ended up being 4 1/2 mins, so I made a cut-down version. I'll be releasing the longer version to film festivals around the world. Shot on the BMPCC 4K using Voigtlander primes and Kowa and Moller 16 Anamorphic scopes. Check it out if you want to. Vote if you like it. https://myrodereel.com/watch/12720
    1 point
  3. I think it can work for computational acquisition and all the remaining stuff... : ) But they still have to work hard to mimic our exposure triangle properly, as you fairly say. They're a pain to use today outside of the pocketable small form factor : X Let alone the production-values mantra, which will stand no matter how the tech evolves ;- )
    1 point
  4. Yeah, I read it : ) And basically the same has been said a little bit everywhere... In my view, aside from the marketing angle, I guess we have no reason to be pessimistic beyond the actual trend; the more the better :- )
    1 point
  5. That's not a minor development: "Sony’s new architecture is an advancement in stacked CMOS image sensor technology. Using its proprietary stacking technology, Sony packaged the photodiodes and pixel transistors on separate substrates stacked one atop the other. In conventional stacked CMOS image sensors, by contrast, the photodiodes and pixel transistors sit alongside each other on the same substrate. The new stacking technology enables adoption of architectures that allow the photodiode and pixel transistor layers to each be optimized, thereby approximately doubling saturation signal level relative to conventional image sensors and, in turn, widening dynamic range."
    1 point
  6. Sorry, that's not how it works. All these developments are about increasing the full well capacity, which is very low in small sensors, and decreasing noise, which is relatively high in small sensors, to extend the DR by maybe 1/3 of a stop. Here is the formula: DR (dB) = 20*log(full well capacity ÷ noise). You can play with this to estimate how much room to improve is still there (a quick sketch of it follows below). Capacity is already capped at around 3500 electrons per µm², and noise is already very low; the A1 is at 1.8e now. The only way to extend DR by several stops in a conventional CMOS sensor is multi-exposure solutions. Nikon is playing with a dual-layer idea, but it's on the photodiode side, not the electronics: they want to place a CMY layer on top of an RGB layer, then set the upper layer's exposure differently from the lower layer's.
    1 point
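For anyone who wants to play with that formula, here is a minimal sketch in Python. The numbers are only the illustrative figures mentioned in the post (~3500 e- per µm² full well, ~1.8 e- read noise), not measured specs for any particular camera:

```python
import math

# Dynamic range from full well capacity (FWC) and read noise,
# per the 20*log(FWC / noise) formula quoted above.
def dr_db(full_well_e, read_noise_e):
    return 20 * math.log10(full_well_e / read_noise_e)

def dr_stops(full_well_e, read_noise_e):
    # One photographic stop is a factor of two, hence log base 2.
    return math.log2(full_well_e / read_noise_e)

# Illustrative values from the post: ~3500 e- per um^2, ~1.8 e- read noise.
print(round(dr_db(3500, 1.8), 1))     # ~65.8 dB
print(round(dr_stops(3500, 1.8), 1))  # ~10.9 stops
# Doubling the saturation level buys exactly one extra stop:
print(round(dr_stops(7000, 1.8) - dr_stops(3500, 1.8), 3))  # 1.0
```

As the sketch shows, doubling the saturation signal level on its own adds about one stop; getting several more stops out of a single exposure needs the noise term to fall as well.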
  7. This is a strange one and I suspect no one will have come across it, but I was doing some recording the other day using a Zoom H6 and the S5. I used the line out of the Zoom to connect into the camera so I had a guide track for syncing the Zoom audio in post. The Zoom line out is specifically designed for such a scenario (stated in the manual). But here's the thing: after careful tests and some head scratching I came to the unavoidable conclusion that the phase of any input into the Zoom is inverted once it is laid down as the S5 audio track. The cable is an unbalanced 3.5mm, so it cannot be that, which leaves either a problem with the Zoom line out (unlikely) or the S5 line input (more likely). Either way this is clearly a factory fault. There is nothing that can fix this at source, and it is easy enough to invert the phase in post (a quick sketch is below); in fact, mixing both signals in any final project is very unlikely anyway, but still, it should not be happening. One other fairly obvious observation is that the noise floor of the S5 is noticeably higher than the Zoom, which is pretty pleasingly quiet. This is not helped by the fact that the Zoom line out is fixed at -10dB, so you have to have the S5 at 0dB input at least. Anyone who cares about sound quality and is recording in a quiet environment would do very well not to use the S5 for sound unless you have a high-quality mixer that lets you lower the S5 input level to -6dB at least.
    1 point
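If you do want to fix the guide track outside a DAW, polarity (phase) inversion is just flipping the sign of every sample. A minimal sketch, assuming the Python soundfile library and a hypothetical filename:

```python
import soundfile as sf  # assumption: the pysoundfile library is installed

# Read the S5 guide track, flip its polarity, and write it back out.
audio, sample_rate = sf.read("s5_guide_track.wav")   # hypothetical file
sf.write("s5_guide_track_fixed.wav", -audio, sample_rate)
```

Any NLE or audio editor's "invert phase/polarity" switch does the same thing; this is just the operation spelled out.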
  8. Here's the next one! The footage here is actually a bit older. It was also shot with the GX85 in 4K 24fps, using the Speedmaster 25mm 0.95 and Lumix 12-32mm pancake, on location in Filothei, Athens, Greece, on 16 April 2019. Edited and color graded in DaVinci Resolve Studio with Emotive Color S1Alex. Here I used FilmConvert Nitrate with the print slider turned to zero (as a negative film emulation only) and a separate PFE node at the end. About the music: I used the following: Yamaha TX802, DOD Rubberneck, Lexicon MX200, Boss DC-2w, Elektron Analog Heat, Valhalla Supermassive. The environmental sounds are again from the actual field recordings made during the shoot. The music was composed and mixed in Cubase Pro and mastered in iZotope Ozone. A note about the field recordings: they are heavily processed, both to give this "out of this world" atmosphere and to cover up a bit the fact that they were just recorded with the GX85's internal microphone. At the time I didn't even have the idea of using the original field recordings, as I normally never do for my music videos, so it didn't matter! Future projects will be improved in this regard.
    1 point
  9. Basically Grant Petty (Blackmagic CEO) said it becomes a 6K camera when you put an EF, PL, or F/G (Nikon) lens on it, and they're delivering it with an EF mount thrown in. He said they got tired of waiting for backordered sensors for the old Ursa Broadcast, so they took the opportunity to redesign it around a "new" sensor. Since the specs are so close, everyone is assuming this is the same sensor as the Pocket 6K, including its not-so-great rolling shutter. I agree that this looks like a really versatile camera for documentary and indie filmmakers, although it sounds like there may be a few limitations compared with the BMPCC 6K (e.g., no cinema 4K modes or 2.8K slow motion, plus you have to use gain to set exposure, which is something broadcast people are used to doing but cinematographers are not). Still, I find this form factor more compelling, and if I ever upgrade from my current cameras this will be near the top of my list of considerations. Grant said existing Ursa accessories would be compatible.
    1 point
  10. Dynamic range seems fine in raw; just protect the highlights and pull the shadows in post if needed.
    1 point