Everything posted by sanveer

  1. I haven't used it on an ILC, but I remember using it on my Samsung S7 smartphone, where it did some creative editing with the sound. The video vacillated between being slowed down in post and playing at real time, but the sound played at real time, so it sounded like regular 24fps. I am guessing that at slowed speeds it may sound like some strange animal yawning.
  2. 120fps with sound is actually pretty useless. But some ILCs do apparently record sound at 120fps. Not sure which ones.
  3. The whole crop factor thing is interesting, because within APS-C (and APS-H) alone there are something like 4 different crop factors (from 1.73x to 1.29x). I also clearly remember the debate about the GH2 having a 1.86x crop instead of a 2.0x crop (for video, instead of for photos?). I also read somewhere that the Panasonic GH5S has an even wider crop (1.80-1.83x?). I am not sure the argument about the sensor of the GH5S being too large for IBIS is correct, then. Panasonic could then easily fit such a multi-aspect-ratio sensor (MARS) inside the next GH camera (GH6). They could keep the MP count at 20, since I am guessing that with the latest sensor tech the low-light video performance should be extremely similar to the GH5S. The GH5S doesn't seem very good for stills in low light, surprisingly, so the GH6 could have an advantage over it here (apart from the IBIS and higher MP count). Also, since the actual sensor area isn't much different between APS-C and M43 (varying most noticeably in width rather than height), especially for MARS, the dynamic range and low-light performance, too, theoretically shouldn't be too different.
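The crop factors above can be sanity-checked with simple geometry: a crop factor is just the full-frame diagonal divided by the sensor's diagonal. A minimal sketch (the sensor dimensions below are the commonly published active-area sizes, not figures from this thread):

```python
import math

FULL_FRAME_DIAGONAL = math.hypot(36.0, 24.0)  # ~43.27 mm

def crop_factor(width_mm, height_mm):
    """Crop factor = full-frame diagonal / sensor diagonal."""
    return FULL_FRAME_DIAGONAL / math.hypot(width_mm, height_mm)

# Standard Micro Four Thirds active area (17.3 x 13.0 mm)
print(round(crop_factor(17.3, 13.0), 2))   # -> 2.0
# Typical APS-C (23.5 x 15.6 mm)
print(round(crop_factor(23.5, 15.6), 2))   # -> 1.53
```

This also shows why a multi-aspect sensor can report a slightly wider crop: its image circle allows a larger usable diagonal than the standard 4:3 area, so the divisor grows and the crop factor drops below 2.0x.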
  4. Hopefully Panasonic starts coming clean about the future of M43, and especially the GH6. If it's still on, it should be future-proofed, like the GH5 was, for another 3-4 years: full V-Log, better low light by 1.5-2 stops, 4K at 120-180fps, at least 1.5 stops more video dynamic range, way better video autofocus (PDAF, finally), 6K at up to 60p, 14-bit photos, a 3.5-inch LCD, much lower lag on HDMI (and a slightly bigger sensor?).
  5. I was also curious about the lag on HDMI cables, on both ILCs and TVs (especially as compared to SDI). It appears the lag is not due to the HDMI cable (or standard); it's due to the processing before the feed. Which probably means that if camera makers don't colour-process the output to the HDMI, and if the hardware feeding the HDMI is fast enough, there really shouldn't be much lag (not enough to be noticeable). I wish Panasonic would look into this for the next ILC (GH6?), as should all other ILC makers.
  6. Thanks @Andrew Reid and @Anaconda_ The menu is extremely clean, and the screen appears larger and more square than regular LCDs. I guess the next round of ILC video wars is beginning. It almost feels like the GH5 announcement 3.5 years back. I guess once the Sony A7S III specs are announced, too, people may start feeling the full thrill. It's video ILC love in the time of Corona.
  7. Interesting. I am wondering whether a larger LCD would have been a better idea. Or using that money for a mini-XLR input, better heat management, and perhaps cost-cutting, since the new sensor and the body redesigned for 10-bit video are already going to push costs up substantially. Plus the other new features.
  8. Whoa. That's over 4K resolution on a tiny EVF. I wonder what the refresh rate would be? Edit: I saw that it's dots, not pixels; 4 dots make up a single pixel. So yeah, much less, but still almost double what's in the best ILC EVFs available right now.
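For context on the dots-vs-pixels point: EVF and LCD resolutions are usually quoted in dots, where several single-colour dots make up one full pixel (3 for an RGB stripe; the post assumes 4, which would match an RGBW-style layout). A rough conversion, using a hypothetical 9.44M-dot panel as the example:

```python
def dots_to_pixels(dot_count, dots_per_pixel=3):
    """Convert a quoted panel dot count to an approximate full-pixel count."""
    return dot_count // dots_per_pixel

# Hypothetical 9.44M-dot EVF panel
print(dots_to_pixels(9_440_000))     # RGB stripe (3 dots/pixel): 3146666
print(dots_to_pixels(9_440_000, 4))  # RGBW layout (4 dots/pixel): 2360000
```

Either way, the pixel count ends up well below the quoted dot count, which is the correction the edit in the post is making.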
  9. It now gets up to 20 hours of life from 4 x AA batteries. If it has so many inputs, could monitoring have been made easier? Maybe that's on the app(s)?
  10. Is there any proof you've done any manner of filmmaking, professional or otherwise?
  11. Dynamic range is how much information you can see in the shadows and highlights. All of that may not be usable in post, and depending upon the signal-to-noise ratio, it could vary greatly. Exposure latitude is how much of that you can further push in post without degrading the image enough to make it unusable. It has to do with the sensor and, more importantly, the codec bit depth: a 14-bit codec will always be a (little) better than a 12-bit one, a 12-bit one better than a 10-bit one, etc. I am guessing the placement of middle grey would also govern the latitude, and whether the highlights or the shadow information is better protected. Sensor size, to some extent, would also govern how much latitude is available in the final image. Please correct me if I am wrong.
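To make the bit-depth point concrete: under a simple linear-quantization model (an assumption here, ignoring log curves and noise), each extra bit doubles the number of code values available to describe the same scene, which is why a higher-bit-depth codec leaves more room to push in post before banding appears:

```python
def code_values(bit_depth):
    """Discrete tonal levels a codec of this bit depth can store per channel."""
    return 2 ** bit_depth

for bits in (8, 10, 12, 14):
    print(f"{bits}-bit: {code_values(bits)} levels")
# 8-bit: 256, 10-bit: 1024, 12-bit: 4096, 14-bit: 16384
```

So pushing an 8-bit file by a couple of stops spreads only 256 levels across the corrected range, while a 12-bit file has 16x as many levels to spread, degrading far more gracefully.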
  12. Remember the infamous Michael Woodford case, in which he exposed over US$1.7 billion in scam money at Olympus, involving fake auditing and sequestering of money (a euphemism for illegal wire transfers to the Cayman Islands, among other places). It appears that Olympus probably never recovered from that, because when the scam was exposed they had to isolate one of their 3 businesses. So it was probably prudent to let their camera imaging business bite the bullet. Maybe Olympus was just a sinking ship; it may have merely been a question of when.
  13. The IBIS may be common or similar for most Japanese companies, including the one in Fuji (which had similar tech but smaller-sized components, since it sports larger sensors). I am guessing Olympus may have been licensing it to some of them. True. Though Olympus has full-frame lens patents, which they could have used for the L-Mount Alliance. Except, perhaps, they would have had to figure out which lenses would be covered by Sigma and which by Olympus (?).
  14. I initially thought it's Sony. But while Panasonic has 10-bit video and no PDAF, Olympus had PDAF and no 10-bit video. Which makes me suddenly wonder whether it's Sony putting those restrictions in place, or Panasonic and Olympus themselves. Fuji has 10-bit video, but it's 4:2:0, though that isn't as big a difference as one would imagine. And everyone else using Sony sensors seems to have external 10-bit video, including Sony themselves. About the sale of the Imaging division, I am curious why Olympus wants to sell it to be set up as a completely new entity, unless it plans to further sell it or allow such prospects. It's at an MoU stage right now, so a miracle could still save Olympus. Though a company would only want to buy into M43 if it offered something above other ILCs, and a roadmap for noticeable further improvements and innovations. I am also guessing that Olympus may want to sell their division only to a Japanese company. I am suddenly wondering whether Panasonic's M43 division would also announce something equally shocking, sooner rather than later?
  15. The question is, how will Panasonic handle this, considering that a lot of the technologies in the whole M43 ecosystem are actually Olympus technologies? Also, should M43 users and buyers suddenly be more conscious, now that development on the part of Olympus has stopped? Also, why suddenly now, since Olympus Imaging had been making losses for a while (3 consecutive years, as the statement says)? Hopefully a lot of questions are finally answered regarding why sensors haven't been refreshed in a while, why Olympus didn't get 10-bit video, why Olympus didn't join the L-Mount Alliance, and many others.
  16. No word on autofocus. So it's confirmed as neither contrast-based nor PDAF. Though another G-series camera, released with such urgency, may highlight the roadmap for Panasonic (a new sensor, new autofocus, or some important new technology that should eventually spill into the majority of Panasonic ILCs?).
  17. True. Even Phantom cameras have a rolling shutter, with a global shutter switch. The dynamic range loss, low-light issues and image degradation must be pretty substantial.
  18. You have no idea about the concept of ownership and management of a company (or any other organisation). I wasn't making fun of Red or anyone there. I was making fun of the name thrower. The one who claimed that one has 'to be close to the owner to even get on the list'.
  19. Are you sure it's not Jeffrey Epstein instead of James Jannard here, that you're talking about??? 😂 I am guessing you and Jannard are on buddy nicknames. Elena and Johny, perhaps?
  20. After hyping the Komodo as much as the Hydrogen One project, it suddenly appears to have turned into an exalted and glorified embarrassment. "The KOMODO was RED’s answer for creating a much better crash action cam. They wanted to solve a problem at the high end where GoPro wasn’t creating a good enough image to be intercut with other cameras. KOMODO certainly evolved from that original concept, but it still has managed to maintain a small footprint. Jarred talks about how the KOMODO evolved over the years because the technology wasn’t available back when they first started developing it. He also tells viewers that 'he doesn’t see the point in releasing the full specifications of the camera until you can actually buy it. RED didn’t want to over or under promise what they could deliver by publishing specifications before everything on the camera was finalized'. While I get it, the camera is shipping to select people so I see no reason why the specifications can’t be released." This entire explanation seems both apologetic and like some form of cover-up, much like the Hydrogen One project. Bathos, at its finest. https://www.newsshooter.com/2020/06/06/red-komodo-qa/
  21. I've only upressed a pic once, on a laptop. It did a surprisingly good job, the app wasn't too large, and the whole thing happened quickly too. But it was a single pic: JPEG, 12MP and from a smartphone. For the 16-bit TIFF photos that Topaz Labs advertises, I am guessing it must be processor-intensive. For video it may be a lot worse.
  22. I have been saying this for the longest time. And now it appears that most things can be fixed and improved in post. Theoretically, as long as a JPEG (or any other 8-bit photo or video codec) isn't exposed absolutely terribly, it's possible not just to improve (usable) dynamic range, but also to increase the bit depth (super useful for post work) as well as the resolution. This doesn't mean that all photos will suddenly start looking insanely better, but most photos which don't have severe highlight clipping or way too much shadow noise can look remarkably better. This also means that codecs can stretch dynamic range (way) above the bit depth, as long as the sensor allows for it. Interestingly, this should be even better for modern smartphone photos, since those suffer from limited detail and bit depth, noise (except in night mode), and lower resolution. I noticed it already in the upressing video from Topaz Labs where 480p was upressed to 4K. It removed all kinds of artefacts and banding, which implies that, intentionally or unintentionally, it was also improving the bit depth. This particular software is more for photos, but the same principle applies. Though it would be super strenuous on a system to handle this. This JPEG to RAW AI engine should be even more effective for smartphones. https://topazlabs.com/jpeg-to-raw-ai/