
Leaderboard

Popular Content

Showing content with the highest reputation on 03/06/2022 in all areas

  1. TomTheDP

    Panasonic GH6

    I don't know, I often need dynamic range indoors when dealing with windows, and I often shoot at 4000 ISO on my S1 indoors. Dynamic range in lower-light situations is helpful, especially in the shadows, which is where the DGO sensors seem to shine. Yeah, it would be nice if the GH6 had more dynamic range, but it's honestly very respectable, and maintaining color accuracy throughout the dynamic range is more important than the dynamic range itself. The S1H released with terrible NR baked in, but it was later fixed. The S1 released without that issue and looks incredibly organic. The GH5 had the same NR issue upon release, but it was later fixed. The GH5 always did have a sharpened look; the S1/S1H doesn't. BRAW has sort of a softer look; not sure why, but there's some sort of processing going on. If you shoot CDNG on BM cameras you'll see the difference in texture. It's a pretty subtle thing and probably doesn't really matter. Like anything else, it would be more apparent on the big screen.
    3 points
  2. PannySVHS

    Panasonic GH6

    The 150Mbit 10-bit codecs on the GH5, and especially on the S series, are very robust. They smoke the 240Mbit codec from the FS7; it's no contest. The flavour on the S1, S1H and S5 also does well compared to BRAW, being equally robust in the shadows. I was amazed by the GH5 under good and moderately low light. The S series has a powerhouse of a 10-bit Long GOP codec. Panny is a big boss when it comes to codecs.
    3 points
  3. Hey, no problem Geoffrey, it's OK to have a different opinion and express it. I've explored every single setting and then some, and in my experience, when it comes to tracking people walking towards you at anything above even a moderate pace, it's not reliable. Arguably, no camera is 100% reliable, but in my experience the XT3 was better than the S5, neither is as good as the Nikon Z6 I tried, and that isn't as good as anything from Sony or Canon. But the S5 still remains my main video tool for professional work.
    3 points
  4. It's probably fine for holidays, although if I had young children around three or four, I don't know how good the AF would be at following them around. I am sure that my significantly older Sony a6500 or Olympus E-M1 Mark II (with firmware 3.0 or later) would track them FAR better. Is the S5 a professional camera? Well, I would say there are a significant number of professionals using it to make money (including yours truly), if not as an A cam, at least as a B or C cam. It is a good jack-of-all-trades camera, IMHO. If the continuous autofocus were brought up to the level of Sony or Canon, or the new OM-1, it would be an absolutely great camera. Paul Byun has made a couple of good videos about his experience as a commercial videographer using the continuous AF, and he has found it usable, but he has made some compromises that, to be honest, kind of offset many of the great benefits of shooting on the S5 or S1 cameras (e.g., shooting at 60fps, shooting in APS-C crop mode, shooting in a profile with higher contrast, shooting on primes at f/2.8 instead of f/1.8, etc.). So they aren't terrible cameras and, as you said, they work great for non-critical work. But they really are just a reliable continuous autofocus system away from being the perfect camera (at least in my experience).
    2 points
  5. Here you go, try it with this thumbnail to tick all the boxes that will get you on the front page.
    2 points
  6. Yeah, the 6K photo mode is actually 5.2K 200Mbit/s H.265. It's not bad at all, but it's 30fps. As I mainly shoot in 24fps, I like the slight slowing down of 30fps footage on a 24fps timeline. I usually don't need that much resolution, but it's nice that it's there. As for the Long GOP codecs on the S series, I've read that they're much more robust than the older Long GOP of the GH cameras. For this reason, for example, Alex from Emotive Color always recommended All-I on the GH5 for his LUTs, but is also fine with the Long GOP codecs from the S series.
    2 points
  7. I have been using the S5 human-detect continuous AF quite a lot recently. I shoot using the CineD2 profile (standard kit lens) and mainly outdoors, handheld. It works pretty well, and I am a bit baffled by the criticism it gets here. Maybe that is because people expect 99% success with it when in reality it is more like, I dunno, 85%? I suppose with crucial commercial shoots this becomes an issue. But with practice you get to know when it falls over: mainly when it's too dark or too bright, or movement is too rapid (so practice with it really does help, like most things). Avoid that and it works very well the vast majority of the time, and I love the look it gives. In fact, as I have gained experience with the S5 generally, I have grown to love it more and more, faults and all.
    2 points
  8. It's all such a missed opportunity really. The internet has morphed from "we put ads around what you watch, that keeps it free" to "literally everything is an advert". It all ties up with the false narrative of infinite growth, the endless destruction of resources in order to replace goods with intentionally shortened lifespans, the rewarding of the most narcissistic and least intellectually and emotionally developed people with the greatest prizes... I've had VR since very early on; it's cool but full of flaws, and I hope more people do good things with it. But what's supposed to be the draw of a highly corporatised "metaverse"? I get to watch even more invasive adverts, have lowest-common-denominator "influencers" shouting and pulling WIDEMOUTH WIDEYES face directly injected into my eyes, all the while being somehow exploited for data in ever more dystopian ways? Why would I bother? On the Canon Rumors chap's meltdown on Twitter: I find camera YouTube remarkably depressing because it's just endless clones of the same #content over and over saying the same things. Every person with pastel-coloured backlight, every person with a softbox off to the side, every person saying the same things and doing the same tests, hoping to get their scraps from the table. Then you buy a camera and actually use it to make stuff, and it is NOTHING like any of them said it would be, because they only scratch the surface and move on to pulling pogface beside the next piece of gear... I was thinking maybe I should make a "2 years in" review of the S1H because it'd be a bit different, but I bet the algorithms would just hide it because it isn't selling the latest trending model. That's how disheartening it is; it just makes me think "why bother".
    2 points
  9. Correct, all the codecs on the S1 / S5 / S1R are Long GOP (unless you hook up a recorder, in which case you can record in ProRes, ProRes RAW, or BRAW). Further, the S5 has internal recording time limits and no internal 6K (the S1 does). There is a "cheat" with the 6K photo mode, but I haven't really used it; I read that it is only about 5K, not 6K, or something like that. @interceptor121 did an examination of the Long GOP frame distributions of the H.264 codecs and came to the conclusion that while the 10-bit codecs have more color fidelity and flexibility, the 8-bit codecs offer a better distribution of GOP frames and would be less likely to suffer from motion artefacts. I don't believe he was able to analyze the H.265 codec, but he said from his experience it is probably better to use 200Mbps H.265 instead of the 150/100Mbps H.264 codecs. I will admit that I haven't even TRIED shooting in H.265 because I thought it would kill my computer. Maybe my RTX 2060 Super has hardware acceleration for it, though? (I don't know how well my i7-6700, which is now several generations old, will handle decoding H.265.) I guess there is always transcoding, right?
    2 points
  10. So I have an update on this. I had somehow missed a firmware update that was supposed to fix a stabilization bug: the IBIS could stop working after a WB change or something along those lines. I flashed firmware v1.3 and it seems to be working correctly now. I obviously want to test it a bit longer, but first observations are encouraging! 🙂
    2 points
  11. Thank you again, @kye, for once more making my life a living Hell... 😀 In all seriousness, I am thinking about getting a GH6 to replace my S1. The better autofocus, lighter weight, bigger lens selection, internal ProRes 4:2:2, ability to record to an SSD, and All-I codecs all appeal to me, and at 4K 60fps (using Dynamic Range Boost) the GH6's DR is going to be roughly equivalent, since I tend to be one of those people more likely to stop down a lens, and the added DOF of m43 would allow me to use faster lenses for the same DOF (so use a one-stop-lower aperture?). Math is not one of my skills. My only two real issues after that are mixing and matching lenses with my full-frame S5, and the fact that the S5 has a thirty-minute time limit (and no internal 6K). I guess logically I would sell the S5, keep the S1 and add the GH6. But logic and I aren't really on speaking terms 🙈
    2 points
  12. Basically budget, workflow and shooting scenario, but that's where the cameras and what they offer are important to look at. On Alexas, shooting RAW is 3 to 6 times the bitrate of ProRes. That translates into cost when you know how expensive the media is on Alexas. But if you want to shoot the highest res in open-gate mode, then it's RAW only. RAW also requires additional steps in post, and time = money. If you've got a very fast-turnaround project, you may simply not be able to afford to go RAW. In some scenarios you may even want to bake in a LUT; ProRes makes sense for that. But on RED/BMD you may still want to go RAW over ProRes because the RAW compression options are so good, ProRes doesn't go up to 4444, and the media isn't proprietary. And even once you've selected RAW, there are a bunch of compression-ratio options to choose from depending on requirements. For example, RED gives these suggestions: HQ is qualified for VFX work where you need extreme detail or need to pull stills from motion. Next is MQ, or medium quality; non-VFX cinema and high-end TV productions typically use MQ, and in MQ a 512GB card can capture up to 48 minutes. Last up is LQ, or low quality; in LQ, a 512GB card records 77 minutes, and RED says its intended use is TV, online content, documentary, interviews, and long takes. In the end, I guess my point was that it depends on an ensemble of factors.
    2 points
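The card-capacity figures quoted above are easy to sanity-check. A rough sketch in Python (my own back-of-the-envelope math, not RED's official calculator; the 1400 Mb/s figure is just an assumed average bitrate for illustration):

```python
# Minutes of footage a card holds at a given average video bitrate.
# Back-of-the-envelope only; real cameras vary bitrate with content.
def record_minutes(card_gb: float, bitrate_mbps: float) -> float:
    """card_gb in gigabytes (10^9 bytes); bitrate_mbps in megabits per second."""
    card_megabits = card_gb * 8000  # 1 GB = 8000 megabits
    return card_megabits / bitrate_mbps / 60

# An assumed ~1400 Mb/s average lands near the 48-minute MQ figure
# quoted above for a 512GB card.
print(round(record_minutes(512, 1400)))  # → 49
```

Working the math backwards the same way shows why RAW at 3 to 6 times the ProRes bitrate eats media so quickly.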
  13. Well, yes, it does close the gap faster, but I think because you are moving, the lock on the subject seems to stick better; maybe because, since you and the camera are both moving, it is keeping check on it more? Might just be luck, but I like the effect anyway. I do admit that, because I am relatively new to using CAF, I perhaps have lower expectations of its capabilities. One thing that has helped is the subject lock-on technique that the video outlined some pages back. Given your experience here, do you have preferred CAF settings (I mean the fine-tuning settings deep in the menu)? EDIT: apologies, I just saw you have already posted these. Very useful, thanks. I shoot 50p (1080; I don't have the gear for 4K editing), so that is good.
    1 point
  14. hyalinejim

    Panasonic GH6

    I seem to remember that the CineD tests showed cleaner shadow noise in Boost.
    1 point
  15. Django

    Panasonic GH6

    Not all DGO sensors work alike: from the tests I've seen, the DGO on the GH6 increases the highlight range but doesn't do much for the shadows. It's the opposite of the C70/C300 III, which emphasises clean shadows but doesn't really boost highlights.
    1 point
  16. Django

    Panasonic GH6

    Really? I always thought the GH series had that sharpened / NR look. BMD cams in contrast have very little image processing, especially if you shoot RAW. As for the GH6's "DGO" sensor, it does seem a little underwhelming when compared to the P4K. It's also pretty annoying that DR Boost mode turns the base ISO to 2000. I'd rather it be at a low value like 400. At 2000 you're going to need NDs or have to close the iris during daylight, which is exactly when you want that DR boost... odd choice.
    1 point
  17. I agree; I think VR is about the stupidest thing ever created. It has nothing to do with reality, so why pursue it? It is like TikTok, dumb as hell.
    1 point
  18. TomTheDP

    Panasonic GH6

    They have similar highlight latitude, but when underexposed the P4K doesn't do as well, especially with color. The P6K is pretty excellent in that regard; however, I have heard of fixed-pattern noise issues when the sensor gets hot. The P6K also has not-so-great rolling shutter. Panasonic has also been known to have a less processed image with more texture.
    1 point
  19. Yeah, understood. I admit to having little experience with CAF (or V-Log) on other cameras, but I have read that the Sonys are 'better'. I do wonder how much better in the conditions you describe (V-Log etc.)? If 85% = 0% in effect for certain situations, what percentage success is OK, given that none of them work 100%? We do ask quite a lot of these machines! My impression from the OP was that they wanted a cam for holidays and stuff, and I would say the S5 CAF is totally fine for that (yeah, you get a bit of pulsing too, but not much, and you only really notice it if you look for it, which most people won't). Maybe this is the difference: total professional reliability versus more general use, where some technical failures don't matter. The question I would have there is: is the S5 a fully 'professional' camera to be totally relied on? Given the price, I would say it is definitely not, and so it should be judged accordingly.
    1 point
  20. Yes, consider it an MMX of sorts that takes input from the AFX and then talks to the lens of the virtual camera in the same way the MMX talks to FIZ motors. And it speaks to both simultaneously of course. The additional development is in the integration of a positioning element. More (relatively) soon.
    1 point
  21. webrunner5

    Panasonic GH6

    Interesting video with a different take. He makes some good points I think.
    1 point
  22. Thanks for the food for thought. The one saving grace for me is that none of the m43 wide zooms have lens stabilization; they all rely on IBIS instead. For me that will be difficult, because even though I shoot my real estate videos on a gimbal, I basically NEED only two-axis IBIS, along with the three axes that are controlled by lens stabilization. Otherwise, when shooting a full-frame-equivalent 16-30mm (i.e., 8-15mm on m43), you get a lot of corner warping if only using IBIS. At least as far as I have experienced so far. Maybe the GH6 will somehow rectify the corner-warping problem?
    1 point
  23. Sorry to hear this; it's probably too expensive to fix. A few thoughts: it could be a faulty sensor. In a nice clean environment, maybe give it a gentle blow-out with a rocket blower to remove any stray fibres or dust that might be interfering with the mechanism. You could try flashing the firmware again, assuming it will let you, as maybe that has become a little corrupted over time. Otherwise, that flipping-it-around trick might be enough, depending on your shooting style and situation.
    1 point
  24. kye

    Panasonic GH6

    Wow, the S1 has no All-I codecs? That's a bummer. The P4K has about the same DR as the GH6 in DR Boost mode: 11.6 / 12.7 vs 11.5 / 12.8 (source is the P6K lab test, where they make some comments about the P4K: https://www.cined.com/blackmagic-pocket-cinema-camera-6k-lab-test-dynamic-range-latitude-rolling-shutter-more/). From my perspective the GH6 and P4K are so incredibly different that they're incomparable. You'd have to be shooting in pretty narrow and controlled conditions for their differences not to be relevant: IBIS, battery life, photo mode, automatic settings, etc. I doubt it. They've announced ProRes RAW with Atomos, and I doubt they'll provide a second RAW option; most cameras don't even provide one RAW option!!
    1 point
  25. Some thoughts... Think of the total ecosystem that you have: cameras, lenses, batteries and chargers, and all your accessories, and then build a system that works for you. Think about failures and having equipment redundancy too. My travel setup is two MFT cameras and an action camera. If I want to shoot simultaneously, I can use the same lenses across the two bodies. If my action camera dies, I can use a wide lens on a second body to emulate it. If my main camera (GH5), lens (Voigt 17.5mm) and mic (Rode VMP) disappear, then I have backups for those three (GX85, 14mm f2.5, Rode Video Micro) which are small and easy to carry as spares. If my backup / second setup dies, then I can use my action camera in a pinch, and there's also my phone. The crop factor between MFT and FF is 2, so to match the FOV and DoF of a 50mm at f/4 on a FF camera, you would need a 25mm f/2 lens on an MFT camera. But you should also think about the base ISO of 2000 on the GH6, as I'd assume you'd be using V-Log and DR Boost, which might require new NDs to suit the new lenses you'll probably have to buy. Don't just think of the fact that MFT has faster lenses for the same DoF, because the sensor is smaller and therefore has higher noise and requires more light for the same performance; but then sensor technology and processing all apply, and then there are sensors with dual ISO or dual gain, and then there's blah blah blah. Long story short: there are too many variables in play to predict low-light capability. The only real way to compare low light is to look at a test of the two cameras you're comparing shooting the same scene with equivalent lenses.
    1 point
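The 50mm-f/4-equals-25mm-f/2 equivalence above generalises to any lens; a quick sketch (my own helper for illustration, assuming the usual crop factor of 2 for MFT):

```python
# Match full-frame FOV and DoF from an MFT lens: multiply both the
# focal length and the f-stop by the crop factor (2.0 for MFT).
def ff_equivalent(mft_focal_mm: float, mft_fstop: float, crop: float = 2.0):
    """Return the full-frame (focal length, f-stop) with the same FOV and DoF."""
    return mft_focal_mm * crop, mft_fstop * crop

# A 25mm f/2 on MFT frames and renders DoF like a 50mm f/4 on full frame.
print(ff_equivalent(25, 2.0))  # → (50.0, 4.0)
```

Note this matches field of view and depth of field only; total light gathered and noise still depend on the sensor, as the post says.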
  26. Sony color may have been accurate under proper single-source lighting in a studio, but the real test is under poor lighting conditions. That is where Canon, RED and ARRI do their thing.
    1 point
  27. Well, as more and more content is, and will continue to be, consumed on HDR devices (OLED and QLED TVs, smartphones and computers), I'm surprised at this response. My read on deliverables is that SDR has not only been "shown the door", it is being gingerly escorted out. It's why I began to shoot 10-bit with my S1s and have been an online hound dog for internal 12-bit. As I told a (well-known) Panasonic rep at PDN Expo back in 2018, explaining why I wanted this in their cameras: "Just because I can't see it now (in the SDR realm) doesn't mean I won't be able to see it when 10- and 12-bit monitoring and consumption become the norm in a few years. I don't have the luxury of hopping in a time machine to go back and refilm that solar eclipse, or that eagle snatching a fish, in 10- or 12-bit. However, if I shoot it today for HDR, I can always come back in the future to make it shine!" As the saying goes, "skate to where the puck is going to be".
    1 point
  28. Its role in this scenario is just a cheap and dirty way for me to generate basic input, so an LPD-3806 would be overkill 😉 The real-time dynamic information is coming from the camera system. Essentially, this is sending the raw data and the Blueprint itself is doing the interpretation, so it's conceptually the same. Because I am generating the MIDI data directly rather than taking it from a control surface and re-interpreting it, I'm able to use the standard MIDI data structures in different ways to work around the limitations of unsigned 7-bit data values (0-127), because the Blueprint will know how to interpret them. So if I want to change the aperture to f/5.6, for example, I can split that inside a basic Note On message on a specific channel, sending it on channel 7 with a note value of 5 and a velocity value of 6. The Blueprint knows that anything it receives on MIDI channel 7 is an aperture value, takes the note value as the units and the velocity value as the fraction, and reconstitutes it to 5.6 for the virtual camera. Using the 14-bit MIDI messages such as Pitch Bend or Breath Control also offers more scope for workarounds for negative values: for example, using the midpoint of the range (8192) as a "0" point, interpreting positive and negative values relative to that, and using division if necessary to create fractional values (e.g. roughly -819.2 to +819.1, or -81.92 to +81.91, etc.). The reason to take the MIDI path was to use BLE MIDI to keep everything wireless and let it sit within the existing BLE architecture I have created for camera and lens/lens-motor control, so it made for much faster integration. Having said all that... as the saying goes, there is no point having a mind if you can't change it, and things do change with this over the subsequent steps.
    1 point
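The encoding scheme described above can be sketched like this (my reconstruction in Python, not the author's actual Blueprint code; the channel number and divisors are the examples given in the post):

```python
# MIDI data bytes are 7-bit (0-127), so f/5.6 is split across a Note On:
# note = units, velocity = tenths, on a channel reserved for aperture.
APERTURE_CHANNEL = 7  # example channel from the post

def encode_aperture(fstop: float):
    units = int(fstop)
    tenths = round((fstop - units) * 10)
    return (APERTURE_CHANNEL, units, tenths)  # (channel, note, velocity)

def decode_aperture(channel: int, note: int, velocity: int) -> float:
    assert channel == APERTURE_CHANNEL
    return note + velocity / 10

# 14-bit messages (e.g. Pitch Bend, 0-16383) can carry signed values by
# treating the midpoint 8192 as zero and scaling by a divisor.
def encode_signed(value: float, divisor: float = 10.0) -> int:
    return 8192 + round(value * divisor)

def decode_signed(raw14: int, divisor: float = 10.0) -> float:
    return (raw14 - 8192) / divisor

print(decode_aperture(*encode_aperture(5.6)))  # → 5.6
print(decode_signed(encode_signed(-81.9)))     # → -81.9
```

The receiving end only has to know which channel or message type maps to which parameter, which is exactly the "Blueprint does the interpretation" idea above.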
  29. The video is purely a prototyping illustration of the transmission of the parameter changes to the virtual camera from an external source, rather than a demonstration of the performance, hence the coarse parameter changes from the encoder. So it is purely a communication hub that sits within a far more comprehensive system, one taking dynamic input from a positional system but also direct input from the physical camera itself in terms of focus position, focal length, aperture, white balance, etc. As a tracking-only option, the Virtual Plugin is very good, though; but yes, my goal is not currently the actual end use of the virtual camera. Spoiler alert, though: what you see here is a description of where I was quite a few weeks ago 😉
    1 point
  30. I have a control surface I made for various software. I have a couple of rotary encoders just like the one you have, which I use for adjusting selections, but I got a higher-resolution one (LPD-3806) for finer controls, like rotating objects or controlling automation curves. Just like you said, having infinite scrolling is imperative for flexible control. I recommend still passing raw data from the dev board to the PC and using desktop software to interpret it. It's much faster to iterate, and you have much more CPU power and memory available. I wrote an app that receives the raw data from my control surface over USB, then transmits messages out to the controlled software using OSC. I like OSC better than MIDI because you aren't limited to low-resolution 7-bit data values; you can send float or even string values. Plus, OSC is much more explicit about port numbers, at least in the implementations I've used. But having desktop software interpret everything was a game changer for me compared to sending MIDI directly from the Arduino.
    1 point
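For anyone curious what an OSC float message actually looks like on the wire, here's a minimal encoder following the OSC 1.0 spec using only the Python stdlib (the /aperture address is made up for illustration; a real project would normally use a library like python-osc):

```python
import struct

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: padded address string,
    a ',f' type-tag string, then a big-endian 32-bit float."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

msg = osc_message("/aperture", 5.6)
print(len(msg))  # → 20
```

The 20 bytes break down as 12 (padded address) + 4 (padded type tag) + 4 (float32), which is why OSC can carry full floats and strings where a single MIDI data byte cannot.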
  31. I know very well that you are a very good programmer, and surely you are trying to make your own system to control the Unreal camera, but after seeing the example in the video, I wonder whether it would not be much more convenient to use an already-made app, such as the Virtual Plugin I used in my tests... You can control different parameters such as movement and focus, you can move around the world with virtual joysticks as well as being able to walk, you can lock the axes at will, etc. Give it a look; there is also a demo that can be requested via email: https://www.unrealengine.com/marketplace/en-US/product/virtual-plugin
    1 point
  32. leslie

    bmp4k adventures

    Left my run a bit late today, but I did make some reflectors. Camera of choice was the Olympus M10 with a Viltrox speed booster and the 50mm Super Takumar. I learned that I crushed the blacks on an already burnt snag. 🙄 I was almost wide open, so that's a pretty narrow depth of field. So tomorrow's effort will be a slower shutter, a narrower aperture, a bigger reflector and perhaps more light.
    1 point
  33. Pretty cool! I had forgotten about that project; I'm glad they are still out there. I checked their demo footage and it looks great. I went back to the Apertus homepage too. There were two updates in the last year, and the most recent was five months ago. Maybe there's more going on than I realize, but they seem to have lost a lot of momentum.
    1 point
  34. Octopus Cinema Camera actually posted yesterday about a side project they're working on using their software with the Raspberry Pi camera. https://www.instagram.com/p/CaaLmwyKNhO/
    1 point