Towd Posted July 10, 2020

9 hours ago, kye said: Looking at it, there is some evidence that all of them have some jitter, possibly ringing from the IBIS. I got more enthusiastic with my pans in the latter tests so there's less data points, so they're not really comparable directly, but should give some idea.

I love this! Honestly, I thought you'd need a more controlled test, but you do seem to be picking up some random motion with these uncontrolled panning tests. It would be nice if you had some kind of accurately repeatable motion that could be run consistently for each test. Maybe even a metronome, or a spinning fan blade if you zoomed in enough to get a nice measurable motion sweep in each frame. I also think that a high-speed shutter is great for tracking the inconsistent motion, but it may be helpful to repeat the test with a 180-degree shutter. I think one of the key components of proper "motion cadence" is good motion blur. We should see motion blur that streaks through half the distance traveled in each frame if it is being recorded correctly. To this point, I could see how in a high-contrast scene where a bright area is moving into the shoulder of the exposure, the motion blur may become less apparent, as an example of something people might perceive as bad motion cadence. Just one of many possible examples of what would constitute bad motion blur.

13 hours ago, kye said: Also, if the shutter operated like a square wave with each pixel going from not being exposed to being fully exposed instantaneously then the edges of the blur would be sharp, although much lower in contrast.

I think you are referring to this pic from the BTM_Pix link: I tried to look up what this system he is referring to is, but the link didn't seem to be working.
But I believe he is referring to some kind of post-processing motion blur system, as you could then synthesize whatever kind of exposure curve you wanted digitally from a high sample of frames. To the best of my knowledge there is no camera shutter that ramps up its photo-sites' sensitivity over time. What I mean is that while film or digital exposure is a logarithmic process of recording light that ramps up, you still have a rolling shutter, global shutter, or spinning mechanical shutter where every discrete point on an exposed image is either being fully exposed or off. It could be argued that there is a microsecond when a mechanical shutter is only partially obscuring a pixel, but I think the effect would be so minimal it would disappear into noise. I guess you could also reproduce that Tessive time filter with some kind of graduated ND mechanical shutter, but I've never heard of that. So while exposure is logarithmic and mapped to an S-curve, every discrete pixel is either fully on and recording or off, and I've never heard of any kind of "ramping shutter". If there is one out there, I'd love to read about it!

So, trying not to get too distracted, my point is that photo sensors in cameras are either on or off, like in the first square wave picture. What gives motion blur its gradual ramping from on to off is how much time the object being recorded spends lighting that pixel during the time the photo site is "on", no matter the shutter being used. I hope I'm making sense.

I really love your tests. One small request though: can you do your tests in 24 or 23.976 fps? 25 fps is a weird format. Yes, I know you PAL guys love it because it is *close* to cinema, but it really is a TV standard and not a cinema standard. Also, I don't have any monitors that will do 50, 75, or 100 Hz playback. But if that's all you have, I'll make do with PAL. 😜
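The on/off shutter argument above can be put in numbers with a toy integration: a point light moving across the frame during one exposure, with the shutter modelled as a per-time weight. A constant weight stands in for an ordinary square-wave shutter; a raised-cosine weight stands in for a "soft" shutter in the spirit of the Tessive filter. This is purely illustrative; all numbers are made up.

```python
import math

def render_streak(weight_fn, n_samples=1000, width=20):
    """Accumulate light into `width` pixels while a point moves linearly
    across them, weighting each time sample by weight_fn(t), t in [0, 1)."""
    pixels = [0.0] * width
    for i in range(n_samples):
        t = i / n_samples                 # time within the exposure
        x = int(t * width)                # pixel the point is lighting now
        pixels[x] += weight_fn(t) / n_samples
    return pixels

square = render_streak(lambda t: 1.0)
soft = render_streak(lambda t: 0.5 - 0.5 * math.cos(2 * math.pi * t))

# Square shutter: every pixel in the streak collects the same energy,
# so the blur starts and stops abruptly (hard edges, lower contrast).
print(max(square) / min(square))          # close to 1.0
# Soft shutter: the ends of the streak collect far less energy than the
# middle, so the blur fades in and out with no hard edge to lock onto.
print(soft[0] < soft[10], soft[-1] < soft[10])
```

The shape of the weight function is exactly what a graduated eND shutter would control; an on/off shutter can only ever produce the flat version.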
Towd Posted July 10, 2020

Okay, so I looked at the video you posted and ran it through a 3D tracking/camera solving app I own called "SynthEyes". Here is the test object I tracked: And this is a graph of its 2D motion. From what I'm seeing, the horizontal motion seems pretty smooth. The bright red is velocity changes in X and the dark red (brown) is the position changes. Even the velocity changes seem pretty smooth.

You mention having the IBIS turned on. I think I'd definitely turn that off, because who knows what it is adding to the test. I noticed that specifically on the frame where the blue vertical line is (where I had the play head) there is a big bump in velocity. That was probably the floating sensor adjusting to the fact that you were changing the direction of your pan from left to right around that time, and it stuttered a bit, confused about what to lock to.

I also resampled the footage down into a 1080p MXF video for playback on my 1080p 120 Hz monitor, and to be honest it looked really smooth. One thing I noticed when trying to play back the original file was a lot of stuttering, because my PC couldn't decode the 10-bit UHD in real time. That would definitely lead to cadence issues if you are having a similar issue. It's really important that you're not dropping frames during playback. Before we had realtime playback of video, animators used to use flipbook software to load all their frames into RAM to ensure they were not dropping frames. You might want to try that if you are unsure. There is an open source one I use occasionally called DJV that is pretty good. It won't play back 30 minutes of UHD video because you have to load everything into RAM, but it might work in a pinch for short clips. Link: https://darbyjohnston.github.io/DJV/

Anyway, I'm still interested in more tests, but from my end, I think the motion on the GH5 looks pretty good and smooth.
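The core of the 2D-track analysis described above (positions per frame in, velocities per frame out, with a flag on frames whose velocity stands out, like an IBIS stutter on a direction change) can be sketched in a few lines. The threshold and the sample track are invented for illustration:

```python
def velocity_spikes(xs, threshold=2.0):
    """Return frame numbers where the per-frame velocity is far above
    the average speed of the pan. Threshold is arbitrary."""
    vel = [b - a for a, b in zip(xs, xs[1:])]     # change in X per frame
    mean_speed = sum(abs(v) for v in vel) / len(vel)
    return [i + 1 for i, v in enumerate(vel)
            if abs(v) > threshold * mean_speed]

# A steady pan with one sudden bump at frame 5:
track_x = [0, 10, 20, 30, 40, 75, 60, 70, 80]
print(velocity_spikes(track_x))   # [5]
```

A real tracker's solve gives sub-pixel positions, but the principle is the same: jitter shows up as velocity outliers against the smooth pan.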
Towd Posted July 10, 2020

Thinking about it some more, I could see how these marked velocity changes in the middle of the pan could possibly be seen as motion cadence problems, and they should correspond to what Kye was graphing in his plots, that is, the change in X on each frame. But to my eye they are fairly inconsistent in smoothness on each pan. Some pans were really smooth and some have a bit of random jumpiness, which leads me to believe they are probably random noise introduced by the IBIS, or subtle irregularities in Kye's panning.
kye Posted July 11, 2020 Author

Great discussion!

4 hours ago, Towd said: I love this! Honestly, I thought you'd need a more controlled test, but you do seem to be picking up some random motion with these uncontrolled panning tests. It would be nice if you had some kind of accurately repeatable motion that could be run consistently for each test. Maybe even a metronome, or a spinning fan blade if you zoomed in enough to get a nice measurable motion sweep in each frame.

Yeah, if we are going to compare amounts of jitter then we'd need something repeatable. These tests were really to try and see if I could measure any in the GH5, which I did. The setup I was thinking about was the camera fixed on a tripod pointing at a setup where something could swing freely; if I dropped the swinging object from a consistent height then it would be repeatable.

4 hours ago, Towd said: I also think that a high-speed shutter is great for tracking the inconsistent motion, but it may be helpful to repeat the test with a 180-degree shutter. I think one of the key components of proper "motion cadence" is good motion blur. We should see motion blur that streaks through half the distance traveled in each frame if it is being recorded correctly. To this point, I could see how in a high-contrast scene where a bright area is moving into the shoulder of the exposure, the motion blur may become less apparent, as an example of something people might perceive as bad motion cadence. Just one of many possible examples of what would constitute bad motion blur.

If we want to measure jitter then we need to make it as obvious as possible, which is why I did short exposures. When we want to test how visible the jitter is then we will want to add things like a 180 shutter. One is about objective measurement, the other about perception.

4 hours ago, Towd said: I think you are referring to this pic from the BTM_Pix link:

Yes, that's what I was referring to.
I agree that you'd have to do something very special in order to avoid a square wave, and that basically every camera we own isn't doing that. The Ed David video, shot with high-end cameras, supports this too, with motion blurs starting and stopping abruptly. One thing that was discussed in the thread was filming at high frame rates with a 360-degree shutter and then putting multiple frames together. That enables adjusting shutter angle and frame rates in post, but also means that you could fade the first/last frames to create a more gradual profile. That could be possible with the Motion Blur functions in post as well, although who knows if it's implemented.

4 hours ago, Towd said: I hope I'm making sense. I really love your tests. One small request though: can you do your tests in 24 or 23.976 fps? 25 fps is a weird format. Yes, I know you PAL guys love it because it is *close* to cinema, but it really is a TV standard and not a cinema standard. Also, I don't have any monitors that will do 50, 75, or 100 Hz playback. But if that's all you have, I'll make do with PAL. 😜

Sure. I guess that'll make everything cinematic, right? (joking..) Considering that it doesn't matter how fast we watch these things, it might be easier to find an option where we can just specify what frame rate to play the file back at. Do you know of software that will let us specify this? That would also help people to diagnose what plays best on their system.

That "SynthEyes" program would save me a lot of effort! And it validates my results, so that's cool. I can look at turning IBIS off if we start tripod work. To a certain extent the next question I care about is how visible this stuff is. If the results are that no one can tell below a certain amount, and IBIS sits below that amount, then it doesn't matter.
In these tests we have to control our variables, but we also need to keep our eyes on the prize 🙂

2 hours ago, Towd said: Thinking about it some more, I could see how these marked velocity changes in the middle of the pan could possibly be seen as motion cadence problems, and they should correspond to what Kye was graphing in his plots, that is, the change in X on each frame. But to my eye they are fairly inconsistent in smoothness on each pan. Some pans were really smooth and some have a bit of random jumpiness, which leads me to believe they are probably random noise introduced by the IBIS, or subtle irregularities in Kye's panning.

One thing I noticed from Ed David's video (below) was that the hand-held motion is basically shaky. Try looking at the shots pointing at LA Renaissance Auto School frame by frame. In my pans it was easy to see that there was an inconsistent speed, i.e. the pan would slow down for a few frames then speed up for a few frames. You can only tell that this is inconsistent because for the few frames that are slower, you have frames on both sides to compare them to. You couldn't tell those were slow if the camera was stopped on either side; that would simply appear to be moving vs not moving. This is important because the above video has hand-held motion where the camera will go up for a few frames then stop, then go sideways for a frame, then stop, then.... I think that it's not possible to determine timing errors in such motion because each motion from the hand-held doesn't last long enough to get an average to compare with. I think this might be a fundamental limit of detecting jitter: if the motion has any variation in it from camera movement, then any camera jitter will be obscured by the camera shake. Camera shake IS jitter. In that sense, hand-held motion means that your camera's jitter performance is irrelevant. I didn't realise we'd end up there, but it makes sense.
Camera moves on a tripod or other system with smooth movement (such as a pan on a well-damped fluid head) will still be sensitive to jitter, and a fixed camera position will be most sensitive to it. That's something to contemplate! Furthermore, all the discussion about noise and other factors obscuring jitter would also apply to camera shake. Adding noise may reduce perceived camera shake! Another thing to contemplate! Wow, cool discussion.
BTM_Pix Posted July 11, 2020

While you are down this rabbit hole, you might want to take a look at the RED Motion Mount that they did for the EPIC. http://docs.red.com/955-0013/REDMOTIONMOUNTOperationGuide/Content/1_MM_Intro/1_Intro.htm As well as doing variable ND, it had a few tricks including simulating soft shutter and square shutter. There are links within that page to other explanatory documents about the process. What may be of use to you are the videos of it in action, such as this one, as it's a unique reference really for how the look of the same camera can be changed with these simulations.
kye Posted July 11, 2020 Author

@BTM_Pix that motion mount is AWESOME! It also, unfortunately, probably means that there are patents galore in there to stop other camera companies from implementing such a thing. An eND that does global shutter with gradual onset of exposure is such a clever use of the tech. What is interesting is how much it obscures the edges when in soft shutter mode. Square: Soft: This would really make perception of jitter very difficult, as there are no harsh edges for the eye to lock onto, effectively masking jitter from the camera. And integrating ND into it as well is great. One day maybe we'll have an eND in every camera that can do global shutter and soft shutter, and combined with ISO will give us full automatic control of exposure from starlight to direct sunlight, where we simply specify shutter angle and it takes care of the rest. The future's so bright, I have to wear eNDs.
BTM_Pix Posted July 11, 2020

I always found it an interesting example of the reality gap between what everyone says they need, in this case global shutter, and how many people actually then buy it. You don't exactly see a lot of them about.
Towd Posted July 11, 2020

18 minutes ago, kye said: The setup I was thinking about was the camera fixed on a tripod pointing at a setup where something could swing freely; if I dropped the swinging object from a consistent height then it would be repeatable.

Sounds good to me!

36 minutes ago, kye said: If we want to measure jitter then we need to make it as obvious as possible, which is why I did short exposures.

I agree and totally get it. To be honest, I'm going to be really surprised if we discover any reasonable prosumer camera or higher that is recording frames at inconsistent rates. Yes, a cheapo camera or phone might be doing something like a 3:2 pulldown or some drop-frame process while recording at a different frame rate. Who knows! I may be wrong, and I'd love to find out if some manufacturer has been sneaking that by us. But if there is some kind of drop-frame processing happening, I would imagine it would be in a very consistent pattern, like 60 Hz converted to 24 fps. I really can't see a camera being so dodgy that it is inconsistently recording exposures at subtly different rates from frame to frame to frame. I know in time-lapse mode on some still cameras there can be minor variations, but I'm going to be floored if we discover this in a video recording mode. My personal theory is that most motion cadence issues have to do with motion blur artifacts, incorrect or variable shutter angles (like from aperture priority shooting), rolling shutter, or playback issues on the viewer's side.

In regards to 24 vs 25 fps... Isn't PAL a regional standard for, like, the local news? I'm pretty sure 24 fps is the global film/cinema standard. You are doing the tests, so I'll get by with whatever, but seriously... 25 fps.... really??? 😎

1 hour ago, kye said: That "SynthEyes" program would save me a lot of effort! And it validates my results, so that's cool. I can look at turning IBIS off if we start tripod work.
I can run some more tests if it helps. It's pretty fast on my end. I think you can get a 15 or 30 day trial if you want to try it. It's pretty useful, if a little obtuse. In regards to IBIS, I think we are certainly introducing errors into our measurements by using it. Could be interesting to test jitter/cadence with and without it though.

1 hour ago, kye said: The Ed David video, shot with high-end cameras, supports this too, with motion blurs starting and stopping abruptly:

I'd just note that this is pretty much exactly how motion blur records to film, if we consider that the holy grail. Everything else I agree with... camera shake is jitter, and trying to discern some kind of motion cadence from it is probably optimistic at best.

53 minutes ago, BTM_Pix said: While you are down this rabbit hole, you might want to take a look at the RED Motion Mount that they did for the EPIC. http://docs.red.com/955-0013/REDMOTIONMOUNTOperationGuide/Content/1_MM_Intro/1_Intro.htm

@BTM_Pix Thanks for this link! I had a feeling somebody somewhere must have invented a variable ND soft shutter. And I had a strange feeling you'd be the one to know about it! Very interesting. 😊
kye Posted July 11, 2020 Author

1 hour ago, Towd said: I agree and totally get it. To be honest, I'm going to be really surprised if we discover any reasonable prosumer camera or higher that is recording frames at inconsistent rates. Yes, a cheapo camera or phone might be doing something like a 3:2 pulldown or some drop-frame process while recording at a different frame rate. Who knows! I may be wrong, and I'd love to find out if some manufacturer has been sneaking that by us. But if there is some kind of drop-frame processing happening, I would imagine it would be in a very consistent pattern, like 60 Hz converted to 24 fps. I really can't see a camera being so dodgy that it is inconsistently recording exposures at subtly different rates from frame to frame to frame. I know in time-lapse mode on some still cameras there can be minor variations, but I'm going to be floored if we discover this in a video recording mode.

We may find that there are variations, or maybe not. Typically, electronics has a timing function where a quartz crystal oscillator is in the circuit to provide a reference, but crystals resonate REALLY fast, often 16 million times per second, and that gets used in a frequency divider circuit so that the output clock only gets triggered every X clock cycles from the crystal. In that sense, the clock speed should be very stable; however, there are also temperature effects and other things that act over a much slower timeframe and might be in the realm of the frame rates we're talking about. Jitter is a big deal in audio reproduction, and lots of work has been done in that area to measure and reduce its effects. However, audio has sampling rates at 1/44100th of a second intervals, so any variations in timing have many samples to be observed over, whereas 1/24th of a second intervals have very few data points to be able to notice patterns in.
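The divider argument can be sketched in numbers. This assumes an illustrative 16 MHz crystal; the point is that integer division leaves a tiny fixed rate error, and slow ppm-scale drift stretches every frame interval by the same amount, which is a rate error rather than frame-to-frame jitter:

```python
CRYSTAL_HZ = 16_000_000      # illustrative crystal frequency
TARGET_FPS = 24

divider = CRYSTAL_HZ // TARGET_FPS        # crystal ticks per frame
actual_fps = CRYSTAL_HZ / divider         # tiny fixed error from integer division
print(divider, actual_fps)

# A slow 10 ppm temperature drift changes every frame interval equally.
drifted_fps = actual_fps * (1 + 10e-6)
frame_interval_error_us = (1 / actual_fps - 1 / drifted_fps) * 1e6
print(f"{frame_interval_error_us:.4f} us per frame")
```

Sub-microsecond, uniform shifts like this would be invisible at 24 fps; genuinely random per-frame timing would be a different (and much stranger) failure mode.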
I've spent way more time playing with high-end audio than I have playing with cameras, and in audio there are lots of arguments about what is audible vs what is measurable etc (if you think people arguing over cameras is savage, you would be in for a shock!). However, one theory I have developed that bridges the two camps is that human perception is much more acute than is generally believed, especially in regards to being able to perceive patterns within a signal with a lot of noise. In audio if a distortion is a lot quieter than the background noise then it is believed to be inaudible, however humans are capable of hearing things well below the levels of the noise, and I have found this to be true in practice. If we apply this principle to video then it may mean that humans are capable of detecting jitter even if other factors (such as semi-random hand-held motion) are large enough that it seems like they would obscure the jitter. In this sense, camera jitter may still be detectable even if there is a lot of other jitter from things like camera-movement also in the footage. 1 hour ago, Towd said: In regards to 24 vs 25 fps... Isn't PAL a regional standard for like the local news? I'm pretty sure 24 fps is the global film/cinema standard. You are doing the tests, so I'll get by with whatever, but seriously... 25 fps.... really??? 😎 LOL, maybe it is. I don't know as I haven't had the ability to receive TV in my home for probably a decade now, maybe more. Everything I watch comes in through the internet. I shoot 25p as it's a frame rate that is also common across more of what I shoot with, smartphones, action cameras, etc so I can more easily match frame rates. If I only shot with one camera then I'd change it without hesitation 🙂 For our tests I'm open to changing it. 1 hour ago, Towd said: I can run some more tests if it helps. It's pretty fast on my end. I think you can get a 15 or 30 day trial if you want to try it. It's pretty useful if a little obtuse. 
Maybe when I run a bunch of tests I'll record some really short clips, label them nicely, then send them your way for processing 🙂 1 hour ago, Towd said: In regards to IBIS, I think we are certainly introducing errors into our measurements by using it. Could be interesting to test jitter/cadence with and without it though. Yeah, it would be interesting to see what effects it has, if any. The trick will be getting a setup that gives us repeatable camera movement. Any ideas? 1 hour ago, Towd said: I'd just note that, this is pretty much exactly how motion blur records to film. If we consider that the holy grail. We're getting into philosophical territory here, but I don't think we should consider film as the holy grail. Film is awesome, but I think that its strengths can be broken down into two categories: things that film does that are great because that's how human perception works, and things that film does that we like as part of the nostalgia we have for film. For example, film has almost infinite bit-depth, which is great and modern digital cameras are nicer when they have more bit-depth, but film also had gate weave, which we only apply to footage in post when we want to be nostalgic, and no-one is building it into cameras to bake it into the footage natively. From this point of view, I think in all technical discussions about video we should work out what is going on technically with the equipment, work out what aesthetic impacts that has, and then work out how to use the technology in such a way that it creates the aesthetics that will support our artistic vision. Ultimately, the tech lives to support the art, and we should bend the tech to that goal, and learning how to bend the tech to that goal is what we're talking about here. 
[Edit: and in terms of motion cadence, human perception is continuous and doesn't chop up our perception into frames, so motion cadence is the complete opposite of how we perceive the world. In this sense it might be something we would want to eliminate, as the aesthetic impact just pulls us out of the footage and reminds us we're watching a poor reproduction of something.]

1 hour ago, Towd said: Everything else I agree with... camera shake is jitter, and trying to discern some kind of motion cadence from it is probably optimistic at best.

Maybe and maybe not. We can always do tests to see if that's true or not, but the point of this thread is to test things and learn rather than just assuming things to be true. One thing that's interesting is that we can synthesise video clips to test these things. For example, let's imagine I make a video clip of a white circle on a black background moving around using keyframes. The motion of that will be completely smooth and jitter-free. I can also introduce small random movements into that motion to create a certain amount of jitter. We can then run blind tests to see if people can pick which one has the jitter. Or have a few levels of jitter and see how much jitter is perceivable. Taking those we can then apply varying amounts of motion blur and see if that threshold of perception changes. We can apply noise and see if it changes. Etc. etc. We can even do things in Resolve like film a clip hand-held for some camera shake, track that, then apply that tracking data to a stationary clip, and we can apply that at whatever strength we want. If enough people are willing to watch the footage and answer an anonymous survey then we could get data on all these things. The tests aren't that hard to design.
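A minimal version of the synthetic clip idea: a perfectly smooth motion path plus seeded random jitter of a chosen pixel amplitude. Rendering the white-circle frames and running the blind survey are left out; the function name and values here are invented for illustration:

```python
import random

def motion_path(n_frames, speed=4.0, jitter=0.0, seed=0):
    """X position per frame: a constant-speed pan plus uniform random
    jitter of up to +/- `jitter` pixels. Seeded so clips are repeatable."""
    rng = random.Random(seed)
    return [i * speed + rng.uniform(-jitter, jitter) for i in range(n_frames)]

smooth = motion_path(48)                  # zero-jitter reference clip
jittered = motion_path(48, jitter=1.5)    # same pan, plus up to 1.5 px jitter

# The underlying pan is identical; only the per-frame deviation from the
# ideal path differs, and that amplitude is the blind-test variable.
worst = max(abs(a - b) for a, b in zip(smooth, jittered))
print(worst <= 1.5)   # True
```

Sweeping the `jitter` amplitude across a set of otherwise identical clips gives exactly the graded stimuli a threshold-of-perception survey would need.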
BTM_Pix Posted July 11, 2020

4 hours ago, kye said: Yeah, it would be interesting to see what effects it has, if any. The trick will be getting a setup that gives us repeatable camera movement. Any ideas?

For a sub-£20 budget option, there are plenty of battery-operated product display turntables available on Amazon that you can just plonk the camera down in the centre of, and they will do the job for repeatable panning. If you want something a bit more elaborate and don't mind spending the extra £40-50, then Neewer and Andoer make very neat little motorised dollies. Their wheels can be set straight to do linear movements or angled to do inward or outward pans. They have different speed settings, so you have a very repeatable way to monitor the impact the movement speed has on your tests, and they are operated by remote control. The upside is that aside from using it for your test, you will also then have a film making tool that is a bit more creative than a turntable and can be put in your pocket and taken on holiday to up those production values!
kye Posted July 11, 2020 Author

@BTM_Pix can we be sure they're completely smooth? ie, if I put a camera on it, record a pan, analyse it and find jitter, how do I know if the jitter came from the camera or from the slider? I have the BMMCC so I could test things with that, but if I do the test above and get jitter then we won't know which is causing it. It would only be if we tested it with the Micro and got zero jitter that we'd know both were jitter-free. I'm thinking a more reliable method might be an analog movement that relies on physics. Freefall is an option, as there will be zero jitter, although I'm reminded of the phrase "it's not falling that kills you, it's landing" and that's not an attractive sentiment in this instance!! Maybe a non-motorised slider set at an angle so it will "fall" and pan as it does? Relying on friction is probably not a good idea as it could be patchy. The alternative is to simply stabilise the motion with large weights, but then that requires significantly stronger wheels and creates more friction etc.
BTM_Pix Posted July 11, 2020

37 minutes ago, kye said: @BTM_Pix can we be sure they're completely smooth? ie, if I put a camera on it, record a pan, analyse it and find jitter, how do I know if the jitter came from the camera or from the slider?

The turntable or the dolly?
kye Posted July 11, 2020 Author

1 hour ago, BTM_Pix said: The turntable or the dolly?

Either?
BTM_Pix Posted July 11, 2020

Just now, kye said: Either?

Well, I suppose it's all relative compared to what sort of tripod head you are using to do the tests now. Whatever the turntable or dolly are bringing to the party negatively, it is at least 100% repeatable and measurable, so you can build it in. The turntables are gear driven, which gives less variation than a belt-driven one, and the dolly can be run on a thin rubber mat to absorb the bumps. If you do have a slider then, as long as you have some vertical drop space (i.e. put it on a tripod), you could use a weight attached by a string to one end of the carriage to pull it as it falls. A water bottle would do the trick as the weight, as you could regulate the speed by how much water you put in. Nothing is going to be ideal unfortunately unless you want to throw a lot more money at it, so you'd have to consider how much of a marginal gain any of it will bring you in terms of this project and whether that's worth it. It's all money you could be spending on buying a used EPIC and Motion Mount instead 😉
kye Posted July 29, 2020 Author

Ok, here's the test. Video tests the best 24p GH5 modes.

Modes:
1080p 422 10-bit ALL-I 200Mbps h264
3.3K 422 10-bit ALL-I 400Mbps h264 (4:3 cropped)
C4K 422 10-bit Long-GOP 150Mbps h264
C4K 422 10-bit ALL-I 400Mbps h264
5K 420 10-bit Long-GOP 200Mbps h265 (4:3 cropped)

Tests:
Motion stress-test x 2 (beach and tree)
Motion cadence test
Skintone and colour density test

All shots in HLG profile. The export file (that I uploaded to YT) is here: https://www.sugarsync.com/pf/D8480669_08693060_8967657 It's C2K Prores (LT I think) at ~87Mbps and 1.17Gb. I can export a C4K version if there is enough interest. I went with C2K as people make feature films in 1080p Prores HQ, so if we can't tell the difference between GH5 modes in Prores LT then what are we even talking about? 🙂

An interesting observation from editing this was that during rendering, the 1080p mode was fastest (at around 30fps), the 3.3K and 4K ALL-I modes were next at around 18fps, followed by the 4K Long-GOP at around 13-15fps, then the 5K h265 at about 5fps. I don't have hardware h265 decoding, so that probably explains the 5K mode, but why is the Long-GOP codec slower when it's a straight sequential export? If I was playing the file backwards or seeking then I'd understand that ALL-I has the advantage, but in a straight export I don't understand why it would be slower. Regardless, it was something I noticed. Also, in editing, the 4K Long-GOP files aren't that nice to work with, but the ALL-I files are great, playing forwards and backwards basically without hesitation on my 2016 dual-core MBP laptop. Something to consider.
Towd Posted July 29, 2020

Nice tests! Well worth downloading the Prores file, as the YouTube compression really plays hell with all the stress tests. To my eye they all looked pretty solid. Maybe someone can pixel-peep some motion blur variations, or I'm totally missing something. The only thing that popped out to me was some exposure variation on the trees blowing when using the open gate modes. But since everything else looked perfectly matched up, I'm guessing that was just some shoot variation on that test. As a total gut check, I probably liked the 1080p ALL-I and C4K ALL-I the best, but if it was a blind test, I'd probably fail at identifying them. Also, thanks for 24p on all the tests! 👍
kye Posted July 30, 2020 Author

23 minutes ago, Towd said: Nice tests! Well worth downloading the Prores file, as the YouTube compression really plays hell with all the stress tests. To my eye they all looked pretty solid. Maybe someone can pixel-peep some motion blur variations, or I'm totally missing something. The only thing that popped out to me was some exposure variation on the blowing trees when using the open gate modes, but since everything else looked perfectly matched up, I'm guessing that was just some shooting variation on that test. As a total gut check, I probably liked the 1080p ALL-I and C4K ALL-I the best, but if it were a blind test, I'd probably fail to identify them. Also, thanks for 24p on all the tests! 👍

Yeah, the sun was in and out of clouds during the tree test, not the best, but it is what it is. I'm kind of having a change of heart about side-by-side tests too. If you can only tell the difference between two modes in a side-by-side test, but couldn't tell a collection of shots from one mode apart from a collection of shots from the other, will you really notice whether a film was shot on one vs the other? I don't think so.

No worries about 24p, I've now completely changed over. My iPhone only has 24p and 30p, as if the PAL countries don't exist. I thought my Sony X3000 only had 25p, but it turns out that's only when it's set to PAL; set it to NTSC and it has 24p and 30p. I guess cinema is only done in NTSC countries..... Maybe I should buy a bunch of world maps and mail them to every company in Silicon Valley; they seem to be unaware there is a world out here.

I'm also not seeing much difference between the different modes, even with the Prores export. Maybe I'm blind, but there it is.
On the back of this I'm tempted to use the 3.3K mode as it's a sweet spot: close to the highest bit-rate (for overall image quality), lower resolution (less processing strain from pushing pixels around), and ALL-I (easy to work with in post). The effective bit-rate is only 300Mbps because a 16:9 crop of the 4:3 frame only includes 75% of the total pixels. If I used it then I'd have 3.3K timelines for lower CPU/GPU loads in editing and just export at 4K for upload to YT, which would slightly soften the resolution, like the Alexa does when delivering 4K files from its 3.2K sensor.
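The 75% figure is easy to check. Assuming the 3.3K 4:3 mode is 3328x2496 (Panasonic's published spec, as far as I know), a 16:9 crop at full width works out like this:

```python
# Effective bit-rate of a 16:9 crop taken from the GH5's 3.3K 4:3 mode.
# The 3328x2496 resolution is an assumption from Panasonic's specs.
full_w, full_h = 3328, 2496          # 4:3 frame
crop_h = full_w * 9 // 16            # 16:9 crop height = 1872
pixel_fraction = crop_h / full_h     # fraction of pixels the crop keeps
effective_mbps = 400 * pixel_fraction
print(f"{pixel_fraction:.0%} of pixels -> ~{effective_mbps:.0f} Mbps effective")
```

So the crop keeps exactly 75% of the pixels, and 75% of 400Mbps gives the ~300Mbps effective figure quoted above.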
Towd Posted July 30, 2020

1 hour ago, kye said: On the back of this I'm tempted to use the 3.3K mode as it's a sweet spot: close to the highest bit-rate (for overall image quality), lower resolution (less processing strain from pushing pixels around), and ALL-I (easy to work with in post). The effective bit-rate is only 300Mbps because a 16:9 crop of the 4:3 frame only includes 75% of the total pixels. If I used it then I'd have 3.3K timelines for lower CPU/GPU loads in editing and just export at 4K for upload to YT, which would slightly soften the resolution, like the Alexa does when delivering 4K files from its 3.2K sensor.

I've thought about using that mode, but I still typically prefer to capture at max quality in 5K open gate or C4K in 10-bit. Then again, I typically don't do super fast turnarounds, and I just run proxies out overnight on all my footage using Premiere. But I can totally see the appeal for home movies and such, and 3.3K seems like a nice sweet spot for a 1080p product.

One thing I've settled on for slow motion is 1080p 10-bit at 60fps over one of the 4K 60fps modes in 8-bit. I much prefer all the 10-bit modes to the 8-bit ones on the GH5, and as your test shows, the 1080p is quite good. The 8-bit modes grade kind of crummy, especially on things like skies and clouds approaching clipping.

Oh, but just to note, I shoot V-Log for everything, just to keep things consistent.
kye Posted July 30, 2020 Author

6 hours ago, Towd said: One thing I've settled on for slow motion is 1080p 10-bit at 60fps over one of the 4K 60fps modes in 8-bit. I much prefer all the 10-bit modes to the 8-bit ones on the GH5, and as your test shows, the 1080p is quite good.

You raise an excellent point about 60p in 1080 getting you 10-bit. How do I set that on the camera? I have the latest firmware (2.7, only released very recently) and I'm in 24Hz cinema mode, and when I go into the menus there are only 24p modes available, and the 1080 10-bit mode does not have VFR as a valid option; only the 8-bit modes allow it. IIRC I had that mode in 25Hz PAL mode but not in 24Hz mode, and I see it's available in PAL or NTSC modes. Do I have to change system frequency and restart the camera? Or should I be in NTSC mode and be shooting 23.98fps to go with the 24p from my other cameras?? Won't the sync between 24p and 23.98 fail every two seconds or so? If I have to swap between system frequencies, that's a PITA if I want to just grab a quick shot (and by quick, I mean 5 seconds to change modes rather than 50 seconds). These camera modes and frame rates are doing my head in.
Towd Posted July 30, 2020

10 hours ago, kye said: How do I set that on the camera? I have the latest firmware (2.7, only released very recently) and I'm in 24Hz cinema mode, and when I go into the menus there are only 24p modes available, and the 1080 10-bit mode does not have VFR as a valid option; only the 8-bit modes allow it. IIRC I had that mode in 25Hz PAL mode but not in 24Hz mode.

Ah yeah... I remember trying the 24.0Hz cinema mode on my GH5 for a while, but found it didn't offer as many frame rates as the NTSC mode and swapped back. So it looks like FHD 10-bit 60fps is one of those missing modes. I know this doesn't help you, and I'd be incredibly annoyed to have to shoot in NTSC all the time while living in PAL land. The whole drop-frame timing thing in NTSC is itself very annoying, but in the US at least, 23.976 is just the "cinema" standard for most digital delivery, except for a strict 24.0fps DCP file. Swapping between the two, I believe, just involves adding or subtracting a frame every 40 seconds or so.

Honest question: in PAL countries, when converting a film at 24fps for playback on TV, is one frame just doubled up every second? In the old days of 29.97 tube TVs we did an annoying 3:2 pulldown that used interlacing to break up frames and convert them to ~30fps, since the TVs were really just running a 60Hz interlaced signal. Modern HD TVs just run at the 23.976 rate. I think it's all very apropos for the motion cadence issues users report from cameras. The playback device could be creating hell for a viewer depending on what country they are in!

10 hours ago, kye said: when I go into the menus there are only 24p modes available, and the 1080 10-bit mode does not have VFR as a valid option; only the 8-bit modes allow it.

Anyway, I'd love to see Panasonic expand the frame rates they offer in their 24Hz cinema mode. An FHD 10-bit 48fps mode at a minimum.
Or they could just add a 10-bit variable frame rate mode that could go up to 60fps (or maybe 72 or 96 😃).
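For what it's worth, the drift between strict 24.0fps and NTSC "23.976" is easy to work out, since 23.976 is really 24000/1001:

```python
from fractions import Fraction

# How fast does strict 24.0 fps drift against NTSC "23.976" (24000/1001) fps?
ntsc_24 = Fraction(24000, 1001)
drift_per_second = 24 - ntsc_24                    # frames of drift per second
seconds_per_frame_of_drift = 1 / drift_per_second  # = 1001/24 seconds
print(f"One full frame of drift every {float(seconds_per_frame_of_drift):.1f} s")
```

So the two rates slip by one full frame roughly every 42 seconds, nowhere near every two seconds.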