Everything posted by kye
-
This is a tricky subject, but you have nailed it with your comment "seems to be a lot of opinions". Just like everything else out there, if something is engineering or science, there will be a lot of opinions, and almost by definition they will all be WRONG. People who have OPINIONS about engineering or science are people who don't understand FACTS. I'm all for having opinions; we can talk about who likes what colour science, lighting design preferences, lens aesthetics, or whether someone is a good actor, but anyone who has an opinion about how many pixels are in the UHD specification is just being silly. This is the same thing.

There is a huge body of knowledge about how to get accuracy beyond a given bit depth in audio: properly recorded and processed 16-bit audio can have a better signal-to-noise ratio than the bit depth alone would suggest is mathematically possible, thanks to a technique called dithering, which works by adding a very specific type of noise to the signal. https://en.wikipedia.org/wiki/Dither

Fortunately, ISO noise on high-quality 4K cameras is a reasonably good version of that noise, so we can get a lot of the benefits. Downscaling from 4K to 1080 also involves oversampling, which, when combined with dither, can extract the extra bit depth and eliminate the noise that was added. https://en.wikipedia.org/wiki/Oversampling

There is an audio format called SACD which uses a type of digital signal called DSD: a 1-bit signal (yes, the bit depth is one bit!) sampled at 2.8224 MHz. Because of its clever use of noise and processing, it can have signal-to-noise ratios of up to 120dB, which would require a 20-bit signal in a traditional codec, but because it is oversampling (in a big way) this can be achieved. Getting 20-bit performance from 1-bit is only possible because DSD samples at about 64x the rate of 44.1kHz audio, combined with noise shaping that pushes the quantisation noise up out of the audible band. https://en.wikipedia.org/wiki/Super_Audio_CD

If DSD gets 19 extra bits from 64x oversampling, then it shouldn't be impossible to do a similar thing with video and get extra bits from resolution oversampling. However, and this is a key part of the picture, you will only get perfect 10-bit 4:4:4 1080p from 8-bit 4K footage if that 4K footage is RAW and the noise is perfect. Any variation in de-bayering, compression, or other processing applied between the data coming off the sensor and the downscale will have a damaging effect on the final result. This is where reality differs from theory, and the overall quality will differ depending on the camera, codec, bitrate, subject matter, and probably other things.

If none of that made sense, then here's a TLDR approximation: adding noise to 8-bit footage helps with banding for much the same reason that adding noise to your footage helps with YT colour banding. The mechanism is very different, but the effect is broadly similar. Anyway, let's put this to bed and go back to talking about cameras!
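To make the dither mechanism concrete, here's a minimal Python sketch. The "true level" of 100.4, the sample count, and the TPDF noise shape are illustrative choices of mine, not anything from a camera spec. Without dither, quantising a level that sits between two codes gives the same wrong answer every time; with dither, the error turns into noise that averaging (oversampling) can remove:

```python
import random

def quantize(x):
    """Hard quantizer: round to the nearest integer code value."""
    return round(x)

def dithered_quantize(x, rng):
    """Add TPDF dither (sum of two uniform randoms, +/-1 code peak)
    before quantizing, so the quantisation error becomes noise."""
    noise = rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)
    return round(x + noise)

rng = random.Random(42)
true_level = 100.4  # a brightness that falls between two 8-bit codes

plain = [quantize(true_level) for _ in range(4096)]
dithered = [dithered_quantize(true_level, rng) for _ in range(4096)]

# Without dither, every sample is 100: the 0.4 is lost forever (banding).
print(sum(plain) / len(plain))        # -> 100.0
# With dither, averaging many samples recovers the sub-code value.
print(sum(dithered) / len(dithered))  # -> approximately 100.4
```

Downscaling 4K to 1080 plays the role of the averaging step here, which is part of why noisy 8-bit footage can downscale so gracefully.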
-
Nice video! I make home videos for my family and that's a great one - I'm sure they were really happy. One of the reasons I like doing projects like these is that the footage will have a life of maybe 100 years or more, and it will become more valuable to the clients over time instead of less valuable, like a 2-year-old TV show or movie or doc that no-one remembers. I'd also imagine that if you're adapting FF lenses they'd easily cover the whole sensor... even with a 0.7X adapter? Here's a pic to get you fired up: SLR Magic 8mm on GH5. And if that tickles you then you'll absolutely love this one: Lomo 40mm on GH5 via an adapter, mounted in a hole cut in the body cap (it was a fixed lens with a proprietary mounting thread so no adapters were available).
-
The GH5 has a mode called Open Gate where it shoots with the whole sensor: 5K 4:3 video. This mode has less sharpening than the 4K modes even with the sharpening turned all the way down, and you can soften things up in post quite easily too. Your vintage lenses will also do that in-camera if you're using them for those shots. I don't know what editing software you are using, but if you have the time and energy, Resolve has a free version that you can use to make the footage look basically however you want. There are also LUTs that make the GH5 look like an ARRI Alexa, including the sharpness, so the image coming out of it is very flexible if you want it to be.

4K can be downsampled to create extra bit depth and colour information, although it depends on how the codec performs and how much noise there is in the signal (less noise is actually worse here). The explanation is very technical, but basically it works because every 10-bit 1080 pixel you get after the conversion is the average of 4x 8-bit pixels, so the average can land in between the 8-bit steps, and those 4 pixels contain all three colours before debayering. In practice it will be somewhere between 10-bit 4:4:4 and 8-bit 4:2:0 depending on exactly how the camera operates, but the benefit is real.

If the OP is willing to wait for the computer to render proxies, and for the final project to take longer to render out, then basically any computer can edit any resolution. We forget that people used to make broadcast TV in SD, which has 27x fewer pixels than 4K. Resolution pretty much doesn't matter when you're editing (it matters when you're grading or doing other things), so you can render proxies and edit them comfortably on a computer with less than 4% of the performance required for 4K editing. I render proxies at 720p and edit them on my laptop on the train. My computer is perfectly capable of playing 4K files, but I don't need the extra resolution, and saving space on the internal SSD that I'm editing from is a bonus too.
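The 4-pixels-into-1 averaging described above can be shown in a couple of lines of Python (the code values are invented for illustration):

```python
def downscale_2x2(block):
    """Average a 2x2 block of 8-bit codes into one 1080p pixel.

    Averages of four integers land on quarter-code steps, i.e. 4x as
    many possible levels, roughly 2 extra bits: 8-bit 4K -> ~10-bit
    1080p. But only if the four codes actually vary.
    """
    return sum(block) / 4.0

# With a little sensor noise, the four codes straddle the true level:
print(downscale_2x2([100, 100, 100, 101]))  # -> 100.25

# With no noise (or heavy noise reduction) all four codes collapse to
# the same value and the extra precision never materialises - which
# is why less noise can actually be worse for this trick:
print(downscale_2x2([100, 100, 100, 100]))  # -> 100.0
```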
-
Absolutely. I'd also suggest trying before you buy. One of the things I thought the GH5 would be absolutely great at was focus peaking, but it didn't stack up to what I had in my head. To be fair, I'm not sure if anything does, but there are situations where I found it wasn't that great (i.e., terrible). There are ways around it by punching in to focus, and it wouldn't have changed my decision to buy the camera, but it was still a disappointment. Sadly, as good as the GH5 is (and I absolutely love mine), no camera is perfect and every feature has its limits on every camera.
-
@kaylee You should also have asked if anyone has heard of it because of the latest news. I hadn't heard of that either; in fact, the only reason I am aware of it now is because you asked about it... Maybe it's a big news story and maybe not, I don't know, but what happens in the US is far less important when you don't live there.
-
Daylight WB and shooting straight into the late afternoon sun. If it works for Philip Bloom...
-
Flaring is interesting; I don't really know how that works - whether it's different at different apertures or focal distances. The frame is getting kind of crowded at this point! The one thing this test won't be so good for is the imperfections, because most of these lenses are FF and I'm using a MFT sensor, so a lens could have dancing bears in the corners and I'd never know it. This is where CA and the other issues tend to live, so unfortunately this is a "lens on MFT" test. Having said that, I do want some things in focus near the edges / corners so we can at least see how bad things are on MFT. There should be lots of things in the slightly out-of-focus areas, so I think that's covered. Smooth like butter!! It really is incredible, and mine is virtually brand new. The other Takumar lenses are all just as good, if a little firmer. It's a pity they don't look as good or have the focus ring going in the right direction!
-
I don't use AF on the GH5 so can't comment. Camera reviewers make a big deal of these things, but what they don't tell you is that they're often using it for recording the camera reviews they make. I would imagine it's quite a personal thing, depending on the circumstances you put the camera into. I think it's kind of gotten blown out of proportion really; it seems like the GH5 can't ever focus and the other cameras never miss focus, but that's just not true. Even if it was 90% vs 95%, that's still 90%. I don't think the XT-3 was out when I bought my GH5, but maybe I'm wrong. I guess in the same way that people think of the AF on the GH5 as "risking it", I thought of other cameras' IBIS as "risking it", because the GH5 was the IBIS king for a long time. I shoot almost exclusively hand-held so the IBIS is a huge deal for me. I'm not 100% sure that it is still the IBIS king now, but the combination of nearly the best IBIS, 10-bit internal, a huge range of lenses available, and rock-solid industry-standard performance was really the balance of things for me. People say the XT-3 is great, and they're probably right.
-
Ok, that makes sense now, I think. But you've convinced me to shoot some video. Maybe I can put up the stills and see which ones people want to see video for. That way I'm not carting around a whole suitcase full of lenses!
-
I struggled for ages with criteria that sound like they might be similar to yours. In the end it came down to the Sony A7III, with good AF and fast but expensive FF lenses, or the GH5, with 10-bit files, reliability, lack of overheating issues, and the pro features it offers, but limited AF. I ended up deciding that going manual focus worked for me and my style of film-making, and because the IBIS is so great you can adapt old lenses and still get great hand-held footage. So I now have the GH5 and a cheap eBay vintage lens collection so large that I hide it from my wife.
-
LOL, I wasn't trying to imply motion cadence doesn't exist, more that no-one seems to know where it is. With people moving in the background etc, is that a function of how the out-of-focus areas are rendered? Like, bokeh balls with hard edges look great for city lights but are really distracting for everything else? Or is it something different?
-
The $7 Petri 135mm f3.5 was the first to have cleaning attempted, and holy wow did it need it. It was absolutely filthy. There's a lot of discussion online about how Petri lenses are high-quality Japanese lenses but are "uncleanable" because the manufacturer glued the screws in, and even things like soaking them in acetone for days doesn't loosen them. Someone broke a screwdriver trying to get one apart. Luckily I had no problems with the screws on this one.

Unfortunately, I had two kinds of trouble. The first was that I couldn't get the front element out. I tried the old elastic bands and a rear lens cap, I tried PVC pipe and blu-tak (bad idea, it just gets pushed into the grooves), lens caps and tape; nothing would make it move. This is a pity because there is fungus between the front element and the next one. I was able to clean the back of the second element, which was filthy - it looked like someone had wiped the kitchen and bathroom benches and then wiped this with the same cloth. Soapy water and a rinse of distilled water got it nice and clean. The last element came out ok, but unfortunately it was actually two elements glued together as one, and naturally the filth was in between them, so game over at my current skill level. You can see it a bit in the pic above. However, I was able to clean both sides of that, which was also needed.

Now it's back together again and still works! Bonus!! Did I mention it was dirty?
-
I think vaguely, yes, but I know nothing about it, including where I might have seen it or where to go to watch it, or even if I should.
-
Damn, I was hoping you wouldn't have any good points, but... I'm not sure that movement is impacted by lenses, they all let light through in a constant stream at, well, the speed of light, so I don't think the mythical motion cadence comes from the lens. In terms of how these things translate to video though, I may have to take a few of the lenses to a nice place and film some stuff so we can see. To me, if you can see lens X recording video and go wow, that's good at Y or not good at Z, then you can use the stills to kind of work out what the other lenses would be like. I definitely don't want to record video for every lens at every aperture setting, that's for sure!!
-
Nah, it's the tiny spider drones and tiny insect drones that are the worrisome ones, because it doesn't matter how many locks you have on the door, they just come in through the vents and crawl on you when you're sleeping...
-
Actually, this halation effect comes from vintage lenses too. I just took this shot with a lens I paid basically nothing for on eBay and it's got the same fringing you show above: If you zoom in around the highlights you'll find that some stems on the plant are very red. (This lens is well regarded and made in Japan, but is available very cheaply because these lenses are almost impossible to take apart and clean.)
-
So, I discovered that the Petri 135/3.5 was marked as delivered a couple of weeks ago, "left in safe place near front door", and I searched for it but couldn't find it. I assumed it was lost or the postman stole it or whatever. It was $7; I was more concerned with the fact that I paid almost double that in shipping! Anyway, a lady knocked on the door - she lives some way down the street and found it hidden on her front verandah. The mighty Petri lives!!

So, after cleaning it up (it was absolutely filthy, I think they stored it in a barn, or in the glove box during their desert crossings) and moving the focus ring back and forth for the first time in about 1000 years until it stopped making grinding sounds, I took a few test shots. You know how they say "there's no such thing as a bad 135mm lens"? Well, I don't know if that's true, but if bad ones do exist they must cost a lot less than $7, because this one is lovely! A little soft wide open, sharp as hell stopped down a bit, lovely hex-shaped bokeh, and it even has a little retractable sun-hood, and came with a soft pouch (that hasn't aged well).

The mighty Petri 135/3.5 (image taken with Konica Hexanon 40/1.8): Sample shots wide open, unedited in any way: It's still filthy, and I think there's fungus in it too, but seriously... Oh, and these are on the GH5 with a straight adapter, so it's a 270mm equivalent on the 2x MFT crop.
-
Screw the hoverboard... we've gone past the time that Blade Runner took place. I want robotic pets!!
-
The price is definitely exotic, but having a larger screen is useful all the time. I spent the last few years at a company that does large urban planning projects, and people were constantly in meetings talking through options, and one of them would take out their phone and pull up Google Earth, and then two or three people would all be crowded around a 5 or 6 inch screen. Even if you have a laptop, there are so many times when it's easier to pull out your phone, or when you're comparing a plan on the laptop with satellite imagery on your phone, that the extra screen size is hugely useful. Just don't have notifications on for texts and messages - if you're showing the big boss something and your wife sexts you then that may not go down well!!
-
How does video look different? And wouldn't that be more subject to which camera you have? If I could shoot RAW video then that might be neutral enough, perhaps, but unless someone wants to send me a P4K, then....

Broadly: I want to see how my existing lenses (the ones I intend to keep) compare with the others; from the others I want to see which lenses I should keep (if any); I want to learn how to read an image so I can better evaluate images I see online; I want to learn more about what I like and why I like it; and I want to hear what others have to say so I can learn more about how other people see things. I also want to give back, because the internet (and all you lovely folks out there) has been crazy good to me over the years, and if I'm going to all this effort then why not? Even if I wasn't feeling generous, having other people point out things I didn't know to look for is well worth the work involved in sharing it.
-
If everyone gets what they want then that hole will be filled with the A7SIII
-
My last lenses are due to arrive, and before I shoot 10 million test images I want to make sure the test is good. I will be shooting lenses with a wide range of focal lengths, maximum apertures, and minimum focus distances. I've done a test mini-shootout that looked like this:

~50mm on SB F2.8:
~40mm no SB F2.8:
~20mm no SB F2.8:

I tried to keep the same framing, so I had to move the camera back with the longer focal lengths. Unfortunately, because of this the DOF is different on each lens even though those test shots are all at F2.8. Do I make a test scene where the framing is similar with each lens, so that you can compare resolution (i.e., the texture or reference charts) but accept that the DOF will be different, or do I keep the DOF the same but vary the framing, meaning that you can't compare resolution? I'll be testing lenses that range from 8mm F4 to 200mm F4, with apertures as fast as F0.95.

In terms of what I want to shoot in the test scene, here are my notes:
- Shooting stills, as they're higher resolution than video and are RAW, so it eliminates most of the colour science and processing from the camera
- Controlled lighting for consistency throughout the test
- Different colour temperature lights for a bit of colour (probably tungsten key with daylight WB and other lights for colour)
- Things in focus in the middle and at the edges of the frame (to test sharpness)
- Focus target in the centre
- Measuring tape running front-to-back to look at focus transitions
- Colour checker of some kind (I'll use my DIY one, not calibrated but good for comparisons)
- Fully manual settings and identical WB between shots to show colour differences
- Some bokeh with things out of focus, and probably also an LED light in the background for bokeh balls
- Test every lens at every f-stop (it will be a huge number of images, but good for comparison; in my mini-test I didn't do that and I wished I had)
- Shoot every lens at an identical set of settings to determine relative brightness

Thoughts? How should I frame it, and is there anything missing? (No, I don't have easy access to a model, plus I find their non-identical facial expressions taint the interpretation of the images.)
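On the framing-vs-DOF question, a quick calculator might help plan distances before shooting. This is a sketch using the standard thin-lens hyperfocal formulas; the 0.015mm circle of confusion is a commonly quoted figure for MFT, but that choice and the example numbers are my assumptions:

```python
def dof_limits(f_mm, f_number, subject_mm, coc_mm=0.015):
    """Near and far limits of acceptable focus (thin-lens approximation).

    f_mm: focal length, f_number: aperture, subject_mm: focus distance,
    coc_mm: circle of confusion (0.015mm is a common MFT figure).
    """
    hyperfocal = f_mm * f_mm / (f_number * coc_mm) + f_mm
    near = subject_mm * (hyperfocal - f_mm) / (hyperfocal + subject_mm - 2 * f_mm)
    if subject_mm >= hyperfocal:
        return near, float("inf")  # everything beyond near is in focus
    far = subject_mm * (hyperfocal - f_mm) / (hyperfocal - subject_mm)
    return near, far

# e.g. a 50mm at F2.8 focused at 2m:
near, far = dof_limits(50, 2.8, 2000)
print(round(far - near))  # total DOF in mm, roughly 130mm
```

Running it for each focal length and distance combination would let you tabulate the DOF for every planned setup before deciding which compromise to accept.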
-
@Towd you might be right that he's shooting an in-camera profile and just being careful with WB. That would make his workflow even faster, which, as @mercer points out, is exactly what you want from a business perspective.

In terms of getting good video from your Sony: if you can get a bit of noise into the footage before it's saved to the card, then shooting 4K 8-bit and downscaling to 1080 in post may have almost the same benefits as having 10-bit. I'd suggest dialling the NR as low as possible for this reason (assuming you haven't already). The Sony cameras aren't the best for colours, but we're really being picky here, and they should be fine to replicate the efforts of someone who, by the looks of it, doesn't adjust WB in post at all.

Considering this is for business purposes where simplicity is best, perhaps set up a test environment and start with the camera on the simplest settings possible: 4K with a standard colour profile, perhaps. Shoot it with proper WB across each of the profiles, then look at the footage and see where the problems are and what you might need to do to fix them, perhaps tweaking each profile to work out how good it can be in-camera; only then, if they don't work, would you add the complexity of shooting in LOG and having to grade at all. If you can dial in the right picture profile in-camera, then editing 1080 and outputting 1080 with (maybe) a couple of effects or a LUT or a power grade preset would be a much more effective workflow, and would allow you to deliver faster, book more clients, and get a pay rise.

I read audio technician magazines for years, and the recurring theme was that small business owners were always lusting after exotic microphones and luxury pre-amps that their clients don't notice, while they haven't had a holiday in a decade and should just take the kids to the Grand Canyon and watch a few sunsets with their wife.
-
Ok..... I've pulled out my grade and tried to replicate the overall look that @Towd created, and watched a couple of the guy's videos from his channel. Here are my impressions:

Towd's grade is quite nice; try his power grades and see how you like them. In Resolve you can save the whole thing as a power grade and apply it to each shot, and of course you can adjust each node, but remember you can also adjust the strength of each node in the Key -> Key Output adjustment, so you can dial back any node. You can also save each part of a grade separately and build the grade from those pieces; for example, you might have two WB presets, a CST node for each camera, etc, and then by right-clicking and selecting Append Node Graph it will add that node to your existing grade, so you can just build it up. Naming the grades "1 - WB warm", "1 - WB cool", "2 - CST a6300", "2 - CST drone" will let you quickly choose a "1" node, then a "2" node, etc, and not miss any steps.

This guy isn't a wizard with colour grading. Whatever he's doing, it's absolutely not some sophisticated post colouring workflow. I watched an early video, "Forrestdale residence", where the WB from the drone was pushed too purple (because the ground is mostly green), so the garden beds look purple, and he hasn't corrected it. This is a basic WB adjustment that he hasn't made. Also, the video "Park Townhomes Herston v1" has two interview shots early on with b-roll edited in-between them, and it's lucky they did, because the two shots have completely different WB and gamma curves - so much so that I noticed even with the gap. Don't get me wrong, some of his videos look great and from a non-colour perspective seem very nice; he's just not able to get consistently great results, which he would be if he was great at colour grading difficult footage. He also clips his highlights, as Towd mentioned, so there's that too.
I suspect he's setting WB in-camera and probably applying some kind of LUT or a basic grading package like Film Convert Pro where you adjust sliders or choose presets. This makes total sense: the guy is a professional videographer, not a professional colourist, so it's in his best interests to be pumping these things out, not fussing over post-processing.

One trick for controlling gamma and contrast is to use two contrast nodes. The first should have contrast <1 to reduce the contrast and bring the whole DR into the legal range. The second should increase the contrast again to look good. The advantage is that the second contrast node will compress highlights and shadows without pushing them out of legal range, so it's a way of keeping the DR in the shot while also controlling how much visual contrast there is. As a bonus, by adjusting the Pivot on the second contrast node you can adjust exposure, also without pushing anything out of range. This is a nice way to make all the shots equal brightness, and for R/E, where they like the properties to look light and spacious, it might be a good way to push things towards that high-key look.

In terms of what you can do in-camera, there are a few things: Take a custom WB with a card on every shot. If you use a monitor or can load display LUTs, pump the saturation right up so you can see any colour problems really easily on set, but don't bake this into the footage. You can also have a photo mode with the saturation right up and either look at the screen or take a couple of stills before rolling video, to show you the colours. Carry around a set of bulbs and replace any that aren't working (I think I've heard R/E videographers say they do this - Parker Walbeck perhaps). The offending colours are the green/purple ones, which IIRC normally come from fluorescent tubes?

If bulbs are yellow compared with daylight then that's fine, because it makes the home look warm and inviting, which is a good thing; yellow/blue shifts aren't a problem. If you can't replace a globe then just avoid the mixed-lighting shots. Turn the bad light off and lighten in post, or light that shot with your own source (carrying a single tungsten work-lamp and a light-stand might be a good idea anyway, as they're hugely powerful and really cheap), or just play the angles - no-one says viewers have to be able to watch the video and build a mental 3D map of the house. I've looked at house photos for decades when house-hunting and many of them are so ambiguous that the layout isn't clear even if you study the photos for hours!

Depending on your budget: I've seen high-end stills photographers spend an hour setting up huge numbers of lights in a room for one photograph, almost like doing dodge-and-burn in real life with little LED lights. This might not seem practical or economical, but if you want to get into higher-budget videos then a good strategy is to deliver a video that looks like it cost a lot more than the client paid; not only do you satisfy the customer, but when you want to book the next shoot at a higher budget you'll have an example of that higher-quality work. Remember that the job of the video is to get people to come to the house and talk to the agent. It's a brochure, so anything that isn't photogenic can probably be cut from the edit.
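The two-contrast-node trick can be sketched numerically. This is a toy Python model: the first function stands in for a contrast node with contrast < 1, and the s-curve stands in for the second node. It is not Resolve's actual contrast math, just a curve with the same bounded-output property, and its pivot is fixed at 0.5 for simplicity:

```python
def reduce_contrast(x, k=0.9, pivot=0.5):
    """First node: contrast k < 1 pulls values toward the pivot,
    bringing out-of-range values back toward legal range."""
    return pivot + k * (x - pivot)

def s_curve(x, strength=2.0):
    """Second node: adds midtone contrast while soft-compressing
    shadows and highlights, so the output always stays in [0, 1]."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    a = x ** strength
    return a / (a + (1.0 - x) ** strength)

# A shot with values spilling past legal range (0.0 to 1.0):
shot = [-0.05, 0.1, 0.5, 0.8, 1.05]
graded = [s_curve(reduce_contrast(v)) for v in shot]
print(all(0.0 <= v <= 1.0 for v in graded))  # -> True: nothing pushed out of range
```

The midtones gain contrast while the extremes are rolled off rather than clipped, which is the point of the two-node approach.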
-
In the liquor bottles those reds are very saturated, although the other colours may not have been - maybe that's a Sony thing? The greens in the foliage through the windows are pretty saturated too. I'm not sure how much improvement there would be from changing cameras versus how much of it is just the actual lighting in the location. Obviously with 10-bit files you can push the colour around a bit more easily, but it would be a lot of work to get rid of the green/purple behaviour in this shot. What comes to mind is keying the walls and flattening the saturation, obviously with a different key for the green vs the blue walls. There are a few ways to reduce colour variation: one is the Colour Compressor (I think that's what it's called?) and another is to partially desaturate the area and tint it towards the desired colour, but these are a lot of work in post.
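The partial-desaturation idea can be sketched per-pixel with the Python standard library's colorsys module. The hue window and the example wall colours below are made-up values, and a real grading app's qualifier would use much softer edges than this hard cutoff:

```python
import colorsys

def desaturate_hue_range(rgb, hue_lo, hue_hi, amount):
    """Reduce saturation by `amount` for pixels whose hue falls inside
    [hue_lo, hue_hi] (hues run 0-1; ~0.33 is green, ~0.8 is purple).
    A crude stand-in for a qualifier/key in a grading app."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    if hue_lo <= h <= hue_hi:
        s *= (1.0 - amount)
    return colorsys.hsv_to_rgb(h, s, v)

# Knock 60% of the saturation out of a greenish wall colour,
# while a warmer near-neutral tone passes through untouched:
green_wall = (0.35, 0.55, 0.30)
warm_wall = (0.60, 0.55, 0.50)
print(desaturate_hue_range(green_wall, 0.2, 0.45, 0.6))
print(desaturate_hue_range(warm_wall, 0.2, 0.45, 0.6))
```

Tinting towards a target colour would be a second step on top of this, nudging the hue as well as flattening the saturation.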