Posts posted by kye

  1. 11 minutes ago, Django said:

    There will be a DX mode in the Z6/Z7 just like on every current Nikon FF DSLR.

    Depending on their implementation, this can be a very useful feature, both for lens compatibility and as a digital teleconverter that doesn't lose resolution.  :)

  2. I've been playing with my Canon 700D, Magic Lantern 3x crop mode, and the 55-250 zoom lens (which is 264-1200mm equivalent in crop mode) and I've noticed a funny thing about it.

    If my kid is on the far side of the football field and I zoom all the way in to 1200mm and then manually focus (which is horrible at this distance on this lens BTW) then when my kid runs towards me and I naturally zoom out, the lens shifts its focus closer to me, effectively keeping my kid in focus, and actually helping me!

    I know about parfocal lenses that maintain the same focus distance throughout the zoom range, but do lens manufacturers deliberately design lenses so that zooming out helps you to maintain focus like this?  Or is it just a random happy coincidence it zooms closer and not further away?

    Considering that the difference between focusing on players on the far side of the football field and focusing on the cars and houses beyond it is a smaller tweak of the focus ring than the free play in the ring, it's pretty difficult to adjust focus.  Combined with the Canon screen and the lack of good focus assists (fast and clear enough to keep up with sports), it would be nearly impossible to film at larger apertures without this lens behaviour!
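    For reference, the 264-1200mm figure is just the optical focal length multiplied by the APS-C crop factor and the extra 3x crop. A one-liner sketch (the helper name is mine, purely illustrative):

```python
# Full-frame equivalent focal length for a lens on a cropped sensor,
# with an optional extra in-camera crop (e.g. Magic Lantern's 3x mode).
def ff_equivalent(focal_mm, sensor_crop=1.6, extra_crop=1.0):
    return focal_mm * sensor_crop * extra_crop

# Canon 55-250mm on APS-C (1.6x) in 3x crop mode:
print(ff_equivalent(55, 1.6, 3.0))   # 264.0
print(ff_equivalent(250, 1.6, 3.0))  # 1200.0
```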

  3. 6 hours ago, webrunner5 said:

    Well this isn't the cheapest solution, but it looks to be a pretty big improvement.

    https://9to5mac.com/2018/08/14/back-to-the-mac-010-dual-egpus/

    Interesting.  Probably the most interesting thing to me is that adding the second eGPU only seems to benefit the benchmarks and not any real-world results.  I'm not sure if they'll end up optimising for multiple eGPUs or not.  It's definitely harder to write software that uses a flexible number of processors, and IT these days is about replacing the machine you have rather than adding a new one alongside last year's model.

  4. 11 hours ago, TheRenaissanceMan said:

    Most know-nothing DSLR soccer moms and the like got a short zoom, a long zoom, and a 50mm.

    I know quite a few soccer mums who bought prime lenses.  Apart from the nerds in forums like these, there are two kinds of customers I used to see all the time: the photography enthusiast who buys a 5DIII and an L-series zoom and makes professional but mostly lifeless photos (all equipment, no technique), and the clueless soccer mum who came and asked me for a camera to take pictures of the kids and nice holiday pictures.

    The problem with the soccer mum wanting to take pictures of the kids is that junior is running around, inside, in poor lighting, and they've found that by the time their point-and-shoot has focused the kid is in another room - and if they manage to accidentally get one in focus, the picture is blurry because it was a 1/10s exposure, and the ISO noise is shot to hell as well.  They ask what the solution is, and it's a real camera (entry-level DSLR), so they buy that with the kit lens, but often that's not enough for the low light conditions, so they need a prime to get more light into the camera.  I would always recommend Nikon because the 35mm 1.8 was affordable (the Canon equivalent was not), and on an APS-C body a 50mm acts like a short telephoto, which is probably a bit too tight a focal length for indoor photography.  The advice was pretty easy to give: prime lens for inside, zoom for outside, use the mode dial to choose the picture you're taking.  After that I would sometimes get an odd question, but mostly the only feedback I would get from then on was nice pictures.

  5. 2 hours ago, Shirozina said:

    The Dell does throttle the CPU on battery, but I have to ask who is actually going to be doing any serious editing just on battery?  There are various battery settings in both Windows and the Dell power command app that I have no idea if he set up optimally. Also there are various settings in the Nvidia control panel which affect how fast it works with video processing and no indication if these were optimised. The basic problem with these laptops, not addressed here, is that they will both throttle when set to do longer processing tasks like rendering cache files or optimised media, which are common tasks when working with compromised systems and having to do workarounds (laptops being generally inferior in CPU and GPU etc. to desktops), and throttling is totally unavoidable in laptops due to the physical impossibility of cooling in a thin design. Whatever you choose - Mac or PC - I'd say avoid the top-end i9 models and spend the money saved on an eGPU system, which will mean you can (with Resolve especially) do lengthy high-intensity processing tasks without throttling issues.

    People who are shooting on the go and uploading with short time frames are editing on battery.  This is both YouTubers and perhaps mobile documentary makers on very short deadlines (not sure if this is a thing?).  I've seen ENG people editing in the field, but the setups I've seen are laptops in vehicles, so they could have an inverter and be on charge, so this probably doesn't apply to them.

    I would also edit my travel and home videos on the train during my commute, and I shoot in 305Mbit 4K, but I realise I don't represent a huge percentage of the users out there! :)

  6. 2 hours ago, Django said:

    My guess is they were shooting default AF setting.

    This could have a real impact.  When I watched Matti Haapoja's two A7III videos, both had AF failures in them that other people don't have in theirs, but Matti mentioned using centre-focus(?) mode when other people use face-detect, so I think it does take a while for people to work out the quirks of a camera.

    I'll be waiting for the full reviews before making any real judgements.

  7. 4 hours ago, jonpais said:

    Sadly the company also says that it won't be sharing the technical details of the mount, preferring to protect sales of its own lenses at the expense of creating a more inviting, wider ecosystem. So, unlike Micro Four Thirds and Sony E-Mount, third-party makers will have to reverse-engineer the Z mount. - DPReview

    This is sad, but not really a surprise to me.  It's common practice for market leaders to make 'closed' systems to prevent their customers from escaping, and for market challengers to make 'open' systems as they have far less to lose and an open system is better for consumers.  This is a big advantage of m43 for example, but at that point Panasonic and Olympus didn't have a huge customer base to protect.

    I think these new cameras are interesting, but there are many pros and cons in comparison to the other offerings.  I'm not going to be able to make a decision until we start seeing full reviews of production models (not pre-production) and the reviewers have had a chance to discover their strengths and weaknesses.  I've watched enough tests by cinematographers who film scenes at varying levels of exposure (-2, -1, 0, 1, 2 etc) and across various profiles and gamma curves to know that it'll be a while before anyone will be able to show what these cameras are really capable of.

    The full weather sealing is certainly an attractive feature if you're spending so much money on a camera system! :)

  8. 5 hours ago, salim said:

    Max just did a comparison, and in Resolve the Mac is killing it. 

    Knowing how quickly BM release updates and how much effort they're putting into development, I wouldn't be surprised if the Windows version of Resolve gets a big performance bump at some point soon.  The performance differences in that video can't be explained by the hardware differences so I think they must be software optimisations.

    If the OP has a preference for PC then I wouldn't use those test results to recommend a complete change in computer platform.

    @mojo43 sorry to hear about your laptop getting taken!  I keep my carry-on under the seat in front of me, but I have a soft backpack, and I guess if yours is hard-shelled or too big it might not work.

  9. Thanks @webrunner5.  @Mokara, never underestimate what very smart people can do with digital signal processing.  In audio, where CD and the standard delivery formats are 16-bit but very common sources of sound are hugely more dynamic and have to be heavily processed before recording (e.g. an orchestra), audio engineers have managed to develop very sophisticated ways to dither and add noise so that the performance of 16-bit is stretched considerably.

    It's almost a black art in some ways, but the results speak for themselves on a good system.  I'm not sure how much of that body of knowledge is applicable to video but there's definitely a lot of experience to draw from.
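    The dithering idea can be sketched in a few lines. This is a generic TPDF-dither quantiser of my own, not any particular mastering tool:

```python
import numpy as np

# A minimal sketch of TPDF dither: add triangular noise of about +/-1 LSB
# before rounding to 16-bit, which decorrelates the quantisation error
# from the signal (it becomes benign noise rather than distortion).
def quantize_16bit(signal, dither=True):
    scale = 2**15 - 1
    x = np.asarray(signal, dtype=np.float64) * scale
    if dither:
        # The sum of two uniform random variables has a triangular PDF (TPDF).
        x = x + np.random.uniform(-0.5, 0.5, x.shape) + np.random.uniform(-0.5, 0.5, x.shape)
    return np.clip(np.round(x), -scale - 1, scale).astype(np.int16)
```

    Without the dither, low-level signals turn into correlated distortion at the quantisation step; with it, they fade smoothly into a noise floor instead.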

  10. Interesting video.  

    The way these results jump around between Max's videos seems to indicate that he's gradually working things out, like the XPS on charge vs battery, but I think there's also an element of moving goalposts as things change with software and firmware updates too.

    The fact that Resolve is much better on the MBP than XPS may be short lived if the Resolve Windows team just haven't released their optimisations yet.  These computers are relatively similar in performance (the video card being the main difference) so it looks like optimisation differences, which could be improved at any time.

    I'm glad I'm not in the market for a new computer; there's a lot of opportunity for buyer's regret if an update comes along and changes things!

  11. 1 hour ago, jhnkng said:

    Thanks for your insight, it's great to hear from someone who's actually done it! I like using an EasyRig whenever I can, and I'd love to try rigging an Insta360 to the top of one. I think that could be a solution to some of those issues you mentioned. 

    A 360 camera may well be a better second camera than a GoPro, so I think that's a good idea.

    A 360 camera will probably control exposure via shutter speed, and since you won't be able to put NDs on it, it will have very short exposures.  This, combined with the fact that it's 360, means that you can stabilise in post, and it will probably be completely fine even if someone bumps the setup.  The ability to choose framing after the fact also means you not only have a variety of shots to choose from but can ensure that the camera and operator never get in frame.

    My recommendation would be to mount it higher, so the camera/operator take up a much smaller angle of the image and you'll be able to use wider shots.  You only have to hold a camera a tiny bit above head height and it looks like drone footage, so there's that too.

    Low bit-rate 4K spread across 360 degrees makes footage pretty grainy, so you want to use the widest shots you can; the further away from the operator it is, the more unobstructed field of view you'll have to play with.

    Best of luck - it sounds like it would actually make a pretty flexible B-cam.  I hadn't thought about it in this way either, so this conversation is useful to me too. :)

  12. 8 hours ago, Myownfriend said:

    From what I've read, pixel binning is something that happens during the de-bayering process, so in most cases it will only be used when recording to a debayered format in-camera. I'm not sure if there's a way to pixel bin and get a result that looks like bayered data, so file size would probably be somewhere between 4K file sizes and 1080p crop sizes. Either way, that really shouldn't disqualify it from being considered RAW. It would still be 12 or 10 bit with no baked-in color temperature, sharpening, or digital ISO changes, just a loss in spatial data.

    RAW is kind of a weird term because it can't necessarily require that the information be bayered data, since some sensors don't even use bayer patterns, and it can't require that the recorded image be completely unaltered sensor data, since we accept lossy compressed RAW and logarithmic storage of 16-bit linear values to be RAW as well.

    Thanks - that makes sense.  I guess from that perspective there's a considerable grey area between completely pure full sensor read-outs and completely processed and compressed data.

    I guess from this perspective it's like many other things where we just need to evaluate it both objectively and subjectively like we already do.  I wouldn't care if they used shamanic runes and astrology processing inside if it gives a lovely image! :)

  13. 4 hours ago, Robert Collins said:

    The A7iii is rated at 5 stops I believe. The A7riii is rated at 5.5 stops, as is the EM1ii (not FF). The Fuji X-H1 is 5 stops too. There aren't a lot of FF IBIS candidates to choose from.

    I didn't realise they measured IBIS.  You don't happen to know how many stops the GH5 is rated at, do you?  I read everywhere that the GH5 is so much better, but I'd be interested to know how many stops better it is :)

  14. 3 hours ago, IronFilm said:

    Sure, there are those who value extreme shallow depth of field above all else, this was especially popular in the early days after the 5Dmk2 as it was such an extremely unique look for many indies back then. But I'd hope we've moved on since those olden days. And are not simply striving to shoot with the most extremist minimalistic depth of field that we can physically achieve! 

    Rather we are thoughtfully choosing the right depth of field for the shoot/story/crew/production. 

    In my investigations I've noted that DoF isn't exactly what tends to be relevant in the frame; it's more a case of how blurry the defocused areas in front of and behind the subject are.  I.e., if you're shooting a person at a given focal length and f-stop inside, where the background is relatively close to the subject, that background will be a lot clearer than if you shoot with the same settings outside, where the background is a lot further away.  It also changes when you move the camera closer to the subject.

    I came to the conclusion that if there was no other 'cost' (ie, lighting or ISO changes) you'd probably dial in a custom f-stop for each camera setup.  I'm lucky in the sense that I just want separation and depth so it's less of a worry for me.
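    The geometry behind this can be sketched with the standard thin-lens background-blur formula. This is a toy calculation of my own; the numbers are purely illustrative:

```python
# A toy sketch (thin-lens approximation) of why a distant background renders
# blurrier than a close one at identical camera settings.  Distances in mm.
def background_blur_mm(f, N, subject_dist, background_dist):
    """Diameter on the sensor of the blur disc for a point at
    background_dist when the lens is focused at subject_dist."""
    aperture = f / N                        # entrance pupil diameter
    magnification = f / (subject_dist - f)  # subject magnification
    return aperture * magnification * (background_dist - subject_dist) / background_dist

# 50mm f/2, subject at 3m: a wall 1m behind vs hills 20m behind.
close_bg = background_blur_mm(50, 2.0, 3000, 4000)
far_bg = background_blur_mm(50, 2.0, 3000, 23000)
# far_bg comes out several times larger than close_bg, matching the
# observation that the same settings look much blurrier outdoors.
```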

    1 hour ago, Shirozina said:

    But not every situation demands you push the camera's ISO to its limits - within base or near-base ISO there are no practical differences in noise that would make you choose one sensor size over the other

    It depends on who you ask.

    It's a pity that the film industry hasn't adopted standards for measuring colour shifts or noise, as these would be very useful for rating cameras.  Instead, what we get is people saying that a camera is "usable" up to a certain ISO - I've heard figures ranging from 6400 to 20,000 for the same camera!

    What we'd also find is that casual shooters have a much greater tolerance for noise and might not even notice subtle colour shifts; pros shooting for online or small-screen distribution would care more about noise but probably not be super picky about colour; and those shooting high-end stuff for the big screen are really concerned even with quite subtle shifts in colour.  Those shooting in RAW (where grain isn't mangled by compression algorithms) will have different tolerances again - my 700D looks abysmal at 6400 compressed, but in RAW with ML at 6400 the grain has a nice quality to it.

    If you hang out with some high-end cinematographers or professional colourists for a while, you'll find they're able to notice colour shifts with ISO changes almost right down to the base ISO of a camera.

  15. 30 minutes ago, IronFilm said:

    Nope, I feel like you're still missing my point. 

    Think specifically about DoF and how you'd light a scene once you've picked that DoF, and how that changes as you change sensor size. 
    (You've almost figured that out... you realize you'll use a four times slower f-stop (f2 to f4) as you move from MFT to FF to maintain the same working DoF in practice. What impact does this now have on your lighting team and your production? If, as you suggest, you want to keep shutter angle and ISO the same, what needs to be done by you and your gaffer to keep the same exposure?)

    Ok, got it.  Thanks :)

    The part that I did understand was that to maintain identical DoF you have to adjust aperture with the crop factor (m43 F2 = FF F4 for a given DoF).  The part that I didn't understand is that exposure doesn't behave in the same way.

    This would then be another advantage for m43, no?  Get the same look with less lighting?

  16. @IronFilm @webrunner5 this is why these things are hard to discuss, there are so many variables.

    Consider this..  Lenses gather light.  For a given design, the bigger the lens diameter, the more light goes in.  This is why people have telescopes with really large diameters - they want to gather more light.

    Lenses then focus that light into an image circle of a given size.  This is why m43 lenses won't cover a FF sensor - the image circle isn't big enough.

    Sensors detect that light.

    So when people say that "a FF sensor gathers more light", it's only true if there's a FF lens on the front.  If we put a Super35 lens on the front with a focal expander to spread its image circle over the larger sensor, the FF sensor will see less light per unit area, because the same light from the lens is now spread over a bigger circle.

    The problem with people changing sensor sizes is that they don't change everything else in their setup.  I suspect this is what happened with the VistaVision sensor - they probably didn't buy all new lenses.

    If two people go out to buy a camera setup, one buys a FF camera and a 50mm F4.0 lens and the other a m43 camera and a 25mm F2.0 lens, and they then meet and point their cameras at the same object right next to each other, then with the same ISO and shutter speed and their lenses wide open they will get identical angle of view and depth of field, and both lenses will gather the same total light (each has a 12.5mm entrance pupil), although the m43 image will be exposed two stops brighter at F2.0.  When we're talking about the camera industry, this is the comparison we're talking about, not changing from one setup to another.

    Does that make sense or did I mess something up? :)
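    The equivalence in this comparison can be written down as a toy calculation (my own helper, purely illustrative):

```python
# Illustrative sketch of sensor-size equivalence: FF-equivalent focal length,
# DoF-equivalent f-number, and entrance pupil diameter for a lens on a
# sensor with the given crop factor.
def kit(focal_mm, f_number, crop):
    return {
        "eq_focal": focal_mm * crop,            # full-frame equivalent focal length
        "eq_f_number": f_number * crop,         # f-number giving the same DoF on FF
        "entrance_pupil": focal_mm / f_number,  # mm; total light gathered scales with its area
    }

ff = kit(50, 4.0, 1.0)   # full frame, 50mm F4.0
m43 = kit(25, 2.0, 2.0)  # Micro Four Thirds, 25mm F2.0
# Both kits give a 50mm-equivalent view, F4-equivalent DoF, and a 12.5mm
# entrance pupil, even though the per-area exposure differs by two stops.
```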

  17. 10 hours ago, jhnkng said:

    I know there are people who mount GoPros on their cameras to get a wide while shooting run and gun (mostly freelance news or fast-moving doc crews), and I wonder if this would be an even better solution to having a second wide angle to fall back on in case you miss something while shooting longer. Assuming the footage can be intercut, I think this could be a really good option when shooting docs or really fast-moving scenes where getting the shot is more important than it being beautiful.

    I've tried this type of rig and there are a few issues.  The first is that the wide angle of the GoPro makes it difficult to mount it so that it can't see the lens or the shotgun mic also sticking out the front, let alone your hand playing with the zoom/focus rings.  The second is that a second angle is most useful to cut to when something bad happens to the first angle, but if they're mounted together and someone bumps the camera (or some other movement issue occurs), then footage from both cameras is bad at that point.  If you don't need continuity editing and only need a second focal length then it might be ok.

    It would depend on the particular setup.  If I was shooting with a tripod then I'd be tempted to mount the GoPro either quite a lot higher up than the main camera (for events perhaps) or maybe 30cm or so to the side, so they'd still be sharing a single tripod.  This would mean that when you cut between them the angle is a bit different and it doesn't just look like a jump cut with a bit of a crop thrown in to try and cover it up.
