Contrast-detect AF vs phase-detect AF in the real world


zlfan


10 hours ago, kye said:

CDAF vs PDAF has almost nothing to do with the AF performance of modern cameras.

For continuous AF, it absolutely does.  Panasonic spent a lot of years on CDAF/DFD and AF on the GH6/S5/S1/S1H/etc was still not even nearly as reliable as Canon/Sony.  There's a reason that they finally took whatever steps were needed to enable PDAF on their modern bodies (and from everything I've heard, AF on the S5 II and the GH7 is fantastic).  Some people say that with a bunch of tweaking, they could get the CDAF to be acceptable.  That's all fine and good, but Canon and Sony users take the camera out of the box, enable continuous AF and human eye detection, and watch the camera instantly lock on to a subject and stay locked on (some caveats around terrible lighting and multiple subjects in the frame apply).
(And yes, of course PDAF = PDAF+CDAF, but it should be understood that use of the phrase PDAF is usually intended to be inclusive of the two technologies)
(And yes, Canon isn't PDAF, but DPAF, but DPAF is for practical purposes very similar to PDAF, as both are based on parallax)

One's experience with this, though, is likely to vary at least somewhat based on the working aperture.  If you're consistently shooting deeper DOF (like a 50mm at f/8 FF equivalent), the pulsing and occasional refocusing will be less noticeable than if you're shooting shallower (like a 50mm at f/2).

Simply implementing PDAF doesn't guarantee parity (see: Red and Fuji), but it certainly puts companies on the right path to it.  And as BTM's photos demonstrate, a lot of the subject detection automatic modes on modern cameras need work, even for those vendors whose PDAF is solid.  

20 minutes ago, eatstoomuchjam said:

For continuous AF, it absolutely does.  Panasonic spent a lot of years on CDAF/DFD and AF on the GH6/S5/S1/S1H/etc was still not even nearly as reliable as Canon/Sony.  There's a reason that they finally took whatever steps were needed to enable PDAF on their modern bodies (and from everything I've heard, AF on the S5 II and the GH7 is fantastic).  Some people say that with a bunch of tweaking, they could get the CDAF to be acceptable.  That's all fine and good, but Canon and Sony users take the camera out of the box, enable continuous AF and human eye detection, and watch the camera instantly lock on to a subject and stay locked on (some caveats around terrible lighting and multiple subjects in the frame apply).
(And yes, of course PDAF = PDAF+CDAF, but it should be understood that use of the phrase PDAF is usually intended to be inclusive of the two technologies)
(And yes, Canon isn't PDAF, but DPAF, but DPAF is for practical purposes very similar to PDAF, as both are based on parallax)

One's experience with this, though, is likely to vary at least somewhat based on the working aperture.  If you're consistently shooting deeper DOF (like a 50mm at f/8 FF equivalent), the pulsing and occasional refocusing will be less noticeable than if you're shooting shallower (like a 50mm at f/2).

Simply implementing PDAF doesn't guarantee parity (see: Red and Fuji), but it certainly puts companies on the right path to it.  And as BTM's photos demonstrate, a lot of the subject detection automatic modes on modern cameras need work, even for those vendors whose PDAF is solid.  

 

I partially agree.
As I just wrote, the difference in the very latest generations is the deep-learning algorithm and the dataset it is trained with. Recognition of dogs, cats, trains and motorbikes has nothing to do with CDAF or PDAF. Probably Sony and Canon were more advanced in their implementations, or their algorithms were better suited to working with PDAF/DPAF.
Unfortunately I find very few videos on the capabilities of animal eye detection in video (not photo)


1 hour ago, eatstoomuchjam said:

For continuous AF, it absolutely does.  Panasonic spent a lot of years on CDAF/DFD and AF on the GH6/S5/S1/S1H/etc was still not even nearly as reliable as Canon/Sony.  There's a reason that they finally took whatever steps were needed to enable PDAF on their modern bodies (and from everything I've heard, AF on the S5 II and the GH7 is fantastic).  Some people say that with a bunch of tweaking, they could get the CDAF to be acceptable.  That's all fine and good, but Canon and Sony users take the camera out of the box, enable continuous AF and human eye detection, and watch the camera instantly lock on to a subject and stay locked on (some caveats around terrible lighting and multiple subjects in the frame apply).
(And yes, of course PDAF = PDAF+CDAF, but it should be understood that use of the phrase PDAF is usually intended to be inclusive of the two technologies)
(And yes, Canon isn't PDAF, but DPAF, but DPAF is for practical purposes very similar to PDAF, as both are based on parallax)

One's experience with this, though, is likely to vary at least somewhat based on the working aperture.  If you're consistently shooting deeper DOF (like a 50mm at f/8 FF equivalent), the pulsing and occasional refocusing will be less noticeable than if you're shooting shallower (like a 50mm at f/2).

Simply implementing PDAF doesn't guarantee parity (see: Red and Fuji), but it certainly puts companies on the right path to it.  And as BTM's photos demonstrate, a lot of the subject detection automatic modes on modern cameras need work, even for those vendors whose PDAF is solid.  

In a contrast-detect camera, the camera can tell how in or out of focus an area of the sensor is, but not in which direction (closer or further) better focus lies.  In a phase-detect camera, the camera CAN tell the direction.

A CDAF focus system picks a direction at random (nearer or further) and racks the whole way looking for focus, and it often picks the wrong way.  That's why that old P&S camera from 2010 would spend 3 seconds racking through the whole focus range before zeroing in, despite only being a little bit off.

That's it.

That's the ONLY difference between the two.
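The hunting behaviour just described can be sketched as a toy hill-climb. This is a minimal sketch, not any camera's actual firmware: the contrast metric, step sizes and the target focus position are all made up for illustration.

```python
# Toy sketch of a contrast-detect (CDAF) search.  All names and numbers
# here are illustrative, not any real camera's firmware.

def sharpness(position, true_focus=0.62):
    """Stand-in contrast metric: peaks at the (unknown) true focus point."""
    return 1.0 / (1.0 + abs(position - true_focus))

def cdaf_search(position, step=0.2, min_step=0.01):
    """Hill-climb: guess a direction, reverse when contrast gets worse,
    halve the step on each reversal.  The camera never knows the correct
    direction up front, which is the source of the visible hunting."""
    direction = 1  # initial guess; a real CDAF system guesses too
    best = sharpness(position)
    while step > min_step:
        candidate = position + direction * step
        s = sharpness(candidate)
        if s > best:            # contrast improved: keep going this way
            position, best = candidate, s
        else:                   # got blurrier: wrong way, back up
            direction = -direction
            step /= 2           # narrow in on the peak
    return position
```

The reversals and step-halving in that loop are exactly the back-and-forth that shows up on screen as hunting or pulsing.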

What you are talking about is differences in the mechanism that CHOOSES what to focus on.  

A PDAF system can randomly choose to focus on the background just as easily as a CDAF system can - the PDAF system will just do it slightly more confidently because it knows exactly how to get there and roughly how far away it is.

Apart from the Panasonic DFD pulsing issue (which is a side-effect of CDAF), I have not seen a focus error that was CDAF related in probably years.  

The issues with focus today are that it chooses to focus on the wrong thing, or on nothing at all.  This has nothing to do with CDAF or PDAF.

It's a whole other thing.

Sure, PDAF cameras focus much better overall, but it's not the PDAF, it's something else in the AF implementation.  CDAF and PDAF are a very minor part of the whole AF mechanism.


43 minutes ago, kye said:

Apart from the Panasonic DFD pulsing issue (which is a side-effect of CDAF), I have not seen a focus error that was CDAF related in probably years.  

This is a comparison that I did of the internal CDAF of a Pocket 6K versus the lens being driven directly (although it uses the same internal motors) by an AFX.

It highlights the general speed issues with CDAF in lower light and lower contrast situations, of course, but also when the lens is being driven from near to close targets, and it's these that are most likely to cause it not to lock and to give up the ghost completely.

Despite Panasonic's implementation being a lot better, it can still have these issues in my experience.

PDAF isn't immune from this stuff either and in the S5ii which uses a combination of both PDAF and CDAF I've had it struggle in lower light.

Low light or absolutely no light doesn't bother a LIDAR based system like the AFX in the slightest of course 😉

 

As I've said numerous times, the fully sentient AF system doesn't exist and an operator chosen combination of AF-S, AF-C and manual focus is still the one that yields the best results.


1 hour ago, Davide DB said:

As I just wrote, the difference in the very latest generations is the deep-learning algorithm and the dataset it is trained with. Recognition of dogs, cats, trains and motorbikes has nothing to do with CDAF or PDAF. Probably Sony and Canon were more advanced in their implementations, or their algorithms were better suited to working with PDAF/DPAF.

I mentioned that (briefly).  Any sort of animal eye focus seems pretty unreliable on almost every camera right now.  I haven't paid much attention to trains or motorbikes, etc, but I'm not even remotely surprised that they aren't very good.

30 minutes ago, kye said:

Apart from the Panasonic DFD pulsing issue (which is a side-effect of CDAF), I have not seen a focus error that was CDAF related in probably years.  

How fortunate that you had better luck than the half dozen people that I know who have Panasonic cameras with CDAF/DFD who complain that the AF on their camera pulses!  It was weird how much it's like my own personal experience with the GH5 which would have a box drawn around the face of a person and yet, every so often...  pulsing.

It's almost like it chose a subject and then when that subject moved slightly, it wasn't sure which direction to go and had to rack focus a little bit to figure out that they were more or less in focus than before.  It's also weird how that fits the description of how CDAF/DFD works almost perfectly.

 

33 minutes ago, kye said:

Sure, PDAF cameras focus much better overall, but it's not the PDAF, it's something else in the AF implementation.  CDAF and PDAF are a very minor part of the whole AF mechanism.

What a pity, then, that the dummies at Panasonic finally gave up on their DFD/CDAF system and switched to PDAF when the DFD/CDAF on their previous cameras was so excellent and well-loved industry-wide.  Guess their management and engineering orgs just don't understand how the two things work.

 


25 minutes ago, eatstoomuchjam said:

I mentioned that (briefly).  Any sort of animal eye focus seems pretty unreliable on almost every camera right now.  I haven't paid much attention to trains or motorbikes, etc, but I'm not even remotely surprised that they aren't very good.

How fortunate that you had better luck than the half dozen people that I know who have Panasonic cameras with CDAF/DFD who complain that the AF on their camera pulses!  It was weird how much it's like my own personal experience with the GH5 which would have a box drawn around the face of a person and yet, every so often...  pulsing.

It's almost like it chose a subject and then when that subject moved slightly, it wasn't sure which direction to go and had to rack focus a little bit to figure out that they were more or less in focus than before.  It's also weird how that fits the description of how CDAF/DFD works almost perfectly.

What a pity, then, that the dummies at Panasonic finally gave up on their DFD/CDAF system and switched to PDAF when the DFD/CDAF on their previous cameras was so excellent and well-loved industry-wide.  Guess their management and engineering orgs just don't understand how the two things work.

It's like this:

AF step 1: analyse the frame and choose a thing to focus on
AF step 2: adjust the focus motor until that thing is in focus

PDAF and CDAF are in step 2.

The pulsing is a symptom of CDAF in step 2, because it goes back and forth looking for the point where the thing is least blurry.

All the subject recognition such as person-AF, eye-AF, animal-eye-AF, etc are all part of step 1.
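The two-step split can be sketched in a few lines of purely illustrative Python. The function names and the hard-coded detection box are assumptions for the sketch, not any real camera's API; real subject detection is a neural network, not a constant.

```python
# Sketch of the two-step AF model.  Everything here is illustrative.

def choose_subject(frame):
    """Step 1: decide WHAT to focus on (person/eye/animal detection
    lives here).  Returns a pretend region of interest in the frame."""
    return {"x": 120, "y": 80, "w": 40, "h": 40}  # fake detection result

def drive_focus(roi, method):
    """Step 2: decide HOW to get that region sharp.  CDAF vs PDAF
    only changes the strategy used inside this step."""
    if method == "pdaf":
        return "jump roughly to the target (direction and distance known), then micro-adjust"
    return "step, compare contrast, reverse if it got worse"
```

Pulsing lives entirely inside `drive_focus`; picking the wrong subject is a `choose_subject` failure, whichever focus technology sits underneath.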

I suspect that CDAF systems are using a lot of processor time to do the CDAF analysis, and that takes processor time away from the subject recognition that happens in step 1.  This would explain why CDAF tends to move the focus point slower than PDAF.  I suspect that if you had a dedicated processor for step 1 then the overall differences in performance would be greatly reduced, which would mean that the majority of issues are economics, not CDAF vs PDAF.

Don't get me wrong, we should be choosing the best focus system, which is LiDAR > PDAF > CDAF, but saying that the differences in real use are CDAF vs PDAF is about as correct as saying that the difference between 13Mbps 4K on YT and 3Mbps 1080p on YT is because of the resolution difference.


30 minutes ago, BTM_Pix said:

This is a comparison that I did of the internal CDAF of a Pocket 6K versus the lens being driven directly (although it uses the same internal motors) by an AFX.

It highlights the general speed issues with CDAF in lower light and lower contrast situations, of course, but also when the lens is being driven from near to close targets, and it's these that are most likely to cause it not to lock and to give up the ghost completely.

Despite Panasonic's implementation being a lot better, it can still have these issues in my experience.

PDAF isn't immune from this stuff either and in the S5ii which uses a combination of both PDAF and CDAF I've had it struggle in lower light.

Low light or absolutely no light doesn't bother a LIDAR based system like the AFX in the slightest of course 😉

 

As I've said numerous times, the fully sentient AF system doesn't exist and an operator chosen combination of AF-S, AF-C and manual focus is still the one that yields the best results.

LiDAR really is the future isn't it!

It's an interesting question, how to get the benefits of AF without losing the expression of MF.

Some cameras have that thing where they look at where your eye is pointing in the EVF and can set that as the focus point, which is intuitive and great.  I wonder if maybe we need a pressure sensor, where the harder you press, the faster it moves the focus towards where you're looking.

That way you could lift off for no focus changes, press slightly to really ease in, or press lightly-firmly-lightly to ease out of focus and then ease in again at the destination, capturing the need for focus adjustments that are not only smooth but faster or slower depending on context, and with all the precision of the computer in focusing on the eye and not eyelashes or nose.


20 minutes ago, kye said:

Some cameras have that thing where they look at where your eye is pointing in the EVF and can set that as the focus point, which is intuitive and great.  I wonder if maybe we need a pressure sensor, where the harder you press, the faster it moves the focus towards where you're looking.

I have this in the AFX in differing combinations.

You can set four focus points and then transition to them manually with the stick on the controller (or the Tilta wheel if you have connected that to the AFX) or with different transition times including one called "NATURAL" which is based on the difference between your current focus point and the target.

The transition times also work when it is in AF-C mode to keep the transitions as smooth or as instant as you prefer.

You can also use the first two focus points to set up a ring fenced area between the two where the AF system is only active for targets between the points.

And then of course there is the focus recorder function, where you can do a real-time recording of up to two minutes of focus movements using any combination of live LIDAR acquisition in AF-S or AF-C, the four focus memory position recalls and manual focus, and then play it back as it was recorded.

As I say, it's the combination of all the differing methods that, to me, makes the difference between a type of focus and a focusing system.


31 minutes ago, kye said:

It's like this:

AF step 1: analyse the frame and choose a thing to focus on
AF step 2: adjust the focus motor until that thing is in focus

PDAF and CDAF are in step 2.

The pulsing is a symptom of CDAF in step 2, because it goes back and forth looking for the point where the thing is least blurry.

All the subject recognition such as person-AF, eye-AF, animal-eye-AF, etc are all part of step 1.

I suspect that CDAF systems are using a lot of processor time to do the CDAF analysis, and that takes processor time away from the subject recognition that happens in step 1.  This would explain why CDAF tends to move the focus point slower than PDAF.  I suspect that if you had a dedicated processor for step 1 then the overall differences in performance would be greatly reduced, which would mean that the majority of issues are economics, not CDAF vs PDAF.

Don't get me wrong, we should be choosing the best focus system, which is LiDAR > PDAF > CDAF, but saying that the differences in real use are CDAF vs PDAF is about as correct as saying that the difference between 13Mbps 4K on YT and 3Mbps 1080p on YT is because of the resolution difference.

You don't seem to understand the limitations of CDAF.  A more correct description of why it's slower and why it pulses is this:
Scenario 1: Subject walking toward camera.
PDAF camera: CDAF system determines accurately and quickly that the subject has moved out of focus.  Uses PDAF to determine the difference in location between the current focus point and the new desired focus point.  It moves the lens to approximately the correct location and uses CDAF for micro-adjustments.
CDAF camera: CDAF system determines accurately and quickly that the subject is out of focus.  Does not know which way.  Does not know how far.  Guesses one.  May need to use a relatively small step to avoid overshooting.  If the guess is right and the amount OOF decreases, it continues in that direction.  If wrong and the amount OOF increases, it goes the other way.  If indeterminate, it keeps going the same way.  Wrong way/multiple steps the wrong way?  Pulsing time.  DFD helps by improving the accuracy of the estimates of distance + direction.  It could also potentially optimize by guessing that a subject will keep moving in the same direction.  However, this optimization is potentially difficult due to...

Scenario 2: Head and shoulders video, subject cannot sit perfectly still.

PDAF camera: CDAF system determines that the subject has moved out of focus.  Uses PDAF to determine where the subject went.  Jumps to about the right place.  Uses CDAF for micro-adjustments after that.
CDAF camera: CDAF system determines the subject has moved out of focus.  Has to guess which way.  Same as above, but now there's a bigger chance of overshooting, as the subject's movements are not in any way smooth or guessable.  In my experience, frequently results in a mess.  Better to stop down the lens a bit and give up on shallow DOF aesthetics in favor of an image that's vaguely in focus.

Both systems use CDAF processing to some extent.  However, courtesy of parallax, one system is able to cheat by knowing the direction and amount of movement.
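The two step-2 behaviours in these scenarios can be turned into a toy simulation. All the numbers are made up and neither loop is real firmware; the point is only that knowing direction and approximate distance (the PDAF-style controller) converges in far fewer moves than guess-and-correct (the CDAF-style one).

```python
# Toy comparison of the two focus-drive strategies.  Illustrative only.

def blur(lens, subject):
    """Stand-in for 'how out of focus is the subject' (arbitrary units)."""
    return abs(lens - subject)

def pdaf_steps(lens, subject, tol=0.01):
    """PDAF-style: parallax gives direction AND rough distance,
    so jump most of the way, then refine."""
    steps = 0
    while blur(lens, subject) > tol:
        lens += 0.9 * (subject - lens)  # one big, informed move
        steps += 1
    return steps

def cdaf_steps(lens, subject, step=0.3, tol=0.01):
    """CDAF-style: guess a direction, reverse and shrink the step
    whenever the image got blurrier."""
    steps, direction = 0, 1  # the initial direction is a pure guess
    prev = blur(lens, subject)
    while blur(lens, subject) > tol:
        lens += direction * step
        steps += 1
        cur = blur(lens, subject)
        if cur >= prev:                   # got worse: wrong way
            direction, step = -direction, step / 2
        prev = cur
    return steps
```

Running both from the same starting error, the CDAF-style loop always needs more iterations; those extra reversals are the on-screen pulsing.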

As far as lidar goes, it seems fantastic for certain use cases.  It suffers from range limitations and can work very poorly in bright sunlight or other areas with a lot of IR pollution.  It also doesn't operate through the lens, so it requires calibration for sensor location as well as profiling the lenses. This also requires understanding its coverage vs the lens (very wide lenses may only focus on things in the center, for example). Every existing consumer lidar system I am aware of can store only a few lens profiles, so choose carefully.  As mentioned, though, it seems incredible in very low light.  I got the cheapest version of the PD Movie live motor and (thus far) I've kind of failed at testing it in any real way.


Oh yeah - lidar also suffers with lenses with large front elements, as the sensor/laser unit needs to be moved further from the camera's sensor.  Otherwise, part of the sensor's FOV and some of the points illuminated by the laser will hit the lens rather than the subject.


21 minutes ago, kye said:

It's like this:

AF step 1: analyse the frame and choose a thing to focus on
AF step 2: adjust the focus motor until that thing is in focus

PDAF and CDAF are in step 2.

The pulsing is a symptom of CDAF in step 2, because it goes back and forth looking for the point where the thing is least blurry.

All the subject recognition such as person-AF, eye-AF, animal-eye-AF, etc are all part of step 1.

I suspect that CDAF systems are using a lot of processor time to do the CDAF analysis, and that takes processor time away from the subject recognition that happens in step 1.  This would explain why CDAF tends to move the focus point slower than PDAF.  I suspect that if you had a dedicated processor for step 1 then the overall differences in performance would be greatly reduced, which would mean that the majority of issues are economics, not CDAF vs PDAF.

Don't get me wrong, we should be choosing the best focus system, which is LiDAR > PDAF > CDAF, but saying that the differences in real use are CDAF vs PDAF is about as correct as saying that the difference between 13Mbps 4K on YT and 3Mbps 1080p on YT is because of the resolution difference.

I agree with pretty much all of this. Honestly, we dumb the conversation down when we just talk about PDAF or CDAF, as if every implementation were equal when that simply isn't true. Look at Fuji. It switched to PDAF what, 6 years ago? Yet it still isn't very good. We also ignore that CDAF's hit rate in stills was pretty much on par with PDAF's, meaning the issue was largely video-related, and that issues like pulsing were issues with CDAF in general.

Panasonic built on years and years of fine tuning their auto focusing algorithms and technology, which were always pretty solid, and merely switched how the focusing is done. Do I wish they'd done it earlier? Yes, if only because I got sick of people (mostly people that were never going to use their cameras anyway) complaining about it. 


43 minutes ago, eatstoomuchjam said:

It suffers from range limitations and can work very poorly in bright sunlight or other areas with a lot of IR pollution.

They all differ but the AFX range is 12 metres, so it is definitely horses for courses in terms of application, although a lot of lenses hit infinity way before 12 metres so it becomes moot in a lot of applications anyway.

With regard to bright sunlight, most of the development work I did for it was in an "oh my eyes, my eyes" area of Southern Spain and I never had any issues with testing and operating outdoors although it has to be said that not all sensors are the same in that respect.

43 minutes ago, eatstoomuchjam said:

Every existing consumer lidar system I am aware of can store only a few lens profiles, so choose carefully.

I built the AFX in my own lens hoarding image so it can store 128 lenses !

43 minutes ago, eatstoomuchjam said:

It also doesn't operate through the lens, so it requires calibration for sensor location as well as profiling the lenses.

A big downfall is the bewildering reluctance of users to put in the effort of calibration.

They want the miracle of AF-C with manual lenses on any camera, or AF-C with electronic lenses on BM cameras without having to use motors either, but spending 15 minutes to do a one-off, instantly recallable calibration?

Nah, fuck that.

27 minutes ago, eatstoomuchjam said:

Oh yeah - lidar also suffers with lenses with large front elements, as the sensor/laser unit needs to be moved further from the camera's sensor.  Otherwise, part of the sensor's FOV and some of the points illuminated by the laser will hit the lens rather than the subject.

There is no absolute relationship between the camera sensor and the lens though, other than it being visually in focus obviously.

The relationship that matters is between the lens position and the LIDAR unit measured distance at that position (which can be arbitrary) with the camera acting as the visual confirmation of focus so you are free to place it wherever you want as long as you respect the original location of the LIDAR during calibration when using it.

Even then, without making this sound like an infomercial for the AFX, it also has an offset parameter to enable you to move it forwards and backwards from its original position without re-calibration to accommodate rig changes.
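The calibration relationship described here (LIDAR-measured distance paired with the lens motor position that was visually in focus, plus an offset for rig changes) might be sketched like this. The table values, function name and motor units are entirely made up; this is not the AFX's actual firmware or data.

```python
# Sketch of a lidar-to-lens calibration table with linear interpolation.
# All values are hypothetical.

from bisect import bisect_left

# One-off calibration: (LIDAR-measured distance in metres, motor position)
CAL = [(0.5, 120), (1.0, 300), (2.0, 520), (4.0, 700), (8.0, 820)]

def motor_position(distance_m, offset_m=0.0):
    """Map a live LIDAR reading to a lens motor position by linear
    interpolation between calibration points.  offset_m compensates for
    moving the LIDAR unit along the rail after calibration."""
    d = distance_m + offset_m
    dists = [c[0] for c in CAL]
    if d <= dists[0]:
        return CAL[0][1]        # clamp below the nearest calibrated point
    if d >= dists[-1]:
        return CAL[-1][1]       # clamp at/beyond the farthest point
    i = bisect_left(dists, d)
    (d0, p0), (d1, p1) = CAL[i - 1], CAL[i]
    t = (d - d0) / (d1 - d0)
    return p0 + t * (p1 - p0)
```

Because only the (distance, motor position) pairs matter, the unit can sit anywhere on the rig, which is exactly why an offset parameter is enough to avoid recalibrating after moving it.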

LIDAR is a tool in the box and in some instances the only tool if you want to do certain things but its absolutely not the be all and end all in isolation.

The interest for me at least is in using its inherent speed and absolute accuracy as a component within a broader focusing system.


1 hour ago, eatstoomuchjam said:

I mentioned that (briefly).  Any sort of animal eye focus seems pretty unreliable on almost every camera right now.  I haven't paid much attention to trains or motorbikes, etc, but I'm not even remotely surprised that they aren't very good.

I see a lot of wildlife photographers use it successfully. Could you point me to some examples? No polemic at all, I'm just trying to understand its use underwater.

I've seen some videos by Mark Smith that would be nearly impossible to film without such AF. Of course none of us knows what the hit/miss ratio is...


38 minutes ago, Davide DB said:

I see a lot of wildlife photographers use it successfully. Could you point me to some examples? No polemic at all, I'm just trying to understand its use underwater.

I've seen some videos by Mark Smith that would be nearly impossible to film without such AF. Of course none of us knows what the hit/miss ratio is...

A lot of the rules are different if we're talking photography vs videography.  I've heard generally good things (and had a good experience, even with Fuji) for animal eye detect in photo mode.  I've had almost nothing but heartbreak with Canon and Fuji using animal eye AF in video mode.  I just switched to centered focus point in those cases and didn't spend a lot of time messing with it.  It's hard to say what others have done when having a good or bad experience doing it.  🙂


1 hour ago, BTM_Pix said:

They all differ but the AFX range is 12 metres so it is definitely horses for courses in terms of application although a lot of lenses hit infinity way before 12 metres so it becomes moot in a lot applications anyway.

With regard to bright sunlight, most of the development work I did for it was in an "oh my eyes, my eyes" area of Southern Spain and I never had any issues with testing and operating outdoors although it has to be said that not all sensors are the same in that respect.

AFX range is better than some others.  PD Movie, for instance, advertise 4 meters.  Their entire system with motor and handwheel is only like $500, though.  DJI, on the other hand, advertise 14.  I'd love to evaluate the DJI because I don't like the silly proprietary(?) batteries that the PD Movie uses, but they seem to be producing them really slowly, judging by the lack of "in stock" notifications I've received after signing up within a day or two of them announcing it.

As far as sunlight, yes, I'd expect that the sensitivity of the sensor and the strength of the IR laser are major factors in how well that works.  At least with the PD Movie, I'm pretty sure some reviewers complained that it lost accuracy on a sunny day.

1 hour ago, BTM_Pix said:

I built the AFX in my own lens hoarding image so it can store 128 lenses !

🤩 (insert meme image of Fry from Futurama saying "take my money already")

1 hour ago, BTM_Pix said:

A big downfall is the bewildering reluctance of users to put in the effort of calibration.

They want the miracle of AF-C with manual lenses on any camera, or AF-C with electronic lenses on BM cameras without having to use motors either, but spending 15 minutes to do a one-off, instantly recallable calibration?

Nah, fuck that.

My willingness to calibrate is correlated to how many profiles the thing can store.  I find the calibration step upsetting and annoying if I can only store like 5 lenses in the thing.  It means every time I want to use different lenses on a shoot, I'm going to have to spend 15-45 minutes recalibrating lenses ahead of time.  If I had enough profiles to store a decent subsection of my lens collection, on the other hand...

 

1 hour ago, BTM_Pix said:

There is no absolute relationship between the camera sensor and the lens though, other than it being visually in focus obviously.

The relationship that matters is between the lens position and the LIDAR unit measured distance at that position (which can be arbitrary) with the camera acting as the visual confirmation of focus so you are free to place it wherever you want as long as you respect the original location of the LIDAR during calibration when using it.

Even then, without making this sound like an infomercial for the AFX, it also has an offset parameter to enable you to move it forwards and backwards from its original position without re-calibration to accommodate rig changes.

LIDAR is a tool in the box and in some instances the only tool if you want to do certain things but its absolutely not the be all and end all in isolation.

The interest for me at least is in using its inherent speed and absolute accuracy as a component within a broader focusing system.

I think that it's a fantastic tool to keep in the toolbox and, like any of the other tools, it's great to understand its strengths and weaknesses.  The AFX sounds great, but I don't really see any place where I could purchase it.  Assuming that you are cda-tek as well as btm_pix, there are references to firmware and documentation on the home page, but otherwise there seems to be a BlackMagic-focused focus unit, which also looks nice but wouldn't be useful for me.

As far as the first bit of that, my point was only really that you need to move the motor for certain lenses, and that the move is apt to have an impact on the center of the frame (if you move it horizontally or vertically) or the focus distance (if you move it in or out), since it's not in the lens/sensor pipeline.  Your offset parameter sounds nice, though I'd be guessing at the value since I don't usually carry a ruler on me.  😃


41 minutes ago, eatstoomuchjam said:

A lot of the rules are different if we're talking photography vs videography.  I've heard generally good things (and had a good experience, even with Fuji) for animal eye detect in photo mode.  I've had almost nothing but heartbreak with Canon and Fuji using animal eye AF in video mode.  I just switched to centered focus point in those cases and didn't spend a lot of time messing with it.  It's hard to say what others have done when having a good or bad experience doing it.  🙂

On an underwater imaging forum we are debating this.

On YT there are countless tutorials and examples on its use for underwater photography, but zero for underwater video.  So far nobody has shown up with any feedback...


On 7/3/2024 at 8:59 AM, Davide DB said:

I am beginning to develop an allergy and even hatred toward everything I see on YouTube. What a bullshit title. People who have never filmed shit other than cats or begonias in the backyard.

I, with my GH5, am delighted with its performance underwater and on land.

 

 

I agree, I've never had any issues with my GH5 AF when using PL lenses and "1 Area". I can say the same for my GH6 below 48fps.


32 minutes ago, eatstoomuchjam said:

The AFX sounds great, but I don't really see any place where I could purchase it.  Assuming that you are cda-tek as well as btm_pix, there are reference to firmware and documentation on the home page, but otherwise there seems to be a BlackMagic-focused focus unit which also looks nice, but wouldn't be useful for me.

Yes, I'm me on here and the designer for them when I'm not on here!

The AFX sold out its production run (the webpage shows the original Android/Tilta fusion product), then the COVID supply-chain nonsense kicked in and hit us over a couple of components, so we paused another run and have now moved on to other products.

32 minutes ago, eatstoomuchjam said:

Your offset parameter sounds nice, though I'd be guessing at the value since I don't usually carry a ruler on me.  😃

Well, it's moot now, but with an AFX you always have a very accurate tape measure sitting on top of the camera 🙂

[Attached screenshot]


9 hours ago, eatstoomuchjam said:

You don't seem to understand the limitations of CDAF.  A more correct description of why it's slower and why it pulses is this:
Scenario 1: Subject walking toward camera.
PDAF camera: CDAF system determines accurately and quickly that the subject has moved out of focus.  Uses PDAF to determine the difference between the current focus point and the new desired focus point.  It moves the lens to approximately the correct location and uses CDAF for micro-adjustments.
CDAF camera: CDAF system determines accurately and quickly that the subject is out of focus.  Does not know which way.  Does not know how far.  Guesses a direction.  May need to use a relatively small step to avoid overshooting.  If the guess is right and the amount of defocus decreases, it continues in that direction.  If wrong and the amount of defocus increases, it goes the other way.  If indeterminate, it keeps going the same way.  Wrong way, or multiple steps the wrong way?  Pulsing time.  DFD helps this by improving the accuracy of the estimates of distance + direction.  The system could also potentially optimize by guessing that a subject will keep moving in the same direction.  However, this optimization is potentially difficult due to...

Scenario 2: Head and shoulders video, subject cannot sit perfectly still.

PDAF camera: CDAF system determines that the subject has moved out of focus.  Uses PDAF to determine where the subject went.  Jumps to about the right place.  Use CDAF for micro-adjustments after that.
CDAF camera: CDAF system determines the subject has moved out of focus.  Has to guess which way.  Same as above, but now there's a bigger chance of overshooting, as the subject's movements are not in any way smooth or guessable.  In my experience, frequently results in a mess.  Better to stop down the lens a bit and give up on shallow DOF aesthetics in favor of an image that's vaguely in focus.

Both systems use CDAF processing to some extent.  However, courtesy of parallax, one system is able to cheat by knowing the direction and amount of movement.
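To make the difference concrete, here's a toy simulation of the two loops described above.  This is purely my own sketch, not any vendor's actual algorithm; sharpness is modeled as distance from the focus target and all the numbers are invented.

```python
# Toy model: CDAF hill-climbing vs a PDAF-style jump.
# Sharpness peaks when the lens position equals the subject position.

def sharpness(lens_pos, subject_pos):
    return -abs(lens_pos - subject_pos)  # higher is sharper

def cdaf_steps(lens_pos, subject_pos, step=1.0, max_iters=50):
    """Hill-climb: guess a direction, reverse on getting blurrier."""
    direction = 1  # initial guess -- CDAF has no parallax cue
    moves = 0
    best = sharpness(lens_pos, subject_pos)
    while abs(lens_pos - subject_pos) > step / 2 and moves < max_iters:
        lens_pos += direction * step
        moves += 1
        now = sharpness(lens_pos, subject_pos)
        if now < best:              # got blurrier: wrong way, turn around
            direction = -direction
        best = now
    return moves

def pdaf_steps(lens_pos, subject_pos):
    """Parallax gives direction and magnitude: one jump plus a trim."""
    return 2  # coarse jump + one CDAF micro-adjustment

print(cdaf_steps(0.0, 10.0))  # 10 small moves to cover the distance
print(pdaf_steps(0.0, 10.0))  # 2
```

The wrong-initial-guess case (subject behind the starting position) is where the visible pulsing comes from: the CDAF loop has to step the wrong way at least once before it can reverse.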

As far as lidar, it seems fantastic for certain use cases.  It suffers from range limitations and can work very poorly in bright sunlight or other areas with a lot of IR pollution.  It also doesn't operate through the lens, so it requires calibration for sensor location as well as profiling the lenses.  That in turn requires understanding its coverage vs the lens (very wide lenses may only be able to focus on things in the center, for example).  Every existing consumer lidar system I am aware of can store only a few lens profiles, so choose carefully.  As mentioned, though, it seems incredible in very low light.  I got the cheapest version of the PDMovie live motor and (thus far) I've kind of failed at testing it in any real way.
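The lens profiles mentioned above are essentially calibration tables mapping subject distance to a focus motor position.  As a sketch of how a lookup over such a profile might work (my own illustration; the table values and names are invented, and a real unit would calibrate these per lens):

```python
# Hypothetical lens profile: (subject distance in metres, focus motor
# position in encoder counts).  Values are made up for illustration;
# the last entry stands in for "infinity".
import bisect

PROFILE = [(0.5, 0), (1.0, 1200), (2.0, 2100), (5.0, 2800), (1e9, 3200)]

def motor_position(distance_m):
    """Linearly interpolate motor counts from the calibration table."""
    dists = [d for d, _ in PROFILE]
    i = bisect.bisect_left(dists, distance_m)
    if i == 0:
        return PROFILE[0][1]          # closer than closest calibration
    if i >= len(PROFILE):
        return PROFILE[-1][1]         # beyond the table: park at infinity
    (d0, p0), (d1, p1) = PROFILE[i - 1], PROFILE[i]
    t = (distance_m - d0) / (d1 - d0)
    return round(p0 + t * (p1 - p0))

print(motor_position(1.5))  # halfway between 1200 and 2100 -> 1650
```

The non-linear spacing is the point: focus throw is compressed toward infinity, which is why a handful of stored calibration points per lens is workable but a wrong profile puts you visibly off at close distances.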

Those examples are just what I described... PDAF knows where to go and CDAF doesn't... No new information here 🙂 

But you're right, there are a great many things I don't understand...

  • Cameras over 4K that aren't needed for VFX
  • Seeing that high-end movies and TV shows have been softened using filters, vintage lenses, and post-processing, but then pixel-peeping the sharpest lenses and highest-resolution cameras
  • Trying to compare cameras without discussing what they're being used for
  • Making decisions on the aesthetic of an imaging system without considering the emotional impact it has on the viewer
  • Not understanding that the purpose of an imaging system is having an emotional impact on the viewer
  • People perpetuating myth after myth when each one can be easily proven to be false with a smartphone and an hour of work
  • etc etc...

I mean, I also don't understand why people insist on shooting interviews with a 135mm F0.8 lens, then blaming their AF mechanism for not being able to track the subject, but maybe secretly I'm the dull one when they are deliberately going for that "talking head in a sea of blurry confusion and it seems like I've been drugged and the background is growing and shrinking" aesthetic.
