Everything posted by jcs

  1. FF 16mm F2.8 equivalent on S35: 16mm/1.5 = 11mm, F2.8/1.5 = F1.9. Tokina comes close with an 11-16 F2.8. On m43 with a Speedbooster the crop factor becomes 1.42: 16/1.42 = 11mm, F2. Again, if an 11mm F2 doesn't exist, that's a current limitation of lens systems- there's nothing special about the 35mm sensor in terms of looks. Does the 'full frame look' go away when an 11mm F2 becomes available? Looks like Canon already has a patent for an 11mm F2 design: http://www.canonrumors.com/2011/04/ef-s-11mm-f2-patent/ It's best to focus on the measurable image characteristics we're looking for, such as shallow/deep DOF, bokeh, contrast, color, resolution/detail, MTF, low light, distortion, flare, starburst pattern, coma, chromatic aberration, astigmatism, etc. (sometimes we want defects, for character). From there we can figure out how to put together a camera+lens system to meet the requirements we're looking for. Focusing on sensor size when it doesn't provide a unique measurable look isn't useful.
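
Here's that conversion as a quick Python sketch (the crop factors are the ones above; the function name and print formatting are just for illustration):

```python
# Sketch of the focal-length/aperture equivalence math from the post above.
# Divide a full-frame lens spec by the target format's crop factor to find
# the lens that gives the same FOV and DOF on the smaller sensor.

def ff_to_format(focal_mm, f_number, crop):
    """Return the (focal length, f-number) on a cropped format that
    matches a full-frame lens's field of view and depth of field."""
    return focal_mm / crop, f_number / crop

for name, crop in [("S35 (1.5x)", 1.5), ("m43 + Speedbooster (1.42x)", 1.42)]:
    f, n = ff_to_format(16, 2.8, crop)
    print(f"{name}: {f:.1f}mm  F{n:.2f}")

# S35 (1.5x): 10.7mm  F1.87          (~11mm F1.9, as above)
# m43 + Speedbooster (1.42x): 11.3mm  F1.97  (~11mm F2)
```
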
  2. The FF F1.4 equivalent for S35 is F1.4/1.5 = F0.93. So, you could stop down an F0.7 Kubrick-Zeiss. Or use a focal reducer. It's not about the sensor- it's about the optics and the complete camera system. Lens availability/affordability is a valuable point, but a different topic. The sensor size does not create a unique image. This guy doesn't get into the math at all, however he makes good points: https://fstoppers.com/gear/zack-arias-debunks-full-frame-crop-sensor-debate-26944
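
For anyone who wants the one-line reason the f-number scales with the crop factor (my own summary, not from the linked article- it follows from f-number = focal length / entrance pupil diameter, with the pupil diameter held fixed):

```latex
% Equivalence keeps the entrance pupil diameter D fixed while the focal
% length scales down by the crop factor, so the f-number scales with it.
N = \frac{f}{D}, \qquad
f_{\mathrm{S35}} = \frac{f_{\mathrm{FF}}}{1.5}, \quad D\ \text{unchanged}
\;\Longrightarrow\;
N_{\mathrm{S35}} = \frac{f_{\mathrm{FF}}/1.5}{D} = \frac{N_{\mathrm{FF}}}{1.5},
\qquad \text{e.g. } \frac{1.4}{1.5} \approx 0.93 .
```
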
  3. Hey guys- the pics seem to have vanished on this forum. Here's a post on my site; you can figure out the answers if you are clever (not very hard): http://brightland.com/w/the-full-frame-look-is-a-myth-heres-how-to-prove-it-for-yourself/ I need to get back to writing code- please share results if you have a case which clearly shows that full frame looks significantly different than Super 35 when using equivalence math. You really need to do these tests yourself- if you find a case that shows full frame is significantly better, I'll try to replicate your findings. Like that Scientific Method thingy. If you have questions about the math, for example doing the crop in post (I used 2/3 or 66.67 percent in Photoshop, then resized both images to the same resolution), feel free to send me a PM or post on the forum.
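
Here's roughly what the crop-in-post step looks like if you'd rather script it than use Photoshop (a sketch with Pillow- the filenames, output size, and LANCZOS filter choice are just assumptions):

```python
# Simulate the S35 crop in post from a full-frame still, then bring both
# frames to the same output resolution so they can be compared directly.
from PIL import Image

CROP = 1.5                      # FF -> S35 crop factor
OUT_SIZE = (1920, 1080)         # common comparison resolution (assumption)

ff = Image.open("full_frame_shot.jpg")          # hypothetical filename
w, h = ff.size
cw, ch = int(w / CROP), int(h / CROP)           # central 2/3 of each dimension
left, top = (w - cw) // 2, (h - ch) // 2
s35_sim = ff.crop((left, top, left + cw, top + ch))

ff.resize(OUT_SIZE, Image.LANCZOS).save("ff_resized.jpg")
s35_sim.resize(OUT_SIZE, Image.LANCZOS).save("s35_crop_resized.jpg")
```
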
  4. Rich- nice job on the comparison. Your FF shot looks a bit different indeed compared to your S35 shot. You used 50mm and 85mm, both Zeiss Contax Planar primes? The FF shot indeed has more blur in the background and on the window handle. The scale is slightly higher as well (the lens in the foreground is larger). The light bokeh looks about the same, but at a slightly different scale. There's more contrast in the S35 shot (trees/blacks). Are those differences due to FF vs. S35 crop? I don't think so. Let's look at the math for equivalence: S35: 50mm F1.4 ISO 800 (ISO isn't important here- included to show the math for the next step). To get the equivalent optics/physics: FF: 50*1.5 = 75mm, F1.4*1.5 = F2.1, ISO 800*1.5*1.5 = 1800. When shooting primes, we may not have exactly such a lens on hand, so perhaps that's why you shot 85mm F2.8? The results are pretty close, and it's clear that the 85mm shot on FF has shallower DOF, perhaps more edge blur vs. S35 (which uses the sharpest part of the lens- the center). The difference in contrast is perhaps due to different lenses being used. In order to eliminate variables, I shot with the A7S in FF & APS-C mode (S35) using my best lens: the Canon 70-200 F2.8 II and the MB IV adapter. The 'model' is the venerable Canon 5D Mark III sporting the 24-105 F4L going topless (no lens cap). In the background is a kitchen, with an iPhone creating a specular point light for bokeh. Shot 1: Shot 2: Here are the settings: FF (A7S APS-C mode off): 70mm*1.5 = 105mm, F2.8*1.5 = F4.2, F4 used, ISO 800*1.5*1.5 = 1800, ISO 1600 used, shutter 1/50. S35 (A7S APS-C mode on): 70mm, F2.8, ISO 800, shutter 1/50. Which shot is which? Here's a method anyone can do with any camera, regardless of sensor size, to see the effects of cropping and to work with the math of equivalence. I shot the same scene twice in FF mode, where camera settings were set up so cropping for S35 mode could be done in post. This eliminates any possible in-camera processing that is different between FF and S35 mode on the A7S: Shot 3: Shot 4: For these two shots, JPGs were used. Which shot is which?
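
The settings arithmetic above as a small sketch, including the snap to the nearest settings the camera actually offers (the 1/3-stop aperture and ISO lists are approximate and just for illustration):

```python
# S35 -> FF equivalence for the shots above: multiply focal length and
# f-number by the crop factor, multiply ISO by the crop factor squared,
# then snap to the nearest value the camera can actually be set to.

CROP = 1.5
APERTURES = [2.8, 3.2, 3.5, 4.0, 4.5, 5.0, 5.6]   # approx. 1/3-stop steps
ISOS = [800, 1000, 1250, 1600, 2000, 2500, 3200]

def nearest(value, choices):
    return min(choices, key=lambda c: abs(c - value))

focal, f_number, iso = 70, 2.8, 800                # S35 settings from the post
ff_focal = focal * CROP                            # 105mm
ff_f = f_number * CROP                             # F4.2 exact, F4 used
ff_iso = iso * CROP ** 2                           # 1800 exact

print(ff_focal, nearest(ff_f, APERTURES), nearest(ff_iso, ISOS))
# 105.0 4.0 1600   (1600 or 2000 are equally close; the post used ISO 1600)
```
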
  5. Hey Ebrahim- I've wondered what the FF look was ever since getting a Canon 5D Mark II when it was launched (after seeing Vincent Laforet's "Reverie"). When the 5D3 was released, I bought two and sold the 5D2. When the FS700 came out, I didn't pay much attention, but when the Speedbooster was released, I bought the FS700+SB. When the GH4 was released, I didn't really consider it until I realized I could get equivalent looks with fast native lenses on m43, after learning about the math and physics of equivalence. When the A7S was released, with its low light performance, the ability to shoot with my existing Canon and Sony lens collections, decent color potential (with some work), and much smaller file sizes and effort vs. 5D3 RAW, it was a good deal. After shooting with all these cameras and understanding the math, it's clear the "full frame look" is a myth, an illusion. Full frame provides affordable options, better low light, and higher potential resolution, but it's not really a significant difference on a camera like the A7S. Here's the Nikon D810 again, shooting the same scene with almost the correct settings for equivalence (aperture should have been F4 on the FF shot + different ISO): Even when not set exactly right for equivalence- where is the full frame magic? A slight difference in DOF, which can be matched by setting the FF shot to F4. As leeys noted, most people won't notice the difference in this example (which again isn't equivalent). So here's the challenge: do you own or have access to an A7S? Shoot a still in APS-C crop mode with good shallow DOF. Then turn APS-C mode off and use the equivalence math: multiply focal length by 1.5, aperture by 1.5, and ISO by (1.5*1.5), then shoot the scene again and post the results. If there's a difference it will be undeniable in the photo(s). Hey Simon- randomly ran across a solution for the F5: http://www.adorama.com/KABEFZEOS.html . Wasn't looking for it- Google's ad technology surfaced the link! This means your T1.5/F1.4 24mm lens can look equivalent on your A7S and the F5 (very close: 1.05 crop). This challenge can be done with any full frame camera (shooting a still), even if it doesn't have a crop mode. Follow the math for equivalence, then crop the "1.5 crop mode" settings shots in post, and downscale the FF version to match resolution. You can blur the FF version to help reduce sharpness and match frames better. Other than sharpness, both cases should look equivalent. If one isn't convinced by the above photo, doing the test yourself is the best way to find the truth, whichever way it leads.
  6. Seeing what happens with politics, religion, and money, it's clear that the best way to influence and do our small part in helping to make the world a better place is to remove money from the equation. We need money to survive, for food, shelter, supporting ourselves and our families. Trying to create art at the same time generates conflicts, driven by our survival instinct. Pure art is feast or famine: until one is really good (and can market well), there isn't much money to be made. Available film work tends to be production oriented, working for someone else's creative vision (the same is true in entertainment software). So many people trying to make it in LA- many broke or nearly so, talking the good talk about how great things are, to keep up the impression of success, living beyond their means, an illusion, when ultimately they are struggling, depressed, lonely. I imagine the back story of so many people driving very expensive and exclusive cars around Beverly Hills. I look at their expressions- they're not happy. LA traffic has a component there, however when talking to people focused on money and materialism, it's clear they are missing something, and don't even know what they're missing, even after they've "made it". This leads to all forms of addiction, and sadly many times overdose: http://variety.com/2015/tv/news/parks-and-recreation-producer-dead-harris-wittels-1201437460/ . Which leads to art drawing attention to the issue: http://www.hollywoodreporter.com/news/cocaine-snorting-oscar-statuette-appears-775594 What people are missing are real connections to other people, fellowship and meaning. This is also helpful for people who have suffered trauma. This is one reason why internet forums are popular: it's a form of fellowship. Of sharing, helping, learning, and relevance (see "Birdman"- a brilliant point is made there). A recent study on addiction shows that the reason people become and stay addicted is isolation and lack of fellowship. I've gone to AA with a friend who's stopped drinking, and see how powerful that form of fellowship can be. Many times folks need to hit rock bottom, survive (sometimes barely), at which point their ego will finally allow them to accept their condition and choose to change their lives. Clint Eastwood came to mind in my prior post- his films have deeper messages and he never preaches- one of the greatest actor/directors of our time. Last year I took time off from tech work for 9 months and focused on writing and filmmaking. It was a great experience: I learned a lot about the LA and Atlanta film industries, met a lot of interesting people, and ultimately determined that the film business isn't something to do for a living if one is into it for the art, or for helping people. I read somewhere that the best way to "break into movies" is to already be rich. Just about everyone has multiple jobs, outside the film industry. For 2015 I'm back to tech work for the day job, and will produce our next documentary self-funded. Self funding is really the best way for creative work- we can do whatever we feel is right. Since we're not doing it for money, if it doesn't make money, no worries, and if it does, it helps fund other things, like large-scale local indoor organic farming (another side project based on an idea that came up while searching for LED lighting- there are already companies doing this profitably in Chicago and Japan).
  7. Super 35 and Full Frame can be equivalent with the right lenses: the look will be the same. It's easy to figure out the conversion. To get the same FOV and DOF (and equivalent ISO) on S35 as with full frame: divide the focal length and max aperture by the crop factor, and divide the ISO by the crop factor squared. In Simon's example (guessing the T (transmission) rating is just slightly higher than the max aperture, which is perhaps F1.4?): 24mm/1.5 = 16mm, F1.4/1.5 = F0.93, ISO 800/(1.5*1.5) = 356 ~= ISO 400. A 16mm F0.93- that's a pretty trick lens. Voigtlander makes the 17.5mm F0.95 for m43; the closest I found for the F5 (PL mount) is the Zeiss Superspeed 18mm F1.2. If there were a focal reducer for PL mount, that would provide an easy solution. Another good example of the convenience, availability, and price advantage of full frame lenses. Whereas full frame provides the most versatility and affordability (but no advantage in looks when using equivalent lenses and settings compared to Super 35), medium format has the least affordable lens choices: http://www.bhphotovideo.com/c/buy/Medium-Format-Lenses/ci/467/N/4288584244 . In terms of video cameras, is there anything else beyond the ARRI 65 (uses custom Hasselblad lenses)? Perhaps not a big enough market for a medium format to full frame focal reducer. With a little bit of work it looks like MF to FF is possible DIY: http://www.photigy.com/the-dslr-to-large-medium-format-diy-build-nikon-d800e-on-sinar-p-camera/ . Perhaps in the future we'll be able to 3D print any lens we can imagine (it's already possible to 3D print lenses with plastic: https://www.luxexcel.com/news/3d-printed-glasses/ ). Not including telescopes, it looks like some of the best lenses are actually rather small: the lenses of birds (eagles, owls, etc.). If the human eye can resolve ~600 megapixels, birds can resolve many gigapixels. Apparently some birds can 'see' magnetic fields and ultraviolet light. Underwater, who knows what the Mantis Shrimp sees- http://phys.org/news/2013-09-mantis-shrimp-world-eyesbut.html (16 photoreceptors compared to our 3!). What's the point of the biological lens discussion: these are relatively small lenses and sensors. Future cell phones will blow away even medium format lenses (perhaps from tech we learn from studying Mantis Shrimp, Eagles, etc.).
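
For reference, here's where the crop factors used in these posts come from- diagonal ratios (the sensor dimensions below are nominal values that vary a bit by camera; the 0.71x Speedbooster ratio is the advertised figure):

```python
# Crop factor = full-frame diagonal / target-format diagonal.
# A focal reducer multiplies the effective crop by its reduction ratio.
from math import hypot

FF = (36.0, 24.0)                       # nominal sensor sizes in mm
FORMATS = {"Super 35": (24.9, 18.7),    # approximate; varies by camera
           "APS-C":    (23.6, 15.6),
           "m43":      (17.3, 13.0)}

ff_diag = hypot(*FF)
for name, (w, h) in FORMATS.items():
    print(f"{name}: {ff_diag / hypot(w, h):.2f}x")

print(f"m43 + 0.71x Speedbooster: {ff_diag / hypot(17.3, 13.0) * 0.71:.2f}x")
# Roughly 1.4x, 1.5x, 2.0x, and ~1.42x, matching the numbers used above.
```
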
  8. Hey Ed- good points, and I agree there isn't only one way to create a film. I look at creativity as a combination of randomness and recognition of value. Randomness can come from many sources, however recognition of value comes from our mind, from experience, and in the case of film, being aware of emotional feedback. What makes a film good or popular is a broad appeal, where the viewers can connect with the emotion of the film experience created by the filmmakers. It's certainly possible to create something great with mostly randomness, and not knowing what we are doing. However, in the long run, it's best to understand what works and what doesn't work and why, when to take risks and when to stick with what is known to work, otherwise we risk being a 'one-hit wonder' (or no hits at all). I was somewhat aware of the design patterns that are known to work, but didn't really follow them for our first short film. After getting feedback from many people, now I appreciate and better understand why those design patterns are important! Some folks complain about the "Save the Cat" formula used for screenwriting. It's based on thousands of years of storytelling, "the hero's journey" and oral tradition. It's possible to make a good film unaware or completely ignoring the formula (and similar methods), however the probability of losing your audience goes up. People really do want "the same thing, only different". The audience expects the standard pattern, allows for some variance, and if all the elements are present which make up a good story, the film is good. Stray too far from the formula, to the point the audience doesn't understand what is going on, steps out of suspension of disbelief, or gets bored, the film won't be received very well. Put another way, think of all the great films- do they follow the pattern? Constraints are good- there's still plenty of room for creativity! Hey mtheory- I got into this industry to learn how to be a better communicator: creating something from nothing and influencing people in a positive way. There are other directors with the same goal- you can tell by the films they make.
  9. Hey jase- for this example case, we'd have roughly equivalent DOF and FOV. Normally FF captures more photons/light, however we have to bump ISO by 1.5^2 to be equivalent, so read noise could wash out any FF advantage for light gathering. I haven't tested this with the A7S, but for all but the highest usable ISOs, noise on the A7S probably won't be an issue when needing deeper DOF. For handheld and/or 60p, on the A7S the 35 1.4 setup is better due to less RS and aliasing. In general, the only real advantages for FF are lower cost lens choices for shallow DOF, higher potential resolution (depends on sensor & lenses), and more light gathering performance. The last two points are the same reason telescopes keep getting bigger: to allow us to see more clearly and deeply into the universe. For the A7S with its low noise and wide ISO range, it really comes down to lenses you have on hand or can afford to rent or buy. There is no such thing as the full frame look- the same look can be achieved with Super 35 given the right lenses and settings. The full frame look really means extreme shallow DOF, which started with the 5D2, which opened the door to relatively low cost shallow DOF for video. Hey lafilm- for the A7S and 1080p, most lenses will provide sufficient sharpness and detail. For narrative and closeups, I use a Black Pro-mist 1/4 to soften the image! For wides and landscapes, sure, the sharpest lenses will ultimately help provide the most detail from the sensor. For max detail, FF is required on the A7S, as crop mode doesn't have enough photosites for 4K without interpolation. To get the max resolution with no aliasing in 4K, we need an 8K sensor (slightly more when using a Bayer sensor to make up for Bayer reconstruction losses).
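
A back-of-the-envelope way to see why the ISO bump cancels the FF light-gathering advantage (my summary of the equivalence argument, not a measurement of any particular sensor): at the same shutter speed, total light is exposure per unit area times sensor area, and the two factors scale in opposite directions by the same crop-factor-squared amount, so shot noise matches and only read noise and sensor design can differ.

```latex
% Per-area exposure scales as 1/N^2, sensor area scales as crop^2,
% and the two factors cancel at equivalent settings.
\text{total light} \;\propto\; \frac{t}{N^{2}} \times A_{\text{sensor}},
\qquad
\frac{(t/N_{\mathrm{FF}}^{2})\,A_{\mathrm{FF}}}
     {(t/N_{\mathrm{S35}}^{2})\,A_{\mathrm{S35}}}
= \frac{A_{\mathrm{FF}}/A_{\mathrm{S35}}}{(N_{\mathrm{FF}}/N_{\mathrm{S35}})^{2}}
= \frac{1.5^{2}}{1.5^{2}} = 1 .
```
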
  10. One advantage of larger formats like FF or S35 over smaller sensors is less diffraction-related loss of sharpness at higher F-stops. As Rich noted, FF does indeed have an advantage in terms of lens selection and lower cost for shallow DOF (I use the lenses I already had for the 5D3 with the A7S, with the exception of the Sony SEL18200, which came with the FS700).
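
A rough sketch of the diffraction point (the Airy-disk approximation and sensor widths are standard ballpark numbers, not measurements): the blur spot in millimeters depends only on the f-number, so on a bigger sensor it's a smaller fraction of the frame.

```python
# Airy disk diameter (to the first minimum) ~ 2.44 * wavelength * N.
# The same F11 blur spot covers a smaller fraction of a wider sensor,
# which is why larger formats tolerate stopping down better.

WAVELENGTH_MM = 0.00055          # ~550 nm green light
SENSOR_WIDTH_MM = {"FF": 36.0, "S35/APS-C": 23.6, "m43": 17.3}

def airy_diameter_mm(f_number, wavelength=WAVELENGTH_MM):
    return 2.44 * wavelength * f_number

for n in (5.6, 11, 16):
    spot = airy_diameter_mm(n)
    fractions = ", ".join(
        f"{name} 1/{int(width / spot)}" for name, width in SENSOR_WIDTH_MM.items())
    print(f"F{n}: spot {spot * 1000:.1f} um -> fraction of frame width: {fractions}")
```
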
  11. Hey Rich- the math doesn't predict this and I haven't seen it in practice. The closest example I found online was this comparison: http://neilvn.com/tangents/full-frame-vs-crop-sensor-cameras-comparison-depth-of-field/ He didn't adjust the aperture correctly for the FF lens: he used F2.8 for both. He should have used 2.8*1.5 = F4.2 (or F4) and adjusted ISO up on FF by 1.5^2. In that case the images, which already look pretty close, would be as identical as possible with these lenses. It's true that the FF camera gets shallower DOF with the F2.8 lens, but there's nothing inherent in the FF sensor that gives it a look advantage in this case.
  12. In terms of look for those examples, it appears the FF folks had a challenge color correcting the lighting conditions. The Super 35 examples had much better color. The main difference between S35 and FF on the A7S is the ability to get shallower DOF more easily with FF (and true in general). Both S35 and FF have advantages and disadvantages in terms of look and artifacts, namely rolling shutter, noise, and aliasing (A7S). For the A7S, I prefer the look of full frame for low-light shots where the camera movement is smooth to limit rolling shutter artifacts. Full frame provides less noise and easier shallow DOF. Low-light+smooth-camera+shallow-DOF = use full frame. For shots with camera motion that cannot be smooth, APS-C (Super 35) mode works much better due to less rolling shutter. It's noisier so care must be taken with exposure and profile setup. For 60p (for 2.5x slomo) APS-C mode has less aliasing than FF mode. 60p also reduces rolling shutter even more, so for shaky handheld shots, 60p APS-C can be used even if intending to use 24p (just drop the 60p footage into a 24p sequence to 'drop frame' down to 24 (no slomo)). Using a Speedbooster along with APS-C and 60p allows for the best of both worlds: full frame shallow DOF (1.1 crop), about the same light performance as full frame (+1 stop from the focal reducer), the lowest rolling shutter possible, with options in post for 24p (drop frame) or 2.5x slomo (40% speed via re-interpretation- no interpolation or frames dropped). Here's the A7S with APS-C 60p in low light (stock PP6): https://www.youtube.com/watch?v=aq8m2FaVL1U . IIRC, I only used NR for the brief interview at the beginning which was very, very dark (drop-frame 24p). All the rest is low (mixed color) light at 60p slowed 2.5x (40%). Used AutoWB, IS, AutoFocus, Sony SEL18200 lens, handheld. A full frame sensor with a global shutter, no aliasing under any condition, and options for cropping would be ideal in a future camera. I haven't seen anything magical about FF vs. S35 and the math doesn't seem to show any look advantage to FF or S35 when all things are equivalent: http://www.josephjamesphotography.com/equivalence/ (search page for: Equivalence in 10 Bullets) . In summary, a 33mm lens in S35 at F1.4 and ISO 800 will look exactly the same as a (33*1.5=) 50mm lens in FF at (1.4*1.5=) F2.1 and ISO (800*(1.5^2)=) 1800. By exactly, I mean that when viewed normally, using lenses from the same manufacturer/set, no difference will be apparent. If there are other factors not predicted by the math/physics in real-world practice, an example showing this would be insightful. In practice, a 35mm F1.4 lens and ISO 800 for S35 and a 50mm lens at F2, ISO 2000 might be the closest attainable (and would be very close to equivalent).
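
To check the 33mm F1.4 vs. 50mm F2.1 numbers, here's a small sketch using the standard thin-lens DOF approximation (the circle-of-confusion values are conventional choices scaled by the crop factor, not measurements):

```python
# Near/far depth-of-field limits from the standard approximation, to verify
# that 33mm F1.4 on S35 and ~50mm F2.1 on FF give the same DOF when the
# circle of confusion scales with the format.

def dof_limits(focal_mm, f_number, subject_mm, coc_mm):
    h = focal_mm ** 2 / (f_number * coc_mm)          # hyperfocal-style term
    near = subject_mm * h / (h + (subject_mm - focal_mm))
    far = subject_mm * h / (h - (subject_mm - focal_mm))
    return near, far

subject = 2000.0                                      # 2 m, in mm
s35 = dof_limits(33.0, 1.4, subject, 0.020)           # CoC 0.030mm / 1.5
ff = dof_limits(33.0 * 1.5, 1.4 * 1.5, subject, 0.030)

print("S35 near/far (mm):", [round(x) for x in s35])  # roughly 1904 and 2106
print("FF  near/far (mm):", [round(x) for x in ff])   # within ~1mm of the S35 result
```
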
  13. Ed- I read a lot about filmmaking, lighting, writing, directing, etc., before making our first narrative short, Delta. It ended up being a lot more challenging than we thought it would be. By challenging, I mean the challenges were rarely technical, even though the technical side is what we discuss the most on this forum. From the books we read, we knew many things about filmmaking that we should do, but during production the 'right way' often wasn't done due to unknowns during filming. The 'right way' as noted in the literature is indeed helpful, which for the most part falls under doing extensive planning before shooting (fully tested script, practice script- table reading, storyboards, shot planning (camera/lens/lights etc.)). I've watched others shoot by winging it too- with the same results: it takes a lot longer, cast and crew get impatient and frustrated, etc. Ultimately, good preparation and time management really helps get a shoot done smoothly and also keeps the spirit of the cast & crew up, which likely will result in a better production. Having plenty of food & water on set is super important when things don't go as planned (still very important, especially when low budget and low or no pay). Being the director isn't just about creative vision, it's actually more about being an efficient leader: getting everyone working together smoothly and executing efficiently with time and limited resources (no matter the budget- there are always limitations). The most creativity happens with the script and planning, followed by editing and finishing. Shooting the scenes and getting great performances from everyone is also important, but not as important as planning and editing. The reason I say this is that the story is the most important element, and the story is created before shooting, can be modified during shooting (though that's very risky due to time constraints and the coordination of so many moving parts), and is finalized during editing. It's possible to have weak performances, camera work, lighting, etc., and still tell a good story. Without a good story, the rest won't matter as much. Some genres don't really need a story, such as anything done by Michael Bay, but are still fun to watch- 'amusement park films', etc. Star Wars didn't really have great acting, but it surely created a new genre- because of the story!
  14. Ebrahim- the lighting is kind of trippy, as they are recording the photon waves/packets! Just one light source, a titanium sapphire laser (pulsed at 13 nanoseconds). dhessel- re-reading the article it appears they can record 480 frames per 'movie' without repeated sampling. Only issue is that's only one scanline. So they currently do need to shoot the same scene over again per scanline. It would appear that with refinement and scaling, an updated camera could capture all scanlines in one go.
  15. I think what they meant in their description of how it works is that multiple 'relatively slow' sensors are precisely triggered, one after the other with ultra-high precision (femtosecond accuracy?). The limitation is the recording time length- limited by the number of sensors. What would be trippy is if they slow it down even more and find the light completely disappearing then reappearing as it moves- something like going in and out of phase with our universe and a higher-dimensional state we don't currently understand (related to String theory predictions). Regarding brownies- not that I'm recommending trying this at home (especially where it's illegal), however Francis Crick (co-discoverer of the DNA double helix), Kary Mullis (inventor of PCR- used to amplify DNA- a major breakthrough), Steve Jobs (he did something with computers, I think), the Beatles (wrote a good song or two), and many more all used LSD. DMT is perhaps today's 'LSD', with many people traveling to South America for mind-expansion therapy (psychotropics are now also being considered for PTSD as the truth about the war on drugs is becoming better known (hint: it's all about the money)). We're starting development of a new documentary with elements related to addiction and it's clear that some of these banned drugs can be very helpful in a variety of therapies. For example, using (magic) mushrooms to help cure addiction to alcohol. Each case is different, and substituting one addiction for another isn't a good idea, though the results of recent findings are very enlightening. In terms of human suffering and harm to life, if we flipped the law and made alcohol and cigarettes illegal and everything else legal, there would be much less pain and suffering. This is already happening slowly, and marijuana is now legal even in Washington DC, but it's still illegal on the federal level. The battle for and against is not about health, crime, etc. It's about money. Legalization (through laws voted on by the people) revolves around taxation, which apparently is going to be more profitable/powerful vs. the corpo-pharma-FDA machine.
  16. Very nice job! Guessing you used stock PP7 since using Alister's LUT?
  17. Looks a little bit over-exposed- all I can see is white
  18. http://web.media.mit.edu/~raskar/trillionfps/ , https://www.youtube.com/watch?v=qRV1em--gaM . Mind blowing! A camera system so fast it can record light itself moving through time. One example shows reverse causality- that means what was recorded should really happen after what is seen. They have to correct for it in post using concepts from Einstein's Relativity Theory! This may have implications in helping us better understand the nature of reality itself. This is approaching the maximum speed humans would find interesting for slomo. At this speed, watching a bullet travel through an apple would take a year to watch. Seeing around corners with reconstructed reflected light: there are many more amazing things coming in future cameras. Ultimately, in cameras as small or smaller than your cell phone. Beyond the technology- there are some very cool creative possibilities for science fiction stories (until this tech is available- then it won't be science fiction anymore).
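
A quick sanity check of the 'year to watch' figure, with made-up round numbers (bullet speed, apple size, and playback rate are assumptions, so this is only an order-of-magnitude estimate):

```python
# Order-of-magnitude check: how long would a 1-trillion-fps capture of a
# bullet crossing an apple take to play back at normal video speed?

CAPTURE_FPS = 1e12        # one trillion frames per second
PLAYBACK_FPS = 30         # assumed playback rate
BULLET_SPEED = 400.0      # m/s, rough ballpark (assumption)
APPLE_SIZE = 0.08         # m (assumption)

transit_s = APPLE_SIZE / BULLET_SPEED            # ~0.2 ms of real time
frames = transit_s * CAPTURE_FPS                 # ~2e8 frames captured
playback_days = frames / PLAYBACK_FPS / 86400

# Comes out to a few months with these numbers; a slower playback rate or a
# larger subject pushes it toward a year.
print(f"{playback_days:.0f} days of playback")
```
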
  19. Looks great! And a train with a steering wheel- nice interactive philosophy tutorial on life!
  20. Fast implementations of H.265 decoders are about 2x slower than H.264. If the same holds true for GPU-based decoding, direct NLE support would provide real-time performance. That said, I have a fairly powerful 12-Core MacPro with SSDs and a GTX 770 (modded for OSX: used on both OSX and Win7). When editing our short, "Delta", the GH4 4K footage did give Premiere Pro CC trouble. The problem was both the NVidia driver and PPro- even the mouse would lock up on slowdowns. We ultimately finished editing on the OSX side as slowdowns were less of an issue (and no lockups). So even H.264 4K footage can be helped by transcoding to an intermediate codec such as ProRes or 422 10-bit ALL-I H.264. For simple edits with light grading, both H.264 and H.265 should be editable in real-time with recent computers and GPUs, provided the NLE includes good GPU acceleration.
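
For anyone who wants to script the ProRes transcode, here's a minimal sketch calling ffmpeg from Python (filenames are placeholders; prores_ks profile 3 is the 422 HQ flavor; assumes ffmpeg is on the PATH):

```python
# Transcode a long-GOP H.264/H.265 camera file to ProRes 422 HQ so the NLE
# gets an easy-to-decode intraframe intermediate.
import subprocess

def to_prores(src, dst):
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks", "-profile:v", "3",   # 3 = ProRes 422 HQ
        "-c:a", "pcm_s16le",                      # uncompressed audio in .mov
        dst,
    ], check=True)

to_prores("gh4_clip.mp4", "gh4_clip_prores.mov")  # hypothetical filenames
```
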
  21. I'm developing a fast transcoding tool for Windows (not sure if a Mac version is needed as iFFmpeg seems to have the bases covered). For testing I bring multiple clips into PPro with different settings and lay them on top of each other. I then toggle between the original and the other versions at 400% zoom, as well as watching in motion and at normal zoom. In addition to supporting ProRes, I'm doing tests with a 10-bit build of ffmpeg to allow the creation of 422 10-bit H.264 (IPB and ALL-I) in an MXF container. This is basically what Sony markets as the high-end XAVC codec used in the FS7 and above. The goal is to see if the newer AVC spec allows for higher quality at a lower bitrate vs. ProRes (and still edits about as fast). More info about ProRes to help with selecting which version to use here: https://www.apple.com/final-cut-pro/docs/Apple_ProRes_White_Paper.pdf
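
Roughly what the 10-bit 422 H.264 test encodes look like as ffmpeg calls (a sketch that assumes an x264 build with 10-bit support; the CRF value and filenames are placeholders, and -g 1 is what makes the ALL-I variant):

```python
# Two test encodes: ALL-I (every frame an I-frame, -g 1) and normal IPB,
# both 10-bit 4:2:2 H.264 wrapped in MXF, roughly in the spirit of XAVC.
import subprocess

def encode_10bit_422(src, dst, all_i=False, crf="16"):
    cmd = ["ffmpeg", "-i", src,
           "-c:v", "libx264", "-pix_fmt", "yuv422p10le", "-crf", crf]
    if all_i:
        cmd += ["-g", "1"]               # GOP of 1 => intra-only
    cmd += ["-c:a", "pcm_s16le", dst]    # MXF expects PCM audio
    subprocess.run(cmd, check=True)

encode_10bit_422("master.mov", "test_ipb.mxf")
encode_10bit_422("master.mov", "test_alli.mxf", all_i=True)
```
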
  22. It looks like KineMini uses Cineform RAW- which is wavelet-compressed Bayer data (http://www.kinefinity.tv/cameras/kinemax-6k/). Wavelet compression doesn't suffer from macroblock artifacts (as ProRes can). However wavelet compression doesn't work very well at higher compression ratios. As the compression level goes up, the entire image gets softer (higher-frequency information is discarded). Depending on the compression level supported, it's clear that Cineform RAW files (basically a form of MJPEG2000 encoding single-plane Bayer data) could be smaller than ProRes (422/444 YUV (3 planes) MJPEG) and potentially look better with no macroblock artifacts. Decompressing wavelets is very fast, however high quality debayering requires GPU acceleration to run in real-time. Which is better in practice? Sounds like a good test.
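
A small illustration of the 'gets softer as compression goes up' behavior (a generic wavelet-thresholding sketch with PyWavelets- not Cineform's actual codec): dropping the finest detail subbands softens everything rather than producing block artifacts.

```python
# Demonstrate how discarding high-frequency wavelet subbands softens the
# whole image rather than producing block artifacts. Generic sketch only;
# Cineform's real quantization is more sophisticated than zeroing bands.
import numpy as np
import pywt

def wavelet_soften(image, levels_to_drop):
    coeffs = pywt.wavedec2(image, "db2", level=4)
    # coeffs[0] is the coarse approximation; coeffs[-1] is the finest detail.
    for i in range(1, levels_to_drop + 1):
        coeffs[-i] = tuple(np.zeros_like(band) for band in coeffs[-i])
    return pywt.waverec2(coeffs, "db2")

image = np.random.rand(256, 256)            # stand-in for a real frame
mild = wavelet_soften(image, 1)             # slight softening
heavy = wavelet_soften(image, 3)            # visibly soft everywhere
```
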
  23. I haven't used the Commlite- reports seem to be that it's as good or better than the MB. My MB IV (latest firmware) locks up the camera occasionally (unlike the V1 SpeedBooster!) but it's easy to deal with- just pull the battery. I'd at least try the Commlite since it's ~$300 less than the MB. If you like a solid metal focus ring, skip the Canon 1.4 as the focus ring isn't great (you can get used to it, but it's not like the Voigtlander, Nikon, Zeiss, etc. manual lenses). The Canon 1.4 makes sense if using a Canon body to get fast+accurate autofocus. I use it with the A7S because I have it. Sigma also makes a nice 50 1.4 for $399, and a really nice 50 1.4 ART version for $949 (some folks are comparing it to the 55mm Zeiss Otus).
  24. Compare to the Bali Samsara dancers shot in Panavision 65mm and Kodak film: https://www.youtube.com/watch?v=hbfs4WGIjgw (couldn't embed video- showed up blank). Compression makes the comparison tricky though the colors for the Kine look comparatively excellent.