Everything posted by jcs

  1. Interview shot on the A7S II (very low light), C300 II (studio), and 1DX II (slow motion). Issues with PP CC and the NVidia drivers cut into the time available to fix minor green-screen and color problems. Edited with PP CC and FCPX (the last drum segment). Mountains and clouds from the air shot on an iPhone 5S. https://www.youtube.com/watch?v=5YxuG3xhudo
  2. Windows 10 Pro (upgraded from Win7, not a fresh Win10 install), 2010 Mac Pro 3GHz 12-core, 24GB RAM, GTX980ti 6GB, lately mostly in OSX 10.11.5. The performance issues are with the NVidia drivers (including crashing the OS). Just moved some projects over to a 2013 MBP (GT 750M) and got smooth playback with C300 II, 1DX II, and A7S II 4K files at 1/4 resolution, which is fine for a laptop. FCPX needed the Better Performance render option, then played the same files back smoothly.
  3. What hardware: CPUs, memory, GFX card (AMD?), OS & version, sound device? See the comments on No Film School (http://nofilmschool.com/2016/06/latest-version-adobe-premiere-pro-now-available-chock-full-new-features); more folks are having the same issues.
  4. Face-aware in PS is cool. Portrait Pro is amazingly useful (it can even fix makeup in post): http://www.portraitprofessional.com/

     Adobe has never really put much emphasis on performance or stability: feature bloat wins the day in the boardroom. This means the folks in business and product development have advised engineering to bloat it up rather than improve stability and performance. If that's a wrong assumption, then the bug fixes are an epic fail! PP CC is now so slow and buggy that it has 'jumped the shark' and is not really usable for 4K material (C300 II, A7S II, and 1DX II footage). On the exact same hardware, FCPX is mostly smooth as butter and very rarely crashes. So even though the NVidia drivers are buggy too, FCPX seems to be doing a better job working around them (as I did back when I wrote a lot of Windows audio/video software).

     Photoshop, Illustrator, and Audition (and AE for simple effects, otherwise way too slow) are useful tools. Acrobat DC doesn't crash, however it's unbelievably slow (hint: turn off font smoothing while editing PDFs). Premiere gets useful new features along with incredibly time-wasteful bugs, and either doesn't get faster or gets slower. At first I thought 2015.3 might be slightly faster, however after using it on a medium-complexity project, it's much slower and far buggier. I know there are also problems with the NVidia GPU drivers, however I haven't changed them, so the extra bugs and slowdowns are Adobe's. These issues have come and gone for many years. I thought changing back to a GTX770 would help, as perhaps my 980ti has hardware issues, however that doesn't appear to be the case: http://forums.macrumors.com/threads/wierd-issues-with-cmp-gtx-980-ti-and-premiere.1966577/

     Note that in Windows 10 I haven't gotten any BSODs, however it's slower than OSX (CUDA; OSX lately is faster with OpenCL). Also note that Resolve hasn't done anything weird with GPU-accelerated effects or crashed (I've only done basic C300 II, 1DX II, and A7S II tests). PP CC in 2014 had the same issues- Windows/Mac, different GPUs, doesn't matter, clearly PP CC bugs: https://forums.adobe.com/thread/1515611

     FCPX will need some plugins to get closer to PP CC's basic features; I think it's time to bail on PP CC.

     EDIT: in order to finish a PP CC project in progress, I installed an older BETA version of the NVidia drivers (the first one for 10.11.5 here): http://www.macvidcards.com/drivers.html. The theory is that Adobe's developers write their code against the drivers available when they start the bug fixes. They tend to 'lock' code based on those drivers, so that by the time the product finishes QA and is released, it may actually work better on the now-older (even BETA) drivers. Surprisingly and thankfully, the older BETA drivers are at least usable with PP CC 2015.3 (still buggy, but I can make progress on the project). So, to be fair to Adobe: while the glitches and bugs are annoying and hurt creativity, they can usually be worked around and aren't show stoppers. The show-stopping performance bugs are clearly related to the NVidia drivers (Windows and OSX).
  5. Unfortunately Adobe seems to be adding bugs faster than they are removing them. 2015.3 is buggier with GPU-accelerated rendering (latest OSX, GTX980ti, latest drivers). Not only does the display corrupt (or go black) frequently, it's also even slower than the prior version (both OpenCL and CUDA). The solution? Turn off Mercury and run Software Only at 1/4 or 1/8 rendering: faster, with fewer slowdowns and rendering issues. Since Premiere GPU crashes also crash OSX, NVidia's drivers are an issue as well. [EDIT] Even with 1/8 Software rendering, PP CC is so slow as to be almost unusable when jumping around the sequence (4K material).
  6. YouTube uses ffmpeg, and this looks like an ffmpeg bug for videos > 1080p, something related to bt601/bt709/rec2020 color handling. I've also noticed this issue: too much saturation and red. The workaround has been to pull saturation and red, re-render, upload, check the results, and repeat until it's OK. It might be possible to use ffmpeg locally to rewrap without transcoding while changing the color info (colormatrix etc.); see the options here: http://video.stackexchange.com/questions/16840/ffmpeg-explicitly-tag-h-264-as-bt-601-rather-than-leaving-unspecified. The flags to rewrap audio and video without transcoding are -c:a copy -c:v copy (a sketch of the full command follows below).
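     To make the rewrap idea concrete, here's a minimal sketch wrapping the -c copy flags from above in Python. The h264_metadata bitstream-filter options are my assumption based on the linked Stack Exchange thread (check `ffmpeg -h bsf=h264_metadata` on your build), and the 1/1/1 values (bt709) are just an example, not a recommendation.

     ```python
     # Rewrap without transcoding (-c copy), re-tagging the H.264 color
     # metadata so players interpret the file consistently.
     import subprocess

     def rewrap(src: str, dst: str) -> None:
         subprocess.run([
             "ffmpeg", "-i", src,
             "-c:a", "copy", "-c:v", "copy",    # no transcode, just rewrap
             "-bsf:v", "h264_metadata="         # assumed option names; verify
                       "colour_primaries=1:"
                       "transfer_characteristics=1:"
                       "matrix_coefficients=1",  # 1/1/1 = bt709
             dst,
         ], check=True)

     rewrap("master.mp4", "master_tagged.mp4")   # hypothetical file names
     ```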
  7. In order for sensor size to have a real effect on the captured image, the math and physics must predict that at least one component of the final image changes with sensor size. I noted that in the DOF simulator the Circle of Confusion (CoC) changes with sensor size, and the CoC has defined limits based on sensor size here: https://en.wikipedia.org/wiki/Circle_of_confusion. My understanding is that this just sets the limit for sensels on the sensor to resolve what will become a single dot (not blurry). This may have effects on real-world physics and sensel optics (CFA, low-pass filter, etc.) which could explain real-world differences in imaging with different sensor sizes. Unless there's a single parameter in the math model that predicts some kind of variance with sensor size, any effects in the real world are due to optical variances and will probably be relatively subtle.

     In the extreme case of something like the (x)Cyclops, it looks like the in-focus region is in really good focus and the near and far defocus regions are out of focus, with the three regions somewhat constant in appearance- almost like a sharp photo where depth-constant Gaussian blur has been applied to the foreground and background (as we can simulate with Snapseed's Tilt-Shift simulation etc.), instead of nearer objects being progressively more out of focus and more distant objects likewise. That progression is happening, but it looks 'slowed down', suggesting some kind of non-linear function. How would such non-linear effects be modeled by the math and physics?

     The concept of the CoC means the in-focus region runs from where a dot is fully resolved and not blurry at all to where that stops being true, so the entire DOF range is in focus. Just before and just after, points blur so that the sensor can no longer resolve them as points. Intuitively this all seems linear; so how would the math describe non-linear behavior for the 'through-focus MTF'? (A thin-lens sketch of this follows below.) One way would be non-linear effects of lenses, which of course is not due to the sensor size itself, and could thus be done for many different sensor sizes if desired and/or physically possible. Again, a ray tracer that simulates photons through 'perfect' optics could be used to explore this.

     I am open-minded that there could be an interesting effect not predicted by the math, even if it's a common side effect of MF lens design. However, it doesn't seem likely: if there were some 'magic', there would be a huge market opportunity for an MF-to-FF SB, and that hasn't happened (especially as richg101 pointed out the low-ISO limits of MF sensors). Real-world examples using 'equivalence' would again be helpful to figure out what, if anything, interesting (and real) is happening.
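     Here's a minimal thin-lens sketch of the through-focus blur asked about above (my assumption: an ideal lens, no sensor-specific optics). The blur-circle diameter is b = (f/N) * f * |s - d| / (d * (s - f)) for focal length f, f-number N, focus distance s, and object distance d. Note the 1/d term: near blur grows quickly, while far blur saturates toward the constant (f/N) * f / (s - f)- one mathematical candidate for the 'depth-constant' background blur described above, with no sensor-size parameter anywhere in the formula.

     ```python
     # Thin-lens blur-circle diameter on the sensor (all distances in mm).
     def blur_circle_mm(f_mm: float, N: float, s_mm: float, d_mm: float) -> float:
         aperture = f_mm / N  # entrance-pupil diameter
         return aperture * f_mm * abs(s_mm - d_mm) / (d_mm * (s_mm - f_mm))

     # Example: 117mm f/1.8 focused at 3 m (close to the FF-equivalent
     # of the Leaf setup discussed below):
     for d in (1500, 2000, 2500, 3000, 4000, 6000, 10000):
         print(f"{d/1000:5.1f} m -> blur {blur_circle_mm(117, 1.8, 3000, d):.3f} mm")
     ```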
  8. I haven't done any complex text yet in FCPX. However, this sounds like something you might be able to tweak in Motion 5? You could also render out text with alpha in Premiere (or After Effects) and bring it into FCPX- that way you can use the best of both apps.
  9. Are you saying that effects such as 'DOF falloff' are not real, or that they are related to a particular MF lens design and not to the physics of light imaging a subject on a larger sensor? (The xCyclops uses a 5D2/A7S to capture a projected image on a plane.)
  10. Thanks for watching, racer5!
  11. Those are indeed visible differences, for as 'ideal' a test as is physically possible, since a very high quality zoom lens was used on the same camera (cropping in post after changing settings to simulate an APS-C crop). Since the differences are very subtle, the aesthetic quality boost may be lost on the general public, in the same way fine food and drink can be. Were The Hateful Eight and The Revenant better movies for being shot on the ARRI 65? Did they make more money because of the large-format camera used? (Smaller-format cameras were also used.) The answer to both questions is probably not significantly. With effectively unlimited budgets, why not shoot on the 'best'? The relative cost of the larger-format camera is insignificant to the film's budget. However, using the pinnacle-best gear makes the directors and DPs happy, and that counts for something.

     Using the DOF simulator, I noticed the circle of confusion and other parameters weren't exactly the same between formats. While FOV and DOF will be nearly identical, subtle things such as the stretching of the blur-transition region can be analyzed by tracing individual light rays- photons. My crop test was not perfect, however there is a way to study this without worrying about issues of optics or lenses: computer simulation via ray tracing. This could be used to create graphs showing the increase in blur-transition regions based on sensor size. If it's a real effect, the results could be presented to optical designers such as Caldwell to show a possible market for an MF-to-FF focal reducer. If not by Caldwell, perhaps someone else in HK or CN.

     Someone with the desire and free time could explore ray tracing to create renders showing any advantages of larger sensor sizes using Blender (free): https://www.blender.org/features/cycles/ (a starting-point sketch follows below). Since this is a controversial subject, showing these kinds of results could help visually explain effects previously difficult to put into words. I need to get back to working on improving content and storytelling- more important than tech!
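     As a starting point, here's a sketch (assuming the Blender 2.8+ Python API; the scene contents and values are hypothetical) that sets up two equivalence-matched cameras in Cycles- same FOV, DOF, and focus distance, different sensor sizes- so the rendered blur transitions can be compared directly:

     ```python
     import bpy

     def make_camera(name, sensor_w_mm, focal_mm, fstop, focus_m):
         cam = bpy.data.cameras.new(name)
         cam.sensor_width = sensor_w_mm        # the variable under test
         cam.lens = focal_mm
         cam.dof.use_dof = True
         cam.dof.focus_distance = focus_m
         cam.dof.aperture_fstop = fstop
         obj = bpy.data.objects.new(name, cam)
         bpy.context.collection.objects.link(obj)
         return obj

     # 'MF' 56mm-wide sensor, 180mm f/2.8 vs. FF 36mm, 117mm f/1.82 (~0.65x crop):
     make_camera("MF_cam", 56.0, 180.0, 2.8, 3.0)
     make_camera("FF_cam", 36.0, 117.0, 1.82, 3.0)
     ```

     Render the same scene through both cameras and difference the results; if sensor size alone mattered, the defocus regions should diverge.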
  12. Well, that's not the same thing as a lens system and camera sensor collecting photons. A lot of folks here don't like math (including me- I learned it to write video games!). So set up two different camera systems and match them as closely as you can: FOV, DOF, and exposure. When you look at the final results, you may be surprised. The DOF simulator is also helpful if one is short on time.
  13. Are you joking? Multiplying the aperture by the crop factor is mathematically required if one wishes to shoot with equivalence between two camera systems (for example, shooting a GH4 and a 5D3 and desiring the same basic FOV and DOF on both cameras). Did you try the DOF/FOV simulator? http://dofsimulator.net/en/. Of course you can also set up two camera systems with different sensor sizes and try to match FOV and DOF. If you do, you'll find that you need to multiply the aperture by the crop factor (and the ISO by the square of the crop factor if you also want to match exposure)- see the sketch below. Perhaps you could post an example with your cameras where you match DOF and FOV, along with focal length, aperture, and ISO for both cameras? (You can skip the math and do it by eye; the results may surprise you.)
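     The arithmetic is simple enough to show in a few lines. This is just a sketch of the scaling rules stated above (the GH4 numbers are an example I picked, not from the thread):

     ```python
     # FF-equivalent settings: scale focal length and f-number by the crop
     # factor, and ISO by the square of the crop factor (exposure match).
     def ff_equivalent(crop: float, focal_mm: float, fstop: float, iso: float):
         return focal_mm * crop, fstop * crop, iso * crop ** 2

     # GH4 (2x crop) at 25mm f/1.4 ISO 100:
     f_eq, n_eq, iso_eq = ff_equivalent(2.0, 25, 1.4, 100)
     print(f"FF equivalent: {f_eq:.0f}mm f/{n_eq:.1f} ISO {iso_eq:.0f}")
     # -> FF equivalent: 50mm f/2.8 ISO 400
     ```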
  14. How is that helpful? Copying others' work without fundamentally understanding why something looks good isn't being an artist. Understanding why something looks good from a technical standpoint doesn't eliminate aesthetic sensibility. Rather, it helps with original creative work, since it's not necessary to mindlessly copy-paste other people's ideas into a new work. Nothing wrong with being inspired by other works- only that original works are better when it's understood why the inspirational material is good, so those concepts can be applied in more general and creative ways. Not taking the time to understand the tech behind the art is lazy, and will fundamentally limit creative work. All art is technical. The greatest artists are also master technicians: Michelangelo, da Vinci, etc. Who was perhaps the biggest camera nerd ever? Stanley Kubrick. Understanding why something is aesthetically pleasing is technical. If a particular camera and lens combo has a pleasing aesthetic, we use technology to understand why.
  15. If the numbers I found are correct, the crop factor for the Leaf AFi-II 10 is sqrt(36^2 + 24^2)/sqrt(56^2 + 36^2) = 0.65, so the FF lens and aperture equivalents are 0.65*180mm = 117mm and 0.65*F2.8 = F1.82 (a few lines of code for this arithmetic follow below). The closest camera/lens combo I have is a 5D3, A7S II, or 1DX II with a Canon 135 F2, however it won't look exactly the same (it's close, see below).

     Here's a cool tool to help visualize equivalence and show DOF and other stats.
     Leaf 180mm F2.8 (0.62 was the closest crop for this tool): http://dofsimulator.net/en/?x=AcIA4MF3AAAIJAckAAA
     Full frame 117mm F1.82 (pretty much identical to the Leaf setup): http://dofsimulator.net/en/?x=ASSAkQF3AAAIJAckAAA
     Here's what the 135mm F2 on full frame looks like: http://dofsimulator.net/en/?x=AVGAoQF3AAAIJAckAAA
     Similar, and clearly not the same as the 117mm F1.82, which is pretty much the same as 180mm F2.8 on the Leaf (CoC, DOF, and hyperfocal distance aren't exactly the same- very close- however it would be hard to see the difference in a real photo).
     Here's what the Sigma 50-100mm F1.8 would look like at 100mm F1.8: http://dofsimulator.net/en/?x=APoAkQF3AAAIJAckAAA
     Also close to the look of the 180mm F2.8 Leaf, and still different. That's why it's important to match settings exactly, and why using a zoom lens on the same camera is useful in understanding crop factor and DOF.

     In summary, the equivalence equations work, and real-world tests match up as expected, as do all properly implemented online simulators. Any effect of MF vs. FF is related to sensor design (sensel size etc.), sensor optics, lenses, and firmware/software processing, not sensor size itself. If someone would like to send me an MF camera system, I'd be happy to shoot equivalence tests with interesting subjects: real people. Somehow I don't think Phase One or Hasselblad would be game for such a test*.

     * If something remarkable were discovered, I'd be as thrilled as anyone to share results showing an MF camera system outperforming a modern high-end FF camera system.
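     For reference, the diagonal crop-factor arithmetic above in runnable form (a sketch; sensor dimensions as quoted in the post):

     ```python
     import math

     def diagonal(w_mm: float, h_mm: float) -> float:
         return math.hypot(w_mm, h_mm)

     crop = diagonal(36, 24) / diagonal(56, 36)   # FF diagonal / Leaf diagonal
     print(f"crop {crop:.2f}: 180mm -> {180 * crop:.0f}mm, F2.8 -> F{2.8 * crop:.2f}")
     # -> crop 0.65: 180mm -> 117mm, F2.8 -> F1.82
     ```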
  16. If no one is willing to do some work and actually test MF vs. FF in the real world with equivalent settings, then what's the point of debating without any comparative evidence? I'm a cognitive scientist, open-minded and willing to learn new things. That said, a valid scientific test- which is easy to do- is all that's needed to make a useful point regarding MF vs. FF. If the only difference is MF lens design, that's cool, however this debate was about MF vs. FF cameras (and again, Caldwell says there's no advantage to MF lenses anymore- no point in an MF => FF SpeedBooster). If an MF camera system (body + lens) is truly better than FF, I'd invest in MF. So far there's no comparative evidence showing this to be true (comparison being the only scientifically valid way to determine if there's a difference).
  17. This debate is visual- so where's the visual proof: MF and FF systems correctly set up for equivalence? I posted a link comparing the 5DSR to the Phase One. The difference is minor, and not enough for a business to invest in MF systems. To learn and help others, one must do some work: shoot a correctly set up MF vs. FF test. Shooting with a zoom is perfectly valid- the debate is about sensor size, not lenses- though now the consensus is that it's not sensor size anymore but the MF lenses themselves. If that were true, a SpeedBooster for FF would make sense, hence Brian Caldwell's comment that it's not worth it because MF lenses aren't better than the top FF lenses. So if true, the debate would shift to MF providing better value due to lower cost. Is that really true?
  18. Winning arguments is a waste of time unless you're prosecuting a lawsuit. Learning new things and helping others is time better spent. MF vs. FF can be tested with the same camera and lens (changing lens settings and cropping in post), which will typically show that the sensor size isn't where any effect is coming from. Now the point shifts to lenses- cool, that could be helpful to understand for those thinking about getting an MF body to take advantage of the lenses. However, Brian Caldwell stated that the current top 35mm lenses are as good as or better than MF lenses. Posting equivalence-matched MF camera+lens vs. FF comparisons could be helpful in showing the strengths of MF lenses vs. FF lenses. Even more useful would be showing that MF lenses can produce images more cost-effectively than very expensive FF lenses (Otus etc.). The same argument applies when comparing FF to m43: FF lenses are effectively cheaper for getting super-shallow DOF.
  19. I have provided evidence from my own work and made a good faith effort to find any online images or other evidence that shows that MF provides something not possible with FF- I'm seeking truth, wherever it may lead. You have provided only words with nothing to back them up and no effort to do any actual work. The burden of proof is on you to prove that MF is in any way better than FF due to sensor size alone.
  20. I think we all agree with that point- what we can do in the real world with available equipment is all that really matters vs. math and theory. If there were a business reason to use a Phase One, I'd use one. Their camera systems are top notch (design, usability, and final image quality). Currently the 5D3 and 1DX II with fast lenses (especially the 85mm 1.2L and 135mm F2L) can create crazy-shallow DOF (sometimes too shallow to be usable wide open).

     Here's a test many don't realize they can try with any camera to better understand equivalence: shoot with the same lens, change settings, then crop in post (images 3 and 4 from http://brightland.com/w/the-full-frame-look-is-a-myth-heres-how-to-prove-it-for-yourself/ ):
     [Full Frame]
     [Super 35 (cropped in post- same lens)]
     Is there a difference? Sure, but so minor that the average person (client etc.) won't ever notice. You can do this test with your MF camera and crop to FF! Since you are using the exact same lens, only changing camera settings and cropping in post, the lens and sensor technology are identical- the only thing changing is effective sensor size. Try it! (A few lines of code for the crop step follow below.)

     Thus, unless a client could clearly see the difference, why would a business invest in MF bodies and lenses? At the ultra high end, it's more about marketing/appearances, and perhaps most importantly, Phase One for example does produce the highest-megapixel professional/studio cameras(?) along with what looks like the best processing currently available. As noted in the link posted in this thread, the 50Mpixel 5DSR is very similar in quality to the 50Mpixel Phase One for ultimate image quality.
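     For the crop-in-post step, a sketch using Pillow (the file names and the 1.6x Canon APS-C factor are example values): center-crop the full-frame file by the crop factor, so lens and sensor tech stay identical and only the effective sensor size changes.

     ```python
     from PIL import Image

     def center_crop(img: Image.Image, crop_factor: float) -> Image.Image:
         # Keep the central 1/crop_factor of each dimension.
         w, h = img.size
         cw, ch = int(w / crop_factor), int(h / crop_factor)
         left, top = (w - cw) // 2, (h - ch) // 2
         return img.crop((left, top, left + cw, top + ch))

     ff = Image.open("full_frame_shot.jpg")
     center_crop(ff, 1.6).save("s35_simulated.jpg")  # view both at the same size
     ```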
  21. I'd rent or buy a Phase One- as noted earlier in this thread, I think it's the best camera available right now, though not because it's medium-format-sized. I know a lot of people don't like math, however the equivalence math is very simple (basic linear math that can be done with a calculator). I've done the tests, others have as well, and sensor size by itself doesn't do anything magical: FF vs. S35 or MF vs. FF. Equivalence really needs very similar lens designs on both cameras for such tests. For example, Cooke vs. Leica vs. Zeiss vs. Canon for Super 35 produce very different looks. Clearly the sensor size isn't changing- the optics account for the differences. If you feel that the math is wrong and all the links and tests I posted are invalid, you can provide counter-proof with links or your own tests. Asking me to prove your point isn't how the scientific method works- the burden of proof is on you.
  22. Well, if you feel that MF is the way, why not contribute helpful info and post real-world images (with cameras set up for proper equivalence) showing how MF is superior to FF (and S35 etc.)?
  23. You're right I don't understand. Isn't it best to seek truth and to learn and share to help others? Are you saying that your posts are just to mess with people and not to help or provide any value to others?
  24. What math proof? You posted a random 3D plot with no equations or description. How about real-world proof? (Again, it must be shot with equivalent setups and the same exact shot under controlled conditions- winging it won't work.)