Everything posted by jcs
-
Zacuto shrink the EVF - Gratical Eye available soon for $1950
jcs replied to Andrew Reid's topic in Cameras
The Gratical Eye will be used with some kind of rig for a DSLR, which will have an external battery and power (to run the Gratical and camera). Something like this little guy won't add much weight (though extra weight can be helpful for balance): http://www.gearbest.com/cables-connectors/pp_304334.html?currency=USD
-
We use the LS-1S, LS-1/2 (97 CRI), 528W, 528S, and H198, all in daylight color. They all work great, especially for skin tones (better than our more expensive Dracast LEDs, which are now only used for bounce/back fill). Looking forward to seeing the variable spot/Fresnel lights from Aputure. We use a Fiilex P360EX, which is very nice, but it's not a variable beam angle or very bright (though the plastic Fresnel insert can be swapped in and out).
-
Getting people to watch and love your movie is easy. Simply make a great movie. This will result in the movie getting shared (word of mouth, link via email, text, Facebook, Twitter, etc.), and if the viral coefficient is greater than 1.0, the movie has a chance to become exponentially popular and thus successful ("viral").
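To make the 1.0 threshold concrete, here is a minimal sketch (Python, with made-up numbers) of a simple discrete-generation sharing model, just to show why K > 1.0 grows and K < 1.0 fizzles:

```python
# Toy viral-growth model: each viewer shares the movie with `invites` people,
# and a fraction `conversion` of those actually watch it.
# Viral coefficient K = invites * conversion. K > 1.0 means each generation
# of viewers is larger than the last (exponential growth); K < 1.0 means decay.

def simulate(seed_viewers, invites, conversion, generations):
    k = invites * conversion
    viewers = seed_viewers
    total = seed_viewers
    for g in range(1, generations + 1):
        viewers = viewers * k          # new viewers reached this generation
        total += viewers
        print(f"gen {g}: K={k:.2f}, new viewers ~{viewers:.0f}, total ~{total:.0f}")
    return total

simulate(seed_viewers=1000, invites=4, conversion=0.3, generations=5)   # K = 1.2, grows
simulate(seed_viewers=1000, invites=4, conversion=0.2, generations=5)   # K = 0.8, fizzles
```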
-
Craft Camera - cardboard and Arduino? http://petapixel.com/2013/02/25/craft-camera-a-diy-digital-camera-made-with-cardboard-and-arduino/ Cardboard is lighter than magnesium or carbon fiber... Take that BM and Red! On a serious note, cameras are now computers with sensors and lens mounts. In the future it will be very easy to buy modular components to build any kind of camera one desires, similar to the way we can build custom PCs today. With computational photography there won't be as much need to change lenses- DOF, 3D, etc. can be computed from the multiple sensors. Compare this to what happened in the audio world. Many purists thought analog (especially tube amps) could never be matched by digital. Now just about everything is available as a plugin, simulating all major amps, preamps, and effects (even microphones to some extent). Some of it needs more work, but a lot of it is good enough to make even die-hard purists happy. Check out the Axe Fx for guitarists: http://www.fractalaudio.com/p-axe-fx-ii-preamp-fx-processor.php Amazing sound!
-
Seems like that'd be quite the trick on a Canon camera
-
I might give it a try. If it's a lot better than the 24-105 F4, could be worth it.
-
It's T4.4 and f/4: constant aperture and constant light transmission. Great for fast interior shooting (no lens changes, and you can zoom for effect), run and gun, and doc work. A ~T3/f/2.8 version would be a much larger, heavier, and more expensive lens. I might rent one for the C300 II. It's a variable servo cinema zoom, and the price is decent considering the price of cinema primes (around $5K each for Canon's cinema primes).
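For reference, the relationship between the two numbers is simple: the T-stop is the f-number divided by the square root of the lens's light transmittance. A quick sketch below; the 83% transmittance figure is just an illustrative assumption that happens to land near T4.4 for an f/4 lens:

```python
import math

def t_stop(f_number, transmittance):
    """T-stop = f-number / sqrt(transmittance), where transmittance is 0..1."""
    return f_number / math.sqrt(transmittance)

print(round(t_stop(4.0, 0.83), 2))   # ~4.39, i.e. roughly T4.4 for an f/4 lens
print(round(t_stop(2.8, 0.83), 2))   # ~3.07, why an f/2.8 version would land near T3
```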
-
A light field camera doesn't have a traditional lens; however, since it's computational photography, different lenses can be simulated. Xbox Kinect uses a TOF IR sensor for 3D depth. It's very low resolution and noisy, and also very sensitive to sunlight IR (not an issue indoors with no windows). Lytro is visible light, where the 2D image is computationally assembled and 3D comes from parallax.
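To make "3D from parallax" concrete: with two views separated by a known baseline, depth falls out of the pixel disparity between them. A minimal sketch with hypothetical rig numbers:

```python
# Classic stereo relation: depth = focal_length_px * baseline / disparity.
# Closer objects shift more between views (larger disparity), distant ones shift less.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    return focal_length_px * baseline_m / disparity_px

# Hypothetical rig: 1400 px focal length, 10 mm baseline between sub-apertures.
for disparity in (2.0, 8.0, 32.0):
    d = depth_from_disparity(1400, 0.010, disparity)
    print(f"disparity {disparity:5.1f} px -> depth ~{d:.2f} m")
```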
-
We've gotten great results with the GH4, A7S, A7SII, FS700. For stills, the 5D3 rocks, and for video the C300 II rules. Looking forward to the 1DX II and 5D4 and whatever Sony and Panasonic counter with. Competition is great!
-
Maybe it's already fixed in the secret 4K60 NAB surprise update. Varicam's P2 storage is vastly more expensive than CFast 2.0, and an equivalent Varicam LT is still much more expensive than a stock C300 II. Until other pro cameras get PDAF-level AF, the C300 II is a unique camera. We had budget for an Amira, but went with the C300 II due to AF and much lower energy costs. Under normal shooting conditions, we've never seen the black-hole sun. While I'm sure Canon will fix it, in the rare case it happens before then, it's an easy fix in post (cover with a blended white mask and track if needed). A lot easier to fix than messed-up colors in skin tones...
-
Computational Photography/Imaging is a rapidly growing field: https://en.wikipedia.org/wiki/Computational_photography This is how shallow DOF, HDR, 3D capture/reconstruction, and post-processing DOF control will someday become commonplace with tiny devices.
-
Here's the math and physics: http://www.josephjamesphotography.com/equivalence/ Here I verified the math and physics: http://brightland.com/w/the-full-frame-look-is-a-myth-heres-how-to-prove-it-for-yourself/ I used the Canon 70-200 F2.8 II lens to eliminate lens-to-lens differences (vs. comparing two primes, which can't be matched closely due to mismatched focal lengths and differing lens construction). Folks desiring to 'debunk' my debunking (i.e., to show that the full frame look (or MF, LF) has special magical properties that can't be measured) need to carefully perform tests with a static subject/background and camera position (and follow the math and camera/lens setup precisely). If differences are found, that's cool; then the next step is understanding why in practice the math/physics don't match experiment (from which new math can be derived to better describe reality- the scientific method). Provided sufficient optics and sensor tech are developed (such as light-field, multi-sensors, etc.), tiny lenses and sensors will be able to provide very shallow DOF.
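For anyone wanting to reproduce the setup, the equivalence math itself is just crop-factor scaling: divide focal length and f-number by the crop factor, and ISO by the crop factor squared, to get the same framing, DOF, and total light on the smaller sensor. A sketch with illustrative numbers:

```python
# Equivalence: to match a full-frame shot on a smaller sensor, scale
# focal length and f-number by the crop factor, and ISO by crop^2
# (same angle of view, same DOF, same total light, same output brightness).

def equivalent(focal_mm, f_number, iso, crop_factor):
    return {
        "focal_mm": focal_mm / crop_factor,
        "f_number": f_number / crop_factor,
        "iso": iso / crop_factor**2,
    }

# Full-frame 80mm f/2.8 ISO 400, reproduced on a 2.0x crop (Micro Four Thirds):
print(equivalent(80, 2.8, 400, 2.0))   # -> 40mm, f/1.4, ISO 100
```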
-
https://hdfx360.com/pre/us/index.html Look how happy the cellphone photographer is compared to the DSLR photog! lol. However, someday this will happen with new lenses and imaging technologies (not with a clip-on lens). If the optics are decent, it might be useful as a feature for the next iPhone...
-
Guys- if there is a real effect, it can be measured and clearly A/B demonstrated. Folks argued endlessly that the full frame look was real. After I did the A/B tests with backing math, folks stopped arguing. Not one person presented a single counterexample (this applies to medium and large format too). Folks argue that it's easier and cheaper to get shallow DOF with larger sensors (FF), but everyone already agrees with that point. Caldwell designed the Speed Booster... No one is disputing that old/different lenses look different. Only that the article is pseudoscience...
-
I am also a professional photographer with a complete studio including both strobes and continuous lights; I pay very careful attention to lighting and 'pop' (and can do quite a lot in Photoshop too, way beyond what is possible with just lenses and light). I've written software to do complete 3D rendering from scratch. I studied human perception in college (cognitive science). I'm now focused on emotion, communication, and storytelling for both stills and video. This kind of article is fine to help start an interesting discussion on old vs. new lenses. However, since he didn't use any scientific principles, such as the scientific method, there's no way to actually know if anything he says is real. How can one apply any of the principles presented to our own works of art if no real, repeatable principles were provided?
-
Sharpness / resolution / microcontrast / local contrast / global contrast are all the same thing at different frequencies. Resolution/sharpness represents the highest frequencies, running down through larger-scale contrast at lower frequencies. An image is a composite of all the frequencies, from lowest to highest. This can be visualized by looking at the discrete cosine transform (used in JPEG, H264, etc.): https://en.wikipedia.org/wiki/Discrete_cosine_transform https://en.wikipedia.org/wiki/Discrete_cosine_transform#/media/File:Dct-table.png The MTF is one form of lens performance measurement: http://www.cambridgeincolour.com/tutorials/lens-quality-mtf-resolution.htm I noticed in the Nikkor vs. Sigma examples that the Nikkor lost a lot of detail in the shadows, and the shadows were also blurrier (less fine detail, along with an overall lower-detail image). Less detail/sharpness can help make images look more organic (digitally sharp images look less real). To generate a similar look in post, we could write code to blur pixels based on luminance level. If one prefers the look of, for example, a particular Nikkor over a Sigma, comparing their MTF charts might be helpful in understanding why. Not all frequencies are captured the same, and performance variation through the spectrum can explain why one lens is preferred over another for a particular use. If "imperfect" lenses have desired character, these imperfections can be measured, and then used in marketing material to help people get a desired, known look, without hand-waving, internet arguing, and pseudoscience.
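As a quick illustration of the "blur based on luminance" idea above, here is a minimal sketch using NumPy and OpenCV: a luminance-weighted blend between the original and a blurred copy. The filename and threshold values are hypothetical, and this is an approximation, not a model of any particular lens:

```python
import cv2
import numpy as np

def blur_shadows(image_bgr, sigma=3.0, shadow_cutoff=0.35):
    """Blend in a blurred copy of the image, weighted toward darker pixels."""
    blurred = cv2.GaussianBlur(image_bgr, (0, 0), sigma)
    # Per-pixel luma in 0..1 via OpenCV's grayscale conversion.
    lum = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    # Weight: 1.0 in deep shadows, falling to 0.0 at and above the cutoff.
    weight = np.clip((shadow_cutoff - lum) / shadow_cutoff, 0.0, 1.0)[..., None]
    out = image_bgr.astype(np.float32) * (1 - weight) + blurred.astype(np.float32) * weight
    return out.astype(np.uint8)

img = cv2.imread("test_chart.jpg")           # any test image
cv2.imwrite("test_chart_softened.jpg", blur_shadows(img))
```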
-
First, we'd need a side-by-side, same-light, same-scene comparison where A has one effect and B has another. Then we can try to understand the differences. Hurlbut's Leica vs. Cooke example showed that a particular softness and distortion was a look he and his team preferred (Cooke over Leica). As for how a lens produces a more contrasty, 3D look: I'm not a lens designer like Caldwell, but intuitively it would seem that a lens design which prevents photon spill/spray preserves the effect. While spill may not particularly affect local contrast (sharpness), it reduces overall contrast, since what should be a very dark area is brightened, thus lowering dynamic range, contrast, and '3D-ness'. An experiment one could do is take a very "3D" image and see what it takes to make it look "flat" in Photoshop (lowering contrast, increasing "Haze", blurring/sharpening areas, etc.). The experiment could be repeated by reversing the process, starting with a "flat" image.
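If someone wants to try the flattening experiment outside Photoshop, a rough sketch of the same idea (lift blacks and compress global contrast while leaving fine detail mostly alone) might look like this; the amounts and filenames are arbitrary:

```python
import cv2
import numpy as np

def flatten_look(image_bgr, haze=0.15, contrast=0.75):
    """Simulate veiling glare: lift blacks and compress global contrast around mid-gray."""
    img = image_bgr.astype(np.float32) / 255.0
    img = img * contrast + (1 - contrast) * 0.5   # compress contrast toward mid-gray
    img = img + haze * (1.0 - img)                # add a uniform 'haze' lift, strongest in shadows
    return (np.clip(img, 0, 1) * 255).astype(np.uint8)

img = cv2.imread("pop_example.jpg")
cv2.imwrite("pop_example_flat.jpg", flatten_look(img))
```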
-
This is a case of Not Even Wrong. https://en.wikipedia.org/wiki/Not_even_wrong
-
It's necessary to shoot in the same conditions for any test of this nature to be taken seriously. I could make such a test completely non-subjective by writing an image cross-correlation comparison (statistical or neural network) to show exactly where pixel differences are happening, and thus the cause of any human-perceived visual effects. This article is completely descriptive and subjective, with no math, science, or means to independently test and verify its statements.
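A bare-bones version of that kind of objective comparison, assuming the two shots are already aligned, could be as simple as a per-pixel difference heat map (the filenames are hypothetical):

```python
import cv2
import numpy as np

# Load two aligned shots of the same scene (hypothetical filenames).
a = cv2.imread("lens_A.png").astype(np.float32)
b = cv2.imread("lens_B.png").astype(np.float32)

# Per-pixel absolute difference, averaged across channels, scaled to 0..255 for display.
diff = np.mean(np.abs(a - b), axis=2)
heatmap = cv2.applyColorMap(
    cv2.convertScaleAbs(diff, alpha=255.0 / max(diff.max(), 1e-6)),
    cv2.COLORMAP_JET)
cv2.imwrite("difference_heatmap.png", heatmap)
print("mean abs difference:", diff.mean())   # a single objective number to compare
```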
-
That was not the point. The point was the method of showing an effect as something real vs. made up. 3D is more about lighting and perspective than DOF. Without repeatable, same-conditions, side-by-side comparisons and (mostly) non-subjective evaluation, there's no way to know if an effect is real or not. Lol, didn't you 'pick out the full frame sample' by examining the file name? Looking forward to seeing your demonstration debunking my debunking.
-
Well, for one, everything looks made up, with no math or science-based tests showing the statements are true. First, you cannot compare two completely different shots for "3D pop" etc. For example, I debunked the 'full frame look' using exactly the same scene and the math of equivalence: http://brightland.com/w/the-full-frame-look-is-a-myth-heres-how-to-prove-it-for-yourself/ , along with the necessary instructions for anyone else to verify the results. So while Brian's comment is perhaps a bit harsh, it's valid.
-
Is an EF to EF SB possible? Would be cool to have a 'full-frame' C300 II.
-
Crop factor is computed from the diagonal length, not the width or height. You can think of the diagonal as the diameter of the image circle. When comparing full frame 1080p to the 4K crop (as with the 1DX II), we must use the diagonals of the imaging areas rather than the full sensor (since HD is 16:9 and the sensor is 3:2). Since we know the full sensor is 5472 x 3648 pixels, we can compute the following:
Full frame HD mode uses the full sensor width, so the 16:9 height is 5472*9/16 = 3078 (5472/3078 = 1.78 = 16/9).
Full frame 16:9 diagonal: sqrt(5472^2 + 3078^2) = 6278.29
DCI 4K crop diagonal: sqrt(4096^2 + 2160^2) = 4630.64
Exact crop factor: 6278.29/4630.64 = 1.3558
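The same arithmetic as a quick script, for anyone who wants to plug in other sensors or recording modes:

```python
import math

def diagonal(width, height):
    return math.hypot(width, height)

sensor_width = 5472                      # 1DX II full sensor width in pixels
hd_height = sensor_width * 9 / 16        # 16:9 height used in full-width HD mode = 3078
full_frame_diag = diagonal(sensor_width, hd_height)   # ~6278.3
dci_4k_diag = diagonal(4096, 2160)                     # ~4630.6

print(round(full_frame_diag / dci_4k_diag, 4))         # ~1.3558
```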
-
DPAF is a game changer. The technology is amazing- it allows fast, accurate focus with no hunting. Someday we'll be able to see DPAF pixels on screen to help with manual focus, much better than peaking. Here's how the hardware works: https://www.learn.usa.canon.com/app/pdfs/articles/Canon_Developers_Interview_nonfacing_pressquality.pdf (Sony has this too, for example, on the A7R II).