Everything posted by jcs
-
Cool photo; the DOF looks odd: it starts abruptly, then looks like Gaussian blur rather than typical bokeh. Looks like something I could create on my iPhone with Snapseed.
-
5DSR vs. Phase One: in the end he says he might be able to replace his Phase One with the 5DSR (but he'll have to see). Thus, not a huge difference in any case.
-
lol, those plots looked like something sampled from tests (which would be a good thing), and in any case the plot can be recreated from simple linear equations, which is what the equivalence equations predict as well. Regardless of math or plots, the real proof is real-world tests, where the cameras are set up properly for equivalence. Here's a cool test comparing the FF Nikon D800E to Hasselblad MF: https://www.photigy.com/nikon-d800e-test-review-vs-hasselblad-h4d40-35mm-against-medium-format/ Bigger pixels and 16-bit vs. 14-bit (and perhaps firmware/software) are where he noted a difference. That said, the difference was small (see for yourself).
-
If there is a real effect of any kind, shooting a proper equivalence test will show it.
-
That's a 3D surface heat-map style plot. Where are the equations? Starting with the equations we can set up cameras for equivalence, shoot, and compare to theory. An interesting plot would be MF + lens vs. FF + lens set up for equivalence, showing DOF behavior in the real world (vs. just the math). Theory and reality don't always match (theory and math are only approximations of reality anyway). From a purely math/physics point of view, there's nothing really special about sensor size. However, the physics of light, manufacturing processes, and sensor and software implementations can have a very profound effect on the final image. Thus, we need to shoot proper equivalence tests to see if there is a real result. Brian Caldwell (who posts here sometimes and designed the Speed Boosters) does optics for a living. He posted the same thing: it's actually easier to get the "MF look" with a FF camera (or with a Speed Booster on S35, etc.) because there are plenty of fast FF lenses available to provide equivalence with MF + slower lenses.
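To make the DOF part of that concrete, here's a quick sanity check in Python using the standard thin-lens DOF formulas. The numbers are illustrative (a 44x33 MF back vs. 36x24 FF, subject at 3 m), not a real-world test; the point is just what the math predicts when the MF setup is scaled for equivalence (focal length, f-number, and circle of confusion all scaled by the ratio of sensor diagonals):

```python
import math

def dof(f_mm, N, s_mm, coc_mm):
    """Near/far limits of depth of field (standard thin-lens approximation)."""
    H = f_mm**2 / (N * coc_mm) + f_mm                      # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float('inf')
    return near, far

# Illustrative formats: 36x24 FF vs. a 44x33 MF back, same framing, same print size.
ratio = math.hypot(44, 33) / math.hypot(36, 24)            # ~1.27x larger diagonal

ff_near, ff_far = dof(f_mm=80, N=4.0, s_mm=3000, coc_mm=0.030)
mf_near, mf_far = dof(f_mm=80 * ratio, N=4.0 * ratio, s_mm=3000, coc_mm=0.030 * ratio)

print(f"FF DOF: {ff_far - ff_near:.0f} mm ({ff_near:.0f} to {ff_far:.0f})")
print(f"MF DOF: {mf_far - mf_near:.0f} mm ({mf_near:.0f} to {mf_far:.0f})")
```

Both come out to roughly the same ~330 mm of total DOF; whatever differences show up in real shots come from lenses, sensors, and processing, not from the geometry.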
-
Here's the math; where's the error? http://www.josephjamesphotography.com/equivalence/ I replicated it with an actual test and images, and got the results predicted by the math (http://brightland.com/w/the-full-frame-look-is-a-myth-heres-how-to-prove-it-for-yourself/). If Phase One or Hasselblad sent me a camera and lenses, I'd be happy to shoot equivalence tests against the Canon 1DX II and L-lenses and post the results.
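For anyone who doesn't want to wade through that page, the core mapping (as I read it) is simple: scale focal length and f-number by the ratio of sensor diagonals, and ISO by that ratio squared, and the two formats produce the same framing, DOF, and total light. A minimal sketch with illustrative numbers (not a claim about any specific camera):

```python
import math

def equivalent_settings(focal_mm, f_number, iso, src_diag_mm, dst_diag_mm):
    """Map settings from one sensor format to another so framing, DOF,
    and total light gathered stay the same (equivalence framework)."""
    r = dst_diag_mm / src_diag_mm
    return focal_mm * r, f_number * r, iso * r * r

# Example: what a FF 50mm f/2.8 ISO 400 shot maps to on a 44x33 MF back.
ff_diag = math.hypot(36, 24)
mf_diag = math.hypot(44, 33)
f, N, iso = equivalent_settings(50, 2.8, 400, ff_diag, mf_diag)
print(f"MF equivalent: {f:.0f}mm f/{N:.1f} ISO {iso:.0f}")   # ~64mm f/3.6 ISO ~646
```

Run it the other direction (MF diagonal as the source) and you get the FF settings needed to match an MF shot, which is exactly how the test shots should be set up.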
-
I took the Full Frame Look (vs. S35) challenge and created images that were equivalent: http://brightland.com/w/the-full-frame-look-is-a-myth-heres-how-to-prove-it-for-yourself/ . Some folks posted 'debunking' shots with MF vs. FF but didn't use equivalent camera settings per the math (and not even the same framing). The math and physics also apply to MF vs. FF: if an MF camera + lens can be matched to a FF camera + lens via the equivalence equations, the images will be identical. However, part of the difference is the sensor technology and software processing, where Phase One appears to really shine. Thus, comparing a Canon 1DX II or Nikon D5 to a Phase One (any recent model) may still show the Phase One providing a better image. If that weren't the case, there wouldn't be a market for these very expensive MF cameras.
-
Yeah, that's a good point. The studio shots were 3 main lights (2 at ~45 degrees and one straight on) and a hair light (plus lights on the green screen). For something dramatic, vs. a simple interview on a green screen, shaped light would certainly help. Part of the challenge is we have mostly LED panels, which aren't as easily shaped as spots with modifiers (we have a few Einstein 640 strobes and a bunch of modifiers for stills). Only recently have LED spots been getting decent color and sufficient light output (still a little ways to go). I can do tricks in post to simulate certain lighting effects; that takes more time. Additionally, we'd need to plan the shots in advance: if we used dramatic lighting, the foreground would need to match the background plate's lighting, etc. I agree with Laforet's comments in that video: while a beautiful image is very important, story and content are far more important. At this stage we need to improve our content, and from everything I've read, that means shooting as much as possible and not spending a lot of time on the technical aspects. Better to spend more time writing and shooting until storytelling skills are up to par.

Just shot 3 more pieces and edited them in Final Cut Pro X instead of Premiere to speed up editing. It was way faster and super easy to learn (I had only used FCPX for short/simple projects and tests). FCPX is much simpler and more constraining than PP CC; however, the results were good enough, and the massive reduction in time wasted dealing with crashes and bugs in PP CC, along with an order of magnitude faster 4K editing performance (C300 II), were very refreshing creative improvements! The more I focus on the creative side, the less time I want to spend dealing with technical elements (especially working around bugs, etc.).
-
Yeah, Hasselblad is Swedish. Their main competition is Phase One. A top-of-the-line Phase One camera system starts getting into ARRI Amira price territory. A RED Weapon could also be considered somewhat competitive for those shooting high-end commercials with stills and video. Perhaps the 'game changer' is an MF camera system at a much lower cost. This could be very profitable: sell more for less. The high-end market is very small, and Phase One looks pretty solid in terms of both hardware and software; if I needed the best still camera money can buy, that's what I'd get.
-
DPAF is pretty similar, though I've seen more hunting with the C300 II. Could be related to CLog2 or simply the shooting conditions; when shooting in the studio, results were similar. Another point: the 1DX II is weather sealed, the C300 II is not (it has fan vents too). However, I've read that heat affects image noise on the 1DX II, and it does get pretty warm being sealed, so I turn it off frequently. The C300 II has a powerful fan, so it can be left on all day (wall power); the fan stops while recording. When there are no issues with permits or weather, especially for handheld, the C300 II will create better footage: 10 or more bits, better HL roll-off and DR, NDs, pro audio, and much less RS, which means little or no squiggles after post stabilization. That said, in many conditions both cameras can capture quality that is indistinguishable between them. The more expensive camera is indeed better, just as the Alexa/Amira/Mini is better than the C300 II for DR, HL roll-off, noise/grain, and color. However, again, in many situations most people would not be able to tell ARRI from C300 II or 1DX II.
-
@cantsin as a real-time game developer and video/image developer: if I can decode HD video, perform complex GPU effects and multilayer compositing along with multiple DSP audio effects, and save the result to HD H.264, all in real time, then any desktop app can easily do all those things and more, especially with a GTX 980ti with 6GB of VRAM. How do I know? Because in the above example, I can do all those things on an iPhone! Pretty much everything can be done on the GPU now: 10 or more layers of 4K are easily possible. Note that as nodes are added in Resolve, it doesn't slow down much, as node effects run on the GPUs. The bottleneck appears to be video caching and IO, along with audio sync and timing issues (why does everyone have trouble with this? FCPX does pretty well here). In summary, there's no technical limitation to performance for any of these desktop apps. The limitations come from antiquated software design which cannot fully utilize the amazing CPU and GPU power available today. These companies should hire game developers to rewrite their graphics engines. Pretty surprising that FCPX beats PP CC and Resolve, which both use GPUs, by a long way for basic real-time editing.
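To illustrate the buffering/design point (this is not how any of these NLEs actually work internally, just a toy sketch): smooth playback is basically a producer-consumer problem. If the decode/IO side keeps a small bounded queue of frames ahead of the render/display side, short decode or disk hiccups get absorbed by the buffer instead of stalling playback:

```python
import queue, threading, time

FRAME_QUEUE = queue.Queue(maxsize=8)       # small read-ahead buffer of decoded frames

def decoder(num_frames):
    """Producer: decode (simulated) frames ahead of playback."""
    for i in range(num_frames):
        time.sleep(0.01)                   # stand-in for decode + disk IO work
        FRAME_QUEUE.put(i)                 # blocks only when the buffer is full
    FRAME_QUEUE.put(None)                  # end-of-stream marker

def renderer(fps=24):
    """Consumer: apply effects and present frames at a fixed cadence."""
    while True:
        frame = FRAME_QUEUE.get()          # blocks only if the decoder falls behind
        if frame is None:
            break
        time.sleep(1.0 / fps)              # stand-in for GPU effects + display

threading.Thread(target=decoder, args=(240,), daemon=True).start()
renderer()
```

As long as the average decode rate keeps up with the playback rate, the queue stays primed and playback never stutters; if the app instead decodes, processes, and displays each frame serially on one path, every hiccup shows up on screen.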
-
OS X (latest), GTX 980ti (latest video + CUDA drivers), 12-core Mac Pro 3GHz, 24GB RAM, SSDs: Resolve 12.5 can play C300 II 4K24p files smoothly, 1DX II 4K24p files almost in real time but choppy, and A7S II 4K24p files very choppy/slow. It's nice that 12.5 can load all these formats natively with audio; still a ways to go as an NLE replacement in terms of basic real-time performance. FCPX is currently the fastest, PP CC is about half as fast (but can play near real time at 1/2 playback resolution or less), and Resolve 12.5 is perhaps faster for C300 II clips, about the same for 1DX II clips (perhaps a bit better compared to PP CC at full playback resolution), but much slower with A7S II clips.
-
Rolling shutter appears lower on the C300 II, based on using Warp Stabilizer and getting fewer 'wiggles' than with the 1DX II (4K24p).
-
I suspect he put the barn doors down to make it clearer. Seeing the full beam angle would be helpful, though I would guess the ARRI 650 is indeed that much brighter. Folks are stating the Zylight F8 produces more light than a 650W Fresnel ($2100): http://www.bhphotovideo.com/c/product/886683-REG/Zylight_26_01020_F8_LED_Fresnel_Daylight.html while drawing 90W, vs. 135W for the 120t... Efficiency, beam angle, optics?
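Beam angle alone can account for a lot of it. A back-of-the-envelope photometric calc (made-up numbers, not the actual specs of either fixture): the same luminous flux squeezed into a narrower cone gives far more lux at the subject.

```python
import math

def center_lux(lumens, beam_angle_deg, distance_m):
    """Approximate center-beam illuminance assuming the flux is spread
    evenly over a cone: flux / solid angle / distance squared."""
    half = math.radians(beam_angle_deg / 2)
    solid_angle = 2 * math.pi * (1 - math.cos(half))       # steradians in the cone
    return lumens / solid_angle / distance_m ** 2

# Hypothetical fixtures at 3 m from the subject, both emitting 9,000 lumens:
print(f"{center_lux(9000, 50, 3):.0f} lux")   # wide 50-degree beam: ~1,700 lux
print(f"{center_lux(9000, 15, 3):.0f} lux")   # Fresnel spotted to 15 degrees: ~18,600 lux
```

So two lights with similar wattage and similar total output can differ by 10x at the subject just from how tightly the optics focus the beam, which is probably most of what those side-by-side comparisons are showing.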
-
Flat light for the interview shots? If so, it was intentional, to deal with poor lighting and skintone issues. Less contrast in this case is more flattering. Masking just the face/skintones takes a lot more time.
-
The viewfinder is helpful outside in bright light; handheld shots are more stable (3-point contact when using the viewfinder + handle + mass); Canon Log 2 highlight roll-off is much better than even CLog on the 1DC; and most importantly, when using the 'ARRI' config, post grading is super fast and skintones look great with little effort. While this episode was shot in 4K, when 1080p is desired (for faster edits and smaller files), the C300 II's 1080p is much better than the 1DX II's (soft, aliased). For stills, stealth video, and 4K60, the 1DX II is superior.
-
We finished our first full episode of Cosmic Flow on Bigfoot (and aliens): https://www.youtube.com/watch?v=1mIVRphxFHU This was my first time using the 1DX II for an interview (with the Canon 24-105 F4L). I used an on-camera light (Aputure AL-H198), which is daylight balanced, while the interior lights appeared to be tungsten balanced. Lesson learned: bring the tungsten (orange) filter for these kinds of shoots. It was a bit of a challenge to get skintones looking good in post with mixed lighting. I also used the normally reliable Canon AWB as this was a fast shoot with changing light color; in retrospect I should have shot fixed WB and corrected in post. I used my custom CLog/CineStyle-like picture profile, which boosts shadows and pulls highlights down slightly. The forest background shots and plates for greenscreen were also shot on the 1DX II. The studio and Bigfoot recreation shots were shot on the C300 II, set up in "ARRI mode", and I used an ARRI LUT in Premiere Pro CC. PDAF was used on both cameras and worked great.

Audio: Sennheiser wireless + Rode NTG2 for the interview, Audix SCX-1 (interior), Schoeps CMC641 (interior), and Schoeps CMIT5U (exterior). I used EQ and a simple compressor and expander ('Dynamics') for the interview (no noise reduction; there was fan/white noise on location). No audio tweaking except for minor volume adjustments on the studio and recreation shots.

Editing was done in Premiere Pro CC. On a 3GHz 12-core 2010 Mac Pro with 24GB RAM, SSDs, and a GTX 980ti (6GB), Premiere Pro CC struggled to play back the 4K footage from both cameras, especially green screen, which requires processing two streams at once. I used 1/2 resolution for playback, and later 1/4 and 1/8 when working on audio, to try to get continuous playback without PP CC stopping. One shot was edited in FCPX: the Bigfoot-in-the-tree shot. I wanted to see how well FCPX handled keying, compositing, and masking, and it worked extremely well. It was much easier to get a good key (effortless really), masking was super fast and easy (UI interaction), and playback was smooth at full resolution. So I rendered out that shot in ProRes and brought it into PP CC.

The spark gag was done in After Effects. I hadn't done particles outside of 3D Studio Max, so I looked at the various options in Motion, After Effects, and 3rd party plugins. After a few minutes in Motion I gave After Effects a try (which I know how to use a bit better, though AE is much slower to use and render than Motion). Googling turned up the Particle World effect, and in a few minutes I tweaked it to create the basic effect I was looking for. I saved the composition to disk and was able to load it directly into Premiere and drop it on the timeline above the shots (actors and greenscreen). Tweaking the effect to fit the scene back in AE and updating on disk, which updated in PP CC, seemed faster than when I did something similar in the past using Dynamic Link. The UFO was a 'quick & dirty' I created a while ago in 3DS Max (rendered out with an alpha channel). It's a gag prop, so no effort was made to make it look 'real'.

The C300 II worked great and was easy to use. The Canon 24-105 F4L was used for hand-held shots, and the 24-70 F2.8 II for studio shots. I always shoot in 'ARRI' mode as the colors and post grading are fast, easy, and consistent (Canon Log 2, Cinema Gamut, and 'Production' (ARRI) Matrix). The C300 II is a superior video camera vs. the 1DX II other than size, weight, and stealth. I thought using the joystick for selecting the focus region would be a step down vs. the 1DX II touch screen; however, I started using the joystick on the 1DX II since it allows me to maintain a steadier grip while shooting.

Let us know what you think of the show (good or bad) and please like & share if you enjoyed it! Cosmic Flow is science (I'm the open-minded science guy) meets metaphysics/paranormal (Jacqui) for drama. We cover a bit of biology, psychology, and health/nutrition as well (those episodes are perhaps more informational than entertaining, though we're working on making those topics more entertaining, e.g. by incorporating a story).
-
Needed to render some basic particles (sparks) quickly. Hadn't used After Effects in a while (never tried particles there; always used 3D Studio Max for advanced 3D rendering). Five minutes of googling and a few more to tweak, and I got the basic look I was going for. Opened the .AEP file in Premiere, placed it over the other content, and tweaked it in After Effects; after saving, the results showed up in PP. Decent performance and seamless. However, rendering again failed with "Unknown Error Compiling Movie" (same nested Warp Stabilizer bug). Also had many issues with audio failing and having to switch between different audio devices to 'reset' the PP audio system. When everything works it's cool; however, the many bugs are major time wasters.
-
@AaronChicago is it powerful enough to bounce off a white reflector and still be bright enough for key? (That solves the diffusion problem.)
-
I think the free version supports UHD (3840x2160).
-
http://nofilmschool.com/2016/06/massively-updated-blackmagic-davinci-resolve-125-now-available
-
You're right, CUDA used to be much faster on OSX. I switch between CUDA and OpenCL from time to time (e.g. after driver/OS updates) and performance now switches back and forth (GTX 980ti, 2010 Mac Pro, 24GB RAM, SSDs); sometimes OpenCL is much faster. When editing 1DX II and C300 II 4K footage with greenscreen keying, sometimes it will run butter smooth, but other times it will choke. I have to start and stop frequently to get sections to play smoothly. Clearly a design issue with buffering and timing (something FCPX doesn't suffer from: it runs fast continuously or chugs slower continuously). What gfx card are you using on Win10? I wouldn't mind the subscription model if the products were continuing to improve and were the best on the market. I'd pay for Resolve once it gets closer to what I can do in Premiere (it would make sense for them to start charging for a mid-priced version once it has near feature parity with FCPX and PP CC).
-
Yes, Premiere on OSX can use both OpenCL and CUDA. Windows 7 was faster with Premiere; after 'upgrading' to Windows 10, Premiere became slower than OSX and also buggier. A big factor is the NVidia drivers (980ti): sometimes they work, sometimes they don't, with each new release. I hadn't seen a Blue Screen on Windows in a very long time, but lately PP CC (and the NVIDIA drivers) have crashed the OS, requiring reboots. Right now FCP X is faster than Premiere ever was (especially for 4K), and the last few releases of Premiere haven't gotten faster or less buggy (old bugs may have been fixed, but new ones pop up). FCP X isn't bug free either, though it has far fewer bugs, which means a faster workflow.
-
4K RAW 120fps for £3k?! Say hello to the second-hand Canon C500
jcs replied to Andrew Reid's topic in Cameras
Welcome back to Canon (again). Looking forward to a C500 test video that also tells a story.
-
After spending many hours trying to figure out Premiere bugs ('Unknown Error's and incorrect rendering, both related to medium-complexity nesting, as well as Warp Stabilizer (which is much slower and lower quality than FCPX's stabilizer) and audio corruption), I'm going to cut our next piece in FCPX. Premiere is very powerful and feature packed out of the box; however, the frequent bugs, slowdowns, and general lack of quality, with an ongoing monthly fee, are pushing us to look elsewhere for our primary NLE. FCPX was a clean rewrite vs. FCP7 (possibly using some of the iMovie code at the time, which was still much newer than the FCP7 codebase). Premiere hasn't (ever?) had a clean rewrite (based on looking at the archaic SDK code over the years), and the cracks get bigger every release. Resolve may become the cross-platform NLE of choice in the near future.