Everything posted by jcs
-
If Canon were to render the PDAF pixels to show which parts of the frame are in focus (instead of relying on traditional edge-detection algorithms, which mathematically can never be fully reliable), that would indeed be very useful and would blow away edge peaking. Perhaps a future feature someday.
-
After using the PDAF on the C300 II, peaking and manual focus are way too slow (and clunky) compared to full AF with face tracking (which works amazingly well). For manual focus, PDAF assist is superior to manual focus with peaking. For live, unrehearsed shots, PDAF is amazing and peaking isn't useful. In my experience, a full 1080p display and viewfinder are more helpful for fast, accurate focus than peaking (peaking is based on contrast/edges and is not always accurate or easy to see).
-
The pink and green skintones remind me of Sony. My guess is FS7, due to the low rolling shutter and good audio quality. Could perhaps be an A7R II...
-
Hmm. I computed a 1.42 crop from the diagram, others stated 1.3, and 1.4 is stated here: http://www.eoshd.com/comments/topic/19104-canon-1dx-ii-video-camera-perspective-mini-interview-on-the-missing-details/. It also sounds like 1080p will be pristine and alias/moiré free, so no pixel binning? This camera is starting to look good as a B-camera for the C300 II.
-
I suppose with enough effort even the C300's colors can be made to look bad (especially the skintones); I would be surprised to learn that the BTS was shot with a C300.
-
That makes sense re: it being the same as the 1DC. However, computing the crop factor from the diagram and the pixel counts on this page (assuming square pixels) gives sqrt(5472^2 + 3648^2) / sqrt(4096^2 + 2160^2) = 1.42. The 1.3 figure could come from comparing the diagonal of the 1080p imaging area on the sensor rather than pixel counts per se. Regardless of the exact crop factor, it's not a big deal; the IQ is great.
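For reference, the same arithmetic as a few lines of Python (pixel counts as quoted above, square pixels assumed):

```python
import math

# Full stills readout (5472 x 3648 photosites) vs. the DCI 4K video window (4096 x 2160).
# With square pixels, photosite counts are proportional to physical sensor dimensions,
# so the ratio of diagonals gives the crop factor.
full_diag = math.hypot(5472, 3648)
video_diag = math.hypot(4096, 2160)
print(round(full_diag / video_diag, 2))  # -> 1.42
```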
-
It seemed like a bigger, heavier camera with low rolling shutter and very nice audio preamps (could be separately recorded audio, but that's less likely for a BTS doc), though the skintones are dodgy. Guessing Sony FS7.
-
Last time I tested Premiere Pro it choked on 1DC 4K MJPEGs in Windows 10 (over 6 months ago). Testing again today in OSX with the latest versions of FCPX and PP CC, both played 1DC 4K MJPEGs 100% real-time with low CPU utilization (perhaps now GPU accelerated). VLC still choked, and OSX Preview and QuickTime were a little choppy. Thus Canon's 4K MJPEGs aren't really an issue for FCPX and PP CC (on machines that play other 4K media smoothly).

The 1DX II videos have gorgeous color and a filmic look. The 1.5 crop is the same as Super 35 in 4K. We switch back and forth between FF and Super 35; the only differences are shallower DOF with FF and an effective 1.5x zoom in Super 35. Curious to hear about rolling shutter performance. In any case, from the demo footage so far, this camera looks to be a winner.
-
We've been shooting mostly with the C300 II lately. Checking old A7S II footage and tweaking it in post, I did not see anything like these patterns. Definitely worth contacting Sony.
-
Focus peaking isn't really needed given touch-screen PDAF. If PDAF-assisted manual focus is provided (as on the C300 II), peaking is also not needed. CFast 2.0, while (currently) expensive, rocks: copying CFast 2.0 cards to the computer over USB 3 is insanely fast, and it's not possible to bend pins on CFast 2.0 as with CF. If Adobe (or Apple) adds GPU-accelerated MJPEG support, MJPEG will be less of a bummer (real-time editing will be possible with no transcoding).
-
MJPEG is 8-bit DCT-compressed frames, one after another. ProRes is 10-12 bit DCT-compressed frames, one after another. Mathematically they are very similar, both effectively providing JPEG-like frames (with no interframe compression). ProRes has faster encoders/decoders, however they're not that fast compared to modern H.264 ALL-I codecs, which are both faster to encode/decode (with hardware support) and more efficient. As others have noted, the reason MJPEG is being used is likely cost, and perhaps to differentiate from the C line. Our C300 II does have a fan, however it can be set to shut off during filming. The fact that the NX1 can do 4K H.265 (without a fan?) suggests that some licensing could perhaps help with Canon's in-house tech deficit.
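As a toy illustration of the "JPEG-like frames, one after another" point (this is not Canon's encoder, just a sketch assuming NumPy and Pillow are installed, with random frames standing in for video): each frame is compressed on its own, so any frame decodes independently and there is no saving from similarity between frames.

```python
import io
import numpy as np
from PIL import Image

def encode_intra_only(frames, quality=90):
    """Toy 'MJPEG-like' stream: every frame becomes an independent JPEG, no inter-frame prediction."""
    stream = []
    for frame in frames:                      # frame: HxWx3 uint8 array
        buf = io.BytesIO()
        Image.fromarray(frame).save(buf, format="JPEG", quality=quality)
        stream.append(buf.getvalue())         # each entry decodes on its own
    return stream

# Stand-in frames; real MJPEG (and ProRes) use DCT blocks much like JPEG's.
frames = [np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8) for _ in range(3)]
sizes = [len(f) for f in encode_intra_only(frames)]
print(sizes)  # per-frame sizes add up linearly: intra-only, no inter-frame savings
```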
-
I created a Canon Log Rec709-to-ARRI LogC Wide Gamut LUT, then created an ARRI LogC-to-Rec709 LUT, and the results looked pretty good. Curious to hear from Policar whether this works OK to match Canon footage to actual ARRI footage. Also tested A7S II footage shot in SLog2 + SGamut3.cine, converted to ARRI LogC and then from LogC to 709; it looked pretty good. Lots of interesting LogC LUTs out there, so this can be useful. He also has an OSX desktop app for $2.99 (a pretty fair price if it works for you): https://cameramanben.github.io/LUTCalc/
-
I use the A7S II, and there's no autofocus for video with the MB IV and Canon lenses. Is that possible with the A7R II? The 70-200 F2.8 II works great on the C300 II (and presumably the C100 II, though I have not tested it).
-
It's an amazing lens for stills and video. If you ever have a chance/need to use a C100 II or C300 II, the autofocus is also amazing. The 16-35 F2.8 II, 24-70 F2.8 II, and 70-200 F2.8 II IS round out a complete kit for many shooters (both stills and video). The 70-200 F2.8 II works great when the manual focus tracking demands of a moving shot aren't too great (or when you have a focus puller); however, for unplanned/live/run-and-gun shooting, a native Sony lens with autofocus is more useful.
-
Very cool Julian. A quick test shows Canon Log 2 + Cinema Gamut + the Production Camera matrix is very close to ARRI LogC (the gamma curve is almost identical). I can test Canon Log to LogC, along with LogC to 709, later.
-
This process would make an image shot in Canon Log look like it was shot in ARRI LogC, covering both color and gamma: it's a full 3D non-linear transform. You're right that you would need to shoot charts for each color temperature. Shooting a variety of color temperatures could produce an array of 3D LUTs, which could then be interpolated to any desired color temperature.
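A minimal sketch of that interpolation idea, assuming the chart-derived LUTs are already loaded as NxNxNx3 NumPy arrays (the function and variable names here are hypothetical):

```python
import numpy as np

def interpolate_luts(lut_a, lut_b, temp_a, temp_b, target_temp):
    """Linearly blend two same-sized 3D LUT tables (NxNxNx3) shot at temp_a / temp_b Kelvin.
    Hypothetical helper; blending in mireds (1e6 / K) would arguably be more even,
    but the idea is the same."""
    t = np.clip((target_temp - temp_a) / (temp_b - temp_a), 0.0, 1.0)
    return (1.0 - t) * lut_a + t * lut_b

# e.g. charts shot at 3200K and 5600K, target white balance 4300K
lut_3200 = np.random.rand(33, 33, 33, 3)  # stand-ins for real chart-derived LUTs
lut_5600 = np.random.rand(33, 33, 33, 3)
lut_4300 = interpolate_luts(lut_3200, lut_5600, 3200.0, 5600.0, 4300.0)
```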
-
I'm not a Resolve expert, however there's a feature called "Shot Match to This Clip". If it works OK on the two color chart shots (perhaps with additional manual tuning with scopes, etc.), Resolve provides a means to save the current grade as a 3D LUT. Conceptually, as a graphics developer, it wouldn't be too hard to build this process directly, using the two color charts as input and a 3D LUT as output: just use the average color in each sample square to create the lookup transform (input/output pairs) and apply it to the 3D color cube. I wrote code to do something similar years ago: create a 3D LUT that transforms key colors in one image to the corresponding colors in another image, then apply the final 3D LUT to the entire first image, with all intermediate color values interpolated by the 3D LUT (color cube) using a trilinear or tricubic function. It worked really well. This type of function might be built directly into other higher-end coloring tools (or could perhaps be created with their scripting languages). There might be a way to do this with Photoshop too. The basic idea is to transform the color chips in the first shot into the color chips of the second shot, then save the 3D transform as a 3D LUT.
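To make that concrete, here's a minimal NumPy sketch of the chart-to-chart idea (function names are hypothetical, and inverse-distance weighting is just one simple way to spread the chip corrections across the color cube; it is not necessarily what Resolve's Shot Match does), followed by trilinear application of the resulting cube:

```python
import numpy as np

def build_match_lut(src_chips, dst_chips, size=33, power=2.0, eps=1e-6):
    """Build an NxNxNx3 LUT mapping averaged source chart chips onto target chips.
    src_chips/dst_chips: (num_chips, 3) arrays of RGB values in [0, 1]."""
    grid = np.linspace(0.0, 1.0, size)
    r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
    lattice = np.stack([r, g, b], axis=-1).reshape(-1, 3)              # (size^3, 3)
    deltas = dst_chips - src_chips                                     # per-chip correction
    d2 = ((lattice[:, None, :] - src_chips[None, :, :]) ** 2).sum(-1)  # squared distances
    w = 1.0 / (d2 ** (power / 2.0) + eps)                              # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    lut = lattice + w @ deltas                                         # shift each lattice point
    return np.clip(lut, 0.0, 1.0).reshape(size, size, size, 3)

def apply_lut_trilinear(image, lut):
    """Apply an NxNxNx3 LUT to a float RGB image in [0, 1] with trilinear interpolation."""
    n = lut.shape[0]
    x = np.clip(image, 0.0, 1.0) * (n - 1)
    lo = np.floor(x).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = x - lo                                     # fractional position inside the cube cell
    out = np.zeros_like(image, dtype=np.float64)
    for dr in (0, 1):                              # accumulate the 8 surrounding lattice points
        for dg in (0, 1):
            for db in (0, 1):
                ir = hi[..., 0] if dr else lo[..., 0]
                ig = hi[..., 1] if dg else lo[..., 1]
                ib = hi[..., 2] if db else lo[..., 2]
                w = ((f[..., 0] if dr else 1 - f[..., 0]) *
                     (f[..., 1] if dg else 1 - f[..., 1]) *
                     (f[..., 2] if db else 1 - f[..., 2]))
                out += w[..., None] * lut[ir, ig, ib]
    return out

# Usage sketch: canon_chips / arri_chips are averaged chip colors from the two chart shots.
# lut = build_match_lut(canon_chips, arri_chips)
# matched = apply_lut_trilinear(canon_image, lut)
```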
-
If you have access to both cameras, you could create a 3D LUT (this is much more than a simple gamma transform):
1. Use a black-body / continuous-spectrum light source (tungsten, etc.).
2. Use the same lens at the same position and the same camera settings on both cameras.
3. Shoot a color chart with both cameras (Canon Log on the Canon, Log C on the ARRI).
4. Create a match transform from one chart to the other (using Resolve or similar; this may be the tricky part).
5. Save the final transform as a 3D LUT (a sketch of the .cube output follows below).
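For step 5, a minimal sketch of writing the table out in the common .cube text format that Resolve and most NLEs read. The lut argument is assumed to be an NxNxNx3 array indexed [red][green][blue] with values in 0-1 (e.g. the output of the build sketch above); .cube data is ordered with the red index varying fastest.

```python
def write_cube(lut, path, title="canon_log_to_arri_logc"):
    """Write an NxNxNx3 LUT (values in [0, 1], indexed [r][g][b]) as a Resolve-style .cube file."""
    n = lut.shape[0]
    with open(path, "w") as f:
        f.write(f"TITLE \"{title}\"\n")
        f.write(f"LUT_3D_SIZE {n}\n")
        for b in range(n):            # .cube ordering: red varies fastest, blue slowest
            for g in range(n):
                for r in range(n):
                    red, green, blue = lut[r, g, b]
                    f.write(f"{red:.6f} {green:.6f} {blue:.6f}\n")

# e.g. write_cube(build_match_lut(canon_chips, arri_chips), "canon_to_logc.cube")
```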
-
I think we all agree that story and content matter most. On the technical side, color/skintones and sound are more important, by far, than 4K. However, if you can offer 4K too, why not? As for skin issues (regardless of age), there are amazing plugins which track faces/skintones, allowing smoothing/defect reduction in post on the face/skintones only. That way the rest of the scene can maintain full 4K, versus using a diffusion filter which softens everything.

Streaming 4K makes sense: VP9 and H.265 cost less than 50% more bandwidth. On the 4K displays on my desk, the difference between 4K and 1080p is dramatic. Sitting comfortably close to a large HDTV for the cinema experience, 4K makes a difference there too. I haven't seen them in person, however the new HDR displays shown at CES are getting glowing reviews. 4K HDR is going to be very cool.

As for 4K disk space, the C300 II costs 440 Mbps for 10-bit 4:2:2 4K vs. 225 Mbps for 12-bit 4:4:4 1080p (160 Mbps for 10-bit 4:2:2). That's only 1.95x (or 2.75x against the lower-quality 10-bit 4:2:2) for 4K. At these efficiently compressed bitrates, that's not a big deal for any serious project. Disk space is incredibly cheap, and CFast 2.0 copies are blazing fast. XF-AVC is a much newer and more efficient codec than ProRes (which is basically 10/12-bit 4:2:2/4:4:4 MJPEG). Even 100 Mbps GH4 and A7S/A7R II Long-GOP H.264 looks great in 4K vs. 1080p. We had issues figuring out how to get fast editing of 4K C300 II files with Premiere Pro CC (FCPX had no issues); once figured out, editing 4K footage is blazing fast. Anyone with a Retina device looking to go back to fuzzy pixels? Once story/content/color/sound are up to par, 4K (and HDR) are nice bonuses when available!
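A quick back-of-envelope on those bitrates in Python (numbers as quoted above, decimal gigabytes):

```python
bitrates_mbps = {
    "4K 10-bit 4:2:2": 440,
    "1080p 12-bit 4:4:4": 225,
    "1080p 10-bit 4:2:2": 160,
}

for name, mbps in bitrates_mbps.items():
    gb_per_hour = mbps * 1e6 * 3600 / 8 / 1e9   # Mbps -> GB per hour of footage
    print(f"{name}: {mbps} Mbps, ~{gb_per_hour:.0f} GB/hour; "
          f"4K is {440 / mbps:.2f}x this rate")
```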
-
SLOG 2 with Stills colour mode - anyone else tried it?
jcs replied to Andrew Reid's topic in Cameras
Used SLog2 with the Pro or Cinema color mode on the A7S, and SLog2 with SGamut3.cine on the A7S II, with good results for skintones.
-
The 4K C300 II files played OK at 1/2 resolution or in a 1080p sequence; if it were a codec issue, that would not be possible. Another user on the Adobe forum verified the issue and suggested transcoding ( https://forums.adobe.com/thread/2043865?q=C300 II ). While that worked on their machine, it did not work on mine: even ProRes was slow in a 4K sequence unless played at 1/2 resolution. 4K in a 1080p sequence worked OK too. Neither case worked as well as switching to OpenCL and playing the native C300 II files. Looks like a bug somewhere.
-
C300 II 4K files are ALL-I, with no P- or B-frames, so it's not an H.264 decode issue; the problem reported was purely in PP CC's 4K pipeline. Switching from CUDA to OpenCL shouldn't make any noticeable difference in performance (I wouldn't be surprised if NVidia runs OpenCL commands through a translation layer and executes CUDA internally). FCPX is still snappier/faster (it uses more CPU too, so it's not just the GPU), however at least PP CC is now usable for 4K material on OSX El Capitan, GTX 980 Ti, etc.
-
While the sensor has some effect on noise performance, the biggest effect on noise is in-camera NR. The C300 II defaults to NR off; turning it on reduces noise while softening the image a bit. I'm sure Neat Video would work better, however ISO 25600 could work in some situations. Sony has a lot of compute power optimized for NR (the real secret of the A7S/II in my opinion, more so than the sensor). For ultra low light, the color quality difference between the C300 II and A7S/II won't be as visible, so the A7S/II makes more sense there. That said, we shot a night scene with the C300 II and one LED light and it worked well: noise, when visible, is filmic (in-camera NR off). I need to do more tests with camera settings and Alexa LUTs; it looks like Canon tuned the C300 II to better match ARRI, which is a good thing.
-
Using Samsung SSDs, so not I/O limited. Twelve 2.93 GHz cores, 24 GB RAM, GTX 980 Ti 6 GB, so not compute limited. The fact that OpenCL runs fully real-time on NVidia graphics (as one would expect with this hardware; I'd expect CUDA to be faster) points to some kind of issue with Premiere and/or the NVidia drivers. With earlier versions of Premiere, OpenCL ran much slower than CUDA on this hardware.
-
Changing Premiere Pro CC 2015's renderer to Mercury - OpenCL (from CUDA) resulted in real-time playback of 4K material at full resolution! This is on OSX El Capitan with a GTX 980 and the latest drivers. Simultaneous playback on a second 4K display didn't slow things down, and adding a third display (a 1080p HDTV) continued to run without slowdown.