Everything posted by jcs

  1. The SEL18200 is indeed excellent, shot most of our SciFi short with it (FS700 kit lens). The SELP18200 (black, with power zoom) also works well (slow power zoom can be useful). As an event lens on auto-everything (100% A7S with SEL18200):
  2. I noticed Jupiter Ascending used Leica lenses. After a little research I found this: http://www.hurlbutvisuals.com/blog/2014/03/why-do-we-want-flat-glass/ The Cooke has distortion and less pleasing bokeh (only 5 blades), however it just looks better ("3D"). Testing cameras and lenses against each other is a useful exercise. I've been testing the GH4 against the A7S in studio lighting and the GH4 is looking better, especially in skintones. I had tweaked the A7S to look similar to the 5D3 and thought it looked pretty good until I did the same with the GH4. Under studio lighting, the GH4 produces nicer skintones and a cleaner image (+ 4K!). In December the Atomos Shogun will bring 4K to the A7S, however the GH4 will get 10-bit 422 4K at the same time (vs. 8-bit for the A7S).
  3. If you're looking for ND filters, you've got Hot Rod Cameras, Division Camera, and Samy's Camera nearby (Hollywood). If you've got (or know someone who has) Amazon Prime, you can probably get the ND you want overnight with $3.99 shipping. B&H Photo has recently reduced shipping costs and doesn't charge tax (Amazon now charges tax depending on where the product is shipped from). More options: http://www.yelp.com/search?cflt=photographystores&find_desc=&find_loc=Los+Angeles%2C+CA# You can get this overnight for $3.99 shipping: http://www.amazon.com/77mm-3-0-1-000X-Single-Coating/dp/B003ZDHP7U/ "Its principal field of application is the observation and documentation of industrial processes with extreme brightness, such as steel furnaces, incinerators, glowing filaments in halogen- and other bulbs. The filter factor is 1000x." That should be dark enough for ya :) More info here: http://wolfcrow.com/blog/the-sony-a7s-4k-guide-part-four-filters-and-internal-recording/
  4. Provided your camera system doesn't produce aliasing/moire (and/or the subject matter can handle a sharp lens), using the sharpest lens will give you the most image control in post. Blurring in post looks better than sharpening, for example. For wide, landscape shots, sharper lenses are helpful. For closeups of people, softer lenses and/or using special filters such as Black Pro Mist et al can help skin look better and give the image a more filmic look. In the end, sharpness depends on the intended goal of the shot.
  5. Hey fuzzynormal, indeed shooting in the desert can be a challenge. One of the benefits is that when it's super hot, there are few or no people out there to get in your shot :) Setting goals and deadlines forced us to get it done, no matter what. After finishing this short, it's more understandable why it takes so long to complete these projects, and why some projects never get finished. While we can't please everyone, it's helpful to get feedback, good or bad, to help with future projects.
  6. Because they want a small, easy-to-use camera with the best color science and overall quality for the price, with no need for potentially unreliable adapters for Canon glass. While we enjoyed having the extra features of the FS700, especially slomo, ultimately native Canon EF glass support and superior color science are more valuable in most cases.
  7. Pascal- thanks for the feedback. I can probably fix the green screen shots if I understand what you are seeing as fake. Can you describe in more detail what exactly looks fake? The jump into the pool was real, no green screen. What aspects of the explosion look wrong? I understand there should be major waves on the sea, however that was beyond our budget (this could be done in 3DS Max or AE with a suitable plugin and rendering time). I thought about using a convolution reverb on the outside voices, however no one had an issue with the sound until now. I can take another look at it.
Quirky- thanks for the kind words. I recently saw Barbarella for the first time after wrapping Delta. It was entertaining in many ways, though not for the story or acting. The color was amazing, and researching how it was done (the history of color and Technicolor film) was fascinating. It was also interesting reading about how they did compositing back then, all with film processes (no computers). Jane Fonda looked great, with and without clothes, and the 60's sets had a fun vibe. It was also interesting to learn that the band Duran Duran took their name from Dr. Durand Durand in Barbarella.
Our goal with this project, never having done any sort of narrative before, was to get it done in a timely manner. I wrote the first draft script in a couple of hours. We revised it as we shot, and the Planet Dominous scene was written on the drive to the Salton Sea where it was filmed. The original idea was more Zen philosophy and less drama. After showing a rough cut to a few friends and family, we refactored it for more drama. We now have a better idea what we'll need to do in the future for a more engaging story. Moving forward, our primary goal is making sure the story is decent before we begin shooting.
  8. Thanks Ebrahim. An interesting finding is that on some monitors the footage looks green and on others it looks magenta. Even after X-Rite color calibration, the effect is still there (though reduced). I biased towards magenta so that on 'green' monitors it looks OK and not too magenta on 'magenta' monitors. I have not done extensive tests, but perhaps Canon's colors don't swing so much between magenta and green with similar monitors, meaning something deeper is going on with the perceptual color science. Thanks Matt. All the moving shots were on location. The 'beach' shots were in the desert- the Salton Sea at around 111F and high humidity. We delayed shooting until near sunset hoping it would cool off (it didn't), so we didn't have much time, and thus shot plates so we could do the fight scene via green screen. Indeed we were very lucky- the birds are all real. The 5D3 shot was the city pan, taken on Runyon Canyon above Hollywood looking toward Century City.
  9. Regarding the shirt being blue: the shirt is lavender under the tungsten light, which naturally has more red light (warmer). In order to compare colors between the shots, he needed to run the color match on the tungsten clip, which he did not do. He trusted his naked eye comparing un-corrected tungsten light against corrected fluorescent, which is not accurate. While our eyes and brain are good at detecting 'healthy' color for skintones, charts and scopes are needed to ensure accurate color- for a variety of reasons, our eyes are not very accurate, especially if one stares at an image for a long time without looking away. This color chart matching method is currently the most accurate possible. For more accuracy we could use more color chips- 64, 100, etc., the more the better.
  10. sunyata- setting white balance (1 linear transform) and/or using the 3-way color corrector will perform at most 3 linear transforms: white balance for shadows, midtones, and highlights. The Resolve color-calibration method uses 20 color patches- a much more accurate non-linear transform in a 3D LUT, which is very helpful for skintones as well as preserving other scene colors correctly. Ebrahim- without even testing it (yet), I'm confident 3D LUT color matching will work with the 5D3/Canon and any other camera, where both cameras shoot the same color chart in the same lighting conditions. The resulting 3D LUT can then be applied to all of the non-Canon clips and look very similar to native Canon footage. What isn't known is if a more generalized 3D LUT can be created that can color-transform (for example) Sony A7S into 5D3 color without having to shoot color charts for both cameras for every scene. For a given WB, ISO, and exposure, it might be possible (otherwise color behavior may change with WB/ISO/exposure). I'll post in a new thread when I have time to test this theory.
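The contrast above between a linear corrector and a chart-driven 3D LUT can be sketched in code. This is purely illustrative (not Resolve's algorithm, and the patch values are hypothetical): it fits the per-channel gain/offset that a white-balance or 3-way-style correction applies, via closed-form least squares on paired chart patches. A 3D LUT generalizes this to a nonlinear, channel-coupled mapping, which is why it matches skintones better.

```python
# Illustrative sketch: fit a per-channel linear (gain/offset) correction from
# paired color-chart patch values. A 3D LUT replaces this single linear fit
# with a full nonlinear 3D mapping built from all 20+ patches.

def fit_channel(src, dst):
    """Least-squares gain/offset so that gain*src + offset ~= dst."""
    n = len(src)
    mx = sum(src) / n
    my = sum(dst) / n
    var = sum((x - mx) ** 2 for x in src)
    cov = sum((x - mx) * (y - my) for x, y in zip(src, dst))
    gain = cov / var
    offset = my - gain * mx
    return gain, offset

# Hypothetical patch values (0..1) for one channel: camera vs. reference chart.
camera    = [0.05, 0.20, 0.35, 0.50, 0.70, 0.90]
reference = [0.02, 0.18, 0.34, 0.50, 0.72, 0.95]

gain, offset = fit_channel(camera, reference)
corrected = [gain * v + offset for v in camera]
```

A fit like this can only stretch and shift each channel globally; it cannot fix a hue twist that only affects, say, midtone skintones, which is exactly where the chart-built 3D LUT earns its keep.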
  11. $100 may seem like a lot for a color checker, however it's not trivial creating highly accurate color patches good enough for color calibration. The color patches/chips look like clay/paint and are somewhat fragile and shouldn't be touched etc. Apparently over time the patches will change color and no longer be as useful for calibration. It's not clear if OLED displays are accurate enough for color patches- this would probably only be useful for camera matching vs. a purely reflective surface useful for calibrating light and color on location. A purely sensor-based color calibration system which never 'wears out' would be ideal, especially if built into a camera system! (along with a depth sensor for highly accurate autofocus as well as manual focus (this tech exists right now and is very low cost)). My initial tests with the A7S and the color checker were very good. After more testing I'll know if using this technique will render Canon's excellent color science advantage moot (other than out-of-the camera convenience). Especially when we can shoot a photo with the 5D3 and color checker and use that as a "match-to" reference for the A7S, FS700, GH4, etc. This technique would also work with the ARRI Alexa.
  12. Vs. the FS7, the C100 Mark II is smaller and lighter, and most importantly has Canon color science, which is nice for pleasing colors, especially skintones, with fast turn-around and little or no post work. We can make the FS700 and A7S look great, but it's more work than with the 5D3 (and other Canon cameras). We like the color science of the A7S better than the FS700; perhaps the FS7 will be better than the A7S.
  13. Re: "How to edit H265?" Perhaps use HandBrake or a similar tool built on ffmpeg components (which include H.265 decoding) to transcode to H.264 at 2x the bitrate, or to ProRes/DNxHD, etc.
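As a sketch of that transcode, here is one way to build the ffmpeg command in Python. The filenames and source bitrate are hypothetical, and this assumes an ffmpeg build with HEVC decoding; the 2x bitrate rule of thumb is the one suggested above.

```python
# Sketch: build (but don't run) an ffmpeg command that transcodes an H.265
# clip to H.264 at roughly 2x the source bitrate for smoother editing.
# Filenames and the 50 Mbps source bitrate are hypothetical.

def h265_to_h264_cmd(src, dst, src_bitrate_mbps):
    target = src_bitrate_mbps * 2  # ~2x bitrate to preserve quality in H.264
    return ["ffmpeg", "-i", src,
            "-c:v", "libx264", "-b:v", f"{target}M",
            "-c:a", "copy", dst]

cmd = h265_to_h264_cmd("clip_h265.mp4", "clip_h264.mp4", 50)
# Execute with: subprocess.run(cmd, check=True)
```

For a ProRes intermediate instead, swapping `-c:v libx264 -b:v …` for `-c:v prores_ks` (with PCM audio in a .mov container) is the usual route.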
  14. Color correction using the X-Rite Color Checker Passport- tested:
- http://www.mattroberts.org/MBR_Color_Corrector/ - interesting (fully automated) but not as good as Resolve 11; not real-time during playback.
- SpeedGrade- when Dynamic-Linking from PPro, couldn't access the SG Timeline panel to reach the color matrix controls. SpeedGrade would not load the A7S's MP4s directly. Since SpeedGrade doesn't appear to use a 3D LUT and instead appears to use a simple linear matrix, the quality wouldn't be as good as a non-linear (and much more complex/detailed) 3D LUT solution.
- Resolve 11- super easy and fast (after googling how to; video in prior post). The resulting 3D LUT plays back in real-time in PPro and looks very good (add any Lumetri effect to the clip in PPro, then load the Resolve-generated .cube file). When running Resolve and PPro at the same time, start Resolve first, else it might complain that there's no GPU memory.
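The .cube file Resolve exports is a plain-text 3D LUT. A minimal sketch of parsing one and sampling it with trilinear interpolation, roughly what an NLE does per pixel (simplified: TITLE/DOMAIN metadata is skipped, and a tiny 2x2x2 identity table stands in for a real 33-point LUT):

```python
# Minimal .cube 3D LUT reader + trilinear sampler (illustrative, simplified).
# In .cube data the red index varies fastest: index = r + g*N + b*N*N.

def parse_cube(text):
    size, table = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or line.startswith("TITLE"):
            continue
        if line.startswith("LUT_3D_SIZE"):
            size = int(line.split()[1])
        elif line[0].isdigit() or line[0] == "-":
            table.append(tuple(float(v) for v in line.split()))
    return size, table

def sample(size, table, r, g, b):
    def lerp(a, c, t):
        return tuple(x + (y - x) * t for x, y in zip(a, c))
    def at(ri, gi, bi):
        return table[ri + gi * size + bi * size * size]
    # locate the lattice cell for each channel and the fraction within it
    coords = []
    for v in (r, g, b):
        f = min(v, 1.0) * (size - 1)
        i = min(int(f), size - 2)
        coords.append((i, f - i))
    (ri, rt), (gi, gt), (bi, bt) = coords
    # interpolate along r, then g, then b
    c00 = lerp(at(ri, gi, bi),     at(ri+1, gi, bi),     rt)
    c10 = lerp(at(ri, gi+1, bi),   at(ri+1, gi+1, bi),   rt)
    c01 = lerp(at(ri, gi, bi+1),   at(ri+1, gi, bi+1),   rt)
    c11 = lerp(at(ri, gi+1, bi+1), at(ri+1, gi+1, bi+1), rt)
    return lerp(lerp(c00, c10, gt), lerp(c01, c11, gt), bt)

# Tiny 2x2x2 identity LUT for demonstration:
cube_text = """LUT_3D_SIZE 2
0 0 0
1 0 0
0 1 0
1 1 0
0 0 1
1 0 1
0 1 1
1 1 1
"""
size, table = parse_cube(cube_text)
```

With an identity table, `sample(size, table, 0.25, 0.5, 0.75)` returns the input unchanged; a Resolve-generated chart-match LUT would instead bend each color toward the reference camera.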
  15. Lighting is indeed very important for color quality, with sun, HMI, and incandescent being the best so far in my testing experience: continuous spectrum. At the very end of the ARRI video (around 24 minutes) he shows the full spectrum LEDs, which have a decent full spectrum (blue LED with local phosphor). The new Remote Phosphor LEDs are excellent (Area 48 etc.), over $2.5k currently. All of our film was shot in the sun except for the green screen shots which were shot with daylight high CRI CFL and high CRI Cree LEDs: here the GH4 and 5D3 do better with color vs. the A7S. From their video, the ARRI LC-7C LED looked decent ($2.7K). Now that I have an X-Rite Color Checker Passport, I can shoot it on location (at least when lighting isn't radically changing) and create a high-quality LUT in Resolve which can be exported as a .cube file and used in PPro- for the ultimate white and (accurate) color balance. Helpful tutorial:
  16. Hey sunyata- good point regarding needing different presets for different conditions. For controlled lighting / studio shots, setting manual ISO and WB is much better for post work. For fast turnaround and changing lighting-condition shoots I use auto ISO and AWB (especially when outside, even more so during sunset after golden hour). Since SGamut(2) changes color behavior with exposure (perhaps even non-linearly- can't be fixed with a linear transform?), I would expect (Slog+SGamut)2 to be best used only for fixed lighting conditions. It would appear Sony didn't provide (Slog+SGamut)3 for the A7S for business reasons: reserved for the pro cameras. (Slog+SGamut)3 doesn't (non-linearly?) change color behavior with exposure- a much better solution and far less work in post, especially for skintones, where accurate color is most important.
For final color grading of our first narrative, a Sci-Fi short called Delta, I calibrated 3 different displays with an X-Rite i1Display Pro (working on both Win7 and OSX). Two of the displays were Dells (2405 and 3007) with CFL backlighting; the third was a Samsung D8000 HDTV with white LED backlighting. Even after repeated calibration, the displays weren't exactly the same, however color behavior was fairly close. I also checked color on uncalibrated MacBook Pros, iPhones, and iPads. This is something I did in videogame development- testing on a wide variety of hardware to see how the product performs. Each display is different, and many times one of the displays would show something the others did not, requiring a grading adjustment. Most end-user displays aren't calibrated: this kind of testing is helpful in making sure the final product looks good in most conditions.
In my experience with recent Sony cameras (FS700 and A7S), Sony's sensor+software are 'unstable' around skintones. The color swings around magenta and green, making accurate and good looking skintones much trickier than with Panasonic or Canon.
Even when skintones are 'correct' using the vectorscope, they don't always look correct with the naked eye. In other words, good looking skintones for Sony cameras have a small sweet spot, like the peak of a pointed parabola, with magenta on one side and green on the other. Panasonic goes green / orange with a more rounded parabola (more good skintone range), and Canon goes slightly-green / orange with the flattest parabola- the best skintone range with the least amount of post work.
I shot the first scenes of Delta with the A7S using stock PP6 (Cine 2 and Cinema Color Mode) outdoors with rapidly varying lighting conditions using auto ISO and AWB. In post I found the best skintones, especially for scenes of an alien planet, had a magenta bias: the magenta bias helped set the mood for an alien world. For the first Earth scene, I used PP7 with Slog2 and Pro Color Mode (essentially Kholi's settings with minor tweaks). The main shot used for this prototype poster is a 1080p frame grab from the film. The bottom images are GH4 with Natural profile from green screen studio shots. The wall image was shot on the A7S as a background plate during sunset on location. Interestingly, scene-to-scene color variance was never mentioned during early screenings. While the current version matches more closely, it's not perfect, and in the end the goal is for the color to affect emotion and scene energy more than perfect color matching. Note the character on the lower right has a magenta bias. When correcting for the white clothing, the skin ends up too green. The time-consuming fix is masking and correcting separately (I'll fix the colors for the final poster).
After working on this production with the A7S and GH4, I really appreciate how Canon handles skintones- the 5D3 requires far less work in post under a wide variety of lighting conditions.
For Sci-Fi, wild+interesting colors work well, however I'd prefer to have accurate+natural color straight from the camera and then make the colors into "Sci-Fi" colors in post. I much prefer the look of the 5D3 (including native H.264; RAW is amazing) vs. the C100/C300, and the 1DC doesn't provide decent price-performance, so we would not switch to Canon's current C line. In the unlikely event Canon produces a 5D4 with price+performance close to the A7S/GH4, we'd surely add it to our toolbox. The new Sony FS7 with 10-bit 422 XAVC and internal 4K is looking like a fine upgrade for our FS700+GH4 (the new PXW-X70 looks nice, though Sony's color science is still quite challenging).
  17. Hey JG- as noted in the comments on the Sony forum, the F5, F55, and F65 are all different (as is the A7S). It would be helpful to have a LUT specifically for the A7S. I've had the best luck using PP5 or PP6 (stock) or Slog2 (PP7) with Cinema or Pro color modes (including Kholi's tweaks). Seeing proper handling of A7S Slog2 with SGamut with a mathematically correct conversion would be interesting.
  18. Sometimes I shoot at 60fps to have the option for slow motion in post. In cases where slomo isn't needed, I use 'drop-frame' from 60 to 24: just add a 60fps clip to a 24fps sequence. When played back with a suitable standalone player or real-time hardware (such as streaming to my Samsung HDTV), the resulting motion is butter smooth: no judder or discontinuous motion (beyond what film might do with panning). While there is some interframe variance going from 2.5x 60 to 24, it's not really noticeable. 120fps provides perfect 24fps sampling- take every fifth frame (120/5 = 24). The video players in all web browsers are very low performance and drop frames effectively randomly, especially full screen. This can make otherwise excellent motion look poor.
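The sampling pattern described above is easy to model (a simple nearest-frame model of what an NLE does when a high-frame-rate clip lands on a 24fps sequence; frame indices only, no actual video):

```python
# Which source frames does a 24 fps timeline take from 60 fps vs 120 fps
# footage? 60/24 = 2.5 gives alternating 2- and 3-frame gaps (the small
# interframe variance mentioned above); 120/24 = 5 gives exactly every
# fifth frame, i.e. perfect 24 fps sampling.

def sampled_frames(src_fps, dst_fps, n_dst_frames):
    step = src_fps / dst_fps
    return [int(i * step) for i in range(n_dst_frames)]

print(sampled_frames(60, 24, 6))   # [0, 2, 5, 7, 10, 12] - gaps of 2,3,2,3,2
print(sampled_frames(120, 24, 6))  # [0, 5, 10, 15, 20, 25] - every 5th frame
```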
  19. The PC versions of the GTX 970 (and 980) will work in OSX. However, it requires installing OSX 10.10 (Yosemite, free beta from Apple) and the latest Nvidia driver: http://www.insanelymac.com/forum/topic/301416-nvidia-web-driver-updates-for-yosemite/ (I installed the Public Beta 5 version after updating Yosemite to Beta 5). While CUDA-Z appears to work, Premiere Pro CC 2014.0.1 and 2014.1 report CUDA is unavailable. Guessing updated drivers are needed as PPro 2014.1 works OK with the 970 in Win7x64sp1.
  20. Received an EVGA SC ACX 1.0 GTX 970 card from Amazon today ($339). The beautiful, quiet, 3-fan (Windforce) Gigabyte GTX 770 was removed and replaced with the spartan, efficient-looking, though flimsy EVGA 970. Upon power up, the EVGA made an interesting 'electrical' noise that I first thought was a fan rubbing. Perhaps this is related to the 'coil whine' comments online. Not clear if the noise is electrical components (sounds a bit like arcing) or a randomized PWM noise (normally PWM noise has a purer tone, like a square or triangle wave). The fans themselves are reasonably quiet. After installing the latest GeForce drivers for the 970 (344.16), the strange noise went away. Firing up Premiere Pro CC 2014.1 (latest), I'm pleased to report that the repeatable GPU crash is gone! The rendering bugs which are clearly on Adobe's side are still there, however that's only during playback, not rendering (there are other bugs I haven't tested that do appear during rendering: the workarounds are to re-order effects, change nesting, etc.). GPU RAM during playback gets up to 1.1GB (still not getting near 2GB), however as speculated it's clear the drivers are doing something different with the 970 (updated drivers: 344.16 vs 344.11). During rendering with PPro CC 2014.1 it appears much more CPU utilization is happening: all 12 (24) cores stay saturated longer. On the GPU side, I observed spikes up to 3.5GB and near 100% GPU utilization. Kudos to Adobe for using more of the available computing power. The factory overclocked 970 is so far stable, running at 40C idle and about 52C when rendering (sometimes peaking at 100% GPU utilization, 3.5GB GPU RAM peak). While not as elegant and sturdy as the Gigabyte 770, this EVGA 970 is faster and for the price is an amazing deal. Even better, this hardware/driver combo doesn't crash/lockup Premiere. The EVGA 970 is $30 cheaper than the Gigabyte 770 3-fan version (which isn't in stock; the EVGA is as of right now).
The EVGA heat-sink-fan assembly is not securely attached and wobbles when you touch it. For a static computer not being transported, not an issue. For a computer that could be moved/shipped, the extra $30 for the Gigabyte (or similar) would be worth it. If the 980 isn't any faster in Premiere, I'll keep the 970 (uses less power, ~$200 lower cost). The 980 could be here next week, else likely after Oct. 20.
  21. Regarding alpha channels loaded from files in Photoshop: they don't work as transparency blend channels when loaded. It's necessary to select the layer, then use Layer->Layer Mask->Layer from Transparency. This creates a new alpha channel, into which the file's alpha channel can be copied (then deleted). Not clear why this last step is sometimes necessary: why it doesn't use the alpha channel as transparency (instead of all white after the menu command). The garbage areas in PPro color channels are then masked out and the blending looks correct. To test whether 4GB VRAM will change how the Nvidia drivers behave (even though only ~50% VRAM is used when bugs/crashes are happening), I ordered a GTX 970 4GB which should be here tomorrow (980's are still hard to get).
  22. hmcindie- the Quadro 5000 has 2.5GB RAM (the 770 has 2GB). The GPU memory monitor shows max (peak) ~1GB GPU RAM utilization. A 4K texture takes 3840x2160*4*2 = 66MB of VRAM (RGBA, 16-bits per element)- 2GB is a lot of memory for two 4K videos with alpha and a few more 4K elements for effects. While it can peak at ~1GB, it's not anywhere near that during the lockups/crashes. As a long-time software developer, I would have my code degrade in performance gracefully in the event of limited system resources (take a slower path, swap memory out, etc., vs. no error/resource checking and crashing). Examining the crash stack traces in OSX, PPro has a lot of bugs in its internal node-based effects system (separate bugs from Nvidia's drivers): different bugs based on node (effect) ordering, etc. (not always crashes- sometimes wrong results). On a positive note, the node-based core of PPro should allow a more advanced GUI with a node-graph similar to more advanced systems such as Resolve, allowing for faster and more efficient workflows as well as much more complex operation support. Perhaps most folks use After Effects or other third-party apps for advanced compositing. However, After Effects is so slow compared to PPro that I do everything I can to avoid it. The creative process really benefits from working at or near real-time.
The alpha channel system of PPro is non-standard. Bringing in an alpha-channel rendering from 3DS Max shows PPro's alpha interpretation is way off- the blending doesn't make any sense (premultiplied or not, inverted, etc., still wrong). Here's a test to try: export a frame with alpha from PPro and bring it into Photoshop- the alpha is not correct and the 'gamma' for the alpha looks way off. Additionally, there can be garbage in the color channels. After Effects alpha also didn't work with 3DS Max alpha.
Perhaps this is why the specialized compositing apps exist- Adobe's alpha support is broken (along with the crashing bugs)?
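The premultiplied-vs-straight confusion described above is easy to demonstrate numerically. A sketch (illustrative pixel values, not Adobe's actual code) showing that running a straight-alpha foreground through the premultiplied "over" formula produces a visibly too-bright composite:

```python
# Porter-Duff "over" in both alpha conventions. Misinterpreting which one a
# file uses breaks the blend, which looks like "alpha gamma" being way off.

def over_straight(fg_rgb, fg_a, bg_rgb):
    # straight (unassociated) alpha: fg color is NOT pre-scaled by alpha
    return tuple(f * fg_a + b * (1 - fg_a) for f, b in zip(fg_rgb, bg_rgb))

def over_premultiplied(fg_rgb, fg_a, bg_rgb):
    # premultiplied (associated) alpha: fg color is already scaled by alpha
    return tuple(f + b * (1 - fg_a) for f, b in zip(fg_rgb, bg_rgb))

fg = (0.8, 0.2, 0.1)   # straight-alpha foreground color (hypothetical)
a = 0.5
bg = (0.0, 0.0, 1.0)   # blue background

correct = over_straight(fg, a, bg)      # the intended 50% blend
wrong = over_premultiplied(fg, a, bg)   # same pixel, wrong convention
```

Here `wrong` double-counts the foreground brightness (0.8 instead of 0.4 in red), which is exactly the kind of nonsensical blend described with the 3DS Max renders.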
  23. Premiere Pro CC 2014.1 released today, tested on Win7x64 with an NVidia GTX 770 (latest drivers).
The Good: the new UI colors are cool, and Bezier masks are going to be helpful.
The Bad: no change- still crashes on nested 4K green screen clips on 1080p sequences (Ultra Key, etc.); no change- long pauses/lockups in Win7x64sp1.
The Ugly: it appears the lockups/crashes/slowdowns are related to Nvidia's drivers on both Windows and OSX. After switching to OSX to finish our short film (same MacPro hardware), I also stopped using NVidia's "Web driver" and am using Apple's default Nvidia driver: fewer crashes. In OSX, there are no slowdowns/lockups, and crashes are far less frequent (same project files) vs. Win7x64sp1. Crashes on the OSX side also appear to be related to Nvidia's drivers. For those experiencing these issues, perhaps trying a fast ATI/AMD card (R9 290X, etc.) and OpenCL might be a workaround.