Everything posted by jcs
-
Your top image is named: EditReady.jpg.d7b9435857e491918307a0beea52aae8.jpg and the bottom image is: iFFmpeg.jpg.d89ba4bd239a8061c664c55e0d1c76fb.jpg Unless something got mixed up, the iFFMPEG version has a lot more shadow information (and the EditReady version has severely clipped the blacks). I use FFMPEG extensively, using both ProRes modes, and haven't seen black clipping like that (there are color shift issues with 4K material, however). You might try exporting to H.264 in both EditReady and iFFMPEG and see the difference from ProRes (H.264 runs really fast in FFMPEG). 175-225Mb/s H.264 should be OK for editing coming from 100Mb/s H.265 (nothing really lost as both are 420 8-bit).
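For reference, a minimal sketch of the kind of ffmpeg invocation described above (high-bitrate H.264 420 8-bit for editing). The file names and exact bitrate are placeholders; the flags are standard ffmpeg/libx264 options:

```python
# Hypothetical sketch: build the ffmpeg command for a ~200 Mb/s H.264
# editing transcode from an H.265 source. File names are placeholders.
def h264_edit_cmd(src, dst, bitrate="200M"):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",       # H.264 encoder
        "-b:v", bitrate,         # target bitrate (175-225 Mb/s range)
        "-pix_fmt", "yuv420p",   # 420 8-bit, matching the H.265 source
        "-c:a", "copy",          # pass audio through untouched
        dst,
    ]

cmd = h264_edit_cmd("clip.mp4", "clip_h264.mov")
print(" ".join(cmd))
```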
-
Top is clipping the blacks heavily. Based on your prior comments, ER top, IF bottom.
-
For ffmpeg-based transcoders, there's a bug in ffmpeg and ProRes: it appears anything > 1080p gets marked as bt601 (instead of rec709), and the resulting incorrect transform(s) change the color and gamma. I tried various workarounds when developing Photon, however none of them worked (ProRes mode 1 in Photon has the least color shift vs. mode 2). Thus, I included high-bitrate H.264 420 8-bit as well as H.264 10-bit 422, which preserve color much better and should be visually equivalent to ProRes, especially when coming from H.265 8-bit 420. 10-bit 422 ALL-I H.264 is pretty much the same as ProRes (a form of MJPEG in a sense; however, H.264 ALL-I likely has more advanced macroblock handling (variable sizes, quality), which can provide higher quality but is more expensive to decode vs. ProRes, which is very fast (even faster than vanilla MJPEG)).
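One of the workaround attempts was forcing Rec.709 color tags on the ProRes output. The flags below are real ffmpeg options (prores_ks encoder; profile 1 = ProRes LT), but as noted above none of these fully fixed the shift, so treat this as a sketch of the attempt, not a fix:

```python
# Sketch: explicitly tag ProRes output as Rec.709 (a workaround attempt for
# the >1080p bt601 mis-tagging described above; it did not fully fix the shift).
def prores_rec709_cmd(src, dst):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks",          # ffmpeg's ProRes encoder
        "-profile:v", "1",            # ProRes LT
        "-color_primaries", "bt709",  # force Rec.709 metadata
        "-color_trc", "bt709",
        "-colorspace", "bt709",
        dst,
    ]
```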
-
Thanks guys! Currently Windows only, though I'd expect it to work OK with Wine or any VM on OSX. Might port the project to wxWidgets, which would allow an easy OSX port. Photon was designed to be the fastest of the ffmpeg-based transcoders: please let me know if any transcoder is faster. Testing with 12 (or more) NX1 4K H.265 files yields 50fps for 4K => 1080p ProRes (very high quality scaler), and NX1 H.265 4K to ProRes LT at 4K = 33.1fps. This is on a 5+ year old MacPro: it would be helpful to know performance numbers on other machines.
-
I created a fast transcoding app for in-house use and thought it might be useful to others: you can get it here. I added benchmarking features so you can easily see performance differences between codecs and settings: ETA to complete, MB/s, and FPS. With example NX1 files, it can do ~46fps converting 4K H.265 to 1080p ProRes LT (12-core 2010 Mac Pro running Win7). Sony A7S MP4 files can be converted to ProRes LT at 192fps. It can rewrap A7S and FS700 files so Resolve 11 can read them with audio. It also works well for high-quality GH4 4K to 1080p (Lanczos scaler). 422 10-bit H.264 (XAVC) in both IPB & ALL-I, as well as H.265 output, are also provided for experimental use.
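Photon's internals aren't published here, but the three benchmark readouts mentioned (FPS, MB/s, ETA) reduce to simple arithmetic. This sketch shows that arithmetic with made-up numbers, not actual Photon measurements:

```python
# Sketch of the arithmetic behind the FPS / MB/s / ETA benchmark readouts.
# All inputs are illustrative.
def benchmark_stats(frames_done, total_frames, elapsed_s, bytes_done):
    fps = frames_done / elapsed_s                # frames per second so far
    mb_per_s = bytes_done / elapsed_s / 1e6      # throughput in MB/s
    eta_s = (total_frames - frames_done) / fps   # seconds remaining at current rate
    return fps, mb_per_s, eta_s
```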
-
Part of ARRI's secret sauce is their camera ergonomics and menu system: very easy to use and widely praised. Compare the Red menu system: great for technophiles, but with so much complexity it's not easy to use. I've seen very experienced Red operators hunt around for menu settings. Those seconds and minutes add up during production while everyone waits- time is money, and it also gets on the cast & crew's nerves waiting for something that really shouldn't require waiting.

Capturing a scene without highlight or shadow clipping, with maximum flexibility in post, is the most efficient workflow: ARRI is currently the best. ARRI's ProRes files easily compete with Red's raw files and are much faster & easier to work with in post. ARRI now provides 50Mbps 422 MPEG2 for broadcast work! Smaller files are faster to work with and cheaper in the long run. Sony's XAVC (H.264 422 10-bit), available from the FS7 and up, is very efficient and useful. H.264 with 444 12-bit would be another useful option. H.265 provides twice the efficiency of H.264 and can also support 444 and 10+ bits. Even without using a GPU, current H.265 decoders easily run much faster than real-time on current computers. The trend is clear: even at the high end, compressed codecs are replacing raw (which in Red's case is lightly wavelet-compressed Bayer data).

In terms of image quality vs. resolution, for Skyfall's 4K they could have shot Red 5K and scaled down; instead they shot on Alexa and scaled up from ~3K. ARRI has years of experience with film cameras and digital film scanners: their cameras produce the most film-like images possible. That said, the F65 is less film-like and more... something else- addictive color for the eyes, perhaps different in the way Technicolor 3-strip was compared to Kodak film. The F65 is of course different from Technicolor 3-strip; the point is the color rendering makes you stare at the beauty of it. ARRI, Canon, Red, Sony, Panasonic (even the NX1!) can produce this kind of color, however Oblivion & Lucy (even After Earth's F65 shots) look different than other cameras (perhaps not better if wanting a film look, but the color is amazingly addictive).
-
Hey j.f.r, if we want to be objective, top of the food chain means best of the best. Hurlbut's article via the OP was about skin tones & color, AOT HFR. House of Cards doesn't look very compelling for color & skin tone quality. Marco Polo, shot on the F55, didn't look very good either in the beginning, but looked better in later episodes (different DPs & directors throughout the series).

How do we measure something that can be very subjective? By looking at which cameras were used to win Oscars: http://nofilmschool.com/2014/01/which-cameras-were-used-on-the-oscar-nominated-films-of-2014 . For 2014 the A-cameras were ARRI (Alexa or film). I rate the F65 higher than the Alexa for the same reason Besson and his DP did: it provides the best color and skin tones. Why wasn't the F65 used much in Hollywood? Because it is physically an ugly camera (per Besson's DP; absurd but apparently true). They looked past that, tested all the top cameras, and went with the F65. I don't know how hard it is to make an F65 look good, but I do know it's relatively easy with an Alexa, yet another reason it is used so much.

Ease of use and reliability put some cameras much higher than others (regardless of image quality). Red makes fine cameras for the price, however Sony, Panasonic, and ARRI make cameras which provide higher image quality, are much easier to use, and are more reliable, at similar and higher price points. No one would spend extra for an Alexa over a Red if the quality wasn't there. While the F65 has the best skin tones and color, the Alexa is the top of the food chain in Hollywood for the simple fact that it's the most used high-end camera when budgets can afford it: a combination of great skin tone & color quality, usability, and ease of getting said great color & skin tones in post. Besson used a bunch of Reds for the car chase in Lucy because they couldn't find more F65s in Europe (they had to purchase 2 F65s of their own- none were available for rental). Reds, 5Ds, C300s, etc. are fine cameras for many purposes, but they are not top of the food chain for skin tones and image quality as A-cameras. For HFR, Phantom is top of the food chain (ARRI also provides excellent HFR).
-
Nice color, skin tones & detail. This is the first NX1 footage I've seen with natural looking blue sky (vs. too much green).
-
Pretty cool- love the organic grain from the global shutter CCD. Would be interesting to see a side-by-side with the A7S in 60p crop mode (or full frame if low light).
-
Referencing Oblivion and Lucy, the F65 is top of the food chain for color and skin tones (Luc Besson and his DP reached the same conclusion). The ARRI 65 looks interesting- expect it to be very competitive. For ease of use and the most foolproof color, ARRI is top of the food chain (check the cameras used in all the Oscar winners for the last few years). In Shane's test referenced above, the Dragon was noisier and the skin tones look better on the C500; I didn't see anything showing better performance vs. ARRI for skin tones or highlight rolloff. The Dragon sensor and software upgrades are certainly a step in the right direction for Red, and the Weapon looks to continue that trend. Red cameras are priced right for their performance level (it looks like the ARRI 65 is $10k/day, rental only). Sony has improved the F55 with firmware/LUTs (including matching Alexa skin tones), and I expect Sony to be competitive at the Red price range (above and below as well). Panasonic's Varicam 35 doesn't get much love, but the skin tones looked pretty good from what I've seen so far. Some friends are moving away from Red after the latest branding- "Weapon" and skulls, etc. Weapon & skulls might be good branding for firearms/motorcycles, but it seems odd for a camera company.
-
O7Q+ records A7S 4K.
-
Andrew- as a successful blog owner you have the option to help or to hurt the world with your voice. Clarkson is an alcoholic, an addict. His behavior is irrational and he can't be reasoned with while he's drunk. Even sober, addicts tend to not behave rationally. This doesn't mean they should be isolated; in fact the best way to help a person suffering from addiction is to immerse them in compassionate fellowship. Do you know any addicts? Have you seen any turn their life around, and help others to heal? If not, perhaps attend an Alcoholics Anonymous meeting (or similar) to witness addicts helping each other heal through fellowship (addicts are always addicts and never 'cured'; always mindful to avoid falling into old patterns).

Since moving to LA in 2006 I was surprised how pervasive drug, alcohol, and sex addiction is in the entertainment industry. People doing drugs and drinking on set isn't healthy and is unfortunately very common here (not allowed on my sets- what one does on their own time is their own business). Instead of turning a blind eye to addicts on productions, we should provide daily reminders that there is free fellowship available to help people deal with life (such as AA). Even better, entertainment companies should provide in-house help to encourage people to live healthy, drug-free lives through fellowship. The BBC did the right thing in letting Clarkson go. Clarkson needs change in his life to give him a chance to deal with his addictions, ideally through positive connections to other people through fellowship.

Discussing addiction on a filmmaking blog is totally appropriate. If the film industry can help heal its players, then it can help create messages and positive influence to help millions of people suffering in our world.
-
We must consider the result of the Influence of our Voice: do we ultimately want to heal or to harm? A story which debuts in a lonely place which heals can go viral, connecting many lonely people and bringing much needed fellowship and healing.
-
The challenge with an integrated 3D lens is that the separation isn't adjustable and the distance is too small (less than 1"?). For normal views it needs to be around 2.5". Will the resolution be halved as well? I built a rig out of wood and fiberglass from scratch- alignment wasn't an issue. The 3D photos were made with this rig: http://www.brightland.com/Akumira/Akumira.htm
-
The most flexible high quality solution uses two cameras and a half-silvered mirror (beam splitter). This allows the cameras to get close enough together to prevent hyper stereo (for normal/close ups). Otherwise you can mount one camera upside down to get the bodies close together (but still too far apart for close ups: need to be close- about 2.5 inches between the centers of the lenses). This solution provides the highest quality- used in high-end film, etc. It allows control of stereo depth by moving the cameras closer or farther apart. For objects far away, the cameras can be placed on a simple side-by-side rig. Genlock/camera sync is best, however syncing via sound can work pretty well: https://www.youtube.com/watch?v=0rW2WXkFjno

If you can find something like this to rent: http://www.3dfilmfactory.com/index.php?option=com_content&view=article&id=72&Itemid=91 it shouldn't cost much more than a DSLR fee. Probably easiest to try to rent a pro stereo 3D camera (Panasonic, etc.). I have a Sony TD10 I should probably sell (consumer quality only). GoPro is another option: http://www.provideocoalition.com/gopro-dual-hero-3d

Efficiently editing stereo 3D is tricky. Cineform + Premiere was extremely buggy, as was direct editing in Vegas (both back in 2010; Sony TD10 footage). Resolve now supports direct stereo 3D editing- might be a good choice.
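To see why interaxial distance matters so much for close-ups, here's a back-of-envelope disparity estimate (standard similar-triangles approximation; all numbers are illustrative, not from a specific rig):

```python
# Approximate stereo disparity (similar-triangles model):
# disparity on sensor ~= interaxial * focal_length / subject_distance,
# then converted from mm on the sensor to image pixels.
def disparity_px(interaxial_mm, focal_mm, subject_mm, sensor_w_mm, image_w_px):
    d_mm = interaxial_mm * focal_mm / subject_mm   # disparity on the sensor
    return d_mm / sensor_w_mm * image_w_px         # convert to pixels

# ~2.5" (63.5mm) interaxial, 35mm lens, subject 2m away,
# full-frame sensor (36mm wide), 1920px-wide image
print(round(disparity_px(63.5, 35, 2000, 36, 1920), 1))  # → 59.3
```

Doubling the interaxial doubles the disparity, which is why a fixed wide separation (or two side-by-side bodies) quickly becomes hyper stereo on close subjects.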
-
If we rename the Internet the Argunet all will be good. Then we can start the Rantnet and Snipenet. We should probably take down Catnet, Gossipnet, and Adnet. After that we'll all be busy running from SkyNet.
-
Shoot in APS-C mode.
-
I figured they did it on the pans and really bright/dark transitions. It required a lot more work: http://blogs.indiewire.com/thompsononhollywood/how-they-did-it-technicolors-secret-recipe-for-best-picture-winner-birdman-20150223
-
A7S is the best indie camera right now: low light, decent color with a little work (lighting, WB, post), small manageable files, reliable, great lens options. Available used on ebay etc. for $2000 or less (even new / gray market? from Hong Kong, etc.).
-
None of the 'free' codecs (e.g. WebM/VP9, Theora) are as good as H.264, let alone H.265. If any of them ever became popular, and were included in all NLEs like H.264 so they'd be practical and useful, 'submarine' patents would pop up to stop them. The way the patent system is set up, it's pretty much impossible to create a competing codec without stepping on an existing patent. The Cineform codecs are cool in terms of quality (I purchased them from time to time over the years), however they just couldn't make them very reliable. Wavelet compression is useful for light compression only; DCT-based methods (such as ProRes, DNxHD, H.264/265, and pretty much everything else) are much better at high compression ratios where efficiency matters ('long GOP', i.e. interframe (IPB) vs. intraframe-only (I-only)). For light compression and high bitrates, ProRes is the leader. When efficiency is important, H.264 (and soon H.265) is the leader. Even 'ancient' 50Mbps 422 MPEG2 (long GOP) is still very popular for broadcast- it was recently added to ARRI's cameras.
-
hmcindie- I don't think it's a bug, I think what happens is the button is pressed twice really fast (effectively). This happened once, and after that I always watch to make sure it's running. Yell 'camera speed!' or 'action' on set after finger is off the button and the counter is going.
-
422 XAVC and 420 XAVC-S are marketing names from Sony for H.264 with specific settings. So any camera which currently supports H.264 should be able to support these formats. It's not likely there are additional patent costs. Typically higher-end features of the H.264 spec are reserved for the pro cameras for business reasons. If we lived in a world without patents and all the cameras were open source, democratization of codecs could be possible. But we don't, so it's not going to happen any time soon (open source cameras are coming and someday H.264's patents will expire).
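Since XAVC is essentially H.264 with particular settings, a 10-bit 422 encode can be approximated in ffmpeg. The flags below are standard libx264 options, but this is only a sketch: it requires a 10-bit build of libx264, and the bitrate is a placeholder, not Sony's actual XAVC parameters:

```python
# Sketch: approximate a 10-bit 4:2:2 "XAVC-like" H.264 encode with ffmpeg/libx264.
# Requires libx264 built with 10-bit support; bitrate is a placeholder.
def h264_10bit_422_cmd(src, dst, bitrate="100M"):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-profile:v", "high422",    # High 4:2:2 profile
        "-pix_fmt", "yuv422p10le",  # 10-bit 4:2:2
        "-b:v", bitrate,
        dst,
    ]
```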
-
The record button is not a deal breaker. You could be happy with ISO 25,600 or higher- you'll want to test with Neat Video to see what's good for you. A7S is the best low light camera right now.
-
Right on Luke. What's important isn't memorizing the rules, it's understanding why the rules exist; then we learn they're not rules at all, only guidelines that in many cases can provide predictable results. When money is on the line, everything changes- less risk tends to be taken, stress is higher, people need to pay rent, buy food, support their families, etc. It seems the best art is created from passion and not for money. I wondered why so many artists/designers would get upset when, as a manager and developer, I challenged their designs for usability (Cognitive Science concepts). After doing additional research, I found many of them were simply copying 'cool designs' without really knowing why they were good or not. In many cases, while the designs looked cool, the functionality greatly suffered. When challenged to make changes to improve usability, since they didn't really understand the design they had copied, they didn't know how to correct the flaws, and thus became defensive as the jig was up. I'm totally cool with copying good designs, everyone does it, however it's important to know why the designs are good and only copy them for the right reasons. These concepts apply to filmmaking as well: it's cool to copy what works and/or break the 'rules', so long as we understand why we're doing it.
-
Regarding rules for creating art: while I agree that there are no rules, understanding why the 'rules' are commonly used is helpful. Then as a creator, one can use the rules as tools or choose to ignore them. Rules are technical, and technology by definition is a tool- like brushes, paint, and canvas. In a prior thread, we discussed 'winging it' vs. 'following the rules of traditional filmmaking'. Ed's Charlie Chicken was pretty good, though I could see Ed's concepts going a lot further with a little less winging it and a little more planning. It's clear Ed spent a lot of time with the script and overall design of his latest creative work, and it shows a dramatic improvement in The Quiet Escape. Now that I'm back doing tech work for my day job, I'm itching to get back to shooting something creative, but I won't roll camera until we have a solid story, script, and basic shots planned ahead of time.

There are (at least) two ways to enjoy art: by experiencing it without thinking, letting our minds generate emotions/feelings and letting them flow with no analysis, and, more or less the complement, analyzing the art for structure, purpose, color, flaws, etc. Both methods are valid, and the same work can be experienced differently with repeated interactions. Our perceptions change even from what just happened in our lives right before or after experiencing the art. Sometimes a work we didn't like is experienced again years later and our appreciation flips 180 degrees: from hate to love or vice versa. Marginalizing one form of 'art experiencing' over the other is a form of extremism and narrow-mindedness. All forms are valid, and it depends on each of our own unique perspectives at the time of the experience. Understanding this allows us to have a sort of 'art empathy' for others' reactions to art which may differ from our own.