
Leaderboard

Popular Content

Showing content with the highest reputation on 12/06/2015 in all areas

  1. Everything I said is true and is backed up with hard evidence. There's no magic here - just good optics. Perhaps you might actually read the white paper I gave you a link to? Eventually, conservative established lens makers in Japan and Germany will wake up and realize the true benefit of the Speed Booster approach for designing high-speed, short back-focal-length (BFL) optics. With any luck I'll be retiring on royalties when they do. There are numerous examples of Speed Booster/lens combinations that beat native lenses. For instance, a Voigtlander 90mm/3.5 plus S.B. gives a 60mm f/2.5 that is better than the latest Olympus 60mm f/2.8 macro lens: http://***URL removed***/forums/post/51895542 . A Sigma 35/1.4 plus S.B. gives a 25/1.0 that is much better than a Voigtlander 25/0.95: http://***URL removed***/forums/post/55298481 . And these examples use the old version of the Speed Booster; the new ones referred to in my white paper are even better. Finally, I defy you to find any f/1.0 lens with better image quality than a Zeiss Otus combined with a Speed Booster Ultra.
    6 points
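The arithmetic behind these combinations is easy to check: a focal reducer of magnification m scales both the focal length and the f-number by m, for a speed gain of 2*log2(1/m) stops. A minimal sketch, assuming the original 0.71x Speed Booster ratio (the helper function is illustrative, not from the post):

```python
# Minimal sketch of focal-reducer arithmetic, assuming the original
# 0.71x Speed Booster ratio. Function and example are illustrative.
import math

def reduce_lens(focal_mm, f_number, m=0.71):
    """A focal reducer of magnification m scales focal length and
    f-number by m; the speed gain is 2*log2(1/m) stops."""
    return focal_mm * m, f_number * m, 2 * math.log2(1 / m)

f, n, gain = reduce_lens(35, 1.4)  # the Sigma 35/1.4 example above
print(f"{f:.0f}mm f/{n:.2f}, +{gain:.1f} stops")  # ~25mm f/0.99, +1.0 stop
```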
  2. http://menexmachina.blogspot.com/2015/12/a7rii-m43cmount-lenses-with-cropzoom.html
    4 points
  3. Every shot in this video except the rooftop was shot in 10-bit V-Log L on the GH4/Assassin. Thanks to Sandro and Nicolas Maillet for helping me with a couple of Twixtor shots.
    2 points
  4. I did a test with these LUTs and the NX1. Love them. Check the video here: http://m.youtube.com/watch?v=WUHgyPtvilI
    2 points
  5. Brian (designer of the Metabones Speed Booster optics, for those who don't realise) is 100% right, and I see it with my own eyes when I use the latest Speed Boosters, especially the XL-T on Micro Four Thirds. I will likely be selling most of my native Micro Four Thirds lenses soon, unless the manufacturers wake up!
    2 points
  6. The human eye has about 120 million photoreceptors that can detect light and dark, and around 6-7 million that provide color - all in roughly the space of a Micro 4/3 image circle. The human eye can also adapt to detect a luma range of around 46 f-stops. So between pixel and sensor sizes, image processing chips, and sensor read-out speeds, cameras have plenty of room to get better even at Micro 4/3 sizes (in my opinion).
    2 points
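Some quick back-of-the-envelope arithmetic on those figures (the 17.3 x 13 mm sensor size is the standard Micro 4/3 format; the receptor counts and the 46-stop range come from the post):

```python
# Back-of-the-envelope check of the eye-vs-sensor figures above.
# Assumes the standard 17.3 x 13 mm Micro 4/3 sensor size.
receptors = 120e6 + 6.5e6          # rods + cones, per the post
area_mm2 = 17.3 * 13               # ~225 mm^2 Micro 4/3 sensor area
pitch_um = (area_mm2 / receptors) ** 0.5 * 1000
print(f"equivalent receptor pitch: ~{pitch_um:.2f} um")  # ~1.33 um

stops = 46                         # adaptive luma range, per the post
print(f"contrast ratio: ~{2**stops:.1e} : 1")            # ~7.0e13 : 1
```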
  7. Hello everybody, I just built an online web app to test all the LUTs from my pack on JPG/PNG images. You can test the beta version here: http://luts.iwltbap.com/previewer Usage is pretty simple: you just have to drop/load a JPG/PNG frame onto the main container and click a LUT reference in the left sidebar. In the top bar you have an ON/OFF button to enable or disable the LUT effect instantly, and a slider to change the LUT intensity. You can also export the current frame as a PNG by clicking the SAVE button. The LUTs are applied at low resolution to increase speed (the size of this web app is around 10MB, while my pack of LUTs is around 1.20GB), so pixelation and banding will appear in the previewer. More details are in the info box on the previewer home page. It's still in beta, so all feedback and bug reports are welcome. Benjamin
    2 points
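For the curious, the core operation behind a previewer like this is 3D-LUT interpolation. Below is a rough sketch of that operation in Python; it is not Benjamin's actual code, the file names are placeholders, and it assumes numpy, scipy, and Pillow:

```python
# Rough sketch: parse a .cube file and apply it to an RGB image with
# trilinear interpolation. File names are placeholders.
import numpy as np
from PIL import Image
from scipy.interpolate import RegularGridInterpolator

def load_cube(path):
    """Parse a .cube file into an (N, N, N, 3) table (red varies fastest)."""
    size, rows = 0, []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts or parts[0].startswith("#"):
                continue
            if parts[0] == "LUT_3D_SIZE":
                size = int(parts[1])
            elif parts[0][0] in "0123456789.-":
                rows.append([float(v) for v in parts[:3]])
    return np.asarray(rows).reshape(size, size, size, 3)  # indexed [b, g, r]

def apply_lut(img, table, strength=1.0):
    """Trilinearly interpolate the LUT; strength blends with the original."""
    size = table.shape[0]
    grid = np.linspace(0.0, 1.0, size)
    rgb = np.asarray(img, dtype=np.float64) / 255.0
    coords = rgb[..., ::-1].reshape(-1, 3)  # table is indexed (b, g, r)
    graded = np.stack(
        [RegularGridInterpolator((grid, grid, grid), table[..., c])(coords)
         for c in range(3)], axis=-1).reshape(rgb.shape)
    out = (1.0 - strength) * rgb + strength * graded
    return Image.fromarray((np.clip(out, 0.0, 1.0) * 255).astype(np.uint8))

frame = Image.open("frame.png").convert("RGB")        # placeholder frame
preview = apply_lut(frame, load_cube("some_look.cube"), strength=0.8)
preview.save("preview.png")
```

Applying the LUT at reduced resolution, as the app does, is just a matter of downscaling the frame before calling apply_lut.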
  8. I’ve been struggling this year to understand color. I come from a strictly digital upbringing. I have shot some film, but I never understood color timing or even learned how to look critically at color – at skintones, at greens, blues, and reds. And I wondered why I would spend so much time trying to understand it, when I could be spending that time practicing lighting, finding angles, and watching inspiring films.

But now I kind of get it. Since the 90’s, one shoots film, then digitizes it (the Digital Intermediate), then colors it in the Cineon colorspace (which is flat like log, very similar to the flatness of Arri Log-C), and then sends it off to a film print. Right now it’s Kodak 5283 or 5293, which respond to blacks, dynamic range, all of that, differently. So you would choose a stock to shoot on, digitize it, color it, then send it off to a film print. So colorists have been involved digitally since the 90’s on all major motion pictures shot on film. They would have LUTs – look up tables – to emulate the film prints, so they could see what it would look like before they printed it. And also to keep it consistent when it went off to DVD or VOD or Blu-ray – so the film looks the same everywhere, with tweaking.

So I started to use FilmConvert to grade my footage, which I would shoot as flat as I could with my cameras. First it was the Sony F3, and now mostly the Sony F35 and Red One, and sometimes the Alexa. FilmConvert keeps it in the REC 709 colorspace, which is the standard for TVs and online. FilmConvert was great, but the curves were aggressive, and I found that using Alexa DCI-P3 worked best with S-Log1, not just S-Log2. Anyway, I found out later how to lessen the film curve, raise highlights, etc.

Then VisionColor came along with their LUTs, which got highly popular – especially the M31 LUT. I liked it, but it was a little too stylized. This is when I started to learn Resolve. Before, I just used Final Cut Pro 7 and the three-way color corrector. I didn’t get Resolve for a long time, but I read a tutorial by Hunter Hampton and started to understand it. I also started to mess with DSLRs more fully and learned how to work in RAW there – all of which helped with shooting on Red and understanding its own RAW, and why to use Redlog over their Redgammas: to have the most info to play with right off the bat. And I learned that RedCine-X was just a simple program that existed before Resolve was made free; it was good, but Resolve is a lot more customizable.

Then the VisionColor Impulz LUTs came out, and Hunter said they were as close to film as you could get. I messed with them and didn’t understand them. But now I do. You take your footage and, if you are working quickly, add an FC (Film Contrast) LUT, or a VS (VisionSpace) LUT that has less contrast, or you can mess with an FPE (Film Print Emulation) LUT – which I think brings down the highlights too much. I learned to go from S-Log to Cineon, then start coloring with the film stock I like (I like 250D, 200T, and 500T a lot – 50D is a little too wild in the blues), and then once that is good, go off to a film print – 2383, 2393, or their custom Film Contrast 1, or VisionColor, or even a Fuji print. And that gets you a good look.

Anyway, why is this important? Because now I can shoot on a log or raw camera, build a LUT, and make sure that the LUT looks good and that the final image is going to look good. No more guessing with shooting flat and finding out later that there is too much noise in the shadows, or that the skintone – how it reacts to light and color – looks odd and off.

In essence, it’s what you see is what you get, which is in some ways an improvement over film. You know for sure, pretty much, whether the image is good or not. With film, the mystery and magic of course are there, and it looks the best – how it handles skin and highlights and light, those dancing frames, how unpredictable it is – but hey, I can’t afford to shoot film. No matter what people say, shooting digital is cheaper and you have less chance of screwing it up, especially if you own the gear already and shoot ProRes 4444 vs. some insanely uncompressed codec.

The world of digital doesn’t usually look as nice as film. But sometimes, if you have the right colorist, it can. Watching the trailer for San Andreas, I thought it was shot on film. Same with The Age of Adaline. Adding power windows, using lenses that have pop and circular rendering of faces, using cameras that have nice motion – it gets you there, kind of close. It’s not always perfect, but I still think a DP and a colorist can make a film shot digitally look better than a film shot and colored poorly on film. I still wish for more and more advances in digital camera technology that can capture the randomness of film – the smooth highlight roll-off, the sharpness and how it renders faces – and I don’t want companies to give up and call it a day. But hey, I’m not an engineer. I’m just a guy trying to figure things out. Here’s a clip of how they digitally graded “O Brother, Where Art Thou?”: https://www.youtube.com/watch?v=pla_pd1uatg
    1 point
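The node chain described in the post above (camera log → Cineon-style log → film stock → print stock) is, at bottom, just function composition. Here is a schematic sketch; every transform is a trivial placeholder standing in for a real LUT, and none of it is the actual VisionColor/Impulz math:

```python
# Schematic of the grading chain described above: each node is a
# color transform applied in order. The transforms are trivial
# placeholders, not the real VisionColor/Impulz math.
import numpy as np

def to_working_log(rgb):      # e.g. camera log -> Cineon-style log
    return np.clip(rgb, 0, 1) ** 0.9          # placeholder curve

def film_stock(rgb):          # e.g. a 250D negative emulation LUT
    return np.clip(rgb * 1.05 - 0.02, 0, 1)   # placeholder

def print_stock(rgb):         # e.g. a 2383 print emulation LUT
    return np.clip(rgb, 0, 1) ** 1.1          # placeholder

def grade(rgb):
    # Order matters: log conversion, then negative, then print.
    return print_stock(film_stock(to_working_log(rgb)))

frame = np.random.rand(4, 4, 3)               # stand-in for real footage
graded = grade(frame)
```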
  9. I've been shooting for a living for over 2 decades, and one of the definitive signs that someone is an amateur is any statement like the above. That's some massive insecurity you've got going on, man. Professionalism isn't about the gear.
    1 point
  10. +1 for shooting a grid, then correcting from that result. You will probably have to adjust the correction for different taking lenses. Nuke would be the most capable tool for a correction like this, but After Effects would work too, using the Optics Compensation tool (built into AE) and/or some custom warp tweaks.
    1 point
  11. Just a bump for this topic to remind you of this great deal :-) Cashback offer ends today. It's a UK promotion but you can apply for the cashback EU-wide. Very much enjoying my new G7.
    1 point
  12. Those Nikon lenses sound awesome! You will need to post some footage/stills with them. Sounds like a really winning combo!
    1 point
  13. I would expect to see some degradation due to the filter stack. However, the 28/1.4 Nikkor is reasonably telecentric AFAIK, so it should have less of the filter-induced astigmatism that plagues other designs. Probably most of what you are seeing is due to the D800 having vastly cleaner and more detailed images than 35mm film, so it's just showing faults in the lens that were always there.
    1 point
  14. I did. Very minimal grading to be honest. I used the Cineon Film Contrast 1 LUT, and added a bit of saturation in the blues.
    1 point
  15. Yeah, the problem is not the desqueeze itself - it's the lens distortion of the anamorphic. It's quite a pronounced spherical distortion at the edges, so they become 'compressed' more, if that makes sense. It depends on what software you are using, but I've had good results (not perfect, but close) in Nuke, using a checkerboard grid that I filmed as a calibration tool and then re-straightening it...
    1 point
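The shoot-a-grid-then-correct workflow described in posts 10 and 15 above maps directly onto standard camera-calibration tooling. Here is a rough sketch using OpenCV; the file names and the 9x6 pattern size are placeholders, and a real anamorphic pipeline would still handle the desqueeze separately:

```python
# Sketch of the shoot-a-grid-then-correct workflow using OpenCV.
# File names and the 9x6 checkerboard pattern size are placeholders.
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the filmed checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

img = cv2.imread("grid_frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, pattern)

if found:
    # Solve for the lens distortion from the calibration frame
    # (more frames give a more stable solution).
    ret, mtx, dist, _, _ = cv2.calibrateCamera(
        [objp], [corners], gray.shape[::-1], None, None)
    # Re-straighten any frame shot with the same lens setup.
    shot = cv2.imread("shot_frame.png")
    cv2.imwrite("shot_undistorted.png", cv2.undistort(shot, mtx, dist))
```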
  16. A beautiful video for a beautiful song! I love the look, very Arri-like in my opinion. How did you color grade it?
    1 point
  17. Great video. Thanks for sharing. That song is awesome - it's been going through my head all day!
    1 point
  18. This is a false generalization, most likely based on weaknesses inherent to teleconverters, which magnify aberrations. A well-designed focal reducer, on the other hand, will *shrink* the aberrations, as mentioned above by Araucaria. And with a little know-how you can do even better and design a focal reducer that compensates for some of the aberrations in the master lens. Here's a recent white paper I wrote proving that a Metabones Speed Booster significantly increases the MTF of various lenses, including the extremely challenging case of a Zeiss Otus: http://www.metabones.com/assets/a/stories/The Perfect Focal Reducer (Metabones Speed Booster ULTRA for M43) - Whitepaper.pdf Disclaimer: I develop the optics used in Metabones Speed Boosters.
    1 point
  19. It's an embarrassment of riches. And it's great. Even now I contend that average consumer IQ is so advanced that it'll allow a filmmaker to create great-looking cinema. 5-10 years from now? Whoa. It's fun to watch it get better and cheaper. If you can't manage to create effectively with this stuff, then you're not doing it right and/or paying attention to the wrong details.
    1 point
  20. Not with H.265. PS: if Panasonic had frozen sensor R&D since 2011, that would explain why Sony made the GH3 sensor - but then who made the GH4 sensor, and what about the new Varicam S35?!
    1 point
  21. Axel, with LOG footage the grade has much more impact on colour than the camera does. Most grades are weird-looking and prioritise dynamic range; it takes real artistic skill to do a good one. So judge the grade, not the camera. The skin bug is interesting... does it only affect HDMI?
    1 point
  22. Optics is mostly simple, with some occasional complexity thrown in to make it interesting. In fact, I've almost completely forgotten all the fancy math I learned after high school, since by and large all the math you need to know to be a successful lens designer is geometry, trigonometry, and a bit of algebra.

In your case, assuming that the anamorphic portion is working at infinity (i.e., parallel light in and parallel light out), the aperture of the optical system is determined by the diameter of the iris diaphragm in the taking lens *unless* there is some other limiting aperture in the system. Imagine that you take a pin and poke a tiny hole in a large piece of aluminum foil. Next, open the f/1.2 taking lens wide open and place the aluminum foil in front of your Rectilux so that the pinhole is centered on the optical axis. Clearly, in this case the f/# of your lens system is determined by the diameter of the pinhole and not by the diameter of the taking lens' iris diaphragm. So we would say that the pinhole is the limiting aperture in your lens system.

In your case the clear aperture of the back of the anamorphic section is 43mm in diameter (assuming it is round and not rectangular). This means that the collimated on-axis beam of light exiting your anamorphic section cannot exceed 43mm in diameter, and may be less if any of the other optical surfaces in the Rectilux or your anamorphic group are limiting apertures. The entrance pupil diameter of your 85mm f/1.2 taking lens (most likely 1/3 stop faster than f/1.4, or f/1.2599 in reality) is 85/1.2599 = 67.5mm. Since 67.5mm is bigger than 43mm, you are underfilling the entrance pupil of the taking lens. As a consequence, you could stop down the taking lens until its entrance pupil is reduced to 43mm and have no impact on the actual f/# of the system.

In your case, the actual maximum f/# would be f/(85/43) = f/1.98 ~ f/2, and not f/1.2. Again, this assumes that the limiting aperture of the system is the rear aperture of the anamorph, and not some other surface in the Rectilux or the anamorph. If either of the latter is true then your true f/# would be slower than f/2.
    1 point
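The arithmetic in the post above generalizes to any limiting-aperture setup. A small sketch using the same numbers (the function names are ours, for illustration):

```python
# Sketch of the limiting-aperture arithmetic above, using the same
# 85mm f/1.2 + 43mm rear-aperture numbers from the post.
def entrance_pupil_mm(focal_mm, f_number):
    """Entrance pupil diameter = focal length / f-number."""
    return focal_mm / f_number

def effective_f_number(focal_mm, f_number, limiting_aperture_mm):
    """If an upstream aperture is smaller than the entrance pupil,
    it becomes the limiting aperture and sets the true f-number."""
    pupil = entrance_pupil_mm(focal_mm, f_number)
    return focal_mm / min(pupil, limiting_aperture_mm)

# 85mm marked f/1.2 (really ~f/1.26), behind a 43mm clear aperture:
print(entrance_pupil_mm(85, 1.2599))       # ~67.5mm
print(effective_f_number(85, 1.2599, 43))  # ~1.98, i.e. about f/2
```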