jcs Posted August 9, 2017

Sometimes we need to enhance the detail of a shot: a very soft lens, slightly out of focus, slow motion, post cropping (for story/emotion or after stabilization), and so on. Most are familiar with the Sharpen effect and the Unsharp Mask effect. We can combine both, and also use unsharp masking to create a local contrast enhancement effect.

Canon 1DX II and Canon 50mm 1.4 at f/1.4, 1080p (Filmic Skin picture style):

Multi-spectral Detail Enhancement (let's call it MSDE, based on the physics of acutance):

- Fine noise grain: adds texture and increases the perception of detail (Noise effect: 2%, color, not clipped)
- High-frequency sharpening: in PP CC this is called Sharpen (as a standalone effect) or via Lumetri/Creative/Sharpen (as used here: 93.4)
- Mid-frequency sharpening: Unsharp Mask effect with an amount of 41 and a radius of 5
- Low-frequency sharpening (Local Contrast Enhancement, or LCE): Unsharp Mask effect with an amount of 50 and a radius of 300

While this may be a bit too sharp/detailed for some, it illustrates MSDE, and one can add detail to taste using this technique. Note we didn't use a contrast effect or curves to achieve this look. MSDE can also be used to improve HD-to-4K upscales: apply it after upscaling. It's also a great way to use Canon's soft-ish 1080p along with DPAF (since DPAF isn't currently available in any other cameras on the market). The GH5 is the new kid on the block with excellent detail; however, Canon still looks more filmic to me and has excellent AF.

Someday Adobe will GPU-accelerate their Unsharp Mask effect (it's a trivially easy effect to code, too!), so this can easily run in real time while editing.
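The recipe above can be sketched in code. Here's a minimal 1-D illustration in plain Python: a sketch of the general technique only, since the real effects operate on 2-D images, and the amounts/radii here are illustrative rather than Premiere's actual parameter scales.

```python
import random

def box_blur(signal, radius):
    """Simple 1-D box blur with edge replication (stands in for Gaussian blur)."""
    n = len(signal)
    out = []
    for i in range(n):
        window = [signal[min(max(j, 0), n - 1)]
                  for j in range(i - radius, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def unsharp_mask(signal, amount, radius):
    """original + amount * (original - blurred):
    boosts contrast at the spatial scale selected by the blur radius."""
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

def msde(signal, seed=0):
    """Multi-spectral detail enhancement, loosely following the recipe:
    fine grain first, then high-, mid-, and low-frequency (LCE) sharpening."""
    rng = random.Random(seed)
    grain = [s + rng.uniform(-0.02, 0.02) for s in signal]  # ~2% noise
    hi = unsharp_mask(grain, amount=0.9, radius=1)    # high-frequency sharpen
    mid = unsharp_mask(hi, amount=0.4, radius=5)      # mid-frequency
    return unsharp_mask(mid, amount=0.5, radius=30)   # low-frequency / LCE
```

Running `unsharp_mask` over a step edge shows the characteristic overshoot/undershoot (values pushed above 1 and below 0 around the edge), which is exactly the acutance boost being discussed.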
EthanAlexander Posted August 9, 2017

That's a pretty incredible result! Guessing this was inspired by the mention of Finisher in the other thread. A couple of questions:

- Do the radii change in proportion to resolution? I.e., with 4K, would Unsharp Mask be 20 for mid and 1200 for low?
- Do the standard sharpen effects in other programs (FCP, AE, etc.) work the same as Premiere's?
jcs Posted August 9, 2017

@EthanAlexander thanks, yeah, I've been mentioning LCE for a while; thought I'd expand the concept and show how it's done. I'd use the same settings for 4K and make minor tweaks as desired.

Sharpen is typically implemented as a convolution sharpen, where surrounding pixels are used to increase high-frequency detail (by enhancing differences). Unsharp masking works by subtracting a blurred copy (reducing low frequencies: a high-pass filter), and has a variety of uses depending on the radius. FCPX seems to have a hybrid sharpen; a plugin which does PP/AE-style Sharpen and Unsharp Mask is probably needed for FCPX.

AE CC test: same soft 1DX II 1080p file (no stabilization, so a wider shot), 4K comp. Resized to 2K before upload (tried uploading 4K, but it was resized to 2K by the website, so I uploaded a Photoshop-resized 4K-to-2K version instead).

AE settings:

- Noise: 2%, color, no clipping
- Sharpen: 50
- Unsharp Mask: amount 41, radius 5
- Unsharp Mask: amount 50, radius 300

AE's Sharpen appears to have a bug: edges are sharpened without repeat, leaving a thin line around the border. Use Transform/Scale to fix it (in this case, 200% became 201% for the 2K-to-4K scale).
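To make the Sharpen-vs-Unsharp-Mask distinction concrete, here's a minimal 1-D convolution sharpen in plain Python. This is a sketch of the general technique (a [-a, 1+2a, -a] kernel), not Adobe's actual implementation:

```python
def convolution_sharpen(signal, amount=1.0):
    """1-D analogue of a convolution sharpen with kernel [-a, 1+2a, -a].
    The kernel sums to 1, so flat areas are unchanged, while subtracting
    neighbours amplifies local differences (high frequencies)."""
    n = len(signal)
    out = []
    for i in range(n):
        left = signal[max(i - 1, 0)]        # replicate edges
        right = signal[min(i + 1, n - 1)]
        out.append((1 + 2 * amount) * signal[i] - amount * (left + right))
    return out
```

Unlike unsharp masking, there's no blur radius: the kernel only looks at immediate neighbours, which is why plain Sharpen acts on the highest frequencies only.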
Oliver Daniel Posted August 10, 2017

12 hours ago, jcs said: Sometimes we need to enhance the detail of a shot: a very soft lens, slightly out of focus, slow motion, post cropping (for story/emotion or after stabilization), and so on. ...

Cool. I'm in no way disregarding your results using a Canon DSLR; however, in my experience of using sharpening tools, I've found that Sony and Panasonic images respond far better. Any reason why this may be?

Also, I've only found Finisher and DaVinci Resolve's built-in sharpening tools to be any good, among native and third-party plugins. Some of them do something hideous to the image, like drawing black extruded lines around the edges of the subject. I've been using Finisher since 2012 and it literally transforms the perceived resolution of the image: soft detail appears pin sharp. Fantastic for slow-motion modes, where soft images are common. I tried Finisher on the 4K image from the A6500 for a laugh.
jonpais Posted August 10, 2017

$99/year? I'll try to focus more carefully...
BTM_Pix Posted August 10, 2017

Glad I got it as a one-off purchase when I did, then, as it looks like they've farmed it out into a bundle. Not cool.
EthanAlexander Posted August 10, 2017

I'm now wondering if this wasn't the upscaling algorithm that Yedlin was using in his comparison video. What do you think, @jcs?
jcs Posted August 10, 2017

9 hours ago, Oliver Daniel said: I'm in no way disregarding your results using a Canon DSLR however in my experience of using sharpening tools I've found that Sony and Panasonic images respond far better - any reason why this may be? ...

Regarding Sony & Panasonic vs. Canon for post-sharpening: Canon DSLRs are simply softer; the finest detail is gone, either from the OLPF or from sensor binning/processing. You can see this when studying resolution charts. Canon C-series cameras (except the 1DC) are all very sharp; however, they also alias like crazy on high-frequency content (fabric, brick, etc.). The 5D3 has practically no aliasing, but it's very soft. Part of the 5D3 H.264 softness comes from low-quality in-camera processing, as we can see much more detailed results when using ML RAW with post-de-Bayer and sharpening. Part of the Canon DSLR video softness may also be related to business reasons: protecting the C-line. As was noted in the Netflix 4K thread, only the 8K F65 produces true(-ish) 4K; everyone else is cheating (except perhaps Red 8K). This is easily observed by examining the test charts and aliasing. Cameras that are 'cheating' can appear sharp, but that's partially from aliasing (note the F65 is razor sharp and detailed with no visible aliasing!). Trying to sharpen soft, aliased footage in only the high frequencies can look ugly, as you've noticed.
So what I did with MSDE (not a standard term; I made it up) was sharpen the high frequencies only so much (else it gets ugly), then go to a lower frequency and sharpen more, then finally to an even lower frequency for the final sharpen. Here, "sharpen" means contrast enhancement: amplifying differences between pixels and groups of pixels. Sharpening in the normal use of the word is contrast enhancement at the highest frequencies only. Adding pixel-level noise first creates the highest-possible-frequency information for texture. I tried adding noise later and it didn't work as well: even more noise was required to see the effect. When we add noise we are increasing acutance (not real resolution); however, we are also reducing the signal-to-noise ratio, so we want to use as little as possible. To my eye, the results of this test look a lot like film vs. the typical Sony & Panasonic video look, don't you agree? I think one of the reasons film looks the way it does is the acutance resulting from the chemical process and zero aliasing, where grain provides texture, so it has that somewhat soft and detailed look at the same time.

MSDE is based on spatial-domain operations; it's also possible to perform detail enhancement in the frequency domain using the discrete cosine and wavelet transforms (DCT & DWT): https://link.springer.com/chapter/10.1007%2F978-3-642-01209-9_13. Not clear why this isn't used more; it could be patent related. New technologies based on feature extraction (generative processing) will be able to figure out generative structures and re-render them at any resolution. Genuine Fractals made progress in this area a few years ago: https://blog.codinghorror.com/better-image-resizing/. While the results are 'sharper', it wasn't good enough vs. bicubic, Lanczos, etc. The MSDE technique only needs noise, convolution sharpen (or perhaps Unsharp Mask with a radius around 0.5-1.0), and Unsharp Mask.
It will work in any NLE where these effects are available (native or via plugin), and it's free.
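The frequency-domain approach mentioned above boils down to: transform, scale the frequency bins you want to enhance, transform back. A toy 1-D sketch in plain Python using a naive DFT (the linked paper uses DCT/DWT, which work on the same principle; this is illustrative only, and real implementations use fast transforms):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real-valued list."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part (input was real)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def boost_high_frequencies(x, gain=1.5):
    """Frequency-domain detail enhancement: amplify the upper part of the
    spectrum, then transform back. Conjugate-symmetric bin pairs get the
    same real gain, so the output stays real."""
    X = dft(x)
    N = len(X)
    for k in range(N):
        if N // 4 <= k <= 3 * N // 4 and k != 0:
            X[k] *= gain
    return idft(X)
```

A flat (DC-only) signal passes through unchanged, since only the zero-frequency bin carries energy; detail (high-frequency energy) is what gets amplified.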
jcs Posted August 10, 2017

1 hour ago, EthanAlexander said: I'm now wondering if this wasn't the upscaling algorithm that Yedlin was using in his comparison video. What do you think, @jcs?

I wouldn't be surprised; upscaling, adding noise, and sharpening alone work pretty well. I'm pretty sure PP CC (and AE?) use bicubic scaling. Yedlin's tools (Nuke?) may also provide more advanced (and expensive) scalers such as lanczos-3, which, along with sharpening, performed best in this 2014 state-of-the-art study: https://hal.inria.fr/hal-01073920/document, surprisingly performing better than super-resolution (which creates real extra detail from aliasing information). I think Yedlin could have made a better point by shortening the videos dramatically: way too long and rambly, especially if the goal was to appeal to studio execs and producers with short attention spans. I jumped around his videos and didn't see the application of Local Contrast Enhancement or any mention of acutance (perhaps I missed it?), which should have been paramount in such a test. It really felt like an advert/defense of ARRI's low-res sensors (including the Alexa 65, which is still not capable of true 4K: only ~6.6K Bayer photosites, and you need 8K). Love ARRI's color, but the F65 wins for ultimate color and real 4K detail (see Lucy, Oblivion). From https://hal.inria.fr/hal-01073920/document:

Quote: In this study, we have evaluated the performance of several upscaling algorithms by upscaling the video sequences from 720p, 1080p to 4K resolution. The results indicated that in general conditions, the current state of the art upscaling algorithms could not achieve as good perceptual quality as the original UHD versions. In addition, the best upscaling algorithm may be not the state-of-the-art computationally expensive super resolution algorithms but the less costly one, for example, lanczos-3 eventually with added sharpening.
Due to the different viewing conditions on UHD and the corresponding viewing behavior of observers, improvement may be expected if the upscaling algorithm is particularly adapted to UHD.

This is my experience as well, so I fundamentally disagree with Yedlin. True 4K capture, with real detail, will always be better than upscaling and fancy tricks like MSDE. Will most people notice or care? That's a different argument.
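Lanczos-3 itself is straightforward to sketch: each output sample is a windowed-sinc weighted sum of nearby input samples. A minimal 1-D version in plain Python (illustrative only; real scalers work in 2-D and handle edges and normalization in various ways):

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos window: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def lanczos_upscale(signal, factor, a=3):
    """1-D lanczos-a upscale by an integer factor, with edge replication.
    Weights are normalized so flat areas stay flat."""
    n = len(signal)
    out = []
    for i in range(n * factor):
        src = i / factor                     # position in source coordinates
        lo = math.floor(src) - a + 1
        hi = math.floor(src) + a
        acc = wsum = 0.0
        for j in range(lo, hi + 1):
            w = lanczos_kernel(src - j, a)
            acc += w * signal[min(max(j, 0), n - 1)]
            wsum += w
        out.append(acc / wsum)
    return out
```

Output positions that land exactly on input samples reproduce them (the kernel is 1 at zero and vanishes at the other integers), which is one reason lanczos preserves detail better than a box or bilinear filter; the negative kernel lobes also add a mild built-in sharpening.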
EthanAlexander Posted August 10, 2017

36 minutes ago, jcs said: I think Yedlin could have made a better point by shortening the videos dramatically: way too long and rambly, especially if the goal was to appeal to studio execs and producers with short attention spans

He's a DP, not an editor... (I agree)

37 minutes ago, jcs said: Love ARRI's color, F65 wins for ultimate color and real 4K detail (see Lucy, Oblivion).

I personally prefer the Alexa 65 from everything I've seen, whether or not it's true 4K. Just preference. Don't get me wrong, though: I'm a huge Sony fan and have watched Oblivion several times on Blu-ray through a Sony BD player to a Sony 4K HDR TV, and the F65 is no doubt a FANTASTIC camera. (Supposedly Sony products use the same algorithm to upscale to 4K as they do to downscale the movie to HD BD. Can't remember where I saw that, but it was definitely an ad for the F65.)

43 minutes ago, jcs said: This is my experience as well, so I fundamentally disagree with Yedlin. True 4K capture, with real detail, will always be better than upscaling and fancy tricks like MSDE. Will most people notice or care? That's a different argument

I totally get where you're coming from, and I've learned a lot from what you post here on EOSHD just from exploring the idea of "true 4K." My only thing, and the reason I'm glad Yedlin made the video, is that I'd much rather consumers (professional, commercial, and hobbyists alike) be aware of how much goes into an image beyond resolution, and start demanding things like better compression and color space. This is especially true because at a certain point the captured pixels surpass the resolving power of all but the most detailed lenses.
tugela Posted August 10, 2017

12 hours ago, Oliver Daniel said: I'm in no way disregarding your results using a Canon DSLR however in my experience of using sharpening tools I've found that Sony and Panasonic images respond far better - any reason why this may be?

Probably because they have higher resolution to work from. You will always get better results with good native resolution rather than synthesized resolution. You can't enhance information that is not there to start with, so that will always be the limiting factor in terms of what you can do with the image.
jcs Posted August 10, 2017

1 hour ago, EthanAlexander said: He's a DP, not an editor... (I agree) ...

Overall, ARRI is my favorite cinema camera brand; however, I haven't seen anything from ARRI that blew me away like Lucy and Oblivion did with the F65. The F65 beats ARRI on test charts; I would love to see a people/skin-tone head-to-head test. Imagine 8K => 4K processing in a small camera with IBIS, 10-bit 4:2:2, and DPAF. That's all possible today, without a fan! Held back for business reasons. The only way to demand anything is to not give them money for crappy products.
tupp Posted August 11, 2017

After scanning this thread, this method reminds me of the wavelet decompose plug-ins for the GIMP, a functionality that has been in open-source software for years. Basically, these plug-ins separate detail "frequencies" into their own separate layers. I use wavelet decompose mostly for skin retouching, but some have been using it for sharpening for quite a while. One of the wavelet decompose plug-ins can separate an image into 100 different frequency layers, but I can't imagine why that many separate frequencies would ever be needed. I don't think that proper wavelet decompose functionality has yet appeared in proprietary imaging software. Often, advanced features such as this show up in Photoshop years after the GIMP, and these "new" features are usually much trumpeted by the Adobe crowd.
tomekk Posted August 11, 2017

Isn't wavelet decompose in GIMP called the frequency separation technique in Photoshop?
tupp Posted August 11, 2017

42 minutes ago, tomekk said: Isn't wavelet decompose in GIMP called frequency separation technique in Photoshop?

Yes. Essentially, frequency separation is wavelet decompression with just two layers: the residual layer and the high-frequency layer. However, in Photoshop it probably still has to be done manually (similar to the manual procedure given by the OP). Two-layer frequency separation sets up a little more quickly in the GIMP, due to the grain extract and grain merge features. Of course, it is even faster to get two-layer frequency separation in the GIMP with either of the wavelet decompression plug-ins, but setting it up manually probably gives one more control over the "frequency." I don't know if Photoshop currently has a wavelet decompression plug-in (it didn't have one four years ago). If it doesn't, manually making five wavelet scale layers plus a residual layer would probably be a long, arduous process in Photoshop.
jcs Posted August 12, 2017

@tomekk @tupp Frequency separation is very similar to what MSDE does. It uses a high-pass filter and Gaussian blur in the spatial domain: https://fstoppers.com/post-production/ultimate-guide-frequency-separation-technique-8699

Wavelets operate in the frequency domain (same as Fourier transforms (+DFT/DCT), with different pros/cons). For wavelet frequency filtering, an image is converted into a wavelet representation, desired frequencies are filtered, then the wavelet representation is converted back into an image. Note that no compression or decompression takes place (same with a DCT and inverse DCT, which can also be used for frequency filtering). The GIMP plugin performs a wavelet transform which allows frequencies to be decomposed (vs. decompressed) and filtered before converting back into an image. Yeah, it's puzzling that Photoshop doesn't provide frequency-based filtering options, and that no one's made plugins for Photoshop/Premiere/AE/FCPX/Resolve, etc. For retouching stills, I haven't used frequency separation since I got Portrait Pro: http://www.portraitprofessional.com/.
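The split-and-recombine idea behind frequency separation is easy to verify in code. A minimal 1-D sketch in plain Python, where a box blur stands in for the Gaussian blur (or for one of GIMP's wavelet scales):

```python
def box_blur(signal, radius):
    """Simple 1-D box blur with edge replication."""
    n = len(signal)
    out = []
    for i in range(n):
        window = [signal[min(max(j, 0), n - 1)]
                  for j in range(i - radius, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def frequency_separation(signal, radius):
    """Split a signal into a low-frequency (residual) layer and a
    high-frequency (detail) layer. Adding the two layers back together
    reconstructs the original exactly (the 'grain merge' step)."""
    low = box_blur(signal, radius)
    high = [o - l for o, l in zip(signal, low)]
    return low, high
```

Retouching or sharpening then means editing one layer (e.g. smoothing `low` for skin, or gaining `high` for detail) before recombining.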
jcs Posted August 14, 2017

@EthanAlexander (from PM): you can do unsharp masking in Resolve with two nodes: the first node "doubles" the image intensity and saturation, then the next node subtracts a Gaussian-blurred copy from the "doubled" image. Mathematically, it's 2*(original RGB pixels) - (blurred original RGB pixels). If you don't blur the 2nd copy, 2*original - original = original: you can use that to make sure the operation is set up correctly. Then start blurring the 2nd copy to see the results: a small blur is the traditional unsharp mask, and larger amounts perform LCE. I don't use Resolve very often (only to test it every now and then); here's what I came up with in a couple minutes of experimenting (maybe Resolve experts have a better way):

1. In Color, add a Serial Node. Set the Gain to 2.0.
2. Add a Layer Node to this Serial Node.
3. Right-click and set the Layer Mixer Composite Mode to Subtract. The output should appear normal.
4. Add a Box Blur to the node newly created with the Layer Node.
5. Set Iterations to 6, Border Type: Replicate, and turn up the strength to see how it works: you need a lot of blur for LCE.
6. Use Gaussian blur and less strength instead of box blur for traditional unsharp-mask sharpening.

That should get you started (fully GPU accelerated, too)!
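The math behind the two-node trick checks out in a few lines. A 1-D sketch in plain Python (box blur standing in for Resolve's blur node):

```python
def box_blur(signal, radius):
    """Simple 1-D box blur with edge replication."""
    n = len(signal)
    out = []
    for i in range(n):
        window = [signal[min(max(j, 0), n - 1)]
                  for j in range(i - radius, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def resolve_style_unsharp(signal, radius):
    """Mimic the two-node setup: gain the image by 2, then subtract a
    blurred copy. Algebraically 2*x - blur(x) = x + (x - blur(x)),
    i.e. a standard unsharp mask with amount 1."""
    blurred = box_blur(signal, radius)
    return [2 * s - b for s, b in zip(signal, blurred)]
```

With zero blur this returns the input unchanged (the 2*original - original = original sanity check from the post); with a large radius it produces the overshoot around edges that gives the LCE look.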
EthanAlexander Posted August 14, 2017

10 hours ago, jcs said: you can do Unsharp mask in Resolve with 2 nodes: the first node "doubles" the image intensity and saturation, then the next node subtracts a Gaussian blurred copy from the "doubled' image. ...

I'm getting to step 3, and the composite mode set to Subtract is giving me black. Do you see anything I've done wrong so far? Also, I've only ever done blur/sharpen by dragging the RGB sliders on radius. How do I differentiate between box and Gaussian? Also, thank you.
EthanAlexander Posted August 15, 2017

Ok, at least I figured out the box/Gaussian blur part. Still confused about the part I screen-grabbed for you.
jcs Posted August 15, 2017

54 minutes ago, EthanAlexander said: Ok, at least figured out the box/gaussian blur part. Still confused about the part I screen grabbed for you

Set Gain in the first node to 2.0; that will fix the black subtract issue.