Everything posted by jcs
-
Again, a 3D LUT (RGB cube) can map many colors to a reduced range of colors. It can happen internally in the cube or as color elements are clipped at the cube boundaries. If we have 10 numbers that are all the same, say a bunch of 5's, the average value is 5 and the variance is 0. If we lose '5', all the 5's are gone and can't be recovered (a similar effect happens with banding). If we add noise/grain to those 5's, the average is still 5, however the variance is now above zero. So even if quantization removes the value 5, we still get the original average of 5, thanks to the variance provided by the noise. So if we've got suitable variance, we can quantize, crush, compress, and/or 'implode' colors and have little or no banding. The noise doesn't have to be excessive- just enough so that the average original colors can survive color compression/quantization. It's also possible to add noise (dither) intelligently based on exactly where the compression is the highest (this requires custom code: could be part of a plugin). These concepts come from probability and digital signal processing. Not a right or wrong technique- it's math that can be used as a tool when it helps with the creative work at hand. I would expect a LUT called "color implosion" to heavily compress colors. One way such a LUT could crash code is if the cube is so warped that it exposes a bug in the interpolation or clipping code (divide by zero or sqrt of a negative value are possible culprits).
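A minimal numeric sketch of the "bunch of 5's" idea in Python/NumPy (not from the original post- just the standard dither argument): quantize a flat signal and the value is gone; add noise first and the average survives.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.full(100_000, 5.0)            # many identical samples: mean 5, variance 0

def quantize(x, step=2.0):
    """Coarse quantizer: rounds to multiples of `step`, so the value 5 can't survive."""
    return np.round(x / step) * step

hard     = quantize(signal)                                        # every 5 maps to the same wrong level
dithered = quantize(signal + rng.normal(0.0, 1.0, signal.size))    # noise first, then quantize

print(hard.mean())      # 4.0 -- the original level is lost
print(dithered.mean())  # ~5.0 -- the average survives quantization
```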
-
Very cool. It's been possible to create 3D LUTs by modifying 2D textures (for Resolve etc.), though it's not possible to get everything ACR can do WRT color into a 3D LUT (local pixel processing with neighbors, etc.). It's not likely ACR will be part of Premiere Pro in its current form. Perhaps a subset that can be GPU accelerated might make it to PPro. ACR was designed originally for stills- it's really too slow for an NLE. If you need ACR-level control, Adobe wants you to go to After Effects. Masking & tracking are indeed huge! Looking forward to giving it a try (hopefully it has qualifiers too, as in Resolve).
-
A 3D LUT can cause many shades of colors to map to a reduced set of shades as the color space gets compressed. Even when the processing is 10 or more bits (most NLEs support 32-bit float), after the 3D LUT transformation some colors will effectively be quantized to the same or similar colors, causing banding. Any time we deal with quantization issues, we need to increase the effective variance or range of colors so that after color compression (with a 3D LUT, even 1D LUTs and other transforms), the extra range provided by the noise/grain still provides, on average, enough variance that banding is reduced or eliminated. That's why it's important to add noise/grain before color compression or quantization, otherwise the original 'signal' (the colors) is lost. Noise/grain can still help when added later, but it's less effective. In some cases it might make sense to do a light NR as a final step if lots of noise/grain was needed to fix banding issues. Neat Video is amazing, but slow. There is NR that runs in real time and is almost as good as Neat Video, such as vReveal (http://www.vreveal.com/, now discontinued. Note their tech is used for Law Enforcement/Military applications). Resolve (not lite) has real-time NR, though last I read it isn't nearly as good as Neat Video (vReveal was pretty good).
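A rough Python/NumPy sketch of the before-vs-after point, using my own toy numbers (an arbitrary 0.15x "crush" standing in for a compressive LUT): a subtle gradient crushed and quantized to 8-bit collapses into a single code value, dither added before the crush keeps the ramp recoverable as a local average, and noise added after recovers nothing.

```python
import numpy as np

rng = np.random.default_rng(1)
ramp = np.linspace(0.500, 0.505, 4096)        # subtle gradient, e.g. a sky or a wall

def crush(x):
    # stand-in for a heavily compressive LUT/grade, followed by 8-bit quantization
    return np.round(np.clip(x * 0.15, 0.0, 1.0) * 255.0)

plain    = crush(ramp)
dithered = crush(ramp + rng.normal(0.0, 0.02, ramp.size))   # grain before: ~1 output code step of dither
late     = plain + rng.normal(0.0, 0.75, plain.size)        # grain after: the ramp is already gone

print(np.unique(plain))                        # a single code value: the gradient became one band
print(dithered.reshape(-1, 256).mean(axis=1))  # windowed averages still rise with the original ramp
print(late.reshape(-1, 256).mean(axis=1))      # flat (just noisy): nothing left to recover
```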
-
Noise/grain can help reduce banding. It's best applied before the operation which causes banding.
-
@jonpais- perhaps try adding noise before the LUTs.
-
See my post earlier in this thread. It looks like S35 mode isn't as good as FF on the A7S.
-
A more realistic impression of the Sony A7S low light performance at ISO 12,800
jcs replied to Andrew Reid's topic in Cameras
Reading about lab tests (or doing them yourself) provides useful insight (especially metamerism and CRI), however actual real-world testing is needed to understand how the devices really perform. For example, showing clients and talent skintones shot on the Sony FS700 compared to the Canon 5D Mark 3, most of them much prefer the Canon. I can spend a lot of time trying to make the FS700 look like the Canon, and while I can get very close, many times it's not worth the effort. These are not just my findings- many others have the same experience, and that's why the C100/C300 cameras sell so well and are used so much even though their specs were dated even at launch. 5D3 RAW looks so good folks call it the baby Alexa. The FS700 rocks at slomo (for the price).
-
When a focal reducer on a Super-35 sensor matches the FOV of the same lens on full frame, the effective focal length and FOV are identical, so DOF and bokeh are too. The Speed Booster is actually about a 1.1x crop, however the bokeh is close enough to call the same:
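The arithmetic behind that claim, as a quick Python sketch- the lens/aperture numbers are just examples, and the 0.71x/1.5x figures are the usual Speed Booster and APS-C/Super-35 values, not anything quoted in the post:

```python
focal_ff    = 50.0    # mm, a full-frame lens (example value)
aperture_ff = 2.0     # f-stop on full frame (example value)
reducer     = 0.71    # typical focal reducer magnification
crop        = 1.5     # APS-C/Super-35 crop factor vs. full frame

effective_focal = focal_ff * reducer       # 35.5 mm behind the reducer
ff_equiv_focal  = effective_focal * crop   # ~53 mm full-frame-equivalent FOV (a ~1.07x net crop)
effective_stop  = aperture_ff * reducer    # ~f/1.4 for light gathering
ff_equiv_dof    = effective_stop * crop    # ~f/2.1 full-frame-equivalent DOF -- nearly the same bokeh

print(effective_focal, ff_equiv_focal, effective_stop, ff_equiv_dof)
```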
-
A more realistic impression of the Sony A7S low light performance at ISO 12,800
jcs replied to Andrew Reid's topic in Cameras
Color science as in the reason the ARRI Alexa and Canon 5D3 (with 14-bit RAW: "Baby Alexa") are so popular: most folks find people and skintones look better with those systems (vs. RED (the new Dragon is changing this), Panasonic (the new GH4 is not bad), Sony (especially bad skintones and/or hard-to-work-with color in many cases; the A7S looks better so far), etc.). A low metamerism score means about the same as a low CRI score: not much with real-world images. From: http://***URL removed***/forums/post/53084809: Regarding a "trick" feature- while I don't have an A7S to test yet, from example videos and folks' comments it sounds like there is more effective NR happening in video than with stills. The GH4 does temporal NR (as does Neat Video), and I would expect Sony to be doing the same. Since the low-ISO DR isn't even as high as the D800's (and certainly not Sony's claimed 15.3, at least for the DXO test), it appears the higher-ISO extended DR is due to image processing (including powerful new NR) and not necessarily the sensor (depends on how and if analog gain affects sensor performance at higher ISOs). This could be a reason battery life isn't reported as being very long.
-
While it will technically "work", apparently (on a pre-production camera) the scaler isn't that great: aliasing, moire, and reduced resolution in Super-35 mode http://www.dvxuser.com/V6/showthread.php?324956-FS700-a7S-and-a6000/page2:
-
A more realistic impression of the Sony A7S low light performance at ISO 12,800
jcs replied to Andrew Reid's topic in Cameras
For stills, the 5D Mark III might be better due to color science (and for sure higher resolution) even though the A7S has more DR. For video, the A7S is looking to be a worthy replacement for the 5D3, including RAW. To clarify, 14-bit RAW from the 5D3 will likely provide a better color image, however when factoring in recording time, disk space, in-camera low-light NR, and processing time, 50Mbit/s XAVC-S will be more detailed and cleaner than 5D3 RAW with a lot less work and disk usage. The A7 & A7R are clearly better for stills vs. the A7S. It's looking like the A7S sensor isn't the real magic*: it appears to be the trick NR processor for video. If reports regarding lower-than-normal battery performance are accurate for the A7S, this could be from the hardware NR processing. This could also explain why they couldn't do 4K internally: they're at their thermal and CPU limit just doing 4K => 1080p + NR + H.264. * If the sensor were truly lower noise from (gapless) bigger pixels, still images should be cleaner on the A7S vs. the A7 & A7R. However, downsampling higher resolution to lower resolution also reduces noise (requires more computing power- an issue with video).
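A quick Python/NumPy sanity check of that last footnote, using purely synthetic Gaussian noise (not A7S data): box-averaging 2x2 blocks during a 4K-to-1080p downscale halves the noise.

```python
import numpy as np

rng = np.random.default_rng(2)
frame_4k = rng.normal(0.5, 0.05, (2160, 3840))      # flat gray frame plus noise, std 0.05

# naive 2x2 box-average downscale from UHD to 1080p
frame_1080 = frame_4k.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

print(frame_4k.std())    # ~0.050
print(frame_1080.std())  # ~0.025: averaging 4 pixels cuts noise by sqrt(4) = 2x
```
-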
The A7S is really a 1080p camera: 4K requires an external recorder. It is not using pixel binning or line skipping for 4K => 1080p: it has a decent scaler (perhaps something better than bilinear, possibly cubic or Lanczos, etc.). Thus, when shooting at 1080p, there shouldn't be a significant image-quality difference between scaling 4K to 1080p from the full sensor or from a Super-35 crop. While there will be slightly less oversampling for the Super-35 crop, there's still plenty of information for a solid 1080p image- not likely to be much difference even when shooting a resolution chart. Externally recorded 4K will be slightly softer for Super-35 scaled up to FF, however since it appears to be a decent scaler, it should still look pretty good. Resolution should be similar between full frame and Super-35 crop for 1080p. A 500+HP turbo Hayabusa motorcycle engine coupled to a reduction gear would be bad-ass in a Range Rover and would sound awesome too:
-
Fred & Sharon created that video to promote their video production services. They were serious- that's why it's so... what it is (no spoilers here). The video has 1.6 million views. Read the comments on the video for more entertainment.
-
A must watch for all students of film. (Not shot on a GH4 or A7S)
-
While the A7S appears to be amazing in low light, an extra stop of light means less noise and even more low-light capability. Regarding turbos: car guys (& gals) can never have enough horsepower. It's like having a big V8 with a blower or twin turbos. Why add nitrous? Because it makes the car go faster :)
-
Since the A7S provides a Super-35 crop mode, it would appear the NEX to EF SpeedBooster would work with the A7S, providing another stop of light. Image quality for the crop mode should be decent as Sony appears to have a high-quality scaler.
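Quick arithmetic on the "another stop of light" point (standard focal reducer math, assuming the usual 0.71x magnification- not a figure from the post): the reducer squeezes the lens's light onto a smaller image circle, so intensity goes up by roughly 1/0.71².

```python
import math

reducer = 0.71              # assumed focal reducer magnification
gain = 1.0 / reducer**2     # increase in light intensity on the sensor

print(gain)                 # ~1.98x
print(math.log2(gain))      # ~0.99, i.e. about one extra stop
```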
-
A more realistic impression of the Sony A7S low light performance at ISO 12,800
jcs replied to Andrew Reid's topic in Cameras
Here is what Adam said: He states that 11 are 'required for a paycheck' and anything beyond is 'gravy'. He counts 14 stops, then counts 13 with different gammas. Art Adams did his own analysis- 13-14 stops with 'perhaps 12 usable': http://provideocoalition.com/aadams/story/cameras_sony_fs700_dynamic_range_presentation/P2 This kind of discussion is similar to discussing resolution charts: some people call the end of resolution at the first artifacts, some at the extinction of detail, others somewhere in between. In any case, they both stated 14 stops is what they can see; how much is usable is up for debate. If they couldn't see 14 stops, there would be no point making those statements. Either way, the FS700 has more DR than the 5D3 or GH4. The point was that DR is less important than color science.
-
A more realistic impression of the Sony A7S low light performance at ISO 12,800
jcs replied to Andrew Reid's topic in Cameras
Adam Wilt reported 14 stops for the FS700: http://provideocoalition.com/awilt/story/review_sony_nex-fs700_super35_lss_avchd_camcorder/P3
-
Pretty amazing for removing cell phone rings, sirens, fan noise, etc.: http://nofilmschool.com/2013/06/adobe-audition-cc-sound-remover-effect-fix-audio/
-
The GH4 supports 4K 422 10-bit over the micro-HDMI port (YAGH not needed): http://***URL removed***/forums/post/53343518 The Atomos Shogun will record it (approx. 3 months from now).
-
I haven't tested 1080p with 10-bit 422 enabled, however 4K 10-bit 422 scaled to 1080p by the GH4's HDMI scaler looks great.
-
I examined the original GH4 422 10-bit clip in PPro with contrast at 200 and brightness pulled down: the hair is perfectly fine- no sampling issues, about as perfect as is possible. I don't see an issue with your clip either- it looks normal once contrast is cranked and effective anti-aliasing has been removed. The Alexa clip wasn't the same kind of image to compare against. I don't see anywhere that the Ninja Blade has a 4K to 2K scaler- where is this information published? The GH4 has a 4K to 2K scaler on the HDMI output (I tested it and it looks great).
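For anyone who wants to reproduce that kind of check outside PPro, here's a hedged sketch of the same idea in Python with Pillow/NumPy- the file name "frame.png" and the exact contrast/brightness values are placeholders, not anything from the post: crank contrast and pull brightness down on an exported frame so quantization, banding, or chroma-sampling problems become obvious.

```python
import numpy as np
from PIL import Image

frame = np.asarray(Image.open("frame.png"), dtype=np.float32) / 255.0   # an exported frame grab

contrast   = 2.0     # roughly "contrast at 200"
brightness = -0.25   # pull levels down so the stretched shadows stay visible

stretched = np.clip((frame - 0.5) * contrast + 0.5 + brightness, 0.0, 1.0)
Image.fromarray((stretched * 255.0).astype(np.uint8)).save("frame_inspect.png")
```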
-
My favorite film stock is Eastman EXR 100T 5248- if there were a tool or toolchain that could replicate that well, it would be worth it to me. 5248 is in AE via Color Finesse, but it didn't do much when I tested it. IIRC 5219 is included in ImpulZ, as are a few other interesting film stocks. Are they worth it? Yes, if the 3D LUT is done well so that nothing weird/strange happens. I wrote some software tools a few years ago to create 3D LUTs and found it was super important to be able to visualize what was going on with the 3D cube; otherwise getting good general results was tricky. ImpulZ has a few teal & orange 3D LUTs, which are popular and might take a while to set up from scratch. Adobe CC has SpeedGrade, which allows creating 3D LUTs for the Lumetri effect in Premiere: it might be worth trying to create 3D LUTs from scratch first, then if that's too time consuming consider purchasing 3D LUTs (ImpulZ, OSIRIS, Filmconvert, etc.). http://nofilmschool.com/2013/06/lumetri-create-custom-looks-in-premiere-speedgrade/ 3D LUTs differ from 1D LUTs in that you can map one color to another in a smooth way (and map many colors to other colors). A 1D LUT typically only adjusts brightness/luma (though three 1D LUTs can control R, G, and B individually, color remapping still isn't possible without a 3D LUT).
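A toy Python/NumPy illustration of that 1D-vs-3D distinction (my own example, using nearest-neighbor lookup for brevity- real LUT code interpolates, e.g. trilinearly): the 1D curve brightens every channel the same way, while the 3D cube can grab just the reddish region of color space and push it toward orange.

```python
import numpy as np

pixels = np.array([[0.9, 0.2, 0.2],        # a reddish pixel
                   [0.2, 0.2, 0.9]])       # a bluish pixel

# 1D LUT: one curve per channel (a simple gamma lift) -- hue relationships are preserved
curve = np.linspace(0.0, 1.0, 256) ** 0.8
lut1d = curve[(pixels * 255).astype(int)]

# 3D LUT: a 17x17x17 RGB cube, built as identity and then warped so that the
# "red corner" gets pulled toward orange -- impossible with per-channel curves alone
N = 17
grid = np.linspace(0.0, 1.0, N)
r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
cube = np.stack([r, g, b], axis=-1)                    # identity cube, shape (17, 17, 17, 3)
reds = (r > 0.6) & (g < 0.4) & (b < 0.4)
cube[reds] = cube[reds] * [1.0, 1.3, 0.6]              # lift green, cut blue in that region only

idx = np.clip((pixels * (N - 1)).round().astype(int), 0, N - 1)
lut3d = cube[idx[:, 0], idx[:, 1], idx[:, 2]]

print(lut1d)   # both pixels brightened, hues unchanged
print(lut3d)   # the reddish pixel shifts toward orange; the bluish one stays put (up to grid snapping)
```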
-
A more realistic impression of the Sony A7S low light performance at ISO 12,800
jcs replied to Andrew Reid's topic in Cameras
My FS700 has 14 stops of DR, however clients love the 5D3 RAW image and like the GH4 better as well.