Everything posted by jcs
-
Right, for $6K one could get the 4K URSA with a Super-35mm sensor, global shutter, pro audio, and 12 stops of DR: looks pretty good! (Video is from the 4K Production Camera- I'd expect the URSA to be the same or better.)
-
10-bit 4K external recording will be possible with the Shogun (September) or quad SDI recorders (possible now with YAGH: http://www.bhphotovideo.com/c/product/857190-REG/AJA_KI_PRO_QUAD_Ki_Pro_Quad.html).
-
A more realistic impression of the Sony A7S low light performance at ISO 12,800
jcs replied to Andrew Reid's topic in Cameras
5D3 H.264 needs decent sharpening in post: still not super detailed, but workable with sharp lenses. 5D3 RAW sharpened in Resolve or (even better) ACR is very solid for 1080p. A sharpened 5D3 RAW shot compares very well even to a 4K GH4 shot projected on a 20' 4K screen (saw this in person). 14-bit 5D3 RAW still looks great compared to the GH4 4K and A7S (examples so far): there's still no direct competition at that price point for 5D3 RAW color science and decent 1080p resolution. The workflow and disk-space requirements are another story (I'm shooting mostly with the GH4 now). -
A more realistic impression of the Sony A7S low light performance at ISO 12,800
jcs replied to Andrew Reid's topic in Cameras
I played with the 12MP JPEG in ACR- highlights are clipped and not recoverable- DR looks similar to the 5D3 in that shot. Noise cleaned up OK using ACR tools, however the image then looked over-processed (Neat Video would work better, along with temporal NR). I shot a clip with the GH4 at ISO 1600 and lit it with a single LED (iPhone 5S). After Neat Video NR and bringing up the levels in post, it looks pretty good compared to the A7S examples so far. I would expect the A7S to do better in any case; however, if Neat Video is needed in both cases and the final results look similar, the A7S's utility over the GH4 in low light is lessened. The FS700, especially with the SpeedBooster, is pretty good in low light all the way up to ISO 6400 (still needs NR in post at that level). So far most of the A7S demo shots have been locked off (tripod) or with very low camera motion; it would be helpful to see typical handheld shots (with and without a rig). While it would be nice to have a 5D3 replacement at 50Mbps (vs. 5D3 RAW at ~64GB per 12min), I almost pre-ordered the A7S but am holding off until it's released and actual end-user videos are posted. Also curious about the A7S XLR audio attachment (release date and cost). -
Nice post Yojimbo! jonpais- LUTs are pretty small and, unless there's a bug, shouldn't use up much RAM. If implemented on the GPU, a LUT (including 3D) should be very fast. If running on the CPU, a 3D LUT will be a lot slower (due to trilinear or tricubic interpolation). A decent graphics card / GPU is helpful.
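To make the CPU cost concrete, here's a minimal sketch of a trilinear 3D LUT apply in Python/NumPy- assuming an (n, n, n, 3) LUT array indexed [r, g, b], which is an illustration, not any particular NLE's implementation. Eight lattice fetches and seven lerps per pixel is what a GPU's 3D texture unit does in hardware, which is why the GPU path wins so easily:

```python
import numpy as np

def apply_lut_trilinear(rgb, lut):
    """rgb: (..., 3) floats in [0, 1]; lut: (n, n, n, 3) indexed [r, g, b]."""
    n = lut.shape[0]
    pos = np.clip(rgb, 0.0, 1.0) * (n - 1)   # continuous lattice coordinates
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo                             # fractional part per channel
    fr, fg, fb = (f[..., i, None] for i in range(3))
    r0, g0, b0 = (lo[..., i] for i in range(3))
    r1, g1, b1 = (hi[..., i] for i in range(3))

    # 8 corner fetches, then 7 linear interpolations (blue, green, red)
    c00 = lut[r0, g0, b0] * (1 - fb) + lut[r0, g0, b1] * fb
    c01 = lut[r0, g1, b0] * (1 - fb) + lut[r0, g1, b1] * fb
    c10 = lut[r1, g0, b0] * (1 - fb) + lut[r1, g0, b1] * fb
    c11 = lut[r1, g1, b0] * (1 - fb) + lut[r1, g1, b1] * fb
    c0 = c00 * (1 - fg) + c01 * fg
    c1 = c10 * (1 - fg) + c11 * fg
    return c0 * (1 - fr) + c1 * fr
```

Sanity check: with an identity LUT (lut[r, g, b] = [r, g, b] / (n - 1)) the function should return its input to within float precision.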
-
10-bit 422 1080p recorded externally to ProRes (and HQ) will be around 185-220Mbps (peak) vs 100Mbps (peak) for 4K internal H.264 (which scales to 444 and 10-bit(ish) luma 1080p). A test comparing these two workflows would be helpful.
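To spell out the arithmetic behind "444 and 10-bit(ish) luma": UHD 4:2:0 stores Cb/Cr at half resolution, i.e. 1920x1080, which is already full-resolution (4:4:4) chroma for a 1080p frame, and each 1080p luma sample can be the average of a 2x2 block of four 8-bit values, whose sum spans roughly 10 bits of distinct levels. That's back-of-envelope bookkeeping, not the exact scaler any camera or recorder uses.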
-
When setting the camera to 422 10-bit and hooking the GH4 up to a 1080p HDTV, I noticed the image looked really good. In that case the GH4 is scaling 4K to 1080p in-camera, which is what I'd expect to happen when hooking up a 1080p recorder such as the Ninja.
-
I was indeed kidding about that sensor, however a real point is that Canon has the capability to release something amazing in the pro/consumer space when it's right for their business.
-
Anyone using the ImpulZ 3D LUTs with the GH4? They're apparently compatible with the GH4's Cinelike-D, however Neumann states the colors are hard to work with on the GH4 (I don't use Cinelike-D very often: mostly Natural). More info here: http://www.vision-color.com/impulz/
-
Canon or Sigma 24-105 F4.
-
Perhaps do a test with high bitrate H.264/MP4 (100-200Mbps). You might be surprised by the results. ProRes will need around 800Mbps for 4K (vs around 200Mbps for 2K).
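Back-of-envelope on those numbers: ProRes HQ runs roughly 200Mbps at 1080p, and 4K has 4x the pixels; since ProRes is intraframe, its bitrate scales roughly with pixel count, so 4 x 200Mbps = ~800Mbps. Long-GOP H.264 gets far more mileage per bit, which is why the 100-200Mbps test is worth running.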
-
The A7S has a 35mm sensor and extreme low-light sensitivity. Canon's 200+mm sensor (8 inches!) dwarfs the A7S's and also shoots in extremely low light: http://petapixel.com/2011/09/19/canons-gigantic-8-inch-cmos-sensor-now-shooting-meteors-at-60fps/ Guess Metabones will have to make a mega focal expander so it can be used with EF lenses ;) It's great that the A7S is coming out soon- the next Canon 35mm sensor is going to be amazing (it has to be), and with Canon's color science it will keep things interesting in the market.
-
The GH4 is a great camera but does not replace the 5D3. For stills the 5D3 is still king. For RAW video IQ the 5D3 is still best in class (the A7S might be a contender with 50Mbps compressed). The GH4 has 4K, slomo, and decent stills, all in a tiny package, especially when using m43 lenses (excellent and sharp, though not as good looking as FF L-class glass). A 4K 5D Mark 4 with video AF and an XAVC-class codec would put Canon back on top. That's unlikely with the 1DC still on the market.
-
Reviews and tests show the Sigma is sharper at 24mm, the Canon sharper at 105mm, and they're similar in the mid range. The Canon is weather sealed; the Sigma is not. The Canon focuses slightly faster; the Sigma might be more accurate. The Sigma has a ~1/2-stop T-stop advantage, is slightly heavier, and has smoother operation. The Canon takes 77mm filters, the Sigma 82mm. The Canon is lower cost for you.
-
Invest in Canon/or Nikon glass for buying new GH4.
jcs replied to Lasers_pew_pew_pew's topic in Cameras
I have Canon L lenses for the 5D3 and FS700+SB; however, for the GH4 I went with the Panasonic 12-35 F2.8 + 35-100 F2.8 and the Voigtlander 25mm F0.95. These lenses cover everything I need so far (24-200mm 5D3/FF equivalent). If MB ever releases the Canon EF to M43 SpeedBooster, that might be another option. -
How to cure banding in DSLR footage (and GH4 4K holds the key...)
jcs replied to Andrew Reid's topic in Cameras
It appears After Effects will apply dither when converting from 16+ bit color to 8-bit: http://provideocoalition.com/ryoung/story/remove_banding_in_after_effects It's not clear if Premiere does this (I would expect it would); if not, it's a simple extra step to render the final using AE. -
cpc- I've sharpened the 4K GH4 footage in post and the noise grain is very fine. There are smeared areas and macroblock artifacts in some places, though overall when there's not a lot of motion the noise grain is pretty impressively small, especially compared to my 5D3 (RAW) or FS700 (AVCHD). Premiere uses a form of Lanczos and Bicubic for scaling ( http://blogs.adobe.com/premierepro/2010/10/scaling-in-premiere-pro-cs5.html ). Whatever they are doing appears to work reasonably well. In terms of the 10-bit luma debate, if the 8-bit 4K footage was effectively dithered, either via error diffusion or simply noise, then resampling a 4x4 grid (Bicubic) could kind of reverse the error diffusion and put some bits back into luma. Intuitively it doesn't seem like it would buy a lot of latitude vs. native 10-bit sampling, however a little bit can be helpful. Adding error diffusion / noise certainly helps reduce the appearance of banding/blocking. Ideally the dither/noise would only be added where it's needed. Without significant dithering of some form or another, I don't see how 4K to 2K could do much for the '10-bit luma' argument as we need variance for the 4 source pixels to spread around the values of the summed/averaged final pixels.
-
Hey cpc- nice write up on your blog! While the 4K to 2K effect might be minor in terms of grading latitude, using dither can help with banding and blocking reduction. It appears the fine noise grain from the 4K footage is mostly responsible for any gains when going from 4K to 2K vs. 2K native capture. The dithering discussion raises the question regarding how NLEs convert 32-bit to 8-bit for final rendering and delivery- do they apply dither? It would also be nice to be able to control dither method and amount. The quick & dirty solution is to apply noise or film grain, however targeted error diffusion would be more optimal and more likely to survive H.264 compression.
-
For $33, the iRig Pre (with cable or hack) works well: http://www.dslrfilmnoob.com/2012/11/25/irig-pre-hack-cheap-xlr-phantom-power-preamp-dslr/
-
How to cure banding in DSLR footage (and GH4 4K holds the key...)
jcs replied to Andrew Reid's topic in Cameras
To summarize from the other thread:
- 8-bit monitors can't display 10-bit information without some form of effective dither.
- Banding and blocking are especially noticeable due to the way the eye-brain system detects edges- Mach banding and other illusions: http://www.wikiradiography.com/page/Mach+bands+and+other+Optical+Illusions
- Noise or image texture from 4K material can help reduce banding/blocking after downsampling to 2K.
- The resulting 2K material won't be the same as 10-bit capture from the sensor; however, it can in some cases provide additional tonality for grading latitude.
- For this to work, there must be sufficient noise or texture in the 4K material. Increasing ISO might help when shooting sky or other challenging material.
- Adding noise in post can help reduce banding, ideally on the 4K material before downsampling. Where downsampling introduces aliasing, applying a Gaussian blur before adding noise to the 4K material can help.
- If the final 2K render shows banding or blocking when rendered to 8-bit 420 for delivery, a small amount of noise can be added to provide effective dither, preserving tonality through 8-bit quantization.
- It's not clear if NLEs already apply dithering for 32-bit float to 8-bit integer conversion. If not, a plugin could apply a more optimal error diffusion dither to the 32-bit material which will survive 8-bit quantization. While Floyd-Steinberg is easy to code (see the sketch below), it's not very fast; newer CPU/GPU versions are much faster: http://web.iiit.ac.in/~ishan.misraug08/research/dithering/dithering.html
- Challenging areas such as sky can be selectively masked and dithered using tools such as Resolve (External Fill + Power Window).
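Since Floyd-Steinberg keeps coming up, here's a minimal sketch of the algorithm in Python/NumPy, assuming a single-channel float image in [0, 1] (e.g. 32-bit float out of an NLE). The function name is made up, and the pure-Python loop is the "easy to code but not very fast" version mentioned above:

```python
import numpy as np

def floyd_steinberg_quantize(img, levels=256):
    """Quantize a float image in [0, 1] to `levels` values with error diffusion."""
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            # Snap to the nearest representable level (clamped)
            new = np.clip(np.round(old * (levels - 1)), 0, levels - 1) / (levels - 1)
            out[y, x] = new
            err = old - new
            # Diffuse the quantization error to not-yet-visited neighbors
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return np.round(out * (levels - 1)).astype(np.uint8)
```

Run on a smooth gradient, the 8-bit output shows no visible bands- the cost is a fine, structured noise pattern instead.
-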
sunyata- it is now hopefully clear that the 2x2 pixel summing/averaging from 4K to 2K takes the fine noise in the 4K material and acts like dither before downsampling to 2K. The "10-bit" luma won't have the significant extra latitude in post that actual 10-bit (or more) sensor acquisition provides; however, banding can be reduced and color tonality better preserved vs. 1080p in-camera. In cases where banding is apparent in 4K, it's probably better to add extra noise to the 4K version before scaling to 2K. In Premiere this can be done by nesting the 4K footage, applying noise in 4K, then adding the nested clip to a 1080p Sequence with Scale to Frame Size selected.
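A quick synthetic sanity check of the 2x2 summing idea (made-up numbers, not from any camera): quantize a subtle ramp to 8-bit "at 4K" with a touch of fine grain, then 2x2-average it down- the averaged result lands on in-between levels that a direct 8-bit quantization can't represent:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 4096
ramp = np.tile(np.linspace(0.30, 0.32, w), (h, 1))  # subtle gradient that bands in 8-bit

noisy = ramp + rng.normal(0, 1 / 512, ramp.shape)   # ~0.5 LSB of fine "sensor" grain
quant4k = np.round(np.clip(noisy, 0, 1) * 255)      # 8-bit "4K" capture

# 2x2 box average down to "2K": the grain averages out and
# in-between (quarter-step) levels appear
down2k = quant4k.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print("distinct levels in 8-bit 4K:   ", len(np.unique(quant4k)))
print("distinct levels after 2x2 avg: ", len(np.unique(down2k)))
```

The extra levels only appear because the grain decorrelates the four source pixels- average a noiseless 8-bit ramp and you get the same staircase back.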
-
Sunyata- no one was able to come up with an example image showing that so-called 10-bit luma from 4K to 2K did anything significant. Andrew's Mercedes example was due to scale and viewing size; the example I found (link above) appeared to be due to compression. Since most people have 8-bit displays, the only way to see the benefit of 10-bit material is either via noise/texture during 10-bit acquisition before 8-bit quantization or via dithering before 8-bit quantization: this provides the best results for seeing the advantage of >8-bit on 8-bit displays (temporal dithering is also possible but requires displays capable of 10-bit signals). As the examples I posted show, one can dither after quantization, which helps, but it's not as good as adding dither to 10+ bit material before quantization. Folks wanted to understand how 8-bit 4K material could look so good downscaled to 2K, especially for gradations. The answer is 2x2 downscaling, fine 4K noise grain, and macroblock scaling- not anything significant with 10-bit luma. If 10-bit luma were the reason for the improvement, we wouldn't be able to see the effect on an 8-bit display. Effective dithering is the only way to view 10 or more bits on an 8-bit display. Tone mapping can also help but changes the look and is currently expensive to compute (local adaptation).
-
8-bit monitors are relevant because that's what most people use. How can people see the benefits of >8-bit data on an 8-bit display? Dithering. If we dither 10-or-more-bit images before truncation to 8-bit, we can reduce banding more effectively than if we apply dither after truncation to 8-bit (where more dithering is required, degrading the image). 4K to 2K "10-bit luma" doesn't do anything significant by itself: the fine-grained noise from 4K, and perhaps macroblock scaling, improves gradations, along with a slight gain from the 2x2 scale reduction. Adding noise to the entire image isn't the best way to reduce banding/artifacts; algorithms which selectively apply dither only to areas where it's needed are closer to optimal. Examples of these algorithms and example images here: http://en.wikipedia.org/wiki/Dithering . To use those algorithms we need the original >8-bit data, e.g. for the error diffusion calculations. Plain noise dithering is easy to apply and fast to test, so it's worth a shot when there are no other options.
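Here's a synthetic check of the before-vs-after claim (illustrative numbers only): quantize a smooth gradient to 8-bit with ~1 LSB of noise added either before or after truncation, then compare the local mean (a crude stand-in for how the eye averages fine noise) against the true gradient:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1 << 16
signal = np.linspace(0.20, 0.21, n)       # smooth ">8-bit" gradient
noise = rng.normal(0, 1 / 255, n)         # ~1 LSB of 8-bit noise

quant8 = lambda x: np.round(np.clip(x, 0, 1) * 255) / 255

before = quant8(signal + noise)           # dither, then truncate
after = quant8(signal) + noise            # truncate, then add the same noise

# Local mean: dither-before tracks the gradient, dither-after keeps the staircase
box = np.ones(1025) / 1025
err = lambda x: np.abs(np.convolve(x, box, mode="same") - signal)[1200:-1200].max()

print("residual banding, dither before:", err(before))
print("residual banding, dither after: ", err(after))
```

The dither-before version tracks the original gradient several times more closely: the same amount of noise buys much more when it's added while the extra bits are still there.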
-
Andrew, everyone agrees that adding four 8-bit numbers gives a 10-bit number, including Newman and Worth. However, this does not mean scaling a 4K image down to 2K gives useful 10-bit luma, especially for one of the main reasons to shoot 10 or more bits: reduced banding and improved color tonality. Just about everyone here (including me) is using 8-bit displays! How can we possibly see the results of greater than 8 bits on 8-bit displays?

Here's an image created in Photoshop CC to illustrate sky banding, using a simple diagonal blue gradient (PNG- no JPG DCT compression): A 16-bit per color channel image producing banding? How is this possible? The image was captured via a screenshot- so it's 8-bit per color channel, with no chance for Photoshop to HDR-compress the image to 8-bit: it's quantized. Now, if we add .25% Uniform Monochromatic noise and convert to 8-bit:

What is banding, really, and why is it so noticeable when it happens? It's due to the human visual system's edge detection, AKA Mach banding: http://en.wikipedia.org/wiki/Mach_bands . If the image has even a very small amount of noise, it can stop the "Mach band" trigger, and the gradients look smooth. What if we add .25% noise to the 8-bit image? How about 4x the noise, or 2%? It's clear that adding noise (dithering) before quantizing to 8-bit is required to eliminate banding. In signal processing generally, it's common practice to apply a low-pass filter as well as dither before sampling down to a lower-frequency signal (or lower image resolution/bit depth).

The 4K GH4 signal has a fine noise grain: when sampling from 4K to 2K, the effect is a low-pass filter + noise dithering, which produces an improved result vs. in-camera 1080p (which isn't performing as high-quality a resample as is possible in post). The Sony A7S, however, appears to have a very high-quality hardware resampler, and thus looks great at 1080p 8-bit. Noise and compression behavior is the likely reason for the reduced macroblocking/banding artifacts in this example: http://www.shutterangle.com/2014/shooting-4k-video-for-2k-delivery-bitdepth-advantage/ .

The reason 4K to 2K scaling can reduce banding is noise / effective dithering and macroblock scaling effects, not virtual 10-bit luma: quantized 10-bit luma will still (Mach) band, just as the 16-bit example above did without dithering. If shooting material that may have issues with banding, try a higher ISO- it may provide effective noise dither, reducing banding once quantized to 8-bit.
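For anyone who wants to poke at this without Photoshop, here's a rough recreation of the gradient experiment in Python (NumPy + Pillow; the gradient values and filenames are made up for the demo). It writes two 8-bit PNGs: one quantized straight from the smooth gradient, one with ~.25% uniform monochromatic noise added first:

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(2)
h, w = 512, 1024
yy, xx = np.mgrid[0:h, 0:w]
ramp = (xx + yy) / (w + h - 2)                  # diagonal gradient in [0, 1]

blue = np.zeros((h, w, 3))
blue[..., 2] = 0.25 + 0.10 * ramp               # subtle dark-to-mid blue "sky"

def to8(img):                                   # straight 8-bit quantization
    return np.round(np.clip(img, 0, 1) * 255).astype(np.uint8)

banded = to8(blue)                              # visible Mach bands

noise = rng.uniform(-0.0025, 0.0025, (h, w, 1)) # ~.25% uniform, monochromatic
dithered = to8(blue + noise)                    # same gradient, bands dithered away

Image.fromarray(banded).save("gradient_banded.png")
Image.fromarray(dithered).save("gradient_dithered.png")
```

View both PNGs at 100%: the first shows the staircase, the second looks smooth, even though both files are 8-bit.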