Leaderboard

Popular Content

Showing content with the highest reputation on 07/14/2014 in all areas

  1. SOLD! Best of luck and fun to the buyer!
    1 point
  2. That might be logical if you were going to display the image at the same final per-pixel magnification and view it from the same distance; in other words, if you were going to display it at twice the width and height of the 1080p image. But we know that's not how videos/movies are typically viewed, so no, it really isn't logical. The practical reality is that an image with a higher pixel count can tolerate more compression, because each pixel, and the resulting noise and compression artifacts, is visually smaller.

     If we want to get into pixel peeping, we need to consider the question of the information density coming from the sensor. When shooting 4K on the GH4, we are getting one image pixel for each sensor pixel. But the GH4, of course, uses a CFA sensor, and in the real world of CFA image processing, such images are not 1:1 pixel sharp. This is due to the nature of the Color Filter Array as well as the low-pass filter in front of the sensor that reduces color moiré. A good rule of thumb is that the luminance resolution is generally about 80% of what the pixel count implies; the color resolution is even less. Years ago I demonstrated online via a blind test that an 8 MP image from a 4/3s camera (E-500, I think) only has about 5 MP of luminance info. Dpreview forum participants could not reliably distinguish between the out-of-camera 8 MP image and the image that I downsampled to 5 MP and then upsampled back to 8 MP; they got it right about 50% of the time. (A rough sketch of that round trip follows this item.)

     So what's the point? The point is that before you can even (mis)apply your logic about scaling bandwidth with image resolution, you must first be sure that the two images are similarly information dense. What is the real information density of the A7S image before it is encoded to 1080p video? I think of CFA images as being like cotton candy - not information dense at all. About 35% of the storage space used is not necessary from an information-storage standpoint (though it may be useful from a simplicity-of-engineering and marketing standpoint). And even with this, there's surely plenty I'm not considering. For instance, if the A7S uses line skipping (I have no idea what it really uses), that will introduce aliasing artifacts. How do the aliasing artifacts affect the codec's ability to compress efficiently?

     The bottom line is that all too often camera users apply simplistic math to photographic questions that are actually more complex. There are often subtle issues that they fail to consider (like viewing distance, viewing angles, how a moving image is perceived differently than a still image, and more). Personally, as a guy who sometimes leans toward the school of thought that "if a little bit of something is good, then too much is probably just right," I kinda wish that the GH4 did offer 4K recording at 200 Mbps. Why not, given that the camera can record at 200 Mbps? But the reality is that I can't really see much in the way of codec artifacts in the 4K footage I've shot with the GH4 so far, so 100 Mbps is probably just fine, and 50 Mbps on the A7S is probably pretty good as well.
    1 point
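     For anyone who wants to repeat the round-trip test described in #2, here is a minimal Python sketch, assuming Pillow is installed; the filename and the ~8 MP source size are hypothetical examples, not anything from the original post.

       # Downsample an ~8 MP image to ~5 MP and back, then compare with the
       # original, roughly as in the blind test described above.
       from PIL import Image, ImageChops

       img = Image.open("original_8mp.jpg")      # hypothetical ~8 MP source file
       w, h = img.size

       # Linear scale factor for an 8 MP -> 5 MP change in pixel count.
       scale = (5.0 / 8.0) ** 0.5
       small = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
       roundtrip = small.resize((w, h), Image.LANCZOS)
       roundtrip.save("roundtrip_5mp_to_8mp.jpg", quality=95)

       # Crude numeric check; a real blind test would compare crops visually.
       diff = ImageChops.difference(img.convert("L"), roundtrip.convert("L"))
       print("max luminance difference:", diff.getextrema()[1])

     If the two files are hard to tell apart at normal viewing size, that is essentially the "cotton candy" point being made above.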
  3. Our first shoot with the Sony A7S. Shot 1080p60, 1/60th shutter, with Picture Profile 7 (S-Log2) and Color Mode changed to Cinema. Graded in Premiere Pro CC using color curves and saturation, with the same settings applied to all clips. The goal was to see how well the A7S handled skin tones under very challenging, changing lighting conditions: low light, multicolor spots, etc. NR was only used on the first clip (very, very low light). Shot handheld using the Sony SEL18-200mm APS-C lens (FS700 kit lens), AUTO ISO, AWB, AF (center spot focus area), IS, manual zoom, built-in mics. With this lens and these settings, the A7S makes an excellent low-light live/event camera.
    1 point
  4. Yeah, 24p on this camera comes in a faux cinema aspect ratio, 1920 x 810, with the crop forced on you in-camera; everything else is 30p. Nothing about the image is better than the G6's, and the codec is FAR worse, so beware!
    1 point
  5. This is a trailer for my new short film, which will be getting its first screening on July 17th. I shot this entirely on two BMPCCs, both with Speed Boosters. I hope you all like it! Trey
    1 point
  6. Sorry to butt in, but I keep seeing people talking about problems with colour science, yellow tint and other garbage. Here's what I got with the A7S using AWB: no grading. Correct me if I'm wrong, but I don't see any colour issues here.
    1 point
  7. I never touched saturation, just so you know. All I did was remove all that GREEN and add a tad bit of blackness. ALL the still frames associated with this post look excessively green (and a bit flat) to me. When I download the doggy image and open it in Photoshop, the big reflected highlight under the Beagle shows a pronounced amount of green. And I keep my 32-bit monitor calibrated and do all my (considerable amount of) grading on it.
    1 point
  8. I would wager that some of you are on PC and others are on Mac. I will share that on a calibrated PC monitor, the OP's image appears washed out (elevated gamma). I have quietly thought this since the inception of this thread, but it now seems worth pointing out.
    1 point
  9. Hey Wilco, here's a test I shot with the GH4 + SLR Magic Anamorphot 1.33x. The process is very simple in FCPX: I just use the Scale/Distort setting and set the vertical to 75%. That might not be the exact percentage needed (1 divided by 1.33 is about 75.2%), but that's what looked good to me, although 77-78% works as well. Hope that helps. (A quick sketch of the arithmetic follows this item.)
    1 point
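     As a quick sanity check on the 75% figure in #9, here is a small Python sketch of the desqueeze arithmetic for a 1.33x anamorphic; the 3840 x 2160 frame size is just an example, not something stated in the post.

       # Desqueeze arithmetic for a 1.33x anamorphic adapter.
       squeeze = 1.33
       width, height = 3840, 2160            # example GH4 4K frame

       # Option A (the FCPX tip above): keep width, squash height to 1/1.33.
       vertical_scale = 1.0 / squeeze        # ~0.752, i.e. the ~75% setting
       print(f"vertical scale: {vertical_scale:.1%}")                        # ~75.2%
       print(f"option A frame: {width} x {round(height * vertical_scale)}")  # 3840 x 1624

       # Option B: keep height, stretch width by 1.33x instead.
       print(f"option B frame: {round(width * squeeze)} x {height}")         # 5107 x 2160

     Either option gives the same wider aspect ratio (about 2.36:1); the choice just decides whether you lose height or gain width.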
  10. It doesn't fit on the GM1. Not tested on the G6 yet! It does fit the GX7 (just), GH3 and GH4, but I have yet to test infinity focus. Beware for now.
    1 point