Leaderboard
Popular Content
Showing content with the highest reputation on 03/20/2014 in all areas
-
I'm currently developing a pair of front mounted wide angle adaptors for use on the FF58 lens (which is a renovated Helios 44). The first is a 38mm attachment (4 elements), the second a 25mm attachment (6 elements). Both are looking like they will be significantly more costly than a Helios 44, due to their optical complexity and the fact that the glass types are about as exotic as they can get. They are designed to impart as little image degradation as possible and simply widen the FOV, maintaining the optical character of the Helios 44. Don't expect edge softness, CA or other nasties from these optics! When released I will also introduce a 'Classic58' version of the Flare Factory 58, which will simply be a renovated Helios 44 with uprated mechanical parts and a fresh external finish. There are simply no wide options that match the Helios 44 / Biotar look unless you go with the old OCT18 LOMO lenses, none of which can be used on a standard EF mount or with speed boosters; they're only adaptable to PL mount.
3 points
-
Zacuto on the GH4
dahlfors and one other reacted to Ben Prater for a topic
Found this video, lots of interesting details about the camera directly from Panasonic. One bit that jumped out at me: the new Venus engine is powerful enough to bump up the usable ISO range of the camera (by effectively de-noising, in real time). Can't wait to see if it can match the 5D3 in low-light.
2 points
-
Last I checked, both YouTube and Vimeo use customized ffmpeg (with x264) to transcode. x264 has been the best H.264 encoder for a while now, so if you want the most efficient upload you could use any tool which uses a recent version of ffmpeg (rendering out ProRes/DNxHD then using Handbrake is a decent way to go).

The challenge with your footage is high detail plus fast motion. Adding grain or more detail (by itself) can make it worse. To help H.264 compress more efficiently in this case you need less detail in the high motion areas. You can achieve this by first shooting with a slower shutter speed (1/48 or even slower if possible). Next, use a tool in post which allows you to add motion blur. In this example you could cheat and use tools to mask off the skateboarder and car and Gaussian blur everything else in motion (mostly the sides but not so much the center/background). You could also apply Neat Video to remove noise and high frequency detail (in the moving regions only) and skip any additional motion blur, as that will affect the energy/tension of the shot (though adding more blur to motion will help the most).

Once you have effectively lowered detail in the high motion areas (however achieved), H.264 will be able to better preserve detail in the lower motion areas: the skateboarder, car, and distant background.
2 points
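(Not from the original post: a minimal sketch of the "recent ffmpeg/x264" route mentioned above, wrapped in Python. It assumes an ffmpeg build with libx264 is on your PATH; the file names, CRF value and preset are illustrative, not a recommendation from the poster.)

```python
# Minimal sketch: transcode an intermediate (e.g. ProRes/DNxHD render) to an
# H.264 upload master with ffmpeg/libx264. File names and settings are
# placeholders for illustration only.
import subprocess


def encode_upload_master(src: str, dst: str, crf: int = 18, preset: str = "slow") -> None:
    """Run ffmpeg with the x264 encoder to produce an upload-friendly MP4."""
    cmd = [
        "ffmpeg",
        "-i", src,                  # source rendered out of the NLE
        "-c:v", "libx264",          # the x264 encoder discussed above
        "-preset", preset,          # slower presets spend bits more efficiently
        "-crf", str(crf),           # quality target; lower = higher quality, bigger file
        "-pix_fmt", "yuv420p",      # 4:2:0, what the sharing sites expect
        "-c:a", "aac", "-b:a", "256k",
        "-movflags", "+faststart",  # move the moov atom so playback starts sooner
        dst,
    ]
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    encode_upload_master("graded_master.mov", "upload.mp4")
```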
-
32-bit floating point is higher precision color processing than 8-bit (float versus integer precision), and 32-bit processing is also usually done in the linear domain rather than on gamma-encoded image data. In 32-bit float there should be no loss of data from clipping: image values can be negative or greater than 1.0. You won't see that on your monitor, and it will look like clipping is happening on your scopes, but as you grade you'll see the data move into and out of scope range, whereas 8-bit processing clips below 0 and above 1 (i.e. outside 0 to 255 in 8-bit terms).

Full versus video levels: whether the image is encoded in camera with an RGB to YCbCr conversion that derives YCbCr values (luma and chroma difference) over limited range or full range, your aim is to do the correct reverse for RGB preview on your monitor. You can monitor/preview and work with either limited or full range as long as you know what your monitor expects, it is calibrated accordingly, and you feed it the right range. If you're unsure, use video levels. Video export 'should' be limited range, certainly for final delivery; use full range only if you're sure of correct handling further along the chain. For example, to grade in BM Resolve you can set 'video' or 'data' interpretation of the source.

Your 1DC motion JPEGs are full range YCbCr, but as the chroma is normalized over the full 8-bit range along with luma (JPEG/JFIF), it's roughly equivalent to limited range YCbCr, and the MOV container is flagged full range anyway, so as soon as you import it into an NLE it will be scaled into limited range (video levels) YCbCr. Canon DSLR, Nikon DSLR and GH3 MOVs are all H.264 JPEG/JFIF, flagged 'full range' in the container and interpreted as limited range in the NLE.

What you want to avoid is scaling levels back and forth through the chain from graphics card to monitor, including ICC profiles and OS-level color management interfering along the way. You may also have to contend with limited versus full range RGB levels, depending on the interface you're using from your graphics card: DVI versus HDMI, for example, with NVIDIA feeding limited range RGB over DVI and full range over HDMI.
2 points
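(Not part of the original post, just a rough NumPy sketch of the two points above: a 32-bit float pipeline keeping over-range data that an 8-bit integer pipeline clips, and the 16-235 scaling that happens when full-range luma is reinterpreted as video levels. The sample values and the one-line "grade" are made up for illustration.)

```python
import numpy as np

# --- 32-bit float vs 8-bit integer processing ---
linear = np.array([0.2, 0.9, 1.4, -0.05], dtype=np.float32)  # 1.4 and -0.05 are out of range

# Float pipeline: pull exposure down and the over-range highlight comes back intact.
print(linear * 0.5)                          # [ 0.1    0.45   0.7   -0.025]

# 8-bit pipeline: quantise first, so out-of-range data is clipped before the grade.
eight_bit = np.clip(np.round(linear * 255), 0, 255).astype(np.uint8)
print(eight_bit)                             # [ 51 230 255   0]  -> 1.4 and -0.05 are gone
print(eight_bit * 0.5)                       # the clipped highlight stays clipped


# --- full range vs limited (video) range 8-bit luma ---
def full_to_limited(y_full):
    """Scale full-range luma (0-255) into limited/video range (16-235)."""
    return np.round(16 + y_full.astype(np.float32) * 219.0 / 255.0).astype(np.uint8)


def limited_to_full(y_lim):
    """Inverse scaling; 256 codes squeezed into 220 means some values merge."""
    return np.clip(np.round((y_lim.astype(np.float32) - 16) * 255.0 / 219.0),
                   0, 255).astype(np.uint8)


y = np.array([0, 128, 255], dtype=np.uint8)
print(full_to_limited(y))                    # [ 16 126 235]
print(limited_to_full(full_to_limited(y)))   # [  0 128 255] for these samples, but not lossless in general
```

This is why bouncing levels back and forth through the chain (NLE, graphics card, monitor) is worth avoiding: each 8-bit rescale merges codes you never get back.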
-
Lot of good information here. I like that the Panasonic rep gave honest answers, for example on the increase in f-stops, rather than just giving a marketing speech. 10-bit 4:2:2 at 4K on the HDMI port right on the camera body!! Whoohoo!!!

Interesting that they put in adjustments for "shutter angle" in addition to the original fractions of a second for shutter speed. They also put in dB in addition to ISO. Panasonic's background in video really shows. Panasonic has an edge over the pure stills camera manufacturers, or those manufacturers with stills camera divisions that don't seem to talk to their video divisions.

The GH4 really has set a new benchmark for a hybrid camera, and shows Panasonic has obviously seen the trend where people want high quality video out of their stills camera. They are way ahead of the market with the GH4. I'm looking forward to seeing it at NAB. I have a feeling this will be the perfect camera for a lot of people, like me, who go back and forth between shooting stills and shooting video on the same shoot, literally minute by minute, and don't want compromises in either.

Michael
1 point
-
There was a time when integer/fixed-point math was faster than floating point. Today, floating point is much faster. GPU-accelerated apps such as Premiere and Resolve (FCPX?) always operate in floating point. The only area where 8-bit can be faster is on slower systems where memory bandwidth is the bottleneck (or really old/legacy systems, perhaps After Effects).
1 point
-
Lenses like the helios 44?
nahua reacted to Bioskop.Inc for a topic
The only one that really springs to mind is the Helios 40-2 85mm f1.5 - wide open it's dreamy soft & produces the best swirly bokeh that you'll ever see! Stopped down it is really great & produces a star-like bokeh shape. The Helios 40-2, the 44-2 & the Tair 11a were the only 3 lenses I used to use. I recently discovered that they have halved in price & can be bought brand new (re-released in 2012 or just old new stock) in various mounts (Canon, Nikon or M42). This guy has a ton: http://stores.ebay.co.uk/moscowStore/_i.html?_nkw=helios+40-2&rt=nc&_dmd=1&_sid=92107522&_trksid=p4634.c0.m14&_vc=1

The other usual [Russian] suspects are:
Mir 24 35mm f2 (MC in Nikon or M42)
Mir-1b 37mm f2.8
Jupiter 9 85mm f2
Tair 11a 135mm f2.8

Also, I've seen some good things done with the Meteor 5-1 17-69mm f1.9 (a S16mm zoom lens) - has anyone got one of these & got more info on its performance?
1 point
-
Canon say: "We need to respond to calls for DSLR video quality increase"
Zach Ashcraft reacted to etidona for a topic
@leo 25 frames per second and the perfect moment happens when you are changing the memory card ;) Photography is photography.
1 point