Everything posted by tupp
-
It's an ultrasonic tape measure (for setting focus).
-
Definitely, some heavy-duty IBIS with that rig. Looks like they forgot the cage with a Rode and shoe mounts! /s Jeez! That's as massive as a Mitchell BNC or a Dalsa Origin.
-
Not sure that Tizen is "Linux-like" -- I think it actually is a version of Linux. This sounds like the old Magic Lantern HDR video feature, in which alternating H.264 frames have different ISOs (WARNING! STROBING/FLASHING FRAMES): The trick is that you shoot with the ML HDR video feature at a higher frame rate (50fps or 60fps) and then combine every two consecutive images to get HDR video at half the frame rate (25fps or 30fps). As mentioned in the video, one can encounter motion/ghosting problems with this method. I wonder if the Magic Lantern HDR option works with their 60fps raw video feature on the 5D III (although I am not sure that this HDR method makes sense when shooting raw).
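For anyone curious what the merge step looks like in post, here is a minimal sketch, assuming OpenCV is available -- this is not Magic Lantern's own tooling, and the file names are hypothetical. It pairs consecutive frames (shot at alternating ISOs) and fuses each pair with Mertens exposure fusion, halving the frame rate:

```python
# Pair consecutive frames from an alternating-ISO clip and fuse each pair,
# halving the frame rate (60fps in -> 30fps HDR out). Illustration only;
# input/output file names are hypothetical.
import cv2

cap = cv2.VideoCapture("ml_hdr_60fps.mov")  # hypothetical input clip
fuse = cv2.createMergeMertens()             # exposure-fusion operator

out = None
while True:
    ok_a, dark = cap.read()    # e.g. the low-ISO frame
    ok_b, bright = cap.read()  # e.g. the high-ISO frame
    if not (ok_a and ok_b):
        break
    # Fuse the two exposures; result is float32 in roughly [0, 1].
    hdr = fuse.process([dark, bright])
    frame = (hdr * 255).clip(0, 255).astype("uint8")
    if out is None:
        h, w = frame.shape[:2]
        out = cv2.VideoWriter("hdr_30fps.avi",
                              cv2.VideoWriter_fourcc(*"MJPG"), 30, (w, h))
    out.write(frame)

cap.release()
out.release()
```

Note that nothing in this sketch addresses the motion/ghosting problem mentioned above -- fast-moving subjects will differ between the two frames of a pair.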
-
I said that those specs would be "ideal" -- I realize that those exact specs are probably not currently feasible. On the other hand, 12-bit/10-bit raw at 1800x1024 is working on the EOSM, and that is certainly close enough to what I seek. However, the Magic Lantern documentation, nomenclature and forum are so confusing that I cannot figure out whether or not those specs are available in a super16 crop. ML has not been able to get higher raw resolutions from the full sensor, but I am not sure that is due to the EOSM lacking "power." I assume that this current limitation doesn't involve the EOSM's SD card controller bottleneck, because the EOSM can obviously record full-HD+ resolutions at higher bit depths in crop modes. So, there might be a way to get raw and "near-full-HD" without a crop. I am just trying to figure out what is possible right now with the EOSM.
-
That's not the cause of my confusion -- I don't even have Magic Lantern installed, so I am not looking at the menus. Indeed, if all the crops were simply labeled relative to full frame, there would be a lot less confusion in regard to crop sizes. I think that a lot of the confusion with Magic Lantern comes from three things: the official documentation is not current and a lot of features are left out; the only mention of some features is scattered over a zillion forum posts, so information on them is difficult to find; and since the features are released as they appear, they weren't named together, all at once, so the naming has been somewhat arbitrary, reactive and haphazard. Consequently, it is difficult to tell from a feature's name what that feature does (especially relative to other, similar features).

Yes. I have this one. There are other brands that have "dumb" EF-to-EF-M focal reducers -- just search on eBay. Thanks for the info!
-
Thanks. As I mentioned above, I have shot extensively with the BMPCC, but I don't own one nor do I own a BMMCC. I own an EOSM with an RJ focal reducer. For the optics that I have, it would be ideal to get a super16 crop, full HD, 10-bit, 4:4:4 (or raw), or to get the same specs with no crop (APS-C).
-
@Alpicat Thanks for the info. That clarifies/differentiates some of the features. I thought that the 3x crop was equivalent to super16, while the 5x was equivalent to super8. I was also under the impression that "movie crop mode" could be super16 in size.
-
Thanks for the confirmation on the 3x crop. I wonder what resolutions are available at that crop and whether or not I would have to deal with focus pixels. In regard to the Blackmagic Pocket and Micro, I have shot a lot with the Pocket, and it gives great images. I own an EOSM, but not any Blackmagic cameras. So, instead of spending another US$1000, I would rather try for similar functionality with my EOSM.
-
What happened to Bohus? Thanks for the link. I am fairly sure that there is a 3x crop mode in Magic Lantern, but I don't know what resolutions are possible, nor whether one can shoot in that mode without suffering pink/green focus pixels. It is difficult to determine Magic Lantern's capabilities, as the only mention of a feature is sometimes spread over many threads on their forum, with some threads having 60+ pages of posts. Well, if Metabones doesn't make a speed booster that mounts on the EOSM, some of their competitors already have the EOSM covered. Plus, if an EOSM-to-m4/3 adapter gets made, that would allow the use of a lot of m4/3 focal reducers on the EOSM, including Metabones. Metabones should actually make their $peedboosters with interchangeable camera mounts, so you only have to spend $500+ for one set of optics.
-
Nice! Does the acrylic case conform tightly to the monitor, or is it more bulky (to include a Raspberry Pi board)?
-
Wow! I had no idea that such adapters existed. Thanks! Here's the Fotodiox version, by the way. Indeed, both mounts have the same flange focal distance (18mm), plus the EF-M has a slightly wider throat diameter at 47mm, compared to the 46.1mm throat of the E-mount. So, it should be a little easier to make an M4/3 adapter for the EOSM. Can you post your Fotodiox contact info? I would like to "second" your proposal for such an adapter. By the way, much of this thread involves shooting with a "Super-8mm" crop on the EOSM. I would like to enjoy the same HD resolution and raw benefits, but shoot with a Super-16 crop. Would that be the equivalent of the 3x crop? If so, can I shoot that crop on the EOSM with full HD, raw, 24p at 10-bit? Thanks!
-
There is a PL mount to EOS M adapter, but it's from MTF, so it's expensive: https://www.srb-photographic.co.uk/pl-mount-to-canon-eos-m-mtf-camera-adaptor-9290-p.asp I don't know if there are cheaper alternatives. There are PL to EF-M tilt adapters for US$195. I seem to recall non-tilt versions selling for a little over US$100, but I can't find them anymore. Another option is to get a PL to EF adapter for ~US$100 and merely attach that to a dummy EF to EF-M adapter (US$9).

The flange focal distance for M4/3 is 19.25mm, while the flange focal distance for EF-M is 18mm. So, it should be possible to drill matching holes in an M4/3 mount to match the screws on the EF-M mount and shim the mount ~1.25mm further out. Of course, making the locking pin work is another consideration. Also, safe clearance for the EOSM's EF-M electronic contacts should be considered, to prevent damage.
-
Glad to know somebody gets it! I don't use LUTs in grading, but a LUT can help with "client preview" on set.
-
Hybrid Log Gamma?
-
Ha, ha! If you start out with an uncompressed file with negligible noise, the only thing other than bit depth that determines the "bandwidth" is resolution. Of course, if the curve/tone-mapping is too contrasty in such a file, there is less room to push grading, but the bandwidth is there, nevertheless. Bandwidth probably isn't the reason that 8-bit produces banding more often than 10-bit, because some 8-bit files have more bandwidth than some 10-bit files.
-
Barring compression variables, the "bandwidth" of 8-bit (or any other bit depth) is largely determined by resolution. Banding artifacts that are sometimes encountered when pushing uncompressed 8-bit files are not due to a lack of "bandwidth" per se; they result from the reduced number of incremental thresholds within which all of the image information is contained.
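To put rough numbers on that (my own back-of-the-envelope illustration): the raw "bandwidth" of an uncompressed RGB frame is just width x height x channels x bits per channel, and by that measure an 8-bit UHD frame carries more bits than a 10-bit full-HD frame:

```python
# Raw "bandwidth" of one uncompressed RGB frame, in bits:
def frame_bits(width, height, bit_depth, channels=3):
    return width * height * channels * bit_depth

print(frame_bits(3840, 2160, 8))   # 199,065,600 bits (8-bit UHD)
print(frame_bits(1920, 1080, 10))  #  62,208,000 bits (10-bit full HD)
# The 8-bit UHD frame holds over 3x the bits of the 10-bit HD frame,
# yet 8-bit footage is the more banding-prone of the two when pushed.
```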
-
Here is a 1-bit image (in an 8-bit PNG container): If you download this image and zoom into it, you will see that it consists only of black and white dots -- no grey shades (except for a few unavoidable PNG artifacts near the center). It is essentially a 1-bit image. If you zoom out gradually, you should at some point be able to eliminate most of the moiré and see a continuous gradation from black to white. Now, the question is: how can a 1-bit image consisting of only black and white dots exhibit continuous gradation from black to white? The answer is resolution. If an image has fine enough resolution, it can produce zillions of shades, which, of course, readily applies to each color channel in an RGB digital image. So, resolution is integral to color depth.
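If you want to reproduce the experiment yourself, here is a minimal sketch, assuming numpy and Pillow are installed (file names are hypothetical): it generates a pure black-and-white image whose white-dot density ramps from left to right, then box-averages it to mimic zooming out.

```python
# Generate a 1-bit-style image (pure black/white dots) whose white-dot
# density ramps from 0 on the left to 1 on the right, then downscale it.
# At full size you see only dots; averaged down, a smooth grey ramp appears.
import numpy as np
from PIL import Image

W, H = 4096, 1024
ramp = np.linspace(0.0, 1.0, W)                 # target brightness per column
dots = (np.random.rand(H, W) < ramp).astype(np.uint8) * 255

Image.fromarray(dots).save("one_bit_ramp.png")  # contains only 0s and 255s

# Box-average 16x16 blocks -- the "zoomed out" view with many grey shades:
small = dots.reshape(H // 16, 16, W // 16, 16).mean(axis=(1, 3))
Image.fromarray(small.astype(np.uint8)).save("zoomed_out.png")
```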
-
No. I am referring to the actual color depth inherent in an image (or imaging system). I never mentioned dithering. Dithering is the act of adding noise to parts of an image or re-patterning areas in an image. The viewer's eye blending adjacent colors in a given image is not dithering.

Again, I am not talking about dithering -- I am talking about the actual color depth of an image. The formula doesn't require viewing distance because it does not involve perception. It gives an absolute value for the color depth inherent in an entire image. Furthermore, the formula and the point of my example are two different things. By the way, the formula can also be used on smaller, local areas of images to compare their relative color depth, but one must use proportionally identical areas for such a comparison to be valid.

What I assert is perfectly valid and fundamental to imaging. The formula is also very simple, straightforward math. However, let's forget the formula for a moment. You apparently admit that resolution affects color depth in analog imaging. Not sure why the same principle fails to apply to digital imaging. Your suggestion that the "non-discrete color values" of analog imaging necessitate measuring color in parts of an image to determine color depth does not negate the fact that the same process works with a digital image.

The reason why I gave the example of the two RGB pixels is that I was merely trying to show, in a basic way, that an increase in resolution brings an increase in digital color depth (the same way it happens with an analog image). Once one grasps that rudimentary concept, it is fairly easy to see how the formula simply quantifies digital RGB color depth. In a subsequent post, I'll give a different example that should demonstrate the strong influence of resolution on color depth.
-
So what? It works the same with digital imaging. However, in both realms (analog and digital), the absolute color depth of an image encompasses the entire image.

I will try to demonstrate how color depth increases in digital imaging as the resolution is increased. Consider a single RGB pixel group of size "X," positioned at a distance at which the red, green and blue pixels blend together and cannot be discerned separately by the viewer. This RGB pixel group employs a bit depth that is capable of producing "Y" number of colors.

Now, keeping the same viewing distance and the same bit depth, what if we squeezed two RGB pixel groups into the same space as size "X"? Would you say that the viewer would still only see "Y" number of colors -- the same number as the single pixel group that previously filled size "X" -- or would the (slightly) differing shades/colors of the two RGB pixel groups blend to create more colors? What if we fit 4 RGB pixel groups into space "X"? ... or 8 RGB pixel groups into space "X"?

As I think I have shown above, resolution plays a fundamental role in digital color depth. In fact, resolution and bit depth are equally weighted factors in digital color depth. I would be happy to explain this further if you have accepted the above example.
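For what it's worth, the per-channel version of that example can be counted directly. A toy computation (my own illustration): if two 8-bit values optically blend into one perceived shade, the number of distinct blends exceeds the 256 levels that a single value can produce.

```python
# Count the distinct blended shades possible when two 8-bit values share
# the space formerly occupied by one (the eye averages them).
single = 256  # one 8-bit value: 256 levels per channel
# Distinct sums stand in for distinct averages (a + b) / 2:
blends = {a + b for a in range(256) for b in range(256)}
print(len(blends))  # 511 -- nearly double the levels per channel
```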
-
I don't have a reference. When I studied photography long before digital imaging existed, I learned that resolution is integral to color depth. Color depth in digital imaging is far more quantifiable, as it involves a given number of pixels with a given bit depth, rather than the indeterminate dyes and grain found in film emulsion (and rather than the unfixed, non-incremental values inherent in analog video). The formula for "absolute" color depth in RGB digital imaging is: Color Depth = (Bit Depth x Resolution)^3
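Plugging numbers into that formula as written -- my own worked example, taking "resolution" as the total pixel count -- reproduces the earlier point that some 8-bit files can outscore some 10-bit files:

```python
# Evaluate the formula as stated: Color Depth = (Bit Depth x Resolution)^3,
# with resolution taken as the total pixel count of the frame.
def color_depth(bit_depth, width, height):
    return (bit_depth * width * height) ** 3

uhd_8bit = color_depth(8, 3840, 2160)   # 8-bit UHD
hd_10bit = color_depth(10, 1920, 1080)  # 10-bit full HD

print(uhd_8bit > hd_10bit)  # True -- the 8-bit UHD frame scores higher
```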
-
The notion that bit depth is identical to color depth is a common misconception that has apparently made its way into Wikipedia. The only instances in which bit depth and color depth are the same are when considering only a single photosite/pixel or a single RGB pixel group. Once extra pixels/pixel groups are added, color depth and bit depth become different properties. This happens due to the fact that resolution and bit depth are equally weighted factors of color depth in digital imaging.
-
Actually, "8-bit" and "10-bit" refer only to bit depth -- bit depth and color depth are two different properties. My understanding is that V-Log doesn't actually change the dynamic range -- it changes the tone mapping within the dynamic range to capture more shades in desired exposure regions.
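To illustrate the tone-mapping point with a toy example (my own sketch -- this is a generic log-style curve, not Panasonic's actual V-Log transfer function): a log curve spends far more of the available code values on the lower stops than a straight linear encoding does, without changing the captured dynamic range.

```python
# Compare how many of 255 code values a linear curve and a toy log-style
# curve spend on the darkest 1/16th of scene light (four stops down).
import math

def codes_used(scene_fraction, curve):
    return round(255 * curve(scene_fraction))

linear  = lambda x: x
log_ish = lambda x: math.log2(1 + 255 * x) / 8  # toy curve, NOT real V-Log

print(codes_used(1 / 16, linear))   # ~16 code values for the shadows
print(codes_used(1 / 16, log_ish))  # ~130 code values -- more shadow shades
```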
-
Almost every mic that I have seen on hand-held booms had an interference tube. What kind of mic is that?

They might stop laughing when the director and post-production sound supervisor start asking why there is so much extraneous noise in the audio.

I have been in production for a little while. By the way, I started out in audio recording. I have worked on all kinds of film productions in various departments, from small corporate gigs to large features on stages and on location. I am telling you exactly what I see audio pros use on such sets.

When I was involved in audio recording, I was utterly ignorant about the different types of mics and recording techniques. I also was completely unaware of certain brands/models that had more desirable response for certain applications. Please enlighten those of us with limited knowledge with your mastery of the important mic properties in recording. Specifically, please explain what is more important than the degree of directionality in regard to film audio production, given a quality, professional mic.
-
You should tell that to the pros here in Hollywood. Almost all the boom operators that I see here on set are using shotguns on their booms, both indoors and outdoors. These operators are nimble and precise, and they want clean, dry audio. I am not specifically an audio person, but I always use my boomed shotgun mic, and I have always gotten great results. I would never want anything with wider reception.
-
Agreed! Otherwise, somebody needs to tell all the pro mixers here in Hollywood that they are wasting their money hiring boom operators!

I suspect that this suggestion will sound alien to some here, but never mount your shotgun mic to your camera. Always boom the shotgun mic just outside of the frame, as close as possible to the subject. If your subject is static, such as someone sitting for an interview, you can boom from a light stand or C-stand. Of course, make sure that the shotgun mic is aimed directly at the subject's mouth or directly at the action you want to record.

One important trick that is often overlooked: position the mic so that there are no air conditioners or other noise sources along the shotgun mic's axis (both in front of and behind the mic), where it is most sensitive. So, if there is an air conditioner on camera right, boom the mic from slightly to camera right, so that it is aimed a little leftward toward the subject, perpendicular to the noise source on the right of the camera.

As @maxotics suggested, it is best to use both a lav and a boomed shotgun, if possible. In such a scenario, one is always a backup for the other.