Everything posted by tupp

  1. The easiest solution is a Cardellini clamp with a magic arm. If the jaws of the Cardellini are too big (visible on the bottle), you could use a thin plumbing pipe clamp with a rubber liner around the bottom of the bottle. Use plumber's epoxy on the head of the clamp's bolt to keep it from rotating, and attach the magic arm to the exposed threads at the other end of that bolt. You could likewise make a clamp from a plumbing strap and a long 1/4-20 bolt. Plumber's epoxy would also work here to keep the bolt fixed so that you could attach a magic arm to the exposed threads. Rubber scraps from an old bike inner tube could be used as a clamp liner. Or, you could just use J-B Weld to glue two stacked 1/4-20 nuts directly to the bottle. J-B Weld sticks to glass (and seems to hold up to pot smoke and heat, too!) -- not sure if plumber's epoxy sticks to glass as well as J-B Weld does. You might have to score (roughen) the surface of the glass to improve the adherence of the J-B Weld. Screw two 1/4-20 nuts onto the end of a 1/4-20 bolt, then use plenty of J-B Weld around the nuts and around the surface of the bottle. When the J-B Weld has hardened, simply unscrew the bolt from the two nuts, and the nuts will be perfectly aligned within the J-B Weld. Screw the 1/4-20 threads of the magic arm into the epoxied nuts.
  2. I am merely stating fact. There is no absolute correlation between dynamic range and bit depth -- they are two completely independent properties. Yes, that is how binary math works. However, it has no bearing on how many bit-depth increments are (or should be) mapped across a signal/dynamic range. The number of "steps" mapped within a signal/dynamic range has nothing to do with the scope of that signal/dynamic range -- that is the point. This is a simple concept/fact. There might be an older machine vision camera with those specs, but the request is irrelevant -- there are plenty of cameras featuring multiple bit depths yet only one dynamic range. The dynamic range of these cameras doesn't change when the cameras' bit depth changes. Again, bit depth and dynamic range are two completely independent properties. Also, subject/location contrast has nothing to do with a camera's dynamic range. Standards and their calculation have nothing to do with the basic fact that bit depth and signal/dynamic range are totally different properties. Furthermore, dynamic range is so independent of bit depth that many systems (including numerous analog cameras) have incredible dynamic range with absolutely zero bit depth.
  3. Raw data can be mapped in other ways, too. There is no absolute (nor absolutely "ideal") correlation between dynamic range and bit depth. Dynamic range and bit depth are two completely independent properties. A camera with 14 stops of dynamic range can be mapped with a 6-bit depth, and, likewise, a camera with a dynamic range of 6 stops can have a 14-bit depth. Furthermore, there are numerous digital cameras that allow one to choose among different bit depths mapped within the camera's fixed dynamic range (or, more accurately, signal range).
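     To illustrate the point, here is a minimal Python sketch of my own (the bit depths come from the example above; everything else is made up, not from any particular camera): quantizing one fixed signal range at 6 bits and at 14 bits changes only the number of steps, never the endpoints of the range.

         import numpy as np

         # One fixed capture range, normalized to 0..1 -- this stands in for a
         # camera's signal/dynamic range, and it never changes below.
         signal = np.linspace(0.0, 1.0, 100000)

         def quantize(x, bits):
             """Map the normalized signal onto 2**bits evenly spaced steps."""
             levels = 2 ** bits
             return np.round(x * (levels - 1)) / (levels - 1)

         for bits in (6, 14):
             q = quantize(signal, bits)
             print(f"{bits} bits: {len(np.unique(q))} steps, "
                   f"endpoints {q.min():.1f}..{q.max():.1f}")

         # 6 bits: 64 steps, endpoints 0.0..1.0
         # 14 bits: 16384 steps, endpoints 0.0..1.0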
  4. Tell them that you have the expertise to fulfill the director's vision and that you have extensive experience in using the language of the camera to tell the story. Of course, you must be able to follow through on such statements, and it certainly helps to have a reel that demonstrates such ability.
  5. Don't know how/if the camera's noise reduction settings affect the raw stills, but the problem doesn't appear in the raw files. By the way, there are raw files with two different noise settings (NR1 and NR6) available for download, and there is no noise reduction smudging in either. The problem is reduced on the jpegs with noise reduction set to "1." If the noise reduction is working in raw mode, I am not seeing any smudging, so perhaps this problem is just limited to jpeg processing. Wonder how the noise reduction in the GH5's 4:2:2 video will compare with that of the jpegs.
  6. You have a nice idea! Instead of high-end 3D printing or CNC machining, consider silicone molding. You can 3D print a rough master and finish it to your liking, and then make copies of it with the silicone mold. With this method, you have to be mindful that the parts need to be designed to release from the mold, and, of course, you might need plenty of ventilation to safely cure molding compounds. I see a potential problem with your orange part releasing from a mold. Also, don't get too precise with the design/engineering -- in most instances, the precision one thinks is needed is not actually required. Designing with "slop" (sloppy engineering tolerance) is generally a good practice, especially with mating parts and with any manufacturing process that might have shrinkage (injection molding, die casting, extrusion, blow molding, etc.). Design with as much slop as possible, and allow for that slop. It can save you a lot of headaches later on. For your enclosure, it might be good to have about 1/32" (0.8mm) of slop... meaning, for instance, that you should probably design the inner mating surfaces of the orange part to be 1/32" (0.8mm) wider than the outer mating surfaces of the yellow part (see the trivial sketch below). Hope this helps.
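     A trivial Python sketch of that rule of thumb (the 40.0mm width is hypothetical, purely for illustration):

         # Apply a uniform design clearance ("slop") to mating surfaces.
         SLOP_MM = 0.8  # about 1/32"

         def inner_mating_width(outer_width_mm, slop_mm=SLOP_MM):
             """Enclosing part's mating width, widened by the clearance."""
             return outer_width_mm + slop_mm

         # e.g., if the yellow part's outer mating surface were 40.0 mm wide:
         print(inner_mating_width(40.0))  # -> 40.8 mm for the orange part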
  7. Of course, you could gain over a stop by using a focal reducer, such as the Metabones BMPCC Speed Booster.
  8. Not all 80A filters are created equal. You might try one from another manufacturer. Same with CTBs and CTOs (lighting color temp correction gels). Also, filters can fade with age. Nevertheless, your corrections on the test are well within almost anyone's tolerances. Good job matching the two images! I don't see a huge problem in your test (yet I do see a slight difference), but if one finds oneself without correction filters and is also concerned about slightly noisy channels, one can often overexpose 1/2 stop (depending on the subject/scene). On the other hand, with well-lit sets, I have never encountered noise problems shooting raw with tungsten and only IR cut filters and ETTR.
  9. Thanks for doing this test! OP is shooting weddings/events, so OP might often encounter incandescent tungsten, which starts below 3200K (with a higher percentage of IR) and which is likely further dimmed (increasing the percentage of IR even more). I don't know if your 500w halogens are 3200K, but using a full CTB correction (80A) probably helps OP color-wise and IR-pollution-wise. However, with a full-CTB/80A filter, brightness will suffer a 1 1/2-stop to 2-stop hit, so it might be better to use a 1/2 CTB (80C?) correction.
  10. Blackmagic cameras are sensitive to IR, and tungsten sources emit a high percentage of IR. So, you might want to try an IR cut filter.
  11. Evidently, the TIFF format has metadata designations that include camera make, model, color profile, orientation, exposure, etc. (see the linked EXIF file below). The highest quality still file that your camera produces will give the best image quality. However, it can be unwieldy or visually detrimental to directly edit raw camera files. Some still photographers first run their raw camera files through a raw image "processor/developing" program (such as Lightroom) before retouching them in an editing program such as Photoshop. I use the open source Darktable as my processing program and GIMP as my photo editor. I just checked the EXIF info in a TIFF that I converted from a raw file using Darktable, and the metadata is extensive: tif_exif_info.txt (Note that I was using a manual lens on a Canon EOSM, so there is no lens/f-stop info.)
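      If you want to dump those tags yourself, here is a minimal Python sketch using the Pillow library (the filename is a placeholder, and EXIF coverage varies by converter):

          from PIL import Image
          from PIL.TiffTags import TAGS

          # Print the TIFF tag directory: make, model, orientation, etc.
          with Image.open("converted_from_raw.tif") as img:
              for tag_id, value in img.tag_v2.items():
                  print(TAGS.get(tag_id, tag_id), "=", value)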
  12. Every raw file "development" program that I have used initially defaults to the camera's settings. Of course, one can usually change from the default settings to adjust white balance, exposure, camera profile, etc., but it probably depends on which program you are using to convert from raw to .tiff.
  13. Shooting raw stills and comparing them to processed frames can reveal the extent of sharpening or other image processing that is occurring in the camera.
  14. Don't trust auto-focus for anything critical, even if you can see the image momentarily magnified. Again, seeing raw stills could help, as those images should bypass a lot of the in-camera image processing.
  15. Did you use the same 12-35mm f2.8 lens on both cameras, or did you use two different 12-35mm f2.8 lenses? I wonder what the comparison would be like with both cameras set to "natural" and with the sharpening dialed all the way down in both cameras. It would be helpful to also see raw stills. By the way, are you auto-focusing or manually focusing?
  16. It appears that the GX80 has some artificial sharpening, judging from the halo around some of the lines/numerals on its chart. By the way, did you use the same lens with each camera?
  17. I want neither. This is just a discussion. The film look and the "cinematic" look are different, but they are not mutually exclusive goals.

      Let's not "put on airs." I'm no huge fan of Seinfeld, but I would say that Seinfeld was more "cinematic" than the home movies that I linked. As I recall, on Seinfeld they occasionally did use camera movement (staging), inserts and nearby CUs (lensing), and they had motivated lighting to convey certain environments. It was not an entirely flatly lit, distantly shot, three-camera sitcom. Even if Seinfeld were utterly "uncinematic," how does your point differ from my example of home movies not being cinematic, yet being undeniably captured on film?

      I don't think that we disagree here. I am not advocating emphasis on post, nor am I suggesting that we should dwell on technical aspects. I merely addressed your linked comparison (and the comparison that you quoted) between digital cameras and film stocks. My point is simply that there are certain variables that can make video look like it was shot on film. Some of these variables must be wrangled while shooting and some are dealt with in post, but none of them have anything to do with being "cinematic."

      Not necessarily. The Vilm camera had low dynamic range compared to current digital cameras. Also, much of the early video that FilmLook processed was captured with analog cameras of low dynamic range. Of course, having extra dynamic range helps.

      In regards to bit depth, I don't think that it is an important factor in mimicking film. There are plenty of film stocks that have low color depth, so high bit depth (as a factor of digital color depth) is not necessarily crucial in making video look like film. By the way, as you may have gathered from the parenthetical part of the previous sentence, bit depth is not color depth. Also, bit depth and dynamic range are independent properties.

      I agree that the peculiar way emulsion rolls off the highlights/bright areas is an important characteristic to address in emulating film with video. After using up a significant portion of the dynamic range to deal with the highlights, it certainly is beneficial to have more room left over in the middle and low end for a nice contrast range and color depth. However, huge dynamic range is not crucial in merely making video look like film.

      Peculiarities of emulsion grain should likely be considered when trying to emulate film with video. I don't know anything about post grain simulations (though see the toy sketch at the end of this post), but film grain is a somewhat controversial and complex topic in regards to film look. However, I have always understood that with negative stock, the grain clumps are usually larger and more overlapping in the bright areas, while grains are smaller but more separated and distinct in the dark areas. Also, the way noise appears in the shadows in digital is somewhat analogous to the more visible grain in shadows with film, and there have certainly been a few posts in this forum about how noise from certain cameras feels "organic."

      To me, whether or not film has an edge in color is largely subjective. In addition, I couldn't say that all film has a greater color depth than all digital sensors. The color depth of different film stocks varies dramatically, as does that of digital cameras/sensors. Special processing also significantly affects color depth and contrast in emulsions. Larger film formats yield greater color depth. Higher bit depth does yield greater color depth in digital, but, again, bit depth and color depth are not the same thing.

      No, I'm not confusing display technology with capture technology. I most certainly made a distinction between the viewing bit depth and the capture range in the YouTube scenario. I was additionally making the point that low bit depth in the post-capture stage doesn't influence whether or not something looks like it was shot on film. Apparently, we agree here.

      I don't have an iPhone (I never fell for the Apple trick), but I have no doubt that I could shoot something "cinematic" with it.
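      Here is the toy grain sketch mentioned above (Python/NumPy; purely illustrative and of my own devising, not a production grain tool): it adds noise whose amplitude falls off with luminance, echoing the way grain reads as more distinct in the shadows.

          import numpy as np

          def add_shadow_weighted_grain(frame, strength=0.05, seed=0):
              """frame: float image in 0..1; returns it with luminance-weighted noise."""
              rng = np.random.default_rng(seed)
              noise = rng.normal(0.0, strength, frame.shape)
              weight = 1.0 - frame  # heavier noise where the image is darker
              return np.clip(frame + noise * weight, 0.0, 1.0)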
  18. I don't think that's it. It is important to differentiate between what looks "filmic"/"cinematic" and what looks like it was shot on film. Consider this home movie and this home movie. In regards to lighting, lensing and staging, there is nothing particularly filmic/cinematic about these home movies, but both of them were unmistakably shot on film. Also, the dynamic range of the stock was probably equivalent to a 7 1/2-stop or smaller capture range. Just before digital took over, negative stock was generally rated at only a 7 1/2- to 8 1/2-stop capture range, with normal processing. So, the technical capture range of today's pro and prosumer cameras is greater than that of film stocks. Furthermore, those home movies are channeled through 8-bit YouTube pipes, so the bit depth of what we are viewing is only 8 bits, yet the footage is obviously captured on film. It's something else... a combination of variables that can be quantified for the most part. The FilmLook people and Eddie Barber (with his Vilm camera) were early pioneers in making video look like film, and they evidently did a thorough analysis of the variables involved. I never used FilmLook, but I did shoot with the Vilm camera, and I was able to glean a little on how it is done.
  19. Uh, I understand that he was asking about his adapter/focal-reducer, but if I read correctly, he is not sure what mount to get with his lenses. As you subsequently concurred, I suggested that he get Nikon F mounts on his lenses. However, I also noted that many Canon EF adapters/focal-reducers are more expensive than their counterparts for other mounts. So, he might be better off getting such adapters with a different mount (one that would allow Nikon F lenses to be adapted).
  20. Go for the Nikon mount on your lenses. You will always be able to adapt the lens with a Nikon mount onto anything with a Canon EF mount (given that the version of the same lens with an EF mount also fits). However, the opposite is not true -- a lens with a Canon EF mount will be incompatible with quite a few adapters and focal reducers, a few of which are only available with a Nikon mount. Furthermore, the Canon EF version of some focal reducers and adapters are significantly more expensive than those with most other mounts, because of the electronics required for Canon EF lenses. Go with the Nikon mount on your lenses.
  21. If your input frame rate is not a multiple of the output frame rate (or vice versa), then you might need to blend some of the input frames for the output to seem smooth. Frame blending can involve a meshing of interlaced fields from adjacent frames (as done in typical pull-downs) or a digital blending of adjacent progressive frames. A touch of motion blur can be added digitally if needed. I don't do much post work, but I would guess that rendering time is the biggest drawback of digital frame blending and of adding motion blur. An interlaced pull-down/pull-up uses fewer resources than some digital frame blending processes. Of course, if you are making a simple conversion in which your input frame rate is a multiple of the output frame rate (e.g., 48fps -> 24fps), you are simply dropping unneeded frames, and very little computer power is needed (see the sketch below). There must be examples of various frame rate conversions and digital motion blur on YouTube, Vimeo, DailyMotion, etc. At any rate (pun unintended), it is probably best/easiest to capture in the frame rate that you will be outputting.
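      A rough Python/OpenCV sketch of the simple 48fps -> 24fps case (filenames and codec are placeholders), with optional 50/50 blending of each adjacent frame pair to approximate motion blur:

          import cv2

          cap = cv2.VideoCapture("input_48fps.mp4")
          w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
          h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
          fourcc = cv2.VideoWriter_fourcc(*"mp4v")
          out = cv2.VideoWriter("output_24fps.mp4", fourcc, 24.0, (w, h))

          BLEND = True  # False = just drop every other frame
          while True:
              ok_a, frame_a = cap.read()
              ok_b, frame_b = cap.read()
              if not ok_a:
                  break
              if BLEND and ok_b:
                  # 50/50 blend of the two source frames approximates motion blur.
                  out.write(cv2.addWeighted(frame_a, 0.5, frame_b, 0.5, 0))
              else:
                  out.write(frame_a)  # straight drop: keep one of every two frames

          cap.release()
          out.release()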
  22. Strobes are measured in watt-seconds (not watts) and in guide numbers. Guide numbers rate the actual illumination output of a strobe, while watt-seconds usually measure the electrical expenditure of a strobe's "power pack." A watt-second is equivalent to the expenditure of one watt of power for one second. So, a momentary flash from a 1,000 watt-second strobe is equivalent to leaving the shutter open for one full second with a 1,000 watt constant light source (all other variables being equal). That's a lot of light. Let's make another comparison between a 1,000 watt constant light and a 1,000 watt-second strobe. If you shoot 24fps video with a 180-degree shutter, your shutter speed is 1/48th of a second. So, shooting 24fps video with a 1,000 watt constant light only yields the equivalent of 1/48th of 1,000 watt-seconds per frame -- about 20.8 watt-seconds. Generally, if a monoblock strobe and a strobe with a separate power pack have the same watt-second rating, the monoblock will be brighter. Monoblocks are more efficient because they have no head cable to incur line loss. If you intend to do a lot of stills outdoors and/or shoot stills indoors with large sources (umbrellas, soft boxes), then you will probably be much happier with strobes. Strobes are a lot more powerful than most constant sources, and strobes can freeze/sharpen action. Strobes allow one to make daytime exteriors look dark, with a short shutter speed and short flash duration. Most LED and fluorescent sources can't even come close to achieving this power/time density, and using focused tungsten and HMI sources to do the same would probably fry the subject.
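      That arithmetic as a tiny Python sketch (values straight from the example above):

          # Energy a constant source delivers during one exposure, in watt-seconds.
          def constant_source_ws(watts, shutter_seconds):
              return watts * shutter_seconds

          # 1,000 W constant light at 24fps with a 180-degree shutter (1/48 s):
          print(constant_source_ws(1000, 1 / 48))  # ~20.8 Ws vs. 1,000 Ws per flash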
  23. Thank you!!!! That's perfect -- no video necessary! Considering Panasonic mirrorless now!
  24. Thank you for your suggestion, but I have actually been directly involved in the last two pages, and it is not clear whether or not others understand exactly what I am asking. There seems to be confusion between "aperture" and "shutter." If someone with a Panasonic camera and lens would merely observe the lens aperture (set to f11-f22) during a 1 second exposure, the answer would be clear. So, are you saying that you observed the lens aperture momentarily stopping down when the shutter was released?