Everything posted by tupp

  1. tupp

    NEED HELP

Use the Glidecam with a rain cover and keep all flaps on the rain cover tucked in (or use tape). Should be okay. Another option is to use a heavy, clear freezer bag around the camera, with the opening taped to the lens hood.
  2. tupp

    NEED HELP

When you said "Glidecam," I got the impression that you were using a Steadicam-type rig -- not an electronic gimbal. Furthermore, a solid underwater housing might be too much for a gimbal.
  3. tupp

    NEED HELP

An underwater housing seems like overkill for your purpose. A rain cover and a strong lens filter might be better and easier. If you are concerned about damage from the impact force of the paintballs, you could try an insulated rain cover.
  4. Nope. I said "A conversion can be made so that the resulting Full HD 10-bit image has essentially the equivalent color depth as the original 4K 8-bit image." I didn't say anything about the original image having "reduced" color depth. You came up with that. However, the original image does have lower bit depth than the down-converted image -- even though both images have the same color depth. Yes. That is a fact -- all other variables being the same in both instances. No. It doesn't disagree with anything that I have said. You are just confusing bit depth with color depth. With such a down-conversion, resolution is being swapped for greater bit depth -- but the color depth remains the same in both images. It really is simple, and Andrew Reid's article linked above verifies and explains the down-conversion process. No. The banding artifact is caused by the bit depth -- not by the color depth. Posterization, bit depth and color depth are all different properties. Well, it would be so if bit depth and color depth were the same thing, but they're not. Stop confusing bit depth with color depth. I linked an article by the founder of this forum that verifies from a prominent image processing expert that one can swap resolution for higher bit depth when down-converting images -- while retaining the same amount of color information (color depth): Also, that very article gives a link to the expert's Twitter feed, should you care to verify that the down-conversion process is actually a fact. Furthermore, I have given an example proving that resolution affects color depth, and you even agreed that it was correct: Additionally, I gave a lead-pipe-cinch demonstration of how increasing pixel sites within a given area creates more possible shades, to which you did not directly respond. Do you deny the facts of that demonstration? I have repeated essentially the same thing over and over. It is you who keeps lapsing into confusion between color depth and bit depth. I have paraphrased the concept repeatedly in different forms, and I have given examples that clearly demonstrate that resolution is integral to color/tone depth. In addition, I have linked an article from the forum founder that further explains the concept and how it can work in a practical application. If you don't understand the simple concept by now, then I am not sure there is much more that I can do. Wait, I thought that you said: I never asserted that "resolution can increase bit-depth." Please quit putting words into my mouth. Changing the resolution will have no effect on the bit depth, nor vice versa -- resolution and bit depth are two independent properties. On the other hand, resolution and bit depth are the two non-perceptual factors of color depth. So, if the resolution is increased, the color depth is increased (as exemplified in halftone printing and in my pixel-site example). Likewise, if the bit depth is increased, the color depth is increased. Banding/posterization is something completely different. It is a phenomenon that can occur with lower bit depths in some situations, but it also can occur in analog imaging systems that possess neither bit depth nor pixels. The color depth of an image is not affected by whether or not the image exhibits posterization. Let's say that you shoot two 4K, 8-bit images of the sky within the same few seconds: one aimed at a part of the sky that exhibits no banding and one aimed at a portion of the sky that produces banding.
Then, you down-convert both images to Full HD, 10-bit using the method described in Andrew Reid's article in which no color information is lost from the original images. Would you say that the Full HD, 10-bit image of the banded sky has less color depth than the Full HD 10-bit image of the smooth sky? Well, it's good that you are looking out for us. Now, if you could only avoid confusing bit depth and color depth...
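
For anyone who wants to sanity-check the counting behind that down-conversion, here is a minimal sketch in Python/NumPy, assuming a plain 2x2 box average (real converters may use fancier resampling kernels -- this only illustrates the resolution-for-bit-depth swap):

```python
import numpy as np

# Minimal sketch: trade 4K spatial resolution for 10-bit tonal precision.
# Each 2x2 block of four 8-bit samples (0..255) is summed into one sample
# spanning 0..1020 -- i.e. roughly 10 bits -- halving width and height.
def downconvert_4k8_to_hd10(frame: np.ndarray) -> np.ndarray:
    h, w = frame.shape
    blocks = frame.astype(np.uint16).reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3))  # shape (h//2, w//2), values 0..1020

# A 4K-sized 8-bit gradient becomes a Full HD image with 10-bit values.
frame = np.tile(np.linspace(0, 255, 3840, dtype=np.uint8), (2160, 1))
hd10 = downconvert_4k8_to_hd10(frame)
print(hd10.shape, hd10.max())  # (1080, 1920) 1020
```

Note that this only shows the bit-depth arithmetic; chroma subsampling is a separate consideration treated in the article.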
  5. I sense irony here. A conversion can be made so that the resulting Full HD 10-bit image has essentially the equivalent color depth as the original 4K 8-bit image. Of course, there are slight conversion losses/discrepancies. The banding remains because it is an artifact that is inherent in the original image. That artifact has nothing to do with the color depth of the resulting image -- the banding artifact in this case is caused merely by a lower bit depth failing to properly render a subtle transition. However, do not forget that bit depth is not color depth -- bit depth is just one factor of color depth. It's actually very simple. I never claimed that the banding would get eliminated in a straight down-conversion. In fact, I made this statement regarding your scenario of a straight down-conversion from an 8K banded image to an SD image: Where did you get the idea that I said banding would get eliminated in a straight conversion? You have to grasp the difference between banding/posterization artifacts and the color depth of an image.
  6. If you (and/or your client) like the aspect ratio and like the fact that you are using a wider portion of the image circle of your lenses, then, to me, those are the most important considerations. So, you are probably best off shooting at 4096x2160 (DCI 4K) and down-converting cleanly to 2048x1080 (DCI 2K) or less cleanly to 1920x1013. Any extra rendering time caused by the odd pixel height of the "less clean" resolution would likely be minimal, but it would probably be a good idea to test it, just to make sure.
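
(For reference, the odd height falls straight out of the scale factor: 2160 x 1920 / 4096 = 1012.5, which rounds to 1013 -- hence 1920x1013.)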
  7. Glad to know that I am making progress. You have not directly addressed most of my points, which suggests that you agree with them. Firstly, the banding doesn't have to be eliminated in the down-conversion to retain the full color depth of the original image. Banding/posterization is merely an artifact that does not reduce the color depth of an image. One can shoot a film with a hair in the gate or shoot a video with a dust speck on the sensor, yet the hair or dust speck does not reduce the image's color depth. Secondly, broad patches of uniformly colored pale sky tend to exhibit shallow colors that do not use much of the color depth bandwidth. So, it's not as if much color depth is lost in the areas of banding. Thirdly, having no experience with 8K cameras, I am not sure if the posterization threshold at such a high resolution behaves identically to those of lower resolutions. Is the line in the same place? Is it smooth or crooked or dappled? In regard to eliminating banding during a down-conversion, there are many ways to do so. One common technique is selective dithering, and I have read that diffusion dithering is generally favored over other dithering methods (see the sketch below).
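
To make the dithering point concrete, here is a minimal sketch of Floyd-Steinberg error diffusion, the textbook diffusion-dithering method -- a Python/NumPy illustration of the general idea, not the exact algorithm any particular converter uses:

```python
import numpy as np

# Minimal Floyd-Steinberg error diffusion: quantize each pixel to the
# target number of levels, then push the rounding error onto unprocessed
# neighbors so a smooth gradient breaks up into fine noise instead of
# visible bands.
def floyd_steinberg(channel: np.ndarray, levels: int = 256) -> np.ndarray:
    img = channel.astype(np.float64).copy()  # expects values in [0, 1]
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = np.clip(round(old * (levels - 1)), 0, levels - 1) / (levels - 1)
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img

# Example: a subtle gradient that would band at 8 bits gets dithered.
grad = np.tile(np.linspace(0.40, 0.45, 1920), (16, 1))
dithered = floyd_steinberg(grad, levels=256)
```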
  8. Nope. Color depth is the number of different colors that can be produced in a given area. A given area has to be considered, because imaging necessarily involves area... and area necessarily involves resolution. Obviously, if a 1-bit imaging system produces more differing colors as the resolution is increased, then resolution is an important factor in color depth -- it is not just bit depth that determines color depth. The above example of common screen printing is just such an imaging system: it produces a greater number of differing colors as the resolution increases, while the bit depth remains at 1-bit. The Wikipedia definition of color depth is severely flawed in at least two ways: it doesn't account for resolution; and it doesn't account for color depth in analog imaging systems -- which possess absolutely no bit depth nor pixels. Now, let us consider the wording of the Wikipedia definition of color depth that you quoted. This definition actually gives two image areas for consideration: "a single pixel" -- meaning an RGB pixel group; and "the number of bits used for each color component of a single pixel" -- meaning a single pixel site of one of the color channels. For simplicity's sake, let's just work with Wikipedia's area #2 -- a single-channel pixel site that can render "N" tones at its given bit depth. We will call the area of that pixel site "A." If we double the resolution, the number of pixel sites in "A" increases to two. Suddenly, we can produce more tones inside "A." In fact, area "A" can now produce "N²" tone combinations -- many more than "N" tones. Likewise, if we quadruple the resolution, "A" suddenly contains four times the pixel sites that it did originally, with the number of possible tone combinations within "A" now increasing to "N⁴." Now, one might say, "that's not how it actually works in digital images -- two or four adjacent pixels are not designed to render a single tone." Well, the fact is that there are some sensors and monitors that use more pixels within a pixel group than those found within the typical Bayer pixel group or within a striped RGB pixel group. Furthermore (and probably most importantly), image detail can feather off within one or two or three pixel groups, and such tiny transitions might be where higher tone/color depth is most utilized. By the way, I didn't come up with the idea that resolution is "half" of color depth. It is a fact that I learned when I studied color depth in analog photography in school -- back when there was no such thing as bit depth in imaging. In addition, experts have more recently shown that higher resolutions give more color information (color depth), allowing for conversions from 4K, 4:2:0, 8-bit to Full HD, 4:4:4, 10-bit -- using the full, true 10-bit gamut of tones. Here is Andrew Reid's article on the conversion and here is the corresponding EOSHD thread.
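
As a toy sanity check of the pixel-site argument, here is a hypothetical brute-force count in Python. It counts distinct area-average tones, which grow more slowly than the count of distinct spatial patterns, but the direction is the same: more sites inside the same area means more renderable tones.

```python
from itertools import product

# Count the distinct average tones a fixed area "A" can render when it
# holds `sites` pixel sites, each capable of `levels` values.
def area_tones(sites: int, levels: int) -> int:
    return len({sum(p) for p in product(range(levels), repeat=sites)})

print(area_tones(1, 4))  # 4 tones: one 2-bit site in A
print(area_tones(2, 4))  # 7 tones: double the resolution inside A
print(area_tones(4, 4))  # 13 tones: quadruple the resolution inside A
```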
  9. Well, this scenario is somewhat problematic because one is using the same camera with the same sensor. So, automatically there is a binning and/or line-skipping variable. However, barring such issues and given that all other variables are identical in both instances, it is very possible that the 8K camera will exhibit a banding/posterization artifact just like the SD camera. Nevertheless, the 8K camera will have a ton more color depth than the SD camera, and, likewise, the 8K camera will have a lot more color depth than a 10-bit, 800x600 camera that doesn't exhibit the banding artifact. Of course, it is not practical to have 1-bit camera sensors (but it certainly is possible). Nonetheless, resolution and bit depth are equally weighted factors in regard to color depth in digital imaging, and, again, a 4K sensor has 4 times the color depth of an otherwise equivalent Full HD sensor.
  10. I acknowledged your single "complexity" (bit rate), and even other variables, including compression and unnamed natural and "artificial" influences such as A/D conversion methods, resolution/codec conversion methods, post image-processing effects, etc. By the way, a greater bit rate doesn't always mean superior images, even with all other variables (including compression) being the same. A file can have a greater bit rate with a lot of the bandwidth unused and/or empty. One is entitled to one's opinion, but the fact is that resolution is integral to digital color depth. Furthermore, resolution has equal weighting to bit depth when one considers a single color channel -- that is a fundamental fact of digital imaging. Here is the formula: COLOR DEPTH = RESOLUTION x BIT DEPTH^n (where "n" is the number of color channels and where all pixel groups can be discerned individually). Most don't realize it, but a 1-bit image can produce zillions of colors. We witness this fact whenever we see images screen printed in a magazine, on a poster or on a billboard. Almost all screen-printed photos are 1-bit images made up of dots of ink. The ink dot is either there or it is not there (showing the white base) -- there are no "in-between" shades. To increase the color depth in such 1-bit images, one must increase the resolution by using a finer printing screen. That resolution/color-depth relationship of screen printing also applies to digital imaging (and also to analog imaging), even if the image has greater bit depth. I simply state fact, and the fact is that 4K has 4 times the color depth and 4 times the bit rate of Full HD (all other variables being equal and barring compression, of course).
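
The screen-printing relationship is easy to verify with a little arithmetic: an n x n cell of 1-bit dot positions can render n² + 1 distinct ink coverages (0 dots on, 1 dot on, ... all dots on), so doubling the screen resolution roughly quadruples the available tones. A minimal sketch:

```python
# A 1-bit halftone cell with n x n dot positions renders n*n + 1 distinct
# ink coverages, so finer printing screens yield more tones from the same
# 1-bit "ink or no ink" medium.
def halftone_tones(n: int) -> int:
    return n * n + 1

for n in (2, 4, 8, 16):
    print(f"{n}x{n} cell -> {halftone_tones(n)} printable gray levels")
```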
  11. 4K has 4 times the color depth (and 4 times the bit rate) of Full HD, all other variables being equal and barring compression or any artificial effects.
  12. Finding a cheap, fun camera certainly can be part of the fun for those who can afford to buy one. Another part of the fun is using inexpensive gear to shoot something compelling, which can be done with a camera that one already owns. Why exclude those who can't buy a camera, merely because they can't afford to experience one part of the fun?
  13. I wasn't suggesting anything about your points regarding social mores. I was merely showing what is, to my knowledge, the only feature film prior to 1980 that primarily addresses the issues of having a female US president. It is not just a film that happens to have a female US president as a secondary character. Of course, the mores have changed dramatically since 1964, so much so that the ending (and title) of "Kisses For My President" would have to be different. On the other hand, I don't think that changing social mores or politics is at the heart of the mediocrity of our age. Certainly, shoehorning diversity into content doesn't help, but there are larger reasons for the shallow, uninspired material that we encounter today.
  14. The general idea for this contest is great, but forcing folks to buy a camera might be a deal-breaker for some. Perhaps it should be stipulated that the camera merely has to be "trending" on eBay for no more than US$150.
  15. tupp

    Filters?

    Although the company is gone, Harrison & Harrison was a dominant filter maker for cinema "back in the day." They invented black-dot diffusion, which is the basis of Black Pro-Mist filters and of other derivative filter technology. Well, the set of 5 filters that I linked was listed at US$200, but, as mentioned, H&H filters can sometimes be found individually. What is a P2K? Definitely interested in that. Keep in mind that although the black levels can be lifted with diffusion filters, that doesn't mean that one will see more detail in the shadows. To approximate the black-dot effect, the black spray-paint specks should be "embedded" within a diffusion layer (hair spray or something similar). Not sure what you seek here nor if any existing lens filters can yield such results. On the contrary, if you DIY, you are in complete control of the distribution of the diffusion medium. In the videos that I watched, it didn't seem too difficult. I don't know, guess it's just me... If you have a good lens hood or matte box (or a solid French flag), the flare will be reduced when the Sun is out of frame. It shouldn't take 20 minutes to set up a lens hood. I am not suggesting lifting the blacks. To add ambient fog in post, one basically slaps a smooth, white, slightly diffusing layer/track over the image, and then adjusts the opacity of that white layer/track as desired (see the sketch below). Doing so is very similar to an out-of-frame light source hitting a lens diffusion filter. If one wants the look of ambient flare on a lens diffusion filter, one can similarly lower the camera exposure and then use the post method stated directly above. The results will closely simulate doing it all in-camera with the higher black levels and no extra noise, plus one will have more control over the level of "ambient flare."
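
For the fog-in-post step described above, here is a minimal sketch of the white-layer composite -- plain alpha blending in Python/NumPy, under the assumption that an NLE's opacity control over a white solid does effectively the same thing:

```python
import numpy as np

# "Ambient fog in post": composite a flat white layer over the frame at
# a chosen opacity. This lifts the blacks and flattens contrast much like
# an out-of-frame source flaring a lens diffusion filter.
def add_ambient_fog(frame: np.ndarray, opacity: float = 0.15) -> np.ndarray:
    # frame: float RGB in [0, 1]; opacity: 0 = no fog, 1 = pure white
    white = np.ones_like(frame)
    return (1.0 - opacity) * frame + opacity * white
```

Animating the opacity value then gives the adjustable "level of ambient flare" mentioned above.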
  16. tupp

    Filters?

    What size do you need? Here is an 82mm Tiffen Low Contrast filter for sale. If your lens is smaller, you could just use a step-up ring. By the way, there are plenty of YouTube videos on making DIY Black Pro-Mist filters. One can even make a smaller increment than 1/8. To approximate the black-dot process, one needs to apply the black spray paint before the hair spray (or other diffusion spray). Also, the Harrison & Harrison black-dot originals can still be found for sale in sets or individually. It always puts a smile on one's face when a YouTuber conducts a test with just a frontal light source, and the subject turns their head from left to right. As he suggests, it's generally best to use a lenser (a flag that keeps an out-of-frame light source from hitting the lens/filter) or a hood/matte box. One can always add ambient fog in post.
  17. Great! Hopefully, this model will not require a recorder for raw footage.
  18. Anyone can call almost anything "art." Art mostly defies definition. Art doesn't have to push boundaries -- art can be something that is merely pretty. It can also be something that is stimulating, funny or entertaining in some way. To me, the big problem with movies and television today is that there aren't a lot of good, original stories being generated. Similarly, there just isn't a lot of inspired originality anymore in the other performing arts, such as music, dance and theatre. We find ourselves deep in the age of mediocrity. Some will put the blame on the conglomeration of entertainment companies along with the onset of digital technology. Huge corporations (and talentless board members) making most of the big decisions in the arts has got to water things down. Also, before digital, one had to be more deliberate, thoroughly flesh out ideas, and be extensively prepared, talented and/or experienced. With digital, one can shoot things more "on the fly," without prep or originality and with minimal artistic ability and little know-how.
  19. tupp

    Scanning film

    Good to know for anyone working in that area. They have three scanners. Thanks!
  20. tupp

    Filters?

    Agreed. A good lens choice should reduce the video look more readily than diffusion filters. Vintage lenses are ideal. If you can't get Xtal Express, use a vintage spherical lens.
  21. tupp

    Filters?

    Yes, of course, but if one exposes properly and/or uses HDR features, then it might be possible to match "blown-out" areas in the frame. Additionally, lens diffusion scattering from out-of-frame sources is also influenced by lens hoods and matte boxes. In the 1970s, David Hamilton was the king of using lens diffusion while blowing out highlights and light sources. As I recall, black-dot lens diffusion didn't appear until the early 1980s, and Hamilton would push Ektachrome, which increased contrast, countering the softness/flatness produced by the lens diffusion. In addition, pushing gave coarser grain, which worked well for Hamilton's soft aesthetic.
  22. tupp

    Filters?

    Certainly there are many diffusion effects that can be emulated accurately in post. Furthermore, there are also diffusion effects that are exclusive to post which can't be done with optical filters. However, there are some optical filters which can't be duplicated digitally, such as IR cut/pass filters, UV/haze filters, split-diopters, pre-flashing filters, etc.
  23. Well, the 16S, the Bolex, the Krasnogorsk, etc. all had their eyepieces at the rear of the camera, so they weren't shoulder mounted. There were a few tricks that one could practice to keep them stable. There were also other brackets (such as belt pole rigs) that could help. Of course, weight could always be added for more stability. I am with you on shoulder rigs. A balanced shoulder rig is always fairly stable regardless of weight.