Everything posted by tupp
-
Nope. I did not say to compare differing formats using identical emulsions. In fact, I specifically stated "when using film stocks that gave comparable resolution (color depth) from each format," which means using a coarser-grained stock on the larger format. Doing so makes larger and smaller formats more similar in color depth (or more similar "in colour and tonal 'sophistication,'" as you put it). If you shot a 35mm image with Kodachrome 25 and compared it to a 6x4.5 image shot with Kodacolor 400, you would probably find that the smaller 35mm format has more color depth. However, you might notice a slight "improvement" in the DOF roll-off and in focal plane solidity on the larger format, given the same quality of optics on both formats and with DOFs of matching mathematical "equivalence." To me, that optical advantage transcends a difference in color depth, which in a similar digital scenario is also dependent on the resolution (assuming that bit depth and DR are the same on both the larger and smaller sensors). Certainly, everyone has a right to their opinion, but the overriding difference in look between larger and smaller formats seems to be of an optical quality, as exemplified by the Kodachrome/Kodacolor scenario above and as demonstrated in the many eCyclops/MiniCyclops images captured with HD CMOS cameras.
-
I disagree for two reasons: 1. The difference was apparent in the analog days on both 35mm and large format cameras when using film stocks that gave comparable resolution (color depth) from each format; 2. Gonzalo Ezcurra generally uses CMOS/Bayer cameras with his eCyclops and MiniCyclops ultra-large format DOF adapters. As is apparent from this demo shot with the MiniCyclops and a 5D mkII, such a drastically large format (combined with an appropriate large format lens) yields a uniquely rich DOF roll-off and an extraordinarily clean, flat focal plane, regardless of the type of camera sensor. So, a larger format with the appropriate optics does seem to make a difference that has nothing to do with the sensor type. If you can find a Super 16 lens and camera that gives the same performance as shown with the MiniCyclops, I would love to see it. You can even use a Digital Bolex CCD camera and shoot in Super 16 mode.
-
For years, I've used this plastic, rod-mounted lens shade.
-
Fitting your lens with Lux gears might help keep ACs from scratching the housing.
-
More details would be helpful, such as the type of interference you are experiencing (static noise? a distorted signal?) and the bands/frequencies you have manually set. However, it sounds like your camera/lens combination might be generating weak, spurious EMI (electromagnetic interference) that your receiver is picking up due to proximity. If so, changing frequencies/bands might help. If the housing of your receiver is plastic, you might try wrapping the housing with metal foil as a test to see if it reduces the interference (leave a small space between the foil and the base of the antenna). By the way, RF means "radio frequency" -- not necessarily "interference." It appears that others have had problems with the EW100 G3. If nothing else works, you might try using a spectrum analyzer to see if the camera rig (or some other source) is generating EMI.
-
Stacking filters is no problem. I generally put my polarizer up front for convenience (as I change out a polarizer most often) and to avoid interaction with the other filters. A variable ND is essentially two polarizers. Can't recommend any adapters nor matte boxes, but it sounds like you are using screw-in filters, at least for the variable ND (77mm). There is nothing optically disadvantageous about screw-in filters, except that filter changes take longer, and you usually have to unscrew and reattach the filters with each lens change. Also, if you stack too many screw-in filters, vignetting can start to appear.
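To illustrate why a variable ND behaves like a pair of stacked polarizers, here is a minimal sketch that applies Malus's law, assuming two ideal polarizers and ignoring the fixed loss from polarizing unpolarized light in the first place:

```python
import math

def variable_nd_stops(angle_deg):
    """Approximate added density (in stops) of two stacked ideal polarizers
    rotated angle_deg apart, per Malus's law: T = cos^2(theta).
    Ignores the fixed loss of the first polarizer on unpolarized light."""
    t = math.cos(math.radians(angle_deg)) ** 2
    return -math.log2(t) if t > 0 else float("inf")

for angle in (0, 30, 45, 60, 75, 85):
    print(f"{angle:2d} deg -> ~{variable_nd_stops(angle):.1f} stops")
```

Rotating the front element from 0 to 85 degrees takes the pair from roughly 0 to about 7 added stops, which is why a small rotation near the dense end changes exposure quickly.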
-
I never tried the original CYL anti-aliasing filters, but it appears that a single, variable CYL anti-aliasing filter was recently developed that can replace a whole set of the original CYL filters. Here is a demo of the new variable CYL filter:
-
Panavision DXL revealed, an 8K 60fps RAW cinema camera using RED's codec
tupp replied to Andrew Reid's topic in Cameras
No. The Panavision rep didn't say anything about "demand." Panavision does "one-offs." They are known for doing so. They are proud of doing so. There doesn't have to be any huge "demand." An ASC member or some DP with a big project might ask them for something special, and there is a good chance that they will provide it (probably based on whether or not they can make it without too much expense). After the customer is finished, they put the part on the shelf, and sometimes it gets rented again or sometimes it doesn't. Panavision can operate in this fashion because they don't have to sell anything. The old Clairmont Camera outfit worked similarly, making specialty items that individuals would request, but Clairmont was a little more active in trying to rent out the special items.
-
At Cinegear, I asked a Panasonic salesman about the HDMI output of both the GX85 and the G7, and he insisted that both gave 8 bit, 4:2:0 video out of HDMI. Later, I went to the Atomos booth and asked a rep if any of their recorders could sense the bit depth and chroma sub-sampling of a camera signal (to test the outputs of the G7 and GX85). He said no, but after taking a swig from his beer, he boasted that he knew the output specs of every camera and that the G7 was definitely 8 bit, 4:2:2. After I offered a US$5 wager that the G7 wasn't 8 bit, 4:2:2, we went to the Panasonic booth to find out the answer. With the tipsy Atomos rep at my side, suddenly the Panasonic salesman was not so sure as before. He admitted that the outputs of some cameras hadn't been tested, but that the corporate office had instructed the sales reps to say 8 bit, 4:2:0 for most cameras. However, he was definitely sure that the G7 outputs 8 bit, 4:2:0. So, I don't know what to tell you. Hopefully, someone will connect the GX85 to a capable monitor/recorder/analyzer and post the answer here.
-
Panavision DXL revealed, an 8K 60fps RAW cinema camera using RED's codec
tupp replied to Andrew Reid's topic in Cameras
That certainly is earth-shattering news... Actually, the fact that Panavision only rents is the very reason why they have more flexibility to make specialty items and one-offs, compared to the other manufacturers who only make money off of sales/service. In fact, Panavision has bragged about that advantage over the years, and, indeed, when I asked the Panavision rep about a special front plate, he smiled and said, "we only rent, and we make what we rent, so we can fabricate whatever we want to fit onto our cameras." Here are just a few of the specialty items that Panavision has made/adapted over the years, which they still rent. If some ASC member wants to adapt his old Hasselblad glass to the DXL for a feature, you can bet that Panavision will jump at his/her request. I wouldn't be so sure. Light Iron is a high-end DIT/finishing outfit, and they have just a little bit of experience over the years in dealing with the shortcomings of Red color. If they initiated the collaboration between Panavision and Red, and if they were involved in the design/engineering of the A/D converters and/or the color algorithms of the DXL, I would guess that the image from the DXL is a step or two above that of a stock RED unit. Furthermore, even without their Light Iron division, Panavision has been no slouch in regards to their past digital cameras. In addition, I am fairly sure that they still own the Dynamax sensor foundry. I also wouldn't be so sure of that, as Panavision is known for making deals.
-
Panavision DXL revealed, an 8K 60fps RAW cinema camera using RED's codec
tupp replied to Andrew Reid's topic in Cameras
I saw the camera at Cinegear yesterday. I asked one of the reps why they collaborated with Red on this camera, and he said that Panavision's newly acquired Light Iron division pushed the partnership (and helped design the camera's color science), because Light Iron was strongly Red oriented when it was an independent DIT/finishing house. It's not just a "rebranded" Red camera -- they evidently put some effort into "Panavising" it and changing not just the ergonomics, but also the image. By the way, the Panavision rep also said that the camera probably won't be available for the plebeians until 2017. Also, when asked about special lens mounts for alternative medium format lenses, the rep said that they might consider making special mounts on a piecemeal basis, but that they would probably prefer to adapt a lens to the Panavision mount and keep it in their rental stock.
-
Thank you for the first-hand insight into the distinctiveness of the Cyclops' look. Perhaps the "buffer" in the highlights is similar to the "glow" that many enjoyed with 35mm DOF adapters. It certainly would be interesting to see a side-by-side DOF/look comparison test between a Cyclops and a much smaller format camera (such as a BMPCC, a BMMCC or a Digital Bolex). A test of such dramatically different formats might finally settle the DOF/look argument.
-
Gonzalo, Thank you for the reply! I guess that quite a few of the extra-large format photography lenses were not designed for extreme swing/tilts. On the other hand, it might be possible to find a cheap lens from one of the old, huge, graphics stat/process cameras that would allow much more play, and also have a larger maximum aperture. By the way, there has been a long-running debate here on whether or not there is a difference in "look" between large formats and smaller formats. Have you found any differences in look between your Cyclops rigs and smaller cameras? If so, please explain the differences that you see. Thanks!
-
There are a lot of different lighting scenarios shown in that video, and many of them are available light. In regards to the screen grab that you posted, the solitary light source seems to be coming from a moderately steep angle, and ever so slightly to camera right. Judging from the falloff and cast shadow edges, it appears to be fairly close to the subject, perhaps two to three feet outside of the frame. The source might be 10-24 inches in diameter/width, and it probably is a flat, smooth-faced (diffused) source. To get a similar effect, you could use an LED panel with some diffusion, or get a small portable soft box (like a Rifa), or just clip a large piece of diffusion to the barndoors of most any fixture. Position the light closer than usual, and have stuff in the background that barely reads in the darker area of the light's falloff.
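To see why placing the source that close makes the background drop off so quickly, here is a minimal inverse-square sketch (the source is treated as a point for simplicity, and the distances are hypothetical, just for the arithmetic):

```python
import math

def falloff_stops(subject_dist_ft, background_dist_ft):
    """How many stops darker the background reads than the subject,
    treating the source as a point: illuminance ~ 1/d^2."""
    ratio = (subject_dist_ft / background_dist_ft) ** 2
    return -math.log2(ratio)

# Hypothetical close source: 3 ft to the subject, 8 ft to the background
print(f"{falloff_stops(3, 8):.1f} stops darker")    # ~2.8 stops
# Same room with the source pulled back: 10 ft to the subject, 15 ft to the background
print(f"{falloff_stops(10, 15):.1f} stops darker")  # ~1.2 stops
```

The closer the source sits to the subject, the bigger the relative jump in distance to anything behind the subject, so the background sinks into the falloff.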
-
Perhaps RawTherapee's default settings/algorithm are not optimized for the GX80, or perhaps its defaults are just a little laid-back in regards to sharpening, saturation and contrast, so some tweaking might help. As I mentioned, I am happy with Darktable's presets as a starting point. I would guess that Iridient Developer uses open source DCRaw, if it already has GX80 raw capability. Many of the open source raw developers/processors are multi-platform, so you can use them on OSX, Windows or Linux. As open source software is usually free, it can't hurt to try other raw processors to see if you like their defaults.
-
Not sure what you are after in regards to a "stable/secure desktop," but running an OS in a VM might not be the best way to test such "stability," because of the resource drain and potential glitches. I have used a lot of different desktops and window managers in the past 14 years, and I never had any problems that I can recall. I tend to use lightweight window managers instead of full desktops. By the way, those who use tiling window managers usually run circles around their "point-&-click" counterparts. In regards to "pro and free" video and photo editors, the two are most definitely not mutually exclusive. A significant number of pros use open source (free) software -- even photo and video editors. For an NLE/compositor, you would probably be using Blender, Cinelerra, Lightworks or Piranha (proprietary, with the high-end version at US$250,000). Kdenlive looks like a good NLE, and it has become more robust and a lot more popular since I played around with it many years ago. The studio version of the Lightworks NLE is probably pretty good, but I have never tried it. There are numerous image editors/processors that run on Linux. My favorites are GIMP and Darktable, but there is also Krita, CinePaint, RawTherapee, Raw Studio, Delaboratory, UFRaw, GTKRawGallery, LightZone, Pixeluvo (looks like an interesting processor/editor combo), Photivo, AfterShot Pro, Fotoxx, etc. These are mostly raster image editors, and, of course, there are also a few open source vector image creators/editors. Both proprietary and open source projects come and go, and no one can guarantee the future. I am guessing that you don't want to stick with FCP. For open source NLEs, Blender has a strong community with a lot of folks crazy about its editing capabilities. The community version of Cinelerra is updated fairly regularly, and it has some unique capabilities (but its default theme is rather garish). I don't know much about the proprietary NLEs, but I think Lightworks has a following. I am keeping my eye on Kdenlive. I wouldn't be so sure of that. I would guess that a few others in the list of open source image processors above can already read raw files from the GX80. Open source projects can move fast. Most of the raw image processing apps have fine color control. I don't know much about RawTherapee, but Darktable has preset camera color profiles for certain camera models/brands/film stocks. Darktable usually defaults to the profile for the brand/model it reads from the EXIF info, but I sometimes use an Agfa profile on my Canon raw images. Of course, Darktable also allows one to create and save custom profiles. I would imagine that RawTherapee and a lot of the other open source raw image processors offer similar preset/custom profile capability. Judging from the fact that RawTherapee already has the capability to import the GX80/85 raw files, I would guess that there is some current activity in that project. I don't know if people have moved from RawTherapee to Darktable -- there are so many options in the open source world, as is evident from the above list of photo editors and raw image processors. I use Darktable because that's what I started with years ago.
-
If you want to try Linux but have no experience, start with one of the newbie distros: Mint, Ubuntu, PCLinuxOS, Mageia, OpenSUSE, etc. Also, you can try most of these distros without installing them by booting "live" versions (liveCD, liveDVD, liveUSB, liveSD, etc.). The live versions of these big newbie distros will usually run more slowly than installed versions, but a live OS running off of a USB 3.0 flash drive might be fairly snappy. By the way, there are multimedia distros designed for video/audio production and photography, such as Ubuntu Studio, AVLinux and Apodio. These multimedia distros will often come with a lot of codecs already installed, but it is fairly easy to install codecs on the non-multimedia distros. In regards to GX80/GX85 raw support, you could just install open source RawTherapee on OSX. It reportedly works with GX80/GX85 raw files, and it uses the open source DCRaw library, upon which the Adobe Camera Raw converter (and pretty much every other raw file converter) is based. Consequently, a lot of the other open source raw "darkroom" apps might also already have the ability to read GX80/GX85 raw files, as many open source projects tend to move faster than their proprietary counterparts. I use open source Darktable, which can also be installed on OSX. If you start moving to Linux, there are other things of which it might be good to be aware, such as which NLEs (both open source and proprietary) are the most actively developed and robust, and which audio editor is ideal for your situation.
-
It's a good thing that @TheRenaissanceMan mentioned Personal-View, otherwise you would have never known about it! http://www.personal-view.com is a camera web site and forum run by Vitaliy Kiselev, who happens to be the founding developer of the GH1 and GH2 hacks. He also uses the site to sell gear.
-
I never bought a G7 from Personal-View, but those who say they have done so confirm unlimited record time in this thread, even after firmware updates.
-
You can get an NTSC Panasonic G7 with a 14mm-42mm lens that has unlimited recording from Personal-View for US$630.
-
No. Don't do that. By summing four values and then multiplying by 0.25, you are actually averaging -- you are not summing. The best way to convert from 4k, 8-bit to HD, 10-bit is to simply sum the four values, which retains the full color depth/accuracy of the original image and which is a perfect mathematical conversion between 8-bit and 10-bit. No multiplier is necessary after summing. Your original hypothetical scenario has equivalent color depth to 1080p, 444, "~9-bit." If you start with 420, the color depth would be less than that original scenario.
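Here is a minimal sketch of that summing conversion, assuming a single 8-bit plane stored as a NumPy array with even dimensions (the array and variable names are just for illustration):

```python
import numpy as np

def sum_downscale_8bit_to_10bit(plane_8bit):
    """Collapse each 2x2 block of an 8-bit plane into one 10-bit value by
    summing the four samples -- no multiplier afterward. Four 8-bit values
    (0-255 each) sum to 0-1020, which fits within the 10-bit range (0-1023)."""
    p = plane_8bit.astype(np.uint16)  # widen first to avoid 8-bit overflow
    return (p[0::2, 0::2] + p[0::2, 1::2] +
            p[1::2, 0::2] + p[1::2, 1::2])

# Example: a fake 2160x3840 8-bit luma plane becomes 1080x1920 with values 0-1020
luma_4k = np.random.randint(0, 256, size=(2160, 3840), dtype=np.uint8)
luma_hd_10bit = sum_downscale_8bit_to_10bit(luma_4k)
print(luma_hd_10bit.shape, int(luma_hd_10bit.max()))
```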
-
Where did 420 come from? We are talking about 422. You started with: Don't average! -- NEVER AVERAGE!! Always SUM! Don't sacrifice overall accuracy/depth for a few wayward pixels. Reduce noise some other, more direct way. If Dugdale started out with 420 instead of 422, of course, that affects the end result. However, my point in response to your hypothetical color depth equivalence example is that the color depth is essentially identical in these three scenarios: 4k-UHD, 422, 8-bit; full-HD, 444, 10-bit-luma/8-bit-chromas (your example); full-HD, 422, 10-bit.
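As a quick illustration of why averaging throws away precision that summing keeps, take one 2x2 block with four arbitrary 8-bit sample values:

```python
# Four hypothetical 8-bit samples from one 2x2 block
samples = [108, 109, 111, 110]

summed = sum(samples)               # 438 -- all of the information, fits in 10 bits
averaged = round(sum(samples) / 4)  # 110 -- squeezed back into 8 bits

print(summed, averaged)
# Sums of 436, 437, 438 and 439 are four distinct 10-bit values, but after
# averaging and rounding back to 8 bits they collapse to just 109 or 110.
```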
-
I think that you mean to say that 4k (UHD), 422, 8-bit image has equivalent color depth to a hypothetical HD, 444, 10-bit-luma/8-bit-chromas image, which is correct. However, I don't think that hypothetical end result is an accurate description of Dugdale's conversion. As I recall, he actually converted a 4k, 422, 8-bit image to HD, 10-bit, 422, which, if properly executed, also retains the full color depth of the original 4k, 422, 8-bit image. Of course, HD, 10-bit, 422 has an equivalent color depth to your hypothetical HD, 444 10-bit/8-bit image. By the way, if the pixels are properly summed in the down-conversion, there is no "pseudo" necessary. All three scenarios have equivalent color depth, with or without dithering. The dithering primarily helps eliminate banding artifacts.
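For what it's worth, here is a minimal sketch of the banding point: a hypothetical smooth 10-bit ramp quantized down to a deliberately coarse 6 bits so the stair-stepping is obvious, with and without a small amount of added noise (the noise amplitude of roughly half a coarse step is just an assumption):

```python
import numpy as np

# Hypothetical smooth horizontal gradient, 10-bit values 0-1023
ramp = np.linspace(0, 1023, 1920)

def quantize(values, in_bits=10, out_bits=6):
    """Snap values to a coarser bit depth (deliberately coarse to exaggerate bands)."""
    step = 2 ** (in_bits - out_bits)
    return np.round(values / step) * step

banded = quantize(ramp)                                           # visible stair-steps
dithered = quantize(ramp + np.random.uniform(-8, 8, ramp.shape))  # steps broken up by noise

# Both versions are limited to the same coarse set of levels, so dithering adds
# no color depth -- it just randomizes where the transitions land, and the eye
# averages the scattered transitions into an apparently smooth gradient.
print(np.unique(banded).size, np.unique(dithered).size)
```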
-
By the way, since you are just copying, you might not need the extra codecs.