Everything posted by Sean Cunningham
-
It's not a necessary thing for that lens. Blah-blah-blah. Your Möller 1.5x: can I track focus on a moving target, with a moving camera, at the widest aperture the adapter can handle? Oh, and what's the widest taking lens that can be used? They saw my posts here and on PV showing that CU diopters enhance bokeh on 1.33x anamorphic adapters (maybe elsewhere too, from other people, but I know they saw mine). They tested it. They saw it worked. They looked at what diopters were available in 77mm size, saw how few there were, how few of those were multi-coated, and how few of those were doublets. FFS.
-
There aren't very many achromatic (doublet) diopters available, very few at the size of the Tokina, and fewer still at its relatively low power. SLR Magic is going to produce a set at 77mm diameter: a +0.4 (the same power as the Tokina), a +1.3 and a +1.7. They intend the set as a companion product to their upcoming anamorphic adapter, and they're already including the +0.4 and +1.3 with beta versions of it.
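For anyone curious what those powers mean in practice, the close-up diopter math is simple. A quick sketch (standard thin-lens close-up optics, not from the post; the function name is mine):

```python
# Sketch (standard close-up lens optics): a diopter of power P focuses at
# 1/P meters with the taking lens set to infinity, and stacked diopters
# combine by (approximately) adding their powers.

def focus_distance_m(power_diopters: float) -> float:
    """Focus distance in meters for a close-up lens of the given power,
    with the taking lens focused at infinity."""
    return 1.0 / power_diopters

for p in (0.4, 1.3, 1.7):  # the powers in the SLR Magic set
    print(f"+{p} diopter -> focus at ~{focus_distance_m(p):.2f} m")

# Stacking the +0.4 and +1.3 lands on the same power as the +1.7:
print(f"+0.4 stacked with +1.3 = +{0.4 + 1.3:.1f}")
```

So the +0.4 reaches out to about 2.5m, the +1.3 to about 0.77m, and the +1.7 to about 0.59m, and the two weaker diopters stack to match the strongest one.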
-
Strange beast - Tushinsky Superscope
Sean Cunningham replied to Matthias Mallešić's topic in Cameras
There was a Super Panatar on eBay recently that looked a lot like this. Oddly enough, even it, for all its size, didn't allow for wide-angle anamorphic photography. http://www.ebay.com/itm/Panavision-Panatar-35mm-Anamorphic-Lens-RED-Isco-Iscorama-Hawk-Runco-Cinemascope-/280802571268?pt=LH_DefaultDomain_0&hash=item4161233404 ...I see it's still there.
-
That's neat but it's hardly equivalent.
-
The softness could be the lens itself. If you're evaluating a given lens on a Speed Boosted BMCC, you can't rule that out unless you've done comparative testing with the same lens on a true APS-C camera.
-
All of those things, area lights, ambient occlusion and GI effects, are doable in realtime with realtime engines. Reduced to black-and-white it's even faster. The graphic simplicity of most of what was projected here served an artistic purpose, but it also represents a wouldn't-break-a-sweat level of imagery for a modern gaming engine. A realtime solution gives them freedom of performance rather than just playback, calibration and registration. It may very well be pre-rendered; there's just no reason for this particular level of imagery to be pre-rendered except by choice.

Being able to see a virtual representation on set, giving actors, directors and cameramen better reference than dots or X's or "picture this" motivations to go on, has already been used on several films, in many cases leveraging realtime gaming technology. Avatar used it to create camera moves with actors in virtual sets. The same sort of technology was used on fully-animated films like Polar Express, with cameramen operating familiar dollies and other camera gear to create virtual camera moves in a virtual space, all driven by, essentially, mapping the mechanics of real equipment. Then you have the realtime overlay of fighting robots in Real Steel, which let the camera crew create realistic, hand-held camerawork for the boxing matches, shooting virtual characters fighting in real spaces. You also have the app created for filming the massive jaeger hangar in Pacific Rim, which let anyone on set hold up their iPhone or iPad and look around at where they were standing and how the set related to what would eventually be seen all around them.

Perhaps something like Google Glass will allow actors to see a representation of the non-human characters they're interacting with, overlaid on their view of the stage space along with a representation of the virtual world they'll eventually be standing in. Projection like this is largely view-dependent, so it's not quite as useful in that regard. The main issue with something like Google Glass, however, is that any kind of eyeglasses or head-mounted display could disrupt facial performance capture, especially since getting the eyes right is one of the most important components.
-
Not really. The imagery being projected is most likely generated in realtime; it's really simple graphics. It's likely more expensive than doing it in post, because of the two large robots and the motion-controlled camera, which is a third, unseen robot, plus the bespoke software to integrate the telemetry, motion control and imagery. Setting this up means calibrating the three robots to the host software so that it knows, at minimum, the position and orientation of each one's tip, and from that the position and orientation of the projection surfaces (which have known dimensions) and the camera (also known dimensions). MoCo software that calibrates itself to a fixed origin has been around forever and takes minutes, if that. This is doing that at least three times, and then you're set for the shoot. The rest is mostly involved in generating the realtime imagery being projected.
-
What's being projected is (mostly) CG, it's just not done through post-production techniques. Because the system knows the position and orientation of the robotic arm at all times relative to the camera POV (like how the realtime line-of-scrimmage and yards-to-go lines are created for live sporting broadcasts), the orientation of the projection surfaces is easy to derive compared to having to track them in post. You could, additionally, sync a second motion-controlled camera photographing a completely different live scene, instead of using pre-rendered or on-the-fly CGI, and project that onto the surfaces.

After The 5th Element we built a system within Houdini to interface with the Digital Domain motion-control stage. It allowed animators to design physically accurate moves for motion-control miniatures rather than use the screen-based methodology that was standard in the industry, which often introduced "skating" and other motion or performance artifacts that have plagued flying miniatures for decades (though miniatures were phased out shortly afterwards, so it was a neat proof of concept that never really got used). This system adds a projection component to that sort of realtime telemetry and view-dependent spatial awareness.

I didn't see specific info on the website or Vimeo, but I'm wondering if the projector is on-axis with the composite camera via beam splitter, or if they're pre-distorting the projection so the projector can be fixed up out of frame and not attached to the composite camera. It's neat looking, however it's done.
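To make the telemetry point concrete: if the system has world-space poses for the camera and a projection surface, the surface's pose in camera space is a single matrix product away, with no post tracking required. A rough illustration only (the helper and poses below are made up, not the vendor's software):

```python
# Illustration: derive a projection surface's pose relative to the camera
# from live world-space telemetry, instead of tracking it in post.
import numpy as np

def pose(z_rotation_deg: float, translation) -> np.ndarray:
    """Build a 4x4 rigid transform from a rotation about Z plus a translation."""
    t = np.radians(z_rotation_deg)
    m = np.eye(4)
    m[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    m[:3, 3] = translation
    return m

T_world_camera = pose(30.0, [0.0, -2.0, 1.5])    # camera robot telemetry (made up)
T_world_surface = pose(-10.0, [0.5, 1.0, 1.2])   # surface robot telemetry (made up)

# Surface as seen from the camera: invert the camera pose, then compose.
T_camera_surface = np.linalg.inv(T_world_camera) @ T_world_surface
print(T_camera_surface)  # this is what drives the view-dependent projection
```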
-
I was going to ask about a wider and a longer addition to the FF lineup. It's very cool that these are in the works. These seem like the perfect answer to folks trying to do anamorphic with full-frame sensors and beating their heads against a wall.
-
Yeah, now that it's a reality (the Speed Booster), it turns a BMCC into an effective 22mm sensor, right in the ballpark of Super 35 (in the sense that the 7D and other APS-C sensors are often claimed to be S35 equivalent). With the drop in price on the 2.5K BMCC it kinda seems like a no-brainer, especially if you need to get to a FOV equivalent to a true cinematic wide. Once you start having to dip into the below-12mm category, the prices get really ugly, really fast. Heh, it does mean you'll want an extra stop of ND for daylight shooting though ;)
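A quick sanity check on that effective-22mm figure (assuming the published ~15.81mm active width of the 2.5K BMCC sensor and the standard 0.71x Speed Booster reduction):

```python
# Sanity check on the "effective 22mm sensor" figure; widths are
# approximate published specs, 0.71x is the Speed Booster's reduction.
BMCC_WIDTH_MM = 15.81        # BMCC 2.5K active sensor width
SPEED_BOOSTER_FACTOR = 0.71
CANON_APSC_WIDTH_MM = 22.3   # e.g. the 7D

effective_width = BMCC_WIDTH_MM / SPEED_BOOSTER_FACTOR
print(f"effective width: {effective_width:.1f} mm")  # ~22.3 mm, i.e. APS-C

# The focal reducer also concentrates the light for roughly a one-stop
# gain, hence wanting an extra stop of ND for daylight shooting.
```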
-
If you haven't bought it yet, get the MFT mount instead plus a Metabones Speed Booster and you're back in business with the 11-16mm (assuming an appropriate adapter can be worked out). The LA7200 can be used with wider lenses than just about any other anamorphic adapter, so it's worth a shot, though 11mm is wider than I've ever read being compatible. Check out the anamorphic forum and study up on the diopters you're going to need to make it feasible, though. If you can change your order from EF to MFT, you're better off buying a Speed Booster and getting more FOV that way. An LA7200 is going to cost well over a thousand dollars, at least until something like the SLR Magic anamorphic lens goes on sale, and when that happens there will be little reason to spend $1500 on a used LA7200 when you could spend that on a new adapter that doesn't absolutely require diopters for nominal shooting. You could easily spend as much as the price of a brand new BMCC on an LA7200 and the necessary diopters; a Speed Booster is under $500.
-
Inherent camera sharpness is a common trait of other high-end cameras too, and part of the fix is to use softer or vintage lenses. The Cooke S4s are notable, at the high end, for taking the digital edge off, based on some recent reading. I thought the test this guy did, which shows the BMCC has slightly better highlight retention than the Epic and Alexa, was pretty interesting. His motion graphics are a bit much, but that's a minor quibble.
-
Feedback using the BMPCC on a professional shoot
Sean Cunningham replied to Oliver Daniel's topic in Cameras
I don't believe for a second you could do an eight-hour shoot on one battery with the GH3. That's ridiculous. You could get by with a couple of 32GB cards because the GH3 isn't shooting to a professional-level codec. You couldn't get by with two of those small cards (and certainly not one battery) on a shoot like that with a GH2 using a high-quality patch from Driftwood or the like, which would be far more quality than the GH3 and still about half that of the BMPCC. And if all that quality wasn't necessary, because the client couldn't tell the difference or wasn't paying for the difference, then I'd have shot on a camera I was more comfortable with. There's also nothing stopping you from adding additional batteries to a package as an additional line item, or renting/buying more batteries from another source if the main rental house are idiots. You weren't using it as a "pocket" camera. Outside the lack of a footage/space counter, most of these complaints come down to user expectation and assumption, not tempered by research prior to use or by experience with other professional gear. Maybe I'm the asshole for saying it, but it's the truth.
-
Viewfactor is, according to Kholi, working on a battery grip that works with their cage.
-
4 x 32GB is like having two decent cards; it's a 220Mbit codec. For a paying, professional shoot you need six to eight batteries at the ready and multiple chargers, same as if it were an Alexa. When it's used as a "pocket" cam, catching the grandkids' performance at a recital, little Timmy at bat, a baptism, a blow-out-the-candles moment, it's going to be just fine. When it's used for professional, continuous shooting, it's like any other professional camera, with a lot of the same caveats and the same need for the professional to be prepared. Edit: also, there are other professional cameras that cannot format media or delete clips in camera, as a safety feature.
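Putting rough numbers on the card math (using the ~220 Mbit/s figure from above and decimal gigabytes; real-world ProRes rates vary with scene content):

```python
# Rough record-time math for the BMPCC's ~220 Mbit/s codec.
BITRATE_MBITS = 220
CARD_GB = 32
CARDS = 4

seconds_per_card = CARD_GB * 8 * 1000 / BITRATE_MBITS
print(f"~{seconds_per_card / 60:.0f} min per 32GB card")           # ~19 min
print(f"~{CARDS * seconds_per_card / 60:.0f} min across 4 cards")  # ~78 min
```
-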
Hollywood has a long history of putting the heroic star on apple boxes and having others walk in trenches. I think that article, and that website, might also be giving Stallone an extra inch, as I know multiple folks who have seen him in candid scenarios around town in LA, and he wasn't looking 5'9" by any stretch (pun intended). Tom Cruise is likely never out in public without lifts. There was a lot of height augmentation going on below camera on Interview With the Vampire, because both Cruise and Banderas are noticeably smaller than Brad Pitt, who's a reported 5'9" or 5'10"; you can see his relative size and height compared to (reported) 6' David Duchovny in Kalifornia, even though some online sources have Pitt at 5'11".

You can't really trust online sources, any of them. It's in an actor's best interest to project an image of being taller. Being a smaller guy, I take great pleasure in finding out a particular actor isn't as tall as they're made to appear on film, and in my experience there are a lot more of us shorter guys in Hollywood than folks have been led to believe, by the images at least. A friend of mine who's 5'11" nearly ran over Vin Diesel coming around a corner at the Universal lot, and his reported 6' appears to be a lie; he was surprisingly short, but with a giant head, according to my friend. I was surprised to see the same thing one day walking by a CSI shoot and seeing George Eads, who's no 5'11" but does have a gigantic head. Head proportion seems to have a huge impact both on perceived height (or how it can be manipulated) and on how someone photographs and appears on camera, which can sometimes be incredibly different from how they appear by eye.
-
Some news on BMPCC - Bloom clip & blooming artifacts
Sean Cunningham replied to Axel's topic in Cameras
I don't particularly like the way the male actor looks in his close-ups (a combination of the grade and the lighting... the Hemsworth CUs in the Rush trailer look similarly flat), but otherwise it's very well done.
-
It's too bad they flattened the skin tones like that.
-
Oh no, I meant the Cineon compositor. Kodak used to market the compositor they developed in-house at Cinesite. What I saw was kinda like Shake locked to laying down nodes horizontally. The last time I saw it in action would have been around '98/'99. I think they sold the source to a third-party company that rebranded it for a period under a new name, but I think Shake buried it on price in the early, pre-Apple days, and I lost track of it.
-
Now that Cineon is no more, it's the only compositor on the market, that I'm aware of, created and designed by high-end compositors actually doing the work, and not just by computer scientists and software engineers. I'm sure there have been changes made by The Foundry, but it's still basically purpose-built production software, like RenderMan and Katana. There is a difference, definitely.
-
All of Edgar Wright's films have had impressive or, at the very least, quite pleasing and slick cinematography, and the most recent installment in his and Simon Pegg's "Cornetto Trilogy", The World's End, is no exception. There's a great feature article covering the style and technical approach to the film in the September issue of American Cinematographer (http://www.theasc.com/ac_magazine/September2013/current.php). When it got to the lens selection for the anamorphic portions of the film (once the sci-fi madness kicks in), I wasn't at all surprised that Wright and his DP went for a classic sampling that included C-Series lenses, for their greater character versus new cinema anamorphics. Wright likes to shine lights into the lens as much as anyone here (and he does it so much better than J.J. Abrams). Coincidentally, Panavision made a discovery at their Woodland Hills location: a B-Series set that Panavision had failed to catalog and had long forgotten.

Given the way a lot of discussions go around here over this anamorphic or that anamorphic, I had to smile when I read the part where the DP praises aberration and soft edges. And I always like when they go into this sort of detail about the working or target stops used on a film. Anyway, it's a nicely detailed story about a contemporary anamorphic film that's worth grabbing the issue for. Only God Forgives is likewise covered in this issue, and though it's not anamorphic it is, very surprisingly, a mostly-practicals show with very few cinema lights used. Having seen the film, it's almost hard to believe the information in the article.
-
Oh yeah, you won't catch me defending COPs. I try to avoid that part of the package entirely; it's never felt anything but clunky to me, and for several releases some really important nodes, nodes doing really basic math, just plain didn't work correctly. After using Nuke for so long it's hard not to be disappointed with any other node-based compositor, truthfully.

I'm a .5-Alpha Nuker and one of the first artists to risk putting it into production, back on True Lies. It was a text-only, script-based compositor back then, in the style of Wavefront's TAV-Comp. I was ready for anything better than Wavefront's Video Composer, and until Nuke got a GUI I preferred script-based compositing anyhow. By Apollo 13 and Strange Days the first version with a GUI (nuke2) was rolled out, but it was like the wild wild west. You'd kick off a six-hour network render and halfway through you'd see things go haywire on the queue because the new lead programmer had pushed out a new version where he inverted his assumption about how a node should treat the absence of an alpha channel. Stuff like that. Good times. I kinda got even with Bill for giving me ulcers with nuke2 when he was exiled from the software department to work as an artist assigned to a show, so he could get some perspective on how his work affected so many people, and he got assigned to one of my teams, lol.

I use mostly After Effects these days. I love the interactivity but really, really hate how inefficient the timeline paradigm is.
-
Some nice stuff in there. I think this is the first time I've seen anything shot on these adapters at more than 1080p. Very cool. I imagine the softness of the adapter helped with the look; more and more I'm reading about DPs purposely choosing soft lenses when shooting digital because digital lenses are just too sharp and clinical. What were your taking lenses?
-
Hmmm, I took to Houdini, and Prisms before that, faster than almost any other 3D software. Maya makes me want to hurl. I've been wanting to force myself to do something in Blender, but every few years I'll boot up the latest version, look around, and decide otherwise; it just feels so alien.
-
I'm sorry you have a problem articulating your thoughts better than that. It's to be expected in most public forums; people just want to express an opinion and have others agree with it. It amazes me that companies like SLR Magic or BMD even interact with the general public like this. I applaud their patience, or marvel at their masochism, whichever is more appropriate. There's some douching for you. You're a little late, though. They're looking for input on coatings now, but their inquiries into what ratio, speed and IQ characteristics were important to people started at least a year ago, and they've already heard and read, ad nauseam, pages of input on 1.33x, 1.35x, 1.37x, 1.5x and 2x adapters, and so here we are, after they've weighed both opinion and practical research.
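For anyone wondering why those particular squeeze ratios keep coming up, the resulting frame shape is just the sensor aspect multiplied by the squeeze. A quick worked example of mine, assuming a 16:9 sensor:

```python
# Worked example (not from the thread): de-squeezed aspect ratio equals
# the sensor aspect multiplied by the adapter's squeeze factor.
SENSOR_ASPECT = 16 / 9  # ~1.78:1

for squeeze in (1.33, 1.35, 1.37, 1.5, 2.0):
    print(f"{squeeze}x on 16:9 -> {SENSOR_ASPECT * squeeze:.2f}:1")

# 1.33x yields ~2.37:1, close to scope's 2.39:1; a 2x squeeze on 16:9
# overshoots to ~3.56:1, one reason 2x adapters pair better with
# 4:3 sensors (4/3 * 2 = 2.67:1).
```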