Everything posted by Sean Cunningham

  1. You should be working with an array if you're shooting 4K (or serious 2K or 1080p).  Individual drive sizes aren't the issue so much as throughput (the real elephant in the room is backup and archival).  Single drives aren't going to support much more than compressed formats (rough numbers sketched below).  This isn't something you're going to build effectively inside a workstation, and it never has been.  Internal arrays of a couple of drives aren't meant for, or ideal for, this.   So this raises the question: should you even be going 4K?  Odds are the realistic answer is "no," and you're creating problems for yourself that already have solutions for the people who actually need them, which is almost nobody outside big event, commercial, and theatrical work, regardless of what the churn of consumerism tells you you need.     Hint: maybe one in ten, if that, of the blockbusters you've ever seen was finished at anything but 2K.  Sony and the rest are lying to you.
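A back-of-the-envelope sketch of why throughput, not capacity, is the bottleneck (the bit depth, frame rate, and disk speed below are illustrative assumptions, not any camera's spec):

```python
def data_rate_mb_per_s(width, height, bits_per_pixel, fps):
    """Uncompressed video data rate in megabytes per second."""
    return width * height * bits_per_pixel * fps / 8 / 1e6

# 4K DCI (4096x2160) at 10-bit 4:2:2 (~20 bits/pixel), 24 fps
print(data_rate_mb_per_s(4096, 2160, 20, 24))  # ~531 MB/s
# 2K (2048x1080) at the same settings
print(data_rate_mb_per_s(2048, 1080, 20, 24))  # ~133 MB/s

# A single spinning disk sustains maybe 100-150 MB/s, which is why anything
# much past compressed delivery formats pushes you to a striped array.
```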
  2.   And you can't deny that in-camera compression radically affects the perceived detail in an original recording.   And what if it was destroyed before YT had a chance to do its own damage?   Convince me you know that the baseline compression, matrix, all the goodies are set the same as they are for the GH2, and that Panasonic hasn't pulled a typical corporate move and de-tuned an existing setup designed for a lower-spec camera than the original platform.   Convince me that the uploader rendered to a professional codec before compressing to MP4 with a high-quality encoder.   Convince me that they didn't render to MP4 straight out of their editor.   All of that is incidental, however, to the fact that YT delivers bad-quality content from unsophisticated or non-technical accounts.  Even playing back the HD streams with the window scaled down didn't filter out the ugliness of their compression.   And, sorry, but compression and bandwidth do affect detail and sharpness.  Otherwise a stock GH2 would look as sharp as Moon Trial 5, and it does not.
  3.   By this logic all hacks/patches must render near-identical detail...     ...and let's not forget, there are no details given regarding camera settings, lenses, the method used in editing, or anything at all to give a sense of whether the uploader knew what they were doing.     Then there's the issue that even if this is the same chip as the GH2, it doesn't mean they used everything else.  It doesn't mean the baseline, stock compression is even as good as the under-achieving stock settings on the GH2, or tell us whether it can be opened up yet.   Basically, judging this camera based on these clips isn't intelligent.  Period.  It might suck.  It might offer even more hackable potential than the GH2.  Nothing in these YT clips is a valid indicator of anything.
  4. The encoding on those YT streams is absolute garbage.  They must throttle bandwidth and encoding quality depending on the account type, because they're not even at the same quality as several YT "partners" I semi-regularly check out there.   That, or the encoding done by the uploader was very bad as well.     I honestly don't understand why people still use it in this capacity.  I'll add duplicate streams of stuff I put on Vimeo, acknowledging the ubiquity of YT, but I don't actually share those links with people.     In the case of both sites, whatever you upload gets further compressed (heavily), so your upload needs to be of exceedingly high quality if you expect anything decent by the time a stream goes live (one possible recipe sketched below).
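For what it's worth, a minimal sketch of rendering a high-bitrate upload master with ffmpeg driven from Python; the filenames and exact settings are illustrative assumptions, not a house standard:

```python
import subprocess

# Encode a near-transparent H.264 master for upload; the hosting site will
# re-compress it anyway, so start from the highest quality practical.
subprocess.run([
    "ffmpeg", "-i", "master.mov",   # hypothetical high-quality NLE export
    "-c:v", "libx264",
    "-preset", "slow",              # slower preset = better compression efficiency
    "-crf", "16",                   # visually near-lossless quality
    "-pix_fmt", "yuv420p",          # broad player compatibility
    "-c:a", "aac", "-b:a", "320k",
    "upload_master.mp4",
], check=True)
```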
  5.   The technique used here is called a cyclorama, or "cyc" for short.  It's widely used in film and television to create the illusion of being "on location" where distant buildings, skies, etc. need to be seen beyond a window or set edge.  It can be a blown-up photograph or a painting.     Sometimes the in-camera effect is done well enough that even a sophisticated eye won't pick out anything "wrong" with a scene.  Sometimes all you have to do is actually look at it (where you shouldn't be looking, ideally) and the trick is obvious, whether because it reads dim or because you can perceive the blown-up grain, etc.
  6.   Uh, yeah.  You should read the topmost message and direct your attention there.
  7. Going to the theater in LA during the 1990s meant you were pretty much always seeing these LA Times promos that almost always featured some kind of visual effect, special effect, or stunt.  One of the more popular featured the Introvision process used in a lot of big movies of the day.   http://www.youtube.com/watch?v=rDAGW3aOcRA     ...they call it "rear projection" in the video but, meh, it's the LA Times.  They don't understand the tech they write about any more than any other media outlet does.
  8.   It's a regular practice in facility screening rooms everywhere.  
  9. This isn't a new technique.  It can't replace green screen entirely, otherwise it would have done so already.  It's neat though.   What's truly impressive is that we have projectors finally bright enough that the effect can be more or less invisible now, not to mention able to contribute to a shooting stop.
  10. Been trying to find confirmation that the Hoya are doublets, or not.  Saw some examples with a Century Optics anamorphic showing that, even at the fairly conservative 1.33x stretch, the +2 gives full-on oval bokeh when used in CUs.
  11. Hoya has a nice looking set you can get in several sizes. http://www.ebay.com/itm/360620060858?ssPageName=STRK:MEWAX:IT&_trksid=p3984.m1438.l2649
  12. Not a problem.  I was actually kinda shocked at the quality of a lot of the available music at either site, comparing what I was able to quickly and easily find there against a few of the pay stock houses I've used (e.g. Videoblocks).  
  13. You might also try lighting the white backdrop evenly, a stop or so under your target, rather than at your desired final brightness level.  If you then pull a luma key, you can build its exposure back up relative to your subject, with a little finer control over the edging (a minimal sketch of the idea follows below).  Granted, it will be harder to keep the razor-sharp jump from talent to pure white, but you could likely hit that and "power-window" a slightly modified mix around his hands when they pop up, if necessary.     I only suggest this alternative as a means of avoiding any possibility of clean keying issues, though contemporary keyers handle compressed, sub-sampled chroma in green-screen plates a lot better than they used to.
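A minimal sketch of that luma-key idea in numpy, assuming a float RGB frame in [0, 1]; the threshold values are illustrative, and a real keyer does far more:

```python
import numpy as np

def luma_key_over_white(fg, lo=0.75, hi=0.95):
    """Composite `fg` over pure white using a soft luma matte.

    Pixels brighter than `hi` are treated as backdrop (pushed to white),
    pixels darker than `lo` as subject, with a soft ramp in between.
    """
    # Rec.709 luma from the RGB channels
    luma = 0.2126 * fg[..., 0] + 0.7152 * fg[..., 1] + 0.0722 * fg[..., 2]
    # Matte: 1 keeps the subject, 0 replaces the under-lit backdrop with white
    matte = np.clip((hi - luma) / (hi - lo), 0.0, 1.0)[..., None]
    return fg * matte + np.ones_like(fg) * (1.0 - matte)
```

Because the backdrop was lit under the subject's level, you get that usable gap between `lo` and `hi` to shape the edge with.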
  14. freesound.org  ...a lot of it is rather generic stuff, obviously done in a synth package with no mastering or actual mixing of any kind, but by browsing different categories you'll begin to pick out several talented musicians.  This site has extensive "soundscape"-style pieces as well as traditional jingle-style and melodic pieces.   archive.org  ...I went looking here for some old jazz examples but found quite a bit of contemporary music as well.   As with Julian's example, pay close attention to the type of license they're giving.  Most of the pieces on freesound.org require nothing more than attribution, and archive.org is similar, but with a lot of material that's totally public domain.
  15. Does the Mac gamma bug exist anymore?  I think I still had to watch out for it in 10.5, which still had the old oddball system gamma.   I don't see anything unexpected here.  Motion blur is transparent.  There's no way the thin, transparent pixels of the fingers can compete with that background (see the compositing sketch below).  If you'd shot him against a green screen and composited him over white at that level, you would likely have arrived at a similar end effect to avoid the motion-blurred portions of his fingers looking like they had black halos.     Shooting at a higher shutter speed to avoid motion blur would likely have been the necessary compromise for virtually any camera in a non-raw situation (betting a few more stops of DR might get you some of your fingers back).  You'd then just have to make the judgement call on strobing.
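The math behind that is just the standard "over" composite; a toy single-pixel example (values illustrative):

```python
def over(fg_premult, fg_alpha, bg):
    """Composite a premultiplied foreground pixel over a background value."""
    return fg_premult + (1.0 - fg_alpha) * bg

# A motion-blurred finger edge: 20% coverage of mid-tone skin (premultiplied)
skin_premult, alpha = 0.5 * 0.2, 0.2

print(over(skin_premult, alpha, bg=1.0))  # over white -> 0.90, nearly invisible
print(over(skin_premult, alpha, bg=0.0))  # over black -> 0.10, a visible dark edge
```

Against a blown-out white background, the thin, semi-transparent pixels land within a few percent of pure white, so the fingers simply melt into it.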
  16. http://vimeo.com/64229408   ...my niece, Kinley, is not quite 2.5 years old and my nephew, Connor, is 7 months.  They were pretty fascinated by the big(gish) lens when they saw it, their experience of being photographed so far being mostly with camera phones.  Kinley got over it pretty quickly, though, and was able to ham it up with perfect abandon.     GH2 (patch: Flowmotion) / Nikkor 24mm f/2 / Century Optics anamorphic / 5DtoRGB / CS6: PPro + AE / ColorGHear
  17.   Hah-hah   That's just me walking around following my niece and nephew on Halloween.  You're totally free to make some kind of douche nozzle comparison of that against something where someone actually tried to shoot slick, might have had a crew, might have even been paid.   Go right ahead.  Your powers of perception are continuing to entertain.
  18. That top video is the first and only FS700 footage I think I've seen that didn't have almost immediate giveaways that it's digital video.  It was just pretty, and obviously meticulously shot and crafted.   Pretty much all of the other footage I've seen, including some commercials I worked on, had what I'd typically expect from a Sony camera in the highlights.  Shooting into the sun is a really good way to bring this out, and a BMCC wouldn't look like that.  You can shoot into the sun, but please don't.   FYI, SyFy puts the most awful garbage on air; I wouldn't go bragging about how much they like anything.   I mean, gratz on the work, but they're not a network of quality and never have been.  That said, it's not hard to imagine you, or anyone who actually cares about their work, could impress them.  They keep The Asylum in business, after all.
  19. The optical image won't be the same, because the distance necessary to achieve the same framing on Full-Frame and S35 will be different (rough numbers below).  The S35 result will match the 7D reference you're looking for.  The actual optical image of the lens is the same regardless of sensor; distortion is related to what part of that optical image can be seen, and to the distance from the subject you need in order to achieve the framing of a sensor/film that's 36mm wide.   Faces are distorted by your own eye as you get closer to them.  How is a lens not going to do the same thing?  The theory behind portrait lenses and their reduced distortion has to do with how they render facial features in accordance with how our own eye-to-brain interface handles facial recognition.  These lenses simulate how we see faces at the optimal distance for recognition (~15', I think it might be?).   At 50mm her nose is still quite pronounced, this lens rendering depth compression close to how our eyes see.  By 100mm she's starting to look like a model/actress and not just a semi-cute girl passed in the mall.  If you want to know why some pretty girls could never be models, or why you see some candid photo of a working, highly paid model and wonder "WTF?", it largely has to do with how that person looks through appropriate lenses and nothing to do with how they look in real life, standing right next to you.   At 50mm she's not even the same girl as at 100mm, unless you know her personally or have familiarity overriding what the image is telling you about her features.   It should come as no surprise that 135mm is where she looks best; this is a classic, go-to focal length for portrait photography.  That minor amount of distortion would be the same with a 50mm at a similar distance, but then no longer framed for a close-up.
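A rough sketch of the distance-vs-sensor point, using the thin-lens shortcut distance ≈ focal length × field width ÷ sensor width (the field width and S35 gate size here are illustrative assumptions):

```python
def framing_distance_mm(focal_mm, field_width_mm, sensor_width_mm):
    """Approximate camera-to-subject distance for a given framing."""
    return focal_mm * field_width_mm / sensor_width_mm

head_and_shoulders = 500.0  # field width in mm, illustrative

# Same 50mm lens, same framing, different sensors:
print(framing_distance_mm(50, head_and_shoulders, 36.0))  # full frame: ~694 mm
print(framing_distance_mm(50, head_and_shoulders, 24.9))  # Super 35:  ~1004 mm
```

The S35 camera stands roughly 45% farther back for the identical framing, which is exactly the reduced facial exaggeration described above.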
  20. When you're in a close-up you're not seeing the sets, though; you're seeing bits of busy detail behind their heads, unless you're dealing with a blocking situation (Harry looking out the window) where some part of the set more than a few inches forward or back of the actor is desirable in frame.     Jean-Pierre Jeunet would be more on the "wacky" end of the spectrum.  Beautiful, but wacky.  That aesthetic worked for Amelie and Delicatessen but was a complete failure for Alien Resurrection.
  21. Most of a movie is shot with wide(r)-angle lenses.  That's a given.  Then, generally, if you're not on a Steadicam or dolly, moving from a wide into a close-up in one continuous take, you'll switch to a close-up lens, which is more flattering and tends to isolate the face from the background, which becomes bokeh city with a longer lens.  Some filmmakers almost never seem to change lenses.   With Harry Potter perhaps there was also the issue of shooting mostly minors intentionally affecting the lens choices.  Perhaps they found that children's faces didn't benefit as much from depth compression (just a guess; I haven't tested this theory myself).  Or, given the limited availability of minors, maybe they avoided lens changes except when absolutely necessary.  Again, just a guess.   Some filmmakers eschew depth compression and stay away from longer lenses, but they'll most likely have an overall style that is also unconventional and maybe a little wacky (e.g. Wes Anderson).   I saw a video on John McTiernan where he states he doesn't like it, pointing out Tony Scott's tendency to shoot everything on longer lenses.  His style is almost the opposite of Anderson's, so there's definitely not a one-size-fits-all approach, even to picking wider lenses for close-ups.   I don't own any of the HP movies, but I just skimmed through some videos, and yes, it was pretty obvious they were using wider lenses in the close-ups on the kids, though I did see a few adult close-ups that looked longer.  The backgrounds of the children's close-ups tended to be busier, with more discernible detail.     Anyway, I'm not saying using wides for close-ups is bad.  I'll just always advocate for a filmmaker making a knowing decision.  I like both approaches when they're utilized to full effect for the overall tone of a film.  Sorry for being long-winded, or maybe a little jerkish about it.
  22. Be prepared to replace these when you upgrade cameras.  All of the higher-end cinema cameras, including those from BM, don't appear to have a strong IR cut built in over their sensors, which makes normal ND filters without a strong IR-cut component problematic to use, causing sometimes-extreme color shifts that become more difficult to correct the stronger you go in ND.   Canon DSLRs seem to stay neutral with normal photographic ND, and I'm only assuming here that the GH line would as well.  But BM, Alexa, RED, etc. need IR-cutting ND filters, and no single manufacturer seems to employ a formula that works uniformly across every camera.  Formatt is closest for the current BMCC, Alexa, and RED, but there's still room for improvement.  The new BM cameras will need to be tested as well, since they're using sensors from an entirely different supplier.