Everything posted by Axel

  1. If you hit ⌘R, the selected clip becomes a rubber band. It gets a green top, meaning 100%. If you drag it inwards it turns blue = shorter = faster; outwards, orange = longer = slower. Under the retime icon (below the info window, right beside the magic wand) you find a speed ramp in the menu, self-explanatory: the clip is automatically bladed into four parts, either sped up or slowed down. The speed values displayed in the segments are actually target values. There are, eh, "transitions". Brandon's method of manually finding the frames to blade at gives him more control.
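For illustration, the arithmetic behind such a four-segment ramp can be sketched in a few lines (the segment speeds below are hypothetical examples, not FCP X's actual preset values):

```python
# Hypothetical sketch of how a four-segment speed ramp changes clip duration.
# Speeds are illustrative, not FCP X's preset target values.

def ramped_duration(source_seconds, speeds):
    """Split the source evenly into len(speeds) segments and retime each.
    A segment played at speed s (1.0 = 100%) takes segment / s seconds."""
    segment = source_seconds / len(speeds)
    return sum(segment / s for s in speeds)

# A ramp from 100% down to 25% across four equal source segments:
print(round(ramped_duration(8.0, [1.0, 0.75, 0.5, 0.25]), 2))  # 2 + 2.67 + 4 + 8 -> 16.67
```

This is why the displayed values are "target values": each segment's playback time is its source length divided by its speed, so the ramp as a whole stretches (or compresses) the clip.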
  2. Nonetheless, the resolution of the famous van Eyck paintings would not surpass 2 MP, probably less. The textured surfaces are resolved just well enough to represent the materials from the supposed viewing distance. This painting has a size of 23 x 32 inches: ... so you can do the maths and acknowledge that there are no brush hairs fine enough to render, e.g., the individual hairs of the fur. Also, this is a good example of motif detail. Famously, this painting is a tricky self-portrait of Jan van Eyck. His reflection can be seen in the round mirror in the background: you can see the brush strokes ("pixels") in this enlargement. You would usually see the painting from 3-4 feet away, but you'd have to come closer to see the reflected scene. The same is true for another well-known painting, by Jan Vermeer, which he did as a 'spec shot' (without a commission), called The Art of Painting: the painting is too small (43 x 51 inches) to really depict fine texture detail, but it gives the impression of high resolution. Note also the framing and lighting (looks natural, but had an incredibly long 'exposure time' - months!). These paintings have high dynamic range, something that doesn't exactly strike you in the GH4 library clip, where the daylight from the windows clips videoishly. They should have put some ND gels on the panes, or bounced the room, or used an ND grad. At least they should have softened the fall-off with some mist filter. Not the fault of the camera, though.
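To actually do the maths invited above: treating the finest brush stroke as one "pixel" (the roughly half-millimetre stroke width is my assumption, purely for illustration), a 23 x 32 inch panel lands in the claimed ~2 MP ballpark:

```python
# Back-of-the-envelope check on the "around 2 MP" claim for a painting.
# The ~0.5 mm finest brush-stroke width is an assumed value for illustration.

MM_PER_INCH = 25.4

def painting_megapixels(width_in, height_in, stroke_mm=0.5):
    """Count how many finest-stroke 'pixels' fit across the painting."""
    w = width_in * MM_PER_INCH / stroke_mm
    h = height_in * MM_PER_INCH / stroke_mm
    return w * h / 1e6

print(round(painting_megapixels(23, 32), 1))  # ~1.9 "megapixels"
```

A finer assumed stroke width would push the number up, a coarser one down, which is why the post hedges with "probably less".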
  3. I share your verdict. To be brutally honest, I rarely find chases engaging, or fights and battles. These are the obligatory scenes an action movie has to have; in the script it probably says Joker attacks the convoy in a garbage truck, and Batman joins in his Batmobile - and, if I drank too much Coke, a chance to go to the toilet. I can easily imagine Nolan telling someone: can you put this together somehow and make it look good? Take all those miniatures and throw in some close-ups of expendables, but don't waste too much time on this, I won't chastise you for being sloppy. Look, everybody knows this defies logic anyway, so don't try too hard to be convincing. I never witnessed anyone saying, after we left and discussed the film in the foyer, "great fun all in all, but the chase sequence was completely unfollowable". It just doesn't deserve all that attention. Here comes Emerson, presumably some TV show editor, to examine the corpse and lecture on bad editing in minute detail. Oh yes, he's so right, it's really frustrating to see so many inconsistencies in a multi-million-dollar movie. Come on, Nolan, we deserve better.
  4. I think leather jogging pants would be rather uncomfortable, being a jogger myself, so maybe that statement is not so absurd. The Joker in this film evokes chaos. Do me a favour and imagine the sequence in question with straight continuity. It would be a completely different scene, very weak and boring. I know I am defending Nolan, but I can base it on the effect the scene has on the audience. If it were all just sloppiness, then Lee Smith's quote in the beginning would make no sense. Emerson, as I see it, belongs to those who want cuts to be logical, not stirring or, God forbid, emotionally disturbing. They have a real problem with breaks and jumps and find relief only in nice sequences like A > B > C > D. But that's not language. You measure a cut not by its correctness but by its impact. Good cuts hurt a bit, some bold cuts more. The main complaint about the sequence is that Emerson can't follow it, but it never crosses his mind that this might be his problem. I watched the film twice, with a big audience, and none of the supposed failures woke my disbelief, nor, obviously, that of my fellow cinemagoers, who followed alright. It now crosses my mind that this might have been my problem - can it be that I left my disbelief at the entrance? Shooting and editing with strict continuity and clear topography is not cinematic and almost never a good idea. Many great directors pretend to do this, with establishing shots and seemingly plausible action, but then deliberately expand the borders of logic (Nolan as one of the most prominent). Eisenstein's famous Odessa Steps sequence (meticulously planned, by the way, nothing sloppy about this storyboarded scene), Kubrick's taunting overview of the Overlook hotel (planned as well), Fincher's steel-plant prison in Alien³, where the place becomes an alien as well, many Hitchcock sets (which are the most surreal sets of all, see especially Vertigo and Psycho), Welles' Xanadu, Leone's western landscapes, you name them.
And then there is Panic Room. I love Fincher, and I like the film, but it actually defies its own premise by describing the exact whereabouts of every character at every moment, shrinking the set to a theater stage. Or watch a later part of the Emerson video, where he praises Salt, one of the dullest action movies of all time, in which A. Jolie performs perfectly timed acrobatics with A > B > C editing. Unless you admire it for its almost mechanical precision, you couldn't care less. Of course I can't claim to be in the right, I just hate Emerson's tedious prosecution. Reminds me of Stone's JFK, but that is at least more entertaining.
  5. This is bullshit. We need a shrink to analyze all those continuity-peepers. They didn't get the first thing about what cinema is about. Every great action scene has to cut out the joints of a sequence in order to boost the speed and to make the audience an accomplice by actively filling the gaps. Wait: there are multiple car chases from multiple views. Haven't we got the goddam right to stay oriented? Shouldn't we know exactly about directions, positions and frigging road maps? Everything should be clearly followable (as for the participants in this chase, they know everything, they see everything, they foresee it). Quote from Wikipedia on 'continuity': There are such unwanted, annoying continuity errors - the new Carrie is full of them, where someone raises his hands, cut, arms akimbo - and they can destroy the magic. But, Mr. Emerson, Sir, you chose the wrong example. Nolan clearly is a master of 'prestige'; he knows what he is doing and why. Go see a doctor, have your anal fixation fixed.
  6. Excellent article. I have occupied myself intensively with the correlations of resolution, image size, perceived sharpness and - to introduce the Holy-Grail term that combines all those parameters - glory, both professionally and personally, virtually all my life. There was an ancient German textbook on cinematography, considered The Bible at German film schools, but I always found it unbearably dogmatic (it recommended, for instance, never to use shallow DoF and always to use three-point lighting). But it had a good, an indeed very good, chapter on framing, on composing an image, using the French term cadrage. Good framing, according to the book, always results in an image that has great *pithiness*. This was illustrated by putting grids over great, well-known paintings, thereby retracing the way the artists guided the viewer's attention from here to there, creating either harmony or dynamism (or, for that matter, a tension between harmony and disharmony). Furthermore, there was a debate on how detail contributed to pithiness - or, on the contrary, confused it. And there was a crucial distinction between texture detail and motif detail. Texture detail will always need sufficient resolution (relative to the image's size) to depict the pattern (grass, skin, fabrics), but it has no positive effect on pithiness (though it may interfere with it). Motif detail is not so much about resolution; it is, interestingly, about *time*, because to take in important motif detail within a film frame, you need time. Take this painting by Pieter Brueghel the Elder (The Procession to Calvary, 1564): (EDIT: Resolution is about size. You can't recognize all the interlocking scenes, because the resolution here is 800 px.) Polish filmmaker Lech Majewski made a feature-length film of all the little actions put into this painting (which clearly is meant to be watched for quite a while), The Mill & The Cross. Every single one of the awesome GH4 demos we have seen so far either has no pithiness of the image (users are tempted to 'show off' meaningless high-res and shoot the naked chaos) or is resolution-independent. One could as well say: higher resolutions don't add any information to the image, they just allow you to blow it up more. But then again, as the article says, we saw the latest blockbusters, including Godzilla, in meagre 2K (2048 x 858 pixels), and nobody complained.
  7. Note: my following statement doesn't contradict anything written above; it's just a simple, old rule that's still useful. If playback performance is paramount (and it generally is - who cares if, after two days of editing, the export takes half an hour or one hour?), the system with the applications should be on a different hard drive from the footage. The drive with the footage should be 'read only', and it should be the fastest drive with the fastest connection, preferably an SSD. That means that if you allow the program to write render files during editing, these should go to yet another drive. You can also trigger rendering manually, in a pause; this way, when you edit, the second drive is also 'read only'. A backup drive can be a very slow drive. This can also be where you export your masters.
  8. I find it strange that anybody worries about disc space anymore. It has become so ridiculously cheap, even compared to tape. For friends - especially for friends! - you should take Shakespeare as a guide: brevity is the soul of wit. Please understand that though FCP X can use native AVCHD, you render in ProRes (project settings). You don't need to render for preview purposes (general settings, you can disable background rendering), but you can export your timeline as a ProRes master just with cmd+E. Now even if you have hours and hours of recorded footage, you hardly ever edit a video that's longer than 20 minutes (unless you want to lose your friends), and that will result in a file size of roughly 20 GB in full HD. 1. Rendering in ProRes is waaay faster than rendering directly to H.264. 2. Encoding a full-quality ProRes video to MPEG-4 with freeware such as x264 is about five times faster. 3. The resulting files - if file size counts - are then visually identical at waaay lower bit rates, resulting in smaller files (about 20% smaller). 4. If you decide to keep the ProRes master on an external drive, the cost for storage is just cents, really nothing in comparison to the effort you invested in making the video. If you wish to 'smart-render' your video, then FCP X is not the right app for you.
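A quick sanity check on the "roughly 20 GB" figure, assuming the master is ProRes 422 at 1080p25 (Apple's published target rate for that combination is approximately 147 Mbit/s):

```python
# Sanity check: size of a 20-minute ProRes 422 full-HD master.
# 147 Mbit/s is the approximate ProRes 422 target rate at 1080p25.

def file_size_gb(minutes, mbit_per_s=147):
    """Duration x bit rate, converted Mbit -> MB -> GB (decimal units)."""
    return minutes * 60 * mbit_per_s / 8 / 1000

print(round(file_size_gb(20)))  # ~22 GB
```

So "roughly 20 GB" holds; higher-rate flavours like ProRes 422 HQ would land proportionally higher.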
  9. Did you follow the gimbal thread in the main forum? There is this $500 gimbal: yes, you need to assemble it yourself, but it won't be too hard. I believe the difference lies more in the operator's skills than in the build quality of the gimbal. What steadicam guru Tom Antos did with the $1000 CAME gimbal is imo phenomenal (but he also tested some traditional 'stabilizers', some of which I had used myself, and his shots were very good). I dare say most of these gimbals are still ridiculously overpriced.
  10. I almost bought the a.m. pancake, because my Sigma is too sharp. Outbid. Will try again. Because it's rather slow, it acts like a fix-focus lens here: The gimbal of course costs 3700 € (in Germany), but there is the Hobby King alternative also ...
  11. The perfect solution. But this is not a perfect world. 'What do you call 10,000 lawyers at the bottom of the ocean?' 'A good start.'
  12. My observation is that while you can match the DoF of, say, the 5D MII, 7D, GH2 and BMPCC with the lens math above (respecting only 'optical properties'), you will still get subjectively different looks. The EOS cameras above have the lowest resolution (though particularly the 5D can look as if it had superior resolution, but that's aliasing to a great percentage, see this ancient article). Yet it's still an ambiguous 'full frame look' everybody accepts as the measure. Those factors that do contribute to DoF (let alone look) but are 'ignored' in the calculation are not ignored because they are negligible, but because they can't be generalized. And if we agree that a general rule of thumb suffices, then it is: buy the fastest lens ...
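The "lens math above" is presumably the usual crop-factor equivalence; a minimal sketch, under the standard assumption that field of view and (approximately) depth of field are matched by multiplying both focal length and f-stop by the crop factor:

```python
# Crop-factor equivalence: the standard rule of thumb for matching
# field of view and (approximately) depth of field across sensor sizes.

def full_frame_equivalent(focal_mm, f_stop, crop):
    """Return the full-frame focal length and f-stop that frame and
    blur roughly like the given lens on a cropped sensor."""
    return focal_mm * crop, f_stop * crop

# A 25 mm f/1.4 on Micro Four Thirds (crop factor 2.0) behaves roughly
# like a 50 mm f/2.8 on full frame:
print(full_frame_equivalent(25, 1.4, 2.0))  # (50.0, 2.8)
```

Which is exactly the point of the post: this math captures only the 'optical properties', and everything it ignores (processing, aliasing, resolution) is ignored because it cannot be generalized, not because it is negligible.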
  13. I agree that this is annoying. On the other hand, if you have bad music, or no music at all you can use in your video, you are forced to concentrate more on storytelling, which will result in better clips in the long run.
  14. I also don't wish to argue. My point was that there are influences on DoF in real-world conditions that add up to the "full frame look" in such a way as to make the comparison of the 5D (that's it, more or less) to cameras with smaller sensors futile. Your Wiki link states: among them demosaicing, sharpening and image noise reduction. Not mentioned was pixel binning - all methods that inherently reduce resolution. I'm not saying this applies only to full frame cameras; I just ask whether it's unreasonable to assume that it affects their look to a different degree. But not alone harder to detect. Lenses are generally calculated to produce the smallest possible CoCs, in order to focus a point within the physical borders of the pixel. The bigger the area of the pixel, the greater the hyperfocal distance, the bigger the DoF - which, in the real world, is a matter relative to resolution. The amount by which the image is enlarged? Exactly. Relative size is all about resolution. For my argument I didn't perform a test of my own. Note also that many sources on the net contradict me (highlighted red in my first post), like you, but only by ignoring the physics. When we had 16 mm analog film, we just knew there would be a noticeable increase of DoF if we shot on Tri-X (ISO 400, as compared to ISO 100), even though we shot wide open. When we got the first HD camcorders that had the same 1/3" sensor size as our SD camcorders, we didn't expect a difference as far as DoF was concerned. But there was one - not dramatic, not enough to let us pass on 35 mm adapters ('DoF machines', proof that the 'big sensor' factor mattered the most). Once again: I don't intend to question every single one of Northrup's statements, nor do I try to come up with a new theory. But for a "full frame look" (thread title, obviously referring to the 5D) there are just too many of the 'several simplifications'.
  15. tupp, you didn't check the meaning of 'circle of confusion', did you? (Actually it is the *only* reason for shallow DoF; I just brought resolution into the equation.) You take for granted that when you see a projection, there don't need to be reflective textures fine enough to define the individual picture element you recognize? This is, excuse me, a rather naive way of understanding optical laws. Softer? You mix up resolution and sharpness. Low-resolution images may look out of focus when scaled to the same size as a high-resolution image. I never wrote that the CoC-resolution connection is the most important factor for DoF, but it is inseparable from it, and therefore your statement that 'a given depth-of-field optically remains constant, regardless of sensor/film resolution' is wrong - given that there always has to be a medium that receives the light coming through the lens, be it dust or smear on a glass pane, chalk grain on a wall, silver nitrate crystals, pixel circuits, your retina's rod cells (by the way: our night vision has absolute depth of field, even though our irises are wide open) - optically, physically, whatever. Instead of arguing, you could make a test of your own. Open the aperture, then film with your camera's highest ISO/gain. You will find a considerably bigger depth of field than with your lowest ISO. It's not proportional to what would change with closing the aperture, but nobody said so. I'm happy we agree. Then what can all the pseudo-scientific comparisons of crops and lenses by their sheer numbers be worth?
  16. I didn't know you were not familiar with that. Of course resolution affects DoF. If a pixel on the sensor is very big, it swallows a bigger circle of confusion, whereas if you have four times as many pixels on an identically sized sensor, it needs a CoC half the size to render sharp outlines. It's not the size of the sensor alone that affects DoF, it's the size of the sensor relative to its resolution. This video explains it (jump to about 4'30"): What is more: one 50 mm f1.4 is not as sharp as any other lens with the same specs, meaning it may not be able to focus an equally small CoC. Would the comparison of a 21 MP sensor (M2) to a 22 MP sensor (M3) show significant differences in DoF? I doubt it. There is so much signal processing going on before you actually see the image, particularly with HD video. What I am trying to say: things are much more complicated in the real world, and the only rule that always applies is that a faster lens will bring you less noise and better control over DoF.
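The pixel-pitch point can be checked with a two-line sketch (the sensor width and pixel counts below are hypothetical round numbers): quadrupling the pixel count on the same sensor doubles the horizontal pixel count, so the pitch, and with it the acceptable CoC, halves rather than quarters.

```python
# Pixel pitch on an identically sized sensor: 4x the pixel count means
# 2x the horizontal pixel count, so the pitch (and the CoC that still
# reads as "sharp" per pixel) halves, not quarters. Numbers are illustrative.

def pixel_pitch_mm(sensor_width_mm, horizontal_pixels):
    return sensor_width_mm / horizontal_pixels

p_coarse = pixel_pitch_mm(17.3, 2000)  # hypothetical 4 MP-class sensor
p_fine = pixel_pitch_mm(17.3, 4000)    # 4x the pixels -> 2x per axis

print(p_coarse / p_fine)  # 2.0
```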
  17. ISO was a reliable factor in analog photography; it had to be. It can still be a reliable factor if you calculate exposure for one and the same camera. I am afraid this is no longer true for digital cameras with different sensor sizes and different ways a full HD video (and in most cases it's not video they are built for!) is derived from a multitude of pixels on that sensor (pixel binning, line skipping, bicubic scaling, and I don't know how debayering fits into this), let alone the question whether cranking up the signal is done by actual electronic gain or by just changing the gamma of the raw video before it's baked into a compressed 8-bit file. Because an often ignored fact is that DoF has another factor: the CoC, which makes the pixel size as important as the sensor size - or in other words: the sensor size means nothing unless you also know the pixel size on it. The same lens with the same aperture will have different DoFs at different ISOs. How is that? Because using higher than the sensor's native ISO results in lower resolution. The pixels with actual signal within the noise are being sampled over time (video, of course), but they effectively behave like bigger (and therefore fewer) pixels, increasing the depth of field. EDIT: On the net, this correlation is frequently denied, stating that higher ISO allows for smaller apertures and that this alone would then increase the DoF. I admit that the CoC combined with ISO is not a very obvious factor. There should be tests, maintaining exposure exclusively with ISO and NDs, to prove or refute the theory. All this math to reliably compare just too many contributing factors is futile imo. Try and buy the fastest lenses available for your system, speed-boost them if possible, is my advice.
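The CoC's role can be made concrete with the standard thin-lens DoF approximation; this is a generic textbook sketch with illustrative values (not a model of any particular camera), showing that doubling the acceptable CoC roughly doubles the depth of field:

```python
# Standard thin-lens depth-of-field approximation, showing that a larger
# acceptable circle of confusion c widens the DoF. Values are illustrative.

def hyperfocal_mm(f, N, c):
    """Hyperfocal distance: H = f^2 / (N * c) + f (all in mm)."""
    return f * f / (N * c) + f

def dof_mm(f, N, c, s):
    """Total depth of field at subject distance s (all in mm)."""
    H = hyperfocal_mm(f, N, c)
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if H > s else float('inf')
    return far - near

# 50 mm at f/2, subject at 3 m, with two candidate CoC values:
print(round(dof_mm(50, 2.0, 0.015, 3000)),  # ~213 mm total DoF
      round(dof_mm(50, 2.0, 0.030, 3000)))  # ~427 mm: ~2x with 2x the CoC
```

Whether higher ISO really enlarges the effective CoC (via noise reduction and binning behaving like bigger pixels) is the post's hypothesis; the formula only shows that *if* it does, the DoF follows.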
  18. Use the keyboard. No 'mouse hand' with J-K-L, the arrows, I & O and so forth. The keys should be flat and sensitive, or else your index finger will hurt. Trackpad or Magic Mouse, gestures? Trackball, jog shuttle? I know some (including me) who tried these but turned back to the keyboard eventually.
  19. Nobody stops us from publishing our own ideas in the Creativity & Ideas forum, documenting their development, discussing our scripts, posting storyboards or production paintings, asking for help finding locations, for tutorials on better make-up or costumes, or for tips on how to better direct untrained actors. Lighting, grip, editing, what have you. Why does no one take advantage of this? Why does everyone keep his plans or fancies secret?
  20. Hi Blanche, there is a sub-forum, Creativity & Ideas, but no one amongst us is either creative or has ideas, so the topics collect dust. There also was a whole new forum, creativcrit, but it didn't survive for long. Every now and again someone feels obliged to post the truism 'it's not the camera' in a completely insipid gadget thread, but he's like Jesus on Wall Street, stating 'money can't buy you love'. Another saying hits the nail on the head: actions speak louder than words. There are hundreds of fantastic, affordable cameras, cinematic lenses and so forth, but no one generates content? Why?
  21. Half an hour ago, the filter arrived. I couldn't wait and snapped it onto the Sigma ... EDIT: The first test looked promising, but was performed too hastily. There is still moiré. Proof later today ... Look at the balcony. Somewhat nicer roll-off into clipped areas, though - not completely useless.
  22. People who downloaded the 4K file and watched it on their 4K monitor reported it was still there. My guess is that this is caused by in-camera sharpening, making the edges of the books come alive during pans. As AaronChicago pointed out in another thread, 'incredible sharpness' is just that: incredible. The higher the resolution, the more an image suffers from irrelevant detail; you shouldn't then accentuate it further. Had they dialed down sharpness, there probably wouldn't have been this aliasing. EDIT: I see. They say: "Camera settings used in this video: • Photo style: Cinelike D (all set to 0)". The GH4 may indeed be in need of a Digital Diffusion filter. We have seen good resolution in digital cinema, but never did the image burst apart into individual pixels. Watch the famous Nolan examples - there very often is some kind of fog smoothing the edges and adding depth to the 2D image. What also causes a videoish look is the harsh clipping of the windows, another reason for using a soft filter.
  23. In RAW, there is no LOG. And yes, the images look colorful. You can take the colors out in post, as I wrote, just as you can pump saturation into ProRes clips that look flat and desaturated. You can't trust the display or an external monitor; with the Pocket you just decide later.