Guest Ebrahim Saadawi Posted January 21, 2015

Warning: long post, only useful for beginners. Most pros here already know all of this, so it would be quite boring to read! I had some free time to tutor a group of new teen filmmakers and thought I would start with this subject: what makes high image quality (IQ) in a camera system. It's also written in a manner for people whose first language is not English. It's been very interesting for me too to explore, so I went ahead and gathered all I know. Let me know if I missed something or have any wrong facts. I would love all camera reviewers to use this list in every camera review; it would make reviews so much better. Anyhow:

_________________________________

Cinematography is all about making beautiful images. Like any other form of art, it's not an entirely emotional concept; it also has actual scientific/technical elements that combine to make a beautiful and pleasing image. I hear many DPs saying certain cameras have a certain "mojo" or "look" that can't be described, and it really annoys me to drift into the field of magic and voodoo; I believe this mojo can be technically broken down into actual elements. So one of the most important things a beginner must learn is the technical side of image quality, and the elements that make an image either high or low quality.

Let's list the elements that determine image quality, then explain what each one means:

1- Resolution
2- Noise amount
3- Noise pattern
4- Dynamic range amount
5- Dynamic range roll-off
6- Colour fidelity, depth and science
7- Compression quality
8- Digital artefacts (aliasing/rolling shutter)
9- Sensor size/image format
10- Lens quality and how it affects IQ

It might look intimidating at first glance, but these are really quite simple and understandable. Let's begin.

1- Resolution

Just to make sure we start off on the right foot, you have to understand that a video is simply a large number of photographs presented at a very high speed, giving the illusion of movement. Each one of these photographs is made up of very small building blocks called "pixels". An image is made of millions of tiny pixels next to each other, and the higher the number of pixels, the more detail you get: an image with more pixels is clearer, sharper and shows fine detail better than an image with a low pixel count.

In the photography world, image resolution is measured in "megapixels" (MP), mega meaning one million. A 5MP camera therefore means that each image is made up of 5 million tiny pixels/building blocks; 20MP = 20 million pixels, and so on.

The resolution of video, however, is not usually measured in MP but by a number followed by "p", for example 720p, 1080p, etc. That number is the count of pixel rows the image has vertically: a 1080p image is made of 1080 pixels vertically and 1920 pixels horizontally. As in simple math, multiplying the number of pixels in one column by the number in one row = the total number of pixels in the image, and 1920 x 1080 = around 2 million. So "1080p" is just another way of saying the image is 2 megapixels; instead of stating the total pixel count, we state the vertical one.

The standards you must be familiar with are:

SD 480p: 640x480, which equals about 0.3MP
HD 720p: 1280x720 = about 0.9MP
Full HD 1080p: 1920x1080 = about 2MP
Ultra HD 4K: 3840x2160 = about 8.3MP
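If you want to verify these numbers yourself, here is a tiny Python sketch (the resolutions are just the standards listed above; nothing camera-specific is assumed):

```python
# Megapixels = width x height / 1,000,000
standards = {
    "SD 480p": (640, 480),
    "HD 720p": (1280, 720),
    "Full HD 1080p": (1920, 1080),
    "Ultra HD 4K": (3840, 2160),
}

for name, (width, height) in standards.items():
    megapixels = width * height / 1_000_000
    print(f"{name}: {width}x{height} = {megapixels:.1f} MP")
```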
You might wonder why resolution is so much lower in video than in photography (where images are usually 20MP and more). One answer is that a video contains hundreds of thousands of photographs, so files would be impossibly large if each of those photos were 20MP+; 2MP is enough for now. Another reason is that resolution matters more for photographs, where the viewer carefully observes every fine detail for a long time; in video each image is displayed and replaced in a fraction of a second, so the viewer doesn't have much time to study fine detail.

So to recapitulate: resolution is how much information/detail there is in an image; the higher the number, the clearer and sharper the image. The current most common standard is 1080p HD. That's all you need to know.

2- Noise performance

Every image has an amount of noise/grain visible in its dark areas, and this noise is considered an indication of low quality. The lower the noise amount, the higher the quality.

But it doesn't stop there: there's also the factor of HOW the noise looks, not just how much of it there is. Noise can be specks of ugly red/blue/green, square blotches, or simple small black dots. When the noise is small dots it's called "fine" noise, and finer (smaller) noise indicates better image quality than ugly coloured/large noise.

- Fine/nice noise structure:
- Ugly colour noise structure:

Although both images have the same amount of noise, the type of noise in the first is much more pleasing. So noise performance is determined by how much noise there is, and by how that noise actually looks.
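To get a feel for the difference, here is a small NumPy sketch that fakes both kinds of noise on a flat grey patch. The sizes and noise levels are arbitrary illustration values, not measurements from any camera:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 256, 256
grey = np.full((h, w, 3), 60.0)  # a dark grey patch on a 0-255 scale

# "Fine" noise: tiny per-pixel variations, equal in all colour channels,
# so it reads as neutral grain rather than coloured speckle.
fine = np.clip(grey + rng.normal(0, 8, size=(h, w, 1)), 0, 255).astype(np.uint8)

# "Ugly" noise: coarse 8x8-pixel blotches with a different offset per colour
# channel, which reads as the blocky chroma noise described above.
blotches = rng.normal(0, 8, size=(h // 8, w // 8, 3))
blotches = blotches.repeat(8, axis=0).repeat(8, axis=1)
ugly = np.clip(grey + blotches, 0, 255).astype(np.uint8)

# Both patches carry a similar amount of noise; they just look very different.
print(fine.std(), ugly.std())
```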
3- Dynamic range

Imagine filming an interview with someone sitting in a dark room, and in the frame there's a very bright window behind him. A low-DR image would either display the person and the room detail while the window blows out to pure white with no information, or display what's outside the window while crushing the person to black. A high-DR image shows both what's outside the window and the room/interviewee in the darker area.

Here, the first two examples show how low DR looks: you have to choose between showing the bright areas and throwing out the dark areas, or choosing the dark and throwing away the bright window. In the third, the camera records both the bright and the dark information.

In simple terms, DR is how much the camera can see in the bright and dark areas at the same time. High DR means the shadows are bright and visible while the highlights still hold information; the image doesn't "clip" the highlights to pure white or "crush" the shadows to complete black.

However, it's not just a question of when the image clips, but also of HOW it clips. Notice how both images lose the same amount of highlight detail (they have the same dynamic range), yet the one on the left looks much worse because it clips harshly, while the right one has a smooth, pleasing transition. Highlights can be clipped to pure white in two ways: harsh/abrupt clipping, where you see a distinct line where the clipping occurs (considered a bad aspect of IQ), or the way higher-end cameras do it, with an extremely smooth transition and no ugly lines. The latter is especially true of 35mm film and is therefore considered a filmic/pleasing trait. This IQ element is commonly termed "highlight roll-off" (how the highlights "roll", or transition, into white).

4- Compression/Codecs

A video is made up of thousands of photographs presented at a very high speed, so each video file contains an enormous amount of data and is very large to store and transfer. Scientists therefore invented ways to "compress" these files into smaller ones by throwing away the information that has the least visible effect on IQ.

They discovered that the human eye is very sensitive to changes in the black-and-white (brightness) information of an image, but far less sensitive to changes in colour. The brightness information is called "luminance", or luma for short, while the colour information is called "chrominance", or chroma. If you keep all the luma information but throw away some of the chroma, IQ is barely affected; our eyes hardly see the difference.

This process is called "chroma subsampling", and it comes in three common flavours: 4:4:4, 4:2:2 and 4:2:0. The first number indicates the luma data while the other two indicate chroma:

4:4:4 means full luma information (the first 4) and full chroma information (the last two 4s); no chroma is thrown away.
4:2:2 means full luma but only half of the original chroma data; half the colour information is discarded to reduce file sizes.
4:2:0 means full luma but only one quarter of the original chroma, so 3/4 of the colour information is discarded.

As expected, 4:4:4 gives the highest IQ and the largest files, 4:2:2 gives slightly lower IQ and smaller files, and 4:2:0 gives the lowest IQ but very small, manageable files.
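To see what this does to data rates, here is a back-of-the-envelope Python sketch of the raw (uncompressed) data in one 1080p frame under each scheme, assuming 8 bits (one byte) per sample. Real codecs compress far beyond this, so treat the numbers as illustration only:

```python
# Raw bytes per 1920x1080 frame at one byte per sample.
width, height = 1920, 1080
luma_samples = width * height  # luma is always full resolution

# (horizontal, vertical) chroma resolution relative to luma
schemes = {
    "4:4:4": (1.0, 1.0),  # full-resolution chroma
    "4:2:2": (0.5, 1.0),  # half the chroma horizontally
    "4:2:0": (0.5, 0.5),  # half the chroma both horizontally and vertically
}

for name, (ch, cv) in schemes.items():
    chroma_samples = 2 * (width * ch) * (height * cv)  # two chroma channels (Cb, Cr)
    total_mb = (luma_samples + chroma_samples) / 1_000_000
    print(f"{name}: {total_mb:.1f} MB per uncompressed frame")
```

Even before any real compression runs, 4:2:0 carries only half the data of 4:4:4.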
The amazing thing is that while the file-size difference between 4:4:4 and 4:2:0 is humongous, the image quality difference is very slight, which makes 4:2:0 one of the most used and popular compression choices in the world. The difference can become visible to a professional colourist, especially when making significant colour changes in the edit and really pushing the image far. So 4:4:4 is usually preferred by people who won't compromise IQ and need heavy colour grading ability; it's used in very high-end environments where file size is of course not an issue.

The other compression factor also targets colour rather than luma (for the reason we know: our eyes), but it doesn't simply throw away half or 3/4 of the colour data; it does something different. Each colour in the image comes in many steps/intensities; blue, for example, can be very light or very dark. The number of "steps" along these colour gradients is measured by "bit depth":

8-bit means there are 256 shades/steps of each colour.
10-bit means there are 1024 shades/steps of each colour.
12-bit means 4096 shades of each colour.
There are also 14-bit, 16-bit, and so on.

Having a higher bit depth (say 12 instead of 8) is like having a bigger box of crayons: more versions of each colour, ranging from light to dark. This, too, is surprisingly hard for the human eye to notice, and it usually only presents issues when making extreme colour changes in editing.

The way a low bit depth (8-bit) shows itself is an issue called "banding", common in 8-bit codecs. It's visible when shooting smooth colour gradients, like a blue sky with a smooth transition from light to dark blue. With only 256 shades of each colour, "steps" appear in the sky, looking like "bands", because there's a sudden jump from a darker to a lighter tone of blue. Having 1024 blue values (10-bit) makes the steps smaller and less visible, while a 12-bit codec shows an extremely smooth transition from dark to light. So 8-bit saves a huge amount of file size but introduces this "banding" artefact on smooth gradients.

Again, the difference is minimal and usually not noticeable by most people, but some professional colourists on high-end projects prefer a higher bit depth (more shades of each colour) because it helps when shifting colours in the grade.
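You can see where banding comes from with a little Python sketch: quantise a perfectly smooth gradient to different bit depths and count how many distinct "bands" span the frame. It's a toy model of the effect, not any specific codec:

```python
import numpy as np

# A perfectly smooth brightness gradient across a 1920-pixel-wide frame
gradient = np.linspace(0.0, 1.0, 1920)

for bits in (8, 10, 12):
    levels = 2 ** bits  # 256, 1024 or 4096 shades per channel
    quantised = np.round(gradient * (levels - 1)) / (levels - 1)
    bands = len(np.unique(quantised))  # distinct steps left across the frame
    print(f"{bits}-bit: {levels} shades available, {bands} bands across the frame")
```

At 8-bit the 1920-pixel gradient collapses into 256 visible steps; at 12-bit every pixel can take its own value, so no banding survives.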
Most consumer cameras use the heaviest compression, 8-bit and 4:2:0, because normal consumers benefit hugely from small file sizes and don't usually care how the image holds up in editing; they simply wouldn't use a 10/12-bit 4:4:4 image, and it would increase their file sizes dramatically for no reason.

All of this is called compression, and compression is performed by what's called a "codec". The most popular codec right now is H.264, which uses an 8-bit 4:2:0 compression scheme and is used on practically all consumer cameras and phones. A higher-end codec is Apple ProRes HQ, which uses 10-bit 4:2:2 compression, making it a better option for high-end productions and professional cameras. There are more codecs out there, but these two are the most popular and all you really need to know.

A codec is a compression method: H.264 is a compression method, and so is ProRes HQ. The encoded video is then stored in a "container", which can be MOV, MP4, AVI, MXF, RMVB or any other container. I mention this because containers are commonly mistaken for codecs, which is not true: an MP4 file is typically a container for an H.264 stream. The container just carries the codec for you and puts it on your memory card or hard drive. So MP4 is not a codec and not a compression method.

You will always see professionals bad-mouthing the H.264 codec; they specifically hate it and claim it has poor image quality. I don't agree. Yes, ProRes 10-bit 4:2:2 has better image quality, less compression and more information, but the IQ difference is extremely minimal for most of us, while the file-size difference is huge. H.264 is a lovely codec for people who want to begin shooting video. If you're a professional and need the best IQ for editing, then sure, use ProRes, but for most of us the files are simply too large. Each codec has its use, and different productions require different codecs.

Examples of cameras/phones that use H.264: all Canon, Nikon and Sony DSLRs; all smartphones and tablets on the market (iPhone, Samsung, Nokia); and the Canon C100 and Sony FS100/FS700, which are very popular and expensive professional video cameras, yet use H.264, so it's not that bad really! For stills, photography cameras use the JPEG codec (8-bit) or raw (12/14-bit).

ProRes (10-bit 4:2:2): AJA Cion, Arri Alexa, Blackmagic cinema cameras.
Canon MXF codec (4:2:2 but 8-bit): Canon C300, C500, XF305, XF205.
Redcode raw (12-bit): Red Epic and Red Scarlet (Dragon and MX).

Sony calls its H.264 codecs "AVCHD" and "XAVC-S". These are just marketing names for H.264, so don't be fooled into thinking they're something higher-end; they are 8-bit 4:2:0 H.264 codecs named differently.
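To make the file-size argument concrete, here's a rough Python sketch converting bitrate into storage per hour. The bitrates are ballpark assumptions (DSLR H.264 is commonly a few tens of Mbit/s, and ProRes HQ at 1080p is on the order of 200 Mbit/s), not official specs for any camera:

```python
def gb_per_hour(mbit_per_s):
    """Storage consumed by one hour of footage at a given video bitrate."""
    bits_per_hour = mbit_per_s * 1_000_000 * 3600
    return bits_per_hour / 8 / 1_000_000_000  # bits -> gigabytes

# Assumed ballpark bitrates, for illustration only
for codec, rate in [("H.264 8-bit 4:2:0 (typical DSLR)", 35),
                    ("ProRes HQ 10-bit 4:2:2 (1080p)", 200)]:
    print(f"{codec}: {rate} Mbit/s is about {gb_per_hour(rate):.0f} GB per hour")
```

That's roughly 16 GB per hour versus 90 GB per hour, which is why consumers are happy with H.264 and professionals budget for big drives.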
5- Image format/Sensor size

When you point your camera at a subject, light hits that subject, reflects into the camera's lens, and the lens sends this light to an imaging "sensor" that sits directly behind the lens. The image sensor is where the image is formed; the sensor then sends that image to the memory card for storage. The imaging sensor is therefore perhaps the most important factor in determining IQ, and the most valuable part of your camera system.

Sensors come in different sizes, and there are a few standards to be aware of. The two most common are called "APS-C" and "full-frame", full-frame having more than twice the surface area of APS-C (which is still very large in its own right!).

Having a larger sensor has many benefits. The most direct one is that it collects more light: it's a bigger "bucket" that can take in a huge amount of light. These large full-frame sensors work amazingly in low-light situations, where there isn't much light in your scene and collecting as much of it as possible is very valuable. In brief, larger sensors deliver better images in low light; they are more sensitive, so they produce cleaner images in dark conditions with much less noise.

The second most important benefit of a larger sensor is the ability to strongly blur the background behind your subject, which is especially desirable in portrait photography/videography, where you want the image focused on the person's face while the background disappears into a beautiful blur. A small sensor simply gets both the face and the background in focus, which is distracting. The reason a larger sensor gives this ability is rather complicated and beyond the scope of this article; it's pure physics and how light behaves over large surface areas, and it's not really important to know here. With a full-frame sensor you get the image on the bottom, while with a small sensor you get the one on top, everything else being equal.

All you need to know is: larger sensor = better low-light performance = easier background blur = generally better image quality, simply because it collects more light.

Both APS-C and full-frame are quite large formats; it's just that full-frame is even larger and is especially sought by professionals who want the absolute maximum IQ. Most of us can still create beautiful images on smaller APS-C sensors, and you can get extremely blurry backgrounds on APS-C using the other elements that affect focus besides sensor size (faster lenses, getting close to the subject, longer lenses, etc.). To put things in perspective, Super 35mm film, the format used to shoot most of the Hollywood films we see in cinemas, is about the same size as APS-C, not full-frame, so many respected DPs argue that APS-C gives the filmic background blur without being as excessive as full-frame can be. It's an opinion to be considered.
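For a sense of scale, here's a small Python sketch comparing common formats by area. The dimensions are the widely quoted nominal ones; exact sizes vary slightly by manufacturer:

```python
# Nominal sensor/film dimensions in millimetres (approximate)
formats = {
    "Full-frame": (36.0, 24.0),
    "APS-C (typical)": (23.6, 15.6),
    "Super 35 film (4-perf)": (24.9, 18.7),
    "Micro Four Thirds": (17.3, 13.0),
}

full_frame_area = 36.0 * 24.0
for name, (w, h) in formats.items():
    area = w * h
    print(f"{name}: {w} x {h} mm = {area:.0f} mm^2 "
          f"({area / full_frame_area:.0%} of full-frame)")
```

The APS-C result (a bit over 40% of full-frame's area) is where the "more than twice the area" claim above comes from.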
6- Digital artefacts

There are a few visual artefacts/defects common in digital video cameras. Let's list them so you know what they are and can avoid them.

- Moire/aliasing: the weird patterns you get on fine lines and textures in an image, most commonly on striped shirts, brick buildings and power lines/cords. This phenomenon is most common on DSLR cameras that shoot video (Canons/Nikons), while it doesn't appear as much on dedicated video cameras. It's best avoided by trying not to film the brick buildings, power lines or tightly striped clothing that specifically trigger it.

- Rolling shutter/jello effect: a phenomenon where parts of the image shift left and right inconsistently, making straight lines appear "bent". It's common when moving the camera quickly sideways (panning), and it's called the "jello" effect because it makes the image wobble the way jello does. It's most effectively avoided by not panning too quickly; keep it steady and slow. It happens in cameras with CMOS sensors, while cameras with CCD sensors produce a stable image even with fast movement. Modern cameras like DSLRs use CMOS sensors because they have better low-light performance and are much cheaper to manufacture than CCDs, so rolling shutter is a compromise we accept in exchange for those benefits. Lately, however, some manufacturers have started producing CMOS sensors with a global shutter (no rolling shutter), combining the best of both worlds: no bending of straight lines, but still good low-light ability and low manufacturing cost. Examples are the Blackmagic URSA, AJA Cion, Sony F55 and a few others.

- Banding: described earlier as a result of the low bit depth of 8-bit codecs, where smooth gradients appear to have "bands" or lines. Most commonly seen in blue skies.

7- Lenses and how they affect IQ

The lens is one of the most important elements in an imaging system, because if the lens sends a bad image to the sensor, it doesn't matter how great the sensor is; it receives a bad image to begin with, no matter how amazingly advanced it is!

Lens defects:

- Low sharpness: the lens delivers a soft image with no fine detail, similar to what a low-resolution camera delivers. A sharp image depends on both the resolution of the sensor (MP count) AND the sharpness of the lens; if a lens sends a soft image to the highest-resolution sensor in the world, the final image will still be soft. It's the combination of both that makes a sharp image.

- Distortion: the image gets distorted or stretched, most visibly near the corners of wide-angle lenses (barrel distortion). Distortion can also occur on long lenses, but with the opposite effect: rather than the centre bulging and the corners shrinking, the corners bulge while the centre is compressed. This is called "pincushion distortion". This is part of why portrait photographers like long lenses (85mm and up): with a wide-angle lens (18mm) used up close, the centre of the face (the nose) bulges while the ears recede, whereas a long lens compresses the facial features and makes people look more flattering. Note how on the right, an 18mm wide-angle makes the face look distorted, while on the left a 200mm lens compresses the facial features and gives a more pleasing face. So always shoot faces with a long lens; your clients will be happier. (As discussed later in this thread, most of this effect actually comes from the shooting distance each lens forces on you, not from the lens optics alone.)
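Barrel and pincushion distortion are often modelled as a simple radial remapping. Here's a sketch of that textbook one-coefficient model in Python; it's an approximation for illustration, not any real lens profile, and sign conventions vary between sources:

```python
def radial_distort(x, y, k1):
    """One-coefficient radial distortion model.

    A point at radius r from the image centre moves to r * (1 + k1 * r^2).
    With this convention, k1 < 0 squeezes the corners inward, which bends
    straight lines outward like a barrel (typical of wide angles), while
    k1 > 0 stretches the corners outward, bending lines inward (pincushion,
    typical of long lenses). Coordinates are normalised: centre = (0, 0).
    """
    factor = 1 + k1 * (x * x + y * y)
    return x * factor, y * factor

corner = (1.0, 1.0)
print("barrel (k1 = -0.1):    ", radial_distort(*corner, -0.1))  # corner pulled in
print("pincushion (k1 = +0.1):", radial_distort(*corner, +0.1))  # corner pushed out
```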
- Vignetting: the image gets darker towards the corners. Some photographers/cinematographers actually like the effect and argue it draws the viewer's eye to the centre of the image, even though it's a lens defect. In any case, vignetting is very easy to add or remove in post-production, so it's not a major issue.

- Chromatic aberration/colour fringing: purple or blue lines along high-contrast edges in an image. This is an indication of low lens quality.

- Bokeh: the quality of the blurry, out-of-focus areas of the image. A good lens renders round, creamy, smooth blur, while a bad lens renders a harsh, distracting look. This is best demonstrated here: in the right image the bokeh isn't circular and is more distracting, while the left one looks dreamy and smooth (perfect creamy, circular bokeh). You might wonder why some lenses produce circular bokeh and others don't; it's determined by the shape of the lens opening, which is controlled by the number of iris blades. A lens with few blades makes non-circular (polygonal) bokeh, while a lens with many blades makes rounder, more pleasing bokeh. The number of iris blades is listed in any lens's specifications, so now you understand why a higher number is better to have.

An ideal lens would have none of these defects. The higher the quality of a lens, the fewer defects it shows, and the more expensive it gets! So in low-budget photography/videography you'll have to make a few sacrifices and live with some lens defects; it's better to know what they are so you can hide or avoid them.

________________________________________________________________________________________

This article described the elements that combine to make a high-quality image. An ideal camera system would produce an image with:

- The largest sensor size
- The highest resolution/sharpness
- The highest dynamic range
- The smoothest highlight roll-off
- The lowest noise amount
- The smallest/finest noise structure
- The highest colour depth and colour information (12/14-bit)
- The least image compression (4:4:4)
- The fewest digital artefacts (no aliasing/jello/banding)
- The fewest lens defects (no distortion, fringing, bad bokeh or low sharpness)

Does this camera system exist? Not at the low-budget end of the market! So you'll have to make compromises based on your specific budget and on which shortcomings you can tolerate. This is an essential part of image making, whether in photography or videography.

___________________________________________________________________________________________________

If I missed anything, help me complete this "IQ elements" article I am writing for a group of beginner videographers; let's make a complete guide to the technical side of high-quality images. There are two topics I think are specifically missing: the colour science of different cameras, and motion cadence. I couldn't write anything about either, really, because both are beyond my comprehension. So if anyone could write a piece on what each one means, with samples, it would be a great addition to the article (hopefully in simple English for non-native speakers).
oferlevy Posted January 22, 2015

Excellent article! Sent it to a friend who is just starting with video. Thank you for sharing!
Andrew Reid (Administrator) Posted January 22, 2015

Damned impressive post. Cheers!
dbp Posted January 22, 2015

Great write-up! It really gives a crash course on the fundamentals of imagery. The examples are really good too, compared to similar articles I've read.

There are two things I would throw into the hat. The first is an overview of frame rate, and the second is motion cadence in general. This is a controversial one and I'm still not sure how I feel about it... but there are people who feel that all cameras render motion differently, even when the frame rates are matched. I feel this way too, even though I'm not sure if it's in my head or not. I've heard some people claim that Sony cameras, at least the older models, had a bit of an ugly cadence, whereas something like the DVX100/HVX200 had a pleasing cadence. I find the Blackmagic line stands out as having good motion cadence. I know some people claim that intra-frame codecs display motion in a more pleasing way, whereas Long-GOP (i.e. AVCHD) footage is a little off. If it is a variable, I think it's a subtle one. Nonetheless, an important point of discussion.

The second is a huge point as far as I'm concerned, and that's colour science. Not in terms of bitrate, 4:2:0 etc., but rather how the sensor interprets and renders colour and tones. The "mojo", if you will. I think it's a huge reason why the early Canon DSLRs were so popular and remained popular despite being surpassed from a technical standpoint. The Panasonic line was often chosen over cameras like the sharper, larger-sensor EX1 for this reason: the Panasonic mojo. I have the GH2, and a lot of people (me included) were often not pleased with the green/yellow tint. Cameras like the C100/C300 have had a lot of success despite being overpriced for what they deliver on the spec sheet: a pleasing look out of camera, particularly nice skin tones, goes a long way. I think this one is more subjective and tougher to compare than something like resolution. But damned if it's not insanely important in my opinion. So those are my two.
sunyata Posted January 22, 2015

Ebrahim - Very nice, your article reads like an Apple support page, excellent supporting visuals! Tiny suggestion to disassociate bit depth from compression, but that's all I got. Great post.
arella Posted January 22, 2015

Thanks Ebrahim! =) Your overview was very thorough and easy to follow, and your illustrations were particularly effective.
Guest Ebrahim Saadawi Posted January 22, 2015

Thanks guys, glad you liked it. I wanted to edit the post to correct some unclear information in the bokeh section based on suggestions from forum members, but I can't seem to find an "edit" option here. Is there one?

Sunyata: "Tiny suggestion to disassociate bit depth from compression." Can you please elaborate so I can understand where the mistake is and correct it? Isn't bit depth a compression method like chroma subsampling?

I am still searching for a good definition of colour science and motion cadence to add to the article. (Andrew, can you give some insight on the definition of colour science, since you seem quite experienced with different systems?) I think these two would complete the article as a full guide to IQ elements.
zenpmd Posted January 22, 2015

One of the finest posts, articles even, ever found on the internet!!!!!
Nikkor Posted January 22, 2015

One thing: the distortion on faces comes from perspective, not from distortions of the lens. Just close one eye and get close to someone's face; your brain corrects any distortion, but the perspective is still there and it will look awkward. This is why some people say that wide-angle close-ups are "intimate".

Let's say the most natural perspective is with a 50mm. A shot done with a 50mm on full-frame looks natural because, when viewed at a normal screen-size/viewing-distance relation, it has the same perspective as your eyes watching the scene through a rectangle from the same place the camera was. Going above 50mm gives a close-up, or the way you would remember something you concentrated on in detail while sitting a comfortable distance away.
Nick Hughes Posted January 22, 2015

[quoting Nikkor's point above about perspective] Right. It's the same sort of idea as with photos like this: a lot of people use the term "lens compression" or "telephoto compression" when a more appropriate term would be "perspective compression". It's not anything about the lens itself that compresses the image (except that it allows you to get "close"); it's how far away you are from the subject. It's a small distinction and doesn't make much difference when shooting (you're not going to stand 200 feet away with a 28mm and crop it to look like it was shot with a 200mm), but it's a good thing to know.
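A quick way to convince yourself of this is to compute relative image sizes from distances alone, with no lens involved. A small Python sketch (the distances are made-up illustration values):

```python
def relative_size(subject_dist, background_dist):
    """How large the background appears relative to the subject.

    On-sensor image size is proportional to 1/distance, so this ratio
    depends only on the two distances, never on the focal length.
    """
    return subject_dist / background_dist

# Wide lens, close to the subject: 1 m away, background 6 m away
print(relative_size(1, 6))    # ~0.17 -> background looks tiny and distant

# Long lens, far from the subject: 10 m away, background 15 m away
print(relative_size(10, 15))  # ~0.67 -> background looms large, "compressed"
```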
Guest Ebrahim Saadawi Posted January 23, 2015

Two users made the same observation about distortion on another board. My thought is: most wide-angle lenses do have inherent barrel distortion in their optical design, and most telephoto lenses have pincushion distortion, and these contribute to the look. But above all, to get the same framing of an actor with a wide 18mm lens you have to move closer, changing the distance, which directly changes the perspective, while shooting with a 100mm lens means standing farther back, changing the compression. Coupled with the distortion in the optical design, I think it's fair to tell beginners that, effectively, a longer lens gets you a compressed face and a wide lens gets you a distorted face, as the picture indicates. I do get your point, though.

It's similar to arguing that a larger sensor does not give shallower DOF, because the factors that determine DOF are three: focal length, iris and distance from the subject. But in real-world use a larger sensor directly makes you change distance (get closer to the subject to get the same shot), so sensor size indirectly affects DOF, as it affects an element that directly affects DOF... if that makes sense.

I just thought it would be too complex to get into such details in an article being sent to complete beginners who just need direct, real-shooting facts. But I am open to changing quite a bit; it's still not printed or sent out.
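Since DOF came up: the three factors listed above plug into the standard depth-of-field formulas. Here's a Python sketch using the textbook hyperfocal approximations; the circle-of-confusion values are the conventional figures (roughly 0.03 mm for full-frame, 0.02 mm for APS-C), and real lenses deviate a little:

```python
def depth_of_field(focal_mm, f_number, subject_dist_m, coc_mm):
    """Total depth of field in metres, from the standard hyperfocal formulas."""
    s = subject_dist_m * 1000  # work in millimetres
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = s * (hyperfocal - focal_mm) / (hyperfocal + s - 2 * focal_mm)
    far = s * (hyperfocal - focal_mm) / (hyperfocal - s)
    if far <= 0:  # subject beyond the hyperfocal distance: DOF reaches infinity
        return float("inf")
    return (far - near) / 1000

# Roughly equivalent framing at the same distance and f/2:
# an 80mm on full-frame versus a 50mm on APS-C (illustrative values).
print(depth_of_field(80, 2.0, 3.0, 0.03))  # full-frame: ~0.16 m of focus
print(depth_of_field(50, 2.0, 3.0, 0.02))  # APS-C: ~0.28 m, noticeably deeper
```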
Guest Ebrahim Saadawi Posted January 23, 2015

[quoting dbp's post above on motion cadence and colour science]

Exactly. The lack of hard technical evidence or definitions for both terms is what keeps me from including them in the list of IQ factors, but I do agree with everything you say: they exist and do make a difference in IQ. We just need proof and actual definitions to put in front of readers. Can someone give an insight on colour science and/or motion cadence? I think Andrew could be a good resource on colour science between different camera systems. Anyone?
sunyata Posted January 23, 2015

I think Digital Cinematography by David Stump is excellent: http://www.amazon.com/Digital-Cinematography-Fundamentals-Techniques-Workflows/dp/0240817915 Then there are books on the human visual system, colorimetry etc., as well as public info if you're into reading PDFs online. This paper by Jeremy Selan, formerly of Imageworks, is really great, though it skews towards the VFX industry: http://github.com/jeremyselan/cinematiccolor/raw/master/ves/Cinematic_Color_VES.pdf

To answer your question above: bits and bytes are the basic units of digital computing, and bit depth just describes how finely the signal is quantised, while compression (lossy and lossless) involves using algorithms to encode or reduce the resulting data. So they're separate things. I agree, though, that fixating on the simple formula for colour depth misses the point of all the discrete processes that happen inside a digital camera. Wish I could help with cadence; the 24fps/180-degree film shutter still seems to be the standard I prefer, and the Alexa seems to come the closest IMO, not news.