Guest Ebrahim Saadawi Posted March 19, 2014

In my editor (Sony Vegas Pro), the project properties have an option called Pixel Format. The choices are:
- 8 Bit
- 32 Bit Floating Point (Video Levels)
- 32 Bit Floating Point (Full Range)
(I included a screengrab to elaborate.) What do those mean? Should I care, or just leave it at the 8 Bit default setting? (By the way, in case it's related, I am editing 4K Motion JPEG files from the Canon 1DC, which are 8-bit 4:2:2 color.)

And my second question, whilst we're at it: what does the Motion Blur Type option mean? The choices are Gaussian/Pyramid/Box etc. Should I care? Do these affect me?

One thing I did notice is that when changing from 8 Bit to 32 Bit Floating Point Full Range, the shadows are lowered significantly on the RGB parade; it basically changes the levels.

Sorry if I come across as ignorant, that's because I am. I am not an editor and am just starting out in the editing world. I searched everywhere but couldn't find a simple, understandable answer. I would appreciate the advice. Thank you.
see ya Posted March 19, 2014

32-bit floating point is higher precision color processing than 8-bit: float versus integer precision, and 32-bit processing is also usually done in the linear domain rather than on gamma-encoded image data. In 32-bit float there should be no loss of data from clipping; image values can be negative or greater than 1.0. You won't see that on your monitor, and it will look like clipping is happening on your scopes, but as you grade you'll see the data move into and out of scope range, whereas 8-bit processing will clip below 0.0 and above 1.0, i.e. below 0 and above 255 in 8-bit terms.

Full versus Video levels: the camera encodes the image with an RGB to YCbCr conversion that derives the YCbCr (luma and chroma difference) values over either limited range or full range, and your aim is to do the correct reverse conversion for the RGB preview on your monitor. You can monitor, preview and work with either limited or full as long as you are aware of what your monitor expects, it is calibrated accordingly, and you feed it the right range. If you're unsure, choose video levels. Video exports 'should' be limited range, certainly for final delivery; use full range only if you're sure of correct handling further along the chain. For example, to grade in BM Resolve you can set a 'video' or 'data' interpretation of the source.

Your 1DC Motion JPEGs are full-range YCbCr, but as the chroma is normalized over the full 8-bit range along with the luma (JPEG/JFIF), it's roughly equivalent to limited-range YCbCr, and the MOV container is flagged full range anyway, so as soon as you import it into an NLE it will be scaled into limited-range (video levels) YCbCr. Canon DSLR, Nikon DSLR and GH3 MOVs are all H.264 JPEG/JFIF, flagged 'full range' in the container and interpreted as limited range in the NLE, etc.

What you want to avoid is scaling levels back and forth through the chain from graphics card to monitor, including ICC profiles and OS-level color management messing with it along the way. You may also have to contend with limited versus full range RGB levels depending on the interface you're using from your graphics card (DVI versus HDMI, for example); NVidia feeds limited-range RGB over DVI and full range over HDMI.
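To put numbers on the clipping point, here is a minimal illustrative sketch in C++ (my own toy code, not how Vegas actually implements its pipeline; the 1.2x gain value is arbitrary, the 16/219 scaling is the usual 8-bit limited-range luma mapping):

#include <cstdio>
#include <cstdint>
#include <algorithm>

// 8-bit path: values live in 0..255 and anything pushed outside is lost.
uint8_t gain_8bit(uint8_t in, float gain) {
    int out = (int)(in * gain + 0.5f);
    return (uint8_t)std::min(255, std::max(0, out)); // hard clip, data gone
}

// 32-bit float path: values may go below 0.0 or above 1.0 and are kept,
// so a later correction can bring them back on scale.
float gain_float(float in, float gain) {
    return in * gain; // no clamp until the final output conversion
}

// Full-range <-> limited (video) range scaling for luma, the usual 16-235 mapping.
float full_to_limited(float y_full01) { return (16.0f + 219.0f * y_full01) / 255.0f; }
float limited_to_full(float y_lim01)  { return (y_lim01 * 255.0f - 16.0f) / 219.0f; }

int main() {
    uint8_t bright8 = gain_8bit(240, 1.2f);            // 288, clipped to 255
    float   brightF = gain_float(240 / 255.0f, 1.2f);  // ~1.13, recoverable later
    printf("8-bit: %d  float: %.3f\n", bright8, brightF);
    printf("0.5 full range maps to %.3f in video levels\n", full_to_limited(0.5f));
    return 0;
}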
Guest Ebrahim Saadawi Posted March 19, 2014

Thank you for the input, Yellow. Oh God, I must truly be ignorant! I did get a few points, but mostly it's like you're speaking Chinese! :D

But from what you are saying, since I am only doing light grading (levels adjustments, RGB curves, sharpening, no more) and my content is for web delivery, I should stick with 8 Bit? And since I rarely work with the 1DC, and mostly work with Canon/Nikon/Panasonic H.264 720/1080 DSLR files, I should stick with 8 bit? Would that be a correct assumption?
see ya Posted March 20, 2014

32-bit float is preferred regardless, but it can slow down playback and render/encode times on lower-powered systems. Testing it and seeing is probably the best approach.

Certain codecs that store 'full range', or at least luma in the 16-255 range, such as those from Sony cameras, would probably benefit from 32-bit operations: in 8-bit, values above 235 luma won't fit through the typical YCbCr to RGB conversion, i.e. they get clipped, unless the NLE specifically handles those files in a different way.
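For illustration, a rough sketch of that conversion in C++ (not any NLE's actual code; it uses the standard limited-range BT.709 YCbCr to RGB maths, and the luma value of 250 is just an example of a highlight stored above 235):

#include <cstdio>
#include <algorithm>

struct RGB { float r, g, b; };

// Limited-range (16-235 / 16-240) BT.709 conversion to RGB in 0..1.
RGB ycbcr709_limited_to_rgb(float y8, float cb8, float cr8) {
    float y  = (y8  - 16.0f)  / 219.0f;
    float cb = (cb8 - 128.0f) / 224.0f;
    float cr = (cr8 - 128.0f) / 224.0f;
    return {
        y + 1.5748f * cr,
        y - 0.1873f * cb - 0.4681f * cr,
        y + 1.8556f * cb
    };
}

int main() {
    // A highlight encoded at luma 250: legal in a full-range file,
    // out of bounds for limited-range maths.
    RGB px = ycbcr709_limited_to_rgb(250.0f, 128.0f, 128.0f);
    // In float all three channels come out around 1.07 and are kept;
    // an 8-bit integer pipeline would clamp them to 255 for good.
    printf("R %.3f G %.3f B %.3f\n", px.r, px.g, px.b);
    float clamped = std::min(1.0f, px.r); // what the 8-bit path effectively does
    printf("8-bit style clamp: %.3f\n", clamped);
    return 0;
}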
Guest Ebrahim Saadawi Posted March 20, 2014

Ah, I see. OK, thank you again, Yellow. I will take some time to work in 32-bit and see; if I don't mind the speed difference I'll continue with it. If it slows me down I will go with 8-bit.
maxotics Posted March 20, 2014

Hi Ebrahim, if your computer had all the power in the world it would default to 32-bit floating point, though, as Yellow says, most displays output 8-bit (24-bit color) because the extra data isn't needed and all the colors we can see can be described in it. The larger the chunks of data, the more time the computer takes to process them, making it slower.

You will find it easier to understand these issues if you don't think in video compression schemes like 4:2:2, etc. They were meant for display, not for image acquisition and editing. You want to think in pixels.

Let's start from the beginning. When you look out the window on a bright day your eye's iris closes down, just like a camera's; you might look at a tree at, say, f/22 (though an eye isn't measured that way), and one of your eye's "pixels" reads a value of 16,000. When you look inside, your iris opens up, say to f/4, you look at a chair in the room, and your eye again reads a value of 16,000. But it's 16,000 at f/4, NOT at f/22. The great thing about our brain is that it constructs an image out of an incredibly wide range of values. The brightness of the tree and the chair may be 200,000 shades apart on one scale, but the brain adjusts, even though you CANNOT look at both the tree and the chair at the same time and properly expose both, just like a camera. In real time, our brains do what no software can even approximate.

If you could read your brain's final image, you could probably display it in 24-bit color; it would fit the 200,000 shades into a 16,000-shade space. HOWEVER, if you wanted to make the chair brighter, for example, you'd need the shade information that your brain threw out, so you'd have to go back to those values between 0 and 200,000. That's what 32-bit floating point is about: it's what the camera "eye" saw.

I suggest you read up on RAW image data: how it works, how it is de-Bayered into final data, and so on. Learn the roots of digital photography. Then all this will make more sense. But like you, I still don't know what decisions to make when using these dang editors ;)
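A toy C++ sketch of that last point, using the post's invented 200,000-shade scene (not real sensor numbers): two highlight shades the 8-bit image has already merged into white can still be separated in float, because the original values were kept.

#include <cstdio>
#include <cstdint>
#include <algorithm>

// Expose for the interior: anything brighter than "2000" becomes display white.
uint8_t expose_8bit(float scene)  { return (uint8_t)(std::min(1.0f, scene / 2000.0f) * 255.0f); }
float   expose_float(float scene) { return scene / 2000.0f; } // may exceed 1.0, and is kept

int main() {
    float tree_a = 150000.0f, tree_b = 190000.0f; // two different highlight shades

    // 8-bit: both become 255 - identical, the difference is gone for good.
    printf("8-bit:  %d vs %d\n", expose_8bit(tree_a), expose_8bit(tree_b));

    // Float: 75.0 vs 95.0 - pull exposure down later and they separate again.
    printf("float:  %.1f vs %.1f\n", expose_float(tree_a), expose_float(tree_b));
    return 0;
}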
dhessel Posted March 20, 2014

I never use this application, but to address the last part of this, which is the difference between video levels and full range: the eye has a fixed range of colors that it can see, and digital devices and digital files have a fixed range of colors they can reproduce. This color range is referred to as gamut. The standard video color gamut is much smaller than what the human eye can see, meaning there is a whole range of colors we can see that cannot be displayed on some digital devices. We don't really notice this much because the video gamut has been carefully designed to include the most commonly seen colors, so the ones left out are extremes in one way or another.

The full range option will give you a larger gamut, hence a larger number of possible colors, still nowhere near what the eye is capable of but an improvement over video, a.k.a. sRGB or REC709 colorspace. Why would you want to use full range? Some cameras and newer devices have a larger gamut; I know that some of the 4K TVs have been advertising larger gamuts. So working in full range would allow you to preserve that throughout the edit. Almost all current HD TVs and computer monitors are sRGB or REC709 with a video color gamut, so you will want to use video range unless you have a specific reason not to.

What format to choose? If you are working with 8-bit footage and not adding any effects, just doing a straight edit, then 8 bit is fine and you can enjoy the speed-up to your workflow. Otherwise you should probably just use 32-bit to ensure the highest quality results.

Welcome to the wonderful world of digital color; there is a lot to learn, but the information is out there. I remember asking myself many years ago why computer primaries are red, green and blue when in traditional media the primaries are red, yellow and blue.
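As a rough illustration of what "a larger gamut than Rec.709" means in numbers: a saturated color from a wide-gamut source lands outside 0..1 when expressed in Rec.709 primaries, so float keeps it while 8-bit clamps it. The matrix values below are the commonly published linear BT.2020 to BT.709 conversion, rounded to four decimals; this is a sketch, not any application's actual code.

#include <cstdio>
#include <algorithm>

struct RGB { float r, g, b; };

// Linear BT.2020 RGB -> linear BT.709 RGB (rounded coefficients).
RGB bt2020_to_bt709(RGB c) {
    return {
        1.6605f * c.r - 0.5876f * c.g - 0.0728f * c.b,
       -0.1246f * c.r + 1.1329f * c.g - 0.0083f * c.b,
       -0.0182f * c.r - 0.1006f * c.g + 1.1187f * c.b
    };
}

int main() {
    RGB wideGreen = {0.0f, 1.0f, 0.0f};        // pure green in the wide gamut
    RGB in709 = bt2020_to_bt709(wideGreen);    // roughly (-0.59, 1.13, -0.10)

    printf("float keeps it : %.3f %.3f %.3f\n", in709.r, in709.g, in709.b);
    printf("8-bit clamps it: %.3f %.3f %.3f\n",
           std::clamp(in709.r, 0.0f, 1.0f),
           std::clamp(in709.g, 0.0f, 1.0f),
           std::clamp(in709.b, 0.0f, 1.0f));
    return 0;
}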
jcs Posted March 20, 2014

There was a time when integer/fixed-point math was faster than floating point. Today, floating point is much faster. GPU-accelerated apps such as Premiere, Resolve (FCPX?) always operate in floating point. The only area where 8-bit can be faster is on slower systems where memory bandwidth is the bottleneck (or on really old/legacy systems, and perhaps After Effects).
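One way to test this claim on your own machine rather than take it on faith: time the same gain over a frame's worth of pixels in 8-bit integer and in 32-bit float. This is just a sketch (frame size and gain are arbitrary); note that the float buffer is four times the size, which is exactly the bandwidth caveat mentioned above.

#include <cstdio>
#include <cstdint>
#include <vector>
#include <chrono>
#include <algorithm>

int main() {
    const size_t N = 3840ull * 2160ull * 3ull;       // one 4K RGB frame
    std::vector<uint8_t> u8(N, 200);
    std::vector<float>   f32(N, 200.0f / 255.0f);

    auto t0 = std::chrono::steady_clock::now();
    for (auto& v : u8) v = (uint8_t)std::min(255, (int)(v * 1.2f)); // clamp every pixel
    auto t1 = std::chrono::steady_clock::now();
    for (auto& v : f32) v = v * 1.2f;                               // no clamp needed yet
    auto t2 = std::chrono::steady_clock::now();

    printf("8-bit: %lld us  float: %lld us\n",
           (long long)std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count(),
           (long long)std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count());
    return 0;
}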
maxotics Posted March 20, 2014

We need an "Ask JCS" thread on this forum ;)
see ya Posted March 21, 2014

When I mentioned low end, these days that really means low-end GPUs rather than the processor. Floating-point ops via GLSL shaders I can see being achievable on lower-spec GPUs, but for non-NVidia cards or low-core-count NVidia cards I can see the processor outperforming the GPU, and then it's debatable whether 32-bit float would be comparable to 8-bit processing? For non-NVidia cards you're then relying on the extent of OpenCL support in the app. For CUDA-related processing a GTX 770 is entry level, and at 4K resolution you'd want the 4GB VRAM version, which I think is as high-end as any Mac Pro can take? Not sure about iMacs. Then it's all OpenCL for Mac anyway.
jcs Posted March 21, 2014

32-bit floating point has been faster than fixed-point integer since at least 2000 on CPUs (that's when I stopped using fixed point for my flight and driving simulators). Around that time integer could be faster for certain operations, but when benchmarking a full application, where complex operations are made, float had passed fixed-point integer (even when written in assembly). Floating point makes the code much simpler: we can defer clamping, for example, until the very last moment (when converting back to 8-bit). Otherwise, when using fixed point, we have to check for overflow more frequently, etc. The weird FS700 black pixels on strong highlight edges look like the familiar integer-overflow type of bug (where for performance reasons they're not clamping at the right time).

GPUs are crazy fast, designed for complex real-time games; 4K isn't a big deal for GPUs these days. NLEs, on the other hand, are mostly based on old tech (including Vegas, Avid, Premiere and especially After Effects, which has no real GPU acceleration). FCPX is modern, though not any faster than PPro on the same hardware (and in many cases slower when I tested it). DaVinci Resolve has an excellent GPU-based engine, the fastest NLE-ish tool I have used so far. NLE developers aren't game developers, though; if top game developers were to build an NLE, we'd see a lot more real-time performance.

MacPros (before the new Trashcan) can use many of the latest NVidia cards after being modified with the proper BIOS and hardware changes. I purchased a few cards from MacVidCards and they work great: http://www.ebay.com/sch/macvidcards/m.html . As PCs are slightly faster than Macs (same hardware running Bootcamp, etc.) and much lower cost, anyone on a budget wanting max performance for the dollar will want to build a custom PC with choice parts. Hackintoshes are pretty popular, and there's lots of info online on how to set one up; they look fast and reliable, so folks needing/wanting OSX apps can have the best of both worlds. I haven't looked at PC laptops in a while; however, Mac laptops are fast and pretty solid. I've been running VMWare Fusion on my latest MacBook Pros and GPU acceleration is pretty solid for PC apps, so there's no need for Bootcamp (though native will be faster; I haven't benchmarked it).
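A small sketch of the "defer clamping" point (illustrative only, not anybody's actual pipeline): in fixed point you must guard against overflow at each step, and whatever you clamp away is lost; in float you run the whole chain and clamp once at output.

#include <cstdio>
#include <cstdint>
#include <algorithm>

// Fixed-point 8-bit chain: lift shadows, then pull exposure back down.
uint8_t chain_fixed(uint8_t in) {
    int lifted = std::min(255, (int)in + 60);   // must clamp here or overflow
    int pulled = lifted - 60;                   // the clipped highlights never return
    return (uint8_t)std::max(0, pulled);
}

// Float chain: identical operations, single clamp at the very end.
uint8_t chain_float(uint8_t in) {
    float v = in / 255.0f;
    v += 60.0f / 255.0f;                        // may exceed 1.0, that's fine
    v -= 60.0f / 255.0f;                        // original value restored exactly
    return (uint8_t)(std::clamp(v, 0.0f, 1.0f) * 255.0f + 0.5f);
}

int main() {
    uint8_t highlight = 230;
    // Prints fixed: 195, float: 230 - the float path got the original back.
    printf("fixed: %d  float: %d  (started at %d)\n",
           chain_fixed(highlight), chain_float(highlight), highlight);
    return 0;
}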
see ya Posted March 21, 2014

Thanks for the clarification. That was my point about low-spec hardware: the OP is using Vegas, and many have bemoaned the 32-bit mode for sluggish performance. Yep, and there's now full GPU debayer of RED raw, though the compression aspect is CPU-bound. Power was the issue, I thought, a GTX 770 drawing less than, say, a GTX 680 for the same performance. But maybe Titans can be used then?
Guest Ebrahim Saadawi Posted March 21, 2014

Can't thank you enough for all your input. What about my other question: what does the Motion Blur Type option mean? The choices are Gaussian/Pyramid/Box etc. Should I care? Do these affect me?
dhessel Posted March 21, 2014

Those are filtering types for adding motion blur to moving elements in your compositions. It has nothing to do with your video footage; it only matters if you want to add some animated text / motion graphics and want motion blur on them. Gaussian would be the best and box the worst of the ones listed.
Guest Ebrahim Saadawi Posted March 21, 2014

Got it. Thank you, dhessel. Thanks again to everybody, I learnt a lot.
jcs Posted March 22, 2014

... On a low-end system not doing any effects, 8-bit could be faster than float due to bandwidth (the only way to know is to test). Vegas is a particularly slow app in general compared to other NLEs: large parts are written in C#, which is roughly 2x slower (sometimes much slower) than C++, and there's not a whole lot of GPU acceleration (I last used Vegas 11 when editing stereo 3D footage). H.264 compression is now GPU accelerated in Premiere and likely FCPX and other apps.

MacVidCards has modified the GTX 780 to run on internal MacPro power ($750). He's down the street in Hollywood; I think he might be doing his card conversion business full time now (the only local source I know of for recent NVidia GPUs for Macs). Lots of Hollywood/LA folks are using Macs for Resolve, etc. (which heavily uses the GPU). I'm still using a Quadro 5000 and will probably upgrade soon (current consumer cards are much faster: the Q5000 has 352 cores, the GTX 780 has 2304). This is a cool site for comparing GPU performance; while the Q5000 is slightly better than the GTX 750m, the GTX 780 is "massively better" than the Q5000 (except for power consumption): http://www.game-debate.com/gpu/index.php?gid=880&gid2=626&compare=geforce-gtx-780-vs-quadro-5000
see ya Posted March 22, 2014

That should be useful info for the OP. Yes, my mention of compression on the CPU was specific to BM's RED raw decoding methods in Resolve 10.1.3, released a couple of days ago.

On the MacVidCards GTX 780 for the Mac Pro: yes, there are some good performance stats around for upgrading a previous Mac Pro versus the Trashcan. But for heavy Resolve work it would be in conjunction with a Cubix or similar, or maybe a Titan or two internally otherwise, with the internal 770 / 780 GPU probably chosen for the GUI only, maybe 'Compute', and an UltraStudio for SDI / 10-bit HDMI out. Checking out the GPU comparison site you linked to, it's interesting to see the GTX 770 outperforming the GTX 780. :-)
jcs Posted March 22, 2014

Yeah, the external GPU boxes are used by high-end colorists etc. I didn't debug that GPU site, lol :)

On the question regarding blur: box, triangle, Gaussian; they are all the same, just more :) Box is an averaging filter, let's call it box(). Then:

// Box:
box();

// Triangle:
box();
box();

// Gaussian:
int passes = 9; // more passes = more accurate
for (int i = 0; i < passes; i++)
    box();

That's not the only way to do triangle and Gaussian; it just shows they are all the same mathematically, only more expensive for more quality.
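A runnable toy version of that idea, to make it concrete (1D and an assumed radius-1 kernel for brevity; real blur filters work in 2D): repeatedly applying a simple box average spreads a single bright pixel into an increasingly bell-shaped, Gaussian-like profile, which is why the Gaussian option costs more but looks smoother.

#include <cstdio>
#include <vector>

// One box pass: each sample becomes the average of itself and its neighbours.
std::vector<float> box(const std::vector<float>& in) {
    std::vector<float> out(in.size());
    for (size_t i = 0; i < in.size(); ++i) {
        float left  = in[i == 0 ? 0 : i - 1];
        float right = in[i + 1 < in.size() ? i + 1 : in.size() - 1];
        out[i] = (left + in[i] + right) / 3.0f;
    }
    return out;
}

int main() {
    std::vector<float> signal = {0, 0, 0, 1, 0, 0, 0};   // a single bright pixel

    std::vector<float> boxed = box(signal);              // "Box"
    std::vector<float> tri   = box(box(signal));         // "Pyramid" / triangle
    std::vector<float> gauss = signal;
    for (int i = 0; i < 9; ++i) gauss = box(gauss);      // approximates Gaussian

    auto show = [](const char* name, const std::vector<float>& v) {
        printf("%-9s", name);
        for (float x : v) printf("%.3f ", x);
        printf("\n");
    };
    show("box", boxed);
    show("triangle", tri);
    show("gaussian", gauss);                              // bell-shaped spread
    return 0;
}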
Guest Ebrahim Saadawi Posted March 24, 2014

Wow, that makes sense. Thank you, sir.