Showing results for tags 'h264'.
-
Hi all, Hoping someone can help with this edit workflow question: I currently shoot video on Canon DSLRs (in H.264 MOV format) and edit on a late 2009 iMac (2.8GHz i7 processor, 16GB memory). The films I make are mainly for web rather than TV broadcast, and beyond a basic colour grade and general tidying up, have minimal effects added (no CGI). Until recently, I used Final Cut Pro 7, using FCP's Log & Transfer function to import footage and edit in ProRes 422. Having just moved to Premiere Pro CC 2017, I'm trying to figure out the most efficient workflow with the best resulting image. Should I import and edit the native H.264 MOV files? Or ingest and edit as either ProRes or DNxHD? If ProRes or DNxHD, what's the best way to ingest (or import and transcode)? I've been reading mixed things via Google: mainly Adobe-related articles explaining a native workflow, versus various articles sponsored by transcoding software companies saying that transcoding gives a better result. Any thoughts would be much appreciated. Thanks! Elliot
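For reference, one common way to do the transcode step outside of Premiere is with ffmpeg's prores_ks encoder. A minimal sketch, assuming ffmpeg is installed and on the PATH; the "source" and "transcoded" folder names are hypothetical:

```python
# Batch-transcode camera-original H.264 MOV files to ProRes 422.
# Sketch only: assumes ffmpeg is installed; folder names are hypothetical.
import subprocess
from pathlib import Path

SRC = Path("source")       # hypothetical folder of camera-original .mov files
DST = Path("transcoded")   # hypothetical output folder
DST.mkdir(exist_ok=True)

for clip in sorted(SRC.glob("*.mov")):
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks",    # ffmpeg's ProRes encoder
        "-profile:v", "2",      # profile 2 = ProRes 422 (standard)
        "-c:a", "pcm_s16le",    # uncompressed audio, as ProRes MOVs usually carry
        str(DST / clip.name),
    ], check=True)
```

A DNxHD version would swap in -c:v dnxhd, though ffmpeg's dnxhd encoder only accepts specific resolution, frame-rate and bitrate combinations.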
-
I captured a screenshot of side-by-side NX1 files. The one on the left is the H.265 file being played in VLC; the right is the H.264 conversion played in QuickTime. In the conversion, some of the color and detail are lost. You can see it in the VLC version as well.
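The post doesn't say which tool produced the H.264 conversion, but for anyone wanting to reproduce the comparison, one common route is ffmpeg. A minimal sketch, with hypothetical filenames:

```python
# Convert an H.265 (HEVC) clip to H.264 with ffmpeg -- a sketch only.
# The original poster's conversion tool is not stated; filenames are hypothetical.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "nx1_clip.mp4",  # hypothetical NX1 H.265 source
    "-c:v", "libx264",               # re-encode the video stream to H.264
    "-crf", "18",                    # near-transparent quality setting
    "-c:a", "copy",                  # pass the audio through untouched
    "h264_clip.mp4",                 # hypothetical output
], check=True)
```

Whatever the tool, an H.265 to H.264 conversion is a lossy re-encode, so some loss of color and detail like the post describes is expected.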
-
A WORK IN PROGRESS. Everyone, feel free to correct, add, subtract...

Storage, power and bandwidth constraints create the need for video compression. It's easier to understand the trade-offs, and the issues, once you understand the ideal world. In the ideal world, you would work with all the data recorded by the camera.

The total number of pixels in a frame 1,920 pixels wide and 1,080 pixels high is 2,073,600, or about 2 million pixels. In one second, we watch 30 of those frames, so that's 2 million times 30, or roughly 62 million pixels per second. For a minute we'd need 62 million times 60 seconds, or about 3.7 billion pixels per minute. When you're watching your HD-TV, your eye is viewing about 3.7 billion pixels every minute.

What makes up a pixel? A color. Colors are often described by their red, green and blue components. That is, every color can be separated into a red, green and blue value, often abbreviated RGB. Many cameras record each color as a brightness value from 0 to 16,383 (14 bits). You need three numbers, red (0 to 16,383), green (0 to 16,383) and blue (0 to 16,383), to numerically describe ANY color that the camera has recorded. Some simple math (16,384 times 16,384 times 16,384) tells us that gives about 4.4 trillion possible values.

To make matters REALLY confusing, cameras only capture one color at each pixel location (red, green or blue, or else yellow, magenta or cyan) in a "Bayer" pattern. So each pixel directly records only part of the color at that location, and the camera assumes that nearby pixels can supply the missing color information to create a full color, through "de-Bayering". This trick of borrowing color information from nearby pixels is ALSO used in video compression, in a completely different way (chroma subsampling). Too complicated to get into here.

We can only "see" about 12 million colors; we don't need 4.4 trillion. That is, we don't need 14-bit × 14-bit × 14-bit, we need 8-bit × 8-bit × 8-bit (which actually gives us about 16.7 million). Therefore, for viewing purposes, we can throw out most of the recorded data.

Let's go back to the optimum image we'd like to see: 3.7 billion pixels per minute times 24 bits (3 bytes) per pixel. That would be about 11 gigabytes per minute. As you know, you're not streaming 11 gigabytes of video to your TV every minute. Video compression does a marvelous job of cutting that down to a manageable size (the arithmetic is worked through in the sketch below):

HD 720p @ H.264 high profile, 2,500 kbps (roughly 19 MB/minute)
HD 1080p @ H.264 high profile, 5,000 kbps (roughly 38 MB/minute)

Compression throws data away for good, though. If the highlights were "overexposed" in the compressed video, you cannot recover the correctly exposed detail from it; for that you would want the original data. Put another way, in compressed video you are starting out with 24-bit pixels (8/8/8). In the original data, you have 42-bit pixels (14/14/14). Those 42 bits aren't all equal (the sensors aren't as accurate at the extreme ends of their readings), but this should give you an idea of why RAW sensor data is the ideal.
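Here's a small worked-numbers sketch that just reproduces the calculations above; it's plain arithmetic, assuming nothing beyond the figures quoted in this post:

```python
# Worked numbers for the post above.

WIDTH, HEIGHT, FPS = 1920, 1080, 30

pixels_per_frame = WIDTH * HEIGHT              # 2,073,600
pixels_per_second = pixels_per_frame * FPS     # ~62.2 million
pixels_per_minute = pixels_per_second * 60     # ~3.73 billion
print(f"pixels/frame:  {pixels_per_frame:,}")
print(f"pixels/minute: {pixels_per_minute:,}")

# Color depth: 14 bits per channel at capture vs. 8 bits for viewing.
print(f"14-bit RGB values: {(2**14)**3:,}")    # ~4.4 trillion
print(f" 8-bit RGB values: {(2**8)**3:,}")     # ~16.7 million

# Uncompressed viewing data rate: 24 bits (3 bytes) per pixel.
print(f"uncompressed: {pixels_per_minute * 3 / 1e9:.1f} GB/minute")

# The H.264 example bitrates from the post, converted to MB/minute.
for label, kbps in [("720p", 2500), ("1080p", 5000)]:
    mb_per_minute = kbps * 1000 / 8 * 60 / 1e6
    print(f"{label} @ {kbps} kbps: ~{mb_per_minute:.0f} MB/minute")
```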
REFERENCE

BAYER SENSORS
http://www.siliconimaging.com/RGB%20Bayer.htm

DEBAYERING
http://pixinsight.com/doc/tools/Debayer/Debayer.html

PATENT BELIEVED TO BE BEHIND CANON'S CURRENT VIDEO FOCUS-PIXEL TECHNOLOGY
http://www.google.com/patents/US20100165176?printsec=abstract#v=onepage&q&f=false

SIGMA/FOVEON NON-BAYER SENSORS (not currently used in video due to technical problems)
http://en.wikipedia.org/wiki/Foveon_X3_sensor

CAMERAS MUST DUMP IMAGE DATA IN REAL TIME (or: not all SD cards are created equal)
http://en.wikipedia.org/wiki/Secure_Digital

VIDEO COMPRESSION
http://tech.yanatm.com/?p=485

Oh, this stuff makes my head swim ;)
-
The future codec for DSLRs is coming. Was sent this by email today, thanks Tero: http://techreport.com/discussions.x/23429

This will be more efficient at the same bitrate, and that means better image quality. For example, 24 Mbit/s would look something like 44 Mbit/s does on current codecs.

Over the past few years, H.264 video compression has permeated just about every corner of the tech world: YouTube, Blu-ray, cable and satellite HDTV, cell phones, tablets, and digital camcorders. Could it be just a year away from obsolescence? According to a news release by Ericsson (http://www.ericsson.com/news/120814_mpeg_244159018_c), the Moving Picture Experts Group (a.k.a. MPEG) met in Stockholm, Sweden last month to "approve and issue" a draft standard for a next-generation video format. That format, dubbed High Efficiency Video Coding, or HEVC for short, will purportedly enable "compression levels roughly twice as high" as H.264.

Ericsson's Per Fröjdh, who chairs the Swedish MPEG delegation, comments, "There's a lot of industry interest in this because it means you can halve the bit rate and still achieve the same visual quality, or double the number of television channels with the same bandwidth, which will have an enormous impact on the industry." HEVC could make its debut in commercial products "as early as in 2013," claims Fröjdh. He expects mobile devices will be the first ones to make use of the new format, with TV likely to lag behind.

That all sounds rather exciting. Halving bitrates while maintaining image quality would be fantastic for streaming web video. It might be advantageous for devices with high-PPI displays as well, if they can offer better image quality at today's bit rates. However, hardware support could impede early adoption, since the hardware H.264 video decoders in today's mobile processors might not be compatible with the new standard.
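For anyone who wants to sanity-check the "half the bitrate, same visual quality" claim once HEVC encoders are available to them, one approach is to encode the same source at both bitrates and compare each result against the original with an objective metric. A rough sketch using ffmpeg's libx264/libx265 encoders and its SSIM filter; the filenames and bitrates here are illustrative, not from the article:

```python
# Sketch: encode one clip as H.264 at 5000 kbps and as HEVC at half that,
# then score each encode against the source with ffmpeg's SSIM filter.
# Assumes an ffmpeg build with libx264 and libx265; "source.mov" is a
# hypothetical input file.
import subprocess

SOURCE = "source.mov"  # hypothetical camera-original clip

encodes = [
    ("h264_5000k.mp4", ["-c:v", "libx264", "-b:v", "5000k"]),
    ("hevc_2500k.mp4", ["-c:v", "libx265", "-b:v", "2500k"]),  # half the bitrate
]

for out, codec_args in encodes:
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *codec_args, "-an", out], check=True)
    # SSIM of the encode vs. the original; the score prints on ffmpeg's stderr.
    subprocess.run(
        ["ffmpeg", "-i", out, "-i", SOURCE, "-lavfi", "ssim", "-f", "null", "-"],
        check=True,
    )
```

If the claim holds for your footage, the 2,500 kbps HEVC encode should land at roughly the same SSIM score as the 5,000 kbps H.264 one.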