There are several recent trends that make this difficult to assess, and past experience may not be a reliable indicator. Historically, most RAW video has used proprietary formats that were expensive and complicated in both acquisition and post. By contrast, Blackmagic's BRAW and ProRes RAW are cheap to acquire and easy to handle in post. However, they are both fairly recent developments, especially the rapidly growing, inexpensive availability of ProRes RAW via HDMI from various mirrorless cameras. If a given technology has only been widely available for 1-2 years, you won't immediately see great penetration in any segment of the video production community; there is institutional inertia and lag.

However, long before BRAW and ProRes RAW, we had regular ProRes acquisition, either internally or via external recorders. Lots of shops have used ProRes acquisition because it avoids time-consuming transcoding and gives good quality in post. BRAW and ProRes RAW are no more complex or difficult to use than ProRes. This implies those RAW formats may grow and become somewhat more widely used in the future, even in lower-end productions.

Conflicting with this is the recent, more widespread availability of good-quality internal 10-bit 4:2:2 codecs on mirrorless cameras. I recently did a color correction test comparing 12-bit ProRes RAW from a Sony FS5 via an Atomos Inferno to 10-bit 4:2:2 All-Intra from an A7SIII, and even when doing aggressive HSL masking, the A7SIII internal codec looked really good. So the idea that the C70 is somehow debilitated because it doesn't ship with RAW capability on day 1 is not accurate. OTOH, Sony will face this same issue when the FX6 is released shortly. If it doesn't at least have ProRes RAW via HDMI to Atomos, that will be a perceptual problem, because the A7SIII and S1H have it.

It's also not just about RAW -- regular ProRes is widely used, e.g. various cameras including Blackmagic's record it internally or with an inexpensive external recorder. The S1, S1H and A7SIII can record regular 10-bit 4:2:2 ProRes to a Ninja V, the BMPCC4k can record it internally or via USB-C to a Samsung T5, etc. With a good-quality 10-bit internal codec you may have less need for either RAW or ProRes acquisition.

OTOH, I believe some camera manufacturers have an internal perceptual problem which is reflected externally in their products and marketing. E.g., I recently asked a senior Sony marketing guy what the strategy is for getting regular ProRes from the FX9. His response was: why would I want that, why not use the internal codecs? There is some kind of disconnect, worsened by the new mirrorless cameras. Maybe the C70's lack of RAW is another manifestation. This general issue is discussed in the FX9 review starting at 06:25. While about the FX9 specifically, in broader terms the same issue (to varying degrees) affects the C70 and other cameras:
-
I can confirm a top-spec 2019 MBP 16 running Resolve Studio 16.2.5 has greatly improved playback performance on 400 Mbps UHD 4k/29.97 10-bit 4:2:2 All-Intra material from a Panasonic GH5, whereas FCPX 10.4.8 on Catalina 10.15.6 on the same hardware is much slower. Also, Resolve export of UHD 4k 10-bit HEVC is very fast on that platform, whereas FCPX 10.4.8 is very slow (for 10-bit HEVC export; 8-bit is OK). In these two cases, for some reason FCPX is not using hardware acceleration properly but Resolve is. This only changed in fairly recent versions of Resolve, and I expect the next version of FCPX will be similarly improved.

Beyond those codecs, it seems the latest 2020 iMac 27 has improved hardware acceleration for some codecs. Of course this will vary based on whether the NLE leverages it, as shown above. Max Yuryev will probably post those results next week. This may be due to the Radeon Pro 5700 XT having AMD's new VCN acceleration logic; VCN is the successor to VCE. It's a confusing area since there are three separate acceleration hardware blocks: Intel's Quick Sync, Apple's T2, and AMD's UVD/VCE (and now VCN). The apps apparently just call the macOS VideoToolbox framework and request acceleration. Why that works in Resolve 16.2.5 and not FCPX 10.4.8 for the above two cases is unknown.
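As a rough sanity check outside any NLE, you can time a VideoToolbox-accelerated transcode against a pure software one with ffmpeg and see whether the hardware path is actually faster on your machine. A minimal sketch, assuming an ffmpeg build with 10-bit libx265 and VideoToolbox support; the clip name is a placeholder:

```python
import subprocess
import time

SRC = "gh5_uhd_10bit_allintra.mov"   # hypothetical test clip

def transcode(args, out):
    """Run one ffmpeg pass and return elapsed wall-clock seconds."""
    t0 = time.time()
    subprocess.run(["ffmpeg", "-y", *args, out],
                   check=True, capture_output=True)
    return time.time() - t0

# Pure software path: libx265 10-bit encode on the CPU.
sw = transcode(["-i", SRC, "-c:v", "libx265",
                "-pix_fmt", "yuv420p10le", "-b:v", "50M"], "sw.mp4")

# Hardware path: VideoToolbox decode plus hevc_videotoolbox encode.
hw = transcode(["-hwaccel", "videotoolbox", "-i", SRC,
                "-c:v", "hevc_videotoolbox",
                "-pix_fmt", "p010le", "-b:v", "50M"], "hw.mp4")

print(f"software: {sw:.1f}s   hardware: {hw:.1f}s")
```

If both runs take about the same time, the difference you see between Resolve and FCPX is probably in how (or whether) each app calls VideoToolbox, not in the hardware itself.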
-
On Mac you can use Invisor, which also enables spreadsheet-like, side-by-side comparison of several files' codecs. You can drag and drop additional files from the Finder onto the comparison window, or select a bunch of files to compare using right-click > Services > Analyze with Invisor. I think it internally uses MediaInfo to get the data. It cannot extract as much as ffprobe or ExifTool, but it's much easier to use and usually sufficient. https://apps.apple.com/us/app/invisor-media-file-inspector/id442947586?mt=12
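For reference, here is roughly what the ffprobe route looks like if you want the same fields programmatically. A minimal sketch in Python, assuming ffprobe is on your PATH; the file names are placeholders:

```python
import json
import subprocess

def probe(path: str) -> dict:
    """Return key codec fields for the first video stream via ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        check=True, capture_output=True, text=True).stdout
    info = json.loads(out)
    video = next(s for s in info["streams"] if s["codec_type"] == "video")
    return {
        "codec":    video.get("codec_name"),
        "profile":  video.get("profile"),
        "pix_fmt":  video.get("pix_fmt"),   # e.g. yuv422p10le = 10-bit 4:2:2
        "bit_rate": info["format"].get("bit_rate"),
    }

for f in ("gh5_clip.mov", "a7siii_clip.mp4"):   # hypothetical file names
    print(f, probe(f))
```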
-
Most commonly, H264 is 8-bit Long GOP, sometimes called IPB. This may date to the original H264 standard, but you can also have All-Intra H264 and/or 10-bit H264; it's just less common. I don't have the references at hand, but if you crank up the bit rate sufficiently, 10-bit H264 can produce very good quality, I think even the IPB variant. The problem is that by that point you're burning so much data that you may as well use ProRes.

In post production there can be huge differences in hardware-accelerated decode and encode performance between the various flavors of a given general type. E.g., the 300 Mbps UHD 4k/29.97 10-bit 4:2:2 All-Intra material from a Canon XC15 was very fast and smooth in FCPX on a 2015 iMac 27 when I tested it, but similar material from a Panasonic GH5 or S1 was very sluggish. Even on a specific hardware and OS platform, a mere NLE update can make a big difference. E.g., Resolve has had some big improvements on certain "difficult" codecs, even within the past few months, at least on Mac.

Since HEVC is a newer codec, it seems 10-bit versions are more common than with H264 (especially as an NLE export format), but maybe that's only my impression. I think YouTube and Vimeo will accept and process 10-bit Long GOP HEVC OK; I tend to doubt they'd accept 10-bit All-Intra H264. There are some cameras that do 10-bit All-Intra HEVC, such as the Fuji X-T3. I think some of these clips include that format: https://www.imaging-resource.com/PRODS/fuji-x-t3/fuji-x-t3VIDEO.HTM

But there's a lot more involved than just perceptual quality, data rate or file size. Once you expand post production beyond a very small group, you have an entire ecosystem that tends to be reliant on a certain codec; DNxHD and ProRes are good examples. It almost doesn't matter if another codec is a little smaller, very slightly different in perceptual quality on certain scene types, or can accommodate a few more transcode cycles with slightly less generational loss. Current codecs like DNxHD and ProRes work very well, are widely supported, and are not tied to any specific hardware manufacturer.

There's also ease of use in post production. Can the codec be played with common utilities, or does it require a special player just to examine the material? If it's a camera codec, is it a tree-like hierarchical structure, or a simple flat file with all needed metadata in the single file?

Testing perceptual quality of codecs is a laborious, complex process, so thanks for spending time on this and posting your results. Each codec variant may react differently to certain scene types; e.g., one might do well on trees but not water or fireworks. Below are some scenes used in academic research:

https://media.xiph.org/video/derf/
http://medialab.sjtu.edu.cn/web4k/index.html
http://ultravideo.cs.tut.fi/#testsequences
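To put the "you may as well use ProRes" point in concrete terms, here is the rough storage arithmetic. A small sketch; the bit rates are nominal figures assumed for illustration (real ProRes rates vary with frame size, frame rate and content):

```python
# Rough storage cost per hour for common acquisition formats.
RATES_MBPS = {
    "H264 Long GOP 8-bit (typical)": 100,
    "H264/HEVC All-Intra 10-bit 4:2:2": 400,
    "ProRes 422 HQ, UHD/29.97 (approx)": 707,
}

for name, mbps in RATES_MBPS.items():
    gb_per_hour = mbps / 8 * 3600 / 1000   # Mbps -> GB per hour
    print(f"{name:35s} {mbps:4d} Mbps ~ {gb_per_hour:4.0f} GB/hour")
```

At around 400 Mbps for All-Intra H264 you are already most of the way to ProRes HQ's data rate, without gaining ProRes's decode friendliness.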
-
There is no one version of H.265. Like H.264, it has many, many different adjustable internal parameters. AMD's GPUs have bundled on them totally separate hardware called UVD/VCE, which is similar to nVidia's NVDEC/NVENC. Over the years there have been multiple versions of *each* of those, just as Quick Sync has had multiple versions. Each version of each hardware accelerator has varying features and capability on varying flavors of H.264 and, later, H.265.

Even if a particular version of a hardware video accelerator supports a certain flavor of a codec, application support is not automatic. Both application- and system-layer software must use development frameworks (often provided by the hardware vendor) to harness that support and expose it in the app. You can easily have a case where hardware acceleration support has existed for years and a certain app just doesn't use it. That was the case with Premiere Pro, which for years did not use Intel's Quick Sync. There are some cases now where DaVinci Resolve supports some accelerators for some codec flavors which FCPX does not.

There is also a difference between the decode side and the encode side. It is common on many platforms that 10-bit HEVC decoding is hardware accelerated for some codec variants but 10-bit HEVC *encoding* is not. It is a complex, bewildering, multi-dimensional matrix of hardware versions, software versions and codec versions, sprinkled with bugs, limitations and poor documentation.

In general, Intel's Quick Sync has been more widely supported because it is not dependent on a specific brand or version of the video acceleration logic bundled on a certain GPU. However, Xeon does not have Quick Sync, so workstation-class machines must use the accelerator on GPUs or else create their own. That is what Apple did with the iMac Pro and new Mac Pro: they integrated that acceleration logic into their T2 chip.
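One way to see this decode/encode split on your own machine is to ask ffmpeg what its build supports. A small sketch, assuming an ffmpeg binary is installed; which accelerators appear depends entirely on the build and platform:

```python
import subprocess

def ffmpeg_list(flag: str) -> str:
    """Ask the installed ffmpeg build what it supports."""
    return subprocess.run(["ffmpeg", "-hide_banner", flag],
                          capture_output=True, text=True).stdout

# Decode-side accelerators compiled into this build
# (videotoolbox on Mac; qsv, cuda, d3d11va etc. elsewhere).
print(ffmpeg_list("-hwaccels"))

# The encode side is a separate question: look for hardware
# encoders such as hevc_videotoolbox, hevc_qsv, hevc_nvenc, hevc_amf.
for line in ffmpeg_list("-encoders").splitlines():
    if "hevc" in line:
        print(line)
```

On a Mac you would typically see videotoolbox on both lists; on Windows or Linux the same codec flavor might show qsv, nvenc or amf entries instead, each with its own version-specific limits.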
-
I don't have R5 All-I or IPB material, but I've seen similar behavior on certain 10-bit All-Intra codecs using both H.264 and HEVC. This is on macOS with FCPX 10.4.8 and Resolve Studio 16.2.4, on both an iMac Pro with 10-core Xeon W-2150B and Vega 64, and a 2019 MacBook Pro 16" with 8-core "Coffee Lake" i9-9980HK and Radeon Pro 5500M. Xeon doesn't have Quick Sync, so Apple uses custom logic in the T2 chip for hardware acceleration. The MBP 16 apparently uses Quick Sync, although either machine could use AMD's UVD/VCE.

In theory All-I should be easy to decode, and for some codecs it is. The 300 Mbps 4k 10-bit All-I from a Canon XC15 is very fast and smooth. By contrast, similar material from a Panasonic GH5 or S1 is very sluggish in FCPX, but recent versions of Resolve are much faster. In that one case Resolve is better leveraging the available hardware acceleration, whether the T2 or Quick Sync. The 10-bit IPB material from a Fuji X-T3 is slow on both machines and both NLEs. Likewise, the Fuji 10-bit HEVC material is super slow. Test material from that camera using various codecs: https://www.imaging-resource.com/PRODS/fuji-x-t3/fuji-x-t3VIDEO.HTM

There is no one version of 10-bit H264 or HEVC. Internally there are many different encoding parameters, and only some of those are supported by certain *versions* of hardware acceleration. There are many different versions of Quick Sync, many versions of nVidia's NVDEC/NVENC and many versions of AMD's UVD/VCE, all with varying capabilities, and each with different software development frameworks. It would be nice if the people writing camera codecs coordinated their work with the CPU, GPU and NLE people to ensure the material could be edited smoothly. Unfortunately it's like the Wild West: a chaotic "zoo" of codec variants, hardware accelerators and NLE capabilities.

The general tendency is people buy a camera because they like the features, then discover its codec cannot be smoothly edited in their favorite NLE (or any NLE). Then they complain the NLE vendors let them down. While there's an element of truth to that, the current reality is compressed codecs produce unreliable decode performance due to lack of standardization between the various technical stakeholders. This situation has gotten much worse with 4k and more widespread use of 10-bit and HEVC. Back around 2010 it seemed Adobe's Mercury Playback Engine handled DV and 8-bit 1080p H264 from various cameras fairly well on most Windows hardware platforms. That was before Quick Sync debuted in 2011, so it must have used software decoding. Today we have much more advanced CPUs and hardware accelerators, but these developments have not been well coordinated with codec producers -- at the very time when hardware acceleration is essential.

If you plan on editing a new camera's internal codec, that really must be tested on your current hardware and NLE -- before committing to the camera. The alternative is to either transcode everything to a mezzanine codec or use ProRes acquisition. However, except for Blackmagic, most cameras cannot do that without an external recorder. I'm not sure there is adequate sensitivity to this among most camera manufacturers. I just asked a senior Sony marketing person about external ProRes from the FX9, and he seemed unclear why I'd want that.
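If you do end up on the transcode-to-mezzanine path, the batch step can be as simple as this. A sketch using ffmpeg's prores_ks encoder (profile 3 is ProRes 422 HQ); the input file name is a placeholder:

```python
import subprocess

SRC = "xt3_hevc_10bit.mov"   # hypothetical camera original

# prores_ks profile 3 = ProRes 422 HQ; PCM audio keeps it edit-friendly.
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-c:v", "prores_ks", "-profile:v", "3",
    "-vendor", "apl0",           # tag the stream as Apple ProRes
    "-c:a", "pcm_s16le",
    "xt3_prores_hq.mov",
], check=True)
```

The obvious costs are transcode time and the much larger mezzanine files, which is why smooth native decode matters in the first place.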
-
Canon 9th July "Reimagine" event for EOS R5 and R6 unveiling
joema replied to Andrew Reid's topic in Cameras
Those are good points, and they help us documentary/event people see that perspective. OTOH I'm not sure we have complete info on the camera. Is the thermal issue *only* when rolling, or partially when it's powered up? Does it never happen when using an external recorder, or just not as quickly? I did a lot of narrative work last year and (like you) never rolled more than about 90 seconds, so I see that point. But my old Sony A7R2 would partially heat up just from being powered on, which gave the heat buildup a "head start" when rolling commenced, making a shutdown happen faster. Is the R5 like that? Also unknown is the cumulative heat buildup based on shooting duty cycle. Even if you never shoot a 30 min. interview, if you do numerous 2-min b-roll shots in hot weather, will the R5 hit the thermal limit? More than the initial thermal time limit, the cooldown times seem troubling, especially if hot ambient conditions amplify them.
-
Canon 9th July "Reimagine" event for EOS R5 and R6 unveiling
joema replied to Andrew Reid's topic in Cameras
The R5 has zebras, but does it have a waveform monitor?
-
Excellent point, and that would be the other possible pathway to pursue. It avoids the mechanical and form factor issues. Maybe someone knowledgeable about sensor development could comment: given a high development priority, what are the practical limits on low ISO? Of course ISO adjustments are mainly digital, so going far below the native ISO would have negative image impact. In theory you could have triple native ISO, with one super low to satisfy the "pseudo ND" requirement and the other two native ISOs covering the normal range. But that would likely require six stops below the normal range and a separate chain of analog amplifiers per pixel. You wouldn't want any hitch in image quality as it rises from (say) ISO 4 or 8 up to normal levels. It just sounds difficult and expensive.
-
See attached for Sony's internal eND mechanism. You are right that there are various approaches, just none really good that fit a typical large-sensor, small mirrorless camera. This is a very important area, but it's a significant development and manufacturing cost whose benefit (from the customer's standpoint) is isolated mostly to upper-end, hard-core video-centric cameras like the S1H or A7SIII. It's not impossible, just really hard. Maybe someone will eventually do it. The Dave Dugdale drawing (which I can't find) was kind of like a reflex mirror rather than vertical/horizontal sliding. It may have required similar volume to a mirror box, which would be costly. For a typical short-flange mirrorless design, I can't see physically where the retracted eND element would fit.
-
I think the Ricoh GR is limited to 2 stops and the Fuji X100 is limited to 3 stops. This is likely because there's no physical space in the camera to move the ND element out of the light path when disabled, due to the large sensor size and small camera body. I don't know the specs, but a 2-3 stop variable ND can probably get close to 0 stops attenuation when disabled. A more practical 6-stop variable ND can't go to zero attenuation and will have at least 1 stop on the low end, and nobody wants to give away 1 stop in low-light conditions.

A boxy camera like the FS5 has a flat front, which allows an internal mechanism to vertically slide the variable ND out of the light path when disabled. A typical mirrorless camera doesn't have this space. I agree an internal electronic variable ND would be a highly valuable feature, and Sony has the technology. If the A7SIII's features, price and video orientation are similar to the S1H's, maybe somehow they could do it. Some time ago Dave Dugdale drew a rough cutaway diagram of a theoretical large-sensor mirrorless camera which he thought could house a variable ND with an in-camera mechanism to move it out of the light path. I can't remember what video that was in.
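The "giving away 1 stop" cost is easy to quantify, since each stop of ND halves the light reaching the sensor. A quick illustrative calculation:

```python
# Each stop of ND halves the light reaching the sensor.
def transmission(stops: float) -> float:
    """Fraction of light passed by an ND of the given strength."""
    return 1 / 2 ** stops

for stops in (0, 1, 2, 3, 6):
    print(f"{stops}-stop ND -> {transmission(stops):.1%} of light passes")

# A variable ND whose weak end is 1 stop still blocks half your light
# when "disabled" -- hence the need to physically move the element
# out of the light path.
```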
-
On the post-production side, the problem I see is poor or inconsistent NLE performance on the compressed codecs. The 400 Mbps HEVC from Fuji is a good example: on a 10-core, Vega 64 iMac Pro running FCPX 10.4.8 or Premiere 14.3.0 it is almost impossible to edit. Likewise Sony's XAVC-S and XAVC-L, and Panasonic's 400 Mbps 10-bit 4:2:2 All-I H264. Resolve Studio 16.2.3 is a bit better on some of those, but even it struggles. Of course you can transcode to ProRes, but then why not just use ProRes acquisition via Atomos? The problem is there are many different flavors of HEVC and H264, and the currently available hardware accelerators (Quick Sync, NVENC/NVDEC, UVD/VCE) come in many different versions, each with unique limitations. On the acquisition side it's nice to have a high-quality Long GOP or compressed All-I codec: it fits on a little card, the data offload rate is very high due to compression, archiving is easy due to smaller file sizes, etc. But it eventually must be edited, and that's the problem.
-
I hope that is not a consumerish focus on something like 8k, or yet another proprietary raw format, at the cost of actual real-world features wanted by videographers. I'd like to see 10-bit or 12-bit ProRes acquisition from day one (if only to a Ninja V), improved IBIS, or simultaneous dual-gain capture like on the C300 Mark III: https://www.newsshooter.com/2020/06/05/canons-dual-gain-output-sensor-explained/ Obviously internal ND would be great but I just can't see mirrorless manufacturers doing that due to cost and space issues.
-
All good points. Maybe the lack of ProRes on Japanese *mirrorless* cameras is a storage issue coupled with a lack of design priority on higher-end video features. They have little SD-type cards which cannot hold enough data, especially at the high rate needed; I think you'd need UHS-II cards, which are relatively small and expensive. The BMPCC4k sends data out over USB-C, so a Samsung T5 can record it at up to 500 MB/sec. The little mirrorless cameras could theoretically do that, but (as a class) they just aren't as video-centric. The S1H has USB-C data output and is video-centric, but it doesn't do ProRes encoding. Why not?

It also doesn't explain why larger, higher-end cameras like the Canon C-series, Sony FS-series and EVA-1 don't have a ProRes option. Maybe it's because their data processing is based on ASICs and they don't have the general-purpose CPU horsepower to encode ProRes. I think Blackmagic cameras all use FPGAs, which burn a lot more power but can be field-programmed for almost anything; in fact, that's how they added BRAW. But the DJI Inspire 2's X5S camera has a ProRes option, so I can't explain that.

For a little mirrorless camera, it's not that big a deal: those can do external ProRes recording via HDMI to an Atomos. Even given in-camera ProRes encoding, they'd likely need a USB-C-connected SSD to store it. Many people would use at least an external 5" monitor, which means you'd have two cable-connected external devices; a Ninja V is both a monitor and an external ProRes recorder, just one device. So in hindsight, it seems the only user group benefiting from internal ProRes on a small camera would be those not using an external monitor, and they'd likely need external SSD storage due to the data rate of 4k ProRes.
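To see why the storage side matters, compare the sustained write rate 4k ProRes needs against typical media. A rough sketch; the sustained throughput numbers are ballpark assumptions for illustration:

```python
# Rough check: can a given medium sustain 4k ProRes acquisition?
# ~707 Mbps is Apple's nominal rate for ProRes 422 HQ at UHD/29.97.
PRORES_HQ_UHD_MBPS = 707
required_mb_s = PRORES_HQ_UHD_MBPS / 8   # ~88 MB/s sustained

media = {   # ballpark sustained write speeds (assumed for illustration)
    "typical UHS-I SD card": 80,
    "fast UHS-II SD card": 250,
    "Samsung T5 SSD over USB-C": 500,
}
for name, mb_s in media.items():
    verdict = "OK" if mb_s > required_mb_s else "too slow"
    print(f"{name}: {mb_s} MB/s -> {verdict} (need ~{required_mb_s:.0f} MB/s)")
```

So ProRes HQ sits right at or above what common UHS-I cards can sustain, which is consistent with needing UHS-II or an external SSD.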
-
That might be a grey area. The RED patent describes the recorder as either internal to the camera or physically attached. Maybe you could argue the hypothetical future iPhone is not recording compressed raw video and there is no recorder physically inside or attached outside (as described in the patent); rather, it's sending data via 5G wireless to somewhere else. Theoretically you could send it to a beige-box file server having a 5G card, which is concurrently ingesting many diverse data streams from various clients. Next year you could probably send that unrecorded raw data stream halfway across the planet using SpaceX Starlink satellites. However, it seems more likely Apple will not use that approach, but will rather attack the "broad patent" issue in a better-prepared manner.