see ya
Everything posted by see ya
-
Premiere CS only upsamples chroma internally to 4:4:4 YCC if a filter is added. Cuts-only edits, or import/export to a 4:2:0 codec, will not involve 4:4:4, so exporting to RGB images and then upsampling to 4:4:4 YCC first is unnecessary.
-
Discovery: 4K 8bit 4:2:0 on the Panasonic GH4 converts to 1080p 10bit 4:4:4
see ya replied to Andrew Reid's topic in Cameras
4:2:0 JPEG encoding and 4:2:0 AVCHD off a Panasonic are two different things: JPEG chroma is normalized over the full range, while AVCHD chroma sits within 16-240.
-
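As an illustration of that range difference, here's a minimal numpy sketch of expanding limited ("video") range YCbCr to full range, which is the normalisation JPEG already has. The function name and the straight linear remap are my own; real decoders may use fixed-point math or dithering:

```python
import numpy as np

def limited_to_full(y, cb, cr):
    """Expand limited-range YCbCr to full range.

    Luma 16-235 maps to 0-255; chroma 16-240 maps to 0-255 with the
    neutral point kept at 128.
    """
    y_f = np.clip((y.astype(np.float64) - 16.0) * (255.0 / 219.0), 0, 255)
    scale = 255.0 / 224.0  # 224 chroma steps stretched over 255
    cb_f = np.clip((cb.astype(np.float64) - 128.0) * scale + 128.0, 0, 255)
    cr_f = np.clip((cr.astype(np.float64) - 128.0) * scale + 128.0, 0, 255)
    return (np.round(y_f).astype(np.uint8),
            np.round(cb_f).astype(np.uint8),
            np.round(cr_f).astype(np.uint8))
```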
Have previously, and continue to on occasion, use Pinnacle Studio & TouchEdit on iPad 2 & iPad Air when out and about. Canon DSLR H.264, including the higher-bitrate Magic Lantern variety, edits fine in Pinnacle Studio, TouchEdit and iMovie for iPad. What would be great, though, is EDL export; TouchEdit does provide FCPXML export.
-
Discovery: 4K 8bit 4:2:0 on the Panasonic GH4 converts to 1080p 10bit 4:4:4
see ya replied to Andrew Reid's topic in Cameras
Minor comments: it's quite easy, and it's been done many times over the years, to use a tool like Avisynth to create 4:4:4 from 4:2:0 by scaling luma and chroma individually, without having to go to RGB first and without the need for Cineform and AE. The output from the Cineform process does not appear to be 4:4:4 YCC but 4:4:4 RGB; technically they are not the same. I know the EOSHD write-up about YCC is for illustration, but the chroma is almost certainly sited left of centre rather than co-sited as in the diagram, and the codec used by the Panny won't be RGB as the write-up says. Minor nit-picking, but this "4:2:0 as 4:4:4 major discovery" seems a bit off: it suggests that interpolated and averaged 4K 4:2:0 scaled to 1080p is going to give the equivalent of 4:4:4 1080p shot in camera. However, it will be interesting to see the results comparing the Cineform route and the straightforward Avisynth luma-rescale route.
-
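For what it's worth, the luma-only rescale described in the post above fits in a few lines of numpy. A 2x2 box average stands in for whatever Avisynth resampler you'd actually use; function name and plane shapes are illustrative:

```python
import numpy as np

def uhd420_to_1080p444(y_4k, cb, cr):
    """Fold 4K 4:2:0 into 1080p 4:4:4 without an RGB round trip.

    y_4k: (2160, 3840) luma plane; cb, cr: (1080, 1920) chroma planes.
    The chroma is already at the target resolution, so only luma is
    resampled -- here by a simple 2x2 box average.
    """
    h, w = y_4k.shape
    y_1080 = (y_4k.astype(np.float64)
              .reshape(h // 2, 2, w // 2, 2)
              .mean(axis=(1, 3)))
    return y_1080, cb, cr  # one luma sample per chroma sample: 4:4:4
```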
For backup, a small RAID in a box, either network- or desktop-attached, would be useful: http://www.readynas.com/ or http://www.synology.com/en-uk/support/nas_selector They have many benefits: cheap, take consumer or enterprise disks, disks are easy to replace, and network-attached units can replicate between each other if you went for more than one, say one at home, in the office, or at a friend's or parents' place. Cloud replication gets your data off site. eSATA & USB direct connections; the ReadyNAS has 2x ethernet connections which can be bonded. Snapshots, and remote access through the manufacturer's portal. Would suggest RAID 0 for a desktop if it's just for additional immediate-access storage, RAID 1 for full redundancy if it's for a backup, and avoid RAID 5 and all other RAID configs unless using enterprise-class disks. For offloading raw files / H.264 I use small 1TB 'Passport'-style USB drives and a second-hand netbook bought cheap: duplicate the files to the internal hard drive and the external Passport drive. Can be a bit slow; there are probably better setups.
-
I'm not disputing that 4K scaled down to 1080p will look better. I'm just querying how 4:2:0 4K can make 4:4:4 1080p, because of the assertion that chroma is quarter resolution, i.e. 1080p, that's all. Where do you see that separate scaling taking place, without going to RGB first from the full 4K?
-
Yes, however my comment was more about the suggestion that just scaling down 4K 4:2:0 gives 4:4:4, and the implication that it's then the same as 4:4:4 which could have been encoded directly in camera, rather than about quality loss in general.

Adding 4:2:0 to a timeline in an NLE or grading app that works in RGB for scaling, color work and effects will probably first convert the 4:2:0 to RGB by interpolating chroma up to the full 4K resolution, not scale luma while keeping chroma at its native resolution. If anyone wants 4:4:4 1080p from that 4K RGB, what's the point? Interpolation to RGB has lost the YCC relationship and the chance to scale luma down while keeping chroma at its original resolution.

Alternatively, as I understand it, in Premiere for example 4:2:0 is kept as long as it's cuts only, so after editing with no colour work, 4:2:0 YCC could be rendered out at 4K intact for another app to do the scaling. Or does Premiere offer scaling luma while maintaining chroma resolution from the original 4:2:0? Premiere immediately upsamples 4:2:0 internally to 4:4:4 by interpolation for color work and effects, so again, if there is no option to scale luma but not the chroma difference channels, it's a typical 4:2:0 to 4:4:4 interpolation, even at 32bit, rather than scaling luma and maintaining chroma. So just wondering where exactly the simple "4:2:0 4K equals 1080p 4:4:4" statement holds true, other than theoretically.

Chroma in YCC is the difference from an RGB value once luma has been extracted for the brightness of the RGB sample, and scaling luma down will involve interpolation / averaging of the luma values in the downscale. What's to say the corresponding 4:4:4 or RGB values generated after the downscale are equivalent to any original 4:4:4 data that might have been sampled in camera? Circling back to the suggestion that 4:2:0 to 4:4:4 by scaling is somehow the same as 4:4:4 in camera.
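The "typical 4:2:0 to 4:4:4 interpolation" mentioned above is, at its crudest, just pixel replication of the chroma planes up to luma resolution. A hypothetical sketch (NLEs use fancier filters, but the point stands either way: no new colour detail is created):

```python
import numpy as np

def upsample_chroma_420_to_444(cb, cr):
    """Naive 4:2:0 -> 4:4:4 chroma upsample by nearest-neighbour
    replication: each chroma sample is simply duplicated 2x2 so the
    chroma planes match the luma plane's resolution."""
    cb444 = np.repeat(np.repeat(cb, 2, axis=0), 2, axis=1)
    cr444 = np.repeat(np.repeat(cr, 2, axis=0), 2, axis=1)
    return cb444, cr444
```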
-
If those resizers work in RGB then a lot of the benefit of scaling 4:2:0 YCC source is lost: the mix has been done and the chroma subsampling scaled accordingly in the conversion to RGB. If a YCC resizer is used that allows luma and chroma channels to be scaled separately, including the ability to offset chroma or luma to suit the chroma placement in the source (i.e. whether it's left of centre or centred), then interpolate to 4:4:4, then convert to RGB, typically in the NLE or grading software, a 'better' result for grading should be seen. Resizing is preferable as one of the last operations before encoding to delivery; using a Lanczos resizer at input, then grading / sharpening etc., can lead to ringing and halos. It's really just oversampling edges, is it not?
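As a rough illustration of the siting offset mentioned above: moving MPEG-2-style left-cosited chroma to centre siting is a half-luma-pixel shift, i.e. a quarter of a chroma sample. A linear resampler handles it as below (the shift direction and replicate-edge handling are my own assumptions; a real filter would use a better kernel than linear):

```python
import numpy as np

def recenter_left_sited_chroma(c):
    """Shift left-cosited chroma to centre siting: resample each row at
    x + 0.25 chroma samples with linear interpolation, repeating the
    last column at the right edge."""
    c = np.asarray(c, dtype=np.float64)
    right = np.concatenate([c[:, 1:], c[:, -1:]], axis=1)  # neighbour to the right
    return 0.75 * c + 0.25 * right
```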
-
I see, that makes more sense. :-) Regarding the codec, it's the BT2020 specification's transfer curve that would appear to allow the transport of the wider DR, if available. The codec would preferably need to be 10bit to distribute those 'stops' over a decent levels range, which is why I mentioned we may well get 8bit LOG as some manufacturers' choice. What a manufacturer chooses to offer in their product line, and how they choose to implement 4K, is their business; we choose a camera to suit ourselves. The UHD specification, of which 4K is a part, is not just resolution: it is suggested by SMPTE and the ITU to provide the 'space' for the things Andrew lists. The codec is just the transport. Codecs take years, time and resources to develop, and what codec devs choose to do is their business. If someone's not happy with that, there's always the option to join open-source development, write a specification, start a website enthusing over why that person's codec specification is 'better', start a Kickstarter, whatever, or, far more realistically, make the choice from what's available within the wider considerations of camera choice and work with what's been offered.
-
Is that aimed at me? I guess not because I never made that strange assumption. ^
-
Where did I say 4K has better DR? I didn't. You asked this; I answered: the BT2020 specification, as long as the camera can provide it. I'm not interested in all the 4K BS debate; it's business as usual. Same old same old: new formats, standards, hype and BS. Choose the camera that suits the individual and f--k what anyone else thinks.
-
Whatever, but 4K brings with it a new specification, BT2020. http://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.2020-0-201208-I!!PDF-E.pdf That specification explains how the wider DR and wider gamut will be transported in the codec: a tweaked Rec709 transfer curve (at the low end) and colour primaries wider than Rec709, in a 10bit codec. Maybe some camera manufacturers will still offer 8bit LOG to try to carry that gamut and DR. 4K is not just about resolution, although anyone can choose what they take from the new 4K standard. http://www.avsforum.com/t/1496765/smpte-2013-uhd-symposium http://www.ultrahdtv.net/what-is-ultra-hdtv/

That's all assuming the camera sensor is capable of providing wider DR, and as long as the display technology supports BT2020 to show it. Bearing in mind that LCD technology in these crap computer monitors struggles with contrast ratio and elevated black level to even display a decent Rec709 image (a 5-stop curve), without a UHD-spec'd display those benefits won't be seen apart from the resolution, again as long as the camera provides a decent image. We already read about how camera raw offers wider DR over a typical Rec709 or sRGB curve-compressed video output, and the BS talk of highlight recovery, aka tonemapping in this case, to get that wider DR onto crap or Rec709-limited monitors. The new BT2020 transfer curve, it is suggested, along with the 10bit coding, should allow that DR through, even in a compressed codec.

I guess the first release of cameras and displays will offer 4K resolution, 1080p mode and Rec709 backward compatibility, then later releases the BT2020 UHD specification and wider gamut, in some mash-up of specifications to pick through. Although we've already had xvColor (xvYCC) extended gamut via Rec709 primaries, and 'Deep Color' higher-saturated colours, and neither has taken off; just marketing hype.
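For reference, the BT2020 transfer curve for 10bit systems is essentially the familiar Rec709-style piecewise curve: linear below a small threshold, a 0.45 power law above it. A numpy sketch (the 12bit variant of the spec uses the more precise constants 1.0993 / 0.0181):

```python
import numpy as np

def bt2020_oetf(e, alpha=1.099, beta=0.018):
    """BT2020 opto-electronic transfer function, 10bit constants.

    e is scene-linear light in [0, 1]; returns the non-linear signal:
    4.5*e below beta, alpha*e^0.45 - (alpha - 1) above it.
    """
    e = np.asarray(e, dtype=np.float64)
    return np.where(e < beta, 4.5 * e,
                    alpha * np.power(e, 0.45) - (alpha - 1.0))
```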
-
Haha. :-) I don't know if it's worth trying to calibrate, or if it's even possible to calibrate it successfully. If there is access to those controls, the first step would be to download a couple of calibration images off an AV / Home Cinema forum, the ones with all the greyscale boxes for setting brightness and contrast, and see if the iMac display can actually display them all after adjusting the hardware settings. If that's not possible, I wouldn't waste any time trying to calibrate and profile it with a probe. It may be better to just use a decent backlit LED domestic TV and calibrate / profile that. :-)
-
Calibration and profiling are two separate things. To calibrate, i.e. to set black and white levels, requires access to hardware brightness, contrast, LED backlight, and RGB gain and offset controls. Calibration gets the hardware responsive for profiling, and after that a 3D LUT for the display in Resolve. So does the iMac give hardware access to those?
-
You'd ideally backlight the reference display and grade in a dimly lit room, ambient lighting at about 10% of the brightest display white, so a reflective screen isn't so much of an issue. D65-balanced lighting, 18% grey surrounding walls.
-
Yes, Cineon out of Resolve is certainly possible; there's also a flat render option, although I haven't used it so can't comment. Choosing BMD Film gamma would also mean BMD Film color space, and for Canon camera raw I personally don't think that's a valid or clever approach, because those are there to define the BMD cameras' color science and response, so you can work in a color-managed way including accurate transforms to Rec709 or DCP, for example. Anyway, hope the suggestion helps; would be good to hear your findings.
-
Yeah, you're right, but a reasonable grading monitor will be more expensive than a computer monitor. Spending 500 euros on two mediocre Dells is over half way towards a CX, which also appears to be on offer at the moment. But yes, a budget to work to is inevitable.
-
Can I ask what you used as DNG settings in Resolve? Rec709 colorspace & Rec709 gamma, because you mention 'sacrificing' black levels? Have you compared the EOSHD LUT to just setting 'Linear' in Resolve's (v10) gamma project settings for CinemaDNG, rather than Rec709 gamma? That would give you demosaiced linear-space frames; you could then do a typical lin2log on those by applying Resolve's Linear-to-CineonLOG 'Input 3D LUT' in your project settings, and then use Resolve's log grading tools. Doing a lin2log at input will obviously lift your shadows and put your source in a LOG space for grading. You may then find there's no point applying an Output LUT to lift it all, having possibly crushed it at the start by applying Rec709 gamma. You may also find that the 'Highlight Recovery' option then only needs to be applied to camera raw files that have actually clipped a channel in camera raw, rather than on every clip as EOSHD suggests, I guess to counter the initial Rec709 curve again.
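A minimal sketch of the lin-to-Cineon mapping being described, using my own simplification: the classic 300-codes-per-decade convention with linear 1.0 at code 685. Resolve's actual Linear-to-Cineon LUT also handles a black offset and soft clip, which are omitted here:

```python
import numpy as np

def lin_to_cineon(lin, ref_white=685, density_per_code=0.002, gamma=0.6):
    """Map scene-linear values to normalised 10bit Cineon-log code values.

    Each code step is 0.002 printing density at display gamma 0.6, i.e.
    300 code values per decade of exposure; linear 1.0 lands on code 685.
    """
    lin = np.clip(np.asarray(lin, dtype=np.float64), 1e-10, None)
    code = ref_white + (gamma / density_per_code) * np.log10(lin)
    return np.clip(code, 0, 1023) / 1023.0  # normalised code value
```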
-
NVidia GTX770 4GB, any make; on a budget maybe a Zotac. Socket 2011 could be an Asus P9X79 Pro mobo, which gives 2x 16x PCI-e 3.0; on a budget, a basic quad-core processor and 16GB or 32GB of HyperX (4x 8GB sticks). Eizo CS-series monitor + a cheapo monitor for GUI / scopes. The SDI thing and a 3D LUT box is a step too far for a budget, really. Most important is GPU power with as much VRAM as you can afford, 4GB really, particularly if using any temporal filters; second is RAM, 16GB minimum, 32GB better; least important is the processor, which is just used for encoding. Personally I'd not waste cash on an 8-core or greater processor, on an SSD, or on 2x mediocre Dell monitors; perhaps put that cash towards one decent entry-level reference monitor like the Eizo CS range, or the CX range that'll take a 3D LUT, if you can afford it.
-
Consider an Eizo, maybe CX or CS series http://www.eizo.com/global/products/coloredge/cx240/ and I'd second the Colormunki Display probe.
-
I'd double your GPU RAM to 4GB. Have you looked at socket 2011 Ivy Bridge-E? Consider one decent monitor that takes a 3D LUT rather than two poor ones, and just pick up a cheap monitor for the GUI. Then there's the whole SDI-out-not-GPU, external LUT box, etc.
-
New H.265 codec on test - ProRes 4444 quality for 1% of the file size
see ya replied to Andrew Reid's topic in Cameras
Bear in mind that H.264 is not just heavily compressed 4:2:0; there are high profiles, available via the x264 implementation, that offer both 10bit and 4:4:4, and lossless versions of both. x265 is already going strong and available for testing: http://forum.doom9.org/showthread.php?t=168814 & https://bitbucket.org/multicoreware/x265/overview CineMartin Cinetec, I believe, uses ffmpeg as a base, along with SDKs from others for wider format / codec support, including x264 I'd guess. I wouldn't be surprised if Cinetec's H.265 support is x265. Just a hunch, but very possibly incorrect.
-
Introducing the EOSHD Film LUT (for 5D Mark III raw and Resolve 10)
see ya replied to Andrew Reid's topic in Cameras
Just wanted to correct something I said previously about Canon raw not being supported in Resolve 10: looking at the EXIF metadata written to the DNGs by Magic Lantern, everything is there, including the matrix; just the WB multipliers don't appear to be written correctly, being all set to 1. @jcs, thanks for the info on dcraw / libraw quality.
-
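For anyone wondering what those multipliers do when they are written correctly: a raw developer just scales the demosaiced channels by them, usually normalised so green stays at 1.0. A hypothetical sketch (the numbers below are made up, not values from any ML DNG), which also shows why multipliers of all 1s amount to no 'As Shot' white balance at all:

```python
import numpy as np

def apply_wb(rgb, multipliers):
    """Scale linear raw RGB by white-balance multipliers, normalised to
    the green channel (the usual raw-converter convention). Multipliers
    of (1, 1, 1) leave the image untouched."""
    m = np.asarray(multipliers, dtype=np.float64)
    m = m / m[1]  # green stays at 1.0
    return np.asarray(rgb, dtype=np.float64) * m
```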
Introducing the EOSHD Film LUT (for 5D Mark III raw and Resolve 10)
see ya replied to Andrew Reid's topic in Cameras
Sorry Andrew, I was just replying to prior comments on magenta / green; I too don't see them personally.
-
Introducing the EOSHD Film LUT (for 5D Mark III raw and Resolve 10)
see ya replied to Andrew Reid's topic in Cameras
It's not really surprising highlights may appear magenta and shadows green (wrong black level and sensor saturation assumptions). Canon raw isn't supported in Resolve 10 the way, say, ACR supports it, providing a profiled set of Adobe coefficients for each Canon camera model for the conversion from camera raw space to the color space used within ACR, Resolve or whatever. Basics of adding camera raw support to an app: http://www.darktable.org/2012/10/whats-involved-with-adding-support-for-new-cameras/ http://www.darktable.org/2013/10/about-basecurves/ And the WB value metadata isn't written to the Magic Lantern DNGs for 'As Shot'; or have things progressed now?

BTW, all the talk of ACR being the 'best' raw conversion, which in itself is subjective: compared to what? Resolve 9, and now 10? Any comparisons to, say, libraw's DCB or AMaZE demosaicing algorithms? http://www.libraw.org/docs/Samples-LibRaw-eng.html

Camera profiling, i.e. the process to help create an Input 3D LUT for camera space to working color space, rather than a 3D 'Look' LUT as a .cube for Output from Resolve via the DPX provided by Blackmagic: https://blog.pcode.nl/2010/06/28/darktable-camera-color-profiling/ http://ninedegreesbelow.com/photography/dcraw-undnged.html