Everything posted by sunyata
-
An astounding Sony A7S low light test by Philip Bloom
sunyata replied to Andrew Reid's topic in Cameras
This is making me wonder which is better: realistic low light that uses the actual light in the scene, which is also going to be artificial at night (street lights or candles, etc.), or additional lighting techniques to simulate night. In other words, what is going to look more "real" to a viewer, the one that is more real or the one that is more fake? I'm gonna guess that most people would criticize reality for looking fake, but you could use the A7S for either scenario.
-
An astounding Sony A7S low light test by Philip Bloom
sunyata replied to Andrew Reid's topic in Cameras
I wonder if the sensitivity of the sensor works against it in bright light, like a high-ISO film? With lower ISO settings, of course.
-
An astounding Sony A7S low light test by Philip Bloom
sunyata replied to Andrew Reid's topic in Cameras
It's like sniper vision! And it kinda looks like CNN coverage of the first Iraq war too. Anyone else feel like you were spying on the girl in the bathing suit?
-
I think it was 5:00 AM, the sun was coming up, and the file was a crazy mess in that way that only AE can manage. Anyway, I've had time to watch a video or two on some of the new features in CC 2014, and it looks like they are getting up to speed on a lot of things that would help reduce clutter, like masking effects. That should cut way down on the number of nested comps. There's also a planar tracker that you can use straight from your roto shape tool now. http://www.redgiant.com/videos/redgianttv/item/415/?utm_source=Red+Giant+Software+Email+List&utm_campaign=bb6d1574b3-07_01_14_Stus_Tutorial&utm_medium=email&utm_term=0_ff8ed2ae13-bb6d1574b3-296530881
-
Up another 20% today.... now at 6B market cap.
-
.. memory bloat. I think Adobe only knows how to innovate through acquisitions. The next cool app for motion graphics is going to come out of left field from a company nobody has heard of, like CoSA ;)
-
The topic was: "put your imagination in the form of a sensor". Imagine that you're just a sensel in a sea of endless sensels. Would you want to collect light? What would be the point?
-
I hate After Effects... Let me explain. I've been working all night on fixing someone's mess here, and every layer has a ton of redundant effects and nested comps, all with discrete parameter changes, visibility toggled on or off, sometimes affecting nothing, and all of it needs to be investigated and double-clicked on. It's like crawling through a nightmarish compositing rabbit hole. Procedural or "node"-based compositing is so much less mistake-prone and so much easier to visualize that it's ridiculous. ... end rant.
Tagged with:
-
Most of these de-bayer methods would be easy to test with a Nuke expression node, or with a Flame Matchbox shader; the latter is written in GLSL. Unfortunately you can't do anything temporal in Matchbox, but the trade-off is that you're doing near real-time rendering.
-
Maxotics - haven't seen ASCII art in a while. I understand what you are saying, but I think this applies more to a bilinear method? If you were using a rules-based algorithm, like an adaptive Laplacian method, this might help prevent the horizontal (or perfectly vertical) problem? I'm interested in learning more about FPGA programming and experimenting with de-bayering, but this all makes me think that 3-chip cameras are not dead yet... unless someone invents a chip that has substrates like color film.
-
I'm sorry, it sounded like you were saying you couldn't get an actual 4K-sized image after de-bayering; I think I took the part about the "Nyquist 2x sample theory" and "A 4K Bayer sensor can't actually capture 4K- it's some percentage less, perhaps 3.2K-3.8K" a little too literally. I can see now that you were just saying that it's not true trichroic RGB, and that you think with 5K to 2K, the de-bayering factor is less important than with 2K to 2K.

Got spammed with an interesting article on 4K today from RedShark News: "Before we all leap into 4K, maybe we need to understand resolution better!" http://www.redsharknews.com/technology/item/1799-before-we-all-leap-into-4k,-we-need-to-understand-resolution-better?utm_source=www.lwks.com+subscribers&utm_campaign=fe5b3d60da-RSN_June27_2014&utm_medium=email&utm_term=0_079aaa3026-fe5b3d60da-76206849 Essentially he's making a point about motion and how higher resolutions change the perception of motion.
-
Correct me if I'm wrong, but I was under the impression that de-bayering does not require any downsizing from the original pixel density of the sensor; in fact, it uses algorithms that look at neighboring pixels and can recreate an image that represents the full pixel density of the chip when finished, only with more "accuracy" in the green channel and less in the R and B channels, depending on the end gamut. From what I'm reading above, there needs to be over-sampling to prevent aliasing, which forces a reduction in the pixel density of the encoded image?

http://videsignwire.com/cmos-image-sensor-processing-with-fpgas/4/

[Figure 5: Bayer patterns]

When interpolating the missing values of R and B on a green pixel, as in Figures 5(a) and 5(b), the average values of the two nearest neighbors of the same color are used. For example, in Figure 5(a), the value for the blue component on a shaded G pixel will be the average of the blue pixels above and below the G pixel, while the value for the red component will be the average of the two red pixels to the left and right of the G pixel. In Figure 6(a), the value of the green component is to be interpolated on an R pixel; the value used for the G component here is: [equation in original article]. To determine the value of the red component on a B pixel in Figure 6(b), take the average of the four nearest red pixels cornering the B pixel.

[Figure 6: Two Bayer pattern cases]

In order to improve the resulting image quality, an adaptive interpolation algorithm must be employed. Here, a 5×5 grid is used to estimate the missing pixel colors. Consider the Bayer pattern in Figure 8.

[Figure 8: 5×5 Bayer pattern grid]

First, a horizontal Laplacian is determined [equation in original article], then a corresponding vertical Laplacian [equation in original article]. Using these horizontal and vertical metrics, the interpolation algorithm is adapted according to a set of rules, and a similar algorithm is applied to the remaining mosaic colors. These rules detect pixel locations where a color change has taken place and adapt the interpolation accordingly. The result of this adaptation on the output image quality can be seen in Figures 9(a), (b) and (c). The adaptive algorithm greatly reduces the blur and hue artifacts.
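For anyone who wants to poke at this, here's a minimal sketch (Python/NumPy, not an FPGA or Nuke implementation) of the adaptive idea described above, applied only to the green channel of an RGGB mosaic. The horizontal/vertical classifiers are the commonly used Hamilton-Adams-style metrics, which may differ in detail from the article's exact equations, so treat it as illustrative. Note that the output grid has the same dimensions as the input mosaic, which is the "no downsizing" point above.

import numpy as np

def debayer_green_adaptive(raw):
    """Interpolate the green channel of an RGGB Bayer mosaic at full sensor resolution.

    At every red/blue site, compare a horizontal and a vertical gradient metric
    and average the green neighbors along the smoother direction; fall back to a
    plain bilinear average when neither direction wins.
    """
    raw = np.asarray(raw, dtype=np.float64)
    h, w = raw.shape
    green = raw.copy()  # green sites keep their measured values

    for y in range(2, h - 2):
        for x in range(2, w - 2):
            if (y + x) % 2 == 1:
                continue  # in RGGB, (row + col) odd => already a green sample

            # Horizontal / vertical classifiers: green first differences plus a
            # same-color second difference (the "Laplacian" term).
            dh = abs(raw[y, x - 1] - raw[y, x + 1]) + abs(2 * raw[y, x] - raw[y, x - 2] - raw[y, x + 2])
            dv = abs(raw[y - 1, x] - raw[y + 1, x]) + abs(2 * raw[y, x] - raw[y - 2, x] - raw[y + 2, x])

            if dh < dv:    # smoother horizontally -> average the left/right greens
                green[y, x] = (raw[y, x - 1] + raw[y, x + 1]) / 2
            elif dv < dh:  # smoother vertically -> average the up/down greens
                green[y, x] = (raw[y - 1, x] + raw[y + 1, x]) / 2
            else:          # no preferred direction -> plain 4-neighbor average
                green[y, x] = (raw[y, x - 1] + raw[y, x + 1] + raw[y - 1, x] + raw[y + 1, x]) / 4
    return green

# Toy usage: an 8x8 mosaic of random 10-bit sensor values.
rng = np.random.default_rng(0)
print(debayer_green_adaptive(rng.integers(0, 1024, size=(8, 8))))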
-
I'm sure most of you guys have seen this, but for anyone who hasn't: this is by Philip Bloom, using a Phantom 2 drone and a GoPro 3.
-
That's the CEO with the camera in his mouth. "At its current price, GoPro has a stock-market value of $4.7 billion. That's roughly 35 times the 2015 profit estimate that underwriters settled on as the forecast. It's about 3.1 times the 2015 sales estimate." http://blogs.wsj.com/moneybeat/2014/06/26/is-gopro-a-gadget-maker-a-lifestyle-brand-or-a-social-media-firm/ So currently the market is expecting exponential growth in profits... I have a feeling we're gonna see more new "lifestyle" video-oriented cameras from the other companies soon.
-
JG covered it IMO with the LUTs etc. The only thing I would add is, with respect to getting a "good film-looking result", I personally think the GH4 is going to look more like video, regardless of log or color, when there is rapid motion. The inherent sharpness could be a giveaway too, so you could consider at times adding blur with a motion-vector filter like ReelSmart, and/or adding grain from a common film stock (at the end of your workflow, so you have control over density and color). Just my 2 cents.
-
They are different in some cases, but they also have overlap in the context of light in a scene, the transparency of film, or the sensitivity of a digital sensor. References:

"Dynamic range, abbreviated DR or DNR, is the ratio between the largest and smallest possible values of a changeable quantity, such as in signals like sound and light. It is measured as a ratio, or as a base-10 (decibel) or base-2 (doublings, bits or stops) logarithmic value." http://en.wikipedia.org/wiki/Dynamic_range

"In photography, exposure range may refer to any of several types of dynamic range: the light sensitivity range of photographic film, paper, or digital camera sensors; the luminosity range of a scene being photographed; the opacity range of developed film images; the reflectance range of images on photographic papers." http://en.wikipedia.org/wiki/Exposure_range

No, but in the real world you can measure the relative exposure of anything that emits or reflects light, like the difference between a sheet of white paper and direct sunlight (with a spectroradiometer), which is actually quite huge. Captured by a digital camera, which has its own dynamic range, and recorded into 8-bit (which has a maximum of 256 luma levels), the sheet of paper and the sun would likely read the same value of 255. So your usable stops have been clipped by your maximum bit depth. If recorded with a wide dynamic range and encoded into a 10-bit log format (for example), the paper could read as "white" at or below code 685 and the sun could read above 685, or "over white". LDR and HDR files/workflows refer to dynamic range in this other context. See Cineon, DPX, OpenEXR, or LDR vs HDR rendering pipelines, etc.

Reference: "Conversion of DPX code values to relative scene luminance: relative luminance = 10 ^ ( (dpx code value - white point) * 0.002 * negative gamma ). Commonly, the negative gamma is assumed to be 0.6 (density) and the white point is set to 685. The result is relative luminance where the reference white is placed at 1.0. To convert into a format like 16-bit TIFF, the luminance values are multiplied by 4095, leaving a headroom of 4 stops for the highlights." http://www.acvl.org/digital_intermediates/dicompanion/ch03.html

The dynamic range of the camera doesn't change, but the range or color depth of what you've recorded does, which has a direct impact on how much you can push the footage in post and the level of dynamic range you can effectively use. Anyway, I think the post above sums up what I'm saying about bit depth: "8bit log produces banding. This must be for recorders."
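If it helps, here's that DPX conversion as a couple of lines of Python, using the formula exactly as quoted above (white point 685, negative gamma 0.6). The printed numbers are just what that formula gives, not measurements; note that some Cineon references divide by the negative gamma instead of multiplying, so check whichever source you're matching.

def dpx_to_relative_luminance(code, white_point=685, neg_gamma=0.6):
    """Map a 10-bit DPX/Cineon code value to scene-relative luminance (reference white = 1.0)."""
    # Formula as quoted above; some Cineon references divide by the negative
    # gamma rather than multiplying -- check the pipeline you're matching.
    return 10 ** ((code - white_point) * 0.002 * neg_gamma)

print(dpx_to_relative_luminance(685))   # 1.0   -> reference white
print(dpx_to_relative_luminance(1023))  # ~2.55 -> "over white" highlight headroom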
-
btw, you can calculate the relative exposure range as: relative exposure = log2 ( luminance / reference luminance )
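As a quick worked example (the luminance figures are rough ballpark numbers assumed for illustration, not measurements): a white card in direct sun is on the order of 3 x 10^4 cd/m², while the solar disc itself is on the order of 1.6 x 10^9 cd/m², which works out to roughly 15-16 stops between them.

import math

def relative_exposure(luminance, reference_luminance):
    """Exposure difference in stops between two luminances."""
    return math.log2(luminance / reference_luminance)

# Illustrative ballpark values: solar disc vs. a white card in direct sunlight.
print(relative_exposure(1.6e9, 3.0e4))  # ~15.7 stops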
-
humm, "gamma log!" not to be confused with "Gamera log"..
-
Yes, I didn't mean to imply that bit depth and dynamic range are somehow the same property, or that film actually contains an inherent bit depth, since of course it's an emulsion and not digital. It's just the profile that the vendor thinks works best with their cameras, since it's all somewhat arbitrary. Log C encodes 18% middle gray to a code value of about 400 in a 10-bit signal. The maximum values can vary, though, based on exposure settings.
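For the curious, here's a minimal sketch of that mapping in Python, using the Log C (EI 800) constants as I understand them from ARRI's published material; treat the exact numbers as an assumption and check ARRI's own documentation before relying on them.

import math

def logc_encode(x, cut=0.010591, a=5.555556, b=0.052272,
                c=0.247190, d=0.385537, e=5.367655, f=0.092809):
    """Map relative scene linear (0.18 = middle gray) to a normalized Log C signal."""
    return c * math.log10(a * x + b) + d if x > cut else e * x + f

print(round(logc_encode(0.18) * 1023))  # ~400: middle gray lands near code 400 in 10-bit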
-
In mathematical terms, a logarithmic scale is "... a scale of measurement that displays the value of a physical quantity using intervals corresponding to orders of magnitude, rather than a standard linear scale. The function of the curve may include an exponent, which is the source of its curved nature." Translation: it's a curve.

In digital camera terms it usually refers to a profile curve built into a camera's firmware that selects which bits from the scene get encoded into the eventual file, but that file will still only record 256 (or fewer) integer levels if it is an 8-bit format. If you want to retain the typical full range of a camera's log profile (and have it be usable), you will want to encode to 10-bit. Most log profiles consider middle gray to be 18%, not 50%, because our eyes see 50% as almost white, and they require 10 bits to cover the full range.

Log profiles often look "flat" and can vary in their "flatness" from profile to profile. Originally the flatness wasn't desirable so much as necessary to reproduce a print and include more dynamic range, but it also allows more detail to be drawn out, assuming you have the color depth. So basically, a log profile might not be desirable when shooting Rec.709 8-bit, but this is the source of much (much) debate.

A LUT is a color lookup table and has more to do with post and changing the way you interpret a file, although it can be used for lots of other conversion needs. "Gamma log" seems redundant to me; I'm confused by that one. That's kinda like "Scientology", i.e. the study of science.
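To make the 8-bit vs 10-bit point concrete, here's a toy calculation. The curve below is not any vendor's log profile, just a generic log2 mapping assumed for illustration, spreading a 14-stop range (6 stops below middle gray, 8 above) across 0-1; it counts how many distinct code values each container leaves for the six stops below middle gray.

import numpy as np

def toy_log_encode(x, mid_gray=0.18, stops_below=6, stops_total=14):
    """A made-up log curve: 0..1 spans 14 stops, with 18% gray 6 stops up from the floor."""
    x = np.maximum(x, mid_gray * 2.0 ** -stops_below)            # clip at the curve's floor
    return (np.log2(x / mid_gray) + stops_below) / stops_total

ramp = np.linspace(0.18 / 64, 0.18, 20000)   # the six stops below middle gray
signal = toy_log_encode(ramp)

for bits in (8, 10):
    codes = np.unique(np.round(signal * (2 ** bits - 1)))
    print(f"{bits}-bit container: {codes.size} distinct codes for those six stops")

Roughly four times as many steps survive in 10-bit, which is where the banding argument comes from.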
-
That should scream, but beware of AE consuming ALL of your RAM when you want to use those cores (especially with hyperthreading). It's a real memory hog now.
-
You shouldn't rely on any LUT to do all your work for you and you shouldn't have a workflow that always adds noise at the top to introduce artificial "variance". If you have banding after using a LUT, but you really like the look, then de-construct what your LUT is doing and see if you can mimic the style manually, going softer if possible where it's introducing the banding (no doubt in the blacks). The LUT is just revealing the limitations of your footage, like any other color correction, or it's a limited color gamut LUT. Adding noise at the top is baking in this fake "variance" and the more work you do on the footage, the more the noise will be exaggerated and start to look non-organic.
-
Yep, you'll upgrade right when 4D 16K comes out. A lot of people are familiar with Moore's law, but my favorite is Blinn's law, which states that no matter how much faster computers get, the time it takes to render stays the same, because the client always wants more (or, in the case of editing, no matter how much faster your drives get, they will always be too slow for the newest format).