
Leaderboard

Popular Content

Showing content with the highest reputation on 06/04/2023 in all areas

  1. Very interested in the camera, and I'll probably upgrade, but... it was a VERY strange launch. First, the usual suspects who review cameras for video (the couple now at PetaPixel, Gerald Undone, Toneh, Kai, etc.) did not get units, as they usually do. Of the more traditional reviewers, only Gordon Laing and Cine-D got one. And almost no one showed the new AF system in full action or described it in detail. Most of them said (and it could be true) that they got units without final firmware. Others, like Chris from Pal2Tech (the big surprise in the review batch: he is a very good Fuji reviewer but had never received a unit before), showed a little and said it was way better than any Fuji he had used, even better than the X-T5 after its firmware update. He had an excuse: he only had the unit for two days, and it was sent by a Fuji representative, not Fuji itself. Other reviewers also mentioned having very little time with the camera. Also in this regard: most of the reviewers are European, especially British, but I had never heard of most of them before. My two cents:
     - There were not many units ready, so people had to test briefly and return the camera so it could be sent on to the next reviewer.
     - On the AF tests: either (the "I hope" answer) the firmware was not final and will get more tweaks before release, so deep testing wasn't allowed, or (the "damn..." answer) the AF is still not good enough and Fuji hopes to move some preorder units before the bad news spreads.
    1 point
  2. Very true @kye, priceless post, mate :) I think it's more about learning to work within the limitations, but you're spot on about everything you said :)
    1 point
  3. so many people are going to lose their jobs lol. this is gen 1 beta
    1 point
  4. I disagree. Creativity is about creating, plain and simple. We can debate forever whether making videos or films is "art" (that debate has been going on for practically the entire history of mankind), but I don't see that the level of creativity of a final edited video/film depends solely (or even substantially) on whether you hand-hold the camera and grab 10s clips or put it on a tripod and record 20-minute clips. I don't know you, and I don't know what work you have or haven't done, but you're not speaking like someone who understands the entirety of the creative process. Camera YouTube and the online ecosystem of camera forums and blogs give a completely fictitious impression that the camera is the main item when it comes to creating moving images (moving in an emotional as well as a literal sense). I have come to understand that the following things are so much more important to the creative process that they eclipse the camera entirely:
     - Whatever preparation you do prior to filming anything (writing, scripting, concept design, location scouting, etc)
     - What you put in front of the camera (casting, production design, lighting, directing, acting, etc)
     - What you do with the footage you have captured (editing, VFX, story structure, music and sound design, etc)
     - What you do with the final edit (audience selection, distribution, promotion, etc)
     If you can't understand how recording footage from a tripod could be used in a deeply creative way, then you don't understand film-making that well. A large proportion of TV/streaming content is made from long recordings from fixed cameras: reality TV, documentaries (which typically have hundreds or thousands of hours of interview footage), game shows, most talking-head YouTube content filmed in a fixed location, music videos, etc. Is there lots of content where the camera takes short clips, preferably while moving? Sure, but it's not the only type of content. Take this recent comment from the Godfather of vlogging: it's not until you really study how professional content is made that you start to understand what goes into things. Are you familiar with shooting ratios? Feature films can vary wildly, and documentaries are often a lot larger (you may not know what the story is going to be until after it happens, so you tend to just shoot everything). Source: https://blogs.lse.ac.uk/usappblog/2014/06/07/how-much-data-do-you-need-like-documentary-film-making-research-requires-far-greater-coverage-than-the-final-cut/ So, you may think that capturing 30-minute clips from a static location makes for boring footage, and you're likely right, but that's literally what editing is for. This is one of the (many) reasons that film-making is hard work. Going back to our friend Casey Neistat for a second: during the stretch when he posted a daily vlog for something like 800 days in a row, he mentioned that most of his vlogs took between 5 and 9 hours to edit. That's somewhere between 15 and 60 minutes of editing time per minute of final footage. I don't know how much footage he actually shot, but when he interviews people he often chops the footage down severely. He recently mentioned one example where he cut 25 minutes of interview (during which the camera was rolling the whole time) down to 60-75 seconds in the final edit of the video.
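     For anyone who wants to sanity-check that arithmetic, here's a quick back-of-envelope sketch; the final vlog runtimes are my assumptions for illustration, not Neistat's actual figures:

```python
# Rough arithmetic behind the editing ratios quoted above. The final
# runtimes are assumptions; actual daily-vlog lengths varied.

edit_minutes = (5 * 60, 9 * 60)   # 5-9 hours of editing, as mentioned above

for runtime in (9, 20):           # assumed final vlog runtimes, in minutes
    low = edit_minutes[0] / runtime
    high = edit_minutes[1] / runtime
    print(f"{runtime}-minute vlog: {low:.0f}-{high:.0f} min of editing per final minute")

# The interview example: ~25 minutes of continuous recording cut to ~60 seconds
print(f"Interview shooting ratio: roughly {(25 * 60) // 60}:1")
```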
     If Casey Neistat, who revolutionised vlogging, needs a camera that can roll for 25 minutes reliably, then claiming that long locked-off shots aren't "creative" writes off vast sections of the content being created. I understand why it overheats, and why Sony would make a camera that overheats. The thing I find bizarre is that customers have somehow come to accept that cameras overheating is normal, and not a sign of an inferior product. It's pretty clear that professional environments require equipment with rock-solid reliability, and so high-end / professional cameras are built for reliability. Where I think there is a lack of understanding is when it comes to lower-budget environments. If you have a lower budget then you're probably shooting in situations where you are less in control of the environment. Not only does this mean you're less able to control the situation the camera is in (and ensure it's not exposed to adverse temperatures / conditions), but you're also likely to need longer record times, because the likelihood of getting the shot you want is lower.
    1 point
  5. In my opinion, camera companies should be offering simple, uncompressed internal RAW recording on their $2,000+ cameras. The Canon 5D II can do this (via Magic Lantern firmware) but the $2,000 Panasonic S5 II cannot.
    1 point
  6. Yes, and the beautiful thing about how Sigma handled it is that it's uncompressed CinemaDNG, so it's literally reading the sensor and dumping the raw feed to the card, wrapped up in the DNG format. Couldn't get easier. That's the whole reason old Canon cameras can do 14-bit RAW continuously without overheating (OK, not in a .DNG format straight off the bat, but still).
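     To get a feel for why "just dump the sensor" is simple but demands fast media, here's a rough data-rate calculation; the resolution, bit depth, and frame rate are illustrative assumptions rather than exact specs for any of these cameras:

```python
# Back-of-envelope data rate for uncompressed Bayer RAW video.
# Resolution, bit depth, and frame rate are illustrative assumptions.

width, height = 4096, 2160   # assumed DCI 4K photosite dimensions
bit_depth = 12               # bits per photosite, before any compression
fps = 24

bits_per_frame = width * height * bit_depth        # one value per photosite
mib_per_second = bits_per_frame * fps / 8 / 2**20  # convert bits/s to MiB/s

print(f"~{mib_per_second:.0f} MiB/s uncompressed")  # ~304 MiB/s: why fast media matters
```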
    1 point
  7. Academy Award-winning cinematographer Greig Fraser (Dune, Rogue One, Lion) apparently shot his upcoming sci-fi thriller entirely on an FX3. The ZV-E1 has the same sensor, image processor, 10-bit codec, S-Log3, and even LUT support. I'm sure we'll end up seeing incredible cinematic footage produced with it; it's certainly capable of it. That being said, not to sound like a broken record, but the fact that the camera overheats does limit its use. So it's maybe not the most reliable option for pro work versus an A7S3/FX3 with passive/active cooling. I've done work with overheating cameras in the past (R5, R6), so it's doable, but it's stressful, and they're better suited as B or C cams to be on the safe side. The ZV-E1 seems even more unreliable than those because of its ultra-compact size. We know, for instance, that indoors it is very sensitive to ambient temperature. What the latest findings show is that airflow is key. Hence it is better suited as a vlog cam, where you are outdoors in motion, or as a travel cam, where you are generally shooting short clips outdoors. And in both cases, even if the camera overheats, no biggie: take a coffee break and let it cool off. But where there is a will there is a way; rig a mini fan on it and you may indeed have yourself a budget mini FX3! Not that bizarre. No cooling = overheating. The FX3/FX30/R5C/S1H have active cooling, so they don't overheat. Sony has not played into the resolution wars with their A7S3/FX3/FX6/ZV-E1 cameras: all 12MP, 1:1 4K cameras. Overheating isn't resolution-related; it's related to high frame rates and high-bitrate compressed 10-bit codecs. It's when these appeared that overheating started plaguing mirrorless bodies. It's also why Sigma chose not to implement internal 10-bit compressed recording on their ultra-compact FP: it's either RAW or 8-bit.
    1 point
  8. The 28-60 has no OSS. The 24-70 has OSS, so in principle, if the lens OSS and IBIS coordinate, there should be more stability. I use Active Stabilization for static shots, reserving dynamic stabilization for moving with the camera. I have not tried that (yet) with the Zeiss/Sony lens.
    1 point
  9. There's definitely a magic to that image! Nice work!! 🙂
    1 point
  10. Matt Kieley

    Lenses

    I have a Fujinon 18-108mm f/1.8 C-mount zoom that covers S16 and is very light and compact. Here it is next to my Canon 17-102 and Panasonic 12-35, and mounted on my GH5. I don't know if it's compact enough for you, but it's the smallest C-mount zoom covering S16 that I've owned (and I've owned quite a few).
    1 point
  11. In Switzerland a new S5II from a store is 1763 CHF while the S5IIX is 2599 CHF, so the price difference is hard to justify considering what you can get with the $200 firmware upgrade. I guess with some luck one might get a used S5II for 1000 CHF less than a new S5IIX at this point. But I have to agree that there's a big opportunity missed in not offering BRAW to an SSD instead of to an external recorder.
    1 point
  12. The recording to SSD and ProRes support of the S5IIX (with a minor mention of the phone connection for streaming) are big differences that might persuade me if I were looking to buy an S5II. Not that I am, of course, as I'm completely adamant that I'm not interested, since I'm dead set on a Z9/Z8/A1/X-H2S purchase next week. Absolutely nailed on. Not going to waver in the slightest. Honestly.
    1 point
  13. kye

    Sensor vs. Processor

    To return to the original question: perhaps the most important element in all this is the operator's ability to understand the variables and aesthetic implications of all of the layers above, to understand their budget and the options available on the market, and to apply that budget so that the products they use are optimal for achieving the intellectual and emotional response they wish to induce in the viewer.
    1 point
  14. kye

    Sensor vs. Processor

    To expand on the above, here is a list of all the "layers" that I believe are in effect when creating an image - you are in effect "looking through" these items:
    - Atmosphere between the camera and subject
    - Filters on the end of the lens
    - The lens itself, with each element and coating, as well as the reflective properties of the internal surfaces
    - Anything between the lens and camera (eg, speed booster / TC, filters, etc)
    - Filters on the sensor and their accompanying coatings (polarisers, IR/UV cut filters, anti-aliasing filter, bayer filter, etc)
    - The sensor itself (the geometry and electrical properties of the photosites)
    - The mode that the sensor is in (frame rate, shutter speed, pixel binning, line skipping, bit depth, resolution, etc)
    - Gain (there are often multiple stages of gain, one of which is ISO, occurring both digitally and in the analog domain - I'm not very clear on how these operate)
    - Image de-bayering (or equivalent for non-bayer sensors)
    - Image scaling (resolution)
    - Image colour space adjustments (Linear to Log or 709)
    - Image NR, sharpening, and other processing
    - Image bit-depth conversions
    - Image compression (codec, bitrate, ALL-I vs IPB and keyframe density, etc)
    - Image container formats
    This is what gets you the file on the media out of the camera. Then, in post, after decompressing each frame, you get:
    - Image scaling and pre-processing (resolution, sharpening, etc)
    - Image colour space adjustments (from file to timeline colour space)
    - All image manipulation done in post by the user, including stabilisation, NR, colour and gamma manipulation (whole or selective), sharpening, overlays, etc
    - Image NR, sharpening, and other processing (as part of export processing)
    - Image bit-depth conversions (as part of export processing)
    - Image compression (codec, bitrate, ALL-I vs IPB and keyframe density, etc) (as part of export processing)
    - Image container formats (as part of export processing)
    This gets you the final deliverable. Then, if your content is to be viewed through some sort of streaming service, you get:
    - Image scaling and pre-processing (resolution, sharpening, etc)
    - Image colour space adjustments (from file to streaming colour space)
    - All image manipulation done by the streaming service, including NR, colour and gamma manipulation (whole or selective), sharpening, overlays, etc
    - Image NR, sharpening, and other processing (as part of preparing the stream)
    - Image bit-depth conversions (as part of preparing the stream)
    - Image compression (codec, bitrate, ALL-I vs IPB and keyframe density, etc) (as part of preparing the stream)
    - Image container formats (as part of preparing the stream)
    This list is non-exhaustive and is likely missing a number of things. It's worth noting a few things:
    - The elements listed above may be done in different sequences depending on the manufacturer / provider
    - The processing done by the streaming provider may differ per resolution (eg, more sharpening for lower resolutions)
    - I have heard anecdotal but credible evidence that there is digital NR within most cameras, and that this might be a significant factor in what separates consumer RAW cameras like the P2K/P4K/P6K from cameras like the Digital Bolex or high-end cinema cameras
    ...and to re-iterate a point I made above: you must take the whole image pipeline into consideration when making decisions. Failure to do so is likely to lead you to waste money on upgrades that don't get the results you want.
    For example, if you want sharper images you could spend literally thousands of dollars on new lenses, but this might be fruitless if the sharpness/resolution limitation is the in-camera NR. Or you might spend thousands of dollars on a camera that is better in low light when there is no perceptible difference, because the streaming service has compressed the image so much that you would have to be filming at ISO 10-bajillion before any grain is visible (seriously - test this for yourself!).
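    One way to picture all of this is as a chain of functions applied in sequence. The sketch below follows the stage names from the lists above, but every function is a hypothetical placeholder; real cameras and services differ in order and detail:

```python
# Minimal sketch: the image pipeline as chains of stages applied in order.
# Stage names follow the lists above; every function is a placeholder, and
# real pipelines differ in sequence and detail per manufacturer/provider.

from functools import reduce

def debayer(img):    return img  # reconstruct colour from the CFA mosaic
def scale(img):      return img  # resolution change
def colour(img):     return img  # colour space adjustment (Linear/Log/709)
def nr_sharpen(img): return img  # noise reduction and sharpening
def quantise(img):   return img  # bit-depth conversion
def compress(img):   return img  # codec, bitrate, GOP structure, container

in_camera = [debayer, scale, colour, nr_sharpen, quantise, compress]
post      = [scale, colour, nr_sharpen, quantise, compress]  # same stages repeat
streaming = [scale, colour, nr_sharpen, quantise, compress]  # and repeat again

def run(stages, img):
    """Apply each stage to the image in order."""
    return reduce(lambda acc, stage: stage(acc), stages, img)

final = run(streaming, run(post, run(in_camera, "sensor readout")))
```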
    1 point
  15. Django

    Sensor vs. Processor

    No, in Fuji's case the X-Trans sensor has a huge impact on IQ. It uses a unique, non-standard colour filter array that affects noise, moiré, and detail resolution, and it removes the need for an optical low-pass filter. The new processor in the X-S20 is what allows 6.2K open gate in 10-bit 4:2:2; the X-T4 could only do up to 4K in 4:2:0. It also enables F-Log2 and ProRes RAW output.
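    For a concrete picture of the difference, here is a small sketch comparing the two filter arrays. The Bayer tile is the standard 2x2; the 6x6 X-Trans tile is the commonly published pattern, included for illustration rather than as an authoritative spec:

```python
# CFA layouts: Bayer repeats every 2x2; X-Trans repeats every 6x6 in a less
# regular arrangement. The 6x6 below is the commonly published X-Trans
# pattern (treat it as illustrative). Its irregularity is what reduces
# moiré enough to drop the optical low-pass filter.

BAYER = [
    "RG",
    "GB",
]

X_TRANS = [
    "GBGGRG",
    "RGRBGB",
    "GBGGRG",
    "GRGGBG",
    "BGBRGR",
    "GRGGBG",
]

def colour_fractions(pattern):
    """Fraction of photosites per colour in one repeating tile."""
    cells = "".join(pattern)
    return {c: round(cells.count(c) / len(cells), 2) for c in "RGB"}

print(colour_fractions(BAYER))    # {'R': 0.25, 'G': 0.5, 'B': 0.25}
print(colour_fractions(X_TRANS))  # {'R': 0.22, 'G': 0.56, 'B': 0.22}
```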
    1 point
  16. kye

    Sensor vs. Processor

    1) Colour. The sensor in a camera is a linear device, but it does have a small influence on colour because each photosite has a filter on it (which is how red, green, and blue are detected separately), and the wavelengths of light that each colour filter passes are tuned by the filter's manufacturer for optimal characteristics; this is a small influence on the colour science of the camera. The sensor then just measures the light that hits each photosite and is completely linear. Therefore, all the colour science (except the filter on the sensor) is in the processor that turns the linear output into whatever 709 or Log profile is written to the card.
    2) DR. Dynamic range is limited by the dynamic range of the sensor and its noise levels at the given ISO setting. If a sensor has more DR or less noise, the overall image has more DR. The processor can do noise reduction (spatial or temporal), which can increase the DR of the resulting image. The processor can also compress the DR of the image by applying uneven contrast (eg, crushing the highlights) or by clipping the image (eg, when saving JPG images rather than RAW stills), which would decrease the DR.
    3) Highlight rolloff. Sensors have nothing to do with highlight rolloff - when they reach their maximum levels they clip harder than an iPhone 4 on the surface of the sun. All highlight rolloff is created by the processor when it takes the linear readout from the sensor and applies the colour science to the image. There is general confusion around these aspects, and there is frequent talk of how one sensor or another has great highlight rolloff, which is factually incorrect. I'm happy to discuss this further if you're curious.
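    A tiny sketch of this distinction; the rolloff curve is an arbitrary illustrative soft-clip, not any camera's actual transfer function:

```python
# A sensor is linear up to full well, then clips hard: no rolloff at all.
# Any highlight rolloff is a tone curve the processor applies afterwards.
# The curve below is an illustrative soft-clip, not a real camera's LUT.

import numpy as np

exposure = np.linspace(0, 4, 9)        # scene light; 1.0 = sensor full well

sensor  = np.minimum(exposure, 1.0)    # hard clip at the sensor
rolloff = exposure / (1 + exposure)    # Reinhard-style soft curve, in processing

for e, s, r in zip(exposure, sensor, rolloff):
    print(f"light {e:4.1f} -> sensor {s:4.2f}, processed {r:4.2f}")
```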
    1 point
  17. More rainbow flare-tail tests 🙂 with a OnePlus 8 Pro + custom 1.5x anamorphic scope. I hope you like it, and if not, please leave your feedback. Thanks!
    1 point