Leaderboard

Popular Content

Showing content with the highest reputation on 04/18/2023 in all areas

  1. I own two C70s, a Red One MX, and even a 5D Mark III that also runs ML, with more stability and at higher resolutions. I keep asking myself why I even care about this cheap little camera that I have never shot an actual project with, and why I haven't sold it yet. On the surface, it's because I think that once the ML firmware gets straightened out, the EOS-M has the potential to be one of the few raw-capturing cameras that can work with 16mm lenses. But it's dawned on me that it's not so much the camera as the community collaboration around it.
The "movement" (mostly happening in the FB group) is reminiscent of the early days of DSLR and mirrorless video, when people were all trying to squeeze every inch of quality out of cameras that were never intended for cinematography. Nowadays, every camera has great IQ and a plethora of custom-fitted accessories and perfected workflows. But 10-15 years ago, there were no dedicated cages or practical battery solutions, and the cameras were full of quirks. You had to lean on the community for workarounds if you wanted a chance of creating anything worthwhile with these difficult cameras. Despite all the hurdles, some people managed it, and it always felt like a team victory.
I think what's going on with the EOS-M mirrors the evolving hacks for the GH1 and GH2 in a lot of ways. There is no dedicated cage for this camera either. Plug it into a battery and you have to find a place to hide the unruly adapter cable. Moire and aliasing abound at the lower resolutions; rolling shutter is horrendous at the higher ones. The ergonomics make the GH cameras look like a dream. And it just doesn't make sense to throw a lot of money at a defunct $150 camera, which really ups the ante for DIY solutions. People share their rigging experiments freely (with lots of 3D-printed parts), as well as their footage, since the software is constantly being tweaked and improved.
That group is experimenting, tinkering, and collaborating like it's 2011. It's the fun side of camera geekery all over again.
    4 points
  2. I dusted off my EOS-M today and loaded the new ML. It's feeling a lot more user-friendly than before. 2.8K with accurate monitor display? Heck yes. This is potentially an acceptable level of 'frustrating' and I might actually shoot something with it.
    2 points
  3. Emanuel

    Fx30

    Ditto. And only to testify to it: at home, ALL members of the family notice it and complain about the Xiaomi's 'smart' :D TV, which is unable to grasp what realm of motion pictures fills the cup of educated people used to more than a century of this universe. The interesting part of the story is that people truly need the history of cinema to understand something, while they don't need to appreciate classic movies to reject what, in turn, looks like crappy dictatorship for real. That's what I'm supposed to realize from what I'm used to hearing back home, without needing to be the first one to protest when the display misbehaves on its default settings. Stop the bloody automation. All of it done without the scrutiny of an educated guess, rather than by humans invariably refusing to see themselves as stupid and claiming the need, not necessarily just for once instead. No idea which of the two is more critical. :- )
    1 point
  4. filmmakereu

    Fx30

    I think people have little clue about things they think they know.
    1 point
  5. Emanuel

    Fx30

    To neglect motion: I wouldn't call it stupid; instead, I'd call it getting the wrong end of the stick. It's not about style but about intrinsic features related to the inherent traits of the medium. Motion is everything; it's the name of the game. In a word: language. - EAG :- )
    1 point
  6. Django

    Fx30

    Continuing its steady avalanche of camera roll-outs, Sony is expected to release 5 new E-mount cameras this year alone, including some sort of cine/video-centric camera later this spring, and an FX10 by this summer with the same 26MP FX30 sensor but with an AI processor (expect ZV-E1 features), plus an added mechanical shutter but no IBIS, at an entry-level price. An oversampled 6K sensor would make better sense for the AI and stabilization-cropping features.
    1 point
  7. Django

    Fx30

    Those attributes go without saying, but I disagree about motion being "negligible for the viewer". If you have a 120Hz+ TV from Samsung or LG, it's quite easy to test: activate clear motion (which artificially doubles the frame rate) on your favourite indie classic and watch it get ruined by the disgraceful soap-opera effect. No lighting, framing, color grading or acting will save it from that. That being said, not everything needs to be "cinematic". Adding faux widescreen black bars is a form of frame crippling. I'm actually a fan of 3:2 & 4:3 open gate; it's quite refreshing not having the height cropped. Vertical makes sense on a handheld phone, and that's about it. Fortunately, open gate allows multiple aspect ratios, so it's a win-win. Hopefully more manufacturers start implementing open gate on video-centric cameras. It has more uses than anamorphic support.
    1 point
  8. Mentioned this in another EOS-M thread, but for anyone who missed it: Meike make an EF-M converter with a drop-in filter system. The standard kit with the variable ND and clear filter is €145, and I think it really gives this kit an extra boost as a compact powerhouse, albeit at almost the full cost of the camera 🙂
    1 point
  9. Great write-up and a pleasure to read! @QuickHitRecord What are your findings with this ML build? How does it compare to Dantes' builds, which have been held in high regard by our friend and EOS-M artist @ZEEK?
    1 point
  10. This is mostly true. I subscribe to the 'subtractive' model of editing, where you start with all your footage and remove the parts you don't want, making several passes, then end with a slight additive process where I pull in the 'in-between' shots that allow it to be a cohesive edit. I'm aware there's also the 'additive' model, where you just pull in the bits you want and don't bother making passes. Considering my shooting ratios (the latest project will be 2000 shots / 5h22m, likely to go down to something like 240 shots / 12m - either 8:1 or 27:1, depending on how you look at it) and the fact that I edit almost completely linearly (in chronological order of filming), I might be better off with an additive process instead. I've also just found a solution to a major editing challenge, and am gradually working through how I'll include it in my editing style.
If you're not deeply attuned to the subtleties of the image (as I know some people are), it's quite feasible to match footage across cameras, and even do image manipulation in post to emulate various lenses, at least to the extent that it would be visible in an edit with no side-by-side comparisons. The fact that a scene can be edited together from multiple angles that were lit differently and shot with different focal lengths from different distances says a lot about how much we can tolerate in terms of things not matching completely.
I had a transformative experience when I started breaking down edits from the award-winning travel show Parts Unknown (as that most closely matched my subject matter and shooting style). I discovered a huge number of things, with some key ones being:
- Prime (which streams 1080p at high enough quality that grain is nicely rendered) showed clearly that the lenses they used on many episodes aren't even sharp to 1080p resolution, with visible vintage lens aberrations like CA etc.
- They film lots of b-roll at high frame rates and often use it in the edit at normal speed (real-life speed), which means it doesn't have a 180-degree shutter, and yet it still wins awards - even for cinematography
- Many exterior shots have digitally clipped skies
- Most shots are nice but not amazing, and many of them were compositions that I get when I film
- The colour grading is normally very nice and the image is obviously from high-quality cameras
This made me realise that the magic was in their editing. When I pulled that apart, I found all sorts of interesting sequences and especially uses of music. But what was most revealing was when I then pulled apart a bunch of edits from "cinematic" travel YouTube channels and discovered that while the images looked better, the editing was so boring that any real comparison was simply useless.
This was when I realised that camera YouTube had subconsciously taught me that the magic of film-making was 90% image and 10% everything else, and that this philosophy fuels the endless technical debates about how people should spend their yearly $10K investment in camera bodies. Now I understand that film-making is barely 10% image, and that, to paraphrase a well-known quote, if people are looking at the image quality of your edit then your film is crap. When you combine this concept with how much is possible in post, I find it baffling that people spend dozens or hundreds of hours working to earn money to buy the image they like, and dozens or hundreds of hours online talking about cameras, without taking even a few hours to learn basic colour grading techniques. It's like buying new shoes every day because you refuse to learn how to tie and untie the laces and the shop does it for you when you buy a pair.
Absolutely - that's a great way of putting it! My consideration now is what is 'usable', with the iPhone's wide-angle low-light performance being one of the only sub-par elements in my setup, and, of course, why IQ is Priority 4.
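The two ratios quoted above come from the same footage counted two different ways. A minimal sketch of that arithmetic (the helper function is hypothetical; the shot counts and durations are the figures from the post):

```python
# A shooting ratio compares source material to the finished edit.
# It can be counted by number of shots or by runtime, giving
# different numbers for the same project.
def shooting_ratio(source: float, final: float) -> int:
    """Return source:final, rounded to the nearest whole number."""
    return round(source / final)

# By shot count: 2000 source shots expected to become ~240 in the edit.
by_shots = shooting_ratio(2000, 240)          # ~8:1

# By runtime: 5h22m (322 minutes) of footage for a ~12 minute film.
by_runtime = shooting_ratio(5 * 60 + 22, 12)  # ~27:1

print(f"{by_shots}:1 by shots, {by_runtime}:1 by runtime")
```

The gap between the two numbers just reflects that the shots kept in the edit are much shorter, on average, than the takes they came from.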
    1 point
  11. IronFilm

    Audio for dummies...

    For people who travel a lot, this will be a very powerful feature! As soon as you land in a new country, you can open the Deity app and get access to that country's set of legal frequencies (which might not have been legal back home). No need to buy/rent new hardware for each country you visit! It's also useful for pros like myself, who need transmitters (to go out to camera or DIT) in their bag next to their receivers (for the talent/booms): with the Deity you can use the odd little gaps in the frequency spectrum that sit far, far away from the frequencies for your talent. Helps a lot with frequency coordination! I'm looking forward to using 819 to 824MHz with the Deity UHF wireless.
    1 point
  12. Found some tasty GH5 footage shot with the tasty Tokina 11-16mm f/2.8. Finding tasty GH5 footage is always a treat, as this camera is still super capable, and it was one of the first fantastic cameras to reverse the trend of medium camera / great footage. It started the infamous trend of great camera / mediocre footage. So I always enjoy a good find of great image quality coming from this awesome camera. Love the colours and the use of the hybrid beast. The channel has a variety of great stuff to enjoy, browse and watch, for instance using the GHAlex LUT on the Natural profile instead of V-Log. Anyway, here is the tasty, tasteful, wide-angle cinematic :) video. His channel, for the love of browsing and procrastinating in an enjoyable way :) https://www.youtube.com/@SwedishFilmmaker/videos
    1 point
  13. Seems like you have a process that works and you're content with it. Very cool. I'm in the same boat. Since I'm low, low, low budget, I shoot what I can with what I can and align as best as possible in post. I just spent a month and a half filming with a shitty 500mm lens simply because that's all I could afford. It's not great, but it's not a deal breaker either. So off into the field I went, and I used it. Also, combining footage from multiple cameras with different lenses is not that hard unless, as a filmmaker, you're incredibly intent on extremely tight cohesiveness in the IQ and are desperately seeking out that extra 5%. Your tests prove that consolidating various footage is viable, and my anecdotal experience agrees. I know a lot of us here really want to find the perfect recipe for all of the above, throw in some secret sauce to make it all work, and then sit back in the editing seat and go "golly, doesn't that look wonderful!" However, since consumer IQ tech is pretty damn good now, as a documentarian my goal isn't the tech; more often it's simply to get the shot that tells the story, then tell that story. These days, when it comes to IQ, I worry much, much more about the floor than the ceiling.
    1 point
  14. Hope they find a way onto newer cameras as well. The DR of these older cameras is quite limited.
    1 point
  15. The funding goal was just reached. Thanks to everyone who donated!
    1 point
  16. They need to hack Sony firmware and make a Magic Torchlight or something. Lots of people would support that in a heartbeat. I would.
    1 point
  17. Gave the equivalent of a beer. Magic Lantern is wonderful. Love the raw film that these cameras can be hacked to shoot.
    1 point
  18. The GX85 is the next challenge. For Priority 1, it needs to be kept handy and accessible. This means either being kept in-hand or in a pocket - keeping it in a bag adds access time unless the bag is on the front of my chest. Both of these mean the rig has to be kept as small as possible, which essentially boils down to lens choice. The 14mm f2.5 lens is an absolute gem in this regard. In the 4K mode (which has a 10% crop into the sensor), it gives a 30.8mm equivalent FOV. I edit and deliver in 1080p, like all sensible people in the real world who haven't confused their ass for their elbow, and so I can also use the 2x digital crop feature, which gives a 61.6mm equivalent FOV. These two FOVs are hugely handy for showing people interacting with the environment around them - environmental portraits. The lens also has good low-light performance, a good close-focus distance, and a bit of background defocus if the subject is close.
I have the GX85 configured with back-button focus. This means I hold down a button on the back when I want to engage AF, and hitting the shutter button doesn't engage it. This works brilliantly in practice, as I can focus once for a scene and then shoot without waiting for the camera to AF. I also have the viewfinder set to B&W with focus peaking enabled in red, making it easily visible. The histogram is really handy for knowing what's going on too.
For Priority 2, this setup works well - the tilt-screen is great, AF is super-fast, IBIS is impressive and very functional, and I find using it in real-world situations easy, fast, and very low-friction. It still gets some attention, but not excessively.
I find that for most shots I want to stop down to ensure everything is in focus. This is because my work is about the subjects experiencing the location and the interactions going on. A nice portrait with a blurred-beyond-recognition background serves very little purpose, as it could have been shot anywhere at any time and therefore has no relevance. This lens can give a satisfying amount of background defocus for mid-shots if required, and especially for macro shots, which are occasionally relevant in an edit.
Here are a few grabs SOOC. In grading I would typically lower the shadows to the point where the contrast is consistent (assuming it makes sense for the scene - lots of these have haze, which you have to treat differently), and I would even out the levels of saturation. I'd also sharpen or soften images to even out the perceptual sharpness. Interestingly, most of these images have enough DR, and even elevated shadows, despite the camera not having a log profile. They provide a really solid foundation to grade from.
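The equivalent-FOV figures above are just stacked crop factors. A minimal sketch of that arithmetic (the helper function is hypothetical; the 2.0x Micro Four Thirds factor and the 10% 4K crop are the figures from the post):

```python
# Full-frame equivalent focal length = native focal length multiplied
# by the sensor crop factor and any additional in-camera crops.
def equivalent_focal(focal_mm: float, *crops: float) -> float:
    """Apply each crop factor in turn, rounded to one decimal place."""
    result = focal_mm
    for crop in crops:
        result *= crop
    return round(result, 1)

MFT_CROP = 2.0   # Micro Four Thirds sensor vs full frame
UHD_CROP = 1.1   # the GX85's ~10% extra crop in 4K mode

wide = equivalent_focal(14, MFT_CROP, UHD_CROP)        # 30.8mm equivalent
tele = equivalent_focal(14, MFT_CROP, UHD_CROP, 2.0)   # with 2x digital crop

print(f"4K mode: {wide}mm, with 2x digital crop: {tele}mm")
```

So a single pancake lens effectively covers both a wide environmental framing and a mid-telephoto framing, which is why it keeps the rig so small.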
    1 point
  19. Django

    Fx30

    So the DCI 4K in this update turns out to be fake: it just crops the image to fit 17:9. You don't gain any width like in a proper DCI mode. This is so lame, and not so different from the fake ZV-E1 cinematic mode. SMH Sony. Other weirdness:
    0 points