Leaderboard

Popular Content

Showing content with the highest reputation on 01/26/2022 in all areas

  1. The C70 is a great image. In the right setting, it definitely looks great. Arri beats it in skin texture, color detail in the shadows, overexposure handling, color separation, etc. I’ve shot very consistently with the Canon C300 Mk2 for the last 5 years. The C70 is such a close image. I’m sure the OG C300ii would feel very close to the Alexa as well. The DGO sensor is great and all, but the C300 Mk3 / C70 is also 5 years newer than the C300 Mk2. I’m honestly a bit underwhelmed if anything. It’s just so similar. Cleaner... but again, it’s 5 years newer. Things like false color, better autofocus, Long GOP, cleaner C-Log2 and 4K/60p are all things that impress me more than the DGO sensor. I’ve shot projects where I used the C70 and C300ii next to each other for interviews. For better or for worse, I very much doubt anyone could tell the difference, even when looking at the raw source material. The highlights are great on the C70, but anything pointed into the sun sucks. You get the clipping blob around really bright sources or the sun. Once you see it, you know what to look for and it is always there. The Pocket 6K probably does it worse, the Komodo a little better. Film handles it by far the best. I’ve said this before... but the Long GOP 4K codec on the C70 is so great. You get a cleaner, sharper image than the older generation Canon cameras and it looks really great. The files are also about a third of the size. I’m interested to see what effect the CRL files will have on the image. If the files are even a little sharper and just a bit noisier / more textured, that would be great.
    3 points
  2. BTM_Pix

    Canon EOS R5C

    In their promo video at the cued up point here, they dedicate a whole section of it to the "hey, switch to stills.... hey, let's do video... hey, back to stills". They make it appear to be seamless and pretty much instant, when the reality is they should have shown this between each switch. So whether you need it or not isn't the point for me, more that there's no need for such blatantly misleading bollocks from Canon in their promo video. Mind you, it's probably fixable in firmware as we know what they are like when it comes to timers, don't we ?
    3 points
  3. Andrew Reid

    Canon EOS R5C

    I don't like to be forced into this split personality way of working, really. Imagine if you're on location in some beautiful place and want to capture stills at the same time as video, quickly before the moment's gone. You can't ask nature or the streets to pause like a video game whilst you reboot into Video OS 2000. It's one of those things that you just can't justify being forced into. Even things like separate GUI or exposure settings between video and stills mode should be a matter of choice. The quickest way to shoot both is to have an all-purpose stills mode, with PSAM and then a video button. The Samsung NX1 got it right. Tap a button to instantly get a 16:9 frame in stills mode and to shoot video. No matter how technically interesting it is to have a Cinema EOS OS and a Stills EOS OS on one camera.... the benefits don't outweigh the number of missed shots there will be. If you want an identical shot in video mode and in stills mode, and nothing has changed in the light from one second to the next, why would you want to change exposure or have to set the same exposure twice for two modes? I could be snapping merrily away in stills mode, nailing the exposure and all other settings, then switch to video mode and it will be completely off because it had been set earlier for a completely different scene. Who wants that? Does the EOS R5 C actually offer all of that or is it a crippled subset? Assist tools, shutter angle, etc. have been on the GH5 since like 4 years ago. The S1H and Blackmagic have LUT support, and so on. So how about adding all that to the EOS R5 C in a normal way?... there is no need to hive it off into a Cinema EOS operating system that takes ages to boot up. This thing is targeted at them! The EOS R5 C is a hybrid cam! I completely accept it may be ok for you. I can only say that from my point of view it's a pretty big flaw!
    3 points
  4. They say "A journey of a thousand miles begins with a single step" and that describes exactly where I am with Virtual Production. The reason for setting off on this journey ? Curiosity in the process is one aspect - and is something I'm hoping will sustain when the expected big boulders block the route - but I also think VP is going to be a process that trickles down to lower budget projects faster than people might expect so it is something I'm keen to get involved in. What aspects of VP am I looking to learn about and get involved with ? In the first instance, it will be about simple static background replacement compositing but will move on to more "synthetic set" 3D territory both for narrative but also multi camera for virtual studio applications. There will also be a side aspect of creating 3D assets for the sets but that won't be happening until some comfort level of how to actually use them is achieved ! And the ultimate objective ? I'm not expecting to be producing The Mandalorian in my spare bedroom but I am hoping to end up in a position where a very low-fi version of the techniques used in it can be seen to be viable. I'm also hopeful that I can get a two camera virtual broadcast studio setup working for my upcoming YouTube crossover channel Shockface which is a vlog about a gangster who sells illicit thumbnails to haplessly addicted algorithm junkies. I'm fully expecting this journey to make regular stops at "Bewilderment", "Fuck Up", "Disillusionment" and "Poverty" but if you'd like to be an observer then keep an eye on this thread as it unfolds. Or more likely hurtles off the rails. Oh and from my initial research about the current state of play, there may well be a product or two that I develop along the way to use with it too...
    2 points
  5. 3 or 4 years ago I almost bit on this one here: https://nikonrumors.com/2018/06/30/this-camera-and-lens-collection-contains-1750-pieces-and-can-be-yours-for-65000.aspx/
    2 points
  6. If there are other collections and barrels out there that aren't exactly glass related, what about ours? ;) Inspired by this post from one of the dearest ones among us, and a big part of this place after all... Show us your Nerd ;-)
    1 point
  7. I'm not really familiar with the difference between motion tracking and capture (am now - I just looked it up!) but obviously I'm at the edge of my knowledge 🙂 One thing that stands out about the difference would be that the accuracy of motion tracking the camera/lens/focus ring would have to be spectacularly higher than for motion capture of an object in the frame. Unless the sensors for these things were placed very close to the camera, which would limit movement considerably. I guess we'll see though - @BTM_Pix usually delivers so I wouldn't be surprised to see a working prototype in a few days, and footage a few days after that!
    1 point
  8. Yes, there are a number of possibilities to do it differently with integrations of different bits and pieces. The advantage of the Vive for most people is that it is an off the shelf solution. Some of us have shelves with different things on though 😉 Not with my ancient steam powered Samsung it wouldn't ! But yeah, this is what I was referring to about how much shit you need to cobble together to reach the start line, whereas the new generations of smartphones have all of the components built in. Including the background removal, so you don't have to use chroma key. Not sure about the depth of support for it in Unity. Or, more importantly, the ratio of people that would be using it for VP versus those using Unreal Engine, which means less support material and user experience to help the journey along. Fuck that. This entire project is just a thinly veiled excuse to sneak a 4K ultra short throw laser projector into the house under the premise of essential R&D. Until then, it'll be one of those pop-up green screens that are more difficult to get back in their enclosure than a ship in a bottle !
    1 point
  9. Love this idea and I'm definitely down to contribute! I'll dig out my Z cam e1 from the $200 challenge and get a rig together.
    1 point
  10. This is a better video by far. Don't know much about this channel, maybe I should dive into his other stuff too? The point about having 100 megapixels for stills justifies the existence of the larger format just by itself, because there is a huge proportion of commercial photography and fine art that ends up printed very large and placed where viewers are just a few inches away. The absolute largest billboards tend to be very far away from people, up on a building, so you can get away with much lower resolutions for those. If you had the tech to put 100 megapixels into an APS-C sensor it would probably top out at ISO 400. Medium format at 100 megapixels can be cropped to full frame and you're still at 70 megapixels. A full frame camera at 100 megapixels probably won't be as clean in low light as a larger sensor at the same resolution. I can comfortably shoot ISO 12,800 on the GFX 100 if I forget about pixel peeping at 100%. The same image on a full frame 100 megapixel sensor would probably be much noisier with less DR and worse colour. It's by no means a perfect system for autofocus, because Fuji did the same as they did with the earlier XF lenses, where some are slow to focus with a big noisy motor and some have an internal stepping motor which focuses really quickly and silently like a modern lens should. Compare for instance the 45mm F2.8 GF to the 50mm F3.5 - the cheaper nifty fifty is much quicker to focus, especially in low light. But the GF lenses have absolutely insane optical performance and resolution, up there with a Cooke S4i. The main reason I got a GFX 100 was to use it with vintage lenses. These old Minoltas do NOT look like this on full frame... https://jonasraskphotography.com/2017/08/16/minolta-x-fujifilm/ I suppose I'll do my own YouTube rebuttal of DPReview dismissing medium format in such over-simplistic ways... Just a shame about the misinformation out there funnelling everyone into silly opinions. We want informed customers, otherwise the camera industry goes down the toilet.
    1 point
  11. The other thing about crap YouTubers is that they make the good ones look even better. It’s all about balance in the end. And opinions. But the only opinion that really counts is yours, to yourself, after you have personally tried something.
    1 point
  12. I've watched a number of tests comparing vNDs over the years and agree - the quality is limited regardless of budget. Also, cost isn't a predictor of performance either, with some mid-priced options out-performing higher priced options, often quite considerably.
    1 point
  13. TomTheDP

    Canon EOS R5C

    Could be interesting, but I've found the R5 dynamic range lacking even in RAW. The lack of NDs and the uselessness of 8K just make it an overall no for me. I'd take the C70 any day, especially considering it will soon be able to shoot RAW internally and hopefully externally as well. Not sure how the color on the R5 compares, but I recently tested the C70 against the ARRI Alexa and found it the closest match, color- and roll-off-wise, I have ever seen in a "cheaper" camera.
    1 point
  14. I can rent you some of my back yard for cheap. 🤔
    1 point
  15. PannySVHS

    2022 Weight Loss Challenge

    How about special categories, e.g. smallest lens to biggest body ratio, or the other way around. Two days ago I enjoyed a plastic 25mm on my GX85 for some glorious B&W. Not that this would be unusual by any means for this place and its people. :) @kye you won't see me sell my F3, my friend, nor the BMMCC. We need a special category for camera body to data rate ratio, Mbps per gram, giving 5D RAW shooters some space - Hallo Glenn @mercer 🙂 And the other way around, getting some F3 owners to finally use their gem and some internal 8bit goodness! 😊
    1 point
  16. Step 2 - Holy Kit

OK, so we've established that there is no way we can contemplate the financial, physical and personnel resources available to The Mandalorian, but we do need to look at just how much kit we need and how much it will cost to mimic the concept. From the front of our eyeball to the end of the background, we are going to need the following :

1. Camera
2. Lens
3. Tracking system for both camera and lens
4. Video capture device
5. Computer
6. Software
7. Video output device
8. Screen

No problem with 1 and 2 obviously, but things start to get more challenging from step 3 onwards.

In terms of the tracking system, we need to be able to synchronise the position of our physical camera in the real world with that of the synthetic camera inside the virtual 3D set. There are numerous ways to do this but at our level we are looking at re-purposing Vive Tracker units mounted on top of the camera. With the aid of a couple of base station units, this will transmit the physical position of the camera to the computer and, whilst there may be some scope to look at different options, this is a proven solution that initially at least will be the one for me to look at getting. Price wise, it is €600 for a tracker and two base stations, so I won't be getting the credit card out for these until I've got the initial setup working, which means in the initial stages it will be a locked off camera only !

For the lens control, there also needs to be synchronisation between the focus and zoom position of the physical lens on the physical camera and that of the virtual lens on the virtual camera. Again, the current norm is to use additional Vive Trackers attached to follow focus wheels, translating their movements to focus and zoom positions, so that will require an additional two of those at €200 each. Of course, these are also definitely in the "to be added at some point" category, as is the camera tracker. The other option here is lens encoders, which attach to the lens itself and transmit the physical position of the lens to the computer, which then uses a calibration process to map these to distance or focal length, but those are roughly €600 each so, well, not happening initially either.

Moving on to video capture, that is something that can range from €10 HDMI capture dongles up to BM's high end cards, but in the first instance, where we are just exploring the concepts, I'll use what I have around.

The computer is the thing that is giving me a fit of the vapours. My current late 2015 MacBook Pro isn't going to cut it in terms of running this stuff full bore and I'm looking to get one of the new fancy dan M1 Max ones anyway (just not till the mid part of the year), but I'm not sure if the support is there yet for the software. Again, just to get it going, I am going to see if I can run it with what I have for proof of concept and then defer the decision on whether I'm going to take it far enough to hasten the upgrade. What I'm most worried about is the support to fully utilise the new MacBooks not being there and having to get a Windows machine. Eek !

At least the software part is sorted, as obviously I'll be using Unreal Engine to do everything and, with it being free (for this application at least), there is no way of going cheaper !

The video output and screen aspects are ones that, again, can be deferred until or if such time as I want to become more serious than just tinkering around. At the serious end, the output will need an output card such as one of BM's and a display device such as a projector so that live in camera compositing can be done. At the conceptual end, the output can just be the computer screen for monitoring and a green screen, saving the compositing for later. To be honest, even if I can get my addled old brain to advance things to the stage that I want to use it, it's unlikely that the budget would stretch to the full on in camera compositing, so it will be green screen all the way !

OK, so the next step is to install Unreal Engine and see what I can get going with what I have to hand before putting any money into what might be a dead end/five minute wonder for me.
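To make the tracking requirement above a bit more concrete, here is a toy sketch (not from the thread) of the kind of data the camera/lens tracking link has to carry every frame: just the physical camera's position and rotation plus the normalised focus/zoom wheel values. A real Vive + Unreal setup would get the pose from SteamVR and feed it into the engine via Live Link rather than a hand-rolled UDP packet; the address, field names and numbers below are all made up for illustration.

```python
import json
import socket
import time

# Toy sketch only: in a real Vive Tracker + Unreal Engine setup the pose
# would come from SteamVR and enter the engine via Live Link. The address,
# field names and values here are placeholders for illustration.
ENGINE_ADDR = ("127.0.0.1", 9000)  # hypothetical listener inside the engine


def read_tracker_pose():
    """Stand-in for the tracker/encoders: camera position (metres), rotation
    (degrees) and normalised focus/zoom wheel positions."""
    return {
        "pos": [0.0, 1.6, -2.0],   # x, y, z of the physical camera
        "rot": [0.0, 15.0, 0.0],   # pitch, yaw, roll
        "focus": 0.42,             # 0..1 focus wheel position
        "zoom": 0.0,               # 0..1 zoom wheel position
    }


def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        pose = read_tracker_pose()
        # One small packet per update is essentially all the virtual camera
        # needs to stay locked to the physical one.
        sock.sendto(json.dumps(pose).encode("utf-8"), ENGINE_ADDR)
        time.sleep(1 / 50)  # ~50 updates per second


if __name__ == "__main__":
    main()
```

The point is simply that the data itself is tiny; the cost of the Vive kit is in measuring those numbers accurately and with low latency, not in moving them around.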
    1 point
  17. stefanocps

    sony zv1 or alternative?

    I am sorry for the little friction that has been going on. As I said, I think forums are a really valuable option for searching for information, apart from Google and YouTube, because direct feedback from users is gold. Even Kye's focus on an interchangeable lens camera was important, as I did some research to see if there could be one as small as I would like, and that helped me make up my mind about the compact choice. Even what webrunner said... I tried to understand more, and it pushed me towards a deeper search on action cameras, but I just could not understand his suggestion. What I have pointed out is that sometimes in forums people are quickly and easily put into generic boxes, making the user seem just like any other one. I know that nowadays lots of people just wake up and ask without making a little effort, ignoring that the search is one of the nicest (and most instructive) parts of the process... but not all people are the same, even if it sometimes might look like it. It is easy to get carried away by this frustrating feeling in a forum, given the hundreds of contributions that are the same... but it is also important to always be aware of who you are talking to, as a person might seem like many others, but they are not...
    1 point
  18. kaylee

    Canon EOS R5C

    this is still a discussion on the hybrid camera with the 8 second delay right
    1 point
  19. Do you understand that variable NDs can be used with any brand of lenses or cameras?
    1 point
  20. Kai Wong is a crazy good photographer. One of my favorites.
    1 point
  21. Step 1 - The Pros And Not Pros So, to look at sort of what we want to end up with conceptually, this is the most obviously well discussed example of VP at the moment. Fundamentally, it's about seamlessly blending real foreground people/objects with synthetic backgrounds, and it's a new-ish take on the (very) old Hollywood staple of rear projection as used with varying degrees of success in the past. The difference now is that productions like The Mandalorian are using dynamically generated backgrounds created in real time to synchronise with the camera movement of the real camera that is filming the foreground content. Even just for a simple background replacement like in the driving examples from the old films, the new technique has huge advantages in terms of being able to sell the effect through changeable lighting and optical effect simulation on the background etc. Beyond that, though, the scope for the generation and re-configuring of complex background sets in real time is pretty amazing for creating series such as The Mandalorian. OK, so this is all well and good for a series that has a budget of $8.5m an episode, shot in an aircraft hangar sized building with more OLED screens than a Samsung production run and a small army of people working on it, but what's in it for the rest of us ? Well, we are now at the point where it scales down and, whilst not exactly free, it's becoming far more achievable at a lower budget level as we can see in this example. And it is that sort of level that I initially want to head towards, and possibly spot some opportunities to add some more enhancements of my own in terms of camera integration. I'm pretty much going from a standing start with this stuff so there won't be anything exactly revolutionary in this journey as I'll largely be leaning pretty heavily on the findings of the many people who are way further down the track. In tomorrow's thrilling instalment, I'll be getting a rough kit list together and then having a long lie down after I've totted up the cost of it and have to re-calibrate my "at a budget level" definition.
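As a side note, a tiny toy example (not from the thread, numbers invented) of why the background has to be re-rendered from the tracked camera position rather than being a fixed plate: a foreground subject and a distant background element shift by different amounts on the sensor when the camera moves, and only a per-frame render from the real camera's pose reproduces that parallax.

```python
# Toy pinhole-camera parallax illustration; the 35mm focal length and the
# distances are arbitrary assumptions, purely to show the effect.
FOCAL_LENGTH_MM = 35.0


def project_x(point_x_m, point_z_m, camera_x_m):
    """Horizontal position on the sensor (mm) of a point at lateral offset
    point_x_m and depth point_z_m, seen by a pinhole camera shifted sideways
    by camera_x_m metres and looking straight down the +z axis."""
    return FOCAL_LENGTH_MM * (point_x_m - camera_x_m) / point_z_m


if __name__ == "__main__":
    subject = (0.0, 2.0)      # actor 2 m from the camera
    background = (0.0, 10.0)  # background set element 10 m away

    for camera_x in (0.0, 0.5):  # camera slides half a metre to the right
        s = project_x(*subject, camera_x)
        b = project_x(*background, camera_x)
        print(f"camera at x={camera_x:.1f} m: "
              f"subject {s:+.2f} mm, background {b:+.2f} mm on sensor")
    # The subject shifts by about -8.75 mm but the background only by about
    # -1.75 mm, so a static rear-projection plate drifts out of alignment as
    # soon as the camera moves; re-rendering the background from the tracked
    # camera pose every frame is what keeps the composite believable.
```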
    1 point
  22. Last year my friends and I put together a show which we filmed in virtual reality. We tried to make it as much like actual filmmaking as possible, except we were 1000 miles apart. That didn't turn out to be as possible as we'd hoped, due to bugs and limited features. The engine is entirely built from scratch using Unity, so I was making patches every 10 minutes just to get it to work. Most of the 3D assets were made for the show; maybe 10% or so were found free on the web. Definitely one of the more difficult aspects was building a pipeline entirely remotely for scripts, storyboards, assets, recordings, and then at the very end the relatively simple task of collaborative remote editing with video and audio files. If you're interested you can watch it here (mature audiences) https://www.youtube.com/playlist?list=PLocS26VJsm_2dYcuwrN36ZgrOVtx49urd I've been toying with the idea of going solo on my own virtual production as well.
    1 point
  23. I have always been curious to check out the Unreal Engine. Emulating a lens, a camera, lights - it's all so advanced now in game engines. I will be interested to see what you discover, software and hardware wise. There are big studios in Australia now that really push the boat out and make all sorts. I dare say they can sell a lot of their virtual assets to other studios as well! Virtual production asset marketplace, anyone?
    1 point
  24. C70 in low light featuring rich tones and subtle gradations: RAW in this camera would be amazing, but you don't really need it. The camera is already reaching C300/C500 RAW quality with XF-AVC:
    1 point
  25. This thread started with a question - buy a Varicam or use your BMMCC. @PannySVHS - that one is easy... just use your BMMCC. Then you elaborate about all the lovely equipment you have that sits unused, which (I think) negates the original question, and proposes a new one. By saying you own lots of gear, don't use it, and still want more, you're saying that you attribute some value to simply owning equipment, even if you don't use it. This means you are also a collector, rather than exclusively a cinematographer. This is a very different mindset, and it's a different kind of value. While one person thinks that owning every piece of tableware from some brand is a worthwhile goal, the next person thinks of it as tacky and a waste of time and money. The value is in the eye of the beholder and only you can answer the Varicam question for yourself. Your question mixes the two however. It seems you derive value from buying equipment that performs well, even if you don't use it, so that's also something for you to evaluate.

In terms of my thoughts about camera stuff:

- Almost any camera can look great, but the better the camera and the better the cinematographer, the easier it is to get it looking great
- The worse the camera the less latitude it will give you in the grade... bad cameras won't have enough latitude in the image to even point them at anything difficult, good cameras will have enough latitude to point them at difficult scenes or to point them at normal scenes and have a bit of wiggle room, and great cameras give you lots of room to move even in difficult situations
- @TomTheDP was kind enough to share some S1 files with me and I felt like they had more latitude than the GH5 files, but ultimately they felt the same nearing their limits, and in shots that were quite underexposed I felt like I was arguing with strange colour casts and noise and compression problems in the footage in the same way as I would with underexposed GH5 footage, so the flavour is the same.
- The S1 and GH5 files I've graded feel awkward and deformed in comparison to grading RAW or Prores from the OG BMPCC or BMMCC, which respond exactly the way you'd expect them to. It's hard to explain but it feels like you've got some setting horribly wrong in Resolve when you work with GH5 or S1 files by comparison. I haven't dealt with BM footage shot as poorly as the GH5/S1 footage I've worked with, but even when both brands are shot well, the BM ones feel fundamentally better

TL;DR: understand what you're actually trying to achieve, then work out the best way to get there.
    1 point
  26. A bit of a thread resurrection, but I don't mind. I'm actually doing these lessons these days as well. I learned Premiere by trying and failing and really wish Adobe had provided something similar back then. It's easy to miss some key feature when learning a tool on your own.
    1 point
  27. 1 point
  28. Seems like most people don't visit the sub forums, you might get more responses in the main forum. Without having used either camera, and assuming you're sure it doesn't have anything wrong with it at that price, here's my two cents. It would all come down to how controlled a setting you typically shoot in, and how big your crew is. That's a large camera, meaning support equipment also needs to be sturdier - tripod, gimbal, crane, car mounts, etc. It'll go through more batteries and storage. In a studio it may not be a problem, but on a 10 hour location shoot that's a lot of extra stuff to bring, compared to a BMMCC. Those 4.6K sensors aren't known for being great in low light, so you'll also perhaps need more/bigger lights. So if it already matches your production style, then that's a great deal to jump on. However, if you're one-man-banding it on location, consider how it might affect your workflow.
    1 point
  29. Amazeballs

    Sony A7S III

    Solved that again myself 😅 You need to format the card as exFAT on a PC first. That's because my card was only 32GB, and cards that size come formatted as FAT32 by default. You probably won't need that on a 128GB or 64GB card, but if it happens, that formatting trick might help you as well.
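For anyone who needs to do the reformat on Windows, a minimal sketch is below. It assumes Windows 10/11 with the built-in Format-Volume PowerShell cmdlet; the drive letter and volume label are placeholders, so double-check the letter first, because formatting erases everything on the card.

```python
import subprocess

# Sketch only: reformat an SD card as exFAT on Windows via the built-in
# Format-Volume PowerShell cmdlet. DRIVE_LETTER is a placeholder - make
# absolutely sure it points at the card, as formatting wipes it.
DRIVE_LETTER = "E"  # <-- assumption: replace with your card's drive letter

subprocess.run(
    [
        "powershell",
        "-Command",
        f"Format-Volume -DriveLetter {DRIVE_LETTER} "
        "-FileSystem exFAT -NewFileSystemLabel SDCARD",
    ],
    check=True,
)
```

On a Mac, choosing "ExFAT" in Disk Utility's Erase dialog does the same job without any scripting.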
    1 point