Everything posted by KnightsFan

  1. This. I love VR for entertainment, I have high hopes for it as a productivity tool, and it's a huge benefit in many training applications. But the Metaverse concept--particularly with Facebook behind it--is a terrible idea.
  2. The main conceptual thing was that I (stupidly) left the rotation of the XR controller on in the first one, whereas with camera projection the rotation should be ignored. I actually don't think it's that noticeable a change, since I didn't rotate the camera much in the first test; it's only noticeable with canted angles. The other thing I didn't do correctly in the first test was that I parented the camera to a tracked point but forgot to account for the offset, so the camera pivoted around the wrong point. That is really the main thing that is noticeably wrong in the first one. Additionally, I measured out the size of the objects so the mountains are scaled correctly, and I measured the starting point of the camera's sensor as accurately as I could instead of just "well, this looks about right." Similarly, I actually aligned the virtual light source and my real light. Nothing fancy, I just eyeballed it, but I didn't even do that for the first version. That's all just a matter of putting in the effort to make sure everything was aligned. The better tracking is because I used a Quest instead of a Rift headset. The Quest has cameras on the bottom, so it has an easier time tracking an object held at chest height. I have the headset tilted up on my head to record these, so the extra downward FOV helps considerably. Nothing really special there, just better hardware. There were a few blips where it lost tracking--if I were doing this on a shoot I'd use HTC Vive trackers, but they are honestly rather annoying to set up, despite being significantly better for this sort of thing. Also, this time I put some handles on my camera, and the added stability means that the delay between tracking and rendering is less noticeable. The delay is a result of the XR controller tracking position at only 30 Hz, plus the HDMI delay from PC to TV, which is fairly large on my model. I can reduce this delay by using a Vive, which has 250 Hz tracking, and a low latency projector (or screen, but for live action I need more image area). I think in the best case I might actually get sub-frame delay at 24 fps. Some gaming projectors claim <5 ms latency. (A sketch of the two camera fixes follows below.)
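For anyone wanting to replicate the camera-follow part, here's a minimal sketch of the two fixes described above, assuming Unity's XR input API. The component name and offset values are hypothetical; measure the offset from your own rig with a tape measure.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Drives the virtual camera from a tracked controller. The controller's
// rotation is used ONLY to rotate the mount offset into world space, so
// the camera pivots around the sensor rather than the tracked point. The
// virtual camera's own rotation is deliberately left alone: with camera
// projection it should be ignored.
public class TrackedCameraFollow : MonoBehaviour
{
    // Offset from the controller's tracked origin to the camera's sensor,
    // in meters (placeholder values -- measure your own rig).
    public Vector3 sensorOffset = new Vector3(0f, -0.08f, 0.05f);

    void Update()
    {
        InputDevice controller =
            InputDevices.GetDeviceAtXRNode(XRNode.RightHand);

        if (controller.TryGetFeatureValue(
                CommonUsages.devicePosition, out Vector3 pos) &&
            controller.TryGetFeatureValue(
                CommonUsages.deviceRotation, out Quaternion rot))
        {
            // Position only, offset-compensated; no rotation applied
            // to the camera itself.
            transform.position = pos + rot * sensorOffset;
        }
        // Latency budget: 30 Hz controller tracking is up to ~33 ms stale,
        // plus display delay; sub-frame at 24 fps needs the total under
        // ~41.7 ms.
    }
}
```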
  3. ...and today I cleaned up the effect quite a bit. This is way too much fun! The background is a simple terrain with a solid color shader, lit by a single directional light and a solid color sky (a minimal setup sketch follows below). With it this far out of focus, it looks shockingly good considering it's just a low poly mesh colored brown! https://s3.amazonaws.com/gobuildstuff.com/videos/virtualBGTest2.mp4 It's a 65" TV behind this one. You can vaguely see my reflection in the screen haha.
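For reference, the whole backdrop scene really is only a few lines. This is a sketch assuming Unity's built-in render pipeline; the colors, light angle, and field name are placeholders, eyeballed against the real light just like the post describes.

```csharp
using UnityEngine;

// Minimal backdrop scene: solid color sky, one directional light, and a
// solid color material on a low poly terrain mesh.
public class BackdropSetup : MonoBehaviour
{
    public MeshRenderer terrain; // assign the low poly mesh in the Inspector

    void Start()
    {
        // Solid color "sky": just the camera's clear color, no skybox.
        // (Assumes a camera tagged MainCamera exists in the scene.)
        Camera.main.clearFlags = CameraClearFlags.SolidColor;
        Camera.main.backgroundColor = new Color(0.55f, 0.65f, 0.80f);

        // Single directional light, roughly matched to the real key.
        var sun = new GameObject("Sun").AddComponent<Light>();
        sun.type = LightType.Directional;
        sun.transform.rotation = Quaternion.Euler(50f, -30f, 0f);

        // Flat brown material -- far out of focus, it reads as mountains.
        terrain.material = new Material(Shader.Find("Standard"))
        {
            color = new Color(0.45f, 0.33f, 0.22f)
        };
    }
}
```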
  4. Thanks to this topic, I dug a little further into virtual backgrounds myself. I used this library from GitHub, and made this today. It's tracked using an Oculus controller velcro'd to the top of my camera. "Calibration" was a tape measure and some guesstimation, and the model isn't quite the right scale for the figure. No attempt to match focus, color, or lighting. So not terribly accurate, but imo it went shockingly well for a Sunday evening proof of concept. https://s3.amazonaws.com/gobuildstuff.com/videos/virtualBGTest.mp4
  5. It's still easy to spot an object sliding around in the frame, especially if it's supposed to be fixed to the ground. And motion capture usually means capturing the animation of a person, i.e. how their arms, legs, and face move, and we're especially adept at noticing when a fellow human moves incorrectly. So for foreground character animation, the accuracy also has to be really high. I believe that even with high end motion capture systems, an animator will clean up the data afterwards for high fidelity workflows like you would see in blockbusters or AAA video games. The repo I posted is a machine learning algorithm that gets full body human animation from a video of a person, as opposed to the traditional method of a person wearing a suit with markers, tracked by an array of cameras and fed through a hand-written algorithm. It has shockingly good results, and that method will only get better with time--as with every other machine learning application! Machine learning is the future of animation imo. For motion tracking, Vive is exceptional. I've used a lot of commercial VR headsets, and Vive is top of the pack for tracking--much better than the inside-out tracking (using built-in cameras instead of external sensors) of even Oculus/Meta's headsets. I don't know what the distance limit is; I've got my base stations about 10 ft apart. For me and my room, the limiting factor for a homemade virtual set is the size of the backdrop, not the tracking area.
  6. You can't get accurate position data from an accelerometer; the errors from integrating twice are too high (see the quick simulation below). It might depend on the phone, but orientation data is, in my experience, highly unreliable. I send orientation from my phone to PC for real world manipulation of 3D props, and it's nowhere near accurate enough for camera tracking. It's great for placing virtual set dressings, where small errors can actually make things look more natural. There's a free Android app called Sensors Multitool where you can read some of the data: if I set my phone flat on a table, the orientation data "wobbles" by up to 4 degrees. But in general, smartphones are woefully underutilized in so many ways. With a decent router, you can set up a network and use phones as mini computers to run custom scripts or apps anywhere on set--virtual or real production. Two-way radios, reference pictures, IP cams, audio recorders, backup hard drives, note taking, all for a couple of bucks secondhand on eBay.
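To make the double-integration point concrete, here's a quick back-of-the-envelope simulation in plain C#. The bias value is illustrative, not a measurement from any particular phone.

```csharp
using System;

// Why accelerometer position drifts: a tiny constant bias, integrated
// twice, grows quadratically with time.
class AccelDrift
{
    static void Main()
    {
        double bias = 0.02;  // m/s^2 accelerometer bias (illustrative)
        double dt = 0.01;    // 100 Hz sample rate
        double v = 0, x = 0;

        for (double t = 0; t <= 10.0; t += dt)
        {
            v += bias * dt;  // first integration: velocity error grows
            x += v * dt;     // second integration: position error compounds
        }

        // Analytically x = 0.5 * bias * t^2, i.e. ~1 m after 10 s.
        Console.WriteLine($"Position error after 10 s: {x:F2} m");
    }
}
```

Even a tiny, perfectly constant bias puts you about a meter off within ten seconds; real sensor noise and bias drift make it worse. Useless for camera tracking, fine for rough set dressing.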
  7. Sorry, I made a typo: I was talking about motion capture in my response to Jay, not motion tracking. The repo I linked to can give pretty good results for full body motion capture. Resolution is still important there, but not in the sense of background objects moving. As with any motion capture system, there will be some level of manual cleanup required. For real time motion tracking, the solution is typically a dedicated tracking system like Vive trackers, or a depth-sensing camera--not a high resolution camera for VFX/post style tracking markers.
  8. How accurate does it need to be? There are open source repos on GitHub for real time, full body motion tracking based on a single camera which are surprisingly accurate: https://github.com/digital-standard/ThreeDPoseUnityBarracuda. It's a pretty significant jump in price up to an actual mocap suit, even a "cheap" one. I wonder how accurate or cost effective it would be to instead mount one of these on your camera: https://www.stereolabs.com/zed/ I keep almost buying a Zed camera to mess around with, but I've never had a project to use it in. Though if you already have a Vive ecosystem, a single tracker isn't a ton more money. You can make your own focus ring rotation sensor with a potentiometer and an Arduino for next to nothing (see the sketch below). (As you might be able to tell, I'm all for spending way too much effort on things that have commercial solutions.) One piece that I haven't researched at all is how to actually project the image: I don't know what type of projector, what type of screen, how bad the color cast will be, or what all of that would cost. An LCD would be the cleanest way to go, but at any real size that's a hefty investment. I once did a test with a background LCD screen using Lego stop motion so that the screen would be a sufficient size, but that was long before I had any idea what I was doing. Enjoying the write-up so far!
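As an example of how cheap the focus sensor side is, here's a sketch of the PC half: the Arduino side is just a potentiometer on an analog pin printing values over serial (essentially Serial.println(analogRead(A0));), and this program maps the readings to a focus distance for the virtual camera. The port name, baud rate, and 0.5-10 m range are assumptions; calibrate against the lens's actual focus scale.

```csharp
using System;
using System.IO.Ports;

// Reads 0-1023 potentiometer values that an Arduino prints over serial
// and maps them linearly to a focus distance in meters.
class FocusRingReader
{
    static void Main()
    {
        using var port = new SerialPort("COM3", 115200); // assumed port
        port.Open();

        while (true)
        {
            int raw = int.Parse(port.ReadLine().Trim());

            float t = raw / 1023f;                        // normalize pot travel
            float focusMeters = 0.5f + t * (10f - 0.5f);  // assumed 0.5-10 m range
            Console.WriteLine($"Focus: {focusMeters:F2} m");
        }
    }
}
```

The whole sensor is a few dollars in parts, which is why it's such a tempting alternative to commercial lens encoders.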
  9. Unity is very dependent on coding, but it is designed to be easy to code in. There are existing packages on the asset store (both paid and free) that can get you pretty far, but if you aren't planning to write code, then a lot of Unity's benefits are lost. For example, you can make a 3D animation without writing any code at all, but at that point just use Blender or Unreal. The speed with which you can write and iterate code in Unity is its selling point. Edit: Also, Unity uses C#, so that would be the language to learn.
  10. Thanks! I've been building games in Unity and Blender for many years, so there was a lot of build up to making this particular project. All that's to say I'm interested in virtual production, and will definitely be interested to see what BTM or anyone else comes up with in this area.
  11. Last year my friends and I put together a show which we filmed in virtual reality. We tried to make it as much like actual filmmaking as possible, except that we were 1,000 miles apart. That didn't turn out to be as possible as we'd hoped, due to bugs and limited features. The engine was built entirely from scratch in Unity, so I was making patches every 10 minutes just to get it to work. Most of the 3D assets were made for the show; maybe 10% or so were found free on the web. Definitely one of the more difficult aspects was building a pipeline entirely remotely for scripts, storyboards, assets, and recordings, and then at the very end the relatively simple task of collaborative remote editing with video and audio files. If you're interested you can watch it here (mature audiences): https://www.youtube.com/playlist?list=PLocS26VJsm_2dYcuwrN36ZgrOVtx49urd I've been toying with the idea of going solo on my own virtual production as well.
  12. @kye With that software, does Resolve have to be in a specific layout? For example, if I undock my curves window, can it still find the curves and adjust them, or does it rely on knowing the exact positions of various buttons on screen? Does it work with an arbitrarily sized window, or only full screen?
  13. I buy used whenever I can: cameras, lenses, microphones, cars, clothes. Mainly to do my part to reduce waste. I sell or give away anything I've upgraded away from, unless it's absolutely unsalvageable. So a nice perk is that I often don't lose much value. In fact, on 2 of the 3 cameras I've sold, I actually turned a profit! In both cases I bought and sold on eBay, which is what I almost always use.
  14. Never heard of it. It's a good idea; there are a number of ad hoc locking connectors out there, including for USB, and it would be nice if they could standardize and gain traction. The other side, though, is that software standardization for network AV isn't there yet. NDI has some promise. Yeah, the difficulty of finding an affordable 5.1 receiver for my home theater shows the limitations of digital longevity. The only way these days is HDMI passthrough, which means finding a receiver that supports all the image formats you want. It's ridiculous that every time you want to upgrade your image you need a new sound system. I think anyone serious about audio should stick with analog mics for the reason you mention. Digital mics are a little bit like fixed lens cameras. However, I do think there is a user tier where they make sense and will continue to grow market share. Right, but you're talking orders of magnitude more expensive than consumer gear. If you're buying Rode Wireless GO-tier--which is near the quality tier I expect for this Tascam unit--it will be much more reliable to record at the TX. I trust a $150 recorder; I don't trust a $150 wireless set. In terms of what's technologically possible, recording at the TX in extremely high quality with very high reliability is quite affordable. I can extrapolate quite a bit in terms of how early digitization would help the consumer market: if my shotgun mic recorded onto an SD card and sent a wireless signal to a mixer for monitoring, suddenly I wouldn't need nearly as good a boom pole or booming technique to avoid cable rattle. I'm not saying that's a good pro workflow, but the benefits are pretty tangible for the rest of us.
  15. The problem with USB audio for widespread use is that it's limited to short cables. There are workarounds, but it gets complicated once you get to 5+ meters, whereas XLR cables can easily be 10x that length. So XLR is better unless you know that the DAC is always going to be right next to the recorder. As far as non-USB digital audio goes, it would be interesting if we saw some uptake of RJ45 Ethernet ports on cameras, along with the necessary standards to carry video and/or audio. Those have (flimsy) locking connectors and longer run distances, plus there's a PoE standard in place. But as of right now, the only universal standard for digital audio in the consumer world is USB. AES would help pro audio, but would still be useless for most consumers. @mkabi I think we're heading in the direction of putting the DAC (digitizer) in the mic housing like you are suggesting. Certainly the consumer world is: the Sennheiser XS Lav Mobile and the Rode VideoMic NTG are examples, and then there are a couple of bodypacks like the Zoom F2 or Deity BP-TRX that are moving there for the wireless world. I think putting the DAC in the mic is especially beneficial for consumer products, because it means that the audio company is solely responsible for the audio signal chain. However, for the foreseeable future, if there is a wire between the mic and recorder, then analog via XLR to a DAC inside the recorder has no downsides compared to digitizing at the other end. If we're talking wireless, it's a different story--it would be better to both digitize and record on the TX side, but that's a legal minefield. Also, the reason lavs get away with thin cables and 3.5mm jacks is that they have short runs. You can run unbalanced audio from your shirt to your belt, not 50m across a stage.
  16. The NPH scenes don't look like CG characters to me.
  17. Sony A7 IV: I imagine when it says 4k30 oversampled from 33 MP, it means that 4k60 is a crop mode--assuming the rumor is from a reliable source in the first place. Although if Sony does make it a 4k30 max camera now, it will be an interesting data point in Canon's resurgence of hybrid mirrorless cams.
  18. In addition to the new Z Cam model, Accsoon announced their own product, which seems better designed physically: rather than needing to mount it somewhere, it is the phone holder. It has an NPF battery sled--I'd assume that means it'll charge your phone simultaneously--plus a cold shoe on top. I also see some nice locating pin holes on the bottom. Z Cam, though, has the benefit of controlling compatible cameras with a USB cable. It would be nice to combine them and get the physical design of Accsoon with camera control. It would also be cool (and not that surprising!) if Accsoon made a version that could receive a wireless signal from their CineEye transmitter in addition to HDMI.
  19. I think this is great news. The more we can do with generic devices like smartphones that can run 3rd party software, the more creative options open up to us. Some purists are married to using large sensors to get "real" depth of field control, but I say that once it can be simulated to the point of not being able to tell the difference, we'll all be free to use smaller, lighter gear with fewer expensive accessories. Will this camera be indistinguishable from a full frame camera with a 50mm f1.4? Probably not--but it's getting excitingly close! I'm not an Apple user, as I don't like their closed ecosystem, but good for them for pushing a little farther into pro imaging quality with iPhones. Yeah, since apparently it's physically impossible to make a ProRes encoder that is smaller than a Ninja V.
  20. Not sure how you came to that conclusion; afaik neither Video Hummus nor I mentioned anything about what look we preferred, except for the statement (not by me) that the last 0.02% of quality wasn't that important. No, that's not what happened at all. You said you wanted ProRes in a small camera via a bolt-on accessory; I said it makes more sense to want ProRes in that small camera via firmware.
  21. @kye I think you should be clearer about when you are talking about properties of the codec vs. properties of the camera. Not having a GH5, I'll take your word that the internal recording is oversharpened vs. the uncompressed output. However, Z Cam (for example) provides sharpening options that apply to whichever codec you're using, all the way down to None, which is genuinely blurry compared to any other camera I've used, including Blackmagic cDNG or ProRes. These are minor details in the firmware, from an implementation perspective, that you are suggesting we solve with huge amounts of extra hardware. I'm saying let's vocalize our desire for these options to be exposed as internal settings, rather than vocalize our desire for a separate company like Atomos to come up with their interpretation of what good quality means. The difference is that the hypothetical we're talking about--good internal recording options--is entirely possible via simple firmware changes. The reason they aren't included is to sell us something else, whether it's a higher end Panasonic camera, or Red locking out compressed raw to sell you their brand.
  22. I think the question is whether it's still called "raw" if it's been scaled, and, as a separate question, to what degree scaling the individual color arrays before debayering is better or worse.
  23. But what you're saying in this topic is that you'd like a 3rd party to make an HDMI recorder, putting you in their hands for the recording format instead of the camera's--unless you're envisioning the external recorder space getting large enough that there is legitimate competition with a variety of different options. Why not wish for camera companies to allow more flexibility in choosing codecs, and higher quality, instead of adding a whole extra layer of compatibility, accessories, and specs? Remember when Panasonic added 10 bit to the G9 years later via firmware? Remember when hackers added compressed raw recording to the 5D3? Or when Blackmagic made a pocket camera that shot ProRes and raw at a fraction of the price of contemporary DSLRs, without fans and heatsinks? My sarcasm in this topic is because, in my opinion, wishing for small external recorders is a roundabout solution that would also encourage the implementation of bad internal video. I'd rather pay $100 for firmware that unlocks pro video formats than $600 plus batteries for a recorder, even if it's the size of a battery grip. This is the exact reason I hope for the Octopus cinema camera to turn into a real product, and why I'd like to see cameras running vanilla Android or Linux OSes. Getting on my soapbox here, but the migration away from standardized computers (both hardware and software) to dozens of different devices running proprietary firmware is a harmful trend.
  24. So you're saying that if I transcode a ProRes file to H.264, you will 100% be able to tell which is which in a blind test?
  25. GoPro makes a camera that records 4k60 for hours on end while enclosed in a waterproof case with no external heatsinks, at under half the weight of the Ninja V. I'm sure it's quite possible to build a screenless recorder smaller than the Ninja V.