Everything posted by KnightsFan
-
Deciding closest modern camera to Digital Bolex look
KnightsFan replied to Andrew Reid's topic in Cameras
Well yes, they spent the last few years making an 8K full frame camera, so it's implicit to me that going for 16mm size was for economic reasons and that their goal is larger format. Nothing wrong with that, in fact, many camera companies started with small sensor, simple cameras (Z Cam E1, BMCC 2.5k) before earning the credibility to sell more expensive, feature-rich models (F6 Pro, BM 12k). I read a comment years ago that the 8K LF model was going to be $13k or something. Presumably they realized that their 8K LF dream might not work out on a first model. (I've been checking on Octopus every few months since the announcement in 2019, so at one point or another I've seen most of their posts and comments)
-
Deciding closest modern camera to Digital Bolex look
KnightsFan replied to Andrew Reid's topic in Cameras
I'm not sure that giving a middle finger is a fair assessment. More like they are expecting/hoping for 3rd party solutions. I expect they'll have a fairly open API. Although--from the early pics, there's a lack of mounting points, and having the screen on the back makes a modular approach difficult. But I actually agree with what Octopus said. I'd rather bolt any decent audio interface onto a camera than rely on a small manufacturer trying to reinvent the wheel and make a quality audio solution. Especially since almost all Zoom recorders are interfaces, so you can have proper recorder-style ergonomics rather than (for example) trying to record on an Audient Evo or something. I guess what I'm saying is that IF someone makes a 3rd party TC solution, then I'm fine with the modular USB approach, since other manufacturers cover the all-in-one option. I hope Octopus finds success. I love what they're doing. I can't see myself going for a small sensor, not with the lenses I have, but maybe if the 16mm camera is successful they'll make that full frame camera they prototyped as well! -
I didn't realize the M4 has LTC. Out of all their products, that's a weird one to have it on when not even the F3 has it. I do like their M-series of recorders in theory, though they aren't quite what I would look for in my uses. True, and a Blue + One is the minimum viable timecode setup for audio/video at this time. Honestly, with how cheap the Blue is, I would sink ~$200 total into a single proprietary device to sync 6 other devices (e.g. camera(s), boom mic, a few DR-10L Pros). That's a convenient setup, plus an acceptable loss if the ecosystem disappears in 5 years. But when you have to add a One as well, the lock-in gets more expensive and risky. Hopefully Ultrasync is banging on Sony, Canon, and Panasonic's doors to add support.
-
I guess this TC approach is technically better than nothing, but I do not like being vendor locked into a closed, proprietary system. BTM_Pix's post perfectly encapsulates why. LTC is simple, works well, and has open documentation, so in the absolute worst case, you can fix problems yourself. Not so with UltraSync Blue. Buying a dongle for TC is fine, but if Zoom made a dongle that takes normal LTC via a BNC or 3.5mm, that would be sooooo much better.
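To illustrate the "open documentation" point: the LTC frame layout is public in SMPTE 12M, so anyone can write or debug a decoder themselves. Here's a simplified, illustrative Python sketch of how the time fields pack into an 80-bit LTC frame; it omits the user bits, the drop-frame and other flags, and the biphase-mark line coding, so treat it as a sketch of the idea rather than a complete implementation.

```python
# Simplified sketch of an SMPTE 12M LTC frame: BCD time fields inside an
# 80-bit frame that ends with a fixed sync word. User bits, flags, and
# the biphase-mark encoding are omitted.
def ltc_frame_bits(hh, mm, ss, ff):
    bits = [0] * 80

    def put_bcd(value, units_pos, units_len, tens_pos, tens_len):
        units, tens = value % 10, value // 10
        for i in range(units_len):  # least significant bit first
            bits[units_pos + i] = (units >> i) & 1
        for i in range(tens_len):
            bits[tens_pos + i] = (tens >> i) & 1

    put_bcd(ff, 0, 4, 8, 2)     # frame number
    put_bcd(ss, 16, 4, 24, 3)   # seconds
    put_bcd(mm, 32, 4, 40, 3)   # minutes
    put_bcd(hh, 48, 4, 56, 2)   # hours
    for i, b in enumerate("0011111111111101"):  # sync word, bits 64-79
        bits[64 + i] = int(b)
    return bits

print(ltc_frame_bits(1, 23, 45, 12))  # 01:23:45:12
```

That's the whole secret sauce: a well-known bit layout you can verify with a scope and a datasheet, versus a proprietary Bluetooth protocol you can't inspect at all.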
-
I love seeing the reels that everyone has posted! 2023 was a fantastic year, just not for film or video! It was the first year since 2012 in which I made zero video projects. My hope is to spend 2024 getting back to narrative films. The difficulty is finding people with the time and resources to make movies for fun.

I did get to take some fun photos. I've used an A7rII for photos the last couple years. My primary lens is a 28mm Nikon AI, which it has been since I bought it 6 or 7 years ago--talk about good return on investment! I attached a couple pictures I took this year. The snowy one is a Canon 24-105, and the others are the Nikon 28. Importantly, my photography kit weighs just 2.28lbs/1.03kg with filter and lens cap, and fits in a pouch on my daypack. Most of my adventures include a lot of hiking, and some rock climbing, so I like a camera that is A) small and light, B) not stored in front of me, and C) retrievable without removing my pack.

I did also side/down-grade from a Zoom F4 to an F3 for audio. It's unlikely I'll work on proper productions any time soon. The F3's size and 32 bit are great for recording effects, and for production use I'll velcro it to the boom pole next to an NPF battery sled (and then hope whoever is using it while I run camera aims it well). It's overall a better fit for what I do now.

For 2024, my plan is to make videos of all kinds: narrative films, tutorials on video game design (my other creative hobby), videos about DIY projects, and maybe some animation. I'm also interested in building a projector-based virtual set--I did a proof of concept, but I'd need to invest in good backdrops to make it photorealistic. Aside from virtual sets, I have all the gear I need. However, I do plan a couple new items.
- A couple lights for narrative films. Probably LEDs that can run off batteries when needed.
- Switching my video camera to full frame. Maybe a Z Cam F6 instead of my M4.
- Considering lens upgrades. I never owned high quality glass; it's always been borrowed or rented by the production. I might get Sigma Arts--I always enjoyed using them. I look at cinema lens sets every now and then, but honestly I won't get much more out of "real" cinema housings vs 3D printed gears, and you have to go waaay up in price before optical quality rises.
-
Lol true. My point with 8 vs 10 was that the difference is readily apparent to the naked eye in most shooting conditions without any color grading (though again, it could just be my camera's implementation). From my experience shooting DNG vs ProRes on old blackmagic cameras, I can't say I ever saw a difference. So it's all about diminishing returns.
-
The biggest difference I notice between 8 and 10 bit footage is that 8 bit has splotchy chroma variation. I believe this is a result of the encoder rather than inherent in bit depth, but it's been visible on every camera that I've used which natively shoots both bit depths. In this quick example, I shot 60 Mbps 4:2:0 UHD Rec 709 in 10 bit H265 and 8 bit H264, and added some saturation to exaggerate the effect. No other color corrections applied. Notice when zooming in, the 8 bit version has sort of splotches of color in places. All settings were the same, but this is not a perfectly controlled test--partially because I was lazy, and partially to demonstrate that it's not that hard to show a 10 bit benefit at least on this camera. I do, however, agree with the initial premise, that 8 bit does generally get the job done, and I generally also agree that 8 bit 4k downscales to a better image than native 10 bit 1080p.
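As a side note on the underlying math (my illustration, not part of the test above): each extra bit halves the quantization step, so a 10 bit signal has four times the code values of an 8 bit one. A minimal Python sketch, assuming simple rounding quantization of a normalized signal:

```python
# Quantize the same smooth ramp at 8 and 10 bits and compare step sizes.
# This models bit depth alone, not the encoder chroma behavior blamed above.
import numpy as np

ramp = np.linspace(0.0, 1.0, 4096)  # smooth 0-1 luminance ramp

for bits in (8, 10):
    levels = 2 ** bits
    quantized = np.round(ramp * (levels - 1)) / (levels - 1)
    print(f"{bits}-bit: {levels} code values, "
          f"step = {1.0 / (levels - 1):.5f}, "
          f"{np.unique(quantized).size} distinct levels in the ramp")
```

That quarter-size step is why pushing saturation in the grade exposes 8 bit first: the splotches land on coarser boundaries.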
-
It's a couple people, really. I disagree with you about aesthetics of 24p and about the purpose of art, but agree about AI. I disagree with some others about the nature of art requiring a human origin, but agree with them about 24p and the purpose of art. And a lot of us who disagree have had a decent discussion, between the silliness, so don't give up entirely.

In my opinion @zlfan has been especially inflammatory, not addressing examples or arguments, and ending every other post with lol. And I'm not interested in @Emanuel 's statements like "art made by machines is not art. Period." I know some of it is a language barrier, but it's not a useful statement or a reasoned argument. I appreciate @kye's detailed posts with actual examples, even when I disagree. I didn't read every post, but he might be the only one other than me who has tried to explain their artistic position with any depth or examples. Saying "24p is better because it's what we've always done" is as inane a position as "more frames is better"--not because of the position taken, but because neither statement contributes to anyone's understanding.

I haven't posted here in a while because I don't have time for making movies anymore. I don't know if I was included in the previous statement that there are too many engineers in this thread--I would definitely prefer to be tagged if so--but I'm one of the few people who has posted original narrative work here on the forum, back when we had a screening room section (as low budget and poorly made as my work was! I'm certainly not the best filmmaker here). As an artist, I will say that anyone who does not delve into the exact mechanics behind the emotional response that art invokes, particularly in a field that requires huge amounts of collaboration, might be doing their artistic side a disservice.
-
Right, if we gave a machine learning model only movies, then it would have a limited understanding of anything. But if we gave it a more representative slice of life, similar to what a person takes in, it would have a more human-like understanding of movies. There's no person whose sole experience is "watching billions of movies, and nothing else." We have experiences like going to work, watching a few hundred movies, listening to a few thousand songs, talking to people from different cultures, etc. That was my point about a person's life being a huge collection of prompts. We can observe more limited ranges of artistic output from groups of people who have fewer diverse experiences as well.

Defining art as being made by a living person does, by definition, make it so that machines cannot produce art. It's not a useful definition though, because:
1. It's very easy to make something where it is impossible to tell how it was made, and so then we don't know whether it's art.
2. We now need a new word for things that we would consider art if produced by humans, but were in fact produced by a machine.

Perhaps a more useful thing in that case would be for you to explain why art requires a living person, especially taking into account the two points above?

Jaron Lanier wrote an interesting book 10 years ago about our value as training data, called Who Owns the Future? Worth a read for a perspective on how the data we produce for large companies increases their economic value. I don't disagree, but I also believe that learning art is a process of taking in information (using a broad definition of information) over the course of a lifetime, and creating an output that is based on that information.
-
So are you saying that humans have an aspect, known as innate humanity, which is not a learnable behavior, nor is something that can be defined by any programmable ruleset? And that this is the element that allows a human to tell whether its creation is art? I would argue that Midjourney, for example, does a pretty good job even now of determining whether its own output is artistic, before giving that result to you. It would be pretty useless to the many artists who use it, if it could not already determine the value of its output before giving it to the user. Saying it doesn't make it true. Why do you believe that to be the case?
-
Last time you checked, AI is in its infancy. ChatGPT, arguably our most sophisticated model, just turned 1 year old. However, what you said is already incorrect: learning models long ago invented their own languages. https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/. It is not what we call artistic, but these are very early models with extremely limited datasets compared to ours.

My argument is that this is the same for humans. We build up prompts over the course of our lifetime. Billions of them. Every time someone told you, as a child, that you can't do something... that's a prompt that you remembered, and later tried anyway. You telling me that AI can't create is a prompt that I am using to write this post.

Every original idea that you have is based entirely on the experiences you have had in your life. Is that a statement that you disagree with? If so, can you explain where else your ideas come from? And if not, can you explain how your experiences lead you to more original ideas than machine learning models'? We do not have ideas in a vacuum. And obviously our ideas evolve over time as something is incrementally added. But you can't go back 200,000 years to the first humans and expect them to invent something analogous to haikus either.
-
The same applies to humans. We have an information set, and can only create thoughts and inferences from that information. What information do humans have access to that AI do not, which allows them to create where nothing existed before? And I'm not talking about AI today in 2023. I mean the ones we'll have 50 years from now. Perhaps you could try to define what a "new work" is vs a "mash up" in a formal and abstract sense. We're looking for a definition that shows what humans can do that machine learning can never do.
-
Unrealistic art is just as technically challenging as realistic art. A lot of research, experimentation, and programming went into the shaders and tools used to make Up, which is highly impactful emotionally. The technique of creating emotion is itself technical.

As a casual viewer, it's completely fine to have an opinion that the more frames the better. It's your opinion. However, for someone who is creating movies--whether personal art projects, or as part of hundreds of professionals on a blockbuster--one tool cannot be better than another. Different frame rates create different emotions, the same way different focal lengths do. Saying 60p is better is like saying horror movies are better than romances.

On the topic of horror, however, it's well known that one of the critical elements of horror is not showing the monster. Obscuring the monster through camera angles and shadows is a critical element of scaring people. That's not an artistic note. It's simply scarier. In fact, much of effective storytelling is saying just enough that the primary story takes place in the audience's head. If you disagree with that, then I'm not even sure fiction is something you enjoy--which is perfectly okay, but also renders everything else moot! When people say 24p has a dreamy effect, another way to say it is that giving the audience less information allows them to create more in their head.

Something else I will add to the discussion about 24p vs 60p is that I have never seen a really good movie shot in 60p. By that I mean, I have never seen a movie that has top class story, lighting, direction, editing, and acting that is also 60p. It's hard to compare The Lord of the Rings with The Hobbit on the merit of framerate, because I bet that, all else being equal, 48p Lord of the Rings would be more enjoyable than 24p The Hobbit.
-
Do you believe that humans have a non-physical and/or magical ability to innovate using information outside of that which we learned? Human thoughts are also mashups of our experiences. We start with nothing and gradually take in information during our lifetime.
-
In my opinion as a software engineer at a company extensively using AI, it is a mistake to believe that there is anything humans can do that AI will never be able to do. ChatGPT was launched barely over a single year ago. Midjourney was launched less than 18 months ago. Imagine where they will be next year. Or, more fairly, imagine where these models will be when they are the age of a working professional--then remember that the models will keep learning indefinitely, not tied to a human lifespan. Just like machine learning models, all people--including highly skilled professionals--start with 0 knowledge, and their opinions and artistic vision/instincts are formed from sensory inputs. The building blocks of our brains are not complex, though humans have more training data and a lot more neurons than today's ML models.
-
And do you prefer the plot, dialog, and action to accurately represent real life exclusively? Or do you ever like characters that are braver, stronger, or more villainous than real life? My whole point in my first post was that deviations from reality are used in every aspect of films and storytelling in general. But Kye's post wasn't about the color grade only, it was the entire visual design. We don't light real life the same way we light movies. What is the difference in your opinion between unrealistic lighting design, and unrealistic motion design? There are no wrong opinions of course, I'm just asking questions to explain my point of view on it.
-
Every movie that I really enjoy and watch over and over has elements that are purposely unrealistic, whether in the image, the staging, or characterization. I'm not talking about technical story unrealism, like elves or warp speeds.

^ Here, it's not the image quality, considering the time it was shot. However, the staging of the actors is unrealistic. The way they pose, the dialog--no one actually does that or speaks that way. One of my top 5 films, and perhaps my favorite opening scene ever.

^ Have you ever seen a toy shop organized like that, with those colors and lights? I picked Hugo because, seeing it in 3D, I was blown away by how they changed the interpupillary distance for different scenes to get different moods, using unrealism as part of the craft. And it's an easy segue into the highly creative movie it revolves around. Even in 1902, they could have made the moon more realistic!

Specifically on the topic of framerate, Spiderverse did a fantastic job using different frame rates to convey different moods. Some of it is explained here: https://www.youtube.com/watch?v=JN5sqSEXxm4

^ This is another favorite movie (and it's recent--they could have shot digital or HFR if they'd wanted). Everything works because of unrealism, from the costumes, to the sets, to dialog, sound, delivery.

I would argue that purposely making films look or act realistic results in boring content. I don't necessarily disagree. Good movies transport me to that world with perfect clarity, but the world may not be realistic. When I watch The Third Man, I'm there, in black and white, with the grain, and the film noir corny dialog, and Orson Welles' overacting. That's the world I'm in.
-
Panasonic S5:
- Is mirrorless full frame, can use my existing lenses
- Has good video and photo quality (10 bit, good DR)
- Lightweight enough to bring backpacking
- Comfortable to carry/hold for viewfinder-based photography
- Has pixel shift high res for specific VFX uses

I'm not sure the S5 II has enough benefits to justify the price increase compared to a second-hand S5. The part I'd miss about the S5 II is the solid neck strap rings 😂 In fact I'm planning on getting an S5 to replace the A7RII as my photo camera, and MAYBE for some video... I'm open to other suggestions. The big pieces I would upgrade if I could are rolling shutter and battery life. I guess that's assuming it's truly 1 camera, for photos and videos. If video only, a UMP 4.6K or 12K looks pretty nice: a good balance between quality, ease of use, realistically priced for me (unlike an Alexa 35), and not too annoying for solo operation.
-
While neither is a tool I would likely use personally, I had to stop by to say Blackmagic knocked it out of the park with their FF camera and app. I'm extremely happy to see more L mount cameras in general. Competition is great, but so is standardization of interchangeable parts. I love to see multiple companies adhering to a standard connection between interlocking pieces while each innovates on its core functionality.

I think Blackmagic's strength (and their target audience for this form factor) is simplicity. Back when I worked on a lot of student films, Blackmagic cameras were always the #1 choice because of their ease of use. A lot of those students were more artistic than technical. Having a big, built-in screen, a simple UI, and an end-to-end workflow with DaVinci Resolve was really great for them.

The iPhone app is a similar advantage. I wouldn't use an iPhone for any serious project personally, but lots of people already do. And all of those people now have a huge reason to use Resolve, and many will buy the full version. Having an end-to-end workflow for phones is really innovative, particularly with the direct-to-cloud part. I'm always happy to see smaller companies doing something fresh and original, and really wish Blackmagic the best! I'd love a box-style camera though... but I guess that's what Z Cam is for
-
The problem with any effort to stop technology is that it won't work in the long run. Right now, there are only a handful of companies that have the computing power to run an LLM like ChatGPT, so it's somewhat feasible to control. But once the technology can run on your home PC, there is no amount of legislation or unionization that can control its use. And that statement is not to say anything is good or bad. The reality is simply that we have very limited ability to control the distribution and use of software.

Switching to opinion mode, I believe that the technology is ultimately a good thing. I think limiting the use of technology in order to preserve jobs is bad in the long run. I believe it's better for humans if cars drive themselves and we don't need to employ human truck drivers. It's better for humans to give everyone the ability to make entire movies, simply by describing them to a computer.

The big problem is that our economic model won't support it. And I'm not talking about studios and unions--the fundamental problem is that digital goods can be infinitely duplicated at no cost, and every economy is based on exchanging finite goods. The same applies to AI, but with the new meta-layer that the actual, duplicated product of AI isn't a digital good, it's a skillset for producing that digital good. I don't have all the right words to describe exactly what I'm trying to say. The example I give is that right now, self driving cars are not as good as people. But the moment any car can drive itself better than a human, every car will be able to. We have to keep training new truck drivers to do the same task. That is not true of a duplicatable AI skillset.

So to bring this back to my original point, we can try to prevent self driving cars in an effort to protect truck drivers, but someday, someone will still achieve it, and at that moment the software will exist, and unlike a physical product, it can be copied all over the world simultaneously. So instead of preventing technology or its use, we need to adapt our economic model to better serve humans in light of our new abilities.
-
Nice article! My perspective is as a software engineer at a company that is making a huge effort to leverage AI faster and better than the industry. I am generally less optimistic than you that AI is "just a tool" and will not result in large swaths of the creative industry losing money.

The first point I always make is that it's not about whether AI will replace all jobs, it's about the net gain or loss. As with any technology, AI tools both create and destroy jobs. The question for the economy is how many. Is there a net loss or a net gain? And of course we're not only concerned with the number of jobs, but also how much money each job is worth. Across a given economy--for example, the US economy--will AI generated art cause clients/studios/customers to put more, or less, net money into photography? My feeling is less. For example, my company ran an ad campaign using AI generated photos. It was done in collaboration with both AI specialists to write prompts, and artists to conceptualize and review. So while we still used a human artist, it would have taken many more people working many more hours to achieve the same thing. The net result was we spent less money on creative for that particular campaign, meaning less money in the photography industry. It's difficult for me to imagine that AI will result in more money being spent on artistic fields like photography. I'm not talking about money that creatives spend on gear, which is a flow of money from creatives out; I'm talking about the inflow from non-creatives towards creatives.

The other point I'll make is that I don't think anyone should worry about GPT-4. It's very competent at writing code, but as a software engineer, I am confident that the current generation of AI tools cannot do my job. However, I am worried about what GPT-5, or GPT-10, or GPT-20 will do. I see a lot of articles--not necessarily Andrew's--that confidently say AI won't replace X because it's not good enough. It's like looking at a baby and saying, "That child can't even talk! It will never replace me as a news anchor." We must assume that AI will continue to improve exponentially at every task for the foreseeable future. In this sense, "improve" doesn't necessarily mean "give the scientifically accurate answer" either. Machine learning research goes in parallel with psychology research. A lot of machine learning breakthroughs actually provide ideas and context for studies on human learning, and vice versa. We will be able to both understand and model human behavior better in future generations.

My third point is that I disagree with your claim that people are fundamentally moved by other people's creations. I think that only a very small fraction of moviegoers care at all about who made the content. This sounds like an argument made in favor of practical effects over CGI, and we all know which side won that. People like you and I might love the practical effects in Oppenheimer simply for being practical, but the big CGI franchises crank out multiple films each year worth billions of dollars. If your argument is that the people driving the entertainment market will pay more for carefully crafted art than for generic, by-the-numbers stories and effects, I can't disagree more. Groot, Rocket Raccoon, and Shrek sell films and merchandise based on face and name recognition. What percent of fans do you think know who voiced them? 50%, i.e. 100 million+ people? How many can name a single animator for those characters?
What about Master Chief from Halo (originally a one-dimensional character, literally from Microsoft)--how many people can tell you who wrote, voiced, or animated any of the Bungie Halo games? In fact, most Halo fans feel more connected to the original Bungie character than to the one from the Halo TV series, despite the latter having a much more prominent actor portrayal.

My final point is not specifically about AI. I live in an area of the US where, decades ago, everyone worked in good paying textile mill jobs. Then the US outsourced textile production overseas and everyone lost their jobs. The US and my state economies are larger than ever. Jobs were created in other sectors, and we have a booming tech sector--but very few laid off, middle aged textile workers retrained and started a new successful career. It's plausible that a lot of new, unknown jobs will spring up thanks to AI, but it's also plausible that "photography" shrinks the same way textiles did.
-
Interesting older article about WB and ISO. Alister Chapman
KnightsFan replied to webrunner5's topic in Cameras
He's being obtusely literal, in my opinion. Obviously you can't change the camera's analog gain after the fact. But most people don't judge image or workflow based on counting which photons and voltages flowed through their equipment; they care about whether the end result is accurate to their expectation. So when people say you can change WB in post, it means that the NLE is performing a mathematically correct operation to emulate a different white balance, based on accurate metadata.

Not too long ago, there was no such thing as a color managed workflow in consumer NLEs, which meant that the WB sliders and gain adjustments--besides not changing the camera circuitry's native WB, which nothing in post can--also produced mathematically incorrect results compared to in-camera white balance. So when we got accurate WB and ISO adjustments in raw processors, it was truly revolutionary. Nowadays, as long as it's color managed and the files have sufficient data, you can get the same result even without raw. Neither one is technically changing the camera's WB, but they produce the correct results, and that's all that matters.

I'll also point out that I suspect most (all?) sensors don't actually change their analog gain levels based on WB setting. I bet it's almost always a digital adjustment. In that case, Alister would have to also argue that changing WB on the camera doesn't actually change WB. Maybe he wants to argue that shooting at anything other than identical gain on each pixel isn't true white balancing, but I am not sure that is a useful description of the process. That is why I say it's obtusely literal. Everything I said also applies to ISO on cameras that have a fixed amount of gain.
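For a concrete picture of the "mathematically correct operation" (my illustration, with made-up gain values, not any NLE's actual internals): white balance in a color managed pipeline boils down to per-channel gains applied in linear light, which is the same arithmetic whether it happens in camera or in post.

```python
# Minimal sketch: white balance as per-channel gains in linear light.
# Gain values are illustrative, not from any real camera profile.
import numpy as np

def apply_wb(linear_rgb, r_gain, b_gain):
    """Scale R and B relative to G -- the usual white balance model."""
    return linear_rgb * np.array([r_gain, 1.0, b_gain])

pixel = np.array([0.42, 0.50, 0.61])  # linear RGB pixel, shot too cool
print(apply_wb(pixel, r_gain=1.15, b_gain=0.85))  # warmed-up result
```

If the camera's own WB is a digital adjustment, as I suspect above, then the in-camera and in-post versions are literally the same operation.
-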
This. Do your own tests and trust your judgement, but here's my opinion. If all you care about is how it looks on YouTube, 50 is perfectly fine. No one can tell the difference between a 50 and a 100 Mbps source file on a 7" phone screen, or on a 65" 4K screen 12' away with window glare across the front.

I care more about how my content looks in Resolve than on YouTube. And even then, I use 100 Mbps H265 (IPB). When I had an XT3, I shot a few full projects at 200 Mbps and didn't see any improvement. I've done tests with my Z Cam and can't see benefits above 100. I'd be happy with 50 in most scenarios. It might be confirmation bias, but I think I have been in scenarios where 100 looked better than 50, in particular when handheld.

Keep in mind also that on most cameras, especially consumer cameras, the nominal rate is the upper limit (it would be a BIG problem if the encoder went OVER its nominal rate, because the SD card requirements would be a lie). So while I shoot at 100, the file size is usually closer to 70, so it might not even be as big a file size increase as you think. But for me, 100 Mbps is the sweet spot when shooting H265 IPB.
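For anyone weighing the storage side, the arithmetic is simple; a quick illustration, treating nominal rates as upper bounds per the caveat above:

```python
# Convert a nominal video bitrate (Mbps) to storage per minute of footage.
def gb_per_minute(mbps):
    return mbps * 60 / 8 / 1000  # megabits/s -> MB/s -> MB/min -> GB/min

for rate in (50, 70, 100, 200):
    print(f"{rate} Mbps ~ {gb_per_minute(rate):.2f} GB/min")
# 100 Mbps nominal is at most 0.75 GB/min; a real-world file averaging
# ~70 Mbps comes in around 0.53 GB/min.
```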
-
Red's encoding is JPEG 2000, which has been around since 2000 and provides any compression ratio you want, with a subjective cutoff where it becomes visually lossless (as with every algorithm). JPEG 2000 has been used for DCPs since 2004 with a compression ratio of about 12:1. So there was actually a pretty long precedent of motion pictures using the exact same algorithm at a high compression ratio before Red did it. Red didn't add anything in terms of compression technique or ratios. They just applied existing algorithms to bayer data, the way photo cameras did, instead of RGB data.
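The bayer-vs-RGB point is worth quantifying: a bayer mosaic stores one sample per photosite, while debayered RGB stores three, so compressing before the debayer starts with a third of the data. A rough back-of-envelope sketch (the resolution and bit depth are illustrative numbers, not actual Red One specs):

```python
# Rough data rates for compressing bayer vs debayered RGB footage.
width, height, fps, bit_depth = 4096, 2048, 24, 12  # illustrative values

bayer_mbps = width * height * bit_depth * fps / 1e6  # one sample per photosite
rgb_mbps = bayer_mbps * 3                            # three samples after debayer

for ratio in (8, 12):  # ~12:1 is the DCP ballpark mentioned above
    print(f"{ratio}:1 -> bayer: {bayer_mbps / ratio:.0f} Mbps, "
          f"RGB: {rgb_mbps / ratio:.0f} Mbps")
```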
-
Honestly the "or more" part is the only bit I really take issue with. Once Elon Musk reaches Mars, he should patent transportation devices that can go 133 million miles or more so he can collect royalties when someone else invents interstellar travel. If he specifically describes "any device that can transport 1 or more persons" that would even cover wormholes that don't technically use rockets! If the patent had listed the specific set of frame rates that they were able to achieve, like 24-48 in 4k and 24-120 in 2k (or whatever the Red One was capable of at the time), at the compression ratios that they could hit, that would seem more like fair play. That leaves opportunity for further technical innovation, Which, by the way, Red might very well have been first at as well.