
Leaderboard

Popular Content

Showing content with the highest reputation on 08/01/2023 in all areas

  1. Have you tried the Flat profile? I’ve read and seen some good things coming from it. It’s the last day of a semi-working break and I’ve been doing some musing, as you (I) do, and the pairing of a Z8 & Z9 is back on top (I had a short dalliance with an imaginary gal called the R3, and for her body alone she would still be my pick, but the lenses are less attractive), especially now that the Tamron 35-150 has been officially announced. I wish I could try one, but none seem to be available in La belle France…
    2 points
  2. zerocool22

    THE Big Question

    I have not seen Oppenheimer, but this is something I have thought about with movies in general. The dialogue is always so clear and loud and constant throughout. And while cinematography is always a style choice, overlighting or underlighting things, so that in some films or shows most of the frame is black (I remember the Game of Thrones hate), for voice audio that's not the case: everything is always so unnaturally clean and clear. Maybe I am alone in this, as I have tinnitus and have this problem in real life, where I often don't hear what anyone is saying and have to ask them to repeat it, because a person's dialogue isn't mixed to stand out like it is in the movies. But yeah, I can imagine it being super annoying: an actor delivers a line and nobody watching the movie (without lip reading) can make out what the character just said. (Maybe in that case just opt for a show-don't-tell kind of approach.)
    1 point
  3. PPNS

    Share our work

    1 point
  4. M_Williams

    THE Big Question

    This is interesting, but personally, I think it's a stupid way to approach this. Yes, make it sound best for the theatres, but the film won't be in theatres forever. A lot of people are going to watch it at home - probably most people will watch it at home over the course of X number of years. But, more importantly, this is really a problem exclusive to Nolan. When I saw the TDKR opening scene preview before Mission Impossible: Ghost Protocol (in IMAX at Universal) I couldn't understand 80% of what Bane was saying. I think they went back after a lot of the complaints and remixed it, because it was better in the full release. I watched everything with subtitles at home. But when you have a film like Oppenheimer, which has some of the most intense sound effects and music I've ever heard in a theatre, and which is also almost completely dialogue-driven, you gotta make that sh*t understandable. Same with Tenet, where you wouldn't understand the plot at all if you couldn't hear the dialogue. I dunno, I just never have this problem with any movies other than Nolan films. Maybe a few lines here and there, obviously, but not entire movies. That said, I don't think I missed out on anything important, it's just really frustrating. Tenet was waaaay worse.
    1 point
  5. I just shot my first couple of things in N-Log. Can anyone point me to alternative LUTs I can check out? The Nikon LUT is contrasty, and the skies are a bit fluoro-turquoise. I've just had a go at grading it myself without the LUT, and I need something in between mine and the Nikon one. I'm in Premiere for now. I've tried turning down the LUT intensity and tweaking the Hue vs Hue curves in Premiere, but it's a bit limited. (A rough sketch of previewing a .cube LUT outside Premiere follows at the end of this list.)
    1 point
  6. Indeed, just look at the leap forward from GPT2 to GPT3, or at each generation of Midjourney, V1 vs V2 vs V3 vs V4 vs V5 (and those five generations only took a single year to happen!): https://aituts.com/midjourney-versions/ We might laugh at the efforts of generative AI video right now, but they're no worse than Midjourney V1 was. Perhaps 50/50 odds we'll have the Midjourney V5 equivalent for video by 2028: https://manifold.markets/ScottAlexander/in-2028-will-an-ai-be-able-to-gener Or maybe even higher odds than that: https://manifold.markets/firstuserhere/will-we-have-end-to-end-ai-generate-12f2be941361 https://manifold.markets/firstuserhere/will-we-have-end-to-end-ai-generate-de41c9309e38

    I agree with your disagreeing. That's a good analogy! And if it is carefully and appropriately managed, you can even have a change in the voice actor playing these characters, and almost none of the fans will notice or care.

    Another good analogy. It is indeed very likely, I feel, that the country as a whole will be massively better off and wealthier thanks to AI. But there will also be huge numbers of individuals (such as those middle-aged textile workers) who will be a lot worse off.

    We'll be able to have super niche "micro celebrity AI avatars". At the moment, celebrities need a certain amount of broad appeal. As you said, they need to avoid offending their fans, so they end up appealing to the common denominator, because what might appeal to one section of the fan base could drive away other fans who get offended by it. But once you're freed from the physical constraints, an "AI celebrity" could cater to any and all of these micro niche fanbases.

    "I think there is a world market for about five computers." ~ IBM's president, Thomas J Watson (said in the early 1940s)

    Nah, my Raspberry Pi can run an LLM. (OK, only a baby ChatGPT that's quite cut down and somewhat crippled. But even if I want to run an LLM that's quite close to the power of GPT3, that costs me much, much less than $1/hr, more like a handful of cents per hour. It is cheap to run an LLM; there's a tiny local-model sketch after this list.) It's predicted as highly likely that even a GPT4-equivalent model can be run on consumer-grade hardware by next year: https://manifold.markets/LarsDoucet/will-a-gpt4equivalent-model-be-able

    What you're thinking about is the cost to train GPT4 from scratch. That's VERY EXPENSIVE! But still, it isn't quite as bad as you think. If a government wanted to do it, then absolutely any government in the OECD could do it, ten times over. Likewise, there are hundreds, if not thousands, of companies in the world which could train the next GPT4 if they wanted to. (GPT4 would've cost roughly the same order of magnitude as $100M, waaaay out of reach for you and me, but easily within reach of many, many other organizations.) But they won't, because the cost of training their own GPT4 versus the profits they could make (as AI is quickly becoming a very competitive space!) just isn't worth it.

    The good news, though, is that training costs are dropping drastically fast! Look at this prediction: it is highly likely that before 2030 it will cost under $10K to train a GPT3-quality LLM from scratch (i.e. any keen hobbyist can do it themselves!): https://manifold.markets/Gigacasting/will-a-gpt3-quality-model-be-traine And that's yet another reason why there are not hundreds of other companies training their own GPT4: why take that risk if you're not already an industry leader in this? Your $100M+ investment could, within a few short years, be worth next to nothing. You need a solid business plan to recoup your costs fast. OpenAI can do that, because they're massively funded with Microsoft's backing and they have a first mover advantage.

    Too late, that genie left the bottle long ago.
    1 point
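
Sketch referenced in item 5 above: a rough, hypothetical Python example of previewing a candidate .cube LUT on an N-Log still frame outside Premiere, by parsing the LUT file and applying it with trilinear interpolation. The file names and the frame-loading step are placeholders, not any vendor's API, and this is only a sketch of how a 3D LUT works, not a replacement for a proper grade.

```python
import numpy as np

def read_cube(path):
    """Parse a .cube 3D LUT into (size, table), with table shaped (size, size, size, 3)."""
    size, rows = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or line.startswith("TITLE"):
                continue
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])
            elif line[0].isdigit() or line[0] in "-.":
                rows.append([float(v) for v in line.split()[:3]])
    table = np.asarray(rows, dtype=np.float32)
    # .cube data is listed with red varying fastest, so a C-order reshape
    # gives an array indexed as table[b, g, r]
    return size, table.reshape(size, size, size, 3)

def apply_lut(rgb, size, table):
    """Apply the LUT to an HxWx3 float array in [0, 1] using trilinear interpolation."""
    x = np.clip(rgb, 0.0, 1.0) * (size - 1)
    lo = np.floor(x).astype(int)
    hi = np.minimum(lo + 1, size - 1)
    f = x - lo                                   # fractional position inside the lattice cell
    out = np.zeros_like(rgb, dtype=np.float32)
    for dr in (0, 1):                            # blend the 8 surrounding lattice points
        for dg in (0, 1):
            for db in (0, 1):
                r = hi[..., 0] if dr else lo[..., 0]
                g = hi[..., 1] if dg else lo[..., 1]
                b = hi[..., 2] if db else lo[..., 2]
                w = ((f[..., 0] if dr else 1 - f[..., 0]) *
                     (f[..., 1] if dg else 1 - f[..., 1]) *
                     (f[..., 2] if db else 1 - f[..., 2]))
                out += w[..., np.newaxis] * table[b, g, r]
    return out

# Usage sketch (loading a still exported from the clip is up to you):
# size, table = read_cube("candidate_nlog_to_rec709.cube")   # hypothetical file name
# graded = apply_lut(frame_float_rgb, size, table)
```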
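
Sketch referenced in item 6 above: a minimal illustration of the "small LLMs run on consumer hardware" point, assuming the Hugging Face transformers package (with a PyTorch backend) is installed. It runs GPT-2, a much smaller cousin of GPT3, on an ordinary CPU; the prompt and model choice are arbitrary examples, not anything from the original post.

```python
# Minimal local text generation with a small GPT-2 model.
# Assumes `pip install transformers torch`; model choice is illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # ~500 MB download, runs fine on CPU
result = generator("The cost of training a large language model", max_new_tokens=40)
print(result[0]["generated_text"])
```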