Indeed, just look at the leap in capability from GPT2 to GPT3.
Or each generation of Midjourney: V1 vs V2 vs V3 vs V4 vs V5 (and those five generations took only a single year!).
https://aituts.com/midjourney-versions/
We might laugh at the efforts of generative AI video right now, but they're no worse than Midjourney V1 was...
Perhaps 50/50 odds we'll have the Midjourney V5 equivalent for video by 2028:
https://manifold.markets/ScottAlexander/in-2028-will-an-ai-be-able-to-gener
Or maybe even higher odds than that...
https://manifold.markets/firstuserhere/will-we-have-end-to-end-ai-generate-12f2be941361
https://manifold.markets/firstuserhere/will-we-have-end-to-end-ai-generate-de41c9309e38
I agree with your disagreement.
That's a good analogy!
And if it's carefully and appropriately managed, you can even swap out the voice actor playing these characters, and almost none of the fans will notice or care.
Another good analogy. I do feel it's very likely that the country as a whole will be massively better off and wealthier thanks to AI. But... there will also be huge numbers of individuals (such as those middle-aged textile workers) who will be a lot worse off.
We'll be able to have super niche "micro celebrity AI avatars"
At the moment, celebrities need a certain amount of broad appeal. As you said, they need to avoid offending their fans. So they end up appealing to the lowest common denominator, because what appeals to one section of the fan base could drive away other fans who are offended by it. But once you're freed from those physical constraints, an "AI celebrity" could cater to any and all of these micro niche fanbases.
"I think there is a world market for about five computers." ~ IBM's president, Thomas J Watson (said in the early 1940's)
Nah, my Raspberry Pi can run an LLM. (OK, only a baby ChatGPT that's quite cut down and somewhat crippled. But even if I want to run an LLM that's close to the power of GPT3, it costs me far less than $1/hr, more like a handful of cents per hour. It is cheap to *run* an LLM.)
It's predicted as highly likely that even GPT4 will be runnable on consumer-grade hardware by next year:
https://manifold.markets/LarsDoucet/will-a-gpt4equivalent-model-be-able
What you're thinking of is the cost to train GPT4 from scratch. That's VERY expensive!
But still, it isn't quite as bad as you think. If a government wanted to do it, absolutely any government in the OECD could, ten times over. Likewise, there are hundreds, if not thousands, of companies in the world which could train the next GPT4 if they wanted to. (GPT4's training cost was probably on the order of $100M: waaaay out of reach for you and me, but easily within reach of many, many other organizations.)
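To see where numbers like that come from, here's a rough back-of-the-envelope sketch using the common ~6·N·D rule of thumb for transformer training compute (roughly 6 FLOPs per parameter per training token). The model size and token count below are the published GPT3 figures; the GPU throughput and rental price are purely my own assumptions for illustration:

```python
# Back-of-the-envelope LLM training cost, via the ~6*N*D FLOPs rule of thumb.
# The hardware throughput and price below are assumptions, not vendor figures.

params = 175e9   # GPT3-sized model: 175B parameters
tokens = 300e9   # GPT3 was trained on roughly 300B tokens

total_flops = 6 * params * tokens  # ~3.15e23 FLOPs of training compute

sustained_flops_per_gpu = 1e14     # assume ~100 TFLOP/s sustained per GPU
gpu_hours = total_flops / sustained_flops_per_gpu / 3600

cost_per_gpu_hour = 2.0            # assume $2/hr cloud GPU rental
total_cost = gpu_hours * cost_per_gpu_hour

print(f"~{gpu_hours:,.0f} GPU-hours, ~${total_cost / 1e6:.2f}M")
```

Under these assumptions a GPT3-scale run lands in the low single-digit millions of dollars; scale the model and data up an order of magnitude or two and you're in the $100M ballpark, which is exactly the "out of reach for individuals, easy for big organizations" territory.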
But they won't, because the cost of training their own GPT4, versus the profits they could make (AI is quickly becoming a very competitive space!), just isn't worth it.
The good news, though, is that training costs are dropping fast!
Look at this prediction: it is highly likely that before 2030 it will cost under $10K to train a GPT3-quality LLM from scratch (i.e. any keen hobbyist could do it themselves!):
https://manifold.markets/Gigacasting/will-a-gpt3-quality-model-be-traine
And that's yet another reason why there are not hundreds of other companies training their own GPT4: why take that risk if you're not already an industry leader? Your $100M+ investment could be worth next to nothing within a few short years, so you need a solid business plan to recoup your costs fast. OpenAI can do that, because they're massively funded with Microsoft's backing, and they have a first-mover advantage.
Too late, that genie left the bottle long ago.