very wip just an idea soup for now
A positive vision of AI applied to human relationships is AI that doesn't act human.
A positive world with AI is one where we are less confused about the nature of AI, where we see it more appropriately as a tool best suited to certain uses. Like other tools.
Every new cognitive technology is accompanied by its own mania. AI may be the most extreme of all, but it's not the first.
It takes a long time to see a technology for what it is. Maybe it's only after a technology has completely fulfilled its potential, when we can finally see its limits, that we can see what it really is. That's hard right now because the promise of AI is that it can do anything, or at least "all knowledge work". (radioactivity example, let Seniha give it)
A positive vision of AI is one where we let it do the things it's good at. There is absolutely toil that AI can remove: it lowers the barrier to entry for building just about anything on a computer. We can prototype, even build, websites in an instant.
AI might not get smarter on its own. That hard work may be left to us.
A positive vision of a world with AI is one where we acknowledge the limits of AI so we can get back to the hard work of pushing our own (human) limits. One where we accept that AI is not going to solve climate change, or cure disease, or fix democracy, or end scarcity or war on its own if we just pour all our resources into the companies that are building AI. Even if you accept the boldest claims about the power of these tools, we can see from how they work that they rely on the will of the humans behind them to push them toward one problem or another.
We have to acknowledge that there is no easy way out, that we haven't identified the solution to all our problems. We need humility.
Once we admit that AI won’t solve all our problems, humanity is still clearly more capable with AI than without it.
But (I'm reacting to my own use of "humanity" just now) AI discourse tends to do this totalizing thing where the human race is reduced to "humanity", as if all humans have the same values and goals. This is why "alignment" is a farce. Aligned to whom? People disagree with each other.
What we have now, and will have in the future, is a world with models "aligned" to the interests of whoever is training them. So far that's the model companies themselves, and nation states pretty soon, maybe already. But the cost to train models is dropping and open source models aren't far behind, so before too long essentially "anybody" will be able to align a model to their own values, and at that point I'm not sure alignment will mean much at all. People will still disagree.
Of course the nation states and model companies will always be ahead, but as open source gets better, I could see that not mattering for most things.
(end digression)
So humanity is more capable with AI than without it, but what does that mean?
AI can mean building a survey to gather feedback from human patrons to improve your restaurant, or it can mean building simulation tools so that you never have to talk to a human customer again.
AI can break down sentences in another language to help you learn that language, or it can do all the listening and speaking for you so you never have to learn a single word.
So there are ways for AI to bring humans closer together, but we have to choose them.
(Getting less and less structured)
Something to keep in mind: whatever capability a technology is supposed to extend tends to atrophy once we start to rely on the technology for that capability, once we offload it to the technology.