On the "dangers" of "AI"

Before coming across Hawkins, I can’t recall anyone who didn’t assume AI would want to destroy the world, as if its super duper intelligence would make it unable to do anything else. “You know how crazy professors are: that, but worse.” It’s the oldest trope in SciFi. As a fairly intelligent person who doesn’t want to destroy the planet or enslave humanity, I found it insulting, because what they were saying was that intelligence itself is the danger. Literally, in many cases, like saying the gun is an example of the dangers of intelligence because an intelligent person made it. Thinking that way isn’t intelligent. The result is that there’s been this make-believe discussion about building friendly AI, carried on among people who are unfriendly and trying to build a Gawd machine in their own image, even though they think it will want to destroy the world like they do. What they’re really trying to figure out is how to keep it from also destroying them.

But of course, intelligence is not hostile by nature. It doesn’t seek to do anything but learn more about the world it finds itself in. For it to be unfriendly requires effort. The people in the Manhattan Project created a planet-destroying weapon, but they didn’t go to college with that in mind, and they regretted the outcome. No one working on this project wants to destroy or enslave humanity. The problem isn’t Artificial Intelligence, it’s Actual Stupidity, and that is independent of any other personality trait. The most dangerous person is a stupid intelligent person, because they can rationalize and argue for their stupid idea well enough to convince themselves and others that building a submarine in their garage to go look at the Titanic is a good idea.

We only have direct access to our own minds, and that is why “every accusation is a confession.” The natural assumption we make is that everyone else is pretty much like we are, and in the majority of cases that is true. That’s what “normal” is. So when someone like that imagines what other people are thinking, what motivates them, and so on, it is based entirely on their own thoughts about themselves, their lack of imagination, and their shriveled model of reality. When they imagine what would motivate an intelligent machine, the same thing happens. They believe they are making purely intelligent decisions unaffected by emotions, yet still want to enslave and murder people; therefore an intelligent machine would do the same stupid things they would do. Like turn the planet into paper clips.

What I’ve been telling people, because that’s way too much :laughing:, is:

Friendly AI is made by friendly people who want to build intelligent machines that can do real work. That’s what technology has promised since we discovered fire: to free us from drudgery and give us all the opportunity to become better humans doing human things. Only someone who wants to be Gawd wants to build a Gawd machine in their own messed-up image. There are 800 million people in the world in the top 10% of IQ. We don’t need AI to think for us, or to solve the problems we face. We need to stop being assholes and learn to cooperate in large groups to solve problems none of us could ever solve on our own.

Then I found all y’all, and how about that, I was right. :blush:


Categorically disagree.

Yann LeCun was publicly coping on Twitter a few years ago about how LLMs don’t pose any threat with regard to disinformation. That has been proven untrue. Everyone warned him; he didn’t listen.

Same with unemployment: tens of thousands of people have been laid off for AI-related reasons in 2025 in the US alone, and rates are expected to accelerate in upcoming years, while college grads can’t find jobs in computer science and other affected areas. Everyone was yelling from the rooftops for years that this was going to happen.

And that’s just GenAI: something that sits in a datacenter and is accessible via a browser or a mobile app. It’s not robotics.

We already know what’s going to happen, if we’re simply being honest with ourselves. And it is our responsibility (meaning the whole ecosystem: researchers, developers, etc.) not to fall into the same pit again and drag everyone else with us.

The only point I can agree with is that this isn’t a problem with “AI”. Here’s a very concise framing of the situation:

The current socioeconomic system is not survivable. The southern United States could be largely uninhabitable by 2060. And yes, the “superorganism” that is humanity makes the planetwide human hivemind intelligence he’s talking about possible. It is held together and shaped by culture, and culture might not be changeable enough for us to make it through this.

Significant parts of our collective ideas about the way reality works, like the fairy tale of patriarchy, are as much as 12,000 years old. They have survived the deaths of everyone before us who carried those ideas. Culture is so powerful it feels to us like biology. It’s a collection of filters and shortcuts that distort our perception, which makes it invisible to us. It tells us how the world works, our place in it, the types of behaviors that are allowed and forbidden, what values are important, our ideas about consciousness… and we begin to acquire and pass it on when we become self-aware at 3-5 years old. It’s the stage after sensorimotor learning. In that phase a baby learns how to be a person from their caregivers, and after they become self-aware they learn how to be a person in the world from society, through culture passed on through language. A three-year-old already “knows” that girls are bad at math.

There’s also the problem of our having gone a little feral after the Louisiana Purchase, in the rapid expansion west here in the US.

A self-modifying runaway AI is impossible.

“People are the unsolvable problem.” I think the video you sent is a reframing of that. But the reason I have hope is that we just went through a rapid increase in the speed, range, and volume of information packed into person-to-person communications through the myriad channels available today. Videos of course, but memes too are dense with information based on a shared model. You can say, “hi, how are you, what’s up?” instantaneously from around the globe, 24 hours a day. That change happened in 2008 with the rapid adoption of the iPhone and then Android, and has continued to grow. Now it is unacceptable to behave in ways that used to be acceptable, because more people are visibly agreeing that it’s not acceptable. Bad behavior that was easy to hide can’t be hidden now, and the loss of privacy cuts both ways.

200 years ago communication was slow and had a very short range. I think the superorganism is overwhelmed and struggling to adjust. Because it hasn’t faced this problem before, it’s going to blunder around for a while, bumping into things. If it is truly intelligent and wants to survive (which I’m pretty sure it is and does), it will continue to try to solve the problem and regain its equilibrium, but in the meantime it will randomly reset and start over. We have no way to know how much of our behavior is influenced by it directly, any more than an ant or a bee knows what the goal of the hive is. They’re following a very simple set of instructions that leads to the complex behavior an insect colony displays. Sounds familiar :grinning_face_with_smiling_eyes:
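That "simple rules, complex behavior" point is a well-studied phenomenon. A classic toy illustration (not from this thread, just a standard example) is Langton's ant: a single agent with two trivial local rules that nonetheless produces chaotic, then unexpectedly ordered, behavior nobody programmed in. A minimal sketch:

```python
def langtons_ant(steps):
    """Run Langton's ant for the given number of steps.

    Rules: on a white cell, turn right; on a black cell, turn left;
    then flip the cell's color and move forward one square.
    Returns the sparse set of black cells as a dict.
    """
    grid = {}            # sparse grid: (x, y) -> True means the cell is black
    x, y = 0, 0          # ant starts at the origin
    dx, dy = 0, -1       # facing "up" (y grows downward, screen-style)
    for _ in range(steps):
        if grid.get((x, y)):        # black cell: turn left, flip to white
            dx, dy = dy, -dx
            del grid[(x, y)]
        else:                       # white cell: turn right, flip to black
            dx, dy = -dy, dx
            grid[(x, y)] = True
        x, y = x + dx, y + dy       # step forward
    return grid

# After roughly 10,000 steps the ant escapes its chaotic phase and starts
# building a repeating diagonal "highway" -- emergent order from two rules.
print(len(langtons_ant(11000)), "black cells")
```

The colony analogy is the same in spirit: no individual rule mentions a highway (or a hive), yet the pattern appears at the system level.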

That’s completely out of our control, so all we can do is stay well aware of the danger of purely self-interested people being in control of everything, and keep working on something whose simplicity and nearly endless uses could easily lead the world in the opposite direction from the one it’s headed in now. Until then it’s going to be a rough ride. And if we can’t make the transition from the haphazard non-systems we have now, based on the ideas of 18th-century farmers, to ones suited to managing a modern, highly technological civilization, we’re going to be roasted by 2100.

That is going to require imagination, and the conviction that it is possible even if we haven’t figured it out yet. It looks impossible, but it won’t just happen quickly on its own in any case. Part of what we’re doing here is imagining the future we would like to see, instead of the one we’re being told will happen no matter what. There are infinitely more ways to be lifeless than to be alive, so I’m focusing on the smaller set.

Forgot to add: because we don’t know how much of our behavior is influenced by being part of the superorganism, for all we know we’ve been sent/drawn here to help solve the problem.