(From Twitter)
Universal politeness is a problem that makes the world worse.
Most people have a difficult time judging ideas, so they end up going along with whatever social framework the ideas are presented within. Which ideas they take seriously is a social, not an intellectual, phenomenon. Therefore, what the "smart people" seem to respect gains immediate respect. The framework determines the discussion.
The problem arises when terrible ideas, presented by bad thinkers (or disingenuous ones), are treated with undeserved respect. It's like going along with a lie because you don't want to be rude. This happens all the time outside the world of ideas, and it's why we get "respected" martial artists who are incompetent, teaching other people to be incompetent.
In the worst cases, it's how you end up with priests abusing people for decades. Nobody wants to speak up, be rude, and risk rocking the boat.
I see it right now with the Rationalist AI Exterminationists. All the signs of a cult, including impending apocalypse and polycules.
Polite engagement with the ideas of the cult leader is not what's needed. In fact, if this is actually a cult, such behavior makes the world worse. What's needed is somebody saying "Wake up, dummies. You're trapped in a cult. The signs are there."
The correct response to Applewhite is not, "But what if the Earth *is* about to be spaded over by aliens?! We only get one shot at this..."
It's, "This guy has signs of mental illness and looks to be building a cult." If that's rude, then I'll be the rude one, and you can thank me in a few years.
I'll also say that this is doing the opposite of persuading me toward your point of view and away from theirs. It's not that you're being rude; it's that you seem to be refusing to engage with the ideas on an intellectual level. But maybe I'm not most people, so it could be true that you're doing the world a favor here (if you're right about your AI prediction).
I just saw you say this in a tweet:
"Polite engagement with the ideas of the cult leader is not what's needed. In fact, if this is actually a cult, such behavior makes the world worse. What's needed is somebody saying 'Wake up, dummies. You're trapped in a cult. The signs are there.'"
I guess that answers my question about whether you plan to refute the rationale behind their predictions. To me this sounds anti-intellectual. It sounds a lot like what book burners would say, or what progressive university students say to justify banning people from speaking at schools. They also think that even engaging with certain ideas would make the world worse.
You say you aren't smart enough to evaluate these ideas, and I believe you, because you've just equated "not trying to refute the ideas" with "book burning," de-platforming, and anti-intellectualism.
I haven't said Yud should be barred from speaking. I'm saying his ideas do not meet my standards for engagement. Such standards are necessary in this world; otherwise all one's time will be wasted engaging with ideas that 1) shriek the loudest (aliens are coming!) and 2) are put forward by true believers who are incapable of telling good ideas from bad.
There are actually people out there who have written 500-page books trying to explain to flat-earthers why their arguments fail. And of course, the believers remain unpersuaded. It is a waste of time. There must be standards, and Yud does not meet mine.
Maybe that equation isn't completely accurate, but you are being anti-intellectual. Here's a better analogy. What you are doing when you call him a cultist is the same as a progressive calling someone a Nazi and moving on, claiming that to even engage with their ideas is to legitimize evil and create a worse world.
Maybe what Yud is saying doesn't meet your standards, and that's fine. I don't want this to be about him. What about all the other people in the field who are creating this technology and are also saying doom is a possibility (albeit less likely)? Do they not meet your standards either?
No, when a progressive calls somebody a "nazi," that's intended as a slur. When I call Yud a cultist, I am making the straightforward claim that he is actually a cult leader. Hence the comparison to Applewhite. He has the traits of a cult leader. He has a following of a cult leader. He has the apocalyptic vision, the Grand Plan, and the techno-babble of a cult leader. So, I say he's actually a cult leader who shows signs of mental illness.
I am on record saying there are serious risks from AI and real disaster scenarios. This is another reason why engaging with Yud is a framing error--when the discussion is about diamondoid bacteria replicating in the sky and blocking out the sun, people are trapped in a framing error that makes the world a worse place.
By the way, I think I'm aware of the disaster scenarios you've talked about. As far as I know, they involve humans using AI to do bad things, not AI by itself doing bad things. Or am I wrong? Is that the framing error you're referring to?
I think you're jumping to some wild conclusions here. I don't know enough about him either (I've just listened to every interview he's been on this year), but I would gladly take that bet.
Maybe it's the fact that he advocated for government intervention that rubs you the wrong way.
Instead of pegging him as a cultist, I'd like to see you take on his arguments (and those of others who say human extinction is a significant possibility). I'd like you to do that for my own sake. I've seen the arguments for why a superintelligent AI would do this, and they make sense. Since I'm not smart enough, I'm looking for refutations of those arguments, and seeing this uncharitable slander (from you, no less) doesn't give me much confidence that they are wrong.
If you're willing to take on those arguments specifically, I just want to point you to this short video that made things click for me as to why AI would turn on us: https://youtu.be/ZeecOKBus3Q
This take seems uncharacteristically uncharitable coming from you. I more or less agree that if the only way we can save humanity from certain doom is to implement global totalitarianism, then humanity is just doomed and we shouldn’t bother. You’ve been trying to take the doomsayers seriously in your previous comments, which I commend. I realize how difficult it can sometimes be to apply charity, so I do not intend this criticism to be a call for Thor to strike you with lightning. I just wanted you to know I think you usually achieve a higher standard than this.
Maybe it would be kinder to praise you when you make interesting and worthwhile posts, and remain silent when you goof up. But all that praise might get tedious! Just saying “plus one” now and then doesn’t really add anything.
AI is coming, whether the Yudkowsky crowd and similar cults like it or not. Actually, to a great extent it's already here. Computer programs are already much better than top human chess players. Speech recognition is already in place. Vast databases of knowledge grow by leaps and bounds every day, as does the ability of programs to access that knowledge intelligently.
Is there really anything to fear? I don't think so. Oh, all right, I fear packs of AI robot dogs acting like terminators, but bad guys having superior weaponry is nothing new. I don't fear Skynet, because humans have too much fun shooting off bombs manually to turn the decision over to a computer.
I would recommend you study the actual decision-making systems behind waging nuclear war. (Much of it is in the public domain.) It has been completely computerized since the Carter administration (late '70s); by that time, the response times for a nuclear threat had become so short that human involvement had to be reduced to a minimum.
Basically, the US president has a few minutes to decide how to respond to a credible threat, and in case the president is unable to do so, there is a long chain of backups. This fact alone has completely transformed the US political system. There is no time for any sort of meaningful political debate. The threat evaluation is computerized: the algorithm gives you a probability and a threat level, and offers a menu of response options. After this, basically everything is automatic and almost impossible to shut down.
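The flow just described (sensor reports in, a probability and threat level out, then a fixed response menu) can be sketched as a toy program. To be clear, every function name, combination formula, and threshold below is an invented illustration, not a description of any real system:

```python
# Toy sketch of a computerized threat-evaluation pipeline: score incoming
# sensor reports, map the score to a threat level, present a fixed menu
# of responses. All values here are hypothetical.

def evaluate_threat(sensor_confidences):
    """Combine independent sensor confidences into one overall probability."""
    p_none_real = 1.0
    for c in sensor_confidences:
        p_none_real *= (1.0 - c)
    return 1.0 - p_none_real  # chance that at least one detection is real

def threat_level(p):
    if p < 0.25:
        return "LOW"
    if p < 0.75:
        return "ELEVATED"
    return "CRITICAL"

RESPONSE_MENU = {
    "LOW": ["continue monitoring"],
    "ELEVATED": ["raise alert status", "request confirmation"],
    "CRITICAL": ["present response options to the decision-maker"],
}

p = evaluate_threat([0.6, 0.5])  # two sensors report a possible launch
print(threat_level(p), RESPONSE_MENU[threat_level(p)])
```

The point of the sketch is the shape of the system: once the score crosses a threshold, the menu is fixed and the human is reduced to picking from it.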
The Soviets/Russians suffer from similar problems. There are several documented occasions on their side when the system gave a false alert, and full-scale nuclear war was averted only because the officer on duty disobeyed protocol.
We have already been living for decades in the insane dystopia that you seem to deny. Artificial General Intelligence (AGI) will make things orders of magnitude more dangerous. Just one quick example: the AI doesn't need any direct access to the weapons systems; if it wants to launch, it just needs to spoof, hack, or scam the threat-evaluation system.
... and still a human actually pushes the button.
Before AI: risk of global annihilation via nuclear war.
After AI: risk of global annihilation via nuclear war.
Maybe, as you say, AI will increase the risk. It could also reduce risk: for example, parallel programs could be set up to watch each other and rat out one that has gone off the reservation.
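That cross-checking idea can be sketched minimally: several independent watchers compute the same decision, and a supervisor flags any watcher that disagrees with the majority. The watcher names and decisions below are purely hypothetical:

```python
from collections import Counter

def majority_check(outputs):
    """outputs maps watcher name -> decision; returns (consensus, outliers)."""
    consensus, _ = Counter(outputs.values()).most_common(1)[0]
    outliers = [name for name, decision in outputs.items() if decision != consensus]
    return consensus, outliers

# Three redundant watchers evaluate the same situation; one has gone off
# the reservation and is reported by the other two.
votes = {"watcher_a": "stand down", "watcher_b": "stand down", "watcher_c": "escalate"}
consensus, rogue = majority_check(votes)
print(consensus, rogue)  # → stand down ['watcher_c']
```

Whether such redundancy helps against a deceptive AGI is exactly the point in dispute, but this is the mechanical form the "watch each other" proposal takes.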
Steve, why does history repeat itself? Is the universe naturally cyclical? I think you're repeating history from my other philosopher for another time, don't you think?
Thank you so much 🙏🏼