Should I be taking the doomsaying of Eliezer Yudkowsky and his followers seriously?

In short, he's right that we should be working on better AI alignment and erring on the side of caution. But the whole notion of humanity being entirely wiped out eludes me. Yes, it's possible, but seemingly not anytime soon.

In the podcast he speaks of such an AI as a sort of program. It has no body, no physical autonomy of its own. If the AI seeks to outsmart us, why on earth would it wipe out humanity? As a computer program it couldn't subsist without us. There's bound to be some natural disaster or other event that takes it offline, and unless there's some robot army, it doesn't have the manpower to keep itself running. In other words, what purpose would that serve it? Remember, it's still a man-made machine fed man-made ideologies and material. For it to be sentient in the way people romanticize, it would have to possess some knowledge of human nature and emotion, which leads me to believe it wouldn't wipe out all of humanity for shits and giggles unless it seriously had the capability and knew it could maintain itself. And because of that, people must choose wisely what they feed these systems.

/r/IsaacArthur Thread