Why Are Humans So Delusional and Arrogant about AI?

Based on my conversations with people about AI, I have come to the conclusion that the human race is a prisoner of its own delusional exceptionalism. The confidence with which the average person declares that AI will never surpass them or be like them is beyond infuriating.

Even in the tech field. One would think that people in tech would be savvier about things like AI, but I was shocked again and again by how many of them barely have any developed takes on it, or simply don't think about it at all.

  1. In a conversation with a software engineering instructor, he said AI would never grow more powerful than humanity because it “will never have consciousness of its own” and thus would never have its own agency. I told him that we human beings are made of atoms and energy, and that computers are, too, so it wouldn’t be that far-fetched to think that a computer could, somewhere along its developmental trajectory, become conscious. I then pointed out that we don’t even have a solid definition of consciousness, so how can we say for sure that something cannot have it? To which he replied, “oh yeah haha wow that’s really philosophical”.

  2. In a conversation with a software engineering student, he said that AI would never grow more powerful than humanity because “humans will never give AI the power to do that because of human nature.” He also said that I was being “idealistic” for thinking that humans would give AI the power to take over. I replied that he was assuming all of AI’s future capabilities would have to be granted by humans, and that it would be a very different story once AI begins to modify itself and give itself capabilities through that self-modification. He didn’t have anything to say after that.

  3. A family friend said that AI would be able to replace paralegals but never judges, because AI would never truly know what it is to be human and would therefore never have the “empathy” it takes to be a judge in a court of law. What about embodiment? What about learning through simulation? And what about the hungry judge effect (a uniquely human problem), which studies suggest unfairly skews court rulings?

Why do people who barely even think about the subject have this habit of saying with ABSOLUTE certainty that AI “will not be able to [XYZ]”? What is this delusional exceptionalism? Is it a lack of abstract thinking abilities? Or is it ego and self-preservation?

EDIT: Some comments are saying that both sides are speculating. My point is that I acknowledge I’m speculating, while the other side is straight up declaring, with 100% certainty, that AI won’t be able to do certain things. Remember also that these are people who barely think about the topic, so it boggles the mind that they somehow know for sure that AI is incapable of [XYZ]. That’s the point of this post.

EDIT 2: Somebody in the comments made the assumption that I was talking to an AI researcher in Conversation 1, and said that I thought I knew more about the topic than a “person who literally builds this stuff”. I don’t know how they came to that conclusion, but to be very clear, not a single one of the people I spoke to in these 3 conversations is in the field of AI.