Sunday, May 14, 2023

A friend asked me whether I was worried about AI exterminating humanity - I thought I would share my answer here.

I'm not worried PERSONALLY, but there are some very smart people who are.

Geoffrey Hinton, mentioned in the article, recently left Google because of his concern. He has been mentioned quite a bit by Zvi Mowshowitz, who runs a "Geoffrey Hinton Watch" on his blog, to which I subscribe.

Eliezer Yudkowsky, of course, makes that concern the center of his efforts.

And the brilliant Scott Alexander led me to these guys; he was writing about this concern years back...

They might be right that we are only years from the complete doom of the human race, but I am personally less concerned because:

1) I suspect intelligence is overrated

2) Less intelligent people with just a little cunning have been controlling more intelligent people for ages

3) Ramping up AIs is probably going to run into the exponential barriers to calculation that have been plaguing us forever (a toy illustration follows this list)

4) It is going to be hard even for computers to concentrate society's allocation of resources

5) It is not clear to me that superintelligent computers wouldn't cherish us as pets rather than try to eradicate us. There are a lot of atoms in the universe!
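What do I mean by exponential barriers in point 3? Here is a toy Python sketch, not a claim about any particular AI system - the branching factor and depths are made-up numbers, chosen only to show the growth rate of exhaustive search.

# Toy illustration of the exponential barrier in point 3.
# The branching factor (3) and depths below are hypothetical;
# the point is how fast branching_factor ** depth grows.

def brute_force_cost(branching_factor: int, depth: int) -> int:
    """Number of action sequences an exhaustive planner must consider."""
    return branching_factor ** depth

for depth in (10, 20, 40, 80):
    print(f"depth {depth:>2}: {brute_force_cost(3, depth):.3e} sequences")

At depth 80 that is roughly 1.5e38 sequences - more than any conceivable computer can check, no matter how clever the code around it is. Smarter search helps, but the wall is still exponential.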

Besides, I think anybody terrified of superintelligent computers is missing the bigger threat of supermanipulative humans. Most methods of PREVENTING superintelligent computers require SOMEBODY to have an immense amount of power to clamp down on them. I'm more afraid of that SOMEBODY than of the uncertain future.