Elon Musk calling for 6 month pause in AI Development

I was not cherry-picking data; I think you are misunderstanding. The median respondent believes the probability that the long-run effect of advanced AI on humanity will be “extremely bad (e.g., human extinction)” is 5%. Many respondents put the chance substantially higher: 48% of respondents gave at least a 10% chance of an extremely bad outcome. (Though another 25% put it at 0%.)

For a dangerous, poorly understood technology, I would argue it's in our best interest to give more weight to the pessimistic views, especially when the pessimistic view is "human extinction or similarly permanent and severe disempowerment of the human species" (from the survey). And for the record, I have been worried about this for years, ever since I was introduced to the work of Eliezer Yudkowsky and Nick Bostrom, who have been publishing on this for almost 20 years. This is not just reactionary fear-mongering. We do not have a solution to the control problem. We do not understand how to control something that is smarter than we are. GPT-4 is not a danger to humanity, and I doubt LLMs are ultimately going to get us to artificial general superintelligence. But the gigantic increase in venture capital being poured into AI and the increasingly competitive arms-race scenario we find ourselves in were identified as problematic long before OpenAI was a company. In fact, if you read their founding statement, it explicitly says they will try to avoid the exact scenario we are in now, because of the potential danger of blindly rushing ahead without enough safety research.

I am not a Luddite. I am incredibly excited to see how LLMs develop and to watch their impact on society. In general, I think the safety problems of LLMs -- misinformation, hallucination, etc. -- are solvable, and that we should continue developing them. AGI and superintelligence are a totally different problem, whose safety risks are orders of magnitude worse than anything we have seen before.

/r/ChatGPT Thread Parent