Vitalik: Afraid of AGI and ASI, supports building intelligence-enhancing tools for humans rather than creating super intelligent life
On December 22, in response to recent AGI-related product developments from OpenAI and the broader AI field, Ethereum founder Vitalik Buterin posted on X: "My definition of AGI (Artificial General Intelligence) is: AGI is an AI powerful enough that, if all humans suddenly disappeared one day and this AI were uploaded into a robot body, it would be able to continue civilization on its own.
Obviously, this is a very difficult definition to measure, but I think it captures the core intuitive difference in many people's minds between 'the AI we are used to' and 'AGI': the transition from a tool that constantly relies on human input to a self-sufficient life form. ASI (Artificial Super Intelligence) is something else entirely: my definition is the point at which humans in the loop no longer add value to productivity (as in chess, a point we only reached within the past decade).
Yes, ASI scares me, and even AGI as I define it scares me, because it carries an obvious risk of loss of control. I support focusing our work on building intelligence-enhancing tools for humans rather than building superintelligent life forms."