In a new blog post, Yoshua Bengio lays out his rationale for why we should be paying far more attention to the existential risks posed by future artificial intelligence. He writes:
"A superintelligent AI system that is autonomous and goal-directed would be a potentially rogue AI if its goals do not strictly include the well-being of humanity and the biosphere, i.e., if it is not sufficiently aligned with human rights and values to guarantee acting in ways that avoid harm to humanity."
Given that the species developing AI routinely ignores the well-being of humanity and the biosphere, and that there is widespread, often violent disagreement within that species regarding what human rights and values should be, and...
Given that AI has nowhere to obtain its values other than from this confused, violent species and/or the world of nature at large, both of which are governed by the rules of evolution, such as survival of the fittest and the strong dominating the weak...
When referring to rogue AI, why are we still using the word "potentially"?