"....(AI concerns) are not typically based on science fiction speculation or cynical fear mongering, as some have suggested. Rather, they reflect the speed with which a potentially transformative technology is emerging faster than our ability to understand and govern the possible social and economic consequences."
Yes, good point. Now let's keep going with that.
AI will further accelerate the knowledge explosion, just like computers and the Internet have. Knowledge development feeds back upon itself, leading to an accelerating rate of knowledge development. Thus, if we're already unable to "understand and govern the possible social and economic consequences" that is only going to get worse over time, at a faster and faster pace.
We don't need academia to jump on the AI bandwagon like everybody else. We need intellectuals to be asking big picture questions like this...
What are the compelling benefits of AI which justify taking on even more risk at a time when we already face so many? Is the marriage between violent men and an accelerating knowledge explosion sustainable?
More to the point, we need academics focused on the knowledge explosion itself, the source of the accelerating pace of disruption. Some disruption is good, more and more, faster and faster without limit is not. So, how do we take control of the pace of the knowledge explosion so that it proceeds at a pace which society at large can constructively adapt to?
At the heart of such challenges is a simplistic, outdated, and increasingly dangerous "more is better" relationship with knowledge left over from the 19th century. if academics won't examine and challenge this outdated knowledge philosophy, who will?
You write...
"....(AI concerns) are not typically based on science fiction speculation or cynical fear mongering, as some have suggested. Rather, they reflect the speed with which a potentially transformative technology is emerging faster than our ability to understand and govern the possible social and economic consequences."
Yes, good point. Now let's keep going with that.
AI will further accelerate the knowledge explosion, just as computers and the Internet have. Knowledge development feeds back upon itself, leading to an ever faster rate of knowledge development. Thus, if we're already unable to "understand and govern the possible social and economic consequences," that is only going to get worse over time, at a faster and faster pace.
We don't need academia to jump on the AI bandwagon like everybody else. We need intellectuals to be asking big-picture questions like these...
What are the compelling benefits of AI which justify taking on even more risk at a time when we already face so many? Is the marriage between violent men and an accelerating knowledge explosion sustainable?
More to the point, we need academics focused on the knowledge explosion itself, the source of the accelerating pace of disruption. Some disruption is good; ever more disruption, ever faster, without limit, is not. So, how do we take control of the knowledge explosion so that it proceeds at a pace to which society at large can constructively adapt?
At the heart of such challenges is a simplistic, outdated, and increasingly dangerous "more is better" relationship with knowledge left over from the 19th century. If academics won't examine and challenge this outdated knowledge philosophy, who will?
https://www.tannytalk.com/p/our-relationship-with-knowledge