Lawmakers are scrambling to make sense of the pace at which AI is developing — but are they asking the right questions and talking to the right people?
Yes, AI experts are certainly knowledgeable about how AI works, but they can hardly be objective about questions such as whether the AI industry should exist, or how fast and how far it should proceed.
You write, "And I must confess that I’m not sure I trust people who have an agenda, who can mobilize fast, and who have the ear of decision makers, to get things right."
Indeed, their goal is to get things right for themselves.
You write, "Of course, a robust, agile, and responsive regulatory framework for advanced technologies, is important. "
I just watched the excellent Netflix documentary about the Bernie Madoff scam. A key takeaway was how ***spectacularly*** incompetent SEC regulation was: the agency was handed the case on a silver platter multiple times over a period of years and never moved on it. It was only the 2008 financial crisis that finally brought Madoff down. The point being that talk of regulation and governance schemes may be mostly a method of pacifying the public while the AI industry is pushed past the point of no return.
What I don't hear from anybody anywhere is an answer to this question:
What are the compelling benefits of AI that justify taking on yet another risk at a time when we already face so many?
Here's what we'll hear instead:
1) Don't worry about AI because it's not really a threat yet.
2) Once it is a threat, we'll hear this: Don't worry about AI because it's too late to do anything about it.
If we want to better understand what's going to happen with AI, we shouldn't listen to industry experts but to the history of nuclear weapons, which documents in sobering detail our inability to manage technologies of awesome scale.
I like your article, but I might advise more caution regarding being "reasonable" and "realistic". None of that has worked with nuclear weapons.
https://www.tannytalk.com/p/the-future-of-ai