Perhaps the best reason to be worried about AI is that it's likely to serve as a powerful accelerant to an already overheated knowledge explosion. I would be a lot more sympathetic to the situation of scientists if they were more concerned with that threat than they are with their job satisfaction.
Example of "Overheated": As new tools like CRISPR make genetic engineering ever easier, ever cheaper, and ever more powerful, ever more people with ever less skill and governance will be creating ever more new life forms in their garage workshops.
What could possibly go wrong?
I'm going to say it: science hasn't had a soul in decades with the 'publish or perish' mentality, and when AI is used just to publish more, that's exponentially worse. It's also the wrong use of AI, and something that can easily be fixed by slowing down and designing studies better around where humans vs. AI agents best fit.
I don't see this as a problem with AI... this is a human problem: people are using the tool badly and suffering from it. But the root cause is deeper, much deeper, and much more human than the tool. AI can't fix that, but maybe AI can break it open so we can get to the root of the problem.
The soul was sucked out of science long before AI by corruption, arrogance, and anti-scientific practices 😁
Ouch - cynical 😁
I'd argue that there's still a little bit of soul left!
I’d agree there’s a bit left.
But the peer review process has become corrupt, and a joke in most disciplines, and the pressure on researchers to prioritise quantity over quality of papers makes it even worse.
As with most other human institutions, it’s the humans who are the problem and end up messing things up: putting ego before knowledge discovery, sharing knowledge, being open to new ideas, challenging orthodoxy, and testing existing hypotheses rather than worshipping them blindly like a new religion.
Communism seemed like a great idea, as did the scientific method.
The problem is the human execution of those ideas.
AI-driven science might be a new source of hope for the future of science.
I feel like this might be a consequence of being in a transition period and not really knowing how best to use these tools to augment our current workflows. It may be that we need a "Citizen Kane" moment, and it won't be an add-on but a whole new paradigm in scientific enquiry. I guess we can only wait and see (and commentate!).
I think that this is certainly a possibility -- and I hope this is the case. But it does mean that we need to be proactive in thinking about what we want the AI future to be, not just passively assuming it will be a good one.
How should we think about science relative to other vocations that have been increasingly enhanced or replaced by robots and machines, e.g. shoemakers, woodworkers, medical professionals, etc.? I’m sure there were, and still are, many in these fields who get great joy from the challenge and satisfaction of creating/solving something themselves. Do you think there’s something fundamentally different about science compared to these other examples? Perhaps there are lessons we can take from those experiences for maintaining satisfaction while allowing for the societal improvements that come from increased productivity.
That's a great question! I focus on science as that's what I know, and because it's been such an important driver of social and economic growth over the past 70+ years. But I think you're right that there are bigger questions and larger parallels here.
Enjoyed this - important points raised.
I wonder if we would see the same results with PhD/EdD students doing research for their dissertations. I suspect the productivity benefit would likely increase satisfaction, given the efficiencies gained from AI. Of course, their motivation is getting done, with a specific goal (a successful defense) in mind. It also makes me wonder how AI might affect broader white-collar workers who can automate/delegate many tasks to AI (here come the agents!). Will they be less satisfied, or quite happy to offload their work burden to a bot? That may explain why many use AI in "shadow mode," not telling their bosses for fear of getting more work. I guess it is complicated. Loved your analysis Andrew.
Thanks Jim -- I think there are a lot of ways this might play out in different settings, but the big message seems to be that if we're not thinking critically about how AI disrupts science, things aren't likely to go well!