11 Comments

"Students are learning through creative thinking and hands-on experience how to extract value from emerging AI tools and capabilities far faster than most of their instructors are."

This has not been my experience. My experience has been that educators are following the discourse and are working hard to figure out how to apply AI in the context of their work. We know that AI is the future and it is moving fast, but we must also be very careful about which aspects of student work get disrupted and which get protected. I have found that most students are incapable of effective prompting because their critical thinking and composition skills lack focussed, complex articulation. Most of my students have a shallow understanding of how to use AI, or even of which options are available to them.


I like the triple-S graphs, thanks. Here in Oz, higher ed has been moving faster than most countries over the last 18 months to address the AI-induced crisis in assuring academic integrity.

A distillation of universities’ current strategic responses: https://www.teqsa.gov.au/about-us/news-and-events/latest-news/gen-ai-strategies-australian-higher-education-emerging-practice

My university’s strategy: https://lx.uts.edu.au/blog/2024/09/02/next-steps-for-genai-and-assessment-reform-uts-response-teqsa/

This is deep sociotechnical systems change.


Thanks Simon!


That’s the technology’s education problem as much as (if not more than) education’s technology problem. Said differently: AI suffers from a deep product-marketing problem, because the frontier labs were primarily concerned with funding research into more generalized capabilities via broadly approachable chatbots. The apps have yet to materialize, and workflows have not yet been optimized for any one vertical - definitely not for education from the ground up, though educators CAN adopt them to great results. No educator is paid well enough to take that risk, nor are they incentivized to.


Last year, I developed an applied capability maturity model for AI in my space (online education for adults), with levels from 1 to 5. This was before OpenAI came up with their levels. My model is called EMERALD; it is easy to google/perplexity to find it. During the year, I was mostly concerned with how to move from L2 to L3 - in other words, from using intro models unsystematically to embedding custom GPTs everywhere. Based on 11 months' experience, I found that moving from L1 (digital literacy) to L2 was even harder. Initially, I was not a silent cyborg, as defined by Mollick, but am seriously considering it now. Does anybody know how and where silent cyborgs unite, besides YouTube?
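For anyone who finds a concrete sketch easier to parse than prose, here is a minimal, hypothetical rendering of that kind of 1-to-5 scale. Only the levels mentioned above are filled in; the names and the upper two entries are placeholders of my own, not the actual EMERALD definitions.

```python
from enum import IntEnum


class AIMaturityLevel(IntEnum):
    """Sketch of a 1-5 capability maturity scale for AI adoption.

    Only the levels described in the comment above are annotated;
    the upper two are left as placeholders rather than guessed at.
    """
    L1_DIGITAL_LITERACY = 1      # baseline digital literacy
    L2_UNSYSTEMATIC_USE = 2      # using intro models ad hoc, no shared practice
    L3_EMBEDDED_CUSTOM_GPTS = 3  # custom GPTs embedded across courses/workflows
    L4_UNSPECIFIED = 4           # not described in the comment
    L5_UNSPECIFIED = 5           # not described in the comment


def next_target(current: AIMaturityLevel) -> AIMaturityLevel:
    """Return the next level to aim for, capped at the top of the scale."""
    return AIMaturityLevel(min(current + 1, AIMaturityLevel.L5_UNSPECIFIED))


# The transition the comment found hardest in practice:
print(next_target(AIMaturityLevel.L1_DIGITAL_LITERACY))  # -> L2_UNSYSTEMATIC_USE
```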


Wow. Thank you for this!

I do not mind sharing that I am currently in a very challenging role in which I am writing, and delivering, training courses for a very well-established and respected training organisation based in the Gulf.

We are assessed on a daily basis (think ‘Tripadvisor’) and therefore live and die by our scores. Scores of 3 or less trigger alarms … and I seem to be receiving more than my fair share of them!

This article perfectly articulates the environment into which I have thrown myself and in which I am determined to stay.

Why?

Because regardless of the pain I go through, almost on a daily basis (btw I get mostly 5 out of 5’s, but these do not get mentioned!), the rewards are exponential.

And I am not just talking financially!

So thank you Andrew.

I will endeavour to incorporate this message into my next review by trying to articulate to the organisation that unless we equip ourselves (especially me!) with ‘agile life vests’ that can take the punches, we will not last long enough to reach that Gladwellian ‘AI Tipping Point’.

Wish me luck … or help me find another job quick 😂?!


I am angry. You will "hear" that in my words. I am also a 19-year veteran classroom teacher: 10 years teaching English, 9 years teaching Computer Science. And 15 years working in technology: Salesforce.com, Grameen Foundation, startups.

My favorite part of what you wrote is the graphs without any units. The "lag" you see is not poor, stupid, overworked teachers toiling in the boiler rooms of education, unable to see what you can see from the heights of your tower (old metaphor, but it still works :)). We are making choices.

In the third paragraph you mention AI's "ability to bring about transformational leaps in education," but you leave the details for another day and instead focus on your graphs. The ones without any units. Your entire argument in this post is FOMO. You give no reason why I should stop lagging other than that it's better than it was in 2022 and that Apple/Google/etc. are doing it.

Here are the two best arguments that explain why I lag.

#1: AI at scale today kills and maims children

More children have been murdered or maimed by AI than have been helped, and we have no data to support that they will ever be helped. GenAI is used to slaughter Palestinian children. Its use at scale in social media, predictive policing, insurance, health care, and financial services harms children. You can bet the Trump administration will use it to target children for deportation.

#2: Deskilling

https://www.nber.org/papers/w31161

AI increases the productivity of less experienced workers in call centers.

Exploring the Impact of ChatGPT on Business School Education: Prospects, Boundaries, and Paradoxes

https://journals.sagepub.com/doi/epub/10.1177/10525629241261313

The Expertise Paradox suggests that AI’s adequate performance at lower-level tasks may weaken students’ development of higher-level thinking; the Innovation Paradox states that AI’s creativity aid could stifle original thinking; and the Equity Paradox highlights AI’s potential to provide immense gains to experts but disproportionately harm novices.


I still find many campuses struggling to come up with a strategic response to AI at all.


Hi. I'm developing a strategic landscape for AI in HE on a platform called Miro. It's a mind-map and a little messy at present, because of the struggle to align the different perspectives on AI in Higher Education. As a PhD candidate, I have seen AI massively impact research practice (and that impact will grow so fast). You have to have sympathy for the faculty trying to deal with this level of disruption and the pace of change. Pretty much every AI policy document I read is inadequate and out of date. Lastly, you have to feel for the students, who are the people whose voice is little heard yet arguably the most important. (Father of three: one graduated, one on a mid-course work placement, one starting in '25.)


As ever, we will end up writing policy for the (fictional) world model we've constructed in our minds. For slow-moving/static endeavours, our mental models are generally pretty good reflections of reality. For fast-moving technology though, as you point out, this is less so.

I feel an even greater sense of concern because I reject the idea that progress will suddenly plateau (as fashionable, and palatable, as that idea is); AlphaProof/AlphaGeometry, o1, QwQ-32B, etc. are early exemplars of the power of flexible inference-time compute. This, alone, is a paradigm that still has room to scale. (That said, for the average human, the ability to tell the difference between a machine with a human-equivalent IQ of 250 and one of 350 is less obvious!)
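As a toy sketch of what "flexible inference-time compute" means in practice: spend more compute at answer time by sampling several candidate responses and keeping the one a scorer rates highest (best-of-n). The generate and score functions below are hypothetical stand-ins, not any particular model's API.

```python
import random
from typing import Callable


def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 8) -> str:
    """Trade extra inference-time compute for quality: sample n candidate
    answers and return the one the scorer/verifier rates highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))


# Hypothetical stand-ins so the sketch runs end to end; a real system
# would call a language model and a verifier or reward model instead.
def toy_generate(prompt: str) -> str:
    return f"candidate-{random.randint(0, 999)}"


def toy_score(prompt: str, answer: str) -> float:
    return random.random()


print(best_of_n("prove the lemma", toy_generate, toy_score, n=4))
```

Doubling n doubles the cost per answer without touching training at all, which is why this axis can keep scaling even if gains from pre-training slow down.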


Yeah, I was finding it hard not to go off on a tangent about just how complex and entangled tech S-curves are: a single curve is just an out-of-context snapshot of a whole ecosystem of entwining curves that, together, show a different type of behavior. And, of course, as you point out, there is strong disagreement about whether and when the current developments in AI will plateau.

I decided to avoid the tangent, as it would have made the piece too convoluted and would have detracted from the point -- which of course becomes even more pointed if those capability and utility curves are still heading upward at an increasing rate while perception is still stuck in the equivalent of the dark ages 🙂
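Though for the curious, the gist of the tangent fits in a few unitless lines: each technology follows its own logistic S-curve and plateaus, but the sum of several staggered S-curves can keep climbing long after any individual curve has flattened. The parameters below are purely illustrative.

```python
import numpy as np


def logistic(t: np.ndarray, midpoint: float, rate: float = 1.0,
             ceiling: float = 1.0) -> np.ndarray:
    """A single technology S-curve: slow start, rapid growth, plateau."""
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))


t = np.linspace(0, 20, 201)

# Three successive technologies, each taking off as the previous one levels out.
curves = [logistic(t, midpoint=m) for m in (4, 9, 14)]
ecosystem = np.sum(curves, axis=0)

# Each individual curve flattens near its ceiling of 1.0,
# while the aggregate is still rising at the end of the window.
print(f"first curve at t=20:   {curves[0][-1]:.2f}")
print(f"ecosystem sum at t=20: {ecosystem[-1]:.2f}")
```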
