Five voices, five pieces
To start the new year, I thought I'd highlight five fellow writers I enjoy reading on Substack, and a recent post from each of them that grabbed my attention.
In today’s “attention economy” it’s sometimes hard to be heard as a writer, even if you have something interesting and worthwhile to say. And so this week I thought I’d carve out a bit of space for five writers on Substack I enjoy reading but whom, at least in some cases, you may not have come across before.
I hope you enjoy the five pieces I’ve chosen — one from each of the “five voices”. And if you do, please support them by subscribing to their Substacks.
Mark Daley
Noetic Engines
Mark’s a good friend and colleague, and Chief AI Officer at Western University in London, Ontario. He’s also one of the most forward-thinking and informed people I know working at the cutting edge of artificial intelligence across multiple disciplines.
Mark’s Substack, Noetic Engines, is an exploration of the evolving relationship between intelligence (artificial and otherwise), knowledge, and what it means to be human in the 21st century, and is a must-read for anyone who’s seriously interested in where advances in AI are taking us.
Out of his recent articles, I wanted to highlight Mark’s post on OpenAI’s o3 model from December, Punctuated Equilibria, in part because I couldn’t bring myself to do a deep dive into the new model over the holiday period, but mainly because Mark is way more insightful and eloquent here than I could ever be.
Definitely a must-read:
Jamey Wetmore
Tech Skeptic Goes Electric
Another good friend and colleague, Jamey Wetmore started writing about his first-hand experiences of becoming an electric car owner early in 2024. What makes his Substack Tech Skeptic Goes Electric stand out, though, is that Jamey’s also a leading expert on the intersection of emerging technologies and society, and has been studying how electric (and autonomous) vehicles fit into society for longer than I can remember!
I’d strongly recommend a deep dive into Jamey’s past posts for their blend of academic expertise, real-world insights, and practical experience. But for this article I wanted to highlight a recent post on a possible weakening of the link between owning an electric vehicle and being concerned about the environment:
Andy Revkin
Sustain What
Sticking with the environmental theme, my third recommendation is Andy Revkin and his Substack Sustain What. Andy’s been writing and reporting on the environment and sustainability for decades now — many will remember his New York Times blog Dot Earth.
Sustain What is more of a typical blog than my first two recommendations, as Andy writes candidly about his thoughts, observations, frustrations, and passions. Through it all, though, there’s a respect for following the evidence, challenging uncritical thinking, and encouraging open dialogue that I find both refreshing and compelling.
Out of Andy’s many posts the one I wanted to highlight revisits his 2018 book Weather: An Illustrated History, from Cloud Atlases to Climate Change — in Andy’s words, “an illustrated volume exploring 100 moments in humanity’s relationship with, and understanding of, weather and the climate system.”
Highly recommended reading for the insights it provides into the book, and as a great introduction to his style and perspectives:
Ethan Mollick
One Useful Thing
I suspect that Ethan Mollick won’t need an introduction for many readers here, as he’s one of the more prominent writers on generative AI around, especially with respect to its uses and impacts. But I wanted to include him anyway, as he has plenty of things to say that are worth paying attention to.
Ethan’s Substack, One Useful Thing, focuses on his personal explorations and thinking around the implications of AI. It’s worth exploring at length, but the piece I wanted to highlight here is one that particularly stood out to me a few weeks ago.
His article 15 Times to use AI, and 5 Not to is, to me, one of the most useful guides to using generative AI I’ve read in a while, mainly because it eschews black-and-white rules in favor of practical advice that guides readers toward making smart choices as capabilities change on a weekly basis.
Very highly recommended reading:
Melanie Mitchell
AI: A Guide for Thinking Humans
For my fifth and final recommendation I want to stick with AI and go back to where I started with OpenAI’s o3 model — and Melanie Mitchell’s Substack AI: A Guide for Thinking Humans.
Melanie’s a professor at the prestigious Santa Fe Institute and a widely recognized thinker and researcher in the areas of artificial intelligence, cognitive science, and complex systems. Her Substack reflects this with deep dives into the cutting edge of AI that are both grounded in her considerable expertise and accessible to a broad audience.
The post I want to highlight here is one she wrote just after OpenAI announced their game-changing model o3. The post, Did OpenAI Just Solve Abstract Reasoning?, takes a clear-eyed look at claims that o3 represents a profound step forward in AI-based reasoning, and places the assessment methods used in context.
What I find particularly interesting and relevant in this article is the acknowledgement that emerging AI models like OpenAI’s o3 are pushing far beyond critiques that large language model-based AIs are simply “stochastic parrots,” taking us into new territory, but that we still don’t know what this means in terms of human-like reasoning in machines:
Hopefully you’ll find someone or something here that grabs your interest. And if you do, please do remember to support them by subscribing to their Substack!