In A5, can the agent adjust or change goals that were set by humans? Even better, can it decide on goals by itself?
This framework offers a promising path forward, not because it tries to define what AI is, but because it focuses on what AI does in context. That’s the shift we need. Creating agentic profiles based on autonomy, efficacy, goal complexity, and generality feels grounded and actionable, especially as we move toward more dynamic and embedded systems. It gives us a way to think beyond static classifications and assess how these systems interact with their environments and what kind of governance they require.
But the challenges are real. Without standard methods for measuring these dimensions, it's hard to imagine consistent or enforceable adoption across the board. Companies are unlikely to voluntarily classify their models in a way that opens them up to more regulation, and many systems operate in such closed or proprietary ways that meaningful oversight is impossible. Even if we did agree on a framework, AI systems are so context-dependent that a profile could shift dramatically with something as simple as a new plug-in or tool integration. So while I see this as a helpful structure, we will likely need technical refinement and a serious shift in institutional capacity and public will if we want it to be more than a paper exercise.
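To make that concrete, here is a rough sketch (my own illustration, not the framework's actual formalism or scoring method) of what an agentic profile might look like as a simple data structure, and how a single tool integration could shift it:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class AgenticProfile:
    """Illustrative 0-1 scores on the four dimensions discussed above."""
    autonomy: float         # how much the system acts without human sign-off
    efficacy: float         # how much real-world effect its actions have
    goal_complexity: float  # how open-ended the goals it pursues are
    generality: float       # how many domains it can operate across

# A hypothetical chat assistant, before and after a web-browsing tool is added.
base = AgenticProfile(autonomy=0.2, efficacy=0.1, goal_complexity=0.3, generality=0.4)
with_browsing = replace(base, autonomy=0.5, efficacy=0.4, generality=0.6)

print(base)
print(with_browsing)  # same underlying model, a noticeably different profile
```

The numbers are invented; the point is that the governance-relevant object is the profile of the deployed, tool-equipped system, not the model in isolation.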
Coming out of my work cocoon as the ASU year ends, I've been engrossed in this Substack and in the HSD Ph.D. program's work more generally. Thank you for updating & sharing as you do.
This summer my wife (1st grade Spanish immersion teacher) and I (full-time at Central Arizona College & ASU ENG associate faculty) want to use our personal observations, our sons' interests, & their learning needs to prompt-engineer individualized AI agents/life tutors that can begin mentoring our children across focused, diverse life-learning situations. Any suggestions on how to start prompt-engineering a safe, adaptable AI agent that can engage in dialogue & activities through appropriate personas aligned to our kiddos' respective needs & interests?
Our “ositos,” 6 & 7 and dual Mexican-American citizens with foundations in English and Spanish, may benefit from our guided assistance this summer in supplementing their school mentors with personalized AI for adaptive conversations & activities, in support of their dialectic skills & overall learning, in & out of the classroom (something you've written about, and something I did in my Spring '25 ASU ENG 102 courses, where students leveraged ChatGPT as research assistants).
One, a 7-year-old, struggles with some schoolbook learning but loves going with me to concerts & learning piano via YouTube tutorials on “Sympathy for the Devil,” “Beautiful Day,” the Super Mario Bros. theme, & movie scores, which suggests he'd learn well through AI-generated song & karaoke across the curriculum (e.g., via the Suno song & lyric generator).
Our 6-year-old with Level 2 ASD & ADHD, who acts out in kindergarten after finishing worksheets in a tenth of the time his classmates take, needs to build social-emotional and conversational skills, and he flourishes when interacting with “princesses” ranging from Princess Peach to Snow White to Wonder Woman.
I dove deep into ChatGPT as a research assistant over the past four semesters but haven't used Gemini, Manus, or many other tools extensively. While we may supplement our AI agents with work on educational platforms like Khanmigo & interest-based tools like Suno, we'd appreciate your thoughts on how to responsibly & effectively approach creating AI “life tutors” for our kiddos.
I think the challenge here is the age of your kids and ensuring appropriate guardrails are in place. OpenAI's GPTs are good, but also a little unreliable -- especially when the underlying model is updated. It's worth checking out companies developing custom solutions -- I'm not sure what a platform like Khanmigo is like (https://www.khanmigo.ai/). Axio is another one to check out: https://axio.ai/about
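If you do end up rolling your own rather than using a packaged product, here's a minimal sketch, assuming the OpenAI Python SDK, of the kind of guardrailed setup I mean -- the persona, rules, and model name are placeholders I'm making up for illustration, not a vetted child-safe configuration:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder guardrail prompt for a "princess mentor" persona; adapt to your own kids.
SYSTEM_PROMPT = """You are a kind, encouraging tutor who speaks as a friendly princess character.
Rules:
- Use simple English and Spanish suitable for a 6-year-old.
- Keep each reply under 80 words and end with one gentle question.
- Stick to age-appropriate topics (school, feelings, games, music, stories).
- Never ask for personal information; if something seems unsafe, suggest talking to a parent.
"""

def tutor_reply(child_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": child_message},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

print(tutor_reply("Can we practice saying hello in Spanish?"))
```

Even with a prompt like this, treat it as parent-mediated: sit with them, review transcripts, and expect the persona to drift when the model changes -- which is exactly the reliability issue above.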
Interested to see what others recommend here.