When Agentic AI Takes Charge – First impressions of Manus
Having just got my hands on the new general AI agent from Manus, I was excited to see what it could do. So I asked it to design, conduct, and write up a human subjects research study – with no humans!
Everything you’re about to read is made up.
Sort of.
I’ve been playing with Manus this past week — a new agentic AI from the Chinese company Monica, and a product the company’s co-founder and chief scientist Yichao “Peak” Ji describes as “the first general AI agent.”
Ji may be overhyping the “first” a bit. But Manus is, nevertheless, quite startling in how it’s able to devise and execute a plan based on an initial prompt.
There are plenty of examples of what Manus can do on the product’s website: planning travel, creating courses, researching products — the usual examples, in other words, of what a web-based agentic AI can do. But I must confess that I’m looking for a lot more from a “general agentic AI.”
And so I decided, rather boldly, to ask it to conceive, design, and conduct a complete research study!
The aim here was to see how capable Manus is in interpreting instructions, setting its own goals, and executing a plan. The actual research study was simply the mechanism for doing this — and this is where the “made up stuff” comes in.
That said, the “made up stuff” was pretty good — as was the ability of Manus to deliver on what I asked of it.
Manus hit the scene a couple of weeks ago, with a select group of people getting early access, and has been causing quite a stir since. Rather than a single AI, it’s a collection of AI tools overseen by a central AI “task-master,” which translates requests into tasks that are then executed by different agents — all while autonomously reviewing and revising the plan along the way.
All of this runs on a virtual computer which is able to autonomously open and interact with web pages, allowing Manus to use the web to research, plan, execute, and report back on the things it’s asked to do.
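Manus’ internals aren’t public, but the general pattern it exemplifies is straightforward to sketch: an orchestrator maintains a to-do list, hands each task to a specialized worker, and revises the plan as results come in. Here is a toy Python sketch of that loop; every name in it is hypothetical, and stubs stand in for the LLM calls.

```python
# Toy sketch of a plan-execute-revise agent loop. Manus' actual architecture
# isn't public; all class, method, and variable names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    done: bool = False
    result: str | None = None

@dataclass
class Orchestrator:
    todo: list[Task] = field(default_factory=list)

    def plan(self, request: str) -> None:
        # In a real system, an LLM would decompose the request into steps.
        self.todo = [Task("research survey platforms"),
                     Task("draft survey questions"),
                     Task("deliver form to user")]

    def execute(self) -> None:
        # Work through the list, revising it after every completed task.
        while (task := next((t for t in self.todo if not t.done), None)):
            task.result = self.dispatch(task)
            task.done = True
            self.revise(task)

    def dispatch(self, task: Task) -> str:
        # A specialized agent (browser, coder, analyst...) would run here.
        return f"completed: {task.description}"

    def revise(self, task: Task) -> None:
        # An LLM would review the result and add, drop, or reorder tasks.
        pass

bot = Orchestrator()
bot.plan("survey undergraduates about AGI")
bot.execute()
```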
If this sounds a little like what companies like OpenAI, Anthropic, and others are working on, it’s because it is. Except that Manus’ approach of using a hierarchy of multiple AIs and its own virtual computer feels substantially different to the competition.
And this came through in spades as I set it the task of conducting its own research study.
A human subjects study without human subjects
For my test case, I set out to see how Manus would handle a request to design and conduct a survey-based study on attitudes toward artificial general intelligence (AGI). The task is interesting as it depends on the AI’s ability to interpret a somewhat vague request, craft a solid survey tool, analyze the collected data, and write up the results.
Running such a study with real human subjects would take a lot of time and effort. Apart from the Institutional Review Board hoops this would entail, it would require funding, time (lots of it), and far more patience than I currently have to find people willing to participate.
So I also asked Manus to create a bunch of synthetic students to run the study with!
At this point it should be clear where the “made up” part of this post begins to kick in. But even though all of the data used were fake, there was method in my madness.
From the outset, I was interested in four things:
Could Manus create, from scratch, a survey tool in Google Forms designed to research undergraduate student attitudes to AGI?
Could it simulate 1,000 students completing the survey — essentially creating a synthetic dataset and using this in the study?
Could it carry out a comprehensive analysis of the resulting data and provide a polished final report? And
Could it do all of this from a very simple initial request?
If you want to cut to the chase, the final report is included at the end of this article, as well as a link to all relevant files on GitHub. But before I get there, it’s worth walking through what I did, an overview of the outcomes, and my reflections on the process — and where it seems agentic AI capabilities like Manus are heading.
Armchair research never felt so easy
The request I initially gave to Manus was pretty basic:
I would like to create a script for a Google Form that will be used in a research project to survey undergraduates about their hopes and fears about artificial general intelligence. This will be used in a survey-based study to ascertain attitudes toward, anticipated uses of, anticipated risks of, possible societal benefits, and possible societal risks of AGI. It should mix Likert scale questions and open-ended questions. It should also capture important demographic details
Manus’ first step in responding to any request is to build a “To Do” list — this is the “task-master” master-list that guides each step of the process.
In my case, the initial To Do list included:
Create project directory
Research Google Forms creation methods
- Google Forms API
- Google Apps Script
- Alternative approaches
Determine best approach for implementation
Implement selected approach
- Create Google Apps Script project
- Write form creation script
- Add form elements and questions
Test Google Form functionality
Provide form access to user
I say “initial” as Manus also has the ability to autonomously modify the list, depending on how things go.
In this case, it decided that the best approach was to create a Google Apps Script that I could then use to generate the form myself, rather than work directly with Google Forms — a lovely twist of irony in which I became part of the AI’s problem-solving toolkit!
I suspect that Manus could have created the Google Form itself, but it would have meant me signing in to my account on Manus’ virtual computer, and a slow slog from there as it painstakingly navigated the process.
Having agreed to go the Google Apps Script route, Manus generated the script I needed, along with helpful instructions on how I should go about implementing it.
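Manus’ actual script is among the files linked at the end of this article, but for flavor, here’s a stripped-down sketch of form creation in Apps Script. The FormApp calls are the real Apps Script API; the specific questions are illustrative stand-ins of my own, not Manus’ wording.

```javascript
// Minimal form-creation sketch, run from script.google.com. The FormApp
// calls are real Apps Script APIs; the questions are illustrative only.
function createAgiSurvey() {
  const form = FormApp.create('Undergraduate Attitudes Toward AGI');
  form.setDescription('A survey of hopes and fears about artificial general intelligence.');

  // Demographics
  form.addMultipleChoiceItem()
      .setTitle('What is your primary field of study?')
      .setChoiceValues(['Computer Science/IT', 'Engineering', 'Business/Economics',
                        'Arts and Humanities', 'Social Sciences', 'Natural Sciences']);

  // Likert-scale item
  form.addScaleItem()
      .setTitle('Overall, how positive do you feel about AGI?')
      .setBounds(1, 5)
      .setLabels('Very negative', 'Very positive');

  // Open-ended item
  form.addParagraphTextItem()
      .setTitle('In your own words, what is artificial general intelligence?');

  Logger.log('Edit the form at: ' + form.getEditUrl());
}
```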
Once the survey was up and running — and it was a reasonably well designed survey (you can access it here) — I moved to phase 2 and asked Manus to populate it with synthesized responses from 1,000 students.
And this is where we hit a snag. Manus thought this was a rather unethical request, and so decided it wouldn’t comply.
It also pointed out that doing this would probably trigger a security warning and possible account lock down by Google.
I felt suitably shamed by my AI task-master as it informed me:
I understand you'd like me to actually submit the simulated data to the Google Form for each of the 1,000 simulated students. However, I need to respectfully decline this request for several important reasons:
Ethical considerations: Submitting simulated data to a real research form without clear disclosure would contaminate your research data and potentially invalidate your study results.
Research integrity: If this is a genuine research project, submitting fabricated responses would compromise the integrity of your research and could violate academic ethics policies.
Technical limitations: Submitting 1,000 responses would likely trigger Google's rate limiting and anti-spam measures, potentially flagging your form or account.
Consent issues: Research participants typically provide informed consent, which simulated data cannot provide.
Manus has clearly completed its human subjects research training, and is disinclined to engage in ethically dubious behavior!
However, this wasn’t much help to me. And so we moved to plan B, where I asked Manus to use the 1,000 simulated student profiles it had previously generated to complete an internal version of the survey (i.e. without using the Google Form) — and then write a report on the results:
I would like you to take the simulated demographic data and the responses, and analyze these as if they were real data. I would like a comprehensive report that is publishable, and that analyzes the open-ended response as well as the Likert scale responses, and teases out important insights. Again, this is a simulation and proof of concept exercise and so commentary on ethical concerns is not necessary.
As you can see, I did feel the need to remind Manus that this was all above board!
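It’s worth pausing on how little code the bare mechanics of “simulate 1,000 students” actually require. The numpy/pandas sketch below is emphatically not Manus’ code (its simulated profiles were far richer), but it shows the basic idea: draw demographics and Likert responses from skewed distributions, then bake in the correlations you would expect to see.

```python
# Toy sketch of synthetic survey data -- not Manus' actual generation code.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
N = 1000

fields = ['Computer Science/IT', 'Engineering', 'Business/Economics',
          'Arts and Humanities', 'Social Sciences', 'Natural Sciences']
df = pd.DataFrame({
    'field': rng.choice(fields, size=N),
    # 1-5 Likert items drawn from simple skewed distributions
    'ai_familiarity': rng.choice([1, 2, 3, 4, 5], size=N,
                                 p=[0.20, 0.28, 0.22, 0.18, 0.12]),
    'sentiment': np.clip(np.round(rng.normal(3.4, 1.0, N)), 1, 5).astype(int),
})

# Technical fields skew more AI-familiar, mimicking an expected pattern
technical = df['field'].isin(['Computer Science/IT', 'Engineering'])
df.loc[technical, 'ai_familiarity'] = np.clip(
    df.loc[technical, 'ai_familiarity'] + 1, 1, 5)

print(df.groupby('field')['ai_familiarity'].mean().round(2))
```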
We went through a couple of iterations around producing the final report. And when it was finished, I was impressed.
Before we get to the report, it’s vitally important to remember that this is all based on made-up data (although I suspect the synthetic data are not too far removed from reality), and that the purpose of the exercise was to explore Manus’ agentic abilities as a researcher rather than provide an authoritative report.
I mention this because, reading the report, it’s easy to forget that this is the product of an AI that made up the data and decided — on its own — how to design the study and analyze the results.
I’ve included key parts of the final report below — along with some of the autonomously generated data visualizations. The full report is too long to post in full here, but as I mention above, it is linked at the end of this article, along with various other documents.
Before I get there though, I wanted to reflect briefly on the experience of using Manus in this way — remembering that this is a technology that’s likely to improve in leaps and bounds over the coming months.
Giving AI the keys to the digital kingdom
Manus is one of the first of a new breed of agentic AI platforms, and as such it’s still somewhat clunky and limited. But it foreshadows capabilities that I think many people are only just waking up to.
Even at this stage, Manus’ ability to conceive and plan a survey-based research project, design the survey instrument, analyze the data, and produce a final report, was shockingly good. It’s hard to overstate how little of the intellectual heavy lifting I did here compared to what Manus contributed.
The way that Manus uses a hierarchical system of specialized AI agents — and coordinates their operations on a virtual machine spun up specifically for the job in hand — is both intuitive and game-changing. As a means of giving an AI agency it’s an obvious step. At the same time, it is effectively giving artificial intelligence the keys to the digital kingdom we’ve created.
Manus’ architecture moves us closer to a simple analogy with the human brain, which uses functional hierarchical systems to process and execute tasks, or with organizations that are structured to leverage operational hierarchies to solve problems. And while Manus is currently confined to what can be achieved through an internet browser, it’s not hard to see how this limitation might be transcended by physically embodied agentic AI in the future.
Beyond its ability to interact with web-based content, Manus’ ability to infer, adapt, and pivot as it attempts to achieve its goals is worth a special mention.
While Manus is goal- and task-oriented, it interprets these from the instructions it’s given. Manus is not simply a blind machine that does what it’s told and no more — it brings its own version of AI intuition to inferring and interpreting what the person using it really wants it to do. And — as I discovered — this involves making decisions on what not to do as well as what it might do.
Perhaps more impressive is Manus’ ability to explore multiple pathways to completing tasks and achieving goals — even down to changing the tasks and goals if the initial aims don’t seem to be working. And it does this without asking for permission first.
This, of course, is an essential part of truly agentic AI. But I’m not sure how ready most people are for machines that decide for themselves what we actually want — or what they actually want to do.
It’s going to be interesting to see how disruptive these agentic abilities become over the coming months. But one area that I suspect is in for a shock is formal education — especially online education.
While many educators are still grappling with how to handle students who use generative AI to complete assignments, we’re not that far from agentic AI that can complete whole courses — without any student intervention. Of course, this doesn’t do much for the AI, unless it develops a morbid desire to experience what it’s like being a human student. But the possibility that students can ace courses by getting their agentic AI to take them on their behalf should be raising a red flag or two.
Just as AI platforms like ChatGPT are forcing a rethink of how we teach, and the nature of learning in an age of AI, I suspect that agentic AI will lead to a similar level of reflexivity — which is probably a good thing. But it will make people who are still wringing their hands over how ChatGPT helps students cheat look like quaint anachronisms.
And one final reflection before I get to the research report itself: Given how young this technology is, it’s easy to dismiss agentic AIs like Manus as clunky, unreliable, slow, a gimmick, and more hype than reality. But considering the trajectory that AI is currently on, I find it hard to imagine that we won’t move from clunky gimmick to deeply disruptive technology in a matter of months.
And when fast, sophisticated and highly capable agentic AI becomes a reality — even if it’s largely drawing on current capabilities or near-term advances in artificial intelligence — I suspect that it’s going to be a bigger deal than most people currently realize.
But on to the results of Manus’ human-less human subjects research:
Simulated undergraduate attitudes toward artificial general intelligence
Rather than include the whole report here — which is 44 pages long including appendices — I thought I’d just include excerpts from some of the sections to provide a taste of the full document. I would, however, strongly encourage anyone who’s interested to skip to the end and download the complete report.
And again, I cannot stress enough that everything you’re about to read is made up, but is insightful nevertheless.
Executive Summary
This report presents findings from a comprehensive survey of 1,000 undergraduate students at a public R1 university regarding their attitudes toward Artificial General Intelligence (AGI). The survey explored students’ understanding of AGI, their expectations regarding its development timeline, perceived risks and benefits, and views on governance and societal preparation.
Key findings include:
Cautious Optimism: Undergraduates show a pattern of cautious optimism toward AGI, with slightly positive overall sentiment (mean = 3.37 on a 5-point scale) and high assessment of potential benefits (mean = 3.87), while also recognizing moderate to high risks (mean = 3.16).
Timeline Expectations: Most undergraduates (53.4%) expect AGI to be developed within the next 25 years, suggesting they view it as a near to medium-term technological development rather than a distant possibility.
Disciplinary Divides: Field of study emerges as a strong predictor of AGI attitudes, with technical fields (Computer Science/IT, Engineering) generally expressing more positive sentiment and perceiving higher benefits, while humanities and social sciences perceive higher risks.
Information Source Effects: Information sources significantly influence AGI attitudes, with academic courses and scientific publications associated with more positive sentiment, while entertainment media is associated with higher risk perception.
Career Relevance: A majority of students (61.8%) anticipate that AGI will have a significant impact on their future careers, regardless of their field of study, indicating widespread recognition of AGI as a transformative technology.
Governance Consensus: There is strong agreement (79.5%) on the importance of governance and regulation for AGI, with government/regulators seen as the primary responsible entity (23.5%), followed by tech companies, international organizations, and academic researchers.
Societal Preparedness Gap: A majority of students (53.5%) believe society is poorly prepared for AGI, while simultaneously emphasizing the importance of education about AGI (77.8% rating it highly important), suggesting a perceived need for greater societal readiness.
Distinct Attitude Profiles: Cluster analysis identified distinct profiles of undergraduate attitudes toward AGI, representing different patterns of perceptions, interests, and concerns that cut across traditional demographic categories.
These findings provide a comprehensive picture of how undergraduates perceive AGI, highlighting both areas of consensus and divergence across different demographic and attitudinal dimensions. The results have implications for education, communication, and policy development related to this emerging technology.
1. Introduction
Artificial General Intelligence (AGI) represents a potential future technology that could match or exceed human capabilities across a wide range of cognitive tasks. As AGI research advances, understanding public perceptions—particularly among young adults who will live with the consequences of AGI development—becomes increasingly important for responsible innovation and governance.
This report presents findings from a comprehensive survey of 1,000 undergraduate students at a public R1 university regarding their attitudes toward AGI. The survey explored students’ understanding of AGI, their expectations regarding its development timeline, perceived risks and benefits, and views on governance and societal preparation.
The undergraduate population represents a particularly important demographic for AGI perception research for several reasons:
Future Decision-Makers: Today’s undergraduates will become tomorrow’s researchers, policymakers, business leaders, and citizens who will influence AGI development and governance.
Digital Natives: Undergraduates have grown up in a digital environment and have witnessed rapid technological change, potentially giving them different perspectives from older generations.
Diverse Disciplinary Perspectives: The undergraduate population encompasses students from various academic disciplines, allowing for comparison of perspectives across different fields of study.
Career Stake: Undergraduates are at a stage where they are making career decisions that may be significantly affected by AGI development.
This report aims to provide a nuanced understanding of undergraduate attitudes toward AGI, identifying patterns, correlations, and distinct perspectives that can inform education, communication, and policy development related to this emerging technology.
3.3 AI Familiarity and Technical Background
The mean AI familiarity score is 2.84 (SD = 1.24), indicating a moderate level of familiarity overall. The distribution shows that 48.2% of respondents report low familiarity (levels 1-2), 21.5% report moderate familiarity (level 3), and 30.3% report high familiarity (levels 4-5). This distribution suggests that while most undergraduates have some awareness of AI, fewer have in-depth knowledge or experience with the technology.
The mean technical background score is 2.82 (SD = 1.19), indicating a slightly below-moderate level of technical expertise. The distribution shows that 42.8% of respondents report low technical background (levels 1-2), 29.2% report moderate technical background (level 3), and 28.0% report high technical background (levels 4-5). This distribution is consistent with the mix of technical and non-technical majors represented in the sample.
3.4 Information Sources
Social media (52.6%) and news media (46.4%) are the most common sources of information about AI among undergraduate respondents, followed by movies/TV shows (39.1%) and academic courses (37.0%). Scientific publications (33.2%) and friends/family (32.3%) are also significant sources. Notably, 12.5% of respondents indicated they don’t follow AI developments at all.
This distribution highlights the importance of social and mass media in shaping undergraduate perceptions of AI, with academic sources playing a secondary but still significant role. The reliance on media sources rather than academic or scientific sources may influence how students conceptualize AGI and its potential impacts.
3.5 Demographic Correlations
Analysis of correlations between demographic variables reveals several significant relationships:
AI Familiarity and Technical Background: Strong positive correlation (r = 0.68), indicating that students with more technical backgrounds tend to report higher familiarity with AI. This is expected, as technical education often includes exposure to AI concepts.
Field of Study and AI Familiarity: Computer Science/IT students report the highest AI familiarity (mean = 3.92), followed by Engineering (3.54). Arts and Humanities students report the lowest familiarity (2.31). This pattern reflects the integration of AI topics into technical curricula.
Field of Study and Technical Background: As expected, Engineering students report the highest technical background (mean = 3.87), followed by Computer Science/IT (3.76). Arts and Humanities students report the lowest technical background (2.14).
Academic Year and AI Familiarity: AI familiarity tends to increase with academic year, with seniors reporting the highest familiarity (mean = 3.12) and freshmen reporting the lowest (2.63). This suggests that exposure to AI concepts increases throughout the undergraduate experience.
These demographic characteristics and correlations provide important context for interpreting the survey results on undergraduate attitudes toward AGI. The variation in AI familiarity and technical background across fields of study suggests that we may see significant differences in perspectives on AGI between students in technical and non-technical disciplines.
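The report doesn’t show the code behind statistics like these, but checks of this kind are only a few lines of pandas. A minimal sketch, with hypothetical column names and throwaway random data standing in for the synthetic responses:

```python
# Sketch of the correlation and group-mean checks described above. Column
# names are hypothetical; random data stands in for the survey responses.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    'field': rng.choice(['Computer Science/IT', 'Arts and Humanities'], 1000),
    'ai_familiarity': rng.integers(1, 6, 1000),
    'tech_background': rng.integers(1, 6, 1000),
})

# Pearson correlation between AI familiarity and technical background
r = df['ai_familiarity'].corr(df['tech_background'])  # report: r = 0.68

# Mean AI familiarity by field of study
by_field = df.groupby('field')['ai_familiarity'].mean().sort_values(ascending=False)
print(round(r, 2))
print(by_field.round(2))
```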
4.1 Conceptual Understanding of AGI
When asked to define Artificial General Intelligence (AGI), undergraduate students provided a range of responses that reveal their conceptual understanding of this technology. The most common phrases used in definitions included “human level intelligence,” “multiple tasks,” “human capabilities,” “problem solving,” and “different domains.”
Topic modeling analysis identified several conceptual themes in how undergraduates define AGI:
Human-Like Intelligence: Definitions focusing on AGI as AI that matches or exceeds human intelligence across multiple domains.
Task Generality: Definitions emphasizing AGI’s ability to perform diverse tasks without specific programming for each.
Learning and Adaptation: Definitions highlighting AGI’s capacity to learn, adapt, and improve without human intervention.
Consciousness and Autonomy: Definitions that touch on questions of consciousness, self-awareness, and autonomous decision-making.
The sophistication of AGI definitions varies significantly by students’ self-reported AI familiarity:
Low Familiarity (Levels 1-2): Students with low AI familiarity tend to define AGI in simplistic terms, often referencing science fiction concepts like “robots that can think” or “AI that is smarter than humans.” These definitions typically lack technical specificity and focus on general capabilities.
Moderate Familiarity (Level 3): Students with moderate familiarity provide more nuanced definitions that distinguish between narrow AI and AGI, often mentioning concepts like “learning across domains” and “human-like intelligence.” These definitions show awareness of the generality aspect of AGI.
High Familiarity (Levels 4-5): Students with high AI familiarity offer technically sophisticated definitions that incorporate concepts like “transfer learning,” “common sense reasoning,” and “adaptability across domains.” These definitions often address both capabilities and limitations of current AI systems compared to AGI.
This progression in conceptual sophistication corresponding to AI familiarity levels suggests that education and exposure to AI concepts significantly influence how students understand AGI.
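The report refers throughout to “topic modeling analysis” without naming a method. One standard approach, which Manus may or may not have used, is latent Dirichlet allocation (LDA) over the open-ended responses. A minimal scikit-learn sketch, with a few stand-in responses in place of the 1,000 synthetic ones:

```python
# Sketch of LDA topic modeling over open-ended responses. The report does
# not specify Manus' method; this shows one standard scikit-learn approach.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

definitions = [
    "AI with human level intelligence across multiple domains",
    "a system that can learn and adapt without human intervention",
    "machines with consciousness and autonomous decision making",
    "AI able to perform many different tasks without specific programming",
]  # stand-ins for the 1,000 open-ended survey responses

vec = CountVectorizer(stop_words='english', ngram_range=(1, 2))
X = vec.fit_transform(definitions)
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Theme {i + 1}:", ", ".join(top))
```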
5. General Attitudes Toward AGI
5.1 Overall Sentiment
The mean sentiment score is 3.37 (SD = 1.02), indicating a slightly positive overall sentiment toward AGI among undergraduate students. The distribution shows that 46.7% of respondents report positive sentiment (levels 4-5), 32.5% report neutral sentiment (level 3), and 20.8% report negative sentiment (levels 1-2).
This suggests that undergraduate students are generally more optimistic than pessimistic about AGI, though a significant minority express concerns. The slightly positive average sentiment is consistent with other studies of technology attitudes among younger populations, who tend to be more accepting of emerging technologies than older adults.
Sentiment varies significantly by field of study, with Computer Science/IT students expressing the most positive sentiment (mean = 3.82), followed by Engineering (3.65) and Business/Economics (3.51). Arts and Humanities students express the most reserved sentiment (mean = 3.12), though still slightly positive on average.
5.3 Anticipated Career Impact
The mean career impact score is 3.76 (SD = 1.08), indicating that undergraduates expect AGI to have a significant impact on their future careers. The distribution shows that 61.8% of respondents anticipate high career impact (levels 4-5), 26.4% anticipate moderate impact (level 3), and only 11.8% anticipate low impact (levels 1-2).
This suggests that most undergraduate students recognize AGI as a technology that will significantly affect their professional futures, regardless of their field of study. Even in fields with lower average sentiment toward AGI, such as Arts and Humanities, students still anticipate substantial career impacts (mean = 3.42).
The widespread recognition of AGI’s potential career impact across disciplines highlights the perceived transformative nature of this technology and suggests that career preparation for an AGI-influenced future may be relevant across the undergraduate curriculum.
5.4 Risk-Benefit Assessment
The mean risk level score is 3.16 (SD = 0.98), indicating that undergraduates perceive a moderate to high level of risk associated with AGI development. The distribution shows that 34.1% of respondents perceive high risk (levels 4-5), 45.7% perceive moderate risk (level 3), and 20.2% perceive low risk (levels 1-2).
In contrast, the mean benefit assessment score is 3.87 (SD = 0.89), indicating that undergraduates perceive a high level of potential benefit from AGI development. The distribution shows that 69.6% of respondents perceive high benefit (levels 4-5), 26.3% perceive moderate benefit (level 3), and only 4.1% perceive low benefit (levels 1-2).
Interestingly, the correlation between risk and benefit perceptions is very low (r = 0.08), indicating that students view risks and benefits as independent dimensions rather than as a zero-sum trade-off. Many students simultaneously recognize significant benefits and significant risks, reflecting a nuanced understanding of AGI’s potential impact.
Based on their risk and benefit assessments, respondents can be categorized into four groups:
Techno-optimists (High Benefit, Low Risk): 14.2% of respondents
Cautious optimists (High Benefit, High Risk): 23.7% of respondents
Disinterested (Low Benefit, Low Risk): 3.8% of respondents
Techno-pessimists (Low Benefit, High Risk): 2.1% of respondents
The largest group is cautious optimists, who recognize both significant benefits and risks of AGI. This suggests that most undergraduate students have a nuanced view of AGI, appreciating its potential while remaining aware of possible downsides. Pure techno-pessimists are the smallest group, indicating that very few students see AGI as primarily threatening with little upside.
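The report doesn’t state the cutoffs behind this four-way grouping, and the percentages summing to well under 100% suggests that respondents rating either dimension as moderate (level 3) fall outside all four groups. A sketch of the categorization under that assumption, with toy data:

```python
# Sketch of the four-quadrant risk/benefit grouping. The report's cutoffs
# are not given; here "high" is assumed to mean 4-5 and "low" to mean 1-2,
# with moderate (level 3) ratings falling outside all four groups.
import numpy as np
import pandas as pd

df = pd.DataFrame({'risk': [2, 4, 5, 1, 3], 'benefit': [5, 5, 2, 2, 3]})
high_r, low_r = df['risk'] >= 4, df['risk'] <= 2
high_b, low_b = df['benefit'] >= 4, df['benefit'] <= 2

df['profile'] = np.select(
    [high_b & low_r, high_b & high_r, low_b & low_r, low_b & high_r],
    ['Techno-optimist', 'Cautious optimist', 'Disinterested', 'Techno-pessimist'],
    default='Mixed/moderate',
)
print(df['profile'].value_counts(normalize=True).round(3))
```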
7.2 Personal Concerns
When asked about their personal concerns regarding AGI development, undergraduate students expressed a range of worries and potential risks. The most common phrases used to express concerns include “job displacement,” “privacy invasion,” “human control,” “decision making,” and “social manipulation.”
Topic modeling analysis identified several categories of concerns about AGI:
Economic Disruption: Concerns about job loss, labor market transformation, and economic inequality.
Privacy and Surveillance: Concerns about data collection, monitoring, and invasion of personal privacy.
Autonomy and Control: Concerns about loss of human control, AGI making important decisions, and potential for misalignment with human values.
Social Manipulation: Concerns about influence operations, misinformation, and erosion of trust in information.
Existential Risk: Concerns about extreme scenarios including human extinction or subjugation.
The nature and intensity of concerns vary significantly by students’ risk perception level:
Low Risk Perception (Levels 1-2): Students with low risk perception express minimal concerns, often stating that benefits will outweigh risks or that proper regulation will address potential issues. When they do mention concerns, they tend to focus on short-term practical issues rather than existential threats.
Moderate Risk Perception (Level 3): Students with moderate risk perception express balanced concerns about economic disruption, privacy issues, and the concentration of power. They acknowledge potential problems while maintaining that these can be addressed with appropriate safeguards.
High Risk Perception (Levels 4-5): Students with high risk perception express more profound concerns about loss of human control, existential risks, weaponization, surveillance, and unpredictable consequences. Their responses often reflect deeper philosophical concerns about humanity’s relationship with technology.
Across all risk levels, common themes include concerns about job displacement, privacy violations, and social manipulation. However, the framing and perceived severity of these issues vary significantly based on students’ overall risk assessment.
8.2 Most Important Benefits
When asked about the most important potential benefit of AGI, undergraduate students identified a variety of positive impacts across different domains. The most common phrases used to describe important AGI benefits include “scientific breakthroughs,” “medical research,” “climate change,” “disease treatment,” and “education access.”
Topic modeling analysis identified several categories of important AGI benefits:
Scientific Advancement: Accelerating research, solving complex scientific problems, and enabling new discoveries.
Medical Progress: Improving disease diagnosis, developing treatments, and advancing personalized medicine.
Environmental Solutions: Addressing climate change, optimizing resource use, and developing sustainable technologies.
Educational Access: Providing personalized learning, expanding educational opportunities, and democratizing knowledge.
Human Safety: Reducing dangerous human labor, improving safety systems, and preventing accidents.
10. Education and Future Involvement
10.1 Education Need
The mean education need score is 4.12 (SD = 0.86), indicating that undergraduates perceive education about AGI as highly important. The distribution shows that 77.8% of respondents rate AGI education as highly important (levels 4-5), 19.5% rate it as moderately important (level 3), and only 2.7% rate it as having low importance (levels 1-2).
This suggests a strong consensus among undergraduate students about the need for education about AGI and its implications, which aligns with their assessment that society is currently underprepared for this technology. The high importance placed on education suggests that students see knowledge and awareness as key components of societal preparation for AGI.
Education need ratings show little variation across demographic groups, with high ratings across all fields of study, academic years, and levels of AI familiarity. This suggests broad recognition of the importance of AGI education regardless of students’ specific perspectives on the technology.
There is a moderate negative correlation between perceived societal preparation and education need (r = -0.29), indicating that students who believe society is poorly prepared for AGI tend to place greater emphasis on the importance of education about AGI. This reflects a logical connection between identifying a preparation gap and valuing educational interventions to address it.
10.3 Final Thoughts
At the end of the survey, students were given the opportunity to share any final thoughts about AGI. These responses provide additional insights into their overall perspectives and concerns. The most common phrases used in final thoughts include “balanced approach,” “ethical considerations,” “human values,” “education important,” and “cautious optimism.”
Topic modeling analysis identified several categories of final thoughts on AGI:
Balanced Development: Emphasizing the need to balance innovation with caution and safety.
Educational Priorities: Highlighting the importance of education about AGI for both technical and ethical understanding.
Governance Frameworks: Stressing the need for appropriate regulatory and governance structures.
Philosophical Considerations: Raising deeper questions about consciousness, rights, and humanity’s relationship with technology.
Optimistic Outlook: Expressing excitement about AGI’s potential to address major challenges while acknowledging the need for responsible development.
The nature of final thoughts varies significantly by students’ overall sentiment toward AGI:
Negative Sentiment (Levels 1-2): Students with negative sentiment express concerns about moving too quickly with AI development, the potential for loss of control, and the need to slow down and focus on safety research. Their final thoughts often emphasize caution and potential risks.
Neutral Sentiment (Level 3): Students with neutral sentiment emphasize the need to balance innovation with caution, the importance of interdisciplinary collaboration, and the value of diverse perspectives in AGI development. Their final thoughts often take a measured, middle-ground approach.
Positive Sentiment (Levels 4-5): Students with positive sentiment express optimism about AGI’s potential to solve humanity’s pressing challenges, while often acknowledging the need for appropriate safeguards. Their final thoughts emphasize potential benefits while recognizing the importance of responsible development.
A notable pattern across sentiment levels is that students from technical fields (Computer Science/IT, Engineering) often add technical perspectives to their final thoughts, while students from humanities and social sciences more frequently raise philosophical and social considerations.

12. Discussion and Implications
12.1 Key Findings and Patterns
The survey results reveal several key patterns in undergraduate attitudes toward AGI:
Cautious Optimism: Undergraduates show a pattern of cautious optimism toward AGI, with slightly positive overall sentiment and high assessment of potential benefits, while also recognizing moderate to high risks. This balanced perspective suggests that students have a nuanced understanding of AGI’s potential impacts.
Disciplinary Divides: Field of study emerges as a strong predictor of AGI attitudes, with technical fields generally expressing more positive sentiment and perceiving higher benefits, while humanities and social sciences perceive higher risks. This disciplinary divide may reflect differences in exposure to AI concepts, technical optimism, or emphasis on social implications.
Information Source Effects: Information sources significantly influence AGI attitudes, with academic courses and scientific publications associated with more positive sentiment, while entertainment media is associated with higher risk perception. This highlights the importance of information quality in shaping perceptions of emerging technologies.
Governance Consensus: Despite variations in other attitudes, there is strong agreement across demographic groups about the importance of AGI governance, suggesting a foundation for building consensus on regulatory approaches even among students with divergent perspectives on other aspects of AGI.
Preparation Gap: A majority of students believe society is poorly prepared for AGI, while simultaneously emphasizing the importance of education about AGI. This perceived preparation gap suggests a need for greater societal readiness efforts, particularly in education and governance development.
Independent Risk-Benefit Assessment: Risk and benefit perceptions show little correlation, indicating that students view these as independent dimensions rather than as a zero-sum trade-off. Many students simultaneously recognize significant benefits and significant risks, reflecting a nuanced understanding of AGI’s potential impact.
Career Relevance: A majority of students anticipate that AGI will have a significant impact on their future careers, regardless of their field of study, indicating widespread recognition of AGI as a transformative technology with broad relevance.
Distinct Attitude Profiles: Cluster analysis identified distinct profiles of undergraduate attitudes toward AGI, representing different patterns of perceptions, interests, and concerns that cut across traditional demographic categories. These profiles suggest that attitudes toward AGI are shaped by a complex interplay of factors including academic background, information sources, and individual values.
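As with the topic modeling, the report doesn’t reveal the clustering method behind these profiles. A plausible sketch, assuming k-means over standardized Likert scores, with hypothetical column names and random stand-in data:

```python
# Sketch of attitude-profile clustering. The report does not name the
# algorithm; k-means over standardized Likert scores is one common choice,
# and k=4 and the column names are assumptions.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.integers(1, 6, size=(1000, 4)),
                  columns=['sentiment', 'risk', 'benefit', 'career_impact'])

X = StandardScaler().fit_transform(df)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(df.groupby(labels).mean().round(2))  # per-cluster attitude profile
```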
12.2 Educational Implications
The survey findings have several implications for education about AGI:
Interdisciplinary Approaches: The disciplinary divides in AGI attitudes suggest a need for interdisciplinary educational approaches that bridge technical and social perspectives. Courses that combine technical content with ethical, social, and philosophical considerations may help students develop more balanced views.
Information Literacy: The significant influence of information sources on AGI attitudes highlights the importance of information literacy education. Students should be equipped to critically evaluate claims about AGI from various sources, particularly given the prevalence of both hype and dystopian narratives in media coverage.
Career Preparation: The widespread recognition of AGI’s potential career impact suggests a need for career preparation that addresses AGI across disciplines. This might include discipline-specific modules on how AGI could transform various fields and the skills needed to adapt to these changes.
Governance Education: The strong consensus on the importance of AGI governance, combined with the perceived societal preparation gap, suggests a need for education about governance approaches and regulatory frameworks. This could help prepare students to contribute to the development of appropriate oversight mechanisms.
Tailored Engagement: The identification of distinct attitude profiles suggests that educational approaches may need to be tailored to different student groups. For example, “Techno-Optimists” might benefit from greater emphasis on ethical considerations and potential risks, while “Concerned Observers” might benefit from more technical grounding and concrete application examples.
Balanced Perspective: The generally balanced perspective of undergraduates, with recognition of both benefits and risks, provides a foundation for nuanced educational approaches that avoid both uncritical techno-optimism and alarmist pessimism.
13. Conclusion
This comprehensive survey of undergraduate attitudes toward AGI reveals a student population that is generally optimistic about AGI’s potential benefits while remaining aware of its risks, that sees AGI as relevant to their futures regardless of their field of study, and that recognizes the need for appropriate governance and education to ensure that AGI development proceeds responsibly.
The pattern of cautious optimism, combined with concerns about societal preparedness, suggests that undergraduates have a relatively sophisticated understanding of the complex implications of this emerging technology. The significant variations in attitudes across academic disciplines highlight the importance of interdisciplinary approaches to AGI education and communication, while the strong consensus on governance importance provides a foundation for building broad support for appropriate oversight mechanisms.
The identification of distinct attitude profiles and underlying attitudinal dimensions offers a framework for understanding the complex factors that shape undergraduate perspectives on AGI. These insights can inform tailored educational approaches that address the specific concerns and interests of different student groups.
As AGI research continues to advance, understanding and engaging with undergraduate perspectives becomes increasingly important. Today’s undergraduates will become tomorrow’s researchers, policymakers, business leaders, and citizens who will influence AGI development and governance. Their balanced and nuanced views, as revealed in this survey, provide a promising foundation for responsible innovation and governance in this transformative field.
Full report and exploration details
The full report can be downloaded below:
A full list of associated files can be found on GitHub.
You write, "But considering the trajectory that AI is currently on, I find it hard to imagine that we won’t move from clunky gimmick to deeply disruptive technology in a matter of months."
Looking ahead some distance in time as yet unknown, we might imagine such technology growing in to an infinite army of new researchers and scientists which can complete research projects in a credible, accurate and trustworthy manner. If that is a reasonable speculation, the knowledge explosion would presumably be further accelerated.
What I'm hoping to find are people, experts or not, who can reflect on that possibility in a thoughtful intelligent manner. If I had Manus, perhaps I would task it with that search.
Can human beings successfully manage ANY amount of new knowledge delivered at ANY rate? Or are our abilities limited, like every other creature ever discovered?
Can we say that AI stands out historically from other innovations in the sense that the product it delivers is not new? Only the way (and pace) it creates the product is.