Two new reports urge "human-centered" school AI adoption
Two new reports caution that if schools make missteps implementing AI, the results could haunt them for years, locking them into a future largely written by big tech instead of those closest to kids.
The reports, both the results of small, intensive gatherings of educators, policymakers, researchers, tech officials and students last year, share a common warning: AI in schools must serve human-centered learning that doesn't simply push for more efficiency. To do anything else risks creating a generation of young people ill-equipped for the future.
The findings come as young people say they're turning to generative AI more than ever: A survey released in late February found that more than half of teens ages 13 to 17 use chatbots to search for information or get help with schoolwork. About 4 in 10 report using AI to summarize articles, books or videos or to create or edit images or videos. And about 1 in 5 say they use chatbots to get news.
For the first report, a group of 18 people met in July in Phoenix. Brought together by AI for Education, a training and policy organization, and a digital curriculum company, the report treats the question of how schools should view AI as a literal "choose-your-own-adventure" story: The authors lay out three possible scenarios in which educators in an imaginary school district make radically different decisions about the technology.
In the first scenario, the district retreats from AI altogether after a data breach, abandoning a previously created "Innovation Lab," while teachers return to traditional instruction and testing.
The restrictions soon backfire. Students continue using AI at home, but without guidance they take shortcuts on homework, developing a kind of survival mechanism they privately call "school brain." Seeing how irrelevant most lessons are, they do just enough to get by, offloading their thinking to AI tools. When tested, they show shallow understanding and weak foundational skills.
Test scores plummet, college acceptances drop, and 40% of graduates land on academic probation. Employers report that graduates can neither work independently nor collaborate effectively with AI. Teachers begin departing in waves.
Retreating from AI, the authors find, creates "the worst of both worlds": students who can neither think independently nor use AI effectively.
In the second scenario, the district, facing competition from AI-driven private schools, goes all-in, adopting a comprehensive, district-wide AI platform for automated instruction. The platform promises greater efficiency via AI tutors, automated grading and behavioral monitoring. And while it initially lowers costs and produces higher test scores, teachers find that students are soon gaming the algorithms rather than learning. The auto-grader penalizes valid but unconventional answers, and multilingual learners are unfairly marked down for nonstandard responses on tests.
Teachers find themselves defending grades they didn't assign and can't fully explain, while families that challenge grades are stopped by "proprietary algorithms" that even administrators can't review. The system delivers "a black box" that removes human judgment: "Students could feel the difference between being evaluated by an algorithm and being understood by a teacher."
Before long, graduates struggle with collaboration, creativity and adaptability, skills employers and colleges increasingly value.
In the report's third choice, the district, via its Innovation Lab, redesigns its offerings to prepare students for an AI-driven future while keeping a focus on "human-centered" education. Rather than focusing solely on technology, it develops a "graduate profile" that emphasizes critical thinking, ethical reasoning and human-AI collaboration, among other indicators.
The lab shifts to flexible, project-based learning, and students soon learn to use AI as a tool that supports but doesn't replace their thinking. While the district continues to satisfy state accountability through testing, it also pursues federal innovation grants to fund portfolio-based assessment systems based on the graduate profile.
All is not rosy, though. The redesign is expensive and hard on teachers, and enrollment suffers as political resistance gains steam. But graduates soon demonstrate an ability to critically evaluate AI tools, adapt quickly to workplace changes and develop a "learn how to learn" mindset that serves them in the long term.
Alumni soon report that their "robust" portfolios of work are a huge advantage in competitive job markets, and employers say they are the only new hires who critically evaluate AI's recommendations, spotting hallucinations and biases.
Amanda Bickerstaff, AI for Education's co-founder and CEO, said the first two scenarios are what educators at the July convening said they were seeing most often in schools.
"There was a strong recognition from everyone, including the students, the two high schoolers, that the traditional methods have not worked … for decades," she said. "But it feels safer."
As for going "all in" on AI, she said, that point of view is inevitable in many places, given the aggressive efforts of tech giants like Google, which are "pushing into schools," going directly to students.
"There's this real pressure from both ed tech and AI itself, because it's such a big market that's never really been figured out," she said.
What makes it worse is that few tech firms employ enough teachers to ensure that their products work well for students. "They don't have hundreds of education people," Bickerstaff said. Their education teams are "fractions of their headcount, working on tools that are instantly in students' hands."
The third path, in which the district redesigns its offerings, is "the most human" of the three, she said, and the most intentional. "The third path is the one that trusts humans and educators and students and families," Bickerstaff said.
"Explicitly ambidextrous" schooling
A second report, released by the Center on Reinventing Public Education (CRPE), a think tank at Arizona State University, also calls for a new approach to schools' decisions about AI, saying the technology "should be a catalyst for human-centered learning, not a replacement."
The CRPE report, the result of another gathering in November, asserts that schools are at a pivotal moment. Their AI policies could go one of two ways: They can either entrench outdated educational models or help bring about a fundamental transformation of schooling.
"One of the big things that came out of those discussions was a strong feeling among the group that AI is currently being thought of as a productivity tool for the education system that we have, rather than a tool to radically improve teaching and learning and outcomes for kids," said Robin Lake, CRPE's executive director.
During its meeting, the group repeatedly discussed an "efficiency paradox" that could make schools faster and cheaper without addressing students' actual needs. To protect against it, they call for a more coherent, human-centered approach that is "explicitly ambidextrous," improving current practices while intentionally building toward new learning models.
The problem with AI, the report argues, is that it could simply improve the efficiency of outdated educational models. It notes that the Scantron, a time-saving testing technology, for decades reinforced low-level standardized assessments, often at the expense of improved learning.
Instead of using AI as a new kind of Scantron, it says, AI could make way for several innovations, including new assessments that capture real-time performance as students work. It could even measure key nonacademic indicators such as belonging, confidence, curiosity and relationship quality.
Lake said the report's idea of an "ambidextrous" approach to AI came from the group's acknowledgement that "we have to attend to the kids who are in our schools right now, and the teachers. We have to use whatever technologies are available to make things better, but we also have to make investments in big, really different whole-school designs."
Those could include not just better assessments but ways to help teachers provide "rigorous personalization grounded in the science of learning."
Districts could create classrooms with multiple adults working in teams based on their expertise. And AI could enable schools to match students to internships and other experiences, handling administrative tasks so humans can focus on relationships.
Lake said the group that met in November kept coming back to one idea: keeping an eye on both the future of school and the reality of the schools we already have.
"A lot of times when we have these conversations about AI and the future of schooling, it feels very floaty and abstract," she said. "So I really appreciated that the fellows had a vision to connect the here-and-now to what kids need to know and [should] be able to do in the future. That feels really important for us all right now."
This story was produced by and reviewed and distributed by ±ŹÁÏTV.