It’s been more than a year since ChatGPT’s ability to produce astonishingly humanlike writing sparked fundamental questions about the role of artificial intelligence in K-12 education.
Yet most school districts are still stuck in neutral, trying to figure out the way forward on issues such as plagiarism, data privacy, and ethical use of AI by students and educators.
More than three-quarters—79 percent—of educators say their districts still do not have clear policies on the use of artificial intelligence tools, according to an EdWeek Research Center survey of 924 educators conducted in November and December.
District leaders want to help schools chart the right course on the potentially game-changing technology, but many feel “overwhelmed and overloaded,” said Bree Dusseault, who has studied AI policymaking as a principal and the managing director at the Center on Reinventing Public Education, a research organization at Arizona State University’s Mary Lou Fulton Teachers College.
The lack of clear direction is especially problematic given that the majority of educators surveyed—56 percent—expect the use of AI tools to increase in their districts over the next year, according to the EdWeek Research Center survey.
And while experts are encouraging schools to teach their students to use AI appropriately, banning the tools for students is still a relatively common practice in K-12 education, the survey found.
One in five educators surveyed said that their district prohibits students from using generative AI, such as ChatGPT, although teachers are permitted to use it. Another 7 percent of educators said the tools were banned for everyone, including staff.
When district officials—and school principals—sidestep big questions about the proper use of AI, they are inviting confusion and inequity, said Pat Yongpradit, the chief academic officer for Code.org and leader of TeachAI, an initiative aimed at helping K-12 schools use AI technology effectively.
“You can have, in the same school, a teacher allowing their 10th grade English class to use ChatGPT freely and getting into AI ethics issues and really preparing their students for a future in which AI will be part of any industry,” Yongpradit said. “And then literally, right down the hall, you can have another teacher banning it totally, going back to pencil and paper writing because they don’t trust their kids to not use ChatGPT. Same school, different 10th grade English class.”
The new “digital divide will be an AI divide,” Yongpradit said.
‘Policy is always behind technology’
It’s not hard to understand why most district leaders aren’t eager to make big decisions about how their schools will use the technology.
Many educators worry that if students are exposed to generative AI, they’ll employ it to cheat on assignments. Plus, AI tools can spit out false information and magnify racial and socioeconomic biases. AI also develops—some would say “gets smarter”—by consuming data, opening the door to potential student-data-privacy nightmares.
The vast majority of educators don’t have the capacity to cope with those complications on top of their other responsibilities, the survey found.
More than three-quarters—78 percent—of educators surveyed said they don’t have the time or bandwidth to teach students how to think about or use AI because they are tied up with academic challenges, social-emotional learning, safety considerations, and other higher priorities.
What’s more, AI is changing so rapidly that any policy a district or state crafts could be outdated the moment it is released.
That’s typical when it comes to new technologies, said Kristina Ishmael, who until late last year served as the deputy director of the U.S. Department of Education’s office of educational technology.
“Policy is always behind technology,” said Ishmael, who is now a strategic advisor at Ishmael Consulting. In some cases, that’s “very intentional, because it’s policy; once you put it in, it’s hard to take it off.”
But AI requires a shift in thinking, she pointed out.
AI policy and guidance need to be “living, breathing documents, because the technology is changing so quickly,” Ishmael said. “It’s not something like a continuous improvement plan where your school is looked at every couple of years, then the binder sits on the shelf.”
Another stumbling block, she said: Some district leaders are hitting the pause button to see if their state or Washington policymakers establish AI parameters. The federal Education Department has pledged to release resources, including an AI policy tool kit this year. Members of Congress have introduced legislation on select AI issues, such as AI literacy.
But it’s not clear if more significant action is on the horizon, Ishmael said.
“Folks are waiting to see what happens at the federal level,” said Ishmael. But she recommends districts avoid delay.
“I’d tell them to start working on things now,” she said. “This is a brand-new tool that is impacting our classrooms and our lives. There needs to be some sort of baseline parameters for students to be able to use [it].”
‘We’re all entering this innovative environment with a lot of unknowns’
Most educators see value in understanding AI.
Two-thirds of those surveyed by the EdWeek Research Center say students will need knowledge of AI because the technology already features so heavily in the products and services that are part of their daily lives. And 60 percent say that employers are looking for people who can work with AI tools to do their jobs more efficiently.
Nearly half said students will need AI skills to be successful in college, and nearly a third believe younger students will need them to do well academically in the upper grades.
That’s motivated some district leaders to move quickly.
“I think the driver for me is really looking at the jobs of the future and looking at it through the economic lens,” said Jerry Almendarez, the superintendent of the Santa Ana Unified school district, which he describes as a largely “blue collar” Southern California community. “I see this as a window of opportunity for communities like mine, to catch up to the rest of society by giving [students] skills and access to a technology that has never been at their fingertips before,” he said.
District and school leaders who want to help their students navigate this technology “should know that they’re not alone, if they don’t know where to start,” Almendarez said. “That’s OK. None of us really do. We’re all entering this innovative environment with a lot of unknowns.”
Almendarez suggested districts turn to entities that have already sketched out what AI policy guidance could look like. That includes six states—California, North Carolina, Oregon, Virginia, Washington, and West Virginia—as well as districts that were early out of the gate on AI policy, such as Washington state’s Peninsula school district near Seattle.
Nonprofit organizations have also stepped up. TeachAI last fall released a guidance tool kit that offers suggestions for mitigating privacy risks of AI tools, tactics for ensuring students use the technology to inform their assignments and not to cheat, and tips on how to train staff on using AI appropriately.
Last fall, the Council of the Great City Schools and the Consortium for School Networking released a 93-question checklist to help educators think through policies around generative AI. The list includes queries such as: Does your district have a dedicated point person on the role of artificial intelligence in K-12 education? Are you requiring vendors that use AI algorithms in their products to ensure they are free from bias?
That kind of direction is what district leaders are searching for, said Dusseault of the Center on Reinventing Public Education.
“We’ve heard superintendents say, ‘I would like to see support, and it doesn’t have to come from my state. It could come from a trusted nonprofit,’” she said.
Some districts are taking it a step further. New York City, which reversed an initial ban on ChatGPT, and Santa Ana are launching AI policy shops whose work can inform the broader field.
‘What really is the purpose of having kids take world literature or biology and physics?’
Much of the early discussion around the use of generative AI in K-12 classrooms centered on how students might use the technology to cheat, Dusseault said.
“One of the big questions that I know out the gate was kind of scary and put some of the districts on their back foot was plagiarism, this idea that ChatGPT is going to end up giving students the ability to plagiarize and not represent their work,” Dusseault said.
But district and state leaders’ thinking has evolved over the past year, she said.
“Now, a year later, we’re seeing: ‘We are probably going to all be using some large language model or something like ChatGPT into the future, so students may need to actually have skill building on how to use it appropriately.’”
One state on the vanguard of this approach: North Carolina, whose AI guidance, released last month, includes a clear outline of possibilities for using AI on assignments without the technology encouraging cheating or plagiarism.
As generative AI gets ever more adept at the kinds of assignments teachers regularly give students—write an essay on bird imagery in Shakespeare’s “Macbeth,” explain the differences between igneous and metamorphic rocks—educators will need to rethink long-held tenets of teaching and learning, said Catherine Truitt, North Carolina’s superintendent of public instruction.
They will have to ask themselves: “What really is the purpose of having kids take world literature or biology and physics? What should kids be getting out of these courses?” she said.
Educators “are going to have to start having hard conversations” about what it really means to teach content or help students develop critical-thinking and analytical skills, she said. “The consequences of us ignoring it and sticking our heads in the sand is that students will game the system.”