ChatGPT – The 74

Texas Will Use Computers to Grade Written Answers on This Year’s STAAR Tests (April 10, 2024)

This article was originally published in The Texas Tribune.

Students sitting for their STAAR exams this week will be part of a new method of evaluating Texas schools: Their written answers on the state’s standardized tests will be graded automatically by computers.

The Texas Education Agency is rolling out an “automated scoring engine” for open-ended questions on the State of Texas Assessment of Academic Readiness for reading, writing, science and social studies. The technology, which relies on natural language processing, the same family of techniques behind artificial intelligence chatbots such as GPT-4, will save the state agency an estimated $15 million to $20 million per year that it would otherwise have spent hiring human scorers through a third-party contractor.

The change comes after the STAAR test, which measures students’ understanding of state-mandated core curriculum, was redesigned in 2023. The test now includes fewer multiple-choice questions and more open-ended questions — known as constructed response items. After the redesign, there are six to seven times more constructed response items.




“We wanted to keep as many constructed open ended responses as we can, but they take an incredible amount of time to score,” said Jose Rios, director of student assessment at the Texas Education Agency.

Rios said TEA hired about 6,000 temporary scorers in 2023; this year, it will need fewer than 2,000.

To develop the scoring system, the TEA gathered 3,000 responses that went through two rounds of human scoring. From this field sample, the automated scoring engine learns the characteristics of responses, and it is programmed to assign the same scores a human would have given.
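To make that training step concrete, here is a minimal sketch in Python of how such an engine might learn to mimic human scorers: fit a text model to the human-assigned scores from the field sample, then predict scores for new responses. The TEA has not published its implementation, so the model choice, features and sample data below are illustrative assumptions, not the agency’s actual engine.

```python
# Illustrative sketch only: the TEA's real scoring engine is not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Field sample: student responses, each scored in two rounds by humans.
responses = [
    "The author supports her claim with two pieces of evidence.",  # made-up examples
    "idk",
]
human_scores = [(4 + 4) / 2, (0 + 1) / 2]  # average of the two human rounds

# Learn textual characteristics that predict the human-assigned score.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
model.fit(responses, human_scores)

# Score a new constructed response the way a human would have.
print(model.predict(["The author gives evidence for her claim."]))
```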

This spring, as students complete their tests, the computer will first grade all the constructed responses. Then, a quarter of the responses will be rescored by humans.

When the computer has “low confidence” in the score it assigned, those responses will be automatically reassigned to a human. The same thing will happen when the computer encounters a type of response that its programming does not recognize, such as one using lots of slang or words in a language other than English.

“We have always had very robust quality control processes with humans,” said Chris Rozunick, division director for assessment development at the Texas Education Agency. With a computer system, the quality control looks similar.

Every day, Rozunick and other testing administrators will review a summary of results to check that they match what is expected. In addition to “low confidence” scores and responses that do not fit in the computer’s programming, a random sample of responses will also be automatically handed off to humans to check the computer’s work.
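Taken together, the three routes to a human described here (low-confidence scores, unrecognized responses and a random audit sample) amount to a simple decision rule. Below is a minimal sketch; the threshold values and function name are illustrative assumptions, since the TEA has not published its actual cutoffs.

```python
import random

CONFIDENCE_FLOOR = 0.75  # illustrative; the TEA's real threshold is not public
AUDIT_RATE = 0.25        # the article says about a quarter of responses are rescored

def needs_human(confidence: float, recognized: bool) -> bool:
    """Decide whether a machine-scored response is rerouted to a human scorer."""
    if confidence < CONFIDENCE_FLOOR:
        return True                      # "low confidence" scores are reassigned
    if not recognized:
        return True                      # e.g., heavy slang or another language
    return random.random() < AUDIT_RATE  # random sample checks the computer's work
```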

TEA officials have been resistant to the suggestion that the scoring engine is artificial intelligence. It may use technology similar to chatbots such as GPT-4 or Google’s Gemini, but the agency has stressed that the process will have systematic oversight from humans. It won’t “learn” from one response to the next, but will always defer to its original programming set up by the state.

“We are way far away from anything that’s autonomous or can think on its own,” Rozunick said.

But the plan has still generated worry among educators and parents in a world still wary of the influence of machine learning, automation and AI.

Some educators across the state said they were caught by surprise at TEA’s decision to use automated technology — also known as hybrid scoring — to score responses.

“There ought to be some consensus about, hey, this is a good thing, or not a good thing, a fair thing or not a fair thing,” said Kevin Brown, the executive director for the Texas Association of School Administrators and a former superintendent at Alamo Heights ISD.

Representatives from TEA first mentioned interest in automated scoring in testimony to the Texas House Public Education Committee in August 2022. In the fall of 2023, the agency announced the move to hybrid scoring at a conference and during test coordinator training before releasing details of the process in December.

The STAAR test results are a key part of the accountability system TEA uses to grade school districts and individual campuses on an A-F scale. Students take the test every year from third grade through high school. When campuses within a district are underperforming on the test, state law allows the Texas education commissioner to intervene.

The commissioner can appoint a conservator to oversee campuses and school districts. State law also allows the commissioner to suspend and replace elected school boards with an appointed board of managers. If a campus receives failing grades for five years in a row, the commissioner is required to appoint a board of managers or close that school.

With the stakes so high for campuses and districts, there is a sense of uneasiness about a computer’s ability to score responses as well as a human can.

“There’s always this sort of feeling that everything happens to students and to schools and to teachers and not for them or with them,” said Carrie Griffith, policy specialist for the Texas State Teachers Association.

A former teacher in the Austin Independent School District, Griffith added that even if the automated scoring engine works as intended, “it’s not something parents or teachers are going to trust.”

Superintendents are also uncertain.

“The automation is only as good as what is programmed,” said Lori Rapp, superintendent at Lewisville ISD. School districts have not been given a detailed enough look at how the programming works, Rapp said.

The hybrid scoring system was already used on a limited basis in December 2023. Most students who take the STAAR test in December are retaking it after a low score. That’s not the case for Lewisville ISD, where high school students on an altered schedule test for the first time in December, and Rapp said her district saw a “drastic increase” in zeroes on constructed responses.

“At this time, we are unable to determine if there is something wrong with the test question or if it is the new automated scoring system,” Rapp said.

The state overall saw an increase in zeroes on constructed responses in December 2023, but the TEA said there are other factors at play. In December 2022, the only way to score a zero was by not providing an answer at all. With the STAAR redesign in 2023, students can receive a zero for responses that may answer the question but lack any coherent structure or evidence.

The TEA also said that students who are retesting will perform at a different level than students taking the test for the first time. “Population difference is driving the difference in scores rather than the introduction of hybrid scoring,” a TEA spokesperson said in an email.

For $50, students and their parents can request a rescore if they think the computer or the human got it wrong. The fee is waived if the new score is higher than the initial score. For grades 3-8, there are no consequences on a student’s grades or academic progress if they receive a low score. For high school students, receiving a minimum STAAR test score is a common way to fulfill one of the state graduation requirements, but it is not the only way.

Even with layers of quality control, Round Rock ISD Superintendent Hafedh Azaiez said he worries a computer could “miss certain things that a human being may not be able to miss,” and that room for error will impact students who Azaiez said are “trying to do his or her best.”

Test results will impact “how they see themselves as a student,” Brown said, and it can be “humiliating” for students who receive low scores. With human graders, Brown said, “students were rewarded for having their own voice and originality in their writing,” and he is concerned that computers may not be as good at rewarding originality.

Julie Salinas, director of assessment, research and evaluation at Brownsville ISD, said she has concerns about whether hybrid scoring is “allowing the students the flexibility to respond” in a way that they can demonstrate their “full capability and thought process through expressive writing.”

Brownsville ISD is overwhelmingly Hispanic. Students taking an assessment entirely in Spanish will have their tests graded by a human. If the automated scoring engine works as intended, responses that include some Spanish words or colloquial, informal terms will be flagged by the computer and assigned to a human so that more creative writing can be assessed fairly.

The system is designed so that it “does not penalize students who answer differently, who are really giving unique answers,” Rozunick said.

With the computer scoring now a part of STAAR, Salinas is focused on adapting. The district is incorporating tools with automated scoring into how teachers prepare students for the STAAR test to make sure they are comfortable.

“Our district is on board and on top of the things that we need to do to ensure that our students are successful,” she said.

Disclosure: Google, the Texas Association of School Administrators and Texas State Teachers Association have been financial supporters of The Texas Tribune, a nonprofit, nonpartisan news organization that is funded in part by donations from members, foundations and corporate sponsors. Financial supporters play no role in the Tribune’s journalism.

This article originally appeared in The Texas Tribune.

The Texas Tribune is a member-supported, nonpartisan newsroom informing and engaging Texans on state politics and policy. Learn more at texastribune.org.

A Cautionary AI Tale: Why IBM’s Dazzling Watson Supercomputer Made a Lousy Tutor (April 9, 2024)

With a new race underway to create the next teaching chatbot, IBM’s abandoned 5-year, $100M ed push offers lessons about AI’s promise and its limits. 

In the annals of artificial intelligence, Feb. 16, 2011, was a watershed moment.

That day, IBM’s Watson supercomputer finished off a three-game shellacking of Jeopardy! champions Ken Jennings and Brad Rutter. Trailing by over $30,000, Jennings, now the show’s host, wrote out his Final Jeopardy answer in mock resignation: “I, for one, welcome our computer overlords.”

A lark to some, the experience galvanized Satya Nitta, a longtime computer researcher at IBM’s Watson Research Center in Yorktown Heights, New York. Tasked with figuring out how to apply the supercomputer’s powers to education, he soon envisioned tackling ed tech’s most sought-after challenge: the world’s first tutoring system driven by artificial intelligence. It would offer truly personalized instruction to any child with a laptop — no human required.


“I felt that they’re ready to do something very grand in the space,” he said in an interview. 

Nitta persuaded his bosses to throw more than $100 million at the effort, bringing together 130 technologists, including 30 to 40 Ph.D.s, across research labs on four continents. 

But by 2017, the tutoring moonshot was essentially dead, and Nitta had concluded that effective, long-term, one-on-one tutoring is “a terrible use of AI — and that remains today.”

For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn’t engage or motivate kids, inspire them to reach new heights or even keep them focused on the material — all qualities of the best mentors.

It’s a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. “There are some things AI is actually very good for,” Nitta said, “but it’s not great as a replacement for humans.”

His five-year journey to a dead end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.

Some of the leading lights of ed tech are trying to pick up where Watson left off, offering AI tools that promise to help teach students. Sal Khan, founder of Khan Academy, last year said AI has the potential to bring “probably the biggest positive transformation” that education has ever seen. He wants to give “every student on the planet an artificially intelligent but amazing personal tutor.”

A 25-year journey

To be sure, research on high-dosage, one-on-one, in-person tutoring is clear: It’s among the most effective interventions available, offering significant improvement in students’ academic performance, particularly in subjects like math, reading and writing.

But traditional tutoring is also “breathtakingly expensive and hard to scale,” said Paige Johnson, a vice president of education at Microsoft. One school district in West Texas, for example, recently tapped federal pandemic relief funds to tutor 6,000 students. The expense, Johnson said, puts it out of reach for most parents and school districts.

We missed something important. At the heart of education, at the heart of any learning, is engagement.

Satya Nitta, IBM Research’s former global head of AI solutions for learning

For IBM, the opportunity to rebalance the equation in kids’ favor was hard to resist. 

The Watson lab is legendary in the computer science field, with Nobel laureates and six Turing Award winners among its ranks. It’s home to countless innovations, such as barcodes and the magnetic stripes on credit cards. It’s also where, in 1997, Deep Blue beat Garry Kasparov, essentially inventing the notion that AI could “think” like a person.

Chess enthusiasts watch World Chess champion Garry Kasparov on a television monitor as he holds his head in his hands at the start of the sixth and final match May 11, 1997 against IBM’s Deep Blue computer in New York. Kasparov lost this match in just 19 moves. (Stan Honda/Getty)

The heady atmosphere, Nitta recalled, inspired “a very deep responsibility to do something significant and not something trivial.”

Within a few years of Watson’s victory, Nitta, who had arrived in 2000 as a chip technologist, rose to become IBM Research’s global head of AI solutions for learning. For the Watson project, he said, “I was just given a very open-ended responsibility: Take Watson and do something with it in education.”

Nitta spent a year simply reading up on how learning works. He studied cognitive science, neuroscience and the decades-long history of “intelligent tutoring systems” in academia. Foremost in his reading list was the research of Stanford neuroscientist Vinod Menon, who’d put elementary schoolers through a 12-week math tutoring session, collecting before-and-after scans of their brains using an MRI. Tutoring, he found, produced nothing less than an increase in neural connectivity. 

Nitta returned to his bosses with the idea of an AI-powered cognitive tutor. “There’s something I can do here that’s very compelling,” he recalled saying, “that can broadly transform learning itself. But it’s a 25-year journey. It’s not a two-, three-, four-year journey.”

IBM drafted two of the highest-profile partners possible in education: the children’s media powerhouse Sesame Workshop and Pearson, the international publisher.

One product envisioned was a voice-activated Elmo doll that would serve as a kind of digital tutoring companion, interacting fully with children. Through brief conversations, it would assess their skills and provide spoken responses to help kids advance.

One proposed application of IBM’s planned Watson tutoring app was to create a voice-activated Elmo doll that would be an interactive digital companion. (Getty)

Meanwhile, Pearson promised that it could soon allow college students to “dialogue with Watson in real time.”

Nitta’s team began designing lessons and putting them in front of students — both in classrooms and in the lab. In order to nurture a back-and-forth between student and machine, they didn’t simply present kids with multiple-choice questions, instead asking them to write responses in their own words.

It didn’t go well.

Some students engaged with the chatbot, Nitta said. “Other students were just saying, ‘IDK’ [I don’t know]. So they simply weren’t responding.” Even those who did began giving shorter and shorter answers. 

Nitta and his team concluded that a cold reality lay at the heart of the problem: For all its power, Watson was not very engaging. Perhaps as a result, it also showed “little to no discernible impact” on learning. It wasn’t just dull; it was ineffective.

Satya Nitta (left) and part of his team at IBM’s Watson Research Center, which spent five years trying to create an AI-powered interactive tutor using the Watson supercomputer.

“Human conversation is very rich,” he said. “In the back and forth between two people, I’m watching the evolution of your own worldview.” The tutor influences the student — and vice versa. “There’s this very shared understanding of the evolution of discourse that’s very profound, actually. I just don’t know how you can do that with a soulless bot. And I’m a guy who works in AI.”

When students’ usage time dropped, “we had to be very honest about that,” Nitta said. “And so we basically started saying, ‘OK, I don’t think this is actually correct. I don’t think this idea — that an intelligent tutoring system will tutor all kids, everywhere, all the time — is correct.’”

‘We missed something important’

IBM soon switched gears, debuting another crowd-pleasing Watson variation — this time, a touching throwback: It engaged in formal debate. In a televised demonstration in 2019, it went up against debate champ Harish Natarajan on the topic “Should we subsidize preschools?” Among its arguments for funding, the supercomputer offered, without a whiff of irony, that good preschools can prevent “future crime.” Its current iteration focuses on helping businesses build AI applications like “intelligent customer care.”

Nitta left IBM, eventually taking several colleagues with him to create a startup. It uses voice-activated AI to safely help teachers do workaday tasks such as updating digital gradebooks, opening PowerPoint presentations and emailing students and parents.

Thirteen years after Watson’s stratospheric Jeopardy! victory and more than one year into the Age of ChatGPT, Nitta’s expectations about AI couldn’t be more down-to-earth: His AI powers what’s basically “a carefully designed assistant” to fit into the flow of a teacher’s day. 

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.”

Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”

These notions aren’t news to those who do tutoring for a living. Varsity Tutors, which offers live and online tutoring in 500 school districts, relies on AI to power a lesson plan creator that helps personalize instruction. But when it comes to the actual tutoring, humans deliver it, said Anthony Salcito, chief institution officer at Nerdy, which operates Varsity Tutors.

“The AI isn’t far enough along yet to do things like facial recognition and understanding of student focus,” said Salcito, who spent 15 years at Microsoft, most of them as vice president of worldwide education. “One of the things that we hear from teachers is that the students love their tutors. I’m not sure we’re at a point where students are going to love an AI agent.”

Students love their tutors. I'm not sure we're at a point where students are going to love an AI agent.

Anthony Salcito, Nerdy

The No. 1 factor in a student’s tutoring success, research suggests, is consistently showing up. As smart and efficient as an AI chatbot might be, it’s an open question whether most students, especially struggling ones, would show up for an inanimate agent or develop a sense of respect for its time.

When Salcito thinks about what AI bots now do in education, he’s not impressed. Most, he said, “aren’t going far enough to really rethink how learning can take place.” They end up simply as fast, spiffed-up search engines. 

In most cases, he said, the power of one-on-one, in-person tutoring often emerges as students begin to develop more honesty about their abilities, advocate for themselves and, in a word, demand more of school. “In the classroom, a student may say they understand a problem. But they come clean to the tutor, where they expose, ‘Hey, I need help.’”

Cognitive science suggests that for students who aren’t motivated or who are uncertain about a topic, only close personal attention will help. That requires a focused, caring human, watching carefully, asking tons of questions and reading students’ cues.

Jeremy Roschelle, a learning scientist and an executive director of Digital Promise, a federally funded research center, said usage with most ed tech products tends to drop off. “Kids get a little bored with it. It’s not unique to tutors. There’s a newness factor for students. They want the next new thing.”

There's a newness factor for students. They want the next new thing.

Jeremy Roschelle, Digital Promise

Even now, Nitta points out, research shows that big commercial AI applications don’t seem to hold users’ attention as well as top entertainment and social media sites like YouTube, Instagram and TikTok. One analysis dubbed the user engagement of sites like ChatGPT “lackluster,” finding that the proportion of monthly active users who engage with them in a single day was only about 14%, suggesting that such sites aren’t very “sticky” for most users.

For social media sites, by contrast, it’s between 60% and 65%. 

One notable AI exception: Character.ai, an app that allows users to create companions of their own among figures from history and fiction and chat with the likes of Socrates and Bart Simpson. It has a stickiness score of 41%.
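The engagement figures in this passage reflect a standard ratio sometimes called stickiness: daily active users divided by monthly active users. A quick worked example, with made-up user counts chosen only to reproduce the percentages cited above:

```python
def stickiness(daily_active: int, monthly_active: int) -> float:
    """Share of a month's users who show up on a given day (DAU/MAU)."""
    return daily_active / monthly_active

# Hypothetical counts that reproduce the article's ratios.
print(f"chatbot-style site: {stickiness(14_000, 100_000):.0%}")  # ~14%, "lackluster"
print(f"social media site:  {stickiness(62_000, 100_000):.0%}")  # in the 60-65% range
print(f"companion app:      {stickiness(41_000, 100_000):.0%}")  # the 41% exception
```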

As startups offer “your child’s superhuman tutor,” starting at $29 per month, and Khan Academy publicly tests its popular Khanmigo AI tool, Nitta maintains that there’s little evidence from learning science that, absent a strong outside motivation, people will spend enough time with a chatbot to master a topic.

“We are a very deeply social species,” said Nitta, “and we learn from each other.”

IBM declined to comment on its work in AI and education, as did Sesame Workshop. A Pearson spokesman said that since last fall it has been beta-testing AI study tools keyed to its e-textbooks, among other efforts, with plans this spring to expand the number of titles covered.

Getting ‘unstuck’

IBM’s experiences notwithstanding, the search for an AI tutor has continued apace, this time with more players than just a legacy research lab in suburban New York. Using the latest affordances of so-called large language models, or LLMs, technologists at Khan Academy believe they are finally taking the first halting steps toward an effective AI tutor.

Kristen DiCerbo remembers the moment her mind began to change about AI. 

It was September 2022, and she’d been at Khan Academy for only a year and a half when she and founder Sal Khan got access to a beta version of ChatGPT. OpenAI, ChatGPT’s creator, had asked Microsoft co-founder Bill Gates for more funding, but he told them not to come back until the chatbot could pass an Advanced Placement biology exam.

Khan Academy founder Sal Khan has said AI has the potential to bring “probably the biggest positive transformation” that education has ever seen. He wants to give every student an “artificially intelligent but amazing personal tutor.” (Getty)

So OpenAI queried Khan for sample AP biology questions. He and DiCerbo said they’d help in exchange for a peek at the bot — and a chance to work with the startup. They were among the first people outside of OpenAI to get their hands on GPT-4, the LLM that powers the upgraded version of ChatGPT. They were able to test out the AI and, in the process, become amateur AI prompt engineers before anyone had even heard of the term.

Like many users typing in queries in those first heady days, the pair initially just marveled at the sophistication of the tool and its ability to return what felt, for all the world, like personalized answers. With DiCerbo working from her home in Phoenix and Khan from the nonprofit’s Silicon Valley office, they traded messages via Slack.

Kristen DiCerbo introduces users to Khanmigo in a Khan Academy promotional video. (YouTube)

“We spent a couple of days just going back and forth, Sal and I, going, ‘Oh my gosh, look what we did! Oh my gosh, look what it’s saying — this is crazy!’” she told an audience during a recent talk at the University of Notre Dame.

She recounted asking the AI to help write a mystery story in which shoes go missing in an apartment complex. In the back of her mind, DiCerbo said, she planned to make a dog the shoe thief, but didn’t reveal that to ChatGPT. “I started writing it, and it did the reveal,” she recalled. “It knew that I was thinking it was going to be a dog that did this, from just the little clues I was planting along the way.”

More tellingly, it seemed to do something Watson never could: have engaging conversations with students.

DiCerbo recounted talking to a high school student they were working with who told them about an interaction she’d had with ChatGPT around The Great Gatsby. She asked it about F. Scott Fitzgerald’s famous green light, which scholars have long interpreted as symbolizing Jay Gatsby’s out-of-reach hopes and dreams.

“It comes back to her and asks, ‘Do you have hopes and dreams just out of reach?’” DiCerbo recalled. “It had this whole conversation” with the student.

The pair soon tore up their 2023 plans for Khan Academy. 

It was a stunning turn of events for DiCerbo, a Ph.D. educational psychologist and former senior Pearson research scientist who had spent more than a year on the failed Watson project. In 2016, Pearson promised that Watson would soon be able to chat with college students in real time to guide them in their studies. But it was DiCerbo’s teammates, about 20 colleagues, who had to actually train the supercomputer on thousands of student-generated answers to questions from textbooks — and tempt instructors to rate those answers.

Like Nitta, DiCerbo recalled that at first things went well. They found a natural science textbook with a large user base and set Watson to work. “You would ask it a couple of questions and it would seem like it was doing what we wanted to,” answering student questions via text.

But invariably if a student’s question strayed from what the computer expected, she said, “it wouldn’t know how to answer that. It had no ability to freeform-answer questions, or it would do so in ways that didn’t make any sense.”

After more than a year of labor, she realized, “I had never seen the ‘OK, this is going to work’ version” of the hoped-for tutor. “I was always at the ‘OK, I hope the next version’s better.’”

But when she got a taste of ChatGPT, DiCerbo immediately saw that, even in beta form, the new bot was different. Using software that quickly predicted the most likely next word in any conversation, ChatGPT was able to engage with its human counterpart in what seemed like a personal way.
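Next-word prediction itself is easy to demonstrate. The toy sketch below just counts which word follows which in a tiny sample text and always emits the most frequent follower; real LLMs run the same predict-append-repeat loop, but with neural networks over tokens trained on vast corpora. This is an illustration of the idea, not how ChatGPT is built.

```python
from collections import Counter, defaultdict

# Count word-to-next-word transitions in a tiny training text.
text = "the tutor asks a question and the student answers the question".split()
follows = defaultdict(Counter)
for word, nxt in zip(text, text[1:]):
    follows[word][nxt] += 1

# Generate by repeatedly emitting the most likely next word.
word = "the"
for _ in range(5):
    word = follows[word].most_common(1)[0][0]
    print(word, end=" ")  # -> "tutor asks a question and"
```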

Since its debut in March 2023, Khanmigo has turned heads with what many users say is a helpful, easy-to-use, natural language interface, though a few users have pointed out that it sometimes makes mistakes.

Surprisingly, DiCerbo doesn’t consider the popular chatbot a full-time tutor. As sophisticated as AI might now be in motivating students to, for instance, try again when they make a mistake, “It’s not a human,” she said. “It’s also not their friend.”

(AI's) not a human. It’s also not their friend.

Kristen DiCerbo, Khan Academy

Khan Academy’s own research shows their tool is effective with as little as 30 minutes of practice and feedback per week. But even as many startups promise the equivalent of a one-on-one human tutor, DiCerbo cautions that 30 minutes is not going to produce miracles. Khanmigo “is not a solution that’s going to replace a human in your life,” she said. “It’s a tool in your toolbox that can help you get unstuck.”

‘A couple of million years of human evolution’

For his part, Nitta says that for all the progress in AI, he’s not persuaded that we’re any closer to a real-live tutor that would offer long-term help to most students. If anything, Khanmigo and probabilistic tools like it may prove to be effective “homework helpers.” But that’s where he draws the line. 

“I have no problem calling it that, but don’t call it a tutor,” he said. “You’re trying to endow it with human-like capabilities when there are none.”

Unlike humans, who will typically do their best to respond genuinely to a question, the way AI bots work — by digesting pre-existing texts and other information to come up with responses that seem human — is akin to a “statistical illusion,” writes one Harvard Business School professor. “They’ve just been well-trained by humans to respond to humans.”

Researcher Sidney Pressey’s 1928 Testing Machine, one of a series of so-called “teaching machines” that he and others believed would advance education through automation.

Largely because of this, Nitta said, there’s little evidence that a chatbot will continuously engage people as a good human tutor would.

What would change his mind? Several years of research by an independent third party showing that tools like Khanmigo actually make a difference on a large scale — something that doesn’t exist yet.

DiCerbo also maintains her hard-won skepticism. She knows all about the halting early decades of automated instruction a century ago, when experimental, punch-card-operated “teaching machines” guided students through rudimentary multiple-choice lessons, often with simple rewards at the end.

In her talks, DiCerbo urges caution about AI revolutionizing education. As much as anyone, she is aware of the expensive failures that have come before. 

Two women stand beside open drawers of computer punch card filing cabinets. (American Stock/Getty Images)

In her recent talk at Notre Dame, she did her best to manage expectations of the new AI, which seems so limitless. In one-to-one teaching, she said, there’s an element of humanity “that we have not been able to — and probably should not try — to replicate in artificial intelligence.” In that respect, she’s in agreement with Nitta: Human relationships are key to learning. In the talk, she noted that students who have a person in school who cares about their learning have higher graduation rates. 

But still.

ChatGPT now has 100 million weekly users, according to OpenAI. That record-fast uptake makes her think “there’s something interesting and sticky about this for people that we haven’t seen in other places.”

Being able to engineer prompts in plain English opens the door for more people, not just engineers, to create tools quickly and iterate on what works, she said. That democratization could mean the difference between another failed undertaking and agile tools that actually deliver at least a version of Watson’s promise. 

An early prototype of IBM’s Watson supercomputer in Yorktown Heights, New York. In 2011, the system was the size of a master bedroom. (Wikimedia Commons)

Seven years after he left IBM to start his new endeavor, Nitta is philosophical about the effort. He takes virtually full responsibility for the failure of the Watson moonshot. In retrospect, even his 25-year timeline for success may have been naive.

“What I didn’t appreciate is, I actually was stepping into a couple of million years of human evolution,” he said. “That’s the thing I didn’t appreciate at the time, which I do in the fullness of time: Mistakes happen at various levels, but this was an important one.”

‘Distrust, Detection & Discipline:’ New Data Reveals Teachers’ ChatGPT Crackdown (April 2, 2024)

New survey data puts hard numbers behind the steep rise of ChatGPT and other generative AI chatbots in America’s classrooms — and reveals a big spike in student discipline as a result.

As artificial intelligence tools become more common in schools, most teachers say their districts have adopted guidance and training for both educators and students, according to a new survey by the nonprofit Center for Democracy and Technology. What this guidance lacks, however, are clear instructions on how teachers should respond if they suspect a student used generative AI to cheat.




“Though there has been positive movement, schools are still grappling with how to effectively implement generative AI in the classroom — making this a critical moment for school officials to put appropriate guardrails in place to ensure that irresponsible use of this technology by teachers and students does not become entrenched,” report co-authors Maddy Dwyer and Elizabeth Laird write.

Among the middle and high school teachers who responded to the online survey, which was conducted in November and December, 60% said their schools permit the use of generative AI for schoolwork — double the number who said the same just five months earlier on a similar survey. And while a resounding 80% of educators said they have received formal training about the tools, including on how to incorporate generative AI into assignments, just 28% said they’ve received instruction on how to respond if they suspect a student has used ChatGPT to cheat. 

That doesn’t mean, however, that students aren’t getting into trouble. Among survey respondents, 64% said they were aware of students who were disciplined or faced some form of consequences — including not receiving credit for an assignment — for using generative AI on a school assignment. That represents a 16 percentage-point increase from August. 

The tools have also affected how educators view their students, with more than half saying they’ve grown distrustful of whether their students’ work is actually theirs. 

Fighting fire with fire, a growing share of teachers say they rely on digital detection tools to sniff out students who may have used generative AI to plagiarize. Sixty-eight percent of teachers — and 76% of licensed special education teachers — said they turn to generative AI content detection tools to determine whether students’ work is actually their own. 

The findings carry significant equity concerns for students with disabilities, researchers concluded, especially in the face of evidence that such detection tools are ineffective.

University of Texas at El Paso To Use Faculty Survey Results For AI Strategy (March 14, 2024)

This article was originally published in El Paso Matters.

A University of Texas at El Paso team plans to conduct a survey this spring and act on the data to offer UTEP instructors the necessary help to address the growing capabilities and complexities of artificial intelligence, including ChatGPT.

Jeff Olimpo, director of the campus’ Institute for Scholarship, Pedagogy, Innovation and Research Excellence, said the goal of this study will be to determine how much instructors know about AI and how comfortable they would be incorporating the technology into their courses.

Armed with that knowledge, the InSPIRE team will develop a multi-pronged, hybrid effort that builds on every level of understanding, from basic tutorials to in-depth ideas for enhancing instruction, including ways students can use AI in their fields of study.




This effort is the follow-up step to InSPIRE’s spring 2023 workshops that led to the university’s initial ChatGPT guidelines. Since then, the team has incorporated other concepts used at institutions within and beyond the University of Texas System.

“We essentially created a Frankenstein of sorts,” Olimpo said.

Jeff Olimpo, director of UTEP’s Institute for Scholarship, Pedagogy, Innovation and Research Excellence (UTEP)

The latest incarnation offered recommendations for what might be appropriate to include in a syllabus, such as whether AI is prohibited, allowed or allowed with restrictions. The team also created a guide with a Frequently Asked Questions section covering AI restrictions and procedures for instructors who suspect a student used AI in an assignment without crediting the technology. The information was shared with faculty in January after it was approved by John Wiebe, provost and vice president for Academic Affairs.

Olimpo called the guidelines “brief, digestible and accessible,” and he stressed that instructors ultimately would decide what was best for their classes.

Gabriel Ibarra-Mejia, associate professor of public health sciences, was among the UTEP faculty who responded to the university’s recommendations. He said like it or not, ChatGPT (Generative Pre-trained Transformer) is part of the education equation now and he planned to embrace it to a point.

The professor said he allows students to use it in assignments as long as they cite its use and the reasons behind it such as to develop an outline or to polish the grammar or the report’s flow. What he does not want is for AI to replace thoughts and knowledge, especially from his students who may be health care professionals someday.

“I’m more concerned about how it might replace critical thinking,” said Ibarra-Mejia, who mentioned how he had received student papers where he suspected AI use because the responses had nothing to do with the question. “I’m concerned that the answers I get from a student might be from ChatGPT.”

Gabriel Ibarra-Mejia, associate professor of public health sciences at UTEP, said that he will allow students to use ChatGPT — with some restrictions — because it is an academic tool, but his concern is that it could lead to diminished critical thinking if used poorly. (Daniel Perez / El Paso Matters)

Melissa Vito, vice provost for Academic Innovation at UT San Antonio, said AI has been around for decades and that ChatGPT is part of the evolution. She is the lead organizer of an AI conference for UT System institutions this week at her campus.

“The consensus in higher ed is that instructors need to use it, and students need to understand it and be able to use it,” Vito said.

In 2021, a group of tech industry leaders agreed that AI would influence all industries, but they suggested that it would have the most effect on industries such as logistics, cybersecurity, health care, research and development, financial services, advertising, e-commerce, manufacturing, public transportation, and media and entertainment.

A research study released in March 2023 by OpenAI, the creator of ChatGPT, showed that approximately 80% of U.S. workers could have at least 10% of their work affected by GPT, and that 19% of employees could see at least 50% of their jobs affected by it. The projected effects span all wage levels.

Melissa Vito, vice provost for Academic Innovation at the University of Texas at San Antonio (UTSA)

While she is unaware of any UT System mandates to use ChatGPT, Vito said institutions are creating opportunities for faculty to learn about it so they can explain its uses better to their students. She said the best path for higher education is to work with the AI industry to address concerns such as data privacy that could restrict access to what is produced and how it is used.

Vito referenced a collaboration announced in January. Among the goals of that relationship is to introduce advanced capabilities to the institution, which will help faculty and staff investigate the possibilities of generative AI, which can create text, images and more in response to prompts.

The UTSA official said the purpose of the AI conference is to bring together administrators, faculty, staff and students with the broadest AI competencies to share their experiences and create a strong framework for how the UT System can benefit from the transformative effects of generative AI academically and socially.

Marcela Ramirez, associate vice provost for Teaching, Learning & Digital Transformation at UTSA, helped develop the conference’s workshops and panel discussions with representatives from sister institutions. They will cover ethical use, practical applications and how AI can be used to help students with critical thinking and problem-solving skills.

Ramirez, a two-time UTEP graduate who earned her BBA in 2008 and her MBA five years later, said the content will support faculty who want to update their courses with AI, and help them to be able to explain to students AI’s current limitations and future opportunities.

“What are the lessons learned?” asked Ramirez, who worked at UTEP for more than 10 years. “And what’s next?”

This article first appeared on El Paso Matters and is republished here under a Creative Commons license.

Wizard Chess, Robot Bikes and More: Six Students Creating Cool Stuff with AI (Feb. 25, 2024)

More than a year after ChatGPT’s surprise launch thrust artificial intelligence into public view, many educators and policymakers still fear that students will primarily use the technology for cheating. One survey found that two-thirds of high school and college instructors are so concerned about AI they’re rethinking assignments, with many planning to require handwritten assignments, in-class writing or even oral exams.

But a few students see things differently. They’re not only fearless about AI, they’re building their studies and future professional lives around it. While many of their teachers are scrambling to outsmart AI in the classroom, these students are embracing the technology, often spending hours at home, in classrooms and dorm rooms building tools they hope will launch their careers.

In a recent survey, ACT, the nonprofit that runs the college entrance exam of the same name, found that nearly half of high school students who’d signed up for the June 2023 exam had used AI tools, most commonly ChatGPT. Almost half of those who had used such tools relied on them for school assignments.

The 74 went looking for young people diving head-first into AI and found several doing substantial research and development as early as high school. 

The six students we found, a few as young as 15, are thinking much more deeply about AI than most adults, their hands in the technology in ways that would have seemed impossible just a generation ago. Many are immigrants to the West or come from families that emigrated here. Edtech podcaster Alex Sarlin, who also writes a newsletter focused on edtech and founded an edtech consultancy, wasn’t surprised by the demographics. He explained that while U.S. companies typically make headlines in AI, the phenomenon has “truly been a product of global collaboration, and many of its major innovators have been immigrants,” often with training and professorships at top North American universities.

These young people are programming everything from autonomous bicycles to postpartum depression apps for new mothers to 911 chatbots, homework helpers and Harry Potter-inspired robotic chess boards. 

All have a clear message about AI: Don’t fear it. Learn about it.

Isabela Ferrer

Age: 17

Hometown: Bogota, Colombia

School: MAST Academy, Miami, Fla.

What she’s working on: A high school junior at MAST, a public magnet high school focused on maritime studies and science, Ferrer plans to return to Colombia this spring and study computer science in college. She has been working with a foundation that takes in abandoned and abused children in her home country. She’s developing an AI tool to help the children learn how to read and write Spanish more easily.

“They enter a public school system that expects them to know how to read, but they don’t have these skills,” she said. 

Ferrer is also considering adding more features in the future, such as one that uses AI voice recognition to identify trauma in a student’s voice. 

Once she graduates, she’d like to take a gap year to “get a little more involved in the Colombian startup ecosystem and culture. I also want to travel internationally and possibly keep working on projects like the one I’m working on right now, but on an international scale.”

What most people misunderstand about AI: “Something I think most people don’t get about AI is that it’s very accessible to everyone,” Ferrer said. “Coding API [application programming interface, which allows two applications to talk to each other] and creating AI models for any specific purpose is very easy and, if done correctly, can be beneficial for different purposes.”

All the same, she also worries that AI is often used to tackle “very superficial problems” like productivity or data processing. “But I think there’s a huge opportunity to use these technologies to solve real problems in the world … There’s a huge opportunity to close different gaps that exist in emerging markets and in developing countries. And it’s very worth exploring.”
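As a concrete example of the accessibility Ferrer describes, a few lines of Python are enough to send a prompt to a hosted model through an API. The sketch below uses the OpenAI Python SDK’s chat interface; the model name, prompts and tutoring scenario are illustrative assumptions, not Ferrer’s actual code.

```python
# Illustrative sketch: a minimal chat-API call (requires OPENAI_API_KEY to be set).
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

reply = client.chat.completions.create(
    model="gpt-4",  # hypothetical model choice
    messages=[
        {"role": "system",
         "content": "You are a patient Spanish literacy tutor for young children."},
        {"role": "user",
         "content": "Ayúdame a leer esta frase: 'El gato está en casa.'"},
    ],
)
print(reply.choices[0].message.content)
```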

Shanzeh Haji

Age: 16

Hometown: Toronto, Canada

School: Bayview Secondary School, Richmond Hill, Ontario

What she’s working on: Once she learned about postpartum depression, Haji began talking to new mothers and family members, including her own mother, who had experienced it. “I realized how big the problem was and how closely connected I was to it.” Haji finished coding the AI chatbot for the as-yet unnamed app and is working on the symptom recognition platform.

What most people misunderstand about AI: “If you look at some of the people who are working in AI and some of the significant impact that AI has made on so many different problems,” she said, “whether it be climate change or medicine or drug discovery, you can just see that AI has significant potential — it can literally transform our lives in a positive way. It really allows for this radical innovation. And I feel like people see more of the negative side of artificial intelligence rather than the positive and the significance that it has on our lives.”

Aditya Syam

Age: 20

Hometown: Mumbai, India

School: Cornell University

What he’s working on: A math and computer science double major, Syam is part of a longstanding team at Cornell that is developing an AI-powered, self-navigating bicycle, basically a robot bike. “The kinds of applications we are thinking of for this are deliveries and basically just getting things from point A to point B without having a human intervene at any point,” he said. Syam, who is working on the bike’s navigation team, has been honing its obstacle avoidance algorithm, which keeps it from hitting things.

The project began about a decade ago, he said. “Back then, it was just a theory.” Now they plan to showcase an actual prototype of the bike this spring, probably in March or April, so everyone who has contributed to the project “can see what we’ve built.”

What most people misunderstand about AI: “It’s technology that’s been around for decades,” he said. “It’s just been rebranded in a different way.” ChatGPT, for instance, combines Natural Language Processing and Web access, which results in a kind of “miracle” product. “It seems so great — it can just pull something off the web for you, it can write essays for you, it can edit software code for you. But in its essence, it’s not that different from technologies that have been around before.”

Vinitha Marupeddi

Age: 21

Hometown: San Jose, Calif.

School: Purdue University

What she’s working on: A senior studying computer science, data science and applied statistics, Marupeddi recently led two student teams — one in voice recognition and another in computer vision — developing a robotic, voice-activated chess set modeled after Wizard’s Chess, the 3-D animated game in the Harry Potter books in which the pieces come to life. “We were able to do a lot of high-level robotics using that one project, so I thought that was very cool,” she said. Though the game is still far from being playable, Marupeddi calls it a good use case “to get people interested in robotics and machine learning.”

Last summer, she interned at a John Deere warehouse in Moline, Ill., where she was set free to work on any project that struck her fancy. Marupeddi looked around the warehouse and saw that Deere had a robot that was being used to track inventory, so she expanded its abilities to cover a wider area. She also worked on a computer vision algorithm that used security camera footage to detect how full certain areas of the warehouse were and determine how much more inventory they could hold.

What most people misunderstand about AI: “Honestly, I think a good chunk of people are just obsessed with the cheating part of it. They’re like, ‘Oh, ChatGPT can just write my essay. It can do my homework. I don’t have to worry about it.’ But they don’t try to actually understand the material. The people that do use ChatGPT to understand the material are actually going to use it as tutors or use it to ask questions if they don’t understand something.” That divide, between those who reject AI and those who learn how to control it, could grow larger if unaddressed. But learning about AI, she said, will “give people the resources, if they have the drive.”

Vinaya Sharma

Age: 18

Hometown: Toronto, Canada

School: Castlebrooke Secondary School, Brampton, Ontario

What she’s working on: Actually, the better question might be: What isn’t she working on? Sharma, a high school senior, writes code like most of us speak. In part, her work is a response to how little challenge she gets in school these days. “After COVID, I feel schools have gone easier on students,” she said. “I skip school as much as I can so I can code in my room.” The result has been a flurry of applications, from an AI-powered chatbot to handle 911 calls to a power grid simulator to a pharmaceutical app to aid in drug discovery.

The 911 chatbot is still in search of customers, she said, but would be valuable especially in cases where multiple people are calling about the same emergency, such as a car crash. The AI would geolocate the calls and determine if callers were using similar words to describe what they saw. To those who balk at talking to a 911 chatbot, Sharma said the current system in Toronto is often backed up. “It’ll be 100% better than being put on hold and no one assisting you at all.”

The idea for the grid simulator was born after she began talking to engineers and energy policymakers and realized that, in her words, “The engineers were very technical, looking at things on a scale of voltages and currents. And the policymakers had trouble communicating with these grid engineers. And I realized that that was one of the bottlenecks slowing down the process so much.” She used design principles pioneered by one of her favorite video games to give the two groups a drag-and-drop simulation that both could understand.

Sharma got interested in drug discovery after learning that Lululemon founder Chip Wilson has a rare form of muscular dystrophy that makes it difficult to walk. He’s investing $100 million in treatments and research for a cure. Sharma said she “fell down a research rabbit hole” and soon realized that the drug discovery process “is honestly broken. It takes more than a decade to bring a drug to market, and it costs, on average, $1 billion to $2 billion,” or about $743 million to nearly $1.5 billion in U.S. dollars.

Her app, BioBytes, aims to bring down both the cost and time needed to bring drugs to market. 

What most people misunderstand about AI: “With any new emerging tech, there’s going to be bad actors that will abuse the system or use it for harm,” she said. “But personally I believe the pros outweigh it. Instead of taking these tools away from us in order to prevent these bad things from happening, I think that people need to realize that the tools are here and people are going to use them. So there needs to be a greater focus on education, of how to use the tools and how to use [them] for good and how it can actually support us.”

Krishiv Thakuria 

Age: 15

Hometown: Mississauga, Ontario, Canada

School: The Woodlands Secondary School, Mississauga

What he’s working on: Thakuria founded a startup called Aceflow and is building a set of AI-powered learning tools to help students study more efficiently. The tools let users upload any class materials — study notes, a PDF of a textbook chapter or entire novel or even a teacher’s PowerPoint. From there they can create “an infinite set of practice questions” keyed to the course, Thakuria said. If students get stuck, they can click on an AI tutor customized to the material they uploaded.

The tutoring function is similar to Khan Academy’s AI-powered teaching assistant, Khanmigo, but Thakuria said Aceflow’s tool has an advantage: Khanmigo only works, for now, on Khan Academy materials. “In a lot of classes, teachers teach content in very different ways,” he said. “If you can personalize an AI tool to study the material of your teachers, you get learning that’s far more personalized and far more relevant to you, making your studying sessions more effective.” Aceflow users can also create timed study sessions, something neither Khanmigo nor ChatGPT users can currently do.
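Aceflow has not published how its tool works, but the behavior Thakuria describes (practice questions and tutoring keyed to uploaded material) matches a common pattern: pass the student’s own class materials to a model as context. A minimal sketch under that assumption, with a hypothetical helper name:

```python
# Illustrative sketch only; Aceflow's actual implementation is not public.
from openai import OpenAI

client = OpenAI()

def practice_question(course_material: str) -> str:
    """Generate one practice question grounded in the student's own materials."""
    reply = client.chat.completions.create(
        model="gpt-4",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Write one practice question answerable only from the notes provided."},
            {"role": "user", "content": course_material},
        ],
    )
    return reply.choices[0].message.content

print(practice_question("Photosynthesis converts light energy into chemical energy."))
```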

The new tool is being beta-tested by a focus group of 20, with a 1,400-person waitlist, he said. He and his partners plan to offer it on a “freemium” model, with charges for premium features. Even paying a small amount for unlimited use of the tool makes it available to many families who can’t afford a tutor, Thakuria said, since private tutoring can cost upwards of $10,000 a year. 

What most people misunderstand about AI: That its impact on education will be “binary,” he said. People believe “it’s either a good thing or a bad thing. I think that it can do both. For all the people who worry about AI being a bad thing, I would argue that, well, a hammer can be a bad thing when you give your kid a hammer for the first time to help you out with carpentry work. You have to teach your kid how to use it, right? And without teaching your kid how to use a tool, the tool is not going to be used properly, and that hammer is going to break something.”

It’s the same with AI. “If we can teach kids that smoking is bad for the body, we should teach kids that using AI in certain ways is bad for the brain. But we shouldn’t just focus on the negative effects, because then we’re closing off a future of using AI to solve educational inequity in so many beautiful ways. AI is a technology that can help us scale private tutoring to far more families than can actually afford it now. I think no one should underestimate the positive effects of AI while also safeguarding [against] the negative effects, because two things can be true at once.”

High School Cheating Increase from ChatGPT? Research Finds Not So Much (Feb. 6, 2024)

The rise of AI chatbot tools caused panic among high school teachers and administrators nationwide — but researchers say the frequency of students cheating on assignments remained “surprisingly” stagnant.

According to new research from Stanford University, about 60 to 70 percent of high school students surveyed in the fall of 2023 had engaged in cheating behavior — the same share as before the debut of ChatGPT in the fall of 2022.

“I thought that we would see higher numbers in the fall so it was a little surprising to me,” said Denise Pope, a senior lecturer at Stanford’s Graduate School of Education who surveyed students across 40 high schools through an organization she co-founded.

Victor Lee, an associate professor at Stanford’s Graduate School of Education who helped oversee the research with Pope, said high school students are “underwhelmed” by AI chatbot tools.

“It just sounds very sterile and vanilla to them,” Lee said. “They may have heard about it, but the media a lot of kids are using are quite different than the ones adults and working professionals are attuned to.”


Get stories like these delivered straight to your inbox. Sign up for The 74 Newsletter


A survey conducted by the Pew Research Center in the fall of 2023 found nearly one-third of students aged 13 to 17 have never heard of ChatGPT and another 44 percent have heard only “a little” about it.

Of those who were familiar with ChatGPT, the vast majority — about 81 percent — said they had not used it to help with schoolwork.

“Many teens are using a variety of technology…[but] among those who’ve heard at least a little about ChatGPT, shares of them still aren’t sure how they feel about it,” said Colleen McClain, a research associate at the Pew Research Center.

Here are four things to know about the effects AI chatbot tools have had on high school cheating:

1. High school students who weren’t cheating before aren’t cheating now.

According to earlier research, surveys of more than 70,000 high school students from 2002 to 2015 found about 64 percent had cheated on a test — a similar outcome to Stanford’s findings after the rise of AI chatbot tools.

Pope said what surprises educators and parents the most is how common cheating has been.

“We know from our research that when students do cheat, it’s typically for reasons that have very little to do with their access to technology,” Pope said.

“When a student is less engaged, when they feel like they don’t belong or are not respected or valued in their community, when they’re stressed and highly sleep deprived — these are things that tend to correlate with cheating,” Pope said. 

Lee said this number will “consistently stay there unless schools engage in certain steps to be thoughtful about what climate they’re creating that motivates cheating.”

This includes tapping into the topics students are already interested in and developing useful skills based on how they naturally enjoy learning.

“A lot of the time, the AI students encounter is via Snapchat because they have a chatbot built into it,” Lee said. “And students aren’t turning to Google as their primary search, they turn to YouTube…[or] video-based searches rather than text-based.”

2. ChatGPT awareness is higher among white, wealthier and older students.

Pew found about 72 percent of white students had at least some knowledge of ChatGPT compared to 56 percent of Black students.

In addition, more than 75 percent of students in households with an annual income of $75,000 or more had some knowledge of ChatGPT compared to 41 percent of students in households with annual incomes under $30,000.

Data courtesy of the Pew Research Center. (Chart: Meghan Gallagher/The 74)

McClain pointed to the “digital divide” as an explanation for Pew’s survey findings.

“The pattern here is quite striking,” McClain said. “It certainly speaks to the fact that not every teen is equally likely to have heard about these tools and used them.”

She added that awareness of ChatGPT was higher among older students — particularly those in 11th and 12th grade.

“Even among those who heard at least a little about ChatGPT…[young] teens may still be figuring out how they feel about it,” McClain said.

3. High school students have adopted a “good faith” approach to AI chatbot tools.

Pew found only 20 percent of students aged 13 to 17 said it was acceptable to use ChatGPT to write essays, compared to 57 percent who said it was not.

But nearly 70 percent said it was acceptable for researching new topics, compared to 13 percent who said it was not.

Data courtesy of the Pew Research Center. (Chart: Meghan Gallagher/The 74)

The Stanford researchers found similar outcomes.

At four high schools surveyed in fall 2023, about 9 to 16 percent of students used AI chatbot tools to write essays, and about 55 to 77 percent used them to generate an idea for a paper, project or assignment.

Data courtesy of Stanford’s Graduate School of Education. (Chart: Meghan Gallagher/The 74)

“The vast majority don’t want AI to do all the work for them so they’re coming into this with sort of a good faith effort,” Lee said.

“When I’ve had conversations with educators, they sort of breathe a sigh of relief and think ‘oh okay let’s think about some of the cool things we could do’ and that’s exciting,” Lee added.

4. Prohibiting AI chatbot tools won’t solve the systemic issues of why students cheat.

For Pope, finding comfort around AI chatbot tools starts with educators and parents bringing their students into the conversation.

“If you’re going to come up with a classroom or home policy, you want to have the students present, speaking up and telling you what they think will be the most useful and appropriate uses of AI,” Pope said.

Lee said addressing AI chatbot tool usage in high schools is just the “tip of a much larger iceberg.”

“Part of why we get concerned is because students feel pretty disenfranchised from the boring assignments, tedious homework and essays in these weird written formats that they don’t feel will provide them any long term need or use,” Lee said.

“I don’t see us as saying AI is the best thing since sliced bread, but I also don’t think of us as saying AI is going to destroy humanity,” Lee added.

]]>
Bismarck State’s AI-Written Plays Show Potential, Flaws of ChatGPT /article/bismarck-states-ai-written-plays-show-potential-flaws-of-chatgpt/ Wed, 03 Jan 2024 13:01:00 +0000 /?post_type=article&p=719955 This article was originally published in

Two performers are seated in the middle of the stage, shooting the breeze as they pretend to get ready for an upcoming performance.

“We never know what the future holds,” one actor laments to his friend. “I mean, they thought computers can’t write poetry or compose music, but now they can.”

“There are AI-generated characters in some places, but nothing can replace the magic of live performance,” the other performer replies.


Get stories like these delivered straight to your inbox. Sign up for The 74 Newsletter


This one-act, titled “Theatre Kids At The End of the World,” is one of 16 recently performed by Bismarck State College as part of “The AI Plays,” a production reflecting on recent breakthroughs in artificial intelligence and its implications for ordinary life.

The works are purposely self-referential and introspective, with the actors often playing the role of students, performers or both.

And the whole thing was written by ChatGPT, the famous chatbot by OpenAI.

ChatGPT is primarily a text tool; you tell it to write something, and it whips up an answer. Its ability to handle sophisticated instructions has attracted a level of attention unlike any AI before it.

A study by the Union Bank of Switzerland named ChatGPT the fastest-growing consumer app in history, Reuters reported.

Boosters of so-called generative AI point to its massive educational and creative potential. It can write prose and poetry. It can conjure up paintings. It can tell you where the nearest gas station is. It can write an essay summarizing the history of the Roman Empire. All in relatively short order, for free.

But that’s also inspired widespread anxiety, even existential fear, about the future of creative work.

The recent strikes by Hollywood writers and actors, for instance, were spurred in part by concerns that generative AI would sideline creative workers. Both successfully bargained for regulations on how the technology can be used by film and television producers.

In “The AI Plays,” students at Bismarck State College Theatre throw their two cents into the debate.

“I think we, as artists, need to get in front of this,” said director Dean Bellin, associate professor of technical theater at Bismarck State College.

The group decided to have ChatGPT write the scripts as an interesting way to show people just how far the technology has come.

He and his students wrote the general outline of each scene. They fed ChatGPT writing prompts based on their real feelings toward AI — from reverent, to skeptical, to indifferent.

Then, they performed the scripts completely unedited – quirks and all.

In one scene, a woman chopping vegetables bemoans the constant frustration of living in a world where technology is advancing so quickly. (ChatGPT did not feel it necessary to explain who the woman is or why she was chopping vegetables.)

“I have seen the rise and fall of Tamagotchi, lived through Y2K, and even managed to scan a QR code” — she pauses, still bent over her cutting board — “ … once.”

“At the rate we’re going, I’m afraid I’ll blink, and then my toaster is giving me life advice,” she continues.

Government oversight

Lawmakers in 2023 grappled with definitions, standards and regulation of artificial intelligence, in Congress and beyond. Senators from both sides of the aisle agree there is a need for federal rules. Legislators and officials in many states are studying the issue and weighing AI legislation in upcoming sessions.

The North Dakota Legislature is also on the bandwagon; this interim session, the Information Technology Committee is researching potential paths for AI investment and regulation.

At its next meeting on Dec. 14, lawmakers will hear from the Department of Public Instruction, the university system, the attorney general’s office and other groups about the future of AI in the state.

Earlier this year, the statehouse passed a law preventing AI from gaining human rights. (The law extends the same ban to animals, the environment, and inanimate objects.)

Sponsor Rep. Cole Christensen, R-Rogers, told fellow lawmakers during the session the legislation was intended “to define personhood and to retain its exclusive rights to human beings.”

Several acts explore the concept of AI sentience. In one scene, a medieval court goes on a witch hunt for a robot masquerading among them as a human. The village ultimately accepts the machine with open arms.

Even though the work it produces can be uncannily similar to human writing, tools like ChatGPT don’t think like people.

ChatGPT and other so-called “generative” AI — like DALL-E, which makes images — are trained on massive troves of data that help them approximate human language, photography, art and so on.

But it’s only an approximation. When ChatGPT is asked to write creatively, its output is often choppy, repetitive and lacking in depth.

The dialogue became circular in several scenes of “The AI Plays,” with characters making the same two or three points over and over again until a scene ended.

Bellin said he and his students learned a lot about scriptwriting by studying where ChatGPT’s writing missed the mark.

Bismarck State College isn’t the first higher ed institution to experiment with AI theater.

This summer, students at the University of Wollongong in Australia performed a three-act drama written by ChatGPT, the Australian Broadcasting Corporation reported.

In that case, the performers may have been a little more involved in the writing process. The show’s director said he and his students had to tinker with the app quite a bit before it spit out something they liked.

Earthquakes in academia

There are plenty of other reasons why AI may be front-of-mind for colleges and universities — say, how it makes it easier for students to cheat on homework.

AI may not be good enough to write a flawless essay, but a student might be able to pass ChatGPT-generated work off as their own if they proofread it and introduce a few minor tweaks, Bellin said.

Many higher ed institutions have already adopted policies regulating AI. One survey published in June by UNESCO — the United Nations Educational, Scientific and Cultural Organization — estimated that globally, about 13% of universities have issued official guidance on the technology.

For the moment, the North Dakota University System, of which Bismarck State College is a member, isn’t one of them.

Not that it isn’t giving the subject any attention.

In the wake of ChatGPT’s release, the university system convened a task force to help it navigate the many opportunities and obstacles AI presents to higher ed.

At a Dec. 7 State Board of Higher Education meeting, Chancellor Mark Hagerott urged the University System to invest in AI technology.

He pointed to a handful of other higher ed institutions scrambling to get ahead in what he likened to an “arms race.”

“We have to be able to adapt and move and change to the landscape that’s in front of us,” said Hagerott, who has a background in cybersecurity. “And we have to plan for the unknown.”

In 2020, the University of Florida hired 100 new faculty members to study artificial intelligence. The University at Albany announced this year it would set aside $200 million toward AI and says it wants to integrate the technology in all of its academic programs. Meanwhile, Arizona State University formed a schoolwide community of practice this fall to figure out how to integrate AI into its classrooms.

“This is an earthquake,” Hagerott said.

North Dakota Monitor is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. North Dakota Monitor maintains editorial independence. Contact Editor Amy Dalrymple for questions: info@northdakotamonitor.com.

]]>
Opinion: Forget ChatGPT — Extractive AI Is the Real Game-Changer for Teachers, Students /article/forget-chatgpt-extractive-ai-is-the-real-game-changer-for-teachers-students/ Wed, 08 Nov 2023 14:30:00 +0000 /?post_type=article&p=717427 The ways schools are organized, staff are deployed and time is allocated have a powerful impact on the way teachers do their work. As a result, these structures have significant influence on how students learn and experience school.

Artificial intelligence could be harnessed to completely reshape all that.

Just as AI is being used in other professions to improve productivity, engagement and the quality of output, so, too, it could be leveraged to transform how schools are organized. 


Get stories like these delivered straight to your inbox. Sign up for The 74 Newsletter


The world’s top education systems, like Singapore’s and Finland’s, set aside time for school staff to work together and learn from one another. At its best, teacher collaboration involves structured mentor-mentee relationships, coaching, course development, lesson planning, observation, debriefing and more. But that isn’t the standard experience for U.S. teachers. Here, despite large investments in learning communities and other collaborative models, many teachers spend most of their working hours in front of students, with little time to work with colleagues to improve practice.

In such environments, the rise of AI is likely seen as a threat that carries the very real risk of replacing human educators.

However, in high-performing systems where schools are organized around collaboration, AI can be greeted as an asset, deployed for the benefit of students and seen as an enhancement of the human and relational work of teaching.

While ChatGPT, Google Bard and other generative AI tools are making headlines — and are the subject of great resistance — it may be extractive AI models that hold the most exciting promise for educators and students. 

Extractive AI is a form of natural language processing that uses deep linguistic techniques to replicate human comprehension of text — moving closer to a form of human language understanding. It can pull huge amounts of information together in a way that is explainable and traceable back to the original source. This is because extractive AI takes a query and returns answers drawn directly from the source text. That ability to track back to original sources is a huge advantage in schools, because material produced by extractive AI can be verified or debunked — avoiding the type of wrong, inaccurate or flawed responses that have become known as machine hallucinations.
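To make the distinction concrete, here is a minimal sketch of extractive question answering using the open-source Hugging Face transformers library. It illustrates the general technique only — not any specific engine discussed here — and shows why answers are traceable: the output is a literal span of the source text, with character offsets.

```python
# Minimal sketch of extractive question answering with the open-source
# Hugging Face `transformers` library. Illustrates the general technique,
# not any specific product named in this piece.
from transformers import pipeline

qa = pipeline("question-answering")  # loads a default extractive QA model

source = (
    "Structured teacher collaboration involves mentoring, coaching, "
    "lesson planning, observation and debriefing, and is standard "
    "practice in several top-performing education systems."
)

result = qa(question="What does structured teacher collaboration involve?",
            context=source)

# The answer is a literal span of `source`, returned with character
# offsets, so every result can be traced back to the original text.
print(result["answer"])
print(result["start"], result["end"])  # offsets into `source`
```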

Combining these two approaches to AI may hold the most power for both students and teachers. Based on its ability to generate new content from established patterns, generative AI is ideal for creating summaries and drafts. Meanwhile, extractive AI, based on its ability to draw out specific and traceable ideas and concepts from large data resources, is ideal for assembling, organizing and synthesizing new learning from any source, within or outside an organization’s library of content, curricula or lesson plans.

For example, teachers could take the outputs from extractive AI, review the sources and assumptions and together create new and engaging learning experiences. This technology could be used as a virtual research assistant, allowing teachers to critique, vet and consider AI-generated coursework and recommended sources. It would complement and propel the higher-level work of teaching by making routine and tedious tasks, such as pulling together relevant materials about a particular topic, less labor-intensive. This would give educators more time to focus on evaluating, synthesizing and turning information into a lesson that comes alive for students, creating more engaging classroom activities, project-based learning experiences and effective lesson plans — the more creative and human aspects of their jobs. 

It holds similar promise for the learning experiences of students. AI can serve as an amazingly adaptive textbook or course developer — on demand, in real time, in ways that respond to evolving student needs. Used strategically, AI that combines both extractive and generative models could create individual education plans for all students.

Unfortunately, many schools are banning the use of AI, especially generative models, out of concern that students will have the technology do their homework for them. This is not just misplaced fear but reflects a bigger issue: If you are assigning homework that can be done by a machine, no matter how intelligent, you are assigning the wrong kind of homework. 

That kind of academic experience is preparing students for obsolescence — replacement by machines. If all a student can do is replicate what a machine can do, but more slowly and expensively and with more errors, that’s a path to low or no wages for the individual and economic instability for the country. 

Instead, by embracing the technology, by putting it to use in the classroom, students can learn to use and thrive alongside new technologies in ways that are fundamentally human. Being able to take what a machine can produce and add value to it will be an essential skill in the future workplace.

Properly deployed, AI can elevate the work of teachers and strengthen the 21st century readiness of students. 

Welcome or not, artificial intelligence is here to stay. Schools can either embrace it and seize the opportunities for the benefit of all students and educators, or try to resist it and become even more obsolete.

]]>
Survey: AI is Here, but Only California and Oregon Guide Schools on its Use /article/survey-ai-is-here-but-only-california-and-oregon-guide-schools-on-its-use/ Wed, 01 Nov 2023 04:01:00 +0000 /?post_type=article&p=717117 Artificial intelligence now has a daily presence in many teachers’ and students’ lives, with chatbots like ChatGPT and Khan Academy’s Khanmigo tutor, along with AI image generators, all freely available.

But nearly a year after most of us came face-to-face with the first of these tools, a new survey finds that few states are offering educators substantial guidance on how best to use AI, let alone how to use it fairly and with appropriate privacy protections.

As of mid-October, just two states, California and Oregon, offered official guidance to schools on using AI, according to the Center on Reinventing Public Education at Arizona State University.

CRPE said 11 more states are developing guidance, but that another 21 states don’t plan to give schools guidelines on AI “in the foreseeable future.”


Get stories like these delivered straight to your inbox. Sign up for The 74 Newsletter


Seventeen states didn’t respond to CRPE’s survey and haven’t made official guidance publicly available.

Bree Dusseault

As more schools experiment with AI, good policies and advice — or a lack thereof — will “drive the ways adults make decisions in school,” said Bree Dusseault, CRPE’s managing director. That will ripple out, dictating whether these new tools will be used properly and equitably.

“We’re not seeing a lot of movement in states getting ahead of this,” she said. 

The reality in schools is that AI is here. Edtech companies are pitching products and schools are buying them, even if state officials are still trying to figure it all out. 

Satya Nitta

“It doesn’t surprise me,” said Satya Nitta, CEO of Merlyn Mind, a generative AI company developing voice-activated assistants for teachers. “Normally the technology is well ahead of regulators and lawmakers. So they’re probably scrambling to figure out what their standard should be.”

Nitta said a lot of educators and officials this week are likely looking “very carefully” at Monday’s executive order on AI “to figure out what next steps are.”

The order requires, among other things, that AI developers share safety test results with the U.S. government and develop standards that ensure AI systems are “safe, secure, and trustworthy.”

It comes five months after the U.S. Department of Education released a detailed report with recommendations on using AI in education.

Deferring to districts

The fact that 13 states are at least in the process of helping schools figure out AI is significant. Last summer, no states offered such help, CRPE found. Officials in New York, Rhode Island and Wyoming said decisions about many issues related to AI, such as academic integrity and blocking websites or tools, are made on the local level.

Still, researchers said, it’s significant that the majority of states still don’t plan AI-specific strategies or guidance in the 2023-24 school year.

There are a few promising developments: North Carolina will soon require high school graduates to pass a computer science course. In Virginia, Gov. Glenn Youngkin took action in September on AI careers. And Pennsylvania Gov. Josh Shapiro signed an executive order in September to create a state governing board to guide use of generative AI, including developing training programs for state employees.

Tara Nattrass

But educators need help understanding artificial intelligence, “while also trying to navigate its impact,” said Tara Nattrass, managing director of innovation strategy at the International Society for Technology in Education. “States can ensure educators have accurate and relevant guidance related to the opportunities and risks of AI so that they are able to spend less time filtering information and more time focused on their primary mission: teaching and learning.”

Beth Blumenstein, Oregon’s interim director of digital learning & well-rounded access, said AI is already being used in Oregon schools. And the state Department of Education has received requests from educators asking for support, guidance and professional development.

Beth Blumenstein

Generative AI is “a powerful tool that can support education practices and provide services to students that can greatly benefit their learning,” she said. “However, it is a highly complex tool that requires new learning, safety considerations, and human oversight.”

Three big issues she hears about are cheating, plagiarism and data privacy, including how not to run afoul of Oregon’s Student Information Protection Act or the federal Children’s Online Privacy Protection Act.

‘Now I have to do AI?’

In August, CRPE conducted focus groups with 18 superintendents, principals and senior administrators in five states who said they were cautiously optimistic about AI’s potential, but many complained about navigating yet another new disruption.

“We just got through this COVID hybrid remote learning,” one leader told researchers. “Now I have to do AI?”

Nitta, Merlyn Mind’s CEO, said that syncs with his experience.

“Broadly, school districts are looking for some help, some guidance: ‘Should we use ChatGPT? Should we not use it? Should we use AI? Is it private? Are they in violation of regulations?’ It’s a complex topic. It’s full of all kinds of mines and landmines.”

And the stakes are high, he said. No educator wants to appear in a newspaper story about her school using an AI chatbot that feeds inappropriate information to students. 

“I wouldn’t go so far as to say there’s a deer-caught-in-headlights moment here,” Nitta said, “but there’s certainly a lot of concern. And I do believe it’s the responsibility of authorities, of responsible regulators, to step in and say, ‘Here’s how to use AI safely and appropriately.’ ” 

]]>
Biden Order on AI Tackles Tech-Enabled Discrimination in Schools /article/biden-order-on-ai-tackles-tech-enabled-discrimination-in-schools/ Tue, 31 Oct 2023 21:01:00 +0000 /?post_type=article&p=717111 Updated Nov. 1

As artificial intelligence rapidly expands its presence in classrooms, President Biden signed an executive order Monday requiring federal education officials to create guardrails that prevent tech-driven discrimination. 

The order, which the White House called “the most sweeping actions ever taken to protect Americans from the potential risks of AI systems,” offers several directives that are specific to the education sector. The order, dealing with emerging technologies like ChatGPT, directs the Justice Department to coordinate with federal civil rights officials on ways to investigate discrimination perpetuated by algorithms.


Get stories like these delivered straight to your inbox. Sign up for The 74 Newsletter


Within a year, the education secretary must release guidance on the ways schools can use the technology equitably, with a particular focus on the tools’ effects on “vulnerable and underserved communities.” Meanwhile, an Education Department “AI toolkit” released within the next year will offer guidance on how to implement the tools so that they enhance trust and safety while complying with federal student privacy rules. 

For civil rights advocates who have decried AI’s potentially unintended consequences, the order was a major step forward. 

The order’s focus on civil rights investigations “aligns with what we’ve been advocating for over a year now,” said Elizabeth Laird, the director of equity in civic technology at the nonprofit Center for Democracy and Technology. Her group has called on the Education Department’s Office for Civil Rights to open investigations into the ways AI-enabled tools in schools could have a disparate impact on students based on their race, disability, sexual orientation and gender identity.

“It’s really important that this office, which has been focused on protecting marginalized groups of students for literally decades, is more involved in conversations about AI and can bring that knowledge and skill set to bear on this emerging technology,” Laird told The 74. 

In guidance to federal agencies on Wednesday, the Office of Management and Budget spelled out the types of AI education technologies that pose civil rights and safety risks. They include tools that detect student cheating, monitor their online activities, project academic outcomes, make discipline recommendations or facilitate surveillance online and in person.

An Education Department spokesperson didn’t respond to a request for comment Monday on how the agency plans to respond to Biden’s order. 

Schools nationwide have adopted artificial intelligence in divergent ways, including tools that provide students individualized lessons and the growing use of chatbots like ChatGPT by both students and teachers. It’s also generated heated debates over technology’s role in exacerbating harms to at-risk youth, including educators’ use of early warning systems that mine data about students — including their race and disciplinary records — to predict their odds of dropping out of school.

“We’ve heard reported cases of using data to predict who might commit a crime, so very Minority Report,” Laird said. “The bar that schools should be meeting is that they should not be targeting students based on protected characteristics unless it meets a very narrowly defined purpose that is within the government’s interests. And if you’re going to make that argument, you certainly need to be able to show that this is not causing harm to the groups that you’re targeting.”

AI and student monitoring tools

An unprecedented degree of student surveillance has also been facilitated by AI, including online activity monitoring tools, remote proctoring software to detect cheating on tests and campus security cameras with facial recognition capabilities. 

Beyond its implications for schools, the Biden order requires certain technology companies to conduct AI safety testing before their products are released to the public and to provide their results to the government. It also orders new regulations to ensure AI won’t be used to produce nuclear weapons, recommends that AI-generated photos and videos be transparently identified as such with watermarks and calls on Congress to pass federal data privacy rules “to protect all Americans, especially kids.”

In September, the Center for Democracy and Technology released a report warning that schools’ use of AI-enabled digital monitoring tools, which track students’ behaviors online, could have a disparate impact on students — particularly LGBTQ+ youth and those with disabilities — in violation of federal civil rights laws. As teachers punish students for using ChatGPT to allegedly cheat on classroom assignments, a survey suggested that children in special education were more likely to face discipline than their general education peers. They also reported higher levels of surveillance and subsequent discipline as a result.

In response to the report, a coalition of Democratic lawmakers penned a letter urging the Education Department’s civil rights office to investigate districts that use digital surveillance and other AI tools in ways that perpetuate discrimination. 

Education technology companies that use artificial intelligence could come under particular federal scrutiny as a result of the order, said consultant Amelia Vance, an expert on student privacy regulations and president of the Public Interest Privacy Center. The order notes that the federal government plans to enforce consumer protection laws and enact safeguards “against fraud, unintended bias, discrimination, infringements on privacy and other harms from AI.”

“Such protections are especially important in critical fields like healthcare, financial services, education, housing, law and transportation,” the order notes, “where mistakes by or misuse of AI could harm patients, cost consumers or small businesses or jeopardize safety or rights.”

Schools rely heavily on third-party vendors like education technology companies to provide services to students, and those companies are subject to Federal Trade Commission rules against deceptive and unfair business practices, Vance noted. The order’s focus on consumer protections, she said, “was sort of a flag for me that maybe we’re going to see not only continuing interest in regulating ed tech, but more specifically regulating ed tech related to AI.”

While the order was “pretty vague when it came to education,” Vance said it was important that it did acknowledge AI’s potential benefits in education, including for personalized learning and adaptive testing. 

“As much as we keep talking about AI as if it showed up in the past year, it’s been there for a while and we know that there are valuable ways that it can be used,” Vance said. “It can surface particular content, it can facilitate better connections to people when they need certain content.”

AI and facial recognition cameras

As school districts pour billions of dollars into school safety efforts in the wake of mass school shootings, security vendors have heralded the promises of AI. Yet civil rights groups have warned that facial recognition and other AI-driven technology in schools could perpetuate biases — and could miss serious safety risks. 

Just last month, the gun-detection company Evolv Technology, which pitches its hardware to schools, acknowledged it was the subject of a Federal Trade Commission inquiry into its marketing practices. The agency is reportedly probing whether the company employs artificial intelligence in the ways that it claims. 

In September, New York became the first state to ban facial recognition in schools, a move that followed outcry when an upstate school district announced plans to roll out a surveillance camera system that tracked students’ biometric data.

A new Montana law bans facial recognition statewide with one notable exception — schools. Citing privacy concerns, the law adopted this year prohibits government agencies from using facial recognition, but with a specific carveout for schools. One rural education system, the 250-student Sun River School District, employs a 30-camera security system from Verkada that uses facial recognition to track the identities of people on its property. As a result, the district has roughly one camera for every eight students.

In an email on Wednesday, a Verkada spokesperson said the company is in the process of reviewing Biden’s order to understand its implications on the company.

Verkada offers a cautionary tale about the potential security vulnerabilities of campus surveillance systems. In 2021, the company suffered a massive data breach and hackers claimed to expose the live feeds of 150,000 surveillance cameras — including those in place at Sandy Hook Elementary School in Newtown, Connecticut, the site of a mass shooting in 2012. An investigation conducted on behalf of the company found the breach was more limited, affecting some 4,500 cameras.

Hikvision has similarly made inroads in the school security market with its facial recognition surveillance cameras — including during a pandemic-era push to enforce face mask compliance. Yet the company, owned in part by the Chinese government, has also faced significant allegations of civil rights abuses and in 2019 was placed on a U.S. trade blacklist after being implicated in the country’s “campaign of repression, mass arbitrary detention and high-technology surveillance” against Muslim ethnic minorities. 

Though multiple U.S. school districts continue to use Hikvision cameras, a recent investigation found the company’s software still referenced minority recognition, despite Hikvision claiming for years it had ended the practice.

In an email, a Hikvision spokesperson didn’t comment on how Biden’s executive order could affect its business, including in schools, but offered a letter it shared with its customers in response to the investigation, saying an outdated reference to ethnic detection appeared on its website erroneously.

“It has been a longstanding Hikvision policy to prohibit the use of minority recognition technology,” the letter states. “As we have previously stated, that functionality was phased out and completely prohibited by the company in 2018.“

Data scientist David Riedman, who built a national database to track school shootings dating back decades, said that artificial intelligence is at “the forefront” of the school safety conversation and emerging security technologies can be built in ways that don’t violate students’ rights. 

Riedman became a figure in the national conversation about school shootings as the creator of the K12 School Shooting Database but has since taken on an additional role as director of industry research and content for ZeroEyes, a surveillance software company that uses security cameras to ferret out guns. Instead of using facial recognition, the ZeroEyes algorithm was trained to identify and notify law enforcement within seconds of spotting a firearm. 

The company says its gun-detection approach — as opposed to facial recognition — can “evade privacy and bias concerns that plague other AI models,” and internal research found that “only 0.06546% of false positives were humans detected as guns.”

“The simplicity” of ZeroEyes’ technology, Riedman said, puts the company in good standing as far as the Biden order is concerned.

“ZeroEyes isn’t looking for people at all,” he said. “It’s only looking for objects and the only objects it is trying to find, and it’s been trained to find, are images that look like guns. So you’re not getting student records, you’re not getting student demographics, you’re not getting anything related to people or even a school per se. You just have an algorithm that is constantly searching for images to see if there is something that looks like a firearm in them.”

However, false positives remain a concern. Just last week at a high school in Texas, a false alarm from ZeroEyes prompted a campus lockdown that set off student and parent fears of an active shooting. The company said the false alarm was triggered by an image of a student outside who the system believed was armed based on shadows and the way his arm was positioned.

]]>
‘Just Slow It All Down’: School Leaders Want Guidance on AI, New Research Finds /article/just-slow-it-all-down-school-leaders-want-guidance-on-ai-new-research-finds/ Tue, 24 Oct 2023 11:01:00 +0000 /?post_type=article&p=716702 New generative artificial intelligence tools like ChatGPT, which can mimic human writing and generate images from simple user prompts, are poised to disrupt K-12 education.

As school and district administrators grapple with these rapid advances, they crave guidance on how to incorporate AI tools into teaching and learning, new research shows. 

In focus groups conducted in August by the Center on Reinventing Public Education with colleagues at the Mary Lou Fulton Teachers College at Arizona State University, 18 superintendents, principals and senior administrators who collectively oversee nearly 70 schools in five states expressed cautious optimism about AI’s potential to enhance teaching and learning.


Get stories like these delivered straight to your inbox. Sign up for The 74 Newsletter


But few are exploring how to provide AI training to staff. And many bemoaned having to navigate another new and major disruption to schooling, according to the focus group responses.

“We just got through this COVID hybrid remote learning,” one leader said. “Now I have to do AI?”

In general, the participants said they wanted more guidance from states, universities and even the industry on how to incorporate generative AI and establish policies to ensure staff and students use the tools ethically and responsibly. At the time the focus groups were conducted, no state departments of education had offered any guidance to help districts navigate the new landscape, CRPE research shows. The federal Department of Education’s technology office says it’s working on guidance for AI-enabled education technology. And a new group of experts recently released a toolkit.

That attention at the national level reinforces some of the concerns that administrators voiced: that AI equity and access issues could open a new chasm in the digital divide. One high school leader who had collaborated with a student advisory board said some students didn’t understand ChatGPT at all, while others were highly knowledgeable and were already using paid versions. 

“The technology is bound to grow exponentially as more students become familiar with it,” the participant said — but it also raises troubling issues. “Who’s going to have access to what, and what are our responsibilities as a school district to provide access?”

Administrators’ perspectives on AI are important because they set the tone for learning priorities. As AI begins to disrupt conventional schooling habits and practices, their willingness to adapt, develop guidelines and encourage exploration has implications for student and teacher success.

Even in the short time that ChatGPT has been available to the public, districts nationwide have adopted divergent stances on its use (largely because of concerns about cheating), previous CRPE research found. Some districts quickly shifted from initially banning the technology to cautiously allowing it.

Despite the concerns over the pace of change, administrators who participated in the focus groups expressed a relatively high level of excitement about AI’s potential advantages and relatively little concern about issues such as student safety and data privacy. 

Some called for guidance from the tech industry.

“If you’re one of those people who creates this fantastic tool, then you need to also help educate around it,” one said.

Others hoped higher education would uncover best uses for AI and allow those tips to trickle down to the K-12 level. When administrators were asked what they would do about AI developments if they had a magic wand, one leader said: “Can we just slow it all down?”

Most administrators said they weren’t ready to create policies that specified appropriate uses for generative AI. 

“I refuse to do it because I don’t know what to put in it,” one leader said. 

Many believe their current plagiarism policies are sufficient to deal with today’s most salient concern over AI: student cheating.

Some hesitation is understandable given that educators are frequently pressured to adopt new technology. However, AI is not a fad, and it’s not going away anytime soon. These tools are rapidly being integrated into everyday life and cannot be banned or ignored.

Some education leaders said they’ve formed teacher or student advisory groups to continue exploring AI. Others are setting aside discussion time at staff meetings. Some said they’re listening to early adopters, such as technology directors or enthusiastic teachers. But none of the leaders said they are urging all staff to use the tools. And none had mapped out plans for staff training.

Helping students prepare to use AI in their professional and personal lives means schools must start investing in — and encouraging broader understanding of — AI technology among teachers and staff. State departments of education need to accelerate work in this area so they can help guide districts. Teachers need dedicated work time to play with the tools. 

And perhaps students who are adept at AI should be encouraged to share what they’re learning with adults.

“I do have a more formal workshop ready to go when that time is right,” one technology director said. “We are just stepping in slowly.”

]]>
ChatGPT Is Landing Kids in the Principal’s Office, Survey Finds /article/chatgpt-is-landing-kids-in-the-principals-office-survey-finds/ Wed, 20 Sep 2023 04:01:00 +0000 /?post_type=article&p=715056 Ever since ChatGPT burst onto the scene last year, a heated debate has centered on its potential benefits and pitfalls for students. As educators worry students could use artificial intelligence tools to cheat, a new survey makes clear its impact on young people: They’re getting into trouble. 

Half of teachers say they know a student at their school who was disciplined or faced negative consequences for using — or being accused of using — generative artificial intelligence like ChatGPT to complete a classroom assignment, according to a new survey from the Center for Democracy and Technology, a nonprofit think tank focused on digital rights and expression. The proportion was even higher, at 58%, for those who teach special education.

Cheating concerns were clear, with survey results showing that teachers have grown suspicious of their students. Nearly two-thirds of teachers said that generative AI has made them “more distrustful” of students and 90% said they suspect kids are using the tools to complete assignments. Yet students themselves who completed the anonymous survey said they rarely use ChatGPT to cheat, but are turning to it for help with personal problems.


Get stories like these delivered straight to your inbox. Sign up for The 74 Newsletter


“The difference between the hype cycle of what people are talking about with generative AI and what students are actually doing, there seems to be a pretty big difference,” said Elizabeth Laird, the group’s director of equity in civic technology. “And one that, I think, can create an unnecessarily adversarial relationship between teachers and students.”

Indeed, 58% of students, and 72% of those in special education, said they’ve used generative AI during the 2022-23 academic year, just not primarily for the reasons that teachers fear most. Among youth who completed the nationally representative survey, just 23% said they used it for academic purposes and 19% said they’ve used the tools to help them write and submit a paper. Instead, 29% reported having used it to deal with anxiety or mental health issues, 22% for issues with friends and 16% for family conflicts.

Part of the disconnect dividing teachers and students, researchers found, may come down to gray areas. Just 40% of parents said they or their child were given guidance on ways they can use generative AI without running afoul of school rules. Only 24% of teachers say they’ve been trained on how to respond if they suspect a student used generative AI to cheat. 

Center for Democracy and Technology

The results on ChatGPT’s educational impacts were included in the Center for Democracy and Technology’s broader annual survey analyzing the privacy and civil rights concerns of teachers, students and parents as tech, including artificial intelligence, becomes increasingly engrained in classroom instruction. Beyond generative AI, researchers observed a sharp uptick in digital privacy concerns among students and parents over last year. 

Among parents, 73% said they’re concerned about the privacy and security of student data collected and stored by schools, a considerable increase from the 61% who expressed those reservations last year. A similar if less dramatic trend was apparent among students: 62% had data privacy concerns tied to their schools, compared with 57% just a year earlier. 

Center for Democracy and Technology

Those rising levels of anxiety, researchers theorized, are likely the result of the growing frequency of cyberattacks on schools, which have become a primary target for ransomware gangs. High-profile breaches, including in Los Angeles and Minneapolis, have compromised a massive trove of highly sensitive student records. Exposed records, investigative reporting by The 74 has found, include student psychological evaluations, reports detailing campus rape cases, student disciplinary records, closely guarded files on campus security, employees’ financial records and copies of government-issued identification cards. 

Survey results found that students in special education, whose records are among the most sensitive that districts maintain, and their parents were significantly more likely than the general education population to report school data privacy and security concerns. As attacks ratchet up, 1 in 5 parents say they’ve been notified that their child’s school experienced a data breach. Such breach notices, Laird said, led to heightened apprehension. 

“There’s not a lot of transparency” about school cybersecurity incidents “because there’s not an affirmative reporting requirement for schools,” Laird said. But in instances where parents are notified of breaches, “they are more concerned than other parents about student privacy.”

Parents and students have also grown increasingly wary of another set of education tools that rely on artificial intelligence: digital surveillance technology. Among them are student activity monitoring tools, such as those offered by the for-profit companies Gaggle and GoGuardian, which rely on algorithms in an effort to keep students safe. The surveillance software employs artificial intelligence to sift through students’ online activities and flag school administrators — and sometimes the police — when they discover materials related to sex, drugs, violence or self-harm. 

Among parents surveyed this year, 55% said they believe the benefits of activity monitoring outweigh the potential harms, down from 63% last year. Among students, 52% said they’re comfortable with academic activity monitoring, a decline from 63% last year. 

Such digital surveillance, researchers found, frequently has disparate impacts on students based on their race, disability, sexual orientation and gender identity, potentially violating longstanding federal civil rights laws. 

The tools also extend far beyond the school realm, with 40% of teachers reporting their schools monitor students’ personal devices. More than a third of teachers say they know a student who was contacted by the police because of online monitoring, the survey found, and Black parents were significantly more likely than their white counterparts to fear that information gleaned from online monitoring tools and AI-equipped campus surveillance cameras could fall into the hands of law enforcement. 

Center for Democracy and Technology

Meanwhile, as states nationwide pull literature from school library shelves amid a conservative crusade against LGBTQ+ rights, the nonprofit argues that digital tools that filter and block certain online content “can amount to a digital book ban.” Nearly three-quarters of students — and disproportionately LGBTQ+ youth — said that web filtering tools have prevented them from completing school assignments. 

The nonprofit highlights how disproportionalities identified in the survey could run counter to federal laws that prohibit discrimination based on race and sex, and those designed to ensure equal access to education for children with disabilities. In a letter sent Wednesday to the White House and Education Secretary Miguel Cardona, the Center for Democracy and Technology was joined by a coalition of civil rights groups urging federal officials to take a harder tack on ed tech practices that could threaten students’ civil rights. 

“Existing civil rights laws already make schools legally responsible for their own conduct, and that of the companies acting at their direction in preventing discriminatory outcomes on the basis of race, sex and disability,” the coalition wrote. “The department has long been responsible for holding schools accountable to these standards.”

Sign up for the School (in)Security newsletter.

Get the most critical news and information about students' rights, safety and well-being delivered straight to your inbox.

]]>
Study: How Districts Are Responding to AI & What It Means for the New School Year /article/study-how-districts-are-responding-to-ai-what-it-means-for-the-new-school-year/ Sun, 10 Sep 2023 12:30:00 +0000 /?post_type=article&p=714352 Districts are responding in divergent ways to artificial intelligence’s potential to reshape teaching and learning, and most have refrained from defining a clear path for schools to navigate AI, according to a review by the Center on Reinventing Public Education at Arizona State University.

By searching for district communications and media coverage in each state from fall 2022 through summer 2023, CRPE identified districts publicly responding to AI last school year. We conducted more thorough research on these districts.

Most of the reactions have revolved around ChatGPT, OpenAI’s chatbot built on a large language model.


Get stories like these delivered straight to your inbox. Sign up for The 74 Newsletter


Many large districts were initially wary of the new technology, with several issuing bans, largely because of concerns over cheating.

But many are adapting. New York City Public Schools reversed its ban, with Chancellor David Banks acknowledging a change of course and a determination to “embrace its potential.”

One district in Washington State reported that while it blocked ChatGPT to “get out ahead of it,” it doesn’t plan to block the tool long-term. In April, the district established a committee of teachers learning how to use ChatGPT to work on related policies.

In California’s Palo Alto Unified School District, Superintendent Don Austin embraced ChatGPT’s potential to enhance learning and improve efficiency. Likening AI pushback to early resistance to calculators and the internet, the superintendent encouraged staff this spring to start using the technology.

Supporting learning and emotional well-being

While most districts CRPE reviewed have not released precise plans for using AI, some are exploring opportunities. 

One district introduced an AI-powered tool that functions like a literacy tutor, listening to students read and correcting mistakes in real time. The district piloted the tool at four schools last spring and had a small group of teachers experimenting with a tool to help create unit and lesson plans.

Another district is piloting Khanmigo, an AI-powered “tutorbot” created by Khan Academy to give students individualized support across core subjects. The program works alongside classroom instruction, offering personalized prompts, diagnosing errors and helping students develop deeper reasoning skills, and gives teachers visibility into student progress.

Mesa Public Schools in Arizona and a district in Texas are piloting AI-enabled “early warning” programs that track student performance and send alerts if kids are off track. Mesa’s program collects academic, social and emotional data from teachers and students to predict up to three months in advance whether a student will pass or fail coursework.
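Neither district has published its model, but the general shape of such early warning systems is simple: fit a classifier on past student outcomes, then flag students whose predicted pass probability falls below a threshold. A minimal sketch with scikit-learn, using invented features and data purely for illustration:

```python
# Minimal sketch of an "early warning" pass/fail predictor. Purely
# illustrative — not Mesa's system; features and data are invented.
from sklearn.linear_model import LogisticRegression

# Columns: attendance rate, average grade, self-reported well-being (1-5)
X_train = [
    [0.95, 88, 4],
    [0.60, 55, 2],
    [0.85, 72, 3],
    [0.50, 48, 1],
]
y_train = [1, 0, 1, 0]  # 1 = passed coursework, 0 = failed

model = LogisticRegression().fit(X_train, y_train)

# Probability a new student passes; an alert fires if it drops too low
new_student = [[0.70, 60, 2]]
p_pass = model.predict_proba(new_student)[0][1]
if p_pass < 0.5:
    print(f"Early warning: predicted pass probability {p_pass:.2f}")
```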

Creating new AI courses and standards

Other districts are designing curriculum to build students’ AI literacy. Most are in states creating conditions to help steward the advancement of AI curriculum. 

Baltimore County Public Schools is launching an AI program at three high schools this year. The program is a byproduct of a 2020 state innovation grant, which funded district staff to develop curriculum and lead an advisory council.

In Georgia, the Gwinnett County district is opening a K-12, AI-themed cluster of schools that will provide progressively more sophisticated study of AI. AI lessons will be woven into core subjects, and Gwinnett hopes that piloted lessons will spread across the entire district. The Georgia Department of Education worked with Gwinnett to write new academic standards so all schools in the state can launch their own AI courses.

A dozen districts in Florida are rolling out AI and data science programs this year in partnership with the University of Florida, part of the university’s broader goal to infuse AI into K-12 curriculum across the state. The state is also providing funding to train teachers.

Supporting teacher development

A small number of districts reviewed are using AI to strengthen teacher practice or generally orient educators to the technology as a teaching tool. 

This year, Spokane Public Schools in Washington, St. Vrain Valley School District in Colorado and Keller Independent School District in Texas are adopting an instructional coaching platform that films classroom instruction and uses AI to offer teachers feedback and support in developing an “action plan” to implement suggestions.

One district in Maryland launched training sessions this summer to help teachers learn how to incorporate AI into their lessons as part of a three-year agreement with nonprofit training partner aiEDU, which provides curricula and learning resources.

Improving communications and operational efficiency

Districts are using AI to provide individualized guidance to students and parents. In April, one district announced a chatbot to answer parents’ and guardians’ questions online and track whether issues were resolved. In August, another unveiled a chatbot “student adviser” that provides parents real-time access to grades, test results and attendance. Many Arizona districts use a chatbot digital assistant that helps students navigate the federal student financial aid — FAFSA — application.

Districts are also using AI-powered technology to support safety and operational efficiency. One Florida district uses AI in its safety efforts. Another uses AI-powered, self-driving floor cleaners, and a district in North Carolina uses AI to detect student illnesses as part of its pandemic response.

Districts face essential questions about AI in 2023-24

A year ago, few districts or stakeholders were paying much attention to AI. Now, it’s clear that this technology will evolve faster than districts can develop formal training and guidance for staff. Leaders need to respond by thinking through how they train their workforce to responsibly use AI, and prepare for fundamental shifts in teachers’ roles and students’ opportunities in the coming years.

We suggest that districts:

  • engage early adopter educators to discuss strategies and guidelines;
  • communicate regularly and transparently with parents;
  • train teachers on responsibly using AI; and
  • partner with organizations, industry and higher education institutions that have AI expertise and can weigh in on best practices.

We also urge state departments of education and regional associations to provide guidance and tools to help districts navigate AI. Students, parents, teachers and employers are looking to districts to do this well and to provide a learning environment that is both safe and reflective of the 21st century and beyond.

]]>
Exclusive: For Busy Teachers, AI Could Crack Open the Dense World of Ed Research /article/exclusive-phonics-learning-styles-teachers-confounded-by-education-research-may-soon-turn-to-new-ai-chatbots-for-help/ Wed, 06 Sep 2023 11:15:00 +0000 /?post_type=article&p=714153 As students across the U.S. enter their first full school year with access to powerful AI tools like ChatGPT and Bard, many educators remain skeptical of their usefulness — and preoccupied with their potential to enable cheating.

But this fall, a few educators are quietly charting a different course they believe could change everything: At least two groups are pushing to create new AI chatbots that would offer teachers unlimited access to sometimes confusing and often paywalled peer-reviewed research on the topics that most bedevil them. 

Their aspiration is to offer new tools that are more focused and helpful than wide-ranging ones like ChatGPT, which tends to stumble over research questions with competing findings. And like many kids faced with questions they can’t answer, it has a frustrating tendency to make things up.


Tapping into curated research bases and filtering out lousy results would also make the bots more reliable: If all goes according to plan, they’d cite their sources.
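To make that retrieve-then-cite pattern concrete, here is a minimal sketch in Python: pull the best-matching passage from a small vetted corpus and attach its source to the answer. The corpus entries and the word-overlap scoring rule are illustrative assumptions, not any group’s actual implementation; a production system would use embeddings and a real language model.

    # A toy "curated retrieval with citations" sketch. The corpus, scorer and
    # answer format are illustrative assumptions, not any group's real code.
    CORPUS = [
        {"source": "Journal of Educational Psychology (2019)",  # hypothetical citation
         "text": "explicit phonics instruction improves decoding for early readers"},
        {"source": "Review of Educational Research (2015)",  # hypothetical citation
         "text": "matching teaching to preferred learning styles shows no measurable benefit"},
    ]

    def relevance(query: str, passage: str) -> float:
        """Score a passage by the fraction of query words it contains."""
        words = query.lower().split()
        vocab = set(passage.lower().split())
        return sum(w in vocab for w in words) / max(len(words), 1)

    def cited_answer(query: str) -> str:
        """Answer from the vetted corpus only, and always name the source."""
        best = max(CORPUS, key=lambda doc: relevance(query, doc["text"]))
        return f'{best["text"].capitalize()}. [Source: {best["source"]}]'

    print(cited_answer("what is the evidence for teaching phonics to early readers"))
    # -> Explicit phonics instruction improves decoding for early readers.
    #    [Source: Journal of Educational Psychology (2019)]

Because every answer is assembled from the curated passages rather than from a model’s general training data, the citation comes along for free.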

The result, supporters say, could revolutionize education. If their work takes hold, millions of teachers for the first time could routinely access high-quality research and make it part of their everyday workflow. Such tools could also help stamp out adherence to stubborn but ill-supported fads in areas from “learning styles” to reading instruction.

So far, the two groups are each feeling their way around the vast undertaking, with slightly different approaches.

In June, the International Society for Technology in Education introduced a chatbot built on content vetted by ISTE and the Association for Supervision and Curriculum Development. (The two groups merged in 2022.) ISTE has made it available in beta to selected users. All of the chatbot’s content is educator-focused, and it’s trained solely on materials developed or approved by the two organizations.


Now its creators say that within about six months, they expect that the tool will also be able to scour outside, peer-reviewed education research and return “pretty understandable, pretty meaningful results” from vetted journals, said Richard Culatta, ISTE’s CEO.

“There’s this big gap between what we know in the research and what happens in practice,” he said. One reason: Most research is published in a format that “is just totally inaccessible to teachers.”

Case in point: Research by the Jefferson Education Exchange, a nonprofit supported by the University of Virginia’s Curry School of Education, found that while educators prefer research they can act on — and that’s presented in a way that applies to their work — only about 16% of teachers actually use research to inform instruction.

So he and others are building a digital tool, “purpose-built for educators by educators,” that can translate research into practice, using “very practical language that teachers understand.”

For instance, a teacher could ask the chatbot, “What does the research say about creating a healthy school culture?” or “What’s the evidence for teaching phonics to developing readers?” One could also ask it to suggest activities that are appropriate for middle school students learning about digital citizenship.

Joseph South, ISTE’s chief learning officer, said teachers want the latest research, but are up against formidable obstacles. “They have to find the article in the journal that happens to relate to the thing that they want to do,” he said. “They have to somehow understand academic-speak. They have to have the time to read this, and they have to translate it into something useful.”

While ChatGPT can comb through journals it has access to, translate and summarize the research, he said, it’s not reliable. The typical chatbot — and thus the typical end user — doesn’t know whether the results are from a credible, peer-reviewed journal or not, and it may not necessarily care.


“We do, though,” he said. “So we can do that filtering and let the AI do its magic.”

As with its beta version, the new chatbot will also cite the sources used to generate each response. And it’ll let users know when it simply doesn’t have enough information to return a reliable response.

Developers are still in the early stages of deciding what academic journals to include. For now, they’re experimenting with a handful of key research articles, but will expand the chatbot’s range if initial prototypes prove helpful to educators.

Culatta and South, both veterans of the U.S. Department of Education, have spent years working on the research-to-practice problem, offering, in effect, translation services for research findings. “We’ve spent so much work trying to figure out how to do it and it’s just never really worked,” he said. “It’s just always been a struggle. And we actually think that this could be the first for-real, sustainable, scalable approach to taking research and getting it into language that actually could be used by teachers.”


Daniel Willingham, a professor of psychology at the University of Virginia and a well-known translator of education research, said his limited experience with ChatGPT has shown that when asked about a subject where there’s general consensus, such as “What is the effect of sleep on memory?” it produces helpful results. But it isn’t very good at synthesizing conflicting findings.

It’s also inconsistent in its willingness to reveal, in Willingham’s words, that “‘I really don’t know anything about that.’ And so it, you know, just makes stuff up.”

A paid ChatGPT subscriber, Willingham said he gets “really useful” results only about 20% of the time. “But it requires plenty of verification on my part. And this is all within my area of expertise, so it’s not very hard for me to verify.”

Tapping ‘What Works’

ISTE isn’t the only organization pushing to make education research more widely accessible via chatbot. The Learning Agency, a Washington, D.C.-based consulting firm, is also testing a prototype of a bot designed to offer answers to education research queries.

Unlike ISTE’s, the agency’s tool taps an already existing, if finite, resource: the U.S. Department of Education’s What Works Clearinghouse, or more specifically its Doing What Works Library, a curated collection of materials developed by the department’s Institute of Education Sciences.

“We were inspired to basically create a special version of ChatGPT that was exposed to more high-quality educational data and research evidence on what works,” said Perpetual Baffour, the group’s research director.

In a sense, she said, much of the work had already been done, since the library, though limited, exists to translate research findings into more digestible forms for educators. The result is a prototype that offers what Baffour calls “evidence-based education advice” on topics from adolescent literacy to dropout prevention and school turnarounds.


Baffour noted that an app development tool allowed the agency to create a “simple but robust” prototype within a day.

At the moment, the version they’re testing is “very basic,” Baffour said. The agency is still considering what it’ll look like in the future. 

“It only knows as much as the Doing What Works Library tells it,” she said. So queries about topics that are in the library return rich results. But asking it about topics that aren’t can be problematic. 

For example, ask it about myths around the aforementioned learning styles and it defaults to a more general knowledge base scraped from Wikipedia articles, transcripts of recorded conversations and materials from “different corners of the Internet,” Baffour said. “And as you can imagine, those sources might not have the most up-to-date and accurate information about education.”

Indeed, a query about the topic returns this: “The myth concerning learning styles is that there is one specific style that works best for everyone. This is not true, as different people learn in different ways and have different preferences for how they learn. Additionally, there is no evidence that suggests that focusing on one particular learning style is more effective than focusing on multiple styles.”

Not exactly accurate or helpful.

In the first place, the widely believed “myth” holds that people with different learning styles learn best when their preferred mode of learning is indulged — not that one style works best for everyone. At a more basic level, while many people may express preferences for ways to take in new information and study — receiving instruction verbally, for example, instead of via pictures — scientists have yet to find good evidence that material tuned to these preferences improves learning.

Unfortunately, at the moment the agency’s bot doesn’t confess whether it knows a lot or little about a topic. Baffour said they want to change that soon. For now, however, that’s just an aspiration.

“I think you’re more likely to get a confident chatbot producing inaccurate information than you are to get a self-aware chatbot admitting its false and incomplete knowledge,” she said. 
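One simple way to get the self-aware behavior Baffour describes is a confidence gate on retrieval: if no passage in the curated library matches the query well enough, the bot declines rather than falling back to a general, web-scraped knowledge base. The sketch below is a hypothetical illustration in the same toy style as the earlier one; the library entries, threshold value and refusal message are all assumptions.

    # Hypothetical confidence gate: decline when the curated library has no
    # sufficiently relevant entry, rather than answering from general data.
    DWW_LIBRARY = [  # stand-in for Doing What Works content; entries are invented
        {"source": "Doing What Works: Adolescent Literacy",
         "text": "explicit vocabulary instruction improves adolescent reading comprehension"},
    ]

    def gated_answer(query: str, library: list, min_confidence: float = 0.3) -> str:
        def relevance(q: str, text: str) -> float:
            words = q.lower().split()
            vocab = set(text.lower().split())
            return sum(w in vocab for w in words) / max(len(words), 1)

        best = max(library, key=lambda doc: relevance(query, doc["text"]), default=None)
        if best is None or relevance(query, best["text"]) < min_confidence:
            # The honest failure mode: admit the gap instead of guessing.
            return "The curated library doesn't cover that topic well enough to answer."
        return f'{best["text"].capitalize()}. [Source: {best["source"]}]'

    # An on-topic query clears the gate; an off-topic one gets a refusal.
    print(gated_answer("does explicit vocabulary instruction improve adolescent reading comprehension", DWW_LIBRARY))
    print(gated_answer("what do myths about learning styles get wrong", DWW_LIBRARY))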

Willingham, the UVA researcher, said a useful education-focused chatbot would not just have to incorporate reliable findings, but put them in context. For example, an answer to a query about the evidence for phonics instruction would properly note that, while the record is fairly strong, a lot of mediocre research and “hyperbolic claims” made in support of alternative methods serve to cloud the overall picture — a delicate but accurate detail.

“How is an aggregator going to negotiate that?” he said. 

Asked if he thought a chatbot might soon replace him, Willingham, the author of books and a blog that translate learning science into plain English, said he wouldn’t make any predictions.

“I was never much of a futurist, but I hocked my crystal ball 15 years ago,” he said.

3 Ways to Use ChatGPT to Help Students Learn – and Not Cheat /article/3-ways-to-use-chatgpt-to-help-students-learn-and-not-cheat/ Sat, 12 Aug 2023 12:00:00 +0000 /?post_type=article&p=713093 This article was originally published in The Conversation.

Since ChatGPT can engage in conversation and generate essays, computer code, charts and graphs that closely resemble those created by humans, many educators worry that students will use it to cheat. A number of school districts across the country have decided to block access to ChatGPT on computers and networks.

As professors who study education, we’ve found that a key driver of whether students cheat is their academic motivation. For example, sometimes students are just motivated to get a high grade, whereas other times they are motivated to learn all that they can about a topic.

The decision to cheat or not, therefore, often relates to how academic assignments and tests are constructed and assessed, not to the availability of technological shortcuts. When they have the opportunity to rewrite an essay or retake a test if they don’t do well initially, students are less likely to cheat.


We believe teachers can use ChatGPT to increase their students’ motivation for learning and actually prevent cheating. Here are three strategies for doing that.

1. Treat ChatGPT as a learning partner

Our research demonstrates that students are more likely to cheat when assignments are designed in ways that encourage them to outperform their classmates. In contrast, students are less likely to cheat when teachers assign academic tasks that prompt them to work collaboratively and to focus on mastering content instead of getting a good grade.

Treating ChatGPT as a learning partner can help teachers shift the focus among their students from competition and performance to collaboration and mastery.

For example, a science teacher can assign students to work with ChatGPT to design a hydroponic vegetable garden. In this scenario, students could engage with ChatGPT to discuss the growing requirements for vegetables, brainstorm design ideas for a hydroponic system and analyze pros and cons of the design.

These activities are designed to promote mastery of content as they focus on the processes of learning rather than just the final grade.

2. Use ChatGPT to boost confidence

Research shows that when students feel confident that they can successfully do the work assigned to them, they are less likely to cheat. And an important way to boost students’ confidence is to provide them with opportunities to experience success.

ChatGPT can facilitate such experiences by offering students individualized support and breaking down complex problems into smaller challenges or tasks.

For example, suppose students are asked to attempt to design a hypothetical vehicle that can use gasoline more efficiently than a traditional car. Students who struggle with the project – and might be inclined to cheat – can use ChatGPT to break down the larger problem into smaller tasks. ChatGPT might suggest they first develop an overall concept for the vehicle before determining the size and weight of the vehicle and deciding what type of fuel will be used. Teachers could also ask students to compare the steps suggested by ChatGPT with steps that are recommended by other sources.

3. Prompt ChatGPT to give supportive feedback

It is well documented that supportive feedback bolsters students’ positive emotions, including self-confidence.

ChatGPT can be directed to deliver feedback using positive, empathetic and encouraging language. For example, if a student completes a math problem incorrectly, instead of merely telling the student “You are wrong and the correct answer is …,” ChatGPT may initiate a conversation with the student. Here’s a real response generated by ChatGPT: “Your answer is not correct, but it’s completely normal to encounter occasional errors or misconceptions along the way. Don’t be discouraged by this small setback; you’re on the right track! I’m here to support you and answer any questions you may have. You’re doing great!”

This will help students feel supported and understood while receiving feedback for improvement. Teachers can easily show students how to direct ChatGPT to provide them such feedback.
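For teachers who want this behavior in every exchange rather than per conversation, the same instruction can be set as a standing system message. Below is a minimal sketch using the OpenAI Python SDK; the prompt wording and model name are assumptions, and pasting the same instruction at the start of an ordinary ChatGPT chat works as well.

    # Minimal sketch: a standing system message that makes every reply
    # supportive. Prompt wording and model choice are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4",  # any current chat model could be substituted
        messages=[
            {"role": "system",
             "content": ("You are a patient, encouraging math tutor. When an "
                         "answer is wrong, say so gently, point out what the "
                         "student did right, and guide them toward the fix.")},
            {"role": "user",
             "content": "I think 3/4 + 1/2 = 4/6. Is that right?"},
        ],
    )
    print(response.choices[0].message.content)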

We believe that when teachers use ChatGPT and other AI chatbots thoughtfully – and also encourage students to use these tools responsibly in their schoolwork – students have an incentive to learn more and cheat less.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Texas Professors on ChatGPT: ‘Strategize, Don’t Demonize’ to Curb Academic Dishonesty /article/utep-on-chatgpt-strategize-dont-demonize-to-curtail-academic-dishonesty/ Sat, 05 Aug 2023 11:30:00 +0000 /?post_type=article&p=712642 This article was originally published in El Paso Matters.

A faculty member at the University of Texas at El Paso was grading a composition during the spring 2023 semester and suspected that it was not the student’s work – a suspicion confirmed when she got to one telltale sentence.

The instructor of the upper-level course with a strong writing component believed the essay was prepared, at least in part, by ChatGPT, an artificial intelligence program that launched last November. With a few prompts, users of the free AI program can produce essays, research papers, computer code and more with relative ease.

To stymie ChatGPT, the lecturer directed her students to base one of their answers on how they related to the assigned readings. The answer from the student in question did not sound like a student’s “voice.” The final confirmation was the inclusion of something like “I don’t have a personal experience because I’m AI.”


The student, who earned a zero for his paper, acknowledged his offense and apologized to the instructor, who did not want to be named. The student is among those who tried to cut academic corners with ChatGPT. In most cases, these indiscretions were handled at the classroom level. More serious offenses were submitted to the university’s Office of Student Conduct and Conflict Resolution, or OSCCR.

“I’d like to see more suggestions or training on how to proactively address ChatGPT with my students rather than solely acting in the role of ‘catching’ and disciplining them,” the faculty member said.

In response, UTEP conducted a series of workshops late last spring to inform faculty about the pervasive use of ChatGPT and other forms of AI. UTEP’s Center for Faculty Leadership and Development organized the presentations to increase awareness and, where possible, to educate faculty on how to use AI effectively in the classroom and as an assessment tool. About 50 instructors from throughout the university attended the presentations.

Jeffrey Olimpo, director of the faculty leadership center, said the main concern workshop participants shared with him was students’ unethical use of ChatGPT and AI in general. His response was that AI is not going away.

“We came at it from an angle of, ‘You can’t put the toothpaste back in the tube,’” Olimpo said a few weeks after the last workshop.

The event’s presenters included representatives from OSCCR and the Provost’s Office. Olimpo recalled that the OSCCR official said that his office already had seen some potential ChatGPT cases.

Strategize, don’t demonize

The Office of Student Conduct and Conflict Resolution conducted 20 investigations into possible cases of academic dishonesty tied to the use of AI during the spring 2023 semester, according to the university. UTEP did not respond to a question about how those cases were resolved and said that OSCCR director Jovita Simón would not comment on this story.

While the university was aware of ChatGPT’s potential downsides, Olimpo said there was no reason to chase it down with torches and pitchforks.

“We try to strategize and not demonize,” he said.

Arthur Ramirez, a second-year UTEP doctoral student in finance, said he began to test ChatGPT soon after it launched to learn if it could help with his research. Initially, he was concerned with its inaccuracies, but found it helpful with coding, especially with better prompts, and to understand certain charts. He said the only instructions a professor gave him was to follow the university’s guidelines.

“He said there was no right or wrong way to use ChatGPT,” Ramirez said. “Just don’t abuse it.”

Responding to an El Paso Matters Instagram request for students to share their experiences, one UTEP student said that some of his professors encouraged students to use ChatGPT, while others warned them not to use it for plagiarism.

“I don’t see what the big deal is,” wrote the student who identified himself as “sergio.iii.”

Sergio.iii called the AI program an effective study and communication tool with the right prompts. He said it helped create outlines for papers, add focus to his PowerPoint presentations and often gave more understandable explanations to complicated topics.

“The students using ChatGPT unethically aren’t even being smart about it,” he wrote via Instagram. “Most people use it in a brain-dead way where they just copy and paste answers straight out of ChatGPT and they end up with responses that look identical to a dozen other students.” El Paso Matters reached out to the user for further comment, but he did not respond.

Leslie Waters, an assistant professor of history, did not offer ChatGPT instructions at the start of the spring 2023 semester. She believed the obscure primary source material from her 20th century European history course would be AI-proof. In one case she gave students copies of letters written by soldiers and their families during World War I and asked them to write essays based on the letters’ themes.

Three of her students submitted papers that focused generally on the war, but did not mention the letters or their themes. Additionally, the essays included ChatGPT red flags: grammatically correct sentences that lacked analysis and critical thinking. Each of those students earned low scores. Waters planned to send one of those cases to OSCCR, which she said uses software that can detect AI-generated material.

“It’s not easy (for me) to prove, but it’s extremely easy for me to detect,” she said of ChatGPT work.

Her plan for the fall 2023 semester is to talk to her students about the perils of using ChatGPT, and to encourage them to stay on top of their coursework. It is her experience that students cheat out of desperation. She will give multi-level assignments that force students to submit papers at various stages to keep track of their progress.

Olimpo did not respond to several requests for the recommendations generated by his spring workshops, but he previously proposed that a faculty committee review and possibly update the university’s general course syllabus in regard to the use of AI tools.

A June 2023 article in The Chronicle of Higher Education included the results of a faculty survey of how to work with ChatGPT this fall. Two of the more popular ideas were to alter assignments to make AI participation less useful, and to incorporate AI in some work to help students understand its strengths and weaknesses.

As for El Paso Community College, its ChatGPT directive to students is to follow their professors’ instructions for assignments and the academic guidelines in the college’s Student Code of Conduct, said Keri Moe, associate vice president for External Relations Communication & Development.

“ChatGPT, like any technology available, must be used with academic integrity and in accordance with these guidelines,” Moe said.

Texas Tech University Health Sciences Campus El Paso did not respond to a request for instructions on how its leaders want faculty and students to use ChatGPT.

Academic integrity

While some faculty members want to use AI tools such as Turnitin to catch cheaters, Sarah Elaine Eaton, an associate professor in the Werklund School of Education at the University of Calgary in Canada, advised them not to overreact.

During a May 16 virtual forum about “Academic Integrity and AI,” Eaton said that instructors should include a statement in their syllabus about the AI they plan to use to help with their assessments and inform the students about the limitations of those programs.

“It’s not about trying to use technology in order to catch students,” Eaton said during the presentation. “Nobody wins in an academic-integrity arms race. Deceptive assessment using tools and technologies without students’ knowledge ahead of time is not modeling integrity.”

Greg Beam, an associate professor of practice in UTEP’s Department of Communication, said that he taught an asynchronous virtual course this summer and strongly suspected that some students submitted work done by chatbots. He posted a video on Blackboard where he explained the right and wrong ways to use ChatGPT.

Beam told the students that those who admitted that they used the technology improperly would be allowed to redo the assignment with no penalty. Additionally, he told them that he would contact those who did not come forward to ask them follow-up questions about their submissions to verify that they understood the material.

The professor said about 10% of those students redid the assignment. He suspected a few others, but those submissions lacked the tell-tale red flags. It made him wonder if some students had mastered ChatGPT enough to be undetectable.

“For the most part, at UTEP at least, I don’t think students want to cheat – they want to learn,” Beam said. “And they’re just as concerned about the potential ramifications of these new technologies as the rest of us are.”

This first appeared on El Paso Matters and is republished here under a Creative Commons license.

Opinion: Student-Led Conference Puts Focus on AI and Education /article/student-led-conference-puts-focus-on-ai-and-education/ Thu, 27 Jul 2023 10:15:00 +0000 /?post_type=article&p=712231 On Aug. 5 and 6, student volunteers from the University of Illinois and Stanford University will present the AI x Education conference, an online event charting the adoption and utility of artificial intelligence in education, with a special emphasis on student perspectives. So far, more than 2,700 educators have registered to hear the perspective of over 60 student representatives and the insights of thought leaders in the field, including Stephen Wolfram, Chris Dede and Kristen Dicerbo.

As the organizer of the conference, I have had the opportunity to interview over 30 high school and college students from a range of backgrounds, who were nominated by teachers who are slated to speak at the conference or have shared their AI experiences through articles and interviews. Their innovative use of AI tools has underscored to me that if the broader community of educators, policymakers and industry professionals are to harness AI effectively in education, this collective cannot afford to overlook student voices in its discussions. This raises a pertinent question: How can students and educators cultivate a collaborative approach to this rapidly evolving field?

Most students want to learn, but many have anxieties that their current skills and knowledge could rapidly become irrelevant without integrating AI. In my interviews, one accounting student who interned at a tax consulting company said she feared that AI could automate her data processing tasks, while a computer science student expressed worries that tools like ChatGPT could replace his entire job of coding user interfaces for websites. A marketing student noted that the advanced copywriting and strategic thinking abilities of these AI tools are already making the skills she learned in classrooms obsolete.


Such fears are most evident among college students who will soon enter the job market and hope their professors swiftly revisit their teaching materials and reconsider the goals of their classes in light of the evolving future of work. They need to discern which aspects of the curriculum could be enhanced by AI and which may no longer be relevant. Undertaking such a revision requires professors to have a thorough understanding of what AI can and cannot do. Ideally, they will be supported by their academic departments and college-based teaching and learning centers.

Another concern voiced by students pertains to equitable access to and training in these tools. The paid version of ChatGPT significantly outperforms the free version, leading to an unfair advantage for students who can afford it. Furthermore, students who have the privilege of free time to experiment daily with AI tools develop effective prompting techniques that can produce much better results than those of their less well-off peers who must spend their time outside of class working part-time jobs.

Bolstered by such insights, these students have been receptive to educators who are ready to navigate the changes brought about by artificial intelligence. Seeing teachers revise their curricula and foster dialogues about AI’s role in the classroom has encouraged students to reciprocate, sharing their personal experiences with AI tools. Such discussions are proving invaluable in helping educators refine their curricula and evolve their code of ethics around issues like plagiarism while building trust with their students.

It is incumbent on educators to clarify the areas of curriculum that could be enhanced by AI, those that should remain human-centered and those that might benefit from a hybrid approach. For example, English teachers could require students to collaborate with AI for initial brainstorming and drafting of essays, but not for editing and revision. By clearly communicating expectations and providing guidance on artificial intelligence, educators can prevent students from inappropriately using AI — such as generating solutions to every assignment without thinking critically about them — and assure them of the ongoing relevance of their education.

It is equally crucial for school leaders and administrators to think beyond the classroom and formulate clear guidelines for students and teachers at the institutional level. To ensure that these policies are relevant and practical, schools should consider establishing student advisory committees. These could provide valuable insights into students’ experiences with AI, particularly for those working in classrooms where AI-enhanced teaching is being tested. Integrating student voices into discussions about educational policy and curriculum design will undoubtedly speed up the adoption of AI in education while ensuring that appropriate and effective guardrails are in place.

Further, educational institutions should collaborate with leading large language model service providers, such as OpenAI, to guarantee equitable access to and training in advanced programs such as the paid version of ChatGPT. This would not only help close the growing inequality gap in education caused by unequal access to premium tools; equipping educators with a sufficient understanding of AI can also alleviate apprehensions that often stem from unfamiliarity with technology. To foster dialogues and effective experiments around AI in education, institutions must empower both students and teachers with the leading tools and a deep understanding of how to get the most out of them.

While many more challenges posed by AI in education remain unresolved, every conversation between students and educators can help accelerate these important, ongoing experiments. This collective quest for insight is precisely why I and the other student volunteers decided to host the AI x Education conference. By providing a platform for rich discussion and collaboration, we aim to contribute toward a future where AI and education coalesce seamlessly, to the benefit of all students. I invite educators interested in attending to register online.

National ChatGPT Survey: Teachers Even More Accepting of Chatbot Than Students /article/national-chatgpt-survey-teachers-accepting-ai-into-classrooms-workflow-even-more-than-students/ Tue, 18 Jul 2023 09:30:00 +0000 /?post_type=article&p=711609 Teacher and parent attitudes about ChatGPT, the popular AI chatbot that debuted in late 2022, are shifting slightly, according to new findings out today from the polling firm Impact Research.

The survey is the latest in a series commissioned by the Walton Family Foundation, which is tracking the topic, as well as attitudes about STEM education more broadly.

The researchers say Americans, and teachers especially, are beginning to see the potential of incorporating AI tools like ChatGPT into K-12 education — and that, in teachers’ experience, it’s already helping students learn.

The new findings come as the U.S. Federal Trade Commission opens an investigation into OpenAI, ChatGPT’s creator, probing whether it put personal reputations and data at risk. The FTC has warned that consumer protection laws apply to AI, even as the Biden administration and Congress push for new regulations on the field.

OpenAI is also a defendant in several recent lawsuits filed by authors — including the comedian Sarah Silverman — who say the technology “ingested” their work, improperly appropriating their copyrighted books without the authors’ consent to train its AI program. The suits each seek nearly $1 billion in damages, the Los Angeles Times reported.

The latest results are based on a national survey of 1,000 K-12 teachers; 1,002 students ages 12-18; 802 voters; and 916 parents. It was conducted by Impact Research between June 23 and July 6. The margin of error is plus or minus 3 percentage points for the teacher and student results, 3.5 percentage points for the voter results and 3.2 percentage points for the parent responses.

Here are the top five findings:

1. Nearly everyone knows what ChatGPT is

About seven months after it first launched, pretty much everyone knows what ChatGPT is. It’s broadly recognized by 80% of registered voters, according to the new survey, by 71% of parents and by 73% of teachers.

Meanwhile, slightly fewer students — just 67% — tell pollsters they know what it is.

2. Despite the doom-and-gloom headlines about AI taking over the world, lots of people view ChatGPT favorably

Surprisingly, parents now view the chatbot more favorably than teachers: 61% of parents are fine with it, according to the new survey, compared with only 58% of teachers and just 54% of students.

3. Just a fraction of students say they’re using ChatGPT … but lots of teachers admit to using it

In February, an earlier survey in the series found that 33% of students said they’d used ChatGPT for school. That figure is now up to 42%.

But their teachers are way ahead of them: 63% of teachers say they’ve used the chatbot on the job, up from February, when just 50% of teachers were taking advantage of the tool. Four in 10 (40%) teachers now report using it at least once a week.

4. Teachers … and parents … believe it’s legit

Teachers who use ChatGPT overwhelmingly give it good reviews. Fully 84% say it has positively impacted their classes, with about 6 in 10 (61%) predicting it will have “legitimate educational uses that we cannot ignore.”

Nearly two-thirds (64%) of parents think teachers and schools should allow the use of ChatGPT for schoolwork. That includes 28% who say they should not just tolerate but encourage its use.

5. It’s not just for cheating anymore

While lots of headlines since last winter have touted ChatGPT’s ability to help students cheat on essays and the like, just 23% of teachers now believe cheating will be its likely sole use, down slightly from the spring (24%).

Disclosure: The Walton Family Foundation provides financial support to The 74.

IES Director Mark Schneider on Education Research and the Future of Schools /article/74-interview-ies-director-mark-schneider-on-education-research-and-the-future-of-schools/ Mon, 05 Jun 2023 10:15:00 +0000 /?post_type=article&p=709844 See previous 74 Interviews: Bill Gates on the challenge of spurring educational improvement; Sal Khan on COVID’s math toll; and Patricia Brantley on the future of virtual learning. The full archive is here.

The Institute of Education Sciences turns 21 this year. After five years at its helm, Director Mark Schneider is hoping to shepherd its transition to maturity.

When he was appointed by President Trump in 2017, Schneider took over an agency designed to reveal the truth of how schooling is delivered in the United States. IES houses four research centers that measure the effects of educational interventions from preschool to university, and through the National Assessment of Educational Progress — the agency’s most recognizable research product, often referred to as the Nation’s Report Card — it delivers regular updates on the state of student achievement.

But Schneider sees a new role for federal research endeavors. Through the use of public competitions and artificial intelligence, the director wants IES to help incubate breakthrough technologies and treatments that can help student performance take a giant leap forward in the coming years. Rapid-cycle experimentation and replication, he hopes, will help reverse more than a decade of stagnation in K–12 performance.

Late in his six-year term, Schneider is candid about his status as one of the few holdovers from the previous administration still serving in government. In part, he quips, that’s because education research isn’t considered important enough for a Trump appointee to be fired. But he’s also labored to win the trust of Congress and cultivate bipartisan support for a vision of educational improvement powered by data.

Now he believes that vision could soon be realized. In December, Congress approved a substantial increase in IES’s budget to potentially fund a fifth national center that some have dubbed a “DARPA” for education research (based on the Pentagon’s Defense Advanced Research Projects Agency). Further legislation is needed to authorize a branch for advanced development in education sciences, but potential research strands are already being theorized.

Schneider — a political scientist who left academia for leadership and research roles at the American Institutes for Research and the American Enterprise Institute — has a commanding perspective on the federal education bureaucracy, serving as the head of the National Center for Education Statistics in the 2000s. His sometimes tart observations about Washington’s research efforts, and the future of IES, can be found on his blog.

In a wide-ranging conversation with The 74’s Kevin Mahnken, Schneider spoke with surprising openness about the Department of Education (which “operates like a bank” in its grantmaking capacity), the “horrifying” reality of university master’s programs (“It’s a money machine, and so you create more of them”), and why he believes some concerns about data privacy are overblown (“If I were really worried about this, I wouldn’t wear an Apple watch.”) 

Above all, he said, the task ahead is to develop a research base that can yield transformative educational tools on the order of COVID vaccines and ChatGPT.

“The goal, using this foundation, is to look at things that pop out, that would not exist otherwise,” Schneider said. “If we can do this with vaccines, if we can use it with chatbots, then what’s our foundation?”

The conversation has been edited for length and clarity.

The 74: Tell me a little about what you’re anticipating this year in terms of legislation to establish a DARPA-type program for education.

Mark Schneider: There are two parts of the proposed legislation. The first is to set up the National Center for Advanced Development in Education, NCADE, and the other is for major reinvestment in statewide longitudinal data systems. Most people focus on the first part, but the second is also really important because we spent a billion dollars building those data systems over the last 18 years. The whole thing is a great system, but it needs to be rebuilt.

What needs to be modified in those systems?

It’s old technology. I think the first round of money for them went out the door in 2006. [Gestures at iPhone sitting on the table] Can you imagine having a technology system that was built in 2006? So they need to be modernized, but the more important thing is that we now have a much more expansive vision of what they can do after almost 20 years of work. 

The example I point to is absenteeism. States have really good records on attendance because money flows based on average daily attendance, and they have to take counts. They know who are chronic absentees, but they don’t know why. It could be food insecurity, health, migration status, could be a dozen things or more. But if we use these longitudinal data systems as a backbone and then plug in information from criminal justice, health, Social Security, we would have a much better sense of what’s going on with any student in a given school. The strength of Statewide Longitudinal Data Systems [SLDS] has always been tracking students over time.

“Why did I survive when almost nobody else did? I don’t think education research is that important. I think I’m good at my job, and the reforms we’re pursuing … are really strongly supported by the current administration. But I’m not important enough to be fired.”

The biggest problem, of course, is that as you merge more data, the issues of privacy become more intense because it’s easier and easier to identify people when there’s more information. We’re nowhere near good enough at privacy protection, but we’re getting way better, and there are so many more ways of protecting privacy than there were 20 years ago.

Given the lengthy timetables of federal projects like the SLDS, do you ever feel like you’re painting the Golden Gate Bridge, and now that you’ve finally established these tools, it’s already time to overhaul them?

Well, we spent $1 billion building this, and right now, we’re spending about $35 million per year on grants to states to do things with it. What percentage of $1 billion is going back into maintenance and expansions? It’s pocket change. So you always have to remember that this is a state-owned system, designed to help them do their work. And to take an example, Tennessee is surrounded by seven other states, and they end up doing their own collaborations and data exchanges.

Is the inherent federalism of that approach, especially layered over the archaic technology, difficult to manage? How did it play out during the pandemic, for instance, when real-time data was so hard to generate?

The trickiness had nothing to do with SLDS, though. It had to do with the world we woke up to in March 2020.

For me, SLDS is like an exemplar of a federal system where the states assume almost all responsibility. But again, we have more capacity compared with most states. There are states like Massachusetts that are doing an unbelievably good job, and other states are not. Our role there is providing the resources to enable states to a) experiment like Massachusetts and b) bring states that have little capacity up to speed. 

Probably the most alarming federal data coming out of the COVID era has been the release of scores from the National Assessment of Educational Progress, which showed huge drops in achievement in reading and especially math. Did those results match what you were expecting?

By the time NAEP landed, we had NWEA results and others that suggested it was going to be a debacle. We knew the scores were going to go down by a bunch. But NAEP is NAEP — it’s national, it’s rock-solid in terms of its methodologies and its sample. So it’s indisputable that this was an awful situation, right?

To connect the dots with SLDS: One of the problems with the system is that it was conceived as a data warehouse strategy. And I tried and tried, but nobody caught that this was a stupid way of phrasing its purpose. I said, “We don’t need a data warehouse. What goes into a warehouse, a forklift?” We want an Amazon model where we also have retail stores, and you can go in and find stuff. 

I understand that states are very hesitant to let random academics and researchers have access to very private data. But as we rebuild the SLDS, we need to make sure that there are use requirements as part of the deal — always, always consistent with privacy protections, but we have to use these more. It’s a little tricky because some states have a history of opening up the doors and letting in researchers, and others just don’t. In the state of Texas, it can depend on who the attorney general is. 

It can be striking how much research comes out of, for instance, Wake County, North Carolina.

It’s because they’ve opened the data to more people. And that’s part of the deal, but Wake County is not the United States. We need more. 

My days of active research are behind me, but the possibilities built into these data are incredible. I thought I was going to be able to do a deal with Utah, where there’s an organization doing early childhood interventions; all the evidence is that they’re good, but we need to see if “good” sticks. Well, SLDS is perfectly designed to figure out if interventions stick. I thought this work in Utah would allow us to identify students in their early childhood interventions, work with the state to track those students over time, and find out if those very positive pre-K results — it’s a very inexpensive intervention with great results in the early years — stick. We have the means to do it. We just need to do it.

It seems like efforts like that would be complicated by the growing political salience of data security.

It’s everywhere, and for good reason. I’m not really a privacy hawk, but all the privacy protections need to consider benefits versus costs. In too many places, we’ve concentrated on the risk without considering the benefit. But that’s only half the equation. We have to be able to say, “This risk can be mitigated, and there could be huge benefits to come out of this.” 

“It’s largely the same technology that ETS invented 40 years ago. But the world has changed. It’s just gotten more and more expensive, but the amount of reimagining NAEP and its structure — whether or not we can do this cheaper and faster — is just lagging. It’s really frustrating.”

This is what political systems do all the time — they balance risks against rewards. But we have to do it in a much more sophisticated way.

Why are you a privacy dove? There is something a little funny about how guarded people are about government intrusions when they so freely hand over their data to Amazon or whomever.

I have an Amazon Echo in every room in my house, and I know that they’re listening! Everyone has a story where they’re talking about something, and then they go on their Amazon account and see an advertisement related to the product they were talking about. It’s really scary, but I’ve only turned off the microphone on one of my devices because of the convenience of being able to say, “Alexa, turn on my lights, play the BBC.” For me, those benefits are worth getting a bunch of stupid advertisements.

If I were really worried about this, I wouldn’t wear an Apple watch or own an Apple phone. We all should be concerned about privacy, and especially when it comes to children. Obviously, the standards have to be high. But again, there are benefits to using a more comprehensive database, which is my vision of what SLDS would be. The technology issues are real, and it’s always a war of whether people hack it and we need to develop better mechanisms for protection. 

What are you trying to achieve, organizationally, with the proposed addition of an advanced research center?

IES is only 20 years old. My predecessor, Grover Whitehurst, was the founding director, and he was brilliant. He set out to modernize the research and development infrastructure, and randomized controlled trials became the coin of the realm. I was the NCES commissioner for three years, and I argued with him all the time about his model of RCTs, which are the gold standard. The way he saw it was — and he knew what he was doing, he’s really smart — “I can’t compromise this at the beginning. If I say, ‘Maybe we do this, maybe we do that,’ then nobody goes in the direction I want, and they just wait me out.”

The problem with the model was that RCTs, as they were originally introduced, were about average effects across populations. But to use a specific example, we’ve now moved into individualized medicine — it’s about what works for you, and under what conditions. So the mantra of IES now is, “What works for whom, and under what conditions?” Of course, we still have studies that look at main effects, but our work is all about identifying what works for individuals or groups of students. This requires a lot of changes about the way we think and how we do business.

My joke is that almost every science has gone through a replication crisis. We don’t have a replication crisis, because we don’t replicate anything. Even if it works, we don’t replicate it! So a few years ago, we launched a replication RFA [request for applications]. IES was moving in that direction anyway, but we needed much more systematic attention to replication. My mistake was we structured the replication this way: “Something worked in New York City, so give me another $5 million, and I’ll try it in Philadelphia.” Or, “It worked for some African American kids, let’s try it with Hispanic kids.” They were all big experiments, five years long. You can’t make progress that way.

Now we’re running a new prize competition, which will be announced before the summer. I’m not sure how generalizable this will be, but the prize is based on using digital learning platforms to run experiments. The critical part is that you have to have 100,000 users on your platform to qualify. You run those experiments, you fail fast — that’s an incredibly important principle, fail fast — and the few things that work, you have to do multiple replications. The original plan was: experiment, replication, then another round of replications. At the end of which, the goal is to say, “Here’s an intervention that worked for these students, but not for these students.” Then you take what worked for those students and push it further. [On May 9, IES announced the winners of the $1 million Digital Learning Challenge prize.]

It’s a systematic approach to rapid replication. Not everything in education research can be done in short order. Some things take a long time. But there are many, many things that last a semester or a school year, and at the end of that time, we have results. This prize approach is just a different process for how we replicate.

ChatGPT just opened up a whole world of discussion about the use of AI. But what happened with ChatGPT is like what we’re trying to do. The world has been doing AI for literally decades, but the last 10 years have seen increased computing power and more complexity in the models, and the foundational models have gotten bigger and bigger and bigger. We built an incredible foundation: machine learning, data science, AI. And all of a sudden, boom! ChatGPT is the first thing that caught the public’s attention, but it was built on this amazing foundation. Nobody knows what the next thing is that will break through, but they’re all being built on decades’ worth of work that established this foundation. It’s the same thing — the COVID vaccine could not have happened without that foundation.

What I’m trying to do is use IES resources to build this kind of foundation, which includes the learning platforms, rapid-cycle experimentation and replication, transformative research money. And the goal, using this foundation, is to look at things that pop out, that would not exist otherwise. That’s the goal: If we can do this with vaccines, if we can use it with chatbots, then what’s our foundation? What I hope is that, when we get NCADE going, we move this activity there and let it consolidate and interact. Then we start doing new, innovative research based on that foundation.

What are the kinds of research projects and outcomes that perhaps seem fantastical now, but could be realized in the way that mRNA vaccines have been?

The telos, the North Star, is individualized education. The first thing that is popping from this work is an AI institute IES is launching with the National Science Foundation, and it’s designed for students with speech pathologies. There is a shortage of speech-language pathologists in schools, so the demand for them is really high. We also do something incredibly stupid by burdening them with unbelievable paperwork.

“My joke is that almost every science has gone through a replication crisis. We don’t have a replication crisis, because we don’t replicate anything. Even if it works, we don’t replicate it!”

This AI institute is funded by $20 million, split between IES and the NSF, and it has several prongs to it. The first is to develop an AI-assisted universal screener, because it takes time to diagnose exactly what students’ speech pathologies are — whether it has to do with sentence structure, vocabulary, pronunciation. Medicine has been doing this forever, by the way. The second prong is to use an AI toolbox to help design, update, and monitor the treatment plan. In other words, we’ve got a labor shortage, we know we need assessment and a treatment plan, and AI can do this. Or, AI should be able to do this, whether or not we can pull it off with this group. It’s a risk, like everything we do is a risk. But to me, this is a breakthrough.

I’m very optimistic that they’re going to pull it off, in part because of the third prong, which relates to the paperwork. It’s a lot of work, multiple forms, and it’s routine. Well, guess what can now type up routine paragraphs?

It seems like school districts, let alone Congress, could be really hesitant about deploying AI to write up after-incident reports, or what have you. Some regulatory structure is going to have to be created to govern the use of this technology.

I’m sure, like me, you’ve been monitoring the reaction to ChatGPT. There’s an extreme reaction, “Ban it completely.” Another extreme would be, “This is amazing, go for it!” And then there’s the right reaction: This is a tool that’s never going back in the box. So how do we use it appropriately? How do we use it in classrooms, and to free teachers from drudgery?

AI-powered chatbots like ChatGPT challenge K–12 schools, but could also prove a boon to teachers. (Getty Images)

At least for the foreseeable future, humans will have a role because ChatGPT is often wrong. And the biggest problem is that we sometimes don’t know when it’s wrong. It’ll get better over time, I don’t think there’s a question about that, but it needs human intervention. Humans have to know that it’s not infallible, and they have to have the intelligence to know how to read ChatGPT and say, “That doesn’t work.”

Of course, it writes very boring prose.

But so do students.

And so do reporters.

Touché. You mentioned that you ran NCES over a decade ago. I’m wondering if you’ve noticed a change in Washington’s ambitions around using federal data to spur school improvement, especially now that the peak reform era is long gone.

It’s true that the level of skepticism is much greater. But the technology has also gotten way, way better. We hired the National Academies [of Science, Engineering, and Medicine] to do three reports for us to coincide with our 20th anniversary. The most interesting one talks about new and somewhat less intrusive measures.

NCES is old. There are lots of arguments about when it started, but the modern NCES was actually a reaction to [sociologist and researcher] James Coleman, who was intimately involved in the early design of longitudinal studies. They’ve gotten more complicated since the originals, and they’re all based on survey data, just going out and talking to people. Well, you know the fate of surveys: Response rates are falling and falling, and it’s harder to get people to talk.

That’s how bad it’s gotten?

We were forced — “forced” makes it sound like it was a bad idea; and it did turn out to be a bad idea — to ask schools that were participating for a lot of information about IEPs [individualized education programs] and students with special needs. This gets back to that cost/benefit calculation because they would not share the classification of students with special needs, and they just refused to participate. So we ended up canceling that data collection. That was a leading indicator of the problem.

“I taught public policy for decades at Stony Brook University, and when I decided that I was never going back, they asked me to give a talk. … My opening remark set everyone back on their heels because I said, ‘I taught here for 20 years, and every one of my students should sue me for malpractice.’ Nothing I taught had anything to do with the way the sausage is really made.”

Increasingly, the question is what we can do to get the kind of data that these longitudinal studies generated without having to interview 15,000 or 18,000 kids. It requires a modification in the way you think, and it requires an expansive view of where the data lie. How much of the data that we’re asking students and parents and teachers about resides in state longitudinal data systems, for example? Could we drive the need for human interviewing to 5 percent or 10 percent of what we do now? It actually calls for a different thought process than, “Well, we always do ‘High School and Beyond’ this way!” But federal bureaucracies aren’t known for their innovative thinking, quite frankly. 

This adaptation might also mean that some of the unique things we get from surveys are going to have to go because no one will give them to you.

What, if anything, is the effect of changes in government on a massive organization like IES? You were appointed under President Trump, so the Department of Education has already undergone a really significant change, and now Congress has changed hands as well.

We’re not massive. We’re pretty small, actually.

We’re a science agency, and we were created when the Education Sciences Reform Act was authorized in 2002. I think the vision was that IES would grow not to the size of the National Institutes of Health or the National Science Foundation, but on a trajectory that would put it into that kind of group. If you look at the original legislation, it’s still there. We have a board that is almost fully populated now, and the ex officio members include the director of the Census, the commissioner of the Bureau of Labor Statistics, and somebody from NIH. You don’t create a board with those kinds of people on it unless you expect it to be a big, major player.

It never got there. The budget is up to $808 million, in part because we got a pretty big chunk of money in the omnibus package. But $30 million of that was for DARPA-Ed, which we don’t have yet. Ten million dollars of that is for the School Pulse panel. So Congress is interested in modernization, and we have to prove that this investment is worthwhile.

What about the difference at the top? Are there notably different attitudes between Secretary DeVos and Secretary Cardona with respect to IES’s mission?

I’ve gotten enormous support from the department. We would not have gotten the money for NCADE, we would not have gotten the money for School Pulse without that support. DeVos’s goal was to make the Education Department go away, so this administration is obviously much more expansive. They’ve been careful in their support of things, but again, NCADE wouldn’t have gotten this far without the full-throated backing of the department, and of the Office of Management and Budget and the White House.

I’m reminded of the parties’ divergent positions on the federal government’s role in education, and how close the Department of Education came to never being authorized.

Jimmy Carter is a really good ex-president and a good human being, but was not a very effective president. As you know, the establishment of the department was in response to support that he got from teachers’ unions. So there is a philosophical debate about the role of the federal government in education, and it’s not a slam dunk. There are things that are worth talking about. A huge chunk of the money that the department manages is Title IV, so it operates like a bank, and it’s by far the smallest cabinet department in terms of workforce.

President Jimmy Carter at the inaugural ceremony for the Department of Education in 1980. (Valerie Hodgson/Getty Images)

The other thing I’m not sure people fully understand is that the department isn’t just a grant-making operation, it’s also a contract shop. I taught public policy for decades at Stony Brook University, and when I decided that I was never going back, they asked me to give a talk to my former colleagues — almost all of whom I’d hired — and graduate students. My opening remark set everyone back on their heels because I said, “I taught here for 20 years, and every one of my students should sue me for malpractice.” Nothing I taught had anything to do with the way the sausage is really made. 

You hear this all the time, and academics pooh-pooh it. But I’ve been on both sides of it, and it’s really true: Academic research and the sausage factory are two different worlds. In 20 years of teaching public policy, I never once mentioned contractors. And contractors run the whole show. It’s the way we do business, and it’s even more interesting than just: “I run this agency, but here’s what you, the contractor, should do.” All too often, it’s the contractors doing the actual thinking.

There’s been a long argument over the 20 years, on and off, that I’ve been associated with this stuff. We should, and must, contract out the work and the implementation, but we should not be contracting out the thinking. And that’s easy to articulate, but what’s the dividing line? When are we surrendering our intellectual capital — our control of the ship, if you will — to contractors who now design the ship, build the ship and steer the ship? 

Are there concrete examples from education research where you can point to projects that have gone off-course?

NAEP is $185 million per year, and it gets renewed every five years. Do you know how long Educational Testing Service has had the contract? Forty years. There are reasons why they get this contract — they’re good! But this is decades of either minimal or zero competition. And as the test has gotten bigger and more complicated, even putting together a bid to compete costs millions of dollars. People ask, “Why would we spend millions of dollars to compete with ETS when they’ve had the contract for 40 years and we see no indication that it will ever be different?”

To me, this is a serious issue.

Given that NAEP is the foremost product of NCES, there’s probably very little scope for reimagining it beyond, say, changing the testing modality from pen-and-paper to computers.

I agree with that; it’s largely the same technology that ETS invented 40 years ago. But the world has changed. The test has just gotten more and more expensive, while the work of reimagining NAEP and its structure — asking whether we can do this cheaper and faster — is lagging. It’s really frustrating.

Even before COVID, there was a lot of pondering about the future of NAEP and the costs of administering it. The Long-Term Trend test was postponed between 2012 and 2020, right?

Yeah, but that’s an interesting case. The modern version of NAEP — which measures fourth- and eighth-grade reading and math — was authorized in 2002, I believe. It goes back to the ’70s, really, but we’ve been doing this version of it for 20 years. People love the Long-Term Trend test, but do we really need it when we’ve had 20 years of the main NAEP?

You’ve spent a lot of your career studying the value of higher education. Do you think we’re staring at a financial or demographic apocalypse for colleges and universities?

“Apocalypse” is way too strong a word. There are demographic trends such that the pool of students is shrinking, and there’s also incredible regional variation. The New England and mid-Atlantic states are experiencing much sharper declines than the South and the West. And of course, universities are not mobile; if you invest all this infrastructure in frigid Massachusetts or northern New York, and all the students move, you have to ask, “What do I do with all this infrastructure now?”

As to the value of a four-year degree, you and I operate in a sphere where everybody is highly literate. I read all the time, and I’m not talking about technical stuff. I read novels all the time because it’s an opportunity to live in a different world. But what’s the definition of literacy in the world we now live in, and what skills do we truly need? It’s still only a minority of people who go to four-year programs, but do we need to send even that many students to get four-year degrees? Most of them want jobs and family-sustaining wages, and do we need four-year degrees for that? The answer is obviously not, if you look at what’s happening in Maryland and Pennsylvania [where governors have recently removed degree requirements from thousands of state jobs]. 

The fact of the matter is, this is happening. To the extent that it’s happening, which I believe is necessary and important, the incentives for getting a bachelor’s degree start to decline. It becomes more of an individual question: “I’m going to spend five or six years at a four-year institution. It’s pretty much a cookie cutter, stamp-stamp-stamp experience, and I get a bachelor’s degree. Then, at a job interview, they ask what my skills are, and I can’t answer. Well, I can use ChatGPT!”

That’s quite grim. But is there a way to offer prospective students better information about the value they’re actually getting from college?

When I was at the American Institutes for Research, I ran something called College Measures, which was the first systematic attempt to get past university-level data and look at what happens to students when they graduate. In the end, it’s the variation in programs that really matters — as soon as we started unpacking student outcomes, program by program, the programs that were technical were the winners. And the numbers were amazing. The first results we published came from Virginia and Tennessee, and I swear to God, when I saw the results, I didn’t believe them. I thought we had an error in the data because associate’s degree holders were out-earning bachelor’s degree holders.

We repeated this over and over and over again, in maybe 10 different states. It was always technical degrees coming out of community colleges that had the best earnings. In the state of Florida, I think the best postsecondary certificate was “Elevator Mechanic/Constructor.” There aren’t a lot of them, but the starting wage was $100,000! Then you start looking at sociology, English, psychology, and [gestures downward with his hand, makes crashing sound].

It turned out that these degree programs were increasingly becoming surrogates for skills. The worst outcomes of all were for students who went into liberal arts and general studies at community colleges. They’re doing that because they want to transfer to a four-year school, but only 20 percent of them actually transfer. They come out with a general education and no skills, and the labor market outcomes were a disaster.

I was working with a labor-market analytics firm, which has employment records for millions of people and scrapes job advertisements, to start looking for what skills were in high demand. The beauty of it was that it was such good data, and even better, it was regional. Most people don’t move that often, so if I’m living and going to school in western Tennessee, it doesn’t help me at all to know what somebody’s hiring for in Miami. It basically asked, “How much money is each skill worth?” Things have probably changed since that time, but one of the highest-demand skills in almost every market was a customer relationship management platform, which was worth between $10,000 and $20,000.
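
To make the idea concrete, here is a minimal sketch of how a skill’s local wage premium can be backed out of job-postings data. Everything in it (the column names, the salaries, the two-skill example) is invented for illustration; this is not the contractor’s data or methodology.

    # Invented illustration: estimate a skill's local wage premium from
    # job-postings data. Column names, salaries and regions are all
    # hypothetical placeholders.
    import pandas as pd

    postings = pd.DataFrame({
        "salary": [52000, 88000, 61000, 95000, 70000, 83000],
        "crm_software": [0, 1, 0, 1, 0, 1],  # 1 = posting lists the skill
        "sql":          [0, 1, 1, 1, 0, 1],
        "region": ["west_tn"] * 3 + ["miami"] * 3,
    })

    # Labor markets are local, so estimate within one region: compare the
    # mean posted salary for ads that list the skill against ads that don't.
    local = postings[postings["region"] == "west_tn"]
    for skill in ["crm_software", "sql"]:
        with_skill = local.loc[local[skill] == 1, "salary"].mean()
        without = local.loc[local[skill] == 0, "salary"].mean()
        print(f"{skill}: ~${with_skill - without:,.0f} premium in west_tn")

The region filter matters for exactly the reason he gives: a premium estimated in Miami says nothing about western Tennessee.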

The other thing we did, which made me really popular, was look at the same outcomes for master’s programs. Colleges just create these programs, and the money goes to support everything that academics love: travel, course buyouts, graduate students. But the numbers are horrifying for most master’s programs. You create a master’s program, and they tend to be relatively cheap — and you don’t give TAs to master’s students, so it’s all cash. It’s a money machine, and so you create more of them. 

This brings me back to my previous question. If young people start seeing the value proposition of a four-year degree differently, and American fertility rates are producing fewer young people to begin with, it seems like the music eventually has to stop for the higher education sector. And if that happens, employers are going to have to rely on something besides the apparent prestige of a B.A. to distinguish between job candidates, right?

Both my daughters think I’ve become increasingly conservative because of what goes on in post-secondary education. Look at university endowments: All the money is hidden, but the subsidy we give to well-off students is humongous because their endowments are tax-free. Princeton has a huge endowment and a small student population; Harvard has a bigger endowment, but also a larger enrollment. When I was at the American Institutes for Research, we calculated the subsidy at Princeton per undergraduate student, and the subsidy was something in the vicinity of $100,000 per year. All hidden, nobody talks about it. Meanwhile, the total subsidy for Montclair State University, which is down the road, was $12,000; the local community college was $3,000. This includes both state and federal money. What kind of system is this?
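
For readers who want to see the shape of that calculation, here is a back-of-envelope sketch: treat the forgone tax on endowment returns as an implicit per-student subsidy. All four inputs below are hypothetical placeholders, not the figures from the AIR analysis.

    # Back-of-envelope sketch: treat forgone tax on endowment returns as an
    # implicit subsidy and divide by enrollment. All numbers are illustrative.
    endowment = 25_000_000_000   # hypothetical endowment, in dollars
    annual_return = 0.08         # assumed investment return
    tax_rate = 0.35              # tax that would apply absent the exemption
    undergrads = 5_000           # hypothetical enrollment

    forgone_tax = endowment * annual_return * tax_rate
    per_student = forgone_tax / undergrads
    print(f"Implicit subsidy: ${per_student:,.0f} per student per year")
    # With these assumptions: 25e9 * 0.08 * 0.35 / 5,000 = $140,000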

I testified at the Senate Finance Committee, and we got a small tax on endowments that was only for the very, very richest schools. I think it’s still on the books, but it was nowhere near as aggressive as it should have been. What I wanted was to take the money and set up a competitive grant program for community colleges because what they do is hard work, and they absolutely need the money. But what happened was that we got a much smaller tax that went into the general fund and didn’t go into improving anything. It was a disappointment.

This leads me to wonder what you make of the Biden administration’s student debt relief!

I’m not going to talk anymore. [Laughs]

The other part of that same campaign was about property taxes. Georgetown and George Washington University, for example, don’t pay property taxes. Some universities acknowledge that they’re getting police services, fire, sewage, and so forth, and they negotiate something called a PILOT, a payment in lieu of taxes. One case was Harvard, which negotiated a PILOT with Boston that was way lower than what they would have otherwise paid! A past college president told me once, “Your campaign to go after the endowments is never going to happen in a serious way. But if you start attacking our property tax exemption, that gets us worried.”

“The numbers were amazing. The first results we published came from Virginia and Tennessee, and I swear to God, when I saw the results, I didn’t believe them. I thought we had an error in the data because associate’s degree holders were out-earning bachelor’s degree holders.”

Back when I thought some of this was actually going to stick, I wrote about it in the Washington Post. Washington, D.C.’s Office of Tax Revenue turns out to be a pretty good agency, and I asked them for a list of all the properties owned by Georgetown and George Washington. I just asked them to calculate the value of those properties, and what should be the payment given the commercial tax rate. It was a lot of money. The average residential property owner in Princeton, New Jersey, pays thousands of dollars more in taxes than they otherwise would because Princeton University doesn’t pay property taxes.
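
The calculation he describes is simple enough to sketch in a few lines: sum the assessed values and apply the commercial rate. The values and the rate below are placeholders, not the Office of Tax Revenue’s actual figures.

    # Sketch of the property-tax calculation described above. Assessed
    # values and the commercial rate are hypothetical placeholders.
    assessed_values = [420_000_000, 185_000_000, 96_500_000]
    commercial_tax_rate = 0.0166  # i.e., $1.66 per $100 of assessed value

    hypothetical_bill = sum(assessed_values) * commercial_tax_rate
    print(f"Hypothetical annual property tax: ${hypothetical_bill:,.0f}")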

Criticizing universities in the Washington Post doesn’t sound like a good way to make friends in your current position.

Well, I haven’t done anything like that in years. And of course, I was appointed by the previous administration, when none of this stuff was particularly poisonous.

So why did I survive when almost nobody else did? Education research just isn’t seen as that important. I think I’m good at my job, and the reforms we’re pursuing — whether it’s establishing NCADE or revising the SLDS — are really strongly supported by the current administration, which I really appreciate. But I’m not important enough to be fired.

Isn’t that something of an indictment of federal policymakers, though? They should care more about education research!

Yeah, but then I would have been fired. [Laughs

I was affiliated with AEI [the American Enterprise Institute, a conservative think tank], and I still have many friends there. But this NCADE proposal has Democratic backing in Congress. A lot of the work is still nonpartisan, or bipartisan. We work really hard at this, and some of the things we’re pushing are just so fundamentally important that it doesn’t matter which party you’re in.

Does partisanship make it harder to pursue the higher education issues you’re interested in, though?

I’m only the third IES director that’s been confirmed and served any length of time. Russ Whitehurst was totally focused on early childhood literacy, and John Easton cared the most about K–12. So even over these last five years, IES is predominantly still K–12 oriented.

My newest thing in postsecondary research is to collect data on non-credit activity, and I don’t think people understand how big that is in community college. A lot of it is people enrolling to use a swimming pool, or someone who takes three courses in musicology but isn’t interested in credit or a degree. But increasingly, non-credit activity is being used for non-credit certificates that are job- and career-related. Maybe you need three courses to upgrade your skills for auto body repair, or to upgrade your IT skills, but you don’t want a whole degree or to enroll in college. So you can do it on a non-credit basis.

We don’t even know how many non-credit certificates are being granted because we don’t collect any data on it. IPEDS [the Integrated Postsecondary Education Data System, the federal government’s primary source of information on colleges and universities] is rooted in Title IV, and it doesn’t collect information about schools that don’t take federal grants or about non-credit activity. But it’s really big, and many people are betting time and energy and money to acquire non-credit certificates. We’re trying to do some work on that, and OMB is very hesitant to mandate any collections of data because of Title IV, but they’ve approved a voluntary data collection. I don’t do research anymore, but I’m trying to broker deals with researchers and states — Virginia has a beautiful data set, for instance — to find out what happens if you get a non-credit certificate. Indiana is another opportunity.

Launching this stuff is hard because it’s pretty untraditional, and it requires strong state data systems and the willingness of states to work with independent researchers. And of the $808 million we’ve got, none of it is walking-around money; all of it is competitive, everything’s peer-reviewed. Which it should be, but I can’t just say, “Sure, sounds great, I’ll send you $50,000.”

Opinion: AI Will Not Transform K-12 Education Without Changes to 'the Grammar of School' /article/ai-will-not-transform-k-12-education-without-changes-to-the-grammar-of-school/ Tue, 30 May 2023 11:15:00 +0000 /?post_type=article&p=709562 Call me a Luddite, but I’m not convinced artificial intelligence will transform educational outcomes.

This has nothing to do with the technology itself. It’s actually awe-inspiring to see how ChatGPT can provide instant feedback to students on their writing, deftly coach them in solving a complex math problem, and interact in ways that can easily be mistaken for a human tutor. It will only get better over time.

But it’s important to remember that promises of educational transformation were made about television in the 1970s, desktop computers in the 1980s and the internet in the 1990s. If “transformation” is defined as an era with entirely new levels of student outcomes, it is hard to say that any of these innovations delivered — still, fewer than 1 in 3 students graduate high school ready for college or a career.


What would make this time different is if systems leaders and policymakers recognize that the benefits of new technologies in K-12 education are inherently constrained by age-based cohorts, standardized curriculum and all the other hallmarks of what David Tyack and Larry Cuban famously called “the grammar of school.”

That basic paradigm of schooling was designed over a century ago around a different core purpose: to educate some while winnowing out others. It’s akin to a timed, academic obstacle course where learning is structured based on a student’s age. Once a student falls behind, it can be hard to catch back up. 

When technology is applied within this industrial paradigm, schools can operate more efficiently. Electronic gradebooks, smartboards, digital assessments and now AI-generated lessons and student feedback can all make teaching a more sustainable profession. That’s an important end in itself — but it’s not one that will necessarily lead to transformative student outcomes.

What about personalization? Several organizations, including ours, have embedded aspects of AI to support more tailored approaches to learning. But the use of such technology can often conflict with the standardized methods of teaching that are core to the grammar of school.

A fifth-grade math teacher, for example, can use AI-generated lesson plans, quiz generators and grading tools to support teaching grade-level standards. But when student performance in that class spans at least seven grade levels, using AI to support fifth-grade standards supercharges the ranking and sorting that is core to the grammar of school.

A more consequential path would be to redesign math education so each teacher can meet all students where they are and help them accelerate as far as they can with a combination of individual and group work. That’s hard to do in a traditional classroom of 30 academically diverse kids, but AI makes it far more possible. The key barrier is not technology. It’s a century-old paradigm of schooling in which curriculum, teacher training, classroom workflow, assessments, accountability systems and regulations are all oriented around whole-class, age-based instruction. 

How can schools break free from this legacy and shift to student-centered learning? 

The most urgent need is for new and existing organizations to redesign the student experience in ways that take full advantage of AI’s capabilities.

Thousands of organizations are conducting research and development to reimagine how AI will fundamentally change the experience of consumers, passengers, patients, business leaders, employees, athletes and others. But few are doing the same when it comes to teachers and students. Districts are built to run schools, not to redesign them; universities are organized around scholarship and teacher development; and curriculum companies are largely focused on tools that fit within the current paradigm of schooling, which is where the demand is. Absent organizations designing new learning models that use AI and other technologies in ways that fundamentally rethink the student and teacher experience, the grammar of school will remain intact.

But stoking the supply of new learning models won’t be enough. School districts have spent decades grouping students by age, buying textbooks, training teachers on a uniform scope and sequence, and administering standardized tests based on students’ grade levels. Beginning to shift away from that can feel risky, if not impossible. But overcoming the forces of inertia is possible if local leaders and their communities develop and act upon a new vision for learning that is rooted in meeting each student’s unique strengths and needs.  

Finally, policymakers must create the conditions for student-centered learning to emerge. At the federal level, that begins by revamping the assessment and accountability provisions within the Elementary and Secondary Education Act. States also have a key role to play in encouraging schools and districts to embrace student-centered learning.

AI has massive potential to dramatically impact children’s reading abilities, quantitative reasoning skills, understanding of history and the sciences, and more. But unless there’s a broader shift toward student-centered learning, the gap between what schools could be and what they are will only widen.

The Promise of Personalized Learning Never Delivered. Today’s AI Is Different /article/the-promise-of-personalized-learning-never-delivered-todays-ai-is-different/ Thu, 04 May 2023 11:15:00 +0000 /?post_type=article&p=708385 Over the last decade, educators and administrators have often encountered lofty promises of technology revolutionizing learning, only to experience disappointment when reality failed to meet expectations. It’s understandable, then, that educators might view the current excitement around artificial intelligence with a measure of caution: Is this another overhyped fad, or are we on the cusp of a genuine breakthrough?

A new generation of sophisticated systems has emerged in the last year, including OpenAI’s GPT-4. These so-called large-language models employ neural networks trained on massive data sets to generate text that is extremely human-like. By understanding context and analyzing patterns, they can produce relevant, coherent and creative responses to prompts.


Based on my experiences using several of these systems over the past year, I believe that society may be in the early stages of a transformative moment, similar to the introduction of the web browser and the smartphone. These nascent iterations have flaws and limitations, but they provide a glimpse into what might be possible on the very near horizon, where AI assistants liberate educators from mundane and tedious tasks, allowing them to spend more time with students. And this may very well usher in an era of individualized learning, empowering all students to realize their full potential and fostering a more equitable and effective educational experience.

There are four reasons why this generation of AI tools is likely to succeed where other technologies have failed:

  1. Smarter capabilities: These AI systems are now capable of passing a remarkable range of standardized tests, from high school exams to graduate- and professional-level exams spanning many fields. Google’s Med-PaLM performed at an “expert” level on the medical licensing exam, not only correctly answering the questions but also providing a rationale for its responses. The rate of improvement with these systems is astonishing. For example, GPT-4 made significant progress in just four months, going from a failing grade on the bar exam to scoring in the 90th percentile. It scored in the 93rd percentile on the SAT reading and writing test and the 88th on the LSAT, and got a 5 — the top score — on several Advanced Placement exams.
  2. Reasoning engines: AI models like GPT-4, Microsoft’s Bing Chat, and Google’s Bard are advancing beyond simple knowledge repositories. They are developing into sophisticated reasoning engines that can contextualize, infer and deduce information in a manner strikingly similar to human reasoning. While traditional search engines functioned like librarians guiding users toward relevant resources, this new generation of AI tools acts as skilled graduate research assistants. They can be tasked with requests such as conducting literature reviews, analyzing data or text, synthesizing findings and generating content, stories and tailored lesson plans.
  3. Language is the interface: One of the remarkable aspects of these systems is their ability to interpret and respond to natural language commands, eliminating the need to navigate confusing menus or create complicated formulas. These systems also explain concepts in ways people can easily understand using metaphors and analogies that they can relate to. If an answer is too confusing, you can ask it to rephrase the response or provide more examples.
  4. Unprecedented scale: Innovations often catch on slowly, as start-ups must penetrate markets dominated by well-established companies. AI stands in stark contrast to this norm. With tech giants like Google, OpenAI and Microsoft leading the charge, the capabilities of large-language models are not only rapidly scaling, but becoming deeply integrated into a myriad of products, services and emerging companies.

These capabilities are finding their way into the classroom through early experiments that provide a tantalizing sense of what might be possible.

  • Tutoring assistants: The capability of these systems to understand and generate human-like text allows for tutoring that feels genuinely conversational. They can offer explanations, guidance and real-time feedback tailored to each learner’s unique needs and interests. Several organizations are also piloting GPT-4-powered tutors that have been trained on their own datasets.
  • Teaching assistants: Teachers spend hours on routine work, from lesson planning to searching for instructional resources, often at the cost of less time for teaching. As capable reasoning engines, AI can assist teachers by automating many of these tasks — including quickly generating lesson plans, developing worksheets, drafting quizzes and translating content for English learners.
  • Student assistants: AI-based feedback systems have the capacity to offer constructive feedback on student work, including feedback aligned to grading rubrics, which helps students elevate the quality of their work and fine-tune their writing skills. It also provides immediate help when students are stuck on a concept or project.

While these technologies are enormously promising, it is also important to recognize that they have limitations. They still struggle with some math calculations and at times offer inaccurate information. Rather than supplanting teachers’ expertise and judgment, they should be utilized as a supportive co-pilot, enhancing the overall educational experience. Many of these limitations are being addressed through integrations with other services, such as the Wolfram Alpha plugin, for dramatically better math capabilities. Put another way, this is the worst these AI technologies will be. Whatever shortcomings they have now will likely be improved in future releases.
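
As one illustration of what such an integration can look like under the hood, here is a sketch using the OpenAI Python SDK’s tool-calling interface, where the model delegates arithmetic to an external function instead of computing it itself. The evaluate_math tool is a hypothetical stand-in for a real math service, and the model name is a placeholder.

    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    tools = [{
        "type": "function",
        "function": {
            "name": "evaluate_math",  # hypothetical stand-in for an external service
            "description": "Exactly evaluate an arithmetic expression.",
            "parameters": {
                "type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": "What is 1234 * 5678?"}],
        tools=tools,
    )

    # For brevity, assume the model chose to call the tool.
    call = resp.choices[0].message.tool_calls[0]
    expression = json.loads(call.function.arguments)["expression"]
    print(eval(expression, {"__builtins__": {}}))  # demo only; never eval untrusted input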

The unprecedented scale and rapid adoption of generative AI mean that these benefits are not distant possibilities, but realities within reach for students and educators worldwide. By harnessing the power of AI, it is possible to create a future where teaching and learning are not only more effective and equitable, but also deeply personalized, with students empowered to reach their full potential and teachers freed to focus on teaching and fostering meaningful connections with their students.

Opinion: ‘This Changes Everything’: AI Is About to Upend Teaching and Learning /article/this-changes-everything-ai-is-about-to-upend-teaching-and-learning/ Thu, 27 Apr 2023 13:30:00 +0000 /?post_type=article&p=708030 In April 2022, I attended the ASU-GSV Summit, an ed tech conference in San Diego. I’d recently become an official Arizona State University employee, and as I was grabbing coffee, I saw my new boss, university President Michael Crow, speaking on a panel being broadcast on a big screen. At the end of the discussion, the moderator asked Crow what we’d be talking about at the 2030 summit. In his response, Crow referenced a science fiction book by Neal Stephenson, The Diamond Age. I was intrigued.

I’ve since read the book (which is weird but fascinating). The protagonist is a girl named Nell who is a pauper and victim of abuse in a dystopian world. By a stroke of luck, Nell comes to own a device that combines artificial intelligence and real human interaction to teach her all she needs to know to survive and develop a high level of intellectual capacity. The device adjusts the lessons to Nell’s moods and unique needs. Over time, she develops an exceptional vocabulary, critical physical skills (including self-defense) and a knowledge base on par with that of societal elites – which enables her to transcend the misery of her life.


Crow told the conference crowd last year: In 2030, we will have tools like this. In fact, he said, ASU and engineers elsewhere are developing them now. But if we reconvene in 2030 without figuring out how we get those kinds of tools to kids like Nell, we will have failed.

The recent and rapid advances in artificial intelligence have been on my radar for some time, but I came home from last week’s 2023 ASU-GSV conference even more certain that advances in AI via models such as GPT-4 (the latest iteration of ChatGPT) and Bing will soon be used as radically personalized learning tools like Nell’s primer. That future seemed far off in 2022 — but these tools are developing so fast, they’re not just here now; in a matter of weeks or months, they’re going to be your kid’s tutor, your teacher’s assistant and your family’s homework helper.

I attended several conference panels on AI, and one specifically on Khan Academy’s new tutoring program, Khanmigo, which is powered by GPT-4, blew me away. As Sal Khan said when he realized the power of this generation of AI: “This changes everything.” Of course, attendees discussed the safety and security risks and threats of using AI in the classroom. But what struck me was the potential for these sophisticated tools that harness the intelligence of the internet to radically personalize educational content and its delivery to each child. Educators can radically equalize education opportunities if they figure out how to ride this technological revolution effectively.

Khanmigo can do extraordinary tasks. For example, it writes with students, not for them. It gives sophisticated prompts to encourage students to think more deeply about what they’re reading or encountering, and to explain their thinking. It will soon be able to remember students’ individual histories and customize lessons and assessments to their needs and preferences. And that’s just the start. Khan described how one student reading The Great Gatsby conversed in real time with an AI version of Jay Gatsby himself to discuss the book’s imagery and symbolism. Khan said his own daughter invented a character for a story and then asked to speak to her own character — through Khanmigo — to further develop the plot.

Khanmigo — and likely other competing tools to come — also have the potential to revolutionize teaching. Right now, a teacher can use AI to develop a lesson plan, create an assessment customized to each student’s background or interests, and facilitate breakout sessions. This portends a massive shift in the teaching landscape for both K-12 and higher education — and likely even workforce training. By some accounts, the use of AI in colleges and universities is “beyond the point of no return.” A professor from the Wharton School of Business at the conference said he actually requires his students to use AI to write their papers, but they must turn them in with a “use guide” that demonstrates how they utilized the tool and cited it appropriately. He warns students that AI will lie and that they are responsible for ensuring accuracy. The professor said he no longer accepts papers that are “less than good” because, with the aid of AI, standards are now higher for everyone.

All this feels like science fiction becoming reality, but it is just the start. You have probably heard about how GPT-4 has made shocking advances compared to the previous generation of AI. Watch how it performs on the AP Biology exam or the bar exam. Watch how it handles nearly any task it is given. Watch how it writes original and pretty good poetry or essays. Kids are indeed using this tool to write their final papers this year. But the pace of development is so rapid that one panelist predicted that in a year, AI will be making its own scientific discoveries — without direction from a human scientist. The implications for the types of jobs that will disappear and emerge because of these developments are difficult to predict, but rapid change and disruption will almost certainly be the new normal. This is just the beginning. Buckle your seat belts.

To be sure, the risks are real. Questions about student privacy, safety and security are serious. Preventing plagiarism, which is virtually undetectable with GPT-4, is on every teacher’s mind. Khan is currently working with school districts to set up guardrails and help students, teachers and parents navigate these very real concerns. But a common response — to shut down or forbid the use of AI in schools — is as shortsighted and fruitless as trying to stop an avalanche by building a snowbank. This technology is unstoppable. Educators and district, state and federal leaders need to start planning now for how to maximize the opportunities for students and families and educators while minimizing the risks.

A host of policy and research questions need to be explored: What kind of guardrails are available and which are most effective? Which tools and pedagogical approaches best accelerate learning? In what ways can AI support innovations that truly move the needle for teaching and learning? Education policy leaders, ed tech developers and researchers must begin to address these issues. Quickly.

I believe AI can make the teaching profession much more effective and sustainable. It can also put an end to the ridiculous notion that one teacher must be wholly responsible for addressing every student’s learning level and individual needs. AI — combined with novel staffing models like team teaching and specialized roles being piloted in districts like Mesa, Arizona, by my colleagues at ASU — could finally allow teachers to start working in subjects they’re most suited to. Instead of fretting about the lack of high-dosage, daily tutoring, which is the best way to address learning gaps, districts and families could see an army of AI tutors available for all students across the U.S. Parents who have been frustrated with the lack of attention to their children’s needs could set up an AI tutor at home.

But to go back to Michael Crow’s message: If technology and education leaders develop these tools but do not ensure they reach the students most in need, they will have failed. The field must begin to 1) track what is happening in schools and living rooms across the country around AI and learning; 2) build a policy infrastructure and research agenda to develop and enforce safeguards and move knowledge in real time; and 3) dream big about realizing a future of learning with the aid of AI.

As CRPE’s 25th-anniversary essays predicted in 2018, there are many things those planning for the future of education cannot know with the rise of AI: the effect of rapid climate change, natural disasters and migrations; shifting geopolitical forces; fast-rising inequalities; and racial injustices. It is clear, however, that education must change to adapt to these new realities. This must happen quickly and well if educators are to adeptly combine the positive forces of AI with powers that only the human mind possesses. To make this shift, schools will need help to transition to a more nimble and resilient system of learning pathways for students. CRPE has been working on these questions for five years, and we are now launching a series of research studies, grant investments and convenings that bring together educators with technology developers to help navigate the path forward.

I hope that when people reconvene at ASU-GSV in 2030, AI will have been utilized so effectively to reimagine education that attendees can say they have radically customized learning for all kids like Nell. Despite the risks, using AI in classrooms could help eliminate poverty, reinvigorate the global economy, stem climate change and, potentially, help us humans co-exist more peacefully. The time is now to envision the future and begin taking steps to get there.

Opinion: ChatGPT Is Here to Stay. Testing & Curriculum Must Adapt for Students to Succeed /article/chatgpt-is-here-to-stay-testing-curriculum-must-adapt-for-students-to-succeed/ Mon, 17 Apr 2023 11:15:00 +0000 /?post_type=article&p=707465 As a former teacher, I have seen the power of technology to enhance and transform the way educators teach and learn. From interactive whiteboards to educational apps, technology has the potential to revolutionize education and better prepare students for the future. That’s why the decision by some school districts to ban ChatGPT — which generates human-like responses to complex questions using artificial intelligence — is deeply concerning. It risks widening the gap between those who can harness the power of this technology and those who cannot, ultimately harming students’ education and career prospects.

In a recent blog post titled “The Age of AI Has Begun,” Bill Gates identified the technology behind ChatGPT as one of the two most groundbreaking technologies he has witnessed in his lifetime. Gates believes it will fundamentally reorient entire industries. Researchers at OpenAI, the company that created ChatGPT, estimate that the technology has the potential to affect most occupations and that four-fifths of American workers could see their jobs affected by chatbots in some way. Among the most vulnerable: translators, writers, public relations representatives, accountants, mathematicians, blockchain engineers and journalists.

Already, effective use of ChatGPT is becoming a highly valued skill, impacting workforce demands. A San Francisco-based company is offering salaries of up to $335,000 for engineers skilled in writing prompts — the questions that generate complex responses using this technology. A Japanese company is evaluating employees on their ChatGPT proficiency and requiring them to apply it in their work. McKinsey & Company has estimated that between 400 million and 800 million jobs could be lost to automated technology by 2030 — and that was in 2017, before ChatGPT came on the scene.


Employees are perceiving a significant shift in their workplaces, and many are seeking training on how to effectively use AI tools such as ChatGPT to perform their jobs. This growing demand for these skills underscores the need for schools to prepare students — especially those in high school — to meet these evolving demands.

That’s why banning ChatGPT is a mistake. It would be like prohibiting students from learning how to use laptops and calculators. To fully utilize ChatGPT’s capabilities, users must create effective prompts, review the output, refine their requests, provide feedback to the chatbot and then have it integrate their ideas to produce the desired insight or product. Students must employ essential skills such as reason, logic, writing, reading comprehension, critical thinking, creativity and subject knowledge across various topics to engage a generative AI technology effectively. They must also learn to recognize its limitations and propensities for error.
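
The create-review-refine loop described above maps directly onto a multi-turn chat exchange. The sketch below assumes the OpenAI Python SDK; the model name and both prompts are placeholders, and the point is the shape of the loop rather than any particular tool.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    messages = [{"role": "user",
                 "content": "Draft a one-paragraph summary of the causes "
                            "of the Dust Bowl for a 9th-grade reader."}]

    # Step 1: create a prompt and get a first draft.
    draft = client.chat.completions.create(model="gpt-4", messages=messages)
    messages.append({"role": "assistant",
                     "content": draft.choices[0].message.content})

    # Step 2: review the output, then refine the request with feedback.
    messages.append({"role": "user",
                     "content": "Good start, but add one sentence on how "
                                "farming practices made the drought worse."})
    revised = client.chat.completions.create(model="gpt-4", messages=messages)
    print(revised.choices[0].message.content)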

Banning ChatGPT in classrooms risks creating a division between students who learn how to utilize its capabilities and those who are left behind.

Preparing students for the demands of the 21st century will take a comprehensive approach. To achieve this, the federal government can require high schools to assess AI proficiency within their existing English Language Arts and math exams. This approach can motivate states to redesign their K-12 curricular standards, which influence what students learn daily. State agencies must lead the way in integrating generative AI technologies into their K-12 standards, investing in educator training and developing effective curriculum materials. Washington should incentivize and fund these efforts.

Businesses must recognize the importance of preparing their future workforce and encourage state education officials to incorporate technologies like ChatGPT into learning standards. Philanthropic organizations can partner with school districts to create pilot programs demonstrating successful AI tool integration, inspiring state agencies to prioritize and fund this work.

Advocacy is also crucial to the success of these efforts. Parents must urge their children’s schools to teach AI technology, and teachers should insist on adequate training to become proficient in them. Collaboration among educators and families is essential for students to acquire the necessary tools and skills to thrive in an AI-driven world.

Whether schools embrace it or not, generative AI technology will transform how students access information and learn. Other countries are paying attention: some are already introducing AI-driven support systems for students and teachers. The United Arab Emirates aims to expand AI training to one-third of its annual STEM graduates, and the United Kingdom — in its effort to become a leading global AI superpower — has set a goal of funding a new generation of AI-focused Ph.D.s over five years.

In this new, AI-driven world, success will belong to those who possess the skills to navigate it effectively. To equip students for an ever-changing technological landscape, K-12 and higher education must adopt generative AI technologies like ChatGPT. In doing so, they can foster a well-educated and skilled workforce, encourage innovation and build a brighter future for everyone.

Opinion: ChatGPT Could Be an Effective and Affordable Tutor /article/chatgpt-could-be-an-effective-and-affordable-tutor/ Sat, 04 Mar 2023 13:01:00 +0000 /?post_type=article&p=705355 This article was originally published in The Conversation.

Imagine a private tutor that never gets tired, has access to massive amounts of data and is free for everyone. In 1966, Stanford philosophy professor Patrick Suppes imagined just that when he predicted: One day, computer technology would evolve so that “millions of schoolchildren” would have access to a personal tutor. He said the conditions would be just like the young Alexander the Great being tutored by Aristotle.

Now, ChatGPT, a new artificial intelligence-powered chatbot with advanced conversational abilities, may have the capability to become such a tutor. ChatGPT has collected huge amounts of data on a wide range of topics and can discuss them in conversational language. As a researcher who studies educational technology, I think ChatGPT can be used to help students excel academically. However, in its current form, ChatGPT shows an inability to handle basic teaching tasks reliably, let alone tutoring.

Philosophy, engineering and artificial intelligence scholars envisioned using the computer as a tutor well before the internet existed. I believe lessons from developing those early tutoring systems can offer insight into how students and educators can best make use of ChatGPT as a tutor in the future.


Computers as tutors

Suppes – the Stanford philosophy professor – was a pioneer of a field called “computer-assisted instruction.” He developed some of the earliest educational software. Students who used that software learned more quickly than those who didn’t use the program. I worked for Suppes in developing software and other online programs from 2004 to 2012.

Since then, the internet, social networks and computer hardware have advanced enormously. And today, the abilities of ChatGPT to write essays, answer philosophical questions and solve computer coding problems may finally achieve Suppes’ goal of truly personalized tutoring via computer.

Early versions of personalized learning

In 1972, a new personalized learning system called PLATO, for Programmed Logic for Automated Teaching Operations, made its debut. It was the first generalized computer-assisted instruction system.

Created by Don Bitzer, a professor of electrical engineering at the University of Illinois, PLATO allowed up to 1,000 students to be logged onto a mainframe computer simultaneously. Each student could complete different online courses in foreign languages, music, math and many other subjects while receiving feedback from the computer on their work.

Students using PLATO learned course material in less time. And most students preferred this mode of instruction over sitting in a large lecture class. Yet it proved too expensive to be used by many colleges and universities. Each computer terminal was marketed at over US$8,000 – about $58,000 today – and schools were charged additional fees every time a student used the system. Still, PLATO’s success with students inspired a number of companies to create software that provided a similar kind of tutoring, including the College Curriculum Corporation, which was co-founded by Suppes.

Popular personal computer brands, such as Apple and Commodore, promoted educational software as a reason for families to invest in a home computer.

By 1985, researchers at Carnegie Mellon University were designing software using advances in artificial intelligence and cognitive psychology. They claimed that the current technology had advanced to a level that enabled computer systems to be designed to act as effective tutors. However, even though there were over 10,000 pieces of educational software available at the time, much of it was of fairly low quality and did not provide real tutoring.

Although the more advanced designs of the educational software developed at Carnegie Mellon enabled students to learn significantly more than students in traditional classrooms, they were not widely used in schools.

In the 1980s and 1990s, a school would need a sizable number of expensive, high-powered computer workstations for students to use an intelligent tutor. Today, the computers are much more powerful and much less expensive.

And early intelligent tutors were used primarily in math, science and technology courses, which limited their appeal. Students could not ask the software programs a question. However, students could get feedback on answers to quizzes and tests.

In 2001, the next generation of intelligent tutoring systems was able to hold conversations with students. These systems, early chatbots, used advances in natural language processing to have dialogues with students about a range of subjects. Each system was designed specifically for one subject, such as physics. Adding another subject, like English, would require its own specialized tutoring system.

The next few years brought three developments, which enabled a forward leap in achieving the dream of computerized tutoring. One was the availability of broadband internet, which was faster than the dial-up connections of the 1990s. The second was the lower cost of more powerful computers. The third was investment by technology companies and government funding for university research in artificial intelligence, which led to the development of what we now call chatbots.

The first chatbots

By 2007, early AI chatbots provided tutoring by talking students through their answers to questions. Research showed these chatbots improved student learning. Even so, using chatbot technology to provide tutoring was primarily experimental.

During the 2010s, social networks and advances in artificial intelligence dramatically changed the way people interact with technology. Chatbots are integrated into many parts of our lives and are no longer experimental. People now talk to their phones, to their cars and their home appliances. Many people have virtual assistants at home. They can ask Alexa, Siri or Google for directions or the answers to simple homework questions.

Today, ChatGPT has advanced abilities that enable more complex conversations and can provide individualized tutoring much like what Suppes predicted in 1966.

I asked ChatGPT why it would be a good tutor, and it responded: “I am a good tutor for a student because I am able to understand and respond to a wide range of questions and prompts, and I can provide detailed and accurate information on a wide range of subjects. Additionally, I am able to communicate this information in a clear and easy-to-understand manner, making it accessible for students of all levels of understanding. Because I am a machine-learning model, I can also provide a large amount of information quickly and efficiently, which can be especially helpful for students who are trying to learn a lot of new material in a short amount of time.”

By contrast, Siri said, “Hmm … I don’t have an answer for that. Is there something else I can help you with?”

Recent research in using chatbots in college courses shows that how chatbots are used is important in encouraging students to learn more deeply. For instance, AI chatbots enabled significant improvements in learning in a graduate course. These learning gains occurred when these chatbots asked students to build on an existing argument or to provide more information about a claim they had made. In this case, the chatbot asked the student a question, rather than vice versa.
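
That design choice, where the chatbot questions the student rather than the reverse, can be expressed as little more than a system prompt. A minimal sketch, assuming the OpenAI Python SDK, with illustrative prompt wording and a placeholder model name:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    SOCRATIC_PROMPT = (
        "You are a tutor. Never state the answer outright. After each "
        "student message, ask one question that pushes the student to "
        "build on their argument or justify a claim they just made."
    )

    reply = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": SOCRATIC_PROMPT},
            {"role": "user",
             "content": "I think the New Deal ended the Great Depression."},
        ],
    )
    print(reply.choices[0].message.content)  # e.g., a probing follow-up question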

Many educators are wary of ChatGPT since it can be used to cheat on assignments and papers. Others are worried about it generating inaccurate answers or spreading misinformation.

Yet the history and research of intelligent tutors show that learning to harness the power of chatbots like ChatGPT can make deeper, individualized learning available to almost anyone. For example, if people use ChatGPT to ask students questions that prompt them to revise or explain their work, students are likely to learn more deeply. Since ChatGPT has access to far more knowledge than Aristotle ever did, it has great potential for providing tutoring to students to help them learn more than they would otherwise.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Iowa’s State Universities Not Shying Away From AI Technology in Classrooms /article/iowas-state-universities-not-shying-away-from-ai-technology-in-classrooms/ Fri, 03 Mar 2023 14:30:00 +0000 /?post_type=article&p=705333 This article was originally published in Iowa Capital Dispatch.

While educators around the country are wrestling with ethical concerns posed by new artificial intelligence chatbots, Iowa’s public universities are looking to use the advancements in the classroom.

ChatGPT is a natural language processing artificial intelligence (AI) technology tool, developed by OpenAI, that generates human-like conversations. The program and other bots like it are currently expanding, although their software is not yet perfect. They do, however, offer unique experiences to students, Iowa educators say.

AI has been around for decades, but the recent developments of more advanced AI have created new questions for higher education.


Carl Follmer, University of Iowa instructor and interim director of the Frank Business Communications Center, has been aware of ChatGPT and its potential impacts on the Tippie College of Business since November 2022. After learning about the software for months, he said it is being integrated into classrooms this spring.

“My overall interpretation of ChatGPT and AI chatbots is that they seem helpful in terms of generating text, but they’re lousy in terms of writing for humans,” he said. “We’re going to be teaching our students how to harness ChatGPT as a tool to create raw language and then adapt it, optimize it, and infuse it with content that humans need to get the meaning out of it.”

He said the best way to use this emerging artificial intelligence is to get the bare bones of business communications from the site and then edit the text. The only way to use ChatGPT successfully is to optimize its potential, Follmer said.

Iowa State University is also embracing the new technology while remaining wary of its shortfalls. Associate Provost Ann Marie VanDerZanden said it’s important at an institution of science and technology to help students learn about artificial intelligence and how to use it responsibly in whatever form it comes in.

“We’re really diving in to understand what are the benefits of (AI) from a developmental writing standpoint and what are the limitations,” VanDerZanden said. “It’s something that will continue to grow and faculty have started to think about how they can leverage tools like this to help their students learn and expand in their disciplinary content.”

Concerns about academic integrity

The University of Northern Iowa’s Dean’s Council met on Feb. 20 to discuss chatbots and the growth of AI. Pete Moris, the university’s director of university relations, said the dialogue about new opportunities for ChatGPT is present on campus.

“We are continuing to find ways to embrace this type of technology at UNI, while maintaining our high standards of academic integrity,” he said in an email.

VanDerZanden said AI chatbots are on her university’s radar regarding academic integrity. Earlier this year, ISU convened a task force to consider where ChatGPT and other generators can and cannot be used in the classroom space.

While there are some concerns, she said there is also space for students to learn while the AI is still getting better.

“Cheating has been around forever,” she said. “It isn’t a result of this tool becoming widely available … The other part, I think, that’s important to consider is the tool is only at a certain level of effectiveness. It may generate content, but there’s no guarantee that content is correct. And that’s really the space within higher ed where we can do the work and help our students understand it’s a tool that isn’t perfect.”

While faculty members continue to learn new ways to use the technology at ISU, VanDerZanden said some of the responsibility falls on the students to ask questions about AI and to be honest.

“From the student standpoint, you don’t know how the program will be implemented in any given class,” she said. “There’s some responsibility on the student to make sure they are clear on the uses and to ask those clarifying questions if they’re uncertain.”

VanDerZanden said ISU is focused on showing students the opportunities and the drawbacks of the technology while ensuring academic integrity is upheld.

UNI is also working to clarify rules regarding ChatGPT and chatbots.

“There has been a working group composed of members of our Information Technology team and members of our faculty working to develop language and guidance for our campus community relative to these topics,” Moris said. “… This is an evolving topic and we hope to have more information to share in the near future regarding AI and ChatGPT at UNI.”

UI also has some concerns about dishonesty in course work and has issued guidance for teaching with artificial intelligence tools. Follmer said it is a concern for him and his colleagues, but academic dishonesty is broader than some may understand.

“Cheaters are going to cheat, that’s the realist aspect,” he said. “And we’ll never be able to stop it completely unless we get everyone in a room with no screens to test them using pens and paper. Even if ChatGPT goes behind a paywall, there will be others like it. They may not be as good as that particular program, but maybe students are okay with that. But, as with all forms of cheating, we just don’t want to make it too easy.”

AI-generated work is recognizable

UI Professor Patrick Fan, the Henry B. Tippie Excellence Chair in Business Analytics, is an expert in AI. He said it’s somewhat easy to know what AI-generated work looks like.

“I think the cognitive capabilities of the model (ChatGPT) are super, super powerful, but a lot of the time you can distinguish between the outputs produced by the AI versus a human being,” he said. “… There are some tools out there, like ChatZero, that instructors can use to check if what is submitted was done by robots or by a person. It may not detect all of them, but it shows most of them. As you have more experience with it, you will notice patterns and once you are trained on that you can distinguish it more easily.”

ChatZero (more widely known as GPTZero) is a tool developed by a Princeton University student to detect AI-written texts; OpenAI has released a similar detection tool. According to news reports, Turnitin is also looking to create AI-checking software. Turnitin is an online tool used by many education systems to detect plagiarism.
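
One common detection idea, though not necessarily what these products do internally, is to score text by its perplexity under a language model: machine-written text tends to look less surprising to the model than human writing. A rough sketch using the Hugging Face transformers library and GPT-2:

    # Requires: pip install torch transformers
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # Mean cross-entropy of the text under GPT-2, exponentiated.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()

    # Lower perplexity is only weak evidence of machine-generated text;
    # real detectors combine it with other signals and careful calibration.
    print(perplexity("The cat sat on the mat."))
    print(perplexity("Zebra quantum mat sat flying."))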

Potential positives of advancing AI in the classroom

An unintended and positive consequence of the system, Follmer said, is that it pushes professors to come up with more creative assignments and develop stronger rubrics. He said faculty members need to be nimble and curious as they deal with the repercussions of technological advancements like this.

“At Tippie, we’re teaching with the future in mind, so we’re going to need to have really specific prompts and rubrics that emphasize other things than just structure,” he said. “Elements like tone and audience analysis are needed. I don’t know if that’s going to last forever, but that buys us enough time to be able to figure out how to adapt to the next change. Something as simple as writing a five-paragraph essay about the ethics of euthanasia? That’s pretty much dead.”

Fan said the focus now, for him and other faculty members who are introducing these technologies to their students, is to remain curious and find ways to showcase these new opportunities while being cautious.

“We have to be open minded and we have to be more receptive to this technology,” Fan said. “But we also need to be cautious about potential bias, hatred, or risk produced in (AI-generated) content. We need to do the due diligence to know the limitations and criticize the tools as they are improving. Knowing what they’re capable of will make them a positive classroom assistant.”

Iowa Capital Dispatch is part of States Newsroom, a network of news bureaus supported by grants and a coalition of donors as a 501c(3) public charity. Iowa Capital Dispatch maintains editorial independence. Contact Editor Kathie Obradovich for questions: info@iowacapitaldispatch.com. Follow Iowa Capital Dispatch on Facebook and Twitter.
