The book report is now a thing of the past. Take-home tests and essays are becoming obsolete.
Artificial intelligence is now so prevalent, high school and college educators say, that to assign writing outside of the classroom is like asking students to cheat.
"The cheating is off the charts. It's the worst I've seen in my entire career," says Casey Cuny, who has taught English for 23 years. Educators no longer wonder if students will outsource schoolwork to AI chatbots. "Anything you send home, you have to assume is being AI'ed."
The question now is how schools can adapt, because many of the teaching and assessment tools used for generations are no longer effective. As AI rapidly improves and becomes more entwined with daily life, it transforms how students learn and study and how teachers teach. It also creates new confusion over what constitutes academic dishonesty.

Casey Cuny, an English teacher at Valencia High School, works on his computer Aug. 27 as he prepares for class in Santa Clarita, Calif.
"We have to ask ourselves, what is cheating?" says Cuny, a 2024 recipient of California's Teacher of the Year award. "Because I think the lines are getting blurred."
Cuny's students at Valencia High School in southern California now do most writing in class. He monitors student laptop screens from his desktop, using software that lets him "lock down" their screens or block access to certain sites. He's also integrating AI into his lessons and teaching students how to use AI as a study aid "to get kids learning with AI instead of cheating with AI."
In rural Oregon, high school teacher Kelly Gibson made a similar shift to in-class writing. She incorporates more verbal assessments to have students talk through their understanding of assigned reading.
"I used to give a writing prompt and say, 'In two weeks, I want a five-paragraph essay,'" she said. "These days, I can't do that. That's almost begging teenagers to cheat."
Take, for example, a once typical high school English assignment: Write an essay that explains the relevance of social class in "The Great Gatsby." Many students say their first instinct is now to ask ChatGPT for help "brainstorming." Within seconds, ChatGPT yields a list of essay ideas, plus examples and quotes to back them up. The chatbot ends by asking if it can do more: "Would you like help writing any part of the essay? I can help you draft an introduction or outline a paragraph!"

Timothy Rimke reads Aug. 27 during Casey Cuny's English class.
Students uncertain when AI use is out of bounds
Students say they often turn to AI with good intentions for things like research, editing or brainstorming. However, AI offers unprecedented temptation, and it's sometimes hard to know where to draw the line.
College sophomore Lily Brown, a psychology major at an East Coast liberal arts school, relies on ChatGPT to help outline essays because she struggles putting the pieces together herself. ChatGPT also helped her through a freshman philosophy class, where assigned reading "felt like a different language" until she read AI summaries of the texts.
"Sometimes I feel bad using ChatGPT to summarize reading, because I wonder, is this cheating? Is helping me form outlines cheating?" she said. "If I write an essay in my own words and ask how to improve it, or when it starts to edit my essay, is that cheating?"
Her class syllabi say things like: "Don't use AI to write essays and to form thoughts," she says, but that leaves a lot of gray area. Students say they often shy away from asking teachers for clarity because admitting to any AI use could flag them as a cheater.
Schools tend to leave AI policies to teachers, which often means rules vary widely in the same school. Some educators, for example, welcome the use of Grammarly, an AI-powered writing assistant, to check grammar. Others forbid it, noting the tool also offers to rewrite sentences.
"Whether you can use AI or not depends on each classroom. That can get confusing," Valencia 11th grader Jolie Lahey said. She credits Cuny with teaching her sophomore English class a variety of AI skills such as how to upload study guides to ChatGPT and have the chatbot quiz them, and then explain problems they got wrong.
This year, her teachers have strict "No AI" policies. "It's such a helpful tool. And if we're not allowed to use it, that just doesn't make sense," Lahey says. "It feels outdated."

A screen displays guidelines for using artificial intelligence above a portrait of Ernest Hemingway in Casey Cuny's classroom.
Schools introduce guidelines
Many schools initially banned use of AI after ChatGPT launched in late 2022. Since then, views on the role of artificial intelligence in education have shifted dramatically. The term "AI literacy" became a buzzword of the back-to-school season, with a focus on how to balance the strengths of AI with its risks and challenges.
Over the summer, several colleges and universities convened their AI task forces to update guidance or provide faculty with new instructions.
The University of California, Berkeley, emailed all faculty new guidance that instructs them to "include a clear statement on their syllabus about course expectations" on AI use and offered three sample statements — for courses that require AI, ban AI in and out of class, or allow some AI use.
"In the absence of such a statement, students may be more likely to use these technologies inappropriately," the email said, stressing that AI is "creating new confusion about what might constitute legitimate methods for completing student work."
Carnegie Mellon University has seen a huge uptick in academic responsibility violations involving AI, but students often aren't aware they've done anything wrong, says Rebekah Fitzsimmons, chair of the AI faculty advising committee at the university's Heinz College of Information Systems and Public Policy.
For example, one student who is learning English wrote an assignment in his native language and used DeepL, an AI-powered translation tool, to translate his work to English. He didn't realize the platform also altered his language, which was flagged by an AI detector.
Enforcing academic integrity policies has become more complicated, since use of AI is hard to spot and even harder to prove, Fitzsimmons said. Faculty are allowed flexibility when they believe a student unintentionally crossed a line, but are now more hesitant to point out violations because they don't want to accuse students unfairly. Students worry that if they are falsely accused, there is no way to prove their innocence.
Over the summer, Fitzsimmons helped draft detailed new guidelines for students and faculty that strive to create more clarity. Faculty have been told a blanket ban on AI "is not a viable policy" unless instructors make changes to the way they teach and assess students. A lot of faculty are doing away with take-home exams. Some have returned to pen and paper tests in class, she said, and others have moved to "flipped classrooms," where homework is done in class.
Emily DeJeu, who teaches communication courses at Carnegie Mellon's business school, has eliminated writing assignments as homework and replaced them with in-class quizzes done on laptops in "a lockdown browser" that blocks students from leaving the quiz screen.
"To expect an 18-year-old to exercise great discipline is unreasonable," DeJeu said. "That's why it's up to instructors to put up guardrails."
5 ways companies are incorporating AI ethics

As more companies adopt generative artificial intelligence models, AI ethics is becoming increasingly important. Ethical guidelines to ensure the transparent, fair, and safe use of AI are evolving across industries, albeit slowly when compared to the fast-moving technology.
But thorny questions about equity and ethics may force companies to tap the brakes on development if they want to maintain consumer trust and buy-in.
A KPMG survey found that about half of consumers think there is not sufficient regulation of generative AI right now. The lack of oversight tracks with limited trust that institutions—particularly tech companies and the federal government—will ethically develop and implement AI, according to KPMG.
Within the tech industry, ethical initiatives have been set back by a wave of layoffs, according to an article presented at the 2023 ACM Conference on Fairness, Accountability, and Transparency. Layoffs at major corporations, including Amazon's streaming platform Twitch, Microsoft, Google, and X, hit AI ethics teams hard, leaving a vacuum.
While nearly 3 in 4 consumers say they trust organizations using GenAI in daily operations, confidence in AI varies between industries and functions. Just over half of consumers trust AI to deliver educational resources and personalized recommendations, compared to less than a third who trust it for investment advice and self-driving cars. Consumers are open to AI-driven restaurant recommendations, but not, it seems, with their money or their lives.
Clear concerns persist around the broader use of a technology that has elevated scams and deepfakes to a new level. The KPMG survey found that the biggest consumer concerns are the spread of misinformation, fake news, and biased content, as well as the proliferation of more sophisticated phishing scams and cybersecurity breaches. As AI grows more sophisticated, these concerns are likely to be amplified as more people may potentially be negatively affected—making ethical frameworks for approaching AI all the more essential.
That puts the onus to set ethical guardrails upon companies and lawmakers. In May 2024, Colorado became the first state to introduce comprehensive AI legislation, with provisions for consumer protection and accountability from companies and developers introducing AI systems used in education, financial services, and other critical, high-risk industries.
As other states evaluate similar legislation for consumer and employee protections, companies especially possess the in-the-weeds insight to address high-risk situations specific to their businesses. While consumers have set a high bar for companies' responsible use of AI, the KPMG report also found that organizations can take concrete steps to garner and maintain public trust: education, clear communication, and human oversight to catch errors, biases, or ethical concerns.
The reality is that the tension between proceeding cautiously to address ethical concerns and moving full speed ahead to capitalize on the competitive advantages of AI will continue to play out in the coming years.
This analysis of current events identified five ways companies are ethically incorporating artificial intelligence in the workplace.
Actively supporting a culture of ethical decision-making

AI initiatives within the financial services industry can speed up innovation, but companies need to take care in protecting the financial system and customer information from criminals. To that end, JPMorgan Chase has dedicated resources to responsible AI, including an ethics team that works on the company's AI initiatives. The company ranks top in an industry index of banks' AI readiness, including a top ranking for transparency in the responsible use of AI.
Development of risk assessment frameworks

The National Institute of Standards and Technology has developed an AI Risk Management Framework that helps companies better plan and grow their AI initiatives. The approach supports companies in identifying the risks posed by AI, defining and measuring ethical activity, and implementing AI systems with fairness, reliability, and transparency. The Vatican is even getting in on the action—it collaborated with the Markkula Center for Applied Ethics at Santa Clara University, a Catholic college in Silicon Valley, to produce a handbook for companies to navigate AI technologies ethically.
Specialized training in responsible AI usage

Amazon Web Services has developed many tools and guides to help its employees think and act ethically as they develop AI applications. One of these, a YouTube series produced by AWS Machine Learning University, serves as an introductory course that covers fairness criteria and methods for mitigating bias. The company's SageMaker Clarify tool helps developers detect bias in AI model predictions.
Communication of AI mission and values

Companies that develop a mission statement around their AI practices clearly communicate their values and priorities to employees, customers, and other company stakeholders. Examples include Dell Technologies' and IBM's published AI principles, which clarify their approach to AI application development and implementation, publicly setting guiding principles such as "respecting cultural norms, furthering social equality, and ensuring environmental sustainability."
Implementing an AI ethics board

Companies can create AI ethics boards to help them find and fix the ethical risks around AI tools, particularly systems that produce biased output because they were trained with biased or discriminatory data. SAP has had an AI Ethics Advisory Panel since 2018; it works on current ethical issues and looks ahead to identify potential future problems and solutions. Northeastern University has established an external AI ethics advisory board to work with companies that prefer not to create their own.
Story editing by Jeff Inglis. Additional editing by Alizah Salario. Copy editing by Paris Close. Photo selection by Clarese Moller.