At this school, the computer science class now includes critiquing chatbots

Marisa Shuman’s computer class at Young Women’s Leadership School in the Bronx began as usual on a recent January morning.
Just after 11:30 a.m., energetic 11th and 12th graders burst into the classroom, sat at communal study tables and pulled out their laptops. Then they turned to the front of the room, looking at a whiteboard where Ms. Shuman had posted a question about wearable technology, the topic of that day’s class.
For the first time in her decade-long teaching career, Ms. Shuman had not written any lesson plans. She had generated the course material using ChatGPT, a new chatbot that relies on artificial intelligence to provide written answers to questions in plain prose. Ms. Shuman was using the algorithm-generated lesson to examine the potential usefulness and pitfalls of the chatbot with her students.
“I don’t care if you’re learning anything about wearable technology today,” Ms. Shuman told her students. “We are evaluating ChatGPT. Your goal is to identify whether the lesson is effective or ineffective.”
Across the United States, universities and school districts are scrambling to get a handle on new chatbots capable of generating human-like text and images. But while many are rushing to ban ChatGPT to prevent its use as a cheating aid, teachers like Ms. Shuman are leveraging the innovations to spur more critical thinking in the classroom. They encourage their students to question the hype around rapidly evolving artificial intelligence tools and to consider the potential side effects of the technologies.
The goal, according to these educators, is to train the next generation of technology creators and consumers in “critical computing”: an analytical approach in which understanding how to critique computer algorithms is as important as, if not more important than, knowing how to program them.
New York City’s public schools, the nation’s largest district, serving some 900,000 students, are training a cohort of computer science teachers to help their students identify AI biases and potential risks. Lessons include discussions of flawed facial recognition algorithms that can be much more accurate at identifying white faces than darker-skinned ones.
In Illinois, Florida, New York and Virginia, some middle school science and humanities teachers are using an AI literacy program developed by researchers at the Scheller Teacher Education Program at the Massachusetts Institute of Technology. One lesson asks students to think about the ethics of powerful AI systems, known as “generative adversarial networks,” that can be used to produce fake media content, like realistic videos in which well-known politicians say things they’ve never said before.
With the proliferation of generative AI technologies, educators and researchers say that understanding these computer algorithms is a crucial skill students will need to navigate daily life and participate in civic life.
“It’s important for students to know how AI works, because their data is being harvested and their user activity is being used to train these tools,” said Kate Moore, an education researcher at MIT who helped create the AI lessons for schools. “Decisions are being made about young people using AI, whether they know it or not.”
To observe how some educators are encouraging their students to scrutinize AI technologies, I recently spent two days visiting classrooms at Young Women’s Leadership School in the Bronx, a public girls’ middle and high school that is at the forefront of this trend.
The huge beige brick school specializes in math, science and technology. It serves nearly 550 students, mostly Latinx or Black.
This is by no means a typical public school. Teachers are encouraged to help their students become, as the school’s website puts it, “innovative” young women with the skills to complete college and “influence public attitudes, policies and laws to create a more socially just society.” The school also has an enviable 98% four-year graduation rate, significantly higher than the average for New York high schools.
One morning in January, about 30 9th and 10th graders, many wearing navy sweatshirts and gray pants, rushed into a class called Software Engineering 1. The hands-on course introduces students to coding, computational problem solving and the social repercussions of technological innovations.
It’s one of several computer science courses at the school that ask students to think about how popular computer algorithms — often developed by tech company teams made up mostly of white and Asian men — can have disparate impacts on groups such as immigrants and low-income communities. The topic for the morning: face-matching systems that may struggle to recognize darker-skinned faces, such as those of some students in the room and their families.
Standing in front of her class, Abby Hahn, the computer science teacher, knew her students might be shocked by the subject. Faulty facial recognition technology has helped lead to wrongful arrests of Black men.
So Ms. Hahn alerted her students that the class would be discussing sensitive topics like racism and sexism. Then she played a YouTube video, created in 2018 by Joy Buolamwini, a computer scientist, showing how some popular facial analysis systems mistakenly identified iconic Black women as men.
As the class watched the video, some students gasped. Oprah Winfrey “appears to be male,” Amazon’s technology said with 76.5% confidence, according to the video. Other portions of the video said Microsoft’s system had mistaken Michelle Obama for “a young man in a black shirt,” and IBM’s system had identified Serena Williams as “male” with 89% confidence.
(Microsoft and Amazon later announced improvements in the accuracy of their systems, and IBM stopped selling such tools. Amazon said it was committed to continually improving its facial analysis technology through customer feedback and collaboration with researchers, and Microsoft and IBM said they were committed to the responsible development of AI.)
“I’m shocked at how women of color are seen as men, even though they don’t look like men at all,” said Nadia Zadine, a 14-year-old student. “Does Joe Biden know?”
The goal of the AI bias lesson, Ms. Hahn said, was to show the student programmers that computer algorithms can be flawed, just like cars and other human-designed products, and to encourage them to challenge problematic technologies.
“You are the next generation,” Ms. Hahn told the young women at the end of the class period. “When you are in the world, are you going to let this happen?”
“No!” replied a chorus of students.
A few doors down the hall, in a colorful classroom adorned with handmade paper snowflakes and origami cranes, Ms. Shuman was preparing to teach a more advanced programming course, Software Engineering 3, which focuses on creative computing like game design and art. Earlier that week, her student coders had discussed how new AI-powered systems like ChatGPT can analyze vast stores of information and then produce human-like essays and images in response to short prompts.
As part of the lesson, the 11th and 12th graders read news articles about how ChatGPT could be both useful and error-prone. They also read social media posts about how the chatbot could be prompted to generate text promoting hate and violence.
But students couldn’t try ChatGPT in class themselves. The school district blocked it for fear that it could be used to cheat. So the students asked Ms. Shuman to use the chatbot to create a lesson for the class as an experiment.
Ms. Shuman spent hours at home coaxing the system into generating a lesson on wearable technology like smartwatches. In response to her specific requests, ChatGPT produced a remarkably detailed 30-minute lesson plan, complete with a warm-up discussion, readings on wearable technology, in-class exercises and a wrap-up discussion.
At the start of the class period, Ms. Shuman asked the students to spend 20 minutes going through the scripted lesson, as if it were a real lesson in wearable technology. Next, they would analyze the effectiveness of ChatGPT as a simulated teacher.
In small groups, the students read aloud the information the bot had generated about the features, health benefits, brand names and market value of smartwatches and fitness trackers. There were groans as the students read ChatGPT’s bland phrases (“Examples of smart glasses include Google Glass Enterprise 2”), which they said sounded like marketing copy or rave product reviews.
“It reminded me of fourth grade,” said 18-year-old Jayda Arias. “It was very bland.”
The class found the lesson mind-numbing compared to those of Ms. Shuman, a charismatic teacher who creates lesson materials for her specific students, asks them provocative questions and offers relevant, concrete examples on the fly.
“The only effective part of this lesson is that it’s simple,” Alexania Echevarria, 17, said of the ChatGPT material.
“ChatGPT seems to like wearable technology,” noted Alia Goddess Burke, 17, another student. “It’s biased!”
Ms. Shuman was offering a lesson that went beyond identifying AI bias. She used ChatGPT to impress upon her students that artificial intelligence was not inevitable and that the young women had the insight to challenge it.
“Should your teachers be using ChatGPT?” Ms. Shuman asked toward the end of the lesson.
The students’ answer was a resounding “No!” At least for now.