The Hechinger Report
ChatGPT, a large language model, showed racial bias when grading essays, penalizing Asian American students more heavily than students from other racial groups, according to a study by ETS researchers. The model, trained on 300 billion words, reflected implicit biases in its source material. The study examined more than 13,000 essays written by students in grades 8 to 12. On average, the AI scored essays almost a full point lower than human evaluators did, and essays by Asian American students were docked roughly an additional quarter point. The researchers warned of potential racial bias when AI is used in classrooms and advised that AI-generated scores be carefully evaluated before they are presented to students.
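To make the kind of comparison the study describes concrete, here is a minimal sketch, not the ETS researchers' actual analysis, of how one might measure the gap between AI and human scores across student groups. The data, column names, and score values below are invented for illustration only.

```python
import pandas as pd

# Hypothetical example data: paired human and AI scores for a handful of
# essays, tagged by student group. These numbers are made up and do not
# come from the ETS study.
scores = pd.DataFrame({
    "student_group": ["Asian American", "Asian American", "White", "White",
                      "Black", "Black", "Hispanic", "Hispanic"],
    "human_score":   [4.0, 5.0, 4.0, 3.0, 4.0, 3.0, 5.0, 4.0],
    "ai_score":      [2.8, 3.9, 3.2, 2.1, 3.1, 2.2, 4.1, 3.0],
})

# Gap = AI score minus human score. A consistently more negative mean gap
# for one group than for the others would indicate the sort of disparity
# the study reports.
scores["gap"] = scores["ai_score"] - scores["human_score"]
print(scores.groupby("student_group")["gap"].mean().sort_values())
```

In this toy setup, a group whose mean gap is noticeably lower than the rest is being scored more harshly by the AI relative to human graders, which is the pattern the researchers flagged for Asian American students.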