A Cornell University study finds that prolonged exposure to low-quality training data degrades large language models' accuracy, comprehension, and reasoning ability.