Cryptopolitan
2026-02-01 14:24:23

Researchers say AI “slop” is distorting science, push for mandatory disclosure

Scientists working inside the AI research world are facing a credibility problem they can no longer ignore. Major conferences focused on AI research reacted after review systems became clogged with weak submissions. Organizers saw a sharp rise in papers and peer reviews produced with little human effort. The concern is not style. The concern is accuracy. Errors are slipping into places where precision used to matter.

Conferences crack down as low-quality papers overwhelm reviewers

Researchers warned early that unchecked use of automated writing tools could damage the field. Inioluwa Deborah Raji, an AI researcher at the University of California, Berkeley, said the situation turned chaotic fast. “There is a little bit of irony to the fact that there’s so much enthusiasm for AI shaping other fields when, in reality, our field has gone through this chaotic experience because of the widespread use of AI,” she said.

Hard data shows how widespread the problem became. A Stanford University study published in August found that up to 22 percent of computer science papers showed signs of large language model use. Pangram, a text analysis start-up, reviewed submissions and peer reviews at the International Conference on Learning Representations in 2025. It estimated that 21 percent of reviews were fully generated by AI, while more than half used it for tasks like editing. Pangram also found that 9 percent of submitted papers had more than half their content produced this way.

The issue reached a tipping point in November. Reviewers at ICLR flagged a paper suspected of being generated by AI that still ranked in the top 17 percent based on reviewer scores. In January, detection firm GPTZero reported more than 100 automated errors across 50 papers presented at NeurIPS, widely seen as the top venue for advanced research in the field.

As concerns grew, ICLR updated its usage rules before the conference. Papers that fail to disclose extensive use of language models now face rejection. Reviewers who submit low-quality evaluations created with automation risk penalties, including having their own papers declined. Hany Farid, a computer science professor at the University of California, Berkeley, said, “If you’re publishing really low-quality papers that are just wrong, why should society trust us as scientists?”

Paper volumes surge while detection struggles to keep up

Per the report, NeurIPS received 21,575 papers in 2025, up from 17,491 in 2024 and 9,467 in 2020. One author submitted more than 100 papers in a single year, far beyond what is typical for one researcher. Thomas G. Dietterich, emeritus professor at Oregon State University and chair of the computer science section of arXiv, said uploads to the open repository also rose sharply.

Still, researchers say the cause is not simple. Some argue the increase comes from more people entering the field. Others say heavy use of AI tools plays a major role. Detection remains difficult because there is no shared standard for identifying automated text. Dietterich said common warning signs include made-up references and incorrect figures, and authors caught submitting such work can be temporarily banned from arXiv.

Commercial pressure also sits in the background. High-profile demos, soaring salaries, and aggressive competition have pushed parts of the field to focus on quantity. Raji said moments of hype attract outsiders looking for fast results. At the same time, researchers say some uses are legitimate.
Dietterich noted that writing quality in papers from China has improved, likely because language tools help rewrite English more clearly.

The issue now stretches beyond publishing. Companies like Google, Anthropic, and OpenAI promote their models as research partners that can speed up discovery in areas like the life sciences. These systems are trained on academic text. Farid warned that if training data includes too much synthetic material, model performance can degrade; past studies show large language models can collapse into nonsense when fed uncurated automated data. Farid said companies scraping research have strong incentives to know which papers are human-written.

Kevin Weil, head of science at OpenAI, said the tools still require human checks. “It can be a massive accelerator,” he said. “But you have to check it. It doesn’t absolve you from rigour.”
