Wikipedia Prohibits AI-Generated Articles in Major Language Editions


Wikipedia has officially shut the door on AI-generated articles. The English-language site now prohibits using large language models (LLMs) to write or rewrite articles, citing repeated breaches of core content policies. The ban isn’t absolute: editors may still use AI to polish their own drafts or assist with translations, but only if they personally verify every fact. So, if you were hoping to have ChatGPT ghostwrite your next Wikipedia deep-dive, think again.

This change matters for anyone who depends on Wikipedia for accurate, up-to-date information. Generative AI is notorious for fabricating facts, misquoting sources, and subtly distorting meaning. The new policy aims to uphold the encyclopedia's standards: no hallucinated history or AI-invented citations allowed. Editors can still use LLMs for minor copy edits or language assistance, but they must be fluent enough to spot errors and verify everything against reliable sources. If you're not double-checking, you're out.

Why Wikipedia Drew the Line

The official policy highlights a key issue: LLMs can go beyond what you ask and alter the meaning of text so that it's no longer supported by the cited sources. That's a dealbreaker for a site built on verifiability. Wikipedia administrator Chaotic Enby described the ban as a pushback against "enshittification" and the aggressive AI push by many companies in recent years. The hope is that other online communities will follow suit and decide for themselves how much AI they want in the mix.

Not all Wikipedias play by the same rules. Each language version sets its own policies. For instance, Spanish Wikipedia has gone further, banning LLMs outright, with no exceptions: not even for translation or minor edits. Others may be more lenient, but the English-language ban sets a high-profile precedent. If more communities follow, expect a patchwork of AI rules across the web.

What This Means for Editors and Readers

For volunteer editors, this means more manual work and less temptation to let AI fill in the blanks. Human oversight is now mandatory, especially for translations and minor edits. If you're not fluent in both languages, don't trust the bot. For readers, this could mean fewer AI-induced errors slipping through, at least on high-traffic pages with regular moderation.

Still, detecting AI-written text isn’t foolproof. Wikipedia’s human moderators might miss some LLM-generated content, especially on less-watched pages. But the message is clear: if you want to contribute, bring your own brain.

The bottom line

  • English Wikipedia bans AI-generated articles, with rare exceptions for editor-verified tweaks and translations.
  • Other language Wikipedias set their own rules: some stricter, some looser.
  • Readers get more reliable info, but editors face stricter oversight and more manual work.