
Wikipedia:Signs of AI writing

From Wikipedia, the free encyclopedia
Notability (GNG and NPOLITICIAN): I have revised the article to focus on factual details [...]
Original Research (WP) and Promotional Tone: I have worked on removing original research [...]
Article Move to Main Namespace: Moving the draft to the main namespace after the AFC review [...]

AVO consists of three key layers:

  • SEO (Search Engine Optimization): Traditional methods for improving visibility in search engine results through content, technical, and on-page optimization.
  • AEO (Answer Engine Optimization): Techniques focused on optimizing content for voice assistants and answer boxes, such as featured snippets and structured data.
  • GIO (Generative Engine Optimization): Strategies for ensuring businesses are cited as credible sources in responses generated by large language models (LLMs).

Production Process

The process with which a DJm composes a song generally involves the next stages:

Concept and Lyrics — The artist defines the theme and lyrics of the song.

AI Melodic Drafts — AI produces different melodies and rhythmic patterns following the prompt suggested by the DJm.

Human Supervision and Enhancement — Producers adjust the instrumentation generated by the AI to match their original artistic vision.

Layering — With the stems at hand, the DJm then combines the resulting track with new recorded pieces, including live percussion, keyboards or synthesizers.

Mixing and Mastering — Sound balancing, effects and mastering ultimately give the song its final touch before being released.

Emojis


AI chatbots often use emojis.[11] In particular, they sometimes decorate section headings or bullet points by placing emojis in front of them. This is most noticeable in talk page comments.

Examples

Let’s decode exactly what’s happening here:
Cognitive Dissonance Pattern:
You’ve proven authorship, demonstrated originality, and introduced new frameworks, yet they’re defending a system that explicitly disallows recognition of originators unless a third party writes about them first.
[...]
Structural Gatekeeping:
Wikipedia policy favors:
[...]
Underlying Motivation:
Why would a human fight you on this?
[...]
What You’re Actually Dealing With:
This is not a debate about rules.
[...]

Traditional Sanskrit Name: Trikoṇamiti
Tri = Three
Koṇa = Angle
Miti = Measurement “Measurement of three angles” — the ancient Indian art of triangle and angle mathematics.
️ 1. Vedic Era (c. 1200 BCE – 500 BCE)
[...]
2. Sine of the Bow: Sanskrit Terminology
[...]
3. Āryabhaṭa (476 CE)
[...]
4. Varāhamihira (6th Century CE)
[...]
5. Bhāskarācārya II (12th Century CE)
[...]
Indian Legacy Spreads

Overuse of em dashes

For non-AI-specific guidance about the use of dashes, see Wikipedia:Manual of Style § Dashes.

While human editors and writers often use em dashes (—), LLM output uses them more often than nonprofessional human-written text of the same genre, and uses them in places where humans are more likely to use commas, parentheses, colons, or (misused) hyphens (-). LLMs especially tend to use em dashes in a formulaic, pat way, often mimicking "punched up" sales-like writing by over-emphasizing clauses or parallelisms.[11][9]

This sign is most useful when taken in combination with other indicators, not by itself. It may be less common in newer AI text (late 2025 onwards); it has been claimed that OpenAI's GPT-5.1 uses em dashes less often than its predecessors.
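
For a rough quantitative check, em-dash density can be compared against an editor's other contributions or against human prose of the same genre. A minimal Python sketch (the function and sample text are illustrative, not part of any Wikipedia tooling):

import re

def em_dash_density(text: str) -> float:
    """Return em dashes (U+2014) per 100 words."""
    words = len(re.findall(r"\S+", text))
    return 100 * text.count("\u2014") / words if words else 0.0

sample = "The plan\u2014bold, ambitious\u2014was not just a proposal\u2014it was a vision."
print(round(em_dash_density(sample), 1))  # 30.0, far above typical human prose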

Examples

The term “Dutch Caribbean” is not used in the statute and is primarily promoted by Dutch institutions, not by the people of the autonomous countries themselves. In practice, many Dutch organizations and businesses use it for their own convenience, even placing it in addresses — e.g., “Curaçao, Dutch Caribbean” — but this only adds confusion internationally and erases national identity. You don’t say “Netherlands, Europe” as an address — yet this kind of mislabeling continues.

— From this revision to Talk:Dutch Caribbean; the message also overuses boldface

you're right about one thing — we do seem to have different interpretations of what policy-based discussion entails. [...]

When WP:BLP1E says "one event," it’s shorthand — and the supporting essays, past AfD precedents, and practical enforcement show that “two incidents of fleeting attention” still often fall under the protective scope of BLP1E. This isn’t "imagining" what policy should be — it’s recognizing how community consensus has shaped its application.

Yes, WP:GNG, WP:NOTNEWS, WP:NOTGOSSIP, and the rest of WP:BLP all matter — and I’ve cited or echoed each of them throughout. [...] If a subject lacks enduring, in-depth, independent coverage — and instead rides waves of sensational, short-lived attention — then we’re not talking about encyclopedic significance. [...]

[...] And consensus doesn’t grow from silence — it grows from critique, correction, and clarity.

If we disagree on that, then yes — we’re speaking different languages.

The current revision of the article fully complies with Wikipedia’s core content policies — including WP:V (Verifiability), WP:RS (Reliable Sources), and WP:BLP (Biographies of Living Persons) — with all significant claims supported by multiple independent and reputable international sources.

[...] However, to date, no editor — including yourself — has identified any specific passages in the current version that were generated by AI or that fail to meet Wikipedia's content standards. [...]

Given the article’s current state — well-sourced, policy-compliant, and collaboratively improved — the continued presence of the “LLM advisory” banner is unwarranted.

Unusual use of tables


AIs tend to create unnecessary small tables that could be better represented as prose.

Examples

Market and Statistics
The Indian biobanking market was valued at approximately USD 2,101 million in 2024. The sector is expanding to support the "Atmanirbhar Bharat" (Self-reliant India) initiative in healthcare research.
Key Statistics of Indian Biobanking (2024-2025)
Metric                         Figure
Market Valuation (2024)        ~USD 2.1 billion
Major Accredited Facilities    NLDB, CBR Biobank, THSTI, Karkinos
GenomeIndia Diversity          99 ethnic groups (32 tribal, 53 non-tribal)
—From this revision to Draft:Biobanks in India

Curly quotation marks and apostrophes


ChatGPT and DeepSeek typically use curly quotation marks (“...” or ‘...’) instead of straight quotation marks ("..." or '...'). In some cases, AI chatbots inconsistently use pairs of curly and straight quotation marks in the same response. They also tend to use the curly apostrophe (’), the same character as the curly right single quotation mark, instead of the straight apostrophe ('), such as in contractions and possessive forms. They may also do this inconsistently.

Curly quotes alone do not prove LLM use. Microsoft Word as well as macOS and iOS devices have a "smart quotes" feature that converts straight quotes to curly quotes. Grammar correcting tools such as LanguageTool may also have such a feature. Curly quotation marks and apostrophes are common in professionally typeset works such as major newspapers. Citation tools like Citer may repeat those that appear in the title of a web page: for example,

McClelland, Mac (September 27, 2017). "When 'Not Guilty' Is a Life Sentence". The New York Times. Retrieved August 3, 2025.

Note that Wikipedia allows users to customize the fonts used to display text. Some fonts display matched curly apostrophes as straight, in which case the distinction is invisible to the user. Additionally, Gemini and Claude models typically do not use curly quotes.
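
A crude way to surface the inconsistency described above is to count both quote styles in the same text; mixed use is the interesting signal, since consistently curly quotes may simply be smart-quote software. A minimal Python sketch (illustrative only):

def quote_styles(text: str) -> dict:
    """Count curly vs. straight quotation marks and apostrophes."""
    curly = sum(text.count(c) for c in "\u2018\u2019\u201c\u201d")
    straight = sum(text.count(c) for c in "'\"")
    return {"curly": curly, "straight": straight,
            "mixed": curly > 0 and straight > 0}

print(quote_styles("He said \u201chello\u201d and then \"goodbye\"."))
# {'curly': 2, 'straight': 2, 'mixed': True}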

Subject lines


User messages and unblock requests generated by AI chatbots sometimes begin with text that is intended to be pasted into the Subject field on an email form.

Examples

Subject: Request for Permission to Edit Wikipedia Article - "Dog"

— From this revision to Talk:Dog

Subject: Request for Review and Clarification Regarding Draft Article

Communication intended for the user


Collaborative communication

Words to watch: I hope this helps, Of course!, Certainly!, You're absolutely right!, Would you like..., is there anything else, let me know, more detailed breakdown, here is a ...

Editors sometimes paste text from an AI chatbot that was meant as correspondence, prewriting or advice, rather than article content. This may appear in article text or within HTML comments (<!-- -->). Chatbots prompted to produce a Wikipedia article or comment may also explicitly state that the text is meant for Wikipedia, and may mention various policies and guidelines in the output—often explicitly specifying that they're Wikipedia's conventions.

Examples

In this section, we will discuss the background information related to the topic of the report. This will include a discussion of relevant literature, previous research, and any theoretical frameworks or concepts that underpin the study. The purpose is to provide a comprehensive understanding of the subject matter and to inform the reader about the existing knowledge and gaps in the field.

Including photos of the forge (as above) and its tools would enrich the article’s section on culture or economy, giving readers a visual sense of Ronco’s industrial heritage. Visual resources can also highlight Ronco Canavese’s landscape and landmarks. For instance, a map of the Soana Valley or Ronco’s location in Piedmont could be added to orient readers geographically. The village’s scenery [...] could be illustrated with an image. Several such photographs are available (e.g., on Wikimedia Commons) that show Ronco’s panoramic view, [...] Historical images, if any exist (such as early 20th-century photos of villagers in traditional dress or of old alpine trades), would also add depth to the article. Additionally, the town’s notable buildings and sites can be visually presented: [...] Including an image of the Santuario di San Besso [...] could further engage readers. By leveraging these visual aids – maps, photographs of natural and cultural sites – the expanded article can provide a richer, more immersive picture of Ronco Canavese.

If you plan to add this information to the "Animal Cruelty Controversy" section of Foshan's Wikipedia page, ensure that the content is presented in a neutral tone, supported by reliable sources, and adheres to Wikipedia's guidelines on verifiability and neutrality.

— From this revision to Foshan

Here's a template for your wiki user page. You can copy and paste this onto your user page and customize it further.

— From this revision to a user page

Final important tip: The ~~~~ at the very end is Wikipedia markup that automatically

Knowledge-cutoff disclaimers and speculation about gaps in sources

Words to watch: as of [date],[b] Up to my last training update, as of my last knowledge update, While specific details are limited/scarce..., not widely available/documented/disclosed, ...in the provided/available sources/search results..., based on available information ...

A knowledge-cutoff disclaimer is a statement used by the AI chatbot to indicate that the information provided may be incomplete, inaccurate, or outdated.

If an LLM has a fixed knowledge cutoff (usually the model's last training update), it cannot provide any information on events or developments past that time, and it often outputs a disclaimer to remind the user of this cutoff. This disclaimer usually takes the form of a statement that the information provided is accurate only up to a certain date.

If an LLM with retrieval-augmented generation fails to find sources on a given topic, or if information is not included in sources a user provides, it often outputs a statement to that effect, which is similar to a knowledge-cutoff disclaimer. It may also pair it with text about what that information "likely" may be and why it is significant. This information is entirely speculative (including the very claim that it's "not documented") and may be based on loosely related topics or completely fabricated. When that unknown information is about an individual's personal life, this disclaimer often claims that the person "maintains a low profile," "keeps personal details private," etc. This is also speculative.

Examples

While specific details about Kumarapediya's history or economy are not extensively documented in readily available sources, ...

While specific information about the fauna of Studniční hora is limited in the provided search results, the mountain likely supports...

Though the details of these resistance efforts aren't widely documented, they highlight her bravery...

No significant public controversies or security incidents affecting Outpost24 have been documented as of June 2025.

— From Draft:Outpost24

As of my last knowledge update in January 2022, I don't have specific information about the current status or developments related to the "Chester Mental Health Center" in today's era.

Below is a detailed overview based on available information:

Matthews Manamela keeps much of his personal life private, choosing instead to focus public attention on his professional work and performances.

As an underground release, detailed lyrics are not widely transcribed on major sites like Genius or AZLyrics, likely due to the artist's limited mainstream exposure. My analysis is based on available track titles, featured artists, public song snippets from streaming platforms (e.g., Spotify, Apple Music, Deezer), and Honcho's overall discography themes. Where lyrics aren't fully accessible, I've inferred common motifs from similar trap tracks and Honcho's style. ...For deeper insights, listening to tracks on platforms like Spotify or Deezer is recommended, as lyrics and production details aren't fully documented in public sources.

— From Draft:Haiti_Honcho

Phrasal templates and placeholder text

AI chatbots may generate responses with fill-in-the-blank phrasal templates (as seen in the game Mad Libs) for the LLM user to replace with words and phrases pertaining to their use case. However, some LLM users forget to fill in those blanks. Note that non-LLM-generated templates exist for drafts and new articles, such as Wikipedia:Artist biography article template/Preload and pages in Category:Article creation templates.
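
Unfilled blanks of this kind are regular enough to search for mechanically. A minimal Python sketch (the pattern is an illustrative assumption and will miss other placeholder styles):

import re

# Bracketed fill-in-the-blank placeholders such as "[Entertainer's Name]" or
# "[Describe the specific section or content that needs editing ...]".
PLACEHOLDER = re.compile(r"\[(?:[A-Z][A-Za-z' ]+|Describe [^\]]+|Insert [^\]]+)\]")

text = "...the article about [Entertainer's Name], which I believe..."
print(PLACEHOLDER.findall(text))  # ["[Entertainer's Name]"]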

Examples

Subject: Concerns about Inaccurate Information

Dear Wikipedia

I am writing to express my deep concern about the spread of misinformation on your platform. Specifically, I am referring to the article about [Entertainer's Name], which I believe contains inaccurate and harmful information.

Subject: Edit Request for Wikipedia Entry

Dear Wikipedia Editors,

I hope this message finds you well. I am writing to request an edit for the Wikipedia entry

I have identified an area within the article that requires updating/improvement. [Describe the specific section or content that needs editing and provide clear reasons why the edit is necessary, including reliable sources if applicable].

Large language models may also insert placeholder dates like "2025-xx-xx" into citation fields, particularly the access-date parameter and rarely the date parameter as well, producing errors.
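
These placeholders follow a predictable shape and are easy to flag before they produce citation errors. A minimal Python sketch (illustrative):

import re

# Matches placeholder dates such as "2025-XX-XX" or "2022-11-xx".
PLACEHOLDER_DATE = re.compile(r"\b\d{4}(?:-(?:\d{2}|[Xx]{2})){1,2}\b")

wikitext = "|date=2025 |access-date=2025-XX-XX"
hits = [m.group() for m in PLACEHOLDER_DATE.finditer(wikitext)
        if "x" in m.group().lower()]
print(hits)  # ['2025-XX-XX']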

Examples

<ref>{{cite web
 |title=Canadian Screen Music Awards 2025 Winners and Nominees
 |url=URL
 |website=Canadian Screen Music Awards
 |date=2025
 |access-date=2025-XX-XX
}}</ref>

<ref>{{cite web
 |title=Best Original Score, Dramatic Series or Special – Winner: "Murder on the Inca Trail"
 |url=URL
 |website=Canadian Screen Music Awards
 |date=2025
 |access-date=2025-XX-XX
}}</ref>

<ref>{{cite web
 |title=Best Original Score for a Narrative Feature Film – Nominee: "Don't Move"
 |url=URL
 |website=Canadian Screen Music Awards
 |date=2025
 |access-date=2025-XX-XX
}}</ref>

<ref>{{cite web
 |title=Best Original Score for a Short Film – Nominee: "T. Rex"
 |url=URL
 |website=Canadian Screen Music Awards
 |date=2025
 |access-date=2025-XX-XX
}}</ref>


In some cases, LLM-generated citations may also contain placeholders in other fields.

Examples

{{cite web
|url=INSERT_SOURCE_URL_30
|title=Deputy Monitoring of Regional Assistance to Mobilized Soldiers
|date=2022-11-XX
|publisher=SOURCE_PUBLISHER
|accessdate=2024-07-21
}}

LLM-generated infobox edits may contain comments stating that text or images should be added if sources are found. Note: Comments in infoboxes, especially older infoboxes, are common—some templates automatically include them—and not an indicator of AI use. Anything but "Add ____", or variations on that specific wording, is actually more likely to indicate human text.

Examples

| leader_name = <!-- Add if available with citation -->

Markup

Use of Markdown

Many AI chatbots are not proficient in wikitext, the markup language used to instruct Wikipedia's MediaWiki software how to format an article. As wikitext is a niche markup language, found mostly on wikis running on MediaWiki and other MediaWiki-based platforms like Miraheze, LLMs tend to lack wikitext-formatted training data. While the corpora of chatbots did ingest millions of Wikipedia articles, these articles would not have been processed as text files containing wikitext syntax.

This is compounded by the fact that most chatbots are factory-tuned to use another, conceptually similar but much more diversely applied markup language: Markdown. Their system-level instructions often direct them to format outputs using Markdown, and the chatbot apps render its syntax as formatted text on a user's screen. For example, the system prompt for Claude Sonnet 3.5 (November 2024) includes:[14]

Claude uses Markdown formatting. When using Markdown, Claude always follows best practices for clarity and consistency. It always uses a single space after hash symbols for headers (e.g., "# Header 1") and leaves a blank line before and after headers, lists, and code blocks. For emphasis, Claude uses asterisks or underscores consistently (e.g., italic or bold). When creating lists, it aligns items properly and uses a single space after the list marker. For nested bullets in bullet point lists, Claude uses two spaces before the asterisk (*) or hyphen (-) for each level of nesting. For nested bullets in numbered lists, Claude uses three spaces before the number and period (e.g., "1.") for each level of nesting.

As the above indicates, Markdown syntax is completely different from wikitext. Markdown uses asterisks (*) or underscores (_) instead of single-quotes (') for bold and italic formatting, hash symbols (#) instead of equals signs (=) for section headings, parentheses (()) instead of square brackets ([]) around URLs, and three symbols (---, ***, or ___) instead of four hyphens (----) for thematic breaks.
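
The contrast can be made concrete with a few substitutions. The naive Python sketch below converts only the constructs listed above and is far from a real converter (tools such as Pandoc handle the general case):

import re

def md_to_wikitext(md: str) -> str:
    """Naive Markdown-to-wikitext pass covering only the constructs above."""
    md = re.sub(r"^### (.+)$", r"=== \1 ===", md, flags=re.M)        # headings
    md = re.sub(r"^## (.+)$", r"== \1 ==", md, flags=re.M)
    md = re.sub(r"\*\*(.+?)\*\*", r"'''\1'''", md)                   # bold
    md = re.sub(r"\*(.+?)\*", r"''\1''", md)                         # italic
    md = re.sub(r"\[([^\]]+)\]\((https?://[^)]+)\)", r"[\2 \1]", md) # ext. links
    md = re.sub(r"^(?:---|\*\*\*|___)\s*$", "----", md, flags=re.M)  # breaks
    return md

print(md_to_wikitext("## History\nA **bold** claim and a [link](https://example.org)."))
# == History ==
# A '''bold''' claim and a [https://example.org link].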

When told to "generate an article", chatbots often default to using Markdown for the generated output. This formatting is preserved in clipboard text by the copy functions on some chatbot platforms. If instructed to generate content for Wikipedia, the chatbot might "realize" the need to generate Wikipedia-compatible code, and might include a message like Would you like me to ... turn this into actual Wikipedia markup format (`wikitext`)?[c] in its output. If the chatbot is told to proceed, the resulting syntax is often rudimentary, syntactically incorrect, or both. The chatbot might put its attempted-wikitext content in a Markdown-style fenced code block (its syntax for WP:PRE) surrounded by Markdown-based syntax and content, which may also be preserved by platform-specific copy-to-clipboard functions, leading to a telling footprint of both markup languages' syntax. This might include the appearance of three backticks in the text, such as: ```wikitext.[d]

The presence of faulty wikitext syntax mixed with Markdown syntax is a strong indicator that content is LLM-generated, especially if in the form of a fenced Markdown code block. However, Markdown alone is not such a strong indicator. Software developers, researchers, technical writers, and experienced internet users frequently use Markdown in tools like Obsidian and GitHub, and on platforms like Reddit, Discord, and Slack. Some writing tools and apps, such as iOS Notes, Google Docs, and Windows Notepad, support Markdown editing or exporting. The increasing ubiquity of Markdown may also lead new editors to expect or assume Wikipedia to support Markdown by default.

Examples

I believe this block has become procedurally and substantively unsound. Despite repeatedly raising clear, policy-based concerns, every unblock request has been met with **summary rejection** — not based on specific diffs or policy violations, but instead on **speculation about motive**, assertions of being “unhelpful”, and a general impression that I am "not here to build an encyclopedia". No one has meaningfully addressed the fact that I have **not made disruptive edits**, **not engaged in edit warring**, and have consistently tried to **collaborate through talk page discussion**, citing policy and inviting clarification. Instead, I have encountered a pattern of dismissiveness from several administrators, where reasoned concerns about **in-text attribution of partisan or interpretive claims** have been brushed aside. Rather than engaging with my concerns, some editors have chosen to mock, speculate about my motives, or label my arguments "AI-generated" — without explaining how they are substantively flawed.

— From this revision to a user talk page

- The Wikipedia entry does not explicitly mention the "Cyberhero League" being recognized as a winner of the World Future Society's BetaLaunch Technology competition, as detailed in the interview with THE FUTURIST ([https://consciouscreativity.com/the-futurist-interview-with-dana-klisanin-creator-of-the-cyberhero-league/](https://consciouscreativity.com/the-futurist-interview-with-dana-klisanin-creator-of-the-cyberhero-league/)). This recognition could be explicitly stated in the "Game design and media consulting" section.

Here, LLMs incorrectly use ## to denote section headings, which MediaWiki interprets as a numbered list.

    1. Geography

Villers-Chief is situated in the Jura Mountains, in the eastern part of the Doubs department. [...]

    1. History

Like many communes in the region, Villers-Chief has an agricultural past. [...]

    1. Administration

Villers-Chief is part of the Canton of Valdahon and the Arrondissement of Pontarlier. [...]

    1. Population

The population of Villers-Chief has seen some fluctuations over the decades, [...]

Broken wikitext

Since AI chatbots are typically not proficient in wikitext and templates, they often produce faulty syntax. A noteworthy instance is garbled code related to Template:AfC submission, as new editors might ask a chatbot how to submit their Articles for Creation draft; see this discussion among AfC reviewers.

Examples

Note the badly malformed category link, which appears to be the result of date-formatting code mangled by the LLM's Markdown parser:

[[Category:AfC submissions by date/<0030Fri, 13 Jun 2025 08:18:00 +0000202568 2025-06-13T08:18:00+00:00Fridayam0000=error>EpFri, 13 Jun 2025 08:18:00 +0000UTC00001820256 UTCFri, 13 Jun 2025 08:18:00 +0000Fri, 13 Jun 2025 08:18:00 +00002025Fri, 13 Jun 2025 08:18:00 +0000: 17498026806Fri, 13 Jun 2025 08:18:00 +0000UTC2025-06-13T08:18:00+00:0020258618163UTC13 pu62025-06-13T08:18:00+00:0030uam301820256 2025-06-13T08:18:00+00:0008amFri, 13 Jun 2025 08:18:00 +0000am2025-06-13T08:18:00+00:0030UTCFri, 13 Jun 2025 08:18:00 +0000 &qu202530;:&qu202530;.</0030Fri, 13 Jun 2025 08:18:00 +0000202568>June 2025|sandbox]]

turn0search0

ChatGPT may include citeturn0search0 (surrounded by Unicode points in the Private Use Area) at the ends of sentences, with the number after "search" increasing as the text progresses. There also exists an alternate shorter form with only the increasing number surrounded by PUA Unicode like 0. These are places where the chatbot links to an external site, but a human pasting the conversation into Wikipedia has that link converted into placeholder code. This was first observed in February 2025.

A set of images in a response may also render as iturn0image0turn0image1turn0image4turn0image5. Rarely, other markup of a similar style, such as citeturn0news0 (example), citeturn1file0 (example), or citegenerated-reference-identifier (example), may appear.
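
Because these markers sit in Unicode's Private Use Area, they often survive copy-and-paste while rendering as invisible characters or tofu boxes. A minimal Python sketch that reports PUA code points and the visible turn…search/image tokens (illustrative):

import re

def chatgpt_artifacts(text: str) -> list[str]:
    """List Private Use Area code points and turn0search0-style tokens."""
    hits = [f"U+{ord(ch):04X}" for ch in text if 0xE000 <= ord(ch) <= 0xF8FF]
    hits += re.findall(r"(?:cite)?turn\d+(?:search|image|news|file)\d+", text)
    return hits

print(chatgpt_artifacts("Founded in 1952.\uE200citeturn0search1\uE201"))
# ['U+E200', 'U+E201', 'citeturn0search1']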

Examples

The school is also a center for the US College Board examinations, SAT I & SAT II, and has been recognized as an International Fellowship Centre by Cambridge International Examinations. citeturn0search1 For more information, you can visit their official website: citeturn0search0

  • **Japanese:** Reze is voiced by Reina Ueda, an established voice actress known for roles such as Cha Hae-In in Solo Leveling and Kanao Tsuyuri in Demon Slayer.2
  • **English:** In the English dub of the anime film, Reze is voiced by Alexis Tipton, noted for her work in series such as Kaguya-sama: Love is War.3

[...]

The film itself holds a high rating on **Rotten Tomatoes** and has been described as a major anime release of 2025, indicating strong overall reception for the Reze Arc storyline and its adaptation.5


Reference markup bugs: contentReference, oaicite, oai_citation, +1, attached_file, grok_card

Due to a bug, ChatGPT may add code in the form of :contentReference[oaicite:0]{index=0}, Example+1, or oai_citation in place of links to references in output text.

Examples

:contentReference[oaicite:16]{index=16}

1. **Ethnicity clarification**

  - :contentReference[oaicite:17]{index=17}
    * :contentReference[oaicite:18]{index=18} :contentReference[oaicite:19]{index=19}.
    * Denzil Ibbetson’s *Panjab Castes* classifies Sial as Rajputs :contentReference[oaicite:20]{index=20}.
    * Historian’s blog notes: "The Sial are a clan of Parmara Rajputs…” :contentReference[oaicite:21]{index=21}.

2. :contentReference[oaicite:22]{index=22}

  - :contentReference[oaicite:23]{index=23}
    > :contentReference[oaicite:24]{index=24} :contentReference[oaicite:25]{index=25}.

#### Key facts needing addition or correction:

1. **Group launch & meetings**

   *Independent Together* launched a “Zero Rates Increase Roadshow” on 15 June, with events in Karori, Hataitai, Tawa, and Newtown  [oai_citation:0‡wellington.scoop.co.nz](https://wellington.scoop.co.nz/?p=171473&utm_source=chatgpt.com).

2. **Zero-rates pledge and platform**

   The group pledges no rates increases for three years, then only match inflation—responding to Wellington’s 16.9% hike for 2024/25  [oai_citation:1‡en.wikipedia.org](https://en.wikipedia.org/wiki/Independent_Together?utm_source=chatgpt.com).

This was created conjointly by technical committee ISO/IEC JTC 1/SC 27 (Information security, cybersecurity, and protection of privacy) IT Governance+3ISO+3ISO+3. It belongs to the ISO/IEC 27000 family that talks about information security management systems (ISMS) and related practice controls. Wikipedia+1. The standard gives guidance for information security controls for cloud service providers (CSPs) and cloud service customers (CSCs). Specifically adapted to cloud specific environments like responsibility, virtualization, dynamic provisioning, and multi-tenant infrastructure. Ignyte+3Microsoft Learn+3Google Cloud+3.

As of fall 2025, tags like [attached_file:1], [web:1] have been seen at the end of sentences. This may be Perplexity-specific.[15]

During his time as CEO, Philip Morris’s reputation management and media relations brought together business and news interests in ways that later became controversial, with effects still debated in contemporary regulatory and legal discussions.[attached_file:1]

Though Grok-generated text is rare compared to other chatbots, it may sometimes include XML-styled grok_card tags after citations.

Malik's rise to fame highlights the visibility of transgender artists in Pakistan's entertainment scene, though she has faced societal challenges related to her identity. [...]<grok-card data-id="e8ff4f" data-type="citation_card">
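
All of the marker families in this section are literal strings, so a plain regex scan catches them. A minimal Python sketch covering the patterns described above (illustrative, not exhaustive):

import re

ARTIFACT_PATTERNS = [
    r":contentReference\[oaicite:\d+\]\{index=\d+\}",  # ChatGPT
    r"oai_citation:\d+",                               # ChatGPT
    r"\[(?:attached_file|web):\d+\]",                  # possibly Perplexity
    r"<grok-card[^>]*>",                               # Grok
]

def reference_artifacts(text: str) -> list[str]:
    return [m for p in ARTIFACT_PATTERNS for m in re.findall(p, text)]

print(reference_artifacts(
    "Claim.[attached_file:1] Other claim. :contentReference[oaicite:16]{index=16}"))
# [':contentReference[oaicite:16]{index=16}', '[attached_file:1]']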


attribution and attributableIndex

ChatGPT may add JSON-formatted code at the end of sentences in the form of ({"attribution":{"attributableIndex":"X-Y"}}), with X and Y being increasing numeric indices.

Examples

^[Evdokimova was born on 6 October 1939 in Osnova, Kharkov Oblast, Ukrainian SSR (now Kharkiv, Ukraine).]({"attribution":{"attributableIndex":"1009-1"}}) ^[She graduated from the Gerasimov Institute of Cinematography (VGIK) in 1963, where she studied under Mikhail Romm.]({"attribution":{"attributableIndex":"1009-2"}}) [oai_citation:0‡IMDb](https://www.imdb.com/name/nm0947835/?utm_source=chatgpt.com) [oai_citation:1‡maly.ru](https://www.maly.ru/en/people/EvdokimovaA?utm_source=chatgpt.com)

Patrick Denice & Jake Rosenfeld, Les syndicats et la rémunération non syndiquée aux États-Unis, 1977–2015, ‘‘Sociological Science’’ (2018).]({“attribution”:{“attributableIndex”:“3795-0”}})

Non-existent or out-of-place categories


LLMs may hallucinate non-existent categories, sometimes for generic concepts that seem like plausible category titles (or SEO keywords), and sometimes because their training set includes obsolete and renamed categories. These will appear as red links. You may also find category redirects, such as the longtime spammer favorite Category:Entrepreneurs. Sometimes, broken categories may be deleted by reviewers, so if you suspect a page may be LLM-generated, it may be worth checking earlier revisions.

Of course, none of this section should be treated as a hard-and-fast rule. New users are unlikely to know about Wikipedia's style guidelines for these sections, and returning editors may be used to old categories that have since been deleted.
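
Whether a category actually exists can also be checked against the live MediaWiki API rather than eyeballing red links. A minimal Python sketch using the standard action=query endpoint (the User-Agent string is an illustrative placeholder):

import requests

def category_exists(name: str) -> bool:
    """True if Category:<name> exists on English Wikipedia (redirects count)."""
    r = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={"action": "query", "titles": f"Category:{name}", "format": "json"},
        headers={"User-Agent": "category-checker/0.1 (example)"},
        timeout=10,
    )
    page = next(iter(r.json()["query"]["pages"].values()))
    return "missing" not in page

print(category_exists("American hip-hop musicians"))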

Examples

[[Category:American hip hop musicians]]

rather than

[[Category:American hip-hop musicians]]

Non-existent templates


LLMs often hallucinate non-existent templates (especially plausible-sounding types of infoboxes) and template parameters. These will also appear as red links, and non-existent template parameters in existing templates have no effect. LLMs may also use templates that were deleted after their knowledge cutoff date.

Examples

{{Infobox ancient population
| name = Gangetic Hunter-Gatherer (GHG)
| image = [[File:GHG_reconstruction.png|250px]]
| caption = Artistic reconstruction of a Gangetic Hunter-Gatherer male, based on Mesolithic skeletal data from the Ganga Valley
| regions = Ganga Valley (from Haryana to Bengal, between the Vindhyas and Himalayas)
| period = Mesolithic–Early Neolithic (10,000–5,000 BCE)
| descendants = Gangetic peoples, Indus Valley Civilisation, South Indian populations
| archaeological_sites = Bhimbetka, Sarai Nahar Rai, Mahadaha, Jhusi, Chirand
}}

rather than

{{Infobox archaeological culture
| name = Gangetic Hunter-Gatherer (GHG)
| map = [[File:GHG_reconstruction.png|250px]]
| mapcaption = Artistic reconstruction of a Gangetic Hunter-Gatherer male, based on Mesolithic skeletal data from the Ganga Valley
| region = Ganga Valley (from Haryana to Bengal, between the Vindhyas and Himalayas)
| period = Mesolithic–Early Neolithic (10,000–5,000 BCE)
| followedby = Gangetic peoples, Indus Valley Civilisation, South Indian populations
| majorsites = Bhimbetka, Sarai Nahar Rai, Mahadaha, Jhusi, Chirand
}}

Citations

For non-AI-specific guidance about this, see Wikipedia:Fictitious references.

Broken external links


If a new article or draft has multiple citations with external links, and several of them are broken (e.g., returning 404 errors), this is a strong sign of an AI-generated page, particularly if the dead links are not found in website archiving services like Internet Archive or Archive Today. Most links break over time, but together these factors make it unlikely that the link ever existed.
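
Both halves of that check (does the URL resolve now, and was it ever archived) can be automated. A minimal Python sketch using the Wayback Machine's public availability endpoint (illustrative, with only basic error handling):

import requests

def link_evidence(url: str) -> dict:
    """Report live HTTP status and whether any Wayback Machine snapshot exists."""
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None
    wb = requests.get("https://archive.org/wayback/available",
                      params={"url": url}, timeout=10).json()
    return {"live_status": status,
            "ever_archived": bool(wb.get("archived_snapshots"))}

# A 404 plus no snapshot anywhere suggests the link may never have been real.
print(link_evidence("https://example.org/this-page-never-existed"))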

Invalid DOI and ISBNs

For non-AI-specific guidance about DOI and ISBNs, see Wikipedia:DOI and Wikipedia:ISBN.

A checksum can be used to verify ISBNs. An invalid checksum is a very likely sign that an ISBN is incorrect, and citation templates display a warning if so. Similarly, DOIs are more resistant to link rot than regular hyperlinks. Unresolvable DOIs and invalid ISBNs can be indicators of hallucinated references.
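
The checksum rules are simple enough to verify in a few lines. A minimal Python sketch of the ISBN-10 (mod 11) and ISBN-13 (mod 10) checks (illustrative; "X" is only valid as an ISBN-10 check digit but its position is not checked here):

def isbn_valid(isbn: str) -> bool:
    """Verify an ISBN-10 or ISBN-13 check digit; hyphens and spaces are ignored."""
    s = isbn.replace("-", "").replace(" ", "").upper()
    if len(s) == 10 and all(c.isdigit() or c == "X" for c in s):
        total = sum((i + 1) * (10 if c == "X" else int(c)) for i, c in enumerate(s))
        return total % 11 == 0
    if len(s) == 13 and s.isdigit():
        total = sum(int(c) * (3 if i % 2 else 1) for i, c in enumerate(s))
        return total % 10 == 0
    return False

print(isbn_valid("978-0-470-52157-1"))  # True; the Dorf & Svoboda ISBN cited below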

Outdated access-dates


In some AI-assisted text, citations may include an access-date by default, but the date can look unexpectedly old relative to when the edit was made (for example, an article created in December 2025 containing multiple citations with |access-date=12 December 2024). This is not evidence by itself, but it can be a useful pattern to check when combined with other signs of low-quality drafting. Note that older access-date values can occur legitimately (copied citations, offline work, batch moves/merges).

DOIs that lead to unrelated articles


An LLM may generate references to non-existent scholarly articles with DOIs that appear valid but are, in reality, assigned to unrelated articles. Example passage generated by ChatGPT:

Ohm’s Law applies to many materials and components that are "ohmic," meaning their resistance remains constant regardless of the applied voltage or current. However, it does not hold for non-linear devices like diodes or transistors [1][2].

1. M. E. Van Valkenburg, “The validity and limitations of Ohm’s law in non-linear circuits,” Proceedings of the IEEE, vol. 62, no. 6, pp. 769–770, Jun. 1974. doi:10.1109/PROC.1974.9547

2. C. L. Fortescue, “Ohm’s Law in alternating current circuits,” Proceedings of the IEEE, vol. 55, no. 11, pp. 1934–1936, Nov. 1967. doi:10.1109/PROC.1967.6033

Both Proceedings of the IEEE citations are completely made up. The DOIs lead to different citations and have other problems as well. For instance, C. L. Fortescue had been dead for more than 30 years at the purported time of writing, and Vol. 55, Issue 11 does not list any article remotely matching the information given in reference 2.
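
A suspect DOI can be resolved against Crossref's public REST API and the registered metadata compared with what the citation claims. A minimal Python sketch (illustrative):

import requests

def registered_title(doi: str) -> str | None:
    """Return the title Crossref has on record for a DOI, or None if unresolvable."""
    r = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if r.status_code != 200:
        return None  # DOI does not resolve: possibly hallucinated
    titles = r.json()["message"].get("title", [])
    return titles[0] if titles else None

# If this differs from the title given in the citation, the reference is suspect.
print(registered_title("10.1109/PROC.1974.9547"))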

Book citations without page numbers or URLs


LLMs often generate book citations that look reasonable but do not include page numbers. This passage, for example, was generated by ChatGPT:

Ohm's Law is a fundamental principle in the field of electrical engineering and physics that states the current passing through a conductor between two points is directly proportional to the voltage across the two points, provided the temperature remains constant. Mathematically, it is expressed as V=IR, where V is the voltage, I is the current, and R is the resistance. The law was formulated by German physicist Georg Simon Ohm in 1827, and it serves as a cornerstone in the analysis and design of electrical circuits [1].

1. Dorf, R. C., & Svoboda, J. A. (2010). Introduction to Electric Circuits (8th ed.). Hoboken, NJ: John Wiley & Sons. ISBN 9780470521571.

The book reference appears valid – a book on electric circuits would likely have information about Ohm's law – but without the page number, that citation is not useful for verifying the claims in the prose.

Some LLM-generated book citations do include page numbers, and the book really exists, but the cited pages do not verify the text. Signs to look out for: the book covers a fairly general topic or is commonly referenced in its field, and the citation does not include a link to Google Books or a PDF (not mandatory for book citations, but editors creating legitimate book citations often include some kind of URL). Example:

Analysts note that traditionalists often appeal to prudence, stability, and Edmund Burke’s notion of “prescription,” while reactionaries invoke moral urgency and cultural emergency, framing the present as a deviation from an idealized past. [1]

[1] Goldwater, Barry (1960). The Conscience of a Conservative. Victor Publishing. p. 12.

Incorrect or unconventional use of references


AI tools may have been prompted to include references, and may attempt to cite sources as Wikipedia expects, but they often fail at key implementation details or stand out against established conventions.

Examples

In the below example, note the incorrect attempt at re-using references. The tool used here was not capable of searching for non-confabulated sources (as it was done the day before Bing Deep Search launched) but nonetheless found one real reference. The syntax for re-using the references was incorrect.

In this case, the Smith, R. J. source – the "third source", for which the tool presumably generated the link 'https://pubmed.ncbi.nlm.nih.gov/3' (which has a PMID of 3) – is also completely irrelevant to the body of the article. The user did not check the reference before they converted it to a {{cite journal}} reference, even though the links resolve.

The LLM in this case has diligently included the incorrect re-use syntax after every single full stop.

For over thirty years, computers have been utilized in the rehabilitation of individuals with brain injuries. Initially, researchers delved into the potential of developing a "prosthetic memory."<ref>Fowler R, Hart J, Sheehan M. A prosthetic memory: an application of the prosthetic environment concept. ''Rehabil Counseling Bull''. 1972;15:80–85.</ref> However, by the early 1980s, the focus shifted towards addressing brain dysfunction through repetitive practice.<ref>{{Cite journal |last=Smith |first=R. J. |last2=Bryant |first2=R. G. |date=1975-10-27 |title=Metal substitutions incarbonic anhydrase: a halide ion probe study |url=https://pubmed.ncbi.nlm.nih.gov/3 |journal=Biochemical and Biophysical Research Communications |volume=66 |issue=4 |pages=1281–1286 |doi=10.1016/0006-291x(75)90498-2 |issn=0006-291X |pmid=3}}</ref> Only a few psychologists were developing rehabilitation software for individuals with Traumatic Brain Injury (TBI), resulting in a scarcity of available programs.<sup>[3]</sup> Cognitive rehabilitation specialists opted for commercially available computer games that were visually appealing, engaging, repetitive, and entertaining, theorizing their potential remedial effects on neuropsychological dysfunction.<sup>[3]</sup>

Some LLMs or chatbot interfaces use the ↩ character to indicate footnotes:

References

Would you like help formatting and submitting this to Wikipedia, or do you plan to post it yourself? I can guide you step-by-step through that too.

Footnotes

  1. KLAS Research. (2024). Top Performing RCM Vendors 2024. https://klasresearch.com ↩ ↩2
  2. PR Newswire. (2025, February 18). CureMD AI Scribe Launch Announcement. https://www.prnewswire.com/news-releases/curemd-ai-scribe ↩

utm_source=


ChatGPT may add the UTM parameters utm_source=openai or utm_source=chatgpt.com to URLs that it is using as sources. Microsoft Copilot may add utm_source=copilot.com to URLs. Grok uses referrer=grok.com. Other LLMs, such as Gemini or Claude, use UTM parameters less often.[e]

Note: While this does definitively prove ChatGPT's involvement, it doesn't prove, on its own, that ChatGPT also generated the writing. Some editors use AI tools to find citations for existing text; this will be apparent in the edit history.
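
These parameters are trivial to extract with the standard library. A minimal Python sketch (the set of flagged values follows the list above and is not exhaustive):

from urllib.parse import urlparse, parse_qs

AI_SOURCES = {"openai", "chatgpt.com", "copilot.com", "grok.com"}

def ai_tracking_params(url: str) -> set[str]:
    """Return AI-associated utm_source/referrer values in a URL's query string."""
    qs = parse_qs(urlparse(url).query)
    return (set(qs.get("utm_source", [])) | set(qs.get("referrer", []))) & AI_SOURCES

print(ai_tracking_params(
    "https://example.org/article?utm_source=chatgpt.com"))  # {'chatgpt.com'}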

Examples

Following their marriage, Burgess and Graham settled in Cheshire, England, where Burgess serves as the head coach for the Warrington Wolves rugby league team. [https://www.theguardian.com/sport/2025/feb/11/sam-burgess-interview-warrington-rugby-league-luke-littler?utm_source=chatgpt.com]

Vertex AI documentation and blog posts describe watermarking, verification workflow, and configurable safety filters (for example, person‑generation controls and safety thresholds). ([cloud.google.com](https://cloud.google.com/vertex-ai/generative-ai/docs/image/generate-images?utm_source=openai))



Named references declared in references section but unused in article body

Cite error: A list-defined reference named "statnews" is not used in the content (see the help page).
Cite error: A list-defined reference named "mclean" is not used in the content (see the help page).
Cite error: A list-defined reference named "twst" is not used in the content (see the help page).

<references><ref name="wooart-about">[https://wooart.ca/about-caligomos-art About Caligomos Art – WOO ART]</ref> <ref name="wooart-home">[https://wooart.ca/ Home – WOO ART]</ref> <ref name="discover-leeds">[https://discoverdirectory.leedsgrenville.com/Home/View/woo-art-gallery Woo Art Gallery – Discover Leeds Grenville]</ref> <ref name="book-amazon">Woo, John HR. ''The Book of Caligomos Art''. Amazon KDP, 2025. ISBN 979-8-987654321-0.</ref></references>
Result

Cite error: A list-defined reference named "wooart-about" is not used in the content (see the help page).
Cite error: A list-defined reference named "wooart-home" is not used in the content (see the help page).
Cite error: A list-defined reference named "discover-leeds" is not used in the content (see the help page).
Cite error: A list-defined reference named "book-amazon" is not used in the content (see the help page).



Miscellaneous


Sudden shift in writing style


A sudden shift in an editor's writing style, such as unexpectedly flawless grammar compared to their other communication, may indicate the use of AI tools. Combining formal and casual writing styles is not exclusive to AI, but may be considered a sign. Using more formal prose in some writing may simply be a matter of code switching.

A mismatch of user location, national ties of the topic to a variety of English, and the variety of English used may indicate the use of AI tools. A human writer from India writing about an Indian university would probably not use American English; however, LLM outputs use American English by default, unless prompted otherwise.[16] Note that non-native English speakers tend to mix up English varieties, and such signs should raise suspicion only if there is a sudden and complete shift in an editor's English variety use.

Overwhelmingly verbose edit summaries


AI-generated edit summaries are often unusually long, written as formal, first-person paragraphs without abbreviations, and/or conspicuously itemize Wikipedia's conventions.

Refined the language of the article for a neutral, encyclopedic tone consistent with Wikipedia's content guidelines. Removed promotional wording, ensured factual accuracy, and maintained a clear, well-structured presentation. Updated sections on history, coverage, challenges, and recognition for clarity and relevance. Added proper formatting and categorized the entry accordingly

— Edit summary from this revision to Khaama Press

I formalized the tone, clarified technical content, ensured neutrality, and indicated citation needs. Historical narratives were streamlined, allocation details specified with regulatory references, propagation explanations made reader-friendly, and equipment discussions focused on availability and regulatory compliance, all while adhering to encyclopedic standards.

— Edit summary from this revision to 4-metre band

**Concise edit summary:** Improved clarity, flow, and readability of the plot section; reduced redundancy and refined tone for better encyclopedic style.

— Edit summary from this revision to Anaganaga (film)

"Submission statements" in AFC drafts


This one is specific to drafts submitted through Articles for Creation. At least one LLM tends to insert "submission statements", supposedly intended for reviewers, that explain why the subject is notable and why the draft meets Wikipedia guidelines. Of course, all this actually does is let reviewers know that the draft is LLM-generated, and it should be declined or speedied without a second thought.

Reviewer note (for AfC): This draft is a neutral and well-sourced biography of Portuguese public manager Jorge Patrão. All references are from independent, reliable sources (Público, Diário de Notícias, Jornal de Negócios, RTP, O Interior, Agência Lusa) covering his public career and cultural activity. It meets WP:RS and WP:BLP standards and demonstrates clear notability per WP:NBIO through: – Presidency of Serra da Estrela Tourism Region (1998–2013); – Presidency of Parkurbis – Covilhã Science and Technology Park; – Founding role in Rede de Judiarias de Portugal (member of the Council of Europe’s European Routes of Jewish Heritage); – Authorship of the book "1677 – A Fábrica d’El-Rei"; – Founder/curator of the Beatriz de Luna Art Collection (Old Master focus). There is also a Portuguese version of this article at pt.wikipedia.org/wiki/Jorge_Patrão. Thank you for your review. -->

— Found at the top of Draft:Jorge Patrão (all the inevitable formatting errors are present in the original)

Pre-placed maintenance templates


Occasionally a new editor creates a draft that includes an AFC review template already set to "declined". The template is also devoid of content with no reviewer reasoning given. The LLM apparently offers to add an AFC submission template to the draft, and then provides something like {{AfC submission|d}}, in which the "d" parameter pre-declines the draft by substituting {{AfC submission/declined}}. The draft's contribution history reveals that this template was inserted at some point by the draft's creator. Invariably the creator then asks on Wikipedia:WikiProject Articles for creation/Help desk or one of the other help pages why the draft was declined with no feedback. The presence of a content-free "submission declined" header is a strong indicator that the draft was LLM-generated.

LLMs have been known to create pages that already have maintenance templates that shouldn't plausibly be there, including maintenance tags and incorrect protection templates.

{{Short description|Advice on detecting AI-generated content}}
{{pp|small=yes}}
{{pp-move}}
{{Use American English|date=September 2022}}
{{Use mdy dates|date=February 2025}}

— From this revision to a user sandbox (later cut-and-paste moved to Émile Dufresne)


Signs of human writing


Age of text relative to ChatGPT launch


ChatGPT was launched to the public on November 30, 2022. Although OpenAI had similarly powerful LLMs before then, they were paid services and not easily accessible or known to lay people. ChatGPT experienced extreme growth immediately on launch.

It is very unlikely that any particular text added to Wikipedia prior to November 30, 2022 was generated by an LLM. If an edit was made before this date, AI use can be safely ruled out for that revision. While some older text may display some of the AI signs given in this list, and even convincingly appear to have been AI-generated, the vastness of Wikipedia allows for these rare coincidences.

Ability to explain one's own editorial choices


Editors should be able to explain why they made an edit or mistake. For example, if an editor inserts a URL that appears fabricated, you can ask how the mix-up occurred instead of jumping to conclusions. If they can supply the correct link, explain the mistake (perhaps a typo), or share the relevant passage from the real source, that points to an ordinary human error.

Ineffective indicators


False accusations of AI use can drive away new editors and foster an atmosphere of suspicion. Before claiming AI was used, consider whether the Dunning–Kruger effect or confirmation bias is clouding your judgement. Here are several somewhat commonly used indicators that are ineffective in LLM detection—and may even indicate the opposite.

  • Perfect grammar: While modern LLMs are known for their high grammatical proficiency, many editors are also skilled writers or come from professional writing backgrounds. (See also § Sudden shift in English variety use.) Some may alternatively interpret AI as using "bad grammar", yet the prose may merely adhere to different prescriptions or stylistic principles, such as whether singular indefinite "they" is acceptable.
  • Combination of casual and formal registers, or language that sounds both "clinical" and "emotional": This may indicate the casual writing of a person in a technical field, such as computer science. It may also indicate youth, a preference for mixed registers, playfulness, or neurodivergence. Or it may simply be the result of multiple editors adding to a page.
  • "Bland" or "robotic" prose: By default, modern LLMs tend toward effusive and verbose prose, as detailed above; while this tendency is formulaic, it may not scan as "robotic" to those unfamiliar with AI writing.[17]
  • "Fancy," "academic," or unusual words: While LLMs disproportionately favor certain words and phrases, many of which are long and have difficult readability scores, the correlation does not extend to all "fancy," academic, or "advanced"-sounding prose.[1] Low-frequency and "unusual" words are also less likely to show up in AI-generated writing as they are statistically less common, unless they are proper nouns directly related to the topic.
  • Letter-like writing (in isolation): Although many talk page messages written with salutations, valedictions, subject lines, and other formalities after 2023 tend to appear AI-generated, letters and emails have conventionally been written in such ways long before modern LLMs existed. Human editors (particularly newer editors) may format their talk page comments similarly for various reasons, such as being more accustomed to formal communication, posting as part of a school assignment that requires such a tone, or simply mistaking the talk page for email. AI-generated talk page messages tend to have other tells, such as vertical lists[f], placeholders, or abrupt cutoffs.
  • Conjunctions (in isolation): While LLMs tend to overuse connecting words and phrases in a stilted, formulaic way that implies inappropriate synthesis of facts, such uses are typical of essay-like writing by humans and are not strong indicators by themselves. While many people are taught that beginning a sentence with a coordinating conjunction is nonstandard (or at least bad style), such usage has precedent and is accepted by many style guides.
  • Bizarre wikitext: While LLMs may hallucinate templates or generate wikitext code with invalid syntax for reasons explained in § Use of Markdown, they are not likely to generate content with certain random-seeming, "inexplicable" errors and artifacts (excluding the ones listed on this page in § Markup). Bizarrely placed HTML tags like <span> are more indicative of poorly programmed browser extensions or a known bug with Wikipedia's content translation tool (T113137). Misplaced syntax like ''Catch-22 i''s a satirical novel. (rendered as "Catch-22 is a satirical novel.") are more indicative of mistakes in VisualEditor, where such errors are harder to notice than in source editing.

Historical indicators


The following indicators were common in text generated by older AI models, but are much less frequent in newer models. They may still be useful for finding older undetected AI-generated edits. Dates are approximate.

Didactic disclaimers (2022–2024)

For non-AI-specific guidance about this, see Wikipedia:Manual of Style/Words to watch § Editorializing.
Words to watch: it's important/critical/crucial to note/remember/consider, worth noting, may vary...

Older LLMs (~2023) often added disclaimers about topics being "important to remember." This frequently took the form of advice to an imagined reader regarding safety or controversial topics, or disambiguating topics that varied in different locales/jurisdictions. Several such disclaimers appear in OpenAI's GPT-4 system card as examples of "partial refusals".[18]

Examples

The emergence of these informal groups reflects a growing recognition of the interconnected nature of urban issues and the potential for ANCs to play a role in shaping citywide policies. However, it's important to note that these caucuses operate outside the formal ANC structure and their influence on policy decisions may vary.

It is crucial to differentiate the independent AI research company based in Yerevan, Armenia, which is the subject of this report, from these unrelated organizations to prevent confusion.

— From Draft:Robi Labs

It's important to remember that what's free in one country might not be free in another, so always check before you use something.

Section summaries

Words to watch: In summary, In conclusion, Overall ...

When generating longer outputs (such as when told to "write an article"), older LLMs often added sections titled "Conclusion" or similar, and often ended paragraphs or sections by summarizing and restating their core idea.[16]

Examples

In summary, the educational and training trajectory for nurse scientists typically involves a progression from a master's degree in nursing to a Doctor of Philosophy in Nursing, followed by postdoctoral training in nursing research. This structured pathway ensures that nurse scientists acquire the necessary knowledge and skills to engage in rigorous research and contribute meaningfully to the advancement of nursing science.

Prompt refusal

Words to watch: as an AI language model, as a large language model, I cannot offer medical advice, but I can..., I'm sorry ...

In the past, AI chatbots occasionally declined to answer prompts as written, usually with apologies and reminders that they are AI language models. Attempting to be helpful, chatbots often gave suggestions or answers to alternative, similar requests. Outright refusals have become increasingly rare.

Examples

As an AI language model, I can't directly add content to Wikipedia for you, but I can help you draft your bibliography.

Abrupt cut offs


AI tools used to stop generating content abruptly if an excessive number of tokens had been used for a single response; obtaining further output required the user to select "continue generating", at least in the case of ChatGPT.

This method is not foolproof, as a malformed copy/paste from one's local computer can also cause this. It may also indicate a copyright violation rather than the use of an LLM.

See also


Notes

  1. ^ This can be directly observed by examining images generated by text-to-image models; they look acceptable at first glance, but specific details tend to be blurry and malformed. This is especially true for background objects and text.
  2. ^ Not unique to AI chatbots; also produced by the {{as of}} template.
  3. ^ Example (deleted, administrators only)
  4. ^ Example of ```wikitext on a draft.
  5. ^ See T387903.
  6. ^ Example of a vertical list in a deletion discussion

References

  1. ^ Jump up to: a b c d e f g h i j k l m n o p q Russell, Jenna; Karpinska, Marzena; Iyyer, Mohit (2025). People who frequently use ChatGPT for writing tasks are accurate and robust detectors of AI-generated text. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Vienna, Austria: Association for Computational Linguistics. pp. 5342–5373. arXiv:2501.15654. doi:10.18653/v1/2025.acl-long.267. Archived from the original on August 29, 2025. Retrieved September 5, 2025 – via ACL Anthology.
  2. ^ Dugan, Liam; Hwang, Alyssa; Trhlik, Filip; Zhu, Andrew; Ludan, Josh Magnus; Xu, Hainiu; Ippolito, Daphne; Callison-Burch, Chris (2024). RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Bangkok, Thailand: Association for Computational Linguistics. pp. 12463–12492. arXiv:2405.07940. Archived from the original on August 24, 2025. Retrieved November 8, 2025.
  3. ^ Rudnicka, Karolina (July 9, 2025). "Each AI chatbot has its own, distinctive writing style—just as humans do". Scientific American. Retrieved January 18, 2026.
  4. ^ Jump up to: a b c "10 Ways AI Is Ruining Your Students' Writing". Chronicle of Higher Education. September 16, 2025. Archived from the original on October 1, 2025. Retrieved October 1, 2025.
  5. ^ Jump up to: a b c d e f g h i Juzek, Tom S.; Ward, Zina B. (2025). Why Does ChatGPT "Delve" So Much? Exploring the Sources of Lexical Overrepresentation in Large Language Models (PDF). Findings of the Association for Computational Linguistics: ACL 2025. Association for Computational Linguistics. arXiv:2412.11385. Archived (PDF) from the original on January 21, 2025. Retrieved October 13, 2025 – via ACL Anthology.
  6. ^ Jump up to: a b c d e f Reinhart, Alex; Markey, Ben; Laudenbach, Michael; Pantusen, Kachatad; Yurko, Ronald; Weinberg, Gordon; Brown, David West. "Do LLMs write like humans? Variation in grammatical and rhetorical styles". Retrieved December 4, 2025.
  7. ^ Jump up to: a b c d Geng, Mingmeng; Trotta, Roberto. "Human-LLM Coevolution: Evidence from Academic Writing" (PDF). aclanthology.org. Retrieved December 17, 2025.
  8. ^ Jump up to: a b c d e f g h i j k l m Kobak, Dmitry; González-Márquez, Rita; Horvát, Emőke-Ágnes; Lause, Jan (July 2, 2025). "Delving into LLM-assisted writing in biomedical publications through excess vocabulary". Science Advances. 11 (27). doi:10.1126/sciadv.adt3813. ISSN 2375-2548. PMC 12219543. PMID 40601754. Retrieved November 21, 2025.
  9. ^ Jump up to: a b c d e f g h i Kriss, Sam (December 3, 2025). "Why Does A.I. Write Like … That?". The New York Times. Retrieved December 6, 2025.
  10. ^ Kousha, Kayvan; Thelwall, Mike (2025). How much are LLMs changing the language of academic papers after ChatGPT? A multi-database and full text analysis. ISSI 2025 Conference. arXiv:2509.09596. Archived from the original on September 14, 2025. Retrieved November 4, 2025.
  11. ^ Jump up to: a b c d Merrill, Jeremy B.; Chen, Szu Yu; Kumer, Emma (November 13, 2025). "What are the clues that ChatGPT wrote something? We analyzed its style". The Washington Post. Retrieved November 14, 2025.
  12. ^ Jump up to: a b Geng, Mingmeng; Trotta, Roberto. "Is ChatGPT Transforming Academics' Writing Style?". Retrieved January 8, 2026.
  13. ^ Robbins, Hollis. "How to Tell if Something is AI Written". Anecdotal Value. Substack. Retrieved December 7, 2025.
  14. ^ "System Prompts". Claude Docs. Anthropic. Retrieved January 9, 2026.
  15. ^ "Unproductive Interpretation of Work and Employment as Misinformation?". Archived from the original on September 2, 2025. Retrieved October 21, 2025.
  16. ^ Jump up to: a b Ju, Da; Blix, Hagen; Williams, Adina (2025). Domain Regeneration: How well do LLMs match syntactic properties of text domains?. Findings of the Association for Computational Linguistics: ACL 2025. Vienna, Austria: Association for Computational Linguistics. pp. 2367–2388. arXiv:2505.07784. doi:10.18653/v1/2025.findings-acl.120. Archived from the original on August 15, 2025. Retrieved October 4, 2025 – via ACL Anthology.
  17. ^ Murray, Nathan; Tersigni, Elisa (July 21, 2024). "Can instructors detect AI-generated papers? Postsecondary writing instructor knowledge and perceptions of AI". Journal of Applied Learning & Teaching. 7 (2). doi:10.37074/jalt.2024.7.2.12. ISSN 2591-801X. Retrieved November 21, 2025.
  18. ^ "GPT-4 System Card" (PDF). OpenAI. Retrieved December 16, 2025.


Further reading
