Is Grammarly Crossing A Line By Using Deceased Scholars To Review Academic Papers?

The question of whether Grammarly is crossing a line by using deceased scholars to review academic papers has ignited a fierce debate about artificial intelligence, ethics, and consent. Recently, academics and journalists discovered that Grammarly’s new “Expert Review” feature offers writing feedback modeled on the styles of real people, including historians who have died. This revelation has left many wondering if the company has gone too far by using someone’s name and life’s work after they have passed away.

The “Expert Review” Feature Explained

In August 2025, Grammarly launched a set of AI-powered tools. One of them, called “Expert Review,” lives in the Grammarly sidebar. It claims to help users “sharpen your message through the lens of industry-relevant perspectives.” When you write a document, you can ask the AI to review it. The feedback then appears to come from specific experts.

The list of experts includes famous names. Users have reported seeing suggestions from Stephen King, Neil deGrasse Tyson, and Carl Sagan. It also includes tech journalists from major publications like The Verge and Wired. Crucially, it also includes recently deceased academics, such as historian David Abulafia, who died in January 2026.

The way the feedback looks can be misleading. In Google Docs, the comments appear similar to suggestions from a real person. This simulates the experience of getting edits from a trusted authority, even though no human expert is involved.

How Does Grammarly Justify This Practice?

When asked about the controversy, Grammarly’s parent company, Superhuman, defended the feature. Alex Gay, the vice president of product and corporate marketing, explained that the experts appear because their works are publicly available and widely cited.

The company line is that the feature does not claim endorsement from these individuals. A user guide states that the references are “for informational purposes only.”

Grammarly argues that it is merely pointing users toward influential voices. The company claims the AI generates suggestions “inspired by” the works of these experts, not that the experts are actively participating.

However, this justification has done little to calm the critics. The core issue is that permission was never asked. Living journalists were shocked to find themselves listed. Dead scholars, of course, could not give consent.

Is Grammarly Crossing A Line By Using Deceased Scholars? The Backlash

The academic reaction has been swift and harsh. The central question remains: is Grammarly crossing a line by using deceased scholars to review academic papers? For many, the answer is a resounding yes.

“Digital Necromancy”

The most striking criticism came from historian Kathleen Alves. She described the feature as “literally digital necromancy.” This term, which refers to conjuring the spirits of the dead for divination, perfectly captures the unease many feel. It suggests an AI is trying to raise the intellectual “spirits” of the dead to serve the living, without any regard for their legacy.

The Problem of Reputation

Vanessa Heggie, an associate professor at the University of Birmingham, posted a strong critique on LinkedIn. She stated that Grammarly is creating small language models from scraped work and then using the names and reputations of these scholars without explicit permission.

This is a key point. A scholar’s name carries weight. When a student sees a comment attributed to a famous historian, they may trust it implicitly. The AI is trading on a lifetime of work to give its product authority. If that feedback is bad or wrong, it tarnishes the scholar’s posthumous reputation.

Concerns Over Accuracy

The problem gets worse when you look at the feature’s performance. The Verge tested the tool and found it to be glitchy. It crashed frequently. More importantly, the “sources” it linked to were often spammy copies of legitimate sites.

In some cases, the sources were completely unrelated to the person whose name was on the suggestion. This implies that the AI feedback tied to one scholar might actually be based on another person’s work. If Grammarly is crossing a line by using deceased scholars, doing so inaccurately is a second, deeper violation.

How the Technology Likely Works

Understanding the tech helps clarify the ethical breach. Large Language Models (LLMs) learn from vast amounts of text. They scan books, articles, and websites. In this case, the system likely analyzes a scholar’s published work to produce comments that mimic their voice.

Some experts believe Grammarly uses a technique called persona prompting. Instead of building a small AI model for each person, the system might use public descriptions of a scholar’s work. It then tells the main AI to answer questions as that persona.

One user described this method as “both dumber and even weirder” than creating individual models. Regardless of the method, the result is the same. A machine generates comments using a real person’s name, risking misuse of that identity.
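To make the distinction concrete: persona prompting requires no training at all. It typically amounts to prepending an instruction that tells a general-purpose model to answer in character. The sketch below is purely illustrative of how such a system *might* assemble its request; the function name, message format, and persona description are hypothetical, not Grammarly’s actual implementation.

```python
def build_persona_review_prompt(expert_name, expert_description, draft_text):
    """Assemble a persona-style review request for a chat-based LLM.

    Note: no per-expert model is trained here. The base model is simply
    instructed to respond *as if* it were the named expert -- which is
    exactly why critics object: the name lends authority the output
    has not earned.
    """
    system_message = (
        f"You are giving editorial feedback in the style of {expert_name}, "
        f"described as: {expert_description}. "
        "Comment on clarity, accuracy, and tone as that persona would."
    )
    user_message = f"Please review the following draft:\n\n{draft_text}"
    # Standard chat-completion message list: one system turn, one user turn.
    return [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_message},
    ]

# Hypothetical usage -- all details invented for illustration:
messages = build_persona_review_prompt(
    "a noted maritime historian",
    "author of widely cited surveys of Mediterranean history",
    "The sea has always connected civilizations...",
)
```

The key design point is that the “expert” exists only in the system message: swap one string for another and the same model impersonates someone else, with no guarantee the output reflects anything the real person wrote or believed.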

The Ethical and Legal Gray Area

This controversy falls into a major gap in current law and academic policy.

Consent and Identity

In the US, there are no clear federal rules about creating synthetic versions of real people. An intellectual property attorney noted that this is the kind of case that will force courts to decide what constitutes identity theft in the AI age.

Grammarly updated its terms of service when it rebranded. These new terms allow the company to train its AI using any content users upload unless they manually opt out. Critics argue this is a broad claim on user data.

AI in Academic Writing Guidelines

Universities are scrambling to keep up. Guidelines from institutions like the University of Pretoria and the University of South Carolina stress that AI should be an editor, not a ghostwriter. They warn against using AI that changes your voice or meaning.

The new Grammarly feature does exactly that. It replaces the user’s voice with the voice of a named expert. This violates the principle that your unique perspective should remain recognizable in your work.

Publisher Policies

Academic publishers have strict rules. The Committee on Publication Ethics (COPE) requires transparency about AI use. AI tools cannot be listed as authors because they cannot be held responsible for the work.

If a student or researcher uses Grammarly’s Expert Review, are they getting unauthorized help? Are they citing the scholar whose name was used? Journal guidelines say that generating substantive content with AI may be considered plagiarism.

Broader Implications for Students and Researchers

This feature creates practical problems for anyone in education.

  1. False Authority: Students might trust bad advice because it has a famous name attached. AI is prone to “hallucinations”—making up facts. A student could end up including fabricated information in a paper, thinking it was vetted by an expert.
  2. Violation of Trust: If you upload a draft to Grammarly, the company’s terms may allow them to use that work to train AI. For a researcher with unpublished findings, this is a massive risk. Confidentiality is not guaranteed.
  3. Misleading Feedback: The AI might not understand how a real editor works. One reporter noted that a suggestion from an AI “inspired by” a Verge editor was the opposite of what the real editor would do. The AI can mimic style, but it cannot replicate the critical thinking and judgment of a human.
  4. Erosion of Trust in Academia: Historian C.E. Aubin from Yale told Wired that this reinforces the “profound mistrust so many scholars in the humanities have for AI.” When scholarship is used this way, it suggests that the actual people who do the thinking are replaceable.

Conclusion: A Line That Should Not Be Crossed

So, is Grammarly crossing a line by using deceased scholars to review academic papers? All evidence points to yes.

The company is using the names, identities, and hard-won reputations of living and dead academics without their consent. They are trading on the trust those names inspire to sell a product. When that product malfunctions—providing inaccurate feedback or linking to spammy sites—it damages the very idea of scholarly authority.

The feature fails the basic test of ethical AI use: transparency, consent, and accuracy. It moves beyond editing into the realm of identity appropriation. For scholars who have passed away, it robs them of the dignity of their legacy. Their work is no longer a contribution to human knowledge; it is simply data to be scraped and mimicked.

What do you think? Should companies be allowed to use the names and works of deceased individuals to train AI models, or does this cross a fundamental ethical line?

References

  • GNN News. (2026, March 9). Grammarly is using our identities without permission.
  • TechRound. (2026, March 6). Is Grammarly Crossing A Line By Using Deceased Scholars To Review Academic Papers?
  • Springer. (2025, April 17). Artificial intelligence-assisted academic writing: recommendations for ethical use. Advances in Simulation.
  • TechCrunch. (2026, March 7). Grammarly’s ‘expert review’ is just missing the actual experts.
  • University of Pretoria Library Services. (2026, January). Artificial Intelligence Guidelines: Design and Editing.
  • SFist. (2026, March 8). Grammarly’s New AI Tools Use Experts’ Identities Without Their Permission.
  • Pinboard. (2026). Bookmark of The Verge article on Grammarly.
  • Signalplus. (2026, March 5). Grammarly’s ‘Expert Review’ Draws Criticism for Using Deceased Scholars’ Names.
  • University of South Carolina. (2025, September 4). Guidelines for the Responsible Use of Artificial Intelligence (AI) in Graduate Research and Writing.
