Grammarly’s Expert Review Uses Authors’ Names Without Consent
Grammarly’s new AI feature, Expert Review, has sparked controversy by using the names of real authors without their permission, including well-known writers and journalists such as Nilay Patel, David Pierce, and Tom Warren.
The tool leverages these identities to lend unearned credibility to its AI suggestions. Now, amid backlash, Grammarly is offering an opt-out instead of an apology, raising serious questions about Grammarly’s privacy practices and AI ethics.
How Grammarly’s Expert Review Works
Grammarly’s Expert Review feature is designed to provide writing feedback that mimics the style and authority of recognized experts. The AI attaches their names to its suggestions, as if those experts were personally reviewing the text.
This happens without any consent from the authors involved; many only discovered their names were being used after the fact. The situation highlights the risks of AI cloning in writing tools.
The Initial Discovery and Backlash
Last week, colleagues found that Grammarly had turned me into an AI editor, using my real name without asking. The same happened to my boss, Nilay Patel, and to others.
Wired initially reported on this issue last Wednesday, and it affected many authors far more famous than us. Grammarly faced immediate criticism for its lack of transparency.
- Real names used without permission
- No prior notification to authors
- AI suggestions given false credibility
Grammarly’s Response to the Controversy
Grammarly has finally addressed the backlash, but it did not apologize or remove the feature. Instead, it is offering an opt-out process.
This means authors must actively request to be excluded, which many argue is insufficient. You can read more in our related article: Grammarly says it will stop using AI to clone experts without permission.
The Ethical Implications of AI Cloning
Using someone’s identity without consent is a serious ethical breach. It undermines trust in both the tool and the authors involved, and the practice can damage reputations and mislead users.
AI cloning blurs the line between human and machine contributions. It also raises legal questions about a person’s rights to their own name and likeness. Writers deserve control over how their identities are used.
Why Opt-Out Isn’t Enough
An opt-out system places the burden on authors: they must discover the misuse themselves and then take action. That is both unfair and impractical.
Grammarly should have sought permission first. An opt-in approach would respect authors’ rights; the current method is a reactive fix, not a proactive solution.
- Authors must find out they are affected
- They need to navigate Grammarly’s opt-out process
- No guarantee of immediate removal
What Authors Can Do to Protect Themselves
Authors should monitor where their names appear online and can set up alerts or use monitoring tools to track unauthorized use. If misuse persists, legal action may be necessary.
Ultimately, platforms must prioritize consent and transparency. For more insights, check our article: Grammarly says it will stop using AI to clone experts without permission.
Conclusion: Choose Ethical Writing Tools
Grammarly’s handling of this issue highlights the importance of ethical AI. Writers need tools that respect their rights and contributions, and opt-out policies are not enough: consent must come first.
For a writing assistant that values transparency, try Seemless. It ensures your work remains yours. Explore Seemless today for a more respectful writing experience.