
Grammarly faces class action lawsuit over using writers as unconsented AI editors


Investigative journalist Julia Angwin filed a class action lawsuit against Grammarly's parent company Superhuman on March 11, seeking over $5 million in damages for using her identity, and those of hundreds of other writers, without consent in its AI "Expert Review" feature. The $12/month tool generated editing suggestions impersonating living and dead experts, including Stephen King and Neil deGrasse Tyson. Superhuman disabled the feature the same day, acknowledging it "missed the mark."

Why it matters

If successful, this lawsuit could establish a legal precedent for professional identity rights in AI products, creating liability risks for enterprise AI deployments that reference real experts or professionals. The case signals growing legal exposure for companies that use public figures' expertise to enhance AI products without explicit consent, and it could affect how enterprises leverage subject matter expert data in their own AI tools.

What to do

Audit your organization's AI tools and vendor contracts to identify any features that simulate or reference real professionals' expertise without documented consent. Establish clear policies requiring explicit permission before using employee or external expert identities in AI-generated content, even when the underlying model was trained only on publicly available work.
