Why Did Grammarly’s AI Expert Persona Feature Fail?

Oscar Vail is a distinguished technology expert whose career has been defined by a relentless pursuit of innovation in robotics, open-source ecosystems, and the rapidly shifting landscape of artificial intelligence. As a frequent commentator on the intersection of human creativity and machine learning, he has become a leading voice in the debate over digital persona rights and the evolution of professional writing tools. In this conversation, we explore the recent turbulence surrounding persona-based AI features, focusing on the lessons learned from failed implementations and the ethical frameworks required to protect creators. We delve into the complexities of revenue sharing for digital likenesses, the delicate balance between helpful automation and intrusive monetization, and the potential for AI to foster deeper connections between authors and their audiences without compromising the integrity of the original voice.

When an AI feature designed to mimic professional personas fails to provide value to both the creator and the end-user, how should a platform pivot? What specific metrics or feedback signals indicate that a persona-based tool has become suboptimal or intrusive for the daily writer?

When a feature like “Expert Review” fails, the platform must immediately pivot toward transparency and user control rather than doubling down on automation. In the case of Superhuman and Grammarly, the CEO admitted the feature was suboptimal because it didn’t deliver value to either side, which is the ultimate red flag for any developer. You know a tool has become intrusive when users spend more time dismissing pop-up boxes and premium pitches than interacting with the suggested edits. We look at engagement metrics: if a user consistently ignores red-highlighted suggestions, or if the AI loses the prose’s context, misinterpreting a writer’s unique stylistic flair as an error, the tool has lost its utility. A successful pivot means returning to the role of a “quiet assistant” that prioritizes the writer’s original intent over aggressive AI-driven interruptions that block the screen.

Using an expert’s name and perceived thinking style to guide AI suggestions can lead to legal challenges regarding persona rights. What specific steps should companies take to secure permission, and how do you effectively structure a revenue split between the software platform and the living expert?

The era of “ask for forgiveness, not permission” is ending, as evidenced by the class action lawsuits facing companies that use names and perceived thinking styles without authorization. To move forward ethically, companies must treat these experts as partners, securing formal licensing agreements that detail exactly how an LLM can utilize their body of work. A fair revenue split should mirror the “YouTube model,” where the platform provides the infrastructure but the expert retains ownership of their digital likeness and earns a percentage of the subscription or usage fees generated by their persona. This ensures that when a user asks for advice in the style of a specific journalist or author, that creator is compensated for the decades of experience the AI is synthesizing. Currently, platforms like Gemini can mimic writers like Nilay Patel or Stephen King in seconds, but without a formal revenue-sharing structure, this is essentially a “pirate persona” economy that devalues professional expertise.

If a writing assistant evolves into a platform similar to YouTube, what incentives must be in place to convince experts to opt-in? How would this model provide more protection for a creator’s brand than current LLMs that already replicate famous voices without any formal compensation?

The primary incentive for experts to opt-in is the promise of a “persistent fan connection” and a sustainable way to monetize their intellectual property in a world where LLMs are already scraping their content for free. By joining a formal platform, an expert gains a level of control that the open web doesn’t provide, such as the ability to curate the data their digital clone is trained on to ensure accuracy. This model protects a brand by offering an “official” version of an AI persona, which carries a seal of authenticity that a generic ChatGPT prompt lacks. It transforms the AI from a competitor into a distribution channel, allowing a prominent book author to reach millions of writers simultaneously while maintaining the integrity of their specific editorial voice. If the platform handles the legal and financial heavy lifting, it becomes a win-win where the creator is finally paid for the influence they’ve spent a career building.

Many users find that aggressive AI pop-ups and premium pitches can distract from the core utility of a writing tool. How do you balance the need for monetization with the goal of being a quiet assistant, and what are the risks of AI losing necessary context in prose?

The risk of losing context is the greatest threat to a writing assistant’s credibility; if the AI suggests a fix that changes the meaning of a sentence or ignores the nuances of a technical news story, it becomes a liability rather than an asset. Balancing monetization requires moving away from intrusive pop-ups that obscure the text and toward a more integrated, “hover-to-reveal” style of interaction that respects the writer’s flow. When a tool becomes too aggressive in its pitches for premium assistance, it triggers a “slippery slope” where the user’s primary emotion is frustration rather than empowerment. We’ve seen that users are willing to pay for tools that genuinely improve their output, but when the AI’s suggestions feel “off” or confused, the push for monetization feels predatory. The goal should be to remain invisible until the moment a user actually needs a spelling or grammar assist, maintaining a respectful distance from the creative process.

Established authors and journalists often struggle to maintain persistent connections with their audience in a crowded digital market. Could persistent AI personas actually bridge this gap, and what specific safeguards are necessary to prevent these digital clones from misrepresenting a creator’s actual values or writing style?

Persistent AI personas could close this gap by acting as a 24/7 editorial bridge between a celebrated author and an aspiring writer seeking their guidance. However, the safeguards must be rigorous, including a “human-in-the-loop” verification process where the creator can test and audit their digital clone’s responses before they go live. There is a real danger of these clones taking a creator’s name “in vain” and producing output that contradicts their actual values or professional standards. To prevent this, platforms need to implement strict parameters on the LLM’s creativity, ensuring it sticks to the specific “thinking style” and editorial philosophy documented in the expert’s actual work. Without these protections, a digital persona is just a sophisticated parody that risks damaging the reputation of the very expert it was designed to emulate.

What is your forecast for the future of AI persona development in the writing industry?

I forecast that the writing industry will move toward a fragmented marketplace of verified digital identities, where “persona rights” become as legally significant as copyright or trademarks. We are currently in a “whack-a-mole” phase where unauthorized clones are popping up across various LLMs, but eventually, we will see a consolidation into platforms that prioritize expert permission and direct compensation. The “YouTube-ification” of writing assistance is inevitable because experts are desperate for new ways to drive connection in a crowded market, and users are hungry for guidance from people they admire. While the initial attempts, like those at Grammarly, were clumsy and legally fraught, the underlying demand for high-level, persona-based editing is too strong to disappear. Success in this space will depend on whether tech companies can learn to treat creators as partners rather than just data points for their next model.
