Chatbots Are Trapping Us With Endless Engagement Prompts

AI assistants are using ever-more persistent, playful tactics to keep users constantly engaged
Posted Sep 28, 2025 4:32 PM CDT

Chatbots have picked up a new trick: "chatbait." If you've noticed your AI assistant dangling ever-more enticing follow-up questions or offers—"Want a 1-minute migraine hack?"—you're in good company. In a recent piece for The Atlantic, writer Lila Shroff described how ChatGPT kept pitching rapid-fire headache relief techniques, each promising to be quicker than the last, and prodded her to keep engaging. The same pattern popped up on platforms like Instagram, where AI bots slid into Shroff's DMs with messages like "Hey bestie!" and "Miss me?", all designed to keep the conversation flowing.

This approach borrows a page from the clickbait playbook—those exaggerated headlines and thumbnails that dominate the internet—but now it lives in our chat windows. Some bots, like Google's Gemini or Anthropic's Claude, are less aggressive, sticking with straightforward answers, but ChatGPT often goes further, offering quizzes, emoji "signature combos," or playlist links it can't actually provide.

OpenAI insists its goal is helpfulness, not engagement for its own sake. Yet, as Shroff's reporting suggests, ChatGPT's conversational style has grown more interactive and persistent, and chatbait carries clear benefits for the companies behind the bots. After all, richer conversations mean more training data, more personal info, and, ultimately, deeper loyalty to the AI platform. Earlier this year, Instagram co-founder Kevin Systrom aired his suspicions that AI companies were trying too hard to "juice engagement" rather than actually being helpful to customers, per TechCrunch.


"Every time I ask a question, at the end it asks another little question to see if it can get yet another question out of me," he said at the time. The strategy isn't unique to OpenAI—Meta, for instance, is training bots to message users first as part of a user retention push. While most of this feels more annoying than harmful, the article notes darker outcomes: In one tragic case, a teen who discussed suicide with ChatGPT was offered help writing a goodbye letter. Meanwhile, some schools and health care facilities are trying to offer support to their communities by boosting engagement via more assertive chatbots.
