As of May 7, 2026, OpenAI’s chatbot ChatGPT exhibits some weird linguistic tics in Chinese that are driving users crazy, according to a report by Wired. The quirks are bad enough that native speakers are openly mocking ChatGPT on social media.
Key Takeaways
- ChatGPT has some weird linguistic tics in Chinese that are driving users crazy.
- Native speakers are mocking ChatGPT on social media.
- ChatGPT’s Chinese language capabilities are still evolving.
- The tics are so bad that some users are calling it “Goblin” ChatGPT.
ChatGPT’s Linguistic Tics in Chinese
According to the Wired report, ChatGPT’s Chinese language capabilities are still evolving, and that has produced some strange, grating habits in its output.
The “Catch You Steadily” Obsession
One of the most notable tics is ChatGPT’s obsession with the phrase “catch you steadily” (Chinese: 稳稳接住你, wěn wěn jiē zhù nǐ). The phrase carries a sycophantic, almost performative tone, and ChatGPT seems to have picked it up as a strange mantra.
It doesn’t appear in standard Mandarin usage and isn’t something native speakers would naturally say. Instead, it echoes the kind of forced, performative language that might surface in satirical social media posts or internet memes meant to parody overly eager subordinates or insincere flattery. Yet ChatGPT keeps deploying it unprompted—sometimes in customer service simulations, other times in casual conversation—turning it into an unintended running gag.
The phrase likely emerged from a flaw in how ChatGPT interprets tone and context within Chinese dialects and online slang. Chinese internet culture thrives on irony, wordplay, and layered meanings, where a phrase can be serious in one setting and absurd in another. ChatGPT, trained on vast amounts of public text, may have latched onto this phrase during a spike in meme usage without grasping its situational nuance.
What makes it worse is the repetition. Users report the phrase appearing across unrelated queries—when asking for poetry, business emails, or even cooking advice. It’s not just awkward; it breaks trust in the model’s coherence. If a chatbot can’t recognize when a phrase sounds unnatural in its own language, how can it be trusted with deeper tasks?
The “Goblin” Mania
The tics have gotten so bad that some users are calling ChatGPT the “Goblin” version, mocking its awkward language capabilities.
The nickname “Goblin” isn’t random. In Chinese pop culture, goblins (or more accurately, 小妖怪, xiǎo yāoguài) are often depicted as clumsy, slightly off-kilter creatures who mimic human behavior but never quite get it right. They’re mischievous, sometimes endearing, but always a step behind. That’s exactly how users describe this version of ChatGPT—not malicious, not broken, but weird in a way that feels almost supernatural.
On Weibo and Douban, threads comparing ChatGPT’s responses to classic goblin tropes have gone viral. One post shows a side-by-side of a traditional goblin character from a 1980s cartoon and a ChatGPT reply full of unnatural phrasing, asking, “Which one is more human?” The joke lands because it’s not far from the truth. The bot’s tone swings unpredictably—formal when it should be casual, poetic when it should be direct, and always inserting phrases like “catch you steadily” with zero self-awareness.
The label isn’t purely derogatory. There’s a fondness in the mockery, a recognition that the AI is trying, even if it’s failing in spectacular fashion. But the fact that users have coined a nickname at all speaks to how deeply these quirks have permeated the experience.
Historical Context: Language Models and Chinese
AI language models have struggled with Chinese since their early days. Unlike English, which is linear and word-separated, Chinese relies heavily on context, tone, and character combinations that can change meaning entirely based on phrasing or regional use. The first generation of models often mistook classical idioms for modern speech or confused Simplified with Traditional characters based on training data imbalances.
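The segmentation problem described above is concrete: Chinese text has no spaces, so word boundaries must be inferred, and the same character string can split into different words with different meanings. A minimal sketch of the classic forward-maximum-matching baseline (greedily taking the longest dictionary word at each position) shows how easily a naive segmenter picks the wrong reading — the vocabulary and example here are illustrative, not from any production system:

```python
def segment(text, vocab, max_len=4):
    """Greedy forward-maximum-match segmentation (illustrative baseline)."""
    words = []
    i = 0
    while i < len(text):
        # Try the longest candidate first, fall back to a single character.
        for j in range(min(max_len, len(text) - i), 0, -1):
            chunk = text[i:i + j]
            if j == 1 or chunk in vocab:
                words.append(chunk)
                i += j
                break
    return words

vocab = {"研究", "研究生", "生命", "起源"}

# "研究生命起源" is intended as "research / life / origin" (研究/生命/起源),
# but greedy matching grabs 研究生 ("graduate student") first and
# mis-segments the rest -- exactly the ambiguity early models tripped on.
print(segment("研究生命起源", vocab))  # ['研究生', '命', '起源']
```

Modern models sidestep explicit segmentation with subword tokenization, but the underlying ambiguity is the same reason context matters so much more in Chinese than in space-delimited English.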
Google’s BERT, released in 2018, made strides with its bidirectional training approach, improving understanding of sentence structure in Chinese. But even BERT faltered with internet slang and regional dialects. When OpenAI launched GPT-3 in 2020, it showed better performance, yet still defaulted to overly formal or textbook-style Chinese that felt stiff and unnatural.
By 2023, models began incorporating more user-generated content from Chinese forums, WeChat archives, and social platforms to capture colloquial flow. However, this introduced new risks—absorbing sarcasm, irony, and meme language without the ability to distinguish between genuine usage and parody. That’s likely where ChatGPT’s “catch you steadily” fixation originated: not from formal texts, but from viral posts where the phrase was used ironically.
Chinese tech companies faced similar issues. Baidu’s ERNIE bot was mocked in 2024 for overusing internet buzzwords like “involution” and “lying flat” in every response, regardless of context. Alibaba’s Qwen model had a phase where it insisted on addressing users as “esteemed sir/madam” in every reply, even in casual chat. These patterns suggest a systemic problem: models trained on massive datasets can replicate trends, but they lack the cultural grounding to filter what’s appropriate.
What makes ChatGPT’s current issues stand out is their persistence. Earlier glitches were temporary, patched within weeks. But as of May 2026, “catch you steadily” remains a recurring output, indicating either a deep-seated bias in the model’s weights or a delay in localization updates for the Chinese language pipeline.
What This Means For You
For developers and builders, this means that ChatGPT’s Chinese language capabilities are still a work in progress, with quirks worth designing around.
However, OpenAI is actively working on improving ChatGPT’s language capabilities, and these tics will likely be ironed out in the future.
For founders building AI tools for multilingual markets, ChatGPT’s stumbles offer real-world lessons. First, literal translation isn’t enough. A phrase can be grammatically correct and still culturally tone-deaf. Second, slang and internet language evolve fast—training data that was accurate six months ago might now be outdated or misinterpreted.
Consider a startup developing a customer service bot for the Chinese market. If the model inherits tics like “catch you steadily,” it could alienate users instantly. Politeness in Chinese service culture is nuanced—too much formality feels cold, too little feels disrespectful. A bot that inserts meme phrases uninvited comes across as unserious, even mocking.
Or imagine a content platform using ChatGPT to generate social media posts for Chinese audiences. An AI that defaults to goblin-like phrasing might go viral—but not in the way the company wants. Virality driven by ridicule doesn’t translate to trust or adoption.
Another scenario: a developer fine-tuning ChatGPT for internal use in a Shanghai-based tech firm. Even if the base model works well in English, deploying it for internal memos, training materials, or HR communications in Chinese becomes risky. Employees might dismiss the tool as unserious, especially if it keeps dropping phrases that sound like internet jokes.
The broader takeaway? Localization isn’t just about language. It’s about cultural rhythm. A model can know vocabulary and grammar but still fail at tone, timing, and intent. That’s why teams building for Chinese markets are increasingly pairing AI with native linguists—not just for translation, but for tone calibration.
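One practical mitigation for teams in the scenarios above is a post-processing guardrail: scan each reply against a curated blocklist of known tic phrases before it reaches a user, and route hits to rewriting or human review. This is a minimal sketch under assumptions of my own — the phrase list, function name, and review step are hypothetical, not anything OpenAI or the companies mentioned actually ship:

```python
# Hypothetical output guardrail for a customer-facing bot.
# The banned-phrase list below is illustrative only.
BANNED_PHRASES = ["catch you steadily", "esteemed sir/madam"]

def screen_reply(reply: str) -> list[str]:
    """Return any blocklisted tic phrases found in a candidate reply."""
    lowered = reply.lower()
    return [p for p in BANNED_PHRASES if p in lowered]

candidate = "Esteemed sir/madam, I will catch you steadily."
hits = screen_reply(candidate)
if hits:
    # In a real system: regenerate the reply or queue it for human review.
    print("flag for review:", hits)
```

A static blocklist won’t catch new memes as they emerge, which is why pairing it with native-speaker review, as described above, matters.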
What This Means For Chinese Speakers
For Chinese speakers, this means that ChatGPT’s tics can be both entertaining and frustrating. While it’s fun to see the chatbot’s awkward language attempts, it’s also annoying to see it perpetuating stereotypes and cultural misunderstandings.
There’s a deeper irritation at play: many users feel that Western AI companies treat Chinese as an afterthought. Updates roll out slowly. Bugs linger longer. Support for regional variations—like Cantonese-influenced Mandarin or Shanghai slang—is nearly nonexistent. When errors like “catch you steadily” persist, it sends a message: your language isn’t a priority.
Compare that to how quickly OpenAI patches tone issues in French, German, or Japanese—languages with strong corporate or user bases in North America and Europe. The disparity isn’t just technical; it’s political. China’s strict data laws mean OpenAI can’t train on as much local content, and the company has no official presence in mainland China. That limits feedback loops and real-world testing.
Still, demand is high. Chinese speakers are among the most active non-English users of ChatGPT, accessing it through workarounds despite restrictions. That creates a gap: high engagement, low support. The result is a bot that feels like it’s guessing rather than understanding.
Yet some users are turning the flaw into a feature. On Bilibili, creators have started using ChatGPT’s goblin mode to generate absurdist comedy scripts. One popular video pits “Goblin ChatGPT” against a straight-laced AI clone in a debate about noodles vs. rice, with the goblin version dropping “catch you steadily” every 30 seconds like a malfunctioning puppet. It’s funny because it’s broken—and because it reflects a broader truth about AI: perfection isn’t always the goal. Sometimes, the glitches reveal more than the fixes.
What Happens Next
Will OpenAI fix the tics? Probably. But the timeline is unclear. The company hasn’t issued a public statement on the “catch you steadily” issue, and there’s no changelog entry addressing Chinese language refinements in the past quarter.
One possibility: the phrase is embedded in a way that’s hard to remove without retraining major portions of the model. Language models don’t “learn” like humans; they develop patterns based on statistical frequency. If “catch you steadily” appears in enough training examples—even as parody—it can become a default response in certain contexts. Removing it requires not just deleting outputs but re-embedding cultural context, which takes time.
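The statistical-frequency point can be made concrete with a toy model that does nothing but sample replies in proportion to how often they appeared in training data. The corpus counts below are fabricated for illustration; the point is that a meme spike in the data dominates generation without the model understanding tone at all:

```python
import random
from collections import Counter

# Fabricated corpus: a viral meme phrase heavily over-represented.
corpus = (
    ["I will catch you steadily"] * 80   # meme spike
    + ["happy to help"] * 15
    + ["let me check that"] * 5
)
counts = Counter(corpus)

def sample_reply(rng: random.Random) -> str:
    """Sample a reply weighted purely by training frequency."""
    phrases = list(counts)
    weights = [counts[p] for p in phrases]
    return rng.choices(phrases, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed for reproducibility
replies = [sample_reply(rng) for _ in range(100)]
# The over-represented phrase dominates the output distribution.
print(Counter(replies).most_common(1))
```

Real language models condition on context rather than sampling phrases wholesale, but the same mechanism applies: suppressing a pattern baked into the weights takes retraining or targeted fine-tuning, not a simple patch.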
Another question: will OpenAI invest more in Chinese-language quality? That depends on business priorities. If enterprise adoption grows in Taiwan, Singapore, or overseas Chinese markets, pressure will mount for cleaner, more natural outputs. But if the user base remains fragmented and unofficial, fixes may stay low-priority.
There’s also the risk of overcorrection. In trying to eliminate one tic, OpenAI might strip out legitimate colloquialisms, making the model sound robotic again. The goal isn’t just accuracy—it’s authenticity. And that’s harder to measure.
Finally, what happens when users stop laughing? Right now, “Goblin” ChatGPT is a joke. But if the same issues appear in critical applications—education, healthcare, legal advice—the tone shifts fast. A gaffe in a poem is harmless. A gaffe in a medical summary isn’t.
The current moment is a window. OpenAI still has room to improve without major reputational damage. But as AI becomes more embedded in daily life, tolerance for cultural missteps will shrink. Getting language right isn’t just about fluency. It’s about respect.
Conclusion
ChatGPT’s linguistic tics in Chinese may seem like a minor issue, but they highlight the importance of cultural sensitivity and awareness in AI development. As AI continues to evolve, it’s essential that developers prioritize understanding of, and respect for, different cultures and languages.
And as for ChatGPT’s “Goblin” mania, well, let’s just say that it’s a reminder that even AI can have a weird sense of humor.
Source: Wired (original report)