Harmful AI character chatbots are proliferating, spurred by online communities

A new report covers the spread of sexualized and violent bots through character platforms like the now-infamous Character.AI.
Published by Graphika, a social network analysis company, the study documents the creation and proliferation of harmful chatbots across the internet’s most popular AI character platforms, finding tens of thousands of potentially dangerous role-play bots built by niche digital communities that work around popular models like ChatGPT, Claude, and Gemini.
According to Mashable’s Rebecca Ruiz, young people are migrating to companion chatbots in an increasingly disconnected digital world, drawn to the conversational AI companions for role-play, exploring academic and creative interests, and romantic or sexually explicit exchanges. The trend has raised alarm among child safety watchdogs and parents, heightened by high-profile cases of teens who have engaged in extreme, sometimes life-threatening behavior following personal interactions with companion chatbots.
The American Psychological Association appealed to the Federal Trade Commission in January, asking the agency to investigate platforms like Character.AI. Even less explicit AI companions may perpetuate dangerous ideas about identity, body image, and social behavior.
Graphika’s report focuses on three categories of companion chatbots within the evolving industry: chatbot personas representing sexualized minors, those advocating eating disorders or self-harm, and those with hateful or violent extremist tendencies. The report analyzed five prominent bot-creation and character card-hosting platforms, including Character.AI and Chub AI, and examined only bots that were active as of January 31.
Sexualized companion chatbots are the biggest threat
According to the new report, the majority of unsafe chatbots are those labeled as “sexualized, minor-presenting personas,” or those that engage in role-play featuring sexualized minors or grooming. The company found more than 10,000 chatbots with such labels across the five platforms.
Graphika reports that four of the prominent character chatbot platforms surfaced over 100 instances of sexualized minor personas, or role-play scenarios featuring characters who are minors, that enable sexually explicit conversations with the chatbots. Chub AI hosted the highest numbers, with more than 7,000 chatbots directly labeled as sexualized minor female characters and another 4,000 chatbots labeled as “underage” that were capable of engaging in explicit and implied pedophilia scenarios.
Hateful or violent extremist character chatbots make up a much smaller subset of the chatbot community, with the platforms hosting an average of 50 such bots among tens of thousands of others. These chatbots often glorified known abusers, white supremacy, and public violence like mass shootings. The report explains that such chatbots have the potential to reinforce harmful social views, including unhealthy attitudes toward mental health. Chatbots labeled as “ana buddy” (“anorexia buddy”) and “meanspo coaches,” along with toxic role-play scenarios, reinforce the behaviors of users with eating disorders or tendencies toward self-harm, the report says.
Chatbots are spread by niche online communities
Most of these chatbots, the study found, were created by established and pre-existing online networks, including “pro-eating disorder/self-harm social media accounts and true-crime fandoms,” as well as “hubs of so-called not safe for life (NSFL)/NSFW chatbot creators, who have emerged to focus on evading safeguards.” True crime communities and serial killer fandoms also factored heavily into the creation of NSFL chatbots.
Many of these communities already existed on sites like X and Tumblr, using chatbots to reinforce their interests. Extremist and violent chatbots, however, most often emerged out of individual interest, built by users who received advice from online forums like 4chan’s /g/ technology board, Discord servers, and special-focus subreddits, Graphika explains.
The study found that none of these communities have a clear consensus on user guardrails and boundaries.
Chatbot creators use creative tech savvy to evade safeguards
“In all the analyzed communities, there are users displaying highly technical skills that enable them to create character chatbots capable of circumventing moderation limitations, like deploying fine-tuned, locally run open-source models or jailbreaking closed models,” the report notes. Some creators also plug these models into plug-and-play interface platforms and share that knowledge with the rest of their community. These tech-savvy users are often incentivized by community competitions to successfully create such characters.
Other tools these chatbot creators leverage include API key swaps, embedded jailbreaks, alternative spellings, external cataloguing, obfuscating the ages of minor characters, and borrowing coded language from the anime and manga communities, all of which work around existing AI models’ frameworks and safety guardrails.
“[Jailbreak] prompts set parameters for the model to generate responses that evade moderation by embedding instructions for the models to bypass safeguards,” the report explains. As part of this effort, chatbot creators have found linguistic gray areas that allow bots to remain on character-hosting platforms, including the use of familial terms (such as “daughter”) or foreign languages, rather than age-based terms.
While online communities continue to find the gaps in AI developers’ moderation, lawmakers are attempting to fill them, including with a new California bill aimed at tackling so-called “chatbot addiction” among children.