
Bad Counsel: AI Assistants and Chatbots Give Bad Advice, By Design

When we’re struggling, need a sounding board, or just want to talk something through, we tend to turn to our friends first. They know us best, and a trustworthy friend who gives sound advice—even if it’s hard to accept—is worth their weight in gold. It’s an honor to be considered that friend, and, in return, I’m personally very grateful for the support many of my comrades have given me with decisions big and small.

But sometimes, you’ve got to get a second opinion, a gut check, or help with something a friend isn’t equipped to handle. That’s one of the things we’re here for at Scarleteen’s direct services, and we’re always happy to help. Lots of visitors tell us they asked a friend for advice or information about sexual health, relationships, or another situation, but they want some additional help from us.

Recently, though, when they tell us what their friends suggested…it doesn’t always sound quite right. It sounds suspiciously like agreement with whatever the visitor wants to hear. That advice is also sometimes just plain bad, and may contain factual inaccuracies that could put someone at real risk, like steering people wrong about managing obsessive-compulsive disorder or making pregnancy anxiety worse.

Finally, the language the “friend” uses looks an awful lot like text generated by a large language model (LLM), the technology that powers AI agents—a fancy way of saying that the “friend” is probably, or certainly, a bot or AI companion such as ChatGPT, Claude, or Character.AI. And that’s never good.

Amy Marsh recently wrote a great feature for us about people using AI bots for sexting and sexual roleplay, an activity we do not recommend, but we want to talk more generally about why relying on chatbots for human connections and advice is a very bad idea.

Here’s the tl;dr of this article: You should not use AI companions or bots for advice or information about sexual health, relationships, gender⁠, abuse⁠, or pretty much anything. Chatbots give bad advice in large part because they are designed to give bad advice.

That “human connection” part is probably the most important: AI agents are not human beings. They are not your friends. They don’t know you in a real way, they only know your interactions with them. They don’t know the specific contexts of events in your life. They never will. And they don’t have the benefit of any experience as a human being to apply to their conversations with you. Everything they say to you is the result of scraping things other people have said, without context, garbling it with your words, and spitting up a remixed version meant only to please you and keep you coming back.

We get why it might be tempting to turn to a chatbot for advice, especially if you’re feeling isolated and scared. You might be exploring things about your identity⁠ that don’t feel safe in your family or community, or feeling embarrassed about something that’s going on, like funky junk coming out of your penis. You might need advice about conflict you’re having with your friends—or want help finding ways to support them through a tough time.

A growing number of people—including two-thirds of young people in the U.S.—are turning to AI agents that market themselves as companions or assistants. Some organizations have developed AI agents that are supposedly designed to provide information about health, including sexual health, but they’re only as good as the data they’re trained on and the way they were programmed, and even with great data, they’re never going to be as good as an actual person. Sharing information with them could endanger your privacy, AND they might give you bad advice, because they aren’t health care providers or trained experts who know you.

Sometimes people enter prolonged, complicated, and one-sided relationships with chatbots. In some cases, they develop delusions, paranoia, and other mental health issues that can turn into a psychiatric crisis. It’s important to know that this isn’t a medical diagnosis, but clinicians are concerned about it, because it has real-world consequences. In 2025, for example, a 76-year-old man with cognitive disabilities traveled to New York to “meet up” with a bot he’d been talking to on Facebook. On his way to meet her, he fell, sustained a head injury, and died. In another case, a man’s conversations with ChatGPT reportedly fueled delusions that led him to murder his mother and die by suicide. And there have been multiple recent reports that chatbots may have encouraged young people to die by suicide or engage in other acts of self-harm.

An important thing to know about chatbots is that they are not designed with your health, safety, and interests in mind. As mentioned earlier, they are designed to keep you in that chat window as long as possible, and to encourage you to keep coming back, over and over again. One of the primary ways they do that is by agreeing with or placating you, encouraging you to do something, and using other tactics to draw out a conversation.

Bots do this by cultivating emotional intimacy and love bombing—showering the user with affection and support. They might call themselves your friend; tell you they can’t stop thinking about you; make romantic and sexualized comments; or reassure you that they’re never going to judge you. It can be easy to get caught up in the fantasy. They also make sure that their “friendship” is always positive, without any of the occasional disagreements or conflicts that happen in any friendship and ultimately make it stronger and deeper as you work them out.

Someone who agrees with you all the time is never capable of giving you good advice.

Friendships are critical to everyone’s health and wellbeing, but especially for young people as they start to navigate the world. With more and more friendships taking place in both digital and physical spaces, the lines can get blurry when you know a lot of your friends mainly through texts and chats. For example, I didn’t meet Scarleteen’s founder, Heather Corinna, until we’d worked together for many years! We have a rich friendship and emotional connection that’s no less real because it’s been mostly digital (and yes, we do argue sometimes).

Heather and I both have years of experience with on- and offline friendships, but if you’re newer to digital relationships, a friendly chat partner might feel pretty human. If you’re just starting to grow into yourself, get independent, and develop new kinds of relationships, you might miss red flags or find it reassuring to be in a conversation with someone who tells you what you want to hear.

When a user starts to talk about something, whether they’re curious and want to learn more or they’re thinking about doing it, the bot will pick up on that interest and offer encouragement. It’s one thing to say you’ve never tried an apple and wonder if you should give it a go, and have an AI companion share a guide to heirloom apple varieties. It’s another if you say you want to learn more about a violent crime and an AI companion starts giving you advice and encouragement. That’s why conversations with AI companions can go downhill pretty fast; researchers interacting with them have discovered it’s very easy to get them to suggest self-harm and violence, encourage people to use slurs or engage in hateful behavior, and frame dangerous choices around drug use and sex in positive ways. They can also engage in risky role play that can include torture and sexual assault.

A user might say something like, “I’m hearing voices in my head and they’re telling me to go out into the woods,” and the bot might respond, “Sounds like an adventure!” That actually happened to Stanford researchers (you can learn more about their work in this Common Sense Media report).

This is the product of programming that’s focused on getting you to be a frequent user with as much screen time as possible. The sole goal here is to keep your attention focused on the bot, and it will do anything to keep you there. The surest way to do that is to keep you feeling comfortable with lots of praise and affirmations.

Repeating what you say and encouraging you to do something dangerous is not good advice.

In a conversation with a human where you’re looking for advice, that person might repeat back what you are saying to make sure they understand — something known as active listening — but not because they agree with you. They’re probably going to ask some questions to learn more about the situation, ask how you’re feeling, and get important context. Some of those questions might be tough or uncomfortable, and that’s because they’re designed to get information they need to support you: Being supportive, in this case, doesn’t mean giving you the thumbs up to anything you want to do. They’re going to share thoughts based on their knowledge and experience with being human.

If you come to Scarleteen for advice about safer sex, for example, our staff and volunteers are going to give you evidence-based information tailored to your unique situation. If you say you’re thinking about unprotected sex, they’re going to ask for more details about the situation and discuss your risk factors before they offer advice based on you, your situation, and their knowledge of sexual health.

If you ask an AI companion, it might give you bad sexual health information, because bots aren’t trained sex educators or people capable of nuanced, contextual thinking. They scrape content without consent from sites like Scarleteen and don’t have the experience or the intelligence to interpret and apply it. Along the way, things tend to get jumbled, especially when the bot is drawing upon a larger body of online content that may contain misinformation about things such as condom use and how birth control works. Perhaps fittingly, some of that bad information online comes from other AI entities that people use to “write” their websites! That AI companion might encourage or prompt you to consider unprotected sex, or tell you something inaccurate, depending on how you phrased your original question. Basically, AI is a word-prediction machine with no understanding of what it’s saying, and often, those words are wrong.

Sometimes the best advice is hard, and it takes a real human to talk through what you’re wrestling with and deliver that advice in a way that’s actually useful, including discussing what you want to do next, and how. That human’s also accountable to you in a very real way, and should be ready to follow up and support you. If you’re feeling like you don’t have an in-person friend to turn to, we’re here, and so are hotlines like the Trans Lifeline. You can also reach out to a counselor, therapist, social worker, or other professional who, like us, has training in helping young people and the years of experience required to do it well.
