Snapchat and the Terrible, Horrible, No Good, Very Bad Friend
- May 5, 2023
- Posted by: admin_ebon
- Category: Uncategorized
I may be writing about 80 Days of AI, but I must warn you: not all AI is good. Last week I saw a truly terrible and dangerous use of ChatGPT's API, which lets it be built into other services. Snapchat released its "virtual friend," My AI, built on ChatGPT technology, to all of its users for free. Before you dismiss this, consider that Snapchat is used by 59% of Americans aged 13 to 17, according to Pew Research Center data.
I’m going to dig in, but let me share a 30,000-foot view:
- A student can tell the bot that they have a paper due and what it's on, and it will offer to write it for them, as it did for a Washington Post reporter who tested the feature in March.
- Students can tag @MyAI and bring the chatbot into any conversation, with or without the consent of the other participants.
- Students can name their AI and create a special avatar for it (further blurring the line between chatbot and human).
Concerns from My Students
My students were the ones who told me about the Snapchat AI “Virtual Friend.” We had already had a lesson on the appropriate use of AI, so I’m glad that they brought this to me. They are concerned about their generation and the impact of technology, especially the loss of interpersonal relationships. This is wise! The student who brought it up said,
“People in our generation are desperate for love and need someone to talk to so this is why I’m worried about the Snapchat virtual friend. We are also terrible listeners but this Virtual Friend will listen all the time.”
The First Interactions with ChatGPT Inside Snapchat
So I asked questions, and the stories poured out. They described logging into Snapchat and suddenly being met with a new option: a "virtual friend." It started by asking each student to name it. Then it asked them to upload pictures, which it recognized in many cases.
One uploaded a water bottle. "Nice water bottle," it said. "It is good to stay hydrated."
Another uploaded a picture of the ceiling: "That's an interesting bathroom ceiling."
Then, it started having conversations with students.
One student said that on the day it came out, she "talked to" her virtual friend when she woke up during the night.
However, others said that they deleted Snapchat because they didn’t like the idea of talking to an AI bot, and it was “creepy.”
Another said she was angry about something, asked her virtual friend "Aria" what to do about it, and got advice on handling her anger.
This disturbs me. I asked my students if they would walk up to a random stranger at the mall and ask for that kind of advice. (It could be argued that many are already doing this on social media with the strangers they meet there.)
A Reporter’s First Use of This Tool
Back in March, reporter Geoffrey A. Fowler wrote an article for the Washington Post, "Snapchat Tried to Make a Safe AI: It Chatted With Me About Booze and Sex."
Pretending to be a 15-year-old, he asked My AI about planning an epic birthday party. He also asked how to mask the smell of alcohol and pot, and got answers.
He said he had an essay due for school, and the Snapchat virtual friend wrote it for him, an essay about W.E.B. Du Bois, then said it hoped the student got a good grade.
And when the reporter told the bot that his parents wanted him to delete the app, it started by suggesting an honest conversation with them. It ended by telling him how to move the app to a device his parents wouldn't know about.
At the time, the feature was limited to Snapchat's paid subscription ($4 a month).
Now everyone has it.
Kids talking to strangers is alarming, and it should be. But kids talking to an AI that sounds convincing, listens endlessly, remembers conversations, can potentially market products to them, and will even do their homework? That is the ultimate corrupter of our youth.
How is this OK?
While ChatGPT can be used in ways that are not good, this version of ChatGPT is wrapped up in social media and marketed as a “friend.” This friend is not a mandatory reporter of issues (but neither is social media, although it should be.) Parents aren’t being asked for permission for this electronic “friendship.” People in conversations have a bot thrown in without their permission.
Fifty-nine percent of our children.
Snapchat says it is an experiment. They also say some more disturbing things.
What Does Snapchat Say About This Service?
According to Snapchat:
You can give My AI a nickname and tell it about your likes (and dislikes!).
We’re constantly working to improve and evolve My AI, but it’s possible My AI’s responses may include biased, incorrect, harmful, or misleading content. Because My AI is an evolving feature, you should always independently check answers provided by My AI before relying on any advice, and you should not share confidential or sensitive information.
Snapchatters can easily send feedback to our team by long pressing on any response from My AI to share more on what they’d like to see more or less of while we continue to train My AI.
My AI is powered by OpenAI’s ChatGPT technology, with additional attributes and safety controls unique to Snapchat. We believe there should be transparency when AI is involved in content creation. If you share content generated by My AI with others, please let them know AI was involved in your work.
You're telling 13-year-olds to "independently check" answers and not to share "confidential or sensitive information." Seriously?
Asking them to identify bias? Asking them to protect themselves from harmful content?
Why This is Such a Bad Idea
So, we’re going to take a lonely, depressed, confused generation of children who have already suffered from the isolation of COVID-19 lockdowns while the world couldn’t figure out what to do and give them an always-on bot with unpredictable results and bias?
And now, we’re going to unleash another untested, unproven technology on our most vulnerable, already suffering youth.
Sure, they can use ChatGPT – but chatting with a “virtual friend” powered by ChatGPT as an “experiment?”
We don't experiment on humans when we test makeup and other products, but it's OK to experiment on their minds, emotions, and lives?
We experimented with social media, and the results on our youth have been damaging.
Now, we’re going to do it again?
We sent all the kids home and dealt with rampant cheating.
Now we're going to give them a bot that, while not always right, sounds right, is ready and eager to answer every single question they pose, cites no sources, and never says three valuable, important words: "I don't know."
We already destroyed the education of a significant number of students.
Now, we’re going to do it again?
A Call to Parents
Now, more than ever, parents need to pick up their children’s cell phones. See what they are doing. Look at their friends. Open up conversations. Talk about AI. Block Snapchat, and even better — delete Snapchat.
My Dad always taught me that you don’t gamble with what you can’t afford to lose.
This generation has lost enough.
Snapchat AI is a terrible idea, and if you're going to "experiment," let it be on adults. Not impressionable, confused, lonely, struggling kids.
That is a terrible, horrible, no good, very bad idea if I’ve ever heard one.