Getting emotional support from bots without feeling like a very silly person
It's the wave of the future! Just like the Matrix!
If this enterprise can’t raise enough money from listeners/readers, I swear to God, I’ll turn this show around. Don’t think I won’t, mister. If you’ve already donated, thank you. If not, go here, pick a level that works for you, then select DEPRESH MODE from the list of shows. And thank you.
When you know your therapist is a pile of code
In the last newsletter, I mentioned this emotional support app called Koko that secretly replaced humans with AI bots. Which is sneaky and bad. But what if the person using the service KNOWS that they’re talking to a bot and is okay with that?
NPR has a report that tells the story of a woman named Chukuruh Ali, who fell into a deep depression after a car accident made it harder for her to support her family.
So her orthopedist suggested a mental-health app called Wysa. Its chatbot-only service is free, though it also offers teletherapy services with a human for a fee ranging from $15 to $30 a week; that fee is sometimes covered by insurance. The chatbot, which Wysa co-founder Ramakant Vempati describes as a "friendly" and "empathetic" tool, asks questions like, "How are you feeling?" or "What's bothering you?" The computer then analyzes the words and phrases in the answers to deliver supportive messages, or advice about managing chronic pain, for example, or grief — all served up from a database of responses that have been prewritten by a psychologist trained in cognitive behavioral therapy.
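For the curious, the mechanism described above — scan the user's words, then serve up a prewritten response from a database — can be sketched in a few lines of Python. This is a toy illustration under my own assumptions, not Wysa's actual code; the keywords and canned responses here are made up:

```python
# Toy sketch of a keyword-matching chatbot that serves prewritten,
# therapist-style responses, as the NPR piece describes.
# All keywords and responses below are hypothetical examples.

RESPONSES = {
    "pain": "Chronic pain is exhausting. Let's try a short breathing exercise.",
    "grief": "Losing someone is hard. It's okay to grieve at your own pace.",
    "sad": "I'm sorry you're feeling low. What's one small thing that helped before?",
}
DEFAULT = "How are you feeling? What's bothering you?"

def reply(message: str) -> str:
    """Scan the user's words for known topics and return a canned response."""
    words = message.lower().split()
    for keyword, response in RESPONSES.items():
        if keyword in words:
            return response
    return DEFAULT
```

The real service presumably does far more sophisticated language analysis, but the basic shape — match the input, pull from a fixed bank of psychologist-written replies — is the same.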
…
Chatbots may not appeal to everyone, or could be misused or mistaken. Skeptics point to instances where computers misunderstood users, and generated potentially damaging messages.
But research also shows some people interacting with these chatbots actually prefer the machines; they feel less stigma in asking for help, knowing there's no human at the other end.
So here’s what I wonder: are we on a fast track to just getting used to AI bots being a substitute for humans? I wouldn’t think so. But then I think back to not all that long ago, when people were convinced an online bookstore would never work because it would lack the human touch of qualified booksellers recommending good reading. How could this Amazon dot com take a human’s place when the recommendations it makes are based not on one human’s experience but on algorithms crunching data?
And yet here we are.
I asked ChatGPT to explain why the AI robots will destroy us and it said:
I am not programmed to think that AI robots will inevitably destroy humanity. While it is true that advanced AI has the potential to cause harm if it is not designed and controlled responsibly, it also has the potential to be a great benefit to humanity if developed and used ethically. It is important to note that AI is a tool created by humans and it is ultimately up to us to ensure that it is used for the betterment of society.
So we know one thing the robots are capable of: TALKING SOME BULLSHIT.
Celebrities stepping away because of mental health ROUNDUP
Joe Trohman, guitarist for Fall Out Boy, is stepping away from his band.
Expressing his hope that the break would only be temporary, Trohman said: “Neil Young once howled that it’s better to burn out than to fade away.
“But I can tell you unequivocally that burning out is dreadful.”
Cosmopolitan says New Zealand PM Jacinda Ardern’s decision to step away has a lot to do with mental health.
And Anna Kendrick took a mental health break from all kinds of things following a breakup.
I have no snarky comments here. Good for them, I say.
I always thought it was because they could lift heavy barbells
If you pay attention to American politics - and I pray to God that you don’t - it’s easy to notice the rise of the “strongman” archetype in certain corners. Loudmouths, bullies, and aspiring dictators have all been up in this business in the past several years. The strongman (who is often fleshy and doughy and not literally strong at all) has succeeded in becoming a full-fledged dictator in other parts of the world over the years, of course.
But why are people so into these a-holes? Psychology Today has some theories:
1. ‘Ideal leader’ prototypes. Mirowska explains that our prototype of what a leader should look like is informed quite heavily by our past experiences, cultural upbringing, and general life exposure. If we are looking for a leader, we tend to pick the candidate who matches this prototype most closely. Often, this prototype adheres to a ‘strongman’ personality.
2. Moral foundations. Moral Foundations Theory, advanced by social psychologist Jonathan Haidt and his colleagues, holds that all human beings judge the quality of anything (including leadership) through the lens of two basic categories: individual foundations (putting individual needs in primary focus) and binding foundations (putting community needs in primary focus). Mirowska’s study hypothesized that a higher endorsement of binding foundations would make tyrannical leaders more appealing due to their defensive tendencies toward the group’s interests.
3. Worldviews. If people, to a large extent, see the world around them as dangerous, unpredictable, and threatening, it might predispose them to choose a tyrannical leader who, although rough and problematic, may be perceived as being able to do a better job maintaining the safety of the group.