
Can ChatGPT be your therapist?

In a viral tweet, founder and MIT graduate Rob Morris explained that Koko tried an experiment where ChatGPT assisted with over 30,000 requests for mental health support for over 4,000 users.


By Ayushi Agarwal  Jan 12, 2023 10:06:05 AM IST (Updated)

As technology around the world advances, an inevitable question comes to mind — can robots replace humans? The continued growth of Artificial Intelligence has penetrated several aspects of human life, from self-driving cars to predicting loan risks.

An often debated topic in the AI community, however, has been the ability of machines to track, analyse and mimic human emotion. A recent incident of a company using OpenAI's GPT-3 to provide mental health support to individuals has led to a resurgence of such conversations online.
Koko, a nonprofit organisation and behavioral health platform, recently ran an experiment using ChatGPT to help those in emotional distress with mental health support.
ChatGPT, a chatbot launched by OpenAI in November 2022, is the tech industry's latest step in generative AI. The AI bot has been trending on Twitter and amassed over a million users within five days of its launch.
In a viral tweet, founder and MIT graduate Rob Morris explained that Koko tried an experiment in which ChatGPT assisted with over 30,000 requests for mental health support for over 4,000 users.
Morris observed that messages composed by the AI were actually rated higher than those written by human peers. Response times also went down by 50 percent to under a minute, he added.
"Once people learned the messages were co-created by a machine, it didn’t work," he tweeted.
This could be because simulated empathy feels "weird, empty", Morris explained. "They do not have the lived experiences that other humans have, which makes their response feel inauthentic," he added. The chatbot's effortless, quick response almost feels cheap.
Internet users have taken issue with Morris over the experiment, deeming it unethical due to the lack of informed consent. Twitter users have accused Morris of exploiting sensitive information from a vulnerable population to benefit his company.
Typically, in psychological research, researchers are required to give participants full information on all aspects of an experiment before they enter it. A detail such as the possibility of receiving help from a chatbot could have been relevant to a participant's decision to use Koko.
Morris' tweet implies that users who requested help were not aware from the beginning that the responses were generated by a machine. The decision was left in the hands of the user providing help. Morris later clarified that people were not being paired up to chat with GPT-3 without their knowledge.
How does it work?
Users can use Koko's Discord platform to ask for help or to provide help to others. Technically, the platform does not provide therapy, which must be administered by professionals with degrees.
In the experiment, users who were providing support were given the option to opt into the GPT-3 feature, under which the Koko bot would assist them in crafting their response. Morris clarified that people requesting help on the service were not chatting directly with the AI. Every message was filtered and supervised by human beings in a "co-pilot" approach, the founder explained.
In an example demonstrated by the founder, a dummy user sent a message for help, "I'm trying to become a better me and having a hard time. It's easier not to." Morris, who was playing the role of the helper, chose to use ChatGPT's help in replying to the message.
This is what ChatGPT generated in response to the initial 76-character message: "I hear you. You're trying to become a better person. And it's not easy. It's hard to make changes in our lives, especially when we're trying to do it alone. But you're not alone. There are people here who care about you and want to help you. I'm proud of you for making the decision to improve your life. It takes a lot of courage and strength. I send you love and support as you go through this journey."
Helpers — Morris in this case — can choose to send the AI-generated message as is, edit it to improve it, or skip it entirely.
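For readers who want a concrete picture of that workflow, the sketch below shows what a human-in-the-loop "co-pilot" of this kind could look like, assuming OpenAI's GPT-3 completion API. The prompt, model choice and function names are assumptions for illustration only; Koko has not published its actual implementation.

```python
# A minimal sketch of a "co-pilot" flow: GPT-3 drafts a supportive reply,
# and a human helper decides whether to send, edit, or skip it.
# Illustrative only -- not Koko's code or prompt.
from typing import Optional

import openai

openai.api_key = "YOUR_API_KEY"  # assumption: an OpenAI API key is available


def draft_supportive_reply(help_request: str) -> str:
    """Ask GPT-3 to draft a reply to a peer-support request (hypothetical prompt)."""
    prompt = (
        "You are assisting a peer supporter on a mental health platform.\n"
        f"Someone wrote: {help_request}\n"
        "Draft a short, empathetic reply:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3 model available at the time of the experiment
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()


def helper_review(help_request: str) -> Optional[str]:
    """The human helper reviews the AI draft: send it as is, edit it, or skip it."""
    draft = draft_supportive_reply(help_request)
    print("AI draft:\n" + draft)
    choice = input("(s)end / (e)dit / s(k)ip? ").strip().lower()
    if choice == "s":
        return draft  # send the AI draft unchanged
    if choice == "e":
        return input("Your edited reply: ")  # helper rewrites the draft before sending
    return None  # skip the AI suggestion and compose a reply from scratch
```

In Koko's case the choice was presented inside its Discord interface rather than a terminal, but the decision point Morris describes is the same: a human signs off on every message before it is sent.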
What do we take away from this?
While the ethical implications of Morris' experiment remain unsettled, it has still provided some interesting insight into where machines are headed in terms of mimicking human emotions. Woebot, another AI-powered mental health bot, has published data on the efficacy of support provided by bots once they have built a bond with users.
It is also scary to imagine a world in which people so desperate to be listened to turn to machines rather than their own loved ones. It could signal a further descent of humankind into being buried in phones and other devices, when connecting with family and friends is one of the few escapes we have from the digital world.
The Koko experiment shows the world that AI still needs more work to strike a balance: building empathetic machines without sacrificing human relationships.
