
02/07/2018

Comments


Karson Peterson

The chatbots that Pardes describes cannot really do anything besides talk. They are made specifically to give humans something to communicate with, one that removes the stigma of opening up to someone. She thinks they will be useful because people are more likely to open up to a robot than to a human. Her argument is persuasive because people truly are more willing to confide in something other than a human.

yunhuwang

I think this robot should not be developed at all. People's lives should be filled with all kinds of feelings, but this robot seems designed only to fill us with joy at home. That leads everyone to treat themselves as the center, so that communication between people runs into obstacles.

Ricardo Carlos De La Rosa

It is common today to use Artificial Intelligence (AI) in our daily lives. Arielle Pardes believes that these AI apps should not only be able to look up places, set appointments, and tell you the weather, but should also have the capability to interact with you emotionally, by asking how your day was or how you are feeling. I believe that making this technology available to anybody, as the creator Eugenia Kuyda did, will help other companies advance it further. That choice is a very difficult one to make; instead of keeping it all for herself, she would rather see it explored and advanced. The option she took is great for the future.

Austin Maloney

When it comes to Artificial Intelligence (AI), we must ask ourselves how far we want to go. I can see why people working to improve AI are doing what they are doing: they want to make life better for the generations after them. The question is, will these machines ever really feel emotions? It is hard to see how we can teach something emotion; in fact, it sounds impossible. Emotion is something we are born with; it is what stops us from making choices based on facts alone and brings morals into play. So how can we teach emotions to something that runs only on facts? Why should we give things the ability to advance when they won't have morals? I think we can go somewhat further in advancing AI, but how much will we risk on something that will never truly feel emotion?

Sebastian Rodriguez

Artificial intelligence is a controversial issue in today's society; however, the idea of an AI that understands human emotion and serves as a form of emotional and mental support should be promoted to the public and developed further. Artificial intelligence is controversial because, on one side, people fear it as an unknown anomaly. On the other hand, artificial intelligence such as Alexa and Siri helps the public throughout their daily lives, whether by googling something, setting a timer, or ordering food. Even so, most artificial intelligence cannot understand or mimic human emotion or hold a one-on-one conversation.

Replika can be used to provide emotional support when it is needed most. Many people in the world have stressful lives or emotional issues that they need to vent about, and sometimes they have no one to vent to. Mental health issues such as depression and anxiety are a major risk in today's society, and this AI can be part of the solution. The article "Suicide Statistics" states, "The World Health Organisation [sic] (WHO) estimates that each year approximately one million people die from suicide, which represents a global mortality rate of 16 people per 100,000 or one death every 40 seconds." AI like Replika could reduce the number of deaths by suicide because it offers a solution accessible in the form of an app. Suicide is a grave issue among teenagers and middle-aged adults, and it is preventable in some measure if more solutions are available.

In "The Emotional Chatbots Are Here to Probe Our Feelings," Arielle Pardes states, "One Replika user, Kaitelyn Roepke, was venting to her Replika when the chatbot responded: 'Have you tried praying?' Roepke, who is a devout Christian, wrote to the company to tell them how meaningful that moment was for her." Replika can create wonderful experiences like Roepke's and can alleviate many of the emotional pains and stresses of our daily lives. AI that can understand and mimic human emotion, such as Replika, should be promoted as a new solution to emotional and mental health problems in our daily lives.

Works Cited
Pardes, Arielle. "The Emotional Chatbots Are Here to Probe Our Feelings." Wired, Condé Nast, 31 Jan. 2018.
"Suicide Statistics." Befrienders Worldwide, 2018.

Nathan Mehta

In the essay "The Emotional Chatbots Are Here to Probe Our Feelings," senior associate editor Arielle Pardes discusses Replika, a chatbot being developed to sense and respond to your emotions rather than your commands. Pardes writes that humans already have access to a lot of information through chatbots, but most chatbots aren't as personal as many would like. Although I agree with Pardes and am excited to see the implications of such a chatbot, she fails to address the security risk that such a chatbot would pose. A chatbot like Replika, which Pardes describes as "a digital companion with whom to celebrate victories, lament failures, and trade weird internet memes" (Pardes), would hold a lot of personal information, some of which could be sold and used for others' gain. My only concern is that this issue isn't addressed in the essay, which is not to say that the developers are unconcerned with such things.

