
02/07/2018

Comments


Karson Peterson

The chatbots that Pardes describes cannot really do anything besides talk. They are made specifically to give humans something to communicate with, which removes the stigma of opening up to another person. She thinks they will be useful because people are more likely to open up to a robot than to a human. Her argument is persuasive because people truly are more willing to confide in something that is not human.

yunhuwang

I think this robot should not be developed at all. People's lives should be filled with all kinds of feelings, but this technology seems designed only to fill us with joy at home. That encourages everyone to treat themselves as the center, and it puts up obstacles to communication between people.

 Ricardo Carlos De La Rosa

It is common today to use Artificial Intelligence (AI) in our daily lives. According to Arielle Pardes, these AI apps should not only be able to look up places, set appointments, and tell you the weather, but also have the capability to interact with you emotionally, by asking how your day was or how you are feeling. I believe that making this technology available to anybody, as its creator Eugenia Kuyda did, will help other companies advance it further. That is a difficult choice to make; instead of keeping it all for herself, she would rather see it explored and advanced. The option she took is great for the future.

Austin Maloney

When it comes to Artificial Intelligence (AI), we must ask ourselves how far we want to go. I can see why people working to improve AI are doing what they are doing: they want to make life better for the generations after them. The question is, will AI ever really feel emotions? It is hard to see how we can teach something emotion; in fact, it sounds impossible. Emotion is something we are born with; it is what stops us from making choices based on facts alone and brings morals into play. So how can we teach emotions to something that only runs on facts? Why should we give things the ability to advance when they won't have morals? I think we can go somewhat further in advancing AI, but how much will we risk on something that will never truly feel emotion?


About They Say / I Blog

  • New readings are posted monthly, on the same issues covered in “They Say / I Say” with Readings, along with a space where readers can comment and join the conversation.
