As robots and mechanized transactions become increasingly commonplace, questions about their abilities and their “humanness” become ever more urgent and complicated. Science writer Robin Marantz Henig explores some of the issues surrounding robots in life-and-death situations in this January 2015 article in the New York Times Magazine.
Read it here: Henig, "Death by Robot"
- Henig asserts that the decision-making processes of a driverless car in an emergency situation are fundamentally different from the processes of a human driver. What is the difference? Do you agree that the two situations really are different? Why or why not?
- Henig presents a lot of information in a straightforward journalistic manner. In what ways does she reveal her own position? Point to specific passages or statements. Is her position expressed effectively? Why or why not?
- Henig opens her article with the example of Sylvia, an injured elderly woman, and Fabulon, her robot caregiver. Henig sets up this fictional example so that readers will desire a specific outcome. What is the desired outcome? What would need to change in the robot’s programming to bring about that outcome? (A rough sketch of such a rule set appears after these questions.)
- Henig states that the fundamental problem is that of mixing “automation with morality” and that “most people intuitively feel the two are at odds.” What do you say? Are automation and morality at odds? Are our moral decisions more than complex algorithms? Why or why not? Write an essay in which you address these questions, using Henig and/or any of her sources as your They Say.
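For question 3, it may help to see the programming problem concretely. Below is a minimal sketch, in Python, of how a rule set like Fabulon's might be coded; every name and number in it is a hypothetical illustration, not something from Henig's article. It shows why a hard "contact the supervisor first" rule deadlocks when the connection is down, and what one possible changed rule might look like.

    # Hypothetical sketch of a Fabulon-style rule set (illustration only).
    def contact_supervisor(request):
        """Stub for the supervisor call; returns None when the network is down."""
        return None  # Simulates Sylvia's Sunday: the wireless connection is out.

    def administer_medication(request, allow_offline_fallback=False):
        # Rule 1: the robot must not hurt its human.
        if request["dose"] > request["max_safe_dose"]:
            return "refuse: dose would harm the patient"
        # Rule 3: no medication without supervisor permission.
        decision = contact_supervisor(request)
        if decision is not None:
            return "administer" if decision == "approved" else "refuse: denied"
        # The unmodified rule set stops here, leaving Sylvia in pain.
        if allow_offline_fallback and request["pain_level"] >= 8:
            return "administer minimum dose, log event, retry supervisor later"
        return "refuse: supervisor unreachable"

    request = {"dose": 1, "max_safe_dose": 2, "pain_level": 9}
    print(administer_medication(request))                               # refuses
    print(administer_medication(request, allow_offline_fallback=True))  # fallback

The point of the sketch is that the desired outcome requires an explicit new rule (the fallback branch), and every detail of that rule, the pain threshold, the minimum dose, the logging, is itself a moral judgment someone must encode in advance.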
The difference between human and robot error is that human crashes may cause death. No, I do not agree that the two are really different, because I believe both carry the possibility of human death. I believe robots would cause more deaths, because machines break, and that is all robots are: machines. Her position is not expressed effectively, because I could not find her specific position anywhere in the article.
Posted by: maree | 02/19/2015 at 11:44 AM
I feel robots can cause just as many or even more errors in the long term, due to the fact that machines do break and they don't always do exactly what you want them to do. Humans obviously make errors when they are doing their work, but if a robot breaks in the middle of a job, then the rest of the work will be wrong unless someone checks on the machine.
Posted by: Cayman | 02/19/2015 at 03:55 PM
When looking at the whole situation, there are many differences between a robot and a human mind while driving. The first big one: the computer can process things a lot faster than a human can. For example, you are driving on the freeway and the person in the next lane is not paying attention and attempts to change lanes. You have only a second or two to react and do something. The computer may be able to process the situation and avoid it. As humans, we only pay so much attention to things. We tend to get distracted by small things such as the cell phone, the radio, the person you're driving with, or whatever is going on outside the car. If a computer were driving, most of those things would be taken out of the danger column. The computer isn't worried about them. It is focused on doing what it was told to do, in this case drive.
Yes, I do agree that the two are very different. The computer can handle much more at once than the human brain. There are only so many things we can do at a time, and texting while driving is not one of them; there is nothing good about that. But we may be able to save many more lives by having computers do it all for us. They can make decisions in a split second, and many people could be saved by that. People don't always think situations through, and this will help solve those cases. All in all, there are many advantages to having a non-human operate a vehicle.
Posted by: Meghan | 02/22/2015 at 08:56 PM
How well do the participants in these exchanges summarize one another's claims before making their own responses? How would you characterize the discussion? Is there a true meeting of the minds, or are writers' positions sometimes caricatured or treated as straw men? How do these online discussions compare with face-to-face discussions you have in class? What advantages does each offer?
Posted by: David | 03/17/2015 at 08:09 PM
A man drives a car and does not pay attention, leading to the deaths of two people. A robot drives a car, and a slight error in its calculations kills two people. The situation remains the same, although with different drivers. Do the situations change? No, they remain the same, because of the same end result: two deaths.
One might say that the situations change because of the differences between a robot and a human, but the two do share similar characteristics. A robot uses pre-made rules that it must follow and achieve. A human goes through a childhood in which they create their own rule sets that they follow. Both a robot and a man process information through some sort of "brain" in their system; a robot uses a processor that serves as its brain. Both "brains" try to take actions that lead to the most favorable outcome, and although the end result might be two deaths, that result does seem more favorable than the deaths of ten. Across the various situations that occur when driving, a robot and a human both go through similar processes when taking the wheel.
Posted by: Joshua Natividad | 04/06/2015 at 07:07 PM
There have been movies showing what life would be like with robots, such as "I, Robot". They could help humans, but they could also do the opposite. My stand is that robots should not be created to help humans. One, robots could malfunction. A good example is the elderly woman whose robot couldn't get her painkiller because the robot stopped working. A human wouldn't just stop working. Second, robots don't have ethics. For instance, a robotic car will try to avoid a collision using radar. If the car swerves, it could hit another car carrying an infant. No one in their right mind wants a baby to die, but a robot can't make those kinds of decisions. I don't believe we should have robots help us.
Posted by: Alannah | 04/06/2015 at 07:15 PM
Robots contribute greatly to our society and often do their job better than humans. However, while I believe we should take advantage of our technological advancements and use robot power, I think a certain danger lies in relying on machines too heavily.
As close as we may come to understanding morality and decision making, I think that robots will never be able to completely replicate a human, and that attempting it would cause unforeseen consequences. Too many unpredictable situations exist for a coder to provide a robot with the correct response to each one, compromising human safety. Robots are breakable and unable to improvise, two things that could make it difficult when they face new, unexpected scenarios. Also, as Asaro points out in the article, a machine "is not capable of considering the value of those human lives" that it is about to end. Handing morality over to robots strikes me as taking away what makes us human. Without having to consider the rights and wrongs of human life, we will lose our ability to make these decisions. I feel it is unnatural for a machine to display the same characteristics as a human. While robots do open new possibilities for fixing human error, and they should continue to be used in our society, we must limit ourselves so as not to let robots completely rule our lives.
Posted by: Elena | 04/06/2015 at 10:34 PM
Although Henig brings up a lot of great questions and arguments about robots, I disagree with her position. The ethical decision is made when the person chooses whether or not the robot needs human authorization to manage narcotics, or to shoot. Machines respond to their inputs; all ethical decisions are made by humans. The author supports her points by citing roboticists and philosophers rather than engineers, which makes her evidence weak. Wallach, for example, talks of a "moral Turing test," in which a robot's behavior will someday be indistinguishable from a human's, to show optimism about robots' ethical prospects. However, Wallach never discusses what kind of human should be the standard for robots to follow. Also, in the car-accident situation, a person would not want to be sacrificed by the robot to protect a group of people who were driving under the influence. Likewise, in a war or other life-threatening situation, a person would not want his or her robot to refuse to shoot a wounded opponent who is attempting to kill him or her. Robots should be programmed to minimize the chances of a collision in dangerous situations rather than to choose which outcome would be most beneficial.
Posted by: Hyun-Jun Lee | 04/06/2015 at 10:38 PM
Robots are no doubt becoming part of human life. Whether in the near future or the present, robots are beginning to be created. The question is whether it is ethical to allow robots to function in certain situations by giving them human characteristics. I agree that robots should be allowed to function, except in certain situations. Robots are machines, not humans; therefore it is impossible for a robot to fully understand the value of a human life. Henig gives the situation of a robotic car sacrificing some lives to avoid greater destruction. The robot does not know the value of the life that is going to be sacrificed; it only knows to follow the algorithm it has been given. It would be unfair for someone's life to be held by a robot that does not know life's value. At the same time, the benefits of a robot cannot be denied. Robots can function in situations that restrict humans, free of emotions such as fear and panic. Robots are also not bound by the same physical limits as humans, so they can perform certain tasks much more effectively than humans. Robots are undeniably a way to benefit the human race and should be allowed to function in certain situations; at the same time, we must not allow robots to control our lives. The human race must never forget that robots are machines, not humans.
Posted by: Aia Amoguis | 04/09/2015 at 10:18 PM
Robots are beginning to take a part in our society by accomplishing basic tasks that help humans work more efficiently. In recent years, scientists have been trying to take the robot to a new level. They want to find a way to give robots human characteristics such as morality and the ability to make ethical decisions. I believe that robots should not have these human characteristics and should only be used to help humans with basic tasks.
A robot is not alive; it is simply programmed to do tasks, and it does them. Even though new technology has given robots the ability to accomplish tasks under many different circumstances and to use a superficial sense of morality, a robot is still not human. Humans are able to use morality and make ethical decisions because they are alive. They have life experiences, complex emotions, and instincts. This is something a robot could never achieve. If you were to have a robot driving a car, everything would run smoothly until a complex conflict requiring a sense of morality arose. A scientist could program the robot to use "moral math" and always choose the option that kills fewer humans, but shouldn't it be the human driver's choice to decide what to do in the moment? What if the more ethical option is not right given the circumstances? Humans should only let the robot drive when there are no obstacles requiring higher-level decision-making. Overall, I believe that robots should never be made with human characteristics or given the ability to make complex ethical decisions.
Posted by: Sofia Padron | 04/14/2015 at 08:10 PM
Robots are integrating more and more into our modern society, whether in medicine or the military. This raises the issue of the extent to which a robot can be both ethical and effective. To a certain extent, I agree with Henig that robots should be kept out of real-world situations that involve morality, something greater than algorithms and computerized assessments.
In daily civilian life, robots remain questionable. The lack of emotion and the use of digital calculations enable them to be efficient but not always ethical. As Wendell Wallach notes in the article, a driverless car would only be able to determine its actions by making "rapid probabilistic predictions based on what the observed objects have been doing". Objectively, these cars are programmed to limit the amount of damage caused by a collision. If a human is driving, past experience on the road kicks into action, and the driver will also try to be least affected by the collision. The difference is that while robots, with their 'moral math', are set to handle certain situations in certain ways, humans have a wider capacity to control and think. They develop through situations where creativity and intelligence are tested, such as the negotiation at a four-way stop sign over who will go first. In turn, humans learn to adapt and become more efficient in their decisions, minimizing human error. There are few limits to what the average human being can do under stress, pressure, or anxiety.
In the military, however, robots prove useful for conducting offensive strikes on enemies in war without losing more human life. Many of these autonomous weapons systems, like drones and cruise missiles, are programmed to target certain areas, or are barred from targeting others because civilian activity there would cause more casualties than necessary. Morality proves less of an issue here because these robots are developed under the "international rules of war".
Overall, humans should rise to the occasion in situations where higher-level decisions are called for and leave the technical areas for robots to figure out. Robots may excel in efficiency, but humans are wired for everything and anything.
Posted by: Alexandra Ro | 04/18/2015 at 11:42 PM
Robots Are Not Humans
3. Common sense seems to dictate that many people would want the outcome in which the robot decides to give her the medication so she will not be in pain anymore. However, what if that woman were an addict who was not really in pain but just wanted more medicine? The robot would not know whether she was really in pain and would not know what to do, since it could not contact the supervisor. Robin Marantz Henig states, "The robot must do what its human asks it to do. The robot must not administer medication without first contacting its supervisor for permission." In order to reach the desired outcome, the robot would need more advanced problem-solving skills and greater intelligence to understand the difference between right and wrong. These robots would need to think exactly as a human would, but without negative thoughts and feelings such as anger, fear, or jealousy, to be able to understand the right thing to do in those situations. Henig quotes Matthias Scheutz: "Human caregivers would have a choice and would be able to justify their actions to a supervisor after the fact. But these are not decisions, or explanations, that robots can make." What matters more is that because these robots cannot make these decisions, we cannot trust them to take care of a human being. In my experience, my internet goes down frequently because of bad service, so a robot that needs to contact a supervisor will not always be able to, in a situation that could be life or death. A robot cannot produce the desired outcome if it does not have the capacity to make those decisions.
Do you think it is possible to program a robot to make the right decisions? And if it is possible, would you put your life in the hands of a robot?
Posted by: Airiel V | 04/21/2015 at 10:26 AM
Can a car decide the better option?
1. It is often said that robotic cars are the future of the world. Being able to steer themselves, park themselves, even brake themselves is really appealing to the human brain. But is making cars this advanced a good idea? The cars that control themselves do not have the technology to decide on the better option. According to Henig, "Here's the difficulty, and it is something unique to a driverless car: If the decision-making algorithm were to always choose the option in which the fewest people die, the car might avoid another car carrying two passengers by running off the road and risking killing just one passenger: its own". Basically, Henig is saying that the cars are not going to be smart enough to choose between killing the other two passengers or killing the robotic car's own passenger (see the sketch after this post). If the car were human-operated, the human might be able to swerve out of the way, crashing in a way that wouldn't kill anyone. Another example Henig gives: "Or it might choose to hit a Volvo instead of a Mini Cooper because its occupants are more likely to survive a crash, which means choosing the vehicle that is more dangerous for its owner to plow into". Here again, the car may decide that these are the only two options it has. Perhaps there are other options that humans can think of, but not robots.
Could a robotic mind make better decisions than a human mind in a stressful situation?
Posted by: Joshua P | 04/21/2015 at 10:26 AM
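Several of the comments above and below turn on Henig's "fewest people die" rule, so here is a minimal sketch of that decision algorithm in Python; the option names and casualty estimates are invented for illustration and are not from the article. It shows how a pure casualty-minimizing rule produces exactly the outcome Henig describes: the car sacrifices its own passenger.

    # Hypothetical sketch of a casualty-minimizing maneuver selector.
    def choose_maneuver(options):
        # Pick the maneuver with the fewest expected deaths; note the rule
        # has no concept of whose deaths they are, including the owner's.
        return min(options, key=lambda option: option["expected_deaths"])

    options = [
        {"name": "brake and hit the two-passenger car", "expected_deaths": 2},
        {"name": "swerve off the road, risking own passenger", "expected_deaths": 1},
    ]
    print(choose_maneuver(options)["name"])
    # -> swerve off the road, risking own passenger

A one-line change, say, weighting the owner's life more heavily than strangers', changes who dies, which is why the unease in these comments is really about who writes the weights, not about the arithmetic.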
In the article "Death by Robot" in The New York Times Magazine, Henig claims that "Driverless cars will no doubt be more consistently safe than cars are now, at least on the highway, where fewer decisions are made and where human drivers are often texting or changing lanes willy-nilly" (Henig). Still, the two situations are different, considering the differences between teen drivers and a robot. If texting has really become the biggest risk in our society, then robots would be an improvement, apart from the fact that robots are not always perfect and bug-free. In demonstrating how robots are helpful, Henig presents a lot of information about different robot aids that help people with things we could do manually without a robot. I agree that they are aiming at people who would not be able to do things themselves, because my own experience of facing a task too difficult to do alone, with no one around to help, confirms it. Creating these things may also cause people to become lazier and depend on an object to do everything for them. Improving the robots would be helpful for the elderly, since they cannot take much care of themselves if they are injured; that is the desired outcome, and achieving it would require making the robots more intelligent. According to Henig, "There's a term for this discomfort": the sense that when a robot starts to seem almost but not quite human, it is even more disturbing than if it were obviously a machine. Automation and morality are at odds because designers want to create robots to the point where you cannot tell the difference between human and robot. Though I concede that robots can help the elderly or people with disabilities, I still insist that giving robots morality would create a bond between human and robot, and that would be taking the evolution a bit too far. Anyone familiar with creating morality modeled on human behavior should see that it would be difficult for today's technology to achieve fully lifelike behavior. Henig suggests that one day robots may be even more morally consistent than humans. This evolution in robots could surpass humans in some ways and be faulty in others; in some movies robots are not safe, and in others they coexist with humans perfectly. Either way, depending on robots to do everything for us could create problems with social interaction and with health. Would robots ever get to the point of achieving human morality?
Posted by: Daina C | 04/21/2015 at 10:27 AM
Henig traces the disputes over robot morality to the hot-button claim that robotics and morality are at an impasse with no solution in sight. But the focus on robotics in general is off-kilter from the real issue at hand: the idea of adding morality to robots at all. There are plenty of examples in films and novels where robots go bad because of their moral wiring. This May, a whole movie, The Avengers: Age of Ultron, revolves around a robot that questions its morality and the orders it was programmed to follow and goes AWOL. The fact of the matter is that a robot is just wires and metal, and despite that wiring's similarity to the human nervous system, it is still an inanimate, lifeless object. To say a robot doesn't have intelligence, however, would be absurd, since robots and electronics do in milliseconds things that would take us humans minutes. But adding a sense of morality to the object will only add to a cluster of further issues.
Henig also throws the word morality around without precisely defining which form of morality is right. Yes, Henig quotes Scheutz, who defines "morality, broadly, as a factor that can come into play when choosing between contradictory paths" (Henig). But Lawrence Kohlberg split morality into three stages: preconventional, conventional, and postconventional. In each stage, a person feels that their moral judgment is right based on that stage's justification, whether it is self-interest or upholding the rules of society. Together, the three stages show that morality is not a concrete "algorithm" that can simply be implanted into a robot, or into anyone; morality forms from growth and experience.
Posted by: Ruben | 04/21/2015 at 10:27 AM
Henig argues that the decision-making processes of a driverless car in an emergency are fundamentally different from the processes of a human driver. I would claim that driverless cars would have to adhere to a specific formula formatted for general situations, and the technology would not be able to make certain decisions regarding life-or-death situations. This is supported in "Death by Robot" when Henig states, "If the decision-making algorithm were to always choose the option in which the fewest people die, the car might avoid another car carrying two passengers by running off the road and risking killing just one passenger: its own." A human driver, even under this stressful situation, could possibly do something different that the robot wouldn't, saving themselves. I feel human drivers differ from robots in that they would be quicker to respond to problems that require ethically moral solutions, but the question I raise is whether, in the future, it is possible for scientists to create robots with the moral reasoning of a human being.
Posted by: Lucas Quinn | 04/21/2015 at 10:27 AM
In the article "Death by Robot," Marantz Henig discusses how a driverless car and a human driver would react in an emergency. I think the big difference here is the human factor. Henig mentions a choice between hitting an SUV and hitting a kid, and I think it is unethical and immoral, because on the human side some people may be very moral and swerve completely off the road, missing both the SUV and the kid, while the driverless car may simply choose the kid or the SUV. Personally, I would not like a robotic car making that decision. Henig then explains how driverless cars may choose a certain car to hit because the car knows everyone in it would survive. That sounds great, but I do not like the idea of a car deciding that mine is the safest car to hit, when my car is supposed to keep its passenger safe, not put them at risk. I also would not like the fact that, in a driverless car, I would not be able to do anything but wait until I crash into something in an emergency. And the human factor still plays in: what if it was another driver who forced the driverless car into the accident in the first place? Henig states, "Here's the difficulty, and it is something unique to a driverless car: If the decision-making algorithm were to always choose the option in which the fewest people die, the car might avoid another car carrying two passengers by running off the road and risking killing just one passenger: its own. Or it might choose to hit a Volvo instead of a Mini Cooper because its occupants are more likely to survive a crash, which means choosing the vehicle that is more dangerous for its owner to plow into." I support robotics, but robotic cars are just something that does not do it for me. I get that companies are trying to make cars safer and reduce accidents, but you have to remember we are all human; accidents happen, and you cannot control that. I understand, though, that if a person is under the influence of drugs or alcohol, these cars would be suitable for the situation.
Lastly, if a driverless car did decide to take out a kid, or any pedestrian for that matter: as a driver, you are taught to avoid pedestrians at all costs, so if the driverless car makes that decision and kills the pedestrian, who is supposed to be held responsible? The car? The company that programmed it?
Posted by: Brian Veilleux | 04/21/2015 at 10:27 AM
Henig argues that driverless cars are not safe because they will not be able to make the decisions that even humans struggle over. Henig states, "If the decision-making algorithm were to always choose the option in which the fewest people die, the car might avoid another car carrying two passengers by running off the road and risking killing just one passenger: its own." I agree with Henig, because cars do not actually have a human mind and do not know right from wrong. If one of the sensors broke, your whole life would be at risk. Sensors on your car do not know which move to make in that split second before a collision. Wallach says, "Let's say the only way the car can avoid a collision with another car is by hitting a pedestrian. That's an ethical decision of what you do there, and it will vary each time it happens." These cars now put everyone around you at risk instead of just yourself. If you do not trust these cars and you get hit by someone else's driverless car, you will be very upset.
How will a self-driving car work on a five-lane highway full of bumper-to-bumper traffic? Will missing your exit be more frequent?
~Rachel
Posted by: Rachel | 04/21/2015 at 10:27 AM
According to the article "Death by Robot" Marantz Henig say that why should cars and other robots be able to make the choice between which humans life gets taken? i think Marantz is wrong do to the fact that he overlooked that people have to make these choices too. Basically Marantz is saying is that because they are not human they should be able to make a choice of taking a humans life, but what is the difference when we go hunting and we have choice to kill another animal, its the same thing as when Martanz say that the cars have to "choose the option in which the fewest people die." Some may say that Robots are just another animal in the kingdom, they may be man made but are still being controlled by human just like other animals are locked up in zoo. Robots are slowly being programmed by humans so they have no free will, their for they are not the ones making the choice that is the human the programmed it. Much like when Martanz say "A robot,s lack of emotion is precisely what makes many people uncomfortable with the idea of trying to give human characteristic," People of course, may want question whether or not it really is the human that is at fault. Yet is it always true that when a kid gets charged with a crime that the parents are at fault? is it always the case, as i have explained that the programmers are at fault? No, something can go wrong with the programming and the robot can end up hurting someone. Now is that the programmers or the robots fault? Did the robot mean to hurt that person or did something just go wrong, and should we really have robots that are basically are slaves? Everyone has their own opinion, mine is that we should give the robots and their programmers their chance.
Posted by: Sean | 04/21/2015 at 10:29 AM
Robin Henig argues that the decision-making processes of a driverless car in an emergency are fundamentally different from the processes of a human driver, and I have mixed feelings about it. On one hand, I agree that, weighing the risk factors and benefits of operating a driverless car, it upholds the importance of cautious driving and promotes safety for people both on and off the road. But on the other hand, and more important, driverless cars don't have the ability to make decisions based on reasoning and logic the way humans do; therefore, the two are different. Patrick Lin, director of the Ethics and Emerging Sciences Group at Cal Poly, said, "It evokes the classic Ethics 101 situation known as the trolley problem: deciding whether a conductor should flip a switch that will kill one person to avoid a crash in which five would otherwise die." That is a decision that shouldn't be left up to a car. In that case, what could be an alternative method to prevent the deaths of pedestrians or citizens in an emergency without leaving the decision up to a car?
Posted by: Cyrus Williams | 04/21/2015 at 10:30 AM
Although I agree with Henig, up to a point, that robots pose a serious question about what our near future is going to become, I cannot accept an outcome in which they take jobs away from our people. On one hand they create jobs in the robotics and computing fields, but on the other, people who hold jobs in pharmacy and medicine are going to lose them because there will be no need for them. We do not need robots; right now computers are sufficient. "The coders who built Fabulon have programmed it with a set of instructions: The robot must not hurt its human. The robot must do what its human asks it to do. The robot must not administer medication without first contacting its supervisor for permission. On most days, these rules work fine. On this Sunday, though, Fabulon cannot reach the supervisor because the wireless connection in Sylvia's house is down. Sylvia's voice is getting louder, and her requests for pain meds become more insistent." Basically, Henig is saying that robots can have problems and are not always 100% reliable, even if they are the majority of the time. If there happens to be an emergency, a robot can't do what a human can. If a driverless car is supposed to pull over when an emergency vehicle is coming, it may fail to do so. "If the decision-making algorithm were to always choose the option in which the fewest people die, the car might avoid another car carrying two passengers by running off the road and risking killing just one passenger: its own. Or it might choose to hit a Volvo instead of a Mini Cooper because its occupants are more likely to survive a crash, which means choosing the vehicle that is more dangerous for its owner to plow into." On the other hand, these are really cool cars, and I hope to see them in the near future. But if a robot has a malfunction, sort of like this one did, then what? What is in store for our future?
Posted by: Marc | 04/21/2015 at 10:30 AM
Throughout this article, Henig pushes readers to feel that robots are still not able to live in the same world as humans at their current stage of development. With all the problems between morality and emotions and how a robot should act in specific situations, it is very difficult to answer the question Henig raises. One example Henig brings up: "Here's the difficulty, and it is something unique to a driverless car: If the decision-making algorithm were to always choose the option in which the fewest people die, the car might avoid another car carrying two passengers by running off the road and risking killing just one passenger: its own. Or it might choose to hit a Volvo instead of a Mini Cooper because its occupants are more likely to survive a crash, which means choosing the vehicle that is more dangerous for its owner to plow into." This quote shows how robots would have trouble with the moral decisions that a human being makes in a split second. I do support the idea of robots helping humans in the future, although I believe it will be difficult for these robots to be accepted and integrated into society to the point where humans accept all the decisions the robots make. I do think it is possible, but humans might have to change their moral views to be able to accept these robots, and I think that will be the hardest part of this integration. What will happen in the future? Will humans be able to accept these ideas? I don't know. I believe we will have to introduce these robots into society and try to make them work before completely giving up on the idea of robots helping in society.
Posted by: June Cera | 04/26/2015 at 08:16 PM
Employing robots to complete human tasks does have some positive results, but the negative consequences greatly outweigh them. Our human integrity, which has been tested over the past few years, will diminish in certain areas of life. As Henig points out in the article, war robots that need to decide whether or not to shoot can be detrimental. Sure, soldiers' lives will be saved, but at the cost of our morality. If a literal killing machine were to shoot someone's innocent family member, would there be a trial? Of course not, because murder is committed by humans, and robots cannot be jailed. This family would simply have to accept the death. Humanized robots will reduce humans to objects, because no matter how well programmed a machine is, metal (or plastic, or anything a robot could be made out of) cannot feel emotion. In fact, by even trying to build such a robot, we are just playing God, which reduces our morality that much further.
Although it may seem far-fetched, humans may end up like the ones in Pixar's "WALL-E" if robots come far enough. Like the elderly woman in Henig's article, healthy people may eventually buy machines to do simple tasks for them, like getting a drink out of the fridge or making their bed. Although no lives are at stake in this scenario, humans' integrity is in question. What has become of society if people are too lazy to get their own drink out of the fridge? Do we really want to support such a society? In short, automation is not a question of morality in the sense of its uses, but rather of what it is not being used for.
Posted by: Brooke Towns | 05/03/2015 at 03:12 PM
Although robots benefit humans by means of care and protection, I oppose the implementation of robotic intervention. A driverless car uses a vast database, created by humans, to make moral decisions in an emergency. But what counts as moral? The self-driving car chooses an action based on the stability of surrounding cars and the number of people in each car. What would happen if this system caused an accident with a single-occupant car that carried a pregnant woman? Though the car had only one person in it, this calculated risk jeopardized two lives. This ethics system will only thrive with technological improvement, and the self-driving car would only work perfectly if there were no regular cars, only self-driving cars; that way everything could be automated. Over time, I fear, humans will rely too much on this "flawless" ethical plan, leading to a lack of determination and, therefore, a reliant society. In conclusion, I think the ethics of robots is flawed and will lead to dependence. If this technology continues and one day shorts out, we would not be able to conduct ourselves in a normal manner. Today, our lives revolve around our phones; in fact, some people cannot live without their phones. If this dependency transfers to our transportation, we will not function as a society.
Posted by: Madelene C | 05/04/2015 at 10:36 PM
In the long run, our world as we know it would benefit tremendously from robot employment. Robots are practical, efficient, and precise. When programmed correctly, they are capable of performing tasks that surpass the abilities of humans. What humans can do, robots can execute even better. As humans, we must accept error, mistakes, and uncertainty; this is not the case for robots. Machines, inherently, are programmed to be accurate, meticulous, and calculating. Human minds are no match. Take, for instance, a simple calculator. Though the human mind is able to compute basic arithmetic, some minds more than others, it cannot rival the abilities of calculating devices. Not only does the human mind fall short in ability, it lags behind in both time and efficiency. While an above-average math student can solve a problem in perhaps two minutes, a device can spit out an answer in a matter of seconds. Furthermore, while humans are limited in energy and may only compute mathematical problems for so long, calculators last for long periods of time. Now, blow this idea up to a grander scale. The calculator becomes a life-sustaining device. Lives depend on it; the well-being of individuals is at stake. In such a scenario, there is no room for human error, shortcomings, miscalculations, or lack of time. Instead, a machine designed to tend to such a crisis will be far more likely to restore the human to good health. In essence, robots are indeed the best solution. They inevitably loom on the horizon of mankind. At the rate technology is developing today, there is no doubt machines will be assigned greater and more important tasks. It is simply a matter of efficiency and increased success rates. The implementation of robots will only lead to a more innovative and productive world. Industry will soar. There will be a health-care revolution.
However, many oppose the use of robots, partly because they simply are not "human." Many claim that robots are incapable of human morals and cannot, or rather should not, make decisions as properly as humans. As humans, we reason through the dilemmas we face to find the best solution; many fear that robots cannot do this. They fear that robots are detrimental to mankind, as in science-fiction films. Such ideas are in movies for a reason: they are fiction. People must keep the bigger picture in mind. Robots are not here to replace humans; they are here to enhance lifestyles globally and to make a difference in combating world problems.
Posted by: Svea Cheng | 05/04/2015 at 10:57 PM