


Research Aims to Give Robots Human-Like Social Skills
One argument for why robots will never fully measure up to people is that they lack human-like social skills.
But researchers are experimenting with new methods to give robots social skills to better interact with humans. Two new studies provide evidence of progress in this kind of research.
One experiment was carried out by researchers from the Massachusetts Institute of Technology (MIT). The team developed a machine learning system for self-driving vehicles that is designed to learn the social characteristics of other drivers.
The researchers studied driving situations to learn how other drivers on the road were likely to behave. Since not all human drivers act the same way, the data was meant to teach the driverless car to avoid dangerous situations.
The researchers say the technology uses tools borrowed from the field of social psychology. In this experiment, scientists created a system that attempted to decide whether a person's driving style is more selfish or selfless. In road tests, self-driving vehicles equipped with the system improved their ability to predict what other drivers would do by up to 25 percent.
In one test, the self-driving car was observed making a left-hand turn. The study found the system could cause the vehicle to wait before making the turn if it predicted the oncoming drivers acted selfishly and might be unsafe. But when oncoming vehicles were judged to be selfless, the self-driving car could make the turn without delay because it saw less risk of unsafe behavior.
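The article does not describe the model itself, but the turn-taking behavior it reports can be illustrated in a few lines. The Python sketch below is hypothetical: the features, weights, and threshold are invented for illustration and are not taken from the MIT system.

# Hypothetical sketch of the behavior described above: score how selfish
# an oncoming driver appears from observed behavior, then wait on a left
# turn if anyone looks too aggressive to trust. All numbers are invented.

def selfishness_score(gap_accepted_m: float, yields_per_km: float) -> float:
    """Toy score in [0, 1]; higher means a more selfish driving style."""
    tight_gaps = 1.0 - min(gap_accepted_m / 50.0, 1.0)   # accepts small gaps
    rarely_yields = 1.0 - min(yields_per_km / 2.0, 1.0)  # seldom lets others in
    return 0.6 * tight_gaps + 0.4 * rarely_yields

def should_wait_before_turn(oncoming_scores, threshold=0.5):
    """Delay the left turn if any oncoming driver is judged selfish."""
    return any(score > threshold for score in oncoming_scores)

# One courteous driver and one aggressive driver are approaching.
scores = [selfishness_score(40.0, 1.8), selfishness_score(8.0, 0.1)]
print(should_wait_before_turn(scores))  # True: wait for the aggressive one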
Wilko Schwarting is the lead writer of a report describing the research. He told MIT News that any robot working with or operating around humans needs to be able to effectively learn their intentions to better understand their behavior.
"People's tendencies to be collaborative or competitive often spill over into how they behave as drivers," Schwarting said. He added that the MIT experiments sought to understand whether a system could be trained to measure and predict such behaviors.
The system was designed to understand the right behaviors to use in different driving situations. For example, even the most selfless driver should know that quick and decisive action is sometimes needed to avoid danger, the researchers noted.
The MIT team plans to expand its research model to include other things that a self-driving vehicle might need to deal with. These include predictions about people walking around traffic, as well as bicycles and other things found in driving environments.
The researchers say they believe the technology could also be used in vehicles with human drivers. It could act as a warning system against other drivers judged to be behaving aggressively.
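That warning idea reduces to a threshold check on the same kind of score; the function and cutoff below are assumptions for illustration, not a description of any real driver-assistance feature.

# Hypothetical alert hook for human-driven cars, reusing the toy score above.
def aggression_warning(score: float, threshold: float = 0.7):
    if score > threshold:
        return "Caution: a nearby driver appears to be behaving aggressively"
    return None

print(aggression_warning(0.88))  # prints the warning for an aggressive driver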

Another social experiment involved a game competition between humans and a robot. Researchers from Carnegie Mellon University tested whether a robot's "trash talk" would affect humans playing in a game against the machine. To "trash talk" is to talk about someone in a negative or insulting way, usually to get them to make a mistake.
A humanoid robot, named Pepper, was programmed to say things to a human opponent like "I have to say you are a terrible player." Another robot statement was, "Over the course of the game, your playing has become confused."
The study involved each of the humans playing a game with the robot 35 different times. The game, called Guards and Treasures, is used to study decision making. The study found that players criticized by the robot generally performed worse in the games than humans receiving praise.
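The article reports only the direction of the result, but the comparison it describes is a simple two-group one: mean performance of players the robot criticized versus players it praised. A minimal sketch with invented scores:

# Hypothetical illustration of the reported comparison; the scores below
# are invented, not data from the Carnegie Mellon study.
from statistics import mean

criticized = [12, 11, 13, 12, 14]  # per-player results after trash talk
praised = [15, 17, 14, 16, 15]     # per-player results after praise

print(f"criticized mean: {mean(criticized):.1f}")  # 12.4
print(f"praised mean:    {mean(praised):.1f}")     # 15.4
# The study's finding corresponds to the criticized mean being lower.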
One of the lead researchers was Fei Fang, an assistant professor at Carnegie Mellon's Institute for Software Research. She said in a news release the study represents a departure from most human-robot experiments. "This is one of the first studies of human-robot interaction in an environment where they are not cooperating," Fang said.
The research suggests that humanoid robots have the ability to affect people socially just as humans do. Fang said this ability could become more important in the future when machines and humans are expected to interact regularly.
"We can expect home assistants to be cooperative," she said. "But in situations such as online shopping, they may not have the same goals as we do."
I'm Bryan Lynn.


Key Vocabulary

social ['səuʃəl]  adj. social, relating to society or socializing; n. a social gathering
expand [iks'pænd]  v. to increase, elaborate, expand, cause to swell
affect [ə'fekt]  vt. to influence, act on, move emotionally
revive [ri'vaiv]  vt. to revive, restore, recall, waken
departure [di'pɑ:tʃə]  n. leaving, setting out, divergence
fang [fæŋ]  n. a long, sharp tooth
release [ri'li:s]  n. release, transfer, publication; vt. to release, hand over, permit
avoid [ə'vɔid]  vt. to avoid, evade
statement ['steitmənt]  n. statement, declaration
vehicle ['vi:ikl]  n. vehicle, means of transport, instrument, medium
