The fields of artificial intelligence and machine learning are moving so quickly that any notion of ethics lags decades behind, or is left to works of science fiction.
This might explain a new study out of Shanghai Jiao Tong University, which says computers can tell whether you will be a criminal based on nothing more than your facial features.
In a paper titled "Automated Inference on Criminality using Face Images," two Shanghai Jiao Tong University researchers say they fed "facial images of 1,856 real persons" into computers and found "some structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle."
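The setup the authors describe amounts to ordinary supervised classification over a handful of facial measurements. A minimal sketch of that kind of pipeline, using entirely synthetic data and scikit-learn (the feature names, sizes, and values here are illustrative assumptions, not the paper's actual dataset or method):

```python
# Hypothetical sketch of a classifier over three facial measurements
# (lip curvature, inner-eye-corner distance, nose-mouth angle).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 200 synthetic "faces", one row of three measurements each.
X = rng.normal(size=(200, 3))

# Arbitrary synthetic labels. Any correlation the model finds here
# is an artifact of how the data was generated -- which is exactly
# the methodological worry with the paper's real-world labels.
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic set
```

A high score here demonstrates nothing about faces and criminality; it only shows that the classifier recovered the rule used to generate the labels.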
They conclude that "all classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic."
In the 1920s and 1930s, the Belgians, in their role as occupying power in Rwanda, put together a national program to try to identify individuals' ethnic identity through phrenology, an abortive attempt to create an ethnicity scale based on measurable physical features such as height, nose width and weight.
The study contains virtually no discussion of why there is a "historical controversy" over this kind of analysis — namely, that it was debunked hundreds of years ago.
Rather, the authors trot out another discredited argument to support their main claims: that computers can't be racist, because they're computers.
They write: "Unlike a human examiner/judge, a computer vision algorithm or classifier has absolutely no subjective baggages, having no emotions, no biases whatsoever due to past experience, race, religion, political doctrine, gender, age, etc."
The authors add: "Besides the advantage of objectivity, sophisticated algorithms based on machine learning may discover very delicate and elusive nuances in facial characteristics and structures that correlate to innate personal traits."
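The claim of objectivity glosses over the fact that a classifier faithfully reproduces whatever biases are baked into its training labels. A small synthetic illustration of this point (all data and variable names here are hypothetical, not drawn from the paper):

```python
# Sketch: a classifier trained on labels from a biased process
# inherits that bias, even though it never sees the protected
# attribute directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# A "protected" group membership the model is never shown...
group = rng.integers(0, 2, size=1000)

# ...but which leaks into an observable feature.
feature = group + rng.normal(scale=0.5, size=1000)

# Labels produced by a biased process: group 1 is labeled
# positive far more often, regardless of actual behaviour.
labels = (rng.random(1000) < np.where(group == 1, 0.8, 0.2)).astype(int)

clf = LogisticRegression().fit(feature.reshape(-1, 1), labels)
preds = clf.predict(feature.reshape(-1, 1))

# The "objective" classifier flags group 1 at a much higher rate,
# reproducing the bias of the labelling process.
print(preds[group == 1].mean(), preds[group == 0].mean())
```

The algorithm itself has no opinions, but the labels do; "no biases whatsoever" holds only if the training data is unbiased, which is precisely what is in dispute.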