So we're actually looking for conduct. Conduct being using the service to repeatedly or episodically harass someone,
using hateful imagery that might be associated with the KKK or the American Nazi Party.
Those are all things that we act on immediately. We're in a situation right now where that term is used fairly loosely,
and we just cannot take any one mention of that word, accusing someone else, as a factual indication that they should be removed from the platform.
So a lot of our models are based around, number one: Is this account associated with a violent extremist group? And if so,
we can take action. And we have done so on the KKK and the American Nazi Party and others.
And number two: Are they using imagery or conduct that would associate them with such a group as well?
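As a rough illustration of that two-part check, here is a minimal sketch. The group names, signal fields, and thresholds are hypothetical placeholders for this example, not Twitter's actual enforcement model.

import heapq
from dataclasses import dataclass, field

# Illustrative only: a stand-in list of designated violent extremist groups.
KNOWN_VIOLENT_EXTREMIST_GROUPS = {"kkk", "american nazi party"}

@dataclass
class AccountSignals:
    claimed_affiliations: set = field(default_factory=set)  # e.g. groups referenced in the profile
    uses_group_imagery: bool = False                         # e.g. hateful symbols in avatar or banner
    harassment_reports: int = 0                              # repeated or episodic targeting of someone

def should_review_for_enforcement(signals: AccountSignals) -> bool:
    # Part one: is the account associated with a known violent extremist group?
    affiliated = bool(signals.claimed_affiliations & KNOWN_VIOLENT_EXTREMIST_GROUPS)
    # Part two: does its imagery or conduct associate it with such a group?
    behaves_like_one = signals.uses_group_imagery or signals.harassment_reports >= 3
    return affiliated or behaves_like_one

An account flagged by either branch would still go to a human reviewer rather than being actioned automatically, which matches the conduct-over-mentions point above.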
How many people do you have working on content moderation to look at this?
It varies. We want to be flexible on this, because we want to make sure that we're,
number one, building algorithms instead of just hiring massive amounts of people,
because we need to make sure that this is scalable, and no amount of people can actually scale this.
So this is why we've done so much work around proactive detection of abuse that humans can then review.
We want to have a situation where algorithms are constantly scouring every single tweet
and bringing the most interesting ones to the top so that humans can bring their judgment to whether we should take action or not, based on our terms of service.
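A minimal sketch of that proactive-detection flow, assuming a scoring model exists: a scorer rates every tweet, the highest-scoring ones are surfaced first, and humans make the final call against the terms of service. The function and parameter names here are illustrative, not the real system.

import heapq
from typing import Callable, Iterable

def triage_for_human_review(
    tweets: Iterable[str],
    score_abuse: Callable[[str], float],  # assumed model: higher score = more likely abusive
    top_k: int = 100,
) -> list[tuple[float, str]]:
    """Return the top_k tweets most likely to violate policy, highest score first."""
    scored = ((score_abuse(text), text) for text in tweets)
    return heapq.nlargest(top_k, scored, key=lambda pair: pair[0])

# Human reviewers would then work through this ranked list and decide, per the
# terms of service, whether to take action on each surfaced tweet.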