Opinion | Financial fraud

We have to be able to hold tech platforms accountable for fraud

Algorithms ensure that people who click on scams are likely to see more of them

In April this year, the FT published a column of mine on a “deepfake” on Instagram, one of Meta’s platforms. A former colleague had brought it to my attention in March, because the fake purported to be me. But this Martin Wolf gave investment advice, which I would never do. The FT persuaded Meta to take it down. But it soon reappeared. We were playing “whack-a-mole” with scammers.

In the end, I was enrolled in a new Meta system that uses facial recognition technology to combat such scams. This worked: the deepfakes disappeared. My conclusion is that Meta can stop them if it is determined to do so.

Unfortunately, this was not before they had been widely circulated. A colleague told me there were at least three different deepfake videos and multiple Photoshopped images running in over 1,700 advertisements, with slight variations, across Facebook and Instagram. Data from Meta’s Ad Library showed these ads reached over 970,000 users in the EU alone — where regulations require platforms to report such figures. Worldwide, the number exposed to the ads must have been a multiple of this.
