OpenAI’s red team: the experts hired to ‘break’ ChatGPT

Microsoft-backed company asked an eclectic mix of people to ‘adversarially test’ GPT-4, its powerful new language model

After Andrew White was granted access to GPT-4, the new artificial intelligence system that powers the popular ChatGPT chatbot, he used it to suggest an entirely new nerve agent.

The chemical engineering professor at the University of Rochester was among the 50 academics and experts hired to test the system last year by OpenAI, the Microsoft-backed company behind GPT-4. Over six months, this “red team” would “qualitatively probe [and] adversarially test” the new model, attempting to break it.

White told the Financial Times he had used GPT-4 to suggest a compound that could act as a chemical weapon, and had used “plug-ins” that fed the model new sources of information, such as scientific papers and a directory of chemical manufacturers. The chatbot even found a place to make the compound.
