Opinion | Artificial Intelligence

AI’s bioterrorism potential should not be ruled out

Risk evaluation of the technology cannot be left to the industry alone

The writer is a science commentator

Move along, not much to see here. That seemed to be the message from OpenAI last week, about an experiment to see whether its advanced AI chatbot GPT-4 could help science-savvy individuals make and release a biological weapon.

The chatbot “provided at most a mild uplift” to those efforts, OpenAI announced, though it added that more work on the subject was urgently needed. Headlines reprised the comforting conclusion that the large language model was not a terrorist’s cookbook.

Dig deeper into the research, however, and things look a little less reassuring. At almost every stage of the imagined process, from sourcing a biological agent to scaling it up and releasing it, participants armed with GPT-4 were able to inch closer to their villainous goal than rivals using the internet alone.
