The world would have been a far darker place had Nazi Germany beaten the US to building the world’s first atomic bomb. Mercifully, the self-defeating hatred of Adolf Hitler’s regime sabotaged its own efforts. A 1933 law sacking “civil servants of non-Aryan descent” stripped a quarter of Germany’s physicists of their university posts. As the historian Richard Rhodes noted, 11 of the 1,600 scholars dismissed had already earned, or would go on to earn, the Nobel Prize. Scientific refugees from Nazi Europe were later central to the Manhattan atomic bomb project in the US.
The agonising soul-searching of scientists over building nuclear weapons resonates strongly today as researchers develop artificial intelligence systems that are increasingly adopted by the military. Excited though they are about the peaceful uses of AI, researchers know it is a dual-use, general-purpose technology that can have highly destructive applications. The Stop Killer Robots coalition, with more than 180 non-governmental member organisations from 66 countries, is campaigning hard to outlaw so-called lethal autonomous weapons systems powered by AI.
The war in Ukraine has increased the urgency of the debate. Earlier this month, Russia announced that it had created a special department to develop AI-enabled weapons, adding that its experience in Ukraine would help make its weapons “more efficient and smarter.” Russian forces have already deployed the Uran-6 autonomous mine-clearing robot as well as the KUB-BLA unmanned suicide drone, which its manufacturer says uses AI to identify targets (although experts dispute these claims).