A secret program under development by the U.S. military is using artificial intelligence to detect potential nuclear missile attacks before they occur.

According to Reuters, numerous U.S. officials speaking on condition of anonymity say funding is increasing for several classified research projects tasked with utilizing AI to anticipate nuclear missile launches.

The project’s aim is to create a system that would quickly sift through massive amounts of data to find indicators of attack, including satellite imagery that could show a mobile launcher preparing to initiate a nuclear strike.

“In order to carry out the research, the project is tapping into the intelligence community’s commercial cloud service, searching for patterns and anomalies in data, including from sophisticated radar that can see through storms and penetrate foliage,” Reuters writes.

The AI system would then alert U.S. analysts, who would verify the data before handing it off to senior military officials.

“Forewarned, the U.S. government would be able to pursue diplomatic options or, in the case of an imminent attack, the military would have more time to try to destroy the missiles before they were launched, or try to intercept them,” Reuters writes.

A U.S. official stated that the program, which remains in its infancy, has already produced a pilot project aimed specifically at North Korea.

Another U.S. official said an early prototype system for tracking mobile missile launchers is already in use by the military.

“This project involves military and private researchers in the Washington D.C. area,” Reuters added. “It is pivoting off technological advances developed by commercial firms financed by In-Q-Tel, the intelligence community’s venture capital fund…”

Budget documents and U.S. officials indicate the Trump administration is proposing to triple the anti-missile project’s funding to $83 million for fiscal year 2019.

The top commander of U.S. nuclear forces, U.S. Air Force General John Hyten, has previously argued that such systems must have built-in safeguards to prevent machines from making high-stakes decisions without human input.

Other experts have noted that research shows such AI systems can be fooled into misidentifying images, potentially allowing U.S. adversaries to disguise their weapons systems.

The Pentagon is currently working to expand its AI capabilities in a race against countries such as China and Russia.


Got a tip? Contact Mikael securely: keybase.io/mikaelthalen

