The groundwork is being laid for a SkyNet-type drone system to attack the enemy. The question is, who will be the enemy?

The nascent days of aerial warfare saw tethered hot air balloons launched for trench warfare reconnaissance. Soon airplanes took up the task, and the first air-to-air skirmishes were between handgun-wielding lookouts blasting away from fragile canvas-covered aircraft. The development of fighting aircraft exploded into an arms race, and nearly a century later we’re standing on the precipice of a science fiction reality where the lines between human and machine are being blurred, and might soon be erased completely.

Arguably, the American effort has maintained air superiority almost from the beginning. Now we find ourselves in the era of the fifth-generation fighter, which may well be the last time a human physically flies into combat. The prevailing thought is that the next generation will be dominated by remotely controlled aircraft operating as part of artificial intelligence “swarms” that will prowl the battlefield looking for targets to engage.

DARPA, the government’s military research arm, is developing a project called System of Systems Integration Technology and Experimentation, also known as SoSITE.

This open-source, modular-system framework is being developed to control the battlefields of the future.

The basic idea as seen in this video is that swarms of drones will be programmed and guided to a target by artificial intelligence ostensibly overseen by human controllers.

Cost and flexibility are the two advantages DARPA touts in its description:

“Open-systems architectures create common standards and tools for developing interchangeable modules and platforms that can be quickly upgraded and swapped out as needed. This concept enables distribution of key functions, such as electronic warfare; sensors; weapons; battle management; positioning, navigation and timing; and data/communication links, across a variety of manned and unmanned platforms.”

What they don’t reference is the elephant in the room. There is a well-defined line in philosophy and common sense: you don’t allow an autonomous entity to possess the ability to kill. It is in fact the very first rule in Isaac Asimov’s Three Laws of Robotics. This precept was introduced in his seminal work I, Robot and has been a major theme in science fiction and futurist thought for decades.

But now we find ourselves facing the very real possibility of flying robots run amok. The debate about AI is raging in technological and philosophical circles. Groups like the Campaign to Stop Killer Robots have been actively exploring these possibilities and vociferously fighting to uphold the ideals that would prevent autonomous beings from being endowed with the power to kill.

The other chilling possibility is that autocratic regimes would turn these systems against their own people. Despotic mass murderers throughout history would have salivated at the thought of having legions of mechanical robots to unleash upon those who would resist their tyranny. History has shown time and time again that those who possess dictatorial power will mercilessly put down insurrections at any cost. We need only look as far as the Obama administration’s extrajudicial killings using unmanned vehicles to see that that line has already been crossed.

Are we that far away from a globalist one world government that uses technology grids and squadrons of flying death robots to conquer and subdue humanity? The thought is horrific, yet here we are, gazing into the abyss of that reality.

