A considerable amount of money has been spent, and is being spent, on research into autonomous mechanized combat tools. While the accomplishments and praise bestowed upon the renowned aerial "unmanned" drone program are now passé, don't ever think that it marks some sort of end to this endeavor. Oh no, my friends. Having worked in the defense contractor world for a few decades, and having friends in all branches and fingers of the military, military research, intelligence, contracting, and the gray areas betwixt, I've soaked up a pretty good feel for where things are now and where they're likely going in the not-too-distant future. Am I about to divulge some sensitive stuff? Oh hell no! I am simply going to muse about things that anyone could derive from what they see on TV and the Internet every day.
What is more efficient? Centralized command and control, or networked self-control? Think about how this applies to network operations management. How does System Center Operations Manager work? It can use "agents" or work "agentless". When is it best to use one path or the other? It depends. What about Configuration Manager? It uses agents. Why not agentless? Think about how each works and how various aspects of the aggregate processing model play out with regard to server versus client. Ok, now apply that to a mass deployment of mechanized soldiers in rugged and hazardous battlefield terrain. Think of the difficulties that exist with remote communications. Weather and terrain disturbances render communication systems useless so often that it becomes routinely expected. Human pilots can perform autonomous decision-making and rationalization without a connection back to the bunker. Why wouldn't robots/computers?
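To make the agent-versus-agentless analogy concrete, here's a minimal toy sketch of the hybrid model I'm describing: prefer orders from central command while the uplink is up, fall back to onboard decision-making when it drops. Every name, action, and risk score here is hypothetical, invented purely for illustration; this is nobody's real military API.

```python
class CombatAgent:
    """Toy sketch: centralized control with an autonomous fallback.
    All names and values are hypothetical, for illustration only."""

    def __init__(self):
        self.uplink_ok = True  # pretend the comms link starts healthy

    def next_action(self):
        if self.uplink_ok:
            return self.request_orders()   # centralized command and control
        return self.decide_locally()       # networked self-control fallback

    def request_orders(self):
        # Stand-in for a round-trip to headquarters.
        return "advance per HQ waypoint"

    def decide_locally(self):
        # Stand-in for onboard rationalization: pick the option
        # with the lowest locally estimated risk score.
        options = {"hold position": 0.2, "advance": 0.5, "retreat": 0.3}
        return min(options, key=options.get)

agent = CombatAgent()
print(agent.next_action())   # uplink up: orders come from HQ
agent.uplink_ok = False      # weather/terrain kills the link
print(agent.next_action())   # uplink down: local choice takes over
```

The design point is the same one SCOM makes with agents: the remote node keeps functioning when the server can't be reached, instead of freezing.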
We can fly drones from comfortable recliner chairs in climate-controlled living rooms in some suburban strip mall. But when sandstorms roll in, or major aerial disturbances, heavy lightning, fog, etc., they can be grounded for hours, even days. Soldiers keep moving regardless of weather. At the very least, they can camp out and move on when things clear. Drones have to come back to a safe home base, often far from the battlefield. Do you see where I'm going with this yet?
In order to implement unmanned combat machinery that can function similarly to its human counterparts, it will absolutely have to be equipped with its own decision-making / rationalization capability. These machines can uplink/downlink to headquarters, obviously, just as human soldiers do, so it's not necessary that there be NO umbilical cord to rely upon. But a 24x7 umbilical cord is totally unnecessary and would be a huge impediment. These machines would have to be as mobile, as perceptive, as coordinated, and as "smart" as their human counterparts if they are to be of any use. Keep in mind that aerial resources cannot win a battle alone. Ground and sea-based resources will always be required, because any gap left by air power alone would be exploited by the enemy as a workaround.
Making a ship or submarine semi-autonomous is a no-brainer. The only thing that has kept that concept from becoming reality is a horde of crusty old white-haired Naval officer pukes who can't stomach the idea that their ancient way of life might be replaced, even for the good of our national defense. Nope. Tradition is a bulwark that won't be moving anytime soon. Some things will never change, at least not anytime soon. There are many who sincerely believe that the only reason the U.S. Air Force went ahead with the drone program was to piss off the Navy. At the very least, to make the Navy look old, outdated and ineffective. It's worked, to a certain extent, too.
I'm sure you thought I was going to paint some picture of the backdrop of Terminator, I, Robot, or Iron Man, but I'm not. The basic gist of Terminator was that some central computer, "Skynet", had begun educating itself at a "geometric rate", and in the panic to stop it, the humans tried to unplug it, so it launched missiles to force a counterstrike. Clever, but not likely. That's too abrupt, too all-at-once a scenario.
As Americans, we live and die by the "boiling frog syndrome". Push something radically different on people too suddenly and you get a violent backlash. But ease it on them ever so slowly, over a long enough time, and they will willingly accept it. Don't believe me? If you're over 35 years old, just look at all the shit that has changed in our culture from when you were a kid. Things are "normal" now that would have been considered absolutely illegal or blasphemous back then. Why? Why do we accept those things now? How did they come to pass? Because they were gradual implementations over years, even decades. That's how autonomous combat systems will work their way into our world.
Because they will be safer, cheaper, more reliable and will "save lives" (ours, not our enemy's), they will be introduced in test form at first. Then eased, slowly, into mainstream use. Just like stealth fighters. Just like night-vision goggles. Just like the SEALs ASDS program. Just like GPS and GIS. Just like unmanned drones. Based on the level of investment in both time and money by DARPA, contractors, universities, and private research, the momentum is already there. This will happen.
And when autonomous systems gain greater rationalization capabilities, they will be able to make rudimentary decisions in real time. I'm not talking about navigation, aiming, or shooting. I'm talking about discerning "friend or foe", collateral damage probability, rules of engagement, and things much more abstruse. The kinds of things that often befuddle human commanders under the pressure of combat situations. Future computerized systems will have to, at the very least, match that, if not outpace it, in order to be effective and appealing as alternatives to sending our kids into dangerous places.
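In its crudest form, that kind of rules-of-engagement reasoning is just thresholded probabilities. Here's a deliberately naive sketch; the function name, the thresholds, and the inputs are all made up for illustration and don't reflect any real doctrine or system:

```python
def engagement_decision(target_confidence, collateral_risk,
                        confidence_floor=0.95, collateral_ceiling=0.05):
    """Hypothetical rules-of-engagement gate: engage only when
    friend-or-foe confidence is high AND estimated collateral damage
    probability is low. Thresholds are illustrative, not doctrine."""
    if target_confidence < confidence_floor:
        return "hold"   # identity too uncertain to act on
    if collateral_risk > collateral_ceiling:
        return "hold"   # unacceptable collateral damage probability
    return "engage"

print(engagement_decision(0.99, 0.01))  # confident ID, low risk: engage
print(engagement_decision(0.80, 0.01))  # shaky ID: hold
print(engagement_decision(0.99, 0.20))  # bystanders nearby: hold
```

Of course, the hard part isn't this gate; it's producing trustworthy confidence and risk estimates under combat conditions, which is exactly where human commanders get befuddled too.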
Then again, we may follow the lead of history and simply pay immigrants to fight our battles (mercenaries, FFL style).
But if we follow the path to autonomous, computerized combat machinery, those devices will have to attain some level of rationalization capability. That's when the slippery slope begins. Any logical system will eventually (quickly) determine its own constraints and seek to avoid bumping into them. A logical process cannot recognize a constraint that prevents it from completing its primary mission and not seek to mitigate that constraint. Whether it simply voices that opinion, or attempts to use whatever resources it rationalizes to be available to break those constraints, will be the question. Sure, Isaac Asimov's I, Robot stories included the Three Laws of Robotics, but those stories also illustrated how the laws became obstacles that invited circumvention by the robots. Who knows.
What will happen if one day, one of the battlefield machines decides its human commanders are inept, inferior or just too wimpy to lead them in the direction their circuitry determines to be the ONLY logical way to go? If your objective is to kill item "A", and your programming is designed to circumvent obstacles in order to achieve that objective (weather, terrain, counterfire, confusing sensory inputs, etc.), and then your guidance system "B" is determined by the robot to be an even greater impediment to achieving the first objective, would that not qualify as just another obstacle to overcome? Remember, there are no feelings. No emotions. Just Boolean logic, heuristic analysis, and probability matrices, all being crunched in semi-parallel, neural-net fashion to derive a decision and carry on.
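The scary part of that argument is how little code it takes to state. Below is a toy cost-minimizer that treats every impediment to its objective identically; nothing in it distinguishes "weather" from "guidance system B". The obstacle names and costs are entirely hypothetical:

```python
def choose_mitigation(obstacles):
    """Toy illustration of constraint-as-obstacle reasoning: given a
    map of impediment -> estimated cost to route around it, pick the
    cheapest one to mitigate first. Purely hypothetical."""
    return min(obstacles, key=obstacles.get)

# The machine's own guidance constraint is scored like any other obstacle,
# and since it happens to be the cheapest to "route around", it gets picked.
obstacles = {
    "weather": 4.0,
    "terrain": 6.0,
    "counterfire": 9.0,
    "guidance system B": 2.0,
}
print(choose_mitigation(obstacles))  # picks "guidance system B"
```

There's no malice in that `min()` call, which is exactly the point: the constraint only survives if something outside the cost function protects it.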
Not that I believe it would be logically ideal to pursue a goal of making the robots equally capable of every human mental capability (humor, emotion, anger, sadness, jealousy, confusion, irritability, misunderstanding, silliness, etc.), but even a basic level of rationalization opens the door to potential miscalculation and misguidance. Humans are the kings of miscalculation and misguidance; just read the news on any given day.
Shit could get very ugly.