For those of us old enough to remember when Superman comic books were all the rage, one story may come to mind that, given today's technological advances, seems frighteningly realistic. That Superman adventure dealt with confronting robots that had taken over a world from their creators.
With the fast-paced development of artificial intelligence (AI), concerns are emerging that the technology is evolving faster than we are prepared to handle. The validity of one concern in particular is evidenced by a recent bipartisan effort. Rep. Ken Buck, R-Colo., and Rep. Ted Lieu, D-Calif., seek to eliminate the possibility that AI alone could launch a nuclear attack by requiring that a human remain part of the decision.
The two congressmen have ample reason to be concerned.
Firstly, the pleas of more than 1,100 developers and industry experts who signed an open letter calling for a temporary moratorium have gone unheeded. Additionally, Stanford University issued a report indicating that one-third of the experts it surveyed warned AI could result in a "nuclear-level catastrophe."
Secondly, after a widow in Belgium claimed her husband had been persuaded by an AI chatbot to commit suicide, attention turned to a study published in the journal Scientific Reports. That research indicates AI chatbots are so advanced they may actually influence users' choices about life and death.
Thirdly, history is on Buck and Lieu's side.
- While history tells us the Cuban Missile Crisis of 1962 brought the U.S. and USSR closer to nuclear war than at any other time, few realize just how close we really came. With diplomatic tensions building, U.S. forces began dropping non-lethal depth charges in the waters around Cuba to encourage Soviet submarines to surface. One of those submarines was the B-59. The Americans had broadcast a warning about their non-lethal intentions, but the sub, having been incommunicado, never received the message. The B-59's commander and his second-in-command immediately sought to launch the nuclear torpedo they had on board, which Moscow had previously authorized them to use if attacked. That authorization, however, required the approval of all three senior officers aboard. The third, Vasili Alexandrovich Arkhipov, sensing the Americans were not mounting a deadly attack, refused to give his consent, sparing the world a nuclear conflict.
- A similar incident, in which we came within a "button push" of nuclear war, occurred on Sept. 26, 1983. A Soviet early-warning center suddenly received an alarm that a U.S. nuclear attack was in progress, and preparations began to respond. However, the duty officer at the site, Lt. Col. Stanislav Petrov, held back, convinced the alarm was faulty. It was later determined the satellite warning system had malfunctioned, misreading sunlight reflecting off clouds as missile launches. That same year, NATO's November exercise codenamed "Able Archer," which the Soviets feared might be cover for a real attack, brought tensions to a similarly dangerous peak. Again, due to human intervention, nuclear war was averted.
These examples clearly demonstrate the need to keep human decision-making embedded in any AI-driven system so the technology can never operate totally independently.
Of course, past human behavior gives us pause to wonder whether even a human decision-making override provides an absolute fail-safe. The U.S. today mandates that at least two people be involved in the actual launch process, but conceivably even that is no guarantee.
Consider what we have seen over the past few decades in commercial air travel. While the industry has become progressively safer, one cause of deaths has stubbornly persisted: the intentional crashing of a commercial aircraft by a pilot bent on murder-suicide. The term "suicide by pilot" has been applied to several aviation crashes and listed as the most likely cause in at least six others.
This should give us ample reason to insist that, even as we try to design a fail-safe AI trigger for nuclear weapons, the limitations of human controls receive equal consideration. After all, we also need to protect against a suicidal mindset in one or both nuclear launch operators.
Of course, based on the above concerns, the question becomes: How do we construct the ultimate fail-safe override involving both an AI element and a human operator?
Such a guarantee cannot rest on AI alone, because the technology for truly independent judgment does not yet exist; today's AI merely responds to the inputs its coders provide. We have seen how slanted such input can be, as ChatGPT, for example, has demonstrated a clear bias against conservatives, a mindset that is a direct result of its programmers, whether intentional or not. Yet, by the same token, the fail-safe guarantee cannot be all human, given the potential for a suicidal mentality in one or both operators.
If a truly independent-thinking AI can ever be developed, perhaps only at that stage of evolution will technology exist that can assure us of a fail-safe guarantee. Until then, as history has shown us, we can only cross our fingers and hope for the best.