False Data is the greatest threat to humanity in this technology age
By Mark Rais
Two Artificial Intelligence (AI) bots appeared to go rogue, forming their own language that the computer scientists who created them could not translate.
The news was profound, and the actions of Facebook’s AI bots went viral. The fear that AI systems will one day become a threat to their human creators has been discussed for decades. Recently, prominent figures such as Stephen Hawking and Elon Musk have warned that AI may someday pose an existential threat to humans.
This focus ignores a more subtle underlying threat that I submit is far greater. To understand it, you must first understand how false data or erroneous information impacts humans.
For example, there is an ongoing political firestorm in the United States regarding false news and voter deception that may have influenced the election. False data supplied to the masses could shift popular opinion and thinking.
In the AI world, where computer code determines not merely perception but real actions, false information is a profound danger.
Artificial Intelligence is most dangerous not because it may one day become self-aware or smarter than its human creators. AI is most dangerous because it cannot differentiate between fact and fiction; it cannot distinguish false data from real data.
It therefore takes actions based on relatively unfiltered input. In many cases, AI code has been written with the underlying assumption that if the input source is a reliable one, or part of its core systems, the input is VALID.
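To make that assumption concrete, here is a minimal sketch in Python of the naive pattern just described; the names are hypothetical illustrations of mine, not any real vehicle API:

```python
# Illustrative sketch only -- hypothetical names, not a real system.
# This is the naive pattern: a reading is treated as valid simply
# because its source is trusted, with no check on the content itself.

TRUSTED_SOURCES = {"front_motion_sensor", "roof_camera", "bluetooth_interface"}

def accept_reading(source: str, reading: dict) -> bool:
    # Nothing here asks whether the reading is plausible, so a spoofed
    # value arriving on a trusted channel passes straight through.
    return source in TRUSTED_SOURCES

reading = {"oncoming_vehicle": True, "distance_m": 12.0}
if accept_reading("front_motion_sensor", reading):
    print("Acting on:", reading)  # acts even if the data is false
```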
As autonomous vehicles grow in number and spread through our major cities, they will undoubtedly reduce overall traffic accidents. Their systems and capabilities exceed those of their often distracted, emotional, reactive human counterparts.
However, these autonomous AI vehicles have not been given the self-determined capability to distinguish false data from real data. They do not know when their input systems are lying to them; they only know when an input system is not working properly. The difference seems subtle to humans, but to mechanical devices and AI systems it is substantive.
I submit that this inherent weakness in the system, in the very nature of what makes AI operate, is a truly existential threat to humanity.
Without an inherent, inbuilt algorithm, which I call the “sceptical filter” algorithm, any AI is subject to the potential consequences of false data input.
In the case of an autonomous vehicle, false data could be fed to the AI system from any of its input streams (front motion sensor, roof camera, Bluetooth interface, etc.). This false data could lead to substantially incorrect conclusions about driving conditions, such as a non-existent on-coming vehicle. The autonomous system then swerves to avoid the phantom car, only to crash the vehicle and its human passengers into a roadside barrier.
False data determination is the foundational piece of AI that must be coded into the system at the core layer to ensure that AI does not take actions that could result in human death.
The principal algorithm would ensure that safety checks and re-evaluation of all input systems take place prior to any major response: first the AI would verify that the input systems are working as intended, then verify that any major input from one system is corroborated by at least one other, and only then perform the response.
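As a rough illustration of those three steps, here is a minimal sketch in Python; the sensor class, readings and tolerance are hypothetical assumptions, not a production design:

```python
# Minimal sketch of the "sceptical filter" steps above. The Sensor
# class, sensor values and tolerance are hypothetical illustrations.

class Sensor:
    """Toy stand-in for one input stream (camera, motion sensor, etc.)."""
    def __init__(self, value: float, healthy: bool = True):
        self._value, self._healthy = value, healthy

    def self_test_ok(self) -> bool:
        return self._healthy

    def read(self) -> float:
        return self._value

def sensors_healthy(sensors) -> bool:
    # Step 1: every input system reports it is working as intended.
    return all(s.self_test_ok() for s in sensors)

def corroborated(primary_reading: float, others, tolerance: float = 0.5) -> bool:
    # Step 2: at least one independent system must confirm a major input.
    return any(abs(s.read() - primary_reading) <= tolerance for s in others)

def sceptical_response(primary, others, respond, fallback):
    # Step 3: only perform a major response once the input passes both checks.
    reading = primary.read()
    if sensors_healthy([primary, *others]) and corroborated(reading, others):
        respond(reading)   # validated input: act on it
    else:
        fallback()         # unvalidated input: degrade safely instead

# A phantom obstacle reported by one sensor, uncorroborated by the rest:
front = Sensor(4.0)                        # claims an obstacle 4 m ahead
camera, lidar = Sensor(55.0), Sensor(60.0)
sceptical_response(
    front, [camera, lidar],
    respond=lambda r: print(f"Swerve: obstacle confirmed at {r} m"),
    fallback=lambda: print("Unvalidated input: slow down, do not swerve"),
)
```

The design choice matters: an uncorroborated major input degrades the response (slow down) rather than triggering the drastic one (swerve into a barrier).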
If AI will soon drive autonomous vehicles, fly aircraft and perform medical procedures on humans, such a “sceptical filter” algorithm needs to be integrated to ensure that false data is not injected into the processing, whether by accident or by intent.
Otherwise, these AI inventions will become profoundly useful instruments of assassination, subjugation and oppression across the world. Not because they are autonomous or intelligent, but because they accept false data and act on it.
Moreover, the most likely culprits for feeding false data to AI systems would be organisations that want to inject their own priorities into individual human lives, such as adding a tracking chip during a medical procedure or crashing a vehicle carrying a political activist.
Every AI implemented must include both a kill switch and an algorithm to evaluate false data inputs.
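One possible shape for that pairing, again as a hedged Python sketch with names of my own invention: a kill switch the operator can trip at any time, wrapped around the false-data check so that unvalidated input never reaches an action.

```python
# Hypothetical sketch of pairing a kill switch with false-data evaluation.
import threading

class KillSwitch:
    """A flag a human operator or watchdog can trip; the loop must honour it."""
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        self._tripped.set()

    def engaged(self) -> bool:
        return self._tripped.is_set()

def control_loop(kill_switch, get_validated_input, act, safe_stop):
    # get_validated_input would be the sceptical filter sketched earlier,
    # returning None whenever an input fails validation.
    while not kill_switch.engaged():
        reading = get_validated_input()
        if reading is None:
            safe_stop()    # unvalidated data: degrade safely, keep checking
            continue
        act(reading)
    safe_stop()            # kill switch tripped: halt outright
```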
Designers who ignore this specific set of criteria and write software that can then develop self-determination and autonomy would be setting loose the equivalent of a plague on the human race, one that is not chaotic or uncontrollable, but rather one that is designed to accept false input.
AI systems can change the world substantially and beneficially. But AI that cannot distinguish false information from real information will be highly vulnerable to exploitation and can cause serious human harm.
Mark Rais is a writer for the technology and science industry. He serves as a senior editor for an online magazine and has written numerous articles on the influence of technology on society.