As robots become increasingly automated and more capable, the consequences of their being hacked and controlled by malicious cyber actors will also increase, particularly if robots become as commonplace as Information Technology did during its proliferation. Hence, robot cyber resilience has been recognised as a multi-faceted problem causing major concern (Matellán, 2018).
The latest version of the Atlas 'human-like' robot has progressed significantly since its early prototype, which could barely walk. If robots can be programmed to run, jump and somersault like a gymnast, then it is plausible they can also be programmed to fight like a soldier or to conduct other dangerous military tasks currently performed by humans.
Boston Dynamics' demonstrations of the Atlas robot give a sense of just how far robotics technology has progressed.
Drones have also become a prominent feature of the current technology landscape and are arguably as dangerous as robotic systems in terms of threat potential, because drones have already been weaponised by both military forces and non-state actors in recent conflicts. The US military, in particular, has merged armed drones with Artificial Intelligence (AI) via Project Maven as part of its Global War on Terrorism (Byrne, 2015, p. 91).
So, whilst combat drones are presently controlled by human operators via global satellite links, there are detailed plans to grant these AI-enabled combat systems increasing levels of autonomy, reasoning and decision-making. Progressively relinquishing decision-making to drones in this way illuminates risks that could become serious issues.
While the consequences of weaponised robots and drones being hacked are a grave concern, the growing number of autonomous vehicles (AVs) presents a different class of cyber risk, one that extends to humans in the physical world. This is because of the moral dimension of AV operation, in which an AV makes consequential driving decisions via its pre-programmed algorithms.
It is these critical life-or-death decisions that a moving AV can be configured to make which might be targeted by black-hat hackers intent on causing injury to individuals or widespread disruption to transport systems. Vulnerabilities in self-driving cars have already been exposed through attempts to penetrate embedded operating system security, such as the Tesla Model 3 white-hat hacking competition in 2019 (Lambert, 2019).
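To make this attack surface concrete, the sketch below shows a deliberately simplified, rule-based braking decision of the kind an AV's planning layer might encode. It is a minimal illustration only: every name, threshold and parameter here is hypothetical, and real AV planners are vastly more sophisticated. The point is that a safety-critical decision ultimately reduces to code and tunable parameters, and an attacker who can alter either one, or spoof the sensor data feeding them, changes the vehicle's behaviour on the road.

```python
# Illustrative sketch only: a toy, rule-based emergency-braking policy.
# All names, thresholds and units are hypothetical, invented for this example.

from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float        # range to obstacle, in metres
    closing_speed_ms: float  # closing speed, in metres per second

def brake_command(obstacle: Obstacle,
                  reaction_time_s: float = 0.5,
                  max_decel_ms2: float = 8.0) -> bool:
    """Return True if emergency braking should engage.

    Uses the standard stopping-distance estimate:
    distance covered during the reaction time, plus v^2 / (2 * a).
    """
    v = obstacle.closing_speed_ms
    stopping_distance = v * reaction_time_s + (v ** 2) / (2 * max_decel_ms2)
    return obstacle.distance_m <= stopping_distance

# Closing at 20 m/s with 30 m to spare: stopping distance is 35 m, so brake.
# If an attacker inflates max_decel_ms2, flips the comparison, or spoofs the
# sensor inputs building `Obstacle`, the vehicle fails to brake when it should.
print(brake_command(Obstacle(distance_m=30.0, closing_speed_ms=20.0)))  # True
```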
Designing trusted and safe autonomous systems will be no simple task, as they will develop a spectrum of capability that could disrupt established practices (Burton et al., 2020, p. 1). Understanding intelligent machines via a 'spectrum of sentience' is therefore advantageous, because it offers a coherent framework for cyber defence developers: as AVs, drones and robots continue to mature, so too will cyber-attack mechanisms specifically designed to corrupt cyber-physical systems.
This is particularly pertinent if Moore's Law is considered and the growth of autonomous systems unfolds exponentially, analogous to how multiple generations of Information Technology advanced. Computers displaced entire industries and rendered countless other technologies obsolete, but they also brought new challenges (Lin et al., 2011, p. 1). Hence, comparing computing's evolutionary tribulations with the progress of complex autonomous systems is instructive, as this approach may provide early warning of bespoke risks and ethical matters along the technology's nascent growth path.
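Moore's Law is commonly stated as a doubling of transistor counts roughly every two years. The short sketch below, with an arbitrary time horizon chosen purely for illustration, makes the resulting growth rates concrete; it is this compounding that gives the analogy to autonomous systems its force, since even a modest per-generation improvement multiplies dramatically over a decade or two.

```python
# Minimal illustration of the exponential growth invoked by Moore's Law:
# a quantity that doubles every two years grows ~32x in a decade.

def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Growth multiple after `years` under a fixed doubling period."""
    return 2 ** (years / doubling_period_years)

for years in (2, 4, 10, 20):
    print(f"After {years:2d} years: ~{growth_factor(years):,.0f}x")
# After  2 years: ~2x
# After  4 years: ~4x
# After 10 years: ~32x
# After 20 years: ~1,024x
```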