Robots of the future could improve business efficiencies, but companies must implement risk management beyond legal requirements to minimize accidents and damage to their reputation.
In recent years, robotics applications have been increasingly in demand across a growing range of industries that see them as a way of increasing efficiency and reducing costs. However, accompanying these advantages is a raft of risks, some of which affect the general public and could therefore have serious ramifications for a company's reputation.
There are two distinct robotics markets and communities: industrial and service. Industrial robots are typified by robotic arms on assembly lines, whereas service robots reflect the iconic humanoid form and have their roots in the academic or research community.
The automotive industry is the largest customer for industrial robots, but demand from the food and pharmaceutical sectors is increasing as technology improves, says Grant Collier, head of marketing at the British Automation and Robot Association.
Unmanned air vehicles are potentially the most significant application of robotic technology in the near future, says Chris Mailey, vice president of knowledge resources at the Association for Unmanned Vehicle Systems International (AUVSI). The association recently completed an economic impact report on the integration of unmanned aircraft systems into the U.S. national airspace system, which is scheduled for 2015. “We found that more than 70,000 new jobs could be created in the following three years and the total economic impact could exceed $13 billion during that period,” Mailey says.
Precision agriculture is expected to be the largest market for such technology, which could help farmers monitor crops and distribute pesticides far more accurately than is currently possible. This could improve efficiency and also reduce the total amount of pesticides sprayed, saving money and lessening environmental impact.
In the service robot space, the focus is on systems that can be used in conjunction with humans and could provide companionship. But robots would need to look human, Collier says. “Many researchers are looking at what is known as the human-machine interface, enabling robots to emulate our body language, for example.”
Rehabilitation of injured people is a care service that will increasingly be delivered by robots, says Professor Chris Melhuish, director at Bristol Robotics Laboratory.
“We will be controlling robots made from new materials in new ways, and this will require a different flavor of artificial intelligence because the robot will be required to act as an intentional agent. This will involve robots being able to respond to prompts such as facial expression or gestures.”
Removing Human Error
Driverless cars have attracted attention through the work of researchers at Oxford University and Google, as well as the leading vehicle manufacturers.
AUVSI’s Mailey believes autonomous vehicles could eliminate more than 90 percent of accidents, with manufacturers suggesting that mass-market robotic cars could be commercially viable within a decade.
“This will be a huge development for the insurance industry, particularly since the amount of data carried in the vehicle will make it possible to re-create the circumstances of an accident. It won’t stop people suing each other, but it will make the investigative process easier. There will also be new insurance opportunities created in the unmanned air vehicle market and, over the longer term, for robots in the home.”
British Automation and Robot Association’s Collier agrees that robotics could revolutionize the car insurance industry by reducing the impact of human error. “Anything that reduces risk must be positive. Industrial robots typically run for about 100,000 hours — continuously — before they require maintenance, and once programmed correctly they don’t make mistakes.”
All these uses of robotics can enable companies to do things more cheaply and efficiently. But many of them carry risks that are not fully understood and could result in a large lawsuit and seriously damage a company’s reputation.
In the service robot space, because machines can be incredibly powerful, even a slight miscalculation could seriously harm the humans they interact with.
Work is being done to develop “soft” socially intelligent robots that have some awareness of their environment, so that a sensor failure would not endanger nearby humans.
“A care service robot’s interaction with humans is at least partially subject to user behavior, so it can be difficult to apply current risk analysis methods because there are so many variables, in terms of an elderly or unwell person’s movement and behavior,” Collier cautions.
Because robotic uses are in their relative infancy, and models may be upgraded regularly in the future, manufacturers will not have large amounts of data on previous accidents, near accidents, claims and their causes.
Similarly, the benefits of widespread unmanned air vehicles must be balanced against the risks and uncertainties they would bring. For example, in the event of such a vehicle crashing into a ground-based structure, who would be liable? “Would it be the manufacturer who developed the system or the operator who may or may not have been flying the vehicle under proper constraints?” Mailey asks. “How would you prove that an operator was following manufacturer guidelines for use, training and maintenance?”
Robotics could also be subject to cyber criminality, Mailey suggests. In a similar way to how a company’s network could be hacked, robotics could be accessed remotely by cyber criminals that might control a company’s unmanned aircraft or driverless car.
“This is a long-standing problem not unique to unmanned air vehicles, but it’s likely to gain more attention as the internet of things becomes a reality,” Mailey says.
Cyber attacks could also be directed against the health care sector, seizing control of systems ranging from robotic carers to controllers for prosthetic limbs.
To minimize these risks as well as the risk of an expensive lawsuit that damages a company’s brand, companies must undertake risk management measures beyond what is legally required, and be able to demonstrate these measures to external parties, if necessary.
Robotic products should be checked against the three main causes of robot accidents: engineering errors, human mistakes and poor environmental conditions.
Engineering errors include programming bugs, faulty algorithms, loose connections between parts and faulty electronics. Accidents caused by these errors are difficult to predict, but human errors, such as inattention or failure to observe guarding procedures, can be minimized by instituting stringent company training programs and regularly reviewing procedures.
Adverse environmental factors include extreme temperatures and impaired sensing in difficult weather or lighting conditions, which can lead to incorrect responses by the robot.
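For a risk manager reviewing incident reports, the three-category breakdown above lends itself to a simple triage aid. The sketch below is purely illustrative and assumes a keyword-based first pass; the category names, keyword lists and `classify` function are hypothetical, not drawn from any robotics standard or real screening tool.

```python
# Hypothetical sketch: tagging robot incident reports with the three main
# accident-cause categories (engineering errors, human mistakes, adverse
# environmental conditions). All names and keywords are illustrative.
from enum import Enum

class AccidentCause(Enum):
    ENGINEERING_ERROR = "engineering error"   # bugs, faulty algorithms, wiring
    HUMAN_MISTAKE = "human mistake"           # inattention, ignored guarding procedures
    ENVIRONMENTAL = "adverse environment"     # temperature, weather, lighting

# Starting keyword map a reviewer might refine over time.
KEYWORDS = {
    AccidentCause.ENGINEERING_ERROR: ["bug", "algorithm", "connection", "electronics"],
    AccidentCause.HUMAN_MISTAKE: ["inattention", "guarding", "training", "procedure"],
    AccidentCause.ENVIRONMENTAL: ["temperature", "weather", "lighting", "sensor"],
}

def classify(report: str) -> list[AccidentCause]:
    """Return every cause category whose keywords appear in the report text."""
    text = report.lower()
    return [cause for cause, words in KEYWORDS.items()
            if any(word in text for word in words)]
```

A report can match more than one category, which mirrors real incidents: an operator error in poor lighting would be flagged under both human mistake and adverse environment, prompting review of training programs and sensing conditions alike.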
Ultimately, companies might be able to outsource work to robotics, but they cannot outsource the risks.
This article was first published in Progress +, ACE European Group’s flagship publication for brokers and clients.