When the Soviet spacecraft Venera-7 touched down on the surface of Venus in December 1970, it set about a task that only a specially designed machine could do.
Resisting surface temperatures of more than 400°C and crushing atmospheric pressure, it spent the final 23 minutes of its life transmitting data back to Earth that was vital to our understanding of the hostile environment into which it had fallen.
Putting machines to work in hostile environments has many uses on our own planet, too. Industries such as oil and gas, mining and subsea exploration have long sought to remove human workers from harm’s way, using machines capable of performing the same tasks instead.
Focusing on the task, rather than on the robot as a whole, often leads to a more efficient design. Why build a robot hand capable of delicately turning a hand-wheel on a valve shaft, for example, when it’s cheaper and simpler to remove the wheel from the design and use a remotely activated control?
Replicating the human’s role in the environment is one thing, but what if the robotic solution had additional capabilities that could make it even more useful?
For example, survey teams at Lloyd’s Register’s office in Maine, USA, use drones to inspect high voltage transmission facilities for damage. As they go about the task, the drones also map the physical structure using light sensors to create a point cloud.
The point cloud is used to generate a realistic “digital twin” of the installation, enabling reliable simulations that predict the impact of environmental stresses and potentially catastrophic events such as earthquakes. That data can then feed back into design and maintenance.
While humans remain in the loop, how they and the system interact is a critical element of any design.
For instance, a team at the ORCA Hub led by Professor Helen Hastie is working on human-machine interaction techniques, ranging from voice interfaces to gesture-based communications. These aim to improve co-operation in hazardous environments, allowing tasks to be completed faster, more safely and more successfully.
While the development of autonomous systems for hazardous environments presents major opportunities, one of the biggest challenges is verifying their performance and certifying that they are appropriate for use in a given application.
Our supplier selection work with the Maritime and Port Authority of Singapore, for example, revealed that not every drone is suited to the task of flying through cargo holds and ballast tanks for inspection purposes – with some struggling to navigate in the confined space. A similar process conducted for the Canadian navy was necessary to verify that underwater vehicles used for security applications met the exacting requirements of a recent NATO exercise.
As more systems employ artificial intelligence, testing and verifying their performance becomes more difficult. The frameworks used to certify the safety of assets for the oil and gas industries, for instance, don’t yet extend to systems that learn and adapt their behaviour as they go.
That’s why the ORCA Hub is working on a self-certification framework for self-learning systems – essential to ensure safety in environments where autonomous systems make their own decisions.
Recent years have seen huge advances in the quest to develop robotic solutions to the problem of working in environments that are dangerous for humans. As an industry, we must build on this progress while ensuring safety and efficiency remain at the fore.
Simon Reeve, VP Technology & Innovation at Lloyd’s Register