by Simon Beechinor
In the near term, remotely controlled vessels will still share sea-room with traditionally crewed and operated vessels. Those competing modes of operation will conflict with one another, and the skills of the two groups of ‘mariners’ will, to a degree, be incompatible. How then will we train to ensure competence is maintained as we transition from a traditional operating environment to a ‘remote control world’?
If there is to be a ‘transition’, is the future engineer or ship operator of tomorrow adequately trained even for today’s mode of operation? Who will train them, and how will they demonstrate competence for tomorrow’s technology?
It’s perfectly possible that the ‘master’ of an autonomous vessel could find the vessel at risk of being stranded, or in difficulty of some other kind, that could only be relieved if previously ‘hidden’ capacity was released. How would the master even be made aware that such capacity was available? What risks might escalate, or scenarios deteriorate, while a vessel’s owner attempted to identify, communicate and negotiate with some software developer over the cost of unlocking software? Increasingly, we’ll face situations where routine operations become catastrophic simply because subscription issues prevented them from being resolved in a timely way.
Situations that would normally be considered routine could become complex, or deteriorate unnecessarily, just because equipment options that are needed, and physically available, are withheld under an operator’s chosen subscriptions.
Such delays would get even more complex as a ship ages or as software suppliers are bought by other firms or simply go out of business altogether. Who is to say that the new buyer of a software business will have the same outlook or benign business interests as the original supplier?
We are facing a reality where our vessels may hold more operating potential than we have access to because of financial rather than technical constraints. Risks could quickly get out of hand in the elevated-risk, life-or-death and even routine scenarios we commonly experience at sea.
To what extent should software manufacturers be permitted to have, potentially, complete control over when they get to intervene remotely in a situation, with virtually no accountability to a vessel’s owners? In the Tesla scenario that Westbrook described, Tesla only acted after a Florida resident inquired about unlocking the extra capability, which Tesla then agreed to do, for the common good in this case.
Who will be empowered to make these decisions and lift the limits on equipment? What implications does this have for our ability to control our vessels?
What will be the insurance implications of having, potentially, catastrophe-saving technology and capacity sitting idle behind a simple paywall? Will it drive premiums up…or down?
If software developers build-in firewalls and subscriptions exist to vary capacity of shipboard machinery, how might classification societies lay out the required levels of functionality or critical paths in ship design and construction?
How will those who need to make those decisions even be made aware and kept up to date with the capacity that exists behind the paywall or how to use it?
It will be interesting to see how the courts, insurers and other stakeholders view these decisions. If a software developer doesn’t act when it could, will they see it as reasonable that life was lost or the environment was contaminated simply because a ship-owner lacked the foresight, cash or ability to pay for the full functionality? Whose fault will it be if a vessel was lost, or an environment contaminated, when a salvor could have benefited from the capacity that remained hidden or unused behind the paywall?
‘Wait and see’ hardly sounds like a sensible response to these issues, but I’m not sure what alternatives there are. What’s apparent is that vessels could have critical functionality made available only when the software developer determines it’s necessary, rather than when the owner, insurer or other stakeholders determine that it is. Could a SOSREP, for example, force a software developer to release capacity?
Soon we’ll find that necessary shipboard decisions are being made by the managers of a business far removed from the business of shipping, with little understanding of the realities ‘on-deck’. Decisions could fall to managers with an entirely different, and perhaps unacceptable, focus on profits, publicity and business outlook. In such cases, what are the chances that the right decisions will be made?
Will we let software developers charge a premium for services involving the safety and security of our assets and then just hope things turn out OK? How will we ensure that software developers remain accountable to vessel owners for their services and products?
We shouldn’t simply rely on software developers, as we now rely on classification societies, banks and insurers, to be there when we need them. We’ve come to rely on class, our banks and insurers because of centuries of experience. How well are we likely to really know and trust a software company?
I’ve no idea…
Simon Beechinor is a Commercial Operations Director, Project Manager and Master Mariner with extensive senior management experience of the maritime industries. His background includes the management of a major shipping company, and service as Commercial Operations Director, and subsequently CEO, of a large marine consultancy and cargo services company based in S.E. Asia and a Pacific-based regional liner trade.