Global | Publication | July 2020
In the May 2019 edition of Legalseas, we reflected on the implications of the Court of Appeal decision in Evergreen Marine v Nautical Challenge (Evergreen) for the interaction (and interpretation) of the Collision Regulations (COLREGs), specifically the crossing rule (Rule 15) and the narrow channel rule (Rule 9), in circumstances where they appeared to conflict.
In this edition, we consider how the facts in Evergreen demonstrate the challenges faced by those developing autonomous vessels, and particularly the algorithm-based navigational systems that will need to interpret the International Regulations for Preventing Collisions at Sea 1972 (the COLREGs). We have used Evergreen to consider circumstances where obligations under the COLREGs appear to conflict, and to speculate how the outcome might have differed if either or both of the Ever Smart (the at-fault vessel) and Alexandra I (the inbound vessel) had been fully autonomous.
The regulatory framework governing safe navigation has historically been premised on objective rules interpreted through a human element; for example, the “manning” of ships, the “charge of a master,” or taking precautions required by the “ordinary practice of seamen.” Subjective standards are pervasive throughout the UN Law of the Sea Convention 1982, IMO regulations, domestic shipping legislation (including the Merchant Shipping Act 1995), and civil liability conventions.
The COLREGs are particularly relevant in this regard. Since 1977, seafarers have been obliged to comply with the COLREGs on issues of collision avoidance and, indeed, courts have interpreted the COLREGs when apportioning liability arising from collisions. Being practical rules, having as their primary object the prevention of collisions at sea, the COLREGs provide objective guidance on vessel priority but also necessitate (subjective) deviations from the rules, in accordance with the ordinary practice of seamen, where the circumstances require. By way of example, Rule 2 of the COLREGs states: “Nothing in these Rules shall exonerate any vessel, or the owner, master or crew thereof, from the consequences of any neglect to comply with these Rules or of the neglect of any precaution which may be required by the ordinary practice of seamen, or by the special circumstances of the case.” This subjective interpretation of an objective rulebook highlights the inherent challenge in automating deviations from a set of rules, absent a human element.
There has been significant discussion across the shipping industry as to whether unmanned or fully AI-enabled vessels can strictly comply with provisions of the current COLREGs, including Rule 2 (responsibility), Rule 8 (action to avoid collision) with regard to the seamanship standard, Rule 5 (look-out), and Rule 18 (responsibilities between vessels) with regard to vessels “under command.”
Various research studies conducted over the past 12 to 18 months have reportedly demonstrated that autonomous vessels can meet (or exceed) the current COLREGs collision avoidance rules. Rolls-Royce’s MAXCMAS project (Machine Executable Collision Regulations for Marine Autonomous Systems), in partnership with Lloyd’s Register (amongst others), claims to have developed an algorithm enabling an AI-based navigational system to apply the COLREGs in a manner that is “indistinguishable from good seafarer behaviour,” even in circumstances “when the give-way vessel isn’t taking appropriate action.” The latter will be essential when autonomous vessels and manned vessels are each trying to keep out of the way of the other.
While this article does not seek to address issues of strict compliance, the case of Evergreen demonstrates two issues: (i) that the identity of the “give way” vessel may not always be readily apparent to experienced deck officers; and (ii) that “good seafarer behaviour”, in the context of apportioning liability, is not a fixed standard – it is a product of factual circumstance, interpreted through the (various) rules of the COLREGs, past case law, and the views of expert nautical assessors (the Elder Brethren of Trinity House) post-event. Just as one of the dilemmas facing masters and bridge watchkeepers is what to do when faced with a situation where obligations under the COLREGs appear to conflict, those developing autonomous shipping solutions must grapple with the same dilemmas, save that they must either program these decisions pre-event in a way that is predictable, or the system will have to apply machine learning in order to comply with the Rules.
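By way of illustration, the mechanical limb of the crossing rule is, on its own, trivial to codify. The sketch below (in Python, purely for illustration; the 112.5-degree arc is our assumed proxy for “on her own starboard side”) shows that the difficulty lies not in the rule itself, but in deciding whether it applies at all.

```python
def must_give_way_rule_15(relative_bearing_deg: float) -> bool:
    """Rule 15 (crossing situation): the vessel which has the other on
    her own starboard side shall keep out of the way.

    Illustrative sketch only: it presupposes that both vessels are
    power-driven, that a crossing situation exists and that there is a
    risk of collision -- the very questions the court had to resolve in
    Evergreen. The 112.5-degree arc (22.5 degrees abaft the starboard
    beam) is an assumed proxy for 'on her own starboard side'.
    """
    bearing = relative_bearing_deg % 360.0
    return 0.0 < bearing < 112.5

# A vessel broad on our starboard bow: we give way.
print(must_give_way_rule_15(45.0))   # True
# The same vessel on our port bow: we stand on.
print(must_give_way_rule_15(315.0))  # False
```

Everything this function takes for granted (that both vessels are power-driven, that they are crossing, and that a risk of collision exists) is precisely what was contested in Evergreen.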
In this article we assume that the 1972 COLREGs apply both to manned vessels and to vessels controlled by AI. The issue of whether an autonomous ship can be programmed to determine whether Section II (conduct of vessels in sight of one another, Rules 11-18) or Section III (conduct of vessels in restricted visibility, Rule 19) of Part B (steering and sailing rules) applies to a developing close quarters situation is an important one; but while manned and unmanned ships share the same waterways, it will be essential that both comply with the same rules. We discuss the issues arising from this assumption at the end of this article.
Below, we have considered whether, on the facts in Evergreen, two autonomous vessels would have been able to avoid a collision. In doing so, we also consider a number of the challenges facing developers of maritime AI solutions from a collision liability perspective.
Counsel for Ever Smart argued on appeal that “there was no rule of law” as to the priority of the narrow channel rule (Rule 9) in a crossing situation (Rule 15). When interpreting the interaction of Rule 15 and Rule 9, the first instance judge relied (with some emphasis) on statements of principle from two non-binding cases with a similar (although not identical) fact pattern: The Canberra Star [1962] and Kulemesin v HKSAR [2013]; the former a first instance decision and the latter a decision of a foreign court in criminal proceedings. While persuasive, neither case proffered a definitive ratio (a finding that sets a binding precedent); the first instance judge chose to apply the statements of principle, both because of the “experience and knowledge” of the respective trial judges and because he agreed with them. He was not, however, strictly bound to do so.
In determining whether the crossing rule applied, the first instance judge considered whether Alexandra I was on a “sufficiently defined course.” There is no express requirement in Rule 15 of the COLREGs that a vessel must be on a sufficiently defined course (or indeed any course) in order to be subject to the rule; the principle derives from Lord Wright’s opinion in The Alcoa Rambler [1949]. Alexandra I’s course made good varied between 081 and 127 degrees at about one to two knots over the ground; she had travelled less than a mile in approximately 20 minutes. The court was satisfied that this was not “sufficiently defined” to be considered a course, notwithstanding the constant south-easterly heading, and instead described Alexandra I as “waiting for the pilot vessel to arrive.” Consequently, she was not bound by Rule 15, as she was not on a course that was crossing with that of Ever Smart.
Neither the court at first instance nor the Court of Appeal provided additional clarification as to when a vessel (whether by reference to her speed or her heading) will be deemed to be on a sufficiently defined course. Rather, the test appears to require an observer (who has spent “sufficient time” observing the vessel) to ascertain whether the vessel is on a defined course or is, for example, constantly changing her heading. In the context of automation, this raises an obvious concern. Had Alexandra I been travelling at three knots, would that have made a material difference? Equally, had her course made good varied by a lesser degree (say, between 090 and 110 degrees), would the system have drawn a different conclusion? What degree of variation would an AI system require before deeming another vessel to be on a constant course?
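To make the concern concrete, a minimal sketch of how such a test might be codified as a threshold heuristic is set out below. The 20-degree spread and three-knot thresholds are assumed values chosen purely for illustration; nothing in the COLREGs or the case law prescribes them.

```python
import math
from typing import Sequence

def is_on_defined_course(courses_deg: Sequence[float],
                         speeds_kn: Sequence[float],
                         max_spread_deg: float = 20.0,
                         min_speed_kn: float = 3.0) -> bool:
    """Hypothetical threshold test for a 'sufficiently defined course'.

    A vessel is treated as on a defined course only if her observed
    courses made good stay within a fixed angular spread and she is
    making meaningful way. Both thresholds are assumed values for
    illustration; nothing in the COLREGs or the case law fixes them.
    """
    # Circular mean of the observed courses (handles the 359/001 wrap).
    rads = [math.radians(c) for c in courses_deg]
    mean = math.atan2(sum(math.sin(r) for r in rads),
                      sum(math.cos(r) for r in rads))
    # Largest angular deviation from the mean course, in degrees.
    max_dev = max(abs(math.degrees(math.atan2(math.sin(r - mean),
                                              math.cos(r - mean))))
                  for r in rads)
    making_way = sum(speeds_kn) / len(speeds_kn) >= min_speed_kn
    return making_way and 2 * max_dev <= max_spread_deg

# On the Evergreen facts (courses made good of 081-127 degrees at
# about 1-2 knots), both limbs of the test fail.
print(is_on_defined_course([81, 95, 110, 127], [1.0, 1.5, 2.0, 1.5]))  # False
```

Whichever values a developer chooses, the point remains: someone must fix them in advance, while a court assessing liability after the event is not bound by that choice.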
If this situation was not apparent to two experienced masters, and it took an application of case law in court to determine the obligations of the two vessels, is it likely that two autonomous vessels would have been able definitively to identify their respective obligations under the COLREGs? The very fact that permission to appeal was granted with respect to the issue of priority demonstrates that there was uncertainty as to the application of the narrow channel rule; that uncertainty would arguably have been amplified had Alexandra I approached from the east (the hypothetical east-to-west scenario on which the Court of Appeal judges asked the Elder Brethren to comment) rather than from the west. Further, absent clear guidance on when a vessel will be considered to be on a “sufficiently defined course,” it remains unclear whether a crossing situation could arise in the same or similar factual circumstances if the speed or bearing of Alexandra I had been more established. Even with the use of advanced algorithms, this may be a difficult puzzle for an autonomous system to solve.
Notwithstanding this conclusion, it is possible that autonomous vessels would have been able to avoid a collision, or at least to act so as to reduce the damage sustained, by correcting the “human errors” identified as increasing the causative potency of the respective masters’ actions.
As a general comment, many maritime casualties are not caused by one catastrophic mistake or failure; rather, they are caused by a series of isolated minor decisions or circumstances which, in combination, result in the incident. To use a modern analogy, the holes in the Swiss cheese line up. These errors include the officer on watch (OOW) not following the correct procedure, or missing a warning sign, whether from the echo sounder, the Electronic Chart Display and Information System (ECDIS), the automatic radar plotting aid (ARPA) or a visual observation. The OOW is often distracted, and can be mentally overloaded by the pressure of the environment and the flood of information, particularly in congested waters. AI would presumably not be distracted in this way and would not miss a warning sign.
There is reason to question why Alexandra I was present at the approach to the narrow channel in the first place, both as a result of her early arrival at the approach channel (by some 25 minutes) and the port Vessel Traffic Service (VTS) officer’s approval for Alexandra I to proceed to the channel entrance buoys while Ever Smart was travelling outbound from Jebel Ali. In addition to her proximity to the end of the channel, Alexandra I’s AIS was not operating at the time of the incident, making her less visible to local traffic, and she was criticized for maintaining a poor aural lookout, having misunderstood a VHF conversation between Port Control and a local tug as concerning Ever Smart.
While these contributing errors do little to exonerate the actions of Ever Smart from a liability perspective, it is anticipated that autonomous vessels will (by necessity) operate using enhanced AIS, GPS and radar, in addition to a full suite of sensors and cameras (including thermal and infrared), and will adopt predictive control algorithms to track and anticipate future vessel movements and respond accordingly.
Within congested or restricted shipping areas, automated VTS (or eNAV) will likely be implemented to ensure that vessels manoeuvring within a restricted area are informed of potential collision risks in real time – indeed, the Maritime and Port Authority of Singapore has already trialled AI to analyze marine traffic risks in the Singapore Strait. The provisional results demonstrate that the technology has the ability to “quantify risk in more detail and more quickly than it could be detected by human operators.”
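A standard building block for quantifying such risks is the closest point of approach (CPA) and time to CPA (TCPA) calculation, sketched below for a single pair of straight-line tracks. The positions, speeds and alarm threshold are illustrative assumptions; a real VTS or on-board system would run this (and far richer models) continuously across every pair of tracks.

```python
import math

def cpa_tcpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Closest point of approach (nm) and time to CPA (hours) for two
    vessels on straight-line tracks. Positions are in nautical miles on
    a local grid; velocities are (east, north) components in knots."""
    # Position and velocity of the target relative to own ship.
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]
    rel_speed_sq = vx * vx + vy * vy
    if rel_speed_sq == 0.0:
        return math.hypot(rx, ry), 0.0   # no relative motion: range constant
    tcpa = max(0.0, -(rx * vx + ry * vy) / rel_speed_sq)
    cpa = math.hypot(rx + vx * tcpa, ry + vy * tcpa)
    return cpa, tcpa

# Illustrative tracks: own ship making 11.8 knots due north, a target
# two miles east and five miles north crossing westwards at 8 knots.
cpa, tcpa = cpa_tcpa((0, 0), (0, 11.8), (2, 5), (-8, 0))
print(f"CPA {cpa:.2f} nm in {tcpa * 60:.0f} minutes")  # ~1.15 nm in ~22 minutes
```

An automated watchkeeper flagging every pair whose CPA falls below a chosen threshold would surface a developing close-quarters situation minutes, rather than seconds, in advance.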
Standardized messaging formats, including the use of hybrid messaging services such as the VHF Data Exchange System (VDES), supported by satellite as opposed to (or in addition to) radio frequencies, also have the potential to reduce miscommunication and increase the speed at which collision threats are communicated, absent the risk of misidentification (not identifying the relevant vessel) or miscomprehension (not understanding the VHF message due to linguistic or technological restrictions).
While these technologies are still being trialled, had they been applied to the factual scenario in Evergreen, their ability to identify and report a collision risk may very well have highlighted the potential for collision between Alexandra I and Ever Smart substantially sooner than the “three seconds” in which the master of Ever Smart came to realise that a collision was inevitable.
The first instance judge concluded that the actions of Ever Smart in proceeding along the port side of the narrow channel, in addition to her excessive speed of 11.8 knots and her failure to keep a good visual lookout, had the greatest “causative potency” in causing the damage that resulted from the collision.
Notwithstanding the arguments of the master of Ever Smart as to why he chose not to proceed to the starboard side (namely, that he was not required to do so under the crossing rules), developments in the software designed to assist with unmanned or autonomous navigation could readily ensure that, within a narrow channel, both inbound and outbound vessels proceed on the starboard side (insofar as practicable) at pre-set maximum safe speeds.
Modern manned vessels are already equipped with ECDIS, which is in turn linked to speed and depth sensors, as well as GPS and AIS. Implementing these systems to operate autonomously would allow Port Control (with the assistance of the relevant hydrographic offices in creating and amending the charts) to better control speed limits, both during ordinary navigation and when vessels are navigating within pre-specified distances of each other, to ensure that “safe speed” is observed. While these restrictions do not, in themselves, eradicate the risk of collision, they do reduce the scope of likely damage arising from collisions.
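A layered speed restriction of the kind described might, in its simplest form, look like the sketch below. The eight-knot channel cap and six-knot close-proximity cap are assumed values for illustration only; in practice they would be set by Port Control and the relevant chart data.

```python
def permitted_speed(base_limit_kn: float,
                    in_narrow_channel: bool,
                    nearest_vessel_nm: float) -> float:
    """Illustrative layered speed cap. The 8-knot channel limit and the
    6-knot close-proximity limit (within one mile of another vessel)
    are assumed values, not figures from the COLREGs or the judgment."""
    limit = base_limit_kn
    if in_narrow_channel:
        limit = min(limit, 8.0)       # assumed narrow-channel cap
    if nearest_vessel_nm < 1.0:
        limit = min(limit, 6.0)       # assumed close-proximity cap
    return limit

# Ever Smart's 11.8 knots, inside the channel with Alexandra I close by,
# would be cut to 6 knots under these (assumed) limits.
print(permitted_speed(11.8, in_narrow_channel=True, nearest_vessel_nm=0.9))  # 6.0
```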
With respect to Ever Smart’s failure to keep a good visual lookout, thermal and infrared high-resolution cameras have the ability to identify objects when the human eye cannot. While the master of Ever Smart was only able to make out Alexandra I when she turned her deck lights on (three seconds before the collision), modern cameras may have picked up Alexandra I’s heat signature, if not her outline on infrared, significantly earlier.
While technological advancements undoubtedly demonstrate the potential that autonomous vessels have in reducing collision risk, developers are faced with a number of problems that cannot be readily surmounted.
Unlike our past experience of the large-scale adoption of autonomously controlled machines, there will necessarily be a period in which autonomous, unmanned and manned vessels navigate in the same waterways. Until there is clear guidance to the contrary, the expectation will be that the human standard will apply. It is relevant to note in this regard that case law has established that overreliance on technology will not satisfy the principles of good seamanship and, in any event, there is currently no case law considering a collision between a manned vessel and an unmanned or autonomous vessel.
The duties under the COLREGs differ depending on whether Section II or Section III applies: Section II (conduct of vessels in sight of one another, Rules 11-18) and Section III (conduct of vessels in restricted visibility, Rule 19) of Part B (steering and sailing rules) apply to a developing close quarters situation depending on the visibility. In applying the COLREGs to manned and unmanned ships alike, AI systems will have to be able to understand the limitations of human eyesight in order to determine whether a manned ship is “not in sight”, and then to follow Rule 19 instead of Rules 11-18.
The fact that the AI system might have infra-red or night vision, and is therefore able to “see” the other vessel, would not be permitted to change the position, in fog for example, that the vessels are not “in sight” of one another. Alternatively, should regulators remove Rule 19 from the COLREGs altogether, as a result of advances in technology on all ships (better radars, ARPA, AIS, better navigation systems, infra-red cameras, etc.), and rely only on Rule 6 (safe speed) and Section II? Rule 19 has confused generations of seafarers since 1977, so its deletion may not be mourned. Either way, it is hard to see how regulators can allow autonomous ships to sail the oceans while the COLREGs contain two sets of steering and sailing rules.
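Unless and until Rule 19 is removed, the selection logic described above reduces, in code, to a deliberately counter-intuitive branch: sensor detection is ignored when deciding which section of Part B applies. A minimal sketch follows, assuming some reliable way of modelling what a human lookout could actually see (itself a hard problem).

```python
from enum import Enum

class SteeringRules(Enum):
    SECTION_II = "Conduct of vessels in sight of one another (Rules 11-18)"
    SECTION_III = "Conduct of vessels in restricted visibility (Rule 19)"

def applicable_section(visible_to_human_lookout: bool,
                       detected_by_sensors: bool) -> SteeringRules:
    """Select the applicable steering and sailing rules.

    The second parameter is deliberately ignored: radar, AIS or
    infra-red detection alone does not make two vessels 'in sight of
    one another'. Modelling visible_to_human_lookout (visibility,
    lights, the limits of human eyesight) is the hard, assumed input.
    """
    if visible_to_human_lookout:
        return SteeringRules.SECTION_II
    return SteeringRules.SECTION_III

# Fog: the AI 'sees' the other vessel on infra-red, yet Rule 19 applies.
print(applicable_section(visible_to_human_lookout=False,
                         detected_by_sensors=True).value)
```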
There will be risks for software developers and owners of autonomous vessels alike. Developers of marine AI systems are not only required to codify compliance with the seamanship standard currently in use, but must also produce algorithms that allow autonomous vessels to interact with manned vessels, unmanned (remote-controlled) vessels and truly autonomous vessels in a way that is predictable to each of them, irrespective of the differing states of technology on board (for example, autonomous vessels may be required to interpret standard-frequency VHF messages even when equipped with a VDES system).
But even if the COLREGs were unambiguous, comprehensive and consistent (which they are not), we still would not normally program systems to have no discretion at all. Situations will always exist where the best course of action is to ignore or break the rules, and the designers of systems cannot identify all of these exceptional situations in advance. Machine learning will therefore be required, which must learn the navigational behaviours necessary to avoid or mitigate collisions, even given (indeed, especially given) ambiguous and conflicting regulations, just as human navigators do. But, of course, effective machine learning is only possible with sufficient data, and particularly data arising from collisions or near misses (what computer scientists call “edge cases”).
Despite all of that, accidents may still occur. Given that there is no case law on the matter, third-party liability in the event of a collision involving an autonomous vessel is not yet clear. It is possible that developers may be liable for collision damage if it can be proven that a fault in the programming of onboard systems, or in the way the machine learning has developed, caused (or contributed to) a collision. Would such a fault be akin to unseaworthiness? Would software writers need to be covered by collision insurance?
In addition, there are ethical considerations as to how an autonomous vessel should be programmed in scenarios in which AI is required to choose between loss or damage to its own vessel or cargo, and loss of human life or serious pollution (and the inevitable concerns this raises from a liability perspective for developers, owners and insurers alike).
Consideration must also be given to future scenarios in which an autonomous vessel suffers a catastrophic failure, the worst case being a complete electrical breakdown (for example, as a result of generator failure, cyber-attack or electromagnetic disruption). The vessel may no longer be a vessel “under command” for the purposes of the COLREGs; however, it may also be restricted in its ability to communicate this to nearby vessels or to shore-based control centres in the absence of a “non-digital” master, who might still have the benefit of a satellite phone or, in the traditional way, hoist two black balls to the top of the mast.
Evergreen demonstrates that autonomous vessels may have struggled, in those circumstances, definitively to identify their respective obligations under the COLREGs, due to the inherent ambiguity as to priority. It remains unclear whether other factual scenarios would reveal similar ambiguities in the priority between the various rules of the COLREGs, and it may prove necessary to review the COLREGs to remove as much uncertainty as possible. That said, no amount of redrafting will be able to give conclusive meaning to phrases such as “which may be required by the ordinary practice of seamen, or by the special circumstances of the case” (Rule 2, responsibility).
Evergreen does, however, demonstrate that two autonomous vessels may have been able to identify the collision risk earlier than the masters of Ever Smart and Alexandra I were able to, principally as a result of enhanced communications, audio-visual and locational technology. Programming of systems should prevent excessive speeds in narrow channels and prevent vessels from loitering in hazardous positions. An earlier identification of the potential collision risk could have reduced, or altogether removed, the risk of collision and the consequent damage sustained by Alexandra I, rendering the question of a “sufficiently defined course” entirely redundant.