
When someone says "cyber incident" in a maritime context, most people picture a phishing email hitting the purser's laptop. Maybe a credential-stuffing attack on the booking portal. Annoying, recoverable, and firmly in the IT department's problem space.

That was last year's threat model. SAFETY4SEA's annual report logged 828 maritime cyber incidents in 2025, up 103% year-over-year. Ransomware cases more than doubled to 372. But the number that should keep vessel operators awake at night is not the total count. It is where the ransomware is landing: ballast water control, engine monitoring, and other operational technology that keeps the vessel actually moving.

The IT/OT boundary is gone

If you build AI systems for vessels (which is what we do at ShipboardAI), this shift changes your architecture decisions today, not next quarter. IT ransomware encrypts files you need. OT ransomware reaches the physical systems that control a vessel's stability and propulsion. A compromised booking system means angry guests and a bad charter review. A compromised ballast system means the vessel's stability itself is in question. Those are different categories of problem, and they should never share a network path.

The CYTUR white paper behind these numbers breaks down the attack vectors, and the pattern is consistent: the majority of successful OT compromises start with an IT network breach and move laterally. The attacker gets in through a crew workstation or a vendor VPN, pivots through flat network segments, and eventually reaches systems that were never designed to be internet-accessible in the first place.

Cloud-dependent AI widens the blast radius

Here is the architecture argument I keep coming back to. Most vessel AI platforms on the market right now maintain persistent connections to cloud infrastructure. Always-on VPN tunnels, open API endpoints, cloud-hosted model inference. Every one of those is a network pathway an attacker can traverse in the opposite direction.

If your AI needs a constant link to a cloud provider to function, you are maintaining an open door. When the rest of your network security depends on minimizing entry points, a persistent outbound tunnel to a cloud inference endpoint is the architectural equivalent of leaving a window open in a locked house.

A sovereign AI stack running entirely on local hardware does not eliminate the cyber threat (nothing does), but it removes an entire class of attack surface: no persistent cloud tunnels, no API keys that can be harvested from a shore-side breach, no outbound data flows an attacker can piggyback on. The AI keeps running behind a properly segmented network, and the blast radius of any single compromise stays contained to its segment.

What secure-by-design looks like in practice

I spend more time on network architecture than model architecture these days, and I think that is the correct priority for anyone deploying compute on a vessel right now. Here is what we spec for every deployment:

Four-segment network topology. Guest Wi-Fi, crew devices, vessel operations, and the AI compute stack each sit on their own VLAN with explicit firewall rules between them. The AI stack communicates with the operations network through a controlled API gateway with request validation. It never touches the guest network. If a guest's phone gets compromised, the path to engine monitoring is blocked at the network level.
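The segmentation policy above boils down to a default-deny matrix between segments. Here is a minimal sketch of that idea in Python; the segment names, service labels, and allow rules are illustrative assumptions, not our actual firewall configuration:

```python
# Sketch of a four-segment policy as a default-deny allow matrix.
# Segment and service names are illustrative, not a real vessel config.

SEGMENTS = {"guest", "crew", "ops", "ai"}

# Explicit allows: (src, dst, service). Anything not listed is denied.
ALLOW = {
    ("ai", "ops", "api-gateway"),   # AI reaches ops only via the validated gateway
    ("crew", "ai", "dashboard"),    # crew devices can view AI dashboards
}

def permitted(src: str, dst: str, service: str) -> bool:
    """Default deny: a flow passes only if explicitly allowed."""
    if src not in SEGMENTS or dst not in SEGMENTS:
        raise ValueError("unknown segment")
    return (src, dst, service) in ALLOW

# A compromised guest phone has no path to engine monitoring:
assert not permitted("guest", "ops", "api-gateway")
assert permitted("ai", "ops", "api-gateway")
```

The point of expressing it this way: there is no rule ordering to reason about, and adding a segment changes nothing until you explicitly allow a flow into or out of it.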

No persistent outbound connections. When the satellite link is up, the vessel can pull model updates and sync telemetry on a schedule. Those sync jobs are authenticated, time-bounded, and initiated from the vessel side. There is no always-on tunnel sitting open for a scanner to find.
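Time-bounded in practice means the sync session has a hard expiry that the vessel side enforces, regardless of transfer state. A minimal sketch, with an assumed 15-minute window (the real window length and schedule vary by deployment):

```python
# Sketch of a vessel-initiated, time-bounded sync window.
# The window length is an illustrative assumption.
from datetime import datetime, timedelta, timezone

SYNC_WINDOW = timedelta(minutes=15)   # hard cap per session

def in_sync_window(started: datetime, now: datetime) -> bool:
    """Tear down the connection once the window expires, whatever
    state the transfer is in; the next scheduled job resumes it."""
    return now - started < SYNC_WINDOW

start = datetime(2025, 6, 1, 3, 0, tzinfo=timezone.utc)
assert in_sync_window(start, start + timedelta(minutes=10))
assert not in_sync_window(start, start + timedelta(minutes=20))
```

Because the session is opened from the vessel and closed on a timer, there is no standing listener for an external scanner to enumerate.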

Signed boot and attested workloads. The compute stack boots from verified firmware and runs inference in an attested environment. If someone manages to tamper with the models or the serving layer, the system flags it before the next inference cycle.
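The attestation check for model artifacts can be as simple as pinning a digest at build time and refusing to serve anything that drifts from it. A hedged sketch using SHA-256; the manifest format and artifact name are made up for illustration:

```python
# Sketch of pre-inference artifact attestation via pinned digests.
# Manifest format and artifact names are illustrative assumptions.
import hashlib

def sha256(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# At build time, a signed manifest pins each artifact's digest.
artifact = b"model-weights-v1"          # stand-in for real model bytes
manifest = {"stability-model.onnx": sha256(artifact)}

def attest(name: str, blob: bytes) -> bool:
    """Flag any artifact whose digest drifts from the manifest."""
    return manifest.get(name) == sha256(blob)

assert attest("stability-model.onnx", artifact)
assert not attest("stability-model.onnx", b"model-weights-TAMPERED")
```

In a real deployment the manifest itself is signed and verified as part of the boot chain, so an attacker who swaps model weights also has to forge a signature, not just recompute a hash.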

The regulatory clock is ticking

IACS Unified Requirements E26 and E27 are now in force for new builds. These requirements mandate cyber resilience for onboard systems and equipment, covering exactly the operational technology that the 2025 ransomware numbers show is under active attack. Marcus covered the broader threat landscape when he wrote about Mythos-class vulnerability discovery, and the compliance implications line up directly with what we are seeing in the incident data.

If you are planning an AI deployment on a vessel in the next twelve months, your network architecture will be audited against these requirements. Building it correctly from day one costs a fraction of retrofitting after a surveyor flags it, or after an incident exposes it.

The 828 number is the floor

That 103% increase represents where the trend line is heading, not where it levels off. AI-assisted vulnerability discovery is accelerating on both sides of the fence. Defenders will patch faster. Attackers will find new entry points faster. The vessels that come through this are the ones where the AI, the operations systems, and the guest services all run behind a properly segmented, locally controlled architecture. Sovereign AI is not just about surviving a satellite outage. It is about reducing your blast radius when the threat is not the weather, but the network itself.


Planning a vessel AI deployment that needs to hold up in a hostile threat environment? Let's talk. We design vessel AI architectures that are hardened from the network layer up.