
Satellite Connectivity for AI at Sea: What Actually Happens Between the Marketing Brochure and the South Pacific

James Calder · 11 min read

Your satellite provider quoted you 200 Mbps. You're getting 11. The vessel is pitching 4 degrees in moderate seas south of New Zealand, and your AI pipeline just stalled because the model update it was pulling is 1.2 GB and the link dropped mid-transfer. Again.

This is the gap between satellite connectivity marketing and satellite connectivity operations. If you're deploying AI workloads on vessels, you need to understand exactly what's happening in that gap — because your system architecture depends on it.

The Three Orbits, Without the Sales Pitch

Satellite constellations sit at three altitude bands, and each one makes a fundamentally different trade-off between latency, throughput, and coverage reliability.

GEO (Geostationary, ~35,786 km). These are the incumbents — Inmarsat, SES, Intelsat. The satellite parks itself over one spot on the equator and covers about a third of the Earth's surface. The physics are simple: the signal travels 35,786 km up and 35,786 km back down, so just reaching shore takes roughly 240 ms at the speed of light, and real-world round-trip latency lands between 550 and 700 ms once you account for processing. Throughput per vessel on shared beams is typically 2–20 Mbps down, though dedicated VSAT can push higher if you're paying for it. GEO doesn't care about handovers because the satellite doesn't move relative to you. But that latency is a wall. You're not running interactive inference over a 600 ms link.

MEO (Medium Earth Orbit, ~8,000–20,000 km). SES's O3b mPOWER is the main player here. Orbiting at roughly 8,000 km, round-trip latency drops to 125–175 ms — meaningfully better. Throughput per beam is substantial; O3b can deliver hundreds of Mbps to a single terminal. The catch: MEO satellites do move across the sky, so your terminal needs to track them and hand over between satellites. O3b mPOWER uses steerable spot beams, which helps, but you're still dependent on a constellation that has fewer satellites than LEO and correspondingly fewer ground stations. Coverage at high latitudes gets thin.

LEO (Low Earth Orbit, ~340–550 km). Starlink, OneWeb, soon Kuiper. This is where the hype is. Latency is 25–60 ms — close to terrestrial. Starlink Maritime advertises up to 220 Mbps download. The constellation is massive: over 6,000 satellites for Starlink as of early 2026. But LEO satellites cross your sky in about 4 minutes, which means your terminal is constantly handing over. And coverage depends entirely on ground station proximity, inter-satellite laser links, and constellation density at your latitude.

Those are the physics. Now here's what they mean when you're actually at sea.
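The latency floors quoted above fall straight out of altitude. A quick sanity check — assuming a bent-pipe path with the satellite directly overhead, ignoring processing and ground-station backhaul:

```python
# Best-case propagation delay per orbit band: terminal -> satellite -> ground
# station, satellite straight overhead. Real links add processing and routing.
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def bent_pipe_ms(altitude_km: float) -> float:
    return 2 * altitude_km / C_KM_PER_S * 1000  # up + down, in milliseconds

for name, alt_km in [("GEO", 35_786), ("MEO (O3b)", 8_000), ("LEO (Starlink)", 550)]:
    print(f"{name:>15}: {bent_pipe_ms(alt_km):6.1f} ms minimum to reach shore")
```

GEO comes out near 239 ms one way, so the minimum round trip is already ~477 ms before any processing — which is why real-world GEO RTT sits in the 550–700 ms range.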

Real-World Throughput vs. the Spec Sheet

Starlink Maritime in the North Atlantic or Gulf of Mexico — good ground station coverage, dense constellation overhead — delivers 80–150 Mbps down fairly consistently, with peaks touching 200+. That's real. It's also the best-case scenario.

Move to the central Pacific, 1,500 nautical miles from the nearest ground station, and that number drops to 15–40 Mbps. Inter-satellite laser links (ISLs) are supposed to close this gap, and Starlink has been deploying them aggressively on newer V2 Mini satellites. But ISLs add hops, each hop adds latency and jitter, and the effective throughput for a single terminal degrades. You're sharing capacity with every other maritime and aviation user in that coverage cell.

OneWeb at 1,200 km altitude runs about 30–70 Mbps in practice, with latency around 70–100 ms. Solid, but fewer ground stations than Starlink and less capacity per cell.

Legacy GEO VSAT — what most of the commercial fleet still runs — gives you 3–10 Mbps on a standard plan. Predictable, though. The link doesn't vanish every 4 minutes.

The critical metric most providers don't advertise: sustained throughput over 24 hours, including handovers, weather, and congestion. For Starlink Maritime in favorable coverage, that's roughly 40–80 Mbps sustained. In the South Pacific, it can be 8–25 Mbps sustained. That's what you architect for.

Handovers, Rain Fade, and the Gaps Nobody Talks About

LEO handovers are the thing that will bite you if you've designed your AI pipeline like a shore-side application. Starlink terminals switch satellites every 15–90 seconds depending on geometry. Each handover causes a brief interruption — usually 50–500 ms, sometimes longer. Most of the time, TCP handles it fine. But if you're streaming telemetry to a shore-side model, or pulling a 400 MB ONNX model update, those micro-interruptions compound. A large transfer that should take 45 seconds at 80 Mbps ends up taking 3 minutes because of retransmits and window resets.

Rain fade is real at every orbit, but it's worse at higher frequencies. Starlink uses Ku-band; heavy rain can attenuate the signal by 3–6 dB, dropping throughput by 50% or more. In the tropics during monsoon season, you can lose hours of usable bandwidth per day. GEO VSAT in C-band is more rain-resilient, which is one reason it hasn't disappeared from vessel installs despite the latency penalty.

Then there's the geometry problem. LEO constellations thin out at the equator and above roughly 60° latitude. In the Southern Ocean, below 55°S, Starlink coverage gets spotty. If your vessels transit between New Zealand and Antarctica — or even just work the southern Indian Ocean — you will have periods with no LEO coverage. You need a backup link, or you need your systems to survive without one.

And the problem nobody in the sales meeting mentions: ground station backhaul. Every LEO satellite that doesn't have an ISL path to a ground station is useless to you. Starlink has been building ground stations aggressively, but there are still ocean areas where the satellite over your head can see you but can't see shore. The link lights up, the terminal says it's connected, throughput is zero. Your monitoring dashboard shows "connected" while your pipeline starves.

What This Means for AI Workloads

Most teams get the architecture wrong here. They design their AI system as if satellite connectivity is a slightly worse version of shore-side broadband. It's not. It's a fundamentally different medium with different failure modes.

Shore-side broadband fails in binary: it works or it doesn't. Satellite connectivity fails in gradients. You get 80 Mbps, then 30, then 8, then 0 for 12 seconds, then 45, then 2 during a rain cell, then back to 60. Your system needs to operate usefully across that entire range.

This has concrete implications for AI workload architecture.

Model deployment. If you're pushing model updates to vessels, you cannot assume a clean, fast transfer. A 500 MB model update over a 25 Mbps link with 3% packet loss and periodic handover interruptions is a 10-minute operation on a good day. You need chunked, resumable transfers with integrity verification at each chunk. Delta updates — sending only the changed weights — cut transfer sizes by 60–90% depending on the update. ONNX model diffing keeps most updates under 50 MB.
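A chunked, resumable transfer can be sketched in a few lines. The `fetch_range` callable here is an assumption standing in for whatever range-capable transport you use (over HTTP it would be a `Range` request); the point is that a link drop costs you one chunk, not the whole file, and the final digest check catches corruption:

```python
import hashlib
import os
from typing import Callable

CHUNK = 4 * 1024 * 1024  # 4 MB: a handover mid-transfer costs one chunk, not the file

def resumable_transfer(
    fetch_range: Callable[[int, int], bytes],  # (offset, length) -> bytes; raises OSError on link loss
    total_size: int,
    dest: str,
    expected_sha256: str,
    max_retries: int = 50,
) -> None:
    """Pull a model file chunk by chunk, resuming from the last byte on disk."""
    offset = os.path.getsize(dest) if os.path.exists(dest) else 0
    with open(dest, "ab") as f:
        while offset < total_size:
            length = min(CHUNK, total_size - offset)
            for _ in range(max_retries):
                try:
                    data = fetch_range(offset, length)
                    break
                except OSError:
                    continue  # handover, rain fade, congestion: retry the same chunk
            else:
                raise RuntimeError(f"chunk at offset {offset} failed {max_retries} times")
            f.write(data)
            f.flush()  # survive a process restart, not just a link drop
            offset += len(data)
    with open(dest, "rb") as f:
        if hashlib.sha256(f.read()).hexdigest() != expected_sha256:
            raise ValueError("digest mismatch: delete the partial file and restart")
```

Because the offset is recovered from the bytes already on disk, a vessel that loses the link for an hour mid-transfer picks up exactly where it stopped.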

Inference architecture. Run inference on the vessel. This isn't a nice-to-have; it's the only architecture that works. If your AI system requires a round trip to shore for every inference, you've built a system that stops working the moment the link degrades below your latency or throughput threshold. Edge inference with models sized for vessel-side compute is the baseline. Shore-side is for training, aggregation, and model improvement — not real-time decisions.

Telemetry and data upload. Vessels generate enormous volumes of sensor data. A modern vessel with vibration monitoring, thermal imaging, AIS, weather sensors, and camera feeds can produce 5–50 GB per day. You're not uploading all of that over satellite. You need an intelligent data pipeline that prioritizes: anomaly data and inference results go immediately (small payloads, kilobytes to low megabytes). Raw sensor data gets queued, compressed, and uploaded opportunistically during high-bandwidth windows. Historical bulk data waits for port, where you've got a gigabit ethernet drop and no metering.
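The three-tier prioritization can be expressed as a small priority queue. The 20 Mbps threshold for "high-bandwidth window" is an illustrative tunable, not a fixed rule:

```python
import heapq
import itertools

# Tiers from the text: anomalies and inference results go now, raw sensor data
# waits for a high-bandwidth window, bulk history waits for the port ethernet drop.
IMMEDIATE, OPPORTUNISTIC, PORT_ONLY = 0, 1, 2

class UploadQueue:
    def __init__(self) -> None:
        self._heap: list = []
        self._seq = itertools.count()  # preserves FIFO order within a tier

    def put(self, tier: int, item: bytes) -> None:
        heapq.heappush(self._heap, (tier, next(self._seq), item))

    def drain(self, link_mbps: float, in_port: bool):
        """Yield whatever is eligible under current conditions, highest priority first."""
        if in_port:
            cutoff = PORT_ONLY
        elif link_mbps >= 20:  # "high-bandwidth window" threshold is a tunable
            cutoff = OPPORTUNISTIC
        else:
            cutoff = IMMEDIATE
        while self._heap and self._heap[0][0] <= cutoff:
            _, _, item = heapq.heappop(self._heap)
            yield item
```

On a degraded link only the IMMEDIATE tier drains; everything else stays queued on local storage until conditions improve.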

Federated learning. If you're improving models across a fleet, the vessel doesn't send raw training data. It sends gradient updates — typically 1–10 MB per round. That's feasible even on a degraded LEO link. Design your federated learning protocol to tolerate stale updates and variable participation. Some vessels will report every round. Others will miss five rounds while transiting the South Pacific. Your aggregation server needs to handle both without the global model diverging.
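One way to tolerate stale updates — a sketch, not the only scheme — is a FedAvg-style aggregation that discounts an update by how many rounds behind the global model it was computed. The `half_life` parameter here is an illustrative knob:

```python
import numpy as np

def aggregate(global_w: np.ndarray,
              updates: list[tuple[np.ndarray, int, int]],  # (delta, n_samples, round_computed)
              current_round: int,
              half_life: float = 3.0) -> np.ndarray:
    """FedAvg variant: stale updates still count, but lose half their
    influence every `half_life` rounds they lag behind."""
    weighted = np.zeros_like(global_w)
    total = 0.0
    for delta, n_samples, r in updates:
        w = n_samples * 0.5 ** ((current_round - r) / half_life)
        weighted += w * delta
        total += w
    return global_w if total == 0 else global_w + weighted / total
```

A vessel that missed five rounds in the South Pacific still contributes when it reconnects, but its nine-round-old gradient can't drag the global model backward.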

Architecting for Graceful Degradation

The phrase "graceful degradation" gets thrown around a lot. Here's what it means in practice for maritime AI.

Define three operating modes for your system, tied to measured link quality — not to what the terminal reports, but to what your application layer actually observes.

Connected (sustained throughput > 20 Mbps, latency < 200 ms). Full sync. Push model updates, pull telemetry, run any shore-side augmentation you've designed for. This is your North Atlantic, good-weather mode.

Degraded (1–20 Mbps, or latency > 200 ms, or > 5% packet loss). Reduce sync frequency. Switch to delta-only model updates. Compress telemetry aggressively. Disable any shore-side inference calls. The vessel is autonomous for all real-time decisions; shore-side gets a summary feed.

Disconnected (< 1 Mbps sustained or total link loss). The vessel operates on its last-synced models. Local inference continues. Telemetry queues to local storage. The system keeps working. When the link returns, it reconciles state — newest model version wins, telemetry uploads in priority order, and the shore-side dashboard catches up.
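The three modes above reduce to a small classifier over measured link quality. The thresholds are the ones from the text; tune them per fleet:

```python
from enum import Enum

class LinkMode(Enum):
    CONNECTED = "connected"
    DEGRADED = "degraded"
    DISCONNECTED = "disconnected"

def classify(mbps: float, latency_ms: float, loss_pct: float) -> LinkMode:
    """Map application-layer measurements to an operating mode."""
    if mbps < 1:
        return LinkMode.DISCONNECTED
    if mbps < 20 or latency_ms > 200 or loss_pct > 5:
        return LinkMode.DEGRADED
    return LinkMode.CONNECTED
```

In practice you want hysteresis on top of this: require several consecutive samples before switching modes, so a 500 ms handover blip doesn't flap you into Disconnected and back.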

The key discipline: never design a feature that only works in Connected mode unless it's explicitly non-critical. If your anomaly detection, your route optimization, your predictive maintenance — whatever your core value proposition is — requires Connected mode, you've built a demo, not a product.

Measuring What Matters

Install your own link quality monitor. Don't trust the terminal's reported throughput. Run a lightweight agent that pings a known endpoint every 10 seconds and logs actual RTT, jitter, and achieved throughput on small test transfers. Log satellite handover events. Correlate link quality with vessel position and weather.
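A minimal version of that agent fits in a page. This sketch uses TCP connect time as a cheap RTT proxy against a probe endpoint of your choosing (`probe.example.net` is a placeholder, not a real service), and appends one CSV row per sample for later correlation with position and weather:

```python
import csv
import socket
import time

PROBE_HOST, PROBE_PORT = "probe.example.net", 443  # placeholder: pick a stable shore-side endpoint
INTERVAL_S = 10

def probe_rtt_ms(host: str, port: int, timeout: float = 5.0):
    """TCP connect time as a cheap RTT proxy; returns None when the link is down."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

def run(logfile: str = "link_quality.csv") -> None:
    with open(logfile, "a", newline="") as f:
        writer = csv.writer(f)
        while True:  # one row every 10 s; join later against GPS and weather logs
            writer.writerow([time.time(), probe_rtt_ms(PROBE_HOST, PROBE_PORT)])
            f.flush()
            time.sleep(INTERVAL_S)
```

A fuller agent would add periodic small throughput tests and terminal telemetry (Starlink exposes handover and obstruction stats locally), but even RTT-plus-timeouts alone will map your dead zones.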

After 30 days, you'll have a coverage and performance map for your specific routes. That data is worth more than any constellation coverage map from a provider's website, because it reflects your terminal, your vessel's motion characteristics, your routes, and real atmospheric conditions.

Most operators overestimate their usable bandwidth by 3–5x because they're looking at peak numbers instead of P50 sustained throughput. The P50 is what your architecture needs to survive on. The P10 — the throughput you exceed 90% of the time — is what it needs to survive on gracefully.
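Computing those two numbers from your monitor's samples is a one-liner with the standard library:

```python
import statistics

def bandwidth_budget(samples_mbps: list[float]) -> dict:
    """From a month of throughput samples: P50 is what the architecture
    lives on; P10 is what it must survive gracefully."""
    deciles = statistics.quantiles(samples_mbps, n=10, method="inclusive")
    return {"p50_mbps": statistics.median(samples_mbps), "p10_mbps": deciles[0]}
```

Feed it the 30 days of samples from your link monitor and architect against the result, not the spec sheet.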

The Multi-Constellation Question

Should you run two providers? Maybe. Starlink plus a GEO VSAT backup is becoming a common configuration. Starlink handles the bulk throughput; GEO VSAT provides the fallback that works everywhere, even if it's slow. The cost is real — you're paying for two terminals, two subscriptions, two sets of cabling and mounting. On a vessel where connectivity is operationally critical, it's worth it. On a bulk carrier running basic monitoring, probably not.

The bonding and failover layer matters. SD-WAN appliances from vendors like Peplink or the maritime-specific offerings from Dualog and GTMaritime can manage multi-link failover and traffic steering. The AI-specific consideration: make sure your failover rules understand your application's needs. Model updates can tolerate a switch to the slow GEO link; they just take longer. Real-time telemetry sync should pause and queue rather than attempt to stream over a 5 Mbps GEO link and clog the pipe for everything else.
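The application-aware part of that policy can be captured in a small table. This is an illustrative sketch, not any vendor's configuration format — the class names and actions are invented for the example:

```python
# Per-class steering policy: which links each traffic class may use, and what
# to do when only the slow GEO backup is up. Names are illustrative.
POLICY = {
    "model_update":   {"links": {"leo", "geo_vsat"}, "on_backup": "continue_slowly"},
    "telemetry_sync": {"links": {"leo"},             "on_backup": "pause_and_queue"},
    "crew_browsing":  {"links": {"leo"},             "on_backup": "block"},
}

def may_use(traffic_class: str, active_link: str) -> bool:
    return active_link in POLICY[traffic_class]["links"]
```

The key decision is the middle row: telemetry sync is excluded from the GEO link entirely, so it queues instead of clogging the 5 Mbps pipe that everything else now depends on.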

Stop Designing for the Brochure

The satellite connectivity landscape is better than it's ever been. LEO constellations have fundamentally changed what's possible for AI at sea. But "possible" and "reliable" are different words.

Design your AI systems for the link you'll actually have — the 2 AM throughput in the South Pacific during a rain squall, not the demo on a calm day in the English Channel. Build edge-first, sync-when-able, queue-always. Test your degraded and disconnected modes as aggressively as you test your connected mode.

If you're planning a maritime AI deployment and want to understand what your actual connectivity profile looks like on your routes, reach out to our team. We've built the monitoring tools and the architecture patterns to make AI work where the link doesn't.