On-Vessel GPU Selection Guide
Blackwell B200, H200, H100, L40S, and A100 compared for vessel deployments. When each is the right call and the power realities.
James Calder·April 11, 2026
8 posts tagged with this topic.
Google's TurboQuant compresses the KV cache to 3 bits with no accuracy loss. Here is what that means for a vessel GPU rack.
Ethan Marsh·April 10, 2026
AMD Strix Halo and NVIDIA DGX Spark bring 128 GB of unified memory to a shelf-sized box for under $4,000. When to buy one.
James Calder·April 10, 2026
VRAM and KV cache math for models from 7B to 405B at 1M tokens. Per-user scaling for crew and guests. The numbers are ugly.
James Calder·April 7, 2026
Vercel ran the eval: a compressed doc file beat on-demand retrieval 100% to 53%. What that means for AI deployments on vessels.
Ethan Marsh·April 4, 2026
Cloud chatbots fail when the ship leaves port. Here is what an on-vessel AI concierge looks like, and why guests will not notice.
James Calder·April 1, 2026
On-vessel AI handles crew scheduling, certifications, and compliance for 2,000+ crew without relying on a satellite link.
James Calder·March 25, 2026
Your satellite link cannot support cloud-based AI on a vessel. Here is why, and what actually works 200 miles from shore.
James Calder·March 4, 2026