

Bitcoin accepted at checkout  |  Ships from Laval, QC, Canada  |  Expert support since 2016

Archives: AI Hardware


Apple Mac Studio (M3 Ultra)

Apple Silicon’s inference appliance: up to 512 GB unified memory at 819 GB/s, runs 70B+ models on a coffee-cup-sized box.


RTX 5090

Blackwell flagship: 32 GB GDDR7, 1792 GB/s bandwidth — enough VRAM to run 32B models comfortably at Q8, or 70B models at aggressive sub-4-bit quants.


AMD Strix Halo (Ryzen AI Max+ 395)

AMD’s mobile/mini-PC APU with up to 128 GB unified LPDDR5X — the AMD answer to Apple’s unified-memory approach.


RTX 4090

Ada Lovelace’s consumer flagship: 24 GB, 1 TB/s bandwidth, 82.6 FP16 TFLOPS. The fastest previous-generation single card for pleb inference.


RTX A4000

Single-slot Ampere workstation card with 16 GB and a blower. The quiet-rack pleb’s favourite for dense multi-GPU builds.


RTX A5000

Dual-slot blower with 24 GB and ECC. The professional’s 3090 — same VRAM, quieter, rack-ready.


RTX 3090

NVIDIA’s 2020 flagship remains the pleb sweet spot: 24 GB of GDDR6X for $600–800 used, runs 32B models comfortably at Q4.
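The fit claims in these blurbs reduce to simple arithmetic: weight storage is roughly parameter count times bits per weight divided by 8, plus runtime overhead. A minimal sketch (the 15% overhead factor for KV cache and buffers is an assumption, not a measured figure):

```python
def vram_gb(params_b: float, bits: int, overhead: float = 0.15) -> float:
    """Rough VRAM estimate in GB for a params_b-billion-parameter model.

    weights ≈ params × bits / 8; 1B params at 8 bits ≈ 1 GB.
    The overhead factor (assumed 15%) covers KV cache and runtime buffers.
    """
    weights_gb = params_b * bits / 8
    return weights_gb * (1 + overhead)

print(f"{vram_gb(32, 4):.1f}")  # 32B at Q4 ≈ 18.4 GB -> fits in 24 GB
print(f"{vram_gb(70, 4):.1f}")  # 70B at Q4 ≈ 40.2 GB -> multi-GPU territory
```

This is why 24 GB cards cluster around the 32B-at-Q4 sweet spot, while 70B models push you to unified-memory boxes or multi-card rigs.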


Tesla P40

The budget pleb pick: 24 GB of Pascal-era VRAM for $150–250 used. Slow by 2026 standards but unbeatable $/GB.
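The $/GB claim can be made concrete using the midpoints of the used-price ranges quoted above (the prices are this listing's own figures, not current market data):

```python
# Midpoints of the used-price ranges quoted in the listings above:
# (price in USD, VRAM in GB)
cards = {
    "Tesla P40": (200, 24),  # ~$150-250 used
    "RTX 3090": (700, 24),   # ~$600-800 used
}

for name, (price_usd, vram) in cards.items():
    print(f"{name}: ${price_usd / vram:.2f}/GB")
# Tesla P40: $8.33/GB
# RTX 3090: $29.17/GB
```

Roughly 3.5× cheaper per gigabyte, which is the whole case for the P40 despite its Pascal-era compute.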