
Bitcoin accepted at checkout  |  Ships from Laval, QC, Canada  |  Expert support since 2016

Category: AI

Sovereign AI for plebs — open-source models, self-hosting, AI Hashcenters.

AI

Self-Hosted AI Troubleshooting: GPU Not Detected, OOM, Slow Tokens

Self-hosted AI breaks. So does firmware. Troubleshooting is a skill plebs already have — this post just translates the common AI failure modes (GPU not detected, OOM on load, slow tokens, service won’t start) into the vocabulary you already use.

AI

Used RTX 3090 for LLMs in 2026: Still King?

24 GB of VRAM at $600–$800 used. For LLMs under 70B parameters at Q4–Q5 quants, the RTX 3090 is still the pleb standard in 2026. Here’s the head-to-head vs 4090, 5090, P40, and A5000, plus a buying checklist.

AI

The Pleb’s Guide to Self-Hosted AI

Self-hosted AI isn’t as easy as opening ChatGPT — but for plebs who already run nodes and miners, the learning curve is half what it looks like. Here’s the whole picture before you install anything.

AI

Sovereign AI for Bitcoiners: A Manifesto

Bitcoin replaced centralized money with math we run ourselves. Frontier AI is the next centralized layer. The plebs — with power, hardware, and sovereignty instincts — are already halfway to sovereign AI.

AI

Connect Your Self-Hosted AI to Home Assistant, Obsidian, Shortcuts

ChatGPT is worth its monthly fee because it powers your tools. Your local Ollama speaks the same OpenAI API. Here’s how to wire Home Assistant voice, Obsidian notes, VS Code Continue, and iPhone Shortcuts to your Hashcenter — no subscriptions, no cloud.
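Ollama's OpenAI-compatible endpoint lives at `/v1` on its default port 11434, so any OpenAI-style client can be pointed at it by swapping the base URL. A minimal sketch using only the standard library (the model name `llama3.1` is an assumption, use whatever `ollama list` shows):

```python
import json
import urllib.request

# Ollama serves an OpenAI-compatible API at /v1 on its default port.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(prompt, model="llama3.1"):
    """Build the same JSON body an OpenAI client would send."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt, model="llama3.1"):
    """POST to the local Ollama server and return the reply text."""
    body = json.dumps(build_chat_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# Usage (with Ollama running):
#   ask("Turn off the living room lights.")
```

The same base-URL swap is all that Continue, Home Assistant, or an iPhone Shortcut needs: they speak OpenAI, and so does your Hashcenter.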

AI

LM Studio vs Ollama vs llama.cpp: Which Runner for Plebs?

Three excellent open-source runners. Three different plebs. llama.cpp is the foundation Gerganov built. Ollama wraps it for daemon simplicity. LM Studio wraps it in a polished GUI. Here’s the 15-minute decision guide.

AI

Heating Your Home With Inference, Not Just Hashing

Bitcoin ASICs dump nearly all their power as heat — which is why mining heaters are a category. GPUs doing LLM inference follow the same thermodynamics. If you’re going to heat your home electrically, you may as well be running Llama 3.1 too.
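The thermodynamics are back-of-envelope simple: essentially every watt a GPU draws ends up as heat in the room. A quick sketch (the wattage and electricity rate below are assumptions, plug in your own):

```python
# Electric resistance heat vs GPU "inference heat" -- same joules either way.
GPU_WATTS = 350        # assumed sustained inference draw for a 3090-class card
PRICE_PER_KWH = 0.08   # hypothetical rate; use your utility's

def daily_heat_kwh(watts, hours=24):
    """Nearly all electrical input becomes heat in the room."""
    return watts * hours / 1000

def daily_cost(watts, hours=24, price=PRICE_PER_KWH):
    return daily_heat_kwh(watts, hours) * price

# A 350 W card run around the clock delivers 8.4 kWh of heat per day --
# the same as a 350 W space heater, except this one answers prompts.
```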

AI

BTC-AI Public Companies: The Hashcenter-to-Tokens Pivot

Hut 8, Core Scientific, IREN, and TeraWulf are pivoting from Bitcoin Hashcenters to AI datacenter operations. It’s a rational move for public companies — and it makes sovereign AI on pleb-owned hardware more important, not less.

AI

Open WebUI: The ChatGPT Experience, But Yours

The terminal is fine for testing, unusable for daily driving. Open WebUI is the ChatGPT-style interface that plugs into your local Ollama — multi-user, RAG, web search, reachable from anywhere over Tailscale. One Docker command; your Hashcenter becomes your private ChatGPT.

AI

ComfyUI for Plebs: Your First Local Image Generation

You installed Ollama and got local chat. Time for local image generation. ComfyUI runs SDXL, SD 3.5, and FLUX.1 on hardware you already own — the Midjourney/DALL-E subscription you can cancel. Here’s the pleb on-ramp.

AI

From S19 to Your First AI Hashcenter

The mining shed is the hardest part of an AI Hashcenter — and you already have it. 240V service, airflow, sound isolation, breaker capacity. A weekend of work converts an S19 shed into a hybrid BTC + sovereign-AI Hashcenter.
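Breaker capacity is where most conversions get checked first. Under standard North American practice, continuous loads should stay under 80% of the breaker rating; a rough sketch (the device wattages are assumptions for illustration):

```python
# Rough load check for a hybrid BTC + AI shed circuit.
def amps(watts, volts=240):
    return watts / volts

def fits_breaker(total_watts, breaker_amps, volts=240):
    """Continuous loads should stay under 80% of the breaker rating."""
    return amps(total_watts, volts) <= 0.8 * breaker_amps

s19_watts = 3250                 # Antminer S19, approximate wall draw
gpu_rig_watts = 2 * 350 + 150    # two 3090-class cards plus host (assumed)
total = s19_watts + gpu_rig_watts

# 4100 W is about 17.1 A at 240 V -- under the 24 A continuous limit
# of a 30 A breaker, but over the 16 A limit of a 20 A breaker.
print(fits_breaker(total, breaker_amps=30))
```

Run the same check before adding a second GPU rig; the 80% rule is why "the breaker is rated 30 A" and "I can draw 30 A all day" are different statements.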

AI

GGUF, Q4, Q8, fp16: A Pleb’s Guide to LLM Quantization

Quantization is lossy compression for LLMs — same idea as JPEG for photos. It’s the reason a used 3090 runs hefty sub-70B models in VRAM and an 8 GB laptop runs Phi-3.5. Here’s what the Q4_K_M and GGUF suffixes actually mean, and which quant to pick for your rig.
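The sizing arithmetic is just parameters times bits per weight. A rough estimator (the bits-per-weight figures approximate common GGUF quants; real files differ slightly because block scales add overhead, and the KV cache needs headroom on top):

```python
# Approximate bits per weight for common formats (GGUF quants are mixed-
# precision, so these are averages, not exact).
BITS_PER_WEIGHT = {"fp16": 16.0, "Q8_0": 8.5, "Q5_K_M": 5.7, "Q4_K_M": 4.8}

def model_gb(params_billion, quant):
    """Estimated weight size in GB: params x bits-per-weight / 8."""
    return params_billion * BITS_PER_WEIGHT[quant] / 8

# An 8B model: ~16 GB at fp16, but ~4.8 GB at Q4_K_M -- that's the
# 8 GB laptop story. A 34B model at Q4_K_M is ~20 GB, which is why
# the 24 GB 3090 is the pleb sweet spot.
for quant in BITS_PER_WEIGHT:
    print(f"8B at {quant}: {model_gb(8, quant):.1f} GB")
```

Leave 10–20% of VRAM free beyond the weight size for the KV cache and compute buffers before calling a model "fits".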