
Bitcoin accepted at checkout  |  Ships from Laval, QC, Canada  |  Expert support since 2016


Sovereign AI for Plebs

Self-hosted AI on hardware you already own, with full credit to the open-source projects that made it possible. One more layer decentralized: the same sovereignty move Bitcoin made for money.

What this section is

D-Central is shipping AI content and, soon, AI hardware and firmware built for the individual Bitcoiner who already thinks like a miner. DCENT_Inference OS and DCENT Heatbox AI are in closed beta under GPL-3.0, with public beta landing summer 2026. None of this replaces the Bitcoin core of the shop — the AI vertical is additive. Same hashcenter mindset, new compute workload.

Pick your pillar

Five entry points

Every post on this site slots into one of these five. Pick the one that matches where your head is at.

First time here?

Start here — the five-step path

Read these in order. By the end you’ll have a local model running on your box, a real UI in front of it, and enough theory to pick the right quant.

  1. Read the Manifesto

     The narrative anchor. Why sovereign AI is the same move Bitcoin made for money.

  2. Read the Pleb's Guide

     Whole-stack overview — models, runners, hardware, the works.

  3. Install Ollama

     Ten minutes from zero to a local model answering prompts on your own box.

  4. Give it a UI (Open WebUI)

     A ChatGPT-shaped front end that talks to your Ollama node. No cloud.

  5. Understand quantization

     GGUF, Q4, Q8, FP16 — what actually fits in your VRAM and why.
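Steps 3 and 4 can be sketched as a short terminal session. This is a hedged sketch, not official D-Central instructions: the install one-liner and the Docker invocation reflect the Ollama and Open WebUI projects' documented defaults at the time of writing, so check their sites before running anything piped to `sh`.

```shell
# Install Ollama (Linux/macOS one-liner from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a small open-weight model and start chatting locally
ollama run llama3.2

# Front it with Open WebUI via Docker, pointed at the host's Ollama daemon
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
# Then browse to http://localhost:3000. No cloud involved.
```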
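Step 5's "what fits in your VRAM" question mostly comes down to arithmetic: parameter count times bits per weight, plus headroom for the KV cache and runtime. A minimal sketch, assuming a flat ~20% overhead factor (a rough illustrative assumption, not a fixed rule):

```python
def est_vram_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight bytes (params * bits / 8), inflated for KV cache and runtime."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits is ~1 GB of weights
    return round(weight_gb * overhead, 1)

# A 7B model at the common GGUF precisions:
for name, bits in [("Q4", 4.0), ("Q8", 8.0), ("FP16", 16.0)]:
    print(name, est_vram_gb(7, bits), "GB")
```

On these assumptions a 7B model needs roughly 4.2 GB at Q4 but about 16.8 GB at FP16, which is why quant choice decides whether a model fits an 8 GB card at all.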

Latest drops

Fresh from the vertical

Newest posts across all six AI categories.

Model library

Open-weight models, catalogued

Long-form model pages — architecture, quantizations, hardware that runs them. Built on the dc_ai_model CPT.

Qwen 3

Alibaba

Alibaba's May 2025 release — first open family with hybrid reasoning (toggle-able chain of thought), Apache 2.0 across…

View model →

Llama 4 (Scout/Maverick)

Meta

Meta's April 2025 MoE-and-multimodal release, headlined by Scout's 10M-token context window and the pre-announced Behemoth frontier model.

View model →

Gemma 3

Google

Google DeepMind's March 2025 Gemma family — vision-capable (4B+), 128K context, with official quantization-aware 4-bit variants.

View model →

Mistral Small 3

Mistral AI

Mistral AI's January 2025 24B model — Apache 2.0, competitive with Llama 3.3 70B, fits on a single…

View model →

DeepSeek R1

DeepSeek

DeepSeek's January 2025 reasoning model — frontier chain-of-thought quality, plus six MIT-licensed distills from 1.5B to 70B.

View model →

DeepSeek V3

DeepSeek

DeepSeek's December 2024 frontier-scale MoE — 671B total, 37B active, trained for ~$5.6M in compute.

View model →

Full library →

Hardware library — upcoming

GPUs, rigs, and the hashcenter retrofit

A dedicated hardware CPT and benchmarks database ship in v1 of the roadmap. Until then, start with the foundational reads:

Shoulders of giants

None of this is ours. The entire sovereign-AI stack the plebs now run at home was built by an open ecosystem of researchers, engineers, and labs who released weights, code, and tools under terms anyone can use. D-Central stands on their shoulders and contributes where it can. Named with gratitude:

llama.cpp (Georgi Gerganov), Ollama, LM Studio / Element Labs, Open WebUI (Timothy J. Baek et al.), Meta (Llama), Google (Gemma), Alibaba (Qwen), Mistral AI, DeepSeek, Black Forest Labs (FLUX), Stability AI, Microsoft (Phi), Hugging Face.
[closed beta] DCENT_Inference OS and DCENT Heatbox AI are in closed beta; public beta summer 2026. GPL-3.0. One more layer decentralized.