



Sovereign AI for Plebs

Self-hosted AI on hardware you already own, with full credit to the open-source projects that made it possible. It decentralizes one more layer: the same sovereignty move Bitcoin made for money.

What this section is

D-Central is shipping AI content and, soon, AI hardware and firmware built for the individual Bitcoiner who already thinks like a miner. DCENT_Inference OS and DCENT Heatbox AI are in closed beta under GPL-3.0, with public beta landing summer 2026. None of this replaces the Bitcoin core of the shop — the AI vertical is additive. Same hashcenter mindset, new compute workload.

Pick your pillar

Five entry points

Every post on this site slots into one of these five. Pick the one that matches where your head is at.

First time here?

Start here — the five-step path

Read these in order. By the end you’ll have a local model running on your box, a real UI in front of it, and enough theory to pick the right quant.

  1.

    Read the Manifesto

    The narrative anchor. Why sovereign AI is the same move Bitcoin made for money.

  2.

    Read the Pleb's Guide

    Whole-stack overview — models, runners, hardware, the works.

  3.

    Install Ollama

    Ten minutes from zero to a local model answering prompts on your own box.

  4.

    Give it a UI (Open WebUI)

    A ChatGPT-shaped front end that talks to your Ollama node. No cloud.

  5.

    Understand quantization

    GGUF, Q4, Q8, FP16 — what actually fits in your VRAM and why.
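Once Ollama is running (steps 3 and 4), you can talk to it from any script, not just the UI. A minimal sketch, assuming an Ollama daemon on its default port (11434) with a model such as `llama3.2` already pulled; the endpoint and payload shape follow Ollama's `/api/generate` API, but the model name here is just an example:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_request(model: str, prompt: str) -> bytes:
    """Serialize a non-streaming payload for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(model: str, prompt: str) -> str:
    """Send one prompt to the local Ollama daemon and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (requires a running Ollama daemon):
# print(ask("llama3.2", "Why run models locally?"))
```

Everything stays on your box: the request never leaves localhost, which is the whole point of the stack.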
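As a back-of-envelope illustration of step 5, here is why the quant you pick decides what fits in VRAM. The bits-per-weight figures below are rough approximations (GGUF quants carry per-block scales, and you still need headroom for KV cache and activations), not numbers from this site:

```python
# Approximate bits per weight for common quantization levels.
# Real GGUF files run slightly larger than these idealized figures.
BITS_PER_WEIGHT = {"FP16": 16.0, "Q8_0": 8.5, "Q4_K_M": 4.8}

def approx_gib(params_billions: float, quant: str) -> float:
    """Rough weight size in GiB: parameter count times bits per weight, in bytes."""
    bytes_total = params_billions * 1e9 * BITS_PER_WEIGHT[quant] / 8
    return bytes_total / 2**30

for quant in BITS_PER_WEIGHT:
    print(f"24B model at {quant}: ~{approx_gib(24, quant):.1f} GiB of weights")
```

Run it and you see why a 24B model like Mistral Small 3 needs a workstation card at FP16 but drops into consumer-GPU territory at Q4.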

Latest drops

Fresh from the vertical

Newest posts across all six AI categories.

Model library

Open-weight models, catalogued

Long-form model pages — architecture, quantizations, hardware that runs them. Built on the dc_ai_model CPT.

Mistral Small 3

Mistral AI

Mistral AI's January 2025 24B model — Apache 2.0, competitive with Llama 3.3 70B, fits on a single…

View model →

Command R+

Cohere

Cohere's April 2024 RAG-native flagship — 104B dense, first-class grounded citation and tool use, CC-BY-NC 4.0.

View model →

Whisper Large v3

OpenAI

OpenAI's November 2023 open ASR model — 1.55B params, MIT-licensed, the open reference for multilingual speech-to-text.

View model →

FLUX.1 schnell

Black Forest Labs

Black Forest Labs' August 2024 Apache 2.0 FLUX variant — 12B distilled to 1-4 steps for fast, commercially-open…

View model →

Stable Diffusion 3.5

Stability AI

Stability AI's October 2024 MMDiT flagship — 2B (Medium) and 8B (Large) variants with dramatically improved prompt adherence…

View model →

Qwen 3

Alibaba

Alibaba's May 2025 release — first open family with hybrid reasoning (toggle-able chain of thought), Apache 2.0 across…

View model →

Full library →

Hardware library — upcoming

GPUs, rigs, and the hashcenter retrofit

A dedicated hardware CPT plus a benchmarks database ships in v1. Until then, start with the foundational reads:

Heads up: a full hardware CPT plus benchmarks taxonomy lands in a later task on the roadmap.

Shoulders of giants

None of this is ours. The entire sovereign-AI stack the plebs now run at home was built by an open ecosystem of researchers, engineers, and labs who released weights, code, and tools under terms anyone can use. D-Central stands on their shoulders and contributes where it can. Named with gratitude:

llama.cpp (Georgi Gerganov), Ollama, LM Studio / Element Labs, Open WebUI (Timothy J. Baek et al.), Meta (Llama), Google (Gemma), Alibaba (Qwen), Mistral AI, DeepSeek, Black Forest Labs (FLUX), Stability AI, Microsoft (Phi), Hugging Face.