Heating Your Home With Inference, Not Just Hashing

D-Central Technologies · 12 min read


The Bitcoin mining heater is no longer a thought experiment. It is a shipping product category with a vendor list. SATO Technologies quietly turned Quebec basements into hashrate years ago. Heatbit convinced American households that a space heater and a Bitcoin node are the same object. HeatCore took the idea into enterprise and small-commercial heating. 21Energy put Antminer guts into Austrian living rooms. The pleb community has been stuffing hashboards into furnace plenums and pool pump rooms since before any of these brands had a logo. Mining heaters are a category because the physics are obvious — every electron that goes into a computer comes back out as heat, and if you have to heat your home anyway, you may as well get paid in sats to do it.

Here is the unspoken corollary, and it is the reason this post exists: an LLM inference rig obeys the same first law of thermodynamics. The GPU fans that scream when Llama 3.1 70B is writing you a bedtime story are moving exactly as many joules as the element in a baseboard heater drawing the same wattage. A rack of RTX 3090s running batch inference is a space heater that also happens to know about the French Revolution and how to refactor your Bash script. The category of “AI inference heater” is not a category yet in the same way “Bitcoin mining heater” is — the vendors haven’t caught up, the product names don’t exist on retail shelves — but the plebs are already building these boxes, and the physics does not care whether a vendor has blessed the idea.

This post is D-Central planting a flag. We make mining heaters. We will make inference heaters. And the frame that glues both categories together is the one we have been using for years: the Hashcenter. Not a datacenter. Not a server farm. A home whose compute load is also its heating load, where every watt does double duty, and where sovereignty comes from owning the box in your furnace room instead of renting a slice of someone else’s desert colocation. Shoulders of giants all the way down — the mining-heater pioneers named above showed that this frame ships. Our job in 2026 and beyond is to extend it to the workload that is eating the world.

The thermodynamic case

Plebs already know this, but it is worth putting in print because the broader internet keeps pretending compute and heat are separable. They are not. Every joule of electrical energy delivered to a computer leaves it as heat. A tiny fraction leaves as acoustic energy (fan noise), a tiny fraction as electromagnetic radiation (WiFi, RF leakage), and a vanishingly small amount is momentarily retained as stored charge in silicon — but on any timescale longer than a few microseconds, the whole budget is heat. A Bitcoin ASIC converts well over 99% of its input power directly to dissipated thermal energy, with the remainder being those losses; a GPU doing inference converts an equally high fraction. The “useful work” — the hash found, the token emitted, the pixel rendered — has a thermodynamic cost that is essentially the full electrical draw.

A few concrete numbers for the pleb doing HVAC math on the back of an envelope:

  • A single RTX 4090 under sustained inference load draws ~450W, which is ~1,535 BTU/hr.
  • A dual RTX 3090 rig hitting ~700W during batch generation: ~2,390 BTU/hr.
  • A four-GPU open-frame build (two 3090s + two P40s) pushed to ~1,200W: ~4,094 BTU/hr.
  • For reference, a standard electric baseboard heater is 1,500W / ~5,120 BTU/hr.
  • An Antminer S21 at wall: ~3,500W / ~11,943 BTU/hr. That is not a space heater; that is a furnace.
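The conversion behind every number in that list is one constant: sustained electrical draw in watts times 3.412 gives BTU/hr of heat output. A minimal sketch (device names and wattages are the illustrative figures from the list above):

```python
# All sustained electrical draw becomes heat (first law); the only
# conversion needed for HVAC math is watts -> BTU/hr.
BTU_PER_WATT = 3.412

def watts_to_btu_hr(watts: float) -> float:
    """Sustained electrical draw (W) -> thermal output (BTU/hr)."""
    return watts * BTU_PER_WATT

for name, watts in [
    ("RTX 4090 under load", 450),
    ("Dual RTX 3090 rig", 700),
    ("Four-GPU open frame", 1200),
    ("Baseboard heater", 1500),
    ("Antminer S21", 3500),
]:
    print(f"{name}: {watts} W ≈ {watts_to_btu_hr(watts):,.0f} BTU/hr")
```

Small rounding differences against the bullet list come from using 3.412 rather than the more precise 3.41214 BTU/hr per watt.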

The only meaningful difference between a $50 resistive baseboard unit and a $4,000 dual-GPU inference shelf drawing the same wattage is that the second one also runs Llama 3.1 70B, generates Stable Diffusion images for your household, serves a local RAG pipeline over your personal documents, and otherwise provides you sovereign intelligence capacity for the duration of the heating season. The heat bill is the same. The byproducts are different.

This is first-law-of-thermodynamics stuff. It is also politics, and we will come back to that at the bottom.

Inference duty cycle vs mining duty cycle

One nuance separates the two workloads, and if you ignore it you will undersize or oversize your heating plan. Mining is a constant thermal output. Inference is bursty.

A Bitcoin ASIC runs at 100% 24/7/365. Plug in an S21 and it is going to hold somewhere between 3,200W and 3,500W of dissipation for every second it is powered on, modulo firmware tuning. The thermal output is predictable to within a few percent. You can size a duct, a register, and a thermostat setpoint around that number and it will behave. This is why ASIC heaters work as primary heat sources in well-insulated spaces.

Inference does not behave like that. A GPU sitting idle with a model loaded in VRAM pulls roughly 30–60W depending on the card. During generation it spikes to 300–450W per card and holds it for the duration of the generation pass — typically seconds to a few minutes per request. A pleb household that uses Open WebUI casually for chat and the occasional image generation might see an average draw in the 100–200W range across 24 hours, even with a card rated for 450W peak. The peak matters for circuit sizing. The average matters for heat output.

What this means practically:

  • Inference alone is supplemental heat, not baseload heat. A 200W-average GPU is not going to keep a Quebec winter at bay.
  • Hybrid setups are the sweet spot. One ASIC for constant baseload thermal output. One (or several) GPUs for peak compute, averaging whatever they average. You get a stable floor of heat from the miner and a variable contribution from the inference rig, and the combination covers the heat load with fewer resistive-backup hours than either alone.
  • You can deliberately push the inference duty cycle up if you actually want the GPU to earn its thermal keep. Batch inference over a large corpus, scheduled image-generation pipelines, LAN-wide serving to family members all using the local LLM, overnight RAG re-indexing jobs, LoRA fine-tuning runs that can chew sustained power for hours, scheduled benchmark sweeps — every one of these pushes the average draw up without requiring you to sit at the keyboard typing prompts.
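The idle-versus-peak arithmetic above reduces to a time-weighted average. A minimal sketch — the idle/peak wattages and duty cycles below are assumed illustrative values in the ranges quoted earlier, not measurements:

```python
def average_draw(idle_w: float, peak_w: float, duty_cycle: float) -> float:
    """Time-weighted average electrical draw (W) for a bursty workload.

    duty_cycle: fraction of time spent at peak (0.0-1.0); the rest of
    the time the card idles with the model parked in VRAM.
    """
    return idle_w * (1 - duty_cycle) + peak_w * duty_cycle

# Casual household chat: a 3090 idling ~40 W, spiking ~350 W for 10% of the day.
casual = average_draw(40, 350, 0.10)   # ~71 W average -> trickle heat
# Pushed with overnight batch jobs: at peak 60% of the day.
pushed = average_draw(40, 350, 0.60)   # ~226 W average -> real supplemental heat
print(f"casual: {casual:.0f} W, pushed: {pushed:.0f} W")
```

The gap between those two numbers is the whole argument for deliberately scheduling batch work: same hardware, triple the thermal contribution.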

The forthcoming DCENT_Inference OS — currently in closed beta, GPL-3.0, with public beta targeted for summer 2026 — applies the same logic our DCENT_OS already does for ASIC miners: schedule the power-hungry work for the hours when you actually need the heat. Warm up the LLM queue when the thermostat wants heat, let it idle when the house is already at setpoint. That is the same pattern mining heaters have been doing since Heatbit shipped v1, translated to the inference workload.
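The scheduling pattern is simple enough to sketch. This is hypothetical logic, not the actual DCENT_Inference OS implementation; it assumes you can read the room temperature (from a smart thermostat API or a cheap sensor) and that you have a queue of deferrable jobs:

```python
# Heat-demand-gated job scheduling, minimal sketch. Setpoint and
# hysteresis values are illustrative assumptions.
import time

SETPOINT_C = 21.0
HYSTERESIS_C = 0.5

def wants_heat(room_temp_c: float) -> bool:
    """Run deferrable jobs only while the room is below setpoint."""
    return room_temp_c < SETPOINT_C - HYSTERESIS_C

def run_queue(job_queue, read_temp, idle_s=60):
    """Drain deferrable jobs (batch inference, re-indexing, LoRA steps)
    whenever the house wants heat; otherwise let the GPU idle."""
    while job_queue:
        if wants_heat(read_temp()):
            job_queue.pop(0)()       # each job is a zero-arg callable
        else:
            time.sleep(idle_s)       # house is warm; stand down
```

A real implementation would also need a peak-power cap, job checkpointing, and a way to interrupt a long generation pass mid-stream, but the thermostat-gated loop is the core of it.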

The pleb’s first AI heater build

There is a spectrum here, and it roughly tracks with how much basement and how much capital a pleb has to work with.

Simple path — single-GPU workstation as warm-room heater. You already own a PC. Add an RTX 3090 (used, ~$700–900 in 2026) or 4090. Install Ollama or your preferred inference stack — see our Install Ollama in 10 Minutes guide. Run Open WebUI on the LAN. Park the machine in whichever room you want to be warm. Keep inference running 24/7 with a batch job, a small agent loop, or household-wide Open WebUI access. Thermal output is modest (100–300W average) but nonzero, and you have now turned your PC into a dorm-grade warm-room heater that also serves sovereign LLM capacity. This is entry-level. Most plebs will start here.
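To make that single-GPU box earn its thermal keep, a batch loop against Ollama's documented /api/generate endpoint is enough. A minimal sketch using only the standard library — the model name and prompts are placeholders, and it assumes Ollama is serving on its default port:

```python
# Overnight batch loop against a local Ollama server. Sequential
# generation keeps the GPU near peak draw for the length of the batch.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def make_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request per Ollama's REST API."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"})

def run_batch(model, prompts):
    """Send each prompt sequentially, yielding completed responses."""
    for prompt in prompts:
        with urllib.request.urlopen(make_request(model, prompt)) as resp:
            yield json.loads(resp.read())["response"]

# Example (requires a running Ollama server and a pulled model):
# for answer in run_batch("llama3.1:70b", ["Summarize document 1...",
#                                          "Summarize document 2..."]):
#     print(answer)
```

Point it at a corpus you actually want summarized or indexed and the "heat byproduct" framing inverts: the batch output is the byproduct of keeping the room warm.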

Upgraded path — dedicated multi-GPU frame. An open-air mining frame (same aluminum extrusion the Ethereum miners used back in the day), a second-hand server PSU, a cheap x11-slot riser board, and two RTX 3090s plus two Tesla P40s. Total sustained draw if you push it: ~1,200W. Put it in the basement. Duct the exhaust to a colder zone. This is real heating output — ~4,000+ BTU/hr — enough to matter in a medium-sized room. Used 3090s and P40s together give you enough VRAM for the current open-weight flagships (Llama 3.1 70B quantized, Qwen models, DeepSeek-V3 distills) with room for batch work. See Used RTX 3090 for LLMs in 2026 for why the 3090 is still the pleb card of choice.

Hashcenter path — hybrid rack, ASIC + GPU shelf. This is the category-defining configuration. One ASIC on the bottom of the rack running 24/7 for baseload heat — an S21, a refurbished S19j Pro, or a DCENT_axe farm of open-source miners for the sovereignty-maxi build. Above it, a dual-GPU inference shelf for peak compute. Shared intake ducting from a cold zone (garage, outside air, cold basement corner), shared exhaust ducting into the living space. One monitoring stack — the closed-beta DCENT Toolbox watches both workloads, schedules the inference jobs for when the house wants heat, and logs thermal and economic output in one place. This is a residential Hashcenter. It looks nothing like a datacenter because it is architected around a completely different premise: the heat is the product. See From S19 to Your First AI Hashcenter for a more detailed build walkthrough.

Practical considerations every pleb will run into:

  • 240V circuits. If you are in Quebec, Ontario, or anywhere with decent residential electrical capacity, run a dedicated 240V circuit for anything above about 1,500W sustained. The existing D-Central content on 120V vs 240V for mining applies one-for-one to GPU rigs — higher voltage means lower amperage means less resistive loss in the wiring and cooler breakers. Canadian plebs: this is not optional above ~12A sustained.
  • Noise. Open-frame GPU rigs under load run quieter than an S21 but not by as much as you hope. Three or four high-RPM axial fans on blower-style GPUs can hit 60–70 dBA in the room. Basement placement, a noise-isolated enclosure, or under-desk boxed builds all help. Ducted exhaust into a different room solves most of it.
  • Cooling intake. Same playbook as ASIC heaters: pull intake air from a cold space (garage, outside air through a filtered inlet, a dedicated cold room), exhaust into the zone you want warmed. Do not recirculate exhaust air through the intake — you will cook the GPUs the same way you cook an ASIC that sits in its own heat plume.
  • Fire safety. Same rules as any space heater. Three-foot clearance from combustibles, working smoke detector in the room, dedicated breaker, no surge bars, no daisy-chained extensions. Inference rigs draw less peak current than ASICs but the same physics of “1,500W of sustained thermal output next to fabric” applies.
  • Insurance and building code. Residential electrical code in Canada and the US is generally fine with dedicated high-wattage circuits as long as they are permitted and inspected. Insurers are fine with baseboard heaters; they are fine with computers. They may raise eyebrows at a 20kW mining operation. Hashcenter-scale residential builds are a gray area nobody has litigated yet — use common sense and stay under the attention threshold of the residential utility.
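The 120V-vs-240V arithmetic in the first bullet is worth doing explicitly. A back-of-envelope sketch — the 80% continuous-load convention is an assumption drawn from common NEC/CEC practice, and none of this substitutes for a licensed electrician:

```python
# Circuit sizing back-of-envelope. Continuous loads are conventionally
# held to 80% of breaker rating (NEC/CEC practice) -- assumption here,
# not electrical advice.
def amps(watts: float, volts: float) -> float:
    """Current drawn by a resistive-equivalent load."""
    return watts / volts

def min_breaker(watts: float, volts: float) -> float:
    """Smallest breaker rating keeping a continuous load under 80%."""
    return amps(watts, volts) / 0.8

rig_w = 1500  # a pushed single-GPU rig or small baseboard equivalent
print(f"{rig_w} W @ 120 V: {amps(rig_w, 120):.2f} A "
      f"(breaker ≥ {min_breaker(rig_w, 120):.1f} A)")
print(f"{rig_w} W @ 240 V: {amps(rig_w, 240):.2f} A "
      f"(breaker ≥ {min_breaker(rig_w, 240):.1f} A)")
```

At 120V the same 1,500W rig pulls 12.5A — past the ~12A sustained threshold flagged above — while at 240V it pulls 6.25A, which is the whole case for the dedicated circuit.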

The current product landscape

Honest picture, shoulders of giants, no positioning D-Central as superior to anyone. The category is still being invented.

Mining heaters — established vendors.

  • SATO Technologies (Canadian) — several product lines of Bitcoin-only ASIC heaters, serious engineering, serious Quebec presence. The closest thing to a category leader in North America.
  • Heatbit (US) — consumer-grade ASIC heater designed to look like a floor-standing space heater. Proved the consumer concept.
  • HeatCore — enterprise-oriented ASIC heating, pairs with resistive backups for hybrid systems in commercial spaces.
  • 21Energy (Austrian) — European-side leader, Bitcoin-only, strong design language, pushed the frame into central European HVAC conversations.
  • The DIY community — years of hashboard-based space heaters, repurposed Antminers in custom enclosures, immersion-cooled builds with the heat going into hot water tanks. The community was first, and it still is.

AI inference heaters — emerging.

  • The formal vendor list is approximately empty as of mid-2026. Nobody has shipped a retail-branded inference heater yet.
  • A handful of startups in adjacent spaces are doing industrial waste-heat recovery from hyperscaler datacenters — piping exhaust into district heating for Nordic cities, or selling residual heat to greenhouses. This is not the same thing. That is exporting datacenter heat as an afterthought; a Hashcenter is a home where the heat was always the point.
  • The pleb community, predictably, is ahead of the vendors. Reddit’s /r/LocalLLaMA and /r/homelab threads feature dozens of multi-GPU builds parked in basements and garages with ducting to living spaces. No brand. No warranty. Just physics applied.
  • D-Central is entering this category. The DCENT Heatbox AI is on our v1 roadmap — closed beta now, GPL-3.0, public beta summer 2026. This post is not a product pitch for it. The pitch comes later. The category-defining work comes first, and that work is admitting that the DIY community got here before any vendor did and that our role is to make the build easier, not to gatekeep the idea.

The takeaway: most plebs will build their own for the next year or two. That is fine and correct. This post, and the rest of the D-Central AI content library, exists to make that DIY path shorter. See The Pleb’s Guide to Self-Hosted AI for the software stack, and BTC-AI Public Companies: The Hashcenter-to-Tokens Pivot for what the publicly traded version of this trade looks like.

Comparison: the three boxes that heat a room

|                          | Resistive baseboard | Bitcoin mining heater | AI inference heater |
|--------------------------|---------------------|-----------------------|---------------------|
| Thermal output (rated)   | ~5,120 BTU/hr @ 1,500W | ~11,940 BTU/hr @ 3,500W (S21) | ~4,100 BTU/hr @ 1,200W (4-GPU rig, pushed) |
| Work produced            | None | Hashrate → BTC | Tokens/sec → sovereign LLM output |
| Duty cycle               | Thermostat-gated, typ. 30–60% | Constant 100% | Bursty, typ. 20–40% average |
| Ownership model          | Appliance | Appliance + revenue node | Appliance + capability asset |
| Decentralization layers  | 0 (utility → element) | Adds: monetary (sats) | Adds: monetary + compute + weights |
| Noise (dBA typical)      | ~0 | 70–85 (raw), 45–55 (enclosed) | 55–70 (open frame), 40–50 (enclosed) |
| Best residential fit     | Everywhere electric heat is legal | Cold climates, off-peak rates, BTC-friendly jurisdictions | Cold climates, plebs who want local AI anyway |
| Winter-friendly incentives | Standard electric heat credits | Same + miner depreciation (commercial); personal-use gray area | Same + home-office equipment treatment in some jurisdictions |

One thing the table does not show, and it matters: the inference heater is the only box of the three whose output value scales with the sophistication of the software you run on it. A baseboard is a baseboard forever. An ASIC miner’s output scales with BTC price and difficulty. An inference rig’s output scales with every new open-weight model release — Llama 3.1 70B this year, something better from Meta or Alibaba or DeepSeek or Black Forest Labs next year, all running on the same hardware that already lives in your basement heating your living room. The thermodynamics are constant; the capability curve is not.

The sovereignty angle

Mining heaters reclaim kWh that would otherwise have been pure thermal waste and turn them into sats. Inference heaters take the same kWh and add a third transformation: kWh → joules → sovereign intelligence. The same electron, leaving the same wall outlet, can do three jobs if you stack the rack right.

  • Heat you made, not heat you bought from a utility that burned a remote fossil plant to generate it.
  • Compute you own, not compute you rent from a hyperscaler that logs every prompt.
  • Weights you chose, not weights a vendor swapped out on you in a silent update.
  • Hardware you bought, not hardware whose terms of service are a click-through.

That is four layers decentralized from the hyperscaler datacenter that would otherwise be serving your LLM queries. In the sovereignty stool of Bitcoin — your keys, your node, your coins — the AI parallel is shaping up the same way: your hardware, your weights, your heat. One more layer every time you move a workload home.

For the broader philosophical anchor, see the Sovereign AI Manifesto. Everything in this post is downstream of the argument there.

Closing

Datacenters export heat to the sky. Cooling towers, evaporative rejection, a plume of warm air that vanishes into the atmosphere and contributes nothing to anybody’s life. Hashcenters return heat to a living room. It is a small physical fact with large political implications: the grid electricity you are going to pay for anyway can either warm a desert cooling loop or warm your couch. There is no third option where the electrons do less work. The only question is what useful byproducts you extract on the way to thermal equilibrium.

If you are heating your home electrically this winter, you are running a heater. The only open question is whether that heater is also running Bitcoin, AI, or both. D-Central will cover all three.


D-Central is a Canadian Bitcoin mining hardware company. Our mining heaters have kept Quebec basements warm for years. Our AI products — DCENT_OS, DCENT_axe, DCENT Toolbox, and the forthcoming DCENT Heatbox AI and DCENT_Inference OS — are closed beta, GPL-3.0, with public beta targeted for summer 2026.



Related Posts

Self-Hosted AI Troubleshooting: GPU Not Detected, OOM, Slow Tokens

Self-hosted AI breaks. So does firmware. Troubleshooting is a skill plebs already have — this post just translates the common AI failure modes (GPU not detected, OOM on load, slow tokens, service won’t start) into the vocabulary you already use.


Used RTX 3090 for LLMs in 2026: Still King?

24 GB of VRAM at $600–$800 used. For LLMs under 70B parameters at Q4–Q5 quants, the RTX 3090 is still the pleb standard in 2026. Here’s the head-to-head vs 4090, 5090, P40, and A5000, plus a buying checklist.


The Pleb’s Guide to Self-Hosted AI

Self-hosted AI isn’t as easy as opening ChatGPT — but for plebs who already run nodes and miners, the learning curve is half what it looks like. Here’s the whole picture before you install anything.
