
Bitcoin accepted at checkout  |  Ships from Laval, QC, Canada  |  Expert support since 2016

BITAXE_SWARM_FAIL Warning

Bitaxe – Swarm Mode Fails After Firmware Upgrade

The Bitaxe AxeOS Swarm dashboard breaks after a firmware update: rows go blank, peers appear as ghosts or duplicates, AutoScan returns zero peers, or data flashes briefly and then clears. Individual peers continue mining; only the aggregate swarm renderer is broken. Recovery is browser-side, or a re-flash plus re-adding the peers; no individual unit is bricked.

Warning — Should be addressed soon

Affected Models: Bitaxe Supra, Bitaxe Ultra, Bitaxe Gamma (601 / 602), Bitaxe Gamma Turbo (GT), Bitaxe Hex (multi-unit deployments using AxeOS swarm endpoint)

Symptoms

  • AxeOS Swarm tab loads its frame (header, columns, controls) but every data cell reads `--`, `null`, or stays empty
  • Swarm rows show miner names duplicated across rows — same Bitaxe in two rows, or two different miners share a name
  • Swarm data loads briefly (~5 s) then disappears, leaving only the miner count visible
  • Ghost / phantom peers appear in the swarm list that you never added (old MAC addresses, stale IPs, duplicates)
  • One specific Bitaxe in the swarm cannot display its OWN data on the dashboard (shows everyone else but blanks itself)
  • Swarm worked perfectly before flashing v2.14.0b1 / early-access-2026-03 / a specific beta build, broke immediately after
  • Browser DevTools Network tab shows `/api/swarm/info` or per-peer `/api/system/info` calls returning 200 with empty payloads, 404, or CORS errors
  • Browser DevTools Console shows JS errors: `TypeError: Cannot read property 'X' of undefined` or `JSON.parse` failures
  • Master Bitaxe's serial console (115200 8N1) shows repeated `swarm: peer timeout` or `http_client: connection refused` lines on Swarm tab load
  • AutoScan on the master returns 0 peers even though every Bitaxe is on the same /24 subnet and pingable
  • After update the master is on a different IP than the peers expect — DHCP lease churned during boot
  • Fleet is mixed-firmware (some peers on the old build, some on the new) — version skew across the swarm
  • Some peers on stock AxeOS, others on `shufps/ESP-Miner-NerdQAxePlus` or `BitMaker-hub/ESP-Miner-NerdAxe` forks — schema divergence
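
Before working through the fix list, it helps to triage the raw API responses you see in DevTools. The sketch below is a rough classifier for a captured `/api/swarm/info` response (status code plus body); the diagnosis strings are our own summaries of the symptom list above, not official firmware output.

```python
import json

def classify_swarm_payload(status: int, body: str) -> str:
    """Rough triage of a /api/swarm/info response captured from DevTools."""
    if status == 404:
        return "endpoint missing - firmware too old or swarm disabled"
    if status != 200:
        return f"HTTP {status} - peer unreachable or rebooting"
    if not body.strip():
        return "200 with empty payload - renderer will blank every cell"
    try:
        peers = json.loads(body)
    except json.JSONDecodeError:
        return "200 with malformed JSON - schema skew between firmware versions"
    if not peers:
        return "200 with empty peer list - swarm config lost (check NVS)"
    return f"ok - {len(peers)} peer(s) reported"
```

Paste the status and body from the Network tab into `classify_swarm_payload` to decide whether you are looking at a network problem, a schema problem, or lost swarm state.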

Step-by-Step Fix

1

Open the master's Swarm tab and click `Refresh` three times, slowly. Each refresh fans HTTP calls out to every peer again. Sometimes the first call lands during a peer's watchdog reboot or DHCP renewal, and the second or third call succeeds. Per upstream issue #1658, a user with five Gamma 602 units recovered the swarm with exactly this pattern: refresh, refresh, refresh, AutoScan, delete ghosts. No flashing is required for roughly 60% of post-update swarm failures.

2

Clear the browser cache and hard-reload the swarm page. Chrome/Edge: Ctrl+Shift+R or Cmd+Shift+R. The swarm UI is a single-page app with aggressively cached JS. After a firmware update the browser may run the OLD swarm JS against the NEW peer JSON, which produces the 'rows render then disappear' symptom. A hard reload pulls the new JS so the renderer matches the schema.

3

Confirm every peer is on the same firmware version. Open `http://<peer-ip>/` in tabs and read the version footer on each AxeOS dashboard. If any peer is older, flash it to match using the Bitaxe Web Flasher: `bitaxe.org/flasher`, Connect, pick model, leave `Erase Flash` UNCHECKED to preserve NVS, click Install. 30 seconds per peer. This single step closes the schema-drift failure mode entirely.
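
For more than a handful of peers, clicking through tabs gets tedious. As a sketch, the version check can be scripted: fetch each peer's `/api/system/info` (the endpoint named in the symptom list), pull its version string, and flag any peer off the majority build. The `version` field name is an assumption; check your build's `main/http_server/openapi.yaml`.

```python
from collections import Counter

def find_version_skew(versions):
    """versions: {peer_ip: firmware_version}, e.g. collected by GETting
    /api/system/info from each peer. Returns the peers that are NOT on
    the majority build (ties resolve to whichever version appears first)."""
    if not versions:
        return {}
    majority, _ = Counter(versions.values()).most_common(1)[0]
    return {ip: v for ip, v in versions.items() if v != majority}
```

Any peer returned by `find_version_skew` is a flash candidate; an empty result means the schema-drift failure mode is ruled out.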

4

Delete every ghost, duplicate, or stale row from the swarm peer list. Swarm tab → click `Delete` on each problem row. Once the list is clean, click `AutoScan` and wait 60 seconds for the mDNS sweep. Healthy peers should reappear with correct names and live data. Repeat the delete pass if AutoScan re-introduces a stale entry; sometimes the cache takes two passes to clear.
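
If you saved the peer list JSON (Step 6 shows how to fetch it), duplicates can be spotted programmatically rather than by eyeballing rows. This is a sketch; the `hostname` key is an assumption about your build's swarm schema, and `MAC` can be substituted if your JSON carries it.

```python
from collections import Counter

def find_duplicate_rows(peers, key="hostname"):
    """peers: list of peer dicts from /api/swarm/info. Returns the values of
    `key` that appear on more than one row - ghost/duplicate candidates."""
    counts = Counter(p.get(key) for p in peers if p.get(key))
    return sorted(name for name, n in counts.items() if n > 1)
```

Every name this returns corresponds to at least one row that should be deleted before re-running AutoScan.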

5

Set DHCP reservations on every Bitaxe at the router. UniFi / ASUS / OpenWrt / pfSense all support reserving an IP per MAC address. This eliminates the post-update DHCP-churn failure mode where peers come back on different IPs after reboot and the swarm peer list points at the wrong host. Restart the master and re-AutoScan after setting reservations.

6

Hand-edit the swarm peer list via the master's HTTP API. From a terminal: `curl http://<master-ip>/api/swarm/info` to read current peer list. Then `curl -X POST http://<master-ip>/api/swarm/info -H 'Content-Type: application/json' -d '[{"IP":"192.168.1.51"},{"IP":"192.168.1.52"},{"IP":"192.168.1.53"}]'` to overwrite with a clean array. Adjust schema to your AxeOS version — newer builds may require `hostname` and `MAC` fields. Check the OpenAPI spec at `main/http_server/openapi.yaml` in the upstream repo.
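
The same edit can be done from Python instead of raw curl, which is easier to re-run as the fleet changes. The payload builder below emits exactly the minimal `[{"IP": ...}]` array shown above; the POST helper is a thin stdlib wrapper and obviously needs a live master to exercise.

```python
import json
import urllib.request

def build_peer_payload(ips):
    """Minimal swarm schema from this article: a JSON array of {"IP": ...}.
    Newer builds may also require "hostname"/"MAC" fields - check openapi.yaml."""
    return json.dumps([{"IP": ip} for ip in ips])

def post_swarm_config(master_ip, payload):
    """Overwrite the master's peer list. Network call - run against your master."""
    req = urllib.request.Request(
        f"http://{master_ip}/api/swarm/info",
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status
```

Usage: `post_swarm_config("192.168.1.50", build_peer_payload(["192.168.1.51", "192.168.1.52"]))`.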

7

Rebuild the swarm config from a known-good template. If the master's NVS is too far gone, save the working swarm config from a different healthy AxeOS dashboard, modify the IPs to match your fleet, and POST it to the master via Step 6. Back up your working swarm config periodically: `curl http://<master-ip>/api/swarm/info > swarm-backup-$(date +%F).json`. A 200-byte JSON file beats re-typing 17 IPs by hand after every NVS wipe.
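
A Python equivalent of the backup one-liner, for folding into a cron job or fleet script. This is a sketch: it writes whatever peer list you pass it to a dated file, mirroring the `swarm-backup-$(date +%F).json` naming above.

```python
import datetime as dt
import json
import pathlib

def backup_swarm_config(peers, directory="."):
    """Write the current peer list (as fetched from /api/swarm/info) to
    swarm-backup-YYYY-MM-DD.json in `directory`. Returns the file path."""
    path = pathlib.Path(directory) / f"swarm-backup-{dt.date.today():%Y-%m-%d}.json"
    path.write_text(json.dumps(peers, indent=2))
    return path
```

Restoring is the reverse: `json.loads(path.read_text())` and POST the result back via Step 6.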

8

Capture the master's serial console during a swarm load. USB-C to the master, open `screen /dev/cu.usbmodem* 115200` (macOS/Linux) or PuTTY at 115200 8N1 (Windows). Open the Swarm tab in your browser and watch the console. Lines like `swarm: peer 192.168.1.52 timeout` or `http_client: connection refused at 192.168.1.51` tell you which peer is failing and why. `nvs: read failed key=swarm_peers` means NVS is corrupt — go to Step 9.
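
If you capture the console output to a file, a small pattern-matcher can pull out the lines that matter. The regexes below match the log lines quoted in this article; exact formats may differ between builds, so treat the patterns as assumptions and extend them from your own capture.

```python
import re

# Log patterns quoted in this guide -> human-readable verdicts.
PATTERNS = {
    r"swarm: peer (\S+) timeout": "peer {0} not answering - check power/WiFi",
    r"http_client: connection refused at (\S+)": "peer {0} refusing HTTP - wrong IP or mid-reboot",
    r"nvs: read failed key=swarm_peers": "swarm NVS corrupt - see Step 9",
}

def triage_serial_line(line):
    """Return a verdict string for a known failure line, else None."""
    for pat, verdict in PATTERNS.items():
        m = re.search(pat, line)
        if m:
            return verdict.format(*m.groups())
    return None
```

Run every captured line through `triage_serial_line` and you get a short list of failing peer IPs plus a flag for corrupt NVS.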

9

Surgically erase only the swarm-related NVS keys. A full NVS wipe (5-second RESET hold) loses WiFi creds and pool config too. To preserve those and only clear swarm state: from serial console use AxeOS's `nvs erase swarm` command where supported, or `esptool.py --chip esp32s3 --port COMx erase_region <addr> <size>` if you've identified the swarm key's flash address from a flash dump. Conservative approach: full NVS wipe + WiFi/pool re-entry takes 2 minutes and is reliably safe.

10

Roll back to the last-known-good stable AxeOS release. From `bitaxeorg/ESP-Miner/releases`, pick the version that was working before your update. Flash via Web Flasher (NVS preserve mode). If swarm works on the older version, you've confirmed it's an upstream regression — file a GitHub issue with reproduction steps and link existing reports like #1658 and #1617.

11

Force a homogeneous AxeOS version across the entire fleet. Tier 2 nuclear option: flash every Bitaxe to the same version using `esptool.py` from the command line. Script it: iterate over each peer's USB-C COM port and run `esptool.py --chip esp32s3 --port COMx --baud 921600 write_flash -z 0x0 esp-miner-factory-gamma-vX.Y.Z.bin`. After the re-flash, re-pair and rebuild the swarm peer list once. From then on, stage every fleet update through one canary peer before rolling it out to the rest.
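
One way to script the iteration is to generate the command line per port and hand each one to `subprocess.run`. The builder below reproduces the exact `esptool.py` invocation above; port names are illustrative (`COMx` on Windows, `/dev/ttyACM*` on Linux).

```python
def flash_commands(ports, image, chip="esp32s3", baud=921600):
    """Build one esptool.py invocation per USB-C serial port.
    Execute each with subprocess.run(cmd, check=True)."""
    return [
        ["esptool.py", "--chip", chip, "--port", port, "--baud", str(baud),
         "write_flash", "-z", "0x0", image]
        for port in ports
    ]
```

Keeping the commands as lists (not shell strings) avoids quoting bugs and makes a dry-run trivial: print them, sanity-check the ports, then execute.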

12

Audit network-layer multicast / mDNS for AutoScan failures. From a Linux host on the same VLAN: `avahi-browse -a -r` should list every Bitaxe announcing `_http._tcp` and `_bitaxe._tcp`. On macOS: `dns-sd -B _http._tcp local.`. If peers don't appear, mDNS is being filtered. Common offenders: UniFi USG with igmp_snooping enabled but mDNS reflector disabled; any router with VLAN segmentation and no mDNS bridging; OpenWrt blocking 5353/udp on bridge interfaces. Fix at the router; mDNS isn't optional for AutoScan.
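
For scripted auditing, `avahi-browse -a -r -p -t` emits machine-parsable semicolon-delimited records. The parser below assumes avahi's documented parsable field order (event; interface; protocol; name; type; domain, with hostname; address; port; txt appended on resolved `=` records); verify against your avahi version's man page.

```python
def parse_avahi(output, service="_http._tcp"):
    """Parse `avahi-browse -a -r -p -t` output; return {service_name: address}
    for resolved ('=') records of the given service type."""
    found = {}
    for line in output.splitlines():
        fields = line.split(";")
        if len(fields) >= 8 and fields[0] == "=" and fields[4] == service:
            found[fields[3]] = fields[7]
    return found
```

If a Bitaxe that is pingable by IP never shows up in `parse_avahi`'s result, mDNS is being filtered somewhere between you and that unit.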

13

Build a static swarm config from inventory and push it everywhere. Once you have N Bitaxes with reserved DHCP IPs, generate a single JSON config file with all peer IPs and POST it to the master via Step 6. Save it to git. When you add a new Bitaxe, edit the JSON and re-POST. This bypasses AutoScan entirely and removes network multicast from the variable list. For 10+ unit fleets this is the only sane long-term approach; D-Central runs its own internal Bitaxe rack this way.
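
A minimal inventory-to-config generator, as a sketch: feed it the (hostname, IP) pairs from your DHCP reservations and it emits the JSON array to POST via Step 6. The `hostname` field beyond the bare `IP` is an assumption; check `openapi.yaml` for what your build accepts.

```python
import json

def render_swarm_config(inventory):
    """inventory: list of (hostname, ip) tuples from your DHCP reservations.
    Emits a pretty-printed JSON array suitable for git and for POSTing
    to the master's /api/swarm/info."""
    return json.dumps(
        [{"hostname": host, "IP": ip} for host, ip in inventory],
        indent=2,
    )
```

Commit the output file; adding a unit becomes a one-line diff plus a re-POST.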

14

Separate fork families into separate swarms. If you mix stock-AxeOS Bitaxes with `NerdQAxePlus` multichip units and `NerdAxe` units, run two or three master dashboards instead of one: one per fork family. Schema drift between forks isn't a bug; it's a design difference, and trying to swarm across them will always glitch. Acceptable layout: one stock-AxeOS swarm dashboard for Bitaxe units, a separate `NerdQAxePlus` dashboard for multichip units, and a separate `NerdAxe` dashboard.
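
Sorting a fleet into fork families can be automated from each unit's reported build string. The grouping below is a naive substring heuristic over whatever firmware identifier your units report; adjust the match strings to your actual fleet.

```python
def split_by_fork(peers):
    """peers: {ip: firmware_build_string}. Group units into one swarm per
    fork family so schema drift never crosses a single dashboard."""
    groups = {"stock-axeos": [], "nerdqaxeplus": [], "nerdaxe": []}
    for ip, fw in peers.items():
        fw_l = fw.lower()
        if "nerdqaxe" in fw_l:          # check the longer name first
            groups["nerdqaxeplus"].append(ip)
        elif "nerdaxe" in fw_l:
            groups["nerdaxe"].append(ip)
        else:
            groups["stock-axeos"].append(ip)
    return groups
```

Each resulting group gets its own master dashboard and its own static config from the previous step.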

15

Capture full HTTP traces during a swarm failure for upstream bug reports. Browser DevTools → Network → preserve log → reload. Save the HAR file. Open a serial console on the master simultaneously and capture boot + swarm-load output. Open a GitHub issue at `bitaxeorg/ESP-Miner` with: HAR + serial log + firmware version on every peer + your network topology. Existing issues #1658 and #1617 are the templates — match that level of detail and the maintainers can fix the regression.

16

Stop DIY (Tier 4) when any of the following applies:

  • A specific peer won't accept any firmware version cleanly (flash succeeds, peer reboots, peer's dashboard shows `asic_init_fail` or stays in AP-config)
  • You hot-air reflowed a peer's ESP32-S3 and made it worse
  • The master itself bricks during firmware migration and UART recovery also fails
  • You see physical damage on any peer (scorching, cracked epoxy, swollen caps, burnt smell)
  • USB-C pads on a peer have torn loose past hand-reflow

Ship the affected unit(s) to D-Central: we pioneered the Bitaxe accessory ecosystem, built the original Bitaxe Mesh Stand, designed the first Bitaxe and Bitaxe Hex heatsinks, and stock ESP32-S3 modules, 16 MB SPI flash chips, and USB-C connectors.

17

D-Central bench process: full diagnostic on bench fixtures with known-good 3.3V UART harness, clean USB-C tester, separate VLAN for swarm reproduction. We replicate your fleet topology if you send a description, then test the failing peer in isolation and in a known-good 3-unit reference swarm. If the unit's hardware is fine, we re-flash factory and confirm swarm aggregation against our reference master. If hardware is the problem (failing flash chip, dead ESP32-S3, lifted USB-C pads), chip-level repair on the same bench. Turnaround 3-7 business days.

18

Ship safely. ESD bag (anti-static), foam cradle inside a rigid box. Note inside describing the failure: which AxeOS versions were involved (before / after), which peers were healthy vs broken, what the swarm tab showed before/after the update, your network topology (subnets, VLANs, mDNS reflector status). If you have a HAR capture or serial log, include them — saves an hour of bench diagnostic time. Canada Post small-parcel or UPS Ground; we receive at D-Central HQ in Quebec and crack open within 48 hours.

When to Seek Professional Repair

If the steps above do not resolve the issue, or if you are not comfortable performing these repairs yourself, professional service is recommended. Attempting advanced repairs without proper equipment can cause further damage.

Still Having Issues?

Our team of Bitcoin Mining Hackers has been repairing ASIC miners since 2016. We have seen it all and fixed it all. Get a professional diagnosis.