
BITAXE_FAILBACK

Bitaxe – Device Stays on Fallback Pool / Won’t Return to Primary

AxeOS / ESP-Miner fails over from the primary to the fallback pool correctly during a primary outage, but never switches back once the primary recovers. The Bitaxe stays parked on the fallback indefinitely until something forcibly restarts the Stratum task, a documented firmware logic gap (GitHub bitaxeorg/ESP-Miner #1618).

Warning — Should be addressed soon

Affected Models: Bitaxe Supra, Bitaxe Ultra, Bitaxe Gamma, Bitaxe GT, Bitaxe Hex (any AxeOS / ESP-Miner v2.x build with a configured fallback pool)

Symptoms

  • AxeOS dashboard's `Pool` field shows the configured fallback hostname when you expect the primary
  • Primary pool is reachable from the same network (`ping`, `nc`) but the Bitaxe ignores it
  • Pool dashboard for the primary shows your worker offline; fallback dashboard shows it online
  • AxeOS Logs tab shows the original switch-to-fallback event hours/days ago with no subsequent primary re-attempts
  • `curl http://<bitaxe-ip>/api/system/info` returns `stratumURL` matching fallback, not primary
  • Hashrate, share submission, and ASIC temperatures all look healthy — only pool selection is wrong
  • Manual reboot via AxeOS UI immediately reconnects to primary and stays there until the next outage
  • Multiple Bitaxes in the same swarm stuck on the fallback simultaneously after a single primary outage
  • Behaviour confirmed in ESP-Miner commit 5156662 and reproduces across the v2.x firmware line
  • No specific error code, log exception, or red banner — silent logic gap, not a thrown error

Step-by-Step Fix

1

Open AxeOS at `http://<bitaxe-ip>` and click Restart (Settings → Restart). Wait 30 seconds. Refresh the dashboard. Confirm the Pool field now shows your configured primary. This is the immediate manual fix — useful when you just noticed and want it on the right pool right now. It also confirms the diagnosis: ESP-Miner re-checks primary first on Stratum-task restart.

2

Audit your fallback pool selection. Stickiness means you may live on the fallback for weeks at a time. Choose a fallback you would be okay being permanently parked on — same mining model as primary (solo-to-solo or pooled-to-pooled). For a solo-mining Bitaxe, fallback should be another solo pool (`solo.ckpool.org` or `public-pool.io`). Mixing solo primary with pooled fallback is a strategic mistake under failback-stuck conditions.

3

Set a calendar reminder to manually verify `stratumURL` weekly until you automate the fix. Check the top of the AxeOS Settings page, or run `curl http://<bitaxe-ip>/api/system/info | jq .stratumURL`. It's a ten-second check that catches drift early, before the variance profile of your mining strategy diverges from intent.
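
If you want that check to be copy-paste simple, here is a minimal sketch, assuming the `stratumURL` field from the API call above and `jq` on the checking host (the IP and pool hostname are placeholders for your own values):

```bash
#!/usr/bin/env bash
# check-pool.sh: warn if a Bitaxe is not on its configured primary.
BITAXE_IP="192.168.1.27"      # placeholder: your Bitaxe
PRIMARY="solo.ckpool.org"     # placeholder: your primary pool host

current=$(curl -sf "http://${BITAXE_IP}/api/system/info" | jq -r .stratumURL)

if [ "$current" = "$PRIMARY" ]; then
  echo "OK: ${BITAXE_IP} is on primary (${current})"
else
  echo "DRIFT: ${BITAXE_IP} is on '${current}', expected '${PRIMARY}'" >&2
  exit 1
fi
```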

4

Use the same BTC address and worker name on both pools. Using a different address per pool will confuse your future self when you check pool dashboards and try to reconcile share counts. One address, one worker, both slots: easier troubleshooting, no payout fragmentation.

5

If you cannot tolerate fallback drift at all and your primary is reliable enough, blank out the fallback fields entirely in AxeOS Settings and save. The Bitaxe will retry primary forever during outages — zero hashrate during downtime, zero risk of wrong-pool drift. This is a strategic call (tolerate downtime vs tolerate drift); pick whichever matches your home-mining philosophy.

6

Add a nightly restart cron job on any always-on device on your network (Raspberry Pi, NAS, Home Assistant box, Synology task scheduler). Recommended: every 24 h at 3-4 AM local. One line: `curl -X POST http://<bitaxe-ip>/api/system/restart`. Each Bitaxe gets one cron entry. Schedule for 4 AM if you're on a Canadian residential ISP — typical IP refresh windows finish by then.
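
On the always-on host, `crontab -e` and one entry per Bitaxe does it. A sketch with a placeholder IP:

```bash
# Restart the Bitaxe at 192.168.1.27 every night at 03:00 local time
0 3 * * * curl -sf -X POST http://192.168.1.27/api/system/restart
```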

7

Verify the cron is actually firing by adding a logging redirect: `curl -sf -X POST http://<bitaxe-ip>/api/system/restart && echo "$(date) restarted <ip>" >> /var/log/bitaxe-failback.log`. Check the log weekly for the first month — silently-not-firing crons are a classic Pi gotcha and you only find them when something else exposes the issue.

8

Stagger restarts across a swarm to avoid fleet-wide hashrate dips. Don't put 5 Bitaxes on `0 3 * * *`; they all reboot simultaneously and you lose the whole fleet's hashrate for 30 seconds. Stagger by 60 seconds instead, one cron entry per Bitaxe offset by one minute, as shown below. The fleet stays >80% online during the cycle.
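
The staggered version might look like this (placeholder IPs, same restart command offset by one minute each):

```bash
0 3 * * * curl -sf -X POST http://192.168.1.27/api/system/restart
1 3 * * * curl -sf -X POST http://192.168.1.28/api/system/restart
2 3 * * * curl -sf -X POST http://192.168.1.29/api/system/restart
```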

9

Add a primary-reachability pre-check before each restart so you don't reboot during a primary outage. `nc -z -w 5 <primary-host> <primary-port> && curl -X POST http://<bitaxe-ip>/api/system/restart`. The restart only fires if the primary is currently reachable — otherwise the cron is a no-op and you don't burn 30 seconds of hashrate just to immediately fall back to fallback again.
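
Combining steps 6, 7, and 9 gives one self-contained cron entry per Bitaxe. A sketch, with placeholder pool host, port, and IP:

```bash
# Restart at 03:00 only if the primary Stratum port answers within 5 s
0 3 * * * nc -z -w 5 solo.ckpool.org 3333 && curl -sf -X POST http://192.168.1.27/api/system/restart && echo "$(date) restarted 192.168.1.27" >> /var/log/bitaxe-failback.log
```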

10

Once you've validated the workflow, tighten the schedule from daily to hourly (`0 * * * *`). Worst-case fallback exposure drops from 24 h to 1 h. The Bitaxe handles 24 reboots per day fine — total downtime is roughly 12 minutes/day per miner, well below the cost of staying on the wrong pool for hours.

11

Deploy the watchtower script (see Mining Hacker Notes for the reference bash). Polls `/api/system/info` every 5 minutes per Bitaxe. If `stratumURL` ≠ configured primary AND primary is reachable via `nc`, fires the restart. Otherwise no-op. Worst-case fallback exposure drops to under 10 minutes per Bitaxe — close to the practical floor without flooding the network with restart calls.
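
The reference script lives in the Mining Hacker Notes; the skeleton below is a minimal sketch of the same logic, assuming `stratumURL` returns the bare pool hostname and that `curl`, `jq`, and `nc` are available on the host (IPs and pool details are placeholders):

```bash
#!/usr/bin/env bash
# bitaxe-watchtower.sh: restart any Bitaxe stuck on fallback while the
# primary is reachable. Targets and poll interval are placeholders.
BITAXES=("192.168.1.27" "192.168.1.28" "192.168.1.29")
PRIMARY_HOST="solo.ckpool.org"
PRIMARY_PORT=3333

while true; do
  for ip in "${BITAXES[@]}"; do
    current=$(curl -sf "http://${ip}/api/system/info" | jq -r .stratumURL)

    # No answer from the API means the miner may be rebooting; skip it.
    [ -z "$current" ] && continue

    if [ "$current" != "$PRIMARY_HOST" ] && nc -z -w 5 "$PRIMARY_HOST" "$PRIMARY_PORT"; then
      echo "$(date) ${ip} on ${current}, primary reachable, firing restart"
      curl -sf -X POST "http://${ip}/api/system/restart"
    fi
  done
  sleep 300  # 5-minute poll, per the step above
done
```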

12

Run the watchtower against your full swarm in parallel from a single host. Loop over each Bitaxe IP, query each independently, decide each independently. A 50-line bash or Python script handles a 20-Bitaxe fleet without issue. Pin to a `systemd` service or a `docker` container so it survives reboots of the host.
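
A minimal systemd unit for the watchtower host (file path, script location, and names are placeholders):

```ini
# /etc/systemd/system/bitaxe-watchtower.service
[Unit]
Description=Bitaxe failback watchtower
After=network-online.target

[Service]
ExecStart=/usr/local/bin/bitaxe-watchtower.sh
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now bitaxe-watchtower` and it survives host reboots.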

13

Log every state change to a file or to a Discord/Telegram webhook. Format: `"Bitaxe at 192.168.1.27 was on fallback solo.ckpool.org, primary pool.local:3333 reachable, fired restart, recovered to primary in 8 seconds."` That log is gold for proving the watchtower is working and for debugging your home network when something else goes sideways.
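
For the Discord route, one `curl` per state change is enough. A sketch, with the webhook URL placeholder coming from your channel's integration settings:

```bash
MSG="Bitaxe 192.168.1.27 was on fallback solo.ckpool.org, primary reachable, fired restart"
curl -sf -H "Content-Type: application/json" \
     -d "{\"content\": \"${MSG}\"}" \
     "https://discord.com/api/webhooks/<id>/<token>"
```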

14

Add a sanity guard against restart loops. If the watchtower has restarted the same Bitaxe more than 3 times in 1 h, stop trying and alert you. Either your primary is flapping (wrong-target restart) or that specific Bitaxe has a different problem. Don't burn 50 reboots per hour silently — the alert is what tells you a deeper issue exists.
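
One way to implement the guard in the watchtower, keeping per-Bitaxe restart timestamps in a state file (the 3-per-hour threshold matches the step above; paths and the function name are mine):

```bash
# Returns non-zero (and alerts) if this IP already hit 3 restarts in 1 h.
guard_and_record() {
  local ip="$1"
  local file="/var/tmp/bitaxe-restarts-${ip}"
  local cutoff=$(( $(date +%s) - 3600 ))

  # Keep only timestamps from the last hour, then count what is left.
  touch "$file"
  awk -v c="$cutoff" '$1 >= c' "$file" > "${file}.new" && mv "${file}.new" "$file"

  if [ "$(wc -l < "$file")" -ge 3 ]; then
    echo "$(date) ${ip}: 3 restarts in the last hour, backing off" >&2
    return 1
  fi
  date +%s >> "$file"   # record this restart attempt
}

# In the watchtower loop, gate the restart on the guard:
# guard_and_record "$ip" && curl -sf -X POST "http://${ip}/api/system/restart"
```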

15

Hook the watchtower output into Home Assistant or Uptime Kuma if you already operate one. HA: `command_line` sensor + automation gives you a UI tile per Bitaxe with current pool, last restart timestamp, and a manual Restart button. Uptime Kuma: HTTP-keyword monitor on `/api/system/info` does similar with less custom code. Both options graduate the workaround from a script to a real piece of home infrastructure.

16

Export a Prometheus metric per Bitaxe (`bitaxe_on_primary{ip="..."} = 0|1`). One Grafana panel for the whole swarm — `0` lights up red the moment a Bitaxe drops to fallback, alerting fires before the next watchtower poll runs. Real fleet telemetry, no guessing. Combine with the watchtower for closed-loop detection-and-recovery.
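
If you already run node_exporter with its textfile collector, the watchtower loop can publish the metric in a few lines. A sketch, assuming the collector directory shown (yours may differ) and the `current` / `PRIMARY_HOST` variables from the watchtower sketch above:

```bash
# 1 if this Bitaxe is on its primary, 0 otherwise.
on_primary=0
[ "$current" = "$PRIMARY_HOST" ] && on_primary=1

# Write atomically so Prometheus never scrapes a half-written file.
metrics="/var/lib/node_exporter/textfile_collector/bitaxe_${ip}.prom"
printf 'bitaxe_on_primary{ip="%s"} %d\n' "$ip" "$on_primary" > "${metrics}.tmp"
mv "${metrics}.tmp" "$metrics"
```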

17

Open a D-Central support ticket at https://d-central.tech/contact/ if a specific Bitaxe in your swarm refuses to bind to the primary even after watchtower-fired restart, while other Bitaxes in the same swarm do. That's a deeper failure (DNS-layer, routing, or hardware-revision-specific Stratum bug) and our Bitaxe team will work through it with you.

18

Track and contribute to ESP-Miner upstream. Subscribe to issue #1618 at github.com/bitaxeorg/ESP-Miner/issues/1618 — when the firmware ships a real failback timer (periodic primary re-resolution every N minutes when on fallback, automatic switch-back if reachable), the watchtower becomes optional. If you're a C developer, pick up the issue and submit a PR — decentralizing the firmware fix is the most Mining Hacker move available on this one.

When to Seek Professional Repair

If the steps above do not resolve the issue, or if you are not comfortable performing these repairs yourself, professional service is recommended. Attempting advanced repairs without proper equipment can cause further damage.

Still Having Issues?

Our team of Bitcoin Mining Hackers has been repairing ASIC miners since 2016. We have seen it all and fixed it all. Get a professional diagnosis.