
ICERIVER_BRICKED Critical

How to Recover a Bricked IceRiver After Failed Firmware Upgrade

ICERIVER_BRICKED — IceRiver KS-series unit unresponsive after firmware corruption: no front-panel LEDs cycling correctly, no DHCP lease, no port 80 web UI, often stuck at IceRiver splash or 'Initializing...'. Stock OTA cannot recover; requires factory-reset retry, SD-card recovery boot, UART + U-Boot eMMC reflash, or eMMC chip-level bench work depending on the depth of corruption.

Critical — Immediate action required

Affected Models: All IceRiver KS-series Kaspa miners — KS0, KS0 Pro, KS0 Ultra, KS1, KS2, KS3, KS3L, KS3M, KS5, KS5L, KS5M

Symptoms

  • Power-on cycle: PSU fan spins but no front-panel LEDs (`D1`-`D4`) light at all after 60 seconds
  • LEDs come on but stay frozen on a single state past the normal 30-90 second boot window
  • Miner accepts a DHCP lease and ping responds, but TCP port `80` (web UI) refuses connection across multiple reboots
  • Web UI loads briefly, then returns `404 Not Found` or `503 Service Unavailable` on every reboot
  • Miner stuck on the IceRiver splash screen / 'Initializing…' indefinitely (> 5 min)
  • Brick state began immediately after an OTA firmware upgrade — interrupted Wi-Fi, browser timeout, or power blip mid-flash
  • Brick state began after flashing third-party firmware (`xyys`, `tswift`, or any `iceriver-oc` GitHub fork)
  • Brick state began after factory reset attempt that never completed cleanly
  • Reset button held `20 s` produces red flashing, but no recovery happens after 5+ minutes
  • Network 'Detect IP' tool from `iceriver.io` finds the MAC address but assigns no IP, or finds nothing at all
  • Chassis fans never ramp past idle even after 5+ minutes (firmware never reached hashboard init)
  • Repeated `800` / `801` / `802` codes in last log before brick — typical 'OTA tried, OTA failed, eMMC half-written' sequence
  • Unit was on stock firmware with auto-update enabled; an unattended OTA killed it overnight
  • PSU and AC plug show no scorching / odour (rules out hardware-damage brick — focus is firmware-level recovery)

Step-by-Step Fix

1

Hard power-cycle for 60 seconds at the breaker. Not a soft reset — full mains kill for a full minute. Roughly 10% of brick tickets resolve at this step alone because the failure was a wedged daemon state surviving soft reboots. Always try this before anything destructive at higher tiers.

2

Hold the reset button for the full `20 seconds` until the red status LED starts flashing, then release. Wait `5 minutes` untouched. Factory reset on KS hardware copies the factory partition to the active partition and reboots. If your brick was config-level rather than rootfs-level this recovers it; if nothing happens within 5 min the active partition was already too corrupt to land the reset.

3

Verify network path before assuming brick. Power on, wait 5 min, check your router DHCP table for the MAC. If a lease is present but port `80` is dead, you have a partial brick — proceed to higher-tier steps. If the MAC is absent, swap Ethernet cable and switch port before condemning the unit; dead ports masquerade as bricks.
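The step-3 checks can be scripted from any Linux bench machine. A minimal sketch, assuming bash with `/dev/tcp` support; `MINER_IP` is a placeholder for whatever address your router's DHCP table shows against the miner's MAC:

```shell
#!/usr/bin/env bash
# Sketch of the step-3 network-path checks. MINER_IP is a placeholder:
# substitute the lease your router's DHCP table shows for the miner's MAC.
MINER_IP="${MINER_IP:-192.168.1.50}"   # hypothetical example address

# Probe a TCP port using bash's built-in /dev/tcp; exit 0 means it accepted.
probe_port() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if ping -c 2 -W 2 "$MINER_IP" >/dev/null 2>&1; then
  if probe_port "$MINER_IP" 80; then
    echo "host up, web UI answering: not a brick"
  else
    echo "host up but port 80 dead: partial brick, continue to higher tiers"
  fi
else
  echo "no ping: swap cable/switch port before condemning the unit"
fi
```

If the script reports "host up but port 80 dead" across several reboots, treat it as the partial-brick signature described above.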

4

Catalogue the brick trigger: mid-OTA, third-party firmware, power event, days/months of stable operation, last known firmware version. This shortens diagnosis time at every higher tier and saves D-Central 30+ minutes per unit if you eventually ship — savings passed back in the repair quote.

5

Try IceRiver's 'Detect IP' tool from `iceriver.io` one more time. Some bricks present as 'no DHCP' but the unit is answering on `192.168.1.1` (factory default fallback) because DHCP failed. Plug directly into a laptop with a static IP on `192.168.1.0/24` and try `http://192.168.1.1`. Long shot, but free.

6

Try SD-card recovery boot (KS0/KS1/KS2/KS3 family). Download the official recovery image from `iceriver.io/firmware-download/` for your exact model AND hardware revision (wrong rev bricks further). Write to microSD with `balenaEtcher`. Power off, insert SD, hold reset, power on while holding reset for `5 s`, release. The bootloader detects the SD and reflashes from it over 3-8 minutes. Don't power-cycle during this.
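Before writing the card, it is worth verifying the downloaded image, since a corrupt download reproduces exactly the failure you are recovering from. A hedged sketch; the filename and checksum are placeholders, and whether a checksum is published for your image depends on the `iceriver.io` download page:

```shell
#!/usr/bin/env bash
# Sketch: verify the recovery image before writing it to microSD.
# IMG and EXPECTED are placeholders; take the real values from the
# iceriver.io/firmware-download/ page for your exact model and revision.
IMG="ks3_recovery.img"
EXPECTED="0000000000000000000000000000000000000000000000000000000000000000"

if [ -f "$IMG" ]; then
  if echo "$EXPECTED  $IMG" | sha256sum -c - >/dev/null 2>&1; then
    echo "checksum OK, safe to write"
  else
    echo "checksum MISMATCH, re-download before flashing"
  fi
  # Writing with dd instead of balenaEtcher (double-check /dev/sdX first):
  # sudo dd if="$IMG" of=/dev/sdX bs=4M conv=fsync status=progress
else
  echo "image not found: download it first"
fi
```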

7

Verify line voltage stability before any flash retry. Multimeter on AC at the wall under load. KS units accept `180-285 V` AC. If line voltage sagged below `190 V` during your previous OTA attempts, that is why the unit bricked — and trying again on the same circuit will brick it again. Move to a stable circuit, ideally UPS-protected, before any flash retry.

8

Try wired-Ethernet OTA from the same subnet. If the unit answers ping but port `80` is dead, the OTA daemon is sometimes alive even when the web UI is dead — some KS firmware exposes an OTA-only endpoint at a separate port. Check IceRiver's firmware-download FAQ for the model-specific OTA-only recovery procedure and try pushing a firmware image directly. Long shot, but works on some firmware revisions.
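Since IceRiver does not publish the alternate port, a quick scan of common management ports tells you whether anything besides the web UI is still listening. A sketch under stated assumptions: the port list is a guess at common web/alt-web/miner-management ports, not a documented IceRiver port map, and `MINER_IP` is a placeholder:

```shell
#!/usr/bin/env bash
# Sketch: look for any listening TCP service on the miner besides port 80.
# Port list is a guess, not a documented IceRiver port map.
scan_host() {
  local host="$1"; shift
  local found=0
  for port in "$@"; do
    if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
      echo "$host:$port open"
      found=1
    fi
  done
  [ "$found" -eq 1 ] || echo "no candidate ports open on $host"
}

scan_host "${MINER_IP:-192.168.1.50}" 80 443 8080 8888 4028
```

Any open port found here is a candidate for the OTA-only procedure in IceRiver's firmware-download FAQ.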

9

Inspect the eMMC chip and PMIC visually. Phillips #2 to open chassis. Locate control board. Photograph any visible damage: scorched components, swollen capacitors, lifted IC corners, PCB discolouration. If anything looks damaged, stop — that's a hardware brick, not a firmware brick, and reflashing won't help. Ship to D-Central.

10

Check reset/boot button continuity with a multimeter. Some 'bricks' are a stuck reset button holding the unit in reset state because the button's springback failed. Set the multimeter to continuity and probe the button's leads with it un-pressed — it should read open. If it reads closed (electrically pressed while mechanically released), replace the button or temporarily short across it to test.

11

Get UART access to the control board. USB-to-TTL adapter (`CH340`, `CP2102`, or `FTDI`, `3.3 V` logic). Locate the model-specific debug header (KS0: 4-pin near Ethernet; KS3/KS3M: 4-pin on long edge; KS5L/KS5M: similar). Connect `GND-GND`, `TX (board) → RX (adapter)`, `RX (board) → TX (adapter)`. **Do not connect VCC** — board has its own power. Open `PuTTY` / `screen` at `115200 8N1`. Power on. U-Boot output should appear within 3 seconds.
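On a Linux bench host the serial-console side of this step is standard; a command sketch, assuming the adapter enumerates as `ttyUSB0` (CH340/CP2102 usually do, FTDI may appear as `ttyUSB0` too):

```text
# Identify the adapter's device node after plugging it in:
dmesg | tail                      # look for ttyUSB0 / ttyACM0
# Open the console at 115200 8N1 (either tool works):
screen /dev/ttyUSB0 115200
# or:
picocom -b 115200 /dev/ttyUSB0
```

If nothing prints within a few seconds of power-on, swap the TX/RX leads first; crossed TX/RX is the most common wiring mistake at this step.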

12

Interrupt U-Boot autoboot and inspect eMMC. Press any key during the 3-second countdown to drop to the U-Boot prompt. `mmc info` should report eMMC detected with capacity (typically 4-8 GB on KS hardware). `mmc part` reads the partition table — compare against IceRiver's published partition map for your model. If `mmc info` fails, the eMMC chip is damaged and reflash will not help — Tier 4.
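The inspection looks roughly like the following console transcript. The `mmc` subcommands are standard U-Boot; exact output formatting varies with the U-Boot build IceRiver ships:

```text
Hit any key to stop autoboot:  0        # press a key here
=> mmc info            # should report the eMMC device and its capacity
=> mmc part            # prints the partition table; compare to IceRiver's map
=> mmc dev 0           # selects the eMMC if more than one mmc device exists
```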

13

Reflash the rootfs partition via U-Boot. With UART link stable: `loadx` (Y-modem) or `tftpboot` (network) the recovery rootfs image into RAM. The exact addresses are model-specific — check IceRiver's firmware-download portal or the `rdugan/iceriver-oc` GitHub for documented partition layouts. Then `mmc write <RAM addr> <partition offset> <size>` to write rootfs to the correct partition. Takes 5-10 min during which UART link MUST stay connected. Power-cycle when complete.
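In outline, the U-Boot session looks like this. Every address, offset, and count below is a placeholder (marked in angle brackets); substitute the documented values for your model before writing anything:

```text
# All offsets/counts are placeholders; take real values from IceRiver's
# partition map. mmc write counts are in 512-byte blocks, hexadecimal.
=> loadx ${loadaddr}                      # then send rootfs.img via Y-modem
# or, with a TFTP server on the bench network:
=> setenv serverip <tftp-server-ip>
=> tftpboot ${loadaddr} rootfs.img
=> mmc write ${loadaddr} <rootfs_start_block> <block_count>
=> reset
```

A write to the wrong offset can overwrite the bootloader and deepen the brick, which is why the placeholder values must come from the documented layout, not guesswork.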

14

Reflash the bootloader (U-Boot SPL) if the rootfs reflash didn't fix it. Some bricks corrupt the bootloader itself; recovery then means re-flashing U-Boot SPL through the BootROM's serial download protocol, using the tool that matches the SoC (Allwinner = `sunxi-fel`, Rockchip = `rkdeveloptool`, etc.). Identify your KS model's SoC first, then use the matching tool. This is the deepest software-level recovery before chip-level bench work.
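For reference, the two most common tool flows look like this. File names are placeholders; the loader and U-Boot images must match your exact SoC and board:

```text
# Allwinner SoC (FEL mode over USB OTG):
sunxi-fel version                         # confirms the SoC answered in FEL mode
sunxi-fel uboot u-boot-sunxi-with-spl.bin # loads and runs SPL+U-Boot from RAM

# Rockchip SoC (MaskROM mode):
rkdeveloptool ld                          # lists devices in MaskROM/loader mode
rkdeveloptool db <loader.bin>             # pushes the vendor loader
rkdeveloptool wl 0 <u-boot-image>         # writes an image starting at sector 0
```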

15

Verify firmware integrity post-flash. After successful reflash, run the miner for a full hour at nameplate hashrate. Check the miner log for `mmc` errors, eMMC write errors, or I/O retry warnings. A 'successful' reflash that produces I/O errors during normal operation means eMMC has marginal blocks and the brick will return — plan for control-board replacement at Tier 4.
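The log check can be automated once you have exported a log. A minimal sketch; the log path is a placeholder, and the error patterns are common Linux eMMC/block-layer signatures rather than an IceRiver-specific list:

```shell
#!/usr/bin/env bash
# Sketch: scan a saved miner/kernel log for eMMC error signatures that
# predict a returning brick. Log path is a placeholder; export the log
# however your firmware allows (web UI export, scp, UART capture).
check_emmc_health() {
  [ -r "$1" ] || { echo "log not readable: $1"; return 1; }
  if grep -Ei 'mmc[0-9]*: .*(error|timeout)|I/O error|blk_update_request' "$1"; then
    echo "eMMC errors present: expect the brick to return"
  else
    echo "no eMMC error signatures found"
  fi
}

check_emmc_health "${1:-miner.log}"
```

Run it against the log captured after the one-hour burn-in; any hit means planning for control-board work rather than trusting the reflash.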

16

Stop DIY and ship to D-Central when (a) `mmc info` from U-Boot fails (eMMC chip-level damage), (b) UART produces no output at all (BootROM-level fault), (c) reflash succeeds but eMMC errors return within 24 hours, (d) you see scorched components or PMIC damage, or (e) the brick happened during a power event and downstream hardware damage is suspected. Past those points, time and risk of further DIY exceed the repair cost.

17

D-Central bench process: UART access with stable bench power, full eMMC dump and analysis, eMMC chip desolder + external programmer reflash for irrecoverable cases, control-board swap with image transfer when chip replacement is uneconomic, full PMIC continuity testing for power-event bricks, and post-repair 24-hour burn-in at nameplate hashrate before sign-off. Western retail bench — no shipping to China, no Zeus-style trust gap.

18

Ship hashboards or whole unit in anti-static bags, double-boxed with ≥5 cm foam on every side. Include a printed brick history: original firmware version, brick trigger (OTA / third-party FW / power event / unknown), recovery steps already attempted, highest Tier reached. Saves D-Central 30+ minutes per unit and that savings passes back directly in the repair quote.

When to Seek Professional Repair

If the steps above do not resolve the issue, or if you are not comfortable performing these repairs yourself, professional service is recommended. Attempting advanced repairs without proper equipment can cause further damage.


Still Having Issues?

Our team of Bitcoin Mining Hackers has been repairing ASIC miners since 2016. We have seen it all and fixed it all. Get a professional diagnosis.