blhoward2 closed this issue 6 months ago
I'm able to successfully include and exclude with None security with both the 800-series and 500-series devices I've tried.
The 800-series device didn't want to SmartStart, but it's a beta device and I've had problems with it SmartStarting before, so not sure if it's the stick or the device.
Heal and reinterview seem fine with the same test devices as above. As do health checks back to the controller.
Sends and gets of Multilevel Switch v4 and v2 values work. Same with Color Switch v1, Config v1, and Config v2. Getting Meter v2 values seems to work as well.
In the UI, the RF Region defaulted to USA LR, but things seemed to be working in normal mesh mode. A change of that setting to USA didn't change anything functionally that I can see.
I just excluded my 800-series device and it reincluded with SmartStart S2 successfully.
This is for 800 series controllers, not devices. Devices are backwards compatible.
This is for 800 series controllers, not devices. Devices are backwards compatible.
Yes, correct. I'm testing with the 800-series stick I just got. It just happens one of the devices on my test/dev network is also 800-series.
Got it, thanks!
Also, to back up what you already have up there, I tried restoring an NVM backup from my previous 700-series stick to my 800-series stick and got the following error:
The target NVM has an unsupported format, cannot restore 700-series NVM onto it! (ZW0280) restoreNVM undefined
Does backup from the 800 work? If so, can you send a copy of a backup with a device included to info@zwave-js.io?
There was no error, but the file produced was 0 bytes.
$ ll -tr ~/Downloads/NVM_20221216161653.bin
-rw-r--r--. 1 kris kris 0 Dec 16 10:16 /home/kris/Downloads/NVM_20221216161653.bin
2022-12-16 16:16:52.419 INFO Z-WAVE: Calling api backupNVMRaw with args: [ [length]: 0 ]
2022-12-16T16:16:52.419Z CNTRLR Backing up NVM...
2022-12-16T16:16:52.419Z CNTRLR Turning RF off...
2022-12-16T16:16:52.424Z SERIAL » 0x0104001000eb (6 bytes)
2022-12-16T16:16:52.424Z DRIVER » [REQ] [SetRFReceiveMode]
enabled: false
2022-12-16T16:16:52.427Z SERIAL « [ACK] (0x06)
2022-12-16T16:16:52.429Z SERIAL « 0x0104011001eb (6 bytes)
2022-12-16T16:16:52.429Z SERIAL » [ACK] (0x06)
2022-12-16T16:16:52.430Z DRIVER « [RES] [SetRFReceiveMode]
success: true
2022-12-16T16:16:52.436Z SERIAL » 0x0104002e00d5 (6 bytes)
2022-12-16T16:16:52.436Z DRIVER » [REQ] [NVMOperations]
command: Open
2022-12-16T16:16:52.439Z SERIAL « [ACK] (0x06)
2022-12-16T16:16:52.443Z SERIAL « 0x0107012e00000000d7 (9 bytes)
2022-12-16T16:16:52.443Z SERIAL » [ACK] (0x06)
2022-12-16T16:16:52.444Z DRIVER « [RES] [NVMOperations]
status: OK
address offset / NVM size: 0x00
2022-12-16T16:16:52.451Z SERIAL » 0x0104002e03d6 (6 bytes)
2022-12-16T16:16:52.451Z DRIVER » [REQ] [NVMOperations]
command: Close
2022-12-16T16:16:52.453Z SERIAL « [ACK] (0x06)
2022-12-16T16:16:52.455Z SERIAL « 0x0107012e00000000d7 (9 bytes)
2022-12-16T16:16:52.455Z SERIAL » [ACK] (0x06)
2022-12-16T16:16:52.456Z DRIVER « [RES] [NVMOperations]
status: OK
address offset / NVM size: 0x00
2022-12-16T16:16:52.458Z CNTRLR Performing soft reset...
2022-12-16T16:16:52.462Z SERIAL » 0x01030008f4 (5 bytes)
2022-12-16T16:16:52.463Z DRIVER » [REQ] [SoftReset]
2022-12-16T16:16:52.465Z SERIAL « [ACK] (0x06)
2022-12-16T16:16:52.468Z CNTRLR Waiting for the controller to reconnect...
2022-12-16T16:16:53.682Z SERIAL « 0x010a000a03000102010000fe (12 bytes)
2022-12-16T16:16:53.683Z SERIAL » [ACK] (0x06)
2022-12-16T16:16:53.684Z DRIVER « [REQ] [SerialAPIStarted]
wake up reason: WatchdogReset
watchdog enabled: false
generic device class: 0x02
specific device class: 0x01
always listening: false
supports Long Range: false
2022-12-16T16:16:53.685Z CNTRLR reconnected and restarted
2022-12-16T16:16:53.685Z CNTRLR Enabling Smart Start listening mode...
2022-12-16T16:16:53.689Z CNTRLR NVM backup completed
2022-12-16T16:16:53.689Z CNTRLR Turning RF on...
2022-12-16T16:16:53.691Z SERIAL » 0x0105004a4900f9 (7 bytes)
2022-12-16T16:16:53.692Z DRIVER » [REQ] [AddNodeToNetwork]
action: Enable Smart Start listening mode
2022-12-16T16:16:53.694Z SERIAL « [ACK] (0x06)
2022-12-16T16:16:53.698Z CNTRLR Smart Start listening mode enabled
2022-12-16T16:16:53.699Z SERIAL » 0x0104001001ea (6 bytes)
2022-12-16T16:16:53.700Z DRIVER » [REQ] [SetRFReceiveMode]
enabled: true
2022-12-16T16:16:53.702Z SERIAL « [ACK] (0x06)
2022-12-16T16:16:53.703Z SERIAL « 0x0104011000ea (6 bytes)
2022-12-16T16:16:53.703Z SERIAL » [ACK] (0x06)
2022-12-16T16:16:53.704Z DRIVER « [RES] [SetRFReceiveMode]
success: false
2022-12-16T16:16:53.706Z CNTRLR The controller response indicated failure after 1/3 attempts. Scheduling next
try in 100 ms.
2022-12-16T16:16:53.806Z SERIAL » 0x0104001001ea (6 bytes)
2022-12-16T16:16:53.807Z DRIVER » [REQ] [SetRFReceiveMode]
enabled: true
2022-12-16T16:16:53.809Z SERIAL « [ACK] (0x06)
2022-12-16T16:16:53.810Z SERIAL « 0x0104011001eb (6 bytes)
2022-12-16T16:16:53.810Z SERIAL » [ACK] (0x06)
2022-12-16T16:16:53.811Z DRIVER « [RES] [SetRFReceiveMode]
success: true
2022-12-16 16:16:53.815 INFO Z-WAVE: Success zwave api call backupNVMRaw { data: <Buffer >, fileName: 'NVM_20221216161653' }
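Incidentally, the NVMOperations Open response above reports "address offset / NVM size: 0x00", which is consistent with the 0-byte file: the controller itself claims there is nothing to read. The serial frames in these logs can also be sanity-checked by hand; as I understand the Z-Wave Serial API framing, the last byte is an LRC checksum (0xFF XORed with every byte between the SOF and the checksum). A small sketch using two frames from the log above:

```python
def zwave_lrc(frame_hex: str) -> bool:
    """Verify the LRC checksum of a Z-Wave Serial API frame.

    The checksum is 0xFF XORed with every byte between the
    SOF (0x01) and the trailing checksum byte itself.
    """
    data = bytes.fromhex(frame_hex.removeprefix("0x"))
    lrc = 0xFF
    for b in data[1:-1]:          # skip SOF and the trailing checksum
        lrc ^= b
    return lrc == data[-1]

# Frames taken verbatim from the log above:
assert zwave_lrc("0x0104002e00d5")        # NVMOperations Open request
assert zwave_lrc("0x0107012e00000000d7")  # response: status OK, NVM size 0x00
```

Both checksums verify, so the empty backup comes from the reported NVM size, not from serial corruption.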
Ok, thanks. Can you confirm your fw version? Can you post a log of the restore attempt?
firmware version: 1.1
manufacturer ID: 0x027a
product type: 0x04
product ID: 0x0610
2022-12-16 16:26:51.282 INFO Z-WAVE: Calling api restoreNVM with args: [
<Buffer 01 00 9a b2 01 00 00 d0 fe ff ff 0f ff ff ff ff ff ff ff 5f 0a 33 00 a8 00 00 00 ff f3 33 00 88 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ... 49102 more bytes>,
[length]: 1
]
2022-12-16T16:26:51.283Z CNTRLR Turning RF off...
2022-12-16T16:26:51.290Z SERIAL » 0x0104001000eb (6 bytes)
2022-12-16T16:26:51.291Z DRIVER » [REQ] [SetRFReceiveMode]
enabled: false
2022-12-16T16:26:51.295Z SERIAL « [ACK] (0x06)
2022-12-16T16:26:51.296Z SERIAL « 0x0104011001eb (6 bytes)
2022-12-16T16:26:51.297Z SERIAL » [ACK] (0x06)
2022-12-16T16:26:51.297Z DRIVER « [RES] [SetRFReceiveMode]
success: true
2022-12-16T16:26:51.301Z CNTRLR Converting NVM to target format...
2022-12-16T16:26:51.307Z SERIAL » 0x0104002e00d5 (6 bytes)
2022-12-16T16:26:51.307Z DRIVER » [REQ] [NVMOperations]
command: Open
2022-12-16T16:26:51.310Z SERIAL « [ACK] (0x06)
2022-12-16T16:26:51.313Z SERIAL « 0x0107012e00000000d7 (9 bytes)
2022-12-16T16:26:51.313Z SERIAL » [ACK] (0x06)
2022-12-16T16:26:51.314Z DRIVER « [RES] [NVMOperations]
status: OK
address offset / NVM size: 0x00
2022-12-16T16:26:51.323Z SERIAL » 0x0104002e03d6 (6 bytes)
2022-12-16T16:26:51.323Z DRIVER » [REQ] [NVMOperations]
command: Close
2022-12-16T16:26:51.326Z SERIAL « [ACK] (0x06)
2022-12-16T16:26:51.327Z SERIAL « 0x0107012e00000000d7 (9 bytes)
2022-12-16T16:26:51.327Z SERIAL » [ACK] (0x06)
2022-12-16T16:26:51.328Z DRIVER « [RES] [NVMOperations]
status: OK
address offset / NVM size: 0x00
2022-12-16T16:26:51.344Z CNTRLR Turning RF on...
2022-12-16T16:26:51.348Z SERIAL » 0x0104001001ea (6 bytes)
2022-12-16T16:26:51.349Z DRIVER » [REQ] [SetRFReceiveMode]
enabled: true
2022-12-16T16:26:51.351Z SERIAL « [ACK] (0x06)
2022-12-16T16:26:51.354Z SERIAL « 0x0104011001eb (6 bytes)
2022-12-16T16:26:51.354Z SERIAL » [ACK] (0x06)
2022-12-16T16:26:51.355Z DRIVER « [RES] [SetRFReceiveMode]
success: true
2022-12-16 16:26:51.358 INFO Z-WAVE: The target NVM has an unsupported format, cannot restore 700-series NVM onto it! (ZW0280) restoreNVM undefined
I have a SiLabs 800 dev kit. I had to upgrade to firmware 7.19.0 for NVM backup to work, otherwise I was seeing the same 0-byte file. 7.19.0 is a "Pre-Certified GA", which is more like a beta. I don't see anything obvious in the release notes to indicate this was fixed, but it made a difference.
Here's a backup of the 800 after a factory reset. I don't know if it actually produced a valid file, but the process was successful and the file size was non-zero (it's roughly the same size as a 700 backup).
Here's my version info:
2022-12-16T17:38:29.999Z CNTRLR received API capabilities:
firmware version: 7.19
manufacturer ID: 0x00
product type: 0x04
product ID: 0x04
2022-12-16T17:38:30.479Z CNTRLR received protocol version info:
protocol type: Z-Wave
protocol version: 7.19.0
appl. framework build no.: 108
git commit hash: 30303030303030303030303030303030
The SiLabs dev kit and the Zooz stick are different kinds of modules (ZGM230S SiP Module vs. ZG23 SoC), akin to Aeotec Z-Stick 7 vs. Zooz 700. Not sure if that makes any difference.
It appears Zooz is using a custom controller firmware now?
firmware version: 1.1
manufacturer ID: 0x027a
product type: 0x04
product ID: 0x0610
In that case, I would have to assume you should not use the generic SiLabs firmware files like with the 700.
The restore from a 700 still did not work; however, the error is different. It looks like Z-Wave JS doesn't recognize the format (assuming the backup was actually valid):
Did not find a matching NVM 500 parser implementation! Make sure that the NVM data belongs to a controller with Z-Wave SDK 6.61 or higher. (ZW0280)
I also noticed changing the region in Z-Wave JS is failing. I tried to switch from US LR to US:
2022-12-16T17:55:27.091Z SERIAL » 0x0105000b4001b0 (7 bytes)
2022-12-16T17:55:27.092Z DRIVER » [REQ] [SerialAPISetup]
command: SetRFRegion
region: USA
2022-12-16T17:55:27.095Z SERIAL « [ACK] (0x06)
2022-12-16T17:55:27.097Z SERIAL « 0x0105010b4000b0 (7 bytes)
2022-12-16T17:55:27.097Z SERIAL » [ACK] (0x06)
2022-12-16T17:55:27.098Z DRIVER « [RES] [SerialAPISetup]
command: SetRFRegion
success: false
2022-12-16T17:55:27.099Z CNTRLR The controller response indicated failure after 1/3 attempts. Scheduling next
try in 100 ms.
2022-12-16T17:55:27.200Z SERIAL » 0x0105000b4001b0 (7 bytes)
2022-12-16T17:55:27.200Z DRIVER » [REQ] [SerialAPISetup]
command: SetRFRegion
region: USA
2022-12-16T17:55:27.206Z SERIAL « [ACK] (0x06)
2022-12-16T17:55:27.207Z SERIAL « 0x0105010b4000b0 (7 bytes)
2022-12-16T17:55:27.207Z SERIAL » [ACK] (0x06)
2022-12-16T17:55:27.208Z DRIVER « [RES] [SerialAPISetup]
command: SetRFRegion
success: false
2022-12-16T17:55:27.209Z CNTRLR The controller response indicated failure after 2/3 attempts. Scheduling next
try in 1100 ms.
2022-12-16T17:55:28.309Z SERIAL » 0x0105000b4001b0 (7 bytes)
2022-12-16T17:55:28.310Z DRIVER » [REQ] [SerialAPISetup]
command: SetRFRegion
region: USA
2022-12-16T17:55:28.315Z SERIAL « [ACK] (0x06)
2022-12-16T17:55:28.316Z SERIAL « 0x0105010b4000b0 (7 bytes)
2022-12-16T17:55:28.316Z SERIAL » [ACK] (0x06)
2022-12-16T17:55:28.317Z DRIVER « [RES] [SerialAPISetup]
command: SetRFRegion
success: false
2022-12-16T17:55:28.330Z SERIAL » 0x0104000b20d0 (6 bytes)
2022-12-16T17:55:28.331Z DRIVER » [REQ] [SerialAPISetup]
command: GetRFRegion
payload: 0x20
2022-12-16T17:55:28.335Z SERIAL « [ACK] (0x06)
2022-12-16T17:55:28.337Z SERIAL « 0x0105010b20fe2e (7 bytes)
2022-12-16T17:55:28.338Z SERIAL » [ACK] (0x06)
2022-12-16T17:55:28.338Z DRIVER « [RES] [SerialAPISetup]
command: GetRFRegion
region: Unknown
I had to restart Z-Wave JS as Get Region continued to show Unknown (perhaps a soft reset would have sufficed).
2022-12-16T17:57:09.478Z SERIAL » 0x0104000b20d0 (6 bytes)
2022-12-16T17:57:09.478Z DRIVER » [REQ] [SerialAPISetup]
command: GetRFRegion
payload: 0x20
2022-12-16T17:57:09.487Z SERIAL « [ACK] (0x06)
2022-12-16T17:57:09.487Z SERIAL « 0x0105010b2009d9 (7 bytes)
2022-12-16T17:57:09.487Z SERIAL » [ACK] (0x06)
2022-12-16T17:57:09.488Z DRIVER « [RES] [SerialAPISetup]
command: GetRFRegion
region: USA (Long Range)
2022-12-16T17:57:09.789Z DRIVER No configuration update available...
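For what it's worth, the region byte can be read straight out of the GetRFRegion response frames above. A hedged sketch — the numeric values below are my understanding of the RFRegion encoding Z-Wave JS uses, and only the entries relevant to this log are included:

```python
# Partial RF-region table (assumed from the Z-Wave JS RFRegion enum;
# only the values relevant to this thread are listed):
RF_REGIONS = {
    0x00: "Europe",
    0x01: "USA",
    0x09: "USA (Long Range)",
    0x20: "Japan",
    0xFE: "Unknown",
}

def decode_get_rf_region(frame_hex: str) -> str:
    """Extract the region byte from a GetRFRegion response frame.

    Frame layout: SOF, length, type, funcID (0x0b = SerialAPISetup),
    command (0x20 = GetRFRegion), region byte, checksum.
    """
    data = bytes.fromhex(frame_hex.removeprefix("0x"))
    return RF_REGIONS.get(data[5], f"0x{data[5]:02x}")

# The two responses from the logs above:
assert decode_get_rf_region("0x0105010b20fe2e") == "Unknown"
assert decode_get_rf_region("0x0105010b2009d9") == "USA (Long Range)"
```

This matches the log: the failed SetRFRegion left the controller reporting 0xFE (Unknown) until a restart, after which it read back 0x09 (USA Long Range).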
From the Zooz stick:
2022-12-16T17:26:05.069Z CNTRLR received protocol version info:
protocol type: Z-Wave
protocol version: 7.18.1
appl. framework build no.: 273
Notably different from the SiLabs 800 dev kit.
Notably different from the SiLabs 800 dev kit.
Well, that would be because I update it every Gecko release. The latest Pre-Certified came out yesterday. I don't remember what it arrived with, probably < 7.18. The Zooz sticks may have been manufactured prior to 7.18.2 (the last relevant GA release) and shipped that way. 🤷🏻‍♂️
Also, I would advise against using PC Controller to back up and restore from a 700. It bricked my 800 and I had to recover it using some of the debug tools. 😅
I also have a Zooz 800 USB stick, so posting here to follow along as the thread is updated.
I have a working Zooz 800, though of course not all functions work. Watching.
Similar to the above two: Zooz ZST39 (800 stick), generally working, but a few nodes in my 54-device network (not always the same ones each time) seem to go dead every day and need to be pinged. It also sometimes gets stuck in "Exclude failed" mode when I'm trying to add or remove devices, requiring a VM reboot. Not sure if it's the stick or the network, as this is my first-time experience with both HA/Z-Wave JS and the 800 stick. Watching the thread for knowledge updates. FW 1.1, SDK 7.18.1.
@ErikDOlson
seem to go dead every day and need to be pinged
This occurs on 700-series controllers as well. I have a 115-node 700-series Z-Wave network, all devices connect directly to the controller, and this occurs at least once a day if not more. I have had to set up an automation to ping the dead nodes so they don't stay dead longer than 30 minutes at a time.
This occurs on 700 series controllers as well
It hasn't been clear whether this was a firmware issue or a Z-Wave JS issue. I had read some posts indicating that the firmware updates solved the problem, but then again I just read another (recent) set saying that firmware updates did nothing. I have also set up HA to automatically ping the dead nodes.
I have 103 devices on my mesh (74 repeater, 29 battery), and I have only seen nodes go dead a couple of times in the past year. Certainly not daily, weekly, or often enough that I set up a scheduled ping. So I don't know; it could be controller-, device-, or mesh-dependent? It's certainly not a universal thing on Z-Wave JS, though.
I have 55 devices on my network. I have tested on 500-, 700-, and 800-series controllers. On the UZB1 (500), I did not see dead nodes while running it for several years. On the 700 (Z-Stick 7, SDK 7.18.1) and 800 (ZVidar, SDK 7.18.3), I have dead nodes daily.
Screenshot from yesterday:
Odd. I currently use a uzb7 on US frequency, in case it matters. No dead nodes.
Anyway, that is probably a better discussion for a different thread, so we don't derail this one.
How do you chart dead nodes and how do you instruct HASS to automatically ping them?
Let's please move this discussion to its own issue or a discussion. This issue is not for individual troubleshooting or potential issues larger than the 800 series controllers.
What is the best way to move from a 700 to a new 800 stick? NVM backup/restore or Controller Shift?
What is the best way to move from a 700 to a new 800 stick? NVM backup/restore or Controller Shift?
From my past experience with 700 series I wouldn't touch 800 with a stick. I would probably suggest 500 series still for a couple more months just to be on the safe side.
To answer that question anyway: the only option right now is a controller shift using other software. NVM access on the 800 series doesn't currently work, so that migration process is not an option.
Upgraded from Zooz 700 to 800 stick by including the 800, doing controller shift and excluding the 700. Overall the network seems fine and I feel like there are fewer route changes during health checks.
However, two device types (Zooz ZEN30 and ZEN25) are stuck with lifeline associations to Node 1, which no longer exists (the controller is Node 50 now). This prevents events from the devices from being recognized by HA/Z-Wave JS.
Let's see what Zooz support says; it feels like Node 1 is just hardcoded in the firmware. Wondering if I can get the new controller to have ID 1 by switching back to the 700 and including/excluding the 800 until the ID reaches 232, and then once more...
AFAIK this bug has been confirmed for the zen25 already. 30 is new to me
AFAIK this bug has been confirmed for the zen25 already. 30 is new to me
It looks like the devices running SDK: v6.71.3 are affected. I have several on my network: ZEN30 (hw v2), ZEN25, ZEN16
I was able to assign the new controller ID 1 by creating a lot of Virtual Device entries up to ID 232. Then I included the controller and it became ID 1. After switching the controller, the virtual devices became real, so I ran Remove All Failed Nodes in the Z-Wave JS UI overnight. In hindsight, I should have removed all virtual devices before adding the controller.
So, a way to migrate controllers while keeping Node ID 1, without NVM backup/restore:
1. Create Virtual Device entries until the node ID counter reaches 232.
2. Remove the virtual devices (better done now than after the fact).
3. Include the new controller; it gets assigned ID 1.
4. Clean up any leftover failed nodes with Remove All Failed Nodes in the Z-Wave JS UI.
Am I reading things right that 1) I'd be better off not switching to the Zooz 800 controller yet and 2) that later there should be a way to migrate all my devices from the Zooz 700 to the 800? Asking so that I can make sure adding a bunch of new devices to the 700 isn't going to cause me a ton of extra work later.
@genebean , if you're starting a new Z-Wave network, or your existing network is very small, I see no reason not to use the 800-series controller. However, if you've got a big, existing network already, then moving to the 800-series controller is going to be a pain until NVM backup/conversion/restore is supported.
Eh, I'd still consider the 800 series experimental. The major bugs with the 700 series weren't apparent until after many months of use by people with larger networks.
Eh, I'd still consider the 800 series experimental. The major bugs with the 700 series weren't apparent until after many months of use by people with larger networks.
What’s considered large? I’ve been running 800 since I started with HA in the past few months and currently have 40ish or so devices.
That's not very large. Most 700 issues were with networks >100, but even then it was only some networks. It required a fair number of devices sending reports at once for it to be fairly noticeable.
I do think a major difference between the 500-to-700 change and the 700-to-800 change is that there's no protocol/sdk change going from 700 to 800. I think that makes it much safer.
@genebean , if you're starting a new Z-Wave network, or your existing network is very small, I see no reason not to use the 800-series controller. However, if you've got a big, existing network already, then moving to the 800-series controller is going to be a pain until NVM backup/conversion/restore is supported.
I've only got 5 nodes besides my HE controller, but I'm about to add 34 new ones. Are you suggesting just wiping out the existing setup and starting fresh? No problem if so, just making sure.
FWIW, I would advise not going with the 800 series at this point. At least, my experience has been much worse than with my old Aeotec stick (500?). I get random nodes dropping whenever I send commands to them. Not enough to make it unusable, but annoying (scheduled things like turning on my porch light after sunset do not work reliably). I also had to turn off a number of features, such as energy usage (which I was using quite a bit) and sending back state changes (e.g. when someone flips a light switch on manually). Both of those worked well on the Aeotec.
Not meant to be a dig on Zooz (they say clearly the 800 series is for early adopters / experts) or 800 or anything. Just pointing out it might be bumpy.
Maybe it's something with my setup, since I'm currently using the Zooz 800 GPIO module on a Pi4. I have an 800 USB stick to try but am waiting on the NVM backup issue, though I re-read the comments above and it looks like I might be able to do it if I'm willing to risk upgrading the firmware. I'm worried I might brick it, though.
+1 to worse experience than 500 series... I upgraded from a HomeSeer SmartStick+, excluded and re-included all nodes to migrate to the Zooz 800 (and moved from HomeSeer -> HA in the process). I regularly get dropped/dead nodes :( maybe ~10 per week (~100 nodes total), and noticed some missed automations. I now have a HA automation to ping dead nodes, which helps, but obviously not totally reliable and not something I want long term. Hoping it improves over time with newer firmware...
BTW, I upgraded the firmware using Simplicity Studio. I initially thought it helped with dropped nodes, but that may have been wishful thinking: FW: v1.2, SDK: v7.18.3
Sounds like a similar experience to the 700 series with large networks
@ErikDOlson
seem to go dead every day and need to be pinged
This occurs on 700-series controllers as well. I have a 115-node 700-series Z-Wave network, all devices connect directly to the controller, and this occurs at least once a day if not more. I have had to set up an automation to ping the dead nodes so they don't stay dead longer than 30 minutes at a time.
How did you do this?
I have had to setup automation to ping the dead nodes so they don't stay dead longer than 30 minutes at a time.
How did you do this?
That wasn't me with the 115 nodes -- I have about 55 nodes on an 800-series network -- but we're probably doing similar scripts. My reference is here: https://community.home-assistant.io/t/automate-zwavejs-ping-dead-nodes/374307
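For anyone else following along, the linked post boils down to a time-triggered HA automation that calls the `zwave_js.ping` service for entities on dead nodes. A rough sketch with placeholder entity IDs (note that `zwave_js.ping` was the service name at the time; newer HA releases expose per-node ping button entities instead, and the node status sensors used in the condition are disabled by default):

```yaml
# Sketch only -- entity IDs are placeholders for your own devices.
alias: Ping dead Z-Wave nodes
trigger:
  - platform: time_pattern
    minutes: "/30"                              # retry every 30 minutes
condition:
  - condition: state
    entity_id: sensor.porch_light_node_status   # node status sensor (placeholder)
    state: "dead"
action:
  - service: zwave_js.ping
    target:
      entity_id: switch.porch_light             # any entity on the dead node
mode: single
```

The linked community thread generalizes this to ping every dead node rather than a single hardcoded one.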
As per Known Issues in the release notes from the latest GA version (7.19.1):
"Not possible to migrated NVM3 files from a 700 based system to a 800 system. Especially important for gateways when replacing a 700 with a 800."
Hope they fix this in a future release...
As per Known Issues in the release notes from the latest GA version (7.19.1):
"Not possible to migrated NVM3 files from a 700 based system to a 800 system. Especially important for gateways when replacing a 700 with a 800."
Hope they fix this in a future release...
It was fixed in 7.19.2. https://www.silabs.com/documents/public/release-notes/SRN14910-7.19.2.0.pdf
Lol, they literally released this the day I posted! Coolio, I need to post these things more often...
Now we just need a brave soul to test this I guess...
I will later
Hm, the Gecko SDK repo is still on 7.19.1. Where did you find those release notes?
It’s now showing on GitHub.
Does 7.19.2 mean migrations are now a thing?
Not sure if this is 800-series related, but I updated to the latest version of Z-Wave JS and now I get tons of "Value added" logs every second from all types of devices, and it seems to be trashing my network. All types of devices are going dead randomly, then coming back. I double-checked the configs for these devices and nothing has changed as far as reporting frequency. Not sure where to go from here.
@ljmerza I have not switched to my 800-series stick (waiting on Zooz to release 7.19.2 so I can do NVM backup/restore from my ZST10-700), but something has been happening lately on my network where devices seem to disappear and reappear, losing their settings in the process. I have several ZEN71/76/73 that control either lights or fans. For the devices that show up as device type "switch", I change the device type in HA to light or fan (this creates a virtual switch entity within the same device and hides the actual switch entity, via "change device type of a switch"). In the last several days none of these stay anymore: all of them disappeared, and several of them I've put back multiple times now, but I wake up the next day and the virtual switch is gone and the real switch is back. This has worked fine for over a year, so my best guess is that this is a Z-Wave issue, not HA, but HA is seeing it as a "new device" each time.
Update (2024-04-02)
800 series controllers are fully supported as of https://github.com/zwave-js/node-zwave-js/releases/tag/v12.4.4.
NVM Backup/Restore requires a firmware based on Z-Wave SDK 7.19.0 or higher.
Experimental Long Range support is available in https://github.com/zwave-js/node-zwave-js/releases/tag/v12.5.0, which is being released now