Posts: 0 · Comments: 74 · Joined: 3 yr. ago

  • Not sure what you mean by this - Nabu Casa already has a fully supported Z-Wave device, the ZWA-2.

  • A strong mesh is the better way to go in my opinion - ensuring you have a mesh of router devices between the coordinator and the end device has worked well to ensure that no matter where a device is, it works. A better antenna may help, but all it takes is a glitch like your 2.4 GHz Wi-Fi moving to overlap with the Zigbee channel and the device drops out.

    I have a TubesZB Zigbee device with an external antenna and I’m not sure I’ll benefit from the ZBT-2, but the 2.4 GHz band is very busy here. I’m tempted to try it and see if it makes any difference. I find my Zigbee network ‘slow’ - sensor updates take 1-2 seconds before HA receives them.

  • Bcache can’t differentiate between data and metadata on the cache drive (it’s block-level caching), so if something happens to a write-back cache device you lose data, and possibly the entire array. Personally, I wouldn’t use bcache (or ZFS caching) without mirrored cache devices, to ensure resiliency of the array. I don’t know if ZFS is smarter - presumably it can be, since it’s in control of the raw disks; I just didn’t want to deal with the ZFS kernel modules.

  • For your second scenario - yes, you can use md under bcache with no issues. There’s more to configure, but once set up it has been solid. I actually use md/RAID1 → LUKS → bcache → btrfs layers for the SSD cache disks, while the data drives just use LUKS → bcache → btrfs. Keep in mind that with bcache, if you lose a cache disk you can’t mount - and of course if you’re doing write-back caching then the array is also lost. With write-through caching you can force disconnect the cache disk and mount the disks.

  • This. If you have any sort of setup - just do a backup and restore. All the configuration, automations, etc. will come across exactly as they were, including your subscription setup.

    I’ve migrated from a Pi to a mini PC, so it works between different platforms too - there I had to reinstall add-ons, but it was still generally an easy migration.

  • I work around this with the Uptime integration, plus conditions in my automations requiring that uptime exceeds whatever threshold I want.

    You could try using not_from in your state trigger, but I’ve had limited success with that working recently. Something like this:

    #…
      - trigger: state
        entity_id:
          - event.inovelli_on_off_switch_config
        not_from:
          - unavailable
          - unknown
    #…
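    The uptime approach can be sketched as an automation condition like this (a hypothetical example - the sensor.uptime entity and the 5-minute threshold are assumptions, adjust to your setup):

    ```yaml
    condition:
      - condition: template
        value_template: >
          # sensor.uptime (from the Uptime integration) holds the last start time;
          # skip the automation for the first 5 minutes after a restart
          {{ (now() - (states('sensor.uptime') | as_datetime)) > timedelta(minutes=5) }}
    ```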
  • There’s your answer: you need an active PoE injector that follows 802.3af. None of the ones you pictured are correct - they’re passive, not active, and in the worst case can damage your device.

    The difference is that an active injector and the device negotiate how much power to provide, whereas passive injectors just whack the device with their rated power. The device shouldn’t work without negotiation (per the spec).

  • Based on what I’ve seen with my use of zram, I don’t think it reserves the total space; instead it consumes whatever is shown in the output of zramctl --output-all. If you’re swapping, then yes, it takes memory from the system (up to the 8 GB disk size), depending on how compressible the swapped content is (e.g. at a 3x ratio, 8 GB / 3 ≈ 2.7 GB). That said, it will take memory from the disk cache if you’re swapping.

    Realistically I think your issue is IO, and there’s not much you can do if your disk cache is being flushed. Switching to zswap might help, as it should spill more into disk if you’re under memory pressure.

  • YouTube blocks it. There are extensions to allow it (like Vinegar) but by default it’s blocked. Brave might work around YouTube’s block in the same way.

  • You can try adding

        continue_on_error: true

    to the scene action so it doesn’t kill the entire automation. Note that if later parts depend on this action then they’ll fail in weird ways. The best thing is to fix the Zigbee network so the device doesn’t drop off, but I know that’s not easy.
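    For context, a minimal sketch of where that key sits in an automation’s action list (the scene and notify entity names here are placeholders, not from the original post):

    ```yaml
    actions:
      - action: scene.turn_on
        target:
          entity_id: scene.movie_time
        continue_on_error: true   # a failed scene call won’t abort the automation
      - action: notify.notify     # still runs even if the scene step failed
        data:
          message: "Scene step finished (or was skipped)"
    ```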

  • I’ve had to hard-reset my controllers (both Z-Wave and Zigbee) a few times now; I haven’t really found a cause, but it’s usually been around times when updates were applied. It almost seemed like the device wasn’t released by the old container and needed a hard disconnect to force it. IIRC the logs just showed a generic “can’t connect to device” error with no sign of what had the device locked. The first time I did some investigation; the few times it’s happened since, I just unplugged and reconnected the USB device, restarted the container, and it worked after that.

    I haven’t had it happen for a while at least.

  • Check with your provider for the SIP server, username, and password, and whether they have a suggested app (even if you don’t want to use it, it means they have some kind of support). It’s probably in their support pages somewhere.

  • I don’t know deCONZ, but ZHA shows RSSI on the device in Home Assistant, and you can see RSSI in the Zigbee2MQTT UI’s device list. I’d assume it’s something similar in deCONZ.

    I’d say if the devices drop off even when close to the controller, then I’d suspect the devices themselves. Do you have any other devices yet, or just the Aqara sensors? It’s possible they work better through a Zigbee router too, so you can try connecting them via one.

  • How is the link strength for the devices? Do they still drop off if you leave them right next to the controller? If you’re just getting started, I’m guessing you don’t have a strong mesh yet with plugged-in devices providing routers to the network.

    My experience is that some manufacturers are better at following the spec and devices work better or worse based on that.

  • Is the reverse proxy an add-on, or did you roll your own? The reason I ask is that proxying HA needs special treatment for WebSockets (wss:// or ws:// scheme). Add-ons should handle it themselves, but I had to do it myself with Apache. I’m not sure if there’s special config needed for nginx too.

  • Did you set up the proxy as a trusted forwarder? That means setting use_x_forwarded_for and trusted_proxies in configuration.yaml.
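    For reference, a minimal configuration.yaml sketch (the 172.30.33.0/24 subnet is just an example - use the address range your proxy actually connects from):

    ```yaml
    http:
      use_x_forwarded_for: true
      trusted_proxies:
        - 172.30.33.0/24   # your reverse proxy’s address or subnet
    ```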

  • Silly question, but does it still work directly without the proxy (like http://homeassistant.local:8123/)? Check the logs under Settings → System → Logs and see if you can find anything relevant. AFAIK the proxy shouldn’t change how calendars get loaded.

  • I use an AcuRite 06002RM temperature and humidity sensor with an rtl_433-compatible receiver plugged into Home Assistant and an rtl2mqtt add-on. It’s indoor/outdoor and has worked well in all sorts of weather. Combined with a sun shade it’s a good solution, I think, and completely local.