With this release, we finally got to spend less time building up infrastructure and processes and more time working on meaningful improvements and new features, which was really good to see!
This is also the first release with support for migrating LXD clusters over to Incus. The built-in lxd-to-incus migration tool can now handle most standalone and clustered cases, performing a full initial validation of all settings to avoid surprises.
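For anyone curious what that looks like in practice, a minimal run is sketched below. The tool first inspects both the source LXD and the target Incus and refuses to proceed if anything looks wrong, so the validation happens before any data is touched:

```shell
# Sketch of a typical migration; assumes Incus is already installed
# and running alongside the existing LXD.
# The tool validates both daemons, then prompts before migrating.
sudo lxd-to-incus
```

On clustered setups, the tool needs to be run on every cluster member, as the announcement's full documentation describes in more detail.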
The full announcement and changelog can be found here. And for those who prefer watching over reading, I’ve also recorded a video overview:
And lastly, my company is now offering commercial support for Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus, or even feature sponsorship.
And for anyone who just wants to contribute a bit towards my work on this and other open source projects, you’ll also find me on GitHub Sponsors, Patreon and Ko-fi.
A year ago today, my girlfriend and I (along with our cat) moved into our new house. It’s a pretty modern house on a 1.5 acre (~6000 m²) piece of forested land and even includes an old maple shack!
Fully moving all our stuff from our previous house in Montreal took quite some time, but we got it all done and sold the Montreal house at the end of July.
The new house has 4 bedrooms and 2 bathrooms upstairs; a massive open space for the living room, dining room and kitchen on the main floor, along with a mudroom, washroom and pantry; and on the lower level, a large two-car garage, mechanical room and another small storage room.
That’s significantly larger than what we had at the old house, and that’s ignoring the much larger outdoor space, which includes a large deck, the aforementioned maple shack, a swimming pool, a chicken coop and a lot of trees!
Home automation platform
Now, being the geek that I am, I’ve always had an interest in home automation, though I’ve also developed quite an allergy to anything relying on cloud services, and so I focus on technologies that can work completely offline.
These days, it means running a moderately complex installation of Home Assistant, along with Mosquitto for all the MQTT integrations, MediaMTX to manage camera feeds and Frigate to analyze and record all the video footage.
Obviously, all of those run in Incus containers, with some nested Docker containers for the components handling my Z-Wave, Zigbee and 433MHz radios.
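For anyone wanting to reproduce this layout, a rough sketch of the container setup is below. Instance and image names are illustrative; the key detail is `security.nesting`, which is what lets Docker run inside an Incus container, and a `unix-char` device to pass a USB radio stick through:

```shell
# Hypothetical names; one container for Home Assistant, one for the
# nested-Docker radio bridge daemons.
incus launch images:debian/12 home-assistant

# security.nesting allows running Docker inside the Incus container.
incus launch images:debian/12 radio-bridges -c security.nesting=true

# Pass a USB radio stick (e.g. the Z-Wave controller) into the container.
incus config device add radio-bridges zwave-stick unix-char \
    path=/dev/ttyUSB0
```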
Networking
On the networking side, the new house gets 3Gb/s symmetric fiber Internet along with a backup 100Mb/s down, 30Mb/s up HFC link. Both of those are then connected back to my datacenter infrastructure over WireGuard, letting me use BGP with my public IP allocation and ASN at home too.
I used the opportunity of setting up a new house to do a decent amount of future proofing, by which I mean building an overkill network… I installed a fiber patching box which receives the main Internet fiber along with two pairs of singlemode fiber from each of the switches around the property, two outside and four inside the house. The patching box then uses a pair of MTP trunks to carry all 24 or so fibers to the core switch over just two cables. Each switch gets a bonded 2x10Gb/s link back to the core switch, so suffice it to say, I’m not going to have a client saturate a link anytime soon!
All the switches are Mikrotik, with the core being a CRS326-24S+2Q+RM and the edge switches being a mix of netPower 16P and CRS326-24G-2S+RM depending on whether PoE is needed. I’m sticking to their RouterOS products as that lets me handle configuration and backups through Ansible.
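As a sketch of what the Ansible side can look like (inventory group name is hypothetical, and the inventory is assumed to set `ansible_connection=ansible.netcommon.network_cli` and `ansible_network_os=community.routeros.routeros` for these hosts):

```shell
# The RouterOS modules live in the community.routeros collection.
ansible-galaxy collection install community.routeros

# Pull a full configuration export from every switch, usable as a backup.
ansible switches -m community.routeros.command -a "commands='/export'"
```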
The wifi is handled by a total of 7 Ubiquiti UniFi 6 APs: 3 U6-LR for the inside and 4 U6-Lite for the outside.
On the logical side of things, I’ve got separate VLANs and SSIDs for trusted clients, untrusted clients, IoT sensors and IoT cameras, with the last two getting no external access whatsoever.
This allows me to use just about any random brand of camera or IoT device without fear of it phoning home with all my data. The only real requirement for cameras is that they expose an RTSP/RTMP feed.
Hardware
Now, on to the home automation hardware side of things. As mentioned above, my Home Assistant setup can handle devices connected over Z-Wave, Zigbee or 433MHz, or accessible over the internal network.
In general, I’m a big proponent of home automation being there to help those living in the house; it should never get in the way of normal interactions with the house. This mostly means that every light switch is expected to function as a light switch, and the same goes for thermostats or anything else that’s visible to someone living in the space.
Here is an overview of what I ended up going with:
Outdoor non-IR cameras: Revotech Mini Camera (PoE)
Cat feeder: Aqara Smart Pet Feeder C1 (Zigbee)
Sound system: Sonos Arc & IKEA Symfonisk (wifi)
Note that this isn’t necessarily an endorsement of any of those products 🙂 For the cameras, for example, I’ve been going through a variety of manufacturers, some more reliable than others, especially when it comes to water ingress…
Automation
Having all of those in Home Assistant allows for some pretty good automation, things like getting notified of the mailbox being opened, along with a photo of whoever was there at the time. The same goes for the immediate perimeter of the house when we’re not there, which is useful to monitor deliveries. It’s also used to keep the house at a comfortable temperature year round without needlessly wasting energy heating unused rooms.
Our chicken coop is also fully automated: it opens automatically in the morning, closes at night, sends a photo to confirm that the chickens are back in, keeps them inside when it’s too cold outside and turns on heat for the water dispenser to avoid it freezing over.
The swimming pool equipment that came with the house didn’t allow for any real automation either, so instead we’re relying on relays and smart plugs to control it. The pool temperature sensor, combined with the big relay controlling the heat pump, allows Home Assistant to act as a thermostat for the pool. The smart plug controlling the pump allows for significant energy savings by only filtering the pool for as long as is needed (based on usage and temperature).
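A minimal sketch of such a pool thermostat, using Home Assistant’s `generic_thermostat` integration (the entity names are hypothetical; the actual setup may well differ):

```yaml
# A relay-backed thermostat: turns the heat pump relay on and off
# based on the pool water temperature sensor.
climate:
  - platform: generic_thermostat
    name: Pool
    heater: switch.pool_heat_pump_relay     # relay feeding the heat pump
    target_sensor: sensor.pool_temperature  # pool water temperature
    min_temp: 20
    max_temp: 30
    cold_tolerance: 0.5
    hot_tolerance: 0.5
```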
And probably the most useful of all from a financial standpoint: support for automatically handling peak periods with the utility, during which they credit money for every kWh of energy saved compared to the same period on previous days. Home Assistant can simply pre-heat the house a bit ahead of time and then turn off just about everything for the peak period. It keeps things perfectly livable and saves a fair amount of money by the end of the season!
What’s next
In general, I’m very happy with how things stand now.
There are really only two things which aren’t controllable yet and which would be useful to control, especially when the utility provides incentives to reduce power consumption: the water heater and the air exchanger. For the water heater, Sinopé makes the perfect controller for it; I’m just waiting for it to be more readily available. For the air exchanger, I’ve yet to decide between trying to reverse engineer the control signal with an ESP8266 or going the lazy route and just using a controllable outlet, leaving it in the same mode forever.
We’ve been getting a few power cuts and despite having 6U worth of UPS for my servers and the core network, it’s annoying to have the rest of the house lose power. To fix that, we got a second electrical panel installed for the critical loads which will hopefully soon be fed by a battery system.
Over the next year, I also expect the maple shack to get brought to a more livable state, with the current plan being to relocate the servers and part of my office down to it. This will involve a fair bit of construction as well as running fiber and a beefier power cable down there, but it would provide for good home/work separation while still not having to drive anywhere 🙂
Incus was created just over two months ago by Aleksa Sarai, forking the LXD project shortly after Canonical took it over and kicked out all community maintainers from it. It aims to provide the same great system container and virtual machine management, clustering and more as LXD, but in a more community driven and distribution-agnostic way.
Over those two months, the focus has been on taking ownership of the code base: doing a lot of housekeeping work, modernizing the code, removing a number of less used or Ubuntu-specific features and developing tooling that will allow the project to keep up with LXD while also growing its own features separately from LXD.
From a technical standpoint, it’s very similar to LXD 5.18 and supports migrating all data from LXD 4.0 or newer (up to 5.18). From a community standpoint, it’s the beginning of a great new project, run by the original LXD maintainers along with Aleksa, and one that has already received a number of contributions from various community members!
You can easily try Incus 0.1 for yourself with our online demo.
Separately from the Incus project, I’m also personally providing packages for Incus to Debian and Ubuntu users through my company, Zabbly. And I’m naturally able to provide paid support, development and migration services to anyone who would like that!
My open source work can also be sponsored directly through GitHub Sponsors.
Now to go back to fixing bugs and processing all the great user feedback so far!
One very neat feature we had back when LXD was hosted on the Linux Containers infrastructure was the ability to try it online. For that, we were dynamically allocating an LXD container with nested support, allowing the user to quickly get a shell and try LXD for a bit.
This was the first LXD experience for tens of thousands of people and made it painless to discover LXD and see if it’s a good fit for you.
With the move of LXD to Canonical, this was lost and my understanding is that for LXD, there’s currently no plan to bring it back.
Enter Incus
Now that Incus is part of the Linux Containers project, it gets to use some of the infrastructure which was once provided to LXD, including the ability to provide a live demo server!
Quite a few things have changed on the infrastructure side since the LXD days.
For one thing, the server code has seen some substantial updates, porting it to Incus, adding support for virtual machines, talking to remote clusters, making the configuration file easier to read, adding e-mail notifications for when users leave feedback and more!
On the client side, the code was also ported from the now defunct term.js over to the actively maintained xterm.js. The instructions were obviously updated to fit Incus too.
But the exciting part is that we’re no longer using nested containers running inside one large, mostly stateless VM that had to be rebuilt daily for security reasons. Instead, we’re now spawning individual virtual machines against a remote Incus cluster!
Each session now gets an Ubuntu 22.04 VM for a duration of 30 minutes, running on an Incus cluster with a few beefy machines available. The VMs use the Incus daily repository along with both my kernel and ZFS builds.
Resource-wise, we’re also looking at a big upgrade, moving from just 1 CPU, 256MB of RAM and 5GB of slow disk to a whopping 2 CPUs, 4GB of RAM and 50GB of NVMe storage!
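For the curious, a session VM with those resources can be requested from an Incus cluster with something like the following (the image and instance name are illustrative, not necessarily what the demo service uses internally):

```shell
# Launch a VM with the demo session's resource limits:
# 2 CPUs, 4GiB of RAM and a 50GiB root disk.
incus launch images:ubuntu/22.04 demo-session --vm \
    -c limits.cpu=2 \
    -c limits.memory=4GiB \
    -d root,size=50GiB
```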
The end result is that while the session startup time is a bit longer, going from just 5s to around 15s, the user now gets a full dedicated VM with fast storage and a lot more resources to play with. The most notable change is the ability to play with Incus VMs too!
Next steps
The demo server is currently using Incus daily builds as there’s no stable Incus release yet. This will obviously change as soon as we have a stable release!
Other than that, the instructions may be expanded a bit to cover more resource intensive parts of Incus, making use of the extra resources now available.
Just a very short post to mention that I’ve enabled ActivityPub on my blog, making it possible to follow it by simply following @blog@stgraber.org from Mastodon or other Fediverse platforms!
For good measure I’ve also made @stgraber@stgraber.org redirect to my current Mastodon account which should make for easier discovery.