Announcing Incus 0.3

Another month, another Incus release!
Incus 0.3 is now out, featuring OpenFGA support, a lot of improvements to our migration tool and support for hot-plug/hot-remove of shared paths in virtual machines.

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

Finally, just a quick reminder that my company is now offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services for things like initial migration from LXD, a review of your deployment to squeeze the most out of Incus, or even feature sponsorship.
You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Containers and kernel devrooms at FOSDEM 2024

As has become a bit of a tradition by now, I’ll be attending FOSDEM 2024 in Brussels, Belgium on the weekend of February 3-4, 2024.

I’m once again one of the organizers of the containers devroom, a devroom we’ve been running for over 5 years now. On top of that, I will also help organize the kernel devroom. This is going to be our second year for that devroom after a very successful first year in 2023!

The CFPs for both devrooms are currently still open, with a submission deadline of December 10th:

If you have anything that’s containers- or kernel-related, please send it in; we have a variety of time slot lengths to accommodate anything from a short demo to a full-size talk.

But those are just two of the many devrooms running over the weekend; you can find a full list, along with all the CFP links, here.

See you in Brussels!

PS: A good chunk of the LXC/Incus team is going to be attending, so let us know if you want to chat and we’ll try to find some time!

Adding a web UI to the Incus demo service

Introduction

For anyone who hasn’t seen it before, you can try the Incus container and virtual machine manager online by just heading to our website and starting a demo session.

This gets you a dedicated VM on an Incus cluster with a bunch of resources and a 30-minute time limit, so you can either poke around for yourself or go through our guided showcase.

Now, as neat as this is, it’s nothing new; we’ve been offering this for quite a while.

What’s new is the “Try a Web UI” link you can see alongside the console timer. Click it and you’ll get a web app that lets you play with the same temporary Incus server as the regular text demo.

What web UI?

Unlike LXD, Incus doesn’t have an official web UI. Instead, it just serves whatever web UI you want.

That means that getting a stateless (JavaScript + HTML only) web UI running is as simple as unpacking a bunch of files into /opt/incus/ui/ and then accessing Incus from a web browser. For more complex, stateful web UIs (those using dynamic server-side languages or an external database), a simple index.html file can be dropped into /opt/incus/ui/ to redirect the user to the correct web server.
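
To illustrate the stateless case, here is a minimal Go sketch of the general mechanism, a daemon serving a directory of static files over HTTP. This is only an illustration of the idea, not the actual Incus code, and the port is an assumption:

package main

import "net/http"

func main() {
	mux := http.NewServeMux()

	// Serve everything under /opt/incus/ui/ at the /ui/ URL prefix,
	// which is all a stateless (JavaScript + HTML only) UI needs.
	mux.Handle("/ui/", http.StripPrefix("/ui/",
		http.FileServer(http.Dir("/opt/incus/ui"))))

	// The real daemon listens over HTTPS with its own certificate;
	// plain HTTP on an assumed port keeps the sketch short.
	http.ListenAndServe(":8080", mux)
}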

In a recent livestream, I spent a bit of time packaging the Canonical LXD UI in my Incus package repository, so getting that one up and running is now as simple as apt install incus-ui-canonical.

Part of that work was also doing some minimal re-branding, changing some links, and updating the list of config options so it matches Incus. That’s handled as a series of patches that are applied during the package build.

How does it all work?

Now, to make this available to anyone as part of the online demo service, some work had to be done!

The first part was the easy one: simply get the incus-ui-canonical package installed in our demo image. Those images are generated through a simple shell script, building a new base image every day.

With the package present and Incus configured to listen on the network, the next step was to add a bunch of logic to the incus-demo-server codebase. Each demo session is identified by a UUID. You can see that UUID in the URL whenever you start a demo session.

When a new session is created, a database record is made which, amongst other things, stores the IPv6 address of the instance. Until now, this wasn’t really used for anything other than debugging.
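
As a rough illustration, a session record along these lines is all the later proxying step needs; the field names here are hypothetical, not the actual incus-demo-server schema:

package main

import "time"

// Session is a hypothetical sketch of a demo session record; the real
// incus-demo-server schema may name and store things differently.
type Session struct {
	UUID       string    // public identifier, visible in the demo URL
	InstanceIP string    // IPv6 address of the instance running Incus
	CreatedAt  time.Time // used to enforce the 30-minute time limit
}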

Now the easy approach would have been to just provide the IPv6 address to the end user; so long as they have IPv6 connectivity, they could access the web UI directly. There are a few problems with that approach though:

  • The adoption rate for IPv6 is only slightly above 50% when looking at Incus users
  • The target web server (Incus) doesn’t have a valid TLS certificate
  • Authentication in the web UI requires a client certificate in the user’s browser

This would have made for a very high bar for anyone wanting to try the UI, so something better was needed. That’s where the idea of having incus-demo-server act as an SNI-aware proxy came about.

The setup basically looks like:

  • User hits https://<UUID>.incus-demo.linuxcontainers.org
  • A wildcard DNS record catches *.incus-demo.linuxcontainers.org and sends the traffic to HAProxy
  • HAProxy uses a valid Let’s Encrypt wildcard certificate to answer
  • HAProxy forwards the traffic to incus-demo-server on a dedicated port, keeping the SNI (Server Name Indication) value
  • incus-demo-server inspects the SNI value, extracts the instance UUID and gets the target IPv6 address from its database
  • incus-demo-server forwards the traffic to the Incus server running in the instance, using its own TLS client certificate for that connection

This results in the end user being able to access the web UI in their temporary instance with a valid HTTPS certificate and without needing to worry about authentication at all.
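
For illustration, here is a hedged Go sketch of the last two steps: mapping the UUID from the hostname to an instance address, then proxying with a client certificate. The lookupInstance helper, the ports and the certificate paths are all assumptions for the sketch, not the actual incus-demo-server code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"net/http/httputil"
	"strings"
)

// lookupInstance stands in for the database query that maps a session
// UUID to the IPv6 address of its instance.
func lookupInstance(uuid string) (string, error) {
	return "2001:db8::1234", nil // placeholder address
}

func main() {
	// TLS client certificate presented to the Incus server inside the
	// instance, so the end user never has to deal with authentication.
	clientCert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		panic(err)
	}

	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// The first DNS label of the requested host is the session
			// UUID, e.g. <UUID>.incus-demo.linuxcontainers.org.
			uuid := strings.SplitN(req.Host, ".", 2)[0]

			addr, err := lookupInstance(uuid)
			if err != nil {
				return // unknown session, let the request fail
			}

			req.URL.Scheme = "https"
			req.URL.Host = fmt.Sprintf("[%s]:8443", addr) // Incus HTTPS port
		},
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates:       []tls.Certificate{clientCert},
				InsecureSkipVerify: true, // per-instance certs are self-signed
			},
		},
	}

	// HAProxy forwards the traffic here on a dedicated port.
	http.ListenAndServe(":8444", proxy)
}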

You can find the incus-demo-server side of this logic here.

Conclusion

I believe this turned out to be a very elegant trick, making things as easy as humanly possible for anyone to try Incus while letting them mix and match between using the CLI and using the web UI.

As mentioned, there is no such thing as an official web UI for Incus, so we’re looking forward to getting more of the alternative web UIs packaged and will be looking at ways to offer them on the demo service too, likely by having the user install whichever one they want through the terminal.

Announcing Incus 0.2

The second release of Incus is now out!

With this release, we finally got to spend less time on building up infrastructure and processes and more time on meaningful improvements and new features, which was really good to see!

This is also the first release with support for migrating LXD clusters over to Incus.
The built-in lxd-to-incus migration tool can now handle most standalone and clustered cases, performing a full initial validation of all settings to avoid surprises.

The full announcement and changelog can be found here.
And for those who prefer watching over reading, I’ve also recorded a video overview:

And lastly, my company is now offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services for things like initial migration from LXD, a review of your deployment to squeeze the most out of Incus, or even feature sponsorship.

You’ll find all details of that here: https://zabbly.com/incus

And for anyone who just wants to contribute a bit towards my work on this and other open source projects, you’ll also find me on GitHub Sponsors, Patreon and Ko-fi.

Setting up a new house

Introduction

A year ago today, my girlfriend and I (along with our cat) moved into our new house. It’s a pretty modern house on a 1.5-acre (~6,000 m²) piece of forested land and even includes an old maple shack!

Fully moving all our stuff from our previous house in Montreal took quite some time, but we got it all done and sold the Montreal house at the end of July.

The new house has 4 bedrooms and 2 bathrooms upstairs; a massive open space for the living room, dining room and kitchen on the main floor, along with a mudroom, washroom and pantry; and, on the lower level, a large two-car garage, a mechanical room and another small storage room.

That’s significantly larger than what we had back at the old house, and that’s ignoring the much larger outdoor space, which includes a large deck, the aforementioned maple shack, a swimming pool, a chicken coop and a lot of trees!

Home automation platform

Now, being the geek that I am, I’ve always had an interest in home automation, though I’ve also developed quite an allergy to anything relying on cloud services, so I focus on technologies that can work completely offline.

These days, it means running a moderately complex installation of Home Assistant, along with Mosquitto for all the MQTT integrations, MediaMTX to manage camera feeds and Frigate to analyze and record all the video footage.

Obviously, all of the software components run in Incus containers, with some nested Docker containers for the software handling my Z-Wave, Zigbee and 433MHz radios.

Networking

On the networking side, the new house is getting 3Gbps symmetric fiber Internet along with a backup 100Mbps/30Mbps HFC link. Both of those are then connected back to my datacenter infrastructure over WireGuard, letting me use BGP with my public IP allocation and ASN at home too.

I used the opportunity of setting up a new house to do a decent amount of future proofing, by which I mean building an overkill network… I installed a fiber patching box which receives the main internet fiber along with two pairs of singlemode fiber going to each of the switches around the property, two outside and four inside the house. The fiber patching box then uses a pair of MTP fibers to carry all 24 or so strands to the core switch over just two cables. Each switch gets a bonded 2x10Gbps link back to the core switch, so suffice it to say, I’m not going to have a client saturate a link anytime soon!

All the switches are Mikrotik, with the core being a CRS326-24S+2Q+RM and the edge switches being a mix of netPower 16P and CRS326-24G-2S+RM depending on whether PoE is needed. I’m sticking to their RouterOS products as that then lets me handle configuration and backups through Ansible.

The WiFi comes from a total of 7 Ubiquiti U6 APs: 3 U6-LR for the inside and 4 U6-Lite for the outside.

On the logical side of things, I’ve got separate VLANs and SSIDs for trusted clients, untrusted clients, IoT sensors and IoT cameras, with the last two getting no external access whatsoever.

This allows me to use just about any random brand of camera or IoT device without fear of it dialing home with all my data. The only real requirement for cameras is that they provide RTSP/RTMP.

Hardware

Now, on to the home automation hardware side of things. As mentioned above, my Home Assistant setup can handle devices connected over Z-Wave, Zigbee or 433MHz, or accessible over the internal network.

In general, I’m a big proponent of having home automation be there to help those living in the house; it should never get in the way of normal interactions with the house. This mostly means that every light switch is expected to function as a light switch, and the same goes for thermostats or anything else that’s visible to someone living in the space.

Here is an overview of what I ended up going with:

  • Light switches (with neutral): Zooz ZEN77 dimmers (Z-Wave)
  • Light switches (without neutral): Inovelli Blue Series dimmers (Zigbee)
  • Baseboard thermostats: Stelpro STZW402WBPLUS (Z-Wave)
  • Garage ceiling heater thermostat: Centralite 3000-W (Zigbee)
  • Bathroom floor thermostat: Sinopé TH13000ZB (Zigbee)
  • Concrete slab thermostat: Sinopé TH14000ZB (Zigbee)
  • Pool pump control (smart switch): Zooz ZEN05 (Z-Wave)
  • Pool heater control: Sinopé RM3250ZB (Zigbee)
  • Door locks: Yale YRD156 (Z-Wave)
  • Power panel metering: Aeon Labs ZW095 (Z-Wave)
  • Air quality sensors: Tuya TS0601 (Zigbee)
  • Door sensors: Third Reality 3RDS17BZ (Zigbee)
  • Small appliance control: Sonoff S31ZB (Zigbee)
  • Leak detectors: Third Reality 3RWS18BZ (Zigbee)
  • Outdoor light sensors: Xiaomi GZCGQ11LM (Zigbee)
  • Fridge and freezer sensors: AcuRite 06044M (433MHz)
  • Pool temperature sensor: InkBird IBS-P01R (433MHz)
  • Outdoor air temperature/humidity sensor: Hama Weather station (433MHz)
  • Outdoor rain meter: AcuRite 00899 (433MHz)
  • Vacuum cleaner: Roomba 960 (wifi)
  • Car charger: OpenEVSE (wifi)
  • Custom meters & controllers: ESP8266 boards (wifi)
  • AC and fan control: Bestcon RM4C Mini (wifi)
  • Indoor wireless cameras: Sonoff 1080P (wifi)
  • Indoor wired cameras: Reolink RLC-520A (PoE)
  • Outdoor IR cameras: CTVISION 5MP, Reolink 5MP, Reolink 4K, Veezoom 5MP (PoE)
  • Outdoor non-IR cameras: Revotech Mini Camera (PoE)
  • Cat feeder: Aqara Smart Pet Feeder C1 (Zigbee)
  • Sound system: Sonos Arc & IKEA Symfonisk (wifi)

Note that this isn’t necessarily an endorsement of any of these products 🙂
The cameras, for example: I’ve been going through a variety of manufacturers, some more reliable than others, especially when it comes to water ingress…

Automation

All of those devices feeding into Home Assistant allow for some pretty good automations, things like getting notified when the mailbox is opened, along with a photo of whoever was there at the time. The same goes for the immediate perimeter of the house when we’re not there, which is useful to monitor deliveries. It’s also used to keep the house at a comfortable temperature year round without needlessly wasting energy heating unused rooms.

Our chicken coop is also fully automated: it opens automatically in the morning, closes at night, and sends a photo to confirm that the chickens are back in. It also keeps the chickens inside when it’s too cold outside and turns on heat for the water dispenser to keep it from freezing over.

The swimming pool equipment that came with the house didn’t allow for any real automation either, so instead we’re relying on relays/smart plugs to control it. The pool temperature sensor combined with the big relay controlling the heat pump allows Home Assistant to act as a thermostat for the pool.
The smart plug controlling the pump allows for quite some energy savings by only running the filtration for as long as is needed (based on usage and temperature).

And probably the most useful of all from a financial standpoint: support for automatically handling peak periods with the utility, during which they credit money for every kWh saved compared to the same period on previous days. Home Assistant can simply pre-heat the house a bit ahead of time and then turn off just about everything for the peak period. That keeps things perfectly livable and saves a fair amount of money by the end of the season!

What’s next

In general, I’m very happy with how things stand now.

There are really only two things which aren’t controllable yet and which would be useful to control, especially when the utility provides an incentive to reduce power consumption: the water heater and the air exchanger. For the water heater, Sinopé makes the perfect controller; I’m just waiting for it to be more readily available. For the air exchanger, I have yet to decide between trying to reverse engineer the control signal with an ESP8266 or going the lazy route, just using a controllable outlet and leaving it in the same mode forever.

We’ve been getting a few power cuts, and despite having 6U worth of UPS for my servers and the core network, it’s annoying to have the rest of the house lose power. To fix that, we got a second electrical panel installed for the critical loads, which will hopefully soon be fed by a battery system.

Over the next year, I also expect the maple shack to be brought to a more livable state, with the current plan being to relocate the servers and part of my office out there. This will involve a fair bit of construction, as well as running fiber and a beefier power cable down there, but it would provide good home/work separation while still not having to drive anywhere 🙂
