Adding a web UI to the Incus demo service


For anyone who hasn’t seen it before, you can try the Incus container and virtual machine manager online by just heading to our website and starting a demo session.

This gets you a dedicated VM on an Incus cluster, with a decent amount of resources and a 30-minute time limit, so you can either poke around on your own or go through our guided showcase.

Now as neat as this is, it’s nothing new and we’ve been offering this for quite a while.

What’s new is the “Try a Web UI” link you can see alongside the console timer. Click it and you’ll get a web app that lets you play with the same temporary Incus server as the regular text demo.

What web UI?

Unlike LXD, Incus doesn’t have an official web UI. Instead, it just serves whatever web UI you want.

That means that getting a stateless (JavaScript + HTML only) web UI is as simple as unpacking a bunch of files into /opt/incus/ui/ and then accessing Incus from a web browser. For more complex, stateful web UIs (those using dynamic server-side languages or an external database), a simple index.html file can be dropped into /opt/incus/ui/ to redirect the user to the correct web server.
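For the stateful case, that redirect page can be as simple as a static meta-refresh. A minimal sketch (the target URL is made up; point it at your actual web server):

```html
<!-- /opt/incus/ui/index.html: redirect to an external, stateful web UI.
     The URL below is a placeholder. -->
<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="refresh" content="0; url=https://ui.example.com/" />
  </head>
  <body>
    <p>Redirecting to the web UI…</p>
  </body>
</html>
```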

In a recent livestream, I spent a bit of time packaging the Canonical LXD UI in my Incus package repository so that it’s now as simple as apt install incus-ui-canonical to get that one up and running.

Part of that work was to also do some minimal re-branding, changing some links and updating the list of config options so it matches Incus. That’s handled as a series of patches that are applied during the package build.

How does it all work?

Now to get this available for anyone as part of the online demo service, some work had to be done!

The first part was the easy one: getting the incus-ui-canonical package installed in our demo image. Those images are generated through a simple shell script that builds a new base image every day.

With the package present and Incus configured to listen on the network, the next step was to add a bunch of logic to the incus-demo-server codebase. Each demo session is identified by a UUID. You can see that UUID in the URL whenever you start a demo session.

When a new session is created, a database record is made which, amongst other things, stores the IPv6 address of the instance. Until now, this wasn’t really used other than for debugging purposes.

The easy approach would have been to just hand the IPv6 address to the end user; so long as they had IPv6 connectivity, they could then access the web UI directly. There are a few problems with that approach though:

  • Adoption rate for IPv6 is only slightly above 50% when looking at Incus users
  • The target web server (Incus) doesn’t have a valid TLS certificate
  • Authentication in the web UI requires a client certificate in the user’s browser

This would have set a very high bar for anyone wanting to try the UI, so something better was needed. That’s where the idea of having incus-demo-server act as an SNI-aware proxy came about.

The setup basically looks like:

  • User hits https://<UUID>
  • Wildcard DNS record catches * and sends to HAProxy
  • HAProxy uses a valid Let’s Encrypt wildcard certificate to answer
  • HAProxy forwards the traffic to incus-demo-server on a dedicated port, keeping the SNI (Server Name Indication) value
  • incus-demo-server inspects the SNI value, extracts the instance UUID and gets the target IPv6 address from its database
  • incus-demo-server forwards the traffic to the Incus server running in the instance, using its own TLS client certificate for that connection

This results in the end user being able to access the web UI in their temporary instance with a valid HTTPS certificate and without needing to worry about authentication at all.

You can find the incus-demo-server side of this logic here.


I believe this turned out to be a very elegant trick, making things as easy as humanly possible for anyone to try Incus, letting them mix and match using the CLI or using a web UI.

As mentioned, there is no such thing as an official web UI for Incus, so we’re looking forward to getting some more of the alternative web UIs packaged. We’ll also be looking at ways to offer them on the demo service, likely by having the user install whichever one they want through the terminal.

Posted in Incus, Planet Ubuntu, Zabbly

Announcing Incus 0.2

The second release of Incus is now out!

With this release, we finally got to spend less time building up infrastructure and processes and more time working on meaningful improvements and new features, which was really good to see!

This is also the first release with support for migrating LXD clusters over to Incus.
The built-in lxd-to-incus migration tool can now handle most standalone and clustered cases, performing a full initial validation of all settings to avoid surprises.

The full announcement and changelog can be found here.
And for those who prefer watching over reading, I’ve also recorded a video overview:

And lastly, my company is now offering commercial support for Incus, ranging from by-the-hour support contracts to one-off services for things like an initial migration from LXD, a review of your deployment to squeeze the most out of Incus, or even feature sponsorship.

You’ll find all details of that here:

And for anyone who just wants to contribute a bit towards my work on this and other open source projects, you’ll also find me on GitHub Sponsors, Patreon and Ko-fi.

Posted in Incus, LXD, Planet Ubuntu, Zabbly

Setting up a new house


A year ago today, my girlfriend and I (along with our cat) moved into our new house. It’s a pretty modern house on a 1.5 acre (~6000 m²) piece of forested land and even includes an old maple shack!

Fully moving all our stuff from our previous house in Montreal took quite some time, but we got it all done and sold the Montreal house at the end of July.

The new house has 4 bedrooms and 2 bathrooms upstairs; a massive open space for the living room, dining room and kitchen on the main floor, along with a mudroom, washroom and pantry; and on the lower level, a large two-car garage, a mechanical room and another small storage room.

That’s significantly larger than what we had back at the old house, and that’s ignoring the much larger outside space, which includes a large deck, the aforementioned maple shack, a swimming pool, a chicken coop and a lot of trees!

Home automation platform

Now, being the geek that I am, I’ve always had an interest in home automation, though I’ve also developed quite an allergy to anything relying on cloud services, so I focus on technologies that can work completely offline.

These days, it means running a moderately complex installation of Home Assistant, along with Mosquitto for all the MQTT integrations, MediaMTX to manage camera feeds and Frigate to analyze and record all the video footage.

Obviously, all of the software components run in Incus containers, with some nested Docker containers for the components handling my Z-Wave, Zigbee and 433MHz radios.


On the networking side, the new house is getting 3Gb/s symmetric fiber Internet along with a backup 100Mb/s down / 30Mb/s up HFC link. Both of those are then connected back to my datacenter infrastructure over WireGuard, letting me use BGP with my public IP allocation and ASN at home too.

I used the opportunity of setting up a new house to do a decent amount of future proofing, by which I mean building an overkill network… I installed a fiber patching box which receives the main Internet fiber along with two pairs of singlemode fiber from each of the switches around the property, two outside and four inside the house. The fiber patching box then uses a pair of MTP trunk cables to carry all 24 or so fibers to the core switch over just two cables. Each switch gets a bonded 2x10Gb/s link back to the core switch, so suffice to say, I’m not going to have a client saturate a link anytime soon!

All the switches are Mikrotik, with the core being a CRS326-24S+2Q+RM and the edge switches being a mix of netPower 16P and CRS326-24G-2S+RM depending on whether PoE is needed. I’m sticking to their RouterOS products as that then lets me handle configuration and backups through Ansible.
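As a rough idea of what that Ansible-driven configuration backup can look like (a sketch only: the host group name is made up and it assumes the community.routeros collection is installed):

```yaml
# Back up the RouterOS configuration of every switch.
- hosts: mikrotik_switches          # hypothetical inventory group
  gather_facts: false
  connection: ansible.netcommon.network_cli
  vars:
    ansible_network_os: community.routeros.routeros
  tasks:
    - name: Export the running configuration
      community.routeros.command:
        commands:
          - /export
      register: export_output

    - name: Store the export on the controller
      ansible.builtin.copy:
        content: "{{ export_output.stdout[0] }}"
        dest: "backups/{{ inventory_hostname }}.rsc"
      delegate_to: localhost
```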

The wifi is a total of 7 Ubiquiti WiFi 6 APs: 3 U6-LR for the inside and 4 U6-Lite for the outside.

On the logical side of things, I’ve got separate VLANs and SSIDs for trusted clients, untrusted clients, IoT sensors and IoT cameras, with the last two getting no external access whatsoever.

This allows me to use just about any random brand of camera or IoT device without fear of it dialing back home with all my data. The only real requirement for cameras is that they expose an RTSP/RTMP stream.


Now, on to the home automation hardware side of things. As mentioned above, my Home Assistant setup can handle devices connected over Z-Wave, Zigbee or 433MHz, or accessible over the internal network.

In general, I’m a big proponent of having home automation be there to help those living in the house; it should never get in the way of normal interactions with the house. This mostly means that every light switch is expected to function as a light switch, and the same goes for thermostats or anything else that’s visible to someone living in the space.

Here is an overview of what I ended up going with:

  • Light switches (with neutral): Zooz ZEN77 dimmers (Z-Wave)
  • Light switches (without neutral): Inovelli Blue Series dimmer (Zigbee)
  • Baseboard thermostats: Stelpro STZW402WBPLUS (Z-Wave)
  • Garage ceiling heater thermostat: Centralite 3000-W (Zigbee)
  • Bathroom floor thermostat: Sinopé TH1300ZB (Zigbee)
  • Concrete slab thermostat: Sinopé TH1400ZB (Zigbee)
  • Pool pump control (smart switch): Zooz ZEN05 (Z-Wave)
  • Pool heater control: Sinopé RM3250ZB (Zigbee)
  • Door locks: Yale YRD156 (Z-Wave)
  • Power panel metering: Aeon Labs ZW095 (Z-Wave)
  • Air quality sensors: Tuya TS0601 (Zigbee)
  • Door sensors: Third Reality 3RDS17BZ (Zigbee)
  • Small appliance control: Sonoff S31ZB (Zigbee)
  • Leak detectors: Third Reality 3RWS18BZ (Zigbee)
  • Outdoor light sensors: Xiaomi GZCGQ11LM (Zigbee)
  • Fridge and freezer sensors: AcuRite 06044M (433MHz)
  • Pool temperature sensor: InkBird IBS-P01R (433MHz)
  • Outdoor air temperature/humidity sensor: Hama weather station (433MHz)
  • Outdoor rain meter: AcuRite 00899 (433MHz)
  • Vacuum cleaner: Roomba 960 (wifi)
  • Car charger: OpenEVSE (wifi)
  • Custom meters & controllers: ESP8266 boards (wifi)
  • AC and fan control: Bestcon RM4C Mini (wifi)
  • Indoor wireless cameras: Sonoff 1080P (wifi)
  • Indoor wired cameras: Reolink RLC-520A (PoE)
  • Outdoor IR cameras: CTVISION 5MP, Reolink 5MP, Reolink 4K, Veezoom 5MP (PoE)
  • Outdoor non-IR cameras: Revotech Mini Camera (PoE)
  • Cat feeder: Aqara Smart Pet Feeder C1 (Zigbee)
  • Sound system: Sonos Arc & IKEA Symfonisk (wifi)

Note that this isn’t necessarily an endorsement for any of those products 🙂
The cameras, for example: I’ve been going through a variety of manufacturers, some more reliable than others, especially when it comes to water ingress…


Having all of those in Home Assistant allows for some pretty good automations: things like getting notified when the mailbox is opened, along with a photo of whoever was there at the time. The same goes for the immediate perimeter of the house when we’re not there, which is useful to monitor deliveries. It’s also used to keep the house at a comfortable temperature year round without needlessly wasting energy heating unused rooms.
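As an idea of what one of those automations can look like in Home Assistant (a sketch only: every entity and notification service name below is made up):

```yaml
automation:
  - alias: "Mailbox opened: notify with photo"
    trigger:
      - platform: state
        entity_id: binary_sensor.mailbox_door   # hypothetical door sensor
        to: "on"
    action:
      - service: camera.snapshot
        target:
          entity_id: camera.driveway            # hypothetical camera
        data:
          filename: /config/www/mailbox.jpg
      - service: notify.mobile_app_phone        # hypothetical notify target
        data:
          message: "The mailbox was just opened"
          data:
            image: /local/mailbox.jpg
```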

Our chicken coop is also fully automated: it opens automatically in the morning, closes at night, sends a photo to confirm that the chicken is back in, keeps the chicken inside when it’s too cold outside, and turns on heat for the water dispenser to avoid it freezing over.

The swimming pool equipment that came with the house didn’t allow for any real automation either, so instead we’re relying on relays and smart plugs to control it. The pool temperature sensor, combined with the big relay controlling the heat pump, allows Home Assistant to act as a thermostat for the pool.
The smart plug controlling the pump also allows for quite some energy savings by only running the filtration for as long as needed (based on usage and temperature).

And probably the most useful of all from a financial standpoint is the automatic handling of peak periods with the utility, during which they credit money for every kWh saved compared to the same period on previous days. Home Assistant simply pre-heats the house a bit ahead of time and then turns off just about everything for the duration of the peak period. That keeps things perfectly livable and saves a fair amount of money by the end of the season!
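The economics of a single peak period can be illustrated with some back-of-the-envelope arithmetic (the numbers and rate below are made up, not the actual utility tariff):

```python
def peak_credit(baseline_kwh: float, actual_kwh: float, rate_per_kwh: float) -> float:
    """Credit earned during one peak period: the utility pays for every
    kWh saved relative to the baseline from the previous days."""
    saved = max(baseline_kwh - actual_kwh, 0.0)
    return saved * rate_per_kwh


# A peak period where the house would normally use 20 kWh but, with
# pre-heating and everything switched off, only uses 4 kWh:
credit = peak_credit(baseline_kwh=20.0, actual_kwh=4.0, rate_per_kwh=0.55)
```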

What’s next

In general, I’m very happy with how things stand now.

There are really only two things which aren’t controllable yet and which it would be useful to control, especially when the utility provides incentives to reduce power consumption: the water heater and the air exchanger. For the water heater, Sinopé makes the perfect controller; I’m just waiting for it to become more readily available. For the air exchanger, I have yet to decide between trying to reverse engineer the control signal with an ESP8266 or going the lazy route of using a controllable outlet and leaving the unit in the same mode forever.

We’ve been getting a few power cuts, and despite having 6U worth of UPS for my servers and the core network, it’s annoying to have the rest of the house lose power. To fix that, we had a second electrical panel installed for the critical loads, which will hopefully soon be fed by a battery system.

Over the next year, I also expect the maple shack to get brought to a more livable state, the current plan being to relocate the servers and part of my office down there. This will involve a fair bit of construction, as well as running fiber and a beefier power cable down there, but it would provide good home/work separation while still not having to drive anywhere 🙂

Posted in Incus, Planet Ubuntu, Zabbly

Releasing Incus 0.1

Incus was created just over two months ago by Aleksa Sarai, forking the LXD project shortly after Canonical took it over and kicked out all of its community maintainers.
It aims to provide the same great system container and virtual machine management, clustering, … as LXD, but in a more community-driven and distribution-agnostic way.

Over those two months, the focus has been on taking ownership of the code base: doing a lot of housekeeping, modernizing the code, removing a number of less used or Ubuntu-specific features, and developing tooling that will allow the project to keep up with LXD while also growing its own features separately from it.

The result is the release of Incus 0.1!

From a technical standpoint, it’s very similar to LXD 5.18 and supports migrating all data from LXD 4.0 or newer (up to 5.18). From a community standpoint, it’s the beginning of a great new project, run by the original LXD maintainers along with Aleksa, and it has already received a number of contributions from various community members!

You can easily try Incus 0.1 for yourself with our online demo.

Separately from the Incus project, I’m also personally providing packages for Incus to Debian and Ubuntu users through my company, Zabbly. And I’m naturally able to provide paid support, development and migration services to anyone who would like that!

My open source work can also be sponsored directly through Github Sponsors.

Now to go back to fixing bugs and processing all the great user feedback so far!

Posted in Incus, Planet Ubuntu, Zabbly

Bringing back the Incus demo server


One very neat feature we had back when LXD was hosted on the Linux Containers infrastructure was the ability to try it online. For that, we dynamically allocated an LXD container with nested support, allowing the user to quickly get a shell and try LXD for a bit.

This was the first LXD experience for tens of thousands of people and made it painless to discover LXD and see if it’s a good fit for you.

With the move of LXD to Canonical, this was lost and my understanding is that for LXD, there’s currently no plan to bring it back.

Enter Incus

Now that Incus is part of the Linux Containers project, it gets to use some of the infrastructure which was once provided to LXD, including the ability to provide a live demo server!

This is now live at:

Technical details

Quite a few things have changed on the infrastructure side since the LXD days.

For one thing, the server code has seen some substantial updates, porting it to Incus, adding support for virtual machines, talking to remote clusters, making the configuration file easier to read, adding e-mail notifications for when users leave feedback and more!

On the client side, the code was also ported from the now defunct term.js over to the actively maintained xterm.js. The instructions were obviously updated to fit Incus too.

But the exciting part is that we’re no longer using nested containers running inside of one large, mostly stateless VM that had to be rebuilt daily for security reasons. Instead, we’re now spawning individual virtual machines against a remote Incus cluster!

Each session now gets an Ubuntu 22.04 VM for a duration of 30 minutes. Each VM runs on an Incus cluster with a few beefy machines available. They use the Incus daily repository along with both my kernel and ZFS builds.

Resource-wise, we’re also looking at a big upgrade, moving from just 1 CPU, 256 MB of RAM and 5 GB of slow disk to a whopping 2 CPUs, 4 GB of RAM and 50 GB of NVMe storage!

The end result is that while the session startup time is a bit longer, up to around 15s from just 5s, the user now gets a full dedicated VM with fast storage and a lot more resources to play with. The most notable change this introduces is the ability to play with Incus VMs too!

Next steps

The demo server is currently using Incus daily builds as there’s no stable Incus release yet. This will obviously change as soon as we have a stable release!

Other than that, the instructions may be expanded a bit to cover more resource intensive parts of Incus, making use of the extra resources now available.

Posted in Incus, Planet Ubuntu, Zabbly