Category Archives: LXC

LXC/LXCFS/Incus 6.0.1 LTS release

Introduction

The Linux Containers project maintains Long Term Support (LTS) releases for its core projects.
Those come with 5 years of support from upstream: the first two years include bugfixes, minor improvements and security fixes, while the remaining three years get security fixes only.

Our current LTS release, 6.0, is, as the name implies, the sixth LTS release of our projects, the first of which came out over 10 years ago, in February 2014.

At the time of writing, we have three currently supported LTS releases:

  • 4.0 (supported until June 2025, security-only)
  • 5.0 (supported until June 2027, security-only)
  • 6.0 (supported until June 2029).

The 6.0 LTS release began in April 2024 and was the first to include Incus.

LXC

LXC is the oldest Linux Containers project and the basis for almost every other one of our projects.
This low-level container runtime and library was first released in August 2008, led to the creation of projects like Docker and today is still actively used directly or indirectly on millions of systems.

Announcement: https://discuss.linuxcontainers.org/t/lxc-6-0-1-lts-has-been-released/20283

Highlights of this point release:

  • Fixed some build tooling issues
  • Fixed startup failures on systems without IPv6 support
  • Updated AppArmor rules to avoid potential warnings

LXCFS

LXCFS is a FUSE filesystem used to work around some shortcomings of the Linux kernel when it comes to reporting available system resources to processes running in containers.
The project started in late 2014 and is still actively used by Incus today as well as by some Docker and Kubernetes users.

Unfortunately, the LXCFS approach is starting to run into issues as tools increasingly rely on system call interfaces or other methods to obtain resource information, requiring more complex solutions such as Incus’ system call interception support (using the Seccomp Notifier).

Because of that development, we’ve been slowly discussing better ways to provide reliable resource information to userspace without having to rely on filesystem tricks or costly system call interception. But as with anything that requires widespread userspace adoption, it will take a while until such a solution is in place, so LXCFS isn’t going anywhere any time soon!

Announcement: https://discuss.linuxcontainers.org/t/lxcfs-6-0-1-lts-has-been-released/20277

Highlights of this point release:

  • Support for running multiple instances of LXCFS (--runtime-dir), see the sketch after this list
  • Detection of systems that have a Yama policy preventing reading process personalities
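
The --runtime-dir option mentioned above, for example, makes it possible to run a second LXCFS next to the system one. Here is a minimal sketch of what that could look like; both paths are arbitrary examples, so check lxcfs --help on your build for the exact flag spelling.

# Run a second LXCFS instance with its own runtime directory and mountpoint
mkdir -p /run/lxcfs-second /var/lib/lxcfs-second
lxcfs --runtime-dir=/run/lxcfs-second /var/lib/lxcfs-second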

Incus

Incus is our most actively developed project. This virtualization platform is less than a year old but has already seen over 3000 commits by over 100 individual contributors. Its first LTS release made it usable in production environments and significantly boosted its user base.

Announcement: https://discuss.linuxcontainers.org/t/incus-6-0-1-lts-has-been-released/20297

Highlights of this point release:

  • Extended source syntax for ZFS pools (allows mirror & raidz1/raidz2)
  • Cross-project listing on all objects (instances, profiles, images, storage volumes/buckets, networks, …)
  • Additional functions exposed to instance placement scriptlet
  • All create sub-commands in the CLI now accept YAML input
  • All list sub-commands in the CLI now accept customizable columns
  • The migration.stateful config key was expanded to containers too
  • Stateless network ACLs are now supported on OVN
  • New timestamp exposed for instance uptime
  • New incus top command (uses existing metric API; see the example after this list)
  • System load information in incus info --resources
  • PCI devices information in incus info --resources
  • Ability to query who has access to a given project or instance
  • Forceful deletion of projects
  • Improved alias handling in incus-simplestreams
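
To give a feel for a few of the CLI additions above, here is a short, hedged sketch; the commands come straight from the highlights, but the exact output and available columns may differ between builds.

# Live resource usage of local instances, built on the existing metrics API
incus top

# System load and PCI device information are now part of the resources output
incus info --resources

# Cross-project listing, here for instances (other list sub-commands gained it too)
incus list --all-projects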

What’s next?

We’re going to keep backporting all relevant fixes and minor improvements to our LTS branches and will likely be releasing another LTS point release of those 3 projects later this year.

There is no set schedule on LTS point releases as we instead prefer to wait until we feel there are significant enough fixes to warrant one, then make sure that all three projects are properly tested and ready for a release.

This year we’ve also decided to start releasing non-LTS releases of both LXC and LXCFS.
It’s something we used to do some years ago but then stopped, mostly due to lack of time.
So you can look forward to LXC and LXCFS 6.1 in Q4 of 2024!

A month later

It’s now been a whole month since I left Canonical and started working as an independent!

This has been quite the month, both professionally and personally!
In no particular order, this included setting up a new business, dealing with a somewhat last-minute datacenter move (thankfully just one floor down), doing some initial sponsored work, helping out with a LXD fork, selling a house and caring for a sick cat (now all back to normal).

Given everything that’s been happening, I thought I’d use the opportunity to write down some details on the most relevant things I’ve been doing and what to expect moving forward.

Zabbly

Zabbly is the name of the business I’ve registered here in Canada.

I didn’t really like the idea of doing all business moving forward just under my own name as I may want to sub-contract some aspects of it or even have employees down the line.
Having the business part of my life have its own name will make that a fair bit cleaner.

For now, the main things that have been moved over to Zabbly are my organization and IP allocations with ARIN, membership on the Montreal Internet Exchange (QIX) and a number of associated contracts related to AS399760 (my BGP ASN). As part of that, Zabbly is also now listed as the sponsor for all the Linux Containers infrastructure.

Being able to more clearly separate personal and work-related expenses is going to be another benefit of this move, even if legally and from a tax point of view, it’s still all me.

ZFS delegation

An initial bit of sponsored work I got to do this month has been adding support for ZFS delegation to LXD. This makes use of a ZFS 2.2 feature which allows for a dataset to be delegated to a particular user namespace. The ZFS tools can then be used from within that container to create nested datasets or manage snapshots.

This is very exciting as it was the one feature that btrfs had for which ZFS offered no equivalent. It should allow for things like running Docker with the ZFS backend inside of LXD containers, or having VPS users create their own datasets, handle their own snapshots and send and receive datasets.

The pull request can be found here: https://github.com/canonical/lxd/pull/12056
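
For the curious, here is a rough sketch of what using the feature could look like once that pull request lands. The zfs.delegate volume key and the dataset name below are assumptions on my part, so check the merged documentation for the actual syntax (ZFS 2.2 or newer is required either way).

# Enable delegation on the container's root volume (key name assumed)
lxc storage volume set default container/mycontainer zfs.delegate true
lxc restart mycontainer

# Inside the container, the delegated dataset can now be managed directly
lxc exec mycontainer -- zfs list
lxc exec mycontainer -- zfs snapshot default/containers/mycontainer@before-upgrade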

Incus

Some of you may have seen the announcement of a new LXD fork called Incus and its subsequent inclusion into the Linux Containers project.

This was quite an exciting development and the LXC team spent quite a bit of time over the past couple of weeks chatting with Aleksa and seeing where things were headed.

On my end, I initially helped out trying to make the thing actually pass the testsuite, which is quite a bit harder than it may sound when dealing with a pretty big codebase where everything has been renamed! I also contributed some ideas of what such a fork may want to change compared to stock LXD.

It’s not often that you get a second chance at designing something like LXD/Incus.
While having a working upgrade path and good backward compatibility is obviously still very important, the fact that anyone migrating will need to deal with some amount of manual work also makes it possible to do away with past mistakes and remove some bits that are seldom used.

I expect I’ll be spending a bunch of my time over the next couple of months helping get Incus into a releasable state: continuing with the current cleanups, getting the documentation back into shape and putting CI and publishing infrastructure back online (basically re-using what I was once providing to LXD).

The biggest task yet to come is to write tooling and processes to monitor changes happening in Canonical’s LXD and then cherry-pick those into Incus. Again, the hard fork, the name and path changes and a variety of other differences are going to make that a bit of a challenge, but once done, it should make it quite easy to do weekly syncs and reviews of changes.
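
For illustration only, the weekly review loop described above could look roughly like this; the remote and branch names are made up for the example, and the real tooling will also need to rewrite the renamed import paths on top of it.

# Track Canonical's LXD as an extra remote in the Incus tree (names are examples)
git remote add lxd https://github.com/canonical/lxd.git
git fetch lxd

# Review what landed upstream since the last sync
git log --oneline origin/main..lxd/main

# Cherry-pick the relevant commits, fixing up paths and names as needed
git cherry-pick <commit-sha>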

What’s next

As mentioned, I expect to spend a fair bit of my time over the next few weeks/months helping out with Incus, getting it into shape for an initial release.

For those who enjoyed the LXD YouTube channel, I’m also setting up a new channel that will primarily cover Incus but also some other of my projects: https://www.youtube.com/@TheZabbly.

I’m all set up for contract work and sponsorship now, so if there’s anything you think I can do for you, feel free to reach out at info@zabbly.com.

I’ve also been added to the Github Sponsors program, so if you’d just like to help out with my work on those various projects, that’s available too: https://github.com/sponsors/stgraber

Custom user mappings in LXD containers

Introduction

As you may know, LXD uses unprivileged containers by default.
The difference between an unprivileged container and a privileged one is whether the root user in the container is the “real” root user (uid 0 at the kernel level).

The way unprivileged containers are created is by taking a set of normal UIDs and GIDs from the host, usually at least 65536 of each (to be POSIX compliant) and mapping those into the container.

The most common example and what most LXD users will end up with by default is a map of 65536 UIDs and GIDs, with a host base id of 100000. This means that root in the container (uid 0) will be mapped to the host uid 100000 and uid 65535 in the container will be mapped to uid 165535 on the host. UID/GID 65536 and higher in the container aren’t mapped and will return an error if you attempt to use them.

From a security point of view, that means that anything which is not owned by the users and groups mapped into the container will be inaccessible. Any such resource will show up as being owned by uid/gid “-1” (rendered as 65534 or nobody/nogroup in userspace). It also means that should there be a way to escape the container, even root in the container would find itself with no more privileges on the host than a nobody user.
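
A quick way to see the default map in action, assuming the plain directory storage backend (the on-disk path will differ with other backends):

# Create a file as root inside an unprivileged container called "test"
lxc exec test -- touch /root/example

# On the host, that same file shows up as uid/gid 100000 (container id 0 + the map base)
sudo ls -ln /var/lib/lxd/containers/test/rootfs/root/example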

LXD does offer a number of options related to unprivileged configuration:

  • Increasing the size of the default uid/gid map
  • Setting up per-container maps
  • Punching holes into the map to expose host users and groups

Increasing the size of the default map

As mentioned above, in most cases, LXD will have a default map that’s made of 65536 uids/gids.

In most cases you won’t have to change that. There are however a few cases where you may have to:

  • You need access to uid/gid higher than 65535.
    This is most common when using network authentication inside of your containers.
  • You want to use per-container maps.
    In which case you’ll need 65536 available uid/gid per container.
  • You want to punch some holes in your container’s map and need access to host uids/gids.

The default map is usually controlled by the “shadow” set of utilities and files. On systems where that’s the case, the “/etc/subuid” and “/etc/subgid” files are used to configure those maps.

On systems that do not have a recent enough version of the “shadow” package, LXD will assume that it doesn’t have to share uid/gid ranges with anything else and will therefore assume control of a billion uids and gids, starting at the host uid/gid 100000.

But the common case is a system with a recent version of shadow.
An example of what the configuration may look like is:

stgraber@castiana:~$ cat /etc/subuid
lxd:100000:65536
root:100000:65536

stgraber@castiana:~$ cat /etc/subgid
lxd:100000:65536
root:100000:65536

The maps for “lxd” and “root” should always be kept in sync. LXD itself is restricted by the “root” allocation. The “lxd” entry is used to track what needs to be removed if LXD is uninstalled.

Now if you want to increase the size of the map available to LXD, simply edit both of the files and bump the last value from 65536 to whatever size you need. I tend to bump it to a billion just so I don’t ever have to think about it again:

stgraber@castiana:~$ cat /etc/subuid
lxd:100000:1000000000
root:100000:1000000000

stgraber@castiana:~$ cat /etc/subgid
lxd:100000:1000000000
root:100000:1000000000

After altering those files, you need to restart LXD to have it detect the new map:

root@vorash:~# systemctl restart lxd
root@vorash:~# cat /var/log/lxd/lxd.log
lvl=info msg="LXD 2.14 is starting in normal mode" path=/var/lib/lxd t=2017-06-14T21:21:13+0000
lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2017-06-14T21:21:13+0000
lvl=info msg="Kernel uid/gid map:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - u 0 0 4294967295" t=2017-06-14T21:21:13+0000
lvl=info msg=" - g 0 0 4294967295" t=2017-06-14T21:21:13+0000
lvl=info msg="Configured LXD uid/gid map:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - u 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
lvl=info msg=" - g 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
lvl=info msg="Connecting to a remote simplestreams server" t=2017-06-14T21:21:13+0000
lvl=info msg="Expiring log files" t=2017-06-14T21:21:13+0000
lvl=info msg="Done expiring log files" t=2017-06-14T21:21:13+0000
lvl=info msg="Starting /dev/lxd handler" t=2017-06-14T21:21:13+0000
lvl=info msg="LXD is socket activated" t=2017-06-14T21:21:13+0000
lvl=info msg="REST API daemon:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket t=2017-06-14T21:21:13+0000
lvl=info msg=" - binding TCP socket" socket=[::]:8443 t=2017-06-14T21:21:13+0000
lvl=info msg="Pruning expired images" t=2017-06-14T21:21:13+0000
lvl=info msg="Updating images" t=2017-06-14T21:21:13+0000
lvl=info msg="Done pruning expired images" t=2017-06-14T21:21:13+0000
lvl=info msg="Done updating images" t=2017-06-14T21:21:13+0000
root@vorash:~#

As you can see, the configured map is logged at LXD startup and can be used to confirm that the reconfiguration worked as expected.

You’ll then need to restart your containers to have them start using your newly expanded map.
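
One way to double-check that a container picked up the change after its restart is to look at the map it was last started with (the container name below is just an example):

# The recorded map should now cover the expanded range
lxc config get mycontainer volatile.last_state.idmap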

Per-container maps

Provided that you have a sufficient amount of uid/gid allocated to LXD, you can configure your containers to use their own, non-overlapping allocation of uids and gids.

This can be useful for two reasons:

  1. You are running software which alters kernel resource ulimits.
    Those user-specific limits are tied to a kernel uid and will cross container boundaries, leading to hard-to-debug issues where one container can perform an action but all others are then unable to do the same.
  2. You want to know that should there be a way for someone in one of your containers to somehow get access to the host that they still won’t be able to access or interact with any of the other containers.

The main downsides to using this feature are:

  • It’s somewhat wasteful, using 65536 uids and gids per container.
    That being said, you’d still be able to run over 60000 isolated containers before running out of system uids and gids.
  • It’s effectively impossible to share storage between two isolated containers as everything written by one will be seen as -1 by the other. There is ongoing work around virtual filesystems in the kernel that will eventually let us get rid of that limitation.

To have a container use its own distinct map, simply run:

stgraber@castiana:~$ lxc config set test security.idmap.isolated true
stgraber@castiana:~$ lxc restart test
stgraber@castiana:~$ lxc config get test volatile.last_state.idmap
[{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":65536}]

The restart step is needed to have LXD remap the entire filesystem of the container to its new map.
Note that this step will take a varying amount of time depending on the number of files in the container and the speed of your storage.

As can be seen above, after restart, the container is shown to have its own map of 65536 uids/gids.

If you want LXD to allocate more than the default 65536 uids/gids to an isolated container, you can bump the size of the allocation with:

stgraber@castiana:~$ lxc config set test security.idmap.size 200000
stgraber@castiana:~$ lxc restart test
stgraber@castiana:~$ lxc config get test volatile.last_state.idmap
[{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":200000},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":200000}]

If you’re trying to allocate more uids/gids than are left in LXD’s allocation, LXD will let you know:

stgraber@castiana:~$ lxc config set test security.idmap.size 2000000000
error: Not enough uid/gid available for the container.

Direct user/group mapping

The fact that all uids/gids in an unprivileged container are mapped to a normally unused range on the host means that sharing of data between host and container is effectively impossible.

Now, what if you want to share your user’s home directory with a container?

The obvious answer to that is to define a new “disk” entry in LXD which passes your home directory to the container:

stgraber@castiana:~$ lxc config device add test home disk source=/home/stgraber path=/home/ubuntu
Device home added to test

So that was pretty easy, but did it work?

stgraber@castiana:~$ lxc exec test -- bash
root@test:~# ls -lh /home/
total 529K
drwx--x--x 45 nobody nogroup 84 Jun 14 20:06 ubuntu

No. The mount is clearly there, but it’s completely inaccessible to the container.
To fix that, we need to take a few extra steps:

  • Allow LXD’s use of our user uid and gid
  • Restart LXD to have it load the new map
  • Set a custom map for our container
  • Restart the container to have the new map apply

stgraber@castiana:~$ printf "lxd:$(id -u):1\nroot:$(id -u):1\n" | sudo tee -a /etc/subuid
lxd:201105:1
root:201105:1

stgraber@castiana:~$ printf "lxd:$(id -g):1\nroot:$(id -g):1\n" | sudo tee -a /etc/subgid
lxd:200512:1
root:200512:1

stgraber@castiana:~$ sudo systemctl restart lxd

stgraber@castiana:~$ printf "uid $(id -u) 1000\ngid $(id -g) 1000" | lxc config set test raw.idmap -

stgraber@castiana:~$ lxc restart test

At which point, things should be working in the container:

stgraber@castiana:~$ lxc exec test -- su ubuntu -l
ubuntu@test:~$ ls -lh
total 119K
drwxr-xr-x 5  ubuntu ubuntu 8 Feb 18 2016 data
drwxr-x--- 4  ubuntu ubuntu 6 Jun 13 17:05 Desktop
drwxr-xr-x 3  ubuntu ubuntu 28 Jun 13 20:09 Downloads
drwx------ 84 ubuntu ubuntu 84 Sep 14 2016 Maildir
drwxr-xr-x 4  ubuntu ubuntu 4 May 20 15:38 snap
ubuntu@test:~$ 
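
As a sanity check, you can verify the mapping from the other direction too. In this sketch the file name is arbitrary; the point is that a file created by the container’s ubuntu user belongs to your own host user rather than to an id from the container’s private range:

# Inside the container, create a file as the mapped "ubuntu" user
lxc exec test -- su ubuntu -l -c "touch /home/ubuntu/mapping-check"

# Back on the host, the file is owned by your regular user
ls -l ~/mapping-check

If you only need a single user and group mapped, the raw.idmap key also accepts a “both <host-id> <container-id>” shorthand that covers the uid and gid in one line.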

Conclusion

User namespaces, the kernel feature that makes those uid/gid mappings possible, are a very powerful tool which finally made containers on Linux safe by design. They are however not the easiest thing to wrap your head around and all of that uid/gid map math can quickly become a major issue.

In LXD we’ve tried to expose just enough of those underlying features to be useful to our users while doing the actual mapping math internally. This makes things like the direct user/group mapping above significantly easier than it otherwise would be.

Going forward, we’re very interested in some of the work around uid/gid remapping at the filesystem level, as this would let us decouple the on-disk user/group map from the one used for processes, making it possible to share data between differently mapped containers and alter the various maps without needing to also remap the entire filesystem.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Discussion forum: https://discuss.linuxcontainers.org
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

LXC 2.0 has been released!

Introduction

Today I’m very pleased to announce the release of LXC 2.0, our second Long Term Support Release! LXC 2.0 is the result of a year of work by the LXC community with over 700 commits done by over 90 contributors!

It joins LXCFS 2.0 which was released last week and will very soon be joined by LXD 2.0 to complete our collection of 2.0 container management tools!

What’s new?

The complete changelog is linked below but the main highlights for me are:

  • More consistent user experience between the various LXC tools.
  • Improved checkpoint/restore support.
  • Complete rework of our CGroup handling code, including support for the CGroup namespace.
  • Cleaned up storage backend subsystem, including the addition of a new Ceph RBD backend.
  • A massive amount of bugfixes.
  • And lastly, we managed to get all that done without breaking our API, so LXC 2.0 is fully API compatible with LXC 1.0.

The focus with this release was stability and maintaining support for all the environments in which LXC shines. We still support all kernels from 2.6.32, though the exact feature set does obviously vary based on what the kernel provides. We also improved support for a bunch of architectures and fixed a lot of bugs and other rough edges.

This is the release you want to run in production for the next few years!
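
If you want to give it a quick spin, the usual workflow with the download template applies; the container name and image selection below are just examples.

# Create a container from the public image server (run as root or with an unprivileged setup configured)
lxc-create -t download -n demo -- --dist ubuntu --release xenial --arch amd64

# Start it and get a shell inside
lxc-start -n demo
lxc-attach -n demo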

Support length

As mentioned, LXC 2.0 is a Long Term Support release.
This is the second time we’ve done such a release, the first being LXC 1.0.

Long Term Support releases come with a 5 years commitment from upstream to do bugfixes and security updates and release new point releases when enough fixes have accumulated.

The end of life date for the various LXC versions is as follows:

  • LXC 1.0, released February 2014 will EOL on the 1st of June 2019
  • LXC 1.1, released February 2015 will EOL on the 1st of September 2016
  • LXC 2.0, released April 2016 will EOL on the 1st of June 2021

We therefore very strongly recommend that LXC 1.1 users update to LXC 2.0 as we will not be supporting that release for very much longer.

We also recommend production deployments stick to our Long Term Support release.

Project information

Upstream website: https://linuxcontainers.org/lxc/
Release announcement: https://linuxcontainers.org/lxc/news/
Code: https://github.com/lxc/lxc
IRC channel: #lxcontainers on irc.freenode.net
Mailing-lists: https://lists.linuxcontainers.org

Try it online

Want to see what a container with LXC 2.0 installed feels like?
You can get one online to play with here.

NorthSec 2015: behind the scenes

TLDR: NorthSec is an incredible security event; our CTF simulates a whole internet for every participating team. This allows us to create just about anything, from a locked down country to millions of vulnerable IoT devices spread across the globe. However, that flexibility comes at a high cost hardware-wise: as we get bigger and bigger, we need more and more powerful servers and networking gear. We’re very actively looking for sponsors, so get in touch with me or just buy us something on Amazon!

What’s NorthSec?

NorthSec is one of the biggest on-site Capture The Flag (CTF) security contests in North America. It’s organized yearly over a weekend in Montreal (usually in May) and, since the last edition, has been accompanied by a two-day security conference held before the CTF itself. The rest of this post will only focus on the CTF part though.

A view of the main room at NorthSec 2015

Teams arrive at the venue on Friday evening, get set up at their table and then get introduced to this year’s scenario and given access to our infrastructure. There they will have to fight their way through challenges, each earning them points and letting them go further and further. On Sunday afternoon, the top 3 teams are awarded their prize and we wrap up for the year.

Size wise, for the past two years we’ve had a physical limit of up to 32 teams of 8 participants and then a bunch of extra unaffiliated visitors. For the 2016 edition, we’re raising this to 50 teams for a grand total of 400 participants, thanks to some shuffling at the venue making some more room for us.

Why is it special?

The above may sound pretty simple and straightforward, however there are a few important details that set NorthSec apart from other CTFs.

  • It is entirely on-site. There are some very big online CTFs out there but very few on-site ones. Having everyone participating in the same room is valuable from a networking point of view but also ensures fairness by enforcing fixed size teams and equal network bandwidth and latency.
  • Every team gets its very own copy of the whole infrastructure. There are no shared services in the simulated world we provide them. That means one team’s actions cannot impact another.
  • Each simulation is its own virtual world with its own instance of the internet: we use hundreds of LXC containers and thousands of VLANs and networks FOR EVERY TEAM to provide the most realistic and complete environment you can think of.

World map of our fake internet

What’s our infrastructure like?

Due to the very high bandwidth and low latency requirements, most of the infrastructure is hosted on premises and on our hardware. We do plan on offloading Windows virtual machines to a public cloud for the next edition though.

We also provide a mostly legacy-free environment to our contestants: all of our challenges are connected to IPv6-only networks and run on 64-bit Ubuntu LTS in LXC with state of the art security configurations.

Our rack, on location at NorthSec 2015

All in all, for 32 teams (last year’s edition), we had:

  • 48000 virtual network interfaces
  • 2000 virtual carriers
  • 16000 BGP routers
  • 17000 Ubuntu containers
  • 100 Windows virtual machines
  • 20000 routing table entries

And all of that was running on:

  • Two firewalls (DELL SC1425)
  • Two infrastructure servers (DELL SC1425)
  • One management server (HP DL380 G5)
  • Four main contest hosts (HP DL380 G5)
  • Three backup contest hosts (DELL C6100)

On average we had 7 full simulations and 21 virtual machines running on every host (the backup hosts only had one each). That means each of the main contest hosts had:

  • 10500 virtual network interfaces
  • 435 virtual carriers
  • 3500 BGP routers
  • 3700 Ubuntu containers
  • 21 Windows virtual machines
  • 4375 routing table entries

Not too bad for servers that are (SC1425) or are getting close (DL380 G5) to being 10 years old now.

Past infrastructure challenges

In the past editions we’ve found numerous bugs in the various technologies we use when put under such a crazy load:

  • A variety of switch firmware bugs when dealing with several thousand IPv6-only networks.
  • Multiple Linux IPv6 kernel bugs (and one security issue) also related to an excess of IPv6 multicast traffic.
  • Several memory leaks and other bugs in LXC and related components that become very visible when you’re running upwards of 10000 containers.
  • Several more Linux kernel bugs related to performance scaling as we create more and more namespaces and nested namespaces.

As our infrastructure staff are heavily invested in these technologies, being upstream developers of or contributors to the main projects we use, those bugs were all rapidly reported, discussed and fixed. We always look forward to the next NorthSec as an opportunity to test the latest technology at scale in a completely controlled environment.

How can you help?

As I mentioned, we’ve been capped at 32 teams and around 300 attendees for the past two years. Our existing hardware was barely sufficient to handle the load during those two editions, so we urgently need to refresh it to offer the best possible experience to our participants.

We’re planning on replacing most if not all of our hardware with slightly more recent equivalents, also upgrading from rotating drives to SSDs and improving our network. On the software side, we’ll be upgrading to a newer Linux kernel, possibly to Ubuntu 16.04, switching from btrfs to zfs and from LXC to LXD.

We are a Canadian non-profit organization with all our staff being volunteers so we very heavily rely on sponsors to be able to make the event a success.

If you or your company would like to help by sponsoring our infrastructure, get in touch with me. We have several sponsoring levels and can get you the visibility you’d like, ranging from a mention on our website and at the event to on-site presence with a recruitment booth and even, if our interests align, inclusion of your product in some of our challenges.

We also have an Amazon wishlist of smaller (cheaper) items that we need to buy in the near future. If you buy something from the list, get in touch so we can properly thank you!

Oh and as I briefly mentioned at the beginning, we have a two-day, single-track conference ahead of the CTF. We’re actively looking for speakers, so if you have something interesting to present, the CFP is here.
