This is the third blog post in this series about LXD 2.0.
As there are a lot of commands involved in managing LXD containers, this post is rather long. If you’d prefer a quick step-by-step tour of those same commands, you can try our online demo instead!
Creating and starting a new container
As I mentioned in the previous posts, the LXD command line client comes pre-configured with a few image sources. Ubuntu is the best covered, with official images for all of its releases and architectures, but there are also a number of unofficial images for other distributions. Those are community-generated and maintained by LXC upstream contributors.
Ubuntu
If all you want is the best supported release of Ubuntu, all you have to do is:
lxc launch ubuntu:
Note however that the meaning of this will change as new Ubuntu LTS releases are released. So for scripting use, you should stick to mentioning the actual release you want (see below).
Ubuntu 14.04 LTS
To get the latest, tested, stable image of Ubuntu 14.04 LTS, you can simply run:
lxc launch ubuntu:14.04
In this mode, a random container name will be picked.
If you prefer to specify your own name, you may instead do:
lxc launch ubuntu:14.04 c1
Should you want a specific (non-primary) architecture, say a 32bit Intel image, you can do:
lxc launch ubuntu:14.04/i386 c2
Current Ubuntu development release
The “ubuntu:” remote used above only provides official, tested images for Ubuntu. If you instead want untested daily builds, as is appropriate for the development release, you’ll want to use the “ubuntu-daily:” remote instead.
lxc launch ubuntu-daily:devel c3
In this example, whatever the latest Ubuntu development release is will automatically be picked.
You can also be explicit, for example by using the code name:
lxc launch ubuntu-daily:xenial c4
Latest Alpine Linux
Alpine images are available on the “images:” remote and can be launched with:
lxc launch images:alpine/3.3/amd64 c5
And many more
A full list of the Ubuntu images can be obtained with:
lxc image list ubuntu:
lxc image list ubuntu-daily:
And of all the unofficial images:
lxc image list images:
A list of all the aliases (friendly names) available on a given remote can also be obtained with (for the “ubuntu:” remote):
lxc image alias list ubuntu:
Creating a container without starting it
If you want to just create a container or a batch of containers without starting them immediately, you can simply replace “lxc launch” with “lxc init”. All the options are identical, the only difference being that it will not start the container for you after creation.
lxc init ubuntu:
Information about your containers
Listing the containers
To list all your containers, you can do:
lxc list
There are a number of options you can pass to change which columns are displayed. On systems with a lot of containers, the default columns can be a bit slow to render (due to having to retrieve network information from the containers), so you may instead want:
lxc list --fast
This shows a different set of columns that require less processing on the server side.
You can also filter based on name or properties:
stgraber@dakara:~$ lxc list security.privileged=true
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
| NAME |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
| suse | RUNNING | 172.17.0.105 (eth0) | 2607:f2c0:f00f:2700:216:3eff:fef2:aff4 (eth0) | PERSISTENT | 0         |
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
In this example, only containers that are privileged (user namespace disabled) are listed.
stgraber@dakara:~$ lxc list --fast alpine
+-------------+---------+--------------+----------------------+----------+------------+
|    NAME     |  STATE  | ARCHITECTURE |      CREATED AT      | PROFILES |    TYPE    |
+-------------+---------+--------------+----------------------+----------+------------+
| alpine      | RUNNING | x86_64       | 2016/03/20 02:11 UTC | default  | PERSISTENT |
+-------------+---------+--------------+----------------------+----------+------------+
| alpine-edge | RUNNING | x86_64       | 2016/03/20 02:19 UTC | default  | PERSISTENT |
+-------------+---------+--------------+----------------------+----------+------------+
And in this example, only the containers which have “alpine” in their name are listed (complex regular expressions are also supported).
Getting detailed information from a container
As the list command obviously can’t show you everything about a container in a nicely readable way, you can query information about an individual container with:
lxc info <container>
For example:
stgraber@dakara:~$ lxc info zerotier
Name: zerotier
Architecture: x86_64
Created: 2016/02/20 20:01 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 31715
Processes: 32
Ips:
  eth0:   inet    172.17.0.101
  eth0:   inet6   2607:f2c0:f00f:2700:216:3eff:feec:65a8
  eth0:   inet6   fe80::216:3eff:feec:65a8
  lo:     inet    127.0.0.1
  lo:     inet6   ::1
  lxcbr0: inet    10.0.3.1
  lxcbr0: inet6   fe80::c0a4:ceff:fe52:4d51
  zt0:    inet    29.17.181.59
  zt0:    inet6   fd80:56c2:e21c:0:199:9379:e711:b3e1
  zt0:    inet6   fe80::79:e7ff:fe0d:5123
Snapshots:
  zerotier/blah (taken at 2016/03/08 23:55 UTC) (stateless)
Life-cycle management commands
Those are probably the most obvious commands of any container or virtual machine manager but they still need to be covered.
Oh and all of them accept multiple container names for batch operation.
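For example, to start three hypothetical containers named c1, c2 and c3 in one go:
lxc start c1 c2 c3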
start
Starting a container is as simple as:
lxc start <container>
stop
Stopping a container can be done with:
lxc stop <container>
If the container isn’t cooperating (not responding to SIGPWR), you can force it with:
lxc stop <container> --force
restart
Restarting a container is done through:
lxc restart <container>
And if not cooperating (not responding to SIGINT), you can force it with:
lxc restart <container> --force
pause
You can also “pause” a container. In this mode, all the container tasks will be sent the equivalent of a SIGSTOP which means that they will still be visible and will still be using memory but they won’t get any CPU time from the scheduler.
This is useful if you have a CPU hungry container that takes quite a while to start but that you aren’t constantly using. You can let it start, then pause it, then start it again when needed.
lxc pause <container>
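To resume a paused container, you just start it again. For example, with a hypothetical container named c1:
lxc pause c1
lxc start c1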
delete
Lastly, if you want a container to go away, you can delete it for good with:
lxc delete <container>
Note that you will have to pass “--force” if the container is currently running.
Container configuration
LXD exposes quite a few container settings, including resource limits, control of container startup and a variety of device pass-through options. The full list is far too long to cover in this post, but it’s all covered in the LXD configuration documentation.
As far as devices go, LXD currently supports the following device types:
- disk
  This can be a physical disk or partition being mounted into the container or a bind-mounted path from the host.
- nic
  A network interface. It can be a bridged virtual ethernet interface, a point-to-point device, an ethernet macvlan device or an actual physical interface being passed through to the container.
- unix-block
  A UNIX block device, e.g. /dev/sda
- unix-char
  A UNIX character device, e.g. /dev/kvm
- none
  This special type is used to hide a device which would otherwise be inherited through profiles.
Configuration profiles
The list of all available profiles can be obtained with:
lxc profile list
To see the content of a given profile, the easiest is to use:
lxc profile show <profile>
And should you want to change anything inside it, use:
lxc profile edit <profile>
You can change the list of profiles which apply to a given container with:
lxc profile apply <container> <profile1>,<profile2>,<profile3>,...
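For example, to go back to just the default profile on a hypothetical container named c1:
lxc profile apply c1 default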
Local configuration
For things that are unique to a container and so don’t make sense to put into a profile, you can just set them directly against the container:
lxc config edit <container>
This behaves the exact same way as “profile edit” above.
Instead of opening the whole thing in a text editor, you can also modify individual keys with:
lxc config set <container> <key> <value>
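For example, to set one of the resource limits mentioned earlier on a hypothetical container named c1:
lxc config set c1 limits.memory 256MB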
Or add devices, for example:
lxc config device add my-container kvm unix-char path=/dev/kvm
This will set up a /dev/kvm entry for the container named “my-container”.
The same can be done for a profile using “lxc profile set” and “lxc profile device add”.
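For example, against the default profile (the key and device here are just examples):
lxc profile set default limits.memory 256MB
lxc profile device add default kvm unix-char path=/dev/kvm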
Reading the configuration
You can read the container local configuration with:
lxc config show <container>
Or to get the expanded configuration (including all the profile keys):
lxc config show --expanded <container>
For example:
stgraber@dakara:~$ lxc config show --expanded zerotier
name: zerotier
profiles:
- default
config:
  security.nesting: "true"
  user.a: b
  volatile.base_image: a49d26ce5808075f5175bf31f5cb90561f5023dcd408da8ac5e834096d46b2d8
  volatile.eth0.hwaddr: 00:16:3e:ec:65:a8
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: eth0
    type: nic
    limits.ingress: 10Mbit
    limits.egress: 10Mbit
  root:
    path: /
    size: 30GB
    type: disk
  tun:
    path: /dev/net/tun
    type: unix-char
ephemeral: false
That one is very convenient to check what will actually be applied to a given container.
Live configuration update
Note that unless indicated in the documentation, all configuration keys and device entries are applied to affected containers live. This means that you can add and remove devices or alter the security profile of running containers without ever having to restart them.
Getting a shell
LXD lets you execute tasks directly inside the container. The most common use of this is to get a shell in the container or to run some admin tasks.
The benefit of this compared to SSH is that you’re not dependent on the container being reachable over the network or on any software or configuration being present inside the container.
Execution environment
One thing that’s a bit unusual with the way LXD executes commands inside the container is that it’s not itself running inside the container, which means that it can’t know what shell to use, what environment variables to set or what path to use for your home directory.
Commands executed through LXD will always run as the container’s root user (uid 0, gid 0) with a minimal PATH environment variable set and a HOME environment variable set to /root.
Additional environment variables can be passed through the command line or can be set permanently against the container through the “environment.<key>” configuration options.
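For example, to permanently set a locale variable on a hypothetical container named c1:
lxc config set c1 environment.LANG C.UTF-8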
Executing commands
Getting a shell inside a container is typically as simple as:
lxc exec <container> bash
That’s assuming the container does actually have bash installed.
More complex commands require the use of a separator for proper argument parsing:
lxc exec <container> -- ls -lh /
To set or override environment variables, you can use the “--env” argument, for example:
stgraber@dakara:~$ lxc exec zerotier --env mykey=myvalue env | grep mykey
mykey=myvalue
Managing files
Because LXD has direct access to the container’s file system, it can directly read and write any file inside the container. This can be very useful to pull log files or exchange files with the container.
Pulling a file from the container
To get a file from the container, simply run:
lxc file pull <container>/<path> <dest>
For example:
stgraber@dakara:~$ lxc file pull zerotier/etc/hosts hosts
Or to read it to standard output:
stgraber@dakara:~$ lxc file pull zerotier/etc/hosts -
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Pushing a file to the container
Push simply works the other way:
lxc file push <source> <container>/<path>
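For example, to push a local file named hosts into /tmp of the zerotier container used above:
lxc file push hosts zerotier/tmp/hosts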
Editing a file directly
Edit is a convenience function which simply pulls a given path, opens it in your default text editor and then pushes it back to the container when you close it:
lxc file edit <container>/<path>
Snapshot management
LXD lets you snapshot and restore containers. Snapshots include the entirety of the container’s state (including running state if --stateful is used), which means all container configuration, container devices and the container file system.
Creating a snapshot
You can snapshot a container with:
lxc snapshot <container>
It’ll get named snapX where X is an incrementing number.
Alternatively, you can name your snapshot with:
lxc snapshot <container> <snapshot name>
Listing snapshots
The number of snapshots a container has is listed in “lxc list”, but the actual snapshot list is only visible in “lxc info”.
lxc info <container>
Restoring a snapshot
To restore a snapshot, simply run:
lxc restore <container> <snapshot name>
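For example, a quick round-trip with a hypothetical container named c1 and a snapshot named clean:
lxc snapshot c1 clean
lxc restore c1 clean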
Renaming a snapshot
Renaming a snapshot can be done by moving it with:
lxc move <container>/<snapshot name> <container>/<new snapshot name>
Creating a new container from a snapshot
You can create a new container which will be identical to another container’s snapshot except for the volatile information being reset (MAC address):
lxc copy <source container>/<snapshot name> <destination container>
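For example, with the hypothetical c1 container and its clean snapshot from above:
lxc copy c1/clean c2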
Deleting a snapshot
And finally, to delete a snapshot, just run:
lxc delete <container>/<snapshot name>
Cloning and renaming
Getting clean distribution images is all nice and well, but sometimes you want to install a bunch of things into your container, configure it and then branch it into a bunch of other containers.
Copying a container
To copy a container and effectively clone it into a new one, just run:
lxc copy <source container> <destination container>
The destination container will be identical in every way to the source one, except it won’t have any snapshots and its volatile keys (MAC address) will be reset.
Moving a container
LXD lets you copy and move containers between hosts, but that will get covered in a later post.
For now, the “move” command can be used to rename a container with:
lxc move <old name> <new name>
The only requirement is that the container be stopped, everything else will be kept exactly as it was, including the volatile information (MAC address and such).
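For example, to rename a hypothetical container c1 to web01:
lxc stop c1
lxc move c1 web01
lxc start web01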
Conclusion
This pretty long post covered most of the commands you’re likely to use in day to day operation.
Obviously a lot of those commands have extra arguments that let you be more efficient or tweak specific aspects of your LXD containers. The best way to learn about all of those is to go through the help for those you care about (--help).
Extra information
The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
And if you don’t want or can’t install LXD on your own machine, you can always try it online instead!
What of Open vSwitch? Will it be possible to connect containers to it without having to set up veth interfaces and adding another external layer to set up VLANs?
Yes, just set the “parent” property to an OpenVSwitch bridge name and it should all work. liblxc has had transparent openvswitch support for a while.
Thank you! My experience with LXC is a bit outdated, I used it only on Trusty. I was waiting for Xenial to try LXD.
There is a typo in “Editing a file directory”: should be “directly”.
Other than that, is there a way to share a directory with the host filesystem? With legacy LXC, I would simply create a symlink… (with the appropriate chmod of course).
Thanks.
You could still symlink to /var/lib/lxd/containers/NAME/rootfs/PATH if you just want the host to see a given path.
Alternatively you can use the “lxc file” command to just deal with the files you want which will also work remotely.
Lastly, you can setup a bind-mount so that a host path is visible inside the container with:
lxc device add shared-path disk path=/path/in/the/container source=/path/on/the/host
Can you please explain the last example
lxc device add shared-path disk path=/path/in/the/container source=/path/on/the/host
I get “error: unknown command: device” and can’t find more documentation on this.
Oops, sorry, I made a typo, it should be:
lxc config device add shared-path disk path=/path/in/the/container source=/path/on/the/host
Thank you so much for helping and for LXC/LXD.
Merci Stéphane, j’ai très hate de lire la suite!
Is it possible to use zfs in a container?
Can you use and load the zfs module in a container, create a zpool from a disk-image or disk/volume dedicated for the container?
If the LXD host (server) doesn’t support ZFS, could you still use ZFS as described above in the container?
No you can’t. Containers share the host kernel so therefore cannot load modules themselves.
ZFS upstream has plans to support container nesting at which point if the host kernel supports ZFS, a container would be able to use it by itself for its sub-containers, but that’s not the case right now.
These are great articles. Looking forward to the rest.
With the latest network bridge (lxdbr0) setup, as of just a few days ago, it seems to behave differently from the old lxcbr0. But I haven’t found much/any info on how to modify it. For example, if I want to use a physical NIC to bridge my containers to, getting a DHCP address from an upstream server instead of using NAT in LXD, how would I do that? Maybe that’s an entire article in itself?
Hi,
In your case, you would define a bridge manually in /etc/network/interfaces with something like:
auto br0
iface br0 inet manual
bridge-ports eth1
And then run “dpkg-reconfigure lxd”, select that you do not want to setup a new bridge, then select that you want to use an existing bridge, then type in br0.
Alternatively, you could use MACVlan by again running “dpkg-reconfigure lxd”, this time saying no to both the new bridge and the existing bridge so the lxd-bridge feature is turned off entirely, then run:
lxc profile device set default eth0 nictype macvlan
lxc profile device set default eth0 parent eth1
Thanks Stéphane! Suggestion #1 worked after a bit of trial and error. This was a bit more challenging because I’m running LXD for testing on a desktop dev machine but I wanted to use the local LAN instead of having lxdbr0 provide NAT. Because this machine is set up using Network Manager to support a bunch of other things, rather than directly editing /etc/network/interfaces, I had to build the bridge using Network Manager.
I would really like to have this bridging working. I am running Ubuntu 16.04 and I have an ens3 as my main ethernet device. I have tried it with a bridge and I have reinstalled and tried it straight from reinstalling. I have the lxd container running, but no networking.
When you mentioned these two commands:
lxc profile device set default eth0 nictype macvlan
lxc profile device set default eth0 parent eth1
what is eth0 and what is eth1 ?
sorry I can only guess..
I am trying to setup network bridge inside a VirtualBox VM. The bridge is working and I can access VM and LXD containers from my MAC but the containers are not able to communicate with outside world. Though they can communicate with VM itself. Same goes with macvlan setup.
Is there any problem with virtual machines and network bridges?
I am using VirtualBox 5.0 with Ubuntu 16.04 connecting to router with bridge network of VirtualBox. Initialised LXD with my bridge br0.
I would like to add that I am trying to use a Centos 7 host, that is hosting my Ubuntu 16.04. I am trying to run my LXD on that Ubuntu 16.04 kvm image. I can get the containers started, but I only see IPv6 link-local and IPv6 static auto config addresses – no IPv4.
Hi Steve,
Have you initialised your LXD daemon with lxd init? That process should set up a new bridge, lxdbr0, and an IPv4/IPv6 NAT network as well.
Try the following command; it should start the network configuration:
sudo dpkg-reconfigure -p medium lxd
Turns out its an issue with Wi-Fi and bridge setup. https://www.virtualbox.org/ticket/10019
Hello, Stéphane!
Could you please detail more about networking to LXD containers?
for example: i have a personal pc, and i want to run on it a lamp server ( multisite) and mail server ( zimbra ) using LXD containers, how should i setup the network interfaces and bridge so that this servers are seen from the outside?
pc name domain.com ( with static external ip, and name servers pointing to it )
LXD container 1 named lamp (domain.com, domain2.com, …)
LXD container 2 named mail ( mail.domain.com )
Just like you would with virtual machines, set up NAT rules on the host which forward the ports you want to the containers’ IPs.
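For example, a minimal sketch assuming the container has the IPv4 address 10.0.3.100 and you want to forward HTTP arriving on the host’s eth0:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.3.100:80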
For a small setup (only a handful of containers) what do you think is best practice?
From this post I gather you would assign the ips to the host (eth0, eth0:0, …) and then use iptables to forward ports. Another way would be routing, which I get the impression is the preferred method of networking. Instead of creating a bridge on the host and directly connection the virtual ethernet devices to that bridge. While I think I can set up both methods, I lack the experience to weigh the pros and cons of each method.
My questions are as such: How would you assign the ips to the container? Both when routing and when using nat, the containers need persistent ip addresses. Would you assign static ips or use dhcp to assign them persistent ips? And how? Using lxc config files for static ip? How? What method for dhcp? For example the /etc/default/lxd-bridge file has a line to include a dnsmasq config file, but also says in the beginning that it shouldn’t be edited by hand.
I wish you would discuss these things more. How you would go about solving up real world scenarios and why you choose which way.
One little sidenote: For a small server I have limited ram and I read it is recommended to set aside at least 8GB of memory just for zfs itself. So I went with btrfs as my next choice. When doing lxd init, it only gave me zfs or dir as a choice. I strongly suspect that it still enables all btrfs features when choosing “dir”, but I would have liked it to be more precise in that at this point in the setup process. Because when discussing filesystem choice in your blog series, “simple directory” is an option which is strongly discouraged. To me it looked like zfs and “simply directory” were the options at that point.
To clarify: I meant to ask about best practice ideas for assigning public ips to containers. Either by using a bridge on the host and directly connecting or by routing or by nat. And I gather from your posts that the latter two are more popular. Since you need persistent private ips to enable both nat and routing, I was wondering about the best method for that.
I have another question:
How does NAT and lxdbr0 work anyways? I use Xenial for both host and container and then switch the container’s /etc/network/interfaces eth0 from dhcp to static and lose internet access. Which might be how it is supposed to be, since there is not nat setup on the host for my new, static ip. The problem: with dhcp it works, but I can’t find out why, because I can’t find the corresponding entry in iptables.
So where is that switch? How does lxd connect the containers to the internet, when it distributes ip addresses via dhcp?
Scratch that. I found the iptables entry:
# iptables -t nat -L
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all — 10.1.135.0/24 !10.1.135.0/24
And it works.
But it didn’t work before, after I installed ufw. I think I already have an idea. One container was already running and ufw or iptables may have “grandfathered” that connection in, while not showing the rules, when I looked for them.
So please ignore parent.
Hello, Stéphane!
Could you please advice about this error.
error: Get https://cloud-images.ubuntu.com/releases/streams/v1/index.json: lookup cloud-images.ubuntu.com on [::1]:53: read udp [::1]:60984->[::1]:53: read: connection refused
Looks like DNS resolution failure for cloud-images.ubuntu.com
Hi Stéphane, I’m a newbie in LXD, LXC and container technology in general. My problem is creating a bridged NIC. On my physical server I have eno1 (an ethernet interface) connected to a NATed network -> internet. LXD created lxdbr0 for me (with DHCP for containers) and by default created eth0 in my container (with an IP from lxdbr0).
I would like to have a bridged eth1 interface in my container connected to the physical eno1.
my command was: lxc config device add B1 eth1 nic nictype=bridged parent=eno1.
B1 is name of container. After that when i try to start container i get error message:
error: Error calling ‘lxd forkstart B1 /var/lib/lxd/containers /var/log/lxd/B1/lxc.conf’ : err=’exit status 1′
If you can help me, it would be appreciated 🙂 Phys. Server runnig ubuntu 16.04. And container is also 16.04.
lxc config device set B1 eth1 nictype macvlan
That should do the trick. Your eno1 isn’t a bridge so you either need to create a bridge on your host and then use that with nictype=bridged or just use nictype=macvlan which does something similar for you without any extra host side configuration. The above command changes the interface from bridged to macvlan.
THX, but i found my problem, it was in config file, /etc/lxd-bridge.conf where LXD_IPV4_NAT=”” so i changed to “true”.
So now lxdbr0 bridge working 🙂 my container has access to internet(NAT network).
Now I’m trying to give my wireless interface to the container, but again I’m in trouble because after “lxc config device add B1 wlan1 nic nictype=physical parent=wlo1” I get the error: wtf? hwaddr= name=eth1
I am facing the same issue when I try to pass one of my host interfaces to the LXD container. Any insight?
Hello Stéphane,
I am new to LXC and LXD. While trying to follow your tutorial, my machine thinks LXC is not installed.
[code]
bas@Viki ~ $ lxc
No command ‘lxc’ found, did you mean:
Command ‘llc’ from package ‘llvm’ (universe)
Command ‘lc’ from package ‘mono-devel’ (main)
Command ‘axc’ from package ‘afnix’ (universe)
Command ‘lpc’ from package ‘cups-bsd’ (main)
Command ‘lpc’ from package ‘lpr’ (universe)
Command ‘lpc’ from package ‘lprng’ (universe)
lxc: command not found
bas@Viki ~ $
[/code]
It does think LXD is installed, with some error messages
[code]
bas@Viki ~ $ lxd
WARN[04-27|16:50:32] Per-container AppArmor profiles are disabled because the mac_admin capability is missing.
WARN[04-27|16:50:32] Couldn’t find the CGroup pids controller, process limits will be ignored.
WARN[04-27|16:50:32] CGroup memory swap accounting is disabled, swap limits will be ignored.
error: remove /var/lib/lxd/devlxd/sock: permission denied
[/code]
Some background info:
Running Mint 17 (Ubuntu 14.04 LTS)
Installed using ppa:ubuntu-lxc/stable
I probably did something wrong. What did I overlook?
Sounds like you have lxd installed but not lxd-client.
Ubuntu installs recommends by default so “apt-get install lxd” gets you both “lxd” and “lxd-client”, but maybe Mint is weird in that regard.
Anyway, install the lxd-client package and things should work a bit better.
Hi, Stéphane,
And many thanks for your wonderful series of LXD articles! I would like to ask you a couple of questions on the macvlan case:
* Can I set a fixed MAC address for an LXD container?
* Configuring iptables inside the container, is this the common practice?
Hi,
1) LXD will set up a fixed MAC address for you (as a volatile key). You can either change that with “lxc config edit” or you can just add the macvlan nic device to the container directly and then set the hwaddr key there.
2) Correct, macvlan’s design is such that firewalling at the host level is basically impossible. So firewall in the container or firewall at your gateway but firewalling on the host basically can’t be done.
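For the first point, a sketch assuming a container named c1, a host interface eth0 and a made-up MAC address:
lxc config device add c1 eth0 nic nictype=macvlan parent=eth0 hwaddr=00:16:3e:01:02:03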
thank you, stephane. similar newbie, struggling with lxd networking bridge setup. 16.04 host and 16.04 guests. pure bridge, not NAT. host ifconfig tells me eno1 as my host ethernet to the outside world. (I am still confused when to use eth1, eth0, or eno0, so I usually try all), so I tried your /etc/network/interfaces:
auto lo
iface lo inet loopback
auto br0
iface br0 inet manual
bridge-ports eth1
with eth1 randomly replaced by eth0 or eno1, but in all cases
host# /etc/init.d/networking restart
fails to come up. grrr… any advice?
Do you have the bridge-utils package installed?
not yet…thanks for the hint.
hi
I have installed LXD, but when I log in over SSH as the root user to an Ubuntu container, some main directories in the root folder, like /home, are not shown.
When I go to /home and enter ls, it says:
ls: cannot open directory ‘.’: Permission denied
Sounds like you bind-mounted /home or are otherwise seeing some paths from the host which aren’t mapped in your container’s id range.
I would like to create a network namespace inside a container. This won’t work.
root@beta:~# ip netns add ntkv0
mount –make-shared /var/run/netns failed: Permission denied
Is it really impossible to do?
Thanks,
Luca
I also tried to use “unconfined” for apparmor profile. This gained some steps but not enough. Now I can add a network namespace but not use it.
root@beta:~# ip netns add ntkv0
root@beta:~# ip netns
ntkv0
root@beta:~# ip netns exec ntkv0 bash
mount of /sys failed: Operation not permitted
I have the same problem. I set “lxc config set container security.nesting true”, and now I’m getting:
“mount of /sys failed: Operation not permitted”
Any help would be appreciated
Hi Stephane,
I’m very interested in LXD, I played with a couple of hosts and I like it but I have a big problem:
I added a second NIC to one of my hosts and created a second bridge, then I edited the default profile to let containers use it.
/etc/network/interfaces
auto ens160
iface ens160 inet manual
auto br0
iface br0 inet static
address 192.168.1.103
netmask 255.255.255.0
gateway 192.168.1.1
bridge-ifaces ens160
bridge-ports ens160
up ifconfig ens160 up
auto ens192
iface ens192 inet manual
auto br1
iface br1 inet static
address 10.10.0.103
netmask 255.255.255.0
bridge-ifaces ens192
bridge-ports ens192
bridge-stp on
up ifconfig ens192 up
lxd_editor
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  eth1:
    name: eth1
    nictype: bridged
    parent: br1
    type: nic
All IPs are statically defined in the container. I can reach the external network from eth0 but not from eth1. From the container’s eth1 I can only ping the host address, nothing else. Where is my mistake?
Thanks
Giuseppe
Hi,
I’m running bacula in a lxc(2.0.1)/lxd(2.0.2) container, and added some tape drives to it:
lxc config device add bacula01 /dev/st13m unix-char path=/dev/st13
(+few other aliases these drives have). The problem is when bacula tries using the drives it fails the tests (Attempt to WEOF on non-appendable Volume).
Dmesg on the host says:
st 7:0:2:0: [st13] MTSETDRVBUFFER only allowed for root.
How can I make the container root use the drive as root (or let it use all the features)?
I’ve tried doing ‘mt -f /dev/nst0 drvbuffer’ on host and in the container, it fails with ‘/dev/nst0: Operation not permitted’ in the latter one.
Hello Stéphane,
I read the following in the official Ubuntu documentation for lxd:
“It is possible to create a container without a private network namespace. In this case, the container will have access to the host networking like any other application. Note that this is particularly dangerous if the container is running a distribution with upstart, like Ubuntu, since programs which talk to init, like shutdown, will talk over the abstract Unix domain socket to the host’s upstart, and shut down the host.”
However, in lxd documentation on Github I found the following restriction:
“LXD supports different kind of network devices: physical: Straight physical device passthrough from the host. The targeted device will vanish from the host and appear in the container”
What I am trying to achieve is precisely what was described in the first quote: a container sharing the network namespace with the supervisor host. I am fully aware of the consequences and am ready to accept them. However, no matter how I try, the only thing I’ve been able to achieve so far is what is described in the second quote, up to the point where the network adapter used on the supervisor vanishes, causing it to lose network connectivity, while the container takes over and becomes accessible over the network using the host’s IP address.
I am able to achieve what I want in Docker when using “host” network driver. How can I do this in lxd?
Hi Stephane,
Thank you for the great article. I am new to LXC/LXD and while setting up my test containers I faced following issue. I configured LXC to use LVM as its back storage with following parameters:
storage.lvm_fstype: xfs
storage.lvm_vg_name: vg_cont
storage.lvm_volume_size: 300GiB
Unfortunately, every time I create a new container the disk space is set to the 10GB default. I even tried to expand the LV assigned to the container with the lvexpand command; although the LV is expanded, it is not visible to the container.
Hi,
I’m running LXD on Arch (4.8.6-1-ARCH). In “lxd init” wizard I selected existing ZFS dataset (storage/lxd), and existing bridge (br0).
I noticed that after that ZFS mountpoint for storage/lxd is broken (it’s set to none), so I did a “sudo zfs set mountpoint=/storage/lxd storage/lxd”.
After running “lxc launch ubuntu:16.04 xenial -c security.privileged=true” I can see two additional ZFS datasets (storage/lxd/{containers,images}), but they have mountpoints set to /var/lib/lxd/{containers,images} – that’s not something I want obviously.
Both directories under storage/lxd are empty, while those under /var/lib/lxd have containers/images.
My questions are:
1) How can I make “lxc launch…” use datasets under storage/lxd, not /var/lib/lxd?
2) How can I make each container use a prepared ZFS dataset, so that I can have separated snapshots for each container? With LXC I was using “lxc-create –dir=…”
Thanks,
Predrag
Is host networking available on LXD containers? Is there a way for LXD containers to use the host physical port directly?
In the default setup the container network is behind lxd-bridge and ports can be forwarded to specific services. How about the source address? Is there a way to show the real source address instead of the NAT IP to the application inside the container? It’s necessary to deal with access lists in apps like postfix, apache and bind. thx
Hi Stephane
I would like some help because I think I made a mistake. Trying to fix an issue with lxdbr0, I deleted it from brctl.
I tried to delete it after from lxd network and I have the message:
“`bash
lxc network delete lxdbr0
error: not found
“`
How can I fix this plz ?
Cheers
Winael
Please help.
I’ve been trying to init LXD on a Raspberry Pi B, running ubuntu-core-16-rc2.img on it. I’m connected through SSH. When I want to init LXD, I get the following error:
jorgel-herrera@localhost:~$ sudo lxd init
error: Unable to talk to LXD: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: connection refused
I’ve googled a lot but can’t find any clear answer.
hope you can help.
Thank’s
@Jorge,
did you get it working, I’m having the same problem.
Regards
You should consider changing the snapshot delete syntax to something more differentiated from container delete. I just mistakenly deleted my container by replacing the slash in “lxc delete <container>/<snapshot>” with a space.. /facepalm
On an Ubuntu 16.04 server, in order to be able to resolve container names from the host machine, I added the lines
search lxd
nameserver 10.0.3.1
to /etc/resolvconf/resolv.conf.d/head
I could then ping the containers by name, e.g.
ping new_container
PING new_container.lxd (10.0.3.96) 56(84) bytes of data.
64 bytes from new_container.lxd (10.0.3.96): icmp_seq=1 ttl=64 time=0.064 ms
This certainly works and survives reboots, however it may not be the optimum way to achieve this goal, so correct me if I am wrong.
Desperate…this is the word…
I have lost a week of work time. When I restarted the host machine for the first time, I lost the ZFS pool… and so all the containers.
Can you assist me please? I have post the logs in https://github.com/lxc/lxd/issues/2904
But the issue is closed…
Thanks a billion times
Hello,
I cannot find the answer for the following question. Is it possible to hook my IDE (PHPStorm and IntelliJ) to actually work on my ZFS based projects?
Thank you in advance for the time spent.
I have done it via the PHPStorm Deployment tool.
Hello !
Can someone explain how to configure IPv6/DHCP on an LXC container?
Hi,
I have a problem with the creation of a new container.
I am trying to create a new ubuntu:18.04 container.
I have 4 running containers with the same configuration, and if I try to create a new one I get this error message:
zfs clone failed: cannot open ‘default/images/6700bee14eb3034ba4bd0c3d0165f938faa161d2690e919465aab2946490689b@readonly’: dataset does not exist\n
I use Ubuntu 18.04 and a standard installation, LXC client and server 3.0.3.
Thanks for any advise
After rebooting the Ubuntu 18.04 server, the loop device that LXD created is automatically unmounted.
Please help me out.
/dev/loop17 btrfs 41G 17M 39G 1% /var/lib/lxd/storage-pools/finlabsindia
what is the difference between lxc list and lxc-ls?