
What is Incus

I posted about LXD some time ago. It is a tool that allows you to manage LXC containers and VMs. After Canonical took over LXD and moved it out of the Linux Containers project's control, it was forked under the name Incus. The goals were:

  • keep the project away from Canonical's influence;
  • maintain a community-driven approach;
  • avoid potential commercialization.

As of the time of writing, Incus is not much different in terms of commands and internal functioning, but I believe this will change sooner or later.
Anyway, I'm slowly replacing LXD with Incus in all my environments, thanks to the ready-made migration tool. By the way, the Incus docs have detailed instructions for the installation and migration processes (and a lot of other stuff) and are constantly improving.
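If you're curious what that migration actually looks like: it's a single tool doing all the work. A minimal sketch, assuming both the LXD and Incus daemons are installed and running; see the Incus docs for the authoritative procedure:

# lxd-to-incus validates both installations and asks
# for confirmation before moving any data over
sudo lxd-to-incus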

I'll use Debian Bookworm here. It's Debian 12.
Initially, I decided to go with Debian Trixie just because... Well... Trixie The Great and Powerful, cheeky winking. And the system had the latest kernel. And RK3588S2 support, which I kinda needed. Unfortunately, the system died on me three times in a row, and I dragged myself back to Bookworm.
Somewhere amid my struggles Bookworm caught up: fresh kernel, CPU support, and so on. I haven't really lost a thing.
I believe Bookworm will do fine now. Bookhorse, blushing.

So, you're saying you killed Debian three times in a row, while the Arch on your laptop is still doing well... For how long now?.. Four years?
Unbelievable...

Preparations

Several packages should already be installed on the target system, either to start using Incus right away or to provide extended functionality. A couple of system settings should also be tweaked for better security.

Package: dnsmasq

Incus relies on dnsmasq for its DNS and DHCP capabilities. The package should already be installed and the dnsmasq daemon running, otherwise Incus will fail in the middle of the initialization process. On Debian, dnsmasq can be installed with the following command:

apt install -y dnsmasq
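To make sure the daemon is actually up, and stays up after a reboot, plain systemd commands will do:

systemctl enable --now dnsmasq
systemctl status dnsmasq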

Package: qemu

Incus is able to control Virtual Machines too!
For this to work, the qemu-system package should be installed. This is optional though: if you don't require VMs, just omit this package. If the need for VMs arises later, the package can be installed at any time:

apt install -y qemu-system
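Once qemu-system is in place, a VM is launched the same way as a container, just with the --vm flag. A quick sketch, assuming the images: remote carries a Debian 12 image for your architecture:

# same launch command, but a VM this time
incus launch images:debian/12 vmtest --vm
# the TYPE column will say VIRTUAL-MACHINE
incus list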

Packages: zfs and btrfs

If you want your Incus storage pools to use zfs or btrfs, support for those file systems should be installed too. Both can be installed before or after initialization. If installed beforehand, they will provide additional options at the storage pool creation step.
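On Debian that boils down to something like the commands below. Note the assumptions here: the ZFS packages live in the contrib section, so it must be enabled in your APT sources, and the DKMS module needs kernel headers to build:

# btrfs tooling, available in main
apt install -y btrfs-progs
# ZFS tooling plus the DKMS module, both from contrib
apt install -y zfs-dkms zfsutils-linux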

Tweaks: subuid and subgid

Incus will try to launch unprivileged containers by default. For this to work, the OS should have appropriate subuid and subgid ranges configured. You can manually add the root:1000000:1000000000 line to both /etc/subuid and /etc/subgid, or use the command below:

echo "root:1000000:1000000000" | sudo tee -a /etc/subuid /etc/subgid

Install

There are actually two package sources available: the one provided by Debian itself, which contains only LTS releases, and Zabbly, which carries every release. If you settle for the Incus LTS version, the commands below can be used. To use the Zabbly package repository and get every release, some additional steps are required; see the sketch after the commands.

apt update && apt upgrade -y
apt install -y incus
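For the Zabbly route, the sketch below is roughly what their instructions boil down to at the time of writing. Treat it as an outline and check https://github.com/zabbly/incus for the current key fingerprint and repository definition:

# fetch the repository signing key
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc
# define the repository; adjust Suites and Architectures
# to your release and hardware
cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: bookworm
Components: main
Architectures: arm64
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF
apt update && apt install -y incus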

When the installation finishes, you can add your user to the incus-admin or incus group. Adding a user to the incus-admin group will grant them access to all Incus capabilities, so only users you trust should have it. The incus group provides restricted access.
I will go for full access for my user:

sudo usermod -a -G incus-admin $USER

Don't forget to log out and back in to pick up the new group you just added.

Initialize

Now, to make Incus fully operational, initialization should be performed. Like its predecessor, Incus will try to drown you in questions. It asks mostly the same stuff, although with subtle differences.

incus admin init

Questions, along with my answers, are below:

  • Would you like to use clustering? (yes/no): no
  • Do you want to configure a new storage pool? (yes/no): yes
  • Name of the storage backend to use (dir): dir
  • Would you like to create a new local network bridge? (yes/no): yes
  • What should the new bridge be called?: incusbr0
  • What IPv4 address should be used?: auto
  • What IPv6 address should be used?: auto
  • Would you like the server to be available over the network? (yes/no): no
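To see what initialization actually produced, a few read-only commands are handy:

# the storage pool from the answers above
incus storage list
# the incusbr0 bridge
incus network list
# the default profile wiring it all together
incus profile show default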

First Container

The list of available images can be displayed with commands like these:

# filtering by flavor...
incus image list images:alpine
# ... version ...
incus image list images:alpine/edge
# ... architecture even
incus image list images:alpine/edge/arm64

Filtering by architecture is often not necessary, though: Incus will do its best to match the container's architecture to the host's.
Once an image is found, a container can be launched using the commands below:

# run container
incus launch images:alpine/edge test
# list containers, watch for State - RUNNING
incus list

To manage containers use:

# start
incus start test
# stop
incus stop test
# drop into container's shell
incus shell test
# run arbitrary command
incus exec test -- apk update
# remove
incus delete test
# push file into instance
incus file push /tmp/file.txt test/var/tmp/
# pull file from instance
incus file pull test/var/tmp/file.txt /tmp/

Accessing Containers from LAN

The Bridge

The default bridge is NATed, and I can't find any way to un-NAT it, nor even to create a proper bridge the Incus way. I will use netplan for that purpose instead.
Netplan configuration is stored in /etc/netplan/. I provide a working example from my system. Create a file called interface-and-bridge.yaml; the name can be anything you want, just keep the yaml extension. Permissions should allow read and write for the root user only. Something like this:

touch /etc/netplan/interface-and-bridge.yaml
chmod 600 /etc/netplan/interface-and-bridge.yaml
vim /etc/netplan/interface-and-bridge.yaml

Contents of the file, with explanations:

network:
  version: 2
  renderer: networkd
  ethernets:
    end0:
      # disabling DHCP for both IPv4 and IPv6:
      # a bridged network won't work with a dynamic IP
      # on the slave interface
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      # interfaces that will be used by the bridge
      interfaces: [end0]
      # Can I enable `dhcp4` and `dhcp6` for
      # `br0` and still have the network working?
      dhcp4: no
      dhcp6: no
      # the system will get this IP
      addresses: [192.168.0.100/24]
      routes:
        # adjust according to your network configuration:
        # 192.168.0.1 is the gateway's IP
        # (the gateway is the router in this case)
        - to: 0.0.0.0/0
          via: 192.168.0.1
      nameservers:
        # IPs of the DNS servers
        addresses: [8.8.8.8, 8.8.4.4]

There will be another file, called 10-dhcp-all-interfaces.yaml, with content similar to this:

network:
  version: 2
  renderer: networkd
  ethernets:
    all-eth-interfaces:
      match:
        name: "e*"
      dhcp4: yes
      dhcp6: yes
      ipv6-privacy: yes

That file governs all interfaces and makes the network "just work" initially, but it will interfere with our new settings. I renamed it to 10-dhcp-all-interfaces.yaml.bak to keep netplan from parsing it.

mv /etc/netplan/10-dhcp-all-interfaces.yaml /etc/netplan/10-dhcp-all-interfaces.yaml.bak

Alternatively, we can use another file name.
Instead of just interface-and-bridge.yaml, let's call it 100-interface-and-bridge.yaml. Netplan reads files in lexical order of their names, so our config will be parsed last and override settings from the earlier files.
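In that case a simple rename is enough, and netplan will pick the file up under its new name:

mv /etc/netplan/interface-and-bridge.yaml /etc/netplan/100-interface-and-bridge.yaml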

After the tweaks are done, execute:

# try the configuration;
# it will bounce back to the previous one
# if there were errors
netplan try
# apply new configuration permanently
netplan apply
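To confirm the bridge came up as intended, plain iproute2 will do:

# br0 should hold the static address
ip -br addr show br0
# end0 should report "master br0"
ip link show end0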

Containers and Default Profile

Newly created containers still won't know about our new bridge, though. We should change the profile called default, so its interface is tied to our bridge rather than the default incusbr0.

# remove previous interface
incus profile device remove default eth0

# add a new one
incus profile device add default eth0 nic nictype=bridged parent=br0

If you already have a container running, named test in my case, this will swap its interface:

# remove previous interface
incus config device remove test eth0

# add a new one
incus config device add test eth0 nic nictype=bridged parent=br0
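After that, the container should get its address from the LAN's DHCP server, the router in my case. A restart doesn't hurt, and incus list will show the new address:

incus restart test
incus list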

You can also leave the old interface intact and add a new one, eth1 for example.
Also, it's possible to create another profile instead of modifying default, and launch instances using the new profile.
I'll leave a deeper look at that for later, though.
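For the impatient, a minimal sketch of the profile approach; lanbridge is just a hypothetical name, and the default profile is assumed to be left untouched:

# create a profile carrying only the bridged NIC
incus profile create lanbridge
incus profile device add lanbridge eth0 nic nictype=bridged parent=br0
# layer it on top of default at launch time;
# the later profile's eth0 overrides the earlier one's
incus launch images:alpine/edge test2 --profile default --profile lanbridge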

Relationships with Docker

As with LXD, the main motto is "do not run both on the same system". While attempts to befriend Docker and LXD on the same host ended with LXD's network not working, pairing Docker with Incus had rather the opposite effect: Docker containers lost their networking entirely. A solution can be found here. Once I'm able to test it, I'll put the solution here in an extended and permanent form.

The End?

And that's all for this time! Stay tuned for more.