Installing and configuring LXD
Posted on Sun 22 April 2018 in tutoriale, lxd, lxc
The LXD documentation recommends ZFS as the storage backend for containers/VMs, a filesystem that allows:
- fast creation of partitions as datasets
- fast backup and restore options
- mounting a single partition as a share in multiple containers
More details here: http://zfsonlinux.org/
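To make these benefits concrete, here is a minimal sketch (the pool name tank and the dataset names are hypothetical, not part of this setup):
zfs create tank/shared                                 # a new "partition" (dataset), created instantly
zfs snapshot tank/shared@before-upgrade                # cheap point-in-time backup
zfs rollback tank/shared@before-upgrade                # fast restore to that snapshot
zfs clone tank/shared@before-upgrade tank/shared-copy  # writable copy, e.g. for another container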
After choosing a suitable machine, we do a plain install of Ubuntu 18.04 server (beta 2 at the time of writing) and add the ssh service for remote access.
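On a stock Ubuntu 18.04 server that is a single package:
apt install openssh-server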
Here is how I partitioned the disks; in my setup I used two 250GB HDDs:
- /dev/sda1 - 250M - /boot
- /dev/sda2 - 8G - swap
- /dev/sda3 - 100G - /
- /dev/sda4 - the rest of the disk, left unformatted at install time
- /dev/sdb - unpartitioned
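You can verify the resulting layout with lsblk (the exact output depends on your disks):
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT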
After the install I created a bridge and added the enp9s0 interface to it, using the netplan utility.
I edited the file /etc/netplan/01-netcfg.yaml:
network:
  version: 2
  renderer: networkd
  ethernets:
    enp9s0:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      interfaces:
        - enp9s0
      dhcp4: false
      addresses: [ 192.168.25.200/24 ]
      gateway4: 192.168.25.1
      nameservers:
        addresses: [ 1.1.1.1, 8.8.8.8 ]
and applied the configuration with:
netplan apply
after which the interfaces looked like this:
2: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether 00:30:05:a1:c7:63 brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:f0:1b:7c:b1:e0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.25.200/24 brd 192.168.25.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::30f0:1bff:fe7c:b1e0/64 scope link
       valid_lft forever preferred_lft forever
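As an extra check, you can list the interfaces enslaved to the bridge, which should show enp9s0:
ip link show master br0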
The lxd package comes preinstalled on Ubuntu 18.04 server, at version 3.0, so all we still need are the ZFS userspace utilities:
apt install zfsutils-linux
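A quick sanity check that the ZFS kernel module (shipped with the stock Ubuntu kernel) loads correctly:
modprobe zfs
lsmod | grep zfs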
Since I wanted a ZFS pool spanning several disks, I created it manually on the /dev/sda4 partition and the whole /dev/sdb disk:
zpool create lxd001 /dev/sda4 /dev/sdb
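Note that this creates a striped pool with no redundancy: if either device fails, the whole pool is lost. For redundancy at the cost of capacity, the same command with a mirror (a sketch, not what I used here) would be:
zpool create lxd001 mirror /dev/sda4 /dev/sdb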
We check the ZFS configuration:
~ zpool status
  pool: lxd001
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        lxd001      ONLINE       0     0     0
          sda4      ONLINE       0     0     0
          sdb       ONLINE       0     0     0

errors: No known data errors
More about configuring the ZFS filesystem can be found in Oracle's documentation here.
The ZFS pool is automatically mounted at /lxd001, and for the next step it has to be unmounted:
zfs umount lxd001
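Optionally, to keep the pool from being remounted at /lxd001 after a reboot, you can also clear its mountpoint (an extra step, not strictly required here):
zfs set mountpoint=none lxd001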
Now we can initialize lxd:
~ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: lxd001
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: lxd001
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: br0
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
cluster: null
networks:
- description: ""
  managed: false
  name: br0
  type: "bridge"
storage_pools:
- config:
    source: lxd001
  description: ""
  name: lxd001
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: br0
      type: nic
    root:
      path: /
      pool: lxd001
      type: disk
  name: default
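The printed preseed is useful for repeating the setup non-interactively, for example on a second host. Assuming you saved it as preseed.yaml (hypothetical file name):
cat preseed.yaml | lxd init --preseed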
The recommended alternative for storage is btrfs, but LVM or a plain directory (slower) can also be used. More details in the documentation.
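For illustration, a second pool on another backend can be added later with lxc storage create; the pool name and device below are hypothetical:
lxc storage create pool2 btrfs source=/dev/sdc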
Update: LXD can also be installed on CentOS 7, as described in this article.
This article is part of the series Virtualizare cu LXD (Virtualization with LXD).