Testing out changes in a production environment is never a good idea. However, prepping test servers can be tedious: you have to find the hardware and set up the operating system before you can begin. So I wanted a faster and more cost-effective option, turning a single Cloud Server into a virtualized host for my test servers. Enter LXC.
Taken from the provider's site, LXC is an operating system-level virtualization technology for Linux. It allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or virtual environments (VEs). LXC is similar to Solaris Containers, FreeBSD jails, and OpenVZ.
To manage my LXC containers, I prefer Proxmox VE 5, which provides a clean control panel for managing them.
This guide will document how to install Proxmox on a 4G Rackspace Cloud Server running Debian 9. A 50G SSD Cloud Block Storage volume will be attached to the server and set up with ZFS to store the containers, which is outlined further below. The Proxmox installation will install everything needed to run LXC. The IPs for the containers will be provided via NAT served from the server, creating a self-contained test environment.
Configure system for LXC according to best practices
Increase the open files limit by appending the following to the bottom of /etc/security/limits.conf:
[root@proxmox01 ~]# vim /etc/security/limits.conf
...
*       soft    nofile  1048576 unset
*       hard    nofile  1048576 unset
root    soft    nofile  1048576 unset
root    hard    nofile  1048576 unset
*       soft    memlock 1048576 unset
*       hard    memlock 1048576 unset
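These limits are applied to new login sessions via pam_limits, so after logging back in you can spot-check the open files limit (the output below assumes the settings above were picked up):

[root@proxmox01 ~]# ulimit -n
1048576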
Now set up some basic kernel tuning at the bottom of /etc/sysctl.conf:
[root@proxmox01 ~]# vim /etc/sysctl.conf
...
# LXD best practices: https://github.com/lxc/lxd/blob/master/doc/production-setup.md
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
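These values are loaded automatically at boot; to apply them right away and confirm one of them took effect, something like the following should work (the output assumes the values above):

[root@proxmox01 ~]# sysctl -p
[root@proxmox01 ~]# sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 1048576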
Install Proxmox VE 5
For this to work, we start with a vanilla Debian 9 Cloud Server and install Proxmox on top of it, which will pull in the required kernel.
To get things started, update /etc/hosts to set up your FQDN, and remove any resolvable IPv6 domains:
[root@proxmox01 ~]# cat /etc/hosts
127.0.0.1       localhost.localdomain localhost
123.123.123.123 proxmox01.yourdomain.com proxmox01-iad

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Test to confirm /etc/hosts is set up properly. This should return your server's IP address:
[root@proxmox01 ~]# hostname --ip-address
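Using the placeholder address from this guide, the output should contain only the public IP you placed in /etc/hosts, for example:

[root@proxmox01 ~]# hostname --ip-address
123.123.123.123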
Add the Proxmox VE repo and add the repo key:
[root@proxmox01 ~]# echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
[root@proxmox01 ~]# wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
Update the package index and then update the system for Proxmox:
[root@proxmox01 ~]# apt update && apt dist-upgrade

* Select the option 'Install the package maintainer's version' when asked about grub
Install Proxmox VE and reboot:
[root@proxmox01 ~]# apt install proxmox-ve postfix open-iscsi
[root@proxmox01 ~]# reboot
Once the cloud server comes back online, confirm you are running the pve kernel:
[root@proxmox01 ~]# uname -a
Linux proxmox 4.13.4-1-pve #1 SMP PVE 4.13.4-25 (Fri, 13 Oct 2017 08:59:53 +0200) x86_64 GNU/Linux
Setup NAT for the containers
As the Rackspace Cloud Server comes with a single public IP address, I will be making use of NAT'ed IP addresses to assign to my individual containers. The steps are documented below.
Update /etc/sysctl.conf to enable IP forwarding:
[root@proxmox01 ~]# vim /etc/sysctl.conf
...
net.ipv4.ip_forward = 1
...
Then apply the new settings without a reboot:
[root@proxmox01 ~]# sysctl -p
To set up the NAT rules, we need a script that will run at boot. Two things need to be taken into consideration here:
1. Change the IP address below (123.123.123.123) in the NAT rule to your Cloud Server's public IP address.
2. This assumes you want to use a 192.168.1.0/24 network for your VEs.
The quick and dirty script is below:
[root@proxmox01 ~]# vim /etc/init.d/lxc-routing

#!/bin/sh
case "$1" in
 start) echo "lxc-routing started"
# It's important that you change the SNAT IP to the one of your server (not the local but the internet IP)
# The following line adds a route to the IP-range that we will later assign to the VPS. That's how you get internet access on
# your VPS.
/sbin/iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j SNAT --to 123.123.123.123

# Allow servers to have access to internet:
/sbin/iptables -A FORWARD -s 192.168.1.0/24 -j ACCEPT
/sbin/iptables -A FORWARD -d 192.168.1.0/24 -j ACCEPT
# Be sure to add net.ipv4.ip_forward=1 to /etc/sysctl.conf, then run sysctl -p

# These are the rules for any port forwarding you want to do
# In this example, all traffic to and from the ports 11001-11019 gets routed to/from the VPS with the IP 192.168.1.1.
# Also the port 11000 is routed to the SSH port of the vps, later on you can ssh into your VPS through yourip:11000
#/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 11000 -j DNAT --to 192.168.1.1:22
#/sbin/iptables -t nat -A PREROUTING -i eth0 -p udp --dport 11001:11019 -j DNAT --to 192.168.1.1
#/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 11001:11019 -j DNAT --to 192.168.1.1

# In my case I also dropped outgoing SMTP traffic, as it's one of the most abused things on servers
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 25
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 2525
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 587
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 465
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 2526
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 110
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 143
#/sbin/iptables -A FORWARD -j DROP -p tcp --destination-port 993
;;

*) echo "Usage: /etc/init.d/lxc-routing {start}"
exit 2
;;

esac
exit 0
Set the permissions, configure it to run on boot, and start it:
[root@proxmox01 ~]# chmod 755 /etc/init.d/lxc-routing
[root@proxmox01 ~]# update-rc.d lxc-routing defaults
[root@proxmox01 ~]# /etc/init.d/lxc-routing start
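To confirm the rules actually loaded, you can list the relevant chains (packet counters and exact formatting will vary):

[root@proxmox01 ~]# iptables -t nat -L POSTROUTING -n -v
[root@proxmox01 ~]# iptables -L FORWARD -n -v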
When you go to start a new container, it will fail to start, and Proxmox will complain with an error similar to the one below:
-- Unit [email protected] has begun starting up.
Nov 06 06:07:07 proxmox01.*********** systemd-udevd[11150]: Could not generate persistent MAC address for vethMVIWQY: No such file or directory
Nov 06 06:07:07 proxmox01.*********** kernel: IPv6: ADDRCONF(NETDEV_UP): veth100i0: link is not ready
This can be corrected by:
[root@proxmox01 ~]# vim /etc/systemd/network/99-default.link

[Link]
NamePolicy=kernel database onboard slot path
MACAddressPolicy=none
Then reboot:
[root@proxmox01 ~]# reboot
Navigate your browser to the control panel, log in with your root SSH credentials, and set up a Linux Bridge:
- Navigate your browser to: https://x.x.x.x:8006
- Click on System --> Network
- On top, click 'Create' --> 'Linux Bridge'
- Name: vmbr0
- IP address: 192.168.1.1
- Subnet mask: 255.255.255.0
- Autostart: checked
- Leave everything else blank
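With the bridge in place, each container should be given an address from 192.168.1.0/24 with 192.168.1.1 as its gateway. As a sketch (the container ID 100 and the address 192.168.1.10 are just example values), the same network device can also be attached from the CLI with pct:

[root@proxmox01 ~]# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.1.10/24,gw=192.168.1.1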
Set up the 50G SSD Cloud Block Storage volume with ZFS and add it to Proxmox. Assuming the device is already attached, check which device it was mapped to:
[root@proxmox01 ~]# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  80G  0 disk
└─xvda1 202:1    0  80G  0 part /
xvdb    202:16   0  50G  0 disk    <--- This is my new volume
First, install the ZFS utilities for Linux and load the kernel module:
[root@proxmox01 ~]# apt-get install zfsutils-linux
[root@proxmox01 ~]# /sbin/modprobe zfs
Then add the drive to the zpool:
[root@proxmox01 ~]# zpool create zfs /dev/xvdb
[root@proxmox01 ~]# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfs   49.8G  97.5K  49.7G         -     0%     0%  1.00x  ONLINE  -
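It doesn't hurt to also verify the pool health and the dataset that was created:

[root@proxmox01 ~]# zpool status zfs
[root@proxmox01 ~]# zfs list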
Now add the new disk to Proxmox:
- Navigate your browser to: https://x.x.x.x:8006
- Click on Datacenter --> Storage
- On top, click 'Add' --> 'ZFS'
- Name: zfs
- ZFS Pool: zfs
- Enable: Checked
- Thin provision: Checked
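If you prefer the command line over the GUI, the same storage can be registered with pvesm (a sketch; the storage ID 'zfs' matches the name used above, and the sparse option corresponds to the 'Thin provision' checkbox):

[root@proxmox01 ~]# pvesm add zfspool zfs -pool zfs -sparse 1
[root@proxmox01 ~]# pvesm status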
Add Docker support to the containers
Docker can successfully run within an LXC container with some additional configuration. However, as the Proxmox kernel may be older, the latest versions of Docker may not work properly; the versions of Docker shipped in the OS repos do seem to work.
First, create the containers as desired for Docker via Proxmox, then add the following to the bottom of the container's LXC config file:
[root@proxmox01 ~]# vim /etc/pve/lxc/100.conf
...
#insert docker part below
lxc.aa_profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
After restarting that container, you will be able to install and configure Docker as normal on that container.
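A quick sanity check inside the container (assuming a Debian-based container and the Docker package from the OS repos as noted above; the ct100 prompt is just an example container hostname) would look something like:

root@ct100:~# apt-get install docker.io
root@ct100:~# docker run hello-world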
Add NFS support to the containers
NFS can successfully run within an LXC container with some additional configuration.
First, create an apparmor profile for NFS:
[root@proxmox01 ~]# vim /etc/apparmor.d/lxc/lxc-default-with-nfs

# Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc
profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # allow NFS (nfs/nfs4) mounts.
  mount fstype=nfs*,
}
Then reload the LXC AppArmor profiles:
[root@proxmox01 ~]# apparmor_parser -r /etc/apparmor.d/lxc-containers
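To verify the new profile actually loaded, you can grep the list of loaded AppArmor profiles (the exact output varies, but the profile name should appear):

[root@proxmox01 ~]# aa-status | grep nfs
   lxc-container-default-with-nfs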
This additional AppArmor profile explicitly allows NFS mounts in any container that uses it.
Finally, add the following to the bottom of the container's LXC config file:
[root@proxmox01 ~]# vim /etc/pve/lxc/100.conf
...
#insert near bottom
lxc.apparmor.profile: lxc-container-default-with-nfs
After restarting that container, you will be able to install and configure NFS as normal on that container.
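As a final check inside a container that uses this profile (the server name and export path below are placeholders for your own NFS server, and ct100 is just an example container hostname), a standard NFS mount should now succeed:

root@ct100:~# apt-get install nfs-common
root@ct100:~# mount -t nfs nfs01.example.com:/export/test /mnt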