OS Installation
General Recommendations for Installing Proxmox
When installing Proxmox VE, the hostname/node name should contain the domain (e.g. pve01.example.com). If no domain is available, simply use .local (e.g. pve01.local).
APT Repositories
Without a Support Subscription
As soon as you install the node(s) in your cluster, do the following (a sketch follows the list):
- Enable the PVE No-Subscription repository (if no license was bought)
- Disable the PVE Enterprise repository
- Disable the CEPH Enterprise repository
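A minimal sketch of those changes, assuming Proxmox VE 8 on Debian bookworm with the default repository file locations:
# Disable the PVE and CEPH enterprise repositories
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/ceph.list
# Enable the PVE no-subscription repository
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt-get update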
With a Support Subscription
Enter your license key and enjoy those sweet Enterprise packages!
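The key can also be registered from the CLI with pvesubscription (the key below is a placeholder):
pvesubscription set pve2c-XXXXXXXXXX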
IMPORTANT Dependencies
First, run apt-get update to get the latest APT package lists.
Network
The following network-oriented packages are usually very good to have:
- ifupdown2 → Allows hot-reloading bridges
- ethtool → For networking debugging and disabling off-loading
- net-tools → includes many useful utilities such as ifconfig
apt-get install ifupdown2 ethtool net-tools
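With ifupdown2 installed, changes to /etc/network/interfaces can be applied without a reboot:
ifreload -a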
Microcode (CPU Support)
Run apt update and install the AMD (amd64-microcode) or Intel (intel-microcode) package, depending on your hardware.
For this you may need the contrib, non-free, or non-free-firmware repositories.
Get your current release with:
source /etc/os-release && echo $VERSION_CODENAME
Assuming $release is your Debian release codename:
deb http://deb.debian.org/debian $release main contrib non-free non-free-firmware
deb http://deb.debian.org/debian $release-updates main contrib non-free non-free-firmware
deb http://security.debian.org/debian-security $release-security main contrib non-free non-free-firmware
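With those repositories enabled, install the package matching your CPU vendor:
apt-get update
apt-get install intel-microcode    # Intel CPUs
apt-get install amd64-microcode    # AMD CPUs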
Network Recommendations
As mentioned before, the following packages are good to have if not already installed:
apt-get update
apt-get install ifupdown2 ethtool net-tools
It is recommended to disable NIC offloading for performance reasons.
Example /etc/network/interfaces file:
# eno1 is an example physical interface name
auto eno1
iface eno1 inet manual
    post-up /sbin/ethtool -K eno1 tx off rx off gso off tso off
# vmbr0 is an example bridge name
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
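After applying the configuration (e.g. with ifreload -a from ifupdown2), you can verify that offloading is actually off:
ethtool -k eno1 | grep -E 'checksumming|segmentation-offload'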
Storage
If you have an even number of disks (arranged in pairs), and prefer local I/O performance over fast(er) migration and distributed storage, use ZFS in RAID10 without any underlying HW RAID controller (or set it to JBOD/passthrough mode), as sketched below.
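A minimal sketch of such a pool over four disks; the pool name and disk IDs are placeholders, and ashift=12 assumes 4K-sector drives:
# ZFS RAID10 = a stripe of mirrors
zpool create -o ashift=12 tank \
    mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4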
Otherwise, feel free to test out CEPH; it's pretty well integrated with Proxmox. A few caveats here:
- Calculate the correct number of placement groups (PGs) with the formula or CEPH's PGCalc tool (see the examples after this list)
- If using HDDs, disable the physical disk cache for higher performance
- Enable jumbo frames
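Sketches for the items above; the OSD count, replica size, and device name are assumptions for illustration. A commonly cited rule of thumb puts the total PG count near (OSDs × 100) / replica size, rounded up to the next power of two:
# e.g. 12 OSDs with replica size 3
echo $(( 12 * 100 / 3 ))    # 400 -> round up to 512
The physical write cache of an HDD-backed OSD can be turned off with hdparm:
hdparm -W 0 /dev/sdb
For jumbo frames, set mtu 9000 on the CEPH-facing interface in /etc/network/interfaces (and make sure your switches support it).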
We personally prefer the ZFS solution due to its very good performance, but it requires a bit more micromanagement in terms of migrating and replicating VMs/CTs.
CEPH also consumes more raw space, since data is replicated (3× by default).