Proxmox VE unifies your compute and storage systems, i.e. you can use the same physical nodes within a cluster for both computing (processing VMs and containers) and replicated storage. The traditional silos of compute and storage resources can be wrapped up into a single hyper-converged appliance. Separate storage networks (SANs) and connections via network attached storage (NAS) disappear. With the integration of Ceph, an open source software-defined storage platform, Proxmox VE has the ability to run and manage Ceph storage directly on the hypervisor nodes.
Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. Some advantages of Ceph on Proxmox VE are:
Easy setup and management with CLI and GUI support
Scalable to the exabyte level
Setup pools with different performance and redundancy characteristics
Data is replicated, making it fault tolerant
Runs on economical commodity hardware
No need for hardware RAID controllers
For small to mid-sized deployments, it is possible to install a Ceph server for RADOS Block Devices (RBD) directly on your Proxmox VE cluster nodes, see Ceph RADOS Block Devices (RBD). Recent hardware has plenty of CPU power and RAM, so running storage services and VMs on the same node is possible.
To simplify management, we provide pveceph - a tool to install and manage Ceph services on Proxmox VE nodes.
Ceph consists of a couple of daemons
[Ceph intro http://docs.ceph.com/docs/luminous/start/intro/]
, for use as an RBD storage:
Ceph Monitor (ceph-mon)
Ceph Manager (ceph-mgr)
Ceph OSD (ceph-osd; Object Storage Daemon)
To build a hyper-converged Proxmox + Ceph cluster, there should be at least three (preferably identical) servers for the setup.
Check also the recommendations from Ceph’s website.
A higher CPU core frequency reduces latency and should be preferred. As a simple rule of thumb, you should assign a CPU core (or thread) to each Ceph service, to provide enough resources for stable and durable Ceph performance.
Especially in a hyper-converged setup, the memory consumption needs to be carefully monitored. In addition to the intended workload from virtual machines and containers, Ceph needs enough memory available to provide good and stable performance. As a rule of thumb, an OSD will use roughly 1 GiB of memory for each 1 TiB of data; for example, a node with eight 1 TiB OSDs should reserve about 8 GiB of memory for the OSDs alone, on top of the VM and container workload. OSD caching will use additional memory.
We recommend a network bandwidth of at least 10 Gbps, used exclusively for
Ceph. A meshed network setup
[Full Mesh Network for Ceph https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server]
is also an option if there are no 10 GbE switches available.
The volume of traffic, especially during recovery, will interfere with other services on the same network and may even break the Proxmox VE cluster stack.
Further, estimate your bandwidth needs. While one HDD might not saturate a 1 Gbps link, multiple HDD OSDs per node can, and modern NVMe SSDs will quickly saturate 10 Gbps of bandwidth. Deploying a network capable of even more bandwidth will ensure that it isn’t your bottleneck and won’t be anytime soon; 25, 40 or even 100 Gbps are possible.
When planning the size of your Ceph cluster, it is important to take the recovery time into consideration. Especially with small clusters, recovery might take a long time. It is recommended that you use SSDs instead of HDDs in small setups to reduce recovery time, minimizing the likelihood of a subsequent failure event during recovery.
In general, SSDs will provide more IOPS than spinning disks. This fact, together with the higher cost, may make a class based separation of pools appealing. Another possibility to speed up OSDs is to use a faster disk as a journal or DB/WAL device, see creating Ceph OSDs. If a faster disk is used for multiple OSDs, a proper balance between OSD count and WAL / DB (or journal) disk must be selected, otherwise the faster disk becomes the bottleneck for all linked OSDs.
Aside from the disk type, Ceph performs best with an evenly sized and distributed amount of disks per node. For example, 4 x 500 GB disks in each node is better than a mixed setup with a single 1 TB and three 250 GB disks.
You also need to balance OSD count and single OSD capacity. More capacity allows you to increase storage density, but it also means that a single OSD failure forces Ceph to recover more data at once.
As Ceph handles data object redundancy and multiple parallel writes to disks (OSDs) on its own, using a RAID controller normally doesn’t improve performance or availability. On the contrary, Ceph is designed to handle whole disks on its own, without any abstraction in between. RAID controllers are not designed for the Ceph use case and may complicate things and sometimes even reduce performance, as their write and caching algorithms may interfere with the ones from Ceph.
|Avoid RAID controllers; use host bus adapters (HBA) instead.|
|The above recommendations should be seen as rough guidance for choosing hardware. It is still essential to adapt them to your specific needs, to test your setup and to monitor health and performance continuously.|
Initial Ceph installation & configuration
With Proxmox VE you have the benefit of an easy to use installation wizard for Ceph. Click on one of your cluster nodes and navigate to the Ceph section in the menu tree. If Ceph is not already installed, you will be offered to install it now.
The wizard is divided into different sections, where each needs to be finished successfully in order to use Ceph. After starting the installation the wizard will download and install all required packages from Proxmox VE’s ceph repository.
After finishing the first step, you will need to create a configuration. This step is only needed once per cluster, as this configuration is distributed automatically to all remaining cluster members through Proxmox VE’s clustered configuration file system (pmxcfs).
The configuration step includes the following settings:
Public Network: You should set up a dedicated network for Ceph; this setting is required. Separating your Ceph traffic is highly recommended, because otherwise it could cause trouble with other latency-dependent services, for example, cluster communication may decrease Ceph’s performance.
Cluster Network: As an optional step you can go even further and separate the OSD replication & heartbeat traffic as well. This will relieve the public network and could lead to significant performance improvements especially in big clusters.
You have two more options which are considered advanced and therefore should only be changed if you are an expert.
Number of replicas: Defines how often an object is replicated.
Minimum replicas: Defines the minimum number of required replicas for I/O to be marked as complete.
Additionally, you need to choose your first monitor node; this is required.
That’s it, you should see a success page as the last step, with further instructions on how to proceed. Your system is now ready to start using Ceph, even though you will still need to create additional monitors, some OSDs and at least one pool.
The rest of this chapter will guide you through getting the most out of your Proxmox VE based Ceph setup. This includes the aforementioned tasks and more, such as CephFS, which is a very handy addition to your new Ceph cluster.
Installation of Ceph Packages
Use the Proxmox VE Ceph installation wizard (recommended) or run the following command on each node:
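pveceph install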
This sets up an apt package repository in /etc/apt/sources.list.d/ceph.list and installs the required software.
Creating initial Ceph configuration
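After installation of the packages, you need to create an initial Ceph configuration on just one node, based on your network (10.10.10.0/24 in the following example):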
pveceph init --network 10.10.10.0/24
This creates an initial configuration at /etc/pve/ceph.conf with a dedicated network for Ceph. That file is automatically distributed to all Proxmox VE nodes using pmxcfs. The command also creates a symbolic link from /etc/ceph/ceph.conf pointing to that file, so you can simply run Ceph commands without needing to specify a configuration file.
Creating Ceph Monitors
The Ceph Monitor (MON)
[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
maintains a master copy of the cluster map. For high availability, you need at least 3 monitors. One monitor will already be installed if you used the installation wizard. You won’t need more than 3 monitors as long as your cluster is small to mid-sized; only really large clusters will require more than that.
On each node where you want to place a monitor (three monitors are recommended), create it by using the Ceph → Monitor tab in the GUI, or run:
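pveceph createmon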
This will also install the needed Ceph Manager (ceph-mgr) by default. If you do not want to install a manager, specify the -exclude-manager option.
Creating Ceph Manager
The Manager daemon runs alongside the monitors, providing an interface for
monitoring the cluster. Since the Ceph Luminous release, the
[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/]
daemon is required. During monitor installation, the Ceph Manager will be installed as well.
|It is recommended to install the Ceph Manager on the monitor nodes. For high availability, install more than one manager.|
Creating Ceph OSDs
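You can create an OSD either via the GUI (Ceph → OSD tab) or via the CLI as follows: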
pveceph createosd /dev/sd[X]
|We recommend a Ceph cluster size of at least 12 OSDs, distributed evenly among at least three nodes (4 OSDs on each node).|
If the disk was in use before (e.g. for ZFS, RAID or as an OSD), the following commands should be sufficient to remove the partition table, boot sector and any other OSD leftovers.
dd if=/dev/zero of=/dev/sd[X] bs=1M count=200
ceph-disk zap /dev/sd[X]
|The above commands will destroy data on the disk!|
Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced, the so-called BlueStore
[Ceph Bluestore http://ceph.com/community/new-luminous-bluestore/]
. This is the default when creating OSDs in Ceph Luminous.
pveceph createosd /dev/sd[X]
|In order to select a disk in the GUI, to be more fail-safe, the disk needs
to have a GPT
[GPT partition table https://en.wikipedia.org/wiki/GUID_Partition_Table]
partition table. You can create this with gdisk /dev/sd(x). If there is no GPT, you cannot select the disk as DB/WAL.|
If you want to use a separate DB/WAL device for your OSDs, you can specify it through the -journal_dev option. The WAL is placed with the DB, if not specified separately.
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y]
|The DB stores BlueStore’s internal metadata and the WAL is BlueStore’s internal journal or write-ahead log. It is recommended to use a fast SSD or NVRAM for better performance.|
Until Ceph Luminous, Filestore was used as the storage type for Ceph OSDs. It can still be used and might give better performance in small setups, when backed by an NVMe SSD or similar.
pveceph createosd /dev/sd[X] -bluestore 0
|In order to select a disk in the GUI, the disk needs to have a GPT
partition table. You can create this with gdisk /dev/sd(x). If there is no GPT, you cannot select the disk as journal. Currently the journal size is fixed to 5 GB.|
If you want to use a dedicated SSD journal disk:
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y] -bluestore 0
Example: Use /dev/sdf as the data disk (4 TB) and /dev/sdb as the dedicated SSD journal disk.
pveceph createosd /dev/sdf -journal_dev /dev/sdb -bluestore 0
This partitions the disk (data and journal partitions), creates filesystems and starts the OSD; afterwards it is running and fully functional.
|This command refuses to initialize disk when it detects existing data. So if you want to overwrite a disk you should remove existing data first. You can do that using: ceph-disk zap /dev/sd[X]|
You can create OSDs containing both journal and data partitions or you can place the journal on a dedicated SSD. Using a SSD journal disk is highly recommended to achieve good performance.
Creating Ceph Pools
A pool is a logical group for storing objects. It holds Placement Groups (PG, pg_num), a collection of objects.
When no options are given, we set a default of 128 PGs, a size of 3 replicas and a min_size of 2 replicas for serving objects in a degraded state.
|The default number of PGs works for 2-5 disks. Ceph throws a HEALTH_WARN if you have too few or too many PGs in your cluster.|
It is advised to calculate the PG number depending on your setup; you can find
the formula and the PG calculator
[PG calculator http://ceph.com/pgcalc/]
online. While PGs can be increased later on, they can never be decreased.
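As a rough sketch of the rule of thumb that the calculator implements (the target of ~100 PGs per OSD is an assumption and may differ for your setup):
pg_num = (number of OSDs * 100) / pool size, rounded to the nearest power of two
Example: 12 OSDs with size 3: (12 * 100) / 3 = 400, so choose pg_num = 512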
You can create pools through the command line or on the GUI on each PVE host under Ceph → Pools.
pveceph createpool <name>
If you would also like to automatically get a storage definition for your pool, activate the checkbox "Add storages" in the GUI, or use the command line option --add_storages at pool creation.
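For example, to create a pool (hypothetically named mypool here) together with its storage definition in one step:
pveceph createpool mypool --add_storages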
Further information on Ceph pool handling can be found in the Ceph pool operation
[Ceph pool operation http://docs.ceph.com/docs/luminous/rados/operations/pools/]
manual.
Ceph CRUSH & device classes
The foundation of Ceph is its algorithm, Controlled Replication Under Scalable
Hashing (CRUSH).
CRUSH calculates where to store and retrieve data from; this has the advantage that no central index service is needed. CRUSH works with a map of OSDs, buckets (device locations) and rulesets (data replication) for pools.
|Further information can be found in the Ceph documentation, under the
section CRUSH map
[CRUSH map http://docs.ceph.com/docs/luminous/rados/operations/crush-map/]
.|
This map can be altered to reflect different replication hierarchies. The object replicas can be separated (e.g., by failure domains), while maintaining the desired distribution.
A common use case is to use different classes of disks for different Ceph pools. For this reason, Ceph introduced device classes with Luminous, to accommodate the need for easy ruleset generation.
The device classes can be seen in the ceph osd tree output. These classes represent their own root bucket, which can be seen with the below command.
ceph osd crush tree --show-shadow
Example output from the above command:
ID  CLASS WEIGHT  TYPE NAME
-16 nvme  2.18307 root default~nvme
-13 nvme  0.72769     host sumi1~nvme
 12 nvme  0.72769         osd.12
-14 nvme  0.72769     host sumi2~nvme
 13 nvme  0.72769         osd.13
-15 nvme  0.72769     host sumi3~nvme
 14 nvme  0.72769         osd.14
 -1       7.70544 root default
 -3       2.56848     host sumi1
 12 nvme  0.72769         osd.12
 -5       2.56848     host sumi2
 13 nvme  0.72769         osd.13
 -7       2.56848     host sumi3
 14 nvme  0.72769         osd.14
To let a pool distribute its objects only on a specific device class, you need to create a ruleset with the specific class first.
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
<rule-name>: name of the rule, to connect with a pool (seen in GUI & CLI)
<root>: which CRUSH root it should belong to (default Ceph root "default")
<failure-domain>: at which failure domain the objects should be distributed (usually host)
<class>: what type of OSD backing store to use (e.g., nvme, ssd, hdd)
Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.
ceph osd pool set <pool-name> crush_rule <rule-name>
|If the pool already contains objects, all of these have to be moved accordingly. Depending on your setup this may introduce a big performance hit on your cluster. As an alternative, you can create a new pool and move disks separately.|
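For example, assuming OSDs of class ssd and a hypothetical pool named fastpool, the two steps could look like this:
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool set fastpool crush_rule ssd-only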
You can then configure Proxmox VE to use such pools to store VM or Container images. Simply use the GUI to add a new RBD storage (see section Ceph RADOS Block Devices (RBD)).
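On the command line, a minimal sketch could look like this (the storage ID and pool name are hypothetical):
pvesm add rbd my-ceph-storage --pool mypool --content images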
For an external Ceph cluster, you also need to copy the keyring to a predefined location. If Ceph is installed on the Proxmox nodes themselves, this will be done automatically.
|The file name needs to be <storage_id> + .keyring, where <storage_id> is the expression after rbd: in /etc/pve/storage.cfg, which is my-ceph-storage in the following example:|
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
CephFS
Ceph also provides a filesystem, running on top of the same object storage as RADOS block devices do. A Metadata Server (MDS) is used to map the RADOS backed objects to files and directories, allowing Ceph to provide a POSIX-compliant, replicated filesystem. This allows you to have a clustered, highly available, shared filesystem in an easy way if Ceph is already used. Its Metadata Servers guarantee that files get balanced out over the whole Ceph cluster; this way, even high load will not overload a single host, which can be an issue with traditional shared filesystem approaches, like NFS, for example.
Proxmox VE supports both using an existing CephFS as storage, to save backups, ISO files or container templates, and creating a hyper-converged CephFS itself.
Metadata Server (MDS)
CephFS needs at least one Metadata Server configured and running in order to function. You can simply create one through the Proxmox VE web GUI’s Node -> CephFS panel or on the command line with:
pveceph mds create
Multiple metadata servers can be created in a cluster, but with the default settings only one can be active at any time. If an MDS, or its node, becomes unresponsive (or crashes), another standby MDS will get promoted to active. You can speed up the hand-over between the active and a standby MDS by using the hotstandby parameter option on creation, or, if you have already created it, you may set/add:
mds standby replay = true
in the respective MDS section of ceph.conf. With this enabled, the specific MDS will always poll the active one, so that it can take over faster, as it is in a warm state. But naturally, the active polling will cause some additional performance impact on your system and the active MDS.
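Alternatively, the hot-standby can be requested directly at creation time, using the hotstandby parameter mentioned above:
pveceph mds create --hotstandby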
Multiple Active MDS
Since Luminous (12.2.x) you can also have multiple active metadata servers
running at once, but this is normally only useful if you have a high number of
parallel clients; otherwise the MDS is seldom the bottleneck. If you want to
set this up, please refer to the Ceph documentation.
[Configuring multiple active MDS daemons http://docs.ceph.com/docs/luminous/cephfs/multimds/]
Create a CephFS
With Proxmox VE’s CephFS integration, you can easily create a CephFS via the Web GUI, the CLI or an external API interface. Some prerequisites are required for this to work:
Install the Ceph packages (if this was already done some time ago, you may want to rerun it on an up-to-date system to ensure that all CephFS related packages get installed)
Setup your Monitors
Setup your OSDs
Setup at least one MDS
After this is all checked and done, you can simply create a CephFS through either the Web GUI’s Node -> CephFS panel or the command line tool pveceph, for example with:
pveceph fs create --pg_num 128 --add-storage
This creates a CephFS named 'cephfs', using a pool for its data named
'cephfs_data' with 128 placement groups, and a pool for its metadata named
'cephfs_metadata' with one quarter of the data pool’s placement groups (32).
Check the Proxmox VE managed Ceph pool chapter or visit the
Ceph documentation for more information regarding a fitting placement group
number (pg_num) for your setup
[Ceph Placement Groups http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/]
. Additionally, the --add-storage parameter will add the CephFS to the Proxmox VE storage configuration after it has been created successfully.
|Destroying a CephFS will render all of its data unusable; this cannot be undone!|
If you really want to destroy an existing CephFS, you first need to stop, or destroy, all metadata servers (MDS). You can destroy them either via the Web GUI or the command line interface, with:
pveceph mds destroy NAME
on each Proxmox VE node hosting an MDS daemon.
Then, you can remove (destroy) the CephFS by issuing:
ceph fs rm NAME --yes-i-really-mean-it
on a single node hosting Ceph. After this, you may want to remove the created data and metadata pools; this can be done either via the Web GUI or the CLI with:
pveceph pool destroy NAME
Ceph monitoring and troubleshooting
A good start is to continuously monitor Ceph health from the very beginning of the deployment, either through the Ceph tools themselves, or by accessing the status through the Proxmox VE API.
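For example, a quick way to query the status over the Proxmox VE API is pvesh (the node name pve1 is hypothetical):
pvesh get /nodes/pve1/ceph/status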
The following Ceph commands can be used to see if the cluster is healthy (HEALTH_OK), if there are warnings (HEALTH_WARN), or even errors (HEALTH_ERR). If the cluster is in an unhealthy state, the status commands below will also give you an overview of the current events and actions to take.
# single time output
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w
To get a more detailed view, every Ceph service has a log file under
/var/log/ceph/. If there is not enough detail, the log level can be adjusted
[Ceph log and debugging http://docs.ceph.com/docs/luminous/rados/troubleshooting/log-and-debug/]
.
You can find more information about troubleshooting
[Ceph troubleshooting http://docs.ceph.com/docs/luminous/rados/troubleshooting/]
a Ceph cluster on its website.