=== Basic functionality ===
• Volume groups (VGs) can be resized online by absorbing new physical volumes (PVs) or ejecting existing ones.
• Logical volumes (LVs) can be resized online by concatenating extents onto them or truncating extents from them (see the example commands after this list).
• LVs can be moved between PVs.
• Read-only snapshots of LVs can be created (LVM1), leveraging a copy-on-write (CoW) feature; LVM2 adds read/write snapshots.
• VGs can be split or merged in situ as long as no LVs span the split. This can be useful when migrating whole LVs to or from offline storage.
• LVM objects can be tagged for administrative convenience.
• VGs and LVs can be made active as the underlying devices become available through use of the lvmetad daemon.
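As a minimal sketch of online resizing and snapshot creation, the following commands could be used; the volume group vg0, the device /dev/sdb1, the LV names and the sizes are illustrative assumptions, not part of any particular setup:

<syntaxhighlight lang="bash">
# Grow a VG online by absorbing a new PV (vg0 and /dev/sdb1 are assumed names)
pvcreate /dev/sdb1
vgextend vg0 /dev/sdb1

# Grow an LV online by 10 GiB and resize its file system in the same step
lvextend --resizefs -L +10G /dev/vg0/data

# Create a read/write snapshot (LVM2) with 1 GiB of CoW space
lvcreate --snapshot -L 1G -n data_snap /dev/vg0/data
</syntaxhighlight>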
=== Advanced functionality ===
• Hybrid volumes can be created using the dm-cache target, which allows one or more fast storage devices, such as flash-based SSDs, to act as a cache for one or more slower hard disk drives (a sketch follows this list).
• Thinly provisioned LVs can be allocated from a pool.
• On newer versions of device mapper, LVM is integrated with the rest of device mapper enough to ignore the individual paths that back a dm-multipath device if devices/multipath_component_detection=1 is set in lvm.conf. This prevents LVM from activating volumes on an individual path instead of the multipath device.
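The following is a minimal sketch of creating a dm-cache–backed LV and a thinly provisioned LV; the volume group vg0, the SSD device /dev/sdc, and the LV names and sizes are assumptions made for illustration:

<syntaxhighlight lang="bash">
# Carve a cache pool out of an SSD PV and attach it to an existing slow LV
lvcreate --type cache-pool -L 10G -n fast_pool vg0 /dev/sdc
lvconvert --type cache --cachepool vg0/fast_pool vg0/slow_data

# Thin provisioning: a 100 GiB pool backing a 500 GiB virtual LV
lvcreate --type thin-pool -L 100G -n tpool vg0
lvcreate --thin -V 500G -n thin_data vg0/tpool
</syntaxhighlight>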
=== RAID ===
• LVs can be created to include RAID functionality, including RAID 1, 5 and 6.
• Entire LVs or their parts can be striped across multiple PVs, similarly to RAID 0.
• A RAID 1 backend device (a PV) can be configured as "write-mostly", so that reads are directed away from it unless necessary.
• The recovery rate can be limited using lvchange --raidmaxrecoveryrate and lvchange --raidminrecoveryrate, to maintain acceptable I/O performance while rebuilding an LV that includes RAID functionality (see the sketch after this list).
=== High availability ===
The LVM also works in a shared-storage cluster in which disks holding the PVs are shared between multiple host computers, but can require an additional daemon to mediate metadata access via a form of locking.

; CLVM : A distributed lock manager is used to broker concurrent LVM metadata accesses. Whenever a cluster node needs to modify the LVM metadata, it must secure permission from its local clvmd, which is in constant contact with other clvmd daemons in the cluster and can communicate a desire to get a lock on a particular set of objects.
; HA-LVM : Cluster-awareness is left to the application providing the high-availability function. For the LVM's part, HA-LVM can use CLVM as a locking mechanism, or can continue to use the default file locking and reduce "collisions" by restricting access to only those LVM objects that have appropriate tags. Since this simpler solution avoids contention rather than mitigating it, no concurrent accesses are allowed, so HA-LVM is considered useful only in active-passive configurations.
; lvmlockd : A stable LVM component designed to replace clvmd by making the locking of LVM objects transparent to the rest of LVM, without relying on a distributed lock manager. It saw massive development during 2016.

The mechanisms described above only resolve the issues with LVM's access to the storage. The file system selected to sit on top of such LVs must either support clustering by itself (such as GFS2 or VxFS) or it must only be mounted by a single cluster node at any time (such as in an active-passive configuration). A brief sketch of shared-VG usage under lvmlockd follows.
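The following is a minimal sketch of using a shared VG with lvmlockd; the VG name vg_shared and the device /dev/mapper/mpatha are assumptions, and lvmlockd together with its lock manager is assumed to be already running on each node:

<syntaxhighlight lang="bash">
# Create a shared VG whose locking is handled by lvmlockd
# (vg_shared and /dev/mapper/mpatha are assumed names)
vgcreate --shared vg_shared /dev/mapper/mpatha

# On each node: start the VG's lock space, then activate an LV in shared mode
vgchange --lock-start vg_shared
lvchange -asy vg_shared/data
</syntaxhighlight>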
=== Volume group allocation policy ===
LVM VGs must contain a default allocation policy for new volumes created from them. This can later be changed for each LV using the lvconvert -A command, or on the VG itself via vgchange --alloc. To minimize fragmentation, LVM will attempt the strictest policy (contiguous) first and then progress toward the most liberal policy defined for the LVM object until allocation finally succeeds.

In RAID configurations, almost all policies are applied to each leg in isolation. For example, even if an LV has a policy of cling, expanding the file system will not result in LVM using a PV that is already used by one of the other legs in the RAID setup. LVs with RAID functionality will put each leg on a different PV, making the other PVs unavailable to any given leg. If this were the only option available, expansion of the LV would fail. In this sense, the logic behind cling only applies to expanding each of the individual legs of the array.
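As a brief sketch of how the policy can be set at the two levels (the volume group vg0, the LV name and the size are assumptions):

<syntaxhighlight lang="bash">
# Change the VG-wide default allocation policy (vg0 is an assumed name)
vgchange --alloc cling vg0

# Request a specific policy when creating an individual LV
lvcreate --alloc contiguous -L 5G -n fast_lv vg0
</syntaxhighlight>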
Available allocation policies are:
• Contiguous – forces all LEs in a given LV to be adjacent and ordered. This eliminates fragmentation but severely reduces an LV's expandability.
• Cling – forces new LEs to be allocated only on PVs already used by an LV. This can help mitigate fragmentation as well as reduce the vulnerability of particular LVs should a device go down, by reducing the likelihood that other LVs also have extents on that PV.
• Normal – implies near-indiscriminate selection of PEs, but it will attempt to keep parallel legs (such as those of a RAID setup) from sharing a physical device.
• Anywhere – imposes no restrictions whatsoever. Highly risky in a RAID setup as it ignores isolation requirements, undercutting most of the benefits of RAID. For linear volumes, it can result in increased fragmentation.

== Implementation ==