Using Btrfs with Multiple Devices
From btrfs Wiki
A Btrfs filesystem can be created on top of many devices, and more devices can be
added after the FS has been created.
By default, metadata will be mirrored across two devices and data will be striped
across all of the devices present.
If only one device is present, metadata will be duplicated on that one device.
Btrfs can add and remove devices online, and freely convert between RAID levels
after the FS has been created.
Btrfs supports raid0, raid1, raid10, raid5 and raid6 (but see the section below
about raid5/6), and it can also duplicate metadata on a single spindle. When
blocks are read in, checksums are verified. If there are any errors, Btrfs tries to
read from an alternate copy and will repair the broken copy if the alternative copy
is good.
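The self-healing behaviour described above can be illustrated with a small simulation. This is not btrfs itself, just a sketch of the idea: two copies of a "block" plus a checksum stored at write time, with the read path verifying, falling back, and repairing. The `read_block` function and file names are made up for the example.

```shell
# Simulation only (not btrfs): keep two copies of a block plus a stored
# checksum, corrupt one copy, then detect the error on read and repair the
# bad copy from the good one, as btrfs does with a broken mirror.
set -e
dir=$(mktemp -d)
printf 'block data' > "$dir/copy_a"
cp "$dir/copy_a" "$dir/copy_b"
good=$(sha256sum < "$dir/copy_a" | cut -d' ' -f1)   # checksum stored at write time

printf 'corrupted!' > "$dir/copy_a"                 # silent corruption of copy A

read_block() {
    if [ "$(sha256sum < "$dir/copy_a" | cut -d' ' -f1)" = "$good" ]; then
        cat "$dir/copy_a"                           # primary copy verified OK
    elif [ "$(sha256sum < "$dir/copy_b" | cut -d' ' -f1)" = "$good" ]; then
        cp "$dir/copy_b" "$dir/copy_a"              # repair the broken copy
        cat "$dir/copy_b"
    else
        return 1                                    # both copies bad: I/O error
    fi
}

recovered=$(read_block)    # returns "block data" via copy B; copy A is repaired
```

If both copies fail their checksums there is nothing left to repair from, which is why btrfs reports an I/O error in that case.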
See the Gotchas page for some current issues when using btrfs with multiple
volumes of differing sizes in a RAID1 style setup.
Please read the parity RAID status page first: RAID56.
Note that the minimum number of devices required for RAID5 is 2. In case of a 2
device RAID5 filesystem, one device has data and the other has parity data.
Similarly, for RAID6, the minimum is 3 devices.
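A back-of-the-envelope capacity model makes these minimums concrete. The sketch below assumes n equal-sized devices (real btrfs allocates chunk by chunk, so this is only an approximation); the `raid_capacity` function name is invented for the example.

```shell
# Rough usable capacity per profile, assuming n equal-sized devices.
raid_capacity() {    # raid_capacity <profile> <num_devices> <gib_per_device>
    n=$2; size=$3
    case $1 in
        single|raid0) echo $(( n * size )) ;;         # no redundancy
        raid1|raid10) echo $(( n * size / 2 )) ;;     # two copies of everything
        raid5)        echo $(( (n - 1) * size )) ;;   # one device's worth of parity
        raid6)        echo $(( (n - 2) * size )) ;;   # two devices' worth of parity
    esac
}

raid_capacity raid5 2 100    # 2-device minimum: 100 GiB data, 100 GiB parity
raid_capacity raid6 3 100    # 3-device minimum: 100 GiB data, 200 GiB parity
```

At the minimum device counts the parity overhead equals (RAID5) or exceeds (RAID6) the usable space, which is one reason such small arrays are rarely worthwhile.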
mkfs.btrfs will accept more than one device on the command line. It has options to
control the raid configuration for data (-d) and metadata (-m). Valid choices are
raid0, raid1, raid5, raid6, raid10, dup and single. The option -m single means that
no duplication of metadata is done, which may be desired when using hardware raid.
Raid10 requires at least 4 devices.
Create a filesystem across four drives (metadata mirrored, linear data allocation)
#mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde
Stripe the data without mirroring
#mkfs.btrfs -d raid0 /dev/sdb /dev/sdc
Use raid10 for both data and metadata
#mkfs.btrfs -m raid10 -d raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
Don't duplicate metadata on a single drive (default on single SSDs)
#mkfs.btrfs -m single /dev/sdb
If you want to use devices of different sizes, striped RAID levels (RAID-0, RAID-10,
RAID-5, RAID-6) may not use all of the available space on the devices. Non-striped
equivalents may give you a more effective use of space (single instead of RAID-0,
RAID-1 instead of RAID-10).
Use full capacity of multiple drives with different sizes (metadata mirrored, data not mirrored and not striped)
#mkfs.btrfs -d single /dev/sdb /dev/sdc
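The space trade-off with unequal devices can be sketched with a toy model for the two-device case (sizes in GiB). The `usable_two` helper is hypothetical and simplified: striped and mirrored profiles are limited by the smaller device, while -d single can place chunks anywhere.

```shell
# Toy model of usable space for two devices of unequal size (GiB).
usable_two() {    # usable_two <profile> <size_a> <size_b>
    a=$2; b=$3
    if [ "$a" -lt "$b" ]; then small=$a; else small=$b; fi
    case $1 in
        single) echo $(( a + b )) ;;       # chunks go anywhere: all space usable
        raid0)  echo $(( 2 * small )) ;;   # every stripe needs both devices
        raid1)  echo "$small" ;;           # every chunk mirrored on both devices
    esac
}

usable_two single 100 500    # 600
usable_two raid0  100 500    # 200
usable_two raid1  100 500    # 100
```

With a 100 GiB and a 500 GiB device, single exposes all 600 GiB, while raid0 and raid1 strand most of the larger device.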
Once you create a multi-device filesystem, you can use any device in the FS for the
mount command:
#mkfs.btrfs /dev/sdb /dev/sdc /dev/sde
#mount /dev/sde /mnt
If you want to mount a multi-device filesystem using a loopback device, it's not
sufficient to use mount -o loop. Instead, you'll have to set up the loopback
devices by hand:
Create and mount a filesystem made of several disk images
#mkfs.btrfs img0 img1 img2
#losetup /dev/loop0 img0
#losetup /dev/loop1 img1
#losetup /dev/loop2 img2
#mount /dev/loop0 /mnt/btrfs
After a reboot or after reloading the btrfs module, you'll need to use btrfs device
scan to discover all multi-device filesystems on the machine (see below).
The UseCases page gives a few quick recipes for filesystem creation.
btrfs device scan is used to scan all of the block devices under /dev and probe
for Btrfs volumes. This is required after loading the btrfs module if you're running
with more than one device in a filesystem.
Scan all devices
#btrfs device scan
Scan a single device
#btrfs device scan /dev/sdb
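As an alternative to scanning, the member devices can be named explicitly at mount time with the btrfs device= mount option (a sketch, reusing the example device names from above; all members of the filesystem must be listed):

```shell
#mount -o device=/dev/sdb,device=/dev/sdc /dev/sdb /mnt
```

This avoids relying on a scan having run earlier in boot, at the cost of hard-coding device paths in the mount options.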
btrfs filesystem show prints information about all of the btrfs filesystems on the
system and which devices they include.
btrfs device add is used to add new devices to a mounted filesystem.
btrfs filesystem balance can balance (restripe) the allocated extents across all
of the existing devices. For example, with an existing filesystem mounted at /mnt,
you can add the device /dev/sdc to it with:
#btrfs device add /dev/sdc /mnt
At this point we have a filesystem with two devices, but all of the metadata and
data are still stored on the original device(s). The filesystem must be balanced to
spread the files across all of the devices.
#btrfs filesystem balance /mnt
The balance operation will take some time. It reads in all of the FS data and
metadata and rewrites it across all the available devices.
A non-raid filesystem is converted to raid by adding a device and running a balance filter that will change the chunk allocation profile. For example, to convert an existing single device system (/dev/sdb1) into a 2 device raid1 (to protect against a single disk failure):
#mount /dev/sdb1 /mnt
#btrfs device add /dev/sdc1 /mnt
#btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
If the metadata is not converted from the single-device default, it remains as DUP, which does not guarantee that copies of a block are on separate devices. If data is not converted, it does not have any redundant copies at all.
btrfs device delete is used to remove devices online. It redistributes any extents
in use on the device being removed to the other devices in the filesystem.
#mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde
#mount /dev/sdb /mnt
Put some data on the filesystem here
#btrfs device delete /dev/sdc /mnt