CSE 265: System and Network Administration

Lab #4

Today we focus on disk, volume, and partition management. The lab is particularly detail-oriented.

  1. Review the slides on disks, partitions, and filesystems that we covered in class. You might also want to skim Chapter 8 of ULSAH.

  2. Resizing the home filesystem

    Let's start by exploring. If you type df at the shell, you'll see four filesystems mounted on your system. The ones that correspond to parts of your hard drive are / (the root filesystem on which other filesystems are mounted), /boot (the filesystem that contains grub and other booting-specific files), and /home (the filesystem that holds the home directories of the users of this system, such as yours).

    To see how things are allocated at a lower level, as root, run fdisk (with the correct argument). See how your drive has one partition for booting, and one additional partition for LVM.

    Right now all space on your hard drive has been allocated. On my machine, roughly 0.5GB is allocated to /boot, 50GB is allocated to /, and 402GB is allocated to /home. (Note that df has an option to show easier-to-read sizes, such as GB or MB instead of KB where appropriate.)
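    For example, a quick exploration as root might look like this (the device name /dev/sda is an assumption; it may differ on your machine):

    ```shell
    # Show mounted filesystems with human-readable sizes
    df -h

    # List the partition table of the first disk: expect one small
    # boot partition and one large partition of type "Linux LVM"
    fdisk -l /dev/sda
    ```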

    So our initial goal is to shrink that /home partition, since it has the bulk of the storage on the drive, and indeed the bulk of unused storage on the drive as well. Typically you cannot modify a filesystem while it is in use, and the /home filesystem is in use whenever a user is logged in, since it is where your home directory is. Fortunately, the root user's home directory is somewhere else (where?). So, log out, and log back in as root.

    Now that you are root, you can unmount the /home filesystem. Beforehand, run ls -l /home to see your home directory there. Then run umount /home. This unmounts the specified filesystem that is mounted on /home (note that you cannot, for example, unmount the /etc directory because it is not a mounted filesystem). Now if you check the contents of /home you'll find it is empty. Your home directory still exists on the filesystem on the partition used for /home, but it is not currently part of the active filesystem. Since it is no longer active, we can make changes to it.
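    As a sketch, the unmount sequence looks like this (run as root, with no regular users logged in):

    ```shell
    ls -l /home    # your home directory is visible here
    umount /home   # detach the filesystem mounted at /home
    ls -l /home    # the mount point is now an empty directory
    ```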

    Fortunately, Linux provides a command to resize some kinds of filesystems (such as the ext4 type that we are using). Review briefly the resize2fs command. Note that it acts on the filesystem on a device. How do we know what device /home is on? It turns out that /home is on a virtual device (a logical volume) but that doesn't matter for the moment. To find out what device holds a filesystem, look in /etc/fstab. This file, as you may recall, holds the list of devices and where they are mounted in the filesystem. In it, you'll find a long path that corresponds to the particular LVM device used for /home. Now use that information in the resize2fs command to change the filesystem size to be 10GB.

    When you do this correctly, the system will complain, and tell you to run e2fsck instead. That's fine -- e2fsck is a special version of the fsck command for ext filesystems. Running it will check to make sure that the filesystem is coherent, and the resize2fs command wants to ensure that it is not operating on a corrupted filesystem. After you run the suggested command, you'll be able to run resize2fs again successfully. You now have a /home filesystem that only spans 10GB of the 400+GB logical volume (virtual partition) on which it is placed. You can view the size of all logical volumes with the lvs command.
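    Putting the pieces together, the shrink might look like this (the device path /dev/mapper/vg_sandbox-lv_home is a typical name, not a given; use the path from your own /etc/fstab):

    ```shell
    # Find the device backing /home
    grep home /etc/fstab

    # Check the unmounted filesystem first, as resize2fs insists
    e2fsck -f /dev/mapper/vg_sandbox-lv_home

    # Shrink the ext4 filesystem to 10GB
    resize2fs /dev/mapper/vg_sandbox-lv_home 10G
    ```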

    We now want to reduce the size of the logical volume to match the new size of the filesystem on it. Use lvreduce to do so, and re-run lvs to verify the change that you made in the lv_home logical volume. You now have about 400GB free on your drive!
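    A sketch of the reduction (again assuming the volume group is named vg_sandbox):

    ```shell
    # Shrink the logical volume to match the 10GB filesystem.
    # lvreduce warns that shrinking can destroy data; that is safe
    # here only because the filesystem was already shrunk first.
    lvreduce -L 10G /dev/vg_sandbox/lv_home
    lvs   # verify lv_home is now 10GB
    ```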

  3. Software RAID

    Now that we have available space, we can create some more logical volumes that have some value. Let's start by exploring RAID. In particular, let's use software RAID to create a new filesystem. RAID 1 (mirroring) or RAID 5 (n+1) can be applied to a set of equal-sized drive partitions (below LVM) or logical volumes (above LVM). Let's build our RAID on top of LVM. The RAID-specific steps would be the same if we were using partitions across multiple drives.

    The first step is to create some new logical volumes, each 10GB in size. Our goal is to provide a 20GB RAID-5 volume for /home. Type lvcreate -L10G -n lv_home01 vg_sandbox (assuming the volume group on your machine is called vg_sandbox). Do the same for two more (e.g., lv_home02, lv_home03). Use lvs to see that you have created three new logical volumes. (Note that you can use lvremove if you make a mistake in this process.)
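    The three creations, as a sketch:

    ```shell
    # Create three 10GB logical volumes to serve as RAID members
    lvcreate -L10G -n lv_home01 vg_sandbox
    lvcreate -L10G -n lv_home02 vg_sandbox
    lvcreate -L10G -n lv_home03 vg_sandbox
    lvs   # confirm the three new volumes exist
    ```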

    Now let's establish our software RAID device. Type mdadm --create /dev/md0 --auto=yes --level=raid5 -n 3 /dev/mapper/X1 /dev/mapper/X2 /dev/mapper/X3 where you replace X1..X3 with the names of the newly created logical volumes (found in /dev/mapper). This tells mdadm to create a new raid 5 device at /dev/md0 with the three specific volumes as the underlying sources, and when finished, lets us access the combined device as /dev/md0.
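    With the typical device-mapper naming (vg_sandbox-lv_home01 and so on; check /dev/mapper for the exact names on your machine), the command becomes:

    ```shell
    # Assemble the three logical volumes into a RAID-5 array
    mdadm --create /dev/md0 --auto=yes --level=raid5 -n 3 \
        /dev/mapper/vg_sandbox-lv_home01 \
        /dev/mapper/vg_sandbox-lv_home02 \
        /dev/mapper/vg_sandbox-lv_home03
    ```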

    We can now place a filesystem on it, using mke2fs (or mkfs). Make sure you create an ext4 filesystem. Once that is in place, try mounting it at /mnt and running df. Look at /proc/mdstat to see the status of the RAID array. If it shows useful information about RAID (e.g., active raid5, etc.), then it is functioning---the OS is performing software RAID and making it visible via /dev/md0. (Notice that you can use the filesystem even while the RAID devices are not yet consistent.)
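    A sketch of these steps:

    ```shell
    # Put an ext4 filesystem on the array and mount it
    mke2fs -t ext4 /dev/md0
    mount /dev/md0 /mnt

    df -h /mnt          # about 20GB usable (2 data + 1 parity)
    cat /proc/mdstat    # shows the array state and rebuild progress
    ```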

    RHEL/CentOS assumes that RAID is only started prior to LVM. In this case, we are creating a RAID that exists on top of LVM, so it must be started afterwards. You'll need to edit the /etc/rc.d/rc.sysinit file -- move the seven lines for lvm to be prior to the MD RAID block right above it. This is a good place to reboot to make sure that the RAID shows up functioning again. (It is safe to reboot, even if the RAID is still rebuilding.) If it shows up correctly (after checking /proc/mdstat), you can mount it again to /mnt.

    Finally, to make this permanent, we should move the contents of the existing /home (since it has at least one user directory in it) to our new filesystem on /mnt, and then unmount both and re-mount the new one to /home. (It would be a good idea to make sure that no regular user is logged in while you move their home directory.) To finish, add and remove the appropriate entries in /etc/fstab and reboot to make sure it works.
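    One way to do the migration (a sketch, again assuming the typical vg_sandbox device names; cp -a preserves ownership, permissions, and timestamps):

    ```shell
    # With /dev/md0 mounted at /mnt and no regular users logged in:
    mount /dev/vg_sandbox/lv_home /home   # remount the old home briefly
    cp -a /home/. /mnt/                   # copy everything, preserving metadata
    umount /home /mnt
    mount /dev/md0 /home                  # /home is now RAID-backed
    ```

    After updating /etc/fstab, the reboot confirms that /home comes up on /dev/md0 automatically.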

  4. Additional swap

    Your machine was initially set up to have about 8GB of swap space. (Use top to see this.) Our task at this point is to create a new swap partition by re-using the old home logical volume.

    Before we begin, as root, run lvs. It will show six logical volumes that are part of one volume group. Three of them are the volumes created for the RAID array above. A fourth logical volume is used for swap space. A fifth previously held the user home directories. The last is for the root filesystem.

    Use mkswap to initialize the old home logical volume as swap space (this is very fast). Now use top to see how much swap you have. Although the swap area is ready, the OS hasn't yet been told to use it. Edit /etc/fstab to include the new swap entry, then use swapon -a to turn it on. Run top once again to see that you now have 10GB more of swap.
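    The whole step, as a sketch (the device-mapper path for the old home volume is an assumption; use yours):

    ```shell
    # Initialize the old home logical volume as swap space
    mkswap /dev/mapper/vg_sandbox-lv_home

    # Add a line like this to /etc/fstab:
    #   /dev/mapper/vg_sandbox-lv_home  swap  swap  defaults  0 0

    swapon -a   # enable every swap entry listed in fstab
    top         # total swap should now be about 10GB larger
    ```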

  5. Resizing a volume

    (Warning: this part is particularly dangerous to your filesystem. Double check your work before you do something that requires you to reinstall your OS from scratch!)

    At this point your /home filesystem is much smaller than when we started. (Again, you can use df and lvs to see the filesystems and logical volumes known to your system.) If you add the space used by the logical volumes shown by lvs and compare to the size of the partition used by LVM, you will find that there is quite a bit of space unused. However, if you use pvs to see the information about your physical volumes, you'll see that your LVM partition is listed as almost the whole drive. This means you cannot create new partitions (as there is no space on your drive). You can only create logical volumes to utilize available space within the Volume Group already defined.

    Sometimes it is useful to be able to create partitions directly, skipping LVM volumes entirely. Our final goal in this lab will be to shrink the physical volume so that you could create new partitions on your drive. Unfortunately, the pvresize command will not shrink a physical volume that has extents allocated in areas that will no longer be covered by a smaller volume.

    So your first step will be to find out whether the allocated extents of your physical volume are indeed allocated outside of the minimum needed. As you may recall from your readings about LVM, a physical volume is composed of physical extents (portions of a physical volume that can be allocated to a particular volume group). As we saw above with pvs, there was lots of space free on your only volume group. Since LVM could use any extent (it is not required to use only the lowest-ordered extents), we must first find out where our logical volumes have been allocated (that is, which physical extents are being used).

    We can find out what LVM is doing by telling it to create a backup LVM volume group configuration. Run /sbin/vgcfgbackup and it will create the file /etc/lvm/backup/vg_sandbox with human-readable contents.

    Examine this file. You will probably want to make some notes to reflect the layout being described. The physical_volumes entry will show you the range of extents used (pe_start and pe_count). Each logical volume has a starting extent (specified by "pv0" within stripes) and an extent_count saying how large it is. What you want to see is that all of the physical extents used by your logical volumes are packed together at the beginning of the partition. If that is the case, it means that we can truncate this partition without harm.

    If we have physical extents in use in the area we wish to free (at the end) that correspond to volumes containing data, we need to move that data to an earlier available extent. The command to do this is pvmove. Use it to pack your physical extents all at the beginning of the partition. And then re-run vgcfgbackup to update your backup file, which you can examine to verify that it is now correct.
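    pvmove accepts source and destination extent ranges on the same physical volume. The extent numbers below are purely hypothetical; substitute the ranges you read out of your own backup file:

    ```shell
    # Hypothetical example: relocate extents 100000-102559 (in use
    # near the end of /dev/sda2) to free extents starting at 12800
    pvmove --alloc anywhere /dev/sda2:100000-102559 /dev/sda2:12800-15359

    # Refresh the backup and re-examine the layout
    vgcfgbackup
    less /etc/lvm/backup/vg_sandbox
    ```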

    The next step is to actually change the size of the LVM physical volume. While we could modify the volume backup file and tell the system about the revised file with /sbin/vgcfgrestore, LVM also provides a command that does exactly what we want: pvresize. So, look at its short man page, and figure out how to use 100GB less storage from your drive.
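    For example, if pvs reports the physical volume at roughly 456GB, the resize might look like this (the sizes here are hypothetical; compute your own target):

    ```shell
    # Shrink the physical volume to an explicit new size.
    # pvresize refuses if any allocated extents would fall
    # outside the new boundary.
    pvresize --setphysicalvolumesize 356G /dev/sda2
    pvs   # verify the new PV size
    ```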

    Once you've done so, LVM is now convinced that its storage allocation has changed but the disk partitioning does not yet reflect it. Run fdisk again to see that the partitions have not changed.

    What we need is to resize the /dev/sda2 partition. We can use parted to accomplish our goal. Run parted on the drive. At the parted prompt, type print to list the partitions that it sees. We are going to replace the existing sda2 partition with a smaller one. Use the rm command to delete the 2nd partition. Then use mkpart primary fat32 (not mkpartfs) to create a new one that leaves 100GB free. Note that mkpartfs would install a filesystem on the new partition (overwriting our existing filesystem!). Check the revised partition table with print, and notice that the new (smaller) partition does not have the right filesystem type. The one we removed had the lvm flag. Now use the set command to set the lvm flag. That's it! You can now quit. While parted does not require rebooting after these changes, you might want to do so to verify that everything is still functional.
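    An example parted session (the sizes shown are hypothetical; compute your own start and end points from the print output):

    ```shell
    parted /dev/sda
    (parted) print                             # note sda2's start, e.g. 525MB
    (parted) rm 2                              # delete the old LVM partition
    (parted) mkpart primary fat32 525MB 356GB  # recreate it, leaving ~100GB free
    (parted) set 2 lvm on                      # restore the lvm flag
    (parted) print                             # verify the new table
    (parted) quit
    ```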

  6. Wrapping Up

    In order to sign the lab completion sheet, you will need to:
    1. show me /etc/fstab and the output of /proc/mdstat and df to demonstrate the use of RAID
    2. show me top demonstrating the activation of your additional swap space
    3. show me the (correct) output of lvs, pvs, and fdisk


This page can be reached from http://www.cse.lehigh.edu/~brian/course/2016/sysadmin/labs/
Last revised: 16 February 2016.