CSE 265: System and Network Administration

Lab #3

Today we focus on disk, volume, and partition management. Since some of the steps are performed while not in graphical mode, I recommend printing this lab before starting it.

  1. Review the slides on Disks, partitions and filesystems that we covered in class. You should also finish the remaining slides from that topic on file access modes, as they are straightforward and we will not cover them further in class. You might also want to read through a useful reference on LVM.

  2. Let's start by exploring RAID. In particular, let's use software RAID to create a new filesystem. RAID 1 (mirroring) or RAID 5 (n+1) can be applied to a set of equal-sized partitions (below LVM) or logical volumes (above LVM). At the moment, we don't have drive space available for new partitions (we would have to resize the existing large partition that is currently used by LVM). On the other hand, we do have space within LVM to allocate to additional logical volumes (since we didn't use it all in our first lab). So let's build our RAID on top of LVM. While this is not a particularly realistic scenario, the RAID-specific steps are the same as if we were using partitions across multiple drives.

    The first step is to create some new logical volumes, each 1GB in size. Our goal is to provide a 2GB RAID-5 volume for /home. Type lvcreate -L1G -n lv_home01 vg_sandbox (assuming the volume group on your machine is called vg_sandbox). Do the same for two more (e.g., lv_home02, lv_home03). Use lvs to see that you have created three new logical volumes. (Note that you can use lvremove if you make a mistake in this process.)
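    For example, the full sequence might look like this (a sketch only, assuming the vg_sandbox name from above; adjust it to match what lvs reports on your machine):

        lvcreate -L1G -n lv_home01 vg_sandbox
        lvcreate -L1G -n lv_home02 vg_sandbox
        lvcreate -L1G -n lv_home03 vg_sandbox
        lvs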

    Now let's establish our software RAID device. Type "mdadm --create /dev/md0 --auto=yes --level=raid5 -n 3 /dev/mapper/X1 /dev/mapper/X2 /dev/mapper/X3", where you replace X1..X3 with the names of the newly created logical volumes (found in /dev/mapper). This tells mdadm to create a new RAID 5 device at /dev/md0 with the three specified volumes as the underlying sources, and when finished, lets us access the combined device as /dev/md0.
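    With LVM's default naming, the entries in /dev/mapper combine the volume group and logical volume names, so (assuming vg_sandbox and the lv_home0X names above) the command might look like:

        mdadm --create /dev/md0 --auto=yes --level=raid5 -n 3 \
            /dev/mapper/vg_sandbox-lv_home01 \
            /dev/mapper/vg_sandbox-lv_home02 \
            /dev/mapper/vg_sandbox-lv_home03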

    We can now place a filesystem on it, using mke2fs (or mkfs). Make sure you create an ext4 filesystem. Once that is in place, try mounting it at /mnt and running df. Look at /proc/mdstat to see the status of the RAID device. If it shows useful information about RAID (e.g., active raid5, etc.), then it is functioning -- the OS is performing software RAID and making it visible via /dev/md0. (Notice that you can use the filesystem even when the RAID devices are not yet consistent.)
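    One possible sequence, assuming the RAID device appeared as /dev/md0:

        mke2fs -t ext4 /dev/md0
        mount /dev/md0 /mnt
        df -h
        cat /proc/mdstat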

    RHEL/CentOS assumes that RAID is started only prior to LVM. In this case, we are creating a RAID that sits on top of LVM, so it must be started afterwards. You'll need to edit the /etc/rc.d/rc.sysinit file -- move the three lvm lines so that they come before the MD RAID block that currently sits just above them. This is a good place to reboot to make sure that the RAID comes up functioning again (check /proc/mdstat after boot). If it does, you can mount it again at /mnt.

    Finally, to make this permanent, we should use mv to move the contents of the existing /home (since it has at least one user directory in it) to our new filesystem on /mnt, then unmount both and re-mount the new one at /home. (It would be a good idea to make sure that no regular user is logged in while you move their home directory.) To finish, remove the old /home entry from /etc/fstab, add one for the new RAID device, and reboot to make sure it works.
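    A sketch of the sequence, assuming the RAID filesystem is still mounted at /mnt and that /home is currently a separate mount (adapt the paths to your setup):

        mv /home/* /mnt/
        umount /mnt
        umount /home        # only needed if /home was its own mount point
        mount /dev/md0 /home

    The corresponding /etc/fstab line (replacing the old /home entry) would look something like:

        /dev/md0    /home    ext4    defaults    1 2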

  3. The last step is dangerous, and so we will need to install a rescue disk. Actually, the net install disk we used originally would work, but now that we have a booting machine we can instead install that disk's contents onto our machines. This means that we can effectively access a rescue disk without using removable media.

    First, download the CD image from http://ftp.cse.lehigh.edu/pub/centos/6.2/isos/x86_64/CentOS-6.2-x86_64-netinstall.iso. This file is 227M, so it will fit in your home directory (which is now in a 2GB filesystem). We can access the files on any disk image (ISO) with the command mount -o loop disk.iso /mnt (substituting the right filename for disk.iso). Go ahead and mount it; you can then cd to /mnt to see the filesystem that exists within that image.
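    For instance, from your home directory (wget assumes the machine can reach the mirror; any download method is fine):

        cd ~
        wget http://ftp.cse.lehigh.edu/pub/centos/6.2/isos/x86_64/CentOS-6.2-x86_64-netinstall.iso
        mount -o loop CentOS-6.2-x86_64-netinstall.iso /mnt
        ls /mnt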

    We need to copy the contents (files and directories) from this image to /boot (because /boot is on a partition and is thus accessible by the rescue kernel). Please do so -- it should fit on your /boot partition. We now need to add an entry in grub so that you can start the rescue kernel. Copy an existing entry, give it your own title, and make the following changes: the kernel to run is /isolinux/vmlinuz, the initrd file is /isolinux/initrd.img, and the keyword rescue should be added to the kernel line. That should be enough. Reboot, and try your rescue kernel. When it asks where to find the rescue image, say on a hard drive in /dev/sda. Later, when it asks to try to find your filesystems, choose Continue; eventually it will find them and give you the option of starting a shell. Just type exit from the shell and choose reboot.
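    A sketch of what the new grub.conf entry might look like; the root (hd0,0) line assumes /boot is the first partition of the first drive, so copy that detail from your existing entries:

        title Rescue (CentOS 6.2 netinstall)
                root (hd0,0)
                kernel /isolinux/vmlinuz rescue
                initrd /isolinux/initrd.img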

  4. Assuming your rescue kernel works, we can continue. Our task at this point is to resize the logical volume holding the Linux root filesystem and create additional swap space. This kind of work is perhaps the most "dangerous" that we do -- we are going to modify a functional filesystem. Certain kinds of human errors in this step may render the current installation on your drive useless, and you might need to reinstall the OS as you did in the first lab. That said, this is exactly the place to make such errors without getting fired or even losing points from a grade.

    To be more precise, what we will do is first resize the filesystem -- that is, convince the filesystem that it is smaller, occupying 10 GB less of the underlying logical volume. Then we will resize the logical volume underneath the filesystem to match. This will free many logical extents (raw storage space) from which we can create additional logical volumes as desired (such as for additional swap space).

    This process (the resizing in particular) can take a while. Before you get started, you should read through the man pages for mount, umount, mke2fs, resize2fs, mkswap, swapon, lvs, lvreduce, lvcreate, vgchange, and vgscan. If we were to resize a partition rather than a logical volume, we would use parted and fdisk.

    Before we begin, as root, run lvs. It will show six logical volumes that are part of one volume group. Three of them are the volumes created for the RAID device above. A fourth logical volume is used for swap space. A fifth originally held the user home directories (which we have since moved to the RAID device). The last (for the root filesystem) is the volume that we will resize. Write down its current size, and figure out what 10GB less would be.
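    For example (the numbers here are purely illustrative): if lvs --units g reports lv_root at 35.00g, then the target size for the resize steps below is 25G.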

    Since we want to modify the root filesystem of our drives, we need to boot into some other environment, because we cannot shrink most filesystems while they are in use. For this we need a "rescue CD" -- the CentOS installation CD could serve, but we just installed a rescue kernel, so we will use that instead. (No need for DVD drives!)

    There are at least two methods possible from this point. Choose one.

    1. When asked whether you want to find CentOS installations, choose "Skip". That will boot to a shell. Run "lvm vgscan -v --mknodes". Run "lvm vgchange -ay". These tell the OS about the logical volumes but does not mount any filesystems.
    2. When asked whether you want to find CentOS installations, choose anything other than "Skip". Use umount to unmount the /mnt/sysimage/ partitions. Notice that umount by itself gives an error that the partition is busy. Instead, run umount with the lazy unmount option (-l), which succeeds; see the sketch after this list.
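    For method 2, the lazy unmounts might look like the following (the exact set of mounts under /mnt/sysimage can vary, so check with mount first and unmount the deeper paths before /mnt/sysimage itself):

        mount | grep sysimage
        umount -l /mnt/sysimage/boot
        umount -l /mnt/sysimage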

    Now that the logical volume containing the root partition is not mounted, but the OS knows about the logical volumes on the drive, we can continue. Type resize2fs -p /dev/mapper/vg_sandbox-lv_root XG (where XG is the new smaller size that you want) to resize the filesystem in lv_root. (Actually, you'll need to run e2fsck first, as instructed when you try resize2fs.) When I did this, the e2fsck and resizing took a few minutes.

    Run lvm lvreduce -L XG vg_sandbox/lv_root (again replacing XG) to resize the lv_root logical volume. Yes, this is a dangerous operation, but it is what we want to do.
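    Putting these steps together, and using 25G purely as a placeholder for the size you computed earlier, the sequence might be:

        e2fsck -f /dev/mapper/vg_sandbox-lv_root
        resize2fs -p /dev/mapper/vg_sandbox-lv_root 25G
        lvm lvreduce -L 25G vg_sandbox/lv_root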

    At this point you can and should reboot to multiuser mode and give the CD to someone else if you have not already done so. If you run df, you'll see a much smaller root volume. The extra space (no longer used by any logical volume) is now visible to pvscan (as some free space). We want to create additional swap space, so first we need to create a new logical volume. Create one as we did earlier, but this time 10G in size.
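    Concretely (lv_swap02 is just an example name; any unused name works):

        pvscan
        lvcreate -L10G -n lv_swap02 vg_sandbox
        lvs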

    Use mkswap to initialize the new volume as swap space (this is very fast). Now use top to see how much swap you currently have. Although the swap area is ready, the OS hasn't yet been told to use it. Edit /etc/fstab to include the new swap volume, then use swapon -a to turn it on. Run top again to see that you now have 10GB more of swap.
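    A sketch, assuming the new volume from the previous step was named lv_swap02:

        mkswap /dev/mapper/vg_sandbox-lv_swap02

    then add a line to /etc/fstab along the lines of (it mirrors the existing swap entry):

        /dev/mapper/vg_sandbox-lv_swap02    swap    swap    defaults    0 0

    and finally:

        swapon -a
        top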

In order to sign the lab completion sheet, you will need to:

  1. show me /etc/fstab and the output of /proc/mdstat and df to demonstrate the use of RAID
  2. show me /etc/grub.conf to see your rescue entry
  3. show me top demonstrating your additional swap space


This page can be reached from http://www.cse.lehigh.edu/~brian/course/2012/sysadmin/labs/
Last revised: 3 February 2012.