Recently, I was asked to remove an old virtual disk from a VM running a Linux operating system and replace it with a new one. Seems pretty simple, right?

Well, as it turned out, managing “physical” volumes on Linux is a very interesting topic for many of my colleagues, so I decided to write this post and explain in detail how you should approach such a task.

Removing a volume from the guest (Linux) perspective

Before removing the virtual disk at the VMware level, we need to ensure the guest OS using it will not be affected in any way. In Linux, this can be achieved by:

  • stopping all processes which might be using said disk,
  • unmounting the disk from the filesystem,
  • and, finally, rescanning the storage controller.

In this post we are going to assume we have root-level control of a RHEL VM with two disks visible to the OS: /dev/sda and /dev/sdb, the latter being the drive we want to remove, currently mounted at /junk.

Let’s begin with the first bit - stopping (or killing) all processes using the unneeded drive.

Taking it slow, let’s first review whether there are actually any processes stopping us from removing the drive:

[vlku@rhel.ssh.guru]: ~>$ sudo lsof | grep /junk

The command above lists running processes with open files and filters them by the mountpoint. If you receive no output, you’re in luck - it means there are no processes “locking” the drive. More than likely, however, you will see some output. One of the ways to kill all the processes in our way is to issue the following command:

[vlku@rhel.ssh.guru]: ~>$ sudo fuser -cuk /junk
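As a side note, lsof can also be pointed at the mountpoint directly - assuming /junk really is a mountpoint, this lists every open file on that filesystem without needing the grep:

[vlku@rhel.ssh.guru]: ~>$ sudo lsof /junk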

You can run the lsof command again to verify that the processes are gone. If you keep getting stuck on one last process, check whether your own shell’s current directory sits inside the mountpoint we are trying to get rid of…

[vlku@rhel.ssh.guru]: ~>$ pwd
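If pwd shows a path somewhere under /junk, your own shell is the process keeping the filesystem busy - simply move somewhere else before retrying:

[vlku@rhel.ssh.guru]: ~>$ cd ~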

Ok, at this point we should be safe to move on to unmounting the drive from the filesystem. This is arguably the easiest part of the whole operation as only a single command is needed:

[vlku@rhel.ssh.guru]: ~>$ sudo umount /dev/sdb1
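To double-check that the unmount worked, you can look for the mountpoint in /proc/mounts - empty output means /junk is no longer mounted:

[vlku@rhel.ssh.guru]: ~>$ grep /junk /proc/mounts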

It is up to your judgement whether to delete the directory which used to be the mountpoint for the drive - it is not essential for removing the volume from the OS.

[vlku@rhel.ssh.guru]: ~>$ sudo rm -r /junk

Once that’s done, we have to remove any references to the old drive from /etc/fstab. If you plan to add it back at some stage or replace it with a similar one (e.g. a smaller drive mounted in the same place), you can leave the lines in place and just comment them out (mark them with #).

[vlku@rhel.ssh.guru]: ~>$ sudo vi /etc/fstab
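If you would rather not open an editor, a sed one-liner can comment the entry out for you; this assumes the line starts exactly with /dev/sdb1, and the -i.bak switch keeps a backup copy at /etc/fstab.bak just in case:

[vlku@rhel.ssh.guru]: ~>$ sudo sed -i.bak 's|^/dev/sdb1|# /dev/sdb1|' /etc/fstab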

Your /etc/fstab might differ greatly from the one in this example; however, by looking at the cat output below, you should be able to tell what changes have been applied and how to apply them in the real world:

[vlku@rhel.ssh.guru]: ~>$ cat /etc/fstab
/dev/sda1           	/         	ext4      	rw,relatime,data=ordered	0 1
# /dev/sdb1           	/junk     	ext4      	rw,relatime,data=ordered	0 2
/swapfile		none	swap	defaults	0	0
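Before moving on, it does not hurt to confirm the edited file is still sane - mount -a tries to mount every uncommented entry that is not mounted yet, so any typo introduced while editing will surface as an error here:

[vlku@rhel.ssh.guru]: ~>$ sudo mount -a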

To finish it all up, we have to let the kernel know about the upcoming removal of the drive. The easiest way to do that is to rescan the storage controller (note that the redirection into /sys has to run as root, which is why we pipe through sudo tee rather than relying on sudo echo with a plain redirect):

[vlku@rhel.ssh.guru]: ~>$ echo "- - -" | sudo tee /sys/class/scsi_host/host0/scan

At this point, it is safe to remove the drive from the VMware level; it is recommended to run the following after the drive is gone:

[vlku@rhel.ssh.guru]: ~>$ echo 1 | sudo tee /sys/block/sdb/device/delete
[vlku@rhel.ssh.guru]: ~>$ echo "- - -" | sudo tee /sys/class/scsi_host/host0/scan

…which will ensure that the removed drive is completely gone from the OS’s point of view. Not doing so might cause issues once we decide to add the drive back or replace it with a different one.
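A quick way to confirm the cleanup worked is lsblk - after the delete and rescan, sdb should no longer appear in the listing:

[vlku@rhel.ssh.guru]: ~>$ lsblk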

Adding a volume from the guest (Linux) perspective

Adding a new volume to a Linux guest is generally easier than removing one. To start, check whether the drive is visible to the OS by running the fdisk command:

[vlku@rhel.ssh.guru]: ~>$ sudo fdisk -l
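If you find the fdisk -l listing too verbose, lsblk (also part of util-linux, so it should be available on any recent RHEL) gives a more compact, disk-only view; the exact columns available may vary with the installed version:

[vlku@rhel.ssh.guru]: ~>$ lsblk -d -o NAME,SIZE,TYPE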

If the drive recently added to the VM is not visible in the fdisk output, we can force a rescan of the storage controller by running:

[vlku@rhel.ssh.guru]: ~>$ echo "- - -" | sudo tee /sys/class/scsi_host/host0/scan

Before continuing, verify that the drive is now visible in the fdisk output and note its identifier (e.g. /dev/sdc) - we will need it to create a partition table on the drive:

[vlku@rhel.ssh.guru]: ~>$ sudo fdisk /dev/sdc

In the fdisk wizard, we can issue the following sequence of commands to create a single primary partition occupying all of the available disk space: n (new partition), p (primary), Enter (default partition number), Enter (default first sector), Enter (default last sector), then w (write changes and exit). For more advanced operations, please review the fdisk article on the Arch Wiki.
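If you prefer a non-interactive approach, parted can do the same thing in one go (it may need to be installed first); the ext4 argument here is only a hint stored in the partition table - the actual formatting still happens in the next step:

[vlku@rhel.ssh.guru]: ~>$ sudo parted -s /dev/sdc mklabel msdos mkpart primary ext4 0% 100%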

Once the drive has a partition table, we can format the partition located on it:

[vlku@rhel.ssh.guru]: ~>$ sudo mkfs.ext4 /dev/sdc1

You can replace the ext4 bit with ext3 or any other filesystem type if needed.
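While formatting, you can also give the filesystem a label with the -L switch (the name below is just an example) - labels, like UUIDs, survive device renames and can later be used in /etc/fstab instead of /dev/sdc1:

[vlku@rhel.ssh.guru]: ~>$ sudo mkfs.ext4 -L junkdata /dev/sdc1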

The next step is to mount the new drive at a mountpoint. This can be done either as a temporary mount (the mount command) or as a permanent /etc/fstab entry. For the new disk to be accessible without a reboot and available after every future system restart, we should use both of these methods.

Firstly, let’s mount the new volume at /newjunk.

[vlku@rhel.ssh.guru]: ~>$ sudo mkdir /newjunk
[vlku@rhel.ssh.guru]: ~>$ sudo mount /dev/sdc1 /newjunk

We can verify if the operation was successful by running either of these commands:

[vlku@rhel.ssh.guru]: ~>$ cat /proc/mounts
[vlku@rhel.ssh.guru]: ~>$ df -h

Lastly, we need to add a new entry to /etc/fstab:

[vlku@rhel.ssh.guru]: ~>$ sudo vi /etc/fstab

An entry in /etc/fstab always follows the same layout:

/dev/sda1           	/         	ext4      	defaults	0 1

where:

  • /dev/sda1 is the device (partition) being mounted,
  • / is the mountpoint,
  • ext4 is the filesystem type,
  • defaults are the mount options,
  • 0 1 are the dump flag and the fsck pass order.

In this case, we are going to add an entry as follows:

/dev/sdc1           	/newjunk         	ext4      	defaults	0 2
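Device names like /dev/sdc1 are not guaranteed to stay the same when disks are added or removed, so if you want a more robust entry you can reference the partition by its UUID instead - blkid prints it, and the fstab line then looks like this (replace the placeholder with the actual value from blkid):

[vlku@rhel.ssh.guru]: ~>$ sudo blkid /dev/sdc1
UUID=<uuid-from-blkid>      	/newjunk         	ext4      	defaults	0 2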

To finish adding the disk, we can force the storage controller to rescan:

[vlku@rhel.ssh.guru]: ~>$ echo "- - -" | sudo tee /sys/class/scsi_host/host0/scan
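As a final sanity check, you can unmount the volume and let fstab bring it back on its own - if /dev/sdc1 shows up in the df output afterwards, the entry works and the mount will survive a reboot:

[vlku@rhel.ssh.guru]: ~>$ sudo umount /newjunk
[vlku@rhel.ssh.guru]: ~>$ sudo mount -a
[vlku@rhel.ssh.guru]: ~>$ df -h /newjunk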