And this is for Linux only.
There is no short way to do this. It is not supported directly through the AWS Console, where I could just push buttons and click away. It took SSH-ing into my Linux server, typing in some terminal commands, and starting/stopping my EC2 instance a couple of times. However, it is not that hard to do. It took me about an hour to complete everything, mostly because I made a couple of mistakes along the way.
Mileage may vary according to how large the EBS volume is and how much data is in it. In my case I’ve been running with an EBS volume that has been only about 25% used for most of its life, over several years. That’s a lot of wasted space, not to mention money; snapshot storage counts against the bill too. I don’t expect usage to change much over the next few years. Maybe 2% to 3% growth annually?
I have been reading about shrinking EBS volumes for a long time now. After procrastinating for months (maybe years), I finally did it last weekend!

Let’s Get Started!
What I have is an EBS volume with lots of unused real estate. Let’s say it’s 50GB. It has Ubuntu on it, and the SSH port is enabled only when needed via a Security Group, limited to selected IPs.
1) The first step: Backup! I took a snapshot of the root EBS volume.
For the purposes of this document, I will call this one Snapshot A.
Remember to mark/tag the snapshot to avoid confusion; it also helps with housekeeping after all this is done. How long a snapshot takes varies with the size of the volume and how much data is in it.
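However you take the snapshot, the AWS CLI equivalent, including a Name tag, would be something like this (the volume ID is a placeholder):
:~$ aws ec2 create-snapshot --volume-id <root-volume-id> --description "Snapshot A, before shrinking" --tag-specifications 'ResourceType=snapshot,Tags=[{Key=Name,Value=snapshot-a}]'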
I’m only running a small personal website, so I can afford to take it down for an hour or two. That’s what I did: turned off the EC2 instance.
Taking down the instance before taking a snapshot is not mandatory; regular scheduled snapshots are actually taken on running instances. I just did it that way because I wanted a clean backup of the volume while nothing was getting flushed into it.
Busy site? Best to have a temporary redirect page and inform visitors that the site is under maintenance. A static web page will do. This can be done however necessary: spin up a new instance with a web server that hosts an Under Maintenance page and temporarily point the site’s domain at it. Too much? Why not use AWS S3? It is very good for hosting static web pages, and I believe you can point the domain URL directly at an S3 bucket set up for static website hosting.
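As a rough sketch of that S3 route (the bucket name is a placeholder, and a public-read bucket policy plus DNS pointing are still needed on top of this), the CLI side is only a few commands:
:~$ aws s3 mb s3://my-maintenance-page
:~$ aws s3 cp index.html s3://my-maintenance-page/index.html
:~$ aws s3 website s3://my-maintenance-page/ --index-document index.html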
This may not be necessary, but to each their own: I also stopped the database and made a complete backup of everything, then downloaded the backup copy to another place. Any other resources, such as images, documents, and other important stuff, I also zipped up and stored somewhere safe. Only a precaution. I didn’t anticipate a disaster, but it helps to be redundant.
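For the database part, something along these lines is what I mean, assuming a MySQL/MariaDB setup (names and paths are just examples): dump first, then stop the service.
:~$ mysqldump -u root -p --all-databases | gzip > ~/db-backup.sql.gz
:~$ sudo systemctl stop mysql
scp or any other file transfer tool works for getting the dump off the server afterwards.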
2) Created the new, smaller replacement volume.
Now, I will call this one Volume R.
It replaces the bigger root volume. I had 50GB but was only using around 12GB. Of course, usage has grown over time and I expect it to grow more, so I made a new 20GB volume: enough buffer to last me years in my case, and still 30GB less than the original. Besides, expanding an EBS volume later is far easier and faster on AWS than shrinking one.
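The console’s Create Volume dialog handles this, or via the CLI it would be roughly the call below (the availability zone shown is a placeholder and has to match the instance’s; gp3 is just an example type):
:~$ aws ec2 create-volume --size 20 --volume-type gp3 --availability-zone us-east-1a --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=volume-r}]'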
Now that I got those first steps out of the way, I have a few options.
3-a) Create a volume from Snapshot A. I’ll name this Volume B. I’ll use this new volume to copy stuff to Volume R.
Snapshot A -> Create Volume B -> Copy Data -> Volume R
OR
3-b) Not create a volume from Snapshot A. I’ll copy directly from the 50GB root volume.
Root Volume -> Copy Data -> Volume R
This decision comes down to whether I do the next steps on (3-c) my main EC2 instance, or (3-d) a temporary instance: create one, attach Volume B from Step #3-a, attach Volume R from Step #2, and do everything from the temp.
Advantages of 3-d: My site goes back online as soon as possible, with minimal downtime. If I make stupid command line mistakes, I won’t screw up my main instance. Data is also more consistent when copied, which is helpful for a high traffic website. This works best when the database is not running from the same volume, e.g. Amazon RDS or a separate EC2 instance solely for the DB.
Disadvantages of 3-d: Now, I have to pay more to AWS. Jeff Bezos wins either way!
I decided later on that I wouldn’t be adding a few more cents to Jeff’s bank account. I went with Step #3-a.
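And Step #3-a itself, done from the CLI instead of the console, would look roughly like this (the snapshot ID and availability zone are placeholders; the AZ must match the instance’s):
:~$ aws ec2 create-volume --snapshot-id <snapshot-a-id> --availability-zone us-east-1a --volume-type gp3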
Next Steps
4) Attached Volume B to main instance. Attached Volume R to main instance.
Took notes of how these two volumes were named when attached; for example, the former could be /dev/sdg and the latter /dev/sdf.
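For reference, the CLI version of the attach step is along these lines (the IDs are placeholders; the device names are the ones noted above):
:~$ aws ec2 attach-volume --volume-id <volume-b-id> --instance-id <instance-id> --device /dev/sdg
:~$ aws ec2 attach-volume --volume-id <volume-r-id> --instance-id <instance-id> --device /dev/sdf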
Did I mention whether I had already started the main instance back up after the snapshot in Step #1? If I didn’t, then yes, I might have forgotten. Attaching non-root volumes to a running instance is quite alright, and neither of these two is a root volume here.
5) Started the instance. Again, I did not go with creating a temporary instance.
6) SSHed into my instance.
Opened as many connections as necessary, so it was convenient to just switch between terminals.
NOTE: The next steps involve terminal commands. I’ve indicated those with “:~$”, but that prefix is not part of the actual command.
7) Created a filesystem for Volume R (only).
Volume R is a clean volume. It is just a block of disk storage with nothing in it. To make it ready for Linux, I have to format it as ext4.
I took notes of the device names when attaching in Step #4. That will have been /dev/sdf, right? Inside Linux, this volume is named differently; the device will now show up as /dev/xvdf.
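Before formatting anything, it’s worth confirming which device is which. lsblk lists the attached block devices with their sizes, so the 20GB Volume R and the 50GB Volume B are easy to tell apart:
:~$ lsblk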
Ran the command below to format it to ext4.
:~$ sudo mkfs -t ext4 /dev/xvdf
Do not do the same for Volume B! Leave it as is.
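A quick sanity check I’d also suggest here: file -s reports what is on a device, so it’s easy to confirm that only Volume R got the fresh ext4 filesystem and Volume B was left alone:
:~$ sudo file -s /dev/xvdf
:~$ sudo file -s /dev/xvdg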
8) Mounted the volumes.
The mount directories I created are in /mnt/src and /mnt/dst as source and destination, respectively.
Volume B goes to /mnt/src, while Volume R goes to /mnt/dst.
The create dir commands are,
:~$ sudo mkdir /mnt/src
:~$ sudo mkdir /mnt/dst
Proceeded to mount with the following commands:
:~$ sudo mount /dev/xvdg /mnt/src
:~$ sudo mount /dev/xvdf /mnt/dst
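At this point df can confirm both mounts, with /dev/xvdg on /mnt/src and /dev/xvdf on /mnt/dst:
:~$ df -h /mnt/src /mnt/dst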
Time to copy …
9) Used the handy rsync tool to copy data between 2 locations.
And the command is,
:~$ sudo rsync -aHAXxSP /mnt/src/ /mnt/dst
What those options are: -aHAXxSP
From man rsync
- -a, --archive archive mode; equals -rlptgoD (no -H,-A,-X)
- -H, --hard-links preserve hard links
- -A, --acls preserve ACLs (implies -p)
- -X, --xattrs preserve extended attributes
- -x, --one-file-system don’t cross filesystem boundaries
- -S, --sparse handle sparse files efficiently
- -P same as --partial --progress
Take note of the source location argument, /mnt/src/. It has a trailing forward slash, and that is on purpose. I forgot it at first and ended up doing it wrongly. Without it, I ended up with this:
/mnt/dst/<EXTRA_FOLDER_LAYER>/copied_data_files
The above is wrong! It should look like the one below when you CD into the directory.
/mnt/dst/copied_data_files
Basically, what I see when I cd into /mnt/src should be exactly what I see in /mnt/dst. Minus the timestamps when using the ls command to inspect the file structure, because I forgot to use the rsync option that preserves modification times, so all files took on the timestamp of the moment they were copied. I could have redone this step but felt lazy; I figured this was something I could go without. Check the rsync manual for that option. I believe it is -t:
-t, --times preserve modification times
The whole rsync process may take a while, depending on how much data it is copying. This was probably the longest part in my experience. I did not time it, but my estimate is somewhere around 10 to 15 minutes.
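One extra check I’d suggest (I skipped it myself): a checksum-based dry run of the same rsync afterwards lists anything that still differs between the two trees, so ideally it comes back quiet:
:~$ sudo rsync -aHAXxSc --dry-run --itemize-changes /mnt/src/ /mnt/dst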
Prepping the replacement volume
10) Installed grub on Volume R with this command
:~$ sudo grub-install --root-directory=/mnt/dst/ --force /dev/xvdf
11) Checked for the UUID of the root volume using blkid. Just enter blkid into the terminal; use sudo if it doesn’t show anything. Only one volume is relevant here, /dev/xvda. Other mounted devices will show up there too (e.g. /dev/xvdf). Ignore those and the virtual devices (/dev/loop[N]).
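In the same command style as the rest of this post, that is simply:
:~$ sudo blkid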
The result will look like so,
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/xvda: LABEL="cloudimg-rootfs" UUID="12345678-a0b2-1a20-77ce-3b45678dc9d0" TYPE="ext4"
Copied the UUID of the original root volume; it will be used to replace the one on the replacement root volume, Volume R. Copy only the value after UUID=, up to the closing quote (just what is between the quotes).
12) Replaced the UUID of Volume R with the one from step #11 using tune2fs
The full command is below; UUID gets replaced with the one copied in Step #11.
:~$ sudo tune2fs -U UUID /dev/xvdf
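With the example UUID from the blkid output above, that would look like:
:~$ sudo tune2fs -U 12345678-a0b2-1a20-77ce-3b45678dc9d0 /dev/xvdf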
If there was a warning afterwards, just ignore it, as long as it is not an error. And if the UUID of Volume R still looks the same when running blkid again after this step, that is fine too. The change has already been made; it will show once Volume R takes over as the root volume on the next boot.
In Step #11 the result of blkid showed the UUID, but it also printed LABEL and TYPE. The next step is to give Volume R the same label as the one on /dev/xvda.
13) Labelled Volume R as root.
To do that,
:~$ sudo e2label /dev/xvdf cloudimg-rootfs
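Running e2label with just the device name and no new label prints the current label, which is a quick way to confirm the change took:
:~$ sudo e2label /dev/xvdf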
Now it is almost at the end. I could unmount the volumes or not, log out of the SSH session or forget about it. It doesn’t matter much, since the instance gets stopped in the next step anyway.
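For the tidy-minded, unmounting is just:
:~$ sudo umount /mnt/src
:~$ sudo umount /mnt/dst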
Off to the AWS Console
14) Stopped AWS EC2 instance.
15) Detached ALL volumes.
16) Attached Volume R as /dev/sda1.
17) Fired up the EC2 instance …
Waited a few seconds (with fingers crossed 🤞)
Et voila!
😝👍
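A side note for anyone who prefers the CLI over the console: Steps #14 to #17 map roughly onto the calls below (instance and volume IDs are placeholders; double-check which volume is which before detaching):
:~$ aws ec2 stop-instances --instance-ids <instance-id>
:~$ aws ec2 wait instance-stopped --instance-ids <instance-id>
:~$ aws ec2 detach-volume --volume-id <old-root-volume-id>
:~$ aws ec2 detach-volume --volume-id <volume-b-id>
:~$ aws ec2 detach-volume --volume-id <volume-r-id>
:~$ aws ec2 attach-volume --volume-id <volume-r-id> --instance-id <instance-id> --device /dev/sda1
:~$ aws ec2 start-instances --instance-ids <instance-id>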
After all those steps came testing and validation. Refreshed the site a few times (cleared the browser cache as necessary). Logged in to whatever needed checking. Connected by SSH. Checked the running services. And so on…
Last but not least, housekeeping. Deleted Volume B, redundant snapshots, and old volumes. WARNING! Do not do this until you are sure that everything is okay.
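The console works fine for the cleanup too, but the CLI equivalents would be roughly these (placeholder IDs; triple-check them, since deletes are permanent):
:~$ aws ec2 delete-volume --volume-id <volume-b-id>
:~$ aws ec2 delete-volume --volume-id <old-root-volume-id>
:~$ aws ec2 delete-snapshot --snapshot-id <redundant-snapshot-id>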