In this article, we will see how you can regain access to an EC2 Linux instance when its key pair has been lost.

An EC2 Linux instance in AWS is accessed through SSH, and SSH access is protected by a key pair. You can find more information about this in the article Amazon AWS – Understanding EC2 Key Pairs and How they are Used for Windows and Linux Instances. This article also refers to volumes frequently, so to understand what volumes are, see Amazon AWS – Understanding EC2 storage – Part I.


The procedure applies only to EBS-backed instances; it does not work for instance store-backed instances.
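
If you are not sure which type of instance you have, a quick check (a sketch assuming the AWS CLI is configured, with a placeholder instance ID) is to look at the root device type, which is reported as either "ebs" or "instance-store":

# Placeholder instance ID; replace it with your own.
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[].Instances[].RootDeviceType' --output text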

The steps to perform the recovery are listed below (a rough AWS CLI sketch of the first part of the flow follows the list):

  1. Stop the instance
  2. Detach its root volume
  3. Attach the root volume to another instance as data volume
  4. Modify the authorized_keys file
  5. Detach the volume from the new instance
  6. Attach the volume to the original instance
  7. Start the original instance
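
For reference, the first three steps can also be scripted with the AWS CLI. This is only a rough sketch with placeholder IDs, and the reverse sequence (steps 5–7) is sketched near the end of the article:

# Placeholder IDs; replace them with your instance and root volume IDs.
aws ec2 stop-instances --instance-ids i-initial
aws ec2 wait instance-stopped --instance-ids i-initial
aws ec2 detach-volume --volume-id vol-root
aws ec2 wait volume-available --volume-ids vol-root
aws ec2 attach-volume --volume-id vol-root --instance-id i-recovery --device /dev/sdf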

Our scenario assumes that you have an EBS-backed instance (an Amazon Linux instance in this case) for which you have lost the key pair, so you can no longer access it:

To be able to connect to this instance again, we need to launch another EC2 instance in the same Availability Zone. In our case, the initial instance was launched in the us-east-1e AZ:

Launch another EC2 instance and, on “Step 3: Configure Instance Details”, make sure you select the right AZ:

Just before launching the EC2 instance, you will be asked to specify the key pair to be used to access the new instance. You can either select an existing one or create a new one. If you create a new one, download the key pair and save it, as we will need it later:
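
If you prefer the command line, a key pair can also be created and saved with the AWS CLI; the key name below matches the one used later in this article, but any unused name works:

# Create the key pair and save the private key with restrictive permissions.
aws ec2 create-key-pair --key-name kp_recovery_instance \
    --query 'KeyMaterial' --output text > kp_recovery_instance.pem
chmod 400 kp_recovery_instance.pem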

Soon after, the recovery EC2 instance is running:

And these are a few of its characteristics:

Next, we need to stop the initial EC2 instance for which we lost the key pair. Select the EC2 instance, right-click on it, select “Instance State” and then “Stop”:

In a few seconds, the instance will be stopped:

The next step is to detach the root volume of the initial EC2 instance. To do this, from the EC2 Management Console, select “Volumes” and scroll to the right until you see the “Attachment Information” column. Write down the device name, /dev/xvda in this case; we will need this information later:
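
The same attachment information can also be read with the AWS CLI; a small sketch, again with a placeholder instance ID:

# List the volumes attached to the initial instance together with their device names.
aws ec2 describe-volumes \
    --filters Name=attachment.instance-id,Values=i-0123456789abcdef0 \
    --query 'Volumes[].Attachments[].[VolumeId,Device]' --output table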

Select the volume, right-click on it and then click on “Detach Volume”:

Wait until you see it in the “available” state:

Then right-click on it again and select “Attach Volume”:

In the next window, in the “Instance” field, select the recovery instance as the instance to which the volume will be attached:

You can leave the “Device” field as it is, /dev/sdf:

Once you click on “Attach”, in a few seconds, the volume will be attached to the recovery instance:

Now it’s time to replace the authorized_keys file on the volume of the initial EC2 instance with the one from the recovery EC2 instance:

These are the operations needed to achieve this:

  • Login to the recovery EC2 instance
  • Become root
  • Create the folder where the volume will be mounted
  • Mount the volume
  • Change the authorized_keys file
  • Unmount the volume

Let’s see it in action:

1. Login to the recovery EC2 instance:
lab@jump-point:~/AWS$ ssh -i kp_recovery_instance.pem ec2-user@52.4.182.1
The authenticity of host '52.4.182.1 (52.4.182.1)' can't be established.
ECDSA key fingerprint is dd:fa:01:8e:2c:f7:31:8a:02:6d:9d:2f:39:a5:4e:36.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '52.4.182.1' (ECDSA) to the list of known hosts.

       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2015.03-release-notes/
20 package(s) needed for security, out of 48 available
Run "sudo yum update" to apply all updates.
[ec2-user@ip-172-31-49-100 ~]$
2. Become root:
[ec2-user@ip-172-31-49-100 ~]$ sudo su
[root@ip-172-31-49-100 ec2-user]#
3. Create the folder where the volume will be mounted:
[root@ip-172-31-49-100 ec2-user]# mkdir /mnt/volume_initial_instance
[root@ip-172-31-49-100 ec2-user]#
4. Mount the volume:
[root@ip-172-31-49-100 ec2-user]# mount /dev/sdf /mnt/volume_initial_instance
mount: /dev/xvdf is write-protected, mounting read-only
mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
[root@ip-172-31-49-100 ec2-user]#

If you remember, the device we specified when we attached this volume was /dev/sdf, which does not work here. This is because the kernel renamed the device to /dev/xvdf. Let’s confirm this with the “lsblk” command:

[root@ip-172-31-49-100 ec2-user]# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
`-xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   8G  0 disk
`-xvdf1 202:81   0   8G  0 part
[root@ip-172-31-49-100 ec2-user]#

Also, the filesystem lives on the first partition, /dev/xvdf1, not on the whole disk, so we need to use the following command to mount the volume:

[root@ip-172-31-49-100 ec2-user]# mount /dev/xvdf1 /mnt/volume_initial_instance
[root@ip-172-31-49-100 ec2-user]#

Let’s confirm that the volume was mounted:

[root@ip-172-31-49-100 ec2-user]# mount
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
devtmpfs on /dev type devtmpfs (rw,relatime,size=500896k,nr_inodes=125224,mode=755)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /dev/shm type tmpfs (rw,relatime)
/dev/xvda1 on / type ext4 (rw,noatime,data=ordered)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
/dev/xvdf1 on /mnt/volume_initial_instance type ext4 (rw,relatime,data=ordered)
[root@ip-172-31-49-100 ec2-user]#
5. Change the authorized_keys file:
[root@ip-172-31-49-100 ec2-user]# cat /home/ec2-user/.ssh/authorized_keys > /mnt/volume_initial_instance/home/ec2-user/.ssh/authorized_keys
[root@ip-172-31-49-100 ec2-user]#

Now let’s compare the file on the volume of the initial EC2 instance with the one on the recovery instance. This is the one on the initial EC2 instance’s volume:

[root@ip-172-31-49-100 ec2-user]# cat /mnt/volume_initial_instance/home/ec2-user/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCko6/paqXb8cBuGIdeOI3KvSryzvB4Mr3A7aBTZNfCPFQwqJI54HHZwO9sCdecycy7jqUi7acJdcpMPlPIIEq1p7UxrpSfbFXbYzJ50V3HFs9iOOjcyaRoBJF3GZACRRw+KKFbgGXNW5kwpip8zEnBdG+saqmIK6TB9jJKaAx4jyrnOLkylmnXHbipLxAtU84v4R+kO8EwQewM9oKzsuoVGnSDVRAxsagkwLOYhvtfVSxo6wko/OzXR75Di5pQruzC2+itgEPPFTLLsMXj/cEJqVqH8LBj6EkSJKXR1icrqIVAcmpKWm+CkUC96SdmO6pUHnmeP/a9Ra+2aw4E1Qen kp_recovery_instance
[root@ip-172-31-49-100 ec2-user]#

And this is for the recovery EC2 instance:

[root@ip-172-31-49-100 ec2-user]# cat /home/ec2-user/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCko6/paqXb8cBuGIdeOI3KvSryzvB4Mr3A7aBTZNfCPFQwqJI54HHZwO9sCdecycy7jqUi7acJdcpMPlPIIEq1p7UxrpSfbFXbYzJ50V3HFs9iOOjcyaRoBJF3GZACRRw+KKFbgGXNW5kwpip8zEnBdG+saqmIK6TB9jJKaAx4jyrnOLkylmnXHbipLxAtU84v4R+kO8EwQewM9oKzsuoVGnSDVRAxsagkwLOYhvtfVSxo6wko/OzXR75Di5pQruzC2+itgEPPFTLLsMXj/cEJqVqH8LBj6EkSJKXR1icrqIVAcmpKWm+CkUC96SdmO6pUHnmeP/a9Ra+2aw4E1Qen kp_recovery_instance
[root@ip-172-31-49-100 ec2-user]#
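
Because the cat redirection overwrites the existing file in place, its ownership and permissions are normally preserved. If you want to double-check (sshd will refuse an authorized_keys file with the wrong owner or overly open permissions), a quick sketch is:

# Show the owner, group and mode of the file we just overwrote.
ls -l /mnt/volume_initial_instance/home/ec2-user/.ssh/authorized_keys
# If needed, restore them based on the .ssh directory on the same volume.
chown --reference=/mnt/volume_initial_instance/home/ec2-user/.ssh \
      /mnt/volume_initial_instance/home/ec2-user/.ssh/authorized_keys
chmod 600 /mnt/volume_initial_instance/home/ec2-user/.ssh/authorized_keys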
6. Unmount the volume:
[root@ip-172-31-49-100 ec2-user]# umount /mnt/volume_initial_instance/
[root@ip-172-31-49-100 ec2-user]#

After this is done, you need to detach the volume from the recovery instance and attach it back to the initial instance, which is currently stopped:

For the device, you need to use the same device name that we wrote down before we detached the volume from the initial instance, /dev/xvda. Then click on “Attach”:

Once the volume has been attached, you can start the initial instance:
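
As with the first part of the procedure, this final sequence can also be done with the AWS CLI; the same placeholder IDs as in the sketch at the beginning of the article are used here:

# Detach the volume from the recovery instance, reattach it as the root device, then start the instance.
aws ec2 detach-volume --volume-id vol-root
aws ec2 wait volume-available --volume-ids vol-root
aws ec2 attach-volume --volume-id vol-root --instance-id i-initial --device /dev/xvda
aws ec2 start-instances --instance-ids i-initial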

Let’s see if we can connect now by using the public IP address and the key pair that we downloaded when we launched the recovery instance:

And it is working:

lab@jump-point:~/AWS$ ssh -i kp_recovery_instance.pem ec2-user@52.0.151.0
The authenticity of host '52.0.151.0 (52.0.151.0)' can't be established.
ECDSA key fingerprint is e7:10:4a:6b:6c:17:dc:55:87:c4:f0:e7:aa:af:4f:1f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '52.0.151.0' (ECDSA) to the list of known hosts.

       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2015.03-release-notes/
20 package(s) needed for security, out of 48 available
Run "sudo yum update" to apply all updates.
[ec2-user@ip-172-31-63-141 ~]$

And that’s it. You now have access to the instance again. You can terminate the recovery instance, but do not delete the key pair that was created when you launched it; otherwise you will end up in the same situation and will have to repeat the entire process.
