Before discussing IAM roles, you need to know that sometimes you have to delegate access to AWS resources to users or services that don’t have it by default.

One example would be an application that needs to use AWS resources. Another would be a user from one AWS account who needs access to resources in another AWS account.

Of course, you could do this easily by providing the security credentials within the application itself. But what happens when the security credentials change and you have 100 applications with the credentials hard coded?


Obviously, you would need to change every single one, which is time consuming.

Needless to say, this approach is not very safe either: someone could extract the credentials and use them in an unauthorized way.

You can overcome this situation by using IAM roles. A role describes the permissions needed to access resources; however, the permissions are not linked to any IAM user or group.

Instead, applications or services assume the role (and its permissions) at run time. This means that AWS provides the application or the user with temporary security credentials that can be used whenever they need access to those resources.

There are a few use cases in which you would need to use roles:

  • Applications running on EC2 instances that need access to different AWS resources. For instance, you have an application running in EC2 that needs access to AWS S3.
  • Cross-account access: users from one AWS account need access to resources of another AWS account.
  • Identity federation: this allows users to use their own usernames (the ones created outside of AWS) to access resources in AWS.
  • Web identity federation: this is for web-based or mobile applications that need access to AWS resources.
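For the cross-account case, the temporary credentials come from the AWS Security Token Service (STS). The sketch below uses boto's STS API; the account ID, role name, and session name are made-up examples, not values from this article:

```python
def role_arn(account_id, role_name):
    # Build the ARN of the role that lives in the other AWS account.
    return 'arn:aws:iam::%s:role/%s' % (account_id, role_name)

def assume_cross_account_role(account_id, role_name):
    # STS hands back temporary credentials (access key, secret key,
    # session token) for the assumed role.
    import boto.sts
    sts = boto.sts.STSConnection()
    return sts.assume_role(role_arn(account_id, role_name),
                           role_session_name='cross-account-demo')
```

Calling `assume_cross_account_role('111122223333', 'some_role')` from an identity that is allowed to assume that role would return the temporary credentials.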

To demonstrate the power of IAM roles, I will provide an example of the first use case (an application running in an EC2 instance).

In this specific case, the application on the EC2 instance retrieves the temporary security credentials from the instance metadata at iam/security-credentials/&lt;ROLE&gt;. The temporary security credentials are rotated by Amazon five minutes before the old ones expire.

Instance metadata is data about the instance that you can use to configure the running EC2 instance. Check the Reference section at the end of the article for the URL to AWS’s documentation on instance metadata.
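To make this concrete, here is a small sketch of how an application could read those temporary credentials itself (boto does this for you automatically when it runs with a role). The parsing helper just decodes the JSON document that the metadata service returns; the field names match the output shown later in the article:

```python
import json

# Base URL of the EC2 instance metadata service.
METADATA_URL = ('http://169.254.169.254/latest/meta-data/'
                'iam/security-credentials/')

def parse_credentials(document):
    # The metadata service returns a JSON document containing, among
    # other fields, AccessKeyId, SecretAccessKey, Token and Expiration.
    data = json.loads(document)
    return (data['AccessKeyId'], data['SecretAccessKey'],
            data['Token'], data['Expiration'])

def fetch_credentials(role_name):
    # This only works from inside an EC2 instance launched with the role.
    import urllib2  # on Python 3, use urllib.request instead
    return parse_credentials(urllib2.urlopen(METADATA_URL + role_name).read())
```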

Because I’m familiar with Python, I will write a Python script that connects to the S3 service, creates a bucket, adds an object to that bucket, and then reads the object back.

Boto is the AWS SDK for Python and provides Python APIs for many AWS services, including the best-known and most-used ones (Amazon S3, Amazon EC2).

You can read more about this integration here: AWS SDK for Python (Boto) and boto: A Python interface to Amazon Web Services.

I have the Python script on an EC2 instance that doesn’t use an IAM role. The security credentials are passed as parameters inside the script:

[ec2-user@EC2-REDHAT-01 ~]$ cat s3.py
from boto.s3.connection import S3Connection
from boto.s3.key import Key
# The access key ID and secret access key are hard coded in the script.
conn = S3Connection(aws_access_key_id='AKIAIRVA5URZXNDQTXUQ', aws_secret_access_key='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')
# Create the bucket and write an object into it.
my_bucket = conn.create_bucket('bucket_test_iam_role')
write = Key(my_bucket)
write.key = 'test_file'
write.set_contents_from_string('A file created through Python SDK')
# Read the same object back.
read = Key(my_bucket)
read.key = 'test_file'
print 'The content of the file from S3 bucket is:\n'
print read.get_contents_as_string()
[ec2-user@EC2-REDHAT-01 ~]$

Notice that inside the script, I provided the access key ID and secret access key.

If I run this script, an S3 bucket will be created (bucket_test_iam_role) with an object inside (test_file). That file will contain the string ‘A file created through Python SDK’.

Then the same script reads the content of the S3 object back.

Let’s run the script and confirm that the S3 bucket was created along with the object:

[ec2-user@EC2-REDHAT-01 ~]$ python s3.py
The content of the file from S3 bucket is:

A file created through Python SDK
[ec2-user@EC2-REDHAT-01 ~]$

As you can see, the content of the object was read and the output is the expected one.

Also, checking the S3 console, we can see that the object was created:

So let’s create a new role that will allow the script running on an EC2 instance to work correctly, but without the access keys.

This is how you can create the role.

From the IAM console, choose “Roles” and then “Create New Role”. Give the role a name that is meaningful to you and click on “Next Step”:

On the next step, select “Amazon EC2”:

Then keep the default “Select Policy Template” and scroll down to “Amazon S3 Full Access” and select it:

The next step will show you what the policy looks like and what its name is:
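For reference, the “Amazon S3 Full Access” policy template is essentially the following JSON document, allowing every S3 action on every resource (this is the standard shape of an IAM policy; check the policy preview in the console for the exact text):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
```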

Continue to the last step and then create the role:

Now that the role has been created, let’s launch another EC2 instance with this role. The process is the usual one, except that you need to specify the role assigned to the EC2 instance. I will show only the step where you specify the IAM role:

All other steps are the same.

Once the new EC2 instance is running, I will run the same script without specifying the access credentials. This is the new script:

[ec2-user@ip-172-31-18-109 ~]$ cat s3.py
from boto.s3.connection import S3Connection
from boto.s3.key import Key
# No credentials in the script: boto picks up the temporary
# credentials provided by the IAM role from the instance metadata.
conn = S3Connection()
my_bucket = conn.create_bucket('bucket_test_iam_role')
write = Key(my_bucket)
write.key = 'test_file'
write.set_contents_from_string('A file created through Python SDK')
read = Key(my_bucket)
read.key = 'test_file'
print 'The content of the file from S3 bucket is:\n'
print read.get_contents_as_string()
[ec2-user@ip-172-31-18-109 ~]$

The new script returns the same output:

[ec2-user@ip-172-31-18-109 ~]$ python s3.py
The content of the file from S3 bucket is:

A file created through Python SDK
[ec2-user@ip-172-31-18-109 ~]$

As you can see in the S3 console, the bucket was created and an object was put inside:

As I mentioned, the script on the instance gets the access credentials assigned by the role from the instance metadata.

Let’s check the security credentials for our IAM role:

[ec2-user@ip-172-31-18-109 ~]$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3_role
{
  "Code" : "Success",
  "LastUpdated" : "2014-09-15T11:08:27Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIAIRTGHY43HMZBJSHQ",
  "SecretAccessKey" : "Bar99CJWYLgMOzAKNmijxRs7T7+dwOi/QsLI0k5e",
  "Token" : "AQoDYXdzEKT//////////wEa0ANiqbHEE0T6t3jDLtJkW9oh1UFiR6CPgWghazj0m3jP6AEWaKFiTffJdR7lxC30wCigTxRFScd9+uhyg7DzYRtolz/eamicy/r7kru3Prt46CL6No7XvZYFHGMu3Uuw094wQaHNr57Kisy4rMloS/hAE5XrsiS8h3Et0hTFdTJ7V+tSznxETqZTXiQADm+0uYw/uPe02IVcf2VEq8s6toEq+FGkjcda97XQUNMANtv5kPL1cXf64cdy+imsjx0RglaYOt+8JDjczMliukJ5k2BJLCqSjMJ33Sun2ns3K4Pz0kiLITieyIv0hsz4Nm1W3QlsJwmM1DcRCjPHbpAbpGnKpEv7SBEkyuieFT1Eka4dep5w2CuOJWgeYvEWx49r7XIM1JT9A6emY2BwqsPPtcPCFoz9/yMVl558D4CO0C1Blq72EZ5CWSoWJ1IBj1a2buOA8/xoWQCrxJJG+u/NgRoVR26ICgwRlCM+rncQVXs2U1sFg9O+c26U1oyvQrfKoEoXlwUfW23NiNic7KaEbV7aJPNIisMwpXKfGsAydAlyGguEPyea/PKxgnFPdprgK2INQi6eTCbItTEgdu75f5u3DM6UNanWPnUItehK98AziiDHkdugBQ==",
  "Expiration" : "2014-09-15T17:15:20Z"
}[ec2-user@ip-172-31-18-109 ~]$

As you can see, the expiration date is returned along with the security credentials. As I mentioned, five minutes before the expiration date, the security credentials will be rotated by Amazon.
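If you are curious how much longer the current credentials are valid, you can compare the Expiration field with the current time. A small sketch, assuming the ISO 8601 UTC timestamp format shown above:

```python
from datetime import datetime

def seconds_until_expiration(expiration, now):
    # Both timestamps use the format returned by the metadata
    # service, e.g. '2014-09-15T17:15:20Z'.
    fmt = '%Y-%m-%dT%H:%M:%SZ'
    delta = datetime.strptime(expiration, fmt) - datetime.strptime(now, fmt)
    return int(delta.total_seconds())
```

For the credentials above, seconds_until_expiration('2014-09-15T17:15:20Z', '2014-09-15T11:08:27Z') gives the remaining lifetime in seconds (a bit over six hours).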

And we have reached the end of the article, which discussed IAM roles: what they are, what the use cases are, and how you can actually use them.

We created a role that was assigned to an EC2 instance where an application was requesting access to AWS S3.

This is just the start of what you can accomplish with IAM roles.

However, this article gives you a good starting point, and you can take it further by checking the references shown below.

Reference