Terraform Basics 101: Part 4

Terraform on AWS

Creating resources on AWS with Terraform looks like this:

Install the AWS CLI from here.

Then create an IAM user in your main AWS account. Once done, select that IAM user and create an access key. Use the access key ID and secret access key to configure the CLI with aws configure.
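Running aws configure prompts for the access key ID, secret key, default region, and output format (the values below are placeholders):

aws configure
# AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
# AWS Secret Access Key [None]: ****************
# Default region name [None]: us-east-1
# Default output format [None]: json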

Then use the code in main_aws.tf to create an IAM user named my-user:

provider "aws"{

region = "us-east-1"

}

resource "aws_iam_user" "my-user" {

name = "my-user"

tags={

Description = "This is my user" }

}

Then plan and apply changes
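That is:

terraform plan
terraform apply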

Here is the IAM User created

Let's set some IAM policies on our user. For now, let's give the user full access to all actions and all resources, following this format:

resource "aws_iam_policy" "my-user-policy" {

name = "my-user-policy"

policy = <<EOF {

"Version": "2012-10-17",

"Statement": [

{ "Effect": "Allow",

"Action": "*",

"Resource": "*"} ] }

EOF

}

Here we have used the aws_iam_policy resource to create a policy named "my-user-policy". The policy document itself is passed as a heredoc, delimited by EOF.

Inside it, we provide the version (a date, 2012-10-17), and a statement with effect Allow that permits all actions (*) on all resources (*).

Then we need to attach this policy to the user

resource "aws_iam_user_policy_attachment" "my-user-policy-attachment" {

user = aws_iam_user.my-user.name

policy_arn = aws_iam_policy.my-user-policy.arn

}

Here, we have created a policy attachment, referencing the user's name and the policy's ARN (Amazon Resource Name).

Finally, we plan and apply the changes.

If we check our AWS now, we can see a new policy

If we look at my-user-policy, we can see that 438/438 services are allowed. That is expected, since we allowed all actions (*) on all resources (*) for this user.

We can also keep the policy in a separate file and just reference that file from Terraform.

Here is the policy in JSON format, followed by the updated Terraform file.
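The content of my-user-policy.json is the same document we embedded in the heredoc earlier:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}

And the updated Terraform file simply points at that file: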

resource "aws_iam_policy" "my-user-policy" {

name = "my-user-policy"

policy= file("my-user-policy.json")

}

Once done, we need to plan and apply changes.

The policy will be attached to our IAM User profile

Amazon S3 with Terraform

Amazon S3 stores objects in buckets. We have bucket-level policies, and access control lists (ACLs) for object-level permissions.

We can give users permission to access those objects like this.

Here, the user Lucy was given permission to read objects within the all-pets bucket, as sketched below.
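A minimal sketch of such a setup, assuming an aws_s3_bucket resource named all-pets and an existing IAM user Lucy (the account ID in the ARN is a placeholder):

resource "aws_s3_bucket" "all-pets" {
  bucket = "all-pets"
}

resource "aws_s3_bucket_policy" "all-pets-policy" {
  bucket = aws_s3_bucket.all-pets.id

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::all-pets/*",
      "Principal": {
        "AWS": ["arn:aws:iam::123456789012:user/Lucy"]
      }
    }
  ]
}
EOF
}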

AWS DynamoDB with Terraform

A typical DynamoDB table looks like this.

The table is named cars, and each row (item) can be represented like this.

Every item must include the table's primary key; it is mandatory.

Let's define our table with the name cars and the primary key (hash_key) VIN (Vehicle Identification Number):

resource "aws_dynamodb_table" "cars"{

name = "cars"

hash_key = "VIN"

billing_mode = "PAY_PER_REQUEST"

attribute {

name = "VIN"

type = "S"

} }

Once done, use terraform plan and apply

You can verify that the table is created now

Let’s add values to the table

We have to provide the item as a JSON document, either inside a <<EOF heredoc block or, as below, with jsonencode.

Because of DynamoDB's data model, we also have to specify the data type of each attribute.

resource "aws_dynamodb_table_item" "car-items"{

table_name = aws_dynamodb_table.cars.name

hash_key = aws_dynamodb_table.cars.hash_key

item = jsonencode({

"Manufacturer": {"S":"Toyota"},

"Make": {"S":"Corolla"},

"Year": {"N":"2004"},

"VIN": {"S":"1234567890"}, } )

}

Once done, plan and apply changes

You can verify that the item is now in the table.

Remote State

We don't share our tfstate files in GitHub repositories. We typically keep them in an S3 bucket and keep only our code in the repository. When working in a team, other team members can pull the code from the GitHub repository and read the tfstate from the S3 bucket.

Note: the tfstate file contains private and sensitive information that we don't want anyone to access, which is why we don't push it to the GitHub repository.

State locking of Terraform state

While we are running Terraform (plan, apply) against a configuration from one terminal, we can't apply the same configuration from another terminal at the same time.

Terraform locks the state so that the first run can apply its changes and update the state file. Once that finishes, another terminal can update the state.

If we push our tfstate to a GitHub repository, we never get this state-locking benefit. So we keep the state in AWS S3 (with a DynamoDB table for locking) or in HashiCorp Consul, and push only our code to the GitHub repository; both backends support state locking.

Let’s apply this

Assume we have one bucket called kodekloud-terraform-state-bucket01 and one DynamoDB table called state-locking.

Then we create a main.tf file that creates a pets.txt file.
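A minimal sketch of that main.tf, reusing the local_file resource from earlier parts of this series (the path and content are just examples):

resource "local_file" "pet" {
  filename = "/root/pets.txt"
  content  = "We love pets!"
}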

Then we run terraform plan and apply it. Once done, we get a terraform.tfstate file.

Now, if we want to store this state file (terraform.tfstate) in the bucket (kodekloud-terraform-state-bucket01) with locking through the DynamoDB table (state-locking), we configure an S3 backend, as sketched below.

The DynamoDB table should have a primary key (hash key) called "LockID".

It's better to keep this backend configuration in a separate .tf file.
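A minimal sketch of that backend configuration (the key and region are assumptions; adjust them to your setup):

terraform {
  backend "s3" {
    bucket         = "kodekloud-terraform-state-bucket01"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "state-locking"
  }
}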

Then run terraform init. After that, we need to remove the tfstate file from the local directory.
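That is (terraform init will offer to copy the existing local state to the new backend):

terraform init
rm -rf terraform.tfstate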

Then we can plan and apply changes. From now on, Terraform will automatically update the remote tfstate.

Terraform state commands

We can use various state commands to inspect resources and their details. For example, assume we are in a folder with a tfstate file, where a DynamoDB table called cars and a bucket named finance-2020922 were created using the tf file.

If we use the terraform state list command, we can see the list of resources. We can also search for a specific resource by name.
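For example (the resource's local name is assumed here to match the bucket name):

terraform state list
terraform state list aws_s3_bucket.finance-2020922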

We can also see more details of the resource
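For example:

terraform state show aws_dynamodb_table.cars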

Now, remember that we created a DynamoDB table using this tf file.

Once initialized, planned, and applied, this is the tfstate file that gets created.

Now, if we want to change a resource's name in the state, we can use the terraform state mv command. Once done, the tfstate file is updated.
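For example, renaming the cars table resource (the new name is just an example):

terraform state mv aws_dynamodb_table.cars aws_dynamodb_table.vehicles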

But we then need to manually update main.tf to use the new name and apply the changes.

Also, once we keep the tfstate file remotely, we no longer have a tfstate file locally.

Then we need to pull the state file
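To view the remote state from the command line:

terraform state pull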

If we want to remove a resource from the state, we use the terraform state rm command.
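For example (again assuming the local name matches the bucket name):

terraform state rm aws_s3_bucket.finance-2020922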

Here the bucket has been removed from the state, so Terraform no longer manages it.

Amazon EC2 instance using Terraform

Here are two tf files: one with the instance resource (its name, AMI, instance type, tags, and a user_data script to run on first boot), and a second one specifying the provider region for the instance.
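A sketch of those two files, assuming an Ubuntu AMI and a t2.micro instance (the AMI ID is a placeholder and the file names are illustrative; pick an AMI for your region):

main.tf:

resource "aws_instance" "webserver" {
  ami           = "ami-0edab43b6fa892279"  # placeholder Ubuntu AMI ID
  instance_type = "t2.micro"

  tags = {
    Name        = "webserver"
    Description = "An nginx web server on Ubuntu"
  }

  user_data = <<-EOF
    #!/bin/bash
    sudo apt update
    sudo apt install nginx -y
    sudo systemctl enable nginx
    sudo systemctl start nginx
  EOF
}

provider.tf:

provider "aws" {
  region = "us-east-1"
}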

Once applied, we can see the webserver in the AWS console.

We now have our Ubuntu webserver created, with an nginx server installed on it through user_data.

But how do we connect to the Ubuntu server from our local machine? We need to connect to it over SSH.

We can provide a user-supplied key pair to log in to the EC2 instance.

Here we have the key (web.pub) in the root user's folder, and we use the public key here.
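A minimal sketch, assuming the public key sits at /root/.ssh/web.pub (adjust the path to wherever your key actually lives):

resource "aws_key_pair" "web" {
  public_key = file("/root/.ssh/web.pub")
}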

Rather than pointing at the path, we can also paste the content of the public key (web.pub) directly, like this:
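The key string below is a truncated placeholder; use your own public key content:

resource "aws_key_pair" "web" {
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ... admin@localhost"
}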

Then we need to reference the key_name in the instance, like this:
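For example (the AMI ID remains a placeholder):

resource "aws_instance" "webserver" {
  ami           = "ami-0edab43b6fa892279"
  instance_type = "t2.micro"
  key_name      = aws_key_pair.web.key_name
  # ... user_data and tags as before ...
}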

That covers connecting to the instance as an admin from our own machine.

What if we want anyone to be able to reach the instance on port 22? We can go to the AWS console and set the security group source to 0.0.0.0/0, meaning anyone can access port 22.

Or we could do it in the tf file, like this:
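A sketch of such a security group (the name ssh-access is an assumption):

resource "aws_security_group" "ssh-access" {
  name        = "ssh-access"
  description = "Allow SSH access from anywhere"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}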

And then set the vpc_security_group_ids on the instance.
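For example, referencing the security group sketched above:

resource "aws_instance" "webserver" {
  # ... ami, instance_type, key_name, user_data as before ...
  vpc_security_group_ids = [aws_security_group.ssh-access.id]
}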

We can also print the public IP address as an output once we apply Terraform.
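A minimal output block for that:

output "publicip" {
  value = aws_instance.webserver.public_ip
}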

Here is the output after the apply

Terraform taint

Assume that we try to save the public IP address to a wrong path (for example, through a provisioner writing to a directory that doesn't exist). This causes an error.

Here the webserver is marked as tainted, and Terraform will plan to recreate it on the next apply.
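The related commands, using the webserver resource from above:

terraform taint aws_instance.webserver      # manually mark the resource as tainted
terraform untaint aws_instance.webserver    # clear the taint so it is not recreated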

Debugging

We can check the logs to track anomalies

Once we set the log level to TRACE, running terraform plan shows all of the logs in detail.
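That is:

export TF_LOG=TRACE
terraform plan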

We can also save the logs to a file and check, say, the first 10 lines.
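For example (the log path is just an example):

export TF_LOG_PATH=/tmp/terraform.log
terraform plan
head -10 /tmp/terraform.log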

To stop logging, unset the environment variables.
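For example:

unset TF_LOG
unset TF_LOG_PATH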

Terraform import

Earlier, we saw that we can read data from resources managed by tools other than Terraform.

What if we want to bring a resource managed elsewhere under Terraform's management?

For example, here we can see the public IP of newserver, an instance managed by something other than Terraform (perhaps created directly in AWS).
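One way to read that IP is through a data source, assuming the instance carries a Name=newserver tag (the tag is an assumption):

data "aws_instance" "newserver" {
  filter {
    name   = "tag:Name"
    values = ["newserver"]
  }
}

output "newserver-public-ip" {
  value = data.aws_instance.newserver.public_ip
}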

We can't destroy this webserver or make any changes to it with Terraform, because it is not managed by Terraform.

To solve this, we add an empty (blank) resource block named webserver-2. Then we import the instance into it, specifying the webserver-2 address and the instance ID.
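A sketch of the two steps (the instance ID below is a placeholder; use the real ID from AWS):

resource "aws_instance" "webserver-2" {
  # left empty for now; we will fill it in after the import
}

terraform import aws_instance.webserver-2 i-0123456789abcdef0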

Once done, if we check the tfstate file, we can see the imported instance.

You can also verify that from the aws portal

Since the details now live in the tfstate file, we can use them to fill in the main.tf block we left empty earlier.

Once done, plan the changes