Getting Started with the Terraform AWS Provider


Terraform is an open-source infrastructure-as-code tool that allows you to easily build and manage infrastructure. Its simplicity has made it a favorite among developers looking for a straightforward way to deploy resources such as VPCs, EC2 instances, and EBS volumes in their AWS environment. To use Terraform with AWS specifically, you must first install the AWS provider, published at the source address hashicorp/aws.



If you want to use Terraform to manage and interact with Amazon Web Services (AWS), you’ll need to use the AWS provider. Terraform’s AWS provider allows you to connect with AWS’s many resources, including Amazon S3, Elastic Beanstalk, Lambda, and many more.

In this comprehensive tutorial, you'll learn all you need to know about the AWS provider and how to use it with Terraform to manage your Amazon infrastructure, step by step.

Let’s get started!

Prerequisites

If you want to follow along with this tutorial, make sure you have the following:

  • Terraform installed — This tutorial covers features available in Terraform 0.13 and later.
  • An AWS account with credentials permitted to create the resources shown in this tutorial.
  • A code editor — While you may work with Terraform configuration files in any text editor, you should have one that understands HCL, the Terraform language. Visual Studio (VS) Code is a good place to start.

Related: How to Install Terraform on Windows

Related: A Tutorial on What You Need to Know About Visual Studio Code

What exactly is an AWS Provider?

Terraform uses plugins called providers to communicate with cloud platforms like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Oracle Cloud. One of the most extensively used is the AWS provider, which interacts with a variety of AWS services, including Amazon S3, Elastic Beanstalk, Lambda, and many more.

Terraform connects to Amazon using the AWS provider and the appropriate credentials to deploy, update, and manage hundreds of AWS services.

The AWS provider is defined in the Terraform configuration file and comprises parameters such as the version, endpoint URLs, and cloud region, among others.

Declaring the Amazon Web Services Provider

When you need to refer to a provider, you use its local name. The local name is the name assigned to the provider inside the required_providers block. Local names are assigned when a provider is required and must be unique per module.
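To illustrate that the local name is just a per-module label, here is a minimal sketch using a hypothetical local name amazon instead of the conventional aws — the provider block and any references are named after whatever local name you declare:

```hcl
terraform {
  required_providers {
    # 'amazon' is the local name; any name unique within the module works
    amazon = {
      source = "hashicorp/aws"
    }
  }
}

# The provider configuration block uses the local name declared above
provider "amazon" {
  region = "us-east-2"
}
```

In practice, almost everyone keeps the conventional local name aws, which is what the rest of this tutorial does.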

In Terraform 0.13 and later versions, you must also set the source argument when declaring required providers. The source argument specifies the location from which Terraform can download the provider plugin.

A source address is made up of three parts:

  • Hostname — The hostname of the Terraform registry that distributes the provider. The default hostname is registry.terraform.io.
  • Namespace — An organizational namespace within the specified registry.
  • Type — A short name for the platform or system the provider manages. It must be unique within its namespace.

The source argument's declaration syntax is shown below, with slashes (/) separating the three parts of a source address.

# [<HOSTNAME>/]<NAMESPACE>/<TYPE>

# Example usage of the source argument syntax.
# Declaring the source location/address where Terraform can download plugins.
# The official AWS provider belongs to the hashicorp namespace on the
# registry.terraform.io registry, so hashicorp's source address is hashicorp/aws:
source = "hashicorp/aws"

The following Terraform configuration defines the required provider's local name (aws), as well as the source address, the AWS provider version, and the provider's region (us-east-2).

# Declaring the provider requirements (Terraform 0.13 and later)
terraform {
  # A provider requirement consists of a local name (aws),
  # a source location, and a version constraint.
  required_providers {
    aws = {
      # Declaring the source location/address where Terraform can download plugins
      source  = "hashicorp/aws"
      # Declaring the version of the AWS provider as any 3.x release
      version = "~> 3.0"
    }
  }
}

# Configuring the AWS provider in the us-east-2 region
provider "aws" {
  region = "us-east-2"
}

Using Hard-Coded Credentials to Authenticate an AWS Account

Let’s go through how to authenticate an AWS account now that you have a basic idea of how to define the AWS provider.

You may use a variety of strategies to authenticate with the AWS provider, including setting environment variables and saving credentials in a named profile. However, hard-coding the credentials into your AWS provider block is the easiest way to log in to an AWS account.

Although hard-coded credentials are not encouraged since they are vulnerable to leaks, you may specify them in Terraform configuration to rapidly test any AWS resource. However, if you keep reading, you’ll discover a better approach to authenticate an AWS account by using environment variables.

As illustrated below, the configuration defines a local name (aws) as well as the provider region (us-east-2). The AWS provider block also specifies an access key and a secret key for authenticating the AWS account.

# Declaring an AWS provider called aws
provider "aws" {
  # Declaring the provider region
  region     = "us-east-2"
  # Declaring the access key and secret key (hard-coded for testing only)
  access_key = "access-key"
  secret_key = "secret-key"
}

Declaring Environment Variables to Secure Credentials

You just learned that hard-coding static credentials to authenticate an AWS cloud service with Terraform is doable. Hard-coding, however, is risky and should only be used in a test environment to quickly try out configurations.

Is there another method to keep credentials safe? Yes — you may define as many credentials as you need by declaring them as environment variables. Environment variables are name/value pairs that are set outside of the Terraform configuration file.

To export each environment variable, use the export commands listed below. Exported environment variables remain available for the duration of the shell session, where Terraform can read them.

# Exporting the variable AWS_ACCESS_KEY_ID
export AWS_ACCESS_KEY_ID="access-key"
# Exporting the variable AWS_SECRET_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY="secret-key"
# Exporting the variable AWS_DEFAULT_REGION
export AWS_DEFAULT_REGION="us-east-2"

Using a Named Profile to Store Multiple Credentials

With both hard-coded credentials and environment variables, you may only authenticate one AWS account at a time. But what if you need to keep several sets of credentials on hand and use them as needed? The best solution is to save the credentials in named profiles.

The following code generates a named profile (Myprofile) with an access key and a secret key.

On Linux and macOS, named profiles are stored in the $HOME/.aws/credentials file, while on Windows the file is %USERPROFILE%\.aws\credentials. Replace Myprofile with your profile's real name.

# Creating the named profile 'Myprofile'
[Myprofile]
aws_access_key_id     = AKIAVWOJMI5836154yRW31
aws_secret_access_key = vIaGmx2bJCAK90hQbpNhPV2k5wlW7JsVrP1bm9Ft
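A single credentials file can hold several named profiles side by side, which is how you keep multiple sets of credentials on hand. A minimal sketch, using two hypothetical profiles (dev and prod) with placeholder keys:

```ini
# Hypothetical named profiles in the same credentials file
[dev]
aws_access_key_id     = dev-access-key
aws_secret_access_key = dev-secret-key

[prod]
aws_access_key_id     = prod-access-key
aws_secret_access_key = prod-secret-key
```

You then pick whichever profile you need by name in the Terraform configuration, as shown next.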

After you've created a named profile in ~/.aws/credentials, you may use the profile attribute to refer to it in your Terraform configuration. Below, you're referring to the named profile Myprofile.

# Setting up the 'aws' AWS provider in the us-east-2 region
provider "aws" {
  region  = "us-east-2"
  # Declaring the named profile (Myprofile)
  profile = "Myprofile"
}

Declaring an Assume Role in the AWS Provider

Earlier, you learned how to set up the AWS provider by declaring hard-coded credentials before launching Terraform. However, you may wish to obtain credentials at run time instead. If so, you'll want to use the AssumeRole API. AssumeRole provides temporary credentials, consisting of an access key ID, a secret access key, and a security token, which you can use to authenticate to AWS.

The code below configures the aws provider with an assume_role block containing a role ARN and a session name. To set up AssumeRole access, you'll need an IAM role that defines which entities may assume it and what permissions it grants.

# Declaring the AWS provider 'aws'
provider "aws" {
  # Declaring the assume_role block
  assume_role {
    # The Amazon Resource Name (ARN) of the IAM role to assume.
    # An ARN uniquely identifies a resource within an AWS account.
    role_arn     = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
    # Declaring a session name
    session_name = "SESSION_NAME"
  }
}
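For reference, the IAM role being assumed must carry a trust policy naming who may assume it. A minimal sketch of such a role in Terraform — the role name, account ID, and trusted user here are hypothetical placeholders, not values from this tutorial:

```hcl
# Hypothetical IAM role that a specific IAM user is allowed to assume
resource "aws_iam_role" "example_role" {
  name = "ROLE_NAME"

  # Trust policy: which entities may call sts:AssumeRole on this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRole"
      Principal = {
        AWS = "arn:aws:iam::ACCOUNT_ID:user/SOME_USER"
      }
    }]
  })
}
```

The role's permission policies (attached separately) then determine what the temporary credentials can do.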

Declaring Multiple AWS Providers

You've now learned how to use Terraform to define and configure an AWS provider for a single region. What if you need to manage your infrastructure or AWS services across several cloud regions? In that scenario, you'll need the alias argument.

An alias enables you to define multiple configurations of the same provider and choose which one to use on a per-resource or per-module basis, which is how you support multiple regions.

The code below declares the default AWS provider, aws, in the us-east-2 region. It then declares a second configuration of the same provider, this time with the alias west and the region set to us-west-2.

Declaring an alias enables you to build resources in the us-east-2 region by default, or in the us-west-2 region whenever you select the provider aws.west.

# The default provider configuration; resources that don't specify
# a provider use this one.
provider "aws" {
  # Declaring the us-east-2 region for the AWS provider 'aws'
  region = "us-east-2"
}

# An additional provider configuration for the west region.
# Resources may reference this as aws.west.
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

# Declaring the resource "aws_instance" "west-region" in the west region
resource "aws_instance" "west-region" {
  # aws.west refers to the AWS provider 'aws' with the alias 'west'
  provider = aws.west
}

# Declaring the resource "aws_instance" "east-region" with the default provider
resource "aws_instance" "east-region" {
}

Customizing the Endpoint Configuration of an AWS Provider

Customizing endpoint settings comes in handy when connecting to non-default AWS service endpoints, such as for AWS Snowball, or when doing local testing.

To use custom endpoints, configure the Terraform AWS provider by defining an endpoints configuration block inside the provider block, as illustrated below.

The setup below enables you to access the AWS S3 service on local port 4572 as if you were using an AWS account. Similarly, it allows you to use port 4569 to access DynamoDB locally. DynamoDB is a NoSQL database service that offers high performance and scalability.

Check out the Terraform AWS Provider’s list of customizable endpoints.

# Declaring the AWS provider named 'aws'
provider "aws" {
  # Declaring custom endpoints
  endpoints {
    # Declaring DynamoDB on localhost with port 4569
    dynamodb = "http://localhost:4569"
    # Declaring S3 on localhost with port 4572
    s3       = "http://localhost:4572"
  }
}

Creating Tags

Earlier you learned how to declare an AWS provider with settings like the region and source location. To better manage your resources, however, you can also add tags at the provider level.

Tags are labels made up of user-defined keys and values. Tags come in handy when you need to track billing, ownership, automation, access control, and many other aspects of your AWS account.

Instead of adding tags to each resource individually, let's learn how to add tags to all resources at the provider level, which saves a lot of code and time.

The code below configures an AWS provider with tags defined inside a default_tags block. The benefit of declaring tags within the provider is that the specified tags are automatically added whenever you create a resource with this provider.

# Setting up the 'aws' AWS provider
provider "aws" {
  # Adding the default tags Environment and Owner at the provider level
  default_tags {
    tags = {
      Environment = "Production"
      Owner       = "shanky"
    }
  }
}

# Creating the 'aws_vpc' resource with the Name = MyVPC tag
resource "aws_vpc" "myinstance" {
  # Example CIDR block (a required argument for a VPC)
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "MyVPC"
  }
}

Ignoring Tags

Enabling tags at the provider level makes it easy to deploy tags throughout the environment. However, there are situations when you need to ignore tags, such as when you don't want to apply a default tag to an EC2 instance but do want to apply it to the rest of the AWS account's resources.

The code below sets up an AWS provider with ignore_tags configured inside it. Use ignore_tags when you don't want the provider to manage certain tags on your resources, even though your other tags still apply everywhere.

When you create resources with this aws provider, all resources will ignore the LastScanned tag and any tag key prefixed with kubernetes.io.

# Ignoring tag key prefixes and keys across all resources
# under the AWS provider 'aws'
provider "aws" {
  ignore_tags {
    key_prefixes = ["kubernetes.io"]
    keys         = ["LastScanned"]
  }
}

Creating an Amazon Web Services S3 Bucket

By now, you have learned how to declare and configure the AWS provider in depth. But just declaring the provider does nothing until you manage AWS resources with it, such as provisioning an S3 bucket or deleting an EC2 instance. So, let's learn how to create an AWS S3 bucket!

1. Create a folder called /terraform-s3-demo, then change (cd) into that folder to make it the working directory. Your configuration file, as well as other related files created by Terraform, will be stored in the /terraform-s3-demo folder.

mkdir /terraform-s3-demo
cd /terraform-s3-demo

2. In your code editor of choice, paste the configuration below and save it as main.tf in the /terraform-s3-demo directory.

The main.tf file generates the following resources:

  • Provider requirements — A provider requirement is made up of three parts: a local name, a source location, and a version constraint.
  • Encryption key — The KMS key helps the S3 bucket encrypt any new objects before they are stored in the bucket. The key is generated with Terraform's aws_kms_key resource.
  • AWS provider configuration — Declares the provider's local name (aws) and the region us-east-2.
  • Bucket — Creates the bucket named terraformdemobucket. Because force_destroy is set to false, Terraform cannot destroy this bucket while it contains objects.

Versioning in Amazon S3 refers to keeping multiple versions of an object in the same bucket.

# Declaring the provider requirements
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# Configuring the AWS provider (aws) with the region set to 'us-east-2'
provider "aws" {
  region = "us-east-2"

  # Ignoring tag key prefixes and keys across all resources under this provider
  ignore_tags {
    key_prefixes = ["kubernetes.io/"]
    keys         = ["LastScanned"]
  }
}

# Granting the bucket access
resource "aws_s3_bucket_public_access_block" "publicaccess" {
  bucket              = aws_s3_bucket.demobucket.id
  block_public_acls   = false
  block_public_policy = false
}

# Creating the encryption key which will encrypt the bucket objects
resource "aws_kms_key" "mykey" {
  deletion_window_in_days = "20"
}

# Creating the bucket named terraformdemobucket
resource "aws_s3_bucket" "demobucket" {
  bucket        = "terraformdemobucket"
  force_destroy = false

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.mykey.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }

  # Keeping multiple versions of an object in the same bucket
  versioning {
    enabled = true
  }
}

3. Navigate to the terraform-s3-demo directory and initialize Terraform using the commands below. Terraform downloads the plugins and providers necessary to manage the declared resources.

Related: [Step-by-Step] Creating Your First EKS Cluster with Terraform

Terraform typically employs a three-command workflow: terraform init, terraform plan, and terraform apply, in that order.

cd terraform-s3-demo            # Change directory to terraform-s3-demo
terraform init                  # Initialize Terraform and download plugins
terraform plan                  # Create a Terraform plan and verify the configuration syntax
terraform apply -auto-approve   # Create the AWS S3 bucket without an approval prompt

Creating IAM Users and AWS EC2 Instances

In the previous section, you learned how to use Terraform with the AWS provider to create a single object (an AWS S3 bucket). However, Terraform with the AWS provider also allows you to build numerous objects of the same kind.

1. Make a folder called /terraform-ec2-iam-demo and change into it.

2. Open your preferred code editor, paste the configuration below into it, and save it as main.tf in the /terraform-ec2-iam-demo directory.

The code below creates two EC2 instances, ec21a and ec21b, using the t2.micro and t2.medium instance types, as well as four IAM users. The ami specified in the code is an Amazon Machine Image (AMI), which contains the information needed to launch an instance, such as the operating system to use, which applications to install, and so on.

The Amazon EC2 console may be used to locate Linux AMIs.

# Creating a t2.micro and a t2.medium instance
# using the "aws_instance" "my-machine" resource
resource "aws_instance" "my-machine" {
  # Declaring the AMI
  ami = "ami-0a91cd140a1fc148a"
  for_each = {
    key1 = "t2.micro"
    key2 = "t2.medium"
  }
  instance_type = each.value
  key_name      = each.key
  tags = {
    Name = each.value
  }
}

# Creating four IAM users
resource "aws_iam_user" "accounts" {
  for_each = toset(["Account1", "Account2", "Account3", "Account4"])
  name     = each.key
}

3. Next, create a new file and save it as vars.tf in the /terraform-ec2-iam-demo directory, copying and pasting the code below.

The following code defines the variables referenced in the main.tf file. When you run the Terraform code, the variable tag_ec2, with the values ec21a and ec21b, is allocated to the two EC2 instances declared in main.tf.

variable "tag_ec2" {
  type    = list(string)
  default = ["ec21a", "ec21b"]
}

4. In the /terraform-ec2-iam-demo directory, create a new Terraform configuration file named output.tf, and copy/paste the code below into it.

Once terraform apply has run successfully, the values of aws_instance.my-machine.*.id and aws_iam_user.accounts.*.name should appear at the end of the command's output.

The code below tells Terraform to output values from the aws_instance and aws_iam_user resources specified in the main.tf configuration file.

output "aws_instance" {
  value = aws_instance.my-machine.*.id
}

output "aws_iam_user" {
  value = aws_iam_user.accounts.*.name
}

5. In the /terraform-ec2-iam-demo directory, create a new configuration file named provider.tf and paste the code below into it. The provider.tf file below describes the Terraform AWS provider, which tells Terraform how to interact with all of the AWS resources you specified before.

provider "aws" {
  region = "us-east-2"
}

6. Now, use the tree command to check that all of the essential files are in the /terraform-ec2-iam-demo folder.

All of the necessary Terraform configuration files are shown.

7. To initialize Terraform and create the AWS EC2 instances and IAM users, run the commands below in the order listed.

terraform init
terraform plan
terraform apply

The terraform apply command ran successfully.

After that, go to the AWS Management Console and then to the AWS EC2 service and IAM console.

You can see that the EC2 instances and IAM users are present in the images below.

Verifying the two Terraform-created Amazon EC2 instances

Verifying the four Terraform-created IAM users

Conclusion

This tutorial has given you the knowledge you need to work with the AWS provider, from declaring it to running it inside Terraform. You also learned the various ways the AWS provider lets you supply credentials securely.

Now, which AWS service do you want to use AWS Provider and Terraform to manage?

