Understanding Terraform

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage multiple cloud services through provider plugins.



What is the Terraform workflow?
The core Terraform workflow is init -> validate -> plan -> apply, and finally destroy when the infrastructure is no longer needed.
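A typical run of that workflow, executed in order from a directory containing .tf files:

```
terraform init      # one-time setup: download providers and modules, configure the backend
terraform validate  # check the configuration for syntax errors
terraform plan      # preview the changes Terraform would make
terraform apply     # create or update the infrastructure
terraform destroy   # tear the infrastructure down when finished
```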

What is the important file of Terraform which records the actions of terraform apply?
Every Terraform execution records its actions in a file called the Terraform state file (terraform.tfstate by default).

The terraform init command is used to initialise a working directory containing Terraform configuration files.
During init, the configuration is searched for module blocks, and the source code for referenced modules is retrieved from the locations given in 
their source arguments.

The terraform plan command is used to create an execution plan.
It shows what Terraform would do but does not modify the infrastructure itself.

The terraform apply command is used to apply the changes required to reach the desired state of the configuration.
Terraform apply will also write data to the terraform.tfstate file.

The terraform refresh command is used to reconcile the state Terraform knows about (via its state file) with the real-world infrastructure.
This does not modify infrastructure but does modify the state file.

The terraform destroy command is used to destroy the Terraform-managed infrastructure.
The terraform destroy command is not the only way to destroy infrastructure: removing a resource block from the configuration and running terraform apply will also destroy that resource.

The terraform validate command validates the configuration files in a directory.
Validate runs checks that verify whether a configuration is syntactically valid, including the correctness of attribute names and value types, and is therefore primarily useful for general verification of reusable modules.

What are the important components of Terraform code?

Providers 
Resources 
Variables 
Statefile 
Provisioners 
Backends 
Modules 
Data Sources 
Service Principals 


What is a provider? Write a small sample code.
In Terraform, a provider is a plugin that allows you to interact with a specific cloud or service provider, such as AWS, Google Cloud, Azure, or Kubernetes.

eg:

provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  subnet_id     = "subnet-0cfd988b012345678"
}

eg2: 

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
  subscription_id = "XXXXXXXXXX"
  client_id       = "XXXXXXXXXXXX"
  client_secret   = "XXXXXXXXXXXXX"
  tenant_id       = "XXXXXXXXXXX"
}

resource "azurerm_resource_group" "rg" {
  name     = "my_new_test_grp"
  location = "East US"
}


what is a resource in Terraform and give me an example?
In Terraform, a resource is a declarative representation of a cloud infrastructure object, such as a virtual machine, network interface, or DNS record. Resources are the main building blocks of Terraform code, and they describe what infrastructure should be created or modified.

resource "aws_instance" "my_test_vm" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  subnet_id     = "subnet-0cfd988b012345678"
}




What is a Terraform variable?

Variables let you pass values into Terraform code, making the code reusable and configurable instead of hard-coded:
variable "ami_id" {
  type    = string
  default = "ami-0c55b159cbfafe1f0"
}

variable "instance_type" {
  type    = string
  default = "t2.micro"
}

variable "subnet_id" {
  type = string
}

resource "aws_instance" "example" {
  ami           = var.ami_id
  instance_type = var.instance_type
  subnet_id     = var.subnet_id
}

The subnet_id value can be supplied in a .tfvars file:

subnet_id = "subnet-0cfd988b012345678"

or on the command line:

terraform apply -var "subnet_id=subnet-0cfd988b012345678"

In Terraform, you can define variables in a separate file with the .tfvars extension, which can then be used to assign values to variables in your Terraform code. Here's an example of how to define a variable in a .tfvars file:

# vars.tfvars

ami_id = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
subnet_id = "subnet-0cfd988b012345678"


# main.tf

variable "ami_id" {}
variable "instance_type" {}
variable "subnet_id" {}

resource "aws_instance" "example" {
  ami           = var.ami_id
  instance_type = var.instance_type
  subnet_id     = var.subnet_id
}



Write a sample Terraform configuration.


terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.0"
    }
  }
}

provider "aws" {
  region      = "us-west-2"
  access_key  = var.aws_access_key
  secret_key  = var.aws_secret_key
}

variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "instance_count" {
  type    = number
  default = 1
}
variable "ami_id" {
  type    = string
  default = "ami-0c55b159cbfafe1f0"
}
variable "instance_type" {
  type    = string
  default = "t2.micro"
}
variable "subnet_id" {
  type = string
}

resource "aws_instance" "ec2" {
  count         = var.instance_count
  ami           = var.ami_id
  instance_type = var.instance_type
  subnet_id     = var.subnet_id

  tags = {
    Name = "webserver-${count.index}"
  }
}


===========================================
test.tfvars

aws_access_key = "YOUR_AWS_ACCESS_KEY"
aws_secret_key = "YOUR_AWS_SECRET_KEY"
instance_count = 100
subnet_id = "subnet-0cfd988b012345678"


What is a remote state file and what is its use?

After a deployment finishes, Terraform creates a state file to keep track of the current state of the infrastructure.
In simple terms, Terraform compares the "current state" with the "desired state" using this file.

By default, a file named "terraform.tfstate" is created in your working directory. A remote state file is the same file stored in a shared remote location (such as an S3 bucket) instead of on the local disk.

How do you achieve concurrency in Terraform, or how can you create a remote state file?

Concurrency among multiple users working on the same Terraform code is achieved by adding a backend block to the configuration.

eg:

terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"  # S3 bucket names cannot contain underscores
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}

After writing the code, run terraform init; this creates the state file in the remote location, called the remote state file.

The remote state file is locked while terraform plan or terraform apply runs (for the s3 backend this requires a DynamoDB table configured via the dynamodb_table argument), keeping code changes consistent across multiple Terraform client connections.

The remote state file is updated with the latest information after terraform apply completes.





How do you recover a Terraform state file, assuming there is no remote state file or backup?

If you don't have a backup, you can try to recreate the state file by inspecting your infrastructure resources and bringing them into a new state file based on their current state, typically by running terraform import for each resource. This can be a time-consuming process, especially if you have a large infrastructure with many resources.

what is terraform import?

terraform import command can be used to import existing infrastructure resources into the Terraform state. This command allows you to bring resources that were not created by Terraform into the Terraform state file and manage them with Terraform going forward.

terraform import <resource_type>.<resource_name> <resource_id>
terraform import aws_instance.my_ec2_instance i-0123456789abcdefg

After running this command, Terraform records the imported resource in the state file, and you can manage the resource with Terraform going forward.
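Note that terraform import only writes to state; current Terraform versions require a matching resource block to already exist in the configuration before importing. A minimal sketch (attribute values are placeholders to be replaced with the real instance's values):

```hcl
# A matching resource block must exist before running terraform import.
resource "aws_instance" "my_ec2_instance" {
  # After importing, run `terraform state show aws_instance.my_ec2_instance`
  # and copy the real values here so that `terraform plan` shows no diff.
  ami           = "ami-0c55b159cbfafe1f0" # placeholder
  instance_type = "t2.micro"              # placeholder
}
```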


what is a module and what is its use?

A module is a container for multiple resources that are used together; any directory of .tf files is a module, and modules can call other modules. Modules are very useful when you want to create multiple environments such as production, staging, and development, each with their own resources but sharing common configuration. You can create a module for each environment and reuse it with different input variables to create those environments. This helps reduce code duplication and ensures consistency across environments.


eg:
Create a separate directory called "prod" under the terraform directory.

#prod/main.tf
resource "aws_instance" "prod" {
  ami           = var.ami_id
  instance_type = var.instance_type
  subnet_id     = var.subnet_id
}

------------------------ 
#prod/prodvar.tf
# Input variables (declare each variable only once per module)
variable "ami_id" {}
variable "instance_type" {}
variable "subnet_id" {}

------------------------
#prod/outputs.tf
output "instance_id" {
  value = aws_instance.prod.id
}

-----------------------------
# root module main.tf

# Define the provider

provider "aws" {
  region      = "us-west-2"
  access_key  = var.aws_access_key
  secret_key  = var.aws_secret_key
}

# Call the ec2-instance module
module "prod_ec2_instance" {
  source = "./prod"

  ami_id        = var.prod_ami_id
  instance_type = var.prod_instance_type
  subnet_id     = var.prod_subnet_id
}

# Define input variables
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "prod_ami_id" {}
variable "prod_instance_type" {}
variable "prod_subnet_id" {}

# Define output variables
output "instance_id" {
  value = module.prod_ec2_instance.instance_id
}

what is a data source?

A data source lets Terraform read information about resources that were created outside of Terraform (or elsewhere in Terraform) and use that information in your configuration.

For example, if you want to create an EC2 instance inside network infrastructure that was already created with the AWS CLI, you can use a "data" block in the Terraform code:

eg:
data "aws_vpc" "test_vpc" {
  id = "vpc-0123456789abcdef0" # placeholder VPC ID
}

The data keyword imports information about the already-created resource into the Terraform code.
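A sketch of consuming a data source from a resource (the subnet's Name tag value is an assumption for illustration):

```hcl
# Look up an existing subnet created outside Terraform, by its Name tag
data "aws_subnet" "existing" {
  filter {
    name   = "tag:Name"
    values = ["my-existing-subnet"] # hypothetical tag value
  }
}

# Launch an instance into that pre-existing subnet
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  subnet_id     = data.aws_subnet.existing.id
}
```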





what is terraform locals or local values?
Local values are useful for substituting a long expression or repeated value with a simple, readable name. Terraform locals (or local values) are values defined within a Terraform module and visible only within that module. Locals are used to store intermediate values or to calculate values that are used multiple times within the module.

eg:
locals {
  region = "us-east-1"
  instance_type = "t2.micro"
  ami = "ami-0c55b159cbfafe1f0"
}


These values can then be referenced as local.region, local.instance_type, and local.ami:

resource "aws_instance" "example" {
  ami           = local.ami
  instance_type = local.instance_type
  # Note: aws_instance has no "region" argument; the region is set on the
  # provider, e.g. provider "aws" { region = local.region }
}
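Locals can also compute values from variables; a small sketch (the environment variable and tag names are assumptions):

```hcl
variable "environment" {
  type    = string
  default = "dev"
}

locals {
  # Compute a reusable name prefix and a common tag map once
  name_prefix = "${var.environment}-web"
  common_tags = {
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  tags          = merge(local.common_tags, { Name = local.name_prefix })
}
```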

==============================
What is Terraform Import? How to import existing infra into statefile?

Terraform import is a command that allows you to import existing infrastructure into your Terraform state file. This is useful when you have infrastructure that was created outside of Terraform, and you want to start managing it with Terraform.

eg: 
if you have an existing AWS EC2 instance with the ID "test1234566", you can import it into your Terraform state file with the following command:

terraform import aws_instance.example test1234566

=============================================
What are Terraform Functions?

Terraform functions are built-in functions that allow you to manipulate data and perform calculations within your Terraform configuration files. These functions are used to generate dynamic values for resource attributes, interpolation, and other parts of the configuration

Terraform provides a large set of functions for common tasks such as string manipulation, mathematical operations, and date/time formatting. Some examples of Terraform functions include:

concat: concatenate lists together (string joining uses interpolation or format/join)
lower: convert a string to lowercase
floor: round a number down to the nearest integer
formatdate: format a timestamp into a specific date and time format
coalesce: return the first non-null argument

Functions are called directly within expressions in Terraform configurations (the "${...}" interpolation syntax is only needed inside strings). For example, to build a lowercase bucket name from a variable:

resource "aws_s3_bucket" "example" {
  bucket = lower("${var.environment}-example-bucket")
  # ...
}

Note that the argument is "bucket" (not "bucket_name"), and that concat() operates on lists, so joining strings is done with interpolation as above.

The join(separator, list) function concatenates list elements with a separator:

join("", ["aaa", "bbb", "ccc"])   # returns "aaabbbccc"

For example, to create the name of an RDS database, you could interpolate the environment variable with a static string and convert the result to lowercase:

resource "aws_db_instance" "example" {
  identifier = lower("${var.environment}-example-db")
  # ...
}

To create the name of an EC2 instance, you could use the format function to format a string with the environment variable and a counter:

resource "aws_instance" "example" {
  count = 2
  ami = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  tags = {
    Name = "${format("%s-example-instance-%02d", var.environment, count.index + 1)}"
  }
  # ...
}


This creates two EC2 instances with unique names based on the value of the environment variable and a counter. The format function is used to format the string with the environment variable and the count.index value, which is incremented for each instance.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

What are Terraform Provisioners?

Terraform provisioners are a set of features in Terraform that allow you to configure and manage software on a newly created resource after it has been provisioned. Provisioners are used to perform actions on the resource such as installing software packages, configuring software, or copying files.

Terraform supports three main types of provisioners:

File provisioners: These copy files or directories from the machine running Terraform to the newly created resource.

Local-exec provisioners: These execute commands on the machine running Terraform. For example, you could use a local-exec provisioner to record the new resource's IP address in a local inventory file.

Remote-exec provisioners: These execute commands on the newly created resource itself. For example, you could use a remote-exec provisioner to install and configure a software package or run a script on the resource.

eg:
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  # remote-exec needs connection details to reach the new instance
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("~/.ssh/id_rsa") # example key path
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y httpd",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd"
    ]
  }
}


File provisioner:

Suppose you're creating an AWS EC2 instance that will be running a web application. You have a script called rman_bkp.sh, and you want to copy it to the EC2 instance after it has been provisioned.

resource "aws_instance" "web_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  # Like remote-exec, the file provisioner needs a connection block
  # with SSH details in order to reach the instance.
  provisioner "file" {
    source      = "rman_bkp.sh"
    destination = "/tmp/rman_bkp.sh"
  }
}


Local-exec provisioner:

A local-exec provisioner runs on the machine where Terraform itself is executed, not on the new resource. Suppose you want to record the public IP of a newly created EC2 instance in a local inventory file:

resource "aws_instance" "web_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  provisioner "local-exec" {
    command = "echo ${self.public_ip} >> instance_ips.txt"
  }
}


Remote-exec provisioner:

Suppose you're creating an AWS EC2 instance that will be running a web application, and you need to configure the web server after it has been provisioned.

resource "aws_instance" "web_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  # (A connection block with SSH details is required for remote-exec.)
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update && sudo apt-get install -y nginx",
      "sudo systemctl start nginx",
      "sudo systemctl enable nginx"
    ]
  }
}



===================================================
What are Terraform Workspaces?

Terraform workspaces are a feature that allows you to maintain multiple "environments" or "instances" of a single Terraform configuration. Each workspace is essentially a separate instance of the same configuration, with its own state file and associated resources.

terraform workspace new dev
terraform apply

terraform workspace new staging
terraform apply

terraform workspace new prod
terraform apply

In this example, we create three workspaces named dev, staging, and prod, and apply the Terraform configuration to each one. The resources created in each workspace will be isolated from the others and stored in separate state files.

You can also switch between workspaces using the terraform workspace select command:
terraform workspace select dev
terraform apply

# Set up the provider
provider "aws" {
  region = "us-east-1"
}

# Define the EC2 instance resource
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  # Use the workspace name as part of the instance name
  tags = {
    Name = "example-${terraform.workspace}"
  }
}

# Store state remotely. Backend blocks cannot use variables or interpolation,
# so the workspace name cannot appear in "key"; the s3 backend automatically
# stores each non-default workspace's state under an env:/<workspace> prefix.
terraform {
  backend "s3" {
    bucket = "example-bucket"
    key    = "example.tfstate"
    region = "us-east-1"
  }
}

# Create the dev workspace and apply the configuration
terraform workspace new dev
terraform apply

# Create the staging workspace and apply the configuration
terraform workspace new staging
terraform apply

# Create the prod workspace and apply the configuration
terraform workspace new prod
terraform apply


===================================
Terraform for_each - deploy multiple VMs with different sizes, with an example

for_each is a meta-argument that creates one instance of a resource for each element of a map or set.
Here is an example of using the for_each meta-argument with Terraform to deploy multiple VMs with different sizes in Azure:

# Define a map of VM sizes
variable "vm_sizes" {
  default = {
    "web" = "Standard_B1s"
    "app" = "Standard_B2s"
    "db"  = "Standard_D2_v3"
  }
}

# Define a map of VM names
variable "vm_names" {
  default = {
    "web" = "web-vm"
    "app" = "app-vm"
    "db"  = "db-vm"
  }
}

# Define the Azure provider
provider "azurerm" {
  features {}
}

# Define the resource group
resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "westus"
}

# Define the VMs using a for_each loop
resource "azurerm_linux_virtual_machine" "vm" {
  for_each = var.vm_sizes

  name                = "${var.vm_names[each.key]}"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  size                = each.value
  admin_username      = "adminuser"

  # azurerm_linux_virtual_machine requires either an SSH key or an
  # admin_password with disable_password_authentication = false
  admin_ssh_key {
    username   = "adminuser"
    public_key = file("~/.ssh/id_rsa.pub") # example key path
  }

  network_interface_ids = [
    azurerm_network_interface.example[each.key].id,
  ]

  os_disk {
    name              = "${var.vm_names[each.key]}-osdisk"
    caching           = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  custom_data = filebase64("${path.module}/cloud-init.yml")
}

# Define the network interfaces using a for_each loop
resource "azurerm_network_interface" "example" {
  for_each = var.vm_sizes

  name                = "${var.vm_names[each.key]}-nic"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  ip_configuration {
    name                          = "${var.vm_names[each.key]}-ipconfig"
    subnet_id                     = azurerm_subnet.example.id
    private_ip_address_allocation = "Dynamic"
  }
}

# Define the subnet
resource "azurerm_subnet" "example" {
  name                 = "example-subnet"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.2.0/24"]
}

# Define the virtual network
resource "azurerm_virtual_network" "example" {
  name                = "example-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  # The subnet is defined as a separate azurerm_subnet resource above;
  # declaring it inline here as well would create a circular reference.
}




Tell me a scenario where you come across provisioners?
Provisioners are typically used to perform post-deployment tasks such as installing software, configuring services, or executing scripts on newly provisioned resources. A scenario where provisioners may be used is in the deployment of an application that requires additional configuration after the underlying infrastructure resources have been provisioned.

What are plugins and providers in Terraform?
Providers are plugins that allow Terraform to interact with various cloud providers and services to create and manage infrastructure resources. More generally, plugins are the external binaries Terraform launches to extend its functionality; providers (and, historically, provisioners) are the main plugin types.

How do you deploy the Terraform code manually or with some automation? Have you configured locks on the backend statefile?
Terraform code can be deployed manually using the "terraform apply" command or with automation tools such as Jenkins, GitLab CI/CD, or AWS CodePipeline. It is recommended to configure locks on the backend statefile to prevent multiple users or systems from modifying the statefile at the same time and potentially causing conflicts.

When you want to deploy the same Terraform code on different env then what is the best strategy?
The best strategy for deploying the same Terraform code on different environments is to use Terraform workspaces to manage separate sets of state files for each environment. This allows for isolated deployments and avoids conflicts between different environments.

How do you standardize Terraform code so that can be shared across multiple teams in an organization?
Terraform code can be standardized by following coding best practices, using modules to encapsulate reusable code, and implementing a code review process to ensure consistency and quality across the organization.

How do you call output of one module in another module?
The output of one module can be referenced elsewhere in the configuration (including in another module's arguments) using the module keyword followed by the module name and the output variable name, e.g. module.my_module.output_variable_name.
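A small sketch of wiring one module's output into another (the module paths, output name, and the aws_subnet resource inside ./network are assumptions):

```hcl
# ./network/outputs.tf — the called module exposes an output
# (assumes an aws_subnet.main resource is defined in that module)
output "subnet_id" {
  value = aws_subnet.main.id
}

# Root module — pass one module's output into another module
module "network" {
  source = "./network"
}

module "app" {
  source    = "./app"
  subnet_id = module.network.subnet_id # output of "network" feeds "app"
}
```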
Let's say you have created a lot of resources using Terraform; is there a way to delete just one of the resources through Terraform?
Yes, Terraform can destroy an individual resource using the -target flag, for example: terraform destroy -target=aws_instance.my_instance. Alternatively, remove the resource block from the configuration and run terraform apply.

Can we merge 2 different state files?
There is no single merge command, but resources can be moved between state files using terraform state mv with the -state and -state-out flags, or by exporting and importing state data with the terraform state pull and terraform state push commands.

What are few challenges that you came across while working with Terraform?
Some common challenges with Terraform include managing state files, dealing with dependencies between resources, and handling errors and rollbacks during deployments.

Best way to authenticate cloud providers through Terraform?
The best way to authenticate cloud providers through Terraform is to use the respective provider's authentication methods such as API keys, access keys, or OAuth tokens. These credentials can be stored securely in environment variables or in a configuration file.
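For example, the AWS provider reads standard environment variables, so credentials can stay out of the code entirely (the values below are placeholders):

```
export AWS_ACCESS_KEY_ID="AKIA..."     # placeholder
export AWS_SECRET_ACCESS_KEY="..."     # placeholder
export AWS_DEFAULT_REGION="us-west-2"
```

With these set, the provider block needs no credential arguments at all, e.g. provider "aws" { region = "us-west-2" }.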


Let's assume you are creating 2 resources using Terraform, but you need to make sure the 2nd resource starts being created only after the 1st resource has been created successfully. Is this possible?

Yes, it is possible to ensure that the second resource is created only after the first resource has been successfully created. This can be achieved by using the "depends_on" argument in the resource block. For example, if resource B depends on resource A, then the "depends_on" argument can be set as follows:

resource "aws_instance" "instance_a" {
  ...
}

resource "aws_instance" "instance_b" {
  depends_on = [aws_instance.instance_a]
  ...
}


 What is null resource in terraform?
A null_resource is a resource block that represents a provisioner that does not create any infrastructure. It is useful when you want to run a provisioner, such as a script or a command, on the local machine or on a remote machine using SSH, without creating any infrastructure resources.
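A minimal sketch of a null_resource (the echo command and trigger value are illustrative; the hashicorp/null provider is installed by terraform init):

```hcl
# null_resource creates no infrastructure; it exists only to run provisioners
resource "null_resource" "configure" {
  # Re-run the provisioner whenever this value changes
  triggers = {
    script_version = "1"
  }

  provisioner "local-exec" {
    command = "echo 'running a local configuration step'"
  }
}
```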


What happens if statefile is missed or delete?

If the state file is missing or deleted, Terraform will not be able to manage the resources that are defined in the state file. Terraform will consider all the resources as new resources and will attempt to create them. This can result in duplication of resources or errors if resources with the same names already exist.

Can terraform used for automating on prem infra?
Yes, Terraform can be used to automate on-prem infrastructure. Terraform has support for various providers, including VMware vSphere, OpenStack, and Microsoft Hyper-V, which can be used to manage on-prem infrastructure.

What if we encounter a serious error and want to roll back?
If a serious error is encountered during a Terraform deployment, you can roll back by reverting the configuration to the last known-good version (e.g. from version control) and running terraform apply again; in the worst case, terraform destroy can tear down the created resources before re-applying.

How to call existing resources from AWS or Azure to terraform without hardcoding the values or terraform import?
To call existing resources from AWS or Azure to Terraform without hardcoding the values or using Terraform import, you can use data sources. Data sources allow you to retrieve information about existing resources and use that information in your Terraform configuration. For example, to retrieve information about an existing AWS S3 bucket, you can use the following data source:

data "aws_s3_bucket" "bucket" {
  bucket = "my-bucket"
}


This data source can be used to reference the bucket in other resource blocks, such as:
resource "aws_s3_bucket_object" "object" {
  bucket = data.aws_s3_bucket.bucket.id
  ...
}


If we give count zero in resources level what will happen?
If the count argument is set to zero in a resource block, that resource will not be created. This is useful when you want to conditionally create resources based on variables or other conditions.
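This is commonly combined with a conditional expression to toggle a resource on or off (the variable name is an assumption):

```hcl
variable "create_instance" {
  type    = bool
  default = false
}

# With create_instance = false, count is 0 and no instance is created
resource "aws_instance" "example" {
  count         = var.create_instance ? 1 : 0
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}
```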


 What is a Dynamic Block in terraform?
Dynamic blocks are used to create multiple instances of a nested block, where the number of instances is determined dynamically from a list or map. This allows for more flexible and reusable Terraform code. For example, to create multiple AWS security group rules based on a list of ports (a dynamic block must appear inside a resource block, shown here wrapped in a security group):

resource "aws_security_group" "example" {
  name = "example-sg"

  dynamic "ingress" {
    for_each = var.ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"] # example source range
    }
  }
}



Best Practices in Terraform?
Best practices in Terraform include:
- using modules to organize and reuse Terraform code
- using version control to manage Terraform configurations
- keeping the state file separate from the configuration files (e.g. in a remote backend, out of version control)
- using variables to make configurations more flexible and reusable
- using terraform plan and terraform apply to preview and apply changes
It is also important to follow the provider-specific best practices for the cloud provider being used.



















