I am trying to use https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/12.2.0 (the Terraform AWS EKS module).
What is the difference between worker nodes and a node group?
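For reference, here is roughly how the two concepts show up in that module's inputs. This is an illustrative sketch only; the names and values are placeholders, not my real config, and the exact supported keys may differ by module version:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "12.2.0"

  cluster_name = "example"
  vpc_id       = "vpc-00000000"
  subnets      = ["subnet-aaaa", "subnet-bbbb"]

  # "Worker nodes" here: self-managed EC2 autoscaling groups
  # that you join to the cluster yourself.
  worker_groups = [
    {
      instance_type        = "t3.medium"
      asg_desired_capacity = 2
    },
  ]

  # "Node group": an EKS managed node group, where AWS manages
  # the lifecycle of the underlying instances.
  node_groups = {
    managed = {
      desired_capacity = 2
      instance_type    = "t3.medium"
    }
  }
}
```

My understanding is that `worker_groups` produces self-managed autoscaling groups while `node_groups` produces EKS managed node groups, but I'd like the practical difference confirmed.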
I am trying to use https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/12.2.0(Terraform AWS EKS provider)
What is the difference between worker nodes and node group?
The Terraform docs, for some weird reason, do not explain what "Error: Cycle" means. I've looked everywhere, but there is no mention of it in the official docs. (It turns out it is a well-known term, a circular dependency, that someone apparently renamed thinking it would make them sound cool...)
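For anyone else searching: a minimal config that reproduces the message is two resources whose arguments reference each other. The `null_resource` pair below is purely illustrative:

```hcl
# Each resource depends on the other's id, so Terraform cannot
# order their creation and reports a dependency cycle.
resource "null_resource" "a" {
  triggers = {
    peer = "${null_resource.b.id}"
  }
}

resource "null_resource" "b" {
  triggers = {
    peer = "${null_resource.a.id}"
  }
}
```

Running `terraform plan` against this fails with `Error: Cycle`, since neither resource can be created before the other.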
Here is a Terraform script I lifted from this repo:
provider "aws" {
  region  = "${var.aws_region}"
  profile = "${var.aws_profile}"
}

##----------------------------
# Get VPC Variables
##----------------------------

#-- Get VPC ID
data "aws_vpc" "selected" {
  tags = {
    Name = "${var.name_tag}"
  }
}

#-- Get Public Subnet List
data "aws_subnet_ids" "selected" {
  vpc_id = "${data.aws_vpc.selected.id}"

  tags = {
    Tier = "public"
  }
}

#--- Gets Security group with tag specified by var.name_tag
data "aws_security_group" "selected" {
  tags = {
    Name = "${var.name_tag}*"
  }
}

#--- Creates SSH key to provision server
module "ssh_key_pair" {
  source                = "git::https://github.com/cloudposse/terraform-aws-key-pair.git?ref=tags/0.3.2"
  namespace             = "example"
  stage                 = "dev"
  name                  = "${var.key_name}"
  ssh_public_key_path   = "${path.module}/secret"
  generate_ssh_key      = "true"
  private_key_extension = ".pem"
  public_key_extension  = ".pub"
}

#-- Grab the latest AMI built with packer - windows2016.json
data "aws_ami" "Windows_2016" {
  owners      = ["amazon", "microsoft"]
  most_recent = true

  filter {
    name   = "is-public"
    values = ["false"]
  }

  filter {
    name   = "name"
    values = ["windows2016Server*"]
  }
}

#-- Sets the user data script
data "template_file" "user_data" {
  template = "/scripts/user_data.ps1"
}

#---- Test Development Server
resource "aws_instance" "this" {
  ami                  = "${data.aws_ami.Windows_2016.image_id}"
  instance_type        = "${var.instance}"
  key_name             = "${module.ssh_key_pair.key_name}"
  subnet_id            = "${data.aws_subnet_ids.selected.ids[1]}"
  security_groups      = ["${data.aws_security_group.selected.id}"]
  user_data            = "${data.template_file.user_data.rendered}"
  iam_instance_profile = "${var.iam_role}"
  get_password_data    = "true"

  root_block_device {
    volume_type           = "${var.volume_type}"
    volume_size           = "${var.volume_size}"
    delete_on_termination = "true"
  }

  tags {
    "Name" = "NEW_windows2016"
    "Role" = "Dev"
  }

  #--- Copy ssh keys to S3 Bucket
  provisioner "local-exec" {
    command = "aws s3 cp ${path.module}/secret s3://PATHTOKEYPAIR/ --recursive"
  }

  #--- Deletes keys on destroy
  provisioner "local-exec" {
    when    = "destroy"
    command = "aws s3 rm s3://PATHTOKEYPAIR/${module.ssh_key_pair.key_name}.pem"
  }

  provisioner "local-exec" {
    when    = "destroy"
    command = "aws s3 rm s3://PATHTOKEYPAIR/${module.ssh_key_pair.key_name}.pub"
  }
}
When I run terraform plan
I get this error message:
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.template_file.user_data: Refreshing state...
Error: Error refreshing state: 1 error(s) occurred:
* provider.aws: error validating provider credentials: error calling sts:GetCallerIdentity: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I have existing infrastructure in Terraform and have been using it for a while. Recently I swapped the AWS credentials on my local laptop (the creds stored in ~/.aws/credentials) and Terraform stopped working until I set those credentials back.
The problem is that I'm declaring the creds in the Terraform source itself, but Terraform doesn't seem to be using them at all.
terraform {
  backend "s3" {
    bucket  = "example_tf_states"
    key     = "global/vpc/us_east_1/example_state.tfstate"
    encrypt = true
    region  = "us-east-1"
  }
}

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

variable "access_key" {
  default = "<hidden_for_stack_exchange_post>"
}

variable "secret_key" {
  default = "<hidden_for_stack_exchange_post>"
}

variable "region" {
  default = "us-east-1"
}
The access key's permissions are 100% good. I am using the same access key ID and secret key both for the aws configure settings that go into ~/.aws/credentials and in the above Terraform variable declarations.
Everything works fine as long as the creds are in ~/.aws/credentials, but as soon as the OS-level credentials are gone (i.e. rm ~/.aws/credentials) I get the following when trying to run Terraform operations, such as terraform plan:
Failed to load backend:
Error configuring the backend "s3": No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
Please update the configuration in your Terraform files to fix this error.
If you'd like to update the configuration interactively without storing
the values in your configuration, run "terraform init".
If I re-populate ~/.aws/credentials by running aws configure, everything works fine again.
I'm not understanding: if my provider block explicitly declares the credentials to use inside the Terraform source code, why does my OS-level AWS configuration matter at all?
How can I make Terraform use only the creds defined in my Terraform configuration and ignore what's in my OS user profile?
Edit: it's Terraform v0.11.7.
Edit: please note that I'm trying to work out why the statically declared creds in the provider declaration are not being used. I'm not looking for alternative methods or workarounds. Thanks.
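To illustrate where I think the mismatch might be: the error mentions the backend "s3", and as far as I can tell the backend is configured separately from the provider block and never sees its credentials. A sketch of what static creds on the backend itself would look like (the keys are placeholders; backend blocks cannot reference variables, so they would have to be literals or passed via terraform init -backend-config):

```hcl
terraform {
  backend "s3" {
    bucket  = "example_tf_states"
    key     = "global/vpc/us_east_1/example_state.tfstate"
    encrypt = true
    region  = "us-east-1"

    # The S3 backend authenticates independently of provider "aws",
    # so provider-level access_key/secret_key are not used here.
    access_key = "<hidden_for_stack_exchange_post>"
    secret_key = "<hidden_for_stack_exchange_post>"
  }
}
```

I'd still like to understand whether that separation is actually why removing ~/.aws/credentials breaks things.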
I am trying to use a nested loop in Terraform. I have two list variables, list_of_allowed_accounts and list_of_images, and I'm looking to iterate over list_of_images and, within that, over list_of_allowed_accounts.
Here is my Terraform code:
variable "list_of_allowed_accounts" {
  type    = "list"
  default = ["111111111", "2222222"]
}

variable "list_of_images" {
  type    = "list"
  default = ["alpine", "java", "jenkins"]
}

data "template_file" "ecr_policy_allowed_accounts" {
  template = "${file("${path.module}/ecr_policy.tpl")}"

  vars {
    count      = "${length(var.list_of_allowed_accounts)}"
    account_id = "${element(var.list_of_allowed_accounts, count.index)}"
  }
}

resource "aws_ecr_repository_policy" "repo_policy_allowed_accounts" {
  count      = "${length(var.list_of_images)}"
  repository = "${element(aws_ecr_repository.images.*.id, count.index)}"
  count      = "${length(var.list_of_allowed_accounts)}"
  policy     = "${data.template_file.ecr_policy_allowed_accounts.rendered}"
}
This is the bash equivalent of what I am trying to do:
for image in alpine java jenkins
do
  for account_id in 111111111 2222222
  do
    # call the template here using variables 'account_id' and 'image'
  done
done
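The closest I can see to flattening those two loops in Terraform 0.11 is count arithmetic over the cross product. This is my untested sketch; the index math and the extra image variable in the template are my own assumptions:

```hcl
# One rendered template per (image, account) pair:
#   count.index / length(accounts) -> image index (integer division)
#   count.index % length(accounts) -> account index
data "template_file" "ecr_policy_allowed_accounts" {
  count    = "${length(var.list_of_images) * length(var.list_of_allowed_accounts)}"
  template = "${file("${path.module}/ecr_policy.tpl")}"

  vars {
    account_id = "${element(var.list_of_allowed_accounts, count.index % length(var.list_of_allowed_accounts))}"
    image      = "${element(var.list_of_images, count.index / length(var.list_of_allowed_accounts))}"
  }
}

resource "aws_ecr_repository_policy" "repo_policy_allowed_accounts" {
  count      = "${length(var.list_of_images) * length(var.list_of_allowed_accounts)}"
  repository = "${element(aws_ecr_repository.images.*.id, count.index / length(var.list_of_allowed_accounts))}"
  policy     = "${element(data.template_file.ecr_policy_allowed_accounts.*.rendered, count.index)}"
}
```

One caveat: an ECR repository holds a single policy, so creating one policy resource per (image, account) pair may overwrite itself; the template may instead need to list every allowed account for a given image.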