AWS, IPv6 dual stacking, and Terraform


My server had been running for 2 years, and it was overdue for a rebuild. I decided it was time to revisit AWS, and this time, try setting up dual stacking from the start.

Which meant I needed to figure out how to get IPv6 working on AWS, and just to make things more interesting, try to do it entirely with Terraform.

These are my experiences, notes, and the pitfalls I hit.

Just give me the code

My Terraform files are on GitHub, go wild: github.com/kyl191/terraform-aws-ipv6

It also has examples of using the Terraform random generators, random selection, and for_each usage.

What is in the UI isn’t in Terraform, and other complaints

  1. The terraform provider doesn’t support adding a v6 block to the default VPC (yet).
  2. Adding a v6 CIDR block to the default VPC doesn’t add an IPv6 route to the route table, unlike the IPv4 route (added at VPC creation time).
    I’ve only found one piece of documentation that calls out needing to add the route yourself.
    The default VPC docs have a link on adding IPv6, which just covers adding the block, not the route.
  3. Security groups can only allow or deny ICMPv6 packets, even through the console.
    In comparison, you can be very selective about which ICMPv4 types and codes are allowed. Dropping all ICMPv6 packets breaks Path MTU Discovery, which IPv6 relies on since routers don’t fragment IPv6 packets, so it’s a tradeoff.
Screenshot of the Security Groups console showing some options for ICMPv4
  4. The AWS API for getting the default EBS encryption key will return the alias if you’re using the default AWS managed key, not the actual key ARN. The EC2 instance description will return the ARN, so Terraform will think the instance needs to be rebuilt because the state no longer matches.

Solutions/Workarounds

  1. I ended up creating my own VPC and requested a v6 block at creation.
    Alternatively I might have been able to import the default VPC into my Terraform state, and modify it there, but that is excessively complex compared to just creating a new one.
  2. Creating a VPC (for solution 1) also creates an empty route table, so adding the IPv4 and IPv6 routes is necessary anyway.
    Alternative: Add the v6 block to the default VPC, still add the v6 route to the route table.
  3. Shrug, and hope nothing bad happens over ICMPv6?
  4. Make Terraform ignore changes on the KMS key id used by the instance.
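As a sketch, workarounds 1, 2, and 4 look roughly like this in Terraform (resource names, CIDRs, and the AMI are placeholders; the exact `ignore_changes` path depends on how the instance’s volume is declared):

```hcl
# Workaround 1: create my own VPC and request an IPv6 block at creation time.
resource "aws_vpc" "main" {
  cidr_block                       = "10.0.0.0/16"
  assign_generated_ipv6_cidr_block = true
}

resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id
}

# Workaround 2: the new route table starts empty, so add both the
# IPv4 and IPv6 default routes explicitly.
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }

  route {
    ipv6_cidr_block = "::/0"
    gateway_id      = aws_internet_gateway.gw.id
  }
}

# Workaround 4: stop Terraform from rebuilding the instance when the
# KMS key alias and ARN disagree.
resource "aws_instance" "server" {
  ami           = "ami-00000000" # placeholder
  instance_type = "t3.micro"

  lifecycle {
    ignore_changes = [root_block_device[0].kms_key_id]
  }
}
```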

(Un?)necessary tricks

I loathe magic constants, so I used random generators to choose a v4 octet for the VPC, and to choose which AZ to run in.
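The random pieces look something like this, using the `random_integer` and `random_shuffle` resources (names are my own):

```hcl
# Pick the VPC's second octet at random instead of hardcoding it.
resource "random_integer" "octet" {
  min = 0
  max = 255
}

# Shuffle the region's available AZs and take the first one.
data "aws_availability_zones" "available" {
  state = "available"
}

resource "random_shuffle" "az" {
  input        = data.aws_availability_zones.available.names
  result_count = 1
}

locals {
  vpc_cidr  = "10.${random_integer.octet.result}.0.0/16"
  chosen_az = random_shuffle.az.result[0]
}
```

Both random resources keep their result in state, so the octet and AZ stay stable across subsequent plans.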

I used for_each to create subnets in every AZ instead of manually maintaining a list of AZs or just creating one or two subnets.
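A sketch of the `for_each` approach, carving per-AZ subnets out of the VPC’s v4 and v6 blocks with `cidrsubnet()` (assumes a VPC resource named `aws_vpc.main`):

```hcl
data "aws_availability_zones" "available" {
  state = "available"
}

# One subnet per AZ, numbered by the AZ's position in the list.
resource "aws_subnet" "per_az" {
  for_each = toset(data.aws_availability_zones.available.names)

  vpc_id            = aws_vpc.main.id
  availability_zone = each.key

  cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8,
    index(data.aws_availability_zones.available.names, each.key))
  ipv6_cidr_block = cidrsubnet(aws_vpc.main.ipv6_cidr_block, 8,
    index(data.aws_availability_zones.available.names, each.key))

  assign_ipv6_address_on_creation = true
}
```

Because `for_each` keys subnets by AZ name rather than list position, an AZ appearing or disappearing only touches that one subnet instead of renumbering the rest.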

I have SSH publicly exposed, but only allow IPv6 traffic to it to reduce the key scanning noise in my logs.
I know it’s bad practice, but I’m not about to run a VPN for a single instance, and I’ve got key auth as an additional line of defense.
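The IPv6-only SSH rule is just a security group ingress with no IPv4 CIDRs listed, so no IPv4 source ever matches (a minimal sketch, group name my own):

```hcl
# SSH reachable over IPv6 only: omit cidr_blocks, allow just ::/0.
resource "aws_security_group" "ssh_v6_only" {
  name   = "ssh-v6-only"
  vpc_id = aws_vpc.main.id

  ingress {
    description      = "SSH over IPv6 only"
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}
```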

Weirdness

Some weird/unexpected things, written up as footnotes below: the S3 and DynamoDB VPC Endpoints don’t appear to have IPv6 ranges, and t3.micro instances show up as free tier eligible even in regions that have t2.micro.

Closing

Success! I have an instance successfully running a dual IPv4/IPv6 stack. I’m mostly impressed – AWS is notorious for releasing a half-baked feature and waiting to see how it’s used before continuing development.

Considering that the push for IPv6 is primarily client-to-server, having it work within a VPC is not something I would have thought was a business priority for many (any?) companies. That’s probably why not much else supports IPv6. Still, props to the AWS Networking folks who have at least laid the building blocks for broader support.

I’m pretty sure this has precisely zero impact for anyone using AWS as anything more than a generic server host (I haven’t found any service except ALB and the classic ELB supporting dual-stacked addresses). But AWS now has an additional datapoint for their customer demand – at least one more account is using IPv6.

My #awswishlist

Nothing is perfect, right? Here’s what I’d like AWS to work on:

  1. Add IPv6 ranges to S3 and DynamoDB endpoints
  2. Add ICMPv6 type/code support to security groups
  3. Fix the GetEbsDefaultKmsKeyId API to return the ARN of the key once it’s created, instead of the alias
  4. Support Ed25519 keypairs, not just RSA keypairs
  5. Support Elastic IPv6 addresses
  6. Support IPv6 on RDS

Footnote: That VPC Endpoint weirdness

I suspect it’s because VPCs don’t support IPv6 by default, so it wasn’t thought of. It’s mainly notable because AWS suggests using VPC Endpoints to avoid data to/from S3 traversing a NAT gateway. I’m not using a NAT gateway, and data to/from S3 is free in the same region, so I enabled it.

At the very least, the VPC console suggests that the S3 and DynamoDB endpoints don’t have v6 blocks:

Screenshot of the route table showing VPC endpoints with IPv4 addresses listed
Sanitized excerpt of my route table, note the IPv4 addresses on the VPC Endpoint destinations
Screenshot of the route table filtering for IPv6 routes only, with the VPC endpoints no longer listed
Restricting it to just v6 routes only drops the VPC endpoints entirely

AWS publishes its IP ranges. The IPv4 range matches the one listed on the route table, but the IPv6 ranges aren’t included.

$ jq -r '.prefixes[] | select(.region=="us-west-2")|select(.service=="S3")' < ip-ranges.json
{
  "ip_prefix": "52.218.128.0/17",
  "region": "us-west-2",
  "service": "S3",
  "network_border_group": "us-west-2"
}
$ jq -r '.ipv6_prefixes[] | select(.region=="us-west-2")|select(.service=="S3")' < ip-ranges.json
{
  "ipv6_prefix": "2600:1fa0:4000::/40",
  "region": "us-west-2",
  "service": "S3",
  "network_border_group": "us-west-2"
}
{
  "ipv6_prefix": "2600:1ffa:4000::/40",
  "region": "us-west-2",
  "service": "S3",
  "network_border_group": "us-west-2"
}
{
  "ipv6_prefix": "2600:1ff8:4000::/40",
  "region": "us-west-2",
  "service": "S3",
  "network_border_group": "us-west-2"
}
{
  "ipv6_prefix": "2600:1ff9:4000::/40",
  "region": "us-west-2",
  "service": "S3",
  "network_border_group": "us-west-2"
}

Footnote: t3.micros are free tier eligible?

The free tier FAQ says t3.micro is free tier eligible only in regions that don’t have t2.micro, but the free tier limits docs say “any combination of t1.micro, t2.micro and t3.micro instances”.

Bonus doc weirdness: The free tier limits example says instances are priced hourly.

I pulled a Corey Quinn and tested it:

Screenshot of my bill

$0.00 per Linux t3.micro instance-hour (or partial hour) under monthly free tier

Alright then. Maybe now that I’ve posted about it it’ll be fixed, but I hope not.
