amazing of an idea it is
This commit is contained in:
parent
15d4d2c082
commit
e2b2a99155
54 changed files with 1872 additions and 0 deletions
134
terraform/README.md
Normal file
@ -0,0 +1,134 @@
# Infrastructure Terraform Repository

This repository manages the permanent infrastructure behind the assessment model. Key components include:

- Database
- Blob storage (S3 buckets)

# Terraform AWS Deployment

This project uses Terraform to create infrastructure in AWS. This README covers the steps necessary to deploy the resources.

## Prerequisites

- AWS CLI v2 installed and configured with profiles
- Terraform v1.2.0 or higher

## Deploying

The deployment process can be broken down into the following steps:

1. Initialization: This downloads the necessary provider plugins for Terraform.

```bash
terraform init
```

2. Workspace setup: Before you deploy, create a workspace for the environment. For example, if you're setting up the development environment:

```bash
terraform workspace new dev
```

3. Planning: This step creates an execution plan, showing what changes Terraform will make to reach the desired state.

```bash
terraform plan -var-file=dev.tfvars
```

Note: replace `dev.tfvars` with the appropriate variables file for your environment. For a production deployment, this would be `prod.tfvars`.
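The variables files themselves are plain HCL assignments. A minimal illustrative `dev.tfvars` might look like the following (the variable name here is an assumption; check this stack's `variables.tf` for what it actually declares):

```
# Illustrative only: use the variables this stack actually declares
stage = "dev"
```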
4. Apply: This step applies the planned changes to reach the desired infrastructure state.

```bash
terraform apply -var-file=dev.tfvars
```

Note: replace `dev.tfvars` with the appropriate variables file.

## AWS Profiles

This project uses AWS profiles for managing different environments. Ensure you have your profiles set up in your AWS credentials file (`~/.aws/credentials`). Here is a sample:

```ini
[DevAdmin]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
```

In the given example, `DevAdmin` is the profile for the development environment. Replace `YOUR_ACCESS_KEY` and `YOUR_SECRET_KEY` with your actual AWS credentials.

## Switching Environments

If you need to switch environments (e.g., from development to production), use the following command:

```bash
terraform workspace select prod
```

Remember to update your variables file accordingly when planning and applying changes (`-var-file=prod.tfvars` for production, for example).

# Deployment with GitHub Actions

Deployment has been automated with GitHub Actions. To trigger a new deployment, open a pull request against either the `dev` or `prod` branch, depending on the deployment you wish to trigger. Once that pull request is merged, it triggers a workflow in GitHub Actions that performs the deployment steps described above.
# AWS Route 53 and SSL Certificate Creation

This document explains the Terraform script used to create AWS Route 53 resources and an SSL certificate.

## Code Explanation

The code can be found in the route53 module in `modules/route53/main.tf`.

- First, an AWS Route 53 hosted zone is created for your domain with the `aws_route53_zone` resource.

- Next, the `aws_acm_certificate` resource requests a wildcard SSL certificate for your domain. This certificate will secure your domain and all its subdomains. The `validation_method` is set to DNS, which means that Amazon will verify that you own the domain by checking DNS records.

- After requesting the certificate, we use the `aws_route53_record` resource to create a DNS validation record in our Route 53 hosted zone. This record is used by Amazon to verify our ownership of the domain.

- With the `aws_acm_certificate_validation` resource, we tell AWS to use our validation DNS record to validate the certificate.

- An AWS Route 53 CAA record is also created with the `aws_route53_record` resource. This record specifies that Amazon is authorized to issue certificates for our domain.

- Finally, the ARN of the certificate is stored in AWS SSM Parameter Store using the `aws_ssm_parameter` resource, so that it can be accessed by other resources.
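Other stacks can read the stored ARN back with a data source. A minimal sketch (the parameter name matches the `aws_ssm_parameter` resource in `modules/route53/main.tf`; where you pass the value is up to the consuming resource):

```
data "aws_ssm_parameter" "ssl_certificate_arn" {
  name = "/ssl_certificate_arn"
}

# The certificate ARN is then available as:
#   data.aws_ssm_parameter.ssl_certificate_arn.value
```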
## Initial Verification

The first time you run this script, Amazon will need to verify your ownership of the domain before issuing the certificate. To do this, Amazon will check for the presence of the DNS validation record that was created by the script.

When using Google Domains, follow these steps to add the validation record:

1. Go to the Google Domains website and sign in.
2. Select your domain.
3. Open the DNS settings.
4. In the 'Custom resource records' section, add a new record with the following details:

- Name: <CNAME name>
- Type: CNAME
- TTL: 1H
- Data: <CNAME value>

After you've added the record, Amazon will automatically validate your domain once it detects the new DNS record. This process can take a few minutes to several hours, depending on DNS propagation times.

Note: Please replace the Name and Data values with the ones provided by the script's output.
### Certificate Renewal

Amazon automatically renews managed certificates 60 days before they expire. This means that, once your domain is validated and the certificate is issued, Amazon handles renewal of the certificate for you. You will not need to manually renew the certificate or repeat the validation process.

## Future TODOs

- At the moment, the database is publicly accessible. We could add an inbound rule to a security group to restrict access to the IP of the Vercel application in prod, which would look something like this:

```
resource "aws_security_group_rule" "allow_specific_ip" {
  type        = "ingress"
  from_port   = 5432
  to_port     = 5432
  protocol    = "tcp"
  cidr_blocks = ["your.vercel.app.ip.address/32"]
  # tolist() because vpc_security_group_ids is a set and cannot be indexed directly
  security_group_id = tolist(aws_db_instance.default.vpc_security_group_ids)[0]
}
```

- Set up prod!
- Automate deployments
50
terraform/lambda/_template/README.md
Normal file
@ -0,0 +1,50 @@
## Checklist for adding a new Lambda

### 1. Create the Lambda scaffold
- Copy the template:

  `cp -r lambda/_template lambda/<lambda_name>`

---

### 2. Add infrastructure prerequisites (shared stack)
- Add a new ECR repository in:

  `infrastructure/terraform/shared/main.tf`

- Create a PR to deploy this to `main`, then `dev`, in order to deploy the shared stack

- Verify the ECR repository exists in AWS
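The new repository entry can reuse the `container_registry` module defined later in this diff. A minimal sketch (the module label, `name`, and relative `source` path are illustrative, not taken from `shared/main.tf`):

```
module "my_lambda_ecr" {
  source = "../modules/container_registry"

  name  = "my-lambda" # base name; the module appends "-${var.stage}"
  stage = var.stage
}
```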
---

### 3. Add Docker build configuration
- Create a `Dockerfile` for the Lambda
- Verify the Dockerfile path and build context
- Add a new image build job in `deploy_terraform.yml` using `_build_image.yml`
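For a container-image Lambda, the Dockerfile typically starts from an AWS Lambda base image. A minimal sketch, assuming a Python handler (the base image tag, file names, and handler name are illustrative, not part of this repo):

```dockerfile
# Illustrative: AWS Lambda Python base image
FROM public.ecr.aws/lambda/python:3.12

# Install dependencies into the image
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the handler code
COPY app.py ${LAMBDA_TASK_ROOT}

# "module.function" of the Lambda handler
CMD ["app.handler"]
```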
---

### 4. Wire the Lambda deploy job (CI)
- Add a deploy job using `_deploy_lambda.yml`
- Ensure the deploy job depends on the image build job

---

### 5. Deploy
- Push changes to GitHub
- CI will:
  1. Build and push the Docker image
  2. Deploy the Lambda
  3. Verify everything deployed. Good things to check:
     - ECR with image
     - SQS
     - Trigger SQS
     - CloudWatch logs

---

### 6. Clean up
1. Delete the template README if you used `cp -r`

---

## Please feel free to update this document to make it easier for the next person
63
terraform/lambda/_template/main.tf
Normal file
@ -0,0 +1,63 @@
# ==============================================================================
# TEMPLATE: Lambda Configuration with Optional S3 IAM Policy
# ==============================================================================
# Instructions:
# 1. Replace "REPLACE_ME" with your lambda name (e.g., "my-lambda-name")
# 2. Add any additional environment variables as needed
# 3. To attach S3 IAM policies from shared state:
#    - Uncomment the S3 policy attachment section below
#    - Update the policy_arn to match the output from shared/main.tf
#    - Available shared outputs (examples):
#      - data.terraform_remote_state.shared.outputs.condition_etl_s3_read_arn
#      - data.terraform_remote_state.shared.outputs.postcode_splitter_s3_read_arn
# 4. To create a NEW S3 policy:
#    - Add a new module "lambda_s3_policy" in shared/main.tf using the
#      s3_iam_policy module (see examples in shared/main.tf)
#    - Then reference it here using data.terraform_remote_state.shared.outputs
# ==============================================================================

data "terraform_remote_state" "shared" {
  backend = "s3"
  config = {
    bucket = "assessment-model-terraform-state"
    key    = "env:/${var.stage}/terraform.tfstate"
    region = "eu-west-2"
  }
}

module "lambda" {
  source = "../modules/lambda_with_sqs"

  name  = "REPLACE_ME" # e.g. "address2uprn"
  stage = var.stage

  image_uri = local.image_uri

  environment = {
    STAGE     = var.stage
    LOG_LEVEL = "info"
  }
}

# ======================================================================
# OPTIONAL: Attach S3 IAM policy to Lambda execution role
# ======================================================================
# Uncomment and configure the resource below to attach S3 permissions
#
# Example 1: Attach existing policy from shared state
# resource "aws_iam_role_policy_attachment" "lambda_s3_policy" {
#   role       = module.lambda.role_name
#   policy_arn = data.terraform_remote_state.shared.outputs.YOUR_POLICY_OUTPUT_NAME_arn
# }
#
# Example 2: Attach multiple policies
# resource "aws_iam_role_policy_attachment" "lambda_read_policy" {
#   role       = module.lambda.role_name
#   policy_arn = data.terraform_remote_state.shared.outputs.postcode_splitter_s3_read_arn
# }
#
# resource "aws_iam_role_policy_attachment" "lambda_write_policy" {
#   role       = module.lambda.role_name
#   policy_arn = data.terraform_remote_state.shared.outputs.another_policy_arn
# }
16
terraform/lambda/_template/provider.tf
Normal file
@ -0,0 +1,16 @@
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  backend "s3" {
    bucket = "REPLACE_ME"
    key    = "terraform.tfstate"
    region = "eu-west-2"
  }

  required_version = ">= 1.2.0"
}
27
terraform/lambda/_template/variables.tf
Normal file
@ -0,0 +1,27 @@
variable "lambda_name" {
  type        = string
  description = "Logical name of the lambda (e.g. address2uprn)"
}

variable "stage" {
  description = "Deployment stage (e.g. dev, prod)"
  type        = string
}

variable "ecr_repo_url" {
  type        = string
  description = "ECR repository URL (no tag, no digest)"
}

variable "image_digest" {
  type        = string
  description = "Image digest (sha256:...)"
}

locals {
  image_uri = "${var.ecr_repo_url}@${var.image_digest}"
}

output "resolved_image_uri" {
  value = local.image_uri
}
48
terraform/lambda/modules/lambda_with_sqs/main.tf
Normal file
@ -0,0 +1,48 @@
############################################
# IAM role
############################################
module "role" {
  source = "../../../modules/lambda_execution_role"
  name   = "${var.name}-lambda-${var.stage}"
}

output "role_name" {
  value = module.role.role_name
}

############################################
# SQS queue + DLQ
############################################
module "queue" {
  source = "../../../modules/sqs_queue"
  name   = "${var.name}-queue-${var.stage}"
}

############################################
# Lambda
############################################
module "lambda" {
  source = "../../../modules/lambda_service"

  name      = "${var.name}-${var.stage}"
  role_arn  = module.role.role_arn
  image_uri = var.image_uri

  timeout     = var.timeout
  memory_size = var.memory_size

  environment = var.environment
}

############################################
# SQS → Lambda trigger
############################################
module "sqs_trigger" {
  source = "../../../modules/lambda_sqs_trigger"

  lambda_arn       = module.lambda.lambda_arn
  lambda_role_name = module.role.role_name
  queue_arn        = module.queue.queue_arn

  batch_size = var.batch_size
}
12
terraform/lambda/modules/lambda_with_sqs/outputs.tf
Normal file
@ -0,0 +1,12 @@
output "lambda_arn" {
  value = module.lambda.lambda_arn
}

output "queue_arn" {
  value = module.queue.queue_arn
}

output "queue_url" {
  value = module.queue.queue_url
}
36
terraform/lambda/modules/lambda_with_sqs/variables.tf
Normal file
@ -0,0 +1,36 @@
variable "name" {
  type = string
}

variable "stage" {
  type = string
}

variable "image_uri" {
  type = string
}

variable "region" {
  type    = string
  default = "eu-west-2"
}

variable "timeout" {
  type    = number
  default = 60
}

variable "memory_size" {
  type    = number
  default = 1024
}

variable "environment" {
  type    = map(string)
  default = {}
}

variable "batch_size" {
  type    = number
  default = 10
}
89
terraform/lambda/postcodeSplitter/main.tf
Normal file
@ -0,0 +1,89 @@
data "terraform_remote_state" "shared" {
  backend = "s3"
  config = {
    bucket = "assessment-model-terraform-state"
    key    = "env:/${var.stage}/terraform.tfstate"
    region = "eu-west-2"
  }
}

data "aws_secretsmanager_secret_version" "db_credentials" {
  secret_id = "${var.stage}/assessment_model/db_credentials"
}

locals {
  db_credentials = jsondecode(data.aws_secretsmanager_secret_version.db_credentials.secret_string)
}

# Reference the existing address2UPRN Lambda outputs from address2uprn state
data "terraform_remote_state" "address2uprn" {
  backend = "s3"
  config = {
    bucket = "address2uprn-terraform-state"
    key    = "env:/${var.stage}/terraform.tfstate"
    region = "eu-west-2"
  }
}

module "lambda" {
  source = "../modules/lambda_with_sqs"

  name  = "postcode-splitter"
  stage = var.stage

  image_uri = local.image_uri

  environment = merge(
    {
      STAGE                           = var.stage
      LOG_LEVEL                       = "info"
      DB_USERNAME                     = local.db_credentials.db_assessment_model_username
      DB_PASSWORD                     = local.db_credentials.db_assessment_model_password
      GOOGLE_SOLAR_API_KEY            = "test"
      SAP_PREDICTIONS_BUCKET          = "test"
      CARBON_PREDICTIONS_BUCKET       = "test"
      HEAT_PREDICTIONS_BUCKET         = "test"
      HEATING_KWH_PREDICTIONS_BUCKET  = "test"
      HOTWATER_KWH_PREDICTIONS_BUCKET = "test"
      API_KEY                         = "test"
      ENVIRONMENT                     = "test"
      SECRET_KEY                      = "test"
      PLAN_TRIGGER_BUCKET             = "test"
      DATA_BUCKET                     = "test"
      EPC_AUTH_TOKEN                  = "test"
      ENGINE_SQS_URL                  = "test"
      ENERGY_ASSESSMENTS_BUCKET       = "test"
      ADDRESS2UPRN_QUEUE_URL          = data.terraform_remote_state.address2uprn.outputs.address2uprn_queue_url
      S3_BUCKET_NAME                  = data.terraform_remote_state.shared.outputs.retrofit_sap_data_bucket_name
    },
  )
}

# Attach S3 read policy to the Lambda execution role
resource "aws_iam_role_policy_attachment" "postcode_splitter_s3_read" {
  role       = module.lambda.role_name
  policy_arn = data.terraform_remote_state.shared.outputs.postcode_splitter_s3_read_arn
}

# Create SQS send policy for address2UPRN queue
module "postcode_splitter_sqs_policy" {
  source = "../../modules/general_iam_policy"

  policy_name        = "postcode-splitter-sqs-send-${var.stage}"
  policy_description = "Allow postcode-splitter Lambda to send messages to address2UPRN queue"

  actions = [
    "sqs:SendMessage"
  ]

  resources = [
    data.terraform_remote_state.address2uprn.outputs.address2uprn_queue_arn
  ]
}

# Attach SQS policy to the Lambda execution role
resource "aws_iam_role_policy_attachment" "postcode_splitter_sqs_send" {
  role       = module.lambda.role_name
  policy_arn = module.postcode_splitter_sqs_policy.policy_arn
}
16
terraform/lambda/postcodeSplitter/provider.tf
Normal file
@ -0,0 +1,16 @@
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  backend "s3" {
    bucket = "postcode-splitter-terraform-state"
    key    = "terraform.tfstate"
    region = "eu-west-2"
  }

  required_version = ">= 1.2.0"
}
35
terraform/lambda/postcodeSplitter/variables.tf
Normal file
@ -0,0 +1,35 @@
variable "lambda_name" {
  type        = string
  description = "Logical name of the lambda (e.g. address2uprn)"
}

variable "stage" {
  description = "Deployment stage (e.g. dev, prod)"
  type        = string
}

variable "ecr_repo_url" {
  type        = string
  description = "ECR repository URL (no tag, no digest)"
}

variable "image_digest" {
  type        = string
  description = "Image digest (sha256:...)"
}

locals {
  image_uri = "${var.ecr_repo_url}@${var.image_digest}"
}

output "resolved_image_uri" {
  value = local.image_uri
}
65
terraform/modules/cloudfront/main.tf
Normal file
@ -0,0 +1,65 @@
resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = var.bucket_domain_name
    origin_id   = "S3-${var.bucket_name}"

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path
    }
  }

  enabled = true

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "S3-${var.bucket_name}"
    viewer_protocol_policy = "redirect-to-https"
    compress               = true

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    min_ttl     = 0
    default_ttl = 86400
    max_ttl     = 31536000
  }

  price_class = "PriceClass_All"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

resource "aws_cloudfront_origin_access_identity" "oai" {
  comment = "OAI for ${var.bucket_name}"
}

resource "aws_s3_bucket_policy" "bucket_policy" {
  bucket = var.bucket_id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          # iam_arn resolves to "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <id>"
          AWS = aws_cloudfront_origin_access_identity.oai.iam_arn
        }
        Action   = "s3:GetObject"
        Resource = "${var.bucket_arn}/*"
      },
    ]
  })
}
24
terraform/modules/cloudfront/variables.tf
Normal file
@ -0,0 +1,24 @@
variable "bucket_name" {
  description = "The name of the bucket"
  type        = string
}

variable "stage" {
  description = "The deployment stage"
  type        = string
}

variable "bucket_id" {
  description = "The ID of the S3 bucket"
  type        = string
}

variable "bucket_arn" {
  description = "The ARN of the S3 bucket"
  type        = string
}

variable "bucket_domain_name" {
  description = "The regional domain name of the S3 bucket"
  type        = string
}
30
terraform/modules/container_registry/main.tf
Normal file
@ -0,0 +1,30 @@
resource "aws_ecr_repository" "this" {
  name = "${var.name}-${var.stage}"

  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }
}

resource "aws_ecr_lifecycle_policy" "this" {
  repository = aws_ecr_repository.this.name

  policy = jsonencode({
    rules = [
      {
        rulePriority = 1
        description  = "Expire old images"
        selection = {
          tagStatus   = "any"
          countType   = "imageCountMoreThan"
          countNumber = var.retain_count
        }
        action = {
          type = "expire"
        }
      }
    ]
  })
}
11
terraform/modules/container_registry/outputs.tf
Normal file
@ -0,0 +1,11 @@
output "repository_name" {
  value = aws_ecr_repository.this.name
}

output "repository_url" {
  value = aws_ecr_repository.this.repository_url
}

output "repository_arn" {
  value = aws_ecr_repository.this.arn
}
15
terraform/modules/container_registry/variables.tf
Normal file
@ -0,0 +1,15 @@
variable "name" {
  description = "Base name of the repository (without stage)"
  type        = string
}

variable "stage" {
  description = "Deployment stage (e.g. dev, prod)"
  type        = string
}

variable "retain_count" {
  description = "Number of images to retain"
  type        = number
  default     = 10
}
33
terraform/modules/ecr/main.tf
Normal file
@ -0,0 +1,33 @@
# This ECR module works for things deployed by Serverless.
# TODO: unify ecr and container_registry into one module

resource "aws_ecr_repository" "my_repository" {
  name                 = var.ecr_name
  image_tag_mutability = "MUTABLE"
  # Allows overwriting image tags; change to IMMUTABLE if you want to prevent overwriting

  image_scanning_configuration {
    scan_on_push = true
  }
}

resource "aws_ecr_lifecycle_policy" "my_repository_policy" {
  repository = aws_ecr_repository.my_repository.name

  policy = jsonencode({
    rules = [
      {
        rulePriority = 1
        description  = "Retain only the last 10 images"
        selection = {
          tagStatus   = "any"
          countType   = "imageCountMoreThan"
          countNumber = 10
        }
        action = {
          type = "expire"
        }
      }
    ]
  })
}
10
terraform/modules/ecr/outputs.tf
Normal file
@ -0,0 +1,10 @@
output "ecr_repository_name" {
  description = "Name of the ECR repo in AWS"
  value       = aws_ecr_repository.my_repository.name
}

output "ecr_repository_url" {
  description = "Full ECR repository URL"
  value       = aws_ecr_repository.my_repository.repository_url
}
4
terraform/modules/ecr/variables.tf
Normal file
@ -0,0 +1,4 @@
variable "ecr_name" {
  description = "The name for the ECR repository"
  type        = string
}
21
terraform/modules/general_iam_policy/main.tf
Normal file
@ -0,0 +1,21 @@
# IAM Policy with dynamic actions and resources
resource "aws_iam_policy" "policy" {
  name        = var.policy_name
  description = var.policy_description

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      merge(
        {
          Effect   = "Allow"
          Action   = var.actions
          Resource = var.resources
        },
        var.conditions != null ? { Condition = var.conditions } : {}
      )
    ]
  })

  tags = var.tags
}
9
terraform/modules/general_iam_policy/outputs.tf
Normal file
@ -0,0 +1,9 @@
output "policy_arn" {
  value       = aws_iam_policy.policy.arn
  description = "ARN of the created IAM policy"
}

output "policy_name" {
  value       = aws_iam_policy.policy.name
  description = "Name of the created IAM policy"
}
32
terraform/modules/general_iam_policy/variables.tf
Normal file
@ -0,0 +1,32 @@
variable "policy_name" {
  description = "Name of the IAM policy"
  type        = string
}

variable "policy_description" {
  description = "Description of the IAM policy"
  type        = string
  default     = ""
}

variable "actions" {
  description = "List of IAM actions allowed by this policy"
  type        = list(string)
}

variable "resources" {
  description = "List of AWS resources this policy applies to"
  type        = list(string)
}

variable "conditions" {
  description = "Optional IAM policy conditions"
  type        = any
  default     = null
}

variable "tags" {
  description = "Tags to apply to the policy"
  type        = map(string)
  default     = {}
}
21
terraform/modules/lambda_execution_role/main.tf
Normal file
@ -0,0 +1,21 @@
data "aws_iam_policy_document" "assume" {
  statement {
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
    actions = ["sts:AssumeRole"]
  }
}

resource "aws_iam_role" "this" {
  name               = var.name
  assume_role_policy = data.aws_iam_policy_document.assume.json
}

resource "aws_iam_role_policy_attachment" "basic_logs" {
  role       = aws_iam_role.this.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
7
terraform/modules/lambda_execution_role/outputs.tf
Normal file
@ -0,0 +1,7 @@
output "role_arn" {
  value = aws_iam_role.this.arn
}

output "role_name" {
  value = aws_iam_role.this.name
}
4
terraform/modules/lambda_execution_role/variables.tf
Normal file
@ -0,0 +1,4 @@
variable "name" {
  description = "IAM role name for the Lambda execution role"
  type        = string
}
15
terraform/modules/lambda_service/main.tf
Normal file
@ -0,0 +1,15 @@
resource "aws_lambda_function" "this" {
  function_name = var.name
  role          = var.role_arn

  package_type = "Image"
  image_uri    = var.image_uri

  timeout     = var.timeout
  memory_size = var.memory_size
  publish     = true

  environment {
    variables = var.environment
  }
}
3
terraform/modules/lambda_service/outputs.tf
Normal file
@ -0,0 +1,3 @@
output "lambda_arn" {
  value = aws_lambda_function.this.arn
}
18
terraform/modules/lambda_service/variables.tf
Normal file
@ -0,0 +1,18 @@
variable "name" { type = string }
variable "role_arn" { type = string }
variable "image_uri" { type = string }

variable "timeout" {
  type    = number
  default = 30
}

variable "memory_size" {
  type    = number
  default = 512
}

variable "environment" {
  type    = map(string)
  default = {}
}
23
terraform/modules/lambda_sqs_trigger/main.tf
Normal file
@ -0,0 +1,23 @@
resource "aws_lambda_event_source_mapping" "this" {
  event_source_arn = var.queue_arn
  function_name    = var.lambda_arn
  batch_size       = var.batch_size
  enabled          = true
}

resource "aws_iam_role_policy" "allow_sqs" {
  role = var.lambda_role_name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ]
      Resource = var.queue_arn
    }]
  })
}
8
terraform/modules/lambda_sqs_trigger/variables.tf
Normal file
@ -0,0 +1,8 @@
variable "lambda_arn" { type = string }
variable "lambda_role_name" { type = string }
variable "queue_arn" { type = string }

variable "batch_size" {
  type    = number
  default = 10
}
60
terraform/modules/route53/main.tf
Normal file
@ -0,0 +1,60 @@
resource "aws_route53_zone" "my_hosted_zone" {
  name = var.domain_name
}

# Request an SSL certificate for the domain
resource "aws_acm_certificate" "my_certificate_request" {
  provider          = aws.aws_use1
  domain_name       = "*.${var.domain_name}"
  validation_method = "DNS"

  tags = {
    Name : var.domain_name
  }

  lifecycle {
    create_before_destroy = true
  }
}

# Create a DNS record to prove that we own the domain
# for_each syntax as discussed here:
# https://github.com/hashicorp/terraform-provider-aws/issues/10098#issuecomment-663562342
resource "aws_route53_record" "my_validation_record" {
  zone_id = aws_route53_zone.my_hosted_zone.zone_id
  for_each = {
    for dvo in aws_acm_certificate.my_certificate_request.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }
  name    = each.value.name
  records = [each.value.record]
  type    = each.value.type
  ttl     = 60
}

resource "aws_acm_certificate_validation" "my_certificate_validation" {
  provider                = aws.aws_use1
  certificate_arn         = aws_acm_certificate.my_certificate_request.arn
  validation_record_fqdns = [for record in aws_route53_record.my_validation_record : record.fqdn]
}

resource "aws_route53_record" "my_caa_record" {
  zone_id = aws_route53_zone.my_hosted_zone.zone_id
  name    = var.domain_name
  type    = "CAA"
  records = [
    "0 issue \"amazon.com\"",
    "0 issuewild \"amazon.com\""
  ]
  ttl = 60
}

# Store the certificate ARN in SSM so that we can access it from other resources
resource "aws_ssm_parameter" "certificate_arn" {
  name  = "/ssl_certificate_arn"
  type  = "String"
  value = aws_acm_certificate.my_certificate_request.arn
}
terraform/modules/route53/variables.tf (new file)
@@ -0,0 +1,9 @@
variable "domain_name" {
  description = "Name of the base domain for Route53"
  type        = string
}

variable "api_url_prefix" {
  description = "Prefix for the API URL"
  type        = string
}
terraform/modules/s3/main.tf (new file)
@@ -0,0 +1,32 @@
resource "aws_s3_bucket" "bucket" {
  bucket = var.bucketname
  acl    = "private"

  # cors_rule {
  #   allowed_headers = ["Content-Type", "Authorization"]
  #   allowed_methods = ["PUT"]
  #   allowed_origins = var.allowed_origins
  #   expose_headers  = ["ETag"]
  #   max_age_seconds = 3000
  # }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_public_access_block" "block_public" {
  bucket                  = aws_s3_bucket.bucket.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
terraform/modules/s3/outputs.tf (new file)
@@ -0,0 +1,16 @@
output "bucket_name" {
  description = "The name of the S3 bucket"
  value       = aws_s3_bucket.bucket.bucket
}

output "bucket_id" {
  value = aws_s3_bucket.bucket.id
}

output "bucket_arn" {
  value = aws_s3_bucket.bucket.arn
}

output "bucket_domain_name" {
  value = aws_s3_bucket.bucket.bucket_regional_domain_name
}
terraform/modules/s3/variables.tf (new file)
@@ -0,0 +1,13 @@
variable "bucketname" {
  description = "The name of the bucket to create"
  type        = string
}

# Production and development need different allowed origins for CORS.
# This variable lets us generate pre-signed URLs; in development we
# want to be able to do so from localhost.
variable "allowed_origins" {
  description = "Allowed origins for CORS"
  type        = list(string)
}
terraform/modules/s3_iam_policy/main.tf (new file)
@@ -0,0 +1,31 @@
# Dynamically build the S3 resources list from bucket ARNs and resource paths
locals {
  # Generate full resource ARNs by combining bucket ARNs with resource paths
  resources = flatten([
    for bucket_arn in var.bucket_arns : [
      for path in var.resource_paths : "${bucket_arn}${path}"
    ]
  ])
}

# IAM policy with dynamic actions and resources
resource "aws_iam_policy" "s3_policy" {
  name        = var.policy_name
  description = var.policy_description

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      merge(
        {
          Effect   = "Allow"
          Action   = var.actions
          Resource = local.resources
        },
        var.conditions != null ? { Condition = var.conditions } : {}
      )
    ]
  })

  tags = var.tags
}
terraform/modules/s3_iam_policy/outputs.tf (new file)
@@ -0,0 +1,14 @@
output "policy_arn" {
  description = "ARN of the S3 IAM policy"
  value       = aws_iam_policy.s3_policy.arn
}

output "policy_name" {
  description = "Name of the S3 IAM policy"
  value       = aws_iam_policy.s3_policy.name
}

output "policy_id" {
  description = "ID of the S3 IAM policy"
  value       = aws_iam_policy.s3_policy.id
}
terraform/modules/s3_iam_policy/variables.tf (new file)
@@ -0,0 +1,42 @@
variable "policy_name" {
  description = "Name of the IAM policy"
  type        = string
}

variable "policy_description" {
  description = "Description of the IAM policy"
  type        = string
  default     = ""
}

variable "bucket_arns" {
  description = "List of S3 bucket ARNs to grant access to"
  type        = list(string)
}

variable "actions" {
  description = "List of S3 actions to allow (e.g., ['s3:GetObject'], ['s3:PutObject'], ['s3:DeleteObject'])"
  type        = list(string)
  default     = ["s3:GetObject"]
}

variable "resource_paths" {
  description = "List of resource paths within buckets (e.g., ['/*'] for all objects, ['/specific-prefix/*'] for a specific prefix)"
  type        = list(string)
  default     = ["/*"]
}

variable "conditions" {
  description = "Optional IAM policy conditions to apply to the statement"
  type        = any
  default     = null
}

variable "tags" {
  description = "Tags to apply to the policy"
  type        = map(string)
  default     = {}
}
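To illustrate how the `s3_iam_policy` module composes, here is a hypothetical call (the module label, policy name, bucket and paths are invented for this sketch) together with the list that the module's `locals` block would flatten the inputs into:

```hcl
module "reports_read_policy" {
  source = "../modules/s3_iam_policy"

  policy_name    = "ReportsReadOnly"               # hypothetical
  bucket_arns    = ["arn:aws:s3:::reports-dev"]    # hypothetical bucket
  actions        = ["s3:GetObject"]
  resource_paths = ["/exports/*", "/summaries/*"]
}

# local.resources inside the module flattens to:
#   ["arn:aws:s3:::reports-dev/exports/*",
#    "arn:aws:s3:::reports-dev/summaries/*"]
```

One caveat when reusing this module: `s3:ListBucket` is evaluated against the bucket ARN itself, not object ARNs, so with `resource_paths = ["/*"]` that action would only take effect if the bare bucket ARN were also in the resource list.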
terraform/modules/s3_presignable_bucket/main.tf (new file)
@@ -0,0 +1,83 @@
resource "aws_s3_bucket" "bucket" {
  bucket = var.bucketname
  acl    = "private"

  cors_rule {
    allowed_headers = ["Content-Type", "Authorization"]
    allowed_methods = ["PUT"]
    allowed_origins = var.allowed_origins
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_iam_user" "presign_frontend_user" {
  name = "presign_frontend_user-${var.bucketname}"
  path = "/system/"
}

resource "aws_iam_access_key" "presign_frontend_user_access_key" {
  user = aws_iam_user.presign_frontend_user.name
}

resource "aws_secretsmanager_secret" "presign_frontend_user_access_key" {
  name = "${var.bucketname}/presign_frontend/access_key"
}

resource "aws_secretsmanager_secret_version" "presign_frontend_user_access_key" {
  secret_id     = aws_secretsmanager_secret.presign_frontend_user_access_key.id
  secret_string = aws_iam_access_key.presign_frontend_user_access_key.id
}

resource "aws_secretsmanager_secret" "presign_frontend_user_secret_key" {
  name = "${var.bucketname}/presign_frontend/secret_key"
}

resource "aws_secretsmanager_secret_version" "presign_frontend_user_secret_key" {
  secret_id     = aws_secretsmanager_secret.presign_frontend_user_secret_key.id
  secret_string = aws_iam_access_key.presign_frontend_user_access_key.secret
}

resource "aws_iam_user_policy" "presign_frontend_user_policy" {
  name = "presign_frontend_user_policy-${var.bucketname}"
  user = aws_iam_user.presign_frontend_user.name

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::${aws_s3_bucket.bucket.bucket}/*"
    }
  ]
}
EOF
}

resource "aws_s3_bucket_public_access_block" "block_public" {
  bucket                  = aws_s3_bucket.bucket.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
terraform/modules/s3_presignable_bucket/outputs.tf (new file)
@@ -0,0 +1,14 @@
output "bucket_name" {
  description = "The name of the S3 bucket"
  value       = aws_s3_bucket.bucket.bucket
}

output "presign_frontend_access_key_secret_name" {
  description = "The name of the access key secret in AWS Secrets Manager for the frontend user"
  value       = aws_secretsmanager_secret.presign_frontend_user_access_key.name
}

output "presign_frontend_secret_key_secret_name" {
  description = "The name of the secret key secret in AWS Secrets Manager for the frontend user"
  value       = aws_secretsmanager_secret.presign_frontend_user_secret_key.name
}
terraform/modules/s3_presignable_bucket/variables.tf (new file)
@@ -0,0 +1,18 @@
variable "environment" {
  description = "The environment for the bucket (dev or prod)"
  type        = string
}

# Production and development need different allowed origins for CORS.
# This variable lets us generate pre-signed URLs; in development we
# want to be able to do so from localhost.
variable "allowed_origins" {
  description = "Allowed origins for CORS"
  type        = list(string)
}

variable "bucketname" {
  description = "The name of the bucket to create"
  type        = string
}
terraform/modules/ses/main.tf (new file)
@@ -0,0 +1,50 @@
resource "aws_ses_domain_identity" "this" {
  domain = var.domain_name
}

# DKIM signing
resource "aws_ses_domain_dkim" "this" {
  domain = aws_ses_domain_identity.this.domain
}

# IAM user for SES SMTP
resource "aws_iam_user" "ses_user" {
  name = "${var.stage}-ses-user"
}

resource "aws_iam_user_policy" "ses_send_policy" {
  name = "AllowSESSendEmail"
  user = aws_iam_user.ses_user.name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "ses:SendEmail",
          "ses:SendRawEmail"
        ]
        Resource = "*"
      }
    ]
  })
}

resource "aws_iam_access_key" "ses_user" {
  user = aws_iam_user.ses_user.name
}

# Store SMTP credentials in AWS Secrets Manager
resource "aws_secretsmanager_secret" "ses_smtp" {
  name        = "${var.stage}/ses/smtp_credentials"
  description = "SMTP credentials for SES (${var.stage})"
}

resource "aws_secretsmanager_secret_version" "ses_smtp" {
  secret_id = aws_secretsmanager_secret.ses_smtp.id
  secret_string = jsonencode({
    username = aws_iam_access_key.ses_user.id
    password = aws_iam_access_key.ses_user.ses_smtp_password_v4
  })
}
terraform/modules/ses/outputs.tf (new file)
@@ -0,0 +1,66 @@
# These are the DNS records that will need to be added to our Krystal account

# TXT record
output "verification_record" {
  description = "TXT record required to verify the domain with SES"
  value = {
    name  = "_amazonses.${aws_ses_domain_identity.this.domain}"
    type  = "TXT"
    value = aws_ses_domain_identity.this.verification_token
  }
}

# DKIM CNAME records
output "dkim_records" {
  description = "CNAME records required to enable DKIM for SES"
  value = [
    for dkim in aws_ses_domain_dkim.this.dkim_tokens : {
      name  = "${dkim}._domainkey.${aws_ses_domain_identity.this.domain}"
      type  = "CNAME"
      value = "${dkim}.dkim.amazonses.com"
    }
  ]
}

# SMTP credentials live in Secrets Manager; expose the secret's ARN
output "ses_smtp_secret_arn" {
  description = "ARN of the SES SMTP credentials stored in Secrets Manager"
  value       = aws_secretsmanager_secret.ses_smtp.arn
}

output "smtp_password" {
  value       = aws_iam_access_key.ses_user.ses_smtp_password_v4
  sensitive   = true
  description = "SMTP password for SES"
}

output "dns_records" {
  description = "All DNS records required for SES verification and recommended deliverability"
  value = concat(
    [
      {
        name  = "_amazonses.${aws_ses_domain_identity.this.domain}"
        type  = "TXT"
        value = aws_ses_domain_identity.this.verification_token
      },
      {
        name  = var.domain_name
        type  = "TXT"
        value = "v=spf1 include:amazonses.com -all"
      },
      {
        name  = "_dmarc.${var.domain_name}"
        type  = "TXT"
        value = "v=DMARC1; p=quarantine; rua=mailto:postmaster@${var.domain_name}"
      }
    ],
    [
      for dkim in aws_ses_domain_dkim.this.dkim_tokens : {
        name  = "${dkim}._domainkey.${aws_ses_domain_identity.this.domain}"
        type  = "CNAME"
        value = "${dkim}.dkim.amazonses.com"
      }
    ]
  )
}
terraform/modules/ses/variables.tf (new file)
@@ -0,0 +1,9 @@
variable "domain_name" {
  description = "The domain to verify with SES (e.g. domna.homes)"
  type        = string
}

variable "stage" {
  description = "Deployment stage (e.g. dev, prod)"
  type        = string
}
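The `dns_records` output is designed to be copied into the Krystal account by hand, but if the zone were instead hosted in Route53 the same output could be applied directly. A sketch under that assumption (the resource label and hosted zone ID are placeholders):

```hcl
resource "aws_route53_record" "ses" {
  for_each = { for r in module.ses.dns_records : "${r.type}-${r.name}" => r }

  zone_id = "Z0EXAMPLEZONEID" # placeholder hosted zone ID
  name    = each.value.name
  type    = each.value.type
  ttl     = 300
  records = [each.value.value]
}
```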
terraform/modules/sqs_queue/main.tf (new file)
@@ -0,0 +1,14 @@
resource "aws_sqs_queue" "dlq" {
  name = "${var.name}-dlq"
}

resource "aws_sqs_queue" "this" {
  name = var.name

  visibility_timeout_seconds = 1000

  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.dlq.arn
    maxReceiveCount     = var.max_receive_count
  })
}
terraform/modules/sqs_queue/outputs.tf (new file)
@@ -0,0 +1,7 @@
output "queue_arn" {
  value = aws_sqs_queue.this.arn
}

output "queue_url" {
  value = aws_sqs_queue.this.url
}
terraform/modules/sqs_queue/variables.tf (new file)
@@ -0,0 +1,6 @@
variable "name" { type = string }

variable "max_receive_count" {
  type    = number
  default = 1
}
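A sketch of wiring the `sqs_queue` and `lambda_sqs_trigger` modules together; the queue name is illustrative, and the Lambda ARN and role name are placeholders for a function defined elsewhere:

```hcl
module "ingest_queue" {
  source = "../modules/sqs_queue"
  name   = "ingest-jobs-dev" # illustrative queue name
}

module "ingest_trigger" {
  source = "../modules/lambda_sqs_trigger"

  queue_arn        = module.ingest_queue.queue_arn
  lambda_arn       = "arn:aws:lambda:eu-west-2:123456789012:function:ingest" # placeholder
  lambda_role_name = "ingest-lambda-role"                                    # placeholder
  batch_size       = 5
}
```

Note that the queue's `visibility_timeout_seconds = 1000` should stay at or above the consuming Lambda's timeout, otherwise in-flight messages can be redelivered mid-invocation.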
terraform/modules/tf_state_bucket/main.tf (new file)
@@ -0,0 +1,30 @@
resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name
}

resource "aws_s3_bucket_versioning" "this" {
  bucket = aws_s3_bucket.this.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  bucket = aws_s3_bucket.this.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "this" {
  bucket = aws_s3_bucket.this.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
terraform/modules/tf_state_bucket/outputs.tf (new file)
@@ -0,0 +1,7 @@
output "bucket_name" {
  value = aws_s3_bucket.this.bucket
}

output "bucket_arn" {
  value = aws_s3_bucket.this.arn
}
terraform/modules/tf_state_bucket/variables.tf (new file)
@@ -0,0 +1,3 @@
variable "bucket_name" {
  type = string
}
terraform/shared/dev.tfvars (new file)
@@ -0,0 +1,19 @@
stage  = "dev"
region = "eu-west-2"

# Domain
# Remember to create a production URL in Google Domains before deploying
domain_name    = "dev.hestia.homes"
api_url_prefix = "api"

# Database
allocated_storage = 20
instance_class    = "db.t4g.medium"
database_name     = "DevAssessmentModelDB"

# S3
allowed_origins = ["*"]
# For prod this should be something like:
# allowed_origins = ["https://www.assessment-model.vercel.app"]
# or whatever we end up calling the prod site
410
terraform/shared/main.tf
Normal file
410
terraform/shared/main.tf
Normal file
|
|
@ -0,0 +1,410 @@
|
|||
terraform {
|
||||
required_providers {
|
||||
aws = {
|
||||
source = "hashicorp/aws"
|
||||
version = "~> 4.16"
|
||||
}
|
||||
}
|
||||
backend "s3" {
|
||||
bucket = "assessment-model-terraform-state"
|
||||
region = "eu-west-2"
|
||||
key = "terraform.tfstate"
|
||||
}
|
||||
|
||||
required_version = ">= 1.2.0"
|
||||
}
|
||||
|
||||
provider "aws" {
|
||||
region = var.region
|
||||
}
|
||||
|
||||
# Additional provider for resources that need to be in us-east-1, specifically the SSL certificate
|
||||
provider "aws" {
|
||||
alias = "aws_use1"
|
||||
region = "us-east-1"
|
||||
}
|
||||
|
||||
# Assuming the secret is already created and the name is "<stage>/assessment_model/db_credentials"
|
||||
data "aws_secretsmanager_secret" "db_credentials" {
|
||||
name = "${var.stage}/assessment_model/db_credentials"
|
||||
}
|
||||
|
||||
data "aws_secretsmanager_secret_version" "db_credentials" {
|
||||
secret_id = data.aws_secretsmanager_secret.db_credentials.id
|
||||
}
|
||||
|
||||
# Default VPC
|
||||
data "aws_vpc" "default" {
|
||||
default = true
|
||||
}
|
||||
|
||||
# For MVP, we allow all inbound traffic to the DB - this will need to be changed later; we'll likely
|
||||
# need to re-deploy the frontend to AWS so that it's within the same VPC as the DB
|
||||
resource "aws_security_group" "allow_db" {
|
||||
name = "allow_tls"
|
||||
description = "Allow TLS inbound traffic"
|
||||
vpc_id = data.aws_vpc.default.id
|
||||
|
||||
ingress {
|
||||
# TLS (change to whatever ports you need)
|
||||
from_port = 5432
|
||||
to_port = 5432
|
||||
protocol = "tcp"
|
||||
cidr_blocks = ["0.0.0.0/0"]
|
||||
}
|
||||
|
||||
egress {
|
||||
from_port = 0
|
||||
to_port = 0
|
||||
protocol = "-1"
|
||||
cidr_blocks = ["0.0.0.0/0"]
|
||||
}
|
||||
}
|
||||
|
||||
resource "aws_db_instance" "default" {
|
||||
allocated_storage = var.allocated_storage
|
||||
engine = "postgres"
|
||||
engine_version = "14.17"
|
||||
instance_class = var.instance_class
|
||||
db_name = var.database_name
|
||||
username = jsondecode(data.aws_secretsmanager_secret_version.db_credentials.secret_string)["db_assessment_model_username"]
|
||||
password = jsondecode(data.aws_secretsmanager_secret_version.db_credentials.secret_string)["db_assessment_model_password"]
|
||||
parameter_group_name = "default.postgres14"
|
||||
skip_final_snapshot = true
|
||||
vpc_security_group_ids = [aws_security_group.allow_db.id]
|
||||
lifecycle {
|
||||
prevent_destroy = true
|
||||
}
|
||||
# For the moment, we make the database publically accessible so that we can connect to it from the frontend.
|
||||
# We will look to change this in the future but as we are pre-MVP at the time of setting this, we don't
|
||||
# have major security demand and don't want to set this up now
|
||||
publicly_accessible = true
|
||||
# Specify the CA certificate with the default RDS CA certificate
|
||||
ca_cert_identifier = "rds-ca-rsa2048-g1"
|
||||
# Temporary to enfore immediate change
|
||||
apply_immediately = true
|
||||
# Set up storage type to gp3 for better performance
|
||||
storage_type = "gp3"
|
||||
}
|
||||
|
||||
# Set up the bucket that recieve the csv uploads of epc to be retrofit
|
||||
module "s3_presignable_bucket" {
|
||||
source = "../modules/s3_presignable_bucket"
|
||||
bucketname = "retrofit-plan-inputs-${var.stage}"
|
||||
environment = var.stage
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
module "s3_due_considerations_bucket" {
|
||||
source = "../modules/s3_presignable_bucket"
|
||||
bucketname = "retrofit-due-considerations-${var.stage}"
|
||||
environment = var.stage
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
module "s3_eco_spreadseet_bucket" {
|
||||
source = "../modules/s3_presignable_bucket"
|
||||
bucketname = "retrofit-eco-spreadsheet-${var.stage}"
|
||||
environment = var.stage
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
module "s3" {
|
||||
source = "../modules/s3"
|
||||
bucketname = "retrofit-datalake-${var.stage}"
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
module "model_directory" {
|
||||
source = "../modules/s3"
|
||||
bucketname = "retrofit-model-directory-${var.stage}"
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
module "retrofit_sap_predictions" {
|
||||
source = "../modules/s3"
|
||||
bucketname = "retrofit-sap-predictions-${var.stage}"
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
module "retrofit_sap_data" {
|
||||
source = "../modules/s3"
|
||||
bucketname = "retrofit-data-${var.stage}"
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
output "retrofit_sap_data_bucket_name" {
|
||||
value = module.retrofit_sap_data.bucket_name
|
||||
description = "Name of the retrofit SAP data bucket"
|
||||
}
|
||||
|
||||
module "retrofit_carbon_predictions" {
|
||||
source = "../modules/s3"
|
||||
bucketname = "retrofit-carbon-predictions-${var.stage}"
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
module "retrofit_heat_predictions" {
|
||||
source = "../modules/s3"
|
||||
bucketname = "retrofit-heat-predictions-${var.stage}"
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
module "retrofit_lighting_cost_predictions" {
|
||||
source = "../modules/s3"
|
||||
bucketname = "retrofit-lighting-cost-predictions-${var.stage}"
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
module "retrofit_heating_cost_predictions" {
|
||||
source = "../modules/s3"
|
||||
bucketname = "retrofit-heating-cost-predictions-${var.stage}"
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
module "retrofit_hot_water_cost_predictions" {
|
||||
source = "../modules/s3"
|
||||
bucketname = "retrofit-hot-water-cost-predictions-${var.stage}"
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
module "retrofit_heating_kwh_predictions" {
|
||||
source = "../modules/s3"
|
||||
bucketname = "retrofit-heating-kwh-predictions-${var.stage}"
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
module "retrofit_hotwater_kwh_predictions" {
|
||||
source = "../modules/s3"
|
||||
bucketname = "retrofit-hotwater-kwh-predictions-${var.stage}"
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
module "retrofit_sap_baseline_predictions" {
|
||||
source = "../modules/s3"
|
||||
bucketname = "retrofit-sap-baseline-predictions-${var.stage}"
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
// We make this bucket presignable, because we want to generate download links for the frontend
|
||||
module "retrofit_energy_assessments" {
|
||||
source = "../modules/s3_presignable_bucket"
|
||||
bucketname = "retrofit-energy-assessments-${var.stage}"
|
||||
allowed_origins = var.allowed_origins
|
||||
environment = var.stage
|
||||
}
|
||||
|
||||
# Set up the route53 record for the API
|
||||
module "route53" {
|
||||
source = "../modules/route53"
|
||||
domain_name = var.domain_name
|
||||
api_url_prefix = var.api_url_prefix
|
||||
providers = {
|
||||
aws.aws_use1 = aws.aws_use1
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
# Create an ECR repository for storage of the lambda's docker images
|
||||
module "ecr" {
|
||||
ecr_name = "fastapi-repository-${var.stage}"
|
||||
source = "../modules/ecr"
|
||||
}
|
||||
|
||||
module "lambda_sap_prediction_ecr" {
|
||||
ecr_name = "lambda-sap-prediction-${var.stage}"
|
||||
source = "../modules/ecr"
|
||||
}
|
||||
|
||||
module "due_considerations_ecr" {
|
||||
ecr_name = "due-considerations-${var.stage}"
|
||||
source = "../modules/ecr"
|
||||
}
|
||||
|
||||
module "eco_spreadsheet_ecr" {
|
||||
ecr_name = "eco-spreadsheet-${var.stage}"
|
||||
source = "../modules/ecr"
|
||||
}
|
||||
|
||||
module "lambda_carbon_prediction_ecr" {
|
||||
ecr_name = "lambda-carbon-prediction-${var.stage}"
|
||||
source = "../modules/ecr"
|
||||
}
|
||||
|
||||
module "lambda_heat_prediction_ecr" {
|
||||
ecr_name = "lambda-heat-prediction-${var.stage}"
|
||||
source = "../modules/ecr"
|
||||
}
|
||||
|
||||
# ECR repos for lighting cost, heating cost and hot water cost models
|
||||
module "lambda_lighting_cost_prediction_ecr" {
|
||||
ecr_name = "lighting-cost-prediction-${var.stage}"
|
||||
source = "../modules/ecr"
|
||||
}
|
||||
|
||||
module "lambda_heating_cost_prediction_ecr" {
|
||||
ecr_name = "heating-cost-prediction-${var.stage}"
|
||||
source = "../modules/ecr"
|
||||
}
|
||||
|
||||
module "lambda_hot_water_cost_prediction_ecr" {
|
||||
ecr_name = "hot-water-cost-prediction-${var.stage}"
|
||||
source = "../modules/ecr"
|
||||
}
|
||||
|
||||
# For heating and hot water kwh models
|
||||
module "lambda_heating_kwh_prediction_ecr" {
|
||||
ecr_name = "heating-kwh-prediction-${var.stage}"
|
||||
source = "../modules/ecr"
|
||||
}
|
||||
|
||||
module "lambda_hotwater_kwh_prediction_ecr" {
|
||||
ecr_name = "hotwater-kwh-prediction-${var.stage}"
|
||||
source = "../modules/ecr"
|
||||
}
|
||||
|
||||
# Baselining models
|
||||
module "sap_baseline_ecr" {
|
||||
ecr_name = "sap-baseline-prediction-${var.stage}"
|
||||
source = "../modules/ecr"
|
||||
}
|
||||
|
||||
##############################################
|
||||
# CDN - Cloudfront
|
||||
##############################################
|
||||
module "cloudfront_distribution" {
|
||||
source = "../modules/cloudfront"
|
||||
bucket_name = module.s3.bucket_name
|
||||
bucket_id = module.s3.bucket_id
|
||||
bucket_arn = module.s3.bucket_arn
|
||||
bucket_domain_name = module.s3.bucket_domain_name
|
||||
stage = var.stage
|
||||
}
|
||||
|
||||
################################################
|
||||
# SES - Email sending
|
||||
################################################
|
||||
module "ses" {
|
||||
source = "../modules/ses"
|
||||
domain_name = "domna.homes"
|
||||
stage = var.stage
|
||||
}
|
||||
|
||||
output "ses_dns_records" {
|
||||
value = module.ses.dns_records
|
||||
}
|
||||
|
||||
################################################
|
||||
# Address2UPRN – Lambda ECR
|
||||
################################################
|
||||
module "address2uprn_state_bucket" {
|
||||
source = "../modules/tf_state_bucket"
|
||||
bucket_name = "address2uprn-terraform-state"
|
||||
|
||||
}
|
||||
|
||||
module "address2uprn_registry" {
|
||||
source = "../modules/container_registry"
|
||||
name = "address2uprn"
|
||||
stage = var.stage
|
||||
|
||||
}
|
||||
|
||||
# S3 policy for postcode splitter to read from retrofit data bucket
|
||||
module "address2uprn_s3_read_and_write" {
|
||||
source = "../modules/s3_iam_policy"
|
||||
|
||||
policy_name = "Address2UPRNReadandWriteS3"
|
||||
policy_description = "Allow address2uprn Lambda to read and write from retrofit-data bucket"
|
||||
bucket_arns = ["arn:aws:s3:::retrofit-data-${var.stage}"]
|
||||
actions = ["s3:GetObject", "s3:ListBucket", "s3:PutObject"]
|
||||
resource_paths = ["/*"]
|
||||
}
|
||||
|
||||
output "address_2_uprn_s3_read_and_write_arn" {
|
||||
value = module.address2uprn_s3_read_and_write.policy_arn
|
||||
}
|
||||
|
||||
################################################
|
||||
# Condition ETL – Lambda ECR
|
||||
################################################
|
||||
module "condition_etl_state_bucket" {
|
||||
source = "../modules/tf_state_bucket"
|
||||
bucket_name = "condition-etl-terraform-state"
|
||||
|
||||
}
|
||||
|
||||
module "condition_etl_registry" {
|
||||
source = "../modules/container_registry"
|
||||
name = "condition-etl"
|
||||
stage = var.stage
|
||||
|
||||
}
|
||||
|
||||
# Condition Data S3 Bucket to store initial data
|
||||
module "condition_data_bucket" {
|
||||
source = "../modules/s3"
|
||||
bucketname = "condition-data-${var.stage}"
|
||||
allowed_origins = var.allowed_origins
|
||||
}
|
||||
|
||||
module "condition_etl_s3_read" {
|
||||
source = "../modules/s3_iam_policy"
|
||||
|
||||
policy_name = "ConditionETLReadS3"
|
||||
policy_description = "Allow Lambda to read objects from condition-data-${var.stage}"
|
||||
bucket_arns = ["arn:aws:s3:::condition-data-${var.stage}"]
|
||||
actions = ["s3:GetObject"]
|
||||
resource_paths = ["/*"]
|
||||
}
|
||||
|
||||
output "condition_etl_s3_read_arn" {
|
||||
value = module.condition_etl_s3_read.policy_arn
|
||||
}
|
||||
|
||||
|
||||
################################################
|
||||
# Postcode Splitter – Lambda ECR
|
||||
################################################
|
||||
module "postcode_splitter_state_bucket" {
|
||||
source = "../modules/tf_state_bucket"
|
||||
bucket_name = "postcode-splitter-terraform-state"
|
||||
|
||||
}
|
||||
|
||||
module "postcode_splitter_registry" {
|
||||
source = "../modules/container_registry"
|
||||
name = "postcode_splitter"
|
||||
stage = var.stage
|
||||
|
||||
}
|
||||
|
||||
# S3 policy for the postcode splitter to read from and write to the retrofit data bucket
module "postcode_splitter_s3_read" {
  source = "../modules/s3_iam_policy"

  policy_name        = "PostcodeSplitterReadS3"
  policy_description = "Allow the postcode splitter Lambda to read from and write to the retrofit-data bucket"
  bucket_arns        = ["arn:aws:s3:::retrofit-data-${var.stage}"]
  actions            = ["s3:GetObject", "s3:ListBucket", "s3:PutObject"]
  resource_paths     = ["/*"]
}

output "postcode_splitter_s3_read_arn" {
|
||||
value = module.postcode_splitter_s3_read.policy_arn
|
||||
}
|
||||
|
||||
################################################
# Categorisation – Lambda ECR
################################################

module "categorisation_state_bucket" {
  source = "../modules/tf_state_bucket"

  bucket_name = "categorisation-terraform-state"
}

module "categorisation_registry" {
|
||||
source = "../modules/container_registry"
|
||||
name = "categorisation"
|
||||
stage = var.stage
|
||||
}
|
||||
1
terraform/shared/secrets.tf
Normal file

@@ -0,0 +1 @@
# This file holds secrets which, for the moment, are stored in AWS Secrets Manager
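# A minimal sketch of how a value kept in Secrets Manager could be read
# here instead of being hard-coded. The secret name is assumed for
# illustration and is not part of this project's configuration:
#
# data "aws_secretsmanager_secret_version" "db_password" {
#   secret_id = "db-password-${var.stage}"
# }
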
39
terraform/shared/variables.tf
Normal file

@@ -0,0 +1,39 @@
variable "stage" {
  description = "The stage of the environment"
  type        = string
}

variable "region" {
  description = "AWS region"
  type        = string
}

variable "domain_name" {
  description = "Name of the base domain for Route53"
  type        = string
}

variable "api_url_prefix" {
  description = "Prefix for the API URL"
  type        = string
}

variable "allocated_storage" {
  description = "The allocated storage in gigabytes"
  type        = number
}

variable "instance_class" {
  description = "The instance type of the RDS instance"
  type        = string
}

variable "database_name" {
  description = "The name of the database to create when the DB instance is created"
  type        = string
}

variable "allowed_origins" {
  description = "Allowed origins for CORS policy"
  type        = list(string)
}
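# A hypothetical dev.tfvars satisfying the variables above. All values are
# illustrative placeholders, not the project's real configuration:
#
# stage             = "dev"
# region            = "eu-west-2"
# domain_name       = "example.com"
# api_url_prefix    = "api"
# allocated_storage = 20
# instance_class    = "db.t3.micro"
# database_name     = "assessment"
# allowed_origins   = ["http://localhost:3000"]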