Note: this module is not intended for use outside of the organization, as the template provides a consistent blueprint for provisioning accounts within the Appvia AWS estate.
Please refer to one of the application, platform or sandbox pipelines for an example of how to use this module.
Tenants are able to provision notifications within the designated region. The first step is to ensure notifications are enabled.
notifications = {
email = {
addresses = ["MY_EMAIL_ADDRESS"]
}
slack = {
webhook = "MY_SLACK_WEBHOOK"
}
}
The notifications can be used to notify users via email or Slack of events related to costs, security and budgets.
Additional service control policies can be applied to the account. This is useful for ensuring that the account is compliant with the organization's security policies, specific to the account's requirements.
You can configure additional service control policies using the var.service_control_policies variable, as in the example below:
data "aws_iam_policy_document" "deny_s3" {
statement {
effect = "Deny"
actions = ["s3:*"]
resources = ["*"]
}
}
module "account" {
service_control_policies = {
"MY_POLICY_NAME" = {
name = "deny-s3"
policy = data.aws_iam_policy_document.deny_s3.json
}
}
}
AWS Config Conformance Packs are collections of AWS Config rules and remediation actions that are packaged together for common compliance and security best practices. You can configure compliance packs using the var.aws_config variable to ensure your account meets specific compliance requirements.
Compliance packs can be created using either a template body (YAML or JSON) or a template URL. You can also override default parameters in the compliance pack template to customize the rules for your specific requirements.
module "account" {
aws_config = {
enable = true
compliance_packs = {
"security-best-practices" = {
template_body = file("${path.module}/templates/security-best-practices.yml")
}
}
}
}
data "http" "security_hub_enabled" {
url = "https://s3.amazonaws.com/aws-service-catalog-reference-architectures/AWS_Config_Rules/Security/SecurityHub/SecurityHub-Enabled.json"
}
module "account" {
aws_config = {
enable = true
compliance_packs = {
"security-hub-enabled" = {
template_body = data.http.security_hub_enabled.response_body # use .body with http provider < 3.x
}
}
}
}
Many compliance packs support parameter overrides that allow you to customize the behavior of the rules within the pack. For example, you can adjust thresholds, specify resource types, or configure other rule-specific settings.
module "account" {
aws_config = {
enable = true
compliance_packs = {
"hipaa-compliance" = {
template_body = file("${path.module}/templates/hipaa-compliance.yml")
parameter_overrides = {
"AccessKeysRotatedParamMaxAccessKeyAge" = "45"
"PasswordPolicyParamMinimumPasswordLength" = "14"
"PasswordPolicyParamRequireUppercaseCharacters" = "true"
"PasswordPolicyParamRequireLowercaseCharacters" = "true"
"PasswordPolicyParamRequireNumbers" = "true"
"PasswordPolicyParamRequireSymbols" = "true"
}
}
"pci-dss-compliance" = {
template_body = file("${path.module}/templates/pci-dss-compliance.yml")
parameter_overrides = {
"EncryptedVolumesParamKmsKeyId" = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
}
}
}
}
}
The compliance pack configuration supports the following options:
- `template_body`: (Required) The YAML or JSON template body for the compliance pack. This can be provided directly as a string, loaded from a file using `file()`, or fetched from a URL using `data.http`.
- `template_url`: (Optional) The URL of the compliance pack template. Note: either `template_body` or `template_url` must be provided, but not both.
- `parameter_overrides`: (Optional) A map of parameter overrides to customize the compliance pack rules. The keys should match the parameter names defined in the compliance pack template, and the values are the custom values you want to apply.
AWS provides several pre-built compliance pack templates that you can use. These templates are available in the AWS Service Catalog and can be referenced by their S3 URLs. Common examples include:
- Operational Best Practices: General security and operational best practices
- HIPAA Compliance: Healthcare industry compliance requirements
- PCI-DSS Compliance: Payment card industry data security standards
- Security Best Practices: Security-focused configuration rules
- CIS AWS Foundations Benchmark: Center for Internet Security benchmarks
You can find the complete list of AWS managed compliance pack templates in the AWS Config Conformance Pack Sample Templates documentation.
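When consuming one of these published templates directly, the `template_url` option can be used instead of `template_body`. The bucket path below is purely illustrative, not a real AWS-published URL; substitute the actual template location from the AWS documentation:

```hcl
module "account" {
  aws_config = {
    enable = true
    compliance_packs = {
      "operational-best-practices" = {
        # Illustrative URL only - take the real template location from the
        # AWS Config Conformance Pack Sample Templates documentation.
        template_url = "https://s3.amazonaws.com/MY_TEMPLATE_BUCKET/operational-best-practices.yml"
      }
    }
  }
}
```

Remember that `template_url` and `template_body` are mutually exclusive; provide exactly one of them per pack.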
You can configure multiple compliance packs simultaneously to meet various compliance requirements:
module "account" {
aws_config = {
enable = true
compliance_packs = {
"operational-best-practices" = {
template_body = file("${path.module}/templates/operational-best-practices.yml")
}
"security-best-practices" = {
template_body = file("${path.module}/templates/security-best-practices.yml")
parameter_overrides = {
"CheckPublicReadAclParam" = "true"
"CheckPublicWriteAclParam" = "true"
}
}
"cis-aws-foundations-benchmark" = {
template_body = file("${path.module}/templates/cis-aws-foundations-benchmark.yml")
parameter_overrides = {
"AccessKeysRotatedParamMaxAccessKeyAge" = "90"
}
}
}
}
}
Note: Ensure that AWS Config is enabled (`enable = true`) in the `aws_config` variable for compliance packs to be provisioned. The compliance packs will be deployed to the account and will continuously evaluate your resources against the rules defined in the pack.
You can configure additional AWS Config managed rules using the var.aws_config variable. AWS Config rules allow you to evaluate the configuration settings of your AWS resources to ensure they comply with your organization's policies.
module "account" {
aws_config = {
enable = true
rules = {
"encrypted-volumes" = {
description = "Checks whether EBS volumes are encrypted"
identifier = "ENCRYPTED_VOLUMES"
resource_types = ["AWS::EC2::Volume"]
}
"s3-bucket-public-read-prohibited" = {
description = "Checks that your S3 buckets do not allow public read access"
identifier = "S3_BUCKET_PUBLIC_READ_PROHIBITED"
resource_types = ["AWS::S3::Bucket"]
}
"rds-instance-public-access-check" = {
description = "Checks whether the Amazon Relational Database Service instances are not publicly accessible"
identifier = "RDS_INSTANCE_PUBLIC_ACCESS_CHECK"
resource_types = ["AWS::RDS::DBInstance"]
max_execution_frequency = "TwentyFour_Hours"
inputs = {
"publicAccessCheckValue" = "true"
}
}
"tagged-resources" = {
description = "Checks whether resources are properly tagged"
identifier = "REQUIRED_TAGS"
resource_types = ["AWS::EC2::Instance"]
inputs = {
"tag1Key" = "Environment"
"tag2Key" = "Owner"
}
scope = {
compliance_resource_types = ["AWS::EC2::Instance"]
tag_key = "Environment"
tag_value = "Production"
}
}
}
}
}
The rules configuration supports the following options:
- `description`: A description of what the rule checks
- `identifier`: The identifier of the AWS managed Config rule (e.g., `ENCRYPTED_VOLUMES`, `S3_BUCKET_PUBLIC_READ_PROHIBITED`)
- `resource_types`: A list of resource types that the rule evaluates (for documentation purposes)
- `inputs`: (Optional) A map of input parameters for the rule
- `max_execution_frequency`: (Optional) The maximum frequency at which the rule runs. Valid values: `One_Hour`, `Three_Hours`, `Six_Hours`, `Twelve_Hours`, `TwentyFour_Hours`
- `scope`: (Optional) Defines which resources are evaluated by the rule:
  - `compliance_resource_types`: A list of resource types to scope the rule
  - `tag_key`: (Optional) The tag key to scope the rule
  - `tag_value`: (Optional) The tag value to scope the rule
For a complete list of available AWS managed Config rules and their identifiers, see the AWS Config Managed Rules documentation.
The IAM password policy can be configured to enforce password policies on the account. This is useful for ensuring that the account is compliant with the organization's security policies, specific to the account's requirements.
iam_password_policy = {
enabled = true
allow_users_to_change_password = true
hard_expiry = false
max_password_age = 90
minimum_password_length = 8
password_reuse_prevention = 24
require_lowercase_characters = true
require_numbers = true
require_symbols = true
require_uppercase_characters = true
}
The IAM Access Analyzer can be configured to analyze access to resources within your account and produce findings related to excessive permissions and/or permissions which carry a high risk.
iam_access_analyzer = {
enabled = true
analyzer_name = "lza-iam-access-analyzer" # optional
analyzer_type = "ORGANIZATION" # optional but default
}
You can enable or disable the AWS Inspector service via the var.inspector variable, as in the example below:
module "account" {
inspector = {
enable = true
delegate_account_id = "123456789012" # Usually the security account
}
}
EBS encryption can be configured to encrypt all EBS volumes within the account. This feature ensures all volumes are automatically encrypted.
ebs_encryption = {
enabled = true
create_kms_key = true
key_alias = "lza/ebs/default"
}
S3 Block Public Access can be configured to block public access to S3 buckets within the account. This feature ensures all buckets automatically have public access blocked.
s3_block_public_access = {
enabled = true
enable_block_public_policy = true
enable_block_public_acls = true
enable_ignore_public_acls = true
enable_restrict_public_buckets = true
}
This module can ensure a set of IAM policies are created within the account. This is useful for ensuring that the account is preloaded with any required policy sets.
You can configure additional IAM policies using the var.iam_policies variable, as in the example below:
module "account" {
iam_policies = {
"deny_s3" = {
name = "deny-s3"
description = "Used to deny access to S3"
policy = data.aws_iam_policy_document.deny_s3.json
}
"deny_s3_with_prefix" = {
name_prefix = "deny-s3-"
policy = data.aws_iam_policy_document.deny_s3.json
description = "Used to deny access to S3"
path = "/"
}
}
}
This module can ensure a set of IAM roles are created within the account. This is useful for ensuring that the account is compliant with the organization's security policies, specific to the account's requirements. Note, the IAM roles have an automatic dependency on any IAM policies defined above to ensure ordering.
You can configure additional IAM roles using the var.iam_roles variable, as in the example below:
module "account" {
iam_roles = {
"s3_administrator" = {
name = "MY_ROLE_NAME"
assume_roles = ["arn:aws:iam::123456789012:role/role-name"]
description = "Administrator role for S3"
path = "/"
permissions_boundary_arn = null
permissions_arns = [
"arn:aws:iam::aws:policy/AmazonS3FullAccess"
]
#policies = [data.aws_iam_policy_document.deny_s3.json]
}
"ec2_instance_profile" = {
name = "lza-ssm-instance-profile"
assume_services = ["ec2.amazonaws.com"]
description = "Instance profiles for ec2 compute machine"
path = "/"
permissions_arns = [
"arn:aws:iam::aws:policy/AmazonSSMDirectoryServiceAccess",
"arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
"arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy",
]
}
"kms_admin" = {
name = "kms-admin"
assume_accounts = ["123456789012"]
description = "Administrator role for KMS"
path = "/"
permissions_arns = [
"arn:aws:iam::aws:policy/AmazonKMSFullAccess"
]
}
}
}
This module provides the ability for tenants to manage the assignment of prescribed roles to users and groups within the account. The sso_assignment module is used to manage the assignment of roles to users and groups within the account.
Note, the roles permitted for assignment can be found within local.sso_permitted_permission_sets; an example of the permitted roles can be found below:
sso_permitted_permission_sets = {
"devops_engineer" = "DevOpsEngineer"
"finops_engineer" = "FinOpsEngineer"
"network_engineer" = "NetworkEngineer"
"network_viewer" = "NetworkViewer"
"platform_engineer" = "PlatformEngineer"
"security_auditor" = "SecurityAuditor"
}
This maps the exposed name used in the var.rbac to the name of the role within the AWS Identity Center.
Tenants can assign roles to users and groups by providing a map of users and groups to roles within the var.rbac variable. An example of this can be found below:
rbac = {
"devops_engineer" = {
users = ["MY_SSO_USER"]
groups = ["MY_SSO_GROUP"]
}
}
AWS Resource Groups allow you to organize and manage AWS resources by grouping them based on tags, resource types, or other criteria. This module provides the ability to create and manage resource groups within your account, making it easier to organize, discover, and manage related resources.
You can create resource groups using the var.resource_groups variable. The recommended approach is to use the query object which provides a simpler, more intuitive way to define resource queries.
module "account" {
resource_groups = {
"production-ec2-instances" = {
description = "All EC2 instances in production environment"
query = {
resource_type_filters = ["AWS::EC2::Instance"]
tag_filters = {
"Environment" = ["production"]
}
}
}
}
}
Resource groups are commonly used to organize resources by tags, making it easier to manage resources across different environments or applications:
module "account" {
resource_groups = {
"production-resources" = {
description = "All production resources"
query = {
resource_type_filters = ["AWS::AllSupported"]
tag_filters = {
"Environment" = ["production"]
"Product" = ["my-product"]
}
}
}
"development-resources" = {
description = "All development resources"
query = {
resource_type_filters = ["AWS::AllSupported"]
tag_filters = {
"Environment" = ["development"]
}
}
}
}
}
You can create resource groups that include only specific resource types. If no tag_filters are specified, the resource group will include all resources of the specified types:
module "account" {
resource_groups = {
"s3-buckets" = {
description = "All S3 buckets in the account"
query = {
resource_type_filters = ["AWS::S3::Bucket"]
}
}
"lambda-functions" = {
description = "All Lambda functions"
query = {
resource_type_filters = ["AWS::Lambda::Function"]
}
}
"rds-instances" = {
description = "All RDS database instances"
query = {
resource_type_filters = ["AWS::RDS::DBInstance"]
}
}
}
}
Resource groups can include configuration settings for specific use cases, such as AWS Systems Manager maintenance windows or other group-based operations:
module "account" {
resource_groups = {
"maintenance-window-targets" = {
description = "EC2 instances for maintenance windows"
query = {
resource_type_filters = ["AWS::EC2::Instance"]
tag_filters = {
"MaintenanceWindow" = ["enabled"]
}
}
configuration = {
type = "AWS::SSM::MaintenanceWindowTarget"
parameters = [
{
name = "WindowTargetId"
values = ["target-123456"]
}
]
}
}
}
}
The resource group configuration supports the following options:
- `description`: (Required) A description of the resource group
- `query`: (Optional) An object that defines the query used to select resources for the group. This is the recommended approach:
  - `resource_type_filters`: (Optional) A list of AWS resource types (e.g., `["AWS::EC2::Instance"]`, `["AWS::S3::Bucket"]`, or `["AWS::AllSupported"]`). Defaults to `["AWS::AllSupported"]` if not specified.
  - `tag_filters`: (Optional) A map where keys are tag names and values are lists of tag values. For example: `{ "Environment" = ["production"], "Product" = ["my-app"] }`
- `resource_query`: (Optional) A JSON string that defines the query used to select resources for the group. This is an alternative to the `query` object for advanced use cases or backward compatibility. The query must follow AWS Resource Groups query syntax.
- `type`: (Optional) The type of resource query. Defaults to `"TAG_FILTERS_1_0"` if not specified.
- `configuration`: (Optional) Configuration settings for the resource group:
  - `type`: The type of group configuration (e.g., `"AWS::SSM::MaintenanceWindowTarget"`)
  - `parameters`: (Optional) A list of parameters for the configuration:
    - `name`: The parameter name
    - `values`: A list of parameter values
The query object provides a simpler and more intuitive way to define resource group queries:
query = {
resource_type_filters = ["AWS::EC2::Instance"] # List of resource types
tag_filters = { # Map of tag keys to values
"Environment" = ["production", "staging"] # Tag key with multiple values
"Product" = ["my-app"] # Additional tag filter
}
}
For advanced use cases or backward compatibility, you can provide a raw JSON string:
resource_query = jsonencode({
ResourceTypeFilters = ["AWS::EC2::Instance"]
TagFilters = [
{
Key = "Environment"
Values = ["production", "staging"]
}
]
})
Resource groups are useful for:
- Environment Management: Group resources by environment (production, staging, development)
- Application Organization: Group resources belonging to a specific application or service
- Cost Management: Organize resources for cost allocation and budgeting
- Security Management: Group resources for security scanning and compliance checks
- Maintenance Windows: Organize resources for scheduled maintenance operations
- Resource Discovery: Quickly find and list related resources across your account
This example shows how to organize resources across multiple environments using the query object:
module "account" {
resource_groups = {
"production-app-resources" = {
description = "All resources for the production application"
query = {
resource_type_filters = ["AWS::AllSupported"]
tag_filters = {
"Environment" = ["production"]
"Application" = ["my-app"]
}
}
}
"staging-app-resources" = {
description = "All resources for the staging application"
query = {
resource_type_filters = ["AWS::AllSupported"]
tag_filters = {
"Environment" = ["staging"]
"Application" = ["my-app"]
}
}
}
}
}
You can specify multiple values for a single tag key to match resources with any of those values:
module "account" {
resource_groups = {
"production-and-staging" = {
description = "All resources in production or staging environments"
query = {
resource_type_filters = ["AWS::AllSupported"]
tag_filters = {
"Environment" = ["production", "staging"]
}
}
}
}
}
Note: Resource groups are dynamic and automatically update as resources are created, modified, or deleted based on the resource query criteria. Ensure your resources are properly tagged to be included in the appropriate resource groups. When using the `query` object, if `resource_type_filters` is not specified, it defaults to `["AWS::AllSupported"]`.
This module includes comprehensive GitHub repository management capabilities through the modules/github_repository module. This allows tenants to create and manage GitHub repositories with enterprise-grade security and compliance features.
- Repository Creation: Create GitHub repositories with customizable names, descriptions, and visibility
- Security & Compliance: Branch protection, required reviews, status checks, vulnerability alerts
- Collaboration Management: User and team access control, environment protection
- Automation: Repository templates, merge strategies, and automated workflows
module "my_repository" {
source = "./modules/github_repository"
repository = "my-project"
description = "My awesome project"
visibility = "private"
}
module "enterprise_repository" {
source = "./modules/github_repository"
repository = "enterprise-critical-system"
description = "Enterprise critical system with strict controls"
# Security settings
visibility = "private"
# Branch protection
enforce_branch_protection_for_admins = true
required_approving_review_count = 3
dismiss_stale_reviews = true
prevent_self_review = true
# Status checks
required_status_checks = [
"CI / Build and Test",
"Security / Security Scan",
"Compliance / Compliance Check"
]
# Environments
repository_environments = ["staging", "production"]
default_environment_review_users = ["senior-dev1", "senior-dev2"]
# Collaborators
repository_collaborators = [
{
username = "senior-dev1"
permission = "admin"
}
]
# Topics
repository_topics = ["enterprise", "terraform", "aws", "critical"]
}
For complete GitHub repository management examples, see the examples/github_repository/ directory.
Tenants are able to receive budget notifications related to the services. Once notifications have been configured they will automatically receive daily, weekly or monthly reports and notifications on where they sit against the budget.
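As a rough sketch of how a budget might be declared: var.budgets is a list of objects (see the inputs table below), but the field names used here are assumptions for illustration only, not the confirmed schema, so verify them against the variable definition before copying:

```hcl
module "account" {
  # Hypothetical field names - check the var.budgets definition in
  # variables.tf for the authoritative schema.
  budgets = [
    {
      name         = "monthly-spend"
      limit_amount = "1000"
      limit_unit   = "USD"
      time_unit    = "MONTHLY"
    }
  ]
}
```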
Tenants are able to provision anomaly detection rules within the designated region. This is useful for ensuring cost awareness and alerting on any unexpected costs.
cost_anomaly_detection = {
enabled = true
monitors = [
{
name = lower("lza-${local.region}")
frequency = "IMMEDIATE"
threshold_expression = [
{
and = {
dimension = {
key = "ANOMALY_TOTAL_IMPACT_ABSOLUTE"
match_options = ["GREATER_THAN_OR_EQUAL"]
values = ["100"]
}
}
},
{
and = {
dimension = {
key = "ANOMALY_TOTAL_IMPACT_PERCENTAGE"
match_options = ["GREATER_THAN_OR_EQUAL"]
values = ["50"]
}
}
}
]
specification = jsonencode({
"And" : [
{
"Dimensions" : {
"Key" : "REGION"
"Values" : [local.region]
}
}
]
})
}
]
}
CloudWatch Cross-Account Observability allows you to centralize monitoring and observability data from multiple AWS accounts. This feature supports two configurations:
- Observability Sink: Configure an account to receive observability data from other accounts
- Observability Source: Configure an account to send its observability data to a central sink account
An observability sink is typically configured in a central monitoring or security account that aggregates observability data from multiple source accounts. The sink allows specified accounts to link their CloudWatch resources.
module "monitoring_account" {
cloudwatch = {
observability_sink = {
enable = true
identifiers = [
"123456789012", # Source account 1
"234567890123", # Source account 2
]
resource_types = [
"AWS::CloudWatch::Metric",
"AWS::CloudWatch::Dashboard",
"AWS::CloudWatch::Alarm",
"AWS::CloudWatch::LogGroup",
"AWS::CloudWatch::LogStream",
]
}
}
}
- `enable`: (Required) A flag indicating if the observability sink should be enabled
- `identifiers`: (Required) A list of AWS account IDs that are allowed to link their resources to this sink
- `resource_types`: (Optional) A list of CloudWatch resource types that can be linked to the sink. Defaults to: `AWS::CloudWatch::Metric`, `AWS::CloudWatch::Dashboard`, `AWS::CloudWatch::Alarm`, `AWS::CloudWatch::LogGroup`, `AWS::CloudWatch::LogStream`
An observability source is configured in accounts that need to send their CloudWatch data to a central sink account. This allows centralized monitoring and analysis of observability data across multiple accounts.
module "source_account" {
cloudwatch = {
observability_source = {
enable = true
account_id = "123456789012" # The monitoring account ID
sink_identifier = "arn:aws:oam:us-east-1:123456789012:sink/observability-sink"
resource_types = [
"AWS::CloudWatch::Metric",
"AWS::CloudWatch::Dashboard",
"AWS::CloudWatch::Alarm",
"AWS::CloudWatch::LogGroup",
"AWS::CloudWatch::LogStream",
]
}
}
}
- `enable`: (Required) A flag indicating if the observability source should be enabled
- `account_id`: (Required) The AWS account ID of the sink account that will receive the observability data
- `sink_identifier`: (Required) The ARN of the OAM sink in the monitoring account (format: `arn:aws:oam:region:account-id:sink/sink-id`)
- `resource_types`: (Optional) A list of CloudWatch resource types to link to the observability sink. Defaults to: `AWS::CloudWatch::Metric`, `AWS::CloudWatch::Dashboard`, `AWS::CloudWatch::Alarm`, `AWS::CloudWatch::LogGroup`, `AWS::CloudWatch::LogStream`
This example shows how to set up centralized monitoring with a monitoring account and multiple source accounts:
Monitoring Account (Sink):
module "monitoring_account" {
cloudwatch = {
observability_sink = {
enable = true
identifiers = [
"111111111111", # Production account
"222222222222", # Development account
"333333333333", # Staging account
]
}
}
}
Source Account (Production):
module "production_account" {
cloudwatch = {
observability_source = {
enable = true
account_id = "999999999999" # Monitoring account ID
sink_identifier = "arn:aws:oam:us-east-1:999999999999:sink/observability-sink"
}
}
}
Note: The sink must be created first in the monitoring account. Once the sink is created, you can obtain its ARN from the AWS Console or Terraform outputs, and use that ARN in the `sink_identifier` field for all source accounts.
- Centralized Monitoring: Aggregate metrics, logs, and alarms from multiple accounts in a single location
- Unified Dashboards: Create dashboards that span multiple accounts without switching contexts
- Cost Optimization: Reduce duplicate monitoring infrastructure across accounts
- Security: Centralize security monitoring and alerting in a dedicated security account
- Compliance: Simplify compliance reporting by centralizing observability data
CloudWatch Logs Account Subscription Filter Policies allow you to control which log groups can have subscription filters created and what destinations those subscription filters can send logs to. This provides account-level governance for log forwarding and helps ensure compliance with organizational policies.
module "account" {
cloudwatch = {
account_subscriptions = {
"lambda-forwarding" = {
# https://docs.aws.amazon.com/cli/latest/reference/logs/put-account-policy.html
policy = jsonencode({
DestinationArn = aws_lambda_function.test.arn
FilterPattern = "test"
})
selection_criteria = "LogGroupName NOT IN [\"excluded_log_group_name\"]"
}
}
}
}
You can use selection criteria to apply the policy only to specific log groups based on resource attributes:
module "account" {
cloudwatch = {
account_subscriptions = {
"lambda-forwarding" = {
policy = jsonencode({
DestinationArn = aws_lambda_function.test.arn
FilterPattern = "test"
})
selection_criteria = "LogGroupName NOT IN [\"excluded_log_group_name\"]"
}
}
}
}
You can configure multiple subscription filter policies for different destinations or log groups:
module "account" {
cloudwatch = {
account_subscriptions = {
"kinesis-streams" = {
policy = jsonencode({
Statement = [
{
Action = [
"logs:CreateLogDelivery",
"logs:GetLogDelivery",
"logs:UpdateLogDelivery",
"logs:DeleteLogDelivery",
"logs:ListLogDeliveries"
]
Effect = "Allow"
Principal = {
Service = "logs.amazonaws.com"
}
Resource = "arn:aws:logs:*:*:log-delivery:*"
Condition = {
StringEquals = {
"logs:destinationType" = "KinesisStream"
}
}
}
]
Version = "2012-10-17"
})
selection_criteria = "ALL"
}
"firehose-delivery" = {
policy = jsonencode({
Statement = [
{
Action = [
"logs:CreateLogDelivery",
"logs:GetLogDelivery",
"logs:UpdateLogDelivery",
"logs:DeleteLogDelivery",
"logs:ListLogDeliveries"
]
Effect = "Allow"
Principal = {
Service = "logs.amazonaws.com"
}
Resource = "arn:aws:logs:*:*:log-delivery:*"
Condition = {
StringEquals = {
"logs:destinationType" = "Firehose"
}
}
}
]
Version = "2012-10-17"
})
selection_criteria = jsonencode({
LogGroupName = "/aws/application/*"
})
}
}
}
}
- `policy`: (Required) The IAM policy document (as a JSON string) that defines what actions are allowed for subscription filters. The policy must allow the `logs:CreateLogDelivery`, `logs:GetLogDelivery`, `logs:UpdateLogDelivery`, `logs:DeleteLogDelivery` and `logs:ListLogDeliveries` actions.
- `selection_criteria`: (Optional) A JSON string that specifies which log groups the policy applies to. Use `"ALL"` to apply the policy to all log groups, or provide a JSON object with selection criteria such as:
  - `LogGroupName`: Filter by log group name pattern (e.g., `"/aws/lambda/*"`)
  - `ResourceArn`: Filter by log group ARN pattern
The subscription filter policy can control access to the following destination types:
- KinesisStream: Forward logs to Amazon Kinesis Data Streams
- Firehose: Forward logs to Amazon Kinesis Data Firehose
- Lambda: Forward logs to AWS Lambda functions
- Compliance: Ensure only approved destinations can receive log data
- Security: Control which log groups can forward logs to external systems
- Cost Management: Restrict log forwarding to specific destinations to control costs
- Governance: Enforce organizational policies on log data handling
Note: Account subscription filter policies are account-level policies that apply to all log groups in the account (or those matching the selection criteria). They work in conjunction with resource-based policies on individual log groups.
Tenants are able to provision networks within the designated region, while allowing the platform to decide how these are wired up into the network topology of the organization, i.e. ensuring they are using IPAM, are connected to the transit gateway, egress via the central VPC and so forth.
All networks are defined within the var.networks variable, an example of this can be found below:
networks = {
my_vpc_name = {
subnets = {
private = {
netmask = 28
}
database = {
netmask = 22
}
}
vpc = {
availability_zones = 2
enable_ipam = true
enable_transit_gateway = true
}
}
my_second_vpc = {
subnets = {
private = {
netmask = 28
}
}
vpc = {
enable_ipam = true
enable_transit_gateway = true
}
}
}
When a network defines the enable_transit_gateway boolean, it is the responsibility of the consumer of this module to provide the correct transit gateway id and any default routing requirements.
Assuming the following configuration:
module "my_account" {
...
networks = {
dev = {
vpc = {
enable_transit_gateway = true
ipam_pool_name = "development"
netmask = 21
}
transit_gateway = {
gateway_id = "tgw-1234567890"
gateway_routes = {
private = "10.0.0.0/8"
}
}
subnets = {
private = {
netmask = 24
}
}
},
}
We can also create transit gateway route table associations by extending the above configuration:
module "my_account" {
...
networks = {
dev = {
vpc = {
enable_transit_gateway = true
ipam_pool_name = "development"
netmask = 21
}
transit_gateway = {
gateway_id = "tgw-1234567890"
gateway_routes = {
private = "10.0.0.0/8"
}
gateway_route_table_id = "rtb-1234567890"
}
}
}
}
The terraform-docs utility is used to generate this README. Follow the steps below to update it:
- Make changes to the `.terraform-docs.yml` file
- Fetch the `terraform-docs` binary (https://terraform-docs.io/user-guide/installation/)
- Run `terraform-docs markdown table --output-file ${PWD}/README.md --output-mode inject .`
| Name | Version |
|---|---|
| aws | >= 6.0.0 |
| aws.identity | >= 6.0.0 |
| aws.management | >= 6.0.0 |
| aws.network | >= 6.0.0 |
| aws.tenant | >= 6.0.0 |
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| environment | The environment in which to provision resources | string |
n/a | yes |
| git_repository | The git repository to use for the account | string |
n/a | yes |
| home_region | The home region in which to provision global resources | string |
n/a | yes |
| owner | The owner of the product, and injected into all resource tags | string |
n/a | yes |
| product | The name of the product to provision resources and inject into all resource tags | string |
n/a | yes |
| tags | A collection of tags to apply to resources | map(string) |
n/a | yes |
| account_alias | The account alias to apply to the account | string |
null |
no |
| aws_config | Account specific configuration for AWS Config | object({ |
{ |
no |
| budgets | A collection of budgets to provision | list(object({ |
[] |
no |
| central_dns | Configuration for the hub used to centrally resolved dns requests | object({ |
{ |
no |
| cloudwatch | Configuration for the CloudWatch service | object({ |
{ |
no |
| cost_anomaly_detection | A collection of cost anomaly detection monitors to apply to the account | object({ |
{ |
no |
| cost_center | The cost center of the product, and injected into all resource tags | string |
null |
no |
| dns | A collection of DNS zones to provision and associate with networks | map(object({ |
{} |
no |
| ebs_encryption | A collection of EBS encryption settings to apply to the account | object({ |
null |
no |
| guardduty | A collection of GuardDuty settings to apply to the account | object({ |
null |
no |
| iam_access_analyzer | The IAM access analyzer configuration to apply to the account | object({ |
{ |
no |
| iam_groups | A collection of IAM groups to apply to the account | list(object({ |
[] |
no |
| iam_instance_profiles | A collection of IAM instance profiles to apply to the account | map(object({ |
{} |
no |
| iam_password_policy | The IAM password policy to apply to the account | object({ |
{} |
no |
| iam_policies | A collection of IAM policies to apply to the account | map(object({ |
{} |
no |
| iam_roles | A collection of IAM roles to apply to the account | map(object({ |
{} |
no |
| iam_service_linked_roles | A collection of service linked roles to apply to the account | list(string) |
[ |
no |
| iam_users | A collection of IAM users to apply to the account | list(object({ |
[] |
no |
| identity_center_permitted_roles | A map of permitted SSO roles, with the name of the permitted SSO role as the key, and value the permissionset | map(string) |
{ |
no |
| include_iam_roles | Collection of IAM roles to include in the account | object({ |
{ |
no |
| infrastructure_repository | The infrastructure repository provisions and configures a pipeline repository for the landing zone | object({ |
null |
no |
| inspector | Configuration for the AWS Inspector service | object({ |
null |
no |
| kms_administrator | Configuration for the default kms administrator role to use for the account | object({ |
{ |
no |
| kms_key | Configuration for the default kms encryption key to use for the account (per region) | object({ |
{ |
no |
| macie | A collection of Macie settings to apply to the account | object({ |
null |
no |
| networks | A collection of networks to provision within the designated region | map(object({ |
{} |
no |
| notifications | Configuration for the notifications to the owner of the account | object({ |
{ |
no |
| rbac | Provides the ability to associate one of more groups with a sso role in the account | map(object({ |
{} |
no |
| resource_groups | Configuration for the resource groups service | map(object({ |
{} |
no |
| s3_block_public_access | A collection of S3 public block access settings to apply to the account | object({ |
{ |
no |
| service_control_policies | Provides the ability to associate one of more service control policies with an account | map(object({ |
{} |
no |
| ssm | Configuration for the SSM service | object({ |
{} |
no |
| Name | Description |
|---|---|
| account_id | The account id where the pipeline is running |
| auditor_account_id | The account id for the audit account |
| environment | The environment name for the tenant |
| infrastructure_repository_git_clone_url | The URL of the infrastructure repository for the landing zone |
| infrastructure_repository_url | The SSH URL of the infrastructure repository for the landing zone |
| ipam_pools_by_name | A map of the ipam pool name to id |
| log_archive_account_id | The account id for the log archive account |
| networks | A map of the network name to network details |
| private_hosted_zones | A map of the private hosted zones |
| private_hosted_zones_by_id | A map of the hosted zone name to id |
| sns_notification_arn | The SNS topic ARN for notifications |
| sns_notification_name | Name of the SNS topic used to channel notifications |
| tags | The tags to apply to all resources |
| tenant_account_id | The account id of the tenant account |
| vpc_ids | A map of the network name to vpc id |