Description
SDK version
v2.16.0
Use-cases
Deploy the following config:
```hcl
resource "azurerm_resource_group" "test" {
  name     = "mgd-test123"
  location = "eastus2"
}
```
The default timeouts are set:
```shell
$ cat terraform.tfstate | jq '.resources | .[] | select(.type == "azurerm_resource_group") | .instances[0].private' | tr -d '"' | base64 -d | jq '.["e2bfb730-ecaa-11e6-8f88-34363bc7c4c0"]'
{
  "create": 5400000000000,
  "delete": 5400000000000,
  "read": 300000000000,
  "update": 5400000000000
}
```
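The stored values are nanoseconds (Go `time.Duration` values); a quick conversion of the defaults shown above:

```python
# The timeouts stored in state are nanoseconds (Go time.Duration values).
NS_PER_MINUTE = 60 * 1_000_000_000

defaults = {
    "create": 5_400_000_000_000,
    "delete": 5_400_000_000_000,
    "read": 300_000_000_000,
    "update": 5_400_000_000_000,
}
for op, ns in defaults.items():
    print(f"{op}: {ns // NS_PER_MINUTE} minutes")
# read is 5 minutes; create/update/delete are 90 minutes
```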
Adding a `timeouts` block with `read = "10m"` and then running a refresh, the stored meta in state is not changed for the read timeout:
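For reference, a sketch of the updated config (the `timeouts` nested block is standard Terraform syntax; all other operations keep their defaults):

```hcl
resource "azurerm_resource_group" "test" {
  name     = "mgd-test123"
  location = "eastus2"

  timeouts {
    read = "10m"
  }
}
```

The refresh below is run against this config.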
```shell
$ tf refresh
azurerm_resource_group.test: Refreshing state... [id=/subscriptions/67a9759d-d099-4aa8-8675-e6cfd669c3f4/resourceGroups/mgd-test123]
$ cat terraform.tfstate | jq '.resources | .[] | select(.type == "azurerm_resource_group") | .instances[0].private' | tr -d '"' | base64 -d | jq '.["e2bfb730-ecaa-11e6-8f88-34363bc7c4c0"]'
{
  "create": 5400000000000,
  "delete": 5400000000000,
  "read": 300000000000,
  "update": 5400000000000
}
```
Also, `terraform plan` shows no diff. The only way to update the timeout is to change something in the resource to trigger an apply:
```shell
$ tf apply -auto-approve
...
Terraform will perform the following actions:

  # azurerm_resource_group.test will be updated in-place
  ~ resource "azurerm_resource_group" "test" {
        id   = "/subscriptions/0000/resourceGroups/mgd-test123"
        name = "mgd-test123"
      ~ tags = {
          + "foo" = "bar"
        }
        # (1 unchanged attribute hidden)

      + timeouts {
          + read = "10m"
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
$ cat terraform.tfstate | jq '.resources | .[] | select(.type == "azurerm_resource_group") | .instances[0].private' | tr -d '"' | base64 -d | jq '.["e2bfb730-ecaa-11e6-8f88-34363bc7c4c0"]'
{
  "create": 5400000000000,
  "delete": 5400000000000,
  "read": 600000000000,
  "update": 5400000000000
}
```
This is fine in most cases, as simply updating the timeout for a resource is meaningless until some operation happens to the resource, i.e. on apply.
However, there is an unfortunate fact: the new timeout only takes effect from the plan stage (`PlanResourceChange` in terms of the proto) of the apply onwards, not before. That is, the read (`ReadResource` in terms of the proto) during refresh still uses the old read timeout. This causes issues like hashicorp/terraform-provider-azurerm#14213, where users manage a collection of resources of the same type whose default timeout is not long enough to finish the read. In that case, changing the read timeout does not help `terraform apply`/`terraform plan`, as the refresh step still uses the old timeout and therefore still times out.
The workarounds for this are:
- Either make a dummy change to all these resources to trigger an apply (with `-refresh=false` to avoid refreshing)
- Or recreate all these resources with the timeout set before apply
Neither solution is ideal; it would be great if Terraform could detect the diff in `timeouts` and run an apply to update it.