This repository was archived by the owner on Jul 3, 2023. It is now read-only.

Questions about managing service/task revisions in combination with continuous deployment #4

Opened by @jhedev

I'm not sure if this is the right place to ask these questions, but since there is no comments section on the blog post I figured this may be the best place to ask.

As the title suggests, my questions are about continuous deployment of services/tasks. Currently I have a similar setup to yours, but I ended up not managing the ECS services and tasks through Terraform because I wanted to continuously deploy my applications. So I build a Docker container for each commit to master, tag it with a version, and update the task definition/service accordingly through the CLI.
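For illustration, a minimal sketch of that per-commit flow (image name, task family, cluster, and service names are placeholders, not from the original post):

VERSION=1.2.3   # or derived from the commit
docker build -t myorg/app:$VERSION .
docker push myorg/app:$VERSION
# register a new task definition revision that references the new tag
aws ecs register-task-definition --family app --container-definitions file://containers.json
# point the service at the newest revision of the family
aws ecs update-service --cluster my-cluster --service app --task-definition app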

Now I'm curious how you do this, or would do this. It looks like you track the services and task revisions through Terraform, and updating the Terraform files for every version is not really practical when deploying this frequently.

Activity

achille-roussel (Contributor) commented on Jun 17, 2016

This is a hard problem to solve, and the answer depends on the use case. We have continuous deployment set up for some of our services as well; the way we avoid having Terraform drift too far out of sync is by using wildcard tags on the Docker images.

For example, if we have a service at version 1.2.3, the Docker image will be tagged with 1.2.3, 1.2.x, and 1.x.
In the Terraform config we reference 1.x, so it's effectively the "latest" v1 of the service, while continuous integration explicitly sets the version to 1.2.3 in the ECS task definition.
That way, if you re-run the Terraform config it won't deploy an older version... but there's still a risk of deploying something that's not production-ready if no continuous deployment was set up.
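For concreteness, a sketch of that tagging step: CI builds once and pushes the same image under all three tags (the image name here is a placeholder):

docker build -t myorg/service:1.2.3 .
docker tag myorg/service:1.2.3 myorg/service:1.2.x
docker tag myorg/service:1.2.3 myorg/service:1.x
docker push myorg/service:1.2.3
docker push myorg/service:1.2.x
docker push myorg/service:1.x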

This isn't perfect and we're still experimenting with it; happy to hear how others are tackling this problem as well!

jhedev (Author) commented on Jun 20, 2016

Thanks for your answer.

I'll keep you updated if I find a better way!

shinzui commented on Jun 24, 2016

I've been experimenting with tagging the Docker image with git describe and using git describe to set the Terraform variable. That only works, though, if your Terraform config for the service lives in the same repo as its code.
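A sketch of that approach (the image name and the Terraform variable name are illustrative assumptions):

# derive a version from the most recent tag reachable from HEAD
VERSION=$(git describe --tags --always)
docker build -t myorg/service:$VERSION .
docker push myorg/service:$VERSION
# feed the same value into the Terraform config
terraform apply -var "version=$VERSION"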

gregwebs (Contributor) commented on Aug 12, 2016

I am overwriting a tag when I want to deploy; this seems easiest for normal workflows. Note that if you overwrite your tag, the ecs-agent will just pull it down, and the next time the task is started the new image will be used. So you definitely don't want to push the tag until you are starting your app deployment. The same tag is used in Terraform so that it cannot be out of sync with the deployment.
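A minimal sketch of that flow (image name and tag are placeholders):

docker build -t myorg/service:prod .
# pushing overwrites the existing prod tag; the ecs-agent pulls it
# the next time a task starts, so only push when the deployment begins
docker push myorg/service:prod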

I do need to develop the ability to roll back quickly. Here is my plan: also publish actual version tags.
In terms of the tags mentioned here, I am advocating pushing both 1.x and 1.2.3, but only referencing 1.x in the task definition, for both the deployment and the Terraform config. Rollback then means pushing 1.2.2 as 1.x and deploying. Better terminology for this approach is to use a "latest" tag in place of 1.x; rather than "latest" you probably want an environment name, i.e. prod. You need to keep track of successfully deployed versions to know what to roll back to (assuming our version tags are also git tags and not all of them are successfully deployed). It is also possible to just maintain a prod-previous tag pointing at the last successful deployment.
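A sketch of that rollback bookkeeping (image name and tags are illustrative):

# before deploying, save the current prod tag as prod-previous
docker pull myorg/service:prod
docker tag myorg/service:prod myorg/service:prod-previous
docker push myorg/service:prod-previous
# deploy 1.2.3 by re-pointing prod at it
docker tag myorg/service:1.2.3 myorg/service:prod
docker push myorg/service:prod
# rollback: re-point prod at the saved previous image and redeploy
docker pull myorg/service:prod-previous
docker tag myorg/service:prod-previous myorg/service:prod
docker push myorg/service:prod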

gregwebs (Contributor) commented on Sep 9, 2016

There is a PR against Terraform for being able to pull in task definitions as data sources:
hashicorp/terraform#8509

I forked Terraform and merged it: https://github.com/kariusdx/terraform

andersonkyle (Contributor) commented on Mar 1, 2017

This was finally merged and released in Terraform 0.8.6.

kc-dot-io commented on Mar 23, 2017

Has anyone in this thread evolved their process since the 0.8.6 release?

I'm currently trying to determine the best approach for this, and from what I've seen the best strategy isn't clear. I've tried tainting the resource, but that seems to cause the Terraform stack some problems. The tagging strategy you have all mentioned makes good sense to me, but I'm unclear about the best way to "restart" the service so that it picks up the new image. Is that simply a matter of changing the tag in Terraform and applying, or are you restarting the service via the AWS console/CLI in some way? Thanks for your help.

gregwebs (Contributor) commented on Mar 26, 2017

I am now tagging my Docker image with the git revision and using that. I also update the latest tag (actually it's a per-environment tag) to point to that.
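A sketch of that scheme (image name and environment tag are placeholders):

REV=$(git rev-parse --short HEAD)
docker build -t myorg/service:$REV .
docker push myorg/service:$REV
# move the per-environment tag to the same image
docker tag myorg/service:$REV myorg/service:prod
docker push myorg/service:prod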

nathanielks (Contributor) commented on Apr 21, 2017

I also tag my image with the git revision. If you wanted to use latest as the image tag and have Terraform update your task definition, you can taint the task definition resource to force Terraform to create a new revision, which will in turn tell ECS to deploy a new revision of the task definition (even though the version tag is still the same).

nathanielks (Contributor) commented on Apr 21, 2017

If you have your web service module defined like this:

module "some_web_service" {
  source  = "github.com/segmentio/stack//service"
  version = "latest"
  // trimmed...
}

You can taint it and force a new revision like so:

terraform taint -module=some_web_service.task aws_ecs_task_definition.main
terraform plan -out plan
terraform apply plan
egarbi commented on Jun 21, 2017

This is what I'm doing, straight from Jenkins:

data "aws_ecs_task_definition" "main" {
  task_definition = "${var.name}-${data.terraform_remote_state.vpc.environment}"
}

module "main" {
  source          = "git::ssh://git@bitbucket.org/ldfrtm/stack//service-no-task"
  name            = "${var.name}"
  subnet_ids      = ["${data.terraform_remote_state.vpc.internal_subnets}"]
  security_groups = ["${data.terraform_remote_state.vpc.internal_elb}"]
  log_bucket      = "${data.terraform_remote_state.vpc.log_bucket_id}"
  zone_id         = "${data.terraform_remote_state.vpc.zone_id}"
  desired_count   = "${lookup(var.desired_count, terraform.env)}"
  task_definition = "${data.aws_ecs_task_definition.main.family}:${data.aws_ecs_task_definition.main.revision}"
  healthcheck     = "/health"
  environment     = "${data.terraform_remote_state.vpc.environment}"
  cluster         = "${data.terraform_remote_state.vpc.cluster}"
  iam_role        = "${data.terraform_remote_state.vpc.iam_role}"
  vpc_id          = "${data.terraform_remote_state.vpc.vpc_id}"
}

where data.terraform_remote_state.vpc is basically a data source reading the remote state of the stack module.
The task definition is created in a previous step using the AWS CLI.
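That previous step might look roughly like this (the family name and the container definitions file are assumptions, chosen to match the "${var.name}-${environment}" family referenced above):

# register a new revision; the aws_ecs_task_definition data source
# then resolves the latest revision of this family on the next run
aws ecs register-task-definition \
  --family myservice-production \
  --container-definitions file://containers.json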
