I have a few services (containers) that get deployed to Fargate. The problem is that the new service B requires the new service A to be up and running. As is the case, service B beats service A by a handful of minutes, which means the site is offline/impaired.
Is there a way I can put a script, a check, something in the deploy job of service A that won't finish the job until the service goes green (aka online, up and running)? Or maybe give service B a check so it won't start until service A is up and running?
Or maybe there is just a workflow dependency that can be configured across workflows? What solutions have you guys come up with?
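For reference, this is roughly the kind of check I had in mind for the deploy job. It's just a sketch: it assumes the AWS CLI is available in the job, and `my-cluster` / `service-a` are placeholders for the real names.

```shell
#!/bin/sh
set -e

# Block the deploy job until service A's deployment reaches a steady state.
# "my-cluster" and "service-a" are placeholders for your actual resources.
aws ecs wait services-stable \
  --cluster my-cluster \
  --services service-a

echo "service-a is stable; safe to proceed"
```

As I understand it, `aws ecs wait services-stable` polls `describe-services` until the deployment settles (or fails after roughly 10 minutes of polling), so the job would only go green when the service does.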
Could you give us some more information on how you are deploying these containers?
I think this issue is best solved with the existing tooling and configuration provided by AWS Fargate. Looking through the Fargate documentation, it looks like there are two ways to deploy containers: either have them all live in the same task definition, or have separate task definitions.
Specifically:
You should put multiple containers in the same task definition if:
- Containers share a common lifecycle (that is, they should be launched and terminated together).
It sounds like this is the case for you here. Are you currently following this practice?
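If you do go the single-task-definition route, ECS lets you declare the startup ordering explicitly with `dependsOn`. A rough sketch (container names, images, and the health check command below are all placeholders) might look like:

```json
{
  "family": "site",
  "containerDefinitions": [
    {
      "name": "service-a",
      "image": "example/service-a:latest",
      "essential": true,
      "healthCheck": {
        "command": ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],
        "interval": 30,
        "retries": 3
      }
    },
    {
      "name": "service-b",
      "image": "example/service-b:latest",
      "essential": true,
      "dependsOn": [
        { "containerName": "service-a", "condition": "HEALTHY" }
      ]
    }
  ]
}
```

With this, ECS won't start service B's container until service A's health check passes. Note that the `HEALTHY` condition requires a `healthCheck` on the dependency, and on Fargate `dependsOn` needs platform version 1.3.0 or later.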