Suggestions for debugging ECR healthcheck failure?

I have a custom Python Docker image containing a Flask app that exposes port 5000 and works locally. If I run `docker run -d -p 8080:5000 testing`, a JSON "hello world" response appears at localhost:8080. The nosetests also pass when run in CircleCI as a workflow.

I followed the instructions for deploying to Fargate ecs-ecr, changing the Dockerfile and .circleci/config.yml to match my application. Everything runs except the "Test image" step, which always fails with `curl: (7) Failed to connect to localhost port 8080: Connection refused`. (All of these curl commands succeed locally.) If I comment out that test, the config.yml proceeds to the end, but my ECS services all show an "unhealthy" health check status. I'm guessing this failure and the failing image test share the same cause... any suggestions of other things to try here? Thanks!

My code snippet for the "Test image" step is below, including (commented out) the various other things I tried.

```yaml
- run:
    name: Test image
    command: |
      docker run -d -p 8080:5000 --name built-image $FULL_IMAGE_NAME
      sleep 10
      docker run --network container:built-image appropriate/curl --retry 10 --retry-connrefused http://localhost:8080 | grep "hello"
      # docker exec built-image curl http://localhost:8080 | grep "hello"
      # curl http://localhost:8080 | grep "hello"
```

And the relevant part of my Dockerfile:

```dockerfile
COPY . /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD [""]
```
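For anyone comparing notes, a fuller version of that Dockerfile might look like the sketch below; the base image, the `WORKDIR`, and the `app.py` entrypoint are assumptions on my part, not taken from the snippet above:

```dockerfile
# Sketch only: base image, WORKDIR, and app.py are assumed
FROM python:3.7-slim
WORKDIR /app            # makes requirements.txt resolvable in the RUN step below
COPY . /app
RUN pip install -r requirements.txt
EXPOSE 5000             # documents the Flask port; it does not publish it by itself
ENTRYPOINT ["python"]
CMD ["app.py"]
```

Note that `EXPOSE` is documentation only; ports are actually published with `-p` at `docker run` time.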

Hm, it looks like the `-p 8080:5000` isn't taking effect in the "docker run" step, because I was able to get this test to pass by curling localhost:5000 instead of localhost:8080!

And then also changing the container port to 5000 in the config.yml and in the Terraform config files worked to get the whole app deployed - great!
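In case it helps anyone, the Terraform change was roughly along these lines; the resource, family, and variable names here are illustrative, not my actual ones:

```hcl
# Illustrative sketch: resource and variable names are placeholders
resource "aws_ecs_task_definition" "app" {
  family                = "flask-app"
  container_definitions = <<DEFINITION
[
  {
    "name": "flask-app",
    "image": "${var.full_image_name}",
    "portMappings": [
      { "containerPort": 5000, "hostPort": 5000, "protocol": "tcp" }
    ]
  }
]
DEFINITION
}
```

On Fargate the host port must match the container port, so both are 5000 here.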

But I'd still be curious to know why passing `-p 8080:5000` in the config.yml doesn't behave the same way it does when I pass it to `docker run` on my local machine... Thanks!
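For reference, the variant of the test step that passes for me curls the container port directly (roughly as below, with the same container name and image variable as in my earlier snippet):

```yaml
- run:
    name: Test image
    command: |
      docker run -d -p 8080:5000 --name built-image $FULL_IMAGE_NAME
      sleep 10
      # curl the container's own port (5000), not the published host port (8080)
      docker run --network container:built-image appropriate/curl \
        --retry 10 --retry-connrefused http://localhost:5000 | grep "hello"
```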

You cannot expose ports in CircleCI if you're using your own installation of Docker (which is effectively Docker-in-Docker). I believe this is because the permissions you would need to do so would present a security problem for other customers on the host.

There are two ways to fix this:

  • Use the remote Docker system (via `setup_remote_docker`), where you specify your parent image for the build container; you can spin up secondary containers there.
  • Use Docker Compose to expose a server on the internal Docker Compose network, and consume the service from an additional Compose container. You do not need to publish ports for this approach.
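As a rough sketch of the second option (service and image names are placeholders), the curl container joins the Compose network and reaches the app by service name, with no published ports:

```yaml
# docker-compose.yml sketch: service and image names are placeholders
version: "3"
services:
  web:
    image: testing          # the Flask image built earlier
    # no "ports:" section needed; the internal Compose network is enough
  test:
    image: appropriate/curl
    depends_on:
      - web
    # reach the app by service name on its container port
    command: ["--retry", "10", "--retry-connrefused", "http://web:5000"]
```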

(Your YAML and your Dockerfile could do with being rendered in code formatting. You can use a line of triple-backticks above and below these blocks, to format them in a readable fashion).

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.