Multi-container application stack testing with docker or docker-compose on CircleCI

docker

#1

Hi, I’m interested in using CircleCI to run acceptance tests on a Python-based ETL pipeline. The acceptance tests run a subset of the ETL, which stores the data in a Neo4j data warehouse, and then the tests make assertions about what data is where in the data warehouse using Cypher, the SQL of graph databases.

Unfortunately, Neo4j does not expose Python bindings for an impermanent in-memory graph database that our test suite could run against. We already use Docker containers (and Docker Compose) for development and deployment across our entire application stack, and we have started adding a test database container to that stack for the test suite to connect to, and then blast clean, during its tests.
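
For concreteness, the kind of test-database service we have in mind might look like this in a compose file (just a sketch; the neo4j image and the NEO4J_AUTH=none setting are illustrative assumptions, not our exact configuration):

```yaml
# Hypothetical test-only Neo4j service (v1 compose syntax).
# NEO4J_AUTH=none disables auth so the test suite can connect and wipe data freely.
testdb:
  image: neo4j
  environment:
    - NEO4J_AUTH=none
```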

I was reading through the docs at https://circleci.com/docs/docker for insight on running an application stack with Docker containers on CircleCI for testing purposes. The docs cover testing individual containers from an image build, running a test suite inside a container, and connecting to external services from inside a container, but not how (or whether) it is possible to run a multi-container setup on a CircleCI server. Is this possible? Does CircleCI support docker-compose, and/or multiple containers started with several pre-execute docker run commands?

Are there any gotchas/has anyone else done this before?

Thank you!

Laura


#2

It is definitely possible to run a multi-container setup in a CircleCI build. To achieve that, you would need to download/pull all the necessary images and then run them all with docker-compose, another orchestration tool, or manually.

We don’t pre-install docker-compose in the build image, but you can certainly install it yourself during the build (it should only take a few seconds) by adding the following to your circle.yml file:

dependencies:
  pre:
    - sudo pip install docker-compose

You could run the whole stack by adding the docker-compose up command to the same circle.yml file:

dependencies:
  pre:
    - sudo pip install docker-compose
  post:
    - docker-compose up

Would be great to hear about your experience setting this up.


#3

Thanks! Some gotchas I ran into:

  • existing port bindings on CircleCI servers - our base development compose file, development.yml, binds many of its services to canonical host ports. I hit conflicts with services already running on the CircleCI server when binding to 5672 for RabbitMQ and 3306 for MySQL. Since I don’t actually need the ports published for testing, I first tried splitting our port mappings out into a separate development.ports.yml, for use in the real development environment, via the override paradigm supported by docker-compose 1.5+. Later, for reasons detailed in the next bullet, and to avoid any changes to our existing development process, I instead switched the CircleCI commands to a series of docker-compose run commands, which bypass port publication by default.
  • persistent container processes - one container runs the tests with a finite nosetests command, but its dependent containers (a MySQL db and a Neo4j db) are persistent processes. What I really wanted was for those dependent services to run only as long as nosetests needed them; a simple docker-compose up leaves the persistent processes running on CircleCI’s server indefinitely, causing a timeout. So instead of starting up the whole system with docker-compose up, I call out just the test-specific services I need with docker-compose run, running the persistent ones in detached mode with -d and the finite testing container running the nosetests command in the foreground. Once that stops, I can safely stop all the persistent processes. I ended up using docker stop rather than docker-compose stop, since the latter doesn’t appear to work against the naming convention docker-compose run uses.
  • container logs - to debug anything that might go wrong with the containers, I wanted to stash the Docker logs. I experimented with the docker-compose logs command, but it doesn’t seem to be as robust as plain docker logs and would cause a timeout. I didn’t dig into it too deeply and just decided to use plain docker logs for each of the containers, since I knew the naming convention docker-compose would use for them.
  • artifacts - I need to store artifacts generated by the test command itself, so I mounted the host’s (CircleCI’s) $CIRCLE_TEST_REPORTS dir into my test container and had the container’s command write its output to that shared volume.
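
To illustrate the first gotcha, the override split might look like this (a sketch with one hypothetical db service; the real files define many more):

```yaml
# --- development.yml (base file): no host ports published ---
db:
  image: neo4j

# --- development.ports.yml (override, local development only) ---
# Usage: docker-compose -f development.yml -f development.ports.yml up
db:
  ports:
    - "7474:7474"
```

With the override file omitted on CI, nothing binds host ports, so there is nothing to collide with.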

In short, it looks like this:

circle.yml

machine:
  python:
    version: 2.7.3
  services:
    - docker
dependencies:
  pre:
     - sudo pip install docker-compose
     - sudo mkdir -p $CIRCLE_TEST_REPORTS/coverage_html/
test:
  pre:
    # starting the backend services my test container needs via docker-compose run, which does not publish ports by default
     - docker-compose -f development.yml -f development.test.yml run -d db
     - docker-compose -f development.yml -f development.test.yml run -d testdb
     - docker-compose -f development.yml -f development.test.yml run -d mysqldbdata
     - docker-compose -f development.yml -f development.test.yml run -d mysqldb
  override:
     # docker-compose run was restarting all linked services with their published ports and causing port bind collisions
     # so specifying --no-deps so they don't restart
     - docker-compose -f development.yml -f development.test.yml run --no-deps test
  post:
     # docker-compose stop was not stopping containers started with docker-compose run
     # so using this hackity hack to force stop them all
     - docker stop $(docker ps -a -q)
     # get each log separately - docker-compose logs times out, I think it's trying to stream even though the containers are stopped.
     # depends on the standard naming convention of docker-compose
     - docker logs reponame_db_run_1 > $CIRCLE_TEST_REPORTS/db.log
     - docker logs reponame_testdb_run_1 > $CIRCLE_TEST_REPORTS/testdb.log
     - docker logs reponame_mysqldbdata_run_1 > $CIRCLE_TEST_REPORTS/mysqldbdata.log
     - docker logs reponame_mysqldb_run_1 > $CIRCLE_TEST_REPORTS/mysqldb.log
     - docker logs reponame_test_run_1 > $CIRCLE_TEST_REPORTS/test.log

and a shortened version of my development.test.yml that includes my testing container

test:
    build: .
    dockerfile: Dockerfile-docs
    volumes:
        - .:/reponame
        - $CIRCLE_TEST_REPORTS:/circle_artifacts
    working_dir: /reponame
    links:
        - mysqldb:mysqldb
        - db:db
        - testdb:testdb
    command: nosetests -v --with-xunit --xunit-file=/circle_artifacts/nosetests.xml --with-coverage --cover-html --cover-html-dir=/circle_artifacts/coverage_html/

Hope this helps some other folks! And of course open to more suggestions.

Laura


#4

Thank you very much for sharing this, Laura! Great insight into your setup.


#5

For the last section where you grab the logs: reponame is usually the name of the folder docker-compose is running in. You can override this value with the --project-name option of docker-compose.
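
To make that naming convention concrete, docker-compose names one-off containers as project, then service, then a run counter. A quick sketch in plain shell (no docker needed; myproj and redis are made-up names):

```shell
# docker-compose builds one-off container names as <project>_<service>_run_<n>.
# The project name defaults to the current directory's name unless you pass
# --project-name (or set COMPOSE_PROJECT_NAME).
project=myproj   # e.g. docker-compose --project-name myproj run -d redis
service=redis
echo "${project}_${service}_run_1"   # the name docker logs would need
```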


#6

I’ve tried to make a similar circle.yml:

machine:
    python:
        version: 2.7.3
    services:
        - docker

dependencies:
    cache_directories:
        - ~/docker
    override:
        - sudo pip install docker-compose
        - docker info
        - docker-compose version
        - docker build -t leadbean .

test:
    pre:
        - docker-compose run -d redis
        - docker-compose run -d postgres
    override:
        - docker-compose run --no-deps leadbean npm test
    post:
        - docker stop $(docker ps -a -q)
        - docker logs leadbean_redis_run_1    > $CIRCLE_TEST_REPORTS/redis.log
        - docker logs leadbean_postgres_run_1 > $CIRCLE_TEST_REPORTS/postgres.log
        - docker logs leadbean_leadbean_run_1 > $CIRCLE_TEST_REPORTS/leadbean.log

but the leadbean container doesn’t see the other containers:

Error: Redis connection to redis:6379 failed - getaddrinfo ENOTFOUND redis redis:6379 (error from node.js)

docker-compose.yml:

...
leadbean:
    image: leadbean
    container_name: leadbean
    expose:
        - "80"
    links:
        - redis:redis
        - postgres:postgres
...

How can I resolve this problem?


#7

Thank you so much for your input! Here are my CircleCI config and docker-compose files. Hope this helps somebody:

machine:
  pre:
    - curl -sSL https://s3.amazonaws.com/circle-downloads/install-circleci-docker.sh | bash -s -- 1.10.0
  services:
    - docker
  environment:
      MOCHA_FILE: "$CIRCLE_TEST_REPORTS/junit/test-results.xml"

dependencies:
  pre:
    - pip install docker-compose
    - sudo mkdir -p $CIRCLE_TEST_REPORTS/junit/
  override:
    - docker info
    - npm install

test:
  pre:
    - docker-compose up -d
  override:
    - docker-compose -f docker-compose.test.yml up
  post:
    # docker-compose stop was not stopping containers started with docker-compose run
    # so using this hackity hack to force stop them all
    - docker stop $(docker ps -a -q)
    # get each log separately - docker-compose logs times out, I think it's trying to stream even though the containers are stopped.
    # depends on the standard naming convention of docker-compose
    # - docker logs reponame_db_run_1 > $CIRCLE_TEST_REPORTS/db.log
    - if [[ -e test-results.xml ]]; then sudo cp test-results.xml $CIRCLE_TEST_REPORTS/junit/test-results.xml; fi

and my docker compose file for running the app:

version: '2.0'

services:
  app:
    image: borntraegermarc/app
    container_name: app
    build: .
    ports:
      - "5000:5000"
      - "5001:5001"
    depends_on:
      - mongo
    volumes:
      - .:/home/app
    environment:
      - APP_ENV=dev # dev / beta / production
      - KEY_LOCATION=./misc/ssl/localhost/server.key
      - CERT_LOCATION=./misc/ssl/localhost/server.crt
      - MONGO_HOST=komed-mongo
      - MONGO_DATABASE=komed-health
      - PORT=5000
      - SSL_PORT=5001

  komed-mongo:
    container_name: komed-mongo
    image: mongo:3.2.11

and my docker compose to run the tests:

version: '2.0'

services:
  test-integration:
    image: borntraegermarc/test-integration
    container_name: test-integration
    build:
      context: .
      dockerfile: Dockerfile.test.integration
    external_links:
      - app
      - mongo
    volumes:
      - .:/home/app
    environment:
      - HOST_URL=app
      - HOST_PORT=5001
      - MONGO_HOST=mongo
      - MONGO_DATABASE=health

Basically, I run a Node.js app and execute my mocha tests like this.


#8

Can anyone confirm this is still working with CircleCI? I’m trying to get a build working, but I’m getting the following error when running docker-compose.

ERROR: In file './docker-compose.yml' service 'version' doesn't have any configuration options. All top level keys in your docker-compose.yml must map to a dictionary of configuration options.

This appears to be because docker-compose 1.5.2 is installed on the build host; however, upgrading docker-engine via apt breaks the Docker service. Has something changed with Circle? Is there something I’m obviously doing wrong here? I’m planning to move the docker-compose.yml file to v1 syntax and see if that works, but I’d love not to have to do that.


#9

I believe this is related to breaking changes with the latest version of docker-compose.

Can you try downgrading to 1.9.x and pinning the dependency?
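
Pinning in circle.yml might look like this (1.9.0 is illustrative; use whichever release matches your compose file format):

```yaml
dependencies:
  pre:
    - sudo pip install docker-compose==1.9.0
```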


#10

For anyone finding this thread these days, please check out this document on how to use docker-compose with CircleCI 2.0. This will be the best way to have this supported going forward.
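
For a rough idea, a minimal 2.0 config using the machine executor might look like this (job and service names here are illustrative, not from any specific project):

```yaml
# .circleci/config.yml
version: 2
jobs:
  build:
    machine: true   # full VM, so docker and docker-compose work natively
    steps:
      - checkout
      - run: docker-compose up -d db     # hypothetical backing service
      - run: docker-compose run test     # finite test container in the foreground
      - run: docker-compose down         # tear everything down
```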


#11

Now we have the choice between:

  • the docker executor, with remote Docker and all the pains around volumes, networking…
  • machine execution, slow and maybe costly in the future

What about a third option: bootstrap the build “inside” a docker-compose setup:

  • it means we first need to check out the code
  • then we bootstrap the job again, based on a docker-compose file found in the repository
  • this new “build” container shares networks/volumes with the other services found in the docker-compose file
  • all further steps are run inside the “build” container

That would be my ideal workflow.


Docker vs machine: best of both worlds with docker-compose?
#12