Sharing application data between two containers


We have been using CircleCI for quite a while now, and we have always used Apache-based images with mod_php. We are now in the process of switching to Nginx+PHP-FPM, which means having two different containers for Nginx and PHP.

In our local dev environment and on Kubernetes, I can easily share data between the two containers, either through a directory mapping or a shared volume.

Now, on CircleCI, I have a main container (my PHP-FPM container), which does the git checkout. The Nginx container runs alongside it (like MySQL, Elasticsearch, etc.). Is there a way to share my application data between those two containers? Or even to copy it to the Nginx container?

My container setup so far is:

    resource_class: large
    parallelism: 6
    docker:
      - image: ${CONTAINER_REG_HOSTNAME}/my-fpm-container
        auth:
          username: ...
          password: ...
        environment:
          - LISTEN_ON_ADDRESS=
      - image: ${CONTAINER_REG_HOSTNAME}/nginx:2.0.0
        auth:
          username: ...
          password: ...
        environment:
          - NGINX_PORT=8090
          - PHP_FPM_HOST=

And the Nginx output is the following:

2024/03/12 10:56:53 [crit] 36#36: *2 realpath() "/var/www/html/public" failed (2: No such file or directory), client:, server: _, request: "GET /<my-route> HTTP/1.1", host: ""

because it is looking for the directory /var/www/html/public, which isn’t there.

Any hint is appreciated,
thanks and best regards,

The CircleCI Docker-based environment loads the first listed image as the primary container, inside which it runs all the ‘run’ shell commands. Any additional images are loaded alongside this primary container.

Such a configuration works well when all the additional images are services that can be controlled by the defined environment variables and accessed via network ports, but there are a number of limitations, as listed here:

Using the Docker execution environment - CircleCI

To be able to control volume mounts, you will need to switch to a machine-based environment. You will then be able to use the complete feature set of the docker command (or Docker Compose) to define your containers, including volumes.

The downside of this change is that all your config.yml-based shell commands will now run at the machine level rather than within the first defined image/container, and so will need to be refactored. You can partly get around this by doing the checkout within the main ‘machine’ environment and making the resulting directory available to all the containers via a volume mount.
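As a rough illustration, the approach above could look something like this in config.yml. This is only a minimal sketch, not a drop-in config: the machine image tag, the container names (fpm, nginx), the network name, the /var/www/html mount point, and the use of your two registry images are all assumptions that you would need to adapt to your setup:

    jobs:
      build:
        machine:
          image: ubuntu-2204:current
        steps:
          - checkout   # checkout happens at the machine level
          - run:
              name: Start PHP-FPM and Nginx sharing the checked-out code
              command: |
                # Shared network so nginx can reach the fpm container by name
                docker network create app
                # Mount the checkout directory into both containers
                docker run -d --name fpm --network app \
                  -v "$PWD":/var/www/html \
                  ${CONTAINER_REG_HOSTNAME}/my-fpm-container
                docker run -d --name nginx --network app \
                  -v "$PWD":/var/www/html \
                  -p 8090:8090 \
                  -e PHP_FPM_HOST=fpm \
                  ${CONTAINER_REG_HOSTNAME}/nginx:2.0.0

Because both containers mount the same host directory, Nginx now finds /var/www/html/public, and subsequent run steps can exercise the application over port 8090.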

Thanks! I have now refactored everything to the machine executor. That does indeed seem to be the solution here. Another advantage is that it streamlines our local and build environments.
