WARNING: IPv4 forwarding is disabled. Networking will not work

I have a job that runs commands inside a Docker container (machine executor).

It fails because it cannot download some test data via curl. Exactly the same code worked fine on CircleCI on Friday (the new commit contained only documentation changes).

In the output, I see

docker run -it --rm \
  -v "$(pwd)":/build \
  xxx/build:latest \
  /build/support/download_test_data.sh

WARNING: IPv4 forwarding is disabled. Networking will not work.

Are you using the Docker executor or the Machine executor? What parent image are you using for your build server? What image is being run with the docker run command (the xxx/build:latest)? What is the parent image for xxx/build:latest?

Please update your question to include that information. In the meantime, can you pull this image to your local machine and see if networking is working there?
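
Something along these lines should be enough to check locally (a sketch only - the URL is just a placeholder, and I am assuming curl is available in the image since download_test_data.sh uses it):

docker pull xxx/build:latest

# quick connectivity check from inside the container
docker run --rm xxx/build:latest curl -sSI https://example.com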

Are you using the Docker executor or the Machine executor?

As mentioned in the post, the machine executor.

What parent image are you using for your build server?

Could you clarify what exactly this means? Is it relevant for the machine executor?

What image is being run with the docker run command (the xxx/build:latest)? What is the parent image for xxx/build:latest?

A custom image that we build in a previous step, based on nvidia/cuda:10.1-devel-ubuntu16.04.

Please update your question to include that information. In the meantime, can you pull this image to your local machine and see if networking is working there?

I know it works on CircleCI because the same code has run successfully on CircleCI before (as described in my post). What concerns me is that the same code sometimes works and sometimes doesn’t, probably depending on some networking conditions on CircleCI.

That is the purpose of my questions - is it really the same code? When you build your image on top of nvidia/cuda:10.1-devel-ubuntu16.04, it will pull in any upstream changes to that Nvidia image. Have a look at the image digest (a sha256 hash) when you pull/build to see whether it changed at the point things broke.
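
For example, something like this, run wherever the image gets built (a sketch, not something I can verify against your setup):

docker pull nvidia/cuda:10.1-devel-ubuntu16.04

# list local images together with their registry digests
docker images --digests nvidia/cuda

# or inspect a single image directly
docker inspect --format '{{index .RepoDigests 0}}' nvidia/cuda:10.1-devel-ubuntu16.04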

The “devel” part of the image name refers to the development variant (it ships the full CUDA toolkit rather than just the runtime), and the tag is not pinned to a digest, so its contents can still change upstream. Are you able to pin to a specific digest, or base your work on a more stable release?

What is in download_test_data.sh?

It’s possible of course, but given the above, I wonder if you should rule out changes on your side first.

I just did a search for this and found that this is a Docker error (it could equally have been output from something in the shell script). I wonder if the version of Docker you are using has changed - are you installing it in the Machine build server yourself?

Yep, it is a Docker error.
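
From what I can tell, the daemon prints this warning when IPv4 forwarding is switched off in the host kernel. A quick check on the build machine would look roughly like this (a sketch only - I have not run it on the CircleCI host):

# 1 means the host allows IPv4 forwarding; Docker warns when it is 0
sysctl net.ipv4.ip_forward

# enabling it (needs root); normally the Docker daemon or the host image takes care of this
sudo sysctl -w net.ipv4.ip_forward=1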

My job definition looks like this:

eval-tests:
  working_directory: ~/build
  machine: true
  steps:
    - run:
        name: Create workspace dir
        command: |
          sudo mkdir /build
          sudo chown $(whoami):$(whoami) /build

    - attach_workspace:
        at: /build

    - run:
        name: Load Docker image layer cache
        command: |
          set +o pipefail
          # from a previous job in the same workflow
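          # with pipefail off and `| true`, this step still passes if the cache tar is missing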
          docker load -i /build/data/docker-build.tar | true

    - run:
        name: Chown and move workspace dir
        command: |
          sudo chown $(whoami):$(whoami) /build
          rmdir ~/build
          sudo mv /build ~/build

    - run:
        name: Download test data
        command: |
          sudo chown -R 1000:1000 .
          docker run -it --rm \
            -v "$(pwd)":/build \
            xxx/build:latest \
            /build/support/download_test_data.sh # this fails
...

It does not define any Docker version or anything like that. Maybe CircleCI changed the underlying Docker version? I checked again: this exact configuration ran successfully on Friday, and the only diff between that run and the failed job was a change to the README.

It may have changed, but note that CircleCI may not have done that - it could come from an upstream change. I think the images are based on Debian releases (and you can change them, see the docs).

What version of Docker are you running presently, and is there any evidence in past build logs as to what version worked previously? It would be good to rule that out.

If you don’t have it already, I suggest adding a command to your run steps that outputs the Docker version, so you can keep an eye on it in the future.
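
For example, a line like this at the start of the failing step (sketch only):

# record the Docker client and server versions in the build output
docker version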

I re-ran the build (simply by clicking the button), and it worked.
So I assume it was some random one-time network bork. We’ll probably never know :stuck_out_tongue: