How to change node version in CircleCI 2.0?

nodejs
yarn

#1

I can’t seem to properly change the node version on a CircleCI VM image.

The default node version is 6.1.0: https://raw.githubusercontent.com/circleci/image-builder/picard-vm-image/provision.sh. To use yarn, however, we need 6.2.2.

Node: ^4.8.0 || ^5.7.0 || ^6.2.2 || ^8.0.0

I tried to add a step like:

- run:
    name: Install node@6.2.2 (need right version for `yarn`)
    command: |
      set +e              # https://github.com/creationix/nvm/issues/993#issue-130348877
      curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.5/install.sh | bash
      export NVM_DIR="/opt/circleci/.nvm"
      [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
      nvm install v6.2.2
      nvm alias default v6.2.2

But this only changed the node version for this step. The node version for the next step is still 6.1.0. Apparently each step runs in its own shell, so nothing carries over? Very confusing.

So then I tried to write to $BASH_ENV, which should be sourced before each step:

environment:
  BASH_ENV: "~/.bashrc"
...      
- run:
    name: Install node@6.2.2 (need right version for `yarn`)
    command: |
      set +e             
      curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.5/install.sh | bash
      echo 'export NVM_DIR="/opt/circleci/.nvm"' >> $BASH_ENV
      echo '[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"' >> $BASH_ENV
      echo 'nvm install v6.2.2' >> $BASH_ENV
      echo 'nvm alias default v6.2.2' >> $BASH_ENV

But now builds are hanging at the very first step that runs bash and I can’t SSH into the build.


#2

There are two issues.

  1. Each command runs in its own shell, so running nvm alias default only applies to the current step, as you have noticed.

  2. BASH_ENV is a bit confusing. The environment variable already exists, but you must touch the file it points to in order to use it.

Remove the environment: key and add touch $BASH_ENV after set +e.

You also need to set NVM_DIR to $HOME instead of /opt:

- run:
    name: Install node@6.2.2 (need right version for `yarn`)
    command: |
      set +e             
      touch $BASH_ENV  
      curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.5/install.sh | bash
      echo 'export NVM_DIR="$HOME/.nvm"' >> $BASH_ENV
      echo '[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"' >> $BASH_ENV
      echo 'nvm install v6.2.2' >> $BASH_ENV
      echo 'nvm alias default v6.2.2' >> $BASH_ENV

You can see this working here: https://circleci.com/gh/levlaz/circleci-sandbox/9


#3

Anyone who might run into this in the future you can see the config for this issue here: https://github.com/levlaz/circleci-sandbox/tree/using_node_6.2


#4

Thanks for the late-night reply :grin:.

I was able to get this working with:

- run:
    name: Install node@6.2.2 (need right version for `yarn`)
    command: |
      set +e             
      curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.5/install.sh | bash
      export NVM_DIR="/opt/circleci/.nvm"
      [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
      nvm install v6.2.2
      nvm alias default v6.2.2
      
      # Each step uses the same `$BASH_ENV`, so need to modify it
      echo 'export NVM_DIR="/opt/circleci/.nvm"' >> $BASH_ENV
      echo "[ -s \"$NVM_DIR/nvm.sh\" ] && . \"$NVM_DIR/nvm.sh\"" >> $BASH_ENV

No need to touch $BASH_ENV; it already exists and each step uses the same one.
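To make the sharing concrete, here is a minimal two-step sketch (FOO is just an illustrative variable, not from the thread); because $BASH_ENV is sourced at the start of every subsequent step, anything appended to it persists:

```yaml
- run: echo 'export FOO=bar' >> $BASH_ENV
# BASH_ENV is sourced before each later step runs,
# so FOO is visible here without re-exporting it
- run: echo $FOO
```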

Also not sure why it’s necessary to set the NVM_DIR to $HOME instead of /opt. I took export NVM_DIR="/opt/circleci/.nvm" from nvm's own output.

Edit: Oh, I think I see the difference: I’m using machine executor, you’re using docker. machine is used b/c of this issue w/ docker-compose: Running tests via docker-compose

:thinking: Your whole setup’s unnecessary w/ docker executor. Just use circleci/node:6.11, which meets yarn's requirements. It’s the VM’s built-in 6.1.0 that’s the problem.
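For the docker executor, that would look roughly like this (a minimal sketch; the job layout and steps are illustrative):

```yaml
version: 2
jobs:
  build:
    docker:
      # circleci/node:6.11 already satisfies yarn's engine
      # requirement (^6.2.2), so no nvm workaround is needed
      - image: circleci/node:6.11
    steps:
      - checkout
      - run: node --version
      - run: yarn install
```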


#5

No need to touch $BASH_ENV; it already exists and each step uses the same one.

Good to know. BASH_ENV is confusing and I can’t wait till we support proper variable interpolation.

:thinking: Your whole setup’s unnecessary w/ docker executor. Just use circleci/node:6.11, which meets yarn’s requirements. It’s the VM’s built-in 6.1.0 that’s the problem.

Thanks! My “setup” was just a dummy repo to debug your environment. Sorry I didn’t realize that you were using the machine executor.


#6

Speaking of which, what is the reason that you are using the machine executor? Just curious.


#7

I am blind. Sorry :frowning:

Just saw that you said that.


#8

You should be able to use docker-compose with the remote_docker instead of machine. This will allow you to keep using docker images instead of the VM. Did you give this a try? https://circleci.com/docs/2.0/docker-compose/#nav-button ?
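A minimal sketch of that approach (the image name and compose file are placeholders, not from the thread):

```yaml
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:6.11
    steps:
      - checkout
      # provisions a remote Docker engine; docker and docker-compose
      # commands in later steps talk to it, not to the build container
      - setup_remote_docker
      - run: docker-compose up -d
```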


#9

Hmm, I think I tried that early on. I don't recall what the problem was; probably something about not being able to publish ports to localhost.

Or maybe it was some trouble w/ using 2 Postgres images w/ different versions:

db-a:
  image: postgres:9.5.3
  ports:
    - "5433:5432"
  environment:
    ...
  volumes:
    - ./db/:/docker-entrypoint-initdb.d/

db-b:
  image: postgres:9.2
  environment:
    ...
  ports:
    - "5678:5432"

The executor types overview suggests that running multiple versions of the same software is not supported by the docker executor; IIRC I ran into problems w/ conflicting port 5432 :man_shrugging:.

As mentioned in the other post, what I’d like is to docker-compose up -d to bring up supporting services that I can access via localhost, which is similar to how I do local testing. This doesn’t seem possible w/ the docker executor.

If there is a way or a reasonable workaround, I’d love to know about it. Among other advantages, the docker executor seems to execute steps significantly faster than the machine executor, which helps w/ our long build times.


#10

If localhost is a hard requirement then sadly the machine executor is the only way to go.

If you need to run multiple instances of Postgres, the best way to get around that is to use different ports.

The good news is we are working on making machine executor faster and better. So in the future the performance differences will be negligible.


#11

localhost is not a “hard” requirement per se, it’s just the closest to what we do locally. But I think there’s a bigger problem in that, per the docker-compose article:

The primary container runs in a seperate [sic] environment from Remote Docker and the two cannot communicate directly. To interact with a running service, use docker and a container running in the service’s network.

So it’s not clear how the primary container, which is the app under test, can communicate w/ the services that have been docker-composed up.

I guess the way around this is to make the app itself part of the docker-compose network? But then wouldn’t all the steps in config.yml need to be part of the app’s Dockerfile?


#12

Check out this example https://github.com/CircleCI-Public/circleci-demo-docker/blob/docker-compose/.circleci/config.yml#L68

Notice how docker-compose starts up a service named contacts – you can access it from other steps by using its name. This name can be completely arbitrary.

By using --network container:contacts, it makes it look like it’s on the local network.
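Roughly, the run steps look like this (the service name contacts follows the linked example; the port, endpoint, and curl image are illustrative assumptions):

```yaml
- run:
    name: Start the service
    command: docker-compose up -d
- run:
    name: Smoke-test the service
    # --network container:contacts joins the contacts container's
    # network namespace, so localhost here is the service's localhost
    command: |
      docker run --network container:contacts \
        appropriate/curl --retry 10 --retry-connrefused \
        http://localhost:8080/contacts/test
```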

Sorry for any confusion. Your use case should totally work just fine.

If you have an example failing build using the remote_docker approach send me a link and I am glad to take a look at it.


#13

So the strategy there seems to be:

  • Run unit tests via the checked-out code
  • Run integration tests by spinning up supporting services via docker-compose, then start the app in a container and link it to the services.

That sound about right?


#14

This topic was automatically closed 41 days after the last reply. New replies are no longer allowed.