Reuse build artifacts for deployment

I want to create a simple workflow where I build, test, and deploy a small library.

I managed to create a working yml file; however, I see that it rebuilds the project on the second step instead of reusing the results of the previous step.

How can this be avoided? Here is my yml:

version: 2
jobs:
  build:
    docker:
      - image: pzixel/solidity-dotnet:latest
    working_directory: ~/Solidity.Roslyn
    steps:
      - checkout
      - run:
          name: Restore
          command: dotnet restore
      - run:
          name: Build
          command: dotnet build
      - run:
          name: Test
          command: dotnet test Solidity.Roslyn.Test
  deploy:
    docker:
      - image: pzixel/solidity-dotnet:latest
    working_directory: ~/Solidity.Roslyn
    steps:
      - checkout
      - run:
          name: pack
          command: dotnet pack Solidity.Roslyn --configuration=Release --include-symbols --output nupkgs
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master

Would you explain that a bit more for readers who are not .NET-proficient? What does the dotnet restore command do? You have a blank container with a code checkout, so I imagine there is no build in the first step to restore.

dotnet restore downloads and installs all of the project’s NuGet package dependencies (similar to npm install).

The problem is that the second step should finish on the same machine as the first one; it’s just the next logical step. But it starts restoring and building the solution from scratch: https://circleci.com/gh/Pzixel/Solidity.Roslyn/25

As you can see, it pulls all the images in “Spin up Environment” when they should already have been pulled in step 1.

Ah, I see, you’re talking about the deploy job; I was looking at the build job.

I think when you say “step” you mean “job”. A job is made up of steps, and a workflow is made up of jobs.

In the job link you have provided, you will get a clean container, by design. However, if you wish to preserve a directory from a prior job, you can use workspaces. I don’t have a link to hand, but it’s in the manual - it’s a way of saving and restoring directories across the jobs in a workflow.
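
Something like the following might do it - a minimal sketch using CircleCI’s persist_to_workspace and attach_workspace steps (untested; persisting the whole working directory with “.” is an assumption on my part, and you may prefer to persist only the build output):

version: 2
jobs:
  build:
    docker:
      - image: pzixel/solidity-dotnet:latest
    working_directory: ~/Solidity.Roslyn
    steps:
      - checkout
      - run:
          name: Restore
          command: dotnet restore
      - run:
          name: Build
          command: dotnet build
      - run:
          name: Test
          command: dotnet test Solidity.Roslyn.Test
      # Save the working directory (sources + build output) for later jobs
      - persist_to_workspace:
          root: .
          paths:
            - .
  deploy:
    docker:
      - image: pzixel/solidity-dotnet:latest
    working_directory: ~/Solidity.Roslyn
    steps:
      # Restore the files persisted by the build job into the working directory
      - attach_workspace:
          at: .
      - run:
          name: pack
          command: dotnet pack Solidity.Roslyn --configuration=Release --include-symbols --output nupkgs

With this, the deploy job starts from the files the build job produced rather than a fresh checkout (note that deploy no longer needs a checkout step, since the sources come across in the workspace).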

Ah, I may have misinterpreted your question. However, this behaviour is by design too - the two jobs in your workflow probably run on different Docker build machines, and I should think they are supplied clean (no Docker layer cache) partly for security reasons.

The trick here is to minimise the size of your Docker images so that pulling them is quick.

So it’s OK to have different jobs build from scratch? I’m new to building in Docker, so I probably need to know when I violate some guidelines. If best practice says it’s OK, then there’s nothing to worry about :slight_smile:

It’s fine in the sense that it is perfectly allowed (in both CircleCI and Docker terms). You’d have to decide whether you are satisfied that it produces the same build artefacts in the two jobs. (Of course, if you run your tests on one artefact and then deploy a different one, you will get into a pickle!)
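
If you do adopt workspaces, one way to guarantee the artefacts match is to pass --no-build to dotnet pack in the deploy job, so it packs the binaries carried over from the build job instead of recompiling. A sketch (note the build job would then need to run dotnet build --configuration=Release too, since that is the configuration pack will look for):

      # Reuse the binaries from the build job; --no-build skips recompilation
      - attach_workspace:
          at: .
      - run:
          name: pack
          command: dotnet pack Solidity.Roslyn --configuration=Release --no-build --include-symbols --output nupkgs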

I think I would lean towards trying workspaces, given that it is probably not too much effort to implement.

The Docker pull issue is nothing to worry about - the only minor thing you may want to consider is whether the latest tag is stable enough for you. Some people pin to a specific image digest instead, but this is not mandatory.
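
For example, using Docker’s name@sha256:digest form (the <digest> below is just a placeholder - substitute the real one, which docker images --digests will show you):

  deploy:
    docker:
      # Pinning by digest guarantees the exact same image on every run
      - image: pzixel/solidity-dotnet@sha256:<digest>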

