Suggestions for Speeding Up Workspace Persistence Step of Code Checkout

We are pretty heavy CircleCI users over at Bond. Every commit associated with a PR in our firmware monorepo triggers a build of images for all ~60 products.

Over time, our checkout_code step has been getting slower: it took 14 min just to get the code from GitHub and persist it to the workspace.

Any tips on how we can speed this up? Right now we have to keep switching tasks while we wait for CI, and it was not always this way.


Our job:


defaults: &defaults
  docker:
    - image: $BOND_CORE_DOCKER
      aws_auth:
        aws_access_key_id: $AWS_ACCESS_KEY_ID
        aws_secret_access_key: $AWS_SECRET_ACCESS_KEY

jobs:
  checkout_code:
    <<: *defaults
    steps:
      - checkout
      - run:
          name: "Pull Submodules"
          command: |
            # Exclude the large esp-matter submodule from the recursive update,
            git -c submodule."proto/BMatter/esp-matter".update=none submodule update --init --recursive
            # then fetch it separately as a shallow (depth-1) clone
            git submodule update --init --depth 1 proto/BMatter/esp-matter
      - run:
          name: "Setup pyenv"
          command: |
            pyenv local $(pyenv global)
            pip install -r requirements-ci.txt
      - run:
          name: "Setup esp-idf"
          command: |
            ./setup_idf5_env.sh
            direnv allow .
            pip install -r requirements-ci.txt
      - persist_to_workspace:
          root: ./
          paths:
            - ./*
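
One lever worth checking before anything else: `paths: ./*` sweeps everything in the checkout into the archive, including `.git` and whatever the setup scripts install into the tree. A sketch of a narrower persist (the listed paths are illustrative, not taken from this repo):

```yaml
      - persist_to_workspace:
          root: ./
          paths:
            # Persist only what downstream build jobs actually read;
            # these entries are hypothetical examples, not the real layout.
            - src
            - proto
            - requirements-ci.txt
```

Any downstream job that needs the full repo can run `checkout` itself, which per the timings below costs only ~3 seconds.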

And the steps taking time are:

  • spin up environment: 2 min
  • checkout code: 3 sec
  • setup esp-idf: 2 min (ok this is our dept)
  • persisting to workspace: 6 min 43 sec

        Creating workspace archive...
        Uploading workspace archive...
        Total size uploaded: 4.1 GiB
        Workspace archive uploaded successfully.

Any tips on speeding up our iterations would be :metal:.

Do you need all of the code to be checked out? We have an option to use a “blobless clone” that we’ve seen help many users who don’t need all of the code to be cloned.

“To help improve the overall performance of code checkouts from Git source code hosts, a “blobless” strategy is being rolled out. This reduces the amount of data fetched from the remote, by asking the remote to filter out objects that are not attached to the current commit.”
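
If the rollout hasn’t reached your org yet, the same idea can be approximated by swapping the built-in `checkout` step for a manual partial clone. A rough sketch, untested against this project (`$CIRCLE_REPOSITORY_URL` and `$CIRCLE_SHA1` are standard CircleCI built-in environment variables):

```yaml
      - run:
          name: "Blobless checkout"
          command: |
            # Fetch commits and trees up front; blobs are pulled on demand
            git clone --filter=blob:none "$CIRCLE_REPOSITORY_URL" .
            git checkout "$CIRCLE_SHA1"
```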

Since it only takes 3 seconds to clone and only 2 minutes to do your setup step, what is it you’re persisting to the workspace?

Can you just run the clone and setup in every job instead?
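
That could look like a reusable command that each build job invokes instead of attaching a 4.1 GiB workspace. A sketch, assuming a `version: 2.1` config; the `fetch_and_setup` and `build_product` names are hypothetical:

```yaml
commands:
  fetch_and_setup:
    steps:
      - checkout
      - run:
          name: "Pull Submodules"
          command: git submodule update --init --recursive
      - run:
          name: "Setup esp-idf"
          command: ./setup_idf5_env.sh

jobs:
  build_product:
    <<: *defaults
    steps:
      - fetch_and_setup
      # ...per-product build steps...
```

With a ~3 s clone and ~2 min setup, repeating this per job may well beat a 6 min 43 s persist plus the attach time in every downstream job.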

Also, 2 minutes to download the Docker image seems like a very long time. Are you storing the image in us-east-1? If so, it must be huge.