Bundler / Ruby caching dependencies

Looking at https://circleci.com/docs/2.0/caching/#bundler-ruby and the "Partial Cache Restoration & Bundler" section,
I’m trying to follow the recommendation to run bundle clean --force before restoring or installing deps, e.g.:

  bundle_install:
    docker:
      - image: circleci/ruby:2.6-stretch-browsers-legacy
    steps:
      - checkout
      - run: sudo gem install --force bundler --version '~> 2.1.0'
      - run: bundle clean --force
      - *restore_deps
      - run: bundle install --path vendor/bundle
      - save_cache:
          paths:
            - vendor/bundle
          key: v2-{{ arch }}-{{ .Branch }}-{{ checksum "Gemfile.lock" }}

However, that step just fails with an error, since none of those gems are installed yet:

Could not find concurrent-ruby-1.1.6 in any of the sources

Is the above still the preferred practice? Is the issue that I’m using vendored gems?
With the bundle clean first, do I still need a bundle check || bundle install?

We had gotten rid of the more opportunistic caching strategy because we were running into consistency issues whenever Gemfile.lock was updated; I had been hoping that the explicit clean would be sufficient.
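For reference, the ordering I would have expected to work — if bundle clean can only prune gems that are actually installed — is restore and install first, then clean just before saving. An untested sketch, reusing the same job, image, and cache key as above:

  bundle_install:
    docker:
      - image: circleci/ruby:2.6-stretch-browsers-legacy
    steps:
      - checkout
      - run: sudo gem install --force bundler --version '~> 2.1.0'
      - *restore_deps
      - run: bundle install --path vendor/bundle
      # prune gems left over from a partial (fallback-key) cache hit
      - run: bundle clean --force
      - save_cache:
          paths:
            - vendor/bundle
          key: v2-{{ arch }}-{{ .Branch }}-{{ checksum "Gemfile.lock" }}

I can’t confirm this is what the docs intend, but at least bundle clean has something to clean at that point.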

*restore_deps is just

restore_deps: &restore_deps
  restore_cache:
    keys:
      - v2-{{ arch }}-{{ .Branch }}-{{ checksum "Gemfile.lock" }}
      - v2-{{ arch }}-{{ .Branch }}-
      - v2-{{ arch }}-
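Note that the two fallback keys mean a restore can hand back a bundle built from a different Gemfile.lock, which is presumably why the usual guard looks something like the following (a sketch, with --path matching my config above):

  # reuse the restored vendor/bundle when it already satisfies Gemfile.lock,
  # otherwise fall back to a full install
  bundle check --path vendor/bundle || bundle install --path vendor/bundle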

Did you manage to find the cause of, or a fix for, this problem? We’re running into a similar, if not the same, situation: restoring vendor/bundle from the cache still results in bundle installing all of the previously cached gems, which makes the cache basically useless.

When reading through the documentation we added bundle clean --force, but running that command immediately fails with an error saying a gem doesn’t exist; in our case, rake.

@michiels Hi - I’ve changed jobs and don’t have access to that config; I also don’t remember at this point what the resolution was, if anything.

In our case (we’re not a Ruby shop), this was used mostly in a single project for some Terraform integration tests, so it’s possible we never found a great resolution and just settled for the slower and/or less opportunistic caching options.