Persist_to_workspace Hanging


#1

I'm running into an issue where the persist_to_workspace step seems to run indefinitely. I've seen it take over 60 minutes before cancelling the build. At some point after that the job is marked as skipped with the message:

Warning: skipping this step: Missing workflow workspace identifiers, this step must be run in the context of a workflow.

The build eventually reports Success despite never proceeding to any other jobs in the workflow.

I’ve looked over my config and don’t see anything especially exotic. Our repo is rather large, with a high file count, so I’m not sure if that’s a complicating factor. There’s no output beyond Persisting files to workspace..., so it’s hard to say what the problem is. Here’s the job that’s failing with the timeout:

version: 2
jobs:

  setup_env:

    working_directory: /home/circleci/back_royal
    docker:
      - image: booleanbetrayal/circleci-trusty-ruby-node-python-awscli-psql-chrome

    steps:
      - checkout
      - restore_cache:
          keys:
            - bundler-{{ .Branch }}-{{ checksum "Gemfile.lock" }}

      - run:
          name: Additional Dependencies
          command: ./build/install_deps.sh
          no_output_timeout: 5m

      - save_cache:
          key: bundler-{{ .Branch }}-{{ checksum "Gemfile.lock" }}
          paths:
            - ./vendor/bundle

      - persist_to_workspace:
          root: /home/circleci
          paths:
            - .

Thanks in advance for any tips!


#2

I’ve tried several path variations, but they all fail in the same way. I’d love some insight into the root cause, but I’m not sure how to get that at the moment without, at the very least, more detailed console logging from this step.
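In the meantime, one thing I can try is a throwaway run step right before the persist to get a rough idea of how much data it has to archive. A minimal sketch, using the same paths as the config above:

      - run:
          name: Workspace Size Check
          command: |
            # Rough idea of how much data / how many files the persist step will archive
            du -sh /home/circleci
            find /home/circleci -type f | wc -l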


#3

So it turns out this was related to either file size or file count. I was able to aggressively trim some unnecessary files out of our source repo, and the persist_to_workspace step now completes successfully.
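Narrowing the persist step so it only picks up what the downstream jobs actually need, rather than everything under the root, should cut the upload down in the same way. A rough sketch (paths are illustrative, not our final config):

      - persist_to_workspace:
          root: /home/circleci
          paths:
            # Persist only the repo directory instead of the entire home directory
            - back_royal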

I did notice a rather dubious issue with attach_workspace failing to ever mount when the mount point was the same path as the job’s working_directory. That means all subsequent tasks have to begin at a higher level in the directory hierarchy than I’d prefer, but it’s easy enough to work around — see the sketch below.
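The workaround in the downstream jobs looks roughly like this (the job name, reused image, and test command are illustrative, not our exact config):

  run_tests:
    working_directory: /home/circleci/back_royal
    docker:
      - image: booleanbetrayal/circleci-trusty-ruby-node-python-awscli-psql-chrome
    steps:
      # Mount one level above the repo rather than at the working_directory itself
      - attach_workspace:
          at: /home/circleci
      - run:
          name: Run Tests
          command: bundle exec rake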


#4

Hi Brent,

I tried to reproduce the attach_workspace step failing when it mounts to the job’s working_directory, but I wasn’t able to. Could you share a link to a build where you saw that happen?


#5

I think that was a red herring or some cross-chatter during the debugging blitz. It seems to be working now with absolute paths.


#6

Hello!

Could you take a look at this example and see if it helps make sense of this?

Happy to answer any questions!

