Test timings are still missing with 2.0 and workflows


#1

Hi,

I know this issue has been raised multiple times, but test splitting still doesn’t seem to work for me. Initially it was said that test splitting was not supported for workflows. Then, according to comments here from about 2 months ago, it became functional. But I keep seeing this message in the CI output:

#!/bin/bash -eo pipefail
bundle exec rspec --profile 10 \
                  --format RspecJunitFormatter \
                  --out test_results/rspec.xml \
                  --format progress \
                  $(circleci tests glob "spec/**/*_spec.rb" | circleci tests split --split-by=timings)

Requested historical based timing, but they are not present.  Falling back to name based sorting

Here’s the relevant part of my config:

steps:
  - checkout

  - restore_cache:
      keys:
        - v2-bundle-{{ checksum ".ruby-version" }}-{{ checksum "Gemfile.lock" }}

  - run:
      name: Wait for database server
      command: dockerize -wait tcp://localhost:5432 -timeout 1m

  - run:
      name: Database setup
      command: bin/rails db:schema:load --trace

  - type: shell
    name: Ruby tests
    command: |
      bundle exec rspec --profile 10 \
                        --format RspecJunitFormatter \
                        --out test_results/rspec.xml \
                        --format progress \
                        $(circleci tests glob "spec/**/*_spec.rb" | circleci tests split --split-by=timings)

  # Save test results for timing analysis
  - store_test_results:
      path: test_results

  - store_artifacts:
      path: test_results

  - store_artifacts:
      path: coverage

Do I need to adjust my config somehow, or is the feature still not supported? Thank you.


#2

This is a dummy reply to keep the topic open.


#3

+1 for this issue


#4

Hello,

Could you try adding --timings-type=classname?

ex: circleci tests split --split-by=timings --timings-type=classname

Let us know how this works for you.
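For context, here’s how that flag might fit into a run step. This is a sketch only; the glob pattern and output path are borrowed from the config in the first post, not prescribed:

```yaml
- run:
    name: Ruby tests
    command: |
      bundle exec rspec --format RspecJunitFormatter \
                        --out test_results/rspec.xml \
                        --format progress \
                        $(circleci tests glob "spec/**/*_spec.rb" | \
                          circleci tests split --split-by=timings --timings-type=classname)
```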


#5

I am having similar problems. It seems that --timings-type=classname does not make a difference: each build reports “Requested historical based timing, but they are not present.”

What is interesting and maybe related, is that if I have a failed build and I rerun failed jobs (from the workflow rerun dropdown), then it appears the timings are picked up (that build does not say “historical based timing are not present”).

I have 4 jobs in my workflow, and the timing is used only in the downstream jobs. Based on some other threads, I wonder if there is something special about the first job. I’ve only included store_test_results and store_artifacts settings for the jobs that require them. Is it possible that because my first two jobs (which do not have store_test_results specified) don’t store anything, the downstream jobs also don’t get that information?

Additionally, I wonder if the fact that 2 of these jobs both have store_test_results is causing the issue. The two jobs are spec and cucumber; each runs a different set of tests and saves its test_results to a different location. When you store_test_results, what exactly does that mean? Where are the results stored, and how does the next build know where to look to pick them up?


#7

Hi KyleTryon,

Thanks for the reply. I’ve seen that suggestion in other threads and had tried it before posting.


#8

The test results are tied to your job name, so they won’t conflict. As long as you’re storing the test results, they will be available to future runs of the job with the same name.

There is a known bug where renaming RepoA to RepoB and creating a new RepoA will lose your ability to pull in test results without manual intervention from our end, so watch out for that.
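In config terms, a sketch of two jobs storing results independently; the job names follow the earlier post, and the paths are illustrative:

```yaml
jobs:
  spec:
    steps:
      # ... run RSpec, writing JUnit XML into test_results/rspec ...
      - store_test_results:   # results are tied to the "spec" job name
          path: test_results/rspec
  cucumber:
    steps:
      # ... run Cucumber, writing JUnit XML into test_results/cucumber ...
      - store_test_results:   # results are tied to the "cucumber" job name
          path: test_results/cucumber
```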


#9

Hello,

I just noticed an announcement from July 5th that test splitting now(!) works (https://circleci.com/changelog/#support-for-test-splitting-with-workflows). Which means that, so far, it wasn’t even supposed to work. The only bummer is that nothing has visibly changed; it still doesn’t work.


#10

I’m pretty confident our system is working properly at this time. If you want to open a ticket maybe we can further investigate the specifics.

I get asked a lot what I mean by “specifics” in this case. It’s a tedious debugging process, and I have no perfect advice for solving it. Storing your test results as an artifact is prudent, so you have insight into what you’re actually storing.

The most common mistake I see with timing-based test splitting is re-running tests and expecting it to be okay. Re-running tests will always skew the timing data.


#11

@rohara I’m not as confident (but hopefully I’m wrong). I’m actually about to open a ticket related to this; it goes a step further and includes glob splitting, which is only testing and reporting for one container. Regarding timings: we haven’t renamed our repo or anything else, and yet timings haven’t worked for us from day 1. We’ve tried everything in the docs, other help sites, and every workaround we came across, verbatim, numerous times. We even removed workflows entirely to minimize complexity until this and the other issue are resolved, but so far without success.

Here’s our config (I’ll create a separate ticket for the missing tests issue):

version: 2.0
jobs:
 build:
  working_directory: ~/sml
  shell: /bin/bash --login
  docker:
  - image: circleci/build-image:ubuntu-14.04-XXL-upstart-1189-5614f37
    command: /sbin/init
  environment:
  - DATABASE_URL: postgres://postgres:@127.0.0.1:5432/circle_test
  - REDIS_URL: redis://localhost:6379/0
  - DEPRECATION_BEHAVIOR: silence
  - RAILS_ENV: test
  steps:
  - setup_remote_docker
  - checkout
  - run:
      name: Make dirs
      command: mkdir /tmp/circleci-test-results

  - run:
      working_directory: admin_app
      command: 'sudo service postgresql status || sudo service postgresql start;
        sudo redis-cli ping >/dev/null 2>&1 || sudo service redis-server start;
        sudo docker info >/dev/null 2>&1 || sudo service docker start; '

  - run:
      name: rm .rvmrc
      command: rm -f .rvmrc; echo 2.3.3 > .ruby-version; rvm use 2.3.3 --default

  - run:
      working_directory: admin_app
      command: nvm install 6.11.3 && nvm alias default 6.11.3

  - run:
      working_directory: admin_app
      command: |-
        printf '127.0.0.1       admin.sml.localhost
        127.0.0.1       client.sml.localhost
        ' | sudo tee -a /etc/hosts

  - restore_cache:
      keys:
      # This branch if available
      - v1-dep-{{ .Branch }}-
      # Default branch if not
      - v1-dep-master-
      # Any branch if there are none on the default branch - this should be unnecessary if you have your default branch configured correctly
      - v1-dep-

  - run:
      name: install firefox
      working_directory: firefox-34
      command: wget -O firefox-34.0.tar.bz2 'https://archive.mozilla.org/pub/firefox/releases/34.0/linux-x86_64/en-US/firefox-34.0.tar.bz2';tar xjf firefox-34.0.tar.bz2;firefox_cmd=`which firefox`;sudo rm -f $firefox_cmd;sudo ln -s `pwd`/firefox/firefox $firefox_cmd

  - run:
      working_directory: admin_app
      name: bundle config sidekiq
      command: bundle config enterprise.contribsys.com [TOKEN_HERE]

  - run:
      working_directory: admin_app
      name: bundle install
      command: bundle install --path vendor/bundle

  - run:
      working_directory: admin_app
      name: bundle rake i18n js
      command: bundle exec rake i18n:js:export

  - run:
      command: yarn
      working_directory: admin_app/front-end/member

  # temp fix for a dependency issue
  - run:
      command: npm rebuild node-sass
      working_directory: admin_app/front-end/member

  - save_cache:
      key: v1-dep-{{ .Branch }}-{{ epoch }}
      paths:
      # This is a broad list of cache paths to include many possible development environments
      # You can probably delete some of these entries
      - ~/virtualenvs
      - ~/.m2
      - ~/.ivy2
      - ~/.bundle
      - ~/.go_workspace
      - ~/.gradle
      - ~/.cache/bower
      - admin_app/vendor/bundle
      - admin_app/front-end/member/node_modules

  - run:
      name: Export vars
      working_directory: admin_app
      command: echo -e "export DATABASE_URL=postgres://ubuntu:@127.0.0.1:5432/circle_test\nexport REDIS_URL=redis://localhost:6379/0\nexport DEPRECATION_BEHAVIOR=silence\nexport PATH=${PATH}:${HOME}/${CIRCLE_PROJECT_REPONAME}/node_modules/.bin" >> $BASH_ENV

  - run:
      working_directory: admin_app
      name: bundle rake db test prep
      command: bundle exec rake db:test:prepare

  - run:
      working_directory: admin_app/front-end/member
      name: Run npm
      command: if echo "${CIRCLE_BRANCH:-$CIRCLE_TAG}" | grep -qE '^release'; then npm run build:prod; else npm run build:dev; fi

  - run:
      name: AWS ECR - Login
      working_directory: admin_app
      command: if echo "${CIRCLE_BRANCH:-$CIRCLE_TAG}" | grep -qE '^master|bug|develop|release|staging|production|hotfix|task|story'; then eval $(aws ecr get-login); else true; fi

  - run:
      name: AWS ECR - Build
      working_directory: admin_app
      command: if echo "${CIRCLE_BRANCH:-$CIRCLE_TAG}" | grep -qE '^master|bug|develop|release|staging|production|hotfix|task|story'; then docker build --rm -t [PRIVATE_ID_HERE].dkr.ecr.us-east-1.amazonaws.com/sml:$CIRCLE_SHA1 . ; else true; fi

  - run:
      name: AWS ECR - Push
      working_directory: admin_app
      command: if echo "${CIRCLE_BRANCH:-$CIRCLE_TAG}" | grep -qE '^master|bug|develop|release|staging|production|hotfix|task|story'; then docker push [PRIVATE_ID_HERE].dkr.ecr.us-east-1.amazonaws.com/sml:$CIRCLE_SHA1 ; else true; fi

  - run:
      name: AWS ECR - Create Version
      working_directory: admin_app
      command: if echo "${CIRCLE_BRANCH:-$CIRCLE_TAG}" | grep -qE '^master|bug|develop|release|staging|production|hotfix|task|story'; then bash bin/eb_create_version ; else true; fi

  - run:
      name: Rake test
      working_directory: admin_app
      command: bundle exec rake test TESTOPTS="--ci-dir=/tmp/circleci-test-results/reports"

  - run:
      name: Rake spec
      working_directory: admin_app
      command: bundle exec rake spec:javascript

  - run:
      name: Rspec
      working_directory: admin_app
      command: |
        TEST_FILES="$(circleci tests glob 'spec/**/*_spec.rb' | circleci tests split --split-by=timings --total=4)"
        echo $TEST_FILES
        bundle exec rspec -r rspec_junit_formatter \
                          --profile 10 \
                          --format RspecJunitFormatter \
                          --out /tmp/circleci-test-results/rspec/junit${CIRCLE_NODE_INDEX}.xml \
                          --format documentation \
                          -- $(echo "${TEST_FILES}" | sed -e 's/\n/\\n/' -e 's/ /\ /')


  - store_test_results:
      path: /tmp/circleci-test-results
  # Save artifacts
  - store_artifacts:
      path: /tmp/circleci-artifacts
  - store_artifacts:
      path: /tmp/circleci-test-results

#12

I can’t really recommend that image you’re using; a smaller one with only what you need installed is ideal.

I don’t see any parallelism noted in that config. Why are you splitting the tests if there is no parallelism?


#13

Thank you @rohara. We were using this image as a phase 1 just to get us to 2.0.

I didn’t realize that key was required. Here’s why (maybe it’ll help improve the docs):

I was originally using workflows with a separate job for code checkout, which served as a prerequisite for 2 subsequent jobs, one of them being our test suite. Setting the parallelism key caused failures when they ran concurrently (I only wanted the 2 subsequent jobs to run in parallel). The documentation specifies that we can also set this at the job level, but further down on the same page I had interpreted “You can manually set this by using the --total flag.” as meaning the --total flag is an alternative to using the parallelism key. With no warnings or errors thrown, I thought there was an issue on CircleCI’s side.

Documentation link for reference: https://circleci.com/docs/2.0/parallelism-faster-jobs/#specifying-a-jobs-parallelism-level

Anyway, that solved the timings issue for me (and the other issues).
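For anyone landing here later, a minimal sketch of a setup that splits by timings; the job name, glob pattern, and paths are illustrative, not taken from any one config in this thread:

```yaml
jobs:
  rspec:
    parallelism: 4   # required for timing-based splitting; --total alone is not a substitute
    steps:
      - checkout
      - run:
          name: RSpec
          command: |
            bundle exec rspec --format RspecJunitFormatter \
                              --out test_results/rspec.xml \
                              --format progress \
                              $(circleci tests glob "spec/**/*_spec.rb" | \
                                circleci tests split --split-by=timings)
      # store_test_results is what feeds timing data to future runs of this job
      - store_test_results:
          path: test_results
```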

Thanks again @rohara!


#14

This topic was automatically closed 41 days after the last reply. New replies are no longer allowed.