Unable to save test results & unable to parallelise the tests

Project structure:

    pom.xml          // main module
        module1 - pom.xml
        module2 - pom.xml

config.yml:

version: 2.0

jobs:
  test:
    parallelism: 2 # parallel containers to split the tests among
    docker:
      - image: circleci/openjdk:stretch
    working_directory: ~/kp
    steps:
      - checkout
      - restore_cache:
          keys:
            - kp-jobs-dependency-cache-{{ checksum "pom.xml" }} # appends cache key with a hash of pom.xml file
            - kp-jobs-dependency-cache- # fallback in case previous cache key is not found
      - run: |
          mvn -Dtest=$(for file in $(circleci tests glob "./**/src/test/**/*.scala" \
          | circleci tests split --split-by=timings); \
          do basename $file \
          | sed -e "s/.scala/,/"; \
          done | tr -d '\r\n') \
          -e test
      - save_cache:
            paths:
              - ~/.m2
            key: kp-jobs-dependency-cache-{{ checksum "pom.xml" }}
      - run:
           name: sonar
           command: |
             mvn -X sonar:sonar \
               -Dlog4j.configuration=./logs \
               -Dsonar.projectKey=project-sunbird_knowledge-platform-jobs \
               -Dsonar.organization=project-sunbird \
               -Dsonar.host.url=https://sonarcloud.io \
               -Dsonar.exclusions=**/cert-processor/** \
               -Dsonar.scala.coverage.reportPaths=/home/circleci/kp/target/scoverage.xml
      - store_test_results:
          path: test-results
      - store_artifacts:
          path: test-results    

  build:
    docker:
      - image: circleci/openjdk:stretch
    steps:
      - checkout
      - restore_cache:
          keys:
            - kp-jobs-dependency-cache-{{ checksum "pom.xml" }} # appends cache key with a hash of pom.xml file
            - kp-jobs-dependency-cache- # fallback in case previous cache key is not found
      - run: mvn clean install -DskipTests
      - save_cache:
            paths:
              - ~/.m2
            key: kp-jobs-dependency-cache-{{ checksum "pom.xml" }}

workflows:
  version: 2

  build-then-test:
    jobs:
      - build
      - test:
          requires:
            - build

ERROR: Upload Test Result:

Unable to save test results from /home/circleci/kp/test-results
Error path is not valid /home/circleci/kp/test-results: error accessing path: /home/circleci/kp/test-results: lstat /home/circleci/kp/test-results: no such file or directory
Found no test results, skipping

ERROR: Upload Artifact Error

    Uploading /home/circleci/kp/test-results to test-results
  No artifact files found at /home/circleci/kp/test-results

I tried providing the /tmp directory as well:

      - store_test_results:
          path: /tmp/test-results
      - store_artifacts:
          path: /tmp/test-results
          destination: test_results


Error:

Unable to save test results from /tmp/test-results
Error path is not valid /tmp/test-results: error accessing path: /tmp/test-results: lstat /tmp/test-results: no such file or directory
Found no test results, skipping

I tried adding this config, but it throws a "No timing found for class" error:

steps:
      - checkout
      - restore_cache:
          keys:
            - kp-jobs-dependency-cache-{{ checksum "pom.xml" }} # appends cache key with a hash of pom.xml file
            - kp-jobs-dependency-cache- # fallback in case previous cache key is not found
      - run: |
          mvn -Dtest=$(for file in $(circleci tests glob "./**/src/test/**/spec/*.scala" \
          | circleci tests split --split-by=timings --timings-type=classname); \
          do basename $file \
          | sed -e "s/.scala/,/"; \
          done | tr -d '\r\n') \
          -e test
      - run:
          name: Save test results
          command: |
            mkdir -p ~/test-results/junit/
            find . -type f -regex ".*/target/surefire-reports/.*xml" -exec cp {} ~/test-results/junit/ \;
          when: always
      - save_cache:
            paths:
              - ~/.m2
            key: kp-jobs-dependency-cache-{{ checksum "pom.xml" }}
      - store_test_results:
          path: ~/test-results

Is there any workaround for this?

Have you tried to SSH into the container and check that the paths in the job are being created as you expect? I think that’s the best first step in resolving issues with test results not uploading. From what I can see in your first example, if there were a test-results folder in ~/kp it should have at least tried to upload without error. So I’m curious to know whether the folders are being created in the paths you expect them to be.
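For example, a debug step along these lines (just a sketch: the step name and the exact paths are only illustrative, so adjust them to your project) could be added before store_test_results to confirm what actually exists on disk:

      - run:
          name: Inspect test result paths
          command: |
            ls -la ~/kp || true
            find . -type d -name "surefire-reports" || true
            ls -la ~/kp/test-results || echo "test-results directory was not created"
          when: always

If that shows the test-results folder is never created, that would explain the lstat errors from store_test_results and store_artifacts.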

By making these code changes the above exception is resolved, but now I’m getting another error:

- run: |
         mvn -Dtest=$(for file in $(circleci tests glob "./**/src/test/**/spec/*.scala" \
         |  circleci tests split --split-by=timings --timings-type=classname); \
         do basename $file \
         | sed -e "s/.scala/,/"; \
         done | tr -d '\r\n') \
         -e test
     - run:
         name: Save test results
         command: |
           mkdir -p ~/test-results/junit/
           find . -type f -regex ".*/target/surefire-reports/.*xml" -exec cp {} ~/test-results/junit/ \;
         when: always
     - save_cache:
           paths:
             - ~/.m2
           key: kp-jobs-dependency-cache-{{ checksum "pom.xml" }}
     - store_test_results:
         path: ~/test-results


No timing found for "specpath/spec1.scala"
No timing found for "specpath/spec2.scala
No timing found for "specpath/spec3.scala
No timing found for "specpath/spec4.scala

Note: This is a multi-module project.

@fernfernfern Is the command I mentioned above not correct for a multi-module project?

I can see the test results being downloaded and uploaded.

Uploading Test Results
Archiving the following test results
  * /home/circleci/test-results/junit/TEST-org.scalatest.tools.DiscoverySuite-11254bbb-9e1f-4822-bd53-9ea9d4b05d16.xml
  * /home/circleci/test-results/junit/TEST-org.scalatest.tools.DiscoverySuite-136b512f-1aa8-4b4a-b3eb-8e1718ac41ea.xml
  * /home/circleci/test-results/junit/TEST-org.scalatest.tools.DiscoverySuite-2e73fbf1-b547-4fc7-bd64-987ff9c27c91.xml
  * /home/circleci/test-results/junit/TEST-org.scalatest.tools.DiscoverySuite-5c9744d6-24d6-4389-82f5-6a4ae66dce12.xml
  * /home/circleci/test-results/junit/TEST-org.scalatest.tools.DiscoverySuite-64c4cab9-a62e-4085-9b03-9a4f8f718dd5.xml
  * /home/circleci/test-results/junit/TEST-org.scalatest.tools.DiscoverySuite-b123d15e-77e8-4678-a2e3-57fae192a9e9.xml
  * /home/circleci/test-results/junit/TEST-org.scalatest.tools.DiscoverySuite-fbdf8c99-5f7e-4767-961e-a9f36c6bf6cb.xml
  * /home/circleci/test-results/junit/TEST-org.sunbird.job.postpublish.helpers.DialHelperTest.xml
  * /home/circleci/test-results/junit/TEST-org.sunbird.job.publish.helpers.spec.QuestionPublisherSpec.xml
  * /home/circleci/test-results/junit/TEST-org.sunbird.job.publish.helpers.spec.QuestionSetPublisherSpec.xml
  * /home/circleci/test-results/junit/TEST-org.sunbird.job.spec.PostPublishProcessorTaskTestSpec.xml
  * /home/circleci/test-results/junit/TEST-org.sunbird.job.spec.QuestionSetPublishStreamTaskSpec.xml
  * /home/circleci/test-results/junit/TEST-org.sunbird.job.spec.RelationCacheUpdaterTaskTestSpec.xml
  * /home/circleci/test-results/junit/TEST-org.sunbird.job.spec.SearchIndexerTaskTestSpec.xml
  * /home/circleci/test-results/junit/TEST-org.sunbird.job.spec.VideoStreamGeneratorTaskTestSpec.xml
  * /home/circleci/test-results/junit/TEST-org.sunbird.job.spec.service.VideoStreamServiceTestSpec.xml
  * /home/circleci/test-results/junit/TEST-org.sunbird.publish.spec.EcarGeneratorSpec.xml

If you just got the test results to upload successfully, I believe it will take a few runs for the timing data to be created. How many pipelines have you triggered since you got test reporting working?

The timing data should be automatically inserted into the running job, and then the circleci tests split --split-by=timings --timings-type=classname command will pick it up.

You can actually inspect the file itself. It should be created in $CIRCLE_INTERNAL_TASK_DATA/circle-test-results. You can add a run command like ls $CIRCLE_INTERNAL_TASK_DATA/circle-test-results into the job to see if the timing data exists. Though as I said it takes a few pipeline runs for it to be generated.
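Something like this (just a sketch; the step name is arbitrary) would show whether the timing data exists yet:

      - run:
          name: Inspect stored timing data
          command: |
            ls -la "$CIRCLE_INTERNAL_TASK_DATA/circle-test-results" || true
            cat "$CIRCLE_INTERNAL_TASK_DATA/circle-test-results"/*.json || true
          when: always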

Thanks for your response, @fernfernfern.
I can see some data in the $CIRCLE_INTERNAL_TASK_DATA/circle-test-results/results.json file, but I don’t see the tests being executed in parallel. I have run CircleCI more than 6 times, and I still see the same module’s test cases being executed in both slots (parallelism: 2).

results.json

{
  "tests": [
    {
      "classname": "org.sunbird.job.postpublish.helpers.DialHelperTest",
      "file": null,
      "name": "fetchExistingReservedDialcodes should return map of dialcodes that is reserved",
      "result": "success",
      "run_time": 0.008,
      "message": null,
      "source": "junit",
      "source_type": "junit"
    },
    {
      "classname": "org.sunbird.job.postpublish.helpers.DialHelperTest",
      "file": null,
      "name": "reserveDialCodes should reserve dialcodes and return a map of the same",
      "result": "success",
      "run_time": 0.003,
      "message": null,
      "source": "junit",
      "source_type": "junit"
    },
    {
      "classname": "org.sunbird.job.postpublish.helpers.DialHelperTest",
      "file": null,
      "name": "reserveDialCodes with error response should throw an exception",
      "result": "success",
      "run_time": 0.004,
      "message": null,
      "source": "junit",
      "source_type": "junit"
    },
    {
      "classname": "org.sunbird.spec.CoreTestSpec",
      "file": null,
      "name": "DataCache hgetAllWithRetry function should be able to retrieve the map data from Redis",
      "result": "success",
      "run_time": 0.012,
      "message": null,
      "source": "junit",
      "source_type": "junit"
    },
    {
      "classname": "org.sunbird.spec.CoreTestSpec",
      "file": null,
      "name": "StringSerialization functionality should be able to serialize the input data as String",
      "result": "success",
      "run_time": 0.001,
      "message": null,
      "source": "junit",
      "source_type": "junit"
    }
     .....
     .....
]}

I still see these errors:

No timing found for "jobs-core/src/test/scala/org/sunbird/spec/BaseMetricsReporter.scala"
No timing found for "jobs-core/src/test/scala/org/sunbird/spec/BaseProcessFunctionTestSpec.scala"
No timing found for "jobs-core/src/test/scala/org/sunbird/spec/BaseProcessTestConfig.scala"
No timing found for "jobs-core/src/test/scala/org/sunbird/spec/BaseSpec.scala"
No timing found for "jobs-core/src/test/scala/org/sunbird/spec/BaseTestSpec.scala"
No timing found for "jobs-core/src/test/scala/org/sunbird/spec/CoreTestSpec.scala"
No timing found for "jobs-core/src/test/scala/org/sunbird/spec/DefinitionCacheTestSpec.scala"
No timing found for "jobs-core/src/test/scala/org/sunbird/spec/ScalaJsonUtilSpec.scala"
No timing found for "jobs-core/src/test/scala/org/sunbird/spec/TestJobRequestStreamFunc.scala"
No timing found for "jobs-core/src/test/scala/org/sunbird/spec/TestMapStreamFunc.scala"
No timing found for "jobs-core/src/test/scala/org/sunbird/spec/TestStringStreamFunc.scala"

Usually, how many runs does it take before the tests are executed in parallel? I have already re-run CircleCI more than 6 times. Did I miss any config?

Note: The project is a multi-module project.

The same modules are being executed in both task slots.

Okay, that’s great if you can see results.json and it’s populated with data. You mention it’s a multi-module project. Is that the same as a monorepo, or is this a Scala/Maven-specific setup? I’m not entirely familiar with these tools, so someone might have to step in and save me here.

In any case it would be helpful to break down your test step a bit more to make it easier to troubleshoot.

You can add a separate run step to your job that outputs circleci tests glob "./**/src/test/**/spec/*.scala" | circleci tests split --split-by=timings --timings-type=classname to the console rather than passing it directly into your test command. It would be good to see the result of the test split command in the console output to better understand how the tests are being split across containers. If some of the specs aren’t reporting any timing data, it could be for a number of reasons. The output XML data for the specs may be malformed, for instance, which can break parsing of the JUnit data and cause timing data to not be created.

But I think adding a new run step that prints the glob/split output will help you see what the command is passing into your for loop. From there it should be easier to track what is happening in each container, such as which container is picking up each Scala file.
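For instance (just a sketch that reuses your existing glob pattern; the step name is arbitrary, and CIRCLE_NODE_INDEX/CIRCLE_NODE_TOTAL are the built-in parallelism variables, used here only to label the output):

      - run:
          name: Show test split for this container
          command: |
            echo "All files matched by the glob:"
            circleci tests glob "./**/src/test/**/spec/*.scala"
            echo "Files assigned to container $CIRCLE_NODE_INDEX of $CIRCLE_NODE_TOTAL:"
            circleci tests glob "./**/src/test/**/spec/*.scala" \
              | circleci tests split --split-by=timings --timings-type=classname

Comparing that output across the two containers should show whether the same files are being assigned to both.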