Documentation for this topic is now available on our documentation site: https://circleci.com/docs/2.0/faq/
What if I wanted to make my own base image (like a Xenial one)? Are there docs on that front yet?
Thanks for the question, we will add it to the FAQ.
You can use any public image that you want. So if you wanted to make your own base image you can do that locally, push it up to Docker Hub, and then use that in your build. A sample Dockerfile might look like:
```
FROM ubuntu:xenial
RUN apt-get update && apt-get install -y $MY_PACKAGES
RUN $SOME_OTHER_STUFF
```
Then you can:
```
docker build -t $USERNAME/$MY_IMAGE:$VERSION .
docker push $USERNAME/$MY_IMAGE:$VERSION
```
Lastly, as part of your build you would just choose this image:
```
version: 2
executorType: docker
containerInfo:
  - image: $USERNAME/$MY_IMAGE:$VERSION
```
```
/bin/sh: mkdir: command not found
```
Is parallelism only at a high-level pipeline level? Meaning is it possible to run some commands in my circle.yml file in parallel (tests) but not others (asset prep)?
Seems like the parallel option skips the deployment type by default but applies to the rest of the pipeline entirely, is that correct?
Can anything be done to make the checkout step more efficient? Maybe a shallow clone?
```
remote: Counting objects: 112349, done.
remote: Compressing objects: 100% (5531/5531), done.
Receiving objects:  52% (58422/112349), 279.40 MiB | 59.19 MiB/s
Receiving objects:  58% (66220/112349), 363.28 MiB | 59.48 MiB/s
Receiving objects:  59% (66286/112349), 363.28 MiB | 59.48 MiB/s
Receiving objects:  60% (67496/112349), 423.15 MiB | 57.67 MiB/s
Receiving objects:  61% (68533/112349), 483.70 MiB | 58.56 MiB/s
Receiving objects:  61% (69086/112349), 517.32 MiB | 59.45 MiB/s
Receiving objects:  62% (69657/112349), 584.68 MiB | 60.52 MiB/s
Receiving objects:  62% (69837/112349), 584.68 MiB | 60.52 MiB/s
Receiving objects:  65% (73027/112349), 638.77 MiB | 61.19 MiB/s
Receiving objects:  65% (74000/112349), 665.50 MiB | 60.96 MiB/s
Receiving objects:  70% (78645/112349), 693.72 MiB | 60.13 MiB/s
Receiving objects:  70% (78923/112349), 712.05 MiB | 56.79 MiB/s
Receiving objects:  70% (78924/112349), 746.34 MiB | 50.89 MiB/s
Receiving objects:  80% (89880/112349), 767.49 MiB | 48.29 MiB/s
remote: Total 112349 (delta 29785), reused 26036 (delta 26036), pack-reused 80781
Receiving objects: 100% (112349/112349), 779.07 MiB | 45.71 MiB/s, done.
Resolving deltas:  11% (8185/74186)
Resolving deltas:  28% (20776/74186)
Resolving deltas:  43% (31901/74186)
Resolving deltas:  58% (43036/74186)
Resolving deltas:  72% (53680/74186)
Resolving deltas:  88% (65331/74186)
Resolving deltas:  99% (73448/74186)
Resolving deltas: 100% (74186/74186), done.
Checking connectivity... done.
```
Customizable git checkout
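In the meantime, a shallow clone can be sketched as a shell step that replaces the built-in checkout. This is only a sketch: the `- type: shell` step shape follows the beta syntax used elsewhere in this thread, and `CIRCLE_REPOSITORY_URL` / `CIRCLE_BRANCH` are CircleCI's standard environment variables.

```yaml
# Sketch only: a custom shallow checkout step (not an official replacement
# for the built-in checkout; step shape assumed from the beta syntax above).
- type: shell
  command: |
    git clone --depth 1 --branch "$CIRCLE_BRANCH" "$CIRCLE_REPOSITORY_URL" .
```

A depth-1 clone fetches only the tip of the branch, which avoids transferring the full object history shown in the log above.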
Apologies for the delay.
Yes, this is by design.
Your assumption is correct.
All steps are executed in parallel except the deploy step.
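As a sketch of how that plays out inside a step: each parallel container gets an index, and a command can key off it to take its slice of the work. The values are hard-coded here for local experimentation; on CircleCI they are provided as `$CIRCLE_NODE_INDEX` and `$CIRCLE_NODE_TOTAL`, and the test names are placeholders.

```shell
# Hard-coded for local experimentation; CircleCI sets these per container.
CIRCLE_NODE_TOTAL=2
CIRCLE_NODE_INDEX=0

# Round-robin the (placeholder) test files across the parallel containers.
RAN=""
i=0
for f in test_a test_b test_c test_d; do
  if [ $((i % CIRCLE_NODE_TOTAL)) -eq "$CIRCLE_NODE_INDEX" ]; then
    echo "container $CIRCLE_NODE_INDEX runs $f"
    RAN="$RAN $f"
  fi
  i=$((i + 1))
done
```

Container 0 picks up `test_a` and `test_c`; container 1 would pick up `test_b` and `test_d`.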
I have these steps in my configuration:

```
- type: artifacts-store
  path: /tmp/circle/artifacts
  destination: artifacts
- type: test-results-store
  path: /tmp/circle/test_reports
```
However, when running `circleci-builder build` locally, I get:
```
====>> 3. Uploading artifacts
Warning: skipping this step: storage is not configured (missing "destination" key)
====>> 4. Uploading test results
Error: Skipping uploading test results from /tmp/circle/test_reports as no storage is configured
```
Where am I missing a `destination` key? How do I configure that storage?
A couple of questions:
- Is there a way to specify more than one stage under `stages`? Right now everything is grouped under one stage, and I would like to separate out build/test/deploy for easier management.
- For `cache-save`, it looks like it's not possible to update an existing cache key. This makes it hard to reuse a cache across branches for all the common dependencies. Any plans on implementing that?
- Previously, I was writing test output from the test processes directly to `$CIRCLE_TEST_REPORTS/filename.xml`. It looks like that environment variable is no longer set. Are there more details on how to write test output to the right place, especially if I have multiple test steps (lint, test, etc.)?
No, but you can name your steps to describe what they do. That said, this may be supported in the future; I certainly understand your use case.
We have already put in a feature request for more control over caching. Having a "primed" cache would be useful in a lot of cases: Add Mechanism to Update Existing Cache Key
The env var `$CIRCLE_TEST_REPORTS` should still be accessible.
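For the multiple-test-steps case, one common pattern (a sketch, not official guidance) is to give each step its own subdirectory under `$CIRCLE_TEST_REPORTS` so lint and unit-test reports don't overwrite each other. The fallback path and file names here are placeholders for running this locally, where the variable is unset.

```shell
# Sketch: one subdirectory per test step under $CIRCLE_TEST_REPORTS.
# The /tmp fallback is only for local experimentation; on CircleCI the
# variable points at the collected test-results directory.
REPORTS="${CIRCLE_TEST_REPORTS:-/tmp/test-reports}"
mkdir -p "$REPORTS/lint" "$REPORTS/unit"

# Each test runner then writes its JUnit XML into its own directory, e.g.:
echo '<testsuite tests="0"/>' > "$REPORTS/lint/results.xml"
```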
- The other reason to have separate stages is to use parallelism. I only want to run certain steps in parallel. How does parallelism work in general? Can it pick any step out of order, or does it still run them in sequence, just n at a time?
The steps run in order on each box. Just specify the parallelism in the stage.
This is how to restrict things to a single container:

```shell
if [[ $CIRCLE_NODE_INDEX = 0 ]]; then
  # commands here run only on the first container (e.g. a deploy)
fi
```