Hi Everybody,
We need to do some cleanup even for failed builds. This is critical for releasing expensive cloud resources. I cannot find anything about this in the documentation. Is there a new (or even hacky) way to do it?
Forcing a failed step to succeed is not an option; that defeats the purpose of such a service. Should we move to another CI provider?
Thanks,
Ruben
I have 15 steps in the pipeline, and I’m not going to pollute all 15 of them. Also, such a procedure would hide the real failed step in the UI, right?
I guess there is no other option than moving to another CI vendor.
You can be cynical and defeatist, but remember, that’s a choice. If you keep on doing that, you will run out of vendors to loudly boycott - there are not very many good ones. (I’m just a fellow customer, and this forum’s annoying pro bono philosopher.)
I think it would be a small tweak to two or three steps - it is hardly pollution.
It would, yes. I have seen a lot of CI pipelines, and I don’t think I have ever seen one where it mattered which step failed.
If you can give more detail - maybe with a friendly smile - perhaps I or other readers can give you more ideas.
If clearly seeing which step fails is not important, then you should also question the need for step logs in the first place.
I’m running a pretty complex pipeline, with unit tests, build, packaging, image upload, and pretty complicated orchestration. It also involves acquiring custom AWS resources. But in reality this shouldn’t matter, because at the end of the day I’m just running a bunch of bash scripts (already up to 20).
I’m using the pipeline both for production and for active development. It is critical to clearly see which step fails and to see the log output as quickly as possible. All I need is a step that executes at the end regardless of the pipeline status. Any more details I can provide that would help?
Do you need to do this “cleanup” for all of your steps? I agree, in that case, that it will bloat each step. Nevertheless, my attitude to CI is: get it working, get it fast, get it elegant. You’re on step 0 at the moment, so bloat it up if you have to.
(Would you remove that image from your post? Let’s practice kindness here if we can.)
No idea if this was available when the question was asked last year, but for anyone looking for this functionality today, it is possible through the when property, which can be added to any step. CircleCI’s default value is on_success; the other possible values are on_fail and always.
Example
A step can be added at the end to always clean up external resources:
- run:
    name: Release licenses and shut down servers
    when: always
    command: cleanup.sh
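For context, here is roughly how that step might sit at the end of a job in .circleci/config.yml. This is only a sketch: the job name, the Docker image, and the build_and_test.sh script are placeholders of mine; only the final step comes from the answer above.

version: 2.1

jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      # Earlier steps keep the default when: on_success, so a failure here
      # still marks the step (and the job) as failed in the UI.
      - run:
          name: Build and test
          command: ./build_and_test.sh
      # This step runs whether the previous steps succeeded or failed,
      # so the external resources get released either way.
      - run:
          name: Release licenses and shut down servers
          when: always
          command: cleanup.sh

Because nothing here forces a failed step to succeed, the step that actually failed is still the one flagged in the UI; the cleanup step simply runs afterwards regardless.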
I tried to find a way too, and I found an orb, swissknife, that has a function (queue_jobs_within_workflow) to run a job after success/failure. I think this is the best workaround I have found.
The only problem is that it waits for only one job, but you can take the code and change it to take a list of jobs instead of one.
Hope it helps someone.
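To illustrate the "list of jobs instead of one" idea, here is a rough bash sketch of the approach such a cleanup job could take: start the job with no requires so it runs in parallel, poll the CircleCI v2 API for the current workflow's jobs, and run the cleanup once every job you care about has finished. The job names, the CIRCLE_TOKEN variable holding an API token, and the use of jq are assumptions for this example; the orb's own implementation will differ in its details.

#!/usr/bin/env bash
# Sketch only: wait until a list of jobs in the current workflow has finished,
# then run the cleanup. Assumes CIRCLE_TOKEN holds a CircleCI API token and
# that jq is available in the image; the job names below are placeholders.
set -euo pipefail

jobs_to_wait_for=("build" "deploy")

while true; do
  # CIRCLE_WORKFLOW_ID is provided by CircleCI; this lists all jobs in the workflow.
  response=$(curl -s -H "Circle-Token: ${CIRCLE_TOKEN}" \
    "https://circleci.com/api/v2/workflow/${CIRCLE_WORKFLOW_ID}/job")

  unfinished=0
  for job in "${jobs_to_wait_for[@]}"; do
    status=$(echo "$response" | jq -r --arg name "$job" \
      '.items[] | select(.name == $name) | .status')
    # Treat success/failed/canceled as finished; anything else means keep waiting.
    # (Handling of blocked or on-hold jobs is left out of this sketch.)
    if [[ "$status" != "success" && "$status" != "failed" && "$status" != "canceled" ]]; then
      unfinished=1
    fi
  done

  [[ "$unfinished" -eq 0 ]] && break
  sleep 30
done

./cleanup.sh   # release the external resources no matter how those jobs ended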