How to run two jobs sequentially, always running the second even if the first fails

I have my config.yml set up as shown below to run two jobs sequentially, but the second job fails to run if the first one has errors. I always want the second job to run. The reason is that each job is automation that tests a different product, and I want to test them independently.

Is there a way to set up my config to do this?

        triggers:
          - schedule:
              cron: "0 5 * * *"
              filters:
                branches:
                  only: master
        jobs:
          - product1
          - product2:
              requires:
                - product1
My immediate question is: why do you want to run the jobs sequentially, then? The desired result you mentioned is best achieved by running the jobs in parallel, which is the default.

The requires key is documented as “A list of jobs that must succeed for the job to start”.

So this behaviour is expected: because product2 requires product1, it will never start when product1 fails.

I’d suggest running them in parallel. If there’s a step or two that both jobs need, either use YAML anchors or CircleCI v2.1 config to share them, or make a parent job that both product-testing jobs depend on.
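For instance, shared configuration can be factored out with a YAML anchor. A minimal sketch, assuming hypothetical job names, test scripts, and executor image (none of these are from the original config):

    defaults: &defaults
      docker:
        - image: circleci/node:10   # assumed image
      working_directory: ~/repo

    jobs:
      product1:
        <<: *defaults
        steps:
          - checkout
          - run: ./test-product1.sh   # hypothetical script
      product2:
        <<: *defaults
        steps:
          - checkout
          - run: ./test-product2.sh   # hypothetical script

With no requires relationship between them, both jobs would run in parallel, and each would pass or fail independently.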

I can’t run them in parallel. Product 1 must be tested in isolation first, without Product 2 being tested. Once Product 1 is tested, then Product 2 testing can start. Neither is dependent on the other passing or failing. It’s just two products that must be tested sequentially.

If possible, can you show a config example that accomplishes this?

Why? How do they connect?

Why can’t they run in parallel?

The better I understand their relationship, the better chance I have of coming up with a solution.


There are many reasons why. Most involve executing workflows in Product 1 and thereafter, within Product 2, executing against that workflow started in Product 1. (E.g. in Product 1 (let’s say a web app) I create an auction as User A; later, in Product 2 (let’s say a native mobile app), I need to act as the bidder, User B.)

That’s just a simple example. The point is there are guaranteed hand-offs between Product 1 and Product 2 that need to be sequential. There is no way around it, and some are time-based, so it’s not as easy as staging data fixtures.

It sounds like Product 1 produces some test artefacts that Product 2 relies upon. If this is the case, then I would suggest baking that artefact into Product 2, since (1) you know what a good artefact looks like, and (2) if a single production of this artefact is sufficient to test Product 2, then the same one will always be good enough to test Product 2.

I am wondering though whether this artefact is different for every run of Product 1. If so, this is risky, since repeating a test failure in Product 2 will be dependent on getting the same artefacts from Product 1. You may even get into a situation where multiple runs of both products in sequence will sometimes produce failures and sometimes not, which is something you need to strenuously avoid.

Would you expand on that? It sounds like you need a data fixture containing timing information, but it is a bit vague. It sounds like your tests have a dependency that is going to trip you up regardless of platform, but perhaps there is just something I am not seeing here.

Appreciate you guys trying to solve automation issues for us.

This is just one of many things going on that we cannot avoid. We will not always have the capability to stage data fixtures, whether due to time constraints, the enormity of the projects, or because the workflows must be executed in real time (the web workflow runs a couple of hours before the mobile automation interacts with it).

Ultimately, if there is no way in CircleCI to orchestrate two workflows sequentially such that both always execute, we will just combine our workflows into one, operate at the step level, and swallow the errors via ‘set +e’, ‘2>/dev/null || :’, etc.
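That fallback might look roughly like the following single-job config, a sketch assuming hypothetical test scripts and executor image, with Product 1 failures swallowed at the step level so the Product 2 step still runs:

    jobs:
      test-both-products:
        docker:
          - image: cimg/base:stable   # assumed executor image
        steps:
          - checkout
          - run:
              name: Test Product 1 (errors swallowed)
              command: |
                set +e
                ./test-product1.sh 2>/dev/null || :
          - run:
              name: Test Product 2
              command: ./test-product2.sh

The downside is that a Product 1 failure is hidden entirely: the job reports green even when Product 1’s tests broke.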

Yes, it would have to be a single job.

Instead of the bash settings you listed, you can have a step that kicks off the second product’s tests and use the when key of run, set to always, so that it runs even when an earlier step failed:
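A minimal sketch of that, assuming hypothetical test scripts and executor image:

    jobs:
      test-both-products:
        docker:
          - image: cimg/base:stable   # assumed executor image
        steps:
          - checkout
          - run:
              name: Test Product 1
              command: ./test-product1.sh   # hypothetical script
          - run:
              name: Test Product 2
              command: ./test-product2.sh   # hypothetical script
              when: always                  # runs even if the Product 1 step failed

Note that if the Product 1 step fails, the job as a whole is still marked failed, but the Product 2 step runs to completion, so both results are visible.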


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.