Yes, and no, and yes!
I believe the most correct answer is that there should likely be multiple workflows, each configured so that you wouldn't need to drill down any further than that. That should also reduce complexity.
You could orbify a job, for instance, and then call it twice, once in each workflow, with the parameter set appropriately for that workflow.
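As a rough sketch of that idea (every name below is hypothetical: the `mynamespace/myorb` orb, its `deploy` job, and its `environment` parameter):

```yaml
version: 2.1

orbs:
  myorb: mynamespace/myorb@1.2.3 # hypothetical orb holding the shared job

workflows:
  staging:
    jobs:
      - myorb/deploy: # same job, invoked once per workflow
          environment: staging
  production:
    jobs:
      - myorb/deploy:
          environment: production
```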
Take a look at this workflow example from the aws-sam-serverless orb.
You can see that, in a single workflow, the `sam/deploy` job is called twice. (Side note: we are able to do that in the same workflow because each invocation is given a unique `name` parameter.) Each call is supplied a different `stack-name`, which in this case meant we only had to define the deploy job once but were able to change the deploy location.
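Boiled down, that pattern looks roughly like this (a sketch only: the stack names are illustrative, the orb version is a placeholder, and any other parameters `sam/deploy` requires are omitted):

```yaml
version: 2.1

orbs:
  sam: circleci/aws-sam-serverless@x.y.z # pin a real version here

workflows:
  deploy:
    jobs:
      - sam/deploy:
          name: deploy-staging # unique name lets the same job run twice
          stack-name: my-app-staging
      - sam/deploy:
          name: deploy-production
          stack-name: my-app-production
```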
You can use "reusable config" and author parameterized jobs directly in your config:

```yaml
version: 2.1

jobs:
  sayhello: # defines a parameterized job
    description: A job that does very little other than demonstrate what a parameterized job looks like
    parameters:
      saywhat:
        description: "To whom shall we say hello?"
        default: "World"
        type: string
    machine: true
    steps:
      - run: echo "Hello << parameters.saywhat >>"

workflows:
  build:
    jobs:
      - sayhello: # invokes the parameterized job
          saywhat: Everyone
```
Or if this is a task you use across multiple projects, maybe consider authoring an orb.
Within a job itself, you can even execute different steps based on the boolean value of a parameter using the `when` (and `unless`) steps:

```yaml
version: 2.1

jobs: # conditional steps may also be defined in `commands:`
  job_with_optional_custom_checkout:
    parameters:
      custom_checkout:
        type: string
        default: ""
    machine: true
    steps:
      - when:
          condition: << parameters.custom_checkout >>
          steps:
            - run: echo "my custom checkout"
      - unless:
          condition: << parameters.custom_checkout >>
          steps:
            - checkout

workflows:
  build:
    jobs:
      - job_with_optional_custom_checkout:
          custom_checkout: "any non-empty string is truthy"
```
So if you actually wanted to, you could drill down even deeper to accomplish more of what you mentioned in your original post by passing parameters into your jobs and using `when` steps.
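For instance, reusing the `job_with_optional_custom_checkout` job sketched above, two workflows could toggle the checkout behavior purely through the parameter value they pass:

```yaml
workflows:
  custom:
    jobs:
      - job_with_optional_custom_checkout:
          custom_checkout: "use the custom checkout" # truthy, so the custom step runs
  standard:
    jobs:
      - job_with_optional_custom_checkout # empty default, so the plain `checkout` runs
```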
Hope that was helpful! Let me know if we can clear anything up.