Sending a Slack message for manually approved PRs

The project I am working on uses GitHub flow, but when we merge back to master there is a manual approval job to deploy master to production.

This means that sometimes PRs stack up before they get deployed to prod.

We also need to send a message to a Slack group to say that PRs are being deployed.

Sending a single message is easy enough, but can anyone give me a pointer on how to send a message that will contain multiple PRs if they are stacked?

E.g. if there are 5 undeployed PRs ready to go in master and I click deploy on the latest one, then, as this is the last commit to master, all the others also get deployed.

(This leaves the previous 4 PRs on hold, but we usually go in and manually cancel those.)

I just can't work out how to get the list of non-deployed PRs.

Hope this all makes sense!

Hello,

Thanks so much for posting on Discuss!

You can try setting up a custom message with the Slack orb to send a list of PRs:

    steps:
      - slack/notify:
          custom: |
            {
              "blocks": [
                {
                  "type": "section",
                  "fields": [
                    {
                      "type": "plain_text",
                      "text": "*This is a text notification*",
                      "emoji": true
                    }
                  ]
                }
              ]
            }
          event: always

Also, would you happen to have a build link showing what the project currently looks like?

@JCi has already mentioned that you can use a custom Slack message from the orb to send configurable messages to Slack, but IIUC they haven't shown you how to retrieve the list of non-deployed PRs, which is non-trivial given CCI's API design.

Here's a somewhat complex but mostly robust Python 3 script with zero external dependencies that does that. You'll need to play with it to make it do what you want, but the bones of what you need should be there.

#!/usr/bin/env python3

##########################################################################
### Retrieve on_hold workflows for the current project, branch, and
### workflow name
###
### This is a fairly robust, in my experience, python3 script with zero
### external dependencies (as of at least python 3.8) to spider the CCI
### API based on environment variables set in all CCI runs and look for
### workflows matching certain criteria.
###
### The basic algorithm is:
###
### 1. Look up our own workflow information which allows us to retrieve
###    our workflow name. This is a robustness requirement if your
###    pipelines run more than one workflow but only one of them should be
###    considered blocking or otherwise needs to be interacted with from
###    the current workflow.
###
### 2. Retrieve the first page of pipelines to subsequently retrieve their
###    workflows. All of these endpoints are paginated. If you need to
###    search beyond the first page then doing so is an exercise I'll
###    leave to you. It involves iterating on the request using the
###    previous response's `next_page_token`.
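###    (A hedged pagination sketch follows the script below.)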
###
### 3. We iterate over each of the pipelines and retrieve their workflows.
###
###    A key point of logic here is being able to DTRT in the face of a
###    'nascent' pipeline. CCI pipelines always seem to start empty and
###    sometimes can stay that way due to instability. I ignore pipelines
###    that have been 'nascent' for more than 5 minutes and otherwise I
###    raise an error that I can handle in retry logic not shown in this
###    script.
###
###    This is also where we use our 'self' workflow's name to determine
###    whether we're looking at a relevant workflow.
###
###    Finally, we assert that the workflow's 'status' is on_hold to
###    consider it included in what should be returned.
###
### You'd then need to emit a list of links or whatever makes sense in
### your context and call this script using bash process substitution or
### similar to feed into the slack notification orb.
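### (A hedged sketch of emitting such a list also follows the script.)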
##########################################################################

import datetime
import json
import logging
import operator
import os
import pprint
import urllib.parse
import urllib.request

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s:%(levelname)s:%(name)s:%(message)s')
logger = logging.getLogger(__name__)

assert os.getenv('circle_token')
assert os.getenv('CIRCLE_PROJECT_USERNAME')
assert os.getenv('CIRCLE_PROJECT_REPONAME')
assert os.getenv('CIRCLE_BRANCH')
assert os.getenv('CIRCLE_WORKFLOW_ID')

logger.info('All required environment variables set')

class NascentPipelineError(Exception):
    pass

logger.info("Retrieving our own workflow information")
req = urllib.request.Request(
    url='https://circleci.com/api/v2/workflow/{}'.format(
        os.getenv('CIRCLE_WORKFLOW_ID')
    ),
    method='GET',
    headers={
        'Circle-Token': os.getenv('circle_token')
    }
)
try:
    with urllib.request.urlopen(req) as response:
        response_raw_body = response.read()
        logger.debug("Raw workflow ‘%r’", response_raw_body)
        parsed_response = json.loads(response_raw_body)
        logger.debug("Pretty workflow \n%s",
                     pprint.pformat(parsed_response))
        assert parsed_response.get('name'), \
            '‘{}’ has no ‘name’ key which is impossible'.format(
                parsed_response
            )
        our_workflow = parsed_response
except:
    logger.exception("Exception while retrieving our own workflow from CircleCI")
    raise

project_pipelines_url = (
    "https://circleci.com/api/v2/project/github/{}/{}/pipeline?{}"
).format(
    os.getenv('CIRCLE_PROJECT_USERNAME'),
    os.getenv("CIRCLE_PROJECT_REPONAME"),
    urllib.parse.urlencode({
        'branch': os.getenv("CIRCLE_BRANCH"),
    })
)
logger.info("Checking for pipelines at ‘%r’",
            project_pipelines_url)
req = urllib.request.Request(
    url=project_pipelines_url,
    method='GET',
    headers={
        'Circle-Token': os.getenv('circle_token')
    }
)
try:
    with urllib.request.urlopen(req) as response:
        response_raw_body = response.read()
        logger.debug("Raw project pipelines response ‘%r’", response_raw_body)
        parsed_response = json.loads(response_raw_body)
        logger.debug("Pretty project pipelines response\n%s",
                     pprint.pformat(parsed_response))
except:
    logger.exception("Exception while retrieving recent builds from CircleCI")
    raise

pipelines = parsed_response['items']

assert 0 < len(pipelines), \
    'Circle thinks there are no pipelines which is impossible'

logger.info('Found %d pipelines', len(pipelines))

workflows = []
for pipeline in pipelines:
    pipeline_workflow_url = "https://circleci.com/api/v2/pipeline/{}/workflow".format(
        pipeline['id']
    )
    req = urllib.request.Request(
        url=pipeline_workflow_url,
        method='GET',
        headers={
            'Circle-Token': os.getenv('circle_token'),
            'Accept': 'application/json'
        }
    )
    logger.debug("Checking for workflows at ‘%r’",
                 pipeline_workflow_url)
    try:
        with urllib.request.urlopen(req) as response:
            response_raw_body = response.read()
            logger.debug("Raw pipeline workflow body ‘%r’", response_raw_body)
            parsed_response = json.loads(response_raw_body)
            logger.debug("Pretty pipeline workflow\n%s",
                         pprint.pformat(parsed_response))
    except BaseException:
        logger.exception("Exception while retrieving recent builds from CircleCI")
        raise
    pipeline_workflows = parsed_response['items']
    if len(pipeline_workflows) == 0:
        try:
            pipeline_created_at = datetime.datetime.strptime(
                pipeline['created_at'],
                "%Y-%m-%dT%H:%M:%S.%f%z")
        except ValueError:
            logger.info(
                "Failed to parse pipeline created_at w/ "
                "microseconds. Falling back to non-microseconds")
            pipeline_created_at = datetime.datetime.strptime(
                pipeline['created_at'],
                "%Y-%m-%dT%H:%M:%S%z"
            )

        if (60 * 5) < (
                datetime.datetime.now(
                    datetime.timezone.utc) - pipeline_created_at
        ).total_seconds():
            logger.info(
                "Ignoring nascent pipeline ‘%r’ created more "
                "than 5 minutes ago",
                pipeline
            )
        else:
            raise NascentPipelineError('pipeline ‘{}’ found in nascent state'.format(
                pipeline))
    for workflow in pipeline_workflows:
        assert workflow.get('name'), \
            '‘{}’ has no ‘name’ key which is impossible'.format(
                workflow
            )
        if workflow['name'] == our_workflow['name']:
            if workflow['status'] == 'on_hold':
                workflow['pipeline'] = pipeline
                workflows.append(workflow)
            else:
                logger.debug("Ignoring workflow ‘%r’ because it's not ‘on_hold’",
                             workflow)
        else:
            logger.debug("Ignoring workflow ‘%r’ because it doesn't match our name ‘%s’",
                         workflow,
                         our_workflow['name'])

workflows.sort(key=operator.itemgetter('created_at'))

logger.info('Found the following workflows:\n%s',
            pprint.pformat(workflows))

assert 0 < len(workflows), \
    'CircleCI thinks there are no workflows which is impossible'

for workflow in workflows:
    assert workflow.get('id'), \
        '‘{}’ has no ‘id’ key which is impossible'.format(
            workflow
        )
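
Two pieces the header comments above leave as exercises are paging past the first page of pipelines and turning the collected on_hold workflows into something you can drop into a Slack message. Below is a minimal, hedged sketch of both. It reuses the same `circle_token` header and CIRCLE_* environment variables as the script; the helper names (`get_json`, `all_pipeline_pages`, `slack_lines`) and the app URL format in the links are my own illustrative choices, so adjust them to whatever you actually want to post.

    # Hedged sketch: pagination via `next_page_token`, plus a Slack-friendly
    # summary of the on_hold workflows collected by the script above.
    import json
    import os
    import urllib.parse
    import urllib.request

    def get_json(url):
        # Same auth scheme as the script above: a Circle-Token header read
        # from the `circle_token` environment variable.
        req = urllib.request.Request(
            url=url,
            method='GET',
            headers={'Circle-Token': os.getenv('circle_token')}
        )
        with urllib.request.urlopen(req) as response:
            return json.loads(response.read())

    def all_pipeline_pages(base_url):
        # Follow `next_page_token` until the API stops returning one.
        # `base_url` already ends in `?branch=...`, so extra query
        # parameters are appended with `&`.
        page_token = None
        while True:
            url = base_url
            if page_token:
                url += '&' + urllib.parse.urlencode({'page-token': page_token})
            page = get_json(url)
            yield from page['items']
            page_token = page.get('next_page_token')
            if not page_token:
                return

    def slack_lines(on_hold_workflows):
        # One mrkdwn-style link per held workflow. The app URL format here
        # is an assumption; swap in whatever link makes sense for you.
        lines = []
        for workflow in on_hold_workflows:
            url = 'https://app.circleci.com/pipelines/github/{}/{}/{}/workflows/{}'.format(
                os.getenv('CIRCLE_PROJECT_USERNAME'),
                os.getenv('CIRCLE_PROJECT_REPONAME'),
                workflow['pipeline_number'],
                workflow['id'],
            )
            lines.append('<{}|pipeline {}>'.format(url, workflow['pipeline_number']))
        # Join with a literal \n so the output can be embedded in a JSON
        # message template without breaking it.
        return '\\n'.join(lines)

Ending the script with something like `print(slack_lines(workflows))` would let the command substitution discussed further down this thread expand to that list.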

Good luck!


Thanks both of you!

@timvisher I will have a play with that over the next few days, much appreciated!

So I have a working Python script that does what I need (thanks for letting me use yours, @timvisher!)

But I am stumped on how to get the list of PRs that I have created into the Slack orb.

Any pointers on what I need to look for would be greatly appreciated.

So the key here is to realize that the Slack orb is ultimately just a shell script calling curl.

Then you need to familiarize yourself with Bash command substitution.

With that information in hand you essentially need to add something like the following to your message template:

$(./all_pending_builds.py)

in place of one of the variable names.

For instance, on an older version of the Slack orb I have something like the following in some of my configs:

- slack/status:
    fail_only: true
    channel: '#project-channel'
    mentions: "$(git show -s --format='%an' HEAD | tr '[:upper:]' '[:lower:]' | tr -d ' '),static-mention"
