All,
I’m trying to write a script using the CircleCI API that queries all the repos I have on CircleCI and retrieves the number of credits each repo has used. At the moment I’m using Python (pycircleci) because I think I saw an example using it. (I hadn’t yet seen this post.)
Now, the code (essentially) is:
import time
from urllib.parse import urlparse

from pycircleci.api import Api

# circleci_api_key, gh_org, and the extract_values() helper are defined
# earlier in the full script; this is just the relevant part.
circleci = Api(circleci_api_key)

gh_repos = []

# get project list from current user info
user_info = circleci.get_user_info()
for gh_repo_url in user_info['projects']:
    if gh_org in gh_repo_url:
        gh_repo_name = urlparse(gh_repo_url).path.rsplit('/')[-1]
        gh_repos.append(gh_repo_name)

# get Insights metrics for each repo and pull out the credit usage
credits = {}
for repo in gh_repos:
    insights = circleci.get_project_workflows_metrics(gh_org, repo)
    try:
        credits[repo] = extract_values(insights, 'total_credits_used')[0]
    except IndexError:
        # I think this means no credits were used???
        credits[repo] = 0
    print(f'Repo: {repo:10} -- Credits: {credits[repo]:<10d}')
    time.sleep(10)
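As a side note, the IndexError handling above is a guess on my part. If I’m reading the Insights /insights/{project-slug}/workflows response right (an assumption, I’m just going off the docs), it comes back as a dict with an items list containing a per-workflow metrics dict, so maybe I could drop the extract_values helper and sum the credits directly, which would make the “no workflows ran” case an explicit zero:

def total_credits(insights):
    # Sum total_credits_used over every workflow in one Insights response.
    # Only looks at the first page; I'm ignoring next_page_token here.
    items = insights.get('items', [])
    return sum(item['metrics'].get('total_credits_used', 0) for item in items)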
Now this does seem to work, but if I don’t have a time.sleep() at the end of each loop iteration, I get a 429 Client Error: Too Many Requests for url because, well, the loop executes so fast that CircleCI thinks I’m spamming it.
I guess my question is: is there a good way to determine how long to wait? Or, perhaps, is there some way to do this “better”? I’m not great at Python and even worse at APIs/JSON/etc.; I’m a Fortran programmer, so a program that accesses the Internet seems crazy to me!
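The only idea I’ve come up with so far is to wrap the call in a retry loop that backs off when it sees a 429. This is just a rough sketch and assumes pycircleci lets the underlying requests.exceptions.HTTPError bubble up (and that CircleCI sends a Retry-After header, which I haven’t verified):

import time
import requests

def get_metrics_with_backoff(circleci, org, repo, max_tries=5):
    # Retry get_project_workflows_metrics with an increasing wait when rate-limited.
    wait = 1
    for _ in range(max_tries):
        try:
            return circleci.get_project_workflows_metrics(org, repo)
        except requests.exceptions.HTTPError as err:
            if err.response is not None and err.response.status_code == 429:
                # Honour the server's Retry-After header if it exists,
                # otherwise double the wait each time (exponential backoff).
                retry_after = err.response.headers.get('Retry-After')
                time.sleep(int(retry_after) if retry_after else wait)
                wait *= 2
            else:
                raise
    raise RuntimeError(f'Still rate-limited after {max_tries} tries for {repo}')

The main loop would then call get_metrics_with_backoff(circleci, gh_org, repo) instead of hitting the API directly, and I could drop the fixed time.sleep(10). Does that seem like a sane approach, or is there something simpler I’m missing?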
Note that a 10-second wait between calls isn’t too bad in truth since, if I get this working, the script will probably only run once a day at night. I just figured I’d try to learn!