Hi Alexey,
Thanks for the detailed post — you’ve diagnosed this accurately.
To answer your questions directly:
- The endpoints powering the job “Resources” tab are not currently public; there’s no supported API to retrieve per-job CPU/RAM utilization programmatically today.
- The Usage Export API does not currently support filtering by project, branch, pipeline ID, or workflow ID. It’s an org-wide dump, which makes it impractical for the targeted, per-branch comparisons you’re describing.
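For completeness: until native filtering lands, the workaround is to filter the org-wide export client-side after downloading it. A minimal sketch of that step, with the caveat that the column names used here (`PROJECT_NAME`, `VCS_BRANCH`, etc.) are illustrative assumptions; check them against the header row of your actual export:

```python
import csv
from io import StringIO

def filter_usage_rows(csv_text, project, branch):
    """Keep only rows for one project/branch from an org-wide usage export.

    PROJECT_NAME and VCS_BRANCH are assumed column names -- verify them
    against the header row of your own export before relying on this.
    """
    reader = csv.DictReader(StringIO(csv_text))
    return [
        row for row in reader
        if row.get("PROJECT_NAME") == project and row.get("VCS_BRANCH") == branch
    ]

# Example with a made-up two-row export:
sample = (
    "PROJECT_NAME,VCS_BRANCH,JOB_NAME,MEDIAN_CPU_UTILIZATION_PCT\n"
    "api,main,build,35\n"
    "api,feature-x,build,80\n"
)
rows = filter_usage_rows(sample, "api", "feature-x")
```

It works, but you end up re-downloading and re-filtering the whole org dump for every question, which is exactly the overhead you flagged.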
Near-term workaround: CircleCI MCP Server
If you’re open to an AI-assisted workflow, our MCP Server may help bridge the gap today. It includes two tools specifically relevant to your use case:
- A usage API downloader that handles fetching and processing the org-level CSV for you
- An underutilized resource class parser that analyzes the data and surfaces over-provisioned jobs across your projects
Rather than writing and maintaining your own scripted pipeline, you’d be able to ask questions like “which jobs on branch X are using significantly more CPU than on main?” and have the MCP server handle the data wrangling. It won’t give you true per-branch API filtering, but it removes most of the manual overhead you described.
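For reference, if you do end up scripting the comparison yourself, the core of it is a per-job join across the two branches. A sketch under stated assumptions: `JOB_NAME`, `VCS_BRANCH`, and `CPU_PCT` are illustrative field names, not the real export schema:

```python
from collections import defaultdict

def compare_branches(rows, base, candidate):
    """Per-job CPU delta between two branches.

    Expects dicts with JOB_NAME, VCS_BRANCH, and a numeric CPU column;
    these field names are assumptions about the export, not its real schema.
    """
    by_job = defaultdict(dict)
    for r in rows:
        if r["VCS_BRANCH"] in (base, candidate):
            by_job[r["JOB_NAME"]][r["VCS_BRANCH"]] = float(r["CPU_PCT"])
    # Report only jobs that run hotter on the candidate branch than on base.
    return {
        job: vals[candidate] - vals[base]
        for job, vals in by_job.items()
        if base in vals and candidate in vals and vals[candidate] > vals[base]
    }

# Made-up rows: "build" is hotter on feature-x, "test" is not.
sample_rows = [
    {"JOB_NAME": "build", "VCS_BRANCH": "main", "CPU_PCT": "30"},
    {"JOB_NAME": "build", "VCS_BRANCH": "feature-x", "CPU_PCT": "75"},
    {"JOB_NAME": "test", "VCS_BRANCH": "main", "CPU_PCT": "50"},
    {"JOB_NAME": "test", "VCS_BRANCH": "feature-x", "CPU_PCT": "40"},
]
hotter = compare_branches(sample_rows, "main", "feature-x")
```

The logic itself is small; the pain is the surrounding download/refresh plumbing, which is the part the MCP tools take off your plate.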
Longer term
I created a feature request on your behalf to track the native API improvements you’re asking for: “Expose per-job CPU/RAM resource usage via public API with project, branch, and pipeline filtering”.
If you could add a vote and drop a comment with any additional context about your setup — e.g. how frequently you’d query it, whether a real-time push via webhooks would work as well as a pull API — that would be really valuable signal for prioritization.
Thanks again for the clear write-up.