"Could not parse object" Git Error

New to CircleCI, it’s been working great for a week or so, but today I started having issues. When I first create a new PR, it works as expected. But if I make another commit to the branch associated with the PR and push it up, CircleCI triggers the build, and then fails to find the latest commit:

Cloning into '.'...
Warning: Permanently added the RSA host key for IP address '192.30.253.112' to the list of known hosts.

remote: Counting objects: 1965, done.        
remote: Compressing objects: 100% (173/173), done.        
remote: Total 1965 (delta 180), reused 374 (delta 166), pack-reused 1566        
Receiving objects: 100% (1965/1965), 493.20 KiB | 0 bytes/s, done.
Resolving deltas: 100% (845/845), done.
Warning: Permanently added the RSA host key for IP address '192.30.253.113' to the list of known hosts.

remote: Counting objects: 4, done.        
remote: Compressing objects: 100% (2/2), done.        
remote: Total 4 (delta 2), reused 3 (delta 1), pack-reused 0        
Unpacking objects: 100% (4/4), done.
From github.com:xxx/yyy
 * [new ref]         refs/pull/141/head -> origin/pull/141
fatal: Could not parse object 'a05b2ab9d380fe2e3371d219a680953a0b1c7845'.
Exited with code 128

The odd thing is, if I then push the branch upstream, CircleCI will find it, and at that point if I re-trigger a build, it works! Pretty confusing…

Anyways, just wondering if anyone has run into this and has any workarounds. Thanks!

Adam


Are you self-hosting Git? I wonder if it has been corrupted on the remote, and needs a git fsck check?

Also, I have a vague memory of a bug reported here in which the clone would fail if the branch name contained the word “pull”. I can’t seem to find it, but perhaps that will give you clues as to new things to try.

Hi halfer, thanks for the suggestions.

Our repo is a private one hosted on GitHub. I’ve just tried a fresh git clone, and doing a fsck on that gives:

$ git fsck --full
Checking object directories: 100% (256/256), done.
Checking objects: 100% (1975/1975), done.

The two branches we have had fail so far were called docker and remove_gpgme respectively, which are different enough I would be very surprised if that was it.

I’ve also tried an interactive SSH session with the build, and running fsck directly on CircleCI seemed to indicate the repo was not corrupt on their infrastructure either.

Thanks again for the suggestions! I’m hoping this bug is magically sorted out today, will see when the next PR goes in…

We’re experiencing the same problems the last few days, in particular after rebasing/force pushing.


My team had the same problem when I updated an open PR (not a rebase or force push). When I submitted a second PR at the exact same commit, both PRs started to work. Hopefully this is useful as a workaround to other people.

(If you’re not getting this from PRs, then maybe push a second branch at the same commit as the one that isn’t working, or whatever else seems analogous.)
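For the branch case, something like this should be all it takes (branch names here are made up for illustration):

# Create a throwaway branch pointing at the same commit as the broken one, and push it
git branch ci-workaround my-broken-branch
git push origin ci-workaround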

Closing the failing pull request and creating a new one from the same code works for us.


Same issue, very annoying. It’s getting to be a total mess now, with PRs being closed and new ones created with references to the old ones… Mostly on force pushes, as mentioned above.


To clarify, I didn’t need to close my old PR. Making a new PR fixed the old one, and it was the old one that we merged.

So you should be able to make a new PR and immediately close it (maybe wait for tests to pass, to be safe), which is still annoying but less disruptive than closing the old PR.


We’re experiencing the same issue every once in a while. We can work around it just fine, but it’s still quite annoying. Similar setup for us, private GitHub repo.

I’d like to add that I experienced the issue again today, after a week of things going smoothly. We are in the same position, it is possible to work around, but a pain every time the problem surfaces. Would really like it if this bug could be squashed!

Git stores its objects in its own internal directory. I’m not an expert here, but it may be interesting to see if the object actually exists.

To do so, take the hash in the error (e.g. a05b2ab9d380fe2e3371d219a680953a0b1c7845) and split it into the first two characters and then the remaining 38, and do this in your project:

ls -l .git/objects/a0/5b2ab9d380fe2e3371d219a680953a0b1c7845

The questions are, does this exist (it should) and is the content parsable (by Git)? I imagine this error comes from the file not existing. I wonder if we can determine, or a CircleCI engineer can show us, the Git command used to do this clone?
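One caveat with that ls: if the object has been packed, it won’t exist as a loose file even though Git has it, so git cat-file is probably a more reliable check. A sketch, reusing the hash from the error above:

# Ask Git directly whether the object exists (loose or packed); exit status 0 means it does
git cat-file -e a05b2ab9d380fe2e3371d219a680953a0b1c7845 && echo "object exists" || echo "object missing"

# If it does exist, this should print "commit"
git cat-file -t a05b2ab9d380fe2e3371d219a680953a0b1c7845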

Hmm, if Git is trying to check out a commit hash that no longer exists, that would presumably cause this.

The script used to check out the PR is visible in the Checkout code step in the web UI. After doing a git clone, it does:

if [ -n "$CIRCLE_TAG" ]
then
  git fetch --force origin "refs/tags/${CIRCLE_TAG}"
else
  git fetch --force origin "pull/167/head:remotes/origin/pull/167"
fi

if [ -n "$CIRCLE_TAG" ]
then
  git reset --hard "$CIRCLE_SHA1"
  git checkout -q "$CIRCLE_TAG"
elif [ -n "$CIRCLE_BRANCH" ]
then
  git reset --hard "$CIRCLE_SHA1"
  git checkout -q -B "$CIRCLE_BRANCH"
fi

git reset --hard "$CIRCLE_SHA1"

In my case, the last time things failed, $CIRCLE_TAG was not set, CIRCLE_BRANCH=pull/167, and CIRCLE_SHA1=e3c2aed661deba4f949e36eea5144feacdbd6b33. That commit hash was the head of the branch I was pulling from in PR 167, and the output was:

Unpacking objects: 100% (17/17), done.
From github.com:xxx/yyy
 * [new ref]         refs/pull/167/head -> origin/pull/167
fatal: Could not parse object 'e3c2aed661deba4f949e36eea5144feacdbd6b33'.
Exited with code 128
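For what it’s worth, substituting those values into the script above, the sequence that ran should reduce to roughly this (a sketch, not copied from the build):

# $CIRCLE_TAG is unset, so the else branch of the first conditional runs
git fetch --force origin "pull/167/head:remotes/origin/pull/167"

# $CIRCLE_BRANCH is pull/167, so the elif branch runs
git reset --hard e3c2aed661deba4f949e36eea5144feacdbd6b33
git checkout -q -B pull/167

git reset --hard e3c2aed661deba4f949e36eea5144feacdbd6b33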

I don’t have a build failing in this way at the moment. Next time I do, I can attach with SSH and try listing the git object as you suggested, @halfer.


Ah, good sleuthing @tri-adam! Can you get a fresh local clone, and then replicate the sequence of Git commands performed on it, in order to see if you can force a Git fail? That would then constitute a solid bug report, if it’s replicable.

Yes, those commands work locally, and also I can restart that build now and it works. I believe this is because that commit has been merged to our master beach.

Next time I see the issue though, I’ll try to reproduce locally and check back in.

:sunny: :palm_tree: :cocktail: :beach_umbrella: :wink:

I imagine there would be a way in Git to reproduce the issue. If there is no other solution, you could create a dummy repo, create a change, create a PR on the change, and then replicate in Circle. That would probably be easier than working out how to wind a (remote) repo back to its exact PR state!

But yes, if you can live with this, wait until it happens again…

Hah, typos, although a beach would be nice right about now :wink:

Yeah, trouble is it doesn’t seem to happen on every PR, I would say it’s 5% of the time or less, and in those cases I’m not sure what triggers it. Some in the thread seem to think it’s related to force pushes after a rebase, though I’m not certain that is what has caused it in our case.


We’re using the standard checkout step, and we seem to see this error any time we commit updates to a PR. A rebuild doesn’t make it go away; we have to close the PR and open a new one for the checkout step to get past the error.

We use the standard GitHub workflow: committers work against their own forks and submit PRs to a central repo. This bug is especially a pain in the butt, because if we need to make any further commits on the branch due to code review or QA, we have to close the PR and issue a new one to get the build to work.

Is there anything I can provide to anyone at Circle to help diagnose this bug? I have a PR right now that’s unbuildable in its current state.

Not an employee, but I’d suggest getting the shell code as @tri-adam has done (make sure you look up your own, in case it is different) and then try to break a fresh clone locally using the steps that are executed based on the conditionals therein.

If you can replicate this error locally and reliably then you have something to report. From what @tri-adam is saying, it is not reliably replicable for them. Is it for you?

The shell code is just the standard checkout step from the CircleCI config.yml file.

I cannot replicate this locally, the issue only ever appears on Circle builds and, so far, only when pushing an update to a PR that has already had a CI build run against it.

It’s almost like the commit that the shell code is attempting to reference in the git reset --hard "$CIRCLE_SHA1" line doesn’t exist yet in the repo that was cloned. The hashes all do actually exist in the repo / PR branch; that was the first thing I checked when I saw the error, hence not being able to replicate locally. Maybe some sort of GitHub/CircleCI race condition?
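If anyone can SSH into a failing build before retrying it, something like this might confirm whether the local clone and GitHub disagree about the PR head (just a sketch, reusing the PR number and hash from earlier in the thread as placeholders):

# Does the object that git reset is complaining about exist in the clone at all?
git cat-file -e e3c2aed661deba4f949e36eea5144feacdbd6b33 && echo "present" || echo "missing"

# What does GitHub say the PR head is right now?
git ls-remote origin "refs/pull/167/head"

# What did the fetch actually write locally?
git rev-parse origin/pull/167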
