Notification for customers compiling and caching binary dependencies on CircleCI 2.0

This only affects customers who meet the criteria described in the title. Full details on what you need to know are here: Use the `arch` cache template key if you rely on cached compiled binary dependencies
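
For reference, a cache key that includes the architecture looks something like this in `.circleci/config.yml` (a minimal sketch; the `v1-deps` prefix and the `Gemfile.lock` checksum are illustrative, not required names):

```yaml
steps:
  - restore_cache:
      keys:
        # {{ arch }} expands to the host architecture, so caches built
        # on different CPU generations never collide
        - v1-deps-{{ arch }}-{{ checksum "Gemfile.lock" }}
  - run: bundle install --path vendor/bundle
  - save_cache:
      key: v1-deps-{{ arch }}-{{ checksum "Gemfile.lock" }}
      paths:
        - vendor/bundle
```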

@tom Might be worth pointing out (if true) that this includes a wide swathe of customers who perhaps wouldn’t remember that they are (implicitly) compiling binary dependencies (e.g. a Ruby project with Nokogiri as a dependency).

@tom What isn’t clear is under which circumstances the architecture (and thus the `arch` key) might change. If it’s under the control of the config.yml then I don’t need to worry about this – if it’s likely to vary outside my control then I do. (PS: having just moved to 2.0, I don’t want to have to keep tweaking my CI config. The whole point of a CI service is that it should be stable.)

Thanks for the feedback - I hear you; it’s our intention to make things as seamless as possible.

In this case the potential architecture changes are outside of customer control, since it’s our backend that may need to run on machines with newer processors. We want to be able to offer people the best-performing hardware for the fastest builds.

We expect it to impact a small number of customers, and we’ll be able to notify affected projects and share how to fix them. We wanted to share this pre-emptively for anyone who knew they’d be impacted. Your point about Nokogiri is a good one; however, it doesn’t mean that every such project will suddenly start failing, since we won’t be switching all of our infrastructure to the new architecture.

I’ve added some more details about the issue here: Use the `arch` cache template key if you rely on cached compiled binary dependencies

Fantastic - very clear.

@tom We’ve just hit an issue with this. We have one project where the build has just started failing consistently: in one job of the workflow we save a cache using the `arch` template as part of the key.
It appears to be saving with an arch of `arch1-linux-amd64-6_63`, but a later job in the workflow then tries to restore the cache and fails because it’s looking for an arch of `arch1-linux-amd64-6_62`.
The primary image for the ‘saving’ job is a CircleCI Node image, but its SHA looks like it hasn’t changed recently. The primary image for the ‘restore’ job is a custom one of ours, which hasn’t changed either.
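
Roughly, the relevant parts of our config look like this (a simplified sketch; the job names, image tags, and key prefix are placeholders, not our real values):

```yaml
jobs:
  build:
    docker:
      - image: circleci/node:8   # placeholder tag; our image is pinned
    steps:
      - checkout
      - run: npm install
      - save_cache:
          key: v1-deps-{{ arch }}-{{ checksum "package-lock.json" }}
          paths:
            - node_modules
  test:
    docker:
      - image: ourorg/custom-image:latest   # placeholder for our custom image
    steps:
      - checkout
      - restore_cache:
          keys:
            # same key template, but {{ arch }} resolves on this job's host,
            # which is apparently a different CPU generation
            - v1-deps-{{ arch }}-{{ checksum "package-lock.json" }}
```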

Any advice here?

@edkellena Thanks for the info. Can you open a support ticket (https://circleci.zendesk.com/hc/en-us/requests/new) and share a link to your project so we can take a look, please?

@tom :slight_smile: I actually just sent an email to cs@… as well. Is that the same thing?

That works (for now!) and we have your ticket.

@tom Our build requires AVX2 (Haswell); we actually picked CircleCI (and did the upgrade to 2.0 early on) because you were running builds on HSW machines. Since midday yesterday our builds have failed. I don’t see anything in the docs about specifying the architecture to run the builds on. Is this possible?

@zbjornson There isn’t a way for customers to choose the architecture. The new architecture is Xeon E5-2680 v2 (Ivy Bridge); will that work for you? If so, following the advice to use a new cache key should solve the issue for you.
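
If it helps, the usual way to start a fresh cache is to bump a version prefix in the key (a sketch; the `v1`/`v2` prefix is just a convention, use whatever your keys already look like):

```yaml
- save_cache:
    # was v1-deps-...; bumping the prefix discards the stale binaries
    # compiled for the old architecture
    key: v2-deps-{{ arch }}-{{ checksum "Gemfile.lock" }}
    paths:
      - vendor/bundle
```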

@tom No, we need Haswell or later. We’re out of luck, then?

It’s frustrating that this was only identified as a caching issue, without the understanding that some applications actually require certain CPU architectures. We spent a lot of time moving from CircleCI 1.0 to 2.0, and will have to move to a different CI provider if there’s no path forward here.

@zbjornson Can you open a support ticket (https://circleci.zendesk.com/hc/en-us/requests/new) so we can look into this for you?

@tom @zbjornson I have run into the same issue: our build & test jobs require AVX2 support. Was there a resolution here?
Currently our hosts are also running the E5-2680 v2 architecture, and we’re using CircleCI 2.0 at the moment.
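
For what it’s worth, this is how we confirm what the host actually supports: a step like this fails the job early if AVX2 is missing (the step name is arbitrary):

```yaml
- run:
    name: Require AVX2
    # /proc/cpuinfo lists CPU feature flags in lowercase
    command: grep -q avx2 /proc/cpuinfo || (echo "Host CPU lacks AVX2" >&2; exit 1)
```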

Customer support said that they don’t intend to add back support for AVX2 hosts, so we moved to our own CI.