You should only get a warning on machine/remote-docker jobs using deprecated images. If you are having trouble identifying the cause, you can submit a support ticket and one of our folks should be able to help.
It looks like you are using the aws-cli orb, which specifies a default image tag that is deprecated. We are working internally to publish a new version that does not use a deprecated image. Thank you for your patience.
We have cut a new release of the aws-cli orb (Release v4.1.3 · CircleCI-Public/aws-cli-orb · GitHub). Be sure to update your version number; that should get rid of the warnings. Of note: under Alpine, the aws-cli would consistently segfault, so as of now, on Alpine Linux, we default to the version of aws-cli available in the package manager. In this case, that means a bump of the aws-cli binary from version 2.1 to 2.13, specifically for Alpine Linux. Also, for Alpine Linux, we now ignore the version parameter.
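For anyone updating, a minimal sketch of the bump (job name, executor image, and steps are placeholders, not a prescribed setup):

```yaml
version: 2.1

orbs:
  # bump to the release above to stop pulling the deprecated default image
  aws-cli: circleci/aws-cli@4.1.3

jobs:
  deploy:
    docker:
      - image: cimg/base:current
    steps:
      - checkout
      - aws-cli/setup  # on Alpine executors the version parameter is now ignored
      - run: aws --version
```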
Hello,
Unfortunately, we found out about the brownouts only this morning, and it’s been quite disruptive for our team. As reported by @brentmmarks in his last message, we also had to upgrade a bunch of AWS orbs, which involved quite a few changes to our configurations.
Did you reach out in other ways about this beforehand, or just with this thread? I cannot find any warning in our mailbox or in the builds’ output from the past few days.
I believe we have sent emails to org admins warning about these deprecations. Additionally, we have placed messages at the top of each build that uses these deprecated images.
Do you have any idea when an updated version of the aws-ecr orb will be released? This broke for us unexpectedly today due to the deprecated images, and there is no released version that includes the commit that would fix the problem (21d867b, merged to master 3 days ago).
Hi @leeor, neither of those images should be deprecated, so you may be seeing failures for another reason. I have checked my internal tooling and am not seeing failures for these images. Also, I think you may have wanted to use 2024.01.1 instead of 2024.01.01. If you are still having issues, I would suggest submitting a support ticket.
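For reference, that tag in context (the job body is a placeholder):

```yaml
jobs:
  build:
    machine:
      # tag format is YYYY.MM.<patch>: 2024.01.1, not 2024.01.01
      image: ubuntu-2204:2024.01.1
    steps:
      - checkout
```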
We were getting the missing-image error in our build pipeline and updated our aws-cli orb from 3.1.1 to 4.1.3, but we’re now getting “npm: command not found” errors.
We are running an npm install in our job. It looks something like this:
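A minimal sketch of that shape of job, with placeholder names; it assumes a Docker executor whose image bundles Node so npm stays on the PATH (cimg/node here is an illustrative choice, not necessarily the original image, since aws-cli/setup only installs the AWS CLI, not Node):

```yaml
orbs:
  aws-cli: circleci/aws-cli@4.1.3

jobs:
  install-and-deploy:
    docker:
      # pick an executor image that ships Node/npm
      - image: cimg/node:20.11
    steps:
      - checkout
      - aws-cli/setup
      - run: npm install
```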
While it did get our attention, brownouts are a TERRIBLE way to notify users. I would have appreciated a noticeable banner on the page a few weeks PRIOR to the brownout so that our production builds were not affected. Instead, I had to jump into an emergency fix for our CircleCI YAML this morning to get repairs started. This is unsettling, to say the least! We are in CircleCI basically every day; if we had a banner on the build notifying us that the Linux image was being deprecated, we could have fixed it BEFORE it became an emergency.
Do better, CircleCI (@brentmmarks - tagging for visibility)!
Sorry, it seems the warning was on the job itself (if you happen to click into it), which is a terrible place for visibility. We almost never click into individual jobs; we view the runners from the dashboard. Why are there no warnings on the dashboard view? That would have prevented this morning’s emergency…
We were also hit by the brownouts this morning. We use the Ubuntu image indirectly through aws-ecr/build-and-deploy-image, and to get to a functional aws-ecr it seems we need to do a major (8 to 9) update that involves retooling the authentication. We did not get (or at least didn’t notice) any email notification, and we basically never visit the job page either, so this hit us by surprise at a time when we happen to be very busy and short-staffed.
For the ECR orb update, I also have not been able to find an 8-to-9 upgrade guide, and aws-ecr@8.2.1 still gives the error; we currently pass the access key and secret via extra-build-args, so switching to the role ARN is not trivial.
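For anyone else attempting the jump, a hedged sketch of what the 9.x shape appears to involve: snake_case command names and an explicit auth step. The repo name and role ARN are placeholders, and the parameter names should be verified against the orb registry before use:

```yaml
orbs:
  aws-cli: circleci/aws-cli@4.1.3
  aws-ecr: circleci/aws-ecr@9.0

workflows:
  build-and-push:
    jobs:
      - aws-ecr/build_and_push_image:
          # v9 delegates authentication to explicit steps; aws-cli/setup with
          # no role_arn falls back to AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY
          # from the environment, so an OIDC role is not strictly required
          auth:
            - aws-cli/setup:
                role_arn: arn:aws:iam::123456789012:role/CIRCLECI_OIDC  # placeholder
          repo: my-repo
```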
Hi Team,
We were using the image ubuntu-2004:202201-02.
The builds then started failing with: This job was rejected because the image is unavailable.
When we updated to the default ubuntu-2204:2024.01.1 version/tag, we now get a different error, shown below:
java.util.ServiceConfigurationError: io.cucumber.core.backend.ObjectFactory: Provider diaceutics.sbo.cucumber.objectfactory.CustomObjectFactory could not be instantiated
at java.base/java.util.ServiceLoader.fail(ServiceLoader.java:586)
at java.base/java.util.ServiceLoader$ProviderImpl.newInstance(ServiceLoader.java:813)
at java.base/java.util.ServiceLoader$ProviderImpl.get(ServiceLoader.java:729)
at java.base/java.util.ServiceLoader$3.next(ServiceLoader.java:1403)
at io.cucumber.core.runtime.ObjectFactoryServiceLoader.loadSelectedObjectFactory(ObjectFactoryServiceLoader.java:52)
at io.cucumber.core.runtime.ObjectFactoryServiceLoader.loadObjectFactory(ObjectFactoryServiceLoader.java:48)
at java.base/java.lang.ThreadLocal$SuppliedThreadLocal.initialValue(ThreadLocal.java:305)
at java.base/java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:195)
at java.base/java.lang.ThreadLocal.get(ThreadLocal.java:172)
at io.cucumber.core.runtime.ThreadLocalObjectFactorySupplier.get(ThreadLocalObjectFactorySupplier.java:19)
at io.cucumber.core.runtime.BackendServiceLoader.loadBackends(BackendServiceLoader.java:44)
at io.cucumber.core.runtime.BackendServiceLoader.get(BackendServiceLoader.java:34)
at io.cucumber.core.runtime.BackendServiceLoader.get(BackendServiceLoader.java:30)
at io.cucumber.core.runtime.ThreadLocalRunnerSupplier.createRunner(ThreadLocalRunnerSupplier.java:50)
at java.base/java.lang.ThreadLocal$SuppliedThreadLocal.initialValue(ThreadLocal.java:305)
at java.base/java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:195)
at java.base/java.lang.ThreadLocal.get(ThreadLocal.java:172)
at io.cucumber.core.runtime.ThreadLocalRunnerSupplier.get(ThreadLocalRunnerSupplier.java:44)
at io.cucumber.testng.TestNGCucumberRunner.runScenario(TestNGCucumberRunner.java:121)
at diaceutics.sbo.cucumber.runners.CustomRunner.runParallelScenario(CustomRunner.java:25)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:134)
at org.testng.internal.TestInvoker.invokeMethod(TestInvoker.java:597)
at org.testng.internal.TestInvoker.invokeTestMethod(TestInvoker.java:173)
at org.testng.internal.TestMethodWithDataProviderMethodWorker.call(TestMethodWithDataProviderMethodWorker.java:77)
at org.testng.internal.TestMethodWithDataProviderMethodWorker.call(TestMethodWithDataProviderMethodWorker.java:15)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: Unable to load cache item
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2052)
at com.google.common.cache.LocalCache.get(LocalCache.java:39
Any ideas or updates on which dependencies might be broken, or suggestions on how to fix this?
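One way to narrow it down: the 2024.01.1 image ships much newer default toolchains than 202201-02, including the JDK, and a JDK jump is a common source of ServiceLoader failures like the one above. A quick sketch for inspecting what the new image bundles (the build tools are guarded since their presence may vary):

```yaml
jobs:
  inspect-toolchain:
    machine:
      image: ubuntu-2204:2024.01.1
    steps:
      - run:
          name: Show bundled Java and build tool versions
          command: |
            java -version
            mvn -version || true
            gradle -version || true
```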