Linux Image Deprecations and EOL for 2024

Hi folks, I have a similar question to @johnjmaguire's. Here is a snippet of our config:

version: 2.1
setup: true

parameters:
  GHA_Actor:
    type: string
    default: ""
  GHA_Action:
    type: string
    default: ""
  GHA_Event:
    type: string
    default: ""
  GHA_Meta:
    type: string
    default: ""
  manual_commit_id:
    type: string
    default: ""
  lambda_names:
    type: string
    default: ""

defaults: &defaults
  working_directory: ~/samvaad
  docker:
    - image: cimg/python:3.9
  resource_class: small
orbs:
  continuation: circleci/continuation@0.4.0
  aws-cli: circleci/aws-cli@3.1.5

jobs:
  CalculateChanges: &calculateChanges
    <<: *defaults
    parameters:
      AWS_REGION:
        type: env_var_name
        default: DEV_AWS_REGION
      AWS_CI_BUCKET:
        type: string
        default: DEV_AWS_CI_BUCKET
      IAM_ROLE_ARN:
        type: string
        default: DEV_IAM_ROLE_ARN
      PROFILE:
        type: string
        default: dev

    steps:
      - checkout

      - run:
          name: Install python deps
          command: |
            python3.9 -m pip install import_deps==0.2.0 PyYAML requests boto3

      - aws-cli/setup:
          role-arn: $<< parameters.IAM_ROLE_ARN >>
          aws-region: << parameters.AWS_REGION >>
          # optional parameters
          profile-name: << parameters.PROFILE >>
          role-session-name: 'circleci'
          session-duration: '3600'

      - run:
          name: Identify Changes
          command: |
            if [[ -n "<< pipeline.parameters.lambda_names >>" ]]; then
                python3.9 calculate_changes.py --commit "<< pipeline.parameters.manual_commit_id >>" --lambdas "<< pipeline.parameters.lambda_names >>"
            elif [[ -n "$CIRCLE_PULL_REQUEST" ]]; then
                GITHUB_API_URL="$(echo "$CIRCLE_PULL_REQUEST" | sed 's/https:\/\/github.com\//https:\/\/api.github.com\/repos\//' | sed 's/\/pull\//\/pulls\//')"
                export PYTHONIOENCODING=utf8
                export PULL_REQUEST_BASE_REF=$(curl -s "$GITHUB_API_URL" -H "Authorization: Bearer $GITHUB_TOKEN" | python3 -c "import sys, json; print(json.load(sys.stdin)['base']['ref'])")
                python3.9 calculate_changes.py --branch "$PULL_REQUEST_BASE_REF"
            else
                python3.9 calculate_changes.py --branch develop  # for testing only
            fi


      - continuation/continue:
          configuration_path: .circleci/workflow.yml

Hi @jonprindiville,

You should only get a warning on machine/remote-docker jobs that use deprecated images. If you are having trouble identifying which image is the problem, you can submit a support ticket and one of our folks should be able to help.

Thanks,
Brent

hi @neilharia7,

It looks like you are using the aws-cli orb, which specifies a default image tag that is now deprecated. We are working internally to publish a new version that does not use a deprecated image. Thank you for your patience.

Thanks,
Brent

Hi all,

We have cut a new release of the aws-cli orb (Release v4.1.3 · CircleCI-Public/aws-cli-orb · GitHub). Be sure to update your version number; it should get rid of the warnings. Of note: under Alpine, the aws-cli would consistently segfault, so as of now, on Alpine Linux we default to the version of aws-cli available in the package manager. In practice this means a bump from 2.1 to 2.13 of the aws-cli binary, specifically on Alpine Linux. Also, on Alpine Linux we now ignore the version parameter.
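
For reference, picking up the fix is just a matter of bumping the orb version in your config, something like this (a minimal sketch; the orb alias matches the snippets earlier in this thread):

orbs:
  aws-cli: circleci/aws-cli@4.1.3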

Thanks,
Brent

Hello,
Unfortunately, we found out about the brownouts only this morning, and it has been quite disruptive for our team. As mentioned by @brentmmarks in his last message, we also had to upgrade a bunch of AWS orbs, which involved quite a few changes to our configuration.

Did you reach out in any other way about this beforehand, or only in this thread? I cannot find any warning in our mailbox or in the builds’ output from the past few days.

Hi @filipporagazzobuybay,

I believe we have sent emails to org admins warning about these deprecations. Additionally, we have placed messages at the top of each build that uses these deprecated images.

Thanks,
Brent

ubuntu-2204:2023.04.02 is not on the deprecation list.

That image and 2024.01.01 have been unavailable for almost 10 hours now.

Hi Brent,

Do you have any idea when an updated version of the aws-ecr orb will be released? This broke for us unexpectedly today due to the deprecated images, and no released version yet includes the commit (21d867b, merged to master 3 days ago) that would fix the problem.

Thanks,
Andy

Hi @leeor, neither of those images should be deprecated, so you may be seeing failures for another reason. I have checked my internal tooling and am not seeing failures for these images. Also, I think you might want to use 2024.01.1 instead of 2024.01.01. If you are still having issues, I would suggest submitting a support ticket.
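
For reference, the corrected tag in a machine executor stanza would look something like this (a minimal sketch; the rest of the job is unchanged):

machine:
  image: ubuntu-2204:2024.01.1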

That extra 0 was my typo. The build YAML says 2024.01.1.

We were getting the missing-image error in our build pipeline and updated our aws-cli orb from 3.1.1 to 4.1.3, but we are now getting "npm: command not found" errors.

We are running an npm install in our job; it looks something like this:

commands:
  aws-setup:
    steps:
      - aws-cli/setup:
          profile_name: cicd-user
      - aws-cli/role_arn_setup:
          profile_name: converge-internal
          role_arn: $AWS_ROLE_ARN_INTERNAL
          source_profile: cicd-user
      - aws-cli/role_arn_setup:
          profile_name: converge-staging
          role_arn: $AWS_ROLE_ARN_STAGING
          source_profile: cicd-user
      - aws-cli/role_arn_setup:
          profile_name: converge-production
          role_arn: $AWS_ROLE_ARN_PRODUCTION
          source_profile: cicd-user
  npm-install:
    steps:
      - checkout
      - run:
          command: npm ci

jobs:
  cdk-diff:
    parameters:
      profile:
        type: string
      environment:
        type: string
    executor: aws-cli/default
    steps:
      - aws-setup
      - npm-install
      - cdk:
          profile: <<parameters.profile>>
          command: diff
          environment: <<parameters.environment>>

The npm ci step worked with 3.1.1 but doesn’t work anymore.

Does the latest version of the aws-cli orb not include Node.js?

While it did get our attention, brownouts are a TERRIBLE way to notify users. I would have appreciated a noticeable banner on the page a few weeks PRIOR to the brownout so that our production builds were not affected. Instead, I had to jump into an emergency fix for our CircleCI YAML this morning to get repairs started. This is unsettling, to say the least! We are in CircleCI basically every day; if we had a banner on the build notifying us that the Linux image was being deprecated, we could have fixed it BEFORE it became an emergency.

Do better, CircleCI (@brentmmarks, tagging for visibility)!

Sorry, it seems the warning was on the job itself (if you happened to click into it), which is a terrible place for visibility. We almost never click into individual jobs; we view the runners from the dashboard. Why are there no warnings on the dashboard view? That would have prevented the emergency this morning…

Hi @JTCozart,

Thank you for the feedback; I will bring this back to my team.

Thanks,
Brent

Edit: I’ve fixed this by replacing the aws-cli/default executor with node/default.

We have the following orbs:

orbs:
  aws-cli: circleci/aws-cli@4.1.3
  terraform: circleci/terraform@3.1.0
  node: circleci/node@5.2.0
  aws-ecr: circleci/aws-ecr@8.2.1
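
For anyone hitting the same "npm: command not found" error, here is a minimal sketch of that executor swap, assuming the node orb alias above (the rest of the cdk-diff job from my earlier post is unchanged):

jobs:
  cdk-diff:
    parameters:
      profile:
        type: string
      environment:
        type: string
    executor: node/default  # the node orb's default executor is cimg/node-based, so npm is available
    steps:
      - aws-setup
      - npm-install
      - cdk:
          profile: <<parameters.profile>>
          command: diff
          environment: <<parameters.environment>>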

We were also hit by the brownouts this morning. We use the Ubuntu image indirectly through aws-ecr/build-and-deploy-image, and to get to a functional aws-ecr it seems we need to do a major (8 to 9) update that involves retooling the authentication. We did not get (or at least did not notice) any email notification, and we basically never visit the job page either, so this hit us by surprise at a time when we happen to be very busy and short-staffed.

For the ECR orb update, I also have not been able to find an 8-to-9 upgrade guide, and aws-ecr@8.2.1 still gives the error; we currently pass an access key and secret via extra-build-args, so switching to a role ARN is not trivial.

Hi @mfickett_FORM,

You should be able to override the image through a parameter on the orb's default executor: aws-ecr-orb/src/executors/default.yml at v8.2.0 · CircleCI-Public/aws-ecr-orb · GitHub
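
Not an official snippet, but a rough sketch of what that override could look like when invoking the orb's job in a workflow, assuming the job exposes an executor parameter. The job name follows the post above, and the machine-image parameter name is an assumption; check the linked default.yml for the actual parameter the executor exposes:

workflows:
  deploy:
    jobs:
      - aws-ecr/build-and-deploy-image:
          executor:
            name: aws-ecr/default
            # parameter name is an assumption; confirm it against the linked default.yml
            machine-image: ubuntu-2204:2024.01.1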

Hope this helps resolve this issue for you.

Thanks,
Brent

Hi,

It is after 17:00 UTC and before 20:00 UTC, but I still have a pipeline failing because one of these images is unavailable.

Hi Team,
We were using the image ubuntu-2004:202201-02.

Then the builds were failing with: "This job was rejected because the image is unavailable."
When we updated to the default ubuntu-2204:2024.01.1 tag, we now get a different error, as shown below:

java.util.ServiceConfigurationError: io.cucumber.core.backend.ObjectFactory: Provider diaceutics.sbo.cucumber.objectfactory.CustomObjectFactory could not be instantiated
at java.base/java.util.ServiceLoader.fail(ServiceLoader.java:586)
at java.base/java.util.ServiceLoader$ProviderImpl.newInstance(ServiceLoader.java:813)
at java.base/java.util.ServiceLoader$ProviderImpl.get(ServiceLoader.java:729)
at java.base/java.util.ServiceLoader$3.next(ServiceLoader.java:1403)
at io.cucumber.core.runtime.ObjectFactoryServiceLoader.loadSelectedObjectFactory(ObjectFactoryServiceLoader.java:52)
at io.cucumber.core.runtime.ObjectFactoryServiceLoader.loadObjectFactory(ObjectFactoryServiceLoader.java:48)
at java.base/java.lang.ThreadLocal$SuppliedThreadLocal.initialValue(ThreadLocal.java:305)
at java.base/java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:195)
at java.base/java.lang.ThreadLocal.get(ThreadLocal.java:172)
at io.cucumber.core.runtime.ThreadLocalObjectFactorySupplier.get(ThreadLocalObjectFactorySupplier.java:19)
at io.cucumber.core.runtime.BackendServiceLoader.loadBackends(BackendServiceLoader.java:44)
at io.cucumber.core.runtime.BackendServiceLoader.get(BackendServiceLoader.java:34)
at io.cucumber.core.runtime.BackendServiceLoader.get(BackendServiceLoader.java:30)
at io.cucumber.core.runtime.ThreadLocalRunnerSupplier.createRunner(ThreadLocalRunnerSupplier.java:50)
at java.base/java.lang.ThreadLocal$SuppliedThreadLocal.initialValue(ThreadLocal.java:305)
at java.base/java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:195)
at java.base/java.lang.ThreadLocal.get(ThreadLocal.java:172)
at io.cucumber.core.runtime.ThreadLocalRunnerSupplier.get(ThreadLocalRunnerSupplier.java:44)
at io.cucumber.testng.TestNGCucumberRunner.runScenario(TestNGCucumberRunner.java:121)
at diaceutics.sbo.cucumber.runners.CustomRunner.runParallelScenario(CustomRunner.java:25)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:134)
at org.testng.internal.TestInvoker.invokeMethod(TestInvoker.java:597)
at org.testng.internal.TestInvoker.invokeTestMethod(TestInvoker.java:173)
at org.testng.internal.TestMethodWithDataProviderMethodWorker.call(TestMethodWithDataProviderMethodWorker.java:77)
at org.testng.internal.TestMethodWithDataProviderMethodWorker.call(TestMethodWithDataProviderMethodWorker.java:15)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: Unable to load cache item
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2052)
at com.google.common.cache.LocalCache.get(LocalCache.java:39

Any ideas or updates on which dependencies might be broken, or suggestions on how to fix this?

The brownout should be over, correct? I’m still getting failed builds because my image is unavailable.
