Job fails due to flaky network?

Over the past week, I have seen more and more of our jobs fail due to what look like network-related issues:

gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now

Exited with code exit status 2
CircleCI received exit code 2

All the orb is doing is installing the Google Cloud SDK:

curl -s https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-283.0.0-linux-x86_64.tar.gz | tar xz
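
Since curl is piped straight into tar, a truncated download only surfaces as the tar EOF error above. A sketch of a more defensive variant (assuming a bash step; the temp path is arbitrary) that makes the curl failure visible first:

# Download to a file first; --fail makes curl exit non-zero on HTTP errors,
# so a failed or truncated download stops the step before tar ever runs.
set -euo pipefail
curl --fail --silent --show-error \
  -o /tmp/google-cloud-sdk.tar.gz \
  "https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-283.0.0-linux-x86_64.tar.gz"
tar xzf /tmp/google-cloud-sdk.tar.gz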

Anyone else seeing this?

Thanks

Experiencing the same problems:

dfd5a49a372b: Layer already exists
c5bccd05d4ae: Layer already exists
35521538736e: Layer already exists
6cec254cd344: Layer already exists
de2851a3a5f8: Layer already exists
55c41dab3e9f: Layer already exists
eee248b31a26: Layer already exists
f1eb1b486064: Layer already exists
b8b727351343: Layer already exists
9d7862fe39f0: Layer already exists
c5ec52c98b31: Layer already exists
0955dbeb9b0a: Layer already exists
Put "https://********************************************/v2/customink/scala/manifests/branch-coding-bunny_use_ktool_13-a11d8a8": dial tcp 18.209.201.180:443: i/o timeout
/usr/local/bundle/gems/ktool-13.6.0/lib/ktool/docker.rb:60:in `block in build_image': Failed to push the docker image (RuntimeError)
	from /usr/local/bundle/gems/ktool-13.6.0/lib/ktool/docker.rb:58:in `each'
	from /usr/local/bundle/gems/ktool-13.6.0/lib/ktool/docker.rb:58:in `build_image'
	from /root/project/lib/docker.rb:28:in `build_image'
	from /root/project/lib/non_versioned_image.rb:29:in `block in build_variants'
	from /root/project/lib/non_versioned_image.rb:22:in `each'
	from /root/project/lib/non_versioned_image.rb:22:in `build_variants'
	from ./bin/build-docker-images:11:in `<main>'

Exited with code exit status 1

CircleCI received exit code 1

Pulling most images seems to work, but builds randomly fail with timeout errors wherever an AWS ECR pull or push is needed. Even though CircleCI claims their outages are fixed, there is clearly still a problem with the communication to AWS.
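
For what it's worth, a simple retry wrapper around docker push would look roughly like this (a sketch only; the function name and image tag are placeholders, and ktool drives the real push for us):

# Retry a flaky push a few times before giving up; the image name below is a placeholder.
push_with_retry() {
  local image="$1"
  local attempt
  for attempt in 1 2 3; do
    if docker push "$image"; then
      return 0
    fi
    echo "docker push failed (attempt $attempt), retrying in 10s..." >&2
    sleep 10
  done
  return 1
}

push_with_retry "$ECR_REGISTRY/customink/scala:branch-example" || exit 1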

We decided to add retries to the curl install command, and that worked fine for a while. Today we are getting:

curl: (92) HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)
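
A sketch of that kind of retry (the flag choices here are illustrative, --retry-all-errors needs curl 7.71+, and forcing HTTP/1.1 is only a guess at sidestepping the HTTP/2 stream error):

# Illustrative retry flags; downloading to a file avoids feeding a partially
# retried stream into tar. --http1.1 is a guess, not a confirmed workaround.
curl --fail --silent --show-error --http1.1 \
  --retry 5 --retry-delay 5 --retry-all-errors \
  -o /tmp/google-cloud-sdk.tar.gz \
  "https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-283.0.0-linux-x86_64.tar.gz"
tar xzf /tmp/google-cloud-sdk.tar.gz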

These network errors are making our pipeline really unreliable. We are going to move off CircleCI if they are not addressed.