[SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version

I have been battling with this issue and need some help.
I am using django-cookiecutter + Docker for the project.
The app runs fine after I deploy the Docker container to the server manually.

But when I push changes through CircleCI, I get this error at the deploy phase:

Creating CA: /home/circleci/.docker/machine/certs/ca.pem
Creating client certificate: /home/circleci/.docker/machine/certs/cert.pem
Running pre-create checks...
Creating machine...
(production) Importing SSH key...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env production
Building postgres
ERROR: SSL error: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:661)
Exited with code 1

Update: config.yml

jobs:
  build:
    machine: true
    working_directory: ~/app_api
    steps:
      - checkout
      - run:
          name: Run tests
          command: |
            docker-compose -f local.yml up -d
            docker-compose -f local.yml run django python manage.py help
            docker-compose -f local.yml run django pytest
  deploy:
    machine: true
    working_directory: ~/app_api
    steps:
      - checkout
      - add_ssh_keys:
          fingerprints:
            - **:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**
      - run:
          name: Deploy Master to Digital Ocean
          command: |
            cp ./id_rsa.pub ~/.ssh
            ls -al ~/.ssh
            base=https://github.com/docker/machine/releases/download/v0.14.0 &&
            curl -L $base/docker-machine-$(uname -s)-$(uname -m) >/tmp/docker-machine &&
            sudo install /tmp/docker-machine /usr/local/bin/docker-machine
            mkdir -p .envs/.production
            echo POSTGRES_HOST=$POSTGRES_HOST >> .envs/.production/.postgres
            echo REDIS_URL=$REDIS_URL >> .envs/.production/.django
            ...
            docker-machine create --driver generic --generic-ip-address 1**.2**.1**.**7 --generic-ssh-key ~/.ssh/id_rsa production
            eval "$(docker-machine env production)"
            docker-compose -f production.yml build
            docker-compose -f production.yml up -d

workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build

production.yml

version: '3'

volumes:
  production_postgres_data: {}
  production_postgres_data_backups: {}
  production_traefik: {}

services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: app_api_production_django
    depends_on:
      - postgres
      - redis
    env_file:
      - ./.envs/.production/.django
      - ./.envs/.production/.postgres
    command: /start

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: vest_api_production_postgres
    volumes:
      - production_postgres_data:/var/lib/postgresql/data
      - production_postgres_data_backups:/backups
    env_file:
      - ./.envs/.production/.postgres

  traefik:
    build:
      context: .
      dockerfile: ./compose/production/traefik/Dockerfile
    image: app_api_production_traefik
    depends_on:
      - django
    volumes:
      - production_traefik:/etc/traefik/acme
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"

  redis:
    image: redis:3.2

  celeryworker:
    <<: *django
    image: app_api_production_celeryworker
    command: /start-celeryworker

  celerybeat:
    <<: *django
    image: app_api_production_celerybeat
    command: /start-celerybeat

  flower:
    <<: *django
    image: app_api_production_flower
    ports:
      - "5555:5555"
    command: /start-flower

  awscli:
    build:
      context: .
      dockerfile: ./compose/production/aws/Dockerfile
    env_file:
      - ./.envs/.production/.django
    volumes:
      - production_postgres_data_backups:/backups

Any idea why this happens?

done, thanks!

OK, so I think the issue is in your docker-compose -f production.yml build step. Have you traced which service in your production.yml it is getting stuck on? If you want readers to take a view on that, you'll need to show the file here.
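In the meantime, one way to narrow it down is to build one service at a time, so the failing one stands out. A sketch, assuming you run this from the project root against the same daemon the CI job targets (service names taken from a typical cookiecutter-django production.yml):

```shell
# Build each service separately; the one that errors out is your culprit.
docker-compose -f production.yml build postgres
docker-compose -f production.yml build django
docker-compose -f production.yml build traefik
```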


I have updated the post with a more complete error trace and the production.yml file.

Ah, so you are not running an image straight from Docker Hub; you are rolling your own in ./compose/production/postgres/Dockerfile. I assume the console output shows you are still in the build phase rather than bringing the containers up. If so, let's see the Dockerfile :wink:

Also, please indicate which line in the Dockerfile it is getting stuck on.
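If the compose output is too noisy to tell, you could also build the image directly; Docker prints a "Step N/M" line for each Dockerfile instruction as it runs, so the last step printed is the one that failed. A sketch, assuming the Dockerfile path from your production.yml:

```shell
# Build the image on its own; each "Step N/M : <instruction>" line in the
# output maps to a line in the Dockerfile, making the failing step obvious.
docker build -f ./compose/production/postgres/Dockerfile .
```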

This is the Dockerfile; I can't seem to figure out exactly which line it gets stuck on.

django/Dockerfile


FROM python:3.6-alpine

ENV PYTHONUNBUFFERED 1

RUN apk update \
  # psycopg2 dependencies
  && apk add --virtual build-deps gcc python3-dev musl-dev \
  && apk add postgresql-dev \
  && apk add ca-certificates \
  # Pillow dependencies
  && apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
  # CFFI dependencies
  && apk add libffi-dev py-cffi

RUN addgroup -S django \
    && adduser -S -G django django

# Requirements are installed here to ensure they will be cached.
COPY ./requirements /requirements
RUN pip install --no-cache-dir -r /requirements/production.txt \
    && rm -rf /requirements

COPY ./compose/production/django/entrypoint /entrypoint
RUN sed -i 's/\r//' /entrypoint
RUN chmod +x /entrypoint
RUN chown django /entrypoint

COPY ./compose/production/django/start /start
RUN sed -i 's/\r//' /start
RUN chmod +x /start
RUN chown django /start
COPY ./compose/production/django/celery/worker/start /start-celeryworker
RUN sed -i 's/\r//' /start-celeryworker
RUN chmod +x /start-celeryworker
RUN chown django /start-celeryworker

COPY ./compose/production/django/celery/beat/start /start-celerybeat
RUN sed -i 's/\r//' /start-celerybeat
RUN chmod +x /start-celerybeat
RUN chown django /start-celerybeat

COPY ./compose/production/django/celery/flower/start /start-flower
RUN sed -i 's/\r//' /start-flower
RUN chmod +x /start-flower
COPY . /app

RUN chown -R django /app

USER django

WORKDIR /app

ENTRYPOINT ["/entrypoint"]

entrypoint

#!/bin/sh

set -o errexit
set -o pipefail
set -o nounset



# N.B. If only .env files supported variable expansion...
export CELERY_BROKER_URL="${REDIS_URL}"


if [ -z "${POSTGRES_USER}" ]; then
    base_postgres_image_default_user='postgres'
    export POSTGRES_USER="${base_postgres_image_default_user}"
fi
export DATABASE_URL="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}"

postgres_ready() {
python << END
import sys

import psycopg2

try:
    psycopg2.connect(
        dbname="${POSTGRES_DB}",
        user="${POSTGRES_USER}",
        password="${POSTGRES_PASSWORD}",
        host="${POSTGRES_HOST}",
        port="${POSTGRES_PORT}",
    )
except psycopg2.OperationalError:
    sys.exit(-1)
sys.exit(0)

END
}
until postgres_ready; do
  >&2 echo 'Waiting for PostgreSQL to become available...'
  sleep 1
done
>&2 echo 'PostgreSQL is available'

exec "$@"

Cool, add --no-cache to your docker-compose build command and see what gets printed to the console. I assume the build is not finishing, so it never gets to the docker-compose up. Paste the new build output here if you wish.
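Concretely, something like this in place of the plain build line in your deploy step:

```shell
# Rebuild every layer from scratch; nothing is reused from the cache, so the
# console shows the full output of every step, including the one that fails.
docker-compose -f production.yml build --no-cache
```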

Oh OK, on it, thanks!

Same exact issue.
Do you think rebuilding the whole image from scratch might help?

No, you misunderstood what I was asking you to do. The flag I recommended can be used to build this image from scratch, so that you can see the point where it fails more clearly, without the cached layers potentially obfuscating the issue. I was not expecting it to fix the problem.

If you would like help on that, you will need to run that command, with that flag, and show the full build output here. Or, you can examine the output yourself, and add a note here to explain what step in the Dockerfile it fails on.
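Separately, since the error string (_ssl.c:661) comes from the Python ssl module inside the docker-compose client rather than from the daemon, it may also be worth checking which OpenSSL that client was built against; OpenSSL builds older than 1.0.1 have no TLS 1.2 support, which a modern Docker daemon will typically insist on. docker-compose reports this directly:

```shell
# Prints the compose version along with the CPython and OpenSSL versions it
# was built against; an ancient OpenSSL here would explain a protocol-version
# alert during the TLS handshake with the daemon.
docker-compose version
```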