Use AWS credentials inside Docker

Our build is configured to use a Docker environment based on a custom image pulled from my public Docker Hub account.
For the deployment step, I use s3cmd, which I have pre-configured inside my Docker image, but that means my AWS credentials are hard-coded into the image.
I would like to use CircleCI's pre-built AWS CLI support for this instead. How do I achieve that? Are AWS credentials passed into the Docker environment as well?

Current config snippet:

version: 2
jobs:
  build:
    working_directory: ~/tmp
    docker:
      - image: dockerhub/image
    steps:
      - checkout
      - run:
          name: Build steps
          command:
      - deploy:
          name: Deploy war to S3
          command: bash -c "s3cmd put ./artifacts/my-build.war s3://my-bucket/my-build.war"

How can I change the deploy part to include the AWS CLI for s3 upload which uses the AWS credentials I give in my Plan settings?
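For context, CircleCI exports project-level environment variables into the primary Docker container, and the AWS CLI reads the standard `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` variables automatically. Assuming those two variables are set in the project settings and the image has the `aws` CLI installed, the deploy step could be sketched roughly like this (bucket and artifact names taken from the snippet above):

```yaml
      - deploy:
          name: Deploy war to S3
          # aws picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the
          # environment, so no credentials need to live inside the image
          command: aws s3 cp ./artifacts/my-build.war s3://my-bucket/my-build.war
```

This is only a sketch; if the custom image does not ship the AWS CLI, it would have to be installed in an earlier `run` step.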

For anybody looking for a workaround:

I set the AWS access and secret keys as environment variables, which are then also available inside the Docker container, and used a custom s3cmd command to push to S3:

s3cmd --access_key=$ACCESS_KEY_ID --secret_key=$SECRET_ACCESS_KEY put my-build.war s3://my-bucket/
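Slotted into the config above, the deploy step would then look something like this (`ACCESS_KEY_ID` and `SECRET_ACCESS_KEY` being whatever variable names were chosen in the project settings):

```yaml
      - deploy:
          name: Deploy war to S3
          # credentials come from project environment variables,
          # not from a config file baked into the image
          command: s3cmd --access_key=$ACCESS_KEY_ID --secret_key=$SECRET_ACCESS_KEY put ./artifacts/my-build.war s3://my-bucket/
```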