Our team currently uploads files to the server manually, which causes problems because there is a lot of room for human error.
So I’m looking into continuous deployment via CircleCI.
We use Bitbucket for version control and would like the option to either deploy automatically when pushing to a specific branch or deploy by clicking a button in the CircleCI interface. I’m not sure whether this is possible, to be honest.
We have a dedicated server with Plesk to which we want to deploy.
I tried looking for deployment examples, but it just keeps getting more confusing.
Should I use FTP or SSH deployments?
Why is there no built-in infrastructure on CircleCI for deployments? It seems like everything needs to be built from scratch.
Any examples of a simple deployment script are appreciated.
Any project you define in CircleCI can run scripts, so the starting point would be to write a script that follows the same steps as your current manual process.
Once you can perform that process just by executing the project from the CircleCI dashboard, you can start to enhance it with whatever tools you wish to use. Doing it this way also means you have a working baseline that you understand and can always go back to.
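For example, if the manual process is roughly “build the project, then copy the output over SSH”, a first pass could be a single job that runs those same commands. This is only a sketch: the user, host, paths, and the dist folder below are placeholders, not values from your setup, and the project’s SSH key still has to be added under Project Settings → SSH Keys before scp or rsync can authenticate.

version: 2.1
jobs:
  deploy:
    docker:
      - image: cimg/node:14.18.2
    steps:
      - checkout
      - run: npm ci
      - run: npm run build --prod
      # makes the SSH keys configured for the project available to this job
      - add_ssh_keys
      - run:
          name: Mirror the manual upload over SSH
          command: |
            # placeholder user/host/path -- replace with your own values
            # accept-new avoids an interactive host-key prompt on first connect
            scp -o StrictHostKeyChecking=accept-new -r ./dist/ deploy-user@example.com:/var/www/vhosts/example.com/httpdocs/
workflows:
  build-and-deploy:
    jobs:
      - deploy

Once something like this works end to end, splitting it into separate build and deploy jobs (as you have done below) is an incremental change rather than a leap.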
So what I understand is that, after all these years, there are no best practices or recommendations on how to get started?
Everything seems straightforward when you want to deploy to AWS or Heroku, but when you want to deploy to a normal server nobody seems to know what to do.
It is more that I build Docker containers with CircleCI, so there is nothing I can offer for a general deployment process.
If you share some detail about your environment and toolset, someone may be able to explain how they do things in a similar setup.
This is the config I’m currently trying to get to work:
version: 2.1

executors:
  my-executor:
    docker:
      - image: cimg/node:14.18.2

jobs:
  build:
    executor: my-executor
    steps:
      - checkout
      # Download and cache dependencies
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "package.json" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-
      - run: npm install
      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ checksum "package.json" }}
      - run: npm run build --prod

  deploy:
    executor: my-executor
    steps:
      - add_ssh_keys:
          fingerprints:
            - $SSH_FINGERPRINT
      - run: sudo apt install rsync
      - run:
          name: Deploy Over SSH
          command: |
            rsync -a $LOCAL_PATH $SSH_USER@$SSH_HOST:$REMOTE_PATH
      - run: ssh $SSH_USER@$SSH_HOST 'chmod -R 755 $REMOTE_PATH'

workflows:
  version: 2.1
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
The build job is working, though I think the build output is not being kept for the next job. At least it does not fail.
The deploy job, however, does not work.
The error I’m getting is this:
#!/bin/bash -eo pipefail
sudo apt install rsync
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package rsync
Exited with code exit status 100
CircleCI received exit code 100
I also tried adding -y, but I get the same error, so it seems like the issue is caused by something else. Unfortunately there are no useful results when searching the web for example configurations with rsync:
#!/bin/bash -eo pipefail
sudo apt install -y rsync
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package rsync
Exited with code exit status 100
CircleCI received exit code 100
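For what it’s worth, “E: Unable to locate package” from apt usually means the package index on the image is missing or stale rather than that rsync is absent from the repositories, so running apt-get update first may be all that is needed. A sketch of the step, not tested against this project:

      - run:
          name: Install rsync
          command: |
            # refresh the package index first; slim CI images often ship without one
            sudo apt-get update
            sudo apt-get install -y rsync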
Artifacts built in one job do not automatically persist to the next job. You need to save them to a workspace at the end of the job and attach that workspace at the beginning of the next job. Search for “persist_to_workspace” and you should find examples pretty quickly.
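As a rough illustration (the dist path is a guess at whatever npm run build produces in your project), the two jobs might look like this:

jobs:
  build:
    executor: my-executor
    steps:
      - checkout
      - run: npm ci && npm run build --prod
      # hand the build output to later jobs in the workflow
      - persist_to_workspace:
          root: .
          paths:
            - dist
  deploy:
    executor: my-executor
    steps:
      # restore the files persisted by the build job
      - attach_workspace:
          at: .
      - run: ls dist   # the build output is now available here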
@Grasmachien sorry, I do not. I don’t know enough about your situation, but I have a feeling that using rsync is not the way to go. I presume you are trying to install some private SSH keys onto the CircleCI machine? Can you use AWS Secrets Manager or similar? If so, you could put the private keys there. Alternatively, put the keys in AWS S3 and retrieve them from CircleCI (Azure and GCP have their equivalents of Secrets Manager and S3).
@schollii Why would rsync not be the way to go?
We have a normal dedicated server.
No AWS or anything like that.
I just found out the main reason why my attempts keep failing: we can only access our server via a VPN or from our whitelisted IP at the office. I contacted my hosting provider and was told that connecting to the VPN might prove hard because they work with a specific VPN client, so we will probably have to whitelist CircleCI’s IP ranges in the server’s firewall.
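If it helps, CircleCI has an “IP ranges” feature that routes a job’s traffic through a small, published set of IP addresses that can be whitelisted in a firewall; note it is only available on certain paid plans, so check whether yours includes it. Enabling it is roughly a one-line addition to the job:

jobs:
  deploy:
    executor: my-executor
    # route this job's traffic through CircleCI's published IP ranges
    circleci_ip_ranges: true
    steps:
      - checkout
      # ...the existing deploy steps stay the same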
“normal server”
AWS is the new normal.
As for providing keys to the job, the easiest way would be to use CircleCI contexts: zero overhead in getting them to the job. A second option is retrieving them from AWS/GCP secret storage (if you know how to use those).
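To illustrate the contexts approach: you would create a context in Organization Settings (the name “deploy-secrets” below is made up), add values such as SSH_USER, SSH_HOST, and REMOTE_PATH as environment variables there, and then attach the context to the deploy job so those variables are available at runtime:

workflows:
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          # make the environment variables from the context available to this job
          context: deploy-secrets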