Friends in the group told me my site is too slow to access. It's deployed on Vercel, and the API backend in China also goes through Cloudflare Zero Trust, adding yet another layer, so access from within China really isn't great. So I plan to set up a mirror site in China.
Previously, Kami's deployment went through GitHub Actions: build, publish the artifacts to a GitHub Release, then SSH into the server and pull the Release down. But accessing GitHub from servers in China is now painfully slow, and the major domestic acceleration mirrors have gradually stopped working as well. So this time I plan to build locally on a small home server and push the final artifacts to the cloud server.
After asking friends in the group, Drone CI seemed like a good choice: it's written in Go, fast and lightweight, and easy to hook up to GitHub.
The overall process is roughly: Drone on the home server builds the project, then the artifacts get pushed to the cloud server. It's fairly simple; next, let's see how to set up Drone from scratch and write out this pipeline.
## Drone Deployment
### Docker Compose in One Go
Environment: Arch Linux + Docker; other environments are for reference only. The official documentation is quite scattered and I ran into plenty of pitfalls, but the final configuration is nothing special, so I'll just paste it here.
```yaml
name: drone
services:
  drone-runner:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_RPC_PROTO=https
      - DRONE_RPC_HOST=drone.innei.in
      - DRONE_RUNNER_CAPACITY=2
      - DRONE_RUNNER_NAME=runner
      - DRONE_GITHUB_CLIENT_ID=
      - DRONE_GITHUB_CLIENT_SECRET=
      - DRONE_RPC_SECRET=
      - DRONE_SERVER_HOST=
      - DRONE_SERVER_PROTO=https
    ports:
      - 3000:3000
    restart: always
    container_name: runner
    image: drone/drone-runner-docker:1
  drone:
    volumes:
      - ./data:/data
    environment:
      - DRONE_GITHUB_CLIENT_ID=
      - DRONE_GITHUB_CLIENT_SECRET=
      - DRONE_RPC_SECRET=
      - DRONE_SERVER_HOST=drone.innei.in
      - DRONE_SERVER_PROTO=https
      - DRONE_TLS_AUTOCERT=true
      - DRONE_USER_CREATE=username:innei,machine:false,admin:true
    ports:
      - 80:80
      - 443:443
    restart: always
    container_name: drone
    image: drone/drone:2
    networks:
      mvlan:
        ipv4_address: 10.0.0.47
networks:
  mvlan:
    external: true
```
The environment variables left blank above need to be filled in according to the documentation, and you also need to create a GitHub OAuth app of your own.
References:
- https://docs.drone.io/server/provider/github/
- https://docs.drone.io/runner/docker/installation/linux/
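One of those blanks, DRONE_RPC_SECRET, is a shared secret that the server and runner must agree on; the Drone docs generate it with openssl:

```shell
# Generate a random shared secret for DRONE_RPC_SECRET; paste the same
# value into both the server and the runner environment.
openssl rand -hex 16
```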
Don't forget the Runner: I forgot to configure it at first, and CI jobs were stuck in the loading state.
Connecting Drone to GitHub requires a domain name (if you don't have one, you can try an internal IP address), because the OAuth flow redirects back to the address you configured after a successful authorization. Here that address is drone.innei.in; the rest of this post assumes it.
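For reference, the GitHub OAuth app's fields should point back at the Drone server; with the domain used here they would look roughly like this (substitute your own domain):

```
Homepage URL:               https://drone.innei.in
Authorization callback URL: https://drone.innei.in/login
```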
### Public Network Tunneling
With the configuration above, you can't actually log in from the internal address 10.0.0.47: after GitHub OAuth completes, it redirects to the configured drone.innei.in, and at this point I hadn't made that address reachable yet. I used a Cloudflare Zero Trust tunnel for this, which is very convenient. And that's it: once you're in, the panel is empty, which is fine. At this point, Drone is up and running.
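The post doesn't show the tunnel configuration. As a rough sketch (assuming a locally running cloudflared, with the tunnel ID and credentials path as placeholders), an ingress rule mapping the domain to the Drone server on the internal network might look like:

```yaml
# ~/.cloudflared/config.yml -- sketch only; <tunnel-id> is a placeholder
tunnel: <tunnel-id>
credentials-file: /home/innei/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: drone.innei.in
    service: http://10.0.0.47:80
  # cloudflared requires a catch-all rule at the end
  - service: http_status:404
```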
## Pipeline Writing
Go to the GitHub repository that needs CI, create a `.drone.yml` file in its root directory, and push it to GitHub. The repository should now show up in Drone; open it and activate it.
```yaml
kind: pipeline
name: build-and-package

platform:
  os: linux
  arch: amd64

steps:
  - name: build
    image: node:20-alpine
    commands:
      - 'npm i -g pnpm'
      - 'pnpm install --no-frozen-lockfile'
      - 'npm run build'
```
Once this step passes, the next phase is a long round of frantically tweaking the pipeline.
Next.js needs a `.env` file at build time, but I didn't want to upload the variables to secrets one by one, since Drone doesn't support batch import (couldn't they learn from Vercel next door?). So I looked for another way to pass in the env, which also ate up a lot of time. Initially I considered hosting the `.env` file with http-server and downloading it during the build; since it listens only on the internal network, that's reasonably safe. Later I found that Drone's volume mapping works for this, so I settled on that approach.
Note

To use volume mapping, you need to enable the Trusted option for the repository.

This requires admin privileges. If Drone is deployed with Docker, add this env to the Drone server: `DRONE_USER_CREATE=username:innei,machine:false,admin:true`, replacing `innei` with your GitHub username.

After restarting the container, go back to the repository's settings in Drone; under Project Settings you'll find the Trusted switch. Enable it.
Here I'm using the Docker runner, so Drone runs each step in its own container, and each step can map a volume to a file path on the host.
I keep the project's `.env` at /home/innei/docker-compose/drone/public/shiro/.env and added a volume mapping for it in the build step.
```yaml
volumes:
  - name: shiro-env
    host:
      path: /home/innei/docker-compose/drone/public/shiro/.env

steps:
  - name: build
    image: node:20-alpine
    commands:
      - 'npm i -g pnpm'
      - 'pnpm install --no-frozen-lockfile'
      - 'npm run build'
    volumes:
      - name: shiro-env
        path: /drone/src/.env
```
Volumes have to be declared twice: the top-level volumes entry defines the volume name and its absolute path on the host, while the step-level entry gives the absolute path it is mounted at inside that step's container.
The build and deploy stages can be split into two pipelines that depend on each other. After the build completes, the artifacts need a temporary home; I again used a host volume, parking them under /tmp so the deploy pipeline can pick them up and push them to the cloud server.
```yaml
volumes:
  - name: shiro-dist
    host:
      path: /tmp/shiro-dist

steps:
  - name: build
    image: node:20-alpine
    commands:
      - 'npm i -g pnpm'
      - 'pnpm install --no-frozen-lockfile'
      - 'npm run build'
    volumes:
      - name: shiro-env
        path: /drone/src/.env
      - name: dns
        path: /etc/resolv.conf
  - name: package
    image: node:20-alpine
    commands:
      - 'pwd'
      - 'ls -a'
      - 'ls .next'
      - 'apk add zip'
      - 'sh ./standalone-bundle.sh'
    volumes:
      - name: shiro-dist # mapped here
        path: /drone/src/assets
    depends_on:
      - build
```
Then use scp and ssh in deploy.
```yaml
kind: pipeline
name: deploy

platform:
  os: linux
  arch: amd64

volumes:
  - name: shiro-dist
    host:
      path: /tmp/shiro-dist

steps:
  - name: transfer file
    image: appleboy/drone-scp
    settings:
      host:
        from_secret: ssh_host
      username:
        from_secret: ssh_username
      key:
        from_secret: ssh_key
      port: 22
      target: /home/deploy/shiro
      source:
        - assets/release.zip
      rm_target: true
      strip_components: 1
      debug: true
    volumes:
      - name: shiro-dist # this volume exposes the artifacts from the build pipeline
        path: /drone/src/assets
  - name: deploy
    image: appleboy/drone-ssh
    settings:
      host:
        from_secret: ssh_host
      username:
        from_secret: ssh_username
      key:
        from_secret: ssh_key
      port: 22
      script:
        - 'npm install --os=linux --cpu=x64 sharp --registry=https://registry.npmmirror.com'
        - cd ~/shiro
        - unzip -o release.zip
        - rm release.zip
        - ls
        - cd standalone
        - cp -r ~/node_modules/sharp ./node_modules
        - ~/.n/bin/pm2 restart ecosystem.config.js
      debug: true
    depends_on:
      - transfer file

depends_on:
  - build-and-package # this pipeline runs after build-and-package
```
The `ssh_`-prefixed settings are read from the repository's secrets. By this point, I had already gone through forty-odd attempts 😂.
## Final Configuration
https://github.com/Innei/Shiro/blob/main/.drone.yml
References: https://www.timochan.cn/posts/jc/drone_workflows#Preface
This article is synchronized and updated to xLog by Mix Space. The original link is https://innei.in/posts/Z-Turn/drone-self-host-ci-cd-with-github