Recently, Vercel adjusted its pricing again, and the Hobby plan is increasingly not enough, so I gave up on Vercel and moved to deploying my Next.js projects on my own server.

Deploying on your own server is a much rougher experience. First, there is no fully automated deployment like Vercel's, and no quick rollback. Second, building a Next.js project demands a lot of memory and CPU, so a typical lightweight server tends to stall or crash during the build.
## Goals
- Use GitHub to produce a universal build artifact that is not tied to environment variables at build time (see the article Build Once, Deploy Many - Next.js Runtime Env for more on this)
- Push the build artifact to a remote server
- Run the build workflow from a repository other than the source repository (for private repositories, GitHub Actions has limits on build minutes and other quotas; this setup also lets the workflow repository be open source while the source repository stays private)
- Support rollback (not especially convenient, but workable)
## Process
Based on these goals, the build process we need looks roughly like this:

- Check out the code from the source repository, not the workflow repository; this is important.
- Build the code as usual.
- Mark the artifact with a version identifier and push it to the server.
- Finish the deployment.

Whenever the source repository changes, the workflow repository's pipeline needs to run again.
## Two Repositories
Having clarified the above process, we now need to create a repository specifically for running CI, which is the workflow repository mentioned above.
Then we will write the workflow configuration.
```yaml
name: Build and Deploy

on:
  push:
    branches:
      - main

env:
  PNPM_VERSION: 9.x.x # referenced by pnpm/action-setup below

jobs:
  build:
    name: Build artifact
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [20.x]
    steps:
      - uses: actions/checkout@v4
        with:
          repository: innei-dev/shiroi # Change this to your private source repository
          token: ${{ secrets.GH_PAT }} # You need a token that can access the private repository
          fetch-depth: 0
          lfs: true

      - name: Checkout LFS objects
        run: git lfs checkout

      - uses: pnpm/action-setup@v2
        with:
          version: ${{ env.PNPM_VERSION }}

      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'pnpm'

      - name: Install dependencies
        run: pnpm install

      - uses: actions/cache@v4
        with:
          path: |
            ~/.npm
            ${{ github.workspace }}/.next/cache
          key: ${{ runner.os }}-nextjs-${{ hashFiles('**/pnpm-lock.yaml') }}-${{ hashFiles('**/*.js', '**/*.jsx', '**/*.ts', '**/*.tsx') }}
          restore-keys: |
            ${{ runner.os }}-nextjs-${{ hashFiles('**/pnpm-lock.yaml') }}-

      - name: Build project
        run: |
          sh ./ci-release-build.sh # This is your build script
```
Pay attention to the commented areas above.
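The actual `ci-release-build.sh` lives in the source repository and is not shown here. As a rough illustration only, such a script might look something like the sketch below; it assumes `output: 'standalone'` in next.config.js and an `ecosystem.config.js` kept in the repository root (both are assumptions, and your script will differ):

```bash
#!/usr/bin/env bash
# Hypothetical sketch of ci-release-build.sh; the real script may differ.
# It assumes `output: 'standalone'` in next.config.js and produces
# assets/release.zip containing a `standalone/` directory, which is the
# layout the deploy script later unzips on the server.
set -e

pnpm build

# The standalone output does not include static assets or public/ by default.
cp -r .next/static .next/standalone/.next/static
cp -r public .next/standalone/public

# Assumption: the PM2 config is kept in the repo root and shipped with the app.
cp ecosystem.config.js .next/standalone/ecosystem.config.js

# Pack everything so that the zip's root contains `standalone/`.
mkdir -p assets
(cd .next && zip -r ../assets/release.zip standalone)
```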
## Build and Deploy

### Building and Sharing Artifacts Across Jobs
Next, we will write the deployment workflow configuration.
To share artifacts between jobs, we need to use Artifacts. In the build process, upload the final artifact as an Artifact, and then download it in the next Job.
```yaml
jobs:
  build:
    # ...
      - uses: actions/upload-artifact@v4
        with:
          name: dist # Upload name
          path: assets/release.zip # Source file path
          retention-days: 7

  deploy:
    name: Deploy artifact
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Download artifact
        uses: actions/download-artifact@v4
        with:
          name: dist # Download the uploaded file
```
> **Important**
>
> Using this method can leak the build artifacts: because the workflow repository is open source, anyone who is logged in to GitHub can download the uploaded artifacts. As the GitHub docs put it, people who are signed into GitHub and have read access to a repository can download its workflow artifacts.
Since the above method is not secure, we will use CI cache to achieve the same functionality.
```yaml
jobs:
  build:
    # ...
      - name: Cache Build Artifacts
        id: cache-primes
        uses: actions/cache/save@v4
        with:
          path: assets
          key: ${{ github.run_number }}-release # Use the workflow run number as the key

  deploy:
    name: Deploy artifact
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Restore cached Build Artifacts
        id: cache-primes-restore
        uses: actions/cache/restore@v4
        with: # Restore artifacts
          path: |
            assets
          key: ${{ github.run_number }}-release
```
### Using SSH to Transfer Artifacts to Remote Server
Having completed the artifact build, the next step is to write the process for deploying to the server.
```yaml
jobs:
  deploy:
    name: Deploy artifact
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Restore cached Build Artifacts
        id: cache-primes-restore
        uses: actions/cache/restore@v4
        with:
          path: |
            assets
          key: ${{ github.run_number }}-release

      - name: Move assets to root
        run: mv assets/release.zip release.zip

      - name: copy file via ssh password
        uses: appleboy/scp-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USER }}
          password: ${{ secrets.PASSWORD }}
          key: ${{ secrets.KEY }}
          port: ${{ secrets.PORT }}
          source: 'release.zip'
          target: '/tmp/shiro'

      - name: Exec deploy script with SSH
        uses: appleboy/ssh-action@master
        with:
          command_timeout: 5m
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USER }}
          password: ${{ secrets.PASSWORD }}
          key: ${{ secrets.KEY }}
          port: ${{ secrets.PORT }}
          script_stop: true
          script: |
            set -e
            source $HOME/.bashrc

            workdir=$HOME/shiro/${{ github.run_number }}
            mkdir -p $workdir

            mv /tmp/shiro/release.zip $workdir/release.zip
            rm -r /tmp/shiro

            cd $workdir
            unzip -o $workdir/release.zip
            cp $HOME/shiro/.env $workdir/standalone/.env

            export NEXT_SHARP_PATH=$(npm root -g)/sharp
            # https://github.com/Unitech/pm2/issues/3054

            cd $workdir/standalone
            pm2 stop $workdir/standalone/ecosystem.config.js && pm2 delete $workdir/standalone/ecosystem.config.js && pm2 start $workdir/standalone/ecosystem.config.js

            rm $workdir/release.zip
            pm2 save

            echo "Deployed successfully"
```
Here we use SSH + SCP to upload the build artifact to the server and then run the deployment script directly over SSH.

The GitHub Actions run number identifies the current build and is used to name its deployment directory on the server. Every deployed artifact therefore stays on the server, which makes later rollbacks possible; the rollback process is fairly manual, but it works.

I use PM2 to manage the project, but you can use other process managers.
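Since every run keeps its own `$HOME/shiro/<run_number>` directory on the server, a rollback is essentially pointing PM2 back at an older directory. A rough manual sketch, assuming the directory layout created by the deploy script above (run number 123 is just an example):

```bash
# Manual rollback on the server, assuming deployments live in $HOME/shiro/<run_number>.
# Example: roll back to the build from run number 123.
cd $HOME/shiro/123/standalone

# Replace whatever PM2 is currently running with this older build.
pm2 delete ecosystem.config.js || true
pm2 start ecosystem.config.js
pm2 save
```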
## Cross-Repository Workflow Invocation
When there is a new commit in the source code repository, it needs to trigger the workflow repository to re-execute the pipeline.
Here we can use API calls.
In the source code repository, add a new workflow.
```yaml
name: Trigger Target Workflow

on:
  push:
    branches:
      - main

jobs:
  trigger:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger Workflow in Another Repository
        run: |
          repo_owner="innei-dev"
          repo_name="shiroi-deploy-action"
          event_type="trigger-workflow"

          curl -L \
            -X POST \
            -H "Accept: application/vnd.github+json" \
            -H "Authorization: Bearer ${{ secrets.PAT }}" \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            https://api.github.com/repos/$repo_owner/$repo_name/dispatches \
            -d "{\"event_type\": \"$event_type\", \"client_payload\": {}}"
```
Here we use the GitHub API to trigger a re-run of the workflow repository's pipeline. Next, the workflow configuration on the receiving side needs to be adjusted:
```yaml
on:
  push:
    branches:
      - main
  repository_dispatch:
    types: [trigger-workflow]

permissions: write-all
```
The `types` value is defined by the caller and must stay consistent on both sides. Now, when the source repository pushes an update to `main`, it will call, via the API, the workflow whose `repository_dispatch` `types` match.
## Preventing Duplicate Version Builds
The above configuration is basically usable, but there are still some places we need to check.
For example, the same commit should be considered a duplicate, and it should only be built and deployed once. If a duplicate is hit, the entire build and deployment process should be skipped.
Here we use each commit hash to judge, saving the last successfully deployed hash and comparing it with the current commit hash; if they match, we skip.
We can record the commit hash of the last completed build in a file (you could also use artifacts; as for why I use a file, see the next section). Here, the commit hash of each completed build is written to a `build_hash` file in the workflow repository.

Several jobs are needed for this. First, read `build_hash` from the current repository and write its content to `GITHUB_OUTPUT` so that later jobs can read it.

In the next job, check out the source code repository, read its current commit hash, compare it against `build_hash`, and write a boolean result to `GITHUB_OUTPUT` as well.

The following job uses `if` to decide whether to stop the whole run (since all later jobs depend on it, skipping it effectively cancels everything).

Finally, after the deployment completes, the last job writes the current commit hash back to the repository; a push action is used for this. Because that makes the bot push a new commit every time, this commit must not re-trigger the workflow, so the first job uses `if` to guard on the commit message.
The reference configuration is as follows:
```yaml
name: Build and Deploy

on:
  push:
    branches:
      - main

permissions: write-all

env:
  PNPM_VERSION: 9.x.x
  HASH_FILE: build_hash

jobs:
  prepare:
    name: Prepare
    runs-on: ubuntu-latest
    if: ${{ github.event.head_commit.message != 'Update hash file' }}
    outputs:
      hash_content: ${{ steps.read_hash.outputs.hash_content }}
    steps:
      - uses: actions/checkout@v4 # check out the workflow repository so the hash file is readable

      - name: Read HASH_FILE content
        id: read_hash
        run: |
          content=$(cat ${{ env.HASH_FILE }}) || true
          echo "hash_content=$content" >> "$GITHUB_OUTPUT"

  check:
    name: Check Should Rebuild
    runs-on: ubuntu-latest
    needs: prepare
    outputs:
      canceled: ${{ steps.use_content.outputs.canceled }}
    steps:
      - uses: actions/checkout@v4
        with:
          repository: innei-dev/shiroi
          token: ${{ secrets.GH_PAT }}
          fetch-depth: 0
          lfs: true

      - name: Use content from prev job and compare
        id: use_content
        env:
          FILE_HASH: ${{ needs.prepare.outputs.hash_content }}
        run: |
          file_hash=$FILE_HASH
          current_hash=$(git rev-parse --short HEAD)

          echo "File Hash: $file_hash"
          echo "Current Git Hash: $current_hash"

          if [ "$file_hash" == "$current_hash" ]; then
            echo "Hashes match. Stopping workflow."
            echo "canceled=true" >> $GITHUB_OUTPUT
          else
            echo "Hashes do not match. Continuing workflow."
          fi

  build:
    name: Build artifact
    runs-on: ubuntu-latest
    needs: check
    if: ${{ needs.check.outputs.canceled != 'true' }}
    # .... other build job config

  store:
    name: Store artifact commit version
    runs-on: ubuntu-latest
    needs: [deploy, build] # Depends on the build and deploy jobs
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          persist-credentials: false
          fetch-depth: 0

      - name: Use outputs from build
        env:
          SHA_SHORT: ${{ needs.build.outputs.sha_short }}
          BRANCH: ${{ needs.build.outputs.branch }}
        run: |
          echo "SHA Short from build: $SHA_SHORT"
          echo "Branch from build: $BRANCH"

      - name: Write hash to file
        env:
          SHA_SHORT: ${{ needs.build.outputs.sha_short }}
        run: echo $SHA_SHORT > ${{ env.HASH_FILE }}

      - name: Commit files
        run: |
          git config --local user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git config --local user.name "github-actions[bot]"
          git add ${{ env.HASH_FILE }}
          git commit -a -m "Update hash file"

      - name: Push changes
        uses: ad-m/github-push-action@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          branch: ${{ github.ref }}
```
## Cronjob Execution That Will Never Be Disabled

To run this workflow on a schedule as well, use `schedule`:
```yaml
name: Build and Deploy

on:
  push:
    branches:
      - main
  schedule:
    - cron: '0 3 * * *'
  repository_dispatch:
    types: [trigger-workflow]
```
Due to GitHub Actions' limitations, scheduled workflows are automatically disabled when the repository has had no activity for 60 days. The hash-file commit from the previous section counts as repository activity, so pushing the hash on every build conveniently also keeps the workflow from being disabled.
That's it; that is the whole setup. The full configuration can be found in the workflow repository, innei-dev/shiroi-deploy-action.
This article was synchronized and updated to xLog by Mix Space. The original link is https://innei.in/posts/tech/automatically-build-projects-across-repositories-and-deploy-to-servers