Feat/mitomen/2031/ci cd pipeline #2032
Conversation
The branch settings for Actions and the like will be switched to main after review.
📝 Walkthrough
This pull request establishes a complete production CI/CD pipeline comprising a GitHub Actions workflow that builds and pushes Docker images to a Harbor registry, a deployment workflow that orchestrates remote deployment via SSH, and a Docker Compose configuration defining five services (API, User, Admin View, Cloudflare tunnel, and Dozzle agent) for the production stack. A database configuration update specifies an explicit MySQL port.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 10
🧹 Nitpick comments (2)
.github/workflows/CI.yml (2)
70-95: cache-to is missing, so the GHA cache is never updated

cache-from: type=gha alone will not yield cache hits after the first run. Add cache-to: type=gha,mode=max to shorten build times.

```diff
  cache-from: type=gha
+ cache-to: type=gha,mode=max
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/CI.yml around lines 70 - 95, In the two docker build steps ("Build and Push User" and "Build and Push Admin View") that use docker/build-push-action@v5, add cache-to: type=gha,mode=max under the with block (next to cache-from: type=gha) so the GitHub Actions cache is updated after each build; update both the step named "Build and Push User" (context ./user, file ./user/prod.Dockerfile) and the step named "Build and Push Admin View" (context ./admin_view, file ./admin_view/prod.Dockerfile) to include this setting.
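Concretely, the change the prompt describes might look like this in one of the two build steps (a sketch based on the step names and paths quoted above):

```yaml
- name: Build and Push User
  uses: docker/build-push-action@v5
  with:
    context: ./user
    file: ./user/prod.Dockerfile
    push: true
    tags: ${{ secrets.HARBOR_REG }}/group-manager/user:latest
    cache-from: type=gha
    # Without cache-to, BuildKit reads the GHA cache but never writes it back
    cache-to: type=gha,mode=max
```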
63-95: Only the fixed :latest tag is pushed, leaving no rollback path

Every image is pushed with :latest only, so:

- there is no way to roll back to the previous version during an incident
- it is hard to trace what is actually running after each deploy
- depending on when compose pull runs, a mixed set of versions may end up running

We recommend also tagging with ${{ github.sha }} or ${{ github.run_number }} and referencing the same tag on the compose side.

Example

```diff
- tags: ${{ secrets.HARBOR_REG }}/group-manager/api:latest
+ tags: |
+   ${{ secrets.HARBOR_REG }}/group-manager/api:latest
+   ${{ secrets.HARBOR_REG }}/group-manager/api:${{ github.sha }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/CI.yml around lines 63 - 95, The workflow currently pushes images only with the :latest tag (see steps "Build and Push API", "Build and Push User", "Build and Push Admin View" and their tags field); change each docker/build-push-action "tags" input to push both a fixed semantic tag and an immutable CI tag (for example include both :latest and :${{ github.sha }} or :${{ github.run_number }}), e.g. tags: ${{ secrets.HARBOR_REG }}/group-manager/api:latest,${{ secrets.HARBOR_REG }}/group-manager/api:${{ github.sha }}; apply the same pattern for user and admin steps and ensure push: true remains set so you can roll back and reference the exact image in compose/deploy.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/CD.yml:
- Around line 11-14: Don’t trust ssh-keyscan output blindly: instead store the
expected host public key in a GitHub Secret (e.g., CONTAINER_HOST_KEY) and write
that exact key into ~/.ssh/known_hosts (overwrite, not append) so the runner
uses a fixed fingerprint; ensure permissions on ~/.ssh are set appropriately and
remove the ssh-keyscan step. Locate the block using ssh-keyscan and
secrets.CONTAINER_IP and replace it with logic that writes the secret value for
the host key into ~/.ssh/known_hosts (and avoids accumulating entries on reused
self-hosted runners).
- Line 21: The workflow currently uses the global config command "git config
--global --add safe.directory /home/deploy-user/group-manager-2", which leaves
side effects on the runner; change it to either fix ownership or make the config
repository-local: replace the global command with a repo-scoped config like "git
-C \"$GITHUB_WORKSPACE\" config --add safe.directory \"$GITHUB_WORKSPACE\"" (no
--global) or, alternatively, correct the repository ownership before running Git
(e.g., chown the workspace to the runner user) so no global safe.directory tweak
is required.
- Around line 18-29: The workflow embeds secrets (HARBOR_PASS, HARBOR_USERNAME,
HARBOR_REG) into a heredoc sent over SSH (the "ssh ... << 'EOF'" block) which
causes GitHub to expand and expose them remotely and breaks quoting; change the
deployment to avoid expanding secrets into the heredoc by sending a script to
the remote via stdin and passing secrets as environment variables to ssh (use
"ssh -T ... 'bash -s'" and set env HARBOR_PASS=... HARBOR_USER=...
HARBOR_REG=... on the GitHub Actions side) or move the docker login step out of
the heredoc and perform login via docker/login-action on the runner and use
docker context/DOCKER_HOST to target the remote; update the commands that
currently run docker login, docker compose -f compose.cd.yml pull, docker
compose -f compose.cd.yml up -d, and docker image prune -f accordingly.
- Around line 22-24: The three git commands (git fetch origin
feat/mitomen/2031/CI-CD-pipeline, git reset --hard
origin/feat/mitomen/2031/CI-CD-pipeline, git checkout -B
feat/mitomen/2031/CI-CD-pipeline origin/feat/mitomen/2031/CI-CD-pipeline)
hardcode the branch name; change them to use a parameter or the workflow ref
instead: update the CD.yml step(s) that run these commands to use an input
variable (e.g. inputs.ref) or the built-in github.ref_name to build the remote
ref and local branch name, and update CI.yml to pass with: { ref: ${{
github.ref_name }} } (or ensure the input is populated) so the
checkout/reset/fetch operate on the caller-provided branch rather than the
hardcoded feat/mitomen/... value.
In @.github/workflows/CI.yml:
- Around line 51-62: Do not dump the entire secrets.USER_ENV file into
GITHUB_ENV (using printf + grep + sed) because it leaks secrets and breaks on
values with newlines/equals/quotes; instead store sensitive keys like
NEXTAUTH_SECRET and IMGUR_CLIENT_SECRET as individual GitHub secrets and pass
them directly to the build step via build-push-action's secrets: / secret-files:
or as explicit build-args (avoid using env.NEXTAUTH_SECRET); also stop using
printf '%s' -> grep/sed for GITHUB_ENV injection—if you must write env files,
ensure you emit a trailing newline and use a proper key/value writer or
GitHub's ::set-env / GITHUB_ENV-safe append patterns to avoid parsing bugs.
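The GITHUB_ENV-safe pattern the prompt alludes to is the multiline delimiter syntax from the GitHub Actions docs; a local sketch (GITHUB_ENV is simulated with a temp file, and MY_VALUE and the delimiter name are illustrative):

```shell
# Local simulation: on a real runner, GITHUB_ENV is preset by Actions
GITHUB_ENV="$(mktemp)"

# A value with newlines, '=' and quotes, which a grep/sed approach mishandles
MY_VALUE='line one
line two=with "quotes"'

# Multiline-safe delimiter syntax: KEY<<DELIM ... DELIM
{
  echo "MY_VALUE<<__ENV_EOF__"
  printf '%s\n' "$MY_VALUE"
  echo "__ENV_EOF__"
} >> "$GITHUB_ENV"

cat "$GITHUB_ENV"
```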
- Around line 29-42: The CI workflow is enabling plaintext registry access by
writing an insecure /etc/docker/daemon.json and setting buildkitd-config-inline
with `http = true` and `insecure = true`; replace this by enabling TLS on the
Harbor registry (Let's Encrypt or internal CA) and remove the steps that write
`/etc/docker/daemon.json` in the "Configure Insecure Registry" job and the `http
= true` / `insecure = true` entries under "Set up Docker Buildx"
(buildkitd-config-inline); instead configure the runner to trust the registry
TLS certificate (install the CA into the runner or use the registry URL with
valid cert) and use standard authenticated docker login (secrets.HARBOR_REG /
credentials) for secure push/pull.
- Around line 88-95: The CI step "Build and Push Admin View" is not passing the
Docker build ARGs declared in admin_view/prod.Dockerfile (VUE_APP_URL,
VUE_APP_API_URL), causing empty values to be baked into the Vue build; update
that job to pass build-args (VUE_APP_URL and VUE_APP_API_URL) to
docker/build-push-action@v5 and add a preceding step to export/load
admin_view/.env (similar to the existing user/.env export) so those variables
are available to the workflow when invoking the Build and Push Admin View
action.
- Around line 3-9: Change the workflow trigger and the trigger-cd job so only
pushes to main initiate CD: restrict the push.branches list to only "main"
(remove feat/mitomen/2031/CI-CD-pipeline) and add an if guard to the trigger-cd
job—e.g., add if: github.ref == 'refs/heads/main' on the job named trigger-cd—so
the deploy job runs only for the main branch.
In `@compose.cd.yml`:
- Around line 9-13: The api service's bind mount (volumes: [./api:/myapp]) is
overwriting the image's /myapp contents; remove that volumes entry from the api
service so the pulled image's baked artifacts and installed dependencies are
used in CD, or if persistent data is required replace it with an appropriately
scoped Docker volume (not a host bind) referenced in the api service
configuration.
- Around line 13-25: The compose file assumes a repository-root .env (env_file:
[.env]) but CD.yml does not create it (CI.yml only creates user/.env and
admin_view/.env), causing docker compose up to fail or start with empty env; fix
by either (A) updating the compose service blocks (api, user, admin_view) to
reference service-specific env files (e.g., env_file: [user/.env], env_file:
[admin_view/.env], env_file: [api/.env]) or (B) adding a step in CD.yml that
generates or places the repo-root .env from secrets before running docker
compose up; change the env_file entries or add the CD job accordingly so env
variables exist at deploy time.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: eacd6be5-3bb3-4778-b41b-6cf325594cd6
📒 Files selected for processing (3)
.github/workflows/CD.yml, .github/workflows/CI.yml, compose.cd.yml
In the production section of api/config/database.yml, the host and socket settings were split into host and port.
Actionable comments posted: 1
♻️ Duplicate comments (7)
.github/workflows/integration-prod.yml (4)
29-42: ⚠️ Potential issue | 🟡 Minor

Avoid HTTP/insecure registry configuration for production images.

insecure-registries plus BuildKit http = true / insecure = true means CI pushes images without registry TLS verification. If Harbor is reachable only over Tailscale this may be temporarily acceptable, but production should use TLS and remove these insecure settings.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/integration-prod.yml around lines 29 - 42, Remove the insecure registry configuration and BuildKit http/insecure flags: delete or stop writing the "insecure-registries" entry in the "Configure Insecure Registry" step and remove the http = true and insecure = true lines from the buildkitd-config-inline under the "Set up Docker Buildx" step, and instead ensure the CI uses a TLS-accessible Harbor endpoint (or provide proper CA certs) so pushes use HTTPS; look for the exact symbols "insecure-registries", "buildkitd-config-inline", and registry."${{ secrets.HARBOR_REG }}" to locate and change the code.
3-9: ⚠️ Potential issue | 🔴 Critical

Gate production CD to main only before merge.

A push to feat/mitomen/2031/CI-CD-pipeline currently runs trigger-cd with inherited secrets. Restrict the push trigger and add an explicit branch guard on the deploy job.

Proposed fix

```diff
 on:
   push:
     branches:
       - main
-      - feat/mitomen/2031/CI-CD-pipeline
   workflow_call:
   workflow_dispatch:
 ...
   trigger-cd:
     needs: build-and-push
+    if: github.ref == 'refs/heads/main'
     uses: ./.github/workflows/CD.yml
     secrets: inherit
```

Also applies to: 97-100
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/integration-prod.yml around lines 3 - 9, The workflow currently triggers on pushes to both main and the feature branch (the branches list under on: push includes "feat/mitomen/2031/CI-CD-pipeline"), which allows deploy steps to run with inherited secrets; remove the feature branch from the push trigger so only "main" remains and add an explicit branch guard to the deploy job (add an if condition such as checking github.ref == 'refs/heads/main' on the deploy job) to ensure deploy only runs for main. Ensure you update the branches array and add the if guard on the job named "deploy" (or equivalent deploy job id) to prevent CD from running on feature branches.
51-57: ⚠️ Potential issue | 🟠 Major

Do not export and bake runtime secrets into the build.

USER_ENV is appended to $GITHUB_ENV, then IMGUR_CLIENT_SECRET and NEXTAUTH_SECRET are passed as build args; user/prod.Dockerfile sets them as ENV, so they can persist in the built image. Keep runtime secrets in deploy-time env/secret files and only pass public build-time values.

Also applies to: 80-86
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/integration-prod.yml around lines 51 - 57, The workflow currently writes secrets from USER_ENV into user/.env and appends them to $GITHUB_ENV (steps "Create user .env" and the Export user env to GitHub Actions block), and then passes IMGUR_CLIENT_SECRET and NEXTAUTH_SECRET as build-args which user/prod.Dockerfile turns into ENV, baking runtime secrets into the image; instead remove the step that exports USER_ENV to $GITHUB_ENV and stop passing IMGUR_CLIENT_SECRET/NEXTAUTH_SECRET as build args—keep USER_ENV (and those secrets) only in deploy-time secret storage and load them at container runtime or via your deployment platform, and modify user/prod.Dockerfile to not set these secrets as ENV at build time (use runtime env injection).
88-95: ⚠️ Potential issue | 🔴 Critical

Pass the Admin View build arguments.

admin_view/prod.Dockerfile declares VUE_APP_URL and VUE_APP_API_URL, but this build step does not pass them, so the production Vue bundle can be built with empty API URLs.

Proposed fix

```diff
 - name: Build and Push Admin View
   uses: docker/build-push-action@v5
   with:
     context: ./admin_view
     file: ./admin_view/prod.Dockerfile
     push: true
     tags: ${{ secrets.HARBOR_REG }}/group-manager/admin:latest
     cache-from: type=gha
+    build-args: |
+      VUE_APP_URL=${{ env.VUE_APP_URL }}
+      VUE_APP_API_URL=${{ env.VUE_APP_API_URL }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/integration-prod.yml around lines 88 - 95, The Build and Push Admin View step is not passing the VUE_APP_URL and VUE_APP_API_URL args declared in admin_view/prod.Dockerfile, causing the production bundle to be built with empty API URLs; update the docker/build-push-action@v5 step named "Build and Push Admin View" to include build-args for VUE_APP_URL and VUE_APP_API_URL (sourcing their values from the workflow environment, repository secrets, or inputs as appropriate) so the build uses the correct values when invoking the prod.Dockerfile.

.github/workflows/deploy-prod.yml (3)
11-14: ⚠️ Potential issue | 🟡 Minor

Pin the deploy host key instead of trusting ssh-keyscan output.

For production deploys, store the expected host public key/fingerprint in a secret and write that exact value to known_hosts; TOFU via ssh-keyscan will not detect a first-connection MITM. Also prefer overwrite over append on reused self-hosted runners.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/deploy-prod.yml around lines 11 - 14, Replace the insecure ssh-keyscan approach in the "Add deploy host key" step by reading the pinned host public key/fingerprint from a repository secret (e.g. secrets.DEPLOY_HOST_PUBLIC_KEY) and writing that exact value into ~/.ssh/known_hosts using overwrite (>) instead of append (>>) so reused runners don’t accumulate entries; update the step that currently references ssh-keyscan and secrets.CONTAINER_IP to echo the pinned secret into ~/.ssh/known_hosts and ensure ~/.ssh exists beforehand.
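The pinned-key, overwrite-not-append point can be sketched locally like this (the key value is made up; on the runner it would come from a secret such as DEPLOY_HOST_PUBLIC_KEY):

```shell
# Local simulation: a temp file stands in for ~/.ssh/known_hosts
KNOWN_HOSTS="$(mktemp)"

# Hypothetical pinned host key; on the runner this would be the secret value
PINNED_KEY="203.0.113.10 ssh-ed25519 AAAAC3NzaExampleKeyMaterial"

printf '%s\n' "$PINNED_KEY" > "$KNOWN_HOSTS"   # overwrite (>), not append (>>)
printf '%s\n' "$PINNED_KEY" > "$KNOWN_HOSTS"   # re-running stays idempotent on a reused runner
```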
22-24: ⚠️ Potential issue | 🔴 Critical

Do not hardcode the deployed branch.

This always resets production to feat/mitomen/2031/CI-CD-pipeline; when .github/workflows/integration-prod.yml runs from main, CD would still deploy the feature branch. Accept a ref input from the caller and use that for fetch/reset/checkout.

Proposed direction

```diff
 on:
   workflow_call:
+    inputs:
+      ref:
+        required: true
+        type: string
   workflow_dispatch:
+    inputs:
+      ref:
+        required: true
+        type: string
 ...
-          git fetch origin feat/mitomen/2031/CI-CD-pipeline
-          git reset --hard origin/feat/mitomen/2031/CI-CD-pipeline
-          git checkout -B feat/mitomen/2031/CI-CD-pipeline origin/feat/mitomen/2031/CI-CD-pipeline
+          git fetch origin "${{ inputs.ref }}"
+          git reset --hard "origin/${{ inputs.ref }}"
+          git checkout -B "${{ inputs.ref }}" "origin/${{ inputs.ref }}"
```

Then pass with: { ref: ${{ github.ref_name }} } from the CI workflow.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/deploy-prod.yml around lines 22 - 24, The workflow currently hardcodes the branch name in the git commands (git fetch origin feat/mitomen/2031/CI-CD-pipeline, git reset --hard origin/feat/mitomen/2031/CI-CD-pipeline, git checkout -B feat/mitomen/2031/CI-CD-pipeline origin/feat/mitomen/2031/CI-CD-pipeline); change these to use a workflow input (e.g., input named ref) instead of the literal branch name, update the job to accept an input "ref", and use that input variable in the three git commands (git fetch origin ${{ inputs.ref }}, git reset --hard origin/${{ inputs.ref }}, git checkout -B ${{ inputs.ref }} origin/${{ inputs.ref }}); also ensure the caller workflow passes the ref (for example with: { ref: ${{ github.ref_name }} }).
18-27: ⚠️ Potential issue | 🟠 Major

Avoid expanding Harbor secrets into the remote SSH script.

${{ secrets.* }} is rendered by GitHub Actions before SSH runs, so the remote script receives plaintext credentials. Move registry auth to a pre-provisioned Docker credential store on the deploy host, or pass values via a controlled SSH environment mechanism such as SendEnv/AcceptEnv; also ensure HARBOR_REG is exported for Compose interpolation before docker compose pull/up.
🧹 Nitpick comments (1)
.github/workflows/deploy-prod.yml (1)
28-28: Scope or remove the unconditional image prune.

docker image prune -f is host-wide. On a shared/self-hosted deployment machine, this can remove dangling images unrelated to this stack and reduce rollback options after a bad deploy. Prefer scheduled cleanup, label-filtered pruning, or keeping the previous release image around.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/deploy-prod.yml at line 28, The workflow currently runs an unconditional host-wide prune via the command "docker image prune -f"; replace it with a scoped cleanup or remove it: either remove the line entirely, or change it to a label-filtered prune (e.g., use "docker image prune --filter label=your_project_label" and ensure build/push steps add that label) or implement preserving the previous release image (keep last image tag) so rollbacks remain possible. Locate the "docker image prune -f" command in the deploy workflow and update it to use a label filter or remove it and rely on scheduled/CI-side cleanup.
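A label-filtered variant of the cleanup step could look like this (the label key/value is hypothetical and must match a label applied to the images at build time, e.g. via build-push-action's labels input):

```yaml
- name: Prune old stack images
  run: |
    # prune only dangling images carrying this stack's (hypothetical) label,
    # leaving unrelated images on the shared host untouched
    docker image prune -f --filter "label=com.example.stack=group-manager"
```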
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/integration-prod.yml:
- Line 99: The workflow step referencing the reusable workflow uses:
./.github/workflows/CD.yml points to a non-existent file; update that step to
use the actual reusable workflow file (e.g., change the reference to
./.github/workflows/deploy-prod.yml) or alternatively add the missing CD.yml
file; locate the uses line in .github/workflows/integration-prod.yml and either
correct the path to deploy-prod.yml or commit a new CD.yml workflow with the
expected inputs and outputs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4214130e-e0b8-4e12-9911-2485800e1fbe
📒 Files selected for processing (3)
.github/workflows/deploy-prod.yml, .github/workflows/integration-prod.yml, api/config/database.yml
♻️ Duplicate comments (1)
.github/workflows/deployment-prod.yml (1)
17-33: ⚠️ Potential issue | 🔴 Critical

Environment variables won't reach the remote shell via SSH — docker login will fail with empty credentials.

The quoted heredoc << 'EOF' evaluates $HARBOR_PASS, $HARBOR_REG, and $HARBOR_USERNAME on the remote side. SSH does not forward these environment variables by default (verified: no AcceptEnv in sshd_config, no pre-configured values in shell init files). This causes the docker login command to receive empty strings, breaking authentication.

Pass the secrets explicitly to avoid environment-dependent behavior:
Proposed fix: pass secrets explicitly via ssh command
```diff
 - name: SSH and Deploy
   env:
     HARBOR_PASS: ${{ secrets.HARBOR_PASS }}
     HARBOR_REG: ${{ secrets.HARBOR_REG }}
     HARBOR_USERNAME: ${{ secrets.HARBOR_USERNAME }}
   run: |
-    ssh deploy-user@${{ secrets.CONTAINER_IP }} -T bash -s << 'EOF'
+    ssh deploy-user@${{ secrets.CONTAINER_IP }} -T \
+      "HARBOR_PASS=\"$HARBOR_PASS\" HARBOR_REG=\"$HARBOR_REG\" HARBOR_USERNAME=\"$HARBOR_USERNAME\" bash -s" << 'EOF'
     set -e
     cd /home/deploy-user/group-manager-2
     git fetch origin feat/mitomen/2031/CI-CD-pipeline
     git reset --hard origin/feat/mitomen/2031/CI-CD-pipeline
     git checkout -B feat/mitomen/2031/CI-CD-pipeline origin/feat/mitomen/2031/CI-CD-pipeline
     echo "$HARBOR_PASS" | docker login "$HARBOR_REG" -u "$HARBOR_USERNAME" --password-stdin
     docker compose -f compose.cd.yml pull
     docker compose -f compose.cd.yml up -d
     docker image prune -f
     EOF
```
printf %qto safely quote it. Alternatively, configureAcceptEnv HARBOR_*in sshd_config and usessh -o SendEnv=HARBOR_PASS,HARBOR_REG,HARBOR_USERNAME.)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/deployment-prod.yml around lines 17 - 33, The SSH heredoc uses a single-quoted EOF (<< 'EOF') so $HARBOR_PASS, $HARBOR_REG and $HARBOR_USERNAME are expanded on the remote host where those envs are not set; make the SSH invocation pass the secrets explicitly instead of relying on remote envs: build the remote command string with the secrets injected from the runner into the ssh command (e.g. export or prefix HARBOR_PASS/HARBOR_REG/HARBOR_USERNAME in the ssh command invocation) so the docker login line inside the remote block receives concrete values, or alternatively enable SendEnv/AcceptEnv and use ssh -o SendEnv=... together with server-side sshd config—update the SSH/SSH and Deploy step containing the heredoc and the docker login line to use one of these explicit approaches and, if injecting the password into the remote command, safely quote it (e.g. via printf %q) before embedding.
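The printf %q quoting suggested above round-trips shell-special characters; a minimal local sketch (the password value is made up):

```shell
# Hypothetical password containing spaces, '$' and quotes
PASS='p@ss word$1"x'

# %q produces a shell-quoted representation that parses back to the same string
QUOTED=$(printf '%q' "$PASS")

# Round-trip via eval (safe only because QUOTED came from %q)
eval "ROUNDTRIP=$QUOTED"
```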
🧹 Nitpick comments (2)
.github/workflows/deployment-prod.yml (1)
13-13: Redundant mkdir -p ~/.ssh on persistent self-hosted runner.

Per the established setup, this CD job runs on a self-hosted runner backed by a persistent LXC container where ~/.ssh already exists with correct permissions (700 / 600 for known_hosts). This mkdir -p line can be removed for consistency with the cleanup applied elsewhere.

♻️ Proposed cleanup

```diff
 - name: Add deploy host key
   run: |
-    mkdir -p ~/.ssh
     ssh-keygen -R "${{ secrets.CONTAINER_IP }}" 2>/dev/null || true
     echo "${{ secrets.DEPLOY_HOST_KEY }}" >> ~/.ssh/known_hosts
```

Based on learnings: "the self-hosted runner for the CD workflow runs inside an LXC container where ~/.ssh already exists persistently with correct permissions (700 for the directory, 600 for known_hosts). Therefore, mkdir -p ~/.ssh and chmod commands in the workflow step are redundant and unnecessary for this environment."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/deployment-prod.yml at line 13, Remove the redundant mkdir -p ~/.ssh command from the CD workflow step (the exact token "mkdir -p ~/.ssh") since the self-hosted LXC runner provides a persistent ~/.ssh with correct permissions; also remove any accompanying chmod/chown steps in the same job that were added solely to initialize ~/.ssh so the workflow remains consistent with the cleanup applied elsewhere.

compose.cd.yml (1)
1-38: No restart policy — weak recovery for always-on production services.

None of the services (cloudflare / api / user / admin_view / dozzle-agent) specify restart, so they will not recover automatically after a host reboot or an abnormal container exit. For a production deployment compose file, adding restart: unless-stopped (or always) is recommended.

♻️ Proposed addition

```diff
   cloudflare:
     image: cloudflare/cloudflared:latest
     container_name: cloudflare
+    restart: unless-stopped
     volumes:
       - ./cloudflare/prod:/home/nonroot/.cloudflared
     command: tunnel run
   api:
     image: ${HARBOR_REG}/group-manager/api:latest
     container_name: prod-api
+    restart: unless-stopped
     env_file: [.env]
   user:
     image: ${HARBOR_REG}/group-manager/user:latest
     container_name: prod-view
+    restart: unless-stopped
     ports:
       - "8003:3000"
     env_file: [.env]
   admin_view:
     image: ${HARBOR_REG}/group-manager/admin:latest
     container_name: prod-admin
+    restart: unless-stopped
     env_file: [.env]
     ports:
       - "8000:3000"
   dozzle-agent:
     image: amir20/dozzle:latest
     container_name: "prod-dozzle-agent"
+    restart: unless-stopped
     ports: ["7007:7007"]
     command: agent
     volumes:
       - /var/run/docker.sock:/var/run/docker.sock:ro
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@compose.cd.yml` around lines 1 - 38, Add a restart policy to all production services so they auto-recover (e.g., add restart: unless-stopped under each service block); specifically update the cloudflare, api, user, admin_view and dozzle-agent service definitions to include restart: unless-stopped (or restart: always if you prefer) as a sibling to image/container_name/env_file/ports/command entries, ensuring correct YAML indentation and consistency across these service blocks.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 48cab754-8742-4b0d-92b2-6f0decd87352
📒 Files selected for processing (2)
.github/workflows/deployment-prod.yml, compose.cd.yml
Related Issue
resolve #2031
Overview
Implementation details
Docker images are built on the GitHub Actions runner side and pushed to the image registry. CI/CD runs when pushing to the main branch. (After review, I will change this so the feat branch no longer triggers it.)
Triggered by the completion of CI.yml, a self-hosted runner SSHes into the virtual container where the app runs, pulls the Docker images, and brings up the containers.
It currently runs with the images pulled via CD.yml; please let me know if there are any problems with the container configuration.
◾️ NUTFes Organization (shared across projects)
HARBOR_PASS
HARBOR_REG
HARBOR_USERNAME
TAILSCALE_OAUTH_CLIENT_ID
TAILSCALE_OAUTH_CLINET_SECRET
◾️group-manager-2 Repository
CONTAINER_IP
USER_ENV (all variables required by the Dockerfile's ARGs)
ADMIN_VIEW_ENV (all variables required by the Dockerfile's ARGs)
Screenshots
■ Actions run succeeded

■ Image pull completed and containers are up

Test items
Notes
Summary by CodeRabbit
Release Notes