Restructure GitHub Actions workflow with artifact handling #2

phongphongg wants to merge 2 commits into
Conversation
📝 Walkthrough

Added a build-images matrix job that builds and uploads image tarball artifacts; prepare exposes additional refs; e2e waits for and downloads artifacts, loads images into Minikube, and runs Task-based deployment; the cleanup job deletes run-specific artifacts; branch/ref defaults moved to v0.
Sequence Diagram

```mermaid
sequenceDiagram
    autonumber
    actor Trigger as Webhook/Push
    participant Prepare as Prepare Job
    participant Build as build-images Job
    participant Artifacts as GitHub Artifacts
    participant E2E as e2e Job
    participant Minikube as Minikube
    participant Cleanup as cleanup Job
    Trigger->>Prepare: start workflow, set refs (v0)
    Prepare->>Build: provide refs / allow matrix start
    Build->>Build: build images, create metadata, tarballs
    Build->>Artifacts: upload image tarballs & metadata
    Artifacts-->>E2E: artifacts available
    E2E->>Artifacts: download artifacts
    E2E->>Minikube: load images into cluster
    E2E->>E2E: export per-service refs/outputs
    E2E->>Minikube: deploy via Task-based step, run tests
    E2E-->>Cleanup: trigger cleanup
    Cleanup->>Artifacts: delete run-specific artifacts
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
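Taken together, the walkthrough describes a four-job graph (prepare → build-images matrix → e2e → cleanup). A minimal skeleton of that shape might look like the following; job names, matrix entries, and action versions are assumptions for illustration, not the PR's actual YAML:

```yaml
# Illustrative skeleton of the described workflow shape (not the real file).
jobs:
  prepare:
    runs-on: ubuntu-latest
    outputs:
      testing_ref: ${{ steps.refs.outputs.testing_ref }}
    steps:
      - id: refs
        run: echo "testing_ref=v0" >> "$GITHUB_OUTPUT"

  build-images:
    needs: [prepare]
    runs-on: ubuntu-latest
    strategy:
      matrix:
        service: [desk, iam, migration]   # assumed service names
    steps:
      - run: echo "build ${{ matrix.service }} image, save tarball + metadata"
      - uses: actions/upload-artifact@v4
        with:
          name: build-${{ matrix.service }}
          path: out/

  e2e:
    needs: [prepare, build-images]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          pattern: build-*
          path: artifacts/
      - run: echo "load tarballs into minikube, deploy via Task, run tests"

  cleanup:
    # Runs even when e2e fails, but not on workflow cancellation.
    needs: [e2e, build-images]
    if: ${{ always() && !cancelled() }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "delete run-specific build-* artifacts"
```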
Force-pushed: 949e0dd → 9aaed1c
Force-pushed: 58e3bff → d5272cb
Force-pushed: d5272cb → 508f40e
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/playwright.yml (1)
213-217: ⚠️ Potential issue | 🟠 Major: Checkout the PR head explicitly for forked pull requests.

When a contributor forks this repository and opens a PR, the current checkout fails because `testing_ref` is just the branch name (from `github.head_ref`), but the checkout defaults to the base repository, where that branch doesn't exist. For PR scenarios, use the fork's repository and head commit SHA. For push and workflow_dispatch, continue using the standard ref.

💡 One straightforward way to pin the PR checkout
```diff
 - name: Checkout workastra/testing
   uses: actions/checkout@v6
   with:
+    repository: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name || github.repository }}
     path: ${{ env.TESTING_PATH }}
-    ref: ${{ needs.prepare.outputs.testing_ref }}
+    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || needs.prepare.outputs.testing_ref }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/playwright.yml around lines 213 - 217, The checkout step "Checkout workastra/testing" fails for forked PRs because it uses needs.prepare.outputs.testing_ref (branch name) against the base repo; change the step to choose the fork head explicitly for PRs by setting with.repository to github.event.pull_request.head.repo.full_name and with.ref to github.event.pull_request.head.sha when github.event_name == 'pull_request', and otherwise keep using with.ref: needs.prepare.outputs.testing_ref (or github.ref) for push/workflow_dispatch; update the actions/checkout@v6 invocation to use these conditional repository/ref values so the fork head is checked out for PR runs.
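Put together, the conditional checkout the comment describes would look roughly like this; `TESTING_PATH` and the `prepare` output names are taken from the review, and the rest is a sketch of standard `actions/checkout` usage, not the PR's exact step:

```yaml
# Sketch only — assumes the env var and job outputs named in the review.
- name: Checkout workastra/testing
  uses: actions/checkout@v6
  with:
    # For PRs, check out the fork's head repo and SHA; otherwise the prepared ref.
    repository: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name || github.repository }}
    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || needs.prepare.outputs.testing_ref }}
    path: ${{ env.TESTING_PATH }}
```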
🧹 Nitpick comments (1)
.github/workflows/playwright.yml (1)
184-188: Keep the test base URL sourced from `DOMAIN`.

`/etc/hosts` is now updated from `${{ env.DOMAIN }}`, but Line 316 still hard-codes `http://workastra.com`. The next domain change will update deploy/hosts and leave Playwright pointed at the old host.

♻️ Small follow-up to keep the config single-sourced
```diff
 - name: Run Playwright tests
   working-directory: ${{ env.TESTING_PATH }}
   env:
-    APP_BASE_URL: http://workastra.com
+    APP_BASE_URL: http://${{ env.DOMAIN }}
   run: pnpm run playwright:test --project=chromium
```

Also applies to: 290-291
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/playwright.yml around lines 184 - 188, The Playwright job is still hard-coding the base URL as "http://workastra.com"; replace that literal with the environment variable (use http://${{ env.DOMAIN }} or the job's env.DOMAIN) so Playwright's baseURL is single-sourced from env.DOMAIN; update all occurrences referenced (the base URL in the Playwright step and the other two spots noted) to use env.DOMAIN instead of the literal host so future domain changes only require updating DOMAIN.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/playwright.yml:
- Around line 327-331: The cleanup job currently uses needs: [e2e] and if: ${{
!cancelled() }}, which inherits success() and is skipped when e2e fails or is
skipped; update the cleanup job (named "cleanup") to use a condition that runs
regardless of e2e outcome but still avoids running on workflow cancellation,
e.g. replace the if with a combined expression that uses always() and
!cancelled() (for example: if: ${{ always() && !cancelled() }}), so the job runs
after failed or skipped e2e runs but not when the workflow is cancelled.
---
Outside diff comments:
In @.github/workflows/playwright.yml:
- Around line 213-217: The checkout step "Checkout workastra/testing" fails for
forked PRs because it uses needs.prepare.outputs.testing_ref (branch name)
against the base repo; change the step to choose the fork head explicitly for
PRs by setting with.repository to github.event.pull_request.head.repo.full_name
and with.ref to github.event.pull_request.head.sha when github.event_name ==
'pull_request', and otherwise keep using with.ref:
needs.prepare.outputs.testing_ref (or github.ref) for push/workflow_dispatch;
update the actions/checkout@v6 invocation to use these conditional
repository/ref values so the fork head is checked out for PR runs.
---
Nitpick comments:
In @.github/workflows/playwright.yml:
- Around line 184-188: The Playwright job is still hard-coding the base URL as
"http://workastra.com"; replace that literal with the environment variable (use
http://${{ env.DOMAIN }} or the job's env.DOMAIN) so Playwright's baseURL is
single-sourced from env.DOMAIN; update all occurrences referenced (the base URL
in the Playwright step and the other two spots noted) to use env.DOMAIN instead
of the literal host so future domain changes only require updating DOMAIN.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7ed6b37d-7819-4306-a2c9-b42a096dd1a0
📒 Files selected for processing (1)
.github/workflows/playwright.yml
```yaml
  cleanup:
    runs-on: ubuntu-latest
    name: Cleanup
    needs: [e2e]
    if: ${{ !cancelled() }}
```
🧩 Analysis chain
🌐 Web query:
GitHub Actions job needs dependency skip behavior always() condition
💡 Result:
In GitHub Actions, a job that has needs: implicitly has a default condition of if: success(). That means if any needed job is skipped, success() evaluates false and the dependent job is skipped too—unless you override the job’s if. [1]
To get “dependency skip behavior” (run even when dependencies are skipped), set a job-level if: that starts with always() and then explicitly allow the dependency results you want via needs.<job_id>.result (success|failure|cancelled|skipped). [1]
Example: run deploy if build succeeded or was skipped (but not if it failed/cancelled):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    if: ${{ github.ref == 'refs/heads/main' }}
    steps:
      - run: echo build
  deploy:
    runs-on: ubuntu-latest
    needs: [build]
    if: ${{ always() && (needs.build.result == 'success' || needs.build.result == 'skipped') }}
    steps:
      - run: echo deploy
```

Notes:
- `always()` is what forces evaluation even when a `needs` job didn't "succeed" (including `skipped`). [1][2]
- If you want "run unless the workflow was cancelled", use `if: ${{ !cancelled() && (...) }}` (since `always()` also runs on cancellation). [1]
Sources: GitHub Docs on needs.<job_id>.result and job if functions [1]; community/StackOverflow discussion of downstream jobs being skipped unless always() + needs.*.result checks are used [2].
[1] https://docs.github.com/en/actions/reference/contexts-reference
[2] https://stackoverflow.com/questions/76750973/how-to-execute-a-job-that-needs-a-job-that-was-skipped
Run cleanup after failed/skipped e2e runs too.
With needs: [e2e] and if: ${{ !cancelled() }}, this job inherits the implicit if: success() condition and is skipped when e2e fails or is skipped, so the build-* artifacts remain on disk when cleanup matters most.
🧹 Suggested condition

```diff
 cleanup:
   runs-on: ubuntu-latest
   name: Cleanup
   needs: [e2e]
-  if: ${{ !cancelled() }}
+  if: ${{ always() && !cancelled() }}
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
cleanup:
  runs-on: ubuntu-latest
  name: Cleanup
  needs: [e2e]
  if: ${{ always() && !cancelled() }}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/playwright.yml around lines 327 - 331, The cleanup job
currently uses needs: [e2e] and if: ${{ !cancelled() }}, which inherits
success() and is skipped when e2e fails or is skipped; update the cleanup job
(named "cleanup") to use a condition that runs regardless of e2e outcome but
still avoids running on workflow cancellation, e.g. replace the if with a
combined expression that uses always() and !cancelled() (for example: if: ${{
always() && !cancelled() }}), so the job runs after failed or skipped e2e runs
but not when the workflow is cancelled.
Force-pushed: f325e71 → cbd178f
♻️ Duplicate comments (1)
.github/workflows/playwright.yml (1)
336-340: ⚠️ Potential issue | 🟡 Minor: Make `cleanup` run after failed/skipped `e2e` runs.

With `needs: [e2e]`, GitHub Actions skips downstream jobs when the needed job fails or is skipped unless the job-level condition uses `always()`. `if: ${{ !cancelled() }}` is therefore still not enough to guarantee artifact cleanup. (docs.github.com)

🧹 Suggested change

```diff
 cleanup:
   runs-on: ubuntu-latest
   name: Cleanup
   needs: [e2e]
-  if: ${{ !cancelled() }}
+  if: ${{ always() && !cancelled() }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/playwright.yml around lines 336 - 340, The cleanup job currently uses needs: [e2e] and if: ${{ !cancelled() }}, which prevents it from running when e2e fails or is skipped; update the job-level condition on the cleanup job (job name "cleanup", which currently has needs: [e2e] and if: ${{ !cancelled() }}) to use always() combined with the cancelled check (for example: if: ${{ always() && !cancelled() }}), so the cleanup runs even when the e2e job fails or is skipped.
🧹 Nitpick comments (2)
.github/workflows/playwright.yml (2)
191-195: Consider deriving `APP_BASE_URL` from `DOMAIN` too.

`DOMAIN` is now the source of truth for the host mapping, but Line 325 still hardcodes `http://workastra.com`. Since `playwright.config.ts:13-21` reads `APP_BASE_URL`, wiring that env from `DOMAIN` would keep the workflow and test config from drifting.

♻️ Suggested change

```diff
 - name: Run Playwright tests
   working-directory: ${{ env.TESTING_PATH }}
   env:
-    APP_BASE_URL: http://workastra.com
+    APP_BASE_URL: http://${{ env.DOMAIN }}
   run: pnpm run playwright:test --project=chromium
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/playwright.yml around lines 191 - 195, The workflow hardcodes APP_BASE_URL separately from DOMAIN; update the env block so APP_BASE_URL is derived from DOMAIN (e.g., construct "http://"+DOMAIN or include scheme as needed) so tests read the same host as the DOMAIN source of truth; change the env assignment that currently hardcodes APP_BASE_URL to use the DOMAIN value and ensure APP_BASE_URL is exported to the Playwright job so playwright.config.ts can rely on APP_BASE_URL consistently.
167-175: Give each image build its own GHA cache scope.

Both cache lines use the default `gha` scope. Docker documents that this defaults to `buildkit`, and that building multiple images with that default causes each build to overwrite the previous cache object. In this matrix, that means Desk/IAM/Migration will churn the same cache instead of benefiting from the split build job. (docs.docker.com)

♻️ Suggested change

```diff
-cache-from: type=gha
-cache-to: type=gha,mode=max
+cache-from: type=gha,scope=${{ steps.meta.outputs.artifact_name }}
+cache-to: type=gha,mode=max,scope=${{ steps.meta.outputs.artifact_name }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/playwright.yml around lines 167 - 175, The GHA Docker cache is using the default 'gha' scope and is being shared across matrix builds; update the docker/build-push-action step where cache-from and cache-to are set so each image gets its own scope (e.g., add scope=${{ matrix.path }} or another matrix identifier) — change cache-from: type=gha to cache-from: type=gha,scope=${{ matrix.path }} and cache-to: type=gha,mode=max to cache-to: type=gha,mode=max,scope=${{ matrix.path }} (or use matrix.docker.file/matrix.name if more appropriate) so builds like Desk/IAM/Migration do not overwrite each other.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In @.github/workflows/playwright.yml:
- Around line 336-340: The cleanup job currently uses needs: [e2e] and if: ${{
!cancelled() }}, which prevents it from running when e2e fails or is skipped;
update the job-level condition on the cleanup job (job name "cleanup", which
currently has needs: [e2e] and if: ${{ !cancelled() }}) to use always() combined
with the cancelled check (for example: if: ${{ always() && !cancelled() }}), so
the cleanup runs even when the e2e job fails or is skipped.
---
Nitpick comments:
In @.github/workflows/playwright.yml:
- Around line 191-195: The workflow hardcodes APP_BASE_URL separately from
DOMAIN; update the env block so APP_BASE_URL is derived from DOMAIN (e.g.,
construct "http://"+DOMAIN or include scheme as needed) so tests read the same
host as the DOMAIN source of truth; change the env assignment that currently
hardcodes APP_BASE_URL to use the DOMAIN value and ensure APP_BASE_URL is
exported to the Playwright job so playwright.config.ts can rely on APP_BASE_URL
consistently.
- Around line 167-175: The GHA Docker cache is using the default 'gha' scope and
is being shared across matrix builds; update the docker/build-push-action step
where cache-from and cache-to are set so each image gets its own scope (e.g.,
add scope=${{ matrix.path }} or another matrix identifier) — change cache-from:
type=gha to cache-from: type=gha,scope=${{ matrix.path }} and cache-to:
type=gha,mode=max to cache-to: type=gha,mode=max,scope=${{ matrix.path }} (or
use matrix.docker.file/matrix.name if more appropriate) so builds like
Desk/IAM/Migration do not overwrite each other.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 46b17685-c964-48e6-b594-3d0d2ccb2660
📒 Files selected for processing (1)
.github/workflows/playwright.yml
Force-pushed: 15d2ad7 → f501521
Actionable comments posted: 1
♻️ Duplicate comments (1)
.github/workflows/playwright.yml (1)
362-366: ⚠️ Potential issue | 🟠 Major: Make cleanup wait for `build-images` and ignore `e2e` failures.

With `needs: [e2e]`, this job can start before `build-images` has finished uploading if `e2e` dies early. And with `if: ${{ !cancelled() }}`, it still inherits the default `success()` gate, so a failed or skipped `e2e` prevents cleanup from running at all. GitHub documents both behaviors: downstream jobs are skipped after a failed/skipped dependency unless the job-level condition includes a status function like `always()`, and `needs` only covers explicitly declared dependencies. (docs.github.com)

🧹 Suggested fix

```diff
 cleanup:
   runs-on: ubuntu-latest
   name: Cleanup
-  needs: [e2e]
-  if: ${{ !cancelled() }}
+  needs: [e2e, build-images]
+  if: ${{ always() && !cancelled() }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/playwright.yml around lines 362 - 366, The cleanup job currently only declares needs: [e2e] and uses if: ${{ !cancelled() }} which lets it start before build-images finishes if e2e exits early and still inherits the success() gate; update the cleanup job (cleanup) to depend on both e2e and build-images (needs: [e2e, build-images]) and change its condition to run regardless of upstream success but not when the workflow is cancelled by using if: ${{ always() && !cancelled() }} so cleanup waits for build-images and ignores e2e failures.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/playwright.yml:
- Around line 226-250: The loop waiting for the "Build image" matrix jobs
incorrectly treats terminal non-success conclusions as still-running; update the
check around the RESULTS array (mapfile -t RESULTS ...) to first detect any
terminal failures (e.g., "failure", "cancelled", "timed_out") and immediately
echo a failure message and exit 1, then only treat non-terminal/missing
conclusions as "still waiting" (sleep and retry), and finally break when all
entries are "success". Modify the block that currently uses grep -qv '^success$'
to implement this two-step check against terminal conclusions before waiting.
---
Duplicate comments:
In @.github/workflows/playwright.yml:
- Around line 362-366: The cleanup job currently only declares needs: [e2e] and
uses if: ${{ !cancelled() }} which lets it start before build-images finishes if
e2e exits early and still inherits the success() gate; update the cleanup job
(cleanup) to depend on both e2e and build-images (needs: [e2e, build-images])
and change its condition to run regardless of upstream success but not when the
workflow is cancelled by using if: ${{ always() && !cancelled() }} so cleanup
waits for build-images and ignores e2e failures.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4e30a18b-c485-44e9-926d-a3ca1aabb9c0
📒 Files selected for processing (1)
.github/workflows/playwright.yml
♻️ Duplicate comments (2)
.github/workflows/playwright.yml (2)
367: ⚠️ Potential issue | 🟡 Minor: Make cleanup run after failed/skipped E2E runs too.

Line 367 currently allows cleanup to be skipped when `e2e` is not successful, which leaves `build-*` artifacts behind.

🧹 Suggested condition

```diff
-if: ${{ !cancelled() }}
+if: ${{ always() && !cancelled() }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/playwright.yml at line 367, The cleanup step currently uses the condition "if: ${{ !cancelled() }}" so it skips when the e2e job fails or is skipped; update the condition for the cleanup step (look for the workflow step/job named cleanup) to run even after failures by changing the expression to "if: ${{ always() && !cancelled() }}", which ensures cleanup runs on success, failure, or skipped runs but still respects cancellation.
244-246: ⚠️ Potential issue | 🟠 Major: Fail fast on terminal matrix failures in the wait loop.
Line 244 treats terminal failures the same as “still waiting,” so the loop can burn time until timeout instead of exiting immediately.
🛠️ Suggested fix

```diff
-if printf '%s\n' "${RESULTS[@]}" | grep -qv '^success$'; then
-  echo "Some jobs not successful yet or failed"
+if printf '%s\n' "${RESULTS[@]}" | grep -Eq '^(failure|cancelled|timed_out|startup_failure|action_required)$'; then
+  echo "At least one build-images job finished unsuccessfully"
+  exit 1
+elif printf '%s\n' "${RESULTS[@]}" | grep -qv '^success$'; then
+  echo "Waiting for remaining build-images jobs..."
   sleep 10
 else
   echo "All build matrix jobs succeeded"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/playwright.yml around lines 244 - 246, The wait loop currently treats terminal failures like transient “not success,” so add an early check on the RESULTS array for terminal failure strings (e.g., "failure" or whatever terminal status your matrix uses) and exit immediately with a non-zero status instead of sleeping; locate the block referencing RESULTS and the grep check and insert a conditional that searches RESULTS for "failure" (or the terminal marker your jobs emit) and calls echo + exit 1 before the existing sleep, or alternatively change the grep logic to only continue sleeping when entries are "waiting" (not when any "failure" is present).
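The fail-fast classification described in this comment can be exercised standalone. A minimal POSIX sh sketch follows; the conclusion strings are illustrative (a subset of GitHub's job conclusions), and polling is replaced by return values so the logic is testable:

```shell
#!/bin/sh
# Sketch: classify matrix-job conclusions before deciding to keep waiting.
# In the workflow, the arguments would come from `gh` job-status JSON;
# here they are passed in directly so the function runs standalone.
check_results() {
  # Fail fast on any terminal non-success conclusion.
  if printf '%s\n' "$@" | grep -Eq '^(failure|cancelled|timed_out)$'; then
    echo "terminal-failure"
    return 1
  fi
  # Any entry that is not yet "success" means we should poll again.
  if printf '%s\n' "$@" | grep -qv '^success$'; then
    echo "still-waiting"
    return 0
  fi
  echo "all-success"
}

check_results success success success   # prints "all-success"
check_results success pending           # prints "still-waiting"
check_results failure success || true   # prints "terminal-failure"
```

In the real wait loop, "still-waiting" would map to `sleep 10` and retry, and "terminal-failure" to an immediate `exit 1`.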
🧹 Nitpick comments (3)
.github/workflows/playwright.yml (3)
194-195: `DOMAIN` is introduced but not actually driving the Playwright base URL.

Lines 194-195 suggest domain configurability, but tests still run with the hard-coded `APP_BASE_URL: http://workastra.com`, so changing `DOMAIN` won't affect the browser target.

🔧 Suggested follow-up in this workflow

```diff
 - name: Run Playwright tests
   working-directory: ${{ env.TESTING_PATH }}
   env:
-    APP_BASE_URL: http://workastra.com
+    APP_BASE_URL: http://${{ env.DOMAIN }}
   run: pnpm run playwright:test --project=chromium
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/playwright.yml around lines 194 - 195, The workflow defines DOMAIN but Playwright still uses a hard-coded APP_BASE_URL; update the GitHub Actions workflow so APP_BASE_URL is derived from DOMAIN (and optionally TENANT_NAME) instead of a literal URL. Locate the environment block that sets APP_BASE_URL in .github/workflows/playwright.yml and replace the hard-coded value with a reference to the DOMAIN variable (for example construct the URL from DOMAIN or combine TENANT_NAME and DOMAIN as needed) so changing DOMAIN actually changes the Playwright base URL used by tests.
188-190: Prefer native dependency wiring over custom polling.

Line 190 comments out `build-images` from `needs`, which forces the custom GH CLI wait loop. This is more brittle than native DAG orchestration and error propagation.

♻️ Suggested simplification

```diff
 e2e:
   timeout-minutes: 60
   runs-on: ubuntu-latest
   name: E2E
   needs:
     - prepare
-    # - build-images
+    - build-images
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/playwright.yml around lines 188 - 190, The workflow currently comments out the `build-images` dependency in the `needs:` block (leaving `needs: - prepare` only), which forces a custom GH CLI polling/wait loop; restore native DAG wiring by re-adding `build-images` to the `needs` list so the job depends on `build-images` (remove the comment token on `build-images`), and then delete or disable the custom GH CLI wait loop code so the workflow relies on the native dependency ordering and error propagation instead of polling.
260-283: Validate expected artifact metadata count before exporting image outputs.

If metadata files are missing/partial, this step silently emits incomplete outputs and fails later in deployment with less actionable errors.

✅ Guardrail suggestion

```diff
 - name: Load images into minikube
   id: images
   run: |
+    EXPECTED=3
+    FOUND=$(find "${{ runner.temp }}/artifacts" -name "metadata.json" | wc -l | tr -d ' ')
+    if [ "$FOUND" -ne "$EXPECTED" ]; then
+      echo "Expected $EXPECTED metadata.json files, found $FOUND"
+      exit 1
+    fi
+
     while IFS= read -r -d '' meta_file; do
       dir=$(dirname "$meta_file")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/playwright.yml around lines 260 - 283, The step "Load images into minikube" currently loops over metadata.json files and writes outputs without validating the found artifacts; modify this block to first collect and count the metadata files found under "${{ runner.temp }}/artifacts" (the results of find ... -name "metadata.json"), verify the count matches an expected number or at least is >0, and if validation fails emit a clear error and exit non‑zero before loading images; only after successful validation should the script iterate each metadata.json, parse REPO/TAG/IMG/TAR, load the tar via minikube image load, and then write outputs to GITHUB_OUTPUT (using KEY, TAG, REPO) so you avoid emitting incomplete outputs when artifacts are missing.
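The guardrail can be exercised outside the workflow. In this sketch, a temp directory stands in for `$RUNNER_TEMP/artifacts`, and the three service names and JSON fields are assumptions for illustration, not the PR's actual metadata schema:

```shell
#!/bin/sh
# Sketch of the count-before-trust guardrail, runnable standalone.
set -eu

ARTIFACT_DIR=$(mktemp -d)   # stands in for "$RUNNER_TEMP/artifacts"
EXPECTED=3

# Simulate three downloaded build artifacts, each with a metadata.json.
for svc in desk iam migration; do
  mkdir -p "$ARTIFACT_DIR/build-$svc"
  printf '{"repo":"example/%s","tag":"ci"}\n' "$svc" \
    > "$ARTIFACT_DIR/build-$svc/metadata.json"
done

# Validate the count before emitting any per-image outputs.
FOUND=$(find "$ARTIFACT_DIR" -name metadata.json | wc -l | tr -d ' ')
if [ "$FOUND" -ne "$EXPECTED" ]; then
  echo "Expected $EXPECTED metadata.json files, found $FOUND" >&2
  exit 1
fi
echo "validated $FOUND metadata files"   # prints "validated 3 metadata files"
```

Only after this check passes would the step parse each metadata.json, run `minikube image load`, and write outputs to `GITHUB_OUTPUT`.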
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In @.github/workflows/playwright.yml:
- Line 367: The cleanup step currently uses the condition "if: ${{ !cancelled()
}}" so it skips when the e2e job fails or is skipped; update the condition for
the cleanup step (look for the workflow step/job named cleanup) to run even
after failures by changing the expression to "if: ${{ always() && !cancelled()
}}", which ensures cleanup runs on success, failure, or skipped runs but still
respects cancellation.
- Around line 244-246: The wait loop currently treats terminal failures like
transient “not success,” so add an early check on the RESULTS array for terminal
failure strings (e.g., "failure" or whatever terminal status your matrix uses)
and exit immediately with a non-zero status instead of sleeping; locate the
block referencing RESULTS and the grep check and insert a conditional that
searches RESULTS for "failure" (or the terminal marker your jobs emit) and calls
echo + exit 1 before the existing sleep, or alternatively change the grep logic
to only continue sleeping when entries are "waiting" (not when any "failure" is
present).
---
Nitpick comments:
In @.github/workflows/playwright.yml:
- Around line 194-195: The workflow defines DOMAIN but Playwright still uses a
hard-coded APP_BASE_URL; update the GitHub Actions workflow so APP_BASE_URL is
derived from DOMAIN (and optionally TENANT_NAME) instead of a literal URL.
Locate the environment block that sets APP_BASE_URL in
.github/workflows/playwright.yml and replace the hard-coded value with a
reference to the DOMAIN variable (for example construct the URL from DOMAIN or
combine TENANT_NAME and DOMAIN as needed) so changing DOMAIN actually changes
the Playwright base URL used by tests.
- Around line 188-190: The workflow currently comments out the `build-images`
dependency in the `needs:` block (leaving `needs: - prepare` only), which forces
a custom GH CLI polling/wait loop; restore native DAG wiring by re-adding
`build-images` to the `needs` list so the job depends on `build-images` (remove
the comment token on `build-images`), and then delete or disable the custom GH
CLI wait loop code so the workflow relies on the native dependency ordering and
error propagation instead of polling.
- Around line 260-283: The step "Load images into minikube" currently loops over
metadata.json files and writes outputs without validating the found artifacts;
modify this block to first collect and count the metadata files found under "${{
runner.temp }}/artifacts" (the results of find ... -name "metadata.json"),
verify the count matches an expected number or at least is >0, and if validation
fails emit a clear error and exit non‑zero before loading images; only after
successful validation should the script iterate each metadata.json, parse
REPO/TAG/IMG/TAR, load the tar via minikube image load, and then write outputs
to GITHUB_OUTPUT (using KEY, TAG, REPO) so you avoid emitting incomplete outputs
when artifacts are missing.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d740f730-4732-4901-8bc0-2a09fa20e2c6
📒 Files selected for processing (1)
.github/workflows/playwright.yml
Force-pushed: 83b45b2 → 2df4ca8
Force-pushed: 2df4ca8 → cd22c09