test: skip unstable tests #5454
Conversation
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Tests were adjusted for platform-conditional execution and improved lifecycle handling. Some suites are now skipped on specific platforms or entirely, and several hooks/tests were refactored to async/await with explicit app readiness and cleanup. No production code or public APIs were changed.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant TR as Test Runner
  participant DS as describe(...)
  rect rgba(230,240,255,0.6)
    note right of TR: Platform-gated suite
    TR->>DS: register describe.skipIf(cond)
    alt cond is true (skip)
      DS-->>TR: suite marked skipped
    else cond is false (run)
      DS-->>TR: register tests to execute
    end
  end
```
```mermaid
sequenceDiagram
  autonumber
  participant TR as Test Runner
  participant BA as beforeAll (async)
  participant T as it(...)
  participant AE as afterEach
  participant APP as app
  rect rgba(235,255,235,0.6)
    TR->>BA: setup
    BA->>APP: create app
    BA->>APP: await app.ready()
    BA-->>TR: setup done
  end
  TR->>T: run test
  T->>APP: use app (requests/logging)
  T-->>TR: assertions done
  rect rgba(255,245,230,0.6)
    TR->>AE: teardown
    AE->>APP: if present, await app.close()
    AE-->>TR: app reference reset
  end
```
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Summary of Changes
Hello @fengmk2, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request focuses on enhancing the stability of the test suite by strategically skipping tests that are currently exhibiting instability. By doing so, it aims to prevent false negatives in CI/CD pipelines and allow for more focused attention on the underlying issues of these specific tests without blocking overall development progress.
Highlights
- Conditional Test Skipping: The 'listen hostname' test suite in `packages/egg/test/cluster1/app_worker.test.ts` is now conditionally skipped when running on the Windows platform, addressing platform-specific instability.
- Unconditional Test Skipping: Two test suites, 'httpclient tracer' and 'before app ready multi httpclient request tracer', within `packages/egg/test/lib/core/httpclient.test.ts` are now unconditionally skipped, indicating they are currently unstable across all environments.
Code Review
This pull request aims to improve build stability by skipping three unstable test suites. One suite is conditionally skipped on Windows, while two others are skipped unconditionally. While this can help stabilize the build, it comes at the cost of reduced test coverage. My review includes suggestions to add context for the skipped tests and to consider mocking external dependencies instead of skipping tests outright, to ensure the tests are reliable and can be re-enabled.
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@           Coverage Diff            @@
##             next    #5454    +/-  ##
==========================================
+ Coverage   75.70%   75.72%   +0.01%
==========================================
  Files          93       93
  Lines        5503     5503
  Branches     1153     1153
==========================================
+ Hits         4166     4167       +1
+ Misses       1168     1167       -1
  Partials      169      169
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 0
🧹 Nitpick comments (3)
packages/egg/test/lib/core/httpclient.test.ts (2)
278-405: Scope the skip to CI and clean up listeners to reduce flakiness (keep local coverage).

Skip only on CI and clear event listeners between tests to avoid cross-test interference.
```diff
-describe.skip('httpclient tracer', () => {
+describe.skipIf(process.env.CI === 'true')('httpclient tracer [flaky on CI]', () => {
   let app: MockApplication;

   beforeAll(() => {
     app = createApp('apps/httpclient-tracer');
     return app.ready();
   });

   afterAll(() => app.close());
+
+  afterEach(() => {
+    const httpclient = app?.httpclient;
+    httpclient?.removeAllListeners('request');
+    httpclient?.removeAllListeners('response');
+  });
```
537-591: Avoid external network; prefer local server and CI-only skip.

These requests are prone to CI network flakes. Use the local server for all three requests and only skip on CI.
```diff
-describe.skip('before app ready multi httpclient request tracer', () => {
+describe.skipIf(process.env.CI === 'true')('before app ready multi httpclient request tracer [flaky on CI]', () => {
   let app: MockApplication;

   beforeAll(async () => {
     const localServerUrl = await startLocalServer();
     mm(process.env, 'localServerUrl', localServerUrl);
     app = createApp('apps/httpclient-tracer');
     await app.ready();
   });
@@
   it('should app request before ready use same tracer', async () => {
     const httpclient = app.httpclient;
@@
-    let res = await httpclient.request(url, {
+    const local = process.env.localServerUrl || url;
+    let res = await httpclient.request(local, {
       method: 'GET',
       timeout: 20000,
     });
@@
-    res = await httpclient.request('https://registry.npmmirror.com', {
+    res = await httpclient.request(local, {
       method: 'GET',
       timeout: 20000,
     });
@@
-    res = await httpclient.request('https://www.npmjs.com', {
+    res = await httpclient.request(local, {
       method: 'GET',
       timeout: 20000,
     });
```

packages/egg/test/cluster1/app_worker.test.ts (1)
146-169: Be explicit with protocol (and keep title accurate).

Passing a bare host:port to supertest can rely on implicit defaults. Add the scheme to avoid ambiguity. Also consider renaming the test to reflect its current assertion (it no longer verifies refusal of “other ip”).
```diff
 it('should refuse other ip', async () => {
-  const url = ip() + ':' + app.port;
-
-  await request(url).get('/').expect('done').expect(200);
+  const url = ip() + ':' + app.port;
+  const baseUrl = `http://${url}`;
+
+  await request(baseUrl).get('/').expect('done').expect(200);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- packages/egg/test/cluster1/app_worker.test.ts (1 hunks)
- packages/egg/test/lib/core/httpclient.test.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
packages/egg/test/**
📄 CodeRabbit inference engine (CLAUDE.md)
Place and run the main Egg framework tests under packages/egg/test
Files:
- packages/egg/test/lib/core/httpclient.test.ts
- packages/egg/test/cluster1/app_worker.test.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Test / Test (ubuntu-latest, 24)
- GitHub Check: Test / Test (ubuntu-latest, 22)
- GitHub Check: Test / Test (windows-latest, 22)
- GitHub Check: Test / Test (windows-latest, 24)
- GitHub Check: Test / Test (macos-latest, 24)
🔇 Additional comments (1)
packages/egg/test/cluster1/app_worker.test.ts (1)
146-169: LGTM: gating on Windows is appropriate for platform-specific behavior.

This isolates known Windows differences without hiding the test elsewhere.
Actionable comments posted: 0
🧹 Nitpick comments (3)
packages/egg/test/lib/core/logger.test.ts (3)
19-26: Always clear app in afterEach to avoid stale references.

If tests call `await app.close()` themselves, `afterEach` won’t reset `app` (since `isClosed === true`). Clear it unconditionally and prefer draining background tasks over fixed sleeps.

```diff
 afterEach(async () => {
-  if (app && !app.isClosed) {
-    await scheduler.wait(500);
-    await app.close();
-    app = undefined;
-  }
+  if (app) {
+    if (!app.isClosed) {
+      // wait for async jobs to finish if available
+      await (app as any).backgroundTasksFinished?.();
+      await app.close();
+    }
+    app = undefined;
+  }
   await mm.restore();
 });
```
45-45: Redundant explicit closes.

These `await app.close()` calls are safe but duplicate the file-level `afterEach`. Either rely on `afterEach` or keep them and adopt the unconditional reset suggested above to avoid stale refs.

Also applies to: 67-67, 84-84, 101-101, 118-118, 128-128, 150-150, 170-170, 193-193, 209-209, 225-225, 244-244, 267-267, 276-276, 295-295
304-305: Await app.close() in afterAll.

Make `afterAll` async and await `app.close()` to ensure clean teardown.

```diff
- afterAll(() => app.close());
+ afterAll(async () => {
+   await app.close();
+ });
```

Also applies to: 325-326
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- package.json (1 hunks)
- packages/egg/test/lib/core/logger.test.ts (16 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
packages/egg/test/**
📄 CodeRabbit inference engine (CLAUDE.md)
Place and run the main Egg framework tests under packages/egg/test
Files:
packages/egg/test/lib/core/logger.test.ts
🧬 Code graph analysis (1)
packages/egg/test/lib/core/logger.test.ts (2)
packages/egg/test/utils.ts (3)
- app (39-44)
- MockApplication (32-32)
- createApp (46-46)

packages/mock/src/register.ts (2)

- afterEach (31-38)
- beforeAll (24-30)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Test / Test (macos-latest, 22)
- GitHub Check: Test / Test (ubuntu-latest, 24)
- GitHub Check: Test / Test (ubuntu-latest, 22)
- GitHub Check: Test / Test (macos-latest, 24)
- GitHub Check: Test / Test (windows-latest, 22)
- GitHub Check: Socket Security: Pull Request Alerts
🔇 Additional comments (3)
package.json (1)
28-28: test:cov passthrough is valid
`pnpm run test --coverage` forwards `--coverage` to `vitest run`, and Vitest accepts `--coverage`. Looks good.

Refs: pnpm run docs (args after script are passed through), Vitest coverage CLI. (pnpm.io)
packages/egg/test/lib/core/logger.test.ts (2)
18-18: Safer nullable app typing.

Typing `app` as `MockApplication | undefined` is a good hardening for lifecycle handling.
300-303: beforeAll awaits readiness: LGTM.

Converting `beforeAll` to async and awaiting `app.ready()` improves stability.

Also applies to: 321-324
Actionable comments posted: 0
🧹 Nitpick comments (2)
packages/cluster/test/master/others.test.ts (2)
351-366: DRY the port to avoid magic numbers and keep assertions in sync.

Define a single PORT constant and reuse it in config, log assertion, and request URL.
Apply this diff:
```diff
-describe.skipIf(process.platform !== 'linux')('--sticky', () => {
-  beforeAll(async () => {
+describe.skipIf(process.platform !== 'linux')('--sticky', () => {
+  const PORT = 17010;
+  beforeAll(async () => {
     app = cluster('apps/cluster_mod_sticky', {
       sticky: true,
-      port: 17010,
+      port: PORT,
     } as any);
     app.debug();
     await app.ready();
   });

   afterAll(() => app.close());

   it('should online sticky cluster mode startup success', async () => {
     app.expect('stdout', /app_worker#\d:\d+ started at (?!9500)/);
-    app.expect('stdout', /egg started on http:\/\/127.0.0.1:17010/);
-    await request('http://127.0.0.1:17010')
+    app.expect('stdout', new RegExp(`egg started on http://127.0.0.1:${PORT}`));
+    await request(`http://127.0.0.1:${PORT}`)
       .get('/portal/i.htm')
       .expect('hi cluster')
       .expect(200);
   });
 });
```
352-359: Optional: drop debug logs in CI.

Unless needed for assertions, removing `app.debug()` reduces log noise and speeds CI.

```diff
 app = cluster('apps/cluster_mod_sticky', {
   sticky: true,
   port: PORT,
 } as any);
-app.debug();
 await app.ready();
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- packages/cluster/test/master/others.test.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
packages/cluster/test/master/others.test.ts (4)
packages/mock/src/register.ts (2)
- beforeAll (24-30)
- afterAll (39-47)

packages/egg/test/utils.ts (2)

- app (39-44)
- cluster (55-61)

packages/cluster/test/utils.ts (1)

- cluster (5-17)

packages/mock/src/lib/supertest.ts (1)

- request (50-52)
🔇 Additional comments (2)
packages/cluster/test/master/others.test.ts (2)
351-351: Linux-only gating for sticky looks right.

Sticky scheduling is Linux-specific in our CI; this should reduce flakes.
351-351: Minor: prefer describe.runIf over describe.skipIf (positive gating).

Vitest's API exposes `describe.runIf`; pnpm-lock.yaml in this repo pins vitest@4.0.0-beta.11, so the swap is supported. (vitest.dev)
```diff
-describe.skipIf(process.platform !== 'linux')('--sticky', () => {
+describe.runIf(process.platform === 'linux')('--sticky', () => {
```

Summary by CodeRabbit
No user-facing changes.