
Native CI Orchestration: Trigger and Re-run Tests Directly from the Currents Dashboard

The Problem: "Context-Switching" Friction

Currently, our testing workflow is fragmented across two platforms. To initiate or re-run a test suite, we must navigate to GitHub Actions; to analyze the results, we must then switch over to Currents.dev. This back-and-forth creates unnecessary friction, slows down the debugging cycle, and forces developers to manage two different interfaces for a single task.

The Proposal: A Unified Execution & Observation Hub

We propose the ability to trigger and manage test runs directly from within the Currents.dev UI. Instead of Currents being a passive destination for results, it would become an active orchestration hub. This functionality would allow teams to:

- Launch New Runs: Trigger specific GitHub Actions workflows (via repository dispatch or a similar integration) directly from the Currents dashboard.
- Smart Re-runs: Re-execute failed or flaky tests with a single click inside the Currents run view, rather than hunting for the specific job in CI.
- Centralized Control: View real-time logs and execution progress without ever leaving the Currents ecosystem.

Why This Is a Game-Changer:

- Velocity: Dramatically reduces Mean Time to Repair (MTTR) by letting developers re-run failed tests the moment they identify them in the dashboard.
- Simplified DX (Developer Experience): Provides a single pane of glass for the entire testing lifecycle, from execution to post-mortem analysis.
- Competitive Alignment: Similar orchestration capabilities are highly valued in other dashboard environments (like Cypress Cloud), and bringing this to Currents would solidify its position as the premier choice for scalable test management.

By bridging the gap between CI execution and Currents reporting, you would provide a seamless, world-class workflow that saves time for every engineer on the team.
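To make the "Launch New Runs" idea concrete, here is a minimal sketch of how a dashboard could trigger a workflow through GitHub's repository_dispatch REST endpoint. The event type `currents-rerun`, the payload shape, and the function names are illustrative assumptions, not an existing Currents or GitHub convention:

```typescript
// Sketch: building the POST request GitHub expects for a
// repository_dispatch event. A workflow opts in by listening for the
// chosen event type in its `on: repository_dispatch` trigger.

interface DispatchRequest {
  url: string;
  body: {
    event_type: string;
    client_payload: Record<string, unknown>;
  };
}

function buildRerunDispatch(
  owner: string,
  repo: string,
  failedSpecs: string[]
): DispatchRequest {
  return {
    url: `https://api.github.com/repos/${owner}/${repo}/dispatches`,
    body: {
      event_type: "currents-rerun", // hypothetical event name
      client_payload: { specs: failedSpecs },
    },
  };
}

// Sending it requires a token with repo scope, e.g.:
// await fetch(req.url, {
//   method: "POST",
//   headers: {
//     Accept: "application/vnd.github+json",
//     Authorization: `Bearer ${token}`,
//   },
//   body: JSON.stringify(req.body),
// });
```

The receiving workflow could then read `github.event.client_payload.specs` to re-run only the failed specs.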


Eugene M. 23 days ago

💡 Feature Request

GitHub Enterprise Server (GHES) Support for GitHub App Integration

Description:

We'd like to request support for GitHub Enterprise Server (self-hosted GitHub deployments) with the Currents GitHub App integration.

Current State:

- The GitHub Legacy (OAuth) integration supports GHES with commit status updates on pull requests.
- The GitHub App integration (which provides enhanced features like PR comments) only supports GitHub.com.

Requested Enhancement:

Enable the GitHub App integration to work with self-hosted GitHub Enterprise Server deployments by:

- Allowing users to enter a custom GitHub Enterprise Server URL during setup.
- Supporting the full GitHub App feature set (including PR comments and enhanced status checks) for GHES instances.

Use Case:

Organizations using self-hosted GitHub Enterprise Server want to benefit from the richer integration features provided by the GitHub App (particularly pull request comments with test insights) rather than being limited to the basic commit status updates available through the legacy OAuth integration.

Workaround:

Currently, GHES users can use the GitHub Legacy (OAuth) integration for commit status updates, but this doesn't provide the enhanced PR comment functionality.
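On the API side, "enter a custom GHES URL during setup" mostly comes down to resolving a different REST base URL: GitHub.com serves its API at api.github.com, while GHES serves it under /api/v3 on the instance host (GitHub's documented convention). A minimal sketch, with an illustrative function name:

```typescript
// Sketch: resolving the REST API base URL for GitHub.com vs. a
// self-hosted GHES instance entered during integration setup.
function apiBaseUrl(serverUrl: string): string {
  const trimmed = serverUrl.replace(/\/+$/, ""); // drop trailing slashes
  return trimmed === "https://github.com"
    ? "https://api.github.com"
    : `${trimmed}/api/v3`; // GHES convention: API lives under /api/v3
}
```

Clients like Octokit accept this as a `baseUrl` option, so the rest of the GitHub App code path can stay identical for both deployments.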


DJ Mountney About 2 months ago

💡 Feature Request

Test failure root cause classification and triage

Description:

When viewing a specific test failure, there's no way to classify the root cause from a development/triage perspective. This makes it hard to:

- Quickly identify which failures need immediate attention vs. known issues
- Measure test suite health accurately
- Prioritize engineering effort appropriately

Proposed Solution:

A manual classification/tagging system for test failures that allows users to categorize the root cause when viewing individual test failures. Suggested categories:

- Product Bug — Actual defect in the application
- Flaky Test — Intermittent failure due to test instability
- Environment Issue — Infrastructure, network, or test environment problems
- Test Bug — Issue with the test code itself
- Known Issue — Linked to an existing tracked bug (e.g., JIRA)
- Under Investigation — Not yet triaged

Note: This is different from Currents' existing automatic error classification (Category, Action, Target), which identifies what technically caused the error from the test's perspective. This feature would add manual triage classification to indicate which part of the development cycle is responsible for the failure.

Benefits:

- Enable error identification and triage directly from the test failure view
- Quickly determine what action to take on each test error
- Better visibility into failure root causes
- More accurate test suite health metrics
- Improved prioritization of engineering work

Use Case:

When viewing a specific test failure, users should be able to classify it to indicate whether it's a product bug requiring immediate attention, a known flaky test, an environment issue, or something under investigation.
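The requested categories could be modeled as a small data shape attached to each failure. This is a hypothetical sketch of such a model, not an existing Currents API; all names are illustrative:

```typescript
// Sketch: a manual triage label, kept separate from Currents' automatic
// error classification (Category / Action / Target).

type TriageCategory =
  | "product-bug"          // actual defect in the application
  | "flaky-test"           // intermittent failure due to test instability
  | "environment-issue"    // infrastructure, network, or environment problem
  | "test-bug"             // issue with the test code itself
  | "known-issue"          // linked to an existing tracked bug
  | "under-investigation"; // not yet triaged

interface TriageLabel {
  category: TriageCategory;
  linkedIssue?: string; // e.g., a JIRA key when category is "known-issue"
  note?: string;
  classifiedBy: string;
  classifiedAt: string; // ISO timestamp
}

// Example: marking a failure as a known issue linked to a JIRA ticket.
const label: TriageLabel = {
  category: "known-issue",
  linkedIssue: "PROJ-1234",
  classifiedBy: "triage@example.com",
  classifiedAt: new Date().toISOString(),
};
```

Keeping the manual label as its own field (rather than overloading the automatic classification) would let health metrics aggregate over both dimensions independently.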


DJ Mountney About 2 months ago

💡 Feature Request

Allow custom run titles (e.g., use Playwright test/spec name instead of commit/PR title)

Summary

For scheduled (cron) and single-spec runs, the commit/PR title isn't meaningful. We need a deterministic way to set the Currents "Run title" from the Playwright test/spec name or a custom function, without relying on environment hacks.

Problem

Currents run titles default to the commit/PR title. For hourly monitors and smoke checks this is noisy and unhelpful. When we execute one spec or one test, we want the run title to reflect that test/spec (e.g., "[plan-check] Bronze/Silver/Gold exist") so dashboards and Slack alerts are instantly readable.

Impact

- Much clearer run list, filters, and Slack notifications.
- Easier triage: the title itself tells us what failed.
- Works especially well for cron jobs where git metadata is irrelevant.

Proposal

Add a first-class way to control the run title in the Playwright reporter options (and via env/CLI). For example:

```typescript
// playwright.config.ts
['@currents/playwright', {
  recordKey: process.env.CURRENTS_RECORD_KEY,
  projectId: process.env.CURRENTS_PROJECT_ID,

  // NEW: choose the title source
  runTitleSource: 'firstTest' | 'specFile' | 'project+spec' | 'commit' | 'custom',

  // NEW: if 'custom', Currents calls this with context about the run
  buildNameFn: (ctx) => {
    // ctx: { firstTest?, specFile?, projectName, branch, sha, workflow, attempt, ... }
    return `[plan-check] ${ctx.firstTest?.title ?? ctx.specFile ?? 'run'}`;
  },

  // keep existing env override if set
  buildName: process.env.CURRENTS_BUILD_NAME,
}]
```

Also support env/CLI equivalents, e.g. CURRENTS_RUN_TITLE_SOURCE=firstTest|specFile|project+spec|commit|custom. If CURRENTS_BUILD_NAME is present, it still wins.

Behavior / Acceptance Criteria

- Single test run: the title becomes the test title (or the buildNameFn output).
- Single spec with multiple tests: the title can be the spec filename, project+spec, or the buildNameFn output.
- Multi-spec runs: use the selected source (e.g., "project+spec (N tests)") or buildNameFn.
- The title is shown consistently in the runs list, the run detail page, and Slack notifications.

Nice-to-have

- Expose more fields in ctx (tags, duration, status counts).
- Per-run tag injection that also shows in Slack.

Current Workaround

We set CURRENTS_BUILD_NAME before each invocation and run one spec per step. This works but is fragile and verbose across multiple suites.
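The precedence the proposal implies (explicit CURRENTS_BUILD_NAME wins, then the selected source, then the commit title as fallback) can be sketched as a small resolver. This is a sketch of the proposed behavior only; `RunContext` mirrors the ctx fields suggested above, and none of this is an existing @currents/playwright API:

```typescript
// Sketch: resolving the run title under the proposed precedence rules.
interface RunContext {
  firstTest?: { title: string };
  specFile?: string;
  projectName: string;
  commitTitle: string;
}

type TitleSource = "firstTest" | "specFile" | "project+spec" | "commit" | "custom";

function resolveRunTitle(
  ctx: RunContext,
  source: TitleSource,
  buildNameEnv?: string,
  buildNameFn?: (ctx: RunContext) => string
): string {
  // An explicit CURRENTS_BUILD_NAME always wins, per the proposal.
  if (buildNameEnv) return buildNameEnv;
  switch (source) {
    case "firstTest":
      return ctx.firstTest?.title ?? ctx.commitTitle;
    case "specFile":
      return ctx.specFile ?? ctx.commitTitle;
    case "project+spec":
      return `${ctx.projectName} / ${ctx.specFile ?? "run"}`;
    case "custom":
      return buildNameFn ? buildNameFn(ctx) : ctx.commitTitle;
    default:
      // "commit" and any unrecognized source fall back to the commit title.
      return ctx.commitTitle;
  }
}
```

Making the fallback chain explicit like this would also keep the acceptance criteria testable: each source maps to one deterministic title.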


Xtos 3 months ago


💡 Feature Request

Artifact Support (Screenshots, Videos, Logs) in Currents Generic Reporter Context

Our CI pipeline runs end-to-end tests (Detox + Jest) for a React Native application. Due to current limitations with the @currents/jest reporter in a CommonJS environment, we're using a workaround that converts JUnit XML reports into Currents reports for upload. This flow works functionally but lacks one key capability: attaching artifacts (screenshots, videos, logs) to the test results. These files are essential for debugging failures and verifying UI regressions in mobile E2E testing.

Problem

Currently:

- The JUnit → Currents conversion format doesn't support artifacts.
- The Jest reporter also doesn't include any mechanism for uploading files, as it uses the same generic Currents upload command.
- As confirmed by Currents support (Miguel, 22 Oct 2025), "right now it is not supported," and this applies to both the JUnit and Jest reporters.

This limitation means that for frameworks like Detox or Cypress, where screenshot and video evidence are a standard part of test artifacts, Currents reports lose critical debugging data.

Proposed Solution

Add artifact attachment support to the generic Currents reporter and the data format reference, allowing uploads of associated files such as:

- Test-level screenshots and videos (e.g., recorded per test case)
- Test-level log files or trace dumps
- Possibly, arbitrary attachments (e.g., JSON results or CLI outputs)

Ideally, this would work both:

- When uploading via the CLI (currents upload ...), referencing artifact file paths in metadata.
- When using reporter integrations (@currents/jest), automatically attaching files from known locations (like artifactsDir) based on testName or other matching identification.

Impact

Adding artifact support would:

- Greatly improve debugging and analysis of E2E test failures.
- Bring Currents in line with other reporting platforms (like Allure or TestRail integrations).
- Enable teams to migrate fully to Currents even when not using supported reporters directly (e.g., when working through converted reports).
- Reduce friction for users in monorepos, CI pipelines, or multi-environment setups where direct ESM usage isn't feasible.

Additional Notes

Miguel mentioned the team "has plans to bump the generic reporter to support more things," but no ETA is available.
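As a strawman for "referencing artifact file paths in metadata", an artifact reference in the generic upload format could look something like the following. This shape is entirely hypothetical; it is not part of the current Currents data format reference:

```typescript
// Hypothetical sketch: how artifact references might be attached to a
// test result in a generic/converted report. All field names are
// illustrative assumptions.
interface ArtifactRef {
  testName: string; // identifier used to match the artifact to a test
  type: "screenshot" | "video" | "log" | "attachment";
  path: string;        // local file path to upload, e.g. from artifactsDir
  contentType?: string; // e.g. "image/png", "video/mp4"
}

// Example: artifacts a Detox run might attach to one failed test.
const artifacts: ArtifactRef[] = [
  {
    testName: "login flow",
    type: "screenshot",
    path: "artifacts/login-failure.png",
    contentType: "image/png",
  },
  {
    testName: "login flow",
    type: "video",
    path: "artifacts/login-flow.mp4",
    contentType: "video/mp4",
  },
];
```

Matching on testName (plus spec file or project where needed) is the same identification problem the reporter already solves for results, so converted JUnit reports could reuse that key.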


Martin Cihlář 3 months ago


💡 Feature Request

Link from Quarantine to Test Result / History

Context

Part of my daily work is ensuring the stability of our CI test runs. I regularly review test results, create quarantines, investigate issues, verify bug fixes, and disable quarantines when tests are stable again. Currently, the workflow from a test result to its quarantine is simple — there's a direct link and button for that. However, the reverse path (from a quarantine back to the test or its results) is missing, which makes validation slow.

Problem

When reviewing quarantines, I often need to check whether the related test has been fixed and stabilized. To do this, I currently have to:

1. Open the quarantine.
2. Copy the test name.
3. Open the Test Explorer in a new tab.
4. Paste and search for the test name.
5. Open the results in yet another tab (so I can reuse the second tab for my next search).
6. Finally, review the test history to decide if the quarantine can be lifted.

This multi-step process is time-consuming and repetitive, especially when managing multiple quarantines daily.

Proposed Solution

Add a direct link from each quarantine entry to:

- Option A: The Test Explorer, with filters automatically applied for the quarantined test(s).
- Option B (ideal): If the quarantine applies to a single test, link directly to the test history view — the graph of past runs, which is the most valuable view for verifying stability.

Impact

This feature would:

- Save several manual steps per quarantine review.
- Significantly speed up test stability verification.
- Improve workflow efficiency for anyone managing quarantines.
- Encourage more frequent and accurate quarantine cleanups.


Martin Cihlář 3 months ago

💡 Feature Request