Native CI Orchestration: Trigger and Re-run Tests Directly from the Currents Dashboard
The Problem: Context-Switching Friction
Our testing workflow is currently fragmented across two platforms. To start or re-run a test suite we must navigate to GitHub Actions; to analyze the results we must then switch over to Currents.dev. This back-and-forth creates unnecessary friction, slows down the debugging cycle, and forces developers to juggle two interfaces for a single task.

The Proposal: A Unified Execution & Observation Hub
We propose the ability to trigger and manage test runs directly from within the Currents.dev UI. Instead of being a passive destination for results, Currents would become an active orchestration hub. This would allow teams to:
- Launch new runs: trigger specific GitHub Actions workflows (via repository dispatch or a similar integration) directly from the Currents dashboard.
- Smart re-runs: re-execute failed or flaky tests with a single click from the Currents run view, rather than hunting for the specific job in CI.
- Centralized control: view real-time logs and execution progress without ever leaving the Currents ecosystem.

Why this matters:
- Velocity: dramatically reduces mean time to repair (MTTR) by letting developers re-run failed tests the moment they spot them in the dashboard.
- Simplified developer experience: a single pane of glass for the entire testing lifecycle, from execution to post-mortem analysis.
- Competitive alignment: similar orchestration capabilities are highly valued in other dashboards (e.g. Cypress Cloud); bringing this to Currents would strengthen its position as the premier choice for scalable test management.

By bridging the gap between CI execution and Currents reporting, you would provide a seamless workflow that saves time for every engineer on the team.
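The "repository dispatch" integration mentioned above could work roughly like the sketch below. The owner, repo, event name, and payload fields are hypothetical; the endpoint and payload shape follow GitHub's documented repository_dispatch REST API.

```typescript
// Sketch: how a dashboard could trigger a GitHub Actions workflow run.
// Repo and event names are invented; the endpoint and payload shape
// follow GitHub's repository_dispatch REST API.
interface DispatchRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildDispatchRequest(
  owner: string,
  repo: string,
  token: string,
  eventType: string,
  clientPayload: Record<string, unknown>
): DispatchRequest {
  return {
    url: `https://api.github.com/repos/${owner}/${repo}/dispatches`,
    method: "POST",
    headers: {
      Accept: "application/vnd.github+json",
      Authorization: `Bearer ${token}`,
    },
    // GitHub routes this event to workflows declaring `on: repository_dispatch`
    body: JSON.stringify({ event_type: eventType, client_payload: clientPayload }),
  };
}

// Example: a hypothetical "re-run only the failed specs" payload
const req = buildDispatchRequest("acme", "web-app", "<token>", "currents-rerun", {
  runId: "run_123",
  specs: ["auth.spec.ts"],
});
```

A workflow on the repo side would then react to the `currents-rerun` event type and start the requested specs.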

Eugene M. 23 days ago
💡 Feature Request
GitHub Enterprise Server (GHES) Support for GitHub App Integration
Description: We'd like to request support for GitHub Enterprise Server (self-hosted GitHub deployments) in the Currents GitHub App integration.

Current State:
- The GitHub Legacy (OAuth) integration supports GHES with commit status updates on pull requests.
- The GitHub App integration (which provides enhanced features like PR comments) only supports GitHub.com.

Requested Enhancement: Enable the GitHub App integration to work with self-hosted GitHub Enterprise Server deployments by:
- Allowing users to enter a custom GitHub Enterprise Server URL during setup.
- Supporting the full GitHub App feature set (including PR comments and enhanced status checks) for GHES instances.

Use Case: Organizations running self-hosted GitHub Enterprise Server want the richer integration features of the GitHub App (particularly pull request comments with test insights) rather than being limited to the basic commit status updates available through the legacy OAuth integration.

Workaround: GHES users can currently use the GitHub Legacy (OAuth) integration for commit status updates, but this does not provide the enhanced PR comment functionality.
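The "custom GHES URL during setup" part could be as small as resolving a different API base URL. The sketch below assumes the documented GHES convention that the REST API lives under /api/v3 on the instance's own host; the function name and parameter are hypothetical.

```typescript
// Sketch: resolving the REST API base URL for a GitHub integration.
// GitHub.com uses https://api.github.com, while a GitHub Enterprise
// Server instance exposes the same REST API under /api/v3 on its host.
function resolveApiBaseUrl(enterpriseServerUrl?: string): string {
  if (!enterpriseServerUrl) {
    return "https://api.github.com";
  }
  // Strip any trailing slashes from the configured GHES URL
  const host = enterpriseServerUrl.replace(/\/+$/, "");
  return `${host}/api/v3`;
}
```

With this in place, the rest of the GitHub App code path (PR comments, status checks) could stay unchanged and simply target the resolved base URL.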

DJ Mountney About 2 months ago
Integration
💡 Feature Request
Test failure root cause classification and triage
Description: When viewing a specific test failure, there's no way to classify the root cause from a development/triage perspective. This makes it hard to:
- Quickly identify which failures need immediate attention vs. known issues
- Measure test suite health accurately
- Prioritize engineering effort appropriately

Proposed Solution: A manual classification/tagging system that lets users categorize the root cause when viewing individual test failures. Suggested categories:
- Product Bug: actual defect in the application
- Flaky Test: intermittent failure due to test instability
- Environment Issue: infrastructure, network, or test environment problems
- Test Bug: issue with the test code itself
- Known Issue: linked to an existing tracked bug (e.g. Jira)
- Under Investigation: not yet triaged

Note: This is different from Currents' existing automatic error classification (Category, Action, Target), which identifies what technically caused the error from the test's perspective. This feature would add manual triage classification to indicate which part of the development cycle is responsible for the failure.

Benefits:
- Enable error identification and triage directly from the test failure view
- Quickly determine what action to take on each test error
- Better visibility into failure root causes
- More accurate test suite health metrics
- Improved prioritization of engineering work

Use Case: When viewing a specific test failure, users should be able to classify it to indicate whether it's a product bug requiring immediate attention, a known flaky test, an environment issue, or something still under investigation.
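As a data model, the requested triage taxonomy is small. Here is a minimal sketch; the field names (testId, linkedIssue) and the needsAttention helper are hypothetical, but the category names mirror the request.

```typescript
// Sketch: a manual triage classification attached to a test failure.
// Category names mirror the proposal; field names are hypothetical.
type RootCause =
  | "product-bug"
  | "flaky-test"
  | "environment-issue"
  | "test-bug"
  | "known-issue"
  | "under-investigation";

interface FailureTriage {
  testId: string;
  rootCause: RootCause;
  linkedIssue?: string; // e.g. a Jira key, for "known-issue"
  note?: string;
}

// Example of a metric this enables: failures needing immediate attention
function needsAttention(triages: FailureTriage[]): FailureTriage[] {
  return triages.filter(
    (t) => t.rootCause === "product-bug" || t.rootCause === "under-investigation"
  );
}

const triaged: FailureTriage[] = [
  { testId: "t1", rootCause: "product-bug" },
  { testId: "t2", rootCause: "flaky-test" },
];
const urgent = needsAttention(triaged);
```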

DJ Mountney About 2 months ago
💡 Feature Request
Show linked Jira issues on test list page
Description: Currently, Jira issues linked to tests are only visible after opening a test and checking a small icon. This makes it hard to see linked issues at a glance.

Request:
- Show linked Jira issues directly in the test list view
- Make it easy to see whether a failed test already has a related Jira issue without opening the test

Benefits:
- Faster triage of failed tests
- Avoid duplicate Jira issues
- Quick visibility into test status and related work items

Current behavior: Users must click into each test to see if a Jira issue is linked.
Desired behavior: Jira link indicators are visible directly in the test list view.

DJ Mountney About 2 months ago
Integration
💡 Feature Request
In Progress
Expand conditions and actions
Use case: if the build-id contains a specific pattern → tag the run, add a specific flag for filtering runs, or rename the run to a custom name.
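The condition/action pairing described above could be sketched as a tiny rule engine. The Rule shape and the action names (addTags, renameTo) are hypothetical illustrations, not an existing Currents API.

```typescript
// Sketch: condition/action rules matched against a build-id.
// Rule shape and action names are hypothetical.
interface Rule {
  pattern: RegExp;     // condition: build-id matches this pattern
  addTags?: string[];  // action: tag the run
  renameTo?: string;   // action: rename the run to a custom name
}

function applyRules(buildId: string, rules: Rule[]): { tags: string[]; title?: string } {
  const tags: string[] = [];
  let title: string | undefined;
  for (const rule of rules) {
    if (rule.pattern.test(buildId)) {
      tags.push(...(rule.addTags ?? []));
      title = rule.renameTo ?? title;
    }
  }
  return { tags, title };
}

// Example: every nightly build gets tagged and renamed
const result = applyRules("nightly-2024-06-01-build-42", [
  { pattern: /^nightly-/, addTags: ["nightly"], renameTo: "Nightly regression" },
]);
```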

Stanislav Ivanov 2 months ago
💡 Feature Request
The test ID should be used to attach Jira issues across projects.
I noticed that linked Jira issues do not carry over across runs. For example, we have a test that's failing on our 'engage' runs (engage-copy-favorites-list-id). This morning I used tags to run a group of tests, and this test was among them. It failed as expected, but no Jira issue is attached to the test.

rbjparker 2 months ago
💡 Feature Request
Allow custom run titles (e.g., use Playwright test/spec name instead of commit/PR title)
Summary
For scheduled (cron) and single-spec runs, the commit/PR title isn't meaningful. We need a deterministic way to set the Currents "Run title" from the Playwright test/spec name or a custom function, without relying on environment hacks.

Problem
Currents run titles default to the commit/PR. For hourly monitors and smoke checks this is noisy and unhelpful. When we execute one spec or one test, we want the run title to reflect that test/spec (e.g. "[plan-check] Bronze/Silver/Gold exist") so dashboards and Slack alerts are instantly readable.

Impact
- Much clearer run list, filters, and Slack notifications.
- Easier triage: the title itself tells us what failed.
- Works especially well for cron jobs where git metadata is irrelevant.

Proposal
Add a first-class way to control the run title in the Playwright reporter options (and via env/CLI). Example:

    // playwright.config.ts
    ['@currents/playwright', {
      recordKey: process.env.CURRENTS_RECORD_KEY,
      projectId: process.env.CURRENTS_PROJECT_ID,
      // NEW: choose the title source
      runTitleSource: 'firstTest' | 'specFile' | 'project+spec' | 'commit' | 'custom',
      // NEW: if 'custom', Currents calls this with context about the run
      buildNameFn: (ctx) => {
        // ctx: { firstTest?, specFile?, projectName, branch, sha, workflow, attempt, ... }
        return `[plan-check] ${ctx.firstTest?.title ?? ctx.specFile ?? 'run'}`;
      },
      // keep existing env override if set
      buildName: process.env.CURRENTS_BUILD_NAME,
    }]

Also support env/CLI equivalents, e.g. CURRENTS_RUN_TITLE_SOURCE=firstTest|specFile|project+spec|commit|custom. If CURRENTS_BUILD_NAME is present, it still wins.

Behavior / Acceptance Criteria
- Single test run: the title becomes the test title (or the buildNameFn output).
- Single spec with multiple tests: the title can be the spec filename, project+spec, or the buildNameFn output.
- Multi-spec runs: use the selected source (e.g. "project+spec (N tests)") or buildNameFn.
- The title is shown consistently in the runs list, the run detail page, and Slack notifications.

Nice-to-have
- Expose more fields in ctx (tags, duration, status counts).
- Per-run tag injection that also shows in Slack.

Current Workaround
We set CURRENTS_BUILD_NAME before each invocation and run one spec per step. This works but is fragile and verbose across multiple suites.

Xtos 3 months ago
💡 Feature Request
Completed
Reports 2.0
Improve the Automated Reports email content:
- Show more of the data that is available in the dashboard: charts, explorer, test performance, etc.
- More flexible scheduling.
- Include charts and visuals for better UX.
- Link back to Currents for further exploration.

Andrew Goldis 3 months ago
💡 Feature Request
Completed
Show success prompt when linking Jira App
In the Jira platform, in the Currents Jira App: when a user pastes the unique link token into the input field and clicks the "Link" button, the action just clears the field, leaving the user with no feedback on whether the link succeeded. It would be nice to have a prompt or notice saying whether the action succeeded or failed.

Miguel Langarano 3 months ago
💡 Feature Request
Microsoft Teams integration customizations
Describing the challenge: at the moment, if my test run has 3 projects, every project is sent to the Teams channel separately. That's fine, but sometimes I would like to see only one post in the channel covering the whole test run. Tags are sent with the post; the idea of the post is to give a quick notification about the run, and if users want details they are welcome to the "Currents world", where they can explore why a test failed, which tags were run, etc. I would also like to add rules, for example: send the notification only if the test run has more than 5% failed tests, and skip it otherwise.
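The threshold rule at the end could be sketched as a single predicate over a run-level summary. The RunSummary shape is hypothetical; the semantics follow the example in the request (notify only when more than 5% of tests failed).

```typescript
// Sketch: a notification rule evaluated for the whole run, not per project.
// Notify only when the failure rate exceeds the configured threshold.
interface RunSummary {
  total: number;
  failed: number;
}

function shouldNotify(run: RunSummary, failureThresholdPct: number): boolean {
  if (run.total === 0) return false;
  return (run.failed / run.total) * 100 > failureThresholdPct;
}
```

A single Teams post per run would then be emitted only when shouldNotify returns true for the aggregated summary of all projects.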

Stanislav Ivanov 3 months ago
💡 Feature Request
Collapsed view of projects list
A collapsed view, or a view without the graphs, so we can fit more than one project per row. This would help organizations with a large number of projects navigate through them more easily.

Miguel Langarano 3 months ago
💡 Feature Request
In Progress
Find Annotated Tests
I conditionally add an annotation during the test run (see https://docs.currents.dev/guides/playwright-annotations). I want the ability to find all the tests where this annotation occurred. "Test Explorer" does not allow me to filter by the annotation to find where and when it occurred.
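The requested filter boils down to matching on annotation type. A minimal sketch, assuming test records carry Playwright-style { type, description } annotations (the TestRecord fields are otherwise hypothetical):

```typescript
// Sketch: filtering tests by a Playwright-style annotation type.
interface TestRecord {
  title: string;
  annotations: { type: string; description?: string }[];
}

function findByAnnotation(tests: TestRecord[], type: string): TestRecord[] {
  return tests.filter((t) => t.annotations.some((a) => a.type === type));
}

// Example: find tests that were conditionally annotated during the run
const tests: TestRecord[] = [
  { title: "checkout", annotations: [{ type: "needs-retry", description: "rate limited" }] },
  { title: "login", annotations: [] },
];
const flagged = findByAnnotation(tests, "needs-retry");
```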

Dale Fixter 3 months ago
💡 Feature Request
In Progress
Jira Integration - handle Custom Required Fields
The current Jira Integration implementation does not work when the issue type has a required custom field, which makes the integration unusable in that case. You can use the following API endpoints to discover the required fields and prompt the user for their values, so that a complete and successful Create Issue request can be sent:
https://developer.atlassian.com/cloud/jira/platform/rest/v3/api-group-issues/#api-rest-api-3-issue-createmeta-projectidorkey-issuetypes-get
https://developer.atlassian.com/cloud/jira/platform/rest/v3/api-group-issues/#api-rest-api-3-issue-createmeta-projectidorkey-issuetypes-issuetypeid-get
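The flow the endpoints enable could look like this sketch: fetch the createmeta for the chosen issue type, keep the fields marked required, and render inputs for them before submitting. The field metadata shape loosely follows Jira Cloud's createmeta issuetype fields response (fieldId, name, required), but the sample data is invented.

```typescript
// Sketch: extracting required fields from a Jira createmeta response so
// the UI can prompt the user before sending Create Issue.
interface FieldMeta {
  fieldId: string;
  name: string;
  required: boolean;
}

function requiredFields(fields: FieldMeta[]): FieldMeta[] {
  return fields.filter((f) => f.required);
}

// Invented sample of what the createmeta endpoint might return
const sample: FieldMeta[] = [
  { fieldId: "summary", name: "Summary", required: true },
  { fieldId: "customfield_10042", name: "Team", required: true },
  { fieldId: "labels", name: "Labels", required: false },
];
const toPrompt = requiredFields(sample);
```

The user's answers for toPrompt would then be merged into the Create Issue payload, avoiding the current failure on required custom fields.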

Dale Fixter 3 months ago
💡 Feature Request
Artifact Support (Screenshots, Videos, Logs) in Currents Generic Reporter
Our CI pipeline runs end-to-end tests (Detox + Jest) for a React Native application. Due to current limitations with the @currents/jest reporter in a CommonJS environment, we're using a workaround that converts JUnit XML reports into Currents reports for upload. This flow works functionally but lacks one key capability: attaching artifacts (screenshots, videos, logs) to the test results. These files are essential for debugging failures and verifying UI regressions in mobile E2E testing.

Problem
Currently:
- The JUnit → Currents conversion format doesn't support artifacts.
- The Jest reporter also doesn't include any mechanism for uploading files, as it uses the same generic Currents upload command.
As confirmed by Currents support (Miguel, 22 Oct 2025), "right now it is not supported," and this applies to both the JUnit and Jest reporters. For frameworks like Detox or Cypress, where screenshot and video evidence are a standard part of test artifacts, this means Currents reports lose critical debugging data.

Proposed Solution
Add artifact attachment support to the generic Currents reporter and the data format reference, allowing uploads of associated files such as:
- Test-level screenshots and videos (e.g. recorded per test case)
- Test-level log files or trace dumps
- Possibly arbitrary attachments (e.g. JSON results or CLI outputs)
Ideally, this would work both:
- when uploading via the CLI (currents upload ...), by referencing artifact file paths in metadata, and
- when using reporter integrations (@currents/jest), by automatically attaching files from known locations (like artifactsDir) matched by testName or another identifier.

Impact
Adding artifact support would:
- Greatly improve debugging and analysis of E2E test failures.
- Bring Currents in line with other reporting platforms (like Allure or TestRail integrations).
- Enable teams to migrate fully to Currents even when not using supported reporters directly (e.g. when working through converted reports).
- Reduce friction for users in monorepos, CI pipelines, or multi-environment setups where direct ESM usage isn't feasible.

Additional Notes
Miguel mentioned the team "has plans to bump the generic reporter to support more things," but no ETA is available.
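To make the "artifact paths in metadata" idea concrete, here is one possible shape for the proposed format. Every field name here is a proposal for illustration, not an existing Currents schema.

```typescript
// Sketch: a possible artifact section in the generic reporter's
// per-test result format. All field names are a proposal.
interface ArtifactRef {
  kind: "screenshot" | "video" | "log" | "attachment";
  path: string;         // local path resolved at upload time
  contentType?: string; // optional MIME type hint
}

interface TestResultWithArtifacts {
  testName: string;
  status: "passed" | "failed";
  artifacts: ArtifactRef[];
}

// Example: how a Detox failure might reference its evidence files
const result: TestResultWithArtifacts = {
  testName: "login screen shows error on bad password",
  status: "failed",
  artifacts: [
    { kind: "screenshot", path: "artifacts/login-failure.png", contentType: "image/png" },
    { kind: "log", path: "artifacts/device.log" },
  ],
};
```

The CLI would resolve each path relative to the working directory at upload time and attach the files to the corresponding test result.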

Martin Cihlář 3 months ago
Test Framework
💡 Feature Request
Completed
Unsaved changes modal displays after creating a Jira
When trying to close the modal after creating a Jira from a failed test, the modal won't close until you click the OK button on the message "app.current.dev says 'You have unsaved changes. Are you sure you want to close?'". In fact, there are no unsaved changes, and the modal should just close when the X button is clicked or when you click outside the modal. I'm using Chrome, but I see the same issue in Edge.
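One plausible fix direction, sketched under assumptions (the actual modal implementation is unknown): only show the confirmation when the form state genuinely differs from its initial snapshot, and reset that snapshot after a successful "create Jira" submit.

```typescript
// Sketch: compare the current form state against an initial snapshot;
// prompt only when they differ. Field names are hypothetical.
function isDirty(
  initial: Record<string, string>,
  current: Record<string, string>
): boolean {
  const keys = new Set([...Object.keys(initial), ...Object.keys(current)]);
  for (const key of keys) {
    if (initial[key] !== current[key]) return true;
  }
  return false;
}
```

After the issue is created, resetting the snapshot to the (now empty) form would make isDirty return false, so the modal could close without the spurious warning.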

rbjparker 3 months ago
High Priority
Bug
Link from Quarantine to Test Result / History
Context
Part of my daily work is ensuring the stability of our CI test runs. I regularly review test results, create quarantines, investigate issues, verify bug fixes, and disable quarantines when tests are stable again. The path from a test result to its quarantine is simple: there's a direct link and button for that. However, the reverse path, from a quarantine back to the test and its results, is missing, which makes validation slow.

Problem
When reviewing quarantines, I often need to check whether the related test has been fixed and stabilized. To do this, I currently have to:
1. Open the quarantine.
2. Copy the test name.
3. Open the Test Explorer in a new tab.
4. Paste and search for the test name.
5. Open the results in yet another tab (so I can reuse the second tab for my next search).
6. Finally, review the test history to decide whether the quarantine can be lifted.
This multi-step process is time-consuming and repetitive, especially when managing multiple quarantines daily.

Proposed Solution
Add a direct link from each quarantine entry to:
- Option A: the Test Explorer, with filters automatically applied for the quarantined test(s).
- Option B (ideal): if the quarantine applies to a single test, link directly to the test history view, i.e. the graph of past runs, which is the most valuable view for verifying stability.

Impact
This feature would:
- Save several manual steps per quarantine review.
- Significantly speed up test stability verification.
- Improve workflow efficiency for anyone managing quarantines.
- Encourage more frequent and accurate quarantine cleanups.
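The deep link itself is just a URL with the filter pre-applied. A minimal sketch; the /explorer path and the query parameter names are hypothetical, not Currents' actual routes.

```typescript
// Sketch: building a deep link from a quarantine entry to a test
// explorer view with the test filter pre-applied. Path and query
// parameter names are invented for illustration.
function quarantineToExplorerUrl(
  baseUrl: string,
  projectId: string,
  testTitle: string
): string {
  // URLSearchParams handles escaping of spaces and special characters
  const params = new URLSearchParams({ project: projectId, search: testTitle });
  return `${baseUrl}/explorer?${params.toString()}`;
}

const url = quarantineToExplorerUrl(
  "https://app.currents.dev",
  "proj-1",
  "checkout completes"
);
```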

Martin Cihlář 3 months ago
💡 Feature Request
Completed
Need to be able to search by Jira number when linking a Jira to a test
The dropdown in the linking modal does not find an existing Jira issue when its issue number is entered. The only way to search for an issue and have it found is to enter its title text. A video of the experience is attached.
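The fix amounts to matching the query against the issue key as well as the title. A minimal sketch; the Issue shape is hypothetical.

```typescript
// Sketch: match the search input against both the issue key and the
// title, so typing "PROJ-123" finds the issue directly.
interface Issue {
  key: string; // e.g. "PROJ-123"
  title: string;
}

function searchIssues(issues: Issue[], query: string): Issue[] {
  const q = query.trim().toLowerCase();
  return issues.filter(
    (i) => i.key.toLowerCase().includes(q) || i.title.toLowerCase().includes(q)
  );
}

const found = searchIssues(
  [{ key: "PROJ-123", title: "Login button misaligned" }],
  "proj-123"
);
```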

rbjparker 4 months ago
💡 Feature Request
Add test scripts to "Tests" view.
Add the ability to display the script of a test case in the Tests view. The situation is as follows: when checking autotest failures, we check the screenshot and the error message. We would normally also check the test script to understand what is failing, what needs to be fixed, etc., but that is not available in the Tests view. I suggest adding this as an extra tab, next to the screenshot tab for example.

VeBa 4 months ago
💡 Feature Request
Display results of each feature file when opening that feature file
The results per feature file are only displayed in the "pipeline view". Once you open a specific feature file, its summarized results are no longer displayed. It would be useful to have all related information in one place, for example showing the number of passed/failed scenarios of a feature file when viewing that feature file.

VeBa 4 months ago
💡 Feature Request
Indicator of running pipelines
Add an indicator (e.g. in the projects list) when a pipeline is running for any of the projects, such as a red circle next to the project with a running pipeline. Otherwise, the only way to see whether something is running is to open each project one by one, which takes many clicks and a lot of time.

VeBa 4 months ago
💡 Feature Request