rustc_expand: improve diagnostics for non-repeatable metavars rust-lang/rust#154014

Merged

Unique-Usman Avatar
Unique-Usman on 2026-03-17 21:03:07 UTC · edited

An earlier PR solving this issue, #152679, was merged but caused a perf regression; this new PR addresses that problem. The first PR computed `binding` and `matched_rule` eagerly and passed them as owned values down to `diagnostics::emit_frag_parse_err`. This PR instead passes `lhs` and `rules` as borrowed values to `from_tts` and moves that logic into `diagnostics::emit_frag_parse_err`.

Fixes #47452.
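To illustrate the ownership change described above, here is a minimal sketch. The types and bodies are hypothetical, simplified stand-ins (the real code lives in `compiler/rustc_expand/src/mbe/macro_rules.rs`); only the borrowed-value pattern mirrors the PR.

```rust
// Hypothetical, simplified stand-in for the compiler's rule type.
#[derive(Debug)]
pub struct MacroRule {
    pub lhs: String,
}

// Before: `binding` and `matched_rule` were computed eagerly on every
// expansion and passed down as owned values, paying that cost even on the
// (overwhelmingly common) success path.
//
// After: the diagnostics path borrows the rules and does the matching work
// only when an error is actually being emitted.
pub fn emit_frag_parse_err<'a>(rules: &'a [MacroRule], frag: &str) -> Option<&'a MacroRule> {
    // Lazily look for the rule whose left-hand side mentions the fragment
    // kind, instead of receiving a precomputed owned `matched_rule`.
    rules.iter().find(|rule| rule.lhs.contains(frag))
}

fn main() {
    let rules = vec![
        MacroRule { lhs: "$x:ident".to_string() },
        MacroRule { lhs: "$e:expr".to_string() },
    ];
    // The rules are only borrowed; nothing is cloned on the happy path.
    assert!(emit_frag_parse_err(&rules, "expr").is_some());
}
```

The design point is that the expensive computation is deferred to the cold error path, which is why the remaining perf impact shows up only on macro-heavy benchmarks like html5ever.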

rustbot Avatar
rustbot on 2026-03-17 21:03:13 UTC

r? @adwinwhite

rustbot has assigned @adwinwhite.
They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.

Use r? to explicitly pick a reviewer

Why was this reviewer chosen?

The reviewer was selected based on:

  • Owners of files modified in this PR: compiler
  • compiler expanded to 69 candidates
  • Random selection from 15 candidates
Unique-Usman Avatar
Unique-Usman on 2026-03-17 21:03:42 UTC
rust-log-analyzer Avatar
rust-log-analyzer on 2026-03-17 21:08:59 UTC · hidden as outdated

The job pr-check-2 failed! Check out the build log: (web) (plain enhanced) (plain)

Click to see the possible cause of the failure (guessed by this bot)
    Checking rustc_symbol_mangling v0.0.0 (/checkout/compiler/rustc_symbol_mangling)
error[E0621]: explicit lifetime required in the type of `rules`
   --> compiler/rustc_expand/src/mbe/macro_rules.rs:355:1
    |
355 |   #[instrument(skip(cx, transparency, arg, rules))]
    |   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   |
    |   lifetime `'cx` required
    |   in this attribute macro expansion
    |
   ::: /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tracing-attributes-0.1.30/src/lib.rs:566:1
    |
566 | / pub fn instrument(
567 | |     args: proc_macro::TokenStream,
568 | |     item: proc_macro::TokenStream,
569 | | ) -> proc_macro::TokenStream {
    | |____________________________- in this expansion of `#[instrument]`
    |
help: add explicit lifetime `'cx` to the type of `rules`
    |
364 |     rules: &'cx [MacroRule],
    |             +++

For more information about this error, try `rustc --explain E0621`.
[RUSTC-TIMING] rustc_expand test:false 1.445
error: could not compile `rustc_expand` (lib) due to 1 previous error
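The E0621 in this log comes from `#[instrument]` expanding the function body into code that needs to name the `'cx` lifetime, which an elided `&[MacroRule]` parameter cannot provide. A minimal sketch of the pattern and the fix the compiler suggests (hypothetical types and function, without the `tracing` dependency):

```rust
// Hypothetical stand-in; the real error involves `#[instrument]` from
// tracing-attributes 0.1.30 expanding over a function in macro_rules.rs.
struct MacroRule;

// Eliding the lifetime on `rules` (i.e. `rules: &[MacroRule]`) gives it a
// fresh anonymous lifetime, so any code that must tie it to a named
// lifetime like `'cx` triggers E0621. Writing the lifetime explicitly, as
// the help message suggests (`rules: &'cx [MacroRule]`), resolves it:
fn expand_macro<'cx>(cx_data: &'cx str, rules: &'cx [MacroRule]) -> usize {
    let _ = cx_data; // stand-in for borrowing from the expansion context
    rules.len()
}

fn main() {
    let rules = [MacroRule, MacroRule];
    assert_eq!(expand_macro("cx", &rules), 2);
}
```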
rust-log-analyzer Avatar
rust-log-analyzer on 2026-03-19 06:34:51 UTC · hidden as outdated

The job x86_64-gnu-miri failed! Check out the build log: (web) (plain enhanced) (plain)

Click to see the possible cause of the failure (guessed by this bot)
   Compiling matchers v0.2.0
error[E0621]: explicit lifetime required in the type of `rules`
   --> compiler/rustc_expand/src/mbe/macro_rules.rs:355:1
    |
355 |   #[instrument(skip(cx, transparency, arg, rules))]
    |   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   |
    |   lifetime `'cx` required
    |   in this attribute macro expansion
    |
   ::: /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tracing-attributes-0.1.30/src/lib.rs:566:1
    |
566 | / pub fn instrument(
567 | |     args: proc_macro::TokenStream,
568 | |     item: proc_macro::TokenStream,
569 | | ) -> proc_macro::TokenStream {
    | |____________________________- in this expansion of `#[instrument]`
    |
help: add explicit lifetime `'cx` to the type of `rules`
    |
364 |     rules: &'cx [MacroRule],
    |             +++

For more information about this error, try `rustc --explain E0621`.
[RUSTC-TIMING] rustc_expand test:false 1.329
error: could not compile `rustc_expand` (lib) due to 1 previous error
estebank Avatar
estebank on 2026-03-23 17:23:10 UTC

@bors try @rust-timer queue

rust-timer Avatar
rust-timer on 2026-03-23 17:23:13 UTC · hidden as outdated

Awaiting bors try build completion.

@rustbot label: +S-waiting-on-perf

rust-bors Avatar
rust-bors on 2026-03-23 17:23:16 UTC · hidden as outdated

⌛ Trying commit 35d11f3 with merge c7819c5

To cancel the try build, run the command @bors try cancel.

Workflow: https://github.com/rust-lang/rust/actions/runs/23450743953

rust-bors Avatar
rust-bors on 2026-03-23 19:32:22 UTC

☀️ Try build successful (CI)
Build commit: c7819c5 (c7819c57c7c6744e29582575ea34a29d6e8a3905, parent: 13e2abaac846b2680ae93e1b3bd9fe7fe1b9a7fe)

rust-timer Avatar
rust-timer on 2026-03-23 19:32:25 UTC · hidden as outdated

Queued c7819c5 with parent 13e2aba, future comparison URL.
There is currently 1 preceding artifact in the queue.
It will probably take at least ~1.2 hours until the benchmark run finishes.

rust-timer Avatar
rust-timer on 2026-03-23 20:26:43 UTC

Finished benchmarking commit (c7819c5): comparison URL.

Overall result: ❌ regressions - please read the text below

Benchmarking this pull request means it may be perf-sensitive – we'll automatically label it not fit for rolling up. You can override this, but we strongly advise not to, due to possible changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please do so in sufficient writing along with @rustbot label: +perf-regression-triaged. If not, please fix the regressions and do another perf run. If its results are neutral or positive, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

Our most reliable metric. Used to determine the overall result above. However, even this metric can be noisy.

                             mean    range           count
Regressions ❌ (primary)      0.2%    [0.2%, 0.2%]    6
Regressions ❌ (secondary)    -       -               0
Improvements ✅ (primary)     -       -               0
Improvements ✅ (secondary)   -       -               0
All ❌✅ (primary)             0.2%    [0.2%, 0.2%]    6

Max RSS (memory usage)

This benchmark run did not return any relevant results for this metric.

Cycles

Results (secondary -0.5%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

                             mean    range             count
Regressions ❌ (primary)      -       -                 0
Regressions ❌ (secondary)    1.9%    [1.9%, 1.9%]      1
Improvements ✅ (primary)     -       -                 0
Improvements ✅ (secondary)   -2.9%   [-2.9%, -2.9%]    1
All ❌✅ (primary)             -       -                 0

Binary size

Results (primary 0.1%, secondary 0.1%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

                             mean    range           count
Regressions ❌ (primary)      0.1%    [0.0%, 0.1%]    55
Regressions ❌ (secondary)    0.1%    [0.0%, 0.2%]    24
Improvements ✅ (primary)     -       -               0
Improvements ✅ (secondary)   -       -               0
All ❌✅ (primary)             0.1%    [0.0%, 0.1%]    55

Bootstrap: 482.731s -> 482.467s (-0.05%)
Artifact size: 396.84 MiB -> 396.93 MiB (0.02%)

estebank Avatar
estebank on 2026-03-24 02:20:45 UTC

The regressions here are all in html5ever. I think this should be fine.

@Unique-Usman could you provide a description, a link to the previous PR and squash the commits?

Unique-Usman Avatar
Unique-Usman on 2026-03-24 02:25:15 UTC

@estebank, I will do that. Thanks.

estebank Avatar
estebank on 2026-03-24 18:04:37 UTC

@bors try @rust-timer queue

getting some confirmation on the html5ever incr regression

rust-timer Avatar
rust-timer on 2026-03-24 18:04:41 UTC · hidden as outdated

Awaiting bors try build completion.

@rustbot label: +S-waiting-on-perf

rust-bors Avatar
rust-bors on 2026-03-24 18:04:45 UTC · hidden as outdated

⌛ Trying commit ed4a1f8 with merge 9837802

To cancel the try build, run the command @bors try cancel.

Workflow: https://github.com/rust-lang/rust/actions/runs/23504878841

rust-bors Avatar
rust-bors on 2026-03-24 20:14:48 UTC

☀️ Try build successful (CI)
Build commit: 9837802 (983780242f7d36e76c1d0da9177445d96d292976, parent: 0312931d8c0ba1a28268a12c06202b68cbc65f76)

rust-timer Avatar
rust-timer on 2026-03-24 20:14:52 UTC · hidden as outdated

Queued 9837802 with parent 0312931, future comparison URL.
There are currently 0 preceding artifacts in the queue.
It will probably take at least ~1.0 hours until the benchmark run finishes.

rust-timer Avatar
rust-timer on 2026-03-24 20:54:43 UTC

Finished benchmarking commit (9837802): comparison URL.

Overall result: ❌ regressions - please read the text below

Benchmarking this pull request means it may be perf-sensitive – we'll automatically label it not fit for rolling up. You can override this, but we strongly advise not to, due to possible changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please do so in sufficient writing along with @rustbot label: +perf-regression-triaged. If not, please fix the regressions and do another perf run. If its results are neutral or positive, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

Our most reliable metric. Used to determine the overall result above. However, even this metric can be noisy.

                             mean    range           count
Regressions ❌ (primary)      0.3%    [0.3%, 0.3%]    6
Regressions ❌ (secondary)    -       -               0
Improvements ✅ (primary)     -       -               0
Improvements ✅ (secondary)   -       -               0
All ❌✅ (primary)             0.3%    [0.3%, 0.3%]    6

Max RSS (memory usage)

Results (secondary -2.5%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

                             mean    range             count
Regressions ❌ (primary)      -       -                 0
Regressions ❌ (secondary)    -       -                 0
Improvements ✅ (primary)     -       -                 0
Improvements ✅ (secondary)   -2.5%   [-2.5%, -2.5%]    1
All ❌✅ (primary)             -       -                 0

Cycles

Results (secondary -0.5%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

                             mean    range             count
Regressions ❌ (primary)      -       -                 0
Regressions ❌ (secondary)    3.0%    [2.9%, 3.1%]      2
Improvements ✅ (primary)     -       -                 0
Improvements ✅ (secondary)   -2.9%   [-4.1%, -2.1%]    3
All ❌✅ (primary)             -       -                 0

Binary size

Results (primary 0.1%, secondary 0.1%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

                             mean    range           count
Regressions ❌ (primary)      0.1%    [0.0%, 0.1%]    54
Regressions ❌ (secondary)    0.1%    [0.0%, 0.1%]    22
Improvements ✅ (primary)     -       -               0
Improvements ✅ (secondary)   -       -               0
All ❌✅ (primary)             0.1%    [0.0%, 0.1%]    54

Bootstrap: 484.31s -> 483.312s (-0.21%)
Artifact size: 394.79 MiB -> 396.88 MiB (0.53%)

estebank Avatar
estebank on 2026-03-25 19:14:40 UTC

It consistently affects html5ever incr builds a little bit (we are doing more after all), but I think the output improvement is worth it.

@bors r+

rust-bors Avatar
rust-bors on 2026-03-25 19:14:44 UTC

📌 Commit ed4a1f8 has been approved by estebank

It is now in the queue for this repository.

rust-bors Avatar
rust-bors on 2026-03-25 23:14:19 UTC · hidden as outdated
rust-bors Avatar
rust-bors on 2026-03-26 02:20:52 UTC

☀️ Test successful - CI
Approved by: estebank
Duration: 3h 6m 27s
Pushing 1174f78 to main...

github-actions Avatar
github-actions on 2026-03-26 02:23:52 UTC
What is this? This is an experimental post-merge analysis report that shows differences in test outcomes between the merged PR and its parent PR.

Comparing 80d0e4b (parent) -> 1174f78 (this PR)

Test differences

Show 8 test diffs

Stage 1

  • [ui] tests/ui/macros/typo-in-norepeat-expr-2.rs: [missing] -> pass (J2)
  • [ui] tests/ui/macros/typo-in-norepeat-expr.rs: [missing] -> pass (J2)

Stage 2

  • [ui] tests/ui/macros/typo-in-norepeat-expr-2.rs: [missing] -> pass (J0)
  • [ui] tests/ui/macros/typo-in-norepeat-expr.rs: [missing] -> pass (J0)
  • [run-make] tests/run-make/compressed-debuginfo-zstd: ignore (ignored if LLVM wasn't build with zstd for ELF section compression or LLVM is not the default codegen backend) -> pass (J1)

Additionally, 3 doctest diffs were found. These are ignored, as they are noisy.

Job group index

Test dashboard

Run

cargo run --manifest-path src/ci/citool/Cargo.toml -- \
    test-dashboard 1174f784096deb8e4ba93f7e4b5ccb7bb4ba2c55 --output-dir test-dashboard

And then open test-dashboard/index.html in your browser to see an overview of all executed tests.

Job duration changes

  1. pr-check-1: 45m 27s -> 28m 4s (-38.2%)
  2. x86_64-gnu-tools: 1h 22m -> 52m 53s (-36.2%)
  3. pr-check-2: 56m 16s -> 38m 39s (-31.3%)
  4. x86_64-rust-for-linux: 1h 8m -> 47m 12s (-30.8%)
  5. x86_64-gnu-gcc: 1h 26m -> 1h 2m (-27.6%)
  6. dist-aarch64-apple: 2h 13m -> 1h 37m (-27.0%)
  7. dist-arm-linux-musl: 2h 11m -> 1h 37m (-25.7%)
  8. dist-android: 36m 6s -> 27m 3s (-25.1%)
  9. armhf-gnu: 1h 53m -> 1h 26m (-24.3%)
  10. x86_64-gnu-miri: 1h 52m -> 1h 26m (-23.3%)
How to interpret the job duration changes?

Job durations can vary a lot, based on the actual runner instance
that executed the job, system noise, invalidated caches, etc. The table above is provided
mostly for t-infra members, for simpler debugging of potential CI slow-downs.

rust-timer Avatar
rust-timer on 2026-03-26 03:01:21 UTC

Finished benchmarking commit (1174f78): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Our benchmarks found a performance regression caused by this PR.
This might be an actual regression, but it can also be just noise.

Next Steps:

  • If the regression was expected or you think it can be justified,
    please write a comment with sufficient written justification, and add
    @rustbot label: +perf-regression-triaged to it, to mark the regression as triaged.
  • If you think that you know of a way to resolve the regression, try to create
    a new PR with a fix for the regression.
  • If you do not understand the regression or you think that it is just noise,
    you can ask the @rust-lang/wg-compiler-performance working group for help (members of this group
    were already notified of this PR).

@rustbot label: +perf-regression
cc @rust-lang/wg-compiler-performance

Instruction count

Our most reliable metric. Used to determine the overall result above. However, even this metric can be noisy.

                             mean    range             count
Regressions ❌ (primary)      0.2%    [0.1%, 0.3%]      7
Regressions ❌ (secondary)    -       -                 0
Improvements ✅ (primary)     -0.1%   [-0.1%, -0.1%]    1
Improvements ✅ (secondary)   -0.2%   [-0.2%, -0.2%]    1
All ❌✅ (primary)             0.2%    [-0.1%, 0.3%]     8

Max RSS (memory usage)

This benchmark run did not return any relevant results for this metric.

Cycles

Results (secondary -0.8%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

                             mean    range             count
Regressions ❌ (primary)      -       -                 0
Regressions ❌ (secondary)    2.2%    [2.2%, 2.2%]      1
Improvements ✅ (primary)     -       -                 0
Improvements ✅ (secondary)   -3.9%   [-3.9%, -3.9%]    1
All ❌✅ (primary)             -       -                 0

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 481.885s -> 483.566s (0.35%)
Artifact size: 395.19 MiB -> 395.05 MiB (-0.04%)

Kobzol Avatar
Kobzol on 2026-03-31 07:44:03 UTC

Small regression on html5ever, it was deemed to be acceptable prior to merging.

@rustbot label: +perf-regression-triaged

Unique-Usman Avatar
Unique-Usman on 2026-03-31 07:52:18 UTC

> Small regression on html5ever, it was deemed to be acceptable prior to merging.
>
> @rustbot label: +perf-regression-triaged

@Kobzol nothing to do, right?

Kobzol Avatar
Kobzol on 2026-03-31 07:57:48 UTC

Yeah.

👍1