rustc_expand: improve diagnostics for non-repeatable metavars rust-lang/rust#154014
r? @adwinwhite
rustbot has assigned @adwinwhite.
They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.
Use r? to explicitly pick a reviewer
Why was this reviewer chosen?
The reviewer was selected based on:
- Owners of files modified in this PR: compiler (expanded to 69 candidates)
- Random selection from 15 candidates
The job pr-check-2 failed! Check out the build log: (web) (plain enhanced) (plain)
Possible cause of the failure (guessed by this bot):
Checking rustc_symbol_mangling v0.0.0 (/checkout/compiler/rustc_symbol_mangling)
error[E0621]: explicit lifetime required in the type of `rules`
--> compiler/rustc_expand/src/mbe/macro_rules.rs:355:1
|
355 | #[instrument(skip(cx, transparency, arg, rules))]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| |
| lifetime `'cx` required
| in this attribute macro expansion
|
::: /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tracing-attributes-0.1.30/src/lib.rs:566:1
|
566 | / pub fn instrument(
567 | | args: proc_macro::TokenStream,
568 | | item: proc_macro::TokenStream,
569 | | ) -> proc_macro::TokenStream {
| |____________________________- in this expansion of `#[instrument]`
|
help: add explicit lifetime `'cx` to the type of `rules`
|
364 | rules: &'cx [MacroRule],
| +++
For more information about this error, try `rustc --explain E0621`.
[RUSTC-TIMING] rustc_expand test:false 1.445
error: could not compile `rustc_expand` (lib) due to 1 previous error
The job x86_64-gnu-miri failed! Check out the build log: (web) (plain enhanced) (plain)
Possible cause of the failure (guessed by this bot):
Compiling matchers v0.2.0
error[E0621]: explicit lifetime required in the type of `rules` (same error as in the pr-check-2 log above)
[RUSTC-TIMING] rustc_expand test:false 1.329
error: could not compile `rustc_expand` (lib) due to 1 previous error
@bors try @rust-timer queue
Awaiting bors try build completion.
@rustbot label: +S-waiting-on-perf
⌛ Trying commit 35d11f3 with merge c7819c5…
To cancel the try build, run the command @bors try cancel.
Workflow: https://github.com/rust-lang/rust/actions/runs/23450743953
Queued c7819c5 with parent 13e2aba, future comparison URL.
There is currently 1 preceding artifact in the queue.
It will probably take at least ~1.2 hours until the benchmark run finishes.
Finished benchmarking commit (c7819c5): comparison URL.
Overall result: ❌ regressions - please read the text below
Benchmarking this pull request means it may be perf-sensitive – we'll automatically label it not fit for rolling up. You can override this, but we strongly advise not to, due to possible changes in compiler perf.
Next Steps: If you can justify the regressions found in this try perf run, please do so in sufficient writing along with @rustbot label: +perf-regression-triaged. If not, please fix the regressions and do another perf run. If its results are neutral or positive, the label will be automatically removed.
@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression
Instruction count
Our most reliable metric. Used to determine the overall result above. However, even this metric can be noisy.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.2% | [0.2%, 0.2%] | 6 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.2% | [0.2%, 0.2%] | 6 |
Max RSS (memory usage)
This benchmark run did not return any relevant results for this metric.
Cycles
Results (secondary -0.5%)
A less reliable metric. May be of interest, but not used to determine the overall result above.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 1.9% | [1.9%, 1.9%] | 1 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -2.9% | [-2.9%, -2.9%] | 1 |
| All ❌✅ (primary) | - | - | 0 |
Binary size
Results (primary 0.1%, secondary 0.1%)
A less reliable metric. May be of interest, but not used to determine the overall result above.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.1% | [0.0%, 0.1%] | 55 |
| Regressions ❌ (secondary) | 0.1% | [0.0%, 0.2%] | 24 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.1% | [0.0%, 0.1%] | 55 |
Bootstrap: 482.731s -> 482.467s (-0.05%)
Artifact size: 396.84 MiB -> 396.93 MiB (0.02%)
The regressions here are all in html5ever. I think this should be fine.
@Unique-Usman could you provide a description, a link to the previous PR and squash the commits?
@estebank, I will do that. Thanks.
@bors try @rust-timer queue
getting some confirmation on the html5ever incr regression
Awaiting bors try build completion.
@rustbot label: +S-waiting-on-perf
⌛ Trying commit ed4a1f8 with merge 9837802…
To cancel the try build, run the command @bors try cancel.
Workflow: https://github.com/rust-lang/rust/actions/runs/23504878841
Queued 9837802 with parent 0312931, future comparison URL.
There are currently 0 preceding artifacts in the queue.
It will probably take at least ~1.0 hours until the benchmark run finishes.
Finished benchmarking commit (9837802): comparison URL.
Overall result: ❌ regressions - please read the text below
Benchmarking this pull request means it may be perf-sensitive – we'll automatically label it not fit for rolling up. You can override this, but we strongly advise not to, due to possible changes in compiler perf.
Next Steps: If you can justify the regressions found in this try perf run, please do so in sufficient writing along with @rustbot label: +perf-regression-triaged. If not, please fix the regressions and do another perf run. If its results are neutral or positive, the label will be automatically removed.
@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression
Instruction count
Our most reliable metric. Used to determine the overall result above. However, even this metric can be noisy.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.3% | [0.3%, 0.3%] | 6 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.3% | [0.3%, 0.3%] | 6 |
Max RSS (memory usage)
Results (secondary -2.5%)
A less reliable metric. May be of interest, but not used to determine the overall result above.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -2.5% | [-2.5%, -2.5%] | 1 |
| All ❌✅ (primary) | - | - | 0 |
Cycles
Results (secondary -0.5%)
A less reliable metric. May be of interest, but not used to determine the overall result above.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 3.0% | [2.9%, 3.1%] | 2 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -2.9% | [-4.1%, -2.1%] | 3 |
| All ❌✅ (primary) | - | - | 0 |
Binary size
Results (primary 0.1%, secondary 0.1%)
A less reliable metric. May be of interest, but not used to determine the overall result above.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.1% | [0.0%, 0.1%] | 54 |
| Regressions ❌ (secondary) | 0.1% | [0.0%, 0.1%] | 22 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.1% | [0.0%, 0.1%] | 54 |
Bootstrap: 484.31s -> 483.312s (-0.21%)
Artifact size: 394.79 MiB -> 396.88 MiB (0.53%)
It consistently affects html5ever incr builds a little bit (we are doing more after all), but I think the output improvement is worth it.
@bors r+
⌛ Testing commit ed4a1f8 with merge 1174f78...
Workflow: https://github.com/rust-lang/rust/actions/runs/23568957883
What is this?
This is an experimental post-merge analysis report that shows differences in test outcomes between the merged PR and its parent PR. Comparing 80d0e4b (parent) -> 1174f78 (this PR).
Test differences
8 test diffs:
Stage 1
- [ui] tests/ui/macros/typo-in-norepeat-expr-2.rs: [missing] -> pass (J2)
- [ui] tests/ui/macros/typo-in-norepeat-expr.rs: [missing] -> pass (J2)
Stage 2
- [ui] tests/ui/macros/typo-in-norepeat-expr-2.rs: [missing] -> pass (J0)
- [ui] tests/ui/macros/typo-in-norepeat-expr.rs: [missing] -> pass (J0)
- [run-make] tests/run-make/compressed-debuginfo-zstd: ignore (ignored if LLVM wasn't build with zstd for ELF section compression or LLVM is not the default codegen backend) -> pass (J1)
Additionally, 3 doctest diffs were found. These are ignored, as they are noisy.
Job group index
- J0: aarch64-apple, aarch64-gnu, aarch64-gnu-llvm-21-1, aarch64-msvc-1, arm-android, armhf-gnu, dist-i586-gnu-i586-i686-musl, i686-gnu-1, i686-gnu-nopt-1, i686-msvc-1, optional-x86_64-gnu-parallel-frontend, test-various, x86_64-gnu, x86_64-gnu-debug, x86_64-gnu-gcc, x86_64-gnu-llvm-21, x86_64-gnu-llvm-21-2, x86_64-gnu-llvm-22-2, x86_64-gnu-nopt, x86_64-gnu-stable, x86_64-mingw-1, x86_64-msvc-1
- J1: x86_64-gnu-nopt
- J2: x86_64-gnu-llvm-21-3, x86_64-gnu-llvm-22-3
Test dashboard
Run

```
cargo run --manifest-path src/ci/citool/Cargo.toml -- \
    test-dashboard 1174f784096deb8e4ba93f7e4b5ccb7bb4ba2c55 --output-dir test-dashboard
```

and then open `test-dashboard/index.html` in your browser to see an overview of all executed tests.
Job duration changes
- pr-check-1: 45m 27s -> 28m 4s (-38.2%)
- x86_64-gnu-tools: 1h 22m -> 52m 53s (-36.2%)
- pr-check-2: 56m 16s -> 38m 39s (-31.3%)
- x86_64-rust-for-linux: 1h 8m -> 47m 12s (-30.8%)
- x86_64-gnu-gcc: 1h 26m -> 1h 2m (-27.6%)
- dist-aarch64-apple: 2h 13m -> 1h 37m (-27.0%)
- dist-arm-linux-musl: 2h 11m -> 1h 37m (-25.7%)
- dist-android: 36m 6s -> 27m 3s (-25.1%)
- armhf-gnu: 1h 53m -> 1h 26m (-24.3%)
- x86_64-gnu-miri: 1h 52m -> 1h 26m (-23.3%)
How to interpret the job duration changes?
Job durations can vary a lot, based on the actual runner instance
that executed the job, system noise, invalidated caches, etc. The table above is provided
mostly for t-infra members, for simpler debugging of potential CI slow-downs.
Finished benchmarking commit (1174f78): comparison URL.
Overall result: ❌✅ regressions and improvements - please read the text below
Our benchmarks found a performance regression caused by this PR.
This might be an actual regression, but it can also be just noise.
Next Steps:
- If the regression was expected or you think it can be justified, please write a comment with sufficient written justification, and add @rustbot label: +perf-regression-triaged to it, to mark the regression as triaged.
- If you think that you know of a way to resolve the regression, try to create a new PR with a fix for the regression.
- If you do not understand the regression or you think that it is just noise, you can ask the @rust-lang/wg-compiler-performance working group for help (members of this group were already notified of this PR).
@rustbot label: +perf-regression
cc @rust-lang/wg-compiler-performance
Instruction count
Our most reliable metric. Used to determine the overall result above. However, even this metric can be noisy.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.2% | [0.1%, 0.3%] | 7 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.1% | [-0.1%, -0.1%] | 1 |
| Improvements ✅ (secondary) | -0.2% | [-0.2%, -0.2%] | 1 |
| All ❌✅ (primary) | 0.2% | [-0.1%, 0.3%] | 8 |
Max RSS (memory usage)
This benchmark run did not return any relevant results for this metric.
Cycles
Results (secondary -0.8%)
A less reliable metric. May be of interest, but not used to determine the overall result above.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 2.2% | [2.2%, 2.2%] | 1 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -3.9% | [-3.9%, -3.9%] | 1 |
| All ❌✅ (primary) | - | - | 0 |
Binary size
This benchmark run did not return any relevant results for this metric.
Bootstrap: 481.885s -> 483.566s (0.35%)
Artifact size: 395.19 MiB -> 395.05 MiB (-0.04%)
Small regression on html5ever, it was deemed to be acceptable prior to merging.
@rustbot label: +perf-regression-triaged
@Kobzol nothing to do, right?
There was an initially opened PR that solved this issue: #152679. It got merged, but there was a perf regression, and this new PR was opened to address the problem. The first PR computed `binding` and `matched_rule` eagerly and passed them as owned values down to `diagnostics::emit_frag_parse_err`; this PR instead passes `lhs` and `rules` as borrowed values to `from_tts` and moves that logic into `diagnostics::emit_frag_parse_err`. Fix #47452.