Store `const_eval_raw` results to disk #62166
Conversation
|
@oli-obk Do you know why we only store const eval results which do not result in an error? |
|
@bors try |
I don't remember. I think I just kept the status quo alive and assumed there's a reason. |
|
We do need this for statics, because …

This will increase the amount of cached data, potentially by very large amounts. Every constant that has a trivial representation via …

But I think I can design a perf test that shows that this PR is a perf improvement, though I'm not sure if it's worth it to include it. Essentially it should be:

```rust
// lib.rs of crate `foo`
static FOO: u32 = 42; // do some very complex static computation here
```

```rust
// main.rs of crate `bar`
static BAR: u32 = *&foo::FOO + 1;
```

Reading from a static via an indirection will trigger:

`rust/src/librustc_mir/interpret/memory.rs`, line 394 (at 7e08576)

The only other uses are const evaluation of:

`rust/src/librustc_mir/interpret/place.rs`, line 564 (at 7e08576)

Ok, I just found one use that is odd:

`rust/src/librustc_mir/interpret/operand.rs`, line 525 (at d3e2cec)

That one could use `const_eval` and call `const_to_op` instead of using `const_eval_raw`. Then we could cache … |
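To make the layering being discussed concrete, here is a hedged sketch. The names mirror the queries under discussion, but none of this is rustc's actual API: `const_eval_raw` stands in for raw interpretation (the result this PR proposes to cache to disk), and `const_eval` layers validation on top for callers that need a type-checked value.

```rust
// Hypothetical sketch (not rustc's real query API) of the two layers
// under discussion: a raw evaluation step whose result would be cached
// to disk, and a validated wrapper that most consumers go through.

#[derive(Clone, Debug, PartialEq)]
pub struct RawConst {
    pub bytes: Vec<u8>,
}

#[derive(Clone, Debug, PartialEq)]
pub enum EvalError {
    ValidationFailure,
}

/// Stand-in for `const_eval_raw`: run the interpreter and return the
/// raw memory representation, performing no validation at all.
pub fn const_eval_raw(init: &[u8]) -> RawConst {
    RawConst { bytes: init.to_vec() }
}

/// Stand-in for `const_eval`: evaluate raw, then validate the value.
/// Only validated results are handed out to consumers.
pub fn const_eval(init: &[u8]) -> Result<RawConst, EvalError> {
    let raw = const_eval_raw(init);
    validate(&raw)?;
    Ok(raw)
}

/// Toy validation: reject empty values, standing in for rustc's
/// type-driven validity checks.
fn validate(raw: &RawConst) -> Result<(), EvalError> {
    if raw.bytes.is_empty() {
        Err(EvalError::ValidationFailure)
    } else {
        Ok(())
    }
}
```

Under this split, caching the raw layer means validation cost is only paid by the callers that actually require a validated value.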
|
I did that because (a) converting a … |
|
Ah, turns out that messy …

The argument about consistency remains though, and also I'd expect a performance cost from doing validation there when it is not needed?

Why is this call not a problem? That one cannot be replaced. Right now all "recursive" const eval goes through:

`rust/src/librustc_mir/interpret/place.rs`, line 564 (at d3e2cec) |
That cost will be paid anyway once …

We could avoid it if we removed …

The reason I want to remove it is so that we don't store the … |
Well, except for generic consts (once they exist). But I get what you are saying.
Remove what? I don't understand how that is connected to replacing one of three places where the interpreter calls … |
If we only cache … |
|
@bors try |
|
@bors try |
So you want to reliably use `const_eval_raw` for statics/promoteds, but `const_eval` for constants? I see. I don't like this lack of uniformity, but at least it's not entirely arbitrary -- and I guess there's always some price to pay for performance. Do you think we can set things up in a way such that if we ever get it wrong, we get an ICE? That would put me more at ease with this scheme. |
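One way to get the ICE-on-misuse behavior asked about above, sketched with hypothetical names (rustc's real queries and def-kind handling differ): have the validated entry point assert that it is never handed a static or promoted, so a wrong call path fails loudly instead of silently doing redundant work.

```rust
// Hypothetical sketch of the "ICE if we get it wrong" idea; the enum
// and function signatures are illustrative, not rustc's.

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum DefKind {
    Const,
    Static,
    Promoted,
}

/// Raw evaluation: the only legal entry point for statics/promoteds,
/// and also usable for constants.
pub fn const_eval_raw(kind: DefKind, value: u32) -> u32 {
    let _ = kind; // every def kind is acceptable here
    value
}

/// Validated evaluation: for constants only. Calling it on a static or
/// promoted is a compiler bug, surfaced as a panic (rustc's ICE).
pub fn const_eval(kind: DefKind, value: u32) -> u32 {
    assert!(
        kind == DefKind::Const,
        "const_eval invoked on {:?}; use const_eval_raw instead",
        kind
    );
    const_eval_raw(kind, value)
}
```

The assertion costs nothing on the happy path and turns the non-uniform convention into something the compiler itself enforces.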
|
💥 Test timed out |
|
☔ The latest upstream changes (presumably #59722) made this pull request unmergeable. Please resolve the merge conflicts. |
|
@bors try |
|
⌛ Trying commit 77c0af6 with merge 6ae40628b538d3adb52b1770f7eb908a19255148... |
|
💔 Test failed - checks-azure |
|
@bors try |
|
⌛ Trying commit 77c0af6 with merge a60befa583852dee37603e3bd00fd04ff2870b54... |
|
@rust-timer build a60befa583852dee37603e3bd00fd04ff2870b54 |
|
Success: Queued a60befa583852dee37603e3bd00fd04ff2870b54 with parent 765eebf, comparison URL. |
|
💥 Test timed out |
|
Finished benchmarking try commit a60befa583852dee37603e3bd00fd04ff2870b54, comparison URL. |
Derive which queries to save using the proc macro

Based on rust-lang#62166.

r? @eddyb
Based on #59722.
r? @oli-obk