Conversation
r? @brson
Yeah, sorry, this is still pretty preliminary. You use it like this: the separation is to let you do some scaffolding setup in the pre-iter part, and then have the benchmark tool measure only the speed of the inner loop passed to …
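A minimal sketch of the setup-vs-measured split described above. The `Bencher` type, its `iter` method, and the fixed iteration count here are illustrative stand-ins, not the harness's actual API: the point is only that the pre-iter scaffolding runs once outside the timer, and only the closure handed to the bencher is clocked.

```rust
use std::time::Instant;

// Hypothetical stand-in for the harness's bencher: setup runs once,
// and only the closure passed to `iter` is timed.
struct Bencher {
    ns_per_iter: u64,
}

impl Bencher {
    fn iter<F: FnMut()>(&mut self, mut inner: F) {
        let n: u64 = 1_000; // fixed count for illustration; the real harness calibrates this
        let start = Instant::now();
        for _ in 0..n {
            inner();
        }
        self.ns_per_iter = start.elapsed().as_nanos() as u64 / n;
    }
}

fn bench_sum(b: &mut Bencher) {
    // pre-iter scaffolding: built once, not measured
    let data: Vec<u64> = (0..1_000).collect();
    // only this inner loop is measured
    b.iter(|| {
        let s: u64 = data.iter().sum();
        std::hint::black_box(s); // keep the work from being optimized away
    });
}

fn main() {
    let mut b = Bencher { ns_per_iter: 0 };
    bench_sum(&mut b);
    println!("{} ns/iter", b.ns_per_iter);
}
```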
r+ I am wary about codifying this one type of test harness instead of having an extensible system, but I don't have any concrete suggestions right now for how to get there from here, nor any grand plans to redesign …
Agreed. This is just to get something habitual under way. We can do mindless rewrites to a better harness if someone thinks one up. |
What types of measurements is the bench harness going to record? Just the runtime of each iteration? For runtime, can it measure real, user, and system time?
No, it just measures userspace nanoseconds. The idea is to be portable and reliable -- "too easy to not-use", much like our unit test harness -- not complete or thorough. We have proper profilers for real performance tuning.
This is scaffolding for the new #[bench] attribute for marking unit tests as benchmarks. They are run with the --bench flag that the test runner now accepts. The runner automatically calibrates a test loop to an appropriate count to get a good per-iteration measurement.
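The automatic calibration mentioned above can be sketched roughly as follows. The doubling strategy, the 1 ms threshold, and the function name are illustrative assumptions, not the runner's actual algorithm: grow the iteration count until a single timed run lasts long enough to give a stable per-iteration figure.

```rust
use std::time::Instant;

// Illustrative auto-calibration: double the loop count until one timed run
// lasts at least MIN_RUN_NS, then report nanoseconds per iteration.
const MIN_RUN_NS: u128 = 1_000_000; // 1 ms; the real harness picks its own target

fn ns_per_iter<F: FnMut()>(mut inner: F) -> u128 {
    let mut n: u64 = 1;
    loop {
        let start = Instant::now();
        for _ in 0..n {
            inner();
        }
        let elapsed = start.elapsed().as_nanos();
        if elapsed >= MIN_RUN_NS {
            // run was long enough to trust the clock; report the average
            return elapsed / n as u128;
        }
        n = n.saturating_mul(2); // too short to measure reliably: double and retry
    }
}

fn main() {
    let cost = ns_per_iter(|| {
        std::hint::black_box((0..100u64).sum::<u64>());
    });
    println!("{} ns/iter", cost);
}
```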