[Feature Request] Performance Benchmarks in Documentation #496

@jkterry1

Description

So I'm currently trying to do hyperparameter tuning for an RL problem that's kind of a worst-case scenario. It's a 10D space, trials are terribly expensive and noisy, and for 3 hyperparameters (including the 2 most important ones) we really have no idea what the values should be, so the bounds are very large. The last point means I presumably need to use Bayesian or genetic-algorithm methods rather than the more common random/PBT methods.
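For context, here is a minimal sketch of how I'm approaching this with Ax's Service API. The parameter names (`learning_rate`, `gamma`) and the `evaluate_policy` stub are just placeholders for my actual RL setup, and the exact `create_experiment` keyword arguments may differ between Ax versions:

```python
import numpy as np
from ax.service.ax_client import AxClient


def evaluate_policy(params):
    """Placeholder for the expensive, noisy RL training run."""
    mean_return = float(np.random.normal())  # stand-in for the real training result
    sem = 0.1  # standard error of the mean over evaluation episodes
    return mean_return, sem


ax_client = AxClient()
ax_client.create_experiment(
    name="rl_hyperparameter_tuning",
    parameters=[
        # Deliberately wide bounds because the right order of magnitude is unknown.
        {"name": "learning_rate", "type": "range", "bounds": [1e-6, 1e-1], "log_scale": True},
        {"name": "gamma", "type": "range", "bounds": [0.8, 0.9999]},
        # ... 8 more parameters to fill out the 10D space ...
    ],
    objective_name="mean_return",
    minimize=False,
)

for _ in range(200):  # trial budget (see the budget question below)
    parameters, trial_index = ax_client.get_next_trial()
    mean_return, sem = evaluate_policy(parameters)
    # Reporting (mean, SEM) lets Ax model the observation noise explicitly.
    ax_client.complete_trial(trial_index=trial_index, raw_data=(mean_return, sem))

best_parameters, values = ax_client.get_best_parameters()
```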

Because of this I've been specifically looking into Ax and Nevergrad (they appear to be the only open-source, production-grade tools for this). If you already have benchmark results showing how Ax compares to other similar methods for hyperparameter tuning or other tasks, including them in the documentation would be super helpful to new users. For example, I'm currently having to decide between Ax and TBPSA from Nevergrad and can find no discussion anywhere of which approach is superior, even though mine is likely a fairly common use case for your library.
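For comparison, this is roughly what the same problem looks like on the Nevergrad/TBPSA side. Again this is only a sketch with placeholder parameter names and a dummy `run_training` loss; TBPSA is pulled from the optimizer registry, and the parametrization API may differ across Nevergrad versions:

```python
import numpy as np
import nevergrad as ng


def run_training(**hyperparams):
    """Placeholder for the noisy RL training run; returns a loss (lower is better)."""
    return -float(np.random.normal())  # negative mean return as a stand-in


# The same 10D search space, expressed as a Nevergrad parametrization.
parametrization = ng.p.Instrumentation(
    learning_rate=ng.p.Log(lower=1e-6, upper=1e-1),
    gamma=ng.p.Scalar(lower=0.8, upper=0.9999),
    # ... remaining parameters ...
)

# TBPSA comes from the optimizer registry and is aimed at noisy objectives.
optimizer = ng.optimizers.registry["TBPSA"](parametrization=parametrization, budget=200)
recommendation = optimizer.minimize(run_training)
print(recommendation.kwargs)
```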

It would also be helpful to new users to include recommended trial budgets for Ax in certain scenarios, if you have the data. For example, in my case I assume the budget is somewhere between 100 and 1000 trials, but that is a rather large difference in terms of cost.

Metadata

Labels

documentation (Additional documentation requested), enhancement (New feature or request), wishlist (Long-term wishlist feature requests)
