This site is the home of a community-led effort to improve experimentation in Genetic Programming (GP).
Although GP has been very successful in many domains, it is sometimes seen as immature with regard to benchmarking practices. Some commonly-used benchmark problems are regarded as "toy problems" that say little about real-world performance. On the other hand, real-world problems are often either too computationally intensive to allow the many runs needed for meaningful comparisons, or require data which is not publicly available.

The aims of the GP Benchmarks project are:
- to point out problems in existing experimental practice;
- to gather opinions from the community on the pros and cons of existing practice and on standardisation;
- if community consensus is in favour, to propose a draft suite of benchmarks which satisfies the most important benchmark desiderata as fully as possible.
If you would like to:
- read more: see our Publications page and Twitter account;
- look at the data: see our Survey;
- join this community effort: sign up to our mailing list.