
12.3 Human Competitive Results: The Humies

Getting machines to produce human-like results is the very reason for the existence of the fields of artificial intelligence and machine learning. However, it has always been very difficult to assess how much progress these fields have made towards their ultimate goal. Alan Turing understood that, in order to avoid human biases when assessing machine intelligence, machine behaviour must be evaluated objectively. This led him to propose an imitation game, now known as the Turing test (Turing, 1950). Unfortunately, the Turing test is not usable in practice, and so there is a need for more workable objective tests of machine intelligence.

Koza, Bennett, and Stiffelman (1999) suggested shifting attention from the notion of intelligence to the notion of human competitiveness. A result cannot acquire the rating of "human competitive" merely because it is endorsed by researchers inside the specialised fields that are attempting to create machine intelligence. A result produced by an automated method must earn the rating of "human competitive" independently of the fact that it was generated by an automated method.

Koza proposed that an automatically created result should be considered "human-competitive" if it satisfies at least one of these eight criteria:

  1. The result was patented as an invention in the past, is an improvement over a patented invention or would qualify today as a patentable new invention.
  2. The result is equal to or better than a result that was accepted as a new scientific result at the time when it was published in a peer-reviewed scientific journal.
  3. The result is equal to or better than a result that was placed into a database or archive of results maintained by an internationally recognised panel of scientific experts.
  4. The result is publishable in its own right as a new scientific result, independent of the fact that the result was mechanically created.
  5. The result is equal to or better than the most recent human-created solution to a long-standing problem for which there has been a succession of increasingly better human-created solutions.
  6. The result is equal to or better than a result that was considered an achievement in its field at the time it was first discovered.
  7. The result solves a problem of indisputable difficulty in its field.
  8. The result holds its own or wins a regulated competition involving human contestants (in the form of either live human players or human-written computer programs).

These criteria are independent of, and at arm's length from, the fields of artificial intelligence, machine learning, and GP.

Over the years, dozens of results have passed the human-competitiveness test, many of them obtained before 2004.

In total, Koza and Poli (2005) list 36 human-competitive results. These include 23 cases where GP has duplicated the functionality of a previously patented invention, infringed a previously patented invention, or created a patentable new invention. Specifically, there are fifteen examples where GP has created an entity that either infringes or duplicates the functionality of a previously patented 20th-century invention, six instances where GP has done the same with respect to an invention patented after 1 January 2000, and two cases where GP has created a patentable new invention. The two new inventions are general-purpose controllers that outperform controllers employing tuning rules that have been in widespread use in industry for most of the 20th century.

Many of the pre-2004 results were obtained by Koza. Since 2004, however, a competition termed the Human-Competitive Awards, or the Humies, has been held annually at ACM's Genetic and Evolutionary Computation Conference. Prizes totalling $10,000 are awarded to projects that have produced automatically created results which equal or better those produced by humans.

The Humies prizes have typically been awarded to applications of evolutionary computation to high-tech fields, and many of the winners used GP. For example, the 2004 gold medals were given for the design, via GP, of an antenna for deployment on NASA's Space Technology 5 Mission (see Figure 12.2) (Lohn et al., 2004) and for evolutionary quantum computer programming (Spector, 2004). There were three silver medals in 2004: one for the evolution of local search heuristics for SAT using GP (Fukunaga, 2004), one for the application of GP to the synthesis of complex kinematic mechanisms (Lipson, 2004), and one for organisation design optimisation using GP (Khosraviani, 2003; Khosraviani, Levitt, and Koza, 2004). Four of the 2005 medals were also awarded for GP applications: the invention of optical lens systems (Al-Sakran, Koza, and Jones, 2005; Koza, Al-Sakran, and Jones, 2005), the evolution of a quantum Fourier transform algorithm (Massey, Clark, and Stepney, 2005), the evolution of assembly programs for Core War (Corno, Sanchez, and Squillero, 2005), and various high-performance game players for Backgammon, Robocode, and Chess endgames (Azaria and Sipper, 2005a,b; Hauptman and Sipper, 2005; Shichel, Ziserman, and Sipper, 2005). In 2006, GP again scored a gold medal with the synthesis of interest point detectors for image analysis (Trujillo and Olague, 2006), and in 2007 it scored a silver medal with the evolution of an efficient search algorithm for the Mate-in-N problem in Chess (Hauptman and Sipper, 2007) (see Figure 12.3).


Figure 12.2: Award-winning human-competitive antenna design produced by GP.



Figure 12.3: Example mate-in-2 problem.


Note that many human-competitive results were presented at the 2004-2007 Humies competitions (e.g., 11 of the 2004 entries were judged to be human competitive), but only the very best were awarded medals. So, at the time of writing, we estimate that GP has produced at least 60 human-competitive results. This shows GP's potential as a powerful invention machine.

