Tiny GP Results

Simon Lucas, June 2004

Entries

There were seven entries, and they can be found in this directory.  The programs display a mixture of good clean design, some great ingenuity, and some downright dirty hacks!  They are well worth studying in detail if you're thinking of implementing your own GP system, or just want to learn more tricks of the trade.

The Planatscher entry called smallgp is available on-line.  Readers may also be interested in Christian Gagne's Beagle Puppy - not a Tiny GP entry, but well designed and well worth a look.

The panel of expert judges were:

We are extremely grateful to the judges for the time and effort they expended in making a thorough evaluation of the entries.  Each entry was evaluated by all four judges.

Selecting the Winner

The overall result is that Riccardo Poli is the winner, followed closely by Maarten Keijzer.

Three out of four judges selected Poli as the winner.  Note that one of the entrants (Keijzer) was also a judge (this was allowed, on the basis that we wanted GP implementation experts to be both judges and entrants).  Judges did not judge their own entries, of course.

The average overall scores were 8.5 for Poli and 8.1 for Keijzer, with the next entry at around 7 - so there is a clear gap between the first two and the rest of the bunch.  One of the judges selected AVE as the winner, but AVE's overall score placed it around fifth - some of the judges identified problems with the implementation, as detailed below.

The sections below show the summary comments from each judge, followed by the detailed comments they made on each entry.

I'd like to thank all the entrants for taking part, and especially thank the judges for their valuable time and effort.

Summary Comments

Detailed Comments

-- ave --


sources: 5389 bytes, 214 lines
binary: 6040 bytes, (496+12000+81920)=94416 bytes allocated at runtime

Dirty code. Documentation very slim. Serious doubt about code validity. No xover and mutation probabilities.



no protected division
no effort reported
needs stdio.h to be included for gcc 3.3
no crossover prob

cannot set MAX_LENGTH (keeping DEPTH at 6 makes very short solutions)

Memory management very nifty, eval routine re-use as end-of-tree searcher very cute

not grow initialization, but an interesting alternative, though 1/10 functions?

It is possible to crash the system by running out of the poolsize memory. There's no
mechanism to control this. Poolsize is a function of maximum depth (length) and population
size. This is however not worked out, leading to crashes in edge cases. Poor.
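
For illustration only, the worst case could be bounded up front - a sketch with hypothetical constants and names, not the entry's code, assuming binary functions and the root at depth 0:

    /* Sketch (hypothetical, not the entry's code): size the node pool for the
       worst case.  A binary tree of maximum depth MAX_DEPTH (root at depth 0)
       holds at most 2^(MAX_DEPTH+1)-1 nodes, so covering the whole population
       plus offspring at that size cannot overflow. */
    #define MAX_DEPTH  6
    #define POPSIZE    1000
    #define MAX_NODES  ((1u << (MAX_DEPTH + 1)) - 1)   /* per individual */
    #define POOLSIZE   (2u * POPSIZE * MAX_NODES)      /* parents + offspring */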

Interesting point mutation, though not to spec.

All in all: lots of points for the niftiness of the memory handling and the associated
tricks with eval and such. Lots of minus points for the implementation of the specs.
Big minus point for allowing the system to crash!

Furthermore, getting the system to perform tree manipulations extremely fast is only
marginally useful. Computation time is dominated by evaluation.




1/2 a byte per node. Used long lines to reduce source size, e.g. the definition of SETNODE(). No comments. As readable as possible, given lots of bit twiddling and pointer arithmetic. Smallest object code if you disallow hard-core Linux hacks.
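
For readers unfamiliar with the trick, a minimal sketch of packing two 4-bit node codes per byte (an illustration only, not the entry's actual SETNODE() definition):

    /* Sketch: even indices use the low nibble, odd indices the high nibble. */
    #define GETNODE(buf, i)    (((buf)[(i) >> 1] >> (((i) & 1) << 2)) & 0x0F)
    #define SETNODE(buf, i, v) ((buf)[(i) >> 1] =                             \
            ((buf)[(i) >> 1] & (0xF0 >> (((i) & 1) << 2))) |                  \
            (((v) & 0x0F) << (((i) & 1) << 2)))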

-- devylder --


sources: 10490 bytes, 242 lines
binary: unable to run (no access to Mathematica)

Documentation not very clear. Unable to run the program without Mathematica. The overhead of running in an interpreted environment such as Mathematica makes it hardly a tiny GP system!


The code looks very short and cool. Unfortunately, not owning Mathematica does not
allow me to evaluate anything about runtime performance (memory management) and the like.
Having no experience whatsoever with Mathematica and no access to any runtime environment
precludes me from even finding out how the code does what it does (or whether it does anything).

I think that using a proprietary language excludes this entry from competing, let alone
winning. The winning entry should be --- in my opinion --- usable for anyone, without the
need to buy a software product. Once I can download a Mathematica evaluator for free this
will go away. As it is now, I have to shell out the price of Mathematica to evaluate this
entry.
 



Sorry, I don't use Mathematica.

-- keijzer --

sources: 8787 bytes, 170 lines
binary: 11704 bytes, 64000 bytes allocated at runtime

Clever implementation, although the code formatting is a little dirty. Nice use of the STL.

 


Good use of comments, shortest source file. Uses 1 byte per node, leveraging std::string. Compiles to 12k, the second smallest.
 

-- mcphee --


sources: 153000 bytes, ~2000 lines
binary: 51264 bytes, ~15000000 bytes allocated at runtime

This is not a tiny GP system, but rather a small GP framework. At a quick look the implementation seems really nice overall, but it doesn't really fit the contest. There is too much genericity in the system. Execution in the JVM is also very costly.

Java and tiny? That bites, and Nic shows how that works. Java is one of the fluffiest
languages around: even the simplest functionality requires a new file/class to be
created, complete with lots and lots of declarations.

Furthermore, it seems to me that an OO approach to EC is a misfit. EC is very functional
in nature, and Java inhibits the use of functional programming. Sure, it is possible to program
in a functional style, but this is actively discouraged by the language itself.

Ok, but that's just my beef with Java in general.

The problem with the current approach is that even though the code is set up very nicely, the use of
many files and classes and interfaces and patterns and ... and ... makes it very hard
to keep the entire codebase in your head. It is what I call macaroni code: lots of little short
strings of code that are strung together in some intricate form. The C and C++ entries here are
all short enough to comprehend fully; with the Java code this is much harder. One string of spaghetti
beats tying together tens of pieces of macaroni.

The use of a full-fledged tree class hierarchy is another level of fluff. Many other
implementations use a linear tree structure. Such an approach would be perfectly doable in
Java, but it is not chosen here. This impacts runtime memory consumption immensely, as Java
has some per-object overhead associated with it. Having every node as an Object thus leads
to a very high memory footprint. A linear tree (int[]) with runtime determination of the structure through
a simple loop would have solved this. But alas.
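
To make that point concrete, here is a minimal sketch (not code from the entry) of how the structure of a prefix-ordered linear tree can be recovered with a single loop over node arities:

    /* Sketch: a program stored in prefix order as an int array.  The end of
       the subtree rooted at 'start' is found by counting arities: begin with
       one pending node and add (arity - 1) for every node visited. */
    int arity(int op);                 /* 0 for terminals, 2 for +, -, *, / */

    int subtree_end(const int *prog, int start)
    {
        int pending = 1, i = start;
        while (pending > 0) {
            pending += arity(prog[i]) - 1;
            i++;
        }
        return i;                      /* index one past the subtree */
    }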

Memory consumption: very high. Due to extensive use of allocation, the garbage-collection thread takes
about one third of the application's runtime.

I needed to change the problem.dat that came with it, as the extra space after the "5 " on the first line
caused parseInt to fail. A detail.

 


Sharing of subtrees & caching is very interesting. The memory footprint is pretty large. The author suggests that extracting a nibble out of a byte is time consuming, but I don't see why. (Direct quote: "The trade off, though, is the computation necessary to extract and process nodes in this sort of representation.")
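
For reference, unpacking a nibble is only a shift and a mask - a couple of machine instructions (a sketch, not the entry's code):

    /* Sketch: fetch the i-th 4-bit node code from a packed byte array. */
    unsigned char get_node(const unsigned char *packed, int i)
    {
        /* one shift plus one mask */
        return (packed[i >> 1] >> ((i & 1) ? 4 : 0)) & 0x0F;
    }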

 

-- planatscher --

sources: 11923 bytes, 320 lines
binary: 9180 bytes, 206896 bytes allocated at runtime

Quite big for a tiny GP entry. In general a good implementation. Unsure about the necessity of the genotype/phenotype decoding during interpretation and crossover.
 



executable size: after optimizations, in the 10K ballpark

Interesting use of a dynamically built tree to initialize a linear string of operations;
however, this is not very efficient. It is not difficult (and faster) to directly
manipulate the strings as if they were trees. The extra tree-manipulation/encoding/decoding
code leads to longer source code than strictly necessary. Not a bad idea though.
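
As an illustration of manipulating the strings directly (a sketch, not the entry's code), crossover can splice prefix-encoded strings in place, using an arity-counting loop to find subtree boundaries:

    /* Sketch: swap the subtree of dst starting at d0 for the subtree of src
       starting at s0, on prefix-encoded byte strings.  The caller must make
       sure dst's buffer is large enough for the result. */
    #include <string.h>

    int op_arity(unsigned char op);        /* 0 for terminals, 2 otherwise */

    static int end_of_subtree(const unsigned char *p, int start)
    {
        int pending = 1, i = start;
        while (pending > 0)
            pending += op_arity(p[i++]) - 1;
        return i;
    }

    int splice(unsigned char *dst, int dlen, int d0,
               const unsigned char *src, int s0)
    {
        int d1 = end_of_subtree(dst, d0);
        int s1 = end_of_subtree(src, s0);
        memmove(dst + d0 + (s1 - s0), dst + d1, dlen - d1);
        memcpy(dst + d0, src + s0, s1 - s0);
        return dlen - (d1 - d0) + (s1 - s0);   /* new length of dst */
    }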

 


I like the use of a linear representation (to keep population storage small) and the pointer version (to make execution fast). Largest number of options settable from the command line. 1 byte per node. No comments. Second largest source code size. Executable size 20k.


 

-- poli --

sources: 5905 bytes, 234 lines
binary: 5912 bytes, (48074+48000)=96074 bytes allocated at runtime

The documentation is mostly about the compression hack used to reduce the size; details on the implementation are missing. Otherwise, a clever implementation, minimalist but very clear. The binary size was computed with just 'gcc -Os -s', without the compression hack described in the documentation. In my opinion, this hack was not within the scope of the competition and could have been applied to the other C/C++ implementations. Anyway, even without the compression hack, this entry is still the smallest binary obtained after compilation.
 

This is among the smallest submissions, with quite a small source code size and the smallest binary obtained after compilation (even without the compression hack described in the documentation). The code is pretty comprehensible and straightforward.
 


A combination of shell scripting and C code. Employs gzip to compress the executable,
which is uncompressed on the fly. Great for getting the size of the executable down,
though it still needs an additional 5K of temporary file space to store the uncompressed
executable. This makes the total 7.5K :-)

The source code is small and effective, though not very exciting. I do like the 'traverse' function
though.

Using modulo for random point selection relies on the low-order bits of the random number generator.
Not a good idea in general, though lrand48 does not necessarily have a problem with that.
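
Two common alternatives, for reference (a sketch assuming the lrand48/drand48 interface from <stdlib.h>, not the entry's code):

    #include <stdlib.h>

    /* (1) Scale a uniform double: uses the high-order bits. */
    int pick_point_scaled(int len)
    {
        return (int)(drand48() * len);
    }

    /* (2) Rejection sampling: discard values at or above the largest multiple
       of len, so the final modulo is unbiased.  lrand48() is uniform on
       [0, 2^31). */
    int pick_point_rejection(int len)
    {
        unsigned long limit = (2147483648UL / len) * len;
        long r;
        do { r = lrand48(); } while ((unsigned long)r >= limit);
        return (int)(r % len);
    }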

As for getting the size of the executable down: what is important (for PDAs and the like) is
the total memory footprint of the application. In this case that is 5K for the application, plus several
hundred K for the population. As the memory for the run itself clearly dominates here, the size of the application
hardly matters.

I don't see the rationale for cutting off division at 0.001f. Depending on the value of the numerator
this might be protective or simply restrictive.
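
For comparison, the classic Koza-style protected division returns a fixed value whenever the denominator is (near) zero, independent of the numerator (a sketch, not the code from any entry):

    #include <math.h>

    /* Sketch: protected division returning 1.0 when the denominator is too
       close to zero, rather than clamping the result. */
    float pdiv(float num, float den)
    {
        if (fabsf(den) < 1e-6f)
            return 1.0f;
        return num / den;
    }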

Crossover does not check for maximum length!

 


Use of Linux-specific hacks & compressed object code. Without those, final executable is 12k. Smallest object code by far (even uncompressed). No comments. 1 byte per node.

 

-- pulido --


sources: 4367 bytes, 167 lines
binary: 10136 bytes, (9344+251000)=260344 bytes allocated at runtime
(Increased the MAX_LEN value to 250 to be sure that most trees with depth near 17 fit in the buffer.)

Documentation not very clear; the hints given on the implementation do not help in understanding the internal workings. The variable names in the code don't help in figuring out the algorithms.
 




no protected division

I don't like the use of assert to guard the memory allocation: if one defines NDEBUG,
the code will fail. It would have been better to define a 'verify' function for this
that does not disappear with NDEBUG.
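
A minimal sketch of such a check (hypothetical names, not the entry's code):

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: unlike assert(), this check is not compiled away when NDEBUG
       is defined. */
    #define verify(cond, msg)                              \
        do {                                               \
            if (!(cond)) {                                 \
                fprintf(stderr, "fatal: %s\n", (msg));     \
                exit(EXIT_FAILURE);                        \
            }                                              \
        } while (0)

    /* usage: verify(pool != NULL, "out of memory"); */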

GetS3??? It's the central function, yet the name is nondescriptive. Because it uses
strlen inside the function (it is also the evaluator), the function is highly
inefficient (evaluation becomes quadratic in program size).
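
The usual way around this is to have the evaluator advance a cursor and hand back the position just past the subtree it consumed, so strlen is never needed - a sketch with a toy instruction set (hypothetical, not the entry's code):

    /* Sketch: recursive prefix evaluator over a character-encoded program.
       The cursor *pp is advanced as nodes are consumed, so the caller always
       knows the current position without calling strlen(). */
    double eval(const char **pp, double x)
    {
        char op = *(*pp)++;
        double a, b;
        switch (op) {
        case 'x': return x;
        case '+': a = eval(pp, x); b = eval(pp, x); return a + b;
        case '-': a = eval(pp, x); b = eval(pp, x); return a - b;
        case '*': a = eval(pp, x); b = eval(pp, x); return a * b;
        case '/': a = eval(pp, x); b = eval(pp, x);
                  return (b > -1e-6 && b < 1e-6) ? 1.0 : a / b;
        default:  return (double)(op - '0');   /* digit = toy constant */
        }
    }

    /* usage: const char *p = "+xx"; double v = eval(&p, 3.0);  => 6.0 */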

The function Depth returns the length! It does calculate the depth in its fourth argument,
but this is not really clear, is it?

The code uses strlen as the stop criterion in 'for' loops. A decent compiler can probably
optimize it away, but if it does not, the result is so extremely inefficient that it really devalues the code.
Speed wasn't an objective in this competition, but turning O(n) algorithms into O(n^2) ones is not an optimization issue;
it's a bad choice of algorithm and thus reflects on the code quality.
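
Concretely, hoisting the call out of the loop is all it takes (sketch):

    #include <string.h>

    void scan(const char *s)
    {
        size_t n = strlen(s);          /* computed once, not every iteration */
        for (size_t i = 0; i < n; i++) {
            /* ... work on s[i] ... */
        }
    }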

All in all, this is a very short and relatively high-quality implementation. Unfortunately
it does not follow the specs completely: protected division is implemented differently (replacing
the division with a randomly chosen different function, possibly in the middle of evaluation!).


 


1 byte/node, very small source code. Has a funny (and inefficient) way of returning the current position within the string being evaluated: it requires a call to strlen() instead of returning a pointer.