I have the following:
$ java -version
java version "1.6.0_21"
Java(TM) SE Runtime Environment (build 1.6.0_21-b07)
Java HotSpot(TM) 64-Bit Server VM (build 17.0-b17, mixed mode)
$ java32 -version
java version "1.6.0_21"
Java(TM) SE Runtime Environment (build 1.6.0_21-b07)
Java HotSpot(TM) Client VM (build 17.0-b17, mixed mode, sharing)
I run tests on a huge Frege source code file that looks like this:
package Stress where
x = (((( ... ))))    // actually 1 million ( and 1 million )
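For reproducibility, a file like this is easy to generate with a small Java program. This is just a sketch; the output file name `Stress.fr` and the innermost dummy expression `0` are my choices, not taken from any actual test harness (the code sticks to Java 6 syntax to match the JVMs above):

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Arrays;

// Writes a Frege module whose body is N opening parentheses,
// a dummy innermost expression, and N closing parentheses.
public class GenStress {
    public static void main(String[] args) throws IOException {
        final int n = 1000000;              // one million of each paren
        char[] open  = new char[n];
        char[] close = new char[n];
        Arrays.fill(open,  '(');
        Arrays.fill(close, ')');
        BufferedWriter w = new BufferedWriter(new FileWriter("Stress.fr"));
        try {
            w.write("package Stress where\n");
            w.write("x = ");
            w.write(open);
            w.write("0");                   // hypothetical innermost expression
            w.write(close);
            w.write("\n");
        } finally {
            w.close();
        }
    }
}
```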
Running the compiler with java32 -Xmx500m, I get
lexical analysis took 7.333s, 2000007 tokens, 272740 tokens/s
parser (including lexical analysis) took 38.364s, 2000007 tokens (52132 tokens/s)
The 64-bit Java, however, dies with an OutOfMemoryError. With -Xmx1g it reports:
lexical analysis took 8.818s, 2000007 tokens, 226809 tokens/s
parser (including lexical analysis) took 37.910s, 2000007 tokens (52756 tokens/s)
The speedup is barely noticeable. Only with 2 gigabytes does it seem to get better:
...
lexical analysis took 6.33s, 2000007 tokens, 315956 tokens/s
parser (including lexical analysis) took 34.709s, 2000007 tokens (57622 tokens/s)
A 10% speedup, but only when given 4 times the memory! And, by the way, it is seldom the case that one has to compile a 100,000-line Frege source file (assuming 20 tokens per line). Even the parser itself (which is generated from a yacc grammar) has only some 230,000 tokens on 35,000 lines.
On smaller files, the 32-bit client version is unbeatable; the 64-bit server version is actually 2 to 3 times slower.