It’s been a while; time for a catch-up!
June and July I mostly spent cleaning up Shark. HotSpot’s existing JITs, client and server, both inline pointers to objects in the native code they generate. These pointers need to be visible to the garbage collector, both so it knows the objects are live and so it can rewrite the pointers if it moves the objects. This is trivial for client and server, as they both have access to the native code they generate: each method’s code is accompanied by a list of the pointer locations within it. Shark, on the other hand, has no access to its generated native code beyond knowing its address and size. Pointers can’t be inlined in Shark, because it can’t tell the garbage collector where they are, so Shark had to load all garbage-collected object pointers from other places, generally wherever the interpreter stored them.
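For the curious, here’s a minimal sketch of what that “list of pointer locations” buys the client and server JITs. The names and types are invented (loosely modelled on HotSpot’s oops_do convention; the real thing involves nmethods, oop maps and relocation records), but the shape of the idea is this:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

struct Object { /* a garbage-collected object */ };

struct CompiledMethod {
    unsigned char*           code;         // the generated native code
    std::vector<std::size_t> oop_offsets;  // byte offsets of embedded pointers

    // GC pass: visit, and possibly rewrite, every pointer embedded in
    // the code. The visitor marks the object live and returns its
    // (possibly new) address.
    template <typename Visitor>
    void oops_do(Visitor visit) {
        for (std::size_t off : oop_offsets) {
            Object* obj;
            std::memcpy(&obj, code + off, sizeof obj);    // read embedded pointer
            Object* moved = visit(obj);                   // mark live / relocate
            std::memcpy(code + off, &moved, sizeof moved); // patch the code
        }
    }
};
```

Because Shark couldn’t supply such a list, it had no way to let the collector do that walk.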
This caused no end of problems. Aside from requiring more loads than the other JITs (at least one per object, and sometimes three or four), Shark had to mirror huge chunks of the interpreter. It had to cope with objects that were loaded in the VM (so the compiler could see them) but not yet cached in the interpreter (where the compiled code could see them). Because Shark wasn’t behaving like the other compilers, HotSpot’s compiler support layer would break in all kinds of exciting and imaginative ways. Finally, Shark couldn’t use the method the server JIT uses to optimize interface calls into virtual calls and virtual calls into direct calls. Aside from the obvious speedup from the calls themselves, Shark can only inline direct calls, so reducing virtual and interface calls to direct calls exposes them to the inliner; the sketch below illustrates the idea. Calls in Shark have a lot of overhead, so this would have been a big win.
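As a rough C++ analogy of what that optimization achieves (HotSpot actually drives this with class-hierarchy analysis and profiling of loaded classes; the names here are invented):

```cpp
#include <cstdio>

struct List { virtual int size() const = 0; virtual ~List() = default; };

struct ArrayList : List {
    int n = 42;
    int size() const override { return n; }
};

// Virtual (or interface) call: indirect, and opaque to the inliner.
int via_virtual(const List& l) { return l.size(); }

// If the compiler can prove the receiver's exact type, say because
// ArrayList is the only loaded implementor, the call becomes direct,
// and a direct call is something the inliner can swallow whole.
int via_direct(const ArrayList& a) { return a.ArrayList::size(); }

int main() {
    ArrayList a;
    std::printf("%d %d\n", via_virtual(a), via_direct(a));
}
```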
Sometime in May I figured out how to fake inlined object pointers. HotSpot’s compiler interface expects the compiler to generate native code into a CodeBuffer. Shark, of course, uses LLVM, which generates code into a buffer it allocates itself. Shark had a HotSpot code buffer, but it didn’t do a lot with it. Now, every time Shark has an object pointer to inline, it writes it into the HotSpot code buffer, where the garbage collector can see it. The generated code then loads the object pointer from the code buffer whenever it needs it. The pointer is still not truly inlined (a load is still required) but now it’s always exactly one load. Not a big speedup in itself, but it meant the remaining interpreterisms could be removed, which fixed the support layer breakages and allowed me to copy the interface-to-virtual and virtual-to-direct call optimization code more or less directly from the server compiler. Everything got a lot more stable, a lot cleaner, and a little bit faster in the bargain.
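Here’s a minimal sketch of the mechanism. The names are invented, and Shark’s real code works in terms of HotSpot’s CodeBuffer and oops rather than these stand-in types, but it shows why the trick satisfies both the garbage collector and the generated code:

```cpp
#include <deque>

struct Object { /* a garbage-collected object */ };

struct SharkCodeBuffer {
    // std::deque keeps element addresses stable across push_back, so
    // the generated code can safely bake in a slot's address.
    std::deque<Object*> oop_slots;

    // Compile time: stash the pointer and return the slot's address,
    // which the LLVM-generated code will load from at runtime.
    Object** inline_oop(Object* obj) {
        oop_slots.push_back(obj);
        return &oop_slots.back();
    }

    // GC time: the slots are plain memory, so marking objects live and
    // rewriting moved pointers are simple walks over this list.
    template <typename Visitor>
    void oops_do(Visitor visit) {
        for (Object*& slot : oop_slots)
            slot = visit(slot);
    }
};

// What the generated code does at runtime: always exactly one load.
inline Object* load_inlined_oop(Object** slot) { return *slot; }
```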
During August I began the (long!) process of preparing Zero for submission to OpenJDK proper. It took some time to get started, but the patch has now gone through a couple of cycles of being reviewed by the HotSpot team: the code has been reformatted, the build system has been almost completely rewritten, and a bunch of other things got changed. It’s still ongoing, but the HotSpot part of the patch seems close to acceptance and the much smaller remainder will hopefully be reviewed soon. I’ve been ramping up my testing with each step: this one bootstrapped and built itself on 32-bit x86, x86_64 and 32-bit PowerPC, and has bootstrapped itself and is in the process of building itself on 64-bit PowerPC and 64-bit zSeries.
Also in August, Ed Nevill released his assembler interpreter for ARM. It replaces part of Zero with hand-crafted assembly language, making OpenJDK 2-8 times faster on that platform.
After the Zero patch is accepted, my next task will be getting Zero certified on 64-bit zSeries. I won’t have a lot of time for Shark until that’s done, but I have one last thing I want to do before I step aside for a couple of months. Xerxes Rånby posted some benchmarks of Zero, Shark, and the assembler interpreter on ARM; Shark is gratifyingly faster than everything on five of the tests, but considerably slower than the assembler interpreter on the other four. I’m not happy with that!
On the tests where it’s slower, Shark shows very little improvement over Zero, which suggests those benchmarks are not spending much time interpreting bytecode (which Shark would have compiled and made faster). I suspect they are spending a lot of time in JNI calls. Back in February, Ed Nevill posted some profiles he had made to figure out why certain interpreter improvements of his had had very little effect; those profiles seemed to imply that the VM was spending a lot of its time setting up JNI calls. Zero uses libffi for this, and we at Red Hat have long suspected that libffi is slow.
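To illustrate the suspicion, here’s a self-contained sketch of the generic libffi calling sequence, much simplified from what Zero actually does (the real code also deals with the JNIEnv pointer, JNI handles, thread state and so on):

```cpp
#include <ffi.h>
#include <cstdio>

static int native_add(int a, int b) { return a + b; }

int main() {
    // Describe the signature; Zero can cache this per method...
    ffi_cif cif;
    ffi_type* arg_types[] = { &ffi_type_sint, &ffi_type_sint };
    if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 2, &ffi_type_sint, arg_types)
            != FFI_OK)
        return 1;

    // ...but every single call still funnels its arguments through this
    // generic pointer array and libffi's runtime marshalling, which
    // packs them into registers and stack slots on the fly.
    int a = 2, b = 3;
    void* arg_values[] = { &a, &b };
    ffi_sarg result;
    ffi_call(&cif, FFI_FN(native_add), &result, arg_values);
    std::printf("%d\n", (int)result);   // prints 5
}
```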
HotSpot’s JITs have the capability to “compile” JNI methods. This sounds odd, as JNI methods are already native code; what’s actually getting compiled is the interface between the JVM and the native JNI code. If Shark could compile JNI methods, then whenever HotSpot found a hot JNI method it could replace the generic, one-size-fits-all interface code (using libffi) with an LLVM-generated interface custom-built for that method, as sketched below. I’m going to spend a week or so making Shark able to compile these methods, before I descend into zSeries TCK hell…
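By way of illustration, here’s what such a compiled interface boils down to. All the names are invented, and a real wrapper would also handle thread-state transitions, JNI handle creation for object arguments, and synchronization; the point is simply that the signature is fixed at compile time:

```cpp
// Stand-in for a JNI method's native code (names are hypothetical).
extern "C" int Java_Foo_add(void* env, void* clazz, int a, int b) {
    (void)env; (void)clazz;
    return a + b;
}

// What an LLVM-generated, per-method wrapper boils down to: no cif,
// no argument-pointer array, no generic marshalling loop, just a
// direct call with the arguments flowing straight through in the
// platform's usual registers.
extern "C" int wrapper_Foo_add(void* env, void* clazz, int a, int b) {
    return Java_Foo_add(env, clazz, a, b);
}
```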