I always plan to write about what I’ve been doing at the end of the day, but I never quite get round to it. Usually my head is too full of code, or something. So I end up with a piece of paper with scribble all over it to write about in the morning. Here’s yesterday’s notes.

  1. There are two VMs, client and server. When you build the C++ interpreter, it’s the server VM it gets built into.
  2. The templater that makes the stubs has some magic to support multi-model architectures, i.e. ones with both 32-bit and 64-bit data models. To generate the stubs for ppc and ppc64 you run the templater like this:
    python porter/generate.py ppc

    Whether a ppc64-specific file is generated depends on a template’s filename. If it has CPU in it, you get ppc only; if it has CPUS you get ppc and ppc64. So from the template:

    porter/hotspot/src/cpu/CPU/vm/assembler_CPU.cpp

    you get:

    hotspot/src/cpu/ppc/vm/assembler_ppc.cpp

    but from the template:

    porter/hotspot/build/linux/makefiles/CPUS.make

    you get:

    hotspot/build/linux/makefiles/ppc.make
    hotspot/build/linux/makefiles/ppc64.make

    It’s clunky but it works.

  3. This is possibly obvious, but sneaky too. Everything in a CPU template is evaluated once, for the main cpu (the one you specify on the command line). A consequence of this is that the templater function is64bit(cpu) will always return False if it’s in a CPU template. You need to use other methods to conditionalize this, #ifdef _LP64 for example.
  4. My bootstrap environment is… interesting. XSLT with IcedTea’s gij/ecj combo produces mangled jvmti headers on ppc, but the IBM JDK from RHEL5 doesn’t understand version 50 class files. So I’m using the IBM JDK with IcedTea’s javac wrapper.
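
The CPU/CPUS naming rule in point 2 comes down to a couple of string replacements. Here’s a minimal sketch in Python (the function and its details are my reconstruction, not the real porter/generate.py):

```python
def expand_template(path, cpu, cpu64):
    """Map a template path to the output path(s) it generates.

    Sketch of the rule: CPUS templates are instantiated for both
    word sizes, CPU templates for the main cpu only.
    """
    out = path.removeprefix("porter/")
    if "CPUS" in out:   # check CPUS first: "CPU" is a substring of it
        return [out.replace("CPUS", c) for c in (cpu, cpu64)]
    if "CPU" in out:
        return [out.replace("CPU", cpu)]
    return [out]

print(expand_template("porter/hotspot/build/linux/makefiles/CPUS.make",
                      "ppc", "ppc64"))
# ['hotspot/build/linux/makefiles/ppc.make',
#  'hotspot/build/linux/makefiles/ppc64.make']
```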

Ok, I just saw this: “Note that ARCH_DATA_MODEL is really only needed on Solaris to indicate you want to built the 64-bit version. And before the Solaris 64-bit binaries can be used, they must be merged with the binaries from a separate 32-bit build. The merged binaries may then be used in either 32-bit or 64-bit mode, with the selection occurring at runtime with the -d32 or -d64 options.”

I was away all last week on a training course where I caught a cold despite sneaking off between labs for yoga breaks. Now I’m sat here feeling like death and trying to figure out where I left off.

Ok, where I left off was that I finally made enough stubs for the build to finish building Hotspot and to start failing on the class library instead. I had been about to throw myself into the class library failures, but I found out that ppc64 is not the sleek modern version of ppc but rather its fat ugly sister so I rearranged the build system to favour ppc instead.

It’s quite neat. There’s an environment variable, ARCH_DATA_MODEL, which allows 32-bit VMs to be built on amd64 and 64-bit VMs to be built on sparc. It flips ARCH — though not everywhere! — to i486 (on amd64 with ARCH_DATA_MODEL=32) and to sparcv9 (on sparc with ARCH_DATA_MODEL=64). I think it’s only used on Solaris at present, but the best bit for me is that there is no sparcv9 directory — it shares with sparc. I fixed it so ppc/ppc64 and s390/s390x do the same thing, so you’ll get a ppc64/s390x build if you set ARCH_DATA_MODEL=64 and a ppc/s390 build otherwise. So now I only have three sets of stubs to worry about, not five.
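
The flip amounts to a small lookup keyed on (arch, data model). A hedged sketch (the table and function are illustrative; the real decision lives in the build makefiles, not in Python):

```python
# Illustrative sketch of the ARCH flip described above.
FLIPS = {
    ("amd64", "32"): "i486",     # 32-bit build on amd64
    ("sparc", "64"): "sparcv9",  # 64-bit build on sparc
    ("ppc",   "64"): "ppc64",    # my addition
    ("s390",  "64"): "s390x",    # my addition
}

def effective_arch(arch, arch_data_model):
    """Return the ARCH the build should use for a given data model."""
    return FLIPS.get((arch, arch_data_model), arch)

print(effective_arch("ppc", "64"))  # ppc64
print(effective_arch("ppc", "32"))  # ppc
```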

I have mixed feelings about this portable interpreter work at the moment. On the one hand it’s going great: every day feels like I’m progressing faster and faster, partly because I’ve written tools to do the awkward bits and partly because now when I get build errors I have a much better idea of where to fix them. But on the other hand it’s a little daunting, because it’s becoming clear that some big things have changed since it was last used and the lack of revision history means I can’t just go back and see how they were changed in the JITs. So I don’t know what to think.

One cool thing is that before I started trying to build the portable interpreter I toyed with the idea of implementing the JVM interface in a free VM and using it as a replacement for libjvm.so — and it turns out twisti is doing just that! So my “maybe this would work” fallback has turned into a “this will work” without me lifting a finger!

These past few days I’ve been wrestling with a minor flaw in my porting plan, namely that some files reference not methods but constants, and I have no idea what values to assign to them. I’ve been picking random values to get the build to progress, but that’s just storing up hard-to-debug errors for when I finally get to run the thing. But I finally figured out how to do this properly:

  • Every time make complains about a missing file, touch it.
  • Every time something complains about a missing method, stub it with something that throws an error.
  • Every time something complains about a missing constant, create one randomly but mark it with XXX EVIL.

Then:

  • Remove all evil constants, and stub everything that needed them with the same error-throwing thing as before.
  • Every time a stub gets hit, write the real method.
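
The first cleanup step is mechanical enough to script: find every line carrying the XXX EVIL marker. A throwaway sketch (the function name and sample input are made up; only the marker string comes from the plan above):

```python
def find_evil_constants(text):
    """Return (line_number, line) for every line marked XXX EVIL."""
    return [(n, line)
            for n, line in enumerate(text.splitlines(), 1)
            if "XXX EVIL" in line]

sample = ("static const int frame_pad = 42;  // XXX EVIL\n"
          "static const int real_one = 8;\n")
print(find_evil_constants(sample))
# [(1, 'static const int frame_pad = 42;  // XXX EVIL')]
```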

I’m glad I figured this one out; it’s been bugging me.

Yesterday I made a templater to generate machine-specific files for OpenJDK. I’m not 100% convinced that all the files make says are missing are necessary — the portable interpreter is old code, and it’s possible there are files listed as dependencies that are only required for JITs — and I don’t intend to write assembler that isn’t used, so the way I plan to proceed is:

  • Every time make complains about a missing file, touch it.
  • Every time something complains about a missing method or whatever, stub it.
  • Every time a stub gets hit, write the real method.
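
The first step of that loop (touch whatever make says is missing) can even be automated. A rough sketch, assuming GNU make’s “No rule to make target” wording; the helper and its behaviour are entirely hypothetical:

```python
import pathlib
import re
import subprocess

# GNU make quotes the missing target as `foo' (old style) or 'foo'.
MISSING = re.compile(r"No rule to make target [`']([^']+)'")

def touch_until_stable(max_rounds=50):
    """Run make repeatedly, touching each missing file it reports.

    Stops when make succeeds, when the failure is something other
    than a missing file, or after max_rounds attempts.
    """
    for _ in range(max_rounds):
        proc = subprocess.run(["make"], capture_output=True, text=True)
        if proc.returncode == 0:
            return True
        m = MISSING.search(proc.stderr)
        if not m:
            return False  # a different failure: stop and stub by hand
        path = pathlib.Path(m.group(1))
        path.parent.mkdir(parents=True, exist_ok=True)
        path.touch()      # create an empty stub file
    return False
```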

The great thing about this is that the first two steps can be done in the templater. Nominally I’m only supposed to be getting it running on ppc, but there are four other platforms that we care about, and others that other distributions want. The idea is that the longer I can keep my work generic, the easier individual ports will be.

Today I hacked the OpenJDK build system so that it recognises when it’s running on a platform without a JIT and enables the portable interpreter. Now it’s at the point where the build fails because the platform-dependent flags files globals_$cpu.hpp and globals_linux_$cpu.hpp don’t exist, so I’ve been dredging through the existing ones to see what my new ones need. There’s some stuff in there which I’m tempted to rationalize, but I’m going to leave it alone in the interests of keeping my patch simple. I figure keeping tangential stuff out is the path to easy patch acceptance when the time comes.

Now I’m back at work I can get down to some OpenJDK stuff in earnest. I grabbed myself the task of getting it up and running on the architectures that we support but Sun don’t.

The deal is that each processor that Sun supports has a JIT, and at the core of each JIT is an architecture description file, 10,000 lines or so of pseudo-assembler. Writing even one wouldn’t be an option for me, and I have five platforms to cover. But it turns out that Sun did an ia64 port way back when, and because ia64 assembler is wacky they wrote an interpreter in C++ instead. They dropped ia64 for Java 5, but the interpreter is still in the codebase and it looks like the build scripts can build it. Sort of. I mean, I guess it’s not been touched since 2004, but I live in hope.

Anyway, dwmw2 lent me a big old ppc64 box, so I’m trying to bootstrap OpenJDK on it. I’ll post some patches soon, probably Monday…