I’ve been getting my stuff ready for IcedTea this week. I’ve been bootstrapping it with my ecj-bootstrapped b17/b22 hybrid, and so far it’s gone far more smoothly than I expected. The build system changes between b17 and b22 were far less pervasive than I’d thought, and the fact that my hybrid build seems to run ant without a hitch is gobsmacking. But debugging build scripts is slow and tedious — Hotspot bugs are far more fun to fix!

Some more todos for the list:

Slow signature handler
When Hotspot calls a native method it generates a signature handler that takes the arguments from the Java stack and puts them where the native ABI expects them. Signatures with 13 or fewer arguments can be represented as a 64-bit int, but longer signatures won’t fit. These get handled by the slow signature handler, which I haven’t written. This is what’s preventing the important enterprise application Slime Volleyball from running.
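
For the curious, here's a rough sketch of the fingerprint idea in C++ (the names and encoding here are made up, not HotSpot's actual code): each parameter type is packed into a 4-bit slot of a 64-bit word, so only so many parameters fit before you have to fall back to walking the signature string at call time.

```cpp
// Rough sketch of the fingerprint idea -- hypothetical names and
// encoding, not HotSpot's actual code.
#include <cstdint>

enum ArgType { ARG_INT = 1, ARG_LONG, ARG_FLOAT, ARG_DOUBLE, ARG_OBJECT };

const int      MAX_FAST_ARGS    = 13;  // 4-bit slots left in the 64-bit word
const uint64_t SLOW_FINGERPRINT = 0;   // sentinel: use the slow handler

uint64_t fingerprint(const ArgType* args, int count) {
  if (count > MAX_FAST_ARGS)
    return SLOW_FINGERPRINT;           // won't fit: slow signature handler
  uint64_t fp = 1;                     // low slot reserved as a tag
  for (int i = 0; i < count; i++)
    fp |= (uint64_t)args[i] << (4 * (i + 1));
  return fp;
}
```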
call_VM
Calls from the interpreter into the VM should use a call_VM macro. I’ve gotten away without this so far because a) I only have two or three calls, and b) most of what call_VM does on other platforms is not necessary on PPC. But I think I’ll need to write it when I start thinking about relocations and JITs.
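
For anyone following along, this is roughly the shape of what call_VM has to do. The types and names below are hypothetical, since the real thing is code generated by the interpreter macro assembler, but the steps are the same: record where the Java stack ends so the VM can walk it, make the call, then check for a pending exception.

```cpp
// A minimal sketch of the call_VM pattern, with hypothetical types
// and names -- not the real macro assembler code.

struct Thread {
  void* last_Java_sp;        // top of the Java stack, for stack walks
  void* pending_exception;   // set by the VM if the call threw
};

typedef void (*VMEntry)(Thread*);

extern void forward_exception(Thread*);   // hypothetical handler

void call_VM(Thread* thread, VMEntry entry, void* java_sp) {
  thread->last_Java_sp = java_sp;   // let the VM walk our frames
  entry(thread);                    // the actual runtime call
  thread->last_Java_sp = 0;         // back in "pure Java" state
  if (thread->pending_exception)
    forward_exception(thread);      // unwind to the handler
}
```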
JNI_FastGetField
There is some stuff to accelerate JNI field accesses that I haven’t written. It’s disabled in debug builds, which is why I hadn’t noticed it until now.
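
The idea, as far as I understand it, is a speculative fast path, something like the sketch below (with made-up names): read the safepoint counter, load the field directly from the object, then read the counter again; if it changed, a GC might have moved the object, so fall back to the ordinary slow JNI path.

```cpp
// Sketch of the speculative fast path -- hypothetical names, not
// the real JNI_FastGetField code.
#include <cstdint>

extern volatile uint32_t safepoint_counter;               // assumption
extern int32_t slow_GetIntField(void* obj, long offset);  // assumption

int32_t fast_GetIntField(void* obj, long offset) {
  uint32_t before = safepoint_counter;
  if (before & 1)                              // odd: safepoint in progress
    return slow_GetIntField(obj, offset);
  int32_t value = *(int32_t*)((char*)obj + offset);
  if (safepoint_counter != before)             // a safepoint intervened
    return slow_GetIntField(obj, offset);
  return value;
}
```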

4 thoughts

  1. Which call_VM do you not have? The macroassembler/interpreterMacroAssembler one or the C++ interpreter one? Based on the spelling you are using I’m assuming it’s the assembler version. How could exceptions be handled correctly? You must be getting very lucky with where exceptions are being processed, so that the C++ interpreter’s handling is either doing all the work (and it does a large fraction) or some exception paths are going wrong and aren’t noticed. I think you’d probably find that the JDK build is a more benign environment than, say, the JCK.

  2. I have neither. But in my code so far there are only three places which should (maybe) use it: two calls to InterpreterRuntime::prepare_native_call and one to JavaThread::check_special_condition_for_native_trans. I say “maybe” because they’re all inside the native entry, in bits that do their own thing with exceptions. I’m not sure exactly how to implement the jump back into call_stub — I need to figure out how to cut the stack down to the right frame.

    One random question: like the other platforms, I keep the methodOop in a non-volatile register. Do I need to update this from the pointer in the interpreterState after, e.g., calls to BytecodeInterpreter::run, to account for it maybe being moved by the GC?

  3. So without the exceptions check you mention you probably won’t pass tests where there is no linked native method. And you can fail tests with async exceptions, either losing them or aborting because of unprocessed exceptions. I don’t understand the “cut the stack back” remark, because all that happens is an unwind.

    You do have to reload the method from the interpreterState any time there is the possibility of a safepoint between the load and the use. The other platforms do that. On SPARC, since the L registers are known to the GC code, it doesn’t have to do anything special with Lmethod, since it is always valid.

  4. I guess call_VM should be at the top of my to-do list, it being something that could introduce weird bugs. And yeah, it is just an unwind, and I can see what you’d do on PPC — keep popping frames until you find one pointing to _call_stub_return_address — but I can’t see how that happens on other platforms, which is confusing me.

    When might a safepoint happen? I’m thinking there are two places I need to reload Rmethod: when I return from BytecodeInterpreter::run, and when I return from JavaThread::check_special_condition_for_native_trans.
