Inside Zero and Shark: Calling conventions and the call stub

JavaCalls::call is merely a thin wrapper around JavaCalls::call_helper, so let’s have a look in there (they’re both in javaCalls.cpp). The interesting part starts when the JavaCallWrapper is created. JavaCallWrapper’s constructor manages the transition to _thread_in_Java, amongst other things, and its destructor manages the transition back to _thread_in_vm, so the whole of that block will be _thread_in_Java. This idiom of using an object whose constructor and destructor manage things is a common one in HotSpot; the apparently unused HandleMark created directly after the JavaCallWrapper is another example of this.

Ok, so now we’re _thread_in_Java, and it’s time to execute some Java code. The call to the call stub is the bit that does that, but before we look at that it’s interesting to skip forward a little, to look at what happens before and after the HandleMark and JavaCallWrapper are destroyed. Immediately before the blocks close is this:

// Preserve oop return value across possible gc points
if (oop_result_flag) {
  thread->set_vm_result((oop) result->get_jobject());
}

and immediately after the blocks close is this:

// Restore possible oop return
if (oop_result_flag) {
  result->set_jobject((jobject) thread->vm_result());
}

If the Java code called by the call stub returned an object (a java.lang.Object) then a pointer to that object will now be in result — and it’s an oop. The destructors of both HandleMark and JavaCallWrapper contain code that can GC, so these blocks of code are needed to protect that oop. Here, rather than using a handle, the result is protected by being stored in the thread, in a location the GC knows to check and update.

Back to the call stub. What is it? Well, in what I’ll call “classic” HotSpot (where everything from here on in is written in assembly language) every methodOop has a pair of entry points: pointers to the native code that actually executes the method. When a method has been JIT compiled these entry points will point at the JIT compiled code; for interpreter code they will point to some location within the interpreter. The reason there are two entry points is that the interpreter passes arguments and return values in a different manner to compiled code; the interpreter uses a different calling convention from the compiled code. If a method is compiled then its compiled entry point (the entry point that will be called by compiled code) will point directly at the compiled code, but its interpreted entry point will point to the i2c adaptor, which translates from the interpreter calling convention to the compiler calling convention and then jumps to the compiled entry point. Interpreted methods have similar treatment: their interpreted entry point points to the part of the interpreter responsible for executing that method, and their compiled entry point will point to the c2i adaptor.

What does this have to do with the call stub? Well, the call stub is the interface between VM code and the interpreter calling convention. It takes a C array of parameters and copies them to the locations specified by the interpreter calling convention. Then it invokes the method, by jumping to its interpreted entry point. Finally, it copies the result from the location specified by the interpreter calling convention to the address supplied by JavaCalls::call_helper.

You’ll notice this description has been with reference to classic HotSpot. Zero and Shark are mostly the same, but there are two significant differences. Firstly, the reason classic HotSpot has two calling conventions is an optimization: the interpreter calling convention gives better performance in the interpreter, the compiler calling convention gives better performance in the compiler, and the difference is enough to more than offset the overhead of using adaptors for bridging. In Zero and Shark, the limits of what can be done in C++ and with LLVM constrain the design of the calling convention such that having different ones doesn’t really make sense. So — for now, at least — Shark code also uses the interpreter calling convention, and the compiled entry point is never set or used. In Zero and Shark there is only “the calling convention”.

The second difference is that Shark methods require a bit of extra information to execute. Compiled methods need to be able to tell HotSpot where they are in the code at certain times, and in classic HotSpot this is done by looking at the PC. LLVM doesn’t allow us access to this — even if it did, it wouldn’t make much sense — so Shark compiled methods feed HotSpot faked PCs. To do this, each method needs to know where HotSpot thinks the compiled code starts, so in Zero, entry points are not pointers to code but pointers to ZeroEntry or SharkEntry objects. The real entry point is stored within those.

Next time, some details about the calling convention, and some stuff about stacks.