Instrumenting Zero and Shark

Every so often I find myself adding little bits of code to Zero or Shark, to figure out obscure bugs or to see whether working on some optimization or another is worthwhile. I did it again today, and thought I’d write a little tutorial.

The first versions of Shark implemented a lot of things the same way as the interpreter, i.e. slowly. Since February I’ve been slowly replacing these interpreter-isms with implementations that are more compiler-like, and today there’s only one left: invokeinterface. The reason I left it until last is that it’s the biggest and the ugliest: it’ll no doubt be a pig to do, and quite frankly I don’t really want to do it. To see if I could get away with not bothering with it, I decided to instrument Shark so I could run SPECjvm98 and have it print out the number of times Shark-compiled code executed an invokeinterface for every benchmark.

First, I needed somewhere to store the counter. I decided to put it in each thread’s JavaThread object, as these are easy to get at from both C++ and Shark, and they’re thread-specific so you don’t have to worry about locking.

diff -r 4cc0bc87aef4 ports/hotspot/src/os_cpu/linux_zero/vm/thread_linux_zero.hpp
--- a/ports/hotspot/src/os_cpu/linux_zero/vm/thread_linux_zero.hpp	Fri May 29 12:46:07 2009 +0100
+++ b/ports/hotspot/src/os_cpu/linux_zero/vm/thread_linux_zero.hpp	Fri May 29 14:44:04 2009 +0100
@@ -32,6 +32,25 @@
     _top_zero_frame = NULL;
   }
 
+ private:
+  int _interface_call_count;
+
+ public:
+  int interface_call_count() const
+  {
+    return _interface_call_count;
+  }
+  void set_interface_call_count(int interface_call_count)
+  {
+    _interface_call_count = interface_call_count;
+  }
+
+ public:
+  static ByteSize interface_call_count_offset()
+  {
+    return byte_offset_of(JavaThread, _interface_call_count);
+  }
+
  public:
   ZeroStack *zero_stack()
   {

So we have the field itself, a getter and setter to access it from C++, and a static method to expose the offset of the field in the thread object to Shark. Next we need to make Shark update the counter:

diff -r 4cc0bc87aef4 ports/hotspot/src/share/vm/shark/sharkTopLevelBlock.cpp
--- a/ports/hotspot/src/share/vm/shark/sharkTopLevelBlock.cpp	Fri May 29 12:46:07 2009 +0100
+++ b/ports/hotspot/src/share/vm/shark/sharkTopLevelBlock.cpp	Fri May 29 14:44:04 2009 +0100
@@ -985,6 +985,16 @@
 // Interpreter-style interface call lookup
 Value* SharkTopLevelBlock::get_interface_callee(SharkValue *receiver)
 {
+  Value *count_addr = builder()->CreateAddressOfStructEntry(
+    thread(),
+    JavaThread::interface_call_count_offset(),
+    PointerType::getUnqual(SharkType::jint_type()));
+  builder()->CreateStore(
+    builder()->CreateAdd(
+      builder()->CreateLoad(count_addr),
+      LLVMValue::jint_constant(1)),
+    count_addr);
+  
   SharkConstantPool constants(this);
   Value *cache = constants.cache_entry_at(iter()->get_method_index());
 

We’re almost ready to add the SPECjvm98-specific bits now, but there’s one thing left. Some of the benchmarks are multithreaded, but we have one counter per thread; we need a way to set and get the counters from all running threads. HotSpot has some code to iterate over all the threads in the VM, but it’s all private to the Threads class. Not to worry though, we’ll just stick it in there:

diff -r 4cc0bc87aef4 openjdk-ecj/hotspot/src/share/vm/runtime/thread.hpp
--- a/openjdk-ecj/hotspot/src/share/vm/runtime/thread.hpp	Fri May 29 12:46:07 2009 +0100
+++ b/openjdk-ecj/hotspot/src/share/vm/runtime/thread.hpp	Fri May 29 14:44:04 2009 +0100
@@ -1669,6 +1669,9 @@
   // Deoptimizes all frames tied to marked nmethods
   static void deoptimized_wrt_marked_nmethods();
 
+ public:
+  static void reset_interface_call_counts();
+  static int  interface_call_counts_total();
 };
 
 
diff -r 4cc0bc87aef4 openjdk-ecj/hotspot/src/share/vm/runtime/thread.cpp
--- a/openjdk-ecj/hotspot/src/share/vm/runtime/thread.cpp	Fri May 29 12:46:07 2009 +0100
+++ b/openjdk-ecj/hotspot/src/share/vm/runtime/thread.cpp	Fri May 29 14:44:04 2009 +0100
@@ -3828,6 +3828,21 @@
   }
 }
 
+void Threads::reset_interface_call_counts()
+{
+  ALL_JAVA_THREADS(thread) {
+    thread->set_interface_call_count(0);
+  }
+}
+
+int Threads::interface_call_counts_total()
+{
+  int total = 0;
+  ALL_JAVA_THREADS(thread) {
+    total += thread->interface_call_count();
+  }
+  return total;
+}
 
 // Lifecycle management for TSM ParkEvents.
 // ParkEvents are type-stable (TSM).

Now we’re ready to add some SPECjvm98-specific code. A quick poke around in SPECjvm98 brings up the method spec.harness.ProgramRunner::runOnce as a likely place to hook ourselves in. This will be run by the interpreter — Shark won’t compile it as it’s only called a few times — so we put our code into the C++ interpreter’s normal entry, the bit that executes bytecode methods:

diff -r 4cc0bc87aef4 ports/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp
--- a/ports/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp	Fri May 29 12:46:07 2009 +0100
+++ b/ports/hotspot/src/cpu/zero/vm/cppInterpreter_zero.cpp	Fri May 29 14:44:04 2009 +0100
@@ -42,6 +42,26 @@
   JavaThread *thread = (JavaThread *) THREAD;
   ZeroStack *stack = thread->zero_stack();
 
+  char *benchmark = NULL;
+  {
+    ResourceMark rm;
+    const char *name = method->name_and_sig_as_C_string();
+    if (strstr(name, "spec.harness.ProgramRunner.runOnce(") == name) {
+      intptr_t *locals = stack->sp() + method->size_of_parameters() - 1;
+      if (LOCALS_INT(5) == 100) {
+        Threads::reset_interface_call_counts();
+
+        name = LOCALS_OBJECT(1)->klass()->klass_part()->name()->as_C_string();
+        const char *limit = name + strlen(name);
+        while (*(--limit) != '/');
+        const char *start = limit;
+        while (*(--start) != '/');
+        start++;
+        benchmark = strndup(start, limit - start);
+      }
+    }
+  }
+  
   // Adjust the caller's stack frame to accomodate any additional
   // local variables we have contiguously with our parameters.
   int extra_locals = method->max_locals() - method->size_of_parameters();
@@ -59,6 +79,12 @@
 
   // Execute those bytecodes!
   main_loop(0, THREAD);
+
+  if (benchmark) {
+    tty->print_cr("%s: %d interface calls",
+                  benchmark, Threads::interface_call_counts_total());
+    free(benchmark);
+  }
 }
 
 void CppInterpreter::main_loop(int recurse, TRAPS)

This looks a little messy, but what it’s basically doing is spotting calls to spec.harness.ProgramRunner::runOnce and extracting the name of the benchmark from its arguments. It is complicated by the fact that SPECjvm98 intersperses full runs with hidden tenth-speed ones, so we use the speed argument (in LOCALS_INT(5)) to ignore the hidden runs.
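If the pointer gymnastics in the middle are hard to follow, here’s the name extraction on its own as a compilable toy. The input string is hypothetical; the real one comes from the class of LOCALS_OBJECT(1):

#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
  // Hypothetical class name of the benchmark's harness object:
  const char *name = "spec/benchmarks/_202_jess/Main";

  const char *limit = name + strlen(name);
  while (*(--limit) != '/');   // back up to the slash before "Main"
  const char *start = limit;
  while (*(--start) != '/');   // back up to the slash before the benchmark
  start++;

  char *benchmark = strndup(start, limit - start);
  printf("%s\n", benchmark);   // prints "_202_jess"
  free(benchmark);
  return 0;
}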

Now we’re ready to run the benchmarks and see what happens:

[Chart: SPECjvm98 results with instrumented VM]

Looks like making invokeinterface faster is worthwhile after all! Now all I have to do is do it ;)

Debugging the C++ interpreter

Every so often I find myself wanting to add debug printing to the C++ interpreter for specific methods. I can never remember how I did it the last time and have to figure it out all over again, so here’s how:

diff -r 4d8381231af6 openjdk-ecj/hotspot/src/share/vm/interpreter/bytecodeInterpreter.cpp
--- a/openjdk-ecj/hotspot/src/share/vm/interpreter/bytecodeInterpreter.cpp  Tue Apr 21 09:50:43 2009 +0100
+++ b/openjdk-ecj/hotspot/src/share/vm/interpreter/bytecodeInterpreter.cpp  Wed Apr 22 11:13:35 2009 +0100
@@ -555,6 +555,15 @@
          topOfStack < istate->stack_base(),
          "Stack top out of range");
 
+  bool interesting = false;
+  if (istate->msg() != initialize) {
+    ResourceMark rm;
+    if (!strcmp(istate->method()->name_and_sig_as_C_string(),
+                "spec.benchmarks._202_jess.jess.Rete.FindDeffunction(Ljava/lang/String;)Lspec/benchmarks/_202_jess/jess/Deffunction;")) {
+      interesting = true;
+    }
+  }
+
   switch (istate->msg()) {
     case initialize: {
       if (initialized++) ShouldNotReachHere(); // Only one initialize call

The trick is getting the fully-qualified name of the method right: the method name contains dots, but the class names in its signature contain slashes. You’re there once you have that down.
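For example, to match a hypothetical method String Foo.bar(Map) in a package com.example, the string to compare against would be:

com.example.Foo.bar(Ljava/util/Map;)Ljava/lang/String;

Dots in the class and method names, slashes in the class names inside the descriptor.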

I’m not dead

I haven’t blogged for a while. I’ve been working on Shark’s performance, walking through the native code generated for critical methods and looking at what’s happening. There are several cases where I can see that some piece of code is unnecessary, but translating that into a way that Shark can see it’s unnecessary is non-trivial. I’m thinking I may need to separate the code generation, adding an intermediate layer between the typeflow and the LLVM IR so I can add things which are maybe necessary and then remove them if not. It all seems a bit convoluted — bytecode → typeflow → new intermediate → LLVM IR → native — but the vast bulk of Shark’s time is spent in the last step, so a bit more overhead to create simpler LLVM IR should speed up compilation as well as the runtime.

None of this has been particularly bloggable, but I wanted to point out two exciting things that are happening in Shark land. Robert Schuster and Xerxes Rånby have been busy getting Shark to run on ARM, and Neale Ferguson has started porting LLVM to zSeries with the intention of getting Shark running there. I expected to see Shark on ARM sooner or later, but Shark on zSeries came completely out of the blue. I’m really looking forward to seeing that happen!

Good news and bad news

Bad news first. The drop in speed between the Zeros in IcedTea6 1.3 and 1.4 doesn’t seem to come from Zero itself. I did a build of IcedTea6 1.4 with everything in ports/hotspot/src/*cpu reverted to 1.3, and the speed loss remained. It must be something to do with the newer HotSpot, or some other patch that got added or changed. I don’t really want to spend any more time on this than I already have, so we’ll just have to live with it.

I’ve not come to any conclusions as to the difference in speed between the native-layer C++ interpreter and Zero either. It’s not the unaligned access stuff I mentioned: I ran some benchmarks, but the results were ambiguous. It may be libffi, but again, I don’t want to spend more time on this…

The good news is that I’ve been checking the Zero sources for SCA cover, emailing various people, and there’s only one tiny easily-removable bit I’m unsure about. I spent the morning preparing and submitting the first of the patches that will be required, the core build patch, which will hopefully be well received.

Benchmarks

So, benchmarking. I did a couple of quick SPECjvm98 runs, to see what the times looked like. Please note that neither of these results were produced in compliance with the SPECjvm98 run rules so they are not comparable with SPECjvm98 metrics.

[Chart: SPECjvm98 times for the interpreters in IcedTea6 1.4]

This result would suggest that the lion’s share of the performance difference between the template interpreter and Zero is down to Zero’s use of the C++ interpreter. There’s obviously something else slowing it down as well, though, possibly libffi, possibly the unaligned access stuff Ed Nevill mailed about the other day, or possibly something else entirely.

[Chart: SPECjvm98 times for IcedTea6 1.3 Zero and IcedTea6 1.4 Zero]

Apparently Robert Schuster was right about the speed loss between IcedTea6 1.3 Zero and IcedTea6 1.4. Most of the times are within 1% of each other, but the mpegaudio benchmark is some 12% slower. Assuming it is Zero (and not, for example, the HotSpot upgrade) there’s not a lot that could account for it. It could simply be the stack overflow checking that the Zero in 1.3 didn’t have, but I wouldn’t have thought that would have had such a pronounced effect. Maybe something’s not getting inlined like I expected; I’m going to have to hunt a little for this one…

State of the world

Well, in case you missed it, Zero passed the TCK! Specifically, the latest OpenJDK packages in Fedora 10 for 32- and 64-bit PowerPC passed the Java SE 6 TCK and are compatible with the Java SE 6 platform. I’ve been working toward this since sometime in November — the sharp-eyed amongst you may have noticed the steady stream of obscure fixes I’ve been committing — and the final 200 hour marathon finished at 5pm on FOSDEM Saturday, less than 24 hours before my talk. It was pretty stressful, and I took the week off to recover!

Needless to say, none of this could have happened without the rest of the OpenJDK team here at Red Hat getting it to pass on the other platforms. Special thanks must go to Lillian for managing the release. She got the blame for a lot of what went wrong, and it’s only fair she should get the credit for what went right.

Of course, all of this wasn’t just so I’d have something exciting to announce at FOSDEM. In a way it validates the decision we took at Red Hat to focus on Zero rather than using Cacao or another VM. By using as much OpenJDK code as possible — Zero builds are 99% HotSpot — we get as much OpenJDK goodness as possible, including the “correctness” of the code. Zero’s speed can make it a standing joke, but I’d like to use these passes to emphasize that Zero isn’t just a neat hack — it’s production quality code that hasn’t been optimized yet. I’ve written fast code and I’ve written correct code, and in my experience it’s easier in the long run to make correct code fast than it is to make fast code correct. The TCK isn’t everything, naturally, but the fact that it’s possible to pass it using Zero builds gives us a firm foundation for future work.

So, what now for me? Well, in the medium term I want to restart work on Shark, but there’s a couple of things for Zero I want to look at while they’re fresh in my mind. The first is speed. As an interpreter Zero will never be “fast”, but in my FOSDEM slides I used some of Andrew Haley‘s benchmarks that show Zero as significantly slower than the template interpreter on x86_64. Furthermore, Robert Schuster mentioned in his talk that the Zero in IcedTea 1.4 was significantly slower than the Zero in IcedTea 1.3. I’m not going to spend a great deal of time on it, but I’d like to do a bit of benchmarking and profiling to check that nothing stupid is happening.

The other thing I want to do for Zero is to get it into upstream HotSpot. This is going to require a lot of non-fun stuff — tidying, a bit of rethinking, and an SCA audit.

Finally, Inside Zero and Shark, the articles I’ve been writing. I didn’t mention it at the time, but I was writing them while my TCK runs were in progress, to keep me sane! I do plan to continue them, but they’ll likely be a little more sporadic now I’m starting the fun stuff again!

Inside Zero and Shark: The call stub and the frame manager

Now that we have all that stack stuff out of the way we can get into the core of Zero itself, the replacements for the parts of HotSpot that were originally written in assembly language.

I’ve mentioned already that the bridge from the VM into Java code is the call stub. In Zero, this is the function StubGenerator::call_stub, in stubGenerator_zero.cpp, and its job is really simple. In the previous article I explained how when a method is entered it finds its arguments at the top of the stack, at the end of the previous method’s frame. Well, for the first frame of all there is no previous frame, so the call stub’s job is to create one. If you look in the crash dump I linked in the previous article, you’ll see that right at the bottom of the stack trace is a short frame that’s different from all the others. This is the entry frame, the frame the call stub made. You can see the code that built it in EntryFrame::build, right at the bottom of stubGenerator_zero.cpp.

Once the entry frame is created, the call stub invokes the method by jumping to its entry point. If the method we’re calling hasn’t been JIT compiled then the entry point will be pointing at one of the interpreter’s method entries. These, along with the call stub, are the bits that are written in assembly language in classic HotSpot.

There are several different method entries — Zero has five, classic HotSpot has fourteen! — but most of this is optimization, and you can do pretty much everything with just two: an entry point for normal (bytecode) methods, and an entry for native (JNI) methods. In this article I’m going to talk about the normal entry.

In the C++ interpreter, the normal entry is split into two parts. The larger of the two is the bytecode interpreter. This is written in C++, the function BytecodeInterpreter::run in bytecodeInterpreter.cpp, and it does the bulk of the work. The other part is the frame manager. In non-Zero HotSpot this is written in assembly language, and it handles the various stack manipulations that cannot be performed from within C++. Zero’s frame manager is, of course, written in C++; it’s the function CppInterpreter::normal_entry in cppInterpreter_zero.cpp.

The frame manager performs tasks for the bytecode interpreter, so you might expect the bytecode interpreter to call the frame manager wherever it needs to adjust the stack. It can’t work this way, however; the interleaved stack in classic HotSpot means that once you’re inside the bytecode interpreter its ABI frame lies on top of the stack, blocking any access to the Java frames beneath. To cope with this, the code is essentially written inside out, with the frame manager calling the bytecode interpreter, and the bytecode interpreter returning to the frame manager whenever it needs something done.

The way it works is this. On entering a method, we start off in the frame manager. The frame manager extends the caller’s frame to accommodate any extra locals, then creates a new frame for the callee. The frame manager then calls the bytecode interpreter with a method_entry message.

Now we’re inside the bytecode interpreter, which executes bytecodes one-by-one until it reaches something it cannot handle. Say it arrives at a method call instruction. In classic HotSpot, the bytecode interpreter’s ABI frame is blocking the top of the stack, so if the bytecode interpreter were to handle the call itself the callee wouldn’t be able to extend its caller’s frame to accommodate its extra locals. The frame manager has to handle this, so the bytecode interpreter returns with a call_method message.

Now we’re back in the frame manager again; the bytecode interpreter’s frame has been removed, and the Java frame is at the top of the stack. The arguments to the call were set up by the bytecode interpreter, so all the frame manager has to do is jump to the callee’s entry point. When the callee returns, the frame manager returns control to the bytecode interpreter by calling it with a method_resume message, and the bytecode interpreter continues from where it was when it issued the call_method.

This process is repeated every time a method call is required. Once the bytecode interpreter is finished with a method, it returns to the frame manager with a return_from_method or a throwing_exception message. The frame manager then removes the method’s frame, copies the result into its caller’s frame if necessary, and returns to its caller.
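Here’s the shape of that control flow as a compilable toy. It’s a minimal sketch of the message loop, not the real CppInterpreter::normal_entry or BytecodeInterpreter::run, and all the actual frame manipulation is reduced to comments:

#include <cstdio>

// The messages the two halves pass back and forth.
enum Message { method_entry, method_resume, call_method,
               return_from_method, throwing_exception };

// Stands in for BytecodeInterpreter::run: execute bytecodes until we hit
// something we can't handle, then return to the frame manager with a message.
Message interpret(Message msg) {
  (void) msg;                       // unused in this toy
  static int calls_left = 2;        // pretend this method makes two calls
  if (calls_left-- > 0) return call_method;
  return return_from_method;
}

// Stands in for CppInterpreter::normal_entry.
void frame_manager() {
  // ... extend the caller's frame for extra locals, build the new frame ...
  Message msg = interpret(method_entry);
  while (msg == call_method) {
    printf("frame manager: invoking the callee at its entry point\n");
    // ... the callee runs to completion here ...
    msg = interpret(method_resume);
  }
  // msg is return_from_method or throwing_exception:
  // ... pop the frame, copy any result into the caller's frame, return ...
  printf("frame manager: unwinding\n");
}

int main() { frame_manager(); return 0; }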

The frame manager exists because the bytecode interpreter’s frame blocks the stack in classic HotSpot. In Zero, Java frames live on the Zero stack, which is separate from the ABI stack. Why then does Zero need a separate frame manager? The answer is that it doesn’t — it would be perfectly possible to rewrite the bytecode interpreter to stand alone. That, however, is the issue: you’d have to rewrite the bytecode interpreter, making significant modifications to the existing HotSpot code. That runs counter to the design philosophy of Zero, which aims for it to slot into HotSpot with minimal modification. It could be done, but we didn’t do it.

That pretty well sums up the normal entry. Next time I’ll talk about the other essential method entry, the one that handles JNI methods.

Inside Zero and Shark: HotSpot’s stacks

Now that we understand what Java is expecting of the stack we can take a look at how HotSpot and Zero implement it. It would, of course, be perfectly possible to implement a stack exactly as described, but in practice that’s not how it’s done.

The first difference is pretty straightforward. Remember I said that when a method is called the interpreter pops its arguments from the stack and copies them to the callee’s local variables? Well, all that copying would be pretty inefficient, so HotSpot simply doesn’t bother. Imagine you’re executing some method. You’ve just pushed three values onto the stack, value_a, value_b and value_c, and you’re about to invoke a method that takes two arguments:

 
value_c   <- top of stack
value_b
value_a

When the interpreter enters the callee, rather than popping the arguments, it leaves them where they are:

 
value_c   local[1]
value_b   local[0]
value_a

If the callee has more locals than it has arguments (say this method needs four) then extra locals (set to zero) will be pushed onto the stack:

 
0         local[3]
0         local[2]
value_c   local[1]
value_b   local[0]
value_a

Execution continues as normal after that, with values being pushed onto the stack after the locals. As methods call methods call methods call methods, the stack becomes split into layers, with individual methods’ stacks interleaved with blocks of local variables. When a method returns, everything up to and including local[0] is popped. If a method is to return a value, then that will be popped before everything else and pushed back onto the stack afterwards. We’ve exchanged copying the arguments for copying the result, a good tradeoff given that methods can have many arguments but only one result.
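Here’s that convention as a compilable toy, purely illustrative, with a std::vector standing in for the real stack:

#include <cstdint>
#include <cstdio>
#include <vector>

// A toy model of the no-copy calling convention: the caller's pushed
// arguments become the callee's first locals in place, and only the
// result is copied on return.
int main() {
  std::vector<intptr_t> stack;
  stack.push_back(10);                 // value_a
  stack.push_back(20);                 // value_b, becomes callee's local[0]
  stack.push_back(30);                 // value_c, becomes callee's local[1]

  // "Enter" a callee taking two arguments but needing four locals:
  size_t local0 = stack.size() - 2;    // local[0] is where value_b sits
  stack.push_back(0);                  // local[2], zeroed
  stack.push_back(0);                  // local[3], zeroed

  stack.push_back(99);                 // ... the callee leaves a result ...

  // Return: pop the result, discard down to local[0], push it back.
  intptr_t result = stack.back();
  stack.resize(local0);
  stack.push_back(result);

  for (intptr_t v : stack) printf("%ld ", (long) v);
  printf("\n");                        // prints: 10 99
  return 0;
}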

So far so good, but there’s another difference. The stack I’ve been talking about until now is the stack of the Java language. In HotSpot this is variously referred to as the Java expression stack, the Java stack or the expression stack. But HotSpot is a program in itself, and it has a stack of its own, the ABI stack or native stack. This is the stack that C and C++ functions use to store their own bits and pieces on. HotSpot was originally written for i386, a platform notoriously starved of registers, and rather than maintaining two separate stack pointers the HotSpot engineers decided to store the Java stack on the ABI stack and save a register. Each time a Java method is invoked, the native code that executes it needs to store some state on the ABI stack, so chunks of Java stuff end up interleaved with chunks of ABI stuff between each method’s local variables and its part of the expression stack.

This tinkering with the ABI stack is one of the two reasons the C++ interpreter in HotSpot required a layer written in assembler — you don’t have that kind of access to the ABI stack in C++. Zero, of course, is written in C++, and doesn’t have that access; Zero maintains a separate stack, an instance of the ZeroStack class from stack_zero.hpp. That could have consigned this interleaving to a side-note from history, but sadly the C++ interpreter expects to find its state information stored between its local variables and its expression stack. Rather than rewriting the C++ interpreter, Zero interleaves too. It’s the path of least resistance.

You can see what I mean in this crash dump that — congratulations! — you are now qualified to understand. The trace is split into frames, with each frame representing one method invocation. The deepest frame is at the top, so here:

java.lang.Thread.run
called java.util.concurrent.ThreadPoolExecutor$Worker.run
called java.util.concurrent.ThreadPoolExecutor.runWorker
called sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run
called sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0
called java.net.Socket.getInputStream — which crashed.

The top frame’s expression stack is at 0xd0ffe7e4 through 0xd0ffe7ec, and its local variables are at the start of the next frame down, at 0xd0ffe83c through 0xd0ffe848. Between them is the C++ interpreter’s state (at 0xd0ffe7f0 through 0xd0ffe830) and two words which the stack walker uses to figure out where everything is.
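Laid out in one place (the ranges are inclusive, and I’m assuming the two stack walker words sit directly between the state and the locals):

0xd0ffe7e4 - 0xd0ffe7ec   top frame's expression stack
0xd0ffe7f0 - 0xd0ffe830   C++ interpreter state
0xd0ffe834 - 0xd0ffe838   stack walker words
0xd0ffe83c - 0xd0ffe848   top frame's local variables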

I’m nearly finished, but there’s one final thing. The Java Virtual Machine Specification specifies the sizes of the various types in terms of the number of stack slots or local variable slots they occupy: long and double values take two slots, and everything else takes one. If a method’s arguments are an int, an Object, a long and an int, its local variables on entry will look like this:

int       local[4]
long      local[3]
          local[2]
Object    local[1]
int       local[0]

This is pretty straightforward, aside from the fact that the long is officially in local[2] but its address is actually the address of local[3]. The problem arises when you’re using a 64-bit machine — the 64-bit Object pointer has been allocated the same number of slots as a 32-bit int. On 64-bit platforms, therefore, stack slots need to be 64 bits wide, which wastes space, and leaves us the choice of where in the slot to put non-Object types. The various classic HotSpot ports do this in different ways, but on Zero everything is accessed by slot number, so values are positioned such that they start at the address of the start of the slot. This means the calculation is the same regardless of whether the machine is 32- or 64-bit, and makes the majority of this stuff transparent. The same local variable array on 64-bit Zero looks like this:

int       local[4]
long      local[3]
          local[2]
Object    local[1]
int       local[0]

(each slot now 64 bits wide, with every value at the start of its slot)
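To make the slot-number addressing concrete, here’s a toy version of the kind of calculation that sits behind macros like the LOCALS_INT and LOCALS_OBJECT used earlier. It’s illustrative only, not the actual Zero code:

#include <cstdint>
#include <cstdio>

// One slot per machine word; a value's address is the address of the
// start of its first slot, whatever the word size.
static intptr_t *slot_addr(intptr_t *locals, int index) {
  return locals - index;               // locals are indexed downwards
}

int main() {
  intptr_t frame[8] = {0};
  intptr_t *locals = frame + 7;        // local[0] at the high end

  *(int32_t *) slot_addr(locals, 0) = 42;         // an int in local[0]
  *(int64_t *) slot_addr(locals, 3) = 1LL << 40;  // a long in local[2]/[3],
                                                  // addressed via slot 3

  printf("int  = %d\n", *(int32_t *) slot_addr(locals, 0));
  printf("long = %lld\n", (long long) *(int64_t *) slot_addr(locals, 3));
  return 0;
}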

I’ll shut up about stacks now!

Inside Zero and Shark: The Java stack

This article will be a little generic — nothing about HotSpot, nothing about Zero — but before we can understand Zero’s calling convention we need to go up a level and understand the calling convention of Java itself, in which arguments and results are passed on the stack. Let’s have a look at an example to see how that works:

class HelloUser {
  public static void main(String[] args) {
    System.out.print("Hello ");
    System.out.println(System.getProperty("user.name"));
  }
}

We’re going to have to disassemble it to see what’s happening:

public static void main(java.lang.String[]);
    0:  getstatic       [Field java/lang/System.out:Ljava/io/PrintStream;]
    3:  ldc             [String "Hello "]
    5:  invokevirtual   [Method java/io/PrintStream.print:(Ljava/lang/String;)V]
    8:  getstatic       [Field java/lang/System.out:Ljava/io/PrintStream;]
   11:  ldc             [String "user.name"]
   13:  invokestatic    [Method java/lang/System.getProperty:(Ljava/lang/String;)Ljava/lang/String;]
   16:  invokevirtual   [Method java/io/PrintStream.println:(Ljava/lang/String;)V]
   19:  return

The getstatic instruction gets a value from a static field of a class (in this case the out field of the System class) and pushes it onto the stack. The ldc instruction loads a constant (the string "Hello ") and pushes that onto the stack. So far we have this:

Before 0: getstatic    Before 3: ldc    Before 5: invokevirtual
-------------------    -------------    -----------------------
                                        "Hello "
(empty)                System.out       System.out

The next instruction is an invokevirtual, which is going to call the method java.io.PrintStream.print. This takes two arguments, the implicit argument this, and the string to print, so the interpreter pops two values from the stack, stores them as the callee’s first two local variables, and starts to execute the callee. When the callee returns the stack will be empty:

Before 8: getstatic
-------------------
(empty)

We now have another getstatic and another ldc:

Before 11: ldc    Before 13: invokestatic
--------------    -----------------------
                  "user.name"
System.out        System.out

The next instruction is an invokestatic, another method call. This is calling java.lang.System.getProperty, which takes only one argument, the name of the property to get (static methods have no this). Presently there are two values on the stack, but the interpreter doesn’t care about that. It simply pops the top value from the stack, stores it as the callee’s first local variable, and starts to execute the callee. This time, however, the callee returns a value, the user’s name, so when it returns it will have pushed that onto the stack:

Before 16: invokevirtual
------------------------
"gbenson"
System.out

Now we’re ready for the final call, another invokevirtual. That extra value on the stack may have seemed odd before, but now it makes sense; it’s the first argument for this call! The interpreter pops two values from the stack, stores them as the callee’s first two local variables, and starts to execute the callee. This method returns nothing, so when the callee returns the stack will be empty. HelloUser.main returns nothing, so the stack is now exactly as it should be for us to execute the return instruction:

Before 19: return
-----------------
(empty)

Next time we’ll see how all this works in HotSpot and Zero.