
The canonical JVM implementation from Sun applies some pretty sophisticated optimization to bytecode to obtain near-native execution speeds after the code has been run a few times.

The question is, why isn't this compiled code cached to disk for use during subsequent uses of the same function/class?

As it stands, every time a program is executed, the JIT compiler kicks in afresh, rather than using a pre-compiled version of the code. Wouldn't such a feature significantly improve the initial run of the program, when the bytecode is essentially being interpreted?
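(To see the behaviour I mean: a minimal sketch, assuming a HotSpot JVM -- the HotLoop class below is made up for illustration. Run it with -XX:+PrintCompilation and the same compilation log shows up on every fresh start, because none of the previous run's JIT output is reused.)

```java
// Hypothetical demo class -- illustrative only.
// Compile and run with:
//   javac HotLoop.java
//   java -XX:+PrintCompilation HotLoop
// Every fresh JVM start prints the same compilation events again,
// since nothing from the previous run's JIT work is kept on disk.
public class HotLoop {
    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += i;            // hot loop: JIT-compiled once it has run enough
        }
        return total;
    }

    public static void main(String[] args) {
        for (int run = 0; run < 10_000; run++) {
            sum(100_000);          // warm the method up so the JIT kicks in
        }
        System.out.println(sum(100_000));
    }
}
```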

  • 4
    A thread discussing this problem: javalobby.org/forums/thread.jspa?threadID=15812 Commented Jan 2, 2010 at 19:08
  • 2
    But an unlikely question to attract a definitive answer. Commented Jan 2, 2010 at 19:15
  • 1
    I'm not sure about a "significant" boost, because then you would have to load JITted stuff from disk instead of JITing it in-memory. It could speed things up, but on a case-by-case basis. Commented Jan 2, 2010 at 19:17
  • 1
    Thanks for the great answers everyone! All the answers were equally valid, so I went with the community on this one... Commented Jan 3, 2010 at 12:57
  • 2
    This is a good question if you ask me :) Commented Jan 21, 2010 at 6:36

5 Answers

34

Without resorting to cut'n'paste of the link that @MYYN posted, I suspect this is because the optimisations that the JVM performs are not static, but rather dynamic, based on the data patterns as well as code patterns. It's likely that these data patterns will change during the application's lifetime, rendering the cached optimisations less than optimal.

So you'd need a mechanism to establish whether the saved optimisations were still optimal, at which point you might as well just re-optimise on the fly.
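(A minimal sketch of what I mean by data-dependent optimisation, with invented class names: the JIT specialises the s.area() call site for whichever Shape subtypes it actually observes, and deoptimises if that assumption breaks. Machine code cached from a run that only ever saw Circles is tuned for exactly the wrong thing in a run that only sees Squares.)

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: the JIT profiles which subtypes actually reach area(),
// devirtualises/inlines accordingly, and deoptimises if that profile is violated.
interface Shape { double area(); }
class Circle implements Shape {
    public double area() { return Math.PI; }
}
class Square implements Shape {
    public double area() { return 1.0; }
}

public class ProfileDemo {
    static double total(List<Shape> shapes) {
        double sum = 0;
        for (Shape s : shapes) {
            sum += s.area();   // call site specialised for the types seen so far
        }
        return sum;
    }

    public static void main(String[] args) {
        // Run 1 might only ever add Circles; run 2 only Squares.
        // An optimisation cached from run 1 assumes the wrong receiver type in run 2.
        boolean circles = args.length == 0;
        List<Shape> shapes = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            shapes.add(circles ? new Circle() : new Square());
        }
        System.out.println(total(shapes));
    }
}
```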


10 Comments

...or you could just offer persistence as an option, like Oracle's JVM does -- empower advanced programmers to optimize their application's performance when and where they just know the patterns are not changing, under their responsibility. Why not?!
Because it's probably not worth it. If neither Sun, IBM, nor BEA considered it worthwhile for their performance JVMs, there's going to be a good reason for it. Maybe their RT optimisation is faster than Oracle's, which is why Oracle caches it.
Why not take stored optimisations as a starting point, to use what has been learned in previous runs? From there the JIT could work as usual and re-optimise stuff. On shut-down, that code could be persisted again and used in the next run as a new starting point.
@Puce The only reason I can think of is that AFAIK you get no profiling stats from running optimized code. So you'd have no way to improve...
I would personally be fine with a "just persist the JIT profiling information between runs" option with all the warnings that "this will only be valid with exact same JVM, same data etc and otherwise ignored". Regarding why this has not been implemented, I would expect that the added complexity of persisting and validating the JIT seed data was too much to take resources from other projects. Given the choice between this and Java 8 lambda+streams I'd rather have the latter.
32

Oracle's JVM is indeed documented to do so -- quoting Oracle,

the compiler can take advantage of Oracle JVM's class resolution model to optionally persist compiled Java methods across database calls, sessions, or instances. Such persistence avoids the overhead of unnecessary recompilations across sessions or instances, when it is known that semantically the Java code has not changed.

I don't know why all sophisticated VM implementations don't offer similar options.

4 Comments

Because other sophisticated JVMs don't have a honking great enterprise RDBMS handy to store stuff in :)
Wow! That means the compilations are cached at times. This is good news!
IBM's J9 is also documented to do so.
Note that this Oracle JVM is the one inside the Oracle Database, not the downloadable JVM Oracle got when purchasing Sun.
21

An update to the existing answers - Java 8 has a JEP dedicated to solving this:

=> JEP 145: Cache Compiled Code.

At a very high level, its stated goal is:

Save and reuse compiled native code from previous runs in order to improve the startup time of large Java applications.

Hope this helps.
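(For reference, the closest thing that did eventually ship: the experimental jaotc ahead-of-time compiler from JEP 295 in JDK 9, later removed again from the JDK. A rough sketch of the workflow with a made-up class name -- this is not JEP 145 itself, which never landed:)

```java
// Hypothetical HelloAot.java -- illustrative only.
//
// With a JDK that still ships jaotc (JEP 295, experimental), roughly:
//   javac HelloAot.java
//   jaotc --output libHelloAot.so HelloAot.class
//   java -XX:AOTLibrary=./libHelloAot.so HelloAot
// The second command compiles the class ahead of time; the third run then
// starts from that precompiled native code instead of interpreting and
// re-JITting everything from scratch.
public class HelloAot {
    public static void main(String[] args) {
        System.out.println("Hello from (possibly) AOT-compiled code");
    }
}
```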

3 Comments

The feature has not made it to the final release.
@assylias with AOT, this might not ever be needed
8

Excelsior JET has had a caching JIT compiler since version 2.0, released back in 2001. Moreover, its AOT compiler can recompile the cache into a single DLL/shared object with all optimizations applied.

1 Comment

Yes, but the question was about the canonical JVM, i.e., Sun's JVM. I'm well aware that there are several AOT compilers for Java as well as other caching JVMs.
1

I do not know the actual reasons, not being in any way involved in the JVM implementation, but I can think of some plausible ones:

  • The idea of Java is to be a write-once-run-anywhere language, and putting precompiled stuff into the class file is kind of violating that (only "kind of" because of course the actual byte code would still be there)
  • It would increase the class file sizes because you would have the same code there multiple times, especially if you happen to run the same program under multiple different JVMs (which is not really uncommon, when you consider different versions to be different JVMs, which you really have to do)
  • The class files themselves might not be writable (though it would be pretty easy to check for that)
  • The JVM optimizations are partially based on run-time information and on other runs they might not be as applicable (though they should still provide some benefit)

But I really am guessing, and as you can see, I don't really think any of my reasons are actual show-stoppers. I figure Sun just doesn't consider this support a priority, and maybe my first reason is close to the truth, as doing this habitually might also lead people into thinking that Java class files really need a separate version for each VM instead of being cross-platform.

My preferred way would actually be to have a separate bytecode-to-native translator that you could use to do something like this explicitly beforehand, creating class files that are explicitly built for a specific VM, with possibly the original bytecode in them so that you can run with different VMs too. But that probably comes from my experience: I've been mostly doing Java ME, where it really hurts that the Java compiler isn't smarter about compilation.
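(For what it's worth, that kind of explicit bytecode-to-native translator exists today outside the standard JDK: GraalVM's native-image tool. A rough sketch with an invented class name, just to show the shape of the workflow -- not something Sun shipped:)

```java
// Hypothetical Greeter.java -- illustrative only.
//
// With GraalVM and its native-image tool installed, roughly:
//   javac Greeter.java
//   native-image Greeter
//   ./greeter
// The result is a native executable built ahead of time from the bytecode,
// so there is no interpretation or JIT warm-up at startup (at the cost of
// giving up run-time re-optimisation).
public class Greeter {
    public static void main(String[] args) {
        System.out.println("Hello from an ahead-of-time compiled binary");
    }
}
```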

4 Comments

There is a spot in the classfile for such things; in fact, that was the original intent (store the JITed code as an attribute in the classfile).
@TofuBeer: Thanks for the confirmation. I suspected that might be the case (that's what I would have done), but wasn't sure. Edited to remove that as a possible reason.
I think you hit the nail on the head with your last bullet point. The others could be worked around, but that last part is, I think, the main reason JITed code is not persisted.
The last paragraph about the explicit bytecode-to-native compiler is what you currently have in .NET with NGEN (msdn.microsoft.com/en-us/library/6t9t5wcf(VS.71).aspx).
