I’m pleased to announce the release of Java Buildpack 4.0. This release adds all the typical new integrations and version updates you’d expect from a Java Buildpack release, but it is also the culmination of a major focus on improving how the JVM runs in a containerized environment. I’ll focus on two major changes in this post, but you can read the release notes for more detail on everything else that has changed.
JVM Memory Calculation
One of the major advantages of using the Java Buildpack is its memory calculation facility. Given a container with an enforced memory limit, the buildpack works out how best to divide that limit among the JVM's various memory regions, so that an application uses the maximum available memory without exceeding the limit. The buildpack has done this since its inception, and in this release we've made major improvements to both how the calculation is performed and how it is configured.
Previously, the buildpack examined the memory limit of a container and applied a heuristic to determine the proper memory configuration. Over the years, this heuristic system grew until the number of configuration properties vastly outnumbered the memory regions being configured. In addition, adjusting the amount of memory allocated to any one region was confusing: you had to account for the size of the container, the proportions of memory allocated to the other regions, and the proportion you were trying to change. Compared with the alternative of simply setting -XX:MaxMetaspaceSize, this wasn't a great system. On top of that, we came to realize that under high load the calculation wasn't correct; it didn't account for some memory regions that, while obscure, are still governed by the container memory limit.
With this state of the world (and an amazing investigation by a community member), the team set out to improve memory calculation with two goals:
- The container should never terminate an application for exceeding the memory limit;
- Configuring the memory calculator should use standard Java memory flags.
The new memory calculator now accounts for the following memory regions:
- Heap (-Xmx)
- Metaspace (-XX:MaxMetaspaceSize)
- Thread Stacks (-Xss × thread count)
- Direct Memory (-XX:MaxDirectMemorySize)
- Code Cache (-XX:ReservedCodeCacheSize)
- Compressed Class Space (-XX:CompressedClassSpaceSize)
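The idea behind the calculation boils down to simple subtraction. Here is a minimal sketch of that budgeting logic; this is not the buildpack's actual implementation, and the region sizes used below are illustrative stand-ins for JVM defaults and buildpack configuration:

```java
// A minimal sketch of the budgeting idea behind the new memory calculator.
// NOT the buildpack's implementation; sizes are illustrative stand-ins.
public class MemoryBudget {

    public static long mb(long n) { return n * 1024 * 1024; }

    // Non-heap regions are sized first; the heap receives whatever
    // remains of the container's memory limit.
    public static long heapFor(long containerLimit, long metaspace,
                               long threadCount, long stackSize,
                               long codeCache, long directMemory,
                               long compressedClassSpace) {
        long nonHeap = metaspace + threadCount * stackSize + codeCache
                + directMemory + compressedClassSpace;
        return containerLimit - nonHeap;
    }

    public static void main(String[] args) {
        // A 1G container with 200 threads x 1M stacks, 240M code cache,
        // and illustrative sizes for the remaining regions.
        long heap = heapFor(mb(1024), mb(100), 200, mb(1), mb(240), mb(10), mb(20));
        System.out.println("Heap: " + (heap / mb(1)) + "M");
    }
}
```

With these example figures, 570M of non-heap regions leaves 454M of heap in a 1G container, which is why small containers become tight once every region is accounted for.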
One major result of accounting for all of these memory regions is that applications which previously ran in small containers (512M or less) will likely no longer fit. Where possible, the memory calculator uses the JVM defaults, so 200 Tomcat connector threads × 1M thread stacks plus 240M of reserved code cache put you at 440M of memory before you've accounted for heap, metaspace, or anything else. We believe that most applications will run nicely in the Cloud Foundry default container size of 1G without any modifications; however, if you believe that your application doesn't need these JVM defaults, you can now configure those memory regions directly with standard JVM options. This would typically be done with the CLI, for example cf set-env <APP> JAVA_OPTS '-XX:ReservedCodeCacheSize=100M'.
The second major result of these changes is that we now size all non-heap memory regions first and leave the remainder for the heap. This may seem counterintuitive (everyone wants more heap), but it has beneficial results. Taking the example from the previous paragraph, scaling the code cache down from 240M to 100M saves you 140M. Because all other memory regions stay the same size, that 140M goes directly to the heap. Likewise, if you scaled your container memory from 1G to 2G, the entire additional 1G would be allocated to the heap. This new system means you can easily reason about where any “additional” memory will go: once you've accounted for the memory regions that stay more or less constant in size, enlarging your container (cf scale <APP> -m 2G) is guaranteed to give your application more heap, more breathing room, and fewer garbage collections.
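That "the heap gets the remainder" invariant can be checked with a little arithmetic. The figures in this sketch are hypothetical, not buildpack output, but they show that shrinking one non-heap region, or growing the container, moves exactly that difference into the heap:

```java
// Hypothetical figures illustrating the "heap gets the remainder" rule.
public class HeapRemainder {
    public static final long M = 1024 * 1024;

    // Heap is whatever the container limit leaves after non-heap regions.
    public static long heap(long containerLimit, long nonHeap) {
        return containerLimit - nonHeap;
    }

    public static void main(String[] args) {
        long otherRegions = 330 * M;      // stacks, metaspace, etc. (illustrative)
        long defaultCodeCache = 240 * M;
        long reducedCodeCache = 100 * M;

        long before = heap(1024 * M, otherRegions + defaultCodeCache);
        long after  = heap(1024 * M, otherRegions + reducedCodeCache);
        // Shrinking the code cache by 140M grows the heap by exactly 140M.
        System.out.println((after - before) / M + "M more heap");

        long scaled = heap(2048 * M, otherRegions + defaultCodeCache);
        // Doubling the container moves the entire extra 1G into the heap.
        System.out.println((scaled - before) / M + "M more heap");
    }
}
```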
For more detailed information about how the memory calculator works, please see the documentation.
JVM Out of Memory Behavior
As an ever-greater number of applications run on Cloud Foundry, we've seen developers request more diagnostic capabilities for dealing with failure. In the past we've added support for debugging, JMX, and YourKit profiling, but capturing a heap dump as an application crashes has always eluded us. There are two major reasons for this. First, the file system inside a container is typically smaller than the memory used by the application, so it isn't large enough to hold the heap dump (Cloud Foundry defaults to a 1G file system which must also contain the application; a 1G heap dump won't fit). Second, even if the disk were large enough, the container is recycled immediately after the crash and the heap dump is lost.
Given those constraints (and the suggestion of yet another community member), we now print a histogram of the heap to the logs when the JVM encounters a terminal failure.
```
ResourceExhausted! (1/0)
| Instance Count | Total Bytes | Class Name                 |
| 17802          | 305280896   | [B                         |
| 47937          | 7653024     | [C                         |
| 14634          | 1287792     | Ljava/lang/reflect/Method; |
| 46718          | 1121232     | Ljava/lang/String;         |
| 8436           | 940992      | Ljava/lang/Class;          |
...

Memory usage:
   Heap memory: init 373293056, used 324521536, committed 327155712, max 331874304
   Non-heap memory: init 2555904, used 64398104, committed 66191360, max 377069568

Memory pool usage:
   Code Cache: init 2555904, used 15944384, committed 16384000, max 251658240
   Metaspace: init 0, used 43191560, committed 44302336, max 106274816
   Compressed Class Space: init 0, used 5262320, committed 5505024, max 19136512
   PS Eden Space: init 93847552, used 41418752, committed 41418752, max 46137344
   PS Survivor Space: init 15204352, used 34072320, committed 36700160, max 36700160
   PS Old Gen: init 249036800, used 249034640, committed 249036800, max 249036800
```
While this isn’t nearly as complete as a true heap dump, we believe this will give developers a good idea of where to look for memory leaks outside of the container, using better diagnostic tools. If all else fails, you can always attach YourKit to your container for a true heap dump.
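While the histogram itself comes from the agent, the "Memory usage" and "Memory pool usage" figures are standard JVM management data. As a small standalone sketch (not part of the buildpack), you can read the same per-pool init/used/committed/max numbers from any running JVM:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

// Prints the same per-pool figures that appear in the output above,
// using the JVM's standard management beans.
public class PoolUsage {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            System.out.printf("%s: init %d, used %d, committed %d, max %d%n",
                    pool.getName(), u.getInit(), u.getUsed(),
                    u.getCommitted(), u.getMax());
        }
    }
}
```

The pool names you see (e.g. "Code Cache", "PS Old Gen") depend on the JVM version and garbage collector in use.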
For more information about how jvmkill works, please see the documentation.
We recognize these changes mean that some current deployments will require modification to work with Java Buildpack 4.0. In order to allow a window for applications to upgrade, we’re going to be maintaining both the 4.x and 3.x generations in parallel for a while. The Cloud Foundry default will stay 3.x, and releases will continue to be made against that line with every bug fix and improvement that the 4.x line gets. The 4.x line will be available and I encourage you to start testing against it.
During this window, we're looking for as much feedback from real-world applications as we can get, and we're open to changing how the memory calculator and the out-of-memory behavior work based on that feedback. When we're comfortable that enough real-world applications run successfully on 4.x, we'll make the announcement and switch Cloud Foundry to default to it.
Finally, I’d like to say a special thanks to the following people: First, Andrew Thorburn who did an incredible job chasing down exactly where and when JVMs were allocating memory that caused them to be killed by the container. Second, Rafael de Albuquerque Ribeiro for suggesting a useful alternative to getting memory information out of the container. Finally, I’d like to recognize Glyn Normington and Chris Frost who did nearly all the hard work of taking these ideas and getting them implemented in the buildpack itself.