Jekyll2026-01-14T08:39:58+00:00https://inside.java/feed.xmlinsidejavaNews and views from members of the Java team at OracleOne Giant Leap: 95% Less Sampling Cost2026-01-14T00:00:00+00:002026-01-14T00:00:00+00:00https://inside.java/2026/01/14/user-cpu-time-jvm]]>["JonasNorlinder"]The Static Dynamic JVM – A Many Layered Dive #JVMLS2026-01-11T00:00:00+00:002026-01-11T00:00:00+00:00https://inside.java/2026/01/11/JVMLS-Static-Dynamic-JVM

Dive deep into the Java Virtual Machine and discover how it masterfully balances static analysis with dynamic execution. John Rose explores what makes the JVM both powerful and efficient, from theoretical computer science to real-world optimization techniques.

Make sure to check the JVMLS 2025 playlist.

]]>
["JohnRose"]
Run Into the New Year with Java’s Ahead-of-Time Cache Optimizations2026-01-09T00:00:00+00:002026-01-09T00:00:00+00:00https://inside.java/2026/01/09/run-aot-cacheAs a new year begins, turn your focus to boosting your Java application performance by applying Ahead-of-Time (AOT) cache features added in recent JDK releases. This article guides you through using AOT cache optimizations in your application, thereby minimizing startup time and achieving faster peak performance.

What Is the Ahead-of-Time Cache in the JDK?

JDK 24 introduced the Ahead-of-Time (AOT) cache, a HotSpot JVM feature that stores classes after they are read, parsed, loaded, and linked. An AOT cache is specific to an application, and you can reuse it in subsequent runs of that application to improve the time to the first functional unit of work (startup time) and the time to peak performance (warm-up time).

To generate an AOT cache, you need to perform two steps:

  • Training by recording observations of the application in action. You can trigger a recording by setting -XX:AOTMode=record and giving a destination for the configuration file via -XX:AOTConfiguration:
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
  -cp app.jar com.example.App ...

This step aims to answer questions like “Which classes does the application load and initialize?”, “Which methods become hot?” and store the results in a configuration file (app.aotconf).

  • Assembly then converts the observations from the configuration file into an AOT cache (app.aot).
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
  -XX:AOTCache=app.aot -cp app.jar

To benefit from a better startup time, run the application by pointing the -XX:AOTCache flag to the AOT cache you created in the previous steps.

java -XX:AOTCache=app.aot -cp app.jar com.example.App ...

The improved startup time is the result of shifting work, usually done just-in-time while the program runs, earlier to the assembly step, which creates the cache. Thereafter, the program starts faster in the deployment run because its classes are immediately available from the cache.

The three-step workflow (train+assemble+run) became available starting with JDK 24, via JEP 483: Ahead-of-Time Class Loading & Linking, the first feature merged from the research done by Project Leyden. A set of benchmarks demonstrates the effectiveness of this feature and other Leyden performance-related ones, as shown in Figure 1.

JDK 24 AOT Cache Benchmarks Figure 1: AOT Cache Benchmarks as of JDK 24

In JDK 25, the changes in JEP 515 - Ahead-of-Time Method Profiling enabled frequently executed method profiles to become part of the AOT cache. This addition improves application warm-up by allowing the JIT to start generating native code immediately at application startup. The new AOT feature does not place additional constraints on your application execution; just use the existing AOT cache creation commands. Moreover, benchmarks showed improved startup time too (Figure 2).

JDK 25 AOT Cache Benchmarks Figure 2: AOT Cache Benchmarks as of JDK 25

JDK 25 also simplified generating an AOT cache by making it possible in a single step, by setting an argument for the -XX:AOTCacheOutput flag:

# Training Run + Assembly Phase
java -XX:AOTCacheOutput=app.aot \
     -cp app.jar com.example.App ...

When you pass -XX:AOTCacheOutput=[cache location], the JVM creates the cache at its shutdown. JEP 514 - Ahead-of-Time Command-Line Ergonomics introduced this two-step process for creating and using the AOT cache.

# Training Run + Assembly Phase
java -XX:AOTCacheOutput=app.aot \
     -cp app.jar com.example.App ...
     
# Deployment Run
java -XX:AOTCache=app.aot -cp app.jar com.example.App ...

The two-step workflow may not work as expected in resource-constrained environments. The sub-invocation that creates the AOT cache uses its own Java heap, with the same size as the heap used for the training run. As a result, the memory needed to complete the one-step cache generation is double the heap size specified on the command line. For example, if the java -XX:AOTCacheOutput=... command also specifies -Xms2g -Xmx2g, requesting a 2GB heap, then the environment needs 4GB to complete the workflow.
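The doubling above is simple to budget for. A quick sketch of the arithmetic, using the hypothetical 2GB heap from the example:

```shell
# Hypothetical sizing for the one-step workflow: the training JVM and the
# assembly sub-invocation each get a heap of the size given on the command line.
HEAP_GB=2
TOTAL_GB=$((HEAP_GB * 2))   # training heap + assembly heap
echo "Requested heap: ${HEAP_GB} GB; memory needed: ${TOTAL_GB} GB"
# → Requested heap: 2 GB; memory needed: 4 GB
```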

A division of steps, as in a three-phase workflow, may be a better choice if you intend to deploy an application to small cloud tenancies. In such cases, you could run the training on a small instance while creating the AOT cache on a larger one. That way, the training run reflects the deployment environment, while the AOT cache creation can leverage the additional CPU cores and memory of the large instance.
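Using the same app.jar and com.example.App placeholders as earlier, splitting the three phases across instances could be sketched as follows (instance assignments are illustrative):

```shell
# Phase 1 - training on a small instance that mirrors the deployment environment
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
     -cp app.jar com.example.App ...

# Phase 2 - assembly on a larger instance with more CPU cores and memory
# (copy app.aotconf over from the small instance first)
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
     -XX:AOTCache=app.aot -cp app.jar

# Phase 3 - deployment run on the small instance, using the assembled cache
java -XX:AOTCache=app.aot -cp app.jar com.example.App ...
```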

Regardless of which workflow you choose, let’s take a closer look at AOT cache requirements and how to set it up to serve your application needs best.

How to Craft the AOT Cache Your Application Needs

Training and production runs should produce consistent results, only faster in deployment. To achieve that, the assembly phase mediates between the training and production runs (Figure 3).

Train-Assembly-Deploy Figure 3: Training / Assembly / Deployment

To keep training runs and subsequent runs consistent, make sure that:

  • The timestamps of the JARs are preserved across training runs.
  • Your training runs and the production run use the same JDK release on the same hardware architecture and operating system.
  • Your application's behavior in the training run resembles its expected behavior in production (e.g. the most heavily used areas of your application in the training run are also the most heavily used areas in production).
  • You provide the classpath for your application as a list of JARs, without any directories, wildcards, or nested JARs.
  • The production run classpath is a superset of the training one.
  • You do not use JVMTI agents that call the AddToBootstrapClassLoaderSearch and AddToSystemClassLoaderSearch APIs.
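The classpath requirement above rules out wildcards and directories; a sketch with hypothetical library names:

```shell
# Good: an explicit list of JARs (lib-a.jar and lib-b.jar are hypothetical)
java -XX:AOTCache=app.aot -cp app.jar:lib-a.jar:lib-b.jar com.example.App ...

# Avoid: directories or wildcards on the classpath, which break the requirement
# java -XX:AOTCache=app.aot -cp "lib/*" com.example.App ...
```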

To check if your JVM is correctly configured to use the AOT cache, you can add the option -XX:AOTMode=on to the command line:

java -XX:AOTCache=app.aot -XX:AOTMode=on \
    -cp app.jar com.example.App ...

The JVM will report an error if the AOT cache does not exist or if your setup disregards any of the above requirements. The features introduced in JDK 24 and 25 did not support the Z Garbage Collector (ZGC). Yet, this limitation no longer applies as of JDK 26, with the introduction of JEP 516: Ahead-of-Time Object Caching with Any GC.
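As a sketch of what the JDK 26 change enables, combining the cache with ZGC then requires only the usual flags (assuming a JDK 26 runtime and the app.aot cache created earlier):

```shell
# JDK 26+ (JEP 516): the AOT cache now also works when ZGC is selected
java -XX:+UseZGC -XX:AOTCache=app.aot \
     -cp app.jar com.example.App ...
```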

To ensure the AOT cache works effectively in production, the training run and all following runs must be essentially identical. Training runs are a way of observing what an application is doing across different runs, and they come primarily in two types:

  • integration tests, which run at build time
  • production workloads, which require training in production.

Avoid loading unused classes during the training step and skip rich test frameworks to keep the AOT cache minimal. Mock external dependencies in training to load needed classes, but be aware that this may introduce extra cache entries. AOT cache effectiveness depends on how closely the training run matches production behavior. If you rebuild the application or upgrade its JDK, you must regenerate the AOT cache.

Note also that the AOT cache is only valid for the current state of your application. Making code changes, adding libraries, or updating existing libraries will invalidate your cache, so you need to regenerate it with every new build of your application. Otherwise, you risk crashes or undefined behavior (such as methods missing from the cache). If you notice issues, or are not getting the expected performance improvement, try running the application with -Xlog:aot,class+path=info to monitor what it loads from the cache.
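A deployment run with that logging enabled might look as follows, reusing the same placeholders as earlier:

```shell
# Log AOT cache usage and classpath checks while the application runs
java -Xlog:aot,class+path=info \
     -XX:AOTCache=app.aot -cp app.jar com.example.App ...
```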

Tips for Efficient Training Runs

There is a trade-off between performance and how easy it is to run the training. Using a production run for training is not always practical, especially for server applications, which can create log files, open network connections, access databases, etc. For such cases, it is better to make a synthetic training run that closely resembles actual production runs.

Aligning the training run to load the same classes as production helps achieve an optimized startup time. To determine which classes your training run loads, you can append the -verbose:class flag when launching it, or observe the loaded classes by enabling the jdk.ClassLoad JFR event and profiling your application with it:

# configure the event
jfr configure jdk.ClassLoad#enabled=true

# profile as soon as your application launches
java -XX:StartFlightRecording:settings=custom.jfc,duration=60s,filename=/tmp/AOT.jfr \
  -cp app.jar com.example.App ...

# profile a running application identified by its process id (pid)
jcmd <pid> JFR.start settings=custom.jfc duration=60s filename=/tmp/AOT.jfr

Once you have a recording file, you can check the loaded classes, as well as which methods your application frequently uses, by running the following jfr commands:

# print jdk.ClassLoad events from a recording file
jfr print --events "jdk.ClassLoad" /tmp/AOT.jfr

# view frequently executed methods
jfr view hot-methods /tmp/AOT.jfr

If you determine that frequently used methods are not exercised by your training run, exercise them. You can work out the standard modes of your application using a temporary file directory, a local network configuration, and a mocked database, if needed. To keep the cache minimal, use smoke tests that cover typical startup paths rather than extensive suites or stress/regression tests.

Takeaways

To conclude, crafting an AOT cache for better performance requires you to consider:

  • Cache validity or staleness; if you rebuild the application or upgrade the JDK, you must regenerate the AOT cache.
  • Portability, as the AOT cache is JVM and platform-specific.
  • Startup path coverage; the training run must cover typical application startup paths. If your training run is shallow, you will not warm up enough, and the benefits of the cache will be limited.
  • Operational setup, as both the application JAR and the AOT cache must run with least privilege and follow immutable-infrastructure practices.

Application performance is an ongoing task because software evolves: new features are added, libraries and frameworks change, workloads grow, and infrastructure shifts (e.g., to the cloud, container orchestration, etc.). And depending on those evolutions, your application performance goals evolve too. Invest in training your application today and keep up with JDK releases to unlock available optimizations, as performance improves with each of them!

The content of this article was initially shared in The JVM Programming Advent Calendar.

]]>
["Ana-MariaMihalceanu"]
Java’s Plans for 2026 - Inside Java Newscast #1042026-01-08T00:00:00+00:002026-01-08T00:00:00+00:00https://inside.java/2026/01/08/Newscast-104

In 2026, Java keeps evolving: Project Valhalla is gunning for merging its value types preview in the second half of this year; Babylon wants to incubate code reflection; Loom will probably finalize the structured concurrency API; Leyden plans to ship AOT code compilation; and Amber hopes to present JEPs on constant patterns and pattern assignments. And those are just the most progressed features - more are in the pipeline and discussed in this episode of the Inside Java Newscast.

Make sure to check the show-notes.

]]>
["NicolaiParlog"]
The Inside Java Newsletter: JavaOne Sessions and Keynotes!2026-01-05T00:00:00+00:002026-01-05T00:00:00+00:00https://inside.java/2026/01/05/Inside-Java-Newsletter]]>["JimGrisanzio"]Episode 43 “Predictability or Innovation? Both!” with Georges Saab2025-12-26T00:00:00+00:002025-12-26T00:00:00+00:00https://inside.java/2025/12/26/Podcast-043



This Inside Java Podcast takes a meta approach. Instead of focusing on specific features, it explores the bigger picture: What are the right problems for Java to tackle? What are the current and future challenges for the Java platform? Why is predictability so important for Java, and what’s driving the recent focus on learners and students?

Nicolai Parlog discusses these topics with Georges Saab, Senior Vice President of the Java Platform Group and Chair of the OpenJDK Governing Board.


Make sure to also check the Duke’s Corner podcast on dev.java.


Additional resources

For more episodes, check out Inside Java, our YouTube playlist, and follow @Java on Twitter.

Contact us here.

]]>
["GeorgesSaab", "NicolaiParlog"]
Virtual Threads in the Real World: Fast, Robust Java Microservices with Helidon2025-12-21T00:00:00+00:002025-12-21T00:00:00+00:00https://inside.java/2025/12/21/Virtual-Threads-Robust-Java-Microservices

In 2022, the Helidon team made a significant decision: re-write our Netty-based Helidon Web Server to be fully implemented using virtual threads. The result is Helidon 4: the first microservices framework designed from the ground up for virtual threads. We are thrilled with the results, and have learned a few things along the way. Come join us as we introduce you to the benefits of virtual threads, share our lessons learned, give a few tips-and-tricks, and highlight what to look forward to in Java 24 and beyond.

Make sure to check the JavaOne 2025 playlist.

]]>
Java’s 2025 in Review - Inside Java Newscast #1032025-12-18T00:00:00+00:002025-12-18T00:00:00+00:00https://inside.java/2025/12/18/Newscast-103

With 2025 coming to a close, let’s summarize Java’s year and look at the current state of the six big OpenJDK projects as well as a few other highlights: Project Babylon is still pretty young and hasn’t shipped a feature or even drafted a JEP yet. Leyden, not much older, has already shipped a bunch of startup and warmup time improvements, though. Amber is currently taking a breather between its phases 1 and 2 and just like projects Panama and Loom only has a single, mature feature in the fire. And then there’s Project Valhalla…

Make sure to check the show-notes.

]]>
["NicolaiParlog"]
Quality Outreach Heads-up - JDK 26: Jlink Compression Plugin Now Handles -c Option Correctly2025-12-16T00:00:00+00:002025-12-16T00:00:00+00:00https://inside.java/2025/12/16/Quality-Heads-Up

The OpenJDK Quality Group is promoting the testing of FOSS projects with OpenJDK builds as a way to improve the overall quality of the release. This heads-up is part of the quality outreach sent to the projects involved. To learn more about the program, and how to join, please check here.

The jlink tool guide states that you can set compression level using the --compress option or its short form -c.

jlink --module-path $JAVA_HOME/jmods/ \
    --add-modules java.base \
    --compress=zip-1 \
    --output runtime

Furthermore, while not documented, a command like the one below produces a runtime image compressed with the deprecated compression level 2.

jlink --module-path $JAVA_HOME/jmods/ \
    --add-modules java.base \
    -c \
    --output runtime

Yet, prior to JDK 26, using jlink with -c followed by an argument would result in an error. JDK 26 corrects this behavior, and running jlink with either --compress or -c now produces the same result:

jlink --module-path $JAVA_HOME/jmods/ \
    --add-modules java.base \
    -c zip-1 \
    --output runtime

Call to Action

Now that the behavior of -c and --compress is aligned for jlink, both options require an argument. If you use jlink -c without specifying a compression-level argument, it is recommended to either select a level or omit the option altogether. In the latter case, the resulting runtime image is compressed with the default compression level (zip-6).
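For completeness, omitting the compression option entirely looks like the following; the image is then compressed at the default level (zip-6):

```shell
# No -c/--compress given: jlink uses the default compression level (zip-6)
jlink --module-path $JAVA_HOME/jmods/ \
    --add-modules java.base \
    --output runtime
```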

This fix has been incorporated into the 26-ea mainline build available here. For more details on this change, check JDK-8321139.

~
]]>
["Ana-MariaMihalceanu"]
Valhalla? Python? Withers? Lombok? - Ask the Architects at JavaOne’252025-12-15T00:00:00+00:002025-12-15T00:00:00+00:00https://inside.java/2025/12/15/JavaOne-Ask-Java-Architects

Should Java get rid of semicolons and what are the next steps for projects Valhalla and Loom? How does Java hold up against Python and what’s the hold-up with record withers? The Java architects Ron Pressler, Paul Sandoz, Brian Goetz, Mark Reinhold, Dan Heidinga, Viktor Klang, Gary Frost, Alex Buckley, and John Rose sat down at JavaOne 2025 to answer these and many more audience questions in the Ask The Architect session.

Make sure to check the JavaOne 2025 playlist.

]]>
[""]