<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[graalvm - Medium]]></title>
        <description><![CDATA[GraalVM team blog - https://www.graalvm.org - Medium]]></description>
        <link>https://medium.com/graalvm?source=rss----7122626bf34b---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>graalvm - Medium</title>
            <link>https://medium.com/graalvm?source=rss----7122626bf34b---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 06 Apr 2026 03:43:49 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/graalvm" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Inside trivago’s GraalVM Migration: Native Image for GraphQL at Scale]]></title>
            <link>https://medium.com/graalvm/inside-trivagos-graalvm-migration-native-image-for-graphql-at-scale-912bca9df841?source=rss----7122626bf34b---4</link>
            <guid isPermaLink="false">https://medium.com/p/912bca9df841</guid>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[native-image]]></category>
            <category><![CDATA[graalvm]]></category>
            <dc:creator><![CDATA[Alina Yurenko]]></dc:creator>
            <pubDate>Thu, 02 Apr 2026 09:13:23 GMT</pubDate>
            <atom:updated>2026-04-02T09:13:21.718Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dSpyLheswJEqB5mtkRdXpA.jpeg" /></figure><h3>Introduction</h3><p><a href="https://www.trivago.com/">trivago</a> is one of the world’s largest hotel search platforms, processing billions of daily price, availability, and hotel records to provide the best options across 5 million+ accommodations in 190+ countries. At that scale, reliability and performance are critical.</p><p><strong>Their GraphQL Gateway is the single entry point for all web and native app traffic across 48 microservices.</strong> For a long time, every deployment came with a tax: JVM warm-up caused timeout spikes and slow first requests — a frustrating but widespread issue. trivago’s team decided not to accept it. They migrated the GraphQL microservices to GraalVM Native Image, eliminating warm-up entirely and cutting one service from 43 replicas down to 12 and its CPU usage from 15 cores to 5.</p><p>We asked trivago’s team members, <a href="https://www.linkedin.com/in/hans-puac/"><strong>Hans Puac</strong></a>, Backend Software Engineering Lead, and <a href="https://www.linkedin.com/in/ayush-chaubey-11973623/"><strong>Ayush Chaubey</strong></a>, Backend Software Engineer, to tell us more about the migration process and share their insights with our community.</p><h3>Background</h3><h4>Can you briefly describe the GraphQL Gateway — its tech stack, traffic volumes, and why it is central to trivago’s infrastructure?</h4><p>A bit of history: trivago switched from JSON REST APIs to a GraphQL endpoint in 2019 to address versioning challenges, give clients more flexibility when fetching and aggregating data, and reduce overfetching.
We initially started with a monolithic GraphQL server, which eventually hit performance limits.</p><p>Today, we run a <a href="https://graphql.org/learn/federation/">federated GraphQL</a> architecture: all GraphQL requests first hit a Rust-based gateway using the <a href="https://github.com/apollographql/router/blob/dev/README.md">Apollo Router</a>, which acts as the central entry point and orchestrates calls to the services behind it. Our graph is split into 40+ subgraphs. These subgraphs run on the JVM or GraalVM (Java 25), use Spring Boot, and implement GraphQL via Netflix DGS. The codebase is written in Kotlin, built with Gradle, and deployed through CI/CD pipelines to GCP.</p><p>Traffic volume varies significantly across subgraphs. For example, during peak traffic in a single region, our translations subgraph exceeds 9,000 requests per second.</p><p>The GraphQL Gateway is central to trivago’s infrastructure because it provides a single, unified interface for all clients (web and native apps) to query the data they need. You can find out more about it in <a href="https://tech.trivago.com/">trivago’s engineering blog</a>.</p><h4>Before GraalVM, did you try other ways to reduce warm-up time before deciding to explore Native Image?</h4><p>Before exploring GraalVM Native Image, we tried several approaches to reduce warm-up time.</p><p>For example, we developed a custom class loader to eagerly load many commonly used classes right after the application starts. We also added a startup routine where the service triggers an HTTP call to itself, so the web server initializes thread pools and other internal components early. 
In addition, we adjusted our rollout strategy on GCP to ramp traffic up gradually on new pods, rather than sending them full peak RPS immediately.</p><h3>Discovering GraalVM</h3><h4>How did your team first learn about GraalVM and decide to use it?</h4><p>GraalVM first came onto our radar in early 2021 through blog posts, such as one on the <a href="https://spring.io/blog/2020/11/23/spring-native-for-graalvm-0-8-3-available-now">Spring blog</a>. At the time, we created a time-boxed investigation task, but it ended up sitting in the backlog until 2023. That year, two of our developers attended the Spring I/O conference, where GraalVM was highlighted in the <a href="https://youtu.be/IgmeFeTU1a4?list=PLe6FX2SlkJdTlHjktJqUWaFtaRBOkZ8JZ&amp;t=1224">keynote</a>, which brought the topic back into focus.</p><p>For a long time, our SREs had been dealing with the JVM’s warm-up behavior, especially slow first requests after startup. GraalVM promised improvements there, so we decided to validate it with a proof-of-concept.</p><p>At trivago, one of our core values is the “power of proof”, so we aimed to migrate a single service first and run a side-by-side comparison against the JVM version before committing further. To keep the initial effort low, we accepted a few shortcuts during the tryout, for example, building the native binary on a developer machine, manually creating and deploying the container image, to gather real numbers. 
The main requirement was that the GraalVM build behaved identically to the JVM version.</p><p>Once we saw the initial results, we committed to adopting it properly across the codebase, including the necessary changes to our CI/CD setup.</p><h3>Migration</h3><h4>Can you briefly walk us through the first migration, including the thought process and lessons learned?</h4><p>Our first migration focused on a smaller service that wasn’t heavily tied to external dependencies and was less complex, making it a good candidate for early migration.</p><p>Here’s how the journey unfolded:</p><ul><li>Native Image Build Issues. We quickly discovered that some of our core frameworks and libraries weren’t fully compatible with GraalVM Native Image. That meant either contributing fixes upstream in open source or finding alternative, compatible libraries. For example, when we ran into compatibility issues with Netflix DGS, we addressed them through small upstream contributions, like <a href="https://github.com/Netflix/dgs-framework/pull/1904">this change</a> to make reflection work and <a href="https://github.com/Netflix/dgs-framework/pull/1891">this change</a> to ensure certain classes are set up in the way GraalVM requires. These contributions helped enable GraalVM Native Image support. On top of that, we hit build-time exceptions related to class initialization order. The error messages often suggested whether a class should be initialized at build time or at runtime, but getting it right still required several build–test–repeat cycles.</li><li>Logging Framework Switch. Our initial use of Log4j posed a challenge, as it was not compatible with GraalVM native images. To resolve this, we transitioned to Logback, which integrates seamlessly with native images and ensured continued smooth logging across our services.</li><li>Testing Framework Adjustments. 
Our test suites previously leaned heavily on Mockk, but since GraalVM native images don&#39;t support dynamic mocking, we shifted many tests to use technologies like <a href="https://testcontainers.com/">Testcontainers</a>. In scenarios where mocking was essential, we disabled those tests during native image builds using the @DisabledInAotMode annotation.</li><li>Runtime Errors. Even after a successful build, we faced runtime exceptions — mainly because GraalVM’s static analysis missed some classes used reflectively. We addressed this by using Spring’s <a href="https://docs.spring.io/spring-boot/reference/packaging/native-image/advanced-topics.html#packaging.native-image.advanced.custom-hints">RuntimeHintsRegistrar</a> API to explicitly register runtime hints, and leveraging GraalVM’s tracing agent during tests to automatically discover and register necessary classes.</li></ul><p><strong>Key Learnings:</strong></p><ul><li>Start with less complex services when adopting GraalVM Native Image.</li><li>Prepare for hands-on debugging, as fixes often require several iterations.</li><li>Invest time in automated tests — tools like the tracing agent rely heavily on robust test coverage.</li><li>Community engagement (open-source contribution) can sometimes be the fastest path to long-term solutions.</li></ul><h4>Did the migration become easier over time, for example when migrating service #14 compared with service #1?</h4><p>Yes, migrating later services became significantly easier compared to our initial efforts. By the time we reached service #14, we had established a robust framework for managing custom hints and had streamlined our build process. Our growing familiarity with GraalVM’s configuration options and performance tuning flags allowed us to optimize native images more effectively right from the start.
The lessons learned and best practices developed early on translated into faster, smoother migrations for subsequent services.</p><h4>What changes, if any, were needed in your CI/CD pipeline and test strategy to support Native Image builds?</h4><p>To support Native Image builds, we had to make a few practical changes to our CI/CD pipeline compared to our standard JVM setup.</p><p>First, the build stage needed more powerful builders/runners. Native Image compilation is significantly more resource-intensive and requires much more RAM than a regular JVM build, so we adjusted our CI machine sizing accordingly. To keep feedback cycles fast, we also differentiate build modes: PR preview builds use GraalVM’s faster -Ob quick-build mode, while production builds run with -O3 for maximum performance.</p><p>Second, our container build process changed. For JVM services we previously built images with Jib, but for Native Image we had to package the generated binary into a Docker image ourselves as part of the pipeline.</p><p>We also used a distroless base image, which meant we had to explicitly add a few runtime dependencies that were previously available implicitly. For example, we had to include aarch64-linux-gnu/libz.so.1 ourselves.</p><p>Finally, architecture mattered more: Native Image builds are tied to the target platform. We originally built on amd64, but since we now run on arm64, we also had to build on arm64 in CI. With JVM-based services, multi-arch images are much easier to produce because the same artifact can typically run across architectures.</p><h3>The Results</h3><h4><strong>The numbers you shared (43 → 12 replicas, 15 → 5 CPU cores) are really impressive. 
Can you share some benchmarks or performance measurements?</strong></h4><p>Since this migration happened a while ago, we only have a few screenshots from that time rather than a complete set of benchmarks.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JIdD6VcJ7UInMh0YRTDaHA.png" /><figcaption>Replicas / capacity</figcaption></figure><p>In the first screenshot, you can see the replicas regularly maxing out before the change, and then dropping significantly after the release.</p><p>Because some deployments are assigned only a fraction of a CPU core, we also built a dashboard panel that normalizes throughput by CPU: <strong>requests handled per full CPU core</strong>. That metric increased substantially after the migration.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/740/1*XTCk-Xi-eCYsPll502tZ9w.png" /><figcaption>Memory / heap</figcaption></figure><p>The second screenshot shows the impact on heap usage. The top panel contains the “classic” JVM heap metric we used before the migration, so it stops at the point where we switched the service to Native Image. The lower panel shows our replacement metric (“MXBean JVM Heap”), which we introduced afterwards and which continues to report heap usage for the native build.</p><p>A couple of caveats: the values shown are aggregated averages across multiple pods, not a single instance. Even with that in mind, <strong>the reduction is still clearly visible.
</strong>Overall heap usage dropped significantly after the migration.</p><p>This chart also highlights an observability detail we’ll mention later: with Native Image we initially lost some of the heap/JVM metrics we previously got automatically, so we had to collect and expose parts of them ourselves.</p><h4>Were there any unexpected benefits or trade-offs that you didn’t anticipate going in?</h4><p>While we anticipated faster start-up times and reduced memory usage after migrating to GraalVM native image, the process yielded a few additional surprises:</p><h4>Unexpected Benefits:</h4><ul><li><strong>Lower CPU Utilization:</strong> Beyond our baseline expectations, we observed a noticeable decrease in CPU usage. This allowed us to reduce CPU core allocations for most services, translating to both cost and resource savings.</li><li><strong>Smaller Image Size:</strong> The resulting native images were much leaner and took up less storage space, which was a welcome side effect for deployments and distribution.</li></ul><h4>Unexpected Trade-offs:</h4><ul><li><strong>Longer Build Times:</strong> Building native images took significantly longer than standard JVM builds. While the increased build time was the most noticeable trade-off, it was manageable within our CI/CD pipelines given the substantial runtime performance improvements. Importantly, this does not impact day-to-day development: developers typically run the JVM version locally, and PR preview builds use GraalVM’s faster -Ob quick-build mode.</li><li><strong>Additional Configuration Effort:</strong> Another challenge was the need to provide custom runtime hints and rely heavily on GraalVM’s tracing agent to achieve comprehensive coverage of code paths — especially for reflective operations. This required additional effort during both development and testing.</li></ul><h4>We often get questions about observability and monitoring of native images.
Can you share what you use in your system?</h4><p>We use largely the same observability stack as before. Most metrics are exposed out of the box via Spring Boot with Micrometer and the Prometheus exporter, and we complement those with custom application metrics.</p><p>One change was needed for services built with GraalVM Native Image: we no longer got the usual JVM heap/memory, GC, and thread metrics that were available on the JVM. To compensate, we added our own metrics based on the MemoryMXBean, PlatformMXBean, and ThreadMXBean.</p><p>It was also helpful to enable com.sun.management.jmxremote for the GraalVM builds. That allowed us to inspect live pod metrics at higher resolution using tools like VisualVM.</p><h3>Future Plans</h3><h4>You still have 34 services to go. How are you prioritizing the next migrations, and do you have any other future plans related to this?</h4><p>After migrating the highest-traffic and highest-impact services, we’re continuing with the same approach we’ve had from the start: prioritizing what makes the most sense to migrate next, based on impact and effort, not speed for its own sake.</p><p>Recently, we completed and rolled out the Java 21 → 25 upgrade so we could take advantage of GraalVM’s -O3 optimization, which is now running in production. Next, we&#39;re planning the Spring Boot 4 and DGS 11 upgrades. We also want to re-evaluate GraalVM <a href="https://www.graalvm.org/latest/reference-manual/native-image/optimizations-and-performance/PGO/">PGO</a> (profile-guided optimization): in earlier tests it delivered promising results, but it requires additional work to collect and store the generated profiles in a reliable way.</p><p>For the remaining services, we’re prioritizing migrations opportunistically, for example, when a service is already undergoing a larger refactor or other significant changes, we’ll often migrate it to Native Image as part of that work. 
We also move a service up the list when we see a clear performance or cost issue that Native Image can help with. The long-term goal is still a full migration, but at a slower, pragmatic pace.</p><h4>Are you exploring other GraalVM capabilities beyond Native Image, such as integrations for Python, JavaScript, or WebAssembly?</h4><p>It’s definitely an interesting part of GraalVM, but so far we haven’t had a concrete use case for it. Right now, our focus remains on Native Image.</p><h3>Conclusions</h3><h4>What advice and resources would you share with teams looking to adopt GraalVM Native Image?</h4><p>A German saying we like is: <em>“Probieren geht über Studieren.”</em> Roughly translated, it means “Trying beats studying.” Online benchmarks can be helpful, but it’s hard to know how well they will translate to your own codebase and production workloads. Our recommendation is to build a small MVP first, just like we did, to evaluate whether the benefits of GraalVM Native Image justify the cost and effort. And it’s perfectly fine to take shortcuts at this stage; the goal is learning. Get an MVP running, compare it side by side with your JVM version, and then decide how to proceed.</p><p>Be prepared for a learning curve: it takes time to understand GraalVM’s closed-world principle, reflection/type hints, the build process, and tools like the tracing agent. We also noticed that AI tools still often struggle to interpret and debug GraalVM error messages, so it helps to fall back on classic debugging practices: isolate the problem, narrow it down, and change one thing at a time. Often you’ll encounter a chain of issues that need to be resolved step by step, so don’t let frustration win.
The payoff can be significant, as our results show.</p><hr><p><a href="https://medium.com/graalvm/inside-trivagos-graalvm-migration-native-image-for-graphql-at-scale-912bca9df841">Inside trivago’s GraalVM Migration: Native Image for GraphQL at Scale</a> was originally published in <a href="https://medium.com/graalvm">graalvm</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building an AI Travel Assistant with GraalVM, Micronaut, and LangChain4j]]></title>
            <link>https://medium.com/graalvm/building-an-ai-travel-assistant-with-graalvm-micronaut-and-langchain4j-b722c473f9d2?source=rss----7122626bf34b---4</link>
            <guid isPermaLink="false">https://medium.com/p/b722c473f9d2</guid>
            <category><![CDATA[micronaut]]></category>
            <category><![CDATA[oracle-database]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[graalvm]]></category>
            <dc:creator><![CDATA[Alina Yurenko]]></dc:creator>
            <pubDate>Thu, 19 Feb 2026 10:45:14 GMT</pubDate>
            <atom:updated>2026-02-19T10:45:13.605Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xX0vLw2CYXF4EZV5HcjMSw.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@socialcut?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">socialcut</a> on <a href="https://unsplash.com/photos/aerial-view-of-airplane-wing-96A9UTFAMUM?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p>Nowadays, AI provides many great features, from quick answers to smarter search and digital assistants.</p><p>In this blog post, we’ll build a Swiss travel assistant that understands user intent. When a user asks something like: “Provide recommendations for a peaceful mountain resort”, the application will embed the query, run vector similarity search in the database, and return destinations that mean the right thing — even if they don’t contain the exact words. This enables a richer, more advanced search experience.</p><p>We’ll use:</p><ul><li><strong>Micronaut 4</strong> — lightweight JVM framework with compile-time dependency injection</li><li><strong>LangChain4j</strong> — a library for LLM orchestration and tool calling</li><li><strong>Oracle AI Database</strong> — native vector storage and similarity search</li><li><strong>OpenAI</strong> — an LLM for chat and embeddings</li></ul><h3>Project details</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rK9-blhcONQw34w2k3Dgkg.png" /><figcaption>Project architecture</figcaption></figure><p>Here’s what we will build:</p><ul><li>A web app that exposes a /api/chat endpoint</li><li>On startup, it loads a dataset with travel destinations, hotels, and activities</li><li>Each entry gets a vector embedding generated from its description</li><li>Vectors are stored in Oracle Database next to the application data</li><li>At query time, we embed the user question the same way, and run similarity search</li><li>The
LLM proposes which tools to call, while LangChain4j handles execution and message routing.</li></ul><h3>Micronaut: fast by design</h3><p>First, let’s briefly look at Micronaut in case you are new to it.</p><p>Micronaut is a JVM framework for building modern, lightweight applications. It was introduced way back in October 2018 — you can still find Graeme Rocher’s <a href="https://www.youtube.com/watch?v=BL9SsY5orkA">talk</a> presenting it.</p><p>The key idea behind Micronaut is shifting dependency injection and annotation processing to compile time instead of runtime. Why? Traditionally, many frameworks heavily relied on reflection to scan application classes and resolve dependencies upon application start. Such an approach often ended up being time- and memory-consuming. Micronaut moves all of that work to build time — your app starts with everything already wired up. If you are familiar with the key idea behind GraalVM Native Image, you will find this quite similar: shift work to build time, so you can start instantaneously when it actually matters: at run time.</p><p>The performance outcomes of shifting to build time:</p><ul><li>Sub-second startup times on the JVM</li><li>With Native Image, apps can run with as little as a 10MB heap!</li><li>Less reflection makes native compilation fast and easy out of the box</li><li>On top of that, reduced reflection means better performance and smaller binaries.</li></ul><p>So the idea behind Micronaut is to provide a great developer experience and performance at no runtime cost.</p><p>Now, let’s look at some of the key concepts of Micronaut.</p><p>If you know Spring, many things in Micronaut will look familiar.
The programming model is similar — annotations, constructor injection, controllers, and configuration properties work the same way.</p><h4>Compile-Time Dependency Injection</h4><p>When you annotate a class with @Singleton or @Controller, the annotation processor generates the injection logic during compilation, with no need for run-time reflection.</p><p>As a result, missing dependencies fail at compile time rather than at startup, and the generated code is plain Java code that compilers know how to inline and optimize.</p><p>As an example, here’s what your hello world application could look like:</p><pre>import io.micronaut.http.annotation.Controller;<br>import io.micronaut.http.annotation.Get;<br><br>@Controller(&quot;/hello&quot;)<br>public class HelloController {<br><br>    @Get<br>    public String hello() {<br>        return &quot;Hello, World!&quot;;<br>    }<br>}</pre><p>Compile-time annotation processing gives Micronaut one distinctive quality: when adding Micronaut modules, you often need to include both the library dependency and its corresponding annotation processor. For example, adding LangChain4j support means adding both micronaut-langchain4j and micronaut-langchain4j-processor. The processor will take care of generating the necessary code.</p><h4>Standard Annotations</h4><p>Micronaut implements the <a href="https://javax-inject.github.io/javax-inject/">JSR-330</a> dependency injection standard. You can use standard jakarta.inject annotations like @Inject and @Singleton. In addition to familiarity, this makes it easy to write framework-agnostic code and share libraries across projects.</p><h4>Compile-Time Aspect-Oriented Programming</h4><p>Aspect-oriented programming in Micronaut also happens at compile time. Interceptors are integrated into your code during compilation, avoiding runtime proxy generation.
This means AOP features like @Cacheable, @Transactional, and custom interceptors work without reflection overhead.</p><h4>GraalVM Native Image Support</h4><p>Micronaut was designed with GraalVM and AOT compilation in mind. Since all dependency injection and AOP happen at compile time, there’s little to no reflection in the framework to configure for Native Image compilation.</p><p>Together with Micronaut’s compile-time approach, Native Image allows applications to start in milliseconds, use a fraction of the memory, and ship as self-contained executables without requiring a JVM installation.</p><h4>Polyglot Support</h4><p>Interestingly, Micronaut is also no stranger to polyglot programming. It supports Java, Kotlin, and Groovy, with features and annotation processing working across the languages.</p><p>Micronaut also offers first-class support for Graal Languages, such as GraalPy (see <a href="https://guides.micronaut.io/latest/micronaut-graalpy-python-package-maven-java.html">the example</a>), but that&#39;s a story for another time.</p><h3>LangChain4j: Bringing LLMs to Java</h3><p>Now let’s look at LangChain4j, an open-source library that will take care of AI orchestration in our project.</p><p>It provides:</p><ul><li>Chat models (OpenAI, Anthropic, Mistral, etc.)</li><li>Tool calling (function calling) with structured arguments</li><li>Chat memory</li><li>RAG features (embedding models and vector stores)</li><li>Framework integrations (Micronaut/Spring/Quarkus).</li></ul><p>Now, let’s also look at the key building blocks of LangChain4j.</p><h4>ChatModel</h4><p>ChatModel is the common low-level API for interacting with LLMs, which you are probably familiar with. You build a ChatRequest containing ChatMessage objects and receive a ChatResponse with the model&#39;s reply and metadata.
The message types include UserMessage for user input (which can be multimodal), SystemMessage for predefined instructions, and AiMessage representing the model&#39;s response.</p><h4>AI Services: simple declarative approach</h4><p>AI Service lets you define a Java interface and have LangChain4j generate the implementation:</p><pre>import dev.langchain4j.service.SystemMessage;<br>import io.micronaut.langchain4j.annotation.AiService;<br><br>@AiService<br>public interface Assistant {<br><br>    @SystemMessage(&quot;You are a helpful travel assistant&quot;)<br>    String chat(String userMessage);<br>}</pre><p>You then inject and call it like any other service, for example assistant.chat(&quot;Find me a hotel in Zurich&quot;). Under the hood, LangChain4j will create a proxy that converts your method call into the appropriate UserMessage, adds the SystemMessage, calls the ChatModel, and extracts the response text.</p><p>AI Services can also handle chat memory, tool execution, structured output parsing, and RAG.</p><h4>Chat Memory</h4><p>ChatMemory automatically manages conversation history, with optional strategies for retaining recent messages.</p><p>For persistence beyond in-memory storage, you can implement the ChatMemoryStore interface, for example in Oracle Database.</p><h4>Tools</h4><p>Tools are predefined actions that can be invoked by the LLM. While this doesn’t sound too complicated, it’s a significant step forward from the early days of using LLMs in applications, when there was little to no determinism and control. 
Function calling provides structured outputs that match a defined schema, constrains the LLM to actions you’ve explicitly defined, and separates reasoning from execution, where the LLM decides what to do, but your code controls what actually happens.</p><h4>Embedding Models and Stores (RAG)</h4><p>For Retrieval-Augmented Generation (RAG), LangChain4j provides EmbeddingModel and EmbeddingStore abstractions, which we will use to supply our application with predefined data about travel destinations and activities and perform semantic search.</p><h3>Micronaut and LangChain4j in action</h3><p>Micronaut and LangChain4j together enable a nice declarative approach for extending Java applications with AI capabilities:</p><pre>@AiService<br>public interface Assistant {<br>    @SystemMessage(&quot;You are a helpful assistant&quot;)<br>    String chat(@UserMessage String message);<br>}</pre><p>That can be further injected:</p><pre>@Inject Assistant assistant;<br>assistant.chat(&quot;What should I see in Zurich?&quot;);</pre><p>The implementation will be generated at compile time by micronaut-langchain4j-processor.</p><p>Similarly, we can create tools for our travel assistant:</p><pre>@Singleton<br>public class TravelTools {<br><br>    @Tool(&quot;&quot;&quot;<br>        Search for Swiss destinations. Use for location queries.<br>        &quot;&quot;&quot;)<br>    public List&lt;Destination&gt; searchDestinations(String query) {<br>        float[] embedding = embeddingService.embed(query);<br>        return destinationRepository.searchByVector(embedding, 5);<br>    }<br>}</pre><p>In this case LangChain4j will handle the function calling protocol with OpenAI: converting the method signature to a JSON schema, parsing the LLM’s function call response, invoking the method, and feeding the results back.</p><h3>Vector Search with Oracle Database</h3><p>Oracle AI Database supports native vector columns, so you can store embeddings alongside regular relational data. 
In our schema, all entity tables — destinations, hotels, and activities — include a description_embedding column: a 1536-dimensional vector, matching the output of OpenAI’s text-embedding-3-small model.</p><p>When a user sends a request, the application embeds that query and passes the resulting vector to the database. Oracle’s VECTOR_DISTANCE function evaluates cosine distance and orders results by similarity, returning the top matches.</p><p>On startup, the app checks for entries without embeddings and generates them automatically, persisting them in the database for future queries.</p><h3>GraalVM Native Image for performance and efficiency</h3><p>Micronaut’s compile-time approach works great with GraalVM Native Image. Building, deploying, and running such native executables is straightforward:</p><pre>./mvnw package -Dpackaging=native-image<br>./target/swiss-travel-advisor</pre><p>The resulting image:</p><ul><li>Is just 132 MB in size 📦</li><li>Starts and connects to the database in 122 ms 🤯</li><li>Even under load, consumes only around 98 MB of RAM! 🚀</li></ul><h3>Bringing everything together</h3><p>You can follow the steps below, or get the complete example on <a href="https://github.com/alina-yur/swiss-travel-advisor">GitHub</a>.</p><p>First, let’s start our database. You can use Docker or Podman:</p><pre>podman run -d -p 1521:1521 --name travel-app-db \<br>  -e ORACLE_PASSWORD=mypassword \<br>  -e APP_USER=appuser \<br>  -e APP_USER_PASSWORD=mypassword \<br>  gvenzl/oracle-free:latest</pre><p>Once that’s ready (and your OPENAI_API_KEY is set), start the app:</p><pre>export OPENAI_API_KEY=your-key<br>./target/swiss-travel-advisor</pre><p>This starts the application and populates the database with our predefined data. Flyway runs the migration scripts on startup, creating tables and inserting the destinations, hotels, and activities.
Once the server is running, the DataInitializer generates vector embeddings, enabling semantic search.</p><p>Now for the fun part — let’s ask our assistant for travel recommendations. I can highly recommend using <a href="https://github.com/httpie/cli">httpie</a>:</p><pre>http POST http://localhost:8080/api/chat message=&quot;recommend best ski resorts&quot;<br><br>Here are some of the best ski resort destinations in Switzerland for you to consider:<br><br>1. St. Moritz (Graubünden)<br>   - Known for its glamorous vibe, luxury shopping, and winter sports.<br><br>2. Zermatt (Valais)<br>   - Charming alpine village at the foot of the iconic Matterhorn.<br><br>3. Interlaken (Bernese Oberland)<br>   - Adventure capital nestled between Lake Thun and Lake Brienz.<br><br>Would you like more information on any of these destinations? Let me know if you&#39;d like to add any to your wishlist!</pre><p>Notice how “best ski resorts” matched destinations based on meaning, not keywords — that’s vector search.</p><p>Our advisor also supports a wishlist feature, which is another Tool that the LLM can use. Let’s save Interlaken from the suggestions above:</p><pre>http POST http://localhost:8080/api/chat message=&quot;add Interlaken to a wishlist&quot;<br><br>Interlaken has been added to your wishlist! 🎉 You&#39;re going to love the adventure and stunning scenery there. 
Let me know if you need help with accommodations or activities in the area!</pre><p>And we can retrieve it later:</p><pre>http POST http://localhost:8080/api/chat message=&quot;retrieve my wishlist&quot;<br><br>Here&#39;s your current wishlist:<br><br>- **Interlaken (Bernese Oberland)** <br><br>If you&#39;d like to add more destinations, hotels, or activities, just let me know!</pre><p>In this case, the LLM decides when to call tools and how to present the results — you just define the tools and rely on Micronaut and LangChain4j for any necessary code generation and other implementation details.</p><h3>Conclusions</h3><p>AI-powered assistants are a natural fit for recommendation apps. Users can ask questions in plain language — “recommend a cozy ski town” or “add this to my wishlist” — and the system will understand the intent, process the data, and act accordingly.</p><p>In this demo, Micronaut acts as a lightweight and efficient framework that is great for microservices and any application where speed and memory usage matter. LangChain4j handles the AI orchestration, such as working with chat memory and tool calling. Oracle AI Database stores our vectors alongside application data, so we get convenient and powerful similarity search out of the box.</p><p>Together, these technologies offer a compelling way to build fast and smart Java applications. 
You can find the full project code and running instructions on <a href="https://github.com/alina-yur/swiss-travel-advisor">GitHub</a>, and learn more at <a href="https://graalvm.org">graalvm.org</a> and <a href="https://micronaut.io">micronaut.io</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b722c473f9d2" width="1" height="1" alt=""><hr><p><a href="https://medium.com/graalvm/building-an-ai-travel-assistant-with-graalvm-micronaut-and-langchain4j-b722c473f9d2">Building an AI Travel Assistant with GraalVM, Micronaut, and LangChain4j</a> was originally published in <a href="https://medium.com/graalvm">graalvm</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[JavaScript in Java: Driving Dynamic UIs with GraalVM at Scale]]></title>
            <link>https://medium.com/graalvm/javascript-in-java-driving-dynamic-uis-with-graalvm-at-scale-f552fb174df3?source=rss----7122626bf34b---4</link>
            <guid isPermaLink="false">https://medium.com/p/f552fb174df3</guid>
            <category><![CDATA[sdui]]></category>
            <category><![CDATA[graalvm]]></category>
            <category><![CDATA[interoperability]]></category>
            <category><![CDATA[polyglot]]></category>
            <category><![CDATA[scalability]]></category>
            <dc:creator><![CDATA[João Nogueira]]></dc:creator>
            <pubDate>Thu, 16 Oct 2025 10:14:29 GMT</pubDate>
            <atom:updated>2025-10-16T10:14:24.526Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*NT6sr9gd5-cXoKLxtnkEjg.jpeg" /></figure><p><em>This is a guest blog post written by </em><a href="https://www.linkedin.com/in/joaomnogueira/"><em>João Nogueira</em></a><em>, a software engineer at </em><a href="https://jobs.picnic.app/en/about-picnic"><em>Picnic Technologies</em></a><em>.</em></p><p>At Picnic, we’re on a mission to deliver the freshest groceries right to our customers’ doors through a seamless mobile-only experience. As a rapidly scaling online grocery retailer, our success relies on the agility of our technology.</p><p>We faced a continuous challenge: how to rapidly evolve our mobile app’s user interface to meet evolving customer demands, deploy updates instantly, and empower non-engineers (like marketers or content managers) to contribute to the app’s dynamic content without requiring a new app release. Waiting for days or even weeks for users to update the app after every UI change was simply not scalable for our business.</p><h3>Why Server-Driven UI is Our Strategy</h3><p>To solve this, we adopted and developed a powerful <strong>Server-Driven UI (SDUI) </strong>framework: the <a href="https://blog.picnic.nl/faster-features-happier-customers-introducing-the-platform-that-transformed-our-grocery-app-b38e57a85531"><strong>Page Platform</strong></a>. SDUI became crucial for two main reasons:</p><ol><li><strong>Instant Deployment</strong>: It allows us to define and modify app screens in real-time. 
By shifting the responsibility for rendering logic from the client app to the server, we can deploy UI changes instantly, bypassing the lengthy app store review process and ensuring all users see the freshest content immediately.</li><li><strong>Empowering Non-Engineers</strong>: The platform serves as a powerful abstraction layer, enabling teams across the business to create complex, data-driven content using simple configuration, effectively giving them the tools to design and control parts of the app experience without writing a single line of client-side code.</li></ol><h3>The Status Quo: Handlebars’ Limitations</h3><p>Our initial approach to Server-Driven UI utilized <strong>Handlebars</strong> as a templating language within the <strong>Page Platform</strong>. While it offered a degree of configurability and allowed us to initially separate content from presentation, we quickly encountered its limitations:</p><ul><li><strong>Rigid Syntax and Logic</strong>: Handlebars’ logic felt rigid and restrictive. As our pages grew more complex, we found its limited support for intricate logic made implementation difficult, often forcing us to rely on custom, Java-based helpers to execute simple business rules.</li><li><strong>Weak Tooling and Maintainability</strong>: Critically, Handlebars lacked the <strong>rich IDE support</strong> and <strong>type-safety</strong> we needed for production-grade code. 
This increased the friction for our engineers and made complex refactoring and maintenance challenging.</li><li><strong>Backend Integration</strong>: Just as important, we needed a solution that could seamlessly integrate with our powerful Java backend, supporting <strong>Calcite</strong> queries (<a href="https://blog.picnic.nl/define-extract-and-transform-inside-picnics-page-platform-dc396e33e976"><em>our query infrastructure that allows running SQL queries across all kinds of data sources</em></a>) and other Java logic, while enabling more complex UI logic without relying solely on the Java ecosystem.</li></ul><p>This set of challenges led us to seek a more flexible approach. The true innovation began when we sought a runtime that could bridge the gap between our robust Java backend and the modern, flexible JavaScript/TypeScript frontend technologies preferred for dynamic UI development. This pursuit led to a critical decision: how to efficiently and safely execute JavaScript directly within our existing Java application. This is where <strong>GraalVM became the cornerstone of our strategy</strong>, enabling a polyglot architecture that significantly streamlined our development and deployment.</p><h3>Exploring Implementation Options</h3><p>With JavaScript as our chosen language, the next challenge was deciding how to run it on the backend. We explored two main approaches:</p><h4>Standalone JavaScript services</h4><p>The most straightforward way to execute JavaScript on the server.</p><ul><li><strong>Pros</strong>: Easy to set up, aligned with traditional JavaScript backend architectures.</li><li><strong>Cons</strong>: Introduced operational complexity — we would have to learn how to run JavaScript services in production at scale. Additionally, it lacked direct interoperability with Java, requiring more services and contracts, which added friction to the release process. 
Moreover, every call to our Java backend services would incur an extra network hop.</li></ul><h4>Running JavaScript inside our Java application</h4><p>A more integrated approach that would allow JavaScript execution within the JVM.</p><ul><li><strong>Pros</strong>: Offers seamless interoperability with Java, eliminating the need for additional services to manage. This approach promised a more unified and efficient system.</li><li><strong>Cons</strong>: We lacked experience with this approach and weren’t sure how to implement it effectively.</li></ul><p>Given the significant benefits of seamless interoperability and reduced operational overhead offered by running JavaScript inside our Java application, we chose to pursue this integrated approach. This decision set the stage for adopting a polyglot runtime that could effectively bridge our Java and JavaScript ecosystems.</p><h3>Adopting GraalVM: Powering Polyglot Applications</h3><p>As a polyglot virtual machine, GraalVM allows us to run multiple languages — including JavaScript — within the same Java runtime environment. By adopting GraalVM, we could execute JavaScript code seamlessly within our backend Java application, allowing page templates defined in JavaScript to be evaluated in the JVM.</p><p>A critical requirement was ensuring safe interoperability between Java and JavaScript. Sharing Java objects directly with JavaScript can unintentionally expose internals we don’t want accessible: we may want to expose the fields of a Java object, but not common methods like `toString`. Instead of allowing unrestricted access to Java classes and methods, we crafted a custom API based on a promise-driven structure <a href="https://www.graalvm.org/latest/reference-manual/js/JavaInteroperability/#using-await-with-java-objects">(<em>supporting interoperability with asynchronous Java methods</em>)</a>. 
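</p><p>The principle of exposing fields but not arbitrary methods can be illustrated with a stdlib-only sketch: instead of handing a Java object to the script engine directly, pass a map view of its public fields. This is only an illustration of the idea; a real GraalVM setup would enforce it through the polyglot API’s host-access configuration rather than hand-rolled reflection:</p>

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

public class SafeExposure {
    // Build a read-only-style view containing just the public fields of an object.
    public static Map<String, Object> fieldsOnly(Object target) {
        Map<String, Object> view = new LinkedHashMap<>();
        for (Field field : target.getClass().getFields()) { // public fields only
            try {
                view.put(field.getName(), field.get(target));
            } catch (IllegalAccessException e) {
                throw new IllegalStateException(e);
            }
        }
        return view; // methods like toString are not reachable through this view
    }

    // Hypothetical page object with public data fields and an internal method.
    public static class Page {
        public String title = "Home Page";
        public int sections = 3;
        @Override public String toString() { return "internal detail"; }
    }

    public static void main(String[] args) {
        Map<String, Object> exposed = fieldsOnly(new Page());
        System.out.println(exposed); // contains title and sections; toString stays hidden
    }
}
```

<p>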
Each page is represented as an <em>async</em> function, and we subscribe to this promise from Java, providing only the specific parameters required for each page render. To enable JavaScript to invoke certain asynchronous Java methods (data retrieval from Calcite being the clearest asynchronous use case), we extended this promise-based API, allowing JavaScript to call back into Java in a controlled, asynchronous manner. This leveraged <strong>GraalVM’s built-in support for awaiting Java </strong>`<em>CompletableFuture</em>` objects from JavaScript, a key feature for handling asynchronous operations in a polyglot environment. By controlling this Java-to-JavaScript interaction, our setup is efficient, scalable, and extensible, giving us the benefits of JavaScript’s flexibility within the stability and performance of the Java backend.</p><h4>Challenges we faced</h4><p>Our use case required careful orchestration to ensure optimal performance. Each GraalVM context — a necessary component for executing JavaScript — can handle only one script execution at a time. A naive approach, where a new instance is created for every request, would fail to leverage GraalVM’s internal optimizations, significantly impacting execution performance.</p><p>Since creating new contexts is computationally expensive and their initial executions are slow due to the lack of runtime optimizations, we implemented an <em>ObjectPool</em> to efficiently manage and reuse them. This pooling mechanism allowed us to minimize the overhead of repeatedly instantiating contexts, instead recycling them to handle incoming executions more efficiently. This strategy was vital for ensuring that our GraalVM-powered Page Platform could meet the stringent performance and scalability requirements of our production environment.</p><h3>Page Rendering Flow: An Overview</h3><p>Before diving into the specifics of our tooling, it’s important to step back and look at how our Page Platform operates at a high level. 
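</p><p>Before walking through the rendering flow, here is a minimal sketch of the pooling strategy described above, built on a stdlib BlockingQueue. In production the pooled type would be org.graalvm.polyglot.Context; here the generic C is a stand-in, and the pool size and names are illustrative assumptions:</p>

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Function;
import java.util.function.Supplier;

public class ContextPool<C> {
    private final BlockingQueue<C> pool;

    public ContextPool(int size, Supplier<C> factory) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(factory.get()); // contexts are expensive: create once, reuse
        }
    }

    // Borrow a context, run the script work, and always return it to the pool.
    public <R> R execute(Function<C, R> work) {
        final C context;
        try {
            context = pool.take(); // blocks if all contexts are busy
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting for a context", e);
        }
        try {
            return work.apply(context);
        } finally {
            pool.offer(context); // recycle for the next request
        }
    }

    public static void main(String[] args) {
        // A trivial "context" that just counts its own uses.
        class FakeContext { int executions; }
        ContextPool<FakeContext> contexts = new ContextPool<>(2, FakeContext::new);
        for (int i = 0; i < 5; i++) {
            contexts.execute(ctx -> ++ctx.executions);
        }
        System.out.println("5 renders served by a pool of 2 contexts");
    }
}
```

<p>Because the borrow-and-return happens in a finally block, a failing render cannot leak a context out of the pool.</p><p>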
The process of rendering a page follows this general flow:</p><ol><li>The app requests a page by its ID and relevant parameters.</li><li>This request is sent to our Java backend service.</li><li>The backend retrieves the corresponding JavaScript code for the requested page and executes it on the server using GraalVM.</li><li>During execution, the JavaScript code interacts with Java components as needed — for example, to fetch data using Calcite (our data retrieval API) or, in the future, to trigger side-effect-driven actions. GraalVM’s polyglot capabilities are crucial here, enabling seamless and efficient calls between JavaScript and Java.</li><li>Once all necessary data is retrieved, additional business logic expressed in JavaScript may be executed on the server within the GraalVM runtime.</li><li>The final output is a serialized JSON structure that adheres to a predefined contract describing how the page should be rendered. This response is passed back to the client via Java, completing the request/response cycle.</li></ol><p>With this foundation in place, let’s explore how we bridge JavaScript and Java to make this process seamless.</p><h3>Bridging JavaScript and Java</h3><p>To define our pages in JavaScript and enable their execution within our Java backend, we needed to establish a clear contract. This contract, facilitated by <strong>GraalVM’s interoperability features</strong>, ensures that each page’s JavaScript execution produces a well-defined output that Java can efficiently process, generating the correct JSON response for the app.</p><p>To achieve this, we designed the following API:</p><ul><li>JavaScript exports a map of page IDs to asynchronous functions. 
Each function, when executed, resolves to a JavaScript object that adheres to the predefined Page API.</li><li>Java provides an interoperability API, enabling JavaScript to invoke asynchronous Java methods when needed, securely bridging the two languages.</li></ul><p>This setup, powered by GraalVM, ensures seamless communication between JavaScript and Java, allowing for efficient data retrieval, business logic execution, and page rendering directly within the JVM.</p><p>The JavaScript code executed in our system is version-controlled in a Git repository, ensuring that every change is tracked and represented by a unique Git revision. For each revision, a new JavaScript bundle is generated. This bundle is designed to adhere to a specific API, which we outline below <em>(example of a `home-page` simply with a title section)</em>:</p><pre>const Pages = {<br>  &#39;home-page&#39;: async () =&gt; {<br>    return {<br>      title: &quot;Home Page&quot;<br>    }<br>  }<br>}<br><br>export default Pages;</pre><p>Similarly, we had to establish a clear API that would allow developers to request data from Calcite queries directly within the JavaScript framework. This interaction had to align with our promise-driven infrastructure. To accomplish this, for each page rendered, we bind a <em>query</em> method to the GraalVM context, which delegates to the Calcite query resolution logic, effectively creating a bridge between the JavaScript and data query layers within the polyglot environment.</p><p>This approach enables seamless data retrieval, following a simple API structure, as shown below:</p><pre>await query(&#39;SELECT * FROM table_name&#39;, {&#39;parameter_key&#39;: &#39;parameter_value&#39;})</pre><p>This well-defined API not only enables seamless interaction between the JavaScript and Java layers in production (leveraging <strong>GraalVM</strong>) but also facilitates a streamlined local development environment. 
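</p><p>From the Java side, this contract boils down to: look up a page function by its ID, invoke it, and subscribe to the resulting promise. A stdlib-only sketch of that shape, modeling each page as a function returning a CompletableFuture; the names are illustrative, and the real implementation subscribes to JavaScript promises through GraalVM’s polyglot API:</p>

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

public class PageRegistry {
    // Each page is an async function: parameters in, rendered page (JSON-like map) out.
    interface PageFunction
            extends Function<Map<String, Object>, CompletableFuture<Map<String, Object>>> {}

    private final Map<String, PageFunction> pages;

    PageRegistry(Map<String, PageFunction> pages) { this.pages = pages; }

    // Look up the page by ID and subscribe to its promise.
    CompletableFuture<Map<String, Object>> render(String pageId, Map<String, Object> params) {
        PageFunction page = pages.get(pageId);
        if (page == null) {
            return CompletableFuture.failedFuture(
                    new IllegalArgumentException("Unknown page: " + pageId));
        }
        return page.apply(params);
    }

    public static void main(String[] args) {
        PageFunction home = params ->
                CompletableFuture.completedFuture(Map.<String, Object>of("title", "Home Page"));
        PageRegistry registry = new PageRegistry(Map.of("home-page", home));
        registry.render("home-page", Map.of())
                .thenAccept(page -> System.out.println(page.get("title"))); // prints: Home Page
    }
}
```

<p>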
Using a local <em>Node.js</em> server, JavaScript developers and page authors can significantly shorten their feedback loop. In this setup, queries and invocations are routed to an internal REST endpoint, providing the necessary data for page rendering. It’s important to note that while Node.js provides a familiar environment for local development, the production environment relies on <strong>GraalVM</strong> to execute the JavaScript code directly within the JVM, ensuring the performance and interoperability benefits of our polyglot architecture.</p><h4>JavaScript-Powered Templates: Leveraging Modern Frontend Practices within Java</h4><p>Moving to JavaScript for templating was a significant upgrade, giving us the full power of a programming language to build server-side pages. To further enhance developer experience, maintainability, and reliability, we embraced <strong>TypeScript and JSX</strong>. Inspired by React’s JSX setup, we integrated this powerful, declarative syntax into our framework.</p><p>JSX allowed us to create reusable components with a clean, intuitive syntax and robust IDE support, including AI-powered suggestions. This choice brought us the benefits of type-safety, build-time validation, and a familiar development experience for frontend engineers.</p><p>A crucial step was ensuring our TypeScript and JSX setup was compatible with our production environment: <strong>GraalVM</strong>. Since GraalVM executes standard JavaScript, we configured our build process to transpile our TypeScript and JSX code into vanilla JavaScript with a single main entry point. This allowed us to leverage modern frontend development practices while seamlessly integrating and executing these dynamic templates directly within our Java backend via GraalVM.</p><h4>Bundling: Preparing JavaScript for GraalVM Execution</h4><p>To optimize our approach, we introduced a bundling process to generate production-ready bundles that seamlessly integrate with our backend. 
Bundling ensures compatibility with production environments, and we developed custom plugins to enable advanced functionality, such as importing SQL query files, which are transformed into JavaScript functions, complete with support for dynamic parameters.</p><p>Additionally, we version each bundle output, allowing us to load specific versions of the code as needed. Having a bundler also gives us the flexibility to add new dependencies and include them in the bundle without impacting the backend system. This centralized approach makes dependency management much more efficient.</p><h4>Testing</h4><p>For testing, we used an infrastructure that supports parallel execution and integrates seamlessly with our bundling setup. This enables us to create test environments that closely mimic production. In addition to standard testing — such as unit, functional, performance, and snapshot tests — we developed custom tests tailored to our pages, including end-to-end (E2E) and visual regression tests. These ensure consistent behavior across different environments and help us catch issues early.</p><h4>Development Server</h4><p>To further streamline development, we set up a local development server and integrated our network proxy to capture and redirect application requests to the local implementation. Additionally, we reused the custom plugins from our bundling and testing setup to ensure that the dev server mimics the bundler’s behavior. This creates a consistent experience across development, testing, and production environments.</p><p>Our setup allows us to seamlessly switch between real and mocked data sources, providing flexibility for accurate and immediate testing. When mock data isn’t used, we implemented local versions of data providers to communicate with REST endpoints, effectively simulating our production environment. 
In production, however, API bridges between JavaScript and Java handle direct connections to our backend.</p><p>One key difference between our local and production environments is that GraalVM, which we use in production, provides a JavaScript runtime but lacks full <em>Node.js</em> functionality. This required us to account for certain limitations and ensure our setup remained compatible across both environments.</p><p>In conclusion, this comprehensive setup — featuring TypeScript, JSX, bundler, testing environment, and local development server — empowers us to develop, test, and deploy server-side pages with ease. With real-time feedback, production compatibility, and flexibility for complex logic, our approach simplifies the development process and reduces time to production, resulting in a highly maintainable and efficient system.</p><h3>Deployment: Ensuring Consistent JavaScript Execution with GraalVM</h3><p>As mentioned earlier, the JavaScript framework is version-controlled in Git, with each change corresponding to a new Git revision. To ensure our running environments stay up to date with the latest changes and that GraalVM always executes the correct version of our dynamic templates, we implemented a GitHub webhook that notifies our backend deployments of incoming updates. Upon receiving a notification, the backend fetches the latest changes from the repository, generates the updated JavaScript bundle, and replaces the deployed version with the newly committed code.</p><p>This process ensures that the latest revisions are seamlessly integrated into the environment, ready for GraalVM execution. The overall flow can be summarized as follows:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*M1VtKfyIxBuasp9O.png" /></figure><p>How resilient is our system, though? What happens if GitHub is down? Do we ensure our system can withstand third-party outages? 
This is where the Elastic File System (EFS), mentioned in the diagram above, comes into play.</p><p>But what exactly is an Elastic File System? As Amazon describes it:</p><blockquote><em>“Amazon Elastic File System (Amazon EFS) provides serverless, fully elastic file storage, allowing you to share file data without provisioning or managing storage capacity and performance.” </em>— <a href="https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html">source</a></blockquote><p>So, how does this help? First, our backend application is deployed across multiple replicas. We don’t want each replica to depend on a live connection to GitHub to fetch its own version of the Git repository. To solve this, we store both the updated repository and the generated bundles in the Elastic File System. This shared EFS volume is accessible by all our application pods, ensuring that the <strong>GraalVM</strong> runtime on each replica can access the correct, up-to-date code.</p><p>In the event of a GitHub outage, not only do we have an up-to-date copy of the Git repository (as of the last successful update), but we also have the generated bundles stored in the EFS. This ensures that we avoid the overhead of regenerating bundles every time, and more importantly, it removes the direct dependency on GitHub, improving the overall resilience of our polyglot system.</p><h3>A Note on Template Migration</h3><p>Our move to JavaScript-powered templates and the adoption of a polyglot architecture with <strong>GraalVM</strong> involved a necessary transition from our previous Handlebars-based templating system. Migrating our old templates to the new framework presented an interesting challenge. To address this, we built automation tools to help us, which significantly streamlined the process and allowed us to efficiently transition to our new approach. 
While the specifics of this migration journey and the challenges we addressed are outside the scope of this post focused on <strong>GraalVM</strong>, for those interested in the details, we invite you to read our original blog post on the Picnic Engineering blog <a href="https://medium.com/picnic-engineering/java-meets-javascript-a-modern-approach-to-dynamic-page-rendering-31250dc66f33">here</a>.</p><h3>The End of a Journey, The Start of a New Era</h3><p>Moving from Handlebars to our custom TypeScript framework, and crucially integrating GraalVM, was not just a technical necessity but a transformative journey for the team. This migration reduced complexity, unified the template logic, and empowered us to leverage modern development practices like TypeScript’s strong typing and JSX’s expressive syntax. By combining automated tools, AI-powered improvements, and rigorous manual oversight, we ensured a seamless transition that preserved functionality while enhancing maintainability. Our innovative use of the Handlebars compiler, GraalVM integrations, and structured testing processes allowed us to bridge the old and new systems effectively.</p><p>This move not only modernized our codebase but also set a precedent for how we approach large-scale system overhauls — prioritizing automation, collaboration, and meticulous testing. It’s a testament to the team’s creativity and adaptability, showcasing how challenges can inspire inventive solutions. The adoption of <strong>GraalVM</strong> was a crucial component in achieving this, providing performance, reducing operational overhead, and ensuring seamless interoperability between Java and JavaScript. 
As we continue building with this new framework, now significantly enhanced by <strong>GraalVM</strong>, we’re confident in its scalability, performance, and ability to support the evolving needs of our developers and users.</p><p>While this post has focused on the role of <strong>GraalVM</strong>, there is more to be said about these efforts and the broader journey of our framework’s evolution. For the full story, we invite you to read our original blog post on the Picnic Engineering blog <a href="https://medium.com/picnic-engineering/java-meets-javascript-a-modern-approach-to-dynamic-page-rendering-31250dc66f33">here</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f552fb174df3" width="1" height="1" alt=""><hr><p><a href="https://medium.com/graalvm/javascript-in-java-driving-dynamic-uis-with-graalvm-at-scale-f552fb174df3">JavaScript in Java: Driving Dynamic UIs with GraalVM at Scale</a> was originally published in <a href="https://medium.com/graalvm">graalvm</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[From JIT to Native: Path to Efficient Java Containers]]></title>
            <link>https://medium.com/graalvm/from-jit-to-native-path-to-efficient-java-containers-d81221418c39?source=rss----7122626bf34b---4</link>
            <guid isPermaLink="false">https://medium.com/p/d81221418c39</guid>
            <category><![CDATA[graalvm]]></category>
            <category><![CDATA[java]]></category>
            <category><![CDATA[containers]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[micronaut]]></category>
            <dc:creator><![CDATA[Olga Gupalo]]></dc:creator>
            <pubDate>Wed, 11 Jun 2025 15:26:05 GMT</pubDate>
            <atom:updated>2025-06-16T13:16:57.083Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*va_6cykfZ6j0zBXS8AYizw.jpeg" /></figure><p>Can a <a href="https://micronaut.io/"><strong>Micronaut</strong></a> application start in milliseconds and run in a container smaller than most Go apps? It can, with <a href="https://www.graalvm.org/latest/reference-manual/native-image/">GraalVM Native Image</a>. Here’s how we transformed a traditional Java application into a fast to start, small to ship, and ready for the cloud native container.</p><p>The experiment started with a <strong>Micronaut web server</strong>, and ended with a <strong>22MB </strong>fully static deployable image. How do we get there? With GraalVM Native Image, smart linking strategies, and a bit of magic at the end.</p><p>Along the way, we tested various build strategies, explored trade-offs, and measured real impact.</p><p>Yet, once compiled into a JAR and placed in a Docker container with a full JDK, this web server expanded to ~<strong>470MB</strong>. This result was observed with multiple OpenJDK distributions. That’s a non-starter if you’re trying to keep things lightweight for microservices or scale-to-zero platforms like Knative or AWS Lambda.</p><p><strong>How small can we make it? </strong>The answer is, as it turns out, very small — if you’re willing to “<strong>go native</strong>” and even “<strong>static native”</strong>.</p><h3>Step by Step: Slimming Down the Container</h3><p>We started by benchmarking different ways to package and run the application — gradually replacing the JVM with custom runtimes, native executables, and, finally, fully static binaries. 
The results throughout this post were collected on an Oracle Linux 8 machine with 48 GB of memory and 4 CPUs to ensure consistency.</p><p>The demo is open source and reproducible: 👉 <a href="https://github.com/graalvm/workshops/tree/main/native-image/micronaut-webserver"><strong>github.com/graalvm/workshops/tree/main/native-image/micronaut-webserver</strong>.</a> Each step is automated via scripts and Dockerfiles, and the app serves real GraalVM documentation pages so you can benchmark accurately.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0TWnLMoj_N7_fOIleJF1fg.png" /><figcaption>GraalVM documentation pages served with Micronaut</figcaption></figure><p>Here’s a brief overview of the most notable stages.</p><h4>Step 1: The Starting Point — Running a JAR in a Container</h4><p>Compiled as a JAR and run with java -jar, this web server worked perfectly—but the container size was still significant. Using a distroless java21-debian12 image instead of one built by installing an OpenJDK distribution into a slim OS container dropped the container size down to <strong>216MB. </strong>Note that the static website pages contributed to the overall image size.</p><p>🧱 <em>Image:</em> gcr.io/distroless/java21-debian12<br>🚀 <em>Startup:</em> ~378ms<br>📦 <em>Container size:</em> 216MB</p><p>Not bad for a full Java web server, but it could be further optimized for cold start and footprint.</p><h4>Step 2: Creating a Custom JDK with jlink</h4><p>Next, we tested jlink, a tool that creates a stripped-down Java runtime containing only the modules your application needs. 
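</p><p>As a side note, jdeps and jlink are command-line tools, but since Java 9 they can also be driven programmatically through the standard ToolProvider SPI, which can be handy in custom build tooling. A small sketch (it only queries jlink’s version; a real invocation would pass --module-path, --add-modules, and --output):</p>

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.spi.ToolProvider;

public class JlinkFromJava {
    public static void main(String[] args) {
        // jlink is discovered via the ToolProvider SPI; it requires a full JDK.
        ToolProvider jlink = ToolProvider.findFirst("jlink")
                .orElseThrow(() -> new IllegalStateException("jlink requires a full JDK"));
        StringWriter out = new StringWriter();
        int exit = jlink.run(new PrintWriter(out), new PrintWriter(out), "--version");
        System.out.println("jlink exit code: " + exit); // 0 on a full JDK
    }
}
```

<p>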
Using jdeps to find dependencies and jlink to build the runtime, we saved quite some space:</p><p>🧱 <em>Image:</em> gcr.io/distroless/java-base-debian12<br>🚀 <em>Startup:</em> ~340ms<br>📦 <em>Container size:</em> 167MB</p><p>Here’s what the command flow looked like:</p><pre>RUN ./mvnw clean package<br>RUN ./mvnw dependency:build-classpath -Dmdep.outputFile=cp.txt<br>RUN CP=$(cat cp.txt) &amp;&amp; \<br>    MODULES=$(jdeps --ignore-missing-deps -q --recursive --multi-release 24 --print-module-deps --class-path &quot;$CP&quot; target/webserver-0.1.jar) &amp;&amp; \<br>    echo &quot;Modules: $MODULES&quot; &amp;&amp; \<br>    jlink \<br>      --module-path &quot;${JAVA_HOME}/jmods&quot; \<br>      --add-modules &quot;$MODULES&quot;,jdk.zipfs \<br>      --verbose \<br>      --strip-debug \<br>      --compress zip-9 \<br>      --no-header-files \<br>      --no-man-pages \<br>      --strip-java-debug-attributes \<br>      --output jlink-jre</pre><p>This gave us an instant <strong>49MB</strong> win, just by trimming unused modules. Not a dramatic performance change, but a solid step toward efficiency.</p><h4>Step 3: Compiling Ahead-of-Time — Going Native</h4><p>Then came the game changer: <strong>GraalVM Native Image</strong>. We built the application ahead of time using the <a href="https://graalvm.github.io/native-build-tools/latest/maven-plugin.html">Native Image Maven plugin</a>, inside a multi-stage Docker build, and then packaged it in a distroless base image for the runner — <strong>no JVM required</strong>. The resulting image was dynamically linked.</p><p>🧱 <em>Image:</em> gcr.io/distroless/java-base-debian12<br>🚀 <em>Startup:</em> ~20ms<br>📦 <em>Container size:</em> 132MB</p><p>We just reduced startup time by almost <strong>17x</strong>, and the container size by <strong>35MB</strong>! That’s how powerful native compilation can be. 
Still, we wanted more space savings.</p><h4>Step 4: Optimizing a Native Image for File Size</h4><p>What if we could shrink the native executable file itself? GraalVM offers the -Os flag, which optimizes for file size by skipping performance-costly optimizations. We added it to the build:</p><p>🧱 <em>Image:</em> gcr.io/distroless/java-base-debian12<br>🚀 <em>Startup:</em> ~20ms<br>📦 <em>Container size:</em> 102MB<br>📦 <em>Binary size:</em> 62MB (down from the default 86MB!)</p><p>The binary size decreased by <strong>24MB</strong> — with no change in behavior or startup time. Optimization for the win!</p><p>Additionally, we tested the new <a href="https://medium.com/graalvm/skipflow-producing-smaller-executables-with-graalvm-f18ca98279c2"><strong>SkipFlow</strong></a> feature introduced in GraalVM for JDK 24. That’s a static analysis optimization that evaluates branch conditions to skip code paths that can never run. It’s experimental, but was easy to enable: -H:+TrackPrimitiveValues -H:+UsePredicates. It shaved just <strong>1MB</strong> off in our case, but the savings could grow with a larger codebase.</p><h4>Step 5: Running a Mostly Static Application — Going Static</h4><p>So far we built <strong>dynamically linked</strong> native images. What if we link almost everything statically? With the --static-nolibc flag, we created a <strong>mostly static</strong> executable, linked against everything except glibc. This made it possible to switch to a smaller container image (base-debian12).</p><p>🧱 <em>Image:</em> gcr.io/distroless/base-debian12<br>🚀 <em>Startup:</em> ~20ms<br>📦 <em>Container size:</em> 89.7MB</p><p>That’s a further <strong>12MB</strong> saved just by changing the base container.
<br>Here is a breakdown of the native-maven-plugin configuration for this build:</p><pre>&lt;plugin&gt;<br>  &lt;groupId&gt;org.graalvm.buildtools&lt;/groupId&gt;<br>  &lt;artifactId&gt;native-maven-plugin&lt;/artifactId&gt;<br>  &lt;version&gt;${native.maven.plugin.version}&lt;/version&gt;<br>  &lt;configuration&gt;<br>    &lt;imageName&gt;webserver.mostly-static&lt;/imageName&gt;<br>    &lt;buildArgs&gt;<br>      &lt;buildArg&gt;--static-nolibc&lt;/buildArg&gt;<br>      &lt;buildArg&gt;-Os&lt;/buildArg&gt;<br>    &lt;/buildArgs&gt;<br>  &lt;/configuration&gt;<br>&lt;/plugin&gt;</pre><h4>Step 6: Running a Fully Static Application — In an Empty Container</h4><p>Now it’s time for the real fun. With the --static --libc=musl flags, we could build a <strong>fully static</strong> native image—<strong>no OS-level dependencies</strong>, just a single binary. That meant we could use the scratch container—basically, just an empty filesystem. (scratch is an official Docker image.)</p><p>🧱 <em>Image:</em> scratch<br>🚀 <em>Startup:</em> ~20ms<br>📦 <em>Container size:</em> 69.2MB</p><p>A production-ready Micronaut web application was deployed in under <strong>69MB</strong>, starting in milliseconds! That’s better than many compiled C++ apps.</p><h4>Step 7: Going Extreme — UPX Compression</h4><p>Could we go even smaller? We applied <a href="https://upx.github.io/">UPX</a>, a binary compression tool, to our fully static executable and packaged it into the same scratch container. A UPX-compressed executable decompresses itself at startup, which adds a small CPU hit, but the image size drops drastically.</p><p>🧱 <em>Image:</em> scratch<br>🚀 <em>Startup:</em> ~20ms<br>📦 <em>Container size:</em> 22.3MB<br>📦 <em>Binary size:</em> 20MB (down from 62MB!)</p><p>That’s nearly <strong>20× smaller</strong> than the original container size. The app still started instantly and served requests flawlessly!
The trade-off is that you lose visibility—you can’t easily inspect the contents of the compressed binary.</p><h3>Before and After: The Numbers</h3><p>Let’s compare where we started and where we landed:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2nlLxvRnQpgQy29Zo8T_iQ.jpeg" /><figcaption><strong>20× reduction in size</strong> and <strong>17× faster startup</strong> from our original setup — without sacrificing functionality</figcaption></figure><h3>Engineering Lessons Learned</h3><p>Here’s what we took away from this process:</p><ul><li><strong>Not all optimizations are equal.</strong> jlink is useful, but Native Image delivers orders-of-magnitude improvements in both size and startup time.</li><li><strong>Static linking unlocks ultra-tiny containers.</strong> If you don’t need a full OS, use scratch—you’ll get not only smaller images but also a reduced attack surface. Alternatives to scratch include gcr.io/distroless/static and alpine:3, which come with a few more utilities and libraries inside.</li><li><strong>Base image size matters.</strong> Switching from java21-debian12 to base-debian12 or scratch made a huge difference. Don’t overlook this part.</li><li><strong>Compression can be the final trick.</strong> UPX is surprisingly effective, and worth experimenting with — though you’ll want to test CPU overhead in performance-sensitive apps.</li></ul><p>As a final thought, if you’re building Java applications for the cloud and haven’t tried GraalVM Native Image, now is the time. The ecosystem is ready, the tooling is mature, and the results are hard to ignore.</p><h3>Try It Yourself 💻</h3><p>The demo sources, Dockerfiles, and step-by-step instructions are available on GitHub: <a href="https://github.com/graalvm/workshops/tree/main/native-image/micronaut-webserver"><strong>github.com/graalvm/workshops/tree/main/native-image/micronaut-webserver</strong></a>.
<br>If you prefer Spring to Micronaut, there is an identical Spring Boot version of this demo: <a href="https://github.com/graalvm/workshops/tree/main/native-image/spring-boot-webserver"><strong>github.com/graalvm/workshops/tree/main/native-image/spring-boot-webserver</strong></a>.<br>For more experiments and examples, look at <a href="https://github.com/graalvm/graalvm-demos/tree/master/native-image/tiny-java-containers"><strong>Tiny Java Containers</strong>.</a> <br>We welcome your feedback via <a href="https://graalvm.org/slack-invitation">Slack</a> or <a href="https://github.com/oracle/graal">GitHub</a>.</p><p>— the GraalVM team</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d81221418c39" width="1" height="1" alt=""><hr><p><a href="https://medium.com/graalvm/from-jit-to-native-path-to-efficient-java-containers-d81221418c39">From JIT to Native: Path to Efficient Java Containers</a> was originally published in <a href="https://medium.com/graalvm">graalvm</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[SkipFlow: Producing Smaller Executables with GraalVM]]></title>
            <link>https://medium.com/graalvm/skipflow-producing-smaller-executables-with-graalvm-f18ca98279c2?source=rss----7122626bf34b---4</link>
            <guid isPermaLink="false">https://medium.com/p/f18ca98279c2</guid>
            <category><![CDATA[compilers]]></category>
            <category><![CDATA[graalvm]]></category>
            <category><![CDATA[graalvm-native-image]]></category>
            <category><![CDATA[static-code-analysis]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[David Kozak]]></dc:creator>
            <pubDate>Wed, 16 Apr 2025 12:17:06 GMT</pubDate>
            <atom:updated>2025-04-16T12:17:06.082Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9nc4cyLfuWu_GJ5SNYAaMQ.png" /></figure><p><a href="https://en.wikipedia.org/wiki/Pointer_analysis">Points-to analysis</a> is a crucial part of every <a href="https://www.graalvm.org/latest/reference-manual/native-image/">GraalVM Native Image build</a>. In this blog post, we present <strong>SkipFlow</strong>, an extension to the points-to analysis that tracks primitive values and evaluates branching conditions during the analysis process. In our benchmarks, this reduced <strong>binary size</strong> by an average of <strong>6.35%</strong> without increasing build time.</p><p>We start by providing a high-level overview of the static analysis in GraalVM Native Image. Then, we describe how <strong>SkipFlow</strong> improves the points-to analysis. Finally, we present an experimental evaluation and discuss future research in this area.</p><h3>Points-to Analysis in GraalVM Native Image</h3><p>A typical Java application has a lot of third-party libraries but uses only a fraction of their functionality. To avoid compiling methods that are not needed, Native Image performs a whole-program points-to analysis to determine all classes, methods, and fields that are so-called <em>reachable</em>, i.e., might be needed at runtime. Consider the example below to understand the motivation for using points-to analysis in this context.</p><pre>void f(Iface i){<br>    if(i == null) throw new NullPointerException();<br>    if(i instanceof A){ ... }<br>    i.run();<br>}</pre><p>By inspecting the method f only, we have little knowledge about the possible value of the parameter i. 
Therefore, we would have to assume that i.run() can call any implementation of the run() method of any subtype of Iface, which would lead to marking all such methods as reachable.</p><p>On the other hand, if we could compute that i will always be of a specific type B (not a subtype of A) and never null, we could replace the virtual invocation of i.run()with a direct invocation of the run() method available on the type B, therefore marking only a single method as reachable. Furthermore, we could remove both the null and type checks (including any code inside the corresponding branches). Points-to analysis tracks the possible types for all variables and fields in the program, making such optimizations possible. Employing points-to analysis leads to both smaller binaries and faster compile times.</p><p>Native Image runs the analysis using a data structure called a <strong>type flow graph</strong>. As the name suggests, this graph models the flow of types throughout the program. Nodes in the graph, which we call <strong>flows</strong>, represent method parameters, fields, variables, and various instructions relevant to the analysis. Directed <strong>use edges</strong> describe how the types can flow between nodes. Each flow maintains a <strong>typestate</strong> representing a set of types that can reach the given flow.</p><p>The type flow graph is expanded during the analysis. It starts from a set of root methods (e.g., main()) and gradually adds more nodes and edges as additional reachable code elements are discovered. The analysis enables pruning unreachable code, eliminating redundant type and null checks, and even devirtualizing method invocations for which only a single target method is computed. In the example above, we could reduce the content of the method f to a direct call to B.run().</p><p>However, precision is not the only metric we consider. 
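</p><p>To make the discussion concrete, here is a runnable sketch of the devirtualization example from above. Iface, A, B, f, and run() come from the snippet; the method bodies are our hypothetical fill-ins:</p>

```java
// Sketch of the example above: Iface, A, B, and f are from the post's
// snippet; the bodies are hypothetical fill-ins so the code executes.
interface Iface { String run(); }
class A implements Iface { public String run() { return "A.run"; } }
class B implements Iface { public String run() { return "B.run"; } }

public class Devirtualization {
    static String f(Iface i) {
        if (i == null) throw new NullPointerException();
        if (i instanceof A) { /* branch the analysis could prune */ }
        return i.run(); // virtual call, devirtualizable if only B flows in
    }

    public static void main(String[] args) {
        // Only B instances ever flow into f here, so the points-to analysis
        // computes a single call target and marks only B.run() as reachable.
        System.out.println(f(new B()));
    }
}
```

<p>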
Since the analysis is executed during every Native Image build, analysis time and memory footprint are equally important. We need an analysis that is both reasonably precise and fast. In some cases, we even sacrifice a bit of precision to improve scalability. For example, we use a technique called <strong>saturation</strong>, which works as follows: if the number of types observed for a given variable exceeds a given <strong>threshold</strong> (by default 20), the analysis stops tracking the flow of types precisely for that variable and removes it from the graph. This technique is motivated by compilers typically optimising only cases with few possible types. <strong>Saturation</strong> enables linear scaling with respect to the size of the program, yet with only a negligible effect on precision. To learn more about saturation, you can take a look at this publication: <a href="https://dl.acm.org/doi/10.1145/3656417">Scaling Type-Based Points-to Analysis with Saturation</a>.</p><h3>Introducing SkipFlow</h3><p>The analysis Native Image currently runs (which we will denote as <em>baseline</em> in the rest of the blog) can be needlessly imprecise on specific code patterns. Consider the example below taken from the <a href="https://github.com/fpsunflower/sunflow/blob/master/src/org/sunflow/core/Scene.java#L279">Dacapo Sunflow benchmark</a>.</p><pre>void render(..., Display display){<br>    if (display == null) {<br>        display = new FrameDisplay();<br>    }<br>    ...<br>}</pre><p>The <em>baseline</em> analysis is able to determine that the parameter display will never be null. However, since it does not consider the control flow of the method, it still analyses the contents of the branch and considers the type FrameDisplay as instantiated. 
This might not seem to be a big problem, but the value of the variable display is eventually <a href="https://github.com/fpsunflower/sunflow/blob/master/src/org/sunflow/core/renderer/BucketRenderer.java#L147">used as a receiver</a> to call the imageBegin() method and FrameDisplay.imageBegin() transitively calls into AWT and Swing GUI libraries, none of which are actually needed. This means we analyse and subsequently compile a lot of dead code. Can we do better?</p><p>There are many techniques in the area of static analysis which could improve the precision of the <em>baseline</em> analysis, but they are typically too slow for our use case. Remember that the analysis runs in every build and has to process hundreds of thousands of methods in minutes or even faster.</p><p>The approach we took for increasing the precision of the analysis, called <strong>SkipFlow</strong>, is based on two key features. The first feature is <strong>evaluating branching conditions</strong> during the run of the analysis. To support this feature in the existing framework, we have introduced <strong>predicate edges</strong> into type flow graphs that encode a relationship between a branching condition and nodes within the branches. In the <em>Dacapo Sunflow</em> example above, a <strong>predicate edge</strong> would lead from the node representing the null check of display to the node representing the FrameDisplay instantiation. All flows with an incoming predicate edge are <strong>disabled by default</strong>. Such flows can accept values flowing into them via <strong>use</strong> <strong>edges</strong>, but they will not push any values further down the graph until enabled by their <strong>predicate</strong>. This makes it possible to <strong>delay evaluating</strong> the content of branches, such as the one above that instantiates a FrameDisplay object, until it is determined that the branch may be executed at runtime. 
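</p><p>The predicate-edge behavior can be illustrated with a small runnable sketch. render, Display, and FrameDisplay mirror the Sunflow snippet; the class bodies are invented for illustration:</p>

```java
// Sketch of the pattern above; class bodies are hypothetical fill-ins.
class Display { String imageBegin() { return "display"; } }
class FrameDisplay extends Display { String imageBegin() { return "frame-display"; } }

public class PredicateEdges {
    static String render(Display display) {
        if (display == null) {
            // A predicate edge from the null check keeps this allocation
            // disabled until the analysis proves display can be null here.
            display = new FrameDisplay();
        }
        return display.imageBegin();
    }

    public static void main(String[] args) {
        // If every caller passes a non-null Display, SkipFlow never marks
        // FrameDisplay (and whatever it transitively pulls in) as instantiated.
        System.out.println(render(new Display()));
    }
}
```

<p>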
In the particular case of <em>Dacapo Sunflow</em>, we were able to reduce the size of the image by more than <strong>50%</strong>.</p><p>The second feature is <strong>tracking primitive values</strong>, which proves useful in cases where the condition is factored into a separate method, such as the example below.</p><pre>void onExit(Thread thread){<br>    if (thread.isVirtual()){<br>        virtualThreads.remove(thread);<br>    }<br>}<br><br>public boolean isVirtual() {<br>    return this instanceof BaseVirtualThread;<br>}</pre><p>The SharedThreadContainer.onExit() method, taken from the <a href="https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/jdk/internal/vm/SharedThreadContainer.java#L119">JDK</a>, is a callback containing code that should be executed only for virtual threads. The check itself is offloaded into the <a href="https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/java/lang/Thread.java#L1372C26-L1372C35">Thread.isVirtual() method</a>, which contains a simple type check. This is a common code pattern in the JDK. Tracking primitive values enables the propagation of boolean values (true and false) out of method calls, so that code specific to virtual threads can be removed unless the compiled application actually uses them.</p><h3>Experimental Evaluation</h3><p>We have evaluated <strong>SkipFlow</strong> using the <em>Renaissance</em> and <em>Dacapo</em> benchmark suites, and a set of <em>microservices applications</em>. Below, we present a chart showing the impact on <strong>binary size</strong> for the microservices applications. SkipFlow reduces the size of native images by 4.4% on average without negatively impacting the build time.
In fact, image builds tend to be even slightly faster with SkipFlow enabled because there are fewer methods to analyse and compile.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uvPEsudFTHku9e0XIl9H0w.png" /><figcaption>Binary Size Reduction for Microservices</figcaption></figure><p>Similar trends can be seen in <em>Renaissance</em> and <em>Dacapo</em> benchmark suites, which you can see in the charts below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*nobRO31rj7QS19NdnrRvmQ.png" /><figcaption>Binary Size Reduction for Dacapo</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BhhPNNvjYLUcUF8P_804_Q.png" /><figcaption>Binary Size Reduction for Renaissance</figcaption></figure><p>Across all three benchmark suites, binary size is reduced by <strong>6.35%</strong>, on average. The evaluation was conducted using GraalVM for JDK 24 on a dual-socket Intel Xeon E5–2630 v3 processor running at 2.40 GHz, with 8 physical / 16 logical cores per socket and 128 GB of main memory, on Oracle Linux Server release 7.3.</p><h3>Conclusion</h3><p>SkipFlow <a href="https://github.com/oracle/graal/pull/9821">is included</a> in GraalVM for JDK 24, but is not yet enabled by default. To enable it, you can use the flags -H:+TrackPrimitiveValues -H:+UsePredicates. However, SkipFlow will be enabled by default in GraalVM for JDK 25 and is already available in <a href="https://github.com/graalvm/oracle-graalvm-ea-builds/releases">Early Access builds</a>! 
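</p><p>If you build with the Native Image Maven plugin, these flags can be passed as build arguments. A sketch, assuming an existing native-maven-plugin setup:</p>

```xml
<!-- Sketch: enabling SkipFlow explicitly on GraalVM for JDK 24
     (assumes an existing native-maven-plugin configuration) -->
<configuration>
  <buildArgs>
    <buildArg>-H:+TrackPrimitiveValues</buildArg>
    <buildArg>-H:+UsePredicates</buildArg>
  </buildArgs>
</configuration>
```

<p>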
We encourage you to try it on your projects and share your feedback.</p><p>In the future, we plan to improve the SkipFlow analysis even further by using a more precise representation for primitive values, including the possible interval of values that a given variable can have.</p><p>If you are interested in learning more, you can take a look at the academic paper <a href="https://dl.acm.org/doi/10.1145/3696443.3708932">SkipFlow: Improving the Precision of Points-to Analysis using Primitive Values and Predicate Edges</a>, which was presented at <strong>CGO’25</strong>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f18ca98279c2" width="1" height="1" alt=""><hr><p><a href="https://medium.com/graalvm/skipflow-producing-smaller-executables-with-graalvm-f18ca98279c2">SkipFlow: Producing Smaller Executables with GraalVM</a> was originally published in <a href="https://medium.com/graalvm">graalvm</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What’s new in Graal Languages 24.2]]></title>
            <link>https://medium.com/graalvm/whats-new-in-graal-languages-24-2-328471fc4137?source=rss----7122626bf34b---4</link>
            <guid isPermaLink="false">https://medium.com/p/328471fc4137</guid>
            <category><![CDATA[graal]]></category>
            <category><![CDATA[graal-languages]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Alina Yurenko]]></dc:creator>
            <pubDate>Tue, 18 Mar 2025 15:25:17 GMT</pubDate>
            <atom:updated>2025-03-26T17:00:29.937Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*AvsfXqXmdLmQv5RkKx9T9A.png" /></figure><p>Today, along with <a href="https://medium.com/p/328471fc4137">GraalVM for JDK 24</a>, we are releasing version 24.2 of Graal Languages. This version is designed for use with GraalVM for JDK 24, but it is also compatible with the latest CPU releases of GraalVM for JDK 21, Oracle JDK 21, and OpenJDK 21. This release includes many exciting updates, including a Gradle plugin for GraalPy, scaling native Python across Java threads, a Continuation API in Espresso (Java on Truffle), the new Truffle Bytecode DSL, and more. Let’s take a look!</p><p>Alternatively, watch our release stream for the updates in this release and demos:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F_3a0QU2pkrA%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D_3a0QU2pkrA&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F_3a0QU2pkrA%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/812c18d2152e73238fe5e42f90fbd8d5/href">https://medium.com/media/812c18d2152e73238fe5e42f90fbd8d5/href</a></iframe><h3>Graal Languages in Java ☕️</h3><p>Starting from this release, users of GraalPy, GraalJS, and other Graal Languages must enable native access privileges to avoid warnings printed by the JVM. When using the module-path, pass the --enable-native-access=org.graalvm.truffle option to the java launcher, and when using the class-path, pass the --enable-native-access=ALL-UNNAMED option to resolve the new warning. Note that Truffle, the framework all Graal Languages are implemented in, automatically forwards the native access capability to all loaded languages and tools. Therefore, no further configuration is required. 
Denying native access with --illegal-native-access=deny will disable the optimizing runtime, and the slower fallback runtime will be used instead.</p><p><a href="https://www.graalvm.org/sdk/javadoc/org/graalvm/polyglot/Context.html">Context</a> and <a href="https://www.graalvm.org/sdk/javadoc/org/graalvm/polyglot/Engine.html">Engine</a> are now automatically closed when no longer strongly referenced. It is still recommended to close them explicitly whenever possible.</p><p>Moreover, the <a href="https://www.graalvm.org/sdk/javadoc/org/graalvm/polyglot/Value.html">Value</a> API has been extended: we added the ability to use Value#as(byte[].class) to copy the contents of a guest language byte buffer (Value#hasBufferElements()) to a new byte array. We also added support for the creation of strings from raw byte arrays and native memory using Value.fromByteBasedString(…) and Value.fromNativeString(…).</p><p>When embedding a Graal Language in a native image, language and instrument resources are now automatically included. By default, a separate resources folder is no longer created next to the image. See more in the <a href="https://www.graalvm.org/jdk24/reference-manual/embed-languages/">Embedding Languages guide</a>.</p><h3>GraalPy 🐍</h3><p>After adding a <a href="https://www.graalvm.org/jdk24/reference-manual/python/Embedding-Build-Tools/#graalpy-maven-plugin-configuration">Maven plugin</a> for GraalPy in the previous release, we now also <strong>ship a Gradle plugin</strong> that makes it easy to embed GraalPy in your Gradle projects!
🎉</p><p>You can simply add it to your Java projects as follows:</p><pre>plugins {<br>    application<br>    id(&quot;org.graalvm.python&quot;) version &quot;24.2.0&quot;<br>}<br><br>graalPy {<br>    packages = setOf(&quot;pygal==3.0.5&quot;)<br>}</pre><p>One of the great benefits of using GraalPy plugins is that they provide a way to easily configure the Python packages you want to use.</p><p>The new release of GraalPy also includes two new experimental features that can significantly boost the performance of Python in Java. GraalPy can now load native Python extensions multiple times in different contexts, which makes it possible to <strong>scale native Python across Java threads</strong>. For this, enable the python.IsolateNativeModules option and use the multi-context mode of GraalPy (we recommend starting with one GraalPy context per thread).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*beeBo7V8qWN8u1rM-_q_DQ.png" /><figcaption>Performance of native Python extensions in GraalPy across multiple contexts.</figcaption></figure><p>The second experimental feature is a new integration of Apache Arrow, a language-independent data representation, in GraalPy. With this, it is possible to directly share large amounts of data between Java and Python <strong>without having to copy between Java and Python</strong> object layouts. This can significantly reduce the time to process shared data, for example when using popular Python packages such as Pandas from Java:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GPmXg1mLfRc_HLj6sDSDOA.png" /><figcaption>Reduced memory and CPU usage with direct sharing</figcaption></figure><p>See the <a href="https://github.com/oracle/graalpython/blob/master/docs/user/Native-Extensions.md">native extensions documentation</a> for more details. 
We will also <a href="https://github.com/graalvm/graal-languages-demos/pull/15">add a guide on this to our demo repository soon</a>.</p><p>Besides, we’ve also made several improvements to how <a href="https://github.com/oracle/graalpython/blob/master/docs/user/Interoperability.md#interacting-with-foreign-objects-from-python-scripts">foreign types are now treated in Python</a>. Foreign objects are now given a Python class corresponding to their interop traits. This means, for example, that <strong>Java </strong><strong>Map or </strong><strong>List objects now appear as subclasses of Python&#39;s </strong><strong>dict or </strong><strong>list respectively</strong>, and they have all the expected methods on the Python side and behave as much as possible as if they were Python objects. We also added polyglot.register_interop_type and @polyglot.interop_type to add Python methods to given foreign classes when they are called from Python code. This makes it easier to adapt foreign interfaces to be more idiomatic in Python. 
Finally, when calling a method on a foreign object in Python code, Python methods are now prioritized over foreign members.</p><p>Check out the <a href="https://github.com/oracle/graalpython/blob/master/CHANGELOG.md#version-2420">GraalPy changelog</a> for all updates in this release.</p><p>To make it easier for you to get started with GraalPy, we’ve added several new demos and guides — check them out:</p><ul><li><a href="https://github.com/graalvm/graal-languages-demos/blob/main/graalpy/graalpy-starter">Build a minimal Java application that embeds GraalPy</a></li><li><a href="https://github.com/graalvm/graal-languages-demos/blob/main/graalpy/graalpy-openai-starter">Build a Java application that embeds </a><a href="https://github.com/graalvm/graal-languages-demos/blob/main/graalpy/graalpy-openai-starter">openai Python package with GraalPy</a></li><li><a href="https://github.com/graalvm/graal-languages-demos/blob/main/graalpy/graalpy-jbang-qrcode">Embed </a><a href="https://github.com/graalvm/graal-languages-demos/blob/main/graalpy/graalpy-jbang-qrcode">qrcode Python package with GraalPy in JBang</a></li><li><a href="https://github.com/graalvm/graal-languages-demos/blob/main/graalpy/graalpy-micronaut-pygal-charts">Embed SVG charting library </a><a href="https://github.com/graalvm/graal-languages-demos/blob/main/graalpy/graalpy-micronaut-pygal-charts">pygal with GraalPy in Micronaut</a></li><li><a href="https://github.com/graalvm/graal-languages-demos/blob/main/graalpy/graalpy-spring-boot-pygal-charts">Embed SVG charting library </a><a href="https://github.com/graalvm/graal-languages-demos/blob/main/graalpy/graalpy-spring-boot-pygal-charts">pygal with GraalPy in Spring Boot</a>.</li></ul><p>See more on GitHub:</p><p><a href="https://github.com/graalvm/graal-languages-demos/tree/main/graalpy">graal-languages-demos/graalpy at main · graalvm/graal-languages-demos</a></p><h3>GraalJS and GraalWasm 🌍</h3><p>We implemented several ECMAScript proposals, such as Error.isError, 
Math.sumPrecise, Atomics.pause, Promise.try, Uint8Array to/from base64 and hex, RegExp.escape, Iterator Sequencing, Source Phase Imports, and Regular Expression Pattern Modifiers.</p><p>GraalJS now also supports <a href="https://github.com/WebAssembly/esm-integration">WebAssembly/ES module integration</a>, so Wasm modules can be loaded via import statements. This, for example, makes it even easier to embed Rust, Go, or C++ code compiled to Wasm in a Java application. Here’s an example of this being used in a generated JavaScript binding for a Rust library that now just works on GraalJS:</p><pre>import * as wasm from &quot;./photon_bg.wasm&quot;;<br>export * from &quot;./photon_bg.js&quot;;<br>import { __wbg_set_wasm } from &quot;./photon_bg.js&quot;;<br>__wbg_set_wasm(wasm);<br>wasm.__wbindgen_start();</pre><p>We also updated the version of our Node.js runtime to 22.13.1.</p><p>For more details, take a look at the changelogs for <a href="https://github.com/oracle/graaljs/blob/master/CHANGELOG.md#version-2420">GraalJS</a> and <a href="https://github.com/oracle/graal/blob/master/wasm/CHANGELOG.md#version-2420">GraalWasm</a>.</p><h3>Espresso ☕️</h3><p>In Espresso, our Java runtime built on Truffle, we added an exciting new experimental feature: <strong>the</strong><a href="https://www.graalvm.org/reference-manual/espresso/continuations/"><strong>Continuation API</strong></a>. This API lets you pause a running program, save its state, and then resume it later. 
The heap objects can be serialized to resume execution in a different JVM instance running the same code (for example, after a restart).</p><p>There are several use cases where continuations could be particularly interesting:</p><ul><li>Speculative execution: speeding up computations where CPU-intensive work is blocked by long waiting periods.</li><li>Implementing coroutines/yield.</li><li>Web request handling: maintaining state across HTTP requests without global variables or session storage.</li><li>Undo/redo functionality: capturing application state at various points to enable undoing or redoing actions.</li><li>Distributed computing: serializing continuation state, allowing distributed systems to migrate units of work.</li><li>Live programming/hot code swapping.</li></ul><p>To get started with the Continuation API, add org.graalvm.espresso:continuations:24.2.0 to your pom.xml/build.gradle with provided scope. See also our <a href="https://www.graalvm.org/reference-manual/espresso/continuations/serialization/">serialization example</a>.</p><p>You can find more Espresso updates in the <a href="https://github.com/oracle/graal/blob/master/espresso/CHANGELOG.md#version-2420">changelog</a>.</p><h3>TruffleRuby 💎</h3><p>In this release, we updated TruffleRuby to Ruby 3.3.5 and switched to the Panama NFI backend for improved C extension performance.
With this new backend, C extensions such as sqlite3, trilogy, and json are now around <strong>2x to 3x faster</strong> thanks to the more efficient Panama upcalls in JVM mode.</p><p>See more updates, in particular compatibility improvements and bug fixes, in the <a href="https://github.com/oracle/truffleruby/blob/master/CHANGELOG.md">changelog</a>.</p><h3>Truffle Language Implementation Framework</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kgBbZUuganb3GO9DI8UTOA.png" /><figcaption>Truffle Bytecode DSL: From source to AST to bytecode.</figcaption></figure><p>We added the <a href="https://www.graalvm.org/truffle/javadoc/com/oracle/truffle/api/bytecode/package-summary.html">Bytecode DSL</a>, a new framework to simplify implementing bytecode interpreters on top of Truffle. The goal is to generate the tricky and tedious details of a bytecode interpreter from an AST-like specification. The generated interpreter supports a variety of features, including tiered interpretation, bytecode quickening, boxing elimination, instrumentation, continuations, and serialization. Additionally, the generated code implements several optimizations for interpreter speed and memory footprint. The next step for us is to put the new DSL to use in GraalPy and other Graal Languages.
If you’d like to use it, please see our extensive <a href="https://github.com/oracle/graal/blob/master/truffle/docs/bytecode_dsl/UserGuide.md">user guides</a> and <a href="https://github.com/oracle/graal/blob/master/truffle/docs/bytecode_dsl/BytecodeDSL.md">tutorials</a> to get started.</p><h3>Community and Ecosystem</h3><ul><li>With GraalPy you can easily create interactive SVG charts with Pygal in your Java applications — see how for <a href="https://github.com/graalvm/graal-languages-demos/tree/main/graalpy/graalpy-micronaut-pygal-charts">Micronaut</a> and <a href="https://github.com/graalvm/graal-languages-demos/tree/main/graalpy/graalpy-spring-boot-pygal-charts">Spring Boot</a>.</li><li>GraalPy and GraalWasm are featured as Innovators in <a href="https://www.infoq.com/articles/java-trends-report-2024/">InfoQ’s Java 2024 Trends Report</a>! 🚀</li><li>See our <a href="https://www.youtube.com/watch?v=IdoFsS-mpVw">Jfokus</a> talk for the best practices and tips for extending your Java applications with GraalPy.</li><li>Christian Humer, Truffle project lead, <a href="https://airhacks.fm/#episode_333">talked</a> to Adam Bien on his airhacks.fm podcast about GraalVM, Truffle, Futamura projections, and more.</li></ul><h3>Conclusion</h3><p>We’d like to take this opportunity to thank our amazing contributors and community for all the feedback, suggestions, and contributions that went into this release.</p><p>If you have feedback for this release or suggestions for features you would like to see in future releases, please share them with us on <a href="https://graalvm.org/slack-invitation">Slack</a>, <a href="https://github.com/oracle/graal">GitHub</a>, or <a href="https://bsky.app/profile/graalvm.org">BlueSky</a>.</p><p>— the GraalVM team</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=328471fc4137" width="1" height="1" alt=""><hr><p><a
href="https://medium.com/graalvm/whats-new-in-graal-languages-24-2-328471fc4137">What’s new in Graal Languages 24.2</a> was originally published in <a href="https://medium.com/graalvm">graalvm</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Welcome, GraalVM for JDK 24!]]></title>
            <link>https://medium.com/graalvm/welcome-graalvm-for-jdk-24-7c829fe98ea1?source=rss----7122626bf34b---4</link>
            <guid isPermaLink="false">https://medium.com/p/7c829fe98ea1</guid>
            <category><![CDATA[graalvm]]></category>
            <category><![CDATA[java]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Alina Yurenko]]></dc:creator>
            <pubDate>Tue, 18 Mar 2025 15:24:24 GMT</pubDate>
            <atom:updated>2025-03-26T17:00:57.813Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZImtf5exKj7dHC1AzjsC9w.png" /></figure><p>Today we are releasing <a href="https://www.graalvm.org/downloads/">GraalVM for JDK 24</a>!</p><p>As always, we release GraalVM on the same day that Java 24 is released, so you can use GraalVM as your Java 24 JDK.</p><p>You can already <a href="https://www.graalvm.org/downloads/">download GraalVM</a> and check <a href="https://www.graalvm.org/release-notes/JDK_24/">the release notes</a> for more details. Keep reading this blog post to see what’s new in this release!</p><p>Alternatively, watch our release stream for the updates in this release and demos:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F_3a0QU2pkrA%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D_3a0QU2pkrA&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F_3a0QU2pkrA%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/812c18d2152e73238fe5e42f90fbd8d5/href">https://medium.com/media/812c18d2152e73238fe5e42f90fbd8d5/href</a></iframe><h3>More Peak Performance with Machine Learning 🙌</h3><p>You might know about <a href="https://medium.com/graalvm/machine-learning-driven-static-profiling-for-native-image-d7fc13bb04e2">ML-based profile inference in Native Image</a>: in the absence of user-supplied profiling information, Native Image in Oracle GraalVM uses a pre-trained ML model to predict the execution probabilities of the control flow graph branches. This enables powerful optimizations, improving the peak performance of native images out of the box. 
<strong>In this release, we are introducing a new generation of ML-enabled profile inference — </strong><a href="https://2025.cgo.org/details/cgo-2025-papers/45/GraalNN-Context-Sensitive-Static-Profiling-with-Graph-Neural-Networks"><strong>GraalNN</strong></a><strong>. With GraalNN, we are observing a ~7.9% peak performance improvement on average on a wide range of microservices benchmarks (Micronaut, Spring, Quarkus, and others).</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CVzkm_q9Iq8aU1ropUCcCA.png" /><figcaption>The impact of ML-based profile inference in Native Image. The benchmark is running a Micronaut helloworld application with ML-enabled PGO compared to the baseline (no profiling information).</figcaption></figure><p>To enable this optimization, pass the -O3 (optimize for peak performance) flag to Oracle GraalVM Native Image. Note that this approach doesn’t require a training run and a follow-up build — you can get improved peak throughput with just one build.<br>For more implementation details, you can see our paper “<a href="https://2025.cgo.org/details/cgo-2025-papers/45/GraalNN-Context-Sensitive-Static-Profiling-with-Graph-Neural-Networks">GraalNN: Context-Sensitive Static Profiling with Graph Neural Networks</a>”, which we will present at this year’s International Symposium on Code Generation and Optimization (CGO). We are working on further improving this optimization, so expect even higher performance in GraalVM for JDK 25!</p><h3>Even Smaller Native Executables 📦</h3><p>Points-to analysis is a crucial part of the GraalVM Native Image build process.
While Java projects might contain tens of thousands of classes coming from dependencies, we can avoid compiling all the methods by analyzing which classes, methods, and fields are reachable, i.e., actually needed at runtime.</p><p>In this release, we are introducing SkipFlow — an extension of our points-to analysis that tracks primitive values and evaluates branching conditions during the analysis. It allows us to produce <strong>~6.35% smaller binaries</strong> without increasing the build time. In fact, image builds tend to be even slightly faster with SkipFlow enabled because there are fewer methods to analyze and compile.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*oktxeHWamu3sOj7FEnB79A.png" /><figcaption>The impact of the SkipFlow optimization on the size of native executables</figcaption></figure><p>You can find more details about this optimization in our CGO paper, “<a href="https://2025.cgo.org/details/cgo-2025-papers/17/SkipFlow-Improving-the-Precision-of-Points-to-Analysis-using-Primitive-Values-and-Pr">SkipFlow: Improving the Precision of Points-to Analysis using Primitive Values and Predicate Edges</a>”, and a follow-up blog post that we will share soon.</p><p>This optimization is included in GraalVM 24 as experimental and not yet enabled by default. To test it out, you can use the flags -H:+TrackPrimitiveValues and -H:+UsePredicates. Feel free to share your feedback with us, as we plan to enable it by default in GraalVM for JDK 25.</p><h3>Premain Support for Java Agents 🥷</h3><p>One of the common requests from both our users and partners was to extend Java agent support in Native Image.
Up to now, agents have been supported by Native Image but with some constraints:</p><ul><li>The agent had to run and transform all classes at build time;</li><li>The premain of the agent was only executed at build time;</li><li>All of the classes needed to be present in the agent jar;</li><li>The agent could not manipulate the classes used by Native Image.</li></ul><p>With this release, we are taking the first step towards <strong>agent support at runtime</strong>. We have added <a href="https://github.com/oracle/graal/pull/8988">support for premain for static agents</a>, and it currently works as follows:</p><ul><li>At compile time, use -H:PremainClasses= to set the premain classes;</li><li>At run time, use -XX-premain:[class]:[options] to set premain runtime options, which are passed along with the main class’s arguments.</li></ul><p>This now allows premain methods to be executed when the native image actually runs.</p><p>We would like to thank Alibaba for their contributions to this feature.</p><p>We have more work planned in GraalVM for JDK 25.
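Putting the two flags above together, a build-and-run sequence could look like the following sketch (com.example.MyAgent, app.jar, and myapp are placeholder names, not from the release notes):

```shell
# Build time: register the agent's premain class with Native Image
native-image -H:PremainClasses=com.example.MyAgent -jar app.jar myapp

# Run time: pass premain options before the application's own arguments
./myapp -XX-premain:com.example.MyAgent:option=value --app-arg
```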
In the meantime, you can help us by telling us which agents specifically you would like to use with Native Image — let us know in the <a href="https://github.com/oracle/graal/issues/8177">GitHub ticket</a> or via our <a href="https://www.graalvm.org/community/">community platforms</a>.</p><h3>Vector API support in Native Image 🚀</h3><p>The <a href="https://openjdk.org/jeps/469">Vector API</a> enables vector computations that reliably compile to optimal vector instructions, resulting in performance superior to equivalent scalar computations.</p><p>In this release, we have continued our work on optimizing Vector API operations on GraalVM, with more operations now efficiently compiled to SIMD code where supported by the target hardware:</p><ul><li>operations on Vector API masks,</li><li>masked Vector API loads and stores,</li><li>general Vector API rearrange operations,</li><li>and Vector API loads/stores to and from memory segments.</li></ul><p>Additionally, we’re excited to announce that <strong>Vector API support in Native Image is now on par with JIT!</strong>🎉 To enable Vector API optimizations when building native images, pass the --add-modules jdk.incubator.vector and -H:+VectorAPISupport options at build time.</p><p>One of the areas where the Vector API really shines is large language models, as most of the compute-heavy operations there are matrix and vector multiplications. As an example, you can take a look at <a href="https://github.com/mukel/llama3.java">Llama3.java</a>, a hobby project of our colleague, <a href="https://github.com/mukel">Alfonso² Peterssen</a>. It’s a one-file local LLM inference engine implemented in pure Java, utilizing the latest features of Java’s Vector and FFM APIs and powerful optimizations coming from GraalVM.
By combining all of these, you get a blazing fast local LLM assistant, compiled with GraalVM Native Image, running purely on CPU:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FzgAMxC7lzkc%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DzgAMxC7lzkc&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FzgAMxC7lzkc%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/0e14a235309cad6789d97f35b4df8e6e/href">https://medium.com/media/0e14a235309cad6789d97f35b4df8e6e/href</a></iframe><p>You can see how fast the engine is: with the optimizations coming from GraalVM, the model “responds” blazing fast. In this demo, the 1B-parameter Llama 3.2 model with 4.5 bits/weight quantization runs at 52.25 tokens/s. Better yet, by utilizing more aggressive AOT optimizations, such as model preloading at build time, you can <strong>completely </strong><a href="https://github.com/mukel/llama3.java?tab=readme-ov-file#aot-model-preloading"><strong>eliminate</strong></a><strong> startup overhead</strong>.</p><p>Please note that Vector API support on GraalVM is considered experimental. We would be happy to <a href="https://github.com/oracle/graal/issues/10285">receive feedback</a> as we further expand the set of optimized Vector API operations.</p><h3>More efficient applications with Native Image 🌿</h3><p>Native Image is well-known for giving applications fast startup, low memory and CPU usage, and compact packaging. Lately, there is another aspect of Native Image where we see increasing interest from the community — resource savings. Thanks to AOT compilation and optimizations, Native Image applications can run more efficiently, reducing resource consumption — including electricity usage 🔋.
As an example, we measured energy consumption of <a href="https://github.com/spring-projects/spring-petclinic">Spring PetClinic</a> running on JIT and on Native Image in several scenarios with increasing load:</p><ul><li>Scenario A: 1 curl request, 1s after curl response</li><li>Scenario B: 1 curl request, 10s total run time</li><li>Scenario C: 4000 requests/s, 20 seconds</li><li>Scenario D: 4000 requests/s, 100 seconds</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hvGSrEHJyZbT1VzW4WkfCg.png" /><figcaption>Energy consumption of a <a href="https://github.com/spring-projects/spring-petclinic">Spring PetClinic</a> application running under increasing load on JIT vs AOT. The measurements were performed using <a href="https://github.com/ColinIanKing/powerstat">powerstat</a> on an Intel i7-9850H @ 2.60GHz machine, with the CPU frequency fixed to 2GHz.</figcaption></figure><p>As you can see, the natively compiled version of the application consistently <strong>consumes less energy, even under constant load</strong>. Our initial findings are also aligned with a community study performed by <a href="https://ionutbalosin.com/2024/03/analyzing-jvm-energy-consumption-for-jdk-21-an-empirical-study/">Ionut Balosin</a>.</p><p>If you are looking for a way to reduce resource usage of your applications, consider compiling them with GraalVM.</p><h3>New security features in Native Image 🛡️</h3><p>As you might know, Oracle GraalVM Native Image offers SBOM support, which is essential for vulnerability scanning.
To generate an SBOM file in CycloneDX format, pass the --enable-sbom flag to embed it into a native executable, or --enable-sbom=classpath,export if you want to add it to the resources path or export it as JSON.</p><p>With this flag you can generate an SBOM for any project, but <strong>for even </strong><a href="https://www.graalvm.org/release-notes/JDK_24/"><strong>more accurate SBOMs</strong></a><strong>, we recommend using the </strong><a href="https://graalvm.github.io/native-build-tools/latest/maven-plugin.html"><strong>Maven plugin for GraalVM Native Image</strong></a>. The plugin creates a “baseline” SBOM by using the cyclonedx-maven-plugin. The baseline SBOM defines which package names belong to a component, helping Native Image associate classes with their respective components — a task that can be tricky when using shading or fat JARs. In this collaborative approach, Native Image is also able to prune components and dependencies more aggressively to produce a minimal SBOM.</p><p>These enhancements are available starting with plugin version 0.10.4 and are enabled by default when using the --enable-sbom option.</p><p>In this release, we also <strong>added support for dependency trees</strong>: the SBOM now provides information about component relationships through the CycloneDX dependencies field. This dependency information is derived from Native Image’s static analysis call graph. Analyzing the dependency graph can help you understand why specific components are included in your application. For example, discovering an unexpected component in the SBOM allows for tracing its inclusion through the dependency graph to identify which parts of the application are using it.</p><p>Additionally, we added support for <strong>class-level metadata for SBOM components</strong>. You can enable it with --enable-sbom=class-level.
This metadata includes Java modules, classes, interfaces, records, annotations, enums, constructors, fields, and methods that are part of the native executable. This information can be useful for advanced vulnerability scanning: it helps determine whether a native executable containing an affected SBOM component is actually vulnerable, reducing the false positive rate, and it gives you a better understanding of which components are included in the native executable. Note that including class-level metadata increases the SBOM size substantially — include it only when you need detailed information about your native executable’s contents.</p><p><strong>We have also added SBOM support to </strong><a href="https://github.com/graalvm/setup-graalvm"><strong>GraalVM’s GitHub action</strong></a>. You can now automatically generate a highly <a href="https://www.graalvm.org/release-notes/JDK_24/">accurate SBOM</a> with Native Image and submit it to <a href="https://docs.github.com/en/rest/dependency-graph/dependency-submission?apiVersion=2022-11-28">GitHub’s dependency submission API</a>. This enables simple integration with all the powerful security tooling that GitHub provides:</p><ul><li>Vulnerability tracking with <a href="https://docs.github.com/en/code-security/getting-started/dependabot-quickstart-guide#about-dependabot">GitHub’s Dependabot</a>;</li><li>Dependency tracking with <a href="https://docs.github.com/en/code-security/supply-chain-security/understanding-your-software-supply-chain/about-the-dependency-graph">GitHub’s Dependency Graph</a>.
The Dependency Graph shows the union of the submitted SBOM components and what GitHub infers automatically.</li></ul><p>You can activate this feature in setup-graalvm with the option native-image-enable-sbom.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Z5PyC4AFZJ3rkvw3uLG9Mg.png" /></figure><h3>Build reports</h3><p>To better understand your Native Image builds and the contents of produced executables, you can use <strong>Build reports</strong>. These are HTML reports that can be generated alongside your Native Image build, providing details about the following:</p><ul><li><strong>Build overview</strong> such as the build environment, analysis results, and resource usage, which can also be exported for integration purposes;</li><li><strong>Code area and image heap</strong>, which can help you understand which methods and objects make up your application;</li><li><strong>Resources tab</strong> showing included, missing, and injected (such as by frameworks) resources;</li><li><strong>SBOM information</strong> that can also be exported as JSON (in Oracle GraalVM);</li><li><strong>Profiling information visualization</strong> represented as a flame graph and histogram (requires supplying profiles).</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3k0csJzsDLbpzPLqMfREJw.gif" /><figcaption>Native Image Build reports</figcaption></figure><p>To get started with build reports, pass --emit build-report when building an application.
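For example, a minimal invocation with the flag above could look like this (app.jar and myapp are placeholder names):

```shell
# Emit an HTML build report alongside the produced native executable
native-image --emit build-report -jar app.jar myapp
```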
To learn more about Build Reports, navigate to the <a href="https://www.graalvm.org/latest/reference-manual/native-image/overview/build-report/">docs</a>.</p><h3>Debugging updates</h3><p>In this release, we introduced several debugging updates:</p><ul><li>We have added a GDB Python script (gdb-debughelpers.py) to improve the Native Image debugging experience — <a href="https://www.graalvm.org/dev/reference-manual/native-image/guides/debug-native-image-process-with-python-helper-script/">learn more</a>.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kuTie6PiqfozHml_PqCEng.gif" /><figcaption>Debugging native images with a GDB Python script</figcaption></figure><ul><li>We added support for emitting Windows x64 unwind info. This enables stack walking in native tooling, such as debuggers and profilers.</li><li>We updated debug info from DWARF4 to DWARF5 and now store type information in DWARF type units. This helped us reduce the size of debugging information by 30% 💪</li></ul><h3>Monitoring updates 📈</h3><p>We have added experimental support for jcmd on Linux and macOS. jcmd is used to send diagnostic command requests, which are useful for controlling Java Flight Recorder recordings, troubleshooting, and diagnosing applications. To try it out, add --enable-monitoring=jcmd to your application build arguments. See <a href="https://www.graalvm.org/jdk24/reference-manual/native-image/debugging-and-diagnostics/jcmd/">the documentation</a> for more details.</p><p>We would like to thank Red Hat for their contributions to this feature.</p><h3>Usability 👩‍💻</h3><ul><li>We have removed the &quot;customTargetConstructorClass&quot; field from the serialization JSON metadata. All possible constructors are now registered by default when registering a type for serialization.
RuntimeSerialization.registerWithTargetConstructorClass is now deprecated.</li><li>Serialization JSON reachability metadata can now be included in reflection metadata via the &quot;serializable&quot; flag.</li></ul><p>Here is how such an entry would look for a regular serialized.Type:</p><pre>{<br>  &quot;reflection&quot;: [<br>    {<br>      &quot;type&quot;: &quot;serialized.Type&quot;,<br>      &quot;serializable&quot;: true<br>    }<br>  ]<br>}</pre><p>and for a proxy class:</p><pre>{<br>  &quot;reflection&quot;: [<br>    {<br>      &quot;type&quot;: {<br>        &quot;proxy&quot;: [&quot;FullyQualifiedInterface1&quot;, &quot;...&quot;, &quot;FullyQualifiedInterfaceN&quot;],<br>        &quot;serializable&quot;: true<br>      }<br>    }<br>  ]<br>}</pre><h3>Miscellaneous</h3><ul><li>Native Image now targets armv8.1-a by default on AArch64. Use -march=compatibility for best compatibility or -march=native for best performance on machines with the same CPU features.</li><li>We added support for Java module system-based service loading — for example, you can now specify module Foo { provides MyService with org.example.MyServiceImpl; } in module-info.java.</li></ul><h3>Community and Ecosystem</h3><ul><li>According to <a href="https://www.infoq.com/articles/java-trends-report-2024/">InfoQ’s Java 2024 Trends Report</a>, GraalVM, and specifically Native Image, is now considered a technology being used by the Early Majority. We are happy to see that the ongoing work of our team and community to make Native Image stable, fast, and production-ready is being recognized. Along with Native Image, we are glad to see GraalPy and GraalWasm featured in Innovators!
🚀</li><li>Great news for maintainers of projects hosted on GitHub: you can now easily <a href="https://github.com/actions/setup-java/releases/tag/v4.4.0">use</a> GraalVM with setup-java!🎉</li><li>While we are on the topic of GitHub, you can now build with GraalVM using GitHub’s Linux ARM64 hosted runners, and we already added support for it in setup-graalvm — <a href="https://github.com/graalvm/setup-graalvm">get started</a>.</li><li>We are happy to welcome <a href="https://bsky.app/profile/sandraahlgrimm.bsky.social">Sandra Ahlgrimm</a> and Microsoft to the <a href="https://www.graalvm.org/community/advisory-board/">GraalVM Advisory Board</a>!</li><li>Micronaut 4.7 <a href="https://micronaut.io/2024/11/14/micronaut-framework-4-7-0-released/">added</a> experimental support for LangChain4j and integration with GraalPy, so you can invoke Python code from Java easily in a Micronaut application.</li><li>AWS CRT (Common Runtime) package for Java <a href="https://aws.amazon.com/blogs/developer/aws-crt-client-for-java-adds-graalvm-native-image-support/">added</a> support for Native Image. Cold start request processing time using GraalVM Native Image experienced a 4X reduction for 90% of the requests.
Warm start requests took 18–25% less time to process using the GraalVM Native Image.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*4kp3Ut-odHWn_Cu8.png" /><figcaption><a href="https://aws.amazon.com/blogs/developer/aws-crt-client-for-java-adds-graalvm-native-image-support/">GraalVM Native Image support in AWS CRT Client for Java</a></figcaption></figure><ul><li>With Spring Boot 3.4, SBOM is now auto-detected when building a native image with --enable-sbom=sbom.</li><li>Quarkus introduced support for Model Context Protocol, which enables AI models to interact with applications and services in a decoupled way, and works great with GraalVM Native Image — <a href="https://quarkus.io/blog/introducing-mcp-servers/">see how</a>.</li></ul><h3>Conclusion</h3><p>We’d like to take this opportunity to thank our amazing contributors and community for all the feedback, suggestions, and contributions that went into this release.</p><p>If you have feedback for this release or suggestions for features that you would like to see in future releases, please share them with us on <a href="https://graalvm.org/slack-invitation">Slack</a>, <a href="https://github.com/oracle/graal">GitHub</a>, or <a href="https://bsky.app/profile/graalvm.org">BlueSky</a>.</p><p>Now go ahead and try the <a href="https://www.graalvm.org/downloads/">new GraalVM</a>! 🚀</p><p>— the GraalVM team</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7c829fe98ea1" width="1" height="1" alt=""><hr><p><a href="https://medium.com/graalvm/welcome-graalvm-for-jdk-24-7c829fe98ea1">Welcome, GraalVM for JDK 24!🚀</a> was originally published in <a href="https://medium.com/graalvm">graalvm</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[GraalVM in 2024: A year in review]]></title>
            <link>https://medium.com/graalvm/graalvm-in-2024-a-year-in-review-fe8dff967d82?source=rss----7122626bf34b---4</link>
            <guid isPermaLink="false">https://medium.com/p/fe8dff967d82</guid>
            <category><![CDATA[java]]></category>
            <category><![CDATA[graalvm]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[end-of-year]]></category>
            <dc:creator><![CDATA[Alina Yurenko]]></dc:creator>
            <pubDate>Thu, 19 Dec 2024 12:51:46 GMT</pubDate>
            <atom:updated>2024-12-19T12:51:46.410Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XkEv1by1p4HswXhh5lCwVQ.png" /></figure><p>The year is coming to an end, and many exciting things have happened in the GraalVM project and in our community! In this blog post, we want to share with you some of the highlights.</p><h3>Graal JIT in Oracle JDK 23 🚀</h3><p>Starting with Oracle JDK 23, the Oracle GraalVM JIT compiler is now included among the compilers available as <a href="https://blogs.oracle.com/java/post/including-the-graal-jit-in-oracle-jdk-23">part of the Oracle JDK</a>.</p><p>This means that Oracle JDK users can easily get access to the advanced performance optimizations offered by GraalVM!🎉 There are several studies showcasing the performance benefits of using Oracle GraalVM JIT — you can check out the <a href="https://renaissance.dev/">Renaissance benchmark suite</a>, or the <a href="https://ionutbalosin.com/2024/02/jvm-performance-comparison-for-jdk-21/">performance study</a> by Ionut Balosin and Florin Blanaru.</p><p>Now it’s time for you to take Graal JIT for a spin!</p><h3>Faster than ever on JIT and AOT 🏁</h3><p>Graal JIT is known to be very fast. We love <a href="https://github.com/oracle/graal/issues/10298#issuecomment-2538928355">seeing</a> users discover this by chance and observe performance gains with zero tuning. It is that easy!</p><p>You might know that we have this big vision of AOT at the speed of JIT, and with every release we are working to that end. We are constantly tracking multiple performance metrics to make sure that with Native Image you get the best possible peak performance in addition to fast startup and low memory usage. One of our benchmark applications is <a href="https://github.com/spring-projects/spring-petclinic">Spring PetClinic</a>.
For GraalVM for JDK 23, we benchmarked it on Ampere A1 machines, and here are the results:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/770/0*1FhOUDcvZTJ7ky5Y.png" /><figcaption>Performance of Spring PetClinic with Oracle GraalVM Native Image and GraalVM CE JIT. The benchmarking experiment ran the latest <a href="https://github.com/spring-projects/spring-petclinic">Spring PetClinic</a> on Ampere A1 servers.</figcaption></figure><p>More optimizations coming up in GraalVM for JDK 24!😎</p><h3>Parsing 1 Billion Rows in Java in 0.3 Seconds 🤯</h3><p>Speaking of speed, we can’t leave out the <a href="https://github.com/gunnarmorling/1brc">1BRC 1️⃣🐝🏎️</a> challenge, which concluded at the beginning of the year. There are many great blog posts dissecting the competition and each of the performance optimizations used by the contestants. Most of them suggest that before any manual tuning, you should start by choosing an advanced compiler that will give you performance gains out of the box, and Graal JIT shone throughout the competition, both in JIT and AOT modes.
If you are looking for a fun and highly educational video to watch during your Christmas break, may I suggest that you go with a 1BRC deep dive from <a href="https://bsky.app/profile/thomaswue.dev">Thomas Wuerthinger</a> and <a href="https://bsky.app/profile/royvanrijn.com">Roy van Rijn</a> at Devoxx Belgium:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F_w4-BqeeC0k%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D_w4-BqeeC0k&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F_w4-BqeeC0k%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/0d2968cfa8d12034ad4c759d52a97744/href">https://medium.com/media/0d2968cfa8d12034ad4c759d52a97744/href</a></iframe><h3>Native Image developer experience 👩‍💻</h3><p>The developer experience of working with Native Image is one of our priorities, and it has evolved significantly in recent years. We are always working to reduce the time and resources needed for builds, and with every release you can see improvements. It also doesn’t hurt if you have a very good machine, as <a href="https://bsky.app/profile/marcushellberg.dev/post/3laq4apcbu222">observed by Marcus Hellberg</a> :)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*sEOJX5NVR6cZj0kkPhjH7w.gif" /><figcaption>Build time of a Spring Boot &amp; Vaadin application on Apple M4 Max vs M1 Pro</figcaption></figure><p>We know that, with more and more users running GraalVM in production, a smooth experience with libraries is crucial. To that end, Native Build Tools (NBT), our Maven and Gradle plugins, now enable access to the <a href="https://github.com/oracle/graalvm-reachability-metadata">GraalVM Reachability Metadata</a> repository by default.
This means that for projects that use NBT (and frameworks that rely on NBT), configuration for known libraries will be resolved automatically. We also actively work with framework and library teams to expand what can be easily built with Native Image, and smooth out the experience. Currently, our community-maintained list of <a href="https://www.graalvm.org/native-image/libraries-and-frameworks/">frameworks and libraries integrating with Native Image</a> contains over 200 projects!🤯</p><p>We are also working on <a href="https://github.com/orgs/oracle/projects/6">several projects</a> that will improve Native Image configuration and usability.</p><p>There were several great new Native Image features introduced in 2024, such as a <a href="https://medium.com/graalvm/welcome-graalvm-for-jdk-23-203928491b2b#f182">new compacting garbage collector</a>, a <a href="https://medium.com/graalvm/welcome-graalvm-for-jdk-23-203928491b2b#6ebb">new optimization level for size</a>, and others — you can learn more about them in our release blog posts (<a href="https://medium.com/graalvm/welcome-graalvm-for-jdk-23-203928491b2b">GraalVM for JDK 23</a>, <a href="https://medium.com/graalvm/welcome-graalvm-for-jdk-22-8a48849f054c">GraalVM for JDK 22</a>).</p><p>Expect even more DX improvements in 2025!</p><h3>Graal Languages</h3><p><a href="https://github.com/graalvm/graal-languages-demos">GraalPy and GraalWasm</a> are now stable and ready for production workloads!🎉 They have joined Java and JavaScript as our supported Graal languages.</p><p>This year we’ve spent a lot of effort on improving the <a href="https://www.graalvm.org/python/compatibility/">compatibility</a> and developer experience of GraalPy. You might remember that embedding Graal languages in Java is just a matter of adding two Maven dependencies — org.graalvm.polyglot:polyglot and, for example, org.graalvm.polyglot:python.
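As a sketch, the two dependencies mentioned above could be declared in a pom.xml like this (the version shown is illustrative; pick the Polyglot API version matching your GraalVM setup):

```xml
<!-- Polyglot API entry point -->
<dependency>
    <groupId>org.graalvm.polyglot</groupId>
    <artifactId>polyglot</artifactId>
    <version>24.1.1</version>
</dependency>
<!-- GraalPy language support -->
<dependency>
    <groupId>org.graalvm.polyglot</groupId>
    <artifactId>python</artifactId>
    <version>24.1.1</version>
    <type>pom</type>
</dependency>
```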
You can also use our <a href="https://www.graalvm.org/latest/reference-manual/python/Embedding-Build-Tools/#graalpy-maven-plugin">graalpy-maven-plugin</a> (or the GraalPy Gradle plugin) to conveniently configure the embedding as part of your normal Java build configuration.</p><p>Additionally, you can now use GraalPy within the setup-python GitHub Action!</p><p>Another major update in the languages space is that after two years of R&amp;D we’ve finally merged the <a href="https://github.com/oracle/graal/blob/master/truffle/docs/bytecode_dsl/BytecodeDSL.md">Bytecode DSL</a>. Bytecode interpreters have the same peak performance as AST interpreters, but they can be encoded with a smaller memory footprint and can benefit from additional performance optimizations.</p><p>As the easiest way to get started with Graal languages, try our demos:</p><p><a href="https://github.com/graalvm/graal-languages-demos">GitHub - graalvm/graal-languages-demos: Graal Languages - Demos and Guides</a></p><h3>Community and User Stories</h3><p>We work closely with our community to build productive partnerships, get valuable feedback, and further improve GraalVM for everyone. One of our favorite platforms for collaboration is our yearly GraalVM Community Summit. It’s a two-day event in Zurich where we meet with our partners and community members, discuss recent updates and our roadmap, and have working sessions on various topics. Every year the event grows bigger and bigger, and we are already at the maximum capacity of our office :) This year we had a truly impressive lineup of participants, with representatives from Alibaba, Amazon, Apple, BellSoft, Broadcom, Enso, Google, JetBrains, Microsoft, MicroDoc, Neo4J, Red Hat, SAP, Shopify, TornadoVM, and others, as well as all GraalVM teams.</p><figure><img alt="Photos of attendees at the GraalVM Community Summit." 
src="https://cdn-images-1.medium.com/max/1024/1*RDAQll_jxelZUK69FFY87g.jpeg" /><figcaption>GraalVM Community Summit 2024</figcaption></figure><p>This year was very productive for our open-source community and governance. We did a public call for lead maintainers for GraalVM backport repositories, and they are now led by <a href="https://github.com/zakkak">Foivos Zakkak</a>, who has a track record of contributing to the GraalVM project and maintaining related distributions.</p><p>Also, <a href="https://bsky.app/profile/sandraahlgrimm.bsky.social">Sandra Ahlgrimm</a> from Microsoft joined the <a href="https://www.graalvm.org/community/advisory-board/">GraalVM Advisory Board</a> to help us drive the project forward and contribute to its vision.</p><p>Here are a few more notable updates from our community and users:</p><ul><li>More than <a href="https://github.com/graalvm/setup-graalvm/network/dependents">2000 repositories on GitHub</a> are now using the GraalVM GitHub action!🚀</li><li>GraalVM meets IntelliJ IDEA! ❤️ We have a great collaboration with the IntelliJ IDEA team, and they implemented several great features for our users. GraalJS now acts as a <a href="https://blog.jetbrains.com/idea/2024/06/intellij-idea-2024-2-eap-5/#graaljs-as-the-execution-engine-for-the-http-client">modern execution engine for the HTTP client</a>, you can have <a href="https://x.com/maciejwalkowiak/status/1744840618168258683">syntax highlighting for polyglot applications</a>, and most recently they have been working on streamlining the <a href="https://www.jetbrains.com/idea/whatsnew/">debugging experience for GraalVM native images</a>.</li><li>The “<a href="https://spring.io/blog/2024/06/03/state-of-spring-survey-2024-results">State of Spring</a>” survey results are out, and they offer interesting insights. 
11% of Spring Boot users already run GraalVM natively compiled applications in production, 26% of users are currently evaluating, and 31% are planning to develop native applications with GraalVM!</li><li>One of my favorite projects this year is <a href="https://github.com/mukel/llama3.java">llama3.java</a>, built by our brilliant colleague <a href="https://github.com/mukel">Alfonso² Peterssen</a>. It’s a complete LLM inference engine in Java! No C, no Python, no dependencies, and no calls to cloud-based LLMs — along with a model, it gives you a full-blown local LLM assistant. Under the hood it’s using the power of the FFM &amp; Vector APIs, and GraalVM for starting the application with zero overhead. How cool is that!👾</li><li><a href="https://github.com/google/google-java-format">Google Java Code Formatter</a> is now available as fast, platform-specific native executables, built with GraalVM.</li><li>Apple has announced its new configuration language, <a href="https://pkl-lang.org/index.html">Pkl</a>, which is built on top of Truffle.</li><li>You can <a href="https://timefold.ai/blog/how-to-speed-up-timefold-solver-startup-time-by-20x-with-native-images">speed up</a> Timefold Solver startup by 20x with Native Image!</li><li>AWS <a href="https://aws.amazon.com/blogs/developer/aws-crt-client-for-java-adds-graalvm-native-image-support/">added support</a> for Native Image to the AWS CRT Client for Java, reducing cold start request processing time by 4x for 90% of requests.</li><li>Micrometer enables passing observability data to multiple platforms &amp; backends, and <a href="https://spring.io/blog/2024/10/28/lets-use-opentelemetry-with-spring">works like a charm with Native Image</a>:</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZiY0bUhq0hMiSqbqAmI_og.gif" /></figure><ul><li>You can now <a href="https://github.com/actions/setup-java/pull/501">use GraalVM as your JDK</a> in the 
setup-java GitHub action!</li><li>It’s never been easier to run integration tests for Kafka applications with Testcontainers using the new GraalVM-based <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-974%3A+Docker+Image+for+GraalVM+based+Native+Kafka+Broker">native Kafka Docker image</a>! 😍</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/690/1*QLf--Fwog6cMX5HIT2nYsQ.gif" /><figcaption>Native Kafka with GraalVM</figcaption></figure><h3>Conclusions</h3><p>We are already actively working on the next release of GraalVM due in March. We also have several ongoing major projects in the GraalVM space, such as <a href="https://github.com/oracle/graal/issues/7626">Native Image Layers</a>, <a href="https://graal.cloud/graalos/">GraalOS</a>, and further compatibility and performance improvements for all our projects.</p><p>Thank you for being a part of our community and for all your contributions, suggestions, and support. We wish you happy holidays and plenty of time to recharge, and see you in 2025!🎄</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=fe8dff967d82" width="1" height="1" alt=""><hr><p><a href="https://medium.com/graalvm/graalvm-in-2024-a-year-in-review-fe8dff967d82">GraalVM in 2024: A year in review</a> was originally published in <a href="https://medium.com/graalvm">graalvm</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Machine Learning-Driven Static Profiling for Native Image]]></title>
            <link>https://medium.com/graalvm/machine-learning-driven-static-profiling-for-native-image-d7fc13bb04e2?source=rss----7122626bf34b---4</link>
            <guid isPermaLink="false">https://medium.com/p/d7fc13bb04e2</guid>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[graalvm-native-image]]></category>
            <category><![CDATA[graalvm]]></category>
            <category><![CDATA[onnx]]></category>
            <category><![CDATA[xgboost]]></category>
            <dc:creator><![CDATA[Milan Cugurovic]]></dc:creator>
            <pubDate>Wed, 11 Dec 2024 14:04:57 GMT</pubDate>
            <atom:updated>2024-12-11T14:04:57.689Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Egin3Ob4tcfkRZ-ZyhFLBw.png" /></figure><p><a href="https://en.wikipedia.org/wiki/Machine_learning">Machine learning</a> (ML) enables the training of models that, based on the characteristics of a program, can accurately predict its execution. In this blog we will explain how we used ML to develop the static profiler GraalSP (a static profiler that predicts profiles in Native Image), integrated it into <a href="https://www.graalvm.org/latest/reference-manual/native-image/">Oracle GraalVM Native Image</a>, and achieved a 7.5% improvement in runtime performance!</p><p>In the first part of this blog, we describe static profilers and compare them to their dynamic counterparts, which we covered in <a href="https://medium.com/graalvm/profile-guided-optimization-for-native-image-f015f853f9a8">a previous blog</a>. In that blog we discussed program profiles and how they “guide” optimizations. Using a sorting example we will delve into the design and development of GraalSP. At the end we’ll also present the results of integrating ML into Native Image and discuss the deployment of the ML models into production.</p><h3>Dynamic and Static Profilers</h3><p>In AOT compilation, dynamic profilers work by building an instrumented image, collecting profiles, and then building an optimized image. They collect high-quality profiles but also have a few drawbacks. First, they complicate the optimization process by requiring two builds and a profile-collection run. This process is usually time and memory consuming, placing an extra burden on programmers and the machines used for optimization. Also, finding appropriate workloads for profile collection can be challenging.</p><p>Fortunately, there is a much cheaper alternative where there’s no need to go through the run-build-run compilation cycle or hunt for suitable inputs for profile collection. 
It’s called “static profiling” or “static profile prediction”. A static profiler is a profiler that doesn’t collect a profile during program execution — instead, it predicts a program’s profile. A state-of-the-art static profiler takes advantage of a machine learning model to predict a profile based on a set of static features that characterize a program.</p><p>Figure 1 illustrates the pipeline of a Profile-Guided Optimization (PGO) build driven by a static profiler relying on an ML model. A static profiler extracts features (we will get back to them later!) that characterize a program and then performs ML model inference to predict a profile. This predicted profile is then used just like one obtained by dynamic profiling: the compiler utilizes the information about program “execution” to create an optimized executable. This way, an ML model enables the compiler to utilize a profile without needing to run the program, reducing the usability burden and the challenge of finding suitable workloads for profile collection.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/999/1*ii04P0ei3Zp_jv2fTTqvxg.png" /><figcaption>Figure 1: Pipeline of a PGO build driven by an ML-based static profiler.</figcaption></figure><h3>Introducing the Graal IR</h3><p>It is important to note that a static profiler does not operate on a program’s source code but uses the <a href="https://en.wikipedia.org/wiki/Intermediate_representation">compiler’s intermediate representation (IR)</a> instead. A compiler translates a program’s source code into IR, a form that is more manageable and suitable for various optimizations, such as eliminating redundant code and duplicating frequent loops.</p><p>In this context, Native Image uses <a href="https://ssw.jku.at/General/Staff/GD/APPLC-2013-paper_12.pdf">Graal Intermediate Representation (Graal IR)</a> to represent a program as a graph. 
Graal IR is a high-level IR that captures a program’s structure and includes additional information that enables the compiler to perform advanced analyses and optimizations, leading to better-optimized programs.</p><p>Graal IR represents a program as a graph consisting of nodes that correspond to the control flow and nodes that correspond to the data flow in the program. For example, consider a simple <em>for</em> loop with the condition <em>i &lt; n</em>. Graal IR translates the <em>for</em> loop into the IR graph shown below (Figure 2), with control flow edges colored red and data flow edges colored blue. The condition <em>i &lt; n</em> is parsed into an <em>IF</em> node that evaluates the value of the counter <em>i</em>. The loop body corresponds to the <em>true</em> branch of the <em>IF</em> control split node in the IR. Therefore, executing the loop body corresponds to executing the <em>true</em> branch of the corresponding <em>IF</em> node.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/485/1*sWoqND-nr20QN_wo6yyg_Q.png" /><figcaption>Figure 2: Graal IR graph corresponding to a simple <strong><em>for</em></strong> loop.</figcaption></figure><p>You can learn more about Graal IR in <a href="https://medium.com/graalvm/under-the-hood-of-graalvm-jit-optimizations-d6e931394797">one of our previous blog posts</a> and in the paper <a href="https://ssw.jku.at/General/Staff/GD/APPLC-2013-paper_12.pdf">Graal IR: An Extensible Declarative Intermediate Representation</a>.</p><h3>GraalSP: The Static Profiler for Native Image</h3><p>Now that we are familiar with the pipeline of static profilers and Graal IR, let’s discuss ML in Native Image. We developed <em>GraalSP</em>: a precise, highly efficient, lightweight, polyglot, and robust static profiler as part of the Native Image tool. 
Let’s delve into GraalSP using an example — we’ll use the heap sort function to demonstrate how (static) profiling can help us improve performance.</p><h4>Running Example: Heap Sort</h4><p>In the code block below we show the source code of the <em>heapSort</em> function from the Java Development Kit (JDK). This sorting implementation uses the <em>pushDown</em> method to push elements down the heap.</p><pre>/**<br>* Sorts the specified range of the array using heap sort.<br>*<br>* @param a the array to be sorted<br>* @param low the index of the first element, inclusive, to be sorted<br>* @param high the index of the last element, exclusive, to be sorted<br>*/<br>private static void heapSort(int[] a, int low, int high) {<br>   for (int k = (low + high) &gt;&gt;&gt; 1; k &gt; low; ) {<br>       pushDown(a, --k, a[k], low, high);<br>   }<br>   while (--high &gt; low) {<br>       int max = a[low];<br>       pushDown(a, low, a[high], low, high);<br>       a[high] = max;<br>   }<br>}</pre><h4>Impact of the Program Profiles on Performance</h4><p><a href="https://en.wikipedia.org/wiki/Inline_expansion">Function inlining optimization</a> determines whether to inline function calls based on the probability of executing loop bodies in <em>for</em> and <em>while</em> loops. More precisely, function inlining is a very complex optimization, and decisions about inlining a <em>pushDown</em> invocation aren’t made solely based on the probabilities of loop body execution. However, these probabilities play a significant role in determining whether or not to inline calls. Therefore, optimization of the <em>heapSort</em> function highly depends on program profiles that contain information about the execution probabilities of the <em>for</em> and <em>while</em> loops.</p><p>Consider sorting an array of 10 million integer values using the <em>heapSort</em> function from the JDK. 
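</p><p>To make the scenario reproducible, here is a self-contained sketch. The <em>heapSort</em> method mirrors the JDK code above; <em>pushDown</em> is a JDK-style sift-down supplied here so the example compiles, and the array size is reduced to one million elements for a quick run, so absolute timings will differ from the numbers discussed in this section:</p>

```java
import java.util.Random;

// Self-contained sketch of the sorting scenario; pushDown is a JDK-style
// sift-down supplied here so the snippet compiles and runs on its own.
public class HeapSortDemo {

    static void heapSort(int[] a, int low, int high) {
        for (int k = (low + high) >>> 1; k > low; ) {
            pushDown(a, --k, a[k], low, high);
        }
        while (--high > low) {
            int max = a[low];
            pushDown(a, low, a[high], low, high);
            a[high] = max;
        }
    }

    // Sift 'value' down the max-heap rooted at index p.
    static void pushDown(int[] a, int p, int value, int low, int high) {
        for (int k;; a[p] = a[p = k]) {
            k = (p << 1) - low + 2; // index of the right child
            if (k > high) {
                break; // no children left in range
            }
            if (k == high || a[k] < a[k - 1]) {
                --k; // right child missing or smaller: take the left child
            }
            if (a[k] <= value) {
                break; // heap property restored
            }
        }
        a[p] = value;
    }

    public static void main(String[] args) {
        int[] a = new Random(42).ints(1_000_000).toArray();
        long start = System.nanoTime();
        heapSort(a, 0, a.length);
        boolean sorted = true;
        for (int i = 1; i < a.length; i++) {
            sorted &= a[i - 1] <= a[i];
        }
        System.out.println("sorted: " + sorted
                + " in " + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```

<p>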
Function inlining optimization can inline calls to the <em>pushDown</em> method, reducing the overhead of function calls during the sorting process. When the <em>pushDown</em> method calls are inlined, the average sorting time is 1.80 seconds. Conversely, if these calls are not inlined, the average sorting time increases to 2.18 seconds. As inlining calls to the <em>pushDown</em> method can speed up program execution by more than 20%, the execution probabilities of the loop bodies are very important.</p><h4>Goal: Accurately Predict Program Profiles</h4><p>Our goal is to predict the execution probabilities of the bodies of the <em>for</em> and <em>while</em> loops. Let’s focus on the <em>for</em> loop, for example. Predicting the probability of executing the loop body translates to predicting the probability of executing the <em>true</em> branch of the corresponding <em>IF</em> node in the Graal IR. To do so, we first need to take a look at the Graal IR and “define” the branches of the <em>IF</em> node.</p><h4>ML Features that Characterize Code</h4><p>In Figure 3, we illustrate the <em>IF</em> node in the Graal IR that corresponds to the <em>for</em> loop (named <em>“1. If”</em>), along with its <em>true</em> and <em>false</em> branches defined by the blocks of a <a href="https://en.wikipedia.org/wiki/Control-flow_graph">control flow graph</a> (CFG).</p><p>If you’re wondering about the CFG, it’s essentially composed of blocks, where each block contains nodes from the IR graph. The CFG consolidates both control flow and data flow from the Graal IR, providing a clear representation of the program’s execution order and enhancing comprehension of its flow. 
Therefore, we utilized the CFG to characterize the branches of <em>IF</em> statements.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Yb1e8qOFR_nIsYnQ3f_XxQ.png" /><figcaption>Figure 3: IF node that corresponds to the for loop of the heapSort function.</figcaption></figure><p>We “define” branches of the <em>IF</em> node in terms of the nodes in the Graal IR and blocks of the CFG built on top of the Graal IR. The <em>true</em> branch of the <em>“1. If”</em> node consists of blocks <em>B2-B8</em>, while the <em>false</em> branch consists of blocks <em>B9-B24</em>. Once we define the blocks corresponding to the branches of the <em>IF</em> node, to fully characterize the <em>IF</em> node we also extract features from the CFG block that hosts the <em>IF</em> node (block <em>B1</em> in our example) as well as blocks that point to that block in the CFG (block <em>B0</em>).</p><p>The features we extract include, for example, the estimated assembly size of a branch, the estimated number of CPU cycles needed to execute the instructions from the branch, and the nesting loop depth of an <em>IF</em> node. <a href="https://www.laphamsquarterly.org/miscellany/fermats-last-margin-note">The margin of this blog is too narrow</a> to discuss all the features. For those interested, our paper <a href="https://www.sciencedirect.com/science/article/abs/pii/S0164121224001031"><em>GraalSP: Polyglot, efficient, and robust machine learning-based static profiler</em></a> provides more details. 
Here, it is important to note that the feature characterizing the <em>IF</em> node, for which we aim to predict the probability of executing its <em>true</em> branch, is represented by a vector of floating point numbers.</p><h4>Machine Learning for Profile Prediction</h4><p>After extracting the feature vector, we use <a href="https://en.wikipedia.org/wiki/Supervised_learning">supervised learning techniques</a> to train the ML model to predict profiles. We use <a href="https://medium.com/@iuyasik/introduction-to-ensemble-models-and-xgboost-9b13948a42a7">the XGBoost ensemble</a> of decision tree (DT) models for regression to predict profiles.</p><p>A DT model uses a tree-like structure consisting of nodes, branches, and leaves to model the data. Each node evaluates a feature and directs decisions down the tree, while leaf nodes predict profiles. On the left side of Figure 4, we illustrate a shallow DT model that predicts the branch execution probability of a <em>true</em> branch based on the loop depth of an <em>IF</em> node, as well as the assembly size and number of CPU cycles of a branch. For example, if an <em>IF</em> node is at a depth of two or more and the estimated CPU cycles for a <em>true</em> branch are less than five, the DT model predicts the execution probability of that branch to be 0.15. The main benefits of DTs are their interpretability, speed, and ease of use.</p><p>To improve prediction performance, reduce overfitting, and increase accuracy, we utilize the XGBoost ensemble, combining DTs as weak learners. On the right side of Figure 4, we illustrate an ensemble consisting of 1,500 decision trees. 
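</p><p>The shallow tree and the averaging ensemble described here can be sketched in a few lines. Everything below is illustrative: the feature names, thresholds, and leaf values are hypothetical stand-ins (only the 0.15 leaf follows the Figure 4 example), not GraalSP’s actual trained model:</p>

```java
// Illustrative sketch of a shallow decision tree plus an averaging
// ensemble; feature names, thresholds, and leaf values are hypothetical.
public class BranchProbabilitySketch {

    /** Static features describing the true branch of an IF node. */
    record BranchFeatures(int loopDepth, int assemblySize, int cpuCycles) {}

    /** One shallow decision tree, following the Figure 4 example. */
    static double shallowTree(BranchFeatures f) {
        if (f.loopDepth() >= 2) {
            // Deep inside loops, a very cheap true branch gets the 0.15 leaf.
            return f.cpuCycles() < 5 ? 0.15 : 0.75;
        }
        // Outside nested loops, fall back to a near-uniform guess.
        return f.assemblySize() < 64 ? 0.55 : 0.45;
    }

    /** The ensemble averages the predictions of its member trees. */
    static double ensemblePredict(BranchFeatures f, int treeCount) {
        double sum = 0.0;
        for (int i = 0; i < treeCount; i++) {
            sum += shallowTree(f); // a real ensemble queries distinct trained trees
        }
        return sum / treeCount;
    }

    public static void main(String[] args) {
        // An IF node nested two loops deep with a cheap true branch.
        BranchFeatures f = new BranchFeatures(2, 32, 4);
        System.out.println(ensemblePredict(f, 1500));
    }
}
```

<p>In GraalSP the 1,500 trees are distinct trained regressors combined by XGBoost, so this sketch only conveys the shape of the computation, not its accuracy.</p><p>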
Each tree in the ensemble predicts the execution probability for a target branch, and the ensemble aggregates all predictions (for example, by averaging them) to determine the outcome.</p><p>Using ensembles enables us to achieve <strong>precise</strong> predictions, while using decision trees keeps the static profiler <strong>highly efficient</strong>. Also, by using <strong>lightweight</strong> decision trees, we end up with a small model of only ~250 kilobytes.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/942/0*X6V1PrBaW8B25aOg" /><figcaption>Figure 4: DT and XGBoost ML models.</figcaption></figure><p>By defining the static profiler on top of the IR of the Graal compiler, we’ve created a <strong>polyglot</strong> static profiler capable of predicting profiles for all programs compiled to Java bytecode (for example, Java, Scala, and so on!).</p><p>Furthermore, we designed and developed two profile prediction heuristics that handle scenarios where input data deviates from real-world Java and Scala programs. Of course, these heuristics can’t predict profiles better than ML models, but they can help prevent the model from making mistakes. For instance, one of the heuristics ensures that the probability of executing the body of a loop is not less than 0.2. This way, the ML model won’t make serious mistakes on frequent loops. By slightly increasing the binary size of the generated programs, these heuristics have enabled us to create a <strong>robust</strong> static profiler that effectively handles outliers.</p><p>The impact of a static profiler on program performance can vary depending on the program’s code. To evaluate the impact, we use programs from the test suites <a href="https://github.com/renaissance-benchmarks/renaissance">Renaissance</a>, <a href="https://www.dacapobench.org/">DaCapo</a>, and <a href="https://dl.acm.org/doi/10.1145/2048066.2048118">DaCapo con Scala</a>. 
We integrated GraalSP into Native Image and achieved a <strong>7.46% speedup</strong> in execution time (geometric mean) compared to the default configuration. The default image build assumes a uniform distribution of execution probabilities over the branches of a control split. Figure 5 reports the geomean speedup of programs aggregated according to the test suite.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/0*P2y3OCzpBesevHOL" /><figcaption>Figure 5: Runtime speedup of GraalSP across the test suites Renaissance, DaCapo, and DaCapo con Scala.</figcaption></figure><p>The runtime improvement comes at the cost of an average increase of only 3.9% in the size of the generated programs. The increased size of the generated binary files results from <a href="https://epub.jku.at/download/pdf/4410434.pdf">duplication-based compiler optimizations</a>. The static profiler generates profiles for the entire codebase, even though usually only a small section of the code is executed. Duplication-based optimizations duplicate code even in sections that won’t be executed, thus slightly increasing the size of the compiled programs.</p><p>It is important to emphasize that the performance of programs optimized using PGO with dynamically collected profiles is much higher than that of programs optimized using profiles predicted by GraalSP. In our experiments, <strong>PGO with dynamically collected profiles produces programs that are 33% faster and 15% smaller</strong>. As expected, the quality of the dynamically collected profiles is superior because they lack the inherent errors found in ML models. Additionally, dynamically collected profiles include information that GraalSP currently does not predict: method call profiles, reached types in virtual calls, details about monitor locking and unlocking, and so on. 
Of course, as there is no such thing as a free lunch, these performance improvements come with the hurdles of dynamic profiling.</p><h3>Model Inference: Deployment to Production</h3><p>Since the releases of GraalVM for JDK 17 and GraalVM for JDK 20 (version 23.0), GraalSP has been <strong>enabled by default in Oracle GraalVM</strong>! During the default build, you’ll find that Native Image optimizes your program using PGO with <em>ML-inferred</em> profiles. Figure 6 illustrates the integration of GraalSP and dynamic profiling in Native Image. When users enable dynamic profiling, Native Image instruments the code and runs the instrumented executable to collect profiles. Otherwise, if users do not enable the dynamic profiler, Native Image runs GraalSP to predict profiles and optimizes programs based on the predicted values.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/972/0*8fWpygwoxA_5pYY2" /><figcaption>Figure 6: Integration of GraalSP and the dynamic profiler in Native Image.</figcaption></figure><p>GraalSP uses <a href="https://onnxruntime.ai/docs/get-started/with-java.html">the ONNX Runtime Java API</a>, developed by Microsoft for cross-platform accelerated ML. This ensures compatibility with various architectures, including Windows amd64, Linux amd64 and aarch64, and Darwin amd64 and aarch64, providing flexibility and usability of GraalSP across different platforms. Also, by using the ONNX runtime, we implemented static profiling with a compilation time overhead of only 10%.</p><h3>Conclusion</h3><p>To summarize, GraalSP is enabled by default in Oracle GraalVM Native Image and offers the benefits of PGO without requiring profile collection. GraalSP characterizes programs in terms of Graal IR and uses the XGBoost ensemble to predict profiles. Native Image then uses these predicted profiles to perform PGO and create optimized programs. This represents the first successful application of ML in Oracle GraalVM. 
Moreover, at Oracle Labs, we are preparing some exciting updates, improvements, and ML innovations to be announced very soon. Stay tuned!</p><p>For more details about the ML magic inside Oracle GraalVM, please refer to our paper <a href="https://www.sciencedirect.com/science/article/abs/pii/S0164121224001031"><em>GraalSP: Polyglot, efficient, and robust machine learning-based static profiler</em></a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d7fc13bb04e2" width="1" height="1" alt=""><hr><p><a href="https://medium.com/graalvm/machine-learning-driven-static-profiling-for-native-image-d7fc13bb04e2">Machine Learning-Driven Static Profiling for Native Image</a> was originally published in <a href="https://medium.com/graalvm">graalvm</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Native Image Quick Reference — GraalVM for JDK 23]]></title>
            <link>https://medium.com/graalvm/native-image-quick-reference-graalvm-for-jdk-23-45cb94b65bf7?source=rss----7122626bf34b---4</link>
            <guid isPermaLink="false">https://medium.com/p/45cb94b65bf7</guid>
            <dc:creator><![CDATA[Olga Gupalo]]></dc:creator>
            <pubDate>Tue, 12 Nov 2024 14:35:49 GMT</pubDate>
            <atom:updated>2024-11-12T14:48:00.287Z</atom:updated>
<content:encoded><![CDATA[<h3>Native Image Quick Reference — GraalVM for JDK 23</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*M479SRKWr_h8MG3XvlkaqQ.png" /></figure><p>It’s been two years since we published a revision to our <a href="https://medium.com/graalvm/native-image-quick-reference-v2-332cf453d1bc">Native Image Quick Reference</a>, and much has changed since then! The biggest update? <strong>Native Image became part of the GraalVM distribution: no need for a separate download!</strong> In the meantime, our team has focused on improving the developer experience by shortening build times, adding new useful features, and turning several widely used hosted options into public APIs. In this quick reference, we highlight some of the key updates to Native Image, particularly around optimizations and performance.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Po23EMNpVM8NxfYeNiutAA.png" /><figcaption>Native Image Quick Reference — GraalVM for JDK 23</figcaption></figure><p>This quick reference is designed to fit neatly on A4 or US Letter paper, making it easy to print. 
The PDF versions are available below:</p><ul><li><strong>A4 Paper</strong> — <a href="https://www.graalvm.org/uploads/quick-references/Native-Image-graalvm-for-jdk23/native-image-quick-reference-graalvm-for-jdk23(eu_a4).pdf">Download PDF</a></li><li><strong>US Letter Paper</strong> — <a href="https://www.graalvm.org/uploads/quick-references/Native-Image-graalvm-for-jdk23/native-image-quick-reference-graalvm-for-jdk23(us-letter).pdf">Download PDF</a></li></ul><h3>Building</h3><p>There are two ways to build Java applications ahead of time: using native-image on the command line or with Native Build Tools.</p><p>Building with native-image <strong>on the command line </strong>remains straightforward; you use it the same way as you would use java:</p><ul><li>to compile a class: native-image [options] &lt;mainclass&gt; [args…]</li><li>to compile a JAR file: native-image [options] -jar &lt;jarfile&gt; [args…]</li><li>to compile the main class in a module: native-image [options] -m &lt;module&gt;[/&lt;mainclass&gt;] [args…]</li></ul><p>Building a <strong>shared library</strong> has not changed and requires passing the --shared option to the native-image tool.</p><p>For container deployments, we recommend taking advantage of Native Image <strong>containerization and linking options</strong> (Linux only) to reduce dependencies on the runtime libraries. You can create a <strong>fully static native image</strong> using the --static --libc=musl options and run it in a lightweight container, even a <a href="https://hub.docker.com/_/scratch">scratch</a> container, without any runtime dependencies!</p><p>Another option is to build a <strong>mostly statically linked </strong>binary by passing the --static-nolibc option. A mostly static native image statically links all the shared libraries on which it relies (zlib, JDK-shared static libraries) except the standard C library, libc. 
This type of native image is useful for deployment in a Distroless base container which provides glibc.</p><h4>Native Build Tools</h4><p>The <strong>recommended method</strong> for building native images is through the <a href="https://graalvm.github.io/native-build-tools/latest/"><strong>Native Build Tools</strong></a> plugins for Maven and Gradle.</p><p>In a <strong>Maven</strong> project, you define the plugin as part of a Maven profile in pom.xml as follows:</p><pre>&lt;profile&gt;<br>    &lt;id&gt;native&lt;/id&gt;<br>    &lt;build&gt;<br>      &lt;plugins&gt;<br>        &lt;plugin&gt;<br>          &lt;groupId&gt;org.graalvm.buildtools&lt;/groupId&gt;<br>          &lt;artifactId&gt;native-maven-plugin&lt;/artifactId&gt;<br>          &lt;version&gt;${buildtools.version}&lt;/version&gt;<br>          &lt;executions&gt;<br>            &lt;execution&gt;<br>              &lt;id&gt;build-native&lt;/id&gt;<br>              &lt;goals&gt;<br>                &lt;goal&gt;compile-no-fork&lt;/goal&gt;<br>              &lt;/goals&gt;<br>              &lt;phase&gt;package&lt;/phase&gt;<br>            &lt;/execution&gt;<br>          &lt;/executions&gt;<br>        &lt;/plugin&gt;<br>      &lt;/plugins&gt;<br>    &lt;/build&gt;<br>&lt;/profile&gt;</pre><p>To build, simply use:</p><pre>./mvnw -Pnative package</pre><p>Skip tests with -DskipTests if needed: ./mvnw -DskipTests -Pnative package.</p><p>In a Gradle project, you define the plugin in build.gradle (the configuration for Kotlin is provided in the <a href="https://graalvm.github.io/native-build-tools/latest/gradle-plugin.html">plugin documentation</a>).</p><pre>plugins {<br>  id &#39;org.graalvm.buildtools.native&#39; version &#39;${buildtools.version}&#39;<br>}</pre><p>Build a native executable and run the application in one step:</p><pre>./gradlew nativeRun</pre><p>Both plugins support running JUnit Platform tests as native code:</p><ul><li>Maven: ./mvnw -Pnative test</li><li>Gradle: ./gradlew 
nativeTest</li></ul><p>The basic configuration of the Native Image Maven and Gradle plugins can be significantly extended. For example, in a Maven project, pass options to native-image directly through the plugin configuration:</p><pre>&lt;configuration&gt;<br>    &lt;buildArgs&gt;<br>        &lt;buildArg&gt;--option&lt;/buildArg&gt;<br>    &lt;/buildArgs&gt;<br>&lt;/configuration&gt;</pre><p>In a Gradle project, use:</p><pre>graalvmNative {<br>    binaries {<br>        main {<br>            buildArgs.add(&quot;--option&quot;)<br>        }<br>    }<br>}</pre><p>You can find more information in the <a href="https://graalvm.github.io/native-build-tools/latest/">plugins documentation</a>.</p><h3>Optimizing</h3><p>Native Image provides multiple ways to optimize the resulting native executable for performance, size, build time, debuggability, and other metrics.</p><h4>Improved performance</h4><p>To improve performance, throughput in particular, we strongly recommend using <strong>Profile-Guided Optimization (PGO)</strong>. PGO enables the Graal compiler to leverage profiling information when AOT-compiling your application and sets the optimization level to -O3, similar to when it runs as a JIT compiler. For this, perform the following steps:</p><ol><li>Build the application with --pgo-instrument.</li><li>Run the instrumented application with representative workloads to create profiling data, saved as default.iprof.</li><li>Rebuild using the --pgo option (or --pgo=&lt;custom&gt;.iprof for specific profiles), generating an optimized native version of your application.</li></ol><p>Find more information on this topic in <a href="https://raw.githubusercontent.com/oracle/graal/master/PGO-Basic-Usage.md">Basic Usage of Profile-Guided Optimization</a>.
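</p><p>As a sketch, the three steps above might look like this for a hypothetical myapp.jar (the JAR and executable names are placeholders):</p><pre>native-image --pgo-instrument -jar myapp.jar -o myapp-instrumented<br># run a representative workload to produce default.iprof<br>./myapp-instrumented<br>native-image --pgo=default.iprof -jar myapp.jar -o myapp</pre><p>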
(Available with Oracle GraalVM.)</p><p>The next recommendation is to improve latency and throughput by switching from the default Serial garbage collector to the <strong>G1 garbage collector</strong>. You can enable G1 by adding the --gc=G1 option to native-image (only available with Oracle GraalVM on Linux). Here are a few options that you can specify when doing performance tuning:</p><ul><li>-H:G1HeapRegionSize: Define the G1 region size.</li><li>-XX:MaxRAMPercentage: Set the maximum heap size as a percentage of physical memory.</li><li>-XX:MaxGCPauseMillis: Set target maximum pause times.</li></ul><p>You can find a full list in the <a href="https://www.graalvm.org/latest/reference-manual/native-image/optimizations-and-performance/MemoryManagement/#performance-tuning">documentation</a>.</p><p>For example, to create a native image using G1 GC with a region size of 2MB and a maximum pause time goal of 100ms, run:</p><pre>native-image --gc=G1 -H:G1HeapRegionSize=2m -R:MaxGCPauseMillis=100 HelloWorld</pre><p>To execute the native image from above and override the maximum pause time goal, run:</p><pre>./helloworld -XX:MaxGCPauseMillis=50</pre><p>For <strong>optimal performance without PGO</strong>, you can use the new -O3 option, although it may increase the build time.</p><p>Another option is <strong>optimizing for a specific machine type</strong>. If you deploy your application on the same machine, or a machine with similar specs, you can enable more CPU features with the -march=native option. This instructs the compiler to use all instructions that it finds available on the machine the binary is generated on.</p><p>If the generated image is distributed to users with many different, and potentially very old, machines, use -march=compatibility.
This reduces the set of instructions used by the compiler to a minimum and thus improves the compatibility of the generated binary.</p><h4>For faster build time</h4><p>To <strong>reduce build times</strong>, enable the quick build mode with -Ob. This mode performs fewer compiler optimizations, reducing the overall <a href="https://www.graalvm.org/latest/reference-manual/native-image/overview/BuildOutput/#stage-compiling">compilation phase</a> time, and is suitable for development builds. Production builds, however, should keep full optimizations. More CPU cores also significantly improve the build time: the more CPUs available, the faster the build.</p><p>Native Image also prints valuable <strong>build output</strong> to the console, including <a href="https://www.graalvm.org/latest/reference-manual/native-image/overview/BuildOutput/#resource-usage-statistics">resource usage statistics</a> such as garbage collection, peak RSS, and CPU load.</p><h4>For smaller binary size</h4><p>Use -Os to <strong>prioritize smaller executable sizes</strong>, ideal for cloud or container deployment environments where the occupied space matters. This option retains essential optimizations from -O2 at the cost of reduced performance.</p><h4>For reduced memory footprint</h4><p>When executing a native image, Java heap settings are determined based on the system configuration and GC. To override this default mechanism, set a maximum heap size for more <strong>predictable memory usage</strong> at run time.
For example:</p><pre>./myapp -Xms2m -Xmx10m -Xmn1m</pre><p>If memory usage does not match your expectations, gather garbage collection logs:</p><pre>./myapp -XX:+PrintGC -XX:+VerboseGC</pre><p>Find more information about <a href="https://www.graalvm.org/latest/reference-manual/native-image/optimizations-and-performance/MemoryManagement/#java-heap-size">Memory Management in Native Image</a> on the website.</p><h3>Useful Developer Tools</h3><p>If the file size of your native binary increases unexpectedly, you can now <strong>generate a Build Report</strong> using --emit build-report to analyze embedded resources and other data. Build Reports enable you to explore the contents of the generated images in greater detail and in a more appealing manner than on the command line. <a href="https://www.graalvm.org/jdk23/reference-manual/native-image/guides/optimize-native-executable-size-using-build-report/">See an example here</a>. (Available with Oracle GraalVM.)</p><p>To identify your application dependencies, you can generate a <strong>Software Bill of Materials</strong> (SBOM) using the new --enable-sbom option, which supports CycloneDX by default. This embeds a GZIP-compressed SBOM file that you can later <a href="https://www.graalvm.org/latest/reference-manual/native-image/guides/use-sbom-support/">extract into a human-readable format</a> or submit to a vulnerability scanner. (Available with Oracle GraalVM.)</p><p>Native Image supports more application monitoring features than before, such as Java Management Extensions (JMX), Native Memory Tracking (NMT), and so on. Enable monitoring features at build time with the --enable-monitoring option, which defaults to all. To enable only selected features, choose from: heapdump, jfr, jvmstat, jmxserver, jmxclient, nmt, and threaddump.
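</p><p>For instance, to enable only heap dump and JFR support, a build might look like this (a sketch; myapp.jar is a placeholder):</p><pre>native-image --enable-monitoring=heapdump,jfr -jar myapp.jar</pre><p>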
Enabling any of these features will increase the size of the executable.</p><p>For performance analysis, Native Image now supports the <a href="https://perfwiki.github.io/main/">Linux Perf profiler</a>. You can gather runtime statistics with perf as follows:</p><pre>perf stat ./myapp</pre><p>For additional details on profiling, refer to <a href="https://www.graalvm.org/latest/reference-manual/native-image/debugging-and-diagnostics/perf-profiler/">Linux Perf Profiler Support in Native Image</a>.</p><h3>Summary</h3><p>The updated quick reference is designed to keep developers up-to-date on the latest Native Image features. It covers essential building, optimization, and developer tool options. Whether you’re building for containers, aiming for optimized performance, or debugging complex issues, this quick reference is a good place to start.</p><p>Download and print it to keep it at hand, and happy coding with GraalVM Native Image!</p><p>— <br><em>The GraalVM team</em></p><hr><p><a href="https://medium.com/graalvm/native-image-quick-reference-graalvm-for-jdk-23-45cb94b65bf7">Native Image Quick Reference — GraalVM for JDK 23</a> was originally published in <a href="https://medium.com/graalvm">graalvm</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>