Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.
The Future of Data Streaming with Apache Flink for Agentic AI
How to Secure a Spring AI MCP Server with an API Key via Spring Security
The purpose of this article is to demonstrate how to automate the build and deployment process using a CI/CD pipeline with CloudHub 2.0 (Mule 4).

Prerequisites

- Anypoint CloudHub account (CloudHub 2.0)
- app.runtime – 4.9.0
- mule.maven.plugin.version – 4.3.0
- Anypoint Studio – Version 7.21.0
- OpenJDK – 11.0

CI/CD Pipeline

Now, follow the steps below. Create a project from a RAML file in Anypoint Studio. The detailed steps are mentioned in my earlier blog. Please go through the link below if you have not already done so: https://dzone.com/articles/jwt-policy-enforcement-raml-anypoint-platform

How to Do Automated Deployment?

First, let’s see how to deploy manually using a Maven command from the command prompt. There are two steps, shown as commands below: build and deployment.

- Deploy to Exchange
- Deploy to Runtime Manager
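As a sketch, the two manual steps correspond to Maven commands along these lines, run from the project root (this mirrors the pipeline commands shown later in this article; exact profiles and flags depend on your setup):

Plain Text

# Step 1: Build and publish the application to Anypoint Exchange
mvn clean deploy --settings .maven/settings.xml

# Step 2: Deploy from Exchange to Runtime Manager (CloudHub 2.0)
mvn clean deploy --settings .maven/settings.xml -DmuleDeploy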
Please find the pom.xml below. For security reasons, I have changed the group ID.

XML

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>XXXXXXX-a7c8-XXX-ae67-dXXXXXXXXXXX</groupId>
    <artifactId>order-api-sapi</artifactId>
    <version>1.0.20</version>
    <packaging>mule-application</packaging>
    <name>order-api-sapi</name>
    <properties>
        <mule.maven.plugin.version>4.3.0</mule.maven.plugin.version>
        <app.runtime>4.9.0</app.runtime>
    </properties>
    <build>
        <plugins>
            <plugin>
                <groupId>org.mule.tools.maven</groupId>
                <artifactId>mule-maven-plugin</artifactId>
                <version>${mule.maven.plugin.version}</version>
                <extensions>true</extensions>
                <configuration>
                    <classifier>mule-application</classifier>
                    <cloudhub2Deployment>
                        <uri>https://anypoint.mulesoft.com</uri>
                        <provider>MC</provider>
                        <environment>${env}</environment>
                        <server>anypoint-exchange-v3</server>
                        <target>${target}</target>
                        <applicationName>${app.name}</applicationName>
                        <replicas>${replicas}</replicas>
                        <muleVersion>${app.runtime}</muleVersion>
                        <vCores>${vCores}</vCores>
                        <deploymentSettings>
                            <generateDefaultPublicUrl>true</generateDefaultPublicUrl>
                        </deploymentSettings>
                        <connectedAppGrantType>client_credentials</connectedAppGrantType>
                        <properties>
                            <anypoint.platform.client_id>${anypoint.client.id}</anypoint.platform.client_id>
                            <anypoint.platform.client_secret>${anypoint.client.secret}</anypoint.platform.client_secret>
                            <remote.api.clientId>8081</remote.api.clientId>
                            <env>${env}</env>
                        </properties>
                        <secureProperties>
                            <remote.api.clientSecret>8082</remote.api.clientSecret>
                        </secureProperties>
                    </cloudhub2Deployment>
                </configuration>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>org.mule.connectors</groupId>
            <artifactId>mule-http-connector</artifactId>
            <version>1.10.3</version>
            <classifier>mule-plugin</classifier>
        </dependency>
        <dependency>
            <groupId>org.mule.connectors</groupId>
            <artifactId>mule-sockets-connector</artifactId>
            <version>1.2.5</version>
            <classifier>mule-plugin</classifier>
        </dependency>
        <dependency>
            <groupId>org.mule.modules</groupId>
            <artifactId>mule-apikit-module</artifactId>
            <version>1.11.3</version>
            <classifier>mule-plugin</classifier>
        </dependency>
    </dependencies>
    <distributionManagement>
        <repository>
            <id>anypoint-exchange-v3</id>
            <name>anypoint-exchange-v3</name>
            <url>https://maven.anypoint.mulesoft.com/api/v3/organizations/${groupId}/maven</url>
            <layout>default</layout>
        </repository>
    </distributionManagement>
    <repositories>
        <repository>
            <id>anypoint-exchange-v3</id>
            <name>Anypoint Exchange</name>
            <url>https://maven.anypoint.mulesoft.com/api/v3/maven</url>
            <layout>default</layout>
        </repository>
        <repository>
            <id>mulesoft-releases</id>
            <name>MuleSoft Releases Repository</name>
            <url>https://repository.mulesoft.org/releases/</url>
            <layout>default</layout>
        </repository>
    </repositories>
    <pluginRepositories>
        <pluginRepository>
            <id>mulesoft-releases</id>
            <name>MuleSoft Releases Repository</name>
            <layout>default</layout>
            <url>https://repository.mulesoft.org/releases/</url>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </pluginRepository>
    </pluginRepositories>
</project>

I already explained <cloudhub2Deployment> in my earlier blog. Please refer to it if required.

Mandatory Configuration

We need to explicitly mention the following snippet under distributionManagement. This is mandatory, as it defines the repository location of the generated build JAR.

XML

<distributionManagement>
    <repository>
        <id>anypoint-exchange-v3</id>
        <name>anypoint-exchange-v3</name>
        <url>https://maven.anypoint.mulesoft.com/api/v3/organizations/${groupId}/maven</url>
        <layout>default</layout>
    </repository>
</distributionManagement>

You also need to add the repository inside the <repositories> tag:

XML

<repository>
    <id>anypoint-exchange-v3</id>
    <name>Anypoint Exchange</name>
    <url>https://maven.anypoint.mulesoft.com/api/v3/maven</url>
    <layout>default</layout>
</repository>

Project Setup in VS Code

Now, open the project using Visual Studio Code. Once opened, you will see the project structure. We need to do three things:

- Create a .maven folder inside the project and create a settings.xml file inside it.
- Create a .github/workflows folder inside the project and create a build.yaml file inside it.
- Add the GitHub Actions extension in VS Code. This will connect VS Code with your GitHub account after providing credentials.

settings.xml

You need to add the profiles and servers sections as shown below.

- The profiles section contains environment-specific details.
- The servers section contains connected app credentials (client ID and client secret).
XML

<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.2.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.2.0 https://maven.apache.org/xsd/settings-1.2.0.xsd">
    <pluginGroups>
        <pluginGroup>org.mule.tools</pluginGroup>
    </pluginGroups>
    <profiles>
        <profile>
            <id>Sandbox</id>
            <properties>
                <target>Cloudhub-US-East-2</target>
                <replicas>1</replicas>
                <region>us-east-1</region>
                <env>Sandbox</env>
                <app.name>sand-order-api-sapi</app.name>
                <vCores>0.1</vCores>
                <worker.type>MICRO</worker.type>
                <anypoint.client.id>xxxxxx</anypoint.client.id>
                <anypoint.client.secret>xxxxx</anypoint.client.secret>
            </properties>
        </profile>
    </profiles>
    <servers>
        <server>
            <id>anypoint-exchange-v3</id>
            <username>~~~Client~~~</username>
            <password>f059d98e8d974517bxxxxxxx~?~c1efdD4cFxxxxxxxxx</password>
        </server>
    </servers>
</settings>

Please ensure that all three IDs match exactly:

- Server ID
- Repository ID
- Distribution repository ID

Example: anypoint-exchange-v3

build.yaml File

Add the following content to the build.yaml file:

YAML

name: Publish to Exchange & Deploy to CH2.0
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout this repo
        uses: actions/checkout@v4
      - name: Cache dependencies
        uses: actions/cache@v4
        with:
          path: ~/.m2/repository
          key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
          restore-keys: ${{ runner.os }}-maven-
      - name: Set up JDK 1.8
        uses: actions/setup-java@v4
        with:
          distribution: "zulu"
          java-version: 8
      - name: Publish to Exchange
        run: |
          mvn deploy --settings .maven/settings.xml -DskipMunitTests \
          -Dclient.id="${{ secrets.CONNECTED_APP_CLIENT_ID }}" \
          -Dclient.secret="${{ secrets.CONNECTED_APP_CLIENT_SECRET }}"
      - name: Deploy to CloudHub 2.0
        run: |
          mvn deploy --settings .maven/settings.xml -PSandbox -DskipMunitTests -DmuleDeploy \
          -Dclient.id="${{ secrets.CONNECTED_APP_CLIENT_ID }}" \
          -Dclient.secret="${{ secrets.CONNECTED_APP_CLIENT_SECRET }}"

Add Client ID and Client Secret

Below is where you can add the client ID and client secret in VS Code. Please ensure that the names match exactly in both places:

- build.yaml file
- Repository secrets

Once added in VS Code, they will automatically appear in the GitHub repository settings. To view this in GitHub, navigate to: Your Repository → Settings → Actions. You can also set these directly in GitHub. The client ID and client secret are the credentials of the connected app.

Build Execution

Once everything is set, the build will start automatically when you check in the code. Important: Every time you run a fresh build, ensure that the Group ID, Artifact ID, and Version (GAV) combination is unique. Otherwise, you may encounter the common GAV error. To demonstrate this, I first ran the build without changing the version. As expected, publishing to Exchange failed because the same GAV already existed. After changing the version, the build succeeded.

How to Verify the CI/CD Pipeline

1. Make changes to the code locally, then commit and push them to the Git repository (as shown in the figure below).
2. Once the changes are pushed, the build starts automatically. Note: There are two steps in the build: publish to Exchange and deploy to Runtime Manager.
3. The build completes successfully.
4. The same changes are reflected in the Anypoint Platform. Please verify the timestamp.

I hope you enjoyed this article. Please leave a comment if you have any suggestions or improvements.
In my previous article, I demonstrated how to implement OIDC4VCI (credential issuance) and OIDC4VP (credential presentation) using Spring Boot and an Android wallet. This follow-up focuses on a critical security enhancement now mandated by EUDI standards: DPoP (Demonstrating Proof-of-Possession).

The Problem With Bearer Tokens

Traditional Bearer tokens have an inherent weakness: anyone who obtains the token can use it. If an attacker intercepts or steals a Bearer token, they can impersonate the legitimate client until the token expires (or is revoked).

Enter DPoP (RFC 9449)

DPoP (RFC 9449) solves this by binding access tokens to a client’s cryptographic key. Even if an attacker steals a DPoP-bound token, it is useless without the corresponding private key. Here’s how it works in practice:

1. The client generates a key pair and includes the public key in a signed DPoP proof.
2. The authorization server binds the issued token to that key (via the cnf.jkt claim).
3. On each resource request, the client proves possession of the private key with a fresh DPoP proof.
4. The resource server validates that the proof matches the token’s key binding.

Why Now? HAIP 1.0 Mandates DPoP

The OpenID4VC High Assurance Interoperability Profile (HAIP) 1.0, published in December 2025, establishes mandatory requirements for EUDI wallet implementations:

- MUST support sender-constrained tokens via DPoP (RFC 9449)
- MUST support PKCE with S256 (RFC 7636)
- Wallets MUST handle DPoP-Nonce headers if servers provide them

DPoP Out of the Box: Since Spring Boot 3.5

Starting with Spring Boot 3.5 (May 2025), native DPoP support is available via:

- Spring Authorization Server 1.5.0 – automatically issues DPoP-bound tokens when clients send a DPoP header
- Spring Security 6.5.0 – auto-validates DPoP proofs on resource servers

The following sequence diagram demonstrates the flow:

Implementation Highlights

In the sections below, we present additions to the existing Authorization & Resource servers (backend) and the Android wallet (mobile client). Spring Boot versions must be updated in the respective POM files.

Authorization Server

Spring Authorization Server 1.5+ handles DPoP automatically. The key configuration addition is supporting public clients (no client secret) for mobile wallets:

Java

RegisteredClient.withId(UUID.randomUUID().toString())
    .clientId("wallet-client")
    // Public client support for mobile wallets (PKCE + DPoP, no secret)
    .clientAuthenticationMethod(ClientAuthenticationMethod.NONE)
    .authorizationGrantType(AuthorizationGrantType.AUTHORIZATION_CODE)
    .clientSettings(ClientSettings.builder()
        .requireProofKey(true) // PKCE required
        .build())
    .build();

When a client sends a DPoP header, the authorization server automatically:

- Validates the DPoP proof
- Extracts the public key and computes its thumbprint
- Includes the cnf.jkt claim in the access token
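To make the key binding concrete, here is what the decoded payload of such a DPoP-bound access token might look like — a hypothetical example with made-up claim values, though the cnf.jkt structure itself is defined by RFC 9449:

JSON

{
  "sub": "wallet-client",
  "exp": 1767225600,
  "cnf": {
    "jkt": "0ZcOCORZNYy-DWpqq30jZyJGHTN0d2HglBV3uiguA4I"
  }
}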
Resource Server (Issuer)

With Spring Security 6.5+, DPoP validation is enabled by default. The configuration is minimal:

Java

@Bean
public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
    http
        .authorizeHttpRequests(auth -> auth
            .requestMatchers("/.well-known/**").permitAll()
            .anyRequest().authenticated()
        )
        // Spring Security 6.5+ auto-enables DPoP validation
        .oauth2ResourceServer(oauth2 -> oauth2
            .jwt(Customizer.withDefaults())
        );
    return http.build();
}

Spring Security automatically:

- Accepts the Authorization: DPoP <token> scheme
- Validates the DPoP proof JWT (signature, htm, htu, iat, jti)
- Verifies the ath claim matches the access token hash
- Confirms the proof's public key matches the token’s cnf.jkt

Android Wallet

The wallet creates DPoP proofs for each request. In our simplified PoC codebase, we can reuse the wallet key previously used for JWT proofs (rather than managing a separate key pair) and introduce a DPoPManager class:

Kotlin

class DPoPManager(private val walletKeyManager: WalletKeyManager) {

    fun createDPoPProof(
        httpMethod: String,
        httpUri: String,
        accessTokenHash: String? = null
    ): String {
        val walletKey = walletKeyManager.getWalletKey()
        val header = JWSHeader.Builder(JWSAlgorithm.ES256)
            .type(JOSEObjectType("dpop+jwt"))
            .jwk(walletKey.toPublicJWK())
            .build()
        val claimsBuilder = JWTClaimsSet.Builder()
            .jwtID(UUID.randomUUID().toString())
            .claim("htm", httpMethod)
            .claim("htu", httpUri)
            .issueTime(Date())
        // Include access token hash for resource requests
        accessTokenHash?.let { claimsBuilder.claim("ath", it) }
        val signedJWT = SignedJWT(header, claimsBuilder.build())
        signedJWT.sign(AndroidKeystoreSigner(KEY_ALIAS))
        return signedJWT.serialize()
    }

    fun computeAccessTokenHash(accessToken: String): String {
        val hash = MessageDigest.getInstance("SHA-256")
            .digest(accessToken.toByteArray(Charsets.US_ASCII))
        return Base64.encodeToString(hash, Base64.URL_SAFE or Base64.NO_PADDING or Base64.NO_WRAP)
    }
}

And then the usage in the issuance flow can be modified like this:

Kotlin

// Token request
val dpopProof = dpopManager.createDPoPProof("POST", tokenUrl)
client.submitForm(tokenUrl, parameters) {
    header("DPoP", dpopProof)
}

// Resource request (include ath claim)
val dpopProof = dpopManager.createDPoPProof(
    "POST",
    credentialUrl,
    dpopManager.computeAccessTokenHash(accessToken)
)
client.post(credentialUrl) {
    header("Authorization", "DPoP $accessToken")
    header("DPoP", dpopProof)
}

Verifying It Works

- Positive test: The full flow completes successfully with DPoP headers.
- Negative test: Remove the DPoP header from a request, and Logcat should show:

Plain Text

HTTP 401 Unauthorized
WWW-Authenticate: DPoP error="invalid_request", error_description="DPoP proof is missing or invalid."

Debugging: Set a breakpoint in org.springframework.security.oauth2.server.resource.authentication.DPoPAuthenticationProvider.authenticate() to step through the validation.

Conclusion

DPoP is no longer optional for EUDI-compliant implementations. The good news: Spring Boot 3.5+ makes adoption straightforward with built-in support in both the Authorization Server and Resource Server. The main implementation effort is on the wallet side, creating fresh DPoP proofs for each request. Beyond EUDI and mobile wallets, DPoP is a valuable security hardening measure for any OAuth 2.0 implementation where token misuse is a concern.
Working with dates and time has always been one of the trickiest parts of Java development. For years, developers wrestled with java.util.Date, Calendar, and the never-ending confusion around mutability, time zones, thread safety, and formatting quirks. When Java 8 introduced the java.time package, it finally brought a modern and much more intuitive date-time API inspired by Joda-Time. Yet even with this improved API, many developers still find themselves constantly converting between different date representations, especially when integrating legacy systems, REST interfaces, databases, or front-end clients. In this article, I want to walk through the best practical approaches for date conversion in Java 8+, focusing on clarity and reliability. These are patterns I’ve seen consistently used in production systems, and they help avoid many silent bugs that come from incorrect time zone assumptions, accidental loss of precision, and misuse of the older date classes.

Why Date Conversion Still Matters

Even though the newer Java Time API is robust, conversion remains a big part of everyday development. Some common reasons include:

- Legacy systems still providing java.util.Date or even date strings in old patterns
- JSON serialization/deserialization (e.g., between ISO strings and Java objects)
- Databases returning timestamp types
- Converting between date-only and date-time representations
- Working with epoch values — milliseconds or seconds
- Handling time zones when applications deploy globally

Because of these factors, knowing the right set of conversion techniques saves both debugging time and operational surprises.

1. Converting Between String and Java 8 Date Types

Probably the most common need is converting to and from strings — typically when communicating with front-end applications or parsing input files.

String → LocalDate/LocalDateTime

Java

LocalDate date = LocalDate.parse("2025-01-15", DateTimeFormatter.ofPattern("yyyy-MM-dd"));

LocalDateTime dateTime = LocalDateTime.parse("2025-01-15 10:30", DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm"));

Between LocalDate and LocalDateTime, the important thing is that neither carries time zone information. This makes them ideal for representing business dates but not timestamps.

LocalDate / LocalDateTime → String

Java

String output = date.format(DateTimeFormatter.ofPattern("MMM dd, yyyy"));

String output2 = dateTime.format(DateTimeFormatter.ISO_LOCAL_DATE_TIME);

The important lesson here is that you should always use a DateTimeFormatter with a clear pattern. Relying on defaults can lead to differences between JVM locales.

2. Converting Between the Legacy Date API and Java 8+ Types

Even though most developers prefer java.time, the old API still shows up everywhere — JDBC drivers, some frameworks, and older codebases.

Date → LocalDate/LocalDateTime

Java

Date date = new Date();

LocalDate localDate = date.toInstant()
    .atZone(ZoneId.systemDefault())
    .toLocalDate();

LocalDateTime localDateTime = date.toInstant()
    .atZone(ZoneId.systemDefault())
    .toLocalDateTime();

Here, ZoneId.systemDefault() plays an important role. A Date object represents an instant in UTC, but converting to a local representation requires choosing a zone. Using the system default is fine for most cases, but in distributed systems, it’s often better to explicitly set the time zone.
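For example, a service that standardizes on UTC can pin the conversion explicitly rather than trusting the host's configuration (a minimal sketch; the choice of UTC here is an assumption):

Java

// Pin the conversion to UTC instead of relying on the host's default zone
LocalDateTime utcDateTime = date.toInstant()
    .atZone(ZoneId.of("UTC"))
    .toLocalDateTime();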
LocalDate/LocalDateTime → Date

Java

Date fromLocalDate = Date.from(
    localDate.atStartOfDay(ZoneId.systemDefault()).toInstant());

Date fromLocalDateTime = Date.from(
    localDateTime.atZone(ZoneId.systemDefault()).toInstant());

This pattern ensures the legacy Date gets the correct moment in time.

3. Working With Instants and Epoch Values

For systems that rely on precise timestamps — for example, event logs or distributed systems — conversion to Instant or epoch values is common.

Instant → Millisecond Epoch

Java

long millis = Instant.now().toEpochMilli();

Millisecond Epoch → Instant

Java

Instant instant = Instant.ofEpochMilli(millis);

Instant → LocalDateTime

Java

LocalDateTime ldt = LocalDateTime.ofInstant(instant, ZoneId.of("UTC"));

LocalDateTime → Instant

Java

Instant instant2 = ldt.atZone(ZoneId.of("UTC")).toInstant();

Using UTC for all system-level timestamps is generally considered best practice.

4. Understanding the Importance of Time Zones

Many subtle bugs in date conversions arise because developers overlook time zones. For example, converting "2025-01-15T10:30:00" on a server in another region may shift the date unintentionally.

LocalDateTime → ZonedDateTime

Java

ZonedDateTime zdt = LocalDateTime.now().atZone(ZoneId.of("America/New_York"));

ZonedDateTime → Another Zone

Java

ZonedDateTime converted = zdt.withZoneSameInstant(ZoneId.of("UTC"));

This difference is huge:

- withZoneSameInstant() shifts the date/time so the represented moment stays accurate.
- withZoneSameLocal() keeps the date/time but applies a different interpretation.

Most developers want the first one — but it's easy to mix them up.

5. Avoiding Common Conversion Mistakes

Mistake #1: Using SimpleDateFormat in a Multithreaded Environment

It’s not thread-safe. Java Time formatters are.

Mistake #2: Forgetting the Time Zone When Converting Date → LocalDateTime

Always define a zone. Implicit defaults hide bugs until production.

Mistake #3: Using LocalDateTime for Timestamps

It has no time zone and no instant meaning. Prefer:

- Instant for precise timestamps
- ZonedDateTime for user-facing calendar information

Mistake #4: Using the System Default Time Zone Without Intention

Be explicit when working across services or regions.

6. A Clean Utility Class for Reuse

Most teams eventually consolidate conversions into a utility class. Something short and readable works best:

Java

public class DateUtils {

    public static LocalDate toLocalDate(Date date) {
        return date.toInstant()
            .atZone(ZoneId.systemDefault())
            .toLocalDate();
    }

    public static LocalDateTime toLocalDateTime(Date date) {
        return date.toInstant()
            .atZone(ZoneId.systemDefault())
            .toLocalDateTime();
    }

    public static Date toDate(LocalDateTime dateTime) {
        return Date.from(dateTime
            .atZone(ZoneId.systemDefault())
            .toInstant());
    }

    public static String format(LocalDate date, String pattern) {
        return date.format(DateTimeFormatter.ofPattern(pattern));
    }

    public static String format(LocalDateTime dateTime, String pattern) {
        return dateTime.format(DateTimeFormatter.ofPattern(pattern));
    }
}

Keep the class small; avoid overengineering. In most cases, clarity is more important than completeness.
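A quick usage sketch of the helpers above (the values are arbitrary, illustrative choices):

Java

Date legacy = new Date();
LocalDate businessDate = DateUtils.toLocalDate(legacy);          // legacy Date → LocalDate
String display = DateUtils.format(businessDate, "MMM dd, yyyy"); // e.g., "Jan 15, 2025"
Date roundTrip = DateUtils.toDate(businessDate.atStartOfDay());  // back to a legacy Date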
Conclusion

Java 8’s date and time API is one of the biggest quality-of-life improvements the language has seen. Still, real-world applications require frequent conversions — between older APIs, strings, JSON formats, epoch timestamps, and various time zones. Using a consistent and well-understood approach helps prevent subtle bugs that often surface only after deployment. By relying on Instant for timestamps, being explicit with time zones, and using simple reusable utilities, teams can avoid the pitfalls that plagued the old date-time classes. Whether you're parsing user input, integrating with legacy systems, or designing cloud-native services, mastering these conversion techniques is essential for writing reliable and maintainable Java code.
Java remains one of the most popular languages for enterprise applications running on the cloud. While languages like Go, Rust, JavaScript, and Python have a high profile for cloud application developers, the RedMonk language rankings have ranked Java in the top three most popular languages throughout the history of the ranking. When deploying applications to the cloud, there are a few key differences between deployment environments and development environments. Whether you’re spinning up a microservice application on Kubernetes or launching virtual machine instances, it is important to tune your Java Virtual Machine (JVM) to ensure that you are getting your money’s worth from your cloud spend. It pays to know how the JVM allocates resources and to ensure you use them efficiently.

Most of the information and advice in this series is platform-independent and will work just as well on x86_64 and Arm64 CPUs. As Java was designed to be platform-independent, this is not surprising. As the Java community has invested effort in optimizing the JVM for Arm64 (also called aarch64, for “64-bit Arm architecture”), Java developers should see the performance of their applications improve on that architecture without doing anything special. However, we will point out some areas where the Arm64 and x86_64 architectures differ, and how to take advantage of those differences for your applications. Additionally, we will generally only refer to long-term supported versions of Java tooling. For example, G1GC was introduced as the default garbage collector in the Java 9 development cycle but was not available in a long-term supported JDK (Java Development Kit) until Java 11. Since most enterprise Java developers use LTS versions of the JDK, we will limit version references to those (at the time of writing, those are Java 8, 11, 17, 21, and 25).

In this two-part series on tuning Java applications for the cloud, we come at the problem from two different perspectives. In Part 1 (this article), we will focus on how the JVM allocates resources and identify some options and operating system configurations that can improve performance on Ampere-powered instances in the cloud or on dedicated bare-metal hardware. In Part 2, we will look more closely at the infrastructure side, with a particular focus on Kubernetes and Linux kernel configuration. We will walk through some architectural differences between Arm64 and x86, and how to ensure that your Kubernetes, operating system, and JVM are all tuned to maximize the bang for your buck from your Java application.
Part 1: Optimizing the JVM

When running Java applications in the cloud, tuning the JVM is not necessarily at the forefront of deployment teams' minds, but getting it wrong or running with default options can impact the performance and cost of your cloud applications. In this article, we will walk through some of the more helpful tunable elements in the JVM, covering:

- Performance benefits of using recent Java versions
- Key differences between cloud instances and developer environments
- Setting the right heap size and choosing the right garbage collector for your application
- JVM options that may boost price/performance for Ampere-powered instances

Keeping Up With the Times

Arm64 support was first introduced to the Java ecosystem with Java 8 and has been steadily improving since then. If you are still using Java 8, your Java applications can run up to 30% slower than if you are using a more recent version of Java, like Java 21 or the recently released Java 25. The reason is two-fold:

- The performance of Java has been steadily improving across all architectures
- There are a number of initiatives that have specifically improved performance on Arm64

It is worth noting that it is possible to develop applications with the Java 8 language syntax while taking advantage of the performance improvements of a more recent JVM, using Oracle’s Java SE Enterprise Performance Pack. This is (simplifying slightly) a distribution of tools that compiles Java 8 applications to run on a JVM from the Java 17 JDK. That said, the language has seen many improvements over the past 10 years, and we recommend updating your Java applications to run on a more recent Java distribution.

The Difference Between Cloud Instances and Developer Desktops

The JVM’s default ergonomics were designed with the assumption that your Java application is just one of many processes running on a shared host. On a developer laptop or a multi-tenant server, the JVM intentionally plays nice, limiting itself to a relatively small percentage of system memory and leaving headroom for everything else. That works fine on a workstation where the JVM is competing with your IDE, your browser, and background services, but in cloud environments, your Java application will typically be the only application you care about in that VM or Docker (more generally, OCI) container instance. By default, if you don’t explicitly set initial and max heap size, the JVM uses a tiered formula to size the heap based on “available memory.” You can see what the default heap size is for your cloud instances using Java logging:

Plain Text

$ java -Xlog:gc+heap=debug
[0.005s][debug][gc,heap] Minimum heap 8388608  Initial heap 524288000  Maximum heap 8342470656

The defaults for heap sizing, based on the system RAM available, are:

- On small systems (≤ 384 MB RAM), the default max heap is set to 50% of available memory.
- On systems with memory between 384 MB and 768 MB, the max heap is fixed at 192 MB, no matter how much memory the system actually has in that range.
- For systems with available memory over 768 MB, the max heap is 25% of available memory.
- The initial heap (-Xms) is much smaller: around 1/64th of available memory, capped at 1 GB.
- Since Java 11, when running in OCI containers, the JVM bases these calculations on the container’s memory limit (cgroup) rather than host memory, but the percentages and thresholds remain the same. We will talk about the JVM’s container awareness in our next article.

So, for a VM with 512 MB RAM, the JVM will still only allow 192 MB for the heap.
On a laptop with 16 GB RAM, the default cap is ~4 GB. On a container with a 2 GB memory limit, the heap defaults to ~512 MB. That’s a perfectly reasonable choice if your JVM is sharing a machine with dozens of other processes. But in the cloud, when you spin up a dedicated VM or a container instance, the JVM is often the only significant process running. Instead of trying to be a good neighbor and leave resources for other applications, you want it to use the majority of the resources you’ve provisioned. Otherwise, you’re paying for idle memory and under-utilized CPU.

JVM Heap Defaults vs. Cloud Recommendations

This shift has two key implications:

- Memory allocation: Instead of defaulting to 25–50% of RAM, cloud workloads should usually allocate 80–85% of available memory to the heap. This ensures you get the most out of the memory you’re paying for while leaving room for JVM internals (metaspace, thread stacks, code cache) and OS overhead.
- CPU utilization: Cloud instances nearly always run on multiple cores, but Kubernetes resource limits can confuse the JVM’s view of the world. If your container requests 1 CPU, the scheduler enforces that limit with time slices across multiple cores. However, the JVM will assume it is running on a single-core system and may make inefficient choices as a result. This can lead to poor garbage-collection choices or thread-pool sizing. For this reason, cloud developers should explicitly set -XX:ActiveProcessorCount to a value greater than 1 and choose a garbage collector that supports multiple garbage collection threads.

Scenario | Default ergonomics (no flags) | Recommended for cloud workloads
Initial heap (-Xms or -XX:InitialRAMPercentage) | ~1/64th of memory (capped at 1 GB) | Match initial heap close to max heap (stable, long-lived services): -XX:InitialRAMPercentage=80
Max heap (-Xmx or -XX:MaxRAMPercentage) | ≤ 384 MB RAM → 50% of RAM; 384–768 MB → fixed 192 MB; ≥ 768 MB → 25% of RAM | Set heap to 80–85% of the container/VM limit: -XX:MaxRAMPercentage=80
GC choice | G1GC (default in Java 11+) or Parallel GC (Java 8) when processor count is ≥ 2; SerialGC when processor count is < 2 | G1GC (-XX:+UseG1GC) is a sensible default for most cloud services
CPU count | JVM detects host cores, may overshoot container quota | -XX:ActiveProcessorCount=<cpu_limit, with a minimum of 2>
Cgroup awareness | Java 11+ detects container limits | Set explicit percentages as you would for VMs

Regardless of your target architecture, if you only tweak a few JVM options for cloud workloads, start here. These settings prevent the most common pitfalls and align the JVM with the resources you’ve explicitly provisioned:

Garbage collector: Use G1GC (-XX:+UseG1GC) for most cloud services. It balances throughput and latency, scales well with heap sizes in the multi-GB range, and is the JVM’s default in recent releases when you have more than one CPU core.

Active processor count:

Plain Text

-XX:ActiveProcessorCount=<cpu_limit with minimum 2>

Match this value to the number of CPUs or millicores assigned to the underlying compute hosting your container. For example, even if Kubernetes allocates a quota of 1024 millicores to your container, if it is running in a 16-core virtual machine, you should be setting ActiveProcessorCount to 2 or more. This allows the JVM to appropriately allocate thread pools and choose a garbage collector, such as G1GC, instead of SerialGC, which halts your application entirely during GC runs.
The optimal value for this will depend on what else is running in the virtual machine — if you set the number too high, you will have noisy neighbor impacts for other applications running on the same compute node.

Heap sizing:

Plain Text

-XX:InitialRAMPercentage=80
-XX:MaxRAMPercentage=85

These options tell the JVM to scale its heap based on the container’s memory limits rather than host memory, and to claim a larger fraction than desktop defaults. Use 80% as a safe baseline; push closer to 85% if your workload is steady-state.

Consistency between init and max: For long-lived services, set InitialRAMPercentage equal to or slightly smaller than MaxRAMPercentage. This avoids the performance penalty of gradual heap expansion under load.

With these three knobs, most Java applications running in Kubernetes or cloud VMs will achieve predictable performance and avoid out-of-memory crashes.
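Putting the three knobs together, a container entrypoint might look like this (a sketch; the values assume a 2-vCPU, steady-state service, so adjust them to your own limits):

Plain Text

java -XX:+UseG1GC \
     -XX:ActiveProcessorCount=2 \
     -XX:InitialRAMPercentage=80 \
     -XX:MaxRAMPercentage=85 \
     -jar app.jar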
JVM Options That Can Improve Performance on Arm64

Beyond heap sizing and CPU alignment, a handful of JVM options can give you measurable improvements for servers running Ampere’s Arm64 CPUs. These are not “one size fits all.” They depend on workload characteristics such as RAM usage, latency vs. throughput trade-offs, and network I/O, but they’re worth testing to see whether they improve your application's performance.

Enabling HugePages

Transparent Huge Pages (THP) allocates a large contiguous block of memory consisting of multiple kernel pages in one allocation and treats it as a single memory page from an application perspective. Boot a Linux kernel with THP enabled and pass -XX:+UseTransparentHugePages to your JVM to allocate large, contiguous blocks of memory, which can offer a massive performance boost for workloads that can take advantage of it.

Using a 64k-Page Kernel

Booting your host OS with a 64K kernel page size makes sure that memory is allocated and managed by your kernel in larger blocks than the 4K default. This will reduce TLB misses and speed up memory access for workloads that tend to use large contiguous blocks of memory. Note that booting kernels with a specific kernel page size and configuring Transparent Huge Pages require OS support and configuration, so they’re best handled in coordination with your ops team.

Memory Pre-Touch

Some workloads benefit from pre-touching memory pages on startup. By default, virtual memory pages are not mapped to physical memory until they are needed. The first time a physical memory page is needed, the operating system generates a page fault, which fetches a physical memory page, maps the virtual address to the physical address, and stores the pair of addresses in the kernel page table. Pre-touch maps virtual memory addresses to physical memory addresses at startup, making the first access to those memory pages at run time faster. Adding the option:

Plain Text

-XX:+AlwaysPreTouch

forces the JVM to commit and map all heap pages at startup, avoiding page faults later under load. The tradeoff: slightly longer startup time, but more consistent latency once running. This option is good for latency-sensitive services that stay up for a long time. It has the additional benefit of ensuring a fast failure at startup if you are requesting more memory than can be made available to your application.

Tiered Compilation vs. Ahead-of-Time JIT

The JVM normally compiles hot code paths incrementally at runtime. Options like -XX:+TieredCompilation (enabled by default) balance startup speed with steady-state performance. For cloud workloads where startup time is less important than throughput, you can bias toward compiling more aggressively up front. In some cases, compiling JIT profiles ahead of time (using jaotc or Class Data Sharing archives) can further reduce runtime CPU overhead. However, ahead-of-time compilation comes with both risks and constraints. Just-in-time (JIT, or runtime) compilation takes advantage of profiling information gathered while running the application: it identifies hot methods, method calls that need not be virtual, calls that can be inlined, hot loops within methods, constant parameters, branch frequencies, and so on. An ahead-of-time (AOT) compiler is missing all that information and may produce code with sub-optimal performance. In addition, language features related to dynamic class loading, where class definitions are not available ahead of time or are generated at run time, cannot be used with ahead-of-time compilation.

Vectorization and Intrinsics

Modern JVMs on Arm64 include optimized intrinsics for math, crypto, and vector operations. No flags are needed to enable these, but it’s worth validating that you’re running at least Java 17+ to take advantage of these optimizations.

Guideline for Adoption

- For short-lived batch jobs, avoid options that slow startup (AlwaysPreTouch, aggressive JIT).
- For long-running services (APIs, web apps), favor memory pre-touch and consistent heap sizing.
- For memory-intensive services, configure Transparent Huge Pages, consider a kernel with a memory page size larger than the default 4K, and monitor TLB performance.

Conclusion

The JVM has a long history of making conservative assumptions, tuned for developer laptops and multi-tenant servers rather than dedicated cloud instances. On Ampere®-powered VMs and containers, those defaults often leave memory and CPU cycles unused. By explicitly setting heap percentages and processor counts, and choosing the right garbage collector, you can ensure your applications take full advantage of the hardware beneath them. By using a more recent version of the JVM, you benefit from the incremental improvements that have been made since Arm64 support was first added in Java 8. That’s just the beginning, though. JVM flags and tuning deliver real wins, but the bigger picture includes the operating system and Kubernetes itself. How Linux allocates memory pages, how Kubernetes enforces CPU and memory quotas, and how containers perceive their share of the host all have a direct impact on JVM performance. In the next article in this series, we’ll step outside the JVM and look at the infrastructure layer:

- How container awareness in the JVM and Kubernetes resource requests and limits interact
- What happens if you don’t set quotas explicitly
- How kernel- and cluster-level tuning (kernel-level tuning options, memory page sizes, core pinning) can unlock even more efficiency

Part 1 provides guidance on the JVM to “use what you’ve paid for.” Part 2 will ensure your OS and container platform are tuned for optimal performance. We invite you to learn more about Ampere developer efforts, find best practices and insights, and give us feedback at https://developer.amperecomputing.com and https://community.amperecomputing.com/. Check out the full Ampere article collection here.
What Is a Virtual Thread?

Multi-threading is a widely used feature across the industry for developing Java-based applications. It allows us to run operations in parallel, enabling faster task execution. The number of threads created by any Java application is limited by the number of parallel operations the OS can handle; in other words, the number of threads in a Java application is capped by the number of OS threads. Until now, this limitation has created a bottleneck for further scaling any application, considering the current fast-paced ecosystem. To overcome this limitation, Java introduced the concept of the virtual thread in JDK 21. A virtual thread is created by the Java application and is not associated with any OS thread. This means a virtual thread does not need to be bound to a platform thread (aka OS thread). A virtual thread works on any task independently and acquires a platform thread only when it needs to perform an I/O operation. This mechanism for acquiring and releasing platform threads gives an application the flexibility to create as many virtual threads as needed and achieve high concurrency.

Threads Before JDK 21

All threads that were instances of the java.lang.Thread class before JDK 21 were OS threads, aka platform threads. That meant every time a thread was created in the JDK environment, it was mapped to a platform thread. This mechanism limited the number of threads that could be created in a JVM environment. Due to the high cost of creating a platform thread, threads used to get pooled to avoid creating them again and again, a process that added extra cost to application performance.

Threads After JDK 21

With JDK 21, the application developer can choose to create a virtual thread instead of a platform thread using the Thread.ofVirtual() API or Executors.newVirtualThreadPerTaskExecutor(). A thread created this way is internal to the JVM, and no OS thread is occupied. All concurrent tasks can be done by a virtual thread in the same way as by a platform thread. A virtual thread requires a platform thread to perform an I/O operation, and once the I/O is complete, the virtual thread releases the platform thread. Virtual threads do not require management in a pool. Instead, we can have an effectively unlimited number of virtual threads in the system, since they are internal to the JVM.

Why Virtual Threads?

- High throughput: Tasks that consist of a large number of concurrent operations spend much of their time waiting, e.g., server applications. Web servers typically handle many client requests. In the absence of virtual threads, their capacity to handle parallel requests is limited. Using virtual threads, they can serve a large number of concurrent requests, adding to their capacity.
- No thread pooling: Virtual threads are inexpensive and plentiful, hence they never need to be pooled. For each concurrent task, we can create a virtual thread, and it is as easy as creating an object in JVM memory.
- High performance: Creating a virtual thread is less time-consuming (because no OS-level activity occurs), hence overall application performance will improve.
- Less memory consumption: Each virtual thread maintains a stack in the heap to store local variables and method calls. A virtual thread can spawn multiple virtual threads, and since they are considered short-lived, we expect a shallow call stack for each thread, consuming little memory.
- Scalable solution: API-heavy applications are generally designed in a thread-per-request style.
Since virtual threads allow the JVM to create a far greater number of threads than platform threads, applications can scale to serve many client requests.

How to Create a Virtual Thread

The current JDK framework supports two ways to create virtual threads. Below is the sample code.

Approach 1 – Using the Thread.Builder Interface

Java

Thread.Builder builder = Thread.ofVirtual().name("NewThread");
Runnable task = () -> {
    System.out.println("Running thread");
};
Thread t = builder.start(task);
System.out.println("Thread name: " + t.getName());

Approach 2 – The ExecutorService Framework

Java

try (ExecutorService myExecutor = Executors.newVirtualThreadPerTaskExecutor()) {
    Future<?> future = myExecutor.submit(() -> System.out.println("Running a new thread"));
    future.get();
    System.out.println("Task completed");
}

Performance Comparison: Virtual Threads vs. Platform Threads

I have created a basic program to showcase the performance difference between virtual threads and platform threads.

Code Snippets

Java

// The Store class stores the thread counts generated with odd and even numbers.
// It uses a ConcurrentHashMap to store this data.
public class Store {
    private ConcurrentHashMap<Integer, Integer> concurrentHashMap = new ConcurrentHashMap<Integer, Integer>();

    public synchronized void addQuantity(int productId) {
        int key = productId % 2;
        concurrentHashMap.computeIfAbsent(key, k -> 0);
        concurrentHashMap.computeIfPresent(key, (k, v) -> v + 1);
    }

    public Map<Integer, Integer> getStoreData() {
        return concurrentHashMap;
    }
}

// This class acts as a task, which is executed by a number of threads.
// Every thread has a task to increment the count in the store.
public class ComputationTask implements Runnable {
    int productId;
    Store store;

    public ComputationTask(Store store, int id) {
        this.store = store;
        productId = id;
    }

    @Override
    public void run() {
        store.addQuantity(productId);
    }
}

// This main class has the logic to instantiate virtual and platform threads.
// 1,000 threads get created, and once all the threads have finished their work,
// the code prints the hash map entries and the overall time taken by the process.
public class Main {
    public static void main(String[] args) throws InterruptedException {
        long pid = ProcessHandle.current().pid();
        System.out.println("Process ID: " + pid);
        Thread.Builder builder = null;
        Store store = new Store();
        builder = Thread.ofVirtual().name("virtual worker-", 0);
        // builder = Thread.ofPlatform().name("platform worker-", 0);
        long starttime = System.currentTimeMillis();
        for (int i = 0; i < 1000; i++) {
            ComputationTask task = new ComputationTask(store, i);
            Thread t1 = builder.start(task);
            t1.join();
            System.out.println(t1.getName() + " started");
        }
        Map<Integer, Integer> map = store.getStoreData();
        map.entrySet().stream()
            .forEach(entry -> System.out.println("Key: " + entry.getKey() + ", Value: " + entry.getValue()));
        long endtime = System.currentTimeMillis();
        System.out.println("Total Computation Time - " + (endtime - starttime) + " milliseconds");
    }
}

Execution

Execute the program with virtual threads:

Plain Text

virtual worker-0 started
virtual worker-1 started
virtual worker-2 started
virtual worker-3 started
………………………………….
………………………………….
virtual worker-995 started
virtual worker-996 started
virtual worker-997 started
virtual worker-998 started
virtual worker-999 started
Key: 0, Value: 500
Key: 1, Value: 500
Total Computation Time - 147 milliseconds

Process finished with exit code 0

Execute the program with platform threads (comment out the virtual thread creation code in Main.java and uncomment the platform thread creation code):

Plain Text

platform worker-0 started
platform worker-1 started
platform worker-2 started
platform worker-3 started
……………………………………..
……………………………………..
platform worker-995 started
platform worker-996 started
platform worker-997 started
platform worker-998 started
platform worker-999 started
Key: 0, Value: 500
Key: 1, Value: 500
Total Computation Time - 551 milliseconds

Process finished with exit code 0

Based on the above results, it is evident that when the program creates 1,000 virtual threads and performs a certain operation, it takes 147 ms, whereas the same code run with platform threads takes around 551 ms to complete. Hence, virtual threads deliver better performance than platform threads.

Best Practices When Using Virtual Threads

- A virtual thread is pinned to its platform thread while it executes a synchronized block. Hence, avoid frequent, long-duration synchronized blocks so that platform threads are freed quickly and we can take full advantage of the virtual thread model.
- Never create a pool of virtual threads. Because virtual threads are available in large volumes, there is little overhead in creating a new one. Without thread pooling, the JVM does not need complex logic to maintain a thread pool and do its scheduling.
- Avoid asynchronous code-writing techniques; prefer synchronous code, where the server dedicates a thread to processing each incoming request for its entire duration. Since virtual threads can be plentiful, blocking them is cheap and encouraged (see the sketch after this list).
- Virtual threads support thread-local variables. However, because virtual threads can be numerous, use thread locals only after careful consideration. Do not use thread locals to pool costly resources among multiple tasks sharing the same thread in a thread pool.
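To illustrate the thread-per-task, blocking style these practices encourage, here is a minimal sketch (the task count and sleep duration are arbitrary, illustrative values):

Java

import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadSketch {
    public static void main(String[] args) {
        // One cheap virtual thread per task — no pool sizing, no async callbacks
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                executor.submit(() -> {
                    Thread.sleep(Duration.ofMillis(100)); // blocking is cheap on a virtual thread
                    return i;
                }));
        } // close() implicitly waits for all submitted tasks to finish
    }
}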
In the previous article, we learnt the basics, setup, and configuration of the REST Assured framework for API test automation. We also learnt to test a POST request with REST Assured by sending the request body as:

- String
- JSON Array/JSON Object
- Java Collections
- POJO

In this tutorial article, we will learn the following:

- How to use JSON files as a request body for API testing.
- Implement the Builder design pattern in Java to create request data dynamically.
- Integrate the Datafaker library to generate realistic test data at runtime.
- Perform assertions with the dynamic request data generated using the Builder design pattern and the Datafaker library.

Writing a POST API Test With a Request Body as a JSON File

JSON files can be used as a request body to test POST API requests. This approach comes in handy in the following scenarios:

- Multiple test scenarios with different payloads, where you need to maintain test data separately from test code.
- Large or complex payloads that need to be reused across multiple tests.
- Frequently changing request payloads that are easier to update in JSON files rather than using other approaches, like dynamically updating the request body using JSON Objects/Arrays or POJOs.

Apart from the above, JSON files can also be used when non-technical team members need to modify the test data before running the tests, without modifying the automation code. With the pros, this approach has some drawbacks as well. The JSON files must be updated with unique data before each test run to avoid duplicate data errors. If you prefer not to modify the JSON files before every execution, you’ll need to implement data cleanup procedures, which adds additional maintenance overhead. We will be using the POST /addOrder API from the RESTful e-commerce demo application to write the POST API request test. Let’s add a new Java class, TestPostRequestWithJsonFile, and add a new method, getOrdersFromJson(), to it.

Java

public class TestPostRequestWithJsonFile {

    public List<Orders> getOrdersFromJson(String fileName) {
        InputStream inputStream = this.getClass()
            .getClassLoader()
            .getResourceAsStream(fileName);
        if (inputStream == null) {
            throw new IllegalArgumentException("File not found!!");
        }
        Gson gson = new Gson();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream))) {
            Type listType = new TypeToken<List<Orders>>() {
            }.getType();
            return gson.fromJson(reader, listType);
        } catch (IOException e) {
            throw new RuntimeException("Error Reading the JSON file" + fileName, e);
        }
    }
    //...
}

Code Walkthrough

The getOrdersFromJson() method accepts the JSON file name as a parameter and returns a list of orders. This method functions as explained below:

- Locates the JSON file: The JSON file is placed in the src/test/resources folder; the method searches for it on the classpath using the getResourceAsStream() method. In case the file is not found, it will throw an IllegalArgumentException.
- Deserialises the JSON to Java objects: The BufferedReader is used for efficiently reading the file. Google’s Gson library uses the TypeToken to specify the target type (List<Orders>) for proper generic type handling, and converts the JSON array into a typed list of order objects. The try-with-resources autocloses the resources to prevent memory leaks.
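For reference, the new_orders.json file used below could look something like this — a hypothetical sample shaped like the POST /addOrder request body schema shown later in this article:

JSON

[
  {
    "user_id": "310",
    "product_id": "205",
    "product_name": "Practical Granite Chair",
    "product_amount": 250,
    "qty": 2,
    "tax_amt": 50,
    "total_amt": 550
  }
]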
The following test method, testCreateOrders(), tests the POST /addOrder API request:

Java

@Test
public void testCreateOrders() {
    List<Orders> orders = getOrdersFromJson("new_orders.json");

    given().contentType(ContentType.JSON)
        .when()
        .log()
        .all()
        .body(orders)
        .post("http://localhost:3004/addOrder")
        .then()
        .log()
        .all()
        .statusCode(201)
        .and()
        .assertThat()
        .body("message", equalTo("Orders added successfully!"));
}

The following line of code will read the file new_orders.json and use its content as the request body to create new orders.

Java

List<Orders> orders = getOrdersFromJson("new_orders.json");

The rest of the test method remains the same as explained in the previous tutorial: it sets the content type to JSON and sends the POST request. It will verify that the status code is 201 and also assert the message field in the response body.

Writing a POST API Test With a Request Body Using the Builder Pattern and Datafaker

The recommended approach for real-world projects is to use the Builder pattern with the Datafaker library, as it generates dynamic data at runtime, allowing random and fresh test data generation every time the tests are executed. The key advantages of using this approach are as follows:

- It provides a faster test setup as there are no I/O operations involved in searching, locating, and reading JSON files.
- It can easily handle parallel test execution as there is no conflict of test data between concurrent tests.
- It helps in easy maintenance as there is no need for manual updating of the test data.

The Builder pattern with Datafaker can be implemented using the following steps:

Step 1: Generate a POJO for the Request Body

The following is the schema of the request body of the POST /addOrder API:

JSON

[
  {
    "user_id": "string",
    "product_id": "string",
    "product_name": "string",
    "product_amount": 0,
    "qty": 0,
    "tax_amt": 0,
    "total_amt": 0
  }
]

Let’s create a new Java class for the POJO and name it OrderData. We will use Lombok in this POJO as it helps in reducing boilerplate code, such as getters, setters, and builders. By using annotations like @Builder, @Getter, and @Setter, the class can be made concise, readable, and easier to maintain.

Java

@Getter
@Setter
@Builder
@JsonPropertyOrder({ "user_id", "product_id", "product_name", "product_amount", "qty", "tax_amt", "total_amt" })
public class OrderData {

    @JsonProperty("user_id")
    private String userId;

    @JsonProperty("product_id")
    private String productId;

    @JsonProperty("product_name")
    private String productName;

    @JsonProperty("product_amount")
    private int productAmount;

    private int qty;

    @JsonProperty("tax_amt")
    private int taxAmt;

    @JsonProperty("total_amt")
    private int totalAmt;
}

The field names of the JSON request body have a “_” in them, while per standard Java conventions we follow the camelCase pattern. To bridge this gap, we can make use of the @JsonProperty annotation from the Jackson Databind library and provide the actual field name in the annotation over the respective Java variable. The order of the JSON fields can be preserved by using the @JsonPropertyOrder annotation and passing the field names in the required order.

Step 2: Create a Builder Class for Generating Data at Runtime With Datafaker

In this step, we will create a new Java class, OrderDataBuilder, for generating test data at runtime using the Datafaker library.
Java

public class OrderDataBuilder {

    public static OrderData getOrderData() {
        Faker faker = new Faker();
        int productAmount = faker.number().numberBetween(1, 1999);
        int qty = faker.number().numberBetween(1, 10);
        int grossAmt = qty * productAmount;
        int taxAmt = (int) (grossAmt * 0.10);
        int totalAmt = grossAmt + taxAmt;

        return OrderData.builder()
            .userId(String.valueOf(faker.number().numberBetween(301, 499)))
            .productId(String.valueOf(faker.number().numberBetween(201, 533)))
            .productName(faker.commerce().productName())
            .productAmount(productAmount)
            .qty(qty)
            .taxAmt(taxAmt)
            .totalAmt(totalAmt)
            .build();
    }
}

A static method, getOrderData(), has been created inside the class that uses the Datafaker library and builds the OrderData for generating the request body in JSON format at runtime. The Faker class from the Datafaker library is instantiated first, which will be further used for creating fake data at runtime. It provides various methods to generate the required data, such as names, numbers, company names, product names, addresses, etc., at runtime. Using the OrderData POJO, we can populate the required fields through Java’s Builder design pattern. Since we have already applied the @Builder annotation from Lombok, it automatically enables an easy and clean way to construct OrderData objects.

Step 3: Write the POST API Request Test

Let’s create a new Java class, TestPostRequestWithBuilderPattern, for implementing the test.

Java

public class TestPostRequestWithBuilderPattern {

    @Test
    public void testCreateOrders() {
        List<OrderData> orderDataList = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            orderDataList.add(getOrderData());
        }

        given().contentType(ContentType.JSON)
            .when()
            .log()
            .all()
            .body(orderDataList)
            .post("http://localhost:3004/addOrder")
            .then()
            .statusCode(201)
            .and()
            .assertThat()
            .body("message", equalTo("Orders added successfully!"));
    }
}

The request body requires the data to be sent as a JSON array with multiple JSON objects. The OrderDataBuilder class generates the JSON objects; the JSON array can be handled in the test.

Java

List<OrderData> orderDataList = new ArrayList<>();
for (int i = 0; i < 4; i++) {
    orderDataList.add(getOrderData());
}

This code generates four unique order records using the getOrderData() method and adds them to a list named orderDataList. Once the loop completes, the list holds four unique OrderData objects, each representing a new order ready to be included in the test request. The POST request is finally sent to the server, where it is executed, and the code checks for a status code of 201 and asserts the response body with the text “Orders added successfully!”

Performing Assertions With the Builder Pattern

When the request body and its data are generated dynamically, a common question arises: “Can we perform assertions on this dynamically created data?” The answer is “Yes.” In fact, it is much easier and quicker to perform the assertions with the request data generated using the Builder pattern and the Datafaker library.
The following is the response body returned after successful order creation using the POST /addOrder API:

JSON { "message": "Orders added successfully!", "orders": [ { "id": 1, "user_id": "412", "product_id": "506", "product_name": "Enormous Wooden Watch", "product_amount": 323, "qty": 7, "tax_amt": 226, "total_amt": 2487 }, { "id": 2, "user_id": "422", "product_id": "447", "product_name": "Ergonomic Marble Shoes", "product_amount": 673, "qty": 2, "tax_amt": 134, "total_amt": 1480 }, { "id": 3, "user_id": "393", "product_id": "347", "product_name": "Fantastic Bronze Plate", "product_amount": 135, "qty": 9, "tax_amt": 121, "total_amt": 1336 }, { "id": 4, "user_id": "398", "product_id": "526", "product_name": "Incredible Leather Bottle", "product_amount": 1799, "qty": 4, "tax_amt": 719, "total_amt": 7915 } ] }

Let’s say we need to perform an assertion on the user_id field of the second order and the total_amt field of the fourth order in the response. We can write the assertions with REST Assured as follows:

Java given ().contentType (ContentType.JSON) .when () .log () .all () .body (orderDataList) .post ("http://localhost:3004/addOrder") .then () .statusCode (201) .and () .assertThat () .body ("message", equalTo ("Orders added successfully!")) .and () .assertThat () .body ("orders[1].user_id", equalTo (orderDataList.get (1) .getUserId ()), "orders[3].total_amt", equalTo (orderDataList.get (3) .getTotalAmt ()));

The orders array in the response holds all the data related to the orders. Using the JSONPath "orders[1].user_id", the user_id of the second order is retrieved. Similarly, the total amount of the fourth order can be fetched using the JSONPath "orders[3].total_amt". The Builder design pattern comes in handy for comparing the expected values: orderDataList.get(1).getUserId() and orderDataList.get(3).getTotalAmt() return the dynamic values of user_id (second order) and total_amt (fourth order) that were generated and used in the request body when creating the orders at runtime.

Summary

The REST Assured framework provides the flexibility to send a request body in POST API requests. The request body can be posted as a String, a JSON object or JSON array, Java collections such as List and Map, JSON files, or POJOs. The Builder design pattern in Java can be combined with the Datafaker library to generate a dynamic request body at runtime. Based on my experience, using the Builder pattern in Java provides several advantages over the other approaches for creating request bodies: it allows dynamic values to be easily generated and asserted, making test verification and validation more efficient and reliable.
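For completeness, here is one way the getOrdersFromJson() helper used in the first test could be implemented. This is a minimal sketch assuming an Orders POJO that mirrors the JSON fields and Jackson Databind on the classpath; the class name and file location are illustrative, not from the original tutorial.

Java
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.InputStream;
import java.util.List;

public final class OrderTestData {

    // Reads a JSON array from the test classpath (e.g., src/test/resources/new_orders.json)
    // and maps it to a List<Orders> for use as the request body.
    public static List<Orders> getOrdersFromJson(String fileName) {
        try (InputStream in = OrderTestData.class.getClassLoader().getResourceAsStream(fileName)) {
            if (in == null) {
                throw new IllegalArgumentException("Test data file not found on classpath: " + fileName);
            }
            return new ObjectMapper().readValue(in, new TypeReference<List<Orders>>() { });
        } catch (Exception e) {
            throw new RuntimeException("Unable to read test data from " + fileName, e);
        }
    }
}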
Web applications depend on Java-based services more than ever. Every request that comes from a browser, a mobile app, or an API client eventually reaches a backend service that must respond quickly and consistently. When traffic increases or a dependency slows down, many Java services fail in ways that are subtle at first and catastrophic later. A delay becomes a backlog. A backlog becomes a timeout. A timeout becomes a full service outage.

The goal of a reliable web service is not to avoid every failure. The real goal is to recover from failure fast enough that users never notice. What matters is graceful recovery.

Why Java Web Services Fail Under Load

When a Java web service experiences stress, it usually fails at specific pressure points. These failures do not appear suddenly — they accumulate slowly until the system can no longer respond. A few common examples include:

Traffic spikes causing a thread pool to become full
The database taking too long to return results
A remote service responding with partial data that the application is not prepared to handle
Message queues growing faster than the system can process them

Once one part of the system becomes slow, every layer above it begins to stall. Requests wait for threads. Threads wait for network calls. Network calls wait for other dependencies. Eventually, the entire service stops moving. This type of failure is not caused by a single bug. It is caused by the system having no way to protect itself from slow downstream behavior.

The Common Mistake in Java-Based Web Services

Many Java services assume that external systems will behave correctly. They assume that network calls will return quickly. They assume that resources will remain healthy. They assume that load will stay within expected levels. When these assumptions fail, the system has no defensive layer. A slow dependency causes a slow endpoint. A slow endpoint triggers additional retries. More retries increase the load and make the problem worse. The result is a cascading failure that affects the entire application. Developers often discover that the real problem is not the failure itself. The real problem is that the service had no plan for failure.

How to Build Recovery-Friendly Request Handling

A web service must decide quickly whether it can handle a request or not. Recovery begins with predictable behavior. Several practices help Java services respond safely during heavy load:

Use clear limits for the number of active requests
Respond with a safe fallback result when work cannot be performed
Avoid adding more work to the system when it is already overloaded
Monitor response times continuously to detect early signs of stress

These practices keep the request flow healthy and prevent the system from slowing to a halt.

Use Short and Consistent Timeouts for Web Endpoints

One of the fastest ways to improve resilience is to replace long or default timeout values with short, consistent ones. A short timeout allows the system to abandon work that is unlikely to complete. This prevents requests from getting stuck and blocking others. It is better to fail fast than to hold a thread for too long. Predictable timeouts also lead to predictable behavior during outages, which makes cascading failures less likely.

Avoid Retry Storms That Make Problems Worse

When a dependency slows down, the natural instinct is to retry the request. This instinct is reasonable when failures are rare. In a web application that sees thousands of requests per second, it can create a storm.
A retry storm happens when every client retries at the same time. The extra traffic overloads the struggling service even more, worsening the situation with every passing second. To avoid this, retries must be controlled and limited. They must include proper spacing between attempts and must stop after a bounded number of tries. A safe retry strategy can protect a system from collapse.

Isolation Is the Most Powerful Tool for Web Backends

Isolation ensures that one slow component cannot bring down the entire application. Java-based web services can apply isolation in several ways:

Separate fast operations from slow operations
Protect calls to external systems with boundaries
Move work that may stall into dedicated executors
Use different pools for background tasks versus request-facing tasks

Isolation keeps the platform responsive even when one component begins to struggle.

Use Concurrency Wisely When Building Java Web Applications

Concurrency is one of Java's greatest strengths — but also one of its biggest sources of failure. Proper use of concurrency allows the application to serve many users at once without overwhelming the system. Key best practices include:

Use fixed-size pools instead of unbounded thread counts
Avoid long-running operations inside executor pools
Use non-blocking operations when practical
Ensure that important tasks are not starved of resources

Concurrency must be a tool for stability, not a source of unpredictability.

Patterns That Keep Java Web Backends Alive Under Pressure

Years of studying outages and recovery events reveal patterns that consistently improve resilience:

Set clear limits for resource usage
Validate inputs early
Separate long-running work and fail fast when necessary
Use predictable error messages
Stop accepting new work when the system reaches its limit
Clean up stalled tasks regularly
Restart components safely when required

These small practices combine into significant improvements in availability.

Final Thoughts for Web Developers and Backend Engineers

Modern web applications rarely fail because a single component breaks. They fail because the system is not prepared to recover. A reliable Java-based service does not need to be perfect — it needs to be predictable and steady when failure arrives. By designing for recovery instead of relying on perfect conditions, developers can build Java web services that remain stable, responsive, and trustworthy even under difficult conditions. This mindset is the foundation of long-term reliability in a world where pressure never stops.
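To make these recommendations concrete, here is a minimal sketch showing how a bounded pool, a short timeout, limited and spaced retries, and a safe fallback can fit together. The pool size, timeout, and retry budget are illustrative assumptions, not values prescribed by this article.

Java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public final class ResilientCall {

    private static final int MAX_ATTEMPTS = 3;

    // A fixed-size pool puts a clear limit on active work instead of an unbounded thread count.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(16);

    // Runs the task with a short timeout and a small, spaced-out retry budget,
    // returning a safe fallback when the work cannot be completed.
    public static <T> T callWithRetry(Supplier<T> task, T fallback) {
        long backoffMillis = 100;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            Future<T> future = POOL.submit(task::get);
            try {
                // Fail fast: abandon work that is unlikely to complete instead of holding a thread.
                return future.get(500, TimeUnit.MILLISECONDS);
            } catch (TimeoutException | ExecutionException e) {
                future.cancel(true);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return fallback;
            }
            if (attempt < MAX_ATTEMPTS) {
                try {
                    // Exponential backoff with jitter spaces retries apart and avoids retry storms.
                    Thread.sleep(backoffMillis + ThreadLocalRandom.current().nextLong(50));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return fallback;
                }
                backoffMillis *= 2;
            }
        }
        return fallback;
    }
}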
Design documents in Enterprise Java often end up trapped in binary silos like Excel or Word, causing them to drift away from the actual code. This pattern shows how to treat design docs as source code by using structured Markdown and generative AI.

We've all been there: the architecture team delivers a Detailed Design Document (DDD) to the development team. It’s a 50-page Word file or, even worse, a massive Excel spreadsheet with multiple tabs defining Java classes, fields, and validation rules. By the time you write the first line of code, the document is already outdated. Binary files are nearly impossible to version, diffing changes is impractical, and copy-pasting definitions into Javadoc is tedious. At enterprise scale, this "Code Drift," where the implementation diverges from the design, becomes a major source of technical debt. By shifting design documentation to structured Markdown and leveraging generative AI, we can treat documentation exactly like source code. This creates a bridge between the architect’s intent and the developer’s integrated development environment (IDE).

The Problem: The Binary Wall

In traditional Waterfall or hybrid environments, design lives in Office documents (Word/Excel), while code lives in text formats (Java/YAML). Because the formats are incompatible, automation breaks down. You can't easily "compile" an Excel sheet into a Java POJO, and you certainly can’t unit test a Word doc. To close this gap, design information needs to be:

Text-based (for Git version control).
Structured (for machine parsing).
Human-readable (for reviews and collaboration).

The solution is Structured Markdown.

The Solution: Markdown as a Data Source

Instead of treating Markdown merely as a way to write README files, we treat it as a structured specification format. By standardizing headers and layout, a Markdown file becomes a consistent, machine-friendly data source that GenAI tools (GitHub Copilot, ChatGPT, etc.) can parse to generate boilerplate code, diagrams, and even legacy Excel reports for stakeholders.

1. The Directory Structure

To make this approach work, design documents must live alongside the code, mirroring the package structure so they evolve together. The Pattern:

Plain Text /project-root /src /main/java/com/app/backend/RegisteredUser.java /design-docs /backend RegisteredUser.md OrderService.md /diagrams architecture.mermaid

By keeping the .md file in the same repository structure as the .java file, we establish a direct, traceable link between the specification and the implementation.

2. The Structured Spec

The key is to write Markdown as an actual specification, not as a blog post. We use specific headers (such as ## Class Summary, ## Members) that act as hooks for automation tools. Example: RegisteredUser.md

Markdown # RegisteredUser ## Class Summary Represents a user who has completed the registration process. Manages user credentials and validation status. ## Members | Name | Type | Description | | :--- | :--- | :--- | | userId | String | Unique identifier for the user. | | userPassword | String | Encrypted password string. | ## Methods + fn isValidUserId(): bool - Logic: Returns true if userId length >= 3. + fn isValidUserPassword(): bool - Logic: Returns true if password length >= 8.

This format is readable to a Product Owner, yet structured enough for an LLM to interpret type definitions and logic constraints.

Implementation: From Text to Java

Once we express the design in structured Markdown, generative AI can bridge the gap to Java code.
In Fujitsu's case study, they utilized a VS Code extension and the OpenAI API to read these Markdown specs and generate initial class scaffolding. You can replicate this workflow with any GenAI coding assistant. Because the prompt context includes a rigid, predictable structure, hallucination rates drop significantly.

Prompt Context Example: "Read the following Markdown design doc. Generate a Java class including Javadoc, fields, and validation logic described in the 'Methods' section."

The Generated Output:

Java package com.app.backend; /** * Represents a user who has completed the registration process. * Manages user credentials and validation status. */ public class RegisteredUser { private String userId; private String userPassword; public RegisteredUser(String userId, String userPassword) { this.userId = userId; this.userPassword = userPassword; } /** * Logic: Returns true if userId length >= 3. */ public boolean isValidUserId() { return this.userId != null && this.userId.length() >= 3; } /** * Logic: Returns true if password length >= 8. */ public boolean isValidUserPassword() { return this.userPassword != null && this.userPassword.length() >= 8; } }

The AI doesn't guess; it implements the specified business rules (>= 3, >= 8) exactly as written. If the design changes, you update the Markdown and regenerate the code.

Visualizing the Architecture

A common concern when moving away from Excel, Visio, or other diagramming tools is losing the ability to "draw" the system. But now that our design lives in structured text, we can compile it into diagrams. Using the standardized Markdown headers, we can automatically generate Mermaid.js class diagrams simply by scanning the directory.

Input (Markdown header): Class: RegisteredUser depends on Class: UserProfile

Mermaid classDiagram class RegisteredUser { +String userId +String userPassword +isValidUserId() } class UserProfile { +String email } RegisteredUser --> UserProfile

This ensures your architecture diagrams always reflect the current state of the design documents, rather than what the architect drew three months ago.

The "Excel" Requirement

Many enterprises still require an Excel file for official sign-off or for non-technical stakeholders. But now that the source of truth is structured text (Markdown), generating Excel is trivial. A simple script (or even an AI prompt) can parse the headers and populate a CSV or XLSX template automatically.

Old way: The master file is Excel -> developers manually write Java.
New way: The master file is Markdown -> auto-generate Java and auto-generate Excel for management.

Results and ROI

Shifting to a Markdown-first approach does more than tidy up your repository. In the analyzed case study, teams saw clear productivity gains:

55% faster development: Boilerplate code (classes, tests) was generated directly from the Markdown spec.
Reduced communication overhead: AI-assisted translation of Markdown specs is faster and more accurate than deciphering Excel cells.
True diff-ability: Git history now shows exactly who changed a business rule, and when.

Conclusion

Documentation often becomes an afterthought because the tools we use for design (Office) work against the tools we use for development (IDEs). By adopting Markdown as a formal specification language, we pull design work directly into the DevOps pipeline. So the next time you're asked to write a detailed design, skip the spreadsheet. Open a .md file, define a clear structure, and let the code flow from there.
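As a rough illustration of the "generate Excel from Markdown" step above, a few lines of Java can pull the ## Members table out of a spec and emit CSV rows. The file path and class name are hypothetical, and a real implementation would need proper CSV quoting for commas inside descriptions.

Java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public final class MembersTableToCsv {

    // Extracts the "## Members" Markdown table and prints each row as a CSV line.
    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Path.of("design-docs/backend/RegisteredUser.md"));
        boolean inMembers = false;
        for (String line : lines) {
            if (line.startsWith("## ")) {
                inMembers = line.startsWith("## Members"); // track which section we are in
                continue;
            }
            // Table rows look like "| userId | String | ... |"; skip the "| :--- |" separator row.
            if (inMembers && line.startsWith("|") && !line.contains(":---")) {
                String[] cells = line.split("\\|");
                List<String> row = new ArrayList<>();
                for (String cell : cells) {
                    if (!cell.isBlank()) {
                        row.add(cell.trim());
                    }
                }
                System.out.println(String.join(",", row));
            }
        }
    }
}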
In this article, we will walk you through how to conduct a load test and analyze the results using Java Maven technology. We'll cover everything from launching the test to generating informative graphs and tables. For this demonstration, we'll use various files, including Project Object Model (POM) files, JMeter scripts, and CSV data, from the jpetstore_loadtesting_dzone project available on GitHub. This will help illustrate the steps involved and the functionality of the necessary plugins and tools. You can find the project here: https://github.com/vdaburon/jpetstore_loadtesting_dzone. The web application being tested is a well-known application called JPetStore, which you can further explore at https://github.com/mybatis/jpetstore-6.

Advantages of This Solution for Launching and Analyzing Tests

The details of how to implement this solution and of the Maven launches will be covered in the subsequent sections. For now, let's highlight the key advantages:

For Installation

There is no need to pre-install Apache JMeter to conduct the load tests, as the JMeter Maven Plugin automatically fetches the Apache JMeter tool and the necessary plugins from the Maven Central Repository.
The file paths used are relative to the Maven project, which means the project works unchanged across different machines during the development phase and in Continuous Integration setups.

For Continuous Integration

This solution integrates seamlessly into Continuous Integration pipelines (such as Jenkins and GitLab). Tests can be easily run on a Jenkins node or a GitLab Runner, making it accessible for both developers and testers.
Performance graphs from operating system or Java monitoring can be easily added using tools like nmon + nmon visualizer for Linux environments.

For Developers and Testers

Java developers and testers familiar with Maven will feel comfortable with the pom.xml files and Git.
The load testing project is managed like a standard Java Maven project within Integrated Development Environments (IDEs) such as IntelliJ, Eclipse, and Visual Studio.
The various files (POM, JMeter script, CSV data) can be version-controlled using Git or another source control system.
The project's README.md file (in Markdown format) can serve as valuable documentation on how to run load tests and analyze results, particularly in Continuous Integration (CI).

For Analysis

The analysis is fast, as the various output files are created in just a few minutes.
Users can filter results to focus only on specific pages, excluding the URLs invoked within them.
The plugin's filter tool allows users to analyze results by load step, running it multiple times with different start and end offset parameters.
For clearer graphs, users can filter to present response times per scenario with a manageable number of curves.
Users can force the Y-axis so graphs can be compared on the same scale; for example, setting Y = 5000 ms for response times or 0 to 100% for CPU usage.
Aggregate and Synthesis reports are available in both CSV format and HTML tables for easy display on a webpage.
After executing a load test, users can quickly review results through the generated index.html page, which provides easy access to graphs and HTML tables.
The generated HTML page includes links to the files with their sizes, and clicking on these links displays the content, such as JMeter logs, in the browser.
If a particular graph is missing, users can create duration graphs for each URL called on a page using the "JMeterPluginsCMD Command Line Tool" and
"Filter Results Tool" from the JMeter results file or directly through JMeter's Swing GUI interface.For Report Generation Graphs created during the analysis can be directly imported into reports created in Microsoft Word or LibreOffice Writer formats.CSV reports can be edited in a spreadsheet software (Microsoft Excel or LibreOffice Calc), and the formatted values can then be easily copied into a Word or Writer report.For Archiving Archiving results is quite simple; users can save the zipped directory containing all the results and analyses.This archiving format approach makes it easy to compare different load test campaigns.The retention period for results can be extensive, stretching several years, as the file format is simple and clear; unlike data stored in documents, relational databases, or temporal databases, it remains easily accessible and understandable. Running a Load Test With Maven and Apache JMeter If you're looking to run a load test using Apache JMeter, there is a Maven plugin available for that purpose. This plugin is called the jmeter-maven-plugin , and you can find it at its project URL: https://github.com/jmeter-maven-plugin/jmeter-maven-plugin. To effectively run your performance tests with Java Maven, you need a few essentials: A JDK/JRE version 1.8 or higher (such as version 17)A recent version of Maven (3.7 or higher)A Maven pom.xml file One of the great things about this setup is that you don't need to install Apache JMeter beforehand. It's also a good idea to have a Git client available for fetching crucial resources from the repository such as the JMeter script, external configuration files, and any CSV data files you'll need. For easier management, it is recommended to maintain two Maven files: The first Maven file, pom.xml (pom_01_launch_test.xml), is dedicated to launching the performance testThe second Maven file, pom.xml (pom_02_analyse_results.xml), is for analyzing the results JMeter Maven Plugin Recommended Project Directory Structure The Maven project designed for launching load tests comes with a predefined directory structure. For the jmeter-maven-plugin , this structure can be found at: ${project.base.directory}/src/test/jmeter In this directory, you need to place the following items: The JMeter script (.jmx)The dataset files (.csv)External configuration files referenced in the JMeter script (.properties)The JMeter configuration file (user.properties) if you are using any non-standard properties The pom.xml File for Launching the Load Test The first pom.xml file (pom_01_launch_test.xml) includes the declaration of the jmeter-maven-plugin with some configuration properties. 
Plain Text <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>io.github.vdaburon</groupId> <artifactId>jpetstore-maven-load-test-dzone</artifactId> <version>1.0</version> <packaging>pom</packaging> <name>01 - Launch a load test of the JPetstore web application with the maven plugin</name> <description>Launch a load test of the JPetstore web application with the maven plugin</description> <inceptionYear>2025</inceptionYear> <developers> <developer> <id>vdaburon</id> <name>Vincent DABURON</name> <email>[email protected]</email> <roles> <role>architect</role> <role>developer</role> </roles> </developer> </developers> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <jmeter.version>5.6.3</jmeter.version> <jvm_xms>256</jvm_xms> <jvm_xmx>756</jvm_xmx> <prefix_script_name>jpetstore</prefix_script_name> <config_properties_name>config_test_warm_up.properties</config_properties_name> </properties> <build> <plugins> <plugin> <!-- Launch load test with : mvn clean verify --> <groupId>com.lazerycode.jmeter</groupId> <artifactId>jmeter-maven-plugin</artifactId> <version>3.6.1</version> <executions> <!-- Generate JMeter configuration --> <execution> <id>configuration</id> <goals> <goal>configure</goal> </goals> </execution> <!-- Run JMeter tests --> <execution> <id>jmeter-tests</id> <goals> <goal>jmeter</goal> </goals> </execution> <!-- Fail build on errors in test <execution> <id>jmeter-check-results</id> <goals> <goal>results</goal> </goals> </execution> --> </executions> <configuration> <jmeterVersion>${jmeter.version}</jmeterVersion> <jmeterExtensions> <!-- add jmeter plugins in JMETER_HOME/lib/ext --> <artifact>kg.apc:jmeter-plugins-functions:2.2</artifact> <artifact>kg.apc:jmeter-plugins-dummy:0.4</artifact> <artifact>io.github.vdaburon:pacing-jmeter-plugin:1.0</artifact> </jmeterExtensions> <testPlanLibraries> <!-- add librairies in JMETER_HOME/lib --> <!-- e.g: <artifact>org.postgresql:postgresql:42.5.1</artifact> --> </testPlanLibraries> <downloadExtensionDependencies>false</downloadExtensionDependencies> <jMeterProcessJVMSettings> <xms>${jvm_xms}</xms> <xmx>${jvm_xmx}</xmx> <arguments> <argument>-Duser.language=en</argument> <argument>-Duser.region=EN</argument> </arguments> </jMeterProcessJVMSettings> <testFilesIncluded> <jMeterTestFile>${prefix_script_name}.jmx</jMeterTestFile> </testFilesIncluded> <propertiesUser> <!-- folder for csv file relatif to script folder --> <relatif_data_dir>/</relatif_data_dir> <!-- PROJECT_HOME/target/jmeter/results/ --> <resultat_dir>${project.build.directory}/jmeter/results/</resultat_dir> </propertiesUser> <customPropertiesFiles> <!-- like -q myconfig.properties , add my external configuration file --> <file>${basedir}/src/test/jmeter/${config_properties_name}</file> </customPropertiesFiles> <logsDirectory>${project.build.directory}/jmeter/results</logsDirectory> <generateReports>false</generateReports> <testResultsTimestamp>false</testResultsTimestamp> <resultsFileFormat>csv</resultsFileFormat> </configuration> </plugin> </plugins> </build> </project> Launching a Load Test on the JPetstore Web Application To launch a performance test on the JPetstore application at 50% load for a duration of 10 minutes, specify: The JMeter script 
prefix with -Dprefix_script_name=jpetstore (for the jpetstore.jmx file)
The properties file name with -Dconfig_properties_name=config_test_50pct_10min.properties, which contains the virtual users' configuration needed for the 50% load and the 10-minute duration

The properties file (e.g., config_test_50pct_10min.properties) should contain the external configuration, including JMeter properties such as the test URL, the number of virtual users per scenario, and the duration of the test. To launch the load test, use the following command:

mvn -Dprefix_script_name=jpetstore -Dconfig_properties_name=config_test_50pct_10min.properties -f pom_01_launch_test.xml clean verify

Notes to keep in mind:

Ensure that the mvn program is included in the PATH environment variable or that the MAVEN_HOME environment variable is set.
Since Maven relies on a JDK/JRE, make sure the path to the java program is specified in the launch file, or that the JAVA_HOME environment variable is configured.
If you need to stop the test before it reaches its scheduled end time, run the shell script located at <JMETER_HOME>/bin/shutdown.sh (for Linux) or shutdown.cmd (for Windows).

Once the test has started, the summary logs provide an overview of the performance test's progress. We specifically keep an eye on the time elapsed since the launch and the number of errors encountered. Here's an example of the logs from a test that was launched in the IntelliJ IDE:

Plain Text C:\Java\jdk1.8.0_191\bin\java.exe ... -Dmaven.home=C:\software\maven3 -Dprefix_script_name=jpetstore -Dconfig_properties_name=config_test_50pct_10min.properties -f pom_01_launch_test.xml clean verify -f pom_01_launch_test.xml [INFO] Scanning for projects... [INFO] [INFO] --< io.github.vdaburon:jpetstore-maven-load-test-dzone >--- [INFO] Building 01 - Launch a load test of the JPetstore web application with the maven plugin 1.0 [INFO] from pom_01_launch_test.xml [INFO] --------------------------------[ pom ]--------------------------------- [INFO] [INFO] --- clean:3.2.0:clean (default-clean) @ jpetstore-maven-load-test-dzone --- [INFO] [INFO] --- jmeter:3.6.1:configure (configuration) @ jpetstore-maven-load-test-dzone --- [INFO] [INFO] ------------------------------------------------------- [INFO] C O N F I G U R I N G J M E T E R [INFO] ------------------------------------------------------- [INFO] [INFO] Creating test configuration for execution ID: configuration [INFO] Building JMeter directory structure... [INFO] Generating JSON Test config... [INFO] Configuring JMeter artifacts... [INFO] Populating JMeter directory... [INFO] Copying extensions to C:\demo\jpetstore_loadtesting_dzone\target\1515b131-17ff-4f97-bcb7-ba2eec698862\jmeter\lib\ext Downloading dependencies: false [INFO] Copying junit libraries to C:\demo\jpetstore_loadtesting_dzone\target\1515b131-17ff-4f97-bcb7-ba2eec698862\jmeter\lib\junit Downloading dependencies: true [INFO] Copying test plan libraries to C:\demo\jpetstore_loadtesting_dzone\target\1515b131-17ff-4f97-bcb7-ba2eec698862\jmeter\lib Downloading dependencies: true [INFO] Configuring JMeter properties...
[INFO] [INFO] --- jmeter:3.6.1:jmeter (jmeter-tests) @ jpetstore-maven-load-test-dzone --- [INFO] [INFO] ------------------------------------------------------- [INFO] P E R F O R M A N C E T E S T S [INFO] ------------------------------------------------------- [INFO] [INFO] Executing test: jpetstore.jmx [INFO] Arguments for forked JMeter JVM: [java, -Xms256M, -Xmx756M, -Duser.language=en, -Duser.region=EN, -Djava.awt.headless=true, -jar, ApacheJMeter-5.6.3.jar, -d, C:\demo\jpetstore_loadtesting_dzone\target\1515b131-17ff-4f97-bcb7-ba2eec698862\jmeter, -j, C:\demo\jpetstore_loadtesting_dzone\target\jmeter\results\jpetstore.jmx.log, -l, C:\demo\jpetstore_loadtesting_dzone\target\jmeter\results\jpetstore.csv, -n, -q, C:\demo\jpetstore_loadtesting_dzone\src\test\jmeter\config_test_50pct_10min.properties, -t, C:\demo\jpetstore_loadtesting_dzone\target\jmeter\testFiles\jpetstore.jmx, -Dsun.net.http.allowRestrictedHeaders, true] [INFO] [INFO] WARN StatusConsoleListener The use of package scanning to locate plugins is deprecated and will be removed in a future release [INFO] WARN StatusConsoleListener The use of package scanning to locate plugins is deprecated and will be removed in a future release [INFO] WARN StatusConsoleListener The use of package scanning to locate plugins is deprecated and will be removed in a future release [INFO] WARN StatusConsoleListener The use of package scanning to locate plugins is deprecated and will be removed in a future release [INFO] Creating summariser <summary> [INFO] Created the tree successfully using C:\demo\jpetstore_loadtesting_dzone\target\jmeter\testFiles\jpetstore.jmx [INFO] Starting standalone test @ September 24, 2025 11:30:22 AM CEST (1758706222410) [INFO] Waiting for possible Shutdown/StopTestNow/HeapDump/ThreadDump message on port 4445 [INFO] summary + 33 in 00:00:08 = 4.2/s Avg: 100 Min: 30 Max: 1089 Err: 0 (0.00%) Active: 2 Started: 2 Finished: 0 [INFO] summary + 67 in 00:00:29 = 2.3/s Avg: 53 Min: 28 Max: 174 Err: 0 (0.00%) Active: 5 Started: 5 Finished: 0 [INFO] summary = 100 in 00:00:37 = 2.7/s Avg: 69 Min: 28 Max: 1089 Err: 0 (0.00%) [INFO] summary + 81 in 00:00:30 = 2.7/s Avg: 69 Min: 27 Max: 858 Err: 0 (0.00%) Active: 7 Started: 7 Finished: 0 [INFO] summary = 181 in 00:01:07 = 2.7/s Avg: 69 Min: 27 Max: 1089 Err: 0 (0.00%) … [INFO] summary + 47 in 00:00:31 = 1.5/s Avg: 86 Min: 30 Max: 471 Err: 0 (0.00%) Active: 7 Started: 7 Finished: 0 [INFO] summary = 1381 in 00:09:38 = 2.4/s Avg: 71 Min: 27 Max: 1184 Err: 0 (0.00%) [INFO] summary + 36 in 00:00:22 = 1.6/s Avg: 69 Min: 30 Max: 150 Err: 0 (0.00%) Active: 0 Started: 7 Finished: 7 [INFO] summary = 1417 in 00:10:00 = 2.4/s Avg: 71 Min: 27 Max: 1184 Err: 0 (0.00%) [INFO] Tidying up ... @ September 24, 2025 11:40:23 AM CEST (1758706823339) [INFO] ... end of run [INFO] Completed Test: C:\demo\jpetstore_loadtesting_dzone\target\jmeter\testFiles\jpetstore.jmx [INFO] [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 10:08 min [INFO] Finished at: 2025-09-24T11:40:24+02:00 [INFO] ------------------------------------------------------------------------ [INFO] Shutdown detected, destroying JMeter process... 
[INFO] Process finished with exit code 0

The results can be found in the following directory: <PROJECT_HOME>/target/jmeter/results

jpetstore.jmx.log (JMeter logs)
error.xml (contains information about failed samplers)
jpetstore.csv (JMeter results)

Analysis of Results

We use the second Maven POM file, named pom_02_analyse_results.xml, specifically for analysis purposes. The launch parameter is prefix_script_name, representing the script prefix without its extension. This is important because the JMeter results file follows the format <script prefix>.csv (for instance, jpetstore.csv). To launch the analysis, type the following command:

mvn -Dprefix_script_name=jpetstore -f pom_02_analyse_results.xml verify

Note: DO NOT use the clean command, as it will erase the test results that we want to retain.

The Maven File With the Plugin and Tools for Analysis

The Maven plugin and tools are:

jmeter-graph-tool-maven-plugin
csv-report-to-html
create-html-for-files-in-directory

The jmeter-graph-tool-maven-plugin plugin allows you to:

Filter JMeter results files by retaining only the pages while removing the URLs invoked within them. It can also narrow down the data by test period, ensuring that only the steps with a stable number of virtual users are included.
Generate a "Summary" report in CSV format
Generate a "Synthesis" report in CSV format
Create graphs in PNG format to visualize various metrics, including: Threads State Over Time, Response Codes Per Second, Bytes Throughput Over Time, Transactions Per Second, Response Times Percentiles, and Response Times Over Time

The csv-report-to-html tool reads the generated CSV reports (both Summary and Synthesis) and generates an HTML table displaying the data contained within. Meanwhile, the create-html-for-files-in-directory tool browses the target/jmeter/results directory and creates an index.html page. This page serves as a convenient hub for viewing the various image files and HTML tables, and it provides links to the other files present in the directory.
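To demystify what the csv-report-to-html step produces, here is a simplified, hypothetical equivalent in plain Java. The real tool is configured through Maven in the next section; this sketch ignores CSV quoting, sorting, and styling.

Java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public final class CsvReportToHtmlSketch {

    // Turns a generated CSV report into a basic HTML table.
    public static void main(String[] args) throws Exception {
        Path csv = Path.of("target/jmeter/results/G01_AggregateReport.csv");
        List<String> lines = Files.readAllLines(csv);
        StringBuilder html = new StringBuilder("<table border=\"1\">\n");
        for (String line : lines) {
            html.append("<tr>");
            for (String cell : line.split("[,;]", -1)) { // handles comma- or semicolon-separated reports
                html.append("<td>").append(cell.trim()).append("</td>");
            }
            html.append("</tr>\n");
        }
        html.append("</table>\n");
        Files.writeString(Path.of("target/jmeter/results/G01_AggregateReport_simple.html"), html);
    }
}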
The pom_02_analyse_results.xml File for Analysis

The contents of the pom_02_analyse_results.xml file are outlined below:

Plain Text <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>io.github.vdaburon</groupId> <artifactId>jpetstore-maven-analyse-result-dzone</artifactId> <version>1.0</version> <packaging>pom</packaging> <name>02 - Analyzes the results of the web application JPetstore load test with dedicated maven plugins</name> <description>Analyzes the results of the web application JPetstore load test with dedicated maven plugins</description> <inceptionYear>2025</inceptionYear> <developers> <developer> <id>vdaburon</id> <name>Vincent DABURON</name> <email>[email protected]</email> <roles> <role>architect</role> <role>developer</role> </roles> </developer> </developers> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <jvm_xms>256</jvm_xms> <jvm_xmx>756</jvm_xmx> <graph_width>960</graph_width> <graph_height>800</graph_height> <prefix_script_name>jpetstore</prefix_script_name> </properties> <dependencies> <dependency> <groupId>io.github.vdaburon</groupId> <artifactId>csv-report-to-html</artifactId> <version>1.2</version> </dependency> <dependency> <groupId>io.github.vdaburon</groupId> <artifactId>create-html-for-files-in-directory</artifactId> <version>1.9</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>io.github.vdaburon</groupId> <artifactId>jmeter-graph-tool-maven-plugin</artifactId> <version>1.2</version> <executions> <execution> <id>create-graphs</id> <goals> <goal>create-graph</goal> </goals> <phase>verify</phase> <configuration> <directoryTestFiles>${project.build.directory}/jmeter/testFiles</directoryTestFiles> <filterResultsTool> <filterResultsParam> <inputFile>${project.build.directory}/jmeter/results/${prefix_script_name}.csv</inputFile> <outputFile>${project.build.directory}/jmeter/results/${prefix_script_name}_filtred.csv</outputFile> <successFilter>false</successFilter> <includeLabels>SC[0-9]+_P.*</includeLabels> <includeLabelRegex>true</includeLabelRegex> </filterResultsParam> </filterResultsTool> <graphs> <graph> <pluginType>AggregateReport</pluginType> <inputFile>${project.build.directory}/jmeter/results/${prefix_script_name}.csv</inputFile> <generateCsv>${project.build.directory}/jmeter/results/G01_AggregateReport.csv</generateCsv> <includeLabels>SC[0-9]+_.*</includeLabels> <includeLabelRegex>true</includeLabelRegex> </graph> <graph> <pluginType>SynthesisReport</pluginType> <inputFile>${project.build.directory}/jmeter/results/${prefix_script_name}.csv</inputFile> <generateCsv>${project.build.directory}/jmeter/results/G02_SynthesisReport.csv</generateCsv> <includeLabels>SC[0-9]+_.*</includeLabels> <includeLabelRegex>true</includeLabelRegex> </graph> <graph> <pluginType>ThreadsStateOverTime</pluginType> <inputFile>${project.build.directory}/jmeter/results/${prefix_script_name}.csv </inputFile> <width>${graph_width}</width> <height>${graph_height}</height> <generatePng>${project.build.directory}/jmeter/results/G03_ThreadsStateOverTime.png</generatePng> <relativeTimes>no</relativeTimes> <paintGradient>no</paintGradient> <autoScale>no</autoScale> </graph> <graph> <pluginType>ResponseCodesPerSecond</pluginType>
<inputFile>${project.build.directory}/jmeter/results/${prefix_script_name}.csv</inputFile> <width>${graph_width}</width> <height>${graph_height}</height> <generatePng>${project.build.directory}/jmeter/results/G05_ResponseCodesPerSecond.png</generatePng> <relativeTimes>no</relativeTimes> <paintGradient>no</paintGradient> <limitRows>100</limitRows> <autoScale>no</autoScale> <excludeLabels>SC[0-9]+_.*</excludeLabels> <excludeLabelRegex>true</excludeLabelRegex> </graph> <graph> <pluginType>TransactionsPerSecond</pluginType> <inputFile>${project.build.directory}/jmeter/results/${prefix_script_name}_filtred.csv</inputFile> <width>${graph_width}</width> <height>${graph_height}</height> <generatePng>${project.build.directory}/jmeter/results/G07_TransactionsPerSecondAggregated.png</generatePng> <relativeTimes>no</relativeTimes> <aggregateRows>yes</aggregateRows> <paintGradient>no</paintGradient> <limitRows>100</limitRows> <autoScale>no</autoScale> </graph> <graph> <pluginType>ResponseTimesPercentiles</pluginType> <inputFile>${project.build.directory}/jmeter/results/${prefix_script_name}_filtred.csv</inputFile> <width>${graph_width}</width> <height>${graph_height}</height> <generatePng>${project.build.directory}/jmeter/results/G08_ResponseTimesPercentiles.png</generatePng> <aggregateRows>no</aggregateRows> <paintGradient>no</paintGradient> </graph> <graph> <pluginType>ResponseTimesOverTime</pluginType> <inputFile>${project.build.directory}/jmeter/results/${prefix_script_name}_filtred.csv</inputFile> <width>${graph_width}</width> <height>${graph_height}</height> <generatePng>${project.build.directory}/jmeter/results/G11_ResponseTimesOverTime_SC01.png</generatePng> <relativeTimes>no</relativeTimes> <paintGradient>no</paintGradient> <limitRows>100</limitRows> <includeLabels>SC01.*</includeLabels> <includeLabelRegex>true</includeLabelRegex> <forceY>2000</forceY> </graph> </graphs> <jMeterProcessJVMSettings> <xms>${jvm_xms}</xms> <xmx>${jvm_xmx}</xmx> <arguments> <argument>-Duser.language=en</argument> <argument>-Duser.region=EN</argument> <!-- Date format is not standard, The format must be the same as declared in the user.properties set for the load test. Not mandatory but these properties prevent error messages when parsing the results file. 
--> <argument>-Djmeter.save.saveservice.timestamp_format=yyyy/MM/dd HH:mm:ss.SSS</argument> <argument>-Djmeter.save.saveservice.default_delimiter=;</argument> </arguments> </jMeterProcessJVMSettings> </configuration> </execution> </executions> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.2.1</version> <executions> <execution> <!-- individual launch : mvn exec:java@aggregate_csv_to_html --> <id>aggregate_csv_to_html</id> <phase>verify</phase> <goals> <goal>java</goal> </goals> <configuration> <mainClass>io.github.vdaburon.jmeter.utils.ReportCsv2Html</mainClass> <arguments> <argument>${project.build.directory}/jmeter/results/G01_AggregateReport.csv</argument> <argument>${project.build.directory}/jmeter/results/G01_AggregateReportSorted.html</argument> <argument>sort</argument> </arguments> </configuration> </execution> <execution> <!-- individual launch : mvn exec:java@synthesis_csv_to_html --> <id>synthesis_csv_to_html</id> <phase>verify</phase> <goals> <goal>java</goal> </goals> <configuration> <mainClass>io.github.vdaburon.jmeter.utils.ReportCsv2Html</mainClass> <arguments> <argument>${project.build.directory}/jmeter/results/G02_SynthesisReport.csv</argument> <argument>${project.build.directory}/jmeter/results/G02_SynthesisReportSorted.html</argument> <argument>sort</argument> </arguments> </configuration> </execution> <execution> <!-- individual launch : mvn exec:java@create_html_page_for_files_in_directory --> <id>create_html_page_for_files_in_directory</id> <phase>verify</phase> <goals> <goal>java</goal> </goals> <configuration> <mainClass>io.github.vdaburon.jmeter.utils.HtmlGraphVisualizationGenerator</mainClass> <arguments> <argument>${project.build.directory}/jmeter/results</argument> <argument>index.html</argument> </arguments> <systemProperties> <systemProperty> <key>image_width</key> <value>${graph_width}</value> </systemProperty> <systemProperty> <key>add_toc</key> <value>true</value> </systemProperty> </systemProperties> </configuration> </execution> </executions> </plugin> </plugins> </build> </project>

In the results directory, you'll find the graphs, the CSV files containing the reports, and the HTML tables for the reports. There is also an index.html page, which allows you to view the results and provides links to the different files. This directory can be found at target/jmeter/results within your Maven project. The generated index.html page allows you to view the graphs and access the file links directly in your web browser.

The JMeter log file can also be found in the target/jmeter/results directory. This is not the default location; the pom.xml file, specifically pom_01_launch_test.xml, has been modified to specify the log file location: <logsDirectory>${project.build.directory}/jmeter/results</logsDirectory>. Consequently, the log file is named by combining the script file name and the ".log" extension, for example, jpetstore.jmx.log.

Limitations of the Load Testing Solution With Maven

The limitations encountered don't come directly from Maven itself, but rather from the computer (whether a VM or a pod) that runs the load test. When dealing with heavy loads, it's often necessary to modify system settings to increase the limits of the account that runs Apache JMeter. In Linux, the limits can be found in the file located at /etc/security/limits.conf. The default values are generally insufficient for high-load testing scenarios.
To check the current limits for a Linux account, you can run the command: ulimit -a

By default, the maximum number of open files and network connections is capped at 1024, and the number of processes is limited to 4096. To modify these limits, you'll need to edit the /etc/security/limits.conf file as the root user. Make sure to change the values for the Linux user (in this case, jmeter) that runs Java so that it can open enough files and create enough processes.

Plain Text jmeter hard nproc 16384 jmeter soft nproc 16384 jmeter hard nofile 16384 jmeter soft nofile 16384

When a test is launched by a GitLab Runner (or a Jenkins node), it's essential for the Runner to have its system settings adjusted to accommodate the CPU load, available memory, and network bandwidth.

Going Further

Additional Steps

To manage the size of the JMeter results and XML error files, consider adding a compression step, as these files tend to be quite large and compress efficiently (an illustrative sketch appears at the end of this article). There are two available plugins that can help validate the results against Key Performance Indicators (KPIs):

JUnitReportKpiJMeterReportCsv (https://github.com/vdaburon/JUnitReportKpiJMeterReportCsv)
JUnitReportKpiCompareJMeterReportCsv (https://github.com/vdaburon/JUnitReportKpiCompareJMeterReportCsv)

Additionally, you can broaden your analysis to include the generation of KPI results, allowing your Continuous Integration pipeline to fail if any KPIs fall short. If you need to generate a PDF document from the index.html page, tools like convert-html-to-pdf (https://github.com/vdaburon/convert-html-to-pdf) can help you accomplish that.

Monitoring

It is important to monitor the environment being tested during load tests. You can incorporate additional steps to start monitoring before the test begins and to stop it after the load test is complete. This way, you can retrieve the files generated during the monitoring phase for further analysis. It is recommended to use Application Performance Monitoring tools (such as Dynatrace or Elastic APM) to observe both the application and the environment throughout the load test.
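As one possible implementation of the compression step suggested under "Going Further," the results directory can be zipped with a few lines of Java; the paths and archive name here are illustrative.

Java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public final class ZipJMeterResults {

    // Zips target/jmeter/results into a single archive that can be kept for years
    // and compared against later load test campaigns.
    public static void main(String[] args) throws IOException {
        Path results = Path.of("target/jmeter/results");
        Path archive = Path.of("target/jmeter/results-archive.zip");
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(archive));
             Stream<Path> files = Files.walk(results)) {
            for (Path p : (Iterable<Path>) files.filter(Files::isRegularFile)::iterator) {
                zos.putNextEntry(new ZipEntry(results.relativize(p).toString()));
                Files.copy(p, zos);
                zos.closeEntry();
            }
        }
    }
}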
When engineering teams modernize Java applications, the shift from JDK 8 to newer Long-Term Support (LTS) versions, such as JDK 11, 17, and 21, might seem straightforward at first. Since Java maintains backward compatibility, it's easy to assume that the runtime behavior will remain largely unchanged. However, that's far from reality.

In 2025, our team completed a major modernization initiative to migrate all of our Java microservices from JDK 8 to JDK 17. The development and QA phases went smoothly, with no major issues arising. But within hours of deploying to production, we faced a complete system breakdown. Memory usage, which had been consistently reliable for years, jumped by four times. Containers that had previously operated without issue began to restart repeatedly. Our service level agreements (SLAs) degraded, and incident severity levels escalated. This prompted a multi-day diagnostic effort involving several teams—including platform experts, Java Virtual Machine (JVM) specialists, and service owners.

This post-mortem will cover the following:

Key differences between JDK 8 and JDK 17
How containerized environments amplify hidden JVM behaviors
The distinctions between native memory and heap memory
The reasons behind thread proliferation and its impact on memory
The specific commands, flags, and environment variables that resolved our issues
A validated checklist for anyone upgrading to JDK 17 (or 21)

The problems we faced were subtle and nearly invisible to standard Java monitoring tools. However, the lessons we learned reshaped our approach to upgrading JVM versions and transformed our understanding of memory usage in containerized environments.

The Incident

We deployed the JDK 17 version of our primary service to Kubernetes. The rollout was smooth, health checks were green, request latencies remained stable, and the logs showed no errors. However, 2–3 hours later, our dashboards began lighting up.

Symptoms Observed

Metric | JDK 8 (Before) | JDK 17 (After)
Memory usage | ~50% of container | 95–100% (frequent OOMKills)
Thread count | ~400 | 1600+ threads
Total native memory | ~800 MB | 3.4–3.6 GB
Container restarts | None | Multiple/hour
GC behavior | Stable | G1GC overhead spikes

Services that had been stable for years suddenly began to fail unpredictably.

The Challenge: Heap Monitoring Misled Us

Every Java engineer knows to keep an eye on heap usage. Initially, the heap looked perfectly fine, staying steady within the configured Xmx. It was native memory that was surging. Native memory includes:

Thread stacks
glibc malloc arenas
Auxiliary structures in the Garbage Collector (GC)
JIT compiler buffers
Metaspace and Code Cache
NIO buffers
Internal JVM C++ structures

Unfortunately, none of this is visible through heap dump tools, and it isn't captured by standard Java monitoring. This is exactly what OOMKilled our containers.

Root Cause Analysis

During our investigation, we found that three independent JVM behaviors, amplified by containers, created a “perfect memory storm.” After three days of thorough analysis—reviewing heap data, utilizing native memory tracking (jcmd VM.native_memory), sampling thread dumps, examining GC logs, and inspecting container cgroups—we identified three root causes.

Root Cause #1: Thread Proliferation Due to CPU Mis-Detection

What Happened

JDK 17 introduced changes to how Runtime.availableProcessors() functions. Specifically, in versions 17.0.5 and later, a regression caused the Java Virtual Machine (JVM) to ignore cgroup CPU limits and instead read the physical CPU count of the host.
Example:

Plain Text
Container CPU limit: 2 vCPUs
Host machine CPUs: 96
JVM detected: 96 CPUs ❌

This miscalculation caused various parts of the JVM to scale thread creation based on the inflated CPU count, including:

GC worker threads
JIT compiler threads
The ForkJoin common pool
JVMTI threads
Async logging threads

So instead of:

Plain Text ~50–80 JVM system threads

the JVM spawned:

Plain Text 300–400+ threads

When factoring in application threads (async tasks, thread pools, I/O threads), the total count shot up to:

Plain Text 1600+ threads

Why Threads Matter for Memory

Every thread typically reserves ~2 MB of stack by default (native memory). So:

Plain Text 1600 threads × 2 MB = ~3.2 GB native stack memory

Even if those threads remain idle, the stack is reserved. This thread bloat alone pushed us dangerously close to the memory limit of our container.

Root Cause #2: glibc malloc Arena Fragmentation

The thread explosion made things much worse. glibc manages memory using malloc arenas and, by default, allocates:

Plain Text 8 × CPU_COUNT arenas

Because the environment exposed 96 CPUs, glibc created:

Plain Text 8 × 96 = 768 arenas

A typical arena can consume 10 to 30 MB, depending on fragmentation patterns. Even when arenas are sparsely used, they still occupy virtual memory and contribute to the Resident Set Size (RSS). In our case, this resulted in:

Plain Text ~1.5–2.0 GB consumed by glibc arenas

This was invisible to Java monitoring tools and heap analysis.

Root Cause #3: G1GC Native Memory Overhead (800–1000 MB Higher)

Another factor was the shift to the Garbage-First Garbage Collector (G1GC) in JDK 17, whereas JDK 8 commonly used ParallelGC. G1GC is known for using significantly more native memory:

Component | Approx. Native Memory
Remembered Sets | 300–400 MB
Card Tables | 100–200 MB
Region metadata | 200 MB
Marking bitmaps | 150+ MB
Concurrent refinement buffers | 100 MB

Total for G1GC:

Plain Text ~800–1000 MB native memory

ParallelGC in JDK 8:

Plain Text ~150–200 MB

Difference:

Plain Text +650–800 MB

This put us well beyond our container’s 4 GB memory limit.

Combined Memory Explosion Model

Let's look at the combined impact of the three root causes:

Under JDK 8 (~2.8 GB Total)

Plain Text
Heap: 2048 MB
Metaspace: 200 MB
Code Cache: 240 MB
Threads: 80 MB
Native GC: 150 MB
Other native: 100 MB
----------------------------------
Total: ~2.8 GB

Under JDK 17 (~5.4 GB Total)

Plain Text
Heap: 2048 MB
Metaspace: 250 MB
Code Cache: 240 MB
Threads: 200 MB
G1GC: 1000 MB
glibc arenas: 1500 MB
Other native: 150 MB
----------------------------------
Total: ~5.4 GB ❌

This puts us 1.4 GB over the container limit. No amount of heap tuning could have fixed this, because the heap itself was not the underlying problem.

The Fix: A Three-Part Solution

Fix #1: Explicitly Set the CPU Count

Plain Text -XX:ActiveProcessorCount=2

This is the most important setting for containerized Java on JDK 11 and above. It prevents the JVM from scaling threads based on the CPU count of the node.

Fix #2: Limit glibc malloc Arenas

Set the environment variable:

Plain Text export MALLOC_ARENA_MAX=2

This reduced native arena overhead from approximately 1.5 GB to below 200 MB. If you're dealing with very tight memory constraints, consider using:

Plain Text export MALLOC_ARENA_MAX=1

Fix #3: Tune or Replace G1GC

You have two options here:

Keep G1GC, but tune it, or
Switch to ParallelGC, particularly for memory-sensitive workloads. ParallelGC still has the lowest native memory footprint of any GC in modern Java.
Our tuning:

Plain Text
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200
-XX:G1HeapRegionSize=16m

After implementing these fixes, memory usage stabilized in the range of 65% to 70%.

Additional Detection and Observability Improvements

The biggest operational takeaway is clear: relying solely on heap monitoring is not enough. JVM upgrades also require native memory monitoring. Here's what we've implemented:

Native Memory Tracking (NMT)

We enabled NMT with the flag:

Plain Text -XX:NativeMemoryTracking=summary

From there, we used:

Plain Text jcmd <pid> VM.native_memory summary

This provided us with a detailed breakdown of memory usage across threads, arenas, GC, the compiler, etc.

Thread Count Alerts

We established the following (an illustrative sketch appears later in this article):

Baseline thread counts per service
Alerts for any increase exceeding 50%
Dashboards showing thread growth patterns

Increases in thread counts often signal potential native memory leaks.

Monitoring Container-Level Memory Metrics

We shifted our focus to monitoring container-level memory instead of pod-level memory, which aggregates data from multiple containers:

Plain Text container_memory_working_set_bytes

By concentrating on container-level metrics, we were able to identify memory overshoots sooner and with greater accuracy.

How We Reproduced the Issue Locally

To validate that the issue was inherent to JDK 17, we set up a local environment that mirrored the original setup.

Step 1: Run the Application in Docker

Plain Text docker run \ --cpus=2 \ --memory=4g \ -e MALLOC_ARENA_MAX=2 \ myservice:java17

Step 2: Inspect CPU Detection

Plain Text docker exec -it <container> bash java -XX:+PrintFlagsFinal -version | grep -i cpu

Here's what we found. Before the fix:

Plain Text active_processor_count = 96

After the fix:

Plain Text active_processor_count = 2

Step 3: Inspect Native Memory

Plain Text jcmd <pid> VM.native_memory summary

The arena counts correlated exactly with the detected CPU count.

Why This Problem Is Becoming More Common

A number of companies migrating from Java 8 to Java 17 (or 21) are encountering similar challenges. The reasons include:

Containerization exposes previously hidden JVM behaviors.
Local development machines typically have plenty of RAM and CPU power, unlike Kubernetes containers.
G1GC is now the default garbage collector, and its overhead is greater than that of ParallelGC.
Many servers are equipped with 64 to 128 CPUs, and JVM thread scaling explodes if the count is mis-detected.
Native memory usage in Java applications is rarely monitored, even in large organizations.
The behavior of glibc malloc arenas is poorly understood outside the realm of low-level systems engineering.

This combination of factors creates a “trap,” where a JVM upgrade can pass all QA tests yet break instantly once deployed to production.

What We Would Do Differently Next Time

JVM Version Soak Testing

Moving forward, we will require:

A 48-hour load soak
A 24-hour canary production soak
Monitoring of thread counts
Oversight of native memory
Analysis of GC behavior logs

We've learned that a functional test suite alone is not sufficient.
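As a rough sketch of the thread-count alerting described above, the baseline capture and the 50% threshold can be expressed with the standard ThreadMXBean. The one-minute sampling interval is an illustrative choice, and a real deployment would publish the metric to its monitoring system instead of logging it.

Java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public final class ThreadBaselineWatchdog {

    // Records a thread-count baseline at startup and warns when live threads
    // exceed it by more than 50%, an early signal of native memory growth.
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int baseline = threads.getThreadCount();
        System.out.printf("Thread baseline: %d%n", baseline);

        while (true) {
            Thread.sleep(60_000); // sample once a minute
            int current = threads.getThreadCount();
            if (current > baseline * 1.5) {
                System.err.printf("WARNING: %d live threads, more than 50%% above baseline %d%n",
                        current, baseline);
            }
        }
    }
}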
JVM Upgrade Runbooks

We have developed a runbook that includes:

Required flags for containers
Required environment variables (MALLOC_ARENA_MAX)
Monitoring dashboards to check before promotion
A rollback decision tree

Rigorous Baseline Establishment

For each service, we will establish baselines for:

Heap usage
Native memory
Thread counts
GC overhead

Once these baselines are defined, comparing JDK 8 to JDK 17 becomes straightforward.

Upgrade Checklist

Pre-Upgrade Steps

Set -XX:ActiveProcessorCount explicitly
Set MALLOC_ARENA_MAX=1 or 2
Choose your garbage collector: G1GC or ParallelGC
Enable Native Memory Tracking
Establish memory baselines for both heap and native memory
Take note of thread count baselines
Enable container-level memory metrics
Conduct soak tests for 24 to 48 hours
Monitor and validate GC pause times while under load

Post-Deployment Actions

Observe thread counts for 2 to 6 hours
Compare native memory usage against your baseline
Check and validate arena counts
Ensure CPU detection is accurate
Roll back immediately if native memory rises more than 10–15% beyond the baseline

Conclusion

The upgrade to JDK 17 was one of the most instructive incidents our team has encountered. It highlighted several crucial points:

Native memory dominates JVM behavior in containers
CPU detection bugs can silently cripple services
GC changes between JDK releases can add 500 MB+ of overhead
glibc malloc arenas can balloon due to excessive thread proliferation
Monitoring heuristics from JDK 8 become less reliable when transitioning to JDK 17
A JVM upgrade must be treated with the same caution as a major infrastructure overhaul, not as a minor version bump

The good news? After applying the recommended fixes, our services now run more efficiently on JDK 17 than they ever did on JDK 8. We're seeing better GC throughput, reduced pause times, and improved overall performance. However, this experience serves as a critical reminder: modern Java is fast and powerful, but only when configured with an understanding of how the JVM interacts with container runtimes, native memory systems, and Linux allocators. If you are planning a JDK 17 upgrade, use this guide, validate your assumptions, and closely monitor native memory alongside heap memory.
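Finally, as a quick way to act on the "Ensure CPU detection is accurate" checklist item, a startup sanity check can surface mis-detection immediately. The expected.cpus system property below is a hypothetical knob for passing in the container's CPU limit, not a JVM flag.

Java
public final class ContainerSanityCheck {

    // Logs what the JVM actually detected so CPU mis-detection is caught at startup,
    // not hours later in production.
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        int cpus = rt.availableProcessors();             // should match the container CPU limit
        long maxHeapMb = rt.maxMemory() / (1024 * 1024); // effective -Xmx
        int liveThreads = Thread.activeCount();

        System.out.printf("Detected CPUs: %d, max heap: %d MB, live threads: %d%n",
                cpus, maxHeapMb, liveThreads);

        int expectedCpus = Integer.getInteger("expected.cpus", 2); // hypothetical property
        if (cpus > expectedCpus) {
            System.err.printf("WARNING: JVM sees %d CPUs but the container limit is %d. "
                    + "Consider setting -XX:ActiveProcessorCount=%d.%n", cpus, expectedCpus, expectedCpus);
        }
    }
}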