
Wednesday, November 9, 2022

The Arrival of Java 19



Oracle is proud to announce the general availability of JDK 19. This release is the tenth Feature Release delivered on time through the six-month release cadence. This level of predictability allows developers to easily manage their adoption of innovation thanks to a steady stream of expected changes.


Java’s ability to boost performance, stability, and security continues to make it the world’s most popular programming language. According to an IDC report, more than 10 million developers – representing 75% of full-time developers worldwide – use Java.

JDK 19 is now available!


Oracle now offers JDK 19 for developers, end-users, and enterprises. Oracle JDK 19 will receive performance, stability and security updates following the Oracle Critical Patch Update (CPU) schedule as outlined in the Oracle Java SE Support Roadmap.

Oracle JDK 19 is not a long-term support (LTS) release. Oracle JDK 17 (announced on September 14, 2021) is the second LTS release under the release cadence announced in 2018. Oracle has announced plans to shorten the time between future LTS releases from three years to two, so you should expect the next LTS to be Java 21 in September of 2023.

Another important change announced with Oracle JDK 17 was the introduction of new, simpler license terms that allow companies to use Oracle JDK 17 – including the quarterly performance, stability, and security patches – at no cost for at least the next three years, allowing one full year of overlap with the next LTS release. Java SE subscribers get access to Oracle’s Java SE Support and commercial features such as GraalVM Enterprise, Java Management Service, and the Advanced Management Console.

Java 19, Together


As with previous releases, with Java 19 we continue to celebrate the contributions from many individuals and organizations in the OpenJDK Community — we all build Java, together!

JDK 19 Fix Ratio

The rate of change over time in the JDK releases has remained largely constant for years, but under the six-month cadence the pace at which production-ready features and improvements are delivered has vastly improved.

Instead of making tens of thousands of fixes and delivering close to one hundred JEPs (JDK Enhancement Proposals) every few years, as was done with the major releases of the past, enhancements are delivered in leaner feature releases on a more manageable, predictable six-month schedule. The changes range from significant new features to small enhancements to routine maintenance, bug fixes, and documentation improvements. Each change is represented in a single commit for a single issue in the JDK Bug System.

Of the 19,297 JIRA issues marked as fixed in Java 11 through Java 19 at the time of their GA, 13,825 were completed by people working for Oracle, while 5,472 were contributed by individual developers and developers working for other organizations. Collating the organization data from assignees shows which organizations sponsored the development of those contributions.


In Java 19, of the 1,962 JIRA issues marked as fixed, 1,383 were completed by Oracle, while 579 were contributed by other members of the Java community.

Oracle would like to thank the developers working for organizations including Alibaba, Amazon, ARM, Huawei, IBM, Intel, JetBrains, NTT Data, Red Hat, SAP and Tencent for their notable contributions. We are also thankful to see contributions from smaller organizations such as Bellsoft, DataDog, Loongson, and Skymatic, as well as independent developers who collectively contributed 6% of the fixes in Java 19.

We are equally grateful to the many experienced developers who reviewed proposed changes, the early adopters who tried out early access builds and reported issues, and the dedicated professionals who provided feedback on the OpenJDK mailing lists. 

The following individuals provided invaluable feedback on build quality, logged good quality bugs, or offered frequent updates:

◉ Uwe Schindler (Apache Lucene)
◉ Martin Grigorov (Apache Tomcat, Apache Wicket)
◉ Rafael Winterhalter (Byte Buddy)
◉ Yoann Rodière (Hibernate ORM, Validator, Search, Reactive)
◉ Marc Hoffman (JaCoCo)
◉ Lukas Eder (JOOQ)

Additionally, through the Quality Outreach program we would like to thank the following FOSS projects and individuals who provided excellent feedback on testing Java 19 early access builds to help improve the quality of the release:

◉ Apache Derby (Rick Hillegas)
◉ Apache Zookeeper (Enrico Olivelli)
◉ BNYM Code Katas (Rinat Gatyatullin)
◉ RxJava (David Karnok)
◉ Apache Johnzon
◉ JobRunr
◉ MyBatis
◉ Renaissance

New in Java 19


Along with thousands of performance, stability, and security updates, Java 19 delivers dozens of new features and enhancements. Seven of these changes are significant enough to be managed through the JDK Enhancement Proposal (JEP) process, including four preview features and two incubator features.

Some new features not requiring a JEP include new (D)TLS Signature Schemes, support for Unicode 14.0, additional DateTime formats, and PAC-RET Protection on AArch64 systems. Full details of these and many other new features can be found at https://jdk.java.net/19/release-notes.

JEP preview features are fully specified and fully implemented language or VM features of the Java SE Platform, but not yet permanent. They are made available in JDK feature releases to gather developer feedback based on real-world use before they become permanent in a future release. This also gives tool vendors the opportunity to support features before they are finalized into the Java SE standard.

JEP Incubator modules allow putting non-final APIs and non-final tools in the hands of developers and users to gather feedback that will ultimately improve the quality of the Java platform.

The seven JEPs delivered with Java 19 are grouped into four categories mapping to key long-term Java technology projects and hardware support.

Project Amber



JEP 405: Record Patterns (Preview)

Improves developer productivity by extending pattern matching to express more sophisticated, composable data queries. This is done by enhancing the Java programming language with record patterns to deconstruct record values. Record patterns and type patterns can be nested to enable a powerful, declarative, and composable form of data navigation and processing.

JEP 405 relates to:

- [JDK 19] JEP 427: Pattern Matching for switch (Third Preview)
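As a minimal sketch of what record patterns deconstruct, consider the following. The class and record names here are illustrative; the feature requires --enable-preview on JDK 19 and is shown in the syntax finalized in Java 21.

```java
// Nested record patterns pull components out of a record value declaratively.
public class RecordPatterns {
    record Point(int x, int y) {}
    record Line(Point start, Point end) {}

    // Matches a Line and deconstructs both endpoints in a single pattern.
    static int horizontalLength(Object obj) {
        if (obj instanceof Line(Point(var x1, var y1), Point(var x2, var y2))) {
            return Math.abs(x2 - x1);
        }
        return 0;
    }

    public static void main(String[] args) {
        Line l = new Line(new Point(1, 2), new Point(5, 2));
        System.out.println(horizontalLength(l)); // prints 4
    }
}
```

Without record patterns, the same navigation would need a chain of instanceof checks and accessor calls; the nested pattern expresses it as one declarative test.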


JEP 427: Pattern Matching for switch (Third Preview)

Improves developer productivity by making switch more expressive. This is done by enhancing the Java programming language with pattern matching for switch expressions and statements. Extending pattern matching to switch allows an expression to be tested against a number of patterns, each with a specific action, so that complex data-oriented queries can be expressed concisely and safely.

JEP 427 relates to:

- [JDK 17] JEP 406: Pattern Matching for switch (Preview)
- [JDK 18] JEP 420: Pattern Matching for switch (Second Preview)
- [JDK 19] JEP 405: Record Patterns (Preview)
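A small sketch of the idea, using illustrative type names (preview under --enable-preview on JDK 19; the syntax shown is the one finalized in Java 21):

```java
// Each case tests the selector against a type pattern; the compiler checks
// exhaustiveness over the sealed hierarchy, so no default branch is needed.
public class SwitchPatterns {
    sealed interface Shape permits Circle, Square {}
    record Circle(double radius) implements Shape {}
    record Square(double side) implements Shape {}

    static double area(Shape shape) {
        return switch (shape) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Square s -> s.side() * s.side();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(3))); // prints 9.0
    }
}
```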
 

Project Panama



JEP 424: Foreign Function & Memory API (Preview)

The Foreign Function & Memory API offers value in four unique ways:

Ease of use — Replaces the Java Native Interface (JNI) with a superior, pure-Java development model.

Performance — Provides performance that is comparable to, if not better than, existing APIs such as JNI and sun.misc.Unsafe.

Generality — Provides ways to operate on different kinds of foreign memory (e.g., native memory, persistent memory, and managed heap memory) and, over time, to accommodate other platforms (e.g., 32-bit x86) and foreign functions written in languages other than C (e.g., C++, Fortran).

Safety — Allows programs to perform unsafe operations on foreign memory but warn users about such operations by default.

The Foreign Function & Memory API introduces an API by which Java programs can interoperate with code and data outside of the Java runtime. By efficiently invoking foreign functions (i.e., code outside the JVM), and by safely accessing foreign memory (i.e., memory not managed by the JVM), the API enables Java programs to call native libraries and process native data without the brittleness and danger of JNI.

JEP 424 relates to:

- [JDK 18] JEP 419: Foreign Function & Memory API (Second Incubator)
- [JDK 17] JEP 412: Foreign Function & Memory API (Incubator)
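As a concrete sketch of calling a native library, the following invokes the C standard library’s strlen. Note this uses the java.lang.foreign form finalized in Java 22; in JDK 19 the API is still a preview with slightly different names, so treat this as illustrative rather than JDK 19 syntax.

```java
import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;

// Downcall into the C library's strlen without writing any JNI glue code.
public class StrlenDemo {
    static long strlen(String s) {
        try {
            Linker linker = Linker.nativeLinker();
            MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
            // The confined arena bounds the lifetime of the off-heap C string.
            try (Arena arena = Arena.ofConfined()) {
                MemorySegment cString = arena.allocateFrom(s);
                return (long) strlen.invokeExact(cString);
            }
        } catch (Throwable t) {
            throw new RuntimeException(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(strlen("Hello")); // prints 5
    }
}
```

When the arena closes, the native memory is freed deterministically, which is the safety story the JEP describes: unsafe operations are possible, but their scope is explicit.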


JEP 426: Vector API (Fourth Incubator)

Improves performance by introducing an API to express vector computations that reliably compile at runtime to optimal vector instructions on supported CPU architectures, thus achieving performance superior to equivalent scalar computations. Vector APIs were incubated in JDK 16, 17, and 18. JDK 19 incorporates feedback from users of those releases as well as performance improvements and implementation enhancements. It uses some of the Foreign Function and Memory APIs, which are now mature enough to create this dependency.

JEP 426 relates to:

- [JDK 16] JEP 338: Vector API (Incubator)
- [JDK 17] JEP 414: Vector API (Second Incubator)
- [JDK 18] JEP 417: Vector API (Third Incubator)
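A sketch of the shape such code takes (the class name is illustrative; since the API is still incubating, compile and run with --add-modules jdk.incubator.vector):

```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

// Element-wise multiplication expressed with the Vector API.
public class VectorMul {
    // Widest vector shape the current CPU supports.
    static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    static void multiply(float[] a, float[] b, float[] c) {
        int i = 0;
        int upper = SPECIES.loopBound(a.length);
        // Main loop: each iteration compiles to a single hardware vector
        // instruction on supported architectures.
        for (; i < upper; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            va.mul(vb).intoArray(c, i);
        }
        // Scalar tail for the elements that don't fill a whole vector.
        for (; i < a.length; i++) {
            c[i] = a[i] * b[i];
        }
    }
}
```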

Project Loom



JEP 425: Virtual Threads (Preview)

Virtual Threads are lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications.

Virtual Threads is the first JEP delivered by Project Loom. Project Loom upgrades the Java concurrency model to meet the needs of today’s high-scale server applications.

There are a lot of great things about Java’s threads. They offer a natural programming model, with readable, sequential code using control flow operators that users understand – loops, conditionals, exceptions. Users get great debugging and serviceability, and readable stack traces. And threads are natural units of scheduling for OSes. We want to retain these advantages. 

The problem is that the implementation of threads by the OS is too heavyweight. It takes too long to start a thread for each connection, but worse, the number of threads the OS can support at any one time limits the number of concurrent transactions a server can handle — to well below the capacity of the hardware or the network — and so threads become a severe constraining factor on server throughput.

Many people assumed we would embrace the asynchronous programming style offered by so-called “reactive” frameworks. By not representing concurrent operations directly as threads, that style does scale beyond the limits posed by the thread-per-request model, but at a huge cost: much more complicated code that is harder to write, harder to read, and much harder to debug or profile, because the platform, in all its layers and tools, is built around threads. Reactive may be the best people can do with the current JVM, but our goal is to do better, which we can do by making threads lighter and more scalable, letting developers keep using the model and tooling they’ve been using successfully for years.

Developers today have three bad options: waste hardware through underutilization, waste programmer effort with worse programming models and observability, or switch away from Java. So, Project Loom offers developers a better option.

Project Loom timeline:

- Late 2017: Work on Loom begins
- Jul 2019: EA build of Fiber prototype released for feedback
- Sept 2019: JEP 353 (Reimplement Legacy Socket API) shipped in JDK 13
- Sept 2020: JEP 373 (Reimplement Legacy DatagramSocket API) shipped in JDK 15
- Nov 2021: Early Access builds of structured concurrency support released for feedback
- Nov 2021: Draft JEPs for virtual threads and for structured concurrency published for comment
- Mar 2022: JEP 418 (Internet Address Resolution SPI) shipped in JDK 18
- Sep 2022: Preview in JDK 19
 

JEP 428: Structured Concurrency (Incubator)

Structured Concurrency simplifies multithreaded programming by introducing an API for structured concurrency. It treats multiple tasks running in different threads as a single unit of work, thereby streamlining error handling and cancellation, improving reliability, and enhancing observability.

New Port



JEP 422: Linux/RISC-V Port

Ports the JDK to the Linux/RISC-V architecture. RISC-V is a free and open-source RISC instruction set architecture (ISA), originally designed at the University of California, Berkeley, and now developed collaboratively under the sponsorship of RISC-V International. It is already supported by a wide range of language toolchains. With the increasing availability of RISC-V hardware, a port of the JDK offers value to developers.

Source: oracle.com

Monday, May 23, 2022

Coming to Java 19: Virtual threads and platform threads

Operating systems can’t increase the efficiency of platform threads, but the JDK will make better use of them by severing the one-to-one relationship between its threads and OS threads.

Now that Project Loom’s JEP 425 officially previews virtual threads for Java 19, it’s time to take a close look at them: scheduling and memory management; mounting, unmounting, capturing, and pinning; observability; and what you can do for optimal scalability.


Before I go into virtual threads, I need to revisit classic threads or, as I will call them from here on, platform threads.

The JDK implements platform threads as thin wrappers around operating system (OS) threads, which are costly, so you cannot have too many of them. In fact, the number of threads often becomes the limiting factor long before other resources, such as CPU or network connections, are exhausted.

In other words, platform threads often cap an application’s throughput to a level well below what the hardware could support.

That’s where virtual threads come in.

Virtual threads

Operating systems can’t increase the efficiency of the platform threads, but the JDK can make better use of them by severing the one-to-one relationship between its threads and OS threads.

A virtual thread is an instance of java.lang.Thread that requires an OS thread to do CPU work—but doesn’t hold the OS thread while waiting for other resources. You see, when code running in a virtual thread calls a blocking I/O operation in the JDK API, the runtime performs a nonblocking OS call and automatically suspends the virtual thread until the operation finishes.

During that time, other virtual threads can perform calculations on that OS thread, so they’re effectively sharing it.

Critically, Java’s virtual threads incur minimal overhead, so there can be many, many, many of them.

So just as operating systems give the illusion of plentiful memory by mapping a large virtual address space to a limited amount of physical RAM, the JDK gives the illusion of plentiful threads by mapping many virtual threads to a small number of OS threads.

And just as programs barely ever care about virtual versus physical memory, rarely does concurrent Java code have to care whether it runs in a virtual thread or a platform thread.

You can focus on writing straightforward, potentially blocking code—the runtime takes care of sharing the available OS threads to reduce the cost of blocking to near zero.

Virtual threads support thread-local variables, synchronized blocks, and thread interruption; therefore, code working with Thread and currentThread won’t have to change. Indeed, this means that existing Java code will easily run in a virtual thread without any changes or even recompilation!

Once server frameworks offer the option to start a new virtual thread for every incoming request, all you need to do is update the framework and JDK and flip the switch.
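A minimal sketch of that thread-per-task model (Executors.newVirtualThreadPerTaskExecutor() is a preview API in JDK 19 behind --enable-preview and final in Java 21; the class and method names here are illustrative):

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Runs n blocking tasks, each on its own virtual thread.
// Every sleep unmounts its virtual thread, so all the waits overlap
// on a small number of OS threads.
public class VirtualThreadsDemo {
    static int runTasks(int n) {
        AtomicInteger done = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    Thread.sleep(Duration.ofMillis(10)); // unmounts; carrier is freed
                    return done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(10_000)); // 10,000 concurrent sleeps, cheaply
    }
}
```

Note what is absent: no pool sizing, no callbacks, no reactive pipeline; the code is plain blocking code, and the runtime makes it scale.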

Speed, scale, and structure

It’s important to understand what virtual threads are for—and what they are not for.

Never forget that virtual threads aren’t faster threads. Virtual threads don’t magically execute more instructions per second than platform threads do.

What virtual threads are really good for is waiting.

Because virtual threads don’t require or block an OS thread, potentially millions of virtual threads can wait patiently for requests to the file system, databases, or web services to finish.

By maximizing the utilization of external resources, virtual threads provide larger scale, not more speed. In other words, they improve throughput.

Beyond hard numbers, virtual threads can also improve code quality.

Their cheapness opens the door to a fairly new concurrent programming paradigm called structured concurrency, which I covered in Inside Java Newscast #17.

It’s now time to explain how virtual threads work.

Scheduling and memory

The operating system schedules OS threads, and thus platform threads, but virtual threads are scheduled by the JDK. The JDK does so indirectly by assigning virtual threads to platform threads in a process called mounting. The JDK unassigns the platform threads later; this is called unmounting.

The platform thread running a virtual thread is called its carrier thread, and from the perspective of Java code, the fact that a virtual thread and its carrier temporarily share an OS thread is invisible. For example, stack traces and thread-local variables are fully separated.

Carrier threads are then left to the OS to schedule as usual; as far as the OS is concerned, the carrier thread is simply a platform thread.

To implement this process, the JDK uses a dedicated ForkJoinPool in first-in-first-out (FIFO) mode as a virtual thread scheduler. (Note: This is distinct from the common pool used by parallel streams.)

By default, the JDK’s scheduler uses as many platform threads as there are available processor cores, but that behavior can be tuned with a system property.

Where do the stack frames of unmounted virtual threads go? They are stored on the heap as stack chunk objects.

Some virtual threads will have deep call stacks (such as a request handler called from a web framework), but those spawned by them will usually be much shallower (such as a method that reads from a file).

The JDK could mount a virtual thread by copying all of its frames from the heap to the stack, but most frames are rarely needed right away. Instead, when the virtual thread is unmounted, most frames are left on the heap and copied lazily as needed.

Thus, stacks grow and shrink as the application runs. This is crucial to making virtual threads cheap enough to have so many and to frequently switch between them. And there’s a good chance that future work can further reduce memory requirements.

Blocking and unmounting

Typically, a virtual thread will unmount when it blocks on I/O (for example, when it reads from a socket) or when it calls other blocking operations in the JDK (such as take on a BlockingQueue).

When the blocking operation is ready to finish (the socket received the bytes or the queue can hand out an element), the operation submits the virtual thread back to the scheduler, which will, in FIFO order, eventually mount it to resume execution.

However, despite prior work in JDK Enhancement Proposals such as JEP 353 (Reimplement the legacy Socket API) and JEP 373 (Reimplement the legacy DatagramSocket API), not all blocking operations in the JDK unmount the virtual thread. Instead, some capture the carrier thread and the underlying OS platform thread, thus blocking both.

This unfortunate behavior can be caused by limitations at the OS level (which affects many file system operations) or at the JDK level (such as with the Object.wait() call).

The capture of an OS thread is compensated by temporarily adding a platform thread to the scheduler, which can hence occasionally exceed the number of available processors; a maximum can be specified with a system property.

Unfortunately, there’s one more imperfection in the initial virtual thread proposal: When a virtual thread executes a native method or a foreign function or it executes code inside a synchronized block or method, the virtual thread will be pinned to its carrier thread. A pinned thread will not unmount in situations where it otherwise would.

No platform thread is added to the scheduler in this situation, though. There are, however, a few things you can do to minimize the impact of pinning; more on that in a minute.

That means capturing operations and pinned threads will reintroduce platform threads that are waiting for something to finish. This doesn’t make an application incorrect, but it might hinder its scalability.

Fortunately, future work may make synchronization nonpinning. And, refactoring internals of the java.io package and implementing OS-level APIs such as io_uring on Linux may reduce the number of capturing operations.

Virtual thread observability

Virtual threads are fully integrated with existing tools used to observe, analyze, troubleshoot, and optimize Java applications. For example, the Java Flight Recorder (JFR) can emit events when a virtual thread starts or ends, didn’t start for some reason, or blocks while being pinned.

To see the latter situation more prominently, you can configure the runtime, via a system property, to print a stack trace when a thread blocks while pinned. The stack trace highlights stack frames that cause the pinning.

And because virtual threads are simply threads, debuggers can step through them just as they can through platform threads. Of course, those debugger user interfaces might need to be updated to deal with millions of threads, or you’ll get some very tiny scroll bars!

Virtual threads naturally organize themselves in a hierarchy. That behavior and their sheer number make the flat format of traditional thread dumps unsuitable, so traditional dumps will stick to listing only platform threads. A new kind of thread dump in jcmd will present virtual threads alongside platform threads, all grouped in a meaningful way, in both plain text and JSON.

Three pieces of practical advice

The first item on my list: Don’t pool virtual threads!

Pooling makes sense only for expensive resources and virtual threads aren’t expensive. Instead, create new virtual threads whenever you need to do stuff concurrently. You might use thread pools to limit access to certain resources, such as requests to a database. Don’t. Instead, use semaphores to make sure only a specified number of threads are accessing that resource, as follows:

// WITH THREAD POOL
private static final ExecutorService
  DB_POOL = Executors.newFixedThreadPool(16);

public <T> Future<T> queryDatabase(
    Callable<T> query) {
  // pool limits to 16 concurrent queries
  return DB_POOL.submit(query);
}

// WITH SEMAPHORE
private static final Semaphore
  DB_SEMAPHORE = new Semaphore(16);

public <T> T queryDatabase(
    Callable<T> query) throws Exception {
  // semaphore limits to 16 concurrent queries
  DB_SEMAPHORE.acquire();
  try {
    return query.call();
  } finally {
    DB_SEMAPHORE.release();
  }
}

Next, for good scalability with virtual threads, avoid frequent and long-lived pinning by revising synchronized blocks and methods that run often and contain I/O operations, particularly long-running ones. In this case, a good alternative to synchronization is a ReentrantLock, as follows:

// with synchronization (pinning 👎🏾):
// synchronized guarantees sequential access
public synchronized String accessResource() {
  return access();
}

// with ReentrantLock (not pinning 👍🏾):
private static final ReentrantLock
  LOCK = new ReentrantLock();

public String accessResource() {
  // lock guarantees sequential access
  LOCK.lock();
  try {
    return access();
  } finally {
    LOCK.unlock();
  }
}

Finally, another aspect that works correctly in virtual threads but deserves revisiting for better scalability is thread-local variables, both regular and inheritable. Virtual threads support thread-local behavior the same way as platform threads, but because virtual threads can be very numerous, thread locals should be used only after careful consideration.
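To see why the footprint matters, note that every thread, virtual or not, gets its own copy of a thread-local value. A small sketch (Thread.ofVirtual() is preview in JDK 19, final in Java 21; the names are illustrative):

```java
import java.util.concurrent.atomic.AtomicReference;

// Each virtual thread holds its own copy of a ThreadLocal, so with millions
// of threads the per-thread state multiplies accordingly.
public class ThreadLocalDemo {
    static final ThreadLocal<String> USER = new ThreadLocal<>();

    // Sets the thread local inside a fresh virtual thread and reads it back there.
    static String readInsideVirtualThread() throws InterruptedException {
        AtomicReference<String> seen = new AtomicReference<>();
        Thread vt = Thread.ofVirtual().start(() -> {
            USER.set("alice");   // this copy belongs to the virtual thread only
            seen.set(USER.get());
        });
        vt.join();
        return seen.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(readInsideVirtualThread()); // prints alice
        System.out.println(USER.get());                // prints null: main thread's own copy
    }
}
```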

In fact, as part of Project Loom, many uses of thread locals in the java.base module were removed to reduce the memory footprint when code is running with millions of threads. There is an interesting alternative for some use cases that is currently being explored in a draft JEP for scope-local variables.

Source: oracle.com