December 05, 2025
One Positive Effect of Java 25 with Compact Object Headers Enabled
by Donald Raab at December 05, 2025 04:48 AM
Measuring the memory cost of ArrayList and Eclipse Collections FastList
Donald Walker left a comment on my “What if Java didn’t have modCount?” blog that got me thinking.
Some thoughts: the real overhead of modCount may be that it is sitting there eating 4 bytes (minimum…) on each instance of Java fundamental collection classes.
I’ve thought for many years, apparently incorrectly, that java.util.ArrayList required 8 bytes more than Eclipse Collections FastList because of the 4-byte int field named modCount. After reading the comment above, I wrote some code in Java 25 and proved myself wrong. Then I changed a JVM option, ran the code again, and proved myself right. Thank you Project Lilliput and JEP 519!
I’m wondering if my faulty memory of the 8-byte savings is from the days of 32-bit Java, or of 64-bit Java before compressed oops. I didn’t blog back then, so I can’t find out. Thankfully, I am writing this down now so I don’t forget.
What is the memory cost of modCount?
An int is 4 bytes, so modCount should cost 4 bytes. With 8-byte alignment, though, those 4 bytes will cost you 8 if they leave 4 bytes of padding behind.
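To make the arithmetic concrete, here is a tiny sketch of my own (the 8-byte default is an assumption; the JVM’s object alignment can be changed with -XX:ObjectAlignmentInBytes):
// Round an instance size up to the next multiple of 8 (the default object alignment)
static long alignUp(long bytes)
{
    return (bytes + 7) & ~7L;
}
// alignUp(20) == 24, so a 4-byte field that pushes an instance past
// an 8-byte boundary effectively costs 8 bytes once padding is included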
Memory Layout of ArrayList and FastList
I ran a test using Java 25, Eclipse Collections 13.0, and Java Object Layout (JOL) 0.17, which is an OpenJDK project.
<dependency>
<groupId>org.eclipse.collections</groupId>
<artifactId>eclipse-collections</artifactId>
<version>13.0.0</version>
</dependency>
<dependency>
<groupId>org.openjdk.jol</groupId>
<artifactId>jol-core</artifactId>
<version>0.17</version>
</dependency>
The following is the code I ran with vanilla Java 25, measuring the memory layout and cost of an empty ArrayList and an empty FastList.
@Test
public void emptyArrayListVsFastList()
{
this.outputMemory(new ArrayList<>());
this.outputMemory(new FastList<>());
}
private void outputMemory(Object instance)
{
System.out.println(ClassLayout.parseInstance(instance).toPrintable());
System.out.println(GraphLayout.parseInstance(instance).toFootprint());
}
The output is as follows:
java.util.ArrayList object internals:
OFF SZ TYPE DESCRIPTION VALUE
0 8 (object header: mark) 0x0000000000000001 (non-biasable; age: 0)
8 4 (object header: class) 0x00218308
12 4 int AbstractList.modCount 0
16 4 int ArrayList.size 0
20 4 java.lang.Object[] ArrayList.elementData []
Instance size: 24 bytes
Space losses: 0 bytes internal + 0 bytes external = 0 bytes total
java.util.ArrayList@4bd31064d footprint:
COUNT AVG SUM DESCRIPTION
1 16 16 [Ljava.lang.Object;
1 24 24 java.util.ArrayList
2 40 (total)
org.eclipse.collections.impl.list.mutable.FastList object internals:
OFF SZ TYPE DESCRIPTION VALUE
0 8 (object header: mark) 0x0000000000000001 (non-biasable; age: 0)
8 4 (object header: class) 0x011a8000
12 4 int FastList.size 0
16 4 java.lang.Object[] FastList.items []
20 4 (object alignment gap)
Instance size: 24 bytes
Space losses: 0 bytes internal + 4 bytes external = 4 bytes total
org.eclipse.collections.impl.list.mutable.FastList@7cbd9d24d footprint:
COUNT AVG SUM DESCRIPTION
1 16 16 [Ljava.lang.Object;
1 24 24 org.eclipse.collections.impl.list.mutable.FastList
2 40 (total)
The surprise here for me is that, for some reason, I was mistakenly expecting FastList to be 8 bytes smaller than ArrayList. As JOL reports, there is a 4-byte “object alignment gap” in FastList where modCount would be if FastList extended AbstractList, which it does not.
Enabling Compact Object Headers
I’ll add the following JVM option with Java 25 to enable Compact Object Headers and run again.
-XX:+UseCompactObjectHeaders
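As an aside (my own addition, not part of the original run): JOL can confirm which header mode is active by printing the VM details it detects. VM.current() is in org.openjdk.jol.vm; the exact output format varies by JOL version.
// Prints JVM details as seen by JOL, including object header size and mode
System.out.println(VM.current().details());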
Now the output is as follows:
java.util.ArrayList object internals:
OFF SZ TYPE DESCRIPTION VALUE
0 8 (object header: mark) 0x0023c40000000001 (Lilliput)
8 4 int AbstractList.modCount 0
12 4 int ArrayList.size 0
16 4 java.lang.Object[] ArrayList.elementData []
20 4 (object alignment gap)
Instance size: 24 bytes
Space losses: 0 bytes internal + 4 bytes external = 4 bytes total
java.util.ArrayList@7354b8c5d footprint:
COUNT AVG SUM DESCRIPTION
1 16 16 [Ljava.lang.Object;
1 24 24 java.util.ArrayList
2 40 (total)
org.eclipse.collections.impl.list.mutable.FastList object internals:
OFF SZ TYPE DESCRIPTION VALUE
0 8 (object header: mark) 0x012c800000000001 (Lilliput)
8 4 int FastList.size 0
12 4 java.lang.Object[] FastList.items []
Instance size: 16 bytes
Space losses: 0 bytes internal + 0 bytes external = 0 bytes total
org.eclipse.collections.impl.list.mutable.FastList@10f7f7ded footprint:
COUNT AVG SUM DESCRIPTION
1 16 16 [Ljava.lang.Object;
1 16 16 org.eclipse.collections.impl.list.mutable.FastList
2 32 (total)
With Compact Object Headers (COH) enabled in Java 25, I finally see the 8-byte difference between ArrayList and FastList that I have been mistakenly expecting all these years. Yay!
Just to confirm, I will pre-size the lists with an initial capacity of 10, and expect to see the same 8-byte savings with COH enabled.
@Test
public void presizedArrayListVsFastList()
{
this.outputMemory(new ArrayList<>(10));
this.outputMemory(new FastList<>(10));
}
This is the output. Still an 8-byte difference. Woo hoo!
java.util.ArrayList object internals:
OFF SZ TYPE DESCRIPTION VALUE
0 8 (object header: mark) 0x0023c40000000001 (Lilliput)
8 4 int AbstractList.modCount 0
12 4 int ArrayList.size 0
16 4 java.lang.Object[] ArrayList.elementData [null, null, null, null, null, null, null, null, null, null]
20 4 (object alignment gap)
Instance size: 24 bytes
Space losses: 0 bytes internal + 4 bytes external = 4 bytes total
java.util.ArrayList@7354b8c5d footprint:
COUNT AVG SUM DESCRIPTION
1 56 56 [Ljava.lang.Object;
1 24 24 java.util.ArrayList
2 80 (total)
org.eclipse.collections.impl.list.mutable.FastList object internals:
OFF SZ TYPE DESCRIPTION VALUE
0 8 (object header: mark) 0x012c800000000001 (Lilliput)
8 4 int FastList.size 0
12 4 java.lang.Object[] FastList.items [null, null, null, null, null, null, null, null, null, null]
Instance size: 16 bytes
Space losses: 0 bytes internal + 0 bytes external = 0 bytes total
org.eclipse.collections.impl.list.mutable.FastList@10f7f7ded footprint:
COUNT AVG SUM DESCRIPTION
1 56 56 [Ljava.lang.Object;
1 16 16 org.eclipse.collections.impl.list.mutable.FastList
2 72 (total)
Lessons Learned
When you learn something, and test something, write it down and share it somewhere it can be recalled, like a blog. Memory fades and fails. Something that may or may not have been true 15-20 years ago should be validated and confirmed with tests. Things change. We’ve come a long way since the days of 32-bit Java, and even 64-bit Java before compressed oops.
I’m happy to see and confirm that the 8-byte savings of not having modCount in FastList arrives with Java 25 and Compact Object Headers enabled. Now I’ve written it down.
Thanks for reading!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
December 04, 2025
2025 Open Source Congress Report
by Jacob Harris at December 04, 2025 10:00 AM
Uncover the growing challenges facing open source today, from rising regulatory demands to the complexities of operating as a global, mature ecosystem.
AI Coding Training for Teams
by Jonas, Maximilian & Philip at December 04, 2025 12:00 AM
Since releasing our AI Coding Training, we’ve received overwhelmingly positive feedback from participants. Shortly after launching the training for individual developers, engineering managers and tech …
The post AI Coding Training for Teams appeared first on EclipseSource.
December 02, 2025
The Eclipse Dataspace Working Group (EDWG) advances two open protocols toward global ISO/IEC standardisation
by Anonymous at December 02, 2025 10:45 AM
BRUSSELS – 2 December 2025 – The Eclipse Foundation, one of the world’s largest open source software foundations, together with the Eclipse Dataspace Working Group (EDWG), today announced the release of two key protocol specifications that will be submitted for international standardisation through the ISO/IEC JTC1 Publicly Available Specification (PAS) process.
These new protocols mark a significant step forward in enabling open, interoperable, and sovereign dataspaces that allow organisations, industries, and nations to share data securely while maintaining control of their information.
The two specifications are:
- Eclipse Dataspace Protocol, which defines interoperable data sharing between entities based on modern web technologies and governed by usage control. Access the released version here.
- Eclipse Dataspace Decentralised Claims Protocol, which defines an overlay protocol for trust and credential verification. It allows for multiple credential issuers and eliminates the need for third-party verification. Access the released version here.
“Open source is at the heart of digital sovereignty,” said Mike Milinkovich, executive director of the Eclipse Foundation. “By aligning open standards with community-driven innovation, we give organisations the ability to retain control over their data and infrastructure. These new protocols, and their path toward international standardisation, demonstrate how open collaboration strengthens trust and interoperability across the global data economy.”
Advancing digital sovereignty through open standards
Dataspaces are increasingly recognized as a cornerstone of digital sovereignty, helping nations, industries and organisations ensure that data remains under the control of its rightful owners. This is essential for protecting privacy, fostering fair competition, and maintaining national and regional autonomy. Digital sovereignty through dataspaces also enables the creation of frameworks that promote trusted, transparent, and equitable data-sharing practices.
The two new protocols are the foundation of an open dataspace protocol stack that will enable interoperability between sovereign dataspaces and ensure successful data sharing. The Eclipse Dataspace protocol serves as the base layer for technical interoperability across dataspace catalogues, contracts, and data planes, while the Decentralized Claims protocol adds decentralised trust, credential verification, and cross-ecosystem interoperability.
Together, these protocols enable a unified stack of open standards for secure, sovereign data sharing. In parallel, the EDWG is developing additional specifications to support Dataspace Trust frameworks that will further expand these capabilities in 2026.
Bridging open source innovation and global standards
As Europe’s largest open source organisation with a global community, the Eclipse Foundation is well positioned to support digital sovereignty by connecting European priorities with international open source and standardisation efforts. Its neutral governance model, proven legal framework, and extensive network of industry and academic partners, together with its designation as an ISO/IEC JTC 1 PAS Submitter, enable the Foundation to link open source innovation with formal global standardisation.
Moreover, the Eclipse Foundation is deeply involved in the activities of the technical committee of the global standards organization ISO/IEC JTC1 SC38 on “Cloud Computing and Distributed Platforms.” Specifically, the Eclipse Foundation is working closely with this organisation within WG6 (“Data, interoperability and portability”), where EDWG activities are being translated into various standardisation initiatives to complement the aforementioned PAS transposition of protocols. This includes active collaboration on the current Draft International Standard ISO/IEC 20151 (“Dataspace concepts and characteristics”), which incorporates the broader vision for a Dataspace interoperability protocol stack that complements and supports EDWG protocols.
“The collaboration between the open source community and global standardisation organisations is crucial. We are pleased to see this partnership crystallising through ISO/IEC JTC1 SC38 activities, and we welcome the PAS transposition of EDWG standards as an important step in the right direction,” says Anish Karmarkar, ISO/IEC SC38 chair.
The Eclipse Foundation also serves as a liaison with CEN/CENELEC JTC25 (Data management, Dataspaces, Cloud and Edge) and contributes to the European Commission’s efforts to establish harmonised standards supporting the EU Data Act and related digital policies. These collaborations ensure that open source technologies remain central to building trusted, transparent, and interoperable data ecosystems across Europe and beyond.
“The Eclipse Dataspace Working Group’s progress shows how open collaboration can produce technologies that are both practical and globally relevant,” said Javier Valiño, Program Manager for Dataspaces at the Eclipse Foundation. “We are helping organisations worldwide build trustworthy dataspaces and advance secure, sovereign data sharing across industries and borders.”
About the Eclipse Dataspace Working Group
The Eclipse Dataspace Working Group (EDWG) provides an open, collaborative forum for individuals and organisations to develop the open source software, specifications, and governance models needed to build scalable, standards-based dataspaces. The group’s mission is to advance interoperability, trust, and sovereignty across data ecosystems through open innovation and community-driven development.
The EDWG actively contributes to standards creation, implementation, and the integration of existing open source projects that support the growth of a global ecosystem of interoperable dataspaces.
Whether you represent an enterprise, technology vendor, cloud provider, academic institution, or public sector organisation, the EDWG offers a unique opportunity to shape the technologies and standards that will define the future of trusted data sharing. Membership provides a seat at the table in a growing global community, with opportunities for collaboration, visibility, and leadership in one of the fastest-evolving areas of digital innovation.
Learn more and get involved at https://dataspace.eclipse.org. Your participation can help shape the future of dataspaces worldwide.
About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organisations with a business-friendly environment for open source software collaboration and innovation. We host the Eclipse IDE, Adoptium, Software Defined Vehicle, Jakarta EE, Open VSX, and over 400 open source projects, including runtimes, tools, registries, specifications, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, open processor designs, and many others. Headquartered in Brussels, Belgium, the Eclipse Foundation is an international non-profit association supported by over 300 members. To learn more, follow us on social media @EclipseFdn, LinkedIn, or visit eclipse.org.
Third-party trademarks mentioned are the property of their respective owners.
###
Media contacts:
Schwartz Public Relations (Germany)
Julia Rauch/Marita Bäumer
Sendlinger Straße 42A
80331 Munich
+49 (89) 211 871 -70/ -62
514 Media Ltd (France, Italy, Spain)
Benoit Simoneau
M: +44 (0) 7891 920 370
Nichols Communications (Global Press Contact)
Jay Nichols
+1 408-772-1551
A Native IDE for Claude Code - Deeply Integrating AI Agents with Theia
by Jonas, Maximilian & Philip at December 02, 2025 12:00 AM
We’re excited to share another highlight from TheiaCon 2025: a deep dive into how we transformed Claude Code from a terminal application into a truly native IDE experience within Eclipse Theia. This …
The post A Native IDE for Claude Code - Deeply Integrating AI Agents with Theia appeared first on EclipseSource.
December 01, 2025
LaTeX listings: Eclipse colors
by Lorenzo Bettini at December 01, 2025 11:58 AM
November 30, 2025
What if a Developer can Help Improve Software Development in Java?
by Donald Raab at November 30, 2025 08:26 PM
My journey to help improve the Java ecosystem continues.
My journey has been greater than the destination. I’ve travelled a lot of miles and made a lot of friends along the way.
Things happen sometimes when you ask “What if?” It’s not enough to just ask the question. You have to invest yourself in exploring the answer and convince others that the thing you are thinking or talking about is important to consider. Sometimes you should write a blog. Sometimes you should write some code. Sometimes you’ll need to engage in open community discussions. Sometimes you should write a book. Regardless, any change to a programming language or library requires a commitment of patience and persistence.
If you believe strongly in your “what if”, then just do it. Even if you have to go it alone at first, the view might be very pleasant when you get there. If and when others see what you see, they may enjoy the view as well, and commit themselves to the cause. It’s ok if no one else sees what you see, or if the value is deemed not to meet the cost, or if it simply is not a priority. Everyone needs an incentive to do something. Sometimes incentives won’t align. If you learned something, then take that as the win and move on. There’s plenty of work to do.
I shared this post originally on LinkedIn. The blogs I referenced are all on Medium, so I am sharing it here as well.
When I decided at the end of 2023 to take time off to travel and to write a book about the open source Eclipse Collections library, I didn’t start by travelling or writing the book. I started by blogging.
In the first fifteen days of 2024, I wrote three “What if Java…” blogs. These blogs created the necessary distance and space for me to think about what I might want to write about in a book about a Java collections library I had created and worked on for twenty years. I wanted to recall some of what motivated me to create Eclipse Collections twenty years earlier. The simple answer was “because Smalltalk”, but there were many more nuanced answers that most Java developers would not immediately appreciate or understand.
I’ve written a couple more “What if Java…” style blogs since writing the first three. If you want to see some of how I see the world of software development, through my former Smalltalk developer lens, then check out these blogs.
📔 What if null was an Object in Java?
📔 What if Java had Symmetric Converter Methods on Collection?
📔 What if Java didn’t have modCount?
If you want to understand why I believed that Java would get lambdas all the way back in 2004, then this is the blog to read. [This was really my first “What if Java got support for lambdas?” blog.]
📔 My ten year quest for concise lambda expressions in Java
Twenty years after starting this quest, I began the journey of writing the book about an improbable open source Java library. While I have written and published the first edition of the book, my journey hasn’t finished yet. I continue to convey the message to all Java developers that there are different, sometimes better, ways to approach solving problems in Java. Java is a great programming language. Developers who program in Java can and should learn a lot from classic programming languages, like Smalltalk.
The following is the story I wrote after completing the book writing portion of my journey. There is an appendix dedicated to Smalltalk, and the whole organization of the book owes much to the idea of message categories I learned from Smalltalk over thirty years ago.
📙 My Twenty-one Year Journey to Write and Publish My First Book
Thanks for reading! 🙏
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
November 29, 2025
Easily running SWT on different Linux Distros
by Jonah Graham at November 29, 2025 03:13 AM
As a follow-up to my earlier article on testing and developing SWT on GTK, I’ve recently run into a set of challenges that require running SWT tests, samples, and even the full IDE across multiple Linux distributions and different GTK minor versions.
Historically, whenever I needed to test Eclipse on another distro, I’d either dual-boot my machine, using a shared data partition to keep setup simple, or, for quick one-off experiments, spin up a VM. Occasionally I’d throw together a Docker image just for a single test run.
But as I found myself switching between environments more frequently, those ad-hoc approaches grew cumbersome. I needed something faster and more repeatable. So I formalized the Docker approach and built a workflow that lets me launch Java applications, including SWT under X11 or Wayland, with just a click or two.
The result is swt-dev-linux (https://github.com/jonahgraham/swt-dev-linux), a GitHub repository where I’m collecting and documenting the scripts that make this workflow straightforward and reliable. If you need to test SWT across different Linux/GTK configurations, I hope it makes your life easier too.
Demo
Here is a screenshot where I have the SWT ControlExample running on 4 different distros simultaneously. It makes it easy to compare and contrast behaviours:

In a process tree this looks like:

The above examples were all running on GTK4 + X11. The next example is running GTK4, one instance on X11 and one on Wayland, on Fedora 43, with my host distro being Ubuntu 25.10:

Recursive SDKs Demo
Here is another screenshot showing (from top left):
- Eclipse SDK setup with my main SWT development environment, launching:
- a child Eclipse SDK running on my Ubuntu docker container, launching:
- a child Eclipse SDK also running on my Ubuntu docker container, launching:
- a hello world Java application

Here is what the process tree looks like for the nested SDKs:

Intrigued?
Come visit my GitHub repo at https://github.com/jonahgraham/swt-dev-linux and try it out and let me know what you think. File an issue or a PR!
November 28, 2025
What if Java didn’t have modCount?
by Donald Raab at November 28, 2025 07:23 PM
Exploring the lesser-known debugging field in mutable Java Collections
All the references that directly read or write the protected modCount variable from AbstractList
What is modCount?
Most developers know what a ConcurrentModificationException is in Java Collections. Developers typically encounter this error when they mutate a collection while iterating over it. I discovered recently that developers rarely look to see what causes this exception to be thrown.
I have been asking developers whether they know about the modCount variable used in various types throughout the Java Collections Framework. So far, the answer has been an overwhelming chorus of “No.” This surprises me, since IntelliJ reports 131 usages of AbstractList.modCount in the JDK. I believe the modCount variable has existed since the Java Collections Framework was introduced in Java 2, all the way back in 1998. The places AbstractList.modCount is referenced are displayed in detail in the mind map above.
How could Java developers today not know what the modCount variable is?
Because the modCount variable is hard to find
The following is the JavaDoc and field definition for modCount in AbstractList. The field is buried two-thirds of the way into the class, on line 630. Note: the modCount field is the only field defined in AbstractList.
The JavaDoc for the modCount variable is on line 603 in AbstractList, and the variable definition is on line 630
I don’t know why a field definition for one of the most referenced fields in the Java Collections Framework is defined on line 630 of AbstractList. I’ve also rarely, if ever, seen a lone field require 25 lines of JavaDoc. If modCount and ConcurrentModificationException are vitally important for the mutable Java Collections, then I would move modCount to the top of the class and display it prominently with blinking warning signs and sirens. The usage of modCount is certainly not well-hidden, though. It is the first line in one of the most used methods in the entire Java Collections framework — ArrayList.add().
The add method on ArrayList increments modCount on each call
There is no comment about modCount in the add method. I guess a developer can always use their IDE to step into where the field is defined to find out why it is incremented here.
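For reference, this is roughly what the pair of add methods looks like in recent JDK sources (paraphrased from memory of OpenJDK; line details vary by version):
public boolean add(E e) {
    modCount++;                 // fail-fast bookkeeping, uncommented in the source
    add(e, elementData, size);  // delegates to a private helper
    return true;
}

private void add(E e, Object[] elementData, int s) {
    if (s == elementData.length)
        elementData = grow();
    elementData[s] = e;
    size = s + 1;
}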
Et tu HashMap?
Unlike AbstractList, HashMap has fields other than modCount, and it groups them together under a “Fields” separator comment. They are still defined a few hundred lines into the source file, but they stand out with the separator.
The JavaDoc for the modCount variable is on line 402 in HashMap, and the variable definition is on line 410
The following is a mind map showing the references to HashMap.modCount.
All the references that directly read or write the protected modCount variable from HashMap
Note: because java.util.HashSet is built using java.util.HashMap, it does not have to define its own modCount field. HashSet gets the fail-fast behavior directly from HashMap.
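As a quick sketch (my own illustration, mirroring the ArrayList example later in this post), that inherited fail-fast behavior is easy to trigger:
@Test
public void howToEncounterAConcurrentModificationExceptionInHashSet()
{
    Set<Integer> integers = new HashSet<>(List.of(1, 2, 3, 4, 5));
    Assertions.assertThrows(
            ConcurrentModificationException.class,
            () -> {
                for (Integer each : integers)
                {
                    if (each % 2 == 0)
                    {
                        // removal bumps HashMap.modCount, so the
                        // next call to Iterator.next() fails fast
                        integers.remove(each);
                    }
                }
            });
}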
Are there other Java Collection types that define modCount?
This is the modCount definition in TreeMap. As we can see, the field is moving up closer to the class definition. The amount of Javadoc is significantly reduced here when compared to the AbstractList and HashMap versions.
The JavaDoc for the modCount variable is on line 142 in TreeMap, and the variable definition is on line 146
Other types that define a modCount field can be found by searching for usages of ConcurrentModificationException. The types I was able to find that define a modCount variable were:
- java.util.Hashtable
- java.util.WeakHashMap
- java.util.PriorityQueue
- java.util.regex.Matcher
- java.util.IdentityHashMap
Are there Java Collection types that do not use modCount?
Some of the Collection types in Java I was able to find that do not use modCount to provide fail-fast iterators were:
- java.util.ArrayDeque
- java.util.CopyOnWriteArrayList
- java.util.CopyOnWriteArraySet
The ArrayDeque class does provide fail-fast iterator behavior, but without using modCount. You can take a look at how ArrayDeque achieves this in the source code.
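As a hedged paraphrase of the idea (my own sketch, not the actual JDK source): the iterator can count down the number of elements it still expects and treat an unexpectedly empty array slot as evidence of concurrent modification. This is best-effort; it catches modifications that corrupt traversal rather than all of them.
// Sketch of counter-free detection, similar in spirit to ArrayDeque's iterator
public T next()
{
    if (this.remaining <= 0)
    {
        throw new NoSuchElementException();
    }
    Object element = this.elements[this.cursor];
    if (element == null)
    {
        // the slot was vacated by a removal that happened mid-iteration
        throw new ConcurrentModificationException();
    }
    this.cursor = (this.cursor + 1) & (this.elements.length - 1);
    this.remaining--;
    return (T) element;
}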
CopyOnWriteArrayList and CopyOnWriteArraySet do not need to use modCount because they are thread-safe collections and return fail-safe iterators. There are other classes with the Concurrent prefix, which suggests to me they would not define modCount, because their iterators should be fail-safe.
Why modCount?
The modCount fields are used in Java to implement fail-fast iterators. A fail-fast iterator is intended to help developers discover bugs that they can introduce when mutating collections while iterating over them. A common case of this happening can be seen in the following code example.
@Test
public void howToEncounterAConcurrentModificationExceptionInArrayList()
{
List<Integer> integers = new ArrayList<>(List.of(1, 2, 3, 4, 5));
Assertions.assertThrows(
ConcurrentModificationException.class,
() -> {
for (Integer each : integers)
{
if (each % 2 == 0)
{
integers.remove(each);
}
}
});
}
In this example, we create and iterate over a List of integers from one to five, and remove all of the even numbers from the list while iterating over it. Using ArrayList, a ConcurrentModificationException is thrown.
There are multiple ways for a developer to fix this bug. The classic way is to use an Iterator explicitly with iterator.remove(). Using an Iterator directly in a loop is slightly more complicated than using an enhanced for loop.
@Test
public void howToAvoidConcurrentModificationExceptionInArrayList1()
{
List<Integer> integers = new ArrayList<>(List.of(1, 2, 3, 4, 5));
for (Iterator<Integer> iterator = integers.iterator(); iterator.hasNext(); )
{
Integer each = iterator.next();
if (each % 2 == 0)
{
iterator.remove();
}
}
Assertions.assertEquals(List.of(1, 3, 5), integers);
}
We could also fix this bug by copying the integer List to a separate result List before iterating over it. We mutate the result List instead.
@Test
public void howToAvoidConcurrentModificationExceptionInArrayList2()
{
List<Integer> integers = new ArrayList<>(List.of(1, 2, 3, 4, 5));
List<Integer> result = new ArrayList<>(integers);
for (Integer each : integers)
{
if (each % 2 == 0)
{
result.remove(each);
}
}
Assertions.assertEquals(List.of(1, 3, 5), result);
}
In more modern versions of Java (Java 8+), we could simply use the Stream filter method with a negated Predicate (see !=).
@Test
public void howToAvoidConcurrentModificationExceptionInArrayList3()
{
List<Integer> integers = new ArrayList<>(List.of(1, 2, 3, 4, 5));
List<Integer> result = integers.stream()
.filter(each -> each % 2 != 0)
.toList();
Assertions.assertEquals(List.of(1, 3, 5), result);
}
We could also use the Collection.removeIf with a Predicate.
@Test
public void howToAvoidConcurrentModificationExceptionInArrayList4()
{
List<Integer> integers = new ArrayList<>(List.of(1, 2, 3, 4, 5));
integers.removeIf(each -> each % 2 == 0);
Assertions.assertEquals(List.of(1, 3, 5), integers);
}
There are other bugs modCount can help find, but the bug above is a classic bug that most developers learn about in their early days of programming in Java. I first learned about this “mutating while iterating” bug in Smalltalk. Smalltalk did not provide fail-fast iterators. Developers got to learn how not to do this by reading the Smalltalk FAQ or stepping through with the debugger when code was not working as expected. I stopped introducing this bug after the first few times I encountered it in Smalltalk.
What if modCount didn’t exist?
How would we protect developers from hurting themselves by mutating and iterating over non-concurrent collections? We could save an incalculable number of increments and comparisons of modCount by plastering the following annotation and corresponding JavaDoc at the top of each non-concurrent Collection in Java.
@NotConcurrent
public class EveryNonConcurrentJavaCollection {
/**
WARNING: DO NOT ITERATE OVER AND MUTATE A NON-CONCURRENT COLLECTION
AT THE SAME TIME!!!
Example:
List<Integer> integers = new ArrayList<>(List.of(1, 2, 3, 4, 5));
// HERE WE ITERATE
for (Integer each : integers)
{
if (each % 2 == 0)
{
// HERE WE MUTATE THE LIST WHILE ITERATING OVER IT.
// HERE BE DRAGONS. DRACARYS!!!
// YOU ARE NOW A PILE OF ASH.
integers.remove(each);
}
}
REPEAT 1,000,000 TIMES: DO NOT WRITE CODE LIKE THIS!!!
**/
}
We are charged a dual complexity and performance tax in ArrayList, HashSet, HashMap, and the other Java Collection types that use modCount in their code paths to protect us from running into common programming problems.
The performance tax of modCount is levied while modifying and iterating over collections. It is very small and not likely to result in a noticeable improvement for most applications, but it may be measurable and noticeable in some JMH micro-benchmarks.
I decided to write some JMH benchmarks, even though I generally do not like comparing and sharing them because they take a lot of time to run and are mostly meaningless at this level when considering and addressing application performance bottlenecks.
The following chart shows the difference between using java.util.ArrayList (has modCount) and Eclipse Collections FastList (does not have modCount) to add 100 elements to a collection and then iterate over it using Collection.forEach, an indexed for loop, and an enhanced for loop. The unit of measure is operations per millisecond, which means bigger is better. Note: all of these are extremely fast, as they are measured in milliseconds.

I was expecting a much less noticeable performance difference. I would have expected more along the lines of perhaps a 10–30% improvement using FastList when compared to ArrayList because of the cost of incrementing modCount. There is either something wrong with my benchmark or something else is going on in the add method of ArrayList that is not happening in FastList. I hope it is my benchmark. There is a delegated method call to another private add method. The following screenshot shows both add methods in ArrayList. The JavaDoc in the private add method is highly suspect.

I think modCount is possibly complicating what should be completely inlined code in a single add method. The performance cost observed in my benchmark can’t possibly be from modCount alone. Perhaps, if there is any benefit to my having spent the time researching modCount, spotting a potential performance issue in the add method of ArrayList will make the time worth it.
I don’t trust my own benchmarks, so here is the source code. Try them yourself on your own hardware. Let me know if you spot any obvious issues. I ran them using Java 25 (Azul Zulu) on my MacBook Pro M2 Max with 12 cores and 96GB of RAM. YMMV.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
import org.eclipse.collections.api.tuple.Pair;
import org.eclipse.collections.api.tuple.primitive.IntObjectPair;
import org.eclipse.collections.impl.Counter;
import org.eclipse.collections.impl.list.mutable.FastList;
import org.eclipse.collections.impl.tuple.Tuples;
import org.eclipse.collections.impl.tuple.primitive.PrimitiveTuples;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
@State(Scope.Thread)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Fork(2)
@Warmup(iterations = 20, time = 2)
@Measurement(iterations = 10, time = 2)
public class ModCountVsNoModCountBenchmark
{
private static final int SIZE = 100;
@Benchmark
public List<String> modCountAdd()
{
List<String> arrayList = new ArrayList<>(SIZE);
for (int i = 0; i < SIZE; i++)
{
arrayList.add("");
}
return arrayList;
}
@Benchmark
public List<String> noModCountAdd()
{
List<String> fastList = new FastList<>(SIZE);
for (int i = 0; i < SIZE; i++)
{
fastList.add("");
}
return fastList;
}
@Benchmark
public Pair<Counter, List<String>> modCountAddForEach()
{
List<String> arrayList = new ArrayList<>(SIZE);
for (int i = 0; i < SIZE; i++)
{
arrayList.add("");
}
Counter counter = new Counter();
arrayList.forEach(each -> counter.add(each.length() + 1));
return Tuples.pair(counter, arrayList);
}
@Benchmark
public Pair<Counter, List<String>> noModCountAddForEach()
{
List<String> fastList = new FastList<>(SIZE);
for (int i = 0; i < SIZE; i++)
{
fastList.add("");
}
Counter counter = new Counter();
fastList.forEach(each -> counter.add(each.length() + 1));
return Tuples.pair(counter, fastList);
}
@Benchmark
public IntObjectPair<List<String>> modCountAddEnhancedFor()
{
List<String> arrayList = new ArrayList<>(SIZE);
for (int i = 0; i < SIZE; i++)
{
arrayList.add("");
}
int counter = 0;
for (String each : arrayList)
{
counter += each.length() + 1;
}
return PrimitiveTuples.pair(counter, arrayList);
}
@Benchmark
public IntObjectPair<List<String>> noModCountAddEnhancedFor()
{
List<String> fastList = new FastList<>(SIZE);
for (int i = 0; i < SIZE; i++)
{
fastList.add("");
}
int counter = 0;
for (String each : fastList)
{
counter += each.length() + 1;
}
return PrimitiveTuples.pair(counter, fastList);
}
@Benchmark
public IntObjectPair<List<String>> modCountAddIndexed()
{
List<String> arrayList = new ArrayList<>(SIZE);
for (int i = 0; i < SIZE; i++)
{
arrayList.add("");
}
int counter = 0;
final int localSize = arrayList.size();
for (int i = 0; i < localSize; i++)
{
String each = arrayList.get(i);
counter += each.length() + 1;
}
return PrimitiveTuples.pair(counter, arrayList);
}
@Benchmark
public IntObjectPair<List<String>> noModCountAddIndexed()
{
List<String> fastList = new FastList<>(SIZE);
for (int i = 0; i < SIZE; i++)
{
fastList.add("");
}
int counter = 0;
final int localSize = fastList.size();
for (int i = 0; i < localSize; i++)
{
String each = fastList.get(i);
counter += each.length() + 1;
}
return PrimitiveTuples.pair(counter, fastList);
}
}
I ran the same benchmarks for 1,000,000-element lists. The unit of measure was increased to operations per second instead of operations per millisecond. In the code above, just change SIZE to 1_000_000 and the TimeUnit to SECONDS; that is, the only two declarations that change are:
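@OutputTimeUnit(TimeUnit.SECONDS)
private static final int SIZE = 1_000_000;
These were the results: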

The problem with modCount
The complexity tax of modCount is much greater than the performance tax, and it is the reason why I decided to write this blog. The code in the Java Collections Framework has to deal with modCount in many methods, as illustrated in the two mind maps above. This code complicates designing, adding, and testing new features in the Java Collections Framework, in addition to levying the always-on performance tax on every usage of modCount in every Java application. The purpose of modCount is to help developers find bugs they introduce by using non-concurrent collections incorrectly. This is a development-time benefit. The modCount variable serves no purpose in production code. It is not monitored or reported on. Its purpose is to prevent common bugs from going undiscovered and winding up in production.
The complexity cost of modCount is passed on to every developer who reads and debugs the code in the Java Collections and Streams framework. The code in Java Collections and Streams should be as simple as possible, and not any simpler. I do not believe it is currently as simple as it should be. The modCount variable is adding a long-term complexity debt to the Java Collections Framework which keeps increasing every time a new behavior is introduced to the standard library that requires incrementing or comparing modCount.
Eclipse Collections benefits from not extending AbstractList, and not using modCount in its non-concurrent collections. Non-concurrent collections should be used with care. Eclipse Collections does not provide fail-fast iterators… it provides fast iterators. Since Eclipse Collections has drop-in replacements for JDK types like ArrayList, HashSet, and HashMap, if you’re certain that you do not have any bugs in the non-concurrent collections code you have written (verified and supported with tests), you should be able to switch to Eclipse Collections and possibly get a minor speedup, with code which may also be easier to read and understand.
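As a rough sketch of what that swap can look like (illustrative only; FastList, UnifiedSet, and UnifiedMap are the Eclipse Collections counterparts, and the factory-style alternatives in the comments are from the same library):
// JDK types (modCount in the code paths)
List<String> jdkList = new ArrayList<>();
Set<String> jdkSet = new HashSet<>();
Map<String, String> jdkMap = new HashMap<>();

// Eclipse Collections drop-in replacements (no modCount)
List<String> ecList = new FastList<>();         // or Lists.mutable.empty()
Set<String> ecSet = new UnifiedSet<>();         // or Sets.mutable.empty()
Map<String, String> ecMap = new UnifiedMap<>(); // or Maps.mutable.empty()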
While writing this blog, I experimented with implementing a FailOnSizeChangeIterator for Eclipse Collections types. I used a slightly less reliable method than modCount that still catches the most common bug found by fail-fast iterators… removing from a collection while iterating over it. I record the Collection.size() before iterating and compare it while iterating.
import java.util.Collection;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.function.Consumer;
public class FailOnSizeChangeIterator<T> implements Iterator<T>
{
private int expectedSize;
private Collection<T> collection;
private Iterator<T> iterator;
public FailOnSizeChangeIterator(Collection<T> collection)
{
this.collection = collection;
this.expectedSize = collection.size();
this.iterator = collection.iterator();
}
private void checkSizeChange()
{
if (expectedSize != collection.size())
{
throw new ConcurrentModificationException();
}
}
@Override
public boolean hasNext()
{
this.checkSizeChange();
return iterator.hasNext();
}
@Override
public T next()
{
this.checkSizeChange();
return iterator.next();
}
@Override
public void remove()
{
this.checkSizeChange();
iterator.remove();
this.expectedSize = collection.size();
}
@Override
public void forEachRemaining(Consumer<? super T> action)
{
this.checkSizeChange();
this.iterator.forEachRemaining(action);
}
}
I wrote a test which fails reliably with a ConcurrentModificationException, using FailOnSizeChangeIterator with a FastList.
@Test
public void howToEncounterAFailFastExceptionInFastList()
{
FastList<Integer> integers = new FastList<>(List.of(1, 2, 3, 4, 5));
Assertions.assertThrows(
ConcurrentModificationException.class,
() -> {
for (Iterator<Integer> it = new FailOnSizeChangeIterator<>(integers);
it.hasNext(); )
{
Integer each = it.next();
if (each % 2 == 0)
{
integers.remove(each);
}
}
});
}
This approach is less reliable than modCount, because modCount can be used to capture changes to a collection that do not impact its size, like sorting. I checked ArrayList, and it does increment modCount on sort, but does not increment modCount on set. It also increments modCount on replaceAll, so it would seem to favor incrementing modCount on bulk behaviors that do not change size.
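To make that limitation concrete, here is a hedged sketch of my own (not from the original post): ArrayList’s modCount-based iterator catches a size-neutral mutation like sort, which a size-based check such as FailOnSizeChangeIterator would miss.
@Test
public void sortWhileIteratingIsCaughtByModCount()
{
    List<Integer> integers = new ArrayList<>(List.of(5, 4, 3, 2, 1));
    Assertions.assertThrows(
            ConcurrentModificationException.class,
            () -> {
                for (Integer each : integers)
                {
                    // sort() increments modCount but leaves size() unchanged,
                    // so a size-based check would not detect this mutation
                    integers.sort(Comparator.naturalOrder());
                }
            });
}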
What’s in this blog for you?
I wrote this blog because I wanted to explore and learn more about modCount. We made the decision long ago not to extend AbstractList in Eclipse Collections, and I wanted to discover if there is something we are missing in Eclipse Collections by not using it. I don’t think there is, but I learned much more than I expected. I did not expect modCount to be so invasive.
As usual, mind maps proved useful at taking something very complex, like understanding the complexity of modCount usage, and putting it in a single diagram. I hope this proves useful for developers who add behavior to the Java Collections Framework in OpenJDK. This is the map of all the places you need to think about not just testing for behavior, but testing for concurrent behavior being applied to non-concurrent collections.
The lesson in here for the reader is that sometimes when we go exploring a tangent like modCount, we learn more than we might expect. I’ve tried to share some of what I learned along the way. I wrote, ran, and ultimately threw away a bunch of JMH benchmarks that I tested. The most compelling benchmark was the one that included add. I thought the results of that were interesting, confusing, and maybe worth looking into. As I noted, the difference may not be directly caused by modCount, but impacted by the complexity of including modCount. The benchmarks that measured the performance impact of using streams and parallel streams with different types that have and don’t have modCount were less compelling.
Recommendation: The best way to get away from the complexity and cost of modCount and have both safe and performant code is to use immutable collections.
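For example (a minimal sketch of my own; ImmutableList and the Lists.immutable factory are from Eclipse Collections), both the JDK and Eclipse Collections offer immutable lists, and neither needs fail-fast machinery because there is nothing to guard against while iterating:
// JDK (Java 9+): unmodifiable list; structural mutation throws UnsupportedOperationException
List<Integer> jdkImmutable = List.of(1, 2, 3, 4, 5);

// Eclipse Collections: a contractually immutable type with no mutating methods
ImmutableList<Integer> ecImmutable = Lists.immutable.of(1, 2, 3, 4, 5);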
If you don’t like the complexity and always-on performance tax of modCount, you can use Eclipse Collections. If you depend regularly on the fail-fast behavior to protect yourself from yourself, you might want to stick with the fail-fast iterator approach in the JDK collections. Switch to the Eclipse Collections types when you are comfortable that you have good tests and have not introduced any bugs.
Thank you for reading!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
November 27, 2025
Next generation AI IDEs - sponsored by Eclipse and Advantest
by Andrey Loskutov ([email protected]) at November 27, 2025 04:06 PM
"Next generation AI" IDEs
AI is the current buzzword; everyone seems to use it, so why not write something about it?
Disclaimer: the following blog is written wearing my "hat" as a proud Advantest team member, from the company that provides both hardware and software for the "AI".
Because of my job as Java/Eclipse platform owner at Advantest, I happened to stumble upon two "AI-based" (and hyped) IDEs recently - Cursor and Google Antigravity (surely there could be more of that stuff).
If you have never heard about Cursor or Antigravity and wonder what they are, let's give them a chance to introduce themselves (just a few selected headlines from the product pages):
Cursor
Cursor is the best way to code with AI
The best way to build software
Develop enduring software at scale
The new way to build software
Antigravity
Experience liftoff with the next-generation IDE
For developers - Achieve new heights
For organizations - Level up your entire team
New Era in AI-Assisted Software Development
So it looks like this should be really cool and surely "cutting edge" stuff, doesn't it?
Quest for Essence
However, looking at the product pictures, it seems that they both are ... "AI-pimped" forks of VS Code (and on second look, they are forks of VS Code).
I will not say much about AI here (my personal "I" is enough for me so far), but as a Java and Eclipse developer, I was curious how different they are compared to "old school" IDEs.
Following our Advantest corporate "quest for essence" mantra, I couldn't resist looking under the hood to search for the "essence" of these IDEs' Java support.
It was a short but amazing journey, and this is a short illustrated report about it.
TL;DR
You can skip reading the entire blog post; here is one picture that should explain everything *:
* If you wonder why everything is based on Advantest, you should know that ~70% of semiconductor tests are done on equipment from Advantest, so you basically can't have any modern electronic device without parts of it having been tested on our testers.
Full Story
(Disclaimer: I've checked Cursor and pure VS Code only, but not Antigravity, because the latter required root rights for installation; however, the findings presented below should be the same for every VS Code-based/forked IDE.)
So let's start Cursor, click away all the popups and suggestions to buy / log in / enable AI, open a simple Java file, and try to debug it.
First surprise: Cursor can't build or run Java by default!
By default, Cursor only provides a basic text editor with syntax highlighting!
Cursor – depends on Microsoft?
To build, run & debug Java code, Cursor recommends installing the VS Code Extension Pack for Java (from Microsoft).
Cursor – depends on Red Hat?
OK, let's install the Extension Pack for Java (from Microsoft)...
But wait, the Extension Pack for Java from Microsoft is based on ... Language Support for Java (from Red Hat).
Cursor – depends on JDT!
OK, let's install Language Support for Java. But wait, it is based on … Eclipse JDT (Java Development Tools)…
Cursor – Eclipse AI inside
Now, I also happen to be an Eclipse Platform and JDT project lead, and I was interested in how much Eclipse or JDT code is actually used by Cursor.
It is not difficult to find out where and what is installed, so a quick check revealed ...
... a lot of Eclipse libraries stored at ~/.cursor/extensions/redhat.java-1.47.0-linux-x64/server/plugins/.
Seeing this, it was obvious to me that it is not just the single ECJ library (Eclipse Compiler for Java, from JDT) being used, but the entire Eclipse platform with a lot of dependencies, and that could only work if Eclipse was started by Cursor as a separate Java/OSGi application (because VS Code runs in a native Chrome browser process).
A quick check with jps revealed the headless JVM process started by Cursor, running a full-featured Eclipse product under the hood:
The truth: Cursor - sponsored by Eclipse and Advantest!
To make it clear: to provide full Java support, "the next generation AI tooling" starts a regular Eclipse application (similar to a regular Eclipse IDE) as a language server, but "headless", without any UI component.
Why should Cursor start Eclipse? Well, because "AI" (in the sense most people understand it) can't compile, build, or debug anything; for this work, "AI" is simply not intelligent enough!
"The next generation AI" still needs the "old boring AI" provided by the JDT project, and the JDT code runs inside an Eclipse process. Of course, VS Code-based IDEs can use other language servers for Java, but as of today the most popular one is the extension based on Eclipse, and it uses Eclipse / JDT tooling simply because it provides "for free" things that are not easy to implement from scratch. In fact, ECJ (Eclipse Compiler for Java) is the only alternative Java compiler implementation (after javac from Oracle). Many projects also use ECJ for compilation on CI because ECJ is faster than javac.
Advantest supports open source and has helped the Eclipse Platform with maintenance for many years.
In particular, ECJ (Eclipse Compiler for Java), as a major part of JDT, is maintained by Srikanth Sankaran, who has taken the leading role in Eclipse Java compiler development for the last two years.
As of today, Srikanth is the most active JDT core contributor. He reviewed, redesigned, reimplemented, and refactored support for all Java language enhancements from Java 10 through Java 25, in an intense 20-month effort.
These improvements were delivered over several Eclipse releases in the past 20 months and are available in the upcoming Eclipse 4.38 release (… and also in Cursor, VS Code, …).
Let's summarize what we've learned about Cursor:
- Cursor can’t compile, build, run or debug any Java code by default.
- Cursor's Java support is based on VS Code extensions.
- The most popular VS Code Java extension is backed by a headless Eclipse product.
- Both the Eclipse IDE and Cursor's Java support depend on the same JDT "AI".
With that, we can proudly say that the "AI" in Cursor / Antigravity / VS Code (for Java development) is sponsored by Eclipse and Advantest.
Investing in Eclipse Theia: Collective Sponsoring and Strategic Partner Options
by Jonas, Maximilian & Philip at November 27, 2025 12:00 AM
Many organizations rely on Eclipse Theia as a strategic platform for building custom tools and IDEs — across engineering domains, cloud solutions, and increasingly AI-native environments. As Theia …
The post Investing in Eclipse Theia: Collective Sponsoring and Strategic Partner Options appeared first on EclipseSource.
November 25, 2025
Textual, Graphical, and Form-Based Data Modeling with Eclipse Theia (AI)
by Jonas, Maximilian & Philip at November 25, 2025 12:00 AM
We’re excited to share another highlight from TheiaCon 2025: a talk on CrossModel, an innovative data modeling tool that showcases the power of Eclipse Theia’s extensibility and the seamless …
The post Textual, Graphical, and Form-Based Data Modeling with Eclipse Theia (AI) appeared first on EclipseSource.
November 20, 2025
Mastering Project Context Files for AI Coding Agents
by Jonas, Maximilian & Philip at November 20, 2025 12:00 AM
Have you seen files like CLAUDE.md, .copilot-instructions.md, or .cursorrules popping up in your projects? These project context files are becoming essential tools for working effectively with AI …
The post Mastering Project Context Files for AI Coding Agents appeared first on EclipseSource.
November 19, 2025
Language execution with Langium and LLVM
November 19, 2025 12:00 AM
November 18, 2025
Understanding Open Source Stewards and the Cyber Resilience Act
by Marta Rybczynska at November 18, 2025 06:54 AM
This white paper outlines what open source stewards need to understand about their obligations, what processes may need to evolve, and where more discussion is needed.
TheiaCon 2025: The Eclipse Theia Project Update
by Jonas, Maximilian & Philip at November 18, 2025 12:00 AM
We’re excited to share the opening keynote from TheiaCon 2025, providing a comprehensive update on the Eclipse Theia project over the past year. This keynote highlights the remarkable progress Theia …
The post TheiaCon 2025: The Eclipse Theia Project Update appeared first on EclipseSource.
November 17, 2025
Mind Maps Didn’t Make Me Scroll
by Donald Raab at November 17, 2025 03:19 AM
How to view complete Java library package structures without scrolling.
The api packages in the Eclipse Collections API jar as a mind map
Fully Expanded Packages Without Scrolling
I’ve been a fan of using mind maps for almost a decade. I have created many mind maps for my blogs using Astah UML. I really enjoy using Astah UML to help me explore and communicate ideas. As an added bonus, Astah UML is written in Java.
While I was writing my first book, “Eclipse Collections Categorically: Level up your programming game”, I wanted to show the complete package structure of the two jar files that make up the Eclipse Collections library. I started out trying to use screenshots of the project view from IntelliJ IDEA. The problem was that the entire package hierarchy wouldn’t fit in a tree view without having to scroll.
This is the best I could do without scrolling using the tree view in IntelliJ. I took two snapshots of the tree view and mashed them up into a single image. It kind of gave a sense of what the package structure in the two jars looks like, but it was incomplete.
Side by side tree view of Eclipse Collections api and impl packages in IntelliJ
I started out using an image like this in the book, and got some feedback from one of my technical reviewers that the picture wasn’t very clear because it was incomplete.
This is what led me to the “light bulb” moment to use mind maps to capture the package structure of the two jars. I am very happy with the results and how it looks in both the printed and digital versions of my book. The first image in this blog shows the api packages as a mind map.
For more information about the package design in Eclipse Collections, the following blog is a great resource. Unfortunately, when I wrote this blog, I didn’t yet have the bright idea to use mind maps for the package hierarchies in the two jars. Lesson learned.
Leverage Information Chunking to scale your Java library package design
That’s all folks
If you ever find yourself creating some documentation or a book that needs to include a Java package or directory structure, maybe next time consider using a mind map. This approach worked well for me in my book. When folks ask me about how the Eclipse Collections package hierarchies are structured, I point them to the two mind maps that appear on facing pages in the digital and print versions of the Eclipse Collections Categorically book. Both mind maps can be found in the free online reading sample for Kindle in Chapter 2 under the “Package Design” section.
Thanks for reading! If you’ve read this far, check the Kindle version of the book on Amazon on November 20, 2025. You might find a good deal on that date if you are interested in the Kindle version of the book.
Note: You don’t need a Kindle Reader to read a Kindle book, as there is a Kindle App available for several hardware platforms. I read Kindle books on my MacBook Pro using the Kindle App.
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
November 13, 2025
OCX 2026: Let’s build the future of open collaboration together
by Clark Roundy at November 13, 2025 04:41 PM
TL;DR - Registration for OCX26 is officially open! Join us in Brussels from 21–23 April 2026, and grab your early bird discount before 6 January. Don’t miss the chance to be part of the future of open collaboration.
The heartbeat of open source
At the Eclipse Foundation, openness is more than a value. It’s who we are. Each year, the Open Community Experience (OCX) brings that spirit to life by connecting developers, innovators, researchers, and industry leaders from around the world.
OCX 2026 is shaping up to be our biggest and most inspiring event yet. And we’re doing it in the heart of Europe: Brussels, a city known for innovation, collaboration, and great waffles.
One pass. Six experiences. Endless opportunities.
Your OCX26 pass gives you full access to the Main OCX Track plus five collocated events, each focused on the technologies and communities shaping the future of open source:
- Open Community for Tooling: IDEs, modeling tools, and developer platforms driving innovation.
- Open Community for Automotive: The hub for software-defined vehicles and next-gen mobility.
- Open Community for AI: Exploring responsible, transparent, and open AI frameworks.
- Open Community for Compliance: Tackling security, regulation, and the Cyber Resilience Act.
- Open Community for Research: Where academia meets industry to turn ideas into impact.
Whether you write code, design smarter cars, research AI, navigate compliance, or just love open source, OCX26 is where you belong.
Why register early?
Because it saves you over €100! Register before 6 January 2026 to lock in early bird pricing.
Our program teams are now putting together an unmissable lineup filled with fresh ideas, bold conversations, and practical insights. You can expect sessions on everything from secure software practices and CRA compliance to AI-powered development tools and next-generation mobility platforms shaping the future of open source.
Who should attend?
If you care about open source, OCX26 is the place to be:
- Developers and maintainers shaping open tools and frameworks
- Innovators in automotive, embedded, and edge systems
- AI researchers advancing ethical, open AI
- Compliance and security professionals navigating new regulations
- Academics and industry partners turning research into real-world impact
- Tech leaders connecting innovation to industry needs
In short, YOU!
Got something to share?
There’s still time to submit your talk, but not much: the call for proposals closes on 19 November.
We’re looking for stories, insights, and breakthroughs from across the open source ecosystem: Java, AI, automotive, embedded, compliance, and research. Whether it’s a new project, an interesting idea, or a collaboration success story, your voice belongs on the OCX stage.
Don’t miss the chance to share your expertise and connect with hundreds of passionate community members from across the world.
Sponsor the future
OCX exists because of the organizations that believe in open collaboration and community-driven innovation.
Now’s your chance to join them as a sponsor of OCX. Our flexible Sponsorship packages put your brand in front of developers, innovators, and leaders who are shaping the next generation of open technology.
From AI and automotive to tooling and compliance, OCX26 connects your brand with the communities shaping tomorrow’s technology.
Be part of the experience
Mark your calendars, grab your early bird pass, and get ready to join over 600 open source innovators in Brussels this April for three days of collaboration, connection, and creativity.
👉 Register now.
👉 Submit your talk by 19 November.
👉 Explore sponsorship opportunities.
Eclipse Theia 1.66 Release: News and Noteworthy
by Jonas Helming at November 13, 2025 09:11 AM
Eclipse Theia 1.66 delivers a feature-rich update with persistent AI chat sessions, slash commands, agent modes, and new GitHub and Project Info agents. It also brings significant debugging, UI, and API improvements. Check out the full announcement!
Eclipse Theia 1.66 Release: News and Noteworthy
by Jonas, Maximilian & Philip at November 13, 2025 12:00 AM
We are happy to announce the Eclipse Theia 1.66 release! The release contains in total 78 merged pull requests. In this article, we will highlight some selected improvements and provide an overview of …
The post Eclipse Theia 1.66 Release: News and Noteworthy appeared first on EclipseSource.
November 06, 2025
Foundation Announces Maintainers Fund
by Scott Lewis ([email protected]) at November 06, 2025 01:22 AM
November 05, 2025
AWS invests in strengthening open source infrastructure at the Eclipse Foundation
by Anonymous at November 05, 2025 03:08 PM
This commitment will benefit multiple core services, including Open VSX Registry, the open source registry for Visual Studio Code extensions that powers AI-enabled development environments such as Kiro and other leading tools.
AWS invests in strengthening open source infrastructure at the Eclipse Foundation
by Mike Milinkovich at November 05, 2025 02:29 PM
In our recent open letter and blog post on sustainable stewardship of open source infrastructure, we called on the industry to take a more active role in supporting the systems and services that drive today’s software innovation. Today, we’re excited to share a powerful example of what that kind of leadership looks like in action.
The Eclipse Foundation is pleased to announce that Amazon Web Services (AWS) has made a significant investment to strengthen the reliability, performance, and security of the open infrastructure that supports millions of developers around the world. This commitment will benefit multiple core services, including Open VSX Registry, the open source registry for Visual Studio Code extensions that powers AI-enabled development environments such as Kiro and other leading tools.
Sustaining the backbone of open source innovation
For more than two decades, the Eclipse Foundation has quietly maintained open infrastructure that forms the foundation of modern software creation for millions of software developers worldwide. Its privately hosted systems deliver more than 500 million downloads each month across services such as download.eclipse.org, the Eclipse Marketplace, and Open VSX. These platforms serve as the backbone for individuals, organisations, and communities that rely on open collaboration to build the technologies of the future.
AWS’s investment will help improve performance, reliability, and security across this infrastructure. The collaboration reflects a shared commitment to keeping open source systems resilient, transparent, and sustainable at global scale.
Open VSX: a model for sustainable open infrastructure
Open VSX is a vendor-neutral, open source (EPL-2.0) registry for Visual Studio Code extensions. It serves as the default registry for Kiro, Amazon’s AI IDE platform, and is relied upon by a growing global community of developers. The registry now hosts over 7,000 extensions from nearly 5,000 publishers and delivers in excess of 110 million downloads per month. As a leading registry serving developer communities worldwide, including JavaScript and AI development communities, Open VSX has become a vital piece of open source infrastructure that supports thousands of development teams worldwide.
By supporting Open VSX, AWS is helping to strengthen the foundations of this essential service and reinforcing the Eclipse Foundation’s ability to provide secure, reliable, and globally accessible infrastructure. Their contribution reflects the importance of collective investment in maintaining the resilience, openness, and security of the tools developers use every day.
This sponsorship highlights the shared responsibility that all organisations have in sustaining the technologies they depend on. It also sets a strong example of how industry leaders can contribute to ensuring that the services we all rely on remain trustworthy, scalable, and sustainable for the future.
Improving reliability, security, and trust
The AWS investment is helping to strengthen security, ensure fair access, and improve long-term service reliability. Ongoing work focuses on enhancing malware detection, improving traffic management, and expanding operational monitoring to ensure a stable and trusted experience for developers around the world.
As part of this collaboration, AWS is providing infrastructure and services that will improve availability, performance, and scalability across these systems. This support will accelerate key roadmap initiatives and help ensure that the platforms developers rely on remain secure, scalable, and trustworthy well into the future.
A shared commitment to open source sustainability
AWS’s contribution demonstrates how industry leaders can make strategic investments in sustaining the shared infrastructure their businesses depend on every day. By investing in the services that support open source development, AWS is helping to ensure that critical technologies remain open, reliable, and accessible to everyone.
The Eclipse Foundation continues to serve as an independent steward of open source infrastructure, maintaining the tools and systems that enable software innovation across industries. Together with supporters like AWS, we are building a stronger foundation for the future of open collaboration.
But this is only the beginning. The long-term health of open source infrastructure depends on collective action and shared responsibility. We encourage other organisations to follow AWS’s example and take an active role in sustaining the technologies that make modern development possible.
Learn how your organisation can make a difference through Eclipse Foundation membership or direct sponsorship opportunities. The future of open innovation depends on all of us, and together we can keep it strong, secure, and sustainable.
November 04, 2025
Self-Brewed Beer is (Almost) Free - Experiences using Ollama in Theia AI - Part 2
November 04, 2025 06:12 PM
This is part two of an extended version of a talk I gave at TheiaCon 2025. That talk covered my experiences with Ollama and Theia AI in the previous months. In part one I provided an overview of Ollama and how to use it to drive Theia AI agents, and presented the results of my experiments with different local large language models.
In this part, I will draw conclusions from these results and provide a look into the future of local LLM usage.
Considerations Regarding Performance
The experiment described in part one of this article showed that working with local LLMs is already possible, but still limited due to relatively slow performance.
Technical Measures: Context
The first observation is that the LLM becomes slower as the context grows. The reason is that the LLM needs to parse the entire context for each message. At the same time, too small a context window leads to the LLM forgetting parts of the conversation. In fact, as soon as the context window is filled, the LLM engine will start discarding the first messages in the conversation, while retaining the system prompt. So, if an agent seems to forget the initial instructions you gave it in the chat, this most likely means that the context window has been exceeded. In this case the agent might become unusable, so it is a good idea to use a context window that is large enough to fit the system prompt, the instructions, and the tool calls during processing. On the other hand, at a certain point in long conversations or reasoning chains, the context can become so large that each message takes more than a minute to process.
Consequently, as users, we need to develop an intuition for the necessary context length: long enough for the task, but not excessive.
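To make this eviction behaviour concrete, here is a minimal, hypothetical sketch in Java; the ContextWindow class and the rough 4-characters-per-token estimate are illustrative assumptions, not how Ollama is actually implemented:
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical illustration: once the token budget is exceeded, the oldest
// conversation messages are discarded first, while the system prompt is
// always retained.
public class ContextWindow
{
    private final int maxTokens;
    private final String systemPrompt;
    private final Deque<String> messages = new ArrayDeque<>();

    public ContextWindow(int maxTokens, String systemPrompt)
    {
        this.maxTokens = maxTokens;
        this.systemPrompt = systemPrompt;
    }

    public void add(String message)
    {
        this.messages.addLast(message);
        // Evict the oldest messages until the conversation fits again.
        while (this.tokenCount() > this.maxTokens && !this.messages.isEmpty())
        {
            this.messages.removeFirst();
        }
    }

    private int tokenCount()
    {
        // Crude stand-in for a real tokenizer: roughly 4 characters per token.
        int count = this.systemPrompt.length() / 4;
        for (String message : this.messages)
        {
            count += message.length() / 4;
        }
        return count;
    }
}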
Also, it is a good idea to reduce the necessary context by
- adding paths in the workspace to the context beforehand, so that instead of letting the agent browse and search the workspace for the files to modify via tool calls, we already provide that information. In my experiments, this reduced token consumption from about 60,000 tokens to about 20,000 for the bug analysis task. (Plus, this also speeds up the analysis process as a whole, because the initial steps of searching and browsing the workspace do not need to be performed by the agent).
- keeping conversations and tasks short. Theia AI recommends this even for non-local LLMs and provides tools such as Task Context and Chat Summary. So, it is a good idea to follow Theia AI's advice and use these features regularly.
- defining specialized agents. It is very easy to define a new agent with its custom prompt and tools in Theia AI. If we can identify a repeating task that needs several specialized tools, it is a good idea to define a specialized agent with this specialized toolset. In particular, regarding the support for MCP servers, it might be tempting to start five or more MCP servers and just throw all the available tools into the Architect or Coder agent’s prompt. This is a bad idea, though, because each tool’s definition is added to the system prompt and thus consumes a part of the context window.
Note that unloading and loading models is rather expensive as well, usually taking several seconds. And in Ollama, even changing the context window size causes a model reload. Therefore, as VRAM is usually limited, it is a good idea to stick to one or two models that fit into the available memory, and not to change context window sizes too often.
Organizational Measures
Even with these considerations regarding the context length, local LLMs will always be slower than their cloud counterparts.
Therefore, we should compensate for this at the organizational level by adjusting the way we work; for example, while waiting for the LLM to complete a task,
- we can write and edit prompts for the next features
- we can review the previous task contexts or code modifications and adjust them
- we can do other things in parallel, like go to lunch, grab a coffee, go to a meeting, etc., and let the LLM finish its work while we are away.
Considerations Regarding Accuracy
As mentioned in part 1, local LLMs are usually quantized (which basically means: rounded) so that weights, or parameters, consume less memory. As a result, a model can have lower accuracy. The symptom of this is that the agent does not do the correct thing, or does not use the correct arguments when calling a tool.
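As a toy illustration of the rounding (assuming an 8-bit scheme with weights in the range [-1, 1]; the numbers are made up):
float weight = 0.7312f;
float scale = 1.0f / 127;                            // assume weights in [-1, 1]
byte quantized = (byte) Math.round(weight / scale);  // 93, one of 256 levels
float restored = quantized * scale;                  // ~0.7323, close but not equal
System.out.println(weight + " -> " + quantized + " -> " + restored);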
In my experience, analyzing the reasoning/thinking content and checking the actual tool calls an agent makes is a good way to determine what goes wrong. Depending on the results of such an analysis:
- we can modify the prompt; for example, by giving more details, more examples, or by emphasizing important things the model needs to consider
- we can modify the implementation of the provided tools. This, of course, requires building a custom version of the Theia IDE or the affected MCP server. But if a tool call regularly fails because the LLM does not get the arguments 100% correct, and we could compensate for these errors in the tool implementation, it might be beneficial to invest in making the tool implementation more robust.
- we can provide more specific tools; for example, Theia AI only provides general file modification tools, such as writeFileReplacements. If you work mostly with TypeScript code, for example, it might be a better approach to implement and use a specialized TypeScript file modification tool that can automatically take care of linting, formatting, etc. on the fly.
Considerations Regarding Complexity
During my experiments, I tried to give the agent more complex tasks to work on and let it run overnight. This failed, however, because sooner or later the agent becomes unable to continue due to the limited context size; it starts forgetting the beginning of the conversation and thus its primary objective.
One way to overcome this limitation is to split complex tasks into several smaller, lower-level ones. Starting with version 1.63.0, Theia AI supports agent-to-agent delegation. Based on this idea, we could implement a special Orchestrator agent (or a more programmatic workflow) that is capable of splitting complex tasks into a series of simpler ones. These simpler tasks could then be delegated to specialized agents (refined versions of Coder, AppTester, etc.) one by one. This would have the advantage that each step could start with a fresh, empty context window, thus following the considerations regarding context discussed above.
This is something that would need to be implemented and experimented with.
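To sketch the idea anyway, here is a minimal, hypothetical shape it could take; the Agent and Orchestrator types below are invented for illustration and are not an existing Theia AI API:
import java.util.List;

// Hypothetical sketch: each subtask is delegated to a specialized agent,
// and every delegation starts with a fresh, empty context window.
interface Agent
{
    String run(String task); // fresh context per invocation
}

class Orchestrator
{
    private final Agent planner;
    private final Agent coder;

    Orchestrator(Agent planner, Agent coder)
    {
        this.planner = planner;
        this.coder = coder;
    }

    void execute(String complexTask)
    {
        // 1. Ask the planner to split the complex task into smaller steps.
        List<String> steps = List.of(this.planner.run(complexTask).split("\n"));

        // 2. Delegate each step one by one, so no single conversation
        //    outgrows the context window.
        for (String step : steps)
        {
            System.out.println("Completed: " + step + " -> " + this.coder.run(step));
        }
    }
}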
Odds and Ends
This blog article has presented my experiences and considerations about using local LLMs with Theia AI.
Several topics have only been touched on briefly, or not at all, and are subject to further inspection and experimentation:
- Until recently, I had considered Ollama too slow for code completion, mostly because the TTFT (time to first token) is usually rather high. But recently, I have found that at least with the model zdolny/qwen3-coder58k-tools:latest, the response time feels okay. So, I will start experimenting with this and some other models for code completion.
- Also, Ollama supports fill-in-the-middle completion. This means that the completion API supports providing not only a prefix but also a suffix as input. This API is currently not supported by Theia AI directly. The Code Completion Agent in Theia usually provides the prefix and suffix context as part of the user prompt. So Theia AI would have to be enhanced to support the fill-in-the-middle completion feature natively. And it needs to be determined whether this will also help to improve performance and accuracy.
- Next, there are multiple approaches regarding optimizing and fine-tuning models for better accuracy and performance. There are several strategies, such as Quantization, Knowledge Distillation, Reinforcement Learning, and Model Fine Tuning which can be used to make models more accurate and performant for one's personal use cases. The Unsloth and MLX projects, for example, aim at providing optimized, local options to perform these tasks.
- Finally, regarding Apple Silicon processors in particular, there are two technologies that could boost performance, if they were supported:
- CoreML is a proprietary Apple framework for using the native Apple Neural Engine (which would provide another performance boost, if an LLM could run fully on it). The bad news is that using the Apple Neural Engine is currently limited by several factors. Therefore, there are no prospects of running a heavier LLM, such as gpt-oss:20b, on the ANE at the moment.
- MLX is an open framework, also developed by Apple, that runs very efficiently on Apple Silicon processors using a hybrid approach to combine CPU, GPU, and Apple Neural Engine resources. Yet, there is still very limited support available to run LLMs in MLX format. But at least, there are several projects and enhancements in development:
- there is a Pull Request in development to add MLX support to Ollama, which is the basis for using the Neural Engine
- other projects, such as LM Studio, swama, mlx-lm and others support models in the optimized MLX format, but in my experiments, tool call processing was unstable, unfortunately.
Outlook
The evolution of running LLMs locally and using them for agentic development in Theia AI has been moving fast recently. The progress made in 2025 alone suggests that LLMs running locally will continue to get better and better over time:
- better models keep appearing: from deepseek-r1 to qwen3 and gpt-oss, we can be excited about what will come next
- context management is getting better: every other week, we can observe discussions around enhancing or using the context window more effectively in one way or another: the Model Context Protocol, giving LLMs some form of persistent memory, choosing more optimal representations of data, for example by using TOON, and utilizing more intelligent context compression techniques, to name just a few.
- hardware is becoming better, cheaper, and more available; I have performed my experiments with a 5-year-old processor (Apple M1 Max) and I have already achieved acceptable results. Even today’s processors are already much better, and there is more to come in the future
- software is becoming better: Ollama is being actively developed and enhanced, and Microsoft has recently published BitNet, an engine to support 1-bit LLMs, etc.
We can be excited to see what 2026 will bring…
Self-Brewed Beer is (Almost) Free - Experiences using Ollama in Theia AI - Part 1
November 04, 2025 03:38 PM
This blog article is an extended version of a talk I gave at TheiaCon 2025. The talk covered my experiences with Ollama and Theia AI over the previous months.
What is Ollama?
Ollama is an open source project which aims to make it possible to run Large Language Models (LLMs) locally on your own hardware with a Docker-like experience. This means that, as long as your hardware is supported, it is detected and used with no further configuration.
Advantages
Running LLMs locally has several advantages:
- Unlimited tokens: you only pay for the power you consume, and for the hardware if you do not already own it.
- Full confidentiality and privacy: the data (code, prompts, etc.) never leaves your network. You do not have to worry about providers using your confidential data to train their models.
- Custom models: You have the option to choose from a large number of pre-configured models, or you can download and import new models, for example, from Hugging Face. Or you can take a model and tweak it or fine-tune it to your specific needs.
- Vendor neutrality: It does not matter who wins the AI race in a few months, you will always be able to run the model you are used to locally.
- Offline: You can use a local LLM on a suitable laptop even when traveling, for example by train or on the plane. No Internet connection required. (A power outlet might be good, though...)
Disadvantages
Of course, all of this also comes at a cost. The most important disadvantages are:
- Size limitations: Both the model size (number of parameters) and context size are heavily limited by the available VRAM.
- Quantization: As a compromise to allow for larger models or contexts, quantization is used to sacrifice weight precision. In other words, a model with quantized parameters can fit more parameters in the same amount of memory. This comes at the cost of lower inference accuracy, as we will see further below.
Until recently, the list of disadvantages also included that there was no support for local multimodal models, so reasoning about images, video, audio, etc. was not possible. But that changed last week, when Ollama 0.12.7 was released along with locally runnable qwen3-vl model variants.
Development in 2025
A lot has happened in 2025 alone. At the beginning of 2025, there was neither a good local LLM for agentic use (especially, reasoning and tool calling were not really usable) nor solid support for Ollama in Theia AI.
But since then, in the last nine months:
- Ollama 0.9.0 has added support for reasoning/thinking and streaming tool calling
- More powerful models have been released (deepseek-r1, qwen3, gpt-oss, etc.)
- Ollama support in Theia AI has seen a major improvement
With the combination of these changes, it is now entirely possible to use Theia AI agents backed by local models.
Getting Started
To get started with Ollama, you need to follow these steps:
- Download and install the most recent version of Ollama. Be sure to regularly check for updates, as with every release of Ollama, new models, new features, and performance improvements are implemented.
- Start Ollama using a command line like this:
OLLAMA_NEW_ESTIMATES="1" OLLAMA_FLASH_ATTENTION="1" OLLAMA_KV_CACHE_TYPE="q8_0" ollama serve
Keep an eye open for the Ollama release changelogs, as the environment settings can change over time. Make sure to enable and experiment with new features.
- Download a model using
ollama pull gpt-oss:20b
- Configure the model in Theia AI by adding it to the Ollama settings under Settings > AI Features > Ollama
- Finally, as described in my previous blog post, you need to add request settings for the Ollama models in the settings.json file to adjust the context window size (num_ctx), as the default context window in Ollama is not suitable for agentic usage.
Experiments
As a preparation for TheiaCon, I have conducted several non-scientific experiments on my MacBook Pro M1 Max with 64GB of RAM. Note that this is a 5-year-old processor.
The task I gave the LLM was to locate and fix a small bug: A few months ago, I had created Ciddle - a Daily City Riddle, a daily geographical quiz, mostly written in NestJS and React using Theia AI. In this quiz, the user has to guess a city. After some initial guesses, the letters of the city name are partially revealed as a hint, while keeping some letters masked with underscores. As it turned out, this masking algorithm had a bug related to a regular expression not being Unicode-friendly: it matched only ASCII letters, but not special characters, such as é. So special characters would never be masked with underscores.
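Ciddle itself is written in TypeScript, but the same class of bug is easy to reproduce in Java; the city name below is just an illustration:
String city = "Orléans";

// Buggy pattern: matches ASCII letters only, so 'é' is never masked.
System.out.println(city.replaceAll("[a-zA-Z]", "_")); // prints: ___é___

// Unicode-friendly pattern: \p{L} matches any letter, including 'é'.
System.out.println(city.replaceAll("\\p{L}", "_"));   // prints: _______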
Therefore, I wrote a prompt explaining the issue and asked Theia AI to identify the bug and fix it. I followed the process described in this post:
- I asked the Architect agent to analyze the bug and plan for a fix
- once without giving the agent the file containing the bug, so the agent needs to analyze and crawl the workspace to locate the bug
- once with giving the agent the file containing the bug using the "add path to context" feature of Theia AI
- I asked Theia AI to summarize the chat into a task context
- I asked Coder to implement the task (in agent mode, so it directly changes files, runs tasks, writes tests, etc.)
- once with the unedited summary (which contained instructions to create a test case)
- once with the summary with all references to an automated unit test removed, so the agent would only fix the actual bug, but not write any tests for it
The table below shows the comparison of different models and settings:
| Model | Architect | Architect (with file path provided) | Summarize | Coder (fix and create test) | Coder (fix only) |
| --- | --- | --- | --- | --- | --- |
| gpt-oss:20b w/ num_ctx = 16k | 175s | 33s | 32s | 2.5m (3) | 43s |
| gpt-oss:20b w/ num_ctx = 128k | 70s | 50s | 32s | 6m | 56s |
| qwen3-14b w/ num_ctx = 40k | (1) | 143s | 83s | (4) | (4) |
| qwen3-coder:30b w/ num_ctx = 128k | (2) | (2) | 64s | 21m (3) | 13m |
| gpt-oss:120b-cloud | 39s | 16s | 10s | 90s (5) | 38s |
(1) without file path to fix, the wrong file and bugfix location is identified
(2) with or without provided path to fix, qwen3-coder "Architect" agent runs in circles trying to apply fixes instead of providing an implementation plan
(3) implemented the fix correctly, but did not write a test case, although instructed to do so.
(4) stops in the middle of the process without any output
(5) in one test, gpt-oss:120b-cloud did not manage to get the test file right and failed when the hourly usage limit was exceeded
Observations
I have performed multiple experiments; the table reports more or less the best-case times. As usual when working with LLMs, the results are not fully deterministic. In general, though, when the output is similar for a given model, the processing time is also stable within a few seconds, so the table above shows typical results for runs where the outcome was acceptable (where that was achievable at all).
In general, I have achieved the best results with gpt-oss:20b with a context window of 128k tokens (the maximum for this model). A smaller context window can result in faster response times, but at the risk of not performing the task completely; for example, when running with 16k context, the Coder agent would fix the bug, but not provide a test, even though the task context contained this instruction.
Also, in my first experiments, the TypeScript/Jest configuration contained an error which caused the model (even with 128k context) to run around in circles for 20 minutes and eventually delete the test again before finishing its process.
The other two local models I used in the tests, qwen3:14b and qwen3-coder:30b, were able to perform some of the agentic tasks, but usually with lower performance, and they failed outright in some scenarios.
Besides the models listed in the table above, I have also tried a few other models that were popular in the Ollama model repository, such as granite4:small-h and gemma3:27b. But they either behaved similarly to qwen3:14b, stopping at some point without any output, or they did not use the provided tools and just replied with a general answer.
Also note that some models (such as deepseek-r1) do not support tool calling in their local variants (yet…?). There are some variants of common models that are modified by users to support tool calling in theory, but in practice the tool calls are either not properly detected by Ollama, or the provided tools are not used at all.
Finally, just for comparison, I have also used the recently released Ollama cloud model feature to run the same tasks with gpt-oss:120b-cloud. As expected, the performance is much better than with local models, but at the same time, the gpt-oss:120b-cloud model also began to run around in circles once. So even that is not perfect in some cases.
To summarize, the best model for local agentic development with Ollama is currently gpt-oss:20b. When everything works, it is surprisingly fast, even on my 5-year-old hardware. But if something goes wrong, it usually goes fatally wrong, and the model entangles itself in endless considerations and fruitless attempts to fix the situation.
Stay tuned for the second part of this article, where I will describe the conclusions I draw from my experiences and experiments, discuss consequences, and provide a look into the future of local LLMs in the context of agentic software development.
The Active Ecosystem of Eclipse Theia Adopters
by Jonas, Maximilian & Philip at November 04, 2025 12:00 AM
We’re pleased to call attention to a compelling article by Thomas Froment at the Eclipse Foundation: “The Active Ecosystem of Eclipse Theia Adopters: A Tour of Diverse Tools and IDEs.” For those in …
The post The Active Ecosystem of Eclipse Theia Adopters appeared first on EclipseSource.
November 02, 2025
What if Java had Symmetric Converter Methods on Collection?
by Donald Raab at November 02, 2025 05:03 PM
Comparing converter methods in Smalltalk, Java, and Eclipse Collections
Using converter methods in Pharo Smalltalk. Converter methods are prefixed with “as” in Smalltalk.
toBe(), or not toBe()?
Converter methods are more than a convenience in a programming language. They are a means of discovering the additional collection types available to developers. When the number of available collection types is large, good discoverability becomes even more important. Smalltalk has mostly mutable collection types. Java and Eclipse Collections both have mutable and immutable implementations. Only Eclipse Collections has mutable and immutable types as separate interfaces. Eclipse Collections also has primitive collections, so converter methods provide helpful symmetry and discoverability between Object and primitive collection types.
toSmalltalk
In Smalltalk, converter methods are prefixed with as. The Collection abstract class has eleven converter methods — asArray, asBag, asByteArray, asCharacterSet, asDictionary, asIdentitySet, asMultilineString, asOrderedCollection, asOrderedDictionary, asSet, asSortedCollection.
This is the code from the above image inlined.
|ordered sorted set bag|
ordered := OrderedCollection with: 'Apple' with: 'Pear' with: 'Banana' with: 'Apple'.
sorted := ordered asSortedCollection: #yourself descending.
set := ordered asSet.
bag := sorted asBag.
Transcript show: ordered printString; cr.
Transcript show: sorted printString; cr.
Transcript show: set printString; cr.
Transcript show: bag printString; cr.
This is the output:
an OrderedCollection('Apple' 'Pear' 'Banana' 'Apple')
a SortedCollection('Pear' 'Banana' 'Apple' 'Apple')
a Set('Pear' 'Banana' 'Apple')
a Bag('Pear' 'Banana' 'Apple' 'Apple')
IIRC, most of the Collection types available via converter methods in Smalltalk are mutable.
toJava
In Java, converter methods are prefixed with to. The Collection interface has two converter methods — toString and toArray. The Stream interface has three converter methods — toString, toArray, and toList. The Stream interface also has a collect method which takes a Collector as a parameter. The Collectors utility class has eight unique to methods (some are overloaded) — toCollection, toList, toSet, toMap, toConcurrentMap, toUnmodifiableList, toUnmodifiableSet, and toUnmodifiableMap.
To convert a List to a “sorted” List, a Set, and a Bag, we can use the following. There is no SortedList or Bag type in Java, but we’ll find equivalents.
@Test
public void converterMethodsInJava()
{
List<String> ordered =
List.of("Apple", "Pear", "Banana", "Apple");
List<String> sorted =
ordered.stream()
.sorted(Comparator.reverseOrder())
.toList();
Set<String> set =
ordered.stream()
.collect(Collectors.toSet());
Map<String, Long> bag =
sorted.stream()
.collect(Collectors.groupingBy(
Function.identity(),
Collectors.counting()));
assertEquals(
List.of("Pear", "Banana", "Apple", "Apple"),
sorted);
assertEquals(
Set.of("Pear", "Banana", "Apple"),
set);
assertEquals(
Map.of("Pear", 1L, "Banana", 1L, "Apple", 2L),
bag);
}
Most of the converter methods in Java are three steps away from Collection. I do not think it is likely we will see any more converter methods on Collection or Stream.
Note that in the code example above it is not easy to distinguish between mutable and immutable collection implementations. You have to read the Javadoc or the code to understand the return types of the different methods.
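Here is a small example of that ambiguity. Note that Collectors.toList() makes no mutability guarantee in its Javadoc; current JDKs happen to return a mutable list, while Stream.toList() is documented to return an unmodifiable one:
@Test
public void mutabilityIsInvisibleInTheTypes()
{
    // Both variables have the same static type, but only one is mutable.
    List<String> collected =
        Stream.of("Apple").collect(Collectors.toList());
    List<String> unmodifiable =
        Stream.of("Apple").toList();

    collected.add("Pear"); // works on current JDKs, though not guaranteed
    assertThrows(
        UnsupportedOperationException.class,
        () -> unmodifiable.add("Pear")); // documented to be unmodifiable
}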
IntelliJ also recommends not using the converter method in the case of Collectors.toSet().
stream().collect(Collectors.toSet()) shows up highlighted in yellow
IntelliJ recommends writing it as follows, which can be accomplished by hitting Alt-Enter and choosing the recommended action.
Set<String> set = new HashSet<>(ordered);
Using this approach is more concise and is probably more performant (measure, don’t guess), but it introduces more asymmetry and draws implementation details (java.util.HashSet class) into our example.
toEclipseCollections
In Eclipse Collections, like Java, we use the to prefix for converter methods that have a linear time cost. The Eclipse Collections RichIterable interface has twenty-six unique converter methods (some are overloaded). The converter methods can be found using the IntelliJ Structure view in a category named “Converting”.
Expanding the converter methods for Eclipse Collections in RichIterable in IntelliJ
To convert a List to a “sorted” List, a Set, and a Bag, we can use the following. There is no SortedList in Eclipse Collections, but we’ll find an equivalent. We will use mutable collections in these examples, and we can tell they are mutable based on the type names.
@Test
public void converterMethodsInEclipseCollections()
{
MutableList<String> ordered =
Lists.mutable.of("Apple", "Pear", "Banana", "Apple");
MutableList<String> sorted =
ordered.toSortedList(Comparator.reverseOrder());
MutableSet<String> set =
ordered.toSet();
MutableBag<String> bag =
sorted.toBag();
assertEquals(
List.of("Pear", "Banana", "Apple", "Apple"),
sorted);
assertEquals(
Set.of("Pear", "Banana", "Apple"),
set);
assertEquals(
Bags.mutable.withOccurrences("Apple", 2, "Pear", 1, "Banana", 1),
bag);
}
If we want to use immutable collections in Eclipse Collections, the code would look like this.
@Test
public void immutableConverterMethodsInEclipseCollections()
{
ImmutableList<String> ordered =
Lists.immutable.of("Apple", "Pear", "Banana", "Apple");
ImmutableList<String> sorted =
ordered.toImmutableSortedList(Comparator.reverseOrder());
ImmutableSet<String> set =
ordered.toImmutableSet();
ImmutableBag<String> bag =
sorted.toImmutableBag();
assertEquals(
List.of("Pear", "Banana", "Apple", "Apple"),
sorted);
assertEquals(
Set.of("Pear", "Banana", "Apple"),
set);
assertEquals(
Bags.mutable.withOccurrences("Apple", 2, "Pear", 1, "Banana", 1),
bag);
}
I did not have to change the assertions in the example. The equals and hashCode contract for mutable and immutable types of the same container type is the same.
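This is easy to verify directly. A minimal sketch, assuming the standard Eclipse Collections Lists factory; both assertions pass because equality depends only on the List contract, not on mutability:
@Test
public void mutableEqualsImmutable()
{
    assertEquals(
        Lists.mutable.of("Apple", "Pear"),
        Lists.immutable.of("Apple", "Pear"));
    assertEquals(
        Lists.immutable.of("Apple", "Pear"),
        Lists.mutable.of("Apple", "Pear"));
}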
Takeaways
Java is a great language. Java’s standard Collection library is usable but does not have great symmetry or convenience. Eclipse Collections brings back and extends the conveniences Smalltalk had thirty years ago into Java today, and adds symmetry for mutable and immutable converter methods.
Think about the code you are writing today, and what it will be like maintaining it for the next 5, 10, 20, or 30 years. Writing code that communicates well is helpful in reducing the cost of understanding and maintenance. Well written code will also help new developers learn how things work, without having to memorize a lot of asymmetric alternatives for converting between collection types.
If you want to learn more about converter methods in Eclipse Collections, I have blogged about them previously, and they are also covered in Chapter 4 of the book “Eclipse Collections Categorically.” Here is a table of most of the mutable and immutable converter methods described in Chapter 4.
Converting between RichIterable Types from Eclipse Collections Categorically.
Vladimir Zakharov and I also covered some converter methods in our “Refactoring to Eclipse Collections” talk at dev2next, which I blogged about here.
Refactoring to Eclipse Collections with Java 25 at the dev2next Conference
If you don’t think this applies to you because you’ve moved to Kotlin, Python, Ruby, or some other language, take a look at the Kotlin Collections to methods. Once Immutable Collections become part of the Kotlin standard library, my educated guess is that this list will grow.
Kotlin Collections converter methods
Thanks for reading!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
October 29, 2025
The Eclipse Foundation’s Theia AI wins 2025 CODiE Award for Best Open Source Development Tool
by Anonymous at October 29, 2025 08:45 AM
BRUSSELS – 29 October 2025 – The Eclipse Foundation, one of the world’s largest open source software foundations, today announced that Theia AI has been named the winner of the 2025 CODiE Award for Best Open Source Development Tool.
The CODiE Awards are the only peer-recognised program honouring excellence and innovation across the technology landscape. Each product undergoes a rigorous evaluation by expert judges and industry peers based on innovation, impact, and overall value.
“We are honoured to be recognised among such groundbreaking technologies and organisations,” said Jonas Helming, Project Lead for Eclipse Theia and CEO of EclipseSource. “This CODiE Award underscores our team’s commitment to advancing open source innovation and empowering the next generation of AI-native tools and IDEs.”
Theia AI: Giving developers full control over AI integration
Part of the Eclipse Theia tool platform, Theia AI is an open source framework that gives tool builders complete control over how AI is integrated into their products. It allows developers to manage every aspect of AI capabilities, from selecting the most suitable Large Language Model (LLM), whether cloud-based, self-hosted, or fully local, to orchestrating the entire prompt engineering flow, defining agentic behaviours, and choosing which data and knowledge sources to use.
This flexibility ensures transparency, adaptability, and precision, enabling developers to fine-tune AI interactions to fit their specific use cases and strategic goals. Tool developers can design AI-driven user experiences exactly as they envision, whether through interactive chat interfaces, AI-assisted code editors, or fully customised user interfaces.
By simplifying complex AI integration challenges, Theia AI enables the creation of advanced, tailor-made AI capabilities that go beyond today’s state of the art and align with the unique demands of each domain. Following extensive beta testing and real-world adoption, Theia AI is now publicly available to empower developers and tool builders to bring intelligent, domain-specific AI capabilities to life. Learn more in the Theia AI release announcement.
“The CODiE Awards celebrate the visionaries shaping the future of technology,” said Jennifer Baranowski, President of the CODiE Awards. “This year’s winners exemplify how innovation, leadership, and purpose can come together to create solutions that move industries forward and make a lasting impact.”
A full list of 2025 CODiE Award winners can be found at www.codieawards.com/winners.
Explore the future of open source development at TheiaCon 2025, happening now (29–30 October). Registration is free and open to everyone.
To connect with the growing global Eclipse Theia community, contribute, or learn more, visit: https://theia-ide.org/.
About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organisations with a business-friendly environment for open source software collaboration and innovation. We host the Eclipse IDE, Adoptium, Software Defined Vehicle, Jakarta EE, Open VSX, and over 400 open source projects, including runtimes, tools, registries, specifications, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, open processor designs, and many others. Headquartered in Brussels, Belgium, the Eclipse Foundation is an international non-profit association supported by over 300 members. To learn more, follow us on social media @EclipseFdn, LinkedIn, or visit eclipse.org.
Third-party trademarks mentioned are the property of their respective owners.
###
Media contacts:
Schwartz Public Relations (Germany)
Julia Rauch/Marita Bäumer
Sendlinger Straße 42A
80331 Munich
+49 (89) 211 871 -70/ -62
514 Media Ltd (France, Italy, Spain)
Benoit Simoneau
M: +44 (0) 7891 920 370
Nichols Communications (Global Press Contact)
Jay Nichols
+1 408-772-1551
Open Source MBSE at Scale: From Industry-Proven Tools to Web-Native SysML v2
by Cédric Brun ([email protected]) at October 29, 2025 12:00 AM
Cedric Brun, CEO of Obeo, and Asma Charfi, from CEA, look back on 15 years of open-source ecosystem development and share their vision for the next generation of Model-Based Systems Engineering (MBSE) tools.
Context
- Event: 2025 IEEE International Symposium on Systems Engineering (ISSE)
- Location: ENSTA, Paris
- Date: October 2025
Summary
This joint presentation explored how open-source MBSE technologies have evolved over the past 15 years — from Eclipse-based industrial tools like Capella, Papyrus, and Sirius, to new web-native environments supporting SysML v2 and agent-assisted engineering.
Key messages included:
- The power of open ecosystems for accelerating innovation in education, research, and industry.
- Lessons learned from large-scale industrial adoption of MBSE tools.
- The emergence of next-generation modeling environments — collaborative, extensible, and AI-augmented, bridging the gap between domain experts and software engineers.
The talk sparked lively discussions and a strong interest from the IEEE community regarding the convergence of open-source platforms and upcoming SysML v2 tooling.
Highlights
- 15 years of open collaboration across the Eclipse ecosystem — from early Papyrus and Capella foundations to today’s vibrant MBSE community.
- Industry-proven tools at scale, including Capella and its extensions (Team, Cloud, and Publication), showcasing how open-source can sustain mission-critical engineering.
- A live proof of concept illustrating “Obeo Enterprise for SysON,” combining SysML v2 with Arcadia semantics and an AI agent assisting the creation of a logical architecture for the X-Wing spacecraft.
- A forward-looking perspective on the transition to web-native, cloud-enabled, and AI-augmented modeling platforms built for openness and collaboration.
Open Source MBSE at Scale: From Industry-Proven Tools to Web-Native SysML v2 was originally published by Cédric Brun at CEO @ Obeo on October 29, 2025.
October 28, 2025
Eclipse LMOS Redefines Agentic AI with Industry’s First Open Agent Definition Language (ADL) for Enterprises
by Anonymous at October 28, 2025 08:45 AM
BRUSSELS – 28 October 2025 – The Eclipse Foundation, one of the world’s largest open source software foundations, today announced the introduction of the Agent Definition Language (ADL) functionality to the Eclipse LMOS (Language Models Operating System) project.
Eclipse LMOS is an open source platform for orchestrating intelligent AI agents that perform complex tasks at enterprise scale. It is composed of three core components:
- Eclipse LMOS ADL (Agent Definition Language): A structured, model-neutral language and visual toolkit that lets domain experts define agent behavior reliably and collaborate seamlessly with engineers.
- Eclipse LMOS ARC Agent Framework: A JVM-native framework with a Kotlin runtime for developing, testing, and extending AI agents. It comes with a built-in visual interface for quick iterations and debugging.
- Eclipse LMOS Platform: An open, vendor-neutral orchestration layer for agent lifecycle management, discovery, semantic routing, and observability, built on the CNCF stack and currently in Alpha.
An industry-first innovation, ADL addresses the complexity of traditional prompt engineering by providing a structured, model-agnostic framework that allows business and engineering teams to co-define agent behaviour in a consistent, maintainable, and versionable way. This shared language increases the reliability and scalability of growing agentic use cases, enabling enterprises to design and govern complex agentic systems with confidence. This capability further distinguishes Eclipse LMOS from proprietary alternatives.
The goal of the LMOS project is to create a sovereign, open platform where AI agents can be developed, deployed, and integrated seamlessly across networks and ecosystems. Built on open standards such as Kubernetes, LMOS is already in production with one of the largest enterprise Agentic AI deployments in Europe.
“Agentic AI is redefining enterprise software, yet until now there have been no open source alternatives to proprietary offerings,” said Mike Milinkovich, executive director of the Eclipse Foundation. “With Eclipse LMOS and ADL, we’re delivering a powerful, open platform that any organisation can use to build scalable, intelligent, and transparent agentic systems.”
Empowering Enterprises to Build the Future of Agentic AI
Agentic AI represents a generational shift in how enterprises approach their technology stack. According to Gartner (June 2025), by 2028, 15% of daily business decisions will be made autonomously through agentic AI, and 33% of enterprise applications will include such capabilities, up from less than 1% in 2024.
Eclipse LMOS is uniquely designed to let enterprise IT teams leverage their existing infrastructure, skills, and DevOps practices. Running on technologies such as Kubernetes, Istio, and JVM-based applications, LMOS integrates naturally into enterprise environments, accelerating adoption while protecting prior investments.
The introduction of ADL builds on this foundation by empowering non-technical users to shape agent behavior. Business domain experts, not just engineers, can directly encode requirements into agents, accelerating time-to-market and ensuring that agent behavior accurately reflects real-world domain knowledge.
“With ADL, we wanted to make defining agent behaviour as intuitive as describing a business process, while retaining the rigor engineers expect,” said Arun Joseph, Eclipse LMOS project lead. “It eliminates the fragility of prompt-based design and gives enterprises a practical path to scale agentic AI using their existing teams and resources.”
Together, these two pillars, leveraging existing engineering investments and empowering business experts with ADL, make LMOS unique among agentic AI platforms.
Enterprise-Ready Advantages
Compared to proprietary solutions, Eclipse LMOS delivers:
- Open architecture - Innovation thrives in an open environment. LMOS is part of an open ecosystem that invites developers, data scientists, and organisations to collaborate and shape the future of Multi-Agent Systems.
- Collaboration - AI agent collaboration enhances problem-solving. LMOS orchestrates these interactions with advanced routing based on the user’s intent or goals, allowing agents to work together seamlessly within a single, unified system.
- Cloud native scalability - As your AI needs grow, LMOS grows with you. Its cloud-native architecture dynamically scales from a few agents to hundreds, ensuring seamless performance as your AI operations expand.
- Modularity - LMOS is built with modularity at its core, allowing you to easily integrate new Agents in your preferred development language or framework.
- Extensibility - Extensibility drives innovation. LMOS defines clear specifications, allowing you to quickly extend its ecosystem.
- Multi-tenant capable - Built with enterprises in mind, LMOS is designed to be multi-tenant capable from the ground up. LMOS enables the efficient management of multiple tenants and agent groups within a single infrastructure.
Real-World Impact
At Deutsche Telekom, Eclipse LMOS powers the award-winning Frag Magenta OneBOT assistant and other customer-facing AI systems. This deployment, one of Europe’s largest multi-agent enterprise deployments, has processed millions of service and sales interactions across several countries, showcasing LMOS’s enterprise-grade scalability and reliability in production environments.
Get Involved
Developers, enterprises, and researchers are invited to join the community and contribute to the evolution of open source Agentic AI. Full details on the LMOS project, participating organisations, and ways to get involved are available here. To learn more about AI initiatives at the Eclipse Foundation, visit eclipse.org/ai
About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organisations with a business-friendly environment for open source software collaboration and innovation. We host the Eclipse IDE, Adoptium, Software Defined Vehicle, Jakarta EE, Open VSX, and over 400 open source projects, including runtimes, tools, registries, specifications, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, open processor designs, and many others. Headquartered in Brussels, Belgium, the Eclipse Foundation is an international non-profit association supported by over 300 members. To learn more, follow us on social media @EclipseFdn, LinkedIn, or visit eclipse.org.
Third-party trademarks mentioned are the property of their respective owners.
###
Media contacts:
Schwartz Public Relations (Germany)
Julia Rauch/Marita Bäumer
Sendlinger Straße 42A
80331 Munich
+49 (89) 211 871 -70/ -62
514 Media Ltd (France, Italy, Spain)
Benoit Simoneau
M: +44 (0) 7891 920 370
Nichols Communications (Global Press Contact)
Jay Nichols
+1 408-772-1551
Why AI Coding Fails - and How to Fix It
by Jonas, Maximilian & Philip at October 28, 2025 12:00 AM
Many developers and teams are experimenting with AI coding — using tools like GitHub Copilot, Cursor, and other AI code assistants — but few manage to make it work reliably in real projects. At …
The post Why AI Coding Fails - and How to Fix It appeared first on EclipseSource.
October 27, 2025
Open VSX security update - October 2025
by Anonymous at October 27, 2025 08:38 PM
Over the past few weeks, the Open VSX team and the Eclipse Foundation have been responding to reports of leaked tokens and related malicious activity involving certain extensions hosted on the Open VSX Registry.
Open VSX security update, October 2025
October 27, 2025 07:30 PM
Over the past few weeks, the Open VSX team and the Eclipse Foundation have been responding to reports of leaked tokens and related malicious activity involving certain extensions hosted on the Open VSX Registry. We want to share a clear summary of what happened, what actions we’ve taken, and what improvements we’re implementing to strengthen the security of the ecosystem.
Background
Earlier this month, our team was alerted to a report from Wiz identifying several extension publishing tokens inadvertently exposed by developers within public repositories. Some of these tokens were associated with Open VSX accounts.
Upon investigation, we confirmed that a small number of tokens had been leaked and could potentially be abused to publish or modify extensions. These exposures were caused by developer mistakes, not a compromise of the Open VSX infrastructure. All affected tokens were revoked immediately once identified.
To improve detection going forward, we introduced a token prefix format in collaboration with MSRC to enable easier and more accurate scanning for exposed tokens across public repositories.
The “GlassWorm” campaign
Around the same time, a separate report from Koi Security described a new malware campaign that leveraged some of these leaked tokens to publish malicious extensions. The report referred to this as a “self-propagating worm,” drawing comparisons to the Shai-Hulud incident that impacted the npm registry in September.
While the report raises valid concerns, we want to clarify that this was not a self-replicating worm in the traditional sense. The malware in question was designed to steal developer credentials, which could then be used to extend the attacker’s reach, but it did not autonomously propagate through systems or user machines.
We also believe that the reported download count of 35,800 overstates the actual number of affected users, as it includes inflated downloads generated by bots and visibility-boosting tactics used by the threat actors.
All known malicious extensions were removed from Open VSX immediately upon notification, and associated tokens were rotated or revoked without delay.
Status of the incident
As of October 21, 2025, the Open VSX team considers this incident fully contained and closed. There is no indication of ongoing compromise or remaining malicious extensions on the platform.
We continue to collaborate closely with affected developers, ecosystem partners, and independent researchers to ensure transparency and reinforce preventive measures.
Strengthening the platform
This event has underscored the importance of proactive defense across the supply chain, particularly in community-driven ecosystems. To that end, we are implementing several improvements:
- Token lifetime limits: All tokens will have shorter validity periods by default, reducing the potential impact of accidental leaks.
- Simplified revocation: We are improving internal workflows and developer tooling to make token revocation faster and more seamless upon notification.
- Security scanning at publication: Automated scanning of extensions will now occur at the time of publication, helping us detect malicious code patterns or embedded secrets before an extension becomes available to users.
- Ecosystem collaboration: We are continuing to work with other marketplace operators, including VS Code and third-party forks, to share intelligence and best practices for extension security.
Help us build a more secure and sustainable open source future
We take this responsibility seriously, and the trust you place in us is paramount. Incidents like this remind us that supply chain security is a shared responsibility: from publishers managing their tokens carefully, to registry maintainers improving detection and response capabilities.
The Open VSX incident is now resolved, but our work on improving the resilience of the ecosystem is ongoing. We remain committed to transparency and to strengthening every part of our platform to ensure that open source innovation continues safely and securely.
Open VSX is built by and for the open source developer community. It needs your support to stay sustainable. Read more about this in our recent blog post.
If you believe you’ve discovered a security issue affecting Open VSX, please reach out to us at [email protected].
Thank you for your vigilance, cooperation, and commitment to a safer open source community.
October 25, 2025
Go Primitive in Java, or Go in a Box
by Donald Raab at October 25, 2025 08:05 PM
We can have our eight Java primitives and travel light in collections too.
It’s hard to go fast when you’re in a box
Java has eight primitives. For better or worse, we’ve had them in Java for over 30 years. We use primitives all the time (e.g. loops, if-statements, math, etc.), even when we don’t use them directly (e.g. String).
Java has array type support for all eight primitives. Java has three primitive Stream types (IntStream, LongStream, DoubleStream). Java has zero primitive Collection types. You have to box primitives to use them in collections. This means wrapping boolean, byte, char, short, int, float, long, double in their object wrapper equivalents, Boolean, Byte, Character, Short, Integer, Float, Long, Double.
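A quick illustration of the boxing described above:
// Boxing in action: each element below is a separate Integer object (an
// object header plus the 4-byte value), referenced from the backing array.
List<Integer> boxed = List.of(1, 2, 3);

// In an int[] (or an Eclipse Collections IntList) the values stay unboxed:
// just 4 bytes per element in the backing array.
int[] unboxed = {1, 2, 3};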
This is unfortunate. This is the nicest alternative I could come up with instead of saying what I really think, which is, this sucks.
I stopped caring about this “unfortunate situation” thirteen years ago. We added primitive collections to Eclipse Collections because we saw no near or distant future where primitive collection support would exist in Java natively.
We got to work and built solutions to travel light with Java collections a long time ago. You can travel light now as well if you want or need. If there’s something missing that you need, Eclipse Collections is open source and open for contributions. New contributors wanted!
It’s cheaper and faster to travel light
I could show you benchmarks and memory savings of using primitive collections instead of boxed collections. If you need to see these to be convinced of the benefits of primitive collection support in Java, then you probably don’t need support for primitive collections in Java. No need to read any further. Please accept this complimentary set of eight boxes for your collection travels.
If you understand and have a need for primitive collections, Eclipse Collections may have some solutions for you. Read on.
Eight is enough
Eclipse Collections has support for the following primitive collection types.
- List (all eight primitives)
- Set (all eight primitives)
- Stack (all eight primitives)
- Bag (all eight primitives)
- LazyIterable (all eight primitives)
- Map (all combinations except boolean as keys)
- Interval (IntInterval and LongInterval)
- String (CharAdapter and CodePointAdapter)
For List, Set, Stack, Bag, and Map, there are both Mutable and Immutable versions of the containers. There is only immutable primitive support for Interval and String types. LazyIterable for primitives is read-only.
Instantiate Them Using Factories
Symmetry and uniformity are very important design considerations in Eclipse Collections. While perfect symmetry is challenging to achieve, there is “good enough” symmetry for most types in the library. The following slide from the dev2next 2025 talk, “Refactoring to Eclipse Collections”, shows the combination of factories available. Credit to Vladimir Zakharov for creating this concise slide.
Slide 5 from the “Refactoring to Eclipse Collections” talk at dev2next 2025
As you might notice on this slide, some primitive collection types are currently missing. There are no BiMap, Multimap, SortedBag, SortedSet, or SortedMap types for primitives today. That can change over time if folks have a specific need. We only add types to Eclipse Collections when there is a real need.
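For illustration, a few of the primitive factory entry points look like this (a small sketch; depending on your Eclipse Collections version, the factory classes live in the impl or api factory packages):
MutableIntList list = IntLists.mutable.of(1, 2, 3);
ImmutableIntSet set = IntSets.immutable.of(1, 2, 3);
MutableIntBag bag = IntBags.mutable.of(1, 1, 2);
MutableIntObjectMap<String> map = IntObjectMaps.mutable.empty();
IntInterval interval = IntInterval.oneTo(10); // immutable range of ints
CharAdapter chars = CharAdapter.adapt("Eclipse"); // immutable CharList view of a String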
Why no primitive Boolean<V>Map?
The type Map<Boolean, V> in Java has a particular design smell. We specifically designed the primitive collection types in Eclipse Collections so that there is no BooleanObjectMap<V> type and no Boolean<Primitive>Map types.
Disallowing this kind of type may seem like a poor design decision to folks who enjoy Map-Oriented Programming. After all, the Collectors.partitioningBy() method returns Map<Boolean, List<T>>, so it must be a good design, right? Not all questions have a simple answer, so some questions deserve an entire blog.
Map-Oriented Programming in Java
In modern versions of Java (Java 17+ for LTS users), you can use a Java record to create a concise strong type for what might be considered more generally as a Pair. Eclipse Collections also has Pair, and all combinations of primitive and object Pair types (e.g. IntObjectPair, ShortBytePair, LongBytePair, etc). These are better, safer alternatives to using a Map<Boolean, V> type.
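As a small sketch of those alternatives (the Order type and the record name are hypothetical; Tuples and PrimitiveTuples are the Eclipse Collections pair factories):
// A record gives a strong, named type instead of an ambiguous Map<Boolean, V>
record ShippedAndPending(List<Order> shipped, List<Order> pending) {}

// Eclipse Collections Pair types, including primitive combinations
Pair<String, Integer> pair = Tuples.pair("answer", 42);
IntObjectPair<String> intPair = PrimitiveTuples.pair(42, "answer");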
What about primitive support for lambdas?
Eclipse Collections has had primitive collection support since before Java had lambdas (around 2012). Just like the object collections in Eclipse Collections, the primitive collections were designed with a feature-rich API. I knew Java would get lambdas eventually; I just wasn’t sure when exactly.
My ten year quest for concise lambda expressions in Java
By the time we added primitive collection support to GS Collections, I believed Concise Lambda Expressions would be included in Java 8. The fundamental problem with lambda support for primitives is the same as collections support for primitives. There is no support for Generic Types over Primitives in Java today. This is a feature that may eventually arrive with Project Valhalla.
My ten year quest for lambda support in Java has been absolutely dwarfed by my twenty-one year wait for generic types over primitives.
I shared what I have been wishing and waiting for in Java in this blog.
What are you wishing or waiting for in Java?
TL;DR… This is what it looks like when you decide to stop wishing or waiting, and just get to work making a Functional Interface named Procedure/Procedure2 (aka Consumer/BiConsumer) work for the primitive types. This is only one of three Functional Interface type categories. There are also Function0/Function/Function2 and Predicate/Predicate2. The combinatorial explosion of these types is explained further in the blog and the “Eclipse Collections Categorically” book.
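As a quick sketch of the primitive functional interfaces in action (IntInterval methods take IntProcedure and IntPredicate rather than their boxed equivalents):
// forEach takes an IntProcedure, the primitive analogue of Consumer<Integer>
IntInterval.oneTo(3).forEach(each -> System.out.println(each));
// select takes an IntPredicate, the primitive analogue of Predicate<Integer>
ImmutableIntList evens = IntInterval.oneTo(10).select(each -> each % 2 == 0);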
Functional “Procedure” Interfaces for primitive types in Eclipse Collections
Blogs, Code Katas, and other Resources
If you are interested in learning more about the primitive collection support in Eclipse Collections, the following resources can help.
Blogs
- The missing Java data structures no one ever told you about — Part 3
- How to collect a Java Stream into a primitive Collection
- Primitive Collections in Eclipse Collections | Baeldung
Code Katas
Lost and Found Kata in Eclipse Collections Kata repo. There is a solutions folder for this kata as well.
Book
The book “Eclipse Collections Categorically: Level up your programming game” was first published in March 2025. The book has excellent coverage of working with both object and primitive collections in Eclipse Collections. Various versions of the book are linked from the publisher here. The book is also currently available for free to Amazon Kindle Unlimited subscribers.
Reference Guide
There is an AsciiDoc Reference Guide for Eclipse Collections with a dedicated section on primitive collections here.
Final Thoughts
The extensive primitive collections support in Eclipse Collections has been one of its most popular features. The combination of primitive collections with lambda-enabled API and support for mutable and immutable types is unmatched in any other Java collections library. These are hard problems to solve, but they have been solved problems in Eclipse Collections for well over a decade.
It will be great when Project Valhalla is finally realized and released in Java. Maybe you can afford to wait for Project Valhalla to arrive and finally build the applications and libraries you really want to build. I’m glad we got to work on supporting primitive collections in Eclipse Collections when I was in my early forties. Now I’m in my mid-fifties, and I have decided I’m getting too old to wait for language miracles to arrive.
Java has been good enough since Java 8, and gets better with every release. I go primitive any time I need to. I don’t need to wait for anything.
You can either get to work using what’s available today, or wait and hope for someone to eventually unbox the box you’ve been travelling in. Go primitive in Java, or go in a box. Your choice.
Thanks for reading!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
October 24, 2025
Before the Cloud: Eclipse Foundation’s Quiet Stewardship of Open Source Infrastructure
by Denis Roy at October 24, 2025 08:12 PM
Long before the cloud era, the Eclipse Foundation quietly served as the backbone of open source stewardship. Its software, frameworks, processes and infrastructure helped define and standardise developer workflows that are now core to modern engineering practices.
As early as 2005, the Eclipse IDE’s modular plugin architecture embodied what we now recognise as today's extension registry model. Developers no longer needed to manually download and configure artifacts; they could be automatically ingested, at high volume, into build and delivery pipelines known today as CI/CD.
Eclipse Foundation’s early success demanded infrastructure that could scale globally without the benefit of GitHub, Cloudflare, AWS, or GCP. Like many pioneering platforms of that time, we had to build performant and resilient systems from the ground up.
Fast forward two decades, and open source infrastructure has become the backbone of software delivery across every industry. Developer platforms now span continents and power everything from national infrastructure to consumer technology. In this landscape, software delivery is no longer just a technical process but a key driver of innovation, competition, and developer velocity.
Today, the Eclipse Foundation continues its legacy of building dependable open source infrastructure, powering registries, frameworks, and delivery systems that enable millions of developers to innovate at scale. From open registries like Open VSX to enterprise-grade frameworks such as Jakarta EE, the Foundation provides the scaffolding for the next generation of AI-augmented development. Its vendor-neutral governance ensures that tools, and the innovations they enable, remain open, globally accessible and community-driven.
From IDEs to extension registries, the Eclipse Foundation continues to shape the digital backbone of modern innovation. It remains one of the world’s most trusted homes for open collaboration, enabling developers, communities, and organisations to build the technologies that define the future—at global scale.
October 23, 2025
AI Coding Training Now Available: Learn the Dibe Coding Methodology
by Jonas, Maximilian & Philip at October 23, 2025 12:00 AM
Over the past two years, AI coding has exploded — with tools and demos promising to transform how we build software. Yet many teams, especially in enterprise environments, still struggle to move …
The post AI Coding Training Now Available: Learn the Dibe Coding Methodology appeared first on EclipseSource.
October 21, 2025
On-Demand AI Agent Delegation in Theia AI
by Jonas, Maximilian & Philip at October 21, 2025 12:00 AM
AI-powered development environments are evolving beyond single, monolithic agents. The next step is collaborative AI — a network of specialized agents that each excel at a certain task and can …
The post On-Demand AI Agent Delegation in Theia AI appeared first on EclipseSource.
October 17, 2025
How we used Maven relocation for Xtend
by Lorenzo Bettini at October 17, 2025 01:45 PM
October 16, 2025
Eclipse Theia 1.65 Release: News and Noteworthy
by Jonas, Maximilian & Philip at October 16, 2025 12:00 AM
We are happy to announce the Eclipse Theia 1.65 release! The release contains in total 78 merged pull requests. In this article, we will highlight some selected improvements and provide an overview of …
The post Eclipse Theia 1.65 Release: News and Noteworthy appeared first on EclipseSource.
October 15, 2025
Spliterating Hairs Results in Spliterating Deja Vu
by Donald Raab at October 15, 2025 04:41 AM
How a “Random” question led me down a Java Spliterator rabbit hole.
A Better RandomAccess Default From Long Ago
I responded to a comment on a recent blog with a link to a blog that I had written a few years ago. That blog details the journey I went on, taking a discovery and an idea all the way to inclusion in OpenJDK. This blog and the links it contains are the only organized body of evidence that I am aware of that explains the creation, existence, and purpose of RandomAccessSpliterator. The following is the blog.
Traveling the road from Idea all the way to OpenJDK
This story happened a long time ago, but it got me wondering: whatever happened with RandomAccessSpliterator? I knew this class would probably be in Java forever, but I wondered how often it actually gets used today.
Note for the reader: This is the entrance to the rabbit hole. I have lost large amounts of personal time to questions like these. If you value your time and accept the universe as it is, then do not ask yourself questions like these. If you do find yourself asking these kinds of questions, you may learn more than you wanted to know.
Find Usages?
One does not simply create RandomAccessSpliterator. It is a default Spliterator, created for RandomAccess List types that have not defined their own spliterator() method. This means instances of this type can only be discovered at runtime. As evidence, I tried IntelliJ Find Usages on RandomAccessSpliterator on the Eclipse Collections code base. The only place it is created is in a default method on the List interface in the JDK.
The default method on List that creates RandomAccessSpliterator
While I’ve known I cannot simply find usages of this type, I thought there must be another way. So I decided to run the ~167K unit tests in the Eclipse Collections test suite with a breakpoint turned on. Then I learned something I had never tried before. You don’t have to suspend a breakpoint. You can just output a message when the breakpoint is hit. Woo hoo! Runtime usages!
I unchecked Suspend, and checked “Breakpoint hit” message
Now when I run the Eclipse Collections unit test suite, this is what I see in the console with this breakpoint set.
Breakpoints that are logged
Then I Googled to see if I could find a count of the number of times the breakpoint was hit, and StackOverflow had a question and answer. So I found the tab in IntelliJ, and sure enough there’s the same count I did by hand a day earlier. 🤦♂️
Number of times this breakpoint is hit
Ok, this is where the story should end. You learned some cool stuff about IntelliJ and debugging and counting breakpoints. Win!
I wonder if RandomAccessSpliterator is used anywhere in the JDK code base?
Note to reader: This is where I lose sight of the top of the rabbit hole and enter into free fall into the JDK code base and hours of running JMH Benchmarks.
The Ballad of List12 and ListN
I don’t have the code for running the JDK unit tests on my machine. I’ve never run them, and I am not sure how I would. That’s a rabbit hole for me to fall down a different day, maybe when I’m retired.
I decided to just poke around and try things out with JDK types. I’ll make the story short. I discovered that RandomAccessSpliterator is used by the two classes created by calling List.of(). Yes, the immutable (or unmodifiable if you prefer) lists we’ve been creating since they were added to Java 9, use RandomAccessSpliterator, which was also added in Java 9. Instances of List.of() get RandomAccessSpliterator by default because they don’t define a spliterator() method.
Oh, shaving cream! RandomAccessSpliterator lives!
Now, this rabbit hole, had a little detour. As it turns out, List12 did not define a spliterator() method in Java 21, but it defines one in Java 25. So it must have been added somewhere between Java 21 and 25. The method looks like this in Java 25.
List12 with one element gets Collections.singletonSpliterator(), which is package private
I wrote a test to show a bunch of Spliterator types used by commonly used List types in Java.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
public class SpliteratorTest
{
@Test
public void listNSpliteratorType()
{
List<Integer> integers = List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
assertEquals(
"RandomAccessSpliterator",
integers.spliterator().getClass().getSimpleName());
}
@Test
public void list12SpliteratorType()
{
List<Integer> list1Of12 = List.of(1);
assertEquals(
"",
list1Of12.spliterator().getClass().getSimpleName());
List<Integer> list2Of12 = List.of(1, 2);
assertEquals(
"RandomAccessSpliterator",
list2Of12.spliterator().getClass().getSimpleName());
}
@Test
public void arraysAsListSpliteratorType()
{
List<Integer> arraysAsList =
Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
assertEquals(
"ArraySpliterator",
arraysAsList.spliterator().getClass().getSimpleName());
}
@Test
public void arrayListSpliteratorType()
{
List<Integer> arrayList =
new ArrayList<>(List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10));
assertEquals(
"ArrayListSpliterator",
arrayList.spliterator().getClass().getSimpleName());
}
}
Now the simple name of the Spliterator used for single element instances of the List12 type is… empty. This is because it is defined as an anonymous inner class. Anonymous indeed!
I was initially surprised to see there are three named Spliterator types here, not two. I was expecting ArrayList to use ArraySpliterator. The reason it does not is that the Spliterator for ArrayList has to deal with potential ConcurrentModificationException exceptions being thrown if the modCount variable inherited from AbstractList changes. RandomAccessSpliterator has a dual mode: it checks whether a RandomAccess type extends AbstractList, and if so adds the modCount logic. If not, it skips it.
Ok, so while I think it is really cool to see an idea I had for a default Spliterator implementation 12 years ago is actively used by newer collection types, I found myself asking the question.
Why would ListN use RandomAccessSpliterator instead of ArraySpliterator, since it doesn’t extend AbstractList and doesn’t need to deal with modCount? List12 makes more sense to me, as it is not backed by an array, and is still RandomAccess.
Note to the reader: This is when I saw the same black cat walk by my doorway twice. This is also where the rabbit hole got very deep, as it caused me to spend 8–10 hours just writing and running JMH benchmarks. I am going to keep it short and sweet for you and just share one straightforward benchmark to consider.
Deja Vu
I’ve been here before. Twelve years ago I found myself discovering and proving that IteratorSpliterator was a terrible default for RandomAccess types when using parallelStream(). RandomAccessSpliterator eventually stepped in as a much better default implementation for folks who could not or did not want to provide their own spliterator() override.
While RandomAccessSpliterator is a good default alternative, I believe that ArraySpliterator must be a better performing alternative for a List type with a backing array that is immutable. The array and immutable aspects are the key. All of the complicated logic needed to check for modCount changes goes away. And with an array, we don’t have to use method calls like get() to look up elements at indexes. Win!
This must be measurable, right? I believe so. To the rabbit hole!
After much testing and trying different combinations of things, I decided I would settle on one benchmark to share. If folks want to try out their own benchmarks to prove if this is worth it or not, I wish them luck and much patience. I am satisfied, and believe that having ListN be as fast to iterate with stream() and parallelStream() as Arrays.asList() would be a good thing, and beneficial to the entire Java community.
Note: We have used ArraySpliterator for all of our array backed List types for years in Eclipse Collections. This is for two reasons. One, it’s a simple and fast Spliterator that we didn’t have to write. Two, we don’t use modCount in our collection types, so don’t require the use of modCount sensitive Spliterators.
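For example, here is a minimal sketch of what an immutable, array-backed list could do (hypothetical code, not the actual ListN implementation; the elements field name is illustrative):
// Hypothetical spliterator() override inside an immutable, array-backed List<E>
@Override
public Spliterator<E> spliterator()
{
    // Spliterators.spliterator reports SIZED/SUBSIZED automatically;
    // ORDERED and IMMUTABLE are safe because the backing array never changes
    return Spliterators.spliterator(this.elements,
            Spliterator.ORDERED | Spliterator.IMMUTABLE);
}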
The Benchmark Results
I ran benchmarks calculating the combination of min and max using Stream.reduce(). I ran these benchmarks on my MacBook Pro M2 Max with 12 Cores and 96GB RAM.
Result output (minus the prefix long test name):
Benchmark Mode Cnt Score Error Units
minMaxArrayList thrpt 20 158.375 ± 3.787 ops/s
minMaxArraysAsList thrpt 20 208.214 ± 0.686 ops/s
minMaxListN thrpt 20 97.352 ± 0.717 ops/s
parallelMinMaxArrayList thrpt 20 1149.748 ± 60.342 ops/s
parallelMinMaxArraysAsList thrpt 20 1387.468 ± 8.229 ops/s
parallelMinMaxListN thrpt 20 1062.055 ± 15.685 ops/s
The unit is operations per second, so bigger is better.
The results in a chart
The Code:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.TimeUnit;
import org.eclipse.collections.api.tuple.primitive.IntIntPair;
import org.eclipse.collections.impl.list.Interval;
import org.eclipse.collections.impl.tuple.primitive.PrimitiveTuples;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
@State(Scope.Thread)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
@Fork(2)
@Warmup(iterations = 20, time = 2)
@Measurement(iterations = 10, time = 2)
public class RandomAccessVsArraySpliteratorBenchmark
{
private final Interval interval = Interval.oneTo(1_000_000);
private final List<Integer> listN = List.copyOf(interval);
private final List<Integer> arrayList = new ArrayList<>(interval);
private final List<Integer> arraysAsList = Arrays.asList(interval.toArray(new Integer[0]));
@Benchmark
public IntIntPair minMaxListN()
{
int min = this.listN.stream()
.reduce(Math::min)
.orElse(0);
int max = this.listN.stream()
.reduce(Math::max)
.orElse(0);
return PrimitiveTuples.pair(min, max);
}
@Benchmark
public IntIntPair minMaxArrayList()
{
int min = this.arrayList.stream()
.reduce(Math::min)
.orElse(0);
int max = this.arrayList.stream()
.reduce(Math::max)
.orElse(0);
return PrimitiveTuples.pair(min, max);
}
@Benchmark
public IntIntPair minMaxArraysAsList()
{
int min = this.arraysAsList.stream()
.reduce(Math::min)
.orElse(0);
int max = this.arraysAsList.stream()
.reduce(Math::max)
.orElse(0);
return PrimitiveTuples.pair(min, max);
}
@Benchmark
public IntIntPair parallelMinMaxListN()
{
int min = this.listN.parallelStream()
.reduce(Math::min)
.orElse(0);
int max = this.listN.parallelStream()
.reduce(Math::max)
.orElse(0);
return PrimitiveTuples.pair(min, max);
}
@Benchmark
public IntIntPair parallelMinMaxArrayList()
{
int min = this.arrayList.parallelStream()
.reduce(Math::min)
.orElse(0);
int max = this.arrayList.parallelStream()
.reduce(Math::max)
.orElse(0);
return PrimitiveTuples.pair(min, max);
}
@Benchmark
public IntIntPair parallelMinMaxArraysAsList()
{
int min = this.arraysAsList.parallelStream()
.reduce(Math::min)
.orElse(0);
int max = this.arraysAsList.parallelStream()
.reduce(Math::max)
.orElse(0);
return PrimitiveTuples.pair(min, max);
}
}
This is the limit of the time I am willing to spend on this. I think it shows that ListN could get a decent speedup for this test case by switching to ArraySpliterator. Other test cases may see different results. I’m fairly confident that switching ListN to use ArraySpliterator is unlikely to result in any degradation of performance, but I have also learned that measuring performance is really hard, especially when it comes to JIT compilers.
Final Thoughts
I learned some new things trying these experiments and diving into these all too familiar looking rabbit holes. I don’t know if this will cause any changes in Java, but I do hope it helps shine a light on a potentially useful performance improvement moving ListN from using RandomAccessSpliterator to using ArraySpliterator.
I also hope this provides some useful information that my readers may not have been aware of.
Thanks for reading!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
October 14, 2025
It's Released: Your Native Claude Code IDE Integration in Theia
by Jonas, Maximilian & Philip at October 14, 2025 12:00 AM
Anthropic’s Claude Code is one of the most advanced AI coding agents available: powerful, autonomous, and loaded with well-designed tools. But until now, the experience always felt somewhat separated …
The post It's Released: Your Native Claude Code IDE Integration in Theia appeared first on EclipseSource.
October 10, 2025
Announcing Eclipse Ditto Release 3.8.0
October 10, 2025 12:00 AM
The Eclipse Ditto team is excited to announce the availability of a new minor release, including new features: Ditto 3.8.0.
Adoption
Companies are willing to show their adoption of Eclipse Ditto publicly: https://iot.eclipse.org/adopters/?#iot.ditto
When you use Eclipse Ditto, it would be great to support the project by putting your logo there.
Changelog
The main improvements and additions of Ditto 3.8.0 are:
- Diverting Ditto connection responses to other connections (e.g. to allow multi-protocol workflows)
- Dynamically re-configuring WoT validation settings without restarting Ditto
- Enforcing that WoT model based thing definitions are used and match a certain pattern when creating new things
- Support for OAuth2 “password” grant type for authenticating outbound HTTP connections
- Configure JWT claims to be added as information to command headers
- Added support for client certificate based authentication for Kafka and AMQP 1.0 connections
- Extend “Normalized” connection payload mapper to include deletion events
- Support silent token refresh in the Ditto UI when using SSO via OAuth2/OIDC
- Enhance conditional updates for merge thing commands to contain several conditions to dynamically decide which parts of a thing to update and which not
The following non-functional work is also included:
- Improving WoT based validation performance for merge commands
- Enhancing distributed tracing, e.g. with a span for the authentication step and by adding the error response for failed API requests
- Updating dependencies to their latest versions
- Providing additional configuration options to Helm values
The following notable fixes are included:
- Fixing nginx CORS configuration which caused Safari / iOS browsers to fail with CORS errors
- Fixing transitive resolving of Thing Models referenced with tm:ref
- Fixing sorting on array fields in Ditto search
- Fixing issues around “put-metadata” in combination with merge commands
- Fixing that certificate chains for client certificate based authentication in Ditto connections were not fully parsed
- Fixing deployment of Ditto on OpenShift
Please have a look at the 3.8.0 release notes for more detailed information on the release.
Artifacts
The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.
The Ditto JavaScript client release was published on npmjs.com.
The Docker images have been pushed to Docker Hub:
- eclipse/ditto-policies
- eclipse/ditto-things
- eclipse/ditto-things-search
- eclipse/ditto-gateway
- eclipse/ditto-connectivity
The Ditto Helm chart has been published to Docker Hub.
–
The Eclipse Ditto team
October 09, 2025
Response diversion - Multi-protocol workflows made easy
October 09, 2025 12:00 AM
Today we’re excited to announce a powerful new connectivity feature in Eclipse Ditto: Response Diversion. This feature enables sophisticated multi-protocol workflows by allowing responses from one connection to be redirected to another connection instead of being sent to the originally configured reply target.
With response diversion, Eclipse Ditto becomes even more versatile in bridging different IoT protocols and systems, enabling complex routing scenarios that were previously challenging or impossible to achieve.
The challenge: Multi-protocol IoT landscapes
Modern IoT deployments often involve multiple protocols and systems working together. Consider these common scenarios:
- Cloud integration: Your devices use MQTT to communicate with AWS IoT Core, but your analytics pipeline consumes data via Kafka
- Protocol translation: Legacy systems expect HTTP webhooks, but your devices communicate via AMQP
- Response aggregation: You want to collect all device responses in a central monitoring system regardless of the original protocol
Until now, implementing such multi-protocol workflows required complex external routing logic or multiple intermediate systems. Response diversion brings this capability directly into Ditto’s connectivity layer.
How response diversion works
Response diversion is configured at the connection source level using a key in the specific config and special header mapping keys:
{
"headerMapping": {
"divert-response-to-connection": "target-connection-id",
"divert-expected-response-types": "response,error,nack"
},
"specificConfig": {
"is-diversion-source": "true"
}
}
On the target connection, diversion is enabled by defining a target. When there are multiple sources, either one target or exactly as many targets as sources are required. If multiple targets are configured, they are mapped to the sources in order. The target connection only accepts diverted responses from source connections whose IDs are listed in its specific config under the key ‘authorized-connections-as-sources’, in comma-separated format.
{
"id": "target-connection-id-1",
"targets": [
{
"address": "command/redirected/response",
"topics": [],
"qos": 1,
"authorizationContext": [
"pre:ditto"
],
"headerMapping": {}
}
],
"specificConfig": {
"is-diversion-target": "true"
}
}
{
"targets": [
{
"address": "command/redirected/response",
"topics": [],
"qos": 1,
"authorizationContext": [
"pre:ditto"
],
"headerMapping": {}
}
],
"specificConfig": {
"is-diversion-target": "true",
"authorized-connections-as-sources": "target-connection-id-1,..."
}
}
When a command is received through a source with response diversion configured, Ditto intercepts the response and routes it through the specified target connection instead of the original reply target.
Real-world use case: AWS IoT Core with Kafka
Let’s explore a practical scenario that demonstrates the power of response diversion. In this setup:
- Devices communicate with AWS IoT Core via MQTT (bidirectional)
- A bridge pushes device commands from AWS IoT Core to a Kafka topic
- Device commands are consumed from Kafka topics
- Responses must go back to AWS IoT Core via MQTT (since IoT Core doesn’t support Kafka consumers)
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ AWS IoT Core │ │ Kafka Bridge │ │ Apache Kafka │ │ Eclipse Ditto │
│ (MQTT) │ │ /Analytics │ │ │ │ │
│ │ │ │ │ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │ │ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │Device │ │───▶│ │MQTT→Kafka │ │───▶│ │device- │ │───▶│ │Kafka Source │ │
│ │Commands │ │ │ │Bridge │ │ │ │commands │ │ │ │Connection │ │
│ │(MQTT topics)│ │ │ │ │ │ │ │topic │ │ │ │ │ │
│ └─────────────┘ │ │ └─────────────┘ │ │ └─────────────┘ │ │ └─────────────┘ │
│ ▲ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ ▼ │
│ │ │ │ │ │ │ │ ┌─────────────┐ │
│ │ │ │ │ │ │ │ │Command │ │
│ │ │ │ │ │ │ │ │Processing │ │
│ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ └─────────────┘ │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ ▼ │
│ │ │ │ │ │ │ │ ┌─────────────┐ │
│ │ │ │ │ │ │ │ │Response │ │
│ │ │ │ │ │ │ │ │Diversion │ │
│ │ │ │ │ │ │ │ │Interceptor │ │
│ │ │ │ │ │ │ │ └─────────────┘ │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ ▼ │
│ ┌─────────────┐ │ │ │ │ │ │ ┌─────────────┐ │
│ │Device │ │◀───┼─────────────────┼────┼─────────────────┼────│ │MQTT Target │ │
│ │Responses │ │ │ │ │ │ │ │Connection │ │
│ │(MQTT topics)│ │ │ │ │ │ │ │(AWS IoT) │ │
│ └─────────────┘ │ │ │ │ │ │ └─────────────┘ │
└─────────────────┘ └─────────────────┘ └─────────────────┘ └─────────────────┘
Legend:
───▶ Command Flow (MQTT → Kafka → Ditto)
◀─── Response Flow (Ditto → MQTT, bypassing Kafka)
Example Configuration
First, create the Kafka connection that consumes device commands:
{
"id": "kafka-commands-connection",
"connectionType": "kafka",
"connectionStatus": "open",
"uri": "tcp://kafka-broker:9092",
"specificConfig": {
"bootstrapServers": "kafka-broker:9092",
"saslMechanism": "plain"
},
"sources": [{
"addresses": ["device-commands"],
"authorizationContext": ["ditto:kafka-consumer"],
"headerMapping": {
"device-id": "{{ header:device-id }}",
"divert-response-to-connection": "aws-iot-mqtt-connection",
"divert-expected-response-types": "response,error"
}
}]
}
Next, create the MQTT connection that will handle diverted responses:
{
"id": "aws-iot-mqtt-connection",
"connectionType": "mqtt",
"connectionStatus": "open",
"uri": "ssl://your-iot-endpoint.amazonaws.com:8883",
"sources": [],
"targets": [
{
"address": "device/{{ header:device-id }}/response",
"topics": [],
"headerMapping": {
"device-id": "{{ header:device-id }}",
"correlation-id": "{{ header:correlation-id }}"
}
}
],
"specificConfig": {
"is-diversion-target": "true"
}
}
Flow explanation
- Command ingestion: The Kafka connection consumes device commands from the device-commands topic
- Response diversion: Commands are configured to divert responses to the aws-iot-mqtt-connection
- Response routing: Responses are automatically published to AWS IoT Core via MQTT on the device-specific response topic
- Device notification: Devices receive responses via their subscribed MQTT topics in AWS IoT Core
This setup enables a seamless flow from Kafka-based systems back to MQTT-based device communication without requiring external routing logic.
Try it out
Response diversion is available starting with Eclipse Ditto version 3.8.0. Update your deployment and start experimenting with multi-protocol workflows!
The feature documentation provides comprehensive configuration examples and troubleshooting guidance. We’d love to hear about your use cases and feedback.
Get started with response diversion today and unlock new possibilities for your IoT connectivity architecture.
–
The Eclipse Ditto team
October 06, 2025
Refactoring to Eclipse Collections with Java 25 at the dev2next Conference
by Donald Raab at October 06, 2025 02:58 AM
Showing what makes Java great after 30 years is the vibrant OSS ecosystem
Vladimir Zakharov and Donald Raab presenting “Refactoring to Eclipse Collections” at the dev2next Conference 2025
This blog will show you how Vladimir Zakharov and I live-refactored a single test case with nine method category unit tests at dev2next 2025. The test starts off passing using the built-in JDK Collections and Streams. We refactored it live in front of an audience to use Eclipse Collections. I will be refactoring the same test case as I write this blog, and explaining different lessons learned along the way. You can follow along as I refactor the code here, or accomplish this on your own by starting with the pre-refactored code available on GitHub. Here are the slides we used for the talk, available on GitHub.
Note: A Decade as OSS at the Eclipse Foundation
The Eclipse Collections library has been available in open source since December 2015, managed as a project at the Eclipse Foundation. Prior to that the GS Collections library, which was the Eclipse Collections predecessor, was open sourced in January 2012. That will be 14 years total in open source at the end of this year.
I have been conditioned for the past decade to start all conversations about Eclipse Collections with a statement that should be obvious, but unfortunately isn’t. You do not need to use the Eclipse IDE or any other IDE to use Eclipse Collections. Eclipse Collections is a standalone open source collections library for Java. See the following blog for more details.
Explaining the Eclipse prefix in Eclipse Collections
Now that the preamble is out of the way, let’s continue.
The Idea of Refactoring to Eclipse Collections
The idea of “Refactoring to Eclipse Collections” started out as an article by Kristen O’Leary and Vladimir Zakharov in June 2018. The two Goldman Sachs alumni wrote the following article for InfoQ.
Refactoring to Eclipse Collections: Making Your Java Streams Leaner, Meaner, and Cleaner
Kristen and Vlad wouldn’t know it at the time, but they had recognized something fundamentally important in this article, something I would go on to leverage to organize the chapters of my book “Eclipse Collections Categorically”: Method Categories.
You can see where Vlad and Kristen organized the methods in Eclipse Collections into Method Categories in their article.
Extracted from the Refactoring to Eclipse Collections InfoQ article
Neither Vlad, Kristen, nor I understood at the time this article was written, or even over the past seven years, how important the idea of grouping methods by method category would be for me when I wrote “Eclipse Collections Categorically.” When I wrote the book, I didn’t appreciate that Kristen and Vlad had a similar basic idea in their article. The book took this idea to its natural conclusion: the idea of Method Categories is a fundamentally missing feature in Java and most other file-based programming languages. This feature needs to be added to Java and other languages for developers to be able to better organize their APIs, both in the IDE and in documentation (e.g. Javadoc).
Read on to learn more.
Refactoring to Eclipse Collections, Revisited
Vlad approached me with the idea of submitting a talk to dev2next on “Refactoring to Eclipse Collections”, and I agreed.
When the talk was accepted, I thought it would be good to revise the code examples with a familiar domain concept that I had used in my book — Generation. As Java 25 was released a couple weeks before the talk, I upgraded the code examples to use Java 25 with and without Compact Object Headers (JEP 519) enabled. You can find some memory comparison charts in the slide deck linked above.
All of the code examples for Refactoring to Eclipse Collections can be found in the following GitHub repo.
Generation Alpha to the Rescue
Everything we have done in the past decade in Java has become a part of the history of Generation Alpha. We don’t hear much about Generation Alpha, because no one from this generation has graduated from high school yet. The beginning of Generation Alpha was 2013, which means no one in Generation Alpha will remember a time before Java had support for concise lambda expressions. Lambdas arrived in March 2014, with the release of Java 8.
Below is the full code for Generation enum that Vlad and I would use in our talk at dev2next 2025. This Java enum is somewhat similar to the Generation enum I use in my book, “Eclipse Collections Categorically.”
package refactortoec.generation;
import java.util.stream.IntStream;
import org.eclipse.collections.impl.list.primitive.IntInterval;
public enum Generation
{
UNCLASSIFIED("Unclassified", 0, 1842),
PROGRESSIVE("Progressive Generation", 1843, 1859),
MISSIONARY("Missionary Generation", 1860, 1882),
LOST("Lost Generation", 1883, 1900),
GREATEST("Greatest Generation", 1901, 1927),
SILENT("Silent Generation", 1928, 1945),
BOOMER("Baby Boomers", 1946, 1964),
X("Generation X", 1965, 1980),
MILLENNIAL("Millennials", 1981, 1996),
Z("Generation Z", 1997, 2012),
ALPHA("Generation Alpha", 2013, 2029);
private final String name;
private final YearRange years;
Generation(String name, int from, int to)
{
this.name = name;
this.years = new YearRange(from, to);
}
public int numberOfYears()
{
return this.years.count();
}
public IntInterval yearsInterval()
{
return this.years.interval();
}
public IntStream yearsStream()
{
return this.years.stream();
}
public boolean yearsCountEqualsEc(int years)
{
return this.yearsInterval().size() == years;
}
public boolean yearsCountEqualsJdk(int years)
{
return this.yearsStream().count() == years;
}
public String getName()
{
return this.name;
}
public boolean contains(int year)
{
return this.years.contains(year);
}
}
For our talk, we introduced a Java record called YearRange, which is used to store the start and end years for each Generation. This is different from the Generation in my book, which just stores an IntInterval. You will see that an IntInterval can be created from a YearRange by calling the method interval(). Similarly, an IntStream can be created from a YearRange by calling stream(). Both of these code paths look very similar. The difference between them is subtle. An instance of IntInterval can be used as many times as a developer needs. An instance of IntStream can only be used once, before the IntStream becomes exhausted and you have to create a new one.
package refactortoec.generation;
import java.util.stream.IntStream;
import org.eclipse.collections.impl.list.primitive.IntInterval;
public record YearRange(int from, int to)
{
public int count()
{
return this.to - this.from + 1;
}
public boolean contains(int year)
{
return this.from <= year && year <= this.to;
}
public IntStream stream()
{
return IntStream.rangeClosed(this.from, this.to);
}
public IntInterval interval()
{
return IntInterval.fromTo(this.from, this.to);
}
}
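To make the reuse difference concrete, here is a small sketch (IntInterval.count takes an Eclipse Collections IntPredicate):
IntInterval years = IntInterval.fromTo(1946, 1964);
int evens = years.count(year -> year % 2 == 0); // fine
int odds = years.count(year -> year % 2 == 1); // fine; an interval is reusable

IntStream stream = IntStream.rangeClosed(1946, 1964);
long size = stream.count();
// stream.count(); // would throw IllegalStateException: the stream has already been operated upon or closed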
GenerationJdk
For our talk, we created a class called GenerationJdk that contains the JDK specific elements of the code. GenerationJdk looks as follows.
package refactortoec.generation;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiFunction;
import java.util.stream.Gatherers;
import java.util.stream.Stream;
public class GenerationJdk
{
public static final Set<Generation> GENERATION_SET =
Set.of(Generation.values());
public static final Map<Integer, Generation> BY_YEAR =
GenerationJdk.groupEachByYear();
private static Map<Integer, Generation> groupEachByYear()
{
Map<Integer, Generation> map = new HashMap<>();
GENERATION_SET.forEach(generation ->
generation.yearsStream()
.forEach(year -> map.put(year, generation)));
return Map.copyOf(map);
}
public static Generation find(int year)
{
return BY_YEAR.getOrDefault(year, Generation.UNCLASSIFIED);
}
public static Stream<List<Generation>> windowFixedGenerations(int size)
{
return Arrays.stream(Generation.values())
.gather(Gatherers.windowFixed(size));
}
public static <IV> IV fold(IV value, BiFunction<IV, Generation, IV> function)
{
return GENERATION_SET.stream()
.gather(Gatherers.fold(() -> value, function))
.findFirst()
.orElse(value);
}
}
GenerationEc
There is an equivalent class that uses Eclipse Collections types and methods called GenerationEc, which looks as follows.
package refactortoec.generation;
import org.eclipse.collections.api.RichIterable;
import org.eclipse.collections.api.block.function.Function2;
import org.eclipse.collections.api.factory.Sets;
import org.eclipse.collections.api.map.primitive.ImmutableIntObjectMap;
import org.eclipse.collections.api.map.primitive.MutableIntObjectMap;
import org.eclipse.collections.api.set.ImmutableSet;
import org.eclipse.collections.impl.factory.primitive.IntObjectMaps;
import org.eclipse.collections.impl.list.fixed.ArrayAdapter;
public class GenerationEc
{
public static final ImmutableSet<Generation> GENERATION_IMMUTABLE_SET =
Sets.immutable.with(Generation.values());
public static final ImmutableIntObjectMap<Generation> BY_YEAR =
GenerationEc.groupEachByYear();
private static ImmutableIntObjectMap<Generation> groupEachByYear()
{
MutableIntObjectMap<Generation> map = IntObjectMaps.mutable.empty();
GENERATION_IMMUTABLE_SET.forEach(generation ->
generation.yearsInterval()
.forEach(year -> map.put(year, generation)));
return map.toImmutable();
}
public static Generation find(int year)
{
return BY_YEAR.getIfAbsent(year, () -> Generation.UNCLASSIFIED);
}
public static RichIterable<RichIterable<Generation>> chunkGenerations(int size)
{
return ArrayAdapter.adapt(Generation.values())
.asLazy()
.chunk(size);
}
public static <IV> IV fold(IV value, Function2<IV, Generation, IV> function)
{
return GENERATION_IMMUTABLE_SET.injectInto(value, function);
}
}
Set vs. ImmutableSet
The primary differences between GenerationJdk and GenerationEc are the types used for GENERATION_SET and IMMUTABLE_GENERATION_SET. In the talk, the differences between Set and ImmutableSet are explained in the following slides. First, we explain the difference of type, and how to be explicit about whether a type is Mutable or Immutable. We show how Eclipse Collections types can be used as drop-in-replacements for JDK types (Step 1 in slide), and how the types on the left can be migrated to more intention revealing types once the types on the right have been refactored (Step 2 in slide).
Determining if a Set is Mutable or Immutable in the JDK and Eclipse Collections
Note: The squirrel at the bottom left of this slide is what I used to mark slides I was presenting during our talk. I couldn’t easily screenshot this squirrel out of the picture. I hope it is not too distracting. :)
An ImmutableSet conveys its intent much more clearly than Set. Set is a mutable interface whose mutating methods are optional operations; an implementation may throw exceptions from the mutating methods. This is a surprise better left unhidden, exposed by a more explicit type like ImmutableSet, which has no mutating methods.
The biggest difference between Set and ImmutableSet is the number of methods available directly for developers on the collection types. The following Venn diagram shows the difference in the number of non-overloaded methods.
The number of non-overloaded methods on JDK Set and Eclipse Collections ImmutableSet
The large number of methods on ImmutableSet may seem daunting. This is where method categories help. Instead of sorting and scrolling through 158 methods, the methods can be grouped into just nine categories. The following slide shows how I accomplished this in IntelliJ using Custom Code Folding Regions to emulate Method Categories, which are available natively in Smalltalk IDEs.
Using Custom Code Folding Regions in IntelliJ to simulate Method Categories
What may be less obvious is that a developer has to look in five places to find all of the behaviors for JDK Set. There are methods in Set, Collections, Stream, Collectors, and Gatherers, for a total of 170 methods. Note, not all of the methods in the Collections utility class work for Set. Some are specific to List and Map. There is no organized way of viewing the 64 methods there. Just scroll.
Other Differences in GenerationJdk and GenerationEc
Another difference in these two classes is the groupEachByYear methods. We kept these methods equivalent in that they use nested forEach calls to build a Map. The keys in the map are individual years as int values, and the values are Generation instances corresponding to each year. In the case of the JDK, a Map<Integer, Generation> is used. In the case of EC, an ImmutableIntObjectMap<Generation> is used. The ImmutableIntObjectMap<Generation> reveals the intent that this map cannot be modified, whereas the Map<Integer, Generation> cannot do this, even though the Map.copyOf() call creates an immutable copy of the Map. The primitive IntObjectMap used by EC will generate a map that takes less memory than the Map used by the JDK because the int keys will not be boxed as Integer objects.
The two other differences in these classes are the methods used for windowFixed/chunk and fold. The method chunk in Eclipse Collections can either be used directly by calling chunk on the collection (eager), or by calling asLazy first (lazy). The lazy version is arguably better in the example we use because we don’t hold onto the chunked results after computation is finished. Waste not, want not.
In Eclipse Collections, we categorize chunk as a grouping operation. It groups elements of a collection together based on an int value. So if you have a collection of 10 items and call chunk(3), you will wind up with a collection of 4 collections of sizes 3, 3, 3, and 1.
The method fold is useful for aggregating results. In the test class I will refactor in this blog, we will see how to use fold to calculate the max, min, and sum of items in a collection using fold. In Eclipse Collections, the method that is the equivalent of fold in the JDK is named injectInto.
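Here is a small sketch of both operations on a ten-element list (Lists.mutable is the Eclipse Collections list factory):
MutableList<Integer> ten = Lists.mutable.with(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);

// Grouping: chunk eagerly, or call asLazy() first to avoid holding the results
RichIterable<RichIterable<Integer>> chunks = ten.chunk(3); // sizes 3, 3, 3, 1

// Folding: injectInto is the Eclipse Collections equivalent of fold
Integer sum = ten.injectInto(0, (result, each) -> result + each); // 55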
Refactoring to Eclipse Collections
There is a single test class in the GitHub repository that we leveraged for live refactoring from JDK to Eclipse Collections. The test class is linked below.
The Javadoc for this class is intended to act as a guide for developers to refactor this class on their own. Check out the whole project from this GitHub repo and give it a try!
The class level Javadoc explains how the test is organized into method categories that will test multiple methods.
/**
* In this test we will refactor from JDK patterns to Eclipse Collections
* patterns. The categories of patterns we will cover in this refactoring are:
*
* <ul>
* <li>Counting - 🧮</li>
* <li>Testing - 🧪</li>
* <li>Finding - 🔎</li>
* <li>Filtering - 🚰</li>
* <li>Grouping - 🏘️</li>
* <li>Converting - 🔌</li>
* <li>Transforming - 🦋</li>
* <li>Chunking - 🖖</li>
* <li>Folding - 🪭</li>
* </ul>
*
* Note: We work with unit tests so we know code works to start, and continues
* to work after the refactoring is complete.
*/
Refactoring to use a drop-in-replacement
The first refactoring we did during our talk was to replace all references in this class to GENERATION_SET, which is stored on GenerationJdk, with GENERATION_IMMUTABLE_SET, which is stored on GenerationEc.
For a small example, the following code would be transformed as follows:
// BEFORE
// Counting with Predicate -> Count of Generation instances that match
long count = GENERATION_SET.stream()
.filter(generation -> generation.contains(1995))
.count();
// AFTER
// Counting with Predicate -> Count of Generation instances that match
long count = GENERATION_IMMUTABLE_SET.stream()
.filter(generation -> generation.contains(1995))
.count();
After the search and replace in the test, we run all of the methods and see that they all still pass.
Now we will continue refactoring each of the method categories included in this test class.
Refactoring Counting 🧮
The first category of methods we will refactor are counting methods.
JDK Collections / Streams
/**
* There are two use cases for counting we will explore.
* <ol>
* <li>Counting with a Predicate -> return is a primitive value</li>
* <li>Counting by a Function -> return is a Map<Integer, Long></li>
* </ol>
*/
@Test
public void counting() // 🧮
{
// Counting with Predicate -> Count of Generation instances that match
long count = GENERATION_IMMUTABLE_SET.stream()
.filter(generation -> generation.contains(1995))
.count();
assertEquals(1L, count);
// Counting by a Function -> Number of years in a Generation ->
// Count of Generations
Map<Integer, Long> generationCountByYears =
GENERATION_IMMUTABLE_SET.stream()
.collect(Collectors.groupingBy(Generation::numberOfYears,
Collectors.counting()));
var expected = new HashMap<>();
expected.put(17, 2L);
expected.put(16, 3L);
expected.put(19, 1L);
expected.put(18, 2L);
expected.put(23, 1L);
expected.put(27, 1L);
expected.put(1843, 1L);
assertEquals(expected, generationCountByYears);
assertNull(generationCountByYears.get(30));
}
Refactoring Counting to Eclipse Collections
@Test
public void counting() // 🧮
{
// Counting with Predicate -> Count of Generation instances that match
int count = GENERATION_IMMUTABLE_SET
.count(generation -> generation.contains(1995));
assertEquals(1, count);
// Counting by a Function -> Number of years in a Generation ->
// Count of Generations
ImmutableBag<Integer> generationCountByYears =
GENERATION_IMMUTABLE_SET.countBy(Generation::numberOfYears);
var expected = Bags.mutable.withOccurrences(17, 2)
.withOccurrences(16, 3)
.withOccurrences(19, 1)
.withOccurrences(18, 2)
.withOccurrences(23, 1)
.withOccurrences(27, 1)
.withOccurrences(1843, 1);
assertEquals(expected, generationCountByYears);
assertEquals(0, generationCountByYears.occurrencesOf(30));
}
Lessons Learned from Counting
Using Java Stream to count, first requires you to learn how to use filter. The method count() on Stream returns a long, but takes no parameter. It is the size of the Stream.
With Eclipse Collections, the count method takes a Predicate as a parameter, and counts the elements that match the Predicate.
Notice that the bun methods disappear here. Eclipse Collections gets to the point immediately. We are using count or countBy. These are active verbs, not gerunds. They do not require bun methods like stream and collect, the calls that sandwich the real work in a Stream pipeline. These methods are available directly on the collections themselves. Both of these methods are eager, not lazy. They have a specific terminal result at the end of computation (int or Bag).
A Stream will return a long for a count, because a Stream can be sourced from things other than collections (e.g. files). Collection types in Java have a max size of int. In the case of Eclipse Collections, the only things the library deals with are collections, so the result of count will never be bigger than the max size of a collection, which is an int.
The less obvious thing that is happening here is the covariant nature of countBy, and other methods on Eclipse Collections collection types. When a collection type is returned from a method, the source collection determines the result type. In the case of an ImmutableSet<Generation>, which is the type of GENERATION_IMMUTABLE_SET, the result type for countBy is an ImmutableBag<Integer>. The Map returned by the Stream version of the code is not immutable, but you wouldn’t know that from the interface named Map, because it can’t tell you.
Lastly, a Bag is a safer data structure to return than a Map for countBy. This is because a Map will return null for missing keys, where a Bag knows it is a counter, so will return 0 for missing keys when occurrencesOf is used.
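A small sketch of that difference (assuming the Bags factory from Eclipse Collections):
MutableBag<String> counts = Bags.mutable.with("a", "a", "b");
int missing = counts.occurrencesOf("c"); // 0 — no null handling needed

Map<String, Long> mapCounts = Map.of("a", 2L, "b", 1L);
Long alsoMissing = mapCounts.get("c"); // null — the caller must remember to check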
Refactoring Testing 🧪
The next category of methods we will refactor are testing methods. A testing method will always return a boolean result.
JDK Collections / Streams
/**
* Testing methods return a boolean. We will explore three testing methods.
* Testing methods are always eager, but can often short-circuit execution,
* meaning they don't have to visit all elements of the collection if the
* condition is met.
*<ol>
*<li>Stream.anyMatch(Predicate) -> RichIterable.anySatisfy(Predicate)</li>
*<li>Stream.allMatch(Predicate) -> RichIterable.allSatisfy(Predicate)</li>
*<li>Stream.noneMatch(Predicate) -> RichIterable.noneSatisfy(Predicate)</li>
*</ol>
*/
@Test
public void testing() // 🧪
{
assertTrue(GENERATION_IMMUTABLE_SET.stream()
.anyMatch(generation -> generation.contains(1995)));
assertFalse(GENERATION_IMMUTABLE_SET.stream()
.allMatch(generation -> generation.contains(1995)));
assertFalse(GENERATION_IMMUTABLE_SET.stream()
.noneMatch(generation -> generation.contains(1995)));
}
Refactoring Testing to Eclipse Collections
@Test
public void testing() // 🧪
{
assertTrue(GENERATION_IMMUTABLE_SET
.anySatisfy(generation -> generation.contains(1995)));
assertFalse(GENERATION_IMMUTABLE_SET
.allSatisfy(generation -> generation.contains(1995)));
assertFalse(GENERATION_IMMUTABLE_SET
.noneSatisfy(generation -> generation.contains(1995)));
}
Lessons Learned from Testing
There are other methods for testing that we did not cover in this refactoring. Examples are contains, isEmpty, notEmpty, containsBy, containsAll, containsAny, containsNone.
The simple pattern to remember when refactoring anyMatch/allMatch/noneMatch is that the suffix Match in the JDK becomes Satisfy in Eclipse Collections. The biggest difference is that the call to stream is removed, as it is unnecessary. The methods are available directly on the collections themselves in Eclipse Collections.
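For instance, a couple of the testing methods we did not refactor look like this (a sketch; containsBy pairs a Function with a value to search for):
ImmutableSet<Generation> generations = Sets.immutable.with(Generation.values());
boolean named = generations.containsBy(Generation::getName, "Millennials"); // true
boolean inYear = generations.anySatisfyWith((generation, year) -> generation.contains(year), 1995); // true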
Refactoring Finding 🔎
The next category of methods are finding methods. A finding method is one that returns an element of the collection. There are methods that can search for elements based on Predicate or Function.
JDK Collections / Streams
/**
* Finding methods return some element of a collection. Finding methods are
* always eager.
* <ol>
* <li>Stream.filter(Predicate).findFirst() -> RichIterable.detect(Predicate) / detectOptional(Predicate)</li>
* <li>Collectors.maxBy(Comparator) -> RichIterable.maxBy(Function)</li>
* <li>Collectors.minBy(Comparator) -> RichIterable.minBy(Function)</li>
* <li>Stream.filter(Predicate.not()) -> RichIterable.reject(Predicate)</li>
* </ol>
*/
@Test
public void finding() // 🔎
{
Generation findFirst =
GENERATION_IMMUTABLE_SET.stream()
.filter(generation -> generation.contains(1995))
.findFirst()
.orElse(null);
assertEquals(MILLENNIAL, findFirst);
Generation notFound =
GENERATION_IMMUTABLE_SET.stream()
.filter(generation -> generation.contains(1795))
.findFirst()
.orElse(UNCLASSIFIED);
assertEquals(UNCLASSIFIED, notFound);
List<Generation> generationsNotUnclassified =
Stream.of(Generation.values())
.filter(gen -> !gen.equals(UNCLASSIFIED))
.toList();
Generation maxByYears =
generationsNotUnclassified.stream()
.collect(Collectors.maxBy(
Comparator.comparing(Generation::numberOfYears)))
.orElse(null);
assertEquals(GREATEST, maxByYears);
Generation minByYears =
generationsNotUnclassified.stream()
.collect(Collectors.minBy(
Comparator.comparing(Generation::numberOfYears)))
.orElse(null);
assertEquals(X, minByYears);
}
Refactoring Finding to Eclipse Collections
@Test
public void finding() // 🔎
{
Generation findFirst = GENERATION_IMMUTABLE_SET
.detect(generation -> generation.contains(1995));
assertEquals(MILLENNIAL, findFirst);
Generation notFound = GENERATION_IMMUTABLE_SET
.detectIfNone(
generation -> generation.contains(1795),
() -> UNCLASSIFIED);
assertEquals(UNCLASSIFIED, notFound);
MutableList<Generation> generationsNotUnclassified =
ArrayAdapter.adapt(Generation.values())
.reject(gen -> gen.equals(UNCLASSIFIED));
Generation maxByYears =
generationsNotUnclassified.maxBy(Generation::numberOfYears);
assertEquals(GREATEST, maxByYears);
Generation minByYears =
generationsNotUnclassified.minBy(Generation::numberOfYears);
assertEquals(X, minByYears);
}
Lessons Learned from Finding
Again, we see that finding in the JDK is dependent on the method filter. The method findFirst is terminal in the JDK and takes no parameters. It returns an Optional, which we then have to query to see if something was actually returned from the call to filter. We write cases where something is found, and something is not found.
The Eclipse Collections detect method takes a Predicate as a parameter and returns the found element, or null if nothing matches. If we want to protect against the null return case, we can use detectIfNone, which takes a Predicate and a Function0 as parameters. The Function0 is evaluated only in the case where nothing is found.
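The category list above also mentions detectOptional, which returns an Optional instead of null. A minimal sketch, assuming the same constants:
// Nothing contains 1795, so the Optional is empty and we fall back.
Optional<Generation> maybeFound = GENERATION_IMMUTABLE_SET
        .detectOptional(generation -> generation.contains(1795));
assertEquals(UNCLASSIFIED, maybeFound.orElse(UNCLASSIFIED));
assertTrue(GENERATION_IMMUTABLE_SET
        .detectOptional(generation -> generation.contains(1995))
        .isPresent());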
We also see that the JDK's filter method has no filterNot equivalent. Instead, we have to negate the Predicate using a ! in the lambda, or wrap the Predicate in a call to Predicate.not().
Eclipse Collections has a method named reject that filters exclusively. As we will see in the next category (filtering), Eclipse Collections also has a method named select which filters inclusively.
Refactoring Filtering 🚰
The filtering category includes methods like filter and partition. In Eclipse Collections, the method names are select (inclusive filter), reject (exclusive filter), and partition (a one-pass select and reject).
JDK Collections / Streams
/**
* Filtering methods return another Stream or Collection based on a Predicate.
* Filtering can be eager or lazy. We will explore three filtering methods.
* <ol>
* <li>Stream.filter(Predicate) -> RichIterable.select(Predicate)</li>
* <li>Stream.filter(Predicate.not()) -> RichIterable.reject(Predicate)</li>
* <li>Collectors.partitioningBy(Predicate) -> RichIterable.partition(Predicate)</li>
* </ol>
*/
@Test
public void filtering() // 🚰
{
Set<Generation> filteredSelected =
GENERATION_IMMUTABLE_SET.stream()
.filter(generation -> generation.yearsCountEqualsJdk(16))
.collect(Collectors.toUnmodifiableSet());
var expectedSelected = Set.of(X, MILLENNIAL, Z);
assertEquals(expectedSelected, filteredSelected);
Set<Generation> filteredRejected =
GENERATION_IMMUTABLE_SET.stream()
.filter(generation -> !generation.yearsCountEqualsJdk(16))
.collect(Collectors.toUnmodifiableSet());
var expectedRejected = Sets.mutable.with(
ALPHA, UNCLASSIFIED, BOOMER, GREATEST, LOST,
MISSIONARY, PROGRESSIVE, SILENT);
assertEquals(expectedRejected, filteredRejected);
Map<Boolean, Set<Generation>> partition = GENERATION_IMMUTABLE_SET.stream()
.collect(Collectors.partitioningBy(
generation -> generation.yearsCountEqualsJdk(16),
Collectors.toUnmodifiableSet()));
assertEquals(expectedSelected, partition.get(Boolean.TRUE));
assertEquals(expectedRejected, partition.get(Boolean.FALSE));
}
Refactoring Filtering to Eclipse Collections
@Test
public void filtering() // 🚰
{
ImmutableSet<Generation> filteredSelected =
GENERATION_IMMUTABLE_SET
.select(generation -> generation.yearsCountEqualsJdk(16));
var expectedSelected = Set.of(X, MILLENNIAL, Z);
assertEquals(expectedSelected, filteredSelected);
ImmutableSet<Generation> filteredRejected =
GENERATION_IMMUTABLE_SET
.reject(generation -> generation.yearsCountEqualsJdk(16));
var expectedRejected = Sets.mutable.with(
ALPHA, UNCLASSIFIED, BOOMER, GREATEST, LOST,
MISSIONARY, PROGRESSIVE, SILENT);
assertEquals(expectedRejected, filteredRejected);
PartitionImmutableSet<Generation> partition = GENERATION_IMMUTABLE_SET
.partition(generation -> generation.yearsCountEqualsJdk(16));
assertEquals(expectedSelected, partition.getSelected());
assertEquals(expectedRejected, partition.getRejected());
}
Lessons Learned from Filtering
While the name filtering makes sense for a method category, the name filter is ambiguous as a method. It is not clear by the name alone whether the method is meant to be an inclusive or exclusive filter. The methods select and reject in Eclipse Collections disambiguate through their names.
The method partition in Eclipse Collections returns a special type, in this case a PartitionImmutableSet. Again, we see that methods in EC are covariant, and return specialized types based on the source type.
The filtering methods on Eclipse Collections collection types are all eager. If we want lazy versions of the methods, we can call asLazy() first and then, as with Java Stream, call a terminal method like toList(). There are many more methods available on LazyIterable than on Stream, as LazyIterable extends RichIterable.
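For example, a lazy version of the select call from the filtering test might look like this sketch (reusing the constant and the yearsCountEqualsJdk domain method from above):
// select is deferred; nothing is evaluated until the terminal toList() call.
MutableList<Generation> lazySelected = GENERATION_IMMUTABLE_SET
        .asLazy()
        .select(generation -> generation.yearsCountEqualsJdk(16))
        .toList();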
Now, to address the return type of Map<Boolean, Set<Generation>> from the Collectors.partitioningBy() method. It is difficult (although not impossible) to think of a worse return type for this method. A Map<Boolean, Anything> is a bad idea. I think it is so bad that Eclipse Collections primitive maps deliberately do not support BooleanToAnything maps; we explicitly decided not to support these types. There are much better alternatives, like using a Record with explicit names, or introducing a specific type as we did in Eclipse Collections with PartitionIterable. If you want me to explain more about why Map<Boolean, Anything> is bad, there is a blog for that, titled “Map-Oriented Programming in Java.” Enjoy!
Map-Oriented Programming in Java
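For illustration, the Record alternative mentioned above can be as simple as this sketch (the record and component names are hypothetical, not from the talk):
// A named pair with explicit accessors instead of map.get(Boolean.TRUE).
record SelectedRejected<T>(Set<T> selected, Set<T> rejected) {}

// Reading result.selected() is far clearer than partition.get(Boolean.TRUE).
SelectedRejected<Generation> result =
        new SelectedRejected<>(Set.of(X, MILLENNIAL, Z), Set.of(BOOMER));
assertEquals(Set.of(X, MILLENNIAL, Z), result.selected());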
Refactoring Grouping 🏘️
The grouping category was limited to just groupBy in this talk. There are other methods that are categorized as grouping in Eclipse Collections. You can see the full list of EC methods included in the grouping category in the slide above with the Custom Code Folding Regions demonstrated in IntelliJ.
JDK Collections / Streams
/**
* Grouping methods return a Map with some key calculated by a Function and
* the values contained in a Collection. We will explore one grouping method.
*
* <ol>
* <li>Collectors.groupingBy(Function) -> RichIterable.groupBy(Function)</li>
* </ol>
*/
@Test
public void grouping() // 🏘️
{
Map<Integer, Set<Generation>> generationByYears =
GENERATION_IMMUTABLE_SET.stream()
.collect(Collectors.groupingBy(
Generation::numberOfYears,
Collectors.toSet()));
var expected = new HashMap<>();
expected.put(17, Set.of(ALPHA, PROGRESSIVE));
expected.put(16, Set.of(X, MILLENNIAL, Z));
expected.put(19, Set.of(BOOMER));
expected.put(18, Set.of(SILENT, LOST));
expected.put(23, Set.of(MISSIONARY));
expected.put(27, Set.of(GREATEST));
expected.put(1843, Set.of(UNCLASSIFIED));
assertEquals(expected, generationByYears);
assertNull(generationByYears.get(30));
}
Refactoring Grouping to Eclipse Collections
@Test
public void grouping() // 🏘️
{
ImmutableSetMultimap<Integer, Generation> generationByYears =
GENERATION_IMMUTABLE_SET.groupBy(Generation::numberOfYears);
var expected = Multimaps.immutable.set.empty()
.newWithAll(17, Set.of(ALPHA, PROGRESSIVE))
.newWithAll(16, Set.of(X, MILLENNIAL, Z))
.newWithAll(19, Set.of(BOOMER))
.newWithAll(18, Set.of(SILENT, LOST))
.newWithAll(23, Set.of(MISSIONARY))
.newWithAll(27, Set.of(GREATEST))
.newWithAll(1843, Set.of(UNCLASSIFIED));
assertEquals(expected, generationByYears);
assertTrue(generationByYears.get(30).isEmpty());
}
Lessons Learned from Grouping
I will refer you to the blog on Map-Oriented Programming in Java again. The groupBy method in Eclipse Collections returns a special type called Multimap. A Multimap is a collection type that knows its values are some type of collection. A Multimap can gracefully handle a sparsely populated data set by returning an empty collection when a key is missing. A Map will return null for missing keys. The test case illustrates this.
We see yet again, that the groupBy method is covariant on Eclipse Collections types. An ImmutableSet returns an ImmutableSetMultimap when calling groupBy on it.
Creating a Multimap is more involved than creating other types. We use the Multimaps factory class here, choosing immutable and set to refine the Multimap type we want into an ImmutableSetMultimap. If you go to the first paragraph of this blog, you will find a link to the slides for our talk, which include a slide explaining all of the combinations of Eclipse Collections factories.
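As a hedged sketch, a few of the other factory combinations look like this (the variable names are mine; the factory calls mirror the immutable set factory used above):
// Mutable Multimap variants, chosen by backing collection type.
MutableListMultimap<Integer, Generation> listMultimap =
        Multimaps.mutable.list.empty();
MutableSetMultimap<Integer, Generation> setMultimap =
        Multimaps.mutable.set.empty();
MutableBagMultimap<Integer, Generation> bagMultimap =
        Multimaps.mutable.bag.empty();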
Refactoring Converting 🔌
The category of converting includes 29 methods in Eclipse Collections. We only cover the toList and toImmutableList converter methods in this talk. The converter methods in the JDK are limited to toList on Stream and a bunch of toXyz methods on Collectors.
JDK Collections / Streams
/**
* Converting methods convert from a source Collection type to a target
* Collection type. Converting methods in both Java and Eclipse Collections
* usually have a prefix of "to". We'll explore a few converting methods
* in this test.
* <ol>
* <li>Collectors.toList() -> RichIterable.toList()</li>
* <li>Stream.toList() -> RichIterable.toImmutableList()</li>
* </ol>
*/
@Test
public void converting() // 🔌
{
List<Generation> mutableList =
GENERATION_IMMUTABLE_SET.stream()
.collect(Collectors.toList());
List<Generation> immutableList =
GENERATION_IMMUTABLE_SET.stream()
.toList();
List<Generation> sortedMutableList =
mutableList.stream()
.sorted(Comparator.comparing(
gen -> gen.yearsStream().findFirst().getAsInt()))
.collect(Collectors.toList());
var expected = Lists.mutable.with(values());
assertEquals(expected, sortedMutableList);
List<Generation> sortedImmutableList =
immutableList.stream()
.sorted(Comparator.comparing(
gen -> gen.yearsStream().findFirst().getAsInt()))
.toList();
assertEquals(expected, sortedImmutableList);
}
Refactoring Converting to Eclipse Collections
@Test
public void converting() // 🔌
{
MutableList<Generation> mutableList =
GENERATION_IMMUTABLE_SET.toList();
ImmutableList<Generation> immutableList =
GENERATION_IMMUTABLE_SET.toImmutableList();
MutableList<Generation> sortedMutableList =
mutableList.toSortedListBy(
gen -> gen.yearsInterval().getFirst());
var expected = Lists.mutable.with(values());
assertEquals(expected, sortedMutableList);
ImmutableList<Generation> sortedImmutableList =
immutableList.toImmutableSortedListBy(
gen -> gen.yearsInterval().getFirst());
assertEquals(expected, sortedImmutableList);
}
Lessons Learned from Converting
The methods for converting from one collection type to another are extremely helpful. They are also extremely limited on the Stream interface. It is confusing that the method named toList on Collectors does not return the same kind of List as the method named toList on Stream.
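A hedged sketch of that difference, assuming JUnit 5's assertThrows: Stream.toList() is specified to return an unmodifiable List, while Collectors.toList() makes no guarantee either way (on current JDKs it happens to return a mutable ArrayList).
List<Generation> fromCollectors = GENERATION_IMMUTABLE_SET.stream()
        .collect(Collectors.toList());
fromCollectors.add(UNCLASSIFIED); // allowed on current JDKs, but not guaranteed

List<Generation> fromStream = GENERATION_IMMUTABLE_SET.stream().toList();
assertThrows(UnsupportedOperationException.class,
        () -> fromStream.add(UNCLASSIFIED)); // always unmodifiable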
While we limited the converting category to methods for converting to mutable and immutable Lists, the following blog shows the large number of potential targets for converting methods prefixed with to in Eclipse Collections.
Converter methods in Eclipse Collections
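A few more of those converter targets, sketched against the same source constant:
MutableSet<Generation> mutableSet = GENERATION_IMMUTABLE_SET.toSet();
MutableBag<Generation> bag = GENERATION_IMMUTABLE_SET.toBag();
MutableList<Generation> sortedByName =
        GENERATION_IMMUTABLE_SET.toSortedListBy(Generation::getName);
ImmutableList<Generation> immutableSortedByName =
        GENERATION_IMMUTABLE_SET.toImmutableSortedListBy(Generation::getName);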
Refactoring Transforming 🦋
The transforming category includes methods like JDK map and EC collect. These methods transform the element type of a collection to a different type (e.g. Generation -> String).
JDK Collections / Streams
/**
* Transforming methods convert the elements of a collection to another type by
* applying a Function to each element. We'll explore the following methods.
*
* <ol>
* <li>Stream.map() -> RichIterable.collect()</li>
* <li>Collectors.toUnmodifiableSet() -> ???</li>
* </ol>
*
* Note: Certain methods on RichIterable are covariant, so they return a type
* that makes sense for the source type.
* Hint: If we collect on an ImmutableSet, the return type is an ImmutableSet.
*/
@Test
public void transforming() // 🦋
{
Set<String> names =
GENERATION_IMMUTABLE_SET.stream()
.map(Generation::getName)
.collect(Collectors.toUnmodifiableSet());
var expected = Sets.immutable.with(
"Unclassified", "Greatest Generation", "Lost Generation", "Millennials",
"Generation X", "Baby Boomers", "Generation Z", "Silent Generation",
"Progressive Generation", "Generation Alpha", "Missionary Generation");
assertEquals(expected, names);
Set<String> mutableNames = names.stream()
.collect(Collectors.toSet());
assertEquals(expected, mutableNames);
}
Refactoring Transforming to Eclipse Collections
@Test
public void transforming() // 🦋
{
ImmutableSet<String> names =
GENERATION_IMMUTABLE_SET.collect(Generation::getName);
var expected = Sets.immutable.with(
"Unclassified", "Greatest Generation", "Lost Generation", "Millennials",
"Generation X", "Baby Boomers", "Generation Z", "Silent Generation",
"Progressive Generation", "Generation Alpha", "Missionary Generation");
assertEquals(expected, names);
MutableSet<String> mutableNames = names.toSet();
assertEquals(expected, mutableNames);
}
Lessons Learned from Transforming
We see that the collect method in Eclipse Collections, like select, reject, partition, countBy, and groupBy, is covariant. Using collect on an ImmutableSet returns an ImmutableSet. The collect method is the equivalent of map on the JDK Stream type. It is not the same as the collect method on the Stream type.
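A minimal sketch of the two methods named collect side by side, reusing the test data above:
// Stream.collect is a terminal reduction driven by a Collector...
Set<String> viaStream = GENERATION_IMMUTABLE_SET.stream()
        .map(Generation::getName)
        .collect(Collectors.toUnmodifiableSet());
// ...while Eclipse Collections collect transforms each element with a Function.
ImmutableSet<String> viaCollect =
        GENERATION_IMMUTABLE_SET.collect(Generation::getName);
assertEquals(viaCollect, viaStream);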
The following section on collect from the “Eclipse Collections Categorically” book explains the difference between collect on Stream and collect in Eclipse Collections.
Explaining the difference between the method named collect in Eclipse Collections, and collect on Java Stream
Refactoring Chunking 🖖
Chunking could also be grouped in the category of grouping. We differentiated it in our talk because the capability of chunking was added to Java as a method named windowFixed on the new Gatherers type. The method that provides the same behavior as windowFixed in Eclipse Collections is simply named chunk.
Note: The hand emoji above reminded me of taking a collection of five fingers and chunking it by two. This leaves three chunks, with 2, 2, and 1 fingers.
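That arithmetic is easy to check with chunk itself; a tiny sketch:
// Five elements chunked by two yield three chunks of sizes 2, 2, and 1.
RichIterable<RichIterable<Integer>> fingerChunks =
        Lists.mutable.with(1, 2, 3, 4, 5).chunk(2);
assertEquals(3, fingerChunks.size());
assertEquals("[1, 2], [3, 4], [5]", fingerChunks.makeString(", "));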
JDK Collections / Streams
/**
* Chunking is a kind of grouping method, but for our purposes we will put
* the methods in their own category. Chunking is great for breaking
* collections into smaller collections based on a size parameter.
* We'll explore the following methods.
*
* <ol>
* <li>Stream.gather(Gatherers.windowFixed()) -> RichIterable.chunk()</li>
* <li>Collectors.joining() -> RichIterable.makeString()</li>
* </ol>
*/
@Test
public void chunking() // 🖖
{
Stream<List<Generation>> windowFixedGenerations =
GenerationJdk.windowFixedGenerations(3);
String generationsAsString = windowFixedGenerations.map(Object::toString)
.collect(Collectors.joining(", "));
String expected = """
[UNCLASSIFIED, PROGRESSIVE, MISSIONARY], [LOST, GREATEST, SILENT], \
[BOOMER, X, MILLENNIAL], [Z, ALPHA]""";
assertEquals(expected, generationsAsString);
String yearsAsString = MILLENNIAL.yearsStream()
.boxed()
.gather(Gatherers.windowFixed(4))
.map(Object::toString)
.collect(Collectors.joining(", "));
String expectedYears = """
[1981, 1982, 1983, 1984], [1985, 1986, 1987, 1988], \
[1989, 1990, 1991, 1992], [1993, 1994, 1995, 1996]""";
assertEquals(expectedYears, yearsAsString);
}
The additional code to explore is in GenerationJdk.
public static Stream<List<Generation>> windowFixedGenerations(int size)
{
return Arrays.stream(Generation.values())
.gather(Gatherers.windowFixed(size));
}
Refactoring Chunking to Eclipse Collections
@Test
public void chunking() // 🖖
{
RichIterable<RichIterable<Generation>> chunkedGenerations =
GenerationEc.chunkGenerations(3);
String generationsAsString = chunkedGenerations.makeString(", ");
String expected = """
[UNCLASSIFIED, PROGRESSIVE, MISSIONARY], [LOST, GREATEST, SILENT], \
[BOOMER, X, MILLENNIAL], [Z, ALPHA]""";
assertEquals(expected, generationsAsString);
String yearsAsString = MILLENNIAL.yearsInterval()
.chunk(4)
.makeString(", ");
String expectedYears = """
[1981, 1982, 1983, 1984], [1985, 1986, 1987, 1988], \
[1989, 1990, 1991, 1992], [1993, 1994, 1995, 1996]""";
assertEquals(expectedYears, yearsAsString);
}
The additional code to explore is in GenerationEc.
public static RichIterable<RichIterable<Generation>> chunkGenerations(int size)
{
return ArrayAdapter.adapt(Generation.values())
.asLazy()
.chunk(size);
}
Lessons Learned from Chunking
This is the first time we used Gatherers in this talk. The first thing to notice about the gather method on Stream is that there is no equivalent of gather on IntStream, LongStream, or DoubleStream. The chunk method, on the other hand, is available for both Object and primitive collections in Eclipse Collections.
The method named chunk is available as an eager method directly on collections, and also lazily via a call to asLazy. The code could be changed to be eager as follows, but there would be a slight performance hit because a temporary collection would be created as a result.
public static RichIterable<RichIterable<Generation>> chunkGenerations(int size)
{
return ArrayAdapter.adapt(Generation.values())
.chunk(size);
}
Notice how the return type of chunk is still RichIterable<RichIterable<Generation>> when we remove the call to asLazy. This is because a LazyIterable is a RichIterable, and an ImmutableSet is also a RichIterable. They behave differently for certain methods, but have a consistent API.
Refactoring Folding 🪭
The folding category is actually called aggregating in Eclipse Collections. For this talk, we separated it out as a category to explain the fold method on the JDK's Gatherers class. The method that is equivalent to fold in Eclipse Collections is called injectInto.
JDK Collections / Streams
/**
* Folding is a mechanism for reducing a type to some new result type.
* We'll explore folding to calculate a min, max, and sum.
* Methods we'll cover:
* <ol>
* <li>Stream.gather(Gatherers.fold()) -> RichIterable.injectInto()</li>
* </ol>
*/
@Test
public void folding() // 🪭
{
Integer maxYears = GenerationJdk.fold(
Integer.MIN_VALUE,
(Integer value, Generation generation) ->
Math.max(value, generation.numberOfYears()));
Integer minYears = GenerationJdk.fold(
Integer.MAX_VALUE,
(Integer value, Generation generation) ->
Math.min(value, generation.numberOfYears()));
Integer sumYears = GenerationJdk.fold(
Integer.valueOf(0),
(Integer value, Generation generation) ->
Integer.sum(value, generation.numberOfYears()));
assertEquals(1843, maxYears);
assertEquals(16, minYears);
assertEquals(2030, sumYears);
}
The additional code to explore is in GenerationJdk.fold().
public static <IV> IV fold(IV value, BiFunction<IV, Generation, IV> function)
{
return GENERATION_SET.stream()
.gather(Gatherers.fold(() -> value, function))
.findFirst()
.orElse(value);
}
Refactoring Folding to Eclipse Collections
@Test
public void folding() // 🪭
{
Integer maxYears = GenerationEc.fold(
Integer.MIN_VALUE,
(Integer value, Generation generation) ->
Math.max(value, generation.numberOfYears()));
Integer minYears = GenerationEc.fold(
Integer.MAX_VALUE,
(Integer value, Generation generation) ->
Math.min(value, generation.numberOfYears()));
Integer sumYears = GenerationEc.fold(
Integer.valueOf(0),
(Integer value, Generation generation) ->
Integer.sum(value, generation.numberOfYears()));
assertEquals(1843, maxYears);
assertEquals(16, minYears);
assertEquals(2030, sumYears);
}
The additional code to explore is in GenerationEc.fold().
public static <IV> IV fold(IV value, Function2<IV, Generation, IV> function)
{
return GENERATION_IMMUTABLE_SET.injectInto(value, function);
}
Lessons Learned from Folding
The approach taken for folding in the JDK is unnecessarily convoluted. If we compare fold and injectInto next to each other, this will be clearer.
// JDK fold
public static <IV> IV fold(IV value, BiFunction<IV, Generation, IV> function)
{
return GENERATION_SET.stream()
.gather(Gatherers.fold(() -> value, function))
.findFirst()
.orElse(value);
}
// EC injectInto
public static <IV> IV fold(IV value, Function2<IV, Generation, IV> function)
{
return GENERATION_IMMUTABLE_SET.injectInto(value, function);
}
The methods fold and injectInto are hard enough to explain, without adding the overhead of Stream, Gatherers, and Optional into the mix.
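Stripped of that overhead, injectInto is just an initial value plus a combining function. A minimal sketch:
// The running result is threaded through the function, one element at a time:
// ((((0 + 1) + 2) + 3) + 4) = 10
Integer sum = Lists.mutable.with(1, 2, 3, 4)
        .injectInto(Integer.valueOf(0),
                (Integer result, Integer each) -> result + each);
assertEquals(Integer.valueOf(10), sum);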
The following blog explains the method injectInto in more detail. I refer to injectInto as the “Continuum Transfunctioner.” Read the following blog to find out why.
Eclipse Collections by Example: InjectInto
Refactoring a Conclusion
After having given a 75-minute talk at dev2next, and then turning the talk into a blog where I repeat the live refactoring that Vlad and I did in front of an audience, there is very little left for me to say. There is a lot to digest in this blog. I dare say this is probably the longest blog I have ever written.
I will simply leave you with our takeaways slide from the talk, and an important section of the book “Eclipse Collections Categorically.”
You get what you settle for, so don’t settle for less than you expect
Note: The following is an excerpt from Chapter one of the book, “Eclipse Collections Categorically.” This section of Chapter one is available in the online reading sample for the book on Amazon.
Tell your collections what you want them to do for you. Don’t ask for their data and do it yourself.
I hope you enjoyed reliving the talk Vlad and I gave at dev2next titled “Refactoring to Eclipse Collections.” I enjoyed writing it, and will see if I can go back and make improvements over time. This blog will hopefully be a good resource for folks seeking to build or reinforce a set of basic skills across several method categories for Eclipse Collections. This blog isn’t as comprehensive as the book I just wrote, but should hopefully be a good starter for what you might have been missing just using Java Collections and Streams for the past 21 years.
Thanks for reading!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
October 01, 2025
Key Highlights from the 2025 Jakarta EE Developer Survey Report
by Tatjana Obradovic at October 01, 2025 02:09 PM
The results are in! The State of Enterprise Java: 2025 Jakarta EE Developer Survey Report has just been released, offering the industry’s most comprehensive look at the state of enterprise Java. Now in its eighth year, the report captures the perspectives of more than 1700 developers, architects, and decision-makers, a 20% increase in participation compared to 2024.
The survey results give us insight into Jakarta EE’s role as the leading framework for building modern, cloud native Java applications. With the release of Jakarta EE 11, the community’s commitment to modernisation is clear, and adoption trends confirm its central role in shaping the future of enterprise Java. Here are a few of the major findings from this year’s report:
Jakarta EE Adoption Surpasses Spring
For the first time, more developers reported using Jakarta EE (58%) than Spring (56%). This clearly indicates growing awareness that Jakarta EE provides the foundation for popular frameworks like Spring. This milestone underscores Jakarta EE’s momentum and the community’s confidence in its role as the foundation for enterprise Java in the cloud era.
Rapid Uptake of Jakarta EE 11
Released earlier this year, Jakarta EE 11 has already been adopted by 18% of respondents. Thanks to its staged release model, with Core and Web Profiles first, followed by the full platform release, developers are migrating faster than ever from older versions.
Shifts in Java SE Versions
The community continues to embrace newer Java versions. Java 21 adoption leapt to 43%, up from 30% in 2024, while older versions like Java 8 and 17 declined. Interestingly, Java 11 showed a rebound at 37%, signaling that organisations continue to balance modernisation with stability.
Cloud Migration Strategies Evolve
While lift-and-shift (22%) remains the dominant approach, developers are increasingly exploring modernisation paths. Strategies include gradual migration with microservices (14%), modernising apps to leverage cloud-native features (14%), and full cloud-native builds (14%). At the same time, 20% remain uncertain, highlighting a need for clear guidance in this complex journey.
Community Priorities
Survey respondents reaffirmed priorities around cloud native readiness and faster specification adoption, while also emphasising innovation and strong alignment with Java SE.
Why This Matters
These findings highlight not only Jakarta EE’s accelerating momentum but also the vibrant role the community plays in steering its evolution. With enterprise Java powering mission-critical systems across industries, the insights from this survey provide a roadmap for organisations modernising their applications in an increasingly cloud native world.
A Call to the Community
The Jakarta EE Developer Survey continues to serve as a vital barometer of the ecosystem. With the Jakarta EE Working Group hard at work on the next release, including innovative features, there’s never been a better time to get involved, whether you’re a developer, architect, or enterprise decision-maker:
- Explore the full report
- Join the Jakarta EE Working Group: Shape the platform’s future while engaging directly with the community.
- Contribute: Your feedback, participation, and innovations help Jakarta EE evolve faster.
With the Jakarta EE Working Group already preparing for the next release, including new cloud native capabilities, the momentum is undeniable. Together, we are building the future of enterprise Java.
Welcome Sonnet 4.5 to Theia AI (and Theia IDE)!
by Jonas, Maximilian & Philip at October 01, 2025 12:00 AM
Developers and tool builders can use Anthropic’s Sonnet 4.5 directly in Theia AI and the AI-powered Theia IDE, without any additional glue code. Just add "sonnet-4.5" to your model list in your …
The post Welcome Sonnet 4.5 to Theia AI (and Theia IDE)! appeared first on EclipseSource.
September 30, 2025
Testing and developing SWT on GTK
by Jonah Graham at September 30, 2025 03:21 PM
I have recently started working on improved support of GTK4 in SWT and I have been trying to untangle the various options that affect SWT + GTK and how everything goes together.
Environment Variables
These are key environment variables that control where and how SWT draws in GTK land.
- SWT_GTK4: If this is set to 1 then SWT will attempt to use GTK4 libraries
- GDK_BACKEND: Which backend the GDK layer (a layer below GTK) uses to draw. Can be set to x11 or wayland.
- DISPLAY: when GDK_BACKEND is x11, controls which display the program is drawn on.
If SWT_GTK4 or GDK_BACKEND is set to a value that is not supported, then generally the code gracefully falls back to the other value. For example, setting SWT_GTK4=1 without GTK4 libraries will attempt to load GTK3 libraries.
If DISPLAY is set to an invalid value, you will generally get an org.eclipse.swt.SWTError: No more handles [gtk_init_check() failed] exception (although there are other reasons you can get that exception).
GDK_BACKEND is often set by unexpected places. For example, on my machine I often find GDK_BACKEND set to x11, even though I have not requested that. Other places, such as VSCode, may force GDK_BACKEND to a particular value depending on the circumstances. Therefore I recommend being explicit and careful with GDK_BACKEND to ensure that SWT is using the backend you expect.
X11 and Wayland
When Wayland is in use, and GDK_BACKEND=x11, then Xwayland is used to bridge the gap between an application written to use X11 and the user’s display. Sometimes the behaviour of Xwayland and its interactions can differ from using a machine with X as the real display. To test this you may want to install a machine (or VM) with a distro that uses X11 natively, such as Xubuntu. The alternative is to use a VNC server (see below section).
X11 VNC Server
Rather than installing a VM or otherwise setting up a different machine you can use a VNC Server running an X server. This has the added benefit of giving a mostly accurate X11 experience, but also benefits from maintaining its own focus and drawing, allowing X11 tests to run without interrupting your development environment.
In the past I have recommended using Xvfb as documented in CDT’s testing manual. However, for my current SWT development I have used tiger VNC so I can see and interact with the window under test.
When I was experimenting with setting this up, I seem to have accidentally changed my Ubuntu theme. I was doing a bunch of experimenting, so I’m not sure exactly what I did. I have included the steps I believe are necessary below, but I may have edited out an important step – if so, please comment below and I can update the document.
These are the steps to setup and configure tiger vnc that worked for me on my Ubuntu 25.04 machine:
- sudo apt install tigervnc-standalone-server tigervnc-common: install the VNC tools.
- sudo apt install xfce4 xfce4-goodies: install an X11 based window manager and basic tools (there are probably some more minimal sets of things that could be installed here).
- vncpasswd: configure VNC with password access control.
- sudo vi /etc/X11/Xtigervnc-session: edit how the X11 session is started. I found that the default didn’t work well, probably because xfce4 was not the only thing installed on my machine and the Xsession script didn’t quite know what to do. The exec /etc/X11/Xsession "$@" line didn’t launch successfully, so I replaced it with these lines:
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
exec startxfce4
SESSION_MANAGER and DBUS_SESSION_BUS_ADDRESS are unset because I wanted to keep this session independent of other things running on my machine, and I was getting errors without them unset.
- vncserver :99: start the VNC server. Adjust :99 for the display you want to use; you set the DISPLAY environment variable to :99 in this case.
- xtigervncviewer -SecurityTypes VncAuth -passwd /tmp/pathhere/passwd :99: start the viewer, using the command that vncserver output as part of its startup.
Wayland Remote connection
I have not had the opportunity to use this much yet, but recent Ubuntu machines come with desktop sharing using RDP based on gnome-remote-desktop. This should allow connecting to a Ubuntu machine and use Wayland remotely. Enable it from Settings -> System -> Remote Desktop and connect to the machine using Remote Desktop.
What to test?
Now that I am developing SWT, specifically targeting GTK4 work, there are different configurations of the above to test. My primary focus is to test:
- SWT_GTK4=0 with GDK_BACKEND=x11, running on the default DISPLAY that is connected to Xwayland
- SWT_GTK4=1 with GDK_BACKEND=wayland (in this case DISPLAY is unused)
However these additional settings seem useful to test, especially as x11 backend sometimes seems to be used unexpectedly on wayland:
- SWT_GTK4=0 with GDK_BACKEND=x11, running on the DISPLAY connected to my VNC. This is really useful for when I want to leave tests running in the background.
- SWT_GTK4=1 with GDK_BACKEND=x11: the behaviour of various things (such as the Clipboard) is different when using GTK4 with Wayland. I don’t know how important this use case is long term.
- SWT_GTK4=0 with GDK_BACKEND=wayland: I don’t know if this really adds anything and have hardly tried this combination.
Run Configurations
Here is what a few of my run configurations look like:
The Eclipse Foundation Releases the 2025 Jakarta EE Developer Survey Report
by Jacob Harris at September 30, 2025 08:45 AM
BRUSSELS – 30 September 2025 – The Eclipse Foundation, one of the world’s largest open source software foundations, today announced the availability of The State of Enterprise Java: 2025 Jakarta EE Developer Survey Report, the industry’s most comprehensive resource for technical insights into enterprise Java. Now in its eighth year, the report highlights accelerating momentum for Jakarta EE adoption and its growing role in powering cloud native applications. The 2025 Jakarta EE Developer Survey Report is available for download in its entirety here.
“With the arrival of Jakarta EE 11, it’s clear the community is prioritizing modernization of their Java infrastructure,” said Mike Milinkovich, executive director of the Eclipse Foundation. “This reflects our commitment to establishing Jakarta EE as a world-class platform for enterprise cloud native development. It’s exciting to see the Java ecosystem embracing this community-led transition.”
Conducted from March 18 to June 5, 2025, the survey collected insights from more than 1700 participants, a 20% increase over 2024, making it one of the most comprehensive community-driven views into the enterprise Java ecosystem.
Key findings from the 2025 Jakarta EE Developer Survey Report:
- Jakarta EE momentum grows: Jakarta EE adoption has surpassed Spring for the first time, with 58% of respondents using Jakarta EE compared to 56% for Spring. This marks a significant milestone and confirms Jakarta EE’s position as the leading Java framework for building cloud native applications. The data reflects the growing recognition of Jakarta EE’s foundational role in modern enterprise Java.
- Jakarta EE 11 is being rapidly adopted by the community: Jakarta EE 11 has already been adopted by 18% of respondents. This early traction shows strong interest across regions and company sizes. The community’s flexible, staged release model, which provides early access to Core and Web Profiles, is helping developers move away from older Java EE versions and adopt new innovations more quickly.
- Java version shifts: Adoption of Java 21 jumped to 43%, up from 30% in 2024. Java 17 and Java 8 both saw declines, while Java 11 experienced a rebound and now stands at 37%. The data suggests that developers are becoming more willing to adopt newer Java versions shortly after release.
- Cloud migration strategies: Lift-and-shift remains the leading approach (22%), but teams are also weighing a variety of modernization paths. Some plan to gradually migrate with microservices (14%), others are modernizing applications to leverage cloud features (14%), while a portion are already fully cloud-based (14%). At the same time, uncertainty is growing, with 20% of developers still unsure about their strategy.
- Community priorities: Cloud native readiness and faster specification adoption top the agenda, alongside steady interest in innovation and Java SE alignment.
A call to the community
The Jakarta EE Developer Survey remains a vital resource for developers, architects, and business leaders, offering a clear view into current trends and future directions for enterprise Java.
The Jakarta EE community welcomes contributions and participation from individuals and organisations alike. With the Jakarta EE Working Group hard at work on the next release, including innovative cloud native features, there’s never been a better time to get involved. Learn more and connect with the global community here.
For organisations that rely on enterprise Java, membership in the Jakarta EE Working Group offers a unique opportunity to shape its future, while benefiting from marketing initiatives and direct engagement with key contributors. Discover the benefits of membership here.
About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organisations with a business-friendly environment for open source software collaboration and innovation. We host the Eclipse IDE, Adoptium, Software Defined Vehicle, Jakarta EE, Open VSX, and over 400 open source projects, including runtimes, tools, registries, specifications, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, open processor designs, and many others. Headquartered in Brussels, Belgium, the Eclipse Foundation is an international non-profit association supported by over 300 members. To learn more, follow us on social media @EclipseFdn, LinkedIn, or visit eclipse.org.
Third-party trademarks mentioned are the property of their respective owners.
###
Media contacts:
Schwartz Public Relations (Germany)
Julia Rauch/Marita Bäumer
Sendlinger Straße 42A
80331 Munich
+49 (89) 211 871 -43 / -62
514 Media Ltd (France, Italy, Spain)
Benoit Simoneau
M: +44 (0) 7891 920 370
Nichols Communications (Global Press Contact)
Jay Nichols
+1 408-772-1551
Member case study: Bloomberg’s shift to open source Java
by Jacob Harris at September 30, 2025 08:30 AM
By adopting Eclipse Temurin and joining the Adoptium Working Group, Bloomberg is strengthening their infrastructure, reducing costs, and leading open source innovation.
September 28, 2025
History
by Scott Lewis ([email protected]) at September 28, 2025 08:39 PM
September 24, 2025
From Excess to Balance: The Collapse of All-You-Can-Eat
by Denis Roy at September 24, 2025 01:50 PM
A few years ago, I noticed that things were changing in the Eclipse Foundation's (EF) IT operations: we were adding servers, and lots of them.
Trays of 3U mega-machines, packing 14 compute units each, with on-board switches, immense fans and drawing much electrical power, providing our community with CPU cycles galore. Storage devices could not keep up, so in came the clustered mega-storage solution, nine massive machines with drives and drives and drives, coupled with expensive switching gear to link everything together.
And yet, it's still not enough. And it's unsustainable.
You may have heard a new buzzword that's been making inroads into the IT and Developer mainstreams: sustainability. There are a few articles floating about that mention it. The Eclipse Foundation is not immune to the unsustainable practice of unlimited consumption, and at the IT Desk, we're pivoting. We have to.
It's all about fairness. Responsible usage is a shared task to be supported by all, not just a few. In the following months, the engineers in the EF IT team will work towards measuring what matters and drawing baselines for reasonable consumption. Our systems will then be adapted to inform you if those reasonable consumption limits have been reached.
What does this mean? Well, that build that has been running continuously in the background may come to a stop, with an invitation to resume it -- tomorrow. The 275MB of the same dependencies that are downloaded 5x each day may fail after the third time, inviting you to resume -- later. Those 40,000 files produced by each build may be acceptable -- once, but not continuously.
The EF is here to help. We'll strive to provide visibility and predictability in our operations. We'll start in observer mode first. We'll communicate and share our findings. We'll help you adapt to the new sustainable environment.
The burden of responsible usage belongs to all of us -- for a fair, open and sustainable future.
September 23, 2025
Businesses built on open infrastructure have a responsibility to sustain it
by Mike Milinkovich at September 23, 2025 01:04 PM
The global software ecosystem runs on open source infrastructure. As demand grows, we invite the businesses who rely on it most to play a larger role in sustaining it.
Open source infrastructure is the backbone of the global digital economy. From registries to runtimes, open source underpins the tools, frameworks, and platforms that developers and enterprises rely on every day. Yet as demand for these systems grows, so too does the urgency for those who depend on them most to play a larger role in sustaining their future.
Today, the Eclipse Foundation, alongside Alpha-Omega, OpenJS Foundation, Open SSF, Packagist (Composer), the Python Software Foundation (PyPI), the Rust Foundation (crates.io), and Sonatype (Maven Central), released a joint open letter urging greater investment and support for open infrastructure. The letter calls on those who benefit most from these critical digital resources to take meaningful steps toward ensuring their long-term sustainability and responsible stewardship.
The scale of open source’s impact cannot be overstated: a 2024 Harvard study, The Value of Open Source Software, estimated that the supply-side value of widely used OSS tops $4.15 billion, while the demand-side value reaches $8.8 trillion. Even more striking, 96% of that value came from the work of just 5% of OSS developers. The authors of the study estimate that without open source, organisations would need to spend more than 3.5 times their current software budgets to replicate the same capabilities.
This open ecosystem now powers much of the software industry worldwide, a sector worth trillions of dollars. Yet the investment required to sustain its underlying infrastructure has not kept pace. Running enterprise-grade infrastructure that provides zero downtime, continuous monitoring, traceability, and secure global distribution carries very real costs. The rapid rise of generative and agentic AI has only added to the strain, driving massive new workloads, many of them automated and inefficient.
The message is clear: with meaningful financial support and collaboration from industry, we can secure the long-term strength of the open infrastructure you rely on. Without that shared commitment, these vital resources are at risk.
Open VSX: Critical infrastructure worth investing in
The Eclipse Foundation stewards Open VSX, the world’s largest open source registry for VS Code extensions. Originally created to support Eclipse Foundation projects, it has grown into essential infrastructure for enterprises, serving millions of developers. Today it is the default marketplace for many VS Code forks and cloud environments, and as AI-native development and platform engineering accelerate, Open VSX is emerging as a backbone of extension infrastructure used by AI-driven development tools.
Open VSX currently handles over 100 million downloads each month, a nearly 4x increase since early 2024. This rapid growth underscores the accelerating demand across the ecosystem. Innovative, high-growth companies like Cursor, Windsurf, StackBlitz, and GitPod (now Ona), are just a few of the many organisations building on and benefiting from Open VSX. It is enterprise-class infrastructure that requires significant investment in security, staffing, maintenance, and operations.
Yet there is a clear imbalance between consumption and contribution.
Since its launch in September 2022:
- Over 3,000 issues have been submitted by more than 2,500 individuals
- Around 1,200 pull requests have been submitted, but only by 43 contributors
In a global ecosystem with tens of thousands of users, fewer than 50 people are doing the work to keep things running and improving. That gap between use and support is difficult to maintain over the long term.
A proven model for sustainability
The Eclipse Foundation also stewards Eclipse Temurin, the open source Java runtime provided by the Adoptium Working Group. With more than 700 million downloads and counting, Temurin has become a cornerstone of the Java ecosystem, offering enterprises a cost-effective, production-grade option.
To help maintain that momentum, the Adoptium Working Group launched the Eclipse Temurin Sustainer Program, designed to encourage reinvestment in the project and support faster releases, stronger security, and improved test infrastructure. The new Temurin ROI calculator shows that enterprises can save an average of $1.6 million annually by switching to open source Java.
Together, Open VSX and Temurin demonstrate what is possible when there is shared investment in critical open source infrastructure. But the current model of unlimited, no-cost use cannot continue indefinitely. The shared goal must be to create a sustainable and scalable model in which commercial consumers of these services provide the primary financial support. At the same time, it is essential to preserve free access for open source users, including individual developers, maintainers, and academic institutions.
We encourage all adopters and enterprises to get involved:
- Contribute to the code: Review issues, submit patches, and help evolve the projects in the open under Eclipse Foundation governance.
- Sustain what you use: Support hosting, testing, and security through membership, sponsorship, or other financial contributions, collaborating with peers to keep essential open infrastructure strong.
Investing now helps ensure the systems you depend on remain resilient, secure, and accessible for everyone.
Looking ahead
The growth of Open VSX and Eclipse Temurin underscores their value and importance. They have become cornerstones of modern development, serving a global community and fueling innovation across industries. But growth must be matched with sustainability. Because those who benefit most have not always stepped up to support these projects, we are implementing measures such as rate limiting. This is not about restricting access. It is about keeping the doors open in a way that is fair and responsible.
We are at a turning point. The future of open source infrastructure depends on more than goodwill. I remain optimistic that we can meet this challenge. By working together, industry and the open source community can ensure that these vital systems remain reliable, resilient, and accessible to all. I invite you to join us in honouring the spirit of open source by aligning responsibility with usage and helping to build a sustainable future for shared digital infrastructure.
Businesses built on open infrastructure have a responsibility to sustain it
by Jacob Harris at September 23, 2025 10:00 AM
The global software ecosystem runs on open source infrastructure. As demand grows, we invite the businesses who rely on it most to play a larger role in sustaining it.
Open infrastructure is not free: A joint statement on sustainable stewardship
by Jacob Harris at September 23, 2025 08:45 AM
Over the past two decades, open source has revolutionized the way software is developed. Every modern application, whether written in Java, JavaScript, Python, Rust, PHP, or beyond, depends on public package registries like Maven Central, PyPI, crates.io, Packagist and open-vsx to retrieve, share, and validate dependencies. These registries have become foundational digital infrastructure – not just for open source, but for the global software supply chain.
Beyond package registries, open source projects also rely on essential systems for building, testing, analyzing, deploying, and distributing software. These also include content delivery networks (CDNs) that offer global reach and performance at scale, along with donated (usually cloud) computing power and storage to support them.
And yet, for all their importance, most of these systems operate under a dangerously fragile premise: They are often maintained, operated, and funded in ways that rely on goodwill, rather than mechanisms that align responsibility with usage.
Despite serving billions (perhaps even trillions) of downloads each month (largely driven by commercial-scale consumption), many of these services are funded by a small group of benefactors. Sometimes they are supported by commercial vendors, such as Sonatype (Maven Central), GitHub (npm) or Microsoft (NuGet). At other times, they are supported by nonprofit foundations that rely on grants, donations, and sponsorships to cover their maintenance, operation, and staffing.
Regardless of the operating model, the pattern remains the same: a small number of organizations absorb the majority of infrastructure costs, while the overwhelming majority of large-scale users, including commercial entities that generate demand and extract economic value, consume these services without contributing to their sustainability.
Modern Expectations, Real Infrastructure
Not long ago, maintaining an open source project meant uploading a tarball from your local machine to a website. Today, expectations are very different:
- Dependency resolution and distribution must be fast, reliable, and global.
- Publishing must be verifiable, signed, and immutable.
- Continuous integration (CI) pipelines expect deterministic builds with zero downtime.
- Security tooling expects an immediate response from public registries.
- Governments and enterprises demand continuous monitoring, traceability, and auditability of systems.
- New regulatory requirements, such as the EU Cyber Resilience Act (CRA), are further increasing compliance obligations and documentation demands, adding overhead for already resource-constrained ecosystems.
- Infrastructure must be responsive to other types of attacks, such as spam and increased supply chain attacks involving malicious components that need to be removed.
These expectations come with real costs in developer time, bandwidth, computing power, storage, CDN distribution, and operational and emergency-response support. Yet, across ecosystems, most organizations that benefit from these services do not contribute financially, leaving a small group of stewards to carry the burden.
Automated CI systems, large-scale dependency scanners, and ephemeral container builds, which are often operated by companies, place enormous strain on infrastructure. These commercial-scale workloads often run without caching, throttling, or even awareness of the strain they impose. The rise of Generative and Agentic AI is driving a further explosion of machine-driven, often wasteful automated usage, compounding the existing challenges.
The illusion of “free and infinite” infrastructure encourages wasteful usage.
Proprietary Software distribution
In many cases, public registries are now used to distribute not only open source libraries but also proprietary software, often as binaries or software development kits (SDKs) packaged as dependencies. These projects may have an open source license, but they are not functional except as part of a paid product or platform.
For the publisher, this model is efficient. It provides the reliability, performance, and global reach of public infrastructure without having to build or maintain it. In effect, public registries have become free global CDNs for commercial vendors.
We don’t believe this is inherently wrong. In fact, it’s somewhat understandable and speaks to the power of the open source development model. Public registries offer speed, global availability, and a trusted distribution infrastructure already used by their target users, making it sensible for commercial publishers to gravitate toward them. However, it is essential to acknowledge that this was not the original intention of these systems. Open source packaging ecosystems were created to support the distribution of open, community-driven software, not as a general-purpose backend for proprietary product delivery. If these registries are now serving both roles, and doing so at a massive scale, that’s fine. But it also means it’s time to bring expectations and incentives into alignment.
Commercial-scale use without commercial-scale support is unsustainable.
Moving Towards Sustainability
Open source infrastructure cannot be expected to operate indefinitely on unbalanced generosity. The real challenge is creating sustainable funding models that scale with usage, rather than relying on informal and inconsistent support.
There is a difference between:
- Operating sustainably, and
- Functioning without guardrails, with no meaningful link between usage and responsibility.
Today, that distinction is often blurred. Open source infrastructure, whether backed by companies or community-led foundations, faces rising demands, fueled by enterprise-scale consumption, without reliable mechanisms to scale funding accordingly. Documented examples demonstrate how this imbalance drives ecosystem costs, highlighting the real-world consequences of an illusion that all usage is free and unlimited.
For foundations in particular, this challenge can be especially acute. Many are entrusted with running critical public services, yet must do so through donor funding, grants, and time-limited sponsorships. This makes long-term planning difficult and often limits their ability to invest proactively in staffing, supply chain security, availability, and scalability. Meanwhile, many of these repositories are experiencing exponential growth in demand, while the growth in sponsor support is at best linear, posing a challenge to the financial stability of the nonprofit organizations managing them.
At the same time, the long-standing challenge of maintainer funding remains unresolved. Despite years of experiments and well-intentioned initiatives, most maintainers of critical projects still receive little or no sustained support, leaving them to shoulder enormous responsibility in their personal time. In many cases, these same underfunded projects are supported by the very foundations already carrying the burden of infrastructure costs. In others, scarce funds are diverted to cover the operational and staffing needs of the infrastructure itself.
If we were able to bring greater balance and alignment between usage and funding of open source infrastructure, it would not only strengthen the resilience of the systems we all depend on, but it would also free up existing investments, giving foundations more room to directly support the maintainers who form the backbone of open source.
Billion-dollar ecosystems cannot stand on foundations built of goodwill and unpaid weekends.
What Needs to Change
It is time to adopt practical and sustainable approaches that better align usage with costs. While each ecosystem will adopt the approaches that make the most sense in its own context, the need for action is universal. These are the areas where action should be investigated:
- Commercial and institutional partnerships that help fund infrastructure in proportion to usage or in exchange for strategic benefits.
- Tiered access models that maintain openness for general and individual use while providing scaled performance or reliability options for high-volume consumers.
- Value-added capabilities that commercial entities might find valuable, such as usage statistics.
These are not radical ideas. They are practical, commonsense measures already used in other shared systems, such as Internet bandwidth and cloud computing. They keep open infrastructure accessible while promoting responsibility at scale.
Sustainability is not about closing access; it’s about keeping the doors open and investing for the future.
This Is a Shared Resource and a Shared Responsibility
We are proud to operate the infrastructure and systems that power the open source ecosystem and modern software development. These systems serve developers in every field, across every industry, and in every region of the world.
But their sustainability cannot continue to rely solely on a small group of donors or silent benefactors. We must shift from a culture of invisible dependence to one of balanced and aligned investments.
This is not (yet) a crisis. But it is a critical inflection point.
If we act now to evolve our models, creating room for participation, partnership, and shared responsibility, we can maintain the strength, stability, and accessibility of these systems for everyone.
Without action, the foundation beneath modern software will give way. With action -- shared, aligned, and sustained -- we can ensure these systems remain strong, secure, and open to all.
How You Can Help
While each ecosystem may adopt different approaches, there are clear ways for organizations and individuals to begin engaging now:
- Show Up and Learn: Connect with the foundations and organizations that maintain the infrastructure you depend on. Understand their operational realities, funding models, and needs.
- Align Usage with Responsibility: If your organization is a high-volume consumer, review your practices. Implement caching, reduce redundant traffic, and engage with stewards on how you can contribute proportionally.
- Build With Care: If you create build tools, frameworks, or security products, consider how your defaults and behaviors impact public infrastructure. Reduce unnecessary requests, make proxy usage easier, and document best practices so your users can minimize their footprint.
- Become a Financial Partner: Support foundations and projects directly, through membership, sponsorship, or by employing maintainers. Predictable funding enables proactive investment in security and scalability.
Awareness is important, but awareness alone is not enough. These systems will only remain sustainable if those who benefit most also share in their support.
What’s Next
This open letter serves as a starting point, not a finish. As stewards of this shared infrastructure, we will continue to work together with foundations, governments, and industry partners to turn principles into practice. Each ecosystem will pursue the models that make sense in its own context, but all share the same direction: aligning responsibility with usage to ensure resilience.
Future changes may take various forms, ranging from new funding partnerships to revised usage policies to expanded collaboration with governments and enterprises. What matters most is that the status quo cannot hold.
We invite you to engage with us in this work: learn from the communities that maintain your dependencies, bring forward ideas, and be prepared for a world where sustainability is not optional but expected.
Signed by:
Alpha-Omega
Eclipse Foundation (Open VSX)
OpenJS Foundation
Open Source Security Foundation
Packagist (Composer)
Python Software Foundation (PyPI)
Rust Foundation (crates.io)
Sonatype (Maven Central)
Organizational signatures indicate endorsement by the listed entity. Additional organizations may be added over time.
Acknowledgments: Thanks to contributors from the above organizations and the broader community for review and input.
September 11, 2025
The Eclipse Theia Community Release 2025-08
by Jonas, Maximilian & Philip at September 11, 2025 12:00 AM
We are happy to announce the eleventh Eclipse Theia community release, “2025-08,” incorporating the latest advances from Theia releases 1.62, 1.63, and 1.64. New to Eclipse Theia? It is the …
The post The Eclipse Theia Community Release 2025-08 appeared first on EclipseSource.
September 09, 2025
Building MCP Servers: Tool Descriptions + Service Contracts = Dynamic Tool Groups
by Scott Lewis ([email protected]) at September 09, 2025 12:18 AM
The Model Context Protocol (MCP) can easily be used to expose APIs and services in the form of MCP tools...i.e. functions/methods that can take input, perform some actions based upon that input, and produce output, without specifying a particular language or runtime.
OSGi Services (and Remote Services) provide a dynamic, flexible, secure environment for microservices, with clear well-established mechanisms for separating service contracts from service implementations.
One way to think of a service contract for large language models (LLMs) is that the service contract can be enhanced to provide LLM-processable metadata for each tool/method/function. Any service contract can still be used by human developers (API consumers), but with tool-specific metadata/descriptions added, the same service contract can also be used by any model.
Since service contracts in most languages are sets of functions/methods, a service contract can also be used to represent a grouping of MCP tools, or a Dynamic MCP ToolGroup. The example on the MCPToolGroups page and in the Bndtools project templates is a simple example of grouping a set of related functions/methods into a service contract and including MCP tool metadata (tool and tool parameter text descriptions).
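As a rough sketch of that idea, the service contract below carries per-method and per-parameter descriptions. The @ToolDescription annotation is hypothetical, defined inline for illustration; the mcp-annotations project defines its own annotations with different names and capabilities.
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation, for illustration only; the mcp-annotations
// project defines its own annotations with different names and options.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.PARAMETER})
@interface ToolDescription
{
    String value();
}

// The contract stays usable by human API consumers exactly as before,
// while the added metadata makes each method consumable as an MCP tool.
// The interface as a whole doubles as a natural tool group.
interface TemperatureService
{
    @ToolDescription("Convert a temperature from Celsius to Fahrenheit")
    double celsiusToFahrenheit(
        @ToolDescription("temperature in degrees Celsius") double celsius);

    @ToolDescription("Convert a temperature from Fahrenheit to Celsius")
    double fahrenheitToCelsius(
        @ToolDescription("temperature in degrees Fahrenheit") double fahrenheit);
}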
Task Engineering in AI Coding: How to Break Problems Into AI-Ready Pieces
by Jonas, Maximilian & Philip at September 09, 2025 12:00 AM
AI is changing how we code—but not what makes coding successful. Great software still depends on clarity, structure, and deliberate decision-making. Where many developers rush to feed an entire …
The post Task Engineering in AI Coding: How to Break Problems Into AI-Ready Pieces appeared first on EclipseSource.
Eclipse Collections Categorically: Level up your programming game
September 09, 2025 12:00 AM
September 01, 2025
Explaining the Eclipse prefix in Eclipse Collections
by Donald Raab at September 01, 2025 07:16 PM
Eclipse Collections is a standalone open source collections library for Java
After a decade of Eclipse Collections’ existence as a project at the Eclipse Foundation, I still find myself having to explain the difference between Eclipse Collections, the Eclipse IDE, and the Eclipse Foundation to developers who have the mistaken impression that Eclipse Collections is part of or requires the Eclipse IDE. Eclipse Collections is a standalone Java library managed as a project at the Eclipse Foundation. It is not part of the Eclipse IDE, and you do not need the Eclipse IDE to use it.
The prefix Eclipse in Eclipse Collections comes from the Eclipse Foundation, not the Eclipse IDE. The first two bullets below should be enough to make it clear what Eclipse Collections is and that it has no dependencies on any IDE. The first five bullets explain the existence of the Eclipse Foundation and how it relates to Eclipse Collections and the Eclipse IDE. The remaining five bullets are there to help clear up any remaining doubts as to the existence of a dependent relationship between Eclipse Collections and the Eclipse IDE or any other IDE. There is no such dependency.
Clarifying the Eclipse prefix in Eclipse Collections
- Eclipse Collections is a standalone open source Java collections library.
- Eclipse Collections was formerly known as GS Collections.
- Eclipse IDE is an open source Integrated Development Environment.
- Eclipse Foundation is an open source foundation like Apache Software Foundation, Linux Foundation, etc.
- Eclipse Collections and the Eclipse IDE are separate projects managed at the Eclipse Foundation.
- Eclipse Collections isn’t dependent on the Eclipse IDE.
- The Eclipse IDE isn’t dependent on Eclipse Collections.
- Developers who use IntelliJ, NetBeans, VS Code and other Java IDEs can use Eclipse Collections.
- Developers who use the Eclipse IDE can also use Eclipse Collections.
- Developers can use Eclipse Collections without using any IDE, as Eclipse Collections is a standalone Java library and not part of any IDE.
The prefix Eclipse was first used with the Eclipse IDE in 2001. The prefix was later used to name the Eclipse Foundation in 2004. Eclipse Collections joined the Eclipse Foundation as a Java project at the end of 2015. All three share the prefix Eclipse in common, similar to the many projects at Apache sharing the Apache prefix (e.g. Spark, Tomcat, Commons, etc.).
This is a public service that I provide to the open source development community for free, in an attempt to clear up any lingering confusion caused by the Eclipse prefix.
Thank you for reading!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
August 30, 2025
I’m not Cattle
by Donald Raab at August 30, 2025 07:20 PM
I just enjoy being heard
I will keep writing.
I enjoy human feedback.
Thank you for reading.
All I needed to say fit in this haiku. 🙏
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
August 29, 2025
Eclipse in Wayland (2025)
by Lorenzo Bettini at August 29, 2025 08:43 AM
August 26, 2025
Building MCP Servers: Dynamic Tool Groups
by Scott Lewis ([email protected]) at August 26, 2025 12:04 AM
Currently, adding tools to MCP servers is a static process: a new tool is designed and implemented, MCP metadata (descriptions) is added via annotations, decorators, or code, the new code is added to the MCP server, and everything is compiled, started, tested, debugged, etc.
There is also currently no MCP concept of tool 'groups', i.e. multiple tools that are grouped together based upon function, common use case, organization, or discoverability. Most current MCP servers have a flat namespace of tools.
I've created a repo with a small set of classes, based upon the mcp-java-sdk and mcp-annotations projects, that supports dynamically adding and removing tool groups from MCP servers.
In environments with the OSGi service registry, this allows the easy, dynamic, and secure (type safe) adding and removing of OSGi services (and/or remote services) to MCP servers.
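A minimal sketch of what that dynamism could look like with an OSGi ServiceTracker follows. EchoService and ToolGroupRegistry are illustrative stand-ins, not the actual types from the repo:
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.util.tracker.ServiceTracker;

// Illustrative stand-ins; the repo's actual service and registry types differ.
interface EchoService
{
    String echo(String message);
}

interface ToolGroupRegistry
{
    void addToolGroup(String groupName, Object serviceInstance);

    void removeToolGroup(String groupName);
}

// When an EchoService appears in the OSGi service registry, expose its
// methods to the MCP server as a tool group; when it goes away, remove them.
public class ToolGroupTracker extends ServiceTracker<EchoService, EchoService>
{
    private final ToolGroupRegistry registry;

    public ToolGroupTracker(BundleContext context, ToolGroupRegistry registry)
    {
        super(context, EchoService.class, null);
        this.registry = registry;
    }

    @Override
    public EchoService addingService(ServiceReference<EchoService> reference)
    {
        EchoService service = super.addingService(reference);
        this.registry.addToolGroup("echo", service);
        return service;
    }

    @Override
    public void removedService(ServiceReference<EchoService> reference, EchoService service)
    {
        this.registry.removeToolGroup("echo");
        super.removedService(reference, service);
    }
}
Because the tracker keys off the service contract's type, the add/remove lifecycle stays type safe: a tool group can only ever be backed by an object that actually implements the contract.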
How AI and MCP Supercharge GitHub Workflows in Theia IDE
by Jonas, Maximilian & Philip at August 26, 2025 12:00 AM
How can AI make your GitHub workflows faster, smarter, and less repetitive? In this new video, we show how the GitHub MCP server, connected to the AI-powered Theia IDE, can automate three common …
The post How AI and MCP Supercharge GitHub Workflows in Theia IDE appeared first on EclipseSource.
August 22, 2025
Building MCP Servers: Alternative Transports
by Scott Lewis ([email protected]) at August 22, 2025 02:38 AM
August 21, 2025
Updated Eclipse Theia FAQ – Clearing Up the Most Common Misunderstandings
by Jonas, Maximilian & Philip at August 21, 2025 12:00 AM
We’ve significantly updated our FAQ for Eclipse Theia adopters and users. The rewritten FAQ addresses the questions we hear most often from the community and potential adopters — and clears up some of …
The post Updated Eclipse Theia FAQ – Clearing Up the Most Common Misunderstandings appeared first on EclipseSource.
August 19, 2025
The Author’s Inside Guide to Reading Eclipse Collections Categorically
by Donald Raab at August 19, 2025 12:29 PM
TL;DR — Read Chapters 1, 2, 3. Jump to 11. Skim 4–10. Dive in as desired.
Eclipse Collections Categorically is written as a story, but organized to skim and jump around.
The book, Eclipse Collections Categorically, will continue to be available to Kindle Unlimited subscribers until October 12, 2025.
The book is also available to purchase in print versions at Amazon and Barnes & Noble.
This blog can help readers determine the best options for reading the book given time constraints.
How to learn a feature-rich API
Eclipse Collections Categorically overcomes the challenge of learning and comprehending a feature-rich API by grouping the methods into method categories. This was an innovative information chunking technique I learned from the classic Smalltalk programming language in the 1990s. What the book doesn’t tell you is how to go about reading it in the way that best fits your learning style and the constraints on your time.
The book can be read as a story or as a reference guide.
At 429 pages in paperback, and 377 pages in the larger hardcover book, it can take a while to read the whole book. The good news is that the book was designed to be read end-to-end or be picked up and read at any point as a reference. The decision of how best to read the book is up to the reader.
1. Read the Preface
The Preface is the story of where, why, and how Eclipse Collections was developed. It is an important backstory, if you want to understand what drove me to create an open source Java collections library that needed lambdas, a decade before lambdas arrived in Java, and then write a book about it two decades later.
The Preface is free to read in the online reading sample at Amazon.
2. Read the Introduction and THIS section
The Introduction tells you how the book is organized. This will help inform you as to how and where you want to spend your time. The rest of the Introduction tells you how to acquire Eclipse Collections and access the source. There is a new GitHub project that does that as well, with the added benefit of including the latest version of Eclipse Collections (13.0), which was released at the end of June 2025.
The Introduction is free to read in the online reading sample at Amazon.
New GitHub Repository with additional resources
The following GitHub project can be used as a hands-on resource to follow along with the code examples in the book. Some folks learn best by doing. This repository was created after the book was published. It contains a Maven project and a sample of the examples from the book, which can be run directly since they are executable tests.
GitHub - sensiblesymmetry/ec-categorically: Resources for Eclipse Collections Categorically book
There are two code examples per chapter shared in this repo (it is only a sample), but the project is set up with the dependencies needed to personally explore and try any of the examples in the book.
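For a flavor of what an executable test might look like, here is a hypothetical example in that style; the actual tests and class names in the repo differ:
import org.eclipse.collections.api.bag.MutableBag;
import org.eclipse.collections.impl.factory.Bags;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

// Hypothetical example in the style of the repo's executable tests.
public class CountingCategoryTest
{
    @Test
    public void occurrencesOf()
    {
        MutableBag<String> bag = Bags.mutable.with("apple", "apple", "banana");
        // A Bag counts duplicates for you; no Map<String, Integer> bookkeeping.
        Assertions.assertEquals(2, bag.occurrencesOf("apple"));
        Assertions.assertEquals(1, bag.occurrencesOf("banana"));
    }
}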
3. Decide how you want to read the book
The manner in which you decide to read the book depends on what you are looking to get out of it. Once you understand how the book is organized in the Introduction, your decision on how to approach reading the book should become clearer.
Now that there is a GitHub repo with resources to accompany the book, it will be easier to take a hands-on approach for some folks who like to experiment and see code run. The code examples in the book are effectively the solutions to a code kata, which is focused on learning the Eclipse Collections API in a comprehensive manner.
Option A: Read from Beginning to End
The chapters in Eclipse Collections Categorically are organized by method categories. The chapters are ordered specifically to help developers new to Eclipse Collections build skills and understanding in an incremental fashion. There is a story that builds upon previous chapters as the reader progresses.
I wanted to write a story that could be read from beginning to end. Depending on the speed you read and the time you have to focus, this can take the average reader a while.
If you are time-constrained and just want to learn some of the big ideas covered in the book, then I would suggest Option B to start your journey.
Option B: Read Chapters 1, 2, 3. Jump to 11. Skim 4–10.
Chapters 1, 2, and 3 give you all that you need to get started on a journey of learning Eclipse Collections, using method categories as an indexed guide (aka, Categorically). Chapter 11 is the summary chapter for the book. It shows you the symmetry that exists in the library, and how that symmetry can aid your learning as you use Eclipse Collections in your projects.
Chapter 3 takes you on a journey through a straightforward but surprising method category — counting. I recommend reading Chapter 3 from beginning to end as it will help you understand the symmetry of chapters 4 through 10.
Chapters 4–10 cover additional method categories (testing, finding, filtering, transforming, etc.). You can read them straight through or jump around them in any order. One approach a reader shared with me that worked well was to skim chapters 4 through 10 to see what is in them, and then go back and focus on particular sections when more detail on various methods is wanted. Chapters 3–10 will help you learn different techniques for accomplishing things with the Eclipse Collections API. They are an efficient index into the 134 methods covered in the book’s method category diagrams.
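For a small taste of what a few of these categories look like in code, here is a sketch using standard Eclipse Collections APIs (count, anySatisfy, select, and collect) with made-up data:
import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.impl.factory.Lists;

public class MethodCategoriesExample
{
    public static void main(String[] args)
    {
        MutableList<String> words = Lists.mutable.with("one", "two", "three", "four");

        // Counting: how many elements satisfy a predicate?
        int threeLetterWords = words.count(each -> each.length() == 3);

        // Testing: does any element satisfy a predicate?
        boolean anyStartWithT = words.anySatisfy(each -> each.startsWith("t"));

        // Filtering: select the elements that satisfy a predicate.
        MutableList<String> withO = words.select(each -> each.contains("o"));

        // Transforming: collect a new value from each element.
        MutableList<Integer> lengths = words.collect(String::length);

        System.out.println(threeLetterWords);  // 2
        System.out.println(anyStartWithT);     // true
        System.out.println(withO);             // [one, two, four]
        System.out.println(lengths);           // [3, 3, 5, 4]
    }
}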
4. The Appendices
There is a lot of content in the appendices. I would suggest reading them in any order that interests you. They contain interesting data, background, and some advice on using collections effectively in object-oriented domains in Java.
The Introduction covers what is in the appendices, so I won’t repeat it here.
Enjoy the book!
I hope you can take advantage of and enjoy the limited-time free book promotion. If you have Kindle Unlimited, you have until October 12, 2025. If you enjoy the book, I hope you will consider purchasing a print or digital copy and making it a permanent part of your physical or virtual bookshelf.
Thanks for reading, and enjoy!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am also the author of the book, Eclipse Collections Categorically: Level up your programming game.
GPT-5 vs Sonnet-4: Side-by-Side on Real Coding Tasks
by Jonas, Maximilian & Philip at August 19, 2025 12:00 AM
Two of today’s most popular AI coding models—GPT-5 and Sonnet-4—are often compared using benchmarks or synthetic tasks. But how do they behave in real-world coding scenarios? In this video, we put …
The post GPT-5 vs Sonnet-4: Side-by-Side on Real Coding Tasks appeared first on EclipseSource.
August 13, 2025
Eclipse Theia 1.64 Release: News and Noteworthy
by Jonas, Maximilian & Philip at August 13, 2025 12:00 AM
We are happy to announce the Eclipse Theia 1.64 release! The release contains in total 60 merged pull requests. In this article, we will highlight some selected improvements and provide an overview of …
The post Eclipse Theia 1.64 Release: News and Noteworthy appeared first on EclipseSource.
August 12, 2025
Theia AI and Theia IDE Now Support GPT-5—Out of the Box!
by Jonas, Maximilian & Philip at August 12, 2025 12:00 AM
Developers and tool builders can now use OpenAI’s GPT-5 directly in Theia AI and the AI-powered Theia IDE, without additional integration work. Just add "gpt-5" (or its variants like mini or nano) to …
The post Theia AI and Theia IDE Now Support GPT-5—Out of the Box! appeared first on EclipseSource.
August 07, 2025
A Native Claude Code IDE? How It Could Look Like with Eclipse Theia
by Jonas, Maximilian & Philip at August 07, 2025 12:00 AM
What if Claude Code, Anthropic’s powerful AI coding agent, wasn’t just a terminal app but a truly native part of your IDE? That’s the question we explored in our latest side project at EclipseSource. …
The post A Native Claude Code IDE? How It Could Look Like with Eclipse Theia appeared first on EclipseSource.
August 05, 2025
Agent-to-Agent Delegation in the AI-powered Theia IDE / Theia AI
by Jonas, Maximilian & Philip at August 05, 2025 12:00 AM
Automating workflows with AI just took a leap forward. Theia AI and the AI-powered Theia IDE now support agent-to-agent delegation, enabling one AI agent to delegate specific tasks - like reporting …
The post Agent-to-Agent Delegation in the AI-powered Theia IDE / Theia AI appeared first on EclipseSource.
August 02, 2025
Building MCP Servers: Preventing AI Monopolies
by Scott Lewis ([email protected]) at August 02, 2025 09:21 PM
I recently read an insightful article about using open protocols (MCP in this case) to prevent user context/data lock-in at the AI application layer:
Open Protocols Can Prevent AI Monopolies
In the spirit of this article, I've decided to make an initial code contribution to the mcp-java-sdk project.
July 31, 2025
Langium 4.0 is released!
July 31, 2025 12:00 AM
Enhanced Image Support in the AI-powered Theia IDE / Theia AI
by Jonas, Maximilian & Philip at July 31, 2025 12:00 AM
They say a picture is worth a thousand words. When describing UI issues to an AI assistant, it’s worth even more. The AI-powered Theia IDE now features rich image support, allowing you to communicate …
The post Enhanced Image Support in the AI-powered Theia IDE / Theia AI appeared first on EclipseSource.
July 30, 2025
Migrating Eclipse and RCP Tools to the Web
by Jonas, Maximilian & Philip at July 30, 2025 12:00 AM
Over almost two decades, the Eclipse Platform and Eclipse RCP have powered countless mission-critical tools and IDEs. But as outlined in our recent article on the future of Eclipse RCP, the technology …
The post Migrating Eclipse and RCP Tools to the Web appeared first on EclipseSource.
July 29, 2025
Interactive AI Responses in Your Custom GitHub Copilot – New Theia AI Tutorial
by Jonas, Maximilian & Philip at July 29, 2025 12:00 AM
Ever wanted your AI assistant to do more than just produce text? Our new video tutorial shows how to make your custom Copilot, built on Eclipse Theia, more interactive and visual, tailored to your …
The post Interactive AI Responses in Your Custom GitHub Copilot – New Theia AI Tutorial appeared first on EclipseSource.
July 24, 2025
AI Coding at Scale: Structure Your Workflow with Dibe Coding
by Jonas, Maximilian & Philip at July 24, 2025 12:00 AM
AI-powered development is everywhere. From YouTube tutorials to conference talks, from open-source demos to enterprise prototypes - coding with AI is the new frontier. One-shot prompts that generate …
The post AI Coding at Scale: Structure Your Workflow with Dibe Coding appeared first on EclipseSource.
July 22, 2025
EclipseSource Ends Maintenance of the Eclipse Modeling Tools Package - Here's Why
by Jonas, Maximilian & Philip at July 22, 2025 12:00 AM
For almost a decade, EclipseSource has proudly maintained and contributed to the Eclipse Modeling Tools package - a curated edition of the Eclipse IDE tailored for modeling technologies. This Eclipse …
The post EclipseSource Ends Maintenance of the Eclipse Modeling Tools Package - Here's Why appeared first on EclipseSource.
July 16, 2025
The Future of the Eclipse Platform and Eclipse RCP
by Jonas, Maximilian & Philip at July 16, 2025 12:00 AM
Over almost two decades, the Eclipse Platform and Rich Client Platform (RCP) have been foundational technologies for building extensible desktop applications, tools, and custom IDEs. From engineering …
The post The Future of the Eclipse Platform and Eclipse RCP appeared first on EclipseSource.
July 11, 2025
Building MCP Servers - part 3: Security
by Scott Lewis ([email protected]) at July 11, 2025 10:46 PM
There have been recent reports of critical security vulnerabilities on the mcp-remote project, and the mcp inspector project.
I do not know all the technical details of the exploits, but it appears to me that in both cases they involve vulnerabilities introduced by the MCP Server implementation and its use of the stdio MCP transport.
I want to emphasize that the example described in these two posts is using mechanisms that, through heavy usage by commercial server technologies over the past 10 years, have proven not to be subject to the same sorts of remote vulnerabilities seen in the mcp-remote and mcp-inspector projects.
Also, the flexibility in discovery and distribution provided by the RSA Specification and the RSA implementation used allows MCP Server remote tool or protocol weaknesses to be addressed quickly and easily, without having to update the MCP Server or tooling implementation code.