The Object Teams Blog

Adding team spirit to your objects.

Posts Tagged ‘modularity’

Jigsaw arriving – and now what?


The debate is over. The result of fierce discussions has been declared. Jigsaw is knocking at the door. Now the fate of Jigsaw depends on the community. Will the community embrace Jigsaw? Will we be able to leverage the new concepts to our benefit? Will we struggle with new pitfalls?

Let’s step back a second: do we really know what Jigsaw is? Do we really understand the design, its subtleties and implications?

At EclipseCon Europe 2017 I will try to shed light on some lesser-known aspects of Jigsaw: a deep dive into things I learned as a JDT committer while implementing tool support for Java 9.

In search for truth

To set the stage, we’ll first have to figure out who or what defines Java 9 — or Jigsaw — or JPMS. This is both a question of specification vs. implementation and a matter of a specification spread over several documents and artifacts. Let’s try to grok Jigsaw from the legally binding sources rather than from secondary writing and hearsay (if that is possible).

We will have to relearn some basic terms, like: what is a package in Java? Do packages form a hierarchy? (I will show how both answers, yes and no, are equally right and wrong.)

Jigsaw is said to do away with some “undesirable” stuff like split packages, and cyclic dependencies. Really? (Yes and no).

Encapsulation

Of course, with Jigsaw everything is about encapsulation. Easy to agree on, but what is it that a Java 9 module encapsulates? Only a deep understanding of this flavor of encapsulation will tell us what exact qualities we gain from following the new discipline (it is not about security, for example), and also what pains will be incurred on the path of migrating to the new model. (Hint: I will be speaking about both compile time and runtime.)

Interestingly, the magic potion Jigsaw also brings its own antidote: as the rules of encapsulation are tightened, the opposite becomes necessary, too. They call it breaking encapsulation, for which I once coined the term “decapsulation”. I should be flattered by how close Java 9 comes to what I call “gradual encapsulation”. So the talk cannot stop at presenting just the new language concepts; the various extra-lingual means for tuning your architecture also need to be inspected through the looking glass. This is also where tools come into focus: how can JDT empower its users to use the new options without the need to dig through the sometimes cryptic syntax of command line parameters?

Loose ends

At this point we shall agree that Jigsaw is a compromise, fulfilling many of its goals and promises, while also missing some opportunities.

I will also have to say a few words about long-standing API of JDT which makes guarantees that are no longer valid in Java 9. This raises the question: what is the migration path for tools that sit on top of JDT? (There is no free lunch.)

Finally, it wouldn’t be Java if overloading didn’t make things worse for the newly added concepts. But: haven’t you grown fond of hating it when JDT says:

The type org.foo.Bar cannot be resolved. It is indirectly referenced from required .class files

We may be seeing light at the end of the tunnel: for Jigsaw we had to revamp the guts of our compiler in ways that could possibly help to – finally – resolve that problem. Wish me luck …

Hope to see you in Ludwigsburg, there’s much to be discussed over coffee and beer.

EclipseCon Europe · Ludwigsburg, Germany · October 24 – 26, 2017

Written by Stephan Herrmann

September 10, 2017 at 16:31

Book chapter published: Confined Roles and Decapsulation in Object Teams — Contradiction or Synergy?


I strongly believe that for perfect modularity, encapsulation in plain Java is both too weak and too strong. This is the fundamental assumption behind a book chapter that has just been published by Springer.

The book is:
Aliasing in Object-Oriented Programming. Types, Analysis and Verification
My chapter is:
Confined Roles and Decapsulation in Object Teams — Contradiction or Synergy?

The concepts in this chapter relate back to the academic part of my career, but all of my more pragmatic tasks since those days indicate that we are still very far away from optimal modularity, and both mistakes are found in real world software: to permit access too openly and to restrict access too strictly. More often than not, it’s the same software exhibiting both mistakes at once.

For the reader unfamiliar with the notions of alias control etc., let me borrow from the introduction of the book:

Aliasing occurs when two or more references to an object exist within the object graph of a running program. Although aliasing is essential in object-oriented programming as it allows programmers to implement designs involving sharing, it is problematic because its presence makes it difficult to reason about the object at the end of an alias—via an alias, an object’s state can change underfoot.

ISBN 978-3-642-36946-9

Aliasing, by the way, is one of the reasons why analysis for @Nullable fields is a thorny issue. If alias control could be applied to @Nullable fields in Java, much better static analysis would be possible.
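To illustrate, here is a minimal plain-Java sketch; the class and field names are made up, and the would-be @Nullable annotation is only hinted at in a comment:

```java
// Why aliasing defeats null analysis for fields (illustrative names):
class Holder {
    String value = "hello";   // imagine this field were @Nullable
}

public class AliasingDemo {
    public static void main(String[] args) {
        Holder a = new Holder();
        Holder b = a;                 // alias: a and b refer to the same object
        if (a.value != null) {
            b.value = null;           // mutation through the alias...
            // ...silently invalidates the null check on a.value above:
            System.out.println(a.value == null);  // prints "true"
        }
    }
}
```

Without knowing that `b` cannot alias `a`, the analysis cannot trust the null check across the assignment through `b`.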


How is encapsulation in Java too weak?

Java only allows protecting code, not objects

This manifests at two levels:

Member access across instances

In a previous post I mentioned that the strongest encapsulation in Java – using the private modifier – doesn’t help to protect a person’s wallet against access from any other person. This is a legacy from the pre-object-oriented days of structured programming. In terms of encapsulation, a Java class is a module utterly unaware of the concept of instantiation. This defect is even more frustrating as better precedents (think of Eiffel) had been set before the inception of Java.
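A minimal sketch of this point (Person and its members are illustrative, not taken from any real code):

```java
// "private" protects the class, not the instance:
// any Person can reach into any other Person's wallet.
class Person {
    private int wallet = 100;

    void steal(Person victim, int amount) {
        victim.wallet -= amount;   // compiles fine: same class, different instance
        this.wallet += amount;
    }

    int balance() { return wallet; }
}

public class WalletDemo {
    public static void main(String[] args) {
        Person alice = new Person();
        Person bob = new Person();
        bob.steal(alice, 50);
        System.out.println(alice.balance()); // prints 50
        System.out.println(bob.balance());   // prints 150
    }
}
```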

A type system that is aware of instances, not just classes, is a prerequisite for any kind of alias control.

Object access meets polymorphism

Assume you declare a class as private because you want to keep a particular thing absolutely local to the current module. Does such a declaration provide sufficient protection? No. That private class may just extend another – public – class or implement a public interface. By using polymorphism (an invisible type widening suffices) an instance of the private class can still be passed around in the entire program – by its public super type. As you can see, applying private at the class level doesn’t make any objects private; only this particular class is. Since every class extends at least Object, there is no way to confine an object to a given module; by widening, all objects are always visible to all parts of the program. Put dynamic binding of method calls into the mix, and all kinds of “interesting” effects can be “achieved” on an object whose class is “secured” by the private keyword.
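A small sketch of the leak (all names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // a public interface...
    public interface Greeter { String greet(); }

    // ...implemented by a class meant to stay local:
    private static class SecretGreeter implements Greeter {
        public String greet() { return "hi from the secret"; }
    }

    // instances escape via their public supertype:
    public static Greeter expose() { return new SecretGreeter(); }

    public static void main(String[] args) {
        List<Object> everywhere = new ArrayList<>();
        everywhere.add(expose());   // widening to Object: the object travels freely
        Greeter g = (Greeter) everywhere.get(0);
        System.out.println(g.greet()); // prints "hi from the secret"
    }
}
```

The class SecretGreeter is invisible outside, but its instances are not.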

The type system of OT/J supports instance-based protection.

Java’s deficiencies outlined above are overcome in OT/J by two major concepts:

Dependent types
Any role class is a property of the enclosing team instance. The type system allows reasoning about how a role is owned by this enclosing team instance. (read the spec: § 1.2.2)
Confined roles
The possible leak by widening can be prevented by sub-classing a predefined role class Confined, which does not extend Object. (read the spec: § 7.2)

For details of the type system, why it is suitable for mending the given problems, and why it doesn’t hurt in day-to-day programming, I have to refer you to the book chapter.


How is encapsulation in Java too strict?

If you are a developer with a protective attitude towards “your” code, you will make a lot of things private. Good for you, you’ve created (relatively) well encapsulated software.
But when someone else is trying to make use of “your” code (re-use) in a slightly unanticipated setting (calling for unanticipated adaptations), guess what: s/he’ll curse you for your protective attitude.
Have you ever tried to re-use an existing class and failed, because some **** detail was private and there was simply no way to access or override that particular piece? When you’ve been in this situation before, you’ll know there are basically two answers:

  1. Give up, simply don’t use that (overly?) protective class and recode the thing (which more often than not causes a ripple effect: want to copy one method, end up copying 5 or more classes). Object-orientation is strong on re-use, heh?
  2. Use brute force and don’t tell anybody (tools that come in handy are: binary editing the class file, or calling Method.setAccessible(true)). I’m not quite sure why I keep thinking of Core Wars as I write this 🙂 .
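As a sketch of the brute-force option 2 (the class and method here are invented for illustration; note that on Java 9+ a module boundary may additionally require an opens directive, or setAccessible throws InaccessibleObjectException):

```java
import java.lang.reflect.Method;

public class BruteForce {
    static class Protective {
        private String detail() { return "the private detail"; }
    }

    public static void main(String[] args) throws Exception {
        // works, but silently couples you to a private implementation detail
        Method m = Protective.class.getDeclaredMethod("detail");
        m.setAccessible(true);
        System.out.println(m.invoke(new Protective())); // prints "the private detail"
    }
}
```

No compiler, and no author of Protective, will ever know this dependency exists.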

OT/J opens doors for negotiation, rather than arms race & battle

The Object Teams solution rests on two pillars:

  1. Give a name to the act of disregarding an encapsulation boundary: decapsulation. Provide the technical means to punch tiny little holes into the armor of a class / an object. Make this explicit so you can talk and reason about the extent of decapsulation. (read the spec: § 2.1.2(c))
  2. Separate technical possibility from questions of what is / should / shouldn’t be allowed. Don’t let technology dictate rules, but make it possible to formulate and enforce such rules on top of the existing technology.

Knowing that this topic is controversial, I’ll leave it at this for now (a previous discussion in this field can be found here).


Putting it all together

  1. If you want to protect your objects, do so using concepts that are stronger than Java’s “private”.
  2. Using decapsulation where needed fosters effective re-use without the need for “speculative API”, i.e., making things public “just in case”.
  3. Dependent types and confined roles are a pragmatic entry into the realm of programs that can be statically analysed, with strong guarantees given by such analysis.
  4. Don’t let technology dictate the rules, we need to put the relevant stakeholders back into focus: Module providers, application developers and end users have different views. Technology should just empower each of them to get their job done, and facilitate transparency and analyzability where desired.

Some of this may be surprising, some may sound like a purely academic exercise, but I’m convinced that the above ingredients as supported in OT/J support the development of complex modular systems with unprecedentedly low effective coupling.

FYI, here’re the section headings of my chapter:

  1. Many Faces of Modularity
  2. Confined Roles
    1. Safe Polymorphism
    2. From Confined Types to Confined Roles
    3. Adding Basic Flexibility to Confined Roles
  3. Non-hierarchical Structures
    1. Role Playing
    2. Translation Polymorphism
  4. Separate Worlds, Yet Connected
    1. Layered Designs
    2. Improving Encapsulation by Means of Decapsulation
    3. Zero Reference Roles
    4. Connecting Architecture Levels
  5. Instances and Dynamism
  6. Practical Experience
    1. Initial Comparative Study
    2. Application in Tool Smithing
  7. Related Work
    1. Nesting, Virtual Classes, Dependent Types
    2. Multi-dimensional Separation of Concerns
    3. Modules for Alias Control
  8. Conclusion

Written by Stephan Herrmann

March 30, 2013 at 22:11

Modularity at JavaOne 2012


Those planning to attend JavaOne 2012 in San Francisco might be busy right now building their personal session schedule. So here’s what I have to offer:

On Wed., Oct. 3, I’ll speak about

Redefining Modularity and Reuse in Variants—
All with Object Teams


I guess one of the goals behind moving Jigsaw out of the schedule for Java 8 was: to have more time to improve our understanding of modularity, right? So, why not join me when I discuss how much an object-oriented programming language can help to improve modularity.

When speaking of modularity, I don’t see this as a goal in itself, but as a means for facilitating reuse, and when speaking of reuse I’m particularly interested in reusing a module in a context that was not anticipated by the module provider (that’s what creates new value).

To give you an idea of my agenda, here’re the five main steps of this session:

  1. Warm-up: Debugging the proliferation of module concepts – will there ever be an end? Much of this follows the ideas in this post of mine.
  2. Reuse needs adaptation of existing modules for each particular usage scenario. Inheritance, OTOH, promises to support specialization. How come inheritance is not considered the primary architecture level concept for adapting reusable modules?
  3. Any attempt to apply inheritance as the sole adaptation mechanism in reuse mash-ups results in conflicts, which I characterize as the Dominance of the Instantiator. To mend this contention, inheritance needs a brother with similar capabilities but much more dynamic and thus more flexible.
  4. Modularity is about strictly enforced boundaries – good. However, the rules imposed by this regime can only deliver their true power if we’re also able to express limited exceptions.
  5. Given that creating variants and adapting existing modules is crucial for unanticipated reuse, how can those adaptations themselves be developed as modules?

At each step I will outline a powerful solution as provided by Object Teams.

See you in San Francisco!

Edit: The slides and the recording of this presentation are now available online.

Written by Stephan Herrmann

September 20, 2012 at 19:55

Object Teams with Null Annotations


The recent release of Juno M4 brought an interesting combination: The Object Teams Development Tooling now natively supports annotation-based null analysis for Object Teams (OT/J). How about that? 🙂

The path behind us

Annotation-based null analysis has been added to Eclipse in several stages:

Using OT/J for prototyping
As discussed in this post, OT/J excelled once more in a complex development challenge: it solved the conflict between extremely tight integration and separate development without double maintenance. That part was real fun.
Applying the prototype to numerous platforms
Next I reported that only one binary deployment of the OT/J-based prototype sufficed to upgrade any of 12 different versions of the JDT to support null annotations — looks like a cool product line.
Pushing the prototype into the JDT/Core
Next all of the JDT team (Core and UI) invested efforts to make the new feature an integral part of the JDT. Thanks to all for this great collaboration!
Merging the changes into the OTDT
Now that the new stuff was merged back into the plain-Java implementation of the JDT, it was no longer applicable to other variants, but the routine merge between JDT/Core HEAD and Object Teams automatically brought it back for us. As of OTDT 2.1 M4, annotation-based null analysis is an integral part of the OTDT.

Where we are now

Regarding the JDT, others like Andrey, Deepak and Aysush have beaten me in blogging about the new coolness. It seems the feature even made it to become a top mention of the Eclipse SDK Juno M4. Thanks for spreading the word!

Ah, and thanks to FOSSLC you can now watch my ECE 2011 presentation on this topic.

Two problems of OOP, and their solutions

Now, OT/J with null annotations is indeed an interesting mix, because it solves two inherent problems of object-oriented programming which couldn’t be more different:

1.: NullPointerException is the most widespread and most embarrassing bug that we produce day after day, again and again. Pushing support for null annotations into the JDT has one major motivation: if you use the JDT but don’t use null annotations, you’ll no longer have an excuse. For no good reason your code will retain these miserable properties:

  • It will throw those embarrassing NPEs.
  • It doesn’t tell the reader about fundamental design decisions: which part of the code is responsible for handling which potential problems?

Why is this problem inherent to OOP? The dangerous operator that causes the exception is the tiny little dot (.), and that happens to be the least dispensable operator in OOP.
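A tiny sketch of the problem and of what the annotations document; the annotation (as in org.eclipse.jdt.annotation) is shown only in a comment so the sketch stays self-contained, and the method is invented for illustration:

```java
public class DotDemo {
    // @Nullable -- the contract: callers must expect null here
    static String find(String key) {
        return key.isEmpty() ? null : key.toUpperCase();
    }

    public static void main(String[] args) {
        String result = find("");
        // writing result.length() here would apply the dot to null -> NPE;
        // with the annotation, the analysis forces the check:
        if (result != null) {
            System.out.println(result.length());
        } else {
            System.out.println("no result"); // prints "no result"
        }
    }
}
```

Annotated, the design decision "who handles the null case" is visible at the declaration instead of hidden in folklore.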

2.: Objectivity seems to be a central property of any approach that is based just on objects. While so many other activities in software engineering are based on the insight that complex problems with many stakeholders involved can best be addressed using perspectives and views etc., OOP forces you to abandon all that: an object is an object is an object. Think of a very simple object: a File. Some part of the application will be interested in the content so it can decode the bytes and do something meaningful with them, another part of the application (maybe an underlying framework) will mainly be interested in the path in the filesystem and how it can be protected against concurrent writing, and still other parts don’t care about either but only let you send the thing over the net. By representing the “File” as an object, that object must have all properties that are relevant to any part of the application. It must be openable, lockable, sendable and whatnot. This yields bloated objects and unnecessary, sometimes daunting dependencies. Inside the object, all those different use cases it is involved in cannot be separated!

With roles, objectivity is replaced by a disciplined form of subjectivity: each part of the application will see the object with exactly those properties it needs, mediated by a specific role. New parts can add new properties to existing objects — but not in the unsafe style of dynamic languages, but strictly typed and checked. What does this mean for practical design challenges? E.g., direct support for feature-oriented designs – the direct path to painless product lines etc.

Just like the dot, objectivity seems to be hardcoded into OOP. While null annotations make the dot safe(r), the roles and teams of OT/J add a new dimension to OOP where perspectives can be used directly in the implementation. Maybe it does make sense to have both capabilities in one language 🙂 although one of them cleans up what should have been sorted out many decades ago while the other opens new doors towards the future of sustainable software designs.

The road ahead

The work on null annotations goes on. What we have in M4 is usable and I can only encourage adopters to start using it right now, but we still have an ambitious goal: eventually the null analysis shall not only find some NPEs in your program; the absence of null related errors and warnings shall give the developer the guarantee that this piece of code will never throw an NPE at runtime.

What’s missing towards that goal:

  1. Fields: we don’t yet support null annotations for fields. This is next on our plan, but one particular issue will require experimentation and feedback: how do we handle the initialization phase of an object, where fields start as being null? More on that soon.
  2. Libraries: we want to support null specifications for libraries that have no null annotations in their source code.
  3. JSR 308: only with JSR 308 will we be able to annotate all occurrences of types, like, e.g., the element type of a collection (think of List)
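The field-initialization question from item 1 above can be sketched in plain Java (names invented for illustration):

```java
public class InitDemo {
    static class Widget {
        final String label;   // intended @NonNull -- but null until assigned

        Widget() {
            describe();       // a method call during construction...
            this.label = "ready";
        }

        void describe() {
            // ...observes the field before it is initialized:
            System.out.println(label == null ? "label not yet set" : label);
        }
    }

    public static void main(String[] args) {
        new Widget();  // prints "label not yet set"
    }
}
```

Even a final field intended to be non-null is observably null during the construction phase, which is exactly why field analysis needs extra rules.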

Please stay tuned as the feature evolves. Feedback including bug reports is very welcome!

Ah, and one more thing in the future: I finally have the opportunity to work out a cool tutorial with a fellow JDT committer: How To Train the JDT Dragon with Ayushman. Hope to see y’all in Reston!


Written by Stephan Herrmann

December 20, 2011 at 22:32

The Essence of Object Teams


When I write about Object Teams and OT/J I easily get carried away indulging in cool technical details. Recently in the Object Teams forum we were asked about the essence of OT/J which made me realize that I had neglected the high-level picture for a while. Plus: this picture looks a bit different every time I look at it, so here is today’s answer in two steps: short and extended.

Short version:

OT/J adds to OO the capability to define a program in terms of multiple perspectives.

Extended version:

In software design, e.g., we are used to describing a system from multiple perspectives or views, like: structural vs. behavioral views, architectural vs. implementation views, views focusing on distribution vs. business logic, or just: one perspective per use case.
A collage of UML diagrams

© 2009, Kishorekumar 62, licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

In regular OO programming this is not well supported: an object is defined by exactly one class, and no matter from which perspective you look at it, it always has the exact same properties. The best we can do here is control the visibility of methods by means of interfaces.
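A minimal sketch of that “best we can do” (interface and class names invented):

```java
// Per-perspective interfaces over one object: each client sees a narrowed view,
// but the class itself must still implement every concern.
public class ViewsDemo {
    interface Readable { String content(); }
    interface Lockable { void lock(); }

    static class File implements Readable, Lockable {
        private boolean locked;
        public String content() { return "bytes..."; }
        public void lock() { locked = true; }
        boolean isLocked() { return locked; }
    }

    static void decoder(Readable r) { System.out.println(r.content()); }
    static void framework(Lockable l) { l.lock(); }

    public static void main(String[] args) {
        File f = new File();
        decoder(f);      // this client sees only the Readable view
        framework(f);    // this one sees only the Lockable view
        System.out.println(f.isLocked()); // prints "true"
    }
}
```

Interfaces hide methods per client, but they cannot move the implementation of those concerns out of the one class.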


A person is a person is a person!?

By contrast, in OT/J you start with a set of lean base objects and then you may attach specific roles for each perspective. Different use cases and their scenarios can be implemented by different roles. Visualization in the UI may be implemented by another independent set of roles, etc.


A base seen from multiple perspectives via roles.

Code-level comparison?

I’ve been asked to compare standard OO and OT/J by direct comparison of code examples, but I’m not sure whether that is a good idea, because OT/J is not intended to just provide a nicer syntax for things you can do in the same way using OO. OT/J – as any serious new programming language should do, IMHO – wants you to think differently about your design. For example, we can give an educational implementation of the Observer pattern using OT/J, but once you “think Object Teams” you may not be interested in the Observer pattern any more, because you can achieve basically the same effect using a single callin binding, as shown in our Stopwatch example.

So, positively speaking, OT/J wants you to think in terms of perspectives and make the perspectives you would naturally use for describing the system explicit by means of roles.

OTOH, a new language is of course only worth the effort if you have a problem with existing approaches. The problems OT/J addresses are inherently difficult to discuss in small examples, because they are:

  1. software complexity, e.g., with standard OO finding a suitable decomposition where different concerns are not badly tangled with each other becomes notoriously difficult.
  2. software evolution, e.g., the initial design will be challenged with change requests that no-one can foresee up-front.

(Plus all the other concerns addressed)

Both, theory and experience, tell me that OT/J excels in both fields. If anyone wants to challenge this claim using your own example, please do so and post your findings, or lets discuss possible designs together.

One more essential concept

I should also mention the second essence of OT/J: after slicing elements of your program into roles and base objects, we also support re-grouping the pieces in a meaningful way: as roles are contained in teams, those teams enable you to raise the level of abstraction such that each user-relevant feature, e.g., can be captured in exactly one team, hiding all the details inside. As a result, for a high-level design view you no longer have to look at diagrams of hundreds of classes but maybe just a few tens of teams.

Hope this helps seeing the big picture. Enough hand-waving for today, back to the code! 🙂

Written by Stephan Herrmann

September 20, 2011 at 15:39

Combination of Concerns


If you are interested in more than two of these …

Image

… you’ll surely enjoy this:

Hands-on introduction to Object Teams

See you at EclipseCon!

Written by Stephan Herrmann

March 17, 2011 at 16:52

Null annotations: prototyping without the pain


So, I’ve been working on the annotations @NonNull and @Nullable so that the Eclipse Java compiler can statically detect your NullPointerExceptions at compile time (see also bug 186342).

By now it’s clear this new feature will not be shipped as part of Eclipse 3.7, but that needn’t stop you from trying it, as I have uploaded the thing as an OT/Equinox plugin.

Behind the scenes: Object Teams

Today’s post shall focus on how I built that plugin using Object Teams, because it greatly shows three advantages of this technology:

  • easier maintenance
  • easier deployment
  • easier development

Before I go into details, let me express a warm invitation to our EclipseCon tutorial on Thursday morning. We’ll be happy to guide your first steps in using OT/J for your most modular, most flexible and best maintainable code.

Maintenance without the pain

It was suggested that I should create a CVS branch for the null annotation support. This is a natural choice, of course. I chose differently, because I’m tired of double maintenance: I don’t want to spend my time applying patches from one branch to the other and mending merge conflicts, so I avoid it wherever possible. You don’t think this kind of compiler enhancement can be developed outside the HEAD stream of the compiler without incurring double maintenance? Yes it can. With OT/J we have the tight integration that is needed for implementing the feature while keeping the sources well separated.

The code for the annotation support even lives in a different source repository, but the runtime effect is the same as if all this already were an integral part of the JDT/Core. Maybe I should say that for this particular task the integration using OT/J causes a somewhat noticeable performance penalty. The compiler does an awful lot of work, and hooking into this busy machine comes at a price. So yes, at the end of the day this should be re-integrated into the JDT/Core. But for the time being the OT/J solution serves its purpose well (in most other situations you won’t even notice any impact on performance, plus we already have further performance improvements in the OT/J runtime in our development pipeline).

Independent deployment

Had I created a branch, the only way to get this to you early adopters would have been via a patch feature. I do have some routine in deploying patch features but they have one big drawback: they create a tight dependency to the exact version of the feature which you are patching. That means, if you have the habit of always updating to the latest I-build of Eclipse I would have to provide a new patch feature for each individual I-build released at Eclipse!

Not so for OT/Equinox plug-ins: in this particular case I have a lower bound: the JDT/Core must be from a build ≥ 20110226. Other than that, the same OT/J-based plug-in seamlessly integrates with any Eclipse build. You may wonder how I can be so sure. There could be changes in the JDT/Core that could break the integration. Theoretically: yes. Actually, as a JDT/Core committer I’ll be the first to know about those changes. But most importantly: from many years’ experience of using this technology I know such breakage is very seldom, and should a problem occur, it can be fixed in the blink of an eye.

As a special treat the OT/J-based plug-in can even be enabled/disabled dynamically at runtime. The OT/Equinox runtime ships with the following introspection view:
Image
Simply unchecking the second item dynamically disables all annotation based null analysis, consistently.

Enjoyable development

The Java compiler is a complex beast. And it’s not exactly small: over 5 MB of source spread over 323 classes in 13 packages, the central package of these (ast) comprising no fewer than 109 classes. To add insult to injury: each line of this code could easily get you puzzling for a day or two. It ain’t easy.

If you are a wizard of the patches, feel free to look at the latest patch from the bug. Does that look like something you’d like to work on? Not after you’ve seen how nice & clean things can be, I suppose.

First level: overview

Instead of unfolding the package explorer until it shows all relevant classes (by which time the scrollbar will probably be too small to grab), a quick look into the outline suffices to see everything relevant:
Image
Here we see one top-level class, actually a team class. The class encapsulates a number of role classes containing the actual implementation.

Navigation to the details

Each of those role classes is bound to an existing class of the compiler, like:

   protected class MessageSend playedBy MessageSend { ...
(Don’t worry about identical names, already from the syntax it is clear that the left identifier MessageSend denotes a role class in the current team, whereas the second MessageSend refers to an existing base class imported from some other package).

Ctrl-click on the right-hand class name takes you to that base class (the packages containing those base classes are indicated in the above screenshot). This way the team serves as the single point of reference from which each affected location in the base code can be reached with a single mouse click – no matter how widely scattered those locations are.

When drilling down into details, a typical role looks like this:
Image

The 3 top items are “callout” method bindings providing access to fields or methods of the base object. The bottom item is a regular method implementing the new analysis for this particular AST node, and the item above it defines a “callin” binding which causes the new method to be executed after each execution of the corresponding base method.
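In OT/J syntax, such a role might be sketched roughly like this (the binding details are invented for illustration and differ from the real JDT role; this requires the OT/J compiler, it is not plain Java):

```
protected class MessageSend playedBy MessageSend {

    // callout binding: grant the role access to a base field
    Expression getReceiver() -> get Expression receiver;

    // callin binding: run the role method after each base execution
    analyseNull <- after analyseCode;

    void analyseNull() {
        // the new null analysis for this particular AST node
    }
}
```

The arrows make the direction of each binding explicit: `->` delegates from role to base (callout), `<-` intercepts base executions (callin).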

Locality of new information flows

Since all these roles define almost normal classes and objects, additional state can easily be introduced as fields in role classes. In fact some relevant information flows of the implementation make use of role fields for passing analysis results directly from one role to another, i.e., the new analysis mostly happens by interaction among the roles of this team.

Selective views, e.g., on inheritance structures

As a final example consider the inheritance hierarchy of class Statement: In the original this is a rather large tree:
Image
Way too large, actually, to be fully unfolded in a single screenshot. But for the implementation at hand most of these classes are irrelevant. So at the role layer we’re happy to work with this reduced view:

Image
This view is not obtained by any filtering in the IDE, but that’s indeed the real full inheritance tree of the role class Statement. This is just one little example of how OT/J supports the implementation of selective views. As a result, when developing the annotation based null analysis, the code immediately provides a focused view of everything that is relevant, where relevance is directly defined by design intention.

A tale from the real world

I hope I could give an impression of a real world application of OT/J. I couldn’t think of a nicer structure for a feature of this complexity based on an existing code base of this size and complexity. It’s actually fun to work with such powerful concepts.

Did I already say? Don’t miss our EclipseCon tutorial 🙂

Hands-on introduction to Object Teams

See you at EclipseCon!

Written by Stephan Herrmann

March 14, 2011 at 00:18

Object Teams rocks :)


This is a story of code evolution at its best. About re-use and starting over.

During the last week or so I modernized a part of the Object Teams Development Tooling (OTDT) that had been developed some 5 years ago: the type hierarchy for OT/J. I’ll mention the basic requirements for this engine in a minute. While most of the OTDT succeeds in reusing functionality from the JDT, the type hierarchy was implemented as a full replacement of the original. This is a pretty involved little machine, which took weeks and months to get right. It provides its logic to components like Refactoring and the Type Hierarchy View.

This engine worked well for most uses, but over all those years we did not succeed in solving two remaining issues:

Give a faithful implementation for getSuperclass()
This is tricky because a role class in OT/J can have more than one superclass. Failing to implement this method, we could not support the “traditional” mode of the hierarchy view, which shows both the tree of subclasses of a focus type and the path of superclasses up to Object (this upwards path relies on getSuperclass).
Support region-based hierarchies
Here the type hierarchy is computed not only for the supertypes and subtypes of one given focus type; instead, the full inheritance structure is computed for a set of types (a “region”). This strategy is used by many JDT refactorings, and thus we could not precisely adapt some of these for OT/J.

In analyzing this situation I had to weigh these issues:

  • In its current state the implementation strategy was a show stopper for one mode of the type hierarchy view and for precise analysis in several refactorings.
  • Adding a region-based variant of our hierarchy implementation would mean re-inventing lots of stuff, both from the JDT and from our own development.
  • All this suggested discarding our own implementation and starting over from scratch.
Start over I did, though not from scratch, but from the wealth of a working JDT implementation.

Object Teams to the rescue: Let’s re-build Rome in ten days.

As mentioned in my previous post, the strength of Object Teams lies in building layers: each module sits in one layer, and integration between layers is given by declarative bindings:
Image

Applying this to the issue at hand we now actually have three layers with quite different structures:

Java Model

The bottom layer is the Java model, which implements the containment tree of Java elements: a project contains source folders, containing packages, containing compilation units, containing types, containing members. In this model each Java type is represented by an instance of IType.

Java Type Hierarchy

This engine from the JDT maintains the graph of inheritance information as a second way for navigating between ITypes. Interestingly, this module pretty closely simulates what Object Teams does natively, I may come back to that in a later post.

Object Teams Type Hierarchy

As an extension of Java, OT/J naturally supports normal inheritance using extends, but there is a second way in which an inheritance link can be established: based on the inheritance of the enclosing team:

team class EcoSystem {
   protected class Project { }
   protected class IDEProject extends Project { }
}
team class Eclipse extends EcoSystem {
   @Override
   protected class Project { }
   @Override
   protected class IDEProject extends Project { }
}
 

Here, Eclipse.Project is an implicit subclass of EcoSystem.Project simply because Eclipse is a subclass of EcoSystem and both classes have the same simple name Project. I will not go into motivation and consequences of this language design (that’ll be a separate post — which I actually promised many weeks ago).

Looking at the technical challenge we see that the implicit inheritance in OT/J adds a third layer, in which classes are connected in yet another graph.

Three Layers — Three Graphs

Looking at the IType representation of Eclipse.IDEProject we can ask three questions:

Question                            Code                            Answer
What is your containing element?    type.getParent()                Eclipse
What is your superclass?            hierarchy.getSuperclass(type)   Eclipse.Project
What is your implicit superclass?   ??                              EcoSystem.Project

Each question is implemented in a different layer of the system. Things get a little complicated when asking a type for all its supertypes, which requires collecting the answers from both the JDT hierarchy layer and the OT hierarchy. Yet the trickiest part was giving an implementation for getSuperclass().
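To make the two-graph traversal concrete, here is a minimal plain-Java sketch of collecting all supertypes by following both the explicit and the implicit superclass links. This is not the actual JDT/OTDT code; TypeNode and all names are made up for illustration, and the demo wires up the EcoSystem/Eclipse example shown above.

```java
import java.util.*;

// Hypothetical minimal model: each type may have an explicit superclass
// (owned by the JDT hierarchy layer) and an implicit superclass (owned by
// the OT hierarchy layer).
class TypeNode {
    final String name;
    TypeNode explicitSuper;   // link from the JDT hierarchy layer
    TypeNode implicitSuper;   // link from the OT hierarchy layer
    TypeNode(String name) { this.name = name; }
}

class AllSupertypes {
    // Walk both links transitively, visiting each type once.
    static Set<String> allSupertypes(TypeNode t) {
        Set<String> result = new LinkedHashSet<>();
        Deque<TypeNode> work = new ArrayDeque<>();
        work.push(t);
        while (!work.isEmpty()) {
            TypeNode cur = work.pop();
            for (TypeNode sup : new TypeNode[]{ cur.explicitSuper, cur.implicitSuper })
                if (sup != null && result.add(sup.name))
                    work.push(sup);
        }
        return result;
    }

    // The example from this post: Eclipse.IDEProject has two direct superclasses.
    static Set<String> demo() {
        TypeNode ecoProject = new TypeNode("EcoSystem.Project");
        TypeNode ecoIde = new TypeNode("EcoSystem.IDEProject");
        ecoIde.explicitSuper = ecoProject;
        TypeNode eclProject = new TypeNode("Eclipse.Project");
        eclProject.implicitSuper = ecoProject;
        TypeNode eclIde = new TypeNode("Eclipse.IDEProject");
        eclIde.explicitSuper = eclProject;
        eclIde.implicitSuper = ecoIde;
        return allSupertypes(eclIde);
    }
}
```

For Eclipse.IDEProject, demo() collects exactly the three supertypes Eclipse.Project, EcoSystem.IDEProject and EcoSystem.Project, merged from both graphs.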

An "Impossible" Requirement

There is a hidden assumption behind method getSuperclass() which is pervasive in large parts of the implementation, especially most refactorings:

When searching all methods that a type inherits from other types, looping over getSuperclass() until you reach Object will bring you to all the classes you need to consider, like so:

IType currentType = /* some init */;
while (currentType != null) {
   findMethods(currentType, /*some more arguments*/);
   currentType = hierarchy.getSuperclass(currentType);
}
 

There are lots and lots of places implemented using this pattern. But how do you do that if a class has multiple superclasses? I cannot change all the existing code to use recursive functions rather than this single loop!

Looking at Eclipse.IDEProject we have two direct superclasses: Eclipse.Project (normal inheritance, “extends”) and EcoSystem.IDEProject (OT/J implicit inheritance), which cannot both be answered by a single call to getSuperclass(). The programming language theory behind OT/J, however, has a simple answer: linearization. Thus, the superclasses of Eclipse.IDEProject are:

  • Eclipse.IDEProject → EcoSystem.IDEProject → Eclipse.Project → EcoSystem.Project

… in this order. And this is how this shall be rendered in the hierarchy view:
Image
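The order above can be computed with a C3-style merge over both inheritance dimensions. The following is only a sketch, under the assumption (consistent with the order shown) that the implicit superclass takes precedence over the explicit one; it is not the actual OTDT implementation, and all names are made up.

```java
import java.util.*;

// C3-style linearization over two inheritance dimensions:
// implicit superclass first, then explicit superclass.
class OTLinearization {
    // type name -> { implicit superclass, explicit superclass }; nulls allowed
    static final Map<String, String[]> PARENTS = new HashMap<>();
    static {
        // the example from the post
        PARENTS.put("Eclipse.IDEProject",
                new String[]{"EcoSystem.IDEProject", "Eclipse.Project"});
        PARENTS.put("EcoSystem.IDEProject", new String[]{null, "EcoSystem.Project"});
        PARENTS.put("Eclipse.Project", new String[]{"EcoSystem.Project", null});
    }

    static List<String> linearize(String type) {
        List<List<String>> seqs = new ArrayList<>();
        List<String> direct = new ArrayList<>();
        for (String p : PARENTS.getOrDefault(type, new String[2]))
            if (p != null) { seqs.add(linearize(p)); direct.add(p); }
        seqs.add(direct);
        List<String> result = new ArrayList<>();
        result.add(type);
        merge(seqs, result);
        return result;
    }

    // standard C3 merge: repeatedly pick a head that occurs in no tail
    static void merge(List<List<String>> seqs, List<String> out) {
        while (true) {
            seqs.removeIf(List::isEmpty);
            if (seqs.isEmpty()) return;
            String next = null;
            for (List<String> seq : seqs) {
                String head = seq.get(0);
                if (seqs.stream().noneMatch(s -> s.indexOf(head) > 0)) {
                    next = head;
                    break;
                }
            }
            if (next == null) throw new IllegalStateException("inconsistent hierarchy");
            out.add(next);
            for (List<String> seq : seqs) seq.remove(next);
        }
    }
}
```

Running linearize("Eclipse.IDEProject") on this toy hierarchy reproduces exactly the four-element order listed above.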

The final challenge: what should this query answer:

        getSuperclass(ecoSystemIDEProject);

According to the above linearization we should answer: Eclipse.Project, but only if we are in the context of the superclass chain of Eclipse.IDEProject. Talking directly to EcoSystem.IDEProject we should get EcoSystem.Project! In other words: the function needs to be smarter than what it can derive from its arguments.

Layer Instances for each Situation

Let’s go back to the layer thing:
Image
At the bottom you see the Java model (as rendered by the package explorer). In the top layer you see the OT/J type hierarchy (lets forget about the middle layer for now). Two essential concepts can be illustrated by this picture:

  • Each layer is populated with objects, and while each layer owns its objects, those objects connected by a red line between layers are almost the same: they represent the same concept.
  • The top layer can be instantiated multiple times: for each focus type you create a new OT/J hierarchy instance, populated with a fresh set of objects.

It is the second bullet that resolves the “impossible” requirement: the objects within each layer instance are wired differently, implementing different traversals. Depending on the focus type, each layer may answer the getSuperclass(type) question differently, even for the same argument.
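A minimal sketch of that second bullet, assuming each hierarchy instance simply stores the linearization of its focus type; the class and method names are illustrative, not the actual OTDT API.

```java
import java.util.Arrays;
import java.util.List;

// One hierarchy instance per focus type: getSuperclass answers
// "the next type in this instance's focus linearization".
class FocusHierarchy {
    private final List<String> order; // linearization of this instance's focus type

    FocusHierarchy(String... linearization) {
        this.order = Arrays.asList(linearization);
    }

    // Context-sensitive: the answer depends on which focus type this
    // instance was built for, not only on the argument.
    String getSuperclass(String type) {
        int i = order.indexOf(type);
        return (i >= 0 && i + 1 < order.size()) ? order.get(i + 1) : null;
    }
}
```

With a focus of Eclipse.IDEProject, getSuperclass("EcoSystem.IDEProject") answers Eclipse.Project; with a focus of EcoSystem.IDEProject itself, the same call answers EcoSystem.Project.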

The first bullet answers how these layers are integrated into a system: Conceptually we are speaking about the same Java model elements (IType), but we superimpose different graph structure depending on our current context.

All layers basically talk about the same objects,
but in each layer these objects are connected in a specific way, as suits the task at hand.

Inside the hierarchy layer we actually do not handle IType instances directly; instead we have roles that each represent one given IType. Those roles contain all the inheritance links needed for answering the various questions about inheritance relations (direct/indirect, explicit/implicit, super/sub).

A cool thing about Object Teams is that having different sets of objects in different layers (teams) doesn’t make the program more complex: I can pass an object from one layer into methods of another layer, and the language will quite automagically translate it into the object that sits at the other end of that red line in the picture above. Although each layer has its own view, the layers “know” that they are basically talking about the same stuff (sounds like real life, doesn’t it?).
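In plain Java, the effect of this translation (OT/J calls it lifting) can be approximated with a per-team role cache. This is only a sketch of the idea, not how the language implements it, and all names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// When a base object crosses into a layer, it is translated to that layer's
// role for it. The team caches roles, so the same base object always yields
// the same role within one layer, while different layers keep distinct roles.
class HierarchyLayer {
    class TypeRole {                // a role wrapping one base object
        final String baseType;      // stands in for an IType
        TypeRole(String baseType) { this.baseType = baseType; }
    }

    private final Map<String, TypeRole> roles = new HashMap<>();

    // "Lifting": translate a base object into this layer's role for it.
    TypeRole lift(String baseType) {
        return roles.computeIfAbsent(baseType, k -> new TypeRole(k));
    }
}
```

Lifting the same base twice in one layer yields the identical role object; lifting it in another layer yields that layer's own role.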

Summing up

OK, I haven’t shown any code of the new hierarchy implementation (yet), but here’s a sketch of before-vs.-after:

Code Size
The new implementation of the hierarchy engine has about half the size of the previous implementation (because it need not repeat anything that’s already implemented in the Java hierarchy).
Integration
The previous implementation had to be individually integrated into each client module that normally uses Java hierarchies and then should use an OT hierarchy instead. After the re-implementation, the OT hierarchy is transparently integrated such that no clients need to be adapted (accounting for even more code that could be discarded).
Linearization
Using the new implementation, getSuperclass() answers with the correct, context-sensitive linearization, as shown in the screenshot above – something the old implementation failed to achieve.
Region based hierarchies
The old implementation was incompatible with building a hierarchy for a region. For the new implementation it doesn’t matter whether it’s built for a single focus type or for a region, so many clients now work better without any additional effort.

The previous implementation only scratched the surface – it literally worked around the actual issue (which is: the Java type hierarchy is not aware of OT/J implicit inheritance). The new solution solves the issue right at its core: the new team OTTypeHierarchies assists the original type hierarchy so that its answers indeed respect OT/J’s implicit inheritance. By performing this adaptation at the issue’s core, the solution automatically radiates out to all clients. So I expect that investing a few days in re-writing the implementation will pay off in no time. In particular, improving the (already strong) refactoring support for OT/J is now much, much easier.

Lessons learned: when your understanding of a problem improves, you’ll be able to discard your old workarounds and move the solution closer to the core. This reduces code size, makes the solution more consistent, enables you to solve issues you previously weren’t able to solve, and transparently provides the solution to a wider range of client modules.
Moving your solution into the core could easily result in a design where a few bloated and tangled core modules do all the work, mocking the very idea of modularity. This can be avoided by a technology that is based on some concept of perspectives and self-contained layers, as supported by teams in OT/J.

Need I say, how much fun this re-write was? 🙂

Written by Stephan Herrmann

August 18, 2010 at 22:52

Object Teams Final 0.7.0

with 4 comments

On a day like this (2010/07/07), it should be evident what is better than one or a few stars: a strong team!

But when you look at what happens in your software at runtime, all you see is a soup of individuals (called objects) running around all over the place (remember: standard module systems do not create boundaries around runtime objects!).

In all humbleness: are you surprised to learn that it was a German invention, back in 2002, that objects should team up?

As a consequence of that idea, today – 8 years later –
the Object Teams Development Tooling 0.7.0 is released!
(supersedes version 1.4.0 from objectteams.org)

Object Teams banner

This is the first release after the move of the project from objectteams.org to Eclipse.org, so this is the moment to express a big thank-you to all who helped along the way: Mentors, EMO, Legal, the Tools-PMC, and – of course – the many Contributors. This is: a team-effort!

Now you might say: that’s a pretty scattered team: some people at the Eclipse Foundation in Ottawa, some students in Berlin, people from Austin, and where-not. But that’s actually the point about a team: you start from a set of individuals who initially need not have any particular relationship. Then you create a team in which each individual takes one particular role. This means you further specialize these existing individuals (you wouldn’t want a team of 11 goal keepers, would you? Even with a Manuel Neuer giving a perfect forward pass, it takes a Miroslav Klose to make the goal). And then you unite all members of the team towards a common goal, giving the team a new identity, so that the team acts as one.

Still, each team member brings into the team the strengths of his particular background, meaning: the individuals do not completely disappear, but some properties of the individuals shine through when they play their roles in the team.

Now, what’s that got to do with software?

Given that you already have the core of an application implemented, it is now your task to implement one or two more user stories on top of the existing code. You look at the existing classes and mark those that in some way or other are related to what you need to implement. One user story relates to all classes marked red, another one to those marked greenish, etc. (and do expect overlap):

Image

How do you implement the user story that involves all the red classes, such that the new implementation sits in a nice new module that concentrates on only this one task/user story?

Consider the red entities as plain individuals; they don’t know about the new task they should contribute to. Also keep in mind that not all instances of those red classes will participate in the new user story. What we need to do is: specialize a few of those individuals so they can play particular roles with respect to the new task, and unite those roles within a new team.

If you live in flat-land, this is tremendously difficult, but if you’re able to just add one more dimension to the picture …

Image

… the solution is very straight forward:

Image

Now:

  • The implementation of the red user story is a strong, cohesive team.
  • Its members are roles specialized in their particular sub-tasks.
  • Each role relates to one individual from the application core and specializes what is already given towards what the team requires.
  • How exactly each role relates to its base is declared using two kinds of atomic bindings: callout and callin method bindings (see, e.g., this post).
  • These roles only affect the system as long as the team context is active, and activation happens per team instance (with options for fine-tuning).
  • Other teams may be formed for other purposes / user stories (see the greenish team)
  • Even if you start already with this 3-D picture, with Object Teams you can always add one more dimension to your architecture, if needed.
Free your mind: there are no limits to modularity, if only you allow your objects to form strong, and hopefully successful, teams.

And now, go get it, do the quick-start or try some examples.

Enjoy the Game!

Written by Stephan Herrmann

July 7, 2010 at 14:31

How many concepts for modules do we need?

with 13 comments

The basic elements of programming are methods. If you have many methods you want to group them in classes. If you have many classes you want to group them in packages. If you have many packages you want to group them in bundles. If you have many bundles you group them in features, but if you have many features …stop!, STOP!!!

Isn’t this insane? Every time we scale up to the next level we need a new programming concept? Like, someone invented the +1 operator, and I can trump him by inventing a +2 operator, and you trump me by …? Haven’t we learned the 101 of programming: abstraction? I guess not many folks in Java-land are aware of a language called gbeta, where classes and methods are unified to “patterns” and no other kinds of modules are needed than patterns. It’s good news that the guy behind gbeta receives one of this year’s prestigious Dahl-Nygaard prizes: Erik Ernst. Object Teams owes much to Erik, and I will speak more about his contributions in a sequel post.

Another really smart guy is Gilad Bracha, who, after working on Java generics (together with Erik, actually) and even JSR 294, decided to do something better – actually doubleplusgood. While he upholds the distinction between methods and classes, he makes strong claims that no modules other than classes are needed, if, and here is the rub: if classes can be nested (see “Modules as Objects in Newspeak“).

Modules in Java: man or boy?

Let me briefly check this hypothesis:

The proliferation of module concepts in Java is due to the lack of truly nestable modules.

Which of the above mentioned concepts supports nesting? Features support inclusion, which isn’t exactly nesting, but since features are actually defied by OSGi purists, I don’t want to burn my fingers by promoting features. Bundles cannot be nested – bummer! Some people actually think packages support nesting, with two reasons for believing so: packages are mapped to directories in a file system or in a jar, and directories can be nested; plus: packages have compound names. However, semantically the dot in a package name is just a regular part of the name; it has no special semantics. Speaking of package foo.bar being a “sub-package” of foo is strongly misleading, as the relation between the two is in no way different from the relation between any other pair of packages. All packages are equal. Perhaps you recall the superpackage hero (or the strawman of that hero), which introduced the capability that a public class can actually be seen only by classes from specific superpackages. And: superpackages were designed to support nesting. Now superpackages are “superseded” by Jigsaw modules, which don’t support nesting. Great, they invented the +3 operator. That’s award-winning!

Finally, classes: surprisingly, yes, classes can be nested. Unfortunately, it still shines through today that nested classes are an after-thought; the real core of Java 1.0 was designed without them. E.g., serializing nested classes is strongly discouraged. How many O/R mappers support non-static inner classes? The last time I looked: zero!
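The reason is easy to demonstrate: a non-static inner class instance carries a hidden, compiler-generated field referencing its enclosing instance, so a mapper cannot construct an Inner without first having an Outer. A small reflection experiment (class names are made up for the demo) makes that synthetic field visible:

```java
import java.lang.reflect.Field;

// Every non-static inner class gets a synthetic field (typically this$0)
// linking each instance to its enclosing instance.
class Outer {
    class Inner { }

    // Return the types of all compiler-generated fields of Inner.
    static String syntheticFieldTypes() {
        StringBuilder sb = new StringBuilder();
        for (Field f : Inner.class.getDeclaredFields())
            if (f.isSynthetic())
                sb.append(f.getType().getSimpleName()); // the hidden link to Outer
        return sb.toString();
    }
}
```

With javac, the only synthetic field of Inner is the reference to Outer, which is exactly what serialization frameworks and O/R mappers stumble over.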

Nested classes: flaws and remedies (part 1)

I see three conceptual flaws in the design of nested classes in Java:

  1. scoping (2 problems actually)
  2. forced representation
  3. poor integration with inheritance

I discuss the first two in this post, the third, inheritance, deserves a post on its own right.
For each problem I will briefly show how it is resolved in OT/J. First off, I should mention that OT/J maintains compatibility with Java by applying any modified semantics only inside classes with the team modifier.

Scoping

Consider the following Java snippet:

public class City {
   TownHall theTownHall = new TownHall();
   class TownHall {
       class Candidate { }
       Candidate[] candidates;
       void voteForMayor(Candidate candidate) { ... }
   }
   class Citizen {
       void performCivilDuties() {
           ...
           theTownHall.voteForMayor(theTownHall.candidates[n]);
       }
   }
}
 

In this code we can actually make all methods, fields and nested classes private, to the effect that external clients see none of these internal details of a City, whereas classes on the inside can blithely communicate with each other. Thus we have created a wonderful module City which can use all accomplishments of object-oriented programming for its internal structure – well hidden from the outside. In Java, however, this is flawed in two ways:

  1. Nested classes cannot protect any members from access by sibling classes, so I (a Citizen) can actually see the wallets of all mayoral Candidates (well, maybe that’s not a bug but a feature). It would be much more useful if only inside-out visibility were given, i.e., a Citizen can see all members of his/her City, but not the inverse – the City looking into its glass Citizens.
  2. Scoping rules in Java are purely static, i.e., permissions are given to classes, not objects. As a result I could not only vote in my home city but in any City (and every person can see the wallet of any other person, etc.).
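Flaw (1) is easy to reproduce in plain Java. In the toy classes below (renamed so as not to clash with the City example above), a sibling class reads a private field without any complaint from the compiler:

```java
// Nesting grants sibling classes full mutual access, because Java access
// control stops at the outermost class.
class GlassCity {
    static class Candidate {
        private int wallet = 1000;   // "private", yet ...
    }
    static class Citizen {
        int peekAtWallet(Candidate c) {
            return c.wallet;         // ... a sibling can read it anyway
        }
    }
}
```

A Citizen peeking at a Candidate's wallet compiles and runs just fine, which is precisely the point of complaint.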

OT/J solves (1) by reinterpreting the access modifiers. Within a team class a private field Candidate.wallet, e.g., would not be visible outside its class, whereas a Citizen could still access theTownHall if this field were private.

(2) is basically solved by applying different rules to self-references (incl. outer self) than to explicitly qualified expressions, so City.this.theTownHall (OK, uses the enclosing this instance) applies different rules than newYork.theTownHall (not OK if theTownHall is private).

Well, issue (1) is a matter of tedious details, where I see no excuse for Java’s “solution”. Issue (2) has always been a point of differentiation between good old Eiffel, e.g., and its “successor” Java. This issue stands at a conceptual crossroads: are we interested in code nesting, or are we more interested in expressing how run-time objects are grouped? I personally don’t see much use in making the definition of a Citizen a nested part of the definition of a City, but speaking of many Citizens (instances) forming a City (instance) makes a lot of sense.

Forced Representation

By this I mean the fact that programmers are forced to store all inner classes textually embedded in their enclosing class. After we’ve already seen the discrepancy between the semantics and the representation of a package (a flat structure stored in nested directories), now we see the opposite: semantic nesting is forcefully tied to representational nesting. This sounds logical until you start to write significant amounts of code as one big outer class with lots of (deeply) nested classes inside. You probably wouldn’t even think of doing this, because pretty soon the file would become unmanageably huge. This is a very mundane reason why class nesting in Java simply doesn’t scale.

The solution in OT/J is pretty trivial. The following structure

// file community/City.java
package community;
public team class City {
    protected class Citizen {}
}
 

is strictly equivalent to these two files:

// file community/City.java
package community;
/**
 * @role Citizen
 */
public team class City {
}
 
// file community/City/Citizen.java
team package community.City;
protected class Citizen {}
 

The trick is: City is both a class and a special package. So semantically the team contains the nested Citizen but physical storage may happen in a separate file stored in a directory community/City/ that corresponds to the enclosing team class.

As a special treat the package explorer of the OTDT provides two display options for team packages: logical vs. physical. In the physical view the 2-files solution looks like this:
Image

Just by selecting the logical view, where the team-package is not shown, the display changes to
Image
Small files in separate directories are good for team support etc. Logical nesting is good for modularity. Why not have the cake and eat it?

These are just little tricks introduced by OT/J and the OTDT. But with these, there’s no excuse any longer for not using class nesting even for large structures. And remember: this is real nesting, so you can use the same concept for 1-level, 2-level, … n-level containment. Good news for developers, bad news for technology designers, because now we simply don’t need any “superpackages”, “modules”, etc. I actually wonder how nestable classes could be unified with bundles, but that’s still a different story to be told on a different day.

Before that, I will discuss how nesting and inheritance produce tremendous synergy (instead of awkwardly standing side by side). From a research perspective this was solved some 9 years ago. I strongly believe that the time is ripe to bring these fruits from the ivory towers into the hands of real-world developers. Stay tuned for part 2.

Written by Stephan Herrmann

March 20, 2010 at 22:50
