
Wednesday, December 2, 2020

Add Checkstyle support to Eclipse, Maven, and Jenkins

After PMD and SpotBugs, we will have a look at Checkstyle integration into the IDE and our Maven builds. Parts of this tutorial are already covered by Lars' tutorial on Using the Checkstyle Eclipse plug-in.

Step 1: Add Eclipse IDE Support

First install the Checkstyle Plugin via the Eclipse Marketplace. Before we enable the checker, we need to define a ruleset to run against. As in the previous tutorials, we will set up project-specific rules backed by one ruleset that can also be used by Maven later on.

Create a new file for your rules in <yourProject>.releng/checkstyle/checkstyle_rules.xml. If you are familiar with writing rules, just add them. If you are new to Checkstyle, you might want to start with one of its default rulesets.
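In case you want to start really small, a minimal checkstyle_rules.xml could look like this (the two modules are just examples, pick whatever checks suit your project):

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
	"-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
	"https://checkstyle.org/dtds/configuration_1_3.dtd">
<module name="Checker">
	<!-- TreeWalker hosts all checks that operate on the Java syntax tree -->
	<module name="TreeWalker">
		<module name="UnusedImports" />
		<module name="NeedBraces" />
	</module>
</module>
```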

Once we have some rules, we need to add them to our projects. To do so, right-click on a project and select Checkstyle/Activate Checkstyle. This will add the project nature and a builder. To make use of our common ruleset, create a file <project>/.checkstyle with the following content.

<?xml version="1.0" encoding="UTF-8"?>

<fileset-config file-format-version="1.2.0" simple-config="false" sync-formatter="false">
  <local-check-config name="Skills Checkstyle" location="/yourProject.releng/checkstyle/checkstyle_rules.xml" type="project" description="">
    <additional-data name="protect-config-file" value="false"/>
  </local-check-config>
  <fileset name="All files" enabled="true" check-config-name="Skills Checkstyle" local="true">
    <file-match-pattern match-pattern=".java$" include-pattern="true"/>
  </fileset>
</fileset-config>

Make sure to adapt the name and location attributes of local-check-config according to your project structure.

Checkstyle will now run automatically on builds or can be triggered manually via the context menu: Checkstyle/Check Code with Checkstyle.

Step 2: Modifying Rules

While we had to do our setup manually, we can now use the UI integration to adapt our rules. Select the Properties context entry of a project and navigate to Checkstyle, page Local Check Configurations. There select your ruleset and click Configure... The following dialog allows you to add/remove rules and to change rule properties. All your changes are written back to the checkstyle_rules.xml file we created earlier.
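Under the hood each configured rule simply gets property nodes in checkstyle_rules.xml. Relaxing the method length check, for example, might look like this (the value is illustrative):

```xml
<module name="TreeWalker">
	<module name="MethodLength">
		<!-- flag methods longer than 80 lines instead of the default -->
		<property name="max" value="80" />
	</module>
</module>
```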

Step 3: Maven Integration

We need to add the Maven Checkstyle Plugin to our build. Add the following section to your master pom:

	<properties>
		<maven.checkstyle.version>3.1.1</maven.checkstyle.version>
	</properties>

	<build>
		<plugins>
			<!-- enable checkstyle code analysis -->
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-checkstyle-plugin</artifactId>
				<version>${maven.checkstyle.version}</version>
				<configuration>
					<configLocation>../../releng/yourProject.releng/checkstyle/checkstyle_rules.xml</configLocation>
					<linkXRef>false</linkXRef>
				</configuration>

				<executions>
					<execution>
						<id>checkstyle-integration</id>
						<phase>verify</phase>
						<goals>
							<goal>check</goal>
						</goals>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>

In the configuration we reference the same ruleset we use for the IDE plugin. Make sure the relative path matches your project setup. In the provided setup, execution is bound to the verify phase.

Step 4: File Exclusions

Excluding files has to be handled separately for the IDE and Maven. The Eclipse plugin allows you to define inclusions and exclusions via file-match-pattern entries in the .checkstyle configuration file. To exclude a certain package use:

  <fileset name="All files" enabled="true" check-config-name="Skills Checkstyle" local="true">
    ...
    <file-match-pattern match-pattern="org.yourproject.generated.package.*$" include-pattern="false"/>
  </fileset>

In Maven we need to add exclusions via the plugin configuration section. Typically such exclusions go into the pom of a specific project rather than the master pom:

	<build>
		<plugins>
			<!-- remove generated resources from checkstyle code analysis -->
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-checkstyle-plugin</artifactId>
				<version>${maven.checkstyle.version}</version>

				<configuration>
					<excludes>**/org/yourproject/generated/package/**/*</excludes>
				</configuration>
			</plugin>
		</plugins>
	</build>

Step 5: Jenkins Integration

If you followed my previous tutorials on code checkers, then this is business as usual: use the warnings-ng plugin on Jenkins to track our findings:

	recordIssues tools: [checkStyle()]

Try out the live chart on the skills project.
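Embedded in a scripted pipeline, the whole step might look like this (stage name and build command are placeholders for your setup):

```groovy
node {
	stage('Static Analysis') {
		// run the maven build including the checkstyle:check execution
		sh 'mvn clean verify'
		// collect and chart the checkstyle findings
		recordIssues tools: [checkStyle()]
	}
}
```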

Tuesday, November 24, 2020

Add SpotBugs support to Eclipse, Maven, and Jenkins

SpotBugs (the successor of FindBugs) is a tool for static code analysis, similar to PMD. Both tools help to detect bad code constructs which might need improvement. As they partly detect different issues, they complement each other well and may be used simultaneously.

Step 1: Add Eclipse IDE Support

The SpotBugs Eclipse Plugin can be installed directly via the Eclipse Marketplace.

After installation, projects can be configured to use it from the project's Properties context menu. Navigate to the SpotBugs category and enable all checkboxes on the main page. Further, set Minimum rank to report to 20 and Minimum confidence to report to Low.


Once done, SpotBugs immediately scans the project for problems. Found issues are displayed as custom markers in editors. They are also visible in the Bug Explorer view as well as in the Problems view.

SpotBugs also adds label decorations to elements in the Package Explorer. If you do not like these, disable all Bug count decorator entries in Preferences/General/Appearance/Label Decorations.

Step 2: Maven Integration

Integration is done via the SpotBugs Maven Plugin. To enable it, add the following section to your master pom:

	<properties>
		<maven.spotbugs.version>4.1.4</maven.spotbugs.version>
	</properties>

	<build>
		<plugins>
			<!-- enable spotbugs code analysis -->
			<plugin>
				<groupId>com.github.spotbugs</groupId>
				<artifactId>spotbugs-maven-plugin</artifactId>
				<version>${maven.spotbugs.version}</version>

				<configuration>
					<effort>Max</effort>
					<threshold>Low</threshold>
					<fork>false</fork>
				</configuration>

				<executions>
					<execution>
						<id>spotbugs-integration</id>
						<phase>verify</phase>
						<goals>
							<goal>spotbugs</goal>
						</goals>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>

The execution entry makes sure that the spotbugs goal is automatically executed during the verify phase. If you removed the execution section, you would have to call the spotbugs goal separately:

mvn spotbugs:spotbugs

Step 3: File Exclusions

You might have code that you do not want to get checked (e.g. generated files). Exclusions need to be defined in an XML file. A simple filter on package level looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<FindBugsFilter>
    <!-- skip EMF generated packages -->
    <Match>
        <Package name="~org\.eclipse\.skills\.model.*" />
    </Match>
</FindBugsFilter>

See the documentation for a full description of filter definitions.

Once defined, this file can be used from the SpotBugs Eclipse plugin as well as from the Maven setup.
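Filters can also match more selectively, e.g. suppressing a single bug pattern in test classes only (the class regex and bug pattern below are just examples):

```xml
<Match>
	<!-- only applies to classes whose name contains "Test" -->
	<Class name="~.*\.Test.*" />
	<!-- suppress only this specific bug pattern -->
	<Bug pattern="EI_EXPOSE_REP" />
</Match>
```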

To simplify the Maven configuration we can add the following profile to our master pom:

	<profiles>
		<profile>
			<!-- apply filter when filter file exists -->
			<id>auto-spotbugs-exclude</id>
			<activation>
				<file>
					<exists>.settings/spotbugs-exclude.xml</exists>
				</file>
			</activation>

			<build>
				<plugins>
					<!-- enable spotbugs exclude filter -->
					<plugin>
						<groupId>com.github.spotbugs</groupId>
						<artifactId>spotbugs-maven-plugin</artifactId>
						<version>${maven.spotbugs.version}</version>

						<configuration>
							<excludeFilterFile>.settings/spotbugs-exclude.xml</excludeFilterFile>
						</configuration>
					</plugin>
				</plugins>
			</build>
		</profile>
	</profiles>

It gets enabled automatically whenever a file .settings/spotbugs-exclude.xml exists in the current project.

Step 4: Jenkins Integration

Like with PMD, we again use the warnings-ng plugin on Jenkins to track our findings:

	recordIssues tools: [spotBugs(useRankAsPriority: true)]

Try out the live chart on the skills project.

Final Thoughts

PMD is smoother to integrate as it stores its rulesets in a common file which can be shared by Maven and the Eclipse plugin. SpotBugs currently requires managing rulesets separately. Still, both can be set up in a way that users automatically get the same warnings in Maven and the IDE.

Friday, November 20, 2020

Add Code Coverage Reports to Eclipse, Maven, and Jenkins

Code coverage may provide some insights into your tests. Coverage reports show which classes, lines of code, and conditional branches are executed by your tests. A high coverage percentage does not automatically mean that your tests are great - you might not have a single assertion in your test code - but at least it can give you an impression of the dark areas in your code base.

This article is heavily based on Lorenzo Bettini's article on JaCoCo Code Coverage and Report of multiple Eclipse plug-in projects, so the credits for this setup are his!

Step 1: Eclipse IDE Setup

Coverage in Java projects is typically tracked with the JaCoCo library. The corresponding plugin for Eclipse is called EclEmma and is available via the Eclipse Marketplace.

After installation you have a new run target that adds coverage information to your execution. All you need to do is rerun your unit tests and check out the Coverage view.


Multiple coverage sessions can be combined into one. This allows you to accumulate the results of multiple unit tests into a single coverage report.

Step 2: Tycho integration

For the next steps I assume that you basically followed my Tycho tutorials and have a similar setup.

First we need to enable JaCoCo in our builds:

	<build>
		<plugins>
			<!-- enable JaCoCo code coverage -->
			<plugin>
				<groupId>org.jacoco</groupId>
				<artifactId>jacoco-maven-plugin</artifactId>
				<version>0.8.6</version>

				<configuration>
					<output>file</output>
				</configuration>

				<executions>
					<execution>
						<id>jacoco-initialize</id>
						<phase>pre-integration-test</phase>
						<goals>
							<goal>prepare-agent</goal>
						</goals>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>

Tycho surefire executes unit tests in the Maven integration-test phase, therefore we start the agent right before it. This plugin needs to be active for any plugin of type eclipse-plugin-test (see tycho tutorial), but it is safe to put it in the master pom of your *.releng project.

Now each test run creates coverage reports. For analysis purposes we need to merge them into a single one. To do so, create a new General/Project in your workspace, named *.releng.coverage. In its pom.xml file we add a step to aggregate all reports into one:

	<build>
		<plugins>
			<plugin>
				<groupId>org.jacoco</groupId>
				<artifactId>jacoco-maven-plugin</artifactId>
				<version>0.8.6</version>
				<executions>
					<execution>
						<phase>verify</phase>
						<goals>
							<goal>report-aggregate</goal>
						</goals>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>

Afterwards we need to define dependencies for the projects containing our source code:

	<dependencies>
		<!-- Code dependencies to show coverage on -->
		<dependency>
			<groupId>com.example</groupId>
			<artifactId>com.example.plugin1</artifactId>
			<version>0.1.0-SNAPSHOT</version>
			<scope>compile</scope>
		</dependency>

		<dependency>
			<groupId>com.example</groupId>
			<artifactId>com.example.plugin2</artifactId>
			<version>0.1.0-SNAPSHOT</version>
			<scope>compile</scope>
		</dependency>
		...
	</dependencies>

Further we need dependencies to our test fragments (mind the different scope):

	<dependencies>
		...
		<!-- Test dependencies -->
		<dependency>
			<groupId>com.example</groupId>
			<artifactId>com.example.project1.test</artifactId>
			<version>0.1.0-SNAPSHOT</version>
			<scope>test</scope>
		</dependency>

		<dependency>
			<groupId>com.example</groupId>
			<artifactId>com.example.project2.test</artifactId>
			<version>0.1.0-SNAPSHOT</version>
			<scope>test</scope>
		</dependency>
		...
	</dependencies>

If unsure, have a look at a complete pom file.

Finally add the new project as a module to your master pom:

	<modules>
		...
		<module>../your.project.releng.coverage</module>
		...
	</modules>

The Maven build now generates *.releng.coverage/target/site/jacoco-aggregate/jacoco.xml, which can be picked up by various tools. You also get a nice HTML report in the same folder for free.

Step 3: Jenkins reports

While you may directly publish the HTML report on your Jenkins builds, I prefer to use the Code Coverage plugin. With a single instruction in your pipeline

	publishCoverage adapters: [jacocoAdapter(path: 'releng/com.example.releng.coverage/target/site/jacoco-aggregate/jacoco.xml')], sourceFileResolver: sourceFiles('STORE_LAST_BUILD')

it generates nice, interactive coverage reports.

You may also have a look at this live report to play around with.

Wednesday, November 18, 2020

Add PMD support to Eclipse, Maven, and Jenkins

PMD is a static code analyzer that checks your source for problematic code constructs, design patterns, and code style.

The number of code smells reported on grown projects might be huge at first, but PMD allows you to customize its rules and adapt them to your needs.

Step 1: Add PMD support to Eclipse

I am using eclipse-pmd which can be installed from the Eclipse Marketplace.

Step 2: Define a ruleset

PMD needs a ruleset to run against. It is stored as an XML file and can be global, workspace-specific, or project-specific. The choice is up to you. For Eclipse projects I typically have a "releng" project to host all my configuration files.

A default ruleset looks like this:

<?xml version="1.0"?>
<ruleset name="Custom Rules"
	xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0 https://pmd.sourceforge.io/ruleset_2_0_0.xsd">

	<description>custom ruleset</description>

	<rule ref="category/java/bestpractices.xml" />
	<rule ref="category/java/codestyle.xml" />
	<rule ref="category/java/design.xml" />
	<rule ref="category/java/documentation.xml" />
	<rule ref="category/java/errorprone.xml" />
	<rule ref="category/java/multithreading.xml" />
	<rule ref="category/java/performance.xml" />
	<rule ref="category/java/security.xml" />
</ruleset>

Store your ruleset somewhere in your workspace or on your file system.

Step 3: Enable PMD on project level

Right-click on a project in your Eclipse workspace and select Properties. In the PMD section, check Enable PMD for this project and Add... the ruleset file stored before. The Name is not important and can be chosen freely.

Your rules are live now and PMD should immediately start to add warnings to your code and the Problems view.

Step 4: Refine your rules

The default ruleset might report some issues you want to treat differently in your project. You may therefore change rules by setting parameters or disable unwanted rules entirely. To alter a rule, you first have to find it in the list of available rules. For disabling, you just need to add an exclude node to your rule settings file, e.g.:

	<rule ref="category/java/bestpractices.xml">
		<!-- logger takes care of guarding -->
		<exclude name="GuardLogStatement" />
	</rule>

Configuring a rule can be done like this:

	<rule ref="category/java/codestyle.xml/ClassNamingConventions">
		<properties>
			<property name="utilityClassPattern"
				value="[A-Z][a-zA-Z0-9]+" />
		</properties>
	</rule>

A full working ruleset as used by one of my projects can be viewed online.

Whenever you change your ruleset you need to recompile your project for the rules to be applied. You may do so by selecting Project/Clean... from the main menu.
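Besides editing the ruleset, PMD also honors the standard @SuppressWarnings annotation using rule names prefixed with "PMD.". This is handy to suppress a single finding in place without touching the ruleset (class and rule below are just examples):

```java
public class MathUtils {

	// suppress only the ShortVariable rule, and only for this method;
	// all other PMD rules remain active here
	@SuppressWarnings("PMD.ShortVariable")
	public static int sum(int[] values) {
		int s = 0;
		for (int v : values)
			s += v;
		return s;
	}
}
```

Using "PMD" as the annotation value would suppress all PMD rules for the annotated element, which I would not recommend.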

Step 5: Maven integration

Integration is done by the maven-pmd-plugin. Just add the following section to your pom:

	<build>
		<plugins>
			<!-- enable PMD code analysis -->
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-pmd-plugin</artifactId>
				<version>3.13.0</version>
				<configuration>
					<linkXRef>false</linkXRef>
					<rulesets>path/to/your/ruleset.xml</rulesets>
				</configuration>
			</plugin>
		</plugins>
	</build>

Make sure to adapt the path to your ruleset accordingly.

Afterwards run your build using

mvn pmd:pmd pmd:cpd

If you use the maven-site-plugin, you may additionally generate HTML reports of the PMD findings.

Step 6: Jenkins integration

Static reports are nice, but charts over time/commits are even better. In case you use Jenkins, you may have a look at the warnings-ng plugin. When you generate your pmd.xml files via Maven, this plugin can pick them up and draw nice reports. In a pipeline build this only needs one line:

recordIssues(tools: [cpd(), pmdParser()])

to get issue trend charts.

Try out the live chart on the skills project.

Finally, the plugin even allows comparing the amount of issues against a baseline. This allows you to add quality gates, e.g. to fail the build in case your issue count increases. I strongly encourage enforcing such rules; otherwise warnings are nice but get ignored by everybody.
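The warnings-ng plugin supports such gates directly via the qualityGates parameter of recordIssues; a sketch (thresholds are just examples) could look like:

```groovy
// mark the build unstable as soon as a single new PMD issue shows up
recordIssues tools: [pmdParser()],
	qualityGates: [[threshold: 1, type: 'NEW', unstable: true]]
```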


Tuesday, December 18, 2018

Jenkins 7: Pipeline Support

Next step in our Jenkins tutorials is to add support for pipeline builds.

Jenkins Tutorials

For a list of all jenkins related tutorials see Jenkins Tutorials Overview.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.

Step 1: Adjusting the code

Good news first: our code (including jelly) from the previous tutorial is ready to be used in a pipeline without change. There are some considerations to be taken into account when writing pipeline plugins, but we already took care of those.

In pipeline each build step needs a name to be addressed. By default this would be the class name and a call would look like this:
step([$class: 'HelloBuilder', buildMessage: 'Hello from pipeline'])

When using the @Symbol annotation on the Descriptor we can provide a nicer name for our build step.
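With @Symbol("greet") on the descriptor (the step name we will use in the pipeline script below), the call from above shortens to:

```groovy
greet buildMessage: 'Hello from pipeline'
```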

Step 2: Adding dependencies to the execution environment

Our current test target does not support pipeline jobs as we did not add the right dependencies to the pom file so far. We will first see how to add dependencies in general; afterwards we will fix the build.

To add support for pipeline, add the following definition to your pom.xml:
 <dependencies>
  <dependency>
   <groupId>org.jenkins-ci.plugins.workflow</groupId>
   <artifactId>workflow-aggregator</artifactId>
   <version>2.6</version>
   <scope>test</scope>
  </dependency>
 </dependencies>

We always need groupId, artifactId and a version. To get these parameters you would typically look up a plugin on the Jenkins plugin site. Locate the link to the source code (GitHub) and open the pom.xml of the corresponding plugin. There you will find definitions for groupId and artifactId.

Available versions can be found on the Jenkins Artifactory server (we already added this server to our pom). There navigate to the public folder, then follow the structure down, first opening the folders for the groupId followed by the artifactId. For the pipeline dependency we would open public/org/jenkins-ci/plugins/workflow/workflow-aggregator. The latest version at the time of writing is 2.6.

Setting the scope to test means that we do not have a build dependency on the plugin. Instead we only need it deployed and enabled on our test instance.

When adding build dependencies you should run
mvn -DdownloadSources=true -DdownloadJavadocs=true -DoutputDirectory=target/eclipse-classes eclipse:eclipse
like we did in the first tutorial. It will update the .classpath file of your Eclipse project, automatically adding required libraries to the build path. Be aware that the .project file gets rewritten as well!

Step 3: Dependency Resolution

We added our dependency, so everything should be done, right?
Wrong! The maven-enforcer-plugin verifies plugin dependencies for us and will detect some incompatibilities, e.g.:
Require upper bound dependencies error for org.jenkins-ci.plugins:script-security:1.39 paths to dependency are:
+-com.codeandme:builder.hello:1.0-SNAPSHOT
  +-org.jenkins-ci.plugins.workflow:workflow-aggregator:2.6
    +-org.jenkins-ci.plugins.workflow:workflow-support:2.20
      +-org.jenkins-ci.plugins:script-security:1.39
and
+-com.codeandme:builder.hello:1.0-SNAPSHOT
  +-org.jenkins-ci.plugins.workflow:workflow-aggregator:2.6
    +-org.jenkins-ci.plugins.workflow:workflow-durable-task-step:2.22
      +-org.jenkins-ci.plugins:script-security:1.39
and
+-com.codeandme:builder.hello:1.0-SNAPSHOT
  +-org.jenkins-ci.plugins.workflow:workflow-aggregator:2.6
    +-org.6wind.jenkins:lockable-resources:2.3
      +-org.jenkins-ci.plugins:script-security:1.26
...
This is one of multiple dependency conflicts detected. It seems that Maven at first resolves the dependency with the lowest version number. As some plugins need a newer version, we need to resolve the dependency on our own.

The simplest way I found is to check for the highest version of the required plugin (in the case above: script-security) and add it to the dependencies section of our pom.xml file. I was hoping for some Maven help with this process, but failed. So I ended up adding required dependencies manually until the build was satisfied.
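For the conflict shown above this means pinning script-security to the highest version reported, e.g.:

```xml
<dependencies>
	<!-- pin the version to resolve the upper bound conflict -->
	<dependency>
		<groupId>org.jenkins-ci.plugins</groupId>
		<artifactId>script-security</artifactId>
		<version>1.39</version>
		<scope>test</scope>
	</dependency>
</dependencies>
```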

You might run into other build problems, e.g. after adding a test dependency to the symbol-annotation plugin, my build started to fail, no longer being able to resolve Symbol.class. The reason is that symbol-annotation is actually a build dependency rather than a test dependency. By binding it to the test scope only, we broke the build.

Once you sorted out all dependencies (see resulting pom.xml) your test instance will be able to run pipeline jobs.

Step 4: Testing the Build Step in Pipeline

On your test instance create a new Pipeline and add the following script:
node {
    stage('Greetings') {
        greet buildDelay: 'long', buildMessage: 'Hello pipeline build'
    }
}
Coding the command line for our build step can either be done manually or by using the Pipeline Syntax helper. The link is available right below the Script section of your job configuration. Jenkins makes use of our previous jelly definitions to display a visual helper for the Hello Build step. We may set all optional parameters in the UI and let Jenkins create the command line to be used in the pipeline script.

Thursday, December 14, 2017

Debugger 11: Watch expressions

Now that we have variables working, we might also want to include watch expressions to dynamically inspect code fragments.

Debug Framework Tutorials

For a list of all debug related tutorials see Debug Framework Tutorials Overview.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.

Step 1: Provide the Watch Expression Delegate

Watch expressions are implemented via an extension point. So switch to your plugin.xml and add a new extension for org.eclipse.debug.core.watchExpressionDelegates.
The new delegate simply points to our debugModel identifier com.codeandme.debugModelPresentation.textinterpreter and provides a class implementation:
public class TextWatchExpressionDelegate implements IWatchExpressionDelegate {

 @Override
 public void evaluateExpression(String expression, IDebugElement context, IWatchExpressionListener listener) {
  if (context instanceof TextStackFrame)
   ((TextStackFrame) context).getDebugTarget().fireModelEvent(new EvaluateExpressionRequest(expression, listener));
 }
}
Delegates can decide on which context they may operate. For our interpreter we could evaluate expressions on stack frames, threads or the process, but typically evaluations take place on a dedicated StackFrame.

Step 2: Evaluation

Now we apply the usual pattern: send an event, let the debugger process it, and send some event back to the debug target. Once the evaluation is done, we inform the provided listener of the outcome.
public class TextDebugTarget extends TextDebugElement implements IDebugTarget, IEventProcessor {

 @Override
 public void handleEvent(final IDebugEvent event) {

   [...]

   } else if (event instanceof EvaluateExpressionResult) {
    IWatchExpressionListener listener = ((EvaluateExpressionResult) event).getOriginalRequest().getListener();
    TextWatchExpressionResult result = new TextWatchExpressionResult((EvaluateExpressionResult)event, this);
    listener.watchEvaluationFinished(result);    
   }
 }
The TextWatchExpressionResult uses a TextValue to represent the evaluation result. As before with variables, we may support nested child variables within the value. In case the evaluation fails for some reason, we may provide error messages which get displayed in the Expressions view.
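Stripped of all Eclipse types, the round trip described above boils down to the following plain-Java sketch (names are illustrative; the real implementation passes an IWatchExpressionListener and model events instead of a Consumer):

```java
import java.util.function.Consumer;

// request event carrying the expression and the listener to notify
class EvaluateExpressionRequest {
	private final String fExpression;
	private final Consumer<String> fListener;

	EvaluateExpressionRequest(String expression, Consumer<String> listener) {
		fExpression = expression;
		fListener = listener;
	}

	String getExpression() {
		return fExpression;
	}

	Consumer<String> getListener() {
		return fListener;
	}
}

class SimpleDebugger {
	// the debugger evaluates the expression and answers via the listener
	void handleEvent(EvaluateExpressionRequest request) {
		String result = "evaluated: " + request.getExpression();
		request.getListener().accept(result);
	}
}
```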

Debugger 10: Editing variables

In the previous tutorial we introduced variables support for our debugger. Now let's see how we can modify variables dynamically during a debug session.

Debug Framework Tutorials

For a list of all debug related tutorials see Debug Framework Tutorials Overview.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.

Step 1: Allowing for editing and trigger update

First, variables need to support editing. The variables view will then automatically provide a text input box on the value field once it is clicked by the user. This also reveals a limitation: editing variables requires the framework to interpret an input string and process it according to the target language.

The relevant changes for the TextVariable class are shown below:
public class TextVariable extends TextDebugElement implements IVariable {

 @Override
 public void setValue(String expression) {
  getDebugTarget().fireModelEvent(new ChangeVariableRequest(getName(), expression));
 }

 @Override
 public boolean supportsValueModification() {
  return true;
 }

 @Override
 public boolean verifyValue(String expression) throws DebugException {
  return true;
 }
}
verifyValue(String) and setValue(String) are used by the debug framework when a user tries to edit a variable in the UI. We do not need to update the value yet; we simply trigger an event to update the variable in the debugger.

Step 2: Variable update & refresh

As our primitive interpreter accepts any kind of text variable, there is nothing that can go wrong here. Instead of sending an update event for the changed variable, we simply use the already existing VariablesEvent to force a refresh of all variables of the current TextStackFrame:
public class TextDebugger implements IDebugger, IEventProcessor {

 @Override
 public void handleEvent(final IDebugEvent event) {

  [...]

  } else if (event instanceof ChangeVariableRequest) {
   fInterpreter.getVariables().put(((ChangeVariableRequest) event).getName(), ((ChangeVariableRequest) event).getContent());
   fireEvent(new VariablesEvent(fInterpreter.getVariables()));
  }
 }
}

Thursday, December 15, 2016

Helpers: TableViewer Text Export

Today I would like to present a small helper class that exports the content of a TableViewer to a String representation.

How to use

The default implementation creates a CSV output. Exporting the table is simple:
TableExporter exporter = new TableExporter(tableViewer);
exporter.print(outputStream);

// exporter.toString() // serializes into a String
The export honors sorting, filtering and column ordering, so your output matches what you see on the screen.

Other output formats

If you are interested in other output formats, have a look at HTMLTableExporter, which renders the same data into an HTML table. You can easily extend the base class to adapt the output format to your own needs.

Examples

A sample table rendered as CSV text:

First,Last,City
Obi-Wan,Kenobi,Tatooine
Leia,Organa,Alderaan
Luke,Skywalker,Tatooine
Han,Solo,Corellia

...or as HTML table

First Last City
Obi-Wan Kenobi Tatooine
Leia Organa Alderaan
Luke Skywalker Tatooine
Han Solo Corellia

Wednesday, April 8, 2015

Live charting with EASE

Recently we added charting support to EASE using the Nebula XY chart widget. Say you have a script acquiring some data; you may now plot it as your measurement advances.

Step 1: Installation

For this tutorial you need to register the nebula update site:
http://download.eclipse.org/technology/nebula/snapshot
in Preferences/Install/Update/Available Software Sites.

Afterwards install EASE and add EASE Charting Support to install all required modules. This component depends on the Nebula Visualization Widgets, which is why we added the nebula update site before. Currently you have to stick to the nightly update site of EASE as charting is not part of the current release.

Step 2: Some example plots

Try out this little script to get these sample plots:
loadModule('/Charting');

figure("Simple Charts");
clear();

series("linear", "m+");
for (x = 0; x <= 20; x += 1)
 plotPoint(x, x / 10 - 1);

series("1/x", "ko:");
for (x = 1; x <= 20; x += 0.2)
 plotPoint(x, 4/x);

series("sine", "bx");
for (x = 0; x < 20; x += 0.05)
 plotPoint(x, Math.sin(x) * Math.sin(3 * x));

figure("Advanced");
clear();

series("drop", "gd");
for (t = 0; t < 2; t += 0.01)
 plotPoint(t, Math.pow(Math.E, t * -3) * Math.cos(t * 30));

series("upper limit", "rf--");
plot([0, 2], [0.4, 0.4]);

series("lower limit", "rf--");
plot([0, 2], [-0.4, -0.4]);
Run the script either in the Script Shell or execute it using launch targets.

Step 3: Charting Module API

The API of the charting module tries to stay close to MATLAB style, so the format string for a series is almost identical. You may even omit creating figures or series and start plotting right away; we create everything necessary on the fly.

For a better user experience we extended the XYChart a bit, allowing you to zoom directly using the scroll wheel, either over the plot or its axes. Use a double click for an auto-zoom.

Sunday, February 22, 2015

Reformatting multiple source files

Recently I had to apply a new code formatter template to a project. Changing 50+ files by hand seemed to be too much monkey work. Writing a custom plug-in seemed to be too much work. Fortunately there is this great scripting framework to accomplish this task quite easily.

Step 1: Detecting affected source files

I have already blogged about EASE and how to install it. So I went directly to the script shell to fetch all Java files from a certain project:
loadModule('/System/Resources');

 org.eclipse.ease.modules.platform.ResourcesModule@5f8466b4
files = findFiles("*.java", getProject("org.eclipse.some.dedicated.project"), true)
 [Ljava.lang.Object;@1b61effc
Now we have an array of IFile instances. Easy, isn't it?

Step 2: The formatter script

Ideas on how to format source files can be found in the forum. Taking the script and porting it to JS is simple:
function formatUnitSourceCode(file) {
 unit = org.eclipse.jdt.core.JavaCore.create(file);

 unit.becomeWorkingCopy(null);

 formatter = org.eclipse.jdt.core.ToolFactory.createCodeFormatter(null);
 range = unit.getSourceRange();
 formatEdit = formatter
   .format(
     org.eclipse.jdt.core.formatter.CodeFormatter.K_COMPILATION_UNIT
       | org.eclipse.jdt.core.formatter.CodeFormatter.F_INCLUDE_COMMENTS,
     unit.getSource(), 0, unit.getSource().length(), 0, null);
 if (formatEdit.hasChildren()) {
  unit.applyTextEdit(formatEdit, null);
  unit.reconcile(org.eclipse.jdt.core.dom.AST.JLS4, false, null, null);
 }

 unit.commitWorkingCopy(true, null);
}

Step 3: glueing it all together

Load the function into your script shell, fetch the list of files as in step 1 and then process all files in a loop:
for each (file in files) formatUnitSourceCode(file);
That's it. This short script takes your current source formatter settings and applies them to all selected files.

This little helper is available online in the EASE script repository.

Wednesday, January 14, 2015

EASE 0.1.0 - the very first release

I am proud to announce the very first release of the Eclipse Advanced Scripting Environment (EASE).

In case you have no idea what this is about, check out the project page. Afterwards you definitely need to install it right away in your IDE and start scripting.

Some facts: 
  • executes script code in your IDE, providing access to the running JRE and thus to the whole Eclipse API
  • supports JavaScript, Jython, Groovy and JRuby (see details)
  • allows you to dynamically integrate scripts into toolbars and menus
  • extensible via script libraries (write your own)

This project started as an in-house company solution some years ago. When I released it as open source back in 2013 I was quite astonished by the interest from the community. Soon this project moved into the e4 incubator which gave us a great chance to evolve and build up an infrastructure.
Last summer EASE was accepted as an Eclipse incubation project. With our first release we finally feel part of the Eclipse community.

Let me take this opportunity to thank all the people who helped to make this happen. When I started out almost 2 years ago I did not expect to meet such an open-minded community. So if you think of starting your own project I can just encourage you to take the first step. There are helping hands everywhere, trying to push you forward. Did I already say that I love this community?

XMLMemento: a simple way to process XML files


Reading and writing simple XML files is a common task nowadays. Java comes with some powerful frameworks like SAX or Xerces to accomplish this task. But sometimes we look for something simpler. Today we will have a look at XMLMemento, a utility class by Eclipse that hides most of the complicated stuff from us.

Source code for this tutorial is available on googlecode as a single zip archive, as a Team Project Set or you can checkout the SVN projects directly. 

Preparations

We will use following XML file for this tutorial
<?xml version="1.0" encoding="UTF-8"?>
<root>
 <project name="org.eclipse.ease">
  <description>EASE is a scripting environment for Eclipse. It allows to
   create, maintain and execute script code in the context of the
   running Eclipse instance.</description>
  <releases>
   <release version="0.1.0" />
   <release version="0.2.0" />
  </releases>
 </project>
 <project name="org.eclipse.tycho">
  <releases>
   <release version="0.16.0" />
   <release version="0.17.0" />
  </releases>
 </project>
</root>

The code samples below use a slightly modified version of XMLMemento. To reuse it more easily in non-Eclipse projects I removed Eclipse specific exceptions, messages and interfaces. If you intend to use XMLMemento within Eclipse, simply stick to the original version contained in the org.eclipse.ui.workbench plugin. For pure Java projects simply copy XMLMemento as it is used in this tutorial.

Step 1: Writing XML

First we need to create a root node. Afterwards we create child nodes and set their attributes.
package com.codeandme.xmlmemento;

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

import org.eclipse.ui.XMLMemento;

public class WriteXML {

 public static void main(String[] args) {

  XMLMemento rootNode = XMLMemento.createWriteRoot("root");

  // first project
  XMLMemento projectNode = rootNode.createChild("project");
  projectNode.putString("name", "org.eclipse.ease");
  projectNode
  .createChild("description")
  .putTextData(
    "EASE is a scripting environment for Eclipse. It allows to create, maintain and execute script code in the context of the running Eclipse instance.");

  XMLMemento releasesNode = projectNode.createChild("releases");
  releasesNode.createChild("release").putString("version", "0.1.0");
  releasesNode.createChild("release").putString("version", "0.2.0");

  // second project
  projectNode = rootNode.createChild("project");
  projectNode.putString("name", "org.eclipse.tycho");

  releasesNode = projectNode.createChild("releases");
  releasesNode.createChild("release").putString("version", "0.16.0");
  releasesNode.createChild("release").putString("version", "0.17.0");

  // output to console
  System.out.println(rootNode);

  // write to file
  try {
   File tempFile = new File("XML sample.xml");
   FileWriter fileWriter = new FileWriter(tempFile);
   rootNode.save(fileWriter);

  } catch (IOException e) {
   e.printStackTrace();
  }
 }
}
createWriteRoot("root") creates the root node
createChild("project") creates a child node below root
putString("name", ...) sets an attribute on the project node
putTextData(...) sets the text content of a node
System.out.println(rootNode) serializes the XML to a String (uses node.toString())
rootNode.save(fileWriter) writes the XML to a file
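For comparison, here is what producing just the first project node looks like with the JDK's plain DOM API. This is a sketch (the class name WriteXmlWithDom is mine, not part of the tutorial sources) to show the kind of boilerplate XMLMemento wraps for us:

```java
import java.io.StringWriter;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class WriteXmlWithDom {

  public static String buildXml() throws Exception {
    Document document = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();

    // XMLMemento.createWriteRoot("root")
    Element rootNode = document.createElement("root");
    document.appendChild(rootNode);

    // rootNode.createChild("project") + putString("name", ...)
    Element projectNode = document.createElement("project");
    projectNode.setAttribute("name", "org.eclipse.ease");
    rootNode.appendChild(projectNode);

    // releasesNode.createChild("release").putString("version", "0.1.0")
    Element releasesNode = document.createElement("releases");
    projectNode.appendChild(releasesNode);
    Element releaseNode = document.createElement("release");
    releaseNode.setAttribute("version", "0.1.0");
    releasesNode.appendChild(releaseNode);

    // rootNode.save(writer)
    StringWriter writer = new StringWriter();
    Transformer transformer = TransformerFactory.newInstance().newTransformer();
    transformer.setOutputProperty(OutputKeys.INDENT, "yes");
    transformer.transform(new DOMSource(document), new StreamResult(writer));
    return writer.toString();
  }

  public static void main(String[] args) throws Exception {
    System.out.println(buildXml());
  }
}
```

Every createChild()/putString() pair above maps to three DOM calls plus the serialization ceremony at the end.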


Step 2: reading XML

Reading XML is also very easy. Just create a read root and start traversing child elements:
package com.codeandme.xmlmemento;

import java.io.FileReader;

import org.eclipse.ui.XMLMemento;

public class ReadXML {

 public static void main(String[] args) {
  try {
   XMLMemento rootNode = XMLMemento.createReadRoot(new FileReader("XML sample.xml"));

   System.out.println("Projects:");
   for (XMLMemento node : rootNode.getChildren())
    System.out.println("\t" + node.getString("name"));

   System.out.println("EASE versions:");
   for (XMLMemento node : rootNode.getChildren("project")) {
    if ("org.eclipse.ease".equals(node.getString("name"))) {
     for (XMLMemento versionNode : node.getChild("releases").getChildren("release")) {
      System.out.println("\t" + versionNode.getString("version"));
     }
    }
   }

  } catch (Exception e) {
   e.printStackTrace();
  }
 }
}
createReadRoot() parses the input and extracts the root node
getChildren() iterates over all child nodes
getString("name") reads a node attribute

Tuesday, October 21, 2014

Writing modules for EASE

While EASE is shipped with some basic functionality already implemented, the real benefit comes when we provide our own script modules. Today we will see how such modules are provided.

Source code for this tutorial is available on googlecode as a single zip archive, as a Team Project Set or you can checkout the SVN projects directly.

Step 1: Get EASE

To write modules you either need to install EASE into your running IDE, provide a target platform containing EASE, or import the source (core repository is sufficient) directly. At the end of this tutorial we will create help pages automatically. If you want to use that feature directly go for the source code option.

If you are not sure which option to choose, simply install EASE from the update site.

Step 2: A simple module

Modules are simple classes that are registered using the org.eclipse.ease.modules extension point. Start by creating a new Plug-in Project and adding the extension. Create a new module extension with an id, a name and a class. The visible attribute sets the default module visibility, which can be changed by the user in the preferences at any time.

If you start providing more and more modules it is a good idea to cluster them by using categories. This results in a cleaner UI integration using trees instead of a flat list of modules. Categories are defined under the same extension point and are straightforward, so I will not explain them in detail.

Image

The implementation starts with a POJO class:

package com.codeandme.ease.modules;

import org.eclipse.ease.modules.WrapToScript;

/**
 * Provides basic mathematics operations.
 */
public class SimpleMathModule {

 /**
  * Provide sum of 2 variables.
  *
  * @param a
  *            summand 1
  * @param b
  *            summand 2
  * @return sum
  */
 @WrapToScript
 public double sum(double a, double b) {
  return a + b;
 }

 /**
  * Subtract b from a.
  */
 @WrapToScript
 public double sub(double a, double b) {
  return a - b;
 }

 /**
  * Multiply 2 values.
  */
 @WrapToScript
 public double mul(double a, double b) {
  return a * b;
 }

 /**
  * Divide a by b.
  */
 @WrapToScript
 public double div(double a, double b) {
  return a / b;
 }

 public void doNothing() {
  // not exposed as it lacks the @WrapToScript annotation
 }
}

The only addition to a plain class is the @WrapToScript annotation, which indicates methods to be exposed to scripting. In case a class does not contain any @WrapToScript annotation, all public methods will get exposed.
As modules are created dynamically, make sure that the default constructor exists and is visible!
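To see why the default constructor matters, here is a simplified, hypothetical sketch of what a module loader does behind the scenes. The annotation and classes below are local stand-ins, not the real EASE implementation:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class ModuleLoaderSketch {

  // stand-in for org.eclipse.ease.modules.WrapToScript
  @Retention(RetentionPolicy.RUNTIME)
  @Target({ ElementType.METHOD, ElementType.FIELD })
  public @interface WrapToScript {
  }

  public static class SimpleMathModule {
    @WrapToScript
    public double sum(double a, double b) {
      return a + b;
    }

    public void doNothing() {
      // not annotated, hence not exposed in this sketch
    }
  }

  /** Instantiate the module via its default constructor and collect wrapped method names. */
  public static List<String> wrappedMethods(Class<?> moduleClass) throws Exception {
    // this is why a visible default constructor is mandatory
    Object module = moduleClass.getDeclaredConstructor().newInstance();

    List<String> names = new ArrayList<>();
    for (Method method : module.getClass().getMethods()) {
      if (method.isAnnotationPresent(WrapToScript.class))
        names.add(method.getName());
    }
    return names;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(wrappedMethods(SimpleMathModule.class));
  }
}
```

If the constructor is missing or not visible, the newInstance() call above fails before any method can be wrapped.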

Step 3: A more complex module

Our second module will be a bit more complex, as we introduce a dependency on the SimpleMathModule. Define the module the same way we did before, but add a dependency on com.codeandme.ease.modules.module.simpleMath.

Image

The implementation now derives from AbstractScriptModule, which allows us to retrieve the current script engine and the basic environment module that keeps track of loaded modules. If you cannot extend your class you may also implement IScriptModule and implement the initialize() method on your own.

package com.codeandme.ease.modules;

import org.eclipse.ease.modules.AbstractScriptModule;
import org.eclipse.ease.modules.WrapToScript;

/**
 * High sophisticated mathematics operations.
 */
public class ComplexMathModule extends AbstractScriptModule {

 /** PI constant. */
 @WrapToScript
 public static final double PI = 3.1415926;

 /**
  * Get radius from circle area.
  *
  * @param area
  *            circle area
  * @return radius
  */
 @WrapToScript
 public double getRadius(double area) {
  double rSquare = getEnvironment().getModule(SimpleMathModule.class).div(area, PI);
  return Math.sqrt(rSquare);
 }
}

By querying the environment we can retrieve other module instances dynamically and make use of their functionality.

We also expose a constant PI here. Just remember that only constant fields can be wrapped.

Image

Optional: Create documentation

Users will typically ask for module documentation. EASE allows you to create Eclipse help pages automatically by using a special JavaDoc doclet. The doclet is available in source and binary form from the EASE core repository. To use it you need to download at least the bin folder content (along with its directory structure).

Now open the menu Project / Generate JavaDoc..., provide the location of the javadoc binary, select your modules project and point to the custom doclet location.

Image


On the 2nd page add a parameter that points to the Plug-in Project root folder:

-root /your/workspace/location/com.codeandme.ease.modules


Hit Finish to build the documentation. The doclet will alter your plugin.xml and MANIFEST.MF and add a new help folder to your project.

The help pages are located under Scripting Guide/Loadable modules reference/<Category>/<ModuleName>.

If you are interested in building documentation automatically using maven, please ask on the ease-dev mailing list for details.

Thursday, June 19, 2014

TableViewer context menus

Recently I played around with context menus on TableViewers. I wanted to track the column where the context menu was activated. As it turned out to be quite a tricky task (until you know how it is done) I would like to share my findings. Furthermore I would like to show how to attach context menus to editors without getting default entries for Run As and similar actions.

Source code for this tutorial is available on googlecode as a single zip archive, as a Team Project Set or you can checkout the SVN projects directly.

Preparations

For this tutorial I created a Plug-in Project with a simple editor extension. The editor contains a TableViewer with some sample columns. Nothing special about that. Have a look at the source code or Lars' excellent editor tutorial if you are not familiar with these steps.

Step 1: Creating the editor context menu

To enable context menus, we need to register them in our editor code. The typical snippet to do this looks as follows:
MenuManager menuManager = new MenuManager();
Menu contextMenu = menuManager.createContextMenu(table);
table.setMenu(contextMenu);
getSite().registerContextMenu(menuManager, tableViewer);
registerContextMenu() automatically searches for a menu contribution with locationURI = popup:<part.id>. In our case it would look for popup:com.codeandme.editor.sample. You may also provide a dedicated id when registering the context menu. Just make sure you provide the id without the "popup:" prefix, while the menu contribution needs the "popup:" prefix in its locationURI.
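For reference, a menu contribution targeting this popup could look like the following plugin.xml sketch; the command id and label are hypothetical placeholders, not part of the tutorial sources:

```xml
<!-- locationURI = "popup:" + the id the context menu was registered with -->
<extension point="org.eclipse.ui.menus">
   <menuContribution locationURI="popup:com.codeandme.editor.sample">
      <!-- hypothetical command id; define it separately via org.eclipse.ui.commands -->
      <command commandId="com.codeandme.commands.deleteColumn" label="Delete Column"/>
   </menuContribution>
</extension>
```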

You will end up with a context menu already populated with some default entries like Run As, Compare With, Team, and some more. To get rid of them, we need to register the menu using a different site:
getEditorSite().registerContextMenu(menuManager, tableViewer, false);
The boolean parameter allows enabling/disabling context menu entries for the editor input part.

Image Image

Step 2: Track the active column

When we want our context menu entry to behave differently depending on the column from where it was triggered, we need to track columns. Some solutions on the web use MouseListeners, which work well for the table body, but not for the header row. A nicer solution relies on MenuDetect events:
fTableViewer.getTable().addListener(SWT.MenuDetect, this);

@Override
public void handleEvent(Event event) {
 Table table = fTableViewer.getTable();

 // calculate click offset within table area
 Point point = Display.getDefault().map(null, table, new Point(event.x, event.y));
 Rectangle clientArea = table.getClientArea();
 fHeaderArea = (clientArea.y <= point.y) && (point.y < (clientArea.y + table.getHeaderHeight()));

 ViewerCell cell = fTableViewer.getCell(point);
 if (cell != null)
  fSelectedColumnIndex = cell.getColumnIndex();

 else {
  // no cell detected, click on header
  int xOffset = point.x;
  int columnIndex = 0;
  int[] order = table.getColumnOrder();
  while ((columnIndex < table.getColumnCount()) && (xOffset > table.getColumn(order[columnIndex]).getWidth())) {
   xOffset -= table.getColumn(order[columnIndex]).getWidth();
   columnIndex++;
  }

  fSelectedColumnIndex = (columnIndex < table.getColumnCount()) ? order[columnIndex] : NO_COLUMN;
 }
}
The full helper class is available under EPL and can be downloaded from the source repository.
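The header branch above boils down to walking the visible columns left to right and subtracting widths until the click offset is consumed, while honoring a user-defined column order. A standalone sketch of that arithmetic (class and method names are mine):

```java
public class ColumnIndexSketch {

  static final int NO_COLUMN = -1;

  /**
   * Map an x offset inside the header row to the model index of the column
   * under the cursor, honoring a user-defined visual column order.
   */
  public static int columnIndexFor(int xOffset, int[] widths, int[] order) {
    int index = 0;
    // consume column widths in visual order until the offset is used up
    while ((index < widths.length) && (xOffset > widths[order[index]])) {
      xOffset -= widths[order[index]];
      index++;
    }
    return (index < widths.length) ? order[index] : NO_COLUMN;
  }

  public static void main(String[] args) {
    int[] widths = { 100, 50, 80 }; // column widths in creation order
    int[] order = { 2, 0, 1 };      // visual order: column 2 is shown first

    System.out.println(columnIndexFor(10, widths, order));  // inside first visual column -> 2
    System.out.println(columnIndexFor(90, widths, order));  // second visual column -> 0
    System.out.println(columnIndexFor(300, widths, order)); // beyond all columns -> -1
  }
}
```

Note how the returned value is a model index (via the order array), which is exactly what getColumn() in the delete step expects.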

Step 3: Delete the active column

To provide a usage example for the TableColumnTracker we will extend our editor and allow users to delete the column under the cursor using the context menu.

The command implementation simply asks the current editor to do the job:
@Override
public Object execute(ExecutionEvent event) throws ExecutionException {
 IWorkbenchPart part = HandlerUtil.getActivePart(event);
 if (part instanceof SampleEditor)
  ((SampleEditor) part).deleteColumn();

 return null;
}
The editor needs to install the tracker and dispose the selected column upon request:
public class SampleEditor extends EditorPart {

 private TableColumnTracker fColumnTracker;

 @Override
 public void createPartControl(Composite parent) {

  [...]

  MenuManager menuManager = new MenuManager();
  Menu contextMenu = menuManager.createContextMenu(table);
  table.setMenu(contextMenu);
  getEditorSite().registerContextMenu(menuManager, fTableViewer, false);

  fColumnTracker = new TableColumnTracker(fTableViewer);
 }

 public void deleteColumn() {
  int columnIndex = fColumnTracker.getSelectedColumnIndex();

  if (columnIndex != TableColumnTracker.NO_COLUMN) {
   fTableViewer.getTable().getColumn(columnIndex).dispose();
   fTableViewer.refresh();
  }
 }
}

Wednesday, June 11, 2014

Adding hyperlink detectors to editors

Ever tried clicking on a method name in the Java editor while holding the Ctrl key? Sure you have. This hyperlink functionality is extensible and in this post we will see how to do that.

Source code for this tutorial is available on googlecode as a single zip archive, as a Team Project Set or you can checkout the SVN projects directly.

Step 1: Creating the extension

Our target will be to create a simple hyperlink whenever we detect the word "preferences" in a text editor. Upon a click the preferences dialog should pop up.

Start with a new Plug-in Project, open the Manifest Editor and switch to the Extensions tab. Now add an org.eclipse.ui.workbench.texteditor.hyperlinkDetectors extension. Provide a unique id and a nice name. The name will be visible in the preferences under General/Editors/Text Editors/Hyperlinking.

The targetId points to the type of editor we would like to create our links in. As we want to use it in all text editors use org.eclipse.ui.DefaultTextEditor here.

Image

Step 2: Class implementation

Create a new class PreferencesHyperlinkDetector extending AbstractHyperlinkDetector. Therefore you need to define a dependency on the org.eclipse.jface.text plug-in.
package com.codeandme.hyperlinkdetector;

import org.eclipse.jface.text.BadLocationException;
import org.eclipse.jface.text.IDocument;
import org.eclipse.jface.text.IRegion;
import org.eclipse.jface.text.ITextViewer;
import org.eclipse.jface.text.Region;
import org.eclipse.jface.text.hyperlink.AbstractHyperlinkDetector;
import org.eclipse.jface.text.hyperlink.IHyperlink;
import org.eclipse.jface.text.hyperlink.IHyperlinkDetector;

public class PreferencesHyperlinkDetector extends AbstractHyperlinkDetector implements IHyperlinkDetector {

 private static final String PREFERENCES = "preferences";

 @Override
 public IHyperlink[] detectHyperlinks(ITextViewer textViewer, IRegion region, boolean canShowMultipleHyperlinks) {

  IDocument document = textViewer.getDocument();
  int offset = region.getOffset();

  // extract relevant characters
  IRegion lineRegion;
  String candidate;
  try {
   lineRegion = document.getLineInformationOfOffset(offset);
   candidate = document.get(lineRegion.getOffset(), lineRegion.getLength());
  } catch (BadLocationException ex) {
   return null;
  }

  // look for keyword
  int index = candidate.indexOf(PREFERENCES);
  if (index != -1) {

   // detect region containing keyword
   IRegion targetRegion = new Region(lineRegion.getOffset() + index, PREFERENCES.length());
   if ((targetRegion.getOffset() <= offset) && ((targetRegion.getOffset() + targetRegion.getLength()) > offset))
    // create link
    return new IHyperlink[] { new PreferencesHyperlink(targetRegion) };
  }

  return null;
 }
}
The sample implementation just extracts some text and calculates offsets within the text file. It will fail if you type "preferences" more than once per line, but as a proof of concept this should be sufficient.
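Fixing that limitation would mean searching for the occurrence that actually contains the invocation offset instead of always taking the first one. A standalone sketch of that lookup (class and method names are mine):

```java
public class KeywordRegionSketch {

  /**
   * Find the start offset of the keyword occurrence that contains the given
   * offset, or -1 if the offset is not inside any occurrence.
   */
  public static int occurrenceAt(String line, String keyword, int offset) {
    int index = line.indexOf(keyword);
    while (index != -1) {
      // does this occurrence span the invocation offset?
      if ((index <= offset) && (offset < index + keyword.length()))
        return index;

      index = line.indexOf(keyword, index + 1);
    }
    return -1;
  }

  public static void main(String[] args) {
    String line = "open preferences or more preferences";
    System.out.println(occurrenceAt(line, "preferences", 6));  // inside first occurrence -> 5
    System.out.println(occurrenceAt(line, "preferences", 27)); // inside second occurrence -> 25
    System.out.println(occurrenceAt(line, "preferences", 17)); // between occurrences -> -1
  }
}
```

In detectHyperlinks() the returned start offset would replace the single indexOf() result when building the target region.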

We also need to provide an implementation of IHyperlink. The most important method there is open(), which is called upon the click event.
package com.codeandme.hyperlinkdetector;

import org.eclipse.jface.text.IRegion;
import org.eclipse.jface.text.hyperlink.IHyperlink;
import org.eclipse.swt.widgets.Display;
import org.eclipse.ui.dialogs.PreferencesUtil;

public class PreferencesHyperlink implements IHyperlink {

 private final IRegion fUrlRegion;

 public PreferencesHyperlink(IRegion urlRegion) {
  fUrlRegion = urlRegion;
 }

 @Override
 public IRegion getHyperlinkRegion() {
  return fUrlRegion;
 }

 @Override
 public String getTypeLabel() {
  return null;
 }

 @Override
 public String getHyperlinkText() {
  return null;
 }

 @Override
 public void open() {
  PreferencesUtil.createPreferenceDialogOn(Display.getDefault().getActiveShell(), null, null, null).open();
 }
}
To test your implementation open a new text file, enter some sample text and hover over the word "preferences" while holding the Ctrl key.

Image

Wednesday, June 4, 2014

Tycho 11: Install root level features


Introduction

Do you know about root level features?

Components installed in Eclipse are called installable units (IUs). These are either features or products. IUs might be containers for other features, creating a tree-like dependency structure. Let's take a short look at the Installation Details (menu Help / About Eclipse) of our sample product from Tycho tutorial 8:
Image

We can see that there exists one root level feature, Tycho Built Product, which contains all the features we defined for our product. What is interesting is that the Update... and Uninstall... buttons at the bottom are disabled when we select child features.

So in an RCP application we may only update/uninstall root level features. This means that if we want to update a subcomponent, we need to create a new version of our main product. For a modular application this might not be the desired behavior.

The situation changes when a user installs additional components into a running RCP application. Such features will be handled as root level features and can therefore be updated separately. So our target will be to create a base product and install our features in an additional step.

The great news is that Tycho 0.20.0 allows us to do this very easily.

Tycho Tutorials

For a list of all tycho related tutorials see Tycho Tutorials Overview

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.

Step 1: Identify independent features

Tycho will do all the required steps for us; we only need to identify the features to be installed at root level. So open your product file using either the Text Editor or the XML Editor and locate the section with the feature definitions. Now add an installMode="root" attribute to any feature to be installed at root level.
   <features>
      <feature id="org.eclipse.e4.rcp"/>
      <feature id="org.eclipse.platform"/>
      <feature id="com.codeandme.tycho.plugin.feature" installMode="root"/>
      <feature id="com.codeandme.tycho.product.feature"/>
      <feature id="org.eclipse.help" installMode="root"/>
      <feature id="org.eclipse.emf.ecore"/>
      <feature id="org.eclipse.equinox.p2.core.feature"/>
      <feature id="org.eclipse.emf.common"/>
      <feature id="org.eclipse.equinox.p2.rcp.feature"/>
      <feature id="org.eclipse.equinox.p2.user.ui"/>
      <feature id="org.eclipse.rcp"/>
      <feature id="org.eclipse.equinox.p2.extras.feature"/>
   </features>

Make sure to update the Tycho version used to 0.20.0 or above.

Nothing more to do: build your product and enjoy root level features in action.

Image

Thursday, May 15, 2014

Extending JSDT: adding your own content assist

Extending the JSDT content assist is a topic asked several times on Stack Overflow (20738788, 20779899). As I needed it for the Eclipse EASE project, I decided to come up with a short tutorial.

Source code for this tutorial is available on googlecode as a single zip archive, as a Team Project Set or you can checkout the SVN projects directly. 

Step 1: Preparations

As we will have dependencies to JSDT you either need to check out the source code from gerrit or add JSDT to your target platform.

Step 2: Defining the extension point

Create a new Plug-in Project com.codeandme.jsdt.contentassist and add the following dependencies:
  • org.eclipse.wst.jsdt.ui
  • org.eclipse.core.runtime
  • org.eclipse.jface.text
Afterwards create a new Extension for org.eclipse.wst.jsdt.ui.javaCompletionProposalComputer. Provide an ID and a name. The ID will be needed as a reference to the category, the name will be displayed in Preferences/JavaScript/Editor/Content Assist/Advanced. Create a proposalCategory node and provide a nice icon for your category.

Image

Create a second extension of the same type for our implementation. Again provide an ID. Create a javaCompletionProposalComputer subnode with a class and activate set to true. The categoryId consists of your plug-in name followed by the category ID we provided earlier.

Image

If you use a target definition containing JSDT plugins you might not be able to use the Plug-in Manifest Editor for creating these extensions. In that case you have to edit the xml code directly:

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin>
   <extension point="org.eclipse.wst.jsdt.ui.javaCompletionProposalComputer"
   id="codeandme_category"
   name="Code and me proposals">
   <proposalCategory/>
 </extension>

 <extension
       id="codeandme_proposal"
       point="org.eclipse.wst.jsdt.ui.javaCompletionProposalComputer">
   <javaCompletionProposalComputer
      class="com.codeandme.jsdt.contentassist.CustomCompletionProposalComputer"
      categoryId="com.codeandme.jsdt.contentassist.codeandme_category"
      activate="true">
   </javaCompletionProposalComputer>
 </extension>

</plugin>
Step 3: Proposal computer implementation

Now for the easy part: create a new class CustomCompletionProposalComputer with the following content:

package com.codeandme.jsdt.contentassist;

import java.util.ArrayList;
import java.util.List;

import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.jface.text.contentassist.CompletionProposal;
import org.eclipse.wst.jsdt.ui.text.java.ContentAssistInvocationContext;
import org.eclipse.wst.jsdt.ui.text.java.IJavaCompletionProposalComputer;

public class CustomCompletionProposalComputer implements IJavaCompletionProposalComputer {

 @Override
 public void sessionStarted() {
 }

 @Override
 public List computeCompletionProposals(ContentAssistInvocationContext context, IProgressMonitor monitor) {

  ArrayList<CompletionProposal> proposals = new ArrayList<CompletionProposal>();

  proposals.add(new CompletionProposal("codeandme.blogspot.com", context.getInvocationOffset(), 0, "codeandme.blogspot.com".length()));
  proposals.add(new CompletionProposal("<your proposal here>", context.getInvocationOffset(), 0, "<your proposal here>".length()));

  return proposals;
 }

 @Override
 public List computeContextInformation(ContentAssistInvocationContext context, IProgressMonitor monitor) {
  return null;
 }

 @Override
 public String getErrorMessage() {
  return null;
 }

 @Override
 public void sessionEnded() {
 }
}


Image

CompletionProposals could contain more information like attached documentation or ContextInformation for variables and other dynamic content. See this post for some information on ContextInformation.

Optional: Helper methods

Typically you would need to evaluate the last characters before the cursor position to filter your proposals. You also might be interested in the content of the current line left of the cursor position. Maybe these helper methods are of some interest:
 private static final Pattern LINE_DATA_PATTERN = Pattern.compile(".*?([^\\p{Alnum}]?)(\\p{Alnum}*)$");

 /**
  * Extract context relevant information from current line. The returned matcher locates the last alphanumeric word in the line and an optional non
  * alphanumeric character right before that word. result.group(1) contains the last non-alphanumeric token (eg a dot, brackets, arithmetic operators, ...),
  * result.group(2) contains the alphanumeric text. This text can be used to filter content assist proposals.
  * 
  * @param context
  *            content assist context
  * @return matcher containing content assist information
  * @throws BadLocationException
  */
 protected Matcher matchLastToken(final ContentAssistInvocationContext context) throws BadLocationException {
  String data = getCurrentLine(context);
  return LINE_DATA_PATTERN.matcher(data);
 }

 /**
  * Extract text from current line up to the cursor position
  * 
  * @param context
  *            content assist context
  * @return current line data
  * @throws BadLocationException
  */
 protected String getCurrentLine(final ContentAssistInvocationContext context) throws BadLocationException {
  IDocument document = context.getDocument();
  int lineNumber = document.getLineOfOffset(context.getInvocationOffset());
  IRegion lineInformation = document.getLineInformation(lineNumber);

  return document.get(lineInformation.getOffset(), context.getInvocationOffset() - lineInformation.getOffset());
 }
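To illustrate what matchLastToken() delivers, the pattern can be exercised standalone: for a line like "result = math.su", group(1) holds the separator (the dot) and group(2) the partial word to filter proposals against. A sketch (class and method names are mine):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LastTokenSketch {

  // same pattern as in the helper method above
  private static final Pattern LINE_DATA_PATTERN = Pattern.compile(".*?([^\\p{Alnum}]?)(\\p{Alnum}*)$");

  /** Returns { separator, token } for the given line, or null if the pattern does not match. */
  public static String[] lastToken(String line) {
    Matcher matcher = LINE_DATA_PATTERN.matcher(line);
    if (matcher.matches())
      return new String[] { matcher.group(1), matcher.group(2) };

    return null;
  }

  public static void main(String[] args) {
    String[] result = lastToken("result = math.su");
    System.out.println("separator: '" + result[0] + "'"); // '.'
    System.out.println("token:     '" + result[1] + "'"); // 'su'
  }
}
```

A proposal computer would typically complete only when the separator is a dot and filter candidates by the token prefix.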

Monday, December 9, 2013

From XML data to an EMF model

Recently I had to integrate a large XML data store into my RCP application. There was no schema available and the data was structured in some awkward way. Still the XML structure needed to be preserved as other tools depend on the data format. As I had been playing around with EMF for a long time without actually getting serious about it I thought: "well, wouldn't it be nice if EMF could handle all the XML parsing/writing stuff".

Source code for this tutorial is available on googlecode as a single zip archive, as a Team Project Set or you can checkout the SVN projects directly.

Prerequisites

We need some XML example data to work on. So I chose a small storage for CDs:
<?xml version="1.0" encoding="UTF-8"?>
<collection>
 <disc title="The wall">
  <artist>Pink Floyd</artist>
  <track pos="1">In the flesh</track>
  <track pos="2">The thin ice</track>
  <track pos="3">Another brick in the wall</track>
 </disc>
 <disc title="Christmas compilation">
  <track pos="1">Last Christmas<artist>WHAM</artist></track>
  <track pos="2">Driving home for Christmas<artist>Chris Rea</artist></track>
 </disc>
</collection>

Step 1: Creating a schema

EMF is capable of creating a model from an existing schema. As we do not have one yet, we need to create it. Fortunately we do not need to do this on our own. For this tutorial we will use an online converter; alternatively you could use xsd.exe from the Microsoft Windows SDK for .NET Framework 4 if you prefer a local tool.

Your schema.xsd should look like this:
<xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="collection">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="disc" maxOccurs="unbounded" minOccurs="0">
          <xs:complexType>
            <xs:sequence>
              <xs:element type="xs:string" name="artist" minOccurs="0"/>
              <xs:element name="track" maxOccurs="unbounded" minOccurs="0">
                <xs:complexType mixed="true">
                  <xs:sequence>
                    <xs:element type="xs:string" name="artist" minOccurs="0"/>
                  </xs:sequence>
                  <xs:attribute type="xs:byte" name="pos" use="optional"/>
                </xs:complexType>
              </xs:element>
            </xs:sequence>
            <xs:attribute type="xs:string" name="title" use="optional"/>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>

A generated schema might need some tweaking here and there. Eclipse offers a nice visual schema editor for this purpose. Just make sure you do not alter the schema too much. Your sample data still needs to be valid. To verify this, select your XML and XSD files and choose Validate from the context menu.
Image
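If you prefer to script this check, the JDK's built-in validator can do the same. A sketch with trimmed, inlined versions of the schema and sample data (class and method names are mine):

```java
import java.io.StringReader;

import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.SchemaFactory;

public class ValidateSample {

  // trimmed-down versions of schema.xsd and the sample data, inlined for the sketch
  static final String SCHEMA = ""
      + "<xs:schema attributeFormDefault='unqualified' elementFormDefault='qualified'"
      + "           xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
      + "  <xs:element name='collection'>"
      + "    <xs:complexType>"
      + "      <xs:sequence>"
      + "        <xs:element name='disc' maxOccurs='unbounded' minOccurs='0'>"
      + "          <xs:complexType>"
      + "            <xs:attribute type='xs:string' name='title' use='optional'/>"
      + "          </xs:complexType>"
      + "        </xs:element>"
      + "      </xs:sequence>"
      + "    </xs:complexType>"
      + "  </xs:element>"
      + "</xs:schema>";

  static final String SAMPLE = "<collection><disc title='The wall'/></collection>";

  /** Validate an XML document against an XSD schema, both given as strings. */
  public static boolean isValid(String schema, String xml) {
    try {
      SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
          .newSchema(new StreamSource(new StringReader(schema)))
          .newValidator()
          .validate(new StreamSource(new StringReader(xml)));
      return true;
    } catch (Exception e) {
      // SAXException: document does not conform to the schema
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println(isValid(SCHEMA, SAMPLE));                     // valid document
    System.out.println(isValid(SCHEMA, "<disc title='The wall'/>")); // root element not declared
  }
}
```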

Step 2: Create a model

Before we can create a model, we need to install some additional components. Assuming you started with a vanilla Eclipse for RCP Developers, you additionally need to install EMF - Eclipse Modeling Framework SDK. Furthermore install the XSD Ecore Converter (uncheck Group items by category to find it).

You should already have a Plug-in Project to store your files in. Now create a new EMF Generator Model. Name it Catalog.genmodel and select XML Schema for the model import.
Image

On the next page select the schema.xsd file for the model input.
Image

Finally rename the file from Schema.ecore to Catalog.ecore. You will end up not only with a Catalog.genmodel, but also with a Catalog.ecore file.

Step 3: Adapting the model

Looking at the ecore model we can see that all elements contain an ExtendedMetaData annotation. These annotations are responsible for storing the XML representation of the model elements. This allows us to rename model classes and attributes without breaking XML import/export. E.g. we could get rid of the Type extension that was added to all EClasses.

Step 4: Using the model

Now that we have a model, we may generate code from the genmodel just as we would do for any other model: open the genmodel and select Generate All from the context menu of the Catalog node. Run your application, copy over your XML data, rename the data file to something.schema and open it in the generated editor.
Image
To load the model from source code you may use the following snippet:
  SchemaPackageImpl.eINSTANCE.eClass();

  Resource resource = new ResourceSetImpl().getResource(URI.createFileURI("C:\\your\\path\\Sample data.schema"), true);

  EObject root = resource.getContents().get(0);
  if (root instanceof DocumentRoot) {
   for (Disc disc : ((DocumentRoot) root).getCollection().getDisc())
    System.out.println(disc.getTitle());
  }