Visualisation of Ontologies and Large Scale Graphs

[Image: A phylogenetic tree of life, via Wikipedia]

For a number of reasons, I am currently looking into the visualisation of large-scale graphs and ontologies, and to that end I have made some notes on tools and concepts which might be useful for others. Here they are:

Visualisation by Node-Link and Tree

jOWL: a jQuery plugin for the navigation and visualisation of OWL ontologies and RDFS documents. Visualises mainly as trees and navigation bars.

OntoViz: a plugin for Protege; at the moment it supports Protege 3.4 and doesn’t seem to work with Protege 4.

IsaViz: much the same as OntoViz really. The last stable version dates from 2004 and it does not seem to be under active development.

NeOn Toolkit: the NeOn Toolkit also has some visualisation capability, but not independently of the editor. Under active development with a growing user base.

OntoTrack: OntoTrack is a graphical OWL editor and as such has visualisation capabilities. These are meagre, though, and it does not seem to be supported or developed anymore either; the current version is about five years old.

Cone Trees: cone trees are three-dimensional extensions of 2D tree structures and have been designed to allow a greater amount of information to be visualised and navigated. I have not found any software for download at the moment, but the idea is so interesting that we should bear it in mind. Examples are here and here, and the key reference is Robertson, G. G., Mackinlay, J. D. and Card, S. K., “Cone Trees: animated 3D visualizations of hierarchical information”, CHI ’91: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1991, ISBN 0-89791-383-3, pp. 189-194. (DOI here)

PhyloWidget: PhyloWidget is software for the visualisation of phylogenetic trees, but should be repurposable for ontology trees. JavaScript, so appropriate for websites. It started as a student project in the Phyloinformatics Summer of Code 2007.

The JavaScript Information Visualization Toolkit: an extremely pretty JS toolkit for the visualisation of graphs and the like, with dynamic and interactive visualisations too. Just pretty. I have spent some time hacking with it and am becoming a fan.

Welkin: Standalone application for the visualisation of RDF graphs. Allows dynamic filtering, colour coding of resources etc…

Three-Dimensional Visualisation

Ontosphere3D: visualisation of ontologies on 3D spheres. Does not seem to be supported anymore and requires Java 3D, which is a nightmare in itself.

Cone Trees (see above) and their extension, Disc Trees (for an example of disc trees, see here).

3D Hyperbolic Tree, as exemplified by the Walrus software. Originally developed for website visualisation, it produces stunning images. Not under active development anymore, but the source code is available for download.

Cytoscape: the 1000-pound gorilla in the room of large-scale graph visualisation. There are several plugins available for interaction with the Gene Ontology, such as BiNGO and ClueGO. Both tools treat the ontology as annotation rather than as a knowledge base in its own right, and can be used to identify GO terms which are overrepresented in a cluster/network. For the visualisation of ontologies themselves, there is the RDFScape plugin.
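
As an aside, the “overrepresentation” these plugins test for is, at its core, a hypergeometric test per GO term. A minimal sketch of that calculation (using SciPy; this is not the plugins’ actual code, and real tools add a multiple-testing correction):

    from scipy.stats import hypergeom

    def go_term_enrichment(k_in_cluster, cluster_size, k_in_genome, genome_size):
        """P-value of seeing at least k_in_cluster genes carrying a given GO term
        in the cluster, if genes were drawn at random from the genome."""
        # sf(k - 1) = P(X >= k) for the hypergeometric distribution
        return hypergeom.sf(k_in_cluster - 1, genome_size, k_in_genome, cluster_size)

    # e.g. 12 of 50 cluster genes carry the term, 300 of 20000 genome-wide
    print(go_term_enrichment(12, 50, 300, 20000))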

Zoomable Visualisations

Jambalaya – a Protege plugin, but it can also run as a browser applet. Uses Shrimp to visualise class hierarchies in ontologies, with arrows between boxes representing relationships.

CropCircles (link is to the paper describing them): CropCircles have been implemented in the SWOOP ontology editor, which is not under active development anymore but whose source code is available.

Information Landscapes – again, no software, just papers.


SWAT4LS2009 – Sonja Zillner: Towards the Ontology Based Classification of Lymphoma Patients using Semantic Image Annotation

(Again, these are notes as the talk happens)

This has to do with the Siemens Project Theseus Medico – Semantic Medical Image Understanding (towards flexible and scalable access to medical images)

Different images come from many different sources, e.g. X-ray, MRI, etc. Use these, combine them with treatment plans, patient data and so on, and integrate with external knowledge sources.

Example clinical query: “Show me the CT scans and records of patients with a lymph node enlargement in the neck area” – at the moment a query over several disjoint systems is required.
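
If the images, records and anatomical terms were annotated against shared ontologies and held in a single RDF store, a query of that kind could in principle be posed in SPARQL. A purely illustrative sketch (the endpoint, classes and properties below are invented for the example; this is not the Medico vocabulary):

    from SPARQLWrapper import SPARQLWrapper, JSON

    # hypothetical integrated endpoint and vocabulary
    sparql = SPARQLWrapper("http://example.org/medico/sparql")
    sparql.setQuery("""
    PREFIX ex: <http://example.org/medico/>
    SELECT ?patient ?scan ?record WHERE {
      ?patient ex:hasImage  ?scan ;
               ex:hasRecord ?record .
      ?scan    ex:modality     ex:CT ;
               ex:showsFinding ?f .
      ?f       a            ex:LymphNodeEnlargement ;
               ex:locatedIn ex:NeckRegion .
    }
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()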

Current Motivation: generic and flexible understanding of images is missing
Final Goal: Enhance medical image annotations by integrating clinical data with images
This talk: introduce a formal classification system for patients (ontological model)

Used Knowledge Sources:

Requirements of the Ontological Model

Now showing an example axiomatisation for the counting and location of lymphatic occurrences and discussing problems relating to extending existing ontologies…
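
(I did not catch the exact axioms. For flavour only: an axiomatisation combining counting and location might use a qualified cardinality restriction of roughly this shape – an invented example, not the speaker’s:)

    \mathit{MultiSiteNeckInvolvement} \equiv \mathit{Patient} \sqcap
      {\geq}2\,\mathit{hasFinding}.(\mathit{LymphNodeEnlargement} \sqcap
      \exists\,\mathit{locatedIn}.\mathit{NeckRegion})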

Now talking about annotating patient records: typical problems are abbreviations, clinical codes, fragments of sentences etc…difficult for NLP people to deal with….

Now showing detailed patient example where application of their classification system led to reclassification of patient in terms of staging system.


SWAT4LS2009 – Keynote Alan Ruttenberg: Semantic Web Technology to Support Studying the Relation of HLA Structure Variation to Disease

(These are live-blogging notes from Alan’s keynote, so don’t expect any coherent text; use them as bullet points to follow the gist of the argument.)

The Science Commons:

  • a project of the Creative Commons
  • 6 people
  • specializes Creative Commons (CC) to science
  • information discovery and re-use
  • establish legal clarity around data sharing and encourage automated attribution and provenance

Semantic Web for biologists: because it maximizes the value of scientific work by removing repeat experimentation.

ImmPort Semantic Integration Feasibility Project

  • ImmPort is an immunology database and analysis portal
  • Goals: meta-analysis
  • Question: how can ontology help data integration for data from many sources

Using semantics to help integrate sequence features of HLA with disorders
Challenges:

  • Curation of sequence features
  • Linking to disorders
  • Associating allele sequences with peptide structures with nomenclature with secondary structure with human phenotype etc etc etc…

Talks about elements of representation

  • PDB structures translated into ontology-based representations
  • canonical MHC molecule instances constructed from IMGT
  • relate each residue in a PDB structure to the canonical residue, if one exists
  • use existing ontologies
  • contact points between peptide and other chains computed using Jmol, following IMGT; represented as relations between residue instances (a sketch of this pattern follows the list)
  • Structural features have fiat parts
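
A sketch of that instance-level pattern in RDF (via rdflib; the namespace, class and property names are invented for illustration – the real work reuses existing ontologies):

    from rdflib import Graph, Namespace, RDF

    EX = Namespace("http://example.org/hla/")   # invented namespace
    g = Graph()

    pdb_res   = EX["pdb/1AGB/chainA/residue/66"]    # a residue in a PDB structure
    canon_res = EX["allele/someAllele/residue/66"]  # the canonical allele residue
    pep_res   = EX["pdb/1AGB/peptide/residue/3"]    # a peptide residue

    g.add((pdb_res,   RDF.type, EX.Residue))
    g.add((canon_res, RDF.type, EX.Residue))
    g.add((pdb_res,   EX.correspondsToCanonicalResidue, canon_res))
    # a contact computed from the coordinates (e.g. via Jmol) becomes a
    # relation between residue instances
    g.add((pdb_res,   EX.inContactWith, pep_res))

    print(g.serialize(format="turtle"))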

Connecting Allele Names to Disease Names

  • use papers as join factors: papers mention both the disease and the allele – noisy
  • use regexes and rewrites applied to titles and abstracts to fish out links between diseases and alleles (a rough sketch below)
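
Roughly, the kind of thing such regex-based extraction amounts to (a toy sketch; the patterns and the disease list are made-up simplifications, not the project’s actual rules):

    import re

    ALLELE   = re.compile(r"HLA-(?:DRB1|DQB1|A|B|C)\*?\d{2,4}(?::\d{2})?", re.I)
    DISEASES = re.compile(r"rheumatoid arthritis|type 1 diabetes|coeliac disease", re.I)

    def allele_disease_pairs(text):
        """Noisy co-mention extraction: any allele paired with any disease
        found in the same title/abstract."""
        alleles  = set(ALLELE.findall(text))
        diseases = set(DISEASES.findall(text))
        return {(a, d) for a in alleles for d in diseases}

    print(allele_disease_pairs(
        "HLA-DRB1*04:01 is associated with rheumatoid arthritis ..."))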

Correspondence of molecules with allele structures is difficult.

  • use BLAST to find the closest allele match between the PDB and allele sequences
  • every PDB and allele residue has a URI
  • relate matching molecules
  • relate each allele residue to the canonical allele
  • annotate various residues with various coordinate systems

This creates a massive map that can be navigated and queried. Example queries (an illustrative query sketch follows the list):

  • What autoimmune diseases can be indexed against a given allele?
  • What are the variant residues at a position?
  • Classification of amino acids
  • Show alleles perturbed at contacts of 1AGB
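
A query of the first kind, posed over such a graph, might look roughly like this (the vocabulary and file name are invented for illustration):

    from rdflib import Graph

    g = Graph()
    g.parse("hla_map.ttl", format="turtle")   # hypothetical export of the map

    q = """
    PREFIX ex: <http://example.org/hla/>
    SELECT DISTINCT ?disease WHERE {
        ?paper ex:mentionsAllele  ex:someAllele ;
               ex:mentionsDisease ?disease .
        ?disease a ex:AutoimmuneDisease .
    }
    """
    for row in g.query(q):
        print(row.disease)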

Summary of Progress to Date:
Elements of Approach in Place: Structure, Variation, transfer of annotation via alignment, information extraction from literature etc…

Nuts and Bolts:

  • Primary source
  • Local copy of the source
  • Scripts transform it to RDF (see the sketch after this list)
  • Export RDF bundles
  • Get selected RDF bundles and load them into a triple store
  • Parsers generate in-memory structures (Python, Java)
  • Template files are instructions to format these into OWL
  • Modelling is iteratively refined by editing templates
  • RDF is loaded into Neurocommons, with some amount of reasoning
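
In its simplest form, the transform-to-RDF step might look something like this (a toy sketch with an invented input format and vocabulary, not the project’s actual template machinery):

    import csv
    from rdflib import Graph, Namespace, RDF

    EX = Namespace("http://example.org/immport/")   # invented namespace
    g = Graph()

    # invented two-column input: allele name, associated disease label
    with open("associations.csv") as f:
        for allele, disease in csv.reader(f):
            a = EX["allele/" + allele]
            d = EX["disease/" + disease.replace(" ", "_")]
            g.add((a, RDF.type, EX.Allele))
            g.add((d, RDF.type, EX.Disease))
            g.add((a, EX.associatedWith, d))

    # an "RDF bundle" that could then be loaded into the triple store
    g.serialize("bundle.ttl", format="turtle")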

RDFHerd package management for data

neurocommons.org/bundles

Can we reduce the burden of data integration?

  • Too many people are doing data integration – wasting effort
  • Use web as platform
  • Too many ontologies…here’s the social pressure again

Challenges

  • have lawyers bless every bit of data integration
  • reasoning over triple stores
  • SPARQL over HTTP
  • Understand and exploit ontology and reasoning
  • Grow a software ecosystem like Firefox

Hello from Hinxton

So in my last post I pretty much said good-bye to the Unilever Centre and the people there, and now it is time for a hello – a hello to a new job. I have recently joined the Department of Genetics and the group of Prof Ashburner as a Research Associate. While I am formally employed by the university, I will spend most of my time at the European Bioinformatics Institute in the group of Christoph Steinbeck.

My remit here will be to continue to develop chemical ontology and in particular to help, together with my colleagues and the ChEBI user community, to put the ChEBI ontology onto a “formal” footing and to align it with the upper ontology used by the OBO Foundry ontologies. I will blog more about this as the story develops – however, for now, I am very excited about this new opportunity. I have a great set of new colleagues (Duncan Hull has also just joined the ChEBI team and has blogged about it) both in the ChEBI group as well as in the wider EBI community and there is a community of people here that believe in the value of this type of work. So I am very much looking forward to helping create some exciting ontology and resources of value to the chemical and biological community.

As I was walking across the Genome Campus this morning, I couldn’t help but be struck by its beauty – here are some pictures I took with my mobile phone:

Hinxton High Street - On the way to the Genome Campus

Genome Campus - By Hinxton Hall


Semantic Web Applications and Tools for Life Sciences – Morning Session

I am currently at a meeting in Edinburgh with the title “Semantic Web Applications and Tools for Life Sciences“. The title is programmatic and it promises to be a hugely exciting meeting. As far as I can tell, the British ontological aristocracy is here and a few more besides. The following are some notes I made during the meeting.

1. Keynote: Semantic Web Technology in Translational Cancer Research (M. Krauthammer, Yale Univ.)

How to integrate semantic web technologies with the Cancer Biomedical Informatics Grid (caBIG)?

Use case: melanoma…worked on at 5 NCI sites in US: Harvard, Penn, Yale, Anderson….can measure all kinases involved in disease pathways…use semantic technologies to share and integrate data from all sites and link to other data sources…e.g. drug screening results etc…..

MelaGrid consortium: data sharing, omics integration, workflow integration for clinical trials

Data sharing: create community wide resources – a federated repository of melanoma specimens

Currently caBIG uses the ISO/IEC 11179 metadata standard to register CDEs (common data elements), with additional annotation via NCI Thesaurus concepts. Example of use: caTissue, tissue-tracking software (multi-site banking, form definition, temporal searches, etc.)

omics integration: caBIG domain models are in essence ontologies; translate them into OWL models and integrate with other ontologies (e.g. the Sequence Ontology) to align data from various sources

They are using Sesame as a triple store, but have performance problems; they use SPARQL as the query language rather than caBIG’s own query language.

2. Semantic Data Integration for Francisella tularensis novicida Proteomic and Genomic Data (Nadia Anwar et al.)

Why is data integration important in biology?

Data integration in bioinformatics is not a solved problem. There are no technologies which address all the questions biologists are likely to ask, and there are also issues with data access and permissions. Yet another problem is the heterogeneous nature of the data: information discovery is not integrated, all technologies have strengths and weaknesses, and the data relates – but it doesn’t overlap.

Solution: semantic data integration across omes data silos….

Case study: Francisella tularensis (a bacterium; infection through the airways; infects the immune system; Francisella can bypass macrophages; forms a phagosome, but can escape from it; bioterrorism fears; the “Hittite plague” has been associated with tularemia)

Available data sources: genome data from the international database (converted to simple RDF data), KEGG, NCBI, GO, Poson, transcriptomics data

Used data from a proteomics experiment to integrate with the constructed graphs; could show that it was easy to query the whole graph. But there were issues with the modelling of the data and the resulting RDF graph, so some careful data modelling is still necessary; there were also performance and memory issues with datasets containing many reified statements.
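
The reification issue is easy to see in the numbers: standard RDF reification costs four extra triples per statement, before any provenance is even attached. A small illustration (invented data):

    from rdflib import Graph, Namespace, RDF, BNode, Literal

    EX = Namespace("http://example.org/ft/")   # invented namespace
    g = Graph()

    # the assertion itself
    g.add((EX.geneX, EX.expressedIn, EX.conditionA))

    # standard RDF reification of that assertion, e.g. to hang provenance off it
    stmt = BNode()
    g.add((stmt, RDF.type,      RDF.Statement))
    g.add((stmt, RDF.subject,   EX.geneX))
    g.add((stmt, RDF.predicate, EX.expressedIn))
    g.add((stmt, RDF.object,    EX.conditionA))
    g.add((stmt, EX.measuredBy, Literal("proteomics run 1")))

    print(len(g))   # 6 triples for what is conceptually one annotated statement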

Summary: In principle it’s easy – in practice it is still hard work

Use of shared lexical resources for efficient ontological engineering (Antonio Jimeno et al.)

Motivation: Health-e-Child project (creation of an integrated, grid-based healthcare platform for European paediatrics)

Use case: Juvenile Rheumatoid Arthritis Ontology construction
Reuse existing ontologies – Galen, NCI – but there is a problem with alignment because of missing information that could facilitate mapping; also, many mapping tools are based on statistics, which raises trust issues.

A common terminological resource for life sciences: generate a reference thesaurus linking Galen, NCI and the JRAO to normalise term concepts.

Def. thesaurus: a collection of entity names in a domain with synonyms and a taxonomy of more general and more specific terms (a DAG); no axiomatisation.

Problems in thesaurus construction: ambiguity (retinoblastoma – gene or disease?), inappropriate term labels, maintenance (the thesaurus and the ontologies now need to be updated simultaneously)…
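
In data-structure terms, a thesaurus entry of this kind is just a labelled node in a DAG; something like the following (a toy sketch, field names invented):

    from dataclasses import dataclass, field

    @dataclass
    class ThesaurusEntry:
        preferred_label: str
        synonyms: list = field(default_factory=list)
        broader: list = field(default_factory=list)    # more general entries
        narrower: list = field(default_factory=list)   # more specific entries
        # note: no axiomatisation beyond the broader/narrower hierarchy

    # the ambiguity problem: the same label denotes two different entries
    rb_gene    = ThesaurusEntry("retinoblastoma", synonyms=["RB1"], broader=["gene"])
    rb_disease = ThesaurusEntry("retinoblastoma", broader=["eye neoplasm"])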

KASBi: Knowledge Bases Analysis in Systems Biology ()

Problem: combining data from different data sources – use the semantic web rather than standard data integration systems for integration, and in particular use reasoners.

In KASBi they try to integrate reasoners/semweb with traditional database tech: semantic technology is used to generate a “query plan” which specifies how queries need to be carried out across the resources.

goWeb – Semantic Search Engine for the Life Science Web (Heiko Dietze)

Typical question: “What is the diagnosis for the symptoms for multiple spinal tumors and skin tumors?”, “Which organisms is FGF8 studied in?”

goWeb combines simple key-word web searching, text mining and ontologies for question answering

A keyword search in goWeb is sent to Yahoo, which returns snippets. These are subsequently pushed through NLP to extract concepts and mark them up with ontology concepts; the ontologies are then used to further filter the results.

Path Explorer: Service Mining for Biological Pathways on the Web (George Zheng)

Two major biological data representation approaches: free text (discoverable but not invocable) and computer models (constructed but made available in isolated environments – invocable but not discoverable).

Solution: model biological processes using web service operations (aim: to be both invocable and discoverable); pathways of service-oriented processes can be discovered and invoked.

SOA: service providers publish services into a registry where they can be discovered by service consumers.

DAMN – the slides are much too small, I can’t see anything… “entities are service providers and service consumers”
…ok… he’s lost me now – I can’t see anything anymore…

Close integration of ML and NLP tools in …
Scope: fine-grained semantic annotation, e.g. “the GenE protein inhibits…” – mark up “GenE protein” as a protein, “inhibits” as a negative interaction, etc.
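
Concretely, annotation at that granularity is usually represented as stand-off annotations over the text; something like this (an invented format, just to illustrate):

    text = "the GenE protein inhibits the GenF promoter"

    # stand-off annotations: (start, end, surface form, semantic type)
    annotations = [
        (4, 8,   "GenE",     "Protein"),
        (17, 25, "inhibits", "NegativeInteraction"),
        (30, 34, "GenF",     "Gene"),
    ]

    for start, end, surface, sem_type in annotations:
        assert text[start:end] == surface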

Availability of NLP pipelines: Alvis/A3P, GATE, UIMA – but domain-specific NLP resources are rare.

focus on target knowledge ensures learnability
rigorous manual annotation
high-quality annotation and low volumes require proper normalisation of training corpora (syntactic dependencies vs shallow clues)
clarification of different annotation tasks and knowledge – consistency between NE type and semantics

Fine grained annotation is feasible and necessary for high quality services: i.e. in verticals and science….

Right – time for lunch and a break. I have only captured aspects of the presentations and things that resonated with me at the time, so please nobody shoot me if they think I haven’t grabbed the most fundamental points. The link to the slides from the event is here.
