Tomorrow’s Giants 2 – Dataset Comparison, Data Sharing and Future Literatures

Following my first post from last week, here are more questions that the Royal Society wanted us Cambridge researchers to discuss during the preparatory Tomorrow’s Giants meeting in Cambridge.

How can – and is it appropriate to – facilitate inter-laboratory dataset comparison?
Great that the question was asked. And the answer is: yes, of course it is. Not only is it appropriate, it is the very essence of scientific endeavour. What else could be called science? That said, the fact that the question even had to be asked and that the answer is not self-evident is disappointing. What have science and scientists lost by way of attitude and ethics that makes us even ask that question? Admittedly, there may be commercial reasons why this sort of comparison is not desirable. One of the participants in the session was at great pains to point out that there is often commercial interest tied up in data which prevents sharing and re-use, and that is a fair point. However, over the past couple of years I have sat through far too many presentations in which the presenter talked about the development of a proprietary model or machine-learning tool using a proprietary dataset and proprietary software. Now that is NOT science – at best it is a piece of local engineering which solves a particular problem for the presenter, but it does not advance human knowledge at all. I, as a fellow scientist, could not pick up any aspect of this work and build upon it, because it is all proprietary. Local engineering at best.

Does the type of data have an impact on the ways it can be shared?
Flippantly speaking: “you betcha”. Again, great that the question was even asked. And the answer is multifaceted, because the question can be read in a number of different ways. It could be read as “does the provenance of the data and the context in which it was generated have an impact on the ways in which it can be shared?” It can also be read as “does the (technical) format the data is in have an impact on the way in which it can be shared?” The answer in both cases is yes. Let’s tackle these two in turn.

One of the participants of the workshop worked at the Faculty of Education, and her primary research data consisted of a large collection of interviews she had conducted with children over the course of her work. She believes that this data is valuable to other researchers in her field and would dearly love to share it – but finds herself in a mire of legal and ethical concerns, for example with respect to the children’s privacy, that effectively prevent her from sharing. So yes, the context in which data is produced and the type of data that is generated can be an obstacle to sharing.

If “type of data” is understood to mean “format”, then the answer is also yes. A number of my colleagues have pointed out (see here, for example) the data loss that occurs when documents containing scientific data are converted from the format in which they were produced to pdf (examples are here, here and here). The production of data in vernacular or lossy data formats also has an impact on data sharing – particularly when the sharing and exchange format is lossy.
However, the fact that the question had to be asked at all, and that it went straight over the heads of most scientists at the meeting who do not work in the data business, is intensely disappointing. Many laboratory researchers have no appreciation of what they are losing when they convert their Word documents to pdf. Data science and informatics are not part of the standard curriculum in the education of scientists – something that desperately needs to change if data loss due to ignorance in data handling is to be avoided in the future.
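The loss is easy to demonstrate in miniature. The sketch below is purely illustrative (the dataset, sample name and field names are invented): it serializes the same small table of measurements twice, once to a machine-readable format (JSON) that round-trips losslessly, and once to a formatted text rendering, analogous to a pdf page, from which the field names, units and types cannot be recovered automatically.

```python
import json

# A tiny, invented dataset: wavenumber/intensity pairs with explicit units.
spectrum = {
    "sample": "PS-042",
    "units": {"wavenumber": "cm^-1", "intensity": "a.u."},
    "points": [[620.0, 0.13], [1001.4, 1.00], [1602.3, 0.47]],
}

# Machine-readable route: JSON round-trips with structure and types intact.
restored = json.loads(json.dumps(spectrum))
assert restored == spectrum  # nothing was lost

# Presentation route: render the same data as display text, as a pdf would.
page = "Raman spectrum of sample PS-042\n" + "\n".join(
    f"  {w:8.1f}    {i:.2f}" for w, i in spectrum["points"]
)
print(page)
# A human can still read the numbers, but the keys, units and numeric types
# are gone; any machine consumer must now guess them back by parsing text.
```

The second route is exactly what happens when a structured document is flattened into a page description: the ink survives, the data model does not.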

Future literatures in the wider sense i.e. not just how findings are published in journals, but how can interim findings be shared and accessed?
That is a great question and one, as it turns out, that many of the people present at the meeting had already pondered in one form or another. Scientists should not only be assessed on the basis of the journal articles they write, but also, for example, on the (raw) data they publish. However, science has so far not evolved a technical solution to the data-publication problem (of course, there isn’t just one solution – there are many, depending on the type of data as well as the specific subject, sub-subject, sub-sub-subject that is producing the data, and so on). Interim findings are part of this, and systems like Nature Precedings could point the way (although even Nature Precedings does not allow us to deal with data). Obviously, one has to be careful that these do not just become dumping grounds for lower-quality science. Once we have evolved technical solutions for publishing data, the next step will be to develop an ecosystem of metrics. Those metrics should extend only to things like data quality, trust and data provenance. Data “usefulness” – things like citation indices for data – should, I think, not be part of the mix: it is impossible to predict what data will be useful when and under which circumstances (and incidentally the same is true for papers). In that sense, data usefulness can be as flighty as fashion and should not be a criterion.

There were a few more questions – and I will blog about these in a future post.


Data-Rich Publishing

I have been insanely busy recently with trips and papers and corrections and so on, and only now have a bit of time to catch up with some of my feeds and people’s blog posts. One post that caught my eye was Egon’s recent post about data-rich, or data-centric, publishing, in which he argues strongly for a new kind of publishing: one in which data is treated as a first-class citizen and which allows – indeed requires – an author to publish not just the words of a paper but the research data too, and to publish it in such a way that the barrier to access by machines is low.

This reminded me of what I thought was a particularly tragic case, which I blogged about a while ago here. In this case, industrious researchers had synthesized an incredible 630 polystyrene copolymers and recorded their Raman spectra. A lot of work went into producing the polymers and recording the data – and I ask you (provided you are a materials scientist with an interest in such things): when was the last time YOU came across such a large and rich library of polymers together with their spectral data? And yet, through no fault of their own, the only way these authors saw to publish their data was as a pdf archive in the supplemental information. That is more than a crying shame.
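As a thought experiment (the file layout, sample names and column headings below are my own invention, not the authors’), the same library could have been deposited as a plain CSV file, one row per measured point, which any spreadsheet, script or database can consume directly with no special tooling:

```python
import csv
import io

# Invented stand-in for two of the 630 copolymer spectra.
library = {
    "copolymer-001": [(620.0, 0.12), (1001.4, 0.98)],
    "copolymer-002": [(621.1, 0.09), (1002.0, 1.00)],
}

# Write the whole library as flat, self-describing rows.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["sample", "wavenumber_cm-1", "intensity_au"])
for sample, points in library.items():
    for wavenumber, intensity in points:
        writer.writerow([sample, wavenumber, intensity])

csv_text = buf.getvalue()
print(csv_text)

# Reading it back requires nothing beyond the standard library.
rows = list(csv.DictReader(io.StringIO(csv_text)))
assert rows[0]["sample"] == "copolymer-001"
```

A pdf archive of the same numbers would force every reuser to write a scraper first; a flat file like this makes the 630 spectra queryable the moment it is downloaded.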

Now, Egon’s point was that newly formed journals – and in particular newly formed Journals of Chemoinformatics – have the opportunity to do something fundamentally good and wholesome: namely, to change the way in which data publication is accomplished and to give scientists BETTER tools to deal with and disseminate their data. This long and rambly blog post is my way of violently agreeing with Egon: I believe that THIS is where an awful lot of the added value of the journal of the future will lie. It will be even more true as successive generations of scientists become more data savvy: last week I talked to a collaborator of ours who had just applied for funding to train chemistry students in both chemistry and informatics – a whole dedicated course. Once these students start their own scientific careers, they will both care and know about science and scientific data. And if I were a publisher, I would want to have something to offer them….
