The current issue of D-Lib is devoted entirely to research data. The editorial states:
“The articles cover a wide variety of topics, including the acquisition and management of scientific data, the quality and trustworthiness of that data, the connections between data and traditional scholarly publishing, metadata for datasets, and last but not least a peer reviewed journal devoted to the publication of datasets.”
And indeed, the articles offer a good overview of the state of the discussion. The special issue grew out of a DataCite conference held last year.
J. Klump is also represented in the issue: Criteria for the Trustworthiness of Data Centres
Abstract: The use of persistent identifiers to identify data sets as part of the record of science implies that the data objects are persistent themselves. Scientific findings, historical documents and cultural achievements are to a rapidly increasing extent being presented in electronic form — in many cases exclusively so. However, besides the invaluable advantages offered by this form, it also carries serious disadvantages. The rapid obsolescence of the technology required to read the information, combined with the frequently imperceptible physical decay of the media themselves, represents a serious threat to preservation of the information content. Since research projects only run for a relatively short period of time, it is advisable to shift the burden of responsibility for long-term data curation from the individual researcher to a trusted data repository or archive. But what makes a data repository trustworthy? The trustworthiness of a digital repository can be tested and assessed on the basis of a criteria catalogue. Such catalogues can also serve as a basis for developing a procedure for auditing and certifying the trustworthiness of digital repositories.