Several recent papers underline methodological problems that limit the validity of published results in imaging studies in the life sciences, and especially the neurosciences (Carp, 2012; Ingre, 2012; Button et al., 2013; Ioannidis, 2014). At least three main issues are identified that lead to biased conclusions in research findings: endemic low statistical power, selective outcome reporting, and selective analysis reporting. Because of this, and in view of the lack of replication studies, false discoveries persist. To overcome the poor reliability of research findings, several actions should be promoted, including large cohort studies, data sharing, and data reanalysis. The construction of large-scale online databases should be facilitated, as they may contribute to the definition of a "collective mind" (Fox et al., 2014), facilitating open collaborative work or "crowd science" (Franzoni and Sauermann, 2014). Although technology alone cannot change scientists' practices (Wicherts et al., 2011; Wallis et al., 2013; Poldrack and Gorgolewski, 2014; Roche et al., 2014), technical solutions that support a more "open science" approach should be identified. Data analysis also plays an important role. For the analysis of large datasets, image processing pipelines should be built from the best algorithms available, and their performance should be objectively compared in order to disseminate the most relevant solutions. In addition, the provenance of processed data should be ensured (MacKenzie-Graham et al., 2008). In population imaging, this means providing effective tools for data sharing and analysis without increasing the burden on researchers.

This subject is the main objective of this research topic (RT), cross-listed between the specialty section "Computer Image Analysis" of Frontiers in ICT and Frontiers in Neuroinformatics. First, the RT gathers work on innovative solutions for the management of large imaging datasets, possibly distributed across several centers. The paper by Danso et al. describes their experience with the integration of neuroimaging data coming from several stroke imaging research projects. They detail how the initial NeuroGrid core metadata schema was gradually extended to capture all the information required for future meta-analyses while ensuring semantic interoperability for future integration with other biomedical ontologies. With a similar concern for interoperability, Shanoir relies on the OntoNeuroLog ontology (Temal et al., 2008; Gibaud et al., 2011; Batrancourt et al., 2015), a semantic model that formally describes entities and relations in the medical imaging, neuropsychological, and behavioral assessment domains. Its "Study Card" mechanism seamlessly populates metadata aligned with the ontology, avoiding tedious manual entry, and automatically checks the conformity of imported data with a predefined study protocol (a minimal sketch of such a check is given below). The ambitious objective of the BIOMIST platform is to provide an environment that manages the entire lifecycle of neuroimaging data, from acquisition to analysis, while ensuring full provenance information for any derived data (see the provenance sketch at the end of this section). Interestingly, it is conceived on the basis of the product lifecycle management approach used in industry to manage products (here, neuroimaging data) from inception to manufacturing. Shanoir and BIOMIST partly share the same OntoNeuroLog ontology, which facilitates their interoperability.
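To make the kind of conformity checking performed by a Study Card concrete, the sketch below illustrates the general idea in Python. It is not Shanoir's actual API: the `StudyProtocol` class, the metadata field names, and the tolerance value are all hypothetical, chosen only to show how an imported series can be validated against a predefined protocol.

```python
from dataclasses import dataclass

# Hypothetical, simplified stand-in for a Study Card: the expected
# acquisition parameters for one sequence of a predefined study protocol.
@dataclass
class StudyProtocol:
    modality: str            # e.g. "MR"
    sequence_name: str       # e.g. "T1 MPRAGE"
    tr_ms: float             # expected repetition time (ms)
    te_ms: float             # expected echo time (ms)
    tolerance: float = 0.05  # relative tolerance on numeric fields

def conforms(metadata: dict, protocol: StudyProtocol) -> list[str]:
    """Return a list of conformity violations (empty if the import is valid)."""
    errors = []
    if metadata.get("Modality") != protocol.modality:
        errors.append(f"Modality {metadata.get('Modality')!r}, expected {protocol.modality!r}")
    if metadata.get("SeriesDescription") != protocol.sequence_name:
        errors.append(f"Sequence {metadata.get('SeriesDescription')!r}, expected {protocol.sequence_name!r}")
    for field, expected in (("RepetitionTime", protocol.tr_ms), ("EchoTime", protocol.te_ms)):
        value = metadata.get(field)
        if value is None or abs(value - expected) > protocol.tolerance * expected:
            errors.append(f"{field}={value}, expected {expected} (+/-{protocol.tolerance:.0%})")
    return errors

# Example: an imported series whose echo time deviates from the protocol.
protocol = StudyProtocol("MR", "T1 MPRAGE", tr_ms=2300.0, te_ms=2.98)
imported = {"Modality": "MR", "SeriesDescription": "T1 MPRAGE",
            "RepetitionTime": 2300.0, "EchoTime": 3.5}
for violation in conforms(imported, protocol):
    print("Non-conformity:", violation)
```

The point of such a check is that non-conforming data are flagged at import time, before they silently contaminate a study, rather than being discovered during analysis.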
ArchiMed is a data management system that has been integrated in a clinical environment for five years. Not restricted to neuroimaging, ArchiMed deals with multi-modal and multi-organ imaging data, with specific provisions for long-term data preservation and confidentiality in accordance with French legislation. Shanoir and ArchiMed are integrated into FLI-IAM, the national French IT infrastructure for in vivo imaging.
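In the same spirit, the provenance requirement discussed above (MacKenzie-Graham et al., 2008) can be made concrete with a minimal sketch of a provenance record for one processing step. The record structure, field names, and file names are assumptions for illustration only, not the format used by BIOMIST, Shanoir, or ArchiMed; the key idea is that a derived image is tied to the exact content of its inputs, to the tool and version that produced it, and to the parameters used.

```python
import hashlib
import json
import platform
from datetime import datetime, timezone
from pathlib import Path

def sha256(path: Path) -> str:
    """Content hash, so a derived file can be tied to its exact inputs."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def provenance_record(inputs: list[Path], output: Path,
                      tool: str, version: str, parameters: dict) -> dict:
    """A minimal (illustrative) provenance record for one processing step."""
    return {
        "inputs": [{"path": str(p), "sha256": sha256(p)} for p in inputs],
        "output": {"path": str(output), "sha256": sha256(output)},
        "tool": tool,
        "tool_version": version,
        "parameters": parameters,
        "executed_on": platform.platform(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Placeholder files so the example runs end-to-end; in practice these
# would be the actual raw and derived images.
Path("sub-01_T1w.nii.gz").write_bytes(b"raw scan bytes")
Path("sub-01_T1w_seg.nii.gz").write_bytes(b"derived segmentation bytes")

# Record how a (hypothetical) segmentation was derived from a raw T1 scan.
record = provenance_record(
    inputs=[Path("sub-01_T1w.nii.gz")],
    output=Path("sub-01_T1w_seg.nii.gz"),
    tool="example-segmenter", version="1.2.0",
    parameters={"threshold": 0.5},
)
Path("sub-01_T1w_seg.prov.json").write_text(json.dumps(record, indent=2))
```

Stored alongside each derived image, a record of this kind is what allows a result to be audited or a pipeline to be rerun identically, which is precisely the guarantee the platforms discussed in this RT aim to provide at scale.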