The Virtual Brain (TVB) is a scientific simulation platform that provides the means to generate, manipulate, and visualize the connectivity and network dynamics of brain networks. Researchers in computational neuroscience use brain network models to understand the dynamic behavior of the healthy and diseased brain, as measured by neuroimaging techniques such as fMRI, EEG, and MEG. TVB is currently the only neuroinformatics project that provides a platform for researchers to collaborate on modeling studies using brain network models with realistic connectivity. These simulations require datasets that are generally hosted on data storage platforms such as Zenodo, OpenSourceBrain, and EBRAINS. At present, one has to download a dataset manually, unzip it, and then use it inside the web GUI. The goal of this project is to replace that manual workflow with a dedicated framework that manages downloading and unzipping datasets from remote sources. We are currently focusing on the Zenodo platform, but we would like to extend the downloading functionality to other platforms such as OpenSourceBrain and EBRAINS.
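As a rough sketch of what such a framework might do, the snippet below queries a public Zenodo record through Zenodo's REST API (`https://zenodo.org/api/records/<id>`), lists the attached files, and downloads and unpacks an archive. The function names are hypothetical and not part of TVB's actual codebase; the record payload layout (`files` entries with a `key` and a `links.self` download URL) follows Zenodo's documented record representation.

```python
import io
import json
import urllib.request
import zipfile
from pathlib import Path

# Public Zenodo REST API endpoint for a single record.
ZENODO_API = "https://zenodo.org/api/records/{record_id}"


def extract_file_links(record):
    """Pull (filename, download_url) pairs out of an already-fetched record payload."""
    return [(f["key"], f["links"]["self"]) for f in record.get("files", [])]


def list_files(record_id):
    """Fetch a Zenodo record and return its (filename, download_url) pairs."""
    with urllib.request.urlopen(ZENODO_API.format(record_id=record_id)) as resp:
        record = json.load(resp)
    return extract_file_links(record)


def download_and_extract(url, dest):
    """Download one file; if it is a zip archive, unpack it into dest,
    otherwise save it as-is under its URL basename."""
    dest = Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as resp:
        payload = resp.read()
    if zipfile.is_zipfile(io.BytesIO(payload)):
        zipfile.ZipFile(io.BytesIO(payload)).extractall(dest)
    else:
        (dest / url.rsplit("/", 1)[-1]).write_bytes(payload)
```

A caller would do something like `for name, url in list_files("14992335"): download_and_extract(url, "datasets/")` (record ID invented here). Splitting the pure payload-parsing step (`extract_file_links`) from the network calls keeps the parsing testable offline and makes it easier to add back-ends for other platforms later.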
This working group will turn the COBIDAS recommendations and guidelines into a series of checklists hosted on a website, to let users report information faster and with more detail.
The machine-readable output can form the foundation of a Methods section. This will encourage adoption and use of emerging neuroimaging standards and tools such as BIDS and fMRIPrep, facilitate data sharing and pre-registration, and help with peer review.
The group envisions that using checklists to report methods and results can:
Provide comprehensive human- and machine-readable descriptions of data collection and analysis pipelines, reducing inefficiencies and friction in reuse
Facilitate the creation and preparation of pre-registration and registered reports, and help users think about and create pipelines before they start collecting data
Help make peer review more objective, for example by supplying an app to check pipelines
Facilitate systematic literature reviews and meta-analyses
Facilitate data sharing
The implementation of this project should remain flexible enough to accommodate new checklist items as new methods mature, and reusable enough that a checklist website for a different field can be set up easily.
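To make the idea of machine-readable output concrete, here is a minimal sketch of what a filled-in checklist entry and an auto-drafted Methods fragment might look like. The item identifiers, field names, and answer values are invented for illustration and do not reflect the actual COBIDAS schema or any real study.

```python
import json

# Hypothetical machine-readable checklist output: a flat list of
# answered items, each addressed by a dotted identifier.
checklist = {
    "study": "example-study",
    "items": [
        {"id": "acquisition.field_strength", "answer": "3T"},
        {"id": "preprocessing.software", "answer": "fMRIPrep 23.1.4"},
    ],
}


def to_methods_text(entries):
    """Render answered checklist items as draft Methods-section sentences."""
    return " ".join(
        f"{e['id'].split('.')[-1].replace('_', ' ').capitalize()}: {e['answer']}."
        for e in entries
    )


# The same structure serializes directly to JSON for sharing or archiving.
print(json.dumps(checklist, indent=2))
print(to_methods_text(checklist["items"]))
```

Keeping the stored form structured (IDs plus answers) while deriving the prose from it means the same record can feed a Methods draft, a pre-registration form, or an automated pipeline checker without re-entry.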
This Working Group formed as the result of a merger of several INCF Working Groups working in the areas of neuroimaging and reproducibility. The group has several separate projects that all have reproducibility in neuroimaging as an overarching theme, specifically focusing on data sharing, data management, and data description. The working group is composed of 3 task forces: Brain Imaging Informatics (NIDASH), Brain Imaging Data Structure (BIDS), Neuroimaging Data Model (NIDM).
This Working Group develops standards and best practices for quality control (QC) of neuroimaging data, including standardized protocols, easy-to-use tools, and comprehensive manuals. Assessing the quality of neuroimaging data requires human visual inspection. Given the complex nature, diverse presentations, and three-dimensional anatomy of image volumes, this requires inspection in all three planes and in multiple cross-sections through each volume. Looking at raw data alone is often not sufficient; statistical measurements (e.g. across space or time) can greatly assist in identifying artefacts or rating their severity. Proper QC often needs to take multiple types of visualizations and metrics into account, which is time-consuming and subject to large variability. With sample sizes and the number of modalities both increasing, there is a great need to develop appropriate QC annotation protocols and corresponding assistive tools.
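As a toy illustration of such a statistical measurement, the sketch below computes voxelwise temporal SNR (mean over time divided by standard deviation over time), a common screening metric for fMRI data quality. The synthetic arrays are invented for the example and stand in for real 4D (x, y, z, t) acquisitions; this is not a group-endorsed protocol.

```python
import numpy as np


def temporal_snr(bold):
    """Voxelwise temporal SNR for a 4D (x, y, z, t) array:
    mean over time divided by standard deviation over time,
    with zero-variance voxels mapped to 0 instead of dividing by zero."""
    mean = bold.mean(axis=-1)
    std = bold.std(axis=-1)
    return np.divide(mean, std, out=np.zeros_like(mean), where=std > 0)


# Synthetic example: a stable voxel time series versus a much noisier one.
rng = np.random.default_rng(0)
stable = 100 + rng.normal(0, 1, size=(1, 1, 1, 200))   # tSNR around 100
noisy = 100 + rng.normal(0, 20, size=(1, 1, 1, 200))   # tSNR around 5
```

A low-tSNR map region flags voxels worth a closer visual look; in practice such summary metrics complement, rather than replace, slice-by-slice inspection in all three planes.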