Neuroimaging Quality Control (niQC)
The Neuroimaging Quality Control (niQC) WG aims to develop standards and best practices for quality control of neuroimaging data, including standardized protocols, easy-to-use tools, and comprehensive manuals.
Assessing the quality of neuroimaging data, be it a raw MR acquisition (an fMRI run, or a diffusion- or T1-weighted scan) or the result of a particular image-processing task (e.g. reconstructed gray- or white-matter surfaces), requires human visual inspection. Given the complex nature, diverse presentations, and three-dimensional anatomy of image volumes, this requires inspection in all three planes and across multiple cross-sections through each volume. Often, looking at raw data alone is not sufficient, especially for spotting subtle errors; statistical measurements (e.g. across space or time) can greatly assist in identifying artefacts or rating their severity. For certain data (such as assessing the accuracy of cortical thickness estimates, e.g. those generated by FreeSurfer, or reviewing an EPI sequence), multiple types of visualization (such as surface renderings of the pial surface, or carpet plots with specific temporal statistics in fMRI) and metrics (SNR, CNR, DVARS, Euler number) need to be taken into account for proper quality control (QC). This process is time-consuming and subject to large intra- and inter-rater variability. Inter-rater variability arises from the costly training and “calibration” required between two or more raters using the same annotation protocol. Intra-rater variability stems from the individual rater’s gain in experience over time, but also from human errors, including inaccurate bookkeeping, fatigue, limitations of the annotation protocol or settings that obscure imaging artefacts and other defects, changes in the annotation protocol, and so on.
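As an illustration of one such temporal statistic, DVARS — the root-mean-square change in the fMRI signal between consecutive volumes — can be sketched in a few lines. This is a minimal, hypothetical sketch assuming an unnormalized 4D NumPy array; established tools (e.g. MRIQC) apply additional scaling and standardization in their definitions of DVARS.

```python
import numpy as np

def dvars(data):
    """Root-mean-square voxelwise signal change between consecutive
    volumes of a 4D fMRI array shaped (x, y, z, t). Minimal sketch:
    no intensity normalization or standardization is applied."""
    ts = data.reshape(-1, data.shape[-1])        # voxels x time
    diffs = np.diff(ts, axis=1)                  # successive-volume differences
    return np.sqrt(np.mean(diffs ** 2, axis=0))  # one value per volume pair

# Toy example on random data (shapes only; not a real scan)
vol = np.random.default_rng(0).normal(size=(4, 4, 4, 10))
print(dvars(vol).shape)  # (9,) — one value per consecutive volume pair
```

A spike in this trace flags a volume pair worth inspecting visually, rather than serving as a pass/fail verdict on its own.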
The niQC SIG has developed a survey to get a sense of the various types of QC/QA performed in the neuroimaging community. Survey respondents should cover the full life-cycle of neuroimaging research, including but not limited to data collection, preprocessing, intermediate outputs, and final results, and are requested to provide details of the various QC/QA processes they conduct in their projects. The SIG plans to analyze the responses from the community to identify the challenges our community is facing, consolidate and review existing research, develop protocols, compile manuals, improve tools, and make recommendations for best practices. Fill in the survey here
As datasets grow in sample size and number of modalities, there is a great need to develop appropriate quality-annotation protocols and corresponding assistive tools. Quality control of neuroimaging data has been studied and reported along several dimensions, including algorithms that detect unusable scans based on more or less interpretable features extracted from the images (e.g. image quality metrics), and visual screening following a prescribed protocol with the help of visualization tools. However, a common lesson learnt from previous research across multiple modalities is that the accuracy of these “automatic” methods is too low to be relied on for routine usage, and that manual visual inspection remains necessary.
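For instance, a crude automated screen might compute temporal SNR and flag runs below a threshold — the kind of “automatic” method that, per the lesson above, can help prioritise scans for visual review but not replace it. The function names and the threshold below are illustrative assumptions, not an established standard.

```python
import numpy as np

def temporal_snr(data):
    """Voxelwise temporal SNR for a 4D array (x, y, z, t):
    mean over time divided by standard deviation over time."""
    mean = data.mean(axis=-1)
    std = data.std(axis=-1)
    return mean / np.maximum(std, 1e-12)  # guard against zero variance

def flag_low_tsnr(data, threshold=30.0):
    """Flag a run whose median tSNR falls below a study-specific
    threshold (30.0 here is purely illustrative). Intended only to
    prioritise scans for visual review, not to replace it."""
    return bool(np.median(temporal_snr(data)) < threshold)
```

Any such threshold must be calibrated per scanner, sequence, and study, which is precisely why shared protocols and labelled datasets are needed.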
Aims and goals
As we attempted to build a cohesive tool, we were met with many open questions, such as:
- what kind of preprocessing needs to be applied before review?
- what are the widely accepted definitions of long-known and recently discovered artefacts?
- what are the acceptable grades of artefact severity?
- what quality metrics should be used for what QC purpose?
- what steps of different pipelines must be QCed?
- when can we rely on automatic methods?
- should we enforce QC to be a requirement for all publications (during peer-review)?
- how do we reduce differences in the way QC processes are reported?
These questions have yet to be widely and openly discussed on the way to defining best practices. Such open discussion and wider consultation is necessary for the development of protocols. Establishing a task force would facilitate this much-needed discussion and encourage wide participation.
In addition to the above discussion, there is a clear need to develop educational resources, such as comprehensive manuals (covering all modalities of neuroimaging data and different approaches to QC and curation), as well as easy-to-use tools that integrate well with the developed manuals and protocols.
Hence, we have come together to announce this task force on Neuroimaging Quality Control (niQC) with the following overarching goals:
- develop a comprehensive manual for quality control of neuroimaging data,
- develop guidelines and best practices for conducting and reporting QC,
- publish protocols for different use cases,
- develop easy-to-use tools implementing those guidelines and protocols,
- integrate manuals and educational components where possible.
Report from the niQC meeting on Aug 8, 2018
(At the Hackathon prior to INCF Neuroinformatics 2018 conference)
Each participant described the challenges they faced (from various perspectives), as well as the results from their own analyses. Topics touched on included neonatal data, crowdsourcing, lack of consistency, and the lack of public “rated”/labelled datasets (the “ground truth” needed to develop algorithms). Everyone agreed on the need for standards, easier-to-use tools, and more educational materials. A consensus was reached on running a survey to learn “who is doing what”.
Pradeep to draft the survey to gather in-house protocols, their justification, and any existing tools/libraries; share it with the group to finalize the questions and the breadth/depth of the survey, as answers depend on how the questions are asked; get it circulated widely; set up the website, repos, etc.
Another possible survey: for the purposes of algorithm development and educational materials, crowdsourcing examples of bad scans (or those with various interesting/extreme artefacts) would be valuable.
No concrete dates yet – they will be announced via the Google group. Suggestion [from Ben Inglis]: an open Google Hangouts/Skype call at some regular interval (every 6-8 weeks?), especially in the first year.
Stephen Strother, David Kennedy, Pierre Bellec, Sebastian Urchs, Elizabeth Dupre, Taylor Salo, Katie Battenhorn, Erin Dickie, Yang Ding, Steve Hodges, Julie Bates, Dawn Smith, Greg Kiar, Basile Pinsard, Pradeep Reddy Raamana