This working group is joint between OCNS and INCF. It focuses on evaluating and testing computational neuroscience tools: finding them, testing them, learning how they work, and reporting issues to developers so that these tools remain in good shape, with communities looking after them. Since many members of the WG are themselves tool developers, we will also learn from each other and work towards improving interoperability between related tools.
This working group will turn the COBIDAS recommendations and guidelines into a series of checklists hosted on a website, to let users report information faster and with more detail.
The machine-readable output can form the foundation of a Methods section. This will enhance adoption and use of emerging neuroimaging standards such as BIDS and fMRIprep, facilitate data sharing and pre-registration, and help with peer-review.
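As an illustration, a single checklist item could be stored as structured data and then rendered into a draft Methods sentence. The sketch below is purely hypothetical; the field names ("item", "value", "units") are illustrative, not a real COBIDAS schema:

```python
# Hypothetical sketch: COBIDAS-style checklist items captured as
# machine-readable entries, then rendered into draft Methods-section text.
# Field names are illustrative, not an actual schema.

def render_methods(entries):
    """Turn checklist entries into draft Methods-section sentences."""
    sentences = []
    for e in entries:
        value = f"{e['value']} {e['units']}".strip() if e.get("units") else str(e["value"])
        sentences.append(f"{e['item']}: {value}.")
    return " ".join(sentences)

entries = [
    {"item": "MRI scanner field strength", "value": 3, "units": "T"},
    {"item": "Repetition time (TR)", "value": 2.0, "units": "s"},
    {"item": "Smoothing kernel FWHM", "value": 6, "units": "mm"},
]
print(render_methods(entries))
```

Because the same entries stay machine-readable, they can also feed pre-registration forms or review tooling without re-entry.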
The group envisions that using checklists to report methods and results can:
Provide comprehensive human- and machine-readable descriptions of data collection and analysis pipelines to reduce inefficiency and friction in reuse
Facilitate the creation and preparation of pre-registration and registered reports, and help users think about and create pipelines before they start collecting data
Help make peer review more objective by supplying an app to check pipelines
Facilitate systematic literature reviews and meta-analyses
Facilitate data sharing
The implementation of this project should remain flexible enough to accommodate the inclusion of new items in the checklist as new methods mature, and reusable to enable easily setting up a checklist website for a different field.
When The Virtual Brain (TVB) runs simulations on cortical surfaces, we need to compute geodesic distances (distances along the surface) instead of the trivial Euclidean distances. For this computation we have a small C++ library, which has become outdated. We need to:
start with an analysis, done by the student, of whether the current implementation should be reused and fixed or completely replaced, then
proceed with the fix/replacement as concluded at the previous step.
If we are to fix the current implementation, we need to resolve the 6 issues reported on GitHub during this project, and also:
make sure the library compiles correctly with the latest version of Clang
write unit tests for the main flows as well as for some common exceptions
run the unit tests automatically by integrating them into our Jenkins CI system
at the end of the project, update the tvb-gdist packages on PyPI and conda-forge.
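To illustrate what "geodesic rather than Euclidean" means, the sketch below approximates geodesic distance by running Dijkstra over the mesh's edge graph. This is only a rough upper bound on the true surface distance (tvb-gdist itself wraps an exact C++ algorithm), but it shows the quantities involved and the kind of property a unit test could assert:

```python
import heapq
import math

def edge_graph(vertices, triangles):
    """Adjacency list weighted by Euclidean edge length."""
    adj = {i: {} for i in range(len(vertices))}
    for tri in triangles:
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[0], tri[2])):
            w = math.dist(vertices[a], vertices[b])
            adj[a][b] = w
            adj[b][a] = w
    return adj

def geodesic_approx(vertices, triangles, source):
    """Dijkstra from `source`: shortest distance along mesh edges to every
    vertex. Overestimates the true geodesic, since paths must follow edges."""
    adj = edge_graph(vertices, triangles)
    dist = {i: math.inf for i in adj}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# A small flat strip of four triangles; vertices 0 and 5 are not adjacent.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 2, 0), (1, 2, 0)]
tris = [(0, 1, 2), (1, 3, 2), (2, 3, 4), (3, 5, 4)]
d = geodesic_approx(verts, tris, 0)
# The distance along the surface can never be shorter than the straight line.
print(d[5], math.dist(verts[0], verts[5]))
```

A property like "geodesic distance >= Euclidean distance for every vertex pair" is exactly the sort of invariant the requested unit tests could check against the library's output.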
The SciUnit framework was developed to help researchers create unit tests for scientific models. Currently, unit tests exist for models of single neurons and small networks thereof. However, unit tests for models of large-scale brain network dynamics, such as meso-scale mean-field descriptions and corticothalamic circuit models, have not yet been developed.
● To create a basic GUI for the reconstruction pipeline that lets users provide input data, choose configurations, identify the outputs, and check logs when a problem occurs during the process.
● To integrate the GUI with our Pegasus workflow engine for automation, fault tolerance, and debugging, and to provide job status and execution statistics.
● Implement GUI automated testing.
● To implement more functionality for the GUI at a higher level of abstraction.
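A hypothetical sketch of the configuration record such a GUI might collect and validate before handing a job to the workflow engine; the field names are illustrative, not the pipeline's actual schema:

```python
# Hypothetical sketch of what the GUI collects from the user before
# submitting a job to a workflow engine such as Pegasus. Field names
# ("input_dir", "output_dir", "options") are illustrative only.
from dataclasses import dataclass, field

@dataclass
class PipelineConfig:
    input_dir: str
    output_dir: str
    options: dict = field(default_factory=dict)

    def validate(self):
        """Return a list of problems; an empty list means ready to submit."""
        problems = []
        if not self.input_dir:
            problems.append("no input data selected")
        if not self.output_dir:
            problems.append("no output location chosen")
        return problems

cfg = PipelineConfig(input_dir="", output_dir="/results")
print(cfg.validate())
```

Validating up front lets the GUI surface problems before a workflow run fails halfway through, which is where the fault-tolerance and log-checking goals above come in.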
This project is about unit tests for brain models with SciUnit. We want to evaluate the strength of various models and select among competing candidates for analyzing brain data.
The FitzHugh-Nagumo (FHN) model is a two-dimensional simplification of the Hodgkin-Huxley model that lets us study spike generation in the squid giant axon. This Jupyter notebook lets users test the stability of various parts of the FHN model under different parameter conditions. By varying the parameters, we can determine where points of stability lie and how the behavior of parts of the model differs under these conditions. We use a SciUnit boolean test to determine whether different parameter sets match one another in terms of stability under various conditions.
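The kind of check such a notebook test can make is sketched below, using the standard FHN equations dv/dt = v - v^3/3 - w + I and dw/dt = (v + a - b*w)/tau: find the fixed point, then classify its stability from the 2x2 Jacobian (stable iff trace < 0 and determinant > 0). The parameter values are textbook defaults, not necessarily the notebook's exact setup:

```python
# Stability check for the FitzHugh-Nagumo model:
#   dv/dt = v - v**3/3 - w + I,   dw/dt = (v + a - b*w) / tau
# Textbook parameter defaults; illustrative of a SciUnit-style boolean test.

def fhn_fixed_point(a, b, tau, I, lo=-3.0, hi=3.0):
    """Find v* with f(v*) = 0 by bisection, substituting the w-nullcline
    w = (v + a)/b into the v equation. f is monotone decreasing in v."""
    f = lambda v: v - v**3 / 3 - (v + a) / b + I
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    v = (lo + hi) / 2
    return v, (v + a) / b

def is_stable(a, b, tau, I):
    """A 2x2 Jacobian has all eigenvalues in the left half-plane
    iff trace < 0 and determinant > 0."""
    v, _ = fhn_fixed_point(a, b, tau, I)
    tr = (1 - v**2) + (-b / tau)
    det = (1 - v**2) * (-b / tau) - (-1) * (1 / tau)
    return tr < 0 and det > 0

print(is_stable(a=0.7, b=0.8, tau=12.5, I=0.0))  # resting state: stable
print(is_stable(a=0.7, b=0.8, tau=12.5, I=0.5))  # driven: fixed point destabilizes
```

Wrapping `is_stable` as a SciUnit boolean test would let the same stability criterion be run over many parameter sets and compared across them.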
Currently, XNAT comes with a built-in GUI. This project will provide a dashboard framework that allows users to easily develop responsive dashboards for exploring, monitoring, and reviewing datasets stored on any XNAT instance.
It will interact with the XNAT server instance, retrieve the required data from it, and visualize that information in a summarized form.
It will be designed so that it can be used with any XNAT instance.
This project will create a flexible dashboard framework that can be further improved and extended with new features as users' requirements change.
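A minimal sketch of the summarization step, assuming session records have already been fetched from an XNAT instance's REST API as a list of dicts; the field names used here ("project", "modality") are illustrative:

```python
from collections import Counter

# Sketch of the dashboard's summarization step. Session records are assumed
# to have been fetched from an XNAT instance's REST API as JSON; the field
# names below are illustrative, not XNAT's exact response schema.

def summarize_sessions(sessions):
    """Count sessions per project and per modality for a summary view."""
    return {
        "total": len(sessions),
        "per_project": dict(Counter(s["project"] for s in sessions)),
        "per_modality": dict(Counter(s["modality"] for s in sessions)),
    }

sessions = [
    {"project": "ProjA", "modality": "MR"},
    {"project": "ProjA", "modality": "PET"},
    {"project": "ProjB", "modality": "MR"},
]
print(summarize_sessions(sessions))
```

Keeping the summarization pure (data in, counts out) is what makes the framework reusable against any XNAT instance: only the fetching layer needs to know the server's details.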
The Workflow Designer is a prototype web-based application allowing drag-and-drop creation, editing, and running of workflows from a predefined library of methods. Moreover, any workflow can be exported to or imported from JSON to ensure reusability and local execution of exported configurations. The application focuses primarily on electroencephalographic (EEG) signal processing and deep learning workflows.
Currently, the entire Workflow Designer system (server, workflow system, and methods) is based on Java. The aim of this project is to migrate the backend from Java to Python and allow executing workflow blocks (methods) implemented in Python, using e.g. MNE for EEG signal processing or TensorFlow for deep learning. Just as in the current version, each block has inputs and outputs (which can be streams, arrays, files, etc.) and parameters that can be configured through a GUI. After the migration, a few deep-learning workflow blocks will be developed to demonstrate the functionality of the system.
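A minimal sketch of the block abstraction described above, assuming a simple JSON layout that is not the Workflow Designer's actual schema: each block declares named inputs, outputs, and parameters, and a workflow serializes to JSON for export and re-import:

```python
import json

# Sketch of the block abstraction: named inputs, outputs, and configurable
# parameters, with the whole workflow serializable to JSON. The layout and
# names are illustrative, not the Workflow Designer's real schema.

class Block:
    def __init__(self, name, inputs, outputs, params=None):
        self.name = name
        self.inputs = inputs      # e.g. ["raw_eeg"]
        self.outputs = outputs    # e.g. ["filtered_eeg"]
        self.params = params or {}

    def to_dict(self):
        return {"name": self.name, "inputs": self.inputs,
                "outputs": self.outputs, "params": self.params}

def export_workflow(blocks):
    """Serialize a workflow to JSON for reuse or local execution."""
    return json.dumps({"blocks": [b.to_dict() for b in blocks]})

wf = [
    Block("bandpass", ["raw_eeg"], ["filtered_eeg"], {"low_hz": 1, "high_hz": 40}),
    Block("classifier", ["filtered_eeg"], ["labels"], {"model": "cnn"}),
]
exported = export_workflow(wf)
reimported = json.loads(exported)
print(reimported["blocks"][0]["params"])
```

Because the exported JSON is language-neutral, workflows built against the current Java backend and the future Python backend can share the same interchange format.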
The objectives I wish to achieve for the EEG and DL workflows are:
Rewrite all the models/algorithms in Python, i.e. re-write:
Neural network models and classifiers in Python using Keras (TensorFlow).
Preprocessing: low/high-pass filtering, epoch extraction, averaging filters, etc.
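As an example of the preprocessing to be ported, epoch extraction amounts to slicing fixed-length windows around event markers. A real port would likely build on MNE's Epochs; this stdlib sketch just illustrates the operation:

```python
# Sketch of epoch extraction: slice fixed-length windows around event
# markers, dropping events whose window would run past the signal edges.
# A real implementation would likely use MNE's Epochs on multichannel data.

def extract_epochs(signal, event_indices, pre, post):
    """Return one window of samples [i - pre, i + post) per event index,
    skipping events too close to the start or end of the signal."""
    epochs = []
    for i in event_indices:
        if i - pre >= 0 and i + post <= len(signal):
            epochs.append(signal[i - pre:i + post])
    return epochs

signal = list(range(20))  # stand-in for one EEG channel's samples
epochs = extract_epochs(signal, event_indices=[5, 12, 19], pre=2, post=3)
print(epochs)  # the event at 19 is dropped: its window exceeds the signal
```

Averaging the resulting epochs sample-by-sample is then the standard way to compute an evoked response, which is what the "averaging filter" item above refers to.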
Recent advances in computer vision and deep learning have shown great promise in the ability of models to decipher images and derive inferences, including classification, object detection, and approximation of certain values with high accuracy.
Pre-trained models such as YOLONet and ResNet are now used in various industries, where they help make our lives easier. However, such models are not yet used for microscopic images on a large scale. With the right model architecture and training approach, it is possible to produce pre-trained models that would support the research efforts of many. Combined with a GUI, these pre-trained models would act as a community tool to help speed up the classification of thousands of microscopic images and derive inferences from them.
The top priorities of this proposal are:
Train deep learning model(s) on the image dataset(s) provided.
In the process of training, develop a data augmentation pipeline that can be used on cellular image datasets (even on cellular images not involved in this project) to help build a model robust enough for its purpose.
Make the trained model portable so that it can be easily integrated into a GUI backend.
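One way the augmentation pipeline could start is with simple orientation transforms, which are label-preserving for rotation-invariant classes such as cell types. The sketch below uses plain nested lists to stay self-contained; a real pipeline would operate on arrays (e.g. with tf.image):

```python
import random

# Sketch of label-preserving augmentation for cellular images. Images are
# plain nested lists here for self-containment; a real pipeline would use
# tensors and a library such as tf.image.

def hflip(img):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def augment(img, rng):
    """Randomly flip and rotate; valid when the class label does not
    depend on orientation, as with most cell-type labels."""
    out = img
    if rng.random() < 0.5:
        out = hflip(out)
    for _ in range(rng.randrange(4)):
        out = rot90(out)
    return out

img = [[1, 2],
       [3, 4]]
print(hflip(img))   # [[2, 1], [4, 3]]
print(rot90(img))   # [[3, 1], [4, 2]]
```

Each training image then yields up to eight distinct orientations, which is a cheap way to grow effective dataset size before adding heavier augmentations such as elastic deformation or intensity jitter.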
Neuroscience data comes in many different formats and structures. These differences pose a major technical barrier to sharing data between labs, or even within labs. Often the organization and naming conventions of neuroscience data structures further obscure how to understand and analyze the data unless one is already intimately familiar with a specific data structure. The Neurodata Without Borders (NWB) Initiative provides neurophysiology datasets in a standardized HDF5 format that employs domain knowledge to alleviate the burden of differing data formats and structures across multiple experimental paradigms. In addition, the NWB Initiative provides tools for handling, visualizing, and analyzing NWB-formatted data.
This proposed project aims to contribute to the NWB Showcase made available through NWB Explorer on the Open Source Brain repository. The project will deliver multiple converted datasets to be viewed in NWB Explorer and will integrate tutorials and analysis examples for selected converted datasets.
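To show the kind of hierarchical layout an NWB HDF5 file uses, the sketch below models it as a nested dict flattened into HDF5-style paths. This is only a schematic of the structure; actual conversions should use pynwb, which writes standard-compliant files:

```python
# Schematic of an NWB-like HDF5 hierarchy, modeled as a nested dict and
# flattened into HDF5-style paths. Group and dataset names are illustrative;
# real conversions go through pynwb, not hand-built dicts.

def flatten(group, prefix=""):
    """Map a nested dict to 'HDF5 path' -> leaf value pairs."""
    flat = {}
    for key, value in group.items():
        path = f"{prefix}/{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, path))
        else:
            flat[path] = value
    return flat

nwb_like = {
    "general": {"session_id": "mouse01_day1", "lab": "ExampleLab"},
    "acquisition": {"membrane_potential": [0.1, 0.2, 0.15]},
    "processing": {"spikes": {"unit_0_times": [0.01, 0.4]}},
}
paths = flatten(nwb_like)
print(sorted(paths))
```

The value of the standard is that every converted dataset exposes the same predictable paths, which is what lets tools like NWB Explorer visualize any compliant file without dataset-specific code.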