Agent-based models (ABMs) can present both first-person and third-person perspectives. This project aims to develop an AR/VR experiment application compatible with devices such as Meta Quest as well as WebXR, allowing learners and researchers around the world to access and interact with the models. The application will also incorporate haptic feedback to make it more interactive. It will use a well-known experimental paradigm (e.g., the Morris Water Maze) in which the subject can interact with both the environment and the stimuli. The environment should allow participant data to be collected and should exhibit a high degree of environmental realism.
DevoGraph builds a graph neural network (GNN) in two stages: stage 1 extracts cell centroids, which are fed to stage 2 to construct the GNN. However, DevoLearn does not segment properly when cells are densely packed, and thus cell volumes cannot be extracted. We therefore propose a new instance-segmentation model for DevoLearn that extracts densely packed cells, along with their volumes, from time-series data, as this helps produce better graph embeddings at different points in time. Topological data analysis is then performed on the microscopy data with different toolkits to extract topological features from the raw data.
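The centroid-and-volume extraction that stage 1 performs on a segmentation mask can be sketched as follows. This is a minimal illustration, assuming the instance-segmentation model has already produced an integer-labelled 3D mask; `extract_centroids_and_volumes` is a hypothetical helper, not part of DevoLearn's actual API.

```python
import numpy as np

def extract_centroids_and_volumes(labels):
    """Per-cell centroids and voxel volumes from an integer-labelled
    3D segmentation mask (illustrative sketch, not DevoLearn's API)."""
    stats = {}
    for cell_id in np.unique(labels):
        if cell_id == 0:          # 0 is assumed to be background
            continue
        coords = np.argwhere(labels == cell_id)
        stats[int(cell_id)] = {
            "centroid": coords.mean(axis=0),  # mean voxel position
            "volume": len(coords),            # voxel count
        }
    return stats
```

Repeating this per time point yields the node features (position, volume) that stage 2 could embed in the graph.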
This project requires building a new Structural Connectivity Editor Widget. Its features will include (but are not limited to): displaying the connectivity matrix, normalizing the matrix, resecting connections, resecting nodes, changing connection weights, and saving the resulting connectivity. It will help users edit the connectivity matrices involved in a TVB simulation. Users will be able to access the new widget from a JupyterLab notebook or from the Xcircuits extension in JupyterLab.
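Two of the planned operations, normalization and node resection, can be sketched on a plain weight matrix. This is a minimal illustration of one possible convention (max-scaling, and zeroing a resected region's row and column), not the widget's or TVB's actual API; the function names are hypothetical.

```python
import numpy as np

def normalize(conn):
    """Scale connection weights to [0, 1] by the maximum absolute weight
    (one common convention; other normalizations are possible)."""
    m = np.abs(conn).max()
    return conn / m if m else conn

def resect_node(conn, idx):
    """Resect one region by zeroing its row and column
    (an alternative would be deleting them outright)."""
    out = conn.copy()
    out[idx, :] = 0.0
    out[:, idx] = 0.0
    return out
```

Resecting individual connections or editing weights would similarly be in-place element updates on the matrix before saving it back for the simulation.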
Matching papers to reviewers by topic is a crucial task for the Neurons, Behavior, Data Analysis, and Theory (NBDT) journal. However, the current automatic reviewer-assignment tool, which uses SciBERT embeddings, cosine similarity, and linear programming, may not capture the semantic meaning of the text accurately. This project aims to address this limitation by fine-tuning SciBERT on a relevant corpus and selecting appropriate optimization objectives. SciBERT learns from both the left and right contexts of words and has a vocabulary better suited to scientific texts than BERT's. The project will involve creating a training dataset, pre-processing it, generating the appropriate features, fine-tuning SciBERT, generating word embeddings from the fine-tuned model, choosing optimization objectives (Contrastive Learning, Learning to Rank Diversely, and LambdaRank) appropriate to the dataset obtained, and evaluating the resulting performance. The expected outcome is an improved tool that more accurately matches papers to reviewers for the NBDT journal and that can potentially be useful in other domains as well.
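The cosine-similarity scoring step of such a pipeline can be sketched as below, assuming paper and reviewer embeddings have already been produced by the (fine-tuned) SciBERT model. In the real tool, linear programming would then enforce assignment constraints on top of these scores; the function names here are illustrative.

```python
import numpy as np

def cosine_similarity(a, b):
    # cosine of the angle between two embedding vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_reviewers(paper_vec, reviewer_vecs):
    # reviewer_vecs: {reviewer_name: embedding}; returns names, best match first
    scores = {name: cosine_similarity(paper_vec, v)
              for name, v in reviewer_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Fine-tuning changes only how the embedding vectors are produced; this scoring and ranking step stays the same, which is why better embeddings translate directly into better matches.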
Measuring visual function in infants and young children is crucial for the early detection and treatment of eye conditions that can cause visual deprivation and affect visual development. However, infants' limited cooperation and inability to give verbal responses make accurate and efficient measurement of visual function challenging. This project aims to develop a ready-to-deploy application suite that addresses these limitations by integrating hardware devices or deep-learning-based infant eye trackers, together with visual-stimulus analysis, into a user-friendly graphical user interface (GUI).
This project is geared towards improving the Longitudinal Online Research and Imaging System (LORIS) data platform, the web-based data and project management software for neuroimaging research studies.
The proposed project aims to address the sustainability challenges faced by open-source software projects using an agent-based modelling and simulation approach. Open-source projects often face issues such as limited resources, difficulty attracting and retaining contributors, and communication breakdowns, which can hinder their growth and sustainability. To address these challenges, the project will simulate various scenarios and identify the factors that contribute to the success or failure of open-source projects. It will provide a framework for simulating different scenarios to help project maintainers make informed decisions that promote sustainability. This approach can offer a more detailed and context-specific understanding of the challenges open-source projects face and help develop effective strategies for maintaining their sustainability.
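A toy version of one such scenario, contributor attraction versus churn, can be sketched as a stepwise simulation. All parameter names and values here are illustrative assumptions, not results from the project; a full ABM would give each agent richer state (motivation, communication links, resources) rather than a single join/leave probability.

```python
import random

def simulate_contributors(months=24, join_prob=0.6, churn_prob=0.05, seed=42):
    """Toy contributor-count simulation: each month at most one newcomer
    joins with probability join_prob, and each existing contributor
    leaves with probability churn_prob. Purely illustrative parameters."""
    rng = random.Random(seed)
    contributors = 5            # assumed initial team size
    history = [contributors]
    for _ in range(months):
        if rng.random() < join_prob:
            contributors += 1
        leavers = sum(1 for _ in range(contributors)
                      if rng.random() < churn_prob)
        contributors -= leavers
        history.append(contributors)
    return history
```

Sweeping `join_prob` and `churn_prob` over many runs is the kind of scenario comparison the framework would let maintainers perform at a much finer-grained, agent-level scale.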
This project’s objective is to improve the overall GUI design and user experience of the AnalySim platform. From design to functionality, various features will be examined and either replaced or modified to enhance users’ interaction with datasets, code, and site navigation.
We are trying to solve the difficulty of collaboration among multiple users working on large datasets, particularly datasets with many parameters and essential features that need to be filtered, measured, and analyzed. The solution is AnalySim, a data-sharing platform that simplifies collaboration by providing easy sharing, analysis, visualization, and collaboration capabilities on datasets. The first deliverable is the ability to embed Jupyter notebooks, Observable HQ notebooks, and Google Colab notebooks on the website through an interactive panel. The second deliverable is an interactive panel or interface that displays different types of features of a dataset, including the minimum, maximum, mean, number of non-zero values, and number of invalid values. The interface should also offer visualization options such as histograms, pie charts, multi-series line charts, graphs, and visualizations of data with more than two dimensions, e.g., 3D scatter plots, 3D mesh plots, 3D line plots, box plots, and bubble charts. The third deliverable is the ability for users to add publications related to the datasets in a project. We will achieve this by creating a text-editor page where users can add, edit, and remove content, as well as add tables and images. The text editor should allow users to add LaTeX code so that they can publish mathematical analyses of their findings. We will also implement a page that lists the publications, with clickable items that redirect to a page detailing each publication. Finally, we will refactor parts of the code to maintain a better project structure, consolidate repeated fragments of code into shared methods to improve readability, and remove deprecated methods.
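The per-column statistics the second deliverable's panel would display can be sketched as a small helper. This is an illustrative back-end computation, not AnalySim's actual code; here "invalid" is assumed to mean NaN entries.

```python
import numpy as np

def column_summary(values):
    """Summary statistics for one dataset column, as the feature panel
    might display them (hypothetical helper, not AnalySim's API)."""
    arr = np.asarray(values, dtype=float)
    invalid = np.isnan(arr)          # assumption: invalid == NaN
    valid = arr[~invalid]
    return {
        "min": float(valid.min()),
        "max": float(valid.max()),
        "mean": float(valid.mean()),
        "non_zero": int(np.count_nonzero(valid)),
        "invalid": int(invalid.sum()),
    }
```

The front-end panel would render one such summary per column, with the histogram and chart options drawing on the same cleaned `valid` array.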
ASSR refers to cortical entrainment to the frequency and phase of an auditory signal presented as a fixed "train of clicks" at a gamma-range rhythm (40 Hz). A hallmark of schizophrenia is a reduction in the ASSR; this project aims to reproduce this phenomenon using an auditory cortex (A1) model with thalamocortical connectivity. The model simulates a cortical column 2000 μm deep and 200 μm in diameter, containing over 12k neurons and 30M synapses. Specifically, we aim to reproduce results from an experiment on the effects of increased CB1 receptor availability and GABA receptor deficits, as these have been linked to the EEG abnormalities that characterize schizophrenia. We will start by running batch simulation tasks to pull out connectivity rules from the A1 model, modifying GABA and CB1 receptors. We will then analyze parameter sweeps of local field potentials using the LFPy toolbox. Afterwards, we will move to parameter optimization of the A1 model so that it reproduces the ASSR, using the Optuna HPO toolkit. Our ultimate goal is to reproduce the ASSR phenomenon in the A1 model. Documentation for every step of the process will be made available on a deployed site.
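The batch-simulation and parameter-sweep step can be sketched as a generic driver that enumerates parameter combinations and records one result per run. The parameter names below are illustrative stand-ins for the GABA/CB1 modifications; the real project would dispatch full A1 batch simulations instead of calling a local function, and Optuna would later replace the exhaustive grid with guided optimization.

```python
import itertools

def sweep(param_grid, run_sim):
    """Minimal parameter-sweep driver: run run_sim(params) for every
    combination in param_grid and collect (params, result) pairs.
    Illustrative sketch; real runs would be batched A1 simulations."""
    results = []
    for combo in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        results.append((params, run_sim(params)))
    return results
```

The collected results (e.g., LFP power at 40 Hz per parameter set) are then what the LFPy-based analysis and the subsequent Optuna objective function would consume.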