Benchmarking modern SNN training algorithms on GeNN

Contact

Manvi Agarwal 
Jamie Knight 
Thomas Nowotny

Deliverables

GeNN is a GPU-enhanced Neuronal Network simulation package written in C++. It combines the convenience of code generation, in which the model and its parameters are specified through a model definition, with the flexibility of user-supplied code to run the simulation and record results. While several SNN libraries are available, combining GeNN with standard machine learning packages makes it possible to simulate SNNs and ANNs on the same hardware, providing well-founded comparisons of model performance. This project will primarily use Python (PyGeNN, TensorFlow, Jupyter for tutorials), along with C++ (GeNN).
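
The following is a minimal sketch of what such a model definition and simulation loop might look like in PyGeNN. It assumes the PyGeNN 4.x conventions (GeNNModel, add_neuron_population, the built-in "LIF" model and its parameter names, and the spike-recording interface), so the details should be checked against the installed GeNN version.

    from pygenn.genn_model import GeNNModel

    # Model definition: GeNN generates and compiles simulation code from this description
    model = GeNNModel("float", "lif_example")
    model.dT = 1.0  # simulation time step in ms

    # Parameters and initial state for GeNN's built-in leaky integrate-and-fire neuron
    lif_params = {"C": 1.0, "TauM": 20.0, "Vrest": -65.0, "Vreset": -70.0,
                  "Vthresh": -50.0, "Ioffset": 1.0, "TauRefrac": 5.0}
    lif_init = {"V": -65.0, "RefracTime": 0.0}

    pop = model.add_neuron_population("neurons", 100, "LIF", lif_params, lif_init)
    pop.spike_recording_enabled = True

    # User-side code: build the generated code, load it and step the simulation
    model.build()
    model.load(num_recording_timesteps=100)
    while model.t < 100.0:
        model.step_time()

    # Copy recorded spikes back from the GPU and read them out
    model.pull_recording_buffers_from_device()
    spike_times, spike_ids = pop.spike_recording_data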

About
In contrast to classic multilayer Artificial Neural Networks (ANNs), which compute with continuous-valued activations in feedforward or recurrent architectures, Spiking Neural Networks (SNNs) offer the potential to compute with sparse binary signals in a biologically realistic fashion. Implemented on neuromorphic hardware, SNNs can enable fast, low-power, parallel, event-driven information processing. Unlike ANNs, SNNs compute using spikes: at any moment a neuron either emits a spike or does not, rather than producing a continuous-valued activation. Because this spiking behaviour is non-differentiable, SNNs are harder to train with the gradient-based methods typically used for ANNs, such as the backpropagation of error. There are therefore two major paradigms for SNN training: the first comprises methods for converting a pre-trained ANN into an SNN with minimal loss of accuracy, and the second seeks to derive backpropagation-like learning rules for spike-based computation, for example by replacing the derivative of the spike with a smooth surrogate during the backward pass. Review articles such as Pfeiffer & Pfeil (2018) give excellent overviews of both research areas.
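
As an illustration of the second paradigm, the short sketch below contrasts the non-differentiable spike with a surrogate gradient. It is not taken from the project description: the fast-sigmoid surrogate and the helper names spike and surrogate_grad are assumptions chosen purely for illustration.

    import numpy as np

    def spike(v, v_thresh=1.0):
        # Forward pass: the non-differentiable Heaviside step (spike / no spike)
        return (v >= v_thresh).astype(float)

    def surrogate_grad(v, v_thresh=1.0, beta=10.0):
        # Backward pass: the step's true derivative is zero almost everywhere,
        # so it is replaced by the derivative of a fast sigmoid, which is
        # smooth and non-zero around the threshold
        return 1.0 / (beta * np.abs(v - v_thresh) + 1.0) ** 2

    v = np.array([0.2, 0.9, 1.1, 1.5])   # example membrane potentials
    print(spike(v))                       # forward spikes: [0. 0. 1. 1.]
    print(surrogate_grad(v))              # gradients usable in backprop-like updates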
