Neuromorphic Intermediate Representation

J. Pedersen, S. Abreu, M. Jobst, G. Lenz, V. Fra, F. C. Bauer, P. Zhou, D. R. Muir, B. Vogginger, K. Heckel, T. C. Stewart, S. Shankar, J. Eshraghian, S. Sheik

jeped@kth.se jegp@mastodon.social github.com/neuromorphs/nir

The Neuromorphic Intermediate Representation

Reproducible computation

Neuron equations are based on idealized continuous-time models

Leaky integrator:   $\tau \dot{v} = (v_{leak} - v) + R I$
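Each platform discretizes this continuous-time model in its own way, which is where mismatches can arise. A minimal forward-Euler sketch (parameter names and values are illustrative, not the NIR API):

```python
def li_step(v, i_in, dt=1e-3, tau=1e-2, v_leak=0.0, r=1.0):
    # One forward-Euler step of tau * dv/dt = (v_leak - v) + R * I
    return v + (dt / tau) * ((v_leak - v) + r * i_in)

v = 0.0
for _ in range(1000):
    v = li_step(v, i_in=1.0)
# With constant input, v converges to the fixed point v_leak + R * I = 1.0
```

A smaller `dt` tracks the idealized continuous model more closely; coarser steps or fixed-point arithmetic on hardware are exactly the discretization effects NIR makes visible.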

... or not

Similarity for spiking CNN activity
  • NIR reproduces the idealized model
  • Exposes discretization mismatches between platforms
  • Allows platform-specific optimization ($\to$ SynSense Speck)

Norse $\rightarrow$ NIR

import torch
import norse.torch as norse

model = norse.SequentialState( ... )

nir_model = norse.to_nir(model, torch.randn(1, 10))

NIR $\rightarrow$ SynSense Sinabs

import sinabs

sinabs_model = sinabs.from_nir(nir_model, batch_size=4)

NIR decouples hardware and algorithms
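The decoupling idea: a model is exported once into a shared graph, and any backend interprets that same graph. A toy pure-Python sketch of the concept (the dict format and backend functions are illustrative stand-ins, not the real NIR API):

```python
# A toy intermediate representation: computational graph as plain data.
graph = {
    "nodes": {"affine": {"weight": 2.0, "bias": 1.0}},
    "edges": [("input", "affine"), ("affine", "output")],
}

def run_backend_a(graph, x):
    # One hypothetical backend: floating-point execution.
    p = graph["nodes"]["affine"]
    return p["weight"] * x + p["bias"]

def run_backend_b(graph, x):
    # A second hypothetical backend: same graph, reduced precision.
    p = graph["nodes"]["affine"]
    return round(p["weight"] * x + p["bias"], 3)

# Both backends execute the same graph without knowing how it was built.
assert run_backend_a(graph, 3.0) == run_backend_b(graph, 3.0)
```

Because frameworks only target the shared representation, adding a new simulator or chip means writing one importer, not one converter per framework.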
