Abstract

Generic simulation code for spiking neuronal networks spends the major part of its time in the phase where spikes have arrived at a compute node and need to be delivered to their target neurons. These spikes were emitted over the last interval between communication steps by source neurons distributed across many compute nodes and are inherently irregular and unsorted with respect to their targets. To find those targets, the spikes need to be dispatched to a three-dimensional data structure, with decisions on target thread and synapse type made on the way. With growing network size, a compute node receives spikes from an increasing number of different source neurons until, in the limit, each synapse on the compute node has a unique source. Here we show analytically how this sparsity emerges over the practically relevant range of network sizes from a hundred thousand to a billion neurons. By profiling a production code, we investigate opportunities for algorithmic changes to avoid indirections and branching. Every thread hosts an equal share of the neurons on a compute node. In the original algorithm, all threads search through all spikes to pick out the relevant ones. With increasing network size, the fraction of hits remains invariant, but the absolute number of rejections grows. An alternative algorithm equally divides the spikes among the threads and immediately sorts them in parallel according to target thread and synapse type. After this, every thread completes delivery solely of the section of spikes for its own neurons. The new algorithm halves the number of instructions in spike delivery, which reduces simulation time by up to 40%. Thus, spike delivery is a fully parallelizable process with a single synchronization point and is thereby well suited for many-core systems. Our analysis indicates that further progress requires reducing the latency that instructions experience in accessing memory. The study provides the foundation for the exploration of latency-hiding methods such as software pipelining and software-induced prefetching.
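The contrast between the two delivery schemes described in the abstract can be made concrete with a short sketch. The following C++/OpenMP fragment is illustrative only: all names (Spike, deliver, NUM_SYNAPSE_TYPES, deliver_scan_all, deliver_sort_first) are assumptions made for this sketch and do not correspond to identifiers in the authors' production code. deliver_scan_all mimics the original algorithm, in which every thread inspects all spikes and rejects most of them; deliver_sort_first mimics the alternative, in which each thread bins an equal share of the spikes by (target thread, synapse type) and, after a single barrier (the one synchronization point mentioned in the abstract), delivers only its own sections.

#include <omp.h>
#include <cstddef>
#include <vector>

constexpr int NUM_SYNAPSE_TYPES = 2; // assumed fixed number of synapse types

struct Spike
{
  int source;        // global id of the emitting neuron
  int target_thread; // thread hosting the target neurons
  int syn_type;      // synapse type, selects the receiving data structure
};

// Placeholder for the actual delivery into the per-thread, per-synapse-type
// connection infrastructure (the three-dimensional data structure).
void deliver( const Spike& ) {}

// Original scheme: every thread scans all spikes; the fraction of hits is
// 1/num_threads, and the absolute number of rejections grows with network size.
void deliver_scan_all( const std::vector< Spike >& spikes )
{
  #pragma omp parallel
  {
    const int tid = omp_get_thread_num();
    for ( const Spike& s : spikes )
    {
      if ( s.target_thread == tid ) // hit; every other iteration is a rejection
      {
        deliver( s );
      }
    }
  }
}

// Alternative scheme: divide the spikes evenly, bin them in parallel by
// (target thread, synapse type), synchronize once, then deliver own sections.
void deliver_sort_first( const std::vector< Spike >& spikes, const int num_threads )
{
  // sections[ sorting thread ][ target thread ][ synapse type ]
  std::vector< std::vector< std::vector< std::vector< Spike > > > > sections(
    num_threads,
    std::vector< std::vector< std::vector< Spike > > >(
      num_threads, std::vector< std::vector< Spike > >( NUM_SYNAPSE_TYPES ) ) );

  #pragma omp parallel num_threads( num_threads )
  {
    const int tid = omp_get_thread_num();

    // Phase 1: each thread sorts an equal share of the unsorted spike buffer.
    const std::size_t begin = spikes.size() * tid / num_threads;
    const std::size_t end = spikes.size() * ( tid + 1 ) / num_threads;
    for ( std::size_t i = begin; i < end; ++i )
    {
      const Spike& s = spikes[ i ];
      sections[ tid ][ s.target_thread ][ s.syn_type ].push_back( s );
    }

    #pragma omp barrier // the single synchronization point

    // Phase 2: each thread delivers solely the spikes for its own neurons.
    for ( int sorter = 0; sorter < num_threads; ++sorter )
    {
      for ( int syn = 0; syn < NUM_SYNAPSE_TYPES; ++syn )
      {
        for ( const Spike& s : sections[ sorter ][ tid ][ syn ] )
        {
          deliver( s );
        }
      }
    }
  }
}

In this sketch, the scan-all variant executes one loop iteration per spike on every thread, whereas the sort-first variant touches each spike only twice, once while binning and once while delivering, at the cost of the intermediate sections buffer.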

Details

Title
Routing Brain Traffic Through the Von Neumann Bottleneck: Parallel Sorting and Refactoring
Author
Pronold, Jari; Jordan, Jakob; Wylie, Brian J N; Kitayama, Itaru; Diesmann, Markus; Kunkel, Susanne
Section
ORIGINAL RESEARCH article
Publication year
2022
Publication date
Mar 1, 2022
Publisher
Frontiers Research Foundation
e-ISSN
1662-5196
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2634663537
Copyright
© 2022. This work is licensed under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.