SpiNNcloud

SpiNNcloud is the only solution supporting real-time AI computing at large scale, which is essential to create cognitive cities and drive the third wave of AI. SpiNNcloud's computational power not only approaches that of the Human brain, but it also leverages research from the EU's Human Brain Project (HBP) to provide unique brain-like capabilities to every industry requiring real-time AI.

Biological Comparison

The Human brain is the most complex known structure in the universe. It not only excels at real-time computing, but it also achieves this with remarkable energy efficiency. For these reasons, future systems constrained by the limits of Moore's law will need to draw on brain inspiration if they are to maintain a competitive advantage. Our SpiNNaker2 chip, and the system built from it, named SpiNNcloud, is designed around two main principles. Its brain-like parallel topology and event-based operation, together with its native AI accelerators, provide a massively-parallel silicon brain, while its highly tailored energy-efficiency schemes, inspired by on-demand biological hyperactivation, ensure very low-power operation. This is key to providing powerful systems without going against the requirements of a sustainable world.

In terms of the biologically inspired neural networks that can be implemented in the SpiNNcloud system, its capacity approaches the Human level with a vast number of 10 billion neurons. In addition, the SpiNNcloud machine integrates as many transistors as there are synapses in the Human brain (on average 10¹⁴–10¹⁵). Our native AI accelerators can support 100 synaptic transmissions per second per synapse of a Human brain. These brain-like capabilities, together with highly tailored sparsity-aware algorithms and massively-parallel computation capabilities, make SpiNNcloud a unique system with unprecedented large-scale operation in real-time. SpiNNcloud is the only real-time system worldwide approaching the right order of magnitude of the Human brain.
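As a rough back-of-the-envelope check, the figures above imply the following overall synaptic event rate (a sketch using only the order-of-magnitude values quoted in this section, not measured system data):

```python
# Back-of-the-envelope estimate of brain-scale activity, using the
# order-of-magnitude figures quoted above (not measured benchmarks).
neurons = 10e9               # ~10 billion neurons supported by SpiNNcloud
synapses_low = 1e14          # lower bound on Human-brain synapse count
synapses_high = 1e15         # upper bound
rate_hz = 100                # synaptic transmissions per second per synapse

events_low = synapses_low * rate_hz    # total synaptic events per second
events_high = synapses_high * rate_hz

print(f"Synaptic events per second: {events_low:.0e} to {events_high:.0e}")
```

That is on the order of 10¹⁶ to 10¹⁷ synaptic events per second, which illustrates why sparsity and event-based operation are essential at this scale.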

Unique Joint Approach

Our system is not a traditional Neuromorphic computer. We offer a unique joint approach between Neuromorphics, Deep Learning, and Symbolic AI for a real-time, low-latency, and energy-efficient cognitive AI platform. Native support for these three components is crucial to drive the third wave of AI. The brain-like properties ensure efficiency through event-based sparse processing in real-time, the Deep Learning capabilities provide localized statistical networks to fulfill specialized tasks, and the Symbolic layers grant a holistic integration of all the individual capabilities to enable general intelligence frameworks. This joint approach is unprecedented, and its full scope of capabilities is not currently offered on the open market of AI hardware. Our system is the only real-time AI cloud, powering instantaneous robotics control, sensing, prediction, and insights, enabling the most intelligent and capable robots and the most effective cognitive services.


Our Topology

As presented in the SpiNNaker2 chip-level description, our system operates with a light-weight network-on-chip and on-demand processing triggered by incoming or outgoing data streams. The system has a massively-parallel topology allowing concurrent processing of multiple actuator and sensor streams in real-time, with a response time below 1 ms.

Our main value proposition is summarized as follows:

  1. In contrast to traditional High Performance Computing (HPC) systems, the power consumption of our system is proportional to the information being streamed across and processed during operation.
  2. Our system is holistically fast and efficient. Our individual chips have state-of-the-art low-power consumption, and the overall system maintains top efficiency as well.
  3. Taking inspiration from the biological Interbout Arousal (IBA) mechanism, our system contains patented fine-grained power management to reduce energy consumption and unleash maximum performance only on demand.
  4. Contrary to the traditional communication protocols used in HPC systems, our system works with a light-weight scalable communication architecture between its processing elements to support sparsity, event-based processing, and asynchronous execution.
  5. Our system has native support to perform statistical AI (e.g., Deep Learning), biologically inspired neural networks (e.g., Spiking Neural Networks), and symbolic layers to interconnect holistically localized networks, which are all combined in a unique approach to enable the third-wave of AI.
  6. Like the Human brain, our system is inherently parallel and performs real-time computation at an unprecedented large scale.
  7. In contrast to traditional HPC systems, our system can scale up beyond 10 million cores, while maintaining its real-time capabilities. This is possible due to its light-weight network topology and to its independent asynchronous atomic units.
  8. Our system reaches an AI throughput of up to 250 Peta operations per second using an 8-bit format to satisfy the robustness and safety required in critical AI applications (e.g., Automotive).
  9. Our system is the largest brain-like system and can implement up to 10 billion neurons, approaching the number of neurons in the Human brain.
  10. Our system comes equipped with the most advanced 3D volumetric hologram display to provide an immersive experience into the internal activity taking place in our silicon brain.
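The power-proportionality idea in point 1 can be illustrated with a toy model (a hypothetical sketch with made-up constants, not the actual power-management firmware): cores draw a small idle baseline, and pay an additional energy cost only for the events they actually process.

```python
# Toy model of event-driven, power-proportional operation.
# IDLE_POWER_W and ACTIVE_ENERGY_J are illustrative placeholders,
# not measured SpiNNcloud figures.
IDLE_POWER_W = 0.001     # assumed idle power per core (watts)
ACTIVE_ENERGY_J = 1e-6   # assumed energy per processed event (joules)

def energy(events_per_core, seconds, cores):
    """Total energy: a fixed idle baseline plus a term proportional
    to the information actually streamed and processed."""
    idle = IDLE_POWER_W * seconds * cores
    active = ACTIVE_ENERGY_J * events_per_core * cores
    return idle + active

# With no traffic, only the idle baseline remains.
quiet = energy(events_per_core=0, seconds=1.0, cores=1000)
# More traffic raises only the activity term, never the baseline.
busy = energy(events_per_core=10_000, seconds=1.0, cores=1000)
print(quiet, busy)
```

The key property of the model is that the quiet system's consumption is dominated by the (small) idle term, so total energy tracks the workload rather than the core count alone.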

Please contact us for more information.

SpiNNaker2

The SpiNNaker2 is a massively-parallel system incorporating 152 ARM processors with powerful native accelerators to support the efficient implementation of sparsity-aware artificial neural networks, symbolic AI, and spiking neural networks. SpiNNaker2 is the atomic hardware building block of our large-scale system, the SpiNNcloud.

Architecture

SpiNNaker2's biological inspiration becomes apparent in the chip's architecture. Distributed processing elements work asynchronously, enabling massively-parallel, event-based, and sparse computation with on-demand power consumption. The foundation of this asynchronous operation is SpiNNaker2's patented light-weight communication system.

Features

  • The chip is designed to drastically reduce power consumption during idle times compared to peak performance. Each processing element runs autonomously, allowing power control with Dynamic Voltage and Frequency Scaling (DVFS) per element, which ensures fine-grained, on-demand power consumption.
  • A light-weight Network-on-Chip distributes information across the chip, supporting simple input and output streaming of data with a minimum latency.
  • The central SpiNNaker2 router allows selective broadcasting of information from a sending PE to multiple receiving PEs, drastically reducing network traffic.
  • Four universally programmable processing elements form a Quad-PE with low-latency connections, enabling fast and efficient local data exchange.
  • Dedicated AI accelerators for each PE boost the chip’s performance on cutting-edge AI applications.
  • Brain-like accelerators enhance computational performance for biologically inspired networks. Hence our system provides native support for hybrids of traditional AI and spiking networks.
  • A variety of standard interfaces such as Ethernet ensure a simple integration of our chip into existing systems. This flexibility ensures that every AI application with real-time requirements can leverage the power of SpiNNaker2.
  • Large external memories can be addressed by multiple high-bandwidth interfaces for memory intensive computations.
  • Applied sparsity leverages the potential of the low idle power consumption by drastically reducing the number of computations. Regardless of model size, applied sparsity also reduces network communication by up to 90%, leading to further power savings.
  • The fast chip-to-chip links used to interconnect the SpiNNaker2 chips ensure real-time response when scaling up. This novel combination of network topology and high-speed periphery enables the assembly of large-scale SpiNNcloud systems.
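The sparsity point above can be illustrated with a minimal sketch (the numbers and the toy vector are hypothetical; the mechanism, skipping zero activations so that neither computation nor network traffic is spent on them, is the one described in the feature list):

```python
import random

random.seed(0)

# A toy activation vector in which roughly 90% of entries are zero,
# mimicking event-based sparsity (illustrative values only).
activations = [random.random() if random.random() < 0.1 else 0.0
               for _ in range(10_000)]
weights = [0.5] * len(activations)

# Dense execution touches every element ...
dense_ops = len(activations)

# ... while event-based execution only processes nonzero activations,
# and only those values need to travel over the network.
events = [(i, a) for i, a in enumerate(activations) if a != 0.0]
sparse_ops = len(events)
partial_sum = sum(a * weights[i] for i, a in events)

print(f"operations: {sparse_ops} of {dense_ops} "
      f"({100 * (1 - sparse_ops / dense_ops):.0f}% skipped)")
```

Because zeros produce no events, both the compute count and the communication volume shrink with the sparsity of the data, which is what makes the low idle power of the chip pay off in practice.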

The combination of all these features plays a crucial role in achieving our holistically fast large-scale system.


“Anyone can build a fast CPU. The trick is to build a fast system.”

Seymour Cray