SiliconBurmuin
Technology

In collaboration with European partners and projects, and drawing inspiration from diverse biological vision mechanisms, such as selective attention in the foveated vision of vertebrates, the multispectral vision of crustaceans, and insect compound eyes, SiliconBurmuin neuromorphic technology will deliver ultra-efficient, real-time, and robust object detection and 3D perception, essential for advanced and safe autonomous navigation and for supporting physical AI.

SiliconBurmuin offers a smooth, incremental path to adopting neuromorphic technology, enabling existing AI workflows (e.g., TensorFlow, PyTorch) and deployments to seamlessly integrate neuromorphic chips (commercial and SiliconBurmuin’s own) with minimal adjustments.
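
To make the adoption path concrete, below is a minimal, hedged sketch of one common route onto neuromorphic hardware, ANN-to-SNN conversion: a model trained with the standard PyTorch workflow is re-executed with rate-coded integrate-and-fire neurons. All thresholds, shapes, and names are illustrative; no vendor toolchain is assumed.

```python
# Hedged sketch (illustrative constants, not a vendor API): weights trained
# with the usual PyTorch workflow are reused by a rate-coded
# integrate-and-fire executor, one common route to SNN hardware.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
# ... train `model` with the standard PyTorch training loop ...

@torch.no_grad()
def spiking_inference(model, x, steps=100):
    w1, b1 = model[0].weight, model[0].bias
    w2, b2 = model[2].weight, model[2].bias
    v1 = torch.zeros(x.shape[0], w1.shape[0])  # hidden membrane potentials
    v2 = torch.zeros(x.shape[0], w2.shape[0])  # output membrane potentials
    counts = torch.zeros_like(v2)              # accumulated output spikes
    for _ in range(steps):
        v1 += x @ w1.T + b1                    # inject constant input current
        s1 = (v1 >= 1.0).float()               # spike where threshold crossed
        v1 -= s1                               # reset by subtraction
        v2 += s1 @ w2.T + b2
        s2 = (v2 >= 1.0).float()
        v2 -= s2
        counts += s2
    return counts / steps                      # spike rates ~ ANN activations

rates = spiking_inference(model, torch.rand(4, 64))
```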

SiliconBurmuin relies on IKERLAN’s BEGI Evaluation Kit (EVK) to prototype and validate its technology in relevant industry use cases before moving to an optimised ASIC implementation. Likewise, BEGI-EVK also enables evaluation and adoption of SiliconBurmuin IP in final user applications.

BEGI-EVK includes an FPGA to host commercial and pre-commercial SoC IP and integrates commercial and pre-commercial chips developed by NimbleAI partners to run AI and SNN models. By combining cutting-edge innovation with proven commercial technology and comprehensive software support, BEGI-EVK accelerates the maturation of SiliconBurmuin IP and fosters trust to drive adoption.

Although BEGI-EVK is suitable for use in real-world deployments, full efficiency and performance will only be achieved when the SiliconBurmuin IP is integrated into a miniaturised chip, alongside NimbleAI and commercial SoC IP. Silicon-level efficiency will enable advanced perception in a tiny footprint, making the chip ideal for mobile and ultralight robotics like drones, where minimal energy use and rapid response are critical.

[Diagram legend: SiliconBurmuin Tech. / NimbleAI Tech. / Commercial Tech.]

Commercial Chips

Improve the adoption readiness of SiliconBurmuin technology and enable the execution of industry-standard AI models and algorithms.

Designed and commercialised by HAILO. HAILO-8 AI processor implements a highly energy-efficient dataflow architecture with tightly integrated on-chip memory, delivering up to 26 TOPS at just 2.5W typical power consumption. Available in compact M.2 modules, it features a comprehensive dataflow compiler for seamless and rapid porting of neural network models from TensorFlow, TensorFlow Lite, Keras, and PyTorch.
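
As an illustration of that porting path, the snippet below exports a stock PyTorch model to ONNX, a format that dataflow compilers commonly ingest; the HAILO-specific compilation step that would follow is vendor tooling and is only noted in a comment.

```python
# First step of a typical porting flow: export a trained PyTorch model to
# ONNX. The subsequent compilation to a HAILO-8 binary is done with the
# vendor's dataflow compiler and is omitted here.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)            # example input shape
torch.onnx.export(model, dummy, "mobilenet_v2.onnx", opset_version=13)
```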

Co-designed and commercialised by Sony and Prophesee. It offers the highest event-based vision resolution at 1 Mpixel and delivers a dynamic range of over 120 dB and microsecond time resolution (equivalent to over 10,000 images per second). It operates within a temperature range of -40°C to +85°C and consumes just 10 mW of power.

Designed and commercialised by Nvidia. It delivers up to 275 TOPS with a powerful software stack that features pre-trained AI models and reference AI workflows to accelerate end-to-end development of advanced AI and robotics applications.
BEGI-EVK integrates Nvidia GPUs to facilitate the adoption of SiliconBurmuin’s neuromorphic technology, allowing users to easily deploy their vision AI models with minimal constraints.

Pre-Commercial Chips

Enable early testing of disruptive functionalities, opening competitive advantages to adopters.

Designed by CSIC-IMSE. It enables dynamic resolution allocation based on the information gain provided by each sensor region. Low-resolution (LR) coverage of the full field of view enables the detection of potentially relevant regions, which can then be sensed at high resolution (HR) for maximum accuracy.
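
A minimal sketch of this allocation policy is shown below, assuming event density as the information-gain proxy; the tile size, field of view, and single-window policy are illustrative simplifications.

```python
# Illustrative sketch: a low-resolution event-count map covers the full
# field of view, and the most active tile is promoted to HR readout.
import numpy as np

def pick_hr_region(events_xy, fov=(640, 480), tile=64):
    """events_xy: (N, 2) array of (x, y) event coordinates from the LR scan."""
    gx = (fov[0] + tile - 1) // tile           # grid size (ceil division)
    gy = (fov[1] + tile - 1) // tile
    counts = np.zeros((gy, gx))
    for x, y in events_xy:
        counts[int(y) // tile, int(x) // tile] += 1   # event-density proxy
    ty, tx = np.unravel_index(np.argmax(counts), counts.shape)
    return tx * tile, ty * tile, tile, tile    # HR window (x, y, w, h)

xs = np.random.randint(0, 640, 1000)           # synthetic LR events
ys = np.random.randint(0, 480, 1000)
print(pick_hr_region(np.stack([xs, ys], axis=1)))
```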

It is based on the commercial Sony-Prophesee IMX636 and light-field microlenses by Raytrix, supported by Raytrix’s SDK, to enable 3D perception-aware application development and deployment on GPUs and IKERLAN’s BEGI specialised processor.

It combines the efficiency and high dynamic range of DVS with the directional light capture of light-field micro-lenses coupled to the sensor to achieve real-time, monocular, passive 3D perception with sub-ms latency. LF-DVS provides significant advantages in response time and energy efficiency compared to mainstream 3D sensing solutions like RGBD and LiDAR, particularly within operational ranges of up to 10 meters and when powered by IKERLAN’s BEGI specialised processor.

Designed by Imec. It features 8 RISC-V-based processing cores for running both event-based and general-purpose workloads, delivering 5 Me/s each, alongside 8 specialised cores with a dedicated microarchitecture optimised for event-based workloads, such as SNNs, achieving up to 500 Me/s each.

Commercial SoC IP

Prototyped on the FPGA along with low TRL IP implementing innovative functionalities to advance readiness and accelerate the implementation of an optimised SiliconBurmuin ASIC.

Designed and commercialised by Codasip. These energy-efficient CPUs are based on the RISC-V ISA. Enabling HW/SW co-optimisation, the Codasip Studio design automation tools offer straightforward customisation of the base core and generate a full HDK and SDK to streamline application development.

Designed and commercialised by Menta. It is built for integration into ASICs or SoCs, offering post-tapeout flexibility, reconfigurability, and power efficiency across various applications. In NimbleAI, Menta’s eFPGA IP enables post-silicon reconfigurable custom instructions for complex computer vision tasks, enhancing hardware acceleration to meet evolving AI demands.

Pre-Commercial SoC IP

Implements disruptive functionalities that can be prototyped on FPGA for real-world validation prior to optimised ASIC implementation.

Designed by IKERLAN. BEGI is a digital vision processor optimised for integration into SoCs and/or sensors to bridge AI processing IP and DVS. BEGI allows users to combine the energy efficiency, high dynamic range, and ultra-low latency of DVS with the productivity and advanced capabilities of AI in vision applications. BEGI pre-processes DVS events to provide ultra-low-latency filtered event frames, time-surfaces, optical flow maps, and, when paired with LF-DVS, depth maps.
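
As a rough illustration of one of these data structures, the sketch below builds an exponentially decaying time-surface from a DVS event stream; the decay constant and resolution are illustrative, and this is not BEGI's actual implementation.

```python
# Illustrative time-surface: each pixel holds an exponentially decayed
# trace of its most recent event, so fresh activity stands out near 1.0.
import numpy as np

def time_surface(events, shape=(480, 640), tau=50e-3):
    """events: time-sorted iterable of (t, x, y, polarity), t in seconds."""
    last_t = np.full(shape, -np.inf)       # timestamp of last event per pixel
    t_now = 0.0
    for t, x, y, _polarity in events:      # polarity ignored in this sketch
        last_t[y, x] = t
        t_now = t
    return np.exp((last_t - t_now) / tau)  # decayed trace in [0, 1]

events = [(0.001, 10, 20, 1), (0.004, 11, 20, 0), (0.010, 12, 21, 1)]
surface = time_surface(events)
```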

If hyperspectral DVS proves to be technologically feasible and prototype chips are available, BEGI will incorporate event-driven engines for image segmentation and object detection using spectral event inputs.

Designed by BCAM in collaboration with EU partners. NA is a novel mathematical-computational modelling framework for mapping symbolic computations onto biologically plausible neural networks. The resulting networks are able to execute arbitrary partially computable functions or algorithms, so the framework facilitates the design of parsimonious, biologically inspired neuromorphic circuits that execute desired algorithms and even perform universal computation. A proof-of-concept RTL implementation of an NA engine for image processing is envisaged.

Designed by CEA-List. It is a vector co-processor based on a near-memory computing architecture that integrates SRAM tightly coupled with a vector processing unit, which decodes specific instructions and performs operations on any vector line of the SRAM. Hence, the C-SRAM can be used both as a programmable vector co-processor and as a low-latency SRAM, reducing energy consumption by minimising data movement.
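
A behavioural sketch of the idea, not CEA-List's actual interface, is shown below: memory is addressed as vector lines, and arithmetic is applied to whole lines so operands never travel across a narrow memory bus.

```python
# Behavioural model of a near-memory vector unit (illustrative, not the
# C-SRAM API): vector instructions operate directly on SRAM lines.
import numpy as np

class NearMemorySRAM:
    def __init__(self, lines=1024, line_width=128, dtype=np.int16):
        self.mem = np.zeros((lines, line_width), dtype=dtype)

    def store(self, line, data):
        self.mem[line] = data                        # plain SRAM access

    def vadd(self, dst, a, b):
        self.mem[dst] = self.mem[a] + self.mem[b]    # in-memory vector add

mem = NearMemorySRAM()
mem.store(0, np.arange(128))
mem.store(1, np.ones(128, dtype=np.int16))
mem.vadd(2, 0, 1)                                    # line 2 = line 0 + line 1
```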

Designed by UPV-EHU in collaboration with IKERLAN and other Basque partners. SoC4CRIS is a RISC-V-based scalable and configurable SoC architecture that integrates AMBA-based on-chip communication IP and interfaces for off-chip memory and industrial sensors and transceivers, including Time-Sensitive Networking (TSN).

Algorithms & Applications

Integrate application software with the previously listed chips and IP to complete industry-ready systems and neurocomputing models for neuroscience research and clinical diagnosis.

Designed by VICOMTECH. Hybrid navigation algorithms that integrate frame-based CNN models with deep learning models designed for event data, leveraging the rich temporal information from DVS and the detailed spatial information from frame-based sensors.
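
A hedged late-fusion sketch of this idea follows; the architecture, channel counts, and output semantics are hypothetical, not VICOMTECH's actual models.

```python
# Illustrative late fusion: a frame branch captures spatial detail, an
# event branch captures temporal detail, and a shared head predicts control.
import torch
import torch.nn as nn

class HybridNav(nn.Module):
    def __init__(self):
        super().__init__()
        self.frame_branch = nn.Sequential(            # RGB frames: 3 channels
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.event_branch = nn.Sequential(            # DVS: ON/OFF polarities
            nn.Conv2d(2, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 2)                  # e.g. steering, throttle

    def forward(self, rgb, event_frame):
        z = torch.cat([self.frame_branch(rgb),
                       self.event_branch(event_frame)], dim=1)
        return self.head(z)

nav = HybridNav()
cmd = nav(torch.randn(1, 3, 128, 128), torch.randn(1, 2, 128, 128))
```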

Designed by TECNALIA. Techniques for analysing and characterising the latent activity of SNNs, combined with algorithmic methods to extract insights, such as explainable approaches for detecting out-of-distribution stimuli at the network's inputs. These techniques support the implementation of selective visual attention to drive the foveated DVS chip.
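
One plausible realisation, illustrative rather than TECNALIA's actual method, is sketched below: latent spike rates are characterised per class, and a stimulus far from every class centroid is flagged as out-of-distribution.

```python
# Illustrative OOD detector over SNN latent activity: distance of a
# hidden-layer spike-rate vector to the nearest class centroid.
import numpy as np

def fit_centroids(latent_rates, labels):
    """latent_rates: (N, D) hidden-layer spike rates; labels: (N,) classes."""
    return {c: latent_rates[labels == c].mean(axis=0)
            for c in np.unique(labels)}

def ood_score(rates, centroids):
    return min(np.linalg.norm(rates - mu) for mu in centroids.values())

train_rates = np.random.rand(200, 32)              # stand-in latent activity
train_labels = np.random.randint(0, 5, 200)
centroids = fit_centroids(train_rates, train_labels)
score = ood_score(np.random.rand(32), centroids)   # high => likely OOD
```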

Designed by BioBizkaia. Computerised modular toolbox combining spatial and dynamic vision assessments with autonomic response measurements to identify visual and physiological markers of major neurodegenerative diseases, such as Alzheimer’s and Parkinson’s.

UPV-EHU is investigating the technological feasibility and potential of hyperspectral DVS to enhance vision pipelines, targeting Imec’s Fabry-Perot on-chip filters applied to DVS pixels. The research aims to create technology-aware hyperspectral DVS datasets by processing real-world data collected onboard trains and other vehicles with a multisensor setup that combines a commercial Sony-Prophesee IMX636 DVS with compact NIR and SWIR hyperspectral snapshot cameras. Proof-of-concept event-based vision pipelines aimed at object detection and tracking are also being developed to process these datasets effectively.

Designed by the University of Manchester. A two-layer SNN model with Leaky Integrate-and-Fire (LIF) neurons processes the low-resolution visual input from the EF-DVS to identify proto-objects (i.e., features resembling objects or their parts) using spiking convolution with Von Mises (VM) filters. The first layer detects filter activations, while the second layer combines neighbouring kernels with opposite convexity to create a saliency map. A winner-take-all mechanism in the second layer selects the most salient proto-object as the region of interest for high-resolution processing in the EF-DVS.
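
A compact sketch of this pipeline follows; the kernel construction, LIF constants, and single time step are illustrative simplifications of the Manchester model.

```python
# Illustrative proto-object saliency: Von Mises kernels drive LIF neurons,
# opposite-convexity responses are paired into a saliency map, and a
# winner-take-all step selects the ROI for high-resolution processing.
import numpy as np
from scipy.signal import convolve2d

def von_mises_kernel(size=9, kappa=2.0, theta=0.0):
    ys, xs = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    ang = np.arctan2(ys, xs)
    return np.exp(kappa * np.cos(ang - theta)) / (2 * np.pi * np.i0(kappa))

def lif_step(current, v, thresh=1.0, leak=0.9):
    v = leak * v + current                    # leaky integration
    spikes = (v >= thresh).astype(float)      # fire at threshold
    return spikes, v * (1 - spikes)           # reset fired neurons

frame = np.random.rand(64, 64)                # stand-in LR event frame
s_a, _ = lif_step(convolve2d(frame, von_mises_kernel(theta=0.0), mode="same"),
                  np.zeros_like(frame))
s_b, _ = lif_step(convolve2d(frame, von_mises_kernel(theta=np.pi), mode="same"),
                  np.zeros_like(frame))
saliency = s_a + s_b                          # layer 2: pair opposite kernels
y, x = np.unravel_index(np.argmax(saliency), saliency.shape)  # winner-take-all
print("ROI centre for HR readout:", (x, y))
```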

BEGI-EVK: Enabling Adoption of SiliconBurmuin Technology by Industry

BEGI-EVK is equipped with standard interfaces, including MIPI, CAN, USB, Ethernet, and PCIe, and offers software support for Linux and ROS. It provides seamless plug-and-play compatibility with Nvidia GPUs and commercial M.2 AI acceleration modules such as HAILO, delivering GPU-level performance for running energy-efficient user AI models. The AI models deployed on BEGI-EVK are similar to those used with mainstream frame-based sensors and RGBD cameras, and are automatically boosted by processing the data structures that BEGI generates from DVS neuromorphic inputs.

In addition to CAF, early users of BEGI-EVK include ViewPointSystem and TU Vienna’s F1TENTH racing team.

SDK for Light-Field DVS: Passive, Monocular, Energy-Efficient and Low-Latency 3D Perception

Raytrix’s SDK enables the development of 3D perception applications on GPUs using light-field DVS technology, specifically targeting Industry 4.0 and machine-tool sectors. IKERLAN’s BEGI processor enables the transition from GPU to an FPGA-based implementation optimised for efficient, high-speed light-field DVS processing. This serves as a first step toward a fully optimised ASIC implementation, aimed at applications with strict timing constraints and limited energy budgets, such as lightweight mobile robotics.

High-speed 3D perception powered by light-field DVS and BEGI can be tested in targeted use cases using synthetic datasets that simulate realistic visual scenes under various controlled environmental conditions, such as lighting and motion. These datasets, created with Blender models of light-field DVS and accompanied by ground truth, can also be used to train user AI models that process data structures generated by BEGI. Additionally, the Blender models of light-field DVS can be seamlessly integrated into the Gazebo simulation environment to develop autonomous robots with advanced neuromorphic 3D perception capabilities.
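
For intuition, the sketch below shows the core of how a simulated DVS typically derives events from rendered frames; the contrast threshold and per-pixel reset behaviour are illustrative and simpler than the project's Blender models.

```python
# Illustrative DVS simulation step: a pixel emits an ON/OFF event when its
# log-intensity drifts past a contrast threshold since its last event.
import numpy as np

def dvs_events(ref_log, frame, threshold=0.2, eps=1e-6):
    log_i = np.log(frame + eps)
    diff = log_i - ref_log
    on = diff >= threshold                    # brightness-increase events
    off = diff <= -threshold                  # brightness-decrease events
    ref_log[on | off] = log_i[on | off]       # reset reference where fired
    return on, off, ref_log

frames = [np.random.rand(120, 160) for _ in range(3)]  # rendered-frame stand-ins
ref = np.log(frames[0] + 1e-6)
for f in frames[1:]:
    on, off, ref = dvs_events(ref, f)
```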