Biological Visual Perception and the Power of Neuroscience
Over millions of years, eyes and brains have co-evolved into highly specialised visual systems that follow the principle of efficient predictive coding: maximising the information encoded from external stimuli while minimising resource use, energy consumption, and processing time. This is exemplified by mechanisms such as selective attention, foveation, and saccadic scanning.
The spiking activity of the visual cortex operates within a high-dimensional spatiotemporal space, dynamically reshaping its geometry through intricate multiscale feedback in its neural tissue. This enables precise and highly efficient encoding of complex features in the natural world, supporting higher-level cognitive perception.
Fuelled by just a drop of nectar and containing a minimal number of neurons, tiny insect brains best exemplify the power of ultra-efficient biological perception, enabling high-speed flight through a world of immense complexity while swiftly avoiding obstacles and predators. Their miniature visual systems pair low-spatial-resolution compound eyes with rapid scanning movements, allowing ultra-fast 3D perception. Likewise, some crustaceans possess up to 16 photoreceptor types sensitive to different light spectra, allowing them to recognise objects with far simpler neural processing than shape-based vision requires.
A multidisciplinary approach, combining state-of-the-art acquisition technologies (e.g., electrode arrays, EEG, fMRI, MEG), breakthroughs in data science (e.g., topological and geometrical data analysis), and computational techniques (e.g., retinotopic mapping), has provided detailed insights into the brain's activity during visual processing. This progress has enabled neuroscientists to map the networks connecting the retina to the visual cortex, uncovering key pathways and mechanisms involved in selective attention, contrast, orientation, motion detection, and depth perception, in both humans and insects.
Data-centric mainstream AI chips
State-of-the-practice AI is built on sensor and processor chips based on decades-old principles (e.g., frame-based imagers and the von Neumann architecture), which lack the evolutionary refinement found in nature and instead adopt a brute-force approach. They run AI models on all sensor data, such as every pixel of every input frame, even in static or irrelevant regions, resulting in a massive volume of computationally intensive matrix operations that yields only limited informational gain.
Research from Epoch found that computational demands double approximately every nine months as more advanced AI models with ever more parameters are released. Brute-force processors, including GPUs, will eventually hit their scalability limits and are, in fact, already struggling in applications that require energy efficiency and low latency.
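To put that growth rate in perspective, here is a back-of-the-envelope sketch in Python. It assumes a clean nine-month doubling period, which is an idealisation of the cited trend rather than Epoch's own model:

```python
# Back-of-the-envelope sketch of the scaling trend cited above,
# assuming an idealised nine-month doubling period.

def compute_multiplier(months: float, doubling_period_months: float = 9.0) -> float:
    """Growth factor in compute demand after a given number of months."""
    return 2.0 ** (months / doubling_period_months)

for years in (1, 2, 3, 5):
    factor = compute_multiplier(12 * years)
    print(f"after {years} year(s): ~{factor:.0f}x the compute")
# after 1 year(s): ~3x ... after 5 year(s): ~102x
```

At this pace, compute demand grows by roughly two orders of magnitude in five years, which is why scalability limits loom for brute-force hardware.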
Information-centric neuromorphic AI chips
A great optimisation opportunity for boosting energy efficiency and reducing latency in AI chips lies in acquiring and processing only the data that truly matters, just as our eyes and brains do! This is the foundation of neuromorphic chips:
👁️ Retina-inspired neuromorphic vision sensors, such as Dynamic Vision Sensors (DVS), capture only changes in light and deliver a sparse, information-rich stream of visual events that encode scene dynamics (see the videos below and the simulation sketch after this list). In autonomous navigation, a DVS captures only the moving edges, typically less than 10% of frame pixels, greatly reducing data processing and latency while achieving a time precision equivalent to 10,000 frames per second. Much like a retina, each DVS pixel automatically adjusts to its local light level after issuing a visual event, enabling a high dynamic range for reliable performance even in varying lighting and poor weather conditions.
🧠 Brain-inspired neuromorphic processors implement event-driven dataflow architectures optimised for running event-based networks such as Spiking Neural Networks (SNNs). They consume energy only when processing triggering events or inputs, like the visual events delivered by a DVS, and leverage bio-inspired spatiotemporal dynamics to process information more efficiently, integrating events in real time as they occur (see the neuron sketch after this list). Recent research confirms that widely used neural networks, such as CNNs developed with popular AI frameworks, can be converted into event-based networks with comparable accuracy but greater efficiency and performance.
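To make the DVS principle concrete, here is a minimal Python sketch that emulates event generation from two ordinary frames. It is an illustration only, not any vendor's actual pixel model; the contrast threshold, frame sizes, and timestamp are invented for the example:

```python
import numpy as np

# Minimal sketch of DVS-style encoding: instead of full frames, each pixel
# emits an event (x, y, t, polarity) whenever its log-intensity changes by
# more than a contrast threshold. Threshold and frames are illustrative.

CONTRAST_THRESHOLD = 0.15  # assumed per-pixel log-intensity step

def frames_to_events(prev_frame, next_frame, t):
    """Return sparse events where log-brightness changed enough."""
    log_prev = np.log(prev_frame.astype(np.float64) + 1e-6)
    log_next = np.log(next_frame.astype(np.float64) + 1e-6)
    delta = log_next - log_prev
    ys, xs = np.nonzero(np.abs(delta) >= CONTRAST_THRESHOLD)
    polarity = np.sign(delta[ys, xs]).astype(np.int8)  # +1 brighter, -1 darker
    return [(int(x), int(y), t, int(p)) for x, y, p in zip(xs, ys, polarity)]

# A static scene with one moving bright edge: only edge pixels fire events.
prev = np.zeros((480, 640))
prev[:, 100] = 1.0
nxt = np.zeros((480, 640))
nxt[:, 101] = 1.0
events = frames_to_events(prev, nxt, t=0.0001)  # 0.1 ms timestamp
print(f"{len(events)} events vs {480 * 640} pixels per frame")
```

Running it on a scene where a single bright edge shifts by one pixel yields 960 events, against the 307,200 pixels a frame-based imager would deliver for the same instant.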
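The event-driven processing side can be sketched the same way. Below is a single leaky integrate-and-fire (LIF) neuron, a basic building block of many SNNs: it performs work only when an event arrives and lets its membrane potential decay analytically in between. The time constant, weight, and threshold are illustrative assumptions, not parameters of any particular chip:

```python
import math

# Minimal sketch of the event-driven idea behind SNN processors: a leaky
# integrate-and-fire (LIF) neuron computes only when an input event
# arrives; between events its membrane potential simply decays.

TAU = 0.010       # membrane time constant (10 ms), assumed
THRESHOLD = 1.0   # firing threshold, assumed
WEIGHT = 0.4      # synaptic weight per input event, assumed

def lif_respond(event_times):
    """Process a sorted list of event timestamps; return spike times."""
    v, last_t, spikes = 0.0, 0.0, []
    for t in event_times:                        # work done ONLY per event
        v *= math.exp(-(t - last_t) / TAU)       # analytic decay since last event
        v += WEIGHT                              # integrate the incoming event
        if v >= THRESHOLD:                       # fire and reset
            spikes.append(t)
            v = 0.0
        last_t = t
    return spikes

print(lif_respond([0.001, 0.002, 0.003]))   # dense burst -> [0.003] (spike)
print(lif_respond([0.001, 0.050, 0.100]))   # sparse events -> [] (no spike)
```

A dense burst of events drives the neuron over threshold, while the same number of events spread over time does not; this is how such networks exploit temporal structure while spending no energy between events.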
By mimicking key biological sensing and perception mechanisms, neuromorphic chips provide important advantages for a variety of vision-related applications. In particular, their energy efficiency and ultra-low latency enable reflex-like AI functions that connect sensor inputs directly to actions (a minimal sketch of such a reflex follows the list below), an essential capability for systems that demand real-time physical AI, such as mobile robots and drones:
High Dynamic Range (HDR)
High-speed (blurless) vision
Ultra-low response time
Ultra-low energy
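As a rough illustration of a sensor-to-action reflex, the sketch below steers away from whichever half of the sensor emits the most events, with no frame buffering or full-image inference in between. The event format matches the DVS sketch above; the sensor width, event budget, and steering rule are invented for the example:

```python
# Hypothetical reflex-like pipeline: DVS events feed a steering command
# directly. Sensor width, event budget, and rule are illustrative only.

SENSOR_WIDTH = 640
EVENT_BUDGET = 200  # events per decision window (assumed)

def reflex_steer(events):
    """Steer away from the side with more motion (more events)."""
    left = sum(1 for x, y, t, p in events if x < SENSOR_WIDTH // 2)
    right = len(events) - left
    if max(left, right) < EVENT_BUDGET:
        return "straight"          # too little motion to react to
    return "steer_right" if left > right else "steer_left"

# Obstacle edges appearing on the left trigger an immediate right turn.
burst = [(50, y, 0.0001, 1) for y in range(300)]
print(reflex_steer(burst))  # -> steer_right
```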
“Neuromorphic computing will have a substantial impact on existing products and markets, taking three to six years to cross over from early-adopter status to early majority adoption.”
Gartner
“We expect up to 57% penetration of neuromorphic chips in most major applications by 2034.”
Yole Intelligence
Tech Startups and Industry Giants Propel the Neuromorphic AI Revolution
A new wave of deep-tech startups has introduced neuromorphic chips that replicate a range of biological perception mechanisms using cutting-edge semiconductor technology. Originating from multidisciplinary research communities, such as the one SiliconBurmuin aims to create in the Basque Country, these startups join industry leaders like Intel, IBM, Samsung, and Sony in a rapidly expanding neuromorphic ecosystem.
Neuromorphic chips are entering industry and consumer markets, capitalising on booming AI-driven business opportunities and investment trends. Samsung is advancing prototype consumer products with its proprietary neuromorphic technology, while Prophesee, one of Europe's best-funded silicon startups, is commercialising DVS sensors that are now being tested for smartphones. Attracted by their exceptional energy efficiency and ultra-low latency, critical sectors like automotive, robotics, and space are testing neuromorphic chips in prototype next-generation products. As these chips scale to mass production, costs will drop, driving rapid and widespread adoption.