22 nm FDSOI · In-Memory Compute · Edge AI
Advanced AI models face a fundamental memory bottleneck: massive data movement between sensors and processors that traditional integration cannot handle. NeurIQ solves this with a single transistor that senses, stores, and computes.
Advanced AI models require massive data movement between sensor, memory, and processor, consuming excessive power. Traditional module-level integration cannot meet the strict energy-efficiency and latency demands of Edge deployment.
This architectural mismatch between where data is born and where it is processed is the defining constraint of modern Edge AI.
Industries & Use Cases
From autonomous drones to IoT, NeurIQ's technology enables intelligence wherever power and latency budgets are tight.
Onboard AI vision with minimal power usage, enabling longer flights and real-time situational awareness.
Low-latency inference for perception and control loops without relying on cloud connectivity.
AI-powered edge cameras that process data at the source, reducing bandwidth and ensuring privacy.
On-chip compute accelerator for point-cloud processing and signal correlation with FDSOI compatibility.
On-chip compute acceleration for positioning algorithms at the silicon level.
Embed NeurIQ's In-Memory Compute IP into your own FDSOI chip design for any Edge application.
Semiconductor expertise combined with a vision for a fundamentally different approach to Edge AI architecture.
Driving business strategy, partnerships, and commercialization of the FDSOI In-Memory Compute IP platform across global markets.
Leading technical development of the In-Memory Compute architecture and IP core design, based on the patented 22 nm FDSOI transistor.