Weebit Nano joins Korea drive for low-power AI chips
Weebit Nano has been selected for a Republic of Korea government-funded programme developing ultra-low-power analogue compute-in-memory technology for artificial intelligence applications, with its resistive random-access memory (ReRAM) central to the work.
The programme uses a compute-in-memory approach in which computation occurs inside memory arrays. By reducing data movement between processors and memory, a key constraint in many AI systems, the design aims to cut power use and boost throughput.
Neural-network weights will be stored in ReRAM crossbar arrays, enabling in-place vector-matrix multiplication. The consortium expects this to improve energy efficiency for inference and, longer term, to support training as data-processing demands grow.
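The crossbar operation described above can be sketched in a few lines. This is an idealised illustration, not the consortium's implementation: each weight is stored as a cell conductance, input voltages drive the rows, and each column current sums the products by Ohm's and Kirchhoff's laws, yielding a vector-matrix product in a single step.

```python
# Idealised ReRAM crossbar: weights held as conductances G[i][j],
# inputs applied as row voltages V[i]; each column current is
# I_j = sum_i V_i * G_ij, i.e. one output of a vector-matrix multiply.

def crossbar_vmm(voltages, conductances):
    """Column currents of an ideal crossbar (arbitrary units)."""
    rows = len(conductances)
    cols = len(conductances[0])
    return [sum(voltages[i] * conductances[i][j] for i in range(rows))
            for j in range(cols)]

# A 3-input, 2-output layer: weights as conductances (arbitrary units).
weights = [[2, 5],
           [1, 4],
           [7, 3]]
inputs = [1, 0, 1]

print(crossbar_vmm(inputs, weights))  # [9, 8]
```

In a physical array the multiply-accumulate happens in the analogue domain in constant time, which is the source of the energy-efficiency gains the consortium is targeting; real devices add non-idealities such as conductance variability and wire resistance.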
Weebit Nano has also expanded its collaboration with Korean foundry DB HiTek, which will manufacture devices for the consortium. The agreement extends earlier work and makes DB HiTek the project's primary silicon manufacturing partner.
Other participants include the Daegu Gyeongbuk Institute of Science and Technology, Seoul National University, Chungbuk National University, the Electronics and Telecommunications Research Institute, and AnalogueAI, which is working to commercialise products based on compute-in-memory blocks developed through the programme.
From tests to arrays
A key goal is to move beyond small-scale test structures to larger device-array implementations in silicon. Compute-in-memory concepts can show promise in prototypes but often run into engineering and manufacturing hurdles when scaled to arrays suitable for real-world deployment.
The consortium plans to build silicon-verified compute-in-memory blocks and evaluate them at application scale. It will also pursue co-optimisation across device, circuit and architectural levels, with an energy-efficiency target of about 200 TOPS/W.
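For context, the ~200 TOPS/W target can be unpacked with simple unit arithmetic (an illustration, not a consortium calculation): TOPS per watt is equivalent to tera-operations per joule, so the target implies an energy budget of a few femtojoules per operation.

```python
# Unpacking the ~200 TOPS/W energy-efficiency target.
# TOPS/W = tera-ops per second per watt = tera-ops per joule,
# so energy per operation is 1 / (200e12) joules.

TARGET_TOPS_PER_W = 200
ops_per_joule = TARGET_TOPS_PER_W * 1e12     # operations per joule
energy_per_op_fj = 1e15 / ops_per_joule      # femtojoules per operation

print(f"{energy_per_op_fj} fJ per operation")  # 5.0 fJ per operation

# At that efficiency, a 100 mW edge power budget sustains 20 TOPS.
edge_tops = TARGET_TOPS_PER_W * 0.1
print(f"{edge_tops} TOPS at 100 mW")
```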
The work includes integrating emerging synapse-device arrays with commercial silicon-CMOS processes and circuits. Participants also aim to establish a complete, repeatable development flow for analogue compute-in-memory, from device development through silicon verification.
Compute-in-memory designs have gained attention as AI models and workloads strain system power budgets. Conventional accelerators often spend large amounts of energy moving data between compute cores and memory. The programme positions compute-in-memory as a way to reduce latency and power use by keeping data close to the compute operation.
Weebit Nano develops and licenses memory technologies for semiconductor companies. In this programme, its ReRAM is positioned as a foundational element, with the memory cell serving as both storage and the compute medium within crossbar arrays.
As the consortium's manufacturing partner, DB HiTek will connect emerging memory devices with established production processes. This reflects a broader industry effort to reduce risk by demonstrating new memory and compute concepts within production-relevant process flows.
A major challenge for analogue compute-in-memory is translating small demonstrations into larger, consistent arrays that can be manufactured with acceptable yields and predictable behaviour. The programme's emphasis on array-based silicon implementations and verification points to a focus on practical engineering issues such as device variability, circuit integration, and system-level evaluation.
Weebit Nano Chief Executive Coby Hanoch said the programme is significant for AI system designers.
"AI system designers are increasingly looking to bring memory closer to compute to reduce power and latency. In-memory compute is a practical path toward that goal, but it requires validation at realistic scales. This initiative combines device innovation, circuit and architecture co-design, and manufacturable silicon, which is exactly what's needed to move ACiM from research to deployable technology. We're delighted to extend our agreement with DB HiTek as part of this effort, continuing our excellent relationship."
DB HiTek Sales Division General Manager Fred Kim linked the work to a national policy agenda, saying the project aims to strengthen an ecosystem spanning academia and industry.
"This project is part of the Republic of Korea's broader AI Transformation Initiative, which supports technologies critical to future AI semiconductor leadership. By combining emerging memory devices with proven CMOS manufacturing, the consortium aims to significantly improve AI energy efficiency while building domestic capability and a sustainable ecosystem spanning academia and industry. Weebit ReRAM is the ideal memory device to use as a foundation for this work."
The consortium expects the co-design and verification methods developed through the programme to apply beyond AI, including other semiconductor applications, as teams seek repeatable ways to integrate emerging devices with established process technologies.