The LUMI AI Factory welcomes you to the online seminar on Evolutionary Optimization of Neural Network Hardware Accelerators, scheduled for 3 November 2025. The session will give an overview of small networks used in embedded systems and of advanced optimization methods. You will also explore power estimation techniques, the challenges of mapping computations onto compute units, and the principles of approximate computing.

Neural networks are no longer limited to large GPUs and supercomputers; they are increasingly used in simple embedded systems with limited computing power and memory. In this tutorial, I will explore methods based on evolutionary algorithms to optimize both hardware accelerators for specific networks and neural networks for particular accelerators. I will show how intentionally introducing errors into computations, together with improving the arrangement of computational units and memory, can boost inference efficiency. Using Capsule Networks as an example, I will demonstrate how to modify a network architecture for better hardware performance, and, similarly, how small Ternary Networks can be customized for printed electronics.
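To give a flavour of the kind of search the abstract refers to, below is a minimal sketch of an evolutionary optimization loop over per-layer approximation settings of a hypothetical accelerator. It is not the speaker's actual method: the number of layers, the approximation levels, and the energy and accuracy-loss models are all illustrative placeholders, and a real flow would evaluate candidates with hardware power estimates and validation accuracy instead of these closed-form stand-ins.

```python
# Illustrative sketch: evolutionary search over per-layer approximation levels.
# All models and constants below are assumptions, not measured values.
import random

LAYERS = 6            # hypothetical network depth
LEVELS = range(0, 8)  # 0 = exact arithmetic, 7 = most aggressive approximation

def energy(config):
    # Assumed model: each approximation level saves a fixed fraction of energy per layer.
    return sum(1.0 - 0.08 * lvl for lvl in config)

def accuracy_loss(config):
    # Assumed model: error contribution grows quadratically with the approximation level.
    return sum(0.002 * lvl * lvl for lvl in config)

def fitness(config, max_loss=0.05):
    # Minimise energy, with a soft penalty when the accuracy-loss budget is exceeded.
    penalty = 100.0 * max(0.0, accuracy_loss(config) - max_loss)
    return -(energy(config) + penalty)

def mutate(config, rate=0.2):
    # Randomly re-draw some layers' approximation levels.
    return tuple(random.choice(LEVELS) if random.random() < rate else lvl
                 for lvl in config)

def crossover(a, b):
    # One-point crossover of two per-layer configurations.
    cut = random.randrange(1, LAYERS)
    return a[:cut] + b[cut:]

def evolve(pop_size=40, generations=50):
    pop = [tuple(random.choice(LEVELS) for _ in range(LAYERS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best per-layer levels:", best)
    print("estimated energy:", round(energy(best), 3),
          "estimated accuracy loss:", round(accuracy_loss(best), 4))
```

Running the script prints one candidate configuration that balances the assumed energy savings against the assumed accuracy-loss budget; the same loop structure applies whether the genome encodes approximate multipliers, memory arrangement, or network architecture choices.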

