Machine Learning Software Stack
The NXP i.MX95 Technical Guide documentation from RidgeRun is currently under development.
Introduction
The NXP eIQ Machine Learning Software Development Environment provides a set of libraries and tools for developing and deploying Machine Learning applications on NXP processors.
Within the Yocto ecosystem, eIQ is delivered through the meta-imx / meta-ml layers and supports multiple inference engines targeting different compute units available on the SoC.
This section focuses on the capabilities available on the i.MX95 platform.
Supported Inference Engines (i.MX95)
The eIQ stack supports multiple inference engines, including:
- TensorFlow Lite
- ONNX Runtime
- PyTorch
- OpenCV
Each of these frameworks can target one or more compute backends (CPU, GPU, or NPU), depending on hardware and software support.
Compute Support on i.MX95
The table below summarizes which inference engines are supported on each compute unit for i.MX95:
| Compute Unit | PyTorch | ONNX Runtime | TensorFlow Lite | OpenCV |
|---|---|---|---|---|
| Cortex-A | ✓ | ✓ | ✓ | ✓ |
| GPU | N/A | N/A | ✓ | N/A |
| NPU | N/A | ✓ | ✓ | N/A |
Notes on Acceleration
- All supported frameworks can run on the Cortex-A cores with multi-threaded execution.
- TensorFlow Lite and ONNX Runtime can take advantage of hardware acceleration on GPU and NPU where supported.
- The NPU provides the highest performance for inference workloads when using the appropriate delegate and optimized models.
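As a concrete illustration of the delegate mechanism mentioned above, the sketch below selects a hardware delegate for TensorFlow Lite at runtime and falls back to the Cortex-A cores when none is found. The delegate library paths (`libneutron_delegate.so`, `libvx_delegate.so`) and the model filename are assumptions for illustration; check the delegate libraries actually shipped in your eIQ/Yocto image (typically under /usr/lib on the target) before using them.

```python
import os

# Assumed delegate library paths -- verify against your BSP release.
NPU_DELEGATE = "/usr/lib/libneutron_delegate.so"  # hypothetical NPU delegate
GPU_DELEGATE = "/usr/lib/libvx_delegate.so"       # hypothetical GPU delegate


def pick_delegate(candidates=(NPU_DELEGATE, GPU_DELEGATE)):
    """Return the first delegate library present on the target,
    or None to fall back to CPU (Cortex-A) execution."""
    for path in candidates:
        if os.path.exists(path):
            return path
    return None


delegate_path = pick_delegate()

# On-target inference (requires the tflite_runtime package from the eIQ image):
# from tflite_runtime.interpreter import Interpreter, load_delegate
# delegates = [load_delegate(delegate_path)] if delegate_path else []
# interpreter = Interpreter(model_path="model.tflite",
#                           experimental_delegates=delegates)
# interpreter.allocate_tensors()
```

The same pattern applies to ONNX Runtime, where the equivalent choice is made by listing execution providers in priority order when creating the inference session.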
Application Domains
The eIQ stack on i.MX95 is designed to support a wide range of embedded AI use cases, including:
- Vision
  * Object recognition
  * Multi-camera processing
  * Gesture detection
- Voice and Audio
  * Voice processing
  * Sound monitoring and analysis
- LLM / AI workloads
  * Text generation
  * Automatic speech recognition (ASR)