While embedded processor vendors typically focus on deploying machine learning (ML) designs, NXP has taken it a step further by offering tools for data preparation and model training. NXP, whose primary interest is enabling its processing platforms and the end applications they serve, offers a software development environment that provides a collection of workflow tools, inference engines, neural network (NN) compilers, and libraries optimized for building ML applications on NXP microcontrollers and application processors.
Inference engines supported by the eIQ ML development environment include Arm NN, Glow, ONNX, TensorFlow Lite, and DeepViewRT, which serve artificial intelligence (AI) applications ranging from anomaly detection to speech recognition and object classification. Additionally, eIQ ML software can be leveraged as part of a user's existing flow or used for the entire flow, depending on the targeted ML application.
For classification and detection in vision-based models, which currently account for 60-70% of machine learning applications, the eIQ toolset offers a core model set as a quick starting point. Once embedded developers have finalized training with the eIQ ML software, they can analyze the overall model and determine how much bandwidth and memory it uses.
Key features of the toolset
The eIQ ML development environment also includes various sample applications that demonstrate how to integrate neural networks into voice, visual, and sensor applications. The eIQ toolkit in this development environment enables graphical profiling capabilities with runtime information to optimize neural network architectures.
Next, the eIQ Portal, an intuitive graphical user interface (GUI), allows users to create, optimize, debug, convert, and export ML models. It also lets embedded developers import datasets and models in TensorFlow and ONNX formats, then quickly train and deploy neural network models and ML workloads.
Finally, the eIQ Marketplace offers value-added solutions, professional support, and design services from trusted ecosystem partners. Design services, libraries, and models are hosted through the eIQ Marketplace to enable faster time to market. For example, Au-Zone Technologies, based in Calgary, Canada, augments eIQ with its DeepView ML tool suite, which provides an intuitive workflow and lets developers quickly train and deploy NN models and ML workloads on NXP processors.
BYOD and BYOM flows
The eIQ ML development environment hosts two types of flows: bring your own data (BYOD) and bring your own model (BYOM).
For BYOD, data curation occurs within the tool, so embedded developers can label the data, identify regions of interest, and select the portion of the dataset they wish to hold out for validation. They can also use the dataset augmentation feature, which applies a set of filters and modifications to existing data. For example, given only a few hundred flower images, the feature can generate the additional training images needed to build a robust model.
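The augmentation idea above can be sketched in plain NumPy. This is an illustration only, not eIQ's API: `augment` and `expand_dataset` are hypothetical helpers, and the flip/brightness filters stand in for whatever filter set the tool actually applies.

```python
import numpy as np

def augment(image, rng):
    """Produce one augmented variant of an image.

    Illustrative only: applies a random horizontal flip and a
    brightness jitter, two common augmentation filters.
    """
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]          # horizontal flip along the width axis
    factor = rng.uniform(0.8, 1.2)  # +/-20% brightness jitter
    out = np.clip(out * factor, 0, 255).astype(image.dtype)
    return out

def expand_dataset(images, copies, seed=0):
    """Grow a small dataset by adding `copies` augmented variants per image."""
    rng = np.random.default_rng(seed)
    augmented = [augment(img, rng) for img in images for _ in range(copies)]
    return images + augmented

# e.g., turn 100 labeled flower images into 500 training samples:
# training_set = expand_dataset(flower_images, copies=4)
```

The original images are kept and only copies are modified, so the validation split can still be drawn from untouched data.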
Regarding BYOM, if embedded developers have a model built outside the eIQ toolset and want to deploy it on NXP processing platforms, the toolset provides model converters. These run the model through the different inference engines and identify which engine yields the best results. It is important to note that, even with a converted model, additional data collected in the field may prove useful after deployment. Here, the toolset allows developers to feed that data back and improve the model over time.
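As a rough sketch of what such a conversion step looks like for one of the supported engines, the snippet below converts a Keras model to TensorFlow Lite with post-training quantization. The tiny model here is a stand-in for a user's trained network, and this uses the public TensorFlow API directly rather than eIQ's converter.

```python
import tensorflow as tf

# Stand-in for a trained vision model brought in from outside the toolset.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Post-training quantization shrinks the model for memory-constrained MCUs.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # returns the flatbuffer as bytes

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

A BYOM flow would repeat this for each candidate inference engine (Glow, DeepViewRT, etc.) and compare footprint and latency on the target part.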
It’s also worth mentioning that users can create models and target them to run on a CPU, DSP, or GPU, depending on the performance profile required. The eIQ ML development environment ensures that all of these compute engines are supported through the different inference engines.
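With the TFLite runtime, for instance, the compute engine is chosen at inference time: a delegate routes work to a GPU or accelerator, while thread count tunes the CPU path. The sketch below exercises the CPU path end to end; the model is a throwaway stand-in, and on NXP silicon a hardware delegate would be passed via `experimental_delegates` instead.

```python
import numpy as np
import tensorflow as tf

# Build and convert a trivial stand-in model so the example is self-contained.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# num_threads selects CPU parallelism; on supported hardware a GPU/NPU
# delegate would be supplied through experimental_delegates instead.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes, num_threads=2)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])  # softmax class probabilities
```

Because the model file is engine-agnostic, the same flatbuffer can be redeployed against a different compute engine without retraining.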
Majeed Ahmad, editor-in-chief of EDN and Planet Analog, has covered the electronics design industry for more than two decades.