Lattice Blog


Lattice Highlights Flexible AI/ML Development and Launches sensAI 4.0 at the 2021 Embedded Vision Summit

Want Embedded Vision? Got MIPI?
Posted by Sreepada Hegade on 05/25/2021


There's no doubt that the Embedded Vision Summit (taking place this week) is the premier conference devoted to practical, deployable computer vision and visual artificial intelligence (AI) and machine learning (ML). Not surprisingly, Lattice is a regular exhibitor and presenter at the Summit, because it gives us a great opportunity to highlight our latest and greatest devices, tools, technologies, and solutions.

I'm a speaker at this year's event, where I present how a flexible AI/ML development flow is key to bringing products to market on time, on budget, and with the right value proposition. In particular, I focus on AI/ML pipelines targeting resource-constrained devices at the edge, where the internet meets, senses, and interfaces with the real world.

Most edge applications feature different types of sensors involving a variety of electrical interfaces and communications protocols. The data acquired from these sensors typically requires pre-processing and aggregation before being fed into the AI/ML engine for inferencing purposes. In many cases, it is also necessary to perform sensor fusion: the process of combining sensor data derived from disparate sources, because the combined information has less uncertainty than if those sources were analyzed individually.
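To make the "less uncertainty" claim concrete, here is a minimal sketch of one classic fusion technique, inverse-variance weighting of two independent noisy estimates of the same quantity. The sensor names and numbers are made up for illustration; this is not Lattice code.

```python
# Sensor fusion sketch: combine two independent Gaussian estimates of the
# same quantity. The fused estimate always has lower variance than either
# source alone. (Illustrative only; sensor values are hypothetical.)

def fuse(m1, var1, m2, var2):
    """Inverse-variance weighted combination of two estimates."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused_mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # strictly smaller than var1 and var2
    return fused_mean, fused_var

# e.g. a low-noise radar range fused with a noisier ultrasonic range
mean, var = fuse(10.2, 0.04, 9.8, 0.09)
print(round(mean, 3), round(var, 4))  # -> 10.077 0.0277
```

Note that the fused variance (0.0277) is below that of even the better sensor (0.04), which is exactly why fusing disparate sources pays off at the edge.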

Similarly, the neural network AI/ML inference engine itself needs to be flexible. New network architectures are constantly being introduced and new types of operators are continually arriving on the scene. If a development team’s AI/ML engine isn’t flexible enough to be able to take full advantage of these new developments, then their competitors are going to “eat their lunch” as the old saying goes.

Following inference, the results need to be presented to the target audience, which may be humans and/or other machines and systems. In all cases, some amount of post-processing is required before these results are presented to the intended audience.

The bottom line is that there's a need for flexibility at every step in the development pipeline. This is why we created our Lattice sensAI™ solution stack.

The sensAI 4.0 solution stack

This stack includes hardware platforms, IP cores, software tools, reference designs and demos, and custom design services. In the case of hardware platforms, developers can choose from a variety of FPGA families, including the Lattice iCE40™ UltraPlus (for ultra-small, ultra-low-power applications), MachXO3D™ (for platform management and security), CrossLink™-NX (for embedded vision applications), and ECP5™ (for general-purpose applications).

FPGAs provide programmable input/outputs (I/Os) that can be configured to support different electrical interface standards, thereby allowing them to interface with a wide variety of sensors. Lattice also offers a cornucopia of hard and soft IP blocks to support different communications protocols.

The next step is to perform preprocessing and data aggregation, for which the sensAI stack offers a suite of IP cores for tasks like cropping, resizing, and scaling. The programmable FPGA fabric allows these computationally intensive data processing tasks to be performed in a massively parallel fashion, thereby offering high performance while consuming little power.
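The cropping and scaling steps described above can be sketched in a few lines of plain Python on a grayscale frame stored as a list of rows. This is purely illustrative: on a Lattice FPGA these operations run as parallel hardware IP cores, not sequential software.

```python
# Pre-processing sketch: crop a region of interest, then scale it down
# with nearest-neighbour resampling. (Illustrative software model of the
# kind of work the sensAI pre-processing IP cores do in hardware.)

def crop(img, top, left, height, width):
    """Return the height x width window starting at (top, left)."""
    return [row[left:left + width] for row in img[top:top + height]]

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour scaling of a 2D image to new_h x new_w."""
    h, w = len(img), len(img[0])
    return [[img[r * h // new_h][c * w // new_w] for c in range(new_w)]
            for r in range(new_h)]

frame = [[r * 10 + c for c in range(8)] for r in range(8)]  # fake 8x8 frame
roi = crop(frame, 2, 2, 4, 4)        # 4x4 region of interest
small = resize_nearest(roi, 2, 2)    # downscale to 2x2
print(small)  # -> [[22, 24], [42, 44]]
```

Because each output pixel depends only on its own input neighbourhood, every pixel of `small` can be computed independently, which is what makes the FPGA's massively parallel fabric such a good fit for this stage.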

As was previously noted, the neural network itself requires a lot of flexibility to be able to support new network topologies and operators. The sensAI stack includes a suite of soft IP neural network cores and accelerators that can be modified quickly and easily to fully address the evolving AI/ML landscape.

As yet another example of flexibility, among the sensAI stack's multiple implementation examples are two of particular interest. In one, the AI/ML inferencing engine is implemented in programmable fabric, thereby providing a low-power, high-performance solution suitable for high-speed image processing applications. In the other, the neural network inferencing engine is implemented on a RISC-V processor core, thereby providing an ideal solution for AI/ML tasks that can run quietly in the background, such as predictive maintenance applications.

Introducing sensAI Studio

One of the really "hot news" items Lattice is announcing at the Embedded Vision Summit is that the latest and greatest version of the stack, sensAI 4.0, includes support for the new Lattice sensAI Studio design environment, which facilitates end-to-end AI/ML model training, validation, and compilation.

Lattice sensAI Studio
New to the sensAI solution stack is Lattice sensAI Studio, a GUI-based tool for training, validating, and compiling ML models optimized for Lattice FPGAs. The tool makes it easy to take advantage of transfer learning to deploy ML models.

This web-based framework, which can be hosted in the cloud or on the developers' own servers, supports multiple simultaneous users working on the same or different projects. sensAI Studio provides an easy-to-use GUI-based environment that allows users to select the target FPGA, configure I/Os, drag and drop AI/ML model IP along with pre- and post-processing IP, and connect everything together. This new version of sensAI also supports the Lattice Propel design environment for accelerating embedded RISC-V processor-based development.

In addition to TensorFlow AI/ML models, sensAI 4.0 includes support for TensorFlow Lite to reduce power consumption and increase data co-processing performance in AI/ML inferencing applications (TensorFlow Lite runs anywhere from 2 to 10 times faster on a Lattice FPGA than it does on an ARM® Cortex®-M4-based MCU). Furthermore, by leveraging advances in ML model compression and pruning, sensAI 4.0 can support image processing at 60 FPS with QVGA resolution or 30 FPS with VGA resolution.
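One of the compression techniques referred to above, magnitude-based weight pruning, can be illustrated in a few lines: weights whose magnitude falls below a threshold are zeroed, shrinking the model that a resource-constrained edge device must store and move. This is a generic sketch of the technique, not sensAI 4.0's actual compression pipeline.

```python
# Magnitude-based pruning sketch: zero out the smallest-magnitude
# fraction of a layer's weights. Zeroed weights can then be skipped or
# stored sparsely, reducing memory traffic and compute at the edge.
# (Illustrative only; weight values are made up.)

def prune(weights, sparsity):
    """Zero the smallest-magnitude `sparsity` fraction of `weights`."""
    ranked = sorted(abs(w) for w in weights)
    cutoff = ranked[int(len(ranked) * sparsity)]
    return [w if abs(w) >= cutoff else 0.0 for w in weights]

w = [0.8, -0.05, 0.02, -0.9, 0.1, 0.4, -0.01, 0.3]
pruned = prune(w, 0.5)          # keep only the largest half
zeros = sum(1 for x in pruned if x == 0.0)
print(pruned, zeros)  # -> [0.8, 0.0, 0.0, -0.9, 0.0, 0.4, 0.0, 0.3] 4
```

In practice pruning is followed by fine-tuning to recover accuracy, but even this naive version shows why pruned models move less data, which is where much of the claimed frame-rate headroom comes from.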

sensAI Studio also supports the latest and greatest AI/ML design techniques, such as transfer learning, in which a model developed for one task is reused for a related task, or serves as the starting point for a model performing a new task. Also of interest is the case of taking an AI/ML model you've already trained on one processor (a microcontroller, for example) and transferring that model to a smaller, lower-power, higher-capability device such as a Lattice FPGA.
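The "reuse a model for a related task" idea above can be sketched with a toy example: a pretrained feature extractor is frozen, and only a small new head is trained on the new task. The `base_features` function here is a stand-in for a real pretrained network, and the data is synthetic; this is not the sensAI Studio workflow itself.

```python
# Transfer learning sketch: freeze the "base" feature extractor and train
# only a new linear head for the new task. (Toy example with a made-up
# feature map and synthetic data.)

def base_features(x):
    """Frozen, pretrained feature extractor (stand-in for a real network)."""
    return [x, x * x, 1.0]

def train_head(data, epochs=500, lr=0.02):
    """Fit only the new head's weights; the base is never updated."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            f = base_features(x)
            pred = sum(wi * fi for wi, fi in zip(w, f))
            err = pred - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
    return w

# New task: y = x^2 + 1, learned using only the frozen features
data = [(x / 4.0, (x / 4.0) ** 2 + 1.0) for x in range(-8, 9)]
w = train_head(data)
pred = sum(wi * fi for wi, fi in zip(w, base_features(0.5)))
print(round(pred, 2))  # close to 0.5**2 + 1 = 1.25
```

Because only three head weights are trained while the base stays fixed, the retraining cost is a tiny fraction of training from scratch, which is exactly what makes transfer learning attractive for edge deployments.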
