In AI applications, training a model is only half the story. Deploying it on a real-time edge device is just as crucial for today's deep learning applications.
VPU is short for vision processing unit, a class of processor that accelerates AI inference while consuming little power, making it well suited to applications such as surveillance, retail, and transportation. With its power efficiency and high performance on dedicated DNN topologies, a VPU is ideal for AI edge computing devices: it reduces total power usage and extends the duty time of rechargeable edge equipment. AI applications at the edge must be able to make judgements without relying on cloud processing, due to bandwidth constraints and data privacy concerns. Resolving AI tasks locally is therefore becoming increasingly important.
In this era of explosive AI growth, many workloads still rely on servers or devices that need a large space and power budget to host the accelerators required for adequate computing performance.
In the past, solution providers upgraded hardware architectures to support modern applications, but that approach did not address the question of minimizing physical footprint. Space remains a constraint when the task cannot be processed directly on the edge device.
We are pleased to announce the launch of the Mustang-V100-MX8: a small-form-factor, low-power, high-performance VPU-based AI edge computing solution, compatible with the IEI TANK-870AI compact IPC, designed for deployments with limited space and power budgets.