IEI Launches Mustang-V100-MX8 with OpenVINO™ Toolkit Support for AI Deep Learning Applications

Jan 08, 2019

IEI Product News

 
Features
 
- Half-height, half-length, single-slot compact size

- Low power consumption: approximately 2.5 W per Intel® Movidius™ Myriad™ X VPU

- Supports the Open Visual Inference & Neural Network Optimization (OpenVINO™) toolkit; ready for AI edge computing

- Eight Intel® Movidius™ Myriad™ X VPUs can execute multiple topologies simultaneously
 

OpenVINO™ toolkit

The OpenVINO™ toolkit is built around convolutional neural network (CNN) workloads. It extends these workloads across Intel® hardware and maximizes performance.

It can optimize pre-trained deep learning models from frameworks such as Caffe, MXNet, and TensorFlow into an Intermediate Representation (IR) binary, then run the inference engine heterogeneously across Intel® hardware such as CPUs, GPUs, the Intel® Movidius™ Neural Compute Stick, and FPGAs.
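As a rough sketch of that two-step workflow (assuming an installed OpenVINO™ toolkit with its Model Optimizer script `mo.py` on the path, and a hypothetical pre-trained Caffe model file `squeezenet.caffemodel`), the optimize-then-infer flow could look like:

```shell
# Step 1: convert the pre-trained model into OpenVINO IR files
# (.xml topology + .bin weights). FP16 precision suits the
# Myriad X VPUs on the Mustang-V100-MX8.
python3 mo.py --input_model squeezenet.caffemodel \
              --data_type FP16 \
              --output_dir ir/

# Step 2: run inference on the accelerator by selecting the
# appropriate device plugin (HDDL targets multi-VPU cards;
# a single Neural Compute Stick would use MYRIAD instead),
# e.g. with one of the toolkit's bundled samples:
python3 classification_sample.py -m ir/squeezenet.xml \
                                 -i image.jpg \
                                 -d HDDL
```

The same IR files can be re-run on a CPU or GPU simply by changing the `-d` device argument, which is the heterogeneous-execution point made above.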

 

IEI Mustang-V100-MX8

In AI applications, training a model is only half of the story. Designing a real-time edge device to run it is a crucial task for today's deep learning applications.

VPU is short for vision processing unit. VPUs run AI inference quickly and are well suited to low-power applications such as surveillance, retail, and transportation. With their power efficiency and high performance on dedicated DNN topologies, they are ideal for AI edge computing devices, reducing total power usage and extending the duty time of rechargeable edge equipment. Because of bandwidth constraints and data privacy concerns, AI applications at the edge must be able to make judgements without relying on processing in the cloud, so resolving AI tasks locally is becoming ever more important.

In the era of the AI explosion, many computations rely on servers or devices that need a larger space and power budget to accommodate accelerators that ensure sufficient computing performance. In the past, solution providers upgraded hardware architectures to support modern applications, but this did not address the question of minimizing physical space, and space remains a constraint whenever a task cannot be processed on the edge device itself.

We are pleased to announce the launch of the Mustang-V100-MX8, a small-form-factor, low-power, high-performance VPU-based AI edge computing solution. It is compatible with the IEI TANK-870AI compact IPC, making it a fit for deployments with limited space and power budgets.


Specifications
Model Name: Mustang-V100-MX8
Main Chip: Eight Intel® Movidius™ Myriad™ X MA2485 VPUs
Operating Systems: Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit (Windows 10 support planned for the end of 2018; more operating systems coming soon)
Dataplane Interface: PCI Express x4, compliant with PCI Express Specification V2.0
Power Consumption: <30 W
Operating Temperature: 5°C–55°C (ambient temperature)
Cooling: Active fan
Dimensions: Standard half-height, half-length, single-slot PCIe
Operating Humidity: 5%–90%
Power Connector: Reserved PCIe 6-pin 12 V external power*
DIP Switch/LED Indicator: Identifies card number

*A standard PCIe slot provides 75 W of power; this connector is reserved for users in case of different system configurations.


Dimensions (Unit:mm)


Ordering Information
Part No.: Mustang-V100-MX8-R10
Description: Computing accelerator card with eight Intel® Movidius™ Myriad™ X MA2485 VPUs, PCIe Gen 2 x4 interface, RoHS compliant

Packing List
Item Qty
Full height bracket 1
External power cable 1
QIG 1