Overview
- Half-Height, Half-Length, Single-Slot compact size
- Low power consumption, approximately 25W
- Supports the OpenVINO™ toolkit; AI edge computing ready device
- Eight Intel® Movidius™ Myriad™ X VPUs can execute multiple topologies simultaneously
Warning: DO NOT install the Mustang-V100-MX8 into the TANK AIoT Dev. Kit before shipment. It is recommended to ship them in their original boxes to prevent the Mustang-V100-MX8 from being damaged.
*The standard PCIe slot provides 75W of power; the external power connector is reserved for users with different system configurations.
Accelerate To The Future
An Intel® Vision Accelerator Design Product
Intel® Vision Accelerator Design with Intel® Movidius™ VPU
A Perfect Choice for AI Deep Learning Inference Workloads
Powered by Open Visual Inference & Neural Network Optimization (OpenVINO™) toolkit
- Half-Height, Half-Length, Single-Slot compact size.
- Low power consumption, approximately 2.5W for each Intel® Movidius™ Myriad™ X VPU.
- Supports the Open Visual Inference & Neural Network Optimization (OpenVINO™) toolkit; AI edge computing ready device.
- Eight Intel® Movidius™ Myriad™ X VPUs can execute multiple topologies simultaneously.
The OpenVINO™ toolkit is built around convolutional neural networks (CNNs); it extends workloads across Intel® hardware and maximizes performance.
It can optimize a pre-trained deep learning model from frameworks such as Caffe, MXNet, or TensorFlow into an Intermediate Representation (IR) binary file, then run the inference engine heterogeneously across Intel® hardware such as the CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA.
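The convert-then-deploy flow described above can be sketched as two command lines. Everything here is an assumption rather than something stated on this page: the mo.py path follows a typical default OpenVINO install, and the model file name is an example; only the "HDDL" device selector, which routes inference across the card's VPUs, is OpenVINO's documented mechanism.

```python
import shlex

# Assumed install path for the Model Optimizer script; adjust to your setup.
MO = "/opt/intel/openvino/deployment_tools/model_optimizer/mo.py"

def ir_convert_cmd(model, output_dir="./ir"):
    """Model Optimizer invocation: converts a pre-trained model
    (Caffe/MXNet/TensorFlow) into IR .xml/.bin files."""
    return ["python3", MO, "--input_model", model, "--output_dir", output_dir]

def hddl_infer_cmd(ir_xml):
    """benchmark_app invocation: runs the IR on the HDDL device,
    which schedules inference across the card's eight Myriad X VPUs."""
    return ["benchmark_app", "-m", ir_xml, "-d", "HDDL"]

if __name__ == "__main__":
    # Print the two commands; execute them (e.g. via subprocess.run) on a
    # machine where the toolkit and the accelerator card are installed.
    print(shlex.join(ir_convert_cmd("squeezenet1.1.caffemodel")))  # example model
    print(shlex.join(hddl_infer_cmd("ir/squeezenet1.1.xml")))
```

The same IR files run unmodified on CPU or GPU by swapping the `-d` argument, which is what the heterogeneous-execution claim above refers to.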
IEI Deep Learning Inference Acceleration Card - Mustang-V100
Troubleshooting (Issue / Solution / Files)
1. "HDDL_NOT_INITIALIZED" failure
Solution: Download the patch that fixes the issue where the Mustang card cannot be reset on some motherboards.
Files: For Linux OS, click here. For Windows OS, click here.
2. After installing Mustang accelerators, the VPU quantity is not equal to "8"
(For example, a Mustang-MPCIE-MX2 provides 2 VPUs, and two Mustang-V100-MX8 cards provide 16 VPUs.)
Solution: Download the patch that fixes the issue where the Mustang card cannot be recognized on some motherboards, or edit the autoboot config to change the device number:
sudo gedit /opt/intel/openvino_ /inference_engine/external/hddl/config/hddl_autoboot.config
Change "total_device_num" from the default "8" to the current quantity:
- Mustang-MPCIE-MX2 (2 VPUs): "total_device_num" 2
- Mustang-M2BM-MX2 (2 VPUs): "total_device_num" 2
- Mustang-V100-MX4 (4 VPUs): "total_device_num" 4
- Two Mustang-V100-MX8 (16 VPUs): "total_device_num" 16
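The total_device_num edit can also be scripted instead of done in gedit. A minimal sketch; it assumes only that hddl_autoboot.config stores the count as the first number following the total_device_num key (verify against your file before writing anything back):

```python
import re
from pathlib import Path

def set_total_device_num(config_path, count):
    """Replace the number that follows 'total_device_num' in the
    hddl_autoboot.config text, whatever delimiter the file uses.
    Returns the updated text; the caller decides whether to write it back."""
    text = Path(config_path).read_text()
    updated, n = re.subn(r'("?total_device_num"?\D*)\d+',
                         lambda m: m.group(1) + str(count), text)
    if n == 0:
        raise ValueError("total_device_num entry not found")
    return updated

# Example: two Mustang-V100-MX8 cards installed -> 16 VPUs in total.
# The config path is the (version-elided) one from the FAQ above;
# substitute the full path of your own OpenVINO install, e.g.:
# cfg = ".../inference_engine/external/hddl/config/hddl_autoboot.config"
# Path(cfg).write_text(set_total_device_num(cfg, 16))
```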
3. Monitoring the performance of each VPU
Solution: Start hddldaemon to observe the utilization and temperature of each VPU.
Files: Document
Specifications
Form factor: Standard Half-Height, Half-Length, Single-Slot PCIe
Dataplane interface: PCI Express x4 (compliant with PCI Express Specification V2.0)
System chipset: Eight Intel® Movidius™ Myriad™ X MA2485 VPUs
Supported OS: Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit, Windows® 10 64-bit
Other on-board devices and interfaces: DIP switch / LED indicator to identify the card number
Power input: *Reserved PCIe 6-pin 12V external power
(The standard PCIe slot provides 75W of power; the external connector is reserved for users with different system configurations.)
Power consumption: approximately 25W
Operating temperature: -20°C ~ 60°C
Humidity: 5% ~ 90%
Dimensions: 169.54 mm x 56.16 mm (standard Half-Height, Half-Length, Single-Slot PCIe)
Item No.: Mustang-V100-MX8-R11
Description: Computing Accelerator Card with 8 x Movidius Myriad X MA2485 VPUs, PCIe Gen2 x4 interface, RoHS
Packing list:
1 x Full-height bracket
1 x External power cable
1 x QIG
- AI-based Video Analytics Solutions in Smart City
- IEI Allied with AlphaInfo and GeoVision, Co-Creating AI Smart City
- IEI Integration Corp. and CyberLink Work Together to Create Various AIoT Solutions for Smart Retail and Security Control
- Medical Image AI Solution
- Leveraging AI Solutions to Make Data Valuable
- Accelerate Medical Analytics with AI Inference System