An Intel® Vision Accelerator Design Product

Mustang-MPCIE-MX2

Intel® Vision Accelerator Design with Intel® Movidius™ VPU

A Perfect Choice for AI Deep Learning Inference Workloads

Powered by Open Visual Inference & Neural Network Optimization (OpenVINO™) toolkit

  • Compact miniPCIe form factor, 30 x 50 mm
  • Low power consumption: approximately 7.5 W for two Intel® Movidius™ Myriad™ X VPUs
  • Supports the OpenVINO™ toolkit; AI edge computing ready device
  • The two Intel® Movidius™ Myriad™ X VPUs can execute two topologies simultaneously

Intel® Distribution of OpenVINO™ toolkit

Based on convolutional neural networks (CNN), the Intel® Distribution of OpenVINO™ toolkit extends workloads across multiple types of Intel® platforms and maximizes performance.

It can optimize pre-trained deep learning models from frameworks such as Caffe*, TensorFlow*, MXNet, and ONNX*. The tool suite includes more than 20 pre-trained models and supports 100+ public and custom models (from Caffe*, MXNet, TensorFlow*, ONNX*, and Kaldi*) for easier deployment across Intel® silicon products (CPU, GPU/Intel® Processor Graphics, FPGA, VPU).
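As a rough illustration of that workflow, the toolkit's Model Optimizer converts a framework-specific model into the Intermediate Representation (an .xml topology plus a .bin weights file) that the Inference Engine consumes. The sketch below is a hedged example, not from this datasheet: the model file name and output directory are illustrative, and the Myriad X VPU requires FP16 precision.

```shell
# Hedged sketch (model file and output directory are illustrative).
# Convert a Caffe model to OpenVINO IR; FP16 is required for Myriad X VPUs.
python3 mo.py \
    --input_model squeezenet1.1.caffemodel \
    --data_type FP16 \
    --output_dir ir_models/
```

The same `mo.py` entry point accepts TensorFlow, MXNet, and ONNX models; only the input arguments change per framework.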

Features

  • Operating Systems
    Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit, Windows 10 64bit
  • OpenVINO™ toolkit
    • Intel® Deep Learning Deployment Toolkit
      • Model Optimizer
      • Inference Engine
    • Optimized computer vision libraries
    • Intel® Media SDK
    • Current Supported Topologies: AlexNet, GoogleNetV1/V2, MobileNet SSD, MobileNetV1/V2, MTCNN, Squeezenet1.0/1.1, Tiny Yolo V1 & V2, Yolo V2, ResNet-18/50/101
      * For more information on supported topologies, please refer to the Intel® OpenVINO™ toolkit official website.
    • High flexibility: the Mustang-MPCIE-MX2 is developed on the OpenVINO™ toolkit structure, which allows trained models from Caffe, TensorFlow, MXNet, and ONNX to execute on it after conversion to the optimized Intermediate Representation (IR).
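To sketch how a converted IR model runs on the card's VPUs, the example below uses OpenVINO's bundled benchmark_app with the MYRIAD device target. This is a hedged illustration under the assumption of a default OpenVINO installation; the model path is hypothetical.

```shell
# Hedged sketch: run an IR model on a Myriad X VPU via OpenVINO's
# benchmark_app (the model path below is illustrative).
./benchmark_app -m ir_models/squeezenet1.1.xml -d MYRIAD
```

OpenVINO's MULTI device plugin can additionally spread inference requests across both Myriad X VPUs on the card; the exact per-VPU device names are enumerated by the installed OpenVINO release.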

Applications

Dimensions (Unit: mm)

Specifications

Model Name            Mustang-MPCIE-MX2
Main Chip             2 x Intel® Movidius™ Myriad™ X MA2485 VPU
Operating Systems     Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit, Windows® 10 64bit
Dataplane Interface   miniPCIe
Power Consumption     Approximately 7.5 W
Operating Temperature -20°C ~ 60°C
Cooling               Active heatsink
Dimensions            30 x 50 mm
Operating Humidity    5% ~ 90%
Supported Topologies  AlexNet, GoogleNetV1/V2, MobileNet SSD, MobileNetV1/V2, MTCNN, Squeezenet1.0/1.1, Tiny Yolo V1 & V2, Yolo V2, ResNet-18/50/101
* For more information on supported topologies, please refer to the Intel® OpenVINO™ toolkit official website. [Supported Models] [Supported Framework Layers]

Ordering Information

Part No.               Description
Mustang-MPCIE-MX2-R10  Deep learning inference accelerating miniPCIe card with 2 x Intel® Movidius™ Myriad™ X MA2485 VPU, miniPCIe interface, 30 mm x 50 mm, RoHS