
AI Solution

  • Mustang-M2BM-MX2  [ SAMPLE ]

    Overview

    •Operating Systems
    Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit, Windows® 10 64bit
    •OpenVINO™ toolkit
    ◦Intel® Deep Learning Deployment Toolkit
    - Model Optimizer
    - Inference Engine
    ◦Optimized computer vision libraries
    ◦Intel® Media SDK
    •Currently supported topologies: AlexNet, GoogleNet V1/V2, MobileNet SSD, MobileNet V1/V2, MTCNN, SqueezeNet 1.0/1.1, Tiny YOLO V1 & V2, YOLO V2, ResNet-18/50/101
    * For more information on supported topologies, please refer to the official Intel® OpenVINO™ toolkit website.
    •High flexibility: the Mustang-M2BM-MX2 is built on the OpenVINO™ toolkit structure, which allows trained models from frameworks such as Caffe, TensorFlow, MXNet, and ONNX to run on it after conversion to the optimized IR format.
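The conversion step mentioned above (trained model → optimized IR) is performed by the toolkit's Model Optimizer. Below is a minimal sketch of assembling that invocation; the `mo.py` entry point and flag names are assumptions based on the toolkit generation these cards shipped with, so verify them against your installed version:

```python
# Build a Model Optimizer command line that converts a trained model
# (Caffe/TensorFlow/MXNet/ONNX) into the IR pair (.xml + .bin) consumed
# by the Inference Engine. FP16 is the precision used by Myriad X VPUs.
# NOTE: the mo.py location and flag names are assumptions from older
# OpenVINO releases; check `mo.py --help` on your installation.
def build_mo_command(input_model, output_dir="ir"):
    return [
        "python3", "mo.py",
        "--input_model", input_model,  # e.g. model.onnx or frozen.pb
        "--data_type", "FP16",         # half precision for the VPU
        "--output_dir", output_dir,
    ]

cmd = build_mo_command("model.onnx")
# run with subprocess.run(cmd, check=True) where OpenVINO is installed
```

The resulting `.xml`/`.bin` pair is what the Inference Engine loads onto the accelerator at deployment time.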
     

  • Mustang-M2AE-MX1  [ SAMPLE ]

    Overview

    •Operating Systems
    Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit, Windows® 10 64bit
    •OpenVINO™ toolkit
    ◦Intel® Deep Learning Deployment Toolkit
    - Model Optimizer
    - Inference Engine
    ◦Optimized computer vision libraries
    ◦Intel® Media SDK

    •Currently supported topologies: AlexNet, GoogleNet V1/V2, MobileNet SSD, MobileNet V1/V2, MTCNN, SqueezeNet 1.0/1.1, Tiny YOLO V1 & V2, YOLO V2, ResNet-18/50/101
    * For more information on supported topologies, please refer to the official Intel® OpenVINO™ toolkit website.
    •High flexibility: the Mustang-M2AE-MX1 is built on the OpenVINO™ toolkit structure, which allows trained models from frameworks such as Caffe, TensorFlow, MXNet, and ONNX to run on it after conversion to the optimized IR format.

  • Mustang-MPCIE-MX2  [ SAMPLE ]

    Overview

    ● miniPCIe form factor (30 x 50 mm)
    ● 2 x Intel® Movidius™ Myriad™ X VPU MA2485
    ● Power efficient: approximately 7.5 W
    ● Operating temperature: -20°C to 60°C
    ● Powered by Intel’s OpenVINO™ toolkit
  • Mustang-V100-MX4  [ SAMPLE ]

    Overview

    •Operating Systems
    Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit, Windows® 10 64bit
    •OpenVINO™ toolkit
    ◦Intel® Deep Learning Deployment Toolkit
    - Model Optimizer
    - Inference Engine
    ◦Optimized computer vision libraries
    ◦Intel® Media SDK
    ◦Currently supported topologies: AlexNet, GoogleNet V1/V2, MobileNet SSD, MobileNet V1/V2, MTCNN, SqueezeNet 1.0/1.1, Tiny YOLO V1 & V2, YOLO V2, ResNet-18/50/101
    * For more information on supported topologies, please refer to the official Intel® OpenVINO™ toolkit website.
    ◦High flexibility: the Mustang-V100-MX4 is built on the OpenVINO™ toolkit structure, which allows trained models from frameworks such as Caffe, TensorFlow, MXNet, and ONNX to run on it after conversion to the optimized IR format.

     

  • Mustang-V100-MX8  [ SAMPLE ]

    Overview

    •Operating Systems
    Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit, Windows® 10 64bit
    •OpenVINO™ Toolkit
    ◦Intel® Deep Learning Deployment Toolkit
    - Model Optimizer
    - Inference Engine
    ◦Optimized computer vision libraries
    ◦Intel® Media SDK
    ◦*OpenCL™ graphics drivers and runtimes.
    ◦Currently supported topologies: AlexNet, GoogleNet V1/V2, MobileNet SSD, MobileNet V1/V2, MTCNN, SqueezeNet 1.0/1.1, Tiny YOLO V1 & V2, YOLO V2, ResNet-18/50/101
    - For more information on supported topologies, please refer to the official Intel® OpenVINO™ toolkit website.

    •High flexibility: the Mustang-V100-MX8 is built on the OpenVINO™ toolkit structure, which allows trained models from frameworks such as Caffe, TensorFlow, and MXNet to run on it after conversion to the optimized IR format.
    *OpenCL™ is a trademark of Apple Inc. used by permission by Khronos.

  • Mustang-F100-A10  [ SAMPLE ]

    Overview

    •Operating Systems
    Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit (Windows 10 support planned for the end of 2018; more operating systems coming soon)
    •OpenVINO™ toolkit

    ◦Intel® Deep Learning Deployment Toolkit

    - Model Optimizer
    - Inference Engine

    ◦Optimized computer vision libraries
    ◦Intel® Media SDK
    ◦*OpenCL™ graphics drivers and runtimes.
    ◦Currently supported topologies: AlexNet, GoogleNet, Tiny YOLO, LeNet, SqueezeNet, VGG16, ResNet (more variants coming soon)
    ◦Intel® FPGA Deep Learning Acceleration Suite

    •High flexibility: the Mustang-F100-A10 is built on the OpenVINO™ toolkit structure, which allows trained models from frameworks such as Caffe, TensorFlow, and MXNet to run on it after conversion to the optimized IR format.
    *OpenCL™ is a trademark of Apple Inc. used by permission by Khronos.

  • TANK AIoT Developer Kit  [ SAMPLE ]

    Overview

    The Intel® Distribution of OpenVINO™ toolkit is based on convolutional neural networks (CNNs); it extends workloads across multiple types of Intel® platforms and maximizes performance.

    It can optimize pre-trained deep learning models from frameworks such as Caffe, MXNet, and TensorFlow. The tool suite includes more than 20 pre-trained models and supports 100+ public and custom models (including Caffe*, MXNet, TensorFlow*, ONNX*, and Kaldi* formats) for easier deployment across Intel® silicon products (CPU, GPU/Intel® Processor Graphics, FPGA, VPU).
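The deployment flow this describes, loading an optimized IR onto a chosen Intel device (CPU, GPU, MYRIAD/HDDL for the VPU cards, FPGA), can be sketched as follows. The `IECore` API used here is the legacy Python API of the 2018/2019-era toolkit, so treat the names as assumptions and verify them against your installed OpenVINO version:

```python
# Sketch: load an IR pair (model.xml + model.bin) produced by the Model
# Optimizer onto a specific Intel device via the Inference Engine.
# NOTE: module path, method names, and device identifiers are from the
# legacy OpenVINO Python API and may differ in newer releases.
try:
    from openvino.inference_engine import IECore
except ImportError:
    IECore = None  # runtime not installed; sketch is illustrative only

def load_ir(xml_path, device="MYRIAD"):
    """Load an IR network onto a device and return the executable network."""
    if IECore is None:
        raise RuntimeError("OpenVINO runtime not available")
    ie = IECore()
    net = ie.read_network(model=xml_path,
                          weights=xml_path.replace(".xml", ".bin"))
    return ie.load_network(network=net, device_name=device)
```

A loaded network is then driven with `exec_net.infer({...})`; device names such as `MYRIAD` and `HDDL` are plugin identifiers and may vary between toolkit releases.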

  • GRAND-C422-20D  [ SAMPLE ]

    Overview

    ●Supports Intel® Xeon® W family processors
    ●6 x PCIe slots, up to 4 dual-width GPU cards
    ●Water-cooling system on CPU
    ●Supports two U.2 SSDs
    ●One M.2 M-key SSD slot (NVMe PCIe 3.0 x4)
    ●Supports 10GbE networking
    ●IPMI remote management
  • PPC-3712A-N270  [ KC Certified ]

    Overview

    ● 12.1" 500 nits high-brightness TFT LCD with LED backlight
    ● Dual Gigabit Ethernet provides continuous network service and allows sharing among different workgroups in two different subnets
    ● Optional 802.11b/g/n wireless kit
    ● One slim type CD-ROM drive bay
    ● Robust aluminum front bezel and metal casing
    ● IP65-compliant front panel

  • FLEX-BX200-Q370  [ SAMPLE ]

    Overview

    I/O Interface
