MasterClass: Accelerate Deep Learning Inference using Intel OpenVINO™ Toolkit
Oct 30, 2018
10:00 AM - 01:30 PM (Asia/Kolkata)
IoTCOE, Diamond District, Lower Ground Floor, DD3, HAL Old Airport Rd, ISRO Colony, Domlur, Bengaluru, Karnataka 560008
Bangalore, India

This event has ended. Please contact organizer for more details.


Agenda: Accelerate Deep Learning Inference using Intel OpenVINO™ Toolkit

Edge computing has been gaining popularity as the latency of cloud-based command and control, along with call drops, raises issues. AI or machine learning inference at the edge has a strong business case: it runs faster, consumes less power, and is more reliable because network issues are bypassed. To make this possible, a new generation of AI chips is under development. The current approach combines FPGA blocks with special compilers that shrink common neural network models and run them efficiently at much lower power; GPUs are not needed at the edge.

Intel has merged many of these capabilities into its new toolkit. OpenVINO gives software developers a single toolkit for applications that need human-like vision capabilities, supporting deep learning, computer vision, and hardware acceleration with heterogeneous execution, all in one package.
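As a rough illustration of that single-toolkit workflow, the sketch below loads a model already converted to OpenVINO's Intermediate Representation (IR) and runs one synchronous inference. It assumes the Inference Engine Python API as shipped in the 2018 releases (details vary by version); the file names and input shape are placeholders, not part of the masterclass material.

    import numpy as np
    from openvino.inference_engine import IENetwork, IEPlugin

    # Load a model that the Model Optimizer has already converted to
    # OpenVINO's Intermediate Representation (IR): an .xml topology
    # file plus a .bin weights file (placeholder names).
    net = IENetwork(model='model.xml', weights='model.bin')

    # Pick a device plugin; the same code runs on 'CPU', 'GPU',
    # 'MYRIAD' (Neural Compute Stick), or FPGA targets.
    plugin = IEPlugin(device='CPU')
    exec_net = plugin.load(network=net)

    # Run one synchronous inference on dummy data; the shape must
    # match the model's input (placeholder values here).
    input_blob = next(iter(net.inputs))
    dummy = np.zeros([1, 3, 224, 224], dtype=np.float32)
    result = exec_net.infer({input_blob: dummy})
    print({name: out.shape for name, out in result.items()})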

Organization: Intel

Intel is one of the largest semiconductor vendors, producing CPUs and other chips. It has acquired Nervana and eASIC, and offers the Movidius USB-based Neural Compute Stick.

OpenVINO offers specific capabilities for CNN-based deep learning inference on the edge. It also offers a common API that supports heterogeneous execution across computer vision accelerators: CPUs, GPUs, the Intel® Movidius™ Neural Compute Stick, and FPGAs, on Linux and Windows. It supports all object detection models in the TensorFlow Model Zoo and all public models in the Open Neural Network Exchange (ONNX) format.
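Because that API is device-agnostic, retargeting a model to a different accelerator is, in principle, a one-line change. A minimal sketch, assuming the 2018-era IEPlugin interface and placeholder IR file names:

    from openvino.inference_engine import IENetwork, IEPlugin

    net = IENetwork(model='model.xml', weights='model.bin')

    # The same network loads onto a different accelerator simply by
    # naming a different plugin; 'HETERO:FPGA,CPU' splits layers
    # between an FPGA and the CPU as a fallback.
    for device in ('CPU', 'GPU', 'MYRIAD', 'HETERO:FPGA,CPU'):
        try:
            exec_net = IEPlugin(device=device).load(network=net)
            print(device, 'OK')
        except Exception as err:  # device not present on this machine
            print(device, 'unavailable:', err)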

Team: Intel Team

Takeaways

  • Using the toolkit to deploy a neural network and optimize models
  • Intel deep learning inference hardware platforms
  • The Intel® Deep Learning Deployment Toolkit—part of OpenVINO™—including its Model Optimizer (helps quantize pre-trained models) and its Inference Engine (runs seamlessly across CPU, GPU, FPGA, and VPU without requiring the entire framework to be loaded)
  • How the Inference Engine lets you add support for new layers implemented in C/C++ for CPU and OpenCL™ for GPU (see the sketch after this list)
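For the last point, unsupported layers are typically compiled into a shared extension library (C/C++ for CPU, OpenCL™ kernels for GPU) and registered with the plugin before the network is loaded. A minimal sketch, again assuming the 2018-era Python API; the library and model file names are hypothetical:

    from openvino.inference_engine import IENetwork, IEPlugin

    plugin = IEPlugin(device='CPU')

    # Register a custom-layer implementation compiled separately from
    # C/C++ sources (hypothetical library name).
    plugin.add_cpu_extension('libcustom_cpu_extension.so')

    # With the extension registered, a model containing the custom
    # layer loads like any other (placeholder IR file names).
    net = IENetwork(model='custom_layer_model.xml',
                    weights='custom_layer_model.bin')
    exec_net = plugin.load(network=net)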


Organiser: TiE Bangalore

