Agenda: Accelerate Deep Learning Inference using Intel OpenVINO™ Toolkit
Edge computing has been gaining popularity as the latency of cloud-based command and control, along with dropped connections, raises issues. AI or machine learning inference at the edge has a strong business case: it is faster, consumes less power, and is more reliable because network issues are bypassed. To make this possible, a new generation of AI chips is under development. The current approach uses FPGA blocks and specialized compilers that reduce common neural network models to a small footprint and run them efficiently at much lower power. GPUs are not needed at the edge.
Intel has merged many capabilities into its new OpenVINO toolkit, giving software developers a single toolkit for applications that need human-like vision capabilities. It does this by supporting deep learning, computer vision, and hardware acceleration with heterogeneous execution, all in one package.
Organization: Intel
Intel is one of the largest semiconductor vendors, offering CPUs and other chips. It has acquired Nervana and eASIC, and it offers the Movidius USB-based Neural Compute Stick.
OpenVINO offers specific capabilities for CNN-based deep learning inference at the edge. It also offers a common API that supports heterogeneous execution across computer vision accelerators: CPUs, GPUs, the Intel® Movidius™ Neural Compute Stick, and FPGAs, on Linux and Windows. It supports all object detection models in the TensorFlow Model Zoo and all public models in the Open Neural Network Exchange (ONNX) format.
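As an illustration of that common API, below is a minimal Python sketch of loading a model and running inference with the OpenVINO Inference Engine. It assumes a model that has already been converted to OpenVINO IR (model.xml/model.bin) with the Model Optimizer; the file names, the dummy input, and the "CPU" device name are placeholders for this example, and swapping the device name (e.g. "GPU", or "MYRIAD" for the Neural Compute Stick) would target a different accelerator with the same code.

    # Sketch only: file names and device choice are assumptions for illustration.
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()

    # Read a model already converted to IR by the Model Optimizer
    # (for example, from a TensorFlow Model Zoo or ONNX model).
    net = ie.read_network(model="model.xml", weights="model.bin")

    # Pick the target device: "CPU", "GPU", "MYRIAD" (Neural Compute Stick),
    # or an FPGA plugin -- the same application code runs on each.
    exec_net = ie.load_network(network=net, device_name="CPU")

    # Build a dummy input matching the network's expected shape.
    input_name = next(iter(net.input_info))
    n, c, h, w = net.input_info[input_name].input_data.shape
    image = np.zeros((n, c, h, w), dtype=np.float32)

    # Run synchronous inference and read back the output blob.
    result = exec_net.infer(inputs={input_name: image})
    output_name = next(iter(net.outputs))
    print(result[output_name].shape)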
Team: Intel Team
Takeaways
TiE Bangalore