Intel® Distribution of OpenVINO™ Toolkit

An open-source toolkit for optimizing and deploying deep learning models. Boost your AI deep-learning inference performance!


Overview

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference. It enables rapid development of applications and solutions for a wide variety of tasks, including emulation of human vision, generative AI, large language models (LLMs), automatic speech recognition, natural language processing, and recommendation systems. Built on the latest generations of artificial neural networks, including convolutional neural networks (CNNs), LLMs, and recurrent and attention-based networks, the toolkit extends computer vision and non-vision workloads across Intel® hardware, maximizing performance. It accelerates applications with high-performance deep learning inference deployed from edge to cloud.

Highlights

  • Deploy High-Performance Deep Learning Inference. Optimize and deploy deep learning solutions across multiple Intel® platforms: Intel-powered CPUs, integrated GPUs, Intel discrete GPUs, Intel NPUs, and FPGAs.

  • Leverage pre-optimized, open-source pre-trained models, code samples, and demos from the Open Model Zoo, with expanded generative AI coverage, Large Language Model (LLM) support, and additional model compression techniques.

  • This AMI includes OpenVINO and a Jupyter interface for running OpenVINO Notebooks.
