Intel
A multinational corporation and technology giant known for designing and manufacturing a wide range of computing hardware, including microprocessors and other semiconductor chips, for various devices and systems.
Intel® AI for Enterprise Inference - Qwen3-14B
This deployment package enables seamless hosting of the Qwen/Qwen3-14B language model on Intel® Xeon® processors using the vLLM CPU-optimized Docker image. Designed for efficient inference in CPU-only environments, this solution leverages vLLM's lightweight serving engine.
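As a rough sketch of how such a package is typically launched, the vLLM OpenAI-compatible server image can be run against the model in a CPU-only environment. The image tag, port, and flags below are assumptions for illustration; the package's own documentation gives the exact values.

```shell
# Hypothetical deployment sketch (image tag and flags are assumptions):
# start the vLLM serving container with Qwen/Qwen3-14B on CPU.
docker run --rm -p 8000:8000 \
  -e HF_TOKEN=<your-huggingface-token> \
  vllm/vllm-openai:latest \
  --model Qwen/Qwen3-14B \
  --device cpu

# The server then exposes an OpenAI-compatible API on port 8000:
curl http://localhost:8000/v1/models
```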
Intel® Scenescape
Intel® SceneScape is a multimodal scene intelligence software framework for monitoring and tracking use cases, creating a fabric of interconnected, intelligent scenes.
Clear Linux OS
A reference Linux distribution optimized for Intel® architecture.
Open Platform for Enterprise AI - OPEA
OPEA (Open Platform for Enterprise AI) is an AI inference and fine-tuning microservice framework that enables the creation and evaluation of open, configurable, and composable generative AI solutions.
Intel® Geti™
Build computer vision models in a fraction of the time and with less data.
Intel® AI for Enterprise Inference - Llama-3.1-8B-Instruct
This deployment package enables seamless hosting of the meta-llama/Llama-3.1-8B-Instruct language model on Intel® Xeon® processors using the vLLM CPU-optimized Docker image.
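Because vLLM serves an OpenAI-compatible HTTP API, a hosted model like this can be queried with a plain JSON POST. A minimal client-side sketch (the endpoint URL and port are assumptions; only the model name comes from this package):

```python
import json

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> str:
    """Build an OpenAI-style /v1/chat/completions request body as JSON."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

body = build_chat_request("meta-llama/Llama-3.1-8B-Instruct", "Hello!")
# POST this body to http://<host>:8000/v1/chat/completions with
# Content-Type: application/json (e.g. via curl or the requests library).
print(json.loads(body)["model"])
```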
PostgreSQL Optimized by Intel®
Deploy an AI-ready PostgreSQL instance optimized by Intel® on Intel® Xeon® instances, with up to 2.4x performance gains over default PostgreSQL.
Intel® Tiber™ AI Studio
A comprehensive development environment that provides tools and resources to help developers build, train, and deploy AI models.
Intel® Distribution of OpenVINO™ Toolkit
An open-source toolkit for optimizing and deploying deep learning models. Boost your AI deep-learning inference performance!
Intel® AI for Enterprise Inference - Mistral-7B-Instruct-v0.3
This deployment package enables seamless hosting of the mistralai/Mistral-7B-Instruct-v0.3 language model on Intel® Xeon® processors using the vLLM CPU-optimized Docker image.