
voyage-4-lite Embedding Model

Text embedding model optimized for general-purpose retrieval quality, latency, and cost for AI applications. 32K context length.

Product Description

Overview

Text embedding models are neural networks that transform texts into numerical vectors. They are a crucial building block for semantic search/retrieval systems and retrieval-augmented generation (RAG), and they largely determine retrieval quality.

voyage-4-lite is a lightweight, general-purpose embedding model optimized for low latency and cost. Enabled by Matryoshka learning and quantization-aware training, voyage-4-lite supports embeddings in 2048, 1024, 512, and 256 dimensions, with multiple quantization options.
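Matryoshka-trained embeddings can be shortened by simply keeping a leading prefix of the vector and renormalizing, trading a small amount of quality for memory and speed. The sketch below illustrates that operation with NumPy on a synthetic vector; it is an illustration of the technique, not Voyage's implementation.

```python
import numpy as np

def truncate_embedding(vec: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` coordinates and L2-renormalize,
    as Matryoshka-trained models permit."""
    head = vec[:dim]
    return head / np.linalg.norm(head)

# Synthetic stand-in for a full 2048-dim embedding.
rng = np.random.default_rng(0)
full = rng.standard_normal(2048).astype(np.float32)
full /= np.linalg.norm(full)

# The supported dimensions form a nested family of the same vector.
for d in (2048, 1024, 512, 256):
    short = truncate_embedding(full, d)
    print(d, short.shape, round(float(np.linalg.norm(short)), 3))
```

A 256-dimension prefix stores 8x fewer values per document than the full 2048-dimension vector, which is why this matters for large vector indexes.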

Learn more about voyage-4-lite here: https://blog.voyageai.com/2026/01/15/voyage-4 

Highlights

  • Lightweight, general-purpose embedding model optimized for low latency and cost.

  • Supports embeddings of 2048, 1024, 512, and 256 dimensions and offers multiple quantization options, including float (32-bit floating point), int8 (8-bit signed integer), uint8 (8-bit unsigned integer), binary (bit-packed int8), and ubinary (bit-packed uint8).

  • 32K token context length.
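To make the listed output types concrete, the toy quantizers below show how a normalized float embedding maps to each dtype, with the binary variants packing one bit per dimension (so 256 dimensions fit in 32 bytes). These are simplified stand-ins for illustration, not the model's actual quantization scheme.

```python
import numpy as np

def quantize(vec: np.ndarray, dtype: str) -> np.ndarray:
    """Toy quantizers mirroring the listed output dtypes (illustrative only)."""
    if dtype == "float":
        return vec.astype(np.float32)
    if dtype == "int8":    # scale values in [-1, 1] to [-127, 127]
        return np.clip(np.round(vec * 127), -127, 127).astype(np.int8)
    if dtype == "uint8":   # shift and scale [-1, 1] to [0, 255]
        return np.clip(np.round((vec + 1) * 127.5), 0, 255).astype(np.uint8)
    if dtype in ("binary", "ubinary"):  # 1 bit per dimension, 8 packed per byte
        bits = (vec > 0).astype(np.uint8)
        packed = np.packbits(bits)
        return packed.view(np.int8) if dtype == "binary" else packed
    raise ValueError(f"unknown dtype: {dtype}")

# Synthetic 256-dim embedding.
rng = np.random.default_rng(0)
emb = rng.standard_normal(256).astype(np.float32)
emb /= np.linalg.norm(emb)

for dt in ("float", "int8", "uint8", "binary", "ubinary"):
    q = quantize(emb, dt)
    print(dt, q.dtype, q.shape)
```

Relative to float32, int8/uint8 cut storage 4x and the bit-packed types 32x, which is the usual motivation for offering them alongside the lower Matryoshka dimensions.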
