
voyage-4-large Embedding Model

State-of-the-art text embedding model with the best general-purpose and multilingual retrieval quality. 32K context length.

Product Description

Overview

Text embedding models are neural networks that transform text into numerical vectors. They are a crucial building block for semantic search/retrieval systems and retrieval-augmented generation (RAG), and they largely determine retrieval quality.
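To make this concrete, retrieval over embeddings usually means ranking documents by cosine similarity to the query vector. A minimal sketch in plain Python, using tiny made-up vectors in place of real model output (the texts and values are illustrative only):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy embeddings standing in for model outputs (hypothetical values).
docs = {
    "cat care": [0.9, 0.1, 0.0],
    "tax law":  [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]

# Retrieve the document whose embedding is nearest to the query.
best = max(docs, key=lambda d: cosine(query, docs[d]))
# best == "cat care"
```

In a real system the vectors would come from the embedding model and the ranking would run over a vector index rather than a Python dict, but the similarity computation is the same.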

voyage-4-large is a state-of-the-art general-purpose and multilingual embedding model optimized for retrieval quality. Enabled by Matryoshka learning and quantization-aware training, voyage-4-large supports embeddings in 2048, 1024, 512, and 256 dimensions, with multiple quantization options.
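The practical payoff of Matryoshka learning is that the leading dimensions of a full embedding already form a usable lower-dimensional embedding, so one can shorten vectors by truncating and L2-renormalizing rather than re-embedding. A minimal sketch in plain Python (the toy 8-dimensional vector stands in for a real 2048-dimensional embedding):

```python
import math

def truncate_embedding(vec, dim):
    """Truncate a Matryoshka-trained embedding to its first `dim`
    dimensions and L2-renormalize so it remains a unit vector."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

# Toy stand-in for a full-length embedding (hypothetical values).
full = [0.5, -0.25, 0.125, 0.0625, 0.03, -0.01, 0.002, 0.001]

short = truncate_embedding(full, 4)
# `short` has 4 dimensions and unit L2 norm, ready for cosine search.
```

This is why one model can serve 2048-, 1024-, 512-, and 256-dimensional use cases: smaller vectors cut storage and search cost, trading a little retrieval quality.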

Learn more about voyage-4-large here: https://blog.voyageai.com/2026/01/15/voyage-4 

Highlights

  • State-of-the-art general-purpose and multilingual embedding optimized for retrieval quality.

  • Supports embeddings of 2048, 1024, 512, and 256 dimensions and offers multiple quantization options, including float (32-bit floating point), int8 (8-bit signed integer), uint8 (8-bit unsigned integer), binary (bit-packed int8), and ubinary (bit-packed uint8).

  • 32K token context length.
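The bit-packed options compress each dimension to a single bit, so a 2048-dimensional embedding fits in 256 bytes and can be compared with Hamming distance. A rough sketch of the idea in plain Python (the sign-based packing and bit order here are illustrative assumptions, not the API's exact scheme):

```python
def binary_quantize(vec):
    """Pack the sign bits of a float vector into bytes, one bit per
    dimension (1 if the component is positive, else 0), 8 dimensions
    per byte, most-significant bit first."""
    out = bytearray()
    for i in range(0, len(vec), 8):
        byte = 0
        for j, x in enumerate(vec[i:i + 8]):
            if x > 0:
                byte |= 1 << (7 - j)
        out.append(byte)
    return bytes(out)

def hamming(a, b):
    """Hamming distance between two packed vectors, a cheap proxy
    for distance between the original float embeddings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# Two toy 8-dim embeddings (hypothetical values) -> 1 byte each.
q1 = binary_quantize([0.3, -0.1, 0.7, 0.2, -0.5, 0.4, -0.2, 0.9])
q2 = binary_quantize([0.2, 0.1, 0.6, -0.3, -0.4, 0.5, -0.1, 0.8])
# hamming(q1, q2) counts the dimensions where the two signs differ.
```

Quantization-aware training is what keeps retrieval quality high under such aggressive compression: the model learns embeddings that remain discriminative after the bits are packed.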
