Details
- Type: Task
- Resolution: Unresolved
- Priority: Major
Description
We need to include a description of Vector Search in the "What's new in 7.6" section. Here's the release note they're using for Capella:
Couchbase Server 7.6 in Capella introduces Vector Search to enable AI integration, semantic search, and the RAG framework. This is done via a developer-friendly vector indexing engine that exposes a vector database and search functionality. With Couchbase Capella Vector Search, you can enable fast and highly accurate semantic search, ground LLM responses in relevant data to reduce hallucinations, and enhance or enable use cases like personalized searches in e-commerce and media & entertainment, product recommendations, fraud detection, and reverse image search. You can also enable full access to an AI ecosystem with a LangChain integration, the most popular open-source framework for LLM-driven applications.
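To make the "semantic search" claim above concrete, here is a minimal, illustrative sketch of what a vector search does conceptually: documents are represented by embedding vectors, and a query is answered by ranking documents by vector similarity. This is not the Couchbase API; the toy 3-d embeddings and the cosine-similarity ranking are assumptions for illustration only (real embeddings come from an embedding model, and Couchbase's index handles the ranking efficiently).

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product normalized by vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query_vec, docs, k=2):
    # docs: mapping of document id -> embedding vector.
    # Rank all documents by similarity to the query and return the top k ids.
    ranked = sorted(docs.items(),
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy corpus: three documents with placeholder 3-d embeddings.
docs = {
    "doc1": [1.0, 0.0, 0.0],
    "doc2": [0.9, 0.1, 0.0],
    "doc3": [0.0, 1.0, 0.0],
}

print(nearest([1.0, 0.05, 0.0], docs, k=2))  # doc1 and doc2 rank highest
```

A brute-force scan like this is O(number of documents) per query; the point of a vector indexing engine such as the one described above is to return approximately the same top-k results without scanning everything.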
A Vector Search vector database includes:
- Standard Couchbase vertical/horizontal scaling
- Indexing capable of efficient Insert/Update/Removal of Items (or documents)
- Storage of raw Embedding Vectors in the Data Service in the documents themselves
- Querying Vector Indexes (REST and UI via a JSON object/fragment, Couchbase SDKs, and SQL++)
- SQL++/N1QL integration
- Third-party framework integration: LangChain (later LlamaIndex and others)
- Full support for Replicas, Partitions, and file-based Rebalance
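As a sketch of the "JSON object/fragment" query path listed above, a k-nearest-neighbour Search request might look like the following. The field name `vector_field`, the embedding values, and `k` are placeholders, and the exact schema should be confirmed against the 7.6 Search documentation:

```json
{
  "fields": ["*"],
  "knn": [
    {
      "field": "vector_field",
      "vector": [0.12, -0.47, 0.83],
      "k": 5
    }
  ]
}
```

The same JSON fragment shape is what the Search UI and REST endpoint accept; the SDKs and SQL++ integration listed above build an equivalent request programmatically.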
To start using Vector Search in the Capella UI, click Data Tools > Search.