.github/workflows/pypi-publish.yaml (vendored, new file)
@@ -0,0 +1,42 @@
+# This workflow will upload a Python Package using Twine when a release is created
+# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
+
+# This workflow uses actions that are not certified by GitHub.
+# They are provided by a third-party and are governed by
+# separate terms of service, privacy policy, and support
+# documentation.
+
+name: PyPI Publish
+
+on:
+  workflow_dispatch:
+  push:
+    # Pattern matched against refs/tags
+    tags:
+      - 'v*' # Push events to every version tag
+
+jobs:
+  deploy:
+
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/checkout@v2
+
+      - name: Set up Python
+        uses: actions/setup-python@v2
+        with:
+          python-version: '3.10.x'
+
+      - name: Install dependencies
+        run: |
+          python -m pip install uv
+          uv sync
+
+      - name: Build package
+        run: uv build
+
+      - name: Publish package
+        run: uv publish
+        env:
+          UV_PUBLISH_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
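A quick way to sanity-check the `tags: - 'v*'` trigger above is to emulate the glob locally. This sketch uses Python's `fnmatch`, which behaves like GitHub's filter for a simple pattern such as `v*`; the helper name `triggers_publish` is made up for illustration:

```python
from fnmatch import fnmatch

def triggers_publish(ref: str) -> bool:
    # GitHub matches the tag pattern against the tag name,
    # i.e. the part of the ref after "refs/tags/".
    prefix = "refs/tags/"
    return ref.startswith(prefix) and fnmatch(ref[len(prefix):], "v*")

print(triggers_publish("refs/tags/v0.5.2"))  # True
print(triggers_publish("refs/heads/main"))   # False
```

So pushing a tag like `v0.5.2` starts the deploy job, while ordinary branch pushes do not.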
README.md
@@ -1,4 +1,5 @@
 # mcp-server-qdrant: A Qdrant MCP server
+[](https://smithery.ai/protocol/mcp-server-qdrant)
+
 > The [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you’re building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need.
 
@@ -38,6 +39,14 @@ uv run mcp-server-qdrant \
     --fastembed-model-name "sentence-transformers/all-MiniLM-L6-v2"
 ```
+
+### Installing via Smithery
+
+To install Qdrant MCP Server for Claude Desktop automatically via [Smithery](https://smithery.ai/protocol/mcp-server-qdrant):
+
+```bash
+npx @smithery/cli install mcp-server-qdrant --client claude
+```
+
 ## Usage with Claude Desktop
 
 To use this server with the Claude Desktop app, add the following configuration to the "mcpServers" section of your `claude_desktop_config.json`:
@@ -69,14 +78,38 @@ By default, the server will use the `sentence-transformers/all-MiniLM-L6-v2` emb
 For the time being, only [FastEmbed](https://qdrant.github.io/fastembed/) models are supported, and you can change it
 by passing the `--fastembed-model-name` argument to the server.
 
-### Environment Variables
+### Using the local mode of Qdrant
+
+To use the local mode of Qdrant, you can specify the path to the database using the `--qdrant-local-path` argument:
+
+```json
+{
+  "qdrant": {
+    "command": "uvx",
+    "args": [
+      "mcp-server-qdrant",
+      "--qdrant-local-path",
+      "/path/to/qdrant/database",
+      "--collection-name",
+      "your_collection_name"
+    ]
+  }
+}
+```
+
+It will run Qdrant local mode inside the same process as the MCP server, although this is not recommended for production.
+
+## Environment Variables
+
 The configuration of the server can be also done using environment variables:
 
-- `QDRANT_URL`: URL of the Qdrant server
+- `QDRANT_URL`: URL of the Qdrant server, e.g. `http://localhost:6333`
 - `QDRANT_API_KEY`: API key for the Qdrant server
 - `COLLECTION_NAME`: Name of the collection to use
 - `FASTEMBED_MODEL_NAME`: Name of the FastEmbed model to use
+- `QDRANT_LOCAL_PATH`: Path to the local Qdrant database
+
+You cannot provide `QDRANT_URL` and `QDRANT_LOCAL_PATH` at the same time.
+
 ## License
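As a sanity check, the local-mode `claude_desktop_config.json` entry shown in the README can be parsed and validated with a few lines of Python. This is only a sketch; the assertion simply encodes the documented rule that a URL and a local path are mutually exclusive:

```python
import json

# The local-mode server entry from the README, parsed verbatim.
config = json.loads("""
{
  "qdrant": {
    "command": "uvx",
    "args": [
      "mcp-server-qdrant",
      "--qdrant-local-path",
      "/path/to/qdrant/database",
      "--collection-name",
      "your_collection_name"
    ]
  }
}
""")

args = config["qdrant"]["args"]
# Exactly one of --qdrant-url / --qdrant-local-path may appear.
assert ("--qdrant-url" in args) ^ ("--qdrant-local-path" in args)
# Each flag's value follows it in the argument list.
print(args[args.index("--qdrant-local-path") + 1])  # /path/to/qdrant/database
```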
pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "mcp-server-qdrant"
-version = "0.5.1"
+version = "0.5.2"
 description = "MCP server for retrieving context from a Qdrant vector database"
 readme = "README.md"
 requires-python = ">=3.10"
qdrant.py
@@ -9,23 +9,25 @@ class QdrantConnector:
     :param qdrant_api_key: The API key to use for the Qdrant server.
     :param collection_name: The name of the collection to use.
     :param fastembed_model_name: The name of the FastEmbed model to use.
+    :param qdrant_local_path: The path to the storage directory for the Qdrant client, if local mode is used.
     """
 
     def __init__(
         self,
-        qdrant_url: str,
+        qdrant_url: Optional[str],
         qdrant_api_key: Optional[str],
         collection_name: str,
         fastembed_model_name: str,
+        qdrant_local_path: Optional[str] = None,
     ):
-        self._qdrant_url = qdrant_url.rstrip("/")
+        self._qdrant_url = qdrant_url.rstrip("/") if qdrant_url else None
         self._qdrant_api_key = qdrant_api_key
         self._collection_name = collection_name
         self._fastembed_model_name = fastembed_model_name
         # For the time being, FastEmbed models are the only supported ones.
         # A list of all available models can be found here:
         # https://qdrant.github.io/fastembed/examples/Supported_Models/
-        self._client = AsyncQdrantClient(qdrant_url, api_key=qdrant_api_key)
+        self._client = AsyncQdrantClient(location=qdrant_url, api_key=qdrant_api_key, path=qdrant_local_path)
         self._client.set_model(fastembed_model_name)
 
     async def store_memory(self, information: str):
@@ -40,10 +42,14 @@ class QdrantConnector:
 
     async def find_memories(self, query: str) -> list[str]:
         """
-        Find memories in the Qdrant collection.
+        Find memories in the Qdrant collection. If there are no memories found, an empty list is returned.
         :param query: The query to use for the search.
         :return: A list of memories found.
         """
+        collection_exists = await self._client.collection_exists(self._collection_name)
+        if not collection_exists:
+            return []
+
         search_results = await self._client.query(
             self._collection_name,
             query_text=query,
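The effect of the new `collection_exists` guard can be sketched without a running Qdrant instance. `StubClient` below is a made-up stand-in for `AsyncQdrantClient`, kept just detailed enough to show that querying a missing collection now yields an empty list instead of an error:

```python
import asyncio

class StubClient:
    """Hypothetical stand-in for AsyncQdrantClient (illustration only)."""
    def __init__(self, existing_collections):
        self._existing = set(existing_collections)

    async def collection_exists(self, name: str) -> bool:
        return name in self._existing

    async def query(self, collection_name, query_text):
        return [f"memory matching {query_text!r}"]

async def find_memories(client, collection_name: str, query: str) -> list[str]:
    # The guard added in the diff: bail out before querying a missing collection.
    if not await client.collection_exists(collection_name):
        return []
    return await client.query(collection_name, query_text=query)

print(asyncio.run(find_memories(StubClient([]), "memories", "greeting")))  # []
print(asyncio.run(find_memories(StubClient(["memories"]), "memories", "greeting")))
```

Local mode starts with no collections on a fresh path, so this guard is what makes the first `find_memories` call safe.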
server.py
@@ -12,10 +12,11 @@ from .qdrant import QdrantConnector
 
 
 def serve(
-    qdrant_url: str,
+    qdrant_url: Optional[str],
     qdrant_api_key: Optional[str],
     collection_name: str,
     fastembed_model_name: str,
+    qdrant_local_path: Optional[str] = None,
 ) -> Server:
     """
     Instantiate the server and configure tools to store and find memories in Qdrant.
@@ -23,11 +24,12 @@ def serve(
     :param qdrant_api_key: The API key to use for the Qdrant server.
     :param collection_name: The name of the collection to use.
     :param fastembed_model_name: The name of the FastEmbed model to use.
+    :param qdrant_local_path: The path to the storage directory for the Qdrant client, if local mode is used.
     """
     server = Server("qdrant")
 
     qdrant = QdrantConnector(
-        qdrant_url, qdrant_api_key, collection_name, fastembed_model_name
+        qdrant_url, qdrant_api_key, collection_name, fastembed_model_name, qdrant_local_path
     )
 
     @server.list_tools()
@@ -112,7 +114,7 @@ def serve(
 @click.option(
     "--qdrant-url",
     envvar="QDRANT_URL",
-    required=True,
+    required=False,
     help="Qdrant URL",
 )
 @click.option(
@@ -134,12 +136,23 @@ def serve(
     help="FastEmbed model name",
     default="sentence-transformers/all-MiniLM-L6-v2",
 )
+@click.option(
+    "--qdrant-local-path",
+    envvar="QDRANT_LOCAL_PATH",
+    required=False,
+    help="Qdrant local path",
+)
 def main(
-    qdrant_url: str,
+    qdrant_url: Optional[str],
     qdrant_api_key: str,
     collection_name: Optional[str],
     fastembed_model_name: str,
+    qdrant_local_path: Optional[str],
 ):
+    # XOR of url and local path, since we accept only one of them
+    if not (bool(qdrant_url) ^ bool(qdrant_local_path)):
+        raise ValueError("Exactly one of qdrant-url or qdrant-local-path must be provided")
+
     async def _run():
         async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
             server = serve(
@@ -147,6 +160,7 @@ def main(
                 qdrant_api_key,
                 collection_name,
                 fastembed_model_name,
+                qdrant_local_path,
             )
             await server.run(
                 read_stream,
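The `bool(qdrant_url) ^ bool(qdrant_local_path)` check added to `main` accepts a configuration only when exactly one location source is given, rejecting both the neither-set and both-set cases. A minimal standalone sketch of the same rule (the function name `validate_location` is illustrative, not part of the diff):

```python
def validate_location(qdrant_url, qdrant_local_path):
    # XOR of url and local path: exactly one of them must be provided.
    if not (bool(qdrant_url) ^ bool(qdrant_local_path)):
        raise ValueError("Exactly one of qdrant-url or qdrant-local-path must be provided")

validate_location("http://localhost:6333", None)  # ok: only the URL is set
validate_location(None, "/tmp/qdrant")            # ok: only the local path is set

for bad in ((None, None), ("http://localhost:6333", "/tmp/qdrant")):
    try:
        validate_location(*bad)
    except ValueError as exc:
        print(exc)  # raised for both invalid combinations
```

Because both options are `required=False` with an `envvar`, this runtime check is what replaces click's own `required=True` enforcement on `--qdrant-url`.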