A Python library for interacting with LLMs using mcp.run tools
- Ollama: https://ollama.com/
- Claude: https://www.anthropic.com/api
- OpenAI: https://openai.com/api/
- Gemini: https://ai.google.dev/
- Llamafile: https://github.com/Mozilla-Ocho/llamafile
- Real-time chat interface with AI models
- Tool suggestion and execution within conversations
- Support for both local and cloud-based AI providers
- uv
- npm
- ollama (optional)
You will need to get an mcp.run session ID by running:
npx --yes -p @dylibso/mcpx gen-session --write
This will generate a new session and write the session ID to a configuration file that can be used by mcpx-py.
If you need to store the session ID in an environment variable, you can run gen-session without the --write flag:
npx --yes -p @dylibso/mcpx gen-session
which should output something like:
Login successful!
Session: kabA7w6qH58H7kKOQ5su4v3bX_CeFn4k.Y4l/s/9dQwkjv9r8t/xZFjsn2fkLzf+tkve89P1vKhQ
Then set the MCP_RUN_SESSION_ID environment variable:
$ export MCP_RUN_SESSION_ID=kabA7w6qH58H7kKOQ5su4v3bX_CeFn4k.Y4l/s/9dQwkjv9r8t/xZFjsn2fkLzf+tkve89P1vKhQ
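If you want to confirm from Python that the session ID is visible to your process before running any examples, a standard-library check is enough; nothing below is part of the mcpx-py API:

import os

# The variable name matches the export above.
assert os.environ.get("MCP_RUN_SESSION_ID"), "MCP_RUN_SESSION_ID is not set"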
Using uv:
uv add mcpx-py
Or pip:
pip install mcpx-py
import asyncio

from mcpx_py import Chat, Claude

llm = Chat(Claude)
# Or OpenAI
# from mcpx_py import OpenAI
# llm = Chat(OpenAI)
# Or Ollama
# from mcpx_py import Ollama
# llm = Chat(Ollama)
# Or Gemini
# from mcpx_py import Gemini
# llm = Chat(Gemini)

async def main():
    # Prompt Claude and iterate over the streamed results
    async for response in llm.send_message(
        "summarize the contents of example.com"
    ):
        print(response)

asyncio.run(main())
More examples can be found in the examples/ directory.
uv tool install mcpx-py
From git:
uv tool install git+https://github.com/dylibso/mcpx-py
Or from the root of the repo:
uv tool install .
mcpx-client can also be executed without being installed using uvx:
uvx --from git+https://github.com/dylibso/mcpx-py mcpx-client
Get usage/help:
mcpx-client --help
Chat with an LLM:
mcpx-client chat
List available tools:
mcpx-client list
Call a tool:
mcpx-client tool eval-js '{"code": "2+2"}'
- Sign up for an Anthropic API account at https://console.anthropic.com
- Get your API key from the console
- Set the environment variable:
export ANTHROPIC_API_KEY=your_key_here
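With the key exported, the Claude provider from the chat example above can be used directly; here is a minimal sketch (the environment check is illustrative, not part of the mcpx-py API):

import asyncio
import os

from mcpx_py import Chat, Claude

# Fail early with a clear message if the key was not exported.
if "ANTHROPIC_API_KEY" not in os.environ:
    raise RuntimeError("ANTHROPIC_API_KEY is not set")

async def main():
    llm = Chat(Claude)
    async for response in llm.send_message("ping"):
        print(response)

asyncio.run(main())

The OpenAI and Gemini providers below follow the same pattern with their respective key variables.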
- Create an OpenAI account at https://platform.openai.com
- Generate an API key in your account settings
- Set the environment variable:
export OPENAI_API_KEY=your_key_here
- Create a Gemini account at https://aistudio.google.com
- Generate an API key in your account settings
- Set the environment variable:
export GEMINI_API_KEY=your_key_here
- Install Ollama from https://ollama.ai
- Pull your desired model:
ollama pull llama3.2
- No API key needed - runs locally
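To confirm the local server is up and the model has been pulled before using the Ollama provider, you can query Ollama's documented HTTP API (default port 11434) with only the standard library:

import json
import urllib.request

# /api/tags lists the models that have been pulled locally.
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = [m["name"] for m in json.load(resp)["models"]]

print("pulled models:", models)  # expect something like "llama3.2:latest"

The chat example above then works unchanged with Chat(Ollama).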
- Download a Llamafile model from https://github.com/Mozilla-Ocho/llamafile/releases
- Make the file executable:
chmod +x your-model.llamafile
- Run in JSON API mode:
./your-model.llamafile --json-api --host 127.0.0.1 --port 8080
- Use with the OpenAI provider pointing to http://localhost:8080
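Because llamafile exposes an OpenAI-compatible endpoint, you can smoke-test it with the standard openai Python package before wiring it into the OpenAI provider; the placeholder API key and model name below follow llamafile's own documentation:

from openai import OpenAI

# The key is unused by llamafile but required by the client constructor.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

completion = client.chat.completions.create(
    model="LLaMA_CPP",  # placeholder model name used in llamafile's examples
    messages=[{"role": "user", "content": "Say hello"}],
)
print(completion.choices[0].message.content)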