GitHub Trending - Weekly
Your AI second brain. Self-hostable. Get answers from the web or your docs. Build custom agents, schedule automations, do deep research. Turn any online or local LLM into your personal, autonomous AI (gpt, claude, gemini, llama, qwen, mistral). Get started - free.
Docs • Web • App • Discord • Blog
New
- Start any message with /research to try out the experimental research mode with Khoj.
- Anyone can now create custom agents with tunable personality, tools and knowledge bases.
- Read about Khoj's excellent performance on modern retrieval and reasoning benchmarks.
Overview
Khoj is a personal AI app to extend your capabilities. It smoothly scales up from an on-device personal AI to a cloud-scale enterprise AI.
- Chat with any local or online LLM (e.g. llama3, qwen, gemma, mistral, gpt, claude, gemini).
- Get answers from the internet and your docs (including image, pdf, markdown, org-mode, word, notion files).
- Access it from your Browser, Obsidian, Emacs, Desktop, Phone or WhatsApp.
- Create agents with custom knowledge, persona, chat model and tools to take on any role.
- Automate away repetitive research. Get personal newsletters and smart notifications delivered to your inbox.
- Find relevant docs quickly and easily using our advanced semantic search.
- Generate images, talk out loud, play your messages.
- Khoj is open-source, self-hostable. Always.
- Run it privately on your computer or try it on our cloud app.
See it in action
Go to https://app.khoj.dev to see Khoj live.
Full feature list
You can see the full feature list here.
Self-Host
To get started with self-hosting Khoj, read the docs.
Enterprise
Khoj is available as a cloud service, on-premises, or as a hybrid solution. To learn more about Khoj Enterprise, visit our website.
Contributors
Cheers to our awesome contributors!
Made with contrib.rocks.
Interested in Contributing?
We are always looking for contributors to help us build new features, improve the project documentation, or fix bugs. If you're interested, please see our Contributing Guidelines and check out our Contributors Project Board.
10 Lessons to Get Started Building AI Agents
AI Agents for Beginners - A Course
10 Lessons teaching everything you need to know to start building AI Agents
Language Support
Getting Started
This course has 10 lessons covering the fundamentals of building AI Agents. Each lesson covers its own topic, so start wherever you like!
This course has multi-language support; see the available languages here.
If this is your first time building with Generative AI models, check out our Generative AI For Beginners course, which includes 21 lessons on building with GenAI.
Don't forget to star (⭐) this repo and fork it to run the code.
What You Need
Each lesson in this course includes code examples, which can be found in the code_samples folder. You can fork this repo to create your own copy.
The code examples in these exercises utilize Azure AI Foundry and GitHub Model Catalogs for interacting with language models (a usage sketch follows this list):
- GitHub Models - Free / Limited
- Azure AI Foundry - Azure Account Required
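As a rough illustration of the GitHub Models option, a chat completion can be requested with the azure-ai-inference Python package. This is a minimal sketch, not course code: the model id and the GITHUB_TOKEN environment variable are assumptions for illustration.

import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# GitHub Models exposes an OpenAI-compatible inference endpoint; a GitHub
# personal access token (assumed to be in GITHUB_TOKEN) serves as the credential.
client = ChatCompletionsClient(
    endpoint="https://models.inference.ai.azure.com",
    credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
)

response = client.complete(
    model="gpt-4o-mini",  # assumed model id; pick any model from the catalog
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="In one sentence, what is an AI agent?"),
    ],
)
print(response.choices[0].message.content)

The same client pattern is generally usable with Azure AI Foundry serverless endpoints by swapping in the endpoint URL and key.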
This course also uses the following AI Agent frameworks and services from Microsoft:
For more information on running the code for this course, go to the Course Setup.
Want to help?
Have suggestions, or found spelling or code errors? Raise an issue or create a pull request.
If you get stuck or have any questions about building AI Agents, join our Azure AI Community Discord.
Each lesson includes
- A written lesson located in the README and a short video
- Python code samples supporting Azure AI Foundry and GitHub Models (Free)
- Links to extra resources to continue your learning
Lessons
Lesson | Text & Code | Video | Extra Learning |
---|---|---|---|
Intro to AI Agents and Agent Use Cases | Link | Video | Link |
Exploring AI Agentic Frameworks | Link | Video | Link |
Understanding AI Agentic Design Patterns | Link | Video | Link |
Tool Use Design Pattern | Link | Video | Link |
Agentic RAG | Link | Video | Link |
Building Trustworthy AI Agents | Link | Video | Link |
Planning Design Pattern | Link | Video | Link |
Multi-Agent Design Pattern | Link | Video | Link |
Metacognition Design Pattern | Link | Video | Link |
AI Agents in Production | Link | Video | Link |
Multi-Language Support
Language | Code | Link to Translated README | Last Updated |
---|---|---|---|
Chinese (Simplified) | zh | Chinese Translation | 2025-03-24 |
Chinese (Traditional) | tw | Chinese Translation | 2025-03-28 |
Chinese (Hong Kong) | hk | Chinese (Hong Kong) Translation | 2025-03-28 |
French | fr | French Translation | 2025-03-28 |
Japanese | ja | Japanese Translation | 2025-03-28 |
Korean | ko | Korean Translation | 2025-03-28 |
Portuguese | pt | Portuguese Translation | 2025-03-28 |
Spanish | es | Spanish Translation | 2025-03-28 |
German | de | German Translation | 2025-03-28 |
Persian | fa | Persian Translation | 2025-03-28 |
Polish | pl | Polish Translation | 2025-03-28 |
Other Courses
Our team produces other courses! Check out:
- NEW Generative AI for Beginners using .NET
- Generative AI for Beginners
- ML for Beginners
- Data Science for Beginners
- AI for Beginners
- Cybersecurity for Beginners
- Web Dev for Beginners
- IoT for Beginners
- XR Development for Beginners
- Mastering GitHub Copilot for AI Paired Programming
- Mastering GitHub Copilot for C#/.NET Developers
- Choose Your Own Copilot Adventure
Community Thanks
Thanks to Shivam Goyal for contributing important code samples demonstrating Agentic RAG.
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third-parties' policies.
Vim-fork focused on extensibility and usability
Neovim is a project that seeks to aggressively refactor Vim in order to:
- Simplify maintenance and encourage contributions
- Split the work between multiple developers
- Enable advanced UIs without modifications to the core
- Maximize extensibility
See the Introduction wiki page and Roadmap for more information.
Features
- Modern GUIs
- API access from any language including C/C++, C#, Clojure, D, Elixir, Go, Haskell, Java/Kotlin, JavaScript/Node.js, Julia, Lisp, Lua, Perl, Python, Racket, Ruby, Rust
- Embedded, scriptable terminal emulator
- Asynchronous job control
- Shared data (shada) among multiple editor instances
- XDG base directories support
- Compatible with most Vim plugins, including Ruby and Python plugins
See :help nvim-features for the full list, and :help news for noteworthy changes in the latest version!
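For instance, the Python client pynvim can drive an embedded Nvim instance over the msgpack-RPC API. A minimal sketch, assuming pynvim is installed (pip install pynvim) and nvim is on your PATH:

import pynvim

# Start a headless, embedded Nvim and attach to it over msgpack-RPC.
nvim = pynvim.attach("child", argv=["nvim", "--embed", "--headless"])

nvim.command("edit /tmp/scratch.txt")        # run an ex command
nvim.current.buffer[:] = ["hello", "world"]  # edit the buffer through the API
print(nvim.eval("line('$')"))                # evaluate a Vimscript expression
nvim.quit()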
Install from package
Pre-built packages for Windows, macOS, and Linux are found on the Releases page.
Managed packages are in Homebrew, Debian, Ubuntu, Fedora, Arch Linux, Void Linux, Gentoo, and more!
Install from source
See BUILD.md and supported platforms for details.
The build is CMake-based, but a Makefile is provided as a convenience. After installing the dependencies, run the following commands.
make CMAKE_BUILD_TYPE=RelWithDebInfo
sudo make install
To install to a non-default location:
make CMAKE_BUILD_TYPE=RelWithDebInfo CMAKE_INSTALL_PREFIX=/full/path/
make install
CMake hints for inspecting the build:
- cmake --build build --target help lists all build targets.
- build/CMakeCache.txt (or cmake -LAH build/) contains the resolved values of all CMake variables.
- build/compile_commands.json shows the full compiler invocations for each translation unit.
Transitioning from Vim
See :help nvim-from-vim for instructions.
Project layout
├─ cmake/            CMake utils
├─ cmake.config/     CMake defines
├─ cmake.deps/       subproject to fetch and build dependencies (optional)
├─ runtime/          plugins and docs
├─ src/nvim/         application source code (see src/nvim/README.md)
│  ├─ api/           API subsystem
│  ├─ eval/          Vimscript subsystem
│  ├─ event/         event-loop subsystem
│  ├─ generators/    code generation (pre-compilation)
│  ├─ lib/           generic data structures
│  ├─ lua/           Lua subsystem
│  ├─ msgpack_rpc/   RPC subsystem
│  ├─ os/            low-level platform code
│  └─ tui/           built-in UI
└─ test/             tests (see test/README.md)
License
Neovim contributions since b17d96 are licensed under the Apache 2.0 license, except for contributions copied from Vim (identified by the vim-patch token). See LICENSE for details.
Vim is Charityware. You can use and copy it as much as you like, but you are encouraged to make a donation for needy children in Uganda. Please see the kcc section of the vim docs or visit the ICCF web site, available at these URLs:
https://iccf-holland.org/
https://www.vim.org/iccf/
https://www.iccf.nl/
You can also sponsor the development of Vim. Vim sponsors can vote for features. The money goes to Uganda anyway.
PlayStation 4 emulator for Windows, Linux and macOS written in C++
shadPS4
General information
shadPS4 is an early PlayStation 4 emulator for Windows, Linux and macOS written in C++.
If you encounter problems or have questions, do not hesitate to look at the Quickstart.
To verify that a game works, you can look at shadPS4 Game Compatibility.
To discuss shadPS4 development, suggest ideas or to ask for help, join our Discord server.
To get the latest news, go to our X (Twitter) or our website.
For those who'd like to donate to the project, we now have a Ko-fi page!
Status
[!IMPORTANT] shadPS4 is early in development, don't expect a flawless experience.
Currently, the emulator can successfully run games like Bloodborne, Dark Souls Remastered, Red Dead Redemption and many others.
Why
This project began for fun. Given our limited free time, it may take some time before shadPS4 can run more complex games, but we're committed to making small, regular updates.
Building
[!IMPORTANT] If you want to use shadPS4 to play your games, you don't have to follow the build instructions; you can simply download the emulator from either the Releases tab or the Actions tab.
Windows
Check the build instructions for Windows.
Linux
Check the build instructions for Linux.
macOS
Check the build instructions for macOS.
[!IMPORTANT] macOS users need at least macOS 15 on Apple Silicon-based Mac devices and at least macOS 14 on Intel-based Mac devices.
Debugging and reporting issues
For more information on how to test, debug and report issues with the emulator or games, read the Debugging documentation.
Keyboard and Mouse Mappings
[!NOTE] Some keyboards may also require you to hold the Fn key to use the F* keys. Mac users should use the Command key instead of Control, and need to use Command+F11 for full screen to avoid conflicting with system key bindings.
Button | Function |
---|---|
F10 | FPS Counter |
Ctrl+F10 | Video Debug Info |
F11 | Fullscreen |
F12 | Trigger RenderDoc Capture |
[!NOTE] Xbox and DualShock controllers work out of the box.
Controller button | Keyboard equivalent |
---|---|
LEFT AXIS UP | W |
LEFT AXIS DOWN | S |
LEFT AXIS LEFT | A |
LEFT AXIS RIGHT | D |
RIGHT AXIS UP | I |
RIGHT AXIS DOWN | K |
RIGHT AXIS LEFT | J |
RIGHT AXIS RIGHT | L |
TRIANGLE | Numpad 8 or C |
CIRCLE | Numpad 6 or B |
CROSS | Numpad 2 or N |
SQUARE | Numpad 4 or V |
PAD UP | UP |
PAD DOWN | DOWN |
PAD LEFT | LEFT |
PAD RIGHT | RIGHT |
OPTIONS | RETURN |
BACK BUTTON / TOUCH PAD | SPACE |
L1 | Q |
R1 | U |
L2 | E |
R2 | O |
L3 | X |
R3 | M |
Keyboard and mouse inputs can be customized in the settings menu by clicking the Controller button, and further details and help on controls are also found there. Custom bindings are saved per-game. Inputs support up to three keys per binding, mouse buttons, mouse movement mapped to joystick input, and more.
Main team
- georgemoralis
- raphaelthegreat
- psucien
- skmp
- wheremyfoodat
- raziel1000
- viniciuslrangel
- roamic
- poly
- squidbus
- frodo
Logo by Xphalnos.
Contributing
If you want to contribute, please look at the CONTRIBUTING.md file.
Open a PR and we'll check it :)
Translations
If you want to translate shadPS4 into your language, we use Crowdin.
Contributors
Special Thanks
A few noteworthy teams/projects who've helped us along the way are:
- Panda3DS: A multiplatform 3DS emulator from our co-author wheremyfoodat. They have been incredibly helpful in understanding and solving problems that came up from natively executing the x64 code of PS4 binaries.
- fpPS4: The fpPS4 team has assisted massively with understanding some of the more complex parts of the PS4 operating system and libraries, by helping with reverse engineering work and research.
- yuzu: Our shader compiler has been designed with yuzu's Hades compiler as a blueprint. This allowed us to focus on the challenges of emulating a modern AMD GPU while having a high-quality optimizing shader compiler implementation as a base.
- felix86: A new x86-64 → RISC-V Linux userspace emulator.
License
The fast, Pythonic way to build Model Context Protocol servers 🚀
FastMCP has been added to the official MCP SDK!
You can now find FastMCP as part of the official Model Context Protocol Python SDK:
github.com/modelcontextprotocol/python-sdk
Please note: this repository is no longer maintained.
Model Context Protocol (MCP) servers are a new, standardized way to provide context and tools to your LLMs, and FastMCP makes building MCP servers simple and intuitive. Create tools, expose resources, and define prompts with clean, Pythonic code:
# demo.py
from fastmcp import FastMCP

mcp = FastMCP("Demo 🚀")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b
That's it! Give Claude access to the server by running:
fastmcp install demo.py
FastMCP handles all the complex protocol details and server management, so you can focus on building great tools. It's designed to be high-level and Pythonic - in most cases, decorating a function is all you need.
Key features:
- Fast: High-level interface means less code and faster development
- Simple: Build MCP servers with minimal boilerplate
- Pythonic: Feels natural to Python developers
- Complete*: FastMCP aims to provide a full implementation of the core MCP specification
(*emphasis on aims)
FastMCP is under active development, as is the MCP specification itself. Core features are working but some advanced capabilities are still in progress.
Installation
We strongly recommend installing FastMCP with uv, as it is required for deploying servers:
uv pip install fastmcp
Note: on macOS, uv may need to be installed with Homebrew (brew install uv) in order to make it available to the Claude Desktop app.
Alternatively, to use the SDK without deploying, you may use pip:
pip install fastmcp
Quickstart
Let's create a simple MCP server that exposes a calculator tool and some data:
# server.py
from fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("Demo")

# Add an addition tool
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

# Add a dynamic greeting resource
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Get a personalized greeting"""
    return f"Hello, {name}!"
You can install this server in Claude Desktop and interact with it right away by running:
fastmcp install server.py
Alternatively, you can test it with the MCP Inspector:
fastmcp dev server.py
What is MCP?
The Model Context Protocol (MCP) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:
- Expose data through Resources (think of these sort of like GET endpoints; they are used to load information into the LLM's context)
- Provide functionality through Tools (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)
- Define interaction patterns through Prompts (reusable templates for LLM interactions)
- And more!
There is a low-level Python SDK available for implementing the protocol directly, but FastMCP aims to make that easier by providing a high-level, Pythonic interface.
Core Concepts
Server
The FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:
from fastmcp import FastMCP
# Create a named server
mcp = FastMCP("My App")
# Specify dependencies for deployment and development
mcp = FastMCP("My App", dependencies=["pandas", "numpy"])
Resources
Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects. Some examples:
- File contents
- Database schemas
- API responses
- System information
Resources can be static:
@mcp.resource("config://app")
def get_config() -> str:
    """Static configuration data"""
    return "App configuration here"
Or dynamic with parameters (FastMCP automatically handles these as MCP templates):
@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: str) -> str:
    """Dynamic user data"""
    return f"Profile data for user {user_id}"
Tools
Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects. They're similar to POST endpoints in a REST API.
Simple calculation example:
@mcp.tool()
def calculate_bmi(weight_kg: float, height_m: float) -> float:
    """Calculate BMI given weight in kg and height in meters"""
    return weight_kg / (height_m ** 2)
HTTP request example:
import httpx

@mcp.tool()
async def fetch_weather(city: str) -> str:
    """Fetch current weather for a city"""
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"https://api.weather.com/{city}"
        )
        return response.text
Complex input handling example:
from pydantic import BaseModel, Field
from typing import Annotated

class ShrimpTank(BaseModel):
    class Shrimp(BaseModel):
        name: Annotated[str, Field(max_length=10)]

    shrimp: list[Shrimp]

@mcp.tool()
def name_shrimp(
    tank: ShrimpTank,
    # You can use pydantic Field in function signatures for validation.
    extra_names: Annotated[list[str], Field(max_length=10)],
) -> list[str]:
    """List all shrimp names in the tank"""
    return [shrimp.name for shrimp in tank.shrimp] + extra_names
Prompts
Prompts are reusable templates that help LLMs interact with your server effectively. They're like "best practices" encoded into your server. A prompt can be as simple as a string:
@mcp.prompt()
def review_code(code: str) -> str:
    return f"Please review this code:\n\n{code}"
Or a more structured sequence of messages:
from fastmcp.prompts.base import Message, UserMessage, AssistantMessage

@mcp.prompt()
def debug_error(error: str) -> list[Message]:
    return [
        UserMessage("I'm seeing this error:"),
        UserMessage(error),
        AssistantMessage("I'll help debug that. What have you tried so far?"),
    ]
Images
FastMCP provides an Image class that automatically handles image data in your server:
from io import BytesIO

from fastmcp import FastMCP, Image
from PIL import Image as PILImage

@mcp.tool()
def create_thumbnail(image_path: str) -> Image:
    """Create a thumbnail from an image"""
    img = PILImage.open(image_path)
    img.thumbnail((100, 100))
    # Encode as PNG: img.tobytes() would return raw pixel data, not PNG bytes
    buffer = BytesIO()
    img.save(buffer, format="PNG")
    # FastMCP automatically handles conversion and MIME types
    return Image(data=buffer.getvalue(), format="png")

@mcp.tool()
def load_image(path: str) -> Image:
    """Load an image from disk"""
    # FastMCP handles reading and format detection
    return Image(path=path)
Images can be used as the result of both tools and resources.
Context
The Context object gives your tools and resources access to MCP capabilities. To use it, add a parameter annotated with fastmcp.Context:
from fastmcp import FastMCP, Context

@mcp.tool()
async def long_task(files: list[str], ctx: Context) -> str:
    """Process multiple files with progress tracking"""
    for i, file in enumerate(files):
        ctx.info(f"Processing {file}")
        await ctx.report_progress(i, len(files))
        # Read another resource if needed
        data = await ctx.read_resource(f"file://{file}")
    return "Processing complete"
The Context object provides:
- Progress reporting through report_progress()
- Logging via debug(), info(), warning(), and error()
- Resource access through read_resource()
- Request metadata via request_id and client_id
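A small sketch that exercises each of these capabilities (illustrative only: the tool itself is made up, and it reuses the config://app resource defined earlier):

from fastmcp import FastMCP, Context

mcp = FastMCP("Context Demo")

@mcp.tool()
async def summarize_config(ctx: Context) -> str:
    """Illustrative tool touching each Context capability listed above"""
    ctx.debug(f"Handling request {ctx.request_id}")   # request metadata
    config = await ctx.read_resource("config://app")  # resource access
    ctx.info("Loaded app configuration")              # logging
    await ctx.report_progress(1, 1)                   # progress reporting
    return f"Config length: {len(str(config))}"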
Running Your Server
There are three main ways to use your FastMCP server, each suited for different stages of development:
Development Mode (Recommended for Building & Testing)
The fastest way to test and debug your server is with the MCP Inspector:
fastmcp dev server.py
This launches a web interface where you can:
- Test your tools and resources interactively
- See detailed logs and error messages
- Monitor server performance
- Set environment variables for testing
During development, you can:
- Add dependencies with --with: fastmcp dev server.py --with pandas --with numpy
- Mount your local code for live updates: fastmcp dev server.py --with-editable .
Claude Desktop Integration (For Regular Use)
Once your server is ready, install it in Claude Desktop to use it with Claude:
fastmcp install server.py
Your server will run in an isolated environment with:
- Automatic installation of dependencies specified in your FastMCP instance: mcp = FastMCP("My App", dependencies=["pandas", "numpy"])
- Custom naming via --name: fastmcp install server.py --name "My Analytics Server"
- Environment variable management:
  # Set variables individually
  fastmcp install server.py -e API_KEY=abc123 -e DB_URL=postgres://...
  # Or load from a .env file
  fastmcp install server.py -f .env
Direct Execution (For Advanced Use Cases)
For advanced scenarios like custom deployments or running without Claude, you can execute your server directly:
from fastmcp import FastMCP

mcp = FastMCP("My App")

if __name__ == "__main__":
    mcp.run()
Run it with:
# Using the FastMCP CLI
fastmcp run server.py
# Or with Python/uv directly
python server.py
uv run python server.py
Note: When running directly, you are responsible for ensuring all dependencies are available in your environment. Any dependencies specified on the FastMCP instance are ignored.
Choose this method when you need:
- Custom deployment configurations
- Integration with other services
- Direct control over the server lifecycle
Server Object Names
All FastMCP commands will look for a server object called mcp, app, or server in your file. If you have a different object name or multiple servers in one file, use the syntax server.py:my_server:
# Using a standard name
fastmcp run server.py
# Using a custom name
fastmcp run server.py:my_custom_server
Examples
Here are a few examples of FastMCP servers. For more, see the examples/ directory.
Echo Server
A simple server demonstrating resources, tools, and prompts:
from fastmcp import FastMCP

mcp = FastMCP("Echo")

@mcp.resource("echo://{message}")
def echo_resource(message: str) -> str:
    """Echo a message as a resource"""
    return f"Resource echo: {message}"

@mcp.tool()
def echo_tool(message: str) -> str:
    """Echo a message as a tool"""
    return f"Tool echo: {message}"

@mcp.prompt()
def echo_prompt(message: str) -> str:
    """Create an echo prompt"""
    return f"Please process this message: {message}"
SQLite Explorer
A more complex example showing database integration:
from fastmcp import FastMCP
import sqlite3

mcp = FastMCP("SQLite Explorer")

@mcp.resource("schema://main")
def get_schema() -> str:
    """Provide the database schema as a resource"""
    conn = sqlite3.connect("database.db")
    schema = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type='table'"
    ).fetchall()
    return "\n".join(sql[0] for sql in schema if sql[0])

@mcp.tool()
def query_data(sql: str) -> str:
    """Execute SQL queries safely"""
    conn = sqlite3.connect("database.db")
    try:
        result = conn.execute(sql).fetchall()
        return "\n".join(str(row) for row in result)
    except Exception as e:
        return f"Error: {str(e)}"

@mcp.prompt()
def analyze_table(table: str) -> str:
    """Create a prompt template for analyzing tables"""
    return f"""Please analyze this database table:
Table: {table}
Schema:
{get_schema()}
What insights can you provide about the structure and relationships?"""
Contributing
Open Developer Guide
Prerequisites
FastMCP requires Python 3.10+ and uv.
Installation
For development, we recommend installing FastMCP with development dependencies, which includes various utilities the maintainers find useful.
git clone https://github.com/jlowin/fastmcp.git
cd fastmcp
uv sync --frozen --extra dev
To run the tests (e.g., in CI), you only need the testing dependencies:
uv sync --frozen --extra tests
Testing
Please make sure to test any new functionality. Your tests should be simple and atomic and anticipate change rather than cement complex patterns.
Run tests from the root directory:
pytest -vv
Formatting
FastMCP enforces a variety of required formats, which you can apply automatically with pre-commit.
Install the pre-commit hooks:
pre-commit install
The hooks will now run on every commit (as well as on every PR). To run them manually:
pre-commit run --all-files
Opening a Pull Request
Fork the repository and create a new branch:
git checkout -b my-branch
Make your changes and commit them:
git add . && git commit -m "My changes"
Push your changes to your fork:
git push origin my-branch
Feel free to reach out in a GitHub issue or discussion if you have any questions!
Go library for the WhatsApp web multidevice API
whatsmeow
whatsmeow is a Go library for the WhatsApp web multidevice API.
Discussion
Matrix room: #whatsmeow:maunium.net
For questions about the WhatsApp protocol (like how to send a specific type of message), you can also use the WhatsApp protocol Q&A section on GitHub discussions.
Usage
The godoc includes docs for all methods and event types. There's also a simple example at the top.
Features
Most core features are already present:
- Sending messages to private chats and groups (both text and media)
- Receiving all messages
- Managing groups and receiving group change events
- Joining via invite messages, using and creating invite links
- Sending and receiving typing notifications
- Sending and receiving delivery and read receipts
- Reading and writing app state (contact list, chat pin/mute status, etc)
- Sending and handling retry receipts if message decryption fails
- Sending status messages (experimental, may not work for large contact lists)
Things that are not yet implemented:
- Sending broadcast list messages (this is not supported on WhatsApp web either)
- Calls
A collection of MCP servers.
Awesome MCP Servers 
A curated list of awesome Model Context Protocol (MCP) servers.
What is MCP?
MCP is an open protocol that enables AI models to securely interact with local and remote resources through standardized server implementations. This list focuses on production-ready and experimental MCP servers that extend AI capabilities through file access, database connections, API integrations, and other contextual services.
Clients
Check out awesome-mcp-clients and glama.ai/mcp/clients.
[!TIP] Glama Chat is a multi-modal AI client with MCP support & AI gateway.
Tutorials
Community
Legend
- 🎖️ - official implementation
- programming language
  - 🐍 - Python codebase
  - 📇 - TypeScript codebase
  - 🏎️ - Go codebase
  - 🦀 - Rust codebase
  - #️⃣ - C# codebase
  - ☕ - Java codebase
- scope
  - ☁️ - Cloud Service
  - 🏠 - Local Service
  - 📟 - Embedded Systems
- operating system
  - 🍎 - For macOS
  - 🪟 - For Windows
  - 🐧 - For Linux
[!NOTE] Confused about Local 🏠 vs Cloud ☁️?
- Use local when the MCP server is talking to locally installed software, e.g. taking control of the Chrome browser.
- Use network when the MCP server is talking to remote APIs, e.g. a weather API.
Server Implementations
[!NOTE] We now have a web-based directory that is synced with the repository.
- Aggregators
- Art & Culture
- Browser Automation
- Cloud Platforms
- Code Execution
- Command Line
- Communication
- Customer Data Platforms
- Databases
- Data Platforms
- Developer Tools
- Embedded system
- File Systems
- Finance & Fintech
- Gaming
- Knowledge & Memory
- Location Services
- Marketing
- Monitoring
- Search & Data Extraction
- Security
- Sports
- Support & Service Management
- Translation Services
- Travel & Transportation
- Version Control
- Other Tools and Integrations
Aggregators
Servers for accessing many apps and tools through a single MCP server.
- PipedreamHQ/pipedream โ๏ธ ๐ - Connect with 2,500 APIs with 8,000+ prebuilt tools, and manage servers for your users, in your own app.
Art & Culture
Access and explore art collections, cultural heritage, and museum databases. Enables AI models to search and analyze artistic and cultural content.
- abhiemj/manim-mcp-server ๐ ๐ ๐ช ๐ง - A local MCP server that generates animations using Manim.
- burningion/video-editing-mcp ๐ - Add, Analyze, Search, and Generate Video Edits from your Video Jungle Collection
- djalal/quran-mcp-server ๐ ๐ MCP server to interact with Quran.com corpus via the official REST API v4.
- r-huijts/rijksmuseum-mcp ๐ โ๏ธ - Rijksmuseum API integration for artwork search, details, and collections
- r-huijts/oorlogsbronnen-mcp ๐ โ๏ธ - Oorlogsbronnen (War Sources) API integration for accessing historical WWII records, photographs, and documents from the Netherlands (1940-1945)
- samuelgursky/davinci-resolve-mcp ๐ - MCP server integration for DaVinci Resolve providing powerful tools for video editing, color grading, media management, and project control
- yuna0x0/anilist-mcp ๐ โ๏ธ - A MCP server integrating AniList API for anime and manga information
Browser Automation
Web content access and automation capabilities. Enables searching, scraping, and processing web content in AI-friendly formats.
- 34892002/bilibili-mcp-js ๐ ๐ - A MCP server that supports searching for Bilibili content. Provides LangChain integration examples and test scripts.
- automatalabs/mcp-server-playwright ๐ - An MCP server for browser automation using Playwright
- blackwhite084/playwright-plus-python-mcp ๐ - An MCP Python server using Playwright for browser automation, more suitable for LLMs
- browserbase/mcp-server-browserbase ๐๏ธ ๐ - Automate browser interactions in the cloud (e.g. web navigation, data extraction, form filling, and more)
- co-browser/browser-use-mcp-server ๐๐ฎ - browser-use packaged as an MCP server with SSE transport. Includes a Dockerfile to run Chromium in Docker plus a VNC server.
- executeautomation/playwright-mcp-server ๐ - An MCP server using Playwright for browser automation and web scraping
- eyalzh/browser-control-mcp ๐ ๐ - An MCP server paired with a browser extension that enables LLM clients to control the user's browser (Firefox).
- fradser/mcp-server-apple-reminders ๐ ๐ ๐ - An MCP server for interacting with Apple Reminders on macOS
- getrupt/ashra-mcp ๐ ๐ - Extract structured data from any website. Just prompt and get JSON.
- kimtaeyoon83/mcp-server-youtube-transcript ๐ โ๏ธ - Fetch YouTube subtitles and transcripts for AI analysis
- kimtth/mcp-aoai-web-browsing ๐ ๐ - A minimal server/client MCP implementation using Azure OpenAI and Playwright.
- microsoft/playwright-mcp - Official Microsoft Playwright MCP server, enabling LLMs to interact with web pages through structured accessibility snapshots
- modelcontextprotocol/server-puppeteer ๐ ๐ - Browser automation for web scraping and interaction
- pskill9/web-search ๐ ๐ - An MCP server that enables free web searching using Google search results, with no API keys required.
- recursechat/mcp-server-apple-shortcuts ๐ ๐ ๐ - An MCP Server Integration with Apple Shortcuts
Cloud Platforms
Cloud platform service integration. Enables management and interaction with cloud infrastructure and services.
- alexei-led/aws-mcp-server ๐ โ๏ธ - A lightweight but powerful server that enables AI assistants to execute AWS CLI commands, use Unix pipes, and apply prompt templates for common AWS tasks in a safe Docker environment with multi-architecture support
- alexei-led/k8s-mcp-server ๐ - A lightweight yet robust server that empowers AI assistants to securely execute Kubernetes CLI commands (kubectl, helm, istioctl, and argocd) using Unix pipes in a safe Docker environment with multi-architecture support.
- bright8192/esxi-mcp-server ๐ โ๏ธ - A VMware ESXi/vCenter management server based on MCP (Model Control Protocol), providing simple REST API interfaces for virtual machine management.
- cloudflare/mcp-server-cloudflare ๐๏ธ ๐ โ๏ธ - Integration with Cloudflare services including Workers, KV, R2, and D1
- flux159/mcp-server-kubernetes - ๐ โ๏ธ/๐ Typescript implementation of Kubernetes cluster operations for pods, deployments, services.
- jdubois/azure-cli-mcp - A wrapper around the Azure CLI command line that allows you to talk directly to Azure
- johnneerdael/netskope-mcp ๐ โ๏ธ - An MCP to give access to all Netskope Private Access components within a Netskope Private Access environment, including detailed setup information and LLM examples on usage.
- manusa/Kubernetes MCP Server - ๐๏ธ ๐ A powerful Kubernetes MCP server with additional support for OpenShift. Besides providing CRUD operations for any Kubernetes resource, this server provides specialized tools to interact with your cluster.
- nwiizo/tfmcp - ๐ฆ ๐ - A Terraform MCP server allowing AI assistants to manage and operate Terraform environments, enabling reading configurations, analyzing plans, applying configurations, and managing Terraform state.
- rohitg00/kubectl-mcp-server - ๐ โ๏ธ/๐ A Model Context Protocol (MCP) server for Kubernetes that enables AI assistants like Claude, Cursor, and others to interact with Kubernetes clusters through natural language.
- strowk/mcp-k8s-go - ๐๏ธ โ๏ธ/๐ Kubernetes cluster operations through MCP
- thunderboltsid/mcp-nutanix - ๐๏ธ ๐ /โ๏ธ Go-based MCP Server for interfacing with Nutanix Prism Central resources.
- weibaohui/k8m - ๐๏ธ โ๏ธ/๐ Provides MCP multi-cluster Kubernetes management and operations, featuring a management interface, logging, and nearly 50 built-in tools covering common DevOps and development scenarios. Supports both standard and CRD resources.
- weibaohui/kom - ๐๏ธ โ๏ธ/๐ Provides MCP multi-cluster Kubernetes management and operations. It can be integrated as an SDK into your own project and includes nearly 50 built-in tools covering common DevOps and development scenarios. Supports both standard and CRD resources.
- wenhuwang/mcp-k8s-eye ๐๏ธ โ๏ธ/๐ MCP Server for kubernetes management, and analyze your cluster, application health
Code Execution
Code execution servers. Allow LLMs to execute code in a secure environment, e.g. for coding agents.
- pydantic/pydantic-ai/mcp-run-python ๐๐ - Run Python code in a secure sandbox via MCP tool calls
Command Line
Run commands, capture output and otherwise interact with shells and command line tools.
- ferrislucas/iterm-mcp ๐ฅ๏ธ ๐ ๏ธ ๐ฌ - A Model Context Protocol server that provides access to iTerm. You can run commands and ask questions about what you see in the iTerm terminal.
- g0t4/mcp-server-commands ๐ ๐ - Run any command with run_command and run_script tools.
- maxim-saplin/mcp_safe_local_python_executor - Safe Python interpreter based on HF Smolagents LocalPythonExecutor.
- MladenSU/cli-mcp-server ๐ ๐ - Command line interface with secure execution and customizable security policies
- OthmaneBlial/term_mcp_deepseek ๐ ๐ - A DeepSeek MCP-like Server for Terminal
- tumf/mcp-shell-server - A secure shell command execution server implementing the Model Context Protocol (MCP)
Communication
Integration with communication platforms for message management and channel operations. Enables AI models to interact with team communication tools.
- AbdelStark/nostr-mcp - ๐ โ๏ธ - A Nostr MCP server that allows to interact with Nostr, enabling posting notes, and more.
- adhikasp/mcp-twikit ๐ โ๏ธ - Interact with Twitter search and timeline
- agentmail-toolkit/mcp - ๐ ๐ฌ - An MCP server to create inboxes on the fly to send, receive, and take actions on email. We aren't AI agents for email, but email for AI Agents.
- arpitbatra123/mcp-googletasks - ๐ โ๏ธ - An MCP server to interface with the Google Tasks API
- carterlasalle/mac_messages_mcp ๐ ๐ ๐ - An MCP server that securely interfaces with your iMessage database via the Model Context Protocol (MCP), allowing LLMs to query and analyze iMessage conversations. It includes robust phone number validation, attachment processing, contact management, group chat handling, and full support for sending and receiving messages.
- elie222/inbox-zero - ๐ โ๏ธ - An MCP server for Inbox Zero. Adds functionality on top of Gmail like finding out which emails you need to reply to or need to follow up on.
- gotoolkits/wecombot - ๐ โ๏ธ - An MCP server application that sends various types of messages to the WeCom group robot.
- hannesrudolph/imessage-query-fastmcp-mcp-server ๐ ๐ ๐ - An MCP server that provides safe access to your iMessage database through Model Context Protocol (MCP), enabling LLMs to query and analyze iMessage conversations with proper phone number validation and attachment handling
- lharries/whatsapp-mcp ๐ ๐๏ธ - An MCP server for searching your personal WhatsApp messages, contacts and sending messages to individuals or groups
- MarkusPfundstein/mcp-gsuite - ๐ โ๏ธ - Integration with gmail and Google Calendar.
- modelcontextprotocol/server-bluesky ๐ โ๏ธ - Bluesky instance integration for querying and interaction
- modelcontextprotocol/server-slack ๐ โ๏ธ - Slack workspace integration for channel management and messaging
- sawa-zen/vrchat-mcp - ๐ ๐ This is an MCP server for interacting with the VRChat API. You can retrieve information about friends, worlds, avatars, and more in VRChat.
- teddyzxcv/ntfy-mcp - The MCP server that keeps you informed by sending notifications to your phone using ntfy
- userad/didlogic_mcp - ๐ โ๏ธ - An MCP server for DIDLogic. Adds functionality to manage SIP endpoints, numbers and destinations.
- zcaceres/gtasks-mcp - ๐ โ๏ธ - An MCP server to Manage Google Tasks
Customer Data Platforms
Provides access to customer profiles inside of customer data platforms
- iaptic/mcp-server-iaptic ๐๏ธ ๐ โ๏ธ - Connect with iaptic to ask about your Customer Purchases, Transaction data and App Revenue statistics.
- OpenDataMCP/OpenDataMCP ๐ โ๏ธ - Connect any Open Data to any LLM with Model Context Protocol.
- sergehuber/inoyu-mcp-unomi-server ๐ โ๏ธ - An MCP server to access and update profiles on an Apache Unomi CDP server.
- tinybirdco/mcp-tinybird ๐ โ๏ธ - An MCP server to interact with a Tinybird Workspace from any MCP client.
Databases
Secure database access with schema inspection capabilities. Enables querying and analyzing data with configurable security controls including read-only access.
- Aiven-Open/mcp-aiven - ๐ โ๏ธ ๐๏ธ - Navigate your Aiven projects and interact with the PostgreSQLยฎ, Apache Kafkaยฎ, ClickHouseยฎ and OpenSearchยฎ services
- alexanderzuev/supabase-mcp-server - Supabase MCP Server with support for SQL query execution and database exploration tools
- aliyun/alibabacloud-tablestore-mcp-server โ ๐ โ๏ธ - MCP service for Tablestore, features include adding documents, semantic search for documents based on vectors and scalars, RAG-friendly, and serverless.
- benborla29/mcp-server-mysql โ๏ธ ๐ - MySQL database integration in NodeJS with configurable access controls and schema inspection
- bytebase/dbhub ๐ ๐ โ Universal database MCP server supporting mainstream databases.
- c4pt0r/mcp-server-tidb ๐ โ๏ธ - TiDB database integration with schema inspection and query capabilities
- Canner/wren-engine ๐ ๐ฆ ๐ - The Semantic Engine for Model Context Protocol(MCP) Clients and AI Agents
- centralmind/gateway ๐๏ธ ๐ ๐ ๐ช - MCP and MCP SSE Server that automatically generate API based on database schema and data. Supports PostgreSQL, Clickhouse, MySQL, Snowflake, BigQuery, Supabase
- ClickHouse/mcp-clickhouse ๐ โ๏ธ - ClickHouse database integration with schema inspection and query capabilities
- cr7258/elasticsearch-mcp-server ๐ ๐ - MCP Server implementation that provides Elasticsearch interaction
- Dataring-engineering/mcp-server-trino ๐ โ๏ธ - Trino MCP Server to query and access data from Trino Clusters.
- designcomputer/mysql_mcp_server ๐ ๐ - MySQL database integration with configurable access controls, schema inspection, and comprehensive security guidelines
- domdomegg/airtable-mcp-server ๐ ๐ - Airtable database integration with schema inspection, read and write capabilities
- ergut/mcp-bigquery-server ๐ โ๏ธ - Server implementation for Google BigQuery integration that enables direct BigQuery database access and querying capabilities
- f4ww4z/mcp-mysql-server ๐ ๐ - Node.js-based MySQL database integration that provides secure MySQL database operations
- fireproof-storage/mcp-database-server ๐ โ๏ธ - Fireproof ledger database with multi-user sync
- FreePeak/db-mcp-server ๐๏ธ ๐ โ A high-performance multi-database MCP server built with Golang, supporting MySQL & PostgreSQL (NoSQL coming soon). Includes built-in tools for query execution, transaction management, schema exploration, query building, and performance analysis, with seamless Cursor integration for enhanced database workflows.
- furey/mongodb-lens ๐ ๐ - MongoDB Lens: Full Featured MCP Server for MongoDB Databases
- gannonh/firebase-mcp ๐ฅ โ ๏ธ - Firebase services including Auth, Firestore and Storage.
- get-convex/convex-backend ๐ โ๏ธ - Convex database integration to introspect tables, functions, and run oneoff queries (Source)
- GreptimeTeam/greptimedb-mcp-server ๐ ๐ - MCP Server for querying GreptimeDB.
- hannesrudolph/sqlite-explorer-fastmcp-mcp-server ๐ ๐ - An MCP server that provides safe, read-only access to SQLite databases through Model Context Protocol (MCP). This server is built with the FastMCP framework, which enables LLMs to explore and query SQLite databases with built-in safety features and query validation.
- idoru/influxdb-mcp-server ๐ โ๏ธ ๐ - Run queries against InfluxDB OSS API v2.
- isaacwasserman/mcp-snowflake-server ๐ โ๏ธ - Snowflake integration implementing read and (optional) write operations as well as insight tracking
- joshuarileydev/supabase-mcp-server - Supabase MCP Server for managing and creating projects and organisations in Supabase
- jovezhong/mcp-timeplus ๐ โ๏ธ - MCP server for Apache Kafka and Timeplus. Able to list Kafka topics, poll Kafka messages, save Kafka data locally and query streaming data with SQL via Timeplus
- KashiwaByte/vikingdb-mcp-server ๐ โ๏ธ - VikingDB integration with collection and index introduction, vector store and search capabilities.
- kiliczsh/mcp-mongo-server ๐ ๐ - A Model Context Protocol Server for MongoDB
- ktanaka101/mcp-server-duckdb ๐ ๐ - DuckDB database integration with schema inspection and query capabilities
- LucasHild/mcp-server-bigquery ๐ โ๏ธ - BigQuery database integration with schema inspection and query capabilities
- mcp-server-jdbc โ ๐ - Connect to any JDBC-compatible database and query, insert, update, delete, and more.
- memgraph/mcp-memgraph ๐ ๐ - Memgraph MCP Server - includes a tool to run a query against Memgraph and a schema resource.
- modelcontextprotocol/server-postgres ๐ ๐ - PostgreSQL database integration with schema inspection and query capabilities
- modelcontextprotocol/server-sqlite ๐ ๐ - SQLite database operations with built-in analysis features
- neo4j-contrib/mcp-neo4j ๐ ๐ - Model Context Protocol with Neo4j
- neondatabase/mcp-server-neon ๐ โ๏ธ โ An MCP Server for creating and managing Postgres databases using Neon Serverless Postgres
- niledatabase/nile-mcp-server MCP server for Nile's Postgres platform - Manage and query Postgres databases, tenants, users, auth using LLMs
- openlink/mcp-server-odbc ๐ ๐ - An MCP server for generic Database Management System (DBMS) Connectivity via the Open Database Connectivity (ODBC) protocol
- openlink/mcp-server-sqlalchemy ๐ ๐ - An MCP server for generic Database Management System (DBMS) Connectivity via SQLAlchemy using Python ODBC (pyodbc)
- pab1it0/adx-mcp-server ๐ โ๏ธ - Query and analyze Azure Data Explorer databases
- pab1it0/prometheus-mcp-server ๐ โ๏ธ - Query and analyze Prometheus, open-source monitoring system.
- qdrant/mcp-server-qdrant ๐ ๐ - A Qdrant MCP server
- QuantGeekDev/mongo-mcp ๐ ๐ - MongoDB integration that enables LLMs to interact directly with databases.
- rashidazarang/airtable-mcp ๐ โ๏ธ - Connect AI tools directly to Airtable. Query, create, update, and delete records using natural language. Features include base management, table operations, schema manipulation, record filtering, and data migration through a standardized MCP interface.
- runekaagaard/mcp-alchemy ๐ ๐ - Universal SQLAlchemy-based database integration supporting PostgreSQL, MySQL, MariaDB, SQLite, Oracle, MS SQL Server and many more databases. Features schema and relationship inspection, and large dataset analysis capabilities.
- sirmews/mcp-pinecone ๐ โ๏ธ - Pinecone integration with vector search capabilities
- TheRaLabs/legion-mcp ๐ ๐ Universal database MCP server supporting multiple database types including PostgreSQL, Redshift, CockroachDB, MySQL, RDS MySQL, Microsoft SQL Server, BigQuery, Oracle DB, and SQLite.
- tinybirdco/mcp-tinybird ๐ โ๏ธ - Tinybird integration with query and API capabilities
- tradercjz/dolphindb-mcp-server ๐ โ๏ธ - DolphinDB database integration with schema inspection and query capabilities
- weaviate/mcp-server-weaviate ๐ ๐ โ๏ธ - An MCP Server to connect to your Weaviate collections as a knowledge base as well as using Weaviate as a chat memory store.
- XGenerationLab/xiyan_mcp_server ๐ โ๏ธ โ An MCP server that supports fetching data from a database using natural language queries, powered by XiyanSQL as the text-to-SQL LLM.
- xing5/mcp-google-sheets ๐ โ๏ธ - A Model Context Protocol server for interacting with Google Sheets. This server provides tools to create, read, update, and manage spreadsheets through the Google Sheets API.
- zilliztech/mcp-server-milvus ๐ ๐ โ๏ธ - MCP Server for Milvus / Zilliz, making it possible to interact with your database.
Data Platforms
Data Platforms for data integration, transformation and pipeline orchestration.
- JordiNei/mcp-databricks-server - Connect to Databricks API, allowing LLMs to run SQL queries, list jobs, and get job status.
- keboola/keboola-mcp-server - Interact with the Keboola Connection Data Platform. This server provides tools for listing and accessing data from the Keboola Storage API.
Developer Tools
Tools and integrations that enhance the development workflow and environment management.
- 21st-dev/Magic-MCP - Create crafted UI components inspired by the best 21st.dev design engineers.
- admica/FileScopeMCP ๐ ๐ ๐ฆ - Analyzes your codebase identifying important files based on dependency relationships. Generates diagrams and importance scores, helping AI assistants understand the codebase.
- api7/apisix-mcp ๐๏ธ ๐ ๐ - MCP server that supports querying and managing all resources in Apache APISIX.
- automation-ai-labs/mcp-link ๐๏ธ ๐ - Seamlessly Integrate Any API with AI Agents (with OpenAPI Schema)
- Coment-ML/Opik-MCP ๐๏ธ ๐ โ๏ธ ๐ - Talk to your LLM observability, traces and monitoring captured by Opik using natural language.
- davidlin2k/pox-mcp-server ๐ ๐ - MCP server for the POX SDN controller, providing network control and management capabilities.
- delano/postman-mcp-server ๐ โ๏ธ - Interact with Postman API
- flipt-io/mcp-server-flipt ๐ ๐ - Enable AI assistants to interact with your feature flags in Flipt.
- GLips/Figma-Context-MCP ๐ ๐ - Provide coding agents direct access to Figma data to help them one-shot design implementation.
- gofireflyio/firefly-mcp ๐๏ธ ๐ โ๏ธ - Integrates, discovers, manages, and codifies cloud resources with Firefly.
- Govcraft/rust-docs-mcp-server ๐ฆ ๐ - Provides up-to-date documentation context for a specific Rust crate to LLMs via an MCP tool, using semantic search (embeddings) and LLM summarization.
- haris-musa/excel-mcp-server ๐ ๐ - An Excel manipulation server providing workbook creation, data operations, formatting, and advanced features (charts, pivot tables, formulae).
- higress-group/higress-ops-mcp-server ๐ ๐ - MCP server that provides comprehensive tools for managing Higress gateway configurations and operations.
- hungthai1401/bruno-mcp ๐ ๐ - A MCP server for interacting with Bruno API Client.
- hyperb1iss/droidmind ๐ ๐ - Control Android devices with AI through MCP, enabling device control, debugging, system analysis, and UI automation with a comprehensive security framework.
- IlyaGulya/gradle-mcp-server ๐ - Gradle integration using the Gradle Tooling API to inspect projects, execute tasks, and run tests with per-test result reporting
- InhiblabCore/mcp-image-compression ๐ ๐ - MCP server for local compression of various image formats.
- ios-simulator-mcp ๐ ๐ ๐ - A Model Context Protocol (MCP) server for interacting with iOS simulators. This server allows you to interact with iOS simulators by getting information about them, controlling UI interactions, and inspecting UI elements.
- j4c0bs/mcp-server-sql-analyzer ๐ - MCP server that provides SQL analysis, linting, and dialect conversion using SQLGlot
- jasonjmcghee/claude-debugs-for-you ๐ ๐ - An MCP Server and VS Code Extension which enables (language agnostic) automatic debugging via breakpoints and expression evaluation.
- jetbrains/mcpProxy ๐๏ธ ๐ ๐ - Connect to JetBrains IDE
- Jktfe/serveMyAPI ๐ ๐ ๐ - A personal MCP (Model Context Protocol) server for securely storing and accessing API keys across projects using the macOS Keychain.
- joshuarileydev/app-store-connect-mcp-server ๐ ๐ - An MCP server to communicate with the App Store Connect API for iOS Developers
- joshuarileydev/simulator-mcp-server ๐ ๐ - An MCP server to control iOS Simulators
- lamemind/mcp-server-multiverse ๐ ๐ ๐ ๏ธ - A middleware server that enables multiple isolated instances of the same MCP servers to coexist independently with unique namespaces and configurations.
- langfuse/mcp-server-langfuse ๐ ๐ - MCP server to access and manage LLM application prompts created with Langfuse Prompt Management.
- mrexodia/user-feedback-mcp ๐ ๐ - Simple MCP Server to enable a human-in-the-loop workflow in tools like Cline and Cursor.
- OctoMind-dev/octomind-mcp - ๐ โ๏ธ lets your preferred AI agent create & run fully managed Octomind end-to-end tests from your codebase or other data sources like Jira, Slack or TestRail.
- pskill9/website-downloader ๐๏ธ ๐ - This MCP server provides a tool to download entire websites using wget. It preserves the website structure and converts links to work locally.
- QuantGeekDev/docker-mcp ๐๏ธ ๐ - Docker container management and operations through MCP
- r-huijts/xcode-mcp-server ๐ ๐ ๐ - Xcode integration for project management, file operations, and build automation
- ReAPI-com/mcp-openapi ๐ ๐ - MCP server that lets LLMs know everything about your OpenAPI specifications to discover, explain and generate code/mock data
- Rootly-AI-Labs/Rootly-MCP-server ๐๏ธ๐โ๏ธ๐ - MCP server for the incident management platform Rootly.
- sammcj/mcp-package-version ๐ ๐ - An MCP Server to help LLMs suggest the latest stable package versions when writing code.
- sapientpants/sonarqube-mcp-server ๐ฆ โ๏ธ ๐ - A Model Context Protocol (MCP) server that integrates with SonarQube to provide AI assistants with access to code quality metrics, issues, and quality gate statuses
- SDGLBL/mcp-claude-code ๐ ๐ - An implementation of Claude Code capabilities using MCP, enabling AI code understanding, modification, and project analysis with comprehensive tool support.
- snaggle-ai/openapi-mcp-server ๐๏ธ ๐ - Connect any HTTP/REST API server using an Open API spec (v3)
- stass/lldb-mcp ๐ ๐ ๐ง ๐ - A MCP server for LLDB enabling AI binary and core file analysis, debugging, disassembling.
- tumf/mcp-text-editor ๐ ๐ - A line-oriented text file editor. Optimized for LLM tools with efficient partial file access to minimize token usage.
- vivekvells/mcp-pandoc ๐๏ธ ๐ - MCP server for seamless document format conversion using Pandoc, supporting Markdown, HTML, PDF, DOCX (.docx), csv and more.
- VSCode Devtools ๐ - Connect to the VSCode IDE and use semantic tools like find_usages.
- xcodebuild ๐ - Build an iOS Xcode workspace/project and feed errors back to the LLM.
- xzq.xu/jvm-mcp-server ๐ ๐ - An implementation project of a JVM-based MCP (Model Context Protocol) server.
- yangkyeongmo/mcp-server-apache-airflow ๐ ๐ - MCP server that connects to Apache Airflow using the official client.
- YuChenSSR/mindmap-mcp-server ๐ ๐ - A Model Context Protocol (MCP) server for generating a beautiful interactive mindmap.
- YuChenSSR/multi-ai-advisor ๐ ๐ - A Model Context Protocol (MCP) server that queries multiple Ollama models and combines their responses, providing diverse AI perspectives on a single question.
- yWorks/mcp-typescribe ๐ ๐ - MCP server that provides Typescript API information efficiently to the agent to enable it to work with untrained APIs
- zcaceres/fetch-mcp ๐ ๐ - An MCP server to flexibly fetch JSON, text, and HTML data
- zenml-io/mcp-zenml ๐ ๐ โ๏ธ - An MCP server to connect with your ZenML MLOps and LLMOps pipelines
Data Science Tools
Integrations and tools designed to simplify data exploration, analysis and enhance data science workflows.
- ChronulusAI/chronulus-mcp ๐ โ๏ธ - Predict anything with Chronulus AI forecasting and prediction agents.
- reading-plus-ai/mcp-server-data-exploration ๐ โ๏ธ - Enables autonomous data exploration on .csv-based datasets, providing intelligent insights with minimal effort.
- zcaceres/markdownify-mcp ๐ ๐ - An MCP server to convert almost any file or web content into Markdown
- jjsantos01/jupyter-notebook-mcp ๐ ๐ - Connects Jupyter Notebook to Claude AI, allowing Claude to directly interact with and control Jupyter Notebooks.
Embedded System
Provides access to documentation and shortcuts for working on embedded devices.
- horw/esp-mcp ๐ - Workflow for fixing build issues in ESP32 series chips using ESP-IDF.
File Systems
Provides direct access to local file systems with configurable permissions. Enables AI models to read, write, and manage files within specified directories.
- cyberchitta/llm-context.py ๐ ๐ - Share code context with LLMs via MCP or clipboard
- exoticknight/mcp-file-merger ๐๏ธ ๐ - File merger tool, suitable for AI chat length limits.
- filesystem@quarkiverse/quarkus-mcp-servers โ ๐ - A filesystem allowing for browsing and editing files implemented in Java using Quarkus. Available as jar or native image.
- hmk/box-mcp-server ๐ โ๏ธ - Box integration for listing, reading and searching files
- mamertofabian/mcp-everything-search ๐ ๐ ๐ช - Fast Windows file search using Everything SDK
- mark3labs/mcp-filesystem-server ๐๏ธ ๐ - Golang implementation for local file system access.
- modelcontextprotocol/server-filesystem ๐ ๐ - Direct local file system access.
- modelcontextprotocol/server-google-drive ๐ โ๏ธ - Google Drive integration for listing, reading, and searching files
- Xuanwo/mcp-server-opendal ๐ ๐ โ๏ธ - Access any storage with Apache OpenDALโข
Finance & Fintech
Financial data access and analysis tools. Enables AI models to work with market data, trading platforms, and financial information.
- anjor/coinmarket-mcp-server ๐ โ๏ธ - Coinmarket API integration to fetch cryptocurrency listings and quotes
- bankless/onchain-mcp ๐ โ๏ธ - Bankless Onchain API to interact with smart contracts, query transaction and token information
- base/base-mcp ๐๏ธ ๐ โ๏ธ - Base Network integration for onchain tools, allowing interaction with Base Network and Coinbase API for wallet management, fund transfers, smart contracts, and DeFi operations
- berlinbra/alpha-vantage-mcp ๐ โ๏ธ - Alpha Vantage API integration to fetch both stock and crypto information
- bitteprotocol/mcp ๐ - Bitte Protocol integration to run AI Agents on several blockchains.
- chargebee/mcp ๐๏ธ ๐ โ๏ธ - MCP Server that connects AI agents to Chargebee platform.
- ferdousbhai/investor-agent ๐ โ๏ธ - Yahoo Finance integration to fetch stock market data including options recommendations
- ferdousbhai/tasty-agent ๐ โ๏ธ - Tastyworks API integration to handle trading activities on Tastytrade
- getalby/nwc-mcp-server ๐ ๐ - Bitcoin Lightning wallet integration powered by Nostr Wallet Connect
- heurist-network/heurist-mesh-mcp-server ๐๏ธ โ ๏ธ ๐ ๐ - Access specialized web3 AI agents for blockchain analysis, smart contract security auditing, token metrics evaluation, and on-chain interactions through the Heurist Mesh network. Provides comprehensive tools for DeFi analysis, NFT valuation, and transaction monitoring across multiple blockchains
- kukapay/crypto-feargreed-mcp ๐ โ๏ธ - Providing real-time and historical Crypto Fear & Greed Index data.
- kukapay/crypto-indicators-mcp ๐ โ๏ธ - An MCP server providing a range of cryptocurrency technical analysis indicators and strategies.
- kukapay/crypto-sentiment-mcp ๐ โ๏ธ - An MCP server that delivers cryptocurrency sentiment analysis to AI agents.
- kukapay/cryptopanic-mcp-server ๐ โ๏ธ - Providing latest cryptocurrency news to AI agents, powered by CryptoPanic.
- kukapay/dune-analytics-mcp ๐ โ๏ธ - An MCP server that bridges Dune Analytics data to AI agents.
- kukapay/freqtrade-mcp ๐ โ๏ธ - An MCP server that integrates with the Freqtrade cryptocurrency trading bot.
- kukapay/jupiter-mcp ๐ โ๏ธ - An MCP server for executing token swaps on the Solana blockchain using Jupiter's new Ultra API.
- kukapay/pancakeswap-poolspy-mcp ๐ โ๏ธ - An MCP server that tracks newly created pools on Pancake Swap.
- kukapay/rug-check-mcp ๐ โ๏ธ - An MCP server that detects potential risks in Solana meme tokens.
- kukapay/thegraph-mcp ๐ โ๏ธ - An MCP server that powers AI agents with indexed blockchain data from The Graph.
- kukapay/token-minter-mcp ๐ โ๏ธ - An MCP server providing tools for AI agents to mint ERC-20 tokens across multiple blockchains.
- kukapay/token-revoke-mcp ๐ โ๏ธ - An MCP server for checking and revoking ERC-20 token allowances across multiple blockchains.
- kukapay/uniswap-poolspy-mcp ๐ โ๏ธ - An MCP server that tracks newly created liquidity pools on Uniswap across multiple blockchains.
- kukapay/uniswap-trader-mcp ๐ โ๏ธ - An MCP server for AI agents to automate token swaps on Uniswap DEX across multiple blockchains.
- kukapay/whale-tracker-mcp ๐ โ๏ธ - An MCP server for tracking cryptocurrency whale transactions.
- longportapp/openapi - ๐ โ๏ธ - LongPort OpenAPI provides real-time stock market data and offers AI-accessible analysis and trading capabilities through MCP.
- mcpdotdirect/evm-mcp-server ๐ โ๏ธ - Comprehensive blockchain services for 30+ EVM networks, supporting native tokens, ERC20, NFTs, smart contracts, transactions, and ENS resolution.
- mcpdotdirect/starknet-mcp-server ๐ โ๏ธ - Comprehensive Starknet blockchain integration with support for native tokens (ETH, STRK), smart contracts, StarknetID resolution, and token transfers.
- minhyeoky/mcp-server-ledger ๐ ๐ - A ledger-cli integration for managing financial transactions and generating reports.
- openMF/mcp-mifosx โ๏ธ ๐ - A core banking integration for managing clients, loans, savings, shares, financial transactions and generating financial reports.
- narumiruna/yfinance-mcp ๐ โ๏ธ - An MCP server that uses yfinance to obtain information from Yahoo Finance.
- pwh-pwh/coin-mcp-server ๐ โ๏ธ - Bitget API to fetch cryptocurrency price.
- QuantGeekDev/coincap-mcp ๐ โ๏ธ - Real-time cryptocurrency market data integration using CoinCap's public API, providing access to crypto prices and market information without API keys
- SaintDoresh/Crypto-Trader-MCP-ClaudeDesktop ๐ โ๏ธ - An MCP tool that provides cryptocurrency market data using the CoinGecko API.
- SaintDoresh/YFinance-Trader-MCP-ClaudeDesktop ๐ โ๏ธ - An MCP tool that provides stock market data and analysis using the Yahoo Finance API.
๐ฎ Gaming
Integration with gaming related data, game engines, and services
- CoderGamester/mcp-unity ๐ #๏ธโฃ ๐ - MCP server integrating the Unity3D game engine for game development
- Coding-Solo/godot-mcp ๐ ๐ - A MCP server for interacting with the Godot game engine, providing tools for editing, running, debugging, and managing scenes in Godot projects.
- pab1ito/chess-mcp ๐ โ๏ธ - Access Chess.com player data, game records, and other public information through standardized MCP interfaces, allowing AI assistants to search and analyze chess information.
- rishijatia/fantasy-pl-mcp ๐ โ๏ธ - An MCP server for real-time Fantasy Premier League data and analysis tools.
๐ง Knowledge & Memory
Persistent memory storage using knowledge graph structures. Enables AI models to maintain and query structured information across sessions.
- CheMiguel23/MemoryMesh ๐ ๐ - Enhanced graph-based memory with a focus on AI role-play and story generation
- graphlit-mcp-server ๐ โ๏ธ - Ingest anything from Slack, Discord, websites, Google Drive, Linear or GitHub into a Graphlit project - and then search and retrieve relevant knowledge within an MCP client like Cursor, Windsurf or Cline.
- hannesrudolph/mcp-ragdocs ๐ ๐ - An MCP server implementation that provides tools for retrieving and processing documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context
- kaliaboi/mcp-zotero ๐ โ๏ธ - A connector for LLMs to work with collections and sources on your Zotero Cloud
- mcp-summarizer ๐ โ๏ธ - AI summarization MCP server supporting multiple content types: plain text, web pages, PDF documents, EPUB books, and HTML content
- mem0ai/mem0-mcp ๐ ๐ - A Model Context Protocol server for Mem0 that helps manage coding preferences and patterns, providing tools for storing, retrieving and semantically handling code implementations, best practices and technical documentation in IDEs like Cursor and Windsurf
- modelcontextprotocol/server-memory ๐ ๐ - Knowledge graph-based persistent memory system for maintaining context
- topoteretes/cognee ๐ ๐ - Memory manager for AI apps and Agents using various graph and vector stores and allowing ingestion from 30+ data sources
๐บ๏ธ Location Services
Location-based services and mapping tools. Enables AI models to work with geographic data, weather information, and location-based analytics.
- briandconnelly/mcp-server-ipinfo ๐ โ๏ธ - IP address geolocation and network information using IPInfo API
- kukapay/nearby-search-mcp ๐ โ๏ธ - An MCP server for nearby place searches with IP-based location detection.
- modelcontextprotocol/server-google-maps ๐ โ๏ธ - Google Maps integration for location services, routing, and place details
- QGIS MCP - connects QGIS Desktop to Claude AI through the MCP. This integration enables prompt-assisted project creation, layer loading, code execution, and more.
- SaintDoresh/Weather-MCP-ClaudeDesktop ๐ โ๏ธ - An MCP tool that provides real-time weather data, forecasts, and historical weather information using the OpenWeatherMap API.
- SecretiveShell/MCP-timeserver ๐ ๐ - Access the time in any timezone and get the current local time
- webcoderz/MCP-Geo ๐ ๐ - Geocoding MCP server for nominatim, ArcGIS, Bing
๐ฏ Marketing
Tools for creating and editing marketing content, working with web meta data, product positioning, and editing guides.
- Open Strategy Partners Marketing Tools ๐ ๐ - A suite of marketing tools from Open Strategy Partners including writing style, editing codes, and product marketing value map creation.
๐ Monitoring
Access and analyze application monitoring data. Enables AI models to review error reports and performance metrics.
- grafana/mcp-grafana ๐๏ธ ๐ ๐ โ๏ธ - Search dashboards, investigate incidents and query datasources in your Grafana instance
- hyperb1iss/lucidity-mcp ๐ ๐ - Enhance AI-generated code quality through intelligent, prompt-based analysis across 10 critical dimensions from complexity to security vulnerabilities
- last9/last9-mcp-server - Seamlessly bring real-time production context (logs, metrics, and traces) into your local environment to auto-fix code faster
- metoro-io/metoro-mcp-server ๐๏ธ ๐๏ธ โ๏ธ - Query and interact with kubernetes environments monitored by Metoro
- modelcontextprotocol/server-raygun ๐ โ๏ธ - Raygun API V3 integration for crash reporting and real user monitoring
- modelcontextprotocol/server-sentry ๐ โ๏ธ - Sentry.io integration for error tracking and performance monitoring
- pydantic/logfire-mcp ๐๏ธ ๐ โ๏ธ - Provides access to OpenTelemetry traces and metrics through Logfire
- seekrays/mcp-monitor ๐๏ธ ๐ - A system monitoring tool that exposes system metrics via the Model Context Protocol (MCP), allowing LLMs to retrieve real-time system information through an MCP-compatible interface (supports CPU, memory, disk, network, host, and process metrics).
๐ Search & Data Extraction
- ac3xx/mcp-servers-kagi ๐ โ๏ธ - Kagi search API integration
- andybrandt/mcp-simple-arxiv - ๐ โ๏ธ MCP for LLM to search and read papers from arXiv
- andybrandt/mcp-simple-pubmed - ๐ โ๏ธ MCP to search and read medical / life sciences papers from PubMed.
- angheljf/nyt ๐ โ๏ธ - Search articles using the NYTimes API
- apify/mcp-server-rag-web-browser ๐ โ๏ธ - An MCP server for Apify's open-source RAG Web Browser Actor to perform web searches, scrape URLs, and return content in Markdown.
- Bigsy/Clojars-MCP-Server ๐ โ๏ธ - Clojars MCP server providing up-to-date dependency information for Clojure libraries
- blazickjp/arxiv-mcp-server โ๏ธ ๐ - Search ArXiv research papers
- chanmeng/google-news-mcp-server ๐ โ๏ธ - Google News integration with automatic topic categorization, multi-language support, and comprehensive search capabilities including headlines, stories, and related topics through SerpAPI.
- ConechoAI/openai-websearch-mcp ๐ ๐ โ๏ธ - A Python-based MCP server that provides OpenAI's built-in web_search tool.
- devflowinc/trieve ๐๏ธ ๐ โ๏ธ ๐ - Crawl, embed, chunk, search, and retrieve information from datasets through Trieve
- Dumpling-AI/mcp-server-dumplingai ๐๏ธ ๐ โ๏ธ - Access data, web scraping, and document conversion APIs by Dumpling AI
- erithwik/mcp-hn ๐ โ๏ธ - An MCP server to search Hacker News, get top stories, and more.
- exa-labs/exa-mcp-server ๐๏ธ ๐ โ๏ธ - A Model Context Protocol (MCP) server that lets AI assistants like Claude use the Exa AI Search API for web searches. This setup allows AI models to get real-time web information in a safe and controlled way.
- fatwang2/search1api-mcp ๐ โ๏ธ - Search via search1api (requires paid API key)
- hellokaton/unsplash-mcp-server ๐ โ๏ธ - An MCP server for Unsplash image search.
- Ihor-Sokoliuk/MCP-SearXNG ๐ ๐ /โ๏ธ - A Model Context Protocol Server for SearXNG
- jae-jae/fetcher-mcp ๐ ๐ - MCP server for fetching web page content using the Playwright headless browser, supporting JavaScript rendering and intelligent content extraction, and outputting Markdown or HTML format.
- jae-jae/g-search-mcp ๐ ๐ - A powerful MCP server for Google search that enables parallel searching with multiple keywords simultaneously.
- kshern/mcp-tavily โ๏ธ ๐ - Tavily AI search API
- modelcontextprotocol/server-brave-search ๐ โ๏ธ - Web search capabilities using Brave's Search API
- modelcontextprotocol/server-fetch ๐ ๐ โ๏ธ - Efficient web content fetching and processing for AI consumption
- mzxrai/mcp-webresearch ๐๐ - Search Google and do deep web research on any topic
- nickclyde/duckduckgo-mcp-server ๐ โ๏ธ - Web search using DuckDuckGo
- reading-plus-ai/mcp-server-deep-research ๐ โ๏ธ - MCP server providing OpenAI/Perplexity-like autonomous deep research, structured query elaboration, and concise reporting.
- SecretiveShell/MCP-searxng ๐ ๐ - An MCP Server to connect to searXNG instances
- tinyfish-io/agentql-mcp ๐๏ธ ๐ โ๏ธ - MCP server that provides AgentQL's data extraction capabilities.
- Tomatio13/mcp-server-tavily โ๏ธ ๐ - Tavily AI search API
- vectorize-io/vectorize-mcp-server โ๏ธ ๐ - Vectorize MCP server for advanced retrieval, Private Deep Research, Anything-to-Markdown file extraction and text chunking.
- zhsama/duckduckgo-mcp-server ๐ ๐ โ๏ธ - This is a TypeScript-based MCP server that provides DuckDuckGo search functionality.
- zoomeye-ai/mcp_zoomeye ๐ โ๏ธ - Querying network asset information by ZoomEye MCP Server
๐ Security
- 13bm/GhidraMCP ๐ โ ๐ - MCP server for integrating Ghidra with AI assistants. This plugin enables binary analysis, providing tools for function inspection, decompilation, memory exploration, and import/export analysis via the Model Context Protocol.
- atomicchonk/roadrecon_mcp_server ๐ ๐ช ๐ - MCP server for analyzing results gathered by ROADrecon during Azure tenant enumeration
- BurtTheCoder/mcp-dnstwist ๐ ๐ช โ๏ธ - MCP server for dnstwist, a powerful DNS fuzzing tool that helps detect typosquatting, phishing, and corporate espionage.
- BurtTheCoder/mcp-maigret ๐ ๐ช โ๏ธ - MCP server for maigret, a powerful OSINT tool that collects user account information from various public sources. This server provides tools for searching usernames across social networks and analyzing URLs.
- BurtTheCoder/mcp-shodan ๐ ๐ช โ๏ธ - MCP server for querying the Shodan API and Shodan CVEDB. This server provides tools for IP lookups, device searches, DNS lookups, vulnerability queries, CPE lookups, and more.
- BurtTheCoder/mcp-virustotal ๐ ๐ช โ๏ธ - MCP server for querying the VirusTotal API. This server provides tools for scanning URLs, analyzing file hashes, and retrieving IP address reports.
- fr0gger/MCP_Security ๐ โ๏ธ - MCP server for querying the ORKL API. This server provides tools for fetching threat reports, analyzing threat actors, and retrieving intelligence sources.
- qianniuspace/mcp-security-audit ๐ โ๏ธ - A powerful MCP (Model Context Protocol) server that audits npm package dependencies for security vulnerabilities. Built with remote npm registry integration for real-time security checks.
- semgrep/mcp-security-audit ๐ โ๏ธ - Allow AI agents to scan code for security vulnerabilities using Semgrep.
- mrexodia/ida-pro-mcp ๐ ๐ - MCP server for IDA Pro, allowing you to perform binary analysis with AI assistants. This plugin implements decompilation and disassembly, and allows you to generate malware analysis reports automatically.
๐ Sports
Tools for accessing sports-related data, results, and statistics.
- r-huijts/firstcycling-mcp ๐ โ๏ธ - Access cycling race data, results, and statistics through natural language. Features include retrieving start lists, race results, and rider information from firstcycling.com.
๐ง Support & Service Management
Tools for managing customer support, IT service management, and helpdesk operations.
- effytech/freshdesk-mcp ๐ โ๏ธ - MCP server that integrates with Freshdesk, enabling AI models to interact with Freshdesk modules and perform various support operations.
๐ Translation Services
Translation tools and services to enable AI assistants to translate content between different languages.
- translated/lara-mcp ๐ ๐ - MCP Server for Lara Translate API, enabling powerful translation capabilities with support for language detection and context-aware translations.
๐ Travel & Transportation
Access to travel and transportation information. Enables querying schedules, routes, and real-time travel data.
- Airbnb MCP Server ๐ โ๏ธ - Provides tools to search Airbnb and get listing details.
- KyrieTangSheng/mcp-server-nationalparks ๐ โ๏ธ - National Park Service API integration providing latest information of park details, alerts, visitor centers, campgrounds, and events for U.S. National Parks
- NS Travel Information MCP Server ๐ โ๏ธ - Access Dutch Railways (NS) travel information, schedules, and real-time updates
- pab1it0/tripadvisor-mcp ๐ ๐ - A MCP server that enables LLMs to interact with Tripadvisor API, supporting location data, reviews, and photos through standardized MCP interfaces
๐ Version Control
Interact with Git repositories and version control platforms. Enables repository management, code analysis, pull request handling, issue tracking, and other version control operations through standardized APIs.
- adhikasp/mcp-git-ingest ๐ ๐ - Read and analyze GitHub repositories with your LLM
- ddukbg/github-enterprise-mcp ๐ โ๏ธ ๐ - MCP server for GitHub Enterprise API integration
- kopfrechner/gitlab-mr-mcp ๐ โ๏ธ - Interact seamlessly with issues and merge requests of your GitLab projects.
- modelcontextprotocol/server-git ๐ ๐ - Direct Git repository operations including reading, searching, and analyzing local repositories
- modelcontextprotocol/server-github ๐ โ๏ธ - GitHub API integration for repository management, PRs, issues, and more
- modelcontextprotocol/server-gitlab ๐ โ๏ธ ๐ - GitLab platform integration for project management and CI/CD operations
- oschina/mcp-gitee ๐๏ธ โ๏ธ ๐ - Gitee API integration, repository, issue, and pull request management, and more.
- Tiberriver256/mcp-server-azure-devops ๐ โ๏ธ - Azure DevOps integration for repository management, work items, and pipelines.
๐ ๏ธ Other Tools and Integrations
- AbdelStark/bitcoin-mcp - โฟ A Model Context Protocol (MCP) server that enables AI models to interact with Bitcoin, allowing them to generate keys, validate addresses, decode transactions, query the blockchain, and more.
- akseyh/bear-mcp-server - Allows the AI to read from your Bear Notes (macOS only)
- allenporter/mcp-server-home-assistant ๐ ๐ - Expose all Home Assistant voice intents through a Model Context Protocol Server allowing home control.
- Amazon Bedrock Nova Canvas ๐ โ๏ธ - Use Amazon Nova Canvas model for image generation.
- amidabuddha/unichat-mcp-server ๐/๐ โ๏ธ - Send requests to OpenAI, MistralAI, Anthropic, xAI, Google AI or DeepSeek using MCP protocol via tool or predefined prompts. Vendor API key required
- anaisbetts/mcp-installer ๐ ๐ - An MCP server that installs other MCP servers for you.
- anaisbetts/mcp-youtube ๐ โ๏ธ - Fetch YouTube subtitles
- andybrandt/mcp-simple-openai-assistant - ๐ โ๏ธ MCP to talk to OpenAI assistants (Claude can use any GPT model as its assistant)
- andybrandt/mcp-simple-timeserver ๐ ๐ โ๏ธ - An MCP server that allows checking local time on the client machine or current UTC time from an NTP server
- anjor/coinmarket-mcp-server ๐ ๐ - Coinmarket API integration to fetch cryptocurrency listings and quotes
- apify/actors-mcp-server ๐ โ๏ธ - Use 3,000+ pre-built cloud tools, known as Actors, to extract data from websites, e-commerce, social media, search engines, maps, and more
- apinetwork/piapi-mcp-server ๐ โ๏ธ - PiAPI MCP server enables users to generate media content with Midjourney/Flux/Kling/Hunyuan/Udio/Trellis directly from Claude or any other MCP-compatible apps.
- awkoy/replicate-flux-mcp ๐ โ๏ธ - Provides the ability to generate images via Replicate's API.
- awwaiid/mcp-server-taskwarrior ๐ ๐ - An MCP server for basic local taskwarrior usage (add, update, remove tasks)
- baba786/phabricator-mcp-server ๐ โ๏ธ - Interacting with Phabricator API
- Badhansen/notion-mcp ๐ โ๏ธ - A Model Context Protocol (MCP) server that integrates with Notion's API to manage personal todo lists efficiently.
- bart6114/my-bear-mcp-server ๐ ๐ ๐ - Allows reading notes and tags for the Bear note-taking app through a direct integration with Bear's SQLite database.
- billster45/mcp-chatgpt-responses ๐ โ๏ธ - MCP server for Claude to talk to ChatGPT and use its web search capability.
- blurrah/mcp-graphql ๐ โ๏ธ - Allows the AI to query GraphQL servers
- calclavia/mcp-obsidian ๐ ๐ - This is a connector to allow Claude Desktop (or any MCP client) to read and search any directory containing Markdown notes (such as an Obsidian vault).
- chrishayuk/mcp-cli ๐ ๐ - Yet another CLI tool for testing MCP servers
- danhilse/notion_mcp ๐ โ๏ธ - Integrates with Notion's API to manage personal todo lists
- evalstate/mcp-miro ๐ โ๏ธ - Access MIRO whiteboards, bulk create and read items. Requires OAUTH key for REST API.
- future-audiences/wikimedia-enterprise-model-context-protocol ๐ โ๏ธ - Wikipedia Article lookup API
- githejie/mcp-server-calculator ๐ ๐ - This server enables LLMs to use a calculator for precise numerical calculations
- gotoolkits/DifyWorkflow - ๐๏ธ โ๏ธ Tools to query and execute Dify workflows
- hiromitsusasaki/raindrop-io-mcp-server ๐ โ๏ธ - An integration that allows LLMs to interact with Raindrop.io bookmarks using the Model Context Protocol (MCP).
- hmk/attio-mcp-server - ๐ โ๏ธ Allows AI clients to manage records and notes in Attio CRM
- isaacwasserman/mcp-vegalite-server ๐ ๐ - Generate visualizations from fetched data using the VegaLite format and renderer.
- ivo-toby/contentful-mcp ๐ ๐ - Update, create, delete content, content-models and assets in your Contentful Space
- j3k0/speech.sh ๐ - Lets the agent speak things out loud and notify you with a quick summary when it's done working
- joshuarileydev/mac-apps-launcher-mcp-server ๐ ๐ - An MCP server to list and launch applications on macOS
- kelvin6365/plane-mcp-server - ๐๏ธ ๐ This MCP Server will help you to manage projects and issues through Plane's API
- kenliao94/mcp-server-rabbitmq ๐ ๐ - Enable interaction (admin operation, message enqueue/dequeue) with RabbitMQ
- kj455/mcp-kibela - ๐ โ๏ธ Allows AI models to interact with Kibela
- KS-GEN-AI/confluence-mcp-server ๐ โ๏ธ ๐ ๐ช - Get Confluence data via CQL and read pages.
- KS-GEN-AI/jira-mcp-server ๐ โ๏ธ ๐ ๐ช - Read Jira data via JQL and the API, and execute requests to create and edit tickets.
- lciesielski/mcp-salesforce ๐ โ๏ธ - MCP server with basic demonstration of interactions with Salesforce instance
- llmindset/mcp-hfspace ๐ โ๏ธ - Use HuggingFace Spaces directly from Claude. Use Open Source Image Generation, Chat, Vision tasks and more. Supports Image, Audio and text uploads/downloads.
- magarcia/mcp-server-giphy ๐ โ๏ธ - Search and retrieve GIFs from Giphy's vast library through the Giphy API.
- makehq/mcp-server ๐๏ธ ๐ ๐ - Turn your Make scenarios into callable tools for AI assistants.
- marcelmarais/Spotify - ๐ ๐ Control Spotify playback and manage playlists.
- MarkusPfundstein/mcp-obsidian ๐ โ๏ธ ๐ - Interacting with Obsidian via REST API
- mcp-server-jfx โ ๐ - Draw on JavaFX canvas.
- mediar-ai/screenpipe - ๐๏ธ ๐ฆ ๐ ๐ Local-first system capturing screen/audio with timestamped indexing, SQL/embedding storage, semantic search, LLM-powered history analysis, and event-triggered actions - enables building context-aware AI agents through a NextJS plugin ecosystem.
- modelcontextprotocol/server-everything ๐ ๐ - MCP server that exercises all the features of the MCP protocol
- mrjoshuak/godoc-mcp ๐๏ธ ๐ - Token-efficient Go documentation server that provides AI assistants with smart access to package docs and types without reading entire source files
- mzxrai/mcp-openai ๐ โ๏ธ - Chat with OpenAI's smartest models
- NakaokaRei/swift-mcp-gui ๐ ๐ - MCP server that can execute commands such as keyboard input and mouse movement
- nguyenvanduocit/all-in-one-model-context-protocol ๐๏ธ ๐ - Useful tools for developers, covering almost everything an engineer needs: Confluence, Jira, YouTube, script execution, knowledge-base RAG, URL fetching, YouTube channel management, email, calendar, GitLab
- NON906/omniparser-autogui-mcp - ๐ Automatic operation of on-screen GUI.
- pierrebrunelle/mcp-server-openai ๐ โ๏ธ - Query OpenAI models directly from Claude using MCP protocol
- pskill9/hn-server - ๐ โ๏ธ Parses the HTML content from news.ycombinator.com (Hacker News) and provides structured data for different types of stories (top, new, ask, show, jobs).
- PV-Bhat/vibe-check-mcp-server ๐ โ๏ธ - An MCP server that prevents cascading errors and scope creep by calling a "Vibe-check" agent to ensure user alignment.
- pwh-pwh/cal-mcp - An MCP server for Mathematical expression calculation
- pyroprompts/any-chat-completions-mcp - Chat with any other OpenAI SDK Compatible Chat Completions API, like Perplexity, Groq, xAI and more
- reeeeemo/ancestry-mcp ๐ ๐ - Allows the AI to read .ged files and genetic data
- rember/rember-mcp ๐ ๐ - Create spaced repetition flashcards in Rember to remember anything you learn in your chats.
- roychri/mcp-server-asana - ๐ โ๏ธ This Model Context Protocol server implementation for Asana allows you to talk to the Asana API from an MCP client such as Anthropic's Claude Desktop application, and many more.
- rusiaaman/wcgw ๐ ๐ - Autonomous shell execution, computer control and coding agent. (Mac)
- SecretiveShell/MCP-wolfram-alpha ๐ โ๏ธ - An MCP server for querying wolfram alpha API.
- Seym0n/tiktok-mcp ๐ โ๏ธ - Interact with TikTok videos
- sirmews/apple-notes-mcp ๐ ๐ - Allows the AI to read from your local Apple Notes database (macOS only)
- sooperset/mcp-atlassian ๐ โ๏ธ - MCP server for Atlassian products (Confluence and Jira). Supports Confluence Cloud, Jira Cloud, and Jira Server/Data Center. Provides comprehensive tools for searching, reading, creating, and managing content across Atlassian workspaces.
- suekou/mcp-notion-server ๐ ๐ - Interacting with Notion API
- tacticlaunch/mcp-linear ๐ โ๏ธ ๐ ๐ช ๐ง - Integrates with Linear project management system
- tanigami/mcp-server-perplexity ๐ โ๏ธ - Interacting with Perplexity API.
- tevonsb/homeassistant-mcp ๐ ๐ - Access Home Assistant data and control devices (lights, switches, thermostats, etc).
- tomekkorbak/oura-mcp-server ๐ โ๏ธ - An MCP server for Oura, an app for tracking sleep
- tomekkorbak/strava-mcp-server ๐ โ๏ธ - An MCP server for Strava, an app for tracking physical exercise
- wanaku-ai/wanaku - โ๏ธ ๐ The Wanaku MCP Router is an SSE-based MCP server that provides an extensible routing engine for integrating your enterprise systems with AI agents.
- wong2/mcp-cli ๐ ๐ - CLI tool for testing MCP servers
- ws-mcp - Wrap MCP servers with a WebSocket (for use with kitbitz)
- yuna0x0/hackmd-mcp ๐ โ๏ธ - Allows AI models to interact with HackMD
- ZeparHyfar/mcp-datetime - MCP server providing date and time functions in various formats
- zueai/mcp-manager ๐ โ๏ธ - Simple Web UI to install and manage MCP servers for Claude Desktop App.
- HenryHaoson/Yuque-MCP-Server - ๐ โ๏ธ A Model-Context-Protocol (MCP) server for integrating with Yuque API, allowing AI models to manage documents, interact with knowledge bases, search content, and access analytics data from the Yuque platform.
Frameworks
- FastMCP ๐ - A high-level framework for building MCP servers in Python
- FastMCP ๐ - A high-level framework for building MCP servers in TypeScript
- Foxy Contexts ๐๏ธ - Golang library to write MCP Servers declaratively with functional testing included
- gabfr/waha-api-mcp-server ๐ - An MCP server with OpenAPI specs for using the unofficial WhatsApp API (https://waha.devlike.pro/, also open source: https://github.com/devlikeapro/waha)
- Genkit MCP ๐ - Provides integration between Genkit and the Model Context Protocol (MCP).
- http4k MCP SDK ๐ - Functional, testable Kotlin SDK based around the popular http4k web toolkit. Supports the new HTTP Streaming protocol.
- lastmile-ai/mcp-agent ๐ค ๐ - Build effective agents with MCP servers using simple, composable patterns.
- LiteMCP ๐ - A high-level framework for building MCP servers in JavaScript/TypeScript
- marimo-team/codemirror-mcp - CodeMirror extension that implements the Model Context Protocol (MCP) for resource mentions and prompt commands.
- mark3labs/mcp-go ๐๏ธ - Golang SDK for building MCP Servers and Clients.
- mcp-framework ๐ - Fast and elegant TypeScript framework for building MCP servers
- mcp-proxy - ๐ A TypeScript SSE proxy for MCP servers that use stdio transport.
- mcp-rs-template ๐ฆ - MCP CLI server template for Rust
- metoro-io/mcp-golang ๐๏ธ - Golang framework for building MCP servers, focused on type safety.
- mullerhai/sakura-mcp ๐ฆ โ - Scala MCP framework for building effective agents with MCP servers and MCP clients, shaded from modelcontextprotocol.io.
- paulotaylor/voyp-mcp ๐ - VOYP - Voice Over Your Phone MCP Server for making calls.
- poem-web/poem-mcpserver ๐ฆ - MCP Server implementation for Poem.
- quarkiverse/quarkus-mcp-server โ - Java SDK for building MCP servers using Quarkus.
- rectalogic/langchain-mcp ๐ - Provides MCP tool calling support in LangChain, allowing for the integration of MCP tools into LangChain workflows.
- ribeirogab/simple-mcp ๐ - A simple TypeScript library for creating MCP servers.
- salty-flower/ModelContextProtocol.NET #๏ธโฃ ๐ - A C# SDK for building MCP servers on .NET 9 with NativeAOT compatibility โก ๐
- spring-projects-experimental/spring-ai-mcp โ ๐ฑ - Java SDK and Spring Framework integration for building MCP clients and servers with various pluggable transport options.
- Template MCP Server ๐ - A CLI tool to create a new Model Context Protocol server project with TypeScript support, dual transport options, and an extensible structure
- sendaifun/solana-mcp-kit - Solana MCP SDK
Utilities
- boilingdata/mcp-server-and-gw ๐ - An MCP stdio to HTTP SSE transport gateway with example server and MCP client.
- f/MCPTools ๐จ - A command-line development tool for inspecting and interacting with MCP servers with extra features like mocks and proxies.
- flux159/mcp-chat ๐๐ฅ๏ธ - A CLI based client to chat and connect with any MCP server. Useful during development & testing of MCP servers.
- isaacwasserman/mcp-langchain-ts-client ๐ - Use MCP-provided tools in LangChain.js
- kukapay/whattimeisit-mcp ๐ โ๏ธ - A lightweight mcp server that tells you exactly what time it is.
- kukapay/whereami-mcp ๐ โ๏ธ - A lightweight mcp server that tells you exactly where you are based on your current IP.
- kukapay/whoami-mcp ๐ ๐ - A lightweight MCP server that tells you exactly who you are.
- lightconetech/mcp-gateway ๐ - A gateway demo for MCP SSE Server.
- mark3labs/mcphost ๐๏ธ - A CLI host application that enables Large Language Models (LLMs) to interact with external tools through the Model Context Protocol (MCP).
- MCP-Connect ๐ - A tiny tool that enables cloud-based AI services to access local Stdio based MCP servers by HTTP/HTTPS requests.
- SecretiveShell/MCP-Bridge ๐ - An OpenAI middleware proxy to use MCP in any existing OpenAI-compatible client
- sparfenyuk/mcp-proxy ๐ - An MCP stdio to SSE transport gateway.
- TBXark/mcp-proxy ๐๏ธ - An MCP proxy server that aggregates and serves multiple MCP resource servers through a single http server.
- upsonic/gpt-computer-assistant ๐ - A framework to build vertical AI agents
Tips and Tricks
Official prompt to inform LLMs how to use MCP
Want to ask Claude about Model Context Protocol?
Create a Project, then add this file to it:
https://modelcontextprotocol.io/llms-full.txt
Now Claude can answer questions about writing MCP servers and how they work
Use your Neovim like using Cursor AI IDE!
avante.nvim
avante.nvim is a Neovim plugin designed to emulate the behaviour of the Cursor AI IDE. It provides users with AI-driven code suggestions and the ability to apply these recommendations directly to their source files with minimal effort.
[!NOTE]
๐ฅฐ This project is undergoing rapid iterations, and many exciting features will be added successively. Stay tuned!
https://github.com/user-attachments/assets/510e6270-b6cf-459d-9a2f-15b397d1fe53
https://github.com/user-attachments/assets/86140bfd-08b4-483d-a887-1b701d9e37dd
Sponsorship โค๏ธ
If you like this project, please consider supporting me on Patreon, as it helps me to continue maintaining and improving it:
Features
- AI-Powered Code Assistance: Interact with AI to ask questions about your current code file and receive intelligent suggestions for improvement or modification.
- One-Click Application: Quickly apply the AI's suggested changes to your source code with a single command, streamlining the editing process and saving time.
Installation
If you wish to build the binary from source, cargo is required. Otherwise, curl and tar will be used to fetch the prebuilt binary from GitHub.
lazy.nvim (recommended)
{
"yetone/avante.nvim",
event = "VeryLazy",
version = false, -- Never set this value to "*"! Never!
opts = {
-- add any opts here
-- for example
provider = "openai",
openai = {
endpoint = "https://api.openai.com/v1",
model = "gpt-4o", -- your desired model (or use gpt-4o, etc.)
timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
temperature = 0,
max_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
--reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
},
},
-- if you want to build from source then do `make BUILD_FROM_SOURCE=true`
build = "make",
-- build = "powershell -ExecutionPolicy Bypass -File Build.ps1 -BuildFromSource false" -- for windows
dependencies = {
"nvim-treesitter/nvim-treesitter",
"stevearc/dressing.nvim",
"nvim-lua/plenary.nvim",
"MunifTanjim/nui.nvim",
--- The below dependencies are optional,
"echasnovski/mini.pick", -- for file_selector provider mini.pick
"nvim-telescope/telescope.nvim", -- for file_selector provider telescope
"hrsh7th/nvim-cmp", -- autocompletion for avante commands and mentions
"ibhagwan/fzf-lua", -- for file_selector provider fzf
"nvim-tree/nvim-web-devicons", -- or echasnovski/mini.icons
"zbirenbaum/copilot.lua", -- for providers='copilot'
{
-- support for image pasting
"HakonHarnes/img-clip.nvim",
event = "VeryLazy",
opts = {
-- recommended settings
default = {
embed_image_as_base64 = false,
prompt_for_file_name = false,
drag_and_drop = {
insert_mode = true,
},
-- required for Windows users
use_absolute_path = true,
},
},
},
{
-- Make sure to set this up properly if you have lazy=true
'MeanderingProgrammer/render-markdown.nvim',
opts = {
file_types = { "markdown", "Avante" },
},
ft = { "markdown", "Avante" },
},
},
}
vim-plug
" Deps
Plug 'nvim-treesitter/nvim-treesitter'
Plug 'stevearc/dressing.nvim'
Plug 'nvim-lua/plenary.nvim'
Plug 'MunifTanjim/nui.nvim'
Plug 'MeanderingProgrammer/render-markdown.nvim'
" Optional deps
Plug 'hrsh7th/nvim-cmp'
Plug 'nvim-tree/nvim-web-devicons' "or Plug 'echasnovski/mini.icons'
Plug 'HakonHarnes/img-clip.nvim'
Plug 'zbirenbaum/copilot.lua'
" Yay, pass source=true if you want to build from source
Plug 'yetone/avante.nvim', { 'branch': 'main', 'do': 'make' }
autocmd! User avante.nvim lua << EOF
require('avante').setup()
EOF
mini.deps
local add, later, now = MiniDeps.add, MiniDeps.later, MiniDeps.now
add({
source = 'yetone/avante.nvim',
monitor = 'main',
depends = {
'nvim-treesitter/nvim-treesitter',
'stevearc/dressing.nvim',
'nvim-lua/plenary.nvim',
'MunifTanjim/nui.nvim',
'echasnovski/mini.icons'
},
hooks = { post_checkout = function() vim.cmd('make') end }
})
--- optional
add({ source = 'hrsh7th/nvim-cmp' })
add({ source = 'zbirenbaum/copilot.lua' })
add({ source = 'HakonHarnes/img-clip.nvim' })
add({ source = 'MeanderingProgrammer/render-markdown.nvim' })
later(function() require('render-markdown').setup({...}) end)
later(function()
require('img-clip').setup({...}) -- config img-clip
require("copilot").setup({...}) -- setup copilot to your liking
require("avante").setup({...}) -- config for avante.nvim
end)
Packer
-- Required plugins
use 'nvim-treesitter/nvim-treesitter'
use 'stevearc/dressing.nvim'
use 'nvim-lua/plenary.nvim'
use 'MunifTanjim/nui.nvim'
use 'MeanderingProgrammer/render-markdown.nvim'
-- Optional dependencies
use 'hrsh7th/nvim-cmp'
use 'nvim-tree/nvim-web-devicons' -- or use 'echasnovski/mini.icons'
use 'HakonHarnes/img-clip.nvim'
use 'zbirenbaum/copilot.lua'
-- Avante.nvim with build process
use {
'yetone/avante.nvim',
branch = 'main',
run = 'make',
config = function()
require('avante').setup()
end
}
Home Manager
programs.neovim = {
plugins = [
{
plugin = pkgs.vimPlugins.avante-nvim;
type = "lua";
config = ''
require("avante_lib").load()
require("avante").setup()
'' # or builtins.readFile ./plugins/avante.lua;
}
];
};
Nixvim
plugins.avante.enable = true;
plugins.avante.settings = {
# setup options here
};
Lua
-- deps:
require('cmp').setup ({
-- use recommended settings from above
})
require('img-clip').setup ({
-- use recommended settings from above
})
require('copilot').setup ({
-- use recommended settings from above
})
require('render-markdown').setup ({
-- use recommended settings from above
})
require('avante').setup ({
-- Your config here!
})
[!IMPORTANT]
avante.nvim is currently only compatible with Neovim 0.10.1 or later. Please ensure that your Neovim version meets these requirements before proceeding.
[!NOTE]
When loading the plugin synchronously, we recommend requiring it sometime after your colorscheme.
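For example, a minimal sketch of that ordering in a plain (non-lazy) init.lua; the colorscheme name is just a placeholder, and the avante_lib.load() call follows the Home Manager example later in this README:

```lua
-- load the colorscheme first, then set up avante.nvim
vim.cmd.colorscheme("habamax") -- placeholder: use your own colorscheme here
require("avante_lib").load()
require("avante").setup()
```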
[!NOTE]
Recommended Neovim options:
-- views can only be fully collapsed with the global statusline
vim.opt.laststatus = 3
[!TIP]
Any rendering plugins that support markdown should work with Avante as long as you add the supported filetype Avante. See https://github.com/yetone/avante.nvim/issues/175 and this comment for more information.
Default setup configuration
See config.lua#L9 for the full config
Default configuration
{
---@alias Provider "claude" | "openai" | "azure" | "gemini" | "cohere" | "copilot" | string
provider = "claude", -- The provider used in Aider mode or in the planning phase of Cursor Planning Mode
-- WARNING: Since auto-suggestions are a high-frequency operation and therefore expensive,
-- currently designating it as `copilot` provider is dangerous because: https://github.com/yetone/avante.nvim/issues/1048
-- Of course, you can reduce the request frequency by increasing `suggestion.debounce`.
auto_suggestions_provider = "claude",
cursor_applying_provider = nil, -- The provider used in the applying phase of Cursor Planning Mode, defaults to nil, when nil uses Config.provider as the provider for the applying phase
claude = {
endpoint = "https://api.anthropic.com",
model = "claude-3-5-sonnet-20241022",
temperature = 0,
max_tokens = 4096,
},
---Specify the special dual_boost mode
---1. enabled: Whether to enable dual_boost mode. Default to false.
---2. first_provider: The first provider to generate response. Default to "openai".
---3. second_provider: The second provider to generate response. Default to "claude".
---4. prompt: The prompt to generate response based on the two reference outputs.
---5. timeout: Timeout in milliseconds. Default to 60000.
---How it works:
--- When dual_boost is enabled, avante will generate two responses from the first_provider and second_provider respectively. Then use the response from the first_provider as provider1_output and the response from the second_provider as provider2_output. Finally, avante will generate a response based on the prompt and the two reference outputs, with the default Provider as normal.
---Note: This is an experimental feature and may not work as expected.
dual_boost = {
enabled = false,
first_provider = "openai",
second_provider = "claude",
prompt = "Based on the two reference outputs below, generate a response that incorporates elements from both but reflects your own judgment and unique perspective. Do not provide any explanation, just give the response directly. Reference Output 1: [{{provider1_output}}], Reference Output 2: [{{provider2_output}}]",
timeout = 60000, -- Timeout in milliseconds
},
behaviour = {
auto_suggestions = false, -- Experimental stage
auto_set_highlight_group = true,
auto_set_keymaps = true,
auto_apply_diff_after_generation = false,
support_paste_from_clipboard = false,
minimize_diff = true, -- Whether to remove unchanged lines when applying a code block
enable_token_counting = true, -- Whether to enable token counting. Default to true.
enable_cursor_planning_mode = false, -- Whether to enable Cursor Planning Mode. Default to false.
enable_claude_text_editor_tool_mode = false, -- Whether to enable Claude Text Editor Tool Mode.
},
mappings = {
--- @class AvanteConflictMappings
diff = {
ours = "co",
theirs = "ct",
all_theirs = "ca",
both = "cb",
cursor = "cc",
next = "]x",
prev = "[x",
},
suggestion = {
accept = "<M-l>",
next = "<M-]>",
prev = "<M-[>",
dismiss = "<C-]>",
},
jump = {
next = "]]",
prev = "[[",
},
submit = {
normal = "<CR>",
insert = "<C-s>",
},
cancel = {
normal = { "<C-c>", "<Esc>", "q" },
insert = { "<C-c>" },
},
sidebar = {
apply_all = "A",
apply_cursor = "a",
retry_user_request = "r",
edit_user_request = "e",
switch_windows = "<Tab>",
reverse_switch_windows = "<S-Tab>",
remove_file = "d",
add_file = "@",
close = { "<Esc>", "q" },
close_from_input = nil, -- e.g., { normal = "<Esc>", insert = "<C-d>" }
},
},
hints = { enabled = true },
windows = {
---@type "right" | "left" | "top" | "bottom"
position = "right", -- the position of the sidebar
wrap = true, -- similar to vim.o.wrap
width = 30, -- default % based on available width
sidebar_header = {
enabled = true, -- true, false to enable/disable the header
align = "center", -- left, center, right for title
rounded = true,
},
input = {
prefix = "> ",
height = 8, -- Height of the input window in vertical layout
},
edit = {
border = "rounded",
start_insert = true, -- Start insert mode when opening the edit window
},
ask = {
floating = false, -- Open the 'AvanteAsk' prompt in a floating window
start_insert = true, -- Start insert mode when opening the ask window
border = "rounded",
---@type "ours" | "theirs"
focus_on_apply = "ours", -- which diff to focus after applying
},
},
highlights = {
---@type AvanteConflictHighlights
diff = {
current = "DiffText",
incoming = "DiffAdd",
},
},
--- @class AvanteConflictUserConfig
diff = {
autojump = true,
---@type string | fun(): any
list_opener = "copen",
--- Override the 'timeoutlen' setting while hovering over a diff (see :help timeoutlen).
--- Helps to avoid entering operator-pending mode with diff mappings starting with `c`.
--- Disable by setting to -1.
override_timeoutlen = 500,
},
suggestion = {
debounce = 600,
throttle = 600,
},
}
Blink.cmp users
For blink.cmp users (an nvim-cmp alternative), see the instructions below for configuration. This is achieved by emulating nvim-cmp using blink.compat, or you can use Kaiser-Yang/blink-cmp-avante.
Lua
file_selector = {
--- @alias FileSelectorProvider "native" | "fzf" | "mini.pick" | "snacks" | "telescope" | string | fun(params: avante.file_selector.IParams|nil): nil
provider = "fzf",
-- Options override for custom providers
provider_opts = {},
}
To create a customized file_selector, you can specify a custom function that launches a picker to select items and passes the selected items to the handler callback.
file_selector = {
---@param params avante.file_selector.IParams
provider = function(params)
local filepaths = params.filepaths ---@type string[]
local title = params.title ---@type string
local handler = params.handler ---@type fun(selected_filepaths: string[]|nil): nil
-- Launch your customized picker with the items built from `filepaths`, then in the `on_confirm` callback,
-- pass the selected items (convert back to file paths) to the `handler` function.
local items = __your_items_formatter__(filepaths)
__your_picker__({
items = items,
on_cancel = function()
handler(nil)
end,
on_confirm = function(selected_items)
local selected_filepaths = {}
for _, item in ipairs(selected_items) do
table.insert(selected_filepaths, item.filepath)
end
handler(selected_filepaths)
end
})
end,
---below is optional
provider_opts = {
---@param params avante.file_selector.opts.IGetFilepathsParams
get_filepaths = function(params)
local cwd = params.cwd ---@type string
local selected_filepaths = params.selected_filepaths ---@type string[]
local cmd = string.format("fd --base-directory '%s' --hidden", vim.fn.fnameescape(cwd))
local output = vim.fn.system(cmd)
local filepaths = vim.split(output, "\n", { trimempty = true })
return vim
.iter(filepaths)
:filter(function(filepath)
return not vim.tbl_contains(selected_filepaths, filepath)
end)
:totable()
end
}
}
Choose a selector other than native (the default), as it currently has an issue. For LazyVim users, copy the full config for blink.cmp from the website or extend the options:
compat = {
"avante_commands",
"avante_mentions",
"avante_files",
}
For other users just add a custom provider
default = {
...
"avante_commands",
"avante_mentions",
"avante_files",
}
providers = {
avante_commands = {
name = "avante_commands",
module = "blink.compat.source",
score_offset = 90, -- show at a higher priority than lsp
opts = {},
},
avante_files = {
name = "avante_files",
module = "blink.compat.source",
score_offset = 100, -- show at a higher priority than lsp
opts = {},
},
avante_mentions = {
name = "avante_mentions",
module = "blink.compat.source",
score_offset = 1000, -- show at a higher priority than lsp
opts = {},
}
...
}
Usage
Given its early stage, avante.nvim
currently supports the following basic functionalities:
[!IMPORTANT]
Avante only supports Claude and OpenAI (and its variants, including Azure) out of the box due to their high-quality code generation. For all OpenAI-compatible providers, see the wiki for more details.
[!IMPORTANT]
Due to the poor performance of other models, avante.nvim only recommends using the claude-3.5-sonnet model. All features can only be guaranteed to work properly with the claude-3.5-sonnet model. We do not accept changes to the code or prompts to accommodate other models, as that would greatly increase our maintenance costs. We hope everyone can understand. Thank you!
[!IMPORTANT]
Since avante.nvim now supports cursor planning mode, the above statement is no longer valid! avante.nvim now supports most models! If you encounter issues with normal usage, please try enabling cursor planning mode.
[!IMPORTANT]
For the most consistency between Neovim sessions, it is recommended to set the environment variables in your shell file. By default, Avante will prompt you at startup to input the API key for the provider you have selected. For Claude:
export ANTHROPIC_API_KEY=your-api-key
For OpenAI:
export OPENAI_API_KEY=your-api-key
For Azure OpenAI:
export AZURE_OPENAI_API_KEY=your-api-key
For Amazon Bedrock:
export BEDROCK_KEYS=aws_access_key_id,aws_secret_access_key,aws_region[,aws_session_token]
Note: The aws_session_token is optional and only needed when using temporary AWS credentials
- Open a code file in Neovim.
- Use the :AvanteAsk command to query the AI about the code.
- Review the AI's suggestions.
- Apply the recommended changes directly to your code with a simple command or key binding.
Note: The plugin is still under active development, and both its functionality and interface are subject to significant changes. Expect some rough edges and instability as the project evolves.
Key Bindings
The following key bindings are available for use with avante.nvim:
Key Binding | Description |
---|---|
<Leader>aa | show sidebar |
<Leader>at | toggle sidebar visibility |
<Leader>ar | refresh sidebar |
<Leader>af | switch sidebar focus |
<Leader>a? | select model |
<Leader>ae | edit selected blocks |
<Leader>aS | stop current AI request |
co | choose ours |
ct | choose theirs |
ca | choose all theirs |
c0 | choose none |
cb | choose both |
cc | choose cursor |
]x | move to next conflict |
[x | move to previous conflict |
[[ | jump to previous codeblock (results window) |
]] | jump to next codeblock (results window) |
[!NOTE]
If you are using lazy.nvim, then all keymaps here will be set safely, meaning if <leader>aa is already bound, avante.nvim won't bind this mapping. In this case, the user is responsible for setting up their own. See notes on keymaps for more details.
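For instance, a minimal sketch of setting up your own mapping when the default is taken; the <leader>va key is an arbitrary illustration, and the avante.api.ask() call mirrors the Neotree example below:

```lua
-- bind a custom key to open the Avante sidebar when <leader>aa is taken
vim.keymap.set("n", "<leader>va", function()
  require("avante.api").ask()
end, { desc = "avante: ask" })
```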
Neotree shortcut
In the neotree sidebar, you can also add a new keyboard shortcut to quickly add file/folder
to Avante Selected Files
.
Neotree configuration
return {
{
'nvim-neo-tree/neo-tree.nvim',
config = function()
require('neo-tree').setup({
filesystem = {
commands = {
avante_add_files = function(state)
local node = state.tree:get_node()
local filepath = node:get_id()
local relative_path = require('avante.utils').relative_path(filepath)
local sidebar = require('avante').get()
local open = sidebar:is_open()
-- ensure avante sidebar is open
if not open then
require('avante.api').ask()
sidebar = require('avante').get()
end
sidebar.file_selector:add_selected_file(relative_path)
-- remove neo tree buffer
if not open then
sidebar.file_selector:remove_selected_file('neo-tree filesystem [1]')
end
end,
},
window = {
mappings = {
['oa'] = 'avante_add_files',
},
},
},
})
end,
},
}
Commands
Command | Description | Examples |
---|---|---|
:AvanteAsk [question] [position] |
Ask AI about your code. The optional position sets the window position and ask enables/disables direct asking mode |
:AvanteAsk position=right Refactor this code here |
:AvanteBuild |
Build dependencies for the project | |
:AvanteChat |
Start a chat session with AI about your codebase. Default is ask=false |
|
:AvanteClear |
Clear the chat history | |
:AvanteEdit |
Edit the selected code blocks | |
:AvanteFocus |
Switch focus to/from the sidebar | |
:AvanteRefresh |
Refresh all Avante windows | |
:AvanteStop |
Stop the current AI request | |
:AvanteSwitchProvider |
Switch AI provider (e.g. openai) | |
:AvanteShowRepoMap |
Show repo map for project's structure | |
:AvanteToggle |
Toggle the Avante sidebar | |
:AvanteModels |
Show model list |
Highlight Groups
Highlight Group | Description | Notes |
---|---|---|
AvanteTitle | Title | |
AvanteReversedTitle | Used for rounded border | |
AvanteSubtitle | Selected code title | |
AvanteReversedSubtitle | Used for rounded border | |
AvanteThirdTitle | Prompt title | |
AvanteReversedThirdTitle | Used for rounded border | |
AvanteConflictCurrent | Current conflict highlight | Default to Config.highlights.diff.current |
AvanteConflictIncoming | Incoming conflict highlight | Default to Config.highlights.diff.incoming |
AvanteConflictCurrentLabel | Current conflict label highlight | Default to shade of AvanteConflictCurrent |
AvanteConflictIncomingLabel | Incoming conflict label highlight | Default to shade of AvanteConflictIncoming |
AvantePopupHint | Usage hints in popup menus | |
AvanteInlineHint | The end-of-line hint displayed in visual mode |
See highlights.lua for more information
Ollama
ollama is a first-class provider for avante.nvim. You can use it by setting provider = "ollama" in the configuration, and setting the model field in ollama to the model you want to use. For example:
provider = "ollama",
ollama = {
model = "qwq:32b",
}
[!NOTE] If you use ollama, the code planning effect may not be ideal, so it is strongly recommended that you enable cursor-planning-mode
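Putting the two notes together, a minimal sketch of an ollama setup with cursor planning mode enabled (the model name follows the example above, and enable_cursor_planning_mode comes from the behaviour section of the default config):

```lua
provider = "ollama",
ollama = {
  model = "qwq:32b",
},
behaviour = {
  enable_cursor_planning_mode = true, -- recommended when using ollama models
},
```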
AiHubMix
AiHubMix is a built-in provider for avante.nvim. You can register an account on the AiHubMix official website, then create an API Key within the website, and set this API Key in your environment variables:
export AIHUBMIX_API_KEY=your_api_key
Then in your configuration, set provider = "aihubmix", and set the model field to the model name you want to use, for example:
provider = "aihubmix",
aihubmix = {
model = "gpt-4o-2024-11-20",
}
Custom providers
Avante provides a set of default providers, but users can also create their own providers.
For more information, see Custom Providers
Cursor planning mode
avante.nvim has always used Aider's method for planning and applying, but its prompts are very picky about models, requiring ones like claude-3.5-sonnet or gpt-4o to work properly.
Therefore, I have adopted Cursor's method to implement planning and applying. For details on the implementation, please refer to cursor-planning-mode.md
RAG Service
Avante provides a RAG service, a tool for obtaining the context the AI needs to generate code. By default, it is not enabled. You can enable it this way:
rag_service = {
enabled = true, -- Enables the RAG service
host_mount = os.getenv("HOME"), -- Host mount path for the rag service
provider = "openai", -- The provider to use for RAG service (e.g. openai or ollama)
llm_model = "", -- The LLM model to use for RAG service
embed_model = "", -- The embedding model to use for RAG service
endpoint = "https://api.openai.com/v1", -- The API endpoint for RAG service
},
If your rag_service provider is openai, then you need to set the OPENAI_API_KEY environment variable!

If your rag_service provider is ollama, you need to set the endpoint to http://localhost:11434 (note there is no /v1 at the end) or any address of your own ollama server.

If your rag_service provider is ollama, when llm_model is empty, it defaults to llama3, and when embed_model is empty, it defaults to nomic-embed-text. Please make sure these models are available in your ollama server.
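For example, a minimal sketch of an ollama-backed rag_service, assuming a default local ollama server (the model values simply spell out the defaults described above):

```lua
rag_service = {
  enabled = true,
  host_mount = os.getenv("HOME"),
  provider = "ollama",
  llm_model = "llama3",             -- the default when left empty
  embed_model = "nomic-embed-text", -- the default when left empty
  endpoint = "http://localhost:11434", -- note: no /v1 suffix
},
```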
Additionally, RAG Service also depends on Docker! (For macOS users, OrbStack is recommended as a Docker alternative).
host_mount is the path that will be mounted into the container; the default is the home directory. The mount is required for the RAG service to access files on the host machine. It is up to you whether to mount the whole / directory, just the project directory, or the home directory. If you plan on using Avante and the RAG service for projects stored outside your home directory, you will need to set host_mount to the root directory of your file system.
The mount will be read only.
After changing the rag_service configuration, you need to manually delete the rag_service container to ensure the new configuration is used: docker rm -fv avante-rag-service
Web Search Engines
Avante's tools include some web search engines; the currently supported ones are Tavily, SerpApi, SearchAPI, Google, Kagi, and Brave Search. The default is Tavily, and it can be changed by configuring Config.web_search_engine.provider:
web_search_engine = {
provider = "tavily", -- tavily, serpapi, searchapi, google or kagi
}
Environment variables required for providers:
- Tavily: TAVILY_API_KEY
- SerpApi: SERPAPI_API_KEY
- SearchAPI: SEARCHAPI_API_KEY
- Google: GOOGLE_SEARCH_API_KEY as the API key, GOOGLE_SEARCH_ENGINE_ID as the search engine ID
- Kagi: KAGI_API_KEY as the API Token
- Brave Search: BRAVE_API_KEY as the API key
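As an example, switching engines is a one-line config change plus the matching environment variable; a minimal sketch for Brave Search (the provider string "brave" is an assumption inferred from the environment-variable list above):

```lua
-- requires BRAVE_API_KEY to be set in the environment
web_search_engine = {
  provider = "brave", -- assumed provider key for Brave Search
},
```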
Disable Tools
Avante enables tools by default, but some LLM models do not support tools. You can disable tools by setting disable_tools = true for the provider. For example:
{
claude = {
endpoint = "https://api.anthropic.com",
model = "claude-3-5-sonnet-20241022",
timeout = 30000, -- Timeout in milliseconds
temperature = 0,
max_tokens = 4096,
disable_tools = true, -- disable tools!
},
}
In case you want to ban some tools to avoid their usage (like Claude 3.7 overusing the python tool), you can disable just the specific tools:
{
disabled_tools = { "python" },
}
Tool list
rag_search, python, git_diff, git_commit, list_files, search_files, search_keyword, read_file_toplevel_symbols, read_file, create_file, rename_file, delete_file, create_dir, rename_dir, delete_dir, bash, web_search, fetch
Custom Tools
Avante allows you to define custom tools that can be used by the AI during code generation and analysis. These tools can execute shell commands, run scripts, or perform any custom logic you need.
Example: Go Test Runner
Here's an example of a custom tool that runs Go unit tests:
{
custom_tools = {
{
name = "run_go_tests", -- Unique name for the tool
description = "Run Go unit tests and return results", -- Description shown to AI
command = "go test -v ./...", -- Shell command to execute
param = { -- Input parameters (optional)
type = "table",
fields = {
{
name = "target",
description = "Package or directory to test (e.g. './pkg/...' or './internal/pkg')",
type = "string",
optional = true,
},
},
},
returns = { -- Expected return values
{
name = "result",
description = "Result of the fetch",
type = "string",
},
{
name = "error",
description = "Error message if the fetch was not successful",
type = "string",
optional = true,
},
},
func = function(params, on_log, on_complete) -- Custom function to execute
local target = params.target or "./..."
return vim.fn.system(string.format("go test -v %s", target))
end,
},
},
}
MCP
Now you can integrate MCP functionality for Avante through mcphub.nvim. For detailed documentation, please refer to mcphub.nvim.
Claude Text Editor Tool Mode
Avante leverages Claude Text Editor Tool to provide a more elegant code editing experience. You can now enable this feature by setting enable_claude_text_editor_tool_mode to true in the behaviour configuration:
{
behaviour = {
enable_claude_text_editor_tool_mode = true,
},
}
[!NOTE] To enable Claude Text Editor Tool Mode, you must use the claude-3-5-sonnet-* or claude-3-7-sonnet-* model with the claude provider! This feature is not supported by any other models!
Custom prompts
By default, avante.nvim provides four different modes to interact with, each with its own prompt:
- planning: used with require("avante").toggle() on the sidebar
- editing: used with require("avante").edit() on a selected code block
- suggesting: used with require("avante").get_suggestion():suggest() on the Tab flow
- cursor-planning: used with require("avante").toggle() on the Tab flow, but only when cursor planning mode is enabled
Users can customize the system prompt via Config.system_prompt. We recommend setting it in a custom autocmd, depending on your needs:
vim.api.nvim_create_autocmd("User", {
pattern = "ToggleMyPrompt",
callback = function() require("avante.config").override({system_prompt = "MY CUSTOM SYSTEM PROMPT"}) end,
})
vim.keymap.set("n", "<leader>am", function() vim.api.nvim_exec_autocmds("User", { pattern = "ToggleMyPrompt" }) end, { desc = "avante: toggle my prompt" })
If you wish to customize the prompts for each mode, avante.nvim will check the project root of the given buffer for files matching the pattern *.{mode}.avanterules.
The rules for root hierarchy:
- lsp workspace folders
- lsp root_dir
- root pattern of filename of the current buffer
- root pattern of cwd
Example folder structure for custom prompt
If you have the following structure:
.
โโโ .git/
โโโ typescript.planning.avanterules
โโโ snippets.editing.avanterules
โโโ suggesting.avanterules
โโโ src/
- typescript.planning.avanterules will be used for planning mode
- snippets.editing.avanterules will be used for editing mode
- suggesting.avanterules will be used for suggesting mode
[!important]
*.avanterules is a Jinja template file, which will be rendered using minijinja. See templates for examples of how to extend the current templates.
TODOs
- Chat with current file
- Apply diff patch
- Chat with the selected block
- Slash commands
- Edit the selected block
- Smart Tab (Cursor Flow)
- Chat with project (you can use @codebase to chat with the whole project)
- Chat with selected files
- Tool use
- MCP
- Better codebase indexing
Roadmap
- Enhanced AI Interactions: Improve the depth of AI analysis and recommendations for more complex coding scenarios.
- LSP + Tree-sitter + LLM Integration: Integrate with LSP and Tree-sitter and LLM to provide more accurate and powerful code suggestions and analysis.
Contributing
Contributions to avante.nvim are welcome! If you're interested in helping out, please feel free to submit pull requests or open issues. Before contributing, ensure that your code has been thoroughly tested.
See wiki for more recipes and tricks.
Acknowledgments
We would like to express our heartfelt gratitude to the contributors of the following open-source projects, whose code has provided invaluable inspiration and reference for the development of avante.nvim:
| Nvim Plugin | License | Functionality | Location |
|---|---|---|---|
| git-conflict.nvim | No License | Diff comparison functionality | lua/avante/diff.lua |
| ChatGPT.nvim | Apache 2.0 License | Calculation of tokens count | lua/avante/utils/tokens.lua |
| img-clip.nvim | MIT License | Clipboard image support | lua/avante/clipboard.lua |
| copilot.lua | MIT License | Copilot support | lua/avante/providers/copilot.lua |
| jinja.vim | MIT License | Template filetype support | syntax/jinja.vim |
| codecompanion.nvim | MIT License | Secrets logic support | lua/avante/providers/init.lua |
| aider | Apache 2.0 License | Planning mode user prompt | lua/avante/templates/planning.avanterules |
The high quality and ingenuity of these projects' source code have been immensely beneficial throughout our development process. We extend our sincere thanks and respect to the authors and contributors of these projects. It is the selfless dedication of the open-source community that drives projects like avante.nvim forward.
Business Sponsors
Meshy AI - The #1 AI 3D Model Generator for Creators
BabelTower API - No account needed, use any model instantly
License
avante.nvim is licensed under the Apache 2.0 License. For more details, please refer to the LICENSE file.
Star History
Node Version Manager - POSIX-compliant bash script to manage multiple active node.js versions
Node Version Manager

Table of Contents
- Intro
- About
- Installing and Updating
- Usage
- Running Tests
- Environment variables
- Bash Completion
- Compatibility Issues
- Installing nvm on Alpine Linux
- Uninstalling / Removal
- Docker For Development Environment
- Problems
- macOS Troubleshooting
- WSL Troubleshooting
- Maintainers
- Project Support
- Enterprise Support
- License
- Copyright notice
Intro
nvm
allows you to quickly install and use different versions of node via the command line.
Example:
$ nvm use 16
Now using node v16.9.1 (npm v7.21.1)
$ node -v
v16.9.1
$ nvm use 14
Now using node v14.18.0 (npm v6.14.15)
$ node -v
v14.18.0
$ nvm install 12
Now using node v12.22.6 (npm v6.14.5)
$ node -v
v12.22.6
Simple as that!
About
nvm is a version manager for node.js, designed to be installed per-user, and invoked per-shell. nvm
works on any POSIX-compliant shell (sh, dash, ksh, zsh, bash), in particular on these platforms: Unix, macOS, and Windows WSL.
Installing and Updating
Install & Update Script
To install or update nvm, you should run the install script. To do that, you may either download and run the script manually, or use the following cURL or Wget command:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.2/install.sh | bash
wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.2/install.sh | bash
Running either of the above commands downloads a script and runs it. The script clones the nvm repository to ~/.nvm
, and attempts to add the source lines from the snippet below to the correct profile file (~/.bashrc
, ~/.bash_profile
, ~/.zshrc
, or ~/.profile
). If you find the install script is updating the wrong profile file, set the $PROFILE
env var to the profile file's path, and then rerun the installation script.
export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
Additional Notes
- If the environment variable $XDG_CONFIG_HOME is present, it will place the nvm files there.
- You can add --no-use to the end of the above script to postpone using nvm until you manually use it:
export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" --no-use # This loads nvm, without auto-using the default version
- You can customize the install source, directory, profile, and version using the NVM_SOURCE, NVM_DIR, PROFILE, and NODE_VERSION variables. E.g.: curl ... | NVM_DIR="path/to/nvm". Ensure that NVM_DIR does not contain a trailing slash.
- The installer can use git, curl, or wget to download nvm, whichever is available.
- You can instruct the installer to not edit your shell config (for example if you already get completions via a zsh nvm plugin) by setting PROFILE=/dev/null before running the install.sh script. Here's an example one-line command to do that:
PROFILE=/dev/null bash -c 'curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.2/install.sh | bash'
Installing in Docker
When invoking bash as a non-interactive shell, like in a Docker container, none of the regular profile files are sourced. In order to use nvm
, node
, and npm
like normal, you can instead specify the special BASH_ENV
variable, which bash sources when invoked non-interactively.
# Use bash for the shell
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# Create a script file sourced by both interactive and non-interactive bash shells
ENV BASH_ENV /home/user/.bash_env
RUN touch "${BASH_ENV}"
RUN echo '. "${BASH_ENV}"' >> ~/.bashrc
# Download and install nvm
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.2/install.sh | PROFILE="${BASH_ENV}" bash
RUN echo node > .nvmrc
RUN nvm install
Installing in Docker for CICD-Jobs
This approach is more robust and works in CI/CD jobs. It can be run in both interactive and non-interactive containers. See https://github.com/nvm-sh/nvm/issues/3531.
FROM ubuntu:latest
ARG NODE_VERSION=20
# install curl
RUN apt update && apt install curl -y
# install nvm
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.2/install.sh | bash
# set env
ENV NVM_DIR=/root/.nvm
# install node
RUN bash -c "source $NVM_DIR/nvm.sh && nvm install $NODE_VERSION"
# set ENTRYPOINT for reloading nvm-environment
ENTRYPOINT ["bash", "-c", "source $NVM_DIR/nvm.sh && exec \"$@\"", "--"]
# set cmd to bash
CMD ["/bin/bash"]
This example defaults to installing Node.js version 20.x.y. You can optionally override the version with docker build args like:
docker build -t nvmimage --build-arg NODE_VERSION=19 .
After creating the image, you can start a container interactively and run commands, for example:
docker run --rm -it nvmimage
root@0a6b5a237c14:/# nvm -v
0.40.2
root@0a6b5a237c14:/# node -v
v19.9.0
root@0a6b5a237c14:/# npm -v
9.6.3
Noninteractive example:
user@host:/tmp/test $ docker run --rm -it nvmimage node -v
v19.9.0
user@host:/tmp/test $ docker run --rm -it nvmimage npm -v
9.6.3
Troubleshooting on Linux
On Linux, after running the install script, if you get nvm: command not found
or see no feedback from your terminal after you type command -v nvm
, simply close your current terminal, open a new terminal, and try verifying again. Alternatively, you can run the following commands for the different shells on the command line:
bash: source ~/.bashrc
zsh: source ~/.zshrc
ksh: . ~/.profile
These should pick up the nvm
command.
Troubleshooting on macOS
Since OS X 10.9, /usr/bin/git
has been preset by Xcode command line tools, which means we can't properly detect if Git is installed or not. You need to manually install the Xcode command line tools before running the install script, otherwise, it'll fail. (see #1782)
If you get nvm: command not found
after running the install script, one of the following might be the reason:
- Since macOS 10.15, the default shell is zsh and nvm will look for .zshrc to update, which is not installed by default. Create one with touch ~/.zshrc and run the install script again.
- If you use bash, the previous default shell, your system may not have .bash_profile or .bashrc files where the command is set up. Create one of them with touch ~/.bash_profile or touch ~/.bashrc and run the install script again. Then, run . ~/.bash_profile or . ~/.bashrc to pick up the nvm command.
- You have previously used bash, but you have zsh installed. You need to manually add these lines to ~/.zshrc and run . ~/.zshrc.
- You might need to restart your terminal instance or run . ~/.nvm/nvm.sh. Restarting your terminal, opening a new tab/window, or running the source command will load the command and the new configuration.
- If the above didn't help, you might need to restart your terminal instance. Try opening a new tab/window in your terminal and retry.
If the above doesn't fix the problem, you may try the following:
- If you use bash, it may be that your .bash_profile (or ~/.profile) does not source your ~/.bashrc properly. You could fix this by adding source ~/<your_profile_file> to it or following the next step below.
- Try adding the snippet from the install section, which finds the correct nvm directory and loads nvm, to your usual profile (~/.bash_profile, ~/.zshrc, ~/.profile, or ~/.bashrc).
- For more information about this issue and possible workarounds, please refer here.
Note: For Macs with the Apple Silicon chip, node started offering arm64 arch Darwin packages with v16.0.0, and experimental arm64 support when compiling from source with v14.17.0. If you are facing issues installing node using nvm, you may want to update to one of those versions or later.
Ansible
You can use a task:
- name: Install nvm
ansible.builtin.shell: >
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.2/install.sh | bash
args:
creates: "{{ ansible_env.HOME }}/.nvm/nvm.sh"
Verify Installation
To verify that nvm has been installed, do:
command -v nvm
which should output nvm
if the installation was successful. Please note that which nvm
will not work, since nvm
is a sourced shell function, not an executable binary.
Note: On Linux, after running the install script, if you get nvm: command not found
or see no feedback from your terminal after you type command -v nvm
, simply close your current terminal, open a new terminal, and try verifying again.
Important Notes
If you're running a system without a prepackaged binary available, which means you're going to install node or io.js from its source code, you need to make sure your system has a C++ compiler. For OS X, Xcode will work; for Debian/Ubuntu based GNU/Linux, the build-essential and libssl-dev packages work.
Note: nvm
also supports Windows in some cases. It should work through WSL (Windows Subsystem for Linux) depending on the version of WSL. It should also work with GitBash (MSYS) or Cygwin. Otherwise, for Windows, a few alternatives exist, which are neither supported nor developed by us:
Note: nvm
does not support Fish either (see #303). Alternatives exist, which are neither supported nor developed by us:
- bass allows you to use utilities written for Bash in fish shell
- fast-nvm-fish only works with version numbers (not aliases) but doesn't significantly slow your shell startup
- plugin-nvm plugin for Oh My Fish, which makes nvm and its completions available in fish shell
- nvm.fish - The Node.js version manager you'll adore, crafted just for Fish
- fish-nvm - Wrapper around nvm for fish, delays sourcing nvm until it's actually used.
Note: We still have some problems with FreeBSD, because there is no official pre-built binary for FreeBSD, and building from source may need patches; see the issue ticket:
Note: On OS X, if you do not have Xcode installed and you do not wish to download the ~4.3GB file, you can install just the Command Line Tools. You can check out this blog post on how to do just that:
Note: On OS X, if you have/had a "system" node installed and want to install modules globally, keep in mind that:
- When using nvm you do not need sudo to globally install a module with npm -g, so instead of doing sudo npm install -g grunt, do npm install -g grunt instead
- If you have an ~/.npmrc file, make sure it does not contain any prefix settings (which are not compatible with nvm)
- You can (but should not?) keep your previous "system" node install, but nvm will only be available to your user account (the one used to install nvm). This might cause version mismatches, as other users will be using /usr/local/lib/node_modules/* VS your user account using ~/.nvm/versions/node/vX.X.X/lib/node_modules/*
Homebrew installation is not supported. If you have issues with homebrew-installed nvm
, please brew uninstall
it, and install it using the instructions below, before filing an issue.
Note: If you're using zsh
you can easily install nvm
as a zsh plugin. Install zsh-nvm
and run nvm upgrade
to upgrade (you can set NVM_AUTO_USE=true
to have it automatically detect and use .nvmrc
files).
Note: Git versions before v1.7 may face a problem of cloning nvm
source from GitHub via the https protocol; there is also different behavior of git before v1.6, and git prior to v1.7.10 cannot clone tags, so the minimum required git version is v1.7.10. If you are interested in the problem we mentioned here, please refer to GitHub's HTTPS cloning errors article.
Git Install
If you have git
installed (requires git v1.7.10+):
- Clone this repo in the root of your user profile: cd ~/ from anywhere, then git clone https://github.com/nvm-sh/nvm.git .nvm
- cd ~/.nvm and check out the latest version with git checkout v0.40.2
- Activate nvm by sourcing it from your shell: . ./nvm.sh
Now add these lines to your ~/.bashrc
, ~/.profile
, or ~/.zshrc
file to have it automatically sourced upon login: (you may have to add to more than one of the above files)
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
Manual Install
For a fully manual install, execute the following lines to first clone the nvm
repository into $HOME/.nvm
, and then load nvm
:
export NVM_DIR="$HOME/.nvm" && (
git clone https://github.com/nvm-sh/nvm.git "$NVM_DIR"
cd "$NVM_DIR"
git checkout `git describe --abbrev=0 --tags --match "v[0-9]*" $(git rev-list --tags --max-count=1)`
) && \. "$NVM_DIR/nvm.sh"
Now add these lines to your ~/.bashrc
, ~/.profile
, or ~/.zshrc
file to have it automatically sourced upon login: (you may have to add to more than one of the above files)
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
Manual Upgrade
For manual upgrade with git
(requires git v1.7.10+):
- change to the
$NVM_DIR
- pull down the latest changes
- check out the latest version
- activate the new version
(
cd "$NVM_DIR"
git fetch --tags origin
git checkout `git describe --abbrev=0 --tags --match "v[0-9]*" $(git rev-list --tags --max-count=1)`
) && \. "$NVM_DIR/nvm.sh"
Usage
To download, compile, and install the latest release of node, do this:
nvm install node # "node" is an alias for the latest version
To install a specific version of node:
nvm install 14.7.0 # or 16.3.0, 12.22.1, etc
To set an alias:
nvm alias my_alias v14.4.0
Make sure that your alias does not contain any spaces or slashes.
The first version installed becomes the default. New shells will start with the default version of node (see the nvm alias default examples below).
You can list available versions using ls-remote
:
nvm ls-remote
And then in any new shell just use the installed version:
nvm use node
Or you can just run it:
nvm run node --version
Or, you can run any arbitrary command in a subshell with the desired version of node:
nvm exec 4.2 node --version
You can also get the path to the executable to where it was installed:
nvm which 12.22
In place of a version pointer like "14.7" or "16.3" or "12.22.1", you can use the following special default aliases with nvm install, nvm use, nvm run, nvm exec, nvm which, etc.:
- node: this installs the latest version of node
- iojs: this installs the latest version of io.js
- stable: this alias is deprecated, and only truly applies to node v0.12 and earlier. Currently, this is an alias for node.
- unstable: this alias points to node v0.11 - the last "unstable" node release, since post-1.0, all node versions are stable. (In SemVer, versions communicate breakage, not stability.)
Long-term Support
Node has a schedule for long-term support (LTS). You can reference LTS versions in aliases and .nvmrc files with the notation lts/* for the latest LTS, and lts/argon for LTS releases from the "argon" line, for example. In addition, the following commands support LTS arguments:
- nvm install --lts / nvm install --lts=argon / nvm install 'lts/*' / nvm install lts/argon
- nvm uninstall --lts / nvm uninstall --lts=argon / nvm uninstall 'lts/*' / nvm uninstall lts/argon
- nvm use --lts / nvm use --lts=argon / nvm use 'lts/*' / nvm use lts/argon
- nvm exec --lts / nvm exec --lts=argon / nvm exec 'lts/*' / nvm exec lts/argon
- nvm run --lts / nvm run --lts=argon / nvm run 'lts/*' / nvm run lts/argon
- nvm ls-remote --lts / nvm ls-remote --lts=argon / nvm ls-remote 'lts/*' / nvm ls-remote lts/argon
- nvm version-remote --lts / nvm version-remote --lts=argon / nvm version-remote 'lts/*' / nvm version-remote lts/argon
Any time your local copy of nvm
connects to https://nodejs.org, it will re-create the appropriate local aliases for all available LTS lines. These aliases (stored under $NVM_DIR/alias/lts
), are managed by nvm
, and you should not modify, remove, or create these files - expect your changes to be undone, and expect meddling with these files to cause bugs that will likely not be supported.
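For instance, you can inspect (but not edit) the auto-managed alias files; the alias names in the comment are illustrative and depend on the LTS lines available when nvm last synced:
ls "$NVM_DIR/alias/lts"
# argon  boron  carbon  dubnium  erbium  ...  (illustrative output)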
To get the latest LTS version of node and migrate your existing installed packages, use
nvm install --reinstall-packages-from=current 'lts/*'
Migrating Global Packages While Installing
If you want to install a new version of Node.js and migrate npm packages from a previous version:
nvm install --reinstall-packages-from=node node
This will first use "nvm version node" to identify the current version you're migrating packages from. Then it resolves the new version to install from the remote server and installs it. Lastly, it runs "nvm reinstall-packages" to reinstall the npm packages from your prior version of Node to the new one.
You can also install and migrate npm packages from specific versions of Node like this:
nvm install --reinstall-packages-from=5 6
nvm install --reinstall-packages-from=iojs v4.2
Note that reinstalling packages explicitly does not update the npm version; this is to ensure that npm isn't accidentally upgraded to a broken version for the new node version.
To update npm at the same time add the --latest-npm
flag, like this:
nvm install --reinstall-packages-from=default --latest-npm 'lts/*'
or, you can at any time run the following command to get the latest supported npm version on the current node version:
nvm install-latest-npm
If you've already gotten an error to the effect of "npm does not support Node.js", you'll need to (1) revert to a previous node version (nvm ls
& nvm use <your latest _working_ version from the ls>
), (2) delete the newly created node version (nvm uninstall <your _broken_ version of node from the ls>
), then (3) rerun your nvm install
with the --latest-npm
flag.
Default Global Packages From File While Installing
If you have a list of default packages you want installed every time you install a new version, we support that too -- just add the package names, one per line, to the file $NVM_DIR/default-packages
. You can add anything npm would accept as a package argument on the command line.
# $NVM_DIR/default-packages
rimraf
[email protected]
stevemao/left-pad
io.js
If you want to install io.js:
nvm install iojs
If you want to install a new version of io.js and migrate npm packages from a previous version:
nvm install --reinstall-packages-from=iojs iojs
The same guidelines mentioned for migrating npm packages in node are applicable to io.js.
System Version of Node
If you want to use the system-installed version of node, you can use the special default alias "system":
nvm use system
nvm run system --version
Listing Versions
If you want to see what versions are installed:
nvm ls
If you want to see what versions are available to install:
nvm ls-remote
Setting Custom Colors
You can set five colors that will be used to display version and alias information. These colors replace the default colors. Initial colors are: g b y r e
Color codes:
r/R = red / bold red
g/G = green / bold green
b/B = blue / bold blue
c/C = cyan / bold cyan
m/M = magenta / bold magenta
y/Y = yellow / bold yellow
k/K = black / bold black
e/W = light grey / white
nvm set-colors rgBcm
Persisting custom colors
If you want the custom colors to persist after terminating the shell, export the NVM_COLORS
variable in your shell profile. For example, if you want to use cyan, magenta, green, bold red and bold yellow, add the following line:
export NVM_COLORS='cmgRY'
Suppressing colorized output
nvm help (or -h or --help)
, nvm ls
, nvm ls-remote
and nvm alias
usually produce colorized output. You can disable colors with the --no-colors
option (or by setting the environment variable TERM=dumb
):
nvm ls --no-colors
nvm help --no-colors
TERM=dumb nvm ls
Restoring PATH
To restore your PATH, you can deactivate it:
nvm deactivate
Set default node version
To set a default Node version to be used in any new shell, use the alias 'default':
nvm alias default node # this refers to the latest installed version of node
nvm alias default 18 # this refers to the latest installed v18.x version of node
nvm alias default 18.12 # this refers to the latest installed v18.12.x version of node
Use a mirror of node binaries
To use a mirror of the node binaries, set $NVM_NODEJS_ORG_MIRROR
:
export NVM_NODEJS_ORG_MIRROR=https://nodejs.org/dist
nvm install node
NVM_NODEJS_ORG_MIRROR=https://nodejs.org/dist nvm install 4.2
To use a mirror of the io.js binaries, set $NVM_IOJS_ORG_MIRROR
:
export NVM_IOJS_ORG_MIRROR=https://iojs.org/dist
nvm install iojs-v1.0.3
NVM_IOJS_ORG_MIRROR=https://iojs.org/dist nvm install iojs-v1.0.3
nvm use
will not, by default, create a "current" symlink. Set $NVM_SYMLINK_CURRENT
to "true" to enable this behavior, which is sometimes useful for IDEs. Note that using nvm
in multiple shell tabs with this environment variable enabled can cause race conditions.
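A small sketch of enabling this, assuming (as the option's name suggests) the symlink is created at $NVM_DIR/current:
export NVM_SYMLINK_CURRENT=true   # set before nvm.sh is sourced, e.g. in your profile
nvm use node
ls -l "$NVM_DIR/current"          # assumption: symlink now points at the active version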
Pass Authorization header to mirror
To pass an Authorization header through to the mirror url, set $NVM_AUTH_HEADER
NVM_AUTH_HEADER="Bearer secret-token" nvm install node
.nvmrc
You can create a .nvmrc
file containing a node version number (or any other string that nvm
understands; see nvm --help
for details) in the project root directory (or any parent directory). Afterwards, nvm use
, nvm install
, nvm exec
, nvm run
, and nvm which
will use the version specified in the .nvmrc
file if no version is supplied on the command line.
For example, to make nvm default to the latest 5.9 release, the latest LTS version, or the latest node version for the current directory:
$ echo "5.9" > .nvmrc
$ echo "lts/*" > .nvmrc # to default to the latest LTS version
$ echo "node" > .nvmrc # to default to the latest version
[NB these examples assume a POSIX-compliant shell version of echo. If you use a Windows cmd development environment, e.g. when the .nvmrc file is used to configure a remote Linux deployment, then keep in mind that the quotation marks will be copied, leading to an invalid file. Remove them.]
Then when you run nvm use:
$ nvm use
Found '/path/to/project/.nvmrc' with version <5.9>
Now using node v5.9.1 (npm v3.7.3)
Running nvm install will also switch over to the correct version, but if the correct node version isn't already installed, it will install it for you.
$ nvm install
Found '/path/to/project/.nvmrc' with version <5.9>
Downloading and installing node v5.9.1...
Downloading https://nodejs.org/dist/v5.9.1/node-v5.9.1-linux-x64.tar.xz...
#################################################################################### 100.0%
Computing checksum with sha256sum
Checksums matched!
Now using node v5.9.1 (npm v3.7.3)
nvm use
et al. will traverse the directory structure upwards from the current directory looking for the .nvmrc
file. In other words, running nvm use
et al. in any subdirectory of a directory with an .nvmrc
will result in that .nvmrc
being utilized.
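For example, given a hypothetical layout where ~/project/.nvmrc exists:
cd ~/project/some/deep/subdir   # hypothetical subdirectory with no .nvmrc of its own
nvm use                         # walks upwards and picks up ~/project/.nvmrc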
A .nvmrc file must contain precisely one <version> (as described by nvm --help) followed by a newline. .nvmrc files may also have comments. The comment delimiter is #; it and any text after it, as well as blank lines and leading and trailing white space, will be ignored when parsing.
Key/value pairs using =
are also allowed and ignored, but are reserved for future use, and may cause validation errors in the future.
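For instance, a hypothetical .nvmrc using a comment (lts/* is a valid version string, as shown earlier):
# pin this project to the latest LTS line
lts/*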
Run npx nvmrc
to validate an .nvmrc
file. If that tool's results do not agree with nvm, one or the other has a bug - please file an issue.
Deeper Shell Integration
You can use nvshim
to shim the node
, npm
, and npx
bins to automatically use the nvm
config in the current directory. nvshim
is not supported by the nvm
maintainers. Please report issues to the nvshim
team.
If you prefer a lighter-weight solution, the recipes below have been contributed by nvm
users. They are not supported by the nvm
maintainers. We are, however, accepting pull requests for more examples.
Calling nvm use
automatically in a directory with a .nvmrc
file
In your profile (~/.bash_profile
, ~/.zshrc
, ~/.profile
, or ~/.bashrc
), add the following to nvm use
whenever you enter a new directory:
bash
Put the following at the end of your $HOME/.bashrc
:
cdnvm() {
command cd "$@" || return $?
nvm_path="$(nvm_find_up .nvmrc | command tr -d '\n')"
# If there is no .nvmrc file, use the default nvm version
if [[ ! $nvm_path = *[^[:space:]]* ]]; then
declare default_version
default_version="$(nvm version default)"
# If there is no default version, set it to `node`
# This will use the latest version on your machine
if [ $default_version = 'N/A' ]; then
nvm alias default node
default_version=$(nvm version default)
fi
# If the current version is not the default version, set it to use the default version
if [ "$(nvm current)" != "${default_version}" ]; then
nvm use default
fi
elif [[ -s "${nvm_path}/.nvmrc" && -r "${nvm_path}/.nvmrc" ]]; then
declare nvm_version
nvm_version=$(<"${nvm_path}"/.nvmrc)
declare locally_resolved_nvm_version
# `nvm ls` will check all locally-available versions
# If there are multiple matching versions, take the latest one
# Remove the `->` and `*` characters and spaces
# `locally_resolved_nvm_version` will be `N/A` if no local versions are found
locally_resolved_nvm_version=$(nvm ls --no-colors "${nvm_version}" | command tail -1 | command tr -d '\->*' | command tr -d '[:space:]')
# If it is not already installed, install it
# `nvm install` will implicitly use the newly-installed version
if [ "${locally_resolved_nvm_version}" = 'N/A' ]; then
nvm install "${nvm_version}";
elif [ "$(nvm current)" != "${locally_resolved_nvm_version}" ]; then
nvm use "${nvm_version}";
fi
fi
}
alias cd='cdnvm'
cdnvm "$PWD" || exit
This alias would search 'up' from your current directory in order to detect a .nvmrc
file. If it finds it, it will switch to that version; if not, it will use the default version.
zsh
This shell function will install (if needed) and nvm use
the specified Node version when an .nvmrc
is found, and nvm use default
otherwise.
Put this into your $HOME/.zshrc
to call nvm use
automatically whenever you enter a directory that contains an .nvmrc
file with a string telling nvm which node to use
:
# place this after nvm initialization!
autoload -U add-zsh-hook
load-nvmrc() {
local nvmrc_path
nvmrc_path="$(nvm_find_nvmrc)"
if [ -n "$nvmrc_path" ]; then
local nvmrc_node_version
nvmrc_node_version=$(nvm version "$(cat "${nvmrc_path}")")
if [ "$nvmrc_node_version" = "N/A" ]; then
nvm install
elif [ "$nvmrc_node_version" != "$(nvm version)" ]; then
nvm use
fi
elif [ -n "$(PWD=$OLDPWD nvm_find_nvmrc)" ] && [ "$(nvm version)" != "$(nvm version default)" ]; then
echo "Reverting to nvm default version"
nvm use default
fi
}
add-zsh-hook chpwd load-nvmrc
load-nvmrc
After saving the file, run source ~/.zshrc
to reload the configuration with the latest changes made.
fish
This requires that you have bass installed.
# ~/.config/fish/functions/nvm.fish
function nvm
bass source ~/.nvm/nvm.sh --no-use ';' nvm $argv
end
# ~/.config/fish/functions/nvm_find_nvmrc.fish
function nvm_find_nvmrc
bass source ~/.nvm/nvm.sh --no-use ';' nvm_find_nvmrc
end
# ~/.config/fish/functions/load_nvm.fish
function load_nvm --on-variable="PWD"
set -l default_node_version (nvm version default)
set -l node_version (nvm version)
set -l nvmrc_path (nvm_find_nvmrc)
if test -n "$nvmrc_path"
set -l nvmrc_node_version (nvm version (cat $nvmrc_path))
if test "$nvmrc_node_version" = "N/A"
nvm install (cat $nvmrc_path)
else if test "$nvmrc_node_version" != "$node_version"
nvm use $nvmrc_node_version
end
else if test "$node_version" != "$default_node_version"
echo "Reverting to default Node version"
nvm use default
end
end
# ~/.config/fish/config.fish
# You must call it on initialization or listening to directory switching won't work
load_nvm > /dev/stderr
Running Tests
Tests are written in Urchin. Install Urchin (and other dependencies) like so:
npm install
There are slow tests and fast tests. The slow tests do things like install node and check that the right versions are used. The fast tests fake this to test things like aliases and uninstalling. From the root of the nvm git repository, run the fast tests like this:
npm run test/fast
Run the slow tests like this:
npm run test/slow
Run all of the tests like this:
npm test
Nota bene: Avoid running nvm while the tests are running.
Environment variables
nvm exposes the following environment variables:
- NVM_DIR - nvm's installation directory.
- NVM_BIN - where node, npm, and global packages for the active version of node are installed.
- NVM_INC - node's include file directory (useful for building C/C++ addons for node).
- NVM_CD_FLAGS - used to maintain compatibility with zsh.
- NVM_RC_VERSION - version from .nvmrc file if being used.
Additionally, nvm modifies PATH
, and, if present, MANPATH
and NODE_PATH
when changing versions.
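For example, after activating a version you can inspect a couple of these variables (the path in the comment is illustrative):
nvm use node
echo "$NVM_BIN"   # e.g. $HOME/.nvm/versions/node/v22.1.0/bin (illustrative)
echo "$NVM_INC"   # the matching include directory for that version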
Bash Completion
To activate, you need to source bash_completion
:
[[ -r $NVM_DIR/bash_completion ]] && \. $NVM_DIR/bash_completion
Put the above sourcing line just below the sourcing line for nvm in your profile (.bashrc
, .bash_profile
).
Usage
nvm:
$ nvm
Tab
alias deactivate install list-remote reinstall-packages uninstall version
cache exec install-latest-npm ls run unload version-remote
current help list ls-remote unalias use which
nvm alias:
$ nvm alias
Tab
default iojs lts/* lts/argon lts/boron lts/carbon lts/dubnium lts/erbium node stable unstable
$ nvm alias my_alias
Tab
v10.22.0 v12.18.3 v14.8.0
nvm use:
$ nvm use
Tab
my_alias default v10.22.0 v12.18.3 v14.8.0
nvm uninstall:
$ nvm uninstall
Tab
my_alias default v10.22.0 v12.18.3 v14.8.0
Compatibility Issues
nvm
will encounter some issues if you have some non-default settings set. (see #606) The following are known to cause issues:
Inside ~/.npmrc
:
prefix='some/path'
Environment Variables:
$NPM_CONFIG_PREFIX
$PREFIX
Shell settings:
set -e
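To check whether any of these problematic settings are present, here is a quick sketch using standard npm and shell commands:
npm config get prefix                  # shows the effective npm global prefix
grep -n 'prefix' ~/.npmrc 2>/dev/null  # lists any prefix lines in ~/.npmrc, if the file exists
echo "${NPM_CONFIG_PREFIX-unset}"      # prints the env var value, or "unset"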
Installing nvm on Alpine Linux
In order to provide the best performance (and other optimizations), nvm will download and install pre-compiled binaries for Node (and npm) when you run nvm install X
. The Node project compiles, tests and hosts/provides these pre-compiled binaries which are built for mainstream/traditional Linux distributions (such as Debian, Ubuntu, CentOS, RedHat et al).
Alpine Linux, unlike mainstream/traditional Linux distributions, is based on BusyBox, a very compact (~5MB) Linux distribution. BusyBox (and thus Alpine Linux) uses a different C/C++ stack from most mainstream/traditional Linux distributions - musl. This makes binary programs built for such mainstream/traditional distributions incompatible with Alpine Linux; thus we cannot simply nvm install X on Alpine Linux and expect the downloaded binary to run correctly - you'll likely see "...does not exist" errors if you try that.
There is a -s flag for nvm install which requests that nvm download the Node source and compile it locally, as sketched below.
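A sketch of that (the version number here is just an example):
nvm install -s 14.20.0   # request a from-source build of node v14.20.0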
If installing nvm on Alpine Linux is still what you want or need to do, you should be able to achieve this by running the following from your Alpine Linux shell, depending on which version you are using:
Alpine Linux 3.13+
apk add -U curl bash ca-certificates openssl ncurses coreutils python3 make gcc g++ libgcc linux-headers grep util-linux binutils findutils
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.2/install.sh | bash
Alpine Linux 3.5 - 3.12
apk add -U curl bash ca-certificates openssl ncurses coreutils python2 make gcc g++ libgcc linux-headers grep util-linux binutils findutils
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.2/install.sh | bash
Note: Alpine 3.5 can only install NodeJS versions up to v6.9.5, Alpine 3.6 up to v6.10.3, Alpine 3.7 up to v8.9.3, Alpine 3.8 up to v8.14.0, Alpine 3.9 up to v10.19.0, Alpine 3.10 up to v10.24.1, Alpine 3.11 up to v12.22.6, Alpine 3.12 up to v12.22.12, Alpine 3.13 & 3.14 up to v14.20.0, and Alpine 3.15 & 3.16 up to v16.16.0 (these are all versions on the main branch). Alpine 3.5 - 3.12 required the python2 package to build NodeJS, as those are older versions to build. Alpine 3.13+ requires python3 to successfully build newer NodeJS versions, but you can use python2 with Alpine 3.13+ if you need to build versions of node supported in Alpine 3.5 - 3.15; you just need to specify which version of NodeJS you need to install in the package install script.
The Node project has some desire but no concrete plans (due to the overheads of building, testing and support) to offer Alpine-compatible binaries.
As a potential alternative, @mhart (a Node contributor) has some Docker images for Alpine Linux with Node and optionally, npm, pre-installed.
Uninstalling / Removal
Manual Uninstall
To remove nvm
manually, execute the following:
First, use nvm unload
to remove the nvm command from your terminal session and delete the installation directory:
$ nvm_dir="${NVM_DIR:-~/.nvm}"
$ nvm unload
$ rm -rf "$nvm_dir"
Edit ~/.bashrc
(or other shell resource config) and remove the lines below:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[[ -r $NVM_DIR/bash_completion ]] && \. $NVM_DIR/bash_completion
Docker For Development Environment
To make development and testing easier, we have a Dockerfile for development usage. It is based on the Ubuntu 18.04 base image and prepared with essential and useful tools for nvm development. To build the docker image of the environment, run the docker command at the root of the nvm repository:
$ docker build -t nvm-dev .
This will package your current nvm repository with our pre-defined development environment into a docker image named nvm-dev. Once it's built successfully, validate your image via docker images:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nvm-dev latest 9ca4c57a97d8 7 days ago 650 MB
If you got no error message, you can now easily get involved:
$ docker run -h nvm-dev -it nvm-dev
nvm@nvm-dev:~/.nvm$
Please note that it'll take about 8 minutes to build the image and the image size would be about 650MB, so it's not suitable for production usage.
For more information and documentation about docker, please refer to its official website:
Problems
- If you try to install a node version and the installation fails, be sure to run nvm cache clear to delete cached node downloads, or you might get an error like the following: curl: (33) HTTP server doesn't seem to support byte ranges. Cannot resume.
- Where's my sudo node? Check out #43
- After the v0.8.6 release of node, nvm tries to install from binary packages. But in some systems, the official binary packages don't work due to incompatibility of shared libs. In such cases, use the -s option to force install from source:
nvm install -s 0.8.6
- If setting the default alias does not establish the node version in new shells (i.e. nvm current yields system), ensure that the system's node PATH is set before the nvm.sh source line in your shell profile (see #658)
macOS Troubleshooting
nvm node version not found in vim shell
If you set your node version to a version other than your system node version (nvm use 6.2.1), open vim, and run :!node -v, you should see v6.2.1; if you instead see your system version (v0.12.7), you need to run:
sudo chmod ugo-x /usr/libexec/path_helper
More on this issue in dotphiles/dotzsh.
nvm is not compatible with the npm config "prefix" option
Some solutions for this issue can be found here
There is one more edge case causing this issue, and that's a mismatch between the $HOME
path and the user's home directory's actual name.
You have to make sure that the user directory name in $HOME
and the user directory name you'd see from running ls /Users/
are capitalized the same way (See this issue).
To change the user directory and/or account name follow the instructions here
Homebrew makes zsh directories unsecure
zsh compinit: insecure directories, run compaudit for list.
Ignore insecure directories and continue [y] or abort compinit [n]? y
Homebrew causes insecure directories like /usr/local/share/zsh/site-functions
and /usr/local/share/zsh
. This is not an nvm
problem - it is a homebrew problem. Refer here for some solutions related to the issue.
Macs with Apple Silicon chips
Experimental support for the Apple Silicon chip architecture was added in node.js v15.3 and full support was added in v16.0. Because of this, if you try to install older versions of node as usual, you will probably experience either compilation errors when installing node or out-of-memory errors while running your code.
So, if you want to run a version prior to v16.0 on an Apple Silicon Mac, it may be best to compile node targeting the x86_64
Intel architecture so that Rosetta 2 can translate the x86_64
processor instructions to ARM-based Apple Silicon instructions. Here's what you will need to do:
-
Install Rosetta, if you haven't already done so
$ softwareupdate --install-rosetta
You might wonder, "how will my Apple Silicon Mac know to use Rosetta for a version of node compiled for an Intel chip?". If an executable contains only Intel instructions, macOS will automatically use Rosetta to translate the instructions.
-
Open a shell that's running using Rosetta
$ arch -x86_64 zsh
Note: This same thing can also be accomplished by finding the Terminal or iTerm App in Finder, right clicking, selecting "Get Info", and then checking the box labeled "Open using Rosetta".
Note: This terminal session is now running in
zsh
. Ifzsh
is not the shell you typically use,nvm
may not besource
'd automatically like it probably is for your usual shell through your dotfiles. If that's the case, make sure to sourcenvm
.$ source "${NVM_DIR}/nvm.sh"
-
Install whatever older version of node you are interested in. Let's use 12.22.1 as an example. This will fetch the node source code and compile it, which will take several minutes.
$ nvm install v12.22.1 --shared-zlib
Note: You're probably curious why
--shared-zlib
is included. There's a bug in recent versions of Apple's systemclang
compiler. If one of these broken versions is installed on your system, the above step will likely still succeed even if you didn't include the--shared-zlib
flag. However, later, when you attempt tonpm install
something using your old version of node.js, you will seeincorrect data check
errors. If you want to avoid the possible hassle of dealing with this, include that flag. For more details, see this issue and this comment -
Exit back to your native shell.
$ exit
$ arch
arm64
Note: If you selected the box labeled "Open using Rosetta" rather than running the CLI command in the second step, you will see
i386
here. Unless you have another reason to have that box selected, you can deselect it now. -
Check to make sure the architecture is correct.
x64
is the abbreviation forx86_64
, which is what you want to see.$ node -p process.arch x64
Now you should be able to use node as usual.
WSL Troubleshooting
If you've encountered this error on WSL-2:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.2/install.sh | bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:09 --:--:-- 0curl: (6) Could not resolve host: raw.githubusercontent.com
It may be due to your antivirus, VPN, or other reasons.
This can happen where you can ping 8.8.8.8 but you can't ping google.com.
This could simply be solved by running this in your root directory:
sudo rm /etc/resolv.conf
sudo bash -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'
sudo bash -c 'echo "[network]" > /etc/wsl.conf'
sudo bash -c 'echo "generateResolvConf = false" >> /etc/wsl.conf'
sudo chattr +i /etc/resolv.conf
This deletes your resolv.conf file that is automatically generated when you run WSL, creates a new file and puts nameserver 8.8.8.8 in it, then creates a wsl.conf file and adds [network] and generateResolvConf = false to prevent auto-generation of that file.
You can check the contents of the file by running:
cat /etc/resolv.conf
Maintainers
Currently, the sole maintainer is @ljharb - more maintainers are quite welcome, and we hope to add folks to the team over time. Governance will be re-evaluated as the project evolves.
Project Support
Only the latest version (v0.40.2 at this time) is supported.
Enterprise Support
If you are unable to update to the latest version of nvm
, our partners provide commercial security fixes for all unsupported versions:
License
See LICENSE.md.
Copyright notice
Copyright OpenJS Foundation and nvm
contributors. All rights reserved. The OpenJS Foundation has registered trademarks and uses trademarks. For a list of trademarks of the OpenJS Foundation, please see our Trademark Policy and Trademark List. Trademarks and logos not indicated on the list of OpenJS Foundation trademarks are trademarks or registered trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them. The OpenJS Foundation | Terms of Use | Privacy Policy | Bylaws | Code of Conduct | Trademark Policy | Trademark List | Cookie Policy
Official Implementation of "KBLaM: Knowledge Base augmented Language Model"
KBLaM - Knowledge Base Augmented Language Models [ICLR 2025]
This repo contains the official implementation of KBLaM: Knowledge Base Augmented Language Models.
Authors: Xi Wang, Liana Mikaelyan, Taketomo Isazawa, Mathew Salvaris, James Hensman.
KBLaM is a new method for augmenting LLMs with external knowledge. Unlike Retrieval-Augmented Generation, KBLaM eliminates external retrieval modules, and unlike in-context learning, its computational overhead scales linearly with KB size rather than quadratically.
Supported Models
The following models from Hugging Face hub are currently supported:
To add support for new model types, you will need to update the model processing scripts to incorporate an adapter similar to llama_model.py
in src/kblam/models
.
Setting up
Install the kblam package with
pip install -e .
To use Llama models, you will need to generate a token from Hugging Face and use it to log in:
pip install huggingface_hub
huggingface-cli login
The experiments in the paper can be replicated by running the scripts in ./experiments
.
Dataset Construction
To run the synthetic dataset construction, you will need a valid Azure OpenAI endpoint.
To construct a synthetic KB and question-answer pairs use dataset_generation/gen_synthetic_data.py
The question-answer pairs are constructed in the form:
What is the description of {entity_name}?
The description of {entity_name} is {description}.
To generate KB embeddings, use dataset_generation/generate_kb_embeddings.py
. The embeddings we currently support are text-embedding-ada-002 and all-MiniLM-L6-v2.
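A hedged sketch of invoking those two scripts from the repo root (their actual flags and arguments are not documented here and may differ):
python dataset_generation/gen_synthetic_data.py      # construct the synthetic KB and question-answer pairs
python dataset_generation/generate_kb_embeddings.py  # embed the KB, e.g. with all-MiniLM-L6-v2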
Training
To train the model, run the following (with the appropriate arguments):
python train.py --dataset synthetic_data --N 120000 --B 20 --total_steps 601 --encoder_spec OAI --use_oai_embd --key_embd_src key --use_data_aug
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.
FAQ
What is KBLaM?
KBLaM is a method to enhance a transformer-based LLM to augment it with knowledge. It consists of a base LLM, and some adapters that we train to transform the knowledge base to special knowledge tokens that the LLM ingests. In particular, because we only train adapters over the knowledge part, the base LLM is completely unmodified with regards to text input. If given no knowledge base, the model outputs the exact same thing as the base model for any given input.
What can KBLaM do?
KBLaM can, in addition to the base LLM's capabilities, also attend over the knowledge base to answer questions in a grounded manner.
What is/are KBLaM's intended use(s)?
The model is intended to be used for research.
How was KBLaM evaluated? What metrics are used to measure performance?
KBLaM was evaluated on accuracy of retrieval from the knowledge base, its refusal rate (how often it correctly said that it didn't have the requisite information to answer the question), and precision and recall on how well the answers aligned with the correct answers given the knowledge base.
What are the limitations of KBLaM? How can users minimize the impact of KBLaM's limitations when using the system?
When used with knowledge bases that are very different from the knowledge base it was trained on, KBLaM will give incomplete answers, and the answers can be reworded from the original value in the knowledge base or at times entirely incorrect. As a result, KBLaM is not currently intended for use as a complete system in a production setting, but is a research project that we are sharing.
What operational factors and settings allow for effective and responsible use of KBLaM?
KBLaM with no knowledge base will perform exactly the same as the base model. With a knowledge base, for effective use, one should make sure that the training dataset and the use case have sufficiently similar knowledge bases.
How do I provide feedback on KBLaM?
Please add issues to this repository to provide feedback on KBLaM.
Collection of awesome LLM apps with AI Agents and RAG using OpenAI, Anthropic, Gemini and opensource models.
๐ Awesome LLM Apps
A curated collection of awesome LLM apps built with RAG and AI agents. This repository features LLM apps that use models from OpenAI, Anthropic, Google, and open-source models like DeepSeek, Qwen or Llama that you can run locally on your computer.
๐ค Why Awesome LLM Apps?
- ๐ก Discover practical and creative ways LLMs can be applied across different domains, from code repositories to email inboxes and more.
- ๐ฅ Explore apps that combine LLMs from OpenAI, Anthropic, Gemini, and open-source alternatives with RAG and AI Agents.
- ๐ Learn from well-documented projects and contribute to the growing open-source ecosystem of LLM-powered applications.
๐จ Open Source AI Agent Hackathon! ๐จ
We're launching a Global AI Agent Hackathon in collaboration with AI Agent ecosystem partners โ open to all developers, builders, and startups working on agents, RAG, tool use, or multi-agent systems.
๐ฐ Win up to $20,000 in cash by building Agents
- ๐ 10 winners: $300 each
- ๐ฅ 10 winners: $500 each
- ๐ฅ 5 winners: $1,000 each
- ๐ฅ 1 winner: $2,000
- ๐ GRAND PRIZE: $5,000 ๐
๐ Bonus
- Top 5 projects will be featured in the top trending Awesome LLM Apps repo.
๐ค Partners
Unwind AI, Agno and more Agent ecosystem companies joining soon.
๐ Here's the timeline:
- April 3rd - Final dates revealed
- April 10th - Prize and success criteria announced
- April 15th (tentative) - Hackathon starts
- May 30th (tentative) - Hackathon ends
Join us for a month of building Agents!
Prizes will be distributed on an ongoing basis and continue until all prizes are awarded.
โญ Star this repo and subscribe to Unwind AI for latest updates.
๐ค Want to join us as a partner or judge?
If you're a company in the AI agent ecosystem or would like to judge the hackathon, reach out to Shubham Saboo or Ashpreet Bedi on X to partner. Let's make this the biggest open source AI Agent hackathon.
๐ Featured AI Projects
AI Agents
- ๐ผ AI Customer Support Agent
- ๐ AI Investment Agent
- ๐จโโ๏ธ AI Legal Agent Team
- ๐ผ AI Recruitment Agent Team
- ๐จโ๐ผ AI Services Agency
- ๐งฒ AI Competitor Intelligence Agent Team
- ๐๏ธโโ๏ธ AI Health & Fitness Planner Agent
- ๐ AI Startup Trend Analysis Agent
- ๐๏ธ AI Journalist Agent
- ๐ฒ AI Finance Agent Team
- ๐ฏ AI Lead Generation Agent
- ๐ฐ AI Personal Finance Agent
- ๐ฉป AI Medical Scan Diagnosis Agent
- ๐จโ๐ซ AI Teaching Agent Team
- ๐ซ AI Travel Agent
- ๐ฌ AI Movie Production Agent
- ๐ฐ Multi-Agent AI Researcher
- ๐ป Multimodal AI Coding Agent Team with o3-mini and Gemini
- ๐ AI Meeting Agent
- โ AI Chess Agent Game
- ๐ AI Real Estate Agent
- ๐ Local News Agent OpenAI Swarm
- ๐ AI Finance Agent with xAI Grok
- ๐ฎ AI 3D PyGame Visualizer with DeepSeek R1
- ๐ง AI Reasoning Agent
- ๐งฌ Multimodal AI Agent
RAG (Retrieval Augmented Generation)
- ๐ Autonomous RAG
- ๐ Agentic RAG
- ๐ค Agentic RAG with Gemini Flash Thinking
- ๐ Deepseek Local RAG Reasoning Agent
- ๐ Llama3.1 Local RAG
- ๐งฉ RAG-as-a-Service
- ๐ฆ Local RAG Agent
- ๐ RAG App with Hybrid Search
- ๐ฅ๏ธ Local RAG App with Hybrid Search
- ๐ RAG Agent with Database Routing
- ๐ Corrective RAG Agent
MCP AI Agents
LLM Apps with Memory
- ๐พ AI Arxiv Agent with Memory
- ๐ LLM App with Personalized Memory
- ๐ฉ๏ธ AI Travel Agent with Memory
- ๐๏ธ Local ChatGPT with Memory
Chat with X
- ๐ฌ Chat with GitHub Repo
- ๐จ Chat with Gmail
- ๐ Chat with PDF
- ๐ Chat with Research Papers
- ๐ Chat with Substack Newsletter
- ๐ฝ๏ธ Chat with YouTube Videos
LLM Finetuning
Advanced Tools and Frameworks
- ๐งช Gemini Multimodal Chatbot
- ๐ Mixture of Agents
- ๐ MultiLLM Chat Playground
- ๐ LLM Router App
- ๐ฌ Local ChatGPT Clone
- ๐ Web Scraping AI Agent
- ๐ Web Search AI Assistant
- ๐งช Cursor AI Experiments
๐ Getting Started
-
Clone the repository
git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
-
Navigate to the desired project directory
cd awesome-llm-apps/chat_with_X_tutorials/chat_with_gmail
-
Install the required dependencies
pip install -r requirements.txt
-
Follow the project-specific instructions in each project's
README.md
file to set up and run the app.
๐ค Contributing to Open Source
Contributions are welcome! If you have any ideas, improvements, or new apps to add, please create a new GitHub Issue or submit a pull request. Make sure to follow the existing project structure and include a detailed README.md
for each new app.
Thank You, Community, for the Support! ๐
๐ Don't miss out on future updates! Star the repo now and be the first to know about new and exciting LLM apps with RAG and AI Agents.
A Go implementation of the Model Context Protocol (MCP), enabling seamless integration between LLM applications and external data sources and tools.
MCP Go ๐
A Go implementation of the Model Context Protocol (MCP), enabling seamless integration between LLM applications and external data sources and tools.
package main
import (
"context"
"errors"
"fmt"
"github.com/mark3labs/mcp-go/mcp"
"github.com/mark3labs/mcp-go/server"
)
func main() {
// Create MCP server
s := server.NewMCPServer(
"Demo ๐",
"1.0.0",
)
// Add tool
tool := mcp.NewTool("hello_world",
mcp.WithDescription("Say hello to someone"),
mcp.WithString("name",
mcp.Required(),
mcp.Description("Name of the person to greet"),
),
)
// Add tool handler
s.AddTool(tool, helloHandler)
// Start the stdio server
if err := server.ServeStdio(s); err != nil {
fmt.Printf("Server error: %v\n", err)
}
}
func helloHandler(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
name, ok := request.Params.Arguments["name"].(string)
if !ok {
return nil, errors.New("name must be a string")
}
return mcp.NewToolResultText(fmt.Sprintf("Hello, %s!", name)), nil
}
That's it!
MCP Go handles all the complex protocol details and server management, so you can focus on building great tools. It aims to be high-level and easy to use.
Key features:
- Fast: High-level interface means less code and faster development
- Simple: Build MCP servers with minimal boilerplate
- Complete*: MCP Go aims to provide a full implementation of the core MCP specification
(*emphasis on aims)
๐จ ๐ง ๐๏ธ MCP Go is under active development, as is the MCP specification itself. Core features are working but some advanced capabilities are still in progress.
Table of Contents
Installation
go get github.com/mark3labs/mcp-go
Quickstart
Let's create a simple MCP server that exposes a calculator tool and some data:
package main
import (
"context"
"errors"
"fmt"
"github.com/mark3labs/mcp-go/mcp"
"github.com/mark3labs/mcp-go/server"
)
func main() {
// Create a new MCP server
s := server.NewMCPServer(
"Calculator Demo",
"1.0.0",
server.WithResourceCapabilities(true, true),
server.WithLogging(),
)
// Add a calculator tool
calculatorTool := mcp.NewTool("calculate",
mcp.WithDescription("Perform basic arithmetic operations"),
mcp.WithString("operation",
mcp.Required(),
mcp.Description("The operation to perform (add, subtract, multiply, divide)"),
mcp.Enum("add", "subtract", "multiply", "divide"),
),
mcp.WithNumber("x",
mcp.Required(),
mcp.Description("First number"),
),
mcp.WithNumber("y",
mcp.Required(),
mcp.Description("Second number"),
),
)
// Add the calculator handler
s.AddTool(calculatorTool, func(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
op := request.Params.Arguments["operation"].(string)
x := request.Params.Arguments["x"].(float64)
y := request.Params.Arguments["y"].(float64)
var result float64
switch op {
case "add":
result = x + y
case "subtract":
result = x - y
case "multiply":
result = x * y
case "divide":
if y == 0 {
return nil, errors.New("Cannot divide by zero")
}
result = x / y
}
return mcp.NewToolResultText(fmt.Sprintf("%.2f", result)), nil
})
// Start the server
if err := server.ServeStdio(s); err != nil {
fmt.Printf("Server error: %v\n", err)
}
}
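To try the quickstart locally, drop the file into a fresh Go module and run it; an MCP client then launches the binary and speaks JSON-RPC over stdio. A minimal sketch - the calcdemo module name is an arbitrary placeholder:

mkdir calcdemo && cd calcdemo
# save the example above as main.go, then:
go mod init calcdemo
go mod tidy   # fetches github.com/mark3labs/mcp-go
go run .      # serves MCP over stdio and waits for a client to connect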
What is MCP?
The Model Context Protocol (MCP) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:
- Expose data through Resources (think of these sort of like GET endpoints; they are used to load information into the LLM's context)
- Provide functionality through Tools (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)
- Define interaction patterns through Prompts (reusable templates for LLM interactions)
- And more!
Core Concepts
Server
Show Server Examples
The server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:
// Create a basic server
s := server.NewMCPServer(
"My Server", // Server name
"1.0.0", // Version
)
// Start the server using stdio
if err := server.ServeStdio(s); err != nil {
log.Fatalf("Server error: %v", err)
}
Resources
Show Resource Examples
Resources are how you expose data to LLMs. They can be anything - files, API responses, database queries, system information, etc. Resources can be:
- Static (fixed URI)
- Dynamic (using URI templates)
Here's a simple example of a static resource:
// Static resource example - exposing a README file
resource := mcp.NewResource(
"docs://readme",
"Project README",
mcp.WithResourceDescription("The project's README file"),
mcp.WithMIMEType("text/markdown"),
)
// Add resource with its handler
s.AddResource(resource, func(ctx context.Context, request mcp.ReadResourceRequest) ([]mcp.ResourceContents, error) {
content, err := os.ReadFile("README.md")
if err != nil {
return nil, err
}
return []mcp.ResourceContents{
mcp.TextResourceContents{
URI: "docs://readme",
MIMEType: "text/markdown",
Text: string(content),
},
}, nil
})
And here's an example of a dynamic resource using a template:
// Dynamic resource example - user profiles by ID
template := mcp.NewResourceTemplate(
"users://{id}/profile",
"User Profile",
mcp.WithTemplateDescription("Returns user profile information"),
mcp.WithTemplateMIMEType("application/json"),
)
// Add template with its handler
s.AddResourceTemplate(template, func(ctx context.Context, request mcp.ReadResourceRequest) ([]mcp.ResourceContents, error) {
// Extract ID from the URI using regex matching
// The server automatically matches URIs to templates
userID := extractIDFromURI(request.Params.URI)
profile, err := getUserProfile(userID) // Your DB/API call here
if err != nil {
return nil, err
}
return []mcp.ResourceContents{
mcp.TextResourceContents{
URI: request.Params.URI,
MIMEType: "application/json",
Text: profile,
},
}, nil
})
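The extractIDFromURI and getUserProfile calls above are placeholders for your own code. Here is a minimal sketch of what they might look like for the users://{id}/profile template; the regex and the canned JSON are illustrative assumptions, with regexp and fmt assumed imported:

// userProfileURI matches URIs shaped like "users://{id}/profile" and
// captures the id segment.
var userProfileURI = regexp.MustCompile(`^users://([^/]+)/profile$`)

// extractIDFromURI pulls the {id} segment out of the request URI.
func extractIDFromURI(uri string) string {
	if m := userProfileURI.FindStringSubmatch(uri); m != nil {
		return m[1]
	}
	return ""
}

// getUserProfile stands in for your database or API lookup; here it just
// returns a canned JSON document.
func getUserProfile(id string) (string, error) {
	if id == "" {
		return "", fmt.Errorf("no user id in URI")
	}
	return fmt.Sprintf(`{"id":%q,"name":"Example User"}`, id), nil
}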
The examples are simple but demonstrate the core concepts. Resources can be much more sophisticated - serving multiple contents, integrating with databases or external APIs, etc.
Tools
Show Tool Examples
Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects. They're similar to POST endpoints in a REST API.
Simple calculation example:
calculatorTool := mcp.NewTool("calculate",
mcp.WithDescription("Perform basic arithmetic calculations"),
mcp.WithString("operation",
mcp.Required(),
mcp.Description("The arithmetic operation to perform"),
mcp.Enum("add", "subtract", "multiply", "divide"),
),
mcp.WithNumber("x",
mcp.Required(),
mcp.Description("First number"),
),
mcp.WithNumber("y",
mcp.Required(),
mcp.Description("Second number"),
),
)
s.AddTool(calculatorTool, func(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
op := request.Params.Arguments["operation"].(string)
x := request.Params.Arguments["x"].(float64)
y := request.Params.Arguments["y"].(float64)
var result float64
switch op {
case "add":
result = x + y
case "subtract":
result = x - y
case "multiply":
result = x * y
case "divide":
if y == 0 {
return nil, errors.New("Division by zero is not allowed")
}
result = x / y
}
return mcp.FormatNumberResult(result), nil
})
HTTP request example:
httpTool := mcp.NewTool("http_request",
mcp.WithDescription("Make HTTP requests to external APIs"),
mcp.WithString("method",
mcp.Required(),
mcp.Description("HTTP method to use"),
mcp.Enum("GET", "POST", "PUT", "DELETE"),
),
mcp.WithString("url",
mcp.Required(),
mcp.Description("URL to send the request to"),
mcp.Pattern("^https?://.*"),
),
mcp.WithString("body",
mcp.Description("Request body (for POST/PUT)"),
),
)
s.AddTool(httpTool, func(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
method := request.Params.Arguments["method"].(string)
url := request.Params.Arguments["url"].(string)
body := ""
if b, ok := request.Params.Arguments["body"].(string); ok {
body = b
}
// Create the request, propagating the tool call's context so the request
// is cancelled if the client aborts
var req *http.Request
var err error
if body != "" {
req, err = http.NewRequestWithContext(ctx, method, url, strings.NewReader(body))
} else {
req, err = http.NewRequestWithContext(ctx, method, url, nil)
}
if err != nil {
return nil, fmt.Errorf("failed to create request: %v", err)
}
client := &http.Client{}
resp, err := client.Do(req)
if err != nil {
return nil, fmt.Errorf("request failed: %v", err)
}
defer resp.Body.Close()
// Return the status code and body as the tool result
respBody, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("failed to read response: %v", err)
}
return mcp.NewToolResultText(fmt.Sprintf("Status: %d\nBody: %s", resp.StatusCode, string(respBody))), nil
})
Tools can be used for any kind of computation or side effect:
- Database queries
- File operations
- External API calls
- Calculations
- System operations
Each tool should:
- Have a clear description
- Validate inputs
- Handle errors gracefully
- Return structured responses
- Use appropriate result types
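On the validation point, Go's comma-ok type assertion keeps a malformed call from panicking the handler. A minimal sketch reusing the calculator's argument names; the parseCalcArgs helper is hypothetical and errors is assumed imported:

// parseCalcArgs validates the raw argument map before any arithmetic runs,
// returning an error instead of panicking on missing or mistyped values.
func parseCalcArgs(args map[string]interface{}) (op string, x, y float64, err error) {
	op, ok := args["operation"].(string)
	if !ok {
		return "", 0, 0, errors.New("operation must be a string")
	}
	x, ok = args["x"].(float64)
	if !ok {
		return "", 0, 0, errors.New("x must be a number")
	}
	y, ok = args["y"].(float64)
	if !ok {
		return "", 0, 0, errors.New("y must be a number")
	}
	return op, x, y, nil
}

The handler body then shrinks to parse, compute, format, and bad input comes back to the client as an ordinary error.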
Prompts
Show Prompt Examples
Prompts are reusable templates that help LLMs interact with your server effectively. They're like "best practices" encoded into your server. Here are some examples:
// Simple greeting prompt
s.AddPrompt(mcp.NewPrompt("greeting",
mcp.WithPromptDescription("A friendly greeting prompt"),
mcp.WithArgument("name",
mcp.ArgumentDescription("Name of the person to greet"),
),
), func(ctx context.Context, request mcp.GetPromptRequest) (*mcp.GetPromptResult, error) {
name := request.Params.Arguments["name"]
if name == "" {
name = "friend"
}
return mcp.NewGetPromptResult(
"A friendly greeting",
[]mcp.PromptMessage{
mcp.NewPromptMessage(
mcp.RoleAssistant,
mcp.NewTextContent(fmt.Sprintf("Hello, %s! How can I help you today?", name)),
),
},
), nil
})
// Code review prompt with embedded resource
s.AddPrompt(mcp.NewPrompt("code_review",
mcp.WithPromptDescription("Code review assistance"),
mcp.WithArgument("pr_number",
mcp.ArgumentDescription("Pull request number to review"),
mcp.RequiredArgument(),
),
), func(ctx context.Context, request mcp.GetPromptRequest) (*mcp.GetPromptResult, error) {
prNumber := request.Params.Arguments["pr_number"]
if prNumber == "" {
return nil, fmt.Errorf("pr_number is required")
}
return mcp.NewGetPromptResult(
"Code review assistance",
[]mcp.PromptMessage{
mcp.NewPromptMessage(
mcp.RoleSystem,
mcp.NewTextContent("You are a helpful code reviewer. Review the changes and provide constructive feedback."),
),
mcp.NewPromptMessage(
mcp.RoleAssistant,
mcp.NewEmbeddedResource(mcp.ResourceContents{
URI: fmt.Sprintf("git://pulls/%s/diff", prNumber),
MIMEType: "text/x-diff",
}),
),
},
), nil
})
// Database query builder prompt
s.AddPrompt(mcp.NewPrompt("query_builder",
mcp.WithPromptDescription("SQL query builder assistance"),
mcp.WithArgument("table",
mcp.ArgumentDescription("Name of the table to query"),
mcp.RequiredArgument(),
),
), func(ctx context.Context, request mcp.GetPromptRequest) (*mcp.GetPromptResult, error) {
tableName := request.Params.Arguments["table"]
if tableName == "" {
return nil, fmt.Errorf("table name is required")
}
return mcp.NewGetPromptResult(
"SQL query builder assistance",
[]mcp.PromptMessage{
mcp.NewPromptMessage(
mcp.RoleSystem,
mcp.NewTextContent("You are a SQL expert. Help construct efficient and safe queries."),
),
mcp.NewPromptMessage(
mcp.RoleAssistant,
mcp.NewEmbeddedResource(mcp.ResourceContents{
URI: fmt.Sprintf("db://schema/%s", tableName),
MIMEType: "application/json",
}),
),
},
), nil
})
Prompts can include:
- System instructions
- Required arguments
- Embedded resources
- Multiple messages
- Different content types (text, images, etc.)
- Custom URI schemes
Examples
For examples, see the examples/ directory.
Extras
Request Hooks
Hook into the request lifecycle by creating a Hooks object with your selection among the possible callbacks. This enables telemetry across all functionality and observability of various facts - for example, counting improperly-formatted requests, or logging the agent identity during initialization.
Add the Hooks to the server at creation time using the server.WithHooks option.
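A sketch of the wiring; the hook names and callback signatures here (AddBeforeAny, AddOnError) are assumptions that have shifted across mcp-go versions, so check the library's hooks source for the version you depend on (context, log, mcp, and server assumed imported):

// Collect the callbacks you care about on a Hooks object.
hooks := &server.Hooks{}

// Assumed callback: fires before any request is dispatched.
hooks.AddBeforeAny(func(ctx context.Context, id any, method mcp.MCPMethod, message any) {
	log.Printf("request %v: %s", id, method)
})

// Assumed callback: fires when a handler returns an error.
hooks.AddOnError(func(ctx context.Context, id any, method mcp.MCPMethod, message any, err error) {
	log.Printf("request %v: %s failed: %v", id, method, err)
})

// Attach the hooks when constructing the server.
s := server.NewMCPServer("Demo", "1.0.0", server.WithHooks(hooks))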
Contributing
Open Developer Guide
Prerequisites
Go version >= 1.23
Installation
Create a fork of this repository, then clone it:
git clone https://github.com/mark3labs/mcp-go.git
cd mcp-go
Testing
Please make sure to test any new functionality. Your tests should be simple and atomic and anticipate change rather than cement complex patterns.
Run tests from the root directory:
go test -v './...'
Opening a Pull Request
Fork the repository and create a new branch:
git checkout -b my-branch
Make your changes and commit them:
git add . && git commit -m "My changes"
Push your changes to your fork:
git push origin my-branch
Feel free to reach out in a GitHub issue or discussion if you have any questions!
Ingress NGINX Controller for Kubernetes
Ingress NGINX Controller
Overview
ingress-nginx is an Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer.
Learn more about Ingress on the Kubernetes documentation site.
Get started
See the Getting Started document.
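For orientation, the quick-install path in that document comes down to a single Helm command; a sketch, so verify the flags against the current Getting Started doc:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace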
Do not use in multi-tenant Kubernetes production installations. This project assumes that users that can create Ingress objects are administrators of the cluster. See the FAQ for more.
Troubleshooting
If you encounter issues, review the troubleshooting docs, file an issue, or talk to us on the #ingress-nginx channel on the Kubernetes Slack server.
Changelog
See the list of releases for all changes. For detailed changes in each release, check the changelog-$version.md file for that release. For detailed changes to the ingress-nginx helm chart, check the CHANGELOG-$current-version.md file in the changelog folder.
Supported Versions table
Supported versions for the ingress-nginx project mean that we have completed E2E tests and that they are passing for the versions listed. Ingress-NGINX may work on older Kubernetes versions, but the project does not make that guarantee.
| Supported | Ingress-NGINX version | k8s supported version | Alpine Version | Nginx Version | Helm Chart Version |
|---|---|---|---|---|---|
| ✅ | v1.12.1 | 1.32, 1.31, 1.30, 1.29, 1.28 | 3.21.3 | 1.25.5 | 4.12.1 |
| ✅ | v1.12.0 | 1.32, 1.31, 1.30, 1.29, 1.28 | 3.21.0 | 1.25.5 | 4.12.0 |
| ✅ | v1.12.0-beta.0 | 1.32, 1.31, 1.30, 1.29, 1.28 | 3.20.3 | 1.25.5 | 4.12.0-beta.0 |
| ✅ | v1.11.5 | 1.30, 1.29, 1.28, 1.27, 1.26 | 3.21.3 | 1.25.5 | 4.11.5 |
| ✅ | v1.11.4 | 1.30, 1.29, 1.28, 1.27, 1.26 | 3.21.0 | 1.25.5 | 4.11.4 |
| ✅ | v1.11.3 | 1.30, 1.29, 1.28, 1.27, 1.26 | 3.20.3 | 1.25.5 | 4.11.3 |
| ✅ | v1.11.2 | 1.30, 1.29, 1.28, 1.27, 1.26 | 3.20.0 | 1.25.5 | 4.11.2 |
| ✅ | v1.11.1 | 1.30, 1.29, 1.28, 1.27, 1.26 | 3.20.0 | 1.25.5 | 4.11.1 |
| ✅ | v1.11.0 | 1.30, 1.29, 1.28, 1.27, 1.26 | 3.20.0 | 1.25.5 | 4.11.0 |
|  | v1.10.6 | 1.30, 1.29, 1.28, 1.27, 1.26 | 3.21.0 | 1.25.5 | 4.10.6 |
|  | v1.10.5 | 1.30, 1.29, 1.28, 1.27, 1.26 | 3.20.3 | 1.25.5 | 4.10.5 |
|  | v1.10.4 | 1.30, 1.29, 1.28, 1.27, 1.26 | 3.20.0 | 1.25.5 | 4.10.4 |
|  | v1.10.3 | 1.30, 1.29, 1.28, 1.27, 1.26 | 3.20.0 | 1.25.5 | 4.10.3 |
|  | v1.10.2 | 1.30, 1.29, 1.28, 1.27, 1.26 | 3.20.0 | 1.25.5 | 4.10.2 |
|  | v1.10.1 | 1.30, 1.29, 1.28, 1.27, 1.26 | 3.19.1 | 1.25.3 | 4.10.1 |
|  | v1.10.0 | 1.29, 1.28, 1.27, 1.26 | 3.19.1 | 1.25.3 | 4.10.0 |
|  | v1.9.6 | 1.29, 1.28, 1.27, 1.26, 1.25 | 3.19.0 | 1.21.6 | 4.9.1 |
|  | v1.9.5 | 1.28, 1.27, 1.26, 1.25 | 3.18.4 | 1.21.6 | 4.9.0 |
|  | v1.9.4 | 1.28, 1.27, 1.26, 1.25 | 3.18.4 | 1.21.6 | 4.8.3 |
|  | v1.9.3 | 1.28, 1.27, 1.26, 1.25 | 3.18.4 | 1.21.6 | 4.8.* |
|  | v1.9.1 | 1.28, 1.27, 1.26, 1.25 | 3.18.4 | 1.21.6 | 4.8.* |
|  | v1.9.0 | 1.28, 1.27, 1.26, 1.25 | 3.18.2 | 1.21.6 | 4.8.* |
|  | v1.8.4 | 1.27, 1.26, 1.25, 1.24 | 3.18.2 | 1.21.6 | 4.7.* |
|  | v1.7.1 | 1.27, 1.26, 1.25, 1.24 | 3.17.2 | 1.21.6 | 4.6.* |
|  | v1.6.4 | 1.26, 1.25, 1.24, 1.23 | 3.17.0 | 1.21.6 | 4.5.* |
|  | v1.5.1 | 1.25, 1.24, 1.23 | 3.16.2 | 1.21.6 | 4.4.* |
|  | v1.4.0 | 1.25, 1.24, 1.23, 1.22 | 3.16.2 | 1.19.10† | 4.3.0 |
|  | v1.3.1 | 1.24, 1.23, 1.22, 1.21, 1.20 | 3.16.2 | 1.19.10† | 4.2.5 |
See this article if you want to upgrade to the stable Ingress API.
Get Involved
Thanks for taking the time to join our community and start contributing!
- This project adheres to the Kubernetes Community Code of Conduct. By participating in this project, you agree to abide by its terms.
- Contributing: Contributions of all kinds are welcome!
  - Read CONTRIBUTING.md for information about setting up your environment, the workflow that we expect, and instructions on the developer certificate of origin that we require.
  - Join our Kubernetes Slack channel for developer discussion: #ingress-nginx-dev.
  - Submit GitHub issues for any feature enhancements, bugs, or documentation problems. Please make sure to read the Issue Reporting Checklist before opening an issue; issues not conforming to the guidelines may be closed immediately.
  - Join our ingress-nginx-dev mailing list.
- Support:
  - Join the #ingress-nginx-users channel inside the Kubernetes Slack to ask questions or get support from the maintainers and other users.
  - The GitHub issues in the repository are exclusively for bug reports and feature requests.
- Discuss: Tweet using the #IngressNginx hashtag or share with us @IngressNginx.
Agentic AI Framework for Java Developers
The community-driven Spring AI Alibaba OpenManus implementation can be found in the community/openmanus module.
Spring AI Alibaba
An AI application framework for Java developers built on top of Spring AI that provides seamless integration with Alibaba Cloud QWen LLM services and cloud-native infrastructures.
Get Started
Please refer to quick start for how to quickly add generative AI to your Spring Boot applications.
Overall, it takes only two steps to turn your Spring Boot application into an intelligent agent:
Because Spring AI Alibaba is built on Spring Boot 3.x, it requires JDK 17 or above.
1. Add the spring-ai-alibaba-starter dependency to your project:
<dependency>
<groupId>com.alibaba.cloud.ai</groupId>
<artifactId>spring-ai-alibaba-starter</artifactId>
<version>1.0.0-M6.1</version>
</dependency>
NOTICE: Since the spring-ai packages haven't been published to the Maven central repository yet, you need to add the following Maven repository to your project in order to resolve artifacts like spring-ai-core.
<repositories>
  <repository>
    <id>spring-milestones</id>
    <name>Spring Milestones</name>
    <url>https://repo.spring.io/milestone</url>
    <snapshots>
      <enabled>false</enabled>
    </snapshots>
  </repository>
</repositories>
Addendum: If the mirrorOf tag in your local Maven settings.xml is configured with the wildcard *, modify it as in the following example.
<mirror>
  <id>xxxx</id>
  <mirrorOf>*,!spring-milestones</mirrorOf>
  <name>xxxx</name>
  <url>xxxx</url>
</mirror>
2. Inject ChatClient:
@RestController
public class ChatController {
private final ChatClient chatClient;
// ChatClient.Builder is auto-configured by the spring-ai-alibaba-starter
public ChatController(ChatClient.Builder builder) {
this.chatClient = builder.build();
}
// GET /chat?input=... sends the user input to the model and returns the reply text
@GetMapping("/chat")
public String chat(String input) {
return this.chatClient.prompt()
.user(input)
.call()
.content();
}
}
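With the application running on Spring Boot's default port, you can exercise the endpoint directly; a sketch assuming local defaults:

curl "http://localhost:8080/chat?input=Hello"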
Examples
Spring AI Alibaba and Spring AI usage examples
Core Features
Spring AI Alibaba provides the following features; read the documentation on our website for more details on how to use them.
- Support for Alibaba Cloud QWen Model and Dashscope Model service.
- Support high-level AI agent abstraction -- ChatClient.
- Support various Model types like Chat, Text to Image, Audio Transcription, Text to Speech.
- Both synchronous and stream API options are supported.
- Mapping of AI Model output to POJOs.
- Portable API across Vector Store providers.
- Function calling.
- Spring Boot Auto Configuration and Starters.
- RAG (Retrieval-Augmented Generation) support: DocumentReader, Splitter, Embedding, VectorStore, and Retriever.
- Support for conversation memory with ChatMemory.
Roadmap
Spring AI Alibaba aims to reduce the complexity of building AI-native Java applications, from development and evaluation to deployment and observability. To achieve that, we provide both an open-source framework and ecosystem integrations around it. Below are the features we plan to support in the near future:
- Prompt Template Management
- Event Driven AI Application
- Support of more Vector Databases
- Function Deployment
- Observability
- AI proxy support: prompt filtering, rate limiting, multiple models, etc.
- Development Tools
Contribution Guide
Please refer to the Contribution Guide to learn how to participate in the development of Spring AI Alibaba.
References
- Spring AI
- Spring AI Alibaba
- Alibaba Cloud Dashscope Model Service Platform
Contact Us
- DingTalk Group: search 61290041831 and join.
- WeChat Official Account: scan the QR code below and follow us.

Credit
Some of this project's ideas and code are inspired by, or rewritten from, the following projects. Great thanks to those who have created and open-sourced them.
- Spring AI, a Spring-friendly API and abstractions for developing AI applications licensed under the Apache License 2.0.
- Langgraph, a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows licensed under the MIT license.
- Langgraph4J, a Java port of the original LangGraph from the LangChain AI project.
Elegant reading of real-time and hottest news
NewsNow
English | 简体中文 | 日本語
[!NOTE] This is a demo version currently supporting Chinese only. A full-featured version with better customization and English content support will be released later.
Elegant reading of real-time and hottest news
Features
- Clean and elegant UI design for optimal reading experience
- Real-time updates on trending news
- GitHub OAuth login with data synchronization
- 30-minute default cache duration (logged-in users can force refresh)
- Adaptive scraping interval (minimum 2 minutes) based on source update frequency to optimize resource usage and prevent IP bans
Deployment
Basic Deployment
For deployments without login and caching:
- Fork this repository
- Import to platforms like Cloudflare Pages or Vercel
Cloudflare Pages Configuration
- Build command: pnpm run build
- Output directory: dist/output/public
GitHub OAuth Setup
- Create a GitHub App
- No special permissions required
- Set the callback URL to https://your-domain.com/api/oauth/github (replace your-domain with your actual domain)
- Obtain the Client ID and Client Secret
Environment Variables
Refer to example.env.server. For local development, rename it to .env.server and configure the following:
# Github Client ID
G_CLIENT_ID=
# Github Client Secret
G_CLIENT_SECRET=
# JWT Secret, usually the same as Client Secret
JWT_SECRET=
# Initialize database, must be set to true on first run, can be turned off afterward
INIT_TABLE=true
# Whether to enable cache
ENABLE_CACHE=true
Database Support
Supported database connectors are listed at https://db0.unjs.io/connectors. Cloudflare D1 is recommended.
- Create a D1 database in the Cloudflare Workers dashboard
- Configure database_id and database_name in wrangler.toml
- If wrangler.toml doesn't exist, rename example.wrangler.toml and modify its configuration
- Changes take effect on the next deployment
Docker Deployment
In the project root directory:
docker compose up
You can also set environment variables in docker-compose.yml.
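A sketch of what that might look like, reusing the variables documented above; the service name newsnow and all values are placeholder assumptions:

services:
  newsnow:            # hypothetical service name; match your docker-compose.yml
    environment:
      G_CLIENT_ID: "your-github-client-id"          # placeholder
      G_CLIENT_SECRET: "your-github-client-secret"  # placeholder
      JWT_SECRET: "your-jwt-secret"                 # usually the same as the client secret
      INIT_TABLE: "true"                            # set to false after the first run
      ENABLE_CACHE: "true"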
Development
[!Note] Requires Node.js >= 20
corepack enable
pnpm i
pnpm dev
Adding Data Sources
Refer to the shared/sources and server/sources directories. The project provides complete type definitions and a clean architecture.
Roadmap
- Add multi-language support (English, Chinese, more to come).
- Improve personalization options (category-based news, saved preferences).
- Expand data sources to cover global news in multiple languages.
- Release when ready.
Contributing
Contributions are welcome! Feel free to submit pull requests or create issues for feature requests and bug reports.
License
MIT ยฉ ourongxing