The perfect chat interface for LLMs
TL;DR
- For simplicity, privacy, security, and flexibility, I want a single interface for chatting with multiple LLMs that stores all prompts/responses locally and lets me do stuff like tag and group them.
- I want something popular and Open Source I can host locally.
- After asking OpenAI Deep Research, I chose open-webui to try out (see below for the analysis from Deep Research).
- open-webui isn’t exactly what I wanted: it has far more features than I need, which makes it more complex than I’d like (I’m not a fan of complex tools). But it has the features I need.
- I set it up in a Docker container and now have it as a new app in my dock; you can see an animation of me using it above. It was easy to connect to OpenRouter.ai, which means I have loads of different models to choose from and experiment with. The ability to put chats into folders is great, and all chats are saved locally to a SQLite database that I can access via scripts to do (yet to be decided) things with.
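Since everything ends up in that local SQLite database, a short script can already list the archive. A minimal sketch, assuming Open WebUI's default schema (a `chat` table with `title` and `updated_at` columns, in the `webui.db` file inside the mounted data directory; inspect the schema with the sqlite3 shell before relying on this):

```python
import sqlite3

def list_chats(db_path="webui.db", limit=10):
    """Return (title, updated_at) pairs for the most recent chats.

    Assumes Open WebUI's default layout: a `chat` table with
    `title` and `updated_at` columns. Check with `.schema chat`
    in the sqlite3 shell first, as the schema may change between versions.
    """
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT title, updated_at FROM chat "
            "ORDER BY updated_at DESC LIMIT ?",
            (limit,),
        ).fetchall()
    finally:
        con.close()
    return rows
```

From here, tagging or grouping chats is just more SQL (or a small script that writes into a side table).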
I chat with LLMs a lot, multiple times a day. Mainly with ChatGPT from OpenAI (I pay 20 USD a month), but also with Claude (I paid 205 EUR for a year) and Perplexity (free plan). I have also been using the OpenRouter.ai interface to access multiple models. I have used Safari’s “Add to Dock” feature to make my own apps for these (the web interface actually has more functionality than the apps do, at least for ChatGPT).
I would like to consolidate this proliferation of chat interfaces to a single interface. Why? A few reasons, but principally:
- I would like to be able to manage archived chats better, for example categorizing them, or starring important ones.
- I would like to be able to chat with local models via the same interface, perhaps making a local model my personal chat interface in the longer term.
- I would like to have a database of my archived chats. Why? I can then process them, and use that to, for example, make my own personal chat “memory” which I can edit, which would become part of the context for my own personal chat interface.
- I have become increasingly cautious about privacy and security when it comes to the stuff I share with LLMs. I have shared a lot with OpenAI. Do I trust it? Not really. I don’t want to be getting personalized ads based on my chat history. There are darker scenarios to take into account too: with the dark turn the USA has taken recently, I don’t want, for example, US customs officers having access to my chat history. There is also the possibility of OpenAI accounts getting hacked.
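The personal “memory” idea above starts with pulling my own prompts out of the archive. A minimal sketch, assuming Open WebUI's layout where each row of the `chat` table stores the conversation as a JSON blob with a top-level "messages" list of role/content dicts (the exact JSON layout varies between versions, so inspect one row first):

```python
import json
import sqlite3

def extract_user_messages(db_path="webui.db"):
    """Collect every user prompt from the archived chats.

    Assumes each row's `chat` column holds JSON with a top-level
    "messages" list of {"role": ..., "content": ...} dicts. This
    layout is an assumption -- dump one row and check before use.
    """
    con = sqlite3.connect(db_path)
    prompts = []
    try:
        for (blob,) in con.execute("SELECT chat FROM chat"):
            data = json.loads(blob)
            for msg in data.get("messages", []):
                if msg.get("role") == "user":
                    prompts.append(msg["content"])
    finally:
        con.close()
    return prompts
```

The output could then be summarized or curated by hand into a context file that gets prepended to future chats.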
I have found ChatGPT Deep Research to be really good at finding the best solution to this kind of technical problem. Over the last week or so I’ve discovered various excellent Open Source tools that I didn’t know about previously using ChatGPT Deep Research. Here’s the prompt I used in this case:
What is the best chat interface for using LLMs? My ideal solutions should be:
- Popular and widely used
- Open Source on GitHub with good star ratings
- Compatible with OpenRouter.ai
- Able to save all prompts and responses locally
- Web-based (preferably), but desktop and terminal options are fine too
- Written in Python or Node.js if possible
- Suitable for local self-hosting
Below is the response from Deep Research (it took 7 minutes). I’m going to choose open-webui to test. It’s not exactly what I want (it has way more features than I need), but it has the features I want and is worth testing.
I’ll install it with Docker but with the SQLite database accessible locally, something like this:
docker run -d \
  --name open-webui \
  -p 3000:8080 \
  -v ~/data:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
The container serves on port 8080 internally, so the mapping is host 3000 to container 8080; with the data directory mounted like this, the SQLite file ends up at ~/data/webui.db.
Best Open-Source Chat Interfaces for LLMs
Below we identify and compare several popular open-source chat interfaces for large language models (LLMs) that meet the criteria. Each of these tools is self-hostable, has a significant user base (high GitHub star count), supports or can be adapted to use OpenRouter.ai as a backend, and stores conversation data locally. We prioritize tools that balance usability, OpenRouter compatibility, and local data handling.
Open Web UI
- GitHub: open-webui/open-webui (⭐ 89k)
- Key Features: A self-hosted chat interface that runs entirely offline, with multi-model support. It can interface with local backends (like Ollama or llama.cpp) and OpenAI-compatible APIs 1. It supports easy model switching, retrieval-augmented generation (RAG) with document uploads, and a plugin system for extending functionality 2. The UI is mobile-responsive and supports rich markdown (LaTeX, code highlighting, etc.).
- OpenRouter Compatibility: Yes (configurable) – Open Web UI lets you customize the OpenAI API endpoint, making it straightforward to use OpenRouter as the backend 3. In fact, the project explicitly mentions linking with OpenRouter by setting the OpenAI-compatible API URL to OpenRouter’s endpoint 4.
- Local History Handling: Designed with privacy in mind – since it runs locally/offline, all chat data stays on your machine. Conversations are not sent to any cloud service. Chats are persisted to a bundled local SQLite database in the app’s data directory (no external database server required), so your history survives restarts and can be read directly by your own scripts.
- Setup Complexity: Easy. It’s a Python package – you can install via pip and launch with a single command 5. This spins up a local web app (e.g. at localhost:8080). No external dependencies or cloud services needed.
- Pros: Extremely popular with an active community. Fully offline capability with support for both local and API-based models, giving flexibility 6. Strong privacy (no data leaves your machine). Plugin support and RAG features for advanced use-cases.
- Cons: Being a relatively new project, it may not (yet) have the polished multi-user management or advanced agent features some others do, and its breadth of features can make it feel heavier than a minimal chat UI. However, its rapid development and community suggest it’s improving quickly.
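Pointing a UI like this at OpenRouter amounts to sending the standard OpenAI-style chat-completions request to OpenRouter’s base URL instead of OpenAI’s. A minimal sketch of that request in Python (the model name is just an example):

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_openrouter_request(api_key, model, messages):
    """Build the OpenAI-compatible chat-completions request that a UI
    sends when its API base URL is pointed at OpenRouter."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        OPENROUTER_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it:
# req = build_openrouter_request(key, "openai/gpt-4o",
#                                [{"role": "user", "content": "Hello"}])
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
```

Because the request shape is identical to OpenAI’s, any client that lets you override the base URL (as Open Web UI does) works with OpenRouter unchanged.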
ChatGPT Next Web
- GitHub: ChatGPT-Next-Web (⭐ 82k)
- Key Features: A sleek, lightweight ChatGPT-style interface that can run as a web app or be packaged for desktop/mobile. It offers a ChatGPT-like experience with enhancements: PWA support (so you can “install” it like a native app), responsive design with dark mode, and multi-language UI 7 8. It supports markdown rendering (including math and mermaid charts) and streaming AI responses 9. Notably, it’s compatible with self-hosted models (like LocalAI, RWKV-Runner) as well as OpenAI API 10. There’s also a prompt template system and other power-user features (e.g. conversation “compression” to handle long chats) 11 12.
- OpenRouter Compatibility: Yes (via config) – ChatGPT-Next-Web is designed to use OpenAI’s API by default, but it allows configuring a custom API base URL 13. By setting the BASE_URL to OpenRouter’s endpoint and providing your OpenRouter API key (treated like an OpenAI key), you can route all calls through OpenRouter.ai. (The project documentation gives examples of using proxies or Azure endpoints similarly 14, so OpenRouter can be used in the same way.)
- Local History Handling: Emphasizes privacy: “all data is stored locally in the browser.” 15 Chat histories are saved in your browser’s local storage, meaning your prompts and responses stay on your machine and persist between sessions (unless you clear them). No conversation data is sent to any server aside from the model API requests. This local-first design was a core principle 16. Additionally, it has an auto-compression feature that summarizes older messages in long conversations to save token usage 17 (this happens locally).
- Setup Complexity: Moderate. You can deploy with one click on Vercel or run locally. Running locally requires Node.js (since it’s a Next.js app). You’d clone the repo, set your API keys in an .env.local file, and then build or run the development server. The one-click deploy is very easy, but even local setup is relatively straightforward for anyone comfortable with Node tools 18 19.
- Pros: Very user-friendly with a polished UI (it feels like the official ChatGPT interface). Supports multiple platforms (it even can be compiled to a desktop app via Tauri) and is lightweight. Strong privacy stance, keeping chat data in-browser 20. Also supports using local LLMs via integration with projects like LocalAI, which is a plus for flexibility 21.
- Cons: Focused primarily on a single-user ChatGPT experience. It doesn’t natively have multi-user or role-based features (it does allow an optional access password for your deployed instance). While it supports custom backend models, switching between many different model providers on the fly isn’t as built-in (you might edit config or use one at a time). In short, it’s excellent for a personal chat interface, but less of an “LLM command center” than some others.
LibreChat
- GitHub: danny-avila/LibreChat (⭐ 24k)
- Key Features: LibreChat is a highly extensible ChatGPT-like interface that supports multiple AI providers and advanced features out of the box 22. It can connect to OpenAI, Anthropic (Claude), Google PaLM, local models, and more under one unified UI. You can manage multiple conversation threads with a sidebar (including searching through history and forking chats). It has an “agents” system – for example, tools like a file Q&A agent or a code interpreter agent are included. There’s also an “Artifacts” feature for generating diagrams, images, etc., from the chat 23. LibreChat supports multi-user accounts with roles/permissions, which makes it suitable for a team setting as well 24. In short, it aims to be a complete, self-hosted ChatGPT alternative with lots of bells and whistles.
- OpenRouter Compatibility: Yes (native integration) – LibreChat lists OpenRouter as one of the many model providers it supports natively (alongside Azure, Anthropic, etc.) 25. This means you can plug in your OpenRouter API key and use OpenRouter’s models directly from the UI without needing to modify the code. Its documentation indicates support for OpenRouter and other third-party APIs out-of-the-box.
- Local History Handling: All conversations are stored in a self-hosted database (when you run LibreChat, it uses a local database to save user accounts, chat histories, etc.). This enables features like searching your chats by content and resuming past conversations from where you left off. Because you host it yourself, the data remains on your machine or server. You can have extensive chat logs, and even across multiple users, all stored locally. (For example, if using Docker, the data can be persisted in a volume or mounted directory.)
- Setup Complexity: Moderate to High. LibreChat provides multiple installation methods: Docker (recommended), NPM, or even a Helm chart for Kubernetes 26. The Docker setup involves pulling the image and starting containers (it’s fairly straightforward if you’re familiar with Docker 27 28). The NPM route requires Node.js and involves building the app, which is a bit more involved. Overall, because it has more moving parts (database, multi-user backend), setup is a bit heavier than simpler UIs.
- Pros: Feature-rich and versatile. It’s one of the few that offers a full “ChatGPT-like” experience with support for various model APIs and extra tools (agents, search, plugins) in one place 29. It’s actively maintained (“Active project” as noted, with frequent updates). Great if you want a powerful all-in-one chat system with team support and are willing to self-host a slightly larger app.
- Cons: The richness comes at the cost of complexity. It’s a heavier application to run – for instance, it uses a database and might consume more resources. The UI, while powerful, might feel more complex for a casual user compared to simpler UIs. Also, because it’s under active development, occasional bugs or breaking changes could occur. For a single-user who just wants a quick chat interface, this might be overkill – LibreChat shines more in multi-model, multi-user scenarios.
LobeChat (Lobe UI)
- GitHub: lobehub/lobe-chat (⭐ 58k)
- Key Features: LobeChat is a modern-looking, Next.js-based chat framework with a polished interface. It supports multiple AI providers – for example, you can use OpenAI, Anthropic, various open-source models, etc., and switch between them easily 30. It has a built-in Knowledge Base feature for retrieval-augmented generation: you can upload files or add data and then ask questions that LobeChat will answer with that context. Another distinctive feature is its Agent Marketplace: the community can create and share “AI Personas” or mini-plugins that you can plug into LobeChat to extend its capabilities. Markdown rendering, code execution plugins, and other multi-modal supports are included as well. Despite the rich feature set, the UI is designed to be user-friendly with a desktop app feel.
- OpenRouter Compatibility: Yes (native integration) – LobeChat includes OpenRouter as one of its supported providers. In the documentation, OpenRouter is described as a gateway to many model APIs 31, and you can select it as a backend. Essentially, you can configure your OpenRouter API key in LobeChat and then choose from OpenRouter’s available models through the interface. (LobeChat’s provider list is extensive – it even lists OpenRouter, HuggingFace, Together AI, and others in its UI 32.) This makes using OpenRouter’s models very straightforward.
- Local History Handling: LobeChat uses a local database (it prompts you to set up a folder for data on first run) to store conversations, knowledge base documents, and configurations. All your prompt/response history is saved in this local database. This means you can close the app and return later to find your chats intact. Nothing is sent to an external storage by default – any stored conversation data resides on your machine. If you use the Docker deployment, this corresponds to a persistent volume holding the data.
- Setup Complexity: Moderate. The project provides a one-command setup script for convenience 33, which can set up the environment (it uses Docker under the hood to launch the app). Essentially, after running the provided script or docker-compose, you visit localhost:3210 and go through a brief initial setup (choosing language, adding API keys, etc.) 34. Because it involves Docker and a database, it’s not as simple as running a single binary, but the maintainers have tried to simplify it with automation. Technical users should find it reasonably straightforward, while non-technical users might need to follow the guide closely.
- Pros: Highly polished UI and feature-packed – it stands out for offering knowledge-base integration and a plugin ecosystem in addition to basic chat. Good for power users who want more than just Q&A with a model. Multi-model support is first-class, and switching between different AI providers (local or cloud) is easy. The project has a lot of community attention (reflected in its stars) and is open-source (Apache-2.0 licensed).
- Cons: Consumes more resources than a minimal chat UI, since it runs a database and potentially many features at once. The initial setup is heavier, and troubleshooting a multi-container app can be more involved if something goes wrong. Also, while the interface is modern, new users may feel a bit overwhelmed by all the options (agent marketplace, knowledge base, etc.) if they only need a simple chat. In summary, LobeChat is excellent if you need its advanced features, but simpler alternatives might be preferable for quick, one-off chat sessions.
SillyTavern
- GitHub: SillyTavern/SillyTavern (⭐ 13k)
- Key Features: SillyTavern is a frontend for power users, originally born from the role-playing chatbot community. It provides a highly customizable chat UI and supports a wide range of LLM backends: local (Oobabooga/TextGen, KoboldCPP, etc.) and cloud APIs (OpenAI, Claude, OpenRouter, etc.) 35. A hallmark feature is character-based chatting – you can create characters or personas with profiles and even have multi-character chats. It supports features like “lorebooks” (world or context info), image generation integration (via Stable Diffusion APIs), text-to-speech for responses, and many configurable prompt settings 36. Essentially, it gives you a sandbox to fine-tune the chat experience (e.g., you can adjust system prompts, use novel formats, etc.). Despite the name, it’s not just for silly conversations – it’s a robust interface for any use-case once set up.
- OpenRouter Compatibility: Yes (native) – SillyTavern natively supports OpenRouter in its backend connectors. In fact, the project description explicitly lists OpenRouter alongside other APIs 37. You can select OpenRouter as the provider and supply your OpenRouter API key in the settings. SillyTavern will then route your chats through OpenRouter.ai, giving you access to any model on OpenRouter’s platform. This is intended to be plug-and-play, as the devs keep up with many API integrations.
- Local History Handling: All data stays local. SillyTavern does not send your chat content anywhere except to the model API you’re using. It stores settings and API keys in local files on your system (e.g. a secrets.json in a user data folder) 38, and chat histories for your characters are likewise saved as files in that data folder, so they persist across sessions and can be exported or backed up. In practice, it’s private and local by design (the README notes they “do not provide any online service or track any user data” 39).
- Setup Complexity: Moderate. You need Node.js to run it. Installation is usually: clone the repo, install dependencies, and run the start script 40. There’s also a community-provided launcher to simplify this. Once running (served on localhost:8000 by default 41), you configure your model API settings in the UI. The setup is command-line driven but reasonably simple for those with basic tech experience. No containers or databases needed – it’s mostly static files plus a small Node server.
- Pros: Extremely flexible and community-driven. If you want fine control over prompts or to connect to a variety of experimental backends, SillyTavern excels. It’s particularly beloved for creative use-cases (roleplay, storytelling) due to features like character personas and the ability to inject lore/contexts. It also has a rich extension ecosystem (for example, adding custom UIs, automations, etc.). And importantly, it has verified support for OpenRouter and many other services out-of-the-box 42.
- Cons: The UI is utilitarian compared to flashier options – it’s functional but not as pretty or “simple” as some others. The plethora of options can present a steep learning curve (the project acknowledges this is part of the appeal for power users). It’s primarily single-user and session-based; managing multiple distinct chat sessions or long-term histories is less straightforward here than in something like LibreChat. In short, it’s fantastic for enthusiasts who want control and customization, but casual users might find it more complicated than necessary.
To summarize, here is a comparison of the top options across key considerations:
| Tool | Popularity (GitHub Stars) | OpenRouter Support | History Storage | Platform/Stack | Notable Pros | Potential Cons |
|---|---|---|---|---|---|---|
| Open Web UI | 89k 43 (very active) | Yes – configurable API base for OpenRouter 44 | Local SQLite DB (persistent) – all data stays on your machine | Python (FastAPI) / Web | Multi-model (local & API) support; offline mode; plugin & doc upload features 45; easy install | No polished multi-user management yet; many features to learn |
| ChatGPT Next Web | 82k 46 (very active) | Yes – set custom BASE_URL to OpenRouter 47 | Browser local storage (persists locally) 48 | Node.js (Next.js) / Web | Polished ChatGPT-like UI; PWA and cross-platform; lightweight; privacy-focused 49 | Single-user oriented; switching between many models requires reconfiguring env |
| LibreChat | 24k 50 (active project) | Yes – native support for OpenRouter in providers list 51 | Local DB (persistent) – full chat history saved on server | Node.js (Express + React) / Web | Rich features (agents, multi-model, multi-user) 52; team ready; search and plugins | Heavier setup (Docker/DB); UI more complex; higher resource usage |
| LobeChat | 58k 53 (trending) | Yes – native; choose OpenRouter as a provider in UI 54 | Local DB (persistent) – chats and uploaded knowledge saved locally | Node.js (Next.js) / Web | Modern UI with knowledge-base & plugin ecosystem; one-click deployment script | Setup uses Docker; a lot of features (could be overkill for simple use) |
| SillyTavern | 13k 55 (niche but active) | Yes – native; built-in OpenRouter API connector 56 | Local files (persistent) – chats saved in the user data folder, with export/backup | Node.js / Web | Highly customizable; supports many backends; character and role-play features 57 | UI/options can overwhelm; steeper learning curve |
Each of these tools strikes a slightly different balance. For a straightforward ChatGPT-like experience with easy setup, ChatGPT Next Web is a great choice. If you want a more enterprise or team solution with many features, LibreChat shines. Open Web UI is excellent for offline use and simplicity when you occasionally need to switch between local and remote models. LobeChat offers a cutting-edge interface with extensibility, and SillyTavern caters to power users who want maximum control (especially if you plan to leverage OpenRouter for varied models or specialized roleplay scenarios).
All of the above are open-source and can be run locally without relying on any proprietary cloud service. Which one is “best” depends on your specific needs – but rest assured, each can be configured to work with OpenRouter.ai and will keep your conversation data under your control, locally. 58 59
References
- tyolab.com – “Exploring the Best Free and Open-Source Chat UIs for LLMs”
- github.com – open-webui/open-webui: User-friendly AI Interface (Supports Ollama, OpenAI API, …)
- github.com – ChatGPTNextWeb/NextChat: ✨ Light and Fast AI Assistant (Web | iOS | MacOS | Android | Linux | Windows)
- github.com – danny-avila/LibreChat: Enhanced ChatGPT Clone (Agents, OpenRouter, model switching, multi-user auth, open-source for self-hosting)
- github.com – lobehub/lobe-chat: Lobe Chat, an open-source, modern-design AI chat framework
- github.com – SillyTavern/SillyTavern: LLM Frontend for Power Users