Find answers to common questions about AgentykChat
AgentykChat is a free, open-source desktop application that lets you chat with AI models completely locally on your computer. It provides a ChatGPT-like experience without requiring API keys, an internet connection (after initial setup), or a monthly subscription.
Yes! AgentykChat is completely free and open-source under the MIT license. There are no hidden costs, no subscriptions, and no per-user fees. You only need your own computer to run it.
Unlike ChatGPT, AgentykChat runs entirely on your computer. Your conversations never leave your machine, there's no cloud processing, and you don't need an internet connection after downloading models. Plus, it's completely free with no usage limits.
No! AgentykChat has a user-friendly graphical interface. While it's built for privacy-conscious users and developers, anyone can use it. Just download, install, and start chatting.
Absolutely. All AI processing happens locally on your computer. Your conversations, documents, and any data you input never leave your machine. There's no telemetry, no analytics, and no data collection of any kind.
No. Since everything runs locally, your conversations are stored only on your computer. They're never sent to external servers, and we have no way to access them.
No. We don't collect any usage data, analytics, or telemetry. Your privacy is complete.
Yes. Documents you upload are processed entirely locally using the RAG system. They're never sent to external servers and remain on your computer.
Minimum: 8GB RAM, 10GB free disk space, macOS 11+, Windows 10+, or Linux. Recommended: 16GB RAM for larger models and better performance.
First install Ollama, then download AgentykChat for your platform. Detailed installation guides for macOS, Windows, and Linux are available on our Download page.
Yes, Ollama is required as it provides the AI model infrastructure. It's free and easy to install. Our installation guide walks you through the process.
Installation takes 5-10 minutes. Downloading your first AI model can take 10-30 minutes depending on the model size and your internet speed.
Yes! After downloading the app and your AI models, AgentykChat works completely offline. Internet is only needed for initial downloads and updates.
AgentykChat supports 30+ models through Ollama, including Llama 2/3, Mistral, Mixtral, CodeLlama, Phi, and many more. Models range from tiny (1B parameters) to massive (70B+ parameters).
For beginners, we recommend Llama 2 (7B) or Mistral (7B); they offer a good balance of quality and speed. For coding, try CodeLlama. For low-end hardware, try Phi or TinyLlama.
Model sizes vary: Small models (1-3B) need 1-2GB, medium models (7-13B) need 4-8GB, and large models (34-70B) need 20-40GB. You can delete models you don't use.
Yes! You can download as many models as you have disk space for and switch between them freely. Each conversation can use a different model.
In AgentykChat, go to the Models tab and click "Download Model." Select from the list of available models. Downloads happen in the background while you continue chatting.
Yes! Any model supported by Ollama can be used. You can also create custom models using Ollama's model creation features.
RAG (Retrieval-Augmented Generation) lets you upload documents (PDF, DOCX, TXT) and ask questions about them. AgentykChat creates embeddings of your documents and retrieves relevant sections when answering your questions.
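The retrieval step can be sketched in a few lines: embed each document chunk, embed the question, and rank chunks by similarity. This is a minimal illustration with a toy character-frequency "embedding" standing in for a real embedding model; AgentykChat's actual internals may differ.

```python
import math

def embed(text):
    # Toy embedding: a 26-dimensional character-frequency vector.
    # A real RAG system would use a learned embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity between two vectors (0.0 if either is all zeros).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank document chunks by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Ollama provides local model inference.",
    "Invoices are due within thirty days.",
    "RAG retrieves relevant document sections.",
]
print(retrieve("Which sections does RAG retrieve?", chunks, k=1))
```

The retrieved chunks are then prepended to the prompt so the model answers from your documents rather than from memory alone.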
Yes! Upload PDFs in the Knowledge Base tab, then ask questions about them in your chat. The AI will use the document content to provide accurate, context-aware answers.
There's no hard limit, but performance depends on your hardware. Start with a few documents and add more as needed.
Yes! Conversations can be exported as plain text (.txt) or JSON format for backup, sharing, or analysis.
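A JSON export is easy to process with standard tooling. The schema below is a hypothetical example for illustration only (the actual AgentykChat export format may use different field names):

```python
import json

# Hypothetical shape of an exported conversation; the real AgentykChat
# export schema may differ.
exported = json.dumps({
    "title": "Trip planning",
    "model": "llama2:7b",
    "messages": [
        {"role": "user", "content": "Suggest a packing list."},
        {"role": "assistant", "content": "Sure! Start with the basics..."},
    ],
})

# Any JSON-aware tool can now read the backup.
conversation = json.loads(exported)
for msg in conversation["messages"]:
    print(f"{msg['role']}: {msg['content']}")
```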
Yes! Code blocks in AI responses are automatically highlighted with proper syntax for multiple programming languages.
Yes! You can set custom system prompts to change how the AI responds, adjust model parameters like temperature, and choose different models for different tasks.
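Under the hood, desktop apps like AgentykChat talk to Ollama's local HTTP API, where a system prompt and sampling options travel with each request. As a sketch, a chat request with a custom system prompt and lowered temperature might be built like this (payload only; actually sending it requires a running Ollama server on its default port 11434):

```python
import json

# Sketch of a request body for Ollama's local chat endpoint
# (POST http://localhost:11434/api/chat).
payload = {
    "model": "mistral:7b",
    "messages": [
        # The system message steers how the model responds.
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Explain list comprehensions."},
    ],
    # Lower temperature makes replies more deterministic.
    "options": {"temperature": 0.3},
    "stream": False,
}
body = json.dumps(payload)
print(body)
```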
Performance depends on your hardware and the model size. Try using a smaller model, close other applications to free up RAM, or upgrade your hardware. GPU acceleration can significantly improve speed.
Yes! Ollama automatically uses your GPU if available (NVIDIA CUDA or Apple Silicon). This significantly speeds up AI responses.
Make sure Ollama is running. On macOS/Linux, run 'ollama serve' in terminal. On Windows, check if Ollama is running in the system tray. Restart both applications if needed.
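A quick way to confirm Ollama is reachable is to check its default port. This sketch only tests that something is listening on port 11434; it does not validate the API itself:

```python
import socket

def ollama_reachable(host="127.0.0.1", port=11434, timeout=1.0):
    # Ollama listens on port 11434 by default; a successful TCP connect
    # suggests the server is up.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("Ollama running:", ollama_reachable())
```

If this prints False, start the server ('ollama serve' on macOS/Linux, or launch Ollama from the system tray on Windows) and try again.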
Check your internet connection and try again. Downloads can be resumed. If issues persist, try downloading the model directly through Ollama CLI: 'ollama pull [model-name]'.
Check our troubleshooting guide for your platform. Common solutions: ensure Ollama is installed and running, check system requirements, restart your computer, or reinstall AgentykChat.
We welcome contributions! Report bugs, suggest features, improve documentation, or submit code on our GitHub repository. Check our Contributing guide for details.
Join our Discord community for real-time help, check our documentation, search GitHub Discussions, or file an issue on GitHub.
We release patch updates (bug fixes) every 2 weeks, minor updates (new features) monthly, and major updates (breaking changes) quarterly.
Yes! AgentykChat is released under the MIT license, which allows commercial use, modification, and distribution.
Star us on GitHub, share with others, contribute code or documentation, report bugs, or help other users in the community!
Check our public roadmap! Upcoming features include voice chat, browser extensions, mobile apps, plugin system, and many community-requested features.
Yes! AgentykChat will always be free and open-source. We may offer optional paid services in the future (like cloud sync), but the core app will never require payment.
Yes! A mobile companion app is planned for 2025. It will sync with your desktop app and allow viewing conversations on the go.
Absolutely! Submit feature requests on GitHub Discussions or vote on existing requests. Community input directly influences our roadmap.