This guide walks you through setting up the DMOM-RAG chatbot with Docker and Python 3.11+. Follow the steps below to get up and running.
Ollama is required to run your local LLM.
👉 See the installation guide I wrote, Running Ollama Locally.
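Before continuing, you can sanity-check that the Ollama server is reachable. A minimal sketch, assuming Ollama's default local port (11434); adjust `base_url` if yours differs:

```python
import urllib.request
import urllib.error

def ollama_reachable(base_url="http://localhost:11434", timeout=3):
    """Return True if an HTTP server answers at base_url (Ollama's default port is an assumption)."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print(ollama_reachable())
```

If this prints `False`, start Ollama first before moving on to the repo setup.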
Clone the repo:
git clone https://github.com/tungedng2710/TonAI-RAG.git
Move into the project directory:
cd TonAI-RAG
Copy the example environment file:
cp .env.example .env
Then edit .env:
OLLAMA_BASE_URL → your Ollama base URL
GEMINI_API_KEY → your Gemini API key (only needed if using Gemini)
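A filled-in .env might look like this (the URL below assumes Ollama's default local port; the key value is a placeholder):

```shell
# .env — sketch only; replace with your own values
OLLAMA_BASE_URL=http://localhost:11434
GEMINI_API_KEY=your-gemini-api-key   # leave unset if not using Gemini
```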