This guide walks you through setting up the DMOM-RAG chatbot with Docker and Python 3.11+. Follow the steps below to get up and running.


1. Install Ollama

Ollama is required to run your local LLM.

👉 Refer to the installation guide I created: Running Ollama Locally.
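Once Ollama is installed, it helps to sanity-check the setup before continuing. The commands below use the standard Ollama CLI; the model name is only an example, so substitute whichever model the project's configuration expects:

```shell
# Verify the Ollama CLI is on your PATH
ollama --version

# Pull a model for the chatbot to use (llama3 is an example choice)
ollama pull llama3

# Start the Ollama server (listens on localhost:11434 by default)
ollama serve
```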


2. Pull the RAG Code

Clone the repo:

git clone https://github.com/tungedng2710/TonAI-RAG.git

3. Configure Environment Variables

Move into the project directory:

cd TonAI-RAG

Copy the example environment file:

cp .env.example .env

Then open .env and fill in the values for your local setup.
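The exact variables are defined in the repo's .env.example, so treat the snippet below as a hypothetical illustration only: the variable names here are assumptions about what an Ollama-backed RAG app typically needs, not the project's actual keys.

```shell
# Hypothetical .env values — check .env.example for the real variable names
OLLAMA_HOST=http://localhost:11434   # where the Ollama server is listening
MODEL_NAME=llama3                    # the model pulled in step 1
DATA_DIR=./data                      # documents to ingest in step 4
```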


4. Ingest Data
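The ingestion script itself lives in the repo, but the general shape of a RAG ingestion step — load documents, split them into overlapping chunks, then embed and store them — can be sketched as below. Function and parameter names are illustrative, not TonAI-RAG's actual API:

```python
# Sketch of the chunking stage of a RAG ingestion pipeline.
# Names and defaults are illustrative, not the project's actual API.

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks of roughly chunk_size characters.

    Overlap between consecutive chunks helps preserve context that would
    otherwise be cut at a chunk boundary.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks

if __name__ == "__main__":
    doc = "word " * 300  # stand-in for a real document
    chunks = chunk_text(doc, chunk_size=200, overlap=20)
    print(f"split into {len(chunks)} chunks")
```

In a full pipeline, each chunk would then be passed to an embedding model and written to a vector store for retrieval at query time.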