Query Your Graph with Natural Language
Ever wished you could just ask your database questions in plain English instead of crafting complex Cypher queries? With our new proof of concept, that’s now possible. We’ve integrated Neo4j, Ollama, and Open WebUI into a single, streamlined experience, letting you query your graph database with nothing but natural language.
The Stack Behind the Magic
This project brings together:
- Neo4j – a powerful graph database engine
- Ollama – your local LLM runtime, running models like Qwen, Llama, or Mistral efficiently
- Model Context Protocol (MCP) – an open protocol that lets models call external tools
- Open WebUI – an intuitive interface for interacting with your LLMs and tools
All components are containerized using Docker, and the setup process takes only a few moments.
Getting Started in Seconds
We designed the setup to be simple. Just clone the repository and run the provided script:
```sh
chmod +x start-containers.sh
./start-containers.sh
```
This spins up:
- A pre-populated Neo4j database
- The Ollama server for running local models
- The MCP server, bridging the LLM with the necessary Neo4j query tools
- Open WebUI, the frontend where all the action happens
No manual configuration, no extra dependencies. Just plug and play.
How It Works
At the heart of this system is the Neo4j MCP server, an official integration from Neo4j. It exposes the database to the model as a set of tools: the LLM translates your natural language input into Cypher, and the MCP server executes it against your graph.
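For instance, a question like “Which employees work in the Sales department?” might be translated into a query along these lines (the labels, relationship type, and property names here are purely illustrative; your actual schema will differ):

```cypher
// Hypothetical schema: (:Employee)-[:WORKS_IN]->(:Department)
MATCH (e:Employee)-[:WORKS_IN]->(d:Department {name: "Sales"})
RETURN e.name AS employee
```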
By hooking this up to Open WebUI, we give users a clean, friendly interface to interact with the graph using natural language, backed by the power of Ollama-hosted LLMs.
Behind the Scenes: From LangChain to MCP
Our initial prototype used the Pipelines container from the Open WebUI team together with a custom pipeline that relied on LangChain to process natural language and generate Cypher queries. While it worked in some cases, we ran into recurring issues:
- Intermittent failures where no answer was returned at all
- Unstable behavior across sessions and models
- Difficult debugging, with an opaque data flow
It felt like we were constantly chasing down invisible bugs.
By switching to the official Neo4j MCP server, things became dramatically more stable. Even if a model sometimes produces a Cypher query that’s not exactly what we wanted, the system stays up, responsive, and debuggable. The LLM no longer crashes or disconnects; it just keeps working.
This stability made the experience far more usable and gave us confidence to build on top of it.
Try It Yourself
1. Visit http://localhost:8088 to open Open WebUI.
2. Go to the Admin Panel → Settings → Tools.
3. Add a connection:
   - URL: http://neo4j-mcp-server:8000/neo4j-aura
   - Key: leave empty
4. Import cypher-query-model.json via the Models tab.
5. Refresh the page (clear cache).
Now select the model ‘Cypher Query’ from the dropdown menu in the top left corner. Once that’s done, you’ll see a tool toggle in your chat UI. Try typing:
“List all Departments nodes.”
…and watch the response come back directly from Neo4j.
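Under the hood, the model will typically turn that prompt into something like the following query (assuming the node label in your graph is in fact `Departments`):

```cypher
// One plausible translation of "List all Departments nodes."
MATCH (d:Departments)
RETURN d
```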
This is especially useful for:
- Non-technical users needing insights from graph data
- Rapid prototyping and testing of data queries
- Exploring unfamiliar graph structures without deep Cypher knowledge (see the schema queries below)
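Exploring an unfamiliar graph is also something you can do by hand with Neo4j’s built-in introspection procedures; these surface the same kind of schema information the model relies on when writing queries (run them one at a time in Neo4j Browser, or all at once in cypher-shell):

```cypher
// List every node label and relationship type in the graph
CALL db.labels();
CALL db.relationshipTypes();

// Render the whole schema as a virtual graph
CALL db.schema.visualization();
```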
We remove the barrier between humans and complex data models. No need for Cypher expertise. No black-box APIs. Everything runs locally, transparently, and reproducibly.
Why This Matters
We built this to showcase the power of open standards and modular tooling in the AI+graph space. With the official Neo4j MCP integration and a local LLM stack, you’re in full control of your data.
Want to dive in? Grab the code from GitHub or here and start asking your database questions the way you think of them: in natural language.