GenAI for Urban Planning

As an AI Consultant for a legal entity, I designed and implemented a bespoke AI-powered assistant to tackle the immense challenge of analyzing unstructured legal documents in urban planning cases. Faced with tight deadlines and thousands of pages of text, the client needed a cost-effective way to extract key arguments and draft objections. I architected a solution using Retrieval-Augmented Generation (RAG), integrating a custom knowledge base with the Gemini 2.5 Pro LLM via the open-source AnythingLLM framework to create a powerful, domain-specific tool that reduces legal analysis time by up to 80%.

Scope

A Custom RAG Solution for Legal Document Analysis

/

Client

Anonymized Legal Entity

/

Duration

4 months

/

Year

2025

Skills

AI System Architecture, Retrieval-Augmented Generation (RAG), LLM Evaluation & Selection, Prompt Engineering, Knowledge Base Creation, Embeddings, User-Centered AI Design

Tools

AnythingLLM, Gemini 2.5 Pro API, Google Search API

/

Challenge

(01)

Legal teams in urban planning face a recurring, high-stakes problem: they must analyze vast quantities of dense, unstructured legal documents under severe time constraints. A single case can involve hundreds of pages, and missing a key detail or failing to formulate a strong objection can have irreversible consequences for communities and the environment.

The Business Problem

The client's manual workflow was unsustainable. With limited human capacity and budget, their team was constantly at risk of overload. The process of manually scanning PDFs for key information was slow, inefficient, and not scalable. They needed a solution that was more than 90% cheaper than commercial legal AI tools, without sacrificing control or flexibility.

The User (Legal Team) Problem

The legal team's primary challenges were:

  • Information Overload: Quickly finding relevant precedents, arguments, and data points within thousands of pages of text was nearly impossible.

  • Perspective Simulation: They needed to analyze documents from multiple viewpoints (e.g., a citizen, an investor, an environmentalist) to build robust arguments, a mentally taxing and time-consuming task.

  • Drafting Bottlenecks: Formulating precise legal objections based on templates and extracted information was a repetitive, manual process prone to delays.

/

Solution

(02)

I designed a modular, AI-powered assistant built on a Retrieval-Augmented Generation (RAG) architecture. This approach grounds a powerful Large Language Model (LLM) in a specific, curated knowledge base of the client's legal documents, ensuring responses are accurate, relevant, and fact-based.
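The production system runs on AnythingLLM's pipeline; purely as an illustration of the underlying RAG pattern, a minimal retrieve-then-ground loop might look like the sketch below. The word-overlap score is a toy stand-in for the real embedding-based similarity search, and the sample knowledge base is invented.

```python
# Minimal sketch of the retrieve-then-ground RAG pattern.
# A real deployment would use vector embeddings; a simple
# word-overlap score stands in for semantic similarity here.

def tokenize(text: str) -> set[str]:
    """Lowercase words with surrounding punctuation stripped."""
    return {w.strip(".,?!").lower() for w in text.split()}

def score(query: str, chunk: str) -> int:
    """Count query words that also appear in the chunk (toy relevance)."""
    return len(tokenize(query) & tokenize(chunk))

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_grounded_prompt(query: str, chunks: list[str]) -> str:
    """Assemble a prompt that tells the LLM to answer only from sources."""
    sources = "\n\n".join(f"[Source {i + 1}]\n{c}" for i, c in enumerate(chunks))
    return (
        "Answer strictly from the sources below. Cite the source number.\n\n"
        f"{sources}\n\nQuestion: {query}"
    )

# Invented mini knowledge base for demonstration only.
knowledge_base = [
    "The zoning decision of 2024 permits construction up to 20 metres.",
    "Objections must be filed within 30 days of public notice.",
    "The environmental report flags groundwater risks at the site.",
]

query = "When must objections be filed?"
top = retrieve(query, knowledge_base)
prompt = build_grounded_prompt(query, top)
```

Grounding the model in retrieved sources, rather than letting it answer from parametric memory, is what keeps responses verifiable in a legal setting.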

1. Architecture & Technology Selection

A thorough market analysis of existing tools revealed that commercial solutions were either too restrictive ("black boxes"), too complex for a small project, or had critical limitations (e.g., NotebookLM's source limits).

I architected a custom solution using:

  • RAG Framework: AnythingLLM was chosen for its open-source nature, giving us full control over the RAG pipeline—from chunk sizes and embedding models to retrieval parameters. This transparency was critical for a legal use case.

  • LLM Engine: After comparing leading models (GPT-4o, Claude 3.5 Sonnet), Gemini 2.5 Pro was selected. Its massive 1M-token context window was ideal for ingesting long legal documents without losing context, and its API pricing was up to 10x lower than competitors', fitting the client's budget.

  • Live Data: The Google Search API was integrated to allow the model to pull in up-to-date information, such as recent news or public statements.
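Before any of this reaches the model, long documents must be split into retrievable chunks; AnythingLLM exposes chunk size and overlap as pipeline settings. A bare-bones overlapping splitter, with illustrative parameter values rather than the project's actual ones, might be:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap.

    The overlap keeps sentences that straddle a chunk boundary fully
    retrievable from at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# A 500-character document with size 200 / overlap 50 yields chunks
# starting at offsets 0, 150, 300, and 450.
doc = "x" * 500
chunks = chunk_text(doc)
```

Tuning these two parameters is one of the main levers an open pipeline offers: chunks too small lose legal context, chunks too large dilute retrieval precision.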

2. Use Case-Driven Design & Prompt Engineering

The solution was not a one-size-fits-all chatbot. I designed and engineered distinct "modes," each with its own carefully tuned system prompt and parameters, to address specific user needs:

  • Research Mode: Answers questions strictly from the uploaded legal documents, providing citations for verification.

  • Personas Mode: Simulates stakeholder viewpoints to uncover social impacts and diverse arguments. The prompt for this mode was specifically tuned for empathy and realism.

  • Objection Drafting Mode: Generates structured legal objections based on pre-defined templates and the facts of the case, dramatically speeding up the drafting process.

Crucially, every prompt was treated as a core part of the system's architecture. I iteratively A/B tested them for tone, accuracy, and hallucination control, adjusting parameters like temperature to balance factuality with creativity where needed.
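Treating prompts as architecture can be made concrete with a mode registry like the sketch below. The prompts and temperature values are simplified stand-ins for illustration, not the client's production prompts.

```python
# Illustrative mode registry: each "mode" pairs a system prompt with
# sampling parameters. Prompts here are simplified stand-ins.

MODES = {
    "research": {
        "system_prompt": (
            "Answer only from the provided documents. Quote and cite the "
            "source for every claim. If the documents do not contain the "
            "answer, say so."
        ),
        "temperature": 0.1,  # low: prioritise factuality over creativity
    },
    "personas": {
        "system_prompt": (
            "Adopt the stated stakeholder perspective (e.g. resident, "
            "investor, environmentalist) and argue from that viewpoint, "
            "grounded in the provided documents."
        ),
        "temperature": 0.7,  # higher: allow varied, realistic viewpoints
    },
    "objection": {
        "system_prompt": (
            "Draft a formal objection using the supplied template. Fill "
            "every section strictly from the case facts provided."
        ),
        "temperature": 0.2,
    },
}

def build_request(mode: str, user_message: str) -> dict:
    """Assemble a provider-agnostic chat request for the chosen mode."""
    cfg = MODES[mode]
    return {
        "system": cfg["system_prompt"],
        "temperature": cfg["temperature"],
        "messages": [{"role": "user", "content": user_message}],
    }

req = build_request("research", "What deadlines apply to this case?")
```

Keeping each mode's prompt and parameters in version-controlled configuration also makes the A/B testing described above systematic: two candidate prompts become two registry entries whose outputs can be compared side by side.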

3. User Testing & Deployment

The system was deployed to the client for active use in real-world workflows. The iterative feedback loop is ongoing: we gather qualitative feedback on the AI's performance, identify edge cases, and continuously refine the system prompts to improve the quality, speed, and legal consistency of the outputs.
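One lightweight way to make such a feedback loop systematic is to log each reviewed output alongside the prompt revision that produced it, then flag modes whose ratings drift downward. The schema and field names below are illustrative, not the project's actual tooling.

```python
from dataclasses import dataclass, field
import datetime

@dataclass
class FeedbackRecord:
    """One logged review of an assistant output (illustrative schema)."""
    mode: str             # e.g. "research", "personas", "objection"
    prompt_version: str   # which system-prompt revision produced the output
    rating: int           # reviewer score, 1 (poor) to 5 (excellent)
    issue: str = ""       # e.g. "hallucinated citation", "wrong tone"
    timestamp: str = field(
        default_factory=lambda: datetime.date.today().isoformat()
    )

def needs_prompt_review(records: list[FeedbackRecord],
                        mode: str, threshold: float = 3.5) -> bool:
    """Flag a mode for prompt revision if its average rating drops."""
    scores = [r.rating for r in records if r.mode == mode]
    return bool(scores) and sum(scores) / len(scores) < threshold

log = [
    FeedbackRecord("research", "v3", 5),
    FeedbackRecord("objection", "v2", 2, issue="missed template section"),
    FeedbackRecord("objection", "v2", 3),
]
```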

/

Conclusion

(03)

This project successfully demonstrates how a custom, open-source AI solution can outperform restrictive commercial tools for domain-specific tasks. The resulting AI assistant is a scalable, modular, and cost-effective tool that empowers the legal team to navigate complex information landscapes with speed and precision.

Impact & Results

  • 🚀 Massive Efficiency Gain: Early testing shows a reduction in time spent on legal analysis and drafting of up to 80%, depending on the specific task.

  • 💡 Transferable Framework: The core architecture—combining a flexible RAG framework with meticulously engineered prompts—is a valuable and replicable model for other document-intensive professional domains.

  • 💪 Empowered Action: The tool gives a small, resource-constrained team the leverage to effectively challenge complex urban planning cases, contributing to a more sustainable and equitable future.

Lessons Learned

The primary takeaway is that for specialized, high-stakes domains like law, a custom-tuned RAG system is superior to a general-purpose LLM. The ability to control the entire data pipeline, from document chunking to prompt engineering, is what ensures reliability and user trust. This project proved that the true power of AI in a professional setting isn't just the model itself, but the thoughtful architecture built around it to guide its reasoning and ground it in factual, domain-specific knowledge.