Tech Stack

Built for scale and reliability

A modern, serverless architecture designed to handle millions of analyses without breaking a sweat.

System Architecture

User → Vercel (frontend) → Cloud Run (backend) → AI + Data
1. Request: the user submits URLs.
2. Process: each source is analyzed in parallel (see the sketch below).
3. Synthesize: AI combines the results into a 360° view.
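A minimal sketch of this request/process/synthesize flow, assuming an async Python backend; the function bodies, names, and data shapes below are illustrative placeholders, not the production pipeline:

```python
import asyncio


async def analyze_source(url: str) -> dict:
    # Hypothetical per-source step: fetch the URL and extract findings.
    await asyncio.sleep(0)  # stand-in for real network and LLM I/O
    return {"url": url, "summary": f"findings for {url}"}


async def synthesize(findings: list[dict]) -> dict:
    # Hypothetical synthesis step: in production this would be an LLM call.
    return {"sources": len(findings), "view": [f["summary"] for f in findings]}


async def run_analysis(urls: list[str]) -> dict:
    # 2. Process: analyze every submitted source concurrently.
    #    return_exceptions=True preserves partial results if one source fails.
    results = await asyncio.gather(
        *(analyze_source(u) for u in urls), return_exceptions=True
    )
    ok = [r for r in results if not isinstance(r, BaseException)]
    # 3. Synthesize: combine the per-source findings into a single view.
    return await synthesize(ok)


if __name__ == "__main__":
    urls = ["https://example.com/a", "https://example.com/b"]
    print(asyncio.run(run_analysis(urls)))
```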

AI & LLM

🤖 Claude (Anthropic)

Primary LLM for analysis and synthesis

🧠 GPT-4 (OpenAI)

Fallback LLM for reliability (fallback sketched below)

🔗 LangGraph

Orchestration patterns for multi-step pipelines
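A minimal sketch of the primary/fallback pattern, assuming the official anthropic and openai Python SDKs; the model names, prompt handling, and error policy here are illustrative assumptions rather than the production configuration:

```python
import anthropic
import openai

anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
openai_client = openai.OpenAI()           # reads OPENAI_API_KEY from the environment


def complete(prompt: str) -> str:
    """Ask Claude first; fall back to GPT-4 if the primary call fails."""
    try:
        msg = anthropic_client.messages.create(
            model="claude-3-5-sonnet-latest",  # illustrative model name
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text
    except anthropic.APIError:
        resp = openai_client.chat.completions.create(
            model="gpt-4",  # fallback model named in the stack above
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
```

When the pipeline grows to multiple steps, LangGraph can express the same handoff as a conditional edge between nodes instead of an inline try/except.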

Backend

🐍 Python + FastAPI

High-performance async API

☁️ Google Cloud Run

Serverless, auto-scaling compute

📬 Upstash

Redis queues for job orchestration (sketched below)
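A minimal sketch of how a request might be accepted and queued, assuming FastAPI on Cloud Run and Upstash's Redis-compatible endpoint; the route, queue name, and connection URL are illustrative placeholders:

```python
import json
import uuid

import redis.asyncio as redis
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
# Upstash exposes a standard Redis endpoint over TLS; this URL is a placeholder.
queue = redis.from_url("rediss://default:***@example.upstash.io:6379")


class AnalysisRequest(BaseModel):
    urls: list[str]


@app.post("/analyses")
async def create_analysis(req: AnalysisRequest) -> dict:
    job_id = str(uuid.uuid4())
    # Enqueue the job; a Cloud Run worker pops it and runs the analysis pipeline.
    await queue.rpush("analysis-jobs", json.dumps({"id": job_id, "urls": req.urls}))
    return {"job_id": job_id, "status": "queued"}
```

Because Cloud Run scales to zero, the worker that drains this queue costs nothing while idle and spins up when jobs arrive.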

Data

🐘 Neon Postgres

Serverless database with branching

📊 SQLAlchemy + Alembic

ORM and migrations (model sketch below)

👁️ LangSmith

LLM observability and tracing
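A minimal sketch of what an analysis record might look like with SQLAlchemy 2.0's declarative mapping; the table and column names are illustrative assumptions, and Alembic's autogenerate would derive migrations from models like this:

```python
from datetime import datetime

from sqlalchemy import String, func
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class Analysis(Base):
    """One analysis job and its synthesized result (illustrative schema)."""

    __tablename__ = "analyses"

    id: Mapped[int] = mapped_column(primary_key=True)
    url: Mapped[str] = mapped_column(String(2048))
    status: Mapped[str] = mapped_column(String(32), default="queued")
    created_at: Mapped[datetime] = mapped_column(server_default=func.now())
```

With Neon's branching, a migration can be exercised on a database branch (for example, `alembic upgrade head` against the branch's connection string) before it reaches the main branch.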

Frontend

⚛️ Next.js 15

React framework with App Router

🎨 Tailwind CSS

Utility-first styling

Framer Motion

Production-ready animations

Vercel

Edge deployment and CDN

Design Principles

Serverless First

Scale to zero when idle, scale to millions when needed. No servers to manage.

🛡️ Graceful Degradation

If one source fails, partial results are preserved. LLM fallback ensures reliability.

👁️ Full Observability

Every LLM call traced, every cost tracked. Know exactly what's happening.
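A minimal sketch of how per-call tracing might be wired up with LangSmith's `traceable` decorator; the function name and the environment-variable setup are assumptions to adapt to the SDK version in use:

```python
from langsmith import traceable

# Tracing is switched on via environment variables (an API key plus a
# tracing flag); the exact variable names depend on the LangSmith SDK version.


@traceable
def synthesize_view(summaries: list[str]) -> str:
    # Each call is recorded as a run in LangSmith with its inputs, outputs,
    # and latency; LLM runs additionally carry token usage for cost tracking.
    return "\n".join(summaries)  # stand-in for the real LLM synthesis call


if __name__ == "__main__":
    synthesize_view(["summary of source A", "summary of source B"])
```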