Self-Employed, Vibe Coding & Advisory Oct 2024 to present

Taking time to study, build, and think: shipping technically complex AI systems, consulting selectively for early-stage teams, travelling, and participating in a few meaningful investment syndicates. Deliberately unhurried, choosing depth over volume.

Consulting · Syndicate investing · AI systems development · Independent research
Head of Growth & Strategic Operations, Arbitrum Foundation Mar 2023 to Sep 2024

Worked directly with the board to build the Arbitrum Foundation from scratch, stewarding one of the largest treasuries in Web3 at $8B. Owned everything end-to-end without a playbook, from setting up the website and legal infrastructure to managing global service providers, launching strategic initiatives, running the grant programme, and shaping overall ecosystem strategy. Less a defined role, more a founding operator.

Foundation building · Strategy · Ecosystem growth · Grant programme · Operations
Ecosystem Lead, Polygon Apr 2021 to Oct 2022

Joined as an early employee and helped scale the ecosystem from the ground up: supporting 4,000+ founders, onboarding 70+ wallet partners and 100+ infrastructure partners, and advising Web2 enterprises on blockchain adoption. Contributed to competitive analysis and product positioning across the stack.

Developer relations · Partnerships · Grants · Enterprise
AI Product Manager Associate, GEP Worldwide Jan 2020 to Apr 2021

Helped build and ship AI features for GEP's contracts-management module, including OCR pipelines, key-term extraction, and risk analysis workflows. Contributed across the full product cycle from research and PRDs to prototyping and testing. Received an early promotion for research depth and product thinking.

AI product · OCR · NLP · Supply chain
Consilience Founder · Active

Consilience is a personal knowledge engine for serious thinkers, researchers, and writers who work across disciplines, traditions, and subjects. It captures your thinking via voice, ingests your sources (PDFs, YouTube videos, articles, books, and people by name), and builds a private knowledge graph connecting your ideas, your sources, and thinkers whose work converges with yours from entirely different directions. Your lens, calibrated from your own recognition language, vocabulary, and worldview, becomes the translation layer through which every source is read. When you write, Consilience retrieves from this library through your lens and generates in multiple modes: Parallel Lens, Gap Finder, Synthesis, Essay and Book Chapter, and Compare. It suggests thinkers you have never encountered whose essence-level thinking maps to yours, matched not by keyword but by recognition. The longer you use it, the more precisely it sounds like you, draws the connections you would draw, and surfaces the dots you would connect. It is not a generic AI assistant: it is your knowledge, your lens, and your library working together.

3-level embeddings: surface, conceptual, essence · Dense vector + BM25 + graph retrieval · Multi-hop graph reasoning · Knowledge graph · Domain SLM from confirmations · Voice ingestion · Multi-source ingestion
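The hybrid retrieval idea above (dense vectors, BM25-style keyword matching, and graph signals fused into one ranking) can be sketched in a few lines. Everything below is an illustrative toy, not Consilience's actual implementation: the corpus, embeddings, fusion weights, and the `keyword_overlap` stand-in for BM25 are all invented for the example.

```python
import math
from collections import Counter

# Toy corpus: each source has a (pretend precomputed) dense embedding,
# raw text, and graph neighbours. All values are illustrative.
SOURCES = {
    "bergson_duration": {
        "embedding": [0.9, 0.1, 0.3],
        "text": "time as lived duration rather than measurable sequence",
        "neighbours": {"james_stream"},
    },
    "james_stream": {
        "embedding": [0.8, 0.2, 0.4],
        "text": "consciousness as a continuous stream of thought",
        "neighbours": {"bergson_duration"},
    },
    "shannon_entropy": {
        "embedding": [0.1, 0.9, 0.2],
        "text": "information measured as entropy of a message source",
        "neighbours": set(),
    },
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def keyword_overlap(query, text):
    """Crude stand-in for BM25: fraction of query terms present in the text."""
    q = Counter(query.lower().split())
    t = set(text.lower().split())
    return sum(1 for w in q if w in t) / len(q)

def graph_bonus(doc_id, seed_ids):
    """Reward documents directly linked to sources already confirmed relevant."""
    return 1.0 if SOURCES[doc_id]["neighbours"] & seed_ids else 0.0

def hybrid_score(query, query_emb, doc_id, seed_ids, w=(0.6, 0.3, 0.1)):
    doc = SOURCES[doc_id]
    return (w[0] * cosine(query_emb, doc["embedding"])
            + w[1] * keyword_overlap(query, doc["text"])
            + w[2] * graph_bonus(doc_id, seed_ids))

query = "stream of consciousness and duration"
query_emb = [0.85, 0.15, 0.35]   # would come from an embedding model
seeds = {"bergson_duration"}     # sources the user already confirmed
ranked = sorted(SOURCES, key=lambda d: hybrid_score(query, query_emb, d, seeds),
                reverse=True)
print(ranked)
```

The design point: a document that is only a near-duplicate of the query loses to one that also shares vocabulary and sits one hop from confirmed sources, which is what lets the graph signal surface convergent thinkers rather than mere paraphrases.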

theconsilience.io →

Insurance Claims AI Agent 9-node LangGraph · 2026

A production-grade nine-node multi-agent AI workflow for insurance claims processing, built in four hours as a forward-deployed prototype. The system handles end-to-end claims triage: email ingestion, parallel data retrieval across customer records, policy contracts, and communication templates, AI-driven underwriting reasoning, human-in-the-loop approval before any communication is sent, and automated customer notification, with a timestamped audit trail on every node transition for regulatory compliance. Designed to demonstrate how regulated industries (insurance, mortgage, healthcare, financial services) can transform high-volume human-judgment workflows into auditable, compliant AI systems without sacrificing oversight. The architecture generalises: the same orchestration pattern applies to mortgage underwriting, healthcare prior authorisations, and loan origination.

LangGraph StateGraph · Multi-agent orchestration · Parallel node execution · Conditional routing · Human-in-the-loop interrupts · MemorySaver checkpointing · Claude Haiku (classification) · Claude Sonnet (underwriting reasoning) · Audit trail and compliance logging · Regulated industry AI · Python
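A minimal sketch of the orchestration pattern described above, written in plain Python rather than LangGraph so the control flow is visible: nodes mutate shared state, a conditional route gates on the underwriting decision, execution pauses for human approval, and every transition lands in a timestamped audit trail. Node names, routing rules, and the claim data are illustrative assumptions, not the repository's actual graph.

```python
import datetime

def log(state, node):
    """Append a timestamped entry for every node transition (audit trail)."""
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    state["audit"].append((stamp, node))

def ingest_email(state):
    log(state, "ingest_email")
    state["claim"] = {"policy_id": "P-1", "amount": 4200}  # toy claim

def retrieve_context(state):
    # In LangGraph these lookups would run as parallel nodes.
    for node in ("customer_records", "policy_contract", "templates"):
        log(state, node)
    state["policy_limit"] = 10_000

def underwrite(state):
    log(state, "underwrite")
    within_limit = state["claim"]["amount"] <= state["policy_limit"]
    state["decision"] = "approve" if within_limit else "escalate"

def route(state):
    # Conditional edge: approvals still require human sign-off before sending.
    return "await_human" if state["decision"] == "approve" else "escalate"

def run(state, human_approves):
    for node in (ingest_email, retrieve_context, underwrite):
        node(state)
    if route(state) == "await_human":
        log(state, "await_human")          # human-in-the-loop interrupt point
        if human_approves:
            log(state, "notify_customer")  # only sends after explicit approval
            state["sent"] = True
    else:
        log(state, "escalate")
    return state

state = run({"audit": []}, human_approves=True)
print([node for _, node in state["audit"]])
```

The key property being demonstrated: no path reaches `notify_customer` without passing through the human interrupt, and the audit list reconstructs the full node sequence for compliance review.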

github.com/Anuja-Khatri/insurance-claims-ai-agent →

DeFi Liquidity Infrastructure Co-founder · Paused

A platform to move liquidity between any vault or protocol with a single click, abstracting away cross-chain complexity. Prototyped ML-based recommendations and self-executing transactions. Paused not because the idea failed, but because I wasn't willing to build for the narrative or the funding round.

DeFi · Cross-chain · ML recommendations · Smart contracts
LangGraph: multi-agent orchestration and stateful workflow graphs · Multi-agent system design: parallel execution, conditional routing, node-level state · Human-in-the-loop architecture for compliance-critical systems · RAG pipeline design: hybrid retrieval (dense vector + BM25 + graph) · 3-level embedding architecture (surface, conceptual, essence) · Knowledge graph construction and multi-hop reasoning · Fine-tuning strategy: QLoRA domain adapters, consensus-based label validation · Domain SLM development from confirmation signals · AI evaluation: RAGAS, LLM-as-judge, golden datasets · LLM selection: cost, latency, and reasoning-depth tradeoffs across the stack · AI product strategy: decomposing vague mandates into sequenced execution · Shipping production AI systems solo
First-principles strategy · Ecosystem and partnerships · Go-to-market · Org design · Grant programmes · Deep-tech startup operations
BSc Physics, Mathematics and Statistics · MBA, Operations and Strategy · Six Sigma Black Belt · PMP, Project Management Professional · Agentic AI for Product Managers, Maven
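The multi-hop graph reasoning listed in the skills above can be sketched as a breadth-first search that returns the chain of typed edges connecting two concepts, the kind of evidence path a retriever can hand to a language model. The graph content below is a toy assumption, not data from any real system.

```python
from collections import deque

# Tiny knowledge graph: node -> list of (relation, neighbour) edges.
EDGES = {
    "entropy": [("formalised_by", "shannon")],
    "shannon": [("influenced", "information_theory")],
    "information_theory": [("applied_in", "thermodynamics")],
}

def multi_hop(start, goal, max_hops=3):
    """Return the relation path from start to goal within max_hops, else None."""
    queue = deque([(start, [])])   # (current node, path of typed edges so far)
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        if len(path) == max_hops:  # hop budget spent; do not expand further
            continue
        for relation, neighbour in EDGES.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, path + [(node, relation, neighbour)]))
    return None

path = multi_hop("entropy", "thermodynamics")
print(path)
```

Because BFS expands shortest paths first, the returned chain is a minimal-hop explanation, which keeps the evidence handed to the model compact.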
Essay · EA Forum · 2025

The Legitimacy Gap at the Heart of AI Alignment

AI alignment has two problems. The technical one gets all the attention. The governance one is almost entirely ignored, and it is the prior problem. Whose values? Decided how? Accountable to whom? You cannot get the specification right if you have not answered who gets to specify in the first place.

The essay walks through what every major approach actually does and where it breaks. RLHF, Constitutional AI, OpenAI's model spec, the EU AI Act: each one assumes the specification is legitimate, and none has a process for making it so. Iason Gabriel at DeepMind, Gillian Hadfield at Toronto, and Atoosa Kasirzadeh at Edinburgh are all pointing at the same gap: the process is broken before the content is even written.

Three things are missing from every current approach. Legitimacy: no AI lab has a process for deciding values that any affected party participated in. Revisability: specifications can be changed by the company tomorrow with no external participation, no delay, and no public record. Accountability: when things go wrong, the response is a blog post.

Four technical architectures make the governance framework real rather than aspirational: hardware-walled computation regions where certain pathways were never built, not merely trained against; a live metacognitive register that tracks internal states automatically before they drive outputs; formal mathematical verification of bounded safety properties, stronger than probabilistic testing; and cryptographic proofs that let regulators verify compliance without seeing model weights.

Each governance layer has a technical equivalent. The constitutional layer needs hardware where violations are physically impossible. The accountability layer needs cryptographic verification that removes the need for trust. The revisability layer needs a live internal dashboard that catches failures before they reach outputs. Without the technical layer, governance is aspiration; with it, governance is verifiable.
It closes with the question nobody in AI governance is asking: what should the deliberation process itself be checked against, other than more human preferences?

Research paper · arXiv submission in progress

Intelligence Is Not a Scale: A Structural Taxonomy for Evaluating Artificial Minds

The debate about whether AI systems understand, feel, or deserve moral consideration has stalled because everyone is arguing on the wrong axis. This paper proposes that intelligence is not a single scale but a layered structure with three qualitatively distinct levels, each requiring conditions the level below cannot produce through scaling alone. A brainless single-celled organism satisfies all ten conditions of the second layer. The world's most capable language models satisfy none. The paper closes with three architectural frameworks specifying what crossing this gap would actually require.

Conference paper · DOI 10.5281/zenodo.19068591

Tat Tvam Asi: Integrating Vedic Ontology and Modern Science in the Study of Consciousness

This paper proposes an integrative model of consciousness bridging Vedic metaphysics, modern physics, neuroscience, and evolutionary biology through a single organising principle: coherence. It argues that quantum field theory and Vedic descriptions of Brahman are converging on the same reality, and that consciousness is not an epiphenomenon of matter but the coherence field that organises energy into form. Submitted to the World Association for Vedic Studies, WAVES 2026.

Preprint · DOI 10.5281/zenodo.19081013

Intelligence Beyond Representation: What AGI Misses When Language Becomes the Substrate

Current AI development implicitly assumes that intelligence is a function of representations: symbolic rules, embeddings, or language models. This essay challenges that assumption by examining intelligence through evolution, neuroscience, biology, and physics. Drawing on LeCun, Hoffman, Friston, and Levin, it proposes a three-layer model (brain, mind, consciousness) and argues that LLMs occupy only the first layer. The architectural implications for AGI are concrete.

Essay

The Myth of Fully Decentralized Governance

Written from inside one of Web3's largest DAOs, this essay challenges the ideology of full decentralization as a governance model. Drawing on first-hand experience at Arbitrum and case studies across Compound and Solana, it argues that DAOs fail not because of their technology but because of the absence of structure, leadership, and aligned incentives. The essay proposes hybrid governance as the practical path forward: decentralize execution and transparency, but keep vision and strategic decision-making in the hands of committed, qualified people.

I grew up in a household where ambition had to be earned, not assumed. I chose Physics, with Mathematics and Statistics, not because it was practical, but because I needed to understand how things actually work underneath. That instinct shaped everything that followed. Before my MBA, I worked as a cold caller to fund it. That job taught me more than most classrooms did: how to earn attention in one sentence, how to stay grounded after rejection, and how to hear what people mean rather than what they say.

The pattern in my career is not an industry. It is depth of comprehension, applied fast. At GEP I was working across pharma clients like Roche and enterprise contracts, learning compliance and how large organisations make decisions. At Polygon I was distributing grants to founders across automobile, energy, real estate, and finance simultaneously, which meant understanding what each industry was actually trying to solve before deciding who deserved capital. I moved from enterprise software to blockchain with no roadmap and reached an executive seat at one of the largest foundations in the space within three years. I write research papers on AI safety, consciousness, and DAO governance, not as a side interest but because when I pick something up, I go all the way in. I am now building production AI systems. The through-line is not hustle. It is that I find the underlying structure of whatever I enter, quickly, and operate from there. That is what I do in every room I walk into.

My path has never moved in a straight line, and I think that is the point. Physics taught me to model systems. Six Sigma taught me to eliminate what doesn't matter. An MBA in strategy gave me the language for decisions at scale. Working in AI product management, then deep-tech ecosystems, then running operations for a foundation managing billions, each chapter came with a steep learning curve I had no choice but to climb. I have never waited to feel ready. I have always figured it out.

I have always been drawn to things before they were obvious: early-stage, ambiguous, with real ownership. I turned down a Google offer to go deeper into something I believed in more. I do my best work when the map does not exist yet, the team has high integrity, and the problem genuinely matters. I also attempted to build a DeFi infrastructure startup and chose to pause it, not because the idea failed, but because I was not willing to build something just for the narrative or the funding round. That distinction matters to me more than momentum for its own sake.

I have travelled solo across 10+ countries. Outside of work, I read across physics, ancient scriptures, philosophy, AI, and consciousness, not as separate subjects but as different angles on the same question. Spirituality, to me, is higher-order logic: a framework for understanding reality that predates, and often reaches beyond, what modern science has yet caught up to. I prefer depth over breadth, in ideas, in work, and in people.

Away from screens I keep fish, tend to plants, and spend time around animals and nature. I play table tennis and lift weights. These are not hobbies so much as the counterweight that keeps everything else honest.

Anuja speaking on panel

On the mic

KryptoSeoul conference


Metaverse Summit Paris
