How to Build a Personal Intelligence Agency

Most people check the news the same way every day. They scroll Twitter, skim a few headlines, maybe glance at a newsletter. It works, but it is shallow. You only see what the algorithm decides to show you, filtered through whatever is trending at the moment.
What if you had a small team of analysts working for you instead? One watching global events in real time. Another tracking prediction markets for signals the news hasn't caught yet. A third doing deep research on the topics you actually care about. And every morning, a briefing lands in your inbox that ties it all together.
That is what a personal intelligence agency looks like. And you can build one today with a few AI agents, a Postgres database, and some glue.
The concept is not new. Intelligence agencies have used this exact cycle for decades: plan what to collect, gather from multiple sources, analyze, and disseminate. The difference is you can now automate every step.
The Simple Version: Individual Agents on Telegram
Before building anything complex, start with the simplest pattern that works. Spin up individual agents, each with a single job, and have them report directly to you on Telegram.
A GDELT monitor that watches for spikes in event activity related to your interests. You tell it what regions or topics to watch. When something significant happens, it sends you a Telegram message with a summary and source links.
A Polymarket tracker that monitors prediction markets you care about. It alerts you when odds shift significantly on contracts you are following, or when new markets open that match your interests.
An X researcher that follows specific accounts, hashtags, or conversations and surfaces things worth reading.
Each agent runs independently. Each one messages you directly. There is no shared state, no coordination, no database. It is just three bots pinging your phone when something matters.
This pattern is surprisingly effective. You get real-time alerts. You stay informed without doomscrolling. And because each agent has a narrow focus, the signal-to-noise ratio stays high.
But eventually you will want something more structured.
The Full Architecture: Coordinated Agents with Postgres
The next level is to stop treating your agents as independent units and start treating them as a team. The key difference: they share a database, and one agent is responsible for synthesizing everything into a single daily report. This is essentially a multi-agent system applied to open-source intelligence.
Here is the architecture:
The Database
Postgres is the source of truth. Every agent writes to it. The schema is straightforward:
CREATE TABLE insights (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  source TEXT NOT NULL,          -- 'gdelt', 'polymarket', 'x'
  agent TEXT NOT NULL,           -- which agent found this
  topic TEXT,
  title TEXT NOT NULL,
  summary TEXT NOT NULL,
  raw_data JSONB,
  relevance_score FLOAT,
  url TEXT,
  discovered_at TIMESTAMPTZ DEFAULT now()
);

CREATE TABLE reports (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  content TEXT NOT NULL,
  sent_at TIMESTAMPTZ,
  created_at TIMESTAMPTZ DEFAULT now()
);

CREATE TABLE watched_topics (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  topic TEXT NOT NULL,
  priority INT DEFAULT 5,
  active BOOLEAN DEFAULT true
);
The watched_topics table is what makes this personal. You populate it with the things you care about: "semiconductor supply chains," "EU AI regulation," "Middle East energy," whatever. Every agent reads from this table to know what to look for.
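Seeding the table is a couple of INSERTs. Using the example topics above (priorities here are arbitrary; `active` defaults to true):

```sql
INSERT INTO watched_topics (topic, priority) VALUES
  ('semiconductor supply chains', 8),
  ('EU AI regulation', 6),
  ('Middle East energy', 5);
```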
Agent 1: The Polymarket Analyst
This agent focuses on prediction markets. It pulls data from Polymarket's API, tracks contracts that relate to your watched topics, and writes insights when it detects meaningful movement.
It runs on a schedule, say every 2 hours. For each contract it follows, it checks:
- Has the implied probability shifted more than five percentage points since the last check?
- Are there new contracts related to watched topics?
- Is trading volume spiking on anything relevant?
When it finds something, it writes a row to the insights table with a summary explaining why the shift matters. The relevance_score field lets the summarizer agent prioritize later.
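The core of this agent is a shift check plus a row builder. Here is a minimal sketch; the function names are illustrative, the real Polymarket API payload will look different, and the relevance formula is just one crude option:

```python
from datetime import datetime, timezone

SHIFT_THRESHOLD = 0.05  # flag moves larger than 5 percentage points

def significant_shift(prev: float, curr: float, threshold: float = SHIFT_THRESHOLD) -> bool:
    """True when a contract's implied probability moved more than `threshold`."""
    return abs(curr - prev) > threshold

def build_insight(contract: dict, prev: float, curr: float) -> dict:
    """Shape a row for the insights table (keys mirror the schema above)."""
    return {
        "source": "polymarket",
        "agent": "polymarket-analyst",
        "topic": contract["topic"],
        "title": f"{contract['question']}: {prev:.0%} -> {curr:.0%}",
        "summary": f"Implied probability moved {curr - prev:+.0%} since the last check.",
        "relevance_score": min(1.0, abs(curr - prev) * 4),  # crude: bigger move, higher score
        "url": contract["url"],
        "discovered_at": datetime.now(timezone.utc).isoformat(),
    }
```

On each scheduled run the agent compares the current probability against the last stored value and inserts the row only when the check fires, which keeps the insights table quiet between real moves.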
Agent 2: The GDELT Breadth Scanner
GDELT is massive. It indexes news from every country in dozens of languages and tags events with structured metadata. Think of it as a real-time event data layer over the entire world's media output. This agent does a breadth-first scan: it casts a wide net across your watched topics and captures the general landscape.
Every few hours, it queries the GDELT API for recent events matching your topics, groups them by theme, and writes high level summaries to the database. Think of it as the agent that tells you "there were 47 articles about semiconductor tariffs today, up from 12 yesterday, with most coverage coming from East Asian outlets."
It is not trying to go deep. It is trying to spot patterns, trends, and anomalies across the full surface area of global news.
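Two small pieces are enough to sketch this agent: building the GDELT DOC API query for a topic, and deciding when a volume jump counts as an anomaly. The parameter values and the 2x day-over-day threshold are assumptions to tune:

```python
from urllib.parse import urlencode

GDELT_DOC_API = "https://api.gdeltproject.org/api/v2/doc/doc"

def gdelt_query_url(topic: str, hours: int = 4) -> str:
    """Build a GDELT DOC 2.0 API URL for recent coverage of one watched topic."""
    params = {
        "query": topic,
        "mode": "artlist",    # list of matching articles
        "format": "json",
        "timespan": f"{hours}h",
        "maxrecords": 75,
    }
    return f"{GDELT_DOC_API}?{urlencode(params)}"

def volume_anomaly(today: int, yesterday: int, ratio: float = 2.0) -> bool:
    """Flag a topic whose article count jumped by `ratio` or more day-over-day."""
    return yesterday > 0 and today / yesterday >= ratio
```

The "47 articles today, up from 12 yesterday" example above is exactly what `volume_anomaly(47, 12)` catches.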
Agent 3: The GDELT Deep Researcher
This agent picks up where the breadth scanner leaves off. It looks at what the breadth scanner found, identifies the most significant items, and does actual research.
It reads the full articles. It cross references sources. It checks whether the event connects to anything else in the database. It writes longer, more analytical summaries that explain context and implications.
This is the agent that turns "semiconductor tariff articles spiked" into "The spike is driven by a leaked draft proposal from the EU Commission that would impose 25% tariffs on AI chip imports. Three separate outlets confirmed the document. Polymarket contract on EU chip tariffs moved from 15% to 38% in the same window."
Agent 4: The Morning Briefer
This is the orchestrator. It runs once a day, early morning, and its job is simple: read everything the other agents collected in the last 24 hours, synthesize it into a coherent briefing, and send it to you via email.
It queries the insights table for the last 24 hours, orders by relevance score, groups by topic, and writes a report that reads like something a human analyst would produce. The structure might look like:
- Top developments (3 to 5 items that matter most)
- Market signals (what prediction markets are saying)
- Emerging patterns (things that showed up across multiple sources)
- Deep dives (the detailed research from Agent 3)
- Watchlist updates (any changes in the landscape for your tracked topics)
It writes the report to the reports table and sends it via email. If you want, it can also post a condensed version to Telegram.
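Setting aside the LLM prose, the deterministic half of the briefer is just rank, group, and render. A sketch over rows already pulled from the insights table (the layout is illustrative):

```python
from collections import defaultdict

def render_briefing(insights: list[dict], top_n: int = 5) -> str:
    """Assemble a plain-text report from the last 24h of insight rows."""
    ranked = sorted(insights, key=lambda r: r.get("relevance_score", 0), reverse=True)

    lines = ["Top developments", ""]
    for row in ranked[:top_n]:
        lines.append(f"- [{row['source']}] {row['title']}")

    # Group the remainder by topic for the per-topic sections.
    by_topic = defaultdict(list)
    for row in ranked:
        by_topic[row.get("topic") or "uncategorized"].append(row)

    for topic, rows in by_topic.items():
        lines.append("")
        lines.append(topic.upper())
        for row in rows:
            lines.append(f"- {row['summary']}")
    return "\n".join(lines)
```

In practice you would feed this skeleton (or the raw rows) to a model to get analyst-quality prose, then store the result in the reports table before emailing it.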
Making It Work in Practice
A few practical notes on actually running this:
Start with watched_topics. The whole system is only as good as what you tell it to care about. Start with 5 to 10 topics. Be specific. "AI" is too broad. "Foundation model training costs Q1 2026" is useful.
Tune relevance scoring. Each agent should score its own insights. The summarizer uses these scores to prioritize. Start simple (high/medium/low mapped to numbers) and refine over time.
Let the agents update each other. The deep researcher should be able to flag something as "needs Polymarket check" by writing a row that the Polymarket agent picks up. This cross-pollination is where the real intelligence emerges.
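One lightweight way to implement that handoff, using nothing but the existing insights table: agree on a sentinel topic value, and have the Polymarket agent poll for it on each run. The tag and function names here are a convention, not anything the schema enforces:

```python
FOLLOW_UP_TAG = "needs-polymarket-check"  # sentinel topic, by convention only

def flag_for_market_check(insight: dict) -> dict:
    """Deep researcher emits a follow-up row for the Polymarket agent."""
    return {
        "source": "gdelt",
        "agent": "deep-researcher",
        "topic": FOLLOW_UP_TAG,
        "title": f"Check markets related to: {insight['title']}",
        "summary": insight["summary"],
        "relevance_score": insight.get("relevance_score", 0.5),
    }

def pending_market_checks(rows: list[dict]) -> list[dict]:
    """What the Polymarket agent picks up on its next scheduled run."""
    return [r for r in rows if r.get("topic") == FOLLOW_UP_TAG]
```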
Keep raw data. The JSONB raw_data column is there for a reason. Store the original API responses, article URLs, and market data. You will want to go back to primary sources.
Run on a schedule, not in real time. Unless you are trading, you do not need minute-by-minute updates. The breadth scanner every 4 hours, the Polymarket agent every 2 hours, the deep researcher twice a day, and the briefer once each morning. That is plenty.
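That cadence maps directly onto cron. A sketch of the crontab, assuming each agent is a standalone script (the /opt/agency paths and run times are illustrative):

```
0 */4 * * *   /opt/agency/gdelt_breadth.py      # breadth scan every 4 hours
0 */2 * * *   /opt/agency/polymarket_agent.py   # market check every 2 hours
0 6,18 * * *  /opt/agency/deep_researcher.py    # deep research twice a day
30 6 * * *    /opt/agency/morning_briefer.py    # briefing lands at 06:30
```

Stagger the briefer after the overnight deep-research run so the morning report always has fresh analysis to draw from.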