Signal scores companies on six dimensions of AI maturity using seven public data sources. No surveys, no self-reporting, no marketing decks. Just what the data shows.
Every Signal report starts with raw data collection across seven public sources. We pull job postings, GitHub activity, SEC filings (for public companies), funding data, press coverage, leadership profiles, and company websites — all within a 10-second window per source. Private companies are scored with the same rigor using the sources available to them.
The raw data passes through our AI scoring engine, which evaluates each of six dimensions on a 0–100 scale. The weighted average produces the overall score. If data is missing for a dimension, it scores null with low confidence — we never fabricate signal.
The flagship metric. The Narrative Gap is the delta between a company's Public Narrative score and its Product Embedding score. A company that talks about AI more than it ships AI will show a large positive gap.
A gap of more than 25 points triggers the "Marketing-Led" verdict override, regardless of the overall score. This is the theater detector.
A gap below 10 is healthy. Between 10 and 25 is worth watching. Above 25 means the company is managing perception more carefully than capability.
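The bands above reduce to a few comparisons. A minimal sketch, assuming hypothetical band labels (only "Marketing-Led" is a name the methodology actually uses):

```python
def gap_band(narrative: int, product: int) -> str:
    """Band the Narrative Gap: narrative score minus product score."""
    gap = narrative - product
    if gap > 25:
        return "marketing-led"  # verdict override fires regardless of overall score
    if gap >= 10:
        return "watch"
    return "healthy"
```

Note that a negative gap (shipping more than you talk about) lands in "healthy" here; the methodology only penalizes narrative outrunning product.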
7 data sources queried in parallel. Each fetcher has a 10s timeout and returns null on failure.
Raw data is structured into a typed schema. Missing sources are marked, not faked.
AI evaluates each dimension 0–100 with evidence citations and confidence levels.
Weighted average produces the overall score. Narrative Gap check triggers verdict override.
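The collection step above (parallel fetchers, a 10-second budget each, null on failure) can be sketched as a small async pipeline. The fetcher callables here are hypothetical stand-ins for the real source clients:

```python
import asyncio

async def fetch_with_timeout(fetcher, timeout: float = 10.0):
    """Run one source fetcher under a hard timeout.

    Timeouts and errors both become None: a missing source is
    marked as missing, never faked or allowed to crash the scan.
    """
    try:
        return await asyncio.wait_for(fetcher(), timeout)
    except Exception:
        return None

async def collect(fetchers: dict):
    """Query every source in parallel; each gets its own timeout budget."""
    results = await asyncio.gather(
        *(fetch_with_timeout(f) for f in fetchers.values())
    )
    return dict(zip(fetchers.keys(), results))
```

Because each fetcher is isolated, one slow or failing source (say, SEC data for a private company) degrades that single field to null while the other six proceed.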
Measures the volume, specificity, and seniority of AI-related job postings relative to industry peers. Generic 'AI experience preferred' listings score low; specific roles like 'Staff ML Engineer — Retrieval Systems' score high.
Evaluates whether AI strategy is backed by genuine executive commitment. Looks for dedicated AI leadership roles, board expertise, and whether claims trace to organic capability or acquisitions.
Analyzes actual technical infrastructure for AI. Distinguishes between companies running production ML pipelines and those simply mentioning AI in marketing materials. Tech references on the company website corroborate stack depth only when confirmed by GitHub or job descriptions.
Tracks financial commitment to AI through acquisitions, R&D allocation, strategic partnerships, and disclosed investment figures. For private companies, Crunchbase funding data and press coverage carry primary weight. Words are cheap — money is signal.
Quantifies the volume and intensity of a company's public AI messaging — including their own website marketing claims. A high narrative score isn't inherently bad — but when it outpaces product reality, the gap becomes the story.
Measures whether AI features are actually shipping in products users touch. We scrape company product pages for real AI feature evidence and cross-reference against GitHub activity and hiring patterns.
Repository activity, contributor patterns, commit history, language distribution
Code doesn't lie. Active ML repositories with meaningful commit patterns indicate real engineering investment.
Active listings from LinkedIn, company career pages
Hiring intent is a leading indicator. Companies building AI capabilities need people to build them.
10-K, 10-Q, 8-K filings, earnings call transcripts (public companies only)
Regulatory filings carry legal weight — companies are more careful about claims made to the SEC. For private companies, this source is gracefully skipped and other signals are weighted more heavily.
Funding rounds, acquisitions, investor profiles
Follow the money. Acquisitions and investment patterns reveal strategic priorities.
Press releases, media coverage, analyst reports
Measures the narrative layer — what companies want the market to believe about their AI story.
Executive profiles, board composition, organizational structure, team/about pages
Real AI commitment shows up in org charts. Dedicated AI leadership signals long-term strategy.
Homepage, product pages, AI-related content, careers and team pages via Firecrawl
First-party claims are the baseline for narrative gap analysis. What a company says on its own site is compared against hard evidence from every other source.
Score 72+ with Narrative Gap < 15. The company is genuinely building and shipping AI capabilities. Evidence backs the claims.
Score 45–71 with Narrative Gap < 25. Real effort underway, but gaps remain between ambition and execution.
Score < 45, or Narrative Gap > 25. The story runs ahead of the substance. More theater than transformation.
Insufficient data across too many dimensions. We can't score what we can't see — and we won't guess.
Signal is a point-in-time snapshot, not a continuous monitor (unless you're on Pro). Reports reflect data available at scan time with a 24-hour cache.
We rely on public data only. Companies with strong private AI efforts and poor public signaling will score lower than their true capability — that's a feature, not a bug. If it's not public, we can't score it.
The scorer is an LLM. Like all LLMs, it can hallucinate or misjudge nuance. Every dimension includes a confidence level. Low-confidence scores should be treated as directional, not definitive.
Analyze any company and see seven data sources distilled into six scored dimensions in 60 seconds.
Analyze a Company — Free →