insight agencies · AI · startups

What we're learning from our first customers

Ajin Kabeer · April 8, 2026


Two of our first customers look nothing alike.

Provokers is one of Latin America's largest insight agencies. 150 people spread across 8 countries. Buenos Aires, São Paulo, Madrid. 500+ clients. They've been doing this for 15 years.

thinqinsights is five people. North American boutique. All senior. Career researchers who left the big firms to do it their way.

The bottleneck

Both were stuck on synthesis.

At Provokers, a single report could take 10 to 15 days. Research scattered across five different tools. People using ChatGPT on the side. Manual cross-referencing. Manual deck building. Clients asking for faster turnarounds while budgets tightened.

At thinqinsights, it looked different but felt the same. Multi-method research - surveys, interviews, social scraping, market data - each living in a different place. Weeks of work to pull it all together. Senior people spending their time on tasks they were overqualified for.

Here's what struck me: the problem had nothing to do with team size.

What insight work actually is

When you break down what insight agencies do, there are really two types of work.

Pattern recognition work. Taking raw data from multiple sources and finding connections. Spotting themes. Cross-referencing. Organizing. Structuring. This is cognitively demanding but fundamentally deterministic. Given the same inputs, different researchers should surface similar patterns.

Judgment work. Deciding which patterns matter. Understanding client context. Knowing what questions to ask next. Framing insights so they land. Making strategic recommendations. This is non-deterministic. It requires experience, intuition, relationship knowledge. Ten researchers could see the same pattern and recommend ten different things based on context only they understand.

The bottleneck for both agencies was the first type of work eating up all the time for the second.

What actually changed

Provokers deployed us across all 150 people. Not a pilot. The whole company. Delivery went from 15 days to 36 hours. They consolidated five tools into one.

thinqinsights went from weeks to hours on synthesis. One platform handling everything - surveys, social data, market analysis. Eva, their head of research, now sends clients a dashboard link instead of promising a PDF in three weeks.

Both teams changed how they allocate attention. Pattern recognition got compressed into hours. Judgment work expanded to fill the space.

Rafael at Provokers Chile told us: "We cannot operate without Meaningful anymore."

That's not hyperbole. They restructured around it.

The underlying pattern

Here's what I keep coming back to: scale doesn't change the fundamental economics of the work.

Whether you're 5 people or 150, the same ratio holds. Most of the time goes to pattern recognition. Not because it's more valuable, but because it's manual. The judgment work - the part clients actually pay for - gets squeezed into whatever time remains.

Flip that ratio and everything changes.

thinqinsights is five people operating at the output level of a fully staffed agency. Not because they work harder. Because their time allocation shifted. More judgment work. Less pattern work.

Provokers is taking on more projects without hiring. Same reason. The constraint wasn't people. It was where those people spent their cognitive energy.

This isn't about AI making research faster. It's about unbottlenecking the part that matters.

What AI is actually good at

We talk about AI like it's one thing. It's not.

AI is exceptional at pattern recognition across large datasets. Finding themes. Cross-referencing. Structuring information. Deterministic work at scale.

AI is terrible at judgment. It can't know which insight matters to your client based on a conversation you had last week. It can't draw on 20 years of running studies to know when something's worth digging into. It can't frame a recommendation so it lands with the specific executive who needs to hear it.

The agencies that get this right are using AI for what it's good at. And doubling down on human expertise for everything else.

The operating layer

I think about this constantly while building Meaningful.

What agencies need isn't another point solution. It's an operating layer for how insight work gets done in the AI era.

Think about what an operating system does. It sits between the hardware and the applications. It handles resource allocation. It manages processes. It provides a consistent interface. It lets you build more sophisticated things on top because the foundational complexity is abstracted away.

That's what we're building for insight agencies.

Provokers deployed us across 150 people in 8 countries. It's not a tool they use sometimes. It's the layer their entire operation runs on. Every project. Every office. Every day.

thinqinsights uses us the same way. Five people, but it's still their operating layer. Multi-method studies flow through the same system. Everything connects. Nothing falls through the cracks.

The result: both agencies are doing more sophisticated work now. Not simpler work. More sophisticated.

Provokers is moving from one-off reports to continuous intelligence systems. Living dashboards that update as new data comes in. They couldn't do that when synthesis took weeks.

thinqinsights is taking on multi-method studies they'd have passed on before. The synthesis overhead would've killed the economics. Not anymore.

Both have the same constraint removed: their experts now have room to be expert.

Where this goes

I'm learning this: agencies don't need more AI tools. They need a different operating layer.

One that handles the deterministic work - pattern recognition, synthesis, cross-referencing - so humans can focus on judgment. One that connects all the data sources instead of adding to the sprawl. One that lets five people operate like fifty, or 150 people take on work that used to require 300.

That's not about automation. It's about architecture.

If you try to replace judgment with AI, you get generic outputs that miss context. If you ignore AI entirely, your experts drown in manual work.

The agencies getting this right are the ones asking: what should our people spend their time on? And then building the operating layer to make that possible.

We're figuring it out alongside them.

Ajin


Read the full stories: Provokers case study · thinqinsights case study