
CTO insights for market data teams + why AI elevates the role of human market data pros

January 2026

Market Data Matters: The January 2026 edition

This is the 11th edition of our industry newsletter with musings, observations and ideas regarding the challenges and opportunities facing market data management leaders.

2026: The year market data’s foundations get exposed
A CTO’s view on what’s really changing underneath market data

We hope you enjoy this piece as much as we enjoyed chatting with Paiman, our CTO. Keep an eye out for next month’s edition of Market Data Matters, as Paiman takes us deeper into the topics of scale and operating models to answer a big question: As market data operations scale globally, what work stops scaling well and why?  He’ll focus on entitlement management, access provisioning, and the complexities of cost allocation. Now, let's dive in!
 
If 2025 was the year market data costs came under scrutiny, could 2026 be the year the foundations are finally exposed?

Systems that were ‘good enough’ are being tested, and the long-standing assumption that ‘there’s a system handling all this’ starts to unravel when leadership asks for defensible answers.

Six months into his role as CTO at TRG Screen, Paiman Nodoushani sits at the intersection of technology, engineering, operations and governance – working across global teams and complex client environments. In this Q&A, he shares what he’s seeing and where he thinks market data leaders need to focus as we head into 2026.

Q: From your perspective as a CTO, what feels fundamentally different about market data management as we head into 2026?

The biggest shift is that market data has moved from being a ‘cost of doing business’ to being under a microscope.

A few years ago, large institutions could be spending $50m–$200m on market data and, as long as traders had what they needed, it was broadly accepted. Today, every dollar has to be justified – and that pressure is coming straight from the CFO.

What’s interesting is that leadership often believes they already have visibility. In reality, they don’t, or at least not end to end. Market data sits in different places across the organization, and it can be very difficult to get a holistic view of spending.

So when basic questions come up, like "What are we actually spending on equities globally?" or "Which desks are driving the cost spikes?", the answers are far harder to produce than expected.
 
Q: There’s often an assumption that “there must be a system handling this.” What’s really happening and where is risk lurking?

In most firms, very smart people are doing a huge amount of manual work.

They’re reconciling invoices in spreadsheets, tracking down who ordered what, proving compliance, building reports for finance. It can easily consume 30–40% of a market data team’s time, and it’s not value-add work.

That’s where the real risk creeps in. Manual processes and fragmented data make it easy to miss redundancy, over-provision users or inadvertently create compliance gaps.

Market data hasn’t just suddenly become a problem. The inefficiency and risk were always there – they’re just becoming harder to hide.
 
Q: When firms really dig into this, what do they usually find under the hood?

Two things consistently show up: technical debt and organizational fragmentation.

On the technical side, firms are running systems built 10 or 20 years ago – layers of middleware and custom applications stitched together in ways that often aren’t documented. People are afraid to touch them because they don’t know what will break on the trading floor, so modernization gets deferred and problems get patched.

Organizationally, no one owns market data end to end. Procurement negotiates contracts. Market data and IT handle delivery and entitlements. Finance pays invoices. The business consumes. Everyone owns a piece, but no one has visibility across the full lifecycle.

That’s how firms renew multi-million-dollar contracts and only discover months later, through a usage audit, that 40–50% of licensed users aren’t even active.

“What’s exposed first isn’t a system failure, but a visibility gap – the difference between what firms think they know and what they can actually see end to end.”
 
Q: Why is now the right time to modernize?

Because the cost of delay keeps rising.

What’s changed is that modernization no longer has to be a multi-year, high-risk program built entirely in-house. There are now mature platforms and technology providers – increasingly supported by AI – that can accelerate work that previously took years.

Just as importantly, leadership patience has run out. Cost pressure, regulatory scrutiny and global operating models demand transparency, and manual workarounds don’t scale. Modernization isn’t just a technology decision anymore; it’s an operating decision.

If firms don’t address it now, they’re effectively committing to the same problems for another decade, just at a larger and more complex scale.
 
Q: As market data operations extend globally, what stops scaling well?

Three areas consistently break: entitlement management, access provisioning and cost allocation. And they all break down for the same reason: processes that work fine with centralized teams in one or two locations fall apart when teams are spread across time zones and regional boundaries.

Processes built for a world where teams sat in one place, worked the same hours and passed work between people informally don’t survive global scale. Even simple tasks can turn into multi-day approval chains, slowed by handoffs, time differences and undocumented knowledge.

You can’t scale human handoffs and tribal knowledge across a distributed organization, which is where automation stops being optional.
 
Q: Automation raises concerns around control, accountability, and even professional risk. How should teams think about trust and governance?

The biggest barrier isn’t technology, it’s career risk.

People worry that if automation makes a mistake, their name is on it. That’s why guardrails matter.

Automation should handle the routine and escalate the exceptions. Standard access packages can be auto-approved. Anything unusual, high-cost or outside policy goes to a human. The same goes for invoices: handle what matches expectations, flag what doesn’t.

That shifts accountability from individuals to agreed policy. Decisions become auditable, defensible and transparent, and it becomes safe to say yes to automation.
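
To make the guardrail idea concrete, here is a minimal sketch of that kind of policy-based routing. It is purely illustrative – the package names, cost threshold and rules are hypothetical, not a description of any particular platform:

    # Illustrative sketch only: route market data access requests against an agreed policy.
    # The package names, cost threshold and rules below are hypothetical examples.
    STANDARD_PACKAGES = {"level1_equities", "reference_data_basic"}
    AUTO_APPROVE_COST_LIMIT = 500  # monthly cost in USD; an example policy threshold

    def route_access_request(package: str, monthly_cost: float, within_policy: bool) -> str:
        """Auto-approve the routine; escalate anything unusual to a human."""
        if package in STANDARD_PACKAGES and monthly_cost <= AUTO_APPROVE_COST_LIMIT and within_policy:
            return "auto-approved"  # routine request: automation handles it, the decision is logged
        return "escalated"          # exception: a person reviews, decides and owns the outcome

    print(route_access_request("level1_equities", 120, True))     # auto-approved
    print(route_access_request("level2_full_depth", 2400, True))  # escalated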
 
Q: Where is AI delivering real value today, and where is the hype ahead of reality?

The strongest ROI today is in document and contract analysis.

Firms are digitizing years of vendor contracts in days – extracting renewal terms, cancellation windows and price escalators. That directly improves negotiating leverage and reduces surprise renewals.
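
For a sense of what that extraction involves, here is a deliberately simplified sketch. The contract wording, field names and patterns are hypothetical, and real contract analysis relies on far more robust document and language tooling; this only shows the shape of the output:

    import re

    # Illustrative only: pull a few key terms out of contract text with simple patterns.
    contract_text = """
    The agreement renews automatically on 01 January 2027.
    Either party may cancel with 90 days written notice.
    Fees increase by 5% per annum.
    """

    patterns = {
        "renewal_date":        r"renews automatically on ([\w ]+\d{4})",
        "cancellation_window": r"cancel with (\d+ days)",
        "price_escalator":     r"increase by (\d+%)",
    }

    extracted = {}
    for field, pattern in patterns.items():
        match = re.search(pattern, contract_text)
        extracted[field] = match.group(1) if match else None

    print(extracted)
    # {'renewal_date': '01 January 2027', 'cancellation_window': '90 days', 'price_escalator': '5%'}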

Where expectations run ahead is predictive analytics and automated vendor negotiation. Business needs change too often, and negotiations rely on context and relationships that AI isn’t ready to replace.

The teams getting this right use AI to eliminate grunt work and surface insight, not to outsource judgment. The question isn’t just what technology can automate, but what remains distinctly human.

“If you had to future-proof one human skill in this age of AI, it would be authenticity.”
 
Q: If you were advising a market data leader for the next 12–18 months, what should they prioritize and stop doing?

Three priorities, in order.

First, get the data foundation right. Build a single source of truth so you know what market data you have, who’s using it and what it costs.

Second, automate the manual work that’s draining capacity: invoicing, provisioning and usage reporting.

Third, invest in AI and modernization pragmatically. Start where ROI is clear and where it directly reduces risk or operational friction.

What should leaders stop doing? Stop chasing perfect data, it doesn’t exist. Eighty percent accuracy is enough to make better decisions, then improve over time.

And stop reinventing the wheel. Market data teams have historically custom-built tools because they believed their requirements were unique. Today, there are mature built-for-purpose platforms and solution providers that cover the majority of those needs.

“Continuing to build everything in-house just creates the next generation of legacy.”
 
Q: What mindset matters more than any specific tool as we head into 2026?

Strategic thinking.

Tactical thinking asks, “How do we get through this month?” Strategic thinking asks, “Why does this problem exist at all?”

Teams stuck in firefighting stay busy forever. Leaders who carve out time to think structurally break the cycle, and those are the teams that scale.

Closing thought

The future of market data isn’t tool-led, it’s foundation-led.

Teams that invest in visibility, automation and governance now don’t just reduce cost and risk. They create room to adapt, to truly modernize.

And heading into 2026, that flexibility may matter more than anything else.
 
 
 
 

AI didn’t kill the market data professional

It just changed the job.

Cast your mind back to early 2025.

AI was already everywhere. We understand AI means different things to different people. In this context, we mean systems that use automation, machine learning and language models to analyse market data, surface insights and propose actions.

Every headline screamed replacement of humans, disruption, the end of work as we know it. If you work in market data (or any profession), chances are you had at least one quiet moment of dread where you wondered whether your hard-won expertise was about to be automated away.

Fast forward to now, and something more interesting has happened.

AI hasn’t removed the human role in market data. Instead, it’s exposed where humans matter most.

In our internal and client conversations over the past several months, one theme has kept resurfacing (unprompted, and with surprising consistency): AI changes the work, but it doesn’t replace the people. It reshapes where human judgment, experience and context sit in the operating model.

What’s emerging isn’t a smaller human role. It’s a different one. Here’s what that shift really looks like in practice.

From doing the work to judging the work

AI is exceptionally good at mechanics.

Reading invoices. Extracting clauses. Comparing usage. Surfacing patterns. Routing requests. Answering questions across multiple datasets at once. These are relatively mature AI applications.

That’s the easy part. While AI can “judge” in its own way, it has no awareness of its own judgement and no sense of liability. Nor can it discern. Take, for instance, questions like these:

  • Is this license technically compliant but commercially reckless?
  • Is this request reasonable given what’s happening on that desk?
  • Is this optimization smart, or does it create risk elsewhere?
  • Is this answer ‘correct,’ or just statistically plausible?
 
In an AI-enabled market data function, humans increasingly move from being the makers of outputs to the interpreters of outcomes. AI brings the gray areas to the surface faster. People decide what they actually mean. Human judgment is essential because someone must ultimately own responsibility for the resulting decisions and their consequences.

That shift elevates domain expertise rather than eroding it.

The rise of the overseer, not the back-office processor

One of the most practical changes we’re already seeing is the emergence of a new supervisory layer.

As AI takes on the heavy lifting – drafting recommendations, validating data, proposing actions – humans step into the role of checker. But not in the old QA sense.

This isn’t box-ticking. Decisions must still be made by the humans in the loop.

People oversee AI-driven workflows, validate decisions, override where needed and understand the downstream consequences of getting something wrong. The skill here isn’t speed; it’s judgement, synthesis, instinct, and experience.

Several firms are already formalizing this, with explicit roles accountable for AI output quality and governance. In some cases, that responsibility is moving all the way to the board.

Data stewardship becomes mission-critical

AI is only as good as the data it learns from and, bear with us on this small mindbender: the data about your market data program is rarely perfectly clean, static or simple.

Exchange rules evolve. Contracts are nuanced. Usage behaviours shift. Exceptions pile up.

That’s why one of the most important human roles in an AI-driven model is data stewardship.

People who understand licensing logic, entitlement rules, policy nuance and real-world usage patterns become custodians of ‘truth.’ They curate the inputs, correct them when reality changes and ensure AI systems don’t scale bad assumptions at machine speed.

This isn’t hygiene anymore. It’s foundational work.

Client conversations become more human, not less

As AI accelerates analysis and insight, client-facing roles (internal and external) change too – and arguably improve.

Instead of spending time assembling information, humans spend more time explaining it. Stress testing it. Translating it into business language. Advising on trade-offs AI can’t see.

AI handles the mechanics. People carry the conversation.

That creates a more consultative dynamic: fewer reactive interactions, more proactive guidance and more trust built through context and empathy, something no model genuinely replicates.

Designing workflows becomes a human craft

One of the most misunderstood ideas about AI is that value comes from answers. In reality, the real impact comes from workflows.

Deciding where automation helps and where it harms. Understanding how decisions move across teams. Knowing where risk sits. Anticipating how regulators and auditors will view outcomes.

These are deeply human judgements, grounded in experience rather than data alone. The firms pulling ahead aren’t waiting for perfect tools, they’re redesigning processes now, with humans firmly in the loop.

The most valuable skill: knowing what not to automate

Perhaps the most important takeaway from 2025 is this: AI is a powerful accelerator but it’s also extremely agreeable.

It rarely says to the user, ‘that’s a bad idea’. It doesn’t know when context has shifted. It can’t feel when something is off.

The smartest teams are learning not just how to use AI, but where not to trust it blindly. They test. They challenge. They experiment, and they kill what doesn’t work.

That discipline – curiosity paired with judgement – is becoming one of the most valuable capabilities in the function.

So where does that leave us?

AI isn’t hollowing out market data teams. It’s sharpening them. It’s removing the tedious mechanics, not the thinking. It’s amplifying expertise, not replacing it. And it’s pushing human roles into places where impact, trust and judgement actually matter.

The real question for 2026 isn’t ‘Which jobs disappear?’

It’s this: Which new kinds of expertise will define the best market data management teams?

Thanks for taking the time to read Market Data Matters! Don't forget to subscribe if you haven't yet.

Subscribe for the LinkedIn newsletter
