VMblog Expert Interview: Jessica Reeves on Native AI Explainability and Event Sourcing
Jessica Reeves discusses Axoniq's AI explainability platform in this VMblog Expert Interview. Why foundation matters more than speed.
In a recent interview, Jessica Reeves, CEO of Axoniq, discusses how the company is evolving from its event sourcing roots to become an AI explainability platform for enterprises. What emerges is a fundamentally different approach to how companies should think about AI infrastructure.
What Makes Axoniq Different: The End-to-End Platform
When asked what distinguishes Axoniq from other enterprise AI solutions, Jessica is clear about the foundation: "What makes us different is that we're truly an end-to-end platform, and it really starts at that foundation. The infrastructure layer. Many of the things you see out there today are wrappers around AI or bolt-ons, etc. What we do is really think about the whole AI journey from beginning to end and start at that foundation."
She explains how this plays out practically: "Whether you're building new, we can ingest and refactor your code, or you can prompt in what you're trying to build. The beauty of it is that it's all built on the infrastructure of event sourcing. That's how we do it, which gives you the benefit of a single source of truth: a lineage of exactly what your application has done, when it did it, what decisions were made, and why. That gives you a lot of insight into what's going on in your application, not only at a technical level but also at a business level."
The key distinction: "We're definitely end-to-end and it's native. All the code you produce on our platform is built around this paradigm of event sourcing and its associated infrastructure. It's not just bolted on."
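Event sourcing, the paradigm Jessica references, is worth making concrete. The sketch below is a generic Python illustration of the idea, not Axoniq's actual API: instead of mutating state in place, every change is recorded as an immutable event with who did it and when, and current state is derived by replaying that log. The `Event` and `Account` names are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Event:
    """An immutable record of something that happened, with full lineage."""
    name: str          # what happened
    actor: str         # who pushed it: a user or a system
    data: dict
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class Account:
    """State is never stored directly; it is derived from the event log."""
    def __init__(self):
        self.log: list[Event] = []   # the single source of truth

    def record(self, name: str, actor: str, **data) -> None:
        # Append-only: events are added, never edited or deleted.
        self.log.append(Event(name, actor, data))

    @property
    def balance(self) -> int:
        # Current state = replay of the full history, in sequence.
        total = 0
        for e in self.log:
            if e.name == "deposited":
                total += e.data["amount"]
            elif e.name == "withdrawn":
                total -= e.data["amount"]
        return total

acct = Account()
acct.record("deposited", actor="user:alice", amount=100)
acct.record("withdrawn", actor="system:fee-engine", amount=15)
print(acct.balance)                  # 85
print([e.name for e in acct.log])    # the audit trail survives in full
```

Because the log, not the balance, is the source of truth, the "what, when, who, and why" Jessica describes is available for free: compliance reads it as an audit trail, and AI workflows can consume it as context.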
Redefining Explainability Beyond the Model
When asked about the platform's native explainability layer and what that actually means, Jessica pushes back on the conventional definition: "AI explainability is a new kind of market category. A lot of times they're talking about the models themselves, but they forget about all of the other infrastructure around it. And that really matters. The model is just one component of that."
She describes what real explainability looks like: "Explainability is natively showcasing and surfacing exactly what happened, when it happened, in what sequence, and who pushed it, whether a system or a user, so you can correlate why it happened. It really shows the full history, not just the state of today."
This matters across multiple stakeholders: "From a compliance perspective, it gives you that single source of truth and that audit trail. From an AI perspective, it's feeding that data and context into AI and agentic workflows to give better answers based on the full history. And explainability is also important for developers and practitioners, particularly when it comes to replaying what happened in your system. When there's a gnarly bug and you're trying to figure out forensically what happened, you can replay it. It hits all of the stakeholders in that regard."
The Core Problem: Patching AI Onto Legacy Systems
When asked about why enterprises struggle to integrate AI into legacy systems, Jessica identifies the fundamental issue: "I think what's holding people back is they're trying to patch. They're building on top of a foundation that was not built for agentic workflows and AI. And I think that realization is going to come, whether you like it or not—it's coming."
Her solution: "On our platform, you can prompt in what you want the original code to be, or just feed in that original code, and then it will refactor based on that in this AI-ready infrastructure way of event sourcing to ensure that it has that context and explainability built in natively. It's not a bolt-on. It's not building on top of really what is broken today, but it's transforming it. And you can do it little by little or all at once."
Event Sourcing as Foundation, Not Reinvention
When asked how the new explainability layer extends event sourcing, Jessica clarifies the company's positioning: "I would say that we haven't really said 'oh, now we're a new AI company.' We've been solving these problems all along. The market has almost caught up to what we have been solving."

Where It Works: Regulated and Complex Industries
When asked why regulated and complex industries are early adopters, Jessica points to specific use cases: "We see a lot of financial industries. From a trading perspective, real-time performance and scalability really matter, as do credit decisions and other operational pieces of the financial ecosystem. What matters most is that you have to get it right, both in terms of regulations and legislation. Having a single source of truth that your applications rely on for their decisions is a perfect fit for more complex industries."
She also highlights e-commerce: "We see that as a use case across many e-commerce companies as well. That's handling hundreds of thousands of what we call events, which are system decisions, per second. So it delivers that scalability for some of the largest companies in the world."
The AI Transformation Ahead
On the broader market and urgency, Jessica is direct: "You're going to see folks using AI in a way that we haven't seen before. Non-technical people."
She shares a personal example: "I built an application on our platform just the other day. And I don't have a computer science degree."
But there's a clock ticking: "The longer you wait on AI readiness, the more complex and expensive it will be. It's coming. It is coming. In some regards, it's here right now."
Her final point: "I think I'd leave with this: this is a massive transformation and you're going to have to be ready for it."
Jessica Reeves and the Axoniq team are offering free AI credits for the next 30 days. Check out the platform at axoniq.io.


