Interview: Pedro Alfama - Verdaio.ai EU Compliance
Compliance is becoming AI’s killer use case. Pedro Alfama built Verdaio.ai to make EU regulation actionable for every operator today.
Pedro Alfama and I have known each other for more than fifteen years. We met in a different life, the kind where “product” meant metal, distribution, training, and sales targets, and we kept crossing paths as both our careers drifted toward technology.
Pedro’s path makes sense once you zoom out. He’s done the hard yards in high-volume, real-world environments: Ford, IVECO, then into product roles where execution matters more than theory. The turning point, at least for this story, was his time at TLScontact: building technology products that sit right on top of highly sensitive data, documents, passports, biometrics, and that must work inside strict operational and regulatory constraints. That’s a brutal training ground. You either learn to build responsibly, or you learn what breaks first.
Fast-forward to today, and we’re suddenly operating in the same arena again: AI, regulation, and the messy reality of shipping products into Europe. Pedro has launched Verdaio.ai, a compliance intelligence platform designed for companies that don’t have a legal department on standby but still need to navigate the EU’s fast-expanding rulebook, across sustainability, privacy, AI, technology, and cybersecurity.
I tested Verdaio myself. What surprised me wasn’t just the breadth of tools, it was how “operational” the product feels. It doesn’t speak like a consultant’s slide deck. It behaves as a product leader built it: clear prompts, structured outputs, practical next steps, and an obvious bias toward getting teams unstuck quickly. That’s why I’m sharing this conversation.
Pedro isn’t positioning himself as a lawyer, and Verdaio isn’t giving legal advice. The value is different: it turns regulation into a set of concrete questions, artefacts, and decisions that teams can actually execute, before the procurement questionnaire lands, before the auditor asks, before the first real fine hits the market.
Pedro is the founder and product leader at Verdaio.ai.
1. Why is compliance becoming one of AI’s biggest real-world use cases? Why did you decide to build Verdaio?
Before I started building my own products, I worked on different projects, one of which was highly compliance-sensitive. I was a Product Manager for an Outsourced Visa Service Provider, a company that runs visa application centres on behalf of governments. My product involved collecting all documents submitted online by the user, as well as passport and fingerprint scans, and a photo taken at the Visa Application Centres. I find it difficult to imagine more sensitive data.
After that experience, I built some products and MVPs, but I always felt I lacked full control as AI moved so fast.
So I started doing more research and decided to build a product around compliance, because I realized how little I knew, how time-consuming it is, and how necessary it already is in 2026. What I want to explain is why I believed it was worth a product, especially for small and mid-sized companies. The current alternatives focus on only one area, are too expensive, or require serious integration.
Also, most current discussions focus on new AI models, tools, token efficiency, faster building, and so on. That seems to be all we need to know.
I wanted to add “making it compliant” to this scene.
A disclaimer up front: I’m not a lawyer, and I have never worked in governance or compliance. I’m a product builder and I focus on solutions.
What I have built is Verdaio, a product that makes it simple for a company to find where it falls short of European regulation. It is not legal advice. It sits between a company knowing nothing about the applicable rules and that same company paying thousands of euros to lawyers and consultants to find out. And when a lawyer or consultant is still needed afterwards, their work is usually cheaper, because the groundwork is already done.
When I looked at the European regulatory landscape - the AI Act, the Cyber Resilience Act, NIS2, DORA, and the post-Omnibus versions of CSRD and CS3D - I noticed something uncomfortable.
Big companies have legal teams, compliance officers, and consultancy retainers. They can absorb the cost of new regulation. Even big fines. Small and mid-sized companies, who are in scope of most of the same rules, mostly don’t have any of that. No compliance officer, no legal budget, no one whose job it is to read EUR-Lex on a Tuesday. They’re expected to comply anyway. A small company doesn’t need to build AI. But if they use some provider that does, there are already rules for that.
That’s the gap I built Verdaio to address. Not because I had a market study in hand - I didn’t - but because I couldn’t see how an SME like mine, in Portugal or anywhere in the world that does business in Europe, was meant to navigate, say, the AI Act, without spending money it doesn’t have on consultants. The work had to get cheaper or it wouldn’t get done. AI was the obvious lever to make it cheaper because compliance work is structured, text-heavy, jurisdiction-specific, and largely involves synthesis. Exactly what LLMs are good at.
The product itself took months to develop. But the sharpest test came when I pointed our own AI Act Readiness tool at Verdaio as a subject. The first offline pass came back around 45 out of 100, “Developing”, a humbling result for a compliance product.
Then came around fifteen days of fixing the gaps the tool had just surfaced. The most intense stretch was a five-day sprint writing every artefact the tool flagged as missing: classification doc, model card, risk register, technical documentation, AI literacy record, incident log, accuracy methodology, scope-exclusion notes, supplier DPA register. The rest of the fortnight went into rolling out completion logging across every tool, building a benchmark harness to make the outputs reproducible, hardening prompts against injection, tightening the model temperature to close output variance, and redoing the legal pack end to end. After the sprint we hit 85 out of 100, “Advanced”. Today, on a reproducible run, we sit at 92. I suspect some much larger companies, even with full legal teams, would run Verdaio against themselves and find gaps they hadn’t noticed either.
That’s the shape of the use case. A solo founder spent months building a compliance-focused product, then around fifteen days fixing the compliance gaps it surfaced, moving from “developing” to “advanced” on a concrete regulatory readiness scale. Five years ago that journey would have required a consultancy engagement and five-figure fees. Today it doesn’t have to. But “doable” is not “prompt a generic chatbot and call it done” - that would produce confident-sounding nonsense against regulation this dense. What made the work possible was a tool that already had the regulatory framework built in - the articles, the classification boundaries, the obligation grid, the scoring methodology - and someone whose job is to keep that framework current as the law moves. That is what Verdaio is, and it is why I could use my own product to audit itself. For the SMEs I built Verdaio for, the availability of that kind of specialised tooling is the difference between compliance being realistic and remaining a consultancy luxury they cannot afford.
2. Most companies use AI already. Why do so few understand their legal exposure?
I don’t have much hard evidence here. These are just patterns I’ve noticed while talking to people and exploring during development.
But I’ll give you a practical example. GDPR has been around for years. I suggest a simple test for anyone building products: Ask an LLM to conduct full research on how compliant your website is regarding GDPR.
It’s very common to find gaps when you least expect them.
So, if this happens under GDPR, with the EU AI Act being so new, I wouldn’t expect better results.
Another piece of hard evidence comes from the frontier models themselves: their chat products now make visible that a Human-in-the-Loop (HITL) is part of how they operate.
This change is due to regulation. Other businesses might also need to make changes, but I see little evidence they are aware.
However, the biggest evidence is that AI sometimes doesn’t arrive in companies labelled “AI”. It arrives as a feature inside a SaaS tool. Your sales team turns on an AI lead scorer. Your support team deploys a chatbot. Your dashboard starts generating “AI insights”. Few businesses procure and assess AI systems. They procure a CRM upgrade. So when regulation starts asking about the AI systems you operate, most companies don’t even know where their inventory is.
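One lightweight way to start the inventory Pedro describes is a simple structured record per AI-touched system. The sketch below is purely illustrative: the field names, the helper function, and the example entries are hypothetical, not Verdaio's schema or any regulatory requirement.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a minimal AI inventory (illustrative fields only)."""
    name: str                 # e.g. "CRM lead scorer"
    vendor: str               # who supplies the model or feature
    role: str                 # "provider" or "deployer" in AI Act terms
    affects_humans: bool      # does the output influence a decision about a person?
    documented: bool = False  # do we hold vendor docs / a DPA for it?

def undocumented_human_impacting(inventory: list[AISystemRecord]) -> list[str]:
    """Flag systems that touch human outcomes but have no paperwork on file."""
    return [s.name for s in inventory if s.affects_humans and not s.documented]

inventory = [
    AISystemRecord("CRM lead scorer", "SaaS vendor A", "deployer", affects_humans=True),
    AISystemRecord("Support chatbot", "SaaS vendor B", "deployer", affects_humans=True, documented=True),
    AISystemRecord("Dashboard insights", "SaaS vendor C", "deployer", affects_humans=False),
]

print(undocumented_human_impacting(inventory))  # → ['CRM lead scorer']
```

Even a spreadsheet with these five columns gets a team past the "we don't know where our AI is" stage that Pedro identifies as the biggest gap.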
Another personal experience serves as evidence. I studied machine learning and I focus on EU regulations daily for my project. Even so, I had to re-read the Provider/Deployer definitions in the AI Act several times to figure out which one Verdaio was. (Both, it turns out, for different tools.) If that’s me, I don’t blame a product manager at a mid-sized SaaS for not having figured it out yet.
And the last, which I suspect matters most, is that there’s no forcing function yet for most companies. The fines are theoretical until someone gets fined. What I’m starting to see change that is enterprise procurement. When you try to sell into a bank, an insurer, or a public authority, their procurement team is now asking hard questions about AI governance. That’s what pulls the issue from “someday” to “this week” for a lot of teams. At least that’s what’s happening in the few enterprise conversations I’ve had.
3. Can regulation become a growth advantage instead of just a cost?
Honestly, I think so, because the market’s focus is already shifting in that direction.
Two things shifted my view.
The first was watching what Omnibus I did to CSRD scope earlier this year. Thousands of SMEs got cut out of the mandatory reporting perimeter. Most saw that as relief. The sharper ones I’ve talked to saw it as an opening. They now voluntarily publish under the VSME standard, because their larger customers still have to report and need sustainability data from suppliers. Suddenly “voluntary” reporting becomes a commercial weapon. The supplier that hands over a clean VSME pack wins the contract. That’s not theory. It’s been a real pattern in how people describe their CSRD conversations to me.
The second was more personal. After I finished building Verdaio’s internal compliance pack, with all the documentation the regulation expects, I realised I had something I could show. When someone asks “how do you handle AI governance?”, I can send them a folder. The quality of the sales conversation changes immediately.
So my honest answer: I’ve seen hints that regulation can be an advantage, but it’s an advantage for the companies that treat their compliance work as evidence they show, not credentials they earn. Evidence beats credentials once the market matures. That’s my guess.
4. What breaks first in companies: the AI model or the governance around it?
This one I feel more confident about, because I lived it. Governance, almost every time.
AI products in production usually ship with unexamined defaults: what parameters are set, what gets logged, which outputs are benchmarked, and how reproducibility is proven. Any one of those is an audit problem later if it was never written down.
Verdaio had exactly this problem during development. No temperature policy. No reproducibility benchmark. No change log. No test harness. I fixed it in an afternoon: temperature pinned at 0 for benchmark runs and 0.3 in production. It has been stable since. The model wasn’t broken. The process around the model was.
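The fix Pedro describes can be sketched as a small reproducibility layer: pin the generation settings per context and log a fingerprint of them with every run. This is an illustrative sketch only; the profile names, the model identifier, and the logging shape are hypothetical, not Verdaio's actual code.

```python
import hashlib
import json
from datetime import datetime, timezone

# Pinned generation settings: 0 for benchmark runs, 0.3 in production,
# mirroring the temperature policy described above (values illustrative).
GENERATION_PROFILES = {
    "benchmark": {"model": "example-model-v1", "temperature": 0.0, "max_tokens": 1024},
    "production": {"model": "example-model-v1", "temperature": 0.3, "max_tokens": 1024},
}

def log_run(profile_name: str, prompt_version: str) -> dict:
    """Record the exact settings behind a model call so a run can be reproduced later."""
    profile = GENERATION_PROFILES[profile_name]
    config_hash = hashlib.sha256(
        json.dumps(profile, sort_keys=True).encode()
    ).hexdigest()[:12]
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "profile": profile_name,
        "prompt_version": prompt_version,
        "config_hash": config_hash,  # same settings → same hash → comparable runs
        **profile,
    }

entry = log_run("benchmark", prompt_version="v2.3")
print(entry["temperature"], entry["config_hash"])
```

The point of the hash is not security; it is that two runs with the same fingerprint are comparable, which is exactly the property an auditor (or a benchmark harness) needs.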
I tell the story because it’s the cleanest version of a pattern I think is everywhere. The model does what it was trained to do, roughly. The decisions around how you call it, what you log, how you version prompts, when you re-benchmark: that is where things silently rot. And when something goes wrong publicly, the company that can point to a written decision with a date on it is fine. The one that can’t is in real trouble.
The unglamorous version of the answer: most of the value of “good AI governance” is just having written down what you decided and why. That part isn’t hard. It’s just not done.
5. In five years, will every company need an AI compliance layer like they need accounting today?
To correctly address that question, we first need to answer: In five years, will every company be using AI? I would say it’s almost unavoidable at this point.
A lot could be said about reality five years from now, especially around AI. But I’m confident that “doing and using AI the right way” will be mandatory.
Let’s look at the regulation plans.
The enforcement calendar is crowded and moving. The AI Act’s high-risk rules are scheduled for August 2026 and August 2027, though a proposed Digital Omnibus could push those dates to December 2027 and August 2028 if adopted. The Cyber Resilience Act’s main obligations take effect at the end of 2027. The revised Product Liability Directive starts covering software and AI from December 2026. CS3D is transposed by mid-2028, applied mid-2029, with first disclosures in 2030. CSRD Wave 2 reporting arrives in 2028. PSD3 and the Payment Services Regulation are expected to apply around 2028. Any European company, or any company selling into Europe, will meet at least one of these in the next five years. Most will encounter several.
If AI is almost unavoidable, and the regulatory calendar is what I just described, then yes, every company will need some version of an AI compliance layer. Just as every company has accounting, bookkeeping, and payroll because those obligations didn’t become optional once they were written into law, AI governance is becoming a non-negotiable function for companies that make decisions about people. Hiring, pricing, credit scoring, content moderation, support routing, lead qualification. If your software affects a human outcome, regulators will increasingly want to know how.
What Verdaio Changed in 7 Days
Pedro ran the same EU AI Act Assessment twice, one week apart, using Verdaio itself as the subject.
Before (12 April 2026): “Progressing” — 65/100
The first report classified Verdaio’s system as Minimal Risk (Art. 6 not triggered) and described a company with “strong foundations” but several missing pieces. The gaps were practical and familiar to anyone shipping AI fast: incomplete risk management documentation, thin data/input governance notes, and weak human oversight / override mechanics. In short: nothing “broken” in the product — but the evidence layer was patchy.
After (19 April 2026): “Advanced” — 92/100
Seven days later, the picture changes completely. The second report classifies Verdaio as Limited Risk under Article 50 (because AI-generated content reaches natural persons) and shows an Advanced posture: risk management, technical documentation, logging, transparency disclosures, human oversight, and post-market monitoring are reported as in place.
What matters here isn’t the score as a trophy. It’s what the delta represents.
The real “upgrade” wasn’t the model, it was the governance wrapper
Across the two PDFs, you can see the shift from “we basically do this” to “we can prove this”. The improvement is driven by very operational moves:
Art. 50 transparency tightened: making AI disclosure clearer and positioned correctly in the user journey and outputs.
Risk work formalised: turning partial risk thinking into an actual risk register with owners, decisions, and review cadence.
GPAI supply-chain trail documented: model/version tracking and retaining upstream documentation (Anthropic + Amazon Bedrock) as a durable record.
Human oversight made explicit: simple mechanisms to flag issues, halt/withdraw, and reinforce “advisory-only” boundaries.
Monitoring and reproducibility treated as first-class: not “nice to have”, but part of how you defend the system over time.
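The "risk register with owners, decisions, and review cadence" from the list above can be captured in a very small structure. The sketch below is hypothetical: the fields, the example entry, and the 90-day cadence are illustrative choices, not Verdaio's register format or anything the AI Act prescribes.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RiskEntry:
    """One line of a minimal risk register: a named risk, an owner,
    a written decision with a date, and a review cadence (illustrative)."""
    risk: str
    owner: str
    decision: str
    decided_on: date
    review_every_days: int

    def next_review(self) -> date:
        """When this decision should be revisited."""
        return self.decided_on + timedelta(days=self.review_every_days)

    def overdue(self, today: date) -> bool:
        """True once the review date has passed without a new decision."""
        return today > self.next_review()

entry = RiskEntry(
    risk="Prompt injection via user-supplied documents",
    owner="founder",
    decision="Harden system prompts; strip instruction-like text from uploads",
    decided_on=date(2026, 4, 15),
    review_every_days=90,
)
print(entry.next_review())              # the decision date plus 90 days
print(entry.overdue(date(2026, 5, 1)))  # → False
```

The decisive properties are exactly the ones the article names: an owner, a dated decision, and a cadence, so that "we can prove this" is a lookup rather than an archaeology project.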
Why this matters (even if you’re not “an AI company”)
This is the part most teams miss: compliance is rarely blocked by a single catastrophic flaw. It’s blocked by missing artefacts and undocumented decisions.
And that’s exactly why compliance is emerging as one of AI’s most valuable real-world uses: it can convert dense regulation into structured work, and structured work into evidence. The “before” report shows what happens when a product is ahead of its paperwork. The “after” report shows what happens when the paperwork becomes part of the product.
If you sell into Europe — or sell to anyone who sells into Europe — this “evidence layer” will increasingly decide whether you pass procurement, shorten security review cycles, or get stalled for months.
The Before sample:
The After sample:
What I like about Pedro’s thinking is that it removes the drama from compliance without underestimating the risk.
A few ideas from this conversation will stick with me:
Compliance is becoming a mainstream AI use case not because it’s exciting, but because it’s structured, text-heavy, and full of checklists — exactly where specialised AI systems can compress cost and time.
Most organisations don’t understand their exposure because AI arrives sideways: via CRM features, support tools, analytics add-ons. If you don’t know your AI inventory, you can’t govern it.
Governance breaks before models do. Temperature policies, benchmarking, logging, versioning, and incident records — unglamorous, but decisive when scrutiny shows up.
Regulation can become a commercial weapon when compliance is treated as evidence you can show, not a certificate you claim. In procurement-heavy markets, that distinction changes conversations fast.
The practical takeaway is simple: the EU’s regulatory calendar is no longer “future”. It’s becoming an operating environment. The companies that win won’t be the ones with the best intentions; they’ll be the ones with the cleanest artefacts, the clearest accountability, and the fastest path from “we should” to “we did”.
That’s the bet Verdaio is making: not replacing lawyers, but making the first 80% of compliance work achievable for teams that would otherwise postpone it forever. And if you’re selling into regulated buyers, banks, insurers, public sector, enterprise procurement, “achievable” is often the difference between closing a deal and never making it past the questionnaire.


