I design automation systems and governance frameworks for small teams deploying AI agents. The kind of work that's hard to find guidance on, because most of the existing guidance was written for 500-person enterprises.
What I Do
Automation systems using n8n, Claude, Notion, and custom integrations. Designed for real volume, documented so your team can maintain them.
Recent: a lead enrichment pipeline processing 500+ prospects/day.
Systems that find, score, and route leads. Competitive intelligence. Outbound personalization. The parts of sales operations that benefit most from automation.
Governance frameworks for teams deploying AI agents without a dedicated governance function. Taking standards like OWASP, NIST, and the EU AI Act and making them usable for smaller teams.
Diagrams, system maps, and visual artifacts that make complex AI architectures legible. Useful for internal alignment, investor decks, or documentation.
How It Works
We look at your current workflow together, figure out where automation would help most, and scope a first project. 30 minutes. You get a written assessment regardless of next steps.
I build the system, test it with real data, and hand it off with documentation. Check-ins along the way so nothing is a surprise. Usually 2 to 4 weeks.
Systems always need tuning after launch. 30 days of post-launch support comes standard, with an optional retainer if you want ongoing help.
Governance Frameworks
AI governance policies degrade as models update, threats evolve, and teams change. Governance Half-Life measures the rate of that degradation, so you know which policies need review now and which can wait.
The decay curve is driven by model update frequency, threat landscape velocity, and regulatory change rate. "Your prompt injection policy has a half-life of 6 weeks" is the kind of statement nobody else is making.
Maps to EU AI Act Art. 9, NIST AI RMF "Measure"
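The half-life idea can be made concrete with a standard exponential-decay curve. This is a minimal sketch, not the production model: the driver weights (0.5/0.3/0.2) and the idea of combining the three drivers into a single rate are illustrative assumptions.

```python
from math import exp, log

def half_life_weeks(model_update_freq: float,
                    threat_velocity: float,
                    reg_change_rate: float) -> float:
    """Combine the three decay drivers into an effective half-life.

    Each driver is a rate of change (events per week); faster change
    shortens the half-life. Weights are placeholder assumptions.
    """
    combined_rate = (0.5 * model_update_freq
                     + 0.3 * threat_velocity
                     + 0.2 * reg_change_rate)
    return 1.0 / combined_rate if combined_rate > 0 else float("inf")

def policy_confidence(weeks_since_review: float,
                      half_life: float) -> float:
    """Confidence that a policy still reflects reality.

    Decays exponentially; at weeks_since_review == half_life,
    confidence has dropped to 50%.
    """
    decay_rate = log(2) / half_life
    return exp(-decay_rate * weeks_since_review)
```

A policy with a 6-week half-life that was last reviewed 6 weeks ago scores 0.5: due for review now rather than at the next annual cycle.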
How fast can your organization actually absorb a new governance requirement? Most teams find out the hard way. Governance Metabolism scores your time-to-policy, cross-functional coordination speed, and incident response latency.
An org with high metabolism adapts to new regulations in weeks. Low metabolism means you are still interpreting requirements when enforcement hits.
Fills the organizational context gap identified by iEnable's Layer 7 analysis
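One way to turn those three latencies into a single score: normalize each against a benchmark and take a weighted blend. The benchmarks (14 days, 5 days, 24 hours) and weights below are placeholder assumptions for illustration, not calibrated values.

```python
def component_score(actual: float, benchmark: float) -> float:
    """100 when you meet the benchmark, shrinking as latency grows."""
    if actual <= 0:
        return 100.0
    return 100.0 * min(1.0, benchmark / actual)

def metabolism_score(time_to_policy_days: float,
                     coordination_days: float,
                     incident_response_hours: float) -> float:
    """Weighted blend of the three organizational latencies.

    Benchmarks and weights are illustrative; a real assessment
    would calibrate them against peer organizations.
    """
    return round(
        0.4 * component_score(time_to_policy_days, 14)
        + 0.3 * component_score(coordination_days, 5)
        + 0.3 * component_score(incident_response_hours, 24),
        1,
    )
```

An org that takes twice as long as benchmark on every dimension scores 50: it will likely still be interpreting a regulation when enforcement begins.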
Agent permissions should not persist indefinitely. Permission Decay scores each permission by data sensitivity, blast radius, and time since last review. Permissions with high decay scores get flagged for revocation or re-scoping.
The math behind what most frameworks only describe in principle.
Maps to OWASP ASI03, EU AI Act Art. 9
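The scoring logic sketched below multiplies intrinsic risk (sensitivity times blast radius) by staleness (time since last review, saturating at 90 days). The 1-to-5 scales, the 90-day saturation point, and the flagging threshold are all illustrative assumptions, not the production formula.

```python
REVIEW_THRESHOLD = 40.0  # hypothetical cutoff for flagging a permission

def permission_decay_score(sensitivity: int,
                           blast_radius: int,
                           days_since_review: int) -> float:
    """Higher = riskier. sensitivity and blast_radius on a 1-5 scale.

    Risk is normalized to 0-1, then scaled by how stale the last
    review is; staleness saturates at 90 days.
    """
    risk = (sensitivity * blast_radius) / 25.0
    staleness = min(1.0, days_since_review / 90.0)
    return round(100.0 * risk * staleness, 1)

def needs_review(score: float) -> bool:
    """Flag permissions whose decay score exceeds the threshold."""
    return score >= REVIEW_THRESHOLD
```

A maximally sensitive, maximally wide permission unreviewed for 90 days scores 100 and gets flagged; a narrow, low-sensitivity one barely registers even at the same age.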
Five rungs of human-agent delegation, from "verify everything" to "full delegation." Unlike organizational maturity models, the Trust Ladder measures where you are in your relationship with a specific agent or system. A personal diagnostic for practitioners deciding how much autonomy to grant.
Complements CSA Agentic Trust Framework maturity levels
Writing
EU AI Act enforcement begins August 2, 2026
April 2026
Why your AI policies are already stale, and how to measure exactly how fast they degrade.
May 2026
What each agentic risk actually means for your n8n workflow. The enterprise standard, translated for teams that don't have a CISO.
June 2026
The metric that predicts EU AI Act readiness. Measures how fast your org can absorb new governance.
About
I spent a few years at Accenture doing data governance for financial services clients. It taught me how governance works at scale, and how often it doesn't.
At Chicago Booth, I studied finance and strategic management while building AI automation systems on the side. I kept noticing the same gap: the teams shipping agents fastest had the least guidance on how to govern them. And most of the frameworks out there assumed you had a platform engineering team and a dedicated risk function.
So I started building for the teams that don't. Loomiq takes standards like OWASP, NIST, and the EU AI Act and makes them usable for smaller organizations. I also build the automation systems themselves, because it's hard to write good governance without understanding how the systems actually behave.
"The best governance is invisible to the agent and legible to the human. If your governance framework can't explain what happened and why, it's not governance. It's hope."
Contact
If any of this is relevant to what you're working on, I'd like to hear about it. I'm happy to talk through what you're building, whether or not it turns into a project.
Ways we can work together: