▲ AI Safety · Platform Engineering · Social Impact

Seize the Future.
Build Together.

Engineer, researcher, and founder at the intersection of AI safety, platform engineering, and global impact. Transitioning from agentic AI development into the most pressing problem of our time: AI safety.

Learn more →
10+ Years Engineering & Technical Leadership
4,000+ People Reached via Social Enterprise
Ashoka & SOCAP Fellow & Programme Nominee
3x Accelerator Winner IMPULSA · Alterna · Booking Booster
Founder/Partner Urtisan (CNC & Manufacture) · Splitfire (Energy Labs)

Research & Areas of Interest

Safety & Alignment Lab

Unified research addressing ethical implications, safety concerns, policy frameworks, and community impact of AI systems. A holistic approach to responsible AI through integrated tracks.

EthicsGovernancePolicy

Energy Transition Lab

Investigating how AI can accelerate the shift to sustainable energy systems. Research into grid optimisation, demand forecasting, and climate-aware infrastructure planning.

EnergyClimateInfrastructure

Creative AI Lab

Exploring the intersection of generative systems and human creativity — from co-authorship to novel art forms. Building tools that augment, not replace, creative expression.

Generative AIToolsArt

Developer Tools & Frameworks

Creating practical, open-source tools and frameworks for responsible AI development. Democratising access to ethical AI tooling through transparent, collaborative builds.

Open SourceToolingSDK

Tools

Open-source tools and demos being built at KairosLabs — at the intersection of AI safety engineering and ML security.

In Progress

LLM Safety Eval Harness

A harness engineering approach to LLM safety evaluation — wrapping models in reproducible, composable test scenarios rather than one-off scripts. Covers prompt injection resistance, refusal consistency, output sanitisation, and instruction-following under adversarial conditions. Harnesses are first-class artifacts: versioned, shareable, and independent of the model under test. Built on UK AISI's Inspect framework.

In Progress

ML Security Demo Harness

Harness-based demonstrations of ML security attack surfaces: adversarial examples, data poisoning, model extraction, and membership inference. Each attack scenario is encapsulated as a standalone harness — reproducible, self-documenting, and runnable against any compatible model. Designed as both a security education tool and a template for building your own security evals.

Planned

Agentic Safety Monitor

Lightweight observability layer for agentic AI systems — tracking tool call sequences, detecting anomalous behaviour patterns, and flagging potential safety violations in autonomous pipelines. Informed by production agentic AI deployment at The Economist and the emerging literature on agentic failure modes.

Planned

RAG Security Harness

Security testing harness for RAG pipelines — systematically probing retrieval poisoning, indirect prompt injection via documents, and output exfiltration vectors. Harnesses are scoped per attack class, composable into full pipeline audits, and designed to run in CI alongside functional tests. Extends the AI Engineering course RAG work with a dedicated adversarial layer.

About

KairosLabs banner

Engineer, founder/partner, and researcher deliberately transitioning toward AI safety — the most pressing problem of our time.

I'm a platform engineer and technical leader currently serving as Co-Technical Lead for an Agentic AI project at The Economist, where I've gained firsthand exposure to the safety challenges of deploying autonomous AI systems in production. That work has deepened both my understanding of the problem and my conviction that my background can contribute meaningfully to solving it.

Before AI, I founded and grew social enterprises in Guatemala that reached over 4,000 people — work recognised with an Ashoka Fellow nomination, a SOCAP Programme nomination, and three accelerator wins across IMPULSA National, Alterna, and Booking Booster. That work instilled a discipline of evaluating decisions by expected impact rather than convention. I'm now applying that same framework to the question of where a platform engineer with 10+ years of DevSecOps depth, statistics training, and a linguistics background can have the greatest effect on AI safety.

I'm actively upskilling in ML and exploring whether my comparative advantage lies in AI safety engineering — contributing immediately to infrastructure and security — or in the longer investment of AI safety research via DPhil or intensive fellowship. KairosLabs is the platform through which I'm building, researching, and connecting in public.

Focus Areas

Existential Risk Reduction
Risks from Agentic AI
AI Security & Robustness
AI Governance & Coordination
Platform Engineering
DevSecOps / SRE
Agentic AI Systems
ML Security
Statistics / Data Science
Technical Leadership
Social Entrepreneurship
5 Languages

Experience

Jul 2025 — Now

Technical Co-Lead, Agentic AI — IndexAI

The Economist

Co-leading technical delivery of IndexAI — The Economist's agentic AI product built on AWS Bedrock AgentCore. Directly responsible for the safety, reliability, and engineering architecture of an autonomous AI system in production. Firsthand exposure to the alignment and safety challenges that arise when deploying agentic systems at scale.

2025 — Now

Engineering Lead, Platform Engineering

The Economist Group

Engineering leadership across The Economist and EIU platforms, including agentic AI development. Gained firsthand exposure to the safety challenges and failure modes of deploying autonomous AI systems in production — deepening both understanding of the problem and conviction that this background can contribute meaningfully to solving it.

2022 — 2025

Engineering Lead, DevSecOps & SRE

The Economist / EIU

Led DevSecOps enablement and site reliability engineering across The Economist Group. Built security-first infrastructure underpinning global media operations, developing practices and a security-first perspective that now informs AI safety thinking.

2020 — 2022

Site Reliability Engineer

Economist Intelligence Unit

SRE and software engineering on the Viewpoint Big Data project. Foundation in production reliability, observability, and large-scale data infrastructure. Also completed School of Code bootcamp (2020).

2016 — 2020

Head of Social Enterprise & Board Member

Niños de Guatemala

Grew and ran multiple social enterprises as a sustainable income source facilitating education for 525 children and ~4,000 community members. Ashoka Fellow Nominee 2019, SOCAP Nominee, ALTERNA Seed Capital Winner, IMPULSA National Winner 2018, Tour Operator of the Year 2017, 2018 & 2019 (Luxury Travel Guide), Antigua10x Business Incubator Finalist. Pitch presenter at FLII 2019 (LatAm Forum for Impact Investment, ~1,000 attendees).

2014 — 2016

Strategy & Growth Lead / Digital Product Specialist

LIFULL Connect (formerly Trovit)

Post-acquisition growth leadership across APAC & EMEA for a platform spanning 250 sites, 63 countries, 300M ads/month and 180M visits/month. Led product rollout of Real Time Bidding technology; drove ~€36M p.a. revenue growth. First point of contact for 16 Country Managers.

2011 — 2014

Emerging Markets Lead / Content & Data Lead

Trovit

New market launches and business development across emerging markets at a Barcelona-based tech company. Organised Trovit Talks — monthly events for 200+ tech professionals with speakers from SeedRocket, 4 Founders Capital, and others.

Collaborations

Organisations, programs, and communities shaping the responsible AI landscape — and where KairosLabs connects.

Events

Nov 2025

Attended

AI Growth Summit

London · London, UK

Nov 2025

Attended

Gen AI Summit

London · London, UK

Nov 2025

Attended

AI Advantage Summit

London · London, UK

29–31 May 2026

Attending

EA Global: London 2026

InterContinental London — The O2 · London, UK

Resources

Curated reading, tools, and programmes for anyone building a career in AI safety.

Mission & Methodology

AI Safety Foundations

Problem Orientation

Short Timelines & Defense-in-Depth

Technical Alignment: Courses & Upskilling

  • ARENA ↗ ARENA

    Structured curriculum for ML safety research engineering — transformers, RL, interpretability, evals. The fastest path to technical safety contributions.

  • fast.ai ML Course ↗ fast.ai

    Practical deep learning from first principles. Highly regarded for intuition-building and hands-on implementation.

  • ML Safety Course (Hendrycks) ↗ CAIS

    Dan Hendrycks's course covering robustness, monitoring, alignment, and systemic safety. Purpose-built for safety researchers.

  • AGI Safety Course — DeepMind ↗ Google DeepMind

    DeepMind's public course on AGI safety, covering strategy and technical approaches.

  • Levelling Up in AI Safety Research Engineering ↗ LessWrong

    Practical guide to growing from software engineer to safety research engineer — skill gaps, projects, and pathways.

  • AI Safety Seminar — Boaz Barak ↗ Boaz Barak

    Seminar materials on AI safety and alignment from a theoretical computer science perspective.

Technical Alignment: Projects & Orgs

Bleeding Edge Research

Cyber / InfoSec

AI Policy & Strategy

International Coordination & Post-AGI

Niche Domains: Bio, Theory, Hardware & Sentience

Career, Fellowships & Funding

  • MATS Program ↗ MATS

    Machine Learning Alignment Theory Scholars — research fellowship pairing scholars with top safety mentors. Highest-signal pathway into technical alignment research.

  • AI Safety Camp ↗ AISC

    Project-based research sprint. Low commitment, high learning — one of the best cheap tests for research fit.

  • 80,000 Hours Fellowships Collection ↗ 80,000 Hours

    Airtable of fellowships across technical safety, governance, policy, and adjacent fields — maintained by 80k Hours.

  • EA Opportunities Board ↗ EA

    Internships, fellowships, and job opportunities across EA-aligned organisations.

  • 80,000 Hours Job Board ↗ 80,000 Hours

    Curated high-impact roles including AI safety engineering, research, policy, and operations.

  • Constellation Incubator ↗ Constellation

    Berkeley-based AI safety incubator providing office space, funding, and community for early-stage safety researchers.

  • Catalyze Impact ↗ Catalyze Impact

    AI safety entrepreneurship incubator for founders building safety-relevant companies and projects.

  • The AI Safety Research Fund ↗ AI Safety Fund

    Independent fund supporting AI safety researchers and projects. Also has donation and volunteer opportunities.

Researcher Advice

80k Hours Career Reviews

Podcasts: Must-Listen

Commentators & Newsletters

Let's build
together.

Open to AI safety engineering roles, research collaborations, fellowship conversations, and anyone thinking seriously about where technical talent can have the most impact on AI risk.

Get in touch →

Status

Open to AI Safety collaboration

Location

Oxfordshire, UK