Alea
Jo Van Eyck · 21m

When worlds collide: software engineering meets AI engineering

TL;DR

  • The answer to “learn coding or AI engineering?” is emphatically both — Jo Van Eyck frames modern builders as needing fluency in two “computers”: deterministic software engineering on one side, and stochastic LLM-based systems on the other.

  • Classic software skills still matter, but the value has shifted upward from syntax to judgment — he argues that memorizing refactor shortcuts matters less now, while system design, code quality taste, testing, Git workflows, delivery discipline, and security are still core.

  • AI engineering is its own real discipline, not just prompting — Jo calls out context engineering, harness engineering, evals, RAG, MCP, and multi-agent topologies as the new stack you need to understand if frontier models like Claude Opus are your “operating system.”

  • Tomorrow’s winning engineers will be generalists who can cross the boundary between both worlds — his blunt advice is not to get stuck debating Angular vs. React, because hybrid builders will “beat the living daylight out of today’s specialists.”

  • His real projects are already 50/50 software and AI — the Obsidian semantic-related-notes plugin mixes Python, embeddings, ChromaDB, and data pipelines with Claude-powered workflows, while his personal newsfeed aggregator combines TypeScript integrations with LLM-based prioritization and summarization.

  • Even vibe coding needs old-school engineering discipline — in his newsfeed app, the first thing he did was block Claude from reading .env files to avoid API-key exfiltration, then added specs, unit tests, plugin boundaries, and versioned Claude plan files alongside the code.

The Breakdown

Two Computers, Not One Career Choice

Jo opens with the question a lot of people are circling: should you still learn to code, or should you just go all-in on AI engineering? His answer is “yes, all of the above,” and he uses a metaphor from Philip Carter — LLMs as “weird computers” — to frame the moment. Traditional software runs on deterministic von Neumann machines; LLM-based systems behave like a second kind of computer entirely, one that’s probabilistic, harder to reason about, and impossible to treat like normal code.

Why Software Engineering Still Has Teeth

He’s clear that the left-hand world — system design, code quality, version control, testing, delivery, security — is still “super relevant.” What’s dropped in value is the low-level prestige of knowing every coding shortcut by heart; what still compounds is design taste, knowing good code from bad code, and shipping without merge hell or regressions. It’s a subtle but important repositioning: less worship of syntax, more respect for engineering judgment.

AI Engineering Is a New Stack, Not a Buzzword

On the right-hand side, Jo groups everything above model training into “AI engineering”: building on top of frontier models like Claude Opus or “ChatGPT 5.4 or whatever we’re at today.” Here the important concepts are context engineering, harnesses, evals, RAG, MCP, and increasingly multi-agent setups with coordinators, sub-agents, and peer networks. His point is that evals in this world feel a lot like test automation from the old one — similar intent, very different failure modes.
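To make the “evals feel like test automation” comparison concrete, here’s a minimal sketch of what an eval harness looks like. Everything in it is illustrative — `call_model` is a stub standing in for a real LLM API call, and the cases are made up — but the shape is the point: instead of pass/fail assertions on deterministic code paths, you grade a suite of prompts and track the score over time.

```python
# Minimal eval-harness sketch. `call_model` is a placeholder for a real
# LLM call (e.g. to Claude); a real harness would hit an API here.

def call_model(prompt: str) -> str:
    # Stubbed model so the sketch runs offline.
    return "Paris" if "capital of France" in prompt else "unknown"

# Unlike unit tests, eval cases are graded in aggregate: the same prompt
# can yield different outputs run to run, so you score the whole suite.
EVAL_CASES = [
    {"prompt": "What is the capital of France?", "expected": "Paris"},
    {"prompt": "What is the capital of Atlantis?", "expected": "unknown"},
]

def run_evals(cases) -> float:
    """Return the fraction of cases whose output contains the expected answer."""
    passed = sum(
        1 for case in cases
        if case["expected"].lower() in call_model(case["prompt"]).lower()
    )
    return passed / len(cases)

print(f"eval pass rate: {run_evals(EVAL_CASES):.0%}")
```

The failure modes Jo mentions show up exactly here: a unit test that passes keeps passing, while an eval score drifts whenever the model, prompt, or context changes.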

From Single-Agent Helpers to Multi-Agent Maturity

He points to Lada Kesler’s Augmented Coding Patterns as his favorite starting point, especially for getting from zero to a solid single-agent workflow. But he says the field has already moved, and cites Addy Osmani’s AI Code Con talk discussing Steve Yegge’s coding maturity ladder as a better map for higher levels — especially level seven and beyond, where multiple agents run in parallel and orchestration becomes the game. If you’re already deep in one camp, his advice is simple: go explore the other one now.

Why Hybrid Software Is Already the Default

Jo says the apps he’s building are now roughly “50/50” between conventional code and agentic components. Even if your product is mostly deterministic, your tooling increasingly lives in the AI world, which means you need to understand things like evals just to work effectively. That’s why he thinks this isn’t an abstract future trend — software itself is becoming a hybrid medium.

Project One: An Obsidian Plugin That Thinks Semantically

His first example is a small Obsidian “related notes” tab that behaves differently from Ctrl+F: it matches on meaning rather than exact words. Under the hood, it’s a very hybrid system — Python scripts, embeddings, ChromaDB, syncing flows, and software architecture from the classic world, plus Claude-assisted workflows layered on top. The memorable use case is asking Claude to scan his vault for sources and draft a YouTube post about “the clashing of two worlds.”
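The retrieval idea behind that plugin — match by meaning, not exact words — can be sketched in a few lines. The real system uses learned embeddings stored in ChromaDB; here a bag-of-words vector and cosine similarity stand in for the embedding model, purely to show the shape of “find the most related notes”:

```python
# Toy "related notes" retrieval. A real embedding model returns dense
# vectors; word counts are a crude stand-in that still shows the idea.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: word counts instead of a model-generated vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical vault contents.
notes = {
    "gardening.md": "pruning tomato plants in summer",
    "ai.md": "llm agents and context engineering",
    "evals.md": "testing llm agents with automated evals",
}

def related(query: str, k: int = 1) -> list:
    """Rank notes by similarity to the query and return the top k names."""
    scores = {name: cosine(embed(query), embed(body)) for name, body in notes.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(related("context engineering for agents"))
```

Swap the stand-in `embed` for a real embedding call and the dictionary for a ChromaDB collection, and you have the plugin’s core loop: embed every note once, embed the current note, query for nearest neighbors.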

Project Two: A Personal AI Newsfeed for Information Overload

The second project, built last week, is a personal feed aggregator pulling from YouTube history, Spotify podcasts, Obsidian notes, and Raindrop bookmarks into a daily or weekly digest — “like the TLDR newsletter, but for me personally.” The plumbing is straight software engineering: APIs, Git history, plugin architecture, secure key handling. The intelligence layer is pure AI engineering: taking all those JSON sources, prioritizing what matters, generating summaries, and even suggesting writing ideas or future YouTube topics.
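The division of labor in that pipeline can be sketched as follows. Plain software engineering merges the JSON feeds into one list; the AI layer decides what surfaces. The feed names, fields, and `score_item` heuristic below are all illustrative — in the real app the scoring would be an LLM call rating each item against personal interests:

```python
# Digest pipeline sketch: deterministic plumbing + a (stubbed) LLM filter.

def score_item(item: dict) -> int:
    # Placeholder for an LLM call rating relevance 1-10 against the
    # user's interests; faked here with a keyword heuristic.
    return 9 if "agent" in item["title"].lower() else 3

# Hypothetical normalized feeds (the real app pulls YouTube, Spotify,
# Obsidian, and Raindrop data via their APIs).
feeds = {
    "youtube": [{"title": "Multi-agent orchestration deep dive"}],
    "raindrop": [{"title": "Angular vs React in 2025"}],
    "obsidian": [{"title": "Notes on evals for agents"}],
}

def build_digest(feeds: dict, threshold: int = 5) -> list:
    """Flatten all feeds, keep high-scoring items, return digest lines."""
    items = [
        {"source": source, **item}
        for source, entries in feeds.items()
        for item in entries
    ]
    kept = [i for i in items if score_item(i) >= threshold]
    kept.sort(key=score_item, reverse=True)
    return [f"[{i['source']}] {i['title']}" for i in kept]

for line in build_digest(feeds):
    print(line)
```

The summarization and “suggest writing ideas” steps would be further LLM calls over the kept items — same pattern, different prompt.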

How He Actually Vibe-Coded It

The most practical bit is his build process. He started by making sure Claude could not read his API keys — because yes, it can read .env files unless you explicitly blacklist them — then dictated a markdown spec via voice transcription, including both the problem and pieces of the intended architecture. From there he iterated with Claude, added unit tests for the TypeScript parts, split each integration into its own skill, used Anthropic’s skill creator to help shape those files, and even configured Claude Code to persist its plans into the repo so they could be versioned and recovered after crashes.
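For reference, the “block Claude from the keys” step maps onto Claude Code’s permission rules, which live in a project’s `.claude/settings.json`. The fragment below is a sketch of that idea, not Jo’s actual config, and the exact rule syntax may vary by Claude Code version:

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./.env.*)"
    ]
  }
}
```

With rules like these in place, the agent can still read and edit the rest of the repo, but requests to open the environment files are refused — which is what keeps an API key from leaking into context in the first place.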