Hamel & Shreya's LLM Evals Course: Lesson 1
Notes from the first lesson of Parlance Labs' Maven course on evaluating LLM applications - covering the Three Gulfs model and why evals are where most people get stuck.
Trying to blend two AI framework styles into one that's more practically useful.
I like bits of Breunig's and Mollick's AI frameworks, but neither quite works for me.
A systematic approach to analysing and improving large language model applications through error analysis.
Why evaluation-driven experimentation creates better roadmaps in AI products.
Understanding the combinatorial complexity problem that plagues many software systems, and how modern architectures solve it.