LLM Evals Course Lesson 7: Interfaces for Human Review
Notes from lesson 7 of Hamel and Shreya's LLM evaluation course - interface design principles and strategic sampling.
How to set up a persistent Docker environment for AI coding tools without losing your authentication every time you restart the container.
Notes from lesson 6 of Hamel and Shreya's LLM evaluation course - debugging agentic systems, handling complex data modalities, and implementing CI/CD for production LLM applications.
Pearson FT's AI Demystified offers a gentle introduction for business leaders who want to understand how AI might impact their field.
Notes from lesson 2 of Hamel and Shreya's LLM evaluation course - covering error analysis, open and axial coding, and systematic approaches to understanding where AI systems fail.
Notes from the first lesson of Parlance Lab's Maven course on evaluating LLM applications - covering the Three Gulfs model and why eval is where most people get stuck.
Trying to blend two AI frameworks' styles into one that's more practically useful.
I like bits of Brunig's and Mollick's AI frameworks, but neither quite works for me.
A systematic approach to analysing and improving large language model applications through error analysis.