Coursera

Building Reliable LLM Systems

Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
Intermediate level

Recommended experience

2 weeks to complete
at 10 hours a week
Flexible schedule
Learn at your own pace

What you'll learn

  • Build scripts with lexical/semantic metrics to evaluate LLMs, diagnose hallucinations, and balance vector-search recall against latency.

  • Apply hypothesis testing, confidence intervals, and significance metrics to evaluate model accuracy and validate results from A/B experiments.

  • Utilize parameterized SQL and data manipulation to segment user logs, calculate retention, and securely retrieve large-scale datasets.

  • Analyze LLM performance gaps to prioritize technical fixes and implement remediation measures for production-level reliability.
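One of the skills listed above, parameterized SQL for retention analysis, can be sketched as follows. This is a minimal illustration: the `user_logs` schema, the sample rows, and the day-7 retention query are hypothetical, not the course's actual dataset. The `?` placeholders keep values out of the SQL string, which is what makes the query safe against injection.

```python
import sqlite3

# In-memory database with a hypothetical user_logs table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_logs (user_id INTEGER, event TEXT, day INTEGER)")
conn.executemany(
    "INSERT INTO user_logs VALUES (?, ?, ?)",
    [(1, "query", 1), (1, "query", 7), (2, "query", 1)],
)

# Day-7 retention for users active on day 1. The day numbers are passed
# as parameters, separately from the SQL text.
retention = conn.execute(
    """
    SELECT COUNT(DISTINCT l7.user_id) * 1.0 / COUNT(DISTINCT l1.user_id)
    FROM user_logs l1
    LEFT JOIN user_logs l7 ON l7.user_id = l1.user_id AND l7.day = ?
    WHERE l1.day = ?
    """,
    (7, 1),
).fetchone()[0]
```

Here one of the two day-1 users returns on day 7, so `retention` comes out to 0.5.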

Details to know

Shareable certificate

Add to your LinkedIn profile

Recently updated!

March 2026

Assessments

14 assignments¹

AI graded (see disclaimer)
Taught in English

See how employees at top companies are mastering in-demand skills

Logos of Petrobras, TATA, Danone, Capgemini, P&G, and L'Oréal

Build your subject-matter expertise

This course is part of the LLM Engineering That Works: Prompting, Tuning, and Retrieval Specialization
When you enroll in this course, you'll also be enrolled in this Specialization.
  • Learn new concepts from industry experts
  • Gain a foundational understanding of a subject or tool
  • Develop job-relevant skills with hands-on projects
  • Earn a shareable career certificate

There are 5 modules in this course

This module lays the groundwork for quantitative Large Language Model (LLM) evaluation. Learners will discover why relying on intuition to judge model performance is unsustainable and explore the foundational metrics used to create automated, objective evaluation systems. We will cover both lexical similarity metrics (like BLEU and ROUGE-L), which assess text structure, and semantic metrics (like cosine similarity), which capture meaning. By the end of this module, learners will have the conceptual understanding and practical code to build their first automated evaluation script.
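A minimal sketch of the two metric families this module covers: a lexical metric (ROUGE-L, computed from the longest common token subsequence) and a semantic-style cosine similarity. For simplicity the cosine here uses bag-of-words count vectors; a real evaluation script would use embedding vectors instead. Function names are illustrative, not the course's own code.

```python
import math
from collections import Counter

def rouge_l_f1(reference: str, candidate: str) -> float:
    """ROUGE-L F1: overlap based on the longest common token subsequence."""
    ref, cand = reference.split(), candidate.split()
    # Dynamic-programming table for LCS length.
    dp = [[0] * (len(cand) + 1) for _ in range(len(ref) + 1)]
    for i, r in enumerate(ref):
        for j, c in enumerate(cand):
            dp[i + 1][j + 1] = dp[i][j] + 1 if r == c else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words count vectors."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0
```

Identical texts score 1.0 on both metrics; texts with no shared tokens score 0.0, which is exactly the blind spot of lexical metrics that motivates adding a semantic metric over embeddings.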

What's included

8 videos, 3 readings, 3 assignments, 3 ungraded labs

When a production chatbot starts giving incorrect answers, how do you find the problem and fix it? This module equips AI practitioners, ML engineers, and data analysts with the essential skills for debugging production LLMs. Go beyond theory and learn the systematic, data-driven workflow that professionals use to solve the critical problem of AI hallucinations. You will be equipped to transition from merely observing AI failures to expertly diagnosing and resolving them.

What's included

5 videos, 3 readings, 3 assignments, 2 ungraded labs

When making high-stakes deployment decisions, a simple accuracy score is not enough. This module equips you with the statistical methods to rigorously validate LLM performance improvements. By the end of this module, you will be able to move beyond subjective "it seems better" evaluations to confidently state, "we can prove it's better," ensuring every deployment decision is backed by sound statistical evidence.
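A hedged sketch of the kind of statistical check this module teaches: a two-proportion z-test comparing the accuracy of two model variants on an A/B evaluation set. The counts in the usage note are hypothetical.

```python
import math

def two_proportion_z_test(success_a: int, n_a: int, success_b: int, n_b: int):
    """Return (z statistic, two-sided p-value) for H0: the two accuracies are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled proportion under the null hypothesis.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, if model A answers 780 of 1,000 prompts correctly and model B answers 830 of 1,000, the test returns a p-value well below 0.05, turning "it seems better" into a defensible claim.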

What's included

5 videos, 2 readings, 3 assignments, 3 ungraded labs

In the world of large-scale AI, slow queries and inefficient search can bring a system to its knees. This module provides the critical skills to prevent that, focusing on practical database and vector search optimization techniques. By the end of this module, you will be equipped to systematically analyze and optimize production retrieval systems, ensuring your AI applications are not only powerful but also fast and reliable.
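The recall-versus-latency trade-off this module examines can be measured with two small helpers, sketched below. The sketch assumes you already have two retrievers, an exact brute-force search and a faster approximate index; recall@k is the fraction of the exact top-k results that the approximate search preserves.

```python
import time

def recall_at_k(exact_ids: list, approx_ids: list, k: int) -> float:
    """Fraction of the exact top-k results also returned by the approximate search."""
    return len(set(exact_ids[:k]) & set(approx_ids[:k])) / k

def timed(search_fn, query, k: int):
    """Run one search and return (result ids, latency in milliseconds)."""
    start = time.perf_counter()
    ids = search_fn(query, k)
    return ids, (time.perf_counter() - start) * 1000
```

Running both retrievers over a held-out query set and plotting mean recall@k against mean latency makes the trade-off explicit: an approximate index that returns two of the exact top three scores recall@3 of about 0.67, and whether that is acceptable depends on the latency it buys back.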

What's included

4 videos, 3 readings, 4 assignments, 3 ungraded labs

In this module, you will conduct an end-to-end performance audit comparing two LLM variants using an A/B test dataset. You will implement a pipeline to calculate key performance metrics, including lexical and semantic similarity, and use statistical A/B testing to validate model improvements. The project culminates in a comprehensive report where you will correlate hallucination rates with retrieval logs and synthesize your findings into data-driven recommendations for stakeholders, guiding the decision for a production-level rollout in a customer support application.

What's included

2 readings, 1 assignment

Earn a career certificate

Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.

Instructor

Professionals from the Industry
211 Courses, 33,604 learners

Offered by

Coursera

Why people choose Coursera for their career

Felipe M.

Learner since 2018
"To be able to take courses at my own pace and rhythm has been an amazing experience. I can learn whenever it fits my schedule and mood."

Jennifer J.

Learner since 2020
"I directly applied the concepts and skills I learned from my courses to an exciting new project at work."

Larry W.

Learner since 2021
"When I need courses on topics that my university doesn't offer, Coursera is one of the best places to go."

Chaitanya A.

"Learning isn't just about being better at your job: it's so much more than that. Coursera allows me to learn without limits."
Coursera Plus

Open new doors with Coursera Plus

Unlimited access to 10,000+ world-class courses, hands-on projects, and job-ready certificate programs - all included in your subscription

Advance your career with an online degree

Earn a degree from world-class universities - 100% online

Join over 3,400 global companies that choose Coursera for Business

Upskill your employees to excel in the digital economy

Frequently asked questions

¹ Some assignments in this course are AI-graded. For these assignments, your data will be used in accordance with Coursera's Privacy Notice.