Operationalizing AI: MLOps x LLMOps

15 April 2026, 11:00 - 12:30

About the webinar

In this MLOps and LLMOps webinar, we’ll walk through the entire AI lifecycle – from idea and experimentation through production deployment and continuous monitoring – highlighting how AI differs from traditional software (data-driven, non-linear, and sometimes unpredictable even when “done right”). You’ll learn the main deployment patterns (batch/offline, real-time/online, and common cloud-based patterns) and the key trade-offs around latency, scaling, and operational reliability.

We’ll then connect MLOps and LLMOps in a practical way: versioning data, models, and prompts; reproducibility; CI/CD; and testing strategies for probabilistic systems.

Who is the webinar for?

The webinar is aimed at data scientists, ML engineers, software engineers, and AI engineers who want a clear, production-focused view of how to run ML and LLM solutions end-to-end. It’s also well suited to those with no experience building and deploying AI models who are curious about AI/ML/LLM Ops.

Key takeaways for participants:

  • Key differences between AI and traditional software

  • How these differences translate to model deployment

  • What MLOps and LLMOps are and how they differ

  • Different model deployment strategies

Speaker bio:

Murilo Kuniyoshi Suzart Cunha (https://www.linkedin.com/in/murilo-cunha/)

Murilo is a machine learning engineer specializing in productionizing models and applying AI Ops best practices, with a focus on the evolving landscape of LLMOps. He takes a pragmatic approach to machine learning, ensuring AI initiatives deliver tangible ROI. An experienced international conference speaker and open source supporter, Murilo is also the host of the Monkey Patching Podcast.

Event details

Date & Time

15 April 2026
11:00 - 12:30
Format
Online