
Resource-Efficient AI Model Parallelisation on LUMI Supercomputer

22 January 2026, 11:00 - 12:30

[Mimer Webinar] Resource-Efficient AI Model Parallelisation on LUMI Supercomputer

Speaker: Dr. Vijeta Sharma

About the Webinar:

This webinar explores how to harness the full potential of the LUMI supercomputer for large-scale AI model training through efficient utilisation of HPC resources. Participants will learn how thoughtful design of neural network architectures and optimal use of parallelisation techniques—such as model, data, and tensor parallelisation—can significantly improve performance and resource efficiency.

The session will demonstrate how frameworks like PyTorch and TensorFlow can be leveraged to distribute training workloads effectively across multiple GPUs and nodes on LUMI. Attendees will gain practical insights into balancing computational loads, minimising communication overhead, and achieving scalability for advanced AI workloads in an HPC environment.
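The core idea behind data parallelism, one of the techniques the session covers, can be illustrated without any framework: each worker computes gradients on its own shard of the batch, and the per-worker gradients are averaged (the "all-reduce" step) before a single shared weight update. The sketch below is purely illustrative and uses a toy one-parameter model; real training on LUMI would rely on a framework facility such as PyTorch's `torch.nn.parallel.DistributedDataParallel` rather than this hand-rolled loop.

```python
def local_gradient(w, shard):
    # Gradient of the mean-squared-error loss mean((w*x - y)^2)
    # with respect to w, computed over this worker's shard only.
    n = len(shard)
    return sum(2 * (w * x - y) * x for x, y in shard) / n

def data_parallel_step(w, batch, num_workers, lr=0.01):
    # Split the batch evenly across workers (data parallelism).
    shards = [batch[i::num_workers] for i in range(num_workers)]
    grads = [local_gradient(w, s) for s in shards]
    # "All-reduce": average the per-worker gradients so every
    # replica applies the same update and stays in sync.
    avg_grad = sum(grads) / num_workers
    return w - lr * avg_grad

# Toy data generated from y = 3x; training should drive w toward 3.
batch = [(x, 3.0 * x) for x in range(1, 9)]
w = 0.0
for _ in range(50):
    w = data_parallel_step(w, batch, num_workers=4)
print(round(w, 3))
```

In a real multi-GPU job the shards live on different devices and the gradient average is performed by a collective communication call, which is exactly the communication overhead the webinar discusses minimising.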

Who is the Webinar For:

This webinar is designed for AI practitioners, computational scientists, and HPC users who aim to train large-scale machine learning models efficiently on modern supercomputing infrastructures. It is ideal for professionals seeking to optimise their deep learning workflows by leveraging advanced parallelisation techniques and maximising GPU performance on systems like LUMI. Participants with a background in AI, data analytics, or scientific computing who wish to scale their models and improve training efficiency in high-performance environments will particularly benefit from this session.

Key Takeaways:

  • Understand the fundamentals of model, data, and tensor parallelisation.
  • Learn strategies for efficient AI training on HPC systems like LUMI.
  • Explore practical examples using PyTorch and TensorFlow.
  • Gain insights into optimising GPU utilisation for scalable AI workloads.

This event has passed.


Event details

Date & Time
22 January 2026, 11:00 - 12:30

Format
Online