BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260421T090513Z
LOCATION:Plenary Room (Bldg. 6 - 001)
DTSTART;TZID=Europe/Stockholm:20260629T193200
DTEND;TZID=Europe/Stockholm:20260629T193300
UID:submissions.pasc-conference.org_PASC26_sess124_pos110@linklings.com
SUMMARY:Evaluating Open-Source Infrastructure-As-Code Virtual Clusters aga
 inst SuperMUC-NG Phase 1
DESCRIPTION:Prasanth Babu Ganta, Elmira Birang, Plamen Dobrev, Birkan Emre
 m, Matteo Foglieni, and Ferdinand Jamitzky (Leibniz Supercomputing Centre)
 \n\nTraditional high-performance computing (tHPC) infrastructure requires 
 weeks to months for hardware procurement, network configuration and softwa
 re integration, which limits agility for short-term projects and hampers r
 eproducibility through non-standardized configurations. Infrastructure-as-
 Code (IaC) promises rapid, version-controlled cluster deployment, yet prod
 uction-grade open-source IaC frameworks for communication-intensive worklo
 ads remain underexplored. Prior work reports 5–10% single-node virtualizat
 ion overhead but highlights multi-node scaling challenges dominated by net
 work latency.\n\nWe benchmark virtual HPC (vHPC) clusters deployed via Mag
 ic Castle within Germany’s InHPC-DE project, focusing on open-source IaC r
 ather than proprietary offerings such as AWS ParallelCluster or Azure Cycl
 eCloud. An IaC-based vHPC cluster is compared against SuperMUC-NG Phase
  1, a traditional bare-metal HPC system, using four biophysical/chemica
 l simul
 ation codes: Quantum ESPRESSO, GROMACS, LAMMPS, and CP2K.\n\nAt single-nod
 e and low core counts, vHPC performance closely matches tHPC for all appli
 cations, indicating minimal computational overhead. For communication-inte
 nsive workloads (GROMACS, CP2K), strong-scaling efficiency degrades signif
 icantly beyond one node due to limited network bandwidth and high latency,
  far below the 100 Gbit/s Omni-Path bandwidth and sub-microsecond latenc
 y of tHPC systems. Our results show that IaC-based vHPC is production-re
 ady for workloads with moderate communication requirements and is immedi
 ately applicable to burst computing, education, benchmarking, developmen
 t workflows, and federated multi-site infrastructure.\n\n
END:VEVENT
END:VCALENDAR
