BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260421T090515Z
LOCATION:Bldg. 6 - Room 004
DTSTART;TZID=Europe/Stockholm:20260629T170000
DTEND;TZID=Europe/Stockholm:20260629T173000
UID:submissions.pasc-conference.org_PASC26_sess159_msa147@linklings.com
SUMMARY:LLM Infrastructure on HPC: Workflows, Constraints, and Solutions
DESCRIPTION:Ahmad Alhineidi (University of Bern, Data Science Lab)\n\nThe 
 integration of Large Language Models (LLMs) into academic research is seve
 rely constrained by data privacy regulations. Researchers handling sensiti
 ve, GDPR-protected data cannot utilize commercial cloud APIs, necessitatin
 g the local deployment of open-weight LLMs on High-Performance Computing (
 HPC) clusters. However, the steep technical learning curve of HPC environm
 ents—requiring Slurm orchestration and container management—frequently ali
 enates non-technical domain experts. \n\nWe present Text Lab, a production
 -ready, ephemeral AI platform deployed via Open OnDemand that bridges this
  usability gap. Operating entirely within self-cleaning, isolated Apptaine
 r containers, Text Lab provides an intuitive web interface for researchers
  to interact with advanced LLMs without writing code. \n\nOur "Zero-Footpr
 int" architecture leverages shared, read-only model caches to efficiently 
 distribute massive LLM weights across GPU nodes, preventing storage bloat 
 while minimizing startup latency. The system features a sandboxed Model Co
 ntext Protocol (MCP) deployment, allowing local LLMs to autonomously gener
 ate and execute Python data visualization code on private, unredacted data
 sets. \n\nBy abstracting the complexities of Slurm job scheduling and GPU 
 memory allocation, Text Lab democratizes access to secure generative AI, d
 emonstrating that institutions can deliver scalable LLM capabilities to re
 searchers while maintaining absolute data sovereignty.\n\nDomain: Chemistr
 y and Materials, Climate, Weather, and Earth Sciences, Applied Social Scie
 nces and Humanities, Engineering, Life Sciences, Physics, Computational Me
 thods and Applied Mathematics\n\nSession Chairs: Tobias Hodel (University 
 of Bern, Switzerland) and Sukanya Nath (University of Bern)\n\n
END:VEVENT
END:VCALENDAR
