By Neja Šamec

EESSI has been recognized with an HPCwire Award!

We are thrilled to share that EESSI (European Environment for Scientific Software Installations) has received the HPCwire Readers’ Choice Award for Best HPC Programming Tool or Technology! This prestigious recognition celebrates the groundbreaking work of EESSI and MultiXscale, showcasing their significant impact on the HPC community. A heartfelt thank you to everyone who contributed their dedication and expertise to push the boundaries of scientific software installations. This award is a true testament to what we can achieve together! Thank you very much to EuroHPC JU for its ongoing support of MultiXscale and EESSI, and thanks to everyone who voted for EESSI! Here’s to continued innovation and excellence in HPC. More information is available here.

MultiXscale CECAM Webinar Video Now Available!

We’re excited to announce that the recording of our recent MultiXscale CECAM webinar, “Supporting the Development of Multiscale Methods via the European Environment for Scientific Software Installations (EESSI)”, is now available to view! This session dives into the innovative aspects of the MultiXscale CoE, covering key research applications, methodologies, and collaborative efforts that drive our work. Missed the live session? The full webinar is available on our YouTube channel.

Exascale Day: Inspiring Future Scientists

To celebrate Exascale Day, the MultiXscale CoE, in partnership with NCC Slovenia and SLING, opened the doors of the National Institute of Chemistry (NIC) to students from local primary and high schools. The event gave young students a unique chance to explore the world of supercomputers and understand how these powerful machines drive scientific breakthroughs. The day began with a short presentation on the significance of exascale computing in science, highlighting its role in solving complex problems in areas such as climate research, drug discovery, and advanced simulations. Following this, students toured the NIC facility, where they saw the impressive infrastructure behind high-performance computing. By showcasing the potential of exascale computing, the event aimed to spark curiosity and inspire the next generation of scientists and engineers. This collaboration between MultiXscale, NIC, NCC Slovenia, and SLING demonstrated the importance of education and outreach in shaping the future of computing.

Pre-exascale Workloads of DPD Fluid Simulations Using OBMD on the EuroHPC Vega System – Achieving Another Milestone in MultiXscale

We are excited to announce the successful completion of Milestone 5 in the MultiXscale project: “WP4 Pre-exascale workload executed on EuroHPC architecture”. This marks a significant achievement in our ongoing efforts to push the boundaries of computational fluid dynamics. In this milestone, we performed a comprehensive strong-scalability study of a DPD fluid in equilibrium using the open boundary molecular dynamics (OBMD) plugin, and compared it to simulations without OBMD, where periodic boundary conditions were applied.

One of the standout results from our study was that the inclusion of OBMD did not impact the strong scalability of the LAMMPS code, except for a constant prefactor. In other words, while OBMD introduces a small fixed computational overhead, it does not affect the efficiency or the scaling behavior of the code as we increase the computational resources. We observed excellent strong scalability up to at least 80 A100 GPUs, which represents about one-third of the EuroHPC Vega GPU partition. To put this into perspective, each A100 GPU delivers approximately 10 TFlops in double precision and 20 TFlops in single precision, so the 80-GPU workload we tested ran on hardware with between 0.75 and 1.5 Petaflops of raw compute capability, highlighting the immense computational power being utilized. Such scalability is crucial for large-scale simulations, as it ensures our methods remain efficient even when deployed on advanced supercomputing infrastructures.

The strong scalability of our simulations is vital for researchers aiming to model complex fluid dynamics at large scales. By confirming that OBMD scales effectively alongside LAMMPS, we can confidently apply this method to larger, more detailed simulations without worrying about performance degradation. This opens up new possibilities for highly accurate simulations in fields such as materials science, biophysics, and chemical engineering. With Milestone 5 now complete, we are moving closer to our overarching goal of enabling multi-scale simulations that leverage the power of modern supercomputing resources. This achievement highlights the robustness and efficiency of our OBMD approach, and we are excited to continue exploring its applications in even more complex scenarios.
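For readers who want to see what the “constant prefactor” claim means in numbers, here is a minimal sketch of the bookkeeping behind a strong-scaling comparison. All timing values are hypothetical placeholders, not measurements from the milestone study; only the rounded per-GPU peak figures (about 10 TFlops FP64 and 20 TFlops FP32 per A100) come from the text above.

```python
# Strong-scaling bookkeeping with hypothetical wall-clock times (seconds)
# for a fixed-size DPD workload; placeholder numbers, not milestone data.
gpus   = [10, 20, 40, 80]
t_pbc  = [100.0, 51.0, 26.0, 13.5]   # periodic boundary conditions only
t_obmd = [115.0, 58.5, 29.9, 15.5]   # with OBMD: same trend, constant prefactor

for i, n in enumerate(gpus):
    # Strong-scaling efficiency relative to the smallest run: E = T0*N0 / (T*N)
    eff_pbc  = (t_pbc[0]  * gpus[0]) / (t_pbc[i]  * n)
    eff_obmd = (t_obmd[0] * gpus[0]) / (t_obmd[i] * n)
    ratio    = t_obmd[i] / t_pbc[i]  # roughly constant => fixed overhead only
    print(f"{n:3d} GPUs: eff. PBC {eff_pbc:6.1%}, OBMD {eff_obmd:6.1%}, "
          f"OBMD/PBC time ratio {ratio:.2f}")

# Aggregate raw peak compute at 80 GPUs, using the rounded per-GPU figures:
print(f"FP64 peak: {80 * 10 / 1000:.2f} PFlops, "
      f"FP32 peak: {80 * 20 / 1000:.2f} PFlops")
```

With timings like these, the OBMD/PBC ratio stays near 1.15 at every GPU count while the efficiency curves coincide, which is exactly the signature of a constant prefactor rather than a scaling loss.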

MultiXscale – CECAM webinar: Supporting the Development of Multiscale Methods via the European Environment for Scientific Software Installations (EESSI)

17 October 2024 | 09:00–13:00 CEST

The goal of the MultiXscale EuroHPC Centre of Excellence is to enable the simulation of hydrodynamics at different length scales, from atomistic to continuum models, on large-scale HPC resources like those provided by EuroHPC systems. It will do this via three scientific showcases.

This webinar provides an introduction to EESSI and demonstrates how it supports the development of the key MultiXscale application codes: LAMMPS, waLBerla, and ESPResSo. Discover how EESSI accelerates scientific software installations and development, enabling cutting-edge research in multiscale modeling across various scientific domains. Perfect for researchers, developers, and engineers looking to enhance their software efficiency. This webinar is a joint effort between the MultiXscale EuroHPC CoE and CECAM.

Moderators: Matej Praprotnik (NIC), Sara Bonella (CECAM)
Zoom link: https://epfl.zoom.us/s/63031772254

Program:
09:00 – 09:15 Welcome message and introduction (Matej Praprotnik, National Institute of Chemistry, and Sara Bonella, CECAM)
09:15 – 09:45 Introduction to EESSI (Kenneth Hoste, Ghent University)
09:45 – 10:20 Improving the Scalability of Energy Materials Simulations in ESPResSo (Jean-Noël Grad, University of Stuttgart)
10:20 – 10:25 EESSI CI/CD services for ESPResSo and pyMBE (Alan O'Cais, University of Barcelona)
10:25 – 10:40 Q&A
10:40 – 11:00 Coffee break
11:00 – 11:20 Digital twin for ultrasound through OBMD plugin for LAMMPS (Tilen Potisk, National Institute of Chemistry)
11:20 – 11:40 Performance Portability and Scalability of Codes: Kokkos and ALL (Rodrigo Bartolomeu, Jülich Supercomputing Centre)
11:40 – 12:05 Usage of waLBerla for simulation of turbulent flows (Matteo Zanfrognini, LEONARDO)
12:05 – 12:10 Supporting waLBerla applications in EESSI (Alan O'Cais, University of Barcelona)
12:10 – 12:30 Mesoscopic simulations of full supercapacitors using pystencils in EESSI (Céline Merlet, University of Toulouse)
12:30 – 13:00 Q&A

Leveraging EESSI for SKA Radio Astronomy Data on Global SRCnet Infrastructure

In collaboration with the SKA project, we demonstrated the successful use of the European Environment for Scientific Software Installations (EESSI) to run radio astronomy analyses on the globally distributed SRCnet infrastructure. The SKA project faces an immense challenge: it must process and analyze an estimated 700 PB of data each year while operating across a globally distributed infrastructure. To handle this massive amount of data effectively, it is crucial that the right software is delivered to the correct locations with optimal performance. By deploying software across multiple SKA regional centres, including those in the Netherlands, Japan, Korea, and Canada, we showcased how EESSI enables seamless and efficient data processing. This proof of concept highlighted the flexibility of EESSI across a variety of systems, such as HPC, Cloud, and Kubernetes, meeting the complex requirements of the SKA’s high-performance data analysis needs. Specifically, we deployed through EESSI various pieces of software that are normally used as part of a radio astronomy analysis pipeline (AOFlagger, Casacore, IDG, EveryBeam, DP3, and WSClean). This allowed the SKA regional centres to run this pipeline on any node of their distributed infrastructure without needing to download complete containers first. EESSI’s capability to optimize software for various CPU models and to reduce network traffic and startup latency proved invaluable; for certain use cases, it has been shown to deliver performance improvements of up to 30%. While EESSI may not be a one-size-fits-all solution for SKA, its key technologies can play an important role in helping to meet these demands. By adopting and integrating select components of EESSI, SKA can improve the efficiency and performance of its software stack.
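To give a concrete flavor of what “no container download” means in practice, here is a minimal, hypothetical sketch. EESSI publishes its software stack through CernVM-FS under /cvmfs/software.eessi.io, so a node lazily fetches only the files a pipeline step actually touches; initializing the environment also selects a build optimized for the local CPU. The module name below (WSClean) is illustrative and unverified, as are the surrounding details.

```python
# Hypothetical sketch: run one radio-astronomy pipeline step on a node where
# the EESSI CernVM-FS repository is mounted. Software is streamed on demand
# from /cvmfs rather than shipped inside a container image.
import os
import subprocess

EESSI_INIT = "/cvmfs/software.eessi.io/versions/2023.06/init/bash"

if not os.path.exists(EESSI_INIT):
    raise SystemExit("EESSI repository is not mounted on this node")

# Source the EESSI init script (which picks a CPU-optimized stack), load a
# pipeline tool, and confirm it resolves; 'WSClean' is a placeholder module
# name, not a verified one.
subprocess.run(
    ["bash", "-c",
     f"source {EESSI_INIT} && module load WSClean && command -v wsclean"],
    check=True,
)
```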

ESPResSo Summer School, “Simulating soft matter across scales”, October 7-11, 2024, Stuttgart, Germany

We invite all interested to attend the ESPResSo summer school “Simulating soft matter across scales” on October 7-11, 2024, at the University of Stuttgart, Germany. The school will focus on coarse-grained and lattice-based simulation methods to model soft matter systems at mesoscopic length and time scales. We will simulate coarse-grained ionic liquids in electrolytic capacitors, coarse-grained liquids with machine-learned effective potentials, polymer diffusion, hydrodynamic interactions via the lattice-Boltzmann method, and electrokinetics and catalysis with diffusion-advection-reaction solvers. Lectures will provide an introduction to the physics and simulation model building, as well as an overview of the necessary simulation algorithms to resolve physical processes at different time scales. During the afternoons, students will practice running their own simulations in hands-on sessions using ESPResSo and waLBerla. Time will also be dedicated to research talks and poster sessions. Attendance at the summer school is free of charge. To register, go to https://www.cecam.org/workshop-details/1324 and submit a short motivation and CV. You can submit a poster abstract until September 1, 2024. Feel free to forward this announcement to interested colleagues. A flyer is available at: https://members.cecam.org/storage/workshop_files/flyer-1715854439.pdf
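For a taste of the hands-on sessions, here is a minimal ESPResSo sketch of a coarse-grained Lennard-Jones fluid thermalized with a Langevin thermostat. All parameter values are illustrative and are not taken from the school programme.

```python
# Minimal coarse-grained fluid in ESPResSo: random LJ particles, energy
# minimization to remove overlaps, then Langevin dynamics at kT = 1.
import numpy as np
import espressomd

system = espressomd.System(box_l=[10.0, 10.0, 10.0])
system.time_step = 0.01
system.cell_system.skin = 0.4

# Lennard-Jones interaction between particles of type 0
system.non_bonded_inter[0, 0].lennard_jones.set_params(
    epsilon=1.0, sigma=1.0, cutoff=2.5, shift="auto")

# Place 200 particles at random positions in the box
system.part.add(pos=np.random.random((200, 3)) * system.box_l)

# Steepest-descent minimization to relax overlapping particles
system.integrator.set_steepest_descent(
    f_max=0, gamma=0.1, max_displacement=0.05)
system.integrator.run(100)

# Switch to velocity-Verlet dynamics with a Langevin thermostat
system.integrator.set_vv()
system.thermostat.set_langevin(kT=1.0, gamma=1.0, seed=42)
system.integrator.run(1000)

print("Non-bonded energy:", system.analysis.energy()["non_bonded"])
```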

All-Hands Meeting in Slovakia, 2024

The All-Hands Meeting held in Slovakia from April 22-24, 2024, was a resounding success, bringing together members from CASTIEL 2, EuroCC 2, and the EuroHPC CoE consortia. This productive event fostered important discussions and set a clear path forward for our collaborative efforts. Discussions focused on enhancing collaboration to attract new users and to organize effective training sessions; participants also explored how to identify and promote mutual benefits for the CoEs’ target groups and how to disseminate knowledge across NCCs and CoEs.

Lhumos – The innovative e-learning platform

Given the increasing demand for coding skills in the technology industry, there is a crucial need to enhance code usage and provide training on various methods in High-Performance Computing (HPC) environments. Developed with the support of MaX, CECAM, MARVEL, MultiXscale, and DOME4.0, Lhumos (Learning HUb for MOdeling and Simulation) is a groundbreaking educational platform. It is specifically designed to support the upskilling of students, scientists, and industrial users in HPC applications within the materials science domain. Tailored for both early-career and advanced scientists, this e-learning platform consolidates a wealth of resources: videos, lectures, codes, tutorials, seminars, and exercises covering diverse subjects such as electronic structure calculations, molecular dynamics, high-performance computing, and code optimization. To introduce the Lhumos project and showcase its various sections and materials, an online presentation is scheduled for January 15 at 2 pm CET. You can join the webinar online at https://epfl.zoom.us/j/65060242920?pwd=S3Raa0tqTWNoNDRpT2hJM3k0ZHhldz09. Stay tuned for more upcoming events, workshops, and learning opportunities as Lhumos continues to bridge the gap between theory and application, and between academia and industry.
