dc.contributor.supervisor: Kelefouras, Vasilios
dc.contributor.author: Rahman, Md Shidur
dc.contributor.other: Faculty of Science and Engineering
dc.date.accessioned: 2024-11-14T09:49:02Z
dc.date.available: 2024-11-14T09:49:02Z
dc.date.issued: 2024
dc.identifier: 10698869
dc.identifier.uri: https://pearl.plymouth.ac.uk/handle/10026.1/22611
dc.description.abstract:

In the scientific exploration of Quantum Chromodynamics (QCD), the theory governing the strong interaction among quarks and gluons, large-scale numerical simulations are performed within the framework of lattice gauge theories. Lattice Gauge Theory (LGT) simulations formulate gauge field theories on a space-time lattice. HiRep is a simulation suite designed for running lattice simulations on high-performance computing platforms. It is flexible enough to study a wide range of strongly interacting systems, particularly those pertinent to novel physics investigations at CERN's Large Hadron Collider (LHC). However, improving the execution time of HiRep is a challenging and non-trivial task, and even marginal improvements can have a significant impact, paving the way to new discoveries in particle physics. A detailed study, analysis, and profiling of the HiRep application revealed that the implementation of the Dirac operator is one of the most computationally intensive routines and the main performance bottleneck. Consequently, this routine was optimized for CPU-based distributed-memory hardware platforms. The main performance inefficiencies include communication overhead due to extensive data exchanges between MPI processes, workload imbalances in OpenMP regions, inefficient data reuse of lattice sites, and ineffective auto-vectorization. To address these, both algorithmic and hardware-dependent optimization strategies are employed: efficient hybrid parallelization (using both the MPI and OpenMP parallel programming frameworks), optimization of OpenMP parallelism through loop collapsing, memory access pattern optimization, and vectorization (using both AVX2 and the Clang compiler's vector intrinsics). Based on experimental results obtained from two distinct High-Performance Computing (HPC) platforms, the proposed optimizations boost the performance of HiRep, achieving an overall speedup of up to 1.80× compared to the baseline MPI version.
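
As an illustration of the hybrid MPI+OpenMP and AVX2 approach summarised in the abstract, the following minimal C sketch applies a toy site-wise update to a rank-local sub-lattice. All names and sizes (update_sites, LOCAL_T, LOCAL_X, SITE_DOUBLES) and the kernel itself are hypothetical and are not taken from HiRep or its Dirac operator; the sketch only shows the general pattern of OpenMP loop collapsing inside an MPI rank combined with AVX2 FMA intrinsics.

/*
 * Illustrative sketch only: a toy site-wise update in the hybrid
 * MPI+OpenMP + AVX2 style described in the abstract. The data layout,
 * names and kernel are hypothetical, NOT HiRep's Dirac operator.
 */
#include <mpi.h>
#include <omp.h>
#include <immintrin.h>
#include <stdlib.h>
#include <stdio.h>

#define LOCAL_T      8      /* local lattice extent in time (per rank)        */
#define LOCAL_X      8      /* local lattice extent in space                  */
#define SITE_DOUBLES 24     /* doubles per site (a spinor-like blob of data)  */

/* y[site] += a * x[site], vectorized with AVX2 FMA over each site's data. */
static void update_sites(double *restrict y, const double *restrict x, double a)
{
    const __m256d va = _mm256_set1_pd(a);

    /* Collapse the two site loops so OpenMP balances work over T*X sites. */
    #pragma omp parallel for collapse(2) schedule(static)
    for (int t = 0; t < LOCAL_T; ++t) {
        for (int xs = 0; xs < LOCAL_X; ++xs) {
            const size_t base = ((size_t)t * LOCAL_X + xs) * SITE_DOUBLES;
            for (int k = 0; k < SITE_DOUBLES; k += 4) {  /* 4 doubles per AVX2 register */
                __m256d vy = _mm256_loadu_pd(y + base + k);
                __m256d vx = _mm256_loadu_pd(x + base + k);
                vy = _mm256_fmadd_pd(va, vx, vy);        /* vy += a * vx */
                _mm256_storeu_pd(y + base + k, vy);
            }
        }
    }
}

int main(int argc, char **argv)
{
    /* MPI provides the distributed-memory decomposition: each rank owns
       its local sub-lattice and threads over it with OpenMP. */
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const size_t n = (size_t)LOCAL_T * LOCAL_X * SITE_DOUBLES;
    double *x = malloc(n * sizeof *x);
    double *y = malloc(n * sizeof *y);
    for (size_t i = 0; i < n; ++i) { x[i] = 1.0; y[i] = 2.0; }

    update_sites(y, x, 0.5);   /* every entry becomes 2.0 + 0.5 * 1.0 = 2.5 */

    /* A halo exchange with neighbouring ranks would go here in a real code. */
    if (rank == 0)
        printf("y[0] = %.2f (expected 2.50)\n", y[0]);

    free(x); free(y);
    MPI_Finalize();
    return 0;
}

Built along the lines of mpicc -O3 -fopenmp -mavx2 -mfma sketch.c and run with mpirun together with OMP_NUM_THREADS set per rank, the sketch exercises the same MPI-between-processes / OpenMP-within-process split that the thesis optimizes.
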
dc.language.iso: en
dc.publisher: University of Plymouth
dc.subject: Lattice simulation, Dirac operator, Performance optimization, Hybrid programming (MPI+OpenMP), Memory access patterns, Vectorization
dc.subject.classification: PhD
dc.title: Improving the Performance of HiRep Lattice Simulations Software by Exploiting the CPU Hardware Architecture Details and Algorithm Characteristics
dc.type: Thesis
plymouth.version: publishable
dc.identifier.doi: http://dx.doi.org/10.24382/5244
dc.rights.embargoperiod: No embargo
dc.type.qualification: Doctorate
rioxxterms.version: NA
plymouth.orcid_id: https://orcid.org/0000-0001-5192-4527

