Project Milestone Report

Date: 12/04/2023

Authors: Ethan Meitz and Nick Hattrup
CMU 15-618 Fall 2023


Project Schedule

Results to Date

Our project implements Coulombic interactions (energies and forces) on the GPU in Julia. So far we have completed CPU reference implementations in Python of Smooth Particle Mesh Ewald (SPME), traditional Ewald summation, and a direct sum with naive for loops. Each method was validated against data from LAMMPS to verify correctness before porting the code to the GPU.

We have not yet begun the GPU implementation because the SPME method took longer than expected to get working; however, since it has the best computational complexity of the existing methods, the extra time was worthwhile. We have identified the three main kernels needed to port our code to the GPU: the first interpolates point charges onto a mesh grid, the second computes the real-space energy and force, and the third computes the reciprocal-space energy and force. Implementing the code on the CPU first also revealed numerous ways to simplify and optimize it, such as look-up tables and in-place computation.

We also created a Julia package (not yet public), LongRangeInteractions.jl, that lets us rapidly test and swap between the various methods for calculating Coulombic interactions. This package will also serve as a framework for future work so that our code can be incorporated into molecular dynamics packages in Julia such as Molly.jl.
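To make the kernel boundaries concrete, below is a minimal CPU sketch of the second kernel, the real-space Ewald energy and force sum, for a cubic box of side L using the minimum-image convention. The function name, argument layout, and splitting parameter α are illustrative placeholders rather than the LongRangeInteractions.jl API, and the GPU version will parallelize the pair loop rather than run it serially as shown here.

```julia
using SpecialFunctions: erfc

# Real-space Ewald contribution for point charges in a cubic box of side L
# (Gaussian units, k_e = 1). `positions` and `forces` are vectors of
# NTuple{3,Float64}; `α` is the Ewald splitting parameter, `r_cut` the cutoff.
function real_space_ewald!(forces, positions, charges, L, α, r_cut)
    E = 0.0
    N = length(charges)
    for i in 1:N-1, j in i+1:N
        # Minimum-image displacement between particles i and j
        Δ = positions[i] .- positions[j]
        Δ = Δ .- L .* round.(Δ ./ L)
        r = sqrt(sum(abs2, Δ))
        r > r_cut && continue

        qq = charges[i] * charges[j]
        E += qq * erfc(α * r) / r

        # |F| = q_i q_j [ erfc(αr)/r² + 2α exp(-α²r²)/(√π r) ], directed along Δ
        fmag = qq * (erfc(α * r) / r^2 + 2α * exp(-(α * r)^2) / (sqrt(π) * r))
        f = (fmag / r) .* Δ
        forces[i] = forces[i] .+ f
        forces[j] = forces[j] .- f
    end
    return E
end
```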

Progress and Final Deliverables

The serial implementation of Smooth Particle Mesh Ewald proved harder than initially anticipated. The existing resources for this method were severely lacking, and we had to re-derive most of the expressions to implement the code correctly. At this point the multi-GPU implementation is a stretch goal, but we will still aim for it since the jump from a single GPU to multiple GPUs should not be large. That said, there may be unforeseen immaturity in the Julia ecosystem around CUDA-aware MPI. Our goals for the poster session are now:

Our deliverables will be:

Issues

We foresee no major issues for the single-GPU implementation. There could be issues with CUDA-aware MPI in Julia, but multi-GPU support remains a "nice to have" feature, so we will not worry about it for now.
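As a sanity check before committing to that path, the sketch below shows how we would probe for CUDA-aware MPI from Julia with MPI.jl and CUDA.jl and perform an in-place reduction directly on a GPU buffer, with a host-staging fallback. The buffer size and reduction operation are placeholders for illustration, not part of our package.

```julia
using MPI, CUDA

MPI.Init()
comm = MPI.COMM_WORLD

buf = CUDA.rand(Float64, 1024)   # per-rank partial results living on the GPU
if MPI.has_cuda()
    # CUDA-aware MPI: device buffers can be passed to MPI calls directly
    MPI.Allreduce!(buf, +, comm)
else
    # Fallback: stage the data through host memory
    host = Array(buf)
    MPI.Allreduce!(host, +, comm)
    copyto!(buf, host)
end

MPI.Finalize()
```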