

Prototyping some CUDA code

After a few false starts, and at a bit of a slowed pace due to other academic responsibilities, I have created some working CUDA code. The static class member is still causing some issues, but the workaround was to create a new array and overload the new and delete operators to let CUDA handle moving memory to and from the GPU. It is a definite step forward, but there are still several issues. GPU utilization sits quite low except for specific moments, and I think that is because a lot of the radiation calculations are still on the CPU. As you can see, the GPU does eventually catch up with the serial code; the divisions noted are in terms of the number of rays used for the two-dimensional radiative transfer calculations. The crossover point occurs somewhere near 1000 radial and 1000 azimuthal rays, which is not particularly useful in practice. I am pleased with the start, though, as we now have compiling and running GPU code. The next task is to reassess the order and structure...
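
As a rough illustration of the workaround described above, here is a minimal sketch of overloading new and delete so that objects land in CUDA unified memory; the class and member names are hypothetical, not boxfit's actual identifiers.

```cpp
// Minimal sketch of the new/delete workaround: any class that inherits from
// Managed gets its storage placed in CUDA unified (managed) memory, so the
// same pointer is valid on both the host and the device.
#include <cuda_runtime.h>
#include <cstddef>

class Managed {
public:
    void* operator new(std::size_t len) {
        void* ptr = nullptr;
        cudaMallocManaged(&ptr, len);   // unified memory, visible to host and device
        cudaDeviceSynchronize();
        return ptr;
    }
    void operator delete(void* ptr) {
        cudaDeviceSynchronize();        // make sure no kernel is still using it
        cudaFree(ptr);
    }
};

// A replacement array of per-ray values, allocated in managed memory.
class RayGrid : public Managed {
public:
    double* flux = nullptr;
    int n = 0;

    void allocate(int n_rays) {
        n = n_rays;
        cudaMallocManaged(&flux, n * sizeof(double));
    }
    ~RayGrid() { cudaFree(flux); }
};
```

Unified memory keeps the prototype simple, but pages migrate on demand, which fits the low GPU utilization noted above while most of the radiation calculation stays on the CPU.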
Recent posts

More adventures in Inverse-Compton

I had to do some backpedaling on implementing Inverse-Compton. The Klein-Nishina effects are not as important for most observations, and there was some strangeness in the fast cooling regime. The main issue was that the cooling frequency was not suppressed as one would expect, and this was due to stupidity in the way I originally implemented it. Here are some new results for physical parameters E = 10^52 erg, p = 2.5, theta_0 = 0.2 rad, theta_obs = 0, n = 5, e_e = 1, e_b = 0.01, and ksi_n = 1. We tested in the fast cooling regime using the standard expression for Y (full derivation to follow) and in the slow cooling regime using an approximation of the formula derived in Beniamini et al. 2015 (arXiv:1504.04833v2). The red line is the minimum accelerated electron emitted frequency, the dashed line is the IC-suppressed cooling frequency, and the blue line is the unsuppressed cooling frequency. We originally had a debate about whether the fast cooling spectrum should have the same...
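
For reference, here is a minimal sketch of the suppression being plotted, assuming the "standard expression for Y" above is the usual fast-cooling closure Y(1+Y) = e_e/e_b, with the cooling break pushed down by a factor (1+Y)^2. This is only illustrative and is not the full derivation promised above.

```cpp
// Sketch of the fast-cooling IC suppression, assuming Y(1+Y) = e_e/e_b
// and nu_c_IC = nu_c / (1+Y)^2. Illustrative only.
#include <cmath>
#include <cstdio>

double compton_Y_fast(double eps_e, double eps_B) {
    // Positive root of Y^2 + Y - eps_e/eps_B = 0
    double r = eps_e / eps_B;
    return 0.5 * (std::sqrt(1.0 + 4.0 * r) - 1.0);
}

double nu_c_suppressed(double nu_c, double Y) {
    return nu_c / ((1.0 + Y) * (1.0 + Y));   // cooling break lowered by (1+Y)^2
}

int main() {
    double eps_e = 1.0, eps_B = 0.01;        // values quoted in the post
    double Y = compton_Y_fast(eps_e, eps_B);
    std::printf("Y = %.2f, cooling-break suppression = %.3g\n",
                Y, 1.0 / ((1.0 + Y) * (1.0 + Y)));
    return 0;
}
```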

Synchrotron Self-Compton Radiation and some multithreading

Progress has been made since the last post. My modifications to boxfit now allow for basic Inverse-Compton radiation. Here is a reference spectrum generated using the shipped settings. The current method uses the definition of the inverse Compton parameter (Y) laid out in Nakar et al., ApJ, 703, 675, and works mainly in the slow cooling regime, with placeholders for the other regimes. The orange curve is the SSC-enabled spectrum, and it is behaving exactly as expected above the cooling break. The next step is to get the proper parameterization for Y based on Nakar et al. as well as Beniamini et al., MNRAS, 454, 1073. This includes the Klein-Nishina effect at higher frequencies. I do worry a bit about how computationally expensive this will be, but I can't really speak to optimizations until I have a better idea of what the algorithm is going to look like. I am still working on the CUDA port, but I haven't had much time to think about how I want to change the data...
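
To show where the correction enters, here is a toy slow-cooling spectrum in which the flux above the cooling break loses a factor (1+Y) to SSC. The slopes and names are the textbook ones, not boxfit's actual radiation module.

```cpp
// Toy slow-cooling synchrotron spectrum with and without the SSC correction:
// above the cooling break the flux is reduced by a factor (1+Y).
// Schematic normalization, not boxfit's internals.
#include <cmath>

double F_nu(double nu, double nu_m, double nu_c, double F_peak,
            double p, double Y, bool ssc_enabled) {
    double suppress = ssc_enabled ? 1.0 / (1.0 + Y) : 1.0;
    if (nu < nu_m)        // below the injection break
        return F_peak * std::pow(nu / nu_m, 1.0 / 3.0);
    else if (nu < nu_c)   // between injection and cooling breaks
        return F_peak * std::pow(nu / nu_m, -(p - 1.0) / 2.0);
    else                  // above the cooling break: SSC removes a factor (1+Y)
        return F_peak * std::pow(nu_c / nu_m, -(p - 1.0) / 2.0)
                      * std::pow(nu / nu_c, -p / 2.0) * suppress;
}

// Example call: F_nu(1e18, 1e12, 1e16, 1.0, 2.5, 5.0, true);
```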

An update on the static class member

Okay, so I have started my teaching duties for the summer, and I have also started working with the original code to add inverse-Compton scattering, which may be helpful for finally finishing up the model fit of GRB 070125, the original reason I started playing with boxfit. Both of these have slowed progress on the CUDA port, so it is behind where I wanted it to be in the original roadmap. I do bear some good news, though: I think I understand why the static member is causing such a big problem with CUDA.

Originally, I was confused because I had overloaded the new and delete operators to automatically move any declared variables to device memory using cudaMallocManaged() (I know this is not an efficient means of allocating device memory, but it is useful for getting a prototype up quickly). The problem is that static members of a class are identical across all instances of the class, so there is no way to separately reference the array on the host and the device; the memory pointer always...
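
To make the problem concrete, here is a sketch with hypothetical names of why the overloaded new/delete trick does not reach a static data member, along with one possible workaround of handing the managed pointer to the kernel explicitly.

```cpp
// Sketch of the static-member problem (hypothetical names). The managed-memory
// new/delete trick covers objects created with `new`, but a static data member
// is a single host-side symbol shared by every instance: device code that
// refers to Grid::points directly will not compile.
#include <cuda_runtime.h>

struct EdsPoint { double r, theta, flux; };

class Grid {
public:
    static EdsPoint* points;   // one pointer for all instances, lives on the host
    static int n_points;

    static void allocate(int n) {
        n_points = n;
        // The *data* can live in managed memory...
        cudaMallocManaged(&points, n * sizeof(EdsPoint));
    }
};
EdsPoint* Grid::points = nullptr;
int Grid::n_points = 0;

// ...but the kernel cannot name Grid::points, so the managed pointer has to be
// passed in as an ordinary kernel argument instead.
__global__ void scale_flux(EdsPoint* pts, int n, double factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pts[i].flux *= factor;
}

// Host side: Grid::allocate(n);
//            scale_flux<<<blocks, threads>>>(Grid::points, Grid::n_points, 2.0);
```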

So that static array is actually really cool... but not super useful in CUDA

So that static array I mentioned last post just got a lot more interesting. At first, I thought it was an array of values that contained information about the equidistant surface for doing calculations on how much flux the observer actually sees. It does do this, but the way it does it is very different from what I thought. My intuition was that each array element was some single value, and that it was static for ease of access by multiple parts of the code. What it actually is is an array of structs, each containing several doubles.

What this means is that you specify values for each struct object at each array point, creating a collection of parameters for every point in discrete space. It is very cool indeed, but I am now even more unsure of how to copy this static object to the GPU. I don't want to dismantle it, as it is such an elegant solution to the problem of defining all these variables for each point, but I have yet to find a way to preserve the array while transferring it...
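
For anyone following along, this is roughly the shape of the structure being described, with hypothetical field names. If the struct stays plain data, the whole array can at least be copied to the device in one call, though that says nothing about preserving the static wrapper itself.

```cpp
// Rough shape of the structure described above (field names are hypothetical):
// one struct per point of the equidistant surface, gathered in a single array.
#include <cuda_runtime.h>
#include <cstddef>

struct EdsPoint {
    double r;        // radial coordinate
    double phi;      // azimuthal coordinate
    double density;  // local fluid quantity, one double per parameter
    double flux;
};

// As long as the struct is plain-old-data, the array can be moved to the
// device in one call without taking it apart -- the open question in the post
// is how to do that while keeping the static wrapper intact.
EdsPoint* copy_grid_to_device(const EdsPoint* host_grid, std::size_t n) {
    EdsPoint* dev_grid = nullptr;
    cudaMalloc(&dev_grid, n * sizeof(EdsPoint));
    cudaMemcpy(dev_grid, host_grid, n * sizeof(EdsPoint), cudaMemcpyHostToDevice);
    return dev_grid;
}
```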

Update: A D'Oh moment, and the curious case of static class members.

So last night I was lying in bed, thinking about the myriad issues I had been having with functions pointing to member objects and multiple threads trying to assign values to those objects, when I realized something. The part I was attempting to parallelize was a waste because there were multiple serial functions embedded in it. Instead of trying to parallelize the flux calculation in time, it made more sense to move further into the code and parallelize the spatial calculations, because those do not have the same dependencies on shared objects.

This morning has consisted of implementing that code, tagging the required functions as device-runnable, going back and catching typos, and then relocating the kernel, because __global__ functions cannot be class member functions. I was feeling great about this and was quite confident as I keyed in the make arguments, when... Whoops...

Turns out, static class members, because of...
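
Here is a sketch of the relocation described above, with made-up names: the kernel moves to namespace scope, since a __global__ function cannot be a member of a class, and it calls __device__-tagged member functions instead.

```cpp
// Kernel relocation sketch (hypothetical names): __global__ functions cannot
// be class members, so the kernel lives outside the class and calls
// device-runnable member functions.
#include <cuda_runtime.h>

class FluxCalculator {
public:
    // Tagged so it can run on the device as well as the host.
    __host__ __device__ double emissivity(double r) const {
        return norm / (r * r + 1.0);
    }
    double norm = 1.0;
};

// Kernel moved outside the class: one thread per spatial point.
__global__ void spatial_flux_kernel(const FluxCalculator calc,
                                    const double* r, double* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = calc.emissivity(r[i]);
}
```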

Here things are.

Okay, this is the inaugural post of the Afterglow project development blog. I have been pushing hard the last couple of weeks to get the CUDA-accelerated version of boxfit into a state where it can be alpha-tested. The main issue is in trying to deal with the way variables are allocated in classes. There are a lot of functions that use class member objects which are just doubles or ints, the idea being that values can be shared between functions without having to be passed explicitly.

This works quite well when each object is allocated and read sequentially by a single thread, or when each thread has an independent instance of that object. The problem with porting the code to CUDA is that the objects become shared among the threads, so each thread attempts to assign its own value to the variable, and everything would go up in flames were it not for the compiler catching the code as incompatible.

I made a video talking about this which I may or may not post...
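
To illustrate the issue with a made-up example: in the serial code a member variable acts as an implicit argument shared between methods, and once many CUDA threads share one object they all race on it; keeping the intermediate value local to each thread avoids the problem.

```cpp
// Illustration of the shared-member problem (made-up names). In the serial
// pattern a member doubles as scratch space that several methods read and
// write in sequence; with many CUDA threads sharing one object, they all
// race on that member. Keeping the value thread-local removes the hazard.
#include <cuda_runtime.h>

class Shell {
public:
    double scratch;   // implicit shared "argument" -- every thread would write it

    // Serial pattern: set_scratch(x); then use_scratch(); relies on `scratch`.
    void set_scratch(double x) { scratch = x * x; }
    double use_scratch() const { return 2.0 * scratch; }

    // Thread-safe pattern: the intermediate value never leaves the thread.
    __host__ __device__ double compute(double x) const {
        double local = x * x;     // per-thread local instead of a member
        return 2.0 * local;
    }
};
```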