We all know the glamour of having the fastest HPC machine, the most nodes, or the fattest pipes. But what gets lost in the hoopla of hardware hype is that someone has to write the code for this stuff to be even marginally useful for handling enormous computations. Herein lies one of the problems with high-performance scientific computing: not enough skilled programmers. Simply put, software development isn't keeping pace with hardware development. This has been a problem for some time, and it still is. Writing code and building applications (from middleware to debuggers) that let a large, data-intensive computational problem be broken into parts, solved piecewise, and reassembled into a single solution is non-trivial. Though a little dated, Susan Graham and Marc Snir, of UC Berkeley and Illinois, Urbana-Champaign respectively, touched on this still-relevant problem in their February 2005 CTWatch Quarterly article "The NRC Report on the Future of Supercomputing." Gregory Wilson, a CS professor, gets a little more specific in "Where's the Real Bottleneck in Scientific Computing?" from American Scientist. A more recent discussion of the lag in software development can be found in Doug Post's keynote talk "The Opportunities and Challenges for Computational Science and Engineering" from the inauguration of the new Virtual Institute - High Productivity Supercomputing (VI-HPS).