We all know the glamour of having the fastest HPC machine, the most nodes, or the fattest pipes. But what gets lost in the hoopla of all the hardware hype is that someone has to write the code for this stuff to be even marginally useful for handling enormous computations. Herein lies one of the problems with high performance, scientific computing - not enough skilled programmers. Simply put, software development isn’t keeping pace with hardware development, and it has been that way for some time. Writing code and applications (from middleware to debuggers) that enable a large, data-intensive computational problem to be broken into parts that are solved individually and then reassembled into a single solution is non-trivial. Though a little dated, Susan Graham and Marc Snir, of UC Berkeley and the University of Illinois at Urbana-Champaign respectively, touched on this still-relevant problem in their February 2005 CTWatch Quarterly article “The NRC Report on the Future of Supercomputing.” Gregory Wilson, a computer science professor, gets a little more specific in “Where’s the Real Bottleneck in Scientific Computing?” from American Scientist. A more recent discussion of the lag in software development can be found in Doug Post’s keynote talk “The Opportunities and Challenges for Computational Science and Engineering” from the inauguration of the new Virtual Institute - High Productivity Supercomputing (VI-HPS).
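To make the “break apart, solve, reassemble” pattern concrete, here is a toy sketch in plain Python. It is not how real HPC codes are written (those typically use MPI or similar message-passing libraries); it just illustrates the decompose-solve-reassemble idea using only the standard library, with a summation standing in for the “enormous computation.” The function names are our own, purely for illustration.

```python
from multiprocessing import Pool


def partial_sum(bounds):
    """Solve one piece of the problem independently."""
    lo, hi = bounds
    return sum(range(lo, hi))


def parallel_sum(n, parts=4):
    """Decompose [0, n) into chunks, solve each in a worker
    process, then reassemble the partial answers."""
    step = n // parts
    chunks = [(i * step, (i + 1) * step if i < parts - 1 else n)
              for i in range(parts)]
    with Pool(parts) as pool:
        return sum(pool.map(partial_sum, chunks))


if __name__ == "__main__":
    # The reassembled answer matches the straightforward serial sum.
    assert parallel_sum(1_000_000) == sum(range(1_000_000))
```

The hard part in practice is everything this toy hides: problems whose pieces are not independent, whose data does not fit on one node, and whose partial results cannot simply be added together - which is exactly why skilled parallel programmers are scarce.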
You’ve read about it and might have even heard some talk about it - the Grid. So you want to know what it is and why it matters? We can help. There was a TV miniseries by that name, but the “Grid” we’re referring to here is a distributed computational resource. Ian Foster provided a nice description back in 2002 called “What is the Grid? A Three Point Checklist.” For an even more comprehensive explanation, visit NCSA’s What is the Grid? website, where a Who’s Who of the high performance computing community answers many pivotal questions regarding the Grid and its use.
Back in May 2005, readers were reminded (and some informed for the first time) about the Semantic Grid effort that’s been underway since 2001. Recently, IST Results did a piece on the Semantic Grid, touting more of the potential commercial benefits of such a resource. A significant component of the Semantic Grid - a methodologically sound technological infrastructure - is being addressed by the OntoGrid Project.
More in-depth information on both the Semantic Grid and the OntoGrid Project can be found in this article.
Standards are good. But too many of them are bad. Such is the case with open standards for the Grid. In this article from Grid Computing Planet, the proliferation of standards is cited as one reason for the slower adoption of Grids, or at least their slower migration from academia to the business enterprise. With the spread of web services, grid management tools are becoming more important; the article also touches on the lack of consensus around Globus as the way to go in Grid middleware.
Widespread talk of the Semantic Grid seems to have cooled over the last couple of years. However, it is still under active development and moving along nicely. The formal effort began in 2001 as part of the UK e-Science program, with the goal of semantic interoperability via an infrastructure
where all resources, including services, are adequately described in a form that is machine-processable… the Semantic Grid is an extension of the current Grid in which information and services are given well-defined meaning, better enabling computers and people to work in cooperation (from the Semantic Grid website).
Development seems to be gaining considerable speed as a greater number of research initiatives related to grid computing are underway. A good primer on the Semantic Grid effort can be found in this presentation (13 MB) given in Amsterdam in April by Dr. David De Roure, one of the lead researchers. Supercomputing Online also has a short piece about the effort.
InfoWorld has published an interesting Q&A with Grady Booch, known as a co-creator of the Unified Modeling Language (UML). In the interview, Booch fields questions on a variety of topics, including parallelizing software and what happens when Moore’s law expires. Though toeing the company line, Booch nevertheless shares his insight into future application development and open source issues as well.