Monday, 29 August 2011

Supercomputers and other large science facilities

In my recent HPCwire feature, I wrote that I occasionally say, glibly and deliberately provocatively, that if the scientific community can justify (to funders and to the public) billions of dollars, large power consumption, lots of staff, etc. for domain-specific major scientific instruments like the LHC, Hubble, and NIF, then why can't we make the case for a facility that needs comparable resources but can do wonders for a whole range of science problems and industrial applications?

There is a partial answer to that ...

Friday, 19 August 2011

What happened to High Productivity Computing?

How do we make HPC more effective? Value for money is often hard to demonstrate for high-impact strategic research facilities like HPC. Not so long ago, this concern meant that the familiar HPC acronym was hijacked to mean "High Productivity Computing", to emphasize that it is not only the raw compute performance at your disposal that counts but, more importantly, how well you are able to make use of that performance. In other words: how productive is it?

What is this HPC thing?

[Originally posted on The NAG Blog]


I’m sure something like this is familiar to many readers of this blog. The focus here is HPC, but there is a similar story for mathematicians, numerical software engineers, etc.


You've just met an old acquaintance. Or a family member is curious. Or you're at a social event (when social means talking to real people, not twitter/facebook). We see that question coming. We panic. Then the family member/friend/stranger asks it. We freeze. How to reply? Can I get a meaningful, ideally interesting, answer out before they get bored? What if I fail to get the message across correctly? Oops, this pause before answering has gone on too long. Now they are looking at me strangely. They are thinking the answer is embarrassing or weird. This is not a good start.


The question? “What do you do then?” Followed by: “Oh! So what exactly is supercomputing then?”


Thursday, 11 August 2011

Big Data and Supercomputing for Science

It is interesting to note the increasing attention “big data” seems to be getting from the supercomputing community.

Data explosion


We talk about the challenges of the exponential increase in data, or even an “explosion of data”. This is caused by our ever-growing ability to generate data. More powerful computational resources deliver finer resolutions, wider parameter studies, etc. The emergence of individual-scale HPC (GPUs etc.) that is both cost-viable and effort-viable gives increased data creation capability to the many scientists not using high-end supercomputers. And scientific instruments continue to improve in resolution and speed.
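To see why finer resolution alone drives that growth, here is a crude back-of-envelope sketch. The grid sizes, variable counts, and snapshot counts below are made-up illustrative numbers, not taken from any particular code or facility:

```python
# Toy back-of-envelope estimate (illustrative numbers only): how raw output
# volume grows as a 3D time-dependent simulation refines its grid.

def output_bytes(grid_points_per_dim, n_variables=5, n_snapshots=1000,
                 bytes_per_value=8):
    """Rough size of the raw output of one simulation run."""
    cells = grid_points_per_dim ** 3          # 3D spatial grid
    return cells * n_variables * n_snapshots * bytes_per_value

for n in (256, 512, 1024, 2048):
    tb = output_bytes(n) / 1e12
    print(f"{n}^3 grid: ~{tb:,.1f} TB per run")

# Doubling the resolution in each dimension multiplies the spatial data by 8x;
# halving the time step (more snapshots) pushes that towards 16x, before any
# parameter sweep multiplies it again by the number of runs.
```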

So, we are collecting more data than ever before. We are also increasing our use of multiple data sources – fusing data from various sensors and computer models to form predictions or study scientific phenomena.

It is also common to hear questions such as: are we drowning in the volume of data? Is this growth in data overwhelming our ability to extract useful information or insight? Is the potential value of the increased data lost through our inability to manage and comprehend it? Does having more data mean more information – or less, due to analysis overload? Does the diversity of formats, quality, and sources further hinder data use?


Monday, 8 August 2011

Summer season big changes - football or supercomputing?

The world of supercomputing has gone mad.

So it seems as I catch up on the news around the HPC community after a week's vacation. Just today, the news of IBM walking away from half a decade's work on Blue Waters, and the story of an unknown organisation [now revealed to be NVidia] tempting Steve Scott to leave his Cray CTO role, have been huge. And thinking back over the summer months, there has been more.

The immediate comparison to me is that of the European football summer season (soccer for my American readers). Key players are signed by new clubs, managers leave for pastures new (or are pushed), and ownership takeover bids succeed or fail. It feeds a few months of media speculation and social gossip, with occasional breaking news (i.e. actual facts) and several major moves (mostly big surprises, but some pre-hyped for long before). But clubs emerge from the summer with new teams, new ambitions, and new odds of achieving success.

The world of HPC has had such a summer, I think.

Friday, 24 June 2011

ISC11 Review

ISC11 - the mid-season big international conference for the world of supercomputing - was held this week in Hamburg.

Here, I update my ISC11 preview post with my thoughts after the event.

I said I was watching out for three battles.

GPU vs MIC vs Fusion

The fight for top voice in manycore/GPU world will be one interesting theme of ISC11. Will this be the year that the GPU/manycore theme really means more than just NVidia and CUDA? AMD has opened the lid on Fusion in recent weeks and has sparked some real interest. Intel's MIC (or Knights) is probably set for some profile at ISC11 now the Knights Ferry program has been running a while. How will NVidia react to no longer being the loudest (only?) noise in GPU/manycore land? Or will NVidia's early momentum carry through?

Review: None of this is definitive, but my gut reaction is that MIC won this battle. GPU lost. Fusion didn't play again. My feeling from talking to attendees was that MIC was second only to the K story in terms of what people were talking about (and asking NAG - as collaborators in the MIC programme - what we thought). Partly because of the MIC hype, and the K success (performance and power efficiency without GPUs), GPUs took a quieter role than in recent years. Fusion, disappointingly, once again seemed to have a quiet time in terms of people talking about it (or not). Result? As I thought, manycore now realistically means more than just NVidia/CUDA.

Exascale vs Desktop HPC

Both the exascale vision/race/distraction (select according to your preference) and the promise of desktop HPC (personal supercomputing?) have space on the agenda and exhibit floor at ISC11. Which will be the defining scale of the show? Will most attendees be discussing exascale and the research/development challenges to get there? Or will the hopes and constraints of "HPC for the masses" have people talking in the aisles? Will the lone voices trying to link the two extremes be heard? (technology trickle down, market solutions to efficient parallel programming etc.) What about the "missing middle"?

Review: Exascale won this one hands down, I think. Some lone voices still tried to talk about desktop HPC, missing middles, mass usage of HPC and so-on. But exascale got the hype again (not necessarily wrong for one of the year's primary "supercomputing" shows!)

Software vs Hardware

The biggie for me. Will this be the year that software really gets as much attention as hardware? Will the challenges and opportunities of major applications renovation get the profile they deserve? Will people just continue to say "and software too"? Or will the debate - and actions - start to follow? The themes above might (should) help drive this (porting to GPU, new algorithms for manycore, new paradigms for exascale, etc). Will people trying to understand where to focus their budget get answers? Balance of hardware vs software development vs new skills? Balance of "protect legacy investment" against opportunity of a fresh look at applications?

Review: Hardware still got more attention than software - Top500, MIC, etc. - although ease-of-programming for MIC was a common question too. I did miss lots of talks, so perhaps there was more there focusing on applications and software challenges than I caught. But the chat in the corridors was still hardware-dominated, I thought.

The rest?

What have I not listed? National flag waving. I'm not sure I will be watching too closely whether USA, Japan, China, Russia or Europe get the most [systems|petaflops|press releases|whatever]. Nor the issue of cloud vs traditional HPC. I'm not saying those two don't matter. But I am guessing the three topics above will have more impact on the lives of HPC users and technology developers - both next week and for the next year once back at work.

Review: Well, I got those two wrong! Flags were out in force, with Japan (K, Fujitsu, Top500, etc) and France (Bull keynote) waving strongly among others. And clouds were seemingly the question to be asked at every panel! But in a way, I was still right - flags and clouds do matter and will get people talking - but I maintain that manycore, exascale vs desktop, and the desperate state of software all matter more.


 What did you learn? What stood out for you? Please add your comments and thoughts below ...

Friday, 17 June 2011

ISC 11 Preview

ISC11 - the mid-season big international conference for the world of supercomputing - is next week in Hamburg.

Will you be attending? What will you be looking to learn? I will be watching out for three battles.

GPU vs MIC vs Fusion

The fight for top voice in manycore/GPU world will be one interesting theme of ISC11. Will this be the year that the GPU/manycore theme really means more than just NVidia and CUDA? AMD has opened the lid on Fusion in recent weeks and has sparked some real interest. Intel's MIC (or Knights) is probably set for some profile at ISC11 now the Knights Ferry program has been running a while. How will NVidia react to no longer being the loudest (only?) noise in GPU/manycore land? Or will NVidia's early momentum carry through?

Exascale vs Desktop HPC

Both the exascale vision/race/distraction (select according to your preference) and the promise of desktop HPC (personal supercomputing?) have space on the agenda and exhibit floor at ISC11. Which will be the defining scale of the show? Will most attendees be discussing exascale and the research/development challenges to get there? Or will the hopes and constraints of "HPC for the masses" have people talking in the aisles? Will the lone voices trying to link the two extremes be heard? (technology trickle down, market solutions to efficient parallel programming etc.) What about the "missing middle"?

Software vs Hardware

The biggie for me. Will this be the year that software really gets as much attention as hardware? Will the challenges and opportunities of major applications renovation get the profile they deserve? Will people just continue to say "and software too"? Or will the debate - and actions - start to follow? The themes above might (should) help drive this (porting to GPU, new algorithms for manycore, new paradigms for exascale, etc). Will people trying to understand where to focus their budget get answers? Balance of hardware vs software development vs new skills? Balance of "protect legacy investment" against opportunity of a fresh look at applications?

The rest?

What have I not listed? National flag waving. I'm not sure I will be watching too closely whether USA, Japan, China, Russia or Europe get the most [systems|petaflops|press releases|whatever]. Nor the issue of cloud vs traditional HPC. I'm not saying those two don't matter. But I am guessing the three topics above will have more impact on the lives of HPC users and technology developers - both next week and for the next year once back at work.

 What will you be looking out for?

Thursday, 26 May 2011

Poll: legacy software and future HPC applications

I've added a new quick survey to the HPC Notes blog: "Which do you agree with more for developing the next generation of HPC applications?"

Is the argument about "protecting our investment" a load of nonsense? Or is throwing it all away and starting again irresponsible?

I've only allowed the two extremes as voting options - I know you might want to say "both" - but choose one!

See top right of the blog home page. Vote away ...

For clues on my own views, and some good audience debate, see the software panel video recording from the recent NCSA PSP annual meeting here: http://www.ncsa.illinois.edu/Conferences/2011Meeting/agenda.html

Blog on the topic to follow shortly ... after some of your votes have been posted :-)

Thursday, 24 March 2011

Poll: exascale, personal or industrial?

I've added a new quick survey to the HPC Notes blog: "Which is more interesting - exascale computing, personal supercomputing or industry use of HPC?"

See top right of the blog home page. You can even give different answers for "reading about" and "working on"...

Investments Today for Effective Exascale Tomorrow

I contributed to this article by Mike Bernhardt in the March 2011 issue of The Exascale Report.

"Initiatives are being launched, research centers are being established, teams are being formed, but in reality, we are barely getting started with exascale research. Opinions vary as to where we should be focusing our resources.

In this issue, The Exascale Report asks NAG's Andy Jones, Lawrence Livermore's Dona Crawford, and Growth Science International's Thomas Thurston where should we (as a global community) be placing our efforts today with exascale research and development?"


Friday, 18 March 2011

Performance and Results

[Originally posted on The NAG Blog]

What's in a catch phrase?

As you will hopefully know, NAG's strapline is "Results Matter. Trust NAG".

What matters to you, our customers, is results. Correct results that you can rely on. Our strapline invites you to trust NAG - our people and our software products - to deliver that for you.

When I joined NAG to help develop the High Performance Computing (HPC) services and consulting business, one of the early discussions raised the possibility of using a new version of this strapline for the HPC business, reflecting the performance emphasis of the increased HPC activity. Probably the best suggestion was "Performance Matters. Trust NAG." A close second was "Productivity Matters. Trust NAG."