I've added a new quick survey to the HPC Notes blog: "Which is more interesting - exascale computing, personal supercomputing or industry use of HPC?"
See top right of the blog home page. You can even give different answers for "reading about" and "working on"...
Thursday, 24 March 2011
Investments Today for Effective Exascale Tomorrow
I contributed to this article in the March 2011 issue of The Exascale Report, by Mike Bernhardt.
"Initiatives are being launched, research centers are being established, teams are being formed, but in reality, we are barely getting started with exascale research. Opinions vary as to where we should be focusing our resources.
In this issue, The Exascale Report asks NAG's Andy Jones, Lawrence Livermore's Dona Crawford, and Growth Science International's Thomas Thurston where should we (as a global community) be placing our efforts today with exascale research and development?"
Labels:
exascale,
interview,
software,
strategy,
supercomputing,
The Exascale Report
Friday, 18 March 2011
Performance and Results
[Originally posted on The NAG Blog]
What's in a catch phrase?
As you will hopefully know, NAG's strapline is "Results Matter. Trust NAG".
What matters to you, our customers, is results. Correct results that you can rely on. Our strapline invites you to trust NAG - our people and our software products - to deliver that for you.
When I joined NAG to help develop the High Performance Computing (HPC) services and consulting business, one of the early discussions raised the possibility of using a new version of this strapline for our HPC business, reflecting the performance emphasis of the increased HPC activity. Probably the best suggestion was "Performance Matters. Trust NAG." Close second was "Productivity Matters. Trust NAG."
Labels:
hpc,
multicore,
NAG,
parallel programming,
performance,
software
Thursday, 17 March 2011
The Addictive Allure of Supercomputing
The European Medical Device Technology (EMDT) magazine interviewed me recently. InsideHPC has also pointed to the interview here.
The interview discusses false hopes of users: "Computers will always get faster – I just have to wait for the next processor and my application will run faster."
We still see this so often - managers, researchers, even programmers - all waiting for the silver bullet that will make multicore processors run their application faster with no extra effort from them. There is nothing available now, or coming soon, that will do that except for a few special cases. Getting performance from multicore processors means evolving your code for parallel processing. Tools and parallelized library plugins can help - but in many cases they won't be a substitute for rewriting key parts of the code using multithreading or similar techniques.
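To make that concrete, here is a minimal sketch (my illustration, not from the interview) of the kind of "extra effort" involved: a loop that runs on one core no matter how many are available, until it is explicitly parallelised - here with a single OpenMP directive.

```c
/* Illustrative sketch only: a serial loop gains nothing from extra
   cores until the code is explicitly parallelised.
   Compile with OpenMP enabled, e.g.: gcc -O2 -fopenmp saxpy.c */
#include <stdio.h>
#include <omp.h>

#define N 10000000

static double x[N], y[N];

int main(void) {
    for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

    double t0 = omp_get_wtime();
    /* Without this pragma, the loop uses one core regardless of how
       many the processor has - this line is the "extra effort". */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        y[i] = 2.0 * x[i] + y[i];
    double t1 = omp_get_wtime();

    printf("%d thread(s): %.3f s\n", omp_get_max_threads(), t1 - t0);
    return 0;
}
```

Real applications rarely parallelise this cleanly, which is exactly why tools and library plugins only go so far.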
Thursday, 10 March 2011
Meeting HPC people
About a year ago, I wrote this article for ZDNet UK, describing what I thought were some of the key events in the supercomputing/HPC community.
I said: "Many people have rightly remarked that the HPC community really is that — a community — and that there is still a relatively high degree of connection between the various practitioners. In other words, despite its growing size and global reach, it feels like a small community. People know each other. Consequently, networking, whether technical or commercial, goes a long way to helping your business."
And: "Whatever your scale of technical computing, from multicore workstations to multi-thousand-node supercomputers, getting involved with the active HPC community can help you with your parallel computing goals. Online resources can help, but by far the most effective way of benefiting from the wider HPC community is by participating at the right events."
I listed some key events, with a comment about the nature and value of each.
I have now added a survey to this website (top right) to find out which events people plan to attend in 2011.
I may have missed out your favourite conference in the original article, or in the survey above, in which case I would like to hear about it too - maybe via the comments page here, or directly.
I hope to meet some of you when out and about in the coming year ...
I said: "Many people have rightly remarked that the HPC community really is that — a community — and that there is still a relatively high degree of connection between the various practitioners. In other words, despite its growing size and global reach, it feels like a small community. People know each other. Consequently, networking, whether technical or commercial, goes a long way to helping your business."
And: "Whatever your scale of technical computing, from multicore workstations to multi-thousand-node supercomputers, getting involved with the active HPC community can help you with your parallel computing goals. Online resources can help, but by far the most effective way of benefiting from the wider HPC community is by participating at the right events."
I listed some key events, with a comment about the nature and value of each.
I have now added a survey to this website (top right) to find out which events people plan to attend in 2011.
I may have missed out your favourite conference in the original article, or in the survey above, in which case I would like to hear about it too - maybe via the comments page here, or directly.
I hope to meet soome of you when out and about in the coming year ...
Labels:
events,
hpc,
people,
supercomputing
NAG out and about
[Originally posted on The NAG Blog]
The NAG website has a section called "Meet our experts - NAG out and about", which gives a list of upcoming events worldwide that NAG experts will be attending or presenting at.
The page also notes: "We regularly organise and participate in conferences, seminars and training days with our customers and partners. If you would like to talk to us about hosting a NAG seminar at your organisation or any training requirements you might have email us at sales@nag.co.uk".
In my own focus of high performance computing (HPC), I have previously written (for ZDNet UK) about some key supercomputing events. For those of you interested in meeting up with HPC experts (especially from NAG!), I have set up a survey of HPC events - please let us know which events you plan to attend in 2011 - and see which events other readers of The NAG Blog are attending.
Saturday, 30 October 2010
Comparing HPC across China, USA and Europe
[Originally posted on The NAG Blog]
In my earlier blog post today on China announcing the world's fastest supercomputer, I said I'd be back with more later on the comparisons with the USA, Europe and others. In this morning's blog, I made the point that the world's fastest supercomputer, in itself, is not world changing. But leading supercomputers, critically matched with appropriate expertise in programming and using them, together with the vision to ensure use across basic research, industry and defence applications, can indeed be strategically beneficial to a nation - including real economic impact.
There are plenty of reports and studies describing the strategic impact of HPC within a given organisation or at national levels (some are catalogued by IDC here), so let's take it as a premise for the following thoughts.
Labels:
hpc,
leadership,
NAG,
petaflops,
software,
supercomputing
Friday, 29 October 2010
Why does the China supercomputer matter to western governments?
[Originally posted on The NAG Blog]
There has been a lot of fuss in the mainstream media (BBC, FT, CNET, even the Daily Mail!) over the last few days about the world's fastest supercomputer being in China for the first time. And much ado on Twitter (me too - @hpcnotes).
But much of the mainstream reporting, twitter-fest, and blogging is missing the point, I think. China deploying the world's fastest supercomputer is news (the fastest supercomputer has almost always been American for decades, with the occasional Japanese crown). But the machine alone is not the big news.
Labels:
hpc,
leadership,
NAG,
petaflops,
software,
supercomputing
Thursday, 23 September 2010
Is power-hungry supercomputing OK now?
[Article by me on ZDNet UK, 23 September 2010]
We may be planning for a 1,000-fold increase in compute power in the next decade, but what about the extra power consumption ...
http://www.zdnet.co.uk/news/emerging-tech/2010/09/23/is-power-hungry-supercomputing-ok-now-40090137/
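The underlying arithmetic is stark. As a back-of-envelope sketch (the ~7 MW draw for a circa-2010 petascale system and the ~20 MW exascale power budget are my assumed figures, not from the article):

```c
/* Back-of-envelope only; both input figures are assumptions. */
#include <stdio.h>

int main(void) {
    double petascale_mw = 7.0;   /* assumed: a ~2010 petaflop system */
    double naive_mw = petascale_mw * 1000.0;  /* 1,000x the compute at
                                                 the same energy/flop */
    double budget_mw = 20.0;     /* assumed: oft-quoted exascale cap */

    printf("Naive exascale draw: %.0f MW vs a %.0f MW budget - "
           "energy efficiency must improve ~%.0f-fold.\n",
           naive_mw, budget_mw, naive_mw / budget_mw);
    return 0;
}
```

On those assumed numbers, scaling today's technology a thousand-fold would demand gigawatts - hence the focus on energy efficiency rather than raw speed alone.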
Labels:
exascale,
hpc,
leadership,
power,
supercomputing,
ZDNetUK
Monday, 13 September 2010
Do you want ice with your supercomputer?
[Originally posted on The NAG Blog]
“Would you like ice with your drink?” It’s a common question of course. One that divides people – few will think “I don’t mind” – most have a firm preference one way or the other. There are people who hate ice with their drink and those who freak if there is none. National stereotypes have a role to play – in the USA the question is not always asked – it’s assumed you want ice with everything. In the UK, you often have to ask specifically to get ice.
Yet the role of ice in making our drinks chilled is misleading. I once had a discussion with a leading American member of the international HPC community about this. “No ice”, he was complaining as we headed out of a European country, “they had no ice for the drink”.
“I don’t get this obsession with ice”, I chipped in. “What?!” He looked at me as if I were mad. “Why do you like your coke warm?”
“Ah, but that’s just it”, I replied. “I hate warm drinks – I really like my coke chilled. But surely, in this modern world over a century after the invention of the refrigerator, it’s not unreasonable to expect the fluid to be chilled – without the need to drop lumps of solid water into it?”
“Ah, fair point”, he conceded.
What has this got to do with supercomputing? Perhaps the common thread is that usually we just accept the habitual choices of ways to do things – and don’t often step back to think – “are those the only choices?”
Maybe we should step back a little more often and ask ourselves what we are trying to achieve with HPC – and are the usual choices the only ways forward? Or are there different ways to approach the problem that will deliver simpler, better or cheaper performance?
Perhaps your business/research goals mean you need to conduct more complex modelling, or you need faster performance. Maybe the drive of computing technology towards many-core processors rather than faster processors is limiting your ability to achieve this. (I have had several conversations recently where companies are buying older technology because their software won't run on multicore.)
The “ice or no ice” question might be whether or not to upgrade your HPC with the latest multicore processors. But what about the “just chill the fluid” option? Well, how about upgrading the software instead, or as well?
NAG has plenty of case studies to show where enhancements to software have achieved huge gains in performance or capability (e.g., www.hector.ac.uk/cse/reports).
Sometimes buying more compute power is the right answer. Sometimes, extracting more efficient performance from what you have is the answer. Bringing them together - a balance of hardware upgrades and software innovations - might well give you the best chance of optimising cost efficiency, performance, and sustainability of performance.
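For a flavour of what a software-side win can look like (my illustration, not one of the NAG case studies), consider the same computation written two ways on the same hardware; simply walking through memory in the order the data is laid out is often several times faster than striding across it.

```c
/* Illustrative sketch: the same sum, two memory-access patterns.
   Unit-stride (row-major) traversal is typically several times faster
   than strided traversal - a gain from software, not new iron. */
#include <stdio.h>
#include <time.h>

#define N 4096

static double a[N][N];

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 1.0;

    clock_t t0 = clock();
    double s1 = 0.0;                    /* row-major: cache friendly */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s1 += a[i][j];

    clock_t t1 = clock();
    double s2 = 0.0;                    /* column-major: cache hostile */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s2 += a[i][j];
    clock_t t2 = clock();

    printf("row-major %.3f s, column-major %.3f s (sums %.0f, %.0f)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, s1, s2);
    return 0;
}
```

The real case studies involve far more than loop ordering, of course, but the principle is the same: the hardware was capable all along; the software had to be changed to let it show.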
Labels:
HECToR,
hpc,
NAG,
performance,
productivity,
software,
supercomputing
Monday, 30 August 2010
Me on HPC 2
Things I have said (or have been attributed as saying - not always the same thing!) - some older interviews with me in various publications about HPC, multicore, etc ...
Successful Deployment at Extreme Scale: More than Just the Iron
The Exascale Report
August 2010, by John West
[full article requires subscription, extracts here are not complete, and are modified slightly to support that]
"cost of science, not just the cost of supercomputer ownership"
"lead time, and funding, to get the user community ready"
"spend a year or more selecting a machine and then deploy it as quickly as possible, makes it very difficult to build a community and get codes ready ahead of time"
"software must be viewed as part of the scientific instrument, in this case a supercomputer, that needs its own investment. High performance computing is really about the software; whatever hardware you are using is just an accelerator system."
"a machine is deployed and then obsolete within three years. And the users often have no idea what architecture is coming next. There is no real chance for planning, or a return on software development investment."