The hpcnotes HPC blog - supercomputing, HPC, high performance computing, cloud, e-infrastructure, scientific computing, exascale, parallel programming services, software, big data, multicore, manycore, Phi, GPU, HPC events, opinion, ...
Showing posts with label performance. Show all posts
Friday, 12 October 2012
I have posted a new article on the NAG blog: The making of "1000x" – unbalanced supercomputing.
This goes behind my HPCwire article ("Chasing 1000x: The future of supercomputing is unbalanced"), where I explain the pun in the title and dip into some of the technology issues affecting the next 1000x in performance.
Friday, 15 June 2012
Supercomputers are for dreams
I was invited to the 2012 NCSA Annual Private Sector Program (PSP) meeting in May. In my few years of attending, this has always been a great meeting (attendance by invitation only), with an unusually high concentration of real HPC users and managers from industry.
NCSA have recently released streaming video recordings of the main sessions - the videos can be found as links on the Annual PSP Meeting agenda page.
Bill Gropp chaired a panel session on "Modern Software Implementation" with myself and Gerry Labedz as panellists.
The full video (~1 hour) is here but I have also prepared a breakdown of the panel discussion in this blog post below.
Labels:
blue waters,
events,
hpc,
ncsa,
parallel programming,
people,
performance,
productivity,
software,
strategy,
supercomputing
Friday, 19 August 2011
What happened to High Productivity Computing?
How do we make HPC more effective? Value for money is often hard to demonstrate for high-impact strategic research facilities like HPC. Not so long ago, this concern meant that the familiar HPC acronym was hijacked to mean "High Productivity Computing", to emphasize that it is not only the raw compute performance at your disposal that counts but, more importantly, how well you are able to make use of that performance. In other words: how productive is it?
Labels:
hpc,
performance,
productivity
Friday, 18 March 2011
Performance and Results
[Originally posted on The NAG Blog]
What's in a catch phrase?
As you will hopefully know, NAG's strapline is "Results Matter. Trust NAG".
What matters to you, our customers, is results. Correct results that you can rely on. Our strapline invites you to trust NAG - our people and our software products - to deliver that for you.
When I joined NAG to help develop the High Performance Computing (HPC) services and consulting business, one of the early discussions raised the possibility of using a new version of this strapline for our HPC business, reflecting the performance emphasis of the increased HPC activity. Probably the best suggestion was "Performance Matters. Trust NAG." Close second was "Productivity Matters. Trust NAG."
Labels:
hpc,
multicore,
NAG,
parallel programming,
performance,
software
Monday, 13 September 2010
Do you want ice with your supercomputer?
[Originally posted on The NAG Blog]
“Would you like ice with your drink?” It’s a common question of course. One that divides people – few will think “I don’t mind” – most have a firm preference one way or the other. There are people who hate ice with their drink and those who freak if there is none. National stereotypes have a role to play – in the USA the question is not always asked – it’s assumed you want ice with everything. In the UK, you often have to ask specifically to get ice.
Yet the role of ice in making our drinks chilled is misleading. I once had a discussion with a leading American member of the international HPC community about this. “No ice”, he was complaining as we headed out of a European country, “they had no ice for the drink”.
“I don’t get this obsession with ice”, I chipped in. “What?!” He looked at me as if I were mad. “Why do you like your coke warm?”
“Ah, but that’s just it”, I replied. “I hate warm drinks – I really like my coke chilled. But surely, in this modern world over a century after the invention of the refrigerator, it’s not unreasonable to expect the fluid to be chilled – without the need to drop lumps of solid water into it?”
“Ah, fair point”, he conceded.
What has this got to do with supercomputing? Perhaps the common thread is that usually we just accept the habitual choices of ways to do things – and don’t often step back to think – “are those the only choices?”
Maybe we should step back a little more often and ask ourselves what we are trying to achieve with HPC – and are the usual choices the only ways forward? Or are there different ways to approach the problem that will deliver simpler, better or cheaper performance?
Perhaps your business/research goals mean you need to conduct more complex modelling, or you need faster performance. Maybe the drive of computing technology towards many-core processors rather than faster processors is limiting your ability to achieve this. (I have had several conversations recently in which companies were buying older technology because their software won’t run on multicore.)
The “ice or no ice” question might be whether or not to upgrade your HPC with the latest multicore processors. But what about the “just chill the fluid” option? Well, how about upgrading the software instead, or as well?
NAG has plenty of case studies to show where enhancements to software have achieved huge gains in performance or capability (e.g., www.hector.ac.uk/cse/reports).
Sometimes buying more compute power is the right answer. Sometimes, extracting more efficient performance from what you have is the answer. Bringing them together, a balance of hardware upgrades and software innovations, might well give you the best chance of optimising cost efficiency, performance and sustainability of performance.
Labels:
HECToR,
hpc,
NAG,
performance,
productivity,
software,
supercomputing
Thursday, 28 January 2010
Are we taking supercomputing code seriously?
[Article by me on ZDNet UK, 28 January, 2010]
The supercomputing programs behind so much science and research are written by people who are not software pros ...
http://www.zdnet.co.uk/news/it-strategy/2010/01/28/are-we-taking-supercomputing-code-seriously-40004192/
Thursday, 1 October 2009
Should programming supercomputers be hard?
[Article by me on ZDNet UK, 1 October, 2009]
Those who glibly argue for easier programming of supercomputers are broaching a complex issue ...
http://www.zdnet.co.uk/news/it-strategy/2009/10/01/should-programming-supercomputers-be-hard-39763731/
Monday, 10 August 2009
Personal supercomputing anyone?
[Article by me on ZDNet UK, 10 August, 2009]
Personal supercomputing may sound like a contradiction in terms, but it definitely exists ...
http://www.zdnet.co.uk/news/it-strategy/2009/08/10/personal-supercomputing-anyone-39710087/
Thursday, 18 June 2009
When supercomputing benchmarks fail to add up
[Article by me on ZDNet UK, 18 June, 2009]
Using benchmarks to choose a supercomputer is more complex than just picking the fastest system ...
http://www.zdnet.co.uk/news/it-strategy/2009/06/18/when-supercomputing-benchmarks-fail-to-add-up-39664193/
Labels:
hpc,
performance,
procurement,
risk,
supercomputing,
ZDNetUK
Tuesday, 16 December 2008
How to stand out in the supercomputing crowd
[Article by me on ZDNet UK, 16 December, 2008]
High-performance computing's key business benefit may be to differentiate an organisation from its rivals, but that shouldn't rule out the use of commodity products ...
http://www.zdnet.co.uk/news/it-strategy/2008/12/16/how-to-stand-out-in-the-supercomputing-crowd-39578009/
Labels:
hpc,
performance,
procurement,
strategy,
supercomputing,
ZDNetUK
Friday, 31 October 2008
Is supercomputing just about performance?
[Article by me on ZDNet UK, 31 October, 2008]
You may think you know what 'HPC' stands for — but conflicting views on what that 'P' really stands for reflect important changes taking place within the field of supercomputing ...
http://www.zdnet.co.uk/news/servers/2008/10/30/is-supercomputing-just-about-performance-39534285/
Labels:
hpc,
performance,
productivity,
software,
ZDNetUK