The hpcnotes HPC blog - supercomputing, HPC, high performance computing, cloud, e-infrastructure, scientific computing, exascale, parallel programming services, software, big data, multicore, manycore, Phi, GPU, HPC events, opinion, ...
Friday, 18 March 2011
What's in a catch phrase?
[Originally posted on The NAG Blog]
As you will hopefully know, NAG's strapline is "Results Matter. Trust NAG".
What matters to you, our customers, is results. Correct results that you can rely on. Our strapline invites you to trust NAG - our people and our software products - to deliver that for you.
When I joined NAG to help develop the High Performance Computing (HPC) services and consulting business, one of the early discussions raised the possibility of using a new version of this strapline for our HPC business, reflecting the performance emphasis of the increased HPC activity. Probably the best suggestion was "Performance Matters. Trust NAG." A close second was "Productivity Matters. Trust NAG."
Thursday, 17 March 2011
The Addictive Allure of Supercomputing
The European Medical Device Technology (EMDT) magazine interviewed me recently. InsideHPC has also pointed to the interview here.
The interview discusses false hopes of users: "Computers will always get faster – I just have to wait for the next processor and my application will run faster."
We still see this so often - managers, researchers, programmers even - all waiting for the silver bullet that will make multicore processors run their application faster with no extra effort from them. There is nothing now or coming soon that will do that except for a few special cases. Getting performance from multicore processors means evolving your code for parallel processing. Tools and parallelized library plugins can help - but in many cases they won't be a substitute for re-writing key parts of the code using multithreading or similar techniques.
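To make that concrete, here is a minimal sketch (my own illustration, using OpenMP - one of several possible approaches, not a prescription) of what evolving a serial loop for multicore can look like:

    /* Illustrative only: a serial loop evolved for multicore with OpenMP.
       Compile with, e.g., gcc -fopenmp -O2 pi.c */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 10000000;
        double sum = 0.0;

        /* The loop body is unchanged; the pragma asks the runtime to split
           the iterations across threads and combine the partial sums safely. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++) {
            double x = (i + 0.5) / n;
            sum += 4.0 / (1.0 + x * x);   /* simple quadrature for pi */
        }

        printf("pi approx = %f using up to %d threads\n",
               sum / n, omp_get_max_threads());
        return 0;
    }

The point is not the pragma itself, but that someone had to identify the key loop, check that it was safe to parallelise, and handle the shared sum correctly - that is the "extra effort" that doesn't go away.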
Thursday, 10 March 2011
Meeting HPC people
About a year ago, I wrote this article for ZDNet UK, describing what I thought were some of the key events in the supercomputing/HPC community.
I said: "Many people have rightly remarked that the HPC community really is that — a community — and that there is still a relatively high degree of connection between the various practitioners. In other words, despite its growing size and global reach, it feels like a small community. People know each other. Consequently, networking, whether technical or commercial, goes a long way to helping your business."
And: "Whatever your scale of technical computing, from multicore workstations to multi-thousand-node supercomputers, getting involved with the active HPC community can help you with your parallel computing goals. Online resources can help, but by far the most effective way of benefiting from the wider HPC community is by participating at the right events."
I listed some key events, with a comment about the nature and value of each.
I have now added a survey to this website (top right) to find out which events people plan to attend in 2011.
I may have missed out your favourite conference in the original article, or in the survey above, in which case I would like to hear about it too - maybe via the comments page here, or directly.
I hope to meet some of you when out and about in the coming year ...
Labels:
events,
hpc,
people,
supercomputing
NAG out and about
[Originally posted on The NAG Blog]
The NAG website has a section called "Meet our experts - NAG out and about", which gives a list of upcoming events worldwide that NAG experts will be attending or presenting at.
The page also notes: "We regularly organise and participate in conferences, seminars and training days with our customers and partners. If you would like to talk to us about hosting a NAG seminar at your organisation or any training requirements you might have email us at sales@nag.co.uk".
In my own field of high performance computing (HPC), I have previously written (for ZDNet UK) about some key supercomputing events. For those of you interested in meeting up with HPC experts (especially from NAG!), I have set up a survey of HPC events - please let us know which events you plan to attend in 2011 - and see which events other readers of The NAG Blog are attending.
Saturday, 30 October 2010
Comparing HPC across China, USA and Europe
[Originally posted on The NAG Blog]
In my earlier blog post today on China announcing the world's fastest supercomputer, I said I'd be back with more later on the comparisons with the USA, Europe and others. In this morning's blog, I made the point that the world's fastest supercomputer, in itself, is not world changing. But leading supercomputers, critically matched with appropriate expertise in programming and using them, together with the vision to ensure use across basic research, industry and defence applications, can indeed be strategically beneficial to a nation - including real economic impact.
There are plenty of reports and studies describing the strategic impact of HPC within a given organisation or at national levels (some are catalogued by IDC here), so let's take it as a premise for the following thoughts.
Labels:
hpc,
leadership,
NAG,
petaflops,
software,
supercomputing
Friday, 29 October 2010
Why does the China supercomputer matter to western governments?
[Originally posted on The NAG Blog]
There has been a lot of fuss in the mainstream media (BBC, FT, CNET, even the Daily Mail!) over the last few days about the world's fastest supercomputer being in China for the first time. And much ado on Twitter (me too - @hpcnotes).
But much of the mainstream reporting, twitter-fest, and blogging is missing the point, I think. China deploying the world's fastest supercomputer is news (the fastest supercomputer has almost always been American for decades, with the occasional Japanese crown). But the machine alone is not the big news.
Labels:
hpc,
leadership,
NAG,
petaflops,
software,
supercomputing
Thursday, 23 September 2010
Is power-hungry supercomputing OK now?
[Article by me on ZDNet UK, 23 September, 2010]
We may be planning for a 1,000-fold increase in compute power in the next decade, but what about the extra power consumption ...
http://www.zdnet.co.uk/news/emerging-tech/2010/09/23/is-power-hungry-supercomputing-ok-now-40090137/
Labels:
exascale,
hpc,
leadership,
power,
supercomputing,
ZDNetUK
Monday, 13 September 2010
Do you want ice with your supercomputer?
[Originally posted on The NAG Blog]
“Would you like ice with your drink?” It’s a common question of course. One that divides people – few will think “I don’t mind” – most have a firm preference one way or the other. There are people who hate ice with their drink and those who freak if there is none. National stereotypes have a role to play – in the USA the question is not always asked – it’s assumed you want ice with everything. In the UK, you often have to ask specifically to get ice.
Yet the role of ice in making our drinks chilled is misleading. I once had a discussion with a leading American member of the international HPC community about this. “No ice”, he was complaining as we headed out of a European country, “they had no ice for the drink”.
“I don’t get this obsession with ice”, I chipped in. “What?!” He looked at me as if I were mad. “Why do you like your coke warm?”
“Ah, but that’s just it”, I replied. “I hate warm drinks – I really like my coke chilled. But surely, in this modern world over a century after the invention of the refrigerator, it’s not unreasonable to expect the fluid to be chilled – without the need to drop lumps of solid water into it?”
“Ah, fair point”, he conceded.
What has this got to do with supercomputing? Perhaps the common thread is that usually we just accept the habitual choices of ways to do things – and don’t often step back to think – “are those the only choices?”
Maybe we should step back a little more often and ask ourselves what we are trying to achieve with HPC – and are the usual choices the only ways forward? Or are there different ways to approach the problem that will deliver simpler, better or cheaper performance?
Perhaps your business/research goals mean you need to conduct more complex modelling or you need faster performance. Maybe the drive of computing technology towards many-core processors rather than faster processors is limiting your ability to achieve this. (I have had several conversations recently where companies are buying older technology because their software won't run on multicore.)
The “ice or no ice” question might be whether or not to upgrade your HPC with the latest multicore processors. But what about the “just chill the fluid” option? Well, how about upgrading the software instead, or as well?
NAG has plenty of case studies to show where enhancements to software have achieved huge gains in performance or capability (e.g., www.hector.ac.uk/cse/reports).
Sometimes buying more compute power is the right answer. Sometimes, extracting more efficient performance from what you have is the answer. Bringing them together - a balance of hardware upgrades and software innovations - might well give you the best chance of optimising cost efficiency, performance and sustainability of performance.
Labels:
HECToR,
hpc,
NAG,
performance,
productivity,
software,
supercomputing
Monday, 30 August 2010
Me on HPC 2
Things I have said (or have been attributed as saying - not always the same thing!) - some older interviews with me in various publications about HPC, multicore, etc ...
Successful Deployment at Extreme Scale: More than Just the Iron
The Exascale Report
August 2010, by John West
[full article requires subscription, extracts here are not complete, and are modified slightly to support that]
"cost of science, not just the cost of supercomputer ownership"
"lead time, and funding, to get the user community ready"
"spend a year or more selecting a machine and then deploy it as quickly as possible, makes it very difficult to build a community and get codes ready ahead of time"
"software must be viewed as part of the scientific instrument, in this case a supercomputer, that needs its own investment. High performance computing is really about the software; whatever hardware you are using is just an accelerator system."
"a machine is deployed and then obsolete within three years. And the users often have no idea what architecture is coming next. There is no real chance for planning, or a return on software development investment."
Monday, 19 July 2010
Time Machines and Supercomputers
[Originally posted on The NAG Blog]
I found a Linpack App for the iPhone last week. Nothing special, just a bit of five minute fun. It seems a 3G model achieves about 20 MFLOPS. [Note 1]
What's that got to do with time machines? Well it got me thinking "I wonder when 20 MFLOPS was the performance of a leading edge supercomputer?" Actually, it was before the start of the Top500 list (1993), so finding out was beyond the research I was prepared to do for this blog.
So I thought instead about the first supercomputer I used in anger. As soon as I name it, if anyone is still reading this waffle, you will immediately fall into two camps - those who think I'm too young to be nostalgic about old supercomputers yet - and those who think I'm too old to be talking about modern supercomputers :-).
It was a Cray T3D.
You're still waiting for the time machine bit ... hang on in there.
My application on that T3D sustained about 25 GFLOPS, which is about the same as a high end PC of recent years. What this means to me is that anyone who cares to apply the effort today with a high end PC could get comparable results to that work of 15-20 years ago that needed the supercomputer.
Or, in other words, that supercomputer gave us a 15-20 years time advantage over everyone who didn't have supercomputers - or a few years over others with smaller supercomputers. [Note 2]
That is one of the key benefits of High Performance Computing - the ability to get a result before a competitor - you could say HPC is a time machine for simulation and modelling.
Now for the [Notes] - which actually contain the real story!
Note 1 : It's not really true to say the iPhone 3G can do 20 MFLOPS - all we can say is that particular App achieved 20 MFLOPS on that iPhone 3G. The result is a function of both the software and the hardware. Better performance can come from optimising the application as much as from buying a more powerful phone.
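As an aside, a minimal sketch of the kind of measurement behind such a number (my own illustration in C, not the App's actual code) is to time a simple kernel, count its floating-point operations, and divide - the rate you get reflects the code and compiler as much as the hardware:

    /* Illustrative only: estimating achieved MFLOPS for a simple kernel.
       Compile with, e.g., gcc -O2 flops.c */
    #include <stdio.h>
    #include <time.h>

    #define N    2000000
    #define REPS 50

    int main(void)
    {
        static double x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

        clock_t start = clock();
        const double a = 3.0;
        for (int rep = 0; rep < REPS; rep++)
            for (int i = 0; i < N; i++)
                y[i] = a * x[i] + y[i];      /* 2 flops per element */
        double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

        double flops = 2.0 * N * REPS;       /* total floating-point operations */
        printf("approx %.1f MFLOPS (y[0] = %f)\n", flops / seconds / 1.0e6, y[0]);
        return 0;
    }

Change the kernel, the compiler flags or the data size and the reported MFLOPS changes, even on the same hardware - which is exactly the point.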
Note 2 : In fact, even with the same supercomputer, it would be hard for most people to replicate the results - simply because there was as much value in the software (physics, algorithms, performance engineering, implementation, etc) and the associated validation and verification program as there was in the supercomputer.
The supercomputer offered us a time machine. But the attention to performance and scalability in the application enabled us to actually use that time machine to get results faster than others - even if those others used the same supercomputer. And the validation and verification effort meant that we could trust what our time machine was telling us.
Labels:
hpc,
NAG,
parallel programming,
software,
supercomputing,
time machine
Wednesday, 30 June 2010
Me on HPC and multicore
Things I have said (or have been attributed as saying - not always the same thing!) - some older interviews with me in various publications about HPC, multicore, etc ...
What You Should Know about Power and Performance Efficiency
Scientific Computing, August 2010, Suzanne Tracy
"Components driving power consumption fall into two categories — those that, as consumers, we cannot control, and those we can. Power consumed by server hardware is increasing and is beyond our direct control as buyers (although manufacturers are working to optimize power efficiency). The biggest factors we can influence are design and deployment of HPC systems as a whole (datacenter included) and recognizing total cost of ownership (including power) when procuring."
"The primary strategy for optimizing power is to ensure proper total cost of ownership (including power) as the driver of procurement, not purely peak performance and initial capital cost. This enables the evolutions of datacenter optimization (e.g. run warm, “free-cooling,” hot aisles) and choices of power-efficient HPC system designs (e.g. more parallelism, lower power processors, etcetera) to be correctly attributed as delivering increased performance against cost."
"Optimizing software and algorithms is a key opportunity to dramatically improve the total cost of ownership of HPC solutions. By optimizing applications, fewer resources are required to deliver the results, thus reducing the power required. Equally, innovations in algorithms can deliver applications that are power-aware — that is, they recognize the energy consumed and the user can balance energy-cost against time-to-solution when selecting algorithms for a given simulation."
"The primary breakthrough will be the recognition of the role software (both implementation efficiency and algorithm design) has to play in delivering cost savings related to power efficiency. Beyond that, the key hardware technologies will be increased use of power switching across the system — while many modern processors will reduce power when not fully utilized, the ability to gate specific parts of the chip will improve, and the same capability will work into other parts of the system — memory, interconnect (maybe balancing power against bandwidth on a job-by-job basis), I/O, etcetera."
Multiple cores multiply programming
Scientific Computing World, June 2010, Paul Schreier
"When it comes to parallel programming, it’s easy to do something that looks right, but it’s difficult to be sure it is right and will do the same thing under all conditions," says Andrew Jones.
"We strongly urge people to use prepackaged routines such as these where other people have done the difficult work of dividing up the tasks in an optimal way," says Jones.
Personal Supercomputers?
Genomeweb, October 2009, By Matthew Dublin
"There is always going to be a class of computing power that is much bigger than anything that will physically fit on your desk because if you can buy something for $1,000 or $10,000 then there are going to be users that are prepared to buy hundreds of them for a million dollars," Jones says. "And there's always going to be something that is orders of magnitude bigger than what most people can afford but the cheap stuff gets more powerful."
"I don't think there's anything wrong with the term 'personal supercomputing' if it successfully gets a whole lot more people making use of the compute power that's available," Jones says. "It's marketing, but it's perfectly valid marketing, aimed at an audience that would normally not go anywhere near large-scale supercomputers. ... HPC can do so much for people trying to do simulations and modeling that whatever we call it to get more people to using it, the better."
With virtualization, high-performance computing becomes more mainstream
SearchServerVirtualization.com, November 2008, By Jo Maitland
"Scheduling jobs, queuing jobs, shoring up resources, determining policies such as rejecting a job that doesn't have an estimate of how long the job is going to take … these are typical HPC skills but start to overlap when you're managing a virtualized compute environment," said Andrew Jones.
Jones said he does not believe mainstream computing will ever catch up with HPC. "By definition, HPC will always be more powerful than mainstream computing," he says.