Thursday, 24 March 2011
I contributed to this article in the March 2011 issue of The Exascale Report, by Mike Bernhardt.
"Initiatives are being launched, research centers are being established, teams are being formed, but in reality, we are barely getting started with exascale research. Opinions vary as to where we should be focusing our resources.
In this issue, The Exascale Report asks NAG's Andy Jones, Lawrence Livermore's Dona Crawford, and Growth Science International's Thomas Thurston where should we (as a global community) be placing our efforts today with exascale research and development?"
Thursday, 17 March 2011
The Addictive Allure of Supercomputing
The European Medical Device Technology (EMDT) magazine interviewed me recently. InsideHPC has also pointed to the interview here.
The interview discusses false hopes of users: "Computers will always get faster – I just have to wait for the next processor and my application will run faster."
We still see this so often - managers, researchers, even programmers - all waiting for the silver bullet that will make multicore processors run their application faster with no extra effort from them. Nothing available now or coming soon will do that, except in a few special cases. Getting performance from multicore processors means evolving your code for parallel processing. Tools and parallelized library plugins can help, but in many cases they are no substitute for re-writing key parts of the code using multithreading or similar techniques.
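To make that concrete, here is a minimal sketch - my own illustration, not anything from the interview - of the kind of evolution involved, using plain C with OpenMP. The serial loop gains nothing from extra cores until the work is explicitly divided across threads:

/* Minimal sketch (illustrative only): evolving a serial loop for
   multicore with OpenMP. Compile with: gcc -O2 -fopenmp saxpy.c */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double x[N], y[N];
    const double a = 2.0;

    for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

    /* Without the pragma this loop runs on one core however many are
       available; the pragma divides the iterations across threads. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %.1f, up to %d threads\n", y[0], omp_get_max_threads());
    return 0;
}

The pragma is trivial here; in real codes the hard part is finding the loops that are safe to split and restructuring the ones that are not.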
Monday, 30 August 2010
Me on HPC 2
Things I have said (or have been attributed as saying - not always the same thing!) - some older interviews with me in various publications about HPC, multicore, etc ...
Successful Deployment at Extreme Scale: More than Just the Iron
The Exascale Report
August 2010, by John West
[the full article requires a subscription; the extracts here are incomplete and slightly modified to respect that]
"cost of science, not just the cost of supercomputer ownership"
"lead time, and funding, to get the user community ready"
"spend a year or more selecting a machine and then deploy it as quickly as possible, makes it very difficult to build a community and get codes ready ahead of time"
"software must be viewed as part of the scientific instrument, in this case a supercomputer, that needs its own investment. High performance computing is really about the software; whatever hardware you are using is just an accelerator system."
"a machine is deployed and then obsolete within three years. And the users often have no idea what architecture is coming next. There is no real chance for planning, or a return on software development investment."
Wednesday, 30 June 2010
Me on HPC and multicore
Things I have said (or have been attributed as saying - not always the same thing!) - some older interviews with me in various publications about HPC, multicore, etc ...
What You Should Know about Power and Performance Efficiency
Scientific Computing, August 2010, Suzanne Tracy
"Components driving power consumption fall into two categories — those that, as consumers, we cannot control, and those we can. Power consumed by server hardware is increasing and is beyond our direct control as buyers (although manufacturers are working to optimize power efficiency). The biggest factors we can influence are design and deployment of HPC systems as a whole (datacenter included) and recognizing total cost of ownership (including power) when procuring."
"The primary strategy for optimizing power is to ensure proper total cost of ownership (including power) as the driver of procurement, not purely peak performance and initial capital cost. This enables the evolutions of datacenter optimization (e.g. run warm, “free-cooling,” hot aisles) and choices of power-efficient HPC system designs (e.g. more parallelism, lower power processors, etcetera) to be correctly attributed as delivering increased performance against cost."
"Optimizing software and algorithms is a key opportunity to dramatically improve the total cost of ownership of HPC solutions. By optimizing applications, fewer resources are required to deliver the results, thus reducing the power required. Equally, innovations in algorithms can deliver applications that are power-aware — that is, they recognize the energy consumed and the user can balance energy-cost against time-to-solution when selecting algorithms for a given simulation."
"The primary breakthrough will be the recognition of the role software (both implementation efficiency and algorithm design) has to play in delivering cost savings related to power efficiency. Beyond that, the key hardware technologies will be increased use of power switching across the system — while many modern processors will reduce power when not fully utilized, the ability to gate specific parts of the chip will improve, and the same capability will work into other parts of the system — memory, interconnect (maybe balancing power against bandwidth on a job-by-job basis), I/O, etcetera."
Multiple cores multiply programming
Scientific Computing World, June 2010, Paul Schreier
"When it comes to parallel programming, it’s easy to do something that looks right, but it’s difficult to be sure it is right and will do the same thing under all conditions," says Andrew Jones.
"We strongly urge people to use prepackaged routines such as these where other people have done the difficult work of dividing up the tasks in an optimal way," says Jones.
Personal Supercomputers?
Genomeweb, October 2009, By Matthew Dublin
"There is always going to be a class of computing power that is much bigger than anything that will physically fit on your desk because if you can buy something for $1,000 or $10,000 then there are going to be users that are prepared to buy hundreds of them for a million dollars," Jones says. "And there's always going to be something that is orders of magnitude bigger than what most people can afford but the cheap stuff gets more powerful."
"I don't think there's anything wrong with the term 'personal supercomputing' if it successfully gets a whole lot more people making use of the compute power that's available," Jones says. "It's marketing, but it's perfectly valid marketing, aimed at an audience that would normally not go anywhere near large-scale supercomputers. ... HPC can do so much for people trying to do simulations and modeling that whatever we call it to get more people to using it, the better."
With virtualization, high-performance computing becomes more mainstream
SearchServerVirtualization.com, November 2008, By Jo Maitland
"Scheduling jobs, queuing jobs, shoring up resources, determining policies such as rejecting a job that doesn't have an estimate of how long the job is going to take … these are typical HPC skills but start to overlap when you're managing a virtualized compute environment," said Andrew Jones.
Jones said he does not believe mainstream computing will ever catch up with HPC. "By definition, HPC will always be more powerful than mainstream computing," he says.
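For what it's worth, the admission policy mentioned in that quote can be sketched in a few lines - purely illustrative, not any real scheduler's code:

/* Illustrative admission check; real resource managers (PBS, LSF,
   etc.) carry far more state than this. */
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    const char *name;
    int walltime_estimate_s;  /* 0 means the user gave no estimate */
    int cores_requested;
} job_t;

/* Policy from the quote: refuse a job with no runtime estimate,
   because the scheduler cannot plan or backfill around it. */
static bool admit(const job_t *j, int cores_free) {
    if (j->walltime_estimate_s <= 0) {
        printf("reject %s: no walltime estimate\n", j->name);
        return false;
    }
    if (j->cores_requested > cores_free) {
        printf("queue %s: not enough free cores\n", j->name);
        return false;
    }
    return true;
}

int main(void) {
    job_t a = { "cfd_run", 3600, 64 };
    job_t b = { "mystery",    0,  8 };
    printf("cfd_run admitted: %d\n", admit(&a, 128));
    printf("mystery admitted: %d\n", admit(&b, 128));
    return 0;
}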
Thursday, 14 August 2008
NAG Embarks on a New Business Venture
[Interview with me in HPCwire, August 14, 2008]
by John E. West, for HPCwire
... responding to changes in computing at both ends of the spectrum, [NAG] is positioning itself as the place to go, not just for shrink-wrapped libraries, but also for education and expertise in how to program in parallel, and even for expert advice on how to buy, build and run your own supercomputer. HPCwire talked to Andrew Jones, vice-president of HPC business at NAG, on what he has in mind for this new business and how he sees the future of HPC and parallel programming shaping up ...
http://www.hpcwire.com/features/NAG_Embarks_on_a_New_Business_Venture.html?viewAll=y
Labels: hpc, HPCwire, interview, John West, multicore, NAG, parallel programming, procurement, productivity, software, strategy, supercomputing