These are revelations from inside the strange world of supercomputing centers. Nobody is pretending these are real stories. They couldn’t possibly be. Could they?
On one of my many long-haul airplane journeys this year, I caught myself thinking about the strange things that go on inside supercomputer centers - and other parts of the HPC world. I thought it might be fun to poke at and mock such activities while trying to make some serious points.
Since the flight was a long one, I started writing ... and so "Secrets of the Supercomputers" was born.
You can find Episode 1 at HPC Wire today, touching on the topic of HPC procurement.
No offense to anyone intended. Gentle mocking maybe. Serious lessons definitely.
Take a look here for some serious comments on HPC procurement at the NAG blog.
Tuesday, 17 June 2014
Tuesday, 10 June 2014
Silence ...
Really, October 2013? That long since I wrote a blog post? Not even anything for SC13? Oops. Still, busy is good. It would be nice to get some more blog posts out again though. Maybe a preview of ISC14 in the next few days ...
Friday, 18 October 2013
Essential guide to HPC on twitter
Please read the updated version of this post at:
https://www.hpcnotes.com/p/hpc-on-twitter.html
(Original kept here for reference)
Who are the best HPC people on twitter?
A good question posed by Suhaib Khan (@suhaibkhan) - which he made tougher by saying "pick your top 5". A short debate followed on twitter but I thought the content was useful enough to record in a blog post for community reference. I also strongly urge anyone to provide further input to this topic and I'll update this post.
Some rules (mine not Suhaib's):
- What is the minimum set of accounts you can follow and still expect to catch most of the HPC news, gossip, opinion pieces, analysis and key technical content?
- How to avoid too much marketing?
- How to access comment and debate beyond the news headlines?
- Which HPC people are not only active but also interactive on twitter?
Thursday, 10 October 2013
Supercomputing - the reality behind the vision
My opinion piece "Supercomputing - the reality behind the vision" was published today in Scientific Computing World, where I:
- liken a supercomputer to a "pile of silicon, copper, optical fibre, pipework, and other heavy hardware [...] an imposing monument that politicians can cut ribbons in front of";
- describe system architecture as "the art of balancing the desires of capacity, performance and resilience against the frustrations of power, cooling, dollars, space, and so on";
- introduce software as magic, as infrastructure, and as a virtual knowledge engine;
- note that "delivering science insight or engineering results from [supercomputing] requires users";
- and propose that we need a roadmap for people just as much as for the hardware technology.
Read the full article here: http://www.scientific-computing.com/news/news_story.php?news_id=2270.
Labels:
explain hpc,
hpc,
people,
strategy,
supercomputing
Friday, 30 August 2013
All software needs to be parallel
In short, if your software does not exploit parallel processing techniques, then your code is limited to less than 2% of the potential performance of the processor. And this is just for a single processor - it is even more critical if the code has to run on a cluster or a supercomputer.
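The sub-2% figure follows from simple core-count and vector-width arithmetic. As a rough sketch (the processor parameters below are illustrative assumptions, not taken from the post), a 2013-era server chip with 8 cores, 4-wide double-precision vector units, and fused multiply-add offers roughly a 64x factor over serial, scalar code:

```python
# Illustrative (assumed) parameters for a 2013-era server processor.
cores = 8          # physical cores; serial code uses only one
vector_lanes = 4   # e.g. AVX: 4 double-precision lanes per instruction
fma = 2            # fused multiply-add: 2 flops per lane per cycle

# Peak advantage of fully parallel, vectorized code over serial scalar code.
peak_factor = cores * vector_lanes * fma   # 64x
serial_fraction = 1 / peak_factor

print(f"Serial scalar code reaches at most {serial_fraction:.1%} of peak")
```

With these assumptions the answer is about 1.6% - i.e. "less than 2%" - and the factor only gets worse as core counts and vector widths grow, before a cluster's other nodes are even counted.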
Labels:
hpc,
parallel programming,
software
Thursday, 18 July 2013
An early blog about SC13 Denver - just for fun ...
As SC13 registration opens this week, it occurs to me both how far away SC13 is (a whole summer and several months after that) and how close SC13 is (only a summer and a month or two). It got me thinking about how far ahead people plan for SC. I have heard of people who book hotels for the next SC as soon as they get home from the previous one (to secure the best deal/hotel/etc.). I have also heard stories of those who still have not booked flights only days before SC.
So, just for fun - how far ahead do you plan your travel for SC? Are you the kind of HPC person who books SC13 as soon as SC12 has ended? Or do you leave SC13 travel booking until a week or two before SC13? Of course, it may not be up to you - many attendees need to get travel authority etc. and this is often hard to get a long time in advance.
Please complete the survey here - http://www.surveymonkey.com/s/3MRSYYH
Once I have enough responses, I will write another blog revealing the results.
Enjoy!
[PS - this survey is not on behalf of, or affiliated with, either the SC13 organisers or anyone else - it's just a curiosity and to share in a blog later.]
Labels:
events,
sc13,
supercomputing
Monday, 10 June 2013
China supercomputer to be world's fastest (again) - Tianhe-2
It seems that China's Tianhe-2 supercomputer will be confirmed as the world's fastest supercomputer in the next Top500 list, to be revealed at the ISC'13 conference next week.
I was going to write about the Chinese Tianhe-2 supercomputer and how it matters to the USA and Europe - then I found these old blog posts of mine:
- Why does the China supercomputer matter to western governments?
- Comparing HPC across China, USA and Europe
Labels:
china,
isc13,
leadership,
supercomputer,
top500
Sunday, 2 June 2013
Supercomputing goes to Leipzig - a preview of ISC13
I have written my preview of ISC13 over at the NAG Blog ... a new location, Tianhe-2, MIC vs. GPU, industry, exascale, big data and ecosystems. Not quite HPC keyword bingo but close :-)
See you there!
Tuesday, 12 March 2013
Name that supercomputer (Quiz)
Instead of a sensible HPC blog post, how about some fun? Can you name these supercomputers?
I'm looking for actual machine names (e.g. 'Sequoia') and the host site (e.g. LLNL). Bonus points for the funding agency (e.g. DOE NNSA) and the machine type (e.g. IBM BlueGene/Q).
Submit your guesses or knowledgeable answers either through the comments field below, or to me on twitter (@hpcnotes).
For the photos, if you are stuck, you might need to use clues from my twitter stream as to where I have been recently.
Answers will be revealed once there have been enough guesses to amuse me. Have fun!
- Which supercomputer are we looking underneath?
- Acceptance of this leading system became an HPC news topic recently
- NAG provides the Computational Science & Engineering Support Service for this one
- One letter is all that’s needed to describe this supercomputer
- Racing cattle powered by Greek letters
- Spock was one of these
- Which supercomputer does this photo show the inner rows of?
- Memory with a deerstalker & pipe
- Put an end to Ming (or did he)?
- This plant/leaf is normally silver when used as the national symbol of this one’s host country
Labels:
fun,
hpc,
quiz,
supercomputers
Friday, 11 January 2013
Predictions for 2013 in HPC
As we stumble into the first weeks of 2013, it is the season for predictions about what the coming year will bring. In my case, following my recent review of HPC in 2012, I get to make some predictions for the world of HPC in 2013.
Buzzwords
First up, this year’s buzzword for HPC marketing and technology talks. Last year was very much the year of “Big Data” as a buzzword. As that starts to become old hat (and real work) a new buzzword will be required. Cynical? My prediction is that this year will see Big Data still present in HPC discussions and real usage but it will diminish in use as a buzzword. 2013 will probably spawn two buzzwords.
The first buzzword will be “energy-efficient computing”. We saw a little use of this last year but I think it will become the dominant buzzword this year. Most technical talks will include some reference to energy-efficient computing (or the energy cost of the solution, etc.). All marketing departments will swing into action to brand their HPC products and services as energy-efficient computing – much as they did with Big Data and before that, Cloud Computing, and so on. Yes, I’m being a tad cynical about the whole thing. I’m not suggesting that energy efficiency is not important – in fact it is essential to meet our ambitions in HPC. I’m merely noting its impending over-use as a theme. And of course, energy-efficient computing is not the same as Green Computing – after all that buzzword is several years old now.
Energy efficiency will be driven by the need to find lower-power solutions for exascale-era supercomputers (not just exascale systems but the small departmental petascale systems that will be expected at that time – not to mention consumer-scale devices). It is worth noting that optimizing for power and for energy may not be the same thing. The technology will also drive the debate – especially the anticipated contest between GPUs and Xeon Phi. And politically, energy-efficient computing sounds better for attracting investment than “HPC technology research”.
Thursday, 20 December 2012
A review of 2012 in supercomputing - Part 2
This is Part 2 of my review of the year 2012 in supercomputing and related matters.
In Part 1 of the review I re-visited the predictions I made at the start of 2012 and considered how they became real or not over the course of the year. This included cloud computing, Big Data (mandatory capitalization!), GPU, MIC, and ARM - and software innovation. You can find Part 1 here: http://www.hpcnotes.com/2012/12/a-review-of-2012-in-supercomputing-part.html.
Part 2 of the review looks at the themes and events that emerged during the year. As in Part 1, this is all thoroughly biased, of course, towards things that interested me throughout the year.
The themes that stick out in my mind from HPC/supercomputing in 2012 are:
- The exascale race stalls
- Petaflops become "ordinary"
- HPC seeks to engage a broader user community
- Assault on the Top500
The exascale race stalls
The global race towards exascale supercomputing has been a feature of the last few years. I chipped in myself at the start of 2012 with a debate on the "co-design" mantra.
Confidently tracking the Top500 trend lines, the HPC community had pinned 2018 as the inevitable arrival date of the first supercomputer with a peak performance in excess of 1 exaflops. [Note the limiting definition of the target - loosely coupled computing complexes with aggregate capacity greater than exascale will probably turn up before the HPC machines - and peak performance in FLOPS is the metric here - not application performance or any assumptions of balanced systems.]
Some more cautious folk hedged a delay into their arrival dates and talked about 2020. However, it became apparent throughout 2012 that the US government did not have the appetite (or political support) to commit to being the first to deploy an exascale supercomputer. Other regions of the world have - like the US government - stated their ambitions to be among the leaders in exascale computing. But no government has yet stood up and committed to a timetable, nor to being the first to get there. Critically, neither has anyone committed the R&D funding needed now to develop the technologies [hardware and software] that will make exascale supercomputing viable.
The consensus at the end of 2012 seems to be towards a date of 2022 for the first exascale supercomputer - and there is no real consensus on which country will win the race to have the first exascale computer.
Perhaps we need to re-visit our communication of the benefits of more powerful supercomputers to the wider economy and society (what is the point of supercomputers?). Communicating the value to society and describing the long-term investment requirements is always a fundamental need for any specialist technology, but it becomes crucial during the testing fiscal conditions (and thus political pressures) that governments face right now.
Labels:
blue waters,
data,
exascale,
hpc,
people,
petaflops,
supercomputing,
top500