The hpcnotes HPC blog - supercomputing, HPC, high performance computing, cloud, e-infrastructure, scientific computing, exascale, parallel programming services, software, big data, multicore, manycore, Phi, GPU, HPC events, opinion, ...
Monday, 8 August 2016
See the video here: https://www.youtube.com/watch?v=uZlr3SMgGLo
Friday, 15 April 2016
HPC babble
Two things:
- I seem to have written a lot of stuff on HPC over the years (probably mostly waffle, nonsense and wildly wrong predictions).
- Here is a list of most of it: http://www.hpcnotes.com/p/interviews-quotes-articles.html.
Labels:
hpc
Monday, 9 November 2015
SC15 Preview
SC15 - the biggest get-together of the High Performance Computing (HPC) world - takes place next week in Austin, TX. Around 10,000 buyers, users, programmers, managers, business development people, funders, researchers, media, etc. will be there.
With a large technical program, an even larger exhibition, and plenty of associated workshops, product launches, user groups, etc., SC15 will dominate the world of HPC for a week, plus most of this week leading up to it. It is one of the best ways for HPC practitioners to share experiences, learn about the latest advances, and build collaborations and business relationships.
So, to whet your appetites, here is the @hpcnotes preview of SC15 - what I think might be the key topics, things to look out for, what not to miss, etc.
New supercomputers
It's always one of the aspects of SC that grabs the media and attendee attention the most. Which new biggest supercomputers will be announced? Will there be a new occupier of the No.1 spot on the Top500 list? Usually I have some idea of what new supercomputers are coming up before they are public, but this year I have no idea. My guess? No new No.1. A few new Top20 machines. So which one will win the news coverage?
New products
In spite of the community repeatedly acknowledging that the whole system is important - memory, interconnect, I/O, software, architecture, packaging, etc. - judging by the media attention and informal conversations, we still seem to get most excited by the processors.
Monday, 5 October 2015
HPC Bingo
A big part of SC (Austin in 2015) is actually getting there. Most attendees will have to navigate the joys of long distance air travel. If you travel enough, or play the game wisely, you can secure frequent flyer elite status which helps make the air travel more bearable. Here is a version of elite status bingo for HPC. I listed some categories and "achievements" required for each. Can you claim elite HPC status?
HPC System User category
There have been lots of systems in HPC over the years, but we should stick to options that even a recent recruit to HPC might be able to claim. You can award yourself this category if you have used (logged into and run or compiled code) each of these systems:
- IBM Power system
- Cray XT, XE, or XC
- SGI shared memory system - Origin, Altix or UV
- x86 cluster
- A system with any one of Sparc, vector, ARM, GPU, Phi, or FPGA
HPC Programmer category
Award yourself this category if you have written programs to run on a HPC system in each of these:
- Fortran 77
- Fortran 90 or later
- C
- MPI
- OpenMP
- Any one of CUDA, OpenACC, OpenCL, Python, R, Matlab
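If the MPI or OpenMP boxes are the ones you're still missing, the classic first exercise is numerical integration of pi. Here is a minimal serial sketch in Python (one of the list entries) - the function name and loop count are my own illustrative choices; the point is that the loop is exactly what you would split across MPI ranks or OpenMP threads:

```python
def estimate_pi(n=100_000):
    # Midpoint-rule integration of 4/(1+x^2) over [0,1] - the
    # standard starter exercise in MPI and OpenMP tutorials alike.
    # In the parallel versions, each rank/thread takes a strided
    # share of the i-loop and the partial sums are reduced.
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += 4.0 / (1.0 + x * x)
    return total * h

print(estimate_pi())
```

Parallelising it is a one-liner in OpenMP (a parallel for with a reduction) or a strided loop plus MPI_Reduce in MPI, which is why it turns up in every tutorial.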
HPC Talker/Buzzword category
Buzzwords seem to be an integral part of HPC. To be awarded this category, you must have used each of these in talks (PowerPoint etc.) since SC14:
- Big Data
- Any of green computing, energy efficient computing, or power aware computing
- One of my HPC analogies?
- "it's all about the science" (but then just talked about the HPC like everyone else!!)
- Any reference to "FLOPS are free, data movement is hard" or similar
- Exascale
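That "FLOPS are free" line deserves a worked number, because it is simple arithmetic rather than just a slogan. Using assumed round figures (not any particular machine) for node peak and memory bandwidth, a low-arithmetic-intensity kernel like daxpy hits a bandwidth ceiling far below peak:

```python
# Illustrative, assumed round numbers - not any specific machine.
peak_flops = 2.0e12   # 2 TFLOP/s double-precision peak per node
mem_bw = 200.0e9      # 200 GB/s sustained memory bandwidth

# daxpy: y[i] = a*x[i] + y[i] -> 2 flops per element, with
# 16 bytes read (x[i], y[i]) and 8 bytes written per element,
# so its arithmetic intensity is 2 flops per 24 bytes moved.
flops_per_byte = 2.0 / 24.0

# Bandwidth-bound ceiling for daxpy on this hypothetical node:
daxpy_flops = mem_bw * flops_per_byte
print(f"daxpy ceiling: {daxpy_flops / 1e9:.1f} GFLOP/s "
      f"({100 * daxpy_flops / peak_flops:.1f}% of peak)")
```

With these assumed numbers the ceiling comes out around 17 GFLOP/s, under 1% of peak - hence "data movement is hard".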
Previous SC content ...
I'll write some new content for SC15 Austin soon but while you are waiting, here are two of my previous writings on SC:
Enjoy!
Labels:
hpc,
SC15,
supercomputing
Essential Analogies for the HPC Advocate
This is an update of a two-part article I wrote for HPC Wire in 2013: Part 1 and Part 2.
An important ability for anyone involved in High Performance Computing (HPC or supercomputing or big data processing, etc.) is to be able to explain just what HPC is to others.
"Others” include politicians, Joe Public, graduates possibly interested in HPC, industry managers trying to see how HPC fits into their IT or R&D programs, or family asking for the umpteenth time “what exactly do you do?”
One of the easiest ways to explain HPC is to use analogies that relate the concepts to things that the listener is more familiar with. So here is a run-through of some useful analogies for explaining HPC or one of its concepts:
- The simple yet powerful: A spade
- The moral high ground: A science/engineering instrument
- Duh! Clue’s in the name: Big computer
- The testosterone favorite: Formula 1
- The TARDIS factor: Time Machine
- Not special, just normal: Library
- Imagine a silly task: Aircraft vs. Car
- Monuments: Ecosystems
- The HPC Hotel
The simple yet powerful: A spade
Need to dig a hole? Use the right tool for the job – a spade. Need to dig a bigger hole, or a hole through tougher material like concrete? Use a more powerful tool – a mechanical digger.
Now instead of digging a hole, consider modeling and simulation. If the model/simulation is too big or too complex – use the more powerful tool: i.e. HPC. It’s nice and simple – HPC is a more powerful tool that can tackle more complex or bigger models/simulations than ordinary computers.
There are some great derived analogies too. You should be able to give a spade to almost anyone and they should be able to dig a hole without too much further instruction. But, hand a novice the keys to a mechanical digger, and it is unlikely they will be able to effectively operate the machine without either training or a lot of on-the-job learning. Likewise, HPC requires training to be able to use the more powerful tool effectively. Buying a mechanical digger also requires expertise that buying a spade doesn’t. And so on.
It neatly focuses on the purpose and benefit of HPC rather than the technology itself. If you’ve heard any of my talks recently you will know this is an HPC analogy that I use myself frequently.
The moral high ground: A science/engineering instrument
I’ve occasionally accused the HPC community of being riddled with hypocrites – we make a show of “the science is what matters” and then proceed to focus the rest of the discussion on the hardware (and, if feeling pious or guilty, we mention “but software really matters”).
However, there is a critical truth to this – the scientific (or engineering) capability is what matters when considering HPC. I regularly use this perspective, often very firmly, myself: a supercomputer is NOT a computer – it is a major scientific instrument that just happens to be built using computer technology. Just because it is built from most of the same components as commodity servers does not mean that modes of usage, operating skills, user expectations, etc. should be the same. This helps to put HPC into the right context in the listener’s mind – compare it to a major telescope, a wind tunnel, or even the LHC at CERN.
The derived analogies are effective too – expertise in the technology itself is required, not just the science using the instrument. Sure, the skills overlap but they are distinct and equally important.
This analogy focuses on the purpose and benefit of HPC, but also includes a reference to it being based on a big computer.
Labels:
analogies,
explain hpc,
hpc,
HPCwire,
supercomputer,
time machine
Thursday, 27 August 2015
The price of open-source software - a joint response
This viewpoint is published jointly on software.ac.uk, hpcnotes.com (personal blog), danielskatzblog.wordpress.com (personal blog) under a CC-BY licence. It was written by Neil Chue Hong (Software Sustainability Institute), Simon Hettrick (Software Sustainability Institute), Andrew Jones (@hpcnotes & NAG), and Daniel S. Katz (University of Chicago & Argonne National Laboratory)
In their recent paper, Krylov et al. [1] state that the goal of the research community is to advance “what is good for scientific discovery.” We wholeheartedly agree. We also welcome the debate on the role of open source in research, begun by Gezelter [2], in which Krylov was participating. However, we have several concerns with Krylov’s arguments and reasoning on the best way to advance scientific discovery with respect to research software.
Gezelter raises the question of whether it should be standard practice for software developed by publicly funded researchers to be released under an open-source licence. Krylov responds that research software should be developed by professional software developers and sold to researchers.
We advocate that software developed with public funds should be released as open-source by default (supporting Gezelter’s position). However, we also support Krylov’s call for the involvement of professional software developers where appropriate, and support Krylov’s argument that researchers should be encouraged to use existing software where possible. We acknowledge many of Krylov’s arguments about the benefits of professionally written and supported software.
Our first major concern with Krylov’s paper is its focus on arguing against an open-source mandate on software developed by publicly funded researchers. To the knowledge of the authors, no such mandate exists. It appears that Krylov is pre-emptively arguing against the establishment of such a mandate, or even against it becoming “standard practice” in academia. There is a significant difference between a recommendation of releasing as open-source by default (which we firmly support) and a mandate that all research software must be open source (which we don’t support, because it hinders the flexibility that scientific discovery needs).
Our second major concern is Krylov’s assumption that the research community could rely entirely on software purchased from professional software developers. We agree with this approach whenever it is feasible. However, by concentrating on large-scale quantum chemistry software, Krylov overlooks the diversity of software used in research. A significant amount of research software is at a smaller scale: from few-line scripts to short programs. Although it is of fundamental importance to research, this small-scale software is typically used by only a handful of researchers. There are many benefits in employing professionals to develop research software but, since so much research software is not commercially viable, the vast majority of it will continue to be developed by researchers for their own use. We do advocate researchers engaging with professional software developers as far as appropriate when developing their own software.
Our desire is to maximise the benefit of software by making it open—allowing researchers other than the developers to read, understand, modify, and use it in their own research—by default. This does not preclude commercial licensing where it both is feasible and is the best way of maximising the software benefit. We believe this is also the central message of Gezelter.
In addition to these two fundamental issues with Krylov, we would like to respond to some of the individual points raised.
Labels:
open source,
research,
software,
software licences
Tuesday, 17 June 2014
Secrets of the Supercomputers
These are revelations from inside the strange world of supercomputing centers. Nobody is pretending these are real stories. They couldn’t possibly be. Could they?
On one of my many long haul airplane journeys this year, I caught myself thinking about the strange things that go on inside supercomputer centers - and other parts of the HPC world. I thought it might be fun to poke at and mock such activities while trying to make some serious points.
Since the flight was a long one, I started writing ... and so "Secrets of the Supercomputers" was born.
You can find Episode 1 at HPC Wire today, touching on the topic of HPC procurement.
No offense to anyone intended. Gentle mocking maybe. Serious lessons definitely.
Take a look here for some serious comments on HPC procurement at the NAG blog.
Labels:
fun,
hpc,
HPCwire,
procurement,
spoof
Tuesday, 10 June 2014
Silence ...
Really, October 2013? That long since I wrote a blog post? Not even anything for SC13? Oops. Still, busy is good. It would be nice to get back to regular blog posts though. Maybe a preview of ISC14 in the next few days ...
Friday, 18 October 2013
Essential guide to HPC on twitter
Please read the updated version of this post at:
https://www.hpcnotes.com/p/hpc-on-twitter.html
(Original kept here for reference)
Who are the best HPC people on twitter?
A good question posed by Suhaib Khan (@suhaibkhan) - which he made tougher by saying "pick your top 5". A short debate followed on twitter but I thought the content was useful enough to record in a blog post for community reference. I also strongly urge anyone to provide further input to this topic and I'll update this post.
Some rules (mine not Suhaib's):
- What is the minimum set of accounts you can follow and still expect to catch most of the HPC news, gossip, opinion pieces, analysis and key technical content?
- How to avoid too much marketing?
- How to access comment and debate beyond the news headlines?
- Which HPC people are not only active but also interactive on twitter?