An important skill for anyone involved in High Performance Computing (HPC, or supercomputing, or big data processing, etc.) is being able to explain just what HPC is to others.
"Others” include politicians, Joe Public, graduates possibly interested in HPC, industry managers trying to see how HPC fits into their IT or R&D programs, or family asking for the umpteenth time “what exactly do you do?”
One of the easiest ways to explain HPC is to use analogies that relate the concepts to things that the listener is more familiar with. So here is a run-through of some useful analogies for explaining HPC or one of its concepts:
- The simple yet powerful: A spade
- The moral high ground: A science/engineering instrument
- Duh! Clue’s in the name: Big computer
- The testosterone favorite: Formula 1
- The TARDIS factor: Time Machine
- Not special, just normal: Library
- Imagine a silly task: Aircraft vs. Car
- Monuments: Ecosystems
- The HPC Hotel
The simple yet powerful: A spade
Need to dig a hole? Use the right tool for the job – a spade. Need to dig a bigger hole, or a hole through tougher material like concrete? Use a more powerful tool – a mechanical digger.
Now instead of digging a hole, consider modeling and simulation. If the model/simulation is too big or too complex – use the more powerful tool: i.e. HPC. It’s nice and simple – HPC is a more powerful tool that can tackle more complex or bigger models/simulations than ordinary computers.
There are some great derived analogies too. You should be able to give a spade to almost anyone and they should be able to dig a hole without much further instruction. But hand a novice the keys to a mechanical digger, and it is unlikely they will be able to operate the machine effectively without either training or a lot of on-the-job learning. Likewise, HPC requires training before users can exploit the more powerful tool effectively. Buying a mechanical digger also requires expertise that buying a spade doesn’t. And so on.
It neatly focuses on the purpose and benefit of HPC rather than the technology itself. If you’ve heard any of my talks recently, you will know this is an HPC analogy that I use frequently myself.
The moral high ground: A science/engineering instrument
I’ve occasionally accused the HPC community of being riddled with hypocrites – we make a show of “the science is what matters” and then proceed to focus the rest of the discussion on the hardware (and, if feeling pious or guilty, we mention “but software really matters”).
However, there is a critical truth to this – the scientific (or engineering) capability is what matters when considering HPC. I regularly use this perspective, often very firmly, myself: a supercomputer is NOT a computer – it is a major scientific instrument that just happens to be built using computer technology. Just because it is built from most of the same components as commodity servers does not mean that the modes of usage, operating skills, user expectations, etc., should be the same. This helps to put HPC into the right context in the listener’s mind – compare it to a major telescope, a wind tunnel, or even the LHC at CERN.
The derived analogies are effective too – expertise in the technology itself is required, not just the science using the instrument. Sure, the skills overlap but they are distinct and equally important.
This analogy focuses on the purpose and benefit of HPC, but also includes a reference to it being based on a big computer.
Duh! Clue’s in the name: Big computer
I see this in so many “Intro to HPC” type courses – defining HPC as a computer 1000x more powerful than a desktop computer. Or worse, a computer that costs several million dollars, requires a megawatt of power, and fills a room. For bonus points, the weight of the machine or the volume of cooling water it churns through can be quoted.
This is not really an analogy – it is simply a statement of the fact that HPC usually involves extreme computer hardware (albeit a narrow definition of HPC). But the reader/listener is left clueless as to why anyone would fill a room with computers and stump up for a $1m/year electricity bill.
In fact, I would go as far as to say that this type of description of HPC (“it’s a big computer”) should be banned from the repertoire of any HPC person wishing to retain the community’s respect – unless it is used in conjunction with a solid and inspiring description of the purpose and benefits of HPC.
The testosterone favorite: Formula 1
This analogy is often used to explain how HPC relates to “normal” IT.
Normal IT is your family car (apparently Americans call this an “automobile”). It gets you from A to B (and indeed lots of other places on the map, provided your satnav is playing nicely). It uses commodity components. Most adults can learn to drive it (although the quality of some people’s driving is suspect).
HPC is like Formula 1 (a proper motor racing sport – not sure about NASCAR’s let’s-drive-in-circles-for-ages). It allows a higher budget to achieve the highest performance within a set of constraints (rules for F1; power, etc., for HPC). It uses specialized components. Few adults can learn to drive F1 cars effectively. Fewer still will get a chance to drive one. The relationship between F1/car and HPC/IT is often compared too – F1 (HPC) is at the leading edge of motoring (computing) technology, and successful technologies trickle down to mass-production use in family cars (common IT).
However, this analogy focuses on the technology and fails to relate the purpose or benefit of HPC. It also perpetuates a picture of a niche activity relevant only to a few – which is part of the perception issue that HPC needs to break free of. Overall, I have personally avoided this analogy for these reasons.
The TARDIS factor: Time Machine
Computing capability continues to increase, whether driven by Moore's Law or not. This means that anyone who cares to apply the effort today with a high-end PC could get results comparable to work undertaken a decade ago that needed the biggest supercomputers of the time.
But it is better to look at it the other way around: that supercomputer gave its user a decade’s time advantage over others who didn’t have supercomputers – or a few years over others with smaller supercomputers.
This is the essence of HPC – the ability to get a result before a competitor. You could say HPC is a time machine for simulation and modeling.
Of course, the capability is a function of both the software and the hardware. In fact, even with the same supercomputer, it could be hard for others to replicate the results – because there is usually as much value in the software (physics, algorithms, performance engineering, implementation, etc.) and the associated validation and verification program as in the supercomputer itself.
The supercomputer offers the user a time machine. But the investment in the software (performance, scalability, algorithms, etc.) enables that user to actually use that time machine to get results faster than others – even if those others used the same supercomputer. And the validation and verification efforts enable users to trust what the time machine is telling them.
Not special, just normal: Library
One of the great HPC analogies I have heard describes where HPC should sit in the make-up of R&D organizations, especially universities. It says that HPC should occupy the same position in any research organization (university) as a library – i.e., a core part of the essential infrastructure and a research tool that can be turned to many projects. A university in the last few centuries without a library? As silly as a modern R&D organization without access to HPC facilities.
There are tiers of libraries too. Supporting the university library are national libraries with a greater breadth of material. Equally important are the local research group libraries, with much more specialized texts that may not be found in the larger, more general-purpose libraries. And the local libraries have a lower barrier to access. I’m sure the reader can work out the analogies to the traditional pyramid of HPC tiers.
Imagine a silly task: Aircraft vs. Car
One of the favorite hunting grounds for HPC analogies is explaining the nature and usefulness of the capability vs. capacity distinction. First, let me get a common mistake out of the way – I often see people trying to describe capability as the role of a supercomputer and capacity as the role of a cluster. There is no reason why a well-architected commodity cluster cannot do capability computing, and certainly poorly implemented supercomputers can be useless for capability work.
Usually we start by asking the reader/listener to imagine a task that needs doing. Let’s say we have to move a thousand shoe boxes from one city to another. We can load up a car (or a group of cars, if we have a team of willing friends) with boxes and drive them to the new location, and repeat as needed. As the problem gets bigger (more boxes or more distant cities), the cars take longer to complete the task, or more cars are needed. However, the cars can still do the job.
Now, imagine the destination city is across an ocean. It doesn’t matter how many cars are put on to the job or how much time is allocated, the cars cannot move the boxes across the ocean. But a cargo airplane can. This is capability – a job that cannot be achieved without that platform.
In HPC, capability computing jobs are those that cannot be completed by waiting longer or by using a collection of smaller resources. This is often equated to jobs that require the use of the whole supercomputer (or half of it, or some other large fraction) – but this is not a general definition of capability. A capability job might only require a small fraction of the machine but need some special feature that only that machine has. And not all jobs that use the full size of a system are capability jobs.
There is also a great derived analogy – the aircraft can be used for both jobs (assuming availability of runways, etc.). And so a capability computing system can be used for capacity work too – but the reverse is not true. Of course, a system designed for capability might not be as cost-effective when used for capacity workloads.
Monuments: Ecosystems
Another aspect of HPC that cries out for effective analogies is the need to explain why supercomputing needs proper resourcing – i.e., people and software, not just a room-filling lump of silicon and copper.
One impactful analogy I have heard is to describe supercomputers purchased or deployed without adequate matching investment in software and people as “monuments.” Great to look at, but not very functional.
A relevant analogy is to consider a long-haul passenger airplane. To deliver its mission, the airplane must be supplemented by an entire ecosystem of pilots, cabin crew (or flight attendants, if on a US-based airline), runways, passenger terminals, air traffic control, processes/procedures, etc.
Likewise, HPC needs an ecosystem of people, software, datacenters, I/O subsystems, etc., to deliver its mission. And just like air travel, much of the complexity is in the ecosystem beyond the hardware product.
And, here is the important bit: the differentiation and economic impact come from getting the ecosystem right. Airlines have the same aircraft as their competitors, just as companies normally have access to the same HPC technology as their competitors. But how the staff interact with the customers, the quality of the back-end support, the processes/policies – these are what distinguish one airline from another. Similarly, the software, the support staff, the policies, etc., are what enable each company to gain a competitive advantage over peers who may be using the same HPC technology.
I have written about this idea of an ecosystem previously: supercomputing beyond the vision.
The HPC Hotel
This analogy is great for explaining many different HPC concepts. Imagine your job is to refurbish a hotel. Clearly this task is easier if you have additional workers – more people means the job can be done quicker. And you can accept contracts to refurbish bigger hotels. But you need to coordinate all these extra workers of course.
I’m sure you can see the use of this analogy for explaining parallelism and scalability (decomposition, coordination, scheduling conflicts, resource contention, etc.). You can also use it to introduce special vs. general purpose processors (everyone can do any job vs. combination of plumbers, electricians, plasterers, etc.). It can be used to explain that a variety of skills are needed to make the refurbishment (HPC simulation) effective.
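For a more technical listener, the hotel analogy maps directly onto parallel code. Here is a minimal, illustrative sketch in Python – the names (refurbish_room, elevator, crew) are invented for this example, not from any real library. The rooms are decomposed across a crew of workers, while a single shared freight elevator forces coordination and creates contention, much like a shared file system or memory bus in a real HPC job:

```python
# Sketch of the hotel-refurbishment analogy: decompose the job (rooms)
# across workers, and coordinate access to a shared resource.
from concurrent.futures import ThreadPoolExecutor
from threading import Lock
import time

elevator = Lock()  # the hotel's single freight elevator: a contended resource

def refurbish_room(room: int) -> int:
    with elevator:        # coordination: one worker hauls materials at a time
        time.sleep(0.01)  # fetching materials (serial bottleneck)
    time.sleep(0.05)      # the actual refurbishment work (fully parallel)
    return room

# Decomposition: 100 rooms split across a crew of 8 workers.
with ThreadPoolExecutor(max_workers=8) as crew:
    done = list(crew.map(refurbish_room, range(100)))

print(f"refurbished {len(done)} rooms")
```

More workers shorten the parallel part, but the serial elevator step caps the overall speed-up – the hotel version of Amdahl’s law.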
The HPC hotel analogy can also be used to show that the job of running a hotel is not the same as the job of designing a hotel, is not the same as the job of building/refurbishing a hotel, is not the same as staying in a hotel. In the same way, it is silly to expect one person to be expert in using HPC, and writing the applications, and running the cluster, and designing the cluster, and so on.
The analogy can also be used to describe areas of differentiation – hotels (HPC) can differentiate from each other on both the rooms (hardware) and the services/staff/policies (support & software).
Do you have other good analogies that you use to describe HPC? Have you used any of the above and found them to be particularly effective?