Showing posts with label explain hpc.

Friday, 22 May 2020

What makes a Supercomputer Centre a Supercomputer Centre?

When is a Supercomputer Center not a Supercomputer Center?

The world of HPC has always been a place of rapid change in technology with slower change in business models and skill profiles, but what actually makes a supercomputer center a supercomputer center?

Tin (or Silicon maybe)

Is it having a big HPC system? How big counts? Does it matter what type of "big" system you have?

Does it matter if there is not one big supercomputer but instead a handful of medium sized ones of different types?

Does it count if the supercomputers are across the street, or in a self-owned/operated datacentre on the other side of town? What if the supercomputers are located hundreds of miles away from the HPC centre (e.g. to get cheap power & cooling)?

Who and How

Or is it having a team of HPC experts able to help users? How many experts? What level of expertise counts? How many have to be RSE (Research Software Engineer) types?

Is it having the vision and processes to recognise they are primarily a service provider to their users ("customers") rather than thinking of themselves mainly as a buyer of HPC kit?

What if you mainly have AI workloads rather than "traditional" HPC? What if you only run many small simulation jobs and no simulations that span thousands of cores? What if users only ever submit jobs via web portals and never log in to the supercomputers directly?

Is it essential to have a .edu, .gov, .ac.uk etc. address? Or can .com be a supercomputer center too?

This but not that?

If you have no supercomputers of your own, but have 50 top-class HPC experts who work with users on other supercomputers and also research future technologies - is that a supercomputer center?

If you have a very large HPC system but only the bare minimum of HPC staff and no technology R&D efforts - is that a supercomputer center?

Which of the last two adds more value to your users?

Declare or Earn?

Is it merely a matter of declaration - "we are a supercomputer center"? Or is it a matter of other supercomputer centers accepting you as a peer? But then who counts as other supercomputer centers to accept you? What if some do and some don't?

Is there a difference between a supercomputer center and a supercomputing center?

What do you think? And does your answer depend on whether you are a user, or work at a "traditional" supercomputer center, or a new type of supercomputing center, or an HPC vendor, or come from outside the HPC field?

Monday, 5 October 2015

Essential Analogies for the HPC Advocate

This is an update of a two-part article I wrote for HPC Wire in 2013: Part 1 and Part 2.

An important ability for anyone involved in High Performance Computing (HPC or supercomputing or big data processing, etc.) is to be able to explain just what HPC is to others.

“Others” include politicians, Joe Public, graduates possibly interested in HPC, industry managers trying to see how HPC fits into their IT or R&D programs, or family asking for the umpteenth time “what exactly do you do?”

One of the easiest ways to explain HPC is to use analogies that relate the concepts to things that the listener is more familiar with. So here is a run-through of some useful analogies for explaining HPC or one of its concepts:

The simple yet powerful: A spade

Need to dig a hole? Use the right tool for the job – a spade. Need to dig a bigger hole, or a hole through tougher material like concrete? Use a more powerful tool – a mechanical digger.

Now instead of digging a hole, consider modeling and simulation. If the model/simulation is too big or too complex – use the more powerful tool: i.e. HPC. It’s nice and simple – HPC is a more powerful tool that can tackle more complex or bigger models/simulations than ordinary computers.

There are some great derived analogies too. You should be able to give a spade to almost anyone and they should be able to dig a hole without too much further instruction. But hand a novice the keys to a mechanical digger, and it is unlikely they will be able to effectively operate the machine without either training or a lot of on-the-job learning. Likewise, HPC requires training to be able to use the more powerful tool effectively. Buying mechanical diggers also requires expertise that buying a spade doesn't. And so on.

It neatly focuses on the purpose and benefit of HPC rather than the technology itself. If you’ve heard any of my talks recently you will know this is an HPC analogy that I use myself frequently.

The moral high ground: A science/engineering instrument

I’ve occasionally accused the HPC community of being riddled with hypocrites – we make a show of “the science is what matters” and then proceed to focus the rest of the discussion on the hardware (and, if feeling pious or guilty, we mention “but software really matters”).

However, there is a critical truth to this – the scientific (or engineering) capability is what matters when considering HPC. I regularly use this perspective, often very firmly, myself: a supercomputer is NOT a computer – it is a major scientific instrument that just happens to be built using computer technology. Just because it is built from most of the same components as commodity servers does not mean that modes of usage, operating skills, user expectations, etc. should be the same. This helps to put HPC into the right context in the listener's mind – compare it to a major telescope, a wind tunnel, or even LHC@CERN.

The derived analogies are effective too – expertise in the technology itself is required, not just the science using the instrument. Sure, the skills overlap but they are distinct and equally important.

This analogy focuses on the purpose and benefit of HPC, but also includes a reference to it being based on a big computer.

Thursday, 10 October 2013

Supercomputing - the reality behind the vision

My opinion piece "Supercomputing - the reality behind the vision" was published today in Scientific Computing World, where I:
  • liken a supercomputer to a "pile of silicon, copper, optical fibre, pipework, and other heavy hardware [...] an imposing monument that politicians can cut ribbons in front of";
  • describe system architecture as "the art of balancing the desires of capacity, performance and resilience against the frustrations of power, cooling, dollars, space, and so on";
  • introduce software as magic, infrastructure, and a virtual knowledge engine;
  • note that "delivering science insight or engineering results from [supercomputing] requires users";
  • and propose that we need a roadmap for people just as much as for the hardware technology.

Read the full article here: http://www.scientific-computing.com/news/news_story.php?news_id=2270.

Friday, 12 October 2012

The making of “1000x” – unbalanced supercomputing

I have posted a new article on the NAG blog: The making of "1000x" – unbalanced supercomputing.

This goes behind the scenes of my article in HPCwire ("Chasing 1000x: The future of supercomputing is unbalanced"), where I explain the pun in the title and dip into some of the technology issues affecting the next 1000x in performance.

Thursday, 2 August 2012

What is the point of supercomputers?

Maybe it seems an odd question to ask on a blog dedicated to High Performance Computing (HPC). But it is good to question why we do things – hopefully leading us to a clearer justification for investing money, time and effort. Ideally, this would also enable better delivery – the “how” supporting the “why” – focusing on the best processes, technologies, etc. to achieve the goals identified in the justification.

So, again, why supercomputing? Perhaps you think the answer is obvious – supercomputing enables modelling and simulation to be done faster than with normal computers, or enables bigger problems to be solved.

Thursday, 19 January 2012

Cloud computing or HPC? Finding trends.

I posted "Cloud computing or HPC? Finding trends." on the NAG blog today. Some extracts ...
Enable innovation and efficiency in product design and manufacture by using more powerful simulations. Apply more complex models to better understand and predict the behaviour of the world around us. Process datasets faster and with more advanced analyses to extract more reliable and previously hidden insights and opportunities.
... and ...
High performance computing (HPC), supercomputing, computational science and engineering, technical computing, advanced computer modelling, advanced research computing, etc. The range of names/labels and the diversity of the audience involved mean that what is a common everyday term for many (e.g. HPC) is an unrecognised meaningless acronym to others - even though they are doing "HPC".
... and then I use some Google Trends plots to explore some ideas ...

Read the full article ...

Monday, 29 August 2011

Supercomputers and other large science facilities

In my recent HPCwire feature, I wrote that I occasionally say, glibly and deliberately provocatively: if the scientific community can justify (to funders and to the public) billions of dollars, large power consumption, lots of staff, etc. for domain-specific major scientific instruments like the LHC, Hubble, and NIF, then how come we can't make a case for a facility that needs comparable resources but can do wonders for a whole range of science problems and industrial applications?

There is a partial answer to that ...

Friday, 19 August 2011

What is this HPC thing?

[Originally posted on The NAG Blog]

I’m sure something like this is familiar to many readers of this blog. The focus here is HPC, but there is a similar story for mathematicians, numerical software engineers, etc.

You've just met an old acquaintance. Or a family member is curious. Or at social events (when social means talking to real people not twitter/facebook). We see that question coming. We panic. Then the family/friend/stranger asks it. We freeze. How to reply? Can I get a meaningful, ideally interesting, answer out before they get bored? What if I fail to get the message across correctly? Oops, this pause before answering has gone on too long. Now they are looking at me strangely. They are thinking the answer is embarrassing or weird. This is not a good start.

The question? “What do you do then?” Followed by: “Oh! So what exactly is supercomputing then?”

Thursday, 4 February 2010

Don't call it High Performance Computing?

[Originally posted on The NAG Blog]

Having just signed up for twitter (HPCnotes), I've realised that the space I previously had to get my point across was nothing short of luxurious (e.g. my ZDNet columns). It's like the traditional challenge of the elevator pitch - can you make your point about High Performance Computing (HPC) in the 140 character limit of a tweet? It might even be a challenge to state what HPC is in 140 characters. Can we sum up our profession that simply? To a non-HPC person?

The inspired John West of InsideHPC fame wrote about the need to explain HPC some time ago in HPCwire. It's not an abstract problem. As multicore processors (whether CPUs or GPUs) become the default for scientific computing, the parallel programming technologies and methods of HPC are becoming important for all numerical computing users - even if they don't identify themselves as HPC users. In turn, of course, HPC benefits in sustainability and usability from the mass market use of parallel programming skills and technologies.

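To make that last point concrete, here is a minimal sketch (my own illustration, not something from John's article) of what an "HPC method" can look like in everyday numerical code: a plain C loop parallelised for multicore using OpenMP, the long-standing shared-memory approach from the HPC world. The loop body is a made-up stand-in for real numerical work.

    #include <stdio.h>
    #include <omp.h>   /* OpenMP runtime, e.g. omp_get_max_threads() */

    #define N 100000000L

    int main(void) {
        double sum = 0.0;

        /* The HPC method in one line: spread the loop iterations across all
           available cores, each thread keeping a private partial sum that is
           combined at the end (an OpenMP reduction). Remove the pragma and
           the identical code runs serially on one core. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < N; i++) {
            double x = (double)i / N;
            sum += x * x;   /* made-up stand-in for real numerical work */
        }

        printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }

Built with an OpenMP-capable compiler (e.g. gcc -fopenmp), that single pragma is the difference between the code using one core and using all of them - exactly the kind of optimisation knowledge the HPC community has accumulated, and which the mass market now needs.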

I'll try to put it in 140 characters (less space for a link): Multicore CPUs promise extra performance but software must be optimised to take advantage. HPC methods can help.

It's not good - can you say it better? Add a comment to this blog post to try ...

For those of you finding this blog post from the short catch line above, hoping to find the answer to how HPC methods can help - well that's what my future posts and those of my colleagues here will address.