
Thursday, 20 December 2012

A review of 2012 in supercomputing - Part 2

This is Part 2 of my review of the year 2012 in supercomputing and related matters.

In Part 1 of the review I re-visited the predictions I made at the start of 2012 and considered how they became real or not over the course of the year. This included cloud computing, Big Data (mandatory capitalization!), GPU, MIC, and ARM - and software innovation. You can find Part 1 here: http://www.hpcnotes.com/2012/12/a-review-of-2012-in-supercomputing-part.html.

Part 2 of the review looks at the themes and events that emerged during the year. As in Part 1, this is all thoroughly biased, of course, towards things that interested me throughout the year.

The themes that stick out in my mind from HPC/supercomputing in 2012 are:
  • The exascale race stalls
  • Petaflops become "ordinary"
  • HPC seeks to engage a broader user community
  • Assault on the Top500

The exascale race stalls

The global race towards exascale supercomputing has been a feature of the last few years. I chipped in myself at the start of 2012 with a debate on the "co-design" mantra.

Confidently tracking the Top500 trend lines, the HPC community had pinned 2018 as the inevitable arrival date of the first supercomputer with a peak performance in excess of 1 exaflops. [Note the limiting definition of the target - loosely coupled computing complexes with aggregate capacity greater than exascale will probably turn up before the HPC machines - and peak performance in FLOPS is the metric here - not application performance or any assumptions of balanced systems.]
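For anyone who wants to see what "peak performance in FLOPS" boils down to, here is a minimal back-of-envelope sketch in C - with entirely hypothetical machine parameters of my own choosing - of how a theoretical peak is usually tallied up, and how far a respectable petascale-class configuration still falls short of an exaflops:

```c
/* Back-of-envelope theoretical peak: nodes x sockets x cores x
 * (FLOPs per core per clock cycle) x clock rate.
 * All numbers below are hypothetical, purely for illustration. */
#include <stdio.h>

int main(void)
{
    double nodes            = 100000;   /* hypothetical node count       */
    double sockets_per_node = 2;
    double cores_per_socket = 16;
    double flops_per_cycle  = 16;       /* e.g. wide SIMD + FMA, assumed */
    double clock_ghz        = 2.0;

    double peak_flops = nodes * sockets_per_node * cores_per_socket
                      * flops_per_cycle * clock_ghz * 1e9;

    printf("Theoretical peak: %.2f petaflops\n", peak_flops / 1e15);
    /* 100000 nodes x 2 x 16 x 16 x 2 GHz = 102.4 petaflops peak -
     * still roughly a factor of ten short of 1 exaflops. */
    return 0;
}
```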

Some more cautious folk hedged a delay into their arrival dates and talked about 2020. However, it became apparent throughout 2012 that the US government did not have the appetite (or political support) to commit to being the first to deploy an exascale supercomputer. Other regions of the world have - like the USA government - stated their ambitions to be among the leaders in exascale computing. But no government has yet stood up and committed to a timetable nor to being the first to get there. Critically, neither has anyone committed the required R&D funding needed now to develop the technologies [hardware and software] that will make exascale supercomputing viable.

The consensus at the end of 2012 seems to be towards a date of 2022 for the first exascale supercomputer - and there is no real consensus on which country will win the race to have the first exascale computer.

Perhaps we need to re-visit our communication of the benefits of more powerful supercomputers to the wider economy and society (what is the point of supercomputers?). Communicating the value to society and describing the long-term investment requirements is always a fundamental need for any specialist technology, but it becomes critical during the testing fiscal conditions (and thus political pressures) that governments face right now.


Wednesday, 30 May 2012

The power of supercomputers - energy, exascale and elevators

Paul Henning has written on his blog (HPC Ruminations) about the growing issue of power requirements for large scale computing. Paul's blog post - "Familiarity Breeds Complacency" - is partly in response to my article at HPCwire - "Exascale: power is not the problem" - and my follow-up discussion here - "Supercomputers and other large science facilities".

Paul makes several good points and his post is well worth reading. He ends with an observation that I've noted before (in my own words):

One of supercomputing's biggest strengths - its ability to help almost all areas of science and engineering - is also one of its greatest weaknesses - because there is a portfolio of use cases rather than a single compelling champion to drive attention and investment.

PS - I've added Paul's new blog to my list of HPC blogs and news sites.

Friday, 25 May 2012

Looking ahead to ISC'12

I have posted my preview of ISC'12 Hamburg - the summer's big international conference for the world of supercomputing - over on the NAG blog. I will be attending ISC'12, along with several of my NAG colleagues. My blog post discusses these five key topics:
  • GPU vs MIC vs Other
  • What is happening with Exascale?
  • Top 500, Top 10, Tens of PetaFLOPS
  • Finding the advantage in software
  • Big Data and HPC 
Read more on the NAG blog ...

Friday, 4 November 2011

My SC11 diary 10

It seems I have been blogging about SC11 for a long time - but it has only been two weeks since the first SC11 diary post, and this is only the 10th SC11 diary entry. However, this will also be the final SC11 diary blog post.

I will write again before SC11 in HPCwire (to be published around or just before the start of SC11).

And then maybe an SC11-related blog post after it has all finished.

So, what thoughts for the final pre-SC11 diary entry then? I'm sure you have noticed that the pre-show press coverage has now started to arrive in volume. Perhaps this is the place for my preview of the SC11 battleground: what to look out for, what might emerge, ...


Thursday, 11 August 2011

Big Data and Supercomputing for Science

It is interesting to note the increasing attention “big data” seems to be getting from the supercomputing community.

Data explosion


We talk about the challenges of the exponential increase in data, or even an "explosion of data". This is caused by our ever-growing ability to generate data. More powerful computational resources deliver finer resolutions, wider parameter studies, etc. The emergence of individual-scale HPC (GPU etc.) that is both cost-viable and effort-viable gives increased data creation capability to the many scientists not using high-end supercomputers. And instrument data sources continue to improve in resolution and speed.
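To make the "finer resolutions mean more data" point concrete, here is a minimal sketch - with made-up grid sizes, variable counts and output cadence, not figures from any real application - of how quickly the raw output of a 3D, time-dependent simulation grows as the mesh is refined:

```c
/* Rough illustration: halving the grid spacing of a 3D, time-dependent
 * simulation multiplies the raw output by ~16x (8x in space, ~2x in time
 * if the output cadence follows the time step). All numbers are made up. */
#include <stdio.h>

int main(void)
{
    double n             = 1000;  /* grid points per dimension (hypothetical) */
    double variables     = 5;     /* fields stored per point (hypothetical)   */
    double snapshots     = 100;   /* time steps written out                   */
    double bytes_per_val = 8;     /* double precision                         */

    for (int refine = 0; refine < 3; ++refine) {
        double points = n * n * n;
        double bytes  = points * variables * snapshots * bytes_per_val;
        printf("n = %6.0f per dimension: %8.1f TB of raw output\n",
               n, bytes / 1e12);
        n *= 2;          /* refine the mesh ...        */
        snapshots *= 2;  /* ... and the output cadence */
    }
    return 0;  /* prints roughly 4 TB, then 64 TB, then 1024 TB */
}
```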

So, we are collecting more data than ever before. We are also increasing our use of multiple data sources - fusing data from various sensors and computer models to form predictions or to study scientific phenomena.

It is also common to hear questions such as: are we drowning in the volume of data? Is this growth in data overwhelming our ability to extract useful information or insight? Is the potential value of the increased data lost through our inability to manage and comprehend it? Does having more data mean more information - or less, due to analysis overload? Does the diversity of formats, quality, and sources further hinder data use?


Friday, 24 June 2011

ISC11 Review

ISC11 - the mid-season big international conference for the world of supercomputing - was held this week in Hamburg.

Here, I update my ISC11 preview post with my thoughts after the event.

I said I was watching out for three battles.

GPU vs MIC vs Fusion

The fight for top voice in manycore/GPU world will be one interesting theme of ISC11. Will this be the year that the GPU/manycore theme really means more than just NVidia and CUDA? AMD has opened the lid on Fusion in recent weeks and has sparked some real interest. Intel's MIC (or Knights) is probably set for some profile at ISC11 now the Knights Ferry program has been running a while. How will NVidia react to no longer being the loudest (only?) noise in GPU/manycore land? Or will NVidia's early momentum carry through?

Review: None of this is definitive, but my gut reaction is that MIC won this battle. GPU lost. Fusion didn't play again. My feeling from talking to attendees was that MIC was second only to the K story in terms of what people were talking about (and asking NAG - as collaborators in the MIC programme - what we thought). Partly because of the MIC hype, and the K success (performance and power efficiency without GPUs), GPUs took a quieter role than in recent years. Fusion, disappointingly, once again seemed to have a quiet time in terms of people talking about it (or not). Result? As I thought, manycore now realistically means more than just NVidia/CUDA.

Exascale vs Desktop HPC

Both the exascale vision/race/distraction (select according to your preference) and the promise of desktop HPC (personal supercomputing?) have space on the agenda and exhibit floor at ISC11. Which will be the defining scale of the show? Will most attendees be discussing exascale and the research/development challenges to get there? Or will the hopes and constraints of "HPC for the masses" have people talking in the aisles? Will the lone voices trying to link the two extremes be heard? (technology trickle down, market solutions to efficient parallel programming etc.) What about the "missing middle"?

Review: Exascale won this one hands down, I think. Some lone voices still tried to talk about desktop HPC, missing middles, mass usage of HPC and so on. But exascale got the hype again (not necessarily wrong for one of the year's primary "supercomputing" shows!).

Software vs Hardware

The biggie for me. Will this be the year that software really gets as much attention as hardware? Will the challenges and opportunities of major applications renovation get the profile it deserves? Will people just continue to say "and software too". Or will the debate - and actions - start to follow? The themes above might (should) help drive this (porting to GPU, new algorithms for manycore, new paradigms for exascale, etc). Will people trying to understand where to focus their budget get answers? Balance of hardware vs software development vs new skills? Balance of "protect legacy investment" against opportunity of fresh look at applications?

Review: Hardware still got more attention than software - Top500, MIC, etc. - although ease-of-programming for MIC was a common question too. I did miss lots of talks, so perhaps there was more there focusing on applications and software challenges than I caught. But the chat in the corridors was still hardware-dominated, I thought.

The rest?

What have I not listed? National flag waving. I'm not sure I will be watching too closely whether USA, Japan, China, Russia or Europe get the most [systems|petaflops|press releases|whatever]. Nor the issue of cloud vs traditional HPC. I'm not saying those two don't matter. But I am guessing the three topics above will have more impact on the lives of HPC users and technology developers - both next week and for the next year once back at work.

Review: Well, I got those two wrong! Flags were out in force, with Japan (K, Fujitsu, Top500, etc.) and France (Bull keynote) waving strongly among others. And clouds were seemingly the question to be asked at every panel! But in a way, I was still right - flags and clouds do matter and will get people talking - but I maintain that manycore, exascale vs desktop, and the desperation of software all matter more.


 What did you learn? What stood out for you? Please add your comments and thoughts below ...

Friday, 17 June 2011

ISC 11 Preview

ISC11 - the mid-season big international conference for the world of supercomputing - is next week in Hamburg.

Will you be attending? What will you be looking to learn? I will be watching out for three battles.

GPU vs MIC vs Fusion

The fight for top voice in manycore/GPU world will be one interesting theme of ISC11. Will this be the year that the GPU/manycore theme really means more than just NVidia and CUDA? AMD has opened the lid on Fusion in recent weeks and has sparked some real interest. Intel's MIC (or Knights) is probably set for some profile at ISC11 now the Knights Ferry program has been running a while. How will NVidia react to no longer being the loudest (only?) noise in GPU/manycore land? Or will NVidia's early momentum carry through?

Exascale vs Desktop HPC

Both the exascale vision/race/distraction (select according to your preference) and the promise of desktop HPC (personal supercomputing?) have space on the agenda and exhibit floor at ISC11. Which will be the defining scale of the show? Will most attendees be discussing exascale and the research/development challenges to get there? Or will the hopes and constraints of "HPC for the masses" have people talking in the aisles? Will the lone voices trying to link the two extremes be heard? (technology trickle down, market solutions to efficient parallel programming etc.) What about the "missing middle"?

Software vs Hardware

The biggie for me. Will this be the year that software really gets as much attention as hardware? Will the challenges and opportunities of major applications renovation get the profile it deserves? Will people just continue to say "and software too". Or will the debate - and actions - start to follow? The themes above might (should) help drive this (porting to GPU, new algorithms for manycore, new paradigms for exascale, etc). Will people trying to understand where to focus their budget get answers? Balance of hardware vs software development vs new skills? Balance of "protect legacy investment" against opportunity of fresh look at applications?

The rest?

What have I not listed? National flag waving. I'm not sure I will be watching too closely whether USA, Japan, China, Russia or Europe get the most [systems|petaflops|press releases|whatever]. Nor the issue of cloud vs traditional HPC. I'm not saying those two don't matter. But I am guessing the three topics above will have more impact on the lives of HPC users and technology developers - both next week and for the next year once back at work.

 What will you be looking out for?

Thursday, 26 May 2011

Poll: legacy software and future HPC applications

I've added a new quick survey to the HPC Notes blog: "Which do you agree with more for developing the next generation of HPC applications?"

Is the argument about "protecting our investment" a load of nonsense? Or is throwing it all away and starting again irresponsible?

I've only allowed the two extremes as voting options - I know you might want to say "both" - but choose one!

See top right of the blog home page. Vote away ...

For clues on my own views, and some good audience debate, see the software panel video recording from the recent NCSA PSP annual meeting here: http://www.ncsa.illinois.edu/Conferences/2011Meeting/agenda.html

Blog on the topic to follow shortly ... after some of your votes have been posted :-)

Thursday, 24 March 2011

Poll: exascale, personal or industrial?

I've added a new quick survey to the HPC Notes blog: "Which is more interesting - exascale computing, personal supercomputing or industry use of HPC?"

See top right of the blog home page. You can even give different answers for "reading about" and "working on"...

Investments Today for Effective Exascale Tomorrow

I contributed to this article by Mike Bernhardt in the March 2011 issue of The Exascale Report.

"Initiatives are being launched, research centers are being established, teams are being formed, but in reality, we are barely getting started with exascale research. Opinions vary as to where we should be focusing our resources.

In this issue, The Exascale Report asks NAG's Andy Jones, Lawrence Livermore's Dona Crawford, and Growth Science International's Thomas Thurston where should we (as a global community) be placing our efforts today with exascale research and development?"


Thursday, 17 March 2011

The Addictive Allure of Supercomputing

The European Medical Device Technology (EMDT) magazine interviewed me recently. InsideHPC has also pointed to the interview here.

The interview discusses false hopes of users: "Computers will always get faster – I just have to wait for the next processor and my application will run faster."

We still see this so often - managers, researchers, even programmers - all waiting for the silver bullet that will make multicore processors run their application faster with no extra effort from them. There is nothing now, or coming soon, that will do that except for a few special cases. Getting performance from multicore processors means evolving your code for parallel processing. Tools and parallelized library plugins can help - but in many cases they won't be a substitute for re-writing key parts of the code using multithreading or similar techniques.
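As a deliberately simple sketch of what "evolving your code for parallel processing" can mean in practice, here is a hypothetical serial loop handed to multiple cores via an OpenMP directive; real applications are rarely this tidy, which is exactly why the re-writing takes effort:

```c
/* Minimal OpenMP example: spread a loop over the available cores.
 * Compile with e.g.: gcc -fopenmp example.c -o example
 * The computation itself is a placeholder, purely for illustration. */
#include <stdio.h>
#include <omp.h>

#define N 10000000

int main(void)
{
    static double x[N];
    double sum = 0.0;

    /* Each thread works on its own chunk of iterations; the reduction
     * clause combines the per-thread partial sums safely at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; ++i) {
        x[i] = (double)i * 0.5;
        sum += x[i];
    }

    printf("Max threads available: %d, sum = %.1f\n",
           omp_get_max_threads(), sum);
    return 0;
}
```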

Thursday, 23 September 2010

Is power-hungry supercomputing OK now?

[Article by me on ZDNet UK, 23 September, 2010]

We may be planning for a 1,000-fold increase in compute power in the next decade, but what about the extra power consumption ...

http://www.zdnet.co.uk/news/emerging-tech/2010/09/23/is-power-hungry-supercomputing-ok-now-40090137/
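As a rough, back-of-envelope illustration of the concern in that article - all figures below are assumptions of mine, not numbers from the article - even a generous improvement in FLOPS-per-watt leaves a 1,000-fold compute increase with a much larger electricity bill:

```c
/* Back-of-envelope: annual electricity cost of a larger machine.
 * Power draw, efficiency gain, and price per kWh are all assumptions. */
#include <stdio.h>

int main(void)
{
    double power_mw_today   = 5.0;    /* hypothetical current system, MW    */
    double compute_increase = 1000.0; /* the 1,000-fold ambition             */
    double efficiency_gain  = 100.0;  /* assumed FLOPS-per-watt improvement  */
    double price_per_kwh    = 0.10;   /* assumed, in dollars                 */

    double power_mw_future = power_mw_today * compute_increase / efficiency_gain;
    double hours_per_year  = 24.0 * 365.0;
    double annual_cost     = power_mw_future * 1000.0 * hours_per_year
                           * price_per_kwh;

    printf("Future power draw: %.0f MW, annual energy bill: $%.0f million\n",
           power_mw_future, annual_cost / 1e6);
    return 0;
}
```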

Monday, 30 August 2010

Me on HPC 2

Things I have said (or have been attributed as saying - not always the same thing!) - some older interviews with me in various publications about HPC, multicore, etc ...


Successful Deployment at Extreme Scale: More than Just the Iron
The Exascale Report
August 2010, by John West

[full article requires subscription, extracts here are not complete, and are modified slightly to support that]

"cost of science, not just the cost of supercomputer ownership"

"lead time, and funding, to get the user community ready"

"spend a year or more selecting a machine and then deploy it as quickly as possible, makes it very difficult to build a community and get codes ready ahead of time"

"software must be viewed as part of the scientific instrument, in this case a supercomputer, that needs its own investment. High performance computing is really about the software; whatever hardware you are using is just an accelerator system."

"a machine is deployed and then obsolete within three years. And the users often have no idea what architecture is coming next. There is no real chance for planning, or a return on software development investment."

Thursday, 18 February 2010

Exascale or personal HPC?

[Originally posted on The NAG Blog]

Which is more interesting for HPC watchers - the ambition of exaflops or personal supercomputing? Anyone who answers "personal supercomputing" is probably not being honest (I welcome challenges!). How many people find watching cars on the local road more interesting than F1 racing? Or think local delivery vans more fascinating than the space shuttle? Of course, everyday cars and local delivery vans are more important for most people than F1 and the space shuttle. And so personal supercomputing is more important than exaflops for most people.

High performance computing at an individual or small group scale directly impacts a far broader set of researchers and business users than exaflops will (at least for the next decade or two). Of course, in the same way that F1 and the shuttle pioneer technologies that improve cars and other everyday products, so the exaflops ambition (and the petaflops race before it) will pioneer technologies that make individual scale HPC better.

One potential benefit to widespread technical computing that some are hoping for is an evolution in programming. It is almost certain that the software challenges of an exaflops supercomputer - a complex distributed processing and memory hierarchy demanding billion-way concurrency - will be the critical factor in success, and thus tools and language evolutions will be developed to help with the task.

Languages might be extended (more likely than new languages) to help express parallelism better. Better may mean easier or with assured correctness rather than higher performance. Language implementations might evolve to better support robustness in the face of potential errors. Successful exascale applications might expect to make much greater use of solver and utility libraries optimized for specific supercomputers. Indeed one outlying idea is that libraries might evolve to become part of the computer system rather than part of the application. Developments like these should also help to make the task of programming personal scale high performance computing much easier, reducing the expertise required to get acceptable performance from a system using tens of cores or GPUs.
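As a small illustration of the "greater use of solver and utility libraries" point, here is a sketch of delegating a matrix multiply to whatever optimised BLAS happens to be installed on the system, rather than hand-writing (and hand-tuning) the loops; the sizes and values are purely illustrative:

```c
/* Sketch of leaning on a tuned library rather than hand-written loops:
 * a matrix multiply delegated to the installed BLAS (vendor or open
 * source). Link with e.g. -lcblas or -lopenblas. */
#include <stdio.h>
#include <cblas.h>

int main(void)
{
    enum { N = 4 };
    double a[N * N], b[N * N], c[N * N];

    /* Fill A with 1s and B with 2s so the expected result is obvious. */
    for (int i = 0; i < N * N; ++i) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

    /* C = 1.0 * A * B + 0.0 * C, row-major, no transposes.
     * The library decides how to block, vectorise, and thread this. */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                N, N, N, 1.0, a, N, b, N, 0.0, c, N);

    printf("c[0] = %.1f (expected %.1f)\n", c[0], 2.0 * N);
    return 0;
}
```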

Of course, while we wait for the exascale benefits to trickle down, getting applications to achieve reasonable performance across many cores still requires specialist skills.

Tuesday, 15 December 2009

2009-2019: A Look Back on a Decade of Supercomputing

[Article by me for HPCwire, December 15, 2009]

As we turn the decade into the 2020s, we take a nostalgic look back at the last ten years of supercomputing. It's amazing to think how much has changed in that time.

http://www.hpcwire.com/features/2009-2019-A-Look-Back-on-a-Decade-of-Supercomputing-79351812.html?viewAll=y

Thursday, 12 November 2009

Tough choices for supercomputing's legacy apps

[Article by me on ZDNet UK, 12 November, 2009]

The prospect of hundreds of petaflops and exascale computing raises tricky issues for legacy apps ...

http://www.zdnet.co.uk/news/it-strategy/2009/11/12/tough-choices-for-supercomputings-legacy-apps-39869521/

Thursday, 18 December 2008

Santa's HPC Woes

[Article by me for HPCwire, December 18, 2008]

In a break with centuries of reticence, perhaps the most widely recognised distributor of festive spirit and products, Santa Claus, has revealed some details of the HPC underpinning his time-critical global operations.

http://www.hpcwire.com/features/Santas-HPC-Woes-36399314.html