
Friday, 25 May 2012

Looking ahead to ISC'12

I have posted my preview of ISC'12 Hamburg - the summer's big international conference for the world of supercomputing - over on the NAG blog. I will be attending ISC'12, along with several of my NAG colleagues. My blog post discusses these five key topics:
  • GPU vs MIC vs Other
  • What is happening with Exascale?
  • Top 500, Top 10, Tens of PetaFLOPS
  • Finding the advantage in software
  • Big Data and HPC 
Read more on the NAG blog ...

Friday, 24 June 2011

ISC11 Review

ISC11 - the mid-season big international conference for the world of supercomputing - was held this week in Hamburg.

Here, I update my ISC11 preview post with my thoughts after the event.

I said I was watching out for three battles.

GPU vs MIC vs Fusion

The fight for top voice in the manycore/GPU world will be one interesting theme of ISC11. Will this be the year that the GPU/manycore theme really means more than just NVIDIA and CUDA? AMD has opened the lid on Fusion in recent weeks and has sparked some real interest. Intel's MIC (or Knights) is probably set for some profile at ISC11 now that the Knights Ferry program has been running a while. How will NVIDIA react to no longer being the loudest (only?) noise in GPU/manycore land? Or will NVIDIA's early momentum carry through?

Review: None of this is definitive, but my gut reaction is that MIC won this battle. GPU lost. Fusion didn't play, again. My feeling from talking to attendees was that MIC was second only to the K story in terms of what people were talking about (and asking NAG - as collaborators in the MIC programme - what we thought). Partly because of the MIC hype, and the K success (performance and power efficiency without GPUs), GPUs took a quieter role than in recent years. Fusion, disappointingly, once again seemed to have a quiet time in terms of people talking about it (or not). Result? As I thought, manycore now realistically means more than just NVIDIA/CUDA.

Exascale vs Desktop HPC

Both the exascale vision/race/distraction (select according to your preference) and the promise of desktop HPC (personal supercomputing?) have space on the agenda and exhibit floor at ISC11. Which will be the defining scale of the show? Will most attendees be discussing exascale and the research/development challenges to get there? Or will the hopes and constraints of "HPC for the masses" have people talking in the aisles? Will the lone voices trying to link the two extremes (technology trickle-down, market solutions to efficient parallel programming, etc.) be heard? What about the "missing middle"?

Review: Exascale won this one hands down, I think. Some lone voices still tried to talk about desktop HPC, missing middles, mass usage of HPC and so on. But exascale got the hype again (not necessarily wrong for one of the year's primary "supercomputing" shows!).

Software vs Hardware

The biggie for me. Will this be the year that software really gets as much attention as hardware? Will the challenges and opportunities of major applications renovation get the profile they deserve? Will people just continue to say "and software too"? Or will the debate - and actions - start to follow? The themes above might (should) help drive this (porting to GPU, new algorithms for manycore, new paradigms for exascale, etc.). Will people trying to understand where to focus their budget get answers? Balance of hardware vs software development vs new skills? Balance of "protect legacy investment" against the opportunity of a fresh look at applications?

Review: Hardware still got more attention than software - Top500, MIC, etc. - although ease of programming for MIC was a common question too. I did miss lots of talks, so perhaps there was more focus on applications and software challenges than I caught. But the chat in the corridors was still hardware-dominated, I thought.

The rest?

What have I not listed? National flag waving. I'm not sure I will be watching too closely whether USA, Japan, China, Russia or Europe get the most [systems|petaflops|press releases|whatever]. Nor the issue of cloud vs traditional HPC. I'm not saying those two don't matter. But I am guessing the three topics above will have more impact on the lives of HPC users and technology developers - both next week and for the next year once back at work.

Review: Well, I got those two wrong! Flags were out in force, with Japan (K, Fujitsu, Top500, etc.) and France (Bull keynote) waving strongly, among others. And clouds were seemingly the question to be asked at every panel! But in a way, I was still right - flags and clouds do matter and will get people talking - but I maintain that manycore, exascale vs desktop, and the desperate state of software all matter more.


 What did you learn? What stood out for you? Please add your comments and thoughts below ...

Friday, 17 June 2011

ISC 11 Preview

ISC11 - the mid-season big international conference for the world of supercomputing - is next week in Hamburg.

Will you be attending? What will you be looking to learn? I will be watching out for three battles.

GPU vs MIC vs Fusion

The fight for top voice in the manycore/GPU world will be one interesting theme of ISC11. Will this be the year that the GPU/manycore theme really means more than just NVIDIA and CUDA? AMD has opened the lid on Fusion in recent weeks and has sparked some real interest. Intel's MIC (or Knights) is probably set for some profile at ISC11 now that the Knights Ferry program has been running a while. How will NVIDIA react to no longer being the loudest (only?) noise in GPU/manycore land? Or will NVIDIA's early momentum carry through?

Exascale vs Desktop HPC

Both the exascale vision/race/distraction (select according to your preference) and the promise of desktop HPC (personal supercomputing?) have space on the agenda and exhibit floor at ISC11. Which will be the defining scale of the show? Will most attendees be discussing exascale and the research/development challenges to get there? Or will the hopes and constraints of "HPC for the masses" have people talking in the aisles? Will the lone voices trying to link the two extremes (technology trickle-down, market solutions to efficient parallel programming, etc.) be heard? What about the "missing middle"?

Software vs Hardware

The biggie for me. Will this be the year that software really gets as much attention as hardware? Will the challenges and opportunities of major applications renovation get the profile they deserve? Will people just continue to say "and software too"? Or will the debate - and actions - start to follow? The themes above might (should) help drive this (porting to GPU, new algorithms for manycore, new paradigms for exascale, etc.). Will people trying to understand where to focus their budget get answers? Balance of hardware vs software development vs new skills? Balance of "protect legacy investment" against the opportunity of a fresh look at applications?

The rest?

What have I not listed? National flag waving. I'm not sure I will be watching too closely whether USA, Japan, China, Russia or Europe get the most [systems|petaflops|press releases|whatever]. Nor the issue of cloud vs traditional HPC. I'm not saying those two don't matter. But I am guessing the three topics above will have more impact on the lives of HPC users and technology developers - both next week and for the next year once back at work.

 What will you be looking out for?

Tuesday, 22 June 2010

Technical computing futures part 2: GPU and manycore success

[Originally posted on The NAG Blog]

In my previous blog, I suggested that the HPC revolution towards GPUs (or similar many-core technologies) as the primary processor has a lot in common with the move from RISC to commodity x86 processors a few years ago. A new technology appears to offer cheaper (or better) performance than the incumbent, in return for some porting and tuning pain. Of course, I’m not the first HPC blogger to have made this observation, but I hope to follow it a little further.



In particular, my previous blog suggested the outcome might be: “at first the uptake is tentative ... but in a few years’ time, we might well look back with nostalgia to when GPUs were not the dominant processor for HPC systems” – in other words, hard going initially, but GPU/many-core will “win” eventually. I even ended up with an ambitious promise for my next blog (i.e. this one): “an idea of what/who will emerge as the dominant solution ...”.



Continuing to use the past to guess the future, my prediction is that the next steady state of HPC processors will be GPU-like/manycore technologies (for most of the FLOPS, at least) and, just as in the current steady state (x86), those few companies with the strongest financial muscle will eventually own the dominant market share. However, other companies will have pioneered many of the technologies that make that dominant market share possible, enjoying good market share surges in the process.



I can even have a go at predicting some of the path that might get us to the next steady state of HPC architecture. NVIDIA has already shown us that GPUs for HPC are sometimes a good solution – and importantly, that a good programming ecosystem (CUDA) really helps adoption. Over the last year or so, I’d say the HPC community has moved from “if GPUs can work in this case ...” to “how do I make GPUs work across my workload?”
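
To make the “good programming ecosystem” point a little more concrete, here is a minimal sketch of the kind of CUDA code that ecosystem makes approachable. It is purely illustrative - the kernel and function names are made up for this post, not taken from any real application:

    #include <cuda_runtime.h>

    // Illustrative CUDA kernel: scale a vector on the GPU, one thread per element.
    __global__ void scale(float *x, float a, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    // Host side: allocate device memory, copy data in, launch the kernel, copy back.
    void scale_on_gpu(float *host_x, float a, int n)
    {
        float *dev_x;
        cudaMalloc(&dev_x, n * sizeof(float));
        cudaMemcpy(dev_x, host_x, n * sizeof(float), cudaMemcpyHostToDevice);
        scale<<<(n + 255) / 256, 256>>>(dev_x, a, n);
        cudaMemcpy(host_x, dev_x, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dev_x);
    }

The dozen lines themselves are not the point; the point is that the allocate/copy/launch/copy-back pattern is learnable, and the real effort (and the skilled-programmer value) goes into restructuring applications so that kernels like this dominate the runtime.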



As Intel’s Knights processors bring us many-core but with a familiar x86 instruction set, we might learn that getting good performance across a broad range of applications is possible, but critically dependent on software tools and hard work by skilled parallel programmers. AMD’s Fusion, with tighter links between CPU and GPU, could show that the nature of the integration between the many-core/GPU unit and the rest of the system (be it CPU, network, main memory, etc.) will affect not only maximum performance on specific applications but, maybe more importantly, the ease of getting “good enough” performance across a range of applications.
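
For contrast, part of the appeal of a familiar x86 many-core target is that, at least in principle, existing threaded code carries over largely unchanged - something like the plain OpenMP loop below. Again this is only an illustrative sketch (same made-up vector-scaling example as above), not a statement about how Knights or Fusion parts will actually be programmed:

    #include <omp.h>

    // The same vector scaling written as a conventional threaded loop.
    // On an x86 many-core part the hope is that familiar tools like this
    // "just work"; whether they work well enough is the open question.
    void scale_on_cpu(float *x, float a, int n)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            x[i] *= a;
    }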



I don't know of any GPU/many-core/accelerator announcements from IBM, but it’s always possible IBM will throw in another useful contribution before the dust settles. They were one of the first into many-core processors for HPC acceleration with Cell, and they cannot easily be counted out of top-end HPC solutions - e.g. the forthcoming Blue Waters (POWER7) and Sequoia (BG/Q) chart-toppers.



But back to my “winner” prediction. When the revolution settles into a new steady state of mostly GPU/many-core for HPC processors, there won’t be (can’t be) critical distinctions between the various products any more for most applications. Whichever product we consider (whether GPU or x86-based or whatever), many-core is sufficiently different from few-core (e.g. 1-8 cores) that the early winners have been those users able to move their key applications across easily to get step changes in cost and performance.



The big winners in the next stages of the GPU/manycore emergence will be those users who can move the bulk of their high-value-generating HPC usage to many-core processors with the most attractive transition (economy and speed) compared to their competitors.



So what about the dominant solution I promised? For the technology to be pervasive, first there must be greater commonality between offerings (I stop short of standardization) so that programmers have at least a hope of portability. Second, users need to be able to extract the available performance. Ideally, these would mean that a software method making many-core programming “good enough easily enough” is discovered – and if so, that software method will be the dominant solution, across all hardware.



Or, if the magic bullet is still not market-ready, skilled parallel programmers will be the dominant solution for achieving competitive performance and cost benefits - just as they are for HPC using commodity x86 processors today.