As we stumble into the first weeks of 2013, it is the season for predictions about what the coming year will bring. In my case, following my recent review of HPC in 2012, I get to make some predictions for the world of HPC in 2013.
Buzzwords
First up, this year’s buzzword for HPC marketing and technology talks. Last year was very much the year of “Big Data” as a buzzword. As that starts to become old hat (and turn into real work), a new buzzword will be required. Cynical? My prediction is that Big Data will still feature in HPC discussions and real usage this year, but it will diminish as a buzzword. 2013 will probably spawn two buzzwords.
The first buzzword will be “energy-efficient computing”. We saw a little use of this last year, but I think it will become the dominant buzzword this year. Most technical talks will include some reference to energy-efficient computing (or the energy cost of the solution, etc.). All marketing departments will swing into action to brand their HPC products and services as energy-efficient computing – much as they did with Big Data and, before that, Cloud Computing, and so on. Yes, I’m being a tad cynical about the whole thing. I’m not suggesting that energy efficiency is not important – in fact it is essential to meeting our ambitions in HPC. I’m merely noting its impending over-use as a theme. And of course, energy-efficient computing is not the same as Green Computing – after all, that buzzword is several years old now.
Energy efficiency will be driven by the need to find lower-power solutions for exascale-era supercomputers (not just exascale systems but the small departmental petascale systems that will be expected at that time – not to mention consumer-scale devices). It is worth noting that optimizing for power and optimizing for energy are not necessarily the same thing. The technology will also drive the debate – especially the anticipated contest between GPUs and Xeon Phi. And politically, “energy-efficient computing” sounds better for attracting investment than “HPC technology research”.
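To make that power/energy distinction concrete, here is an illustrative back-of-the-envelope calculation (the figures are invented for the example, not measurements from any real system). Energy is power multiplied by time, so a lower-power system that takes longer over the same job can still consume more energy:

    Energy = Power × Time
    System A: 200 kW × 10 hours = 2,000 kWh
    System B: 150 kW × 15 hours = 2,250 kWh

System B is the better choice if the binding constraint is the facility’s power envelope; System A wins if the constraint is the energy bill. Which of the two “energy-efficient computing” means will depend on who is talking.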
The second buzz-theme that I see emerging this year, probably towards the second half of the year, is “ease-of-use”. See – it’s only a buzz-theme, not a buzzword, because it’s not yet a catchy phrase. But someone will be inspired, and a buzzword it will become.
I think that the drive for the public HPC community to engage with industry, together with the increasing recognition of the need to widen the adoption of HPC beyond high-end niches, and the marketing requirements of technologies such as Xeon Phi, will mean that ease-of-use becomes a key aspect of the HPC debate in 2013. To have sustainable growth and a future, HPC has to move away from requiring specialist skill sets of its users. For advanced users and programmers this is fine – the technology is complex and rightly requires expertise. But for basic users who are primarily using HPC to do a job, e.g. in engineering, we need a better interface than batch scripts and a better way to extract performance than understanding the underlying technology architecture.
Technology choices
Less of a prediction, more a statement of the obvious, but I’d expect the battle between GPUs and Intel MIC to continue through 2013 – with the benefits and issues of each approach being debated, demonstrated and challenged. The debate over when and how far to commit to accelerators/co-processors, compared to traditional processor-only HPC solutions, will continue. I know of several HPC procurements where accelerators/co-processors are still being avoided due to the need to ensure a solution that is optimal across a general-purpose HPC service.
Many individual applications have been shown to achieve meaningful performance gains using GPU/MIC, but when considering a large multi-user community, there will always be some applications that simply will not be ready for that architectural change for some years, or where a lack of parallel programming experts means that not enough of the application base can be converted yet. However, I do think that manycore processors are an inevitable step in the evolution of HPC, and applications that avoid adopting manycore (whether GPU/MIC/other) now are simply delaying the day when they will have to evolve or be left behind. The harsh reality is that the hardware evolves and applications have to move with it. Rarely does the hardware ignore technology trends to swing back to application expectations. I am not sure GPU/MIC in their current forms are the final form of manycore processing for the exascale era – but I’m sure they are close enough to provide a valuable stepping stone that won’t be wasted effort and will deliver performance in the meantime.
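For readers wondering what “adopting manycore” looks like at the code level, here is a minimal sketch using OpenACC directives – one of the directive-based routes onto accelerators (CUDA and Intel’s offload pragmas being the obvious alternatives). The loop is a deliberately trivial vector update; this is an illustration of the style, not a recipe, and real applications face far harder restructuring and data-movement questions:

    /* Illustrative sketch only: y = a*x + y, offloaded to an
       accelerator via an OpenACC directive. The copyin/copy
       clauses describe the data movement between host and
       accelerator memory. */
    void saxpy(int n, float a, const float *x, float *y)
    {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

The attraction of the directive approach is that the same source still compiles for a plain CPU; the catch is that the directive alone does not make the underlying algorithm parallel-friendly – that remains the expert work referred to above.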
Company moves
In terms of HPC companies, this is always a risky area for predictions, but I’d guess two are sufficiently well known that I can make predictions now and claim credit later for stating the bleedin’ obvious.
Intel has spent much of the last few years swallowing parts of the HPC ecosystem to broaden its HPC capabilities. Prediction? Intel will continue to acquire key capabilities related to HPC, and we will start to see its plans emerge for how it will exploit this ingested ecosystem.
Related to that is Cray – they bought Appro towards the end of 2012, but by my sums they still have some money left over from the Intel interconnect deal. I’d expect Cray to invest some more of that money this year – perhaps in acquisitions, perhaps in technology research, perhaps in collaborations. Yes – I know I’ve covered my bases there – but predictions are awfully hard to get right if you don’t cover your bases!
I'd expect some major moves from a few other HPC companies too - but let's not be too ambitious with the predictions!
Industrial HPC
Finally, the start of 2013 saw news stories around the break-up of Encanto, the supercomputer installed in New Mexico to engage with industrial users of HPC and encourage high-tech industry to locate R&D in the area.
This theme – a national lab or university HPC centre engaging with industry on HPC adoption – has been a long-standing vision of HPC funders and centres across the globe. Only a few centres in the world have made a success of industrial partnerships, yet many new ones are able to convince their governments to invest in supercomputing facilities to bring supercomputing capabilities to industrial users and so generate positive economic impact. Why are the past failures not a barrier to this? Partly because it is like the lottery – success is hard to be sure of and most will fail – but the one that gets it right will have a huge pay-off.
If anyone does manage to make the use of HPC technologies pervasive within their industrial base, then the long-term economic benefits could be immense – especially if other engagement attempts continue to fail and thus a multi-sector competitive advantage opens up. But the economic and business models have to be right – not merely the technology evangelization.
Of course, bringing HPC benefits to industrial users is not purely the preserve of academic centres and national labs – there are many commercial organisations providing this support too. For example, my own company, NAG, provides services to industry to evolve and scale applications to benefit from modern HPC technology – whether multicore clusters or petascale supercomputers.
So ...
So that’s it then: energy-efficient, easy-to-use, and industrially engaged. 2013 looks fun already!
6 comments:
Are those really buzzwords, or business necessities? They're specific, useful and nowhere near as vague as "Big Data". :-)
Energy efficiency is a hardware problem, while ease of use and industrial engagement go hand-in-hand, and they're both software problems. Those last two are still the real barriers to adoption; we have hardware flops raining from the sky now.
On the subject of Encanto, was it a lottery result or bad business management?
Surely the key metric for a buzzword is that a sufficiently imaginative marketing department can apply it to almost any product or service! On that basis, both "energy efficiency" and "ease of use" qualify :-)
Seriously though, I think most buzzwords start out as focused and relevant issues/challenges. However, through rampant over-use in talks/panels/etc. and marketing departments' tendency to brandish the new buzzwords indiscriminately at any product or service, however peripheral the connection, they are soon robbed of specific or useful meaning.
I'd argue that energy efficiency is both a hardware and a software problem/opportunity – for example, more efficient code can consume less power.
In terms of supercomputer centers engaging with industry - I wasn't suggesting the process is like a lottery. Merely that our repeated chasing of the [potential] big pay-off in spite of numerous previous failures is based on the same mentality as buying lottery tickets - the chance to win is worth the risk of failure.
You are right, however, that good business management should shorten the odds of the win.
The problem with achieving "ease of use" is that it is very expensive, and it is implemented in software. And many HPC customers don't like spending money on software, because every dollar or Euro spent on software is one less dollar or Euro spent on hardware FLOPS. A frequent comment from many HPC customers is that they like the software from *name-of-commercial-vendor*, but it's too expensive. So they end up using open source or vendor-developed software, and complain it doesn't meet their needs, is buggy, or requires code conversions. Duh.
Industrial HPC customers, who tend to be required to demonstrate the ROI of their purchases, and a few enlightened HPC government centers, are willing to pay for software. But it's hard for commercial software to succeed in HPC when a large portion of the market isn't willing to pay for it.
Yes, absolutely. Hardware FLOPS are a tangible, measurable no-brainer, even if you can't use them. Those parts of the HPC market that want it cheap/free aren't much of a business market anyway, because value is hard-capped by the price. You hit the nail on the head with industry though. If you have to actually justify an ROI (shock, horror...) and there are real consequences to failure, you have to use something that works and that costs real money (as it should). There's billions in that market today, even without steps forward in ease-of-use.