Showing posts with label strategy.

Thursday, 10 October 2013

Supercomputing - the reality behind the vision

My opinion piece "Supercomputing - the reality behind the vision" was published today in Scientific Computing World, where I:
  • liken a supercomputer to a "pile of silicon, copper, optical fibre, pipework, and other heavy hardware [...] an imposing monument that politicians can cut ribbons in front of";
  • describe system architecture as "the art of balancing the desires of capacity, performance and resilience against the frustrations of power, cooling, dollars, space, and so on";
  • introduce software as magic, as infrastructure, and as a virtual knowledge engine;
  • note that "delivering science insight or engineering results from [supercomputing] requires users";
  • and propose that we need a roadmap for people just as much as for the hardware technology.

Read the full article here: http://www.scientific-computing.com/news/news_story.php?news_id=2270.


Friday, 15 June 2012

Supercomputers are for dreams

I was invited to the 2012 NCSA Annual Private Sector Program (PSP) meeting in May. In my few years of attending, this has always been a great meeting (attendance by invitation only), with an unusually high concentration of real HPC users and managers from industry.

NCSA have recently released streaming video recordings of the main sessions - the videos can be found as links on the Annual PSP Meeting agenda page.

Bill Gropp chaired a panel session on "Modern Software Implementation", with Gerry Labedz and me as panellists.

The full video (~1 hour) is here, but I have also prepared a breakdown of the panel discussion in the blog post below.


Monday, 29 August 2011

Supercomputers and other large science facilities

In my recent HPCwire feature, I wrote that I occasionally say, glibly and deliberately provocatively, that if the scientific community can justify (to funders and to the public) billions of dollars, large power consumption, lots of staff, and so on for domain-specific major scientific instruments like the LHC, Hubble, NIF, etc., then why can't we make a case for a facility that needs comparable resources but can do wonders for a whole range of science problems and industrial applications?

There is a partial answer to that ...

Thursday, 11 August 2011

Big Data and Supercomputing for Science

It is interesting to note the increasing attention “big data” seems to be getting from the supercomputing community.

Data explosion


We talk about the challenges of the exponential increase in data, or even an "explosion of data". This is caused by our ever-growing ability to generate data. More powerful computational resources deliver finer resolutions, wider parameter studies, and so on. The emergence of individual-scale HPC (GPUs, etc.) that is viable in both cost and effort gives increased data-creation capability to the many scientists not using high-end supercomputers. And instruments, as data sources, continue to improve in resolution and speed.

So, we are collecting more data than ever before. We are also making greater use of multiple data sources – fusing data from various sensors and computer models to form predictions or to study scientific phenomena.

It is also common to hear questions such as: are we drowning in the volume of data? Is this growth in data overwhelming our ability to extract useful information or insight? Is the potential value of the increased data lost to our inability to manage and comprehend it? Does having more data mean more information – or less, due to analysis overload? Does the diversity of formats, quality, and sources further hinder data use?


Monday, 8 August 2011

Summer season big changes - football or supercomputing?

The world of supercomputing has gone mad.

So it seems as I catch up on the news around the HPC community after a week's vacation. Just today we have had the news of IBM walking away from half a decade's work on Blue Waters, and the story of an unknown organisation [now revealed to be NVIDIA] tempting Steve Scott away from his Cray CTO role. Thinking back over the summer months, there has been more.

The immediate comparison for me is the European football summer season (soccer for my American readers). Key players are signed by new clubs, managers leave for pastures new (or are pushed), and ownership takeover bids succeed or fail. It feeds a few months of media speculation and social gossip, with occasional breaking news (i.e. actual facts) and several major moves (mostly big surprises, but some hyped long in advance). But clubs emerge from the summer with new teams, new ambitions, and new odds of achieving success.

The world of HPC has had such a summer, I think.

Thursday, 24 March 2011

Investments Today for Effective Exascale Tomorrow

I contributed to this article by Mike Bernhardt in the March 2011 issue of The Exascale Report.

"Initiatives are being launched, research centers are being established, teams are being formed, but in reality, we are barely getting started with exascale research. Opinions vary as to where we should be focusing our resources.

In this issue, The Exascale Report asks NAG's Andy Jones, Lawrence Livermore's Dona Crawford, and Growth Science International's Thomas Thurston where should we (as a global community) be placing our efforts today with exascale research and development?"


Thursday, 28 January 2010

Are we taking supercomputing code seriously?

[Article by me on ZDNet UK, 28 January, 2010]

The supercomputing programs behind so much science and research are written by people who are not software pros ...

http://www.zdnet.co.uk/news/it-strategy/2010/01/28/are-we-taking-supercomputing-code-seriously-40004192/

Thursday, 12 November 2009

Tough choices for supercomputing's legacy apps

[Article by me on ZDNet UK, 12 November, 2009]

The prospect of hundreds of petaflops and exascale computing raises tricky issues for legacy apps ...

http://www.zdnet.co.uk/news/it-strategy/2009/11/12/tough-choices-for-supercomputings-legacy-apps-39869521/

Thursday, 5 February 2009

What to do if your supercomputing supplier fails

[Article by me on ZDNet UK, 5 February, 2009]

High-performance computing providers often live on the edge — technologically and financially. But if your supplier fails, it need not be a disaster ...

http://www.zdnet.co.uk/news/it-strategy/2009/02/05/what-to-do-if-your-supercomputing-supplier-fails-39610056/

Tuesday, 16 December 2008

How to stand out in the supercomputing crowd

[Article by me on ZDNet UK, 16 December, 2008]

High-performance computing's key business benefit may be to differentiate an organisation from its rivals, but that shouldn't rule out the use of commodity products ...

http://www.zdnet.co.uk/news/it-strategy/2008/12/16/how-to-stand-out-in-the-supercomputing-crowd-39578009/

Thursday, 14 August 2008

NAG Embarks on a New Business Venture

[Interview with me in HPCwire, August 14, 2008]

by John E. West, for HPCwire

... responding to changes in computing at both ends of the spectrum, [NAG] is positioning itself as the place to go, not just for shrink-wrapped libraries, but also for education and expertise in how to program in parallel, and even for expert advice on how to buy, build and run your own supercomputer. HPCwire talked to Andrew Jones, vice-president of HPC business at NAG, on what he has in mind for this new business and how he sees the future of HPC and parallel programming shaping up ...

http://www.hpcwire.com/features/NAG_Embarks_on_a_New_Business_Venture.html?viewAll=y