Friday, 12 October 2012

The making of “1000x” – unbalanced supercomputing

I have posted a new article on the NAG blog: The making of "1000x" – unbalanced supercomputing.

This goes behind the scenes of my article in HPCwire ("Chasing 1000x: The future of supercomputing is unbalanced"), where I explain the pun in the title and dip into some of the technology issues affecting the next 1000x in performance.

Tuesday, 2 October 2012

The first mention of SC12

It's that time of year again. SC has started to drift into my inbox and phone conversations with increasing regularity - here comes Supercomputing 2012 in Salt Lake City. Last year, in the run-up to SC11 in Seattle, I wrote the SC11 diary - blogging every few days on my preparations and thoughts for the biggest annual event of the supercomputing world.

I'm not sure I'll do such a diary again this year (unless by popular demand - not likely!). However, I will be writing articles for several publications (HPCwire and others - see my previous articles) in the coming weeks which will set the scene for SC from my point of view - burning issues I hope will be debated in the community, key technology areas I will be watching, and so on.

In the meantime, if you crave SC reading material, you might amuse yourself by reading my previous fun at SC time (e.g. The top ten myths of SC - in HPCwire for SC11), or you might even want to translate my fun from ISC (Are you an ISC veteran?) into new meanings for SC.

If you want more serious content, then browse on this blog site (e.g. tagged "events") or on the NAG Blog (e.g. tagged "HPC").

If you find nothing you like - drop me a comment below or via twitter and I'll see what I can do to address the topic you are interested in!

Thursday, 2 August 2012

What is the point of supercomputers?

Maybe it seems an odd question to ask on a blog dedicated to High Performance Computing (HPC). But it is good to question why we do things – hopefully leading us to a clearer justification for investing money, time and effort. Ideally, this would also enable better delivery – the “how” supporting the “why” – focusing on the best processes, technologies, etc. to achieve the goals identified in the justification.

So, again, why supercomputing? Perhaps you think the answer is obvious – supercomputing enables modelling and simulation to be done faster than with normal computers, or enables bigger problems to be solved.

Friday, 15 June 2012

Supercomputers are for dreams

I was invited to the 2012 NCSA Annual Private Sector Program (PSP) meeting in May. In my few years of attending, this has always been a great meeting (attendance by invitation only), with an unusually high concentration of real HPC users and managers from industry.

NCSA have recently released streaming video recordings of the main sessions - the videos can be found as links on the Annual PSP Meeting agenda page.

Bill Gropp chaired a panel session on "Modern Software Implementation", with Gerry Labedz and me as panellists.

The full video (~1 hour) is here, but I have also prepared a breakdown of the panel discussion below in this blog post.


Wednesday, 6 June 2012

Some fun for ISC12

I have written a guest blog post for the ISC'12 website - "Are you an ISC veteran?". The article is intended to raise a few serious observations amongst the fun.

I also wrote an earlier guest blog post for the ISC'12 website - "Is co-design for exascale computing a false hope?"

I've added these two links to my page on this site, "Interviews, Quotes, Articles" (which lists my various articles, interviews, etc. in other locations around the internet).


Wednesday, 30 May 2012

The power of supercomputers - energy, exascale and elevators

Paul Henning has written on his blog (HPC Ruminations) about the growing issue of power requirements for large-scale computing. Paul's blog post - "Familiarity Breeds Complacency" - is partly in response to my article at HPCwire - "Exascale: power is not the problem" - and my follow-up discussion here - "Supercomputers and other large science facilities".

Paul makes several good points and his post is well worth reading. He ends with an observation that I've noted before (in my own words):

One of supercomputing's biggest strengths - its ability to help almost all areas of science and engineering - is also one of its greatest weaknesses, because there is a portfolio of cases rather than a single compelling champion to drive attention and investment.

PS - I've added Paul's new blog to my list of HPC blogs and news sites.

Friday, 25 May 2012

Looking ahead to ISC'12

I have posted my preview of ISC'12 Hamburg - the summer's big international conference for the world of supercomputing - over on the NAG blog. I will be attending ISC'12, along with several of my NAG colleagues. My blog post discusses these five key topics:
  • GPU vs MIC vs Other
  • What is happening with Exascale?
  • Top 500, Top 10, Tens of PetaFLOPS
  • Finding the advantage in software
  • Big Data and HPC 
Read more on the NAG blog ...

Thursday, 29 March 2012

Co-design for exascale

I wrote a blog post for the ISC website on co-design for exascale.

This has also been mentioned on InsideHPC here.

I made similar comments at the panel hosted by Thomas Sterling at the HPCC conference in Newport, RI earlier this week.

The video of this panel should be posted at InsideHPC soon.

Thursday, 9 February 2012

HPC Insiders - The Newport Gathering

The warm-up for the annual HPCC meeting in Newport, RI (March 26-28) has started - Are You an HPC Industry Insider?

"The National High Performance Computing and Communications Conference (NHPCC) will highlight several exciting changes this year. Also known as the Newport Conference, the elite gathering that started 26 years ago as a one-day event to bring vendors together with government agency personnel has expanded its focus this year to include a more global perspective."

"Another significant change this year is the emphasis on manufacturing and competitiveness."

I have a page on this blog site listing the main HPC events of the year. Many people have rightly remarked that the HPC community really is that - a community - and that there is still a relatively high degree of connection between the various practitioners. In other words, despite its growing size and global reach, it feels like a small community. People know each other. Consequently, networking, whether technical or commercial, goes a long way to helping your business. Whatever your scale of technical computing, from multicore workstations to multi-thousand-node supercomputers, getting involved with the active HPC community can help you with your parallel computing goals. Online resources can help, but by far the most effective way of benefiting from the wider HPC community is by participating at the right events.

I enjoy this Newport event - I think it is one of the best annual events for the HPC community - and am looking forward to great discussions and to meeting many friends from the international HPC community. See you there!

Thursday, 19 January 2012

Cloud computing or HPC? Finding trends.

I posted "Cloud computing or HPC? Finding trends." on the NAG blog today. Some extracts ...
Enable innovation and efficiency in product design and manufacture by using more powerful simulations. Apply more complex models to better understand and predict the behaviour of the world around us. Process datasets faster and with more advanced analyses to extract more reliable and previously hidden insights and opportunities.
... and ...
High performance computing (HPC), supercomputing, computational science and engineering, technical computing, advanced computer modelling, advanced research computing, etc. The range of names/labels and the diversity of the audience involved mean that what is a common everyday term for many (e.g. HPC) is an unrecognised meaningless acronym to others - even though they are doing "HPC".
... and then I use some Google Trends plots to explore some ideas ...

Read the full article ...
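
As an aside, if you want to play with this kind of comparison yourself, here is a rough Python sketch that pulls similar trend data programmatically. It assumes the unofficial pytrends library plus pandas and matplotlib - none of which featured in the original post, which simply used the Google Trends website - and the search terms are just illustrative choices.

    # Rough sketch: compare relative search interest for a few HPC-related terms.
    # Assumes the unofficial pytrends library (pip install pytrends) and matplotlib.
    from pytrends.request import TrendReq
    import matplotlib.pyplot as plt

    terms = ["high performance computing", "cloud computing", "supercomputing"]

    pytrends = TrendReq(hl="en-US", tz=0)
    pytrends.build_payload(terms, timeframe="2004-01-01 2012-12-31")

    df = pytrends.interest_over_time()      # pandas DataFrame indexed by date
    df = df.drop(columns=["isPartial"])     # drop Google's partial-data flag

    df.plot(title="Relative search interest (Google Trends, 2004-2012)")
    plt.ylabel("Interest (0-100, relative)")
    plt.show()

Note that Google Trends reports relative interest (scaled 0-100 within the query), not absolute search volumes, so the plots are only useful for comparing terms against each other over time.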