
Friday, 22 May 2020

What makes a Supercomputer Centre a Supercomputer Centre?

When is a Supercomputer Center not a Supercomputer Center?

The world of HPC has always been a place of rapid change in technology with slower change in business models and skill profiles, but what actually makes a supercomputer center a supercomputer center?

Tin (or Silicon maybe)

Is it having a big HPC system? How big counts? Does it matter what type of "big" system you have?

Does it matter if there is not one big supercomputer but instead a handful of medium-sized ones of different types?

Does it count if the supercomputers are across the street, or in a self-owned/operated datacentre on the other side of town? What if the supercomputers are located hundreds of miles away from the HPC team (e.g., to get cheap power & cooling)?

Who and How

Or is it having a team of HPC experts able to help users? How many experts? What level of expertise counts? How many have to be RSE (Research Software Engineer) types?

Is it having the vision and processes to recognise they are primarily a service provider to their users ("customers") rather than thinking of themselves mainly as a buyer of HPC kit?

What if you mainly have AI workloads rather than "traditional" HPC? What if you only run many small simulation jobs and no simulations that span thousands of cores? What if users only ever submit jobs via web portals and never log in to the supercomputers directly?

Is it essential to have a .edu, .gov, .ac.uk etc. address? Or can .com be a supercomputer center too?

This but not that?

If you have no supercomputers of your own, but have 50 top class HPC experts who work with users on other supercomputers and also research future technologies - is that a supercomputer center?

If you have a very large HPC system but only the bare minimum of HPC staff and no technology R&D efforts - is that a supercomputer center?

Which of the last two adds more value to your users?

Declare or Earn?

Is it merely a matter of declaration - "we are a supercomputer center"? Or is it a matter of other supercomputer centers accepting you as a peer? But then who counts as other supercomputer centers to accept you? What if some do and some don't?

Is there a difference between a supercomputer center and a supercomputing center?

What do you think? And does your answer depend on whether you are a user, or work at a "traditional" supercomputer center, or a new type of supercomputing center, or an HPC vendor, or from outside the HPC field?

Friday, 29 September 2017

Finding a Competitive Advantage with High Performance Computing

High Performance Computing (HPC), or supercomputing, is a critical enabling capability for many industries, including energy, aerospace, automotive, manufacturing, and more. However, one of the most important aspects of HPC is that it is not only an enabler; it is often also a differentiator – a fundamental means of gaining a competitive advantage.

Differentiating with HPC


Differentiating (gaining a competitive advantage) through HPC can include:
  • faster - complete calculations in a shorter time;
  • more - complete more computations in a given amount of time;
  • better - undertake more complex computations;
  • cheaper - deliver computations at a lower cost;
  • confidence - increase the confidence in the results of the computations; and 
  • impact - exploit the results of the computations effectively in the business.
These are all powerful business benefits, enabling quicker and better decision making, reducing the cost of business operations, better understanding risk, supporting safety, etc.
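
To make the first three of those concrete, here is a minimal sketch, using entirely made-up numbers, of how "faster", "more" and "cheaper" could be quantified when comparing a current system against a candidate one:

```python
# Illustrative only - all figures are assumptions, not benchmark data.
baseline_hours_per_job = 12.0   # runtime per job on the current system
new_hours_per_job = 4.0         # runtime per job on a candidate system
baseline_cost_per_hour = 3.0    # assumed all-in cost per node-hour (current)
new_cost_per_hour = 5.0         # assumed all-in cost per node-hour (candidate)

# "faster": speedup in time to solution
speedup = baseline_hours_per_job / new_hours_per_job

# "more": jobs completed in a fixed window (one week of wall-clock time)
window_hours = 7 * 24
jobs_baseline = window_hours / baseline_hours_per_job
jobs_new = window_hours / new_hours_per_job

# "cheaper": cost per completed job
cost_baseline = baseline_hours_per_job * baseline_cost_per_hour
cost_new = new_hours_per_job * new_cost_per_hour

print(f"faster:  {speedup:.1f}x speedup")
print(f"more:    {jobs_baseline:.0f} -> {jobs_new:.0f} jobs per week")
print(f"cheaper: ${cost_baseline:.2f} -> ${cost_new:.2f} per job")
```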

Strategic delivery choices are the broad decisions about how to do/use HPC within an organization. This might include:
  • choosing between cloud computing and traditional in-house HPC systems (or points on a spectrum between these two extremes);
  • selecting between a cost-driven hardware philosophy and a capability-driven hardware philosophy;
  • deciding on a balance of internal capability and externally acquired capability;
  • choices on the balance of investment across hardware, software, people and processes.
The answers to these strategic choices will depend on the environment (market landscape, other players, etc.), how and where you want to navigate that environment, and why. This is an area where our consulting customers benefit from our expertise and experience. If I were to extract a core piece of advice from those many consulting projects, it would be: "explicitly make a decision rather than drift into one, and document the reasons, risk accepted, and stakeholder buy-in".
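
As a hedged, back-of-the-envelope illustration of the first of those choices, the sketch below compares an assumed in-house cost per utilized node-hour against an assumed cloud price. Every figure is a placeholder to be replaced with your own measured data, and a real comparison would also cover data egress, staff, licences, utilization risk, and the value of time-to-solution:

```python
# Cloud vs in-house cost sketch. All inputs are illustrative assumptions.

def in_house_cost_per_node_hour(capex, lifetime_years, power_kw,
                                power_cost_per_kwh, ops_cost_per_year,
                                utilization):
    """Crude lifetime cost per utilized node-hour for an owned system."""
    hours = lifetime_years * 365 * 24
    energy_cost = hours * power_kw * power_cost_per_kwh
    total_cost = capex + energy_cost + ops_cost_per_year * lifetime_years
    return total_cost / (hours * utilization)

inhouse = in_house_cost_per_node_hour(
    capex=15000.0,             # assumed purchase price per node
    lifetime_years=4,
    power_kw=0.5,              # assumed average draw incl. cooling overhead
    power_cost_per_kwh=0.12,
    ops_cost_per_year=1000.0,  # assumed share of datacentre and admin costs
    utilization=0.8,           # fraction of lifetime doing useful work
)

cloud_per_node_hour = 2.50     # assumed on-demand price for a comparable instance

print(f"in-house: ${inhouse:.2f} per utilized node-hour")
print(f"cloud:    ${cloud_per_node_hour:.2f} per node-hour")
```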

Which HPC technology?


A key means of differentiating with HPC, and one of the most visible, is through the choice of hardware technologies used and at what scale. The HPC market is currently enjoying (or is it suffering?) a broader range of credible hardware technology options than in the previous few years.

Monday, 31 July 2017

HPC Getting More Choices - Technology Diversity

HPC has been easy for a while ...


When buying new workstations or personal computers, it is easy to adopt the simple mantra that a newer processor or higher clock frequency means your application will run faster. It is not totally true, but it works well enough. However, with High Performance Computing (HPC), it is more complicated.

HPC works by using parallel computing – the use of many computing elements together. The nature of these computing elements, how they are combined, the hardware and software ecosystems around them, and the challenges for the programmer and user vary significantly – between products and across time. Since HPC works by bringing together many technology elements, the interaction between those elements becomes as important as the elements themselves.
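
As a toy illustration of that idea (production HPC codes typically use MPI and OpenMP rather than anything like this), here is a minimal sketch that spreads independent pieces of work across several processes and combines the partial results:

```python
# Toy parallelism: independent chunks of work are farmed out to worker
# processes and the partial results are combined at the end.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    workers = 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)   # ensure the final chunk reaches n

    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```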

Whilst there has always been a variety of HPC technology solutions, there has been a strong degree of technical similarity across the majority of HPC systems in the last decade or so. This has meant that (i) code portability between platforms has been relatively easy to achieve and (ii) attention to on-node memory bandwidth (including cache optimization) and inter-node scaling aspects would get you a long way towards a single code base that performs well on many platforms.
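
To illustrate why attention to on-node memory access patterns pays off, here is a small sketch comparing a traversal along a NumPy array's contiguous axis with one that strides against it. The exact timings are entirely machine-dependent; the point is the access pattern, not the numbers:

```python
# Cache/memory-bandwidth sensitivity: summing a 2D array along its
# contiguous (row-major) axis versus striding across rows.
import time
import numpy as np

a = np.random.rand(4000, 4000)   # C-order: rows are contiguous in memory

t0 = time.perf_counter()
row_wise = sum(a[i, :].sum() for i in range(a.shape[0]))   # contiguous reads
t1 = time.perf_counter()
col_wise = sum(a[:, j].sum() for j in range(a.shape[1]))   # strided reads
t2 = time.perf_counter()

print(f"contiguous traversal: {t1 - t0:.3f} s")
print(f"strided traversal:    {t2 - t1:.3f} s")
```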

Increase in HPC technology diversity


However, there is a marked trend of an increase in diversity of technology options over the last few years, with all signs that this is set to continue for the next few years. This includes breaking the near-ubiquity of Intel Xeon processors, the use of many-core processors for the compute elements, increasing complexity (and choice) of the data storage (memory) and movement (interconnect) hierarchies of HPC systems, new choices in software layers, new processor architectures, etc.

This means that unless your code is adjusted to effectively exploit the architecture of your HPC system, your code may not run faster at all on the newer system.

It also means HPC clusters are proving themselves where custom supercomputers might previously have been the only option, and custom supercomputers are delivering value where commodity clusters might previously have been the default.

Wednesday, 28 June 2017

Is cloud inevitable for HPC?

In 2009, I wrote this article for HPC Wire: "2009-2019: A Look Back on a Decade of Supercomputing", pretending to look back on supercomputing between 2009 and 2019 from the perspective of beyond 2020.

The article opens with the idea that owning your own supercomputer was a thing of the past:
"As we turn the decade into the 2020s, we take a nostalgic look back at the last ten years of supercomputing. It’s amazing to think how much has changed in that time. Many of our older readers will recall how things were before the official Planetary Supercomputing Facilities at Shanghai, Oak Ridge and Saclay were established. Strange as it may seem now, each country — in fact, each university or company — had its own supercomputer!"
I got this bit wrong:
"And then the critical step — businesses and researchers finally understood that their competitive asset was the capabilities of their modelling software and user expertise — not the hardware itself. Successful businesses rushed to establish a lead over their competitors by investing in their modelling capability — especially robustness (getting trustable predictions/analysis), scalability (being able to process much larger datasets than before) and performance (driving down time to solutions)."
Hardware still matters - in some cases - as a means of gaining a competitive advantage in performance or cost [We advise our HPC consulting customers on whether that is true for them, and on how to ensure the operational and strategic advantage is measured and optimized].

And, of course, my predicted rush to invest in software and people hasn't quite happened yet.

Towards the end, I predicted three major computing providers, from which most people got their HPC needs:
"We have now left the housing and daily care of the hardware to the specialists. The volume of public and private demand has set the scene for strong HPC provision into the future. We have the three official global providers to ensure consumer choice, with its competitive benefits, but few enough providers to underpin their business cases for the most capable possible HPC infrastructure."
Whilst my predictions were a little off in timing, some could be argued to have come true e.g., the rise to the top of Chinese supercomputing, the increasing likelihood of using someone else's supercomputer rather than buying your own (even if we still call it cloud), etc.

With the ongoing debate around cloud vs in-house HPC (where I am desperately trying to inject some impartial debate to balance the relentless and brash cloud marketing), re-visiting this article made an interesting trip down memory lane for me. I hope you might enjoy it too.

As I recently posted on LinkedIn:
"Cloud will never be the right solution for everyone/every use case. Cloud is rightly the default now for corporate IT, hosted applications, etc. But, this cloud-for-everything is unfortunately, wrongly, extrapolated to specialist computing (e.g.,  high performance computing, HPC), where cloud won't be the default for a long time.
For many HPC users, cloud is becoming a viable path to HPC, and very soon perhaps even the default option for many use cases. But, cloud is not yet, and probably never will be, the right solution for everyone. There will always be those who can legitimately justify a specialized capability (e.g., a dedicated HPC facility) rather than a commodity solution (i.e., cloud, even "HPC cloud"). The reasons for this might include better performance, specific operational constraints, lower TCO, etc. that only specialized facilities can deliver. 
The trick is to get an unbiased view for your specific situation, and you should be aware that most of the commentators on cloud are trying to sell cloud solutions or related services, so are not giving you impartial advice!"
[We provide that impartial advice on cloud, measuring performance, TCO, and related topics to our HPC consulting customers]


@hpcnotes

Wednesday, 21 June 2017

Deeply learning about HPC - ISC17 day 3 summary - Wednesday evening

For most of the HPC people gathered in Frankfurt for ISC17, Wednesday evening marks the end of the hard work, the start of the journey home for some, already home for others. A few hardy souls will hang on until Thursday for the workshops. So, as you relax with a drink in Frankfurt, trudge through airports on the way home, or catch up on the week's emails, here's my final daily summary of ISC17, as seen through the lens of twitter, private conversations, and the HPC media.

This follows my highlights blogs from Monday "Cutting through the ISC17 clutter"  (~20k views so far) and Tuesday "ISC17 information overload" (~4k views so far).

So what sticks out from the last day, and what sticks out from the week overall?

Deep Learning

Wednesday was touted by ISC as "deep learning day". If we follow the current convention (inaccurate but seemingly pervasive) of using deep learning, machine learning, AI (nobody actually spells out artificial intelligence), big data, data analytics, etc. as totally interchangeable terms (why let facts get in the way of good marketing?), then Wednesday was indeed deep learning day, judging by tweet references to one or more of the above. However, I struggle to nail down exactly what I am supposed to have learnt about HPC and deep learning from today's content. Perhaps you had to be there in person (there is a reason why attending conferences is better than watching via twitter).

I think my main observations are:
  • DL/ML/AI/BigData/analytics/... is a real and growing part of the HPC world - both in terms of "traditional" HPC users looking at these topics, and new users from these backgrounds peering into the HPC community to seek performance advantages.
  • A huge proportion of the HPC community doesn't really know what DL/ML/... actually means in practice (which software, use case, workflow, skills, performance characteristics, ...).
  • It is hard to find the reality behind the marketing of DL/ML/... products, technologies, and "success stories" of the various vendors. But, hey, what's new? - I was driven to deal with this issue for GPUs and cloud in my recent webinar "Dissecting the myths of Cloud and GPUs for HPC".
  • Between all of the above, I still feel there is a huge opportunity being missed: for users in either community and for the technology/product providers. I don't have the answers though.

Snippets

Barcelona (BSC) has joined other HPC centers (e.g., Bristol Isambard, Cambridge Peta5, ...) in buying a bit of everything to explore the technology diversity for future HPC systems: "New MareNostrum Supercomputer Reflects Processor Choices Confronting HPC Users".

Exascale is now a world-wide game: China, European countries, USA, Japan are all close enough to start talking about how they might get to exascale, rather than merely visions of wanting to get there.

People are on the agenda: growing the future HPC talent, e.g., the ISC STEM Student Day & Gala, the Student Cluster Competition, gender diversity (Women-in-HPC activities), and more.

Wrapping up

There are some parts of ISC that have been repeated over the years due to demand. Thomas Sterling's annual "HPC Achievement & Impact" keynote that traditionally closes ISC (being presented as I write this) is an excellent session and goes a long way towards justifying the technical program registration fee.

2017 sees the welcome return of Addison Snell's "Analyst Crossfire". With a great selection of questions, fast pace, and well chosen panel members, this is always a good event. Of course, I am biased towards the ISC11 Analyst Crossfire being the best one!

I'll join Addison's fun with my "one up, one down" for ISC17. Up is CSCS, not merely for Piz Daint knocking the USA out of the top 3 of the Top500, but for a sustained program of supercomputing over many years, culminating in this leadership position. Down is Intel - it brings a decent CPU to market in Skylake but gets backlash for pricing, faces uncertainty over the CORAL Aurora project, and, in spite of a typically high profile presence at the show, sees re-emerging rival AMD take a good share of the twitter & press limelight with EPYC.


Until next time

That's all from me for ISC17. I'll be back with more blogs over the next few weeks, based on my recent conference talks (e.g., "Six Trends in HPC for Engineers" and "Measuring the Business Impact of HPC").

You can catch up with me in person at the SEG Annual Meeting, EAGE HPC Workshop (I'm presenting), the TACC-NAG Training Institute for Managers, and SC17 (I can reveal we will be delivering tutorials again, including a new one - more details soon!).

In the meantime, interact with me on twitter @hpcnotes, where I provide pointers to key HPC content, plus my comments and opinions on HPC matters (with a bit of F1 and travel geekery thrown in for fun).

Safe travels,

Tuesday, 20 June 2017

ISC17 information overload - Tuesday afternoon summary

I hope you've been enjoying a productive ISC17 if you are in Frankfurt, or if not have been able to keep up with the ISC17 news flow from afar.

My ISC17 highlights blog post from yesterday ("Cutting through the clutter of ISC17: Monday lunchtime summary") seems to have collected over 11,000 page-views so far. Since this hpcnotes blog normally only manages several hundred to a few thousand page views per post, I'm assuming a bot somewhere is inflating the stats. However, there are probably enough real readers to make me write another one. So here goes - my highlights of ISC17 news flow as of Tuesday mid-afternoon.

Monday, 19 June 2017

Cutting through the clutter of ISC17: Monday lunchtime summary

ISC, the HPC community's 2nd biggest annual gathering, is fully underway in Frankfurt now. ISC week is characterized by a vibrant twitter flood (#ISC17), topped up with a deluge of press releases (a small subset of which are actually news), plus a plethora of news and analysis pieces in the HPC media. And, of course, anyone physically present at ISC has presentations, meetings, and exhibitors further demanding their attention.

I go to ISC almost every year. It is a valuable use of time for anyone in the HPC community, or anyone who uses or has an interest in HPC, even if they don't see themselves as part of that community. However, I have decided not to attend this year, due to other commitments. I will still keep an eye on the "news" throughout the week and post a handful of summary blogs (like this one), which might be a useful catch-up on the "news" so far, whether you are attending ISC or watching from afar.

Saturday, 26 November 2016

Secrets, lies, women and money: the definitive summary of SC16 - Part 1

Just over a week ago 11,000 people were making their way home from the biggest supercomputing event of the year – SC16 in Salt Lake City. With so much going on at SC, even those who were there in person likely still missed a huge proportion of what happened. It’s simply too busy to keep up with all the news during the week, too many events/talks/meetings happening in parallel, and much of the interesting stuff only gets talked about behind closed doors or through informal networking.

There were even a couple of top-notch tutorials on HPC acquisition and TCO/funding models :-)

Amongst this productive chaos, I was flattered to be told several times during SC that people find my blogs worth reading, and that they hadn’t seen any recently. I guess the subtext was “it’s about time I wrote some more”. So, I’ll make an effort to blog more often again. Starting with my thoughts on SC16 itself.

As ever, while I do soften the occasional punch in my writing (not usually in person though), there remains the possibility that some readers won’t like some of my opinions, and there’s always the risk of me straying into controversy in places.

I've got four topics to cover: secrets, lies, women and money.

Monday, 9 November 2015

SC15 Preview

SC15 - the biggest get-together of the High Performance Computing (HPC) world - takes place next week in Austin, TX. Around 10,000 buyers, users, programmers, managers, business development people, funders, researchers, media, etc. will be there.

With a large technical program, an even larger exhibition, and plenty of associated workshops, product launches, user groups, etc., SC15 will dominate the world of HPC for a week, plus most of this week leading up to it. It is one of the best ways for HPC practitioners to share experiences, learn about the latest advances, and build collaborations and business relationships.

So, to whet your appetites, here is the @hpcnotes preview of SC15 - what I think might be the key topics, things to look out for, what not to miss, etc.

New supercomputers

It's always one of the aspects of SC that grabs the media and attendee attention the most. Which big new supercomputers will be announced? Will there be a new occupier of the No.1 spot on the Top500 list? Usually I have some idea of what new supercomputers are coming up before they are public, but this year I have no idea. My guess? No new No.1. A few new Top20 machines. So which one will win the news coverage?

New products

In spite of the community repeatedly acknowledging that the whole system is important - memory, interconnect, I/O, software, architecture, packaging, etc., judging by the media attention and informal conversations, we still seem to get most excited by the processors.

Monday, 5 October 2015

Previous SC content ...

I'll write some new content for SC15 Austin soon but while you are waiting, here are two of my previous writings on SC:
Enjoy!

Thursday, 10 October 2013

Supercomputing - the reality behind the vision

My opinion piece "Supercomputing - the reality behind the vision" was published today in Scientific Computing World, where I:
  • liken a supercomputer to a "pile of silicon, copper, optical fibre, pipework, and other heavy hardware [...] an imposing monument that politicians can cut ribbons in front of";
  • describe system architecture as "the art of balancing the desires of capacity, performance and resilience against the frustrations of power, cooling, dollars, space, and so on";
  • introduce software as magic and infrastructure and a virtual knowledge engine;
  • and note that "delivering science insight or engineering results from [supercomputing] requires users";
  • and propose that we need a roadmap for people just as much as for the hardware technology.

Read the full article here: http://www.scientific-computing.com/news/news_story.php?news_id=2270.


Thursday, 18 July 2013

An early blog about SC13 Denver - just for fun ...

As SC13 registration opens this week, it occurs to me both how far away SC13 is (a whole summer and several months after that) and how close SC13 is (only a summer and a month or two). It got me thinking how far ahead people plan for SC. I have heard of people who book hotels for the next SC as soon as they get home from the previous SC (to secure the best deal/hotel/etc.). I have also heard stories of those who still have not booked flights only days before SC.

So, just for fun - how far ahead do you plan your travel for SC? Are you the kind of HPC person who books SC13 as soon as SC12 has ended? Or do you leave SC13 travel booking until a week or two before SC13? Of course, it may not be up to you - many attendees need to get travel authority etc. and this is often hard to get a long time in advance.

Please complete the survey here - http://www.surveymonkey.com/s/3MRSYYH

Once I have enough responses, I will write another blog revealing the results.

Enjoy!

[PS - this survey is not on behalf of, or affiliated with, either the SC13 organisers or anyone else - it's just a curiosity and to share in a blog later.]

Friday, 11 January 2013

Predictions for 2013 in HPC

As we stumble into the first weeks of 2013, it is the season for predictions about what the coming year will bring. In my case, following my recent review of HPC in 2012, I get to make some predictions for the world of HPC in 2013.


Buzzwords

First up, this year’s buzzword for HPC marketing and technology talks. Last year was very much the year of “Big Data” as a buzzword. As that starts to become old hat (and real work), a new buzzword will be required. Cynical? My prediction is that this year will see Big Data still present in HPC discussions and real usage, but it will diminish in use as a buzzword. 2013 will probably spawn two buzzwords.

The first buzzword will be “energy-efficient computing”. We saw the use of this a little last year but I think it will become the dominant buzzword this year. Most technical talks will include some reference to energy-efficient computing (or the energy cost of the solution, etc.). All marketing departments will swing into action to brand their HPC products and services as energy-efficient computing – much as they did with Big Data and, before that, Cloud Computing, and so on. Yes, I’m being a tad cynical about the whole thing. I’m not suggesting that energy efficiency is not important – in fact it is essential to meet our ambitions in HPC. I’m merely noting its impending over-use as a theme. And of course, energy-efficient computing is not the same as Green Computing – after all, that buzzword is several years old now.

Energy efficiency will be driven by the need to find lower-power solutions for exascale-era supercomputers (not just exascale systems but also the small departmental petascale systems that will be expected at that time – not to mention consumer-scale devices). It is worth noting that optimizing for power and optimizing for energy may not be the same thing. The technology will also drive the debate – especially the anticipated contest between GPUs and Xeon Phi. And politically, "energy efficient computing" sounds better for attracting investment than "HPC technology research".
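
A tiny worked example, with made-up numbers, of why optimizing for power and optimizing for energy can pull in different directions:

```python
# Made-up numbers: reducing clock frequency lowers power draw but lengthens
# the runtime, so the energy for the job (power x time) may not improve.
configs = {
    "full clock":    {"power_kw": 300.0, "runtime_h": 10.0},
    "reduced clock": {"power_kw": 220.0, "runtime_h": 14.5},
}

for name, cfg in configs.items():
    energy_kwh = cfg["power_kw"] * cfg["runtime_h"]
    print(f"{name}: {cfg['power_kw']:.0f} kW x {cfg['runtime_h']:.1f} h "
          f"= {energy_kwh:.0f} kWh")
```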

Thursday, 20 December 2012

A review of 2012 in supercomputing - Part 2

This is Part 2 of my review of the year 2012 in supercomputing and related matters.

In Part 1 of the review I re-visited the predictions I made at the start of 2012 and considered how they became real or not over the course of the year. This included cloud computing, Big Data (mandatory capitalization!), GPU, MIC, and ARM - and software innovation. You can find Part 1 here: http://www.hpcnotes.com/2012/12/a-review-of-2012-in-supercomputing-part.html.

Part 2 of the review looks at the themes and events that emerged during the year. As in Part 1, this is all thoroughly biased, of course, towards things that interested me throughout the year.

The themes that stick out in my mind from HPC/supercomputing in 2012 are:
  • The exascale race stalls
  • Petaflops become "ordinary"
  • HPC seeks to engage a broader user community
  • Assault on the Top500

The exascale race stalls

The global race towards exascale supercomputing has been a feature of the last few years. I chipped in myself at the start of 2012 with a debate on the "co-design" mantra.

Confidently tracking the Top500 trend lines, the HPC community had pinned 2018 as the inevitable arrival date of the first supercomputer with a peak performance in excess of 1 exaflops. [Note the limiting definition of the target - loosely coupled computing complexes with aggregate capacity greater than exascale will probably turn up before the HPC machines - and peak performance in FLOPS is the metric here - not application performance or any assumptions of balanced systems.]
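
For scale, peak FLOPS really is just arithmetic: cores multiplied by clock rate multiplied by FLOPs per cycle, summed over the machine. Here is a sketch with illustrative figures (not any specific vendor's parts):

```python
# Peak performance arithmetic. All figures are illustrative assumptions.
cores_per_node = 64
clock_ghz = 2.0
flops_per_cycle_per_core = 32   # e.g., wide vector units with fused multiply-add

peak_per_node = cores_per_node * clock_ghz * 1e9 * flops_per_cycle_per_core
target_flops = 1e18             # 1 exaflops peak

nodes_needed = target_flops / peak_per_node
print(f"peak per node: {peak_per_node / 1e12:.1f} TFLOPS")
print(f"nodes needed for 1 exaflops peak: {nodes_needed:,.0f}")
```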

Some more cautious folk hedged a delay into their arrival dates and talked about 2020. However, it became apparent throughout 2012 that the US government did not have the appetite (or political support) to commit to being the first to deploy an exascale supercomputer. Other regions of the world have - like the USA government - stated their ambitions to be among the leaders in exascale computing. But no government has yet stood up and committed to a timetable nor to being the first to get there. Critically, neither has anyone committed the required R&D funding needed now to develop the technologies [hardware and software] that will make exascale supercomputing viable.

The consensus at the end of 2012 seems to be towards a date of 2022 for the first exascale supercomputer - and there is no real consensus on which country will win the race to have the first exascale computer.

Perhaps we need to re-visit our communication of the benefits of more powerful supercomputers to the wider economy and society (what is the point of supercomputers?). Communicating the value to society and describing the long-term investment requirements is always a fundamental need of any specialist technology, but it becomes even more critical during the testing fiscal conditions (and resulting political pressures) that governments face right now.


Tuesday, 18 December 2012

A review of 2012 in supercomputing - Part 1

It's that time of year when doing a review of the last twelve months seems like a good idea for a blog topic. (To be followed soon after by a blog of predictions for the next year.)
So, here goes - my review of the year 2012 in supercomputing and related matters. Thoroughly biased, of course, towards things that interested me throughout the year.


Predictions for 2012

Towards the end of 2011 and in early 2012 I made various predictions about HPC in 2012. Here are the ones I can find or recall:
  • The use of "cloud computing" as the preferred marketing buzzword used for large swathes of the HPC product space would come to an end.
  • There would be an onslaught of "Big Data" (note the compulsory capital letters) as the marketing buzzword of choice for 2012 - to be applied to as many HPC products as possible - even if with only a tenuous relevance (just like cloud computing before it - and green computing before that - and so on ...)
  • There would be a vigorous ongoing debate over the relative merits and likely success of GPUs (especially from NVidia) vs. Intel's MIC (now called Xeon Phi).
  • ARM would become a common part of the architecture debate alongside x86 and accelerators.
  • There would be a growth in the recognition that software and people matter just as much as the hardware.

Thursday, 8 November 2012

HPC notes at SC12

I'll be at SC12 next week.

I have a mostly full schedule in advance but I always leave a little time to explore the show floor, and to meet new people or old friends.

If you are at SC12 too, you might be able to find me via the NAG booth (#2431) - or walking the streets between meetings - or at one of the networking receptions.


If you are a twitter person - you can find me at @hpcnotes (but be warned I won't be tweeting most of the HPC news during the show - @HPC_Guru is much better for that).

Hope to see some of you there. And remember my tribute from last year to those not attending SC.

Tuesday, 6 November 2012

HPC fun for SC12

I've previously written some light-hearted but partly serious pieces for the main supercomputing events.

I'm working on one for SC12 too - again to be published in HPC Wire - but in the meantime, here are pointers for the SC11 and ISC11 articles:

Friday, 12 October 2012

The making of “1000x” – unbalanced supercomputing

I have posted a new article on the NAG blog: The making of "1000x" – unbalanced supercomputing.

This goes behind my article in HPCwire ("Chasing 1000x: The future of supercomputing is unbalanced"), where I explain the pun in the title and dip into some of the technology issues affecting the next 1000x in performance.

Tuesday, 2 October 2012

The first mention of SC12

It's that time of year again. SC has started to drift into my inbox and phone conversations with increasing regularity - here comes Supercomputing 2012 in Salt Lake City. Last year, in the run up to SC11 in Seattle, I wrote the SC11 diary - blogging every few days on my preparations and thoughts for the biggest annual event of the supercomputing world.

I'm not sure I'll do such a diary again this year (unless by popular demand - not likely!). However, I will be writing some articles for some publications (HPC Wire and others - see my previous articles) in the coming weeks which will set the scene for SC from my point of view - burning issues I hope will be debated in the community, key technology areas I will be watching, and so on.

In the meantime, if you crave SC reading material, you might amuse yourself by reading my previous fun at SC time (e.g. The top ten myths of SC - in HPC Wire for SC11) or you might even want to translate my fun from ISC (Are you an ISC veteran?) to new meanings at SC.

If you want more serious content, then browse on this blog site (e.g. tagged "events") or on the NAG Blog (e.g. tagged "HPC").

If you find nothing you like - drop me a comment below or via twitter and I'll see what I can do to address the topic you are interested in!

Thursday, 2 August 2012

What is the point of supercomputers?

Maybe it seems an odd question to ask on a blog dedicated to High Performance Computing (HPC). But it is good to question why we do things – hopefully leading us to a clearer justification for investing money, time and effort. Ideally, this would also enable better delivery – the “how” supporting the “why” – focusing on the best processes, technologies, etc. to achieve the goals identified in the justification.

So, again, why supercomputing? Perhaps you think the answer is obvious – supercomputing enables modelling and simulation to be done faster than with normal computers, or enables bigger problems to be solved.