
Friday, 22 May 2020

What makes a Supercomputer Centre a Supercomputer Centre?

When is a Supercomputer Center not a Supercomputer Center?

The world of HPC has always been a place of rapid change in technology with slower change in business models and skill profiles, but what actually makes a supercomputer center a supercomputer center?

Tin (or Silicon maybe)

Is it having a big HPC system? How big counts? Does it matter what type of "big" system you have?

Does it matter if there is not one big supercomputer but instead a handful of medium sized ones of different types?

Does it count if the supercomputers are across the street, or in a self-owned/operated datacentre on the other side of town? What if the supercomputers are located hundreds of miles away from the HPC team (e.g., to get cheap power & cooling)?

Who and How

Or is it having a team of HPC experts able to help users? How many experts? What level of expertise counts? How many have to be RSE (Research Software Engineer) types?

Is it having the vision and processes to recognise they are primarily a service provider to their users ("customers") rather than thinking of themselves mainly as a buyer of HPC kit?

What if you mainly have AI workloads rather than "traditional" HPC? What if you only run many small simulation jobs and no simulations that span thousands of cores? What if users only ever submit jobs via web portals and never log in to the supercomputers directly?

Is it essential to have a .edu, .gov, .ac.uk etc. address? Or can .com be a supercomputer center too?

This but not that?

If you have no supercomputers of your own, but have 50 top class HPC experts who work with users on other supercomputers and also research future technologies - is that a supercomputer center?

If you have a very large HPC system but only the bare minimum of HPC staff and no technology R&D efforts - is that a supercomputer center?

Which of the last two adds more value to your users?

Declare or Earn?

Is it merely a matter of declaration - "we are a supercomputer center"? Or is it a matter of other supercomputer centers accepting you as a peer? But then who counts as other supercomputer centers to accept you? What if some do and some don't?

Is there a difference between a supercomputer center and a supercomputing center?

What do you think? And does your answer depend on whether you are a user, or work at a "traditional" supercomputer center, or a new type of supercomputing center, or an HPC vendor, or from outside the HPC field?

Friday, 21 February 2020

Why cloud computing is like air travel

Some fun observations comparing the worlds of cloud computing and air travel ...

Why cloud computing is like air travel

  • The price depends on how far in advance you commit/buy.
  • Marketing focuses on the desirability of the posher seats / more powerful VMs, but advertises the prices of the cheapest seats / VMs.
  • Just like there are three main alliances (Oneworld, Star Alliance, SkyTeam) plus various independent airlines, there are three main cloud providers (Microsoft Azure, Google, Amazon) plus various specialist cloud providers.
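The advance-commitment point can be made concrete with a little arithmetic. The sketch below compares on-demand against one- and three-year committed pricing; every rate here is a hypothetical figure for illustration, not a real price from any provider:

```python
# Hypothetical hourly rates for one VM type; real prices vary by
# provider, region, and instance family.
ON_DEMAND = 1.00       # $/hour, pay-as-you-go
ONE_YEAR = 0.65        # $/hour equivalent with a 1-year commitment
THREE_YEAR = 0.45      # $/hour equivalent with a 3-year commitment

def annual_cost(rate_per_hour, utilisation):
    """Cost of running one VM for a year at a given utilisation (0-1)."""
    return rate_per_hour * 24 * 365 * utilisation

# At high utilisation, committing in advance wins -- just like
# buying the plane ticket early.
for label, rate in [("on-demand", ON_DEMAND),
                    ("1-year", ONE_YEAR),
                    ("3-year", THREE_YEAR)]:
    print(f"{label}: ${annual_cost(rate, 0.9):,.0f}/year at 90% utilisation")

# But a commitment is billed whether you fly or not: at low utilisation,
# pay-as-you-go can undercut a 1-year commitment billed for every hour.
print(f"on-demand at 20%: ${annual_cost(ON_DEMAND, 0.2):,.0f}/year")
print(f"1-year (always billed): ${annual_cost(ONE_YEAR, 1.0):,.0f}/year")
```

The crossover depends entirely on utilisation, which is why the ticket-buying analogy holds: committing early is only a bargain if you actually take the flight.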

Monday, 13 January 2020

A step into the future: HPC and cloud

I am delighted to announce that at the start of February, I will be joining the Microsoft Azure HPC engineering & product team.

The HPC world has experienced several big changes in technology or business model over the last few decades. Cloud computing is probably the next big change facing HPC, on both business model and technology fronts.

I have been privileged to have earned a reputation with a wide range of HPC buyers and technology vendors as an impartial and knowledgeable voice on both the business and technical aspects of HPC (including cloud) over the last few years. A major trend that I observed was the pace at which I had to keep updating my independent assessment of the readiness and value of cloud. Today, on-premises HPC is still a great option to deliver impact and value to users. However, I have watched the amazing journey of cloud towards a genuine option delivering new or better value to HPC users and buyers.

In particular, I have been impressed with the approach taken by Microsoft Azure towards the HPC space. This includes strong technology and product offerings, a sector-leading people strategy, and much more. Of course, the journey towards leadership of cloud for HPC is still in progress and I am excited to help drive that adventure by joining the Azure HPC team.

More details of our vision, and my own role, will be shared over the coming days and months. Follow me on Twitter (@hpcnotes) and LinkedIn (www.linkedin.com/in/andrewjones) to learn more.




Friday, 29 September 2017

Finding a Competitive Advantage with High Performance Computing

High Performance Computing (HPC), or supercomputing, is a critical enabling capability for many industries, including energy, aerospace, automotive, manufacturing, and more. However, one of the most important aspects of HPC is that HPC is not only an enabler, it is often also a differentiator – a fundamental means of gaining a competitive advantage.

Differentiating with HPC


Differentiating (gaining a competitive advantage) through HPC can include:
  • faster - complete calculations in a shorter time;
  • more - complete more computations in a given amount of time;
  • better - undertake more complex computations;
  • cheaper - deliver computations at a lower cost;
  • confidence - increase confidence in the results of the computations; and
  • impact - exploit the results of the computations effectively in the business.
These are all powerful business benefits, enabling quicker and better decision making, reducing the cost of business operations, better understanding risk, supporting safety, etc.
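To illustrate the trade-off between the "faster" and "cheaper" benefits above, here is a minimal sketch. The workload, parallel efficiency, and node cost are all invented figures, not data from any real system:

```python
# Hypothetical workload: a simulation that takes 100 hours on 1 node
# and scales with a constant 80% parallel efficiency (invented figures).
BASE_HOURS = 100.0
EFFICIENCY = 0.8
NODE_COST_PER_HOUR = 5.0   # $/node-hour, assumed

def time_to_solution(nodes):
    """Wall-clock hours for one job run on `nodes` nodes."""
    return BASE_HOURS / (nodes * EFFICIENCY)

def cost_per_job(nodes):
    """Node-hours billed for the job, times the hourly rate."""
    return time_to_solution(nodes) * nodes * NODE_COST_PER_HOUR

# 'Faster': more nodes shrink the time to solution...
# 'Cheaper': ...while at constant efficiency the cost per job stays flat,
# so the decision hinges on the business value of the quicker answer.
for n in (1, 10, 100):
    print(f"{n:>3} nodes: {time_to_solution(n):6.2f} h, ${cost_per_job(n):.2f}")
```

In reality parallel efficiency falls as node counts grow, which is exactly where the faster-versus-cheaper tension appears: beyond some scale each extra hour saved costs progressively more.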

Strategic delivery choices are the broad decisions about how to do/use HPC within an organization. This might include:
  • choosing between cloud computing and traditional in-house HPC systems (or points on a spectrum between these two extremes);
  • selecting between a cost-driven hardware philosophy and a capability-driven hardware philosophy;
  • deciding on a balance of internal capability and externally acquired capability;
  • choices on the balance of investment across hardware, software, people and processes.
The answers to these strategic choices will depend on the environment (market landscape, other players, etc.), how and where you want to navigate that environment, and why. This is an area where our consulting customers benefit from our expertise and experience. If I were to extract a core piece of advice from those many consulting projects, it would be: "explicitly make a decision rather than drift into one, and document the reasons, risk accepted, and stakeholder buy-in".

Which HPC technology?


A key means of differentiating with HPC, and one of the most visible, is through the choice of hardware technologies used and at what scale. The HPC market is currently enjoying (or is it suffering?) a broader range of credible hardware technology options than in the previous few years.

Wednesday, 28 June 2017

Is cloud inevitable for HPC?

In 2009, I wrote this article for HPC Wire: "2009-2019: A Look Back on a Decade of Supercomputing", pretending to look back on supercomputing between 2009 and 2019 from the perspective of beyond 2020.

The article opens with the idea that owning your own supercomputer was a thing of the past:
"As we turn the decade into the 2020s, we take a nostalgic look back at the last ten years of supercomputing. It’s amazing to think how much has changed in that time. Many of our older readers will recall how things were before the official Planetary Supercomputing Facilities at Shanghai, Oak Ridge and Saclay were established. Strange as it may seem now, each country — in fact, each university or company — had its own supercomputer!"
I got this bit wrong:
"And then the critical step — businesses and researchers finally understood that their competitive asset was the capabilities of their modelling software and user expertise — not the hardware itself. Successful businesses rushed to establish a lead over their competitors by investing in their modelling capability — especially robustness (getting trustable predictions/analysis), scalability (being able to process much larger datasets than before) and performance (driving down time to solutions)."
Hardware still matters - in some cases - as a means of gaining a competitive advantage in performance or cost [We help advise if that is true for our HPC consulting customers, and how to ensure the operational and strategic advantage is measured and optimized].

And, of course, my predicted rush to invest in software and people hasn't quite happened yet.

Towards the end, I predicted three major computing providers, from which most people got their HPC needs:
"We have now left the housing and daily care of the hardware to the specialists. The volume of public and private demand has set the scene for strong HPC provision into the future. We have the three official global providers to ensure consumer choice, with its competitive benefits, but few enough providers to underpin their business cases for the most capable possible HPC infrastructure."
Whilst my predictions were a little off in timing, some could be argued to have come true, e.g., the rise to the top of Chinese supercomputing, the increasing likelihood of using someone else's supercomputer rather than buying your own (even if we still call it cloud), etc.

With the ongoing debate around cloud vs in-house HPC (where I am desperately trying to inject some impartial debate to balance the relentless and brash cloud marketing), re-visiting this article made an interesting trip down memory lane for me. I hope you might enjoy it too.

As I recently posted on LinkedIn:
"Cloud will never be the right solution for everyone/every use case. Cloud is rightly the default now for corporate IT, hosted applications, etc. But, this cloud-for-everything is unfortunately, wrongly, extrapolated to specialist computing (e.g.,  high performance computing, HPC), where cloud won't be the default for a long time.
For many HPC users, cloud is becoming a viable path to HPC, and very soon perhaps even the default option for many use cases. But, cloud is not yet, and probably never will be, the right solution for everyone. There will always be those who can legitimately justify a specialized capability (e.g., a dedicated HPC facility) rather than a commodity solution (i.e., cloud, even "HPC cloud"). The reasons for this might include better performance, specific operational constraints, lower TCO, etc. that only specialized facilities can deliver. 
The trick is to get an unbiased view for your specific situation, and you should be aware that most of the commentators on cloud are trying to sell cloud solutions or related services, so are not giving you impartial advice!"
[We provide that impartial advice on cloud, measuring performance, TCO, and related topics to our HPC consulting customers]
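One minimal way to frame the cloud-vs-dedicated question discussed above is a total-cost-of-ownership comparison. The sketch below shows the shape of that calculation only; the capital cost, power, staff, and cloud rate are all hypothetical assumptions, not real quotes:

```python
# Hypothetical 5-year TCO comparison; every number here is an
# assumption for illustration, not a real price.
YEARS = 5
HOURS_PER_YEAR = 24 * 365

# On-premises: capital outlay plus annual running costs.
capex = 2_000_000          # $ system purchase
power_cooling = 150_000    # $/year
staff = 300_000            # $/year
onprem_tco = capex + YEARS * (power_cooling + staff)

# Cloud: pay per node-hour actually used.
cloud_rate = 3.0           # $/node-hour, assumed
nodes = 100

def cloud_tco(utilisation):
    """5-year cloud spend for `nodes` nodes at a given utilisation (0-1)."""
    return cloud_rate * nodes * HOURS_PER_YEAR * YEARS * utilisation

# The crossover utilisation is where both options cost the same:
# above it the dedicated system wins, below it cloud wins.
crossover = onprem_tco / (cloud_rate * nodes * HOURS_PER_YEAR * YEARS)
print(f"on-prem 5-year TCO: ${onprem_tco:,}")
print(f"crossover utilisation: {crossover:.0%}")
```

A real TCO picture has many more terms (depreciation, data egress, software licences, refresh cycles, performance differences), which is precisely why the post argues for an impartial, situation-specific analysis rather than a blanket answer.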


@hpcnotes

Wednesday, 21 June 2017

Deeply learning about HPC - ISC17 day 3 summary - Wednesday evening

For most of the HPC people gathered in Frankfurt for ISC17, Wednesday evening marks the end of the hard work, the start of the journey home for some, already home for others. A few hardy souls will hang on until Thursday for the workshops. So, as you relax with a drink in Frankfurt, trudge through airports on the way home, or catch up on the week's emails, here's my final daily summary of ISC17, as seen through the lens of twitter, private conversations, and the HPC media.

This follows my highlights blogs from Monday "Cutting through the ISC17 clutter"  (~20k views so far) and Tuesday "ISC17 information overload" (~4k views so far).

So what sticks out from the last day, and what sticks out from the week overall?

Deep Learning

Wednesday was touted by ISC as "deep learning day". If we follow the current convention (inaccurate but seemingly pervasive) of using deep learning, machine learning, AI (nobody actually spells out artificial intelligence), big data, data analytics, etc. as totally interchangeable terms (why let facts get in the way of good marketing?), then Wednesday was indeed deep learning day, judging by tweet references to one or more of the above. However, I struggle to nail down exactly what I am supposed to have learnt about HPC and deep learning from today's content. Perhaps you had to be there in person (there is a reason why attending conferences is better than watching via twitter).

I think my main observations are:
  • DL/ML/AI/BigData/analytics/... is a real and growing part of the HPC world - both in terms of "traditional" HPC users looking at these topics, and new users from these backgrounds peering into the HPC community to seek performance advantages.
  • A huge proportion of the HPC community doesn't really know what DL/ML/... actually means in practice (which software, use case, workflow, skills, performance characteristics, ...).
  • It is hard to find the reality behind the marketing of DL/ML/... products, technologies, and "success stories" of the various vendors. But, hey, what's new? - I was driven to deal with this issue for GPUs and cloud in my recent webinar "Dissecting the myths of Cloud and GPUs for HPC".
  • Between all of the above, I still feel there is a huge opportunity being missed: for users in either community and for the technology/product providers. I don't have the answers though.

Snippets

Barcelona (BSC) has joined other HPC centers (e.g., Bristol Isambard, Cambridge Peta5, ...) in buying a bit of everything to explore the technology diversity for future HPC systems: "New MareNostrum Supercomputer Reflects Processor Choices Confronting HPC Users".

Exascale is now a world-wide game: China, European countries, USA, Japan are all close enough to start talking about how they might get to exascale, rather than merely visions of wanting to get there.

People are on the agenda: growing the future HPC talent, e.g., the ISC STEM Student Day & Gala, the Student Cluster Competition, gender diversity (Women-in-HPC activities), and more.

Wrapping up

There are some parts of ISC that have been repeated over the years due to demand. Thomas Sterling's annual "HPC Achievement & Impact" keynote that traditionally closes ISC (being presented as I write this) is an excellent session and goes a long way towards justifying the technical program registration fee.

2017 sees the welcome return of Addison Snell's "Analyst Crossfire". With a great selection of questions, fast pace, and well chosen panel members, this is always a good event. Of course, I am biased towards the ISC11 Analyst Crossfire being the best one!

I'll join Addison's fun with my "one up, one down" for ISC17. Up is CSCS, not merely for Piz Daint knocking the USA out of the top 3 of the Top500, but for a sustained program of supercomputing over many years, culminating in this leadership position. Down is Intel - brings a decent CPU to market in Skylake but gets backlash for pricing, has to face uncertainty over the CORAL Aurora project, and in spite of a typically high profile presence at the show, a re-emerging rival AMD takes a good share of the twitter & press limelight with EPYC.


Until next time

That's all from me for ISC17. I'll be back with more blogs over the next few weeks, based on my recent conference talks (e.g., "Six Trends in HPC for Engineers" and "Measuring the Business Impact of HPC").

You can catch up with me in person at the SEG Annual Meeting, EAGE HPC Workshop (I'm presenting), the TACC-NAG Training Institute for Managers, and SC17 (I can reveal we will be delivering tutorials again, including a new one - more details soon!).

In the meantime, interact with me on twitter @hpcnotes, where I provide pointers to key HPC content, plus my comments and opinions on HPC matters (with a bit of F1 and travel geekery thrown in for fun).

Safe travels,

Tuesday, 20 June 2017

ISC17 information overload - Tuesday afternoon summary

I hope you've been enjoying a productive ISC17 if you are in Frankfurt, or if not have been able to keep up with the ISC17 news flow from afar.

My ISC17 highlights blog post from yesterday ("Cutting through the clutter of ISC17: Monday lunchtime summary") seems to have collected over 11,000 page-views so far. Since this hpcnotes blog normally only manages several hundred to a few thousand page views per post, I'm assuming a bot somewhere is inflating the stats. However, there are probably enough real readers to make me write another one. So here goes - my highlights of ISC17 news flow as of Tuesday mid-afternoon.

Saturday, 26 November 2016

Secrets, lies, women and money: the definitive summary of SC16 - Part 1

Just over a week ago 11,000 people were making their way home from the biggest supercomputing event of the year – SC16 in Salt Lake City. With so much going on at SC, even those who were there in person likely still missed a huge proportion of what happened. It’s simply too busy to keep up with all the news during the week, too many events/talks/meetings happening in parallel, and much of the interesting stuff only gets talked about behind closed doors or through informal networking.

There were even a couple of top-notch tutorials on HPC acquisition and TCO/funding models :-)

Amongst this productive chaos, I was flattered to be told several times during SC that people find my blogs worth reading and commented they hadn't seen any recently. I guess the subtext was "it's about time I wrote some more". So, I'll make an effort to blog more often again. Starting with my thoughts on SC16 itself.

As ever, while I do soften the occasional punch in my writing (not usually in person though), there remains the possibility that some readers won’t like some of my opinions, and there’s always the risk of me straying into controversy in places.

I've got four topics to cover: secrets, lies, women and money.

Tuesday, 18 December 2012

A review of 2012 in supercomputing - Part 1

It's that time of year when doing a review of the last twelve months seems like a good idea for a blog topic. (To be followed soon after by a blog of predictions for the next year.)
So, here goes - my review of the year 2012 in supercomputing and related matters. Thoroughly biased, of course, towards things that interested me throughout the year.


Predictions for 2012

Towards the end of 2011 and in early 2012 I made various predictions about HPC in 2012. Here are the ones I can find or recall:
  • The use of "cloud computing" as the preferred marketing buzzword used for large swathes of the HPC product space would come to an end.
  • There would be an onslaught of "Big Data" (note the compulsory capital letters) as the marketing buzzword of choice for 2012 - to be applied to as many HPC products as possible, even if only tenuously relevant (just like cloud computing before it - and green computing before that - and so on ...)
  • There would be a vigorous ongoing debate over the relative merits and likely success of GPUs (especially from NVidia) vs. Intel's MIC (now called Xeon Phi).
  • ARM would become a common part of the architecture debate alongside x86 and accelerators.
  • There would be a growth in the recognition that software and people matter just as much as the hardware.

Friday, 25 May 2012

Looking ahead to ISC'12

I have posted my preview of ISC'12 Hamburg - the summer's big international conference for the world of supercomputing over on the NAG blog. I will be attending ISC'12, along with several of my NAG colleagues. My blog post discusses these five key topics:
  • GPU vs MIC vs Other
  • What is happening with Exascale?
  • Top 500, Top 10, Tens of PetaFLOPS
  • Finding the advantage in software
  • Big Data and HPC 
Read more on the NAG blog ...

Thursday, 19 January 2012

Cloud computing or HPC? Finding trends.

I posted "Cloud computing or HPC? Finding trends." on the NAG blog today. Some extracts ...
Enable innovation and efficiency in product design and manufacture by using more powerful simulations. Apply more complex models to better understand and predict the behaviour of the world around us. Process datasets faster and with more advanced analyses to extract more reliable and previously hidden insights and opportunities.
... and ...
High performance computing (HPC), supercomputing, computational science and engineering, technical computing, advanced computer modelling, advanced research computing, etc. The range of names/labels and the diversity of the audience involved mean that what is a common everyday term for many (e.g. HPC) is an unrecognised meaningless acronym to others - even though they are doing "HPC".
... and then I use some Google Trends plots to explore some ideas ...

Read the full article ...

Friday, 4 November 2011

My SC11 diary 10

It seems I have been blogging about SC11 for a long time - but it has only been two weeks since the first SC11 diary post, and this is only the 10th SC11 diary entry. However, this will also be the final SC11 diary blog post.

I will write again before SC11 in HPC Wire (to be published around or just before the start of SC11).

And, then maybe a SC11 related blog post after SC11 has all finished.

So, what thoughts for the final pre-SC11 diary then? I'm sure you have noticed that the pre-show press coverage has started in volume now. Perhaps my preview of the SC11 battleground, what to look out for, what might emerge, ...


Friday, 24 June 2011

ISC11 Review

ISC11 - the mid-season big international conference for the world of supercomputing - was held this week in Hamburg.

Here, I update my ISC11 preview post with my thoughts after the event.

I said I was watching out for three battles.

GPU vs MIC vs Fusion

The fight for top voice in manycore/GPU world will be one interesting theme of ISC11. Will this be the year that the GPU/manycore theme really means more than just NVidia and CUDA? AMD has opened the lid on Fusion in recent weeks and has sparked some real interest. Intel's MIC (or Knights) is probably set for some profile at ISC11 now the Knights Ferry program has been running a while. How will NVidia react to no longer being the loudest (only?) noise in GPU/manycore land? Or will NVidia's early momentum carry through?

Review: None of this is definitive, but my gut reaction is that MIC won this battle. GPU lost. Fusion didn't play again. My feeling from talking to attendees was that MIC was second only to the K story, in terms of what people were talking about (and asking NAG - as collaborators in the MIC programme - what we thought). Partly because of the MIC hype, and the K success (performance and power efficient without GPUs), GPUs took a quieter role than in recent years. Fusion, disappointingly, once again seemed to have a quiet time in terms of people talking about it (or not). Result? As I thought, manycore now realistically means more than just NVidia/CUDA.

Exascale vs Desktop HPC

Both the exascale vision/race/distraction (select according to your preference) and the promise of desktop HPC (personal supercomputing?) have space on the agenda and exhibit floor at ISC11. Which will be the defining scale of the show? Will most attendees be discussing exascale and the research/development challenges to get there? Or will the hopes and constraints of "HPC for the masses" have people talking in the aisles? Will the lone voices trying to link the two extremes be heard? (technology trickle down, market solutions to efficient parallel programming etc.) What about the "missing middle"?

Review: Exascale won this one hands down, I think. Some lone voices still tried to talk about desktop HPC, missing middles, mass usage of HPC and so-on. But exascale got the hype again (not necessarily wrong for one of the year's primary "supercomputing" shows!)

Software vs Hardware

The biggie for me. Will this be the year that software really gets as much attention as hardware? Will the challenges and opportunities of major applications renovation get the profile it deserves? Will people just continue to say "and software too"? Or will the debate - and actions - start to follow? The themes above might (should) help drive this (porting to GPU, new algorithms for manycore, new paradigms for exascale, etc). Will people trying to understand where to focus their budget get answers? Balance of hardware vs software development vs new skills? Balance of "protect legacy investment" against opportunity of fresh look at applications?

Review: Hardware still got more attention than software. Top500, MIC, etc. Although ease-of-programming for MIC was a common question too. I did miss lots of talks, so perhaps there was more there focusing on applications and software challenges than I caught. But the chat in the corridors was still hardware dominated I thought.

The rest?

What have I not listed? National flag waving. I'm not sure I will be watching too closely whether USA, Japan, China, Russia or Europe get the most [systems|petaflops|press releases|whatever]. Nor the issue of cloud vs traditional HPC. I'm not saying those two don't matter. But I am guessing the three topics above will have more impact on the lives of HPC users and technology developers - both next week and for the next year once back at work.

Review: Well, I got those two wrong! Flags were out in force, with Japan (K, Fujitsu, Top500, etc) and France (Bull keynote) waving strongly among others. And clouds were seemingly the question to be asked at every panel! But in a way, I was still right - flags and clouds do matter and will get people talking - but I maintain that manycore, exascale vs desktop, and the desperation of software all matter more.


 What did you learn? What stood out for you? Please add your comments and thoughts below ...

Friday, 17 June 2011

ISC 11 Preview

ISC11 - the mid-season big international conference for the world of supercomputing - is next week in Hamburg.

Will you be attending? What will you be looking to learn? I will be watching out for three battles.

GPU vs MIC vs Fusion

The fight for top voice in manycore/GPU world will be one interesting theme of ISC11. Will this be the year that the GPU/manycore theme really means more than just NVidia and CUDA? AMD has opened the lid on Fusion in recent weeks and has sparked some real interest. Intel's MIC (or Knights) is probably set for some profile at ISC11 now the Knights Ferry program has been running a while. How will NVidia react to no longer being the loudest (only?) noise in GPU/manycore land? Or will NVidia's early momentum carry through?

Exascale vs Desktop HPC

Both the exascale vision/race/distraction (select according to your preference) and the promise of desktop HPC (personal supercomputing?) have space on the agenda and exhibit floor at ISC11. Which will be the defining scale of the show? Will most attendees be discussing exascale and the research/development challenges to get there? Or will the hopes and constraints of "HPC for the masses" have people talking in the aisles? Will the lone voices trying to link the two extremes be heard? (technology trickle down, market solutions to efficient parallel programming etc.) What about the "missing middle"?

Software vs Hardware

The biggie for me. Will this be the year that software really gets as much attention as hardware? Will the challenges and opportunities of major applications renovation get the profile it deserves? Will people just continue to say "and software too"? Or will the debate - and actions - start to follow? The themes above might (should) help drive this (porting to GPU, new algorithms for manycore, new paradigms for exascale, etc). Will people trying to understand where to focus their budget get answers? Balance of hardware vs software development vs new skills? Balance of "protect legacy investment" against opportunity of fresh look at applications?

The rest?

What have I not listed? National flag waving. I'm not sure I will be watching too closely whether USA, Japan, China, Russia or Europe get the most [systems|petaflops|press releases|whatever]. Nor the issue of cloud vs traditional HPC. I'm not saying those two don't matter. But I am guessing the three topics above will have more impact on the lives of HPC users and technology developers - both next week and for the next year once back at work.

 What will you be looking out for?

Thursday, 18 December 2008

Santa's HPC Woes

[Article by me for HPCwire, December 18, 2008]

In a break with centuries of reticence, perhaps the most widely recognised distributor of festive spirit and products, Santa Claus, has revealed some details of the HPC underpinning his time-critical global operations.

http://www.hpcwire.com/features/Santas-HPC-Woes-36399314.html