HPC from A-Z (part 15) - O

O is for Oil and Gas

As demand continues to soar, the suppliers of our natural oil and gas resources are under increasing pressure to deliver product and keep prices low – basic economic principles, after all. Less well known is the role HPC plays in easing the burden on companies in this market.

The applications of HPC are diverse and virtually limitless. Analytics, simulation, modelling, seismic imaging – all of these help to extend the life of existing oilfields, identify new submarine oilfields, achieve drastic reductions in drilling time, and make real-time and forward-looking calculations about risk.

At Platform Computing we’ve experienced the challenges oil and gas companies face first hand through our work with leading companies in this industry, including Statoil Hydro as well as other oil field services providers and energy giants. For example, a top-five global energy company recently turned to us to help cut manufacturing and capital costs. Our analysis revealed that it could do so with a simple cluster management solution that is easy to use and administer without dedicated IT admin resources.

As exploration and extraction take companies to ever more extreme and challenging environments, HPC can be the difference between good and bad decision making... and, ultimately, how much profit the oil giants can turn.

Forrester Evaluates Private Cloud Vendors – Platform Tops Media Scoring

Given all the discussion around cloud software, one would have thought a comprehensive analysis of vendors would have been published by now. However, Forrester Research just released the first quantitative analysis of the private cloud vendor marketplace entitled “Market Overview: Private Cloud Solutions, Q2 2011.” The market overview examines the landscape of vendors providing solutions designed to accelerate the implementation of an infrastructure-as-a-service (IaaS) cloud in a customer’s data center.

Complimentary Full Copy of the Report


The report is significant as the first “apples-to-apples” comparison of vendors, based on customer references, a compulsory 30-minute demonstration, and written responses. Only 15 vendors were able to meet this initial hurdle, suggesting that for a number of vendors marketing far outstrips reality. Vendors were scored 0-4 across 10 criteria. Forrester did not add up the scores, but the media quickly did the sub-analysis and published a ranking table. The result: Platform Computing received the most points.


Source: SYS-CON Media


The #2 vendor was VMware, and we congratulate them on their effort. Given that one of the key report findings is to “prepare for islands of hypervisors” – meaning an increase in the need to support other hypervisors such as KVM and Xen – companies should pause to reconsider locking into a single hypervisor vendor. Cloud is much more than VMs. Also, be careful about pricing models that penalize your ability to utilize your cloud environment more efficiently (e.g., per-VM versus per-server pricing).


Read the Platform Computing press release


Another interesting point was discussed by the report’s author, James Staten, in his blog. Mr. Staten cautions readers to evaluate vendors on their specific requirements. We totally agree.


First, the report attempts to catalog vendors based on their origins, e.g., enterprise systems management vendors versus pure-play cloud solutions, etc. Our belief is that regardless of origins, customers should choose a vendor with a comprehensive technology footprint, a solid customer base, strong financials, and a global support system. Claiming you are #1 among pure-play or grid vendors is an artificial and ultimately meaningless distinction. At the end of the day, one wants the best solution to a set of problems. Period.


Second, Mr. Staten recommends evaluating software-only vendors differently from those whose offerings include physical compute and storage. If you take those points away from HP, IBM, Dell, Microsoft, and BMC, Platform’s lead only grows. And if you want an appliance, Platform has partners that offer a complete pre-integrated solution today.


Finally, the report recommends a strategy of trying before you buy. Since a private cloud project involves both technology and process change, we recommend running a proof-of-concept (POC) to gain buy-in from end user communities and different business units. A POC can also help validate assumptions against your business case for justifying the value of a private cloud.


Resources

  • Learn more about Platform's private cloud solution here
  • Download the Platform ISF product here
  • Schedule a discovery session with one of our cloud consultants by contacting us at privatecloud@platform.com.

Are there any criteria that you would have added to the evaluation?


Private Cloud at State Street

There is nothing better than a real-world customer case study to explain the value of deploying a product. A good one cuts through the marketing and explains:
  1. The underlying problem / pain points
  2. The vendor selection process
  3. Key decisions and lessons learned
  4. Actual value being achieved

At the recent Wall Street & Technology Capital Markets Cloud Symposium, the Chief Infrastructure Architect from State Street, Kevin Sullivan, discussed how cloud – specifically private cloud – is being rolled out across 20+ projects.


For more details about Mr. Sullivan's presentation, there is a great article in Advanced Trading magazine by Phil Albinus. Mr. Albinus provides details about the decision making process, key vendors, and next steps. To learn more about Platform Computing's cloud solution, please visit our private cloud solution area.



Some additional metrics that were discussed include:

  • Started in 2009
  • Went from 400 to 150 use cases
  • Operated a 6 month proof-of-concept (POC)
  • Down-selected from 150 technology partners to a ‘handful’
  • Provisioning has been reduced from 8 weeks to hours
  • A unique active-active, two-data-center configuration for 100% application high availability

One of the four key elements of the State Street cloud environment is the ‘cloud controller’. In their environment, the cloud controller manages self-service, configurations, provisioning, and automation of application construction within the infrastructure. A key benefit for developers was getting access to self-service capacity on demand much faster. This easily outweighed any cultural concerns about no longer being able to specify an IT environment request to the n-th degree. If needed, a ‘custom’ environment could still be requested through IT, but it would take longer.


In addition, Mr. Sullivan highlighted several key principles driving the design, building, and management of the cloud:

  • Simplification – creation of standards, consolidation, and self-service
  • Automation – for deployment, metrics, elasticity, and monitoring
  • Leverage – commodity hardware and software stack with a focus on reuse of the platform, services, and data

The benefits and successes of private cloud at State Street are becoming increasingly well known, as their CIO, Christopher Perretta, is fond of discussing the topic.

What do you think can be learned from State Street’s successes?

HPC from A-Z (part 14) - N

N is for Neuroscience

Earlier blog posts in this series have addressed the potential of High Performance Computing to help scientists with medical research. This post focuses on neuroscience in particular.

High Performance Computing can help neuroscientists quickly and effectively create and test accurate models of the brain. This brings research to life and could potentially unlock many of the mysteries of this complex organ. HPC can be used to research mental health problems such as depression and neurological disorders such as epilepsy; it is also being used in Alzheimer’s research.

The Visual Neuroscience Group at Harvard’s Rowland Institute is using HPC to better understand how the brain works. I like their analogy: “the brain is a massively parallel computer, far exceeding the power available in currently available computers”. If you’re going to try and understand the most complex organ in the human body, then it makes sense to match this research with the most powerful computing available.

Platform HPC Interface Gets a Makeover

Every once in a while, it’s nice to have a refresh. Sometimes a new haircut or a new tech toy can do wonders to give ourselves a boost. The same goes for software—while upgrades or small version improvements may happen frequently throughout a product’s lifecycle, products don’t always get the refresh they deserve when it comes to their interface.

Today, Platform HPC got a refresh with our announcement of Platform HPC 3. Platform HPC is our complete HPC management product, which is geared toward making cluster use and management easy for users and administrators. In this latest version, we’ve completely redesigned the interface to make cluster deployment, management, monitoring and use even more intuitive. The interface now includes a provisioning wizard, as well as an enhanced view across the entire HPC environment. Now users can easily see every part of their cluster ecosystem to help make sure everything is running smoothly at all times.

In addition to the new interface, Platform HPC 3 also includes new features for managing heterogeneous cluster environments. Users can monitor GPUs, get enhanced alerts and provisioning status updates, and integrate with third-party management software packages. The product also features automated failover capabilities for workload and cluster management tools, as well as for reporting and monitoring tools, with automatic recovery when primary nodes come back up.

Platform HPC also includes the tools to integrate with a number of the most widely used HPC applications on the market, and supports a diverse set of applications across a variety of industries including manufacturing, oil and gas, energy, life sciences, media/digital content creation, higher education and research, and government. For a list of technical apps we support, please see our product page.

Here are the before and after shots of our new interface—we hope you like the new version!

Looking Beyond EMC's Announcement

Well, they finally made it official. The announcement this week from EMC marked its formal entry into the “Big Data” field with an appliance solution for Hadoop – a product called the Greenplum HD Data Computing Appliance, which will be rolling out later this year. The move firmly places EMC in competition with those already in play (on different levels) in a market that is quickly heating up, including IBM, Cloudera, Platform Computing, Hstreaming, Yahoo, and a handful of others, each with offerings looking to tame the challenges of “Big Data.”

EMC’s offering is a bundled appliance solution for Hadoop that integrates EMC’s own Greenplum Data Computing Appliance with a distribution of Hadoop software. Although it is designed as a plug-and-play solution for those running Hadoop, when it comes to support, EMC will have to work out a decent plan with its partners to make it hassle-free for customers.

Notable benefits of EMC’s offering, as the company claims, are performance (for the Enterprise edition), fault tolerance, and a turnkey solution. There was no mention of performance benchmarks, potential use cases, or support plans in the announcement.

Despite the benefits listed, however, the appliance does not address some of the important “Big Data” requirements that have been preventing users from moving their applications into production. In particular, I’m referring to the high resource utilization that allows users to do more with less, and the reliability and efficient manageability needed to guarantee demanding SLA requirements.

“Big Data” is a hot topic today – so hot that almost every IT provider, regardless of its area of focus, wants a piece of the pie. The result? Confused customers. So to narrow down the game play, let’s first take a look at who’s who in the field. There are really two types of solution providers today: 1) those offering (or who will offer) a full software stack for Hadoop, such as IBM, EMC, Cloudera, Aster Data, etc.; and 2) those who provide best-of-breed component solutions within the software stack, such as Platform Computing. For the former, the major advantage is support for all layers of the stack (application, distributed runtime, and data). The trade-off, however, is living with shortcomings (poor reliability, scalability, and low resource utilization, to name a few) in each layer of the stack. For the latter, the focus is on delivering best-in-class component layer(s) within the full stack to address the specific requirements IT and end users demand for their MapReduce applications – think of the department store vs. specialty store analogy.

We call Platform Computing’s upcoming solution for MapReduce a “best of breed” solution because its sole focus is to provide the most complete distributed runtime engine within the Hadoop software stack, making MapReduce applications enterprise-ready. What lies underneath that stack is Platform Computing’s years of expertise in distributed workload management and resource management. It’s a proven enterprise-level technology and the foundation on which many Fortune 500 companies run mission-critical, extremely demanding distributed workloads. Bringing this enterprise capability to the “Big Data” environment is a natural market expansion for the company. As the full-stack wars heat up, Platform’s solution can be easily integrated into any alternative stack as a compatible replacement for Hadoop-based runtime environments and become a value-add to its partners. Platform Computing will roll out its first MapReduce distributed runtime offering in early June – a major milestone for the company following the well-received announcement of our support for MapReduce in March. In the upcoming weeks we will provide more details on this new product so you can understand why we call it “best of breed.” Stay tuned everyone!
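For readers newer to the model, here is a minimal word-count sketch in plain Python – not Platform’s or Hadoop’s API, just the generic map, shuffle, and reduce pattern that a distributed runtime engine schedules across cluster nodes:

    from collections import defaultdict
    from itertools import chain

    docs = ["big data is big", "data runs on clusters"]

    # Map: emit a (word, 1) pair for every word in every document.
    mapped = chain.from_iterable(((w, 1) for w in d.split()) for d in docs)

    # Shuffle: group the intermediate pairs by key.
    groups = defaultdict(list)
    for word, count in mapped:
        groups[word].append(count)

    # Reduce: sum the counts for each word.
    print({word: sum(counts) for word, counts in groups.items()})

A runtime engine’s job is to run the map and reduce steps on thousands of nodes, move the intermediate data between them, and survive node failures – which is exactly where reliability, scalability, and resource utilization come into play.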



The Economics of Cloud, Part I - IDC HPC User Forum

After attending IDC’s HPC User Forum in Houston last month and participating in an HPC cloud panel, it became clear to me that many potential cloud users are still confused about the economics of cloud and when it’s beneficial. One complaint we heard many times was that Amazon’s pricing model is a multiple (nearly three times) of the cost of an outright hardware purchase. While true, users who dwell on this fact may be, at least partially, missing the primary use case for external cloud computing.


Our cloud panel didn’t have enough time to delineate the conditions and workloads where cloud computing offers economic advantages, so it seems appropriate to start that discussion here, in the first of a series on the Economics of Cloud.


There are several factors that should feed into the decision to run an HPC cloud computing pilot; most are helpful, though not strictly required, conditions. These include:

  • Practical input and output data sizes, or post-processing methods that avoid large data transfers
  • A serial or coarse-grained parallel workload
  • Data security policies that can be satisfied by the cloud
  • Application OS and performance requirements that lead to acceptable performance in the cloud
  • Unsteady workload requirements (meaning the amount of resource a workload requires varies over time)


This last factor is the one that might be the most confusing. Using IaaS can be very cost-effective if the results from a workload are highly valuable and short-lived. Conversely, workloads with results of unknown value, lengthy execution times, or large data requirements can carry enormous charges.


One simple way of visualizing this is to compare the peak workload (expressed as a fraction of the available local resource) with the average workload. The difference between these two values, if significant, is a good indicator of whether cloud computing could deliver positive ROI. If the workload is plotted over time and the average and peak lines are overlaid, the term "peak shaving" is clearly an apt description of the benefit cloud computing can offer.
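As a rough illustration of peak shaving – every number below is hypothetical – the sketch compares buying local capacity for the peak against buying for the average and bursting the excess to an IaaS provider at an assumed 3x price premium:

    import statistics

    # Hypothetical hourly core demand for one week: a steady baseline
    # plus one burst day (the "unsteady workload" case).
    demand = [50] * 144 + [400] * 24        # cores needed, per hour

    local_cost = 0.05                        # assumed $ per core-hour, amortized
    cloud_premium = 3.0                      # assumed cloud price multiple

    peak, avg = max(demand), statistics.mean(demand)

    # Option 1: own enough local capacity to cover the peak.
    peak_sized = peak * len(demand) * local_cost

    # Option 2: own capacity for the average, burst the rest to the cloud.
    capacity = round(avg)
    burst = sum(max(0, d - capacity) for d in demand)   # core-hours in the cloud
    hybrid = capacity * len(demand) * local_cost + burst * local_cost * cloud_premium

    print(f"peak-sized local: ${peak_sized:,.0f}")      # $3,360
    print(f"peak shaving:     ${hybrid:,.0f}")          # $1,920

With this bursty toy trace the hybrid approach comes out roughly 40% cheaper; flatten the demand curve and the advantage disappears.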


Invariably, a steady workload is most efficiently processed on a local datacenter resource when compared to pay-per-use rates. Indeed, most IaaS providers have built a factor of two to three times hardware costs into their pricing to account for the opportunity value of near-instantaneous access to compute resources. Paying this "tax" on a steady workload could have disastrous financial consequences if adopted as a strategy.
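To put rough, purely hypothetical numbers on it: a steady 100-core workload running around the clock consumes 100 × 8,760 = 876,000 core-hours a year. At an assumed fully amortized local cost of $0.05 per core-hour that is about $43,800; at a 3x pay-per-use premium it balloons to roughly $131,400 – a recurring $87,600 annual penalty for capacity that could simply have been owned.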


Anyone interested in permanent or long-term cloud resource access should probably investigate longer-term service contracts with a selected IaaS provider if local resources are not an option. Such an agreement could easily change any potentially negative financial estimate of the benefits of cloud.

HPC from A-Z (part 13) - M

M is for Medicine

High Performance Computing can assist with medical research, helping researchers and scientists achieve tangible, game-changing results. It can cut the time taken to crunch vital medical data from a wide variety of sources -- from breast cancer scans to DNA profiles. I expect the next medical breakthrough will be powered in some part by High Performance Computing.

One of our customers, Harvard Medical School, has taken advantage of High Performance Computing to aid its scientific discovery. Dr. Marcos Athanasoulis, DrPH, Director of IT at HMS, summed up the potential of High Performance Computing very neatly: “High performance computing is just at the center of discovery today and it’s personally gratifying for me that we are enabling researchers to one day find the cure for cancer, to continue the discovery and genomics and proteomics and that the impact of our work here can actually make a big difference on alleviating human suffering caused by disease.” He also said, “the internal grid allows our researchers to collaborate more easily than ever before and focus their attention on medical research instead of IT management.”

It is encouraging to see how HPC is being used to help researchers collaborate on finding cures for some of the world's most devastating diseases. Another example is the FightAIDS@Home project, which has set up an HPC grid to utilize idle computing cycles and build on our growing knowledge of the structural biology of AIDS. It’s incredible to think of all the home users across the world contributing to the project!

HPC from A-Z (part 12) - L

L is for Love and Linear Regression

This blog post will focus on the letter L and potential ways HPC can help with two topics that begin with it. The first is technical; the second is rather more hypothetical.

Linear regression assesses the relationship between two different variables to find a correlation. The method is popular among scientists, but applying it across thousands of datasets or variable pairs can be a lengthy process. HPC can speed this up by automating the work and running many regressions in parallel, which means researchers can identify correlations much more quickly.
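To make that concrete, here is a minimal sketch (synthetic data, hypothetical scale) of the embarrassingly parallel case: thousands of independent ordinary least-squares fits fanned out across local cores, much as a cluster scheduler would fan them out across nodes:

    import numpy as np
    from multiprocessing import Pool

    def fit(pair):
        """Ordinary least-squares fit of y = a*x + b for one variable pair."""
        x, y = pair
        a, b = np.polyfit(x, y, deg=1)       # slope and intercept
        r = np.corrcoef(x, y)[0, 1]          # correlation coefficient
        return a, b, r

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # 10,000 synthetic variable pairs, 1,000 observations each.
        pairs = [(x := rng.normal(size=1000), 2.0 * x + rng.normal(size=1000))
                 for _ in range(10_000)]
        with Pool() as pool:                  # one fit per pair, spread across cores
            results = pool.map(fit, pairs)
        print(results[0])                     # slope ≈ 2.0, intercept ≈ 0.0, r ≈ 0.89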

Love
Imagine if we could use an HPC environment to model the brain and see what happens when we fall in love. Imagine if we could pinpoint what happens in our brains when we feel those first flutters, and see how that affects our perception and sense of happiness. Or perhaps there’s a use for HPC in helping dating agencies find the perfect match for their customers? Do you think an HPC environment could solve a cosmic love equation?

Making Cluster Administration Easier

The last thing busy HPC administrators and IT managers need is to spend their time developing complex scripts to extract and analyze the massive volumes of workload and system data needed to ensure that work is distributed efficiently and SLAs are met. To help ease that burden, Platform today announced the latest versions of Platform RTM and Platform Analytics. Designed to work with Platform LSF, these tools give IT managers and HPC administrators the information they need for better decision making, resulting in more effective operation of their HPC datacenters.
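To illustrate the kind of chore these tools remove, here is a deliberately simplified sketch of such a script. The log format is entirely hypothetical – real scheduler accounting records, and the questions asked of them, are far richer, which is exactly why hand-rolled scripts become a burden:

    import csv
    from datetime import datetime
    from statistics import mean

    # Hypothetical accounting log with columns: job_id, submit, start, finish
    # (ISO-8601 timestamps). Real workload managers record far more than this.
    def load_jobs(path):
        with open(path) as f:
            for row in csv.DictReader(f):
                yield {k: (v if k == "job_id" else datetime.fromisoformat(v))
                       for k, v in row.items()}

    jobs = list(load_jobs("jobs.csv"))
    waits = [(j["start"] - j["submit"]).total_seconds() for j in jobs]
    runs = [(j["finish"] - j["start"]).total_seconds() for j in jobs]
    print(f"{len(jobs)} jobs, mean wait {mean(waits):.0f}s, mean runtime {mean(runs):.0f}s")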

Platform RTM 8 is a comprehensive operational dashboard that provides real-time workload monitoring, reporting and management for one or more HPC clusters. It helps cluster administrators be more efficient in their day-to-day activities by providing the information and tools needed to improve cluster efficiency, enable better user productivity, and contain or reduce costs. A flexible alerting facility quickly notifies administrators and managers of any issues so they can take proactive action. Unlike competing tools that only monitor the infrastructure, Platform RTM is both workload- and resource-aware, providing full visibility into Platform LSF clusters. With its broad set of capabilities, Platform RTM can replace multiple tools in typical Platform LSF environments, resulting in improved productivity as well as reduced cost and complexity.

Platform Analytics 8 covers the other end of the spectrum by providing a historical perspective on the datacenter. Built on a high-performing analytics engine with a robust, easy-to-use interface, it correlates multiple types of data for improved decision making, making it easier to identify and quickly remove bottlenecks, spot emerging trends, and plan capacity more effectively. The tool is fully functional “out of the box” and includes several interactive dashboards, making it quick and easy to analyze key HPC data. This is a definite advantage over other HPC analytics products, which require you to build complex analytics models from scratch, often with several intermediate steps.

Many of Platform’s customers have been using these tools to get peak utilization from their HPC datacenters, including Cadence Design Systems, Red Bull Racing, and Simulia (Dassault Systemes). For more on today’s releases and the new features in Platform RTM 8 and Platform Analytics 8, please see today’s press release.