De-risking SAP Performance and Availability


It’s no secret that many mission-critical enterprise IT implementations depend on SAP. In 2008, the Standish Group estimated the average cost of ERP downtime at $888K per hour. If you’re an SAP user, you probably have some idea of your cost of downtime.

What’s surprising to me is that companies often still rely on massive over-provisioning to handle database growth and to ensure that their infrastructure can meet the performance and availability levels required by informal or formal Service Level Agreements. On one level, it’s understandable, because the stakes are so high. But we’re starting to see a trend toward better instrumentation and monitoring, because, while the stakes are high, so are the costs.

The truth is, SAP performance is usually bottlenecked not by server-side issues but by I/O issues. Unfortunately, most of today’s monitoring solutions, including the best-known APM solutions, have a tough time correlating your applications with your infrastructure. The “link” between the application and the infrastructure is often inferred, or is pitched at such a high level that deriving actual cause and effect is still a guessing game.
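To make the I/O-versus-server distinction concrete, here is a minimal Python sketch, purely illustrative, of the kind of measurement involved: timing block reads against a file that lives on SAN-backed storage. The file path and block size are hypothetical, and a host-side timer like this can be fooled by the OS page cache; a purpose-built tool measures latency on the SAN itself rather than inside the host.

```python
import time

def timed_reads(path, block_size=8192, blocks=1000):
    """Estimate average per-read latency, in ms, for a file on the given path."""
    with open(path, "rb") as f:
        start = time.perf_counter()
        done = 0
        for _ in range(blocks):
            if not f.read(block_size):
                break  # reached end of file early
            done += 1
        elapsed = time.perf_counter() - start
    return (elapsed / max(done, 1)) * 1000.0

# Hypothetical usage against a database file on SAN storage:
# print(f"avg read latency: {timed_reads('/sapdata/datafile1'):.2f} ms")
```

If most of a transaction’s wall-clock time shows up in waits like these rather than in CPU, the bottleneck is the I/O path, not the server.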

Many of our largest customers de-risk their SAP applications by using VirtualWisdom to directly correlate infrastructure latency with their application instances. In the simple dashboard widget below, an application owner tracks, in real time, the application latency, in milliseconds, caused by the SAN infrastructure.
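The post doesn’t show VirtualWisdom’s internals, so as a rough stand-in, here is a hedged sketch of what such a widget computes conceptually: averaging per-I/O exchange completion times over a polling interval and flagging any breach of a latency SLA. The 20 ms threshold and the sample values are invented for illustration.

```python
from statistics import mean

SLA_MS = 20.0  # hypothetical per-exchange latency budget, in milliseconds

def check_interval(samples_ms):
    """Return the interval's average latency and flag any SLA breach."""
    avg = mean(samples_ms)
    if avg > SLA_MS:
        print(f"ALERT: average SAN latency {avg:.1f} ms exceeds the {SLA_MS} ms SLA")
    return avg

# Exchange completion times (ms) gathered over one interval; this set breaches
# the hypothetical SLA, so the alert fires.
check_interval([14.2, 25.1, 31.7, 16.0, 28.3])
```

Tracking the trend of that per-interval average, rather than a single spike, is what lets an application owner tie sustained SAN latency back to application-level slowness.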

With this level of tracking and correlation, many of our largest SAP customers have successfully de-risked their growing, mission-critical SAP deployments.

To hear our Director of Solutions Consulting, Alex D’Anna, discuss this issue in more detail, I encourage you to attend his 35-minute on-demand webcast.
