Tech Notes » VirtualWisdom

Expanding the Reach of Real-time Monitoring

Best Practices, Real-Time Monitoring, SAN, SNW, VirtualWisdom

It’s been a busy and exciting few months at Virtual Instruments. At SNW Fall last month, we introduced the new high-density VirtualWisdom SAN Performance Probe. By doubling the density and supporting up to 16 Fibre Channel links per unit, the ProbeFC8-HD enables customers to monitor more of their infrastructure for less. In fact, customers can expect to reduce the cost of real-time monitoring by 25 percent and lower power consumption by 40 percent.

We also announced enhanced support for FCoE. With FCoE-specific enhancements to the current SAN Availability Probe module we’re able to deliver improved monitoring of top-of-rack FCoE switches, extending visibility into infrastructure performance, health and utilization across converged network environments.

We had the chance to meet with a number of customers, press and analysts at SNW to share our news. Check out the news and learn more about our VirtualWisdom platform, courtesy of W. Curtis Preston, Truebit.TV.

How to Re-Use VirtualWisdom Data in 3rd-Party Tools

BAT, How to, samplecode, Scheduler, VirtualWisdom

Everyone I’ve personally met who has witnessed the detail of VirtualWisdom metrics tends to be amazed first, then relates it to LAN and Ethernet tools, then asks why we haven’t seen this before. The next question in very large organizations is “how can we re-use this data in our [insert home-grown tool here]?”

Incorporating VirtualWisdom into an organization has various points of “friction”: training on a new tool, understanding the metrics, collecting data to help VirtualWisdom correlate, and beginning to use it internally. As a Virtual Instruments Field Application Engineer (AE or FAE), I tend to see the initial friction (collecting data, such as nicknames, or grouping Business Units as UDCs). The less common friction is “OK, we love VirtualWisdom, but our expansive storage team wants to exploit the metrics in our custom home-grown planning tools.”

Converting VirtualWisdom into a basic data collector ignores the reporting, recording, and alerting capabilities it offers; re-using its data in multiple parts of a corporation, however, expands VirtualWisdom’s utility, and I’m more than happy to help a customer do that. The more we can help our customers make an informed decision — including leveraging more data in their own tools — the more we can help our customers “free their data” and improve the performance and reliability of a complex data-storage environment.

My entries here on the Virtual Instruments Best Practices blog tend to be of a how-to nature; in this article, I’d like to show how the open-source tool “MailDropQueue” can help push VirtualWisdom data into your home-grown toolset.


There was a time when customers tried to find our reports by digging through the “exports” directory after a new report was produced, because the most recent one is the correct one, right? This ran into problems when users generated reports just before scheduled ones: when the scheduled “searcher” went looking, it would find the wrong report. Additionally, some reports took a long time and would not always be finished by the time the customer’s scripts went searching. Customers typically knew what script they could run to consume the data and push it to their own systems, but the difficulty of finding the right file (more so due to the lack of a shell on Windows) made this solution over-complex.

Replication at the database level gives us the same problem: the data is in a schema, and it’s difficult to make sense of without the reporting engine.

A while ago, VirtualWisdom gained the ability to serialize colliding reports: if a user asks for a report at the same time the system or another user is generating one, the requests are serialized. This allows VirtualWisdom to avoid deadlock/livelock situations, at the cost of the kind of delay we’re used to at printers: your two-page TPS report is waiting behind a 4,000-page print of the History of the World, Part 1A. The consistent responsiveness of the VirtualWisdom platform is well worth this tradeoff. Unfortunately, the API that many users ask for poses the same risk: it adds a parallel load onto VirtualWisdom that needs an immediate response, adding delay to responses and risking concurrency delays at the underlying datastore.

The asynchronous approach — wherein VirtualWisdom can generate data to share through its reporting engine — is more cooperative to VirtualWisdom’s responsiveness, but returns us to the issue of “how do I find that report on the filesystem?  The correct report?”

MailDropQueue is a tool in the traditional nature of UNIX: small things that do specific jobs. UNIX was flush with small tools such as sed, awk, lpr, wc, nohup, nice, and cut that could be piped together to achieve complex tasks. In a similar way, MailDropQueue receives an email, strips off the attachment, and for messages matching certain criteria, executes an action for each.
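The parse-and-dispatch idea is easy to picture. The following Python sketch is hypothetical, not MailDropQueue’s actual code, and the message text and filename are made up; it shows the core step of parsing a MIME message and pulling out each attachment, which is the point where an action script would be handed the file:

```python
# Hypothetical sketch of the MailDropQueue idea: parse a MIME message,
# detach each file attachment, and hand it to an action. The sample
# message and "report.csv" filename are illustrative only.
import email
import email.policy

RAW_MESSAGE = """\
From: vw@example.com
To: import@example.com
Subject: Scheduled report
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="XYZ"

--XYZ
Content-Type: text/plain

Report attached.
--XYZ
Content-Type: text/csv
Content-Disposition: attachment; filename="report.csv"

Storage,LUN,FramesPerSec
array1,17,0
--XYZ--
"""

def extract_attachments(raw):
    """Return (filename, content) pairs for every attachment in a raw message."""
    msg = email.message_from_string(raw, policy=email.policy.default)
    return [(part.get_filename(), part.get_content())
            for part in msg.iter_attachments()]

for name, body in extract_attachments(RAW_MESSAGE):
    # A real queue would save the file and run the configured script on it;
    # printing marks the dispatch point in this sketch.
    print(name, len(body))
```

The real tool adds the trigger/condition matching shown in the configuration below; this sketch only covers the attachment-stripping half.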

It’s possible for VirtualWisdom to generate “the right data” (blue section, above), send it to MailDropQueue (red portion, above), and have MailDropQueue execute the action on that attachment (green part, above). In our example, let’s consider a case where a customer knows what they want to do with a CSV file; suppose they have a script such as:

@echo off

The actual magic in this script isn’t as important as the fact that we can indeed trigger it for every attachment we see to a certain destination. Now all we need is to make a destination trigger this script (ie the green portion of the diagram above):

<?xml version='1.0' ?>
<actions>
    <trigger name="all">
        <condition type="true"/>
        <action>IMPORT</action>
    </trigger>
    <script id="IMPORT" name="import" script="DATABASE-IMPORT.BAT" parameters="$attachmentname"/>
</actions>

From the above, the “condition type=true” stands out, but it is possible to constrain this once we know it works, such as to trigger that specific script only when the recipient email matches “”:

<condition type="equal">
    <recipient/>
    <value></value>
</condition>

Also, it’s not so obvious, but the result of every received email that matches the condition (“true”) is to run the script with the attachment as the first parameter. This means that if an email arrives with an attachment “”, MailDropQueue would Runtime.exec("DATABASE-IMPORT.BAT").

For reference, I’m running this on a host called, compiled to use the default port (8463) as:

java -jar maildropqueue.jar -c maildropqueue.xml

Where maildropqueue.jar is compiled by defaults (./configure && make) from a “git clone”, and maildropqueue.xml contains the configuration above. There’s a downloadable

Finally, we need to configure VirtualWisdom to generate and send this data attached to an email; this is a fairly simple task for any VirtualWisdom administrator. Following is a walk-through up to confirming that the content is being generated and sent to MailDropQueue; the composition of the report and the handler script “DATABASE-IMPORT.BAT” is too environment-specific to cover in this article.

  1. Create the report (outside the scope of this article) — confirm that it produces output. The following snapshot uses our internal demo-database, not actual customer data:

    Capacity / Performance Statistical Report

  2. Create a Schedule to regularly generate and send it:
    1. In the Report Generation Configuration, check that you have the hourly summary if so desired:
    2. Check that all probes are used, but you don’t need to keep the report very long:
    3. Confirm that the file format is set to CSV unless your handler script can dismantle XLS, or you intend to publish a PDF:
    4. Choose to send email; this is the key part. The message can include anything as subject and body, but you must check “E-mail report as attachment”:
    5. …and finally: you may not yet have the distribution list set up; consider the following example. Note that the port number is 8025 and the server is localhost because, in this test, I’m running MailDropQueue on the same server. The sender and recipient don’t matter unless you later determine which actions to run based on triggers matching sender or recipient:
    6. Check that your MailDropQueue is running on the same port (this is an example running MailDropQueue using VirtualWisdom’s enclosed Java and the config example above; the two “non-body MIME skipped:” messages are from clicking “Send Test E-Mail” twice):
  3. Finally, run your MailDropQueue. The script used above is shown here (except that running it requires removing the “-V”, highlighted), as well as the config, and an output of “java -jar maildropqueue.jar -V” to show how MailDropQueue parsed the config file:
  4. Clicking “Run Now” on the Scheduled Action for the report generation shows end-to-end that VirtualWisdom can generate a report, send it to MailDropQueue, and cause a script to be triggered on reception. Of course, if the script configured into MailDropQueue hasn’t been written, a Java error will result, as shown:
  5. Now the only things left to do are:
    1. Write the report so that the correct data is sent as tables and Statistical Summary reports (only one View per section)
    2. Write DATABASE-IMPORT.BAT so that it reacts correctly to the zipped archive of CSV files
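As a sketch of what such a handler might do, here is a hypothetical Python stand-in for the import batch file. The SQLite database, the table-per-file naming, and the header-row CSV layout are my assumptions for illustration, not anything VirtualWisdom prescribes:

```python
# Hypothetical handler for the zipped report archive: unpack it and load
# each CSV into a local SQLite table named after the file. The database
# name and CSV layout are assumptions, not the product's actual contract.
import csv
import io
import sqlite3
import sys
import zipfile

def import_archive(zip_path, db_path="vw_metrics.db"):
    con = sqlite3.connect(db_path)
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if not name.lower().endswith(".csv"):
                continue
            rows = list(csv.reader(io.TextIOWrapper(zf.open(name), "utf-8")))
            if not rows:
                continue
            header, data = rows[0], rows[1:]
            table = name.rsplit("/", 1)[-1][:-4]  # file name minus ".csv"
            cols = ", ".join(f'"{c}"' for c in header)
            con.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols})')
            con.executemany(
                f'INSERT INTO "{table}" VALUES ({",".join("?" * len(header))})',
                data)
    con.commit()
    con.close()

if __name__ == "__main__" and len(sys.argv) > 1:
    import_archive(sys.argv[1])  # first parameter: the attachment path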

Merging UDCs in VirtualWisdom to Join Manual and Generated UDCs

BAT, How to, UDC, VirtualWisdom, xmllint, xsltproc

UDCs — User-Defined Contexts — can be very useful for showing the actual use or membership of a device on the SAN, or for assigning a priority for alerting and thresholds. Often these are hand-generated, but we do have methods of creating them from other content.

One customer has both: a UDC generated/converted from other content, and some manually-assigned content. Merging these would help him to assign filters and alerts as a single group, but the effort to merge it was looking excessive.

As you may recall, my posts on the Virtual Instruments Best Practices blog tend to be of the how-to variety; in this article, I’d like to share how to merge two UDCs programmatically, so the merge can be scripted into any automated collection scripts or tools you’re already using.

Merge, Then Cleanup

The general process is to merge the content first (using xmllint), then clean it up (using xsltproc) so that it’s back to a sane, predictable UDC ready for routinely-scheduled import:

UDCs merged using xmllint and xsltproc

Notice in this image that the only things that change are in the upper box (“UDC Files”), which can be either manually-edited or autonomously-generated by filter or transform. As well, the result is a standard UDC from which we can generate filters or otherwise edit using XML tools.

As you can see, the tools used here are fairly standard; the only real development is the small script for each tool. UDCs are simply XML, and as such are quite easy to manipulate using standard XML tools.

Let’s break this down into the multiple steps.


The easiest way I found to concatenate two XML files was to use XInclude with an XPointer statement:

<?xml version="1.0"?>
<list xmlns:xi="http://www.w3.org/2001/XInclude">
    <xi:include href="File1.udc" xpointer="xpointer(//list/*)"/>
    <xi:include href="File2.udc" xpointer="xpointer(//list/*)"/>
</list>
In parts, this file is really two copies of the same statement, differing only in the file referenced:

    <xi:include href="File1.udc" xpointer="xpointer(//list/*)"/>
If you’ve written any source code, you’d recognize an #include statement or an import com.example.*; this is really no different: the document referenced by the “href” (File1.udc) replaces this xi:include statement. The second part, an xpointer="...", further narrows the import by indicating that only part of the included document should come in — in this case, the child elements of the “list” element. If you look at a UDC file, you’ll see that “list” is the root node; if that statement makes very little sense, think of it as “we’re including all the stuff inside the outermost container, but not the container itself”. And look again at the full file above: we specify a <list> and </list> around the inclusions. Coincidence? Not at all; this avoids ending up with two outermost root nodes, which no XML tool could process, because an XML document can have only one root node.

…and it’s easier this way: don’t filter out what you can avoid including in the first place. It’s possible that there’s a better set of inclusion elements here, but this works well enough.

If we had three UDCs to merge, you can see that it would merely require another xi:include statement.
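Where xmllint isn’t available, the concatenation step can be approximated with Python’s standard library. This is a rough sketch, assuming only what the article already states — that a UDC file’s root node is “list” — and it simply appends the second file’s children under the first file’s root:

```python
# A rough stdlib alternative to the xmllint/XInclude step: append the
# child elements of each additional UDC's root onto the first root.
# Only the <list> root node is assumed from the UDC format.
import xml.etree.ElementTree as ET

def merge_udc(*udc_paths):
    """Merge the children of each file's root element under the first root."""
    trees = [ET.parse(p) for p in udc_paths]
    base = trees[0].getroot()
    for extra in trees[1:]:
        base.extend(list(extra.getroot()))
    return trees[0]

# Usage (hypothetical file names, matching the article's example):
# merge_udc("File1.udc", "File2.udc").write("Merged.udc")
```

As with the xi:include version, merging a third UDC is just one more argument.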

To act on this file, we execute xmllint with the “-xinclude” parameter (normal hyphen, only one “-”, not two) as follows. Note that xmllint is available on most non-Windows systems, and can be acquired for a Windows system via Microsoft Services for UNIX.

xmllint.exe -xinclude concatenate-UDC.xml > Merged.udc

for Windows, or for non-windows:

xmllint -xinclude concatenate-UDC.xml > Merged.udc

(using “Merged.udc” as a temporary file)

We now have a UDC file with only one outermost “list” (root) element, but it has a few problems:

  1. Every new UDC starts with an Evaluation Order of 1; this is reflected in the merged file and has to be fixed
  2. Only one default item should be present: we keep the one from the first file
  3. Only the first file’s definition of the UDC (metric, set, etc.) is copied, so the user needs to avoid merging two UDCs with different metrics or sets (illogical UDCs would result)

The first two issues can be fixed in the next step.

Clean the Concatenated Result

XSLT (XSL Transformations) uses XSL (Extensible Stylesheet Language) to transform XML into different XML, or into simpler forms such as plain text or CSV. In general, XSLT can map XML data from one schema to another, convert data between representations, or simply extract elements of data into a text stream.

In our case, we’re using it to remove the redundant parts that would cause VirtualWisdom’s UDC parser to reject the document. There is currently no schema definition, so we have to make a best effort to make the resulting UDC look like one exported from VirtualWisdom.
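For readers who would rather see the cleanup logic than the XSLT, here is a rough Python equivalent of the two fixes. This is not the real stylesheet, and the element and attribute names (“set”, “evaluationOrder”, “default”) are placeholders for whatever an exported UDC actually uses; treat it as an outline, not a drop-in replacement:

```python
# Sketch of the cleanup step: renumber evaluation order sequentially and
# keep only the first default entry. Element/attribute names here are
# placeholders, not the actual UDC schema.
import xml.etree.ElementTree as ET

def clean_merged_udc(tree):
    root = tree.getroot()
    # Issue 1: renumber evaluation order across the merged entries.
    for order, entry in enumerate(root.iter("set"), start=1):
        entry.set("evaluationOrder", str(order))
    # Issue 2: keep only the first default entry, drop the rest.
    defaults = root.findall(".//default")
    for extra in defaults[1:]:
        for parent in root.iter():
            if extra in list(parent):
                parent.remove(extra)
                break
    return tree
```

The third issue (merging UDCs of different metric/set) has no mechanical fix; as noted above, just don’t do it.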

The XSLT is a bit complex to post here, but it should be available by clicking on the marked-up filename below. Note that like xmllint, xsltproc is widely available as Linux and UNIX packages, or via the Microsoft Services for UNIX (currently a re-packaged Cygwin environment).

We execute the cleanup XSLT as follows:

xsltproc.exe concatenate-UDC.xsl Merged.udc > VirtualWisdomData\UDCImport\Combined.udc

for Windows, or for non-windows:

xsltproc concatenate-UDC.xsl Merged.udc > VirtualWisdomData/UDCImport/Combined.udc

Note here that the file we use is similarly named but ends in “xsl”, not “xml”. Also, we write the output directly into the UDCImport directory of a VirtualWisdomData folder, which is where an import schedule would look for it.

This resulting file can be directly imported; an example import schedule is at the bottom of the Use UDCs to Collect Devices by Name Pattern article presented on May 1st, 2012. As well, because the UDC is in a standard form, it can be used to Quickly Create Filters for use in Dashboards, Reports, and Alarms.

Evolving SANs tend to have evolving naming schemes and assignment methods, so there will often be many different systems of identifiers joined to serve different Business Units, customers, or functional groups; such groups tend to draw on different sources of information and produce different formats. I hope this process helps you reduce the manual copying of data attributes, which is so prone to human error and scheduling delays.

I hope this helps you to “set it and forget it” on more sources of data. Accurate data drives decisions: how can you methodically fix what you cannot measure and make sense of?

Edit 2012-08-23: concatenate-UDC.xsl was misspelled on the webserver (concatenate-UDCs.xsl), four downloads failed due to my error. Sorry, it should be easier to download now.

Using VirtualWisdom to Reclaim Unused Disk Space

Best Practices, LUN, SAN performance storage i/o bottleneck, VirtualWisdom

I was talking with an independent contractor a few days ago and she mentioned that more than a few customers justify buying storage management tools by using them to find unused disk space. It’s pretty common to find allocated but unused space that often amounts to tens or even hundreds of thousands of dollars’ worth of space. Though VirtualWisdom isn’t thought of as a storage capacity monitor, by watching for I/O activity, you can easily find opportunities to reclaim unused LUNs.

Below is a step-by-step process with screen grabs to illustrate exactly how this is done.

Start your VirtualWisdom Views client, which is the administration interface for VirtualWisdom. The Views client allows you to configure VirtualWisdom, create reports, set alarms and monitor the data collected by VirtualWisdom. Since we are looking for I/O traffic to a LUN, use the SAN Performance Probe to monitor the frames/sec metric in the SCSI metric set; the screen shot is below.

Then select the LUN and storage fields.

Sort the data first by Frames/Sec then Storage and LUN, via the Data Groupings tab.

Then, in the “Data Views” tab, select the Summary Table to list each LUN, and a Trend chart to show the peak value for each period. The Trend chart is important because the Summary Table shows the average over a period, and a small amount of traffic spread across a long period can average out to nearly zero. The Trend chart lets us spot these values.
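The averaging pitfall is easy to reproduce with made-up numbers: one second of heavy traffic in an otherwise idle hour is nearly invisible in the hourly average but obvious in the per-second peak, which is exactly why the Trend chart matters alongside the Summary Table:

```python
# Illustrative numbers only: a single one-second burst of 2000 frames/sec
# in an otherwise idle hour. The hourly average rounds toward zero, while
# the per-second peak makes the activity unmistakable.
samples = [0] * 3600          # one frames/sec sample per second for an hour
samples[42] = 2000            # a single one-second burst

average = sum(samples) / len(samples)
peak = max(samples)

print(f"average = {average:.2f} frames/sec")   # ~0.56, easy to dismiss as idle
print(f"peak    = {peak} frames/sec")
```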

Go to the Reports tab in the Views client. Set a period, say 30 days, and generate the report. For our small test lab, you can see that the tool found one LUN with zero activity in 30 days. With the SAN Performance Probe it’s easy to inspect the LUN and figure out why it hasn’t had any traffic.

You can use the same report with different selection criteria to look for underutilized LUNs. It’s easy, quick, and the ROI can be substantial. For a short video of this VirtualWisdom use case take a look below:

A Simple Strategy for Reducing Your Reactive Tickets

Best Practices, datacenter migration, VirtualWisdom

One of Virtual Instruments’ missions is to reduce the big, messy outages that happen in our customers’ datacenters. For many sectors across the industry, we work to determine ways to catch problems early to avoid outages. Numerous competitors have analytics engines and all kinds of software to detect problems before they become outages, but they are not all equal.

A few weeks ago, I was reviewing a customer’s successful datacenter migration. Their strategy for success when using VirtualWisdom is to tag all the tickets that result from our solution. The IT staff is told to do the VirtualWisdom tickets first, and the customer found that over time, the number of reactive tickets decreased. What was happening was that VirtualWisdom found all the small problems that might otherwise get overlooked, such as: increases in physical errors that precede the failure of an SFP module, the increase of traffic on a link that creeps over the threshold where it won’t successfully fail over to its backup link, and the misconfiguration of a path. VirtualWisdom quietly and diligently finds these problems, and the customer found that if they fixed them, bigger problems were avoided.

I wanted to share this policy with my blog readers because it makes a lot of sense and is simple to implement. If anyone else tries this strategy, let me know how well it works for you.

Using VirtualWisdom to De-risk a Migration / Consolidation Project

Best Practices, SAN performance storage i/o bottleneck, VirtualWisdom

I wanted to share a real-life example of how VirtualWisdom can be used to de-risk your migration and consolidation projects:

Recently, one of our customers used VirtualWisdom to help successfully migrate a datacenter, and at the same time, consolidate two mission-critical, Oracle-based applications from two older-generation storage systems to one new storage system.

Pre-migration analysis for the new data center ensured that it was “production” ready. VirtualWisdom was used to identify naming issues with the zone configuration, a server incorrectly configured from a multipathing perspective, queue depth configuration issues, physical layer problems, and miscellaneous performance concerns. It’s worth noting that the physical layer issues concerned two links found to be borderline within specification at 4Gb, and several other ports found to be outside of specification at 8Gb; these were addressed before the migration occurred. We highly recommend paying particular attention to physical layer issues when migrating to 8Gb SANs, as what worked fine at 4Gb may not work so well at 8Gb.

Before the move, the applications were benchmarked to help increase performance.  During the spin-up of the new site, which occurred on a weekend when traffic was low, VirtualWisdom reported an intermittent latency issue. The latency issue occurred for only a second or two every minute.  The  vendor performance tool that the customer was using could not detect the issue because it was averaging the latency metric and was not granular enough to pick up the anomaly.  The issue was serious enough that the team had to fix it by Monday or they forecasted an outage.  The fall-back plan was to re-deploy on the older storage arrays.  VirtualWisdom, which aggregates metrics to the one-second level, found a process that lasted one second, which was causing the problem.  Once the offending process was identified and remediated, the problem disappeared.  The new site went fully live; the Oracle-based applications functioned as predicted, and VirtualWisdom was able to confirm that the infrastructure performance of the new site, with the consolidated array, met its SLAs.

For more information on how VirtualWisdom can be used to de-risk your migration and consolidation projects, check out our tech brief on private cloud de-risking, our blog post on private cloud migration best practices, or our whitepaper on datacenter consolidation best practices. If you would like to talk with the customer in this story to learn more, contact your Virtual Instruments account team and they can arrange it for you.

Controlling Over-Provisioning of Your Storage Ports

Best Practices, latency, over-provisioning, SAN, storage arrays, VirtualWisdom

While it’s generally accepted that SAN storage utilization is low, only a few industry luminaries, such as Jon Toigo, have talked about the severe underutilization of Fibre Channel (FC) SAN fabrics. The challenge, of course, is that few IT shops have actually instrumented their SANs to enable accurate measurements of fabric utilization. Instead, 100% of enterprise applications get the bandwidth that perhaps only 5% of the applications need, wasting CAPEX.

In dealing with several dozen large organizations, we have found that nearly all FC storage networks are seriously over-provisioned, with average utilization rates well below 10%.  Here’s a VirtualWisdom dashboard widget (below) that shows the most heavily utilized storage ports on two storage arrays, taken from an F500 customer.  The figures refer to “% utilization.”

Beyond the obvious unnecessary expense, the reality is that with such low utilization rates, simply building in more SAN hardware to address performance and availability challenges does nothing more than add complexity and increase risk.  With VirtualWisdom, you can consolidate your ports, or avoid buying new ones, and track the net effect on your application latency to the millisecond.  The dashboard widgets below show the “before” and “after” latency figures that resulted from the configuration changes to this SAN, using VirtualWisdom.  They demonstrate a negligible effect.

Latency “before”

Latency “after”

Our most successful customers have tripled utilization and have been able to reduce future storage port purchases by 50% or more, saving $100K – $300K per new storage array.

For a more detailed discussion of SAN over-provisioning, click here, or check out this ten-minute video discussing this issue and over-tiering.

Eager Attendees Ready to Learn During Hands-On-Lab Sessions at Spring SNW 2012

Best Practices, Hands-On Lab, SAN, SNW, storage, VirtualWisdom

At the spring Storage Networking World (SNW) show in Dallas, I had the pleasure of teaching the hands-on lab session for VirtualWisdom with Andrew Benrey, VI Solutions Consultant, and we had a fantastic response to our “Storage Implications for Server Virtualization” session. We co-presented with Avere and HP 3PAR, and during the two-hour session we covered how to use VirtualWisdom to administer and optimize a Fibre Channel SAN, NAS optimization with the Avere appliance, and the use of thin provisioning and reclamation using the HP 3PAR arrays.

The lab exercises covered all areas of SAN administration. The first exercise looked at how we discover and report physical layer errors. We then looked at queue depth performance, imbalanced paths, and detection of slow-draining devices using buffer-to-buffer credits. In the last exercise, we reviewed a VMware infrastructure showing the virtual machines, Fibre Channel fabric, and SCSI performance.

I found it interesting that many students picked the VirtualWisdom lab to start with. I believe that with the demand for proactive SAN management, more and more people are finding out about the benefits of VirtualWisdom, and came to the hands-on lab to see for themselves. Our lab was sold out for most sessions; the most popular had a sign-up list of 52 for 20 seats. During the six sessions we conducted, we were able to meet and talk in depth with almost 500 attendees about the need for tools like VirtualWisdom and the advantages this platform offers SAN teams working in a virtualized environment. Attendees liked the ability to quickly walk through the infrastructure from the ESXi server down to the storage array and spot the anomalies. The ability to go back in time was also important. Several customers were in the lab as part of their product evaluation.

Those of you who have seen VirtualWisdom understand how rich our user interface can be. For the lab exercises, I specifically divided up exercises so that the lab attendees had a much simpler and more easily understood interface in which to work. This turned out well as very few of the attendees needed additional help in working with the Dashboard interface.

Storage Network World Hands-On Lab Infrastructure

De-risking SAP Performance and Availability

Best Practices, SAP, VirtualWisdom

It’s no secret that many enterprise mission critical IT implementations depend on SAP.  In 2008, the Standish Group estimated the average cost of an ERP downtime at $888K per hour. If you’re an SAP user, you probably have some idea of your cost of downtime.

What’s surprising to me is that often companies still rely on massive over-provisioning to handle the database growth and ensure that their infrastructure can meet the level of performance and availability required for informal or formal Service Level Agreements.  On one level, it’s understandable, because the stakes are so high.  But we’re starting to see a trend towards better instrumentation and monitoring, because, while the stakes are high, so are the costs.

The truth is, the performance of SAP is usually not bottlenecked by server-side issues, but rather by I/O issues.  Unfortunately, most of today’s monitoring solutions, including the best known APM solutions, have a tough time correlating your applications with your infrastructure.  The “link” between the application and the infrastructure is often inferred, or is so high level that deriving actual cause and effect is still a guessing game.

Many of our largest customers de-risk their SAP applications using VirtualWisdom to directly correlate the infrastructure latency to their application instances.  In this simple dashboard widget (below), an application owner tracks, in real time, the application latency, in milliseconds, caused by the SAN infrastructure.

With this level of tracking and correlation, many of the largest SAP and VirtualWisdom customers have successfully de-risked their growing, mission-critical SAP deployments.

To hear our Director of Solutions Consulting Alex D’Anna discuss this issue in more detail, I encourage you to attend his 35-minute On-Demand webcast.

Spring 2012: Storage Networking World

Best Practices, Dallas, SNW, storage, virtualization, VirtualWisdom

It was great to be at the Storage Networking World (SNW) show in Dallas last week. We saw more customers sending people from both the operations and the architecture/planning groups. It’s important for operations and architecture/planning to work together on SAN infrastructure, so it was good to see this and to hear some attendees remark that they were hired to bridge the gap between these groups.

In a panel of CIOs at medium to large companies, all agreed that staffing remains a huge issue. No one is getting new headcount, yet the number of new technologies they have to work with continues to grow. Some saw a solution in cross-training IT staff. One CIO is creating “pods” where architects and planners work closely with operations. Everyone agreed that even though the effect of training and cross-training staff often results in “poaching,” it was still worth it to have a better-trained staff. At Virtual Instruments, we agree with this trend and see cross-domain expertise taking on a more important role. VirtualWisdom, for instance, is designed for use by everyone in the infrastructure, from the DBAs and server admins to the fabric and storage admins.

Stew Carless, Virtual Instruments Solutions Architect, held a well-attended session, “Exploiting Storage Performance Metrics to Optimize Storage Management Processes.” In the session, Stew talked about how the right instrumentation can go a long way towards eliminating the guessing game that often accompanies provisioning decisions.

Over at the Hands-on-Lab, Andrew Benrey and I led the Virtual Instruments part of the “Storage Implications for Server Virtualization” session. We had a full house for most of the sessions and we were pleased that many of the lab attendees were familiar with Virtual Instruments before they participated in the lab.

In a real-time illustration of managing the unexpected: The big news at the show came from the U.S. weather service, when a series of tornados ripped through the Dallas area to the east and west of the hotel. The SNW staff and the hotel did an excellent job of gathering everyone on the expo floor and sharing updates on what was happening. After a two-hour interruption, the SNW staff did a great job of getting the conference back underway. The expo exhibitors enjoyed the two hours of a captive audience!

With a couple of exceptions, many of the big vendors weren’t at SNW, which we see as a positive trend.  People come to these events to learn about new things, and frankly, the newest things come from the newest, smallest vendors.  At SNW, the floor was full of smaller, newer vendors who may not have direct sales forces who can blanket continents, but whose fresh insights and new approaches provided valuable insights for the SAN community.  I didn’t hear one end user complain that their favorite big vendor wasn’t there.

The next Storage Networking World show will be in Santa Clara this October. We are looking forward to meeting everyone again and catching up on what’s going on.


