
How to Re-Use VirtualWisdom Data in 3rd-Party Tools

BAT, How to, samplecode, Scheduler, VirtualWisdom

Everyone I’ve personally met who has witnessed the detail of VirtualWisdom metrics tends to be amazed first, then relates it to LAN and Ethernet tools, then asks why we haven’t seen this before. The next question, in very large organizations, is “how can we re-use this data in our [insert home-grown tool here]?”

Incorporating VirtualWisdom into an organization has various points of “friction”: training on a new tool, understanding the metrics, collection of data to help VirtualWisdom correlate, and beginning to use it internally. As a Virtual Instruments Field Application Engineer (AE or FAE), I tend to see the initial friction (collection of data such as nicknames, or grouping Business-Units as UDCs). The less common friction is “OK, we love VirtualWisdom, but our expansive storage team wants to exploit the metrics in our custom home-grown planning tools”.

Converting VirtualWisdom into a basic data collector ignores the reporting, recording, and alerting capabilities it offers; re-using its data across multiple entities of a corporation expands VirtualWisdom’s utility, and I’m more than happy to help a customer do that. The more we can help our customers make an informed decision — including leveraging more data in their own tools — the more we can help them “free their data” and improve the performance and reliability of a complex data-storage environment.

My entries here on the Virtual Instruments Best Practices blog tend to be of a how-to nature; in this article, I’d like to show how the open-source tool “MailDropQueue” can help push VirtualWisdom data into your home-grown toolset.

Workflow: VirtualWisdom’s report engine generates the data (blue), emails it to MailDropQueue (red), which triggers a script on the attachment (green)

There was a time when customers tried to find our reports by digging through the “exports” directory after a new report was produced — because the most recent report.csv.zip is the correct one, right? This ran into problems when users generated reports just before scheduled ones: when the scheduled “searcher” went looking, it would find the wrong report. Additionally, some reports took a long time, and would not always be finished by the time the customer’s scripts went searching. Customers typically knew what script to run to consume the data and push it to their own systems, but the difficulty of finding the right file (more so due to the lack of a shell on Windows) made this solution over-complex.

Replication at the database level gives us the same problem: the data is in a schema, and it’s difficult to make sense of without the reporting engine.

A while ago, VirtualWisdom gained the ability to serialize colliding reports: if a user asks for a report at the same time the system or another user is generating one, the requests get serialized. This allows VirtualWisdom to avoid deadlock/livelock situations, at the risk of the delay we’re used to at printers: your two-page TPS Report waits behind a 4000-page print of the History of the World, Part 1A. A consistently responsive VirtualWisdom platform is well worth that delay. Unfortunately, the API that many users ask for poses the same risk: it would add a parallel load onto VirtualWisdom that needs an immediate response, adding delay to both responses and risking concurrency delays at the underlying datastore.

The asynchronous approach — wherein VirtualWisdom generates data to share through its reporting engine — is far kinder to VirtualWisdom’s responsiveness, but returns us to the issue of “how do I find that report on the filesystem? The correct report?”

MailDropQueue is a tool in the traditional nature of UNIX: small things that do specific jobs. UNIX was flush with small tools such as sed, awk, lpr, wc, nohup, nice, and cut that could be strung together to achieve complex tasks. In a similar way, MailDropQueue receives an email, strips off the attachment, and for messages matching certain criteria, executes an action for each.

It’s possible for VirtualWisdom to generate “the right data” (blue section, above), send it to MailDropQueue (red portion, above), and have MailDropQueue execute the action on that attachment (green part, above). In our example, let’s consider a customer who knows what they want to do with a CSV file; suppose they have a script such as:

@echo off
rem existing customer script: hand a CSV extract to the in-house importer
call DATABASE-IMPORT.BAT TheData.CSV

The actual magic in this script isn’t as important as the fact that we can trigger it for every attachment sent to a certain destination. Now all we need is to make a destination trigger this script (i.e., the green portion of the diagram above):

<?xml version='1.0' ?>
<actions>
  <trigger name="all">
    <condition type="true"/>
    <action>IMPORT</action>
  </trigger>
  <script id="IMPORT" name="import" script="DATABASE-IMPORT.BAT" parameters="$attachmentname"/>
</actions>

From the above, the condition type="true" stands out, but it is possible to constrain this once we know it works, such as triggering that specific script only when the recipient email matches “ftp@example.com”:

<condition type="equal">
  <recipient/>
  <value>ftp@example.com</value>
</condition>
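
Putting the two pieces together (this is purely a recombination of the two snippets above; the trigger name “ftp-only” is mine and arbitrary), the constrained configuration might look like:

<?xml version='1.0' ?>
<actions>
  <trigger name="ftp-only">
    <condition type="equal">
      <recipient/>
      <value>ftp@example.com</value>
    </condition>
    <action>IMPORT</action>
  </trigger>
  <script id="IMPORT" name="import" script="DATABASE-IMPORT.BAT" parameters="$attachmentname"/>
</actions>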

Also, it’s not so obvious, but the result of every received email matching the condition (“true”) is to run the script with the attachment as the first parameter. This means that if an email arrives with an attachment “performance.csv.zip”, MailDropQueue would effectively execute Runtime.exec("DATABASE-IMPORT.BAT performance.csv.zip").

For reference, I’m running this on a host called fakemta.example.com, compiled to use the default port (8463) as:

java -jar maildropqueue.jar -c maildropqueue.xml

Where maildropqueue.jar is built with the defaults (./configure && make) from a “git clone”, and maildropqueue.xml contains the configuration above. There’s a downloadable

Finally, we need to configure VirtualWisdom to generate and send this data attached to an email; this is a fairly simple task for any VirtualWisdom administrator. Following is a walk-thru up to confirming that the content is being generated and sent to MailDropQueue; the composition of the report and the handler script “DATABASE-IMPORT.BAT” are too environment-specific to cover in this article.

  1. Create the report (outside the scope of this article) — confirm that it produces output. The following snapshot uses our internal demo-database, not actual customer data:

    Capacity / Performance Statistical Report

  2. Create a Schedule to regularly generate and send it:
    1. In the Report Generation Configuration, check that you have the hourly summary if so desired:
    2. Check that all probes are used, but you don’t need to keep the report very long:
    3. Confirm that you have the file format set to CSV unless your handler script can dismantle XLS, or you intend to publish a PDF:
    4. Choose to send email; this is the key part. The message can include anything as subject and body, but you must check “E-mail report as attachment”:
    5. …and finally: you may not yet have the distribution list set up; consider the following example. Note that the port number is 8025, and the server is localhost because in this test I’m running MailDropQueue on the same server. The sender and recipient don’t matter unless you later determine which actions to run based on triggers matching sender or recipient:
    6. Check that your MailDropQueue is running on the same port (this is an example running MailDropQueue using VirtualWisdom’s enclosed Java and the config example above; the two “non-body MIME skipped:” messages are from clicking “Send Test E-Mail” twice):
  3. Finally, run your MailDropQueue. The command used above is shown here (except that running it requires removing the “-V”, highlighted), as well as the config, and an output of “java -jar maildropqueue.jar -V” to show how MailDropQueue parsed the config file:
  4. Clicking “Run Now” on the Scheduled Action for the report generation shows end-to-end that VirtualWisdom can generate a report, send it to MailDropQueue, and cause a script to be triggered on reception. Of course, if the script configured into MailDropQueue hasn’t been written, a Java error will result, as shown:
  5. Now the only things left to do are:
    1. Write the report so that the correct data is sent as tables and Statistical Summary reports (only one View per section)
    2. Write DATABASE-IMPORT.BAT so that it reacts correctly to the zipped archive of CSV files; a sketch follows below
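
To make that last item more concrete, here’s a minimal sketch of what DATABASE-IMPORT.BAT could look like. Every name in it is an assumption rather than part of MailDropQueue or VirtualWisdom: it presumes Info-ZIP’s unzip.exe is on the PATH, that sqlcmd can reach your database server, and that “dbserver”, “CapacityDB”, and “dbo.VWMetrics” are placeholders for your actual server, database, and table:

@echo off
rem DATABASE-IMPORT.BAT -- minimal sketch of a MailDropQueue handler.
rem %1 is the attachment MailDropQueue saved, e.g. performance.csv.zip.
rem Assumptions: Info-ZIP's unzip.exe is on the PATH, sqlcmd can reach
rem the database, and dbserver / CapacityDB / dbo.VWMetrics are placeholders.
setlocal
set WORKDIR=%TEMP%\vw-import
if not exist "%WORKDIR%" mkdir "%WORKDIR%"

rem A report attachment is a zip of one or more CSV files; extract them all.
unzip -o "%~1" -d "%WORKDIR%"

rem Import each CSV; FIRSTROW=2 skips the header row.
for %%F in ("%WORKDIR%\*.csv") do (
  sqlcmd -S dbserver -d CapacityDB -Q "BULK INSERT dbo.VWMetrics FROM '%%~fF' WITH (FIELDTERMINATOR=',', FIRSTROW=2)"
)

rem Clean up so the next report starts fresh.
del /q "%WORKDIR%\*.csv"
endlocal

Note that BULK INSERT reads the file from the database server’s point of view, so in a real deployment the extracted CSVs must land somewhere the server can read.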

Use UDCs to Group ISL Trunks or PortChannels

How to, samplecode, UDC

I recently worked with a VirtualWisdom Administrator who wanted to group his ISL port utilization to match his ISL trunks, so we worked out a method of doing this, and I wanted to share it. As a Field Application Engineer at Virtual Instruments, I tend to focus on these lower-level “how to” issues, working with users to achieve the data representation they need to make informed decisions in lieu of guesses and rules of thumb.

Initially, this administrator and I spoke of “trunks”, but in Brocade and Cisco terminology this means different things. The aggregation of ISLs into a single logical unit is “Trunking” in Brocade but a “Port Channel” in Cisco, and Trunking E ports in Cisco is a different thing entirely. I’ll use “Aggregate” as much as possible, to keep the terminology as vendor-neutral as VirtualWisdom is.

We discussed why this Admin wanted to see more than just “the top 20” values on a list of ISLs. Diving deeper, this was because the top 24 entries were all the same aggregate: essentially, the entire first page was taken up by channels 1 and 2 of a single aggregate ISL. He wanted to skip beyond this to see the next 20 or 40 ISLs so he could see which ISLs were getting near 90% utilization. So… why not combine these into a single filter or expression that matches the aggregate, and make sure that each aggregate uses only one row of the resulting table?

Additionally, one switch vendor implements this aggregation of ISLs as balanced utilization across a collection of links between the same endpoints; conversely, another vendor implements it by overflowing one container, then moving to the next. In essence, an abstract aggregation that has 60% utilization may look like a collection of ports or links each with 60% utilization, or might look like 6 out of 10 links at 100% utilization and 4 of 10 at 0%. That’s very difficult to separate in the data, and can obscure which ISL aggregations are approaching the maximum desired load.

VirtualWisdom’s focus is on ports: what type, attached to what, and so on. Using Aliases or Nicknames, we can describe the endpoint, and in VirtualWisdom 3.0.0 and later, those ISL Nicknames are determined for us. Unfortunately, these all include switch/blade/port, so they’re too detailed: we cannot use that combination in a “group-by” expression to separate out the ISL aggregate.

VirtualWisdom is “too detailed” in this case: it wants to show all the ports individually.

A User-Defined Context, or UDC, is a metric with constant values applied using filter expressions. We often use these to automatically apply a logical grouping that better represents the real-world implementation. One ISL aggregate between two switches A1 and A2 tends to encompass all E or TE ports on A1 connected to A2, and conversely, all A2 E or TE ports attached to switch A1. That tends to make this one ISL unique from others. We create a UDC in the SNMP/Link scope with values based on the “name of the switch” in an ISL: for example, in “SW12A44:3:1 ISL” as a link name, “SW12A44” is the switch name. ISLs between two switches share the same switch names, but are distinguished in the same manner from ISLs to other switches. All we need is a UDC with values such as “SW12A44” where “Attached Port Name MATCHES ^SW12A44:*”, and “SW12B44” where “Attached Port Name MATCHES ^SW12B44:*”.

An example UDC would look like the following (using terminology that leans a bit toward Brocade, because the Administrator favoured Brocade terminology):

UDC to Group ISL Trunks

As you can see, grouping these ISL connections by “Probe Name”, “Channel”, and “TrunkChannel”, and filtering by “Attached ISL”, summarizes traffic on all ISLs by the switches each connects, while aggregating the bandwidth of all trunk members between each pair of switches. Grouping by Channel keeps the directions separate, so that a trunk loaded to 95% in one direction and 5% in the other shows “95%” and “5%” rather than “50%”.

You’ll notice, too, that we’ve added a short-circuit to mark any non-ISL as “NoTrunk”, the same as the default value. This avoids running the heavier “MATCHES” expression against ports that aren’t even ISLs. Your Portal server will thank you.

This logic assumes that all ISLs between two switches are in the same Aggregate; if you have any two switches with more than one distinct aggregation of ISLs, our logic no longer applies. One of our Analysts has seen ISLs grouped into multiple distinct aggregates even though they’re between the same switches, but it wasn’t the case in the discussion sponsoring the work I wanted to share.

Some customers have smaller SANs with a few dozen switches; others exceed 280 switches. That number of switches, and the various ISL possibilities between them, makes writing and maintaining a UDC with over 200 values very difficult and labour-intensive. Because the user is effectively transcribing config information from one format to another, accuracy risks creep in: digits get transposed, updated config information is echoed late, or (more often) the user is simply not informed that a change needs to be echoed or copied at all. These risks are significant detriments to this method.

To de-risk this implementation and help you try it out, we’ve created a script to convert a basic list of ISLs with nicknames into a UDC: while the blogging engine doesn’t let me upload this file, your VI Support and Services teams can help you get “ISL2TrunkChannelUDC.awk”, and a version of “awk” to run it. If your report with a Data View of “Table” is saved as a CSV with the AttachedPortName in the 4th column, you would run this script as:

awk -v COL=4 -f ISL2TrunkChannelUDC.awk 'Table-(summary).csv' > ISLAggregation.udc

(resulting UDC tested on version 3.0.2 and version 3.1.0 pre-release, incompatible before version 3.0.0)
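
While I can’t post the supported script here, the core transformation it performs is simple enough to sketch. The following simplified awk is my own illustration, not the supported ISL2TrunkChannelUDC.awk, and it prints the informal “value … where …” notation used above rather than an actual .udc file; it merely pulls the distinct switch names out of the AttachedPortName column:

# simplified illustration only -- not the supported ISL2TrunkChannelUDC.awk
BEGIN { FS = "," }
NR > 1 {                          # skip the CSV header row
  split($COL, part, ":")          # "SW12A44:3:1 ISL" -> part[1] == "SW12A44"
  sw = part[1]
  gsub(/^[" ]+|[" ]+$/, "", sw)   # strip quotes/blanks left by the CSV export
  if (sw != "" && !(sw in seen)) {
    seen[sw] = 1
    printf "value \"%s\" where Attached Port Name MATCHES ^%s:*\n", sw, sw
  }
}

You would invoke it the same way as the supported script (awk -v COL=4 -f sketch.awk 'Table-(summary).csv'), and it shows why a script beats hand-maintenance: the switch names only ever live in one place.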

I hope this helps you keep watch on your ISL utilization, and show clear justification for adding ISLs to an aggregation, or for balancing traffic to another, less-used edge switch.

Keep those nicknames updated, and have a great holiday.
