
Scheduling Cyclic Jobs in MacOSX


Many of us UNIX old-timers are quite accustomed to cron jobs, but MacOSX has a centralized "LaunchDaemon" called launchd; leveraging it to run cyclic jobs gives an OS-specific, perhaps OS-preferred, way of doing the same thing.

The TL;DR:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key><string>com.example.rsync.SyncTheRepos</string>
    <key>Program</key><string>/usr/bin/rsync</string>
    <key>ProgramArguments</key>
    <array>
        <!-- launchd passes ProgramArguments as the full argument vector,
             so the first element becomes argv[0] (the program name) -->
        <string>/usr/bin/rsync</string>
        <string>-avr</string>
        <string>--delete-after</string>
        <string>rsync.example.com::repos</string>
        <string>~/Documents/Repos</string>
    </array>
    <key>EnableGlobbing</key><true/>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key><integer>3</integer>
        <key>Minute</key><integer>14</integer>
    </dict>
    <key>ProcessType</key><string>Background</string>
</dict>
</plist>

In general, there is a lot of flexibility in setting up a launchd plist — the config info on https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man5/launchd.plist.5.html plus the various examples on the internet should help, but I generally take this example and re-use it.

Once this plist is saved as a local text file in ~/Library/LaunchAgents/ (for example, I’ll save mine as ~/Library/LaunchAgents/rsync-repos.plist), I activate it using:

launchctl load -w ~/Library/LaunchAgents/rsync-repos.plist

If I want to disable the job, I use:

launchctl unload -w ~/Library/LaunchAgents/rsync-repos.plist

The files are not modified in either case.
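To confirm that launchd picked the job up, launchctl can list loaded jobs; filtering on the Label used above is a quick sanity check:

launchctl list | grep com.example.rsync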

Git-Backed Icinga


I want to make git commits to configure my icinga2 monitor: commit to a “devel” tree for config validation, commit to a “prod” tree to validate-and-activate a config.


Felipe Contreras has slowly converted me to using git. I fight with git, but it seems to be the way forward based on the critical mass of its user base, and it only takes 7 more steps to do each thing, so my fingers get used to the additional commands.

 

I’m looking at a few tasks to get this done:

  1. init the repository
  2. load the example config shipped with icinga2 to pre-seed the “master” branch
  3. create “prod” (from “master”) and “devel” branches
  4. pre-commit hooks (both branches): run a "/etc/init.d/icinga2 configcheck" on the content (see the sketch below)
  5. post-commit hook on "prod": export to the running config directory, re-run the configcheck there, and reload icinga2 to activate it

Let’s see how this goes…
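As a starting sketch for the pre-commit hook in step 4 (assumptions: the repository is checked out on the icinga2 host, the tree's entry point is icinga2.conf at its top level, and your icinga2 supports standalone validation via "icinga2 daemon -C -c <file>"):

#!/bin/sh
# pre-commit: validate the staged icinga2 config before accepting the commit.
# A non-zero exit from the validation blocks the commit.
set -e
TMP=$(mktemp -d)
trap 'rm -rf "$TMP"' EXIT
# export the staged (index) version of the tree, not the working copy
git checkout-index --prefix="$TMP/" -a
# validate against the exported copy; adjust the entry-point path to suit
icinga2 daemon -C -c "$TMP/icinga2.conf"

The post-commit hook on "prod" would be similar, but ending with the export to the running config directory and a reload instead of the throwaway checkout.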


VMWare Copied Linux Gotchas


When I manually clone a VM in VMWare, there are a few things I tend to have to remember. More of a memo to myself, this post will be edited or will refer to later posts as necessary. I use this because I forget, and google finds my own stuff as quickly as someone else's…

  1. Install a new VM
  2. Choose to install from another VM
  3. Choose to duplicate, not share nor steal the disk(s)
  4. Search for the Virtual Disk.vmdk file to copy (if it’s not found, is the prototype VM stopped?)
  5. wait for the install to complete
  6. edit the new MAC into the /etc/sysconfig/network-scripts/ifcfg-eth0 file
  7. check for a butchered /etc/udev/rules.d/70-persistent-net.rules file and delete the entries for the previous MAC (see the sketch after this list)
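For steps 6 and 7, a sketch of the edits; the MAC addresses below are made-up placeholders, so substitute the clone's new MAC and the prototype's old one:

# assumes RHEL/CentOS-style networking; 00:50:56:aa:bb:cc is the clone's new MAC
sudo sed -i 's/^HWADDR=.*/HWADDR=00:50:56:aa:bb:cc/' /etc/sysconfig/network-scripts/ifcfg-eth0
# drop the stale rule recorded for the old MAC so the clone's NIC comes back up as eth0
sudo sed -i '/00:0c:29:dd:ee:ff/d' /etc/udev/rules.d/70-persistent-net.rules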

From there, the new clone acts like an independent system. I usually pop into my router and hard-set a hostname against the MAC address so that the DDNS resolves the hostname to whatever IP the DHCP dishes out. That avoids the reverse-DNS (PTR) lookup delay at connection time that most people kinda forget about or don't care about in, oh, everything.

MDS Exclusions to Attempt to UnCrapify Powerpoint


Powerpoint is one of those necessary pills-to-swallow that gives an impressive choking fit at times. In my case, I just want a simple presentation driver, and although I may choose not to use the swooping transitions and 41000 font choices, the ever-increasing load due to dormant enhancements risks bloat to failure.

Specifically, Outlook 2011 tends to stall for up to 6 seconds at a time (which would be no big deal if it didn't happen every few seconds, and if the typing didn't end up in a random place/dialog/chat window).

Activity Monitor shows that MDS VM usage multiplies when Outlook or Powerpoint are started; even with just Outlook running, MDS is at 11% runocc and 984MB (yes, nearly 1G virtual). Physical/Core usage is at 1/3 Gb.

My hypothesis (ObConspiracy: if you want to make a competitor look bad, you want your product to run really crappy on it, but not all the time, and with such a degraded experience that the issues with your competing product pale by comparison — that turns all your regular users into hecklers for those unwilling to accept a huge hurdle for a non-critical app) is that MDS is seeing continuous change in the data files backing Outlook and Powerpoint.

… so let's reduce that visibility into the files that are constantly in flux:
$ sudo defaults read /.Spotlight-V100/VolumeConfiguration.plist Exclusions
(
"/Users/allan.clark/Documents/Microsoft User Data/Office 2011 Identities/Main Identity/Data Records/Exchange Moves",
"/Users/allan.clark/Documents/Microsoft User Data/Office 2011 Identities/Main Identity/Data Records/Exchange Sync",
"/Users/allan.clark/Downloads",
"/private/var/folders"
)

Outlook keeps copies of its mailbox and data, never removing the 2008 version when 2011 is created, so that’s 4 copies of the data on the same disk. Only the most recent seems to change, so let’s exclude that from your MDS:

sudo defaults write /.Spotlight-V100/VolumeConfiguration.plist Exclusions -array-add "$HOME/Documents/Microsoft User Data/Office 2011 Identities/Main Identity/Data Records/Exchange Moves"
sudo defaults write /.Spotlight-V100/VolumeConfiguration.plist Exclusions -array-add "$HOME/Documents/Microsoft User Data/Office 2011 Identities/Main Identity/Data Records/Exchange Sync"

As you can see, I also exclude the root of temp folders and downloads. I don’t need downloads to trigger MDS all the time.
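For completeness, the Downloads and temp-folder entries visible in the "defaults read" output above were added the same way:

sudo defaults write /.Spotlight-V100/VolumeConfiguration.plist Exclusions -array-add "$HOME/Downloads"
sudo defaults write /.Spotlight-V100/VolumeConfiguration.plist Exclusions -array-add "/private/var/folders"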

Don’t forget to restart the service:

sudo launchctl stop com.apple.metadata.mds && sudo launchctl start com.apple.metadata.mds

Let’s see if Powerpoint still chokes…

Airport Utility 5.6.1 on MacOSX 10.8.3


This is a quick reminder of how to unwrap a package and install it manually: AirPort Utility 5.6.1 on MacOSX 10.8.3.

I never remember the vanity cat-versions of OSX, but my 10.8.3 is not permitted to install AU-5.6.1. There's something in the AU-5.6.1 package that refuses (or has forgotten to specifically allow) 10.8.3 as a host OS. My needs involved WEP configurability: I had to get an old laptop onto my Wifi to get it online without a Mir-Cable across the floor.

Unfortunately, AU-5.6.1 doesn't allow a guest network, whereas the frailty of WEP makes me prefer a simple laptop-to-internet pipe without access to the other resources on my LAN. I went elsewhere with this task, but I wanted to keep some notes.

  1. download from http://supportdownload.apple.com/download.info.apple.com/Apple_Support_Area/Apple_Software_Updates/Mac_OS_X/downloads/041-0261.20120611.Vbgt6/AirPortUtility5.6.1.dmg
  2. open (mount) the downloaded dmg so that it appears at /Volumes/AirPortUtility
  3. mkdir -p ~/Desktop/apu561
  4. cd ~/Desktop/apu561
  5. xar -x -f /Volumes/AirPortUtility/AirPortUtility.pkg Payload
  6. tar xzf AirPortUtility.pkg/Payload
  7. sudo mv "Applications/Utilities/AirPort Utility.app" "/Applications/Utilities/AirPort Utility-5.6.1.app"
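A quick check that the relocated app launches:

open "/Applications/Utilities/AirPort Utility-5.6.1.app"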


How to Re-Use VirtualWisdom Data in 3rd-Party Tools


Everyone I’ve personally met who has witnessed the detail of VirtualWisdom metrics tends to be first amazed, then relates it to LAN and ethernet tools, then questions why we haven’t seen this before. The next question in very large organizations is “how can we re-use this data in our [insert home-grown tool here] ?”

Incorporating VirtualWisdom into an organization has various points of "friction": training on a new tool, understanding the metrics, collection of data to help VirtualWisdom correlate, and beginning to use it internally. As a Virtual Instruments Field Application Engineer (AE or FAE), I tend to see the initial friction (collection of data, such as nicknames, or grouping Business-Units as UDCs). The less common friction is "OK, we love VirtualWisdom, but our expansive storage team wants to exploit the metrics in our custom home-grown planning tools".

Converting VirtualWisdom into a basic data-collector ignores the reporting, recording, and alerting capabilities it offers; re-using its data in multiple entities of a corporation is an expansion of VirtualWisdom's utility, and I'm more than happy to help a customer do that. The more we can help our customers make an informed decision — including leveraging more data in their own tools — the more we can help our customers "free their data" and improve the performance and reliability of a complex data-storage environment.

My entries here on the Virtual Instruments Best Practices blog tend to be of a how-to nature; in this article, I'd like to show how the opensource tool "MailDropQueue" can help push VirtualWisdom data into your home-grown toolset.

 

There was a time that customers tried to find our reports by digging through the “exports” directory after a new report was produced — because the most recent report.csv.zip is the correct one, right? This ran into problems when users generated reports just before scheduled reports, and when the scheduled “searcher” went searching, the wrong report would be found. Additionally, some reports took a long time, and would not always be finished by the time the customer’s scripts went searching. Customers typically knew what script they could run to consume the data and push it to their own systems, but the issues in finding that file (moreso due to a lack of shell on Windows) caused this solution to become over-complex.

Replication at the database level gives us the same problem: the data is in a schema, and it’s difficult to make sense of without the reporting engine.

A while ago, VirtualWisdom gained the ability to serialize colliding reports: if a user asks for a report at the same time the system or another user is generating one, the requests get serialized. This allows VirtualWisdom to avoid deadlock/livelock situations at the risk of the delay we're used to at printers: your two-page TPS Reports are waiting behind a 4000-page print of the History of the World, Part 1A. The benefits of a consistently-responsive VirtualWisdom platform are well worth this trade-off. Unfortunately, the API that many users ask for poses this same risk: it adds a parallel load onto VirtualWisdom that needs an immediate response, adding delay to both responses and risking concurrency delays at the underlying datastore.

The asynchronous approach — wherein VirtualWisdom can generate data to share through its reporting engine — is more cooperative to VirtualWisdom’s responsiveness, but returns us to the issue of “how do I find that report on the filesystem?  The correct report?”

MailDropQueue is a tool in the traditional nature of UNIX: small things that do specific jobs. UNIX was flush with small tools such as sed, awk, lpr, wc, nohup, nice, cut, etc that could be streamed to achieve complex tasks. In a similar way, MailDropQueue receives an email, strips off the attachment, and for messages matching certain criteria, executes actions for each.

It’s possible for VirtualWisdom to generate “the right data” (blue section, above), send it to MailDropQueue (red portion, above), and have MailDropQueue execute the action on that attachment (green part above).  In our example, let’s consider where a customer knows what they want to do with a CSV file; suppose they have a script such as:

@echo off
call DATABASE-IMPORT.BAT TheData.CSV

The actual magic in this script isn’t as important as the fact that we can indeed trigger it for every attachment we see to a certain destination. Now all we need is to make a destination trigger this script (ie the green portion of the diagram above):

<?xml version='1.0' ?>
<actions>
    <trigger name="all">
        <condition type="true"/>
        <action>IMPORT</action>
    </trigger>
    <script id="IMPORT" name="import" script="DATABASE-IMPORT.BAT" parameters="$attachmentname"/>
</actions>

From the above, the “condition type=true” stands out, but it is possible to constrain this once we know it works, such as to trigger that specific script only when the recipient email matches “ftp@example.com”:

<condition type="equal">
    <recipient/>
    <value>ftp@example.com</value>
</condition>

Also, it’s not so obvious, but the result of every received email that matches the condition (“true”) is to run the script with the attachment as the first parameter. This means that if an email arrives with an attachment “performance.csv.zip”, MailDropQueue would Runtime.exec("DATABASE-IMPORT.BAT performance.csv.zip").

For reference, I’m running this on a host called fakemta.example.com, compiled to use the default port (8463) as:

java -jar maildropqueue.jar -c maildropqueue.xml

Where maildropqueue.jar is compiled by defaults (./configure && make) from a “git clone”, and maildropqueue.xml contains the configuration above. There’s a downloadable

Finally, we need to configure VirtualWisdom to generate and send this data attached to an email; this is a fairly simple problem for any VirtualWisdom administrator to do.  Following is a walk-thru up to confirming that the content is being generated and sent to the MailDropQueue; the composition of the report and the handler script "DATABASE-IMPORT.BAT" is too environmentally-specific to cover in this article.

  1. Create the report (outside the scope of this article) — confirm that it produces output. The following snapshot uses our internal demo-database, not actual customer data:

    Capacity / Performance Statistical Report

  2. Create a Schedule to regularly generate and send it:
    1. In the Report Generation Configuration, check that you have the hourly summary if so desired:
    2. Check that all probes are used, but you don’t need to keep the report very long:
    3. Confirm that you have the file format set to CSV unless your handler script can dismantle XLS, or you intend to publish a PDF:
    4. Choose to send email; this is the key part. The message can include anything as subject and body, but you must check “E-mail report as attachment”:
    5. …and finally: you may not yet have the distribution list set up; consider the following example. Note that the port number is 8025, and the server is localhost because in this test, I'm running the MailDropQueue on the same server. The sender and recipient don't matter unless you later determine which actions to run based on triggers matching sender or recipient:
    6. Check that your MailDropQueue is running on the same port (this is an example running MailDropQueue using VirtualWisdom’s enclosed Java and the config example above; the two “non-body MIME skipped:” messages are from clicking “Send Test E-Mail” twice):
  3. Finally, run your MailDropQueue. The script used above is shown here (except that running it requires removing the "-V", highlighted), as well as the config, and an output of "java -jar maildropqueue.jar -V" to show how MailDropQueue parsed the configfile:
  4. Clicking “Run Now” on the Scheduled Action for the report generation shows end-to-end that VirtualWisdom can generate a report, send it to MailDropQueue, and cause a script to be triggered on reception. Of course, if the script configured into MailDropQueue hasn’t been written, a Java error will result, as shown:
  5. Now the only things left to do are:
    1. Write the report so that the correct data is sent as tables and Statistical Summary reports (only one View per section)
    2. Write the DATABASE-IMPORT.BAT so that it reacts correctly to the zipped archive of CSV files

Merging UDCs in VirtualWisdom to Join Manual and Generated UDCs


UDCs — User-Defined Context — can be very useful for showing the actual use, or membership of a device on the SAN, or for assigning a priority for alerting and thresholds. Often, these are hand-generated, but we do have methods of creating them from other content.

One customer has both: a UDC generated/converted from other content, and some manually-assigned content. Merging these would help him to assign filters and alerts as a single group, but the effort to merge it was looking excessive.

As you may recall, my content on the Virtual Instruments Best Practices blog tends to be of the how-to variety, and in this article, I'd like to share how to merge two UDCs programmatically, which can then be scripted in any automated collection scripts or tools you're already using.

Merge, Then Cleanup

The general process we use for this is to merge the content first (using xmllint), then clean it up (using xsltproc) so that it’s back to sane, predictable UDC that is ready for routinely-scheduled import:

UDCs merged using xmllint and xsltproc

Notice in this image that the only things that change are in the upper box (“UDC Files”), which can be either manually-edited or autonomously-generated by filter or transform. As well, the result is a standard UDC from which we can generate filters or otherwise edit using XML tools.

As you can see, the tools used here are fairly standard; the only real development are the smaller scripts for each tool. UDCs are simply XML, and as such, quite easy to manipulate using standard XML tools.

Let’s break this down into the multiple steps.

Concatenation

The easiest way I found to concatenate two XML files was to use XInclude with an XPointer statement:

<?xml version="1.0"?>
<list xmlns:xi="http://www.w3.org/2001/XInclude">
    <xi:include href="File1.udc" xpointer="xpointer(//list/*)"/>
    <xi:include href="File2.udc" xpointer="xpointer(//list/*)"/>
</list>

Broken into parts, each of those two inclusions is really a copy of the following:

<xi:include
href="File1.udc"
xpointer="xpointer(//list/*)"
/>

If you’ve written any sourcecode, you’d recognize an #include statement, or an import com.example.*; this is really no different: the document referenced by the “href” (File1.udc) replaces this xi:include statement. The second part, an xpointer="...", further clarifies the import by indicating that only a part of the document we include should come in — in this case, child elements of the “list” element. If you look at a UDC File, you’ll see that “list” is the root node; if that statement makes very little sense, then think of this as “we’re including all the stuff inside the outermost container, but not the container itself”. And hey, look again at the full file above: we specify a <list> and </list> around the inclusions. Coincidence? Not at all; this is a method of avoiding having two outermost root nodes, which cannot be further altered using XML because XML can only have one outermost root node.

…and it's easier this way: don't filter out what you can avoid including in the first place. It's possible that there's a better set of inclusion elements here, but this works well enough.

If we had three UDCs to merge, you can see that it would merely require another xi:include statement.

To act on this file, we execute xmllint using the “-xinclude” parameter (normal hyphen, only one “-”, not two) as follows. Note that xmllint is available on most non-Windows systems, and should be easily acquired using Microsoft Services for UNIX for a Windows system.

xmllint.exe -xinclude concatenate-UDC.xml > Merged.udc

for Windows, or for non-windows:

xmllint -xinclude concatenate-UDC.xml > Merged.udc

(using “Merged.udc” as a temporary file)

We now have a UDC file with only one outermost “list” element or root element, but it has a few problems:

  1. Every new UDC starts with Evaluation Order of 1; this is reflected in the UDC, and has to be fixed
  2. Only one default item should be given: we choose the one from the first file
  3. We only copy the first file’s definition of the UDC (Metric, set, etc) so the user needs to avoid doing this on two UDCs of different metric/set (illogical UDCs will result)

The first two issues can be fixed in the next step.

Clean the Concatenated Result

XSLT, or XSL Transformations, uses XSL (Extensible Stylesheet Language) to transform XML into a different XML, or even into a simpler form such as straight text or ambiguous markups such as CSV. In general, XSLT can map XML data from one schema to another, convert data from one schema to another, or simply extract elements of data into a text stream.

In our case, we’re using it to remove the redundant parts that will cause VirtualWisdom’s UDC parser to reject the document. There is currently no schema definition, so we have to make best efforts to make the resulting UDC look like one exported from VirtualWisdom.

The XSLT is a bit complex to post here, but it should be available by clicking on the marked-up filename below. Note that like xmllint, xsltproc is widely available as Linux and UNIX packages, or via the Microsoft Services for UNIX (currently a re-packaged Cygwin environment).

We execute the cleanup XSLT as follows:

xsltproc.exe concatenate-UDC.xsl Merged.udc > VirtualWisdomData\UDCImport\Combined.udc

for Windows, or for non-windows:

xsltproc concatenate-UDC.xsl Merged.udc > VirtualWisdomData/UDCImport/Combined.udc

Note here that the file we use is similarly-named, but ends in "xsl", not "xml". Also, we write the file directly into the UDCImport directory of a VirtualWisdomData folder, which is where an import schedule would look for it.
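Chained together for a routinely-scheduled collection script, the whole merge (including cleanup of the temporary file) is only a few lines; a minimal non-Windows sketch, assuming the UDC files and scripts sit in the current directory:

xmllint -xinclude concatenate-UDC.xml > Merged.udc && \
  xsltproc concatenate-UDC.xsl Merged.udc > VirtualWisdomData/UDCImport/Combined.udc && \
  rm -f Merged.udc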

This resulting file can be directly imported; an example import schedule is at the bottom of the Use UDCs to Collect Devices by Name Pattern article presented on May 1st, 2012. As well, because the UDC is in a standard form, it can be used to Quickly Create Filters for use in Dashboards, Reports, and Alarms.

Evolving SANs tend to have evolving naming schemes and assignment methods, so there will often be many different systems of identifiers that can be joined to work with different Business Units, customers, or functional groups; such groups tend to require different sources of information to be polled, and different formats result. I hope this process helps you to reduce the manual copying of data attributes, which is so prone to human error and scheduling delays.

I hope this helps you to “set it and forget it” on more sources of data. Accurate data drives decisions: how can you methodically fix what you cannot measure and make sense of?

Edit 2012-08-23: concatenate-UDC.xsl was misspelled on the webserver (concatenate-UDCs.xsl), four downloads failed due to my error. Sorry, it should be easier to download now.

Use VirtualWisdom Alarms to Schedule Daily Tasks


The VirtualWisdom Service part of the VirtualWisdom Platform doesn’t necessarily do everything: our customers’ SANs differ in the small details as well as the larger ones, necessitating VI Services to help with some customization. In many cases, we set things up to run daily, such as grabbing zone info to convert to Nicknames, or converting Nicknames to UDCs and Filters.

In some cases, customers cannot edit the Windows Scheduler to run these, and do not have a UNIX-like system with an available scheduler. This can be due to access, or corporate policy. I wanted to share a workaround for this situation: (mis-)use the Alarm system to do so.

The following image may explain more efficiently than a walk-through:

Example daily alarm to run a batch file

As you can see by the name, the alarm policy should only be applied to one ProbeSW — to one SAN Switch.

The alarm will trigger when any data flows — you can see the trigger set to "> 0, 1 matching interval in domain of 1 interval", and all it does is run an external program. The configuration of that external program is also opened in the editor, and you can see that it simply runs a script (using the full pathname).

The re-arm of that Alarm Policy Rule is "MB/sec != -1". Because MB/sec can only go down to zero, "-1" is impossible, so this rule will always match. The trick is that this has to match once triggered, and has to match for 288 intervals (288 x 5 minutes = 24 hours). Effectively, this is a logic statement that says "don't run more often than every 24 hours".

This Alarm Policy Rule effectively runs immediately after the Portal Service is restarted or the Alarm Policy is applied to a switch, and will run every 24 hours thereafter (understanding that 288 might need to be 287 to avoid a 5-minute skew daily).

The “meat” or complexity here would be in the BAT file: the Alarm uses the “External Script” action to run our batch file daily. This avoids configuring the OS Scheduler, but at a cost of not being able to choose the exact time. Additionally, the BAT file executes with the permissions of the Portal Server, which typically cannot view Network Shares and other remote resources.

Move Your VirtualWisdom Backups into Your Backed-Up Space


VirtualWisdom has an easy backup system: backups are configured as easily as any scheduled event, as frequently as daily, at any time, and with multiple schedules possible, re-using the same configuration for each. The issue of a new filename every time — chosen by VirtualWisdom to avoid overwriting a good backup with one that might run into some exception and be incomplete — often leaves a new backup file present each week, with no simple method of aging-out old backups.

The Post-Backup Script in the Backup Service Configuration runs after every backup, if activated: it simply executes a script with a few parameters. This allows the VirtualWisdom Administrator a certain flexibility in writing any manner of script that can run as the VirtualWisdom process to accomplish the automated moving around of backup files — or, logically, any task, even unrelated to the backup.

As defined by the underlying database vendor, our database files need to remain untouched by backup and antivirus processes, which tend to lock the files for long periods. Any locked data file tends to block database writes, slowing throughput and risking corruption of the data. This requirement also means that backups are typically outside of corporate backup tools and policies; the risk of a backup not being preserved through a catastrophic filesystem exception is clearly significant. Even though VirtualWisdom only handles measurements and data about the data (it does not handle the data itself, and does not form a critical path in data I/O), loss of VirtualWisdom is loss of the measurement and analysis tools which may be critical to resolving storage issues. Clearly we want the backup for VirtualWisdom to be safely archived.

In this article, I’d like to share one example of how successful backups can be moved into the filesystems covered by corporate backup policies, replacing past backups to avoid ever-increasing disk usage. My content here on the Virtual Instruments SAN Best Practices blog tends to be of a technical “how-to” nature; we hope this article may help define a customer’s backup config, giving safety to the data so that focus can return to the performance and availability of the SAN.

Overview

The basic backup process is a sequence such as:

  1. lock the database (database becomes read-only)
  2. quickly duplicate all database files
  3. unlock the database and let processing continue
  4. aggregate the backup files into a single file, optionally compressing

The feature we want to exploit to improve this process is the optional “Execute the following command upon completion” entry on a Backup Service Configuration to move the backup file to where it should be. In most cases, “where it should be” is a disk covered by corporate backup processes with sufficient space to hold the backup, compressed, accounting for organic growth (database backup grows as number of monitored ports, VMs, ESXs, and ITLs increase over time).

For our example, that is the "X" drive. Bear in mind that the backup script runs as the VirtualWisdom process, which runs as a service and hence has no access to network drives. In our example, the "X" drive might even be a SAN LUN: even though we recommend that the database disk not be on a SAN LUN (due to the risk of being affected by the very performance problems and exceptions that VirtualWisdom is trying to help users track and resolve), the backup may be on a SAN LUN, because delays in the archived backup do not directly affect performance of the VirtualWisdom platform.

Example Backup Service Configuration

Typically, your backup schedule would look like the following: (except that my work server is small, so I have disabled mine by unchecking the checkbox beside the scheduled time)

Typical Backup Service Config without Post-backup script

… with a Backup Service Configuration such as:

Typical Backup Service Config without Post-backup script

Improved Backup Service Configuration

Instead of merely doing the backup, we can use the “post-backup script” to do the work for us. The “Post-Backup Script” is the name I’ve started using for the script that gets listed in the box for “Execute the following command upon completion”. An example script may be as simple as the following:

Example Post-Backup Script

As we can see, when the second parameter given to the script ("%2") is a 1, then the filename given as the first parameter ("%1") is moved to the consistent filename X:\Backups\VirtualWisdomBackup.zip. The X: drive would be within normal backup policy, so routine backups would protect the database archive.

This batch file is run by entering it as a “post-backup script” as follows. NOTE: where possible, use a full pathname to ensure the script is found, and it’s the correct script.

Example Backup Service Config with a post-backup script

As we can see in this Backup Service Configuration, we have enabled the “Execute the following command upon completion” checkbox, and listed our script as the script to run. The two parameters are selectable with the “Insert” box, or may be directly typed free-form.
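For reference, the command entered into that box might look like the following; the script path is a hypothetical example, and the two tokens can be picked from the "Insert" box or typed free-form:

C:\Scripts\virtualwisdom-post-backup.bat $BACKUP_FILE$ $BACKUP_STATUS$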

When the script runs after a backup is complete, the $BACKUP_STATUS$ is replaced by a 1 or a 0 depending on whether the backup was successful — and as noted above, if this value is "1", the working file is moved; otherwise, it's untouched. Perhaps an enhancement might be to raise an alert that the backup failed (VirtualWisdom logs backup failures in the Portal log, but makes no other indication), or to delete or move aside a failed backup for analysis and fault-resolution.

When the backup is complete, a new backup file is created, named after the time that the backup started: backup - yyyy-mm-dd-HH-MM.zip, where yyyy is the year, mm is the month (zero-padded), dd is the day (zero-padded), HH is the hour (24-hour clock), and MM is the minutes (zero-padded) — yes, this is intentionally very close to ISO8601, which is the basis for RFC3339, HTML5, and XML date formats. With a unique, always-incrementing filename, new backups will never overwrite previous backups, but they are difficult to track down. The $BACKUP_FILE$ token is replaced by this filename, allowing the post-backup script to work with the correct filename every time.

Of course, in order to summarize the underlying behaviour, we do change the name of the schedule itself, but it’s not critical:

Backup Configuration with post-backup script

In most articles we include complete examples; here, the development and explanation of this relatively simple script is the complete example. Of course, changes will have to be made for each unique environment. Most backups do not run to the C: drive because there would not be sufficient space; rather, most configurations have a D: or E: drive for data, and that drive is used as a working drive during backups.

Quickly Create Filters for VirtualWisdom UDC Values


The UDC capability in VirtualWisdom enables quite a powerful ability to group fabric entities based on a number of parameters, but creating the filters to use a large UDC can be a bit cumbersome. UDC is VirtualWisdom's User-Defined Context, allowing a virtual metric value to be defined within summaries, calculated based on powerful expressions.

Typically, UDCs are used to separate and group entities such as:

  • Physical Datacenter to filter physical-layer alerts (such as CRCs) to the correct ticket queue for inspection
  • Business Unit (BU) UDCs to filter performance alerts (such as response-time) against Business-Unit -specific thresholds (i.e. Oracle requires 12ms response time, but the NFS filer accepts 20ms)
  • Port/Blade/ASIC calculations
  • Grouping a SuperDome’s ports or an Array’s ports for filtered reports

As well, UDCs are used for "what-if" calculations: what if the SCSI traffic from a certain HBA were zoned to a different storage port: would it overload the queue and link speed? What-if UDCs are an extremely powerful tool to prove capacity based on historical use, but somewhat out-of-scope for this article.

My content in Virtual Instruments' SAN Best Practices tends to be of a how-to nature; in this article, I'd like to share a simple method of creating all the "X = Y" filters for a specific UDC programmatically, which can reduce the time-to-value in new installs or changing environments. When linked with other generation how-to articles (such as nickname collection, or generating UDCs by transform), this can further reduce the effort of managing a very large SAN.

Process Overview

For this process, our workflow will look like the following:

As you can see, the starting file “UDCExport.udc” can be either exported from the VirtualWisdom Portal itself, or can be generated by other means. The file is converted using xsltproc using a “program” or “script” UDC2Filter.xsl, resulting in Filters.xml which can be imported manually to VirtualWisdom.

Overview

UDC Files in VirtualWisdom are a specific schema of XML file; as such, standard easily-available license-free tools such as xpathget, xmllint, or xsltproc can be used to interrogate, validate, or convert the starting XML to a different format, even generating CSV or simple text in the process.

XSLT stands for XSL Transformations; XSL (Extensible Stylesheet Language) is a stylesheet language for XML, similar to CSS describing the style of a free-form HTML page. In essence, XSL can be considered a CSS in XML, but rather than only marking up content — such as type facing and style for large printed content — XSL can also transform and convert content. XSLT is the act of using XSL markup in a standalone processor (xsltproc) to create content based on XML content. In many cases, this is XML generating XML, but it can be used to write TSV, CSV, JSON, etc.

VirtualWisdom Filters are exported as another schema of XML file, and can be similarly manipulated by standard XML tools. Even though this XML is a text-based format, trying to edit it with a text editor can be prone to human error. We can read XML for debugging (xmllint -format), but as the size of the content grows, it is better to treat the XML as though it were an opaque binary format and manipulate it with tools, which again leads us to the free tool XSLT.

In our case, a specific XSLT file is used to manipulate a UDC definition into a list of Filter definitions: UDC2Filter.xsl guides the conversion of UDC Values to Filters which match them.

Running the Script

xsltproc is available on most non-Windows platforms as an installable RPM, SSO, .deb, .pkg, or similar pre-packaged open source project; on Windows, it can be installed per SageHill’s Instructions; a file xsltproc.zip is easily obtained from any VI FAE to accelerate your install process.

Running it is quite simple:

xsltproc.exe -o Filters.xml UDC2Filter.xsl UDCExport.udc

There’s no output: all generated content goes directly to the output filter file.

Complete Example

In order to show the full process, in case I've left out some details or some details seem implied, this is a full example based on data in our demo databases (which we use for demos and training):

Given the following UDC:

UDC that we start with for our example UDC2Filter

We export this UDC to Application_SW.udc and run the XSL Transform as follows:

xsltproc.exe -o Application_SW_Filters.xml UDC2Filter.xsl Application_SW.udc

The result we get in Application_SW_Filters.xml looks like this:

Clearly this example is only a few filters, no big deal. The benefit comes in when there are more than a half-dozen to build (recently, a 212-value UDC was tested). As well, if the UDC is edited (perhaps based on automated processes) then the administrator must go through and check that every value has a filter.

Unfortunately, there is no schedule-action for Filter import.
