Tech Notes » virtualization

Vagrant on MacOSX-10.10 and Later

vagrant, virtualization

If your vagrant installation isn’t working on MacOSX-10.10 (“Yosemite”) or 10.11 (“El Capitan”), add the following to your ${HOME}/.profile or ${HOME}/.bashrc:

export PATH=${PATH}:/opt/vagrant/bin
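
Then start a new shell, or re-read the file you edited, and confirm the binary is found:

source ~/.profile    # or ~/.bashrc, whichever file you edited
which vagrant        # should print /opt/vagrant/bin/vagrant
vagrant --version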

VMWare Copied Linux Gotchas

How to, virtualization

When I manually clone a VM in VMWare, there are a few things I have to remember. This is more of a memo to myself; the post will be edited or will refer to later posts as necessary. I keep it here because I forget, and google finds my own stuff as quickly as someone else’s…

  1. Install a new VM
  2. Choose to install from another VM
  3. Choose to duplicate the disk(s), not share or steal them
  4. Search for the Virtual Disk.vmdk file to copy (if it’s not found, is the prototype VM stopped?)
  5. Wait for the install to complete
  6. Edit the new MAC into the /etc/sysconfig/network-scripts/ifcfg-eth0 file (example commands below)
  7. Check for a butchered /etc/udev/rules.d/70-persistent-net.rules file and delete the entries for the previous MAC
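
For steps 6 and 7, the edits look roughly like this on a RHEL/CentOS-style guest (the MAC below is a placeholder; use the one VMWare assigned to the clone):

# Step 6: put the clone's MAC into the interface config
sed -i 's/^HWADDR=.*/HWADDR=XX:XX:XX:XX:XX:XX/' /etc/sysconfig/network-scripts/ifcfg-eth0

# Step 7: drop the udev entries recorded for the old MAC; the blunt option is to
# remove the whole file and let udev regenerate it on the next boot
rm /etc/udev/rules.d/70-persistent-net.rules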

From there, the new clone acts like an independent system. I usually pop into my router and hard-set a hostname for the new MAC address so that DDNS gives me the IP from the hostname when DHCP dishes out an address. That avoids the reverse-DNS (PTR) lookup delay at connection time that most people forget about or don’t care about in, oh, everything.

Spring 2012: Storage Networking World

Best Practices, Dallas, SNW, storage, virtualization, VirtualWisdom

It was great to be at the Storage Networking World (SNW) show in Dallas last week. We saw more customers sending people from both the operations and the architecture/planning groups. It’s important for operations and architecture/planning to work together on SAN infrastructure, so it was good to see this and to hear some of the attendees remark that they were hired to bridge the gap between these groups.

In a panel of CIOs at medium to large companies, all agreed that staffing remains a huge issue.  No one is getting new headcount, yet the number of new technologies they have to work with continues to grow.  Some saw a solution in cross-training IT staff.  One CIO is creating “pods” where architects and planners work closely with operations.  Everyone agreed that even though training and cross-training staff often results in “poaching,” it was still worth it to have a better-trained staff.  At Virtual Instruments, we agree with this trend and see cross-domain expertise taking on a more important role. VirtualWisdom, for instance, is designed for use by everyone in the infrastructure, from the DBAs and server admins to the fabric and storage admins.

Stew Carless, Virtual Instruments Solutions Architect, held a well-attended session, “Exploiting Storage Performance Metrics to Optimize Storage Management Processes.”  In the session, Stew talked about how the right instrumentation can go a long way toward eliminating the guessing game that often accompanies provisioning decisions.

Over at the Hands-on-Lab, Andrew Benrey and I led the Virtual Instruments part of the “Storage Implications for Server Virtualization” session. We had a full house for most of the sessions and we were pleased that many of the lab attendees were familiar with Virtual Instruments before they participated in the lab.

In a real-time illustration of managing the unexpected: The big news at the show came from the U.S. weather service, when a series of tornados ripped through the Dallas area to the east and west of the hotel. The SNW staff and the hotel did an excellent job of gathering everyone on the expo floor and sharing updates on what was happening. After a two-hour interruption, the SNW staff did a great job of getting the conference back underway. The expo exhibitors enjoyed the two hours of a captive audience!

With a couple of exceptions, many of the big vendors weren’t at SNW, which we see as a positive trend.  People come to these events to learn about new things, and frankly, the newest things come from the newest, smallest vendors.  At SNW, the floor was full of smaller, newer vendors who may not have direct sales forces that can blanket continents, but whose fresh perspectives and new approaches provided valuable insights for the SAN community.  I didn’t hear one end user complain that their favorite big vendor wasn’t there.

The next Storage Networking World show will be in Santa Clara this October. We are looking forward to meeting everyone again and to catching up on what’s going on.


Three Steps to De-Risking Migration to the Private Cloud

Best Practices, Private Cloud, virtualization

One of our customers recently completed a major datacenter consolidation, which included a move to a private cloud infrastructure for some of their applications.  I asked them how the private cloud initiative went and what they think they’ll get out of it.  During the discussion, they mentioned that they used VirtualWisdom to help with the migration, including the deployment of a major app on vSphere.  I thought I’d share the three discrete migration best-practice steps they took, using VirtualWisdom, to ensure that the project went well.

1.     Find and Eliminate Connectivity Errors

This meant cleaning up multi-pathing errors, both single paths and unbalanced paths.  To no one’s surprise, they found quite a few areas that needed clean-up.  At the same time, they monitored for physical-layer issues, found one serious bottleneck by looking at buffer-to-buffer credits, and remediated it.  Their private cloud migration uses virtualization at both the server and storage levels, with greater utilization of all components, so finding and fixing physical-layer issues before the move was deemed essential.
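
VirtualWisdom finds these from the fabric side; as a rough host-side cross-check on a Linux server, a sketch like the one below can flag LUNs that dm-multipath sees with fewer than two paths (the multipath -ll output format varies by version, so treat this as a first pass, not an audit):

multipath -ll | awk '
  / dm-[0-9]+ /                 { flush(); name = $1; paths = 0; next }   # map header line
  /[0-9]+:[0-9]+:[0-9]+:[0-9]+/ { paths++ }                               # H:C:T:L path line
  END                           { flush() }
  function flush() { if (name != "" && paths < 2) print name " has only " paths " path(s)" }'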

2.     Ensure Optimal Performance

Because part of the project was a consolidation effort, the customer needed to review the configuration of their storage network.  Finding problems and opportunities for reducing the number of physical links without impacting performance was key.  They reviewed Queue Depth settings and found hidden performance improvements that gave them extra bandwidth headroom on the most-used links.
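
If you want to see what a Linux host is actually using before and after such a change, the per-device setting is visible in sysfs; a minimal check (device view only; HBA-level limits vary by driver) might look like:

# Print the current queue depth for each SCSI disk the host can see
for d in /sys/block/sd*; do
  echo "${d##*/}: $(cat "$d/device/queue_depth")"
done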

The customer used Exchange Completion Time, the measure of an I/O from the initiator to the LUN and back, as the key metric for performance testing.  They benchmarked application latency before and after the queue depth settings were changed, and were able to prove the positive impact.  Then, as they brought applications over, they were able to instantly determine, to the millisecond, the impact of the migration on application latency.  This prevented potential user satisfaction issues, and they were able to prove that the consolidation project and, separately, the private cloud migration did not hurt application response times.

3.     Optimize Utilization, Reduce Congestion

Good network capacity planning can help maintain networks in optimal working order.  It can reduce the risk of outages due to resource limitations and justify future networking needs.  It’s important to look for patterns that occur at various times of day: there are often “rush hour” periods when SAN traffic slows under significantly increased demand.  Using the VirtualWisdom “what if” reporting, this customer uncovered a backup job that would have created a bottleneck if the consolidation had taken place exactly as planned, so they moved it to a less busy time of day, avoiding a potential problem.
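
As a trivial illustration of the idea (not the VirtualWisdom workflow itself), if you had a CSV export of link-utilization samples in a hypothetical timestamp,percent layout, the busy windows could be pulled out with something like:

# Flag samples above a 70% utilization threshold; the column layout is an assumption
awk -F, -v limit=70 'NR > 1 && $2 + 0 >= limit { print $1, $2 "%" }' link_utilization.csv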

They also found a number of no-longer-used reports that consumed server cycles and network bandwidth.  One of the reports drove utilization on one link to approximately 70% for just two minutes, and that alone was enough to push transaction times well past an acceptable range.  By correlating metrics on the physical and virtual servers with link utilization, they were able to locate these rogue jobs and re-balance workloads.

Though the private cloud can help speed deployments and reduce costs, there’s little advantage to the end user if it increases the risk to application performance and availability.  By de-risking these areas, this customer was able to deliver the benefits of the new compute model and mitigate the risks.

 

Are You a ‘Server Hugger’? How to Virtualize More Apps

Best Practices, SAN, virtualization, VMworld

At VMworld in Las Vegas, leading analyst Bernd Harzog presented an intriguing case for how to increase the use of virtual servers.  In his session entitled “Six Aggressive Performance Management Practices to Achieve 80%+ Virtualization,” Bernd described both the reasons why more applications aren’t virtualized today and what to do about it.

Since there seems to be much industry confusion about “best practices” for increased virtualization, we wanted to highlight some of Bernd’s key takeaways.  First, he accurately identifies that the benefits seem to accrue to the team managing the infrastructure, NOT to the application owners.  For the app owners, dedicated hardware is a comfort blanket they are unwilling to give up, and he affectionately refers to these folks as “server huggers.”  To these huggers, virtualization is all risk and no reward!

So what’s the answer?  According to Bernd, companies implementing virtualization should deliver better application performance on their shared-services virtual infrastructure than they can deliver on dedicated physical hardware.  He goes on to offer best practices for HOW.  Bernd describes a six-step process, but it’s step number two that Virtual Instruments can help with the most.

  1. Implement a Resource-based Performance and Capacity Management Solution
  2. Put in Place an Understanding of end-to-end Infrastructure Latency
  3. Take Responsibility for Application Response Time!
  4. Rewrite your Service Level Agreements around Response Time, Variability, and Error Rates
  5. Base Your Approach to Capacity on Response Times and Transaction Rates
  6. Make Response Time and Transaction Rate Part of your Chargeback and Workload Allocation Process

As our customers know, Virtual Instruments can help with nearly all of these.  But we’re best known for step two, understanding the end-to-end infrastructure performance – not just VMware performance or SAN performance, but literally end-to-end performance – and infrastructure response time is the key metric we offer that really differentiates us.  It’s perhaps the most valuable metric to the team supporting the virtual infrastructure.  Bernd talks about it in some detail, and he goes on to offer advice on criteria that will help accomplish this second step to increasing virtual server success.  His list:

  • Measure IRT – Monitor how long it takes the infrastructure to respond to requests for work, not how much resource it takes
  • Deterministic – Get the real data, not a synthetic transaction, or an average
  • Real Time – Get the data when it happens, not seconds or minutes later
  • Comprehensive – Get all of the data, not a periodic sample of the data
  • Zero-Configuration (Discovery) – Discover the environment and its topology, and keep this up to date in real time
  • Application (or VM) Aware – Understand where the load is coming from and where it is going
  • Application Agnostic – Work for every workload or VM type in the environment, irrespective of how the application is built or deployed

We couldn’t agree more!  I can’t do justice to Bernd’s presentation here, so to hear more, go to the Performance Management Topic at The Virtualization Practice, or listen to the webinar we did with Bernd.
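
For a rough feel of what host-observed I/O response time looks like (a local approximation only, not the end-to-end infrastructure response time Bernd and VirtualWisdom are describing), a Linux host can be sampled from /proc/diskstats; the sketch below assumes bash and a device name such as sda:

#!/bin/bash
# Rough host-side I/O latency sample from /proc/diskstats.
# awk fields: $3 = device name, $4/$8 = reads/writes completed, $7/$11 = ms spent reading/writing.
DEV=${1:-sda}       # block device to watch (assumption)
INTERVAL=${2:-10}   # seconds between the two samples

sample() { awk -v d="$DEV" '$3 == d { print $4 + $8, $7 + $11 }' /proc/diskstats; }

read ios1 ms1 < <(sample)
sleep "$INTERVAL"
read ios2 ms2 < <(sample)

ios=$(( ios2 - ios1 )); ms=$(( ms2 - ms1 ))
if [ "$ios" -gt 0 ]; then
  echo "$DEV: $(awk -v ms="$ms" -v ios="$ios" 'BEGIN { printf "%.2f", ms / ios }') ms average per completed I/O"
else
  echo "$DEV: no I/O completed during the interval"
fi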
