Unofficial vCenter Operations for PowerCLI update!

Posted by vele1655 | Posted in vCOps | Posted on 15-03-2014


This is going to be a quick update on vCOps for PowerCLI.  In the past few months I have added a few capabilities to the module.

  • Easy assessment of pre-defined metrics
  • vCOps resource lookup and cache on connect (can be used without PowerCLI now)
  • New API implementation of getMetricValuesFromMemory

 

Download the module here.

 

vCenter Operations Assessment

 

I am going to skip over documenting the assessment portion since it is still a work in progress.  When you uncompress the zip file you will see the following shortcut, which can be used to start an assessment based on the existing metrics in the var directory.  There is a file called “attr_resources.txt” which includes three sections: 1) a command section for collections, 2) metricKey, which grabs files with specific metrics for commands, and 3) metricKeyMatch, which grabs metrics based on regular expressions.  The collection takes A LONG TIME to run since it is collecting historical information and must do this metric by metric, resource by resource =)

image

 

 

vCenter Operations Resource Lookup

 

It is now possible to run the metric collections without logging into PowerCLI/vCenter first.  This works because we collect all vCOps resources and their identifiers when logging in, and these items can then be passed to the metric collection cmdlets.

 

image

 

 

Get-vCOpsResourceMetricRecent

 

There is a newer API method that vCOps has introduced (getMetricValuesFromMemory) which allows access to metrics that reside in memory.  In addition, the query allows multiple resources and metrics to be requested at once, which helps in large-scale environments looking for the most recent metrics for resources.
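For a feel of the end-to-end flow, here is a rough usage sketch.  Only Get-vCOpsResourceMetricRecent is a real cmdlet name from the module; the connection cmdlet and all of the parameters shown are my shorthand, so check Get-Command against the module for the exact names.

# Illustrative sketch only – names other than Get-vCOpsResourceMetricRecent are assumptions
Connect-vCOpsServer -Server vcops01 -User admin        # resources and identifiers are cached at connect
$resources = Get-vCOpsResource -Name "*prod*"          # pulled from the connect-time cache
$resources | Get-vCOpsResourceMetricRecent -MetricKey "cpu|usage_average"   # one in-memory query for many resources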

 

image

Avamar and VMware vCenter Operations – Part Two!

Posted by vele1655 | Posted in Avamar, vCOps | Posted on 27-08-2013


This post involves integrating Avamar VM backup statistics directly with other VM statistics while leveraging the deep analytics of vCenter Operations to help manage image based backups (see Chad’s videos here). Before proceeding, check out a recent related post here that discussed how vCenter Log Insight can be used to receive Syslog messages and display useful Avamar backup related activities for VM admins. From a high level, the difference between these two posts is "what happened" (Log Insight) versus "what is the state over time" or "what might happen" (vCOps).

Part One of the vCOps integration is also important to review. It was released back in the February 2013 timeframe during VMware’s PEX. This first iteration set the groundwork for performing active collections against Avamar for VM backup events and creating a mechanism to send these to vCenter Operations. The post details, towards the end, how to configure the collections. The value in this integration was that you could overlay "change events" per Virtual Machine and see when backups occurred on top of Health, Risk, Efficiency, and Faults. For example, if the health of a VM decreased, this could be caused by underlying Datastore performance problems. The backups could then be plotted against this decrease in Health. And even beyond this, you could change the context to view backups from multiple VMs and how they aligned against Datastore health. Value? See the impact of backups, and get more information to help plan better.

clip_image001

 

This integration was very cool and people were able to get it running with Avamar and even VDP (it took modifying a VDP firewall rule for Postgres access).

Now what’s the next level of coolness? Well, it is actually in line with what most people think of as traditional monitoring. Can we monitor the metrics of Avamar? Of course. For a VMware administrator, can we extend relevant information and monitor the VM backup metrics per VM? Yes.

Are these some of the questions that a VMware admin or even a backup admin might be interested in?

  • Is this VM in violation of an RPO?
  • Which VMs are consuming the most unique space in our Protection storage?
  • Which VMs have the lowest deduplication rate?
  • What is the backup rate?
  • Am I impacting storage?
  • ..and when did this start and end or what is normal?

Introducing Part Two! Take a look at the screenshot below. Notice that there is a root-level grouping of Avamar statistics, and along with them we have an Avamar instance grouping as well. What does this mean? Since the VM object contains these statistics we can easily relate any of our backup activities to VM stats. Do you want to see if storage latency increases during backups? No problem. How about snapshot space usage during backups? Sure. Plenty of awesome possibilities here that I have really only started to explore!

clip_image002

 

What are we tracking per VM? Here’s the list. It is important to note that we are really tracking five things and the information is then placed into different calculated metrics.

  • Protected/Scanned bytes – The whole size of the VM being protected
  • New Bytes – The new bytes after source-based deduplication (on the image proxy) that is absorbed into the Protection storage
  • Deduplication – A calculation based on New Bytes and Protected/Scanned bytes
  • Duration – How long a backup took
  • Currently Available – Recovery points available

clip_image003

 

It is also important to mention the different calculated values. Since vCOps doesn’t do data manipulation (WYSIWYG) without things called Super Metrics, it is sometimes more efficient to send calculated values. An important point here is that we actually need Active Collections against Avamar in addition. Why? If we want to estimate how many backups currently exist and do other calculations from this, we could simply look at the expirations of backups that have occurred and figure out which ones *should* be around. But that’s not good enough!

We need active collections against Avamar for things like "How many backups does this VM have right now?" or "How much Protection storage is being consumed by all backups of this VM right now?" We can then take that number, and the details for those backup activities, and calculate, for example, averages, sums, and maxes for the durations of the backups that currently exist. These calculations can then be plotted over time. It will make more sense when you start to see it in action!
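As a rough sketch of the idea (the object shape and numbers are invented for illustration), the aggregation amounts to this sort of thing:

# Pretend these are the recovery points that currently exist for one VM
$backups = @(
    [pscustomobject]@{ DurationSec = 900;  NewMB = 120 },
    [pscustomobject]@{ DurationSec = 1100; NewMB = 80 }
)
$currentlyAvailable = $backups.Count                                            # 2 recovery points
$durationAverage    = ($backups | Measure-Object DurationSec -Average).Average  # 1000 seconds
$newSum             = ($backups | Measure-Object NewMB -Sum).Sum                # 200 MB
$newMax             = ($backups | Measure-Object NewMB -Maximum).Maximum        # 120 MB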

See the following for a summary of the metrics being shown.

  • Currently Available – Number of Recovery Points for the VM
  • Deduplication Latest (%) – Latest VM deduplication % (1 - (New Bytes/Scanned Bytes))
  • Duration Average (seconds) – For recovery points that exist at this point, the average duration of backups
  • Duration Latest (seconds) – Latest backup duration
  • New Bytes All (%) – For recovery points that exist at this point, the aggregate percentage of New Bytes (Total New Bytes/Scanned Bytes)
  • New Bytes Avg (MB/sec) – Reported backup New Bytes averaged over duration and plotted at start and finish times
  • New Bytes Latest (%) – Latest New Bytes percentage (New Bytes/Scanned Bytes)
  • New Latest (MB) – Latest New Bytes
  • New Max (MB) – For recovery points at this time, the maximum New Bytes
  • New Sum (MB) – For recovery points at this time, the total New Bytes
  • Scanned Avg (GB/sec) – Reported backup Scanned/Protected space averaged over duration and plotted at start and finish times
  • Scanned Latest (GB) – Latest space Scanned/Protected
  • Scanned Max (GB) – For recovery points at this time, the maximum Scanned/Protected space
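To make the percentage metrics concrete, here is a quick worked example with made-up numbers:

# A 100 GB VM scanned, with 2 GB of new bytes left after source-based dedupe
$scannedGB = 100
$newGB     = 2
$newBytesLatestPct      = ($newGB / $scannedGB) * 100        # New Bytes Latest (%) = 2
$deduplicationLatestPct = (1 - ($newGB / $scannedGB)) * 100  # Deduplication Latest (%) = 98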

Here we have a screenshot that shows a few of these metrics being graphed. Notice in the top graph that we are showing between 7 and 8 recovery points at all times. The next graph shows the Scanned rate at somewhere around 10 GB/sec during backup operations. Also notice the last graph, which shows how much new data for all backups of that VM is actually living on the Protection storage at all times. It maxes out at about 2 GB.

Another important point to mention here is that the last graph shows where analytics are starting to predict the growth of the space. The space in grey is a dynamic threshold that is generated based on past data and is increasing. These DTs are things that can be included in Health calculations and trigger alerts.

clip_image004

 

How about some more analytical coolness? The following graphs together give an idea of what is normal for backup rate, percent of new data, and duration. As you can see, our scan rate is within the grey DT, the New Bytes All (%) is slightly below the normal range, and the duration of the backups is now returning to normal levels. Awesome?

clip_image005

 

The thresholds in vCOps can be dynamic or static. So if you decide that you want to alert on this information there are plenty of options.

One of the main benefits highlighted here is that we can track Avamar backup metrics against VMware Virtual Machine metrics. This is a hugely important point!

This can help to answer questions and plan for better backup windows. For example, in the following screenshot we are bringing in a backup and its average scan rate (10 GB/sec) over a period of time, and then showing the storage latency. Notice how the storage latency is not impacted!

clip_image006

 

 

And taking this a step further, here is another screenshot of the same information but over a longer period. Here you can see that, yes, there actually is storage latency at certain times, and if desired you could then drill into those periods.

clip_image007

 

Here’s a bit of a possible use case.

I mentioned backup planning before, and it can be a very important thing. One of our major goals is to perform backups in a timely manner while not impacting SLAs or the performance of a VM. This has been a major theme since the virtualization trend began, since there essentially are no wasted resources anymore: all resources are pooled and have a cost. So when we plan for backups we want to leverage all the efficiency technology we can in the hypervisor stack (CBT) and the backup stack (source-based dedupe) to minimize impact. So how can we leverage this information to help plan?

When it comes to image based backups, one of the major things to overcome and consider is VMware snapshots. They are an essential part of getting Operating System consistency during backups and being able to present a consistent copy of a VM to be backed up. When used correctly, the consistent problem customers hit with VMware snapshots boils down to one thing: consolidation of the snapshot (singular).

Based on how these snapshots work, in order to remove a snapshot, the new data that was written after the snapshot was taken needs to be hydrated back into the Virtual Machine disks. Depending on how much new data is being written to your VM, at high levels of new block writes this process can become unpredictable and thus cause issues in the long run for certain VMs. At times this process can even fail, leaving orphaned snapshots. A VM with an unconsolidated snapshot can consume double the space of the VM and can increase the response times for its storage operations.
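As an aside, a quick PowerCLI one-liner (stock cmdlets) will inventory which VMs are carrying snapshots and how large they have grown:

Get-VM | Get-Snapshot | Select-Object VM, Name, Created, SizeGB | Sort-Object SizeGB -Descending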

For this reason, understanding how your environment and specifically your VMs work with VMware snapshots is key to image based protection! Wouldn’t it be nice to have this kind of insight from vCOps? Here’s a view that is actually unrelated to backups, highlighting how snapshots relate to a VM.

clip_image008

 

Awesome! I can see the snapshot space accumulating and being removed with consolidations at the bottom, and I can actually see the direct relationship between this and Total Latency at the top. Yes, the snapshots do cause latency! Very cool to see, and a quick planning lesson for image based protection. In summary, we want to minimize how long snapshots stay on VMs during backup to a) reduce how much data needs to be consolidated and b) reduce any impact to storage latency and other VMs.

What else can we do? Well, if you run a copy of vCenter Operations with Enterprise licensing then you can build custom dashboards of this information and leverage more analysis and visualization tools or widgets, such as the Heatmap. Don’t get me wrong here, all of what you saw previously is available out of the box with vCenter Operations. But if you want pre-built dashboards, Enterprise is currently it.

clip_image009

 

I hope you liked it! As you can see, the capability of bringing backup information into vCOps and directly into the VMs is important and will be key to running and relying on image based protection for your VMs. I would love to hear your feedback on this one.

EMC-PRT-HOL3 – Puppet and EMC at the VMworld 2013 Hands-On-Labs

Posted by vele1655 | Posted in Puppet | Posted on 23-08-2013


If you get a chance to pop in or remotely take a lab via Project Nee, take the EMC HOL! I had a chance along with Joel Sprouse (@longinthepocket) and Jeff Thomas (@vcemonster) to make a killer lab!

Our goal with the lab this year was to focus on how a VMware administrator can leverage Puppet via the vSphere Web Client and deliver PaaS with simplicity! If you take the lab I believe you will be pleasantly surprised to see that it is an educational lab. Yes, of course there are a few EMC plugs under the covers. But overall the meat of the lab is focused on building and running Puppet modules and how to leverage advanced features in Puppet to make it easier for VMware admins to manage infrastructure in a data driven way.

By the end of the lab I guarantee you will have learned some stuff about Puppet and will be asking for more VMware and Puppet integration to simplify PaaS even further!

 

What will you learn?

  • Basic Puppet module creation – check
  • Puppet Module deployment – check
  • Puppet Facter and Hiera (Project Steel) with VMware information – experimental, but publicly posted and sweet!
    • Literally, a Puppet Agent can know what its real VM identity is
  • Puppet Modules for VMs (not operating systems) – experimental and double sweet!
    • A Puppet Module can use a custom Provider based on simple REST calls with a VM identity

 

Core technology that we cover.

  • ViPR (video)
  • vSphere Web Client with Avamar
  • Puppet Master and Agents
  • Puppet Modules – Avamar (Windows and Linux), Avamar VM, Hyperic, Apache, MSSQL
  • Puppet ENC – Hiera with vCenter!
  • vSphere Web Client with vCenter Orchestrator
  • Hyperic for PaaS and Avamar monitoring

 

Let me know what you think!

vmofsteel

EMC Avamar with vCenter Log Insight

Posted by vele1655 | Posted in Uncategorized | Posted on 22-08-2013


Viva la VMworld 2013! What a perfect time to discuss one of VMware’s new announcements for the Management stack and how it can be used to give visibility of backup related activities to the Virtual Machine admin.

See the vCenter Operations with Avamar – Part Two post here.

 

VMware has released a product that it calls vCenter Log Insight (currently at 1.04). This product is what I would consider a great start for having a simple and effective tool to analyze logs that come from the VMware virtualization stack. This includes being able to see events from vCenter and all of your hypervisors, as well as bringing in other Syslog information. See details from VMware here.

Wouldn’t it be great if this tool could also be used to distribute visibility of Virtual Machine backups to the same operations focused team that would use vCenter Log Insight? If you’re interested, then this is the post for you! For EMC, this is the second public posting of a Content Pack, the first being for the VMAX storage platform, written about by the guru himself, Cody Hosterman, in this post.

The following information demonstrates how to configure Avamar to send Syslog events to vCenter Log Insight and how to load the Content Pack for Avamar and VM Backup Activities. What is a Content Pack? It is a pre-packaged file for Log Insight that describes queries, ways of parsing fields, dashboards, and widgets. The Content Pack does not do active collections, since Log Insight’s supported integration for 3rd party input is solely Syslog based.

The focus of the Avamar Content Pack for Log Insight at this point is solely on what we call Activities in Avamar. These are jobs that run and can be backup, restore, or replication related. Each of these activities includes certain fields, which can include things like VM name, duration, total bytes scanned/protected, total new bytes (source deduplicated and new to the backup appliance), message summary, and other useful fields. Since the activities are generated after the fact, they very much focus on "what happened" rather than "what is the current state," "this isn’t normal," or "what might happen."  Look for a video demo of this at VMworld!

Below you can find a handful of screenshots that show vCenter Log Insight in action. Personally, I have a good amount of history with SIEM platforms and have been impressed so far with the simplicity and slickness of the GUI. Although it doesn’t yet implement analytical or root cause analysis capabilities, it is a solid starting platform for VMware to build on.

Here is a screenshot showing the Content Pack after it has been loaded and has had some time to retrieve Syslog events from multiple Avamar instances.

clip_image001

The current content pack includes one dashboard that has the following widgets out of the box.

  • Backups – VM New Bytes
  • Backups – VM Bytes Protected
  • Backups – VM Bytes Protected by OS
  • Backups – VM New Bytes by OS
  • Backups – VM Average New Bytes by OS
  • Backups – VM New Bytes per VM
  • Backups – VM Successful
  • Backups – VM Successful (exceptions)
  • Backups – VM Scheduled Runs Completed
  • Backups – VM Failed Runs
  • Backups – Bytes Protected by Dataset
  • Backups – On-Demand Completed
  • Events by Avamar Instance
  • Event Types by Avamar Instance

 

Any of the widgets can be drilled into, which opens the Interactive Analysis window. This window not only displays the query (in this case, the one behind the widget), but also allows you to further filter and drill into the details around the information being presented.

clip_image002

 

Another capability of vCenter Log Insight is its integration with vCenter Operations for alerts. There is a section in vCenter Log Insight that allows you to set up alarms that trigger on certain conditions and send the information into vCenter Operations. If you’re looking to send Avamar metric data in a more meaningful way, it really requires active collections and an adapter to get the most value out of vCOps. See an upcoming post.

clip_image003

 

So you’re sold? Continue on if you’re interested in deploying the Content Pack and configuring Avamar to send Syslog data!

  • Download the Content Pack here
  • Install vCenter Log Insight Virtual Appliance and Configure it

 

Now let’s get started and configure Syslog on your Avamar instances!

The first step is to open your Avamar Administrator console from your desktop. This will bring up a screen that allows you to use the drop down menu to navigate to a section called Manage Profiles.

clip_image004

Here we want to create a new profile that determines which events to send to vCenter Log Insight. Press the New button, create a new profile (you might name it after your vCenter Log Insight server), and save it.

The next step is to Edit the profile.

clip_image005

Ensure that on the Properties page the profile is enabled and that Syslog is the only service selected.

clip_image006

Press the Syslog Notification tab and enter the IP address or name of your vCenter Log Insight server. Ensure that "Include extended event data" is selected and User is the facility.

clip_image007

 

The next tab to select is the Event Codes tab. There are two screenshots shown below which list the events I have selected. These are related to VMware activities and might be relevant. However, the widgets only use events whose Summary fields begin with "Activity." You can select Notify next to the events you want to send to Log Insight. It is important to mention that events other than Activities might need to be classified in the "Interactive Analysis" area so that Log Insight can properly parse the message and related fields.  Keep in mind that the events shown may differ depending on your version of Avamar.

clip_image008

clip_image009

 

Save changes with OK.

The next step is to import the Content Pack. Look at the top of vCenter Log Insight and select the Settings button and Content Packs.

clip_image010

 

In the bottom left, press the "Import Content Pack" button.

clip_image011

 

Browse to the Content Pack file and import it.

Go back to Avamar, Edit the profile that you created, and press the Syslog Notification tab. Press Send Test Syslog Entry, which will generate a test event and send it to the configured server.

clip_image007[1]

 

Go back to Log Insight and press the Interactive Analytics button.

clip_image012

 

Here we are looking to find the test event to ensure communication is happening properly. Let’s create a query by pressing the Add Constraint button and selecting "source" and "equals". The input box to the right should allow you to click in it, and pressing the down arrow should show a list of IP addresses or names that have sent Syslog information. Select your server from this list if the Syslog test was successful! If not, try sending the message again, and return to this screen until it works.

Select your IP/name and press the Search button.

clip_image013

 

Here are the results of the query. You can see that we are showing the test event from Avamar with tags. The blue fields at the bottom of the result are how vCenter Log Insight parsed the information, and if you hover over these fields you should see the data it retrieved from parsing.

clip_image014

 

If you got here then you’re ready to go! Any events from Avamar should now be coming in, so it is just a matter of waiting and the dashboard should start populating! Enjoy!

Retina, VMware and Windows Addicts

Posted by vele1655 | Posted in Uncategorized | Posted on 18-07-2013


This post is a bit outside of my normal ones, but I figured it may be useful for those that want Retina resolution on their Windows Fusion VMs!

A week or so back I got my hands on a new MBP, which came with a 15.4” Retina screen.  Suffice to say the screen is amazing, especially when you consider I’ve been chugging along on Dell and Lenovo PCs prior, and even 1920×1080 at home.

So the problems for me started to arise when it came to using this high resolution in a corporate environment.  So far I have been able to tweak a handful of settings for guests running in Fusion and View that have allowed me to keep my native Retina resolution while not sacrificing my vision =) 

Vision?  The goal was to leverage VMware Fusion to pass through the 2880×1800 Retina resolution to my guest.  See the screenshot below!

 

 

 

image

 

So we have more pixels being passed through, but the problem is that my screen is only 15”.  This leaves it almost unreadable from a size perspective.  Think of it as if a big screen were hooked up at 2880×1800 resolution: on a 50” panel you would be fine reading, but not on my 15” screen.  This situation is why OSes (when they support next-gen resolutions) are starting to ship with the DPI settings bumped to higher percentages.  Look at the screenshot below: instead of the standard 100% setting, I actually have my Fusion Windows 7 VM cranked up to 188% to make things comfortably readable while staying super high-def at the same time.

image

 

This is great, but it is truly a bit of a hack.  In reality the OS, and even the applications, require design standards to ensure that by modifying something like this DPI setting everything still appears as it should.  In my opinion there are still problems with modifying the DPI, but most are tolerable, and the benefits of running a high-res Windows image are definitely there to be had.  So if you are interested in how to make a few additional tweaks for Windows VMs running via Fusion, or nested another level using View, then continue on.  I at least am committed to hanging on to Retina in my Windows guest for now!

This whole Windows Fusion VM tends to be an important one for me since I run my corporate desktop image from Fusion, so I spend a lot of time there.  Coming from a security background I tend to color inside of the security policies and align to corporate requirements in all cases, so this is actually my VPN connection as well.  Anyhow, anything that is corporate related needs to happen from within this Fusion VM. 

Another interesting situation then arose having to do with VMware View.  Apparently in VMware View for Mac there is a setting that allows an end-user to tweak this DPI (%) setting automatically.  But since I am actually using VMware View for Windows and the Retina resolution was not expected (I suppose), I had to manually tweak this setting from within the guest.  I also had to tweak it manually because the 150% max setting in the Control Panel GUI was not enough for me.

Below I show a couple of screenshots where you can see the custom LogPixels DWORD set to 180 as a DECIMAL.  LogPixels is a DPI value (96 = 100%), so 180 works out to roughly the 188% scaling shown in the custom value above.  I set this in both the Fusion VM and the further nested VMware View VM.  Reboot after making these settings changes. 
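If you would rather script the change than click through regedit, a sketch like the following should work.  The key path is the machine-wide Windows 7 DPI location and is an assumption on my part, so verify it against the path shown in the screenshots.

# Assumed path: machine-wide DPI key on Windows 7 – verify against your build
# 180 here is DPI, not percent (96 DPI = 100%, so 180 DPI is roughly 188% scaling)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Hardware Profiles\Current\Software\Fonts' `
    -Name LogPixels -Value 180 -Type DWord
# Reboot afterwards for the new DPI to take effect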

image image

 

In come some of the other problems that can then arise.

The buttons in the top right seemed to be a bit out of scale, so I was able to tweak their size by modifying the Windows Color and Appearance settings under Personalization.  You can get to this from the “Advanced Appearance Settings…” button in the screenshot below.  Make sure to select the X button in the screen, and then the “Caption Buttons” drop down will be populated.  I chose size “36”, which helped in both the Fusion and nested VMware View VMs.

image

 

For PuTTY (in a VMware View session) the mouse kept disappearing when I hovered over a PuTTY window.  This can be fixed by forcing a mouse profile as shown in the screenshot below.

image

 

My journey continues onward with a better view on my desktop and starting to get back some productivity that was lost in making the big switch.  It ain’t perfect!

Using VM Tools to Reinstall VM Tools in a Linux Guest

Posted by vele1655 | Posted in Automation, Powershell, Scripting | Posted on 16-07-2013


Quick post!  This is a short goodie that might be useful to those that are looking to reinstall the guest tools via VM Tools.  There are definitely other ways to do this, but here is a cool one-liner that may be useful!

Why is this interesting?  We needed to use VM Tools to reinstall itself.  This is done by encapsulating the whole command with “()” and sending it to a background process that doesn’t get killed once VM Tools gets uninstalled.  Notice how we also issue a sleep command after the invoke so that VM Tools can return a success for this whole command prior to being removed.  For us the following command worked for a SLES 11.1 build, but you could modify it based on your flavor of Linux and package manager of choice.  Remove the line breaks =)

 

$VM | Invoke-VMScript -GuestUser root -GuestPassword vmware -ScriptType Bash -ScriptText `
"(sleep 10 &&
chkconfig vmware-tools off &&
/etc/init.d/vmware-tools stop &&
rpm -e vmware-tools-* &&
rpm -e vmware-tools-nox-* &&
rpm -e vmware-tools-common-* &&
zypper addservice --type=YUM http://packages.vmware.com/tools/esx/latest/sles11.1/x86_64 vmware-tools-collection &&
zypper -n install -c vmware-tools-collection vmware-tools-esx-kmods-default vmware-tools-esx) &> /tmp/toolsUpdate.log &"

Project Steel: Integrating Virtual Machine Guests with Puppet

Posted by vele1655 | Posted in Automation, Puppet | Posted on 10-07-2013


So far with Puppet and VMware, applications have treated their platform or underlying operating system no differently whether it is virtually or physically hosted. But there is no doubt plenty of opportunity is being missed by doing this. As more and more platforms are hosted by virtual means, there is a whole lot of value in working at a lower level with the VM itself rather than only inside of the guest.

For Puppet there are two reasons that stand out to me to do this. The first is Node Classification. Depending on the size and requirements of the deployment, Puppet can be used independently to manage a single platform with a pre-determined state, or it can be used at larger scales. Larger scale is what we will focus on here.  The second is modules that can work at the VM layer; more on this in another blog post.

The process that a Platform goes through to determine its defined end state is called Node Classification. This process takes Facts about a Platform or Guest and then determines which manifest or classes to apply to the Platform (which applications to install). Node Classification sits as yet another place that needs to be managed for the configuration of Platforms. Organizations looking at leveraging Puppet tend to want to integrate this lookup externally.

In comes the External Node Classifier (ENC). Instead of statically managing site and node manifests, Puppet can use an ENC to look up the desired classes externally. This lookup is now done natively in Puppet v3+ through Hiera. Hiera allows the classification to be returned to Puppet via Ruby scripts in a standard format, and Ruby can then perform the lookup against whatever location you choose. Some deployments may look at their CMDB systems, and others may look to a place more dynamic than local files, such as a simple database. This is one of many options.

But for VMware deployments I am introducing yet another possibility. How about storing the desired Puppet classes (or applications) to be deployed alongside the Virtual Machine? Sounds cool, right? The same tool you use to manage your Virtual Machines can be the same tool you use to manage your Puppet deployments!  Other organizations may already be doing this in the opposite manner, but here is one more option to look at!

The following is purely a proof of concept and a first iteration down this path!  There are three technical components that were written specifically for this.

However, one more key concept before moving on! Another reason we want to classify by VM instead of Guest Operating System is the nature of virtualization and cloud. A guest operating system sitting on a physical host tends to have some level of guaranteed uniqueness, such as BIOS UUID, MAC addresses, and possibly hostnames. But with virtualization these things can no longer be relied on, since they can be dynamically created and copied at will between VMs. This leaves software that relies on this information for uniqueness at a loss when it comes to integration with the guest, since it may not be able to discern one platform from another. So basically, in order to guarantee uniqueness, VM oriented information must be used to future proof!

 

VM InstanceUuid Lookup

In order to make this whole thing work we need to rely on the VM Tools running in a guest to request information from the underlying hypervisor about itself. Lucky for us, any Advanced Setting that is created on a VM with the guestinfo.* classification is available to the guest (thanks William Lam). A bit hokey currently, but we can copy the config.instanceUuid from the VM to the Advanced Setting guestinfo.vminstanceUuid. This can be done easily via PowerCLI or vCenter Orchestrator. With this set, the VM Tools in the guest can know a unique value that its underlying VM is known by!

You can see there is nothing reported by the Guest for vminstanceUuid.

clip_image001

 

A simple PowerCLI command can then set the vminstanceUuid into the guestinfo.vminstanceUuid Advanced Setting. This setting is stored in the VMX file of the VM for persistence.
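In case the screenshot is hard to read, the command is along these lines (a sketch using the stock New-AdvancedSetting cmdlet, with the VM name from this example):

$vm = Get-VM Linux01
New-AdvancedSetting -Entity $vm -Name 'guestinfo.vminstanceUuid' `
    -Value $vm.ExtensionData.Config.InstanceUuid -Confirm:$false
# Inside the guest, VM Tools can then read it back, e.g.:
#   vmware-rpctool "info-get guestinfo.vminstanceUuid"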

clip_image002

 

You can now see the VM Tools command returning the proper vminstanceUuid!

clip_image003

 

VM Guest Fact

In come Puppet and Facter. Now that the guest can look up its own instanceUuid, we need to return this properly to Puppet through a custom Fact, whether the guest is Windows or Linux (Clintonskitson/vmguest).

Here is the fact that comes in the Puppet vmguest module. You must turn auto-deploy (pluginsync) on for custom Facter files to be deployed to agents.

clip_image004

 

From the Linux guest you can see the vminstanceUuid being returned as a fact. Perfect time to move on!

clip_image005

 

Hiera vCenter Lookup

Now we must configure the Hiera vCenter lookup (ENC) that will query vCenter with this instanceUuid and return what has been configured for that Virtual Machine under Custom Values in the "puppet.classes" field.

See the following addition to the site.pp file. Note that we need the default manifest to return nothing, and the targeted node must not already be classified in the nodes.pp file. Basically, Puppet cannot error out if the node is not classified in nodes.pp.

clip_image006

 

We also need to populate the hiera.yaml file with the configuration necessary to log in to the vCenter instance. This account can be read-only and is used to look up Custom Values based on instanceUuids.

clip_image007

 

This is the setting in vCenter for the Virtual Machine where we set the appropriate classes. If there are multiple classes, they are separated by commas.

clip_image008

 

You can see the setting for our Linux01 Virtual Machine.

clip_image009

 

Below we are running the Puppet Master in interactive mode where we can see the debug information (puppet master --no-daemonize --debug). Here we can see the Hiera initialization and the successful lookup for this VM, along with the Apache2 module class being returned!

clip_image010

 

Awesome stuff! So this concludes the basic lookup capability of VMs for Puppet. There are a handful of challenges that I have run into with Puppet. First, Puppet leverages SSL certs based on the FQDN of a machine; this is going to be a challenge even with this integration. Second, Hiera handles default variables for modules differently; you may need to make modules less static by declaring variables inside of classes. Lastly, the Advanced Setting per VM is an extra step during deployment or cloning. Having VM Tools report the instanceUuid automatically would be awesome. Work in progress, but very cool so far!

If you liked this, then Part 2 is going to be even better. Now that we are looking things up in this way, what other VMware oriented Puppet modules can we create? Go to VMworld 2013 and take the EMC Partner Lab at the Hands-on Labs to see what we can do in Part 2!

A PowerCLI vSphere OVF/OVA Import with Advanced Options

Posted by vele1655 | Posted in Automation, Powershell, Scripting | Posted on 24-06-2013


There are plenty of scripts and standard modules out there for deploying OVFs into a vSphere environment, but so far I haven’t run into one that fits the bill completely.  For PowerCLI, the capability to write a full OVF importer has seemingly been hindered by some SOAP serialization issues.  If this were working as expected, then a soup-to-nuts advanced import process could be done outside of OvfTool.

So in the good old sense of getting it done, I decided to go ahead with OvfTool and wrap it with PowerCLI to assist.  OvfTool is an extremely advanced automation tool that provides plenty of capabilities.  Download and check it out here.  The need to wrap it with PowerCLI was really oriented towards wanting to specify Powershell objects and to simplify some of the questions it asks via PowerCLI lookups.  So add a little PowerCLI and see the magic..

 

 

The OvfTool Command from Powershell

Now you may be wondering what a command for OvfTool would look like.  The wrapper simplifies the lookup of the VI path (see the end of the command) and the session ticket used for upload.

& 'C:\Program Files\VMware\VMware OVF Tool\ovftool.exe' --I:targetSessionTicket=cst-VCT-5286bbed-75d7-46e2-b443-3df963683f1a--tp-17-B5-C5-8A-96-14-04-87-03-8A-96-CF-45-3B-59-6D-5F-45-01-38 --diskMode="Thin" --datastore="vnx5700-02-nfs-02" --name="ave-02-proxy01" --noSSLVerify --net:"Isolated Network"="VLAN995" --prop:vami.ip0.EMC_Avamar_Virtual_Machine_Combined_Proxy='172.16.0.213' --prop:vami.DNS.EMC_Avamar_Virtual_Machine_Combined_Proxy='172.16.255.254' --prop:vami.netmask0.EMC_Avamar_Virtual_Machine_Combined_Proxy='255.255.0.0' --prop:vami.gateway.EMC_Avamar_Virtual_Machine_Combined_Proxy='172.16.255.254' https://url.ova "vi://172.16.0.200/DC1/host/Cluster1/bsg05035.lss.emc.com"

 

We also do some checks to ensure OvfTool will be successful; a rough sketch of these follows the list.

  • Only connected to one VC
  • Reverse DNS of vCenter matches name connected with on Connect-VIServer
  • Datastore attached to target ESX Host
  • Virtual Port Group available on target ESX Host
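Here is what that pre-flight logic looks like as a sketch (simplified; the module’s real code differs, and the host/datastore/port group names are just the ones from the example above):

# Exactly one vCenter connection
if (@($global:DefaultVIServers).Count -ne 1) { throw "Connect to exactly one vCenter first" }

$vmhost = Get-VMHost bsg05035.lss.emc.com
$ds     = Get-Datastore vnx5700-02-nfs-02

# Datastore must be attached to the target host
if (-not (Get-VMHost -Name $vmhost.Name -Datastore $ds -ErrorAction SilentlyContinue)) {
    throw "Datastore not attached to target host"
}

# Virtual Port Group must be available on the target host
if (-not (Get-VirtualPortGroup -VMHost $vmhost -Name VLAN995 -ErrorAction SilentlyContinue)) {
    throw "Port group not available on target host"
}

# Session ticket that feeds ovftool's --I:targetSessionTicket option
$si     = Get-View ServiceInstance
$ticket = (Get-View $si.Content.SessionManager).AcquireCloneTicket()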

 

Requirements

  • Written in Powershell v3, PowerCLI 5.1
  • Download the module here
  • Dir *.psm1 | Import-Module -Force
  • Connect-VIServer

 

Parameters for Import-VAppAdvanced

Name = The target name of the Virtual Machine

OvfPath = The UNC/URL (compatible with Get-Item) where the Ovf/Ova exists

Net = The source network name specified as part of the Ovf and the target VLAN (lookup from GUI import)

Datastore = The target Datastore object

VmHost = The target VMHost object

hashProp = A hashtable that includes the options of a VApp and the proper responses, which are normally asked through the GUI import (lookup for import)

 

The PowerCLI Command

Import-VAppAdvanced -Name AVE02-proxy01 -OvfPath https://ave/DPNInstalls/downloads/VMWARE_PROXY/AvamarCombinedProxy-linux-sles11_64-ver.ova -Net '"Isolated Network"="VLAN995"' -Datastore (Get-Datastore vnx5700-02-nfs-02) -VmHost (Get-VMHost bsg05035.lss.emc.com) -hashProp @{"vami.ip0.EMC_Avamar_Virtual_Machine_Combined_Proxy"="172.16.0.213";

"vami.netmask0.EMC_Avamar_Virtual_Machine_Combined_Proxy"="255.255.0.0";

"vami.gateway.EMC_Avamar_Virtual_Machine_Combined_Proxy"="172.16.255.254";

"vami.DNS.EMC_Avamar_Virtual_Machine_Combined_Proxy"="172.16.255.254"}

 

image

 

 

So you may at this point be asking where to get the options specified under Net and hashProp.  One option is to look at an already deployed VM and its properties, along with the OVF descriptor, as shown below.

image

 

There is plenty more possible with OvfTool, so feel free to modify the module as needed.  Enjoy!

Summer 2013 Checkin!

Posted by vele1655 | Posted in Personal | Posted on 17-06-2013


It’s been a bit of a vacation around here at vElemental, but for good reasons. So as summer begins, I naturally check back in (from 30K feet) to provide my latest compass heading for some killer upcoming content before VMworld! First of all, THANK YOU to the vExpert community for adding me to the Club for 2013. I am very much humbled and excited to carry this torch and continue to contribute and engage the VMware community with the blog, in my pre-sales career, and at marketing events.

So I’ve been busy recently; on the personal side we added one to the family (pictured below: Mason Kitson, and my awesome vWife and vMom Erica, Thank You)..

clip_image001 clip_image002

And some new stuff for myself (Clint Kitson @clintonskitson – Virtualization Sales Engineer @ EMC). Full time husband and parent, full time VSE, and hobbyist developer. So where has the internal compass led me recently?

New development platforms and alternative automation technologies. Ding ding.. Ruby and Puppet! Plenty of goodness for VMware, EMC, and Puppet users to come, so stay tuned! As usual, get ready for some outside the box and generally useful stuff relating to the following topics:

  • Puppet and VMware
  • vCenter Orchestrator and Puppet
  • Ruby Transposing for Poshers
  • My Coaches – PowerCLI and Onyx

For all those comments that I haven’t answered, my apologies. Spam comments have been tough to filter, and getting that right is a work in progress. Twitter may be the best way (@clintonskitson) to get a hold of me.

Anything new here? You bet!.. Thanks again and looking forward to seeing you in August at VMworld 2013!

Configuring minimum vCenter privileges for an Avamar Backup Service Account

Posted by vele1655 | Posted in Automation, Avamar, Backup | Posted on 02-04-2013


This comes up in most deployments where a minimum level of access is desired when creating any type of vCenter service account.  Typically the configuration is done manually, but it is a good example of how useful PowerCLI can be, as it is a much better answer for achieving consistency and integrity in a process. 

The script below leverages a pre-built configuration that includes the necessary items to achieve what is currently documented as the minimum access needed for Avamar to perform its normal operations.  We leverage PowerCLI and its built-in basic cmdlets that allow you to easily perform the necessary operations.  See the following screenshot where we are showing the VIRole and VIPrivilege cmdlets. 

image

 

If you want to view the roles that are already created, you can get there from the vCenter home screen by following the below screenshots.

image

 

image

 

Below are the requirements based on the development environment and testing.  Keep in mind that the script should work outside of these versions, but the privileges might have been renamed depending on your versions.  There is a simple hashtable in the script that can be updated to accommodate any PrivilegeGroupId and Privilege Name; see the sketch after the requirements list. 

  • Powershell v2+
  • PowerCLI 5.1
  • vCenter 5.1
  • Download the script here
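As a taste of how the hashtable drives the role update, here is a sketch.  The privilege IDs below are illustrative examples only (the downloadable script carries the full documented Avamar set), and the real script also removes privileges that are not in the table:

# Illustrative privilege table – PrivilegeGroupId maps to privilege names
$hashPrivileges = @{
    "VirtualMachine.Provisioning" = @("DiskRandomAccess","GetVmFiles")
    "Datastore"                   = @("AllocateSpace","Browse")
}
$privs = foreach ($groupId in $hashPrivileges.Keys) {
    foreach ($priv in $hashPrivileges[$groupId]) { Get-VIPrivilege -Id "$groupId.$priv" }
}
Set-VIRole -Role (Get-VIRole -Name "Backup Service") -AddPrivilege $privs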

 

Onwards!

 

The script is very easy to run.   Simply download it and execute it with the following line.  There are a couple of expectations.

  • You have connected to a vCenter server already (Connect-VIServer), and Set-ExecutionPolicy is unrestricted or bypass
  • You have created a new role (New-VIRole -name newrolename)
  • ..or you want to run the script against an existing role and ensure it has no more and no fewer permissions than defined in the script

 

.\UpdateVIRoleBackupServicePrivileges.ps1 -Role (Get-VIRole -Name "Backup Service")

image

 

That’s it!  You can see the role configured below based on the privileges listed above.

 

image

 

The next step is to Add Permission for this role and a specified User/Group at the correct place in the vCenter hierarchy.  This can be at the vCenter/Datacenter/Cluster/Host level.  See the below screenshots where we are adding the Role and a User/Group at the vCenter level, which assigns the “test” user the “Backup Service” role privileges for everything covered under the vCenter instance.

image

 

image

 

image

 

And there you have it!

 

Bonus

In some cases there might be a need to be more granular about which objects the new role is assigned to.  This can cause some complication, so how would you go about reporting on, for example, what datastores a specific user has access to?

Get-Datastore | Select Name,@{n="VIPermission";e={Get-VIPermission -Principal "AWESOMESAUCE\test" -Entity $_}}

image

 

In other cases you may want to automate adding the permissions to the proper vCenter object.  Here you can see that we created a new permission (Role & User specified) and applied it to the vSpecialists datacenter.

New-VIPermission -Role $role -Principal "AWESOMESAUCE\test"  -Entity (Get-Datacenter vSpecialists)

image

 

image