Playing Around with CoreOS

Posted by vele1655 | Posted in CoreOS | Posted on 29-07-2014


 

Over the past month I have been playing around a bit more in the Docker-sphere.  I want to briefly share a few tidbits I found along the way in relation to getting going with CoreOS.  First of all, see William's post here for some basics on how to get CoreOS uploaded to an ESXi host.

http://www.virtuallyghetto.com/tag/coreos

 

I am seeing a lot of interest in this area, so here is some info that may help you get past a few humps!

  • Deploy CoreOS
  • Configure Static Networking
    • Auto Login Kernel Parameter
    • Create static.network File and Restart Networking
    • Change core user password
  • Forget about what you just saw!

 

Deploy CoreOS

If you want a more manual method, you can also convert the VMX and VMDK files to an OVF per the official CoreOS steps.

https://coreos.com/docs/running-coreos/platforms/vmware/

 

The deployment currently (07/29/14) expects that you either:

a) connect a configuration ISO (a bug in the stable release stops this from working), or

b) have a DHCP server set up.

The first option would actually allow the complete configuration of CoreOS and its services.  Let's assume this will be fixed at some point =) But on to the second option.  If you have DHCP configured, then you're good and can continue on to play around with CoreOS.

 

If DHCP is not configured OR you are interested in the cloud-config settings then continue on.

 

Auto Login Kernel Parameter

If you want to simply get access to the CoreOS image and set the networking information manually, you can do the following.

 

1) Open the console to the CoreOS image

2) Reboot the VM

3) When the boot loader prompt comes up, press any key to stop the boot process.  Enter "boot_kernel coreos.autologin" and press Enter.  This makes the console skip the login prompt and drop straight into a shell as the core user. (See here for kernel options: http://coreos.com/docs/running-coreos/bare-metal/booting-with-pxe/)

image

 

4) Done!

image

 

Create static.network file and Restart Networking

Once you get access to the console, the next step is to set the static networking.  This can be done by creating a "static.network" file for the systemd network service.

1) sudo vi /etc/systemd/network/static.network  (or your editor of choice)

[Match]
Name=en*

[Network]
Address=192.168.1.213/24
Gateway=192.168.1.250
DNS=192.168.1.11

2) Restart networking (there looks to be a bug in the restart process, so the interface needs to be taken down for the IP settings to change)
sudo ip link set enp0s17 down
sudo systemctl restart systemd-networkd

 

Now if you have the networking all taken care of, the next step is to SSH in.  By default, CoreOS ships with insecure SSH keys that allow password-less access.  You can either leverage this method as shown in a screenshot from the CoreOS site, or you can set a password for the core user manually.

image

 

If you want to set the password for core manually, follow these steps.

 

Change core user password

Here we create a password hash using the openssl tool (quick, but insecure).  See http://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/ for generating a secure password hash.

1)  Open CoreOS console

2) sudo usermod -p `openssl passwd -1` core (not typically a good idea)

3) sudo cat /etc/shadow (make sure it has a hash)

image

4) Test ssh with a password!

 

Forget about what you just saw!

Ok, so that was all fine and good.  In reality, however, this process is not how CoreOS is meant to be managed.  As an OS built for large scale, it is meant to have hooks inside of it that allow these items to be configured through automation.  So take a look at this alternate procedure.  Currently (07/29/2014) there seems to be a bug where the ISO file will not be picked up by CoreOS.

 

1) Create an ISO file that includes a Cloud-Config (http://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) file

The file would look something like this and would be contained in an ISO at the path /openstack/latest/user_data, with the volume label config-2.

#cloud-config

hostname: testing

write_files:
  - path: /etc/systemd/network/static.network
    permissions: 0644
    content: |
      [Match]
      Name=en*

      [Network]
      Address=192.168.1.214/24
      Gateway=192.168.1.250
      DNS=192.168.1.11

coreos:
  units:
    - name: systemd-networkd.service
      command: start

users:
  - name: core
    passwd: $1$4AZiqdeL$mGemao15wzDxRh2bTg.Zw.

 

2) Deploy CoreOS

3) Attach ISO to the CoreOS VM

4) PowerOn VM
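If you are doing steps 3 and 4 from PowerCLI rather than the vSphere Client, it would look something like the following sketch.  The server, VM, datastore, and ISO path are placeholders; the ISO must contain the cloud-config at /openstack/latest/user_data and carry the volume label config-2, as noted above.

# Attach the cloud-config ISO to the CoreOS VM and power it on
Connect-VIServer -Server vcenter.example.com
$vm = Get-VM -Name coreos-01
New-CDDrive -VM $vm -IsoPath "[datastore1] iso/coreos-config.iso" -StartConnected:$true
Start-VM -VM $vm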

 

If you want to deploy this file to a CoreOS image and run it manually you can always leverage the “sudo coreos-cloudinit -from-file” command.

Accessing Avamar REST API from the vCO REST Plugin

Posted by vele1655 | Posted in Automation, Avamar, vCO | Posted on 18-07-2014


This post is going to continue a recent thread where I show and discuss some of the new API capabilities from EMC Data Protection.  Here we will focus on getting the private cloud REST API working with a workflow engine like VMware’s vCenter Orchestrator.  There is no specific use case presented here other than basic configuration. 

By the end of the post you will see the configuration of the package and a simple call to add a client for backup.  And yes, I will be updating this and adding more functionality shortly.  Look for future posts.

Cool stuff in here?

  • vCO working with JSON
  • Augmenting the vCO generic REST adapter for authentication header use
  • Dynamic presentation layer inputs from actions
  • And of course protection using vCO!

There is some interesting stuff under the covers in the package that is available below.  A lot of the work was making the generic REST plugin's authentication mechanism work with a REST API that leverages sessions.  As an example, you can use the REST plugin to issue generic calls against vCloud Director, but there is a special option that tells the plugin to cache the X-Session identifier, which then gets sent with the headers of every request.  Since I didn't build a full-blown plugin, and the REST plugin does not open its session-style authentication to anything outside of vCD, I had to write a couple of wrappers.  For anyone interested, the following actions show how it works.  Briefly, we save the session ID to a configuration element; before making REST calls we check that we can still successfully request the sessions resource, and if not we issue an authentication call and save the new ID.

All Actions –> invokeAvamarRestGeneric –> invokeAvamarRestBase
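The actions themselves are vCO JavaScript, but the logic is roughly the following PowerShell sketch.  The endpoint paths, token field name, and credential handling below are placeholder assumptions for illustration, not the documented API surface; the X-Concerto-Authorization header name is the one the package uses.

# Sketch of the check-then-authenticate pattern used by invokeAvamarRestBase:
# try the cached token against the sessions resource, re-authenticate on
# failure, then send the token with every call. $script:Token stands in for
# the vCO configuration element that stores the session id.
function Invoke-AvamarRest {
    param([string]$BaseUri, [string]$Path, [pscredential]$Cred)
    $headers = @{ "X-Concerto-Authorization" = $script:Token }
    try {
        Invoke-RestMethod -Uri "$BaseUri/sessions" -Headers $headers | Out-Null
    } catch {
        # Cached token missing or expired: authenticate and cache the new id
        $auth = Invoke-RestMethod -Uri "$BaseUri/login" -Method Post -Credential $Cred  # placeholder endpoint
        $script:Token = $auth.sessionId   # field name is illustrative
        $headers["X-Concerto-Authorization"] = $script:Token
    }
    Invoke-RestMethod -Uri "$BaseUri$Path" -Headers $headers
}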

 

Requirements

  • vCO 5+
  • REST Plugin 1.03+
  • My vCO Avamar REST package here (download and unzip it)

 

 

Here are the workflows that are inside of the package.

image

 

These are the actions that support the workflows.

image

 

So let’s get started.  The first step is to login to your vCO client and switch the views to “Administer” with the top drop down.  There you will see a few icons appear in the top left.  Select the orange packages icon.

image

 

Navigate to where the package is located that you unzipped and select it.

image

 

vCO will prompt you about certificates; import the certificate from the package.

image

 

Here it will prompt you for which actions and workflows to import.  Ensure that you select all workflows and actions by pressing the check box in the bottom left.

image

 

 

Ok, let’s switch out of Administer view back to Run.  The workflows should now be located under the Library drop down under EMC.  Before heading there however, we need to configure the generic REST adapter.  Expand HTTP-REST –> Configuration.  From there right click the Add a REST host workflow and press Start Workflow.

image

 

You will be prompted to enter a name; feel free to enter anything there.  The next field needs to be entered exactly as listed, except for the hostname.

https://your-fqdn-avamar-rest-api:8543/

image

 

Press Next, and leave the proxy settings at No.

image

 

Select the Basic authentication from the drop down.

image

 

Leave the session mode at Shared.  Enter your credentials to the REST API.

image

 

After pressing submit you will likely be prompted about the certificate.  Be sure to press submit to save the certificate from your REST API host.

image

 

Ensure you have “install this certificate” selected.

image

 

If this completes successfully, then you should see a green check box next to “Add a REST host”.

image

 

Ok, next step is to import specific REST operations that we leverage during our actions.  Navigate to EMC –> Protection –> Avamar REST –> Avamar REST Configuration.  Right click the Import REST Operations workflow and select Start workflow.

image

 

The drop down on the left should expand with your REST host that you added previously.  Grab the ID, highlight it on the right side table, and copy it to your clipboard (ctrl-c).  This will save a step coming up.

image

 

Now let the workflow run and you should see it successfully import the operations.

image

 

If you saved the ID in your clipboard then skip this step.  Otherwise, you need to grab the ID of your REST Host.  Go to the Update a REST Host workflow and run it.  When you expand the REST host on the left side, go to the table and select the ID field.  Copy it to your clipboard.  Following this cancel out of the workflow.

image

 

The next step is to switch the screen to “Design” view at the top drop down.  Also press the white Configurations tab.  From there expand Library below and right click Library and press New Folder.  Here you will be creating a configuration element that holds our authorization token from the REST API.  Enter “com.emc.dpad.avamar_rest” as the name, without quotes.

image

 

Right click the folder you created, and press New Element.  Paste the identifier you saved in your clipboard.  This will hold the token for the specific REST API instance.

image

 

When you create the element, a window will come up for configuration.  Press the Attributes tab and press the New Attribute button A+.  Enter “X-Concerto-Authorization”, without quotes.

image

 

It should look like this!

image

 

OK! All set to go, let’s try it out.

 

Select the blue workflow icon in the top left.  Expand EMC –> Protection –> Avamar REST –> Avamar REST Helpers.  From there right click the Get Sessions workflow and run it.  It will ask you to select a REST host, do so and press Select.  Press Submit.

image

 

If all goes well then you should see some JSON appear in the logs window.  If the logs, variables, and general tabs are not visible, they are likely tucked away at the bottom.  Look toward the bottom of the workflow window; there should be a break in the middle of the window itself.  Grab that and drag it up to provide more space for the other tabs.

image

 

If you see the JSON then you’re good!

 

I am assuming you have already configured pools, resource shares, folders, etc.  If so then you can start using the additional workflows and add a client interactively.  Browse to Protection –> Avamar REST –> General.  Right click New Client and press Start Workflow.

 

image

 

This window should appear.  Notice that after you select the REST Host, the drop downs for tenant name and folder name are populated.  This happens because we are leveraging the Presentation capabilities of vCO to execute actions dynamically as part of this window.  So as you specify a REST Host, it goes out and runs an action that returns a list of tenant names.  Once this is selected, it then grabs a list of folders for that tenant.  Cool stuff.  The last field is one I fill in, which is the client name.  In reality, this could come from any number of different places, e.g. Active Directory or DNS.

image

 

And that’s it!

image

 

 

Look for plenty more capabilities; the current state is really just the foundation of what needed to take place to make the generic vCO REST plugin work with the Avamar REST interface.  If you look through the actions and workflows, you should see some good examples of working with JSON, presentation inputs, and customizing the REST interface.

PowerCLI module for Public Cloud Protection

Posted by vele1655 | Posted in Automation, Avamar, Backup, Hybrid Cloud, Powershell, vCloud Director | Posted on 09-07-2014


104 cmdlets here! Add the 102 from the previous post and we have 206 Powershell cmdlets for Protection! 

With the latest release of EMC Data Protection, we have introduced a new plugin for VMware's vCloud Director.  This plugin fills a strategically important gap in today's Public Cloud market: providing embedded protection capabilities from within the Virtual Data Centers of vCD.  The plugin is built for providers that want to deliver protection services to vCD tenants, which is far different from simply protecting vCD.  There are a handful of recent posts below that describe this.  Here we will focus on leveraging the multi-role API (Cloud Admin, Org Admin, vApp Owner) and consumption of protection services using PowerCLI.  This means a consumer of a VMware Public Cloud that is an Org Admin (typically IT) or VApp Owner (Line of Business Admin) can use these cmdlets to manage protection if the provider a) has deployed the Avamar Plugin for vCloud Director and b) has enabled control of backups for tenants through the Backup Configuration of the VDC.

http://velemental.com/2014/07/03/emc-dpad-apis-and-protection-abstraction/

http://velemental.com/2014/07/02/hello-world-software-defined-protection-from-emc/

http://velemental.com/2014/05/06/hybrid-cloud-enabling-technologies/

 

At the heart of the Avamar plugin for vCloud Director is vCloud's Extensibility, which helps simplify and standardize delivering protection services in a Hybrid and Public cloud way.  We will provide some examples, as we go through the use cases later in the post, of how this works and how it helps simplify, future-proof, and standardize automation in the SDDC world.

Another important point here is that I mention only Public Cloud providers for this module.  It is, however, not exclusive to that use case.  If you decide to run vCD in your Private Cloud, or currently do, the introduction of the Avamar Plugin/Extensions for vCloud Director unlocks your ability to leverage these cmdlets in infrastructure you own.

Everything shown here leverages the REST API from vCloud Director, so you can use the cmdlets to learn about our Extension of this API and how you might build to it in other languages.

 

Requirements

  • Powershell v3, PowerCLI 5.5+
  • The PowerCLI module here
    • dir *.psm1 | import-module -force

 

Before moving on, I want to first lay out the Role Based Access Control and how it relates to the plugin.  The three roles (Cloud Admin or Provider, Org Admin or IT, VApp Owner or Line of Business Admin) each have their own targeted use cases.  As an example, the Cloud Admin or Provider can do everything, but is mainly responsible for building, designing, deploying, and operating the protection services.  The Org Admin is responsible for assigning protection and replication policies to VApps and VDCs, possibly modifying policies if allowed, and restores.  This leads to the last role, the VApp Owner, which is responsible only for its own VApp and has the ability to perform backup/restore and set Virtual Disk exclusion policies.

 

Notice how we have three columns in green.  This is a good summarization of which cmdlets are useful to which role.  Notice how Org Admin is always a subset of Cloud Admin, and VApp Owner a subset of Org Admin.  To the right, the action or verb is shown for the cmdlets.  A C, O, or A is filled in where a cmdlet exists and indicates which role the cmdlet adheres to.

C = Cloud Admin, O = Org Admin, A = VApp Owner

image

image

 

There are plenty of great use cases that can be covered with this plugin.  Again, the purpose isn't necessarily that this would be the primary means for a provider to manage the plugin.  It does, however, serve as a good tool to help understand how one would automate the same processes in other languages and automation tools, with its (-verbose) flag on each cmdlet that demonstrates REST calls and responses.

In my opinion, some of the coolest stuff here has to do with the roles other than Cloud Admin.  That is: as a consumer of Public Cloud, what functionality can I take advantage of?  Typically in VMware Public Clouds this functionality would have been delivered through a Cloud Portal, whereas here it is being directly exposed to tenants through vCD.

 

 

First things first!  Load PowerCLI 5.5+, import the module, and connect to a vCD Organization as a VApp Owner or Org Admin.  A VApp Owner (must be set per VApp) is the most restrictive user you can demonstrate these capabilities with.

image
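In script form, the connection looks something like this (server, org, user, and module file names are placeholders):

# Import the protection module and connect to the vCD org as a tenant user
dir *.psm1 | Import-Module -Force
Connect-CIServer -Server vcd.example.com -Org org01 -User orgadmin01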

 

Use Cases

  • VApp Owner – On-Demand Backup
  • VApp Owner – VApp Backup Stats
  • VApp Owner – On-Demand Restore
  • VApp Owner – Modify Virtual Disk exclusion list
  • Org Admin – Modify policies
  • Org Admin – Backup Repository and Activities

 

Before we begin, however, note that all aspects of RBAC are off by default for Org Admins and VApp Owners.  Has the provider allowed this feature for your backup services?  The provider or Cloud Admin can check the Backup Configuration settings with the following command (as a Cloud Admin).

Get-OrgVdc mgc_vdc1 | Get-BackupConfiguration | %{ $_.VappOwnerAuthorizations }

image

 

This configuration can be modified using the Update-BackupConfiguration cmdlet.

$BC = Get-OrgVdc mgc_vdc1 | Get-BackupConfiguration

$BC.VAppOwnerAuthorizations.enableAdhocBackup = "true"

$BC.VAppOwnerAuthorizations.enableAdhocRestore = "true"

$BC | Update-BackupConfiguration

image

 

This can be repeated for the Org Admin; notice these authorizations.

image
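A hypothetical sketch of that repetition follows.  The OrgAdminAuthorizations property name is my assumption, mirroring VAppOwnerAuthorizations; check the Get-BackupConfiguration output shown above for the actual field names.

# Assumed property name mirroring VAppOwnerAuthorizations; verify first
$BC = Get-OrgVdc mgc_vdc1 | Get-BackupConfiguration
$BC.OrgAdminAuthorizations.enableAdhocBackup = "true"
$BC | Update-BackupConfiguration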

 

VApp Owner – On-Demand Backup

The first use case we can show is based on a VApp Owner, who is able to perform backups and restores On-Demand.

 

1) Start On-Demand/Adhoc Backup

The first step here is to select the proper VApp.  Next we leverage the Start-CIVAppBackup command to begin the backup.  Notice how we also specified the (-verbose) parameter here, which allows us to view the REST API call to the vCD Extensions.  Notice the use of extensibility: we are issuing a request to a VApp HREF, followed by "/backups".  Talk about simple, right?  One object model, one API, and authorization is handled within vCD.

The other important thing to note with this command is that we didn’t specify any policies, retention, datasets, or anything else.  Since we are able to apply default policies to the Virtual Data Centers and explicitly to VApps, the process inherently will use these and API calls can be super simple.

Get-CIVApp linux-01_new | Start-CIVAppBackup

image
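Based on that -verbose output, here is a rough sketch of the same call issued directly against the vCD extensibility API.  The session token is a placeholder (PowerCLI manages it for you after Connect-CIServer), and the header set is simplified.

# POST to the vApp's own HREF plus "/backups" via vCD extensibility (sketch)
$vapp = Get-CIVApp linux-01_new
$headers = @{ "x-vcloud-authorization" = "<session-token>" }   # placeholder
Invoke-RestMethod -Method Post -Uri ($vapp.Href + "/backups") -Headers $headers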

 

VApp Owner/Org Admin – Backup Stats

In terms of integration, all activities within the plugin are sent to the RabbitMQ message bus that vCD is configured with, under a special exchange.  This allows the provider to integrate for chargeback and other information.  Similar details can also be explicitly requested per backup by the tenant.

 

1) Get latest CIVApp Backup Stats

Here you can see, per VApp, what the latest stats are in terms of how many backups exist, and the latest figures for how much is being protected along with the unique space used on the Protection Storage.

Get-CIVApp linux-01_new | Get-CIVAppBackupStat

image

 

VApp Owner/Org Admin – On-Demand Restore

1) List Backups

Get-CIVApp linux-01_new | Get-CIVAppBackup

 image

 

2) Restore In Place

As a VApp Owner you have no access to create new VApps in a VDC from vCloud Director.  For this reason, there is no ability to perform out of place restores or restore to new VApps.  All restores for the VApp Owner must be done in place.

Get-CIVApp linux-01_new | Get-CIVAppBackup | select -first 1 | Start-CIVAppRestore -inplace

image

image

Currently there is no FLR based restore capability for the VMs. 

 

VApp Owner – Modify Virtual Disk Exclusion list

This is the only policy a VApp Owner is able to modify.  It can be used in cases where only specific Virtual Disks should be protected, such as protecting the OS drives while excluding larger database drives.  In order to set the exclusion, you must first look at the identifiers that vCloud has set on the Virtual Disks per VM.

 

1) Get Virtual Disk addresses

Get-CIVApp linux-01_new | Get-CIVM

Get-CIVApp linux-01_new | Get-CIVMCustom | Get-CIVMCustomHardDisk (we use Custom here since the PowerCLI module for vCD does not allow VApp Owners to see VM details through Get-CIVM)

Get-CIVApp linux-01_new | Get-CIVMCustom | Get-CIVMCustomHardDisk | %{ $_.diskexclude}

image 

Take a look at the disks and their associated addresses. 

 

2) Submit Exclusion List for the VApp

To build the exclusion list, we look at the existing VMs in the VApp.  We then apply an inclusion filter, and whichever VMs (or disks) are not included are set for exclusion.  In the following case we want to exclude the first disk of the "mgc-03" VM, so we create a JSON statement as follows.  The 0:2001 represents the addressOfParent followed by the addressOnParent.

Get-CIVApp linux-01_new | New-CIVAppBackupExcludeList -jsonVmFilterInclude '{"mgc-03":{"Disk":["0:2001"]}}'

Get-CIVApp linux-01_new | Get-CIVAppBackupExcludeList | %{ $_.vmexclude.diskexclude }

image
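If you would rather build the include filter programmatically than hand-write the JSON string, a hashtable works just as well (the disk address comes from step 1):

# Build the VM include filter as a hashtable and convert it to JSON
$filter = @{ "mgc-03" = @{ Disk = @("0:2001") } } | ConvertTo-Json -Compress -Depth 3
Get-CIVApp linux-01_new | New-CIVAppBackupExcludeList -jsonVmFilterInclude $filter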

 

That’s it for now!  VMware Public Cloud protection—check!

 

PowerCLI module for VMware vSphere Protection

Posted by vele1655 | Posted in Automation, Avamar, Backup, Powershell, Scripting | Posted on 08-07-2014


We’ve covered some basic use cases prior with an overview of using the REST API and some more functional examples of general file system agents.  We can now take the next step and show how basic VMware vCenter protection is configured from the REST API.

The following use cases are shown; some are done purely with Powershell (Datastores) and others with the addition of PowerCLI (Datastores and VMs).

  • Add VMware vCenter, VM Proxy, and Datastores
  • Start Adhoc Backup
  • Configure VM Policies and Add Clients
  • VM Restores
  • FLR Restores

 

Required

  • Powershell v3, PowerCLI 5.5+

  • Download the module from this post

 

Add VMware vCenter, VM Proxy, and Datastores

 

1) Login, select the tenant, and select the folder

$tenant = Get-Tenant tenant01

$folder = $tenant | Get-AvamarFolder -name folder01

image

 

2) Add VMware vCenter as a Hypervisor Manager to a Folder

The next step is to add the vCenter to the folder.  The vCenter instance can leverage credentials that have limits around folders and other things inherent to vSphere's hierarchy and RBAC.  The specific minimum access requirements, aside from object access, are available in the Avamar Admin guide.

$folder | New-HypervisorManager -name master-vc.brsvlab.local -hostname master-vc.brsvlab.local -username root -password 'Password123!'

image

 

3) Add VMware Proxy Client

The next step is to add a VM Proxy Client that will handle the data plane when VMs must be protected.  This VM is deployed as an OVF and is typically hosted per cluster in order to get locality to Datastores.  At this point the proxy can already be deployed, but it should not be configured yet.

Notice how we use the resource share item ($folder.resourceShare or (Get-AvamarFolder -id $folder.id).resourceShare for a refreshed call) and pull the DataProtectionResource to fill this parameter based on the DPR instance name.  This DPR instance will be linked to this VM Proxy Client.

 

$folder | New-ClientVMProxy -name avproxy-03.brsvlab.local -dataProtectionResource ($folder.resourceShare | Get-DataProtectionResource -name ave-03)

image

 

4) Register VMware Proxy VM guest as a VMware Proxy Client

Open the console to the VM Proxy that you deployed.  You should see a menu similar to the following when it is deployed fresh from an OVF.  Enter the FQDN of the DPR instance.  Following this, enter the folder name (folder01).  This will associate the guest with the VM Proxy Client that we previously created.

image

 

5) Perform Explicit GET on VM Proxy Client

Since we synchronized an option from behind the API, we must do an explicit GET on the Client to update it.

Get-Client -id ($folder | Get-Client -name avproxy-03.brsvlab.local).id

image

 

6) Add VMware Datastores to Proxy

The next step is to associate Datastores to VM Proxy Clients.  This allows manual control over which, or how many proxies can perform backups across which Datastores.  Notice how we get the Datastore list from the HypervisorManager.

$datastore = $folder | Get-HypervisorManager -name master-vc.brsvlab.local | %{ $_.vmDatastore }

$folder | Get-Client -name avproxy-03.brsvlab.local | Update-VMProxyDatastore -hypervisormanager ($folder | Get-HypervisorManager -name master-vc.brsvlab.local) -adddatastore $datastore

image

 

In my opinion, you would likely only add Datastores that have more than one host, since those represent Datastores shared among ESX Hosts.  Other Datastores are likely local and probably should not have VMs that are protected.  This is not a rule, however!

$datastore = $folder | Get-HypervisorManager -name master-vc.brsvlab.local | %{ $_.vmDatastore } | where {$_.hosts.count -gt 1}
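Feeding that filtered list into the association call from step 6 looks like this:

# Associate only shared Datastores (more than one host) with the proxy client
$hvm = $folder | Get-HypervisorManager -name master-vc.brsvlab.local
$shared = $hvm | %{ $_.vmDatastore } | where { $_.hosts.count -gt 1 }
$folder | Get-Client -name avproxy-03.brsvlab.local | Update-VMProxyDatastore -hypervisormanager $hvm -adddatastore $shared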

 

Let’s change this up a bit.  So far we have focused on showing how Powershell, along with this module and the Avamar REST API, can manage protection.  What if you wanted to leverage a native VMware automation tool as well, like PowerCLI?  We will modify the previous step a bit by connecting PowerCLI to the vCenter instance first, and then using the Datastore list presented from there to update the VM Proxy Client Datastore associations.  What’s the value here?  Not much in that use case, but wait for later when we work with VMs, Config Properties, and Tags!

 

This is optional, but if you want, load PowerCLI 5.5+ and import this module.

Connect-VIServer -server master-vc.brsvlab.local

$folder | Get-Client -name avproxy-03.brsvlab.local | Update-VMProxyDatastore -hypervisormanager ($folder | Get-HypervisorManager -name master-vc.brsvlab.local) -adddatastore (Get-Datastore nfs-01-ds)

image

 

Start Adhoc Backup

1) Set Hypervisor Manager Folder for placement of configuration objects

$hvmfolder = $folder | Get-HypervisorManagerFolder -path master-vc.brsvlab.local/VirtualMachines

image

 

2) Add Retention

This is a repeat from the last post, but let’s create a new retention for 60 days and place the object in the Hypervisor Manager folder.

$hvmfolder | New-Retention -name retention03 -retentionduration (New-ObjectAvamar -type retentionduration -property @{unit="days";duration=60})

image

 

3) Add VMware Dataset

Here we create a Dataset that has default settings for VMware Linux and Windows images.  We also specify the Data Domain index to use.

$hvmfolder | New-DatasetVMware -name vmdataset01 -DataDomainIndex 1

image

 

4) Add VM as Client VM

In this step we are using PowerCLI again to get the proper VM.  Here we specify the VM object itself; it would be just as valid to skip PowerCLI and specify the VM by vSphere folder path or by its instanceUuid.

$hvmfolder | New-ClientVM -vm (Get-VM -name lguest-01)

image

 

5) Start Adhoc VM Client Backup

Get-VM lguest-01 | Get-VMClient -folder $hvmfolder | Start-ClientVMBackup -retention ($hvmfolder | Get-Retention -name retention03) -dataset ($hvmfolder | Get-Dataset -name vmdataset01)

image

 

Configure VM Policies and Add Clients

1) Add Schedule

$hvmfolder| New-Schedule -name schedule01 -dailySchedule (new-objectAvamar -type dailyschedule -property @{timeOfDays=@("00:00:00.000-07:00","01:00:00.000-07:00");maxRunHour=1})

image

 

2) Add VMware Policy

$hvmfolder | New-Policy -name policy01 -dataset ($hvmfolder | Get-Dataset -name vmdataset01) -schedule ($hvmfolder | Get-Schedule -name schedule01) -retention ($hvmfolder | Get-Retention -name retention03)

image

 

3) Add VM Client to Policy

$hvmfolder | Get-Policy -name policy01 | New-PolicyClient -Reference (Get-VM lguest-01 | Get-VMClient -folder $hvmfolder)

image

 

Virtual Machine Restores

1) Select Backup

$clientbackup = $hvmfolder | Get-Client -name lguest-01 | Get-ClientBackup -name MOD-1404792885933

image

 

2) Restore VM

The following cmdlet will restore the VM in place.

$clientbackup | Start-ClientVMRestore -inplace

image

 

The other option here would be to do a restore out of place.

$ClientBackup | Start-ClientVMRestore -outofplace -name lguest-01-restore -hypervisorManager ($folder | Get-HypervisorManager -name master-vc.brsvlab.local) -datacenter brsvlab-datacenter -datastore nfs-01-ds -esxhost esx01.brsvlab.local -vmFolder test

image

 

3) FLR restores

This one is very cool.  You can perform granular restores using any client backup, including FLR for VMs.

The first step is to select the path.  Here we will browse the backup to the /etc directory.  We can select any of the files and restore them to a specified location.  The restore will take place via the VM Tools of the VM guest operating system without leveraging any networking, purely through the Hypervisor to the Guest.

$clientbackup | New-ClientBackupBrowse -path '/etc' | ft * -autosize

image

 

$clientbackup | Start-ClientVMRestoreFLR -backupSource @("/etc/slp.spi") -destinationPath @("/tmp") -username root -password changeme

image

 

That’s it for the basic VM capabilities in the REST API.  Look for follow on posts that get more advanced and specific for policy management.  There will also be updates to the PowerCLI integration.

Powershell module for EMC Protection REST API

Posted by vele1655 | Posted in Automation, Avamar, Backup, Powershell, Scripting | Posted on 07-07-2014


102 Powershell cmdlets here! Exciting stuff, what is being discussed here is hugely powerful in the world of Data Protection!  Is Powershell the best choice to implement this new API? Maybe not, but it can serve as a good example of how to use the API.  Please continue on if you’re interested in the direction Data Protection should be going.

In the last post here, we discussed a protection REST API with abstraction built in, and I promised more examples of how you might use it.  So here you have it: I am now releasing to the wild 102 Powershell cmdlets that implement the majority of the REST API for protection.  There were two APIs discussed; here we focus purely on the private cloud REST API, which can serve the majority of use cases (the other API is focused on Public Cloud/vCloud Director use cases and will have a separate post).

The rest of the post gives examples of how to use the API for the first time, along with more useful examples of policy, backup, and restore for generic file system agents.  See the next post for specifics around VMware.  This is more of a tutorial, so it can look complex going through it, but it should make more sense once you get to the end.

Requirements

  • Avamar 7.1+
  • Avamar REST API installed (simple RPM)
  • Powershell v3 and module here
    • dir *.psm1 | import-module -force

 

General Features of the Cmdlets

  • Most cmdlets use REST calls and responses natively to help give examples of how to implement REST in other languages/automation tools/orchestrators
  • All cmdlets expose the underlying REST API calls, which can be viewed with the -verbose flag (see the sketch after this list)
  • Most cmdlets have a -param flag which enables modification of the raw parameter object before it is passed to the cmdlet
  • Native REST errors are passed back to the cmdlets
  • Easily modifiable objects are stored in XML and called using New-ObjectAvamar
  • Explicit GET requests drive requests or updates to the abstraction layer and can be done with (Get-examplecommand -id ($object.id))
  • Asynchronous capabilities are available on many cmdlets; see Get-Command for the parameters (-runAsync:$true)
  • Examples will be built into the cmdlet help; for now they are in the module
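To make the -verbose and -runAsync flags concrete, here is a minimal sketch using a cmdlet that appears later in this post.  Which cmdlets accept -runAsync varies, so treat the second call as illustrative.

# Dump the raw REST request/response while creating a resource pool
New-AvamarResourcePool -name pool02 -verbose

# Illustrative async variant: returns before the operation completes
New-AvamarResourcePool -name pool03 -runAsync:$true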

 

Use cases covered in this post are below.  By the end you should have a basic idea of how the tenant and provider abstraction is used and how some basic backup/recovery operations occur.

  • Creating a Reusable Resource Pool and Data Protection Resources
    • -verbose example
  • Creating a Tenant, Resource Share, and Folder from Resource Pool
  • Create Dataset, Retention, and Schedule
    • XSD lookup example
    • Data Domain example
  • Generic File System Agent – Backup, Restore, Policies
    • Windows FS Agent

 

Here is a list of the cmdlets that are inside the module along with their respective actions.  As an example, with AvamarFolder, there are associated Get/New/Remove/Update actions that can be done.

image

image

image

 

Use Case – Creating a Reusable Resource Pool and Data Protection Resources

Before showing the examples, it is important to describe the object model for tenancy that is introduced in the REST API.  When using the REST API from scratch, the first thing that needs to happen is that a Tenant is created.  This tenant serves as an object in the logical tenancy layer that contains resource shares.  These resource shares sit on top of resource pools, which are aggregations of protection instances.  The resource share becomes a tenant's slice, with resource limitations drawn from the shared resource pool.  On top of the resource share is the folder, which contains the actual policies, clients, and other protection instance related items.

image
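The use cases that follow build this hierarchy one layer at a time.  Condensed into a single sketch (names and values are illustrative, matching the commands used later in this post), it reads bottom-up like this:

# Tenancy model, bottom-up: protection instance -> resource pool ->
# tenant + resource share -> folder (policies and clients live in the folder)
$pool   = New-AvamarResourcePool -name pool01
$dpr    = $pool | New-DataProtectionResource -name ave-03 -user MCUser -password 'MCUser1' -hostname ave-03.brsvlab.local
$tenant = New-Tenant -name tenant01
$share  = $pool | New-ResourceShare -name rs01 -dataprotectionresource @($dpr) -tenant $tenant -capacityInMB (1TB/1MB)
$folder = $share | New-AvamarFolder -name folder01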

 

1) Login

Connect-Avamar -server brm-01.brsvlab.local (or wherever the REST API RPM is installed)

image

 

As an example, there is also a CLI, implemented in Perl, with a debug mode that shows REST calls and can also help in understanding how the interface works.

image

 

2) Create Resource Pool

New-AvamarResourcePool -name pool02 -verbose

image

Notice how we used the -verbose flag here.  In this screenshot you can see the top portion, which is the parameters received (name) as a PsObject; the second portion shows the actual API call in Json along with the response code.

The following shows the actual response.  The top portion is from the WebClient call, and the bottom section is the raw Json response.  If you follow the verbose output further, you will see that we call back to the API with an explicit GET to ensure we get the latest status and return it on cmdlet completion.

image
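That explicit GET is the same pattern called out in the feature list above; you can issue it yourself at any time to refresh an object:

# Refresh an object from the API by id (explicit GET)
$pool = Get-AvamarResourcePool -name pool02
Get-AvamarResourcePool -id $pool.id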

 

 

3) Add Data Protection Resource

$dpr1 = $pool | New-DataProtectionResource -name ave-03 -user MCUser -password 'MCUser1' -hostname ave-03.brsvlab.local

image

 

Use Case – Creating a Tenant, Resource Share, and Folder from Resource Pool

The next step in getting the REST interface configured is to create a folder.  This requires that you create a tenant and a resource share from a resource pool.  For a Private Cloud deployment, or situations where tenancy and restriction of resources is not a concern, there may be only one tenant created that serves all requests.  In other cases, the REST API's abstraction layer can be fully leveraged to dedicate resources, and even be a target for full Public Cloud service providers to perform large-scale BaaS operations.

1) Create Tenant

New-Tenant -name example01

image

Store the tenant in a variable ($tenant = Get-Tenant -name example01).  This could also have been done in one step by running ($tenant = New-Tenant -name example01).

 

2) Add Resource Share

Get-AvamarResourcePool -name rp01 | New-ResourceShare -name rs01 -dataprotectionresource @(Get-AvamarResourcePool -name rp01 | Get-DataProtectionResource -name ave-03) -tenant (Get-Tenant -name tenant01) -capacityInMB (1TB/1MB)

image

 

3) Add Folder

Get-AvamarResourcePool -name rp01 | Get-ResourceShare -name rs02 | New-AvamarFolder -name rs02_folder01

image

 

Great, if you have a folder created then you are ready to proceed on to the next use cases where we actually work with Clients.  The clients reference the individual objects that hold backups for actual OS guests such as, but not limited to, a Windows System with SQL, or a VM.

 

Use Case – Create Dataset, Retention, and Schedule

These objects can be created in any folder under the tenant.  They are used when doing adhoc and scheduled backups.  The dataset is likely the least descriptive of the three.  It is used to determine specific options when backing up things like Oracle, SQL, File Systems, and VM images.  In addition it currently contains options specifying which Data Domain platform to target during backups.

1) Create Dataset

Get-Tenant -name tenant01 | Get-AvamarFolder -name folder01 | New-Dataset -name dataset11 -datasetitem (New-ObjectAvamar datasetitem -property @{name="datasetItem01";datasetInclude=(New-ObjectAvamar datasetinclude -Property @{name="include";value="ALL"});plugin=(Get-ClientPlugin -name "Windows File System");datasetTarget=@(@{name="target";value="ALL"})})

image

 

Notice how we have a Get-ClientPlugin command embedded, which chooses the Windows File System.  You can select any plugin; use the Get-ClientPlugin command to view the names.

image

 

How about Data Domain?  Notice we take the same command as before and add a datasetOption for ddr and ddr-index.  The ddr-index references which Data Domain device to use.

Get-Tenant -name tenant01 | Get-AvamarFolder -name folder01 | New-Dataset -name dataset11-dd -datasetitem (New-ObjectAvamar datasetitem -property @{name="datasetItem01";datasetInclude=(New-ObjectAvamar datasetinclude -Property @{name="test";value="test2"});plugin=(Get-ClientPlugin -name "Windows File System");datasetOption=@(@{name="ddr";value="true"},@{name="ddr-index";value=1});datasetTarget=@(@{name="target";value="ALL"})})

image

 

2) Create Retention

The retention can take many forms.  The following command gives an example of the retention object we pass by default, which is set to never expire.

New-ObjectAvamar -type retention

image

 

As we mentioned in the intro, the object can be passed specifically to the cmdlet to allow as much flexibility as needed to make requests against the REST API.  In this case, you may want to refer to the REST API docs to see about sending a different kind of retention.  The following URL will open the online REST docs.

https://your-rest-api-server:8543/rest-api-doc/1.0/index.html

image

 

Type retention, and bring up the RetentionType.

image

 

This should bring up a page that represents the XSD schema for the type.  There are two main sections, the attributes and the elements.  Those familiar with XML can skip forward.  An Attribute represents something that will show up in an XML element, as in <xml attribute="test">.  The table shows whether these items are required and whether they can be changed.

image

 

The elements represent <xml><element/></xml> or <xml><element></element></xml>.  These elements can have sub-elements and attributes, which would be shown by clicking on the link under the "type" column.

image

 

Ok, so why does this matter?  In this case we want to create a retention type other than the default.  First, look for the modifiable columns, since these are the only ones we would specify to the interface.  Next we see the Occurrence column, which takes 4 settings: 0:unbounded == optional, array; 1:unbounded == required, array; 0:1 == optional, single; 1:1 == required, single.  I can see from these attributes that I need to modify two, the Retention Type (see its description for valid values) and the Retention Duration (press the RetentionDurationType link).

image

Now we can see that we need to formulate a RetentionDurationType object.  So here we do a 60-day retention based on the descriptions and types (string, int) listed for the elements.

image

 

Get-Tenant -name tenant01 | Get-AvamarFolder -name folder01 | New-Retention -name retention04 -retentionduration (New-ObjectAvamar -type retentionduration -property @{unit="days";duration=60}) -retentionType Computed

image

 

3) Create Schedule

Get-Tenant -name tenant01 | Get-AvamarFolder -name folder01 | New-Schedule -name schedule01 -dailySchedule (New-ObjectAvamar -type dailyschedule -property @{timeOfDays=@("00:00:00.000-07:00","01:00:00.000-07:00");maxRunHour=1})

image

 

Great, so now you have a folder created for the tenant with dataset, retention, and schedule in it.  You’re ready to move on!

Use Case – Generic File System Agent – Backup, Restore, Policies

The first step here is to deploy an agent to a guest that you would like to add.  You can see in this case that we have an Avamar agent running  in a Windows guest already.

image

 

1) Add Client to Folder

Get-Tenant -name tenant01 | Get-AvamarFolder -name folder01 | New-Client -name 192.168.1.250 (the name should be an FQDN but can also be an IP)

image

 

Go ahead and store the folder variable for reuse later on.

$folder = Get-Tenant -name tenant01 | Get-AvamarFolder -name folder01

 

Following this you need to invite the client to activate itself.  If doing this multiple times you might need to restart the Backup Agent service followed by an invite.

$folder | Get-Client -name 192.168.1.250 | Update-ClientInvite

 

2) On-Demand Backup (replace the path, c:\wmg.txt, and possibly the plugin, with ones appropriate for your client)

$folder | Get-Client -name 192.168.1.250  | Start-ClientBackup -dataSource @{"source"=@("c:\wmg.txt")} -plugin (Get-ClientPlugin -name "Windows File System") -retention ($folder | get-retention -name retention05)

image

image

 

If you looked at the BackupRequestType here, you would see that the dataSource element has an option to include either source or dataset.  We will include a previous dataset which will tell it to use the Data Domain that we configured (dataset11-dd).  This will also mean that we cannot specify the source element which includes the specific file we targeted.  Instead the dataset must include the include, exclude, and target options to specify this file.  You will notice this backup takes much longer since we are backing up the whole guest.

$folder | Get-Client -name 192.168.1.250  | Start-ClientBackup -dataSource @{dataset=($folder | Get-Dataset -name dataset11-dd | select href)} -plugin (Get-ClientPlugin -name "Windows File System") -retention ($folder | get-retention -name retention05)

image

 

 

3) On-Demand Restore

Let’s first browse the backups.

$folder | Get-Client -name 192.168.1.250 | Get-ClientBackup

image

 

We can then run the same command with a -name parameter to specify a filter for the specific backup.  You can of course use a where clause or any other filtering method here.
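For example, the equivalent where-clause filter (same backup name as below):

# Same selection via a where clause instead of the -name parameter
$folder | Get-Client -name 192.168.1.250 | Get-ClientBackup | where { $_.name -eq "MOD-1404755705937" }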

$clientbackup = $folder | Get-Client -name 192.168.1.250 | Get-ClientBackup -name MOD-1404755705937

$clientbackup | Start-ClientRestore -destClient @{client=$clientbackup.client} -plugin $clientbackup.plugin -destinationPath "c:\restore\"

image

$clientbackup | Start-ClientRestore -destClient @{client=$clientbackup.client} -plugin $clientbackup.plugin -backupSource "c:\wmg.txt" -destinationPath "c:\restore\"

 

4) Policy Management

A policy is the combination of dataset, schedule and retention.  It also includes specific clients that are to be backed up using these settings.

$folder | New-Policy -name policy11 -dataset ($folder | Get-Dataset -name dataset11-dd) -schedule ($folder | Get-Schedule -name schedule01) -retention ($folder | Get-Retention -name retention05)

image

 

$folder | Get-Policy -name policy11 | New-PolicyClient -reference ($folder | Get-Client -name 192.168.1.250)

image

 

$folder | Get-Policy -name policy11 | Start-PolicyBackup

image

 

 

That’s it for this post.  Abstraction: check.  Scale: check.  Tenancy: check.  Cloud-ready: check.  The next post will show the VMware use case.

EMC DPAD APIs and Protection Abstraction

Posted by vele1655 | Posted in Automation, Avamar | Posted on 03-07-2014


If you followed the blog post from yesterday, then it is probably clear that I have some technical stuff coming up.  I described yesterday what the basics are, but let’s put a bit more meat to that discussion.  Following this post I will start dishing out the goods, I promise.

http://velemental.com/2014/07/02/hello-world-software-defined-protection-from-emc/

The general EMC consensus (modified by me) is that software-defined can be viewed as abstraction, decoupling, running as software, APIs, and leveraging extensibility.  These are all key to the design centers of the APIs (for public and private cloud protection) that I discussed yesterday.

Simple and sustainable automation has been a very difficult thing to accomplish without locking down a stack and limiting customer choice.  VMware, EMC, and the industry have recognized that sustainable automation and portability of capabilities can really only be achieved by going forward with the software-defined vision.  Sustainable?  Think of it as placing the responsibility for automation on the product vendors.  We hope for more standards through extensibility.

Here you can see the complexity of a typical stack, where features for storage, networking, compute, and protection are typically delivered as data services to the application layer at the top. 

image

 

The solution to this complexity is based on our definition of software-defined.  If you can abstract, decouple, and embed the services through extensibility at the Virtual Data Center layer you create portability across clouds and create a more sustainable platform.

image

 

It actually all boils down to Extensibility and Abstraction. 

SDDC requires Extensibility.  Extensibility requires Abstraction. 

I mentioned yesterday a bit about the east-west notion of integration.  Things like VVOLs can be thought of as north-south because they sit in the data path; they carry a VM level or Virtual Disk abstraction into storage arrays from North to South.  There are vendors out there (Tintri and others) providing a similar level of abstraction from South to North with VM level management of storage.  Which way is right?  Well, if you're a dedicated VMware shop then VVOLs is architecturally the right way to go; if you're an open shop and the vendor is committed to performing similar management for other hypervisors, then maybe the South-North approach is ok.  (Extensibility outside of VVOLs may actually also help South-North players in the data path, but that's a long future story.)

Back to protection.  In this case it is out of band and considered an east-to-west pull (similar to south-to-north as described).  The protection requests are all initiated through the protection solution (even if there is a vSphere Web Client plugin).  The protection solution then initiates requests with vSphere.  But what's so bad about that?  It's a separate API.  It's the wild west; each API is different.  Authorization to perform operations and allocate resources is a mess.  But this is how it's done for everyone integrating with vSphere today (07/03/2014)!  The goal here is to move the logic inside of the product versus having an orchestration tool perform workflows to glue things together.  Minimize the orchestration, maximize the product automation.

So what would a proper North-South extensible integration look like? VMware has done it right with vCloud Director through the SDDC based abstraction for Virtual Data Centers.  With this you have a logical tenant level abstraction that represents an object in their API model that holds policies, resources, resource limits, authentication, and authorization.

There are three major layers depicted below; we are stacking the vSphere and vCloud object models together.  First is the physical layer, where ESX Hosts are abstracted from hardware.  The next layer is a logical abstraction (starting with Virtual Center) where hosts get aggregated into a Cluster.  These clusters are resource pools or can contain resource pools.  The next level is the Provider VDC, which represents "hardware" as the first vCloud resource layer.  The final layer is the Organizational Virtual Data Center.

image

 

Now if my goal is to add extra services to my Virtual Data Center, then the next thing to figure out is how to embed data services in the VDC.  Take a look at this view of VDCs in vCloud.  At the bottom are the data services that are available to the VDCs.  These data services all actually represent Provider VDC (physical) based resources that the Organizational VDCs share.

image

 

And here is a view of an Organizational VDC that has embedded storage and protection services.  So how do we embed these protection services in the VDC?  Extensibility.

image

 

We are able to add custom capabilities at a VDC level since 1) we abstracted protection from the physical up to a VDC level resource, and 2) we added protection extensions to the Virtual Data Center.  Take a look at the following depiction.  On the right side you have the new protection services.  At the physical level, we show EMC Avamar for backup/recovery.  Just like the vCloud world's 2nd layer (logical, where a Provider VDC maps to a Resource Pool/Cluster), we map a Backup Appliance to a physical/virtual instance.  The Backup Appliance represents the abstraction of protection services available to any VDC.  We then add another layer of abstraction above this for the tenants or VDCs, which is the Backup Repository.

image

 

This repository holds all of the following: policies, resources, resource limits, and authorization (everything except authentication).  This is another bonus of Extensibility.  The API calls happen against a single API for the VDC, which means a shared authentication model for extra VDC services.  First-level authorization occurs based on a user's access to a VDC or an object in a VDC, and the rest happens at granular levels through our extensibility policies.

How cool! We abstracted and extended. Who wants extensibility outside of vCloud?

 

A future post will cover how to use the APIs.

What if VMware isn’t the center of your world?  What does an object model look like that is focused on protection first?  Let’s start with the bottom of the image below.  First you have a physical/virtual protection instance.  This instance is abstracted into a Data Protection Resource (think ViPR's virtual array).  This DPR is then added to a resource pool, which allows different allocation strategies (think vSphere DRS) across physical/virtual protection instances.  Above this you enter the logical abstraction for the tenant level, which is where the resource share lives.  This share then has a folder abstraction where policies, schedules, retentions, and clients reside.

So where does extensibility come into play here?  This view of the world represents one where the folder has items that are controlled by the protection abstraction and can be easily extended into any cloud stack.  Think OpenStack and any other product that wants to embed advanced protection in their solution.

image

 

 

What do you think?  Pretty cool stuff!

Hello World! Software Defined Protection from EMC.

Posted by vele1655 | Posted in Automation, Avamar, Backup, vCloud Director | Posted on 02-07-2014


The software-defined world is upon us, and if you’re not pursuing products in the VMware eco-system that take advantage of this then you are likely the odd-ball!

One of the tough things about my job at EMC is being a part of the cutting edge stuff, playing with it, developing cool stuff for it, and not being able to say a word about it.  The most recent silent birth comes from our Data Protection division, which has been busy chasing this SD* vision.

As of last Friday, June 27th, the latest iteration of Avamar, 7.1, reached the end of its release cycle.  In my opinion, the two most important facets are the APIs.  There are now two fully featured REST APIs!  So how does this catapult EMC forward in the software-defined world?

When it comes to SD*, there are plenty of definitions out there that you can rely on and no one correct answer.  Here at EMC we tend to think of things this way (taken from an SD-Storage view).

  • Is there a de-coupling of the control-plane from the data-plane?
    • Is the control of the resources abstracted away from physical/software driven data services
  • Are data-services driven by software where technology allows for this?
    • This means we leverage commodity based hardware (proprietary or not) to achieve efficiency in all ways possible, and even where it makes sense run these as micro-services
  • Are there rich programmatic APIs?

Now let’s expand this definition a bit.  In the Software-Defined Storage world, we tend to think of things that are in the south-bound data path and directly accessed via block/NAS.  In the protection arena, which is storage agnostic (and which opens the door to full SDS), we tend to think of things pulling (or initiating requests) at the VMware stack from the East or West.  For VM image based protection this is done via a standard interface maintained by VMware called the VDDK.

This however means that there are no standard VMware based policies or objects that reference protection capabilities (as there are for Network/Storage/Compute).  VMware has developed, in the vCD stack, something called "extensibility" which essentially allows 3rd party vendors to introduce new objects into the API model and augment existing objects with new capabilities.  I believe EMC DPAD is the largest consumer of this capability!

In the protection world this means a Virtual Data Center gets a new object called a backup repository that places resource limitations and enforces policies for all objects in the VDC.  It is this use case that I believe will be ultra critical moving forward when it comes to SD-Protection, or SD* for things not inline in the data service path like VVOL.  So this is my personal augmentation to the SD* EMC definition.

  • Simplifies the control plane by exposing data service capabilities through extensibility where possible
    • This could simplify situations where you have multiple software-defined layers interacting
    • Generally simplifies tenancy, since the control plane is embedded inside of other stacks

So what does this all boil down to?

EMC now has two public APIs for backup/recovery.  We have one API that is focused on agnostic protection automation (think VIPR with virtual arrays) and another that is hyper-focused on embedding services in the VMware cloud stack (vCD).  The use cases here can vary wildly, but for the general understanding you can put the 1st in a Private Cloud API bucket (you run it), and the 2nd in a Public Cloud API bucket (you are the consumer).

Stay tuned! You will be seeing some very cool posts (2 years coming) that focus on vCO, vCAC, Powershell, PowerCLI for protection.  Hybrid Cloud Protection has arrived!

For more info contact your EMC DPAD rep!

Hybrid Cloud Enabling Technologies

Posted by vele1655 | Posted in Hybrid Cloud | Posted on 06-05-2014


 

So you want a Well-Run Hybrid Cloud? 

 

This year at EMC World we’re able to show some phenomenal progress in areas helping customers achieve Hybrid Cloud.  If you are here, or are looking at the vEMCWorld content after the show, I would highly suggest you check out "Cloud-Ready Data Protection: Built for the VMware Software-Defined Data Center and the Enterprise Hybrid Cloud".  I am driving this session with the assistance of one of our vCHS brothers, Dave Hill.  The session is focused on two audiences: the builders of Public Cloud, and the builders of Hybrid Cloud.

 

Here are some enablers covered here!

Virtual Data Center == SDDC

Embedding services in the VDC is the future

The vCloud API and Extensibility is key

 

SDDC Enables Hybrid Cloud

Well-Run Hybrid Clouds have a handful of key qualities that EMC and VMware believe help define what customers have been asking for over the past few years.  These qualities are what we believe will, in a simple and sustainable way, deliver ITaaS to their customers and allow them to become more agile in meeting business demands.  Through these needs, and through stumbling over complications between and among stacks along the way, it has become clear that the HOW needed to change: the tooling under the covers that enables policies, capabilities, and services to be delivered across Private and Public clouds and heterogeneous technology stacks.  To be simple, sustainable, and portable, applications need to rely on the next level of abstraction, the Software-Defined Data Center or Next-Generation Virtual Data Center.

It is this abstraction away from hardware that enables a customer to consume all data center services across cloud stacks.  Notice that, as a VDC service, we can now discuss EMC Data Protection as embedded in the Virtual Data Center just like Storage, Compute, Networking, Firewalling, and other critical services.  This is HUGE!  This represents the best example to date of a technology partner enabling Hybrid Cloud by making protection capabilities portable across Private/Public Clouds.

[image: EMC Data Protection embedded as a Virtual Data Center service]

 

Our notion of abstracting protection into the current VDC (OrgVdc) in Public Cloud is through a new object called a Backup Repository.  The repository itself is what lives inside of vCloud Director VDCs and gives service providers the ability to place limits on, and provide levels of federation and/or tenancy above, EMC Data Protection.  This is the same way vCD abstracts storage from Virtual Center at the Provider VDC level; think of a Backup Appliance at this same level.  Let’s see what kind of cool stuff comes around for VDCs in Private Cloud stacks this year!

[image: Backup Repository object inside a vCloud Director OrgVdc]

 

But where are we going?

The notion here is simple.  We believe that a key aspect of enabling Hybrid Cloud is ensuring that customers can unlock use cases and deliver the dependent technology consistently across Private and Public cloud stacks.  Think of availability, features, and typical Enterprise-grade capabilities at massive scale in a multi-tenant way!

 

It’s All About the API!

Now when we talk about Hybrid Cloud enabling technologies, there are the enabling things themselves (embedded in the SDDC) and there is also the notion of how this technology is turned into a service.  The latter tends to be where an SP focuses.  It is one thing to have an API, but it is another to make it easy for a service provider to construct, manage, and sustain the delivery of things as a service.  A product having an API does not necessarily mean #1 that it fits perfectly in an SDDC and #2 that it can be used efficiently in Public Cloud.

In order to align against roles and be successful here, there needs to be focus on three roles.  Let’s put this in a realistic Public and Hybrid Cloud perspective for a minute.

#1 The Customer – Service Provider Admin (SYSTEM Admin)

#2 The Customer’s Customer – Customer of a Cloud or IT Admin (Org Admin)

#3 The Customer’s Customer’s Customer – A Hybrid Cloud Consumer (vApp Owner/Author)

This is a critical point.  If we are going to deliver to all three of these customers and truly enable a consistent Hybrid experience across clouds, a standard and public API must be advertised.  This API must inherently include multiple roles.  So you can ask yourself: is product X ready for Public and Hybrid Cloud?  Do you have a Private Cloud based API that you can advertise to your consumers so they can choose which tool they consume with?  Or is your Private Cloud strategy focused on building portals to consume from?  Are there supersets and subsets of capabilities up and down the roles that allow granular control of capabilities for these different roles?

 

[image: the three customer roles across Public and Hybrid Cloud]

 

This takes us to the final point.  The vCloud API, and the qualifications from VMware’s VSPP for Public Cloud providers, are what will create a massive, seamless VMware Public Cloud.  What I mean by this is that one thing all of these providers have in common is that they advertise the vCD API.  This API is a standard, and all vCD based plugins for consumption and automation tools will work seamlessly across any of these providers.  Taking that a step further, for Hybrid Cloud, delivering services through these APIs is critical to allowing customers to take advantage of value-add features from SPs.  On the contrary side, SPs that expose their own API through which their cloud’s features are consumed will lead down the path of divergence among features and capabilities.

Take a look at the following, which demonstrates the two approaches for delivering services to customers.  On the bottom we have a custom approach, which typically ends up with no consistency and with services delivered to customers through a portal.  The second, the product approach, means the responsibility for delivering APIs all the way through to the tenants sits outside of the SP.  This is what will open the door to some very cool things to come.

[image: custom approach versus product approach for delivering services]

 

Congrats for getting through this one.  No, there are no scripts attached here =)  But there will be soon, to demonstrate how to use the new Avamar Plugin for vCloud Director across any provider.

PowerCLI Meet vCloud Hybrid Services!

Posted by vele1655 | Posted in Automation, Powershell, vCHS | Posted on 03-05-2014

Tags:

0

This is a quick look at how you can use PowerCLI with vCHS.  Since vCHS is a vCloud Director powered Public Cloud, you can take advantage of the native, mature, and well-supported vCD APIs.  This is extremely powerful, as these APIs can be used in a standard way across any cloud provider advertising the vCD API.

There are a couple of things that are special with vCHS, however.  Since the vCHS BU in VMware owns the vCD product, we are seeing them iterate quickly on the API and develop things that may not be publicly documented or available.  Rest assured, however, that you can always specify API versions to vCD to keep backward compatibility.  The other thing to consider is that there is a first step when talking to vCHS that allows you to look up what they call a Compute Resource.

Update 2014/05/08 from Dr Renouf.  The following command can be used to force compatibility within PowerCLI for specific vCloud Director API versions.  Specifying 5.1 should ensure you have no complications today; higher revs, which should be available from the vCHS team, can be targeted moving forward.

[VMware.VimAutomation.Cloud.Views.CloudClient]::ApiVersionRestriction.ForceCompatibility("5.1")

 

The Compute Resource is an OrgVdc in vCD, so essentially this lookup allows them to abstract a layer above all of their vCD instances and direct customers to the correct API for the correct Compute Resource.  Everything after the CR, however, is controlled by standard vCD APIs.  There are a couple of posts that already demonstrate this usage.

 

Massimo does a great job describing a workflow in vCO that allows you to spin up publicly facing VMs from a ghost Datacenter object in vSphere Web Client.

http://it20.info/2014/01/vchs-meets-vco-and-boris-becomes-a-hero/ (Massimo)

 

The vCO team also describes more in detail about how to use vCO with vCHS.

http://www.vcoteam.info/articles/learn-vco/276-getting-started-with-vco-and-vchs.html

 

These two methods, however, skip the first vCHS API.  I am going to skip it as well (I showed it in my last post with curl), since PowerCLI currently (05/2014) only works in a 100% supported way with the vCD API and not the vCHS API.

 

The first step is to load the vCHS Dashboard and click the specific Compute Resource you want to access in PowerCLI.  On the right side you will see a link to “vCloud Director API URL”, which returns the Organization link that is valid for that Virtual Data Center.  Notice in this case the Org name on the last part of the URL (23-186) and the FQDN (xxx.vcd.vchs.vmware.com).  This is what we need to pass to PowerCLI!

 

[screenshot: vCHS dashboard with the vCloud Director API URL link]

 

So the first step is to load PowerCLI, which has the Cloud module installed.  Here you issue a standard Connect-CIServer call.  Notice how we specify the server AND the Organization Name that we got from the vCHS portal.  A returned object then shows the successful login.  When prompted, use the same credentials you use to log in to the vCHS portal.

 

[screenshot: Connect-CIServer output showing a successful login]
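If the screenshot is hard to read, here is a minimal sketch of that call.  The FQDN and Org name are the placeholder values from the portal step above, so substitute your own; note that PowerCLI 5.x packaged the Cloud cmdlets as a PSSnapin.

# Load the PowerCLI Cloud cmdlets (shipped as a snap-in in PowerCLI 5.x)
Add-PSSnapin VMware.VimAutomation.Cloud

# Optionally pin the vCD API version per Dr Renouf's update above
[VMware.VimAutomation.Cloud.Views.CloudClient]::ApiVersionRestriction.ForceCompatibility("5.1")

# Connect to the Compute Resource's vCD endpoint with the Org from the portal
# (you will be prompted for your vCHS portal credentials)
Connect-CIServer -Server xxx.vcd.vchs.vmware.com -Org 23-186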

 

And now you can start issuing any of the normal vCD cmdlets.  Here we are going to instantiate a new vApp from the Public Catalog.  Notice how we browse the catalog and its templates, as sketched below.

[screenshot: browsing the catalog and vApp templates]

 

We then choose a template from the catalog, as well as the OrgVdc to provision to.  From there you can see the deployment occur successfully!
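A minimal sketch of the whole flow, assuming the standard Cloud cmdlets; the template selection and the vApp name here are illustrative, not taken from the screenshots.

# Browse the Public Catalog and its vApp templates
$catalog  = Get-Catalog -Name 'Public Catalog'
$template = Get-CIVAppTemplate -Catalog $catalog | Select-Object -First 1

# Pick the OrgVdc to provision into
$orgVdc   = Get-OrgVdc | Select-Object -First 1

# Instantiate a new vApp from the chosen template
New-CIVApp -Name 'MyFirstVApp' -VAppTemplate $template -OrgVdc $orgVdc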

[screenshot: successful vApp deployment]

 

 

 

Quick and dirty.  A great thing to take note of here is that although vCHS is a new service, the vCloud API and the vCloud ecosystem are not.  Any tools that currently use vCD should be able to work with vCHS in similar ways.  Look for future posts, and likely formal PowerCLI releases, that allow a connection to the vCHS API and handle the Compute Resource redirection to make this super easy.  Stating the obvious on that one =)

vCHS API – Hello World from CURL!

Posted by vele1655 | Posted in Automation, vCHS | Posted on 03-05-2014

Tags:

0

vCHS! vCHS! vCHS! This week I was one of the lucky few to gain access to vCHS through the VMware vExpert program.  EMC World 2014 is literally happening right now, but the opportunity is too important not to get cranking right away on demonstrating how to use the vCHS and vCD APIs.

This post is going to focus on implementing the generic workflow highlighted in the following useful Hello World example from VMware.  We are going to use a basic CLI REST tool, curl, to perform the lookups.  I initially ran into some SSL issues, so if you run into them as well, make sure you are forcing SSL to use TLS when authenticating to the vCHS API.  After that, the vCD API is less picky.

http://pubs.vmware.com/vchs/index.jsp#com.vmware.vchs.api.doc_56/GUID-CE7C1F40-A9DF-48F0-81B7-42E13A305D5F.html

 

Now for the details.  As you may know, under the covers vCHS is using vCloud Director as the cloud stack, delivering services natively and through extensions.  This means that any of the work done previously with vCD is valid!  The only new thing is that you must add a few steps ahead of time to authenticate against the vCHS API extensions prior to opening a connection to a VDC.  These first steps are how vCHS abstracts above individual vCloud Director environments.

In my case, I am in a VPC rather than a Dedicated Cloud, but the process should be similar either way.  I believe that in Dedicated Clouds more of the vCD objects would be available to the subscriber, versus being limited to VDCs (not sure).

 

So let’s get into it! Here is the basic flow of logging in and getting to the underlying VDC.

  • vCHS API
    • Login to vCHS API (POST /api/vchs/sessions)
    • Get vCHS Services (GET /api/vchs/services)
    • Get vCHS Compute Service (GET /api/vchs/compute)
    • Get vCD Session for vCD (POST /api/vchs/compute/<computeId>/vcloud/session)
  • vCD API
    • Get vCD API Versions (GET /api/versions)
    • Get VDC Resource (GET /api/vdc/<vdcId>)

 

 

Login to vCHS API (POST /api/vchs/sessions)

This is the first step, where we create a session against the vCHS API.  Note that I included Accept headers in these four vCHS steps, but they aren't strictly needed; they are, however, needed for vCD, with valid version numbers, in the last two steps.  I am also using xmllint to format the XML that comes out of each request to make it more readable.

[screenshot: curl POST to /api/vchs/sessions]
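A hedged sketch of the request, assuming the public vchs.vmware.com endpoint; the credentials are placeholders.  Note --tlsv1, which forces TLS and worked around the SSL issues mentioned above.

# Create a vCHS session; -i prints the response headers we need next
curl -sik --tlsv1 -X POST \
  -u 'user@example.com:password' \
  -H 'Accept: application/xml;version=5.6' \
  https://vchs.vmware.com/api/vchs/sessions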

 

 

Notice in the response that we receive an ‘x-vchs-authorization’ header, which will be used in the next step.  It also includes a DOWN link that specifies the services API endpoint we need to call next.

[screenshot: session response headers, including x-vchs-authorization and the services DOWN link]

 

 

Get vCHS Services (GET /api/vchs/services)

Here we are taking the previous ‘x-vchs-authorization’ value and submitting it as a header to retrieve the Compute Services available.

[screenshot: curl GET to /api/vchs/services]
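Something like the following, with the token value copied from the previous response (shown here as a placeholder):

# Reuse the x-vchs-authorization token from the session response
TOKEN='<x-vchs-authorization value>'
curl -sk --tlsv1 \
  -H "x-vchs-authorization: $TOKEN" \
  -H 'Accept: application/xml;version=5.6' \
  https://vchs.vmware.com/api/vchs/services | xmllint --format -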

 

The Compute Services represent the abstraction in vCHS land that maps to VDCs.  Notice there is a serviceType of “compute:vpc”.

[screenshot: services response showing the compute:vpc serviceType]

 

Notice the web portal of vCHS matches the VDCs listed above.

[screenshot: vCHS web portal listing the same VDCs]

 

 

Get vCHS Compute Service (GET /api/vchs/compute)

Now we can retrieve the link to the Compute Service (VDC), which is controlled by specific vCloud Director environments.

[screenshot: curl GET to the compute service link]
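Again a sketch; the compute href comes from the DOWN link in the services response, and <computeId> is a placeholder matching the notation in the step list above.

# Follow the DOWN link from the services response
curl -sk --tlsv1 \
  -H "x-vchs-authorization: $TOKEN" \
  -H 'Accept: application/xml;version=5.6' \
  https://vchs.vmware.com/api/vchs/compute/<computeId> | xmllint --format -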

 

The response gives us another DOWN link, which specifies the Compute Service path where we can get the authorization needed to work with our vCloud Director VDC.

[screenshot: compute response with the DOWN link to vcloud/session]

 

 

Get vCD Session for vCD

(POST /api/vchs/compute/<computeId>/vcloud/session)

This is the last step within the vCHS API.  Here we use the previous Compute Resource link to perform a POST, which gives us our actual vCloud Director credentials.

[screenshot: curl POST to vcloud/session]
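Sketched with the same placeholders as above:

# Trade the vCHS token for vCloud Director credentials
curl -sk --tlsv1 -X POST \
  -H "x-vchs-authorization: $TOKEN" \
  -H 'Accept: application/xml;version=5.6' \
  https://vchs.vmware.com/api/vchs/compute/<computeId>/vcloud/session | xmllint --format -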

 

Here is the return; notice the parameters being returned, “authorizationToken” and “authorizationHeader”.  We also get another link, to the actual vCD VDC.

[screenshot: response containing authorizationToken, authorizationHeader, and the VDC link]

 

 

Get vCD API Versions (GET /api/versions)

The first step once we get to the vCD environment is to grab the appropriate versions of the API, which can then be used in our headers.  Note that at this point you could proceed by following the Hello World example from the vCD documentation.

http://pubs.vmware.com/vcd-55/index.jsp#com.vmware.vcloud.api.doc_55/GUID-6DC15CF5-3BCF-4426-9988-C71E7A71CBD6.html

[screenshot: curl GET to /api/versions]
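A sketch against the vCD endpoint from the VDC link above; the xxx host is a placeholder FQDN for your vCD instance, and to my knowledge this call requires no authentication.

# List the API versions the vCD endpoint supports
curl -sk https://xxx.vcd.vchs.vmware.com/api/versions | xmllint --format -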

 

Notice the 5.7 version returned here.

[screenshot: versions response including 5.7]

 

 

 

Get VDC Resource (GET /api/vdc/<vdcId>)

We are actually specifying the 5.6 API in the Accept header, but 5.7 could be used as well, since it is available in the versions response shown above.  This GET request targets a VDC, and the response gives us the references we need to work natively with the VDC!

[screenshot: curl GET to /api/vdc/<vdcId>]
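One more sketch; the token is the authorizationToken from the vcloud/session step, submitted as the standard x-vcloud-authorization header, and the host plus <vdcId> are placeholders.

# Query the VDC with the vCD credentials from the vcloud/session step
VCD_TOKEN='<authorizationToken value>'
curl -sk \
  -H "x-vcloud-authorization: $VCD_TOKEN" \
  -H 'Accept: application/*+xml;version=5.6' \
  https://xxx.vcd.vchs.vmware.com/api/vdc/<vdcId> | xmllint --format -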

 

Here’s your VDC!

[screenshot: the VDC response]

 

 

 

Very cool! I hope this helps get you off the ground when it comes to starting to build automation with vCHS!