Here Comes My Sequel!

Posted by vele1655 | Posted in Containers, Hybrid Cloud, PaaS | Posted on 03-10-2014



Excited is likely the least impactful word I can think of for this post; maybe Excited² is better. I am truly lucky to be part of an organization (EMC) that really gets it and isn't afraid of taking chances.

Gets it

The Software Defined Enterprise is changing everything.

Taking chances

Keeping customers first while embracing and contributing to disruption, self-disruption, and cannibalization, and helping customers take advantage of the new world.

My time at EMC has always been full of going against the grain and doing things that are different, all while following and aligning to industry trends. I have always been involved in small teams that get placed squarely in the middle of the unknown. I.e., what kind of disruption, and at what scale, can a small group of ninjas have on a cruise ship?

  • The vSpecialist group where I learned a ton about pre-sales, evangelism, high output teams, and leadership while working among some of the industry’s brightest
  • The Data Protection Virtualization group where I spread some vSpec juice and learned from some talented fellows and helped shape how to talk with customers about Data Protection with VMware, the SDDC, and Hybrid Cloud
  • And most recently in the Global Specialty Practice Integration Labs group pushing, pulling, lifting, lowering, and influencing the direction of DPAD products

All along this journey I’ve been lucky enough to have leadership that has provided rarely needed air cover for some of my radical ideas and openly fostered innovation while aligning towards corporate strategy. For this I thank Jeff Thomas (at scaleiomonster), Mike Zolla (at mikezolla), Wade O’Harrow (at wadeoharrow) and Chad Sakac (at sakacc).

What's up next? Another journey into the unknown. At the time of the original vSpecialist group, VMware was beginning a more rapid adoption in the Enterprise, but it was somewhat obvious this adoption would turn into a landslide and a massive opportunity to align technology and solutions toward VMware. Depending on who you talk with, we are at a similar point today, where things are changing in a not-so-unknown way, starting with developers but making its way through IT. Let me rephrase that: technology has already changed, and Enterprises are getting closer to adopting it.

So let's term it the known-unknown state: a new tech bubble driven by things like 3rd platform applications, continuous delivery, and big data, all serving as enablers for newcomers to unseat large organizations from their seemingly anchored positions (the Software Defined Enterprise/Economy). But wait, I said bubble! Sure, and my opinion is that soon we will see it burst again. But to me there is something more substantive being left behind this time: a ton of open-source technology contributed along the way that removes the barriers to entry for huge markets, leaving more competition among players and, in the end, a better product for the consumer. It is this same technology that will continue to eat at the Enterprise workloads where large swaths of capital and workloads run.

Since I mostly focus on the tech side (although I majored in Business), let's look at the last few years as an innovation bubble rather than from the more obvious financial side, agree? When the dot-com tech bubble burst, a bunch of proprietary closed-source technology was simply thrown away (i.e., rusty gear). Now, on the contrary, we have a massive amount of stimulus going into reusable technology (software, designs, architectures, and design patterns), or, economically speaking, efficiency technology, which mostly won't be lost. My justification for this is partly personal experience. I started my career working with Apache and open-source technology, but at that time only a small fraction of today's open-source contribution was happening. Guilty party included here, as it was mostly take, enhance, and keep to ourselves at the time. I really can't say I have seen this much excitement or engagement from the community in my short purview of 15 years in the tech industry.

My short-sighted view is that this comes first from large tech companies financing their employees to engage in open-source projects, second from the coolness of the technology being built, and third from the camaraderie of being a part of these initiatives.

"Software Defined Enterprises" and "cloud", among other things, are driving profound changes.

  • Applications being driven from the “cloud” and being highly globally consumer focused
  • Applications going from North-South availability and persistence to non-persistent, scale-out designs
  • Applications adopting microservice versus monolithic designs
  • Applications designed around consumer facing APIs
  • Applications publishing more true REST/Hyper-Media APIs instead of language specific SDKs

So what is the common thread here? Applications, being the consumers of infrastructure, are changing dramatically and have a completely new set of requirements of infrastructure. This has driven, and will continue to drive, all kinds of crazy shifts: the emergence of new technology and the depletion of old. There is no doubt, however, that this new stuff will be the competition for our existing technology and will make existing technology even better. I.e., good news for Enterprise shops!

  • Hypervisors innovating in new ways to quack like a container, competing with the efficiencies of Linux containers, while those same containers learn to quack like a VM
  • VMware resisting commoditization of vSphere and sprinting on the Software Defined Data Center concept to make the existing Virtual Data Center concept further ITaaS-aligned
  • Convergence and Hyper-Convergence continuing to grow to simplify and optimize the SDDC

So what's left behind from all of this work in the innovation bubble, and what should we expect moving forward? Some of this is here now.
– Existing monolithic applications slowly deprecated while the ones that stay further leverage Hybrid Cloud SDDC in an ITaaS model
– New convergence of the Hypervisor and Containerization layer, IaaS inclusive of PaaS
– Hybrid Cloud PaaS models inclusive of IaaS
– Auctioning layers minimizing costs and enabling perfect competition for cloud resources
– End-to-end applications completely defined by blueprints built on top of and considering PaaS frameworks
– Apps designed for containers and their availability models
– Abstraction of data center services up through the Application and native Hybrid Cloud functionality
– New programming languages, and existing languages optimized for efficiency, asynchrony, networking, concurrency, big data, and REST
– More open-source technology and projects that leverage other pioneering open-source initiatives

So, deeper down than the apps, what is driving all of this innovation? Developers! Businesses are asking more and more of developers to innovate. They are turning to methods like continuous integration to rapidly develop and iterate on applications. This type of environment demands more of a Developer and Operations relationship. And this leads to things like the recent Dockernado storm, where I am specifically referencing a tool (Docker) that helps Dev and Ops teams stay apart yet work optimally with one another when building, deploying, and managing next-gen applications.

Stay apart? Developers don't want to care about networking, underlying storage, or monitoring. They want to focus almost all of their effort on their application. This aligns with what they're building: scale-out architectures that don't care about any of this stuff. They also want a way of building and running this software on their own system and having it portable to production on top of any environment (sounds like a Java promise). Some of the same concepts that drove Java to #1 popularity as a language are now driving Docker and changing infrastructure.
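As a minimal sketch of that split (the image name, registry, and ports here are hypothetical), the developer describes and builds the app once, and ops runs the identical artifact anywhere Docker runs:

```shell
# Developer side: describe the app in a Dockerfile, build and run it locally
$ docker build -t myapp:1.0 .
$ docker run -d -p 8080:8080 myapp:1.0

# Ops side: pull the very same image and run it on any Docker host,
# wiring networking, storage, and monitoring however production needs
$ docker pull registry.example.com/myapp:1.0
$ docker run -d -p 80:8080 registry.example.com/myapp:1.0
```

The point is that nothing about the image changes between the two sides; only the runtime wiring does.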

Plenty more to speak of here, but the cloud's Dockerization is a great example of a future where Developers win in the end. Docker is going to be one of many tools embraced by developers for next-gen and current architectures, driving containerization and the disruption of the legacy hypervisor markets.

To win the Developer's mind in this era, companies that exist in "cloud" must focus on an easy-to-use API for their products. Let's leave the definition of "easy to use" for future posts. At EMC, if we want to exist and maintain share without shedding our hardware as IBM has, we must embrace these changes. At the same time, to be successful you likely need a good combination of knowledge to help customers get from A to B, old to new, or Platform 2 (~VMs) to Platform 3 (~VMs/containers/microservices/scale-out apps). Let me coin Platform 2/3 for those customers that are born in the 2nd Platform and leveraging the 3rd for growth. Reciprocally, you probably also need a skill set for Platform 3/2, where new companies take advantage of new tech while also running small amounts of monolithic technology. All the while you likely need folks with a hybrid skill set across these two types of technologies. The platforms' APIs, open-source applications, and programming languages in a 2/3 and 3/2 world will be critical to adoption, and getting Developers to leverage them during the development phase is critical.

Finally, the purpose of the post!

As of now, Oct 1 2014, I am on a great new team and initiative at EMC focused on Developer Evangelism. The focus of the group will be to align ourselves, internally and externally, to the needs of Developers. I.e., Make Developers Love EMC.

Through this charter there is now a new group of Developer Advocates at EMCII, led by a cloud veteran and podcast co-founder, Brian Gracely (at bgracely). We are lucky to have a leader as involved in and aware of industry trends as Brian. His podcast is superb and has published a stunning amount of content and forward-looking interviews with the industry's brightest. I believe at the time of this post they are on #163 in a few short years!

Alongside me are a couple of all-stars from the industry. Jonas Rosland (at virtualswede) joins the team from the EMC CTO office, where he is already head-down focusing on all things hipster and has early on infected the team with his SaaSification. Another brother and character that needs no introduction is Kendrick Coleman (at kendrickcoleman), who has been rocking the PaaS stacks and served as a bad *aaS player at VCE since the beginning.

So what's the least to expect from us? Community engagement will be inherent in our success: we will have a presence on GitHub, at meetups, hackfests, and conferences, on IRC, on social media, and frankly wherever we can.

You're going to find us listening a ton to customers and general folks, and evangelizing EMC technology, all the while fighting to align our people, tools, and software to developers' needs. Our new home is where you will find open-source projects, sample code, examples, and documentation. We plan on it being a central place for hosting projects and other contributed code.

And no doubt, vElemental will be shifting from the most popular vCenter Operations, vSphere, vCloud Director, PowerCLI, and PowerShell posts to more hipster stuff like Golang, Cloud Foundry, GCE, Mesos, Docker, etc.

We are in exciting times. Tip of the spear!

Docker Meetup #3 – Mountain View

Posted by vele1655 | Posted in cAdvisor, Containers, Docker | Posted on 18-09-2014



This was my first Docker meetup that I have been able to attend. I had high hopes for this group and this session, especially since it was being held and sponsored by Google. During the first presentation of the night, Docker Internals by Rohit Jnagal of Google, it was clear that even the most basic presentation from Google engineering would not let the audience down. Rohit said the content would be foundational, and it definitely was, likely taking the basics to a deeper level than most had seen.

The point of his presentation was to show some of the fundamentals of how Docker manipulates file systems, leverages namespaces for isolation, and cgroups for resource constraints.

Check the following link for slides to this session and previous sessions.

The filesystem demo was interesting: he showed how to use mount commands to create a read-only filesystem on top of a read-write filesystem using AUFS. He then led into discussing the structure of /var/lib/docker/aufs/ and its subdirectories, where Docker is simply creating layers and mapping those layers appropriately when running containers. The concepts carry across the different filesystem backends, and his opinion was that Btrfs (a snapshot-based approach) would likely be the winner moving forward instead of AUFS.
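A rough sketch of that part of the demo (the paths are made up, and this needs root plus an aufs-enabled kernel):

```shell
# Build a union mount: a writable branch stacked on a read-only branch
$ mkdir /tmp/ro /tmp/rw /tmp/union
$ sudo mount -t aufs -o br=/tmp/rw=rw:/tmp/ro=ro none /tmp/union
# Writes into /tmp/union land in the /tmp/rw branch; files from /tmp/ro
# show through but are never modified, which is exactly how image layers
# stay immutable underneath a running container
```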

He then continued by showing how containers are started. This was a very simple but powerful demo: you create a detailed JSON file that describes everything about the container, such as devices, hostname, cgroups, and namespaces, and nsinit leverages this information to actually spin the container up.
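From memory, the flow looked roughly like this. The JSON is heavily abbreviated and the exact field names changed between libcontainer releases, so treat it as illustrative only:

```shell
# container.json sits alongside the container's rootfs
$ cat container.json
{
  "hostname": "demo",
  "namespaces": ["NEWNS", "NEWUTS", "NEWIPC", "NEWPID", "NEWNET"],
  "cgroups": { "name": "demo", "cpu_shares": 512, "memory": 268435456 }
}
# nsinit reads container.json and spins the process up
# inside the declared namespaces and cgroups
$ sudo nsinit exec /bin/bash
```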

His demo concluded by manipulating resource limits, thresholds, and shares while viewing the changes live using Google's cAdvisor project. Manipulating containers live was pretty simple: you just echo values into the container-specific files that live under /sys/fs/cgroup/cpu/docker/containerId.
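For example, to live-throttle a running container you would do something like the following (the container id is illustrative; this is the cgroup v1 layout and requires root):

```shell
# Halve the container's CPU weight (the default cpu.shares is 1024)
$ echo 512 | sudo tee /sys/fs/cgroup/cpu/docker/<containerId>/cpu.shares
# Cap its memory at 256 MB the same way
$ echo 268435456 | sudo tee /sys/fs/cgroup/memory/docker/<containerId>/memory.limit_in_bytes
```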

There was a question about whether cAdvisor would be able to drive alerts. He replied that cAdvisor isn't intended to be a full monitoring platform, but it can be leveraged within any monitoring framework: it exposes an API that you can use to feed container information into other environments. The private version that Google maintains has a bit more functionality, such as the ability to change resource shares and even suggest optimal settings, but this is not public quite yet.
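As a sketch of that integration path (the port and API version prefix depend on how you run cAdvisor, so adjust accordingly):

```shell
# cAdvisor's default port is 8080; pull JSON stats for one container
$ curl http://localhost:8080/api/v1.0/containers/docker/<containerId>
# The returned samples (cpu, memory, network) can then be pushed
# into whatever monitoring framework you already run
```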

Following this presentation was Containerized Continuous Delivery with Docker and Shippable by Avi Cavale, co-founder and CEO of Shippable. I was again surprised by this one. This could be a pattern =), but things built around Docker seem able to take advantage of the simplicity of containers and their packaging, and spend their time layering on other coolness. I believe this combination will make for some very cool things in the near future.

Avi had a good demo of Shippable as a cloud service for Continuous Integration. He mentioned a bit of the problem statement up front: Shippable wanted to deliver CI as a cloud service, but tools like Jenkins did not scale well and had other challenges when delivered as a service to many customers. This led them to double down on Docker, at around the 0.2 release, as a central aspect of Shippable.

One of the ideas shown during the demo was allowing Shippable to monitor your code repository, such as GitHub or Bitbucket (apparently an extensible architecture, so any), and watch for commits. Shippable takes over at that point and starts a build, then unit and integration testing. Upon success, the build is containerized and sent to a production server.

This automation allows Shippable themselves to push code to production 40-50 times a day with 10 people.

He finished the discussion with some questions that got into some of the secret sauce. Apparently there is a tool they built called OneBox which alleviates shipping whole containers around all of the time.. I need to look into this. He also called out a "container fabric", which consists of a messaging layer using RabbitMQ for communication across clouds and agents running on boxes to allow communication.

He also calls out that it is "Hybrid", able to run between VMware in a Private Cloud and among public clouds. Getting code or delivering containers to a production system currently seems to happen over SSH, where the customer has already deployed container hosts. Kubernetes was mentioned as a current option to allow a push to any cloud. The future should allow for plugins to common cloud APIs to actually manage and deploy a container host as part of the process; I believe DigitalOcean is the current option.

An interesting idea he discussed was "topology-based testing". He called out most testing nowadays as effectively unit testing, since single services are typically tested by themselves and not as a whole. Shippable seems to be working on making their technology address whole functioning environments rather than single containers at a time.

The last presentation, Checkpoint and Restore for Docker Containers by Saied Kazemi of Google, was a view from a future-facing, lab skunkworks perspective. The idea is to leverage Checkpoint/Restore In Userspace (CRIU) to suspend and resume containers. CRIU typically works on processes and their children, letting you issue commands to suspend and resume an application. The Docker use case is a bit more involved, as changes were needed in CRIU, the kernel, and of course Docker itself.

At this point the underlying pieces are being changed to enable this functionality, and I would assume that at a later date, once it is further baked, Docker will roll in the capability. The demo we saw used a special script from CRIU that orchestrates the extra steps needed for Docker to work. A good example: when you checkpoint a container and stop it, Docker removes the relevant layers. To restore the checkpoint, those layers need to be put back before the container can be resumed.

What is the goal?
docker checkpoint containerId
docker restore containerId

Check out these usage scenarios! cMotion anyone?

OpenStack Silicon Valley 2014 – Agile Track

Posted by vele1655 | Posted in Automation, Docker, Infrastructure, OpenStack, PaaS | Posted on 17-09-2014



This begins the second part of the day, where the conference split into tracks. I decided to follow the Agile track, which was done entirely as panels. This track focused on deployment, containers, and PaaS.

Let me begin with a short editorial section summarizing what I thought of the day. There seemed to be a lot of maturity around the ideas and concepts being presented at the conference for an open-source initiative. If you read the prior post on the keynotes, it should be clear that a good amount of information was presented in the first few hours. What strikes me at this point, based on how some of the content was presented, is that a decent amount of effort does seem to be lost in a "groundhog day" loop rather than spent on solutions and making progress on problems.

The balance of content between presentations and panel sessions may have been weighted a bit too far toward panels. Although I do appreciate the experts speaking openly, it is difficult to give these ideas real paths forward unless they are presented and recorded in a reproducible and scalable medium. The discussions were at times very difficult to follow, which could be seen even on the panelists' faces as they tried to visualize what certain speakers were saying. Some panel discussions actually went very well, possibly aided by the moderator being an expert who could summarize and lead the conversation forward in a strategic way.

So what were the controversial topics of the day?
– OpenStack centralization or decentralization of authority
– OpenStack should focus on more than just core internal features and plumbing
– The OpenStack ecosystem needs leadership to counterbalance large vendors
– Containers, OpenStack and where they fit
– Creating competition among project teams in OpenStack for the best solution
– Choosing the right build of OpenStack
– PaaS as the right level of abstraction for applications and not OpenStack
– Bare metal or VM hosting containers

Thought provoking points?
– Organizations need to build environments that their end users and devs enjoy consuming from, and the platform with the most enthusiastic devs wins
– AWS/Azure involvement in OpenStack for cloud portability
– People don't dislike open source; they are looking for reasons to love it
– Data centers should be loosely coupled things
– Use OpenStack builds slightly behind the cutting edge, and mostly from products
– More OpenStack CI testing needed
– Hybrid cloud and what's the right level of abstraction
– Meta meta description of an application

The following is the blow-by-blow live blog from the Agile track. Ideas and statements below are not my own.


This begins with a panel on Deployment Approach including David Boucha – SaltStack, Greg Elkinbard – Mirantis, Chris Hoge – OpenStack Foundation, Jesse Proudman – Blue Box Cloud, and Monty Taylor – HP.

What do we mean when we talk about deployment at scale? What exactly does scale mean? The panel seemed to conclude that a general rule of thumb is 100-600 nodes per cell as the sweet spot, where nodes means compute nodes or hypervisors. It was mentioned that Rackspace tends to roll out cells in 600-node units based on its history with OpenStack.

An interesting idea that also was weighed here was around how you upgrade. The larger the node counts, the more difficult an upgrade could be.

Another point was that the scale conversation is also gated by OpenStack's architecture, which may have lacked key practical engineering input from thought leaders who have actually built hyper-scale public cloud architectures. Those architectures need to allow for failures and continuous management, where we bring up new nodes and decommission old ones continuously.

On the contrary, most customers likely fit within the 20-100 hypervisor scale and would be just fine with an out-of-the-box deployment. Some cloud providers (Blue Box) actually choose this model based on their business model, which allows them to achieve scale. Federated identity management and upgrade management systems can help clean up some downsides of this approach and glue these things together.

The next question was about whether to automate or not. I can't say there was much meaningful here other than: know what you're automating before you do so.

Somewhat combining thoughts, the next question asked whether it matters that a specific tool is or isn't purpose-built for OpenStack. I believe this was answered with the idea that no matter what, you need a system that can report the health and exact state of an environment. This is key to running at scale.

An OpenStack environment tends to be temporal, and things in virtual environments tend to always be moving. The next question addressed this, asking how to ensure dependencies are identified, isolated, and managed. Not very good answers here; it sounds like no one has quite tackled the scenarios where hardware impacts workloads, or how to properly report and identify that up through the OpenStack layers.

The next question dealt with how you manage deployment of OpenStack infrastructure alongside existing stacks. I believe the key idea was that there isn't a need to consider environments 100% OpenStack on day one; there will be transitional periods, so management will be important.. I definitely missed some other points here.

The final question dealt with maintainability and cloud management after initial deployment. The consensus around OpenStack is that it is better than before, but be careful: check your order of operations, and possibly keep copies of VMs and the database so you can roll back if necessary.

The next panel session under the Agile track focused on Continuous Integration/Delivery, Consuming Trunk, and the Allure of Distros. I thought this was the best panel session of the day in terms of a well-moderated session (Vish Ishaya) that wove the conversation through different meaningful loops and made sure the audience got the key points.

The panel consisted of Paul Voccio – Rackspace, Vish Ishaya – Nebula, and Joshua McKenty – Piston. In terms of choices for deployment the panel was asked what choice of distro, how to decide on version, and where do products fit in.

When considering distros, some may not realize they are actually using distro-based OpenStack services. For example, if you are running apt-get, you are using the Ubuntu distribution of OpenStack. This is a different conversation from using a supported OpenStack product/solution.

In general, the panel agreed that most should consume from a distro instead of building directly from OpenStack source. The exceptions would be the large cloud providers or those with custom kernels or packaging requirements. There was also a question about whether to consume the other things OpenStack services depend on from distros, which is valid as well, but those are more likely consumed from a distro anyway.

In terms of adopting different versions, cloud providers tend to differ in how they update their own environments based on business model. Piston, as an example, tended to be more conservative and adopt previous versions, whereas providers like Rackspace, which have the responsibility and reputation of being pioneers with OpenStack, tended to stay just slightly behind the cutting edge, or as close to trunk as makes sense.

This cutting edge brought up another interesting point: by staying close to trunk, they raise their risk profile a bit in terms of the bugs they encounter. It was agreed that CVEs landing after major releases tend to be a negative experience for customers patching or upgrading environments, so finding the point where things settle into more of a steady state is desirable. Rackspace has a policy of fixing things and contributing them back to the community first, then waiting for them to appear in an OpenStack build, as their best option.

The bugs were a very important point in terms of OpenStack's release strategy. Because all services are released at once in 6-month iterations, services that rely on core services may actually be very unstable on the first release. Only once things reach a stable state does adoption make more sense.

There are also problems on the conservative end. Those that waited a long time tended to have pain as well, since the changes were more dramatic and rebasing custom functionality would likely be required to get to the latest rev.

The question finished on the CI/CD part: the people who do CI/CD against trunk tend to be those, like Rackspace, that sponsor OpenStack.

Continuing this topic, the next question was about using a product versus rolling your own. A good general statement is that there is likely little to no value in rolling your own OpenStack solution. If your business is running a bank, you probably shouldn't go out and try to do everything from scratch. The notion that everyone needs a custom solution is likely based mostly on the past state of things.

After this, the discussion moved to a different line of thought: how to get people to use what's in OpenStack. I believe the general point was that, as a whole, there are a lot of things people don't use in OpenStack, but if you focus on the core things that are the common denominator among shops, OpenStack does well. This is something the DefCore committee should be helping with. A proof point is that you can run things inside OpenStack that were not planned for, i.e., Cloud Foundry. This in itself was said to prove that OpenStack met about 90% of what it set out to do.

The conversation then turned to the ability to run core services outside of OpenStack while housing non-core services as entities inside of OpenStack. This drives toward iterating quickly and building out extra features, or leveraging some of OpenStack's goodness to help drive more adoption.

At Rackspace, these new developments aren't always contributed to directly. They tend to evaluate new things for value to their customers and, if there is value, involve themselves lightly to keep them architecturally in line so they can deliver them to customers at some point.

There was an interesting conversation around CI testing for services before they are used. Apparently Rackspace and most larger providers do their own automated CI with unit and integration testing. The panel said that OpenStack's own ability to do so has deteriorated over time and needs to be rethought.

A somewhat unrelated topic also came up around what OpenStack environments look like. It sounds like pilots tend to turn into the test beds that take the latest builds, while organizations green-field new OpenStack environments for production.

Following on the CVE discussion, the panel discussed the difficulty of following trunk. As stated before, the frequency is a challenge for upgrades upon initial releases. These can even be daily, causing a lot of headaches for cloud providers that need to reach out to customers for each one. Pile on top of this that there is a public date when a security vulnerability may be disclosed, which can force cloud providers into undesired schedules. This is different, however, from patches to trunk, which are more readily consumable.

There is also a cost to innovating in this space. Patches are readily consumable by the community, but some problems take more significant effort, and by the time you invest your efforts and get something patched, the community as a whole may have gone a completely different direction. For this reason it is important to keep yourself aligned to the community and its roadmaps. The general consensus was that the main scenario where a company develops something and brings it back to the community as a feature is when it is not on a roadmap, and then only following a product manager conversation.

Following this panel was another focused on Hybrid Cloud and OpenStack with Kyle MacDonald – Mirantis, Jesse Proudman – Blue Box Cloud, and Nati Shalom – GigaSpaces.

The panel had some interesting discussion, but it was pretty clear that no one agreed completely on the definition of Hybrid Cloud. To me, some of the topics discussed were so deep in the weeds of APIs, *aaS models, and where proper abstraction should occur that they really required more background for the general audience to follow. The consensus toward the end was that some fault may lie in OpenStack's lack of maturity in addressing and defining these concepts.

The definition Jesse mentioned came from 451 Research, which holds that Hybrid Cloud represents an application that is co-resident in two or more stacks. The panel continued to discuss that Hybrid Cloud is not something provided by a vendor but rather how a company leverages the technology.

An interesting data point concerned the old promises of OpenStack and others. Previously, burstability or cloud bursting was a big topic; then it moved to simply using pricing mechanics to determine proper placement; and now we tend to think of joining cloud architectures.

A point was made that OpenStack hasn't made Hybrid Cloud easy yet. The fact that you can't take existing tooling built for your OpenStack private cloud and use it against an OpenStack public cloud, or that underlying OpenStack components like Swift can't be used in this way, are proof points.

The panel then continued on to a view of Hybrid Cloud similar to my own: it is more about the instantiation of the application than about mobility. If the application you are using is built around availability, then the problem becomes much easier.

A related but very important point is figuring out the right level of abstraction to achieve Hybrid Cloud. If a form of PaaS is the right answer for your applications, then how much should the platform actually do and define? At what level should the Hybrid Cloud abstraction exist: up at the PaaS layer (like CF), or at the template layer (Heat)?
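To make the template-layer option concrete, here is a minimal sketch of a Heat (HOT) template; the image, flavor, and network names are assumptions for illustration, not anything the panel showed:

```yaml
heat_template_version: 2013-05-23
description: Minimal sketch of the template-layer abstraction
resources:
  web_server:
    type: OS::Nova::Server
    properties:
      image: cirros-0.3.2      # assumed image name
      flavor: m1.small         # assumed flavor
      networks:
        - network: private     # assumed network name
```

The whole application topology is declared in the template, so the abstraction lives below the PaaS layer but above any one cloud's plumbing.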

This conversation continued a bit, with Nati making the point for Heat as the abstraction layer and the idea that we can’t expect to reach a point where there is a single cloud API winner, and that one may not even be necessary (TOSCA). I think he was stating that something like Heat is the right answer; the rest of the panel seemed to disagree a bit on this point.

Another related point was whether Heat, the orchestration service, should have been a part of OpenStack at all. It was built to allow CloudFormation-like, templatized approaches to deployment similar to AWS, but it may not necessarily be the best fit inside of OpenStack and might limit orchestration with other clouds.

Following this was another question focused on the best Hybrid Cloud use cases. The panel seemed to agree that if you’re using OpenStack you are already Hybrid Cloud, since you don’t typically wipe and replace, and it is likely connected to existing infrastructure.

In terms of scale-out applications, OpenStack already seems aligned. For scale-up applications, however, more tooling is needed.

The rest of the discussion was somewhat a repeat of what was said prior. The final conclusion from some of the panel was that delivering applications aligned to Hybrid Cloud would probably be done outside of OpenStack, at the Cloud Foundry layer, which is the right place to provide a common layer of abstraction above heterogeneous clouds.

On to the final panel! This continued the PaaS theme a bit and focused on Platform, Containers, or Something Else. Included in the panel were Diane Mueller – Red Hat, James Watters – Cloud Foundry, Brendan Burns – Google, Scott Johnston – Docker, and Florian Leibert – Mesosphere. This one was, as expected, bursting at the seams with people.

The first question came out hot– what is the future of PaaS on OpenStack?
In line with the ending of the last session, the common idea was to not focus on OpenStack but how PaaS in general sits on infrastructure. Some of the value in doing so is separating the cluster level operations from the application operations. This notion may also converge at some point where applications can be more opinionated in their resource usage versus the rather static set in PaaS today. Essentially, PaaS should be viewed as just another application that runs on top of your Data Center OS.

The next question focused on Docker and where it sits with PaaS. One route is Docker-hosted services living within resources in OpenStack. Another was stated as a native ability for OpenStack to manage containers. I believe the immaturity and lack of PaaS-ness in the OpenStack platform is holding this back.

The conversation turned to where the panel felt the future was for Docker. Since it is at the commodity state, the next step is to figure out how to properly manage containers and allow them to start working cooperatively as an app through networking. All panelists really aligned in this way of thinking.

So what was the challenge that Docker solved? It was typically hard to bring in new runtimes for developers, and the choices in PaaS solutions were scarce. With Docker, a developer now has a play button that he knows works against any Docker service. This allows a good demarcation between application and infrastructure providers. The container management software can then take that container and knows how to play it, copy it, and scale it. Brendan finished on the point that PaaS starts to look more like takeout food than a recipe.
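The “play button” idea can be sketched with a hypothetical Dockerfile; the base image and file names are my own assumptions:

```dockerfile
# Hypothetical app image: defined once, runs against any Docker service
FROM python:2.7
COPY app.py /app.py
CMD ["python", "/app.py"]
```

Building it with `docker build -t myapp .` and running it with `docker run myapp` behaves the same on a laptop or on any Docker host, which is the demarcation between application and infrastructure the panel described.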

James steps in to add another important question. What problem are we really solving here? Each technology being represented on stage had their own background. Docker started as a technology that helped get developer code from laptop to a cloud. Google Compute Engine was around giving control to developers. Mesos was built around high performance compute. He finished the thought pointing to Cloud Foundry being built around solving problems with Enterprise use cases.

So what do containers actually do? They allow you to ship apps faster to have impact on the business. They also allow you to separate applications from the infrastructure and tools around it.

Devs like it because it allows them to write applications for the data center quickly. They tend to not want to wire applications together. Their nirvana is to program against their data center as if they were programming against their own machine.

PaaS can play a large role in crossing the bridge between OpenStack and other cloud platforms.

The conversation drifted into audience questions, where a very heated one came in around separation and data services. Brendan discussed that at Google, all applications live in containers, even the data-heavy applications. This involves changes at the data layer inside of applications (apparently Google has not open sourced some of this technology). The general theme is that you don’t want the data living in the same temporal container as the application; you want it in another, more persistent storage entity.
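As a hedged illustration of that separation (the image name and paths are assumptions), plain Docker commands can keep the data on the host while the container stays disposable:

```shell
# Data lives on the host path, not inside the temporal container
docker run -d --name db -v /srv/pgdata:/var/lib/postgresql/data postgres
# Kill and replace the container; the data in /srv/pgdata survives
docker rm -f db
docker run -d --name db -v /srv/pgdata:/var/lib/postgresql/data postgres
```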

There are good examples of data services that are perfect candidates for containers because they scale well in a temporal environment. There are also examples of things that don’t scale well, like MySQL, since its scaling and fault logic was designed for traditional OSes. Application architecture design patterns need to catch up to mainstream applications here to really solve the data-service-in-container problem. The problem will be solved gradually, with the easy cases first.

Following this there was a question about how you get a company from IaaS to PaaS. It was mentioned that the general room is still trying to do IaaS well (OpenStack), but the industry is already moving on to the next layer, which could be more or less complicated.

One feeling was that showing people how to use PaaS, and showing them that it makes their lives easier, would be a way to convince them.

Another concept thrown out there was that POSIX-like standards are needed for the datacenter. I believe the idea here was that Linux was truly successful because of POSIX, and this type of standard could help. In the data center, more focus should be on programmable things and less on individual machines.

I think another question came from the audience here around whether Docker should be used instead of PaaS. These are different things: Docker is something that helps get things built and shipped, and PaaS is something that hosts these builds in production. In the case of someone using Docker instead of PaaS, it could possibly be a development environment in the cloud. My personal view here is that this would be accurate if the management of the containers is essentially what PaaS refers to, in which case it is relevant whenever you are running containers in the cloud.

A future-facing topic that James brought up is the “meta meta description of an application”. He is curious to see who is working on defining the plumbing between applications, and the application as a single entity. Included in this is how deep the definition may go and how much abstraction happens along the way.

At this point another question came in from the audience around what would go into a decision between bare metal or VMs for hosting containers. It is important to understand that containers are not a security boundary. There is some isolation, but it is really up to the application developers to be good citizens in containers. There are some forms of resource isolation, but that’s it. Virtualization of the Linux instance running the containers can serve as the hardened multi-tenant boundary for security and resources.

In terms of benefits from containers, you can view it from a VM-to-container perspective. You could see substantial benefit from simply consolidating your VMs into a single container VM as a first incremental step. From that point, the next logical step could be to investigate bare metal options.

James asks a final question focused on performance management. Florian replies with how Mesos has approached performance management with containers. He says that they built in low level hooks into Mesos to allow applications to report performance.

The conversation continued on to dealing with misbehaving applications. Large-scale operations consider misbehaving services something they want to find and kill. If the application was built properly, then any single service should have nothing to do with the overall health of the application, and killing it should trigger the scheduler to reinstantiate it. The typical Enterprise expectation tends to be a bit different: the workload would be quarantined and troubleshot while running. The general consensus was that workloads should have a threshold set that allows the scheduler to kill the workload if it violates that threshold.
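The threshold idea can be sketched in a few lines of Python; this is my own minimal illustration of a scheduler that kills and reinstantiates a workload exceeding its budget, not anything the panel showed:

```python
import subprocess

def run_with_threshold(cmd, threshold, max_restarts=2):
    """Run cmd; if it runs past the threshold (seconds), treat it as
    misbehaving: kill it and reinstantiate, up to max_restarts times.
    Returns (restarts_used, returncode)."""
    for attempt in range(max_restarts + 1):
        proc = subprocess.Popen(cmd)
        try:
            rc = proc.wait(timeout=threshold)
            return attempt, rc          # finished within its budget
        except subprocess.TimeoutExpired:
            proc.kill()                 # find and kill the misbehaving service
            proc.wait()
    return max_restarts + 1, None       # still violating after all restarts
```

A well-behaved `["true"]` finishes on the first attempt, while `["sleep", "5"]` against a 0.2-second threshold gets killed every time.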

Last thought of the day! We all win when no one has to think about PaaS a few years from now.

OpenStack Silicon Valley 2014 – Keynote

Posted by vele1655 | Posted in Automation, Infrastructure, OpenStack | Posted on 17-09-2014



This is the first live blogging that I have attempted here at VElemental. Bear with me as I figure out the right balance of conciseness and verbosity moving forward. For now I have compiled my notes for the first half of the day into the following.

The first section of the day was filled with lots of different speakers and points of view. Please note that the ideas below are not my own but are entertaining, enlightening, and likely controversial. Welcome to the OpenStack Keynote sponsored by Mirantis!

KEYNOTE 09/16/2014

The day started with Alex Freedland (Chairman and Cofounder – Member OpenStack Foundation) giving a general overview of OpenStack in the industry.

Alex stated that there has been tremendous qualitative change over four years, that OpenStack no longer needs to prove itself, and that it is the winner in the open cloud game. But as it enters the 2.0 phase, there are questions to answer. Specifically, what will it look like, who will be there, and how will it be used? Here are some of the upcoming trends mentioned.
– Landscape changing
– Companies committing at massive scale
– Mergers and M&A deals
– Proven usage patterns – better agility and management

He continued by briefly touching on the Software Defined Economy. This he terms the nature of industries managing change faster, enabled by the innovation factory that OpenStack represents. Here OpenStack is a channel to get to market, and the consumer has the choice of the best innovator. Open Cloud success would come through community growth, different business models, and leveraging agility and innovation at scale. The finishing thought was around community, and finding success through the community in a way similar to Linux, though that took time.

Next up was Martin Fink – EVP and CTO HP and Marten Mickos – CEO Eucalyptus

Martin kicked off the discussion speaking about a year in for Helion and Open Source, HP’s cloud platform. He focused on an internal concept called “student body left” where HP decided last year to redirect all cloud resources towards OpenStack. The goal was to align all BUs and cloud based services for internal and external customers, Public/Private consumption towards a common platform and strategy. He highlighted this strategy with Stackalytics reports for the Juno release and HP dramatically changing its position.

He continued to then speak about goals in embracing open source at HP. His focus was to make sure everything was done in the true spirit of open source, delivering code close to and tied to the trunks of projects all following the community process. He recognized that companies like HP being involved in open source meant that they needed to align with competitors at times and become close friends for the benefit of the project and customers (IBM as example).

Relating to what HP will be releasing, he mentioned a commercial OpenStack distro, developer platform, Helion Network (sounded like public cloud partners) to deliver cloud anywhere in the world. He concluded this thought stating that it wasn’t just a piece of software they are working towards but an experience, the whole stack including distro, hardware, all delivered across models (on premise, public cloud, cloud managed by HP, virtual private cloud managed by HP).

Martin concluded by introducing Marten as an example of HP bringing in open source talent and know-how in order to grow its capabilities and continue the open source mantra.

Marten, not quite an employee of HP yet, started by noting that what he could speak about was focused on Eucalyptus. The focus there is to take the success of AWS APIs and design patterns (open, simple, product focused) into the OpenStack framework, and to be easy to install and use: essentially where nimble meets massive. He wants a full victory, where the consumers take advantage of the project and the developers get credit.

As he continued his presentation he got more into the open source theme. His focus was on open source being the right avenue, since it tends to win in the long run, but the question left was how to get there. Currently there are plenty of open source benefits and challenges. He said that at this point people don’t tend to dislike OpenStack; they are actually saying that they would love to love OpenStack, but currently they can’t because of this or that. He continued to say that a challenge is central authority: the ability to say no, and someone who listens to users.

If he had a choice, it sounded like the future would be to harden central functionality and listen to the voice of the customers. He commented that the HP team was in a good position for success due to its experience and history in tight design and serious systems engineering. He concluded by stating the conceptual goal for HP and Eucalyptus: in his view, consumers should be able to move workloads back and forth among cloud deployments. With Eucalyptus, HP should be in a good position to deliver whole clouds and repeat the web era: what was done with web apps and the LAMP stack is now happening with the cloud era and OpenStack.


Ken Ross – Brocade

Brocade is working on contributions and plugins. Their visibility into the demand for OpenStack was reviewed briefly: in the North East US there were around 50 customers doing testing and 12 in production. Ken believes that 2015/2016 will be when we see a sea change around customers taking advantage of OpenStack for things like Network Function Virtualization and the orchestration of networking.

Ken concluded that a major problem with OpenStack networking, currently and in the future, relates to the lack of understanding of how the networking pieces and options fit today and where they are going to fit in the future. Without these things defined, moving forward will be difficult.

Following this was Jonathan Bryce, Executive Director of the OpenStack Foundation.

Jonathan focused on some higher level concepts which he saw as developing trends across tech, finance, and other industries. He uses the idea of the Software Defined Economy to express the idea that every company is competing with a startup in one way, shape, or form. This startup tends to move faster and gets products and services in front of customers faster. In the SDE, change tends to be easy, things move quickly, decisions are temporal, and changes are much easier to make in an ongoing basis. This all contributes to choice.

He goes on to describe the “passive consumption” model of today where IT buys hardware and solutions on multi-year product cycles and typically based around what preferred vendors sell. The new way where “I want what I want” is based on releasing things early and often and likely straight to production. In this case if IT won’t give them what they want then they’ll find a way of getting it. The problem here is that you can’t stop this from happening, so IT really needs to meet the needs that users are asking for.

The general trend that he views is that technology decisions are moving out to the edge of a business. This means that users are no longer taking force-fed technology, and the “central planning” committee that has made decisions prior is now dead.

The successful company today is building frameworks and platforms for their organizations to build environments they want their users consuming from. Jonathan finished with an important point focusing on the idea that in the recent computer evolution, IT has driven the virtualization phase due to their own benefits, and the cloud phase is now being driven by the edges of the business where benefit is being reaped by the consumer.

He finished by pointing to some existing areas to look at for real-world feedback.

Martin Casado – CTO of Networking, VMware – was the next speaker, with a very compelling discussion around driving policy into OpenStack.

The general problem that Martin outlines is that there are business needs, limits, requirements, or considerations that can be expressed in plain English by non-technical people to dictate how the business works. However, in order to translate those into the technical world, there must currently be humans involved. He makes an important point that where we are currently, based on the maturity of the technology, it is time to stop talking about policy and actually do it with OpenStack.

He goes on to discuss a few main problems that must be dealt with in getting to policy management. He has run into these problems many times in converting the English version to the Computer Science version. The structure typically involves converting it to a declarative language, compiling it, then pointing it at systems. The problems he highlights are relative to the compilers. These compilers have had to deal with:
– Device Canonicalization or Heterogeneous Hardware – having all devices look the same which came with challenges in other areas like rolling out updates, and common functionality
– Distributed State Management – managing state and a distributed transactionally complete system along with making it scalable
– Topology Independence – this is the need to solve network virtualization problem which would require mapping and be built for scale and allowing convergence quickly

He believes the time is right now since we are in a new era with solutions to existing problems.
– OpenStack gives abstractions (canonicalization)
– Cloud Management Systems – handles distributed state

These changes have allowed for having a policy compiler that leverages high level abstractions already.

So what does policy look like today? In each silo it tends to look like Datalog or database tables, where each row represents an entity. A single layer is then created by joining tables together.
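As a toy illustration of the row-and-join model (the tables, names, and rule here are hypothetical, not from the talk), a policy check is just a join that keeps the rows violating the rule:

```python
# Hypothetical relations: each list is a table, each tuple a row (entity)
vms = [("vm1", "net-dmz"), ("vm2", "net-dmz")]               # vm -> network
networks = [("net-dmz", "public"), ("net-pci", "isolated")]  # network -> zone
vm_tags = [("vm2", "pci")]                                   # vm -> compliance tag

def violations():
    """Join the three tables; keep VMs breaking the rule that a
    PCI-tagged VM must not sit on a public network."""
    return [
        vm
        for vm, net in vms
        for n, zone in networks if n == net and zone == "public"
        for v, tag in vm_tags if v == vm and tag == "pci"
    ]
```

Here `violations()` flags `vm2`, since it carries the pci tag but sits on the public net-dmz network.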

Finishing the talk, Martin talked about the Congress project that he is involved in. He states that the focus of Congress will be to stop thinking about the details in each silo and to have a DSL to define the entire system using a unified policy layer. Declarations for things like PCI could then be enforced globally.

Following Martin was Randy Bias – Founder and CEO CloudScaling discussing some dirty laundry for OpenStack. Randy makes a living by delivering OpenStack as production grade to customers.

Randy states the situation of OpenStack today and in the future. He sees a lot of complication moving forward and growing programs from the number today to possibly 3x in the future with the current structure.

The problem he focuses on in his talk is that OpenStack does not currently have a unifying vision or product strategy. With the growth in programs, OpenStack’s once-consistent mission starts to degrade, and OpenStack itself starts to mean less.

He then discusses the internals of OpenStack with the board responsible for guidance as in a normal business and the technical committee ensuring releases happen on time. He believes that each program team tends to have their own view of what OpenStack is. This leads him to say that there doesn’t need to be a dictator, but there does need to be product leadership. He uses AWS as an example of success where they have small architecture review boards focusing on feature reviews and a team of product managers per product line.

Randy concludes by saying that there are many paths forward but all paths require creating ownership and a failure to fix it now may result in insurmountable challenges.

Adrian Ionel – CEO Mirantis was next up. It sounded like he was a late add to the agenda based on the consolidation of Eucalyptus and HP. His presentation is about the state of OpenStack and where it needs to go to be successful in 2016.

He began his presentation with a data point from Google Trends, showing the popularity of OpenStack vs EC2 becoming close. He also showed job trends, where OpenStack and EC2 tend to be the highest among cloud topics.

Apparently, internally at Mirantis they are seeing 4x the customers compared to the past year and 100x compared to the past few years, at a rate of about two new customers per week. Adrian still feels they are in the early days of this growth.

In terms of adoption, he feels that since companies are doing pilot projects, it does not yet equal massive workloads. He feels that looking at adoption alone is not a good indicator, and that the real metric should be based on workloads and developer adoption.

He continued with AWS sitting in the top right of Gartner’s quadrant all by themselves, with all other cloud players hovering towards the center. In terms of adoption, Amazon is worth around 6 billion this year, and the other cloud players combined aren’t even scratching that number. A good example that he gave of a hosting provider showing heavy growth is DigitalOcean, going from the 3000th to the 4th largest hosting company in a few years (Netcraft). He feels this is largely due to developers liking and adopting the API.

Adrian then threw out the idea that developers always win in the end. He feels this way because developers create the future and solve real problems. Essentially, the platform with the most enthusiastic developers ends up winning. A great example of this is Docker, with 20 million downloads in the past 4 months. It also ranks higher in analytics terms than virtualization topics.

The last section of his talk focused on what can be done. “Is OpenStack in danger of do-it-yourself dead-enders?” Are people more interested in control than value add? Or is it going to be the Android of the cloud?

The general developer does not care about things below or around the application like monitoring software, storage, network, or the hypervisor. They care about API quality and ease of use, feature velocity (not about OpenStack plumbing), and portability (devs want to write things once). Is OpenStack too intrinsic and poorly focused? Are infrastructure vendors moving focus away from critical areas?

In conclusion, he believes that there are some tangible things that can be done.
– Focus on API (awesome, well documented, easy to use, consistent, backwards compatible)
– Invest in ease of use vs flexible plumbing (could have too many options)
– Don’t move up the stack; partner instead (LB, DBaaS, etc)
– Don’t copy AWS, aim to be open and best
– Reshape upstream engineering to foster open competition inside OpenStack projects (engineering comes up with solutions and competes while staying in the framework, letting market forces choose) vs central planning
– Enable workload mobility to other platforms
– Take a look at other successful platforms and engineer for that

Next up was the first panel session. This was my least favorite of the three in the Keynote. The panel included Gary Chen – IDC, Boris Renski – OpenStack/Mirantis, Steve Wilson – Citrix – CloudStack, and Marten Mickos – Eucalyptus – Helion.

Is it a winner take all in open source?
Sure, there are examples of this, such as Linux itself. However, other things like MySQL and Postgres show there can be multiple winners.

One of the concepts discussed was that OpenStack should be seen as the glue for the heterogeneous datacenter through a consistent API. The opportunity and competition is for one standard set of open source cloud APIs. The panel tended to disagree with declaring OpenStack the de facto winner here, since it is still early in terms of adoption and very small compared to the adoption of other clouds. A good example was given around thinking of cloud APIs as programming languages: they’ve been around for a while, but do we have one winner now?

Another point that was brought up was around the health of competition. If done right it can be good, but if you focus on your competition then you end up becoming like your competition (why Linux didn’t focus on Microsoft).

There tends to also be competition beyond things like CloudStack (most active Apache project) with internal DIY initiatives that organizations aren’t aware of.

It possibly is too early in the maturity curve to even say that competition is happening now. The Amazon/CloudStack battles could be coming further out.

The panel finished by discussing some personal housekeeping.
– Mirantis – largest pure-play OpenStack vendor, so build on the core differentiation of OpenStack
– CloudStack – easy to set up, easy to use, works at scale. Citrix will continue to contribute to CloudStack and to OpenStack through networking/hypervisor work. Also needs focus on reliable delivery mechanisms for applications
– HP – Can provide an end-to-end solution and does this with deep infrastructure skill. They have bet completely on open source in infrastructure, and believe in the power of Hybrid Clouds and the meaning of AWS design patterns in the cloud

Following this was Bill Franklin – VP OpenStack and Technology Enablement – HP Cloud. He discussed HP’s contribution to OpenStack and its belief in one community. He made an important point at the beginning to call attention to what HP terms “OpenStack roadies”: the teams that operate the infrastructure behind the developers at companies like HP, Rackspace, and Red Hat.

Here are some of the notable callouts for contribution.
– VMware for ESX support, ephemeral PKI endpoints for cross component communication, live migration
– Nova – extendable resource tracker, compute capacity vs cpu slots
– Neutron – distributed virtual routing (DVR)
– Heat, TripleO, Ironic – chunking and the ability to share metadata
– Graffiti – Intel and HP project for systems admins to help collaborate on metadata across Horizon and Glance
– Designate – DNS as a Service which was moved to incubated stage
– Ironic – landed as entry driver in Nova
– Cinder – 3PAR integration and things that you want to do exposed through Horizon, such as upload to image, quality-of-service specs carried through to Horizon, and resetting state for snapshots and volumes

After this began another panel session, led by Chris Kemp of Nebula, titled “If OpenStack is so awesome why doesn’t anyone use it”. Panelists included Peter Foulkes – 451 Research – Analyst, Jo Maitland – Google – Public Cloud, and Alessandro Perilli – Red Hat – Private Cloud.

The first question was around making sense of the moves in the industry. Peter answered this one by stating that Enterprises tend to move slowly. He believes that companies are currently locked into VMware, and OpenStack represents a viable solution that is now coming from behind. He views the market VMware currently holds as a large one that will soon be open for competition among vendors. This should present “blood, mergers, and acquisitions”. The successful organizations will be those embracing OpenStack and consolidating behind platform strategies. Success here will also come from hiring the right key people and betting on the right technologies.

The next question posed was about why adoption is not happening. There seems to have been a massive gap between what OpenStack was supposed to do and what it does. A lot of customers have had the wrong impression, which is probably the community’s fault. The typical interested person does not understand at the outset that OpenStack is built for scale-out applications and not the scale-up applications found in the Enterprise.

This, however, isn’t a great conversation to have in this way. A way to discuss it without scaring people away is to speak of a phased approach: OpenStack can be additive, where you continue to leverage virtualization and IaaS technologies together. But then you need something that glues these environments together.

The adoption problem tends to be based on how easy the technology is to consume. A good example of success is technologies like Docker, which for greenfield workloads help get enterprises on board. A failed promise from OpenStack is workload portability; containers are the right way here, being lightweight and super efficient.

Chris stepped into a comparison where OpenStack is building a new kind of computer, loosely coupled, that takes up the space of a warehouse. OpenStack tends to be the leading candidate for this mission.

So what are the key features that would increase adoption? OpenStack needs to stop focusing on improving the foundation and focus on things it can do, i.e. the installer and storage. It needs to help manage the life cycle of what’s inside of OpenStack. It also needs more layers, such as security and compliance.

If you were Bezos pulling up from the industry in 10 years, what would you see? Public Cloud players are bigger; organizations aren’t in the infrastructure business, so they will be even less in it in the future.

I think a very interesting final point here was around whether Amazon and Google could be contributing to OpenStack. The point was made that they would be the ones interested in building a bridge to the cloud, to seamlessly absorb any Private Cloud deployments into the Public Cloud.

And finally we get to the last panel! Here we had Jonathan Donaldson – Intel – GM Software Defined Infrastructure, Jonathan Bryce – OpenStack Foundation, Brian Gentile – Tibco Jaspersoft, and Nati Shalom – CTO and Founder – Gigaspaces, who moved his product from proprietary to open source.

The panel began with a question to Jonathan around what business case Intel has for open source. He responded by stating that Intel is looking to ensure that customers in all aspects, from the direct to the indirect consumer of its products, are exposed to Intel goodness.

The following question asked what drivers there are taking open source from small to large groups or building communities.
– Needs to solve a business problem or large need
– Has a variety of methods of contribution
– Lots of ways to participate

A couple of good examples are NoSQL and Docker. NoSQL succeeded by entering a proprietary space and delivering technology that solved problems in a cheaper way. Here the demand drove it, with things like MongoDB and Cassandra. Docker is an example of something small that grew hugely, moving from disruptive phase to commodity in 6 months.

Jonathan also mentioned here that common problems tend to breed community. And when approaching this, you can’t do it from within vendor lines, it must be done with the problem in mind in order to foster mass adoption and acceleration.

What is the draw and outlets for building successful business models on OpenStack?
A main reason is that it is allowing entry into spaces that weren’t possible before. It also represents a business model that doesn’t impede innovation. Open source is driving innovation at a faster pace. A good example of this is that we used to pay for something even if it was shelfware; now with open source you are paying for things that you know are valuable. As a whole, it seems to have forced the industry to generate more value.

The panel finished with a few comments about the health of OpenStack. The ecosystem should counterbalance its leadership; currently there are too many companies arguing for equal rights.

My Vyatta IPSEC L2TP and OS X Configuration Dissected

Posted by vele1655 | Posted in Infrastructure | Posted on 02-09-2014



This is an older topic, possibly one you tried back in 2010, but I still use it personally so I thought it would be worth posting for reference.  It is more common nowadays for connecting cloud environments together for infrastructure needs.  Since I am showing tunnel termination using OS X and Windows, that is not quite the case here, but it is similar.

A couple of things are important to me when running a lab environment inside a corporate environment: 1) I have complete access to the environment using my native toolsets and systems, and 2) the underlying infrastructure does not need to be configured in any special way.  This might make me seem a bit antiquated =) But.. I use Vyatta in an internal lab to expose a larger private subnet natively to my laptop when working remotely.  This could be accomplished with any number of solutions.  For most people, the real (or forced) alternative to this method would likely be running a VDI session that exists inside the lab or development environment.

So how do I accomplish #1 and #2 in a corporate environment? 

What you need here is a gateway VM that sits on the public portion (likely 10.x) of your corporate network and also on the private portion (172.16.x for me) of your lab.  The gateway VM needs to perform NAT, terminate L2TP tunnels, serve DHCP to L2TP clients, forward DNS requests publicly, and/or serve DNS for local requests.

You also need the clients to be able to establish VPN tunnels from within a VPN tunnel.  Wait, what?  When working remotely I establish a corporate VPN tunnel, and I then need to establish a second tunnel inside the corporate tunnel to actually reach the lab environment.

This is actually possible with OS X.  I haven’t gotten the Windows version working as a tunnel within a tunnel, though admittedly I haven’t dedicated much time to it; that is probably down to the corporate VPN software more than Windows.

Both methods however do leverage native VPN capabilities of the OS versus a special VPN client.

So if you’re interested, continue on!   I will start with the Vyatta configuration and then finish on the guest OS configurations (OS X and Windows).

Let’s break these things down based on the Vyatta configuration directly from my router below.


Vyatta Configuration

SET INTERFACE ADDRESSES: the outside interface gets the Public Address (corporate), the lab interface gets the Private Address

interfaces {
    ethernet eth0 {
        duplex auto
        hw-id 00:50:56:b8:32:6b
        smp_affinity auto
        speed auto
    }
    ethernet eth1 {
        duplex auto
        hw-id 00:50:56:b8:bb:c4
        smp_affinity auto
        speed auto
    }
    ethernet eth2 {
        duplex auto
        hw-id 00:50:56:b8:06:00
        smp_affinity auto
        speed auto
    }
    ethernet eth3 {
        duplex auto
        hw-id 00:50:56:b8:2c:94
        smp_affinity auto
        speed auto
    }
    loopback lo {
    }
}


nat {
    source {
        rule 10 {
            outbound-interface eth0
            source {
            }
            translation {
                address masquerade
            }
        }
    }
}


protocols {
    static {
        route {
            next-hop {
            }
        }
    }
}


service {
    dhcp-server {
        disabled false
        shared-network-name PRIVATE {
            authoritative disable
            subnet {
                domain-name yourlabdomain.local
                lease 86400
                start {
                }
            }
        }
    }
    ssh {
        port 22
    }
}


system {
    config-management {
        commit-revisions 20
    }
    console {
        device ttyS0 {
            speed 9600
        }
    }
    domain-name yourlabdomain.local
    host-name vyatta01
    login {
        user vyatta {
            authentication {
                encrypted-password ****************
            }
            level admin
        }
    }
    ntp {
        server {
        }
        server {
        }
        server {
        }
    }
    package {
        auto-sync 1
        repository community {
            components main
            distribution stable
            password ****************
            username ""
        }
    }
    syslog {
        global {
            facility all {
                level notice
            }
            facility protocols {
                level debug
            }
        }
    }
    time-zone GMT
}



Outside-Address = Outside Interface Address

Outside-NextHop = Outside Default Gateway
vpn {
    ipsec {
        ipsec-interfaces {
            interface eth0
        }
        nat-networks {
            allowed-network {
            }
        }
        nat-traversal enable
    }
    l2tp {
        remote-access {
            authentication {
                local-users {
                    username vyatta {
                        password ****************
                    }
                }
                mode local
            }
            client-ip-pool {
            }
            dns-servers {
            }
            ipsec-settings {
                authentication {
                    mode pre-shared-secret
                    pre-shared-secret ****************
                }
                ike-lifetime 3600
            }
        }
    }
}



Additional tasks on the Vyatta router would include running a DNS service like Bind9.  I am doing this, but it is really your choice.


OS X Configuration

The OS X configuration is fairly straightforward and revolves around three things: 1) configure a VPN adapter, 2) add a resolver domain to ensure lookups for the VPN domain go to the correct DNS servers, and 3) create a script that runs when the VPN tunnel is established to send packets for the lab network down the correct route.


Network Preferences Configuration

The first step is to open the Network Preferences and press the + button.



Enter a valid name for the lab network.



I have a screenshot of a configured and running VPN tunnel, so yours should be blank here.  Enter the address of the Vyatta endpoint that you configured above.



Enter the user, password, and shared secret from the Vyatta configuration above.



Ensure the checkboxes look like this for Options.



Under the TCP/IP tab you will likely not have anything filled in yet.



Enter the Search Domain being used in the lab.



Add Resolver File

The next step is to create a file under the /etc/resolver directory named after the root domain of the lab.  Inside this file include a single line, `nameserver <ip>`, specifying the DNS server that should resolve the lab names correctly.
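
As an illustration, here is a minimal sketch of staging that file from a shell.  The domain and DNS address are examples (substitute your own), and the final install into /etc/resolver requires root, so it is shown commented out.

```shell
# Assumptions: substitute your lab's DNS domain and DNS server address.
LAB_DOMAIN=yourlabdomain.local
LAB_DNS=172.16.1.1

# Stage the one-line resolver file locally first.
printf 'nameserver %s\n' "$LAB_DNS" > "./$LAB_DOMAIN"
cat "./$LAB_DOMAIN"

# Then install it as root:
# sudo mkdir -p /etc/resolver
# sudo mv "./$LAB_DOMAIN" "/etc/resolver/$LAB_DOMAIN"
```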



Add PPP Startup Script

The last step is to create a script that gets executed when the tunnel successfully starts up.  This script is responsible for resetting the firewall settings to allow the appropriate packets through the VPN tunnel, and for changing the routing so that packets destined for the lab network go out the proper interface.  Replace "172.16" with the proper network for your lab.

/etc/ppp/ip-up (use sudo to create the file and chmod 0755)

#!/bin/sh
# pppd passes the new interface name (e.g. ppp0) as $1.
# Wait for the tunnel to settle, remove the blanket "deny ip from any to any"
# firewall rule, then point the lab network at the new interface.
sleep 5 && /sbin/ipfw delete `/sbin/ipfw -a list | grep "deny ip from any to any" | cut -c1-5` && /sbin/ipfw delete set 0 &
sleep 5 && /sbin/route change -net 172.16 -interface $1 &


Now test it out! It is handy to perform pings during the instantiation of the tunnel.  You can also check the routing table out (netstat -nr) to see which interfaces are being used for the VPN network.


Windows (7/2008) Configuration

Here is the client side configuration that works for the VPN tunnel.  I will just be pasting the screenshots below as it is self-explanatory.





You can enter the appropriate information in the dialogs that appear next.  I include screenshots below that are of a working VPN connection to the Vyatta device.














That’s it, enjoy!  With this setup I have access to the 172.16.x.x subnet from my native toolsets for developing and working with vSphere from my OS X laptop remotely.  Very cool!

Running CoreOS and Docker Behind Enterprise Security

Posted by vele1655 | Posted in Automation, CoreOs, Docker | Posted on 29-08-2014



Continuing on with more CoreOS testing, I decided to up my game and start leveraging more corporate lab gear to check it out.  Very quickly after deploying a CoreOS image using the password/network configuration hints from this post, I ran into some issues with certificates when loading Docker images. 


Like a lot of companies, at EMC there is SSL packet inspection that occurs at various points for ingress and egress traffic to the internet.  This causes problems for lots of things in Linux, and for Docker it was interfering with the ability to validate certificates for HTTPS traffic from the CDNs (see the related thread on GitHub).


If you get the following error when leveraging Docker or the SSL libraries in Go then you need to update your CA certificates with a valid certificate. 

x509: certificate signed by unknown authority



In the case of running Docker from within EMC, this meant that I had to find the SSL certificate that is substituted during SSL inspection and load it into the CA store of the CoreOS server.  This was rather easy to do on a bare CoreOS image.


1) Get the certificate file

2) scp "EMC SSL.cer" core@coreos01.brswh.local:~/.



3) Convert the certificate file to a PEM file

sudo openssl x509 -in "EMC SSL.cer" -out /etc/ssl/certs/EMC_SSL.pem -outform PEM
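
One gotcha worth noting: `openssl x509` assumes PEM input by default, so if your .cer file was handed out DER-encoded you also need `-inform DER`.  Here is a self-contained sketch of that round trip, using a throwaway self-signed certificate as a stand-in for the real corporate one:

```shell
# Generate a throwaway self-signed cert to stand in for the inspection CA cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.cer \
  -subj "/CN=demo-inspection-ca" -days 1

# Mimic a DER-encoded .cer as often distributed in Windows environments.
openssl x509 -in demo.cer -out demo.der -outform DER

# The conversion from the post, with -inform DER added for DER input.
openssl x509 -inform DER -in demo.der -out demo.pem -outform PEM
grep "BEGIN CERTIFICATE" demo.pem
```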



4) Update the CA certificate file

sudo update-ca-certificates



5) Restart Docker

sudo systemctl restart docker



6) Run Docker again with any Docker image

docker run -t -i google/golang-runtime



That’s it, enjoy!

Playing Around with CoreOS

Posted by vele1655 | Posted in CoreOs | Posted on 29-07-2014




Over the past month I have been playing around a bit more in the Docker-sphere.  I wanted to briefly give a few tidbits that I found along the way in relation to getting going with CoreOS.  First of all, see William’s post here for some basics on how to get CoreOS uploaded to an ESXi host.


I am seeing a lot of interest in this area, so here is some info that may help you get past a few humps!

  • Deploy CoreOS
  • Configure Static Networking
    • Auto Login Kernel Parameter
    • Create File and Restart Networking
    • Change core user password
  • Forget about what you just saw!


Deploy CoreOS

If you want the more manual method, you could also do a conversion of the VMX and VMDK files to an OVF per the official CoreOS steps.


The deploy currently (07/29/14) expects that you either:

a) connect a configuration ISO (a bug in the stable release stops this from working), or

b) have a DHCP server set up.

The first option would actually allow the complete configuration of CoreOS and its services.  Let’s assume this will be fixed at some point =) But on to the second option.  If you have DHCP configured, then you’re good and can continue on to play around with CoreOS.


If DHCP is not configured OR you are interested in the cloud-config settings then continue on.


Auto Login Kernel Parameter

If you want to simply get access to the CoreOS image and set the networking information manually you can do the following.


1) Open the console to the CoreOS image

2) Reboot the VM

3) When the boot loader prompt comes up, press any key to stop the boot process.  Enter “boot_kernel coreos.autologin” and press enter.   This will make the console skip the login prompt and drop straight into a shell as the core user. (See the CoreOS documentation for the available kernel options.)



4) Done!



Create file and Restart Networking

Once you get access to the console, the next step is to set static networking.  This can be done by editing the network unit file used by the systemd network service.

1) sudo vi /etc/systemd/network/  (or your editor of choice)



2) Restart networking (there looks to be a bug in the restart process, so you need to stop the interface to change IP settings)
sudo ip link set enp0s17 down
sudo systemctl restart systemd-networkd
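
For reference, a static configuration for systemd-networkd generally looks like the following.  This is a sketch only: the interface name, addresses, and file name are assumptions for my lab, and the real file belongs under /etc/systemd/network/ on the CoreOS host (written as root).

```shell
# Write a sample static network unit (names and addresses are examples).
cat > static.network <<'EOF'
[Match]
Name=enp0s17

[Network]
Address=172.16.1.50/24
Gateway=172.16.1.1
DNS=172.16.1.1
EOF
cat static.network

# On the CoreOS host:
# sudo cp static.network /etc/systemd/network/static.network
# sudo ip link set enp0s17 down
# sudo systemctl restart systemd-networkd
```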


Now if you have the networking all taken care of, the next step is to SSH in.  By default, CoreOS ships with insecure SSH keys that allow password-less access.   You can either leverage this method as shown in a screenshot from the CoreOS site, or you can set a password for the core user manually.



If you want to set the password for core manually, follow these steps.


Change core user password

Create a password hash using the openssl tool.  (Note that `openssl passwd -1` produces a weak MD5-based hash; see the CoreOS documentation for generating a more secure one.)

1)  Open CoreOS console

2) sudo usermod -p `openssl passwd -1` core (not typically a good idea)

3) sudo cat /etc/shadow (make sure it has a hash)


4) Test ssh with a password!


Forget about what you just saw!

Ok, so that was all fine and good.  However, in reality this process is not how CoreOS was meant to be managed.  As a large-scale OS, it is meant to have hooks inside of it that allow the configuration of these items through some forms of automation.  So take a look at this alternate procedure.  Currently (07/29/2014) there seems to be a bug where the ISO file will not be picked up by CoreOS.


1) Create an ISO file that includes a Cloud-Config file

The file would look something like this and would be contained in an ISO at the path /openstack/latest/user_data, with the volume label config-2.


#cloud-config
hostname: testing

write_files:
  - path: /etc/systemd/network/
    permissions: 0644
    content: |

coreos:
  units:
    - name: systemd-networkd.service
      command: start

users:
  - name: core
    passwd: $1$4AZiqdeL$mGemao15wzDxRh2bTg.Zw.
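
A sketch of building that ISO from a shell follows.  The file names here are placeholders, and mkisofs (or genisoimage) must be installed for the final step, which is shown commented out.

```shell
# Stand-in for the full cloud-config file shown above.
printf 'hostname: testing\n' > cloud-config.yml

# The file must live at /openstack/latest/user_data inside the ISO.
mkdir -p configdrive/openstack/latest
cp cloud-config.yml configdrive/openstack/latest/user_data

# Burn it with the config-2 volume label:
# mkisofs -R -V config-2 -o configdrive.iso configdrive/
ls configdrive/openstack/latest
```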


2) Deploy CoreOS

3) Attach ISO to the CoreOS VM

4) PowerOn VM


If you want to deploy this file to a CoreOS image and run it manually, you can always leverage the “sudo coreos-cloudinit --from-file” command.

Accessing Avamar REST API from the vCO REST Plugin

Posted by vele1655 | Posted in Automation, Avamar, vCO | Posted on 18-07-2014



This post is going to continue a recent thread where I show and discuss some of the new API capabilities from EMC Data Protection.  Here we will focus on getting the private cloud REST API working with a workflow engine like VMware’s vCenter Orchestrator.  There is no specific use case presented here other than basic configuration.

By the end of the post you will see the configuration of the package and a simple call to add a client for backup.  And yes, I will be updating this and adding more functionality shortly.  Look for future posts.

Cool stuff in here?

  • vCO working with JSON
  • Augmenting the vCO generic REST adapter for authentication header use
  • Dynamic presentation layer inputs from actions
  • And of course protection using vCO!

There is some interesting stuff under the covers in the package that is available below.  A lot of the work was making the generic REST plugin’s authentication mechanism work with a REST API that leverages sessions.  As an example, you can use the REST plugin to issue generic calls against vCloud Director, but there is a special option that tells the plugin to cache the X-Session identifier, which then gets sent as part of the headers on every call.  Since I didn’t build a full-blown plugin, and the REST plugin does not open its session-style authentication to anything outside of vCD, I had to write a couple of wrappers.  For anyone interested, see the following actions to see how it works.  Briefly, we save the session id to a configuration element; before making REST calls we check that we can successfully ask for sessions, and if not we issue an auth and save the new id.

All Actions –> invokeAvamarRestGeneric –> invokeAvamarRestBase



  • vCO 5+
  • REST Plugin 1.03+
  • My vCO Avamar REST package (download it here and unzip it)



Here are the workflows that are inside of the package.



These are the actions that support the workflows.



So let’s get started.  The first step is to login to your vCO client and switch the views to “Administer” with the top drop down.  There you will see a few icons appear in the top left.  Select the orange packages icon.



Navigate to where the package is located that you unzipped and select it.



vCO will prompt you relating to certificates, import the certificate from the package.



Here it will prompt you for which actions and workflows to import.  Ensure that you select all workflows and actions by pressing the check box in the bottom left.




Ok, let’s switch out of Administer view back to Run.  The workflows should now be located under the Library drop down under EMC.  Before heading there however, we need to configure the generic REST adapter.  Expand HTTP-REST –> Configuration.  From there right click the Add a REST host workflow and press Start Workflow.



You will be prompted to enter a name; feel free to enter anything there.  The next fields need to be entered exactly as listed, except for the hostname.




Press Next, and leave the proxy settings at No.



Select the Basic authentication from the drop down.



Leave the session mode at Shared.  Enter your credentials to the REST API.



After pressing submit you are likely going to be prompted relating to the certificate.  Be sure to press submit to save the certificate from your REST API host.



Ensure you have “install this certificate” selected.



If this completes successfully, then you should see a green check box next to “Add a REST host”.



Ok, next step is to import specific REST operations that we leverage during our actions.  Navigate to EMC –> Protection –> Avamar REST –> Avamar REST Configuration.  Right click the Import REST Operations workflow and select Start workflow.



The drop down on the left should expand with your REST host that you added previously.  Grab the ID, highlight it on the right side table, and copy it to your clipboard (ctrl-c).  This will save a step coming up.



Now let the workflow run and you should see it successfully import the operations.



If you saved the ID in your clipboard then skip this step.  Otherwise, you need to grab the ID of your REST Host.  Go to the Update a REST Host workflow and run it.  When you expand the REST host on the left side, go to the table and select the ID field.  Copy it to your clipboard.  Following this cancel out of the workflow.



The next step is to switch the screen to “Design” view at the top drop down.  Also press the white Configurations tab.  From there expand Library below and right click Library and press New Folder.  Here you will be creating a configuration element that holds our authorization token from the REST API.  Enter “com.emc.dpad.avamar_rest” as the name, without quotes.



Right click the folder you created, and press New Element.  Paste the identifier you saved in your clipboard.  This will hold the token for the specific REST API instance.



When you create the element, a window will come up for configuration.  Press the Attributes tab and press the New Attribute button A+.  Enter “X-Concerto-Authorization”, without quotes.



It should look like this!



OK! All set to go, let’s try it out.


Select the blue workflow icon in the top left.  Expand EMC –> Protection –> Avamar REST –> Avamar REST Helpers.  From there right click the Get Sessions workflow and run it.  It will ask you to select a REST host, do so and press Select.  Press Submit.



If all goes well then you should see some JSON appear in the logs window.  If the logs, variables, and general tabs are not visible, they are likely tucked away at the bottom.  Look toward the bottom of the workflow window; there should be a break in the middle of the window itself.  Grab that and drag it up to provide more space for the tabs.



If you see the JSON then you’re good!


I am assuming you have already configured pools, resource shares, folders, etc.  If so then you can start using the additional workflows and add a client interactively.  Browse to Protection –> Avamar REST –> General.  Right click New Client and press Start Workflow.




This window should appear.  Notice that after you select the REST Host, the drop downs for tenant name and folder name are populated.  This happens because we are leveraging the Presentation capabilities of vCO to execute actions dynamically as part of this window.  As you specify a REST Host, it goes out and runs an action that returns a list of tenant names.  Once a tenant is selected, it grabs the list of folders for that tenant.  Cool stuff.  The last field, the client name, is one I fill in manually.  In reality, this could come from any number of places, e.g. Active Directory or DNS.



And that’s it!




Look for plenty more capabilities, the current state is really just the foundation of what needed to take place to make the generic vCO REST plugin work with the Avamar REST interface.  If you look through the actions and workflows, you should see some good examples of working with JSON, presentation inputs, and customizing the REST interface.

PowerCLI module for Public Cloud Protection

Posted by vele1655 | Posted in Automation, Avamar, Backup, Hybrid Cloud, Powershell, vCloud Director | Posted on 09-07-2014



104 cmdlets here! Add the 102 from the previous post and we have 206 Powershell cmdlets for Protection! 

With the latest release of EMC Data Protection, we have introduced a new plugin for VMware’s vCloud Director.  This plugin fills a strategically important gap that exists in the Public Cloud market today: providing embedded protection capabilities from within the Virtual Data Centers of vCD.  The plugin is built for providers that want to deliver protection services to vCD tenants, which is far different from simply protecting vCD.  There are a handful of recent posts where I describe this below.  Here we will focus on leveraging the multi-role API (Cloud Admin, Org Admin, vApp Owner) and consumption of protection services using PowerCLI.  This means a consumer of a VMware Public Cloud that is an Org Admin (typically IT) or VApp Owner (Line of Business Admin) can use these cmdlets to manage protection, provided the provider a) has deployed the Avamar Plugin for vCloud Director and b) has enabled tenant control of backups through the Backup Configuration of the VDC.


At the heart of the Avamar plugin for vCloud Director is vCloud’s Extensibility which helps simplify and standardize delivering the protection services in a Hybrid and Public cloud way.  We will provide some examples as we go through the use cases later in the post on how this works and how it helps simplify, future-proof, and standardize automation in the SDDC world.

Another important point here is that I mention only Public Cloud providers for this module.  This is however not exclusive to this use case.  If you decide to run vCD in your Private Cloud, or currently are, the introduction of the Avamar Plugin/Extensions for vCloud Director unlocks your ability to leverage these cmdlets in infrastructure you own.

Everything shown here leverages the REST API from vCloud Director, so you can use the cmdlets to learn about our Extension of this API and how you might build to it in other languages.



  • Powershell v3, PowerCLI 5.5+
  • The PowerCLI module here
    • dir *.psm1 | import-module -force


Before moving on, I want to first lay out the Role Based Access Control and how it relates to the plugin.  The three roles (Cloud Admin or Provider, Org Admin or IT, VApp Owner or Line of Business Admin) each have their own targeted use cases.  As an example, the Cloud Admin or Provider can do everything, but is mainly responsible for building, designing, deploying, and operating the protection services.  The Org Admin is responsible for assigning protection and replication policies to VApps and VDCs, possibly modifying policies if allowed, and restores.  That leads to the last role, the VApp Owner, which is responsible only for its own VApp and can perform backup/restore and set Virtual Disk exclusion policies.


Notice how we have three columns in green.  This is a good summarization of which cmdlets are useful to which role.  This is always true: notice how Org Admin is a subset of Cloud Admin, and VApp Owner is a subset of Org Admin.  To the right, the action or verb is shown for the cmdlets.  A C, O, or A is filled in where a cmdlet exists and indicates which role the cmdlet adheres to.

C = Cloud Admin, O = Org Admin, A = VApp Owner




There are plenty of great use cases for this plugin.  Again, the purpose isn’t necessarily for this to be the primary means for a provider to manage the plugin.  It does however serve as a good tool for understanding how to automate the same processes in other languages and automation tools, since the (-verbose) flag on each cmdlet shows the REST calls and responses.

In my opinion, some of the coolest stuff here has to do with the roles other than Cloud Admin.  That is: as a consumer of Public Cloud, what functionality can I take advantage of?  Typically in VMware Public Clouds this functionality would have been delivered through a Cloud Portal, whereas here it is exposed directly to tenants through vCD.



First things first!  Load PowerCLI 5.5+, import the module, and connect to a vCD Organization as a VApp Owner or Org Admin.  A VApp Owner (which must be set per VApp) is the most restrictive user you can demonstrate these capabilities with.



Use Cases

  • VApp Owner – On-Demand Backup
  • VApp Owner – VApp Backup Stats
  • VApp Owner – On-Demand Restore
  • VApp Owner – Modify Virtual Disk exclusion list
  • Org Admin – Modify policies
  • Org Admin – Backup Repository and Activities


Before we begin, however: all aspects of RBAC are off by default for Org Admins and VApp Owners.  Has the provider allowed this feature for your backup services? The provider or Cloud Admin can check the Backup Configuration settings with the following command (as a Cloud Admin).

Get-OrgVdc mgc_vdc1 | Get-BackupConfiguration | %{ $_.VappOwnerAuthorizations }



This configuration can be modified using the Update-BackupConfiguration cmdlet.

$BC = Get-OrgVdc mgc_vdc1 | Get-BackupConfiguration

$BC.VAppOwnerAuthorizations.enableAdhocBackup = "true"

$BC.VAppOwnerAuthorizations.enableAdhocRestore = "true"

$BC | Update-BackupConfiguration



This can be repeated for the Org Admin; notice its authorizations.



VApp Owner – On-Demand Backup

The first use case we can show is a VApp Owner performing backups and restores on demand.


1) Start On-Demand/Adhoc Backup

The first step here is to select the proper VApp.  Next we leverage the Start-CIVAppBackup command to begin the backup.  Notice how we also specified the (-verbose) parameter here, which lets us view the REST API call to the vCD Extensions.  Notice the use of “extensibility”: we are issuing a request to a VApp HREF followed by “/backups”.  Talk about simple, right?  One object model, one API, and authorization handled within vCD.

The other important thing to note with this command is that we didn’t specify any policies, retention, datasets, or anything else.  Since we are able to apply default policies to the Virtual Data Centers and explicitly to VApps, the process inherently will use these and API calls can be super simple.

Get-CIVApp linux-01_new | Start-CIVAppBackup



VApp Owner/Org Admin – Backup Stats

In terms of integration, all activities within the plugin are sent to the RabbitMQ message bus that vCD is configured with, under a special exchange.  This allows the provider to integrate for chargeback and other information.  Similar details can also be requested explicitly per backup by the tenant.


1) Get latest CIVApp Backup Stats

Here you can see, per VApp, the latest stats: how many backups exist, and the latest figures for how much is being protected along with the unique space used on the Protection Storage.

Get-CIVApp linux-01_new | Get-CIVAppBackupStat



VApp Owner/Org Admin – On-Demand Restore

1) List Backups

Get-CIVApp linux-01_new | Get-CIVAppBackup



2) Restore In Place

As a VApp Owner you have no access to create new VApps in a VDC from vCloud Director.  For this reason, there is no ability to perform out-of-place restores or restores to new VApps.  All restores for the VApp Owner must be done in place.

Get-CIVApp linux-01_new | Get-CIVAppBackup | select -first 1 | Start-CIVAppRestore -inplace



Currently there is no FLR-based restore capability for the VMs.


VApp Owner – Modify Virtual Disk Exclusion list

This is the only policy a VApp Owner is able to modify.  It can be used in cases where only specific Virtual Disks should be protected, such as protecting the OS drives while excluding larger database drives.  To set the exclusion, you must first look at the identifiers that vCloud has set on the Virtual Disks per VM.


1) Get Virtual Disk addresses

Get-CIVApp linux-01_new | Get-CIVM

Get-CIVApp linux-01_new | Get-CIVMCustom | Get-CIVMCustomHardDisk (we use Custom here since the PowerCLI module for vCD does not allow VApp Owners to see VM details through Get-CIVM)

Get-CIVApp linux-01_new | Get-CIVMCustom | Get-CIVMCustomHardDisk | %{ $_.diskexclude}


Take a look at the disks and their associated addresses. 


2) Submit Exclusion List for the VApp

To build the exclusion list, we look at the existing VMs in the VApp.  We then apply an inclusion filter, and whichever VMs are not included are set for exclusion.  In the following case we want to exclude the first disk of the “mgc-03” VM, so we create a JSON statement as follows.  The 0:2001 represents the addressOfParent followed by the addressOnParent.

Get-CIVApp linux-01_new | New-CIVAppBackupExcludeList -jsonVmFilterInclude '{"mgc-03":{"Disk":["0:2001"]}}'

Get-CIVApp linux-01_new | Get-CIVAppBackupExcludeList | %{ $_.vmexclude.diskexclude }



That’s it for now!  VMware Public Cloud protection—check!


PowerCLI module for VMware vSphere Protection

Posted by vele1655 | Posted in Automation, Avamar, Backup, Powershell, Scripting | Posted on 08-07-2014



We’ve covered some basic use cases prior with an overview of using the REST API and some more functional examples of general file system agents.  We can now take the next step and show how basic VMware vCenter protection is configured from the REST API.

The following use cases are shown; some are done purely with Powershell (Datastores) and others with the addition of PowerCLI (Datastores and VMs).

  • Add VMware vCenter, VM Proxy, and Datastores
  • Start Adhoc Backup
  • Configure VM Policies and Add Clients
  • VM Restores
  • FLR Restores



  • Powershell v3, PowerCLI 5.5+

  • Download the module from this post


Add VMware vCenter, VM Proxy, and Datastores


1) Login, select the tenant, and select the folder

$tenant = Get-Tenant tenant01

$folder = $tenant | Get-AvamarFolder -name folder01



2) Add VMware vCenter as a Hypervisor Manager to a Folder

The next step is to add the vCenter to the folder.  The vCenter instance can leverage credentials that have limits around folders and other things inherent to vSphere’s hierarchy and RBAC.  There are specific minimum access requirements, beyond object access, that are documented in the Avamar Admin guide.

$folder | New-HypervisorManager -name master-vc.brsvlab.local -hostname master-vc.brsvlab.local -username root -password 'Password123!'



3) Add VMware Proxy Client

The next step is to add a VM Proxy Client that will handle the data plane when VMs must be protected.  This VM is deployed as an OVF and is typically hosted per cluster in order to get locality to Datastores.  At this point the proxy can already be deployed, but it should not be configured yet.

Notice how we use the resource share item ($folder.resourceShare or (Get-AvamarFolder -id $ for a refreshed call) and pull the DataProtectionResource to fill this parameter based on the DPR instance name.  This DPR instance will be linked to this VM Proxy Client.


$folder | New-ClientVMProxy -name avproxy-03.brsvlab.local -dataProtectionResource ($folder.resourceShare | Get-DataProtectionResource -name ave-03)



4) Register VMware Proxy VM guest as a VMware Proxy Client

Open the console of the VM Proxy that you deployed.  You should see a menu similar to the following when it is freshly deployed from the OVF.  Enter the FQDN of the DPR instance, followed by the folder name (folder01).  This will associate the guest with the VM Proxy Client that we previously created.



5) Perform Explicit GET on VM Proxy Client

Since this registration was synchronized from behind the API, we must do an explicit GET on the Client to refresh it.

Get-Client -id ($folder | Get-Client -name avproxy-03.brsvlab.local).id



6) Add VMware Datastores to Proxy

The next step is to associate Datastores to VM Proxy Clients.  This gives manual control over which proxies, and how many, can perform backups across which Datastores.  Notice how we get the Datastore list from the HypervisorManager.

$datastore = $folder | Get-HypervisorManager -name master-vc.brsvlab.local | %{ $_.vmDatastore }

$folder | Get-Client -name avproxy-03.brsvlab.local | Update-VMProxyDatastore -hypervisormanager ($folder | Get-HypervisorManager -name master-vc.brsvlab.local) -adddatastore $datastore



In my opinion, you would likely only add Datastores that have more than one host, since those represent shared Datastores among ESX Hosts.  The remaining Datastores are likely local and probably should not host VMs that are protected.  Not a hard rule, however!

$datastore = $folder | Get-HypervisorManager -name master-vc.brsvlab.local | %{ $_.vmDatastore } | where {$_.hosts.count -gt 1}
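Putting the two snippets together, the shared-Datastore filter can feed the proxy association in a single pass (a sketch built only from the cmdlets shown above):

```powershell
# Grab the HypervisorManager once so we don't query it repeatedly
$hvm = $folder | Get-HypervisorManager -name master-vc.brsvlab.local

# Keep only Datastores visible to more than one ESX Host (shared storage)
$shared = $hvm | %{ $_.vmDatastore } | where {$_.hosts.count -gt 1}

# Associate just the shared Datastores with the VM Proxy Client
$folder | Get-Client -name avproxy-03.brsvlab.local |
    Update-VMProxyDatastore -hypervisormanager $hvm -adddatastore $shared
```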


Let’s change this up a bit.  So far we have focused on showing how PowerShell, along with this module and the Avamar REST API, can manage protection.  What if you wanted to leverage a native VMware automation tool like PowerCLI as well?  We will modify the previous step a bit by first connecting PowerCLI to the vCenter instance, and then using the Datastore list it presents to update the VM Proxy Client Datastore associations.  What’s the value here?  Not much in this use case, but wait for later when we work with VMs, Config Properties, and Tags!


This is optional, but if you want to follow along, load PowerCLI 5.5+ and import this module.

Connect-VIServer -server master-vc.brsvlab.local

$folder | Get-Client -name avproxy-03.brsvlab.local | Update-VMProxyDatastore -hypervisormanager ($folder | Get-HypervisorManager -name master-vc.brsvlab.local) -adddatastore (Get-Datastore nfs-01-ds)



Start Adhoc Backup

1) Set Hypervisor Manager Folder for placement of configuration objects

$hvmfolder = $folder | Get-HypervisorManagerFolder -path master-vc.brsvlab.local/VirtualMachines



2) Add Retention

This is a repeat from the last post, but let’s create a new 60-day retention and place the object in the Hypervisor Manager folder.

$hvmfolder | New-Retention -name retention03 -retentionduration (New-ObjectAvamar -type retentionduration -property @{unit="days";duration=60})
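The same New-ObjectAvamar hashtable pattern should work for other durations as well; for example, a six-month retention (the "months" unit string and the retention name here are assumptions, extrapolated from the "days" example above):

```powershell
# Hypothetical 6-month retention, following the same hashtable pattern
$hvmfolder | New-Retention -name retention04 `
    -retentionduration (New-ObjectAvamar -type retentionduration -property @{unit="months";duration=6})
```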



3) Add VMware Dataset

Here we create a Dataset that has default settings for VMware Linux and Windows images.  We also specify the Data Domain index to use.

$hvmfolder | New-DatasetVMware -name vmdataset01 -DataDomainIndex 1



4) Add VM as Client VM

In this step we are using PowerCLI again to get the proper VM.  Here we specify the VM object itself; it would be just as valid to skip PowerCLI and specify the VM by vSphere folder path or by its instanceUuid.

$hvmfolder | New-ClientVM -vm (Get-VM -name lguest-01)
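The instanceUuid alternative mentioned above can be sketched as follows.  PowerCLI is still used here just to look the value up via the VM’s .NET view; the -instanceUuid parameter name on New-ClientVM is an assumption.

```powershell
# Pull the vSphere instanceUuid for the VM via PowerCLI's .NET view
$uuid = (Get-VM -name lguest-01).ExtensionData.Config.InstanceUuid

# Register the VM Client by instanceUuid (parameter name assumed)
$hvmfolder | New-ClientVM -instanceUuid $uuid
```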



5) Start Adhoc VM Client Backup

Get-VM lguest-01 | Get-VMClient -folder $hvmfolder | Start-ClientVMBackup -retention ($hvmfolder | Get-Retention -name retention03) -dataset ($hvmfolder | Get-Dataset -name vmdataset01)



Configure VM Policies and Add Clients

1) Add Schedule

$hvmfolder| New-Schedule -name schedule01 -dailySchedule (new-objectAvamar -type dailyschedule -property @{timeOfDays=@("00:00:00.000-07:00","01:00:00.000-07:00");maxRunHour=1})



2) Add VMware Policy

$hvmfolder | New-Policy -name policy01 -dataset ($hvmfolder | Get-Dataset -name vmdataset01) -schedule ($hvmfolder | Get-Schedule -name schedule01) -retention ($hvmfolder | Get-Retention -name retention03)



3) Add VM Client to Policy

$hvmfolder | Get-Policy -name policy01 | New-PolicyClient -Reference (Get-VM lguest-01 | Get-VMClient -folder $hvmfolder)



Virtual Machine Restores

1) Select Backup

$clientbackup = $hvmfolder | Get-Client -name lguest-01 | Get-ClientBackup -name MOD-1404792885933



2) Restore VM

The following cmdlet will restore the VM in place.

$clientbackup | Start-ClientVMRestore -inplace



The other option here would be to do a restore out of place.

$ClientBackup | Start-ClientVMRestore -outofplace -name lguest-01-restore -hypervisorManager ($folder | Get-HypervisorManager -name master-vc.brsvlab.local) -datacenter brsvlab-datacenter -datastore nfs-01-ds -esxhost esx01.brsvlab.local -vmFolder test



FLR Restores

This one is very cool.  You can perform granular restores using any client backup, including FLR for VMs.

The first step is to select the path.  Here we browse the backup to the /etc directory.  We can select any of the files and restore them to a specified location.  The restore takes place via the VM Tools of the guest operating system without leveraging any networking, purely through the Hypervisor to the Guest.

$clientbackup | New-ClientBackupBrowse -path '/etc' | ft * -autosize



$clientbackup | Start-ClientVMRestoreFLR -backupSource @("/etc/slp.spi") -destinationPath @("/tmp") -username root -password changeme



That’s it for the basic VM capabilities in the REST API.  Look for follow-on posts that get more advanced and specific around policy management.  There will also be updates to the PowerCLI integration.