In Part 1 we showed you how to deploy vCloud Connector 2.6, and in Part 2 we showed you how to configure it. Here in Part 3 we will show you how to migrate a workload from your private cloud to your public cloud. Let's go back to our vCenter and select the vCloud Connector plugin. Expand your vCenter and select the folder that contains the VM you wish to copy to vCHS.
So luckily I was selected for the June 2014 wave of vCloud Hybrid Service access as a vExpert. I have been looking forward to using vCHS since it went GA and am glad to finally have the opportunity to use it hands-on. Once I got access I knew I wanted to copy some of my existing workloads to vCHS instead of having to start from scratch. I began to look for some good guides on implementing vCloud Connector 2.
Here is Part 2, a continuation of Deploying vCloud Connector 2.6 and Configuring For vCHS (Part 1) (<a href="http://davidstamen.com/2014/06/03/deploying-vcloud-connector-2-6-and-configuring-for-vchs-part-1/">http://davidstamen.com/2014/06/03/deploying-vcloud-connector-2-6-and-configuring-for-vchs-part-1/</a>), where we will configure vCloud Connector for your private (vCenter) and public (vCHS) clouds. Let's start with getting the vCloud Connector Server ready. Just in case you forgot what IP you assigned, we can open the appliance console and view the management information. As we can see here, the URL to configure the appliance is <a href="https://192">https://192</a>.
For upcoming testing there was a need to create 140 datastores on a cluster. Who wants to do that much clicking and typing? Not me! You can use the below PowerCLI commands to get the SCSI IDs, create the datastores, and then rescan all hosts in the cluster. How do I get the CanonicalName for allocated disks? Get-ScsiLun -VMHost 192.168.1.103 -LunType Disk | Select CanonicalName,Capacity How do I create a VMFS datastore for the CanonicalName I identified above?
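A minimal PowerCLI sketch of the create-and-rescan steps described above; the host IP, cluster name, datastore name, and canonical name are placeholder values, and the session is assumed to already be connected via Connect-VIServer:

```powershell
# Create a VMFS datastore on the device identified by Get-ScsiLun
# (replace the name and naa. identifier with your own values)
New-Datastore -VMHost 192.168.1.103 -Name "Datastore-01" `
    -Path "naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" -Vmfs

# Rescan storage on every host in the cluster so they all see the new datastore
Get-Cluster "LabCluster" | Get-VMHost |
    Get-VMHostStorage -RescanAllHba -RescanVmfs
```

For 140 datastores you would wrap New-Datastore in a loop over the canonical names returned by Get-ScsiLun and rescan once at the end.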
After upgrading vCenter this functionality is no longer enabled by default. Perform the following steps to enable renaming of files upon a successful Storage vMotion. Log into the vSphere Client as an Administrator. Click Administration > vCenter Server Settings, then click Advanced Settings. Add the advanced parameter key provisioning.relocate.enableRename, set the value to true, click Add, and click OK. Restart the VMware VirtualCenter Server service for the changes to take effect.
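If you would rather not click through the GUI, the same setting can in principle be applied with PowerCLI's New-AdvancedSetting cmdlet against the connected vCenter Server. This is a sketch, not the method from the post: the exact key name stored by vCenter may carry a prefix, so verify the result with Get-AdvancedSetting afterwards.

```powershell
# Add the advanced key to the vCenter Server this session is connected to
# ($global:DefaultVIServer is set by Connect-VIServer)
New-AdvancedSetting -Entity $global:DefaultVIServer `
    -Name "provisioning.relocate.enableRename" -Value "true" -Confirm:$false

# Confirm the key was written as expected
Get-AdvancedSetting -Entity $global:DefaultVIServer -Name "*enableRename*"
```

A service restart is still required for the change to take effect, as noted above.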
If you are configuring a cluster with fewer than 2 datastores, you will receive the HA warning "The number of heartbeat datastores for host is 1, which is less than required: 2". You can add an option to the HA Advanced Options to suppress this warning. Log in to vCenter Server. Right-click the cluster and click Edit Settings. Click vSphere HA > Advanced Options. Under Option, add an entry for das.
When setting up a cluster for testing you may not have 2 NICs to use for management. To bypass the warning, you can configure HA not to alert you on this issue. To perform the steps in the C# client: from the VMware Infrastructure Client, right-click the cluster and click Edit Settings. Select vSphere HA and click Advanced Options. In the Options column, enter das.ignoreRedundantNetWarning. In the Value column, type true. Click OK.
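The C# client steps above can also be sketched in PowerCLI using New-AdvancedSetting with the ClusterHA type; the cluster name below is a placeholder:

```powershell
# Suppress the redundant-management-network HA warning on a lab cluster
# ("LabCluster" is an example name)
Get-Cluster "LabCluster" |
    New-AdvancedSetting -Type ClusterHA `
        -Name "das.ignoreRedundantNetWarning" -Value "true" -Confirm:$false
```

As with the GUI method, HA typically needs to be reconfigured on the hosts before the warning clears.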
This is a great script to keep handy. If you have multiple RDMs on a VM and need to get the NAA ID for them, the below PowerCLI command will get you that information. Get-VM VMNAME | Get-HardDisk -DiskType "RawPhysical","RawVirtual" | Select Parent,Name,DiskType,ScsiCanonicalName,DeviceName,CapacityGB | fl If you then need to match the SCSI virtual disk to the Guest OS, this is a great article on how to do so: KB2051606
While continuing to build out my lab for the VCAP-DCA, today I had to deploy the vMA (vSphere Management Assistant). Upon deployment I tried to SSH to it and unfortunately was not able to. By default SSH is turned off; perform the steps below to enable it. Log on to the vMA via a console session. Run 'sudo vi /etc/hosts.allow'. Scroll to the very bottom and type i to insert content into the file.
I was working today on configuring NFS/Openfiler in my lab and came across an issue where my nested ESXi hosts couldn't talk over the VSS (Standard Switch) I created. After further research I found that when using nested ESXi you need to enable "Promiscuous Mode" on the VSS to allow the traffic to pass.
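The promiscuous-mode change can be scripted with PowerCLI rather than made in the GUI; a minimal sketch, where the host IP and vSwitch name are placeholder values:

```powershell
# Enable Promiscuous Mode on the standard vSwitch carrying nested ESXi traffic
# (192.168.1.101 and "vSwitch1" are example values)
Get-VMHost 192.168.1.101 | Get-VirtualSwitch -Name "vSwitch1" |
    Get-SecurityPolicy | Set-SecurityPolicy -AllowPromiscuous $true
```

Repeat for each host (or pipe Get-Cluster | Get-VMHost through the same chain) so every nested ESXi VM can see the traffic regardless of which physical host it lands on.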