Executing shell commands remotely on multiple hosts

Recently, I was asked whether we could automate ESXi shell commands remotely on a large number of ESXi servers.

As there is no straightforward way to do it, at least not using PowerCLI, I had to try PuTTY.

Plink is a command-line tool that executes shell commands on remote servers, so that was a start. The problem was that I did not want it to work interactively, because I would have had to accept each server's host key by hand, and where is the automation in that?

Plink does not offer an option to accept host keys automatically, so I had to look elsewhere.

Pscp is another command-line tool, for copying files to remote hosts, and it does allow accepting host keys automatically and adding them to the machine's cache, where Plink can use them later.
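To make the key-caching trick concrete, here is a minimal sketch that generates the per-host command pairs as a script. The host names, password, and dummy file are placeholders, and plink/pscp are assumed to be on the PATH when the generated script is actually run; "services.sh restart" is the ESXi command that restarts the management agents.

```shell
# Sketch: generate one pscp + plink command pair per ESXi host.
HOSTS="esxi1 esxi2 esxi3"     # placeholder host list
PASS='changeme'               # placeholder root password
: > restart_agents.sh
for h in $HOSTS; do
  # pscp goes first: piping "y" accepts the host key and caches it,
  # so plink can then connect without any interactive prompt
  echo "echo y | pscp -pw $PASS dummy.txt root@$h:/tmp/" >> restart_agents.sh
  # -batch keeps plink strictly non-interactive
  echo "plink -batch -pw $PASS root@$h services.sh restart" >> restart_agents.sh
done
cat restart_agents.sh
```

Running the generated restart_agents.sh once caches every host key, after which plink alone is enough for later automation.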

Both tools (plink, pscp) can be found here, with documentation.

And to wrap it all up, I have written a short PowerShell script that uses these command-line tools to restart the management agents on multiple ESXi servers without prompting the user for any input.

The script is on GitHub:

It does require a bit of tweaking for each environment.

Hope someone can benefit.

Load balancing Horizon View security servers

This question comes up quite often in the forums, and since VMware's documentation is lacking, in my opinion, I decided to give a brief explanation of how to do it.

What I will describe here is the use of a load balancer with a group (two or more) of security servers behind it (the same can be implemented for connection servers). I assume all servers are installed and paired with their connection servers.

So here goes. First we need to understand how the Horizon network flows work. When a user connects, he first creates a session to one of the security servers, where he authenticates (AD, RSA, smart card). If he authenticates successfully, another session is established between the user and the security server. Understanding this is crucial, because the whole process is based on it.

Let's break the network flow down a bit. The load balancer should balance only HTTPS, because it load balances only the authentication process (there is no load balancing of PCoIP / RDP / Blast). So a user connects to the FQDN of the load balancer (over HTTPS), and the load balancer directs him to one of the security servers for the authentication process. If he is authenticated, the security server sends the client the FQDN for the second session (for this example, let's say PCoIP); this FQDN should be configured in the security server's configuration on the administrator's page. The user then connects directly to the security server based on that FQDN.

That means a couple of things:

1. The sessions themselves are never load balanced; they do not go through the load balancer.

2. Each security server will have different FQDNs configured on the administrator's page.

3. The security servers must have firewall rules allowing access from the internet, and real public IPs.

Let's say the load balancer address is lb.view.com, and we have two security servers, ss1.view.com and ss2.view.com.

The ss1.view.com server configuration in the administrator's page will be:

RDP: https://ss1.view.com

PCoIP: 80.80.80.80

Blast: https://ss1.view.com:8443

The ss2.view.com configuration will be similar, but with its own FQDN and public IP.

Hope I made it a little clearer.


Checking certificate expiration using PowerShell

As the usage of signed certificates grows, the need for an automated way to check them grows as well.

It's not nice to discover your vCenter is disconnected from everything just because you forgot to renew its certificate.

Luckily, it's quite easy to script with PowerShell:

$urls = @('https://www.google.com','https://www.yahoo.com') # add more URLs as needed

foreach ($url in $urls)
{
    $req = [Net.HttpWebRequest]::Create($url)
    $req.GetResponse() | Out-Null
    $expiration = $req.ServicePoint.Certificate.GetExpirationDateString()
    $url, $expiration
}

This will print each website with its expiration date. You can also specify a port in the URL, if the server has more than one certificate installed.
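The same kind of check can also be done from a shell with openssl. In this sketch a throwaway self-signed certificate stands in for one fetched from a real server (fetching one would use openssl s_client -connect host:443); the name demo.example is a placeholder.

```shell
# Create a throwaway self-signed cert to check (stands in for a real one)
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -days 90 -subj "/CN=demo.example"

# Print the expiration date (notAfter=...)
openssl x509 -in demo.crt -noout -enddate

# Exit 0 only if the cert is still valid 30 days (2592000 s) from now
openssl x509 -in demo.crt -noout -checkend 2592000 && echo STILL_VALID
```

The -checkend flag makes it easy to wire the check into a cron job that alerts before a certificate expires.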


Adding a datastore to vCD 5.5

I have been an early adopter of vCD, and have been using it for a long time now.

One thing I have noticed is that VMware sometimes makes big changes across versions, and the documentation is lacking.

The datastore configuration is no exception.

If you have been using vCD, you know you need to assign the datastore a storage policy in vCenter; this should be done before even searching for it in vCD. It is a pretty simple step: the policy is actually just a label, or a description.

Once done, the datastore will automatically (or after a storage policy refresh) appear under the right policy in vCD.

I have done this many times in the past with no issues. But lately I tried it and it did not work. I tried refreshing the storage and restarting the Inventory Service; nothing worked. The datastore just did not show up under the right policy.

And then I noticed that after upgrading vCenter to 5.5, the storage policies had been migrated to tags. Tagging was quite new to me; I had never used it. All the existing datastores had tags assigned to them that were copies of the old storage policies.

And I noticed that they are just what they are: tags. So I tagged the new datastore, refreshed the storage policy in vCD, and it worked. The datastore appeared under the right policy and was usable.

So, if you are using vSphere 5.5, remember: do not use the fat client (it has no support for tagging); use the web client to tag the datastores before using them in vCD.


Path selection policy experience and PowerCLI

In this post I will discuss how to change the PSP (path selection policy) for FC LUNs.

The PSP determines how an ESXi host sends I/O to the FC storage. The default (in my scenario, which is HDS) is FIXED.

FIXED is usually used for active/active arrays (I will not get into the differences between arrays here). The idea is that all paths are active, but I/O goes over only one of them. If a failover occurs, the I/O switches to one of the other paths and stays there. The problem with this policy is that the FC ports are not balanced: some are heavily utilized and some are not. The other problem is that it is not the best practice of either VMware or HDS. The best practice is RoundRobin, which basically means the ESXi host uses all paths, sending a specific number of I/O commands down each path before moving on to the next, so that all paths are utilized.

Here is the FC port utilization when using the FIXED PSP:

This configuration can be done using the fat / web client, but since I have too many ESXi hosts and too many LUNs, I decided to go the PowerCLI way.

At first I tried the Set-ScsiLun cmdlet, running it against all the LUNs and ESXi hosts in the cluster, something like this:

#### you can filter the luns based on your needs
$luns = Get-VMHost -Name esxi1 | Get-ScsiLun
$esxihosts = Get-Cluster -Name cl | Get-VMHost
foreach ($lun in $luns)
{
    foreach ($esxi in $esxihosts)
    {
        $esxi | Get-ScsiLun -CanonicalName $lun.CanonicalName | Set-ScsiLun -MultipathPolicy "RoundRobin"
        Write-Host ($esxi.Name, $lun.CanonicalName)
    }
}

This script works well, but it is very slow: each command takes about two minutes, and it would have taken the whole weekend, or even more, just to change one of my clusters. So I had to find a better way. After researching other methods I came across the Get-EsxCli cmdlet, which can accomplish the same task in a different way; I can change the inner loop of my script to this:

$esxcli = Get-EsxCli -VMHost $esxi
$esxcli.storage.nmp.device.set($null, $lun.CanonicalName, "VMW_PSP_RR")
Write-Host ($esxi.Name, $lun.CanonicalName)

It has the same outcome, only much faster: each command takes less than a second, which means the whole script ran in only a couple of minutes, which is what I wanted.

Here is the utilization after the PSP was changed:

I even took one that shows how the utilization evens out:

Pretty nice, I think.

The Get-EsxCli cmdlet is quite powerful; it can be used to configure many things on ESXi hosts, and to do so efficiently.

There is not a lot of documentation about it, but it is similar to the esxcli shell command, so it is not too hard to figure out the syntax.

By the way, there is an additional call to change the default PSP to RoundRobin, so that new LUNs are configured automatically:

## This is the SATP for HDS (it can be different depending on the array)
$psp = "VMW_PSP_RR"
$satp = "VMW_SATP_DEFAULT_AA"
$esxcli.storage.nmp.satp.set($null, $psp, $satp)
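For reference, here is a sketch of the equivalent esxcli shell commands that these Get-EsxCli calls map to; you would run them in an ESXi shell, and the device name is a placeholder. They are written to a reference file here so they can be kept alongside the script.

```shell
# Shell-side equivalents of the two Get-EsxCli calls above.
# naa.60060e80xxxxxxxx is a placeholder device name.
cat > esxcli_psp_reference.txt <<'EOF'
esxcli storage nmp device set --device naa.60060e80xxxxxxxx --psp VMW_PSP_RR
esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR
EOF
cat esxcli_psp_reference.txt
```

Seeing the esxcli namespace (storage.nmp.device, storage.nmp.satp) side by side with the PowerCLI object path is the easiest way to work out the Get-EsxCli syntax.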

That's about it. Now I follow the best practice, and I have learned how to use PowerCLI in an efficient way.


Implementing signed certificates on Horizon View servers

In this post I will describe how to implement signed certificates on Horizon View servers using OpenSSL.

Most VMware products use the installer's self-signed certificates (which is good for testing), but VMware usually recommends replacing them with CA-signed certificates to make the infrastructure more secure.

I found the available documentation a little thin, so I thought I would describe the process.

So, what you will need: OpenSSL. OpenSSL is a command-line tool that creates, signs, and converts certificates (it does pretty much anything certificate-related), and it is open source.

You will also need access to a CA server to sign the certificate. In this case I am using a Microsoft CA server, but any other can be used (although with small changes to the process).

The first step is to create an OpenSSL config file. I prefer using config files because they help avoid the interactive wizard, and by that avoid mistakes.

This is an example of a config file:

[ req ]
default_bits = 2048
default_keyfile = vdi01.key
distinguished_name = req_distinguished_name
encrypt_key = no
prompt = no
string_mask = nombstr
req_extensions = v3_req

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = DNS: vdi01, IP:10.0.0.11, DNS:vdi01.ronen.com

[ req_distinguished_name ]
countryName = US
stateOrProvinceName = NC
localityName = Durham
0.organizationName = RonenINC
organizationalUnitName = Engineering
commonName = vdi01.Ronen.com

Change the values in the file (shown here with example data) to match your environment, and save it under the name of the server: vdi01.cfg.

Important: the commonName field should be the FQDN of the server (if the server will be behind a load balancer, it should be the FQDN of the load balancer).

The subjectAltName should contain the short name of the server, its IP address, and any other URL you might use to connect to it. SANs are very useful when you connect to a server by different names / URLs.

The next step is to create the certificate signing request and the key file. It is done with a single command, run from a command prompt with administrator privileges:

openssl req -new -nodes -out vdi01.csr -keyout vdi01.key -config vdi01.cfg

This command will read the config file we created, take the info in it, and create two files: the private key and a certificate signing request.

In this step we will use the Microsoft CA server to sign the CSR. You can use any other CA server (this is just what I have), but the process will be a little different.

Use IE to open the URL of the CA server: https://caserver.ronen.com/certsrv

Click "Request a Certificate".

Click "Submit a certificate request by using a base-64-encoded CMC or PKCS #10 file, or submit a renewal request by using a base-64-encoded PKCS #7 file".

Open the CSR file we created and copy all of its content (including the header).

Paste the CSR content into the CA server's form and select the certificate template you need (this depends on your environment).

Then click "Submit".

If everything is OK, you will be prompted to choose the encoding of the certificate; choose "Base 64 encoded" and click "Download certificate".

Save the certificate to the same location as your key file.

Change the certificate file extension from .cer to .crt (this has no effect on the content), and rename it after the server to keep things clear: "vdi01.crt".

Since Horizon View uses a PFX file and not just the certificate, we will have to create one and import both the certificate and the key file into it:

openssl pkcs12 -export -in vdi01.crt -inkey vdi01.key -name vdm -passout pass:testpassword -out vdi01.pfx

Important: don't change the name or the password; this is the way the server expects them to be.
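To try the whole OpenSSL flow end to end without a CA server, here is a condensed sketch in which a self-signed certificate stands in for the CA-issued one. The file names and password follow the walkthrough above; the config is trimmed to the essentials.

```shell
# Trimmed config (same structure as the full example above)
cat > vdi01.cfg <<'EOF'
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
encrypt_key = no
prompt = no
req_extensions = v3_req
[ v3_req ]
subjectAltName = DNS:vdi01, DNS:vdi01.ronen.com
[ req_distinguished_name ]
commonName = vdi01.ronen.com
EOF

# CSR + private key (same command as in the walkthrough)
openssl req -new -nodes -out vdi01.csr -keyout vdi01.key -config vdi01.cfg

# Stand-in for the CA step: self-sign the CSR
# (in production, the CA returns vdi01.crt instead)
openssl x509 -req -in vdi01.csr -signkey vdi01.key -days 365 -out vdi01.crt

# Bundle cert + key into the PFX Horizon View expects ("vdm" name, known password)
openssl pkcs12 -export -in vdi01.crt -inkey vdi01.key -name vdm \
  -passout pass:testpassword -out vdi01.pfx

# Sanity check: the PFX opens with the expected password
openssl pkcs12 -in vdi01.pfx -passin pass:testpassword -noout && echo PFX_OK
```

In a real rollout you would replace only the self-signing step with the CA submission described above; the req and pkcs12 commands stay the same.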

The last step is to install the new certificate on the Horizon connection servers; copy the PFX file to the server.

Launch mmc on the Horizon View server, add the "Certificates" snap-in, and select "Computer account".

Right-click the "Personal" folder and choose "Import".

Select the PFX file we created, and check "Mark this key as exportable" and "Include all extended properties".

Rename the old "vdm" certificate to another name ("vdm-old"); the Horizon View server looks for "vdm".

Make sure the new certificate we imported is named "vdm".

Restart the Horizon View services and make sure you can connect to the admin page, and also with the client, without getting a certificate error in the browser or the client (the services may take a couple of minutes to start).

A couple of things to pay attention to:

You will have to go to the View configuration, edit each connection server, and change the IPs and URLs to match the certificate and what the users will use. Generally, for internal users, the defaults will work fine. I also found out that I can load balance Blast and the HTTPS secure tunnel, although this is not supported by VMware and can cause all kinds of issues.

PCoIP can't be load balanced.

Verify in the admin page dashboard that you don't have error messages on the connection server. Usually, if the certificate URL does not match the URL configured for the connection server, there will be an error.

When using external security servers in the DMZ (where else would you put them?), make sure to create the certificate with the public URLs and IPs, and also to configure them in the security server configuration, again to avoid certificate errors.

That's it; this is how to install signed certificates on Horizon View servers.

Using the OVFTOOL

OVFTOOL is a command-line tool that lets you import / export a VM from (almost) any product / format to any other. It supports vSphere, vCD, VMware Workstation, VMware Fusion, OVF, OVA, and more. It can also convert between formats.

It is also a very useful and easy way to move VMs across vCenter servers without doing it in two phases with some kind of storage in between (this is my most common use case).

The syntax is simple: just give the source and destination of the VM, and if needed, some options.

For example, to export a VM from vSphere to OVF format:

ovftool vi://vcenter/datacenter/vm/vmname c:\ovfs\vmname.ovf 

One thing to remember: ovftool is case sensitive, so the locators have to match exactly what is configured in vSphere. This can be tricky with a complex inventory tree.

When deploying into vSphere, we need to specify the datastore where the VM will reside, the disk mode, and the network it will be connected to (if it has one NIC). Here is an example of copying a VM from vCenter to vCenter:

ovftool -dm=thin -ds=vol1 -nw=net1 vi://vcenter1/datacenter1/vm/vmname vi://vcenter2/datacenter2/host/cluster/ 

-dm – the disk mode: thin, thick, etc.

-ds – the target datastore for the VM.

-nw – the network the VM will be connected to.

ovftool will prompt for credentials to log in to the vCenters (they can also be embedded in the locators, as in vi://user:password@vcenter/..., for scripting), so the user must have access to the resources involved.

Another interesting option is --net. When the VM has two NICs, ovftool needs to know which NIC goes to which network, so we have to map both:

ovftool -dm=thick -ds=vol1 --net:sourcenet1=targetnet1 --net:sourcenet2=targetnet2 vi://vcenter1/datacenter1/vm/vmname vi://vcenter2/datacenter2/host/cluster/

Here, "sourcenet1" is the original network, and "targetnet1" is the network that NIC will be connected to in the target location.

There are a lot more options, such as powering on the VM after deployment, changing the memory size, the CPU count, and so on.

The user guide and bits are here.

vSphere Inventory Service

So, since version 5.1 the Inventory Service has been a separate, stand-alone component of the vSphere suite. You can install it on the vCenter server itself or on a dedicated server.

My first thought was that it is used as an in-memory caching service for the web client, so it can serve read requests and decrease the load on the vCenter service; if that were the case, there would be no need to back it up.

But recently I found out that it does store data that is not stored anywhere else: tags and storage profiles. If you are not using either of these, you really don't need to back it up; but if you do, or if you use vCloud Director (which uses storage profiles), it is very important to back it up.

This KB describes how to back up and restore the Inventory Service.

Another thing I found out lately is that the Inventory Service uses an XML database (x-hive), and sometimes it can become corrupted (searches will error or time out). In that case you should restore the database, or reset it.

If you don't need the data in it, because you don't use tagging or storage profiles, you can just reset it; the process is described in this KB, and it is quite easy to do.

Virtualizing MSCS 2012R2 on vSphere

I have been working lately on virtualizing MSCS 2012 R2 clusters on vSphere.

According to this KB, it is officially supported on vSphere 5.5 Update 1.

I spent some time on it, but for some reason the cluster validation kept failing because of "disk arbitration" test problems.

After contacting all the vendors (VMware, HDS, Microsoft), I found out there is a bug related to active-active arrays (in my case, a VSP) that causes the validation to fail.

If you use any kind of ALUA array, you are bug-free and should be able to implement this.

The bug is currently being handled by VMware engineering, and a fix should be released soon.

UPDATE: this KB popped up, which explains the situation. No solution yet.


VMware View and RSA integration

I have been using VMware View for several years now, but this is the first time I have had the opportunity to integrate it with RSA Authentication Manager.

The documentation is very limited, and you would expect it to just work with two clicks of a button. It seems that it does not for RSA version 8 and up: the administrator page does not accept its sdconf.rec file.

But there is a KB that explains what needs to be done to work around that.

That is not enough, though: that process just uploads the file to the connection server. You still have to go to the GUI administrator page and enable two-factor authentication with RSA SecurID.

And it's pretty cool: you get it integrated nicely into the Horizon client.
