A consultant working with our Alcatel phone system ran into a weird issue that caused us some problems the other day. When he attempted to install an Open Touch Media Server (used for receiving faxes, for example), the entire vCenter client environment froze, and a reload of the page resulted in the following error message:
503 Service Unavailable (Failed to connect to endpoint: [N7Vmacore4Http20NamedPipeServiceSpecE:0x0000…] _serverNamespace = / action = Allow _pipeName =/var/run/vmware/vpxd-webserver-pipe)
A lot of web searching led me nowhere: there were plenty of proposed solutions, but none whose symptoms matched what I was experiencing. I had not changed the IP address of the vCenter Appliance, nor had I changed its name, and my logs were not reporting conflicting USB device instances.
What I did have, though, was a new OpenTouch server on one of my ESXi hosts with no network assigned to its network interface, and this, apparently, is not a configuration that vCenter was written to take into consideration.
After identifying which ESXi host the machine was running on, I logged on to that host’s local web client and selected the machine in question. There I got a warning message describing the network problem, along with a link to the Action menu. Simply selecting a valid network and saving the machine configuration was enough to allow me to ssh to the vCenter Appliance and start the vmware-vpxd service:
# service-control --start vmware-vpxd
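If you want to confirm that the service actually came up afterwards, the same tool reports status:

# service-control --status vmware-vpxd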
We’ll just have to see how we proceed from here…
As part of my evaluation of presenting vVols to vCenter from an IBM FlashSystem V9000, I decided to start from scratch after learning a bit about the benefits and limitations of the system. That is: I like vVols a lot, but I learned some things in my tests that I wanted to do differently in actual production.
Unfortunately, once I had migrated my VMs off the vVol datastores, I still couldn’t detach the relevant storage resources from the storage service in Spectrum Control Base. The error message was clear enough: I’m not allowed to remove a storage resource that still has vVols on it. My frustration stemmed from the fact that vCenter showed neither VMs nor files on any of the vVol datastores, yet I could clearly see them (labeled as “volume copies”) in the “Volumes by Pool” section of the SVC webUI on the V9000.
At least as of version 7.6.x of the SVC firmware, there is no way of manually removing vVols from the GUI, and as usual in such cases, we turn to the CLI:
I connected to the V9000 using ssh, taking care to log on as my VASA user. All virtual disks on the V9000 can be listed using the lsvdisk command. The first two columns are the ID and name, and either of these can be fed to the rmvdisk command to manually remove a volume.
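As a sketch (the volume name below is made up; yours will be whatever the VASA provider generated), removing a leftover vVol looks something like this:

> lsvdisk
id name        ...
12 vvol_a1b2c3 ...
> rmvdisk vvol_a1b2c3

Feeding it the ID instead (rmvdisk 12) works just as well.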
Just to be clear: The rmvdisk command DELETES stuff. Do not use it unless you really mean it! With that warning out of the way: once I had removed the volumes and waited a couple of minutes for the change to propagate to Spectrum Control Base, detaching storage resources from storage services was a breeze.
While trying to test out vVols in our vSphere 6.5 environment, presented via IBM Spectrum Control Base 3.2 from a Storwize V9000 SAN, I ran into a small issue that took me a while to figure out:
I installed Spectrum Control Base 3.2 and presented its web services via an FQDN.
To avoid the nagging of modern browsers, I used a regular wildcard certificate valid for the domain I chose to use.
After the initial setup, when I tried to add SCB as a storage provider in VMware, I got the following error message: “A problem was encountered while provisioning a VMware Certificate Authority (VMCA) signed certificate for the provider.”
A web search showed me that this was a pretty common problem with several VASA providers, but none of the suggested solutions applied to our environment. After half an hour of skimming forums and documentation, I found the following quote in an ancient support document from VMware:
Note: VMware does not support the use of wildcard certificates.
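In hindsight, it’s easy to check what certificate a provider actually serves before pointing vCenter at it. The hostname and port below are placeholders for wherever your SCB web interface lives:

# openssl s_client -connect scb.example.com:8440 </dev/null 2>/dev/null | openssl x509 -noout -subject
subject=CN = *.example.com

A subject CN starting with an asterisk is the red flag here.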
So: I generated a self-signed certificate in the Spectrum Control Base server webUI, and the problem disappeared.
Today’s lesson: we don’t use wildcard certificates in a VMware service context.
For driver reasons, the default disk controller in VMware guests is an emulated LSI card. However, once you install VMware Tools in Windows (and immediately after installing the OS in most modern Linux distributions), it’s possible to slightly lower the overhead for disk operations by switching to the paravirtual SCSI controller (“pvscsi”).
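If you’re unsure which controller a given VM currently uses, the virtualDev entry in its .vmx file tells you (the datastore path here is hypothetical):

# grep scsi0.virtualDev /vmfs/volumes/datastore1/oldserver/oldserver.vmx
scsi0.virtualDev = "lsisas1068"

After a successful conversion, the same line reads "pvscsi".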
I’m all for lower overhead, so my server templates have already been converted to use the more efficient controller, but I still have quite a lot of older Windows servers running the LSI controller, so I’ve made it a habit to switch them over whenever I have them down for manual maintenance. There is a perfectly good, well-documented way of switching Windows system drives to a pvscsi controller in VMware, so up until a couple of days ago I had never encountered any issues.