So basically SRM works like this:

  • It runs a sync on the SAN.
  • Powers down the VMs, removes them from vCenter, and un-maps the storage.
  • Runs another sync (because of the first sync, this offline one doesn’t take long, which keeps the VM downtime short).
  • It then brings the synced volume up at the DR site, registers the VMs, and brings them online with the correct networks and mappings.

Remember, even though you can now configure the reverse network mappings at the same time as the forward network mappings, if you have used IP customisations you will need to go to the DR site’s SRM settings and edit the reverse mapping to include the IP/DNS info for failing back. Otherwise nothing will be changed when you fail back, and that could cause you all sorts of headaches!

Fail Over

[Screenshot: srm_planned_migration1]

Fail Back

[Screenshot: srm_planned_migration2]

As you can see SRM doesn’t hold the original IP details, so if you failed back without putting the correct info in, you’d have the same IP details you had at your DR site.

This all had to be done with CSV files before, but now it’s all handled in the GUI, which I think is great. It does the customisation using VMware Tools, and the way it does it is not exposed in any other way (so no other program/script can do it that way).

 

This is a re-post of my article originally posted here:

The Host Virtual MAC Address Riddle

So then, for ages I have been pondering over something, and as trivial as it is… the fact I couldn’t get any solid info on it was driving me nuts.

Every physical NIC (pNIC) on a host has a MAC address, just as you would expect. BUT if you run:

esxcfg-info -n

you will notice that every vmnic has a Virtual MAC Address too!

\==+Physical Nic :

|—-Name…………………………………………..vmnic3

|—-PCI Segment…………………………………….0

|—-PCI Bus………………………………………..2

|—-PCI Slot……………………………………….0

|—-PCI function……………………………………1

|—-MAC Address…………………………………….XX:XX:XX:32:06:1f

|—-Virtual MAC Address……………………………..00:50:56:52:06:1f

|—-FPT Shareable………………………………..true

Now as you can see, the Virtual MAC Address starts with the VMware prefix (00:50:56) and finishes with the end of the physical MAC.

Now the question is: why does each vmnic on a host need a Virtual MAC Address?! I asked this question everywhere, and I mean everywhere… from Reddit to Slack… to Twitter… even Experts Exchange. vExperts/VCDXs didn’t know the exact answer, and most people were like, why are you even bothered? To be fair they do have a point, and it comes down to this key fact:

Someone asked me why they had a Virtual MAC Address, and the fact I didn’t know and couldn’t find a solid answer… well, it just bugged me a lot!

I just couldn’t find any link. I then posed the question to my new friend Graham Barker (who isn’t on Twitter or anything). He did some digging and found something very interesting:

When you run the esxcfg-info -n command you will be given info on the shadow vmnics too. For those who don’t know, shadow vmnics are used as part of the VDS Health Check feature, which checks for VLAN/MTU mismatches across your network. It was introduced in the 5.1 version of the VDS.

What is a Shadow of vmnic ?

As you can see I did a simple post on it last year, for anyone who wants a bit more info!

Now let’s have a look:

|—-Client Name………………………….Shadow of vmnic4

|—-MAC Addr…………………………….00:50:56:52:5e:1e

\==+Physical Nic :

|—-Name…………………………………………..vmnic4

|—-PCI Segment…………………………………….0

|—-PCI Bus………………………………………..4

|—-PCI Slot……………………………………….0

|—-PCI function……………………………………0

|—-MAC Address…………………………………….XX:Xx:XX:XX:5e:1e

|—-Virtual MAC Address……………………………..00:50:56:52:5e:1e

As you can see the Shadow vmnic MAC Address matches the Virtual MAC address of the pNIC!

So my first thought was: well, that’s it, the Virtual MAC Address is assigned so that it can be used for the VDS Health Check, so the health check doesn’t impact day-to-day operations in some way.

Well, I dug a bit deeper still. I have a couple of ESXi 4.1 hosts, and the health check modules were only introduced in 5.1+ (they are installed as part of ESXi 5.1+ regardless of whether you have a VDS or not).

The ESXi 4.1 hosts all have Virtual MAC Addresses for their pNICs too… so this can’t be just for the VDS Health Check!

Now my team mate @ShadyMalatawey mentioned that there was a good chance the Virtual MAC Address for pNICs was introduced in 4.1 but never went live until 5.1, a bit like how NSX ships features in versions where they are not yet enabled but are planned for future releases. That makes sense.

I was chatting to the guy that runs http://www.govmlab.com/; he has a lot of old-school knowledge and we discuss topics now and again.

He mentioned that:

“as per my knowledge, each pNIC has a Virtual MAC (last 3 bytes should be from the actual pNIC MAC)  this is used for heartbeat protocols like beacon probing”

Now I thought that was pretty interesting. He also said there was no way for him to confirm it, as he just knew it from a while ago and there wasn’t anything written down that he knew of.

Our newest VCDX @Apollokre1d mentioned that the best course of action would be to raise a support request with VMware Support. To be fair my experience with them has been very hit and miss, but since I wanted to know… now was as good a time as any to try and get it from the horse’s mouth!

Their first reply was just a general copy and paste about how VMs get their Virtual MACs, which is not what I was asking at all. I asked again, and this time they pulled some info from an internal article they had, which made for an interesting read.

The info was:

There are 5 ‘types’ of MAC addresses that can potentially exist on an ESX host.

1) The first and simplest is the MAC assigned to a Virtual Machine.  I’ll ignore these for this conversation.

2) The manufacturer assigned MAC address of a physical NIC.

*) On a Classic ESX system this address is used only for PXE booting the machine, after which it is never used for traffic again.

*) On ESXi the MAC address of the PXE booted NIC, or the first physical NIC, by PCI slot, is stolen and given to the VMkernel TCP/IP interface vmk0.

3) A VMkernel TCP/IP interface MAC address is a generated MAC address based on a hash of the systemUUID and the name of the interface (vmkX).  These will always be in the form:  00:50:56:7X:XX:XX where XX are hash bits.

4) A Service Console interface MAC address is a generated MAC address based on a hash of the systemUUID and the name of the interface (vswifX).  These will always be in the form: 00:50:56:4X:XX:XX where XX are hash bits.

5) A Generated MAC address is then assigned to each Physical NIC for use in beaconing and all traffic that comes from the virtual switch itself.  These are generated in a special way.  They will always be in the form:  00:50:56:5X:YY:ZZ where X is 0x50 | (physicalMac[3] & 0x0f), YY and ZZ are the last 2 bytes of the physical card’s MAC address:  example:  Physical NIC MAC : 00:15:17:3a:ca:05  Produces virtual MAC address: 00:50:56:5a:ca:05.

To be fair I knew the other bits, but number 5 was the gold, and it is very similar to what Mr GoVMLab had said.
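Just to make number 5 concrete, here is a quick sketch in plain bash (nothing VMware-specific; the physical MAC is the example from the internal article) that derives the virtual MAC using that rule:

# Derive the vSwitch Virtual MAC from a physical NIC MAC:
# 00:50:56:(0x50 | (4th byte & 0x0f)):(5th byte):(6th byte)
pmac="00:15:17:3a:ca:05"                    # example physical NIC MAC
b4=$(echo "$pmac" | cut -d: -f4)            # 4th byte, e.g. 3a
b5=$(echo "$pmac" | cut -d: -f5)            # 5th byte, e.g. ca
b6=$(echo "$pmac" | cut -d: -f6)            # 6th byte, e.g. 05
vb4=$(printf "%02x" $(( 0x50 | (0x$b4 & 0x0f) )))
echo "00:50:56:$vb4:$b5:$b6"                # prints 00:50:56:5a:ca:05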

So Long Story Short:

Each pNIC on an ESXi host gets a Virtual MAC Address assigned to it, which you can look at using esxcfg-info -n
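If you just want the virtual MAC lines without wading through the full dump, a rough one-liner like this does the trick (the exact label spacing can differ between builds):

esxcfg-info -n | grep -i "virtual mac"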

This is then used by the virtual switch itself for beaconing/heartbeating, and it is also used for the VDS Health Check feature.

So if anyone asks you why pNICs have a Virtual MAC Address… I have done all the digging for you!

 

So we had been doing a management IP migration, and the time had come to migrate the management interfaces on various switches over to the new IP range/VLAN.

With Brocade you can do the following:

ipAddrShow – will show you your current IP settings

ipAddrSet  – will go through the motions of changing the IP.

It will go through and ask you for all your updated info, and at the end it will apply it. Before I did this I needed a proper serial cable for the older 4100 switches; a normal Cisco one won’t work as the Brocade pin-out layout is different.

So before I could begin I had to dig through cables till I found one that worked! I am not one to risk doing this kind of change without being able to get on the switch locally if something goes wrong.

Bear in mind that changing the management IP info does not have any impact on the day-to-day running of the switch.

switch:admin> ipaddrset

Ethernet IP Address [current ip]: <new ip>

Ethernet Subnetmask [255.0.0.0]: <new subnet mask>

Fibre Channel IP Address [0.0.0.0]: <adjust if needed>

Fibre Channel Subnetmask [0.0.0.0]: <adjust if needed>

Gateway IP Address [0.0.0.0]: <new gateway IP>

Set IP address now? [y = set now, n = next reboot]: <yes or no>

IP address being changed… Committing configuration…Done. switch:admin>
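Once it’s applied, running the show command from earlier again is a quick way to confirm the switch has taken the new settings:

switch:admin> ipaddrshow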

With the 5100 switches you can just use the normal light blue Cisco console cable and get onto the switches locally as needed.

We were moving the management ports from an old 100Mb Switch to the new management core, so as I changed the IPs and the management dropped off the network, I had to re-patch the cables into the new core switches.

The networking team had configured them as access ports with the required VLAN already. I set a ping going with the new IP, which obviously was timing out, but once I did the change the pings started coming through and you could SSH back on to them.

I simply could not get the web interface to work on either the 4100 or 5100 switches; a mix of Java issues just stopped it from loading. I need a specific version of Java for other things, so I wasn’t willing to go messing with that, and I just did everything via SSH.

So one of my colleagues had been trying to pass through an Nvidia Quadro graphics card that he had added to a host. It wasn’t going anywhere; I tried multiple times and could get the VM to pick up that there was another graphics card, but the Nvidia drivers just wouldn’t install.

If you read around, you’ll realise Nvidia cards are very hard to pass through, although people have had some success with the Quadros. AMD seems to be the more reliable option for various reasons.

We had a specific Linux application that needed some 3D hardware support; it was currently running on physical boxes and we wanted to P2V it… if we could.

After trying the whole passthrough thing for a while with little joy, we looked at the 3D support option in the VM settings, but it was greyed out, saying the Linux OS was unsupported.

But thanks to this KB:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2092210

I added the advanced option:

mks.enable3d = TRUE

Then the VM ticked that box by default, and it became editable. You don’t get many more options in the thick client:

[Screenshot: 3dsupport1]

But in the web client you get a few more options:

[Screenshot: 3dsupport2]

We left it set to automatic.
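For reference, if you’d rather not click through the GUI, the same flag can be added straight to the VM’s .vmx over SSH with the VM powered off. This is just a sketch; the datastore and VM folder names below are made up, and depending on the build you may also need to reload the VM’s config (vim-cmd vmsvc/reload) before it gets picked up:

# VM powered off; paths are hypothetical - adjust to your datastore/VM folder
echo 'mks.enable3d = "TRUE"' >> /vmfs/volumes/datastore1/my3dvm/my3dvm.vmx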

Then I pointed my colleague, who is much better at Linux than me, to this article:

http://www.mesa3d.org/vmware-guest.html

Initially he had a few issues getting the driver to install and run, but it turned out that was because of all the Nvidia drivers that had been installed while trying to do the passthrough:

https://bugs.launchpad.net/ubuntu/+source/mesa/+bug/1410960

“I solved it. I had to remove NVIDIA drivers by uninstall using the script, then, for reinstall everything related with RADEON and MESA” – From someone in that link

Once that was sorted, it installed and everything started working. Performance was decent, and was enough to let us start looking at P2V as a viable option for this application!

I had a disk failure on a Dell server that was being used as a Linux file server; it had no OMSA or anything like that installed. So when I phoned Dell support they wanted the PERC logs, and the normal ways of getting them via the iDRAC/BIOS didn’t work because they were pretty old versions. We updated the iDRAC, as that doesn’t require a reboot, but we still couldn’t get the info they needed. A BIOS upgrade would have required downtime, which at that moment in time was a no-go.

Sources:

http://techedemic.com/2014/08/07/dell-perclsi-megacli-how-to-install/
https://topics-cdn.dell.com/pdf/poweredge-rc-h730_Reference%20Guide_en-us.pdf
https://artipc10.vub.ac.be/wordpress/2011/09/12/megacli-useful-commands/
http://erikimh.com/megacli-cheatsheet/

So first, download the perccli and MegaCli RPMs.

Install them by copying them to the /tmp folder, then navigate to that folder and run:

rpm -ivh MegaCli-8.07.14-1.noarch.rpm
rpm -ivh perccli-1.11.03-1.noarch.rpm
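A quick sanity check that both packages actually landed (just querying the RPM database, nothing clever):

rpm -qa | grep -i -e megacli -e perccli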

Then go to /opt/MegaRAID/perccli/ and run
./perccli64 /c0 show termlog >> perclog.txt

This will get you the PERC logs, which you can then send to Dell support.

Now, since we had a hot spare, as soon as I removed the disk with the predictive failure it started rebuilding onto the hot spare:

Physical Drives = 24

PD LIST :
=======

———————————————————————–
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp
———————————————————————–
32:0 0 Onln 0 931.0 GB SAS HDD N N 512B ST91000640SS U
32:1 1 Onln 0 931.0 GB SAS HDD N N 512B ST91000640SS U
32:2 2 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:3 3 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:4 4 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:5 5 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:6 6 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:7 7 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:8 8 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:9 9 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:10 10 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:11 11 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:12 12 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:13 13 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:14 14 UGood – 931.0 GB SAS HDD Y N 512B ST91000642SS U
32:15 15 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:16 16 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:17 17 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:18 18 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:19 19 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:20 20 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:21 21 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:22 22 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:23 23 Rbld 1 931.0 GB SAS HDD N N 512B ST91000640SS U
———————————————————————–

EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info
SeSz-Sector Size|Sp-Spun|U-Up|D-Down|T-Transition|F-Foreign
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded
CFShld-Configured shielded

As you can see drive 23 was the hotspare and was now being used to rebuild the data, as expected.

Drive 14 was the replacement; it had been put in and was not actually doing anything, as UGood = Unconfigured Good.

So I now needed to set this to a HotSpare:

./perccli64 /c0/e32/s14 add hotsparedrive
Controller = 0
Status = Success
Description = Add Hot Spare Succeeded.

This tells it to use disk 14 in enclosure 32 on controller 0 as the hot spare!
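While that is going on, you can also ask the controller how the rebuild on slot 23 is getting along; I believe the perccli syntax mirrors storcli here, so something along these lines should show the progress:

./perccli64 /c0/e32/s23 show rebuild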

So now when we run:

./perccli64 /c0 show

Physical Drives = 24

PD LIST :
=======

———————————————————————–
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp
———————————————————————–
32:0 0 Onln 0 931.0 GB SAS HDD N N 512B ST91000640SS U
32:1 1 Onln 0 931.0 GB SAS HDD N N 512B ST91000640SS U
32:2 2 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:3 3 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:4 4 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:5 5 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:6 6 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:7 7 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:8 8 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:9 9 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:10 10 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:11 11 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:12 12 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:13 13 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:14 14 GHS – 931.0 GB SAS HDD Y N 512B ST91000642SS U
32:15 15 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:16 16 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:17 17 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:18 18 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:19 19 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:20 20 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:21 21 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:22 22 Onln 1 931.0 GB SAS HDD N N 512B ST91000640SS U
32:23 23 Rbld 1 931.0 GB SAS HDD N N 512B ST91000640SS U
———————————————————————–

EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info
SeSz-Sector Size|Sp-Spun|U-Up|D-Down|T-Transition|F-Foreign
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded
CFShld-Configured shielded

Now you can see 23 is still rebuilding, and disk 14 is now GHS, which means Global Hot Spare.

This can now be seen in the iDRAC.

[Screenshot: global hotspare perccli]

Configuring Alerting

We then needed to sort out some kind of alerting for this NFS server. This server was running iDRAC7, and our other, much older NFS server is a Dell 2950 running DRAC5. The DRAC is pretty crap; I set up SMTP alerts, but there is nothing for storage alerts. So I needed to sort something out that would work across both boxes.

So to get round this I installed MegaCli on the older NFS box too.

Then I ran the following commands:

./MegaCli64 -PDList -aALL
./MegaCli64 -LDInfo -Lall -aALL
./MegaCli64 -AdpAllInfo -aAll    –  This command gives the controller RAID info

This displays the status of all physical disks. I asked our scripting guru, and he is going to set up a PRTG alert that will run that command and grep for a failure (any value other than 0), so any kind of disk failure, even a warning/predictive one, will trigger a PRTG alarm.

./perccli64 /call show >> output.txt will output all controller info to a text file for anything using PERC controllers

/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aAll | grep -m1 -A8 "Device Present" | tail -n 7

[MegaCli]# /opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aAll | grep -m1 -A8 "Device Present" | tail -n 7

Virtual Drives : 1
Degraded : 0
Offline : 0
Physical Devices : 3
Disks : 2
Critical Disks : 0
Failed Disks : 0

Our scripting guru used the above output, so you end up with something concise to work with, that will warn us in the future should any issues arise.
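I don’t have the exact script he put together, but the general idea is something like this rough bash sketch (paths as per the MegaCli install above; PRTG just needs a non-zero value or exit code to alarm on):

#!/bin/bash
# Rough sketch: pull the device summary from MegaCli and exit non-zero
# if anything is degraded/offline/critical/failed
OUT=$(/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aAll | grep -m1 -A8 "Device Present" | tail -n 7)
BAD=$(echo "$OUT" | grep -E "Degraded|Offline|Critical Disks|Failed Disks" | awk -F: '{sum += $2} END {print sum+0}')
echo "$OUT"
[ "$BAD" -ne 0 ] && exit 1   # any non-zero count = raise the alarm
exit 0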

So I was tasked with converting this old CentOS 5 machine running a DB into a virtual machine.

This seems like a good place for this meme:

[Meme image]

We followed best practice: we took backups of the DB and shut down all DB services, so we could get some kind of consistency.

Once it was done, I booted it up and observed a few errors:

Cannot setup NMI watchdog on CPU 0
Cannot setup NMI watchdog on CPU 1
Cannot setup NMI watchdog on CPU 2
Cannot setup NMI watchdog on CPU 3

After doing some digging around I found the KB article I was looking for:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2031297

“vCPUs presented to the guest operating system cannot have NMI enabled on them because they are abstracted representations of the host’s physical CPU cores.”

So that makes sense really; it’s hardware dependent, and the VM/vCPUs are abstracted from that by the very nature of virtualization.

So to get round this issue we just need to follow the KB:

Edit /boot/grub/grub.conf

Set nmi_watchdog=0

It will probably be set to 1 or 2, but 0 disables it, which is what we want to do!

Now save the file and reboot the VM.
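For reference, the kernel line in /boot/grub/grub.conf ends up looking something like this (the kernel version and root device below are just a generic CentOS 5 example, not necessarily what’s on your box):

kernel /vmlinuz-2.6.18-398.el5 ro root=/dev/VolGroup00/LogVol00 nmi_watchdog=0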

I also had this error/warning on bootup:

“Memory for crash kernel (0x0 to 0x0) notwithin permissible range”

After digging around, it turns out it’s a common warning in CentOS 5 and can be safely ignored:

“During the boot process you may see the message “Memory for crash kernel (0x0 to 0x0) notwithin permissible range” appear. This message comes from the new kdump infrastructure. It is a harmless message and can be safely ignored.”

It’s kind of annoying, as you see it and are like… hmmm, that doesn’t look good, and then they tell you to just ignore it, heh.

So Mongo support had been looking into an issue, and they mentioned they had seen some memory ballooning being active and recommended that it should be turned off.

I am not overly keen on the idea of turning it off, but the DB Engineer and Mongo said it had to go, so I was like ok fair enough, I had been overruled!

I followed this KB:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1002586

So disabling memory ballooning is a pretty simple operation:

  1. Shut down the virtual machine.
  2. Right-click the virtual machine listed on the Inventory panel and click Edit Settings.
  3. Click the Options tab, then under Advanced, click General.
  4. Click Configuration Parameters.
     [Screenshot: balloon1]
  5. Click Add Row and add the parameter sched.mem.maxmemctl in the text box.
  6. Click on the row next to it and add 0 in the text box.
     [Screenshot: balloon2]
  7. Click OK to save changes.

Then reboot the VM; from then on VMware memory ballooning won’t be able to run.

Also, a point of note: if you want to enable it again, you actually have to edit the .vmx file; it can’t be removed via the client.

“Note: You cannot remove the entry via the Configuration Parameters UI once it has been added. You must edit the configuration file (.vmx) for the virtual machine to remove the entry.”
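If you do ever need ballooning back, the line you’re hunting for in the .vmx (with the VM powered off) is just the one the steps above added:

sched.mem.maxmemctl = "0"

Delete that line, save the file, and power the VM back on, and ballooning is back in play.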

So I was assigned the task of adjusting the NTP settings on all of our ESXi hosts. They currently were pointing at a Domain Controller that was going to be decommissioned.

The best way to do this is via PowerCLI; you can do it via the Thick/Web Client, but that’s going to take some time!

So I created this super basic script…DON’T LAUGH! ha

Connect-VIServer <insert vCenter IP/FQDN>

Get-VMHost | Add-VMHostNtpServer <ntp source 1 FQDN/IP>

Get-VMHost | Add-VMHostNtpServer <ntp source 2 FQDN/IP>

Get-VMHost | Add-VMHostNtpServer <ntp source 3 FQDN/IP>

Get-VMHost | Add-VMHostNtpServer <ntp source 4 FQDN/IP>

Get-VMHost | Add-VMHostNtpServer <ntp source 5 FQDN/IP>

Get-VMHost | Remove-VMHostNtpServer <old ntp source, in our case the old DC>

Get-VMHost | Get-VMHostService | Where-Object {$_.key -eq "ntpd"} | Restart-VMHostService -Confirm:$false

So basically all this script does is add multiple new NTP sources on each host it pulls from the vCenter. It then removes the old source from each host, and finally restarts the ntpd service on each host so the service starts using the new sources right away.

Also to finally confirm the status of NTP and what the hosts were configured to use, I found this script:

Get-VMHost | Sort Name | Select Name, @{N="NTPServer";E={$_ | Get-VMHostNtpServer}}, @{N="ServiceRunning";E={(Get-VMHostService -VMHost $_ | Where-Object {$_.key -eq "ntpd"}).Running}} | Format-List | Out-File c:\ntp_results.txt

This script returns a list of all the hosts’ NTP statuses, including whether the service is running. It’s edited a bit from: http://www.virtu-al.net/2009/08/14/powercli-do-you-have-the-time/

 

So I had a VM with a notification saying that it needed disk consolidation. I found this odd, as it didn’t actually have any snapshots, well, none that I could see.

I did try the trick of creating a new snapshot and then doing a Delete All; that finished with no issues, but I was still unable to consolidate the disks.

Status: An error occurred while consolidating disks: Could not open/create change tracking file

After looking around what I did was:

 

  • Shut down the VM
  • Create a temp folder in the VM folder on the datastore
  • Move the CTK files into the temporary folder (the file names will look like this: “*-ctk.vmdk”); see the sketch after this list for how that looks over SSH
  • Right-click the VM, select Snapshot, and then select Consolidate
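If you’re doing the move over SSH on the host rather than through the datastore browser, it’s something along these lines (the datastore and VM folder names here are made up):

# paths are hypothetical - adjust to your datastore/VM folder
cd /vmfs/volumes/datastore1/problem-vm
mkdir ctk-old
mv *-ctk.vmdk ctk-old/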

As you can see I had quite a few to contend with:

[Screenshot: ctk3]

 

This will now work and you are good to go. The CTK files are used for Changed Block Tracking during snapshots/backups, and they can go a bit dodgy every so often.

 

 

For some reason one of my hosts decided to get out of sync with its VDS configuration. I am still trying to figure out why.

Either way, the fix is pretty simple and is detailed in the following Knowledge Base article:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2042692

You can re-sync in the Thick client and in the Web client; both are perfectly acceptable ways of doing it.

I did it via the Thick client:

To manually synchronize the host vDS information from the vSphere Client:
  1. In the Inventory section, click Home > Networking.
  2. Select the vDS displaying the alert and then click the Hosts tab.
  3. Right-click the host displaying the Out of sync warning and then click Rectify vNetwork Distributed Switch Host.

As simple as that. They do mention that there could be an underlying issue if the problem persists or doesn’t resolve, so I am going to keep an eye on it, but so far the issue has gone away and hasn’t returned.