Sebastian described an interesting Cisco ACI quirk they had the privilege of chasing:
We encountered VM connectivity problems shortly after VM moves from one vPC leaf pair to another vPC leaf pair in ACI. The problem did not manifest immediately (due to ACI's bounce entries) and only occurred occasionally, which made it very hard to reproduce synthetically, but thanks to DRS and a large number of VMs it happened often enough to be a serious problem for us.
Here’s what they figured out:
The problem was that sometimes the COOP database entry (ACI's separate control plane for MACs and host addresses) was not updated correctly to point to the new leaf pair.
That definitely sounds like a bug, and Erik mentioned in a later comment that it has probably been fixed in the meantime. However, the fun part was that things kept working for almost 10 minutes after the VM migration:
After the bounce entry on the old leaf pair expired (630 seconds by default), traffic to the VM was usually blackholed, since remote endpoint learning is disabled on border leafs and traffic is always forwarded to the spines' underlay IP address for proxying.
A bounce entry looks a bit like MPLS/VPN PIC Edge: the original switch knows where the MAC address has moved to and redirects the traffic to the new location. Just having that functionality makes me nervous. Unlike MPLS/VPN networks, where you could have multiple paths to the same prefix (and thus know the backup path in advance), you need a bounce entry for a MAC address only when:
- The original edge device knows which new switch the moved MAC address is attached to;
- Other fabric members have not realized that yet;
- The interim state persists long enough to be worth the extra effort.
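The interplay between the bounce entry and the (stale) COOP entry explains why things worked for roughly ten minutes before the blackhole appeared. Here's a toy Python model of that behavior. This is purely an illustration of the failure mode described above, not ACI code; all names (`Fabric`, `migrate`, `deliver`) are hypothetical:

```python
import dataclasses

BOUNCE_TTL = 630  # default ACI bounce-entry lifetime in seconds


@dataclasses.dataclass
class Fabric:
    """Toy model of where the fabric thinks a MAC address lives."""
    coop_location: str  # leaf pair the spine COOP database points to
    # old leaf pair -> (new leaf pair, bounce-entry expiry time)
    bounce: dict = dataclasses.field(default_factory=dict)

    def migrate(self, old: str, new: str, now: float, coop_updated: bool):
        # The old leaf pair learns the new location and installs a bounce entry.
        self.bounce[old] = (new, now + BOUNCE_TTL)
        # In the buggy case the spine COOP entry keeps pointing at the old pair.
        if coop_updated:
            self.coop_location = new

    def deliver(self, now: float, actual_location: str) -> str:
        """Return where a frame proxied through the spines ends up."""
        leaf = self.coop_location
        if leaf in self.bounce:
            new, expiry = self.bounce[leaf]
            if now <= expiry:
                return new        # bounce entry redirects traffic to the new pair
            del self.bounce[leaf]  # bounce entry aged out
        # Stale COOP entry, no bounce entry, no remote endpoint learning:
        # the frame is sent to a leaf pair that no longer hosts the VM.
        return leaf if leaf == actual_location else "blackholed"


fabric = Fabric(coop_location="leaf-pair-1")
# VM moves, but the COOP update is lost (the bug described above).
fabric.migrate("leaf-pair-1", "leaf-pair-2", now=0, coop_updated=False)
print(fabric.deliver(now=60, actual_location="leaf-pair-2"))   # bounce still active
print(fabric.deliver(now=700, actual_location="leaf-pair-2"))  # past 630 s: blackholed
```

The model shows why the failure was so hard to reproduce: for the first 630 seconds everything looks healthy, and only after the bounce entry expires does the stale COOP entry start blackholing traffic.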
Anyway, the company experiencing the problem decided to "solve" it by restricting VM migration to a single vPC pair:
In the end we gave up and limited the VM migration domain to a single vPC leaf pair. VMware recommends a maximum of 64 hosts per cluster anyway.
Having high-availability vSphere clusters and more than two leaf switches, and then limiting the HA domain to a single pair of leafs, certainly degrades the resilience of the whole architecture, unless they decided to limit DRS (automatic VM migrations) to a subset of cluster nodes with VM affinity while retaining the benefits of a high-availability cluster stretched across multiple leaf pairs. It's sad that one has to go down such paths to avoid vendor bugs caused by too much unnecessary complexity.
Want to know more about Cisco ACI? The Cisco ACI Introduction and Cisco ACI Deep Dive webinars are waiting for you.