Within the UCS universe, the VIC (Virtual Interface Card) allows the dynamic creation of up to 256 VIFs (Virtual Interfaces), which bind dynamically to virtual machines within a major hypervisor (e.g. VMware vSphere; Hyper-V should be supported soon).
The VIC thereby bypasses the virtual switching layer within the hypervisor and provides a considerable I/O performance advantage over classical virtual switching concepts. Essentially, a "virtual switch" module within the hypervisor's network framework binds the pre-generated logical PCI devices to a dedicated driver within the virtual machine.
Given proper integration, the virtual interface generated this way shows up within the UCS Manager running on the Nexus-based Fabric Interconnect as a classical switch port and can be managed by the network staff accordingly.
Reading the details in the white papers, the driver component within the virtual machines supports only a limited number of interfaces, in some cases as few as eleven interfaces per hypervisor. Due to the adapter pinning, which does not only cover the general network interfaces, the number of VIFs grows with the number of additional uplinks. The biggest number I am aware of right now is 56 interfaces with four uplinks from a FEX module. Given the two interfaces of the adapter card, this comes close to the 128 advertised VIFs, but you need to run Linux within the VMs and you need that many uplinks.
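The scaling just described can be turned into a back-of-the-envelope sketch. Note that the per-uplink factor below is my own inference from the quoted 56-interfaces-with-four-uplinks figure, not a vendor specification:

```python
# Rough sketch of the VIF scaling discussed above. The per-uplink
# factor (14) is inferred from the 56-interfaces / 4-uplinks data
# point in the text; treat it as an assumption, not a vendor spec.

ADVERTISED_VIFS_PER_PORT = 128  # advertised maximum per adapter interface
VIFS_PER_UPLINK = 56 // 4       # assumed: 14, derived from the quoted maximum

def usable_vifs(uplinks: int, adapter_ports: int = 2) -> int:
    """Estimate usable VIFs: grows with uplinks, capped per adapter port."""
    per_port = min(VIFS_PER_UPLINK * uplinks, ADVERTISED_VIFS_PER_PORT)
    return per_port * adapter_ports

# Four FEX uplinks and the card's two interfaces reproduce the quoted case:
print(usable_vifs(4))  # 112 -- close to, but below, 2 x 128
```

The point of the sketch is simply that the practical ceiling is driven by uplink count and adapter pinning, not by the advertised 256-VIF figure.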
This week I attended the DCUCI training. The best class I have had in quite a while. There could have been more labs, and the marketing-style "high and wide" introduction was much too much, considering that there were two four-hour eLearning sessions upfront which covered this material more accurately.
Aside from this annoying waste of valuable lab time, it was a really great introduction to CISCO's Unified Computing System world. Although it was not feasible to cover the UCS preparation and uplink configuration part, which is relegated to an appendix, everything else was treated fairly. From the pool concepts through updating templates for ports and interfaces, policies, and the server profile, it gave a complete introduction to the hows and whens, and even the caveats, of the CISCO interpretation of stateless servers.
Even the connection and distributed-switching integration into VMware vSphere with the Nexus 1000v, as well as the VM-FEX approach, was discussed. The labs focus mainly on the Nexus 1000v, but thanks to our smart trainer we implemented the VM-FEX approach as well.
Along the way we found a lot of caveats which have never been covered in the technical deep-dive classes CISCO offers to end customers. I was very happy to attend this class, as it lets us judge better which way to go. In the long run, the HP Virtual Connect Flex Fabric approach is not as clumsy as it looks in a superficial comparison to CISCO UCS. Honoring the caveats, the CISCO approach has to be well designed and even more carefully maintained than the HP one. Details will follow.