Some time ago, I posted on the configuration of an IRF (Intelligent Resilient Framework) fabric with the HPE FlexFabric 5700 datacenter switches. During operation, a few things came to my attention that either have since been corrected or were not entirely clear in the first place.
1.) Activate MAD
There needs to be a mechanism to detect multiple active fabric members. Think of it as something like a quorum for the switch cluster, intended to prevent split-brain situations. In my case I preferred to do this with LACP. This brings MAD (Multi-Active Detection) down to layer two and is rather simple. It should be configured on an appropriate Bridge Aggregation Group, resulting in a configuration like:
port link-type trunk
port trunk permit vlan all
link-aggregation mode dynamic
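Putting it together: assuming the aggregation toward the IRF neighbor is Bridge-Aggregation 1 (the interface number here is just an example), the complete interface section with LACP MAD enabled might look like this on Comware:

```
interface Bridge-Aggregation1
 port link-type trunk
 port trunk permit vlan all
 link-aggregation mode dynamic
 mad enable
```

Note that `mad enable` only works on a dynamic (LACP) aggregation, and the switch may prompt you to confirm the IRF domain ID when you enter it.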
Left alone by consultants who charged a lot and did not accomplish much, I ended up configuring the FlexFabric 5700 switches myself. Here are some of the insights; other posts will follow.
To start with the basic initialization: configuring management access and performing initial firmware maintenance. After unpacking the switch and mounting fans and power supplies, connect through the serial console, although there is a DHCP client running on the switch which probably allows you to gain management access over the network as well. Remember there is a Gigabit Ethernet port on the back of the switch dedicated to management access only; the console port is adjacent. Default serial settings are 9600/8/n/1, as with any other HPE switch.
After the boot procedure, press Enter and you have access to the switch. Elevate your access level to configuration mode with:
system-view
To start with, I disable the DHCP client and enable LLDP for later use.
undo dhcp enable
lldp global enable
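With the DHCP client disabled, you will want to assign a static address to the dedicated management port yourself. A minimal sketch; the interface name and addressing below are assumptions for illustration:

```
interface M-GigabitEthernet0/0/0
 ip address 192.168.1.10 255.255.255.0
```

The management port lives outside the regular forwarding plane, so this address is reachable only through that back-side port.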
After that, prepare the desired VLANs according to whatever you will use later. I strongly recommend leaving the default VLAN untouched: keep the Primary VLAN ID at 1 and transport it untagged on any switch-to-switch link, but remove all access and server …
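As an illustration of the VLAN preparation (the VLAN IDs, names, and interface below are assumptions, not from the original setup), creating two VLANs and an inter-switch trunk that carries VLAN 1 untagged as PVID could look like:

```
vlan 10
 name Servers
vlan 20
 name Management
quit
interface Ten-GigabitEthernet1/0/48
 port link-type trunk
 port trunk permit vlan 1 10 20
 port trunk pvid vlan 1
```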
Something technical for a change: SDN, Software Defined Networking, is shaping up to be the next hype, with whatever justification, and as with every hype topic the usual suspects are quick to jump on the bandwagon. The rate of jumping increased drastically in 2013; whether founded or unfounded shall be left open.
Jumping on the bandwagon often means taking the technology you already have in house, extending it a little, and then defining your own interpretation of the term, which naturally makes you the most sought-after vendor in the segment.
After VMware, as the market leader in virtualization, opened the round early in 2012 with the Nicira acquisition, it is worth taking a closer look at the scenarios and tasks, as well as the status quo:
This week I attended the DCUCI training, the best class I have had in quite a while. There could have been more labs, and the marketing coverage (the high-and-wide introduction) was much too much, considering that there were two four-hour eLearning sessions upfront which covered this material more accurately.
Apart from this annoying waste of valuable lab time, it was a really great introduction to the Unified Computing System world of Cisco. Although it was not feasible to cover the UCS preparation and uplink configuration part, which is relegated to an appendix, everything else was covered fairly. Introducing the pool concepts, updating templates for ports, interfaces, policies, and the server profile, it gave a complete introduction to the hows and whens, even the caveats, of the Cisco interpretation of stateless servers.
Even the connection to and distributed-switching integration with VMware vSphere, via the Nexus 1000v as well as the VM-FEX approach, was discussed. The labs focus mainly on the Nexus 1000v, but thanks to our smart trainer we implemented the VM-FEX approach as well.
Along the way we found a lot of caveats which have never been covered in the technical deep-dive classes Cisco offers to end customers. I was very happy to attend this class, so we can better judge which way to go. In the long run, the HP Virtual Connect FlexFabric approach is not as clumsy as it looks in a superficial comparison to Cisco UCS. Honoring the caveats, the Cisco approach has to be well designed and even more carefully maintained than the HP one. Details will follow.