Reloading a VSF

Reloading a VSF cluster of Aruba switches, for example during a firmware upgrade, is not as straightforward as it might seem. Since I googled this several times and found no clear statement on what actually happens, here is a summary of my findings.

Long story short: a warm boot via reload does not complete in a predictably successful manner. Never. Nowhere. That's it!

The often-cited vsf sequenced-reboot is only supported on the 5400 platform, so all the well-meant remarks are simply not helpful for users of 2930F, 3810, or similar VSF-enabled platforms. Continue reading
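For reference, the two variants look roughly like this in ArubaOS-Switch syntax (a sketch from memory, not verified against every release; check your platform's release notes before relying on it):

! 5400R only: reboot the VSF members one after another
switch# vsf sequenced-reboot

! 2930F, 3810M and similar: only a plain reload of the whole stack is available
switch# boot system flash secondary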

How-To: Init HPE FF5700 FlexFabric Switches

Left alone by some consultants, who charged a lot and did not accomplish much, I ended up configuring the FF5700 FlexFabric switches myself. Here are some of the insights; other posts will follow.

To start with the basic initialization settings: configuring management access and performing initial firmware maintenance. After unpacking the switch and mounting fans and power supplies, connect through the serial console, although there is a DHCP client running on the switch which probably allows you to gain management access over the network. Remember there is a Gigabit Ethernet port on the back of the switch, dedicated to management access only; the console port is adjacent. The default serial settings are 9600/n/1/n, as with any other HPE switch.

After the boot procedure, press Enter and you have access to the switch. Elevate your access level to configuration mode (system view) with:

system-view
To start with, I disable the DHCP client and enable LLDP for later use.

undo dhcp enable
lldp global enable

After that, prepare the desired VLANs according to whatever you will use later. I strongly recommend leaving the default VLAN untouched, keeping the primary VLAN ID at 1 and transporting it untagged on any switch-to-switch link, but removing all access and server Continue reading
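A minimal Comware sketch of that VLAN layout (the VLAN IDs 10 and 20, their names, and the uplink port number are assumptions for illustration only):

system-view
 vlan 10
  name servers
 quit
 vlan 20
  name clients
 quit
 interface Ten-GigabitEthernet 1/0/49
  port link-type trunk
  port trunk permit vlan 1 10 20
 quit

With the port in trunk mode, VLAN 1 stays the PVID and travels untagged on the switch-to-switch link, while the payload VLANs are carried tagged.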

EtherConnect Caveats

Over the weekend I put an EtherConnect line from the Pink Giant into operation. I will not even complain about the drama until the line actually ran between site A and site B, but …

Sales had grandly praised its simplicity: “Just plug it in … it works, but you do have a router, don't you?” As if anyone did not. “Otherwise I could sell you one!”

The technician on site says next to nothing either and leaves you alone with a silvery-white box. So you plug a cable into the port labeled LAN, and promptly the answer is: …
Continue reading

Musings on ProCurve timesync

There they were again… my three problems. What I have observed in earlier installations is that not all switches from the HP ProCurve family – yes, it has changed its name – behave absolutely identically. And it has just happened to me again: a collection of different models, partly divergent software revisions, and clearly non-deterministic behavior.

I do not like that. …

Continue reading

Musings on Software Defined Networking – SDN

Something technical again for a change: SDN – Software Defined Networking – is shaping up to be the next hype, with whatever justification, and as with every hype topic the usual suspects jump on quickly. The jump rate increased drastically in 2013 – whether justified or not shall remain open.

Jumping on often means slightly extending the technology you already have in house and then defining your own interpretation of the term, which of course makes you the most sought-after vendor in the segment.

After VMware, as the market leader in virtualization, opened the dance in 2012 with the Nicira acquisition, it is worth taking a closer look at the scenarios and tasks as well as the status quo:

Continue reading

Blade New World

Most computer vendors flood the marketplace with more or less sophisticated blade solutions. Note: blade solutions, as distinct from mere blade servers.

So here are some musings on blades and why I tend to differentiate between basic blade servers and the more sophisticated approach.

Basic blade concepts all make their case along the same lines:

  • Optimized footprint aka rackspace
  • Reduced cabling
  • Energy efficiency
  • Virtualized installation media
  • Cooling efficiency
  • Central management and maintenance
  • Easy hardware deployment and service

To whatever extent the different breeds address these issues, they seem to be primarily superficial quality criteria, argued in many decision processes while missing the core of the discussion. These topics are covered in the mainstream publications and are measured as the grade of quality of the “solution”.

From my point of view this is completely out of scope.

The real value of blades is interconnect virtualization together with the so-called “stateless server” approach. No surprise that many vendors try to keep the discussion on the less important facts, since, according to my labs and evaluations, only three major vendors have even understood the issue. Depending on their legacy obligations, they take more or less radical approaches to reach the goal.

The goal is to generate a server's personality – its identity in technical terms – dynamically by applying a so-called service or server profile. This profile contains the different aspects of the individual server identity, such as:

  • BIOS version
  • Other firmware versions, e.g. HBA or NIC
  • WWNs for port and node
  • MAC addresses
  • Server UUID
  • Interface assignments
  • Priorities for QoS and power settings

According to these profiles, and drawing from pools of IDs, MACs, and WWNs, the server's identity is generated dynamically and assigned during a so-called provisioning phase. Then the blade server is available for installation.

This approach allows generating a server personality on the fly that serves dedicated needs, for instance as a VMware vSphere host or a database server. Furthermore, if more servers of the same type are needed, the profiles may be cloned or derived from a template, so that new rollouts are quick and easy. In case of failure or disaster recovery, the profiles may even roam to other, not yet personalized servers; assuming a boot-from-SAN or boot-from-iSCSI scenario, failed servers are back in minutes, transparent even to hardware-based licensing schemes.

Derived especially from the need for flexible interconnect assignment, the classical approach of dedicated Ethernet or Fibre Channel switch modules in a blade infrastructure is of no further use. The classical approach needs dedicated interconnects at dedicated blade positions, which is exactly the limitation a service profile wants to overcome.

With converged infrastructure, the support of FCoE and Data Center Bridging, and the corresponding so-called converged network adapters, this limitation has been overcome on the interconnect side. Here the interconnect is configured appropriately to cover the server-side settings and dynamically assigns different NIC and HBA configurations to the single blade. Moreover, the connections may apply QoS or bandwidth reservation settings and implement highly available connectivity in a simplified manner.

Based on that, far advanced and very modern hardware operation concepts become possible. Only blade concepts that support the full range of this essential decoupling of hardware and service role deserve the name “solution”. Anything else is “me too”.

Some how-to posts from my previous evaluation and installation projects will follow. Some readers may remember my old blog 😉