Having created a VSF stack of Aruba 2930Fs, the immediate need for firmware maintenance obviously raised the question of how to do it. Luckily, a new software release had just become available, so I was able to test.
Regarding the result: it was shockingly simple and runs like every other Aruba / ProCurve firmware upgrade – you just have to cover the second VSF stack member as well.
vsf member 1
copy tftp flash 192.168.2.5 WC_16_07_0002.swi primary
vsf member 2
copy tftp flash 192.168.2.5 WC_16_07_0002.swi primary
Verify the upload with a show flash; the firmware image should now show up. You may even Continue reading →
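As a rough sketch of what to expect – the sizes and dates below are placeholders, and the exact layout differs per platform and release – the new image should appear as the primary image:

```
switch# show flash
Image            Size (bytes)  Date      Version
---------------  ------------  --------  --------------
Primary Image  :     28372152  04/12/18  WC.16.07.0002
Secondary Image:     27911234  01/15/18  WC.16.05.0004
```

Repeat the check on both stack members so they boot the same primary image.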
Some time ago, I posted on the configuration of an IRF (Intelligent Resilient Framework) fabric with the HPE FlexFabric 5700 datacenter switches. During operation, a few things came to my attention which either have been corrected or were not necessarily clear in the first place.
1.) Activate MAD
There needs to be a mechanism to detect multiple active fabrics. This can be considered something like a quorum for the switch cluster, intended to prevent split-brain situations. In my case I preferred to do so with LACP. This brings MAD (Multi-Active Detection) to layer two and is rather simple. It should be configured on an appropriate Bridge Aggregation group – resulting in a configuration like:
port link-type trunk
port trunk permit vlan all
link-aggregation mode dynamic
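The lines above build the aggregation but do not yet activate the detection; on Comware, LACP MAD is enabled per aggregation interface. A minimal sketch – Bridge-Aggregation 1 is an assumed example, and note that the device terminating the LACP links has to support the extended LACPDUs carrying the MAD information:

```
interface Bridge-Aggregation 1
 port link-type trunk
 port trunk permit vlan all
 link-aggregation mode dynamic
 mad enable
```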
Anybody who installed PAN-OS 8.1 on his Palo Alto firewall – we use the PA-220 in quite some numbers – may have experienced quite strange behaviour when users access SMB file shares through IPsec tunnels. So did I.
With the latest firmware upgrade, no write or read jobs through any of these VPN tunnels succeeded. The mapped drives still lit up in the file explorer; in some cases even browsing directories succeeded, perhaps two or three levels down. Then the explorer started to hang or crashed, and some systems even blue-screened. Copied files perhaps showed up in the destination with a filename, i.e. a directory entry, but no content ever arrived.
Since we had updated the Microsoft world on top, the assumption was that some backward-compatibility stack or group policy setting caused the headache. Many Continue reading →
Concretely, the installation of the storage virtualisation software looks as follows:
First, I install VMware vSphere 5.5 on two hosts and license them with Enterprise Plus. Then I install a vCenter Server – or use the one already available in my environment. I leave the VMware installation itself aside at this point – configurations and equipment to be considered will follow later in the thread.
Third, I install the VSA appliances with the corresponding installer. Here, too, I keep it rather short, since this has been posted before.
Essentially, three things have to be considered here:
Since I’m still a big fan of the Palo Alto firewall family, there are some things which really feel strangely disturbing. Nothing functional – otherwise I wouldn’t be as convinced – but in terms of administration. The most advanced network security device is best managed via its web interface – something that gives every network guru goosebumps.
It gets worse if the web interface hangs and you need to use the unfamiliar command line interface. Whereas many vendors simply follow SNMP logic and somehow end up with something similar to the industry-standard context setup, the PAN-OS CLI feels strangely different.
Here are your survival commands to make login on the web interface work again:
Have you rebooted the system? request restart system
Did you restart the management service? debug software restart process management-server
Did you check the file system and free space? show system disk-space
In case you need to delete crash dumps or free space anyway: delete debug-log mp-log file *
And finally, if the system still does not respond due to hanging commits: commit force
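Put together, a recovery attempt from the CLI could run in this order – check and free disk space first, then restart the management plane, and only then force the commit:

```
show system disk-space
delete debug-log mp-log file *
debug software restart process management-server
commit force
```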
This list is far from complete, but after experiencing one software version which filled up the root file system after failed content updates and locked the admins out of the web interface, combinations of these commands helped to make the firewall accessible again.
To be fair, this was a one-time error in three years of running twelve of these boxes; nevertheless it felt quite uncomfortable.
Sometimes you may need more than one network at home connected to your Synology NAS. Maybe you are a geek and want to do strange VMware things, or you simply don’t want your kids’ friends to find the private family pictures.
Access rights are one thing; hard network separation is probably something entirely different. Even if you don’t want to separate traffic but want to serve storage in different subnets, you probably don’t want your home router to handle the storage traffic. At least it is very smart to avoid that.
Conceptually, this may be solved by interface overloading on the network interface of the storage device. You could use different network cards to separate traffic, but why would Continue reading →
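DSM does not expose all of this in its GUI, but since the NAS runs Linux underneath, the idea can be sketched from a root shell with iproute2 – the interface name, VLAN ID and addresses below are assumed examples, not a tested Synology procedure:

```shell
# Assumed: primary NIC is eth0 and the switch port delivers VLAN 10 tagged.
# Create a tagged subinterface for the second network:
ip link add link eth0 name eth0.10 type vlan id 10
ip link set eth0.10 up
# Give the NAS an address in the second subnet, so storage clients there
# reach it directly instead of crossing the home router:
ip addr add 192.168.10.2/24 dev eth0.10
```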
Every serious consultant and technician walks through this phase of having his private test environment and wanting to bring whatever environment to life. In former days this was occasionally heavy iron, piling up to some extent, and everybody somehow had to handle the hardware.
Today, virtualisation and the meltdown in memory pricing help. Entire companies may be simulated virtually in little more than a PC. What doesn’t change is communications, given the fact that nice replication or virtualisation technologies sometimes have to work over a far-stretched wide area connection. So today’s home lab first looks compressed in a Continue reading →
Approaching a certain quality level of switching and routing, high availability becomes an obligation. Along the different OSI service layers there are many high-availability protocols securing the corresponding network services: the Spanning Tree family (STP, RSTP, MSTP, PVST), protocols for link aggregation such as LACP, and layer-three routing redundancy services like VRRP.
These protocols have the advantage of being vendor-independent standards and are presumed to be interoperable. But either the design gets complex, interoperability keeps its caveats, or resources are simply kept disabled and only take over on failure. That’s not exactly performance-driving.
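As a sketch of the layer-three case: a minimal VRRP group on a Comware-style switch, where two routers share a virtual address. The VLAN, addresses and priority are assumed examples:

```
interface Vlan-interface10
 ip address 192.168.10.2 24
 vrrp vrid 1 virtual-ip 192.168.10.1
 vrrp vrid 1 priority 120
```

The peer gets the same virtual IP with its own real address and a lower priority; the higher-priority router answers for 192.168.10.1 while the other idles until failover.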
So vendors created stacks – which failed otherwise – or they started to build systems of higher complexity which, in a proprietary way, created load-sharing high-availability clusters in the Continue reading →