Musings on 2017 Storage

This post has been sitting in my drafts folder for a while, and whenever I find the leisure to get abstract again, something happens that gives the topic a whole new momentum.

Something like this is currently rushing through the trade press: HPE acquires Simplivity. The same HPE that, eighteen months ago, asserted with absolute conviction in the debate about hyperconverged architectures and in the positioning of its VSA that inline compression and deduplication are the death of any storage system.

In the meantime, Dell has acquired EMC. That unites the storage expertise of VMWare VSAN, Nexenta and classic EMC under one roof, but no longer … Continue reading

How to Get a Palo Alto Firewall Started

Getting a Palo Alto firewall up and running is a bit “old school”, and since there are a few quirks rooted in its design principles, I wanted to write down the most important steps in one go. It is not hard – once you know how it works.

[Screenshot: Palo Alto dashboard]

First of all, connect the management port of the device to an untagged switch port. Once it is powered up, the status LED eventually switches from orange to green and you can get to work on the system with the default credentials.
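If the management interface needs an address of its own right away, that can also be done from the CLI. A minimal sketch, assuming placeholder addresses and the usual set-command syntax of current PAN-OS releases:

configure
set deviceconfig system ip-address 192.168.1.10 netmask 255.255.255.0 default-gateway 192.168.1.1
set deviceconfig system dns-setting servers primary 192.168.1.53
commit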

Continue reading

Tool of the Day: GlassWire

This week GlassWire caught my attention. A tool that is free to use at least for private purposes and combines a network monitor and a firewall in one. For a change, it also looks contemporary and polished and is brilliantly simple and intuitive to use.

Hard to believe that software like this still exists.

[Screenshot: GlassWire network graph]

And the functionality is every bit a match for the good looks.

Continue reading

Musings on next generation firewalling

Firewalls. Rulesets. The NSA, security, spam, malware, and trouble pretty much everywhere you look. It is hardly fun any more. And the bigger the attack surface, the more action is called for. Recently even anti-virus vendors have publicly conceded defeat in the battle for client security.

So far, so fatal.

At the moment I am devoting myself to the topic of firewalling again, and after some telling observations I can only agree with the views referenced above. A change of thinking really is required.

….

Continue reading

Why Open Source Matters!

So now the “non-US” data centers of Google and Yahoo. No matter where, the NSA is recording.

Now, to tap traffic you need an “attack vector”, that is, a point in the data communication where the information can be extracted unhindered but also undetected. Der Spiegel has an interesting article on this which, between the lines, contains a few remarkable subtleties, since the designs used in data centers normally reduce the number of such vectors as well as the access to the remaining targets.

Continue reading

VMWorld 2012 Musings

One goes to conferences to learn, to pick up news and announcements, and not least to drive home with the good feeling of being right in one's assessments and, overall, “on a good path”.

This year's VMWorld is no exception. With the difference that a big change is actually taking place in the meantime: the cloud is taking shape, and the many open issues that still feed the doubters have been listed, addressed and, where one is bold enough, even solved.

I actually like “elastic computing” as a term much better than cloud, because it sounds less nebulous. Three things make up a working environment:

* virtual compute resources – a hypervisor on bare metal
* virtual storage resources
* virtual network resources

Only the virtualization of all three areas enables the agile, elastic, software-based provisioning of cloud systems. And this under the premise that control comes from a homogeneous, holistic management system that is role-based, profile-based and multi-tenant capable. The latter applies to private clouds as well, by the way, since different departments show tenant-like traits.

Insight of the day: without unified management, things will most likely never run smoothly.

UCS Caveats: Uplinking the VIC

Within the UCS universe, the VIC (virtual interface card) allows the dynamic creation of up to 256 VIFs (virtual interfaces), which bind dynamically to a virtual machine in a major hypervisor (e.g. VMWare vSphere; HyperV should be supported soon).

The VIC thereby bypasses the virtual switching layer within the hypervisor and provides a considerable I/O performance advantage compared to classical virtual switching concepts. Basically, a “virtual switch” module within the network framework of the hypervisor binds the pre-generated logical PCI devices to a dedicated driver within the virtual machine.

Given a proper integration, the virtual interface generated this way shows up in the UCS Manager running on the Nexus-based fabric interconnect as a classical switch port and can be managed by the network staff accordingly.

Reading the details in the white papers, the driver component within the virtual machines supports only a limited number of interfaces, in some cases as few as eleven interfaces per hypervisor. Due to the adapter pinning, which covers more than just the general network interfaces, the number of usable VIFs grows with the number of additional uplinks. The largest number I am aware of right now is 56 interfaces with four uplinks from a FEX module. Given the two interfaces of the adapter card that works out to about 112, which comes close to the 128 advertised VIFs, but you need to run Linux within the VMs and you need that many uplinks.

DCUCI: Datacenter Unified Computing Implementation

This week I attended the DCUCI training. The best class I have had in quite a while. There could have been more labs, and the marketing coverage, the high-and-wide introduction, was far too much considering that there were two four-hour eLearning sessions upfront which covered this material more accurately.

Apart from this annoying waste of valuable lab time, it was a really great introduction to the Unified Computing System world of CISCO. Although it was not feasible to cover the UCS preparation and uplink configuration part, which is relegated to an appendix, everything else was covered fairly. Introducing the pool concepts, updating templates for ports and interfaces, the policies and the server profile, it gave a complete introduction to the hows and whens, and even the caveats, of the CISCO interpretation of stateless servers.

Even the connection and the distributed switching integration into VMWare vSphere with the Nexus 1000v, as well as the VM-FEX approach, were discussed. The labs primarily cover the Nexus 1000v, but thanks to our smart trainer we implemented the VM-FEX approach as well.

Down this road we found a lot of caveats which have never been covered in the technical deep-dive classes CISCO offers to end customers. I was very happy to attend this class, so we can judge better which way to go. In the long run the HP Virtual Connect Flex Fabric approach is not as clumsy as it looks in a superficial comparison to CISCO UCS. Honoring the caveats, the CISCO approach has to be well designed and even more carefully maintained than the HP one. Details will follow.

Configure VRRP on the hp networking E5400 family

Configuring redundant gateway services on hp ProCurve ProVision-based switches is not a big miracle. Basically it is about switching it on. VRRP, the Virtual Router Redundancy Protocol, is similar to HSRP from CISCO or CARP from OpenBSD. VRRP itself is standardized by the IETF in RFC 5798 and follows hp’s habit of using industry-standard protocols.

Why is it not redundant routing? Because the VRRP feature is enabled on a per-VLAN basis and, more than that, it only defines a redundant IP interface within the corresponding VLAN. The actual routing is handled independently of this. Assuming the routing is configured properly, the failing-over IP interface ensures that routing can happen. VRRP itself provides only a redundant IP interface, which can be used as a routing gateway, which is why the function is called redundant gateway services.

In contrast to the approaches of some other vendors, VRRP here provides only pairwise redundancy, where the virtual IP interface is the same as the corresponding VLAN IP on the owning routing switch. This address is failed over to the backup switch, which has a second IP interface configured in the VLAN. That second interface is necessary to check the proper operation of the primary one.

The partnering happens based on a so-called virtual router ID (VRID), which is defined within the VRRP configuration. This allows administrators to configure several different redundant IP gateways within one VLAN if static routing requirements call for it.

So the configuration on the master works as follows. First, configure the proper VLAN IP address. Naming VLANs is a clever habit and helps in the long run.

vlan 10
name "production east"
ip address 192.168.0.1 255.255.255.0

Then the VRRP feature is globally enabled:

router vrrp

Then the actual redundant IP interface is configured within the corresponding VLAN context:

vlan 10
vrrp vrid 1
owner
virtual-ip-address 192.168.0.1 255.255.255.0
enable

Be aware that the VRID context is activated independently on a per-VLAN basis and enabled within each VRID definition.

On the backup routing switch the VLAN IP configuration as well as the VRRP activation look pretty much the same.


vlan 10
name "production east"
ip address 192.168.0.2 255.255.255.0

router vrrp

Within the VLAN-based VRRP configuration, the backup role is defined here:

vlan 10
vrrp vrid 1
backup
virtual-ip-address 192.168.0.1 255.255.255.0
enable

Voila, redundant gateway interfaces should be available.
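To check that both sides agree on their roles, the ProVision CLI offers a status command; the exact output format varies between software releases:

show vrrp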

Especially the strict creation of pairs differs from other implementations, where often whole sets of interfaces can be created. Likewise, assigning the virtual IP the same address as the VLAN IP of the owning router is not necessarily something other vendors do the same way. Very often the virtual IP is one address and the local VLAN IPs are two separate, dedicated ones.

Blade New World

Most computer vendors flood the marketplace with more or less sophisticated blade solutions. Remark: blade solutions, as clearly distinguished from mere blade servers.

So here are some musings on blades and why I tend to differentiate between basic blade servers and the more sophisticated approach.

Basic blade concepts primarily try to convince with largely the same set of aspects:

  • Optimized footprint aka rackspace
  • Reduced cabling
  • Energy efficiency
  • Virtualized installation media
  • Cooling efficiency
  • Central management and maintenance
  • Easy hardware deployment and service

To whatever extent the different breeds address these issues, they seem to be primarily superficial quality criteria, argued over in many decision processes while missing the core of the discussion. These topics are covered in the mainstream publications and are taken as the measure of quality of the “solution”.

From my point of view this is completely out of scope.

The real value of blades is interconnect virtualisation together with the so-called “stateless server” approach. No surprise that many vendors try to keep the discussion on the less important facts, since according to my labs and evaluations only three major vendors have even understood the issue. Depending on their legacy obligations they take more or less radical approaches to reach the goal.

The goal is to generate a server personality, its identity in technical terms, dynamically by applying a so-called service or server profile. This profile contains the different aspects of the individual server identity, such as:

  • BIOS version
  • Other firmware version, e.g. HBA or NIC
  • WWNs for port and node
  • MAC addresses
  • Server UUID
  • Interface assignments
  • Priorities for QoS and power settings

According to these profiles, and derived from pools of IDs, MACs and WWNs, the server’s identity is generated dynamically and assigned during a so-called provisioning phase. Then the blade server is available for installation.
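To make the idea more tangible, here is a purely illustrative sketch in Python; the class and pool names are invented for this post and do not correspond to any vendor’s tooling:

# Hypothetical sketch: a "stateless" blade receives its identity from pools
# at provisioning time instead of carrying it in hardware.
from dataclasses import dataclass

@dataclass
class IdentityPools:
    macs: list    # unused MAC addresses
    wwpns: list   # unused Fibre Channel port WWNs
    uuids: list   # unused server UUIDs

@dataclass
class ServerProfile:
    name: str
    bios_version: str
    mac: str = ""
    wwpn: str = ""
    uuid: str = ""

def provision(profile: ServerProfile, pools: IdentityPools) -> ServerProfile:
    """Draw the next free identifiers from the pools and bind them to the profile."""
    profile.mac = pools.macs.pop(0)
    profile.wwpn = pools.wwpns.pop(0)
    profile.uuid = pools.uuids.pop(0)
    return profile

pools = IdentityPools(
    macs=["00:25:b5:00:00:01", "00:25:b5:00:00:02"],
    wwpns=["20:00:00:25:b5:00:00:01", "20:00:00:25:b5:00:00:02"],
    uuids=["424d0000-0000-0000-0000-000000000001",
           "424d0000-0000-0000-0000-000000000002"],
)

# The same template could be cloned for a second host or re-applied to a
# spare blade after a hardware failure.
esx01 = provision(ServerProfile(name="esx01", bios_version="2.1"), pools)
print(esx01)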

This approach allows a server personality to be generated “on the fly” that serves dedicated needs, for instance a VMWare vSphere host or a database server. Furthermore, if more servers of the same type are needed, the profiles may be cloned or derived from a template so that new rollouts are quick and easy. In case of failure or disaster recovery the profiles may even roam to other, not yet personalized servers, and assuming a boot-from-SAN or boot-from-iSCSI scenario, failed servers are back in minutes, transparently even with regard to hardware-based licensing issues.

Derived especially from the need for flexible interconnect assignment, the classical approach of dedicated Ethernet or Fibre Channel switch modules in a blade infrastructure is of no further use. The classical approach needs dedicated interconnects at dedicated blade positions, which is exactly the limitation a service profile wants to overcome.

With converged infrastructure, the support for FCoE and data center bridging, and the corresponding so-called converged network adapters, this limitation has been overcome on the interconnect side. Here the interconnect is configured appropriately to cover the server-side settings and dynamically assigns different NIC and HBA configurations to the single blade. Moreover, the connections may apply QoS or bandwidth reservation settings and implement highly available connections in a simplified manner.

Based on that, far advanced and very modern hardware operation concepts become possible. Only blade concepts that support the full range of this essential decoupling of hardware and service role deserve the name “solution”. Anything else is “me too”.

Some how-to posts from my previous evaluation and installation projects will follow. Some readers may remember my old blog 😉