Golden Eye for Better Solution Design with IBM I/O Provided by Emulex
For as long as I can remember, Emulex has been providing IBM with enterprise-class connectivity products for IBM System z, IBM Power Systems and IBM System x, not to mention all the storage chips we have onboard IBM disk storage (DS) and tape solutions, and more recently IBM XIV, IBM System Storage SAN Volume Controller (SVC) and IBM Storwize® V7000. A few years ago, Emulex provided only two products for the IBM System x and BladeCenter portfolio: a dual-port Fibre Channel Host Bus Adapter (HBA) and a single-port Fibre Channel HBA.
Emulex has come a long way since then, and when IBM announced its latest Romley-based server products, such as the IBM BladeCenter HS23, it ushered in a new era for IBM interconnects. IBM has shaken, not stirred (ahem… couldn’t resist), the competition with the IBM BladeCenter HS23, because the IBM LAN On Motherboard (LOM) 10Gb Ethernet (10GbE) chip provided by Emulex and embedded on the HS23 blade has changed the economics of IBM BladeCenter. Through superior design, IBM now has more I/O available within the IBM BladeCenter HS23 than competitive offerings from other vendors.
Onboard the IBM BladeCenter HS23, IBM has a quad-port LOM chip provided by Emulex that delivers two ports of 1Gb Ethernet (1GbE), ideal for management, AND two ports of 10GbE. Now, for those of you familiar with IBM’s Virtual Fabric Architecture (VFA), you know that on previous generations of the IBM Virtual Fabric adapter (Emulex 10GbE Virtual Fabric Adapter II for IBM BladeCenter, P/N: 90Y3550), it is possible to carve the two 10GbE ports into up to eight virtual NICs (vNICs).
However, with the HS23 LAN on Motherboard (LOM) chip, it is only possible to carve the two 10GbE ports into six vNICs. I know what you are thinking: why does the onboard IBM LOM chip embedded in the IBM BladeCenter HS23 have fewer vNICs than a standard PCI Express (PCIe) or CFFh version of the IBM Virtual Fabric adapter, which is capable of eight? Let me explain why.
Each IBM CFFh or PCIe Virtual Fabric adapter based on Emulex technology has eight PCIe functions available on the card, so it is no secret why we can carve the two physical 10GbE ports into up to eight vNICs. The IBM BladeCenter HS23 LOM chip, however, has two ports of 1GbE and two ports of 10GbE. Therefore, with two of the PCIe functions taken up by 1GbE, that leaves only six functions available for vNICs.
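The arithmetic above can be sketched in a few lines of Python. This is purely illustrative (the constant and function name are my own, not anything from an Emulex or IBM API): each ASIC exposes eight PCIe functions, each dedicated 1GbE port consumes one, and whatever remains can be carved into vNICs.

```python
# Illustrative sketch of the vNIC arithmetic described above.
# Assumption (not an official API): 8 PCIe functions per adapter ASIC,
# with one function consumed by each dedicated 1GbE port.

PCIE_FUNCTIONS = 8  # functions available per Virtual Fabric adapter ASIC

def available_vnics(ports_1gbe: int) -> int:
    """vNICs left after dedicating one PCIe function to each 1GbE port."""
    return PCIE_FUNCTIONS - ports_1gbe

# CFFh/PCIe Virtual Fabric adapter: two 10GbE ports, no 1GbE ports
print(available_vnics(ports_1gbe=0))  # -> 8 vNICs

# HS23 LOM chip: two 1GbE ports take two of the eight functions
print(available_vnics(ports_1gbe=2))  # -> 6 vNICs
```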
IBM has given customers the ability to change the game with unparalleled levels of I/O for each IBM BladeCenter HS23 blade, because of the way the IBM Virtual Fabric adapter (Emulex 10GbE Virtual Fabric Adapter II for IBM BladeCenter HS23, P/N: 81Y3120) works with the onboard IBM LOM chip, also provided by Emulex.
You will notice a special connector that I have highlighted on the new IBM Virtual Fabric adapter (Emulex 10GbE Virtual Fabric Adapter II for IBM BladeCenter HS23, P/N: 81Y3120). This is the only adapter in IBM’s extensive portfolio for the IBM BladeCenter HS23 that gives you four ports of 10GbE (two ports from the onboard IBM LOM chip and two from the IBM CFFh VFA pictured above), with the additional benefit of the two dedicated 1GbE ports, also on the LOM. Consequently, by choosing the IBM solution provided by Emulex, you can achieve up to 14 vNICs that you can use just for data, or you can even (through a Feature on Demand upgrade from IBM) run Fibre Channel over Ethernet (FCoE) or iSCSI over four vNICs across redundant ASICs. The fact that we can now utilise FCoE or iSCSI over the two ASICs is a big improvement over the previous generation of IBM blades, because the older IBM VFA II was CFFh-only. That meant one ASIC, meaning storage guys everywhere were getting nervous about running business-critical storage traffic on one card without redundancy.
The other cool thing about the connector highlighted is that the IBM Virtual Fabric adapter (Emulex 10GbE Virtual Fabric Adapter II for IBM BladeCenter HS23, P/N: 81Y3120) is the ONLY card available that can deliver four ports of 10GbE into the IBM BladeCenter HS23. If you configure the IBM BladeCenter HS23 with a different CFFh card, you will disable the onboard LOM chip’s 10GbE. In other words, configuring another IBM CFFh card not provided by Emulex disables two ports of 10GbE that IBM provides on the LOM, just to add another two ports from a different vendor. It doesn’t make sense, right? The fact is that IBM selected Emulex to provide I/O on the LOM, and as a result, the economics have changed.
So there are some key points here. IBM is providing an industry-leading solution with huge I/O virtualisation, made possible by the onboard LOM and the IBM Virtual Fabric adapter (Emulex 10GbE Virtual Fabric Adapter II (2+2) for IBM BladeCenter HS23, P/N: 81Y3120). Furthermore, by configuring a different IBM CFFh card in the HS23, you disable the onboard 10GbE ports, increasing the cost of the overall solution and decreasing the chances of winning against the competition. Have a golden eye for better solution design: configure the IBM onboard LOM and the IBM Virtual Fabric adapter (Emulex 10GbE Virtual Fabric Adapter II (2+2) for IBM BladeCenter HS23, P/N: 81Y3120) for the best functionality with the most vNICs at the most competitive price.
The advantages don’t stop there. IBM I/O solutions provided by Emulex have:
1. One universal driver: helps avoid complicated support and installation issues
2. Single pane of glass management: makes it easier to manage I/O within IBM Systems Director and VMware vCenter environments
3. Consistent ASIC: makes configuration of NIC teaming much easier
4. Investment protection: upgrade to FCoE and iSCSI via Feature on Demand, available from IBM
5. No charge for vNIC functionality, no matter what switch platform you connect to: unlike competing solutions, which require customers to use expensive proprietary switches
Beat the living daylights (OK, last Bond pun) out of your competition and configure IBM I/O connectivity solutions provided by Emulex.
Want to know more about convergence? Read my blog, Convergence – Field of Dreams or Design Reality, for more information on why convergence is necessary in tomorrow’s data center.
Follow me on Twitter: @AlHollingworth