DATA CENTER DESIGN GUIDE

Brocade VDX/VCS Data Center Layer 2 Fabric Design Guide for Brocade Network OS v2.1.1
Data Center Network and vCenter
In current data center environments, vCenter is primarily used to manage VMware ESX hosts. VMs are instantiated through the vCenter user interface. In addition to creating these VMs, the server administrator also associates them with Virtual Switches (VSs), Port Groups (PGs), and Distributed Virtual Port Groups (DVPGs). VMs and their network properties are primarily configured and managed in vCenter/vSphere. Many VM properties, such as MAC addresses, are configured automatically by vCenter, while other properties, such as VLAN and bandwidth, are assigned by vCenter through VAL.
Network OS Virtual Asset Discovery Process
The Brocade switch that is connected to hosts/VMs needs to be aware of network policies in order to allow or disallow traffic. In Network OS v2.1.0, the discovery process starts upon boot-up, when the switch is preconfigured with the relevant vCenters that exist in its environment. The discovery process entails making appropriate queries to the vCenter, in the form of Simple Object Access Protocol (SOAP) requests/responses exchanged with the vCenter.
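As a rough illustration of what such a SOAP exchange looks like on the wire, the sketch below builds an envelope for RetrieveServiceContent, the standard entry point of the vSphere Web Services API. This is not Network OS code; the helper name is hypothetical, and a real discovery client would send this over HTTPS and then issue Login and property-collector queries.

```python
# Illustrative sketch only: the kind of SOAP request a discovery client
# could send to vCenter. RetrieveServiceContent is the standard entry
# point of the vSphere Web Services API (namespace urn:vim25); the
# function name here is hypothetical, not a Network OS internal.

def build_retrieve_service_content_envelope():
    """Build a SOAP envelope for the vSphere RetrieveServiceContent call."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<soapenv:Envelope'
        ' xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"'
        ' xmlns:vim25="urn:vim25">'
        '<soapenv:Body>'
        '<vim25:RetrieveServiceContent>'
        '<vim25:_this type="ServiceInstance">ServiceInstance</vim25:_this>'
        '</vim25:RetrieveServiceContent>'
        '</soapenv:Body>'
        '</soapenv:Envelope>'
    )

envelope = build_retrieve_service_content_envelope()
print(envelope[:60])
```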
Figure 15: Virtual Asset Discovery Process

VM-Aware Network Automation MAC Address Scaling
In Network OS v2.1.1, the VM-aware network automation feature is enhanced to support up to 8000 VM MAC addresses. The feature can now discover up to 8000 VM MACs and support VM mobility at this scale within a VCS fabric.
Authentication
In Network OS v2.1.1, before any discovery transactions are initiated, the switch first authenticates with the vCenter. In order to authenticate with a specific vCenter, the following vCenter properties are configured on the switch: URL, login, and password. A new CLI command is added to support this configuration.
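As a sketch, the switch-side configuration might look like the following. The exact command syntax shown here is an assumption for illustration only, and the name, address, and credentials are placeholders; consult the Network OS v2.1.1 Command Reference for the actual command.

```
! Illustrative only -- verify syntax against the Command Reference
sw0(config)# vcenter MYVCENTER url https://192.0.2.10 username administrator password mypassword
```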
Port Profile Management
After discovery, the switch/Network OS enters the port profile creation phase, where it creates port profiles on the switch based on discovered DVPGs and port groups. This operation creates port profiles in the running-config of the switch. Additionally, Network OS automatically creates the interfaces/VLANs that are configured in the port profiles, which also end up in the running-config.
The AMPP mechanism built into Brocade switches may provide a faster way to correlate the MAC address of a VM with the port it is associated with. Network OS continues to allow this mechanism to learn the MAC address and associate the port profile with the port. The vCenter automation process simply enhances this mechanism by automatically creating port profiles and preassociating the MAC addresses before the VM is powered up.
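The pre-association idea can be sketched as a simple lookup table: discovery records each VM's MAC and its profile ahead of time, so when the MAC first appears on a port, the matching profile is already known. The class and method names below are illustrative, not Network OS internals.

```python
# Sketch of AMPP-style pre-association: the vCenter discovery step records
# MAC -> port-profile bindings before a VM powers up, so the switch can
# apply the profile as soon as the MAC is learned on a port.
# Names are hypothetical, not Network OS internals.

class ProfileBinder:
    def __init__(self):
        self.mac_to_profile = {}   # learned from vCenter discovery
        self.port_bindings = {}    # applied when traffic is seen

    def preassociate(self, mac, profile):
        """Record a MAC -> profile binding discovered from vCenter."""
        self.mac_to_profile[mac] = profile

    def mac_seen(self, mac, port):
        """Called when a MAC is first learned on a port; apply its profile."""
        profile = self.mac_to_profile.get(mac)
        if profile is not None:
            self.port_bindings[port] = profile
        return profile

binder = ProfileBinder()
binder.preassociate("00:50:56:81:2e:d5", "auto-dvpg-web")
print(binder.mac_seen("00:50:56:81:2e:d5", "Te 0/1"))  # auto-dvpg-web
```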
Usage Restriction and Limits
Network OS creates port profiles automatically based on discovered DVPGs or port groups. Port profiles created in this way carry the prefix "auto-" in their names. Users should not modify the policies within these port profiles; if they do, the discovery mechanism may later overwrite their changes. Users should also never create port profiles whose names begin with "auto-" via the CLI or from a file replay.
The maximum number of VLANs that can be created on the switch is 3583, and port profiles are limited to 256 per switch. A vCenter configuration that exceeds these limits causes Network OS to generate an error.
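These limits can be expressed as a small validation check. The function below is a standalone sketch that mirrors the rule described above, not the actual Network OS check.

```python
# Sketch of the scale limits described above: at most 3583 VLANs and
# 256 port profiles per switch. A real switch rejects configurations
# beyond these limits; this standalone check just mirrors that rule.

MAX_VLANS = 3583
MAX_PORT_PROFILES = 256

def validate_discovered_config(num_vlans, num_port_profiles):
    """Return a list of limit violations for a discovered vCenter config."""
    errors = []
    if num_vlans > MAX_VLANS:
        errors.append(f"too many VLANs: {num_vlans} > {MAX_VLANS}")
    if num_port_profiles > MAX_PORT_PROFILES:
        errors.append(
            f"too many port profiles: {num_port_profiles} > {MAX_PORT_PROFILES}"
        )
    return errors

print(validate_discovered_config(100, 50))    # []
print(validate_discovered_config(4000, 300))  # two violations
```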
Third-Party Software
To support the integration of vCenter and Network OS, the following third-party software is added to Network OS:

Open Source PHP Module
• Net-cdp-0.09
• Libnet 1.1.4
sw0# show vnetwork vms
VirtualMachine   Associated MAC     IP Addr       Host
===============  =================  ============  ==============================
RH5-078001       00:50:56:81:2e:d5  192.168.5.2   esx4-248803.englab.brocade.com
RH5-078002       00:50:56:81:08:3c  -             esx4-248803.englab.brocade.com

Please refer to the Network OS Administrator's Guide, v2.1.1 for more information on VM-Aware Network Automation.
BUILDING A 2-SWITCH TOR VCS FABRIC

Traditionally, at the access layer in the data center, servers have been configured with standby links to Top of Rack (ToR) switches running STP or other link-level protocols to provide resiliency. As server virtualization increases the density of servers in a rack, the demand for active/active server connectivity, link-level redundancy, and multichassis EtherChannel for node-level redundancy is increasing. A 2-switch ToR VCS fabric satisfies each of these conditions with minimal configuration and setup.
Topology
The variables that affect a 2-switch ToR design are oversubscription, the number of ports required for server/storage connections, and bandwidth (1/10 GbE). Latency is not a consideration here, as only a single switch is traversed (under normal operating conditions), as opposed to multiple switches.
Oversubscription, in a 2-switch topology, is a simple ratio of uplinks to downlinks. In a 2×60-port switch ToR fabric with 4 ISL links, 112 usable ports remain. If, for example, 40 ports are used for uplinks and 80 for downlinks, oversubscription is 2:1. However, if the servers are dual-homed in an active/active topology, only 40 servers can be connected, with 1:1 oversubscription.
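The arithmetic above can be checked with a quick sketch. Each ISL consumes one port on each of the two switches, and the oversubscription ratio is simply downlink capacity over uplink capacity (for same-speed ports); the 80/32 split matches the 2.5:1 figure used in the implementation example below.

```python
# Port and oversubscription arithmetic for a 2-switch ToR fabric:
# two 60-port switches joined by 4 ISLs, each ISL consuming one port
# on each switch, leaving 112 usable ports.

total_ports = 2 * 60
isl_ports = 2 * 4          # 4 ISLs, one port on each end
usable = total_ports - isl_ports
print(usable)              # 112

def oversubscription(downlinks, uplinks):
    """Ratio of downlink to uplink capacity (same-speed ports)."""
    return downlinks / uplinks

print(oversubscription(80, 40))  # 2.0 -> 2:1
print(oversubscription(80, 32))  # 2.5 -> 2.5:1
```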
Licensing
VCS will operate in a 2-switch topology without the need to purchase additional software licenses. However, if FCoE support is needed, a separate FCoE license must be purchased.
For VCS configurations that exceed two switches, VCS licenses are required to form a VCS fabric. In addition, if FCoE is required, an FCoE license—in addition to a VCS license—is required.
Implementation
Figure 16 shows a sample topology using a 2×60-port switch Brocade VDX 6720 configuration. This topology provides 2.5:1 oversubscription and 80 server ports to provide active/active connectivity for a rack of 50 servers and/or storage elements. Table 2 shows the Bill of Materials (BOM).
BUILDING A 2-SWITCH AGGREGATION LAYER USING VCS

At the aggregation layer in the data center, ToR switches have traditionally had standby links to aggregation switches running STP, or other link-level protocols, to provide resiliency. As server virtualization increases the density of servers in a rack, the demand for bandwidth from the access switches must increase, to reduce oversubscription. This in turn drives the demand for active/active uplink connectivity from ToR switches. This chapter discusses the best practices for setting up a two-node VCS fabric for aggregation, which expands the Layer 2 domain without the need for Spanning Tree.
Topology
The variables that affect a 2-switch design are oversubscription and latency. Oversubscription is directly dependent on the number of uplinks, downlinks, and ISLs. Depending upon the application, latency can be a deciding factor in the topology design.
Oversubscription, in a 2-switch topology, is a simple ratio of uplinks to downlinks. In a 2×60 port switch Fabric with 4 ISL links, 112 usable ports remain. Any of these 112 ports can be used as either uplinks or downlinks to give the desired oversubscription ratio.
Licensing
VCS operates in a 2-switch topology without the need to purchase additional software licenses. However, if FCoE support is needed, a separate FCoE license must be purchased.
For VCS configurations that exceed 2 switches, VCS licenses are required to form a VCS fabric. In addition, if FCoE is required, an FCoE license in addition to a VCS license is required.
Figure 17 shows a sample topology using a 2×60-port switch Brocade VDX 6720 configuration. This topology provides 2.5:1 oversubscription and 80 downlink ports to provide active/active connectivity to 20 Brocade FCX 648 switches with 4×10G uplinks each. Each of these Brocade FCX switches has 48×1G downlink ports, providing 960 (48×20) 1 GbE server ports. Table 3 shows the BOM.
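The port counts in this topology can be verified quickly:

```python
# Port arithmetic for the 2-switch aggregation fabric described above:
# 20 Brocade FCX 648 access switches, each with 4 x 10G uplinks and
# 48 x 1G server-facing downlinks.

fcx_switches = 20
uplinks_per_fcx = 4
downlink_ports_needed = fcx_switches * uplinks_per_fcx
print(downlink_ports_needed)   # 80 downlink ports on the VCS pair

server_ports = fcx_switches * 48
print(server_ports)            # 960 x 1G server ports

usable_vcs_ports = 2 * 60 - 2 * 4   # two 60-port switches, 4 ISLs
uplink_ports = usable_vcs_ports - downlink_ports_needed
print(downlink_ports_needed / uplink_ports)  # 2.5 -> 2.5:1 oversubscription
```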
Figure 17: 2-Switch VCS Fabric in Aggregation

Building the Fabric
Please refer to the VCS Nuts and Bolts (Within the Fabric) section. The same best practices apply to a 2-switch aggregation solution.
APPENDIX A: VCS USE CASES

Brocade VCS fabric technology can be used in multiple places in the network. Traditionally, data centers are built using three-tier architectures, with access layers providing server connectivity, aggregation layers aggregating the access layer devices, and the data center core layer acting as an interface between the campus core and the data center. This appendix describes the value that VCS fabric technology delivers in various tiers of the data center.
VCS Fabric Technology in the Access Layer
Figure 19 shows a typical deployment of VCS Fabric technology in the access layer. The most common deployment model in this layer is the 2-switch ToR, as discussed previously. In the access layer, VCS fabric technology can be inserted in existing architectures, as it fully interoperates with existing LAN protocols, services, and architecture. In addition, VCS Fabric technology delivers additional value by allowing active/active server connectivity to the network without additional management overhead.
At the access layer, VCS Fabric technology allows 1 GbE and 10 GbE server connectivity and flexibility of oversubscription ratios, and it is completely auto-forming, with zero configuration. Servers see the VCS ToR as a single switch and can fully utilize the provisioned network capacity, thereby doubling the bandwidth of network access.
Figure 19: VCS Fabric Technology in the Access Layer
VCS Fabric Technology in the Collapsed Access/Aggregation Layer
Traditionally, Layer 2 (L2) networks have been broadcast-heavy, which forced data center designers to build smaller L2 domains to limit both broadcast domains and failure domains. However, in order to seamlessly move virtual machines in the data center, it is absolutely essential that the VMs are moved within the same Layer 2 domain. In traditional architectures, therefore, VM mobility is severely limited to these small L2 domains.
Brocade has taken a leadership position in the market by introducing Transparent Interconnection of Lots of Links (TRILL)-based VCS Fabric technology, which eliminates all these issues in the data center. Figure 20 shows how a scaled-out self-aggregating data center edge layer can be built using VCS Fabric technology. This architecture allows customers to build resilient and efficient networks by eliminating STP, as well as drastically reducing network management overhead by allowing the network operator to manage the whole network as a single logical switch.
Figure 20: VCS Fabric Technology in the Access/Aggregation Layer
VCS Fabric Technology in a Virtualized Environment
Today, when a VM moves within a data center, the server administrator needs to open a service request with the network admin to provision the machine policy on the new network node where the machine is moved. This policy may include, but is not limited to, VLANs, Quality of Service (QoS), and security for the machine. VCS Fabric technology eliminates this provisioning step and allows the server admin to seamlessly move VMs within a data center by automatically distributing and binding policies in the network at a per-VM level, using the Automatic Migration of Port Profiles (AMPP) feature. AMPP enforces VM-level policies in a consistent fashion across the fabric and is completely hypervisor-agnostic. Figure 21 shows the behavior of AMPP in a 10-node VCS fabric.
Figure 21: VCS Fabric Technology in a Virtualized Environment
VCS Fabric Technology in Converged Network Environments
VCS Fabric technology has been designed and built from the ground up to support shared storage access for thousands of applications or workloads. VCS Fabric technology provides lossless Ethernet using DCB and TRILL, which allows it to deliver multihop, multipath, highly reliable and resilient FCoE and Internet Small Computer Systems Interface (iSCSI) storage connectivity. Figure 22 shows a sample configuration with iSCSI and FCoE storage connected to the fabric.
Figure 22: VCS Fabric Technology in a Converged Network
Brocade VDX 6710 Deployment Scenarios
In this deployment scenario, the Brocade VDX 6710s (VCS fabric-enabled) extend benefits natively to 1 GbE servers.
Figure 23: Brocade VDX 6710 Deployment Scenario
RELATED DOCUMENTS

For more information about Brocade VCS Fabric technology, please see the Brocade VCS Fabric Technical
For the Brocade Network Operating System Admin Guide and Network OS Command Reference:
http://www.brocade.com/downloads/documents/product_manuals/B_VDX/NOS_AdminGuide_v211.pdf
http://www.brocade.com/downloads/documents/product_manuals/B_VDX/NOS_CommandRef_v211.pdf

The Network OS Release Notes can be found at http://my.brocade.com
For more information about the Brocade VDX Series of switches, please see the product data sheets:
Brocade VDX 6710 Data Center Switch:
Brocade VDX 6720 Data Center Switch:
Brocade VDX 6730 Data Center Switch:
ABOUT BROCADE

As information becomes increasingly mobile and distributed across the enterprise, today's organizations are transitioning to highly virtualized infrastructure, which often increases overall IT complexity. To simplify this process, organizations must have reliable, flexible network solutions that utilize IT resources whenever and wherever needed, enabling the full advantages of virtualization and cloud computing.
As a global provider of comprehensive networking solutions, Brocade has more than 15 years of experience in delivering Ethernet, storage, and converged networking technologies that are used in the world's most mission-critical environments. The Brocade One™ strategy reduces complexity and disruption by removing network layers, simplifying management, and protecting existing technology investments. As a result, organizations can utilize cloud-optimized networks to achieve their goals of non-stop operations in highly virtualized infrastructures where information and applications are available anywhere.
For more information, visit www.brocade.com.
© 2012 Brocade Communications Systems, Inc. All Rights Reserved. 04/12 GA-DG-434-00 Brocade, Brocade Assurance, the B-wing symbol, DCX, Fabric OS, MLX, SAN Health, VCS, and VDX are registered trademarks, and AnyIO, Brocade One, CloudPlex, Effortless Networking, ICX, NET Health, OpenScript, and The Effortless Network are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned may be trademarks of their respective owners.
Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.