CCIE Data Center :: Unified Computing

Nexus 1000v Overview


 


Table of Contents
    1 Introduction 0h 30m
    2 Background and Physical Architecture 0h 58m
    3 A Look at the Hardware 0h 31m
    4 Hardware Options 0h 40m
    5 Configuration Options 0h 29m
    6 Virtualization and Service Profiles 0h 31m
    7 Initial Setup of UCS 0h 40m
    8 LAN Connectivity :: Part 1 0h 17m
    9 LAN Connectivity :: Part 2 1h 14m
    10 LAN Connectivity :: Part 3 0h 45m
    11 SAN Connectivity 0h 25m
    12 Server Pools 0h 06m
    13 Addressing Recommendations 0h 29m
    14 Building Pools and Profiles :: Part 1 1h 18m
    15 Building Pools and Profiles :: Part 2 1h 34m
    16 Spinning up Blades :: Part 1 1h 16m
    17 Spinning up Blades :: Part 2 0h 54m
    18 Spinning up Blades :: Part 3 0h 14m
    19 QoS and Network Control 1h 00m
    20 Nexus 1000v Overview 0h 54m
    21 Nexus 1000v Installation :: Part 1 0h 42m
    22 Nexus 1000v Installation :: Part 2 1h 02m
    23 Nexus 1000v Migration 0h 37m
    24 Nexus 1000v vPC-HM MAC Pinning 0h 33m
    25 Nexus 1000v ACLs, QoS and vMotion 0h 21m
    26 VM-FEX :: Part 1 0h 41m
    27 VM-FEX :: Part 2 0h 38m
    28 Adapter-FEX 0h 29m
    29 Administration 1h 05m
    30 Application Networking Services 1h 30m
    Total Duration   22h 23m
    0:00:22 And now it's time to take a look at the Nexus 1000v.
    0:00:26 We'll be spending a good amount of time on that.
    0:00:28 And then we will take a look at VM-FEX,
    0:00:32 and then also Adapter-FEX.
    0:00:37 So first of all, the Nexus 1000v: this is just another modular Cisco chassis switch.
    0:00:43 The only difference is there is no cat. What do I mean by this?
    0:00:47 Well, this is more commonly used in the wireless world.
    0:00:51 But Albert Einstein was once asked to describe radio, and he said, "The wire telegraph is a kind of very, very long cat.
    0:00:59 You pull its tail in New York, and its head is meowing in Los Angeles.
    0:01:03 And radio operates in the same way: you send signals here, and they receive them there.
    0:01:07 The only difference is there's no cat."
    0:01:10 So anyway, I just used this to explain that it's the same thing as a modular Cisco Nexus switch, just like a 7K.
    0:01:18 It's just that there's no hardware.
    0:01:19 So what does this mean?
    0:01:21 Well, this creates a distributed virtual switch, also referred to as a virtual distributed switch or VDS,
    0:01:29 in VMware, and actually now it is supported on Hyper-V.
    0:01:33 And it will be supported on other hypervisor platforms as well.
    0:01:39 This is made up of Virtual Supervisor Modules, or VSMs, always deployed as a pair of modules.
    0:01:46 And of course those take care of the control and management plane.
    0:01:52 And then Virtual Ethernet Modules, or VEMs.
    0:01:56 And these make up the data plane.
    0:01:59 The Virtual Supervisor Modules are either hardware or software; we'll take a look at diagrams
    0:02:04 and talk about what the different options are for this.
    0:02:07 The VEMs, the Virtual Ethernet Modules,
    0:02:10 these are actually software that gets installed on your hypervisor itself.
    0:02:16 So for instance, it's installed as a VIB file in VMware ESXi or ESX.
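    As a quick reference, installing the VEM software on an ESXi host from the command line looks roughly like this; the VIB filename below is just a placeholder and depends on the N1Kv and ESXi versions in use:

        # On the ESXi host (SSH or ESXi Shell); the filename is illustrative only
        esxcli software vib install -v /tmp/cross_cisco-vem-v160-esx.vib
        # Verify that the VEM loaded and is running
        vem status
        vemcmd show version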
    0:02:24 This can also include other modules called VSBs, or Virtual Service Blades.
    0:02:32 And these can be things such as the VSG, the Virtual Security Gateway,
    0:02:37 the ASA 1000v,
    0:02:40 vWAAS.
    0:02:41 And these use something called vPath 2.0 for interception and control of packets.
    0:02:49 And there's also something soon to be released, which was announced at Cisco Live this past year,
    0:02:55 called the Cloud Services Router, or CSR.
    0:03:01 So this is basically running on, or utilizing, the same features, functionality and architecture as the Nexus 1000v.
    0:03:12 Although instead of a switch, it's for...
    0:03:17 it's obviously for a router platform.
    0:03:24 Each server in the datacenter is represented as basically a line card in the Cisco Nexus 1000v.
    0:03:31 And it can be managed as if it were a line card in a physical Cisco switch.
    0:03:35 Now we're gonna take a look at some diagrams as soon as we're done with the slides.
    0:03:38 And this should begin to click and make a lot more sense when you see those diagrams.
    0:03:46 Taking a look at the Cisco Nexus 1000v, and how it pertains to, or needs, or utilizes UCS.
    0:03:54 Well, first of all, UCS is compatible with the Nexus 1000v, despite what some people have said.
    0:04:00 That is to say, they work perfectly well together, but one doesn't really need to know about the other, necessarily.
    0:04:07 So UCS doesn't really need to know about the Nexus 1000v.
    0:04:10 And the Nexus 1000v doesn't necessarily need to know that it's running on top of the UCS blade series.
    0:04:17 As long as we take note, as the administrators, and don't mis-provision it.
    0:04:24 So, for instance, the Nexus 1000v is compatible with something called vPC-HM, or Virtual Port Channel Host Mode,
    0:04:34 as long as we're using MAC pinning. If we were using LACP, that would not work on a B-Series chassis, because we cannot pin or aggregate traffic together
    0:04:49 across both fabric interconnects, fabric interconnect A and fabric interconnect B.
    0:04:56 We can load balance traffic and say, "Hey," let's say we have 10 virtual machines on 10 V-Eth ports on the N1Kv,
    0:05:07 and we say all the even-numbered VMs are gonna go out fabric interconnect A,
    0:05:12 and all the odd VMs are gonna go out fabric interconnect B.
    0:05:18 That's perfectly supported, and that's essentially what MAC pinning does: it pins based on the source MAC address.
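    To make that concrete, a minimal sketch of an uplink port profile using vPC-HM with MAC pinning on the N1Kv looks something like this; the profile name and VLAN list are just examples:

        port-profile type ethernet SYSTEM-UPLINK
          vmware port-group
          switchport mode trunk
          switchport trunk allowed vlan 10,20,110
          channel-group auto mode on mac-pinning
          no shutdown
          state enabled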
    0:05:27 Now, the N1Kv, or Nexus 1000v, is not compatible with allocating dynamic vNICs when you're creating your service profile.
    0:05:39 Dynamic vNICs create VM-FEX, sometimes referred to as hardware VN-Tag.
    0:05:45 VM-FEX and the N1Kv are mutually exclusive from one another.
    0:05:51 So really, both VM-FEX and the Nexus 1000v create distributed virtual switches,
    0:06:03 in VMware or whatever hypervisor.
    0:06:06 But they obviously cannot both create a distributed virtual switch at the same time on the same ESXi host.
    0:06:12 It doesn't make any sense. It can't be... well, I suppose it could be, but it doesn't make any sense to be running multiple distributed virtual switches.
    0:06:20 Not to mention that the VIB files that you'll see we install are actually slightly different from each other.
    0:06:31 They look the same, so just to confuse you, Cisco used the same, or at least similar-looking, VIB files to implement this.
    0:06:39 Really, they don't do it to confuse you; they did it because the Virtual Ethernet Module that goes on the ESXi host,
    0:06:45 this is what creates the distributed virtual switch.
    0:06:48 The difference between VM-FEX and the Nexus 1000v
    0:06:53 is the control and management plane.
    0:06:55 With the Nexus 1000v, we have Virtual Supervisor Modules, or VSMs, and those can be virtual machines,
    0:07:02 or they can be running on hardware, but that's using the VSMs.
    0:07:06 With VM-FEX, UCS Manager, really the fabric interconnect itself, that is the control plane.
    0:07:15 Okay, so V-Eth ports are created there on the fabric interconnects.
    0:07:19 And we're gonna install both of these from scratch.
    0:07:22 And we'll take a look at both of them independent of one another.
    0:07:27 So we'll create two ESXi hosts, on blades 1 and 2, for our Nexus 1000v.
    0:07:34 And then we'll go back and power those down later, and we'll power up a new blade with a new ESXi host install,
    0:07:43 just for VM-FEX.
    0:07:47 Now, why are there multiple options?
    0:07:48 Well, one of the original reasons was because the Nexus 1000v, while it's currently free, in what they call their "freemium"...
    0:07:58 Interesting marketing term.
    0:08:00 ...freemium packaging, which basically means that you get all the base features of the Nexus 1000v,
    0:08:06 and the Virtual Supervisor Module and the Virtual Ethernet Module, for free.
    0:08:10 And for the advanced features, such as DHCP snooping, Dynamic ARP Inspection,
    0:08:17 IP Source Guard, all three of those,
    0:08:19 the three of those always go together,
    0:08:21 and then things like SGTs and SGACLs, Security Group Tags and Security Group ACLs,
    0:08:28 with the new bring-your-own-device type
    0:08:33 way of tagging and marking and controlling traffic in your network,
    0:08:39 then you have to pay an additional premium, and I think it's $695 per CPU socket.
    0:08:48 But originally the Nexus 1000v cost money just for the basic install; it didn't used to be free.
    0:08:56 And so one of the ways that you could have insight from your UCS Manager blade server,
    0:09:03 or blade series and cluster,
    0:09:06 into the VMs, was to be able to use VM-FEX.
    0:09:11 Okay, and so this would allow the fabric interconnects to provision port profiles and port groups
    0:09:19 for consumption by your ESXi VMware cluster
    0:09:26 and VMs.
    0:09:27 So this is the reason that there are two separate types of distributed virtual switches, but just keep in mind they're mutually exclusive.
    0:09:34 Now I'll unpack this a lot more coming up in just a bit.
    0:09:39 So vPath. The vPath protocol is always running in the Virtual Ethernet Module.
    0:09:45 And this directs traffic to, and you're gonna see it in various Cisco documentation referred to as either a VSN or a VSB.
    0:09:53 A VSN is a Virtual Services Node, a VSB is a Virtual Service Blade. Okay?
    0:09:59 But anyhow, this directs traffic, assuming that we have Virtual Service Nodes or Blades aside from just our Virtual Supervisor Module,
    0:10:09 this directs traffic to those blades, such as the VSG, the Virtual Services... or sorry,
    0:10:16 Virtual Security Gateway,
    0:10:17 or even the ASA 1000v.
    0:10:20 And it applies security or optimization policies once it gets to that services node.
    0:10:27 And then the traffic is sent back to the Virtual Ethernet Module, along with the ability to now fast-switch,
    0:10:34 or kind of fast-path, the traffic directly in the VEM.
    0:10:37 So basically it takes a look at the first few packets, and by "few" I mean as many packets as necessary to denote what the flow is,
    0:10:46 basically to identify the flow of traffic.
    0:10:49 And once it gets the flow of traffic, the Virtual Security Gateway or the ASA 1000v,
    0:10:57 possibly even the combination of both, or the vWAAS, all of those either security or optimization nodes,
    0:11:05 then tag the traffic and send that traffic back to the VEM, the actual Virtual Ethernet Module running in ESXi,
    0:11:15 and tell it, "Hey, for all future traffic for this flow, this is where it's allowed to go.
    0:11:21 These are the other VM guests it's allowed to talk to." And this actually becomes extremely important as we see all of our server computing platforms being virtualized in the datacenter.
    0:11:36 And we have a lot more east-to-west traffic, and by that I mean the traffic is not leaving.
    0:11:42 It's not necessarily coming in from the outside world, or leaving to go out through the fabric interconnects to the upstream switches.
    0:11:52 Another way to say it is we're not passing through physical hardware anymore.
    0:11:55 Sometimes we are, certainly, but not all the time.
    0:11:59 And a lot of the traffic these days is just east to west.
    0:12:02 So it's just going between virtual machines,
    0:12:06 whether on a different ESXi host or not, but still staying on the same fabric interconnect, where we don't really have that granular policy control or anything like that.
    0:12:17 We can't do ACLs yet,
    0:12:21 or things of that nature, but we need to be able to apply very granular network analysis,
    0:12:31 optimization,
    0:12:34 different... and these could even be, well anyway, I'll save that for later.
    0:12:40 But create security groups, don't allow certain servers to talk to each other,
    0:12:45 allow certain servers in a given group to talk to each other.
    0:12:48 We need these security, and sometimes optimization, techniques now that we have all this east-to-west traffic.
    0:12:55 And I'll show this a lot more in the diagrams, so this is really what the vPath protocol allows for.
    0:13:03 Okay, so as I mentioned, we only need the first packets of a traffic flow sent to the VSN or VSB, and then the subsequent packets of the flow are forwarded directly on the ESXi host itself.
    0:13:14 Now on we go to the installation of the Nexus 1000v.
    0:13:18 The Virtual Supervisor Module is going to install something called opaque data in the VMware vCenter
    0:13:26 for its distributed virtual switch.
    0:13:29 This is done using something called an SVS connection, or Server Virtualization Switch connection.
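    For reference, the SVS connection configured on the VSM looks roughly like the following sketch; the connection name, the vCenter IP address, and the datacenter name here are all just placeholders:

        svs connection VC
          protocol vmware-vim
          remote ip address 10.0.0.20
          vmware dvs datacenter-name DC1
          connect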
    0:13:35 The VSMs and VEMs should all be on the same version; it's important that they are.
    0:13:41 The only time that they are out of version with each other is when you're doing an upgrade.
    0:13:45 And there's obviously a very planned and specific upgrade path on Cisco.com from any given version to any other version.
    0:13:56 The control and management networks should, it probably goes without saying, be quite low latency.
    0:14:03 This is actually more critical than bandwidth itself.
    0:14:08 So vCenter downloads this information into ESXi for the VEMs to use whenever the host is added to the Nexus 1000v distributed virtual switch.
    0:14:22 Now, all VEM modules, or Virtual Ethernet Modules, all their heartbeats should be increasing at roughly the same rate.
    0:14:29 And you can use the command show module vem counters, which shows the heartbeats,
    0:14:35 to see if they are increasing at the same rate.
    0:14:38 This will tell you if they're being connected and staying connected.
    0:14:43 Now, a VEM module can miss... I don't think I actually have this on here.
    0:14:47 But a VEM module can miss, or I should really say, a Virtual Supervisor Module,
    0:14:52 a VSM, can miss up to 6 heartbeats from a VEM before considering it offline.
    0:14:59 So no more than 6 seconds can elapse, because a heartbeat is sent every one second.
    0:15:09 So no more than 6 seconds can elapse without it thinking that the Ethernet module, or blade, got removed from the virtual chassis.
    0:15:23 And it's always a good idea, a good practice, to hardcode the VEM to the module number
    0:15:30 before you add the ESXi host to the Nexus 1000v.
    0:15:35 Basically, the Nexus 1000v supervisor modules have a command for pre-provisioning the module slot.
    0:15:44 On the VSM, your primary and secondary SUPs are always gonna be modules 1 and 2. Always.
    0:15:51 Then I'll have up to 64 additional VEM modules per chassis, per virtual chassis.
    0:16:03 So it means I can have up to 64 ESXi hosts per distributed virtual switch, or per Nexus 1000v virtual switch.
    0:16:10 And the modules, the ESXi hosts, are identified to the supervisor modules
    0:16:17 by their UUID.
    0:16:19 So what I can do, before I ever bring them online,
    0:16:23 is get the UUID of the ESXi host.
    0:16:27 In fact, I have this right here: get the UUID from the ESXi host by going out to the shell,
    0:16:33 SSH or direct shell,
    0:16:35 and saying esxcfg-info -u.
    0:16:41 And then you just make sure that you use lowercase letters
    0:16:44 when you are copying. This actually returns uppercase letters, so just make sure to convert those.
    0:16:50 And then you will paste that UUID under the module configuration in the VSM,
    0:16:57 before you bring that VEM online.
    0:17:00 And then that VEM will get added in the particular order that you wish.
    0:17:05 You can also just do them dynamically, and it will allow you to populate them dynamically.
    0:17:11 But best practice is to hardcode it.
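    A rough sketch of that workflow is shown below. The UUID and slot number are just examples, and the exact host-id command under the vem slot varies a bit between N1Kv releases (host id versus host vmware id), so check the configuration guide for your version:

        # On the ESXi host, grab the system UUID (returned in uppercase; convert to lowercase)
        ~ # esxcfg-info -u
        44454C4C-3300-104A-8052-B6C04F564433

        ! On the VSM, pre-provision that host as module slot 3
        n1kv(config)# vem 3
        n1kv(config-vem-slot)# host id 44454c4c-3300-104a-8052-b6c04f564433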
    0:17:13 In the Virtual Ethernet Module, the VEM, we're not so much provisioning ports directly as we will be provisioning port profiles.
    0:17:25 And then those port profiles will be attached to the ports.
    0:17:29 And any time we make a change to the port profile, it will change the port.
    0:17:36 So there is a command called inherit port-profile; we're gonna be doing all of this live,
    0:17:40 so this is for your reference, and you can come back and take a look at it again later.
    0:17:46 But we also just wanna talk about it before we go out and do these things.
    0:17:51 So there's two different types of port profiles.
    0:17:54 First of all we have hardware ports, Eth, and then we have virtual ports, V-Eth.
    0:18:01 The Eth port profiles, these are basically tied to hardware network interface cards,
    0:18:08 and these are uplinks.
    0:18:10 Now, in the case of Nexus... sorry, in the case of UCS Manager and blade servers,
    0:18:18 these hardware NICs, we know, are another level of virtualization; they are vNICs.
    0:18:26 Okay? But for all intents and purposes, Eth is a hardware uplink.
    0:18:31 Okay? So think of it as a physical card, whether it's a vNIC or an actual NIC; that's an uplink.
    0:18:38 V-Eth ports, on the other hand, these are tied to the southbound virtual machines.
    0:18:47 Now, there's also something called a VLAN... well, of course there are VLANs in our Eth and V-Eth profiles.
    0:18:53 But there's also something called system VLANs.
    0:18:56 Now, VLANs, not system VLANs but just VLANs, are traditional VLANs. Okay?
    0:19:02 802.1Q tagging, whatever the VLAN number is, we create it in a virtual switch.
    0:19:10 It sends up to a trunk. We can have access ports; typically our V-Eth ports are in access mode,
    0:19:19 where we say switchport mode access with a particular VLAN, so whatever traffic comes in
    0:19:24 untagged, with no .1Q header, we put it in that switchport access VLAN.
    0:19:32 And then our uplink Ethernet ports are trunk ports, just the same as a standard switch.
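    Complementing the uplink profile sketched earlier, a V-Eth port profile and the inherit port-profile command look roughly like this; the profile name, VLAN, and interface number are just examples, and in practice the V-Eth profile is normally consumed from vCenter as a port group rather than assigned by hand:

        ! vEthernet port profile, presented to vCenter as a port group for VMs
        port-profile type vethernet VM-DATA-110
          vmware port-group
          switchport mode access
          switchport access vlan 110
          no shutdown
          state enabled

        ! Manually attaching a profile to an interface uses inherit
        interface vethernet 18
          inherit port-profile VM-DATA-110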
    0:19:39 So what's this idea of the system VLAN?
    0:19:41 Well, the idea is that it's used to give immediate cut-through access to the VMkernel.
    0:19:47 So, since this is a virtual switch, it's not a hardware switch,
    0:19:52 the actual software, or really the operating system of... and we're just gonna use ESXi and VMware,
    0:20:01 since it was the first and most heavily used. Like I said, about 95% of the hypervisor installations out there are using that,
    0:20:09 so that's what we'll talk to mostly.
    0:20:12 But let's just say ESXi is booting.
    0:20:14 Well, the VMkernel, the management VMkernel, needs to be able to talk to vCenter.
    0:20:21 Well, what if it's riding over top of a new distributed virtual switch
    0:20:27 that doesn't really come online until it talks to vCenter
    0:20:32 and talks to the Virtual Supervisor Module? So we kinda got this chicken-and-egg situation,
    0:20:38 where which one comes first?
    0:20:40 So the idea of this system VLAN basically allows us cut-through access.
    0:20:44 It basically says, hey, VMkernel, you can go ahead, and on this VLAN only,
    0:20:49 or VLANs if we configure multiple, you can go ahead and talk through, basically like your own local switch, a pseudo local switch,
    0:20:58 to the network on these VLANs. And then when the Virtual Ethernet Module comes up and online, registered in the distributed virtual switch,
    0:21:09 then you'll go ahead and talk through that Ethernet blade, the Virtual Ethernet Module blade.
    0:21:18 Okay? So we don't have to run... in fact we'll show an example of this in a diagram in just a moment.
    0:21:24 We don't have to run our ESXi kernel interfaces through the distributed virtual switch; they can continue to run in their own local switch,
    0:21:34 vSwitch 0 or whatever.
    0:21:37 But it's a good practice, and there's a lot of benefits to running your ESXi kernel interfaces on the actual VEM itself, on the Nexus 1000v.
    0:21:50 Okay, there are two different modes for the Nexus 1000v.
    0:21:54 There's Layer 2 mode, and there's also Layer 3 mode.
    0:21:58 In Layer 2 mode, the VEMs have to be on the same VLAN as the Virtual Supervisor Module's control VLAN.
    0:22:07 Now, that was the older model, and for the last few years now Cisco has recommended Layer 3.
    0:22:13 So basically, as the VEMs come online, they talk their standard AIPC protocol, which is the standard protocol that any Nexus 7000 switch,
    0:22:28 or even the Catalyst 6500 switch, uses for the blades to talk to the supervisor module.
    0:22:35 All that traffic is encapsulated in UDP port 4785,
    0:22:40 using the command capability l3control on the V-Eth profile for the ESXi VMkernel.
    0:22:49 And that has to be there before we migrate that kernel interface over from the standard virtual switch, vSwitch 0,
    0:22:56 over to the distributed virtual switch of the N1Kv.
    0:23:01 So that traffic is encapsulated, and it's sent up to the Virtual Supervisor Module across VLANs.
    0:23:09 So my ESXi modules do not have to be in the same VLANs as my VSMs.
    0:23:16 Now, even if they are in the same VLANs, it's still a good idea to use Layer 3 mode, just because all of a sudden Layer 3 tools become available to us
    0:23:25 for utilization and troubleshooting, really simple things like ping and traceroute.
    0:23:31 Okay? It's also important that system VLANs are used for both the V-Eth and the Eth port profiles for this Layer 3 control.
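    Here's a minimal sketch of what that Layer 3 control setup looks like, assuming the ESXi management VMkernel lives in VLAN 10; the profile names and the VLAN number are just examples:

        ! The VSM is placed in Layer 3 SVS mode (here using its mgmt0 interface)
        svs-domain
          svs mode L3 interface mgmt0

        ! vEth profile for the ESXi management VMkernel; note capability l3control
        port-profile type vethernet L3-CONTROL
          capability l3control
          vmware port-group
          switchport mode access
          switchport access vlan 10
          system vlan 10
          no shutdown
          state enabled

        ! The uplink profile carrying that VLAN needs it as a system VLAN too
        port-profile type ethernet SYSTEM-UPLINK
          switchport trunk allowed vlan add 10
          system vlan 10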
    0:23:43 We're gonna take a look, in a little bit, at the actual configuration after we do the installation of the Nexus 1000v.
    0:23:51 We'll take a look at creating port channels.
    0:23:53 Now, there is the ability to do what I mentioned, which is vPC host mode, virtual port channel host mode,
    0:24:04 based on MAC pinning, and if we're not on a blade server, we can also do it based on LACP or based on CDP.
    0:24:14 And there's also something called LACP offload, where the negotiation of LACP is offloaded from the Virtual Supervisor Module to the Virtual Ethernet Module.
    0:24:27 And this is important if we were doing LACP, and remember, the Nexus 1000v does not have to be running on the UCS blade series.
    0:24:37 Okay? It could be on a UCS C-Series chassis; it could also be on absolutely any other vendor's hardware server platform,
    0:24:46 as long as we're running a supported hypervisor. So, for instance, ESXi 4, ESX 4, and ESXi 5
    0:24:55 are all supported by the Nexus 1000v.
    0:24:57 So that could be running on any server hardware.
    0:25:00 If it's running on an actual physical server with, let's say, 4 NICs, 4 hardware Ethernet NICs,
    0:25:09 at least 4 NIC ports, whether they're on 1 PCIe card or 2 PCIe cards really makes no difference,
    0:25:15 in a physical pizza box server, a rack-mount server,
    0:25:19 then I can have all of those LACP-aggregated up to the northbound switch.
    0:25:25 Okay? You cannot use LACP with the UCS blade series, but you can use MAC pinning.
    0:25:30 But if I do want to do LACP from, let's say, a pizza box rack-mount server,
    0:25:36 and I want that to go northbound to the switch,
    0:25:38 then what happens if my Virtual Supervisor Module... really, what happens if both of them go offline?
    0:25:47 Well, because this is a truly distributed virtual switch, just like a distributed physical switch...
    0:25:54 actually, this is kind of unlike a physical switch.
    0:26:00 If a physical switch loses both supervisor modules, traffic stops.
    0:26:05 But with a Nexus 1000v, even if both VSMs go offline or are not reachable, traffic still continues to pass just fine.
    0:26:15 The problem might become: what if LACP needs to get re-negotiated, maybe an upstream switch gets rebooted, something like that?
    0:26:25 Well, there is something called LACP offload, and that allows LACP not to be negotiated by the Virtual Supervisor Modules,
    0:26:35 but instead to be negotiated by the Virtual Ethernet Module, or the ESXi host DVS itself.
    0:26:44 So, taking a look at the show commands.
    0:26:49 It's kind of a convoluted command, but it makes sense if you actually think about what it's doing: basically, module vem 3 execute.
    0:27:01 So, if I want to SSH into any one of my ESXi hosts and run commands directly on that shell itself, I can do that.
    0:27:10 And the command would be vem command show port (vemcmd show port), directly on the ESXi host,
    0:27:17 or vem command show pinning (vemcmd show pinning).
    0:27:20 But if I'm on the Virtual Supervisor Module, let's say I've SSHed to the Nexus 1000v, not to the ESXi host itself but directly to the N1Kv,
    0:27:31 then I can say, "Hey, go out to module," Virtual Ethernet Module 3 through 66, because I can have 64 blades plus my 2 supervisors,
    0:27:42 "so go out to blade 3, go out to blade 5, go out to blade 20,
    0:27:47 and execute this command there on that host."
    0:27:50 And that command is the command that you could run there on the host natively, which is vemcmd show pinning or vemcmd show port.
    0:27:57 I'll put those in there for reference; we'll be taking a look at those commands specifically with vPC host mode.
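    For reference, the two ways of running those commands look roughly like this (module 3 is just an example slot number):

        ! From the VSM, run a VEM command remotely on module 3
        n1kv# module vem 3 execute vemcmd show port
        n1kv# module vem 3 execute vemcmd show pinning

        # Or directly on the ESXi host's shell
        ~ # vemcmd show port
        ~ # vemcmd show pinning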
    0:28:03 So now let's break from the Nexus 1000v for a moment and let's talk about VM-FEX,
    0:28:08 and then we'll talk about Adapter-FEX.
    0:28:11 And then we will move on and actually do the installation, and then all the configuration and testing of the Nexus 1000v, including ACLs and QoS and things like that.
    0:28:23 And then we'll move on and we'll talk about, or actually do, the demonstration separately of VM-FEX.
    0:28:30 And then finally Adapter-FEX.
    0:28:33 So VM-FEX creates the same type of distributed virtual switch in VMware as the Nexus 1000v does.
    0:28:39 And it's now supported on KVM and Hyper-V as of UCS 2.1.
    0:28:46 So this is made up of... well, it's still got the same Virtual Ethernet Module installed in the ESXi host for the data plane.
    0:28:56 But now the UCS, really the fabric interconnects, act as sort of the VSM, or Virtual Supervisor Module. Okay?
    0:29:02 So there is no standalone Virtual Supervisor Module; instead, the fabric interconnect is your control and management plane.
    0:29:12 Okay? So everything is configured and controlled from the UCM... sorry, from the UCS Manager GUI.
    0:29:20 And that's specifically where we look at the VM tab.
    0:29:23 So when we were doing walkthroughs earlier, we kind of avoided it, but there's a VM tab in UCS Manager.
    0:29:32 That's not for the Nexus 1000v at all.
    0:29:35 It is for VM-FEX, which is an alternate DVS.
    0:29:40 So then let's talk about Adapter-FEX.
    0:29:43 This is yet another FEX solution from Cisco.
    0:29:46 This time, it's used to extend a Nexus 5000 down to a pizza box, or C-Series rack-mount server.
    0:29:54 More specifically, to extend the fabric down to a P81E Palo card, or the next generation of that card, the VIC 1225 PCI Express CNA. And I said CAN;
    0:30:09 that should be CNA, let me just fix that real quick.
    0:30:14 I mistyped there. Converged Network Adapter.
    0:30:17 So this creates V-Eth and VFC ports in the Nexus 5K, and we will be doing this as well.
    0:30:25 So on the C-Series server, on the VIC 1225, or actually what we have is the P81E in our
    0:30:34 particular C-Series, because that's what's on the exam,
    0:30:37 you have two 10-Gigabit Ethernet SFP physical ports on the PCIe card.
    0:30:43 And each of those actually has two logical channels, so there are 4 logical channels.
    0:30:49 Two physical ports, four logical channels.
    0:30:52 So this breaks out to: port 1, channel 1 is gonna be used for Ethernet, with optional hardware failover to physical port 2.
    0:31:03 Then, still on physical port 1 but logical channel 2, this is gonna create HBA 0, our first HBA, for fabric A for SAN.
    0:31:13 There, of course, there's no failover; standard multipathing software will be used.
    0:31:18 Then physical port 2, logical channel 3, will be used for Ethernet, with hardware failover supported to physical port 1.
    0:31:29 And then physical port 2, logical channel 4, will create HBA 1 for our standard SAN fabric B connectivity.
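    On the Nexus 5K side, the Adapter-FEX setup ends up looking roughly like the sketch below. The profile name, VLAN/VSAN numbers, and interface numbers are all just illustrative, the vEthernet interfaces themselves are normally created dynamically through the VIC protocol exchange with the adapter, and the exact commands vary by NX-OS release, so treat this as a sketch rather than a complete config:

        ! Enable the virtualization feature set and dynamic vEth creation
        install feature-set virtualization
        feature-set virtualization
        vethernet auto-create

        ! A vEthernet port profile that the adapter's Ethernet vNICs bind to
        port-profile type vethernet ADAPTER-FEX-DATA
          switchport mode access
          switchport access vlan 110
          state enabled

        ! For the FCoE side (assuming FCoE is already enabled), a VFC is bound
        ! to the dynamically created vEthernet for the vHBA channel
        vlan 1010
          fcoe vsan 10
        interface vfc 101
          bind interface vethernet 101
          no shutdown
        vsan database
          vsan 10 interface vfc 101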
    0:31:41 Now, an alternative to Adapter-FEX is to use UCS Manager to actually manage your C-Series server.
    0:31:50 So this is not in conjunction with anything; there is no Adapter-FEX here.
    0:31:56 It essentially is kind of... it's not under the marketing term, let's say, of Adapter-FEX.
    0:32:03 But instead, what happens is our C-Series servers connect up to a pair of Nexus 2000s, specifically the 2232PP.
    0:32:15 And those act as the I/O modules in a sort of virtual, or pseudo, blade chassis.
    0:32:24 Okay? And we'll show a diagram of this as well.
    0:32:31 But basically, we have one pair of Nexus 2232s. So Nexus 2232 A would go up to fabric interconnect A,
    0:32:43 and Nexus 2232 B would go up to fabric interconnect B.
    0:32:48 And port 1 of my VIC 1225 or P81E card on my C-Series server, let's say a C200 server,
    0:32:58 would go up to Nexus 2232 A,
    0:33:02 and port 2 would go up to Nexus 2232 B.
    0:33:07 Okay? So this would look similar, at least to UCS Manager and the fabric interconnects, it looks similar to a blade chassis.
    0:33:16 Each pair of 2232s looks like I/O modules.
    0:33:21 Now, they show up in a slightly different place.
    0:33:24 In the Equipment tab, they show up under rack-mount servers instead of blade chassis servers.
    0:33:32 But they function very similarly: we can create service profiles, we can allocate those to our rack-mount servers.
    0:33:39 So it gives us a lot of nice flexibility there,
    0:33:42 while still having rack-mount servers, which, for whatever reason, may be preferred over blade servers.
    0:33:51 Now, the version of software that you're running determines how many wires, or how many cables, you need.
    0:34:01 This required 4 cables in UCS 2.0.
    0:34:03 So we had two 1-Gigabit Ethernet cables connected from the C-Series server's LAN-on-motherboard ports
    0:34:10 to the 2232 FEXes to provide the out-of-band control and management plane.
    0:34:15 And then we had two 10-Gigabit cables connected from the C-Series server's SFP ports to the 2232 FEXes to provide the data plane.
    0:34:25 Now in UCS 2.1, a feature called single-wire management came about, and this allows a single pair of 10GigE cables from the C-Series SFP ports to the 2232 FEXes
    0:34:40 to provide both the data and the management and control planes.
    0:34:43 So we no longer need the 1-Gigabit Ethernet LAN-on-motherboard ports connected at all.
    0:34:49 Now, we don't have this configured in our lab, and here's the reason why.
    0:34:55 Again, the course is centered around the CCIE Data Center exam.
    0:35:00 This is perfectly fine to use, but then it makes your C-Series server look just like a blade server, right?
    0:35:08 Like a blade chassis.
    0:35:10 So if we've already gone through, or the lab has already had us go through, configuring a blade chassis,
    0:35:17 why would it have us configure rack-mount servers in the same fashion as blade servers work?
    0:35:23 Assuming that they don't change the version from 2.0 to 2.1 anytime soon:
    0:35:29 2.1 is what gave us FCoE northbound of the FIs,
    0:35:35 but in 2.0 there is no northbound FCoE support.
    0:35:43 So that means that if they were using the C200 server connected to the blade... or sorry, to the fabric interconnects,
    0:35:55 then there would be no way for them to test you on Fibre Channel over Ethernet, or FCoE.
    0:36:03 So if they're instead using actual Adapter-FEX, as we've listed here with the 5Ks,
    0:36:12 that's where we can break everything out as V-Eths and vHBAs on the Nexus 5K,
    0:36:20 and specifically create V-Eths and VFCs and do all the binding that we normally do
    0:36:26 for a Converged Network Adapter and Fibre Channel over Ethernet on our Nexus 5Ks.
    0:36:32 So I think this is what they're much more likely to use those C200s, specifically those listed in the hardware blueprint, for.
    0:36:40 But that's just my take on it.
    0:36:42 So that's what we'll be doing.
    0:36:50 So let's go ahead and switch over, and we'll begin by doing the installation of the Nexus 1000v,
    0:36:57 the configuration, and all of the testing of it.
    0:37:00 Then we'll go on to VM-FEX.
    0:37:03 And then after we've done VM-FEX, we will go ahead and configure Adapter-FEX.
    0:37:10 So before we move on to the actual configuration and installation of the Nexus 1000v,
    0:37:17 I wanted to bring up a diagram to hopefully help explain how this really works.
    0:37:25 So what we have up here is a virtual chassis switch, and what happens is we have our Virtual Supervisor Modules, our primary and our secondary, which are gonna be modules 1 and 2.
    0:37:42 And these are gonna be running on a pair of hardware devices, such as the Nexus 1110-S or 1110-X.
    0:37:50 The old part number was the Nexus 1010.
    0:37:56 So these are basically just appliances that are running a hypervisor, that are running these virtual machines.
    0:38:04 Okay, and you do want to have separate physical devices running each of the VSMs,
    0:38:13 for redundancy, so that if one physical device dies you don't lose both your primary and secondary SUP modules in one fell swoop.
    0:38:22 Now, of course, it's kind of a waste to run just one virtual machine on a powerful hardware appliance like that.
    0:38:30 So that's why we have the different -S and -X models; they allow you to run, I think the S is like 6 VMs and the X I think is 10 VMs.
    0:38:42 It might be even more than that.
    0:38:44 Check the product guide on cisco.com.
    0:38:47 But essentially you can run, let's say, a VSM primary on one physical device.
    0:38:52 You could also run a VSG primary, an ASA 1000v,
    0:39:01 you could run a vWAAS.
    0:39:09 So you can run a number of different devices as a number of different virtual machines.
    0:39:13 You also could run your VSM primary for your N1Kv switch number 1, and maybe you have switch number 2, switch number 3,
    0:39:24 because each of these can handle up to 66, really 64, VEM modules,
    0:39:30 for a total of 66 modules, because modules 1 and 2 are your VSMs.
    0:39:35 So maybe you've got 300 ESXi hosts; you're obviously gonna need more than one Nexus 1K virtual switch.
    0:39:44 So I could run VSM Primary 1, Primary 2, Secondary 1, Secondary 2,
    0:40:03 Primary 3 and Secondary 3 on these hardware platforms.
    0:40:11 And then the VEM modules. So the VEM modules installed... the way I've drawn this out, this is your Ethernet...
    0:40:19 I'm sorry, this is your server, your hardware, whether it's a blade or whether it's a pizza box server.
    0:40:25 You've got that hardware, running on top of that is the hypervisor,
    0:40:30 and then your VEM module.
    0:40:33 Okay, so your VEM module in this particular instance is serving not only these individual VMs,
    0:40:41 but it's also serving management, vMotion, and fault tolerance.
    0:40:49 Now what we see is that, and again, in the diagram that you'll be able to download in the class files
    0:40:55 and blow up a little bit more, you'll see that these colored instances are going to fabric interconnect A and fabric interconnect B.
    0:41:05 The green is A and the blue is B.
    0:41:10 And those are Ethernet. So Ethernet 3/1; 3 because it's in module 3.
    0:41:17 It was the first ESXi VEM module to be brought online to the N1Kv switch.
    0:41:24 So 3/1, 3/2, Eth 3/3, 4 and 5, and 3/6 and 7.
    0:41:32 And then handed down to the actual VMkernel for management is V-Eth 1 and 2, V-Eth 3, and V-Eth 4 and 5.
    0:41:42 And then a bunch of V-Eths, one or multiple, however many you need to consume, per virtual machine.
    0:41:50 So these V-Eths are all going to go up through these two, active-active, based on vPC
    0:42:02 host mode
    0:42:03 MAC pinning, something we're gonna take a look at here in just a moment.
    0:42:07 And I suppose this should really be capitalized.
    0:42:16 Okay? So going up through fabric interconnects A and B, active-active. So maybe it's splitting this one,
    0:42:24 this one off to A, this one off to A, and this one off to A, and then this one is splitting off to B, and this one off to B.
    0:42:32 And that's how it's doing its MAC pinning, up northbound to the fabric interconnects and then the northbound switches beyond there.
    0:42:39 And then these two Ethernet interfaces are being used for these two V-Eth interfaces for active-active fault tolerance.
    0:42:47 Maybe vMotion is active to A with failover to B?
    0:42:55 Okay? That's not how we're going to set it up now.
    0:42:58 We're gonna set it up as active and passive,
    0:43:02 okay, and the ESXi host is managing the active-passive.
    0:43:09 But this is just giving an alternate view, or example, of the things we've talked about as possible best practices.
    0:43:17 And so you also do see here that each of these ESXi hosts has a VEM module,
    0:43:25 and it's running all of the functions: not only all the VMs, but also all of the kernel functions as well.
    0:43:31 It certainly does not have to be that way.
    0:43:34 And so, for an alternate view of not only alternate switches but also...
    0:43:43 I'll just take this out of the way here...
    0:43:46 but also of having our VSMs run on VEMs.
    0:43:55 So here's my virtual supervisor primary, module 1, and my virtual supervisor secondary, module 2.
    0:44:03 And these VSMs are running on their own VEMs; that's perfectly supported.
    0:44:08 And in fact that's recommended, unless you're running the 1110s.
    0:44:11 A good idea is the 1110s: one, because Cisco gets to make more money on more hardware.
    0:44:17 No, seriously, just to have separate management, so your server guys aren't managing your supervisor modules; that can get a little sticky sometimes.
    0:44:30 So it's not a bad idea to have those running on separate hardware platforms.
    0:44:34 But it's perfectly supported to have these running on their own VEMs.
    0:44:38 No problem. It's also perfectly acceptable to have a vSwitch, the individual standard vSwitch,
    0:44:47 let's say for ESXi, running for management, and another one for vMotion, another one for fault tolerance,
    0:44:54 another one for whatever else, and then your VEM only running your VMs.
    0:45:00 And it's also perfectly acceptable for it to run your kernel functions as well.
    0:45:07 So here's an alternate view.
    0:45:11 Now, one of the things that is commonly misunderstood, and that I see misconfigured from time to time,
    0:45:18 so I wanna point it out, is the thought that...
    0:45:24 Here's another view of the N1Kv switch, so this dotted line is the switch.
    0:45:31 Here are my physical Ethernet uplinks, whether they're actually vNICs in a Nexus...
    0:45:36 sorry, a UCS B-Series chassis, or whether they're actual real NICs in a physical server, it really makes no difference.
    0:45:45 But these are my Ethernet ports.
    0:45:48 And then these are my V-Eth ports that are handed down to and consumed by my VMs.
    0:45:54 And of course I create Ethernet port profiles to be inherited by my Ethernet ports,
    0:46:02 and I create V-Eth port profiles to be inherited by my V-Eth ports.
    0:46:08 Now, one of the misconceptions is that as long as I have,
    0:46:13 in my view, so if I do a show run,
    0:46:16 and I see that I have Ethernet ports,
    0:46:19 then any V-Eth ports can use those.
    0:46:22 And that's not exactly true. Here's really specifically where that's not exactly true.
    0:46:28 First of all, if I have a V-Eth port being consumed by a VM, and it's possible that I have assigned a VM 2 ports,
    0:46:37 I might assign it 2 NICs,
    0:46:39 I might have 2 NICs.
    0:46:40 Maybe this one only has one NIC.
    0:46:43 This one has one NIC.
    0:46:45 This V-Eth port, let's just say it's V-Eth 200,
    0:46:51 it will never change
    0:46:53 if I move it from one ESXi host to another.
    0:46:57 In fact, let me just bring up this example again.
    0:47:03 If I move this VM, if I vMotion it over here
    0:47:11 to this ESXi host, it keeps its V-Eth port.
    0:47:14 So if that was V-Eth 18, it keeps V-Eth 18.
    0:47:18 Now, my physical uplinks, those are bound to the VEM module; of course, those are bound to the physical ESXi host.
    0:47:25 But my V-Eth ports, they travel with me, and that's what gives me the ability to keep my same ACLs, my same QoS,
    0:47:33 my same rate limiting or prioritization, same security, all of that.
    0:47:42 So back to this, the idea that, as long as, and these are typically switchport mode access,
    0:47:52 switchport mode access VLAN 110, then as long as one of these has VLAN 110 as part of its upbound trunk,
    0:48:02 its .1Q trunk, then it should be able to switch that traffic, right?
    0:48:07 Or sort of?
    0:48:08 And here's the "sort of."
    0:48:13 The "sort of": let me just erase all this so that it's clear.
    0:48:19 Okay, so this is what we started with.
    0:48:20 Now I'm gonna overlay this on it.
    0:48:24 And you're gonna see that really, behind the scenes, those Ethernet ports were actually on one ESXi host.
    0:48:32 And there's another ESXi host here that's representing part of the switch.
    0:48:37 And I don't have any ports assigned yet.
    0:48:40 So I need to draw those, or I need to create those.
    0:48:42 Really, what I need to do is log in to the ESXi host,
    0:48:46 go to the distributed... go to the networking... but first of all, go to the Configuration tab, then to Networking,
    0:48:52 then click on the distributed virtual switch and say Manage Physical Adapters,
    0:48:58 and then assign physical adapters on this ESXi host to the distributed virtual switch.
    0:49:06 Because otherwise, the V-Eth ports that are currently being run, because the VMs are currently being run on this ESXi host,
    0:49:15 they can get out.
    0:49:17 But I can't have VMs that are running on another host...
    0:49:22 Okay, I can't...
    0:49:23 Oops!
    0:49:25 I should have created another layer.
    0:49:32 I can't have those VMs and their ports
    0:49:35 miraculously, wirelessly jump across. I haven't actually seen an ESXi 5 wireless version yet.
    0:49:43 So they can't wirelessly hop across and get up to another physical port outbound.
    0:49:49 And even if this were on the same UCS chassis, there are still no physical traces connecting these two physically separate blades.
    0:50:00 So I do need to make sure that I actually have physical uplinks for those VMs and their corresponding V-Eths to traverse northbound.
    0:50:11 And then finally, I'm not gonna really look at or talk too much about this, but I did create,
    0:50:17 or help create, a drawing here that kind of puts everything together, kind of brings everything together, from
    0:50:27 this being the last class in our section where we...
    0:50:33 or sorry, the last class in our series, where we started out talking about the Nexus switching line.
    0:50:40 We began talking about OTV, FabricPath, talking about vPC.
    0:50:46 And then we did our storage class,
    0:50:50 with our MDS
    0:50:52 and our JBODs.
    0:50:54 And now we're on the UCS portion, with our fabric interconnects,
    0:50:59 our blade server,
    0:51:01 the QoS that's assigned to those individual... our Palo VIC card,
    0:51:09 mezzanine adapter card,
    0:51:10 the vNICs that have been created, the QoS that's been created and assigned,
    0:51:14 and now rounding it out with the Nexus 1000v
    0:51:19 and ESXi VMware.
    0:51:21 It brings it all together, so this would be kind of the complete picture. It's probably hard to see from this zoomed-out perspective,
    0:51:30 but if you zoom in, you'll be able to see everything a little bit better.
    0:51:37 And I'll certainly include that as part of the class files.
    0:51:43 Okay, it shows me...
    0:51:45 it shows my Fibre Channel no-drop
    0:51:49 on... oops!
    0:51:51 On my vHBAs, how those vHBAs create V-Eth and VFC ports as long as they're part of a Palo card.
    0:51:59 If they're not, we already took a look at how to still create VFCs, and how those are bound to the actual FEX port itself.
    0:52:10 Okay.
    0:52:12 How my vNICs in ESXi ultimately create VMNICs.
    0:52:18 How those vNICs correspond to physical Eth interfaces, or at least pseudo-physical Eth interfaces, in my Nexus 1000v.
    0:52:31 And then how those are passed on down to port profiles for uplinks,
    0:52:37 port profiles for system uplinks,
    0:52:40 and how those get passed on to V-Eths and vNICs, down to...
    0:52:45 or actually, really, VMNICs, down to the virtual machine itself.
    0:52:50 So I'll include that with the class files for your reference.