
Spine Switches

Spine switches interconnect leaf switches and form the backbone of the ACI fabric. They are available with various port speeds, ranging from 40 Gbps to 400 Gbps. Within a pod, every tier-1 leaf switch connects to every spine switch; no direct connectivity is allowed between spine switches, between tier-1 leaf switches, or between tier-2 leaf switches. If you incorrectly cable spine switches to each other, or leaf switches in the same tier to each other, the interfaces are disabled. Topologies in which certain leaf switches do not connect to all spine switches are possible, but traffic forwarding may be suboptimal in such a design.

Spine switches can also be used to build a Cisco ACI Multi-Pod fabric by connecting a Cisco ACI pod to an IP network, or they can connect to a supported WAN device for external Layer 3 connectivity. In addition, spine switches store all the endpoint-to-VTEP mapping entries (the spine switch proxy function). Nexus 9000 Series switches used in the ACI fabric run the ACI operating system instead of NX-OS.
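
To see how the spine and leaf roles appear from the controller side, the following minimal sketch (Python, using the requests library) authenticates to the APIC REST API, described in the next section, and lists the registered fabric nodes through the fabricNode object class. The APIC address and credentials are placeholders, and certificate checking is disabled only for illustration.

    import requests
    import urllib3

    urllib3.disable_warnings()  # silence the self-signed-certificate warning for this sketch

    APIC = "https://apic.example.com"  # placeholder APIC address
    AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}  # placeholder credentials

    session = requests.Session()
    session.verify = False  # APICs commonly ship with self-signed certificates; adjust for production

    # Authenticate; the APIC returns a token cookie that the session reuses on later calls.
    session.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

    # Retrieve every registered node (spine, leaf, or controller) in the fabric.
    resp = session.get(f"{APIC}/api/node/class/fabricNode.json")
    resp.raise_for_status()

    for item in resp.json()["imdata"]:
        attrs = item["fabricNode"]["attributes"]
        # role is "spine", "leaf", or "controller"; address is the node's TEP address.
        print(f'{attrs["role"]:12} {attrs["name"]:16} node-id={attrs["id"]} tep={attrs["address"]}')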

Cisco APIC

The Cisco Application Policy Infrastructure Controller (APIC) is the central point of management for the ACI fabric. It is a clustered network control and policy system that provides image management, bootstrapping, and policy configuration for the Cisco ACI fabric. The APIC translates policy created on it into configuration and pushes that configuration to the appropriate switches.

The APIC appliance is deployed as a cluster. A minimum of three controllers is configured in a cluster to provide control of the scale-out Cisco ACI fabric. The size of the controller cluster is proportional to the size of the Cisco ACI deployment and is based on transaction-rate requirements. Any controller in the cluster can service any user for any operation, and a controller can be transparently added to or removed from the cluster. If one controller fails, you can still change and add configurations through the remaining controllers. Because the APIC is not involved in data-plane forwarding, traffic is not affected even if all controllers in the fabric go down; forwarding continues through the leaf and spine switches. However, to make configuration changes you must bring the Cisco APIC cluster back up.
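
As a concrete illustration of the policy model, the sketch below posts a tenant object (fvTenant) to the APIC, which then renders that policy into switch configuration. The APIC address, credentials, and tenant name are placeholders; this is a minimal example under those assumptions, not a production workflow.

    import requests
    import urllib3

    urllib3.disable_warnings()  # silence the self-signed-certificate warning for this sketch

    APIC = "https://apic.example.com"  # placeholder APIC address
    AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}  # placeholder credentials

    session = requests.Session()
    session.verify = False  # adjust certificate handling for anything beyond a lab
    session.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

    # Declare the intended state: a tenant named "DemoTenant" under the policy universe (uni).
    tenant = {"fvTenant": {"attributes": {"name": "DemoTenant", "descr": "created via the REST API"}}}
    resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant)
    resp.raise_for_status()
    print("Tenant policy accepted; the APIC renders and distributes the switch configuration.")

The point of the example is the division of labor: the operator declares intent (a tenant), and the APIC, not the operator, computes and distributes the resulting per-switch configuration.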

Cisco APICs are equipped with two network interface cards (NICs) for fabric connectivity. These NICs should be connected to different leaf switches for redundancy. Cisco APIC connectivity is automatically configured for active-backup teaming, which means that only one interface is active at any given time.

Figure 8-8 shows Cisco APIC fabric connectivity.

   

Figure 8-8 APIC Fabric Connectivity

The Cisco APIC provides the following control functions:

  • Policy Manager: Manages the distributed policy repository responsible for the definition and deployment of the policy-based configuration of Cisco ACI.
  • Topology Manager: Maintains up-to-date Cisco ACI topology and inventory information.
  • Observer: The monitoring subsystem of the Cisco APIC; serves as a data repository for Cisco ACI operational state, health, and performance information (see the sketch after this list).
  • Boot Director: Controls the booting and firmware updates of the spine and leaf switches as well as the Cisco APIC elements.
  • Appliance Director: Manages the formation and control of the Cisco APIC appliance cluster.
  • Virtual Machine Manager (VMM): Acts as an agent between the policy repository and a hypervisor and is responsible for interacting with hypervisor management systems such as VMware vCenter.
  • Event Manager: Manages the repository for all the events and faults initiated from the Cisco APIC and the fabric switches.
  • Appliance Element: Manages the inventory and state of the local Cisco APIC appliance.
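
Operational data gathered by the Observer can be read back over the same REST interface. The sketch below queries the fabric-wide health score through the fabricHealthTotal class; the APIC address and credentials are placeholders, and the exact class and attribute names used here are assumptions drawn from the ACI object model rather than details given in this section.

    import requests
    import urllib3

    urllib3.disable_warnings()  # silence the self-signed-certificate warning for this sketch

    APIC = "https://apic.example.com"  # placeholder APIC address
    AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}  # placeholder credentials

    session = requests.Session()
    session.verify = False  # adjust certificate handling for anything beyond a lab
    session.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

    # fabricHealthTotal aggregates the health score that the Observer maintains for the fabric.
    resp = session.get(f"{APIC}/api/node/class/fabricHealthTotal.json")
    resp.raise_for_status()

    for item in resp.json()["imdata"]:
        attrs = item["fabricHealthTotal"]["attributes"]
        # "dn" identifies the scope (fabric-wide or per pod); "cur" is the current health score.
        print(f'{attrs["dn"]}: health={attrs["cur"]}')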
