Introduction

In the current VPC model in CloudStack, the VPC virtual router (VR) provides many L3-L7 services. One of the services provided by the VPC VR is routing inter-tier traffic. All of a VPC's inter-tier traffic has to be routed by the VPC VR, so as the size of the VPC increases, the VPC VR can easily become a choke point. The VPC VR is also a single point of failure in the current VPC model. There is also the traffic trombone [1] problem, where routing by the VPC VR becomes inefficient if the source and destination VMs are placed far from the VPC VR (in a different pod or zone, for example). The traffic trombone could become a serious problem in the case of a region-level VPC [2].

The programmability of virtual switches in the hypervisor, combined with the ability to process and act on data-path flows with OpenFlow, opens up possibilities where L2-L4 services typically provided by virtual/physical appliances are pushed to the edge switches in the hypervisors. The network ACL and inter-tier routing services that CloudStack currently provides for a VPC's east-west (inter-tier) traffic can be orchestrated so that they are provided by the virtual switches in the hypervisors. The goal of this proposal is to add distributed routing and ACL functionality to the native SDN controller, leveraging OpenVswitch capabilities to provide inter-tier routing and network ACLs at the hypervisor level in a distributed fashion. This enables a scale-out model and avoids the VPC VR becoming a choke point. The traffic trombone problem is also eliminated, since traffic gets routed directly from the source hypervisor to the destination hypervisor.

References

[1] http://blog.ipspace.net/2011/02/traffic-trombone-what-it-is-and-how-you.html

...

[4] https://cwiki.apache.org/confluence/display/CLOUDSTACK/OVS+Tunnel+Manager+for+CloudStack

Glossary & Conventions

Bridge: in this document, a bridge refers to an OpenVswitch bridge on XenServer/KVM

...

Tier: the term tier is used interchangeably with a network in a VPC

Conceptual model 

This section describes conceptually how distributed routing and network ACLs are achievable using OpenFlow rules and an additional bridge that does L3 routing between one or more L2 switches. Later sections build on the concepts introduced here to elaborate the architecture and design of how CloudStack and the OVS plug-in can orchestrate setting up VPCs with distributed routing and network ACLs.

...

  • Consider the case where VM1 (assume IP 10.1.1.20) in tier 1, running on host 1, wants to communicate with VM1 (10.1.2.30) in tier 2, running on host 2. The sequence of flow would be (a flow-rule sketch follows this list):
    • 10.1.1.20 sends an ARP request for 10.1.1.1 (the gateway for tier 1)
    • the VPC VR sends an ARP response with the MAC address (say 3c:07:54:4a:07:8f) at which 10.1.1.1 can be reached
    • 10.1.1.20 sends a packet to 10.1.2.30 with Ethernet destination 3c:07:54:4a:07:8f
    • a flow rule on the tier 1 bridge on host 1 overrides the default flow (normal L2 switching) and sends the packet on the patch port
    • the logical router created for the VPC on host 1 receives the packet on patch port 1. The logical router does a route lookup (a flow table 1 action), applies the ingress and egress ACLs, rewrites the source MAC address to the MAC address of 10.1.2.1 and the destination MAC address to the MAC address of 10.1.2.30, and sends the packet on patch port 2.
    • the tier 2 bridge on host 1 receives the packet on the patch port and does a MAC lookup
    • if the destination MAC address is found, the bridge sends the packet on the corresponding port (here, the tunnel port to host 2); otherwise it floods the packet on all ports
    • the tier 2 bridge on host 2 receives the packet and forwards it to VM1.
  • Consider the case where VM3 (assume IP 10.1.1.30) in tier 1, running on host 3, wants to communicate with VM1 in tier 2, running on host 2. The sequence of flow would be:
    • 10.1.1.30 sends an ARP request for 10.1.1.1 (the gateway for tier 1)
    • the VPC VR sends an ARP response with the MAC address (say 3c:07:54:4a:07:8f) at which 10.1.1.1 can be reached
    • 10.1.1.30 sends a packet to 10.1.2.30 with Ethernet destination 3c:07:54:4a:07:8f
    • the logical router created for the VPC on host 3 receives the packet, does a route lookup, and sends the packet out onto the tier 2 bridge on host 3, after rewriting the packet's source MAC address to that of 10.1.2.1 and its destination MAC address to the MAC address at which 10.1.2.30 is present (possibly after ARP resolution)
    • the tier 2 bridge on host 3 sends the packet over the tunnel to host 2; the tier 2 bridge on host 2 receives it and forwards it to VM1.
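To make the walkthroughs above concrete, the sketch below shows the kind of OpenFlow rules that could be programmed, expressed as ovs-ofctl invocations from Python in the style of the existing scripts. The bridge names (vpc-tier1, vpc-router), OpenFlow port numbers, and all MAC addresses other than the gateway MAC are hypothetical placeholders; the exact match fields and table layout are a design assumption, not the final implementation.

    import subprocess

    def add_flow(bridge, flow):
        # Thin wrapper over ovs-ofctl to program a flow on a bridge.
        subprocess.check_call(["ovs-ofctl", "add-flow", bridge, flow])

    # Tier 1 bridge on host 1: packets addressed to the gateway MAC and
    # destined for the tier 2 subnet are steered to the patch port
    # (assumed to be OpenFlow port 1) instead of being L2-switched.
    add_flow("vpc-tier1",
             "priority=200,dl_dst=3c:07:54:4a:07:8f,ip,nw_dst=10.1.2.0/24,"
             "actions=output:1")
    # Default rule: everything else gets normal L2 switching.
    add_flow("vpc-tier1", "priority=0,actions=NORMAL")

    # Logical router bridge on host 1: route lookup plus MAC rewrite.
    # The source MAC becomes the tier 2 gateway's MAC and the destination
    # MAC becomes VM1's MAC (both placeholder values here), then the
    # packet is sent out on patch port 2 towards the tier 2 bridge.
    add_flow("vpc-router",
             "priority=100,ip,nw_dst=10.1.2.30,"
             "actions=mod_dl_src:02:00:00:02:01:01,"
             "mod_dl_dst:02:00:00:02:01:1e,output:2")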

Architecture & Design description

This section describes the design changes that shall be implemented in the CloudStack management server and the OVS plug-in to enable the distributed routing and network ACL functionality.

API & Service layer changes

  • introduce a new 'Connectivity' service capability, 'distributedrouting'. This capability shall indicate a 'Connectivity' service provider's ability to perform distributed routing.
  • the createVPCOffering API shall be enhanced to take 'distributedrouting' as a capability for the 'Connectivity' service. The provider specified for the 'Connectivity' service shall be validated against the capabilities declared by that provider, to ensure it supports the 'distributedrouting' capability (an example call follows this list).
  • the listVPCOfferings API shall return a VpcOfferingResponse that contains the 'Connectivity' service's 'distributedrouting' capability details for the offering, if configured
  • the createNetworkOffering API shall throw an exception if the 'distributedrouting' capability is specified for the 'Connectivity' service.
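For illustration only, a VPC offering with the new capability might be created as sketched below. The servicecapabilitylist/serviceproviderlist parameter style mirrors the convention the API already uses for network offerings; the exact key names and values here are an assumption, not the final API signature.

    import requests

    # Hypothetical sketch: create a VPC offering whose 'Connectivity'
    # service declares the 'distributedrouting' capability. Assumes an
    # already-authenticated session against the management server.
    params = {
        "command": "createVPCOffering",
        "name": "vpc-with-distributed-routing",
        "displaytext": "VPC offering with distributed routing",
        "supportedservices": "Dhcp,Dns,SourceNat,NetworkACL,Connectivity",
        "serviceproviderlist[0].service": "Connectivity",
        "serviceproviderlist[0].provider": "Ovs",
        "servicecapabilitylist[0].service": "Connectivity",
        "servicecapabilitylist[0].capabilitytype": "distributedrouting",
        "servicecapabilitylist[0].capabilityvalue": "true",
        "response": "json",
    }
    resp = requests.get("http://mgmt-server:8080/client/api", params=params)
    print(resp.json())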

OVS plug-in enhancements

  • The OVS element shall declare 'distributedrouting' as a supported capability for the 'Connectivity' service.
  • OvsElement uses the prepare() phase in the NIC life cycle to create tunnels and set up bridges on the hypervisors. The following changes shall be needed in the NIC prepare phase:
    • the current logic of preparing a NIC, when the VM is the first VM from the network being launched on a host, is as below:
      • get the list of hosts on which the network currently spans
      • create tunnels from the current host on which the VM is being launched to all the hosts on which the network spans
      • create tunnels from all the hosts on which the network spans to the current host on which the VM is being launched
    • a check shall be made whether the network is part of a VPC. If it is part of a VPC and the VPC offering does not have the 'distributedrouting' capability enabled, the current flow of actions outlined above shall be performed during the NIC prepare phase
    • if the network is part of a VPC and the VPC offering has the 'distributedrouting' capability enabled, the following actions shall be performed (a sketch of this logic follows this list):
      • if the VPC VR is running on the current host on which the VM is being launched, proceed with the steps outlined above (i.e. setting up tunnels just for the bridge corresponding to the network)
      • if the VPC VR is running on a different host than the current host on which the VM is being launched, the following actions shall be performed:
        • for each network in the VPC, create a bridge
        • for each bridge created for a tier in the VPC, form a full mesh of tunnels with the hosts on which the tier spans
        • create a bridge that shall act as the logical router, and connect each bridge created in the previous step to the logical router with a patch port
        • set up flow rules on each tier bridge to:
          • exclude MAC learning and flooding on the patch port
          • send traffic destined to other tiers on the patch port
          • do normal (L2 switching) processing for the rest of the traffic from the VIFs connected to VMs, the tunnel interfaces, and the patch port
        • set up flow rules on the logical router bridge to:
          • reflect flows corresponding to the current ingress and egress ACLs set on each tier
          • route traffic to the appropriate patch port based on the destination IP's subnet
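The prepare-phase decision flow described above can be summarized with the following Python sketch. The helper function names are illustrative, not actual OvsElement or OvsTunnelManager methods.

    def prepare_nic(vm, network, host):
        # Sketch of the prepare() decision flow; helpers are hypothetical.
        vpc = get_vpc_of(network)
        if vpc is None or not has_distributed_routing(vpc):
            setup_tier_tunnels(network, host)       # existing behaviour
        elif vpc_vr_is_on(host, vpc):
            setup_tier_tunnels(network, host)       # tunnels for this tier only
        else:
            # Distributed routing: build the full per-VPC topology here.
            for tier in tiers_of(vpc):
                bridge = create_tier_bridge(tier, host)
                full_mesh_tunnels(bridge, hosts_spanned_by(tier))
            router = create_logical_router(vpc, host)
            for tier in tiers_of(vpc):
                add_patch_port(router, tier_bridge_of(tier, host))
            program_tier_flows(vpc, host)    # steer inter-tier traffic to patch ports
            program_router_flows(vpc, host)  # routes plus ingress/egress ACL flows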
  • OvsElement uses the release() phase in the NIC life cycle to destroy tunnels and bridges on the hypervisors. The following changes shall be needed in the NIC release phase:
    • the current logic of releasing a NIC, when the VM is the last VM from the network on the host, is as below:
      • get the list of hosts on which the network currently spans
      • delete the tunnels from all the hosts on which the network spans to the current host on which the VM is being deleted
      • destroy the bridge
    • a check shall be made whether the network is part of a VPC. If it is part of a VPC and the VPC offering does not have the 'distributedrouting' capability enabled, the current flow of actions outlined above for the release phase shall be performed during the NIC release
    • if the network is part of a VPC, the VPC offering has the 'distributedrouting' capability enabled, and the VM is not the last VM from the VPC on the host, proceed with the above steps for the release phase
    • if the network is part of a VPC, the VPC offering has the 'distributedrouting' capability enabled, and the VM is the last VM from the VPC on the host, the following steps shall be performed (a sketch of this logic follows this list):
      • for each network/tier in the VPC:
        • get the list of hosts on which the tier spans
        • delete the tunnels from all the hosts on which the tier spans to the current host on which the VM is being deleted
        • destroy the bridge for the tier
      • destroy the logical router
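The release-phase logic mirrors the prepare phase; a matching sketch, again with hypothetical helper names:

    def release_nic(vm, network, host):
        # Sketch of the release() decision flow; helpers are hypothetical.
        vpc = get_vpc_of(network)
        if vpc is None or not has_distributed_routing(vpc):
            teardown_tier(network, host)            # existing behaviour
        elif not is_last_vpc_vm_on_host(vpc, host):
            teardown_tier(network, host)            # only this tier on this host
        else:
            # Last VM of the whole VPC on this host: remove everything.
            for tier in tiers_of(vpc):
                delete_tunnels_to(host, hosts_spanned_by(tier))
                destroy_tier_bridge(tier, host)
            destroy_logical_router(vpc, host)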
  • the OvsElement destroy() phase in the network life cycle shall need the following changes:
  • replaceNetworkACLList enhancements:
    • the OvsTunnel manager shall subscribe to replaceNetworkACLList events
    • on an event trigger, if the VPC offering of the VPC that contains the network has the 'distributedrouting' capability enabled, the following actions shall be performed (a sketch follows this list):
      • get the list of hosts on which the network spans
      • on each host, flush the ingress/egress ACLs represented as flows on the logical router bridge and apply new flows corresponding to the new ACL list
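One way to implement the flush-and-reapply step is to tag all ACL flows on the logical router bridge with an OpenFlow cookie, so they can be deleted in one operation. A minimal sketch, assuming a cookie value of 0x20 and placeholder bridge name, subnets, and ports:

    import subprocess

    ROUTER = "vpc-router"   # hypothetical logical router bridge name

    # Delete every flow previously installed for the old ACL list
    # (cookie match 0x20 with a full mask).
    subprocess.check_call(["ovs-ofctl", "del-flows", ROUTER, "cookie=0x20/-1"])

    # Install flows for the new ACL items, e.g. allow tcp/80 from
    # tier 1 into tier 2, then deny the rest of the traffic to tier 2.
    subprocess.check_call(["ovs-ofctl", "add-flow", ROUTER,
        "cookie=0x20,priority=300,tcp,nw_src=10.1.1.0/24,nw_dst=10.1.2.0/24,"
        "tp_dst=80,actions=resubmit(,1)"])
    subprocess.check_call(["ovs-ofctl", "add-flow", ROUTER,
        "cookie=0x20,priority=250,ip,nw_dst=10.1.2.0/24,actions=drop"])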

Resource layer commands

The following new resource layer commands shall be introduced:

  • OvsCreateLogicalRouter
  • OvsDeleteLogicalRouter
  • OvsCreateFlowCommand
  • OvsDeleteFlowCommand

Script enhancements

ovstunnel: setup_logical_router
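A minimal sketch of what setup_logical_router could do, creating the router bridge and wiring it to each tier bridge with patch ports via ovs-vsctl. The bridge and port names are placeholders; in practice they would come from the arguments carried by the OvsCreateLogicalRouter command.

    import subprocess

    def vsctl(*args):
        subprocess.check_call(["ovs-vsctl"] + list(args))

    def setup_logical_router(router_bridge, tier_bridges):
        # Create the bridge that acts as the per-VPC logical router.
        vsctl("--may-exist", "add-br", router_bridge)
        # Connect each tier bridge to the router with a patch port pair.
        for i, tier in enumerate(tier_bridges):
            rport, tport = "patch-r%d" % i, "patch-t%d" % i
            vsctl("--may-exist", "add-port", router_bridge, rport, "--",
                  "set", "interface", rport, "type=patch",
                  "options:peer=%s" % tport)
            vsctl("--may-exist", "add-port", tier, tport, "--",
                  "set", "interface", tport, "type=patch",
                  "options:peer=%s" % rport)

    # e.g. setup_logical_router("vpc-router", ["vpc-tier1", "vpc-tier2"])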

...