
Configuring Neutron and Open vSwitch
Configuring OVS and Neutron involves running OVS commands to set up the software switch and editing a number of Neutron configuration files. In this section, we are configuring the network node. We will be using eth2 for creating Neutron tenant networks and eth3 for creating an externally routable network.
Getting ready
Ensure that you have a suitable server available for installation of the OpenStack network components. If you are using the accompanying Vagrant environment, this will be the network node.
Ensure that you are logged in to the network node and that it has the packages required in our environment for running OVS and Neutron. If you created this node with Vagrant, you can execute the following command:
vagrant ssh network
How to do it...
To configure OVS and Neutron on our OpenStack network node, carry out the following steps:
- With the installation of the required packages complete, we can now configure our environment. To do this, we first configure our OVS switch service. We need to configure a bridge that we will call br-int. This is the integration bridge that glues our bridges together within our SDN environment. The command is as follows:
sudo ovs-vsctl add-br br-int
- We now configure the Neutron Tenant tunnel network bridge, which will allow us to create GRE and VXLAN tunnels between our Compute hosts and network node to give us our Neutron network functionality within OpenStack. This interface is eth2, so we need to configure a bridge called br-eth2 within OVS as follows:
sudo ovs-vsctl add-br br-eth2
sudo ovs-vsctl add-port br-eth2 eth2
- We now assign the IP address that was previously assigned to our eth2 interface to this bridge:
sudo ifconfig br-eth2 10.10.0.201 netmask 255.255.255.0
Tip
This address is on the network that we will use to create the GRE and VXLAN Neutron tunnel mesh networks. Instances within OpenStack will attach to the OpenStack created networks encapsulated on this network. We assigned this range as 10.10.0.0/24, as described in the vagrant file:
network_config.vm.network :hostonly, "10.10.0.201", :netmask => "255.255.255.0"
- Next, add an external bridge that will be used on our external network. This will be used to route traffic to and from the outside of our environment onto our SDN network:
sudo ovs-vsctl add-br br-ex
sudo ovs-vsctl add-port br-ex eth3
- We now assign the IP address that was previously assigned to our eth3 interface to this bridge:
sudo ifconfig br-ex 192.168.100.201 netmask 255.255.255.0
- We need to ensure that we have set the following in /etc/sysctl.conf:
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
- To pick up the changes in this file, run the following command:
sudo sysctl -p
- We now need to configure the backend database store, so we first create the neutron database in MariaDB. We do this as follows (where we have a user in MariaDB called root, with password openstack, that is able to create databases):
MYSQL_ROOT_PASS=openstack
mysql -uroot -p$MYSQL_ROOT_PASS -e "CREATE DATABASE neutron;"
- It is good practice to create a user that is specific to our OpenStack Networking service, so we create a neutron user in the database as follows:
MYSQL_NEUTRON_PASS=openstack
mysql -uroot -p$MYSQL_ROOT_PASS -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$MYSQL_NEUTRON_PASS';"
mysql -uroot -p$MYSQL_ROOT_PASS -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$MYSQL_NEUTRON_PASS';"
- Next we will edit the Neutron configuration files. There are a number of these to edit on our network node. The first is the /etc/neutron/neutron.conf file. Edit this file and insert the following content:
[DEFAULT]
verbose = True
debug = True
state_path = /var/lib/neutron
lock_path = $state_path/lock
log_dir = /var/log/neutron
use_syslog = True
syslog_log_facility = LOG_LOCAL0

bind_host = 0.0.0.0
bind_port = 9696

# Plugin
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

# auth
auth_strategy = keystone

# RPC configuration options. Defined in rpc __init__
# The messaging module to use, defaults to kombu.
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = 172.16.0.200
rabbit_password = guest
rabbit_port = 5672
rabbit_userid = guest
rabbit_virtual_host = /
rabbit_ha_queues = false

# ===== Notification System Options ==========
notification_driver = neutron.openstack.common.notifier.rpc_notifier

[agent]
root_helper = sudo

[keystone_authtoken]
auth_uri = https://192.168.100.200:35357/v2.0/
identity_uri = https://192.168.100.200:5000
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
insecure = True

[database]
connection = mysql://neutron:openstack@172.16.0.200/neutron
- After this, we edit the /etc/neutron/l3_agent.ini file with the following content:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
- Locate the /etc/neutron/dhcp_agent.ini file and insert the following content:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
- Create a file called /etc/neutron/dnsmasq-neutron.conf and add in the following content to alter the maximum transmission unit (MTU) of the Neutron Tenant interface of our guests:
# To allow tunneling bytes to be appended
dhcp-option-force=26,1400
- After this, we edit the /etc/neutron/metadata_agent.ini file to insert the following content:
[DEFAULT]
auth_url = https://192.168.100.200:5000/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
nova_metadata_ip = 172.16.0.200
metadata_proxy_shared_secret = foo
auth_insecure = True
- The last Neutron service file we need to edit is the /etc/neutron/plugins/ml2/ml2_conf.ini file. Insert the following content:
[ml2]
type_drivers = gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ml2_type_vxlan]
vxlan_group =
vni_ranges = 1:1000

[vxlan]
enable_vxlan = True
vxlan_group =
local_ip = 10.10.0.201
l2_population = True

[agent]
tunnel_types = vxlan
vxlan_udp_port = 4789

[ovs]
local_ip = 10.10.0.201
tunnel_type = vxlan
enable_tunneling = True

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
- With our environment and switch configured, we can restart the relevant services to pick up the changes:
sudo service neutron-plugin-openvswitch-agent restart
sudo service neutron-dhcp-agent restart
sudo service neutron-l3-agent restart
sudo service neutron-metadata-agent restart
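With the services restarted, it can be worth sanity checking the node before continuing. The following is an optional check (output will vary with your environment); ovs-vsctl show should list br-int, br-eth2, and br-ex along with the ports we added:
# Confirm that the bridges and ports were created as expected
sudo ovs-vsctl show
# Confirm that the Neutron agents are running
sudo service neutron-plugin-openvswitch-agent status
sudo service neutron-dhcp-agent status
sudo service neutron-l3-agent status
sudo service neutron-metadata-agent status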
How it works...
We completed the configuration of a new node in our environment that runs the software networking components of our SDN environment.
Once we had installed our applications and service dependencies and started the services, we configured our environment by creating the integration bridge, br-int, which internally bridges our instances with the rest of the network. We also created bridges attached to our interfaces on the Tenant and Provider networks.
We then edit a number of files to get Neutron up and running in our environment. The first is the /etc/neutron/neutron.conf file. This is the main configuration file for our Neutron services. In this file, we define how Neutron is configured and what components, features, and plugins should be used.
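A quick way to confirm that the settings in this file (in particular the RabbitMQ, Keystone, and database entries) are correct is to list the agents that have registered with the Neutron server. This assumes the controller-side Neutron server from the related recipes is running and that admin credentials are sourced in your shell:
# Run from a host with the OpenStack clients and admin credentials loaded
neutron agent-list
# The Open vSwitch, DHCP, L3, and metadata agents on the network node should
# appear with an alive (:-)) status once everything is communicating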
In the /etc/neutron/l3_agent.ini file, we specify that we are using network namespaces (use_namespaces = True), which allows tenants to create overlapping IP ranges. This means that Tenant A users can create and use a private IP CIDR that also exists within Tenant B. We also specify that we are to use OVS to provide L3 routing capabilities.
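A minimal way to see this on the network node, once routers and tenant networks exist, is to list the network namespaces; each Neutron router and each DHCP-enabled network gets its own namespace, so identical CIDRs never collide (the UUIDs below are placeholders):
# List the namespaces created by the L3 and DHCP agents
ip netns
#   qrouter-<router-uuid>
#   qdhcp-<network-uuid>
# Run a command inside one of them, for example to inspect a router's interfaces
sudo ip netns exec qrouter-<router-uuid> ip addr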
The /etc/neutron/dhcp_agent.ini file specifies that we are going to use Dnsmasq as the service for DHCP within our environment. We also reference the /etc/neutron/dnsmasq-neutron.conf file, which allows us to pass extra options to Dnsmasq when it starts up processes for that network. We do this so we can specify an MTU of 1400 that gets set on the instance network interfaces. This is because the default of 1500 does not leave room for the extra bytes that tunneling adds to each packet, which would otherwise lead to fragmentation problems. By lowering the MTU, all the normal IP information plus the extra tunneling information can be transmitted at once without fragmentation.
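If you want to confirm that the option is being applied, you can check from inside a booted instance that obtained its address over DHCP (the interface name may differ from eth0 depending on the image):
# Inside a running instance
ip link show eth0
# The output should include "mtu 1400"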
The /etc/neutron/metadata_agent.ini file notifies Neutron and our instances where to find the metadata service. It points to our controller node and, ultimately, the nova API service. Here, we set a secret key in the metadata_proxy_shared_secret = foo line that must match the same random keyword that we will eventually configure in /etc/nova/nova.conf on our controller node: neutron_metadata_proxy_shared_secret=foo.
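Once everything is wired up, instances reach the metadata service via the well-known link-local address; the request is intercepted and proxied by the Neutron metadata agent through to the nova API on the controller. A quick check from inside a running instance:
# Inside a running instance
curl http://169.254.169.254/latest/meta-data/
# A list of metadata keys (for example, instance-id and local-ipv4) indicates
# that the metadata agent, shared secret, and nova API are all in agreement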
The last configuration file, /etc/neutron/plugins/ml2/ml2_conf.ini, configures the L2 plugins within our environment and describes our L2 capabilities. The configuration options are as follows:
[ml2]
type_drivers = gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
We're configuring our networking type to be either Generic Routing Encapsulation (GRE) or Virtual eXtensible LAN (VXLAN) tunnels. This allows our SDN environment to carry a wide range of protocols over the tunnels we create.
We specify that VXLAN tunnels are to be created when a non-admin user creates their own private Neutron networks. An admin user is able to specify GRE as an option on the command line:
[ml2_type_gre]
tunnel_id_ranges = 1:1000
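For example, an admin could request a GRE-segmented network explicitly when creating it. The following is a hypothetical sketch; the network name and segmentation ID are illustrative:
# As an admin user, create a network that uses GRE rather than the
# default VXLAN tenant network type
neutron net-create privateGRE \
  --provider:network_type gre \
  --provider:segmentation_id 50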
The tunnel_id_ranges option specifies that, when a user creates a private Neutron tenant network without specifying a segmentation ID for a GRE network, an ID is taken from this range. OpenStack ensures that each tunnel created is unique. The VXLAN options are as follows:
[ml2_type_vxlan]
vxlan_group =
vni_ranges = 1:1000

[vxlan]
enable_vxlan = True
vxlan_group =
local_ip = 10.10.0.201

[agent]
tunnel_types = vxlan
vxlan_udp_port = 4789
The preceding sections describe our VXLAN options. In the same way as for GRE, we have an endpoint IP that is the IP assigned to the interface that we want tunneled traffic to flow over, and we specify the valid vxlan IDs to use within our environment. The code is as follows:
[ovs]
local_ip = 10.10.0.201
tunnel_type = vxlan
enable_tunneling = True
The preceding section describes the options to pass to OVS, and details our tunnel configuration. Its endpoint address is 10.10.0.201 and we're specifying the tunnel type VXLAN to be used. The code is as follows:
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
The preceding code tells Neutron to use iptables rules when creating security groups.
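To see the effect of this driver, you can inspect the iptables rules on a compute host once it is running instances; with the hybrid driver, Neutron programs per-port chains whose names are derived from the Neutron port IDs. This is a rough check and the exact chain names will differ in your environment:
# On a compute host with running instances
sudo iptables -S | grep neutron-openvswi
# Chains such as neutron-openvswi-sg-chain and the per-port
# neutron-openvswi-i.../-o... chains implement the security group rules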