
How to set up Layer 7 Traffic Control



Resources

Some resources that greatly helped in figuring this out:
L7-Homepage
LARTC-Guide
Freifunk-Wiki
HTB Linux queue manual
HTB How To

Prerequisites

To get l7-filter to work, you will need to install the package 'iproute' and patch your kernel and iptables. Some distros already do that and ship pre-built packages, but for the sake of completeness I cover these topics here too. First we need to get two packages:

Patching the kernel

After downloading and untar'ing, we need to patch the running kernel with the patch provided by the l7-filter package. I assume the running kernel's source is found at '/usr/src/linux'. At the time of writing the latest available patch was 'kernel-2.6.25-2.6.28-layer7-2.22.patch', which also applied fine against 2.6.31.

~# tar xjpf netfilter-layer7-v2.xx.tar.bz2
~# cd netfilter-layer7-v2.xx
~# cp kernel-2.6.25-2.6.28-layer7-2.22.patch /usr/src/linux && cd /usr/src/linux
~# patch -p1 < kernel-2.6.25-2.6.28-layer7-2.22.patch
~# make menuconfig


In the kernel configuration menu you have to select the following options (directly copied from the L7-filter how-to page):

  • "Prompt for development and/or incomplete code/drivers" (under "Code maturity level options")
  • "Network packet filtering framework" (Networking → Networking support → Networking Options)
  • "Netfilter Xtables support" (on the same screen)
  • "Netfilter connection tracking support" (... → Network packet filtering framework → Core Netfilter Configuration), select "Layer 3 Independent Connection tracking"
  • "Connection tracking flow accounting" (on the same screen)
  • And finally, "Layer 7 match support"
  • Optional but highly recommended: Lots of other Netfilter options, notably "FTP support" and other matches. If you don't know what you're doing, go ahead and enable all of them.
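
To confirm the selection took effect after configuring, you can grep the kernel config for the symbol the layer7 patch adds (CONFIG_NETFILTER_XT_MATCH_LAYER7). A small sketch, assuming the kernel source lives at '/usr/src/linux' as above:

```shell
# Sketch: check that the layer7 match option made it into the kernel config.
# KSRC is assumed to be the kernel source tree patched above.
KSRC=/usr/src/linux
grep LAYER7 "$KSRC/.config" 2>/dev/null || echo "layer7 option not found, re-check menuconfig"
```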

After you're done configuring, save and exit the configuration and install the kernel:

~# make && make modules_install && make install

(warning) Now reboot the computer with the new kernel.

Patching iptables

The l7-filter package also comes with extensions for iptables. The files 'libxt_layer7.c' and 'libxt_layer7.man' have to be copied to the iptables 'extensions' folder before compiling the new iptables binary:

~# tar xjpf netfilter-layer7-v2.xx.tar.bz2
~# tar xzvf iptables-1.4.3.2.tar.gz
~# cp netfilter-layer7-v2.xx/iptables-1.4.3forward-for-kernel-2.6.20forward/libxt_layer7* iptables-1.4.3.2/extensions/
~# cd iptables-1.4.3.2
~# ./configure --prefix=/usr/local/iptables --enable-devel --enable-libipq --enable-shared --enable-static --enable-ipv6 --with-ksource=/usr/src/linux
~# make
~# make install

(warning) Be careful not to install your distribution's and your own build of iptables into the same paths; keeping them in separate prefixes (as done here with --prefix) avoids conflicts.
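
One prerequisite not covered above: the layer7 match also needs the protocol pattern files from the separate 'l7-protocols' package, which it looks up under /etc/l7-protocols. Without them, every '-m layer7 --l7proto ...' rule will fail to find its pattern. The release name below is only an example; grab whatever is current:

```shell
~# tar xzf l7-protocols-2009-05-28.tar.gz
~# cd l7-protocols-2009-05-28
~# make install
```

'make install' copies the pattern files to /etc/l7-protocols, where the layer7 match expects them.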

Testing the setup

At the shell prompt type:

~# /usr/local/iptables/sbin/iptables -m layer7 -h
~# tc -help

The first command will print the layer7 match options along with the usual iptables help; the second one will show a brief description of how to operate 'tc'.

In neither case should you receive an error message like:

~# /usr/local/iptables/sbin/iptables -m layer7 -h
iptables v1.4.3.2: Couldn't load match `layer7':/lib/xtables/libipt_layer7.so: cannot open shared object file: No such file or directory
(iptables hasn't been patched properly)

or

~# tc -help
-bash: tc: command not found
(package iproute is missing)

(warning) Should any of these errors occur review the previous steps.
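
Both checks can be wrapped in a small helper. A sketch (the function name 'require_tools' is my own, not part of any package) that complains about whichever tool is missing from PATH:

```shell
# Sketch: report any required tool that is missing from PATH.
require_tools() {
    for tool in "$@"; do
        command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool"; return 1; }
    done
}

# 'tc' comes with iproute, 'iptables' is the patched build from above
require_tools tc iptables || echo "review the previous steps"
```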

Traffic Control Script

After completing the above steps you can now happily copy'n'paste the code block into a shell script, make it executable, and off you go. Just change the variables at the top that define how much bandwidth is available, how it should be distributed, and which protocols are to be prioritized. The variable 'NETDEVICE' has to be set to the WAN interface.
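
One pitfall when editing the variables: HTB can only guarantee each class its RATE if the rates together stay within the link speed. With the defaults below, 200 + 1000 + 1000 = 2200 kbit already exceeds a MAX_RATE of 2000 kbit, so under full load the guarantees become best effort. A quick arithmetic check you can run against your own numbers (a sketch, not part of the script):

```shell
# Sanity check (sketch): warn when the guaranteed rates oversubscribe the link.
MAX_RATE=2000
SLOW_RATE=200
NORMAL_RATE=1000
FAST_RATE=1000

sum=$((SLOW_RATE + NORMAL_RATE + FAST_RATE))
if [ "$sum" -gt "$MAX_RATE" ]; then
    echo "warning: guaranteed rates (${sum} kbit) exceed MAX_RATE (${MAX_RATE} kbit)"
fi
```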

#!/bin/sh

# Copyleft 2010 David Gabriel <b2c> at <dest-unreachable> dot <net>
# This program is licensed under the terms of the GPL version 2 or higher.
# Use at yee own risk. It works for me, but then I wrote it.
# Thanks to all the people out there who have struggled with l7/tc before
# and have left helpful information in their tracks.
# This script builds upon Jochen E.'s and Björn's tc scripts from freifunk.net,
# my own scarce knowledge and various other sources found out there...

# if you compiled iptables yourself add the correct path to 'PATH' here,
# otherwise the script won't find it!
PATH="$PATH:/usr/local/sbin:/usr/local/bin"

TC=`which tc 2> /dev/null`
IPTABLES=`which iptables 2> /dev/null`
IPT="$IPTABLES -t mangle"             

# available protocols (01.02.2010)
# 100bao aim aimwebcontent applejuice ares armagetron
# battlefield1942 battlefield2 battlefield2142 bgp
# biff bittorrent chikka cimd ciscovpn citrix counterstrike-source
# cvs dayofdefeat-source dazhihui dhcp directconnect
# dns doom3 edonkey fasttrack finger freenet ftp gkrellm
# gnucleuslan gnutella goboogy gopher guildwars h323
# halflife2-deathmatch hddtemp hotline http-rtsp http ident
# imap imesh ipp irc jabber kugoo live365 liveforspeed
# lpd mohaa msn-filetransfer msnmessenger mute napster
# nbns ncp netbios nntp ntp openft pcanywhere poco pop3
# pplive qq quake-halflife quake1 radmin rdp replaytv-ivs
# rlogin rtp rtsp runesofmagic shoutcast sip skypeout
# skypetoskype smb smtp snmp socks soribada soulseek ssdp
# ssh ssl stun subspace subversion teamfortress2 teamspeak
# telnet tesla tftp thecircle tonghuashun tor tsp
# unknown unset uucp validcertssl ventrilo vnc
# whois worldofwarcraft x11 xboxlive xunlei yahoo zmaap

# filter on which interface
NETDEVICE="eth0"

# max interface speed in kbit/s
MAX_RATE="2000"

# protos, queuing and speeds
# SLOW will always be queued last, NORMAL catches all the random stuff, while FAST will try to push time-critical services ahead.
# PROTOS specifies the affected protocols
# RATE specifies the rate that these protocols are guaranteed
# CEIL specifies the maximum rate when spare bandwidth is available: lower-priority queues may borrow bandwidth from higher-priority ones.
# This way even filesharing won't be restricted at times of low traffic and will only be slowed down at 'rush hour'.

SLOW_PROTOS="100bao applejuice ares bittorrent directconnect edonkey fasttrack freenet ftp gnucleuslan gnutella gopher imesh ipp live365 liveforspeed msn-filetransfer mute napster openft poco pplive qq smb soulseek subspace tftp tonghuashun tsp unknown xunlei zmaap"
SLOW_RATE="200"
SLOW_CEIL="1700"

NORMAL_PROTOS="aimwebcontent armagetron battlefield1942 battlefield2 battlefield2142 bgp biff chikka cimd counterstrike-source cvs dayofdefeat-source dazhihui dhcp dns doom3 finger gkrellm goboogy gopher guildwars h323 halflife2-deathmatch hddtemp hotline ident jabber kugoo lpd mohaa msnmessenger nbns ncp netbios nntp ntp pcanywhere quake-halflife quake1 replaytv-ivs rtp rtsp runesofmagic snmp socks soribada ssdp stun subversion teamfortress2 telnet tesla thecircle tor tsp unset uucp whois worldofwarcraft x11 xboxlive xunlei"
NORMAL_RATE="1000"
NORMAL_CEIL="1800"

FAST_PROTOS="aim ciscovpn citrix http-rtsp http imap irc pcanywhere radmin rdp rlogin shoutcast sip skypeout skypetoskype ssh ssl teamspeak validcertssl ventrilo vnc"
FAST_RATE="1000"
FAST_CEIL="1900"


########################################################
#                                                      #
########### !! NO SERVICEABLE PARTS BELOW !! ###########
#                                                      #
########################################################


# chains
IPT_CHAINS="track protomatch packetsize typeofservice markit split"

# some prios to cling to ;)
PRIO_1="11"
PRIO_2="12"
PRIO_3="13"

# some reusables

log() {
logger -s -t "l7shaper" "$1"
}

die()  {
logger -s -t "l7shaper" "$1"
stop_prio
exit 1
}

# create chains and flush them
create_chains() {
for chain in ${IPT_CHAINS}; do
        $IPT -N ${chain} || die "Could not create chains, aborting"
done
}

flush_chains() {
for chain in ${IPT_CHAINS}; do
        $IPT -F ${chain} || die "Could not flush chains, aborting"
done
}

# add them to the postrouting mangle table
mangle_chains() {
for chain in ${IPT_CHAINS}; do
        $IPT -A POSTROUTING -j ${chain} || die "Could not add chains to mangle table, aborting"
done
}

filter_chains() {
# find already established connections and restore their marks
        $IPT -A track -p tcp -j CONNMARK --restore-mark || die "An error occurred while setting up the L7 Filter chains, aborting"
        $IPT -A track -m mark --mark $PRIO_1 -j ACCEPT || die "An error occurred while setting up the L7 Filter chains, aborting"
        $IPT -A track -m mark --mark $PRIO_2 -j ACCEPT || die "An error occurred while setting up the L7 Filter chains, aborting"
        $IPT -A track -m mark --mark $PRIO_3 -j ACCEPT || die "An error occurred while setting up the L7 Filter chains, aborting"

# new connection? classify protos and mark them accordingly
for proto in ${SLOW_PROTOS}; do
        $IPT -A protomatch -m layer7 --l7proto ${proto} -j MARK --set-mark $PRIO_3 || die "An error occurred while setting up the L7 Filter chains, aborting"
        $IPT -A protomatch -m mark --mark $PRIO_3 -j markit || die "An error occurred while setting up the L7 Filter chains, aborting"
done

for proto in ${NORMAL_PROTOS}; do
        $IPT -A protomatch -m layer7 --l7proto ${proto} -j MARK --set-mark $PRIO_2 || die "An error occurred while setting up the L7 Filter chains, aborting"
        $IPT -A protomatch -m mark --mark $PRIO_2 -j markit || die "An error occurred while setting up the L7 Filter chains, aborting"
done

for proto in ${FAST_PROTOS}; do
        $IPT -A protomatch -m layer7 --l7proto ${proto} -j MARK --set-mark $PRIO_1 || die "An error occurred while setting up the L7 Filter chains, aborting"
        $IPT -A protomatch -m mark --mark $PRIO_1 -j markit || die "An error occurred while setting up the L7 Filter chains, aborting"
done

# no matching protocol found? check the TOS field and jump to markit
# (Minimize-Delay marks interactive traffic, so it belongs in the fast queue)
        $IPT -A typeofservice -m tos --tos Minimize-Delay -j MARK --set-mark $PRIO_1 || die "An error occurred while setting up the L7 Filter chains, aborting"
        $IPT -A typeofservice -m tos --tos Maximize-Throughput -j MARK --set-mark $PRIO_3 || die "An error occurred while setting up the L7 Filter chains, aborting"
        $IPT -A typeofservice -m mark --mark $PRIO_1 -j markit || die "An error occurred while setting up the L7 Filter chains, aborting"
        $IPT -A typeofservice -m mark --mark $PRIO_3 -j markit || die "An error occurred while setting up the L7 Filter chains, aborting"

# mark the connection and jump to split
        $IPT -A markit -j CONNMARK --save-mark || die "An error occurred while setting up the L7 Filter chains, aborting"
        $IPT -A markit -j split || die "An error occurred while setting up the L7 Filter chains, aborting"

# this chain catches packets that haven't met any criteria yet
# and marks them according to their size
        $IPT -A packetsize -m length --length 0:128 -j MARK --set-mark $PRIO_2 || die "An error occurred while setting up the L7 Filter chains, aborting"
        $IPT -A packetsize -m length --length 129: -j MARK --set-mark $PRIO_3 || die "An error occurred while setting up the L7 Filter chains, aborting"
        $IPT -A packetsize -m mark --mark $PRIO_2 -j split || die "An error occurred while setting up the L7 Filter chains, aborting"
        $IPT -A packetsize -m mark --mark $PRIO_3 -j split || die "An error occurred while setting up the L7 Filter chains, aborting"

# now we initialize the queues for the outgoing traffic which has been marked before
# HTB distributes the bandwidth and handles the protocol prioritization

        # the root qdisc and class...
        $TC qdisc add dev $NETDEVICE root handle 1:0 htb r2q 1 default 12 || die "An error occurred while setting up the tc root qdisc, aborting"
        $TC class add dev $NETDEVICE parent 1:0 classid 1:1 htb rate ${MAX_RATE}kbit || die "An error occurred while setting up the tc root class, aborting"

        # ... and the children, all attached to the root class 1:1
        $TC class add dev $NETDEVICE parent 1:1 classid 1:11 htb rate ${FAST_RATE}kbit ceil ${FAST_CEIL}kbit prio 1 || die "An error occurred while setting up a tc child class, aborting"
        $TC class add dev $NETDEVICE parent 1:1 classid 1:12 htb rate ${NORMAL_RATE}kbit ceil ${NORMAL_CEIL}kbit prio 2 || die "An error occurred while setting up a tc child class, aborting"
        $TC class add dev $NETDEVICE parent 1:1 classid 1:13 htb rate ${SLOW_RATE}kbit ceil ${SLOW_CEIL}kbit prio 3 || die "An error occurred while setting up a tc child class, aborting"

        # sfq leaf qdiscs give the flows within one class a fair share
        $TC qdisc add dev $NETDEVICE parent 1:11 handle 110: sfq perturb 10 || die "An error occurred while setting up a tc leaf qdisc, aborting"
        $TC qdisc add dev $NETDEVICE parent 1:12 handle 120: sfq perturb 10 || die "An error occurred while setting up a tc leaf qdisc, aborting"
        $TC qdisc add dev $NETDEVICE parent 1:13 handle 130: sfq perturb 10 || die "An error occurred while setting up a tc leaf qdisc, aborting"

# these are the filters which steer the marked packets into the correct classes
        $TC filter add dev $NETDEVICE parent 1: protocol ip prio 1 handle $PRIO_1 fw flowid 1:11 || die "An error occurred while setting up a tc filter, aborting"
        $TC filter add dev $NETDEVICE parent 1: protocol ip prio 2 handle $PRIO_2 fw flowid 1:12 || die "An error occurred while setting up a tc filter, aborting"
        $TC filter add dev $NETDEVICE parent 1: protocol ip prio 3 handle $PRIO_3 fw flowid 1:13 || die "An error occurred while setting up a tc filter, aborting"
}

stop_prio() {

# delete all traffic control settings and flush the mangle table
$TC qdisc del dev $NETDEVICE root 2> /dev/null && log "Traffic control disabled."

for chain in ${IPT_CHAINS}; do
        $IPT -D POSTROUTING -j ${chain} 2> /dev/null
        $IPT -F ${chain} 2> /dev/null
        $IPT -X ${chain} 2> /dev/null
done
$IPT -F; $IPT -X && log "Chains deleted."
}

# print status of classes and queues
list_table() {
htb_status
if [ "$HTB" = "off" ]; then
        log "Traffic shaping is not enabled."
        exit 1
else
        $TC -p -s -d class show dev $NETDEVICE 2>/dev/null
        $TC -p -s -d qdisc show dev $NETDEVICE 2>/dev/null
fi
}

# quick status test for start/stop/status to use
htb_status() {
$TC qdisc show dev $NETDEVICE | grep htb >/dev/null
if [ $? -eq 0 ]; then
        HTB="on"
else
        HTB="off"
fi
}

case "$1" in
        start)
        htb_status
        if [ "$HTB" = "off" ]; then
                log "Starting traffic shaping on $NETDEVICE"
                create_chains && log "Chains created."
                flush_chains && log "Chains flushed."
                mangle_chains && log "Chains added to mangle table."
                filter_chains && log "Filter and traffic control configured."
                log "L7-Filter successfully initialized on $NETDEVICE. Maximum available bandwidth: $MAX_RATE kbit/s"

        else
                log "Traffic shaping already active, aborting."
                exit 1
        fi
        ;;

        stop)
        htb_status
        if [ "$HTB" = "on" ]; then
                log  "Stopping traffic shaping on $NETDEVICE"
                stop_prio
                log "L7-Filter shut down successfully."
                exit 0
        else
                log "Traffic shaping is not enabled."
                exit 1
        fi
        ;;

        status)
                list_table
        ;;

        restart)
        $0 stop
        $0 start
        ;;

        *)
        echo "Usage: $0 start|stop|status|restart"
        ;;
esac


IP Accounting

Now that we have laid rampaging protocols in chains, we might be interested in the usage of our precious bandwidth and its (hopefully) fair distribution amongst users. This can be accomplished through IP accounting, which keeps track of exactly that. I'll be using the two programs 'pmacct' and 'pmgraph' in this tutorial. Both are free as in speech, grab them from:

(warning) Note to Debian/Ubuntu users: you are in luck as packages for both programs are already available through a PPA. Check the installation details here.

Dependencies

The rest of you, follow me through the manual installation process. First we need to fulfil the dependencies of both pmacct and pmgraph. These can normally be obtained through your distribution's package management system. If you're not familiar with some or any of them, I suggest searching for documentation on the web, as there is plenty out there and describing a basic LAMP/Tomcat setup is beyond the scope of this how-to.
In short, you will need:

  • MySQL-5
  • JDK 1.6
  • Tomcat-6
  • jdbc-mysql
  • pmacct (compiled with MySQL support!)
  • pmgraph (grab the latest .tar.gz through the PPA and download it to a temporary location; we will only need some parts of it)

Install and configure all the packages described above. Done? Good, let's look into the pmacct/pmgraph setup.

Setting up pmacct and pmgraph

pmacct

Unzip the 'pmgraph' package first, as it also contains the MySQL database schema. At the time of writing the latest available package was pmgraph_1.3-2.tar.gz, so this is the version number I'll be using throughout the how-to.

~# mkdir /tmp/pmgraph
~# tar xzpf pmgraph_1.3-2.tar.gz -C /tmp/pmgraph && cd /tmp/pmgraph
~# cd dist


Edit the file pmacct-create-db_v6.mysql with your favourite editor. You only have to change the password string, which is set to 'secret' by default.

~# vim pmacct-create-db_v6.mysql

create database if not exists pmacct;                                          
GRANT SELECT,INSERT, LOCK TABLES ON pmacct.* TO 'pmacct'@'localhost' identified by 'abetterpasswordthansecret';
use pmacct;
...


Now import the schema into MySQL:

~# mysql -uroot -p < pmacct-create-db_v6.mysql


Edit the file pmacctd.conf. The setting 'pcap_filter' has to match your local subnet, otherwise it won't work. For a 10.1.0.0/16 subnet the config would look like this:

pcap_filter: not (src and dst net 10.1.0.0/16)


Also change the DB settings (check which IP and port your database binds to; it may not be localhost as in the example!):

sql_host: localhost
sql_user: pmacct
sql_passwd: abetterpasswordthansecret


Now replace the default pmacct configuration with the new one and start the service:

~# mv /etc/pmacctd.conf /etc/pmacctd.conf.orig
~# cp pmacctd.conf /etc/pmacctd.conf
~# /etc/init.d/pmacct start


pmgraph

Still in the 'dist' folder, unzip the file pmgraph.war into Tomcat's 'webapps' folder (usually '/var/lib/tomcat-6/webapps', but it might also be found at '/usr/share/tomcat-6/webapps' depending on the distribution):

~# unzip pmgraph.war -d /var/lib/tomcat-6/webapps/pmgraph


Now we only have to edit the pmgraph configuration. As before, we only need to change the database settings and fill in the correct subnet. The notation is a bit different here: to declare a 10.1.0.0/16 subnet you only need the first two octets, '10.1.'. Again, if your database does not bind to localhost, change the setting accordingly:

~# cd /var/lib/tomcat-6/webapps/pmgraph/WEB-INF/classes
~# vim database.properties

...
<entry key="JdbcDriver">com.mysql.jdbc.Driver</entry>
<entry key="DatabaseUser">pmacct</entry>
<entry key="DatabasePass">abetterpasswordthansecret</entry>
<entry key="DatabaseURL">jdbc:mysql://localhost:3306/pmacct</entry>
<entry key="LocalSubnet">10.1.</entry>
...

Restart the Tomcat server to apply the new configuration (this can also be done through Tomcat's management panel if Tomcat also hosts other apps which you don't want to interrupt).

Thank you for bearing with me this long, I hope this how-to was helpful. Questions, suggestions or the bad stuff? Hit me at:
<b2c> at <dest-unreachable> dot <net>
