Monday, March 29, 2010

Route all traffic from certain programs through a PPTP VPN tunnel on FreeBSD

The problem

You’d like to route certain traffic over a VPN tunnel, but you only know which program is generating the traffic, not which ports it will be using or which IP ranges it will be connecting to. The only way to crack this nut seems to be to set the default route to the VPN tunnel, but this will send all traffic through the tunnel, which may be a problem if the tunnel is slow and laggy, e.g. if the endpoint happens to be the IPREDator service in Sweden. The common solution seems to be “run the program in a virtual machine,” but that seems like overkill. What to do?

The solution

If you’re on FreeBSD, you’re in luck: since 7.1, FreeBSD has supported multiple routing tables (FIBs). You can set up a secondary routing table that goes through the VPN tunnel by default, and bind any program you wish to that specific routing table.
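As a quick sanity check (a sketch; the exact sysctl output format may vary by release), you can confirm how many FIBs your running kernel supports and peek at the secondary table:

```
# number of routing tables compiled into the running kernel
sysctl net.fibs

# inspect the secondary routing table
setfib -1 netstat -rn
```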

Kernel support

Your kernel must be rebuilt with support for at least 2 FIBs; the default kernel only has support for one. Add

options    ROUTETABLES=2

to your kernel configuration and rebuild.
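If I recall correctly, later FreeBSD releases also expose the FIB count as a loader tunable, so you may be able to skip the rebuild by adding a line to /boot/loader.conf instead (check your release's documentation before relying on this):

```
net.fibs=2
```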

MPD configuration

The PPP client of choice is the netgraph-based Multilink PPP Daemon (mpd5 in ports). The relevant section of mpd.conf looks like this:

pptp_client:
        create bundle static B1
        set iface up-script /usr/local/etc/mpd5/pptp-up.sh
        set iface down-script /usr/local/etc/mpd5/pptp-down.sh
        set ipcp ranges 0.0.0.0/0 0.0.0.0/0

        set bundle enable compression
        set ccp yes mppc
        set mppc yes e40
        set mppc yes e128
        set bundle enable crypt-reqd
        set mppc yes stateless

        create link static L1 pptp
        set link action bundle B1
        set auth authname ***********
        set auth password ***********
        set link max-redial 0
        set link mtu 1460
        set link keep-alive 20 75
        set pptp peer vpn.ipredator.se
        open

Note the lack of a route command. The IPREDator folks have set up their VPN service in a slightly problematic way: the endpoint of the tunnel is also the remote router. The best route to this network node is therefore the direct link over the PPP tunnel, and the kernel will attempt to send the PPP packets over the tunnel itself, like a truck trying to drive up its own tailpipe. Any time this happens, MPD will kill the link. To avoid this condition, you’ll have to set up the routes manually in the up- and down-scripts, which MPD runs whenever the link is brought up or torn down:

#!/bin/sh
# mpd5 up-script, invoked as: <interface> <proto> <local-ip> <remote-ip> ...
interface=$1
proto=$2
tun_ip=$3
tun_endpoint=$4

# Find the current default gateway on the physical interface.
# (The default route is still untouched at this point, since
# mpd.conf contains no route command.)
eth_gateway=$(route -n get default | awk '/gateway/ { print $2 }')

# Replace the direct (on-link) route to the tunnel endpoint with one
# via the physical gateway, so the PPP packets themselves stay off
# the tunnel.
route delete "$tun_endpoint"
route add "$tun_endpoint" "$eth_gateway"

# Make the tunnel the default route for the secondary FIB.
setfib -1 route flush
setfib -1 route add "$tun_endpoint" "$eth_gateway"
setfib -1 route add default "$tun_endpoint"

The trick is to delete the direct route and force traffic to the tunnel endpoint over the physical ethernet gateway instead. The last part of the script does the same in the secondary FIB and then adds a default route pointing at the tunnel.

In the down script, you just have to delete the route to the tunnel endpoint. I also flush the routing table in the secondary FIB, since I want traffic to halt completely if the tunnel isn’t available.

#!/bin/sh
# mpd5 down-script, invoked as: <interface> <proto> <local-ip> <remote-ip> ...
interface=$1
proto=$2
tun_ip=$3
tun_endpoint=$4

# Remove the host route to the (now defunct) tunnel endpoint.
route delete "$tun_endpoint"

# Kill all routing on the secondary FIB so tunneled programs stop
# dead instead of leaking onto the plain connection.
setfib -1 route flush

Use

You can run a program in the tunneled environment via setfib -1 $cmd. Hooray!
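For example (rtorrent here is just an arbitrary stand-in for whatever program you want tunneled):

```
# a whole shell, so everything started from it uses the tunnel
setfib -1 csh

# or a single program
setfib -1 rtorrent
```

Because the secondary FIB's only default route points at the tunnel, and the down-script flushes that table, such a program simply loses connectivity when the VPN drops rather than falling back to the plain connection.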

1 comment:

Unknown said...

Hi,

I’m trying to set up mpd5 and IPREDator and it isn’t working for me. I’m trying to do it without the use of several FIBs, though. Could you please post (or email me) the output from netstat -r before and after the VPN is up, and also the mpd5 output.

/Dre (tmp at imap.cc)