This article is about Generic Routing Encapsulation (GRE) tunnels.

Tunneling makes it possible to carry packets of one protocol encapsulated within another protocol. The carried protocol is called the payload protocol, while the protocol that encapsulates the data is called the transport protocol.

GRE is one of many tunneling mechanisms that use IP as the transport protocol, and it can carry many different payload protocols.
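To make the encapsulation concrete: the basic GRE header defined in RFC 2784 is only four bytes, a flags/version field followed by the EtherType of the payload protocol. The sketch below (plain Python, purely illustrative and not part of the lab) shows what prepending that header to an inner packet looks like:

```python
import struct

# Minimal GRE header (RFC 2784): 16 bits of flags/version (all zero here,
# meaning no checksum, no key, no sequence number, version 0) followed by
# the 16-bit protocol type of the payload (0x0800 = IPv4).
GRE_PROTO_IPV4 = 0x0800

def gre_encapsulate(payload: bytes, proto: int = GRE_PROTO_IPV4) -> bytes:
    """Prepend a minimal GRE header to a payload packet."""
    header = struct.pack("!HH", 0x0000, proto)
    return header + payload

# A stand-in for an inner IPv4 packet (real traffic would be a full IP datagram).
inner_packet = b"\x45\x00\x00\x1c" + b"\x00" * 24
gre_packet = gre_encapsulate(inner_packet)

print(len(gre_packet) - len(inner_packet))  # GRE adds 4 bytes of overhead
print(gre_packet[:4].hex())                 # 00000800
```

The resulting bytes would then be placed inside an outer IP packet (protocol number 47) addressed between the tunnel endpoints; the routers do exactly this in hardware or software.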

The tunnels act as virtual point-to-point links.

The tunnels are implemented through a virtual interface that is configured by the user based on what is needed. The tunnel interface itself is not tied to any specific payload protocol or transport protocol.

The topology used for this article and for the simulation is shown below:

The goal of the lab is for the two hosts to reach each other through the GRE tunnel.

After the tunnel interface is configured as shown on the diagram, each router needs a static route towards the subnet of the remote host, pointing through the tunnel interface.

Once you download the files (the link is at the beginning of the article), you will notice that, along with the GNS3 topology file, you also get startup configuration files for this lab.

If you use these configuration files, please adapt the path to them in the GNS3 topology file.

Once the topology is loaded and all the devices are powered on, the next step is to configure the two hosts (PC_1 and PC_2) with IP addresses and default gateway.

There are two things not shown on the diagram:

  • R1, R2 and R3 are running OSPF so that R1 and R3 can reach each other. The tunnel source must be able to reach the tunnel destination for the tunnel to come up.
  • The subnet used on the tunnel interfaces will be 1.1.1.0/24. 1.1.1.1/24 will be configured on R1 and 1.1.1.3/24 will be configured on R3.
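The OSPF part is not shown in the outputs below; a minimal configuration in the spirit of the lab (the process ID and area are illustrative assumptions, not taken from the lab files) would look like this on R1, with the equivalent networks advertised on R2 and R3:

```
router ospf 1
 network 10.10.12.0 0.0.0.255 area 0
```

Any routing setup that lets 10.10.12.1 and 10.10.23.3 reach each other would work equally well here.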

This is the routing table of R1:

R1#show ip route | begin Gateway
Gateway of last resort is not set

     10.0.0.0/24 is subnetted, 3 subnets
C       10.10.1.0 is directly connected, FastEthernet0/0
C       10.10.12.0 is directly connected, FastEthernet1/0
O       10.10.23.0 [110/2] via 10.10.12.2, 00:23:07, FastEthernet1/0
R1#

And this is the routing table of R3:

R3#show ip route | begin Gateway
Gateway of last resort is not set

     10.0.0.0/24 is subnetted, 3 subnets
C       10.10.2.0 is directly connected, FastEthernet1/0
O       10.10.12.0 [110/2] via 10.10.23.2, 00:22:28, FastEthernet0/0
C       10.10.23.0 is directly connected, FastEthernet0/0
R3#

Before you can configure the hosts, you should know that they are emulated using a lightweight version of Linux. You can download it from here: http://sourceforge.net/projects/gns-3/files/Qemu%20Appliances/linux-microcore-3.8.2.img. Once you download it, you need to configure GNS3. Go to Edit – Preferences – Qemu. You should see something similar to the figure below. Keep in mind that the path location might be different, depending on where you decided to store the Linux image.

Once you start the hosts, you will need to configure the IP address on eth0 of each host and a default gateway pointing to the router it is connected to, as shown on the diagram; the configuration does not survive a power-off, so this must be repeated after every restart.

Keep in mind that logging in with the username “tc” on the console drops you directly into a shell without asking for any password.

The following commands on PC_1 change the hostname, assign the right IP address on eth0, and add the default route pointing to R1. Apply a similar configuration on PC_2.

tc@box:~$ sudo hostname PC_1
tc@PC_1:~$ sudo ifconfig eth0 10.10.1.100 netmask 255.255.255.0
tc@PC_1:~$ sudo route add default gw 10.10.1.1 eth0
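For reference, the matching commands on PC_2 would be the following (the 10.10.2.100 address and 10.10.2.1 gateway are taken from the diagram and confirmed by the ping and traceroute outputs later in the article):

```
tc@box:~$ sudo hostname PC_2
tc@PC_2:~$ sudo ifconfig eth0 10.10.2.100 netmask 255.255.255.0
tc@PC_2:~$ sudo route add default gw 10.10.2.1 eth0
```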

To confirm that everything is configured correctly, ping your gateway:

tc@PC_1:~$ ping 10.10.1.1
PING 10.10.1.1 (10.10.1.1): 56 data bytes
64 bytes from 10.10.1.1: seq=0 ttl=255 time=49.112 ms
^C
--- 10.10.1.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 49.112/49.112/49.112 ms
tc@PC_1:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
127.0.0.1       0.0.0.0         255.255.255.255 UH    0      0        0 lo
10.10.1.0       0.0.0.0         255.255.255.0   U     0      0        0 eth0
0.0.0.0         10.10.1.1       0.0.0.0         UG    0      0        0 eth0
tc@PC_1:~$

This is the configuration on R1 for the tunnel interface and for the static route towards the PC_2 subnet through the tunnel:

R1#show running-config interface Tunnel0
Building configuration...

Current configuration : 132 bytes
!
interface Tunnel0
 ip address 1.1.1.1 255.255.255.0
 keepalive 10 3
 tunnel source 10.10.12.1
 tunnel destination 10.10.23.3
end

R1#

R1#show running-config | i 10.10.2.0
ip route 10.10.2.0 255.255.255.0 1.1.1.3
R1#

And this is the configuration from R3:

R3#show running-config interface Tunnel0
Building configuration...

Current configuration : 132 bytes
!
interface Tunnel0
 ip address 1.1.1.3 255.255.255.0
 keepalive 10 3
 tunnel source 10.10.23.3
 tunnel destination 10.10.12.1
end

R3#

R3#show running-config | i 10.10.1.0
ip route 10.10.1.0 255.255.255.0 1.1.1.1
R3#

Once this is configured, it’s time to check the operational status of the tunnel interface. This is on R1:

R1#show interfaces Tunnel0
Tunnel0 is up, line protocol is up
  Hardware is Tunnel
  Internet address is 1.1.1.1/24
  MTU 1514 bytes, BW 9 Kbit, DLY 500000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation TUNNEL, loopback not set
  Keepalive set (10 sec), retries 3
  Tunnel source 10.10.12.1, destination 10.10.23.3
  Tunnel protocol/transport GRE/IP
    Key disabled, sequencing disabled
    Checksumming of packets disabled
  Tunnel TTL 255
  Fast tunneling enabled
  Tunnel transmit bandwidth 8000 (kbps)
  Tunnel receive bandwidth 8000 (kbps)
  Last input 00:05:44, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/0 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     50 packets input, 2604 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     66 packets output, 3510 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
R1#

As you can see, the interface is up and the tunnel protocol/transport is GRE/IP, as configured. The keepalive uses the default values: one keepalive every 10 seconds, with the tunnel brought down after three missed keepalives, so it takes up to 30 seconds to detect a failure between the source and the destination of the tunnel.
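If 30 seconds is too slow for your scenario, the timers can be tuned with the same `keepalive` interface command; for example (the values below are chosen purely for illustration), a 5-second interval with 3 retries cuts detection time to about 15 seconds:

```
interface Tunnel0
 keepalive 5 3
```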

The purpose of the lab was to have the two hosts communicating through the tunnel interfaces configured on R1 and R3.

Once the static route is configured, it appears in the routing table. For instance, on R1:

R1#show ip route static
     10.0.0.0/24 is subnetted, 4 subnets
S       10.10.2.0 [1/0] via 1.1.1.3
R1#

And PC_1 should be able to ping PC_2:

tc@PC_1:~$ ping 10.10.2.100
PING 10.10.2.100 (10.10.2.100): 56 data bytes
64 bytes from 10.10.2.100: seq=0 ttl=62 time=69.858 ms
^C
--- 10.10.2.100 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 69.858/69.858/69.858 ms
tc@PC_1:~$

Also, when you check the status of the interface, you can see traffic going through the tunnel; it can only be the traffic between the two hosts or the keepalive packets.

Let’s clear the interface counters on R1, send five ICMP packets from PC_1 to PC_2, and then check the counters on R1 again.

Clear the counters:

R1#clear counters
Clear "show interface" counters on all interfaces [confirm]
R1#
*Mar  1 00:55:43.695: %CLEAR-5-COUNTERS: Clear counter on all interfaces by console
R1#

Send the ICMP packets from PC_1 to PC_2:

tc@PC_1:~$ ping 10.10.2.100
PING 10.10.2.100 (10.10.2.100): 56 data bytes
64 bytes from 10.10.2.100: seq=0 ttl=62 time=66.542 ms
64 bytes from 10.10.2.100: seq=1 ttl=62 time=85.004 ms
64 bytes from 10.10.2.100: seq=2 ttl=62 time=53.098 ms
64 bytes from 10.10.2.100: seq=3 ttl=62 time=72.986 ms
64 bytes from 10.10.2.100: seq=4 ttl=62 time=80.438 ms
^C
--- 10.10.2.100 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 53.098/71.613/85.004 ms
tc@PC_1:~$

Check the counters again on R1:

R1#show interfaces Tunnel0
Tunnel0 is up, line protocol is up
  Hardware is Tunnel
  Internet address is 1.1.1.1/24
  MTU 1514 bytes, BW 9 Kbit, DLY 500000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation TUNNEL, loopback not set
  Keepalive set (10 sec), retries 3
  Tunnel source 10.10.12.1, destination 10.10.23.3
  Tunnel protocol/transport GRE/IP
    Key disabled, sequencing disabled
    Checksumming of packets disabled
  Tunnel TTL 255
  Fast tunneling enabled
  Tunnel transmit bandwidth 8000 (kbps)
  Tunnel receive bandwidth 8000 (kbps)
  Last input 00:13:41, output 00:00:07, output hang never
  Last clearing of "show interface" counters 00:00:06
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/0 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     5 packets input, 540 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     5 packets output, 540 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
R1#

As you can see, the number of packets sent matches the number of packets that went through the tunnel: five in, five out.

One reason to configure a tunnel interface is to hide the payload from the devices between the source and destination of the tunnel; another is to make the routers configured as the tunnel source and destination appear directly connected.

Check the output when a traceroute is issued from PC_2 towards PC_1:

tc@PC_2:~$ traceroute 10.10.1.100
traceroute to 10.10.1.100 (10.10.1.100), 30 hops max, 38 byte packets
 1  10.10.2.1 (10.10.2.1)  12.950 ms  4.201 ms  38.739 ms
 2  1.1.1.1 (1.1.1.1)  85.533 ms  37.812 ms  71.899 ms
 3  10.10.1.100 (10.10.1.100)  79.307 ms  106.123 ms  131.049 ms
tc@PC_2:~$

As you can see, the first hop is R3, the second hop is R1 (1.1.1.1 is configured on the tunnel interface on R1) and the third one is PC_1.

R2 is not mentioned anywhere. It’s as if it doesn’t exist.

It does exist, of course, but to R2 any traffic sent between R1 and R3 through the tunnel appears as GRE, regardless of whether it is actually ICMP, FTP, HTTP or something else.

As you can see, the GRE tunnel configuration required for the CCNA exam is pretty straightforward. At a minimum, you only need to configure the tunnel source, the tunnel destination and the IP address.
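Stripped down to that minimum, using the addresses from this lab, the R1 side would be:

```
interface Tunnel0
 ip address 1.1.1.1 255.255.255.0
 tunnel source 10.10.12.1
 tunnel destination 10.10.23.3
```

No `tunnel mode` command is needed because GRE/IP is the default tunnel mode on Cisco IOS; the keepalive used earlier in the lab is optional.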

However, problems with GRE tunnels are common, often because the tunnels traverse domains that are not under your administration.

Always check that each end can reach the tunnel destination, and check that GRE (IP protocol 47) is allowed on the networks that are not under your administration.
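A quick way to verify reachability from the routers themselves is an extended ping of the tunnel destination sourced from the tunnel source; and on intermediate devices you do control, make sure IP protocol 47 is permitted. For example (the access-list number is arbitrary and shown only as an illustration):

```
R1#ping 10.10.23.3 source 10.10.12.1
!
! On a transit device under your control, GRE can be matched explicitly:
access-list 100 permit gre any any
```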

References

  1. How to configure a GRE tunnel: https://supportforums.cisco.com/document/13576/how-configure-gre-tunnel
  2. How GRE Keepalives Work: http://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/63760-gre-keepalives-63760.html