Point-to-Point Protocol (PPP) is an important WAN protocol with many useful features. Because it’s widely used, it’s an important topic in the CCNA course as well. Although Packet Tracer is a valuable tool for simulating PPP, some features are missing, and this is where GNS3 comes into play: we can study the behavior of PPP and configure all of the features needed to understand the protocol completely at CCNA level. Fortunately, we don’t need complex topologies either, so our labs won’t require too many resources. We’ll discuss the following PPP features: compression, link quality monitoring, multilink and authentication.

Our first lab consists of only two routers with WIC-2T modules. To conserve resources, we use 1700 series routers (64, or even 32 MB of RAM is enough). First, because we have a real IOS, we can observe the process of PPP link negotiation with the help of the debug ppp negotiation command. On R1 issue the following commands:
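A minimal sketch of the R1 side, assuming the 10.1.1.0/30 addressing used later in this lab:

R1# debug ppp negotiation
R1# configure terminal
R1(config)# interface serial 0/0
R1(config-if)# ip address 10.1.1.1 255.255.255.252
R1(config-if)# encapsulation ppp
R1(config-if)# no shutdown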

When the debug is active, we can see the following output:

PPP Link Control Protocol (LCP) periodically sends out Configure-Request messages (“O CONFREQ”), but the other end doesn’t respond yet, so after some time it enters the Listen state. We can also see another PPP-specific value, the MagicNumber, which is used to detect loops. Configure R2 in the same way, except that we don’t configure its IP address yet. We can see a lot of debug messages on R1:

R1 sends and receives CONFREQ messages, and later on it sends and receives CONFACK (Configure-Acknowledge) messages. This is LCP negotiating the communication parameters. The phases the process goes through can also be seen. When LCP finishes, some Network Control Protocols (NCPs) come into play, namely CDPCP for CDP and IPCP for IP. These also send and receive CONFREQ messages to negotiate the parameters used to transfer the given protocol, and when they get acknowledged, PPP is ready to carry that protocol. Because CDP is a Layer 2 protocol, its negotiation is successful, but R1 got a reject message from R2 for IPCP. This is because R2 doesn’t have an IP address yet; once we configure 10.1.1.2/30 on R2, the following can be seen on either router after issuing the show interfaces s0/0 command:

The important part is at the bottom: the encapsulation and the LCP/NCP states. Now start Wireshark to observe the traffic on the PPP link: move the mouse over the serial link, and when the label appears, right-click on it and choose R1 s0/0 PPP encapsulation. In the Captures window, right-click on R1’s name and choose Start Wireshark. The program displays CDP messages between the devices and PPP Echo Request/Reply messages. When we issue a ping from R1 to R2, the ICMP Echo Request/Reply packets clearly stand out from this background traffic. Look at the packet length of such an ICMP message: 104 bytes – we’ll need this information later.

Now let’s configure compression. This feature is useful if we have low bandwidth and need to transfer data that is not already compressed; already compressed files (for example, AVI, JPG or ZIP) gain little from it. On the other hand, if we have enough bandwidth, compression may be unnecessary, since it is done in software and consumes resources on the router. The configuration can be done in interface configuration mode:
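As a sketch, enabling the Predictor method (chosen below) on R1 looks like this; the same goes on R2:

R1(config)# interface serial 0/0
R1(config-if)# compress predictor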

We can choose from several methods; here I use Predictor (just because it has a fancy name). Of course, it has to be configured on the other side as well. We can see a new protocol start: the Compression Control Protocol (CCP). From now on Wireshark displays packets labeled “Compressed data”. These are compressed CDP packets. Now issue another ping from R1 and look at the captured packet sizes. Remember the original value of the ICMP packet length? It was 104 bytes; now it’s around 40-50 bytes, so compression obviously works. Experiment with the other methods to see which one works best.

For the next exercise, turn off debugging with the undebug all command, but let Wireshark run in the background. Also turn off compression with the no compress command and watch the Termination Request/Ack messages from CCP. Now we configure the PPP Link Quality Monitoring (LQM) feature. It can be useful if we have a poor quality line. We’ll set a threshold, for example 95 percent, which means that 95 percent of the packets must arrive good and error free within a specific time interval. If more packets than that go bad, LQM takes the link down. So far so good, but how can we simulate a bad link in GNS3?

The software has a nice feature for this, called filter. Under the topology window there’s the Console window, and when we type the filter command into it, the following help appears:

Nice! The freq_drop subcommand will be our friend. Enter the following command:

filter R1 s0/0 freq_drop out 5

What does that mean? We put a filter on R1’s s0/0 interface in the outbound direction, which instructs the emulator to drop one packet out of every five. This means that the quality of our link will be just 80 percent outbound, so we can see LQM in action. The configuration is really simple: issue the ppp quality 95 command on each interface (see the sketch below). PPP will immediately start exchanging Link Quality Report messages. Now issue a ping from R1 and see what happens. The link should go down after a short time, because the amount of bad traffic exceeds the threshold.
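As a sketch, the LQM configuration on R1 (and likewise on R2):

R1(config)# interface serial 0/0
R1(config-if)# ppp quality 95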

Before we start our final exercise with this lab, let’s reset the configuration of the serial interfaces with the default interface s0/0 command. This puts S0/0 back into its factory default state, except that the no shutdown remains active. It can be useful if we have configured a lot of things and want to start with a clean slate.

Maybe the most useful feature of PPP is the ability to authenticate the peer. Authentication is important to maintain security. The best known PPP authentication methods are the Password Authentication Protocol (PAP) and the Challenge Handshake Authentication Protocol (CHAP). The former is a simple method, but not very secure: the password traverses the link in cleartext. CHAP uses a much more secure mechanism: a shared secret is configured on each peer, and the secret itself never appears on the link, only a digest computed from it and other data. When a peer wants to authenticate itself, it gets a so-called challenge. This is a special message to answer – think of it like in the tales: “If you can answer my question, you’ll get the hand of my daughter and half of my kingdom.” The authenticator sends the challenge message, the other party answers it, and if the answer is correct, the authentication is successful. Moreover, authentication can be one-way or two-way.

Start with one-way PAP authentication. On R1 we need to set up the authentication data in interface configuration mode:
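A sketch, assuming the username user1 referenced later in the debug output and an example password of cisco (both values are the lab's choice, not fixed ones):

R1(config)# interface serial 0/0
R1(config-if)# ppp pap sent-username user1 password cisco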

On R2, first issue the debug ppp authentication command in order to see the process. We need to create a user whose credentials correspond to the values configured on R1, so that R2 will recognize that user, and finally we need to enable PAP authentication:
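Assuming the same user1/cisco pair as above, a sketch of the R2 side:

R2# debug ppp authentication
R2# configure terminal
R2(config)# username user1 password cisco
R2(config)# interface serial 0/0
R2(config-if)# ppp authentication pap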

On R2 the debug messages are the following:

We can see that R2 received an Authentication Request from user1 and, at the end, sent an Authentication Ack. In Wireshark we can even see the password along with the peer-ID: PAP is definitely not the best security method. Now let’s look at the output of the show caller user command:

In the middle, the (<-) characters show that the authentication is one-way. Now configure two-way authentication on your own and issue this command again: you should see (<->) instead. Note: we don’t need to use identical passwords on the two routers, because we specify the user data explicitly with the ppp pap sent-username command.
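For reference, a sketch of the mirrored side (user2/cisco are example values only):

R2(config)# interface serial 0/0
R2(config-if)# ppp pap sent-username user2 password cisco
R1(config)# username user2 password cisco
R1(config)# interface serial 0/0
R1(config-if)# ppp authentication pap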

Delete the PPP authentication from the interfaces and set up one-way CHAP. Because we need a common shared secret, create a user on each peer with the same password, using the other peer’s hostname as the username (for example, on R1 create a user named R2). Identical passwords are essential this time. CHAP uses the hostname by default, but we can define another username with the ppp chap hostname command. Then we just need to put the ppp authentication chap command on the authenticator (this is R2 in this case), as sketched below, and watch the debug output:
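A sketch of this setup, with secret as an example shared password:

R1(config)# username R2 password secret
R2(config)# username R1 password secret
R2(config)# interface serial 0/0
R2(config-if)# ppp authentication chap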

The first line shows the outbound challenge message from R2, the second line is the incoming response, and the last line is the outgoing message about the successful authentication. It clearly demonstrates the three-way handshake used by CHAP. Two-way authentication is really easy: just put the ppp authentication chap command on R1 as well and it’s done. To check things, issue the show caller user R2 command on router R1, or vice versa. Finally, check the captured traffic in Wireshark. You won’t see any password traversing the line, just the value of the challenge and the response; a third party without the shared password cannot use these values to crack the authentication. Moreover, CHAP repeats this process from time to time to increase security.

For the last useful feature to study, we need to close this lab and open another one. In this topology we connected the routers by two serial interfaces, because we want to try the Multilink feature. Multilink is a technique to bundle several physical interfaces into one logical interface to achieve higher bandwidth and availability. We complemented the topology with two Microcore Linux virtual machines and use them as remote endpoints that need high availability. If one of the serial interfaces becomes unusable, the other one in the bundle still works, and moreover the combined bandwidth is higher. NOTE: in GNS3 the stress test of multilink (bandwidth measuring and simulating an interface error by shutdown) didn’t work for me, so try these on real equipment instead.

The Microcore VMs can be downloaded from here:

http://www.gns3.net/appliances/

I used the Linux Microcore 3.8.2 QEMU version, as it’s a bit simpler to integrate into GNS3. In the Edit/Preferences/Qemu menu, on the Qemu guest tab I have the following settings:

Start the devices. When the Qemu VMs are ready (don’t worry, they come up a bit slowly on Windows), you’ll see the prompt “tc@box:~$”. This means that we are logged in as the user “tc” on the machine named “box”. This is an unprivileged user, so we need to use “sudo” if we want to run programs with superuser privileges, or alternatively we can switch to the root user by issuing the sudo su command (I personally prefer the latter for this lab). We need to configure IP networking on Microcore: an IP address for the Ethernet interface called eth0 and a default route using that interface as the output interface, so on the left-side VM enter the following:

sudo su
ip address add 192.168.1.2/24 dev eth0
ip route add default dev eth0

On the R1 router configure the Fa0/0 interface with 192.168.1.1/24, and check whether it can ping the Linux machine. If yes, then set up the right-side VM with the IP address 192.168.2.2/24, then R2’s Fa0/0 with 192.168.2.1/24, and check that connection, too.
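A sketch of the R1 side (R2 is analogous with the 192.168.2.0/24 addresses):

R1(config)# interface fastethernet 0/0
R1(config-if)# ip address 192.168.1.1 255.255.255.0
R1(config-if)# no shutdown
R1(config-if)# end
R1# ping 192.168.1.2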

Now we’re ready to configure PPP multilink. The configuration steps are the following:

  • first, we put the two physical interfaces into multilink mode and assign them to the corresponding multilink group;
  • second, we bring up the logical multilink interface for this group and give it an IP address.

From now on the router treats the multilink interface as the connection between the routers, just like, for example, EtherChannel treats the PortChannel interface between switches.

Let’s begin on R1 with the necessary commands for the physical interfaces:
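A sketch, assuming the two serial interfaces are s0/0 and s0/1, that we use multilink group 1, and an IOS that supports the ppp multilink group syntax:

R1(config)# interface serial 0/0
R1(config-if)# encapsulation ppp
R1(config-if)# ppp multilink
R1(config-if)# ppp multilink group 1
R1(config-if)# no shutdown
R1(config-if)# interface serial 0/1
R1(config-if)# encapsulation ppp
R1(config-if)# ppp multilink
R1(config-if)# ppp multilink group 1
R1(config-if)# no shutdown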

Then we create the multilink logical interface:
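A sketch; the 10.1.1.0/30 addressing on the Multilink1 interface is just an example value:

R1(config)# interface multilink 1
R1(config-if)# ip address 10.1.1.1 255.255.255.252
R1(config-if)# ppp multilink
R1(config-if)# ppp multilink group 1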

The configuration is almost identical on R2, except for the IP address of the Multilink1 interface. When a configuration repeats the same commands, it’s good practice to type them into a text editor (like Notepad) and paste the text into the terminal window.

When the interfaces are up, we can check connectivity with ping and then verify the settings with the show ppp multilink command:

The most important piece of information for us is “total bandwidth 3088”. A single interface has a bandwidth of 1544 kbit/s, so our bandwidth really does seem to have doubled. But how can we be absolutely sure of this? Use a measuring tool. Microcore Linux has a tool named iperf, a client/server based application. On the left-side VM start the server by issuing iperf -s, then on the other VM enter the iperf -c 192.168.1.2 command. The two endpoints connect, and soon after we should see a result that proves the bandwidth is twice as big as with a single interface. After this we can simulate a link error by shutting down one of the serial interfaces and check whether the connection between the VMs is still intact, although at half of the previous bandwidth. Here my configuration didn’t work as expected, so again I advise trying this on real equipment instead.

I hope this article has helped you better understand how PPP works behind the scenes and has also shown some useful tools along the way.

Useful links:

PPP authentication:
http://www.cisco.com/c/en/us/support/docs/wan/point-to-point-protocol-ppp/10313-config-pap.html
http://www.cisco.com/c/en/us/support/docs/wan/point-to-point-protocol-ppp/25647-understanding-ppp-chap.html

PPP multilink:
http://www.firewall.cx/cisco-technical-knowledgebase/cisco-routers/822-cisco-router-ppp-multilink.html

PPP link quality monitoring:
https://sites.google.com/site/amitsciscozone/home/ppp/ppp-link-quality-monitoring