Sunday, August 25, 2013

FW: A Network Engineer's Notes on Telstra's NextG 3G Network



Like Frame Relay, ISDN, DSL and so on before it, 3G wireless networking encompasses a slew of concepts and pitfalls that can be foreign to a network engineer new to the technology. The following notes were assembled during a programme to deploy 3G wireless WAN links to approximately 70 sites across Qld, NSW, Vic, and WA. The 3G links were primarily deployed as a backup for 1-4 Mb/s SHDSL, Frame Relay (FR) and some ADSL primary links for offices of 5-20 staff. Some remote sites, beyond reach of xDSL or FR, use 3G as the primary link. The entirety of the WAN is connected to the same provider VPN cloud (Telstra's NextIP) – this uniform approach has significant advantages in terms of consistency in design, operations and management. Staff use a mix of standalone PCs and Citrix terminals, roughly four Citrix terminals to one PC.
These notes are focussed on the current Cisco 3G WAN card, the HWIC-3G-GSM, which is supported by Cisco's 1841, 1861, 2800-series and 3800-series ISR routers. The card only supports High-Speed Downlink Packet Access (HSDPA) at "up to" 3.6 Mb/s downlink and 384 kb/s uplink (presumably HSDPA Category 5/6, though Cisco don't actually say). Actual measured speeds are more like 0.8-1 Mb/s down, 200-250 kb/s up.
Some other devices were examined, including the
• Ericsson W25 router
• Call Direct CDR-780seu
• Maxon USB3-8521 "orange" USB modem
• Telstra Turbo 7 Series Express Card and USB (AC880E, AC880U) "blue" modems (rebranded Sierra Wireless cards)
The Cisco was bested by all of the above devices in link throughput tests; however, other technical, business, operations and management factors meant that the Cisco solution was deployed (most significantly, the WAN design uses DMVPN and EIGRP; the former has limited open support at present, and the latter is Cisco-proprietary).
The Cisco 881G 'soho' router will be available sometime in late 2008 – importantly, the 881G will contain a High Speed Uplink Packet Access (HSUPA)-capable modem for much-improved uplink speeds.
What Your Telco Won't Tell You
Telstra's published NextG information is very consumer-oriented, which is to say severely lacking in technical detail. The most marketing-free source of device and configuration information is Telstra's so-low-key-as-to-be-almost-invisible web site [note: this URL no longer works]. To complicate matters, Telstra's NextG network is sold as a number of separate products: data-focused products (the business-oriented Telstra Mobile Wireless Broadband and the consumer-oriented Bigpond Wireless Broadband) and various voice/data Mobile Phone 3G plans. A warning: many Telstra representatives will not be au fait with, or even aware of, many of the company's "other" products. Also be very careful with terminology – much confusion can arise if you get lazy with product names (they're all very similar). From the perspective of this document, the primary difference between the services is in the network connectivity: the business products can be connected to an existing NextIP IPWAN service (i.e. private WAN) or the Internet.
3G Overview
It can be helpful to know a little of the background; first, some buzz-word bingo:
"3G" is a broad category of standards and services around "broadband" mobile wireless voice and data. Universal Mobile Telecommunications System (UMTS) is part of this family and is the standard used for 3G services in Australia. Telstra's NextG product is a UMTS implementation using a Wideband CDMA (W-CDMA) radio carrier in the 850 MHz band. There are "legacy" pockets of 2100 MHz used in some areas. Most modems are capable of automatically switching between the two bands, though not always of making the best choice. Some Telstra technical staff recommend sticking to the 850 MHz band.
High Speed Packet Access (HSPA) is a collection of mobile telephony protocols that extend and improve the performance of existing UMTS protocols. Two standards, HSDPA and HSUPA, have been established and a further standard, HSPA+, is soon to be released. The Ericsson whitepaper Basic Concepts of HSPA has a good technical introduction to HSDPA and HSUPA.
High Speed Downlink Packet Access (HSDPA) provides "up to" 14.4 Mb/s down, 384 kb/s up (earlier HSDPA versions only had 1.8 Mb/s down, 128 kb/s up). The various releases, called Categories, of HSDPA in summary (all 384 kb/s up):
Category 3,4 = 1.8 Mb/s down
Category 5,6 = 3.6 Mb/s down
Category 7,8 = 7.2 Mb/s down
Category 10 = 14.4 Mb/s down
High Speed Uplink Packet Access (HSUPA) provides improved up-link performance of "up to" 5.76 Mb/s (HSUPA Category 6). Telstra's network currently supports HSDPA Category 10 (14.4 Mb/s down), HSUPA Category 6 (5.76 Mb/s up) – though there are no currently available phones or modems that can support these speeds.
Conspicuously absent from the readily available HSPA specifications is latency. The "Basic Concepts of HSPA" Ericsson paper states measured (one-way) latency on HSDPA networks as below 70 ms; real-world round-trip times on Telstra's NextG network are typically around the 100-120 ms mark. Latency can increase markedly during congestion of the radio medium or the network. Empirical testing has shown that 3G round-trip times can impact interactive applications such as Citrix: an informal survey of four staff using unoptimised Citrix over a NextG link (measured latency ~100-120 ms) showed most staff noticed the lag, but for the majority it wasn't a serious distraction. Staff began to object when background traffic pushed round-trip times over the 200 ms mark. This testing was on a NextIP IPWAN service (APN=telstra.corp); round-trip times on the Internet-connected service (APN=telstra.internet) are significantly higher at 200-300 ms.
Three components are required to use a 3G data connection: a USIM, a radio modem, and a PC or router. The USIM identifies the subscriber (for billing, etc). The radio modem does the heavy lifting of providing physical-layer (Layer 1) access to the local 3G base station. The PC or router typically uses PPP as the Layer 2 data link to the provider's Network Access Server (NAS), and from there is connected to the provider's Layer 3 network (which may be a private VPN or the public Internet). The overall network architecture is more or less the same as that used for xDSL (PPPoE, PPPoA) or traditional dialup/Frame Relay/ISDN.
USIM, iSim, We All Sim
UMTS SIM (USIM) is a smart card used to store identification and authentication information, in particular the mobile subscriber ID (IMSI) and secret authentication key (shared with the carrier).
The SIM is uniquely identified via its ICCID; part of this ID is printed on the SIM (the "SIM number"); a full ICCID is 19 (or 20?) characters. The SIM may be protected by a PIN; if so, the SIM cannot be used without first being given the PIN (once per "session"). A strong PIN will prevent use of a stolen SIM; however, Cisco IOS does not provide any facility to automatically unlock a SIM (e.g. on reload), so it is not practical to use a PIN on a SIM installed in a 3G WIC. Note that a stolen USIM alone will not allow access to a private IPWAN – the network access server credentials (typically CHAP) are also required to connect. A stolen unprotected USIM would, however, allow connection to Telstra's Internet service, which does not require NAS authentication.
Telstra use the phone number and SIM number as their unique account identifier (for billing, fault reporting, etc). A Telstra SIM number as printed on the card and quoted by Telstra is only the most significant 8 digits of the 12-digit account ID, plus an extra 20th ICCID digit. (Optus SIM numbers are the full 12-digit ICCID account number plus the check digit.) It is possible to extract the ICCID via the modem using the 'AT!ICCID?' modem command (there is no corresponding IOS command); the phone number can't be determined from the SIM or modem.
The ICCID is structured as MM CC II N{12} C [x], where:
MM = Constant (ISO 7812 Major Industry Identifier; 89 for "Telecommunications administrations and private operating agencies")
CC = Country Code (61 = Australia)
II = Issuer Identifier (AAPT = 14, EZI-PhoneCard = 88, Hutchison = 06, Optus = 02/12/21/23, Telstra = 01, Telstra Business = 00/61/62, Vodafone = 03)
N{12} = Account ID ("SIM number")
C = Checksum (of the entire 19-digit string)
x = an extra 20th digit returned by the 'AT!ICCID?' command, and also printed on Telstra SIMs, but apparently not an official part of the ICCID (?)
The following are example ICCIDs and corresponding SIM numbers:

8961023412352120898F Optus 34 12352 12089 8
89610155555542000070 Telstra 5555 5542 0P
89610155543235000034 Telstra 5554 3235 4P
Optus print all 12 account digits and the checksum digit on the SIM; Telstra NextG SIMs carry only the left-most 8 account digits, omit the checksum, and include an unknown 2-character suffix (one character of which is returned as the 20th digit by the 'AT!ICCID?' command).
Useless fact: the ICCID is an instance of an ISO 7812 ID, the same format used for magnetic stripe cards including ATM and credit cards.
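Since the ICCID is an ISO 7812 ID, its 19th digit is a standard Luhn check digit computed over the whole string. A short Python sketch (the function name is mine; the example ICCIDs are the ones listed above, trimmed to their checksummed 19 digits):

```python
def luhn_valid(digits: str) -> bool:
    """Return True if the digit string passes the ISO 7812 Luhn check
    (the last digit is the check digit)."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# First 19 digits of the example ICCIDs above (the 20th character,
# where present, is outside the checksummed portion):
for iccid in ("8961023412352120898",    # Optus
              "8961015555554200007",    # Telstra
              "8961015554323500003"):   # Telstra
    assert luhn_valid(iccid)
```

This confirms that, unlike Telstra's printed SIM numbers, the full ICCID read via 'AT!ICCID?' is internally checkable.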
Modems and Profiles
The cellular modem needs to make a "data call" (establish a Packet Data Protocol (PDP) context); once connected, a PPP session is established to the network access server. The modem requires age-old AT commands to make the call and to interrogate the SIM, etc. IOS provides an interface to a handful of modem features via the 'cellular' exec command and chat script(s). Whether through a limitation of the modem or of IOS, AT commands can only be issued when the modem is idle (not in a call).
Unlike a traditional PSTN modem, there is no phone number to dial out to – rather the modem is configured with at least one "profile" which stores an Access Point Name (APN) and optionally a username and password; this profile is then "dialled" to establish the connection.
Telstra APNs include
telstra.internet – Internet connectivity (NATted)
telstra.corp – private IPWAN
Profiles are stored in the modem, not the USIM nor router's NVRAM or flash memory. Profiles must be configured using 'exec' mode IOS commands (which wrap appropriate 'AT' modem commands). Note that a modem profile and an IOS dialer profile are two separate things.
Cisco's HWIC-3G-GSM wireless WAN card is basically a Sierra Wireless MC8775 modem carried on a HWIC. IOS presents two interfaces:
– low-speed asynchronous "control" interface ('line x/x/x')
– high-speed synchronous interface ('interface cellular x/x/x')
There is also a physical "diag" port on the front of the WIC for debugging the modem (requires proprietary Qualcomm software).
You can connect to the modem on its command port via the standard "reverse telnet" (i.e. telnet <local IP> 2000+portnum), but only when the modem is not in a call.
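The reverse-telnet port is simply 2000 plus the line's absolute line number (which is platform-dependent; find it with 'show line'). A trivial illustration, with the function name my own:

```python
def reverse_telnet_port(abs_line: int) -> int:
    """TCP port for reverse telnet to an async line:
    2000 + the line's absolute number (see 'show line')."""
    return 2000 + abs_line

# On the 1841 used in these notes, line 0/0/0 is absolute line 2,
# hence the 'telnet <local IP> 2002' seen later:
assert reverse_telnet_port(2) == 2002
```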
The WIC has a Received Signal Strength Indication (RSSI) LED
• Off: Low RSSI (under -100 dBm)
• Slow Green Blink: Low or medium RSSI (-99 to -90 dBm)
• Fast Green Blink: Medium RSSI (-89 to -70 dBm)
• Solid Green: High RSSI (-69 dBm or higher)
• Solid Yellow: No service
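The LED thresholds above can be summarised in a small illustrative function (mine, not Cisco's); None models the no-service case:

```python
def rssi_led(rssi_dbm):
    """Map a measured RSSI (dBm) to the HWIC-3G-GSM LED state,
    per the thresholds listed above; None means no service."""
    if rssi_dbm is None:
        return "solid yellow"       # no service
    if rssi_dbm >= -69:
        return "solid green"        # high RSSI
    if rssi_dbm >= -89:
        return "fast green blink"   # medium RSSI
    if rssi_dbm >= -99:
        return "slow green blink"   # low/medium RSSI
    return "off"                    # under -100 dBm

assert rssi_led(-62) == "solid green"      # e.g. the -62 dBm reading later
assert rssi_led(-95) == "slow green blink"
assert rssi_led(-105) == "off"
```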
Other cards
USB-based modem cards are commonly used with PCs. These present to the operating system as USB-hosted serial devices; modem-style AT commands are issued over the pseudo-serial device and PPP is started once a connection is reported.
Note carefully that the PPP endpoint is in the wireless WAN card itself, not a PPP server across the air. This implies that some network features, such as IPv6, may require firmware upgrades on the consumer's USB card.
Some vendors' cards offer multiple emulated serial interfaces, which allows AT commands to be issued whilst PPP is in use. There is often a vendor-specific protocol run over one of the serial links, useful for reporting signal strength and the like.
Router Configuration
The simplest IOS configuration is as follows:
– a simple chat script to "dial" a profile stored in the modem
– traditional Dial-on-Demand Routing (DDR) config
– basic PPP with CHAP authentication
DDR can't keep the cellular interface permanently up (it is dial-on-demand, after all), but a Dialer Profile can, via the 'dialer persistent' command. (Pointing a static route at the cell interface and hoping there's always going to be interesting traffic isn't quite the same; almost any network is idle at some point or other.) In short, dialer-profile configuration is required for a permanent connection.
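A minimal sketch of such a dialer-profile configuration, assuming the chat-script label ('ipwan') and interface numbering used later in these notes (the dialer pool number is illustrative):

```
interface Cellular0/0/0
 encapsulation ppp
 dialer in-band
 dialer pool-member 1
!
interface Dialer1
 ip address negotiated
 encapsulation ppp
 dialer pool 1
 dialer string ipwan
 dialer persistent
 dialer-group 1
```

With 'dialer persistent' the router re-establishes the call itself, rather than waiting for interesting traffic.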
3G is generally considered a remote-access technology rather than an internetworking one; as such, Telstra don't provide any dynamic routing protocols over the 3G link. They can inject routes at the NAS on behalf of the remote network via the RADIUS Framed-Route attribute (22, not to be confused with Framed-Routing, attribute 10), but that's an ugly solution compared to true dynamic routing.
Overlaying a Dynamic Multipoint Virtual Private Network (DMVPN) – the trio of multipoint GRE, NHRP and IPSec – has the benefit of making the NextG network totally transparent. The DMVPN tunnel allows any routing protocol; unsurprisingly, Cisco recommend EIGRP. Running EIGRP in stub mode over the tunnel is reasonably efficient: with the default hello timers (5 s) on both neighbours, an idle link ticks over at under 25 B/s each way, or ~50-60 MB/month; stretching the hello timer to 30 s brings this down to ~10 MB/month. Actual results on an idle link with four routes down and one summary up, over a 36.9-hour period: Tx 503686 / Rx 414811 bytes = 9.5 / 7.8 MB/month.
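The per-month figures can be reproduced from the measured byte counts; the numbers above work out using a 730-hour month and 1 MB = 2^20 bytes:

```python
def mb_per_month(measured_bytes: int, hours: float) -> float:
    """Extrapolate a byte count measured over 'hours' to MB per
    730-hour month (1 MB = 2**20 bytes)."""
    return measured_bytes * 730 / hours / 2**20

# Measured idle-link totals over 36.9 hours (from the text above):
tx = mb_per_month(503686, 36.9)
rx = mb_per_month(414811, 36.9)
assert round(tx, 1) == 9.5   # Tx ~9.5 MB/month
assert round(rx, 1) == 7.8   # Rx ~7.8 MB/month
```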
The following steps are required to configure a HWIC-3G-GSM:
1. configure the SIM and modem
2. configure the router
The modem and SIM steps only need to be done once (per carrier). The modem configuration is carrier-specific; Telstra's "Configuring the Cisco HWIC-3G-GSM for Internet and IP WAN Connectivity", v1.0 (Oct 2007) contains a number of voodoo AT commands that aren't publicly documented anywhere.
The notes below are all extracts from real console sessions and are both necessary and sufficient; both Cisco's and Telstra's documentation includes copious extraneous material in their configurations. (The only caveat is the initial modem configuration, which as mentioned is not documented anywhere, so you have to take Telstra's word for it.) Any configuration beyond the interface (e.g. NAT, DMVPN, routing) is largely independent of the cellular interface config and is left as an exercise for the reader.
1. Configure the modem
a) unlock the SIM

Router#sh cell 0/0/0 security
Card Holder Verification (CHV1) = Enabled
SIM Status = Locked
SIM User Operation Required = Enter CHV1
Number of Retries remaining = 3

Router#cellular 0/0/0 gsm sim unlock NNNN
!!!WARNING: SIM will be unlocked with pin=NNNN(4), call will be disconnected!!!
Are you sure you want to proceed?[confirm]
b) confirm firmware version (should be H1_1_8_3MCAP, apparently)

ISR1841#sh cel 0/0/0 hard
Modem Firmware Version = H1_1_8_3MCAP C:/WS/
Modem Firmware built = 03/08/07
Hardware Version = 1.0
International Mobile Subscriber Identity (IMSI) = 00000
International Mobile Equipment Identity (IMEI) = 352678013223949
Factory Serial Number (FSN) = D28239720801020
Modem Status = Online
Current Modem Temperature = 19 deg C, State = Normal
Note: the above IMSI (00000) seems to occur when the SIM is locked, or has just been unlocked and has not yet been used (recall the IMSI is stored in the SIM).
c) modem config
When entering AT commands, only one line at a time is accepted (i.e. pasting multiple lines will not work). The Sierra Wireless AT commands are documented, but don't include much of the following.

! temporary, remove after configuring the modem and/or configuring a real loopback:
interface Loopback0
ip address

line 0/0/0
transport input all

telnet 2002

if the 'AT!CUSTOM?' doesn't list the four items, enter:

- the SCDFTPROF command (query/set the default profile ID) gives an error whether or not profile 1 exists
– the chat script below uses an explicit profile, so the default doesn't matter

*Jun 18 14:21:21.401: %CELLWAN-2-MODEM_DOWN: Cellular0/0/0 modem is DOWN
*Jun 18 14:21:35.101: %CELLWAN-2-MODEM_UP: Cellular0/0/0 modem is now UP
*Jun 18 14:21:35.101: %CELLWAN-2-MODEM_DOWN: Cellular0/0/0 modem is DOWN
*Jun 18 14:21:45.337: %CELLWAN-2-MODEM_UP: Cellular0/0/0 modem is now UP
set band:

Index, Name
00, All bands
01, N/A (Defaults to ALL)
02, N/A (Defaults to ALL)
03, N/A (Defaults to ALL)
04, N/A (Defaults to ALL)
06, N/A (Defaults to ALL)
07, N/A (Defaults to ALL)
09, N/A (Defaults to ALL)
0A, N/A (Defaults to ALL)
0B, N/A (Defaults to ALL)
0C, WCDMA 850 GSM 900/1800
0D, WCDMA 850

d) disconnect, clean up
no interface Loopback0
e) configure the modem profile(s)
- you DON'T need the CHAP username/pass here

Router#cellular 0/0/0 gsm profile create 4 telstra.corp
Profile 4 will be created with the following values:
APN = telstra.corp
Are you sure? [confirm]
Profile 4 written to modem
- the profile is shown (sh cell 0/0/0 profile) as ACTIVE when a call is in progress, INACTIVE otherwise
Router#sh cel 0/0/0 profile
Profile 4 = ACTIVE
PDP Type = IPv4
PDP address =
Access Point Name (APN) = telstra.corp
Authentication = None
Username: , Password:

 * – Default profile
2. Configure the router
- the following is the bare essential config:

chat-script ipwan "" "ATDT*98*4#" TIMEOUT 30 CONNECT

interface Cellular0/0/0
encapsulation ppp
ppp chap hostname 
ppp chap password 0 mysecret
async mode interactive
ip address negotiated
dialer in-band
! dialer string is required by IOS, but has no meaning for the cell interface; use the chat script label
dialer string ipwan
dialer-group 1

! default is 120s
dialer idle-timeout 300

! allow any ip traffic to bring up the link
dialer-list 1 protocol ip permit

line 0/0/0
script dialer ipwan

! send something (anything) to the cell interface to get it going...
ip route 0.0.0.0 0.0.0.0 Cellular0/0/0
3. Notes
- 'speed' commands may appear under the line 0/0/0, these can't be removed and seem to be ignored with a warning: "This command has no effect on this line; use modem AT commands instead"
- the 'ppp ipcp dns request' command is not useful for the VPN (Telstra IPWAN) – Telstra's DNS server(s) will not be reachable from within the VPN cloud, nor will they contain useful information for the private domain
- a possible Catch-22: when idle, the cellular interface is spoofed up and won't have an IP address, so it can't source traffic; the router won't (can't) generate any traffic unless it has a configured local interface (any network/mask)
– i.e. the router needs to generate traffic to trigger the dialer via the static default route, so make sure to have another interface up; once the cell interface is up (and assigned an IP) the router will then use that address (as the "closest" interface) to source traffic
- Telstra inject a host route into the NextIP VPN when the wireless node connects; it can take a short time (~30 s with RIP) before that route propagates across the cloud
- you can only connect to the modem (via telnetting to the line VTY port, e.g. 2002 for 0/0/0) when the modem is not in a call
– 'show cell 0/0/0 profile' will be INACTIVE when idle, ACTIVE when in a call
– 'show line' will have an 'I' in the first column when the line is idle, 'A' when active
– don't forget to allow telnet access to the port (e.g. 'transport input all')
- to arbitrarily take the PPP connection down, use the 'clear interface cell0/0/0' command
- traffic in and out of the Cell interface: 'sh cell 0/0/0 connection | i Data'
– counters reset on boot or 'clear counters c0/0/0'
4. Example Output
a) Basic Config

Router#debug chat
Router#debug ppp negotiation
Router#debug ppp error

! no locally configured interfaces (i.e. just the cell0/0/0)
Router#sh ip ro
Gateway of last resort is to network

S* is directly connected, Cellular0/0/0
% Unrecognized host or address, or protocol not running.
- add a loopback, for example:

Router#sh ip ro
Gateway of last resort is to network is subnetted, 1 subnets
C is directly connected, Loopback0
S* is directly connected, Cellular0/0/0


Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to, timeout is 2 seconds:

*Jul 16 01:27:26.047: CHAT0/0/0: Attempting async line dialer script
*Jul 16 01:27:26.047: CHAT0/0/0: Dialing using Modem script: ipwan & System script: none
*Jul 16 01:27:26.051: CHAT0/0/0: process started
*Jul 16 01:27:26.051: CHAT0/0/0: Asserting DTR
*Jul 16 01:27:26.051: CHAT0/0/0: Chat script ipwan started
*Jul 16 01:27:26.051: CHAT0/0/0: Sending string: ATDT*98*4#
*Jul 16 01:27:26.051: CHAT0/0/0: Expecting string: CONNECT
*Jul 16 01:27:26.095: CHAT0/0/0: Completed match for expect: CONNECT
*Jul 16 01:27:26.095: CHAT0/0/0: Chat script ipwan finished, status = Success.
*Jul 16 01:27:28.227: %LINK-3-UPDOWN: Interface Cellular0/0/0, changed state to up
*Jul 16 01:27:28.227: Ce0/0/0 PPP: Using dialer call direction
*Jul 16 01:27:28.227: Ce0/0/0 PPP: Treating connection as a callout
*Jul 16 01:27:28.227: Ce0/0/0 PPP: Session handle[2D00000E] Session id[5]
*Jul 16 01:27:28.227: Ce0/0/0 PPP: Phase is ESTABLISHING, Active Open
*Jul 16 01:27:28.227: Ce0/0/0 PPP: No remote authentication for call-out
*Jul 16 01:27:28.227: Ce0/0/0 LCP: O CONFREQ [Closed] id 9 len 20
*Jul 16 01:27:28.227: Ce0/0/0 LCP: ACCM 0x000A0000 (0x0206000A0000)
*Jul 16 01:27:28.227: Ce0/0/0 LCP: MagicNumber 0x1F5A1582 (0x05061F5A1582)
*Jul 16 01:27:28.227: Ce0/0/0 LCP: PFC (0x0702)
*Jul 16 01:27:28.227: Ce0/0/0 LCP: ACFC (0x0802)
*Jul 16 01:27:28.231: Ce0/0/0 LCP: I CONFREQ [REQsent] id 8 len 25
*Jul 16 01:27:28.231: Ce0/0/0 LCP: ACCM 0x00000000 (0x020600000000)
*Jul 16 01:27:28.231: Ce0/0/0 LCP: AuthProto CHAP (0x0305C22305)
*Jul 16 01:27:28.231: Ce0/0/0 LCP: MagicNumber 0x9B6BDFE3 (0x05069B6BDFE3)
*Jul 16 01:27:28.231: Ce0/0/0 LCP: PFC (0x0702)
*Jul 16 01:27:28.231: Ce0/0/0 LCP: ACFC (0x0802)
*Jul 16 01:27:28.231: Ce0/0/0 LCP: O CONFACK [REQsent] id 8 len 25
*Jul 16 01:27:28.231: Ce0/0/0 LCP: ACCM 0x00000000 (0x020600000000)
*Jul 16 01:27:28.231: Ce0/0/0 LCP: AuthProto CHAP (0x0305C22305)
*Jul 16 01:27:28.231: Ce0/0/0 LCP: MagicNumber 0x9B6BDFE3 (0x05069B6BDFE3)
*Jul 16 01:27:28.231: Ce0/0/0 LCP: PFC (0x0702)
*Jul 16 01:27:28.231: Ce0/0/0 LCP: ACFC (0x0802)
*Jul 16 01:27:28.231: Ce0/0/0 LCP: I CONFACK [ACKsent] id 9 len 20
*Jul 16 01:27:28.231: Ce0/0/0 LCP: ACCM 0x000A0000 (0x0206000A0000)
*Jul 16 01:27:28.231: Ce0/0/0 LCP: MagicNumber 0x1F5A1582 (0x05061F5A1582)
*Jul 16 01:27:28.231: Ce0/0/0 LCP: PFC (0x0702)
*Jul 16 01:27:28.231: Ce0/0/0 LCP: ACFC (0x0802)
*Jul 16 01:27:28.231: Ce0/0/0 LCP: State is Open
*Jul 16 01:27:28.231: Ce0/0/0 PPP: Phase is AUTHENTICATING, by the peer
*Jul 16 01:27:28.235: Ce0/0/0 CHAP: I CHALLENGE id 1 len 35 from "UMTS_CHAP_SRVR"
*Jul 16 01:27:28.235: Ce0/0/0 CHAP: Using hostname from interface CHAP
*Jul 16 01:27:28.235: Ce0/0/0 CHAP: Using password from interface CHAP
*Jul 16 01:27:28.235: Ce0/0/0 CHAP: O RESPONSE id 1 len 42 from ""
*Jul 16 01:27:28.239: Ce0/0/0 CHAP: I SUCCESS id 1 len 4
*Jul 16 01:27:28.239: Ce0/0/0 PPP: Phase is FORWARDING, Attempting Forward
*Jul 16 01:27:28.239: Ce0/0/0 PPP: Phase is ESTABLISHING, Finish LCP
*Jul 16 01:27:28.239: Ce0/0/0 PPP: Phase is UP
*Jul 16 01:27:28.239: Ce0/0/0 IPCP: O CONFREQ [Closed] id 1 len 10
*Jul 16 01:27:28.239: Ce0/0/0 IPCP: Address (0x030600000000)
*Jul 16 01:27:28.239: Ce0/0/0 PPP: Process pending ncp packets.
*Jul 16 01:27:29.243: Ce0/0/0 IPCP: I CONFNAK [REQsent] id 1 len 16
*Jul 16 01:27:29.243: Ce0/0/0 IPCP: PrimaryDNS (0x81060A0B0C0D)
*Jul 16 01:27:29.243: Ce0/0/0 IPCP: SecondaryDNS (0x83060A0B0C0E)
*Jul 16 01:27:29.243: Ce0/0/0 IPCP: Ignoring unrequested options!
*Jul 16 01:27:29.243: Ce0/0/0 IPCP: O CONFREQ [REQsent] id 2 len 10
*Jul 16 01:27:29.243: Ce0/0/0 IPCP: Address (0x030600000000)
*Jul 16 01:27:30.247: Ce0/0/0 IPCP: I CONFNAK [REQsent] id 2 len 16
*Jul 16 01:27:30.247: Ce0/0/0 IPCP: PrimaryDNS (0x81060A0B0C0D)
*Jul 16 01:27:30.247: Ce0/0/0 IPCP: SecondaryDNS (0x83060A0B0C0E)
*Jul 16 01:27:30.247: Ce0/0/0 IPCP: Ignoring unrequested options!
*Jul 16 01:27:30.247: Ce0/0/0 IPCP: O CONFREQ [REQsent] id 3 len 10
*Jul 16 01:27:30.251: Ce0/0/0 IPCP: Address (0x030600000000)
*Jul 16 01:27:31.255: Ce0/0/0 IPCP: I CONFNAK [REQsent] id 3 len 16
*Jul 16 01:27:31.255: Ce0/0/0 IPCP: PrimaryDNS (0x81060A0B0C0D)
*Jul 16 01:27:31.255: Ce0/0/0 IPCP: SecondaryDNS (0x83060A0B0C0E)
*Jul 16 01:27:31.255: Ce0/0/0 IPCP: Ignoring unrequested options!
*Jul 16 01:27:31.255: Ce0/0/0 IPCP: O CONFREQ [REQsent] id 4 len 10
*Jul 16 01:27:31.255: Ce0/0/0 IPCP: Address (0x030600000000)
*Jul 16 01:27:31.399: Ce0/0/0 IPCP: I CONFREQ [REQsent] id 4 len 4
*Jul 16 01:27:31.399: Ce0/0/0 IPCP: O CONFACK [REQsent] id 4 len 4
*Jul 16 01:27:31.399: Ce0/0/0 IPCP: I CONFNAK [ACKsent] id 4 len 10.
*Jul 16 01:27:31.399: Ce0/0/0 IPCP: Address (0x03060A07003E)
*Jul 16 01:27:31.399: Ce0/0/0 IPCP: O CONFREQ [ACKsent] id 5 len 10
*Jul 16 01:27:31.399: Ce0/0/0 IPCP: Address (0x03060A07003E)
*Jul 16 01:27:31.403: Ce0/0/0 IPCP: I CONFACK [ACKsent] id 5 len 10
*Jul 16 01:27:31.403: Ce0/0/0 IPCP: Address (0x03060A07003E)
*Jul 16 01:27:31.403: Ce0/0/0 IPCP: State is Open
*Jul 16 01:27:31.403: Ce0/0/0 IPCP: Install negotiated IP interface address
Success rate is 0 percent (0/5)

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to, timeout is 2 seconds:
Success rate is 0 percent (0/5)

! wait a bit


Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to, timeout is 2 seconds:
Success rate is 100 percent (5/5), round-trip min/avg/max = 304/337/348 ms

! wait for timeout

*Jul 16 01:29:47.515: Ce0/0/0 PPP: Sending Acct Event[Down] id[13]
*Jul 16 01:29:47.515: Ce0/0/0 IPCP: State is Closed
*Jul 16 01:29:47.515: Ce0/0/0 PPP: Phase is TERMINATING
*Jul 16 01:29:47.515: Ce0/0/0 LCP: O TERMREQ [Open] id 10 len 4
*Jul 16 01:29:47.527: Ce0/0/0 LCP: I TERMACK [TERMsent] id 10 len 4
*Jul 16 01:29:47.527: Ce0/0/0 LCP: State is Closed
*Jul 16 01:29:47.527: Ce0/0/0 PPP: Phase is DOWN
*Jul 16 01:29:49.527: %LINK-5-CHANGED: Interface Cellular0/0/0, changed state to reset
*Jul 16 01:29:54.659: %LINK-3-UPDOWN: Interface Cellular0/0/0, changed state to down
b) Cellular status
idle, no active call:
- the active state is the same, except:
– Profile Information:
Profile 4 = ACTIVE
PDP address = (note this is not the cell host address, rather the network address)
– Network Information: Packet Session Status = Active
Packet Service = HSDPA (Attached)
Packet Session Status = Active
Router#sh cell 0/0/0 all
Hardware Information
Modem Firmware Version = H1_1_8_3MCAP C:/WS/
Modem Firmware built = 03/08/07
Hardware Version = 1.0
International Mobile Subscriber Identity (IMSI) = 505023435470642
International Mobile Equipment Identity (IMEI) = 352678013222925
Factory Serial Number (FSN) = D28289730191031
Modem Status = Online
Current Modem Temperature = 33 deg C, State = Normal

Profile Information
Profile 4 = INACTIVE
PDP Type = IPv4
Access Point Name (APN) = telstra.corp
Authentication = None
Username: , Password:

 * – Default profile

Data Connection Information
Data Transmitted = 7821 bytes, Received = 15546 bytes
Profile 1, Packet Session Status = INACTIVE
Inactivity Reason = Normal inactivate state
Profile 2, Packet Session Status = INACTIVE
Inactivity Reason = Normal inactivate state
Profile 3, Packet Session Status = INACTIVE
Inactivity Reason = Normal inactivate state
Profile 4, Packet Session Status = INACTIVE
Inactivity Reason = Unknown
Profile 5, Packet Session Status = INACTIVE
Inactivity Reason = Normal inactivate state
Profile 16, Packet Session Status = INACTIVE
Inactivity Reason = Normal inactivate state

Network Information
Current Service Status = Normal, Service Error = None
Current Service = Combined
Packet Service = UMTS/WCDMA (Attached)
Packet Session Status = Inactive
Current Roaming Status = Home
Network Selection Mode = Automatic
Country = AUS, Network = Telstra
Mobile Country Code (MCC) = 505
Mobile Network Code (MNC) = 1
Location Area Code (LAC) = 336
Routing Area Code (RAC) = 1
Cell ID = 9261
Primary Scrambling Code = 201
PLMN Selection = Automatic
Registered PLMN = , Abbreviated =
Service Provider = Telstra

Radio Information
Current Band = WCDMA 850, Channel Number = 4436
Current RSSI(RSCP) = -62 dBm
Band Selected = WCDMA V 850

Modem Security Information
Card Holder Verification (CHV1) = Disabled
SIM Status = OK
SIM User Operation Required = None
Number of Retries remaining = 3
5. Troubleshooting
debug chat
debug ppp negotiation
debug ppp error
a) a new SIM doesn't work:


Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to, timeout is 2 seconds:

*Jul 16 03:47:22.611: CHAT0/0/0: Attempting async line dialer script
*Jul 16 03:47:22.611: CHAT0/0/0: Dialing using Modem script: ipwan & System script: none
*Jul 16 03:47:22.655: CHAT0/0/0: Chat script ipwan finished, status = Success.
*Jul 16 03:47:24.791: %LINK-3-UPDOWN: Interface Cellular0/0/0, changed state to up
*Jul 16 03:47:24.791: Ce0/0/0 PPP: Using dialer call direction
*Jul 16 03:47:24.795: Ce0/0/0 LCP: State is Open
*Jul 16 03:47:24.795: Ce0/0/0 PPP: Phase is AUTHENTICATING, by the peer
*Jul 16 03:47:24.799: Ce0/0/0 CHAP: I CHALLENGE id 1 len 35 from "UMTS_CHAP_SRVR"
*Jul 16 03:47:24.799: Ce0/0/0 CHAP: Using hostname from interface CHAP
*Jul 16 03:47:24.799: Ce0/0/0 CHAP: Using password from interface CHAP
*Jul 16 03:47:24.799: Ce0/0/0 CHAP: O RESPONSE id 1 len 42 from ""
*Jul 16 03:47:24.803: Ce0/0/0 CHAP: I SUCCESS id 1 len 4
*Jul 16 03:47:24.803: Ce0/0/0 PPP: Phase is FORWARDING, Attempting Forward
*Jul 16 03:47:24.803: Ce0/0/0 PPP: Phase is ESTABLISHING, Finish LCP
*Jul 16 03:47:24.803: Ce0/0/0 PPP: Phase is UP
*Jul 16 03:47:24.803: Ce0/0/0 IPCP: O CONFREQ [Closed] id 1 len 10
*Jul 16 03:47:24.803: Ce0/0/0 IPCP: Address (0x030600000000)
*Jul 16 03:47:24.803: Ce0/0/0 PPP: Process pending ncp packets.
*Jul 16 03:47:25.807: Ce0/0/0 IPCP: I CONFNAK [REQsent] id 1 len 16
*Jul 16 03:47:25.807: Ce0/0/0 IPCP: PrimaryDNS (0x81060A0B0C0D)
*Jul 16 03:47:25.807: Ce0/0/0 IPCP: SecondaryDNS (0x83060A0B0C0E)
*Jul 16 03:47:25.807: Ce0/0/0 IPCP: Ignoring unrequested options!
*Jul 16 03:47:25.807: Ce0/0/0 IPCP: O CONFREQ [REQsent] id 2 len 10
*Jul 16 03:47:25.807: Ce0/0/0 IPCP: Address (0x030600000000)..
*Jul 16 03:47:27.803: Ce0/0/0 IPCP: Timeout: State REQsent
*Jul 16 03:47:27.803: Ce0/0/0 IPCP: O CONFREQ [REQsent] id 3 len 10
*Jul 16 03:47:27.803: Ce0/0/0 IPCP: Address (0x030600000000)
*Jul 16 03:47:28.151: Ce0/0/0 PPP: Sending Acct Event[Down] id[5]
*Jul 16 03:47:28.151: Ce0/0/0 IPCP: State is Closed
- look up the SIM, omitting check digit (2nd last) but including final digit:

Router#telnet 2002
Trying, 2002 ... Open
!ICCID: 89610155543235000034


Closing connection to [confirm]
- SIM from the above is "5554 3235 4"
- call Telstra, request they add the "VPN codes"
Example of a Cisco 1841 using an external 3G router 
- Using a Cisco 1841 router to establish the PPPoE session to the IPWAN
- A CDR-780seu cellular router with PPPoE client disabled is connected to Fa0/1

interface FastEthernet0/1
ip address
ip tcp adjust-mss 1420
duplex auto
speed auto
pppoe enable
pppoe-client dial-pool-number 1

interface Dialer1
mtu 1452
ip address negotiated
ip nat outside
encapsulation ppp
dialer pool 1
dialer-group 1
ppp authentication chap callin
ppp chap hostname username
ppp chap password 0 sekritpw

- The PPPoE connection should now be working, with an IP address assigned (via IPCP) by the IPWAN

#show ip int brief
Interface IP-Address OK? Method Status Protocol
FastEthernet0/1 YES NVRAM up up
NVI0 unassigned YES unset up up
Virtual-Access1 unassigned YES unset up up
Dialer1 YES IPCP up up 

- To run EIGRP over the Layer 3 network, we need to set up a GRE tunnel between the endpoint routers:

interface Tunnel0
ip address
no ip mroute-cache
keepalive 10 3
tunnel source Dialer1
tunnel destination

- On the remote router, create the other GRE tunnel endpoint (change serial interface as required):

interface Tunnel0
ip address
no ip mroute-cache
keepalive 10 3
tunnel source Serial0/0/0.16
tunnel destination

- The keepalive command will bring the tunnel down if 3 keepalive packets are lost (sending one every 10 seconds).
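In other words, the worst-case detection time is simply interval × retries; a one-liner to make the arithmetic explicit:

```python
def gre_down_detect_seconds(interval_s: int, retries: int) -> int:
    """Worst-case time for 'keepalive <interval> <retries>' to
    declare a GRE tunnel down."""
    return interval_s * retries

assert gre_down_detect_seconds(10, 3) == 30   # 'keepalive 10 3' as above
```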

Saturday, August 24, 2013

Git vs Mercurial vs SVN


[1] I use git on personal projects to sort of "collaborate with myself." I have repositories on a linux box on my home network that's accessible via a tunnel from anywhere. I then clone it to my home desktop, my laptop, maybe a machine at work, and I can see it or work on it anywhere I go. I can commit changes, get the latest, and have backups in various places. The ease and speed with which git lets you switch branches is very nice. Found a bug? Switch to 'master', fix it, commit, push, then switch back to what you were doing. Easier and faster than cvs or subversion.
Also, I use git a lot for small directories that aren't even projects. The config directory for the apache server hosting my web site is git'd, and likewise the tomcat config directory for the same web site.
I use it at work for everything, even though at work we're on CVS moving to Subversion. I don't use git-cvs or git-svn, I just use git alongside either product, and keep my branches local. Very handy to be able to switch to another developer's latest commit, check something, then switch back.
Then, of course, there's bisect, which can be a huge help, for work or home projects.
Also, if at work they're still using punch cards, cvs, or subversion, then using git at home is a great way to stay current, and find out for yourself the impact it can have.
I don't get excited about technologies unless they bring something genuinely new to the table. Git does. I'm a fan. You probably figured that out already.

Mercurial Version Control Tool


Mercurial is a cross-platform distributed revision control tool for software developers. It is mainly implemented using the Python programming language, but includes a binary diff implementation written in C. It is supported on Windows and Unix-like systems such as FreeBSD, Mac OS X and Linux. Mercurial is primarily a command line program, but graphical user interface extensions are available. All of Mercurial's operations are invoked as arguments to its driver program hg, a reference to the chemical symbol of the element mercury.
Mercurial's major design goals include high performance and scalability, decentralized, fully distributed collaborative development, robust handling of both plain text and binary files, and advanced branching and merging capabilities, while remaining conceptually simple.[3] It includes an integrated web interface. Mercurial has also taken steps to ease the transition for SVN users.
The creator and lead developer of Mercurial is Matt Mackall. Mercurial is released as free software under the terms of the GNU GPL v2 (or any later version[4]).

Friday, August 23, 2013




By Erik Rodriguez
This article describes how TCP and UDP work, the difference between the two, and why you would choose one over the other. 


TCP (Transmission Control Protocol) is the most commonly used protocol on the Internet. The reason for this is that TCP offers error correction: when TCP is used, there is a "guaranteed delivery" of the data. This is achieved through acknowledgments and retransmission, often loosely described as "flow control": the receiver acknowledges the data it receives, and flow control determines when data needs to be re-sent, pausing the flow of new data until previous packets are successfully transferred. If a packet is lost or corrupted along the way, the sender retransmits it until the receiver has a complete copy identical to the original.

UDP (User Datagram Protocol) is another commonly used protocol on the Internet. However, UDP is rarely used to send data that must arrive intact, such as web pages or database records; it is commonly used for streaming audio and video. Streaming media formats such as Windows Media audio (.WMA) and Real Player (.RM) use UDP because it offers speed. The reason UDP is faster than TCP is that there is no flow control or error correction: packets that are lost or damaged in transit are simply dropped rather than retransmitted, so errors will be present. Remember that UDP trades reliability for speed, which is one reason streaming media is not always high quality.
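The contrast shows up directly in the sockets API. A minimal sketch in Python over loopback (ports are chosen by the OS; this illustrates the API difference, not wire-level reliability):

```python
import socket

# UDP: connectionless -- one datagram out, one datagram in, no handshake.
udp_srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_srv.bind(("127.0.0.1", 0))            # port 0 = let the OS pick one
udp_cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_cli.sendto(b"hello", udp_srv.getsockname())   # no delivery guarantee
data, _ = udp_srv.recvfrom(1024)
print(data)                               # b'hello' (if the datagram survived)

# TCP: connection-oriented -- connect() performs the three-way handshake
# before any data moves; delivery is acknowledged and retransmitted.
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))
tcp_srv.listen(1)
tcp_cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_cli.connect(tcp_srv.getsockname())
conn, _ = tcp_srv.accept()
tcp_cli.sendall(b"hello")
stream = conn.recv(1024)
print(stream)                             # b'hello', guaranteed in order
for s in (udp_srv, udp_cli, tcp_srv, tcp_cli, conn):
    s.close()
```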

On the other hand, UDP has also been put to less savoury uses. Hackers develop scripts and trojans that run over UDP in order to mask their activities, and UDP packets are used in DoS (Denial of Service) attacks. It is important to know the difference between TCP port 80 and UDP port 80.

Frame Structure

As data moves down the protocol stack, headers are added to it to create a frame. This process is called encapsulation. There are different methods of encapsulation depending on which protocol and topology are being used, so the frame structures of the resulting packets differ as well. The images below show both the TCP and UDP frame structures.



The payload field contains the actual data. Notice that TCP has a more complex frame structure, largely because TCP is a connection-oriented protocol; the extra fields are needed to provide the "guaranteed delivery" offered by TCP.
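As a small illustration of encapsulation, the UDP header really is just four 16-bit fields prepended to the payload (the port values here are arbitrary; a zero checksum is legal for UDP over IPv4):

```python
import struct

# Build an 8-byte UDP header and prepend it to a payload.
src_port, dst_port, payload = 5000, 53, b"example"
length = 8 + len(payload)              # header + payload, in octets
checksum = 0                           # 0 = "no checksum", legal for UDP/IPv4
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
datagram = header + payload
print(len(datagram))                   # 15 (TCP's header alone is 20+ bytes)
```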



TCP and UDP

The TCP and UDP protocols are two different protocols that handle data communications between terminals in an IP network (the Internet). This page will discuss what TCP and UDP are, and what the differences are between them.
In the OSI model, TCP and UDP are "Transport Layer" Protocols.

Connection-Oriented vs Connectionless



After going through the various layers of the model, it is time to have a look at the TCP protocol and study its functionality. This section introduces the concepts and characteristics of TCP, then gradually moves into details such as connection establishment and closing, communication in TCP, and why TCP is called a reliable as well as an adaptive protocol. The section ends with a comparison between UDP and TCP, followed by an exercise that encourages readers to solve more problems.
The information in this section has been drawn from varied sources, including the TCP guide, the RFCs, Tanenbaum's book, and class notes.
What is TCP ?
In theory, a transport layer protocol could be a very simple software routine, but TCP cannot be called simple. Why use a transport layer as complex as TCP? The most important reason is IP's unreliability. All the layers below TCP are unreliable and deliver datagrams hop-by-hop; the IP layer delivers the datagram hop-by-hop and does not guarantee delivery, since it is a connectionless system. IP simply handles the routing of datagrams, and if problems occur, IP discards the packet without a second thought, at most generating an error message back to the sender. The task of ascertaining the status of the datagrams sent over a network, and of resending information if parts have been discarded, falls to TCP.
Most users think of TCP and IP as a tightly knit pair, but TCP is also the transport beneath many application protocols.
For example, the File Transfer Protocol (FTP) and the Simple Mail Transfer Protocol (SMTP) both run over TCP rather than using IP directly.
The Transmission Control Protocol provides a considerable number of services to the IP layer and the upper layers. Most importantly, it provides a connection-oriented protocol to the upper layers that enable an application to be sure that a datagram sent out over the network was received in its entirety. In this role, TCP acts as a message-validation protocol providing reliable communications. If a datagram is corrupted or lost, it is usually TCP (not the applications in the higher layers) that handles the retransmission.
TCP is not a piece of software. It is a communications protocol.
TCP manages the flow of datagrams from the higher layers, as well as incoming datagrams from the IP layer. It has to ensure that priorities and security are respected. TCP must be capable of handling the termination of an application above it that was expecting incoming datagrams, as well as failures in the lower layers. TCP also must maintain a state table of all data streams in and out of the TCP layer. The isolation of these services in a separate layer enables applications to be designed without regard to flow control or message reliability. Without the TCP layer, each application would have to implement the services itself, which is a waste of resources.
TCP resides in the transport layer, positioned above IP but below the upper layers and their applications, as shown in Figure below. TCP resides only on devices that actually process datagrams, ensuring that the datagram has gone from the source to target machines. It does not reside on a device that simply routes datagrams, so there is no TCP layer in a gateway. This makes sense, because on a gateway the datagram has no need to go higher in the layered model than the IP layer.

Figure 1: TCP providing reliable end-to-end communication

Because TCP is a connection-oriented protocol responsible for ensuring the transfer of a datagram from the source to destination machine (end-to-end communications), TCP must receive communications messages from the destination machine to acknowledge receipt of the datagram. The term virtual circuit is usually used to refer to the handshaking that goes on between the two end machines, most of which are simple acknowledgment messages (either confirmation of receipt or a failure code) and datagram sequence numbers. It is analogous to a telephone conversation; someone initiates it by ringing a number which is answered, a two-way conversation takes place, and finally someone ends the conversation. A socket pair identifies both ends of a connection, i.e. the virtual circuit. Recall that a socket consists of an IP address and a port number to identify the location. Servers listen on well-known port numbers (below 1024) for standardized services; numbers above 1024 are available for users to use freely. Port numbers for some of the standard services are given in the table below.

Figure 2: Port numbers of some standard services
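The table image itself has not survived; for reference, a few of the standard well-known assignments (standard IANA values) can be written down as a sketch:

```python
# A few well-known port assignments of the kind the table above lists.
WELL_KNOWN = {"ftp": 21, "ssh": 22, "telnet": 23, "smtp": 25,
              "dns": 53, "http": 80, "pop3": 110, "https": 443}

# Servers listen on ports below 1024; higher numbers are free for users.
assert all(port < 1024 for port in WELL_KNOWN.values())
print(WELL_KNOWN["smtp"])   # 25
```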

Byte stream or Message Stream?
Well, the message boundaries are not preserved end to end in TCP. For example, if the sending process does four 512-byte writes to a TCP stream, these data may be delivered to the receiving process as four 512-byte chunks, two 1024-byte chunks, one 2048-byte chunk, or some other way. There is no way for the receiver to detect the unit(s) in which the data were written. A TCP entity accepts user data streams from local processes, breaks them up into pieces not exceeding 64 KB (in practice, often 1460 data bytes in order to fit in a single Ethernet frame with the IP and TCP headers), and sends each piece as a separate IP datagram. When datagrams containing TCP data arrive at a machine, they are given to the TCP entity, which reconstructs the original byte streams. For simplicity, we will sometimes use just TCP to mean the TCP transport entity (a piece of software) or the TCP protocol (a set of rules); from the context it will be clear which is meant. For example, in "The user gives TCP the data," the TCP transport entity is clearly intended. The IP layer gives no guarantee that datagrams will be delivered properly, so it is up to TCP to time out and retransmit them as need be. Datagrams that do arrive may well do so in the wrong order; it is also up to TCP to reassemble them into messages in the proper sequence. In short, TCP must furnish the reliability that most users want and that IP does not provide.
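The loss of write boundaries can be demonstrated in a few lines of Python. `socketpair` gives a connected stream pair rather than a real TCP connection, but the stream semantics are the same: bytes and order are preserved, write boundaries are not.

```python
import socket

# Four separate 512-byte writes on one end of a connected stream pair.
a, b = socket.socketpair()
for _ in range(4):
    a.sendall(b"x" * 512)
a.close()

# The reader sees one undifferentiated byte stream; the chunk sizes
# returned by recv() are OS-dependent and unrelated to the writes.
received = b""
while True:
    chunk = b.recv(4096)
    if not chunk:
        break
    received += chunk
b.close()
print(len(received))    # 2048 bytes total, write boundaries gone
```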

Characteristics of TCP
TCP provides a communication channel between processes on each host system. The channel is reliable, full-duplex, and streaming. To achieve this functionality, the TCP drivers break up the session data stream into discrete segments, and attach a TCP header to each segment. An IP header is attached to this TCP packet, and the composite packet is then passed to the network for delivery. This TCP header has numerous fields that are used to support the intended TCP functionality. TCP has the following functional characteristics:
Unicast protocol : TCP is based on a unicast network model, and supports data exchange between precisely two parties. It does not support broadcast or multicast network models.
Connection state : Rather than impose a state within the network to support the connection, TCP uses synchronized state between the two endpoints. This synchronized state is set up as part of an initial connection process, so TCP can be regarded as a connection-oriented protocol. Much of the protocol design is intended to ensure that each local state transition is communicated to, and acknowledged by, the remote party.

Reliable : Reliability implies that the stream of octets passed to the TCP driver at one end of the connection will be transmitted across the network so that the stream is presented to the remote process as the same sequence of octets, in the same order as that generated by the sender. This implies that the protocol detects when segments of the data stream have been discarded by the network, reordered, duplicated, or corrupted. Where necessary, the sender will retransmit damaged segments so as to allow the receiver to reconstruct the original data stream. This implies that a TCP sender must maintain a local copy of all transmitted data until it receives an indication that the receiver has completed an accurate transfer of the data.

Full duplex : TCP is a full-duplex protocol; it allows both parties to send and receive data within the context of the single TCP connection.

Streaming : Although TCP uses a packet structure for network transmission, TCP is a true streaming protocol, and application-level network operations are not transparent. Some protocols explicitly encapsulate each application transaction; for every write, there must be a matching read. In this manner, the application-derived segmentation of the data stream into a logical record structure is preserved across the network. TCP does not preserve such an implicit structure imposed on the data stream, so that there is no pairing between write and read operations within the network protocol. For example, a TCP application may write three data blocks in sequence into the network connection, which may be collected by the remote reader in a single read operation. The size of the data blocks (segments) used in a TCP session is negotiated at the start of the session. The sender attempts to use the largest segment size it can for the data transfer, within the constraints of the maximum segment size of the receiver, the maximum segment size of the configured sender, and the maximum supportable non-fragmented packet size of the network path (path Maximum Transmission Unit [MTU]). The path MTU is refreshed periodically to adjust to any changes that may occur within the network while the TCP connection is active.

Rate adaptation : TCP is also a rate-adaptive protocol, in that the rate of data transfer is intended to adapt to the prevailing load conditions within the network and adapt to the processing capacity of the receiver. There is no predetermined TCP data-transfer rate; if the network and the receiver both have additional available capacity, a TCP sender will attempt to inject more data into the network to take up this available space. Conversely, if there is congestion, a TCP sender will reduce its sending rate to allow the network to recover. This adaptation function attempts to achieve the highest possible data-transfer rate without triggering consistent data loss.

TCP Header structure

TCP segments are sent as Internet datagrams. The Internet Protocol header carries several information fields, including the source and destination host addresses. A TCP header follows the Internet header, supplying information specific to the TCP protocol. This division allows for the existence of host level protocols other than TCP.
    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |          Source Port          |       Destination Port        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                        Sequence Number                        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                    Acknowledgment Number                      |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |  Data |           |U|A|P|R|S|F|                               |
   | Offset| Reserved  |R|C|S|S|Y|I|            Window             |
   |       |           |G|K|H|T|N|N|                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |           Checksum            |         Urgent Pointer        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                    Options                    |    Padding    |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                             data                              |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

                         TCP Header Format

       Note that one tick mark represents one bit position.
Source Port: 16 bits The source port number.
Destination Port: 16 bits The destination port number.
Sequence Number: 32 bits The sequence number of the first data octet in this segment (except when SYN is present). If SYN is present the sequence number is the initial sequence number (ISN) and the first data octet is ISN+1.
Acknowledgment Number: 32 bits If the ACK control bit is set this field contains the value of the next sequence number the sender of the segment is expecting to receive. Once a connection is established this is always sent.

Data Offset: 4 bits The number of 32 bit words in the TCP Header. This indicates where the data begins. The TCP header (even one including options) is an integral number of 32 bits long.

Reserved: 6 bits Reserved for future use. Must be zero.

Control Bits: 6 bits (from left to right):
URG: Urgent Pointer field significant
ACK: Acknowledgment field significant
PSH: Push Function
RST: Reset the connection
SYN: Synchronize sequence numbers
FIN: No more data from sender
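As a sketch, the six control bits can be pulled out of a raw header with the standard struct module. The SYN+ACK test header below is fabricated purely for illustration:

```python
import struct

# Flag names ordered by bit position: bit 0 = FIN ... bit 5 = URG.
FLAGS = ("FIN", "SYN", "RST", "PSH", "ACK", "URG")

def tcp_flags(header: bytes):
    # The six control bits live in the low bits of byte 13 of the header.
    bits = header[13]
    return [name for i, name in enumerate(FLAGS) if bits & (1 << i)]

# Fabricated SYN+ACK header: src 80, dst 54321, seq/ack 0, data offset 5
# (shifted into the high nibble), flags 0b010010 = SYN|ACK, window 65535.
hdr = struct.pack("!HHIIBBHHH", 80, 54321, 0, 0, 5 << 4, 0b010010, 65535, 0, 0)
print(tcp_flags(hdr))    # ['SYN', 'ACK']
```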

Window: 16 bits The number of data octets beginning with the one indicated in the acknowledgment field which the sender of this segment is willing to accept.

Checksum: 16 bits The checksum field is the 16 bit one's complement of the one's complement sum of all 16 bit words in the header and text. If a segment contains an odd number of header and text octets to be checksummed, the last octet is padded on the right with zeros to form a 16 bit word for checksum purposes. The pad is not transmitted as part of the segment. While computing the checksum, the checksum field itself is replaced with zeros.
The checksum also covers a 96 bit pseudo header conceptually prefixed to the TCP header. This pseudo header contains the Source Address, the Destination Address, the Protocol, and TCP length. This gives the TCP protection against misrouted segments. This information is carried in the Internet Protocol and is transferred across the TCP/Network interface in the arguments or results of calls by the TCP on the IP.
The TCP Length is the TCP header length plus the data length in octets (this is not an explicitly transmitted quantity, but is computed), and it does not count the 12 octets of the pseudo header.
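The checksum computation described above can be sketched directly. This is a minimal IPv4 implementation; the addresses and segment below are made up for the demonstration:

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    # Pad odd-length input with a zero octet, sum the 16-bit words,
    # then fold any carries back in (one's complement arithmetic).
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return total

def tcp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    # 96-bit pseudo-header: source, destination, zero, protocol 6, TCP length.
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(segment))
    return 0xFFFF - ones_complement_sum(pseudo + segment)

# A 20-byte segment with its checksum field (bytes 16-17) still zero:
seg = bytearray(struct.pack("!HHIIBBHHH", 80, 54321, 1, 1, 5 << 4, 0x10, 8192, 0, 0))
csum = tcp_checksum(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", bytes(seg))
struct.pack_into("!H", seg, 16, csum)

# Verification property: the one's complement sum over pseudo-header plus
# segment, with the checksum filled in, comes out as 0xFFFF.
pseudo = b"\x0a\x00\x00\x01\x0a\x00\x00\x02" + struct.pack("!BBH", 0, 6, len(seg))
print(hex(ones_complement_sum(pseudo + bytes(seg))))   # 0xffff
```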

Urgent Pointer: 16 bits This field communicates the current value of the urgent pointer as a positive offset from the sequence number in this segment. The urgent pointer points to the sequence number of the octet following the urgent data. This field is only interpreted in segments with the URG control bit set.

Options: variable Options may occupy space at the end of the TCP header and are a multiple of 8 bits in length. All options are included in the checksum. An option may begin on any octet boundary. There are two cases for the format of an option:
Case 1: A single octet of option-kind.
Case 2: An octet of option-kind, an octet of option-length, and the actual option-data octets. The option-length counts the two octets of option-kind and option-length as well as the option-data octets. Note that the list of options may be shorter than the data offset field might imply. The content of the header beyond the End-of-Option option must be header padding (i.e., zero).

A TCP must implement all options.

Ethereal Capture
The TCP packet can be viewed using an Ethereal capture. One such TCP packet is captured and shown below; note that the ACK flag and PSH flag are set to '1' in it.

Communication in TCP

Before TCP can be employed for any actually useful purpose—that is, sending data—a connection must be set up between the two devices that wish to communicate. This process, usually called connection establishment, involves an exchange of messages that transitions both devices from their initial connection state (CLOSED) to the normal operating state (ESTABLISHED).

Connection Establishment Functions

The connection establishment process actually accomplishes several things as it creates a connection suitable for data exchange:
Contact and Communication: The client and server make contact with each other and establish communication by sending each other messages. The server usually doesn’t even know what client it will be talking to before this point, so it discovers this during connection establishment.
Sequence Number Synchronization: Each device lets the other know what initial sequence number it wants to use for its first transmission.
Parameter Exchange: Certain parameters that control the operation of the TCP connection are exchanged by the two devices.
Control Messages Used for Connection Establishment: SYN and ACK
TCP uses control messages to manage the process of contact and communication. There aren't, however, any special TCP control message types; all TCP messages use the same segment format. A set of control flags in the TCP header indicates whether a segment is being used for control purposes or just to carry data. The following flags are used in control messages.
SYN: This bit indicates that the segment is being used to initialize a connection. SYN stands for synchronize, in reference to the sequence number synchronization I mentioned above.
ACK: This bit indicates that the device sending the segment is conveying an acknowledgment for a message it has received (such as a SYN).

Normal Connection Establishment: The "Three Way Handshake"

To establish a connection, each device must send a SYN and receive an ACK for it from the other device. Thus, conceptually, four control messages need to be passed between the devices. However, it's inefficient to send a SYN and an ACK in separate messages when one could communicate both simultaneously. Thus, in the normal sequence of events in connection establishment, one of the SYNs and one of the ACKs is sent together by setting both of the relevant bits (a message sometimes called a SYN+ACK). This makes a total of three messages, and for this reason the connection procedure is called a three-way handshake.
Key Concept:  The normal process of establishing a connection between a TCP client and server involves three steps: 
the client sends a SYN message; the server sends message that combines an ACK for the client’s SYN and contains the server’s SYN; and then the client sends an ACK for the server’s SYN. This is called the TCP three-way handshake.
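The exchange can be sketched as a toy simulation. The message tuples and helper name below are illustrative only, not a real TCP implementation:

```python
import random

def three_way_handshake():
    # Each side picks its own initial sequence number (ISN).
    client_isn = random.randrange(2**32)
    server_isn = random.randrange(2**32)
    msgs = []
    # 1. client -> server: SYN carrying the client's ISN
    msgs.append(("SYN", client_isn, None))
    # 2. server -> client: SYN+ACK (server's ISN, ACK = client ISN + 1)
    msgs.append(("SYN+ACK", server_isn, client_isn + 1))
    # 3. client -> server: ACK = server ISN + 1
    msgs.append(("ACK", None, server_isn + 1))
    return msgs, "ESTABLISHED"

msgs, state = three_way_handshake()
print([m[0] for m in msgs], state)   # ['SYN', 'SYN+ACK', 'ACK'] ESTABLISHED
```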
A connection progresses through a series of states during its lifetime.
The states are: LISTEN, SYN-SENT, SYN-RECEIVED, ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, CLOSE-WAIT, CLOSING, LAST-ACK, TIME-WAIT, and the fictional state CLOSED. CLOSED is fictional because it represents the state when there is no TCB, and therefore, no connection. Briefly the meanings of the states are:
LISTEN - represents waiting for a connection request from any remote TCP and port.
SYN-SENT - represents waiting for a matching connection request after having sent a connection request.
SYN-RECEIVED - represents waiting for a confirming connection request acknowledgment after having both received and sent a connection request.
ESTABLISHED - represents an open connection, data received can be delivered to the user. The normal state for the data transfer phase of the connection.
FIN-WAIT-1 - represents waiting for a connection termination request from the remote TCP, or an acknowledgment of the connection termination request previously sent.
FIN-WAIT-2 - represents waiting for a connection termination request from the remote TCP.
CLOSE-WAIT - represents waiting for a connection termination request from the local user.
CLOSING - represents waiting for a connection termination request acknowledgment from the remote TCP.
LAST-ACK - represents waiting for an acknowledgment of the connection termination request previously sent to the remote TCP (which includes an acknowledgment of its connection termination request).
TIME-WAIT - represents waiting for enough time to pass to be sure the remote TCP received the acknowledgment of its connection termination request.
CLOSED - represents no connection state at all.
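The transitions between these states can be sketched as a lookup table. This is a subset of the RFC 793 state diagram, and the event names (of the form event/action) are shorthand of my own:

```python
# Partial TCP state transition table; keys are (state, event/action).
TRANSITIONS = {
    ("CLOSED", "active_open/snd_syn"): "SYN-SENT",
    ("CLOSED", "passive_open"): "LISTEN",
    ("LISTEN", "rcv_syn/snd_syn_ack"): "SYN-RECEIVED",
    ("SYN-SENT", "rcv_syn_ack/snd_ack"): "ESTABLISHED",
    ("SYN-RECEIVED", "rcv_ack"): "ESTABLISHED",
    ("ESTABLISHED", "close/snd_fin"): "FIN-WAIT-1",
    ("ESTABLISHED", "rcv_fin/snd_ack"): "CLOSE-WAIT",
    ("FIN-WAIT-1", "rcv_ack"): "FIN-WAIT-2",
    ("FIN-WAIT-2", "rcv_fin/snd_ack"): "TIME-WAIT",
    ("CLOSE-WAIT", "close/snd_fin"): "LAST-ACK",
    ("LAST-ACK", "rcv_ack"): "CLOSED",
}

def walk(start, events):
    state = start
    for ev in events:
        state = TRANSITIONS[(state, ev)]
    return state

# An active open followed by an active close:
print(walk("CLOSED", ["active_open/snd_syn", "rcv_syn_ack/snd_ack",
                      "close/snd_fin", "rcv_ack", "rcv_fin/snd_ack"]))
# -> TIME-WAIT
```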
A TCP connection progresses from one state to another in response to events. The events are the user calls, OPEN, SEND, RECEIVE, CLOSE, ABORT, and STATUS; the incoming segments, particularly those containing the SYN, ACK, RST and FIN flags; and timeouts.
The state diagram in figure 6 illustrates only state changes, together with the causing events and resulting actions, but addresses neither error conditions nor actions which are not connected with state changes. In a later section, more detail is offered with respect to the reaction of the TCP to events.

Key Concept: If one device setting up a TCP connection sends a SYN and then receives a SYN from the other one before its SYN is acknowledged, the two devices perform a simultaneous open, which consists of the exchange of two independent SYN and ACK message sets. The end result is the same as the conventional three-way handshake, but the process of getting to the ESTABLISHED state is different. Such a collision normally occurs only in peer-to-peer connections.
Buffer management
When the sender (assume the client in our case) has a connection to establish, the packet comes to the transmission buffer. The packet should have a sequence number attached to it; the sender chooses this number so as to minimize the risk of reusing a sequence number that is already in use. The client sends the packet with that sequence number and data, along with the packet length field. The server, on receiving the packet, sends an ACK for the next expected sequence number. It also sends a SYN with its own sequence number.
The client, on receiving both messages (the SYN as well as the ACK), sends an ACK to the server with the next sequence number it expects from the server. Thus, the sequence numbers are established between the client and server, and they are ready for data transfer. Even while sending the data, the same sequence-number scheme is followed.

TCP Transmission Policy
The window management in TCP is not directly tied to acknowledgements as it is in most data link protocols. For example, suppose the receiver has a 4096-byte buffer, as shown in Figure below. If the sender transmits a 2048-byte segment that is correctly received, the receiver will acknowledge the segment. However, since it now has only 2048 bytes of buffer space (until the application removes some data from the buffer), it will advertise a window of 2048 starting at the next byte expected.
Now the sender transmits another 2048 bytes, which are acknowledged, but the advertised window is 0. The sender must stop until the application process on the receiving host has removed some data from the buffer, at which time TCP can advertise a larger window.
When the window is 0, the sender may not normally send segments, with two exceptions. First, urgent data may be sent, for example, to allow the user to kill the process running on the remote machine. Second, the sender may send a 1-byte segment to make the receiver reannounce the next byte expected and window size. The TCP standard explicitly provides this option to prevent deadlock if a window announcement ever gets lost.
Senders are not required to transmit data as soon as they come in from the application. Neither are receivers required to send acknowledgements as soon as possible. When the first 2 KB of data came in, TCP, knowing that it had a 4-KB window available, would have been completely correct in just buffering the data until another 2 KB came in, to be able to transmit a segment with a 4-KB payload. This freedom can be exploited to improve performance.
Consider a telnet connection to an interactive editor that reacts on every keystroke. In the worst case, when a character arrives at the sending TCP entity, TCP creates a 21-byte TCP segment, which it gives to IP to send as a 41-byte IP datagram. At the receiving side, TCP immediately sends a 40-byte acknowledgment (20 bytes of TCP header and 20 bytes of IP header). Later, when the editor has read the byte, TCP sends a window update, moving the window 1 byte to the right. This packet is also 40 bytes. Finally, when the editor has processed the character, it echoes the character as a 41-byte packet. In all, 162 bytes of bandwidth are used and four segments are sent for each character typed. When bandwidth is scarce, this method of doing business is not desirable.
One approach that many TCP implementations use to optimize this situation is to delay acknowledgments and window updates for 500 msec in the hope of acquiring some data on which to hitch a free ride. Assuming the editor echoes within 500 msec, only one 41-byte packet now need be sent back to the remote user, cutting the packet count and bandwidth usage in half. Although this rule reduces the load placed on the network by the receiver, the sender is still operating inefficiently by sending 41-byte packets containing 1 byte of data. A way to reduce this usage is known as Nagle's algorithm (Nagle, 1984). What Nagle suggested is simple: when data come into the sender one byte at a time, just send the first byte and buffer all the rest until the outstanding byte is acknowledged. Then send all the buffered characters in one TCP segment and start buffering again until they are all acknowledged. If the user is typing quickly and the network is slow, a substantial number of characters may go in each segment, greatly reducing the bandwidth used. The algorithm additionally allows a new packet to be sent if enough data have trickled in to fill half the window or a maximum segment.
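Nagle's rule as described above can be modelled in a few lines. This is a toy model of the buffering behaviour only; real TCP also considers segment and window sizes, as the last sentence notes:

```python
# Toy model of Nagle's algorithm: while a segment is unacknowledged,
# coalesce further small writes instead of sending them immediately.
class NagleSender:
    def __init__(self):
        self.buffer = b""
        self.unacked = False
        self.sent = []            # segments handed to the network

    def write(self, data: bytes):
        if self.unacked:
            self.buffer += data   # hold small writes until the ACK arrives
        else:
            self.sent.append(data)
            self.unacked = True

    def ack(self):
        self.unacked = False
        if self.buffer:           # flush everything buffered as one segment
            pending, self.buffer = self.buffer, b""
            self.write(pending)

s = NagleSender()
for ch in b"hello":               # five one-byte application writes
    s.write(bytes([ch]))
s.ack()                           # ACK for the first byte arrives
print(s.sent)                     # [b'h', b'ello'] - two segments, not five
```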
Nagle's algorithm is widely used by TCP implementations, but there are times when it is better to disable it. In particular, when an X Windows application is being run over the Internet, mouse movements have to be sent to the remote computer. (The X Window system is the windowing system used on most UNIX systems.) Gathering them up to send in bursts makes the mouse cursor move erratically, which makes for unhappy users.
Another problem that can degrade TCP performance is the silly window syndrome. This problem occurs when data are passed to the sending TCP entity in large blocks, but an interactive application on the receiving side reads data 1 byte at a time. To see the problem, look at the figure below. Initially, the TCP buffer on the receiving side is full and the sender knows this (i.e., has a window of size 0). Then the interactive application reads one character from the TCP stream. This action makes the receiving TCP happy, so it sends a window update to the sender saying that it is all right to send 1 byte. The sender obliges and sends 1 byte. The buffer is now full, so the receiver acknowledges the 1-byte segment but sets the window to 0. This behavior can go on forever.
Clark's solution is to prevent the receiver from sending a window update for 1 byte. Instead it is forced to wait until it has a decent amount of space available and advertise that instead. Specifically, the receiver should not send a window update until it can handle the maximum segment size it advertised when the connection was established or until its buffer is half empty, whichever is smaller.
Furthermore, the sender can also help by not sending tiny segments. Instead, it should try to wait until it has accumulated enough space in the window to send a full segment or at least one containing half of the receiver's buffer size (which it must estimate from the pattern of window updates it has received in the past).
Nagle's algorithm and Clark's solution to the silly window syndrome are complementary. Nagle was trying to solve the problem caused by the sending application delivering data to TCP a byte at a time. Clark was trying to solve the problem of the receiving application sucking the data up from TCP a byte at a time. Both solutions are valid and can work together. The goal is for the sender not to send small segments and the receiver not to ask for them.
The receiving TCP can go further in improving performance than just doing window updates in large units. Like the sending TCP, it can also buffer data, so it can block a READ request from the application until it has a large chunk of data to provide. Doing this reduces the number of calls to TCP, and hence the overhead. Of course, it also increases the response time, but for noninteractive applications like file transfer, efficiency may be more important than response time to individual requests. Another receiver issue is what to do with out-of-order segments. They can be kept or discarded, at the receiver's discretion. Of course, acknowledgments can be sent only when all the data up to the byte acknowledged have been received. If the receiver gets segments 0, 1, 2, 4, 5, 6, and 7, it can acknowledge everything up to and including the last byte in segment 2. When the sender times out, it then retransmits segment 3. If the receiver has buffered segments 4 through 7, upon receipt of segment 3 it can acknowledge all bytes up to the end of segment 7.


Explained Example: Connection Establishment and Termination

Establishing a Connection
A connection can be established between two machines only if a connection between the two sockets does not exist, both machines agree to the connection, and both machines have adequate TCP resources to service the connection. If any of these conditions are not met, the connection cannot be made. The acceptance of connections can be triggered by an application or a system administration routine.
When a connection is established, it is given certain properties that are valid until the connection is closed. Typically, these will be a precedence value and a security value. These settings are agreed upon by the two applications when the connection is in the process of being established.
In most cases, a connection is expected by two applications, so they issue either active or passive open requests. The figure below shows a flow diagram for a TCP open. The process begins with Machine A's TCP receiving a request for a connection from its ULP, whereupon it sends an active open segment to Machine B. The segment that is constructed will have the SYN flag set on (set to 1) and will have a sequence number assigned. The diagram shows this with the notation SYN SEQ 50, indicating that the SYN flag is on and the sequence number (Initial Send Sequence number or ISS) is 50. (Any number could have been chosen.)
[Figure: TCP connection establishment]
The application on Machine B will have issued a passive open instruction to its TCP. When the SYN SEQ 50 segment is received, Machine B's TCP will send an acknowledgment back to Machine A with the sequence number of 51. Machine B will also set an Initial Send Sequence number of its own. The diagram shows this message as ACK 51; SYN 200 indicating that the message is an acknowledgment with sequence number 51, it has the SYN flag set, and has an ISS of 200.
Upon receipt, Machine A sends back its own acknowledgment message with the sequence number set to 201. This is ACK 201 in the diagram. Then, having opened and acknowledged the connection, Machine A and Machine B both send connection open messages through the ULP to the requesting applications.
It is not necessary for the remote machine to have a passive open instruction, as mentioned earlier. In this case, the sending machine provides both the sending and receiving socket numbers, as well as precedence, security, and timeout values. It is common for two applications to request an active open at the same time. This is resolved quite easily, although it does involve a little more network traffic.
Data Transfer
Transferring information is straightforward, as shown in the figure below. For each block of data received by Machine A's TCP from the ULP, TCP encapsulates it and sends it to Machine B with an increasing sequence number. After Machine B receives the message, it acknowledges it with a segment acknowledgment that increments the next sequence number (and hence indicates that it received everything up to that sequence number). The figure shows the transfer of only one segment of information - one each way.
[Figure: TCP data transfer]
The TCP data transport service actually embodies six different subservices:
Full duplex: Enables both ends of a connection to transmit at any time, even simultaneously.
Timeliness: The use of timers ensures that data is transmitted within a reasonable amount of time.
Ordered: Data sent from one application will be received in the same order at the other end. This occurs despite the fact that the datagrams may be received out of order through IP, as TCP reassembles the message in the correct order before passing it up to the higher layers.
Labeled: All connections have an agreed-upon precedence and security value.
Controlled flow: TCP can regulate the flow of information through the use of buffers and window limits.
Error correction: Checksums ensure that data is free of errors (within the checksum algorithm's limits).
Closing Connections
To close a connection, one of the TCPs receives a close primitive from the ULP and issues a message with the FIN flag set on. This is shown in Figure 8. In the figure, Machine A's TCP sends the request to close the connection to Machine B with the next sequence number. Machine B will then send back an acknowledgment of the request and its next sequence number. Following this, Machine B sends the close message through its ULP to the application and waits for the application to acknowledge the closure. This step is not strictly necessary; TCP can close the connection without the application's approval, but a well-behaved system would inform the application of the change in state.
After receiving approval to close the connection from the application (or after the request has timed out), Machine B's TCP sends a segment back to Machine A with the FIN flag set. Finally, Machine A acknowledges the closure and the connection is terminated.
An abrupt termination of a connection can happen when one side shuts down the socket. This can be done without any notice to the other machine and without regard to any information in transit between the two. Aside from sudden shutdowns caused by malfunctions or power outages, abrupt termination can be initiated by a user, an application, or a system monitoring routine that judges the connection worthy of termination. The other end of the connection may not realise an abrupt termination has occurred until it attempts to send a message and the timer expires.
[Figure: TCP connection closing]
To keep track of all the connections, TCP uses a connection table. Each existing connection has an entry in the table that shows information about the end-to-end connection. The layout of the TCP connection table is shown below-
[Figure: TCP connection table layout]
The meaning of each column is as follows:
State: The state of the connection (closed, closing, listening, waiting, and so on).
Local address: The IP address for the connection. When in a listening state, this will be set to 0.0.0.0 (meaning any address).
Local port: The local port number.
Remote address: The remote's IP address.
Remote port: The port number of the remote connection.

TCP Retransmission and Timeout

We know that TCP provides reliable data transfer. But how does it know when to retransmit a packet it has already transmitted? It is true that the receiver acknowledges received packets with the next expected sequence number. But what if the sender never receives an ACK?
Consider the following two scenarios:
ACK not received: In this case the receiver does transmit the cumulative ACK, but the frame gets lost somewhere along the way. The sender normally waits for this cumulative ACK before flushing the sent packets from its buffer, so it needs some mechanism for taking action when no ACK arrives for too long. The mechanism used for this purpose is a timer: TCP starts a timer as soon as it transmits a packet. If the ACK arrives before the timeout, TCP flushes the acknowledged packets from its buffer to create space. If the ACK does not arrive before the timeout, TCP retransmits the packet. Where does this timeout interval come from? We will see the procedure for choosing it shortly.
Duplicate ACK received: In this case the receiver sends the same ACK to the sender more than once. How can this happen? Occasionally it is caused by transient network problems, but if the sender receives the same ACK two or three times, there is meaning attached to it. It starts at the receiver side. The receiver keeps sending ACKs for the frames it receives, and these ACKs are cumulative; the receiver maintains a buffer, and the algorithm for sending a cumulative ACK may depend on how much buffer space is filled or left, or on a timer. Normally a timer is set so that the receiver sends a cumulative ACK after a specific interval. But what if the sender's rate is very high? The receiver's buffer becomes full and loses the capacity to store any more packets from the sender. The receiver then keeps sending the same duplicate ACK, signalling that the buffer is full and that no packets after that point have been accepted. This message helps the sender control its flow rate.
This whole process makes TCP an adaptive flow control protocol: in the case of congestion, TCP adapts its flow rate. More on this is presented in the Congestion Control topic. Note also that there is no such thing as a negative ACK in TCP; the two scenarios above convey the state of the receiver to the sender. Let us now look at how TCP chooses the timeout interval.
Choosing the Timeout Interval:
The timer is based on the time a packet takes to complete a round trip from the sender to the receiver, called the round-trip time (RTT). Conditions do not stay the same, however: the RTT varies greatly over time, so an averaged quantity must be used in calculating the timeout interval. The following procedure is used:
1. An average RTT is calculated from previous results. (Running average)
2. The RTT for the current packet is measured; this value depends on the conditions and the congestion in the network at that moment. (Measured)
3. The new running average is calculated as:
                0.8*(Running avg.) + (1 - 0.8)*(Measured)
The value 0.8 may be changed as required, but it has to be less than 1.
4. To arrive at a more accurate result, this procedure is repeated for each new measurement.
Thus we arrive at the average time a packet takes to make a round trip. To choose a timeout interval, this value is multiplied by some factor so as to create some leeway:
5. Timeout interval = 2*(value arrived at in step 4)
If we plot both the running average and the measured value over time, the running average remains almost constant while the measured value fluctuates much more. This is why the running average is weighted more heavily (0.8) than the measured value (0.2).
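The steps above can be sketched as a pair of small functions. The weight, the leeway factor of 2, and the sample RTT values below are illustrative, not taken from any real trace:

```python
def update_rtt(running_avg, measured, weight=0.8):
    # Step 3: blend the running average with the newly measured RTT.
    return weight * running_avg + (1 - weight) * measured

def timeout_interval(running_avg):
    # Step 5: double the averaged RTT to leave some leeway.
    return 2 * running_avg

# Steps 1, 2 and 4: start from a previous average (made-up, in ms)
# and fold in each new measurement as it arrives.
avg = 100.0
for measured in (110.0, 90.0, 150.0):
    avg = update_rtt(avg, measured)

print(round(avg, 2), round(timeout_interval(avg), 2))
```

Note how a single large spike (150 ms) moves the average only modestly; the 0.8 weight is what keeps the running average smooth while the raw measurements fluctuate.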

Comparison: TCP and UDP

The User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) are the “siblings” of the transport layer in the TCP/IP protocol suite. They perform the same role, providing an interface between applications and the data-moving capabilities of the Internet Protocol (IP), but they do it in very different ways. The two protocols thus provide choice to higher-layer protocols, allowing each to select the appropriate one depending on its needs.
The table below contrasts the most important basic attributes of the two protocols:

Attribute               TCP                                   UDP
Connection setup        Connection-oriented (handshake)       Connectionless
Reliability             Guaranteed delivery via ACKs          Best effort; no ACKs
Ordering                Data delivered in order               No ordering guarantees
Flow/congestion control Yes (windows, AIMD)                   None
Header size             20 bytes                              8 bytes
Typical uses            Web pages, e-mail, file transfer      DNS, streaming audio/video

Exercise Questions

The exercise questions here include the assignment questions along with their solutions. This will help students grasp the concepts of TCP and encourage them to try more exercise questions from the Kurose and Tanenbaum books.
1) UDP and TCP use 1’s complement for their checksums. Suppose you have the following three 8-bit bytes: 01010101, 01110000, 01001100. What is the 1’s complement of the sum of these 8-bit bytes? (Note that although UDP and TCP use 16-bit words in computing the checksum, for this problem you are being asked to consider 8-bit summands.) Show all work. Why is it that UDP takes the 1’s complement of the sum; that is, why not just use the sum? With the 1’s complement scheme, how does the receiver detect errors? Is it possible that a 1-bit error will go undetected? How about a 2-bit error?
Solution: 01010101 + 01110000 = 11000101; 11000101 + 01001100 = 100010001. The sum overflows 8 bits, so the carry is wrapped around: 00010001 + 1 = 00010010.
One's complement of 00010010 = checksum = 11101101.
At the receiver end, the three bytes and the checksum are added together to detect errors. If no error occurred, the sum contains only binary 1s; if the sum contains any 0 bit, the receiver knows there is an error. The receiver will detect every 1-bit error, but this is not always the case for a 2-bit error: two different bits may change in a way that leaves the sum unchanged.

2) Answer true or false to the following questions and briefly justify your answer:
a) With the SR protocol, it is possible for the sender to receive an ACK for a packet that falls outside of its current window.
True. Consider a scenario where the sender's timer for the first packet expires before its ACK arrives, so the sender retransmits the packet. The original ACK then arrives, so the sender slides its window forward and fills the freed space with a new packet. When the ACK for the retransmitted copy arrives later, it is an ACK for a packet that now falls outside the current window.

b) With GBN, it is possible for the sender to receive an ACK for a packet that falls outside of its current window.
True. Same argument provided for (a) holds here.

c) The alternating bit protocol is the same as the SR protocol with a sender and receiver window size of 1.
True. The alternating bit protocol uses 0 and 1 as alternating sequence numbers, and with a window size of 1 no cumulative ACK is possible: an ACK must be sent after each packet is received. The SR protocol therefore behaves exactly like the alternating bit protocol.

d) The alternating bit protocol is the same as the GBN protocol with a sender and receiver window size of 1.
True. Same argument holds here.

3) Consider the TCP procedure for estimating RTT. Suppose that a = 0.1. Let SampleRTT1 be the most recent sample RTT, let SampleRTT2 be the next most recent sample RTT, and so on.
a) For a given TCP connection, suppose four acknowledgments have been returned with corresponding sample RTTs Sample RTT4, SampleRTT3, SampleRTT2, SampleRTT1. Express EstimatedRTT in terms of four sample RTTs.
b) Generalize your formula for n sample RTTs.
c) For the formula in part (b) let n approach infinity. Comment on why this averaging procedure is called an exponential moving average.
EstimatedRTT1 = SampleRTT1
EstimatedRTT2 = (1-a)EstimatedRTT1 + aSampleRTT2 = (1-a)SampleRTT1 + aSampleRTT2
EstimatedRTT3 = (1-a)EstimatedRTT2 + aSampleRTT3 = (1-a)^2 SampleRTT1 + (1-a)a SampleRTT2 + a SampleRTT3
EstimatedRTT4 = (1-a)EstimatedRTT3 + aSampleRTT4 = (1-a)^3 SampleRTT1 + (1-a)^2 a SampleRTT2 + (1-a)a SampleRTT3 + a SampleRTT4
EstimatedRTTn = (1-a)^(n-1) SampleRTT1 + (1-a)^(n-2) a SampleRTT2 + (1-a)^(n-3) a SampleRTT3 + . . . + (1-a)a SampleRTTn-1 + a SampleRTTn
For part (c): as n grows, the weight on each sample decays geometrically with its age - each older sample counts a factor (1-a) less than the one after it. Because the weights fall off exponentially, the procedure is called an exponential moving average.
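The expansion can be checked numerically: iterating the recursive update reproduces the closed-form weighted sum. The sample values below are made up for illustration; a = 0.1 as in the problem:

```python
a = 0.1
samples = [100.0, 120.0, 110.0, 130.0]  # SampleRTT1..SampleRTT4 (made-up, ms)

# Recursive form: EstimatedRTT_n = (1-a)*EstimatedRTT_{n-1} + a*SampleRTT_n
est = samples[0]
for s in samples[1:]:
    est = (1 - a) * est + a * s

# Closed form from part (a): weights decay geometrically with sample age.
closed = ((1 - a) ** 3 * samples[0]
          + (1 - a) ** 2 * a * samples[1]
          + (1 - a) * a * samples[2]
          + a * samples[3])

print(abs(est - closed) < 1e-9)  # the two forms agree
```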

4) We have seen from text that TCP waits until it has received three duplicate ACKs before performing a fast retransmit. Why do you think that TCP designers chose not to perform a fast retransmit after the first duplicate ACK for a segment is received?
Solution: Suppose a sender sends three consecutive packets 1, 2 and 3. As soon as the receiver receives 1, it sends an ACK for it. If packet 3 then arrives before packet 2 because of reordering, the receiver sends another ACK for 1, since it has not yet received 2. The sender has now received two ACKs for packet 1, yet nothing was lost: when packet 2 finally arrives, the receiver acknowledges 2 and then 3. A single duplicate ACK is therefore often caused by mere reordering, so it is safer to wait for more than two duplicate ACKs before retransmitting a packet.

5) Why do you think TCP avoids measuring the SampleRTT for retransmitted segments?
Solution: Let's look at what could wrong if TCP measures SampleRTT for a retransmitted segment. Suppose the source sends packet P1, the timer for P1 expires, and the source then sends P2, a new copy of the same packet. Further suppose the source measures SampleRTT for P2 (the retransmitted packet). Finally suppose that shortly after transmitting P2 an acknowledgment for P1 arrives. The source will mistakenly take this acknowledgment as an acknowledgment for P2 and calculate an incorrect value of SampleRTT.


Unlike TCP, UDP doesn't establish a connection before sending data, it just sends. Because of this, UDP is called "Connectionless". UDP packets are often called "Datagrams". An example of UDP in action is the DNS service. DNS servers send and receive DNS requests using UDP.


In this section we look at the User Datagram Protocol (UDP), a transport layer protocol. This section covers the UDP protocol, its header structure, and the way in which it is used for communication.
As shown in Figure 1, the User Datagram Protocol (UDP) is a transport layer protocol that supports network applications. It is layered just below the session layer and sits above IP (Internet Protocol) in the Open Systems Interconnection (OSI) model. This protocol is similar to TCP (Transmission Control Protocol), which is used in client/server programs like video conferencing systems, except that UDP is connectionless.
Figure 1:UDP in OSI Layer Model

What is UDP?

Figure 2: UDP

UDP is a connectionless and unreliable transport protocol. The two port fields serve to identify the end points within the source and destination machines. UDP is used, in place of TCP, when reliable delivery is not required. Consequently, UDP is not used to send important data such as web pages or database information; streaming media such as video and audio use UDP because it offers speed.
Why is UDP faster than TCP?
UDP is faster than TCP because it has no flow control and performs no error checking, error correction, or acknowledgment. UDP is concerned only with speed, so when data sent over the Internet is affected by collisions, errors will be present.

UDP packets are called user datagrams and have an 8-byte header. The format of a user datagram is shown in Figure 3. The first 8 bytes contain the header information and the remaining bytes contain the data.
Figure 3:UDP datagrams
Source port number: The port number used by the source host that is transferring the data. It is 16 bits long, so port numbers range from 0 to 65,535.
Destination port number: The port number used by the destination host that is receiving the data. It is also 16 bits long, with the same port range as the source.
Length: A 16-bit field containing the total length of the user datagram, header plus data.
Checksum: The UDP checksum is optional and is used to detect errors in the data. A value of zero in this field means that no checksum was calculated; if the computed checksum happens to be zero, it is transmitted as all 1s.
Characteristics of UDP
The characteristics of UDP are given below.
• End-to-end: UDP can identify a specific process running on a computer.
• Unreliable, connectionless delivery (much like ordinary postal mail): UDP uses a connectionless communication setup; it does not need to establish a connection before sending data, and communication consists only of the data segments themselves.
• Same best-effort semantics as IP: no ACKs, no sequence numbers, no flow control, so traffic is subject to loss, duplication, delay, out-of-order delivery, or loss of connection.
• Fast, low overhead: suitable for use on a reliable local network and for real-time protocols such as RTP (Real-Time Transport Protocol).

Use of ports in Communication

After receiving data, the computer must have some mechanism for deciding what to do with it. Suppose a user has three applications open, say a web browser, a telnet session and an FTP session, and all three are moving data over the network. The operating system needs some way of determining which piece of traffic is bound for which application; network ports are used to handle this. The available port range is 0 to 65,535: ports 0 to 1023 are well-known ports, 1024 to 49151 are registered ports, and 49152 to 65535 are dynamic ports.

Figure 4: Port
List of well-known ports used by UDP:
Figure 5:List of ports used by UDP

UDP Header structure

The UDP header contains four sections: source port, destination port, length, and checksum.
Figure 6: UDP Header
Source port
Source port is an optional field. When used, it indicates the port of the sending process and may be assumed to be the port to which a reply should be addressed in the absence of any other information. If not used, a value of zero is inserted.
Destination port
The port number on which the data is being sent.
Length
The length in octets of this user datagram, including the header and the data. The minimum value of the length is eight.
Checksum
The main purpose of the checksum is error detection. It helps verify that the message arrived at the correct destination. To verify the checksum, the receiver must extract certain fields from the IP header; a 12-byte pseudo-header is used to compute the checksum.
Data
The application data, i.e. the actual message.

Ethereal Capture
The UDP packet can be viewed using Ethereal capture. One such UDP packet is captured and shown below.
Figure 7: Ethereal capture

Communication in UDP

In a UDP exchange, the client sets a unique source port number based on the program that initiated the communication. UDP is not limited to 1-to-1 interaction: a 1-to-many interaction can be provided using broadcast or multicast addressing, and a many-to-1 interaction can be provided by many clients communicating with a single server. A many-to-many interaction is just an extension of these techniques.
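The connectionless pattern can be seen in a few lines of Python socket code. This is a minimal sketch; the loopback address and the payload are arbitrary choices for illustration:

```python
import socket

# A minimal one-to-one UDP exchange over loopback: there is no
# connection setup, the client simply sends a datagram to the
# server's address and port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", addr)       # each datagram carries its own addressing

data, peer = server.recvfrom(1024)  # a datagram arrives whole, or not at all
print(data)
client.close()
server.close()
```

Note the contrast with TCP: no open handshake, no ACKs, and no teardown; the sockets exist only to name the two end points.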

UDP Checksum and Pseudo-Header

The main purpose of the UDP checksum is to detect errors in the transmitted segment.
The UDP checksum is optional, but it should always be turned on.
To calculate the UDP checksum, a "pseudo-header" is prepended to the UDP header. The fields in the pseudo-header are all taken from the IP header. They are used on the receiving system to make sure that the IP datagram was delivered to the proper computer. The 12-byte pseudo-header includes:
Figure 8: UDP pseudo-header
IP Source Address (4 bytes)
IP Destination Address (4 bytes)
Zero padding (1 byte)
Protocol (1 byte)
UDP Length (2 bytes)

Checksum Calculation

Sender side:
1. Treat the segment contents as a sequence of 16-bit integers.
2. Add all the 16-bit words together; call the result the sum. Any carry out of the top bit is wrapped around and added back in.
3. Checksum: the 1's complement of the sum. (In 1's complement, all 0s are converted into 1s and all 1s are converted into 0s.)
4. The sender puts this checksum value in the UDP checksum field.
Receiver side:
1. Add all the received 16-bit words together, then add the sender's checksum to the sum.
2. If no error occurred, the result contains only 1s. If any bit of the result is 0, an error is detected and the packet is discarded by the receiver.

Here we explain a simple checksum calculation. As an example, suppose that we have the bit stream 0110011001100110 0101010101010101 0000111100001111.
This bit stream is divided into 16-bit words:
0110011001100110
0101010101010101
0000111100001111
The sum of the first two of these 16-bit words is:
1011101110111011
Adding the third word to the above sum gives:
1100101011001010 (sum of all words)
Now, to calculate the checksum, the 1's complement of the sum is taken. As mentioned earlier, the 1's complement is obtained by converting all 1s into 0s and all 0s into 1s. So the checksum at the sender side is 0011010100110101.
At the receiver side, all the words are added again, and the sum is then added to the sender's checksum.
If there is no error, the receiver's result will be 1111111111111111.
If any bit of the result is 0, there is a checksum error, and the packet is discarded.
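The same calculation can be sketched in Python. The helper below (a hypothetical name, not a standard library function) folds any carry out of the top bit back into the sum - the end-around carry - before complementing:

```python
def udp_checksum(words):
    # One's-complement sum of 16-bit words with end-around carry,
    # followed by a one's complement of the result.
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # wrap the carry around
    return (~total) & 0xFFFF

words = [0b0110011001100110, 0b0101010101010101, 0b0000111100001111]
checksum = udp_checksum(words)
print(format(checksum, "016b"))  # 0011010100110101, as in the worked example

# Receiver check: summing all words plus the checksum gives all 1s,
# so running the same routine over them complements that to zero.
print(udp_checksum(words + [checksum]) == 0)
```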
You may wonder why UDP provides a checksum in the first place, when many link-layer protocols (including the popular Ethernet protocol) also provide error checking. The reason is that there is no guarantee that all the links between source and destination provide error checking -- one of the links may use a protocol that does not. Because IP is supposed to run over just about any layer-2 protocol, it is useful for the transport layer to provide error checking as a safety measure. Although UDP provides error checking, it does not do anything to recover from an error: some implementations simply discard the damaged segment, while others pass it to the application with a warning.


UDP is a transport layer protocol that is connectionless and unreliable. UDP does not perform flow control, error control, or retransmission of a bad segment. UDP is faster than TCP and is commonly used for streaming audio and video; it is not used for important data such as web pages or database information. UDP transmits segments consisting of an 8-byte header containing the source port, destination port, UDP length, and checksum. The UDP checksum is used to detect errors in the transmitted segment.

Exercise Questions

1. Calculate UDP checksum of the following sequence: 11100110011001101101010101010101.
Answer: To calculate the checksum, follow these steps:
       1. First divide the bit stream into two 16-bit words:
          1110011001100110 and 1101010101010101.
       2. Add the two words:
              1110011001100110
            + 1101010101010101
            = 1 1011101110111011  (17 bits; the carry out of the top bit wraps around)
              1011101110111100    (after adding the carry back in)
       3. Now take the 1's complement of this sum, converting all 1s into 0s and all 0s into 1s.
          So, the checksum is 0100010001000011.

2. What is the advantage of keeping the checksum field turned off, and when is it appropriate to do so?
Answer:
       Keeping the checksum field turned off saves computational load and may speed up data transfer.
       It is not a good idea to turn the checksum off when transmitting data over a wide area network (WAN).
       We can keep the checksum turned off when transmitting data over a local area network (LAN), because the switching infrastructure would catch transmission errors via the Ethernet protocol's own checksum.


Congestion occurs when the source sends more packets than the destination can handle. Packets are normally stored temporarily in buffers at the source and the destination before being forwarded to the upper layers; when these buffers fill up on the destination side, congestion has occurred and performance degrades.
What is Congestion?
Let us assume we are watching the destination. If the source sends more packets than the destination buffer can handle, congestion occurs. When congestion occurs, the destination has only two options for the arriving packets: drop them or keep them. If the destination drops the newly arriving packets and keeps the old packets, the mechanism is called the `Y' model. If the destination drops the old packets and replaces them with new ones, the mechanism is called the milk model. In both cases packets are dropped. Two common ways to detect congestion are timeout and duplicate acknowledgement.
Congestion control
Congestion control is used to determine the amount of data the sender can send into the network. Determining this amount is not easy, as the available bandwidth changes from time to time and connections are continually set up and torn down; the sender must adjust its traffic based on these factors. TCP congestion control algorithms are used to detect and control congestion. The following are the congestion algorithms we will be discussing.
  • Additive Increase/ Multiplicative Decrease.
  • Slow Start
  • Congestion Avoidance
  • Fast Retransmit
  • Fast recovery
Additive Increase / Multiplicative Decrease
This algorithm is used on the sender side of the network. The congestion window, SSIZE, is the amount of data the sender can send into the network before receiving an ACK. The advertised window, RSIZE, is the amount of data the receiver side can accept. The TCP source sets the congestion window based on the level of congestion in the network: it decreases the congestion window when congestion increases and increases the window when congestion decreases. This mechanism is commonly called Additive Increase/Multiplicative Decrease (AIMD).
The source infers congestion from packet loss, which it detects via timeout: the source waits until the timeout for the acknowledgement to arrive, and since packets are rarely lost under normal conditions, it assumes congestion has occurred when a timeout happens. Whenever a timeout happens, the source sets SSIZE to half its previous value; this is the multiplicative decrease. If timeouts keep happening, the window size is halved repeatedly until it reaches 1, the minimum value for the congestion window. When the sender determines that congestion has not occurred, it increases the congestion window by one. This additive increase happens after every successful ACK received by the sender.
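The AIMD rule can be sketched as a window update function. The function name and the segment-count units are illustrative, not part of any standard:

```python
def aimd_update(cwnd, timeout_occurred):
    # Multiplicative decrease: halve the window on a timeout,
    # but never drop below the minimum congestion window of 1 segment.
    if timeout_occurred:
        return max(1, cwnd // 2)
    # Additive increase: grow by one segment per successful ACK.
    return cwnd + 1

cwnd = 16
cwnd = aimd_update(cwnd, timeout_occurred=True)   # halved to 8
cwnd = aimd_update(cwnd, timeout_occurred=False)  # grows to 9
print(cwnd)
```

The asymmetry is deliberate: the window backs off quickly when the network shows signs of congestion but recovers only slowly, one segment per round, which keeps competing senders from repeatedly overrunning the bottleneck.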

Slow start
The main disadvantage of the Additive Increase/Multiplicative Decrease method is that the sender halves the congestion window when it detects congestion but increases it by only one for each successful ACK received. If the window size is large, and/or the congestion window is grown from 1, many round trips are wasted reaching a usable window size. The slow start algorithm is used to solve this problem of increment by one. SSIZE is the amount of data the sender can send into the network before receiving the ACK. RSIZE is the amount of data the receiver side can receive on the network. SSTHOLD is the slow start threshold used to control the amount of data flow on the network. The slow start algorithm is used while SSIZE is less than the threshold SSTHOLD. In the beginning the sender does not know how much data to send and has to find out. Initially, SSIZE must be less than or equal to 2*SMSS bytes and must not be more than 2 segments. As packets are sent, SSIZE is increased exponentially until SSIZE becomes greater than SSTHOLD or congestion is detected.
[Figure: slow start]

When the sender detects congestion, it decreases the congestion window to half of the previous value. The slow start algorithm is then used again to increase the congestion window.
Congestion avoidance
The congestion avoidance algorithm is used once SSIZE exceeds the threshold SSTHOLD. From that point, SSIZE is increased by one full-size segment per round-trip time, and this linear growth continues until congestion is detected.
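Continuing the simulation style above, the linear phase is one added segment per round trip once the window is at or above the threshold (again a toy model, in whole segments):

```python
def congestion_avoidance(cwnd, ssthresh, rtts):
    """Grow the window linearly, one full segment per round-trip time,
    once cwnd has reached the slow start threshold."""
    for _ in range(rtts):
        if cwnd >= ssthresh:
            cwnd += 1   # additive increase: +1 segment per RTT
    return cwnd

# starting at the threshold of 16 segments, 4 round trips later: 20
print(congestion_avoidance(16, 16, 4))
```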
Fast retransmission
All three of the algorithms above rely on a timeout to detect congestion, and the disadvantage is that the sender must wait for the timeout to expire. To detect congestion sooner, the sender uses duplicate ACKs. Every time a packet arrives at the receiving side, the receiver sends an ACK to the sender. When a packet arrives out of order, TCP cannot yet acknowledge the data that packet contains, because an earlier packet has not yet arrived; the receiver therefore resends the same ACK it sent last time, producing a duplicate ACK.
From the sender's point of view, duplicate ACKs can arise from a number of network events. The sender cannot assume the packet was lost: duplicates may also be triggered by reordered segments, or by replication of an ACK or a segment in the network. The sender therefore waits for three duplicate ACKs before concluding that a segment was lost, at which point TCP retransmits what appears to be the missing segment without waiting for the retransmission timer to expire.
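The three-duplicate-ACK rule can be sketched as a small detector. This is a hypothetical helper for illustration; a real stack would also handle SACK, window updates and retransmission itself:

```python
class FastRetransmitDetector:
    """Toy detector: count consecutive duplicate ACKs; the third
    duplicate triggers an immediate retransmission (no timer wait)."""
    DUP_THRESHOLD = 3

    def __init__(self):
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_no):
        """Return True exactly when fast retransmit should fire."""
        if ack_no == self.last_ack:
            self.dup_count += 1
        else:
            self.last_ack = ack_no   # new data acknowledged; reset the count
            self.dup_count = 0
        return self.dup_count == self.DUP_THRESHOLD

d = FastRetransmitDetector()
# an ACK for byte 100 followed by three duplicates of it:
results = [d.on_ack(a) for a in [100, 100, 100, 100]]
# results == [False, False, False, True] -- retransmit on the third duplicate
```

Waiting for three duplicates (rather than one) is what filters out simple reordering, which commonly produces one or two duplicates but rarely three.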
Fast recovery
The fast recovery algorithm governs the transmission of new data until a non-duplicate ACK arrives. The reason for not performing slow start is that the receipt of duplicate ACKs not only indicates that a segment has been lost, but also that segments are most likely still leaving the network. The fast retransmit and fast recovery algorithms are usually implemented together as follows:
1. When the third duplicate ACK is received, set SSTHOLD to no more than max(FlightSize / 2, 2*SMSS), where FlightSize is the amount of outstanding data in the network.
2. Retransmit the lost segment and set SSIZE to SSTHOLD plus 3*SMSS. This artificially "inflates" the congestion window by the number of segments (three) that have left the network and which the receiver has buffered.
3. For each additional duplicate ACK received, increment SSIZE by SMSS. This artificially inflates the congestion window to reflect the additional segment that has left the network.
4. Transmit a segment, if allowed by the new value of SSIZE and the receiver's advertised window.
5. When the next ACK arrives that acknowledges new data, set SSIZE to SSTHOLD (the value set in step 1). This is termed "deflating" the window. This ACK should be the acknowledgement elicited by the retransmission in step 1, about one RTT after it (though it may arrive sooner in the presence of significant out-of-order delivery at the receiver), and it should acknowledge all the intermediate segments sent between the lost segment and the receipt of the third duplicate ACK, if none of these were lost.
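The window arithmetic in steps 1-5 can be sketched directly. The SMSS value and function names below are assumptions for illustration; the formulas themselves follow the steps above:

```python
SMSS = 1460  # sender maximum segment size in bytes (assumed typical value)

def on_third_dup_ack(flight_size):
    """Steps 1-2: halve the threshold, retransmit, and inflate the
    window by the three segments known to have left the network."""
    ssthresh = max(flight_size // 2, 2 * SMSS)
    cwnd = ssthresh + 3 * SMSS        # step 2: inflate by 3 segments
    return ssthresh, cwnd

def on_extra_dup_ack(cwnd):
    """Step 3: each further duplicate ACK means one more segment
    has left the network, so inflate by one SMSS."""
    return cwnd + SMSS

def on_new_ack(ssthresh):
    """Step 5: new data acknowledged; deflate the window back
    to the threshold set in step 1."""
    return ssthresh

# with 20000 bytes in flight: ssthresh = 10000, cwnd = 10000 + 3*1460 = 14380
ssthresh, cwnd = on_third_dup_ack(20000)
```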
What causes this congestion? Congestion occurs when the source sends more packets than the destination can handle, and performance degrades when it does. Packets are normally held temporarily in buffers at the source and the destination before being passed to their upper layers; congestion occurs when these buffers fill on the destination side. Looking at it from the destination's perspective: if the source sends more packets than the destination buffer can hold, congestion results.
What happens when congestion occurs? The destination has only two options for an arriving packet: drop it or keep it. If the destination drops the newly arriving packets and keeps the old ones, the mechanism is called the wine model; if it drops the old packets and replaces them with new ones, it is called the milk model. In both cases packets are dropped.
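The two drop policies amount to different behaviour when a bounded buffer is full. A minimal sketch, assuming a buffer measured in packets and the policy names used above:

```python
from collections import deque

def enqueue(buffer, packet, capacity, policy):
    """Toy bounded buffer. When full:
    'wine' -> keep the old packets, drop the new arrival (tail drop)
    'milk' -> drop the oldest packet to make room for the new one
    Returns the dropped packet, or None if nothing was dropped."""
    if len(buffer) < capacity:
        buffer.append(packet)
        return None
    if policy == "wine":
        return packet                 # new arrival discarded
    dropped = buffer.popleft()        # 'milk': oldest discarded
    buffer.append(packet)
    return dropped

buf = deque([1, 2, 3])
enqueue(buf, 4, 3, "wine")   # buffer stays [1, 2, 3]; packet 4 is lost
enqueue(buf, 4, 3, "milk")   # buffer becomes [2, 3, 4]; packet 1 is lost
```

Either way one packet per overflow is lost; the policies differ only in whether the loss falls on old or new data.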
How do you detect congestion? Two common ways to detect congestion are timeout and duplicate acknowledgement.