
Sunday, April 1, 2012

IP Multicast Routing Lab

IP multicast routing must first be enabled using the ip multicast-routing global configuration command regardless of the multicast routing protocol implemented.

Sample IP Multicast Network


PIM Dense Mode Configuration

PIM-DM Configuration on RT1, RT2, and RT3
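
A minimal sketch of the PIM-DM configuration on RT1 (RT2 and RT3 are similar; the interface assignments are assumptions based on the outputs below):
RT1(config)#ip multicast-routing
RT1(config)#interface FastEthernet0/0
RT1(config-if)#ip pim dense-mode
RT1(config-if)#interface FastEthernet1/0
RT1(config-if)#ip pim dense-mode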

The following commands are often used in lab environments where no real multicast servers and receivers are available; they can be used to attract multicast traffic to a network segment.
  • Issue the ip igmp join-group {group-address} interface subcommand to configure a router to join and become a member of the specified group. Note that accepting multicast packets prevents the router from performing fast switching. Once the router becomes a multicast group member and supports the protocol being transmitted to the group, eg: ICMP Echo Request packets, it can respond to requests addressed to the group.
  • Issue the ip igmp static-group {group-address} interface subcommand to configure a router as a statically connected member of the specified group. The router does not accept the packets destined for the group, but only forwards them; hence, fast switching is still allowed. The outgoing interface appears in the IGMP cache, but the router itself is not a member, as evidenced by the lack of an L (local) flag in the multicast route entry in the output of the show ip mroute EXEC command.

Below shows the status of RT1 upon PC1 joining the multicast group 224.11.22.33:
RT1#sh ip igmp groups
IGMP Connected Group Membership
Group Address    Interface                Uptime    Expires   Last Reporter
224.0.1.40       FastEthernet0/0          00:07:16  00:02:53  192.168.1.1
RT1#
======================================================================
PC1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
PC1(config)#int fa0/0
PC1(config-if)#ip igmp join-group 224.11.22.33
PC1(config-if)#do sh ip igmp groups
IGMP Connected Group Membership
Group Address    Interface                Uptime    Expires   Last Reporter
224.11.22.33     FastEthernet0/0          00:00:30  stopped   192.168.1.11
PC1(config-if)#
======================================================================
RT1#sh ip igmp groups
IGMP Connected Group Membership
Group Address    Interface                Uptime    Expires   Last Reporter
224.11.22.33     FastEthernet0/0          00:00:09  00:02:50  192.168.1.11
224.0.1.40       FastEthernet0/0          00:07:37  00:02:32  192.168.1.1
RT1#
RT1#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.11.22.33), 00:00:11/00:02:48, RP 0.0.0.0, flags: DC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet1/0, Forward/Dense, 00:00:11/00:00:00
    FastEthernet0/0, Forward/Dense, 00:00:11/00:00:00

(*, 224.0.1.40), 00:07:09/00:02:29, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet1/0, Forward/Dense, 00:06:03/00:00:00
    FastEthernet0/0, Forward/Dense, 00:07:09/00:00:00

RT1#

Below shows that all members of the multicast group reply to the ICMP Echo Requests destined for the multicast group.
PC1#ping 224.11.22.33

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.11.22.33, timeout is 2 seconds:

Reply to request 0 from 192.168.1.11, 4 ms
Reply to request 0 from 192.168.2.22, 128 ms
Reply to request 0 from 192.168.3.33, 128 ms
PC1#

Enabling PIM on an interface also enables IGMPv2 operation on the interface. The ip igmp version {1 | 2 | 3} interface subcommand changes the IGMP version for a particular interface.

There are 2 versions of the PIM protocol – PIMv1 and PIMv2. PIMv2 is used on router interfaces by default. The ip pim version {1 | 2} interface subcommand changes the version.
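
For example, the defaults can be overridden per interface (the version values below are arbitrary, for illustration only):
RT1(config)#interface FastEthernet0/0
RT1(config-if)#ip igmp version 3
RT1(config-if)#ip pim version 1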

The show ip mroute [group-addr] [summary] [count] [active [kbps]] EXEC command, which displays the IP multicast routing table, is the most useful command for determining the state of multicast sources and groups from the router's perspective. It represents a part of the multicast distribution tree with an incoming interface and a list of outgoing interfaces. The summary keyword displays a one-line, abbreviated summary of each entry in the IP multicast routing table. The count keyword displays statistics about the group and source, including the number of packets, packets per second, average packet size, and bytes per second. The active parameter displays the rate at which active sources are sending to multicast groups; active sources are those sending at the rate specified as the kbps value or higher (the default value is 4 kbps).
RT1#sh ip mroute summary
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.11.22.33), 00:01:11/stopped, RP 0.0.0.0, OIF count: 2, flags: DC
  (192.168.1.11, 224.11.22.33), 00:00:08/00:02:59, OIF count: 1, flags: T
  (192.168.2.22, 224.11.22.33), 00:00:06/00:02:59, OIF count: 1, flags: T
  (192.168.3.33, 224.11.22.33), 00:00:04/00:02:59, OIF count: 1, flags: T

(*, 224.0.1.40), 00:03:09/00:03:29, RP 0.0.0.0, OIF count: 2, flags: DCL

RT1#
RT1#sh ip mroute count
IP Multicast Statistics
5 routes using 3074 bytes of memory
2 groups, 1.50 average sources per group
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group: 224.11.22.33, Source count: 3, Packets forwarded: 3, Packets received: 3
  Source: 192.168.1.11/32, Forwarding: 1/0/100/0, Other: 1/0/0
  Source: 192.168.2.22/32, Forwarding: 1/0/100/0, Other: 1/0/0
  Source: 192.168.3.33/32, Forwarding: 1/0/100/0, Other: 1/0/0

Group: 224.0.1.40, Source count: 0, Packets forwarded: 0, Packets received: 0
RT1#
RT1#sh ip mroute active
Active IP Multicast Sources - sending >= 4 kbps

Group: 224.11.22.33, (?)
   Source: 192.168.1.11 (?)
     Rate: 8 pps/6 kbps(1sec), 2 kbps(last 40 secs), 2 kbps(life avg)
RT1#

Interpret the multicast forwarding entries in the IP multicast routing table in the following way:
  • (S, G) – For the source S sending to the group G; traffic is forwarded from the source via the shortest path. These entries typically reflect a source tree, but may also appear on a shared tree. (S, G) entries consume more router CPU and memory resources as there is an entry for each source and group pair. The multicast traffic is sent over the optimal path to each receiver, hence minimizing the delay in packet delivery.
  • (*, G) – For any source (*) sending to the group G; traffic is forwarded via an RP for this group. These entries reflect a shared tree, but are also created for any existing (S, G) entry. (*, G) entries consume less router CPU and memory resources, but may result in suboptimal paths from a source to receivers, hence introducing extra delay in packet delivery.
The show ip pim interface [intf-type intf-num] [count | detail | stats] EXEC command displays information about interfaces configured for PIM. The optional count keyword displays the number of multicast packets received on and sent out of the interface. The optional detail keyword displays PIM details of each interface. The optional stats keyword displays multicast PIM interface octet counts.
RT1#sh ip pim interface

Address          Interface                Ver/   Nbr    Query  DR     DR
                                          Mode   Count  Intvl  Prior
192.168.1.1      FastEthernet0/0          v2/D   0      30     1      192.168.1.1
12.12.12.1       FastEthernet1/0          v2/D   1      30     1      12.12.12.2
RT1#
RT1#sh ip pim interface count

State: * - Fast Switched, D - Distributed Fast Switched
       H - Hardware Switching Enabled
Address          Interface                FS  Mpackets In/Out
192.168.1.1      FastEthernet0/0          *   1/2
12.12.12.1       FastEthernet1/0          *   15/1
RT1#
RT1#sh ip pim interface stats

Interface        Mpackets In    Mpackets Out         Octets In        Octets Out
Fa0/0                      1               2               100               200
Fa1/0                     15               1               980               100
RT1#
Note: If all the routers on a multi-access link have the same priority (the default value is 1), the highest IP address is the tiebreaker. Point-to-point links do not have DRs (shown as 0.0.0.0). The PIM DR has 2 major responsibilities – send IGMP queries onto the LAN; and if PIM sparse mode is running, transmit PIM Join and Register messages to the RP.
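
On IOS versions that support the PIMv2 DR priority option, the DR election can also be influenced per interface (a sketch; the priority value is arbitrary):
RT1(config)#interface FastEthernet0/0
RT1(config-if)#ip pim dr-priority 100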

The show ip pim neighbor [intf-type intf-num] EXEC command displays the PIM neighbors discovered by PIMv1 Router-Query messages or PIMv2 Hello messages.
RT1#sh ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
12.12.12.2        FastEthernet1/0          00:11:36/00:01:39 v2    1 / DR S
RT1#

The mrinfo [hostname | ip-addr] EXEC command displays information about multicast routers that are peering with the local router (if no address is specified) or with the specified router.
RT1#mrinfo
192.168.1.1 [version  12.3] [flags: PMA]:
  192.168.1.1 -> 0.0.0.0 [1/0/pim/querier/leaf]
  12.12.12.1 -> 12.12.12.2 [1/0/pim]

RT1#
The flags indicate the following:

P Prune capable
M mtrace capable
S SNMP capable
A Auto-RP capable

The Cisco IOS multicast traceroute tool – the mtrace {source-name | source-addr} [dest-name | dest-addr [group-name | group-addr]] EXEC command – traces the path from a source to a destination branch of a multicast distribution tree. The trace request generated by the mtrace command is multicast to the multicast group to find the last-hop router to the specified destination. The trace then follows the multicast path from destination to source by passing the mtrace request packet via unicast to each hop. Responses are unicast to the querying router by the first-hop router to the source. This command can help isolate multicast routing failures.
RT1#mtrace 192.168.1.11 192.168.1.11 224.11.22.33
Type escape sequence to abort.
Mtrace from 192.168.1.11 to 192.168.1.11 via group 224.11.22.33
From source (?) to destination (?)
Querying full reverse path...
 0  192.168.1.11
-1  192.168.1.1 PIM  [192.168.1.0/24]
-2  192.168.1.11
RT1#
RT1#mtrace 192.168.1.11 192.168.2.22 224.11.22.33
Type escape sequence to abort.
Mtrace from 192.168.1.11 to 192.168.2.22 via group 224.11.22.33
From source (?) to destination (?)
Querying full reverse path...
 0  192.168.2.22
-1  12.12.12.2 PIM  [192.168.1.0/24]
-2  12.12.12.1 PIM  [192.168.1.0/24]
-3  192.168.1.11
RT1#
RT1#mtrace 192.168.1.11 192.168.3.33 224.11.22.33
Type escape sequence to abort.
Mtrace from 192.168.1.11 to 192.168.3.33 via group 224.11.22.33
From source (?) to destination (?)
Querying full reverse path...
 0  192.168.3.33
-1  23.23.23.3 PIM  [192.168.1.0/24]
-2  23.23.23.2 PIM  [192.168.1.0/24]
-3  12.12.12.1 PIM  [192.168.1.0/24]
-4  192.168.1.11
RT1#

The mstat {source-name | source-addr} [dest-name | dest-addr [group-name | group-addr]] EXEC command displays IP multicast packet rate and loss information.
RT1#mstat
VRF name:
Source address or name: 192.168.1.11
Destination address or name: 192.168.2.22
Group address or name: 224.11.22.33
Multicast request TTL [64]: [Enter]
Response address for mtrace: [Enter]
Type escape sequence to abort.
Mtrace from 192.168.1.11 to 192.168.2.22 via group 224.11.22.33
From source (?) to destination (?)
Waiting to accumulate statistics......
Results after 10 seconds:

  Source        Response Dest   Packet Statistics For     Only For Traffic
192.168.1.11    192.168.1.1     All Multicast Traffic     From 192.168.1.11
     |       __/  rtt 27   ms   Lost/Sent = Pct  Rate     To 224.11.22.33
     v      /     hop 2435 ms   ---------------------     --------------------
192.168.1.1
12.12.12.1      ?
     |     ^      ttl   0
     v     |      hop 0    ms    -3/0 = --%      0 pps    0/0 = --%  0 pps
12.12.12.2      ?
     |      \__   ttl   1
     v         \  hop -2   s         0         0 pps           0    0 pps
192.168.2.22    192.168.1.1
  Receiver      Query Source

RT1#


PIM Sparse Mode Configuration

PIM-SM Configuration on RT1, RT2, and RT3
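
A minimal sketch of the PIM-SM configuration on RT1 (RT2 and RT3 are similar; the static RP address 192.168.2.1 matches the outputs below, while the interface assignments are assumptions):
RT1(config)#ip multicast-routing
RT1(config)#interface FastEthernet0/0
RT1(config-if)#ip pim sparse-mode
RT1(config-if)#interface FastEthernet1/0
RT1(config-if)#ip pim sparse-mode
RT1(config-if)#exit
RT1(config)#ip pim rp-address 192.168.2.1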

Below shows the IP multicast routing table on RT1 after PC1, PC2, and PC3 have pinged 224.11.22.33.
RT1#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.11.22.33), 00:09:11/stopped, RP 192.168.2.1, flags: SJCF
  Incoming interface: FastEthernet1/0, RPF nbr 12.12.12.2
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:09:11/00:02:55

(192.168.1.11, 224.11.22.33), 00:00:18/00:03:24, flags: FT
  Incoming interface: FastEthernet0/0, RPF nbr 0.0.0.0, Registering
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:00:18/00:03:20, A

(192.168.2.22, 224.11.22.33), 00:00:48/00:02:54, flags: JT
  Incoming interface: FastEthernet1/0, RPF nbr 12.12.12.2
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:00:48/00:02:55

(192.168.3.33, 224.11.22.33), 00:00:39/00:02:24, flags: JT
  Incoming interface: FastEthernet1/0, RPF nbr 12.12.12.2
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:00:39/00:02:55

(*, 224.0.1.40), 00:10:28/00:02:53, RP 192.168.2.1, flags: SJCL
  Incoming interface: FastEthernet1/0, RPF nbr 12.12.12.2
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:10:28/00:02:53

RT1#

The configuration above uses the ip pim rp-address {rp-addr} global configuration command to statically define RT2 as the RP for the multicast groups operating in PIM sparse mode.
The ip pim send-rp-announce {intf-type intf-num | ip-addr} scope {ttl} [group-list acl] global configuration command uses the Auto-RP mechanism to define a router as a candidate RP and distribute Group-to-RP mappings. This command causes the router to send an Auto-RP announcement message to the well-known Cisco-RP-Announce multicast group (224.0.1.39), announcing the router as a candidate RP for the groups in the range described by the access list.
Use the ip pim send-rp-discovery [intf-type intf-num] scope {ttl} global configuration command to configure a router as an RP mapping agent. An RP mapping agent listens to the 224.0.1.39 multicast group to receive Auto-RP announcement messages, stores them in its local Group-to-RP mapping cache, uses the information in the Auto-RP announcement messages to elect the RP (highest IP address), and sends Auto-RP discovery messages to the 224.0.1.40 Cisco-RP-Discovery multicast group. Other PIM routers, which join the 224.0.1.40 multicast group by default, automatically discover the RP from the RP mapping agent and store the information about the RP in their local Group-to-RP mapping caches.

The ip pim spt-threshold {rate | infinity} [group-list acl] global configuration command configures when a PIM leaf router should switch over from the shared tree to the shortest path source tree in PIM sparse mode. When this command is not used, the PIM leaf router joins the shortest path source tree immediately after the first packet arrives from a new source. The infinity keyword indicates that the switchover will never occur – the shared tree will always be used. If a source sends at a rate ≥ the rate value (in kbps), a PIM Join message is triggered toward the source to construct a source tree. When the traffic rate from the source drops below the threshold traffic rate, the leaf router switches back to the shared tree and sends a Prune message toward the source. Specify an access list to define the multicast groups to which the threshold applies.
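
As an illustration, the following hypothetical sketch keeps the group 224.11.22.33 on the shared tree permanently (the access list number is arbitrary):
RT1(config)#access-list 10 permit 224.11.22.33
RT1(config)#ip pim spt-threshold infinity group-list 10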

The RP for a multicast group operating in PIM sparse mode must be reachable from every router. The show ip pim rp [group-name | group-addr | mapping] EXEC command is often used to troubleshoot the RP in addition to the standard tools, eg: unicast ping to check the RP reachability. This command displays RP information for active groups or for the specified group name or group address. The mapping keyword displays the contents of the Group-to-RP mapping cache, indicating which RP is active for which group range. The mapping cache is populated via static RP assignments, Auto-RP, or the BSR mechanism. It contains detailed information such as the IP address of the router that distributed the information (or local, when the source of the information is the local router that either has manual RP configuration or is a source of automatically distributed information); the mechanism by which the information was determined – Static, Auto-RP, or BSR; and whether the router is operating as a candidate RP, mapping agent, or BSR.
RT2#sh ip pim rp
Group: 224.11.22.33, RP: 192.168.2.1, next RP-reachable in 00:00:59
Group: 224.0.1.40, RP: 192.168.2.1, next RP-reachable in 00:00:59
RT2#
RT2#sh ip pim rp mapping
PIM Group-to-RP Mappings

Group(s): 224.0.0.0/4, Static
    RP: 192.168.2.1 (?)
RT2#

The show ip rpf {ip-addr | name} EXEC command displays RPF information for the RP or any source specified. The specified address does not have to be a currently active source. Specifying the address of the RP is useful in determining the RPF information for the shared tree.
RT2#sh ip rpf 192.168.2.1
RPF information for ? (192.168.2.1)
  RPF interface: FastEthernet0/0
  RPF neighbor: ? (192.168.2.1) - directly connected
  RPF route/mask: 192.168.2.0/24
  RPF type: unicast (connected)
  RPF recursion count: 0
  Doing distance-preferred lookups across tables
RT2#
RT2#sh ip rpf 192.168.1.11
RPF information for ? (192.168.1.11)
  RPF interface: FastEthernet1/0
  RPF neighbor: ? (12.12.12.1)
  RPF route/mask: 192.168.1.0/24
  RPF type: unicast (eigrp 100)
  RPF recursion count: 0
  Doing distance-preferred lookups across tables
RT2#

The RPF interface is the interface in the direction of the source or RP, while the RPF neighbor is the address of the next-hop router in the direction of the source or RP. The RPF type indicates the source of the RPF information; unicast indicates that the information was derived from the unicast routing table; other RPF types include DVMRP, Multiprotocol Border Gateway Protocol (MBGP) Extensions for IP Multicast, and static. RPF information is essential in multicast routing and requires attention when inspecting PIM-SM information because of the possible coexistence of shared and source trees.


IGMP Snooping

IGMP snooping is enabled on all switch ports and VLAN interfaces by default on Cisco Catalyst switch platforms that support it. The [no] ip igmp snooping global configuration command enables or disables IGMP snooping respectively.
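
For example, IGMP snooping can be disabled globally or only for a particular VLAN – the two commands below are independent alternatives (VLAN 10 is a hypothetical example):
Switch(config)#no ip igmp snooping
Switch(config)#no ip igmp snooping vlan 10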

Below shows the output of the show ip igmp snooping EXEC command on a Catalyst 2950. It displays the IGMP snooping configuration of the switch.
Switch#sh ip igmp snooping
Global IGMP Snooping configuration:
-----------------------------------
IGMP snooping              : Enabled
IGMPv3 snooping (minimal)  : Enabled
Report suppression         : Enabled
TCN solicit query          : Disabled
TCN flood query count      : 2
Last member query interval : 1000

Vlan 1:
--------
IGMP snooping                       : Enabled
Immediate leave                     : Disabled
Multicast router learning mode      : pim-dvmrp
Source only learning age timer      : 10
Last member query interval          : 1000
CGMP interoperability mode          : IGMP_ONLY

Switch#

Friday, March 30, 2012

Protocol Independent Multicast (PIM)

A router performs a routing table lookup on the destination address and forwards a unicast packet out the appropriate interface. However, a router may have to forward a multicast packet out multiple interfaces. PIM is commonly implemented on multicast routers to dynamically build the distribution trees that determine the paths to deliver the multicast traffic to all receivers.

Multicast routers consider both the source and destination addresses of a multicast packet and use the distribution tree to forward the packet away from the source toward the destination. Below describes the 2 types of distribution trees:
  • Source Tree – created for each source that is sending traffic to each multicast group. It has its root at the source and has branches throughout the network to the receivers. Source trees are also known as source-routed or shortest path trees (SPTs) as the tree takes a direct and shortest path from the source to its receivers.
  • Shared Tree – a single tree that is shared between all sources for each multicast group. A shared tree has a single common root known as the Rendezvous Point (RP). Sources initially send their multicast packets to the RP, which in turn forwards data through a shared tree to the members of the group.
Source Distribution Tree

The figure above shows 2 source trees between Source 1, Receiver 1, and Receiver 2; as well as between Source 2, Receiver 1, and Receiver 2. The path between the source and receivers is the path with the lowest cost. Packets are forwarded according to the source and group address pair along the tree. The forwarding state associated with the source tree is identified using the notation (S, G) (pronounced as “S comma G”), where S is the IP address of the source and G is the multicast group address. A separate unique tree is built for every source S sending to group G.

Shared Distribution Tree

The figure above shows a shared distribution tree. RT3, the RP, is the root of the shared tree. The tree is built from RT3 to RT5 and RT6 toward Receiver 1 and Receiver 2. Source 1 and Source 2 send multicast packets toward the RP via source distribution trees; the packets are then forwarded from the RP to the receivers according to the shared distribution tree. The default forwarding state for the shared tree is identified using the notation (*, G) (pronounced as “star comma G”); where * is a wildcard entry meaning any source, and G is the multicast group address. The root is not necessarily the multicast source – it is a centrally located router in the network, called the Rendezvous Point (RP).

Think of the multicast forwarding paths as a tree structure. The source resides at the root of the tree and blindly sends IP packets to a multicast address; it is never aware of the recipients that are members of a multicast group. The source depends upon multicast routers and switches to deliver the multicast packets. The multicast routers or switches that reside at a branch of the tree replicate the multicast packets out interfaces that have downstream recipients.

Reverse Path Forwarding (RPF) is the concept of forwarding multicast traffic away from the source, rather than toward the receiver – opposite of the normal unicast packet forwarding. RPF ensures that multicast packets are not being replicated back into the network in order to avoid routing loops. In multicasting, the source IP address indicates the known source, while the destination IP address indicates a group of unknown receivers.
“toward the destination” and “away from the source” sound like the same thing, but they are not!

Multicast routers use the unicast routing table to determine the upstream (toward the source) and downstream (away from the source) neighbors and ensure that only one router interface is considered the incoming interface for a specific multicast source – they verify that the packet is received on the same interface used to reach the source. If this is true, the packet can be forwarded or replicated toward the multicast recipients; if it is not true, the packet is discarded. A packet received on one interface and forwarded out another might be replicated around the network and arrive back at the same router on a different interface; RPF ensures that such a packet is not forwarded again.

PIM Dense Mode (PIM-DM) is defined in RFC 3973 – Protocol Independent Multicast – Dense Mode (PIM-DM): Protocol Specification (Revised). PIM-DM uses a push approach that uses source trees to flood multicast traffic to the entire network. Routers that do not need the data (because they are not connected to receivers that want the data or to other routers that want it) request to prune the tree so that they do not receive the multicast packets.

PIM-DM Initial Flooding and Pruning

PIM-DM initially floods multicast traffic throughout the entire network. The traffic is sent out of all non-RPF interfaces where there is another PIM-DM neighbor or a directly connected member. As each router receives the multicast traffic via its RPF interface (the interface in the direction of the source), it forwards the multicast traffic to all its PIM-DM neighbors. The (S, G) state entry is created in every router in the network.

The flooding may result in some traffic arriving on a non-RPF interface, as with RT3 and RT6. Packets arriving via non-RPF interfaces are discarded. PIM-DM Prune messages are sent to stop unwanted traffic when there is no host registered for the multicast group using IGMP. Prune messages are sent out of an RPF interface when the router has no downstream receivers for multicast traffic from the specific source. Prune messages are also sent out of non-RPF interfaces to terminate the multicast traffic flow, as it is arriving via an interface that is not on the shortest path to the source. PIM-DM Prune messages are sent to 224.0.0.13, the All-PIM-Routers multicast address.

There is only one receiver in the scenario above, and therefore all other paths are pruned. Although the multicast traffic flow does not reach and pass through most routers in the network, the (S, G) state entry remains in all routers and will remain there until the source stops sending. In PIM-DM, all prune messages expire in 3 minutes. After that, the multicast traffic is flooded again to all routers. This periodic flood-then-prune operation or behavior is normal and must be taken into account when a network is intended to use PIM-DM.

PIM-DM routers assume that the recipients of a multicast group are located on every subnet – the multicast group is densely populated across the network; few senders, but many receivers; there will be a great amount of multicast traffic; and the multicast streams will be constant.

PIM-DM Multicast Traffic Flow after Pruning

PIM Sparse Mode (PIM-SM) is defined in RFC 2362 – Protocol Independent Multicast – Sparse Mode (PIM-SM): Protocol Specification. PIM-SM uses a pull approach instead, forwarding multicast traffic only to the portions of the network that need it. It uses a shared tree and therefore requires an RP to be defined. In sparse mode, sources register with the RP, and multicast routers along the path from active receivers that have explicitly requested to join a specific multicast group join the tree – the multicast tree is not extended to a router unless a host there has joined the group. The multicast tree is built and grown in reverse, beginning with the group members at the end leaves and extending back toward the central root. Multicast routers calculate, using the unicast routing table, whether they have a better metric to the RP or to the source itself; they forward the Join messages to the device with which they have the better metric.

Sparse mode multicast flows are described as (*, G) as the multicast tree allows any source to send to a group. As a receiver joins a multicast group via IGMP, the local router forwards the membership report toward the RP at the root of the tree. Each router along the way adds that branch to the shared tree. Pruning is performed only when a member leaves the group.

PIM-SM Shared Tree Join

When a receiver attached to the leaf router RT6 joins the multicast group G, the last-hop router – RT6, which knows the IP address of the RP for multicast group G – sends a (*, G) Join for the group toward the RP. The PIM Join travels hop-by-hop toward the RP to build a branch of the shared tree that extends from the RP to the last-hop router directly connected to the receiver. The traffic of multicast group G may then flow down the shared tree to the receiver. The (*, G) state entry is created only along the shared tree.

Both PIM-DM and PIM-SM modes construct identical tree structures and therefore result in the same multicast traffic flow patterns. PIM-SM is appropriate for wide-scale deployment for both densely and sparsely populated groups in an enterprise network. PIM-SM is preferred over PIM-DM for all production networks regardless of size and membership density.

PIM Sparse-Dense Mode allows a PIM router to operate in both sparse and dense modes on a per-group basis on the same router interface. Sparse mode is used if a group has an RP defined; otherwise, dense mode is used. PIM sparse-dense mode also supports automatic RP discovery. Multiple RPs can be implemented, with each RP in an optimum location for maximum efficiency. Configuring, managing, and troubleshooting multiple RPs can be difficult if done manually. However, PIM sparse-dense mode supports automatic selection of an RP for each multicast group, eg: RT1 could be the RP for the group that Source 1 sends to, and RT2 could be the RP for the group that Source 2 sends to.
If no RP is discovered for the multicast group or none is manually configured, PIM sparse-dense mode will operate in dense mode. Therefore, automatic RP discovery should be implemented with PIM sparse-dense mode.

Cisco recommends PIM sparse-dense mode for IP multicast, as PIM-DM does not scale well and requires many router resources, and PIM-SM has limited RP configuration options. Additionally, it can use either statically defined RPs, Auto-RP, or BSR with the least configuration effort.

Below are some extensions, optimizations, and enhancements upon PIM:
  • Bidirectional PIM Mode, which is designed for many-to-many applications – many hosts multicasting to each other.
  • Source-Specific Multicast (SSM), which is a variant of PIM-SM that builds only source specific shortest path trees and does not need an active RP for source-specific groups in the address range 232.0.0.0/8.
Comparison of PIM modes:

Mode                 Multicast Flows    Tree Construction                  Tree Refinements
Dense Mode           (S, G)             Root to leaves. Source is the      First flood, then prune.
                                        root. Receivers are the leaf
                                        nodes.
Sparse Mode          (*, G)             Leaves to root. RP is the root.    Tree extended from receivers
                                        Source can be anywhere.            toward RP. Pruning only when
                                        Receivers are the leaf nodes.      a member leaves the group.
Sparse-Dense Mode    (S, G) or (*, G)   Hybrid on a per-group basis        N/A

The ip pim dense-mode interface subcommand configures PIM dense mode on an interface.
The ip pim sparse-mode interface subcommand configures PIM sparse mode on an interface.
The ip pim sparse-dense-mode interface subcommand configures PIM sparse-dense mode on an interface.

PIMv1 RPs can be configured manually or via the dynamic Auto-RP process. The ip pim rp-address {ip-addr} [access-list] [override] global configuration command manually identifies an RP. An access list limits the range of multicast groups supported by the RP. The override keyword causes the RP to be preferred over any automatically determined RP. Because the RP does not advertise itself, its address and function must be defined on every router in the PIM domain, including the RP itself. Future changes to the RP location are difficult, as every router must be reconfigured with the new RP address.
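
A hypothetical sketch that statically defines 10.0.0.1 as the preferred RP only for the administratively-scoped groups (the RP address and access list number are illustrative):
RT1(config)#access-list 20 permit 239.0.0.0 0.255.255.255
RT1(config)#ip pim rp-address 10.0.0.1 20 override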

Auto-RP is a Cisco-proprietary process that automatically informs PIM-SM routers about the appropriate RP for a group. A centrally located and well-connected router is identified to function as the mapping agent, which joins the Cisco-RP-Announce multicast group 224.0.1.39 to learn about all the candidate RPs announced through it.

The ip pim send-rp-discovery [intf-type intf-num] scope {ttl} global configuration command configures a router as an RP mapping agent. The optional intf-type intf-num defines the interface whose address is to be used as the source address of the RP mapping agent; the ttl parameter specifies the Time-to-Live (TTL) value that limits the scope of the Auto-RP discovery messages – how many router hops away the information will reach and remain valid. The RP mapping agent sends Group-to-RP mapping information to all PIM routers over the Cisco-RP-Discovery multicast address 224.0.1.40.

Each candidate RP router must then be explicitly defined with the ip pim send-rp-announce {intf-type intf-num | ip-addr} scope {ttl} [group-list acl] global configuration command. A router begins sending announcements to the RP mapping agent when it knows it can be an RP. The specified interface determines the advertised candidate RP address; the TTL limits the scope of the Auto-RP announcements by the number of router hops. The router can also advertise itself as a candidate RP only for the multicast groups permitted through the optional group-list access list. The default announcement interval is 60 seconds.
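
A minimal Auto-RP sketch that combines the candidate RP and mapping agent roles on one router (the Loopback0 interface, access list number, and scope values are assumptions for illustration):
RT2(config)#access-list 30 permit 239.0.0.0 0.255.255.255
RT2(config)#ip pim send-rp-announce Loopback0 scope 16 group-list 30
RT2(config)#ip pim send-rp-discovery Loopback0 scope 16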

PIMv2 also includes an industry-standard dynamic Group-to-RP mapping advertisement mechanism that is known as Bootstrapping, which is similar to the Cisco Auto-RP method.

A bootstrap router (BSR) that learns about RP candidates for a group and advertises them to PIM routers must first be identified using the ip pim bsr-candidate {intf-type intf-num[1]} [hash-mask-length] [priority] global configuration command; followed by defining the candidate RP routers that advertise themselves to the BSR as PIMv2 candidate RPs using the ip pim rp-candidate {intf-type intf-num[2]} [ttl] [group-list acl] [priority priority] global configuration command. The priority value ranges from 0 to 255. The BSR or RP with the larger priority is preferred. The router with the higher IP address becomes the BSR or RP if the priority values are the same.
Note: The Cisco IOS implementation of PIM BSR which predates the draft-ietf-pim-sm-bsr IETF draft uses the value 0 as the default priority for candidate RPs and BSRs. Explicitly set the priority value to 192 to comply with the IETF draft that specifies 192 as the default priority value.
[1] – The IP address associated with this interface determines the candidate BSR address.
[2] – Advertises the IP address associated with this interface as the candidate RP address.
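
A minimal BSR sketch based on the commands above (the Loopback0 interface and access list number are assumptions; priority 192 follows the note on the IETF draft default):
RT2(config)#access-list 30 permit 239.0.0.0 0.255.255.255
RT2(config)#ip pim bsr-candidate Loopback0 30 192
RT2(config)#ip pim rp-candidate Loopback0 group-list 30 priority 192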

Once the BSR and candidate RPs are configured, all other PIM routers will learn the appropriate RP from the BSR. The selection of RP for a group is based on a hashing function. The length of the hash mask controls the number of multicast groups that are being hashed to the same RP.

The bootstrap messages are propagated throughout the entire PIM domain by default. The scope of the advertisements can be limited by defining PIMv2 border routers using the ip pim border interface subcommand.

A small network with only one or a few L2 or L3 switches and without a multicast router always supports multicast. When a host sends an IGMP Membership Report to join a multicast group, it does not know about multicast routers at all; it just sends out a request to join and hopes that it will start receiving traffic destined for the multicast group. Even if a multicast router is present, it does not send a reply to a host upon joining a multicast group. A multicast router only sends out Membership Queries periodically, asking whether the hosts still want to remain members of a group. In such a small network, L2 switches simply flood the multicast traffic out all ports on a VLAN; CGMP is not in action to prune the multicast traffic. L3 switches can use IGMP snooping to constrain the flooding of multicast traffic.

Thursday, March 29, 2012

IGMP Snooping and Cisco Group Management Protocol (CGMP)

Usually hosts are not directly connected to routers but are connected to switches instead. IGMP is an L3 network layer protocol; switches that operate at the L2 data link layer do not understand or participate in IGMP and hence are not aware of which directly connected hosts are members of which particular multicast groups. L2 switches flood multicast frames out all ports within a VLAN by default (except the port from which the frame originated), the same way that unknown unicast frames are flooded – even if only one host on a single port requires the multicast data stream.

A Cisco Catalyst switch can be configured to manually associate a multicast MAC address with multiple ports, so that it forwards frames destined for a particular multicast group only out those ports. However, this method is not scalable, as IP multicast hosts dynamically join and leave groups.

IGMP snooping allows a switch to eavesdrop on the IGMP Membership Reports sent from hosts to the local multicast routers when they are joining multicast groups, and to update its MAC address table or Content Addressable Memory (CAM) table accordingly. The switch must understand IGMP to process IGMP Membership Reports and IGMP Leave Group messages. Catalyst switches that support IGMP snooping have specific built-in ASIC hardware that enhances the processing of IGMP messages, which in turn directly affects the overall cost.

Note: IGMP snooping is not supported on all Cisco Catalyst switch platforms. Google for the Multicast Catalyst Switches Support Matrix for details on IGMP snooping and CGMP support.

CGMP was developed for Cisco Catalyst switch platforms that do not support IGMP snooping. CGMP is a Cisco-proprietary protocol that operates between a multicast router and a switch. A multicast router informs all its directly connected switches regarding the IGMP registrations received from hosts through the switches; the switches then forward multicast frames only out ports to which the requesting hosts are connected, rather than flooding the frames out all ports.
Note: A multilayer switch configured for multicast routing also can be configured for CGMP.

CGMP messages are sent to the well-known address 0100.0cdd.dddd, which is flooded everywhere as a special case so that CGMP messages can propagate across non-CGMP switches. A CGMP message originated from a multicast router contains the request type (join or leave), the L2 multicast MAC address, and the actual MAC address of the host.

IGMP and CGMP

IGMP is used between a host and its local router, while CGMP is used between a router and a switch. CGMP is disabled on all interfaces on multicast routers by default. The ip cgmp interface subcommand enables CGMP on an interface. Only the multicast router must be configured for CGMP; all Catalyst switches automatically process CGMP messages from routers by default.
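
A minimal sketch of the router-side configuration (PIM must already be enabled on the interface; dense mode here is an arbitrary choice):
RT1(config)#interface FastEthernet0/0
RT1(config-if)#ip pim dense-mode
RT1(config-if)#ip cgmp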

IGMP snooping and CGMP are mutually exclusive – they cannot be used at the same time on a switch. IGMP snooping is enabled by default on switches that have IGMP snooping capability, while CGMP is enabled by default on switches that do not support IGMP snooping. Always remember to enable CGMP on the upstream multicast routers or multilayer switches in networks that have legacy L2 switches, for efficient forwarding of multicast traffic.

Internet Group Management Protocol (IGMP)

Hosts use the Internet Group Management Protocol (IGMP) to register with their local routers to join and leave specific multicast groups. The routers are then aware that they need to forward the multicast data stream destined for the registered hosts of a specific multicast group. Understanding the IGMP multicast group membership join and leave processes is important for configuring and troubleshooting IP multicasting.

RFC 1112 – Host Extensions for IP Multicasting defines IGMPv1, which specifies that multicast routers periodically send Membership Queries (every 60 to 120 seconds) to the All-Hosts multicast group 224.0.0.1. Hosts that are interested in receiving traffic for a specific multicast group send Membership Reports to the multicast address of the group they want to join. Hosts send Membership Reports either when they first join a multicast group or in response to Membership Queries originated by multicast routers (known as IGMPv1 queriers) to indicate that they want to continue receiving traffic destined for the group that they have joined.
Note: The source does not necessarily have to join a multicast group; it only needs to know the multicast address to send traffic to the receivers on the group.

With the report suppression process, only one member per group responds to a query on each subnet, in order to conserve bandwidth on the subnet and minimize processing by hosts.

IGMPv1 specifies that there must be at least one active member of a multicast group on a segment if multicast traffic is to be forwarded to that segment.

IGMPv1 does not define a mechanism for hosts to leave a multicast group; IGMPv1 hosts therefore leave a multicast group silently at any time, without notifying the local router. This is fine when multiple members remain on a segment, as the multicast traffic must still be delivered to the segment. However, when the last member on a segment leaves the multicast group, the router still forwards multicast traffic into the segment unnecessarily for a period of time (3 minutes), until it times out the group after 3 consecutive queries without a report from a host. This approach is inefficient when there are many groups or a lot of traffic in the groups.

RFC 2236 – Internet Group Management Protocol, Version 2 was introduced to overcome the limitations and restrictions discovered in IGMPv1. IGMPv2 is backward compatible with IGMPv1. IGMPv2 focuses on the issues of leave and join latencies as well as the ambiguities of IGMPv1.

Below summarizes the important changes in IGMPv2:
  • Group-Specific Query that allows a router to query membership for only a single group instead of all groups; an optimized method to quickly find out whether any members are left in a particular group without asking all groups to report. The difference between the General Query and the Group-Specific Query is that a General Query is multicast to the All-Hosts address 224.0.0.1, whereas a Group-Specific Query for a specific group is multicast to the multicast address of the group.
  • Leave Group message that allows hosts to inform their local routers when leaving a group. This reduces the leave latency for the group when the last member is leaving the group. The specification mentions the timing of when Leave Group message must be sent.
  • Querier Election mechanism that specifies that when there are multiple IGMP routers on the same segment (broadcast domain), the router with the lowest IP address is elected as the designated querier.
  • Query intervals and response times that are specified in a General or Group-Specific Query and are used to control the burstiness of membership reports. The query indicates the maximum response time within which the members must respond to a query by issuing a report.

IGMPv2 members do not need to wait for a query to join a multicast group; they send an unsolicited Membership Report to indicate their interest in joining a group. This process reduces the join latency when no other members exist in the group.

The output of the show ip igmp groups EXEC command below indicates that the multicast group 224.1.1.1 has been active on the Fa0/0 interface for 51 seconds, the multicast group expires and will be deleted in 2 minutes and 41 seconds if an IGMP Host Membership Report for the group is not received within that period, and the last host that reported its membership was PC2 (10.1.1.12) when it joined the multicast group.
RT1#sh ip igmp groups
IGMP Connected Group Membership
Group Address    Interface                Uptime    Expires   Last Reporter
224.1.1.1        FastEthernet0/0          00:00:51  00:02:41  10.1.1.12
224.0.1.40       FastEthernet0/0          00:01:20  00:02:10  10.1.1.1
RT1#

IGMPv1 hosts leave a group passively and silently – they just stop reporting their membership upon receiving membership queries. IGMPv2 introduces explicit Leave Group messages. When an IGMPv2 router receives a Leave Group message, it sends a Group-Specific Query for the multicast group to find out whether there are other hosts still interested in receiving traffic for the group. This process reduces the leave latency.

IGMPv2 – Leaving a Multicast Group

PC1 and PC2 are members of the multicast group 224.1.1.1. Host PC1 leaves the group by sending an IGMPv2 Leave Group message to the All-Routers multicast address 224.0.0.2. RT1 receives the Leave Group message and sends a Group-Specific Query to find out whether other members are still present in the group. PC2, which has not left the multicast group 224.1.1.1, responds with a Membership Report message to inform RT1 that it is still interested in receiving traffic destined for 224.1.1.1 and requests RT1 to continue forwarding traffic for the group. Below shows the output of the show ip igmp groups EXEC command after PC1 has left the group.
RT1#sh ip igmp groups
IGMP Connected Group Membership
Group Address    Interface                Uptime    Expires   Last Reporter
224.1.1.1        FastEthernet0/0          00:01:41  00:02:20  10.1.1.12
224.0.1.40       FastEthernet0/0          00:02:09  00:02:23  10.1.1.1
RT1#

When PC2 leaves the multicast group by sending an IGMPv2 Leave Group message, RT1 sends a Group-Specific Query and expects a report from other active group members. However, since PC2 is the last member of the multicast group 224.1.1.1, RT1 will not receive any Membership Report for 224.1.1.1; the group times out, and RT1 stops forwarding multicast traffic for the multicast group into the segment (broadcast domain).

RFC 3376 – Internet Group Management Protocol, Version 3 introduced support for multicast source filtering, which allows hosts to receive multicast traffic only from particular sources within a multicast group, resulting in more efficient utilization of routing resources.

Upon joining a multicast group, a host sends an IGMPv3 Membership Report to 224.0.0.22 (the IGMPv3 multicast address), which may specify a source list to be used for source filtering. A source list indicates the multicast sources that the host will accept (INCLUDE) or will not accept (EXCLUDE) packets from. Multicast routers can then avoid delivering multicast packets from specific sources to networks where there are no interested receivers.

IGMPv3 multicast routers refresh the group membership state of members on a segment by sending General Queries periodically to request group membership information; and all IGMPv3 members respond with IGMPv3 Membership Reports that contain Current-State Group Records.

IGMPv3 is mainly used for Source-Specific Multicast (SSM) that is based on PIM sparse mode, which uses a separate source-distribution tree for each source within each multicast group.
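
A minimal sketch of enabling SSM on a last-hop router, assuming the default 232.0.0.0/8 SSM range (the interface name is an assumption):
RT1(config)#ip pim ssm default
RT1(config)#interface FastEthernet0/0
RT1(config-if)#ip pim sparse-mode
RT1(config-if)#ip igmp version 3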

The show ip igmp interface [type num] EXEC command displays multicast-related information for an interface, eg: the IGMP version that is active on an interface.
RT1#sh ip igmp interface
FastEthernet0/0 is up, line protocol is up
  Internet address is 10.1.1.1/24
  IGMP is enabled on interface
  Current IGMP host version is 2
  Current IGMP router version is 2
  IGMP query interval is 60 seconds
  IGMP querier timeout is 120 seconds
  IGMP max query response time is 10 seconds
  Last member query count is 2
  Last member query response interval is 1000 ms
  Inbound IGMP access group is not set
  IGMP activity: 2 joins, 0 leaves
  Multicast routing is enabled on interface
  Multicast TTL threshold is 0
  Multicast designated router (DR) is 10.1.1.1 (this system)
  IGMP querying router is 10.1.1.1 (this system)
  Multicast groups joined by this system (number of users):
      224.0.1.40(1)
RT1#

Multicast IP and MAC Addresses

The Internet Assigned Numbers Authority (IANA) assigns the multicast address ranges and well-known multicast addresses. The latest list can be obtained from http://www.iana.org/assignments/multicast-addresses/. Multicast IP addresses use the Class D address space, which is indicated by the high-order 4 bits set to binary 1110. Therefore, the multicast address range is 224.0.0.0 to 239.255.255.255.

Local Scope multicast addresses are in the range of 224.0.0.0 to 224.0.0.255 and are reserved for network protocol use. This address range is also known as the local network control block. Packets destined for multicast addresses in this range are never forwarded out of the local network broadcast domain regardless of the Time to Live (TTL) field in the IP packet header; the TTL is usually set to 1. Below lists some examples of local scope multicast IP addresses:

224.0.0.1 All multicast systems on the subnet
224.0.0.2 All multicast routers on the subnet
224.0.0.4 All Distance Vector Multicast Routing Protocol (DVMRP) routers on the subnet
224.0.0.5 All Open Shortest Path First (OSPF) routers on the subnet
224.0.0.6 All Open Shortest Path First (OSPF) designated routers (DRs) on the subnet
224.0.0.9 All Routing Information Protocol Version 2 (RIPv2) routers on the subnet
224.0.0.10 All Enhanced Interior Gateway Routing Protocol (EIGRP) routers on the subnet

Transient multicast IP addresses are dynamically assigned for multicast applications and then returned for others to use when no longer needed. Transient multicast IP addresses are assigned from the remainder of the IP multicast address space and are divided into the following 2 types:
  • Globally-scoped multicast addresses, in the range of 224.0.1.0 to 238.255.255.255, are to be allocated dynamically throughout the Internet. Ex: The 224.2.0.0/16 range may be used for Multicast Backbone (MBone) applications. MBone consists of Internet routers that support IP multicasting and form a virtual network (multicast channel) for delivering various public and private audio and video programs. MBone was originally created by the IETF to multicast audio and video meetings.
  • Limited-scoped or Administratively-scoped multicast addresses, in the range of 239.0.0.0 to 239.255.255.255. RFC 2365 – Administratively Scoped IP Multicast defines that these multicast addresses are reserved for use inside the private domains. The Administratively-scoped multicast address space includes the following scopes:
    • Site-Local Scope – 239.252.0.0/16, 239.253.0.0/16, 239.254.0.0/16, 239.255.0.0/16.
    • Organization-Local Scope – 239.192.0.0 to 239.251.255.255

The IEEE 802.3 LAN specification defines that bits 0 and 1 of the 1st octet – the 2 bits transmitted first – indicate a broadcast or multicast frame (the I/G bit) and a locally administered frame (the U/L bit) respectively. A universally administered address is uniquely assigned to a device by its manufacturer – the burned-in address (BIA). A locally administered address is assigned to a device by an admin.

IEEE 802.3 MAC Address Format

Take note of the location of the most significant byte and the most significant bit in each byte. IEEE documentation specifies Ethernet MAC addresses with the most significant byte on the left; inside each byte, however, the leftmost and rightmost bits are the least and most significant bits respectively.
This bit order is referred to as canonical or little-endian.
Note: When overriding the MAC address to use a local address, the device or device driver often does not enforce the setting of the U/L bit to a value of 1.

The IANA owns a block of Ethernet MAC addresses that start with hexadecimal 01:00:5E. The lower half of this block (0100.5E00.0000 – 0100.5E7F.FFFF) is allocated for multicast MAC addresses. The translation between L3 IP multicast and L2 MAC multicast addresses is achieved by mapping the low-order 23 bits of the L3 IP multicast address into the low-order 23 bits of the L2 MAC address.

IP Multicast to Ethernet MAC Multicast Address Mapping
As there are 28 bits (32 – 4 Class D prefix) of unique address space for an IP multicast address and only 23 bits are mapped into the IEEE MAC address, there are 5 bits of overlap (28 – 23). These 5 bits represent 2^5 = 32 addresses. Therefore, there is a 32:1 overlap of IP addresses to MAC addresses – 32 IP multicast addresses are mapped to the same MAC multicast address. As an example, below lists all the IP multicast addresses that are mapped to the same MAC multicast address 01-00-5E-7F-00-01.

224.127.0.1 225.127.0.1 226.127.0.1 227.127.0.1 228.127.0.1
229.127.0.1 230.127.0.1 231.127.0.1 232.127.0.1 233.127.0.1
234.127.0.1 235.127.0.1 236.127.0.1 237.127.0.1 238.127.0.1
239.127.0.1 224.255.0.1 225.255.0.1 226.255.0.1 227.255.0.1
228.255.0.1 229.255.0.1 230.255.0.1 231.255.0.1 232.255.0.1
233.255.0.1 234.255.0.1 235.255.0.1 236.255.0.1 237.255.0.1
238.255.0.1 239.255.0.1
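
As a quick worked check of the mapping: the 2nd octet of 224.127.0.1 is 127 (0111 1111 in binary); dropping its high-order bit leaves 111 1111 (0x7F), which together with the 3rd octet (0x00) and 4th octet (0x01) forms the low-order 23 bits of the MAC address. Appended to 01-00-5E with the 24th bit fixed at 0, this yields 01-00-5E-7F-00-01. The 2nd octet of 239.255.0.1 is 255 (1111 1111), which also loses its high-order bit to become the same 111 1111 – which is why it maps to the same MAC address.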

Since one multicast MAC address can represent 32 multicast IP addresses, a host must receive and examine every frame sent to the multicast MAC address of the group that it has joined, regardless of which multicast IP address the frame is actually destined for, and then examine the destination IP address in the packet to verify the multicast group that the frame is destined for.

Note: Switches forward frames based on MAC addresses. If a switch is configured for Layer 2 multicast snooping, it will forward frames to all members belonging to other multicast groups that share the same MAC address mapping, even if the frames belong to a different multicast group. Most Layer 2 switches flood all multicast frames that fall within the MAC address range of 0100.5E00.00xx out all switch ports, even if IGMP snooping is enabled.

Whenever a multicast application is started on a receiver, the application must learn about the available multicast sessions or streams, which are mapped to one or more IP multicast groups. The application may then request to join the corresponding multicast groups.

Below lists the available methods for multicast applications to learn about the multicast sessions:
  • The application may join a well-known, predefined group in which another application sends announcements about the available sessions.
  • The application may contact the appropriate directory servers if they are available.
  • The application may be launched from a web page on which the sessions are listed as URLs.
  • The application may join a session upon clicking the email link that announced the session.

Another option is the Session Directory (sd) application, which acts like a TV program guide to display the available multicast content. A client directory application uses either the Session Description Protocol (SDP) or Session Announcement Protocol (SAP) to learn about the available content. Note: SDP/SAP is referred to as sdr in Cisco documentation.

The initial sd tool was revised and became the sdr session directory application, which provides functions to announce available sessions along with their descriptions and to create new sessions. When SDR is used at the sender side, it creates new sessions and avoids address conflicts: senders consult their SDR caches upon session creation to choose an unused multicast address. When a new session is created, the sender announces it with all the information necessary for receivers to join the session. When SDR is used at the receiver side, it learns about the available sessions. A join to a particular multicast group is initiated when the user selects a multicast channel listed in SDR.

RFC 4566 – Session Description Protocol (SDP) defines the variables that describe sessions; most of these variables were inherited from the sdr tool. The RFC does not define the transport of SDP packets. Below lists the mechanisms used to transmit the packets that describe the sessions (a sample session description follows the list):
  • RFC 2974 – Session Announcement Protocol (SAP) that broadcasts multicast session info.
  • RFC 3261 – Session Initiation Protocol (SIP) that defines a signaling protocol for Internet conferencing, telephony, telepresence, event notification, instant messaging, etc.
  • RFC 2326 – Real Time Streaming Protocol (RTSP) that serves mainly as a control protocol in multimedia environments. RTSP allows videocassette recorder (VCR)-like controls, eg: select, forward, rewind, pause, stop; and also carries the info of a session.
  • Multipurpose Internet Mail Extensions (MIME)-format e-mails that may carry SDP descriptions of multicast sessions.
  • Web pages that may provide session descriptions in the standardized SDP format.
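
To make the format concrete, below is a minimal session description loosely modeled on the examples in RFC 4566; the originator, timestamps, group address, and port are all illustrative. The v=, o=, s=, c=, t=, and m= lines carry the protocol version, origin, session name, connection data (here a multicast group with a TTL of 127), the active time window, and the media description respectively.

v=0
o=alice 2890844526 2890842807 IN IP4 192.0.2.10
s=Sample Multicast Session
c=IN IP4 224.2.17.12/127
t=2873397496 2873404696
m=audio 49170 RTP/AVP 0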

Cisco IP/TV is an example of an IP multicast application. Cisco IP/TV generally has 3 components – the server (the source), the content manager (the directory server), and the viewer (the receiver). Viewers may contact the content manager directly via unicast and request the list of available programs (sessions or streams) from it. Viewers may also listen to periodic SAP announcements – Cisco IP/TV uses SAP to transport session descriptions in the standard SDP format to the viewers.

The term Internet Protocol Television (IPTV) first appeared in 1995 with the founding of Precept Software, which was acquired by Cisco Systems in 1998. The IP/TV technology was then integrated into various Cisco product lines, eg: Cisco IP/TV Software, Cisco Application and Content Networking System (ACNS) Software, Cisco Content Delivery Engine (CDE), Cisco Wide Area Application Engine (WAE), etc.

Sunday, March 25, 2012

IP Multicast Routing

Many types of data can be transferred between devices over an IP network, eg: document files, image files, voice, video, etc. However, an IP network is not efficient at delivering the same data to many destinations – the data is sent in unicast packets and replicated for each destination, which unnecessarily consumes network bandwidth.

IP multicast technology delivers data over networks to a group of destinations in an efficient way. The data is sent from the source as a single data stream that travels as far as it can in the network. Network devices replicate the data only when they need to send it out on multiple interfaces or segments to reach all members of the multicast group. Multicast traffic is generally unidirectional.

IP multicast groups are identified by Class D IP addresses, which are in the range from 224.0.0.0 to 239.255.255.255. IP multicast introduced some new network protocols, including 2 for informing network devices which end systems require which multicast data stream – Internet Group Management Protocol (IGMP) and Cisco Group Management Protocol (CGMP); and a few for determining the best way to route multicast traffic (the multicast routing protocols) – Distance Vector Multicast Routing Protocol (DVMRP), Multicast OSPF (MOSPF), Core-Based Trees (CBT), and Protocol Independent Multicast (PIM).
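
As a host-side illustration of the IGMP piece, below is a minimal Python receiver sketch (written for this post; the group address and port are made up). Setting the IP_ADD_MEMBERSHIP socket option makes the host's IP stack send an IGMP membership report, which tells the local router that this segment needs the group's data stream.

import socket
import struct

GROUP, PORT = "239.1.2.3", 5004    # illustrative group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group on all interfaces; the kernel emits the IGMP join.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(1500)
print("received %d bytes from %s" % (len(data), sender[0]))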

When IP multicast is used to send packets to multiple receivers, the packets are not duplicated for every receiver; instead, they are sent in a single stream, using a single copy of each packet. Downstream routers and switches replicate the packets only on links where receivers exist. The sender or source of multicast traffic does not have to know the unicast addresses of the receivers. Unicast transmission, in contrast, sends multiple copies of the data, one copy for each receiver.
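
The sender side of this model is equally simple – a sketch under the same illustrative assumptions as the receiver above: one sendto() to the group address emits a single copy of the packet, and the sender never needs to know the receivers' unicast addresses.

import socket

GROUP, PORT = "239.1.2.3", 5004    # same illustrative group as above

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Let the datagram cross up to 16 router hops; the default TTL of 1
# keeps multicast traffic on the local segment.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)
sock.sendto(b"one copy of the data, replicated by the network", (GROUP, PORT))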

Unicast vs Multicast

The most common types of multicast applications are one-to-many and many-to-many. In one-to-many applications, one sender sends data to many receivers, eg: applications for audio or video distribution, push-media, announcements, monitoring, etc. If the sender of a one-to-many application needs feedback from the receivers, the application becomes a many-to-many application. In many-to-many applications, multiple hosts can send data to the same multicast group; many receivers also act as senders, and a host can be a sender and a receiver at the same time. Receiving data from multiple sources increases the complexity of applications and introduces various management challenges. Using the many-to-many multicast concept, new classes of applications, eg: collaboration, concurrent processing, and distributed interactive simulations, can be built.

Another model of multicast applications is many-to-one, mainly used in financial applications, in which many receivers send data back to one sender (via unicast or multicast); it may be used for resource discovery, data collection, auctions, polling, etc.

Many new multicast applications are emerging as demand grows. Real-time applications include live TV, radio, corporate broadcasts, financial data delivery, whiteboard collaboration, e-learning or distance learning, and videoconferencing. Non-real-time applications include file transfer, data and file replication, and video on demand (VoD). Ghosting multiple PC images simultaneously is an example of a file transfer application. Some types of e-learning are also non-real-time.
Note: Some web technologies, eg: webcasting – transmission of audio or video over the Internet, actually use unicast to deliver data to each individual media player through several data streams.

Below lists some advantages of multicast transmission over unicast transmission:
  • Enhanced efficiency. Available network bandwidth and end-system resources are used more efficiently because multiple data streams are replaced with a single data stream (see the worked example after this list).
  • Optimized performance. The sender requires far fewer processing resources and much less network bandwidth because fewer copies of the data must be forwarded and processed.
  • Support for distributed applications. Distributed multipoint applications over unicast transmission do not scale well as demand and usage grow, because the traffic load increases at a 1:1 rate with the number of clients. Multicast enables many new applications that were not possible with unicast.
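
As a rough, illustrative calculation of the efficiency gain: streaming a 1.5-Mbps video feed to 100 receivers over unicast requires the server to source 100 x 1.5 Mbps = 150 Mbps, whereas over multicast the server sources a single 1.5-Mbps stream regardless of the number of receivers – the network replicates the stream only where the distribution paths diverge.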

Most multicast applications are implemented upon the User Datagram Protocol (UDP), and therefore have some disadvantages when compared to unicast applications implemented upon the Transmission Control Protocol (TCP). Below lists some disadvantages of multicast applications:
  • UDP has no reliability mechanism. Reliability issues must be addressed by the multicast applications themselves if reliable data transmission is necessary.
  • UDP best-effort delivery, with no acknowledgement process, results in occasional packet drops. Multicast applications must not expect reliable data delivery and should be designed to tolerate or recover from loss at the application layer.
  • UDP has no congestion control mechanism, eg: the windowing and slow-start mechanisms of TCP, so network congestion and service degradation may result as UDP-based multicast applications become popular and widely deployed. Multicast applications should be designed to detect and avoid congestion conditions where possible.
  • Duplicate packets may be generated when the multicast network topology changes. Multicast applications should expect occasional duplicate packets and handle them accordingly.
  • Out-of-order packet delivery may occur while the underlying network converges. Multicast applications should be designed to handle it accordingly (see the sketch after this list).
  • Some security issues with multicast, eg: restricting multicast traffic so that it is delivered only to a selected group of receivers to prevent eavesdropping, have not been fully addressed yet. Some commercial applications will become possible only when these issues are resolved.
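
To illustrate how an application might cope with the loss, duplication, and reordering points above, here is a minimal Python sketch (the 4-byte sequence-number header and the class name are assumptions made for this post; a real application would add retransmission or forward error correction on top of it):

import struct

class SeqGuard:
    # Track the 32-bit sequence number carried in the first 4 bytes of
    # each datagram to detect loss, duplicates, and reordering.
    def __init__(self):
        self.expected = 0
        self.seen = set()

    def check(self, datagram):
        (seq,) = struct.unpack("!I", datagram[:4])
        if seq in self.seen:
            return "duplicate #%d" % seq        # eg: after a topology change
        self.seen.add(seq)
        if seq < self.expected:
            return "out-of-order #%d" % seq     # eg: after reconvergence
        lost = list(range(self.expected, seq))  # a gap means dropped datagrams
        self.expected = seq + 1
        return "ok #%d, lost %s" % (seq, lost)

guard = SeqGuard()
for raw in (b"\x00\x00\x00\x00", b"\x00\x00\x00\x02",
            b"\x00\x00\x00\x01", b"\x00\x00\x00\x01"):
    print(guard.check(raw))   # ok #0 / ok #2 (lost [1]) / out-of-order / duplicate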