Multicast PIM NBMA Mode
Multicast over frame-relay can be tricky when you run it over a hub and spoke topology. In a previous lesson, I described the issue you run into when your auto-RP mapping agent is behind a spoke. This time, we’ll take a look at PIM NBMA mode.
Let me show you the topology that I will use to explain and demonstrate this to you:

Above, you see three routers. R1 is the hub router, and R2 and R3 are my spokes. We are using point-to-multipoint frame-relay, so there is only a single subnet. R1 is also the RP (Rendezvous Point).
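The frame-relay configuration itself is not shown in this lesson, but on the hub it could look roughly like the sketch below. The DLCI numbers (102 and 103) and the subnet mask are my own assumptions for illustration; the broadcast keyword is what allows R1 to replicate multicast and broadcast traffic onto each PVC:

R1(config)#interface serial 0/0
R1(config-if)#ip address 192.168.123.1 255.255.255.0
R1(config-if)#encapsulation frame-relay
R1(config-if)#frame-relay map ip 192.168.123.2 102 broadcast
R1(config-if)#frame-relay map ip 192.168.123.3 103 broadcast

Each spoke would have similar map statements, with all traffic (including traffic for the other spoke) mapped onto its single PVC toward the hub.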
PIM treats our frame-relay network as a broadcast medium: it expects that all routers can hear each other directly. This, however, is only true when we have a full mesh. With a hub and spoke topology like the network above, it doesn’t apply, because there is only a PVC between the hub and each spoke router. The spoke routers cannot reach each other directly; they have to go through the hub router.
This causes some issues. First, whenever a spoke router sends a multicast packet, it will be received by the hub router, but the hub router doesn’t forward it to the other spoke router, because a multicast router never forwards a packet out of the interface it was received on (the incoming interface is never placed in the outgoing interface list). One method of dealing with this problem is to use point-to-point sub-interfaces, as that solves the split horizon problem.
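As a sketch of that alternative (not the topology used in this lesson): with point-to-point sub-interfaces, each spoke lands on its own interface and subnet, so the hub is free to forward between them. The sub-interface numbers, subnets, and DLCIs below are assumptions for illustration:

R1(config)#interface serial 0/0.102 point-to-point
R1(config-subif)#ip address 192.168.12.1 255.255.255.0
R1(config-subif)#frame-relay interface-dlci 102
R1(config-fr-dlci)#interface serial 0/0.103 point-to-point
R1(config-subif)#ip address 192.168.13.1 255.255.255.0
R1(config-subif)#frame-relay interface-dlci 103

In this lesson, however, we keep the multipoint setup and solve the problem with PIM NBMA mode instead.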
The other problem is that spoke routers don’t hear each other’s PIM messages. For example, let’s say that R2 and R3 are both receiving a certain multicast stream. After a while, there are no users behind R2 that are interested in this stream, and as a result, R2 will send a PIM prune message to R1.
If R3 still has active receivers, it normally responds with a PIM join to override the prune (a prune override), letting R1 know that it still wants to receive the multicast stream. R1, however, assumes that all PIM routers heard the prune message from R2, but this is not the case in our hub and spoke topology: only the hub router received it, and R3 never heard the prune. Since R3 never sends an override, R1 will prune the multicast stream, and R3 will not receive anything anymore…
PIM NBMA mode solves these issues that I just described to you. Basically, it will tell PIM that the frame-relay network should be treated as a collection of point-to-point links, not as a multi-access network. Let’s look at the example above and configure it so you can see how it works.
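One prerequisite before the demonstration: the spokes must be able to reach R1’s loopback. The OSPF configuration is not shown in this lesson; a minimal sketch could look like the following, where the process ID (1), area (0), and wildcard masks are my own assumptions. Note that OSPF over a multipoint frame-relay network also needs attention for the network type (or neighbor statements), which is outside the scope of this sketch:

R1(config)#router ospf 1
R1(config-router)#network 1.1.1.1 0.0.0.0 area 0
R1(config-router)#network 192.168.123.0 0.0.0.255 area 0
R2(config)#router ospf 1
R2(config-router)#network 192.168.123.0 0.0.0.255 area 0
R3(config)#router ospf 1
R3(config-router)#network 192.168.123.0 0.0.0.255 area 0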
OSPF has been configured to advertise the loopback0 interface of R1 so that we can use it as the IP address for the RP. Let’s start by enabling PIM on the interfaces:
R1(config)#interface serial 0/0
R1(config-if)#ip pim sparse-mode
R2(config)#interface serial 0/0
R2(config-if)#ip pim sparse-mode
R3(config)#interface serial 0/0
R3(config-if)#ip pim sparse-mode
This will activate PIM on all serial interfaces. Let’s verify that we have PIM neighbors:
R1#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
192.168.123.3     Serial0/0                00:03:51/00:01:21 v2    1 / DR S
192.168.123.2     Serial0/0                00:04:04/00:01:35 v2    1 / S
That’s looking good. Now let’s configure the RP:
R1(config)#ip pim rp-address 1.1.1.1
R2(config)#ip pim rp-address 1.1.1.1
R3(config)#ip pim rp-address 1.1.1.1
I will use a static RP as it saves the hassle of configuring auto-RP and a mapping agent. Let’s configure R3 as a receiver for the 239.1.1.2 multicast group address. I will use R2 as a source by sending pings:
R3(config-if)#ip igmp join-group 239.1.1.2
R2#ping 239.1.1.2 repeat 9999
Type escape sequence to abort.
Sending 9999, 100-byte ICMP Echos to 239.1.1.2, timeout is 2 seconds:
.....
As you can see, nothing is happening; no packets are arriving. Let’s take a closer look to see what is going on:
R3#show ip mroute 239.1.1.2
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 239.1.1.2), 00:01:40/00:02:27, RP 1.1.1.1, flags: SJPCL
Incoming interface: Serial0/0, RPF nbr 192.168.123.1
Outgoing interface list: Null
R3 has joined the shared tree toward the RP but doesn’t receive anything. Let’s check R1:
R1#show ip mroute 239.1.1.2
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 239.1.1.2), 00:02:10/00:03:18, RP 1.1.1.1, flags: SJC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/0, Forward/Sparse, 00:02:10/00:03:18
(192.168.123.2, 239.1.1.2), 00:01:38/00:02:00, flags: PJT
Incoming interface: Serial0/0, RPF nbr 0.0.0.0
Outgoing interface list: Null
R1 receives traffic from R2 but doesn’t forward it out of the same interface to R3 (Serial0/0).
Let’s enable PIM NBMA mode on R1 now:
R1(config)#interface serial 0/0
R1(config-if)#ip pim nbma-mode
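You can verify that the command took effect by checking the interface configuration. The output below is a sketch; the addressing and the exact ordering of the lines are my own assumptions, not taken from the lesson:

R1#show running-config interface serial 0/0
interface Serial0/0
 ip address 192.168.123.1 255.255.255.0
 ip pim nbma-mode
 ip pim sparse-mode
 encapsulation frame-relay

Keep in mind that ip pim nbma-mode is intended for interfaces running PIM sparse mode; it doesn’t help for dense mode groups.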
Suddenly the pings start working:
R2#ping 239.1.1.2 repeat 9999
Type escape sequence to abort.
Sending 9999, 100-byte ICMP Echos to 239.1.1.2, timeout is 2 seconds:
Reply to request 0 from 192.168.123.3, 4 ms
Reply to request 0 from 192.168.123.3, 4 ms
Reply to request 1 from 192.168.123.3, 1 ms
And you can see the difference here in the multicast routing table of R1:
R1#show ip mroute 239.1.1.2
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 239.1.1.2), 00:04:20/00:03:06, RP 1.1.1.1, flags: SJC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/0, 192.168.123.3, Forward/Sparse, 00:00:23/00:03:06
(192.168.123.2, 239.1.1.2), 00:03:48/00:03:28, flags: JT
Incoming interface: Serial0/0, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/0, 192.168.123.3, Forward/Sparse, 00:00:23/00:03:06
Above, you see that the Serial0/0 interface was added to the outgoing interface list of the (S, G) entry. Also note that each outgoing interface now includes the next-hop address (192.168.123.3): with NBMA mode, PIM tracks joins per neighbor instead of per interface, so traffic is only replicated to the spokes that actually joined.
And finally, let’s check the multicast routing table of R3:
R3#show ip mroute 239.1.1.2
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 239.1.1.2), 00:04:13/stopped, RP 1.1.1.1, flags: SJPCLF
Incoming interface: Serial0/0, RPF nbr 192.168.123.1
Outgoing interface list: Null
(192.168.123.2, 239.1.1.2), 00:00:16/00:02:59, flags: PLFT
Incoming interface: Serial0/0, RPF nbr 0.0.0.0
Outgoing interface list: Null

R3 now receives the stream on its Serial0/0 incoming interface. The outgoing interface list is Null because the router itself (through the ip igmp join-group command) is the only receiver, which is also why you see the L (Local) flag on both entries.
Table of Contents
Unit 1: Introduction to Multicast
Unit 2: IGMP (Internet Group Management Protocol)
- Multicast IGMP Version 1
- Multicast IGMP Version 2
- Multicast IGMP Version 3
- Multicast IGMP Filter
- Multicast IGMP Proxy
Unit 3: Multicast L2
- Multicast IGMP Snooping
- IGMP Snooping without Router
- Multicast CGMP (Cisco Group Management Protocol)
Unit 4: Multicast L3
- Multicast Routing
- Multicast PIM Dense Mode
- Multicast PIM Sparse Mode
- Multicast PIM Sparse-Dense Mode
- Multicast PIM Auto RP
- Multicast PIM BSR (Bootstrap)
- RPF (Reverse Path Forwarding)
- Multicast Tunnel RPF Failure
- PIM Designated Router
- PIM Assert
- Multicast PIM Prune Override
- Multicast PIM Register Message
- Anycast RP
- Multicast MSDP SA Filtering
- Multicast Bidirectional PIM
- Multicast Stub Routing and IGMP Helper
- Source Specific Multicast
- Multicast PIM Accept RP
- Multicast PIM Accept Register
- Multicast Auto-RP Mapping agent behind Spoke
- PIM NBMA Mode
- Multicast Boundary Filtering
- Multicast PIM Snooping