{ "blogs-2018-09-06-dynamic-packet-buffering": { "title": "Dynamic Packet Buffering", "content": " On This Page Intelligent Packet Buffering on ASR9000 4th Generation Ethernet Line cards Document Purpose High Bandwidth buffering and Model Shifting Intelligent dynamic packet buffer management Static packet buffer management - fixed and chunk based Summary Intelligent Packet Buffering on ASR9000 4th Generation Ethernet Line cardsDocument PurposeThe ASR9000 4th Generation Ethernet line cards implement intelligent packet buffering techniques to provide high performance deep buffering. In addition to enhancements to the existing static buffer management logic, the new generation of Ethernet line cards introduces dynamic buffer management.Dynamic buffer management takes advantage of on-demand buffer allocation for efficient handling of instantaneous traffic bursts and it is optimized for high bandwidth interfaces on dense line cards. Packet buffers are designed to absorb incoming traffic temporary bursts, while guarantee for high priority traffic forwarding. For such scenarios, dynamic buffer management techniques are more efficient than static allocations, as they allow assignment of available buffers to physical interfaces on an interim basis only. They become even more relevant in the case of high-speed interfaces due to the amount of memory required to buffer traffic even for very short periods of time.The following sections will describe these new and enhanced buffer allocation mechanisms in detail.High Bandwidth buffering and Model ShiftingService provider edge (PE) router plays a crucial role in the network, aggregating large quantity of traffic requiring network service handling (e.g. business L2 or L3VPN) from tens to hundreds of access devices. Therefore, the aggregation capacity of a PE router, in terms of number of access devices per port, is high. Simultaneously, each of those devices, expects to receive large volumes of downstream data from content delivery and service providers over the core network. That can often result in congestion at the PE due to bulk of traffic waiting to be delivered into the access network. There can be instantaneous traffic burst or even periods of sustained network congestions on the edge routers. As a result, the packet-buffering model must find efficient ways to carve memory during congestion in any given subset of ports, while ensuring fairness across all ports and honoring the different traffic priorities.The dramatic increase in forwarding throughput and port densities in the recent years, coupled with the growing aggregation scale on converged network elements, has pushed the boundaries on achievable packet processing speeds and per-packet memory access performances. Integrating very large packet buffer memories with the super-high throughput network processor (NP) is prohibitive from a cost and power perspective. That is due to asymmetric technology improvements in NP throughput growth vs. commodity memory performance. As a rule of thumb, edge routers require 50ms of line-rate output queue buffers. This translates to 625 MB of high-speed packet buffering for a single 100G interface, compared to the mere 6.25 MB for 1G ports.Figure 1.1 Interface Bandwidth and Buffer Space DemandingFigure 1.1 shows how the increase in port bandwidth has directly affected the requirements for buffer space. 
These changes are driving more efficient and flexible packet buffer allocation models on ASR9000 4th Generation line cards.Moreover, ASR9000 4th Generation Ethernet linecards employ a new memory technology known as High Bandwidth Memory (HBM), which provides high memory bandwidth at decreased power consumption. The HBM memory has multiple access channels that can be used simultaneously by different NP components, resulting in very high memory access throughput.Intelligent dynamic packet buffer managementWith Ethernet interface speeds transitioning from 10Gbps to 100Gbps or even 400Gbps, and with denser line cards, the system demand for packet buffering has exploded to hundreds of GBs of high bandwidth memory. This is several orders of magnitude higher than what current HBM technologies can provide at a reasonable cost and power consumption. The ASR9000 4th Generation Ethernet line cards address this transition by implementing a dynamic, shared packet buffer behavior.The new approach enables the HBM to maximize burst absorption through on-demand buffer allocation, while sharing resources among all network ports that are part of the same Network Processor (NP). As an example, this enables an HBM memory that could provide 50ms of per-port buffering in a static buffer management model to buffer up to 100ms when not all ports on the NP are congested. Moreover, the 4th Generation NP offers flexible packet buffer allocation while ensuring: priority protection, protection of critical control traffic, and a guaranteed 10 ms of port buffering on all ports. Due to the better burst absorption capabilities and efficient sharing of high-performance memory resources across NP ports, the ASR9000 4th Generation Ethernet line cards implement dynamic buffer management by default.This dynamic buffer allocation model manages the 3GB of HBM memory available for packet buffering by defining 3 logical regions: Shared region – 2.5 GB of HBM is available for shared packet buffering amongst all ports of the NP. This region is consumed first, on a first-come, first-served basis, by those ports. Reserved region - 200 MB of HBM is reserved for internal ports and critical control protocol protection, yielding 4ms of buffering per port. This region is not available for transit packet buffering. Protection region - 300 MB of HBM is available for port guarantees and for priority traffic protection. Ports that have not exhausted their guaranteed 10ms of buffering are allowed to use this region. This region is also used for packet buffering under congestion by ports that have not exhausted their 100ms limit. Transit priority packets are scheduled first out of any port, so priority buffering is rare and happens only for transient bursts on a well-designed network. Figure 2.1 illustrates a few use cases of how the default dynamic buffer management at the NP allocates buffers in single-port, two-port and all-port congestion scenarios. Congestion levels are based on the number of ports at the NP experiencing congestion. As you can see, the three regions together provide a flexible mechanism to share buffers while still providing some degree of protection to priority and per-port buffering. 
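As a rough sanity check of these figures, assuming 100GE ports, the short Python sketch below simply converts a buffer size into the time it takes to drain at line rate; the values are illustrative rather than a specification.

def buffer_time_ms(buffer_bytes, port_rate_gbps):
    # Time, in milliseconds, that a buffer of buffer_bytes lasts while draining at line rate
    return buffer_bytes * 8 / (port_rate_gbps * 1e9) * 1e3

print(buffer_time_ms(0.625e9, 100))  # ~50 ms: the 625 MB line-rate buffer quoted earlier for one 100GE port
print(buffer_time_ms(2.5e9, 100))    # ~200 ms: the entire shared region drawn by a single congested port,
                                     # which the 100ms per-port limit then caps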
Diagnostics and critical protocol traffic have their own reserved region for hitless prioritization even during maximum NP congestion.Figure 2.1 Dynamic buffer management scenariosStatic packet buffer management - fixed and chunk basedAs discussed earlier, the ASR9000 4th Generation Ethernet line cards use a new high-end memory referred to as HBM. Unlike with earlier commodity off-chip memories, cost considerations do not allow the HBM memory size to grow at the same rate as interface speeds. This motivates a rethinking of the static packet buffer management techniques implemented in earlier systems. In addition to the automatic assignment of an equal amount of buffer to each port, which does not take the actual needs of each interface into consideration, manual allocation is now possible. Manual allocation allows the buffer allotment to be customized to the actual port buffering demands of a specific deployment.Automatic static buffer management techniques allocate buffers equally, on a per-port or per-port-group basis, thus exclusively guaranteeing a predefined and fixed amount of buffer memory for each port under traffic congestion. Once this buffer carving is done during NP initialization, regions allocated to different ports cannot be leveraged to relieve congestion on an affected interface.Manual static buffer allocation guarantees a specific router interface the queuing capacity it needs, so that it can build per-priority buffer limits for up to 8 priority queues, or budget for network oversubscription. The latter allows edge routers to be configured to absorb bursts in place of downstream access devices that have very small buffers. Buffer allocation to a specific port is done in multiples of 10ms, up to 120ms. The remaining buffer space in the HBM memory is equally divided across the remaining ports of a given NP.The 4th Generation ASR9000 Ethernet line cards provide a configuration knob to set buffer management to static mode where required. Manual customization of the packet buffer for a specific interface is done by configuring the buffer size for the port’s chunk, as shown in Figure 3.1. A “chunk” is an index that refers to the portion of HBM memory buffer space that is assigned to that port. A “port” here denotes a single 100Gbps interface, or the aggregate of breakout interfaces of one 100Gbps port.A line card can function with mixed-mode NPs, as different port groups may have different traffic buffering requirements.Figure 3.1 Static packet buffer managementFigure 3.1 illustrates the two options available when a user explicitly configures the NP to operate in static buffer management mode. This mode still reserves 200 MB of HBM memory for internal ports and critical protocol protection. Therefore, only the “shared” and “protection” regions can be carved for static buffer management.When the user configures “hw-module loc qos port-buffer-static-limit np ~”, the remaining 2.8 GB of HBM is equally carved among the different NP chunks. If the user adds a chunk keyword, then the specified amount of buffering is allocated to that particular chunk. The remaining 2.8 GB, minus the carved-out space for that one chunk, is then equally distributed among the remaining chunks. The amount of buffering that can be assigned to a given chunk ranges from 10ms to 120ms.As an example, without any specific per-chunk carving, each chunk gets 700 MB of memory, which translates to 50ms of buffering. 
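(As a rough check, assuming a 100GE port: 700 MB is about 5.6 gigabits, which drains in roughly 56 ms at 100 Gbps, in line with the quoted 50 ms figure.) 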
However, if one chunk is configured for 120ms of buffering, the remaining three chunks will only get 35ms each.This enhancement to static packet buffer management enables very granular control of memory allocation on each port. It provides better control of the per-priority buffer limits of the port, so that the top-priority queue can be allowed to use up to the total buffer assigned to the port, thus reducing the drop probability of top-priority traffic. The other priority queues (P2 to P8) share the remaining buffer using an allocation logic similar to that of the top priority P1, based on their order of priority. Furthermore, since the configuration knob is available on a per-chunk basis, it applies to different port speeds and future card variants.SummaryThis paper highlighted how the 4th generation NP on the Cisco ASR9000 product family introduces a dynamic, adaptive and on-demand architecture for packet buffering. In addition, the fallback option of static buffer management, the support for mixed-mode NPs on the same line card and the enhancements of chunk-based, user-defined buffer carving allow for backward compatibility where needed.", "url": "/blogs/2018-09-06-dynamic-packet-buffering/", "author": "Cisco Web Team", "tags": "" } , "blogs-2018-09-06-power": { "title": "Optimize Power Consumption", "content": " On This Page Optimize Power Consumption with the 4th generation of ASR 9000 line cards Why Power Consumption matters Taking On Power Consumption Linecard Slicing and on-demand power saving Conclusion Optimize Power Consumption with the 4th generation of ASR 9000 line cardsWhy Power Consumption mattersPower consumption has always been a heated topic, and there are several good reasons for that.The most obvious is that each watt consumed by a router generates direct costs. The electric utility company has to be paid for all the power the box consumes. Therefore, optimizing a router’s power consumption has a direct impact on OpEx.However, this is not the only reason why Service Providers are keeping a close eye on power consumption. There are two other areas where power consumption drives additional costs.First, the whole onsite power installation also generates costs. If the datacenter power consumption increases, Service Providers have to expand their power installations. This generates installation building costs, as well as additional connection fees from the utility provider. This is in addition to the increasing power consumption costs.Second, a device does not really consume power. It transforms electricity into heat, which is exhausted from the router in the form of hot air. This air has to be cooled down in order to maintain a healthy ambient temperature in the datacenter. Therefore, higher power consumption leads to higher cooling costs.Additional considerations arise when a router is installed in a third-party location (e.g. an Internet Exchange Point, IXP). In the earlier days of the Internet, space availability in an IXP was the main area of concern. Today, space is no longer the biggest issue, due to the reduced device footprint required to achieve comparable densities. However, the delivery of sufficient power to a cabinet and/or proper cooling are. Sometimes, this leads to situations where Service Providers rent a rack in an IXP only to partially fill it, because power and air-cooling limitations do not allow for additional equipment.Last, but not least, is environmental sustainability. 
Reduction in power consumption is paramount to protect the environment and help businesses meet their green objectives.Let’s see now how the 4th generation of ASR 9000 line cards delivers strong power saving optimizations.Taking On Power ConsumptionEvery new generation of ASR9000 linecards has brought a huge increase in interface speed and port density per slot. While the overall power consumption of a fully loaded chassis, with denser and faster cards, has also increased, the actual power consumption per Gigabit of interface capacity has decreased dramatically over time.In less than a decade, continuous innovations have enabled the ASR 9000 to achieve a 95% reduction per Gbps in power consumption, and a 99% reduction per Gbps in physical footprint.At the very heart of these new linecards is the new and powerful 4th Generation Network Processor (NP). Its new integrated design, which combines several discrete elements into a single component, and its higher throughput enable it to provide the flexibility, scalability and feature richness you have come to expect from the ASR9000 product family, in a much-reduced power profile.A significant part of a router’s power consumption is generated by the fans. The cooling system has been improved by using the latest fan and cooling technology, resulting in more power-efficient cooling. This has a big impact on power savings.In addition, newer ASR99xx chassis designs allow for highly dense 100GE systems. This minimizes how much the power consumption of common components affects the overall system power draw for every Gbit of traffic forwarded. Common components include, but are not limited to, Route Processors, Switch Fabric, fans and the power system itself.Compared to the 3rd generation of ASR 9000 linecards, the new linecards achieve comparable densities in approximately a third of the space and with less than half the power consumption.Linecard Slicing and on-demand power savingThe new linecards are based on a modular design. Each linecard consists of identical slices, each one holding several physical interfaces. Slices are replicated multiple times to achieve the interface density and throughput desired for a given linecard.Interestingly, each slice can be put into power-saving mode. When all ports in a slice are unused, linecard power consumption can be significantly reduced by administratively disabling the associated slice. Moreover, enabling or disabling a slice has no impact on ports in any of the other slices.Another interesting aspect of linecard slicing is that fabric bandwidth is dynamically allocated to slices based on their enabled/disabled status. This allows a partially disabled linecard to gain additional fabric redundancy in the event that the router suffers multiple fabric failures.ConclusionThe new Cisco ASR9000 4th Generation linecards and common components bring several enhancements that directly tackle power management on the ASR 9000 product family.A new compact linecard design, more efficient chipsets, and denser cards and chassis have lowered the ASR9000 power profile dramatically. 
In addition, a comprehensive linecard slicing design has enabled effective on-demand power saving and further enhanced fabric availability.As a result, power consumption on the ASR 9000 product family has reached new lows, a staggering 95% decrease compared to the original system.", "url": "/blogs/2018-09-06-power/", "author": "Cisco Web Team", "tags": "" } , "#": {} , "tutorials-xr-embedded-packet-tracer": { "title": "XR Embedded Packet Tracer", "content": " On This Page Introduction Packet Tracer YouTube Demo Video XR Embedded Packet Tracer Framework Architecture User Interaction CLI Commands Summary Start/Stop Packet Tracing Clear Packet Tracer Counters And Conditions Specify Packet Tracer Conditions Packet Tracer Conditions - Interfaces Packet Tracer Conditions - Offset/Value/Mask Packet Tracer Status Packet Tracer Results Packet Tracer Counter Descriptions XR Embedded Packet Tracer Performance Considerations Use Cases Use Case 1# Tracing On A P Router In L2VPN Use Case 2# Packet Drops In NP Microcode Use Case 3# Packet Drops Outside Of NP Microcode Use Case 4# Processing Of ICMP Echo Requests XR Embedded Packet Tracer Restrictions And Limitations XR release 7.1.2# Appendix 1# XR Embedded Packet Tracer Framework Architecture Details IntroductionIOS XR Embedded Packet Tracer is a framework that provides the user with the capability to trace custom flows through the router, for service validation or troubleshooting purposes.IOS XR Embedded Packet Tracer is protocol agnostic: it works on any type of unicast or multicast packet.This document is a user guide, with additional insight into the architecture of the XR Packet Tracer.IOS XR Embedded Packet Tracer support starts with IOS XR release 7.1.2 and the ASR 9000. Other XR product families will be supported in the future. XR release 7.1.2 provides the very basic functionality that the packet tracer can deliver. Further development directions will depend largely on your feedback.Packet Tracer YouTube Demo VideoFor a quick overview of what you can expect from the XR Embedded Packet Tracer, watch this short video#XR Embedded Packet Tracer Framework ArchitectureWhen packet tracing is enabled on an interface, the Network Processor (NP) checks whether received packets match the specified condition. If a packet matches the condition, a flag is set in the internal packet header. This flag in the internal packet header allows for the tracing of this packet on all elements in the data-path and punt-path inside the router.For more details on the packet tracer architecture, refer to Appendix 1.In XR release 7.1.2 packet tracing is supported only in the ASR9000 data-path. Support is available on 3rd, 4th and 5th generation line cards (aka Tomahawk, Lightspeed and Lightspeed Plus). Packet tracing support by processes participating in the punt/inject path will be available in future IOS XR releases.On the ASR9000, the network processor microcode (in the case of Tomahawk) and the packet processing engine code (in the case of Lightspeed) participate in the packet tracer framework. HW ASICs (FIA, XBAR, PHY) do not have the capability to participate in the packet tracer framework. 
Therefore any actions performed by the microcode are reported to the packet tracer infrastructure, but actions performed by HW ASICs are not.User InteractionThe main pillar of the XR Embedded Packet Tracer architecture is simplicity of the user experience.At this stage of XR Embedded Packet Tracer framework development, the user interface is provided through the CLI.User interaction with the packet tracer framework is entirely contained in user mode. There is no need for any configuration changes to enable the packet tracer functionality.The following diagram represents the packet tracer workflow#CLI Commands Summary Command Syntax Description clear packet-trace conditions all Clears all buffered packet-trace conditions. This command is allowed only while packet tracing is inactive. clear packet-trace counters all Resets all packet-trace counters to zero. packet-trace condition interface interface Specify interfaces on which you expect to receive packets that you want to trace through the router. packet-trace condition n offset offset value value mask mask Specify set(s) of the Offset/Value/Mask that define the flow of interest. packet-trace start Start packet tracing. packet-trace stop Stop packet tracing. show packet-trace description See all counters registered with the packet tracer framework along with their descriptions. show packet-trace status [detail] See conditions buffered by the pkt_trace_master process running on the active RP and the packet tracer status (Active/Inactive). The detail option of the command shows which processes are registered with the packet tracer framework on every card in the router. If the packet tracer status is Active, the output also shows which conditions were successfully programmed in the data-path. show packet-trace result See the non-zero packet tracer counters. show packet-trace result counter name [source source] [location location] See the most recent 1023 increments of a specific packet-trace counter. Start/Stop Packet TracingCommands to start/stop the packet tracer: packet-trace start packet-trace stop Clear Packet Tracer Counters And ConditionsCommand to clear packet tracer counters: clear packet-trace counters all Packet tracer counters can be cleared at any time.Command to clear packet tracer conditions: clear packet-trace conditions all By design, packet tracer conditions can be cleared only while packet tracing is inactive. Interpretation of packet tracer results would otherwise be dubious.Specify Packet Tracer ConditionsPacket tracer conditions comprise two entities: Physical interface(s) on which packets are expected to be received Offset/Value/Mask triplets that define a flow of interestPacket Tracer Conditions - InterfacesSpecify the physical interface(s) on which packets are expected to be received.packet-trace condition interface hu0/5/0/6 packet-trace condition interface hu0/5/0/7 packet-trace condition interface hu0/5/0/8 When tracing on sub-interfaces, the Offset/Value/Mask specification must take into account the dot1q or QinQ encapsulation.Packet Tracer Conditions - Offset/Value/MaskDefining a flow as a set of Offset/Value/Mask triplets allows the packet tracer framework to be completely protocol agnostic. 
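As a rough illustration of the arithmetic behind such conditions, the Python sketch below sums assumed header lengths to find the byte offset of a field from the start of the frame. The header sizes and the two example stacks (a dot1q-tagged IPv4 frame, and an L2VPN pseudowire carrying two MPLS labels, a control word and a dot1q-tagged customer frame) are assumptions chosen to be consistent with the offsets used in Use Cases 4 and 1 later in this document; your own offsets depend on the exact encapsulation on the wire.

# Minimal sketch (not the Web App's code): byte offset of a field, given an assumed header stack.
HDR_LEN = {"eth": 14, "dot1q": 4, "mpls": 4, "pw_cw": 4, "ipv4": 20}

def field_offset(stack, offset_in_last_header):
    # Sum the lengths of all headers before the last one, then add the field's offset inside it.
    return sum(HDR_LEN[h] for h in stack[:-1]) + offset_in_last_header

print(field_offset(["eth", "dot1q"], 0))           # 14 -> dot1q TCI / VLAN ID (cf. Use Case 4)
print(field_offset(["eth", "dot1q", "ipv4"], 16))  # 34 -> IPv4 destination address (cf. Use Case 4)
pw_stack = ["eth", "mpls", "mpls", "pw_cw", "eth", "dot1q", "ipv4"]
print(field_offset(pw_stack, 9), field_offset(pw_stack, 12), field_offset(pw_stack, 16))  # 53 56 60 (cf. Use Case 1)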
The Offset/Value/Mask can represent any part of any header in the protocol stack and does not even have to end on a header boundary.To address the usability aspect of this approach, we have developed the “XR Packet Tracer Condition Generator Web App”.Source code and installation instructions for this Web App are available on GitHub: XR Embedded Packet Tracer - Condition Generator.This Web App allows you to draw the protocol stack of your frame of interest, specify which headers are relevant for defining the condition and finally enter the values (with optional masks) that define your flow of interest.The starting page of the Web App shows the supported protocol headers.Click on the + sign to add a header to the stack in the desired order. Clicking on the - sign removes the last header of that type from the stack. If you want to reset the stack completely, just reload the page.Make sure you add all the headers before the one on which you want to match the traffic, because the offset calculation depends on it. Don’t forget the PW control word if it’s in use. ;)The outermost header should be the first one you select. Then click on the other headers to draw the protocol stack down to the innermost header that you want to match on. For example: In the Web App, it would look like this: Click on the checkbox next to the headers on which you want to match. You can click on more than one. A frame for each selected header will appear to the right. Enter the value/mask of your choice and click on the Submit button in the frame.Click on the Copy icon to copy the Offset/Value/Mask to the clipboard.When the Offset/Value/Mask is copied to the clipboard, use it to specify the conditions: packet-trace condition 1 Offset 53 Value 0x01 Mask 0xff packet-trace condition 3 Offset 60 Value 0xc0a84d02 Mask 0xffffff00 On ASR 9000, you can specify a maximum of three 4-octet Offset/Value/Mask sets.Packet Tracer StatusUse the following command to see the packet tracer status: show packet-trace status [detail] Use the show packet-trace status command to check which conditions were buffered so far by the pkt_trace_master process running on the active RP and what the aggregate packet tracer status is (active/inactive).RP/0/RSP0/CPU0#CORE-TOP#sh packet-trace status------------------------------------------------------------Packet Trace Master Process# Buffered Conditions# Interface HundredGigE0/5/0/6 1 offset 53 value 0x1 mask 0xff 2 offset 56 value 0xc0a84d01 mask 0xffffffff 3 offset 60 value 0xc0a84d02 mask 0xffffffff Status# ActiveRP/0/RSP0/CPU0#CORE-TOP#The detail option of the status command, “show packet-trace status detail”, can be used to see which processes are registered with the packet tracer framework on every card in the router. If the packet tracer status is Active, you can also verify which conditions were programmed in the data-path. 
Packet tracer conditions are broadcast to all participatig processes when the packet-trace start command is issued, but only the NPs that own the interfaces specified in the packet tracer condition are programming it in HW.RP/0/RSP0/CPU0#CORE-TOP#sh packet-trace status detail------------------------------------------------------------Location# 0/5/CPU0Available Counting Modules# 6 #1 spp_pd Last errors# #2 netio_pd Last errors# #3 prm_server_to Last errors# #4 spio_pd_LACP Last errors# #5 spio_pd_ARP Last errors# #6 spio_pd_LLDP Last errors#Available Marking Modules# 1 #1 prm_server_to Interfaces# 1 HundredGigE0/5/0/6 Conditions# 3 1 offset 53 value 0x1 mask 0xff 2 offset 56 value 0xc0a84d01 mask 0xffffffff 3 offset 60 value 0xc0a84d02 mask 0xffffffff Last errors#------------------------------------------------------------Packet Trace Master Process# Buffered Conditions# Interface HundredGigE0/5/0/6 1 offset 53 value 0x1 mask 0xff 2 offset 56 value 0xc0a84d01 mask 0xffffffff 3 offset 60 value 0xc0a84d02 mask 0xffffffff Status# Active------------------------------------------------------------Location# 0/RSP0/CPU0Available Counting Modules# 2 #1 spp_pd Last errors# #2 netio_pd Last errors#Available Marking Modules# 0RP/0/RSP0/CPU0#CORE-TOP#Packet Tracer ResultsUse the following command to see the packet tracer results#show packet-trace result [counter <name> [source <source>] [location <location>]]The simple form of this command shows the aggregate status of all non-zero counters. In particular, you can see the following# Location of the counter. Counter source. In case of packet tracing on ASR9k data-path, source represents the NP. Counter name. Counter type# drop or pass Last Attribute. With every counter update the owner of the counter may decide to provide an additional explanation along with the counter update. In case of drop counters, you should expect to see the drop reason. Counter valueSample output#RP/0/RSP0/CPU0#CORE-TOP#sh packet-trace resultsT# D - Drop counter; P - Pass counterLocation | Source | Counter | T | Last-Attribute | Count------------ ------------ ------------------------- - ---------------------------------------- ---------------0/5/CPU0 NP3 PACKET_MARKED P HundredGigE0_5_0_6 10000/5/CPU0 NP3 PACKET_TO_FABRIC P 10000/5/CPU0 NP0 PACKET_FROM_FABRIC P 10000/5/CPU0 NP0 PACKET_TO_INTERFACE P HundredGigE0_5_0_1 1000RP/0/RSP0/CPU0#CORE-TOP#When using the “show packet-trace results counter ~ option, you can see# the most recent 1023 increments of the given counter. Note that the packet tracer framework may receive a counter increment that is higher than one, depending on how the process that collects data-path counter from NP updates the packet tracer framework. the timestamp of the cunter increment any additional attribute communicated with that counter update.The asterisk next to the counter name shows the most recent update. 
This is important if the counter was updated more than 1023 times, in which case the oldest entries are overwritten.Sample output: RP/0/RSP0/CPU0#CORE-TOP#sh packet-trace results counter PACKET_MARKED source NP3 location 0/5/CPU0Tue Aug 25 16#21#00.616 UTCT# D - Drop Counter; P - Pass Counter; M - Marking Master Counter; * - Last Updated Index of Counter------------------------------------------Location# 0/5/CPU0------------------------------------------Timestamp | Source | Counter | T | Attribute | Count---------------------- ------------ ------------------------- - ---------------------------------------- ---------------Aug 25 09#37#22.230 NP3 PACKET_MARKED M HundredGigE0_5_0_6 2Aug 25 09#37#25.230 NP3 PACKET_MARKED M HundredGigE0_5_0_6 3Aug 25 09#37#28.230 NP3 PACKET_MARKED M HundredGigE0_5_0_6 4<..snip..>Aug 25 16#20#52.230 NP3 PACKET_MARKED M HundredGigE0_5_0_6 5Aug 25 16#20#55.230 NP3 PACKET_MARKED M HundredGigE0_5_0_6 7Aug 25 16#20#58.229 NP3 *PACKET_MARKED M HundredGigE0_5_0_6 6RP/0/RSP0/CPU0#CORE-TOP#Packet Tracer Counter DescriptionsUse the following command to see which counters are registered with the packet tracer framework and their descriptions: show packet-trace descriptions XR Embedded Packet Tracer Performance ConsiderationsThe performance impact of packet tracing is observed only on the network processor (NP) that owns the interfaces specified in the packet tracer condition. The headers of all packets received on those interfaces (and only on those interfaces) must be compared to the specified condition. All other elements in the data-path and punt/inject path that participate in the packet tracer framework only need to check the value of a single bit in the internal packet header. Hence you will not observe any performance impact on other elements in the packet processing path.The performance impact on Lightspeed and Lightspeed Plus is negligible even on the NP where the incoming packets are checked against the specified condition. For example, in a test setup the NP load remained at 28% both with and without packet tracing.The performance impact on Tomahawk line cards needs to be considered before enabling a packet tracing condition on interfaces owned by a Tomahawk NP. If the NP utilisation is ~20%, enabling the packet tracer may increase the NP utilisation to ~60%. Note that this performance impact is not observed if the Tomahawk NP is the egress NP.Use CasesThe following four use cases illustrate what you can expect to achieve with the packet tracer on ASR 9000.Use Case 1# Tracing On A P Router In L2VPNI’m sure you will agree that this is one of the most difficult issues to troubleshoot: packets pertaining to an L2VPN PW are dropped somewhere in the core. XR Embedded Packet Tracer is the only feature that allows you to troubleshoot this easily.The following image shows the topology, relevant configurations, protocol stacks, test flow (ICMP echo) direction and the point where packet tracing is applied: The protocol stack of any flow will depend very much on the network configuration. You can either derive it by walking the control path from the encapsulation PE to the P node where the packet tracer is enabled to see what kind of header rewrites are performed on each node (e.g. is the dot1q header stripped on the encapsulation PE, how many labels are pushed, is the PW control word enabled, etc.) 
or you can run a monitor session once on the P router to confirm the protocol stack on the flow of interest.Packet tracer condition in this use case was developed using the XR Embedded Packet Tracer - Condition Generator Web App.The Offset/Value/Mask triplets in below snapshot represent the following fields of the IPv4 packet encapsulated into L2VPN frame# ICMP protocol source IPv4 address destination IPv4 addressThe simple set of commands below was sufficient to confirm that all packets of this flow are successfuly sent towards the egress interface#RP/0/RSP0/CPU0#CORE-TOP#packet-trace stopRP/0/RSP0/CPU0#CORE-TOP#clear packet-trace conditions allRP/0/RSP0/CPU0#CORE-TOP#clear packet-trace counters allRP/0/RSP0/CPU0#CORE-TOP#packet-trace condition interface hu0/5/0/6RP/0/RSP0/CPU0#CORE-TOP#packet-trace condition 1 Offset 53 Value 0x01 Mask 0xffRP/0/RSP0/CPU0#CORE-TOP#packet-trace condition 2 Offset 56 Value 0xc0a84d01 Mask 0xffffffffRP/0/RSP0/CPU0#CORE-TOP#packet-trace condition 3 Offset 60 Value 0xc0a84d02 Mask 0xffffffffRP/0/RSP0/CPU0#CORE-TOP#show packet-trace status------------------------------------------------------------Packet Trace Master Process# Buffered Conditions# Interface HundredGigE0/5/0/6 1 offset 53 value 0x1 mask 0xff 2 offset 56 value 0xc0a84d01 mask 0xffffffff 3 offset 60 value 0xc0a84d02 mask 0xffffffff Status# InactiveRP/0/RSP0/CPU0#CORE-TOP#packet-trace startRP/0/RSP0/CPU0#CORE-TOP#show packet-trace status------------------------------------------------------------Packet Trace Master Process# Buffered Conditions# Interface HundredGigE0/5/0/6 1 offset 53 value 0x1 mask 0xff 2 offset 56 value 0xc0a84d01 mask 0xffffffff 3 offset 60 value 0xc0a84d02 mask 0xffffffff Status# ActiveRP/0/RSP0/CPU0#CORE-TOP#sh packet-trace resultsT# D - Drop counter; P - Pass counterLocation | Source | Counter | T | Last-Attribute | Count------------ ------------ ------------------------- - ---------------------------------------- ---------------0/5/CPU0 NP3 PACKET_MARKED P HundredGigE0_5_0_6 10000/5/CPU0 NP3 PACKET_TO_FABRIC P 10000/5/CPU0 NP0 PACKET_FROM_FABRIC P 10000/5/CPU0 NP0 PACKET_TO_INTERFACE P HundredGigE0_5_0_1 1000RP/0/RSP0/CPU0#CORE-TOP#Use Case 2# Packet Drops In NP MicrocodePacket tracer framework can explicity report packet drops if the packet drop was a decision made by any entity that participates in the packet tracer framework. This means that packets dropped by the NP microcode are explicitly reported.In this use case an egress policer was applied to the Hu0/5/0/1 interface of the P router#Note that, as the policer drop is a decision made by the NP microcode, you can see the PACKET_EGR_DROP counter increment in the output of show packet-trace result counter.You can also see that the additional information passed with the PACKET_EGR_DROP counter increment explains the drop reason. 
In this use case drop reason was RSV_DROP_QOS_DENY, which is a policer drop in NP microcode parlance.You can also see that the number of drops matches exactly the drops in the output of show policy-map interface hu0/5/0/1 output command.Further you can observe using the “show packet-trace results counter ~ the drop reason in all increments of `PACKET_EGR_DROP` counter.RP/0/RSP0/CPU0#CORE-TOP#sh packet-trace resultsT# D - Drop counter; P - Pass counterLocation | Source | Counter | T | Last-Attribute | Count------------ ------------ ------------------------- - ---------------------------------------- ---------------0/5/CPU0 NP3 PACKET_MARKED P HundredGigE0_5_0_6 10000/5/CPU0 NP3 PACKET_TO_FABRIC P 10000/5/CPU0 NP0 PACKET_FROM_FABRIC P 10000/5/CPU0 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 770/5/CPU0 NP0 PACKET_TO_INTERFACE P HundredGigE0_5_0_1 923RP/0/RSP0/CPU0#CORE-TOP#sh policy-map interface hu0/5/0/1 outputHundredGigE0/5/0/1 output# packet-trace-testClass exp4 Classification statistics (packets/bytes) (rate - kbps) Matched # 1000/144000 8 Transmitted # N/A Total Dropped # 77/11088 1 Policing statistics (packets/bytes) (rate - kbps) Policed(conform) # 923/132912 7 Policed(exceed) # 77/11088 1 Policed(violate) # 0/0 0 Policed and dropped # 77/11088Class class-default Classification statistics (packets/bytes) (rate - kbps) Matched # 25/2052 0 Transmitted # N/A Total Dropped # N/ARP/0/RSP0/CPU0#CORE-TOP#RP/0/RSP0/CPU0#CORE-TOP#sh packet-trace results counter PACKET_EGR_DROP location 0/5/CPU0T# D - Drop Counter; P - Pass Counter; M - Marking Master Counter; * - Last Updated Index of Counter------------------------------------------Location# 0/5/CPU0------------------------------------------Timestamp | Source | Counter | T | Attribute | Count---------------------- ------------ ------------------------- - ---------------------------------------- ---------------Aug 30 16#49#10.070 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 1Aug 30 16#49#13.069 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#49#16.068 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 3Aug 30 16#49#19.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#49#22.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#49#25.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#49#28.068 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 4Aug 30 16#49#31.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#49#34.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#49#37.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 3Aug 30 16#49#40.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#49#43.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#49#46.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#49#49.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 4Aug 30 16#49#52.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#49#55.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#49#58.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 3Aug 30 16#50#01.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#50#04.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 3Aug 30 16#50#07.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 1Aug 30 16#50#10.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 3Aug 30 16#50#13.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 1Aug 30 16#50#16.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 3Aug 30 16#50#19.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 3Aug 30 16#50#22.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 3Aug 30 16#50#25.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#50#28.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 1Aug 30 16#50#31.067 
NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 4Aug 30 16#50#34.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#50#37.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#50#40.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 3Aug 30 16#50#43.067 NP0 PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2Aug 30 16#50#46.068 NP0 *PACKET_EGR_DROP D RSV_DROP_QOS_DENY 2RP/0/RSP0/CPU0#CORE-TOP#Use Case 3# Packet Drops Outside Of NP MicrocodePackets dropped by any HW ASIC (e.g. FIA, Xbar) are not explicitly reported. The number of such packet drops can be inferred from the mismatch between the number of expected and counted packets. E.g. if PACKET_FROM_FABRIC count on egress NP is less than the PACKET_TO_FABRIC count on ingress NP, this indicates that packets are dropped in FIA. While this doesn’t directly explain the drop reason, it does indicate where are the drops happening and helps point further troubleshooting in the right direction.In this use case the egress policer on Hu0/5/0/1 was replaced with a shaper of the same rate#NP Traffic Manager is a hardware component of the ASR 9000 network processor. As the Traffic Manager receives the packet after the packet has already been processed by the NP microcode, drops by Traffic Manager are not reported to the packet tracer framework. They can, however, be observed in the QoS statistics.RP/0/RSP0/CPU0#CORE-TOP#sh policy-map interface hu0/5/0/1 outputSun Aug 30 17#09#45.008 UTCHundredGigE0/5/0/1 output# packet-trace-testClass exp4 Classification statistics (packets/bytes) (rate - kbps) Matched # 1000/1344000 606 Transmitted # 75/100800 38 Total Dropped # 925/1243200 0 Queueing statistics Queue ID # 122898 High watermark # N/A Inst-queue-len (packets) # 0 Avg-queue-len # N/A Taildropped(packets/bytes) # 925/1243200 Queue(conform) # 75/100800 38 Queue(exceed) # 0/0 0 RED random drops(packets/bytes) # 0/0Class class-default Classification statistics (packets/bytes) (rate - kbps) Matched # 2/176 0 Transmitted # 4/340 0 Total Dropped # 0/0 0 Queueing statistics Queue ID # 122899 High watermark # N/A Inst-queue-len (packets) # 0 Avg-queue-len # N/A Taildropped(packets/bytes) # 0/0 Queue(conform) # 4/340 0 Queue(exceed) # 0/0 0 RED random drops(packets/bytes) # 0/0RP/0/RSP0/CPU0#CORE-TOP#sh packet-trace resultsSun Aug 30 17#09#50.904 UTCT# D - Drop counter; P - Pass counterLocation | Source | Counter | T | Last-Attribute | Count------------ ------------ ------------------------- - ---------------------------------------- ---------------0/5/CPU0 NP3 PACKET_MARKED P HundredGigE0_5_0_6 10000/5/CPU0 NP3 PACKET_TO_FABRIC P 10000/5/CPU0 NP0 PACKET_FROM_FABRIC P 10000/5/CPU0 NP0 PACKET_TO_INTERFACE P HundredGigE0_5_0_1 1000RP/0/RSP0/CPU0#CORE-TOP#Use Case 4# Processing Of ICMP Echo RequstsThis use case demonstrates how packet tracer framework can be used to trace punted packets. 
It also shows the specific way in which the ASR 9000 responds to ICMP echo requests.Topology and relevant configuration: The applied packet tracer condition matches VLAN ID 2020 and IPv4 address 202.202.202.201: RP/0/RSP0/CPU0#CORE-TOP#sh packet-trace status------------------------------------------------------------Packet Trace Master Process# Buffered Conditions# Interface HundredGigE0/5/0/0 1 offset 14 value 0x7e4 mask 0xfff 2 offset 34 value 0xcacacac9 mask 0xffffffff Status# ActiveRP/0/RSP0/CPU0#CORE-TOP#Packet trace results: From this result you can see the packet flow of ICMP echo processing inside the ASR9k: You can also observe that NetIO on IOS XR routers reuses the buffer carrying the ICMP echo request packet when it generates the echo reply, thus preserving the packet tracer flag in the internal packet header.XR Embedded Packet Tracer Restrictions And LimitationsXR release 7.1.2# Packet marking is supported on 5th, 4th and 3rd generation line cards (aka Lightspeed Plus, Lightspeed, Tomahawk). Packet tracing is supported in the NP microcode of 5th, 4th and 3rd generation line cards. You can specify a maximum of three 4-octet Offset/Value/Mask sets. Embedded Packet Tracer is not HA (high availability) aware. Specified packet tracer conditions are not synchronised with the standby RP. By design, packet tracer conditions cannot be updated while packet tracing is active.Appendix 1# XR Embedded Packet Tracer Framework Architecture DetailsThe packet tracer master process, pkt_trace_master, running on the active route processor (RP) card is responsible for user interaction and for sending instructions to processes participating in the packet tracer framework. When the packet-trace start command is issued, the pkt_trace_master process broadcasts the specified conditions to all participating processes. The receiving process detects whether the condition applies to entities it owns and acts accordingly. That way only the NPs that own the interfaces specified in the packet tracer condition program the condition in HW.When packet tracing is enabled on an interface, the Network Processor (NP) checks whether received packets match the specified condition. If a packet matches the condition, a flag is set in the internal packet header. This flag in the internal packet header allows for the tracing of this packet on all elements in the data-path and punt-path inside the router.On every line card (LC) and route processor card (RP) a pkt_trace_agent process maintains the array of counters registered by each process participating in the packet tracer framework. Processes participating in the packet tracer framework communicate counter updates to the pkt_trace_agent process. When you issue the show packet-trace result command, the pkt_trace_master process on the active RP polls data from all cards and displays the non-zero counters.", "url": "/tutorials/xr-embedded-packet-tracer/", "author": "Aleksandar Vidakovic", "tags": "iosxr, cisco" } , "tutorials-troubleshoot-slow-bgp-convergence-due-to-rpl": { "title": " Troubleshoot Slow BGP Convergence Due to Suboptimal Route Policies on IOS-XR", "content": " Troubleshoot Slow BGP Convergence Due to Suboptimal Route Policies on IOS-XR Introduction Background Problem Solution Verification Conclusion Acknowledgements IntroductionThis document describes how to diagnose a slow BGP convergence issue on Cisco IOS® XR routers that happens because of non-optimal route policies.BackgroundBGP convergence time is determined by a number of factors. 
One of them is the time needed to process ingress or egress BGP updates with the configured route policies. There are multiple ways to write a route policy to do a specific task. An optimal way helps to improve BGP convergence, minimize potential traffic drops, and avoid temporary routing loops. Cisco IOS® XR includes a profiling tool which measures the time spent in a specific route policy in order to estimate its processing cost.Tests were performed in October 2021 using IOS-XR 6.7.3. The hardware used was an ASR 9000. Actual performance depends on multiple factors (size and way of writing of a route policy, regular expression patterns, the number of prefixes that go through the route policy, etc.)ProblemSlow BGP convergence is sometimes the result of route policies written in a non-optimal way.SolutionThere is a policy profiling tool for route policies which can be used without performance impact in order to measure the time spent in each statement of a route policy at a specific attach point. You can check the run time of the route policy at this specific attach point. By default, the profiling is enabled only for aggregate route policy stats.
router bgp 65000
 neighbor 10.0.54.6
  remote-as 65000
  update-source Loopback0
  address-family ipv4 unicast
   route-policy INGRESS-ROUTE-POLICY in
 !
 neighbor 10.0.54.11
  remote-as 65001
  ebgp-multihop 255
  update-source Loopback0
  address-family ipv4 unicast
   route-policy EGRESS-ROUTE-POLICY out
RP/0/RSP1/CPU0#XR1#show pcl protocol bgp speaker-0 neighbor-in-dflt default-IPv4-Uni-10.0.54.6 policy profile
Policy profiling data
Policy # INGRESS-ROUTE-POLICY
Pass # 1440233
Drop # 0
# of executions # 1440233
Total execution time # 57095msec
RP/0/RSP1/CPU0#XR1#show bgp ipv4 unicast neighbors 10.0.54.11 | i Update group
Update group# 0.3 Filter-group# 0.5 No Refresh request being processed
RP/0/RSP1/CPU0#XR1#
RP/0/RSP1/CPU0#XR1#show pcl protocol bgp speaker-0 neighbor-out-dflt default-IPv4-Uni-UpdGrp-0.3-Out policy profile
Policy profiling data
Policy # EGRESS-ROUTE-POLICY
Pass # 726751
Drop # 0
# of executions # 726751
Total execution time # 108099msec
You can see the cumulative amount of time spent processing INGRESS-ROUTE-POLICY and EGRESS-ROUTE-POLICY.The profiling can be applied to an ingress or egress route policy at any attach point.RP/0/RSP1/CPU0#XR1#show pcl protocol bgp speaker-0 ? 
debug-policy Attachpoint name permnet Attachpoint name import Attachpoint name export Attachpoint name interafi-import Attachpoint name source-rt Attachpoint name interafi-export Attachpoint name retain-rt Attachpoint name addpath Attachpoint name neighbor-in-dflt Attachpoint name neighbor-in-vrf Attachpoint name neighbor-out-dflt Attachpoint name neighbor-out-vrf Attachpoint name orf-dflt Attachpoint name orf-vrf Attachpoint name dampening-dflt Attachpoint name dampening-vrf Attachpoint name default-originate-dflt Attachpoint name default-originate-vrf Attachpoint name clear-policy Attachpoint name show-policy-node0_RSP1_CPU0 Attachpoint name aggregation-dflt Attachpoint name aggregation-vrf Attachpoint name nexthop Attachpoint name allocate-label Attachpoint name label-mode Attachpoint name l2vpn-import Attachpoint name l2vpn-export Attachpoint name redistribution-dflt Attachpoint name redistribution-vrf Attachpoint name rib-install-dflt Attachpoint name rib-install-vrf Attachpoint name network-dflt Attachpoint name network-vrf Attachpoint name redistribution-dflt Attachpoint name redistribution-vrf Attachpoint name rib-install-dflt Attachpoint name rib-install-vrf Attachpoint name network-dflt Attachpoint name network-vrf Attachpoint name l2vpn-export-mp2mp Attachpoint name l2vpn-export-vfi Attachpoint name l2vpn-export-evi Attachpoint name l2vpn-export-mspw Attachpoint name l2vpn-export-instance Attachpoint name WORD Attachpoint nameRP/0/RSP1/CPU0#XR1#You can clear stats as needed#RP/0/RSP1/CPU0#XR1#clear pcl protocol bgp speaker-0 neighbor-in-dflt default-IPv4-Uni-10.0.54.6 policy profileRP/0/RSP1/CPU0#XR1#clear pcl protocol bgp speaker-0 neighbor-out-dflt default-IPv4-Uni-UpdGrp-0.3-Out policy profileIf you enable debug pcl profile detail, then you get detailed stats per route policy entry.RP/0/RSP1/CPU0#XR1#debug pcl profile detailThese outputs were collected after the full Internet BGP table scale was received and propagated further.RP/0/RSP1/CPU0#XR1#show pcl protocol bgp speaker-0 neighbor-in-dflt default-IPv4-Uni-10.0.54.6 policy profilePolicy profiling dataPolicy # INGRESS-ROUTE-POLICYPass # 720100Drop # 0# of executions # 720100Total execution time # 222788msec !!!! about 3.7 minutes to process ingress updatesNode Id Num visited Exec time Policy engine operation--------------------------------------------------------------------------------PXL_0_1 720100 221796msec if as-path aspath-match ... then <truePath>PXL_0_3 3525 3msec set local-preference 150 3525 0msec <end-policy/> </truePath> <falsePath>PXL_0_2 716575 225msec set local-preference 50 716575 82msec <end-policy/> </falsePath>RP/0/RSP1/CPU0#XR1#show pcl protocol bgp speaker-0 neighbor-out-dflt default-IPv4-Uni-UpdGrp-0.3-Out policy profilePolicy profiling dataPolicy # EGRESS-ROUTE-POLICYPass # 720105Drop # 0# of executions # 720105Total execution time # 221975msec !!!! about 3.7 minutes to process egress updatesNode Id Num visited Exec time Policy engine operation--------------------------------------------------------------------------------PXL_0_1 720105 3005msec if as-path aspath-match ... then <truePath>PXL_0_5 0 0msec set med 70 0 0msec <end-policy/> </truePath> <falsePath>PXL_0_2 720105 218008msec if as-path aspath-match ... 
then <truePath>PXL_0_3 25 0msec set med 80 25 0msec <end-policy/> </truePath> <falsePath>PXL_0_4 720080 145msec set med 90 720080 76msec <end-policy/> </falsePath> </falsePath>RP/0/RSP1/CPU0#XR1# As you can see, line PXL_0_1 for INGRESS-ROUTE-POLICY and PXL_0_2 for EGRESS-ROUTE-POLICY are especially time consuming and slow the convergence down.If you correlate them with the route policies, then you can see that AS-PATH-SET-11 in INGRESS-ROUTE-POLICY and AS-PATH-SET-22 in EGRESS-ROUTE-POLICY cause the problem. Each AS path set is made of 100 regular expression lines which is a pretty inefficient way to write a policy since it does not leverage any possible optimization or the power of regex.route-policy INGRESS-ROUTE-POLICY if as-path in AS-PATH-SET-11 then set local-preference 150 else set local-preference 50 endifend-policyroute-policy EGRESS-ROUTE-POLICY if as-path in AS-PATH-SET-21 then set med 70 elseif as-path in AS-PATH-SET-22 then set med 80 else set med 90 endifend-policyas-path-set AS-PATH-SET-11 ios-regex '^65101 65201_', ios-regex '^65102 65202_', ios-regex '^65103 65203_', ios-regex '^65104 65204_', ios-regex '^65105 65205_', --- removed 90 similar lines --- ios-regex '^65195 65295_', ios-regex '^65196 65296_', ios-regex '^65197 65297_', ios-regex '^65198 65298_', ios-regex '^65199 65299_'end-setas-path-set AS-PATH-SET-21 ios-regex '^$'end-setas-path-set AS-PATH-SET-22 ios-regex '^65169(_65169)*$', ios-regex '^65392(_65392)*$', ios-regex '^65133(_65133)*$', ios-regex '^65231(_65231)*$', ios-regex '^65161(_65161)*$', --- removed 90 similar lines --- ios-regex '^65281(_65281)*$', ios-regex '^65336(_65336)*$', ios-regex '^65238(_65238)*$', ios-regex '^65381(_65381)*$', ios-regex '^65103(_65103)*$'end-setIn order to improve policy performance, you can evaluate configuration of AS Path sets with native as-path match operation instead of regular expression. Alternatively, you can use regular expressions inside route policies in a collapsed manner hence reduce the number of regular expression lines used.This table lists AS path match criteria offered by Route Policy Language (RPL). The native matching functions use a binary matching algorithm which offers better performance compared to the regular expression match engine. Most of the common ios-regex match scenarios could be written with them (or their combinations). Command Syntax Description is-local Determines if the router (or another router within this autonomous system or confederation) originated the route length Performs a conditional check based on the length of the AS path neighbor-is Tests the autonomous system number or numbers at the head of the AS path against a sequence of one or more integral values or parameters. originates-from Tests an AS path against the AS sequence from the start with the AS number that originated a route. passes-through Tests to learn if the specified integer or parameter appears anywhere in the AS path or if the sequence of integers and parameters appears. unique-length Performs specific checks based on the length of the AS path ignoring duplicates VerificationThese are rearranged route policies written with the help of native match criteria. 
This leads to substantially reduced processing time.route-policy INGRESS-ROUTE-POLICY if as-path in AS-PATH-SET-11 then set local-preference 150 else set local-preference 50 endifend-policyroute-policy EGRESS-ROUTE-POLICY if as-path is-local then set med 70 elseif as-path in AS-PATH-SET-22 and as-path unique-length is 1 then set med 80 else set med 90 endifend-policyas-path-set AS-PATH-SET-11 neighbor-is '65101 65201', neighbor-is '65102 65202', neighbor-is '65103 65203', neighbor-is '65104 65204', neighbor-is '65105 65205',--- removed 90 similar lines --- neighbor-is '65195 65295', neighbor-is '65196 65296', neighbor-is '65197 65297', neighbor-is '65198 65298', neighbor-is '65199 65299'end-setas-path-set AS-PATH-SET-22 originates-from '65169', originates-from '65392', originates-from '65133', originates-from '65231', originates-from '65161',--- removed 90 similar lines --- originates-from '65281', originates-from '65336', originates-from '65238', originates-from '65381', originates-from '65103'end-setThese outputs are collected after the full Internet BGP table scale is received and propagated further.RP/0/RSP1/CPU0#XR1#show pcl protocol bgp speaker-0 neighbor-in-dflt default-IPv4-Uni-10.0.54.6 policy profilePolicy profiling dataPolicy # INGRESS-ROUTE-POLICYPass # 720100Drop # 0# of executions # 720100Total execution time # 9612msec !!!! about 10 seconds to process ingress updatesNode Id Num visited Exec time Policy engine operation--------------------------------------------------------------------------------PXL_0_1 720100 8540msec if as-path aspath-match ... then <truePath>PXL_0_3 7128 2msec set local-preference 150 7128 1msec <end-policy/> </truePath> <falsePath>PXL_0_2 712972 276msec set local-preference 50 712972 80msec <end-policy/> </falsePath>RP/0/RSP1/CPU0#XR1#show pcl protocol bgp speaker-0 neighbor-out-dflt default-IPv4-Uni-UpdGrp-0.3-Out policy profilePolicy profiling dataPolicy # EGRESS-ROUTE-POLICYPass # 720126Drop # 0# of executions # 720126Total execution time # 12399msec !!!! about 12 seconds to process egress updatesNode Id Num visited Exec time Policy engine operation--------------------------------------------------------------------------------PXL_0_1 720126 190msec if as-path is-local then <truePath>PXL_0_7 0 0msec set med 70 0 0msec <end-policy/> </truePath> <falsePath>PXL_0_2 720126 11190msec if as-path aspath-match ... then <truePath>PXL_0_4 262734 65msec if as-path unique-length is 1 then <truePath>PXL_0_5 25 0msec set med 80 25 0msec <end-policy/> </truePath> <falsePath>PXL_0_6 720101 164msec set med 90 720101 57msec <end-policy/> </falsePath> </truePath> <falsePath> <reference>GOTO # PXL_0_6 </reference> </falsePath> </falsePath>RP/0/RSP1/CPU0#XR1# Alternatively, ios-regex lines could be collapsed. 
This also helps to improve the performance.route-policy INGRESS-ROUTE-POLICY if as-path in (ios-regex '^(65101_65201|65102_65202|65103_65203|65104_65204|65105_65205|65106_65206|65107_65207|65108_65208|65109_65209|65110_65210)') then set local-preference 150 endif if as-path in (ios-regex '^(65111_65211|65112_65212|65113_65213|65114_65214|65115_65215|65116_65216|65117_65217|65118_65218|65119_65219|65120_65220)') then set local-preference 150 endif if as-path in (ios-regex '^(65121_65221|65122_65222|65123_65223|65124_65224|65125_65225|65126_65226|65127_65227|65128_65228|65129_65229|65130_65230)') then set local-preference 150 endif if as-path in (ios-regex '^(65131_65231|65132_65232|65133_65233|65134_65234|65135_65235|65136_65236|65137_65237|65138_65238|65139_65239|65140_65240)') then set local-preference 150 endif if as-path in (ios-regex '^(65141_65241|65142_65242|65143_65243|65144_65244|65145_65245|65146_65246|65147_65247|65148_65248|65149_65249|65150_65250)') then set local-preference 150 endif if as-path in (ios-regex '^(65151_65251|65152_65252|65153_65253|65154_65254|65155_65255|65156_65256|65157_65257|65158_65258|65159_65259|65160_65260)') then set local-preference 150 endif if as-path in (ios-regex '^(65161_65261|65162_65262|65163_65263|65164_65264|65165_65265|65166_65266|65167_65267|65168_65268|65169_65269|65170_65270)') then set local-preference 150 endif if as-path in (ios-regex '^(65171_65271|65172_65272|65173_65273|65174_65274|65175_65275|65176_65276|65177_65277|65178_65278|65179_65279|65180_65280)') then set local-preference 150 endif if as-path in (ios-regex '^(65181_65281|65182_65282|65183_65283|65184_65284|65185_65285|65186_65286|65187_65287|65188_65288|65189_65289|65190_65290)') then set local-preference 150 else set local-preference 50 endifend-policyroute-policy EGRESS-ROUTE-POLICY if as-path in (ios-regex '^$') then set med 70 endif if as-path in (ios-regex '^65169(_65169)*$|^65392(_65392)*$|^65133(_65133)*$|^65231(_65231)*$|^65161(_65161)*$') then set med 80 endif if as-path in (ios-regex '^65354(_65354)*$|^65331(_65331)*$|^65342(_65342)*$|^65295(_65295)*$|^65208(_65208)*$') then set med 80 endif if as-path in (ios-regex '^65149(_65149)*$|^65350(_65350)*$|^65115(_65115)*$|^65300(_65300)*$|^65322(_65322)*$') then set med 80 endif if as-path in (ios-regex '^65102(_65102)*$|^65329(_65329)*$|^65237(_65237)*$|^65218(_65218)*$|^65153(_65153)*$') then set med 80 endif if as-path in (ios-regex '^65263(_65263)*$|^65116(_65116)*$|^65112(_65112)*$|^65114(_65114)*$|^65378(_65378)*$') then set med 80 endif if as-path in (ios-regex '^65105(_65105)*$|^65296(_65296)*$|^65211(_65211)*$|^65317(_65317)*$|^65115(_65115)*$') then set med 80 endif if as-path in (ios-regex '^65371(_65371)*$|^65214(_65214)*$|^65325(_65325)*$|^65354(_65354)*$|^65384(_65384)*$') then set med 80 endif if as-path in (ios-regex '^65220(_65220)*$|^65277(_65277)*$|^65219(_65219)*$|^65213(_65213)*$|^65336(_65336)*$') then set med 80 endif if as-path in (ios-regex '^65249(_65249)*$|^65112(_65112)*$|^65314(_65314)*$|^65385(_65385)*$|^65152(_65152)*$') then set med 80 endif if as-path in (ios-regex '^65196(_65196)*$|^65252(_65252)*$|^65162(_65162)*$|^65271(_65271)*$|^65357(_65357)*$') then set med 80 endif if as-path in (ios-regex '^65317(_65317)*$|^65360(_65360)*$|^65198(_65198)*$|^65256(_65256)*$|^65246(_65246)*$') then set med 80 endif if as-path in (ios-regex '^65356(_65356)*$|^65359(_65359)*$|^65302(_65302)*$|^65118(_65118)*$|^65346(_65346)*$') then set med 80 endif if as-path in (ios-regex 
'^65225(_65225)*$|^65307(_65307)*$|^65313(_65313)*$|^65189(_65189)*$|^65288(_65288)*$') then set med 80 endif if as-path in (ios-regex '^65381(_65381)*$|^65292(_65292)*$|^65145(_65145)*$|^65325(_65325)*$|^65361(_65361)*$') then set med 80 endif if as-path in (ios-regex '^65156(_65156)*$|^65184(_65184)*$|^65367(_65367)*$|^65302(_65302)*$|^65290(_65290)*$') then set med 80 endif if as-path in (ios-regex '^65351(_65351)*$|^65116(_65116)*$|^65341(_65341)*$|^65123(_65123)*$|^65258(_65258)*$') then set med 80 endif if as-path in (ios-regex '^65397(_65397)*$|^65302(_65302)*$|^65188(_65188)*$|^65187(_65187)*$|^65358(_65358)*$') then set med 80 endif if as-path in (ios-regex '^65217(_65217)*$|^65107(_65107)*$|^65203(_65203)*$|^65377(_65377)*$|^65381(_65381)*$') then set med 80 endif if as-path in (ios-regex '^65219(_65219)*$|^65308(_65308)*$|^65364(_65364)*$|^65277(_65277)*$|^65396(_65396)*$') then set med 80 endif if as-path in (ios-regex '^65281(_65281)*$|^65336(_65336)*$|^65238(_65238)*$|^65381(_65381)*$|^65103(_65103)*$') then set med 80 else set med 90 endifend-policyThese outputs were collected after the full Internet BGP table scale is received and propagated further.RP/0/RSP1/CPU0#XR1#show pcl protocol bgp speaker-0 neighbor-in-dflt default-IPv4-Uni-10.0.54.6 policy profilePolicy profiling dataPolicy # INGRESS-ROUTE-POLICYPass # 720100Drop # 0# of executions # 720100Total execution time # 30119msec !!!! about 30 seconds to process ingress updatesNode Id Num visited Exec time Policy engine operation--------------------------------------------------------------------------------PXL_0_1 720100 4434msec if as-path aspath-match ... then <truePath>PXL_0_2 361 0msec set local-preference 150PXL_0_3 720100 3039msec if as-path aspath-match ... then <truePath>--- removed lines ---GOTO # PXL_0_3 </reference> </falsePath>RP/0/RSP1/CPU0#XR1#show pcl protocol bgp speaker-0 neighbor-out-dflt default-IPv4-Uni-UpdGrp-0.3-Out policy profilePolicy profiling dataPolicy # EGRESS-ROUTE-POLICYPass # 720110Drop # 0# of executions # 720110Total execution time # 106566msec !!!! about 1.8 minutes to process egress updatesNode Id Num visited Exec time Policy engine operation--------------------------------------------------------------------------------PXL_0_1 720110 2958msec if as-path aspath-match ... then <truePath>PXL_0_2 0 0msec set med 70PXL_0_3 720110 5222msec if as-path aspath-match ... then <truePath>PXL_0_4 3 0msec set med 80PXL_0_5 720110 4979msec if as-path aspath-match ... then <truePath>PXL_0_6 set med 80--- removed lines ---GOTO # PXL_0_3 </reference> </falsePath>RP/0/RSP1/CPU0#XR1#ConclusionThe performance of a route policy can be improved with native as-path match or collapsed regular expression patterns.AcknowledgementsI would like to thank Serge Krier for the original creation of this article.", "url": "/tutorials/troubleshoot-slow-bgp-convergence-due-to-rpl/", "author": "Vladimir Deviatkin", "tags": "iosxr, BGP, Troubleshooting" } , "tutorials-asr9k-inline-map-t-border-relay-configuration-and-troubleshooting": { "title": "ASR 9000 Inline MAP-T Border Relay Configuration and Troubleshooting", "content": " On this page Introduction Scenario Border Router Address Translation Configuration Troubleshooting PBR Translation NPU counters Conclusion Additional Resources# IntroductionASR9k can act as a Border Relay MAP-T function (explained in RFC 7599) without the need of a Service Module Line Card. It is supported with 4th and 5th generation of Ethernet Line Cards. 
Please see the Cisco documentation for the Details and Restrictions to configure it. Each MAP-T instance creates a Policy Based Routing rule which steers the traffic from ingress service-inline interface to the CGv6 application which removes the need for ISM/VSM service module. This Tutorial will provide the step-by-step configuration and Troubleshooting approach to enable this feature and verify it is working correctly.ScenarioIn the current Example we will consider the following scenario#Host with IPv4 address 166.1.32.1 (port 2321) in the Private IPv4 domain needs to connect to Internet server with IPv4 address 8.8.8.8 (port 2123) through the pure IPv6 Backbone. From the connectivity perspective IP flow 166.1.32.1 <-> 8.8.8.8 will be translated twice prior (MAPT-CE) and after (MAPT-BR) IPv6 Backbone.We will explore the ASR9k role as MAP-T Inline (no Service Modules) Border Router. MAP-T CE functionality is not considered in this Tutorial (not supported by ASR9000).We will use 3601#d01#3344##/48 subnet to translate the Internet Host address (external domain) and 2701#d01#3344##/48 for Private host translation (CPE domain). Next section is going to explain the magic behind the translation.Border Router Address TranslationIn IPv4→IPv6 translation, destination address and port are translated based on RFC 7599 and RFC 7597 and source address on RFC 6052.In IPv6→IPv4 translation, source address and port are translated based on RFC 7597 and RFC 7599 and destination address on RFC 6052.Lets examine this based on IPv4 to IPv6 translation (IPv6 to IPv4 will be similar)# Destination Address Translation (166.1.32.1 → 2701#d01#3344##)#We need to define key numbers for Port-Mapping Algorithm (rfc7597). Based on our scenario configuration (see Configuration Section) those will be# Parameter Value Calculation k 6 2^k = 64 (sharing ratio) m 3 2^m = 8 (contiguous-ports) a 7 16-k-m A 512 2^(16-a) PSID 0x22 Port 2321 = 100100010001 = 100100010001 (m=3) => 100100010 (k=6) IPv4 Suffix 00000001 32 - 24 (IPv4 prefix length) = 8 => 166.1.32.00000001 EA bit 00000001100010 IPv4 Suffix + PSID = 8 + 6 = 14 => 00000001100010 Subnet 2 64 - 48 (IPv6 prefix length) - 14 (EA bits) = 2 IPv6 Address 2701#d01#3344#188#0#a601#2001#22 IPv6 CPE suffix + EA BITS + Subnet + 16 bit 0’s + 32 bit ipv4 address + 16 bit PSID => 2701#d01#3344#00000001100010 00#0#a601#2001#22 Source Address Translation (8.8.8.8 → 3601#d01#3344)#This translation is more straightforward and defined by RFC 6052# /48 IPv6 prefix | v4(16) | U | (16) | suffix | Thus final prefix will look like# 3601#d01#3344#808#8#800##I’m using the traffic generator for this scenario and based on translations above my packets will look like#-IPv4 to IPv6#-IPv6 to IPv4#ConfigurationThis is the configuration template for Inline MAP-T# configure service cgv6 instance-name service-inline interface type interface-path-id service-type map-t-cisco instance-name cpe-domain ipv4 prefix length value cpe-domain ipv6 vrf vrf-name cpe-domain ipv6 prefix length value sharing ratio number contiguous-ports number cpe-domain-name cpe-domain-name ipv4 prefix address/prefix ipv6 prefix address/prefix ext-domain-name ext-domain-name ipv6 prefix address/prefix ipv4-vrf vrf-nameNote# CPE V6 Prefix /64 and with V4 Prefix /24 are the best for quick testing as there are no port-sharing in that case and finding correct IP syntax is much easier (see details in the config section).Within this tutorial we will focus on the following configuration and explain it in more details# configure service cgv6 
CGV6-MAP-T service-inline interface TenGigE0/6/0/0/0 service-inline interface TenGigE0/6/0/0/1 service-type map-t-cisco MAPT-1 cpe-domain ipv6 vrf default cpe-domain ipv6 prefix length 48 cpe-domain ipv4 prefix length 24 sharing-ratio 64 contiguous-ports 8 cpe-domain-name cpe1 ipv4-prefix 166.1.32.0 ipv6-prefix 2701#d01#3344## ext-domain-name ext1 ipv6-prefix 3601#d01#3344##/48 ipv4-vrf defaultLets verify this configuration in more details# Announce the cgv6 service and select the proper name#\tservice cgv6 CGV6-MAP-T Traffic coming from the interfaces configured under the service will bethe subject for applying the MAP-T rules. In our example 4th generation Tomahawk Line Card is used#\tservice-inline interface TenGigE0/6/0/0/0\tservice-inline interface TenGigE0/6/0/0/1 Next configure the CPE domain to specify corresponding parameters. Please mind the domain name as it will be used in troubleshooting commands#\tservice-type map-t-cisco MAPT-1 Specify the CPE domain parameters# We can use either default or single non-default VRF for IPv6 traffic. After IPv4 to IPv6 translation, packet will be forwarded to that VRF#\tcpe-domain ipv6 vrf default Select the prefix length both for IPv4 and IPv6. This is needed to define if additional information required for sharing-ratio and contiguous-ports which are used in port/IP translation verification (based on RFC 7599 and 7597). If IPv6 prefix length is /64 or /128 and IPv4 length is /32 then sharing-ratio and contiguous-ports will not be considered in the translation and may not to be configured. Sharing-ratio and contiguous port will define k and m values explained above.\tcpe-domain ipv6 prefix length 64\tcpe-domain ipv4 prefix length 24\tsharing-ratio 256\tcontiguous-ports 8\t Finally we configure the translation rules# IPv4 to IPv6 rules are defined by the cpe-domain config and after translation traffic will go out of the IPv6 VRF defined above. In particular example, traffic destined to 166.1.32.0/24 subnet will be translated to 2701#d01#3344##/48 subnet and send out VRF default (as configured in our example) based on the routing rule (see step 6 below)#\tcpe-domain-name cpe1 ipv4-prefix 166.1.32.0 ipv6-prefix 2701#d01#3344## IPv6 to IPv4 rules are defined based on ext-domain config. CGN will automatically derive corresponding IPv4 address from the Source and Destination addresses based on the translation algorithm. 
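To make the ext-domain side concrete before moving on, here is a minimal Python sketch of the RFC 6052 /48 embedding and extraction described in the Border Router Address Translation section (addresses written in standard colon notation; this only illustrates the algorithm, it is not the CGv6 implementation).

import ipaddress

def rfc6052_embed(prefix48: str, ipv4: str) -> ipaddress.IPv6Address:
    # /48 layout: | 48-bit prefix | v4 bits 0..15 | u=0 octet | v4 bits 16..31 | zero suffix |
    net = ipaddress.IPv6Network(prefix48)
    v4 = int(ipaddress.IPv4Address(ipv4))
    v6 = int(net.network_address)
    v6 |= (v4 >> 16) << 64       # IPv4 bits 0..15  -> IPv6 bits 48..63
    v6 |= (v4 & 0xFFFF) << 40    # IPv4 bits 16..31 -> IPv6 bits 72..87 (bits 64..71 stay 0, the "u" octet)
    return ipaddress.IPv6Address(v6)

def rfc6052_extract(v6: str) -> ipaddress.IPv4Address:
    # Reverse operation: pull the embedded IPv4 back out of a /48-style address.
    x = int(ipaddress.IPv6Address(v6))
    return ipaddress.IPv4Address((((x >> 64) & 0xFFFF) << 16) | ((x >> 40) & 0xFFFF))

print(rfc6052_embed("3601:d01:3344::/48", "8.8.8.8"))    # 3601:d01:3344:808:8:800::
print(rfc6052_extract("3601:d01:3344:808:8:800::"))      # 8.8.8.8

Running it reproduces the external-side prefix used by the traffic generator in this scenario, and the extraction shows how the BR recovers 8.8.8.8 from the IPv6 destination.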
In the example below traffic towards 3601#d01#3344##/48 will find the portion of IP representing the IPv4 host and port and then route it accordingly based on the routing rule in corresponding VRF#\text-domain-name ext1 ipv6-prefix 3601#d01#3344##/48 ipv4-vrf default Make sure you have Routing Entry and Adjacency for the translated addresses (otherwise traffic will be lost after translation)#~show cef 8.8.8.8~ 8.8.8.0/24, version 134, internal 0x1000001 0x0 (ptr 0x721fc418) [1], 0x0 (0x721bd668), 0xa20 (0x726cd688) Updated Apr 27 15#08#18.684 remote adjacency to TenGigE0/6/0/0/1 Prefix Len 24, traffic index 0, precedence n/a, priority 3 via 192.168.1.2/32, TenGigE0/6/0/0/1, 4 dependencies, weight 0, class 0 [flags 0x0] path-idx 0 NHID 0x0 [0x72a6bb08 0x0] next hop 192.168.1.2/32 remote adjacency local label 24028 labels imposed {None}~show cef ipv6 2701#d01#3344##~2701#d01#3344##/64, version 10, internal 0x1000001 0x0 (ptr 0x724f27ac) [1], 0x0 (0x724bd9f0), 0x0 (0x0) Updated Apr 27 14#47#31.296 remote adjacency to TenGigE0/6/0/0/0 Prefix Len 64, traffic index 0, precedence n/a, priority 3 via a##2/128, TenGigE0/6/0/0/0, 4 dependencies, weight 0, class 0 [flags 0x0] path-idx 0 NHID 0x0 [0x7344d0c8 0x0] next hop a##2/128 remote adjacencyTroubleshootingPBR Verifying MAP-T we first need to make sure that corresponding PBR policies have been applied correctly. First we will check the policy-map created for it automatically# “show policy-map transient type pbr” policy-map type pbr CGN_0\thandle#0x38000002 \ttable description# L3 IPv4 and IPv6 \tclass handle#0x78000003 sequence 1 \t match destination-address ipv4 166.1.32.0 255.255.255.0 \t punt service-node type cgn index 1001 app-id 0 local-id 0x1389\t !\tclass handle#0x78000004 sequence 1\t match destination-address ipv6 3601#d01#3344##/48 \t punt service-node type cgn index 3001 app-id 0 local-id 0x1b59\t !\tclass handle#0xf8000002 sequence 4294967295 (class-default) \t ! end-policy-mapWe can see three classes created (1 for each domain rule plus default class for non-matching traffic)# 0x78000003 for IPv4 to IPv6 translation, 0x78000004 for for IPv6 to IPv4 translation and default class 0xf8000002. Missing any of the classes or not seeing proper IP addresses associated with those would mean that configuration did no apply correctly. One recommendation would be to try removing configuration for the whole instance and applying it back. Before we check the PBR programming we need to make sure that corresponding Null0 routes have been created for traffic Destination Addresses to be translated. That is done for PBR to be able to intercept this traffic to send further for translation. As we see from policy-map output above we need to have Null0 for prefixes 166.1.32.0/24 and 3601#d01#3344##/48. This routing entrees will be created automatically by the system#~show route 166.1.32.0/24~Routing entry for 166.1.32.0/24 Known via ~connected~, distance 1, metric 0 Installed Apr 27 13#49#38.809 for 00#18#05 Routing Descriptor Blocks directly connected, via Null0 Route metric is 0 No advertising protos. ~show route ipv6 3601#d01#3344##/48~Routing entry for 3601#d01#3344##/48 Known via ~connected~, distance 0, metric 0 (connected) Installed Apr 27 13#49#38.904 for 00#18#21 Routing Descriptor Blocks directly connected, via Null0 Route metric is 0 No advertising protos. 
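Before moving to the hardware programming, the transient policy-map output can also be sanity-checked off-box. Below is a minimal sketch, assuming you saved the output of "show policy-map transient type pbr" to a local file named pbr.txt; the file name and the script itself are illustrative only, not a Cisco tool.

import re

# Paste the "show policy-map transient type pbr" output into pbr.txt first.
text = open("pbr.txt").read()

v4_rule = re.search(r"match destination-address ipv4 (\S+) (\S+)", text)
v6_rule = re.search(r"match destination-address ipv6 (\S+)", text)
cgn_punts = re.findall(r"punt service-node type cgn", text)

print("IPv4->IPv6 match:", v4_rule.group(1, 2) if v4_rule else "MISSING")
print("IPv6->IPv4 match:", v6_rule.group(1) if v6_rule else "MISSING")
print("cgn punt actions:", len(cgn_punts), "(expect 2 for one MAP-T instance)")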
Information from the policy-map is used further to program corresponding rules in the Hardware#~show pbr-pal ipolicy CGN_0 detail location 0/6/CPU0~ policy name # CGN_0 number of iclasses # 3 number of VMRs # 3 ucode format # 13 vmr id for NP0 # 3 interface count # 2 interface list # Te0/6/0/0/0 Te0/6/0/0/1~show pbr-pal ipolicy CGN_0 iclass all vmr location 0/6/CPU0~Policy name# CGN_0iclass handle # 0x78000003 ifh # x protocol # x source ip addr # x dest ip addr # 166.1.32.0/255.255.255.0 source port # x dest port # x DSCP # x ethertype # x vlan id # x vlan cos # x source mac # x dest mac # x packet length # x result # 110000ac 8cc60001 65030003 e9030013 89000000 00000000 00000000 00000000iclass handle # 0x78000004 ifh # x protocol # x source ipv6 addr # x dest ipv6 addr # 3601#d01#3344##/48 source port # x dest port # x DSCP # x ethertype # x vlan id # x vlan cos # x source mac # x dest mac # x packet length # x result # 110000ae 8cc60001 6503000b b903001b 59000000 00000000 00000000 00000000iclass handle # 0xf8000002 ifh # x protocol # x source ip addr # x dest ip addr # x source port # x dest port # x DSCP # x ethertype # x vlan id # x vlan cos # x source mac # x dest mac # x packet length # x result # 11000050 8dc60000 00000000 00000000 00000000 00000000 00000000 00000000Make sure, that both IPv4 and IPv6 addresses are listed in corresponding VMRs. If not then verify if step (1) info above is correct and all interfaces are programmed for the Line Card (highlighted “interface list” above). Removing and re-applying service instance configuration can be helpful as well once all errors are fixed. Once traffic has started we can see counters in the corresponding classes (you can match the iclass id with the corresponding policy-map class to find the translation direction)#~show pbr-pal ipolicy CGN_0 iclass all stats loc 0/6/CPU0~Policy name# CGN_0 iclass packets/bytes drop packets/drop bytes 78000006 1494879/149487900 0/0 78000007 18391078/1839107800 0/0 f8000005 0/0 0/0Seeing the counters in the corresponding classes means that PBR properly intercepted the traffic and sent it to Service Engine for TranslationTranslationOnce traffic hit the correct PBR class it is being sent for translation where system will do its magic to transform the Source/Destination IP addresses and ports into the new addresses. See the “Border Router Address Translation” section above for the details. 
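As a sanity check of the expected destination address, the sketch below reproduces the CPE-side derivation from the Border Router Address Translation section for this tutorial's values (standard colon notation; an illustrative rendering of the RFC 7597/7599 rules, not the NP microcode). It returns the IPv6 destination the BR expects for 166.1.32.1 port 2321.

import ipaddress

def mapt_cpe_address(ipv4: str, port: int, rule_v4: str = "166.1.32.0/24",
                     rule_v6: str = "2701:d01:3344::/48",
                     sharing_ratio: int = 64, contiguous_ports: int = 8) -> ipaddress.IPv6Address:
    """Rebuild the MAP-T IPv6 address for a CPE-domain IPv4:port (illustrative sketch)."""
    k = sharing_ratio.bit_length() - 1        # 64 -> k = 6 (PSID bits)
    m = contiguous_ports.bit_length() - 1     # 8  -> m = 3 (low port bits)
    psid = (port >> m) & ((1 << k) - 1)       # 2321 -> 0x22
    v4net = ipaddress.IPv4Network(rule_v4)
    v6net = ipaddress.IPv6Network(rule_v6)
    suffix_len = 32 - v4net.prefixlen         # 8 host bits of the IPv4 address
    v4_suffix = int(ipaddress.IPv4Address(ipv4)) & ((1 << suffix_len) - 1)
    ea = (v4_suffix << k) | psid              # EA bits = IPv4 suffix + PSID (14 bits here)
    subnet_bits = 64 - v6net.prefixlen - (suffix_len + k)   # 2 remaining bits up to /64, value 0
    prefix64 = (int(v6net.network_address) >> 64) | (ea << subnet_bits)
    iid = (int(ipaddress.IPv4Address(ipv4)) << 16) | psid   # IID = 16 zero bits | IPv4 | PSID
    return ipaddress.IPv6Address((prefix64 << 64) | iid)

print(mapt_cpe_address("166.1.32.1", 2321))   # 2701:d01:3344:188:0:a601:2001:22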
We can verify the translation counters using the counters below#~show cgv6 map-t-cisco MAPT-1 statistics~\t \tMap-t-cisco IPv6 to IPv4 counters#======================================Translated Udp Count# 76200085Translated Tcp Count# 0Translated Icmp Count# 0PSID Drop Udp Count# 0PSID Drop Tcp Count# 0PSID Drop Icmp Count# 0Map-t-cisco IPv4 to IPv6 counters#======================================Translated Udp Count# 5209786Translated Tcp Count# 0Translated Icmp Count# 0PSID Drop Udp Count# 0PSID Drop Tcp Count# 0PSID Drop Icmp Count# 0\tMap-t-cisco exception IPv6 to IPv4 counters#======================================TCP Incoming Count# 0TCP NonTranslatable Drop Count# 0TCP Invalid NextHdr Drop Count# 0TCP NoDb Drop Count# 0TCP Translated Count# 0TCP Psid Drop Count# 0\tUDP Incoming Count# 0UDP NonTranslatable Drop Count# 0UDP Invalid Next Hdr Drop Count# 0UDP No Db Drop Count# 0\tUDP Translated Count# 0UDP Psid Drop Count# 0\tICMP Total Incoming Count# 0ICMP No DB Drop Count# 0ICMP Fragment drop count# 0\tICMP Invalid NxtHdr Drop Count# 0ICMP Nontanslatable Drop Count# 0ICMP Nontanslatable Fwd Count# 0\tICMP UnsupportedType Drop Count# 0ICMP Err Translated Count# 0ICMP Query Translated Count# 0ICMP Psid Drop Count# 0\tSubsequent Fragment Incoming Count# 0Subsequent Fragment NonTranslateable Drop Count# 0Invalid NextHdr Drop Count# 0Subsequent Fragment No Db Drop Count# 0Subsequent Fragment Translated Count# 0Extensions/Options Incoming Count# 0Extensions/Options Drop Count# 0Extensions/Options Forward Count# 0Extensions/Options No DB drop Count# 0Unsupported Protocol Count# 0Map-t-cisco exception packets IPv4 to IPv6 counters#======================================TCP Incoming Count# 0TCP No Db Drop Count# 0TCP Translated Count# 0TCP Psid Drop Count# 0UDP Incoming Count# 0UDP No Db Drop Count# 0UDP Translated Count# 0\tUDP FragmentCrc Zero Drop Count# 0UDP CrcZeroRecy Sent Count# 0UDP CrcZeroRecy Drop Count# 0UDP Psid Drop Count# 0\tICMP Total Incoming Count# 0ICMP No Db Drop Count# 0ICMP Fragment drop count# 0ICMP UnsupportedType Drop Count# 0ICMP Err Translated Count# 0ICMP Query Translated Count# 0ICMP Psid Drop Count# 0\tSubsequent Fragment Incoming Count# 0Subsequent Fragment No Db Drop Count# 0Subsequent Fragment Translated Count# 0Subsequent Fragment Drop Count# 0Subsequent Fragment Throttled Count# 0Subsequent Fragment Timeout Drop Count# 0Subsequent Fragment TCP Input Count# 0Subsequent Fragment UDP Input Count# 0Subsequent Fragment ICMP Input Count# 0Options Incoming Count# 0Options Drop Count# 0Options Forward Count# 0Options No DB drop Count# 0Unsupported Protocol Count# 0\tICMP generated counters #=======================IPv4 ICMP Messages generated count# 0IPv6 ICMP Messages generated count# 0Normally “Translated <> Count” is incrementing when everything is good. Other specific counters will increment in case of a problem. E.G. 
if the traffic port is not matching the programmed port in IPv6 to IPv4 translation (as PSID is programmed into the IPv6 address)#Map-t-cisco exception IPv6 to IPv4 counters#======================================TCP Incoming Count# 0TCP NonTranslatable Drop Count# 0TCP Invalid NextHdr Drop Count# 0TCP NoDb Drop Count# 0TCP Translated Count# 0TCP Psid Drop Count# 0UDP Incoming Count# 0UDP NonTranslatable Drop Count# 0UDP Invalid Next Hdr Drop Count# 0UDP No Db Drop Count# 0UDP Translated Count# 0UDP Psid Drop Count# 634576NPU counters1.Normal countersFollowing counters will increment during the normal work of the MAP-T translation#~show controllers np counters np0 loc 0/6/CPU0 | ex ~ 0~~\tRead 53 non-zero NP counters#Offset Counter FrameValue Rate (pps)------------------------------------------------------------------------------------- 17 MDF_TX_WIRE 132257833 310077 21 MDF_TX_FABRIC 132257175 310077 33 PARSE_FAB_RECEIVE_CNT 132257832 310079 45 PARSE_ENET_RECEIVE_CNT 312065555 310081 53 PARSE_TOP_LOOP_RECEIVE_CNT 558144612 620162 70 RSV_OPEN_NETWORK_SERVICE_TRIGGER_SVC 279072350 310081 99 RSV_OPEN_NETWORK_SERVICE_PHASE 279072439 310081 544 MDF_PIPE_LPBK 558238357 620439 552 MDF_OPEN_NETWORK_SERVICE_MODULE_ENTER 558238405 620439 556 MDF_OPEN_NETWORK_SERVICE_TRGR_FWD_LKUP 279119214 310220 678 VIRTUAL_IF_PROTO_IPV4_UCST_INPUT_CNT 227572818 205272 679 VIRTUAL_IF_PROTO_IPV6_UCST_INPUT_CNT 50264145 210802010 PARSE_OPEN_NETWORK_SERVICE_SVC_LKUP 279123639 312507 Counters 17, 21, 33, 45 and 53 are general platform counters for traffic passing through Wire, Fabric, etc. Other counters are specific to PBR and Translation operations so you can match those against the rate of traffic sent in each direction.E.G. I send 200k pps of IPv6 to IPv4 flow and 100k pps of IPv4 to IPv6f flow which match the corresponding counters rate#Offset Counter FrameValue Rate (pps)-------------------------------------------------------------------------------------678 VIRTUAL_IF_PROTO_IPV4_UCST_INPUT_CNT 227572818 205272679 VIRTUAL_IF_PROTO_IPV6_UCST_INPUT_CNT 50264145 21080 Some counters may show cumulative rate as they cover both translations together. E.G.Offset Counter FrameValue Rate (pps)-------------------------------------------------------------------------------------544 MDF_PIPE_LPBK 558238357 620439552 MDF_OPEN_NETWORK_SERVICE_MODULE_ENTER 558238405 620439556 MDF_OPEN_NETWORK_SERVICE_TRGR_FWD_LKUP 279119214 3102202.NP counters in case of a problem/drop In the Translation section above I made an example of incorrect port used in the packets not matching the IPv6 address (embedded PSID)#Offset Counter FrameValue Rate (pps)-------------------------------------------------------------------------------------560 MDF_OPEN_NETWORK_SERVICE_PSID_IPV6_FAIL 931002 12354 Counter identifies that the port used on the packets does not match the PSID programmed in the IPv6 address (see “Border Router Address Translation” above for PSID programming details). E.G. the port on the packet is “12345” and PSID is programmed based on port “2321”. 
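A quick way to see why such packets hit the PSID drop counter is to recompute the PSID from the offered port, using the k=6 / m=3 values of this tutorial (illustrative Python only, not the NP logic).

def psid_of(port: int, k: int = 6, m: int = 3) -> int:
    # PSID = the k port bits sitting between the high "A" bits and the m low bits
    return (port >> m) & ((1 << k) - 1)

print(hex(psid_of(2321)))    # 0x22 -> matches the PSID embedded in the IPv6 address, packet is translated
print(hex(psid_of(12345)))   # 0x7  -> mismatch, counted as a Psid Drop

Any port whose middle k bits do not equal the PSID embedded in the IPv6 address will be dropped and counted this way.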
In case of a PBR programming issue, the traffic will be punted to the CPU hitting the Null0 route but not intercepted by PBR (the SERVICE-related counters above will be missing)#Offset Counter FrameValue Rate (pps)-------------------------------------------------------------------------------------946 PUNT_IPV6_ADJ_NULL_RTE 3420 2947 PUNT_IPV6_ADJ_NULL_RTE_EXCD 2680386 1405 If the translation engine is not able to determine how to translate the prefix, the following counter will increment#Offset Counter FrameValue Rate (pps)-------------------------------------------------------------------------------------541 MDF_OPEN_NETWORK_SERVICE_PICK_UNKNOWN_ACTION 874220815 34715 One possible scenario for it# PBR intercepts packets based on the destination IP address, but the source address is also translated. Thus, if the source does not match the configured entry, you may see these drops. For example, if the packet source is 2701#D01#3344#4517#0#A601#2045#17 and the cpe-domain rule is# cpe-domain-name cpe1 ipv4-prefix 166.1.32.0 ipv6-prefix 2701#d01#3344##As the configured IPv6 prefix length is /64, the cpe-domain address does not match the packet source# 2701#d01#3344#4517## = 2701#d01#3344#4517#0##/64 vs 2701#D01#3344#0##/64 However, this is an umbrella counter which will show up for other reasons as well. In case of an unidentified problem, the following show tech outputs will be required for analysis#show tech services cgnshow tech pbr Additionally, it is helpful to capture and examine the packet hitting the corresponding counter. In the LAB environment it can be collected using the "monitor np counter" tool#NOTE# This tool will have to reset the NPU upon traffic collection completion, which can cause ~150msec of traffic loss on this NPU; thus it is recommended to use it only in the LAB environment or during a Maintenance Window.~monitor np counter MDF_TX_WIRE.1 np0 loc 0/6/CPU0~Usage of NP monitor is recommended for cisco internal use only.Please use instead 'show controllers np capture' for troubleshooting packet drops in NPand 'monitor np interface' for per (sub)interface counter monitoringWarning# Every packet captured will be dropped! If you use the 'count' option to capture multiple protocol packets, this could disrupt protocol sessions (eg, OSPF session flap). So if capturing protocol packets, capture only 1 at a time.Warning# A mandatory NP reset will be done after monitor to clean up. This will cause ~150ms traffic outage. Links will stay Up. Proceed y/n [y] > y Monitor MDF_TX_WIRE.1 on NP0 ... (Ctrl-C to quit)Tue Apr 25 20#43#20 2023 -- NP0 packet From Fabric# 88 byte packet0000# ac bc d9 3e 22 22 ac bc d9 3e 71 30 86 dd 60 00 .........0010# 00 00 00 22 2c 3f 36 01 0d 01 33 44 55 66 00 08 ...~,?6...3DUf..0020# 08 08 08 00 00 00 27 01 0d 01 33 44 45 17 00 00 ......'...3DE...0030# a6 01 20 01 00 00 11 00 00 00 00 00 00 00 08 4b .. ............K0040# 09 11 00 1a 57 f0 00 01 02 03 04 05 06 07 08 09 ....Wp..........0050# 0a 0b 0c 0d 0e 0f 10 11 ........ConclusionI hope this tutorial will be helpful in building a Proof-of-Concept LAB or troubleshooting a real-life scenario. It can help you navigate through the components involved and isolate the missing/broken part.
Let us know if there are any questions.Additional Resources#MAP-T Configuration guide for ASR9000", "url": "/tutorials/asr9k-inline-map-t-border-relay-configuration-and-troubleshooting/", "author": "Nikolai Karpyshev", "tags": "iosxr, ASR 9000, CGNAT" } , "tutorials-asr-9000-bng-scale-best-practices": { "title": "Reaching Full BNG Scale on ASR 9000", "content": "IntroductionDid you ever wonder how does ASR 9000 platform manage Broadband Network Gateway (BNG) function with Quality of Service (QoS) applied? Did you ever want to increase subscribers’ from ASR9000 BNG node but were limited by a current design limiting the possibilities? Are you currently working on your BNG design and do you need to define the best and durable solution?These questions come regularly from ASR 9000 BNG customers and the following article aims to provide all necessary information to shape your solution and get the most out of the platform.We will present internal hardware design and how it is used when it comes to BNG subscribers using QoS, the way you can affect the standard behavior and direct your subscribers to use all available resources, and finally we will discuss overall design solution that can help you reach the full potential of your ASR 9000 platform.Scope All concepts and principles that are discussed apply to IPOE and PPPOE type subscribers. All concepts and principles that are discussed apply to any ASR 9000 system belonging to the 3rd or 5th generation (line card or fixed chassis). The article considers subscriber QoS using queuing feature# policy-map actions can be# priority, bandwidth remaining ratio, shaper, queue-limit, WRED etc. policy-map can be flat or hierarchical; e.g. a parent policy-map to rate-limit subscriber’s overall bandwidth and a child-policy-map dedicated to QoS actions per traffic classification The article mostly considers egress subscriber QoS as it is not a best practice to use ingress subscriber QoS queuing. The article will mostly use Bundle-Ether interface type as examples, but the reasoning is the same for Pseudowire Headend (PW-HE) interface type. BNG QoS Queuing Resources Default AllocationOnce established, a subscriber is managed during its lifespan by a virtual interface that is dynamically created on top of the access interface you have configured. This virtual interface gathers the subscriber’s parameters# IP addressing, forwarding VRF, MTU… and QoS.As a reminder, you can apply QoS to subscribers using several techniques# through RADIUS (dynamic-template included), Parameterized QoS or QoS shaper parameterization.BNG QoS for subscriber feature is deployed on the Network Processor Unit (NPU) that handles the BNG access-interface on top of which the subscriber is established. The NPU has a unit called Traffic Manager (TM) that is responsible for allocating and managing NPU QoS queuing resources for any needs (including QoS queuing not related to BNG). Depending on the line card type and generation there can be one or two TM per NPU.A TM splits its queuing resources into 4 chunks# chunk0, chunk1, chunk2 and chunk3.Every port managed by an NPU is mapped to one of the chunks of the TM. 
To be comprehensive, all the sub-interfaces belonging to one port will be mapped by default to the same TM chunk.To illustrate the structure, here is a diagram highlighting the NPUs of the 5th generation line card A9K-8HG-FLEX-SE which has two NPUs managing eight HundredGigabitEthernet ports#We can easily retrieve this TM/chunk to port default mapping with the following commands (the command must be executed for all the NPUs of the considered line card)#RP/0/RSP0/CPU0#BNG#show qoshal ep np 0 location 0/0/CPU0 Sun Jun 30 10#21#40.830 CESTTY Options argc#6 nphal_show_chk -p 33312 ep -n 0x0 Done show qoshal ep np np location node front end Subslot 0 Ifsubsysnum 0 NP_EP #0 State #1 Ifsub Type #0x10030 Num Ports# 1 Port Type # 100GPort# 0 Egress # Chunk 0, L1 0Subslot 0 Ifsubsysnum 1 NP_EP #1 State #1 Ifsub Type #0x10030 Num Ports# 1 Port Type # 100GPort# 0 Egress # Chunk 1, L1 0Subslot 0 Ifsubsysnum 2 NP_EP #2 State #1 Ifsub Type #0x10030 Num Ports# 1 Port Type # 100GPort# 0 Egress # Chunk 2, L1 0Subslot 0 Ifsubsysnum 3 NP_EP #3 State #1 Ifsub Type #0x10030 Num Ports# 1 Port Type # 100GPort# 0 Egress # Chunk 3, L1 0RP/0/RSP0/CPU0#BNG#show qoshal ep np 1 location 0/0/CPU0 Sun Jun 30 23#21#44.871 CESTTY Options argc#6 nphal_show_chk -p 33312 ep -n 0x1 Done show qoshal ep np np location node front end Subslot 0 Ifsubsysnum 4 NP_EP #0 State #1 Ifsub Type #0x10030 Num Ports# 1 Port Type # 100GPort# 0 Egress # Chunk 0, L1 0Subslot 0 Ifsubsysnum 5 NP_EP #1 State #1 Ifsub Type #0x10030 Num Ports# 1 Port Type # 100GPort# 0 Egress # Chunk 1, L1 0Subslot 0 Ifsubsysnum 6 NP_EP #2 State #1 Ifsub Type #0x10030 Num Ports# 1 Port Type # 100GPort# 0 Egress # Chunk 2, L1 0Subslot 0 Ifsubsysnum 7 NP_EP #3 State #1 Ifsub Type #0x10030 Num Ports# 1 Port Type # 100GPort# 0 Egress # Chunk 3, L1 0Now that we can identify the chunk to port default mapping, it is important to understand that every subscriber using QoS queuing consumes one QoS hardware entity that is called L3(8Q) or L3(16Q) depending on the generation.A quick word about QoS entities# the ASR 9000 scheduler is implemented through several entity levels depending on how the QoS queuing is configured; port level, sub-interface level, parent/child policy-map etc.Each line card generation has its own specification regarding the number of L3 entities available per chunk# 3rd generation / Tomahawk# 8000 L3 entities 5th generation / LightSpeed+# 1500 L3 entitiesLet’s take a practical example and consider that we use 4 access-interfaces to serve our subscribers with QoS queuing, all the 4 access-interfaces are sub-interfaces (S-VLAN) belonging to one Bundle-Ether interface of one port, port Hu0/x/0/5 in our example.Access sub-interfaces configuration is straightforward#interface Bundle-Ether1.10 ipv4 point-to-point ipv6 enable service-policy type control subscriber BNG_PMAP pppoe enable bba-group BBAGROUP load-interval 30 encapsulation dot1q 10!interface Bundle-Ether1.20 ipv4 point-to-point ipv6 enable service-policy type control subscriber BNG_PMAP pppoe enable bba-group BBAGROUP load-interval 30 encapsulation dot1q 20!interface Bundle-Ether1.30 ipv4 point-to-point ipv6 enable service-policy type control subscriber BNG_PMAP pppoe enable bba-group BBAGROUP load-interval 30 encapsulation dot1q 30!interface Bundle-Ether1.40 ipv4 point-to-point ipv6 enable service-policy type control subscriber BNG_PMAP pppoe enable bba-group BBAGROUP load-interval 30 encapsulation dot1q 40!QoS queuing subscribers that are established on the 4 sub-interfaces will only use chunk1 of 
NPU1 because of the default chunk to port mapping#RP/0/RSP0/CPU0#BNG#show pppoe summary per-access-interface Sun Jun 30 10#40#45.830 CEST0/RSP0/CPU0----------- COMPLETE# Complete PPPoE Sessions INCOMPLETE# PPPoE sessions being brought up or torn downInterface BBA-Group READY TOTAL COMPLETE INCOMPLETE-------------------------------------------------------------------------------BE1.10 \t BNG_PMAP \t Y 200 200 0BE1.20 \t BNG_PMAP \t Y 300 300 0BE1.30 \t BNG_PMAP \t Y 400 400 0BE1.40 \t BNG_PMAP \t Y 600 600 0 ----------------------------------TOTAL 4 1500 1500 0RP/0/RSP0/CPU0#BNG# show qoshal resource summary np 1 location 0/0/CPU0 | begin ~CLIENT # QoS-EA~Mon Jul 1 00#19#54.131 CESTCLIENT # QoS-EA Policy Instances# Ingress 0 Egress 1500 Total# 1500 TM 0 Entities# (L4 level# Queues) Level Chunk 0 Chunk 1 Chunk 2 Chunk 3 L4 0( 0/ 0)10280(10280/10280) 0( 0/ 0) 0( 0/ 0) L3(8Q) 0( 0/ 0) 1500( 1500/ 1500) 0( 0/ 0) 0( 0/ 0) L3(16Q) 0( 0/ 0) 0( 0/ 0) 0( 0/ 0) 0( 0/ 0) L2 0( 0/ 0) 0( 0/ 0) 0( 0/ 0) 0( 0/ 0) L1 0( 0/ 0) 0( 0/ 0) 0( 0/ 0) 0( 0/ 0) Policers # 3072(3072)As the above example shows, the default TM chunk to port mapping will limit the number of subscribers to the capacity of one TM chunk for a BNG usage of one port. The other NPU QoS queue resources available are wasted.Leveraging Available Free QoS Queuing ResourcesSupported on IOS-XR 64-bit, the feature Subscriber Port Density (SPD) allows to allocate a specific TM chunk to an access sub-interface that has an S-VLAN configured; hence unlocking all QoS queuing resources to reach the potential NPU full scale.To achieve SPD, you first need to configure a “dummy” QoS policy-map that shapes the class-default to the port rate (100G in our case). To pursue our example here’s the “dummy” shaper configuration#policy-map dummyshaper class class-default shape average 100 gbps ! end-policy-map!Now the idea is to apply this policy-map to every BNG access sub-interface and bind a distinct TM chunk to each access sub-interface thanks to the keyword “subscriber-parent resource-id”. Here it is#interface Bundle-Ether1.10 ipv4 point-to-point ipv6 enable service-policy output dummyshaper subscriber-parent resource-id 0 service-policy type control subscriber BNG_PMAP pppoe enable bba-group BBAGROUP load-interval 30 encapsulation dot1q 10!interface Bundle-Ether1.20 ipv4 point-to-point ipv6 enable service-policy output dummyshaper subscriber-parent resource-id 1 service-policy type control subscriber BNG_PMAP pppoe enable bba-group BBAGROUP load-interval 30 encapsulation dot1q 20!interface Bundle-Ether1.30 ipv4 point-to-point ipv6 enable service-policy output dummyshaper subscriber-parent resource-id 2 service-policy type control subscriber BNG_PMAP pppoe enable bba-group BBAGROUP load-interval 30 encapsulation dot1q 30!interface Bundle-Ether1.40 ipv4 point-to-point ipv6 enable service-policy output dummyshaper subscriber-parent resource-id 3 service-policy type control subscriber BNG_PMAP pppoe enable bba-group BBAGROUP load-interval 30 encapsulation dot1q 40!Now, let’s see how the QoS queuing resources are distributed. 
We re-establish the same number of subscribers as before (1500), on the access sub-interfaces#RP/0/RSP0/CPU0#BNG#show pppoe summary per-access-interface Sun Jun 30 11#12#01.212 CEST0/RSP0/CPU0----------- COMPLETE# Complete PPPoE Sessions INCOMPLETE# PPPoE sessions being brought up or torn downInterface BBA-Group READY TOTAL COMPLETE INCOMPLETE-------------------------------------------------------------------------------BE1.10 \t BNG_PMAP \t Y 200 200 0BE1.20 \t BNG_PMAP \t Y 300 300 0BE1.30 \t BNG_PMAP \t Y 400 400 0BE1.40 \t BNG_PMAP \t Y 600 600 0 ----------------------------------TOTAL 4 1500 1500 0RP/0/RSP0/CPU0#BNG# show qoshal resource summary np 1 location 0/0/CPU0 | begin ~CLIENT # QoS-EA~Mon Jul 1 00#19#54.131 CESTCLIENT # QoS-EA Policy Instances# Ingress 0 Egress 1504 Total# 1504 TM 0 Entities# (L4 level# Queues) Level Chunk 0 Chunk 1 Chunk 2 Chunk 3 L4 1551( 1551/ 1551) 2312( 2312/ 2312) 3407( 3407/ 3407) 3014( 3014/ 3014) L3(8Q) 201( 201/ 201) 301( 301/ 301) 401( 401/ 401) 601( 601/ 601) L3(16Q) 0( 0/ 0) 0( 0/ 0) 0( 0/ 0) 0( 0/ 0) L2 1( 1/ 1) 1( 1/ 1) 1( 1/ 1) 1( 1/ 1) L1 0( 0/ 0) 0( 0/ 0) 0( 0/ 0) 0( 0/ 0) Policers # 3072(3072)Note# when activating SPD on a sub-interface, it is expected to consume one L2, one L3 and one L4 QoS entities of the related chunk.The 1500 subscribers are now distributed across the 4 TM chunks according to the configured binding. Each of the chunk can be used to its scale limit# hence it is possible to reach 4 times the scale of the default chunk to port mapping with SPD and finally reach 4 times the subscriber limit.When it comes to defining which chunk to allocate to which S-VLAN, several strategies can be used# round-robin allocation if you have a subscriber forecast per S-VLAN, you can distribute the TM chunks to access sub-interfaces accordingly monitor the TM chunks usage and modify the bindingsA thought to keep in mind when allocating TM chunk to BNG access sub-interfaces# the queuing resources that you bind to BNG needs will be shared with other non-BNG queuing needs; if you have another used interface within the same NPU as the BNG port, it will use its default TM chunk to port mapping for queuing.SPD is only applicable to BNG access sub-interfaces# the BNG main interface subscribers will still use the default chunk to port mapping.Design ConsiderationsThanks to the SPD feature, we can achieve higher scale on the platform and maximize the QoS queuing resources per NPU. It comes with the need to deliver subscriber traffic to the ASR9000 BNG node with distinct dot1q VLANs in order to populate subscribers across multiple access sub-interfaces.Depending on your access/aggregation network, the following solutions could suit the need to deliver subscriber traffic with multiple VLANs to the ASR 9000 BNG node# if BNG nodes are decentralized in your aggregation network, you can work on the neighbor switch or OLT to manipulate VLAN tagging (add/remove/translate). if BNG nodes are rather centralized and use L2VPN technologies to deliver subscriber traffic, you can either work on the neighbor L2VPN PEs to manipulate VLAN tagging or reflect on a local solution based on a loopback cable and a bridge-domain structure than allows local VLAN manipulations. 
BNG Pseudowire Headend can also be a solution to explore as any PW-Ether sub-interfaces can be attached to any TM chunk# subscriber traffic using same S-VLAN coming from several L2VPN PW neighbors can be bound to distinct TM chunks.Subscriber QoS evolutionThe discussed topic implies the usage of QoS queuing to manage subscriber traffic. Nowadays, considering the progress of high-speed plans offered to Service Provider’s customers, the need of complex QoS using queuing can be re-considered# queuing being not as much mandatory as previously when it comes to QoE. Even with a precisely defined QoS queuing solution, voice traffic congestion management for instance is not necessarily giving great QoE results; and that, without considering Forward Error Correction techniques that now allow to partly loose packets without much of a QoE degradation.In this context, you can think about transitioning some QoS offers to policing solutions. Here are two examples of QoS policy-map conversions from shaper to policer#Note# the QoS policer solution with child-aware feature for BNG subscribers is available on 5th generation line card introduced with IOS-XR 7.11 release.As a policer is a less system costly technique compared to queuing, the ASR 9000 platform provides significantly more policing capabilities than queuing ones. Also, policing is simply implemented on the NPU of each line card without the need to allocate them if you want to scale more.ConclusionWe have explored the options that can lead to a more defined and more scalable BNG solution within your network. Since the BNG engineering is often dependent on how the aggregation network is built, you have all the tools to find the best fit to your network and leverage the full ASR 9000 BNG capabilities.", "url": "/tutorials/asr-9000-bng-scale-best-practices/", "author": "Paul Blaszkiewicz", "tags": "ASR 9000, BNG, QoS" } , "#": {} }