{ "blogs-2020-09-04-ncs-5500-buffering-architecture": { "title": "NCS 5500 Buffering Architecture", "content": " NCS 5500 Buffering Architecture Executive Summary Router Buffering Architectures Off-chip vs. On-chip buffering Motivation for a New Design An Innovative Approach – Hybrid Buffering Jericho ASIC Architecture Deep Buffering Future ASIC Architecture Direction Summary The NCS 5500 uses an innovative design to provide deep buffering while maintaining high performance and power efficiency. This paper explores this design and shows its strengths over traditional forwarding architectures. It also will address criticism of these optimizations coming from other vendors.Executive SummaryBuffers are the shock absorbers in networks. Their primary role is to manage temporary congestion in a manner that controls loss and latency while allowing end nodes to adapt to available bandwidth or complete short transfers without loss. Note that this does not mean preventing all packet loss and that buffers cannot solve persistent or artificial congestion that does not respond to signals from the network.Router Buffering ArchitecturesOff-chip vs. On-chip bufferingOne of the tradeoffs a router architect needs to make is where to buffer packets. Traditionally, there have been two options# on-chip or off-chip.On-chip buffering minimizes power and board space but doesn’t allow for buffering beyond 10s or 100s of microseconds. It is well suited to data centers where round trip times allow end nodes to adjust their speed very quickly and where bandwidth can be overprovisioned via inexpensive fiber runs. In some cases, it may still have limitations due to TCP Incast traffic flows. On-chip SRAM buffers are a 10,000th the size of off-chip buffers so it’s not a small difference. On-chip buffering allows for higher-bandwidth devices as it allows more of the ASIC’s resources to be used for physical ports rather than connecting to off-chip memories. On-chip vs. off-chip buffering is one of the key factors underlying the wide range of port counts and power consumption between routers with the two models. As of 2018, fabric-capable forwarding chips with off-chip buffers currently range from 200 to 900 Gbps while System on Chip models shipping range up to 3.2 Tbps.With off-chip buffering, two key requirements must be met. First, the memory must be large enough to buffer the required packets. This is a separate topic, but note that the NCS 5500 ha very large buffers. Second, it must be fast enough to maintain the forwarding rate. The bandwidth component of memory performance is a key challenge for buffering. This paper discusses how the NCS 5500 balances the bandwidth constraint with other design goals such as performance, power, and cost.For deep buffers, a router architect must currently choose between high-speed custom memories or large banks of commodity memories. While not as difficult as FIB memory requirements (which require high operations per second), memory bandwidth can be a challenge as commodity memories are not designed for the operations needed for networking. Traditionally, deep-buffered routers pass every packet through the off-chip memory. Note that these memories are also used for forwarding tables, which may or may not be stored in the same memory bank.High-performance memories can be made to a wide range of specifications, including bandwidth, operations per second, capacity, cost, and physical size. 
They save board space but are significantly more expensive and consume more power as performance increases.Off-chip buffering with commodity memory is less expensive but often requires more board space than custom memories due to the need to overprovision the capacity in order to get sufficient aggregate memory bandwidth.Using a mid-performance commodity memory such as graphics memory (e.g., GDDR5) helps, but still doesn’t meet the performance of high-end memory devices.Cisco uses custom high-performance memories on the CRS and NCS 6000. Commodity memory is used for buffering on the ASR 9000 and NCS 5500. The NCS 5000 has on-chip buffers only.Motivation for a New DesignThere are two key drivers for rethinking the traditional approach to off-chip buffers when designing new chips. First, valuable bandwidth (and thus power) is used to perform an off-chip write/read for packets that don’t require moderate or deep buffers. Second, in the near future even high-performance memories will no longer be able to keep up with the requirements of ASICs as silicon logic and on-chip memory will continue to outpace off-chip memory performance.An Innovative Approach – Hybrid BufferingNew chip designs must address the memory bandwidth challenge while still achieving the overall system goals of balancing price, performance, power, and functionality. Commodity memory bandwidth currently maxes out at approximately 900G half duplex. High-performance memories are available supporting 400G & 500G ASICs at line rate (some are ~500G full duplex, others are ~1T half duplex). Future generations of custom memory (notably HBM which is discussed later) will increase performance but still not be able to keep up with the highest-bandwidth processors.A solution to this challenge is to implement both on-chip and off-chip buffers and only use off-chip buffers as needed. This is the design of the NCS 5500. Packets in congested queues are buffered off-chip while packets in empty and lightly congested queues (less than approximately 5000 packets) remain on-chip. This is called an evict / readmit model in which queues can transition on and off chip as they fill or empty. It uses memory bandwidth more efficiently and allows the chip to run faster than the off-chip memory. This approach has an additional benefit of reduced power consumption relative to buffering all packets off-chip.This design is based on the same principle that underlies much of network and server design – statistical multiplexing. Not all clients (of the network or of memory) need to use the full bandwidth at the same time. This oversubscription is small for the worst case and negligible in practice in Jericho and Jericho+. Oversubscription will increase as the gap between memory technologies and forwarding logic continues to grow, so this design will become even more critical in the future.The next section explains how hybrid buffering is implemented in NCS 5500. Later, the conditions needed to see corner cases in the lab are explained.Jericho ASIC ArchitectureThe diagram below shows a high-level view of the Jericho & Jericho+ ASICs used in NCS 5500. There are two packet cores. The cores share on-chip buffers with separate pools for ingress and egress (OTM in the graphic). The on-chip buffers are approximately 16MB each. The egress buffer supports reassembling packets but doesn’t provide externally visible QoS. In addition, it doesn’t drop packets due to the VoQ scheduling, which only allows packets to be sent to egress once they can be transmitted. 
The configured QoS is implemented by the ingress traffic manager. The ingress on-chip buffer contains VoQs for every output queue in the system. A vast majority of packets pass only through these on-chip buffers.If a queue becomes moderately congested, the queue is “evicted” and additional packets for that queue only will be stored in an off-chip GDDR5 memory. Eviction occurs on a per-queue basis so all other traffic to the destination physical port and all other ports remains in the on-chip buffers.Figure 1# Jericho ASIC Logical DiagramThe aggregate half-duplex bandwidth between the forwarding cores and the off-chip memory is approximately 900 Gbps. With GDDR5, this bandwidth can be used for read or write, which is an important component of the design as, in theory, even line rate bursts on all interfaces on 900G Jericho+ can be absorbed before shifting the bandwidth allocation back to read the packets out. In Jericho, at very high levels of memory bandwidth usage, writes are given a higher priority in order to absorb large bursts into deep queues. With sustained high rates near the maximum memory bandwidth, the allocation will return to 50/50 to allow the off-chip buffers to drain. At that point, packet loss is inevitable in any router so managing the drops becomes very important. If the allocated write bandwidth is exceeded, packets for the specific queues that are tail dropping in the off-chip memory will temporarily be dropped on-chip. This preserves memory bandwidth for packets that may not need to be dropped. Packets to other queues utilizing the deep buffers will receive priority for storage in off-chip memory. Meanwhile, the configured QoS policies are being implemented, which may further reduce the memory bandwidth required.Some other memory technologies do not have the flexibility of half duplex bandwidth that can be shared between read and write. In practice, that is not an issue today, but may be a challenge in the future.This condition is clearly an extreme and contrived corner case. It will only be seen in a lab test with a traffic generator and almost every packet forced to heavily congested queues.Deep BufferingThe NCS 5500 can provide extremely deep buffers when required. This is enabled by the size of the off-chip memory as well as the distributed VoQ architecture. The off-chip memory on each chip is 4 GB and can store up to 3 million packets (1.5M packet descriptors on each core). After buffer carving, the effective capacity is approximately 3GB.There are also system-level factors that are key to the buffering architecture in multi-ASIC systems (the 2 RU NCS 5502, the line card based NCS 5504/5508/5516, and the newer 1 RU systems with multiple Jericho+ ASICs). When significant congestion of a VoQ is occurring, it is likely to enter the router on more than one NPU. This means that the aggregate memory available for buffering to a single egress queue comprises the memory on all ingress ASICs receiving traffic destined to that queue. When this is occurring, each ASIC individually moves queues on and off-chip as needed. With this model, the total buffering for a queue is larger than any router with two-stage queuing.When all the ingress traffic enters a single ASIC (such as the single-chip NCS 5501) the system can buffer up to approximately 30 msec on all ports at the same time when all traffic is going to congested output queues. 
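As a rough sanity check on that number, the buffering time is simply the effective off-chip capacity divided by the port bandwidth draining it. The short Python sketch below reproduces the order of magnitude; it assumes the ~900 Gbps of front-panel bandwidth of a Jericho+ and the ~3 GB of effective capacity mentioned above, and it is only an illustration, not a vendor formula.

# Back-of-the-envelope buffer depth. Assumptions are taken from the figures
# quoted in this paper, not from an ASIC datasheet.
effective_buffer_bytes = 3 * 1024**3       # ~3 GB after buffer carving
port_bandwidth_bps = 900 * 10**9           # ~900 Gbps on a Jericho+ NPU
buffer_depth_ms = effective_buffer_bytes * 8 / port_bandwidth_bps * 1000
print(round(buffer_depth_ms, 1))           # ~28.6 ms, in line with the ~30 msec quoted above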
If fewer queues are congested, more memory is available to the congested queues, up to 1 GB and 390k packets per queue.The default queue depths on NCS 5500 are set to 10 msec per NPU, but they can be increased significantly if required by the network designer. Care should be taken as too much buffering can cause just as many problems as not enough.For a practical analysis of memory capacity and bandwidth, it is important to understand that all the queues will not be highly congested at the same time.Future ASIC Architecture DirectionIncreases in ASIC logic and on-chip memory performance will continue to outpace off-chip memory. This gap will grow significantly over time. In the near future, many routers with high-end custom memories will need to embrace this model. The only other option is to use an increasing number of relatively small ASICs, still with high- performance memories.In 2018 or 2019, networking ASICs will begin shipping with a new technology called High Bandwidth Memory (HBM). HBM is a high-end commodity component that must be tightly integrated with the on-die logic by placing it into the ASIC package. This new option will deliver a significant increase in memory bandwidth as well as a decrease in power.SummaryThis paper has shown the benefits of the hybrid buffering architecture and how it is implemented on Cisco’s NCS 5500 routers. It has also addressed the criticism of this design. While it should be clear that hybrid buffering is an optimal design in many cases, Cisco will still be implementing the traditional off-chip approach, especially in extensions to existing platforms.", "url": "/blogs/2020-09-04-ncs-5500-buffering-architecture/", "author": "Lane Wigley", "tags": "iosxr, cisco" } , "blogs-2020-09-04-persistent-load-balancing-or-sticky-ecmp": { "title": "Persistent Load Balancing or "Sticky ECMP"", "content": " Persistent Load Balancing or ~Sticky ECMP~ Introduction Datacenter loadbalancing Implementation details Configuration Verification of operation Auto recovery Restrictions and limitations IntroductionThis document applies to NCS5500 and ASR9000 routers and has been verified as such.Traditional ECMP or equal cost multipath loadbalances traffic over a number of available paths towards a destination. 
When one path fails, the traffic gets re-shuffled over the available number of paths.This means that a flow that was previously taking path “1” could now be taking path “3” although only path “2” failed.This reshuffling occurs because, although the hash of the flow remains the same (resulting in the same bucket), the bucket may get reassigned to a new path.To understand flows, buckets and traditional ECMP a bit better, you could reference the Loadbalancing Architecture document and consult the Cisco Live ID 2904 from Las Vegas 2017.While this flow redistribution is not a problem in traditional core networks, because the end to end connectivity is preserved and the user would not experience any disruption from it, in data center loadbalancing this can be a problem.Datacenter loadbalancingThis rehashing as mentioned can be troublesome in data center environments where many servers advertise a “service prefix” to a loadbalancer/gateway in a sort of “anycast” way.This means that a user connecting with a given L3/L4 tuple is delegated to one particular server for a session.If for whatever reason a server fails, we don’t want the established sessions to the remaining servers to be rehashed to a new server, as that will reset the TCP connection since the new server has no clue (no matching socket) about the session it just got a packet for.To visualize#Persistent Loadbalancing or Sticky ECMP defines a prefix in such a way that we don’t rehash flows on existing paths and only replace the bucket assignments of the failed server.The good thing is that established sessions to servers won’t get rehashed.The downside of this is that you could see more load on one server than another now. (Traditional ECMP would try to achieve equal spread, at the cost of that rehashing).Implementation details How to map prefixes for sticky ECMP ?Use an RPL to define prefixes that require persistent load balancing. The user would match some BGP community to set the sticky ECMP flag What happens when a path in an ECMP goes down ?In FIB each prefix has a path list; say for example a prefix ‘X’ has a path list (p1, p2, p3). When a path, say ‘p2’, fails with sticky ECMP enabled, the new path list becomes (p1, p1, p3) instead of the default rehash logic, which results in (p1, p3, p1) What happens when a link comes back ?There are 2 modes of operation#DEFAULT# No rehashing is done and the link will not be utilized until one of the following happens, which results in a complete recalculation of paths. New path addition to ECMP. User-driven clear operation using the “clear route” command.CONFIGURABLE# Auto recovery. If the server comes back or the path gets re-enabled, we automatically reshuffle the sessions. Sessions that were moved from the failed path to a new server will now be rehashed BACK to the original server that came back online, which results in session disruption ONLY for those sessions.There is no one-size-fits-all answer here, hence we provide the 2 options# manual recovery or automatic recovery, each with pros and cons.ConfigurationNow that you’re all excited about this new functionality, you want to try it out right? 
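Before jumping into the configuration, a small illustration may help. The Python sketch below is pseudocode for illustration only (not router code), and the default behaviour is approximated with a simple redistribution of the remaining paths; it shows why the sticky mode preserves the bucket-to-path mapping for surviving paths while a plain rehash does not.

# Illustration only. With sticky ECMP, buckets of the failed path are
# overwritten with a surviving path; buckets of surviving paths are untouched.
def sticky_failover(buckets, failed, replacement):
    return [replacement if path == failed else path for path in buckets]

# The default behaviour is approximated here as a simple redistribution of
# the remaining paths over the buckets (the real hash is more involved).
def default_rehash(buckets, failed):
    remaining = [path for path in dict.fromkeys(buckets) if path != failed]
    return [remaining[i % len(remaining)] for i in range(len(buckets))]

buckets = ['p1', 'p2', 'p3']
print(sticky_failover(buckets, 'p2', 'p1'))   # ['p1', 'p1', 'p3'] - only former p2 flows move
print(default_rehash(buckets, 'p2'))          # ['p1', 'p3', 'p1'] - bucket 2 moves from p3 to p1, breaking sessions that never used p2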
Here is the configuration sequence on how to establish it#First define the route policy that will direct which prefixes are to be marked as sticky.route-policy sticky-ecmp if destination in (192.168.3.0/24) then set load-balance ecmp-consistent else pass endifend-policyApply that route policy to BGP through the table-policy directive#router bgp 7500 address-family ipv4 unicast table-policy sticky-ecmp maximum-paths ebgp 64 maximum-paths ibgp 32 ! need to have multipath enabled obviouslyThat’s it!Verification of operationLet’s verify the CEF display before a failure occurred#Show cef detail LDI Update time Sep 5 11#22#38.201 via 10.1.0.1/32, 3 dependencies, recursive, bgp-multipath [flags 0x6080] path-idx 0 NHID 0x0 [0x57ac4e74 0x0] next hop 10.1.0.1/32 via 10.1.0.1/32 via 10.2.0.1/32, 3 dependencies, recursive, bgp-multipath [flags 0x6080] path-idx 1 NHID 0x0 [0x57ac4a74 0x0] next hop 10.2.0.1/32 via 10.2.0.1/32 via 10.3.0.1/32, 3 dependencies, recursive, bgp-multipath [flags 0x6080] path-idx 2 NHID 0x0 [0x57ac4f74 0x0] next hop 10.3.0.1/32 via 10.3.0.1/32 Load distribution (persistent)# 0 1 2 (refcount 1) Hash OK Interface Address 0 Y GigabitEthernet0/0/0/0 10.1.0.1 1 Y GigabitEthernet0/0/0/1 10.2.0.1 2 Y GigabitEthernet0/0/0/2 10.3.0.1 We see 3 paths identified with 3 next hops (10.1/2/3.0.1) via 3 different gig interfaces. We can also see here that the stickiness is enabled through the “persistent” keyword.After a path failure, in this example we brought gig 0/0/0/1 down#Show cef detail LDI Update time Sep 5 11#23#13.434 via 10.1.0.1/32, 3 dependencies, recursive, bgp-multipath [flags 0x6080] path-idx 0 NHID 0x0 [0x57ac4e74 0x0] next hop 10.1.0.1/32 via 10.1.0.1/32 via 10.3.0.1/32, 3 dependencies, recursive, bgp-multipath [flags 0x6080] path-idx 1 NHID 0x0 [0x57ac4f74 0x0] next hop 10.3.0.1/32 via 10.3.0.1/32 Load distribution (persistent) # 0 1 2 (refcount 1) Hash OK Interface Address 0 Y GigabitEthernet0/0/0/0 10.1.0.1 1* Y GigabitEthernet0/0/0/0 10.1.0.1 2 Y GigabitEthernet0/0/0/2 10.3.0.1Notice the replacement of bucket 1 with gig 0/0/0/0 and the “*” denoting that this path is a replacement as it took a hit before.We keep the bucket sequence intact; we just replace it with an available path index.Note that it will stay this way irrespective of gig0/0/0/1 coming back up.To recover the paths and put gig0/0/0/1 back in service on the hashing use#clear route <prefix>Auto recoveryTo enable the auto recovery, configurecef consistent-hashing auto-recoveryA full trace sequence is given here with some show commands and verification#RP/0/RSP0/CPU0#PE1#sh run | i cefBuilding configuration... 
bgp graceful-restartcef consistent-hashing auto-recoveryRP/0/RSP0/CPU0#PE1#sho cef 192.168.3.0/24 detail 192.168.3.0/24, version 674, internal 0x5000001 0x0 (ptr 0x722448fc) [1], 0x0 (0x0), 0x0 (0x0) Updated Nov 4 08#14#21.731 Prefix Len 24, traffic index 0, precedence n/a, priority 4 BGP Attribute# id# 0x6, Local id# 0x2, Origin AS# 0, Next Hop AS# 0 ASPATH # Community# gateway array (0x72ce5574) reference count 1, flags 0x2010, source rib (7), 0 backups [1 type 3 flags 0x48441 (0x72180850) ext 0x0 (0x0)] LW-LDI[type=0, refc=0, ptr=0x0, sh-ldi=0x0] gateway array update type-time 1 Nov 4 08#14#21.731 LDI Update time Jan 1 21#23#30.335 Level 1 - Load distribution (consistent)# 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 [0] via 12.4.18.2/32, recursive [1] via 12.5.19.2/32, recursive [2] via 12.6.20.2/32, recursive [3] via 12.101.45.2/32, recursive [4] via 12.104.46.2/32, recursive [5] via 12.105.47.2/32, recursive [6] via 12.106.49.2/32, recursive [7] via 12.107.43.2/32, recursive [8] via 12.111.48.2/32, recursive [9] via 12.112.44.2/32, recursive [10] via 12.122.18.2/32, recursive [11] via 12.150.16.2/32, recursive [12] via 12.151.17.2/32, recursive [13] via 12.152.9.2/32, recursive [14] via 12.153.23.2/32, recursive [15] via 12.154.0.2/32, recursiveRP/0/RSP0/CPU0#PE1#cle counters aClear ~show interface~ counters on all interfaces [confirm]RP/0/RSP0/CPU0#Jan 1 21#25#20.059 PDT# statsd_manager_g[1167]# %MGBL-IFSTATS-6-CLEAR_COUNTERS # Clear counters on all interfaces RP/0/RSP0/CPU0#PE1#LC/0/1/CPU0#Jan 1 21#25#28.050 PDT# ifmgr[215]# %PKT_INFRA-LINK-3-UPDOWN # Interface TenGigE0/1/0/5/0, changed state to DownLC/0/1/CPU0#Jan 1 21#25#28.050 PDT# ifmgr[215]# %PKT_INFRA-LINEPROTO-5-UPDOWN # Line protocol on Interface TenGigE0/1/0/5/0, changed state to Down RP/0/RSP0/CPU0#PE1#show int tenGigE 0/1/0/5/0 acTenGigE0/1/0/5/0 Protocol Pkts In Chars In Pkts Out Chars Out IPV4_UNICAST 1 59 98123 96355844RP/0/RSP0/CPU0#PE1#cle counters Clear ~show interface~ counters on all interfaces [confirm]RP/0/RSP0/CPU0#Jan 1 21#25#38.896 PDT# statsd_manager_g[1167]# %MGBL-IFSTATS-6-CLEAR_COUNTERS # Clear counters on all interfaces RP/0/RSP0/CPU0#PE1#RP/0/RSP0/CPU0#PE1#LC/0/1/CPU0#Jan 1 21#25#43.353 PDT# pfm_node_lc[302]# %PLATFORM-CPAK-2-LANE_0_LOW_RX_POWER_ALARM # Set|envmon_lc[163927]|0x1005005|TenGigE0/1/0/5/0 RP/0/RSP0/CPU0#PE1#LC/0/1/CPU0#Jan 1 21#25#50.110 PDT# ifmgr[215]# %PKT_INFRA-LINK-3-UPDOWN # Interface TenGigE0/1/0/5/0, changed state to Up LC/0/1/CPU0#Jan 1 21#25#50.110 PDT# ifmgr[215]# %PKT_INFRA-LINEPROTO-5-UPDOWN # Line protocol on Interface TenGigE0/1/0/5/0, changed state to Up RP/0/RSP0/CPU0#PE1#show int tenGigE 0/1/0/5/0 acTenGigE0/1/0/5/0 Protocol Pkts In Chars In Pkts Out Chars Out ARP 1 60 1 42RP/0/RSP0/CPU0#PE1#show int tenGigE 0/1/0/5/0 acTenGigE0/1/0/5/0 Protocol Pkts In Chars In Pkts Out Chars Out IPV4_UNICAST 0 0 24585 24142470 ARP 1 60 1 42RP/0/RSP0/CPU0#PE1#sho cef 192.168.3.0/24 detail 192.168.3.0/24, version 674, internal 0x5000001 0x0 (ptr 0x722448fc) [1], 0x0 (0x0), 0x0 (0x0) Updated Nov 4 08#14#21.731 Prefix Len 24, traffic index 0, precedence n/a, priority 4 BGP Attribute# id# 0x6, Local id# 0x2, Origin AS# 0, Next Hop AS# 0 ASPATH # Community# gateway array (0x72ce5fc4) reference count 1, flags 0x2010, source rib (7), 0 backups [1 type 3 flags 0x48441 (0x721807d0) ext 0x0 (0x0)] LW-LDI[type=0, refc=0, ptr=0x0, sh-ldi=0x0] gateway array update type-time 1 Nov 4 08#14#21.731 LDI Update time Jan 1 21#25#53.128 Level 1 - Load distribution (consistent)# 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 
[0] via 12.4.18.2/32, recursive [1] via 12.5.19.2/32, recursive [2] via 12.6.20.2/32, recursive [3] via 12.101.45.2/32, recursive [4] via 12.104.46.2/32, recursive [5] via 12.105.47.2/32, recursive [6] via 12.106.49.2/32, recursive [7] via 12.107.43.2/32, recursive [8] via 12.111.48.2/32, recursive [9] via 12.112.44.2/32, recursive [10] via 12.122.18.2/32, recursive [11] via 12.150.16.2/32, recursive [12] via 12.151.17.2/32, recursive [13] via 12.152.9.2/32, recursive [14] via 12.153.23.2/32, recursive [15] via 12.154.0.2/32, recursiveRestrictions and limitations Sticky load balancing is a more resource-intensive operation, so it is not advised to enable it for all prefixes. Only supported for BGP prefixes Sticky ECMP is available in XR 6.3.2 for NCS5500 and ASR9000 Auto Recovery is available in XR 6.5.1", "url": "/blogs/2020-09-04-persistent-load-balancing-or-sticky-ecmp/", "author": "Xander Thuijs", "tags": "iosxr, cisco" } , "tutorials-2017-08-02-understanding-ncs5500-resources-s01e01": { "title": "Understanding NCS5500 Resources (S01E01)", "content": " Understanding NCS5500 Resources S01E01 The Platforms NCS5500 Portfolio Using external TCAM Not using external TCAM Resources / Memories You can find more content related to NCS5500 including routing memory management, VRF, URPF, Netflow following this link.S01E01 The PlatformsIn the marketing datasheet, you probably read that NCS5501-SE supports up to 2.7M+ routes or that NCS5502 supports up to 1.1M routes. It’s true, but it’s actually a bit more complex since it will not be 2.7M of any kind of routes. So, how many routes can I actually use ? Well, it depends…This series of posts aims at explaining in detail how NCS5500 routers use the different memory resources available for each type of feature or prefix. But we will go further than just discussing “how many routes” and we will try to identify how other data types (Next-hop, load balancing information, ACL entries, …) are affecting the scale.Today, we will start describing the hardware implementation then we will explain how “databases” are used, which profiles can be enabled and how they can be monitored and troubleshot.NCS5500 PortfolioRouters in the NCS5500 portfolio offer diverse form-factors. Some are fixed (1RU, 2RU), others are modular (4-slot, 8-slot, 16-slot) with multiple line card types.In August 2017, with one exception covered in a follow-up xrdocs post, we are leveraging Qumran-MX or Jericho forwarding ASICs (FA). Qumran is used for System-on-Chip (SoC) routers like NCS5501 and NCS5501-SE; all other systems are using several Jerichos interconnected via Fabric Engines.Update# In December 2017, Jericho+ systems are available in line cards (36x 100G with NG eTCAM) and in fixed-form 1RU (36x 100G with or without NG eTCAM, 24x 100G with a larger internal memory). They will be described in follow-up posts.Update2# In August 2018, we introduced a new modular line card NC55-MOD-* and two new 2-RU fixed chassis based on the same philosophy of modular “MPA”. 
All of them are powered by Jericho+ ASICs.We can categorize these systems and line cards in two families#Using external TCAM(named “Scale” and identified with -SE in the product ID) NCS5501-SE NCS5502-SE NC55-24X100G-SE NC55-24H12F-SE NC55-36X100G-A-SERP/0/RP0/CPU0#Router#sh platform | i XR RUN0/RP0/CPU0 NCS-5501-SE(Active) IOS XR RUN NSHUTRP/0/RP0/CPU0#Router#RP/0/RP0/CPU0#Router#sh plat | i XR RUN0/1/CPU0 NC55-36X100G-A-SE IOS XR RUN NSHUT0/6/CPU0 NC55-24H12F-SE IOS XR RUN NSHUT0/7/CPU0 NC55-24X100G-SE IOS XR RUN NSHUT0/RP0/CPU0 NC55-RP(Active) IOS XR RUN NSHUT0/RP1/CPU0 NC55-RP(Standby) IOS XR RUN NSHUTRP/0/RP0/CPU0#Router#Not using external TCAMonly the memories inside the FA (named “Base”) NCS5501 NCS5502 NCS-55A1-24H NCS-55A2-MOD-S / NCS-55A2-MOD-HD-S NC55-36X100G NC55-18H18F NC55-36x100G-S (MACsec card) NC55-6X200-DWDM-S (Coherent card) NC55-MOD-A-SRP/0/RP0/CPU0#Router#show platform | i XR RUN0/RP0/CPU0 NCS-5501(Active) IOS XR RUN NSHUTRP/0/RP0/CPU0#Router#RP/0/RP0/CPU0#Router#sh platform | i XR RUN0/0/CPU0 NC55-36X100G IOS XR RUN NSHUT0/1/CPU0 NC55-18H18F IOS XR RUN NSHUT0/RP0/CPU0 NC55-RP(Active) IOS XR RUN NSHUT0/RP1/CPU0 NC55-RP(Standby) IOS XR RUN NSHUTRP/0/RP0/CPU0#Router#Note# Inside a modular chassis, we can mix and match eTCAM and non-eTCAM line cards. A feature is available to decide where the prefixes should be programmed (differentiating IGP and BGP, and using specific ext-communities). You can check the blog post dedicated to this topic here.So basically, this external memory used to extend the scale in terms of routes and classifiers (Access-list entries for instance) is what differentiates the systems and line cards.eTCAM should not be confused with the 4GB external packet buffer which is present on the side of each FA, regardless of the type of system or line card.The external packet buffer will be used in case of queue congestion only. It’s a very fast graphics memory, specifically used for packets.The eTCAM only handles prefixes and ACEs, not packets.If you are familiar with traditional IOS XR routers, there are some similarities and some differences with the line card classification “-SE vs -TR” on ASR9000, or “-FP vs -MSC vs -LSP” on CRS routers# route and feature scales can be different among the different types of LC but not the number of queues or the capability to support Hierarchical QoS (it’s not the case for NCS5500 routers, QoS capability is the same on -SE and non-SE)On Jericho-based systems, we have two eTCAM blocks per FA offering up to 2M additional routes and they are soldered to the board. It’s not a field-replaceable part. This means you cannot convert a NC55-36X100G non-eTCAM card into an eTCAM card.On systems running Jericho+, we have a new generation eTCAM qualified for 4M IPv4 routes but supporting much more if needed in the future.Resources / MemoriesEach forwarding ASIC is made of two cores (0 and 1). Also we have an ingress and an egress pipeline. Each pipeline itself is made of different blocks. For clarity and intellectual property reasons, we will simplify the description and represent the series of blocks as just a Packet Processor (PP) and a Traffic Manager (TM).Along the pipeline, the different blocks can access (read or write) different “databases”.They are memory entities used to store specific types of information.In follow up posts, we will describe in detail how they are used, but let’s introduce them right now. 
The Longest Prefix Match Database (LPM sometimes referred to as KAPS for KBP Assisted Prefix Search, KBP being itself Knowledge Based Processor) is an SRAM used to store IPv4 and IPv6 prefixes. It’s an algorithmic memory qualified for 256k entries IPv4 and 128k entries IPv6 in the worst case. We will see it can go much higher with internet distribution. One exception with the Jericho+ used in NCS55A1-24H where LPM can store more than 1M IPv4 routes. The Large Exact Match Database (LEM) is used to store IPv4 and IPv6 routes also, plus MAC addresses and MPLS labels. It scales to 786k entries. The Internal TCAM (iTCAM) is used for Packet classification (ACL, QoS) and is 48k entries large. The FEC database is used to store NextHop (128k entries), containing also the FEC ECMP (4k entries). Egress Encapsulation DB (EEDB) is used for egress rewrites (96k entries), including adjacency encapsulation like link-local details from ARP, ND and for MPLS labels or GRE headers.All these databases are present inside the Forwarding ASIC. The external TCAMs (eTCAM) are only present in the -SE line cards and systems and, as the name implies, are not a resource inside the Forwarding ASIC. They are used to extend unicast route and ACL / classifiers scale (up to 2M or to 4M IPv4 entries).RP/0/RP0/CPU0#NCS5501-622#show contr npu resources all location 0/0/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # XXXXX (X %) iproute # XXXXX (X %) ip6route # XXXXX (X %) mplslabel # XXXXX (X %)HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 351346 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # XXXXX (X %) iproute # XXXXX (X %) ip6route # XXXXX (X %) ipmcroute # XXXXX (X %)HW Resource Information Name # encapOOR Information NPU-0 Estimated Max Entries # 100000 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # XXX (X %) ipnh # XXX (X %) ip6nh # XXX (X %) mplsnh # XXX (X %)HW Resource Information Name # ext_tcam_ipv4OOR Information NPU-0 Estimated Max Entries # 2048000 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # XXXXXX (X %) iproute # XXXXXX (X %) ipmcroute # XXXXX (X %)HW Resource Information Name # ext_tcam_ipv6_shortOOR Information NPU-0 Estimated Max Entries # 0 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # XXXXX (X %) ip6route # XXXXX (X %)HW Resource Information Name # ext_tcam_ipv6_longOOR Information NPU-0 Estimated Max Entries # 0 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # XXXXX (X %) ip6route # XXXXX (X %)HW Resource Information Name # fecOOR Information NPU-0 Estimated Max Entries # 126976 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # XXXX (X %) ipnhgroup # XXXX (X %) ip6nhgroup # XXXX (X %)HW Resource Information Name # ecmp_fecOOR Information NPU-0 Estimated Max Entries # 4096 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # XXXXX (X %) ipnhgroup # XXXXX (X %) ip6nhgroup # XXXXX (X %)RP/0/RP0/CPU0#NCS5501-622#Depending on the address family (IPv4 or IPv6), but also depending on the prefix subnet length, routes will be sorted and stored in LEM, LPM or eTCAM. 
Route handling will depend on the platform type, the IOS XR release running and the profile activated. That’s what we will cover in the next episode", "url": "/tutorials/2017-08-02-understanding-ncs5500-resources-s01e01/", "author": "Nicolas Fevrier", "tags": "NCS5500, NCS 5500, LPM, LEM, eTCAM" } , "tutorials-2017-08-03-understanding-ncs5500-resources-s01e02": { "title": "Understanding NCS5500 Resources (S01E02)", "content": " Understanding NCS5500 Resources S01E02 IPv4 Prefixes Previously on “Understanding NCS5500 Resources” IPv4 routes and FIB Profiles Lab verification Real use-cases You can find more content related to NCS5500 including routing memory management, VRF, URPF, Netflow following this link.In IOS XR 7.3.1, we will decommission the “internet-optimized” mode, please check this article# https#//xrdocs.io/ncs5500/tutorials/decommissioning-internet-optimized-mode/S01E02 IPv4 PrefixesPreviously on “Understanding NCS5500 Resources”In the previous post, we introduced the different routers and line cards in NCS5500 portfolio. We classified them into two categories# with or without external TCAM (eTCAM). And we introduced the different databases available to store information, inside and outside the Forwarding ASIC (FA).All the principles described below and the examples used to illustrate them were validated in August 2017 with Jericho-based systems, using scale (with eTCAM) and base (without eTCAM) line cards and running the two IOS XR releases available# 6.1.4 and 6.2.2. Jericho+ based systems will be used in a follow up post in the same series (season 2 ;)IPv4 routes and FIB ProfilesA quick refresh will be very useful to understand how routes are stored in NCS5500# LPM# Longest Prefix Match Database (sometimes referred to as KAPS for KBP Assisted Prefix Search, KBP being itself Knowledge Based Processor) is an SRAM used to store IPv4 and IPv6 prefixes. Scale# variable from 128k to 400k entries. We can perform variable length prefix lookup in LPM. LEM# Large Exact Match Database also used to store specific IPv4 and IPv6 routes, plus MAC addresses and MPLS labels. Scale# 786k entries. We perform exact match lookup in LEM. eTCAM# external TCAMs, only present in the -SE “scale” line cards and systems. As the name implies, they are not a resource inside the Forwarding ASIC, it’s an additional memory used to extend unicast route and ACL / classifiers scale. Scale# 2M IPv4 entries. We can also perform variable length prefix lookup in eTCAM.The origin of the prefixes is not relevant. They can be received from OSPF, ISIS, BGP but also static routes. It doesn’t influence which database will be used to store them. Only the address-family (IPv4 in this discussion) and the subnet length of the prefix will be used in the decision process.Hardware programming is done through an abstraction layer# Data-Plane Agent (DPA)Also, it’s important to remember we are not talking about BGP paths here but about FIB entries# if we have 10 internet transit providers advertising more or less the same 700k-ish routes (with 10 next-hop addresses), we don’t have 7M entries in the FIB but 700k. Few exceptions exist (like using different VRFs for each transit provider) but they are out of the scope of this post.Originally, IPv4/32 are going in LEM and all other prefix lengths (IPv4/31-/0) are stored in LPM.We changed this default behavior by implementing FIB profiles# Host-optimized or Internet-Optimized.RP/0/RP0/CPU0#NCS5500(config)#hw-module fib ipv4 scale ? 
host-optimized-disable Configure Host optimization by default internet-optimized Configure Intetrnet optimizedRP/0/RP0/CPU0#NCS5500(config)#Host-optimized is the default option. Committing a change in the configuration will prompt you to reload the line-cards or chassis to enable the new profile.Note# in IOS XR 7.3.1, we will decommission the “internet-optimized” mode, please check this article# https#//xrdocs.io/ncs5500/tutorials/decommissioning-internet-optimized-mode/For a base line card (those without -SE in the product ID), we will have the following order of operation#When a packet is received, the FA performs a lookup on the destination address# first lookup is performed in the LEM searching for an IPv4/32 exact match second lookup is accessing the LPM searching for a variable length match between IPv4/31 and IPv4/25 third lookup is done in the LEM again, searching for an IPv4/24 exact match finally, the fourth lookup is checking the LPM a second time searching for a variable length match between IPv4/23 and /0All is done in one single clock tick; it doesn’t require any kind of recirculation and doesn’t impact the performance (in bandwidth or in packets per second).This mode is particularly useful with a large number of IPv4/32 and IPv4/24 in the routing table. It could be the case for hosting companies or data centers.Using the configuration above, you can decide to enable the Internet-optimized mode. This is a feature activated globally and not per line card. After reload, you will see a very different order of operation and prefix distribution in the various databases with base line cards and systems#The order of operation LEM/LPM/LEM/LPM is now replaced by an LPM/LEM/LEM/LPM approach. first lookup is in LPM searching for a match between IPv4/32 and IPv4/25 second lookup is performed in LEM for an exact match on IPv4/24 and IPv4/23. third lookup is done in LEM too, and this time for an exact match on IPv4/20 fourth and final step, a variable length lookup is executed in LPM for everything between IPv4/22 and /0Here again, everything is performed in one cycle and the activation of the Internet Optimized mode doesn’t impact the forwarding performance.As the name implies, this profile has been optimized to move the largest route population present on the Internet (IPv4/24, IPv4/23, IPv4/20) into the largest memory database# the LEM. If you followed carefully, you noticed that a couple of improvements are needed to implement this lookup sequence.First, the match on LEM needs to be on an exact prefix length but step two is done on IPv4/24 and IPv4/23. Indeed a function in DPA splits all IPv4/23 received from the upper FIB process in two. Each IPv4/23 is programmed as two subsequent IPv4/24s in hardware. We will illustrate this case with an example in the lab later in this post, advertising 300,000 IPv4/23.Second, the exact match in step 3 for IPv4/20 in LEM is only possible if we don’t have any IPv4/22 or IPv4/21 prefixes overlapping with this IPv4/20. This implies the system performs another proactive check to verify we don’t have overlap. If an overlap happens, the IPv4/20 prefix is moved from LEM to LPM dynamically. We will illustrate this mechanism later in the post, advertising 100,000 IPv4/20 first, then advertising 100,000 IPv4/21 overlapping on the IPv4/20. 
We will see the IPv4/20 moved into LPM automatically.With this Internet Optimized profile activated, it’s possible to store a full internet view on base systems and line cards (we will present a couple of examples at the end of the documents). LEM is 786k large LPM scales from 256k to 350-400k (depending on the internet distribution, this algorithmic memory is dynamically optimized) Total IPv4 scale for base systems is 786k + 350k = 1,136k routesWhat about the scale line cards and routers (NCS5501-SE, NCS5502-SE and all the -SE line cards) ?The two optimized profiles described earlier don’t impact the lookup process on scale systems, which will always follow this order of operation#Just a two-step lookup here# first lookup is in LEM for an exact match on IPv4/32 second and last lookup in the large eTCAM for everything between IPv4/31 and /0Needless to say, it is all done in one single operation in the Forwarding ASIC. LEM is 786k large eTCAM can offer up to 2M IPv4 entries Total IPv4 scale for scale systems is 786k + 2M = 2,786k routesLab verificationLet’s try to illustrate it in the lab, injecting different types of IPv4 routes.On NCS5500, the IOS XR CLI to verify the resource utilization is “show controller npu resources all location 0/x/CPU0”.We will advertise prefixes from a test device and check the memory utilization. It’s not an ideal approach because, for the sake of simplicity, the routes advertised through BGP are contiguous#RP/0/RP0/CPU0#NCS5500#sh route bgpB 2.0.0.0/32 [20/0] via 192.168.1.2, 04#13#13B 2.0.0.1/32 [20/0] via 192.168.1.2, 04#13#13B 2.0.0.2/32 [20/0] via 192.168.1.2, 04#13#13B 2.0.0.3/32 [20/0] via 192.168.1.2, 04#13#13B 2.0.0.4/32 [20/0] via 192.168.1.2, 04#13#13B 2.0.0.5/32 [20/0] via 192.168.1.2, 04#13#13B 2.0.0.6/32 [20/0] via 192.168.1.2, 04#13#13[...]RP/0/RP0/CPU0#NCS5500#Despite common belief, it’s not an ideal situation. On the contrary, algorithmic memories (like LPM) will be capable of much higher scale with real internet prefix-length distribution. 
Nevertheless, it’s still an ok approach to demonstrate where the prefixes are stored (based on the subnet length).We will take a look at two systems using scale line cards (24H12F) in slot 0/6 and base line cards (18H18F) in slot 0/0, and running two different IOS XR releases (6.1.4 and 6.2.2).200k IPv4 /32 routesLet’s get started with the advertisement of 200,000 IPv4/32 prefixes.On base line cards running Host-optimized profile, IPv4/32 routes are going to LEM.RP/0/RP0/CPU0#NCS5500-614#sh route sumRoute Source Routes Backup Deleted Memory(bytes)connected 6 1 0 1680local 7 0 0 1680static 2 0 0 480ospf 100 0 0 0 0dagr 0 0 0 0bgp 100 200000 0 0 48000000Total 200015 1 0 48003840RP/0/RP0/CPU0#NCS5500-614#sh route bgp | i /32 | utility wc -l200000RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lem location 0/0/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Green[...]Current Usage NPU-0 Total In-Use # 200131 (25 %) iproute # 200029 (25 %) ip6route # 0 (0 %) mplslabel # 102 (0 %)[...]RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lpm location 0/0/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 87036 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 148 (0 %) iproute # 5 (0 %) ip6route # 117 (0 %) ipmcroute # 50 (0 %)RP/0/RP0/CPU0#NCS5500-614#Estimated Max Entries (and the Current Usage percentage derived from it) are only estimations provided by the Forwarding ASIC based on the current memory occupation and prefix distribution. It’s not always linear and should be taken with a grain of salt.On base line cards running Internet-optimized profile, IPv4/32 routes are going to LPM#RP/0/RP0/CPU0#NCS5500-614#sh route bgp | i /32 | utility wc -l200000RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lem location 0/0/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 107 (0 %) iproute # 5 (0 %) ip6route # 0 (0 %) mplslabel # 102 (0 %)RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lpm location 0/0/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 323057 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 200171 (62 %) iproute # 200029 (62 %) ip6route # 116 (0 %) ipmcroute # 50 (0 %)RP/0/RP0/CPU0#NCS5500-614#Finaly, on scale line card, regardless of the profile enabled, the IPv4/32 are stored in LEM and not eTCAM#RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lem location 0/6/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 200127 (25 %) iproute # 200024 (25 %) ip6route # 0 (0 %) mplslabel # 102 (0 %)RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources exttcamipv4 location 0/6/CPU0HW Resource Information Name # ext_tcam_ipv4OOR Information NPU-0 Estimated Max Entries # 2048000 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 10 (0 %) iproute # 10 (0 %) ipmcroute # 0 (0 %)RP/0/RP0/CPU0#NCS5500-614#500k IPv4 /24 routesIn this second example, we announce 500,000 IPv4/24 prefixes.With both host-optimized and internet-optimized profiles on base line cards, we will see these prefixes moved into LEM.RP/0/RP0/CPU0#NCS5500-614#sh route 
bgp | i /24 | utility wc -l500000RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lem location 0/0/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 500131 (64 %) iproute # 500029 (64 %) ip6route # 0 (0 %) mplslabel # 102 (0 %)RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lpm location 0/0/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 87036 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 148 (0 %) iproute # 5 (0 %) ip6route # 117 (0 %) ipmcroute # 50 (0 %)RP/0/RP0/CPU0#NCS5500-614#On scale line cards, only IPv4/32s are going to LEM. The rest (that includes our 500,000 IPv4/24s) will be pushed to the external TCAM#RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lem location 0/6/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 127 (0 %) iproute # 24 (0 %) ip6route # 0 (0 %) mplslabel # 102 (0 %)RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lpm location 0/6/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 118638 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 144 (0 %) iproute # 0 (0 %) ip6route # 117 (0 %) ipmcroute # 50 (0 %)RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources exttcamipv4 location 0/6/CPU0HW Resource Information Name # ext_tcam_ipv4OOR Information NPU-0 Estimated Max Entries # 2048000 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 500010 (24 %) iproute # 500010 (24 %) ipmcroute # 0 (0 %)RP/0/RP0/CPU0#NCS5500-614#300k IPv4 /23 routesIn this third example, we announce 300,000 IPv4/23 prefixes.With the Host-optimized profiles on base line cards, they will be moved to the LPM.RP/0/RP0/CPU0#NCS5500-614#sh route sumRoute Source Routes Backup Deleted Memory(bytes)connected 6 1 0 1680local 7 0 0 1680static 2 0 0 480ospf 100 0 0 0 0dagr 0 0 0 0bgp 100 300000 0 0 72000000Total 300015 1 0 72003840RP/0/RP0/CPU0#NCS5500-614#sh route bgpB 110.0.0.0/23 [20/0] via 192.168.1.2, 00#18#17B 110.0.2.0/23 [20/0] via 192.168.1.2, 00#18#17B 110.0.4.0/23 [20/0] via 192.168.1.2, 00#18#17B 110.0.6.0/23 [20/0] via 192.168.1.2, 00#18#17B 110.0.8.0/23 [20/0] via 192.168.1.2, 00#18#17B 110.0.10.0/23 [20/0] via 192.168.1.2, 00#18#17B 110.0.12.0/23 [20/0] via 192.168.1.2, 00#18#17B 110.0.14.0/23 [20/0] via 192.168.1.2, 00#18#17B 110.0.16.0/23 [20/0] via 192.168.1.2, 00#18#17B 110.0.18.0/23 [20/0] via 192.168.1.2, 00#18#17^cRP/0/RP0/CPU0#NCS5500-614#sh route bgp | i /23 | utility wc -l300000RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lem location 0/0/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 131 (0 %) iproute # 29 (0 %) ip6route # 0 (0 %) mplslabel # 102 (0 %)RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lpm location 0/0/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 261968 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Red OOR State Change Time # 2017.Aug.04 06#59#54 UTCCurrent Usage NPU-0 Total In-Use # 261714 (100 %) iproute # 261571 (100 %) ip6route # 117 (0 %) ipmcroute # 50 (0 %)RP/0/RP0/CPU0#NCS5500-614#Only 261k 
IPv4/23 prefixes out of the 300k were programmed. Then, we reached the max of the memory capacity (we removed the error messages reporting that extra entries have not been programmed in hardware because the LPM capacity was exceeded).Let’s enable the Internet-optimized profile (and reload).This time, the 300,000 IPv4/23 will be split into two, creating 600,000 IPv4/24. And we will move them into LEM.Note# routes are not split into RIB/FIB but just when programmed into the hardware. That’s why a show route will display 300,000 entries and the show contr npu resource will display 600,000.RP/0/RP0/CPU0#NCS5500-614#sh route bgp | i /23 | utility wc -l300000RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lem location 0/0/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 600107 (76 %) iproute # 600005 (76 %) ip6route # 0 (0 %) mplslabel # 102 (0 %)RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lpm location 0/0/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 140729 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 171 (0 %) iproute # 29 (0 %) ip6route # 116 (0 %) ipmcroute # 50 (0 %)RP/0/RP0/CPU0#NCS5500-614#The same example with scale line cards will not be dependant on the optimized profile activated, all the IPv4/23 routes will be stored in external TCAM#RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lem location 0/6/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 127 (0 %) iproute # 24 (0 %) ip6route # 0 (0 %) mplslabel # 102 (0 %)RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lpm location 0/6/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 117819 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 143 (0 %) iproute # 0 (0 %) ip6route # 116 (0 %) ipmcroute # 50 (0 %)RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources exttcamipv4 location 0/6/CPU0HW Resource Information Name # ext_tcam_ipv4OOR Information NPU-0 Estimated Max Entries # 2048000 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 300010 (15 %) iproute # 300010 (15 %) ipmcroute # 0 (0 %)RP/0/RP0/CPU0#NCS5500-614#100k IPv4 /20 routesIn this last example, we announce 100,000 IPv4/20 prefixes.You got it, so no need to describe# the host-optimized profile on base line cards where these 100k will be stored in LPM the scale cards where these routes will be pushed to the external TCAMLet’s focus on the behavior with base line cards running an Internet-optimized profile.We only advertise IPv4/20 routes and no overlapping routes, they will be all stored in LEM.RP/0/RP0/CPU0#NCS5500-614#sh bgp sum[...]Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd10.11.12.1 0 100 13287 1466 4300005 0 0 13#11#23 100000RP/0/RP0/CPU0#NCS5500-614#sh route bgpB 3.0.0.0/20 [200/0] via 192.168.1.2, 10#05#09B 3.0.16.0/20 [200/0] via 192.168.1.2, 10#05#09B 3.0.32.0/20 [200/0] via 192.168.1.2, 10#05#09B 3.0.48.0/20 [200/0] via 192.168.1.2, 10#05#09B 3.0.64.0/20 [200/0] via 192.168.1.2, 10#05#09B 3.0.80.0/20 [200/0] via 192.168.1.2, 10#05#09B 3.0.96.0/20 [200/0] via 192.168.1.2, 10#05#09[...]RP/0/RP0/CPU0#NCS5500-614#sh route bgp | i /20 | utility wc 
-l100000RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lem location 0/0/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Green Current Usage NPU-0 Total In-Use # 100002 (13 %) iproute # 100002 (13 %) ip6route # 0 (0 %) mplslabel # 0 (0 %) RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lpm location 0/0/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 148883 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Green Current Usage NPU-0 Total In-Use # 153 (0 %) iproute # 19 (0 %) ip6route # 113 (0 %) ipmcroute # 0 (0 %) RP/0/RP0/CPU0#NCS5500-614#Now, we advertise 100,000 new IPv4/21 routes. All are overlapping the IPv4/20 we announced earlier. The IPv4/20 will no longer be stored in LEM but will be moved into LPM, for a total of 200,000 entries#RP/0/RP0/CPU0#NCS5500-614#sh bgp sumNeighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd10.11.12.1 0 100 13901 1474 4600005 0 0 13#19#24 200000RP/0/RP0/CPU0#NCS5500-614#sh route bgpSat Aug 5 00#15#38.987 UTCB 3.0.0.0/20 [200/0] via 192.168.1.2, 00#00#02B 3.0.0.0/21 [200/0] via 192.168.1.2, 00#02#27B 3.0.16.0/20 [200/0] via 192.168.1.2, 00#00#02B 3.0.16.0/21 [200/0] via 192.168.1.2, 00#02#27B 3.0.32.0/20 [200/0] via 192.168.1.2, 00#00#02B 3.0.32.0/21 [200/0] via 192.168.1.2, 00#02#27B 3.0.48.0/20 [200/0] via 192.168.1.2, 00#00#02B 3.0.48.0/21 [200/0] via 192.168.1.2, 00#02#27B 3.0.64.0/20 [200/0] via 192.168.1.2, 00#00#02B 3.0.64.0/21 [200/0] via 192.168.1.2, 00#02#27B 3.0.80.0/20 [200/0] via 192.168.1.2, 00#00#02B 3.0.80.0/21 [200/0] via 192.168.1.2, 00#02#27B 3.0.96.0/20 [200/0] via 192.168.1.2, 00#00#02B 3.0.96.0/21 [200/0] via 192.168.1.2, 00#02#27[...]RP/0/RP0/CPU0#NCS5500-614#sh route bgp | i /20 | utility wc -l100000RP/0/RP0/CPU0#NCS5500-614#sh route bgp | i /21 | utility wc -l100000RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lem location 0/0/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 2 (0 %) iproute # 2 (0 %) ip6route # 0 (0 %) mplslabel # 0 (0 %)RP/0/RP0/CPU0#NCS5500-614#sh contr npu resources lpm location 0/0/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 539335 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 200153 (37 %) iproute # 200019 (37 %) ip6route # 113 (0 %) ipmcroute # 0 (0 %)RP/0/RP0/CPU0#NCS5500-614#Real use-casesTo conclude, let’s illustrate with real but anonymized use-cases.On a base system running IOS XR 6.2.2 with internet-optimized profile and a “small” internet table.RP/0/RP0/CPU0#5501#show route sumRoute Source Routes Backup Deleted Memory(bytes)connected 2 3 0 1200 local 5 0 0 1200 local LSPV 1 0 0 240 static 2 0 0 480 ospf 1 677 2 0 163072 bgp xxxx 615680 10 0 147765600 dagr 0 0 0 0 Total 616367 15 0 147931792 RP/0/RP0/CPU0#5501#show dpa resources iproute location 0/0/CPU0~iproute~ DPA Table (Id# 17, Scope# Global)--------------------------------------------------IPv4 Prefix len distribution Prefix Actual Prefix Actual /0 1 /1 0 /2 0 /3 0 /4 1 /5 0 /6 0 /7 0 /8 15 /9 9 /10 35 /11 102 /12 277 /13 527 /14 955 /15 1703 /16 12966 /17 7325 /18 12874 /19 23469 /20 35743 /21 39283 /22 72797 /23 60852 /24 346773 /25 3 /26 19 /27 21 /28 17 /29 13 /30 229 /31 0 /32 368 [...]RP/0/RP0/CPU0#5501#show contr npu resources all location 0/0/CPU0 HW 
Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 498080 (63 %) iproute # 507304 (65 %) ip6route # 12818 (2 %) mplslabel # 677 (0 %)HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 510070 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 192543 (38 %) iproute # 176254 (35 %) ip6route # 15583 (3 %) ipmcroute # 0 (0 %)Same route distribution on a scale system running IOS XR 6.2.2#RP/0/RP0/CPU0#5501-SE#show route sum Route Source Routes Backup Deleted Memory(bytes)connected 4 3 0 1680 local 7 0 0 1680 local LSPV 1 0 0 240 static 2 0 0 480 ospf 1 677 2 0 163072 bgp xxxx 615681 10 0 147765840 dagr 0 0 0 0 Total 616372 15 0 147932992 RP/0/RP0/CPU0#5501-SE#show dpa resources iproute location 0/0/CPU0 ~iproute~ DPA Table (Id# 17, Scope# Global)--------------------------------------------------IPv4 Prefix len distribution Prefix Actual Capacity Prefix Actual Capacity /0 1 20 /1 0 20 /2 0 20 /3 0 20 /4 1 20 /5 0 20 /6 0 20 /7 0 20 /8 15 20 /9 9 20 /10 35 205 /11 102 409 /12 277 818 /13 527 1636 /14 955 3275 /15 1703 5731 /16 12966 42368 /17 7325 25379 /18 12874 42571 /19 23469 86576 /20 35743 127308 /21 39283 141634 /22 72797 231894 /23 60852 207107 /24 346773 1105235 /25 4 4298 /26 19 4503 /27 21 3275 /28 17 2865 /29 13 6959 /30 231 2865 /31 0 205 /32 376 20 […]RP/0/RP0/CPU0#5501-SE#show contr npu resources all location 0/0/CPU0 HW Resource Information Name # lem OOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Green Current Usage NPU-0 Total In-Use # 13887 (2 %) iproute # 376 (0 %) ip6route # 12827 (2 %) mplslabel # 677 (0 %) HW Resource Information Name # lpm OOR Information NPU-0 Estimated Max Entries # 551346 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Green Current Usage NPU-0 Total In-Use # 15612 (3 %) iproute # 0 (0 %) ip6route # 15589 (3 %) ipmcroute # 0 (0 %) HW Resource Information Name # ext_tcam_ipv4 OOR Information NPU-0 Estimated Max Entries # 2048000 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Green Current Usage NPU-0 Total In-Use # 616012 (30 %) iproute # 616012 (30 %) ipmcroute # 0 (0 %) Examples above show it’s possible to store a full internet view in a base system.With a relatively small table of 616k routes, we have LEM used at approximatively 65%. But it’s frequent to see larger internet tables (closer to 700k in August 2017), with many peering routes and internal routes. It still fits in but doesn’t give much room for future growth.We advise to prefer scale line cards and systems for such use-cases.In the next episode, we will cover IPv6 prefixes. 
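As a quick recap of this episode before closing, the short Python sketch below summarises where an IPv4 prefix of a given length is programmed according to the rules described above (base line cards with the host-optimized or internet-optimized profile, and -SE line cards with eTCAM). It is only a memory aid in pseudocode, not router code; remember also that an IPv4/23 is actually split into two IPv4/24s before being written to LEM, and that an IPv4/20 is moved back to LPM when overlapping IPv4/21 or IPv4/22 prefixes exist.

# Memory aid only - summarises the placement rules described in this post.
def ipv4_database(prefix_len, card='base', profile='host-optimized'):
    if card == 'scale':                    # -SE systems and line cards
        return 'LEM' if prefix_len == 32 else 'eTCAM'
    if profile == 'host-optimized':        # default profile on base cards
        return 'LEM' if prefix_len in (32, 24) else 'LPM'
    # internet-optimized profile on base cards
    if prefix_len >= 25:
        return 'LPM'                       # /32 down to /25
    if prefix_len in (24, 23):
        return 'LEM'                       # each /23 stored as two /24s
    if prefix_len == 20:
        return 'LEM'                       # moved to LPM if overlapped by /21 or /22
    return 'LPM'                           # /22, /21 and /19 down to /0

print(ipv4_database(24))                                  # LEM
print(ipv4_database(23, profile='internet-optimized'))    # LEM
print(ipv4_database(19, card='scale'))                    # eTCAM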
Stay tuned.", "url": "/tutorials/2017-08-03-understanding-ncs5500-resources-s01e02/", "author": "Nicolas Fevrier", "tags": "NCS5500, NCS 5500, LPM, LEM, eTCAM, XR, IOSXR, Memory" } , "tutorials-2017-08-07-understanding-ncs5500-resources-s01e03": { "title": "Understanding NCS5500 Resources (S01E03)", "content": " Understanding NCS5500 Resources S01E03 IPv6 Prefixes Previously on “Understanding NCS5500 Resources” IPv6 routes and FIB Profiles Lab verification You can find more content related to NCS5500 including routing in VRF, URPF, ACLs, Netflow following this link.Important update# in IOS XR 7.3.1, we will decommission the “internet-optimized” mode, please check this article# https#//xrdocs.io/ncs5500/tutorials/decommissioning-internet-optimized-mode/S01E03 IPv6 PrefixesPreviously on “Understanding NCS5500 Resources”In the previous posts, we introduced the different routers and line cards in NCS5500 portfolio and we explained how IPv4 prefixes are sorted in LEM, LPM and eTCAM.All the principles described below and the examples used to illustrate them were validated in August 2017 with Jericho-based systems, using scale (with eTCAM) and base (without eTCAM) line cards and running the two IOS XR releases available# 6.1.4 and 6.2.2.IPv6 routes and FIB ProfilesPlease take a few minutes to read the S01E02 to understand the different databases used to store routes in NCS5500# LPM# Longest Prefix Match Database (or KAPS) is an SRAM used to store IPv4 and IPv6 prefixes. LEM# Large Exact Match Database also used to store specific IPv4 and IPv6 routes, plus MAC addresses and MPLS labels. eTCAM# external TCAMs, only present in the -SE “scale” line cards and systems. As the name implies, they are not a resource inside the Forwarding ASIC, it’s an additional memory used to extend unicast route and ACL / classifiers scale.We explained how the different profiles influenced the prefixes storing in different databases for base and scale systems or line cards. The principles for IPv6 are similar but things are actually simpler# the order of operation will be exactly the same, regardless of the FIB profile activated and regardless of the type of line card (base or scale).The logic behind this decision# IPv6/48 prefixes are by far the largest population of the public table.(From BGPv6 table on Twitter)To avoid any misunderstanding, let’s review the IPv6 resource allocation / distribution for each profile and line card type quickly, starting with the Base systems with Host-optimized FIB profile#Base systems with Internet-optimized FIB profile#Note# in IOS XR 7.3.1, we will decommission the “internet-optimized” mode, please check this article# https#//xrdocs.io/ncs5500/tutorials/decommissioning-internet-optimized-mode/Scale systems regardless of FIB profile#See ? Pretty easy. By default, IPv6/48 are moved into LEM and the all other IPv6 prefixes are pushed into LPM.Lab verificationLPM is an algorithmic memory. That means, the capacity will depend on the prefix distribution and how many have been programmed at a given moment. We will use a couple of examples below to illustrate below how the routes are moved but you should not rely on the “estimated capacity” to based your capacity planning. Only a real internet table will give you a reliable idea of the available space.In slot 0/0, we have a base line card (18H18F) using an Internet-optimized profile. 
In slot 0/6, we use a scale line card (24H12F-SE).Also, keep in mind we are announcing ordered prefixes which are fine in a lab context to verify where the system will store the routes but it’s not a realistic scenario (compared to a real internet table for instance).IPv6/48 RoutesIPv6/48 prefixes are stored in LEM#First we advertise 20,000 IPv6/48 routes and check the different databases.On Base line cards#RP/0/RP0/CPU0#NCS5508-1-614#sh route ipv6 bgp | i /48 | utility wc -l20000RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lem location 0/0/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 20107 (3 %) iproute # 5 (0 %) ip6route # 20000 (3 %) mplslabel # 102 (0 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 117926 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 172 (0 %) iproute # 29 (0 %) ip6route # 117 (0 %) ipmcroute # 50 (0 %)RP/0/RP0/CPU0#NCS5508-1-614#Note# for readability, we will only display NPU-0 information. In the full output of the show command, we will have from NPU-0 to NPU-0 on 18H18F and from NPU-0 to NPU-3 on the 24H12F-SE.On Scale line cards#RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lem location 0/6/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 20127 (3 %) iproute # 24 (0 %) ip6route # 20000 (3 %) mplslabel # 102 (0 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 118638 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 144 (0 %) iproute # 0 (0 %) ip6route # 117 (0 %) ipmcroute # 50 (0 %)RP/0/RP0/CPU0#NCS5508-1-614#With 20,000 IPv6/48 prefixes, as expected, it’s only 3% of the 786,432 entries of LEM.Just for verification, we will advertise 200,000 then 400,000 IPv6/48 routes. And of course the LEM estimated max entries will stay constant. 
LEM is very different than LPM from this perspective.RP/0/RP0/CPU0#NCS5508-1-614#sh route ipv6 bgp | i /48 | utility wc -l200000RP/0/RP0/CPU0#NCS5508-1-614#RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lem location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 786432 Total In-Use # 200107 (25 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 117926 Total In-Use # 172 (0 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lem location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 786432 Total In-Use # 200127 (25 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 118638 Total In-Use # 144 (0 %)RP/0/RP0/CPU0#NCS5508-1-614#RP/0/RP0/CPU0#NCS5508-1-614#sh route ipv6 bgp | i /48 | utility wc -l400000RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lem location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 786432 Total In-Use # 400107 (51 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 117926 Total In-Use # 172 (0 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lem location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 786432 Total In-Use # 400127 (51 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 118638 Total In-Use # 144 (0 %)RP/0/RP0/CPU0#NCS5508-1-614#Non IPv6/48 Routes ?From IPv6/1 to IPv6/47 and from IPv6/49 to IPv6/128, all these prefixes will be stored in LPM.The estimated max prefixes will be very different for each test and will also differ depending on the number of routes we advertise.IPv6/32 RoutesLet’s see is the occupation for 20,000 / 40,000 and 60,000 IPv6/32 prefixes.On base line cards with 20,000 IPv6/32#RP/0/RP0/CPU0#NCS5508-1-614#sh route ipv6 bgp | i /32 | utility wc -l20000RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 493046 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 20172 (4 %) iproute # 29 (0 %) ip6route # 20117 (4 %) ipmcroute # 50 (0 %)RP/0/RP0/CPU0#NCS5508-1-614#On scale line cards with IPv6/32#RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 492362 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 20144 (4 %) iproute # 0 (0 %) ip6route # 20117 (4 %) ipmcroute # 50 (0 %)RP/0/RP0/CPU0#NCS5508-1-614#40,000 IPv6/32 prefixes#RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 477274 Total In-Use # 40172 (8 %)RP/0/RP0/CPU0#NCS5508-1-614#RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 478572 Total In-Use # 40144 (8 %)RP/0/RP0/CPU0#NCS5508-1-614#60,000 IPv6/32 prefixes#RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 459800 Total In-Use # 60172 (13 %)RP/0/RP0/CPU0#NCS5508-1-614#RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 460686 Total In-Use # 60144 (13 %)RP/0/RP0/CPU0#NCS5508-1-614#IPv6/56 Routes20,000 IPv6/56 prefixes#RP/0/RP0/CPU0#NCS5508-1-614#sh route ipv6 bgp | i /56 | 
utility wc -l20000RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 239664 Total In-Use # 20172 (8 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 489198 Total In-Use # 20144 (4 %)RP/0/RP0/CPU0#NCS5508-1-614#40,000 IPv6/56 prefixes#RP/0/RP0/CPU0#NCS5508-1-614#sh route ipv6 bgp | i /56 | utility wc -l40000RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 220600 Total In-Use # 40172 (18 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 475320 Total In-Use # 40144 (8 %)RP/0/RP0/CPU0#NCS5508-1-614#60,000 IPv6/56 prefixes#RP/0/RP0/CPU0#NCS5508-1-614#sh route ipv6 bgp | i /56 | utility wc -l60000RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 201192 Total In-Use # 60172 (30 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 458492 Total In-Use # 60144 (13 %)RP/0/RP0/CPU0#NCS5508-1-614#IPv6/64 Routes20,000 IPv6/64 prefixes#RP/0/RP0/CPU0#NCS5508-1-614#sh route ipv6 bgp | i /64 | utility wc -l20000RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 239664 Total In-Use # 20172 (8 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 489198 Total In-Use # 20144 (4 %)RP/0/RP0/CPU0#NCS5508-1-614# 40,000 IPv6/64 prefixes#RP/0/RP0/CPU0#NCS5508-1-614#sh route ipv6 bgp | i /64 | utility wc -l40000RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 220600 Total In-Use # 40172 (18 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 475320 Total In-Use # 40144 (8 %)RP/0/RP0/CPU0#NCS5508-1-614#60,000 IPv6/64 prefixes#RP/0/RP0/CPU0#NCS5508-1-614#sh route ipv6 bgp | i /64 | utility wc -l60000RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 201192 Total In-Use # 60172 (30 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 458492 Total In-Use # 60144 (13 %)RP/0/RP0/CPU0#NCS5508-1-614#IPv6/128 Routes20,000 IPv6/128 prefixes#RP/0/RP0/CPU0#NCS5508-1-614#sh route ipv6 bgp | i /128 | utility wc -l20000RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 238848 Total In-Use # 20172 (8 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 239330 Total In-Use # 20144 (8 %)RP/0/RP0/CPU0#NCS5508-1-614#40,000 IPv6/128 prefixes#RP/0/RP0/CPU0#NCS5508-1-614#sh route ipv6 bgp | i /128 | utility wc -l40000RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 220186 Total In-Use # 40172 (18 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 220446 Total In-Use # 40144 (18 %)RP/0/RP0/CPU0#NCS5508-1-614#60,000 IPv6/128 prefixes#RP/0/RP0/CPU0#NCS5508-1-614#sh route ipv6 bgp | i /128 | utility wc -l60000RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 
0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 200914 Total In-Use # 60172 (30 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 201098 Total In-Use # 60144 (30 %)RP/0/RP0/CPU0#NCS5508-1-614#80,000 IPv6/128 prefixes#RP/0/RP0/CPU0#NCS5508-1-614#sh route ipv6 bgp | i /128 | utility wc -l60000RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 181075 Total In-Use # 80173 (44 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 181219 Total In-Use # 80145 (44 %)RP/0/RP0/CPU0#NCS5508-1-614#100,000 IPv6/128 prefixes#RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 161334 Total In-Use # 100172 (62 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 161456 Total In-Use # 100144 (62 %)RP/0/RP0/CPU0#NCS5508-1-614#120,000 IPv6/128 prefixes#RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/0/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 141370 Total In-Use # 120172 (85 %)RP/0/RP0/CPU0#NCS5508-1-614#sh contr npu resources lpm location 0/6/CPU0 | i ~(Estim|In-Use)~ Estimated Max Entries # 141476 Total In-Use # 120144 (85 %)RP/0/RP0/CPU0#NCS5508-1-614# Pfx Base Max Pfx Scale Max Pfx 20k IPv6/32 LPM# 489,903 LPM# 492,387 40k IPv6/32 LPM# 475,663 LPM# 478,583 60k IPv6/32 LPM# 458,713 LPM# 460,693 80k IPv6/32 LPM# 440,257 LPM# 440,929 100k IPv6/32 LPM# 421,187 LPM# 421,733 200k IPv6/32 LPM# 322,395 LPM# 323,017 250k IPv6/32 LPM# 272,903 LPM# 273,141 20k IPv6/48 LEM# 786,432 LEM# 786,432 40k IPv6/48 LEM# 786,432 LEM# 786,432 60k IPv6/48 LEM# 786,432 LEM# 786,432 80k IPv6/48 LEM# 786,432 LEM# 786,432 100k IPv6/48 LEM# 786,432 LEM# 786,432 200k IPv6/48 LEM# 786,432 LEM# 786,432 20k IPv6/56 LPM# 486,773 LPM# 489,223 40k IPv6/56 LPM# 474,051 LPM# 475,331 60k IPv6/56 LPM# 457,623 LPM# 458,501 80k IPv6/56 LPM# 439,433 LPM# 440,103 100k IPv6/56 LPM# 420,525 LPM# 421,069 200k IPv6/56 LPM# 322,061 LPM# 322,349 250k IPv6/56 LPM# 272,637 LPM# 272,873 20k IPv6/64 LPM# 239,675 LPM# 489,223 40k IPv6/64 LPM# 220,605 LPM# 475,331 60k IPv6/64 LPM# 201,195 LPM# 458,501 80k IPv6/64 LPM# 181,283 LPM# 440,103 100k IPv6/64 LPM# 161,503 LPM# 421,069 120k IPv6/64 LPM# 141,511 LPM# 401,163 20k IPv6/128 LPM# 238,848 LPM# 239,330 40k IPv6/128 LPM# 220,186 LPM# 220,446 60k IPv6/128 LPM# 200,914 LPM# 201,098 80k IPv6/128 LPM# 181,075 LPM# 181,219 100k IPv6/128 LPM# 161,334 LPM# 161,456 120k IPv6/128 LPM# 141,370 LPM# 141,476 Again this chart is just provided for information with “aligned”/”sorted” routes, not really representing a real internet distribution. Take a look at the former post for a production router with public view IPv4+IPv6.In next posts, we will cover Encapsulation database, FEC and ECMP FEC database, MPLS use-cases and the classifiers/ACLs. 
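One last footnote on the chart above. Because the “Estimated Max Entries” value shrinks roughly linearly as prefixes of a given length are installed, two points from the table are enough for a back-of-the-envelope estimate of where “Total In-Use” would catch up with the shrinking estimate. The Python sketch below is not from the original post and inherits all the caveats already mentioned (sorted lab prefixes, not a real internet distribution), so treat the result as an illustration of the method rather than a scale figure.

```python
# Hedged back-of-the-envelope sketch - fit a line through two (installed, estimated-max)
# points from the chart above and solve for the prefix count where usage meets capacity.
def crossover(n1, cap1, n2, cap2):
    slope = (cap2 - cap1) / (n2 - n1)   # capacity entries lost per installed prefix
    intercept = cap1 - slope * n1       # estimated capacity extrapolated to 0 prefixes
    return intercept / (1 - slope)      # N such that N == intercept + slope * N

# IPv6/128 on the base line card, values taken from the chart above:
print(round(crossover(20_000, 238_848, 120_000, 141_370)))  # roughly 131k prefixes
```

The same two-point estimate can be run for any of the prefix lengths in the chart; a real internet mix will behave differently, as the post stresses.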
Stay tuned.", "url": "/tutorials/2017-08-07-understanding-ncs5500-resources-s01e03/", "author": "Nicolas Fevrier", "tags": "ncs5500, ncs 5500, lpm, lem, routes, prefixes, eTCAM" } , "tutorials-2017-12-30-full-internet-view-on-base-ncs-5500-systems-s01e04": { "title": "Full Internet View on "Base" NCS 5500 Systems (S01E04)", "content": " Understanding NCS5500 Resources S01E04 Full Internet View on “Base” NCS 5500 Systems Previously on “Understanding NCS5500 Resources” The demo Config and CLI “Wet-finger” Internet Growth Projection You can find more content related to NCS5500 including routing in VRF, URPF, ACLs, Netflow following this link.S01E04 Full Internet View on “Base” NCS 5500 SystemsDisclaimer# In the following page and video, you’ll find a demo of the internet-optimized mode on NCS5500 routers based on Jericho (or Jericho+). This demo ran in late 2017 is still accurate. With recent internet distribution you should still be able to store a full view v4 and v6 in the internal memories (LEM and LPM). Does it mean it’s recommended? Probably not# it will occupy 80% or more of the LEM which doesn’t give more room for growth and potential routing incident. In September 2019, we recommend you select devices with eTCAM for internet peering roles unless you are running a smart system of route filtering to contain the table size.Important update# in IOS XR 7.3.1, we will decommission the “internet-optimized” mode, please check this article# https#//xrdocs.io/ncs5500/tutorials/decommissioning-internet-optimized-mode/Previously on “Understanding NCS5500 Resources”In previous posts, we presented# the different routers and line cards in NCS5500 portfolio we explained how IPv4 prefixes are sorted in LEM, LPM and eTCAM and how IPv6 prefixes are stored in the same databases.Let’s illustrate how we can handle a full IPv4 and IPv6 Internet view on base systems and line cards (i.e. 
without external TCAM, only using the LEM and LPM internal to the forwarding ASIC).The demoFollowing YouTube video will demonstrate we can handle multiple internet peers in the global routing table (15 full views on our demo) on base systems with the appropriate optimizatios.We will demo how we can monitor the important resources used to store routing information and finally we will project what could be the internet size if it follows 2017 trends and how long the systems will be handle the full v4/v6 views.https#//www.youtube.com/watch?v=8Tq4nyP2wuAThis video shows NCS5500 using Jericho-based line cards without external TCAM (also valid for fixed-form systems) handling the current internet table with still significant growth margin.Config and CLIOn this line card we have#RP/0/RP0/CPU0#NCS5508#sh route sumRoute Source Routes Backup Deleted Memory(bytes)connected 9 1 0 2400local 10 0 0 2400static 2 0 0 480ospf 100 0 0 0 0bgp 100 698440 0 0 167625600isis 1 0 0 0 0dagr 0 0 0 0Total 698461 1 0 167630880 RP/0/RP0/CPU0#NCS5508#sh route ipv6 un sum Route Source Routes Backup Deleted Memory(bytes)connected 5 0 0 1320connected l2tpv3_xconnect 0 0 0 0local 5 0 0 1320static 0 0 0 0bgp 100 61527 0 0 16243128Total 61537 0 0 16245768RP/0/RP0/CPU0#NCS5508#sh bgp sumBGP router identifier 1.1.1.1, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0000000 RD version# 1000324BGP main routing table version 1000324BGP NSR Initial initsync version 624605 (Reached)BGP NSR/ISSU Sync-Group versions 1000324/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 1000324 1000324 1000324 1000324 1000324 1000324Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.100.151 0 1000 665841 210331 1000324 0 0 6d23h 655487192.168.100.152 0 45896 666232 210331 1000324 0 0 6d23h 656126192.168.100.153 0 7018 664430 210331 1000324 0 0 6d23h 654330192.168.100.154 0 1836 669052 210331 1000324 0 0 6d23h 658948192.168.100.155 0 50300 646305 210331 1000324 0 0 6d23h 636208192.168.100.156 0 50304 667405 210331 1000324 0 0 6d23h 657301192.168.100.157 0 57381 667411 210331 1000324 0 0 6d23h 657307192.168.100.158 0 4608 687673 276201 1000324 0 0 6d23h 677487192.168.100.159 0 4777 676317 210331 1000324 0 0 6d23h 666213192.168.100.160 0 37989 298774 210331 1000324 0 0 6d23h 288706192.168.100.161 0 3549 664480 210331 1000324 0 0 6d23h 654376192.168.100.163 0 8757 642587 210331 1000324 0 0 6d23h 632483192.168.100.164 0 3257 1319462 360784 1000324 0 0 6d22h 654661192.168.100.165 0 3258 664741 262418 1000324 0 0 6d22h 654661192.168.100.166 0 4609 687524 163237 1000324 0 0 6d22h 677487RP/0/RP0/CPU0#NCS5508#sh bgp ipv6 un sumBGP router identifier 1.1.1.1, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0800000 RD version# 64373BGP main routing table version 64373BGP NSR Initial initsync version 61139 (Reached)BGP NSR/ISSU Sync-Group versions 64373/0BGP scan interval 60 secs BGP is operating in STANDALONE mode. 
Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 64373 64373 64373 64373 64373 64373 Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd2001#111##151 0 100 68925 10070 64373 0 0 6d23h 588602001#111##152 0 100 51103 10070 64373 0 0 6d23h 410382001#111##153 0 100 52945 109776 64373 0 0 6d23h 428802001#111##155 0 100 52754 10070 64373 0 0 6d23h 426892001#111##156 0 100 40481 10070 64373 0 0 6d23h 304172001#111##157 0 100 13997 10070 64373 0 0 6d23h 39322001#111##158 0 100 52944 10070 64373 0 0 6d23h 428792001#111##159 0 100 51465 10070 64373 0 0 6d23h 414002001#111##160 0 100 52737 10070 64373 0 0 6d23h 426722001#111##161 0 100 52918 10070 64373 0 0 6d23h 428532001#111##163 0 100 51428 10070 64373 0 0 6d23h 41364 RP/0/RP0/CPU0#NCS5508#sh bgp scale VRF# default Neighbors Configured# 26 Established# 26 Address-Family Prefixes Paths PathElem Prefix Path PathElem Memory Memory Memory IPv4 Unicast 698440 9481781 698440 98.58MB 795.74MB 71.27MB IPv6 Unicast 61527 430984 61527 9.39MB 36.17MB 6.28MB ------------------------------------------------------------------------------ Total 759967 9912765 759967 107.97MB 831.91MB 77.55MB Total VRFs Configured# 0 RP/0/RP0/CPU0#NCS5508#sh bgpBGP router identifier 1.1.1.1, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0000000 RD version# 1000324BGP main routing table version 1000324BGP NSR Initial initsync version 624605 (Reached)BGP NSR/ISSU Sync-Group versions 1000324/0BGP scan interval 60 secs Status codes# s suppressed, d damped, h history, * valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discardOrigin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight Path*> 1.0.4.0/22 192.168.100.151 0 1000 1299 4826 38803 56203 i* 192.168.100.152 0 45896 45896 4826 38803 56203 i* 192.168.100.153 0 7018 7018 3257 4826 38803 56203 i* 192.168.100.154 0 1836 1836 6939 4826 38803 56203 i* 192.168.100.155 0 50300 50300 6939 4826 38803 56203 i* 192.168.100.156 0 50304 50304 6939 4826 38803 56203 i* 192.168.100.157 0 57381 57381 6939 4826 38803 56203 i* 192.168.100.158 0 4608 4608 4826 38803 56203 i* 192.168.100.159 0 4777 4777 2516 4713 2914 15412 4826 38803 56203 i* 192.168.100.160 0 37989 37989 4844 4826 38803 56203 i* 192.168.100.161 2514 0 3549 3549 3356 1299 4826 38803 56203 i* 192.168.100.163 0 8757 8758 6939 4826 38803 56203 i* 192.168.100.164 10 0 3257 3257 4826 38803 56203 i* 192.168.100.165 10 0 3258 3257 4826 38803 56203 i* 192.168.100.166 0 4609 4608 4826 38803 56203 i*> 1.0.4.0/24 192.168.100.151 0 1000 1299 4826 38803 56203 i* 192.168.100.152 0 45896 45896 4826 38803 56203 i* 192.168.100.153 0 7018 7018 3257 4826 38803 56203 i* 192.168.100.154 0 1836 1836 6939 4826 38803 56203 i* 192.168.100.155 0 50300 50300 6939 4826 38803 56203 i* 192.168.100.156 0 50304 50304 6939 4826 38803 56203 i* 192.168.100.157 0 57381 57381 6939 4826 38803 56203 i* 192.168.100.158 0 4608 4608 4826 38803 56203 i* 192.168.100.159 0 4777 4777 2516 4713 2914 15412 4826 38803 56203 i* 192.168.100.161 2514 0 3549 3549 3356 1299 4826 38803 56203 i* 192.168.100.163 0 8757 8758 6939 4826 38803 56203 i* 192.168.100.164 10 0 3257 3257 4826 38803 56203 i* 192.168.100.165 10 0 3258 3257 4826 38803 56203 i* 192.168.100.166 0 4609 4608 4826 38803 56203 i*> 1.0.5.0/24 192.168.100.151 0 1000 1299 4826 38803 56203 i* 192.168.100.152 0 45896 45896 4826 38803 56203 i* 192.168.100.153 0 7018 7018 3257 4826 38803 56203 i* 192.168.100.154 0 1836 1836 6939 4826 
38803 56203 i* 192.168.100.155 0 50300 50300 6939 4826 38803 56203 i* 192.168.100.156 0 50304 50304 6939 4826 38803 56203 i* 192.168.100.157 0 57381 57381 6939 4826 38803 56203 i* 192.168.100.158 0 4608 4608 4826 38803 56203 i* 192.168.100.159 0 4777 4777 2516 4713 2914 15412 4826 38803 56203 i* 192.168.100.161 2514 0 3549 3549 3356 1299 4826 38803 56203 i* 192.168.100.163 0 8757 8758 6939 4826 38803 56203 i* 192.168.100.164 10 0 3257 3257 4826 38803 56203 i* 192.168.100.165 10 0 3258 3257 4826 38803 56203 i* 192.168.100.166 0 4609 4608 4826 38803 56203 i... RP/0/RP0/CPU0#NCS5508#sh bgp ipv6 unBGP router identifier 1.1.1.1, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0800000 RD version# 64373BGP main routing table version 64373BGP NSR Initial initsync version 61139 (Reached)BGP NSR/ISSU Sync-Group versions 64373/0BGP scan interval 60 secs Status codes# s suppressed, d damped, h history, * valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discardOrigin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight Path*>i##/0 2001#111##151 0 100 0 1299 i* i2001##/32 2001#111##151 0 100 0 286 1103 1101 i*>i 2001#111##152 100 0 7018 6939 i* i 2001#111##155 100 0 57821 6939 i* i 2001#111##156 100 0 8758 6939 i* i 2001#111##158 100 0 50304 2603 1103 1101 i* i 2001#111##159 100 0 22652 6939 i* i 2001#111##160 100 0 6881 6939 i* i 2001#111##161 100 0 50300 6939 i* i 2001#111##163 100 0 1836 6939 i*>i2001#4#112##/48 2001#111##151 0 100 0 6724 112 i* i 2001#111##152 100 0 7018 6939 112 i* i 2001#111##153 100 0 57381 42708 112 i* i 2001#111##155 100 0 57821 6939 112 i* i 2001#111##156 100 0 8758 9002 112 i* i 2001#111##158 100 0 50304 2603 1103 112 i* i 2001#111##159 100 0 22652 112 i* i 2001#111##160 100 0 6881 112 i* i 2001#111##161 100 0 50300 112 i* i 2001#111##163 100 0 1836 112 i*>i2001#5#5##/48 2001#111##151 0 100 0 174 48260 i* i 2001#111##152 100 0 7018 174 48260 i* i 2001#111##153 100 0 57381 42708 1299 174 48260 i* i 2001#111##155 100 0 57821 12586 3257 174 48260 i* i 2001#111##158 100 0 50304 1299 174 48260 i* i 2001#111##159 100 0 22652 174 48260 i* i 2001#111##160 100 0 6881 25512 174 48260 i* i 2001#111##161 100 0 50300 174 48260 i* i 2001#111##163 100 0 1836 174 48260 i*>i2001#200##/32 2001#111##151 0 100 0 174 2914 2500 2500 i* i 2001#111##152 100 0 7018 2914 2500 2500 i* i 2001#111##153 100 0 57381 6939 2914 2500 2500 i* i 2001#111##155 100 0 57821 6939 2914 2500 2500 i* i 2001#111##156 100 0 8758 174 2914 2500 2500 i* i 2001#111##158 100 0 50304 1299 2914 2500 2500 i* i 2001#111##159 100 0 22652 3356 2914 2500 2500 i* i 2001#111##160 100 0 6881 6939 2914 2500 2500 i* i 2001#111##161 100 0 50300 2914 2500 2500 i* i 2001#111##163 100 0 1836 174 2914 2500 2500 i... 
RP/0/RP0/CPU0#NCS5508#sh dpa resources iproute loc 0/2/CPU0~iproute~ DPA Table (Id# 21, Scope# Global)--------------------------------------------------IPv4 Prefix len distributionPrefix Actual Prefix Actual /0 3 /1 0 /2 0 /3 0 /4 3 /5 0 /6 0 /7 0 /8 16 /9 14 /10 37 /11 107 /12 288 /13 557 /14 1071 /15 1909 /16 13546 /17 7986 /18 14000 /19 25861 /20 40214 /21 44565 /22 83070 /23 70129 /24 388562 /25 1522 /26 1338 /27 877 /28 314 /29 510 /30 782 /31 60 /32 1170 NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 NPU-4 NPU-5 In Use# 698511 698511 698511 698511 698511 698511 Create Requests Total# 698546 698546 698546 698546 698546 698546 Success# 698546 698546 698546 698546 698546 698546 Delete Requests Total# 35 35 35 35 35 35 Success# 35 35 35 35 35 35 Update Requests Total# 141991 141991 141991 141991 141991 141991 Success# 141991 141991 141991 141991 141991 141991 EOD Requests Total# 0 0 0 0 0 0 Success# 0 0 0 0 0 0 Errors HW Failures# 0 0 0 0 0 0 Resolve Failures# 0 0 0 0 0 0 No memory in DB# 0 0 0 0 0 0 Not found in DB# 0 0 0 0 0 0 Exists in DB# 0 0 0 0 0 0 RP/0/RP0/CPU0#NCS5508#sh dpa resources ip6route loc 0/2/CPU0 ~ip6route~ DPA Table (Id# 22, Scope# Global)--------------------------------------------------IPv6 Prefix len distributionPrefix Actual Prefix Actual /0 3 /1 0 /2 0 /3 0 /4 0 /5 0 /6 0 /7 0 /8 0 /9 0 /10 3 /11 0 /12 0 /13 0 /14 0 /15 0 /16 10 /17 0 /18 0 /19 2 /20 9 /21 3 /22 4 /23 4 /24 20 /25 6 /26 15 /27 17 /28 79 /29 1899 /30 156 /31 128 /32 9773 /33 640 /34 435 /35 410 /36 1618 /37 284 /38 680 /39 161 /40 2285 /41 212 /42 397 /43 117 /44 2283 /45 180 /46 1465 /47 371 /48 20154 /49 13 /50 12 /51 4 /52 16 /53 0 /54 1 /55 8 /56 11621 /57 16 /58 2 /59 0 /60 3 /61 0 /62 1 /63 0 /64 5177 /65 0 /66 0 /67 0 /68 0 /69 0 /70 0 /71 0 /72 0 /73 0 /74 0 /75 0 /76 0 /77 0 /78 0 /79 0 /80 0 /81 0 /82 0 /83 0 /84 0 /85 0 /86 0 /87 0 /88 0 /89 0 /90 0 /91 0 /92 0 /93 0 /94 0 /95 0 /96 1 /97 0 /98 0 /99 0 /100 0 /101 0 /102 0 /103 0 /104 3 /105 0 /106 0 /107 0 /108 0 /109 0 /110 0 /111 0 /112 0 /113 0 /114 0 /115 4 /116 0 /117 0 /118 0 /119 0 /120 0 /121 0 /122 71 /123 0 /124 15 /125 0 /126 24 /127 18 /128 735 NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 NPU-4 NPU-5 In Use# 61568 61568 61568 61568 61568 61568 Create Requests Total# 61572 61572 61572 61572 61572 61572 Success# 61572 61572 61572 61572 61572 61572 Delete Requests Total# 4 4 4 4 4 4 Success# 4 4 4 4 4 4 Update Requests Total# 2629 2629 2629 2629 2629 2629 Success# 2628 2628 2628 2628 2628 2628 EOD Requests Total# 0 0 0 0 0 0 Success# 0 0 0 0 0 0 Errors HW Failures# 0 0 0 0 0 0 Resolve Failures# 0 0 0 0 0 0 No memory in DB# 0 0 0 0 0 0 Not found in DB# 0 0 0 0 0 0 Exists in DB# 0 0 0 0 0 0 RP/0/RP0/CPU0#NCS5508#sh contr npu resources lem location 0/2/CPU0HW Resource Information Name # lem OOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green NPU-1 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green NPU-2 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green NPU-3 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green NPU-4 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green NPU-5 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green Current Usage NPU-0 Total In-Use # 409888 (52 %) iproute # 389732 (50 %) ip6route # 20154 (3 %) mplslabel # 2 (0 %) NPU-1 Total In-Use # 409888 (52 %) iproute # 389732 (50 %) ip6route # 
20154 (3 %) mplslabel # 2 (0 %) NPU-2 Total In-Use # 409888 (52 %) iproute # 389732 (50 %) ip6route # 20154 (3 %) mplslabel # 2 (0 %) NPU-3 Total In-Use # 409888 (52 %) iproute # 389732 (50 %) ip6route # 20154 (3 %) mplslabel # 2 (0 %) NPU-4 Total In-Use # 409888 (52 %) iproute # 389732 (50 %) ip6route # 20154 (3 %) mplslabel # 2 (0 %) NPU-5 Total In-Use # 409888 (52 %) iproute # 389732 (50 %) ip6route # 20154 (3 %) mplslabel # 2 (0 %) RP/0/RP0/CPU0#NCS5508#sh contr npu resources lpm location 0/2/CPU0HW Resource Information Name # lpm OOR Information NPU-0 Estimated Max Entries # 430200 Red Threshold # 95 Yellow Threshold # 80 OOR State # Yellow OOR State Change Time # 2017.Dec.22 09#30#17 PST NPU-1 Estimated Max Entries # 430200 Red Threshold # 95 Yellow Threshold # 80 OOR State # Yellow OOR State Change Time # 2017.Dec.22 09#30#17 PST NPU-2 Estimated Max Entries # 430200 Red Threshold # 95 Yellow Threshold # 80 OOR State # Yellow OOR State Change Time # 2017.Dec.22 09#30#17 PST NPU-3 Estimated Max Entries # 430200 Red Threshold # 95 Yellow Threshold # 80 OOR State # Yellow OOR State Change Time # 2017.Dec.22 09#30#17 PST NPU-4 Estimated Max Entries # 430200 Red Threshold # 95 Yellow Threshold # 80 OOR State # Yellow OOR State Change Time # 2017.Dec.22 09#30#17 PST NPU-5 Estimated Max Entries # 430200 Red Threshold # 95 Yellow Threshold # 80 OOR State # Yellow OOR State Change Time # 2017.Dec.22 09#30#17 PST Current Usage NPU-0 Total In-Use # 350206 (81 %) iproute # 308779 (72 %) ip6route # 41414 (10 %) ipmcroute # 0 (0 %) NPU-1 Total In-Use # 350206 (81 %) iproute # 308779 (72 %) ip6route # 41414 (10 %) ipmcroute # 0 (0 %) NPU-2 Total In-Use # 350206 (81 %) iproute # 308779 (72 %) ip6route # 41414 (10 %) ipmcroute # 0 (0 %) NPU-3 Total In-Use # 350206 (81 %) iproute # 308779 (72 %) ip6route # 41414 (10 %) ipmcroute # 0 (0 %) NPU-4 Total In-Use # 350206 (81 %) iproute # 308779 (72 %) ip6route # 41414 (10 %) ipmcroute # 0 (0 %) NPU-5 Total In-Use # 350206 (81 %) iproute # 308779 (72 %) ip6route # 41414 (10 %) ipmcroute # 0 (0 %) RP/0/RP0/CPU0#NCS5508#Note# in IOS XR 7.3.1, we will decommission the “internet-optimized” mode, please check this article# https#//xrdocs.io/ncs5500/tutorials/decommissioning-internet-optimized-mode/Configuration to enable the internet-optimized mode on non-eTCAM line cards#RP/0/RP0/CPU0#NCS5508#sh run | i hw-Building configuration...hw-module fib ipv4 scale internet-optimizedRP/0/RP0/CPU0#NCS5508#Configuration to enable streaming telemetry for the counters used for this video#RP/0/RP0/CPU0#NCS5508#sh run telemetry model-driven telemetry model-driven destination-group DGroup1 vrf default address-family ipv4 192.168.100.141 port 5432 encoding self-describing-gpb protocol tcp ! ! sensor-group fib sensor-path Cisco-IOS-XR-fib-common-oper#fib/nodes/node/protocols/protocol/vrfs/vrf/summary ! sensor-group brcm sensor-path Cisco-IOS-XR-fretta-bcm-dpa-hw-resources-oper#dpa/stats/nodes/node/hw-resources-datas/hw-resources-data ! sensor-group routing sensor-path Cisco-IOS-XR-ipv4-bgp-oper#bgp/instances/instance/instance-active/default-vrf/process-info sensor-path Cisco-IOS-XR-ip-rib-ipv4-oper#rib/vrfs/vrf/afs/af/safs/saf/ip-rib-route-table-names/ip-rib-route-table-name/protocol/bgp/as/information sensor-path Cisco-IOS-XR-ip-rib-ipv6-oper#ipv6-rib/vrfs/vrf/afs/af/safs/saf/ip-rib-route-table-names/ip-rib-route-table-name/protocol/bgp/as/information ! subscription fib sensor-group-id fib strict-timer sensor-group-id fib sample-interval 1000 destination-id DGroup1 ! 
subscription brcm sensor-group-id brcm strict-timer sensor-group-id brcm sample-interval 1000 destination-id DGroup1 ! subscription routing sensor-group-id routing strict-timer sensor-group-id routing sample-interval 1000 destination-id DGroup1 !!RP/0/RP0/CPU0#NCS5508#“Wet-finger” Internet Growth ProjectionWe take the data graciously provided by Darren’s (https#//twitter.com/mellowdrifter) twitter pages# https#//twitter.com/bgp4_table https#//twitter.com/bgp6_table Date %/24 %/23 %/22 %/21-/19 %18-16 15/12/2016 56,1 10 11,7 16,8 5,4 18/01/2017 56,2 10 11,7 16,7 5,4 15/02/2017 56,4 10,1 11,7 16,5 5,3 15/03/2017 56,4 10 11,8 16,4 5,4 12/04/2017 56,5 10 11,8 16,3 5,4 17/05/2017 56,6 10 11,8 16,2 5,3 14/06/2017 56,5 10,1 11,9 16,2 5,3 12/07/2017 56,5 10,1 11,9 16,1 5,3 16/08/2017 56,6 10,1 12 16,1 5,3 13/09/2017 56,7 10,1 12,1 15,9 5,2 10/10/2017 56,6 10,1 12,1 15,9 5,2 15/11/2017 56,6 10,1 12,1 15,9 5,2 Which can be converted in numbers of prefixes per prefix-length# Date Total v4 /24 /23 /22 /21-/19 /18-/16 15/12/2016 638707 358315 63871 74729 107303 34491 18/01/2017 643504 361650 64351 75290 107466 34750 15/02/2017 650916 367117 65743 76158 107402 34499 15/03/2017 652062 367763 65207 76944 106939 35212 12/04/2017 658828 372238 65883 77742 107389 35577 17/05/2017 663357 375461 66336 78277 107464 35158 14/06/2017 666765 376723 67344 79346 108016 35339 12/07/2017 669699 378380 67640 79695 107822 35495 16/08/2017 673374 381130 68011 80805 108414 35689 13/09/2017 677348 384057 68413 81960 107699 35223 10/10/2017 679210 384433 68601 82185 107995 35319 15/11/2017 684059 387178 69090 82772 108766 35572 Same approach for IPv6# Date %/48 %/32 %/44 %/40 %/36 %/29 15/12/2016 46,9 24 4,6 4,8 3,9 4,1 18/01/2017 46,5 23,8 4,6 4,8 3,8 4,2 15/02/2017 45,8 23,6 4,6 5,4 3,6 4,2 15/03/2017 45,5 23,3 4,8 5,4 3,6 4,2 12/04/2017 45,8 22,7 4,7 5,3 3,6 4,2 17/05/2017 45,8 22,8 4,7 5,3 3,6 4,2 14/06/2017 46,2 22,7 4,7 5,3 3,5 4,3 12/07/2017 45,9 22,8 4,8 5,2 3,6 4,3 16/08/2017 46,5 22,1 4,8 5,3 3,6 4,2 13/09/2017 46,6 21,9 4,8 5,2 3,6 4,2 18/10/2017 46,5 22,1 5 5,1 3,6 4,3 15/11/2017 46,3 22,1 5,2 5,3 3,7 4,3 Date Total v6 /48 /32 /44 /40 /36 /29 Rest 15/12/2016 35118 16471 8429 1616 1686 1370 1440 4144 18/01/2017 35970 16727 8561 1655 1727 1367 1511 4425 15/02/2017 36801 16855 8686 1693 1988 1325 1546 4674 15/03/2017 37826 17211 8814 1816 2043 1362 1589 5031 12/04/2017 39152 17932 8888 1841 2076 1410 1645 5403 17/05/2017 40147 18388 9154 1887 2128 1446 1687 5460 14/06/2017 40737 18821 9248 1915 2160 1426 1752 5459 12/07/2017 40860 18755 9317 1962 2125 1471 1757 5517 16/08/2017 42911 19954 9484 2060 2275 1545 1803 5879 13/09/2017 43540 20290 9536 2090 2265 1568 1829 5965 18/10/2017 43389 20176 9589 2170 2213 1563 1866 5771 15/11/2017 44025 20384 9730 2290 2334 1629 1894 5812 Which gives us the following graphs#We can extrapolate the route count in LEM and LPM now. 
Year LEM LPM 2017 545742 274488 2018 592543 296045 2019 639344 317602 2020 686145 339159 2021 732946 360717 2022 779747 382274 2023 826548 403832 2024 873349 425389 2025 920150 446947 It’s certainly a very simplistic approach, feel free to provide other sources or your own growth projection in the comments, we will re-do the math with it.", "url": "/tutorials/2017-12-30-full-internet-view-on-base-ncs-5500-systems-s01e04/", "author": "Nicolas Fevrier", "tags": "ncs5500, ncs 5500, demo, video, youtube, lem, lpm, routes, base, prefixes" } , "tutorials-2018-01-25-s01e05-large-routing-tables-on-scale-ncs-5500-systems": { "title": "Large Routing Tables on "Scale" NCS 5500 Systems (S01E05)", "content": " Understanding NCS5500 Resources S01E05 Large Routing Tables on “Scale” NCS 5500 Systems Previously on “Understanding NCS5500 Resources” The demo CLI outputs You can find more content related to NCS5500 including routing in VRF, URPF, ACLs, Netflow following this link.S01E05 Large Routing Tables on “Scale” NCS 5500 SystemsPreviously on “Understanding NCS5500 Resources”In previous posts, we presented# the different routers and line cards in NCS5500 portfolio we explained how IPv4 prefixes are sorted in LEM, LPM and eTCAM we covered how IPv6 prefixes are stored in the same databases. and finally we demonstrated in a video how we can handle a full IPv4 and IPv6 Internet view on base systems and line cards (i.e. without external TCAM, only using the LEM and LPM internal to the forwarding ASIC)Today, we are pushing the limits further. We will take a much larger existing (ie. real) routing table (internet + a very large number of host routes), we will add a projection of the internet table to year 2025 and we will see how it can fit in a Jericho-based system with External TCAMThe demoIn this YouTube video, we will# describe the line cards and systems using Jericho / Qumran-MX forwarding ASICs with external TCAM (identified with the “-SE” at the end of the Product ID) explain the different memories we can use to store the routes and what logic is used to decide where the different prefix types will go run the demo first with a real very large routing table of 1.2M IPv4 and 64k IPv6 routes (the v4 table size comes from a full internet view, a large number of peering routes and 436k host routes) then we will project ourself to 2025 and guesstimate how large the v4 and v6 public table will be we will advertise these extra routes, see how the router absorbs them and how much free space we have left in the different memories https#//www.youtube.com/watch?v=lVC3ppgi7akCLI outputsWe jump directly to the larger use-case# large internet table from 2025 with 436k IPv4 host routes.Such large number of host routes can be caused by DDoS mitigation systems (the /32s being used to divert the traffic targeted to specific victims) or by L3 VMs migration between domains.RP/0/RP0/CPU0#TME-5508-6.2.3#sh route sumRoute Source Routes Backup Deleted Memory(bytes)local 2 0 0 480connected 2 0 0 480bgp 100 1612272 0 0 386945280dagr 0 0 0 0static 0 0 0 0Total 1612276 0 0 386946240RP/0/RP0/CPU0#TME-5508-6.2.3#sh route ipv6 un sumRoute Source Routes Backup Deleted Memory(bytes)local 2 0 0 528connected 2 0 0 528connected l2tpv3_xconnect 0 0 0 0bgp 100 108243 0 0 28576152static 0 1 0 264Total 108247 1 0 28577472RP/0/RP0/CPU0#TME-5508-6.2.3#sh dpa resources iproute loc 0/6/CPU0~iproute~ DPA Table (Id# 18, Scope# Global)--------------------------------------------------IPv4 Prefix len distributionPrefix Actual Capacity Prefix Actual 
Capacity /0 1 16 /1 0 16 /2 0 16 /3 0 16 /4 1 16 /5 0 16 /6 0 16 /7 0 16 /8 16 16 /9 14 16 /10 37 163 /11 107 327 /12 288 654 /13 557 1309 /14 1071 2620 /15 1909 4585 /16 13572 33905 /17 8005 20309 /18 23343 34068 /19 38018 69283 /20 40443 101879 /21 45082 113343 /22 148685 185575 /23 116728 165738 /24 651486 884472 /25 2085 3439 /26 3362 3603 /27 5736 2620 /28 15909 2292 /29 17377 5568 /30 42507 2292 /31 112 163 /32 435847 16 NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 In Use# 1612298 1612298 1612298 1612298 Create Requests Total# 2285630 2285630 2285630 2285630 Success# 2285630 2285630 2285630 2285630 Delete Requests Total# 673332 673332 673332 673332 Success# 673332 673332 673332 673332 Update Requests Total# 2680653 2680653 2680653 2680653 Success# 2680651 2680651 2680651 2680651 EOD Requests Total# 0 0 0 0 Success# 0 0 0 0 Errors HW Failures# 0 0 0 0 Resolve Failures# 0 0 0 0 No memory in DB# 0 0 0 0 Not found in DB# 0 0 0 0 Exists in DB# 0 0 0 0RP/0/RP0/CPU0#TME-5508-6.2.3#sh dpa resources ip6route loc 0/6/CPU0~ip6route~ DPA Table (Id# 19, Scope# Global)--------------------------------------------------IPv6 Prefix len distributionPrefix Actual Prefix Actual /0 1 /1 0 /2 0 /3 0 /4 0 /5 0 /6 0 /7 0 /8 0 /9 0 /10 1 /11 0 /12 0 /13 0 /14 0 /15 0 /16 4 /17 0 /18 0 /19 2 /20 9 /21 3 /22 4 /23 4 /24 20 /25 6 /26 15 /27 17 /28 80 /29 2889 /30 189 /31 132 /32 10091 /33 684 /34 454 /35 423 /36 3738 /37 292 /38 686 /39 165 /40 7578 /41 219 /42 417 /43 129 /44 7899 /45 187 /46 1498 /47 376 /48 52338 /49 13 /50 12 /51 4 /52 16 /53 0 /54 1 /55 8 /56 11622 /57 16 /58 2 /59 0 /60 3 /61 0 /62 1 /63 0 /64 5152 /65 0 /66 0 /67 0 /68 0 /69 0 /70 0 /71 0 /72 0 /73 0 /74 0 /75 0 /76 0 /77 0 /78 0 /79 0 /80 0 /81 0 /82 0 /83 0 /84 0 /85 0 /86 0 /87 0 /88 0 /89 0 /90 0 /91 0 /92 0 /93 0 /94 0 /95 0 /96 1 /97 0 /98 0 /99 0 /100 0 /101 0 /102 0 /103 0 /104 1 /105 0 /106 0 /107 0 /108 0 /109 0 /110 0 /111 0 /112 0 /113 0 /114 0 /115 4 /116 0 /117 0 /118 0 /119 0 /120 0 /121 0 /122 71 /123 0 /124 15 /125 0 /126 24 /127 18 /128 731 NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 In Use# 108265 108265 108265 108265 Create Requests Total# 171646 171646 171646 171646 Success# 171646 171646 171646 171646 Delete Requests Total# 63381 63381 63381 63381 Success# 63381 63381 63381 63381 Update Requests Total# 4 4 4 4 Success# 2 2 2 2 EOD Requests Total# 0 0 0 0 Success# 0 0 0 0 Errors HW Failures# 0 0 0 0 Resolve Failures# 0 0 0 0 No memory in DB# 0 0 0 0 Not found in DB# 0 0 0 0 Exists in DB# 0 0 0 0RP/0/RP0/CPU0#TME-5508-6.2.3#sh contr npu resources lem loc 0/6/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Green NPU-1 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Green NPU-2 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Green NPU-3 Estimated Max Entries # 786432 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 488186 (62 %) iproute # 435847 (55 %) ip6route # 52338 (7 %) mplslabel # 0 (0 %) NPU-1 Total In-Use # 488186 (62 %) iproute # 435847 (55 %) ip6route # 52338 (7 %) mplslabel # 0 (0 %) NPU-2 Total In-Use # 488186 (62 %) iproute # 435847 (55 %) ip6route # 52338 (7 %) mplslabel # 0 (0 %) NPU-3 Total In-Use # 488186 (62 %) iproute # 435847 (55 %) ip6route # 52338 (7 %) mplslabel # 0 (0 %)RP/0/RP0/CPU0#TME-5508-6.2.3#sh contr npu resources lpm loc 0/6/CPU0HW Resource Information Name # lpmOOR Information NPU-0 
Estimated Max Entries # 486043 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Green NPU-1 Estimated Max Entries # 486043 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Green NPU-2 Estimated Max Entries # 486043 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Green NPU-3 Estimated Max Entries # 486043 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 55950 (12 %) iproute # 0 (0 %) ip6route # 55927 (12 %) ipmcroute # 0 (0 %) NPU-1 Total In-Use # 55950 (12 %) iproute # 0 (0 %) ip6route # 55927 (12 %) ipmcroute # 0 (0 %) NPU-2 Total In-Use # 55950 (12 %) iproute # 0 (0 %) ip6route # 55927 (12 %) ipmcroute # 0 (0 %) NPU-3 Total In-Use # 55950 (12 %) iproute # 0 (0 %) ip6route # 55927 (12 %) ipmcroute # 0 (0 %)RP/0/RP0/CPU0#TME-5508-6.2.3#sh contr npu resources exttcamipv4 loc 0/6/CPU0HW Resource Information Name # ext_tcam_ipv4OOR Information NPU-0 Estimated Max Entries # 1638400 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Green NPU-1 Estimated Max Entries # 1638400 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Green NPU-2 Estimated Max Entries # 1638400 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # Green NPU-3 Estimated Max Entries # 1638400 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage NPU-0 Total In-Use # 1176451 (72 %) iproute # 1176451 (72 %) ipmcroute # 0 (0 %) NPU-1 Total In-Use # 1176451 (72 %) iproute # 1176451 (72 %) ipmcroute # 0 (0 %) NPU-2 Total In-Use # 1176451 (72 %) iproute # 1176451 (72 %) ipmcroute # 0 (0 %) NPU-3 Total In-Use # 1176451 (72 %) iproute # 1176451 (72 %) ipmcroute # 0 (0 %)RP/0/RP0/CPU0#TME-5508-6.2.3#The 2025 internet estimation is described in the previous post. As mentioned in this post and in the video, the method is certainly a matter of debate. Let’s take it for what it is# an estimation.In these use-cases with large public routing table and extreme amount of host routes, we are far from reaching the limits of the systems based on Jericho ASICs with External TCAMs.We are using 62% of LEM, 12% of LPM and 72% of eTCAM.All these counters can be streamed with telemetry. Example of visualization with Grafana#Important to understand that default carving in IOS XR 6.2.3 is allocating 20% for hybrid ACLs. 
This default behavior will change in releases 6.3.x onwards where we will allocate 100% of the space to IPv4 prefixes and it will be only when configuring hybrid ACLs that we will re-carve to allocation eTCAM space.We can verify the current carving status with the following#RP/0/RP0/CPU0#TME-5508-6.2.3#sh contr npu externaltcam loc 0/6/CPU0External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 80b FLP 461952 1176448 15 IPV4 DC0 1 80b FLP 28672 0 76 INGRESS_IPV4_SRC_IP_EXT0 2 80b FLP 28672 0 77 INGRESS_IPV4_DST_IP_EXT0 3 160b FLP 26624 0 78 INGRESS_IPV6_SRC_IP_EXT0 4 160b FLP 26624 0 79 INGRESS_IPV6_DST_IP_EXT0 5 80b FLP 28672 0 80 INGRESS_IP_SRC_PORT_EXT0 6 80b FLP 28672 0 81 INGRESS_IPV6_SRC_PORT_EXT1 0 80b FLP 461952 1176448 15 IPV4 DC1 1 80b FLP 28672 0 76 INGRESS_IPV4_SRC_IP_EXT1 2 80b FLP 28672 0 77 INGRESS_IPV4_DST_IP_EXT1 3 160b FLP 26624 0 78 INGRESS_IPV6_SRC_IP_EXT1 4 160b FLP 26624 0 79 INGRESS_IPV6_DST_IP_EXT1 5 80b FLP 28672 0 80 INGRESS_IP_SRC_PORT_EXT1 6 80b FLP 28672 0 81 INGRESS_IPV6_SRC_PORT_EXT2 0 80b FLP 461952 1176448 15 IPV4 DC2 1 80b FLP 28672 0 76 INGRESS_IPV4_SRC_IP_EXT2 2 80b FLP 28672 0 77 INGRESS_IPV4_DST_IP_EXT2 3 160b FLP 26624 0 78 INGRESS_IPV6_SRC_IP_EXT2 4 160b FLP 26624 0 79 INGRESS_IPV6_DST_IP_EXT2 5 80b FLP 28672 0 80 INGRESS_IP_SRC_PORT_EXT2 6 80b FLP 28672 0 81 INGRESS_IPV6_SRC_PORT_EXT3 0 80b FLP 461952 1176448 15 IPV4 DC3 1 80b FLP 28672 0 76 INGRESS_IPV4_SRC_IP_EXT3 2 80b FLP 28672 0 77 INGRESS_IPV4_DST_IP_EXT3 3 160b FLP 26624 0 78 INGRESS_IPV6_SRC_IP_EXT3 4 160b FLP 26624 0 79 INGRESS_IPV6_DST_IP_EXT3 5 80b FLP 28672 0 80 INGRESS_IP_SRC_PORT_EXT3 6 80b FLP 28672 0 81 INGRESS_IPV6_SRC_PORT_EXTRP/0/RP0/CPU0#TME-5508-6.2.3#We have still 461952 routes left in the external TCAM.In a follow up post, we will detail the mechanisms available to mix base and scale line cards in the same chassis, and how your network needs to be designed for such requirements.", "url": "/tutorials/2018-01-25-s01e05-large-routing-tables-on-scale-ncs-5500-systems/", "author": "Nicolas Fevrier", "tags": "ncs 5500, ncs5500, demo, video, youtube, large scale, routing, lem, lpm, etcam, internet" } , "tutorials-port-assignments-on-ncs5500-platforms": { "title": "Port Assignments on NCS5500 and NCS5700 Platforms", "content": " Port Assignments on NCS5500 and NCS5700 Introduction Port allocation NCS5501(-SE) (Base and Scale version) NCS5502(-SE) (Base and Scale version) NCS55A1-24H NCS55A1-36H(-SE) NCS55A2-MOD(-SE)-S NCS-55A1-48Q6H NCS-55A1-24Q6H-S NCS-55A1-24Q6H-SS NCS57B1-6D24H NCS57B1-5DSE NCS57C3-MOD(-SE)-S NCS55-36X100G and NC55-36X100G-S NCS55-24X100G-SE NCS55-18H18F NCS55-24H12F-SE NCS55-36X100G-A-SE NC55-MOD-A-S NC57-24DD NC57-18DD-SE NC57-36H-SE NC57-36H6D-S You can find more content related to NCS5500 including routing memory management, VRF, URPF, ACLs, Netflow following this link.Authors# Nicolas Fevrier & Tejas LadIntroductionThis short post will help understanding how the ports are allocated to NPU for each line card and systems.It will be useful for future post(s) and particularly on topics like Netflow/IPFIX.For example, it’s important to understand the way our ports are distributed among forwarding ASICs when considering the amount of sample traffic from an NPU to the LC CPU is shaped 133Mbps or 200Mbps.Let’s review platform by platform and line card by line card, how we do this allocation.The 
following CLI is used to identify the port assignment, looking at the “local” VOQ port type.RP/0/RP0/CPU0#Router#show contr npu voq-usage interface all instance 1 location 0/7/CPU0-------------------------------------------------------------------Node ID# 0/7/CPU0Intf Intf NPU NPU PP Sys VOQ Flow VOQ Portname handle # core Port Port base base port speed (hex) type (Gbps)----------------------------------------------------------------------Hu0/7/0/9 3800278 1 0 9 1833 12104 25928 local 100Hu0/7/0/8 3800280 1 1 17 1841 12168 25992 local 100Hu0/7/0/7 3800288 1 1 13 1837 12136 25960 local 100Hu0/7/0/6 3800290 1 1 21 1845 12200 26024 local 100Hu0/7/0/10 38001a8 1 0 5 1829 12072 25896 local 100Hu0/7/0/11 38001b0 1 0 1 1825 12040 25864 local 100...RP/0/RP0/CPU0#Router#Port allocationNCS5501(-SE) (Base and Scale version)NCS5501 and NCS5501-SE are using a single Qumran-MX ASIC and all the SFP ports are connected to core 0 while all QSFP ports are connected to core 1. NCS5502(-SE) (Base and Scale version)NCS5502s are made of 8 Jericho ASICs interconnected with 2x fabric engine (FE3600) Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core Hu0/0/0/0 0 / 1 Hu0/0/0/12 2 / 1 Hu0/0/0/24 4 / 1 Hu0/0/0/36 6 / 1 Hu0/0/0/1 0 / 1 Hu0/0/0/13 2 / 1 Hu0/0/0/25 4 / 1 Hu0/0/0/37 6 / 1 Hu0/0/0/2 0 / 1 Hu0/0/0/14 2 / 0 Hu0/0/0/26 4 / 1 Hu0/0/0/38 6 / 1 Hu0/0/0/3 0 / 0 Hu0/0/0/15 2 / 0 Hu0/0/0/27 4 / 0 Hu0/0/0/39 6 / 0 Hu0/0/0/4 0 / 0 Hu0/0/0/16 2 / 0 Hu0/0/0/28 4 / 0 Hu0/0/0/40 6 / 0 Hu0/0/0/5 0 / 0 Hu0/0/0/17 2 / 0 Hu0/0/0/29 4 / 0 Hu0/0/0/41 6 / 0 Hu0/0/0/6 1 / 1 Hu0/0/0/18 3 / 1 Hu0/0/0/30 5 / 1 Hu0/0/0/42 7 / 1 Hu0/0/0/7 1 / 1 Hu0/0/0/19 3 / 1 Hu0/0/0/31 5 / 1 Hu0/0/0/43 7 / 1 Hu0/0/0/8 1 / 1 Hu0/0/0/20 3 / 1 Hu0/0/0/32 5 / 1 Hu0/0/0/44 7 / 1 Hu0/0/0/9 1 / 0 Hu0/0/0/21 3 / 0 Hu0/0/0/33 5 / 0 Hu0/0/0/45 7 / 0 Hu0/0/0/10 1 / 0 Hu0/0/0/22 3 / 0 Hu0/0/0/34 5 / 0 Hu0/0/0/46 7 / 0 Hu0/0/0/11 1 / 0 Hu0/0/0/23 3 / 0 Hu0/0/0/35 5 / 0 Hu0/0/0/47 7 / 0 NCS55A1-24HNCS55A1-24H is made of two Jericho+ connected back-to-back (no fabric engine) Interface NPU/Core Interface NPU/Core Interface NPU/Core Hu0/0/0/0 0 / 1 Hu0/0/0/9 0 / 0 Hu0/0/0/18 1 / 1 Hu0/0/0/1 0 / 0 Hu0/0/0/10 0 / 1 Hu0/0/0/19 1 / 0 Hu0/0/0/2 0 / 1 Hu0/0/0/11 0 / 0 Hu0/0/0/20 1 / 1 Hu0/0/0/3 0 / 0 Hu0/0/0/12 1 / 1 Hu0/0/0/21 1 / 0 Hu0/0/0/4 0 / 1 Hu0/0/0/13 1 / 0 Hu0/0/0/22 1 / 1 Hu0/0/0/5 0 / 0 Hu0/0/0/14 1 / 1 Hu0/0/0/23 1 / 0 Hu0/0/0/6 0 / 1 Hu0/0/0/15 1 / 0     Hu0/0/0/7 0 / 0 Hu0/0/0/16 1 / 1     Hu0/0/0/8 0 / 1 Hu0/0/0/17 1 / 0     NCS55A1-36H(-SE)NCS55A1-36Hs are made of 4 Jericho+ ASICs interconnected through a FE3600 ASIC. Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core Hu0/0/0/1 0 / 0 Hu0/0/0/9 1 / 0 Hu0/0/0/18 2 / 0 Hu0/0/0/27 3 / 0 Hu0/0/0/1 0 / 0 Hu0/0/0/10 1 / 0 Hu0/0/0/19 2 / 0 Hu0/0/0/28 3 / 0 Hu0/0/0/2 0 / 0 Hu0/0/0/11 1 / 0 Hu0/0/0/20 2 / 0 Hu0/0/0/29 3 / 0 Hu0/0/0/3 0 / 0 Hu0/0/0/12 1 / 0 Hu0/0/0/21 2 / 0 Hu0/0/0/30 3 / 0 Hu0/0/0/4 0 / 1 Hu0/0/0/13 1 / 1 Hu0/0/0/22 2 / 1 Hu0/0/0/31 3 / 1 Hu0/0/0/5 0 / 1 Hu0/0/0/14 1 / 1 Hu0/0/0/23 2 / 1 Hu0/0/0/32 3 / 1 Hu0/0/0/6 0 / 1 Hu0/0/0/15 1 / 1 Hu0/0/0/24 2 / 1 Hu0/0/0/33 3 / 1 Hu0/0/0/7 0 / 1 Hu0/0/0/16 1 / 1 Hu0/0/0/25 2 / 1 Hu0/0/0/34 3 / 1 Hu0/0/0/8 0 / 1 Hu0/0/0/17 1 / 1 Hu0/0/0/26 2 / 1 Hu0/0/0/35 3 / 1 NCS55A2-MOD(-SE)-S2RU chassis made of a single Jericho+ ASIC. 
Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core Te0/x/0/0 0 / 0 Te0/x/0/14 0 / 1 TF0/x/0/28 0 / 0 Hu0/x/1/2/0 0 / 1 Te0/x/0/1 0 / 0 Te0/x/0/15 0 / 1 TF0/x/0/29 0 / 0 Te0/x/2/0 0 / 1 Te0/x/0/2 0 / 0 Te0/x/0/16 0 / 0 TF0/x/0/30 0 / 0 Te0/x/2/1 0 / 0 Te0/x/0/3 0 / 0 Te0/x/0/17 0 / 0 TF0/x/0/31 0 / 0 Te0/x/2/2 0 / 1 Te0/x/0/4 0 / 0 Te0/x/0/18 0 / 0 TF0/x/0/32 0 / 1 Te0/x/2/3 0 / 0 Te0/x/0/5 0 / 0 Te0/x/0/19 0 / 0 TF0/x/0/33 0 / 1 Te0/x/2/4 0 / 1 Te0/x/0/6 0 / 0 Te0/x/0/20 0 / 1 TF0/x/0/34 0 / 1 Te0/x/2/5 0 / 0 Te0/x/0/7 0 / 0 Te0/x/0/21 0 / 1 TF0/x/0/35 0 / 1 Te0/x/2/6 0 / 1 Te0/x/0/8 0 / 1 Te0/x/0/22 0 / 1 TF0/x/0/36 0 / 0 Te0/x/2/7 0 / 0 Te0/x/0/9 0 / 1 Te0/x/0/23 0 / 1 TF0/x/0/37 0 / 0 Te0/x/2/8 0 / 0 Te0/x/0/10 0 / 1 TF0/x/0/24 0 / 1 TF0/x/0/38 0 / 0 Te0/x/2/9 0 / 1 Te0/x/0/11 0 / 1 TF0/x/0/25 0 / 1 TF0/x/0/39 0 / 0 Te0/x/2/10 0 / 0 Te0/x/0/12 0 / 1 TF0/x/0/26 0 / 1 Hu0/x/1/0 0 / 0 Te0/x/2/11 0 / 1 Te0/x/0/13 0 / 1 TF0/x/0/27 0 / 1 Hu0/x/1/1 0 / 1 - - NCS-55A1-48Q6HThis is 1RU box with 2xJ+ ASICs. It is available in only base version powered with Large LPM. It is also capable of MACSEC and Timing. Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core TF0/0/0/0 0 / 1 TF0/0/0/14 0 / 0 TF0/0/0/28 1 / 0 TF0/0/0/42 1 / 0 TF0/0/0/1 0 / 1 TF0/0/0/15 0 / 0 TF0/0/0/29 1 / 0 TF0/0/0/43 1 / 0 TF0/0/0/2 0 / 1 TF0/0/0/16 0 / 0 TF0/0/0/30 1 / 0 TF0/0/0/44 1 / 0 TF0/0/0/3 0 / 1 TF0/0/0/17 0 / 0 TF0/0/0/31 1 / 0 TF0/0/0/45 1 / 0 TF0/0/0/4 0 / 0 TF0/0/0/18 0 / 0 TF0/0/0/32 1 / 0 TF0/0/0/46 1 / 1 TF0/0/0/5 0 / 0 TF0/0/0/19 0 / 0 TF0/0/0/33 1 / 0 TF0/0/0/47 1 / 1 TF0/0/0/6 0 / 0 TF0/0/0/20 0 / 0 TF0/0/0/34 1 / 0 Hu0/0/1/0 0 / 1 TF0/0/0/7 0 / 0 TF0/0/0/21 0 / 0 TF0/0/0/35 1 / 0 Hu0/0/1/1 0 / 1 TF0/0/0/8 0 / 0 TF0/0/0/22 0 / 1 TF0/0/0/36 1 / 0 Hu0/0/1/2 0 / 1 TF0/0/0/9 0 / 0 TF0/0/0/23 0 / 1 TF0/0/0/37 1 / 0 Hu0/0/1/3 1 / 1 TF0/0/0/10 0 / 0 TF0/0/0/24 1 / 1 TF0/0/0/38 1 / 0 Hu0/0/1/4 1 / 1 TF0/0/0/11 0 / 0 TF0/0/0/25 1 / 1 TF0/0/0/39 1 / 0 Hu0/0/1/5 1 / 1 TF0/0/0/12 0 / 0 TF0/0/0/26 1 / 1 TF0/0/0/40 1 / 0     TF0/0/0/13 0 / 0 TF0/0/0/27 1 / 1 TF0/0/0/41 1 / 0     NCS-55A1-24Q6H-SSystem-on-chip with one Jericho+. Capable of Class B timing and MACSEC on only 100G ports and 16 out of the 24x SFP28. Oversubscribed by 1.44 Tbps Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core Te0/0/0/0 0 / 0 Te0/0/0/14 0 / 1 TF0/0/0/28 0 / 0 TF0/0/0/42 0 / 1 Te0/0/0/1 0 / 0 Te0/0/0/15 0 / 1 TF0/0/0/29 0 / 0 TF0/0/0/43 0 / 1 Te0/0/0/2 0 / 0 Te0/0/0/16 0 / 0 TF0/0/0/30 0 / 0 TF0/0/0/44 0 / 0 Te0/0/0/3 0 / 0 Te0/0/0/17 0 / 0 TF0/0/0/31 0 / 0 TF0/0/0/45 0 / 0 Te0/0/0/4 0 / 0 Te0/0/0/18 0 / 0 TF0/0/0/32 0 / 1 TF0/0/0/46 0 / 0 Te0/0/0/5 0 / 0 Te0/0/0/19 0 / 0 TF0/0/0/33 0 / 1 TF0/0/0/47 0 / 0 Te0/0/0/6 0 / 0 Te0/0/0/20 0 / 1 TF0/0/0/34 0 / 1 Hu0/0/1/0 0 / 1 Te0/0/0/7 0 / 0 Te0/0/0/21 0 / 1 TF0/0/0/35 0 / 1 Hu0/0/1/1 0 / 0 Te0/0/0/8 0 / 1 Te0/0/0/22 0 / 1 TF0/0/0/36 0 / 0 Hu0/0/1/2 0 / 1 Te0/0/0/9 0 / 1 Te0/0/0/23 0 / 1 TF0/0/0/37 0 / 0 Hu0/0/1/3 0 / 0 Te0/0/0/10 0 / 1 TF0/0/0/24 0 / 1 TF0/0/0/38 0 / 0 Hu0/0/1/4 0 / 1 Te0/0/0/11 0 / 1 TF0/0/0/25 0 / 1 TF0/0/0/39 0 / 0 Hu0/0/1/5 0 / 0 Te0/0/0/12 0 / 1 TF0/0/0/26 0 / 1 TF0/0/0/40 0 / 1     Te0/0/0/13 0 / 1 TF0/0/0/27 0 / 1 TF0/0/0/41 0 / 1     NCS-55A1-24Q6H-SSSystem-on-chip with one Jericho+. Capable of Class B timing and MACSEC on all ports. It is powered by Large LPM. 
Oversubscribed by 1.44 Tbps Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core Te0/0/0/0 0 / 0 Te0/0/0/14 0 / 1 TF0/0/0/28 0 / 0 TF0/0/0/42 0 / 1 Te0/0/0/1 0 / 0 Te0/0/0/15 0 / 1 TF0/0/0/29 0 / 0 TF0/0/0/43 0 / 1 Te0/0/0/2 0 / 0 Te0/0/0/16 0 / 0 TF0/0/0/30 0 / 0 TF0/0/0/44 0 / 0 Te0/0/0/3 0 / 0 Te0/0/0/17 0 / 0 TF0/0/0/31 0 / 0 TF0/0/0/45 0 / 0 Te0/0/0/4 0 / 0 Te0/0/0/18 0 / 0 TF0/0/0/32 0 / 1 TF0/0/0/46 0 / 0 Te0/0/0/5 0 / 0 Te0/0/0/19 0 / 0 TF0/0/0/33 0 / 1 TF0/0/0/47 0 / 0 Te0/0/0/6 0 / 0 Te0/0/0/20 0 / 1 TF0/0/0/34 0 / 1 Hu0/0/1/0 0 / 1 Te0/0/0/7 0 / 0 Te0/0/0/21 0 / 1 TF0/0/0/35 0 / 1 Hu0/0/1/1 0 / 0 Te0/0/0/8 0 / 1 Te0/0/0/22 0 / 1 TF0/0/0/36 0 / 0 Hu0/0/1/2 0 / 1 Te0/0/0/9 0 / 1 Te0/0/0/23 0 / 1 TF0/0/0/37 0 / 0 Hu0/0/1/3 0 / 0 Te0/0/0/10 0 / 1 TF0/0/0/24 0 / 1 TF0/0/0/38 0 / 0 Hu0/0/1/4 0 / 1 Te0/0/0/11 0 / 1 TF0/0/0/25 0 / 1 TF0/0/0/39 0 / 0 Hu0/0/1/5 0 / 0 Te0/0/0/12 0 / 1 TF0/0/0/26 0 / 1 TF0/0/0/40 0 / 1     Te0/0/0/13 0 / 1 TF0/0/0/27 0 / 1 TF0/0/0/41 0 / 1     NCS57B1-6D24HFirst fixed platform based on J2 ASIC. MACSEC capable on all 100G and 400G ports. Class C timing ready platform. Base version with support of ZR/ZR+ optics. Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core Hu0/0/0/0 0 / 0 Hu0/0/0/8 0 / 0 Hu0/0/0/16 0 / 0 FH0/0/0/24 0 / 1 Hu0/0/0/1 0 / 0 Hu0/0/0/9 0 / 0 Hu0/0/0/17 0 / 0 FH0/0/0/25 0 / 1 Hu0/0/0/2 0 / 0 Hu0/0/0/10 0 / 0 Hu0/0/0/18 0 / 0 FH0/0/0/26 0 / 1 Hu0/0/0/3 0 / 0 Hu0/0/0/11 0 / 0 Hu0/0/0/19 0 / 0 FH0/0/0/27 0 / 1 Hu0/0/0/4 0 / 0 Hu0/0/0/12 0 / 0 Hu0/0/0/20 0 / 1 FH0/0/0/28 0 / 1 Hu0/0/0/5 0 / 0 Hu0/0/0/13 0 / 0 Hu0/0/0/21 0 / 1 FH0/0/0/29 0 / 0 Hu0/0/0/6 0 / 0 Hu0/0/0/14 0 / 0 Hu0/0/0/22 0 / 1     Hu0/0/0/7 0 / 0 Hu0/0/0/15 0 / 0 Hu0/0/0/23 0 / 1     NCS57B1-5DSEFixed Platform Scaled version with J2 ASIC Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core Hu0/0/0/0 0 / 0 Hu0/0/0/8 0 / 0 Hu0/0/0/16 0 / 0 FH0/0/0/24 0 / 1 Hu0/0/0/1 0 / 0 Hu0/0/0/9 0 / 0 Hu0/0/0/17 0 / 0 FH0/0/0/25 0 / 1 Hu0/0/0/2 0 / 0 Hu0/0/0/10 0 / 0 Hu0/0/0/18 0 / 0 FH0/0/0/26 0 / 1 Hu0/0/0/3 0 / 0 Hu0/0/0/11 0 / 0 Hu0/0/0/19 0 / 0 FH0/0/0/27 0 / 1 Hu0/0/0/4 0 / 0 Hu0/0/0/12 0 / 0 Hu0/0/0/20 0 / 1 FH0/0/0/28 0 / 1 Hu0/0/0/5 0 / 0 Hu0/0/0/13 0 / 0 Hu0/0/0/21 0 / 1     Hu0/0/0/6 0 / 0 Hu0/0/0/14 0 / 0 Hu0/0/0/22 0 / 1     Hu0/0/0/7 0 / 0 Hu0/0/0/15 0 / 0 Hu0/0/0/23 0 / 1     Note# Core0# 20x 100G and Core1# 4x100G + 5x400GKeep it in mind for the snake testsNCS57C3-MOD(-SE)-SFirst fixed platform based on a single J2C ASIC but offering a lot of flexibility via its Modular Port Adapters.Jericho 2C is made of a single core, the chart becomes very simple ;) Interface NPU/Core All 0 / 0 NCS55-36X100G and NC55-36X100G-SIn these cards we have 6 Jericho ASICs. Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core Hu0/x/0/0 0 / 1 Hu0/x/0/9 1 / 0 Hu0/x/0/18 3 / 1 Hu0/x/0/27 4 / 0 Hu0/x/0/1 0 / 1 Hu0/x/0/10 1 / 0 Hu0/x/0/19 3 / 1 Hu0/x/0/28 4 / 0 Hu0/x/0/2 0 / 1 Hu0/x/0/11 1 / 0 Hu0/x/0/20 3 / 1 Hu0/x/0/29 4 / 0 Hu0/x/0/3 0 / 0 Hu0/x/0/12 2 / 1 Hu0/x/0/21 3 / 0 Hu0/x/0/30 5 / 1 Hu0/x/0/4 0 / 0 Hu0/x/0/13 2 / 1 Hu0/x/0/22 3 / 0 Hu0/x/0/31 5 / 1 Hu0/x/0/5 0 / 0 Hu0/x/0/14 2 / 1 Hu0/x/0/23 3 / 0 Hu0/x/0/32 5 / 1 Hu0/x/0/6 1 / 1 Hu0/x/0/15 2 / 0 Hu0/x/0/24 4 / 1 Hu0/x/0/33 5 / 0 Hu0/x/0/7 1 / 1 Hu0/x/0/16 2 / 0 Hu0/x/0/25 4 / 1 Hu0/x/0/34 5 / 0 Hu0/x/0/8 1 / 1 Hu0/x/0/17 2 / 0 Hu0/x/0/26 4 / 1 Hu0/x/0/35 5 / 0 NCS55-24X100G-SEThe scale 24x100G are made of 4 Jericho ASICs. 
Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core Hu0/x/0/0 0 / 1 Hu0/x/0/6 1 / 1 Hu0/x/0/12 2 / 1 Hu0/x/0/18 3 / 1 Hu0/x/0/1 0 / 1 Hu0/x/0/7 1 / 1 Hu0/x/0/13 2 / 1 Hu0/x/0/19 3 / 1 Hu0/x/0/2 0 / 1 Hu0/x/0/8 1 / 1 Hu0/x/0/14 2 / 1 Hu0/x/0/20 3 / 1 Hu0/x/0/3 0 / 0 Hu0/x/0/9 1 / 0 Hu0/x/0/15 2 / 0 Hu0/x/0/21 3 / 0 Hu0/x/0/4 0 / 0 Hu0/x/0/10 1 / 0 Hu0/x/0/16 2 / 0 Hu0/x/0/22 3 / 0 Hu0/x/0/5 0 / 0 Hu0/x/0/11 1 / 0 Hu0/x/0/17 2 / 0 Hu0/x/0/23 3 / 0 NCS55-18H18FBy default, the base combo card offers 36 ports 40G, and it’s possible to upgrade half of them to 100G.This line card is made of 3 Jericho ASICs. Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core Fo0/x/0/0 0 / 0 Hu0/x/0/9 0 / 0 Hu0/x/0/18 1 / 1 Fo0/x/0/27 2 / 1 Fo0/x/0/1 0 / 0 Hu0/x/0/10 0 / 0 Hu0/x/0/19 1 / 1 Fo0/x/0/28 2 / 0 Fo0/x/0/2 0 / 1 Hu0/x/0/11 0 / 0 Hu0/x/0/20 1 / 1 Fo0/x/0/29 2 / 1 Fo0/x/0/3 0 / 1 Fo0/x/0/12 1 / 0 Hu0/x/0/21 1 / 0 Hu0/x/0/30 2 / 1 Fo0/x/0/4 0 / 0 Fo0/x/0/13 1 / 0 Hu0/x/0/22 1 / 0 Hu0/x/0/31 2 / 1 Fo0/x/0/5 0 / 1 Fo0/x/0/14 1 / 1 Hu0/x/0/23 1 / 0 Hu0/x/0/32 2 / 1 Hu0/x/0/6 0 / 1 Fo0/x/0/15 1 / 1 Fo0/x/0/24 2 / 0 Hu0/x/0/33 2 / 0 Hu0/x/0/7 0 / 1 Fo0/x/0/16 1 / 0 Fo0/x/0/25 2 / 0 Hu0/x/0/34 2 / 0 Hu0/x/0/8 0 / 1 Fo0/x/0/17 1 / 1 Fo0/x/0/26 2 / 1 Hu0/x/0/35 2 / 0 NCS55-24H12F-SEBy default, the scale combo card offers 36 ports 40G, and it’s possible to upgrade two third of them to 100G.This line card is made of 4 Jericho ASICs with eTCAM. Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core Fo0/x/0/0 0 / 0 Fo0/x/0/9 1 / 1 Fo0/x/0/18 2 / 0 Fo0/x/0/27 3 / 0 Fo0/x/0/1 0 / 0 Fo0/x/0/10 1 / 0 Fo0/x/0/19 2 / 0 Fo0/x/0/28 3 / 0 Hu0/x/0/2 0 / 1 Fo0/x/0/11 1 / 0 Hu0/x/0/20 2 / 1 Fo0/x/0/29 3 / 1 Hu0/x/0/3 0 / 1 Hu0/x/0/12 1 / 1 Hu0/x/0/21 2 / 1 Hu0/x/0/30 3 / 1 Hu0/x/0/4 0 / 1 Hu0/x/0/13 1 / 1 Hu0/x/0/22 2 / 1 Hu0/x/0/31 3 / 1 Hu0/x/0/5 0 / 0 Hu0/x/0/14 1 / 1 Hu0/x/0/23 2 / 0 Hu0/x/0/32 3 / 1 Hu0/x/0/6 0 / 0 Hu0/x/0/15 1 / 0 Hu0/x/0/24 2 / 0 Hu0/x/0/33 3 / 0 Hu0/x/0/7 0 / 0 Hu0/x/0/16 1 / 0 Hu0/x/0/25 2 / 0 Hu0/x/0/34 3 / 0 Fo0/x/0/8 0 / 1 Hu0/x/0/17 1 / 0 Fo0/x/0/26 2 / 1 Hu0/x/0/35 3 / 0 NCS55-36X100G-A-SEFinally, this line card is using 4 Jericho+ with new generation eTCAM. Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core Hu0/x/0/0 0 / 1 Hu0/x/0/9 1 / 1 Hu0/x/0/18 2 / 1 Hu0/x/0/27 3 / 1 Hu0/x/0/1 0 / 1 Hu0/x/0/10 1 / 1 Hu0/x/0/19 2 / 1 Hu0/x/0/28 3 / 1 Hu0/x/0/2 0 / 1 Hu0/x/0/11 1 / 1 Hu0/x/0/20 2 / 1 Hu0/x/0/29 3 / 1 Hu0/x/0/3 0 / 0 Hu0/x/0/12 1 / 0 Hu0/x/0/21 2 / 0 Hu0/x/0/30 3 / 0 Hu0/x/0/4 0 / 0 Hu0/x/0/13 1 / 0 Hu0/x/0/22 2 / 0 Hu0/x/0/31 3 / 0 Hu0/x/0/5 0 / 0 Hu0/x/0/14 1 / 0 Hu0/x/0/23 2 / 0 Hu0/x/0/32 3 / 0 Hu0/x/0/6 0 / 0 Hu0/x/0/15 1 / 0 Hu0/x/0/24 2 / 0 Hu0/x/0/33 3 / 0 Hu0/x/0/7 0 / 1 Hu0/x/0/16 1 / 1 Hu0/x/0/25 2 / 1 Hu0/x/0/34 3 / 1 Hu0/x/0/8 0 / 0 Hu0/x/0/17 1 / 0 Hu0/x/0/26 2 / 0 Hu0/x/0/35 3 / 0 NC55-MOD-A-SThe modular line card is offering fixed ports but also 2 bays for MPAs, powered by a single Jericho+ ASIC. Interface NPU/Core Interface NPU/Core Te0/x/0/0 0 / 0 Te0/x/0/7 0 / 1 Te0/x/0/1 0 / 0 Te0/x/0/8 0 / 0 Te0/x/0/2 0 / 0 Te0/x/0/9 0 / 0 Te0/x/0/3 0 / 0 Te0/x/0/10 0 / 0 Te0/x/0/4 0 / 1 Te0/x/0/11 0 / 0 Te0/x/0/5 0 / 1 Fo0/x/0/12 0 / 1 Te0/x/0/6 0 / 1 Fo0/x/0/13 0 / 1 MPA 4x100# Interface NPU/Core Hu0/x/y/0 0 / 0 Hu0/x/y/1 0 / 1 Hu0/x/y/2 0 / 0 Hu0/x/y/3 0 / 1 NC57-24DDHigh Density 400G Line card based on 2xJ2 chipset. 
Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core FH0/x/0/0 0 / 0 FH0/x/0/6 0 / 1 FH0/x/0/12 1 / 0 FH0/x/0/18 1 / 1 FH0/x/0/1 0 / 0 FH0/x/0/7 0 / 1 FH0/x/0/13 1 / 0 FH0/x/0/19 1 / 1 FH0/x/0/2 0 / 0 FH0/x/0/8 0 / 1 FH0/x/0/14 1 / 0 FH0/x/0/29 1 / 1 FH0/x/0/3 0 / 0 FH0/x/0/9 0 / 1 FH0/x/0/15 1 / 0 FH0/x/0/21 1 / 1 FH0/x/0/4 0 / 0 FH0/x/0/10 0 / 1 FH0/x/0/16 1 / 0 FH0/x/0/22 1 / 1 FH0/x/0/5 0 / 0 FH0/x/0/11 0 / 1 FH0/x/0/17 1 / 0 FH0/x/0/23 1 / 1 NC57-18DD-SEHigh Density Line card with combination of 100G and 400G native ports. It is based on 2xJ2 chipset. Allows Flexible combination of 400G/100G/200G/40G optics and breakout options. Interface NPU/Core Interface NPU/Core Interface NPU/Core Hu0/x/0/0 0 / 0 Hu0/x/0/10 0 / 1 FH0/x/0/20 1 / 0 Hu0/x/0/1 0 / 0 Hu0/x/0/11 0 / 1 FH0/x/0/21 1 / 1 Hu0/x/0/2 0 / 0 Hu0/x/0/12 0 / 1 FH0/x/0/22 1 / 1 Hu0/x/0/3 0 / 0 Hu0/x/0/13 0 / 1 FH0/x/0/23 1 / 1 Hu0/x/0/4 0 / 0 Hu0/x/0/14 1 / 0 FH0/x/0/24 1 / 1 Hu0/x/0/5 0 / 0 Hu0/x/0/15 1 / 0 FH0/x/0/25 1 / 1 Hu0/x/0/6 0 / 0 Hu0/x/0/16 1 / 0 FH0/x/0/26 0 / 1 Hu0/x/0/7 0 / 0 Hu0/x/0/17 1 / 0 FH0/x/0/27 0 / 1 Hu0/x/0/8 0 / 0 FH0/x/0/18 1 / 0 FH0/x/0/28 0 / 1 Hu0/x/0/9 0 / 0 FH0/x/0/19 1 / 0 FH0/x/0/29 0 / 1 NC57-36H-SELine card offering 100G connectivity (up to 3.6Tbps) and high scale routing. It is based on a single J2 chipset and OP2 eTCAM. Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core Hu0/x/0/0 0 / 0 Hu0/x/0/10 0 / 0 Hu0/x/0/20 0 / 1 Hu0/x/0/39 0 / 1 Hu0/x/0/1 0 / 0 Hu0/x/0/11 0 / 0 Hu0/x/0/21 0 / 1 Hu0/x/0/31 0 / 1 Hu0/x/0/2 0 / 0 Hu0/x/0/12 0 / 0 Hu0/x/0/22 0 / 1 Hu0/x/0/32 0 / 1 Hu0/x/0/3 0 / 0 Hu0/x/0/13 0 / 0 Hu0/x/0/23 0 / 1 Hu0/x/0/33 0 / 1 Hu0/x/0/4 0 / 0 Hu0/x/0/14 0 / 0 Hu0/x/0/24 0 / 1 Hu0/x/0/34 0 / 1 Hu0/x/0/5 0 / 0 Hu0/x/0/15 0 / 0 Hu0/x/0/25 0 / 1 Hu0/x/0/35 0 / 1 Hu0/x/0/6 0 / 0 Hu0/x/0/16 0 / 0 Hu0/x/0/26 0 / 1     Hu0/x/0/7 0 / 0 Hu0/x/0/17 0 / 0 Hu0/x/0/27 0 / 1     Hu0/x/0/8 0 / 0 Hu0/x/0/18 0 / 0 Hu0/x/0/28 0 / 1     Hu0/x/0/9 0 / 0 Hu0/x/0/19 0 / 0 Hu0/x/0/29 0 / 1     NC57-36H6D-SLine card offering a mix of 100G, 200G and 400G connectivity (up to 4.8Tbps). It is based on a single J2 chipset without eTCAM. Interface NPU/Core Interface NPU/Core Interface NPU/Core Interface NPU/Core Hu0/x/0/0 0 / 0 Hu0/x/0/10 0 / 0 Hu0/x/0/20 0 / 1 FH0/x/0/39 0 / 1 Hu0/x/0/1 0 / 0 Hu0/x/0/11 0 / 0 Hu0/x/0/21 0 / 1 FH0/x/0/31 0 / 1 Hu0/x/0/2 0 / 0 Hu0/x/0/12 0 / 0 Hu0/x/0/22 0 / 1 FH0/x/0/32 0 / 1 Hu0/x/0/3 0 / 0 Hu0/x/0/13 0 / 0 Hu0/x/0/23 0 / 1 FH0/x/0/33 0 / 1 Hu0/x/0/4 0 / 0 Hu0/x/0/14 0 / 0 FH0/x/0/24 0 / 1 FH0/x/0/34 0 / 1 Hu0/x/0/5 0 / 0 Hu0/x/0/15 0 / 0 FH0/x/0/25 0 / 1 FH0/x/0/35 0 / 1 Hu0/x/0/6 0 / 0 Hu0/x/0/16 0 / 0 FH0/x/0/26 0 / 1     Hu0/x/0/7 0 / 0 Hu0/x/0/17 0 / 0 FH0/x/0/27 0 / 1     Hu0/x/0/8 0 / 0 Hu0/x/0/18 0 / 0 FH0/x/0/28 0 / 1     Hu0/x/0/9 0 / 0 Hu0/x/0/19 0 / 0 FH0/x/0/29 0 / 1     ", "url": "/tutorials/port-assignments-on-ncs5500-platforms/", "author": "Nicolas Fevrier", "tags": "NCS5500, ncs 5500, Port" } , "tutorials-2018-02-19-netflow-sampling-interval-and-the-mythical-internet-packet-size": { "title": "Netflow, Sampling-Interval and the Mythical Internet Packet Size", "content": " Netflow, Sampling-Interval and the Mythical Internet Packet Size Introduction NCS5500 internals Netflow principles Netflow processes Measured packet sizes Long-lived or short-lived flows? New flows rate? Ok, that’s “interesting”, but what should I configure on my routers? 
Conclusion You can find more content related to NCS5500 including routing memory management, VRF, URPF, ACLs, following this link.IntroductionIn this post, we will try to clarify key concepts around Netflow technology and potentially correct some common misconceptions. Particularly we will explain why the “what is the sampling-rate you support?” is not the right question.We will describe in extensive details the NCS5500 implementation too.Also, we will share what we measured in different networks in Europe and North America. This information will be helpful to understand the parameters of this equation.We will provide tools to answer questions like# how many new flows per second? how long the flows exist and how long a representation of them will stay in cache? are we dropping samples because of the protection shaper?It’s certainly not meant to be a state-of-the-art but more an invitation to comment with your own findings.To understand the basic of the technology, we invite you to start with Xander’s post on Cisco supportforum.It has been written for ASR9000 five years ago, so multiple differences exist, but it’s still a very good resource to get familiar with Netflow.NCS5500 internalsNCS5500 supports NetFlow v9 and IPFIX (not sFlow or former versions of Netflow).Netflow is used to create a statistical view of the flow matrix from the router or line card perspective.In a chassis, Netflow activities will be performed at the line card level and will involve the NPU (for instance Qumran-MX, Jericho or Jericho+) and the Line Card CPU. Aside from the configuration and show commands, nothing will be performed at the Route Processor level.Before jumping into the Netflow specifics, let’s describe some key internal parts of an NCS5500 line card or system.Two internal “networks” exist and interconnect the various elements# the EOBC network (for “Ethernet Out-of-Band Channel” used for inter-process communication the EPC network (for “Ethernet Protocol Channel”) for all the punted packets.Note# the fixed-form systems like NCS5501(-SE), NCS55A1-24H, NCS55A1-36H(-SE)-S, we don’t have similar internal design, you will not be able to use the “admin show controller switch xxx” CLI. It’s valid for NCS5504, NCS5508 and NCS5516 chassis but also for NCS5502(-SE) systems.sysadmin-vm#0_RP0# show controller switch reachableRack Card Switch---------------------0 SC0 SC-SW0 SC0 EPC-SW0 SC0 EOBC-SW0 SC1 SC-SW0 SC1 EPC-SW0 SC1 EOBC-SW0 LC1 LC-SW0 LC2 LC-SW0 LC6 LC-SW0 LC7 LC-SW0 FC0 FC-SW0 FC1 FC-SW0 FC3 FC-SW0 FC5 FC-SWsysadmin-vm#0_RP0#The sampled packets and the netflow records will transit over the EPC network.The number of NPUs and the bandwidth of EPC/EOBC channels will vary between systems.Here is a diagram representing a line card 24x100G w/ eTCAM with 4x Jericho ASICs. 
Each NPU is connected at 2.5Gbps to the EPC switch and the LC CPU is connected with 3x 2.5 = 7.5Gbps to the same switch#sysadmin-vm#0_RP0# show controller switch summary location 0/LC7/LC-SWRack Card Switch Rack Serial Number--------------------------------------0 LC7 LC-SW FGE194XXXXX Phys Admin Port Protocol ForwardPort State State Speed State State Connects To--------------------------------------------------------------------4 Up Up 2.5-Gbps - Forwarding LC CPU (EPC 0)5 Up Up 2.5-Gbps - Forwarding LC CPU (EPC 1)6 Up Up 2.5-Gbps - Forwarding LC CPU (EPC 2)7 Up Up 2.5-Gbps - Forwarding LC CPU (EOBC)8 Up Up 2.5-Gbps - Forwarding NPU29 Up Up 2.5-Gbps - Forwarding NPU110 Up Up 2.5-Gbps - Forwarding NPU011 Up Up 2.5-Gbps - Forwarding NPU312 Down Down 1-Gbps - - FC013 Up Up 1-Gbps - Forwarding FC114 Down Down 1-Gbps - - FC215 Down Down 1-Gbps - - FC316 Down Down 1-Gbps - - FC417 Down Down 1-Gbps - - FC518 Up Up 1-Gbps - Forwarding SC0 EOBC-SW19 Down Down 1-Gbps - - SC1 EOBC-SWsysadmin-vm#0_RP0#Here, we are representing a line card 36x100G w/ eTCAM with 4x Jericho+. Each NPU is connected at 2.5Gbps to the EPC switch and the LC CPU is connected at 10Gbps to the same switch#sysadmin-vm#0_RP0# show controller switch summary location 0/LC1/LC-SWRack Card Switch Rack Serial Number--------------------------------------0 LC1 LC-SW FGE194XXXXX Phys Admin Port Protocol ForwardPort State State Speed State State Connects To-------------------------------------------------------------------4 Up Up 2.5-Gbps - Forwarding NPU05 Up Up 2.5-Gbps - Forwarding NPU16 Up Up 2.5-Gbps - Forwarding NPU27 Up Up 2.5-Gbps - Forwarding NPU38 Up Up 10-Gbps - Forwarding LC CPU (EPC)9 Up Up 10-Gbps - Forwarding LC CPU (EOBC)12 Down Down 1-Gbps - - FC013 Down Down 1-Gbps - - FC114 Down Down 1-Gbps - - FC215 Down Down 1-Gbps - - FC316 Down Down 1-Gbps - - FC417 Up Up 1-Gbps - Forwarding FC518 Down Down 1-Gbps - - SC0 EOBC-SW19 Up Up 1-Gbps - Forwarding SC1 EOBC-SWsysadmin-vm#0_RP0#Inside the EPC switch, we have multiple VLANs to differentiate the Netflow sampled traffic coming from the various NPUs.sysadmin-vm#0_RP0# show controller switch vlan information location 0/LC1/LC-SWRack Card Switch Rack Serial Number--------------------------------------0 LC1 LC-SW FGE194XXXXXSDRIdentifier SDR Name VLAN VLAN Use------------------------------------------------------------------------1 sysadmin-vm 1 (0x001) Platform EMON 17 (0x011) Platform HOST 3073 (0xC01) Calvados IPC2 default-sdr 1282 (0x502) SDR 2 Platform Netflow 1 1298 (0x512) SDR 2 Platform Netflow 2 1314 (0x522) SDR 2 Platform Netflow 3 1330 (0x532) SDR 2 Platform Netflow 4 1346 (0x542) SDR 2 Platform Netflow 5 1362 (0x552) SDR 2 Platform Netflow 6 1538 (0x602) SDR 2 Platform SPP 1554 (0x612) SDR 2 Platform BFD 1570 (0x622) SDR 2 Platform MAC learning 1794 (0x702) SDR 2 Third Party Applications 3074 (0xC02) SDR 2 IPCsysadmin-vm#0_RP0#To protect the line card CPU, each NPU is shaping the sampled traffic.In chassis line cards, this shaper is configured at 133Mbps while in fixed-form platforms, it’s configured at 200Mbps. This parameter is fixed and not configurable via CLI. This NPU shaper guarantees that CPU is not overloaded while processing the samples. 
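To get a sense of what this shaper means in practice, here is a minimal back-of-the-envelope sketch (not an official sizing tool) that converts the per-NPU shaper rate into a budget of sampled packets per second, assuming every sample sent to the CPU is the 144-byte copy described later in this post:

```python
# Rough per-NPU sample budget implied by the Netflow protection shaper.
# Assumption: each sample forwarded to the LC CPU is 144 bytes on the wire,
# as detailed in the bottleneck discussion further down.
SAMPLE_BYTES = 144

def samples_per_second(shaper_mbps: float, sample_bytes: int = SAMPLE_BYTES) -> int:
    """Maximum number of sampled packets per second that fit through the shaper."""
    return int(shaper_mbps * 1_000_000 / (sample_bytes * 8))

for platform, shaper_mbps in (("chassis line card", 133), ("fixed-form system", 200)):
    print(f"{platform}: ~{samples_per_second(shaper_mbps):,} samples/s per NPU")
# chassis line card: ~115,451 samples/s per NPU
# fixed-form system: ~173,611 samples/s per NPU
```

Samples arriving faster than this budget are simply dropped by the shaper before they reach the CPU, which is exactly what the sampling-interval discussion later in the post tries to avoid.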
It’s expected to see high CPU utilization in some netflow process threads and it will not reach 100% of a CPU core.RP/0/RP0/CPU0#R1#sh flow platform pse policer-rate location 0/1/CPU0Npu id #0Netflow Platform Pse Policer Rate#Ingress Policer Rate# 133 MbpsNpu id #1Netflow Platform Pse Policer Rate#Ingress Policer Rate# 133 MbpsNpu id #2Netflow Platform Pse Policer Rate#Ingress Policer Rate# 133 MbpsNpu id #3Netflow Platform Pse Policer Rate#Ingress Policer Rate# 133 MbpsRP/0/RP0/CPU0#R1#We will come back later on this shaper since it will directly influence the netflow performance and capabilities.Netflow principlesNetflow is a technology ratified as an IETF standards in 2004 via the RFC 3954https#//www.ietf.org/rfc/rfc3954.txtIt is used on routing devices to generate flow records from packet streams by extracting fields from sampled packets. A database, or cache, is used to store the current flows and their accounting information. Based on multiple criteria, we decide to “expire” a cache entry and generate a flow records which will be transmitted to an external collector.The process can be described as such#1- we sample packets (1 packet every x)2- we pick the first 128B of the packet and add internal header (total# 144B per sampled packet)3- this sampled packet is passed to the LC CPU via the EPC switch4- we extract information for the IP header5- we create cache entries representing flow accounting6- every time a sampled packet of the same flow is received, we update the cache entry7- we maintain multiple timers and when one expires, a NF record is generated8- we send the record to the external collector(s)This concept of timers is very important since it will dictate how fast we flush the cache content, and inform the remote collector of the existence of a flow. In the case of DDoS attack detection, it’s key to speed up the process# Inactive timer represents the time without receiving a sampled packet matching a particular cache entry. Active timer, in the other hand, represents the maximum time of existence of a particular cache entry, even if we still receive sampled packets matching it.The basic configuration for Netflow#flow monitor-map monitor1 record ipv4 exporter export1 cache entries 1000000 cache timeout active 15 cache timeout inactive 2 cache timeout rate-limit 2000!flow exporter-map export1 version v9 options interface-table options sampler-table ! transport udp 9951 source Loopback0 destination 1.3.5.7!sampler-map sampler1 random 1 out-of 4000!interface HundredGigE0/7/0/0 flow ipv4 monitor monitor1 sampler sampler1 ingressSeveral potential bottlenecks need to be understood when using Netflow in your networks. 133 Mbps or 200 Mbps shaper (not configurable)This shaper will have a direct impact on the amount of sampled packets we can pass to the LC CPU#133-200Mbps / [ ( packet size up to 101B + 43B ) * 8]Ex# packets larger than 101 Bytes @ Layer2 will represent 133Mbps / (144*8) = 115,451 PPS per NPU Flow table sizeDefault 64k, configurable up to 1M per monitor-map Export rate-limiterDefault 2000 records / sec, configurableNote# the export “rate-limiter” name is often creating confusion in operator’s mind because we will not “drop” records if we exceed this limit, but instead we will keep the entry longer in the cache despite the timer expiration. At the potential risk of reaching the maximum size of this cache.Netflow processesSeveral processes are involved in the Netflow operation and configuration. 
They are present in both the Route Processor and the Line Card.RP/0/RP0/CPU0#NCS5508#show processes cpu location 0/RP0/CPU0 | include~ nf~4599 0% 0% 0% nfmgr4890 0% 0% 0% nfmaRP/0/RP0/CPU0#NCS5508#RP/0/RP0/CPU0#NCS5508#show processes cpu location 0/7/CPU0 | include~ nf~4036 0% 0% 0% nfma4776 0% 0% 0% nfea4790 12% 12% 12% nfsvr4810 2% 2% 2% nf_producerRP/0/RP0/CPU0#NCS5508#1- NetFlow Manager (nfmgr) accepts configuration and maintains global objects, i.e. sampler, flow monitor, and flow exporter2- NetFlow MA (nfma) accepts the interface-level configuration3- NetFlow EA (nfea) sends the config to the NF Server and the ASIC4- NetFlow Producer receives NF packets from the ASIC and adds them to the shared memory data ring for passing data to the NetFlow Server process5- NetFlow Server (nfsvr) receives NF record packets from nf_producer, creates a new flow cache if not already created, or updates an existing flow cache (packet / byte count) for the flow monitor, periodically ages entries from the NF cache into NF export packets and sends the NF export packets to the NF collector using UDP6- show commands poll information from nfsvr7- netio is used to transport the recordsMeasured packet sizes“Average Internet packet size” or “IMIX packet sizes”… Those are concepts very frequently mentioned in the networking industry. It’s an important parameter when considering the forwarding performance of a device.Indeed, the forwarding ASICs or Network Processing Units (NPUs) are often characterized by their port density or bandwidth but also by their forwarding performance expressed in PPS (packets per second).It represents the number of packets we can route, filter, encapsulate or decapsulate, count, police, remark, … every second.In the same vein, you may have heard about NDR (Non-Drop Rate) to express the minimal packet size a system can forward at line rate on all ports simultaneously.Packet size, NDR, bandwidth and performance in PPS are of course directly linked to each other.The average packet size is a very important parameter to qualify the Netflow/IPFIX capability of a device and how far we can push in terms of sampling-interval. It will be covered in the next part of this blog post.So, understanding the traffic profiles and the average packet size per link and per ASIC is mandatory to qualify your network. It will also be necessary to know precisely the port allocation to each ASIC, something we documented in this post.You will find a large variety of answers in the literature or when using your favorite search engine# commonly from 350 bytes to 500 bytes per packet.
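Rather than relying on such generic figures, it is worth measuring your own averages using the bytes-over-packets method the post describes just below; here is a minimal sketch of that division (the counter values are hypothetical, for illustration only):

```python
# Hypothetical illustration of the "divide the byte counter by the packet counter"
# method described below; the counters would come from a "show interface" output.
def average_packet_size(byte_count: int, packet_count: int) -> float:
    """Average packet size in bytes for one direction of an interface."""
    return byte_count / packet_count if packet_count else 0.0

# Made-up counters resembling a transit-facing port
rx_avg = average_packet_size(byte_count=912_345_678_901, packet_count=801_234_567)
tx_avg = average_packet_size(byte_count=310_987_654_321, packet_count=1_002_345_678)
print(f"ingress average ~{rx_avg:.0f} B, egress average ~{tx_avg:.0f} B")
# ingress average ~1139 B, egress average ~310 B
```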
But the real answer should be, as usual# it depends.Let’s take routers facing the internet and list the different types of interfaces# core facing internet# peering and PNI internet# transit providing a full internet view internet or local cache engines from various CDNIt’s fairly easy to measure it, just collect the output of a “show interface” from the router, and devide the byte counter by the packet counter.We collected numbers from multiple ISP in the US and Western Europe, here are some numbers (expressed in Bytes) Peering/Transit Ingress Avg Egress Avg Cust 1 Transit 1 1136 328 Cust 1 Transit 2 927 526 Cust 1 Transit 3 1138 346 Cust 2 Transit 1192 237 Cust 3 Transit 999 238 Cust 4 Transit 1249 202 Cable Cust 1 Peering 714 413 Cable Cust 2 Peering 603 285 Cust 1 Peering 496 643 Cust 2 Peering 706 516 Cust 3 Peering 594 560 Peering GIX 819 426 It seems difficult to derive a “rule” from these numbers but we can say# transit traffic is usually more important in ingress from 900B to 1200B average while the egress is usually shorter from 250B to 500B peering traffic is more balanced between ingress and egress, with an average packet size from 400B to 700BBut other type of interfaces will present more “extreme” numbers, like the CDNs (and they compose the largest and fastest growing amount of traffic in networks today). CDN / Content Providers Ingress Avg Egress Avg Cust 1 Google GGC 1277 329 Cust 2 Google GGC 1370 350 Cust 3 Google GGC 1393 284 Cust 1 Netflix 1495 73 Cust 2 Netflix 1470 74 Cust 1 Level3 CDN 1110 274 Cust 2 Level3 CDN 1378 176 Facebook 1314 158 Apple 1080 679 EdgeCast 1475 72 Fastly 1407 140 Twitter 1416 232 Yahoo 964 197 Cust 1 Akamai 1471 72 Cust 2 Akamai 1443 82 Cust 1 Twitch 1490 121 Cust 2 Twitch 1490 116 Cust 3 Twitch 1328 395 It’s showing very clearly that we have a very asymmetrical traffic distribution. Something totally expected considering the type of service they deliver (content). But also it’s showing that ingress traffic is very often “as large as your MTU”. It will be a key parameter when discussing the sampling-interval we have to configure on our interfaces.I hope it convinced you that you can not simply take 350B or 500B as your average packet size. In reality, it will depend on the type on service you are connected to and the port allocation to NPUs.Long-lived or short-lived flows?Now that we understand the average packet size profiles for each type of services, it could be also interesting to study how long the flows last. Or to put it differently, how many packets will represent a normal / average flows before it ends.Indeed, if we have very long streams, the cache entry will stay present for a long time and it will require the expiration of the active timer to generate a record and clear the entry. 
On the other hand, if we have just very short streams, they will very likely be represented by single-packet cache entries which will be flushed when the inactive timer expires.The stress on the LC CPU will be higher if we have a larger proportion of 1-packet flows, because it implies creating new entries in the cache (with all the appropriate fields) instead of simply updating existing entries.We can take a statistical approach, simply by checking the number of packets we have for each flow entry present in the cache of some real production routers.Quick example#RP/0/RP0/CPU0#R1#show flow monitor fmm cache match counters packets eq 1 include interface ingress location 0/1/cpu0 | utility wc -l 65308RP/0/RP0/CPU0#R1#show flow monitor fmm cache match counters packets neq 1 include interface ingress location 0/1/cpu0 | utility wc -l 12498RP/0/RP0/CPU0#R1#sh run formal | i randomBuilding configuration...sampler-map SM random 1 out-of 2048RP/0/RP0/CPU0#R1#The output with “eq” counts the entries with just one packet; with “neq”, we count the flows with more than one packet.Let’s check a couple of routers using a sampling-interval of 1#2048 Router with 1#2048 eq 1 neq 1 Ratio (%) R1 0/0 43680 11122 75/25 R2 0/0 39723 11421 71/29 R2 0/1 31907 8628 73/27 R2 0/2 35110 9168 74/26 R3 0/3 21563 6541 70/30 Some other routers with a sampling-interval of 1#4000 Router with 1#4000 eq 1 neq 1 Ratio (%) R1 11312 1725 85/15 R2 56321 15177 73/27 We will let you run the test on your own devices, but it looks like the proportion of 1-packet flows is clearly higher than that of multi-packet flows.It appears with 1#2048 and is even clearer with 1#4000.It only proves that, statistically, most streams are shorter than 2048 or 4000 packets.It also means that a large majority of the samples will create new entries in the cache table instead of updating existing entries.These flow entries will be cleared out of the cache (and will generate a NF record) when reaching the inactive timer.New flows rate?Another interesting question# how can I check the number of new flows per second?With the following show command, we can monitor the Cache Hits and Cache Misses.
Hits# every time we sample a packet and it matches an existing entry in the cache, we update the counters Misses# no entry in the cache for this sample, we create a new flow entryRP/0/RP0/CPU0#R1#sh flow monitor FM cache internal loc 0/0/CPU0 | i CacheFri Feb 9 15#08#14.611 CETCache summary for Flow Monitor #Cache size# 1000000Cache Hits# 9797770256Cache Misses# 22580788616Cache Overflows# 0Cache above hi water# 0RP/0/RP0/CPU0#R1#sh flow monitor FM cache internal loc 0/0/CPU0 | i CacheFri Feb 9 15#09#55.314 CET Cache summary for Flow Monitor #Cache size# 1000000Cache Hits# 9798220473Cache Misses# 22581844681Cache Overflows# 0Cache above hi water# 0RP/0/RP0/CPU0#R1#Simple math now between the two measurements# Between the two show commands# 15#09#55.314-15#08#14.611 = (60+55)x1000+314-(14x1000+611) = 100,703 ms ROUND [ (22581844681-22580788616) / (100.703) ] = 10487 samples creating new flow entries / second ROUND [ (9798220473-9797770256) / (100.703) ] = 4471 samples with existing entries / secondOk, that’s “interesting”, but what should I configure on my routers?We explained the netflow principles, we detailed the internals of NCS5500 routers, reviewed the potential bottlenecks and we provided couple of data points to redefine what is an internet packet average size, the proportion of 1-packet flows in the cache, etc.Indeed that was a lot of concepts, but what can I do more practically?It’s time to address a common misconception and recenter the discussion. A frequent question is “what is the sampling-rate you support?”.Since it’s the only parameter you can configure with CLI, network operators wonder what they should use but it’s inherently the wrong question.Because the answer “1#1” could be valid.But it doesn’t mean we can sample every single packet at every speed, on every interface, with every average packet size.It’s capital to understand that the only relevant parameter is the number of sampled packets we can send from the NPU to the line card CPU.This information can be easily derived from following parameters# average packet size (depends on the charts presented above) are we using ingress only or both ingress and egress (currently egress NF is not supported in NCS5500) how the ports configured for netflow are connected to the forwarding ASIC sum of bandwidth for all the ports connected to the NPU (an estimation can be taken from peak hour traffic, or the projection of growth, or even the biggest DDoS attack) and finally, the sampling-interval we configuredIf we take the assumption that sampled packets will be mostly larger than 101B, we will transport 144B packets to the CPU.This traffic will be rate-limited by the shaper we mentioned above# 133Mbps or 200Mbps depending on the platform#Something we can not really anticipate is the ratio of sampled packets that will be <101B. 
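Putting those parameters together, here is a small sketch (a rough sizing aid, not an official tool) of the most aggressive sampling-interval computation that the two examples below then walk through by hand, assuming every sample is a 144-byte copy:

```python
import math

SAMPLE_BYTES = 144  # bytes sent to the LC CPU per sampled packet

def most_aggressive_interval(total_bw_gbps: float, avg_pkt_bytes: float,
                             shaper_mbps: float = 133, load: float = 1.0) -> int:
    """Smallest 1:N sampling-interval that keeps the sampled traffic of one NPU
    (aggregate bandwidth of its Netflow-enabled ports) under the protection shaper."""
    packets_per_second = total_bw_gbps * 1e9 * load / (avg_pkt_bytes * 8)
    sample_budget = shaper_mbps * 1e6 / (SAMPLE_BYTES * 8)
    return math.ceil(packets_per_second / sample_budget)

# Hypothetical NPU: 6x100G Netflow-enabled ports, ~1000B average packets, 70% load
print(most_aggressive_interval(total_bw_gbps=600, avg_pkt_bytes=1000, load=0.7))
# prints 455, i.e. roughly a 1:455 sampling-interval for this traffic profile
```

The unpredictable share of samples smaller than 101 bytes is not modelled here.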
But this number will not represent much, except in case of specific DDoS attack.Let’s take a couple of examples to illustrate the formulas above# you have 6 ports on the Jericho NPU but only 4 are used with Netflow the average packet size on this ports connected to CDN is 1400B the load is heavy and the ports are used at 70% total, at peak hour but the customer would like to anticipate the worst case if all ports are transmitting line rateSo the math will be#Most aggressive sampling-interval = Total-BW / ( Avg-Pkt-Size x 133Mbps ) x ( 144 x 8 ) = 400,000,000,000 / ( 1400 x 8 x 133,000,000 ) x ( 144 x 8 ) = 309–> in this example, it will be possible to use an 1#309 sampling-interval before reaching the limit of the 133Mbps shaper.Another example# you have 9 ports on the Jericho NPU+ all configured for NFv9 the average packet size on this ports connected to peering partners is 800B the load not huge and the ports are used at a total of 40% at peak hour but the customer takes some margin of growth (and error) and pick 70%That gives us#Most aggressive sampling-interval = Total-BW / ( Avg-Pkt-Size x 133Mbps ) x ( 144 x 8 ) = 900,000,000,000 x 0.7 / ( 800 x 8 x 133,000,000 ) x ( 144 x 8 ) = 852–> in this example, it will be possible to use an 1#852 sampling-interval before reaching the limit of the 133Mbps shaper.To check if your sampling is too aggressive and you are hitting the shaper limit, you need to look at the droppedPkts count of the VOQ24 / COS2 in the following show command#RP/0/RP0/CPU0#5508-6.3.2#sh controllers npu stats voq base 24 instance 0 location 0/7/CPU0Asic Instance = 0VOQ Base = 24 ReceivedPkts ReceivedBytes DroppedPkts DroppedBytes-------------------------------------------------------------------COS0 = 0 0 0 0COS1 = 0 0 0 0COS2 = 904365472 90918812004 3070488403 308867834524COS3 = 14 1668 0 0COS4 = 1955 201438 0 0COS5 = 0 0 0 0COS6 = 0 0 0 0COS7 = 0 0 0 0RP/0/RP0/CPU0#5508-6.3.2#Having this COS 2 DroppedPkts counter increasing is the proof we are exceeding the shaper and you need to reduce the sampling-interval. The “instance” here represents the NPU ASIC.Note# In releases before 6.3.x, Netflow was transported over VOQ 32 / COS3 so the CLI to use was “sh controllers npu stats voq base 32 instance 0 location 0/7/CPU0”ConclusionWe hope this article helped provided useful information on the nature of packets and streams in Internet.Also, we hope we clarified some key concepts related to netflow v9 on NCS5500.Particularly, the notion of “interval-rate” should be considered irrelevant if we don’t specify the traffic more precisely.In a follow up post, we will perform stress and performance testing on Netflow to illustrate all this. 
Stay tuned.Acknowledgements# Thanks a lot to the following engineers who helped preparing this article.Benoit Mercier des Rochettes, Thierry Quiniou, Serge Krier, Frederic Cuiller, Hari Baskar Sivasamy, Jisu Bhattacharya", "url": "/tutorials/2018-02-19-netflow-sampling-interval-and-the-mythical-internet-packet-size/", "author": "Nicolas Fevrier", "tags": "ncs5500, ncs 5500, netflow, nf, NFv9" } , "tutorials-understanding-ncs5500-jericho-plus-systems": { "title": "Understanding NCS5500 Jericho+ Systems and their scalability", "content": " Understanding NCS5500 Resources S01E06 Introduction of the Jericho+ based platforms and impact on the scale Previously on “Understanding NCS5500 Resources” Jericho+ New systems using this J+ ASIC Let’s talk about route scale NCS55A1-36H-S / NCS-55A2-MOD-HD-S / NCS-55A2-MOD-S Scale NCS55A1-36H-SE-S Scale NCS55A1-24H Scale Conclusion You can find more content related to NCS5500 including routing memory management, VRF, URPF, ACLs, Netflow following this link.S01E06 Introduction of the Jericho+ based platforms and impact on the scaleUpdate# This article has been edited in June 2018 to fix an error on the 6.3.2 behavior.Update2# In August 2018, we added information on the MOD systems and line cards.Update3# In Nov 2019, clarification on the lack of MACsec support on 1G interfaces.Update4 in IOS XR 7.3.1, we will decommission the “internet-optimized” mode, please check this article# https#//xrdocs.io/ncs5500/tutorials/decommissioning-internet-optimized-mode/Previously on “Understanding NCS5500 Resources”In previous posts, we presented# the different routers and line cards in NCS5500 portfolio we explained how IPv4 prefixes are sorted in LEM, LPM and eTCAM we covered how IPv6 prefixes are stored in the same databases. we demonstrated in a video how we can handle a full IPv4 and IPv6 Internet view on “Base” systems and line cards (i.e. without external TCAM, only using the LEM and LPM internal to the forwarding ASIC) finally in the fifth post, we demonstrated in another video the scale we can reach on Jericho-based systems with an external TCAMIn this episode we will introduce and study a second generation of line cards and systems based on an evolution of the Forwarding ASIC.Jericho+This Forwarding ASIC from Broadcom re-uses all of the principles of the Jericho generation, simply extending some scales# the bandwidth capabilities and consequently, the interfaces count# we can now accomodate 9x 100G interfaces line rate per ASIC the forwarding capability , extending it to 835MPPS (the performance is the same with lookups in internal databases or external TCAM) some memories like LPM (for certain models) and EEDB. Also we will use a new generation eTCAM (significantly larger).“Certain models”? Yes, J+ exists in different flavors. Some are re-using the same LEM/LPM scale than Jericho and some others have a larger LPM memory (qualified for 1M to 1.3M instead of 256K to 350+K IPv4 entries).Both Jericho and Jericho+ can be used with current Fabric Cards (FE3600). Some restrictions may apply for the 16-slot chassis, please contact your local account team to discuss the specifics.New systems using this J+ ASICIn the modular chassis#In March 2018, we had a single line card using Jericho+# the 36x100G-A-SE (more LC are coming in the summer).The line card is timing capable (note# an RP-E is necessary to use these timing features) and only exists in scale version (with eTCAM). The supported scale in current release is 4M+ IPv4 entries. It does not include MACsec chipset. 
It’s also the first line card supporting break-out cable 4x 25G.Internally, the line is composed of 4 Jericho+ (each one handling 9 ports QSFP). As shown in this diagram, each Jericho+ Forwarding ASIC is connected to the fabric cards via 8x 25G SERDES instead of 6 in the case of Jericho-based line cards.In July 2018, we added the support of MOD line cards# NC55-MOD-A-SThis line card offers more flexbility with fixed ports and two MPA bays. It’s powered by a single Jericho+ ASIC.We are also extending the fixed-form factor portfolio with 3x 1RU and 2x 2RU NCS55A1-36H-S NCS55A1-36H-SE-S NCS55A1-24H NCS-55A2-MOD-HD-S NCS-55A2-MOD-SLet’s get started with the 36 ports options.These standalone systems are MACsec + timing capable and are available in base (NCS55A1-36H-S) and scale versions (NCS55A1-36H-SE-S). Both have the same port density.The base version shows the same route scale than a Jericho systems without external TCAM while the scale version uses a new generation eTCAM extending the scale to 4M IPv4 routes (potentially much more in the future).Internally, the system is composed of 4 Jericho+ ASICs (each one handling 9 ports QSFP) interconnected via an FE3600 chipset.The third router# NCS55A1-24H.It’s a cost optimized, oversubscribbed, system that provides 24 ports QSFP. It is timing-capable but doesn’t support MACsec.As shown in this diagram, the forwarding ASICs are connected back-to-back without using any fabric engine. Each ASIC handles 12 ports for a 900Gbps forwarding capability (hence the oversubscription).We will describe it in more details in the next sections but this system uses the largest version of Jericho+ ASICs. It doesn’t use external TCAM but has a large LPM (1M to 1.3M prefixes instead of the 256K-350K we use on other systems in chassis or in the NCS55A1-36H-S).Moving on the second category, the 2-RU Modular Fixed Systems#Modular and Fixed… Hmmm…Indeed these routers are not chassis in the sense they don’t have slots to host line cards, but still they offer a lot of flexbility. They offer both fixed ports (40x SFP+) and two bays to host MPAs# 12x 10G, with 10G LAN, WAN, OTN, 10G DWDM 2x CFP2_DCO (OTN, 100G/200G DWDM) 1x CFP2_DCO (OTN, 100G/200G DWDM) + 2x QSFP28 (4x10G / 40G / 100G) 4x QSFP28 (4x10G / 40G /100G)In 6.5.1, we offer two flavors of the chassis# base and hardened. The second being temperature hardened, it can support more challenging environmental conditions.The system is powered by a single Jericho+ and is capable of MACsec on the 16 first ports and on MPAs. Note that MACsec is not supported with 1G optics.Let’s talk about route scaleFirst, a quick reminder# the order of operation for route lookup in the NCS5500 family. It applies for both Jericho and Jericho+ systems.The prefixes are stored in LEM, LPM and when possible eTCAM.NCS55A1-36H-S / NCS-55A2-MOD-HD-S / NCS-55A2-MOD-S ScaleOn these systems, the principles of prefixes storage are exactly the same than Jericho systems without eTCAM.So it’s possible to use two different modes# by default# the host mode changed by configuration# the internet modeI invite you take a look at the second and third episode of this series. You will find detailed explanations and examples with real internet views.NCS55A1-36H-SE-S ScaleThe NCS55A1-36H-SE-S is using the same Jericho+ ASIC but completed with a new generation and much larger external TCAM. In current release, it’s certified for 4M IPv4 prefixes but the memory capabilities are significantly larger. 
We will decide in the future if it’s necessary to increase the tested/validated scale.Also, please note that the way we sort routes is different between 6.3.15 and 6.3.2.The uRPF does not affect the scale of this eTCAM (on the contrary of the first generation where it was necessary to disable the dual capacity feature, reducing the eTCAM to 1M entries). Also, the hybrid ACLs are using a different zone of the eTCAM memory and don’t affect the overall scale.RP/0/RP0/CPU0#5508-6.3.2#sh route sumRoute Source Routes Backup Deleted Memory(bytes)local 7 0 0 1680local LSPV 1 1 0 480connected 6 1 0 1680static 1 0 0 240ospf 1 5 0 0 1200bgp 100 1186410 0 0 284738400isis 1 0 0 0 0dagr 0 0 0 0Total 1186430 2 0 284743680RP/0/RP0/CPU0#5508-6.3.2#sh route ipv6 sumRoute Source Routes Backup Deleted Memory(bytes)local 7 0 0 1848local LSPV 1 1 0 528connected 5 2 0 1848connected l2tpv3_xconnect 0 0 0 0static 0 0 0 0bgp 100 58860 0 0 15539040isis 1 0 0 0 0Total 58873 3 0 15543264RP/0/RP0/CPU0#5508-6.3.2#sh dpa resource iproute loc 0/1/CPU0~iproute~ DPA Table (Id# 24, Scope# Global)--------------------------------------------------IPv4 Prefix len distributionPrefix Actual Prefix Actual /0 5 /1 0 /2 0 /3 0 /4 5 /5 0 /6 0 /7 0 /8 16 /9 13 /10 35 /11 106 /12 285 /13 550 /14 1066 /15 1880 /16 13419 /17 7773 /18 13636 /19 25026 /20 38261 /21 43073 /22 80751 /23 67073 /24 376991 /25 567 /26 2032 /27 4863 /28 15599 /29 16868 /30 41735 /31 52 /32 434792 NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 In Use# 1186472 1186472 1186472 1186472 Create Requests Total# 1186472 1186472 1186472 1186472 Success# 1186472 1186472 1186472 1186472 Delete Requests Total# 0 0 0 0 Success# 0 0 0 0 Update Requests Total# 8 8 8 8 Success# 6 6 6 6 EOD Requests Total# 0 0 0 0 Success# 0 0 0 0 Errors HW Failures# 0 0 0 0 Resolve Failures# 0 0 0 0 No memory in DB# 0 0 0 0 Not found in DB# 0 0 0 0 Exists in DB# 0 0 0 0RP/0/RP0/CPU0#5508-6.3.2#sh dpa resource ip6route loc 0/1/CPU0~ip6route~ DPA Table (Id# 25, Scope# Global)--------------------------------------------------IPv6 Prefix len distributionPrefix Actual Capacity Prefix Actual Capacity /0 5 0 /1 0 0 /2 0 0 /3 0 0 /4 0 0 /5 0 0 /6 0 0 /7 0 0 /8 0 0 /9 0 0 /10 5 0 /11 0 0 /12 0 0 /13 0 0 /14 0 0 /15 0 0 /16 16 0 /17 0 0 /18 0 0 /19 2 0 /20 9 0 /21 3 0 /22 4 0 /23 4 0 /24 19 0 /25 6 0 /26 15 0 /27 17 0 /28 78 0 /29 1848 0 /30 153 0 /31 127 0 /32 9279 0 /33 487 0 /34 345 0 /35 357 0 /36 1436 0 /37 199 0 /38 673 0 /39 154 0 /40 2239 0 /41 206 0 /42 369 0 /43 113 0 /44 2213 0 /45 178 0 /46 1451 0 /47 356 0 /48 19222 0 /49 0 0 /50 0 0 /51 1 0 /52 1 0 /53 0 0 /54 0 0 /55 0 0 /56 11540 0 /57 16 0 /58 0 0 /59 0 0 /60 0 0 /61 0 0 /62 0 0 /63 0 0 /64 4940 0 /65 0 0 /66 0 0 /67 0 0 /68 0 0 /69 0 0 /70 0 0 /71 0 0 /72 0 0 /73 0 0 /74 0 0 /75 0 0 /76 0 0 /77 0 0 /78 0 0 /79 0 0 /80 0 0 /81 0 0 /82 0 0 /83 0 0 /84 0 0 /85 0 0 /86 0 0 /87 0 0 /88 0 0 /89 0 0 /90 0 0 /91 0 0 /92 0 0 /93 0 0 /94 0 0 /95 0 0 /96 1 0 /97 0 0 /98 0 0 /99 0 0 /100 0 0 /101 0 0 /102 0 0 /103 0 0 /104 6 0 /105 0 0 /106 0 0 /107 0 0 /108 0 0 /109 0 0 /110 0 0 /111 0 0 /112 0 0 /113 0 0 /114 0 0 /115 4 0 /116 0 0 /117 0 0 /118 0 0 /119 0 0 /120 0 0 /121 0 0 /122 71 0 /123 0 0 /124 0 0 /125 0 0 /126 0 0 /127 18 0 /128 722 0 NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 In Use# 58908 58908 58908 58908 Create Requests Total# 58908 58908 58908 58908 Success# 58908 58908 58908 58908 Delete Requests Total# 0 0 0 0 Success# 0 0 0 0 Update Requests Total# 2 2 2 2 Success# 1 1 1 1 EOD Requests Total# 0 0 0 0 Success# 0 0 0 0 Errors HW Failures# 0 0 0 0 Resolve 
Failures# 0 0 0 0 No memory in DB# 0 0 0 0 Not found in DB# 0 0 0 0 Exists in DB# 0 0 0 0RP/0/RP0/CPU0#5508-6.3.2#sh contr npu resources lem loc 0/1/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 0 (0 %) iproute # 0 (0 %) ip6route # 0 (0 %) mplslabel # 4 (0 %)-- SNIP --RP/0/RP0/CPU0#5508-6.3.2#sh contr npu resources lpm loc 0/1/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 338879 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 26 (0 %) iproute # 0 (0 %) ip6route # 0 (0 %) ipmcroute # 1 (0 %)-- SNIP --RP/0/RP0/CPU0#5508-6.3.2#sh contr npu resources exttcamipv4 loc 0/1/CPU0HW Resource Information Name # ext_tcam_ipv4OOR Information NPU-0 Estimated Max Entries # 4000000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green -- SNIP --Current Usage NPU-0 Total In-Use # 1186457 (30 %) iproute # 1186472 (30 %)-- SNIP --RP/0/RP0/CPU0#5508-6.3.2#sh contr npu resources exttcamipv6 loc 0/1/CPU0HW Resource Information Name # ext_tcam_ipv6OOR Information NPU-0 Estimated Max Entries # 2000000< Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 58878 (3 %) ip6route # 58908 (3 %)-- SNIP --RP/0/RP0/CPU0#5508-6.3.2#sh contr npu externaltcam loc 0/1/CPU0External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 80b FLP 7300158 1186457 0 IPv4 UC0 1 80b FLP 0 0 1 IPv4 RPF0 2 160b FLP 10963645 58878 3 IPv6 UC0 3 160b FLP 0 0 4 IPv6 RPF0 4 80b FLP 4096 0 81 INGRESS_IPV4_SRC_IP_EXT0 5 80b FLP 4096 0 82 INGRESS_IPV4_DST_IP_EXT0 6 160b FLP 4096 0 83 INGRESS_IPV6_SRC_IP_EXT0 7 160b FLP 4096 0 84 INGRESS_IPV6_DST_IP_EXT0 8 80b FLP 4096 0 85 INGRESS_IP_SRC_PORT_EXT0 9 80b FLP 4096 0 86 INGRESS_IPV6_SRC_PORT_EXT-- SNIP --RP/0/RP0/CPU0#5508-6.3.2#NCS55A1-24H ScaleThe NCS55A1-24H is very different from the other NCS5500 routers because it uses a pair of Jericho+ with large LPM. So it occupies a particular place between the non-eTCAM and the eTCAM systems.This large LPM is algorithmic, so even if it’s marketed for 1M IPv4 entries, it can fit much more depending on the prefix distribution#HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenHW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 1384333 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenLike the other non-eTCAM systems, we can use two different configurations# the host-optimized mode (default) and the internet-optimized mode.Note# in IOS XR 7.3.1, we will decommission the “internet-optimized” mode, please check this article# https#//xrdocs.io/ncs5500/tutorials/decommissioning-internet-optimized-mode/Let’s take a full internet table made of 655487 v4 and 42852 v6 real routesand check how it fits in this system.With the host optimized mode#And with the internet optimized mode#ConclusionThree different options with the Jericho+ systems# J+ with Jericho-scale, J+ with large LPM, J+ with new generation eTCAM. 
They are used in one new line card offering very high route scalability (4M+ routes), and in three new 1RU systems.", "url": "/tutorials/Understanding-ncs5500-jericho-plus-systems/", "author": "Nicolas Fevrier", "tags": "iosxr, xr, ncs5500, jericho+, j+" } , "#": {} , "tutorials-ncs5500-urpf": { "title": "NCS5500 URPF: Configuration and Impact on Scale", "content": " Understanding NCS5500 Resources S01E07 NCS5500 URPF Configuration and Impact on Scale Previously on “Understanding NCS5500 Resources” Definition URPF relevancy NCS5500 Implementation Configuration and impact on base systems (no eTCAM) Configuration and impact on scale Jericho systems (with eTCAM) Configuration and impact on scale Jericho+ systems (with NG eTCAM) Verification Conclusion You can find more content related to NCS5500 including routing memory management, VRF, ACLs, Netflow following this link.Edited in August 2018 to add a note on the lack of support of S-RTBH, and to fix an error pointed out by Muffadal Presswala (thanks #) related to the behavior with eTCAM systems.Nov2020# S-RTBH support has been added in IOSXR 7.2.1 but only for line cards and platforms powered by Jericho2 and eTCAM (NC57-18DD-SE for example).Nov2020# Change of configuration in 7.xDec2020# in IOS XR 7.3.1, we will decommission the “internet-optimized” mode, please check this article# https#//xrdocs.io/ncs5500/tutorials/decommissioning-internet-optimized-mode/S01E07 NCS5500 URPF Configuration and Impact on ScalePreviously on “Understanding NCS5500 Resources”In previous posts, we presented# the different routers and line cards in NCS5500 portfolio we explained how IPv4 prefixes are sorted in LEM, LPM and eTCAM we covered how IPv6 prefixes are stored in the same databases. we demonstrated in a video how we can handle a full IPv4 and IPv6 Internet view on “Base” systems and line cards (i.e. without external TCAM, only using the LEM and LPM internal to the forwarding ASIC) in the fifth post, we continued with a new video where we demonstrated the very high scale we can reach on Jericho-based systems when using an external TCAM last post, we introduced systems based on the Jericho+ forwarding ASIC and we detailed the routing distribution between the different memories and the scale they can achieve.In this new episode, we will cover the impact of activating URPF on the NCS5500 routers.DefinitionURPF stands for Unicast Reverse Path Forwarding.Definition from the CCO website#“This security feature works by enabling a router to verify the reachability of the source address in packets being forwarded. This capability can limit the appearance of spoofed addresses on a network. If the source IP address is not valid, the packet is discarded.Unicast RPF in strict mode, the packet must be received on the interface that the router would use to forward the return packet.Unicast RPF in loose mode, the source address must appear in the routing table.”It’s a feature configured at the interface level.URPF relevancyRegardless of the Forwarding ASIC (Qumran-MX, Jericho or Jericho+), the NCS5500 only supports URPF in loose mode today.Configuring URPF comes at a cost in terms of scale on some of the NCS5500 family members. It will be detailed extensively in this article. That’s why it’s important to understand the benefits of enabling this feature.As explained in the definition section above, the loose mode simply verifies that the source addresses of received packets are part of the routable space.
To bypass this “protection”, it’s fairly easy for an attacker to pick source addresses inside existing routes when forging the packet instead of totally random addresses.We invite the operators to check how much traffic is currently dropped by the URPF loose mode if they have it enabled on production routers.Example# to check this on an ASR9000#RP/0/RP0/CPU0#Router#show cef drops | i RPF drops RPF drops packets # 0 RPF drops packets # 0 RPF drops packets # 0 RPF drops packets # 0 RPF drops packets # 0 RPF drops packets # 0 RPF drops packets # 0 RPF drops packets # 0 RPF drops packets # 50065 RPF drops packets # 0 RPF drops packets # 1262 RPF drops packets # 3627918 RPF drops packets # 1262 -- SNIP --And compare these figures to the packet count per interface to understand how much traffic it represents. The impact it could have on route scale and the protection efficiency it offers need to be put in perspective before deciding if it is worth enabling URPF.Now said, some other very good reasons to enable URPF loose mode exist. For example, it’s a mandatory brick of a Source-based Remotely Triggered Black Hole architecture (S-RTBH).But… S-RTBH is not supported currently on most of the NCS5500 platforms (even if the URPF loose-mode is supported, it can not be used for this particular use-case). Only exception# the NC57-18DD-SE line cards and all future platforms based on Jericho2 (with eTCAM).NCS5500 ImplementationWe don’t support URPF strict mode today. URPF loose mode is available on NCS5500 since IOS XR 6.2.2 for IPv4 and IPv6. The feature is supported on Jericho and Jericho+ systems, with or without eTCAM.The configuration implies the deactivation of some profiles, different on “base” and “scale” systems. After this preliminary operation, the configuration is applied at the interface level.Deactivating URPF on an interface implies to do it for both IPv4 and IPv6.Allow-self-ping is the default mode and allow-default is not supported.Configuration and impact on base systems (no eTCAM)On “base” systems (without external TCAM)#Since URPF requires two accesses to the LEM (lookup for source address then for destination address in the packet header), we have to disable the optimizations present by default or after a configuration#hw-module fib ipv4 scale host-optimized-disablehw-module fib ipv6 scale internet-optimized-disableNote# depending on the IOS XR version, the options could be different and actually could be the opposite of “disable”, be attentive at what is availabe in the CLI.Note# in IOS XR 7.3.1, we will decommission the “internet-optimized” mode, please check this article# https#//xrdocs.io/ncs5500/tutorials/decommissioning-internet-optimized-mode/With the optimization disabled and after the line cards / system reload, we have now#Important# with such mode, it will no longer be possible to handle a full internet view (v4+v6 or v4-only).The configuration can now be applied on the interfaces#hw-module fib ipv4 scale host-optimized-disablehw-module fib ipv6 scale internet-optimized-disable!interface HundredGigE0/7/0/0 ipv4 address 192.168.1.1 255.255.255.252 ipv4 verify unicast source reachable-via any ipv6 verify unicast source reachable-via any ipv6 address 2001#10#1##1/64Note# Starting from 6.7.x and 7.x.y, it’s mandatory to enable both IPv4 and IPv6 URPF configuration at the same time. 
The configuration for just one address-family will be rejected by the commit.Configuration and impact on scale Jericho systems (with eTCAM)Now, let’s consider the scale systems and line cards with Jericho ASICs#The eTCAM is a 80bit memory and in normal condition we use it in two blocks of 40 bits to double the capacity. The first access being performed on the first half and the second access in the pipeline being done on the second half.With URPF, we need these two accesses to check source and destination, it’s no longer possible to use the double capacity mode# it needs to be disabled.RP/0/RP0/CPU0#NCS5508-632(config)#hw-module tcam fib ipv4 scaledisableRP/0/RP0/CPU0#NCS5508-632(config)#hw-module fib ipv4 scale host-optimized-disableRP/0/RP0/CPU0#NCS5508-632(config)#hw-module fib ipv6 scale internet-optimized-disableRP/0/RP0/CPU0#NCS5508-632(config)#commitThe impact on scale is significative since we lost 1M out of the 2M of the eTCAM capacity.Let’s check with a large routing table (internet v4 + internet v6 + 435k host routes) what is the impact#RP/0/RP0/CPU0#5508-6.3.2#sh bgp sumBGP router identifier 1.1.1.1, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0000000 RD version# 1720749BGP main routing table version 1720749BGP NSR Initial initsync version 634456 (Reached)BGP NSR/ISSU Sync-Group versions 1720749/0BGP scan interval 60 secs BGP is operating in STANDALONE mode. Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 1720749 1720749 1720749 1720749 1720749 1720749 Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.100.151 0 1000 706753 307270 1720749 0 0 5w0d 655487192.168.100.152 0 45896 707144 307270 1720749 0 0 5w0d 656126192.168.100.153 0 7018 705342 307270 1720749 0 0 5w0d 654330192.168.100.154 0 1836 709963 307270 1720749 0 0 5w0d 658948192.168.100.155 0 50300 687217 307270 1720749 0 0 5w0d 636208192.168.100.156 0 50304 708316 307270 1720749 0 0 5w0d 657301192.168.100.157 0 57381 708322 307270 1720749 0 0 5w0d 657307192.168.100.158 0 4608 728503 812358 1720749 0 0 5w0d 677487192.168.100.159 0 4777 717228 307270 1720749 0 0 5w0d 666213192.168.100.160 0 37989 339686 307270 1720749 0 0 5w0d 288706192.168.100.161 0 3549 705390 307270 1720749 0 0 5w0d 654376192.168.100.163 0 8757 683499 307270 1720749 0 0 5w0d 632483192.168.100.164 0 3257 705671 307270 1720749 0 0 5w0d 654661192.168.100.166 0 10051 1186443 217145 1720749 0 0 00#28#05 1186410 RP/0/RP0/CPU0#5508-6.3.2#sh dpa resource iproute loc 0/7/CPU0 ~iproute~ DPA Table (Id# 24, Scope# Global)--------------------------------------------------IPv4 Prefix len distributionPrefix Actual Capacity Prefix Actual Capacity/0 3 20 /1 0 20/2 0 20 /3 0 20/4 3 20 /5 0 20/6 0 20 /7 0 20/8 16 20 /9 14 20/10 37 204 /11 107 409/12 288 818 /13 557 1636/14 1071 3275 /15 1909 5732/16 13572 42381 /17 8005 25387/18 14055 42585 /19 25974 86603/20 40443 127348 /21 45082 141679/22 83722 231968 /23 71750 207173/24 395142 1105590 /25 2085 4299/26 3362 4504 /27 5736 3275/28 15909 2866 /29 17377 6961/30 42508 2866 /31 112 204/32 435868 20 NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 In Use# 1224707 1224707 1224707 1224707 Create Requests Total# 1224713 1224713 1224713 1224713 Success# 1224713 1224713 1224713 1224713 Delete Requests Total# 6 6 6 6 Success# 6 6 6 6 Update Requests Total# 341539 341539 341539 341539 Success# 341538 341538 341538 341538 EOD Requests Total# 0 0 0 0 Success# 0 0 0 0 Errors HW Failures# 0 0 0 0 Resolve Failures# 0 0 0 0 No memory in 
DB# 0 0 0 0 Not found in DB# 0 0 0 0 Exists in DB# 0 0 0 0 RP/0/RP0/CPU0#5508-6.3.2#sh contr npu resources lem loc 0/7/CPU0HW Resource Information Name # lem OOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP -- Current Usage NPU-0 Total In-Use # 467163 (59 %) iproute # 435868 (55 %) ip6route # 31304 (4 %) mplslabel # 0 (0 %)-- SNIP -- RP/0/RP0/CPU0#5508-6.3.2#sh contr npu resources lpm loc 0/7/CPU0HW Resource Information Name # lpm OOR Information NPU-0 Estimated Max Entries # 530552 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 5235 (1 %) iproute # 0 (0 %) ip6route # 5219 (1 %) ipmcroute # 0 (0 %)-- SNIP -- RP/0/RP0/CPU0#5508-6.3.2#sh contr npu resources exttcamipv4 loc 0/7/CPU0HW Resource Information Name # ext_tcam_ipv4 OOR Information NPU-0 Estimated Max Entries # 2048000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP -- Current Usage NPU-0 Total In-Use # 788839 (39 %) iproute # 788839 (39 %) ipmcroute # 0 (0 %)-- SNIP -- RP/0/RP0/CPU0#5508-6.3.2#So we have 13 times Internet (coming from actual internet full views provided by different customers) and a lot of host routes (435k). In a Jericho + eTCAM card, before enabling URPF it occupies# LEM# 59% LPM# 1% eTCAM# 39%Let’s now remove the double capacity mode and configure the URPF on interfacesRP/0/RP0/CPU0#5508-6.3.2#sh run | i hw-mBuilding configuration...RP/0/RP0/CPU0#5508-6.3.2#sh run int hu 0/7/0/0interface HundredGigE0/7/0/0cdpipv4 address 192.168.1.1 255.255.255.252ipv6 address 2001#10#1##1/64load-interval 30flow ipv4 monitor fmm sampler fsm1 ingress!RP/0/RP0/CPU0#5508-6.3.2#confRP/0/RP0/CPU0#5508-6.3.2(config)#hw-module tcam fib ipv4 scaledisableIn order to activate this new scale, you must manually reload the chassis/all line cardsRP/0/RP0/CPU0#5508-6.3.2(config)#commitRP/0/RP0/CPU0#5508-6.3.2(config)#endRP/0/RP0/CPU0#5508-6.3.2#adminroot connected from 127.0.0.1 using console on 5500-6.3.2sysadmin-vm#0_RP0# hw-module location 0/7 reloadReload hardware module ? [no,yes] yesresult Card graceful reload request on 0/7 succeeded.sysadmin-vm#0_RP0# exitRP/0/RP0/CPU0#5508-6.3.2#confRP/0/RP0/CPU0#5508-6.3.2(config)#int hu 0/7/0/0RP/0/RP0/CPU0#5508-6.3.2(config-if)# ipv4 verify unicast source reachable-via anyRP/0/RP0/CPU0#5508-6.3.2(config-if)# ipv6 verify unicast source reachable-via anyRP/0/RP0/CPU0#5508-6.3.2(config-if)#commitRP/0/RP0/CPU0#5508-6.3.2(config-if)#endRP/0/RP0/CPU0#5508-6.3.2#RP/0/RP0/CPU0#5508-6.3.2#sh run | i hw-mBuilding configuration...hw-module tcam fib ipv4 scaledisableRP/0/RP0/CPU0#5508-6.3.2#sh run int hu 0/7/0/0interface HundredGigE0/7/0/0cdpipv4 address 192.168.1.1 255.255.255.252ipv4 verify unicast source reachable-via anyipv6 verify unicast source reachable-via anyipv6 address 2001#10#1##1/64load-interval 30flow ipv4 monitor fmm sampler fsm1 ingress! 
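Before re-checking the hardware counters, here is a quick back-of-the-envelope sanity check of what we should expect once the dual capacity mode is disabled (a minimal, purely illustrative Python sketch using only the figures already collected above)#
routes_in_etcam = 788839              # iproute entries seen in ext_tcam_ipv4 above
capacity_dual = 2048000               # eTCAM capacity with the double capacity mode
capacity_single = capacity_dual // 2  # capacity once the scale disable configuration is committed
print(f"before {routes_in_etcam / capacity_dual:.0%}")    # ~39%
print(f"after  {routes_in_etcam / capacity_single:.0%}")  # ~77%
The same 788839 routes should therefore represent roughly twice the relative occupation, which is exactly what the outputs below confirm.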
RP/0/RP0/CPU0#5508-6.3.2#sh contr npu resources lem loc 0/7/CPU0HW Resource Information Name # lem OOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP -- Current Usage NPU-0 Total In-Use # 467173 (59 %) iproute # 435868 (55 %) ip6route # 31304 (4 %) mplslabel # 0 (0 %)-- SNIP --RP/0/RP0/CPU0#5508-6.3.2#sh contr npu resources lpm loc 0/7/CPU0HW Resource Information Name # lpm OOR Information NPU-0 Estimated Max Entries # 171722 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 5234 (3 %) iproute # 0 (0 %) ip6route # 5218 (3 %) ipmcroute # 0 (0 %)-- SNIP --RP/0/RP0/CPU0#5508-6.3.2#sh contr npu resources exttcamipv4 loc 0/7/CPU0HW Resource Information Name # ext_tcam_ipv4 OOR Information NPU-0 Estimated Max Entries # 1024000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 788839 (77 %) iproute # 788839 (77 %) ipmcroute # 0 (0 %)-- SNIP --RP/0/RP0/CPU0#5508-6.3.2#With URPF configured (and dual capacity mode disabled) and the same very large table we have# LEM# 59% LPM# 3% eTCAM# 77%To conclude this demo, enabling URPF implies deactivating the dual capacity mode and halves the eTCAM memory. Nevertheless, routes are also stored in LEM and LPM. A very large internet table can still fit in the system, even if the room for growth is reduced.Configuration and impact on scale Jericho+ systems (with NG eTCAM)The Jericho+ w/ eTCAM systems don’t need to disable the dual capacity mode to enable URPF.The same configuration as above can be re-used (except the hw-module commands).The impact on scale is not null but is significantly less than it was on the Jericho-based systems. Since the J+/eTCAM systems are qualified for 4M entries, which is much less than their actual capacity, the 25% impact doesn’t change the officially supported numbers# with URPF enabled we still support 4M routes in eTCAM.VerificationPackets dropped by URPF can be counted at the NPU level with#RP/0/RP0/CPU0#NCS5508-6.3.2#show contr npu stats traps-all instance 0 location 0/7/CPU0 | inc RpfRxTrapUcLooseRpfFail 0 84 0x54 32035 0 0 RxTrapUcStrictRpfFail 0 137 0x89 32035 0 0 ConclusionURPF loose mode can be configured on all NCS5500 systems. On Jericho w/ eTCAM, the impact is significant but we demonstrated that we still support a very large public table and a lot of host routes. On Jericho+ w/ eTCAM, URPF doesn’t affect the supported scale of 4M entries.", "url": "/tutorials/ncs5500-urpf/", "author": "Nicolas Fevrier", "tags": "iosxr, ncs5500, urpf, internet, scale" } , "tutorials-ncs5500-routing-in-vrf": { "title": "NCS5500 Routing in VRF", "content": " Understanding NCS5500 Resources S01E08 NCS5500 Routes in VRF Previously on “Understanding NCS5500 Resources” Let’s configure it on an eTCAM card And on a non-eTCAM card Allocation mode Conclusion You can find more content related to NCS5500 including routing memory management, URPF, ACLs, Netflow following this link.S01E08 NCS5500 Routes in VRFPreviously on “Understanding NCS5500 Resources”In previous posts… Well ok, you got it now. You can check all the former articles in the page here. We presented the different platforms, based on Qumran-MX, Jericho and Jericho+.
We detailed all the mechanisms used to optimize the route sorting inside the various memories and we also detailed the impact of features like URPF.Last week, we were asked if it’s possible “to run the Internet Feed inside a VRF/VPN”.It’s indeed a very good question since we used to have some platforms where the scale of routes inside a VRF was significantly different from the capability in the Global Routing Table.Short answer# yes, we support it. But it’s important to set it up correctly to avoid surprises.It has been explained extensively in the former posts, so we suppose you are now familiar with the logic of sorting and storing routes in different memories depending on the product (whether or not we have external TCAM) and on the prefix length.Let’s configure it on an eTCAM cardLet’s configure an interface and advertise 85k routes (IPv4/27). For this example, we will use a Jericho line card with eTCAM (NC55-24X100G-SE) running IOS XR 6.3.2.Note# L3VPN was available in some specific images but is officially supported only in 6.3.2. Before this release, it was possible to configure VRF-lite. What will be described below applies to both.vrf TESTaddress-family ipv4 unicast!!interface HundredGigE0/7/0/2cdpvrf TESTipv4 address 192.168.21.1 255.255.255.0!router bgp 100vrf TEST rd 113579#13579 address-family ipv4 unicast ! neighbor 192.168.21.2 remote-as 100 update-source HundredGigE0/7/0/2 address-family ipv4 unicast route-policy ROUTE-FILTER in maximum-prefix 8000000 75 route-policy PERMIT-ANY out ! !!!And the BGP routes received from my neighbor#RP/0/RP0/CPU0#5508-1-6.3.2#sh bgp vrf TEST summaryBGP VRF TEST, state# ActiveBGP Route Distinguisher# 113579#13579VRF ID# 0x60000003BGP router identifier 1.1.1.1, local AS number 100Non-stop routing is enabledBGP table state# ActiveTable ID# 0xe0000003 RD version# 85622BGP main routing table version 85622BGP NSR Initial initsync version 1 (Reached)BGP NSR/ISSU Sync-Group versions 85622/0BGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 85622 85622 85622 85622 85622 85622Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.21.2 0 100 355498 44 85622 0 0 00#39#18 85614RP/0/RP0/CPU0#5508-1-6.3.2#These routes being IPv4/27, they will be stored in the external TCAM.Let’s examine the memory resources#RP/0/RP0/CPU0#5508-1-6.3.2#sh contr npu resources all loc 0/7/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 85645 (11 %) iproute # 40 (0 %) ip6route # 0 (0 %) mplslabel # 85614 (11 %)-- SNIP --HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 251311 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 52 (0 %) iproute # 0 (0 %) ip6route # 38 (0 %) ipmcroute # 1 (0 %)-- SNIP --HW Resource Information Name # encapOOR Information NPU-0 Estimated Max Entries # 80000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 4 (0 %) ipnh # 2 (0 %) ip6nh # 2 (0 %) mplsnh # 0 (0 %)-- SNIP --HW Resource Information Name # ext_tcam_ipv4OOR Information NPU-0 Estimated Max Entries # 2048000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 85623 (4 %) iproute # 85635 (4 %)-- SNIP --HW Resource Information Name # fecOOR Information NPU-0 Estimated Max Entries # 126976 Red
Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 65 (0 %) ipnhgroup # 55 (0 %) ip6nhgroup # 10 (0 %)-- SNIP --HW Resource Information Name # ecmp_fecOOR Information NPU-0 Estimated Max Entries # 4096 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 0 (0 %) ipnhgroup # 0 (0 %) ip6nhgroup # 0 (0 %) -- SNIP --As expected, we found 85635 iproute entries in the external TCAM v4, but we also notice the presence of 85614 mplslabel entries in the LEM memory. So for each prefix learnt in the VRF TEST, we associate a label by default and it consumes one entry in the LEM even if it’s not an IPv4/32.This is indeed the default behavior# per-prefix label allocation…If your design permits it (and it should be the case 99% of the time), we advise you to modify the label allocation mode to “per-vrf”.Addendum#Several comments were received on this aspect. Let’s create a dedicated section on the allocation mode.RP/0/RP0/CPU0#5508-1-6.3.2#confRP/0/RP0/CPU0#5508-1-6.3.2(config)#RP/0/RP0/CPU0#5508-1-6.3.2(config)#router bgp 100RP/0/RP0/CPU0#5508-1-6.3.2(config-bgp)# vrf TESTRP/0/RP0/CPU0#5508-1-6.3.2(config-bgp-vrf)# address-family ipv4 unicastRP/0/RP0/CPU0#5508-1-6.3.2(config-bgp-vrf-af)# label mode per-vrfRP/0/RP0/CPU0#5508-1-6.3.2(config-bgp-vrf-af)#commitRP/0/RP0/CPU0#5508-1-6.3.2(config-bgp-vrf-af)#endRP/0/RP0/CPU0#5508-1-6.3.2#Let’s now check the impact on LEM#RP/0/RP0/CPU0#5508-1-6.3.2#sh contr npu resources all loc 0/7/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 46 (0 %) iproute # 40 (0 %) ip6route # 0 (0 %) mplslabel # 1 (0 %)-- SNIP --HW Resource Information Name # ext_tcam_ipv4OOR Information NPU-0 Estimated Max Entries # 2048000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 85623 (4 %) iproute # 85635 (4 %)-- SNIP -- It changes everything now that we allocate only one entry for the MPLS label.
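To make the saving concrete, here is a small and purely illustrative Python sketch (a simplified comparison based on the numbers observed above, not an exact model of how the LEM is organized) of the LEM cost of the two allocation modes#
vrf_prefixes = 85614         # routes received in VRF TEST (see the BGP summary above)
lem_capacity = 786432        # LEM Estimated Max Entries
per_prefix_entries = vrf_prefixes   # per-prefix mode# one label, hence one LEM entry, per prefix
per_vrf_entries = 1                 # per-vrf mode# a single label for the whole VRF
print(f"per-prefix {per_prefix_entries / lem_capacity:.0%} of LEM")  # ~11%
print(f"per-vrf    {per_vrf_entries / lem_capacity:.0%} of LEM")     # ~0%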
In practice, it makes it easy to push a full internet view and much more (for instance, 435k extra host routes).RP/0/RP0/CPU0#5508-1-6.3.2#sh bgp vrf TEST summaryBGP VRF TEST, state# ActiveBGP Route Distinguisher# 113579#13579VRF ID# 0x60000003BGP router identifier 1.1.1.1, local AS number 100Non-stop routing is enabledBGP table state# ActiveTable ID# 0xe0000003 RD version# 1877292BGP main routing table version 1877292BGP NSR Initial initsync version 1 (Reached)BGP NSR/ISSU Sync-Group versions 1877292/0BGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 1877292 1877292 1877292 1877292 1877292 1877292Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.21.2 0 100 1261510 148 1877292 0 0 02#18#40 1186410RP/0/RP0/CPU0#5508-1-6.3.2#sh route vrf TEST sumRoute Source Routes Backup Deleted Memory(bytes)local 1 0 0 240connected 1 0 0 240dagr 0 0 0 0bgp 100 1186410 0 0 284738400static 1 0 0 240Total 1186413 0 0 284739120RP/0/RP0/CPU0#5508-1-6.3.2#sh contr npu resources all loc 0/7/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 434784 (55 %) iproute # 434793 (55 %) ip6route # 0 (0 %) mplslabel # 1 (0 %)-- SNIP --HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 251311 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 52 (0 %) iproute # 0 (0 %) ip6route # 38 (0 %) ipmcroute # 1 (0 %)-- SNIP --HW Resource Information Name # encapOOR Information NPU-0 Estimated Max Entries # 80000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 4 (0 %) ipnh # 2 (0 %) ip6nh # 2 (0 %) mplsnh # 0 (0 %)-- SNIP --HW Resource Information Name # ext_tcam_ipv4OOR Information NPU-0 Estimated Max Entries # 2048000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green-- SNIP --Current Usage NPU-0 Total In-Use # 751665 (37 %) iproute # 751677 (37 %)-- SNIP --And on a non-eTCAM cardWe can also check how a non-eTCAM line card (NC55-36X100G) copes with the full view (we just filter out the IPv4/32 routes from the same BGP peer)#RP/0/RP0/CPU0#5508-1-6.3.2#sh run route-policy ROUTE-FILTERroute-policy ROUTE-FILTER if destination in (0.0.0.0/0 le 31) then pass else drop endifend-policyRP/0/RP0/CPU0#5508-1-6.3.2#RP/0/RP0/CPU0#5508-1-6.3.2#sh route vrf TEST summaryRoute Source Routes Backup Deleted Memory(bytes)local 1 0 0 240connected 1 0 0 240dagr 0 0 0 0bgp 100 751657 0 0 180397680static 1 0 0 240Total 751660 0 0 180398400RP/0/RP0/CPU0#5508-1-6.3.2#RP/0/RP0/CPU0#5508-1-6.3.2#sh dpa resources iproute loc 0/2/CPU0~iproute~ DPA Table (Id# 24, Scope# Global)--------------------------------------------------IPv4 Prefix len distributionPrefix Actual Prefix Actual /0 4 /1 0 /2 0 /3 0 /4 4 /5 0 /6 0 /7 0 /8 15 /9 13 /10 35 /11 106 /12 285 /13 550 /14 1066 /15 1880 /16 13419 /17 7773 /18 13636 /19 25026 /20 38261 /21 43073 /22 80751 /23 67073 /24 376990 /25 567 /26 2032 /27 4863 /28 15599 /29 16868 /30 41736 /31 52 /32 39 NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 NPU-4 NPU-5 In Use# 751716 751716 751716 751716 751716 751716 RP/0/RP0/CPU0#5508-1-6.3.2#sh contr npu resources all loc 0/2/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green OOR State Change Time # 2018.Apr.02 10#32#37 PDT -- SNIP --Current
Usage NPU-0 Total In-Use # 377026 (48 %) iproute # 349729 (44 %) ip6route # 0 (0 %) mplslabel # 1 (0 %) -- SNIP --HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 418492 Red Threshold # 95 Yellow Threshold # 80 OOR State # Yellow OOR State Change Time # 2018.Apr.02 09#39#01 PDT -- SNIP --Current Usage NPU-0 Total In-Use # 374731 (90 %) iproute # 374687 (90 %) ip6route # 38 (0 %) ipmcroute # 1 (0 %) -- SNIP --We notice inconsistencies in the LEM numbers between Total In-Use and the total below. It’s a cosmetic issue that will be handled in the next release.So, we can verify with this output that we are not consuming LEM entries with mplslabel for each prefix.Allocation modeWe received several comments just after posting this article, related to the allocation mode used here. Let’s try to summarize the key points.TL;DR# per-ce is the best bet if you don’t know which mode to select instead of the default per-prefix.Several use-cases involving “maximum-path eiBGP” can be broken by per-vrf allocation, so per-CE is recommended when possible. To compare the different options#Per-prefix (default) Good label diversity for core loadbalancing, able to get MPLS statistics (note# this last comment is not applicable for NCS5500 since we don’t have statistics in the “show mpls forwarding”, it’s more appropriate for CRS, ASR9k, …) Can cause scale issuesPer-CE (resilient) Single label allocated per CE, whatever the number of prefixes, improved scale EIBGP multipath, PIC is supported, single Label lookupPer-VRF Single label allocated for the whole VRF, thus additional lookup required to forward traffic Potential forwarding loop during local traffic diversion to support PIC No support for EIBGP multipathA lot of literature is available for free in places like Cisco Live (London 2013 BRKIPM-2265).Let’s verify that per-ce allocation does not change anything in the resource usage#RP/0/RP0/CPU0#5508-1-6.3.2#confRP/0/RP0/CPU0#5508-1-6.3.2(config)#router bgp 100RP/0/RP0/CPU0#5508-1-6.3.2(config-bgp)#vrf TESTRP/0/RP0/CPU0#5508-1-6.3.2(config-bgp-vrf)#address-family ipv4 unicastRP/0/RP0/CPU0#5508-1-6.3.2(config-bgp-vrf-af)# label mode per-?per-ce per-prefix per-vrfRP/0/RP0/CPU0#5508-1-6.3.2(config-bgp-vrf-af)# label mode per-ceRP/0/RP0/CPU0#5508-1-6.3.2(config-bgp-vrf-af)#commitRP/0/RP0/CPU0#5508-1-6.3.2(config-bgp-vrf-af)#endRP/0/RP0/CPU0#5508-1-6.3.2#RP/0/RP0/CPU0#5508-1-6.3.2#sh contr npu resources lem loc 0/2/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green OOR State Change Time # 2018.Apr.02 10#32#37 PDT-- SNIP --Current Usage NPU-0 Total In-Use # 377026 (48 %) iproute # 349730 (44 %) ip6route # 0 (0 %) mplslabel # 2 (0 %) -- SNIP -- RP/0/RP0/CPU0#5508-1-6.3.2#sh contr npu resources lem loc 0/7/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green -- SNIP --Current Usage NPU-0 Total In-Use # 42 (0 %) iproute # 40 (0 %) ip6route # 0 (0 %) mplslabel # 2 (0 %)ConclusionIt’s possible to learn a large number of routes in a VRF but it’s important to change the default label allocation mode to per-vrf or per-ce, otherwise we will create one label entry for each prefix learnt.Thanks to Lukas Mergenthaler, Fred Cuiller and Phil Bedard for their suggestions and comments.", "url": "/tutorials/ncs5500-routing-in-vrf/", "author": "Nicolas Fevrier", "tags": "iosxr, ncs5500, xr, internet, vrf" } ,
"tutorials-netflow-ncs5500-test-results": { "title": "Netflow on NCS5500: Test Results", "content": " Netflow Test Results Introduction The tests Impact of the packet size Impact of the port load / bandwidth Impact of sampling interval Impact of the number of flows Impact of the active / inactive timers Full chassis Stress tests Test conditions Conclusion Acknowledgements You can find more content related to NCS5500 including routing memory management, VRF, URPF, ACLs, following this link.IntroductionIn the previous blog post on NCS5500 Netflow, we presented the NF implementation details# from the software processes to the internal networks used to transport sampled traffic and the CPU protection mechanisms. Also, we provided some important information on the various packet sizes and how the sampling-interval question should be approached.Today we will present the results of a Netflow test campaign executed in the lab last month. We will see how the NCS5500 behaves when we push all the cursors to the maximum.The testsWe will try to check various parameters today and make sure they don’t have any side effects. For instance, we need to make sure we are not impacting the control plane# the routing protocols handled on the same Line Card CPU should not flap or lose updates. When possible, we will show the impact on the LC CPU.These tests are executed on 36x100G-A-SE line cards running IOS XR 6.3.15. The card is fully wired to an Ixia chassis, able to push line rate traffic over each interface simultaneously. In some specific tests, we will use a fully loaded chassis (16-slots) with a “snake” configuration# each port is looped to another one to re-inject traffic and load the chassis without requiring 574x 100G testing ports.The tests have been carried out configuring Netflow v9 on physical but also bundle interfaces. To make the test more realistic, we added URPF v4+v6 to the interface configuration, dampening, ingress+egress QoS and ingress v4+v6 ACLs, LDP and RSVP-TE, and finally multicast (IGMP and PIM).All the tests have been executed with both v4 and v6 simultaneously.We made sure that a lot of “new flows” were generated from Ixia using some traffic distribution knobs. Indeed, the “effort” of creating a new entry in the cache compared to updating an existing one is not the same for the nfsrv process.A picture of the test device console#Let’s get started…Impact of the packet sizeIn this first test, we will check the impact on the CPU load when pushing different packet sizes.Test parameters# each port is generating 200,000 PPS 1M flows total generated by Ixia sample-interval configured# 1#4000 active/inactive timers# 30s/30s timeout rate-limit 10000Variable parameter(s)# packet size# 64B, 128B, 256B, 512BMeasurement# CPU impact on nfproducer, nfsrv, netioResults#Comment# during all these tests, we noticed that CPU utilization is rarely completely linear.
We will need to accept some margin of error in the figures collected and presented here.Conclusion# the packet size doesn’t seem to influence the CPU load significantly# it’s something we are expecting for everything above 100B @ L3 since the NPU doesn’t sample more; no taildrops observed, no impact on other routing protocols (v4 or v6)Impact of the port load / bandwidthIn this second test, we are generating IMIX traffic (with packet size variable between 100B and 300B) and we simply adjust the interface load (simultaneously on the 36x ports).Test parameters# each port is generating 100B-300B packets 1M flows total generated by Ixia sample-interval configured# 1#4000 active/inactive timers# 30s/30s timeout rate-limit 10000Variable parameter(s)# interface load# 20%, 40%, 60%, 80%, 100%Measurement# CPU impact on nfproducer, nfsrv, netioResults#We can see, with some margin of error in the third measurement, that the CPU load for nfproducer and nfsrv is growing and reaches a “plateau”. This can be easily explained by the shaper used on each NPU. In the current release (6.3.2), we use a shaper of 133Mbps. We will increase this number in future releases (to 200Mbps).Conclusion# bandwidth utilization has, logically, an impact on the CPU load but no taildrops were observed, no impact on other routing protocols (v4 or v6)Impact of sampling intervalHere, we will use line rate traffic on the 36 ports and we will only change the sampling-interval in our configuration. It will logically have an impact on the number of sampled packets we push from the NPUs to the Line Card CPU. So we expect the CPU load to increase.Test parameters# each port is generating 512B packets each port is transmitting line rate we use all 36 ports 1M flows total generated by Ixia active/inactive timers# 30s/30s timeout rate-limit 10000Variable parameter(s)# sampling interval# 1#32K, 1#16K, 1#8K, 1#4K, 1#2K, 1#1K, 1#1Measurement# CPU impact on nfproducer, nfsrv, netioResults#Again, we will need to accept some margin of error in the measurement of the last test.We can see a progression in the CPU utilization, up to a plateau when we reach the shaper. After this value, even if we sample more aggressively, we are not pushing more sampled packets to the LC CPU.Conclusion# with constant traffic, the sampling-interval has, of course, a direct impact on the CPU load but no taildrops were observed, no impact on other routing protocols (v4 or v6)Impact of the number of flowsIn this fourth test, we will check what happens when we exceed the cache size. So we will generate more flows than the maximum cache limit we can configure (1 million entries).Test parameters# each port is generating 512B packets each port is transmitting line rate we use all 36 ports sample-interval configured# 1#4000 active/inactive timers# 30s/30s timeout rate-limit 10000Variable parameter(s)# Number of flows# 1M, 2M, 3MMeasurement# CPU impact on nfproducer, nfsrv, netioResults#Very straightforward analysis# no impact at all, the CPU load stays constant.Conclusion# once the cache reaches its limit, nothing happens# we don’t remove new entries or punt packets or anything; the impact on the LC CPU is not noticeable regardless of the actual number of flows passing through the box; of course, in such a situation, the traffic matrix based on these netflow records will be inaccurate; no taildrops observed, no impact on other routing protocols (v4 or v6)Impact of the active / inactive timersThis fifth test is now stressing a different aspect of the netflow protocol# the record generation.
When we manipulate the active and inactive timers, we are influencing the number of records generated.Test parameters# each port is generating 512B packets each port is transmitting line rate we use all 36 ports sample-interval configured# 1#4000 1M flows total generated by Ixia timeout rate-limit 50000# note it’s a very high number here, we don’t want to limit the amount of records and potentially pollute the testVariable parameter(s)# active/inactive timers# 30/30, 15/15, 5/5, 1/1Measurement# CPU impact on nfproducer, nfsrv, netioResults#It appears that only nfsrv is impacted by this test, even if we see a small increase in the netio process too.Quick refresher on the role of nfsvr# Receives NF record packets from nf_producer Creates a new flow cache entry if not already present, or updates an existing one (packet / byte count) for the flow monitor Periodically ages entries from the NF cache into NF export packets Sends the NF export packets to the NF collector using UDPIt is not a surprise to see the CPU load occupied by nfsvr increasing when we move to lower active/inactive timers. Since netio is used to transport the NF records, it’s also logical to see a progression of the CPU load when we transmit more and more of them.Conclusion# nfsvr and netio are the only processes impacted by more aggressive active/inactive timers; if we had kept a much lower NF record rate-limiter, it’s very likely we would have seen a plateau in the diagram; no taildrops observed, no impact on other routing protocols (v4 or v6)Full chassisIn this test, we used a fully loaded chassis (16 times 36x 100G connected with a “snake topology”) and we configured Netflow on all ports.Since netflow is handled at the line card CPU level, the number of line cards makes this test interesting for only one aspect# we used the same collector destination address everywhere.Test parameters# each port is generating 512B packets each port is transmitting line rate we use all 36 ports sample-interval configured# 1#4000 1M flows total generated by Ixia timeout rate-limit 10000 destination address of the collector# 173.173.173.1Picture of the testbed#Conclusion# nfsvr, nf_producer and netio being local to each line card, nothing noticeable; no taildrops observed, no impact on other routing protocols (v4 or v6)Stress testsFinally, we performed stress tests# on the line card# reloading it multiple times in a row on the processes# forcing manual restart of the various processes (nf_producer and nfsrv)RP/0/RP0/CPU0#fretta-64#process restart nfsvr location 0/5/CPU0RP/0/RP0/CPU0#Feb 25 10#59#05.975 UTC# sysmgr_control[66620]# %OS-SYSMGR-4-PROC_RESTART_NAME # User hsivasam (con0_RP0_CPU0) requested a restart of process nfsvr at 0/5/CPU0RP/0/RP0/CPU0#fretta-64# on the configuration# configuring and unconfiguring Netflow on interfaces dozens of times in a rowRP/0/RP0/CPU0#fretta-64#sh run int hundredGigE 0/5/0/11interface HundredGigE0/5/0/11 mtu 4484 service-policy input policy-backbone-default-in.v4 service-policy output policy-backbone-default-out-P-P.v4 ipv4 address 2.254.132.2 255.255.255.0 ipv4 verify unicast source reachable-via any ipv6 verify unicast source reachable-via any ipv6 address 2001#2#254#132##2/64 load-interval 30 flow ipv4 monitor ICX sampler ICX ingress flow ipv6 monitor ICX-v6 sampler ICX ingress dampening ipv4 access-group 121 ingress ipv6 access-group ipv6-edge-peer ingress!RP/0/RP0/CPU0#fretta-64#rollback configuration last 1RP/0/RP0/CPU0#fretta-64# on the interfaces# we forced flapping on the interfaces where netflow was
configured and checked the impact. Both with bundled interfaces and physical interfaces. clear cache record# forces the generation of all the records before flushing the cache entries. RP/0/RP0/CPU0#fretta-64#clear flow monitor ICX cache force-export location 0/5$Clear cache entries for this monitor on this location. Continue? [confirm]RP/0/RP0/CPU0#fretta-64#Results# no problem encountered during these tests.Test conditionsIn this last section, we simply copy paste a couple of show commands to demonstrate the routing scale used during this test.RP/0/RP0/CPU0#fretta-64#show bgp scale VRF# default Neighbors Configured# 636 Established# 636 Address-Family Prefixes Paths PathElem Prefix Path PathElem Memory Memory Memory IPv4 Unicast 795399 20859177 795399 112.27MB 1.71GB 81.17MB IPv6 Unicast 148856 745047 148856 22.71MB 62.53MB 15.47MB ------------------------------------------------------------------------------ Total 944255 21604224 944255 134.98MB 1.77GB 96.64MB Total VRFs Configured# 0RP/0/RP0/CPU0#fretta-64#show bgp sum BGP router identifier 193.251.245.8, local AS number 1000BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0000000 RD version# 189937114BGP main routing table version 189937114BGP NSR Initial initsync version 808735 (Reached)BGP NSR/ISSU Sync-Group versions 189937114/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 189937114 189937114 189937114 189937114 189937114 189937114Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd179.179.1.2 0 1000 6114 33893 189937114 0 0 04#54#45 10179.179.1.3 0 1000 6119 33945 189937114 0 0 04#54#48 10179.179.1.4 0 1000 6120 33947 189937114 0 0 04#54#48 10179.179.1.5 0 1000 6120 33944 189937114 0 0 04#54#44 10179.179.1.6 0 1000 6121 33947 189937114 0 0 04#54#47 10179.179.1.7 0 1000 6114 31913 189937114 0 0 05#12#25 10179.179.1.8 0 1000 6114 31913 189937114 0 0 05#12#30 10179.179.1.9 0 1000 6114 31911 189937114 0 0 05#12#26 10179.179.1.10 0 1000 6116 31910 189937114 0 0 05#12#30 10179.179.1.11 0 1000 6113 31910 189937114 0 0 05#12#25 10179.179.1.12 0 1000 6113 31912 189937114 0 0 05#12#25 10179.179.1.13 0 1000 6112 31913 189937114 0 0 05#12#28 10179.179.1.14 0 1000 6112 31912 189937114 0 0 05#12#28 10179.179.1.15 0 1000 6114 31914 189937114 0 0 05#12#27 10179.179.1.16 0 1000 6113 31911 189937114 0 0 05#12#29 10179.179.1.17 0 1000 6115 31912 189937114 0 0 05#12#28 10179.179.1.18 0 1000 6114 31912 189937114 0 0 05#12#29 10179.179.1.19 0 1000 6115 31914 189937114 0 0 05#12#26 10179.179.1.20 0 1000 6116 31912 189937114 0 0 05#12#30 10179.179.1.21 0 1000 6113 31911 189937114 0 0 05#12#26 10179.179.1.22 0 1000 6114 31911 189937114 0 0 05#12#28 10179.179.1.23 0 1000 6113 31912 189937114 0 0 05#12#26 10179.179.1.24 0 1000 6114 31911 189937114 0 0 05#12#30 10179.179.1.25 0 1000 6112 31912 189937114 0 0 05#12#25 10179.179.1.26 0 1000 6112 31910 189937114 0 0 05#12#26 10179.179.1.27 0 1000 6112 31910 189937114 0 0 05#12#25 10179.179.1.28 0 1000 6115 31914 189937114 0 0 05#12#25 10179.179.1.29 0 1000 6114 31913 189937114 0 0 05#12#26 10179.179.1.30 0 1000 6114 31911 189937114 0 0 05#12#26 10179.179.1.31 0 1000 6112 31914 189937114 0 0 05#12#26 10179.179.1.32 0 1000 6113 31914 189937114 0 0 05#12#28 10179.179.1.33 0 1000 6114 31915 189937114 0 0 05#12#29 10179.179.1.34 0 1000 6116 31913 189937114 0 0 05#12#30 10179.179.1.35 0 1000 6115 31914 189937114 0 0 05#12#29 10179.179.1.36 0 1000 6117 31915 189937114 0 0 
05#12#28 10179.179.1.37 0 1000 6113 31912 189937114 0 0 05#12#30 10179.179.1.38 0 1000 6112 31909 189937114 0 0 05#12#26 10179.179.1.39 0 1000 6114 31913 189937114 0 0 05#12#28 10179.179.1.40 0 1000 6114 31912 189937114 0 0 05#12#30 10179.179.1.41 0 1000 6112 31500 189937114 0 0 05#12#28 10179.179.1.42 0 1000 6113 31913 189937114 0 0 05#12#27 10179.179.1.43 0 1000 6113 31914 189937114 0 0 05#12#28 10179.179.1.44 0 1000 6114 31912 189937114 0 0 05#12#29 10179.179.1.45 0 1000 6112 31914 189937114 0 0 05#12#26 10179.179.1.46 0 1000 6115 31912 189937114 0 0 05#12#27 10179.179.1.47 0 1000 6115 31912 189937114 0 0 05#12#26 10179.179.1.48 0 1000 6113 31911 189937114 0 0 05#12#27 10179.179.1.49 0 1000 6114 31913 189937114 0 0 05#12#30 10179.179.1.50 0 1000 6115 31914 189937114 0 0 05#12#30 10179.179.1.51 0 1000 6236 31913 189937114 0 0 05#12#30 12000187.187.1.2 0 3000 6105 26291 189937114 0 0 05#38#33 1800187.187.1.3 0 3001 6106 26294 189937114 0 0 05#38#32 1800187.187.1.4 0 3002 6108 26294 189937114 0 0 05#38#32 1800187.187.1.5 0 3003 6106 26291 189937114 0 0 05#38#34 1800187.187.1.6 0 3004 6105 26292 189937114 0 0 05#38#33 1800187.187.1.7 0 3005 6106 26294 189937114 0 0 05#38#32 1800187.187.1.8 0 3006 6106 26293 189937114 0 0 05#38#32 1800187.187.1.9 0 3007 6107 26294 189937114 0 0 05#38#35 1800187.187.1.10 0 3008 6107 26295 189937114 0 0 05#38#31 1800187.187.1.11 0 3009 6107 26295 189937114 0 0 05#38#35 1800187.187.1.12 0 3010 6107 26294 189937114 0 0 05#38#30 1800187.187.1.13 0 3011 6105 26294 189937114 0 0 05#38#30 1800187.187.1.14 0 3012 6104 26292 189937114 0 0 05#38#32 1800187.187.1.15 0 3013 6106 26294 189937114 0 0 05#38#35 1800187.187.1.16 0 3014 6109 26296 189937114 0 0 05#38#30 1800187.187.1.17 0 3015 6105 26292 189937114 0 0 05#38#34 1800187.187.1.18 0 3016 6107 26294 189937114 0 0 05#38#34 1800187.187.1.19 0 3017 6106 26292 189937114 0 0 05#38#34 1800187.187.1.20 0 3018 6107 26293 189937114 0 0 05#38#33 1800187.187.1.21 0 3019 6108 26292 189937114 0 0 05#38#33 1800187.187.1.22 0 3020 6107 26295 189937114 0 0 05#38#33 1800187.187.1.23 0 3021 6105 26293 189937114 0 0 05#38#34 1800187.187.1.24 0 3022 6105 26291 189937114 0 0 05#38#32 1800187.187.1.25 0 3023 6106 26294 189937114 0 0 05#38#35 1800187.187.1.26 0 3024 6106 26293 189937114 0 0 05#38#34 1800187.187.1.27 0 3025 6108 26296 189937114 0 0 05#38#30 1800187.187.1.28 0 3026 6106 26293 189937114 0 0 05#38#34 1800187.187.1.29 0 3027 6107 26295 189937114 0 0 05#38#35 1800187.187.1.30 0 3028 6107 26295 189937114 0 0 05#38#33 1800187.187.1.31 0 3029 6106 26292 189937114 0 0 05#38#35 1800187.187.1.32 0 3030 6106 26294 189937114 0 0 05#38#31 1800187.187.1.33 0 3031 6106 26293 189937114 0 0 05#38#33 1800187.187.1.34 0 3032 6107 26295 189937114 0 0 05#38#34 1800187.187.1.35 0 3033 6107 26293 189937114 0 0 05#38#32 1800187.187.1.36 0 3034 6104 26292 189937114 0 0 05#38#31 1800187.187.1.37 0 3035 6105 26294 189937114 0 0 05#38#33 1800187.187.1.38 0 3036 6105 26293 189937114 0 0 05#38#30 1800187.187.1.39 0 3037 6107 26295 189937114 0 0 05#38#34 1800187.187.1.40 0 3038 6107 26295 189937114 0 0 05#38#31 1800187.187.1.41 0 3039 6105 26293 189937114 0 0 05#38#35 1800187.187.1.42 0 3040 6106 26293 189937114 0 0 05#38#32 1800187.187.1.43 0 3041 6105 26295 189937114 0 0 05#38#34 1800187.187.1.44 0 3042 6106 26296 189937114 0 0 05#38#35 1800187.187.1.45 0 3043 6108 26295 189937114 0 0 05#38#31 1800187.187.1.46 0 3044 6107 26294 189937114 0 0 05#38#31 1800187.187.1.47 0 3045 6105 26293 189937114 0 0 05#38#32 1800187.187.1.48 0 3046 6107 26295 189937114 
0 0 05#38#33 1800187.187.1.49 0 3047 6107 26295 189937114 0 0 05#38#31 1800187.187.1.50 0 3048 6105 26292 189937114 0 0 05#38#31 1800187.187.1.51 0 3049 6105 26291 189937114 0 0 05#38#31 1800188.1.1.2 0 1000 2669 12981 189937114 0 0 05#38#32 250188.1.1.3 0 1001 2668 12980 189937114 0 0 05#38#30 250188.1.1.4 0 1002 2669 12984 189937114 0 0 05#38#33 250188.1.1.5 0 1003 2669 12980 189937114 0 0 05#38#34 250188.1.1.6 0 1004 2669 12982 189937114 0 0 05#38#35 250188.1.1.7 0 1005 2669 12981 189937114 0 0 05#38#37 250188.1.1.8 0 1006 2669 12982 189937114 0 0 05#38#38 250188.1.1.9 0 1007 2652 11836 189937114 0 0 05#30#57 250188.1.1.10 0 1008 2669 12984 189937114 0 0 05#38#32 250188.1.1.11 0 1009 2669 12980 189937114 0 0 05#38#32 250188.1.1.12 0 1010 2668 12980 189937114 0 0 05#38#38 250188.1.1.13 0 1011 2669 12981 189937114 0 0 05#38#34 250188.1.1.14 0 1012 2669 12982 189937114 0 0 05#38#35 250188.1.1.15 0 1013 2669 12981 189937114 0 0 05#38#32 250188.1.1.16 0 1014 2670 12981 189937114 0 0 05#38#32 250188.1.1.17 0 1015 2670 12984 189937114 0 0 05#38#35 250188.1.1.18 0 1016 2669 12980 189937114 0 0 05#38#30 250188.1.1.19 0 1017 2670 12981 189937114 0 0 05#38#31 250188.1.1.20 0 1018 2669 12981 189937114 0 0 05#38#33 250188.1.1.21 0 1019 2669 12980 189937114 0 0 05#38#34 250188.1.1.22 0 1020 2669 12981 189937114 0 0 05#38#37 250188.1.1.23 0 1021 2668 12980 189937114 0 0 05#38#31 250188.1.1.24 0 1022 2670 12984 189937114 0 0 05#38#33 250188.1.1.25 0 1023 2669 12981 189937114 0 0 05#38#32 250188.1.1.26 0 1024 2670 12981 189937114 0 0 05#38#31 250188.1.1.27 0 1025 2668 12980 189937114 0 0 05#38#31 250188.1.1.28 0 1026 2670 12981 189937114 0 0 05#38#34 250188.1.1.29 0 1027 2668 12981 189937114 0 0 05#38#32 250188.1.1.30 0 1028 2669 12981 189937114 0 0 05#38#34 250188.1.1.31 0 1029 2669 12980 189937114 0 0 05#38#33 250188.1.1.32 0 1030 2669 12981 189937114 0 0 05#38#34 250188.1.1.33 0 1031 2669 12983 189937114 0 0 05#38#32 250188.1.1.34 0 1032 2669 12984 189937114 0 0 05#38#35 250188.1.1.35 0 1033 2653 11838 189937114 0 0 05#30#58 250188.1.1.36 0 1034 2669 12981 189937114 0 0 05#38#31 250188.1.1.37 0 1035 2652 11836 189937114 0 0 05#30#54 250188.1.1.38 0 1036 2670 12984 189937114 0 0 05#38#34 250188.1.1.39 0 1037 2670 12980 189937114 0 0 05#38#33 250188.1.1.40 0 1038 2670 12957 189937114 0 0 05#38#35 250188.1.1.41 0 1039 2669 12984 189937114 0 0 05#38#36 250188.1.1.42 0 1040 2670 12981 189937114 0 0 05#38#31 250188.1.1.43 0 1041 2669 12984 189937114 0 0 05#38#34 250188.1.1.44 0 1042 2652 11836 189937114 0 0 05#30#53 250188.1.1.45 0 1043 2669 12981 189937114 0 0 05#38#33 250188.1.1.46 0 1044 2669 12981 189937114 0 0 05#38#33 250188.1.1.47 0 1045 2668 12981 189937114 0 0 05#38#38 250188.1.1.48 0 1046 2653 11836 189937114 0 0 05#30#57 250188.1.1.49 0 1047 2653 11837 189937114 0 0 05#30#58 250188.1.1.50 0 1048 2668 12981 189937114 0 0 05#38#36 250188.1.1.51 0 1049 2668 12981 189937114 0 0 05#38#32 250188.1.1.52 0 1050 6092 25321 189937114 0 0 05#38#38 250188.1.1.53 0 1051 6093 25324 189937114 0 0 05#38#32 250188.1.1.54 0 1052 6092 25323 189937114 0 0 05#38#34 250188.1.1.55 0 1053 6090 25323 189937114 0 0 05#38#35 250188.1.1.56 0 1054 6092 25321 189937114 0 0 05#38#34 250188.1.1.57 0 1055 6091 25324 189937114 0 0 05#38#35 250188.1.1.58 0 1056 6074 24178 189937114 0 0 05#30#54 250188.1.1.59 0 1057 6092 25296 189937114 0 0 05#38#31 250188.1.1.60 0 1058 6091 25324 189937114 0 0 05#38#35 250188.1.1.61 0 1059 6092 25325 189937114 0 0 05#38#38 250188.1.1.62 0 1060 6091 25322 189937114 0 0 05#38#38 250188.1.1.63 0 
1061 6090 25323 189937114 0 0 05#38#34 250188.1.1.64 0 1062 6091 25324 189937114 0 0 05#38#38 250188.1.1.65 0 1063 6091 25323 189937114 0 0 05#38#35 250188.1.1.66 0 1064 6092 25325 189937114 0 0 05#38#37 250188.1.1.67 0 1065 6091 25324 189937114 0 0 05#38#38 250188.1.1.68 0 1066 6091 25324 189937114 0 0 05#38#35 250188.1.1.69 0 1067 6093 25324 189937114 0 0 05#38#32 250188.1.1.70 0 1068 6090 25323 189937114 0 0 05#38#34 250188.1.1.71 0 1069 6076 24179 189937114 0 0 05#30#56 250188.1.1.72 0 1070 6091 25323 189937114 0 0 05#38#34 250188.1.1.73 0 1071 6090 25323 189937114 0 0 05#38#33 250188.1.1.74 0 1072 6090 25323 189937114 0 0 05#38#37 250188.1.1.75 0 1073 6089 25321 189937114 0 0 05#38#38 250188.1.1.76 0 1074 6091 25298 189937114 0 0 05#38#34 250188.1.1.77 0 1075 6076 24178 189937114 0 0 05#30#56 250188.1.1.78 0 1076 6092 25324 189937114 0 0 05#38#33 250188.1.1.79 0 1077 6091 25323 189937114 0 0 05#38#34 250188.1.1.80 0 1078 6092 25300 189937114 0 0 05#38#32 250188.1.1.81 0 1079 6091 25322 189937114 0 0 05#38#33 250188.1.1.82 0 1080 6091 25297 189937114 0 0 05#38#32 250188.1.1.83 0 1081 6091 25297 189937114 0 0 05#38#34 250188.1.1.84 0 1082 6089 25295 189937114 0 0 05#38#35 250188.1.1.85 0 1083 6091 25297 189937114 0 0 05#38#34 250188.1.1.86 0 1084 6092 25298 189937114 0 0 05#38#34 250188.1.1.87 0 1085 6092 25299 189937114 0 0 05#38#34 250188.1.1.88 0 1086 6093 25300 189937114 0 0 05#38#34 250188.1.1.89 0 1087 6091 25298 189937114 0 0 05#38#35 250188.1.1.90 0 1088 6091 25299 189937114 0 0 05#38#33 250188.1.1.91 0 1089 6091 25296 189937114 0 0 05#38#32 250188.1.1.92 0 1090 6091 25298 189937114 0 0 05#38#31 250188.1.1.93 0 1091 6091 25297 189937114 0 0 05#38#33 250188.1.1.94 0 1092 6091 25298 189937114 0 0 05#38#31 250188.1.1.95 0 1093 6091 25296 189937114 0 0 05#38#32 250188.1.1.96 0 1094 6090 25296 189937114 0 0 05#37#38 250188.1.1.97 0 1095 6092 25298 189937114 0 0 05#38#31 250188.1.1.98 0 1096 6092 25299 189937114 0 0 05#38#35 250188.1.1.99 0 1097 6092 25298 189937114 0 0 05#38#30 250188.1.1.100 0 1098 6091 25297 189937114 0 0 05#38#35 250188.1.1.101 0 1099 6092 25298 189937114 0 0 05#38#34 250189.1.1.2 0 100 6195 25324 189937114 0 0 05#38#33 12000189.1.1.3 0 101 6100 25324 189937114 0 0 05#38#33 1800189.1.1.4 0 102 6100 25322 189937114 0 0 05#38#34 1800189.1.1.5 0 103 6099 25322 189937114 0 0 05#38#35 1800189.1.1.6 0 104 6100 25323 189937114 0 0 05#38#32 1800189.1.1.7 0 105 6101 25323 189937114 0 0 05#38#34 1800189.1.1.8 0 106 6101 25325 189937114 0 0 05#38#35 1800189.1.1.9 0 107 6101 25324 189937114 0 0 05#38#34 1800189.1.1.10 0 108 6100 25321 189937114 0 0 05#38#34 1800189.1.1.11 0 109 6100 25323 189937114 0 0 05#38#33 1800189.1.1.12 0 110 6099 25323 189937114 0 0 05#38#32 1800189.1.1.13 0 111 6100 25323 189937114 0 0 05#38#32 1800189.1.1.14 0 112 6101 25322 189937114 0 0 05#38#32 1800189.1.1.15 0 113 6101 25322 189937114 0 0 05#38#30 1800189.1.1.16 0 114 6101 25325 189937114 0 0 05#38#33 1800189.1.1.17 0 115 6099 25322 189937114 0 0 05#38#30 1800189.1.1.18 0 116 6100 25323 189937114 0 0 05#38#34 1800189.1.1.19 0 117 6101 25324 189937114 0 0 05#38#34 1800189.1.1.20 0 118 6101 25322 189937114 0 0 05#38#33 1800189.1.1.21 0 119 6100 25323 189937114 0 0 05#38#33 1800189.1.1.22 0 120 6101 25324 189937114 0 0 05#38#30 1800189.1.1.23 0 121 6100 25324 189937114 0 0 05#38#31 1800189.1.1.24 0 122 6100 25322 189937114 0 0 05#38#34 1800189.1.1.25 0 123 6100 25324 189937114 0 0 05#38#35 1800189.1.1.26 0 124 6100 25323 189937114 0 0 05#38#32 1800189.1.1.27 0 125 6099 25322 189937114 0 0 05#38#31 
1800189.1.1.28 0 126 6100 25323 189937114 0 0 05#38#32 1800189.1.1.29 0 127 6100 25324 189937114 0 0 05#38#33 1800189.1.1.30 0 128 6101 25323 189937114 0 0 05#38#31 1800189.1.1.31 0 129 6100 25322 189937114 0 0 05#38#34 1800189.1.1.32 0 130 6100 25321 189937114 0 0 05#38#33 1800189.1.1.33 0 131 6100 25322 189937114 0 0 05#38#31 1800189.1.1.34 0 132 6100 25322 189937114 0 0 05#38#31 1800189.1.1.35 0 133 6100 25323 189937114 0 0 05#38#30 1800189.1.1.36 0 134 6099 25322 189937114 0 0 05#38#34 1800189.1.1.37 0 135 6100 25323 189937114 0 0 05#38#33 1800189.1.1.38 0 136 6100 25322 189937114 0 0 05#38#35 1800189.1.1.39 0 137 6101 25323 189937114 0 0 05#38#35 1800189.1.1.40 0 138 6101 25325 189937114 0 0 05#38#34 1800189.1.1.41 0 139 6100 25323 189937114 0 0 05#38#33 1800189.1.1.42 0 140 6101 25325 189937114 0 0 05#38#35 1800189.1.1.43 0 141 6101 25325 189937114 0 0 05#38#35 1800189.1.1.44 0 142 6101 25323 189937114 0 0 05#38#32 1800189.1.1.45 0 143 6100 25323 189937114 0 0 05#38#34 1800189.1.1.46 0 144 6100 25324 189937114 0 0 05#38#32 1800189.1.1.47 0 145 6100 25323 189937114 0 0 05#38#31 1800189.1.1.48 0 146 6101 25324 189937114 0 0 05#38#32 1800189.1.1.49 0 147 6100 25323 189937114 0 0 05#38#35 1800189.1.1.50 0 148 6099 25372 189937114 0 0 05#38#31 1800189.1.1.51 0 149 6100 25322 189937114 0 0 05#38#33 1800189.1.1.52 0 150 6101 25323 189937114 0 0 05#38#31 1800189.1.1.53 0 151 6099 25322 189937114 0 0 05#38#30 1800189.1.1.54 0 152 6101 25324 189937114 0 0 05#38#34 1800189.1.1.55 0 153 6100 25323 189937114 0 0 05#38#30 1800189.1.1.56 0 154 6100 25323 189937114 0 0 05#38#35 1800189.1.1.57 0 155 6099 25321 189937114 0 0 05#38#33 1800189.1.1.58 0 156 6101 25323 189937114 0 0 05#38#31 1800189.1.1.59 0 157 6101 25323 189937114 0 0 05#38#30 1800189.1.1.60 0 158 6101 25324 189937114 0 0 05#38#30 1800189.1.1.61 0 159 6100 25324 189937114 0 0 05#38#31 1800189.1.1.62 0 160 6100 25323 189937114 0 0 05#38#32 1800189.1.1.63 0 161 6101 25322 189937114 0 0 05#38#36 1800189.1.1.64 0 162 6101 25323 189937114 0 0 05#38#35 1800189.1.1.65 0 163 6099 25323 189937114 0 0 05#38#32 1800189.1.1.66 0 164 6101 25323 189937114 0 0 05#38#32 1800189.1.1.67 0 165 6101 25323 189937114 0 0 05#38#31 1800189.1.1.68 0 166 6100 25323 189937114 0 0 05#38#31 1800189.1.1.69 0 167 6101 25325 189937114 0 0 05#38#34 1800189.1.1.70 0 168 6100 25322 189937114 0 0 05#38#32 1800189.1.1.71 0 169 6100 25322 189937114 0 0 05#38#30 1800189.1.1.72 0 170 6099 25322 189937114 0 0 05#38#30 1800189.1.1.73 0 171 6101 25324 189937114 0 0 05#38#34 1800189.1.1.74 0 172 6099 25323 189937114 0 0 05#38#30 1800189.1.1.75 0 173 6100 25323 189937114 0 0 05#38#35 1800189.1.1.76 0 174 6101 25325 189937114 0 0 05#38#30 1800189.1.1.77 0 175 6101 25322 189937114 0 0 05#38#34 1800189.1.1.78 0 176 6101 25323 189937114 0 0 05#38#33 1800189.1.1.79 0 177 6101 25323 189937114 0 0 05#38#31 1800189.1.1.80 0 178 6099 25323 189937114 0 0 05#38#33 1800189.1.1.81 0 179 6101 25324 189937114 0 0 05#38#32 1800189.1.1.82 0 180 6101 25322 189937114 0 0 05#38#35 1800189.1.1.83 0 181 6101 25324 189937114 0 0 05#38#35 1800189.1.1.84 0 182 6100 25323 189937114 0 0 05#38#32 1800189.1.1.85 0 183 6099 25322 189937114 0 0 05#38#33 1800189.1.1.86 0 184 6100 25323 189937114 0 0 05#38#34 1800189.1.1.87 0 185 6100 25321 189937114 0 0 05#38#34 1800189.1.1.88 0 186 6100 25324 189937114 0 0 05#38#31 1800189.1.1.89 0 187 6099 25321 189937114 0 0 05#38#35 1800189.1.1.90 0 188 6099 25321 189937114 0 0 05#38#34 1800189.1.1.91 0 189 6100 25323 189937114 0 0 05#38#33 1800189.1.1.92 0 190 6101 25324 
189937114 0 0 05#38#34 1800189.1.1.93 0 191 6100 25322 189937114 0 0 05#38#30 1800189.1.1.94 0 192 6100 25324 189937114 0 0 05#38#34 1800189.1.1.95 0 193 6101 25324 189937114 0 0 05#38#33 1800189.1.1.96 0 194 6100 25324 189937114 0 0 05#38#34 1800189.1.1.97 0 195 6100 25321 189937114 0 0 05#38#34 1800189.1.1.98 0 196 6101 25324 189937114 0 0 05#38#31 1800189.1.1.99 0 197 6101 25323 189937114 0 0 05#38#33 1800189.1.1.100 0 198 6102 25326 189937114 0 0 05#38#35 1800189.1.1.101 0 199 6100 25323 189937114 0 0 05#38#34 1800193.251.245.7 0 1000 31824 31480 189937114 0 0 06#12#07 477709193.251.246.7 0 1000 22499 21128 189937114 0 0 06#15#57 477708193.251.246.9 0 1000 25867 27470 189937114 0 0 06#20#53 477709193.251.246.10 0 1000 25869 27470 189937114 0 0 06#20#48 477709193.251.246.11 0 1000 26643 27549 189937114 0 0 16#34#06 477709193.251.246.12 0 1000 25844 27177 189937114 0 0 06#20#49 477709193.251.246.13 0 1000 25870 27327 189937114 0 0 06#20#54 477709193.251.246.14 0 1000 25868 27470 189937114 0 0 06#20#56 477709193.251.246.15 0 1000 25867 27471 189937114 0 0 06#20#53 477709193.251.246.16 0 1000 25869 27471 189937114 0 0 06#20#51 477709193.251.246.17 0 1000 25868 27472 189937114 0 0 06#20#53 477709193.251.246.18 0 1000 25867 27470 189937114 0 0 06#20#59 477709193.251.246.19 0 1000 25869 27472 189937114 0 0 06#20#52 477709193.251.246.20 0 1000 25869 27471 189937114 0 0 06#20#57 477709193.251.246.21 0 1000 25866 27471 189937114 0 0 06#20#57 477709193.251.246.22 0 1000 25869 27470 189937114 0 0 06#20#58 477709193.251.246.23 0 1000 25869 27471 189937114 0 0 06#20#52 477709193.251.246.24 0 1000 25868 27471 189937114 0 0 06#20#55 477709193.251.246.25 0 1000 25867 27470 189937114 0 0 06#20#57 477709193.251.246.26 0 1000 25866 27470 189937114 0 0 06#20#57 477709193.251.246.27 0 1000 25868 27472 189937114 0 0 06#20#51 477709193.251.246.28 0 1000 25793 27356 189937114 0 0 06#20#24 477709193.251.246.29 0 1000 25869 27469 189937114 0 0 06#20#52 477709193.251.246.30 0 1000 25868 27470 189937114 0 0 06#20#55 477709193.251.246.31 0 1000 25793 27356 189937114 0 0 06#20#32 477709193.251.246.32 0 1000 25395 27045 189937114 0 0 06#16#47 477709193.251.246.33 0 1000 25867 27471 189937114 0 0 06#20#45 477709193.251.246.34 0 1000 25867 27468 189937114 0 0 06#20#57 477709193.251.246.35 0 1000 25868 27471 189937114 0 0 06#20#56 477709193.251.246.36 0 1000 25869 27470 189937114 0 0 06#20#54 477709193.251.246.37 0 1000 25866 27470 189937114 0 0 06#20#56 477709193.251.246.38 0 1000 25867 27472 189937114 0 0 06#20#57 477709193.251.246.39 0 1000 25793 27355 189937114 0 0 06#20#35 477709193.251.246.40 0 1000 24130 26122 189937114 0 0 06#16#28 477709193.251.246.41 0 1000 24528 26580 189937114 0 0 06#20#28 477709193.251.246.42 0 1000 24603 26694 189937114 0 0 06#20#57 477709193.251.246.43 0 1000 24528 26578 189937114 0 0 06#20#30 477709193.251.246.44 0 1000 24603 26694 189937114 0 0 06#20#58 477709193.251.246.45 0 1000 24603 26694 189937114 0 0 06#21#01 477709193.251.246.46 0 1000 24603 26694 189937114 0 0 06#20#57 477709193.251.246.47 0 1000 24528 26580 189937114 0 0 06#20#29 477709193.251.246.48 0 1000 24603 26551 189937114 0 0 06#20#59 477709193.251.246.49 0 1000 24603 26692 189937114 0 0 06#21#00 477709 RP/0/RP0/CPU0#fretta-64#show bgp ipv6 uni sumBGP router identifier 193.251.245.8, local AS number 1000BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0800000 RD version# 3399620BGP main routing table version 3399620BGP NSR Initial initsync version 148858 (Reached)BGP 
NSR/ISSU Sync-Group versions 3399620/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 3399620 3399620 3399620 3399620 3399620 3399620Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd179#179#1##2 0 1000 6109 16590 3399620 0 0 05#12#17 1179#179#1##3 0 1000 6108 16590 3399620 0 0 05#12#18 1179#179#1##4 0 1000 6108 16590 3399620 0 0 05#12#13 1179#179#1##5 0 1000 6108 16590 3399620 0 0 05#12#20 1179#179#1##6 0 1000 6092 16576 3399620 0 0 05#12#25 1179#179#1##7 0 1000 6109 16591 3399620 0 0 05#12#23 1179#179#1##8 0 1000 6107 16591 3399620 0 0 05#12#26 1179#179#1##9 0 1000 6110 16590 3399620 0 0 05#12#16 1179#179#1##a 0 1000 6108 16590 3399620 0 0 05#12#22 1179#179#1##b 0 1000 6107 16589 3399620 0 0 05#12#13 1179#179#1##c 0 1000 6108 16590 3399620 0 0 05#12#18 1179#179#1##d 0 1000 6107 16590 3399620 0 0 05#12#27 1179#179#1##e 0 1000 6110 16591 3399620 0 0 05#12#13 1179#179#1##f 0 1000 6094 16575 3399620 0 0 05#12#14 1179#179#1##10 0 1000 6109 16590 3399620 0 0 05#12#25 1179#179#1##11 0 1000 6108 16590 3399620 0 0 05#12#14 1179#179#1##12 0 1000 6108 16590 3399620 0 0 05#12#28 1179#179#1##13 0 1000 6109 16590 3399620 0 0 05#12#21 1179#179#1##14 0 1000 6092 16575 3399620 0 0 05#12#15 1179#179#1##15 0 1000 6110 16590 3399620 0 0 05#12#27 1179#179#1##16 0 1000 6109 16590 3399620 0 0 05#12#25 1179#179#1##17 0 1000 6110 16588 3399620 0 0 05#12#23 1179#179#1##18 0 1000 6110 16591 3399620 0 0 05#12#25 1179#179#1##19 0 1000 6109 16591 3399620 0 0 05#12#20 1179#179#1##1a 0 1000 6093 16576 3399620 0 0 05#12#13 1179#179#1##1b 0 1000 6092 16577 3399620 0 0 05#12#21 1179#179#1##1c 0 1000 6108 16590 3399620 0 0 05#12#16 1179#179#1##1d 0 1000 6108 16588 3399620 0 0 05#12#14 1179#179#1##1e 0 1000 6091 16576 3399620 0 0 05#12#21 1179#179#1##1f 0 1000 6109 16592 3399620 0 0 05#12#24 1179#179#1##20 0 1000 6108 16590 3399620 0 0 05#12#24 1179#179#1##21 0 1000 6108 16591 3399620 0 0 05#12#22 1179#179#1##22 0 1000 6108 16591 3399620 0 0 05#12#14 1179#179#1##23 0 1000 6109 16591 3399620 0 0 05#12#22 1179#179#1##24 0 1000 6107 16589 3399620 0 0 05#12#23 1179#179#1##25 0 1000 6108 16590 3399620 0 0 05#12#18 1179#179#1##26 0 1000 6108 16590 3399620 0 0 05#12#22 1179#179#1##27 0 1000 6110 16593 3399620 0 0 05#12#13 1179#179#1##28 0 1000 6109 16590 3399620 0 0 05#12#24 1179#179#1##29 0 1000 6109 16590 3399620 0 0 05#12#20 1179#179#1##2a 0 1000 6109 16591 3399620 0 0 05#12#14 1179#179#1##2b 0 1000 6109 16592 3399620 0 0 05#12#26 1179#179#1##2c 0 1000 6109 16592 3399620 0 0 05#12#18 1179#179#1##2d 0 1000 6108 16591 3399620 0 0 05#12#19 1179#179#1##2e 0 1000 6109 16590 3399620 0 0 05#12#25 1179#179#1##2f 0 1000 6108 16590 3399620 0 0 05#12#19 1179#179#1##30 0 1000 6108 16590 3399620 0 0 05#12#23 1179#179#1##31 0 1000 6108 16591 3399620 0 0 05#12#14 1179#179#1##32 0 1000 6108 16592 3399620 0 0 05#12#23 1179#179#1##33 0 1000 6183 16592 3399620 0 0 05#12#23 2500187#187#1##2 0 3200 6092 14416 3399620 0 0 05#38#23 1187#187#1##3 0 3201 6090 14414 3399620 0 0 05#37#44 1187#187#1##4 0 3202 6092 14415 3399620 0 0 05#38#36 1187#187#1##5 0 3203 6089 14414 3399620 0 0 05#37#40 1187#187#1##6 0 3204 6090 14414 3399620 0 0 05#37#46 1187#187#1##7 0 3205 6091 14415 3399620 0 0 05#38#32 1187#187#1##8 0 3206 6090 14414 3399620 0 0 05#37#40 1187#187#1##9 0 3207 6089 14415 3399620 0 0 05#38#25 1187#187#1##a 0 3208 6092 14417 3399620 0 0 05#38#32 1187#187#1##b 0 3209 6091 14417 3399620 0 0 05#38#32 1187#187#1##c 0 3210 6091 14417 3399620 0 0 
05#38#26 1187#187#1##d 0 3211 6091 14417 3399620 0 0 05#38#29 1187#187#1##e 0 3212 6091 14416 3399620 0 0 05#38#34 1187#187#1##f 0 3213 6092 14418 3399620 0 0 05#38#32 1187#187#1##10 0 3214 6091 14416 3399620 0 0 05#38#25 1187#187#1##11 0 3215 6092 14417 3399620 0 0 05#38#32 1187#187#1##12 0 3216 6091 14415 3399620 0 0 05#38#24 1187#187#1##13 0 3217 6089 14415 3399620 0 0 05#37#58 1187#187#1##14 0 3218 6090 14416 3399620 0 0 05#38#24 1187#187#1##15 0 3219 6091 14414 3399620 0 0 05#37#52 1187#187#1##16 0 3220 6091 14415 3399620 0 0 05#37#46 1187#187#1##17 0 3221 6091 14415 3399620 0 0 05#38#33 1187#187#1##18 0 3222 6091 14416 3399620 0 0 05#38#37 1187#187#1##19 0 3223 6092 14415 3399620 0 0 05#38#23 1187#187#1##1a 0 3224 6090 14416 3399620 0 0 05#38#12 1187#187#1##1b 0 3225 6091 14417 3399620 0 0 05#38#22 1187#187#1##1c 0 3226 6091 14417 3399620 0 0 05#38#40 1187#187#1##1d 0 3227 6091 14416 3399620 0 0 05#38#24 1187#187#1##1e 0 3228 6090 14416 3399620 0 0 05#37#50 1187#187#1##1f 0 3229 6091 14415 3399620 0 0 05#37#49 1187#187#1##20 0 3230 6091 14416 3399620 0 0 05#38#41 1187#187#1##21 0 3231 6090 14416 3399620 0 0 05#38#30 1187#187#1##22 0 3232 6092 14415 3399620 0 0 05#38#34 1187#187#1##23 0 3233 6086 14412 3399620 0 0 05#37#46 1187#187#1##24 0 3234 6090 14416 3399620 0 0 05#38#22 1187#187#1##25 0 3235 6092 14418 3399620 0 0 05#38#35 1187#187#1##26 0 3236 6091 14416 3399620 0 0 05#37#40 1187#187#1##27 0 3237 6091 14416 3399620 0 0 05#38#41 1187#187#1##28 0 3238 6090 14413 3399620 0 0 05#38#35 1187#187#1##29 0 3239 6092 14415 3399620 0 0 05#38#40 1187#187#1##2a 0 3240 6091 14416 3399620 0 0 05#38#04 1187#187#1##2b 0 3241 6091 14415 3399620 0 0 05#37#35 1187#187#1##2c 0 3242 6090 14415 3399620 0 0 05#37#30 1187#187#1##2d 0 3243 6090 14416 3399620 0 0 05#38#23 1187#187#1##2e 0 3244 6093 14415 3399620 0 0 05#38#37 1187#187#1##2f 0 3245 6088 14413 3399620 0 0 05#38#20 1187#187#1##30 0 3246 6089 14415 3399620 0 0 05#38#08 1187#187#1##31 0 3247 6089 14415 3399620 0 0 05#37#59 1187#187#1##32 0 3248 6093 14416 3399620 0 0 05#38#00 1187#187#1##33 0 3249 6089 14415 3399620 0 0 05#37#35 1188#1#1##2 0 2200 6100 13400 3399620 0 0 05#38#25 1500188#1#1##3 0 2201 6101 13398 3399620 0 0 05#38#36 1500188#1#1##4 0 2202 6104 13397 3399620 0 0 05#38#43 1500188#1#1##5 0 2203 6101 13397 3399620 0 0 05#38#22 1500188#1#1##6 0 2204 6101 13395 3399620 0 0 05#38#23 1500188#1#1##7 0 2205 6103 13397 3399620 0 0 05#38#40 1500188#1#1##8 0 2206 6101 13397 3399620 0 0 05#38#24 1500188#1#1##9 0 2207 6103 13396 3399620 0 0 05#38#42 1500188#1#1##a 0 2208 6101 13397 3399620 0 0 05#38#24 1500188#1#1##b 0 2209 6103 13397 3399620 0 0 05#38#35 1500188#1#1##c 0 2210 6102 13397 3399620 0 0 05#38#29 1500188#1#1##d 0 2211 6101 13397 3399620 0 0 05#38#22 1500188#1#1##e 0 2212 6098 13398 3399620 0 0 05#37#57 1500188#1#1##f 0 2213 6101 13400 3399620 0 0 05#38#42 1500188#1#1##10 0 2214 6099 13396 3399620 0 0 05#37#54 1500188#1#1##11 0 2215 6100 13399 3399620 0 0 05#38#25 1500188#1#1##12 0 2216 6100 13398 3399620 0 0 05#38#26 1500188#1#1##13 0 2217 6100 13397 3399620 0 0 05#38#19 1500188#1#1##14 0 2218 6101 13397 3399620 0 0 05#38#05 1500188#1#1##15 0 2219 6101 13399 3399620 0 0 05#38#35 1500188#1#1##16 0 2220 6091 13398 3399620 0 0 05#38#23 1500188#1#1##17 0 2221 6093 13398 3399620 0 0 05#38#30 1500188#1#1##18 0 2222 6093 13397 3399620 0 0 05#38#38 1500188#1#1##19 0 2223 6091 13398 3399620 0 0 05#38#31 1500188#1#1##1a 0 2224 6094 13399 3399620 0 0 05#38#36 1500188#1#1##1b 0 2225 6093 13398 3399620 0 0 05#38#07 1500188#1#1##1c 0 2226 6093 
13398 3399620 0 0 05#38#37 1500188#1#1##1d 0 2227 6092 13396 3399620 0 0 05#38#38 1500188#1#1##1e 0 2228 6092 13395 3399620 0 0 05#37#49 1500188#1#1##1f 0 2229 6093 13397 3399620 0 0 05#38#39 1500188#1#1##20 0 2230 6094 13396 3399620 0 0 05#38#33 1500188#1#1##21 0 2231 6091 13396 3399620 0 0 05#38#30 1500188#1#1##22 0 2232 6093 13399 3399620 0 0 05#38#38 1500188#1#1##23 0 2233 6093 13398 3399620 0 0 05#38#31 1500188#1#1##24 0 2234 6092 13396 3399620 0 0 05#38#21 1500188#1#1##25 0 2235 6092 13398 3399620 0 0 05#38#36 1500188#1#1##26 0 2236 6093 13398 3399620 0 0 05#38#43 1500188#1#1##27 0 2237 6094 13397 3399620 0 0 05#38#37 1500188#1#1##28 0 2238 6092 13398 3399620 0 0 05#38#32 1500188#1#1##29 0 2239 6093 13394 3399620 0 0 05#38#24 1500188#1#1##2a 0 2240 6094 13399 3399620 0 0 05#38#40 1500188#1#1##2b 0 2241 6095 13397 3399620 0 0 05#38#41 1500188#1#1##2c 0 2242 6094 13396 3399620 0 0 05#38#34 1500188#1#1##2d 0 2243 6094 13398 3399620 0 0 05#38#42 1500188#1#1##2e 0 2244 6093 13398 3399620 0 0 05#38#32 1500188#1#1##2f 0 2245 6092 13397 3399620 0 0 05#38#38 1500188#1#1##30 0 2246 6092 13396 3399620 0 0 05#38#21 1500188#1#1##31 0 2247 6092 13396 3399620 0 0 05#38#43 1500188#1#1##32 0 2248 6093 13398 3399620 0 0 05#38#32 1500188#1#1##33 0 2249 6093 13398 3399620 0 0 05#38#38 1500188#1#1##34 0 2250 6092 13396 3399620 0 0 05#37#46 1500188#1#1##35 0 2251 6092 13395 3399620 0 0 05#37#48 1500188#1#1##36 0 2252 6094 13398 3399620 0 0 05#38#30 1500188#1#1##37 0 2253 6092 13396 3399620 0 0 05#38#24 1500188#1#1##38 0 2254 6093 13396 3399620 0 0 05#38#36 1500188#1#1##39 0 2255 6093 13396 3399620 0 0 05#38#31 1500188#1#1##3a 0 2256 6091 13395 3399620 0 0 05#38#34 1500188#1#1##3b 0 2257 6094 13398 3399620 0 0 05#38#32 1500188#1#1##3c 0 2258 6093 13398 3399620 0 0 05#38#40 1500188#1#1##3d 0 2259 6091 13398 3399620 0 0 05#38#34 1500188#1#1##3e 0 2260 6090 13395 3399620 0 0 05#38#24 1500188#1#1##3f 0 2261 6093 13396 3399620 0 0 05#38#21 1500188#1#1##40 0 2262 6093 13397 3399620 0 0 05#38#38 1500188#1#1##41 0 2263 6091 13396 3399620 0 0 05#38#25 1500188#1#1##42 0 2264 6093 13396 3399620 0 0 05#38#21 1500188#1#1##43 0 2265 6093 13398 3399620 0 0 05#38#23 1500188#1#1##44 0 2266 6095 13398 3399620 0 0 05#38#32 1500188#1#1##45 0 2267 6091 13398 3399620 0 0 05#38#34 1500188#1#1##46 0 2268 6092 13397 3399620 0 0 05#38#30 1500188#1#1##47 0 2269 6093 13398 3399620 0 0 05#38#25 1500188#1#1##48 0 2270 6091 13397 3399620 0 0 05#37#43 1500188#1#1##49 0 2271 6092 13397 3399620 0 0 05#38#34 1500188#1#1##4a 0 2272 6101 13395 3399620 0 0 05#38#20 1500188#1#1##4b 0 2273 6102 13397 3399620 0 0 05#37#44 1500188#1#1##4c 0 2274 6101 13398 3399620 0 0 05#38#38 1500188#1#1##4d 0 2275 6103 13397 3399620 0 0 05#38#30 1500188#1#1##4e 0 2276 6104 13397 3399620 0 0 05#38#42 1500188#1#1##4f 0 2277 6104 13397 3399620 0 0 05#38#34 1500188#1#1##50 0 2278 6102 13395 3399620 0 0 05#38#19 1500188#1#1##51 0 2279 6101 13397 3399620 0 0 05#38#37 1500188#1#1##52 0 2280 6102 13397 3399620 0 0 05#38#32 1500188#1#1##53 0 2281 6102 13398 3399620 0 0 05#38#37 1500188#1#1##54 0 2282 6101 13397 3399620 0 0 05#38#42 1500188#1#1##55 0 2283 6101 13396 3399620 0 0 05#38#23 1500188#1#1##56 0 2284 6101 13398 3399620 0 0 05#38#23 1500188#1#1##57 0 2285 6100 13397 3399620 0 0 05#38#21 1500188#1#1##58 0 2286 6102 13398 3399620 0 0 05#37#56 1500188#1#1##59 0 2287 6100 13398 3399620 0 0 05#38#22 1500188#1#1##5a 0 2288 6084 13397 3399620 0 0 05#38#35 1188#1#1##5b 0 2289 6086 13398 3399620 0 0 05#38#41 1188#1#1##5c 0 2290 6083 13397 3399620 0 0 05#38#20 1188#1#1##5d 
0 2291 6083 13397 3399620 0 0 05#38#24 1188#1#1##5e 0 2292 6084 13398 3399620 0 0 05#38#35 1188#1#1##5f 0 2293 6082 13396 3399620 0 0 05#38#21 1188#1#1##60 0 2294 6082 13398 3399620 0 0 05#38#27 1188#1#1##61 0 2295 6081 13394 3399620 0 0 05#37#41 1188#1#1##62 0 2296 6084 13398 3399620 0 0 05#38#20 1188#1#1##63 0 2297 6083 13398 3399620 0 0 05#38#20 1188#1#1##64 0 2298 6083 13397 3399620 0 0 05#38#21 1188#1#1##65 0 2299 6085 13396 3399620 0 0 05#38#35 1189#1#1##2 0 200 6086 13398 3399620 0 0 05#38#32 1189#1#1##3 0 201 6083 13398 3399620 0 0 05#38#27 1189#1#1##4 0 202 6084 13400 3399620 0 0 05#38#29 1189#1#1##5 0 203 6084 13397 3399620 0 0 05#38#24 1189#1#1##6 0 204 6081 13397 3399620 0 0 05#38#19 1189#1#1##7 0 205 6082 13398 3399620 0 0 05#38#43 1189#1#1##8 0 206 6084 13398 3399620 0 0 05#38#27 1189#1#1##9 0 207 6083 13396 3399620 0 0 05#38#41 1189#1#1##a 0 208 6082 13397 3399620 0 0 05#37#46 1189#1#1##b 0 209 6084 13398 3399620 0 0 05#38#33 1189#1#1##c 0 210 6084 13398 3399620 0 0 05#38#28 1189#1#1##d 0 211 6084 13397 3399620 0 0 05#38#36 1189#1#1##e 0 212 6082 13395 3399620 0 0 05#37#53 1189#1#1##f 0 213 6082 13399 3399620 0 0 05#38#27 1189#1#1##10 0 214 6083 13397 3399620 0 0 05#38#27 1189#1#1##11 0 215 6084 13397 3399620 0 0 05#38#36 1189#1#1##12 0 216 6084 13397 3399620 0 0 05#38#35 1189#1#1##13 0 217 6083 13396 3399620 0 0 05#38#23 1189#1#1##14 0 218 6086 13398 3399620 0 0 05#38#34 1189#1#1##15 0 219 6083 13397 3399620 0 0 05#38#32 1189#1#1##16 0 220 6085 13399 3399620 0 0 05#38#26 1189#1#1##17 0 221 6083 13397 3399620 0 0 05#38#32 1189#1#1##18 0 222 6084 13397 3399620 0 0 05#38#25 1189#1#1##19 0 223 6082 13397 3399620 0 0 05#37#50 1189#1#1##1a 0 224 6082 13398 3399620 0 0 05#38#30 1189#1#1##1b 0 225 6081 13398 3399620 0 0 05#38#29 1189#1#1##1c 0 226 6084 13399 3399620 0 0 05#38#27 1189#1#1##1d 0 227 6084 13398 3399620 0 0 05#38#32 1189#1#1##1e 0 228 6084 13393 3399620 0 0 05#37#52 1189#1#1##1f 0 229 6083 13394 3399620 0 0 05#37#51 1189#1#1##20 0 230 6083 13397 3399620 0 0 05#38#32 1189#1#1##21 0 231 6084 13398 3399620 0 0 05#38#36 1189#1#1##22 0 232 6082 13399 3399620 0 0 05#38#29 1189#1#1##23 0 233 6082 13398 3399620 0 0 05#38#29 1189#1#1##24 0 234 6084 13397 3399620 0 0 05#38#33 1189#1#1##25 0 235 6083 13397 3399620 0 0 05#38#33 1189#1#1##26 0 236 6084 13396 3399620 0 0 05#38#38 1189#1#1##27 0 237 6083 13398 3399620 0 0 05#38#36 1189#1#1##28 0 238 6083 13397 3399620 0 0 05#38#27 1189#1#1##29 0 239 6084 13398 3399620 0 0 05#38#03 1189#1#1##2a 0 240 6085 13399 3399620 0 0 05#38#34 1189#1#1##2b 0 241 6085 13398 3399620 0 0 05#38#23 1189#1#1##2c 0 242 6081 13397 3399620 0 0 05#38#22 1189#1#1##2d 0 243 6083 13397 3399620 0 0 05#38#20 1189#1#1##2e 0 244 6082 13397 3399620 0 0 05#37#57 1189#1#1##2f 0 245 6083 13397 3399620 0 0 05#38#26 1189#1#1##30 0 246 6083 13398 3399620 0 0 05#38#40 1189#1#1##31 0 247 6083 13397 3399620 0 0 05#37#58 1189#1#1##32 0 248 6084 13397 3399620 0 0 05#38#34 1189#1#1##33 0 249 6082 13397 3399620 0 0 05#38#20 12001#688#0#1##55 0 1000 13924 17188 3399620 0 0 22#38#34 141952001#688#0#2##70 0 1000 11075 12569 3399620 0 0 22#38#34 141952001#688#0#2##71 0 1000 11073 12570 3399620 0 0 22#38#27 141952001#688#0#2##72 0 1000 11075 12451 3399620 0 0 22#38#35 141952001#688#0#2##73 0 1000 11071 12570 3399620 0 0 22#38#29 141952001#688#0#2##74 0 1000 11072 12571 3399620 0 0 22#38#15 141952001#688#0#2##75 0 1000 11074 12569 3399620 0 0 22#38#01 141952001#688#0#2##76 0 1000 11074 12570 3399620 0 0 22#38#34 141952001#688#0#2##77 0 1000 11070 12569 3399620 0 0 22#38#32 
141952001#688#0#2##78 0 1000 11075 12450 3399620 0 0 22#38#35 141952001#688#0#2##79 0 1000 11073 12569 3399620 0 0 22#38#32 141952001#688#0#2##80 0 1000 11073 12570 3399620 0 0 22#38#04 141952001#688#0#2##81 0 1000 11073 12571 3399620 0 0 22#38#38 141952001#688#0#2##82 0 1000 11073 12570 3399620 0 0 22#38#28 141952001#688#0#2##83 0 1000 11071 12569 3399620 0 0 22#38#31 141952001#688#0#2##84 0 1000 11072 12569 3399620 0 0 22#38#32 141952001#688#0#2##85 0 1000 11073 12569 3399620 0 0 22#38#34 141952001#688#0#2##86 0 1000 11072 12569 3399620 0 0 22#38#34 141952001#688#0#2##87 0 1000 11073 12569 3399620 0 0 22#38#30 141952001#688#0#2##88 0 1000 11077 12569 3399620 0 0 22#38#24 141952001#688#0#2##89 0 1000 11072 12450 3399620 0 0 22#38#30 141952001#688#0#2##90 0 1000 11072 12571 3399620 0 0 22#38#36 141952001#688#0#2##91 0 1000 11071 12570 3399620 0 0 22#38#30 141952001#688#0#2##92 0 1000 11073 12451 3399620 0 0 22#38#05 141952001#688#0#2##93 0 1000 11076 12570 3399620 0 0 22#38#32 141952001#688#0#2##94 0 1000 11072 12569 3399620 0 0 22#38#38 141952001#688#0#2##95 0 1000 11072 12570 3399620 0 0 22#38#00 141952001#688#0#2##96 0 1000 11073 12570 3399620 0 0 22#38#32 141952001#688#0#2##97 0 1000 10684 11480 3399620 0 0 22#38#36 141952001#688#0#2##98 0 1000 11074 12570 3399620 0 0 22#38#29 141952001#688#0#2##99 0 1000 11074 12569 3399620 0 0 22#38#34 141952001#688#0#2##100 0 1000 11072 12569 3399620 0 0 22#38#38 141952001#688#0#2##101 0 1000 11059 12557 3399620 0 0 22#38#07 141952001#688#0#2##102 0 1000 11057 12556 3399620 0 0 22#38#27 141952001#688#0#2##103 0 1000 11059 12556 3399620 0 0 22#38#38 141952001#688#0#2##104 0 1000 11056 12555 3399620 0 0 22#38#30 141952001#688#0#2##105 0 1000 10958 11736 3399620 0 0 22#38#29 141952001#688#0#2##106 0 1000 11058 12557 3399620 0 0 22#37#58 141952001#688#0#2##107 0 1000 11059 12438 3399620 0 0 22#37#59 141952001#688#0#2##108 0 1000 10959 11617 3399620 0 0 22#38#29 141952001#688#0#2##109 0 1000 11059 12555 3399620 0 0 22#38#27 141952001#688#0#2##110 0 1000 11058 12555 3399620 0 0 22#38#39 141952001#689#0#2##55 0 1000 6945 13253 3399620 0 0 22#38#26 14195RP/0/RP0/CPU0#fretta-64RP/0/RP0/CPU0#fretta-64#show mpls ldp summary AFIs # IPv4 Routes # 1391 prefixes Bindings # 1391 prefixes Local # 1391 Remote # 2041 Neighbors # 151 (151 NSR) Hello Adj # 184 Addresses # 88 Interfaces# 35 LDP configuredRP/0/RP0/CPU0#fretta-64#ConclusionWe proved today that Netflow implementation in IOS XR 6.3.15 (and following 6.3.2) is solid and can be stressed without noticing any side effect. 
We pushed the scale, the sampling-interval, the timers and rate-limiters, etc., and obtained consistent behavior and results.AcknowledgementsMany thanks to Hari Baskar Sivasamy and Benoit Mercier Des Rochettes for defining and executing these tests.Thanks also to Raj Kalavendi and Jisu Bhattacharya for their comments and guidance.", "url": "/tutorials/netflow-ncs5500-test-results/", "author": "Nicolas Fevrier", "tags": "iosxr, ncs5500, netflow, xr" } , "tutorials-mixing-base-and-scale-lc-in-ncs5500": { "title": "Mixing Base and Scale Line Cards in the same Chassis", "content": " Understanding NCS5500 Resources S01E09 Mixing Base and Scale Line Cards in the same Chassis Previously on “Understanding NCS5500 Resources” Selective Route/FIB Download Configuration examples Verification Network design considerations Caveats / Gotchas Conclusion You can find more content related to NCS5500 including routing memory management, VRF, URPF, ACLs, Netflow following this link.S01E09 Mixing Base and Scale Line Cards in the same ChassisPreviously on “Understanding NCS5500 Resources”In previous posts… We detailed how routes are stored in systems with or without external TCAM, based on Jericho or Jericho+ forwarding ASICs, with URPF, in VRFs, …You can check all the former articles in the page here.Let’s address today a recurring question# “What happens if you mix different types of line cards in a chassis?” and we will add the question people should ask more often# “Where do I position the Base line cards and the Scale line cards in my network design?”.We have today 7 different line card types offering a variety of services# MACsec encryption grey or colored (IPoDWDM) ports 40G or 40G/100G capable ports with different routing scales (Jericho, Jericho with eTCAM, Jericho+ with eTCAM)and soon we will see even more diversity with the introduction of the MOD line cards# different framing capability (LAN Phy, WAN Phy, OTN) different colored optics (ACO or DCO) and 25G SFP28 native ports or 4x25G breakoutBelow is an 8-minute video covering this topic.https#//www.youtube.com/watch?v=FEDFNyuBj3gSelective Route/FIB DownloadFirst, let’s clarify the following doubts# no, we are not limiting the route scale at the level of the lowest common denominator# we will use specific features to organize the route distribution between Base and Scale line cards. no, the system does not break, nor are packets punted to the RP CPU or LC CPU when we exceed the limit of a given memory type# the level of abstraction (DPA) used between the routing process and the hardware FIB is controlling all the resources and will refuse to push more entries if a database is full.Now, let’s introduce this feature, used to granularly decide where the routes should be populated.It’s named “Selective Route Download” and the name is self-descriptive #) We will use it to decide where a given route is meant to be programmed# in the Base line cards, in the Scale line cards, or both.It follows some simple principles# all the IGP routes (dynamic like ISIS or OSPF, or static/connected) will be programmed in all types of line cards; by default, BGP routes are programmed in all line card types. “Default” here implies the operator didn’t do anything special in the configuration. BGP paths can be colored as “external-reach-only” and they will only be programmed in -SE line cards (with an external TCAM). The coloring is defined locally via configuration.Configuration examplesWe can decide to mark these BGP paths in multiple ways.
Here are two configurations example, but it’s not limited to them.In the first example, we want to mark “external-reach” all BGP routes received from a specific peer.We will proceed in three steps#1 - he routes received from this peer with a specific community 100#1112 - then we will match this particular community to set the path-color external-reach3 - finally, we call this policy in a table-policy.route-policy PEER-EXT  set community PEER-EXT-commend-policy!community-set PEER-EXT-comm 100#111end-set!route-policy HILO-FIB  if community matches-any PEER-EXT-comm then    set path-color external-reach    pass  else    pass  endifend-policy!router bgp 100address-family ipv4 unicast  table-policy HILO-FIB!neighbor 192.168.100.151  address-family ipv4 unicast   route-policy PEER-EXT in   maximum-prefix 8000000 75   route-policy PERMIT-ANY outIn this second example, we don’t differentiate the routes from their peer of origin but more from their nature (for instance the prefix length).route-policy HILO-FIB  if destination in (100.0.0.0/8 le 24, 3000##/8 le 64) then    set path-color external-reach    pass  else    pass  endifend-policy!router bgp 100address-family ipv4 unicast table-policy HILO-FIB!neighbor 192.168.100.151 address-family ipv4 unicast route-policy PEER-EXT in maximum-prefix 8000000 75 route-policy PERMIT-ANY outWe could easily imagine other situations where the BGP routes would be marked with specific communities by internet border routers and the local router would take reach-only marking decision based on these communities.VerificationThe route now carries the color as a locally-significant attribute and we can verify it with simple “show route” CLI#RP/0/RP0/CPU0#NCS5508#sh route 1.0.144.0/20Routing entry for 1.0.144.0/20 Known via ~bgp 100~, distance 200, metric 0, external-reach-lc-only Tag 2914, type internal Installed Nov 27 22#48#56.925 for 00#00#45 Routing Descriptor Blocks 192.168.100.151, from 192.168.100.151 Route metric is 0 No advertising protos.RP/0/RP0/CPU0#NCS5508#RP/0/RP0/CPU0#NCS5508#sh cef 1.0.144.0/20 detail1.0.144.0/20, version 25081094, external-reach-lc-only, internal 0x5000001 0x0 (ptr 0x8f485390) [1], 0x0 (0x0), 0x0 (0x0) Updated Nov 27 22#48#56.929 local adjacency 192.168.100.151 Prefix Len 20, traffic index 0, precedence n/a, priority 4 gateway array (0x8e0e9250) reference count 655801, flags 0x2010, source rib (7), 0 backups [1 type 3 flags 0x48501 (0x8e18f758) ext 0x0 (0x0)] LW-LDI[type=0, refc=0, ptr=0x0, sh-ldi=0x0] gateway array update type-time 1 Nov 27 22#48#56.929 LDI Update time Nov 27 22#48#56.929 via 192.168.100.151/32, 2 dependencies, recursive [flags 0x6000] path-idx 0 NHID 0x0 [0x8e0bf1b0 0x0] next hop 192.168.100.151/32 via 192.168.100.151/32 Load distribution# 0 (refcount 1) Hash OK Interface Address 0 Y MgmtEth0/RP0/CPU0/0 192.168.100.151RP/0/RP0/CPU0#NCS5508#Network design considerationsTo understand where the different types of line cards should be positioned in a network architecture, it is important to understand how the packet routing decision happens. The NCS5500 behaves differently than traditional IOS XR platforms like CRS, ASR9000 or NCS6000. In these platforms, we used a two-step lookup model where both the ingress and egress line card ASICs/NPUs are involved. 
Let’s take an ASR9000 as an example# both the -TR and -SE line cards have the full routing view, and the difference in positioning the two is related to specific features like QoS, not to the route scale.In the NCS5500, -SE and non-SE line cards have the same QoS capability but different route scale. The lookup happens mostly in the ingress pipeline of the Jericho/Jericho+ ASIC as shown in this diagram.It implies that the ingress blocks need access to the database where the routing information is stored (route, next-hop, load-balancing info, …).To decide where each type of line card should be positioned, you have to take the perspective of the packet #) When a packet needs to reach some destination on the internet, the ingress line card must have the full view and it’s expected it will be the high(er)-scale board.In the following diagram, we illustrate it with two routers. One is facing the internet (peering role), and the other one is connected to an internet content server (DC role). Both are connected to the core network (full IP or MPLS).On the Peering router# packets received from the core (left to right) and targeted to the internet require a lookup in the full view, so we need a Scale line card here. packets received from the internet (right to left) and targeted to a host address simply need to find the internal routes, so a Base line card can do this job.Considering that current DWDM and MACsec line cards are non-SE, they can be used to reach distant internet exchange points (IXPs) and, if needed, can be encrypted. But if we want to position such cards for core-facing interfaces, i.e. with 100G/200G colored ports and/or MACsec encryption, we will need the MOD-SE line card (planned for the end of calendar year 2018).On the DC router# packets received from the content server (left to right) and targeted to the internet will also need the full public view. That’s why we advise using a Scale line card here. Alternative options exist to forward packets to some upper layer of the network where the full internet view will be available, or to filter the routes learnt to limit the FIB programming to the entries actually carrying traffic. But they require additional study. Most users will position Scale line cards for the sake of simplicity (both from a design and an operation perspective). packets received from the core (right to left) are targeted to the content server, so the routing table required here is minimal. A Base line card is sufficient.Another example is where CE devices need to reach other CE devices through an MPLS VPN network.On both PE routers, ingress interfaces will require Scale line cards since they need to know all the VRF routes plus the internal routes and transport labels. On the core-facing interfaces, we can position non-SE line cards since they only need to store the local/connected CE routes.Caveats / Gotchas The color marking of the paths is done locally on the router; that implies it should be done on all routers (it’s not something advertised over the network through the BGP updates). In the case of a link bundle, one port could be on a base line card while another port could be on a scale line card. In such a situation the parser will refuse to commit the configuration. The aggregated ports must be bundled in the same type of line card (but not necessarily on the same line card). It has been asked several times if this feature can be used on fixed form-factor chassis using the Jericho ASIC without eTCAM to filter the routes programmed.
It’s not a tested/supported scenario and we suggest using route filtering insteadConclusionSelective Route Download permits a very flexible selection of the BGP prefixes that need to be programmed only in the -SE line cards. It’s easy to configure and operate, and it allows the mix of Base and Scale line cards in the same chassis.", "url": "/tutorials/mixing-base-and-scale-LC-in-NCS5500/", "author": "Nicolas Fevrier", "tags": "ncs5500, scale, base, chassis, ios xr, xr, SRD" } , "tutorials-ncs5500-things-to-know": { "title": "NCS5500 Things to Know / Q&A", "content": " NCS5500 Things to know NCS5500 (and some other XR platforms)# Good to Know Understanding IOS XR Releases Specific release per NCS5500 platform? Understanding Product IDs Understanding NCS5500 slot numbering Products, ASICs and route scale Supported optics Ethernet only on grey interfaces Breakout-cable Interface identification ER4L Quad ? BGP Flowspec You can find more content related to NCS5500 including routing memory management, VRF, URPF, ACLs, Netflow following this link.NCS5500 (and some other XR platforms)# Good to KnowWe frequently see the same questions around the NCS5500 platform and its software. Individually, they probably don’t deserve a dedicated article, so we created this specific post to relay them and bring some answers.We will keep this one updated regularly.Revision# 2018-07-09# First version 2018-07-11# Add link to software center, and fix the optics support URL 2018-07-26# Add section on PIDs + section on the scale per LC / systems 2019-04-18# Add RSFEC/ER4L section 2019-06-03# Clarification on the Quad concept in NCS55A2-MOD systems 2019-10-23# BGP Flowspec supportUnderstanding IOS XR ReleasesFirst, you may have heard that IOS XR exists in two flavors, the 32-bit and the 64-bit versions.In the case of an ASR9000, you may use one or the other, but with care since some hardware may not be supported in one or the other. For the NCS5500, things are simpler# it only supports the 64-bit version. We introduced the platform with IOS XR 6.0.0 and in 64-bit only.Note# platforms are not necessarily “participating” in all “trains”. For instance, the 6.4.x is available for XRv9000 and ASR9000 but not for the NCS5500. The NCS5500 portfolio can use 6.0.x, 6.1.x, 6.2.x and 6.3.x. It will be part of the 6.5.x train with the upcoming release 6.5.1 for example.Image name formatThe format used in IOS XR release naming always follows X.Y.Zz. The meaning of X, Y and Z is detailed in this CCO web page. For example 6.1.3, 6.1.31 and 6.2.25.It’s an unwritten rule that x.y.z5 are “business releases” that bring the same features as x.y.z but with many bug fixes. Therefore, 6.2.25 comes after 6.2.2 but before 6.2.3 (even if 3<25). x.y.z1, x.y.z2 or x.y.z3 (like 6.1.31) are releases built specifically with a group of features and, most of the time, for a specific list of customers. They are supposed to be run with a defined and scoped use-case. They are usually not available on the Cisco Software Download web site.What EFT, GA, etc. meanDifferent images are qualified with these acronyms# FCS stands for First Customer Shipment# it’s the official image build published on the software center. EFT stands for Early Field Trial# it’s pre-release code provided to specific customers for an approved test case. It’s an image built and tested specifically for a specific usage. It can only be given to customers via their account team and in agreement with the BU/Engineering.
LA means Limited availability and is usually referring to images available for specific customers, images you will not find on the Cisco Software download website. GA stands for General Availability and refers to images available to all customers. Examples# 6.1.3, 6.2.25, 6.2.3, 6.3.2. EMR means Extended Maintenance Release and represents a specific images in a train which will be supported for longer time.Specific release per NCS5500 platform?We have a single image for the NCS5500 entire family, regardless it’s a fixed-form system or a chassis, could it be 4, 8 and 16 slots, and regardless of the forwarding ASIC (Qumran-MX, Jericho or Jericho+).Pick the link to 5508 image as indicated above, it’s the link to all systems, not only for 5508.Understanding Product IDsEach router, each part and each license has its own PID and it could be confusing.First, the PID finish with an equal character (“=”) represents a spare part.Second, very ofter you will not find in the ordering or maintenance tool the same PID that the one you see in your router when using “show platform”. It’s because the tool are using the “bundle PID” made of the product itself and the RTU (right to use) license.RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh platformNode Type State Config state--------------------------------------------------------------------------------0/1/CPU0 NC55-36X100G-A-SE IOS XR RUN NSHUT0/1/NPU0 Slice UP0/1/NPU1 Slice UP0/1/NPU2 Slice UP0/1/NPU3 Slice UP0/2/CPU0 NC55-36X100G IOS XR RUN NSHUT0/2/NPU0 Slice UP0/2/NPU1 Slice UP0/2/NPU2 Slice UP0/2/NPU3 Slice UP0/2/NPU4 Slice UP0/2/NPU5 Slice UP0/6/CPU0 NC55-24H12F-SE IOS XR RUN NSHUT0/6/NPU0 Slice UP0/6/NPU1 Slice UP0/6/NPU2 Slice UP0/6/NPU3 Slice UP0/7/CPU0 NC55-24X100G-SE IOS XR RUN NSHUT0/7/NPU0 Slice UP0/7/NPU1 Slice UP0/7/NPU2 Slice UP0/7/NPU3 Slice UP0/RP0/CPU0 NC55-RP(Active) IOS XR RUN NSHUT0/RP1/CPU0 NC55-RP(Standby) IOS XR RUN NSHUT0/FC0 NC55-5508-FC OPERATIONAL NSHUT0/FC1 NC55-5508-FC OPERATIONAL NSHUT0/FC3 NC55-5508-FC OPERATIONAL NSHUT0/FC5 NC55-5508-FC OPERATIONAL NSHUT0/FT0 NC55-5508-FAN OPERATIONAL NSHUT0/FT1 NC55-5508-FAN OPERATIONAL NSHUT0/FT2 NC55-5508-FAN OPERATIONAL NSHUT0/SC0 NC55-SC OPERATIONAL NSHUT0/SC1 NC55-SC OPERATIONAL NSHUTRP/0/RP0/CPU0#TME-5508-1-6.3.2#The line cards NC55-36X100G-A-SE is oftened seen in the CCO tool as NC55-36X100G-A-SB.NC55-36X100G-A-SB being a bundle made of# NC55-36H-SE-RTU (right to use license) NC55-36X100G-A-SE (line card)Let’s summarize the product IDs in this chart# PID Description Bundle? 
NCS-5508 NCS5500 8 Slot Single Chassis   NCS-5516 NCS5500 8 Slot Single Chassis   NCS-5504 NCS5500 4 Slot Single Chassis   NC55-RP NCS 5500 Route Processor   NC55-SC NCS 5500 System Controller   NC55-PWR-3KW-AC NCS 5500 AC 3KW Power Supply   NC55-PWR-3KW-DC NCS 5500 DC 3KW Power Supply   NC55-5508-FAN NCS 5508 Fan Tray   NC55-5508-FC NCS 5508 Fabric Card   NC55-5516-FAN NCS 5508 Fan Tray   NC55-5516-FC NCS 5516 Fabric Card   NC55-5504-FAN NCS 5504 Fan Tray   NC55-5504-FC NCS 5504 Fabric Card   NC55-36X100G NCS 5500 36X100GE BASE NC55-36X100G-BA / NC55-36X100G-U-BA NC55-36X100G-S NCS 5500 36x100G MACsec Line Card NC55-36X100G-BM / NC55-36X100G-U-BM NC55-24X100G-SE NCS 5500 24x100G Scaled Line Card NC55-24X100G-SB NC55-18H18F NCS 5500 18X100G and 18X40GE Line Card NC55-18H18F-BA NC55-24H12F-SE NCS 5500 24X100GE and 12X40GE Line Card NC55-24H12F-SB NC55-36X100G-A-SE NCS 5500 36x100G-SE Line Card NC55-36X100G-A-SB / NC55-36X100G-U-SB NC55-6x200-DWDM-S NCS 5500 6x200 DWDM MACsec Line Card NC55-6X2H-DWDM-BM / NC55-2H-DWDM-BM NC55-MOD-A-S NCS 5500 12X10, 2X40 & 2XMPA Line Card Base, MACSec NC55-MOD-A-BM NC55-MPA-2TH-S 2X200G CFP2 MPA   NC55-MPA-1TH2H-S 1X200G CFP2 + 2X100G QSFP28 MPA   NC55-MPA-12T-S 12X10G MPA   NC55-MPA-4H-S 4X100G QSFP28 MPA         NCS-5501 NCS 5501 Fixed 48x10G and 6x100G Chassis   NCS-5501-SE NCS 5501 - 40x10G and 4x100G Scale Chassis   NCS-1100W-ACFW NCS 5500 AC 1100W Power Supply Port-S Intake / Front-to-back   NCS-1100W-ACRV NCS 5500 AC 1100W Power Supply Port-S Exhaust/Back-to-Front   NCS-950W-DCFW NCS 5500 DC 950W Power Supply Port-S Intake / Front-to-back   NCS-1100W-DCRV NCS 5500 DC 1100W Power Supply Port-S Exhaust / Back-to-Front   NCS-1100W-HVFW NCS 5500 1100W HVAC/HVDC Port-S Intake / Front-to-back   NCS-1100W-HVRV NCS 5500 1100W HVAC/HVDC Port-S Exhaust / Back-to-Front   NCS-1RU-FAN-FW NCS 5500 1RU Chassis Fan Tray Port-S Intake / Front-to-back   NCS-1RU-FAN-RV NCS 5500 1RU Chassis Fan Tray Port-S Exhaust / Back-to-Front   NCS-5502 NCS5502 Fixed 48x100G Chassis   NCS-5502-SE NCS5502 - 48x100G Scale Chassis   NC55-2RU-FAN-RV NCS 5500 Fan Tray 2RU Chassis Port-S Exhaust / Back-to-Front   NC55-2RU-FAN-FW NCS 5500 Fan Tray 2RU Chassis Port-S Intake / Front-to-back   NC55-2KW-DCRV NCS5500 DC 2KW Power Supply Port-S Exhaust/Back-to-Front   NC55-2KW-DCFW NCS 5500 DC 2KW Power Supply Port-S Intake / Front-to-back   NC55-2KW-ACRV NCS 5500 AC 2KW Power Supply Port-S Exhaust / Back-to-Front   NC55-2KW-ACFW NCS 5500 AC 2KW Power Supply Port-S Intake / Front-to-back   NCS-5502-FLTR-FW NCS 5502 Filter Port-side exhaust / Back-to-Front   NCS-5502-FLTR-RV NCS 5502 Filter Port-Side intake / Front-to-back   NCS-55A1-24H NCS55A1 Fixed 24x100G chassis bundle NCS-55A1-24H-B NC55-A1-FAN-FW NCS 5500 Fan Tray 1RU Chassis Port-S Intake / Front-to-back Port-Side intake   NC55-A1-FAN-RV NCS 5500 Fan Tray 1RU Chassis Port-S Exhaust / Back-to-Front Port-side exhaust   NCS-55A1-36H-S NCS55A1 Fixed 36x100G Base chassis bundle NCS-55A1-36H-B NCS-55A2-MOD-S NCS 55A2 Fixed 24X10G + 16X25G & MPA Chassis   NC55-1200W-ACFW NCS 5500 AC 1200W Power Supply Port-S Intake / Front-to-back   NC55-930W-DCFW NCS 5500 DC 930W Power Supply Port-S Intake / Front-to-back   NC55-A2-FAN-FW NCS 5500 Fan Tray 1RU Chassis Port-S Intake / Front-to-back Port-Side intake   NC55-MPA-2TH-S 2X200G CFP2 MPA   NC55-MPA-1TH2H-S 1X200G CFP2 + 2X100G QSFP28 MPA   NC55-MPA-12T-S 12X10G MPA   NC55-MPA-4H-S 4X100G QSFP28 MPA   NCS-55A2-MOD-HD-S NCS 55A2 Fixed 24X10G + 16X25G & MPA Temp Hardened Chassis   NC55-900W-ACFW-HD NCS 
5500 AC 900W Power Supply Port-S Intake / Front-to-back   NC55-900W-DCFW-HD NCS 5500 DC 900W Power Supply Port-S Intake / Front-to-back   NC55-MPA-4H-HD-S 4X100G QSFP28 Temp Hardened MPA   Understanding NCS5500 slot numberingLine cards count starts from 0, from top to bottom.Products, ASICs and route scaleWith Qumran-MX, Jericho, Jericho+ with Jericho-scale, Jericho+ with large LPM, with or without eTCAM, it’s not easy to remember which ASIC is used in the various LC and systems and what is the routing scale they can reach.The following chart will help clarifying it (scale for IPv4 prefixes specifically)# PID ASIC type # of ASICs Route Scale NC55-36X100G Jericho w/o eTCAM 6 786k in LEM + 256-350k in LPM NC55-36X100G-S Jericho w/o eTCAM 6 786k in LEM + 256-350k in LPM NC55-24X100G-SE Jericho with eTCAM 4 786k in LEM + 256-350k in LPM + 2M in eTCAM NC55-18H18F Jericho w/o eTCAM 3 786k in LEM + 256-350k in LPM NC55-24H12F-SE Jericho with eTCAM 4 786k in LEM + 256-350k in LPM + 2M in eTCAM NC55-36X100G-A-SE Jericho+ with NG eTCAM 4 4M in eTCAM NC55-6x200-DWDM-S Jericho w/o eTCAM 2 786k in LEM + 256-350k in LPM NC55-MOD-A-S Jericho+ w/o eTCAM 1 786k in LEM + 256-350k in LPM NC55-MOD-A-SE-S Jericho+ w/ eTCAM 1 4M in eTCAM NCS-5501 Q-MX w/o eTCAM 1 786k in LEM + 256-350k in LPM NCS-5501-SE Jericho with eTCAM 1 786k in LEM + 256-350k in LPM + 2M in eTCAM NCS-5502 Jericho w/o eTCAM 8 786k in LEM + 256-350k in LPM NCS-5502-SE Jericho with eTCAM 8 786k in LEM + 256-350k in LPM + 2M in eTCAM NCS-55A1-24H Jericho+ w/o eTCAM 2 786k in LEM + 1M-1.3M in LPM NCS-55A1-36H-S Jericho+ w/o eTCAM 4 786k in LEM + 256-350k in LPM NCS-55A1-36H-SE-S Jericho+ with NG eTCAM 4 4M in eTCAM NCS-55A2-MOD-S Jericho+ w/o eTCAM 1 786k in LEM + 256-350k in LPM NCS-55A2-MOD-HD-S Jericho+ w/o eTCAM 1 786k in LEM + 256-350k in LPM NCS-55A2-MOD-SE-S Jericho+ w/o eTCAM 1 786k in LEM + 256-350k in LPM + 4M in eTCAM Buffers#Each ASIC is associated by 4GB of GDDR5 memory (total is the multiplication of number of NPU by 4GB).The buffer size is not related to the -SE or non-SE aspect (it’s not related to TCAM).For more details, refer to Lane’s whitepaper#Supported opticsThe first link to bookmark is the following# Select the interface type and then NCS5500.https#//www.cisco.com/c/en/us/support/interfaces-modules/transceiver-modules/products-device-support-tables-list.htmlBut what about the third party optics?We are not preventing any optic to work on the system. No PID check similar to classic XR. Therefore, no special cli is needed to enable similar to classic XR or NxOS. But the behaviour is not guaranteed. Some optics may use non-standard features and we can not guarantee whether it will work as expected or not.Check the Third Party Components - Cisco Policy for the official company position on the matter.Ethernet only on grey interfacesThe NCS5500 is only ethernet capable ?Indeed, no frame-relay or SDH/Sonet technologies supported on these ASICsThe NCS5500 is LAN-Phy only?It’s true with the exception of the modular systems and line cards (NCS55A2-MOD and NC55-MOD LCs). 
With these particular systems, we offer MPAs supporting WAN-Phy and OTN framing too.Breakout-cableQ# Do you support breakout cable options?A# Yes, depending on the optic type, it’s possible to configure 4x10G, 4x25G, 4x100G, etc.Q# How do you configure it?A# It depends on the router type.On NCS55xx#RP/0/RP0/CPU0#NCS5500(config)#controller optics 0/0/0/2RP/0/RP0/CPU0#NCS5500(config-Optics)# breakout 4x10On NCS57B1#RP/0/RP0/CPU0#NCS5500(config)#hw-module port-range 6 7 ? instance card instance of MPA's location fully qualified location specificationRP/0/RP0/CPU0#NCS5500(config)#hw-module port-range 6 7 instance 0 location 0/RP0/CPU0 mode ? WORD port mode 40-100, 400, 2x100, 4x10, 4x25, 4x10-4x25, 1x100, 2x100-PAM4, 3x100, 4x100RP/0/RP0/CPU0#NCS5500(config)#hw-module port-range 6 7 instance 0 location 0/RP0/CPU0 mode 4x10It doesn’t need a reload to be enabled.Some optics like the SR4 can be natively broken out into 4x10G. Some others will need specific optics like the 4x10G LR. Check the pdf linked above for all options. 25G is only supported on the Jericho+ platforms.Q# Do you support all the features on ports in breakout mode?A# Yes, no restriction.Q# Can you mix breakout ports and “normal” ports on the same NPU?A# Yes, no restriction in the combination of ports on a given NPU.Interface identificationIn QSFP-based systems or line cards, we support multiple types of interfaces.If you insert a QSFP28, the port will be seen as HundredGig 0/x/y/z the first value describes the rack-id for multi-chassis. Since we don’t support MC on this platform, it will always be 0 x being the slot number, starting from the top of the chassis with 0 y being the position inside a modular line card (NC55-MOD***), if it’s a non-MOD card, it will be 0 z being the port position in the line card or the system This xrdocs post details the ports for each platformIf you insert a QSFP+, the port will be seen as FortyGig 0/x/y/z by default same description for x, y and z depending on the optic type, it may be possible to configure the breakout option. In this case, the port will appear as TenGig 0/x/y/z/w. You notice a fifth tuple to describe the interface position. Check the configuration needed above in this articleIf you insert a QSA optic (QSFP to SFP Adaptor), the port will appear as TenGig 0/x/y/zER4LWe support ER4-Lite optics in the NCS5500 and they need error correction (FEC) on both ends to reach 40km.RS-FEC is enabled by default, so nothing specific is required to activate the feature.You could verify with#RP/0/RP0/CPU0#router#show controllers HundredGigE0/0/0/13 all | in ForwardForward error correction# Standard (Reed-Solomon)You could turn it off to interoperate with remote optics that do not have RS-FEC enabled#RP/0/RP0/CPU0#router(config)#interface HundredGigE <0/2/0/8>RP/0/RP0/CPU0#router(config-if)#fec ?base-r Enable BASE-R FECnone Disable any FEC enabled on the interfacestandard Enable the standard (Reed-Solomon) FECRP/0/RP0/CPU0#router(config-if)#fec noneQuad ?Reading this blog post http#//networkingbodges.blogspot.com/2019/03/how-to-use-25g-ports-in-110g-mode-on.html, it appeared that the concept of quad was poorly documented.Briefly mentioned in the NCS540 documentation, it was nonexistent on the NCS5500 side.
Let’s try to address this, until the documentation team fixes the situation.Contrary to QSFP ports where the insertion of QSFP+ optic or a QSFP28 optic made the port automatically 40Gbps or 100Gbps (or 4x10G / 4x25G is the breakout option is configured), the SFP does not automatically become a SFP+ 10G or a SFP28 25G.It’s necessary to configure the group of 4 ports where you insert the optic first.These groups of ports are call “quad” and are configured with the following CLI#RP/0/RP0/CPU0#Peyto-SE#confSun Apr 14 19#04#34.426 UTCRP/0/RP0/CPU0#Peyto-SE(config)#hw-module quad ? 0-11 configure quad propertiesRP/0/RP0/CPU0#Peyto-SE(config)#hw-module quad 0 ? location fully qualified location specificationRP/0/RP0/CPU0#Peyto-SE(config)#hw-module quad 0 location 0/0/CPU0 ? mode select mode 10g or 25g for a quad(group of 4 ports).RP/0/RP0/CPU0#Peyto-SE(config)#hw-module quad 0 location 0/0/CPU0 mode ? WORD 10g or 25g, (10g mode also operates 1g transceivers)RP/0/RP0/CPU0#Peyto-SE(config)#Note that the 10G mode allows the 1G operation too.Example with NCS55A2-MOD routers (but it applies to other 25G-capable platforms too, like the NCS55A1-48Q6H or NCS55A1-24Q6H-S)# Quad 0 represents ports 0/0/0/24 to 0/0/0/27 Quad 1 represents ports 0/0/0/28 to 0/0/0/31 Quad 2 represents ports 0/0/0/32 to 0/0/0/35 Quad 3 represents ports 0/0/0/36 to 0/0/0/39By default, the ports from 0/0/0/24 to 0/0/0/39 are configured for 25G (or “TF*”) as shown below. TF0/0/0/24 admin-down admin-down ARPA 1514 25000000 TF0/0/0/25 admin-down admin-down ARPA 1514 25000000 TF0/0/0/26 admin-down admin-down ARPA 1514 25000000 TF0/0/0/27 admin-down admin-down ARPA 1514 25000000 TF0/0/0/28 admin-down admin-down ARPA 1514 25000000 TF0/0/0/29 admin-down admin-down ARPA 1514 25000000 TF0/0/0/30 admin-down admin-down ARPA 1514 25000000 TF0/0/0/31 admin-down admin-down ARPA 1514 25000000 TF0/0/0/32 admin-down admin-down ARPA 1514 25000000 TF0/0/0/33 admin-down admin-down ARPA 1514 25000000 TF0/0/0/34 admin-down admin-down ARPA 1514 25000000 TF0/0/0/35 admin-down admin-down ARPA 1514 25000000 TF0/0/0/36 admin-down admin-down ARPA 1514 25000000 TF0/0/0/37 admin-down admin-down ARPA 1514 25000000 TF0/0/0/38 admin-down admin-down ARPA 1514 25000000 TF0/0/0/39 admin-down admin-down ARPA 1514 25000000Changing the configuration does not require system reload and only takes a few second to become effective. 
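As an illustration, switching the first quad to 10G is then a single configuration line followed by a commit (a minimal sketch based on the CLI help above; the quad number, location and hostname come from the example router and will differ on your system)#
RP/0/RP0/CPU0#Peyto-SE(config)#hw-module quad 0 location 0/0/CPU0 mode 10g
RP/0/RP0/CPU0#Peyto-SE(config)#commit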
During the process, the ports are shown as “not ready” in the show commands.Changing quad 0 to 10G# Te0/0/0/24 admin-down admin-down ARPA 1514 10000000 Te0/0/0/25 admin-down admin-down ARPA 1514 10000000 Te0/0/0/26 admin-down admin-down ARPA 1514 10000000 Te0/0/0/27 admin-down admin-down ARPA 1514 10000000 TF0/0/0/28 admin-down admin-down ARPA 1514 25000000 TF0/0/0/29 admin-down admin-down ARPA 1514 25000000 TF0/0/0/30 admin-down admin-down ARPA 1514 25000000 TF0/0/0/31 admin-down admin-down ARPA 1514 25000000 TF0/0/0/32 admin-down admin-down ARPA 1514 25000000 TF0/0/0/33 admin-down admin-down ARPA 1514 25000000 TF0/0/0/34 admin-down admin-down ARPA 1514 25000000 TF0/0/0/35 admin-down admin-down ARPA 1514 25000000 TF0/0/0/36 admin-down admin-down ARPA 1514 25000000 TF0/0/0/37 admin-down admin-down ARPA 1514 25000000 TF0/0/0/38 admin-down admin-down ARPA 1514 25000000 TF0/0/0/39 admin-down admin-down ARPA 1514 25000000BGP FlowspecContrary to the ASR9000 or CRS implementation, the for-us packets are not filtered by BGP FS rules.This behavior will be changed in a future release and will affect all J+ and J2 line cards and products (post 7.0.2, to be confirmed).", "url": "/tutorials/ncs5500-things-to-know/", "author": "Nicolas Fevrier", "tags": "" } , "tutorials-security-acl-on-ncs5500-part1": { "title": "Security ACL on NCS5500 (Part1)", "content": " NCS5500 Security Access-lists - Part1 Introduction Basic notions on ACLs Interface/ACL Support (status in IOS XR 6.2.3 / 6.5.1) Scale Match support, Parameters and Edition Edition Range Match statements TTL match enable-set-ttl Fragment match Packet-length match Logging Memory space Sharing / Unique Shared ACLs Unique interface-based ACLs Statistics Object-based ACL Misc No resequencing ACL copy DPA hw-module profiles From us traffic Resources Thanks #) You can find more content related to NCS5500 including routing memory management, VRF, URPF, Netflow following this link.IntroductionLet’s talk about the “traditional” Security Access-List implementation on the NCS5500 series. In the near future, we will dedicate a separate post to the Hybrid-ACL (also known as Scale-ACL or Object-Based-ACL).While compressed / scaled ACLs are only supported on -SE systems (with external TCAM), traditional security ACLs can be configured on all systems and line cards of the NCS5500 portfolio. They are available in ingress and egress, for IPv4, IPv6 and L2.Please note# we won’t cover access-lists used for route-filtering in this document, nor will we talk about Access-list Based Forwarding or ACL-based SPAN (packet capture / replication). We only intend to present security ACLs used for packet filtering.Basic notions on ACLsSecurity Access-Lists are used to protect the router or the infrastructure by matching fields in the packet headers and applying filters.An access-list is applied under an interface statement. The statement contains a protocol type, an ACL identifier (or name) and a direction.An access-list is composed of one or multiple access-list entries (ACEs).When defining an ACL, the first line is made of a protocol type (L2, v4 or v6) and of the name used to call it under the interfaces. The following lines represent the Access-list Entries (ACEs).You don’t have to use numbers to identify the lines when you configure your ACEs for the first time# the system will automatically assign them. They are multiples of 10 and increment line after line.
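As a minimal illustration (hypothetical ACL name, addresses and interface, not taken from the lab setups described here), an IPv4 ACL can be defined without sequence numbers and then attached to an interface#
ipv4 access-list DEMO-FILTER
 permit tcp host 192.0.2.1 any eq 179
 deny udp any any eq 123
 permit ipv4 any any
!
interface HundredGigE0/7/0/1
 ipv4 access-group DEMO-FILTER ingress
A “show access-lists ipv4 DEMO-FILTER” would then display these three entries with the automatically assigned sequence numbers 10, 20 and 30.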
After the creation, the operator will be able to edit the ACL content, inserting entries with intermediate line numbers or modify/deleting entries with existing line number.In ASR9000 or CRS, it was possible to re-sequence the ACEs but it’s not supported with NCS5500.Access-list are composed of deny or permit entries. If the entry denies an address or protocol, the NPU discards the packet and returns an Internet Control Message Protocol (ICMP) Host Unreachable message. It’s possible to change this behavior via configuration.The scale, both in term of ACL and ACE, will depend on the type of interface, the address-family and the direction.Interface/ACL Support (status in IOS XR 6.2.3 / 6.5.1)Where can be “used” these ACLs ? We support L2 and L3 ACL but “conditions may apply” Ingress IPv4 ACLs are supported on L3 physical, bundles, sub-interfaces and bundled sub-interfaces, but also on BVI interfaces Ingress IPv6 ACLs are supported on L3 physical, bundles, sub-interfaces and bundled sub-interfaces, but also on BVI interfaces Egress IPv4 ACLs are supported on L3 physical and bundle interfaces but also on BVI interfaces Egress IPv6 ACLs are supported on L3 physical and bundle interfaces but also on BVI interfaces Egress IPv4 or IPv6 ACLs are NOT supported on L3 sub-interfaces or bundled sub-interfaces (but if you apply the ACL on the physical or bundle, all packets on the sub-interfaces will be handled by this ACL) It’s no possible to apply an L2 ACL on an IPv4/IPv6 (L3) interface or vice versa Ingress L2 ACLs are supported but not egress L2 ACLs Ranges are supported but only for source-portLet’s summarise# Interface Type Direction AF Suppport ? L3 Physical Ingress IPv4 YES L3 Physical Ingress IPv6 YES L3 Physical Egress IPv4 YES L3 Physical Egress IPv6 YES L3 Physical Ingress L2 NO L3 Physical Egress L2 NO L3 Bundle Ingress IPv4 YES L3 Bundle Ingress IPv6 YES L3 Bundle Egress IPv4 YES L3 Bundle Egress IPv6 YES L3 Bundle Ingress L2 NO L3 Bundle Egress L2 NO L3 Sub-interface Ingress IPv4 YES L3 Sub-interface Ingress IPv6 YES L3 Sub-interface Egress IPv4 NO L3 Sub-interface Egress IPv6 NO L3 Bundled Sub-interface Ingress IPv4 YES L3 Bundled Sub-interface Ingress IPv6 YES L3 Bundled Sub-interface Egress IPv4 NO L3 Bundled Sub-interface Egress IPv6 NO L3 Bundled Sub-interface Ingress L2 NO L3 Bundle Sub-interface Egress L2 NO BVI Ingress IPv4 YES BVI Ingress IPv6 YES BVI Egress IPv4 YES BVI Egress IPv6 NO Tunnel Ingress IPv4 Partial Tunnel Ingress IPv6 NO Tunnel Egress IPv4 Partial Tunnel Egress IPv6 NO L2 Ingress IPv4 NO L2 Ingress IPv6 NO L2 Egress IPv4 NO L2 Egress IPv6 NO L2 Ingress L2 YES L2 Egress L2 NO ScaleThe number of ACLs and ACEs we support is expressed per NPU (Qumran-MX, Jericho, Jericho+). 
Since the ACLs are applied on ports, we invite you to check the former blog post describing the port to NPU assignments.Also, keep in mind that an ACL applied to a bundle interface with port members spanning over multiple NPUs will see the ACL/ACEs replicated on all the participating NPUs.By default (that mean without changing the hardware profiles), we support simultaneously up to# max 31 unique attached ingress ACLs per NPU max 255 unique attached egress ACLs per NPU max 4000 attached ingress IPv4 ACEs per LC max 4000 attached egress IPv4 ACEs per LC max 2000 attached ingress IPv6 ACEs per LC max 2000 attached egress IPv6 ACEs per LC max 2000 attached ingress L2 ACEs per LCNote that it’s actually possible to configure much more if they are not attached to interfaces.RP/0/RP0/CPU0#5500-6.3.2#show access-lists ipv4 maximum detailDefault max configurable acls #16000Default max configurable aces #350000Current configured acls #22Current configured aces #93455Current max configurable acls #16000Current max configurable aces #350000Max configurable acls #16000Max configurable aces #350000RP/0/RP0/CPU0#5500-6.3.2#show access-lists ipv6 maximum detailDefault max configurable acls #16000Default max configurable aces #350000Current configured acls #1Current configured aces #1003Current max configurable acls #16000Current max configurable aces #350000Max configurable acls #16000Max configurable aces #350000RP/0/RP0/CPU0#5500-6.3.2#Match support, Parameters and EditionEditionWhen using traditional / “flat” ACLs, it’s possible to edit the ACEs in-line. When an ACL is attached to an interface, it’s not necessary to remove it from the port before editing it. With object-groups (defined in a following section), it’s an atomic process where the new ACE replaces the existing one.RangeWe support range statements but only within the limit of 23 range-IDs.Match statementsThe following protocols can be matched# IGMP type ICMP type code UDP protocol name or port number DSCP / precedence fragments log icmp-off packet length (eq or range) ttl TCP protocol name or port number DSCP / precedence established icmp-off packet length (eq or range) ttl Check https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/ip-addresses/b-ip-addresses-cr-ncs5500/b-ncs5500-ip-addresses-cli-reference_chapter_01.html#reference_7C68561395FF4CE1902EF920B47FA254 for a complete list.TTL matchWe can match the TTL field in the IP header (both v4 and v6). We support exact values or ranges. For traditional (non-hybrid) ACL, it’s not enabled by default and must be configured via a specific hardware profile.RP/0/RP0/CPU0#5500-6.3.2(config)#hw-module profile tcam format access-list ipv4 src-addr src-port enable-set-ttl ttl-matchRP/0/RP0/CPU0#5500-6.3.2(config)#hw-module profile tcam format access-list ipv4 dst-addr dst-port enable-set-ttl ttl-match enable-set-ttlEven if it’s a bit outside of the scope of this article, it’s possible to match the TTL field but also to manipulate this value. 
We invite you to check this URL if you are looking for more details.https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/ip-addresses/b-ip-addresses-cr-ncs5500/b-ncs5500-ip-addresses-cli-reference_chapter_01.html#id_60681Fragment matchWe differentiate 3 types of packets# non-fragmented initial fragments (with the port information) non-initial fragments (without the port number)In the third category, we don’t treat non-initial non-last and non-initial last fragments differently.In NCS5500 platforms, we can match IPv4 fragments but we don’t support IPv6 fragments.Configuration example#10 permit tcp 1.1.0.2/32 any dscp ef fragmentsIf you define an ACL with L4 information (UDP or TCP ports for instance) and with “fragment” keyword, non-intital fragments can not be matched. It’s expected since the packet no longer transports the port information but only an indication of fragment needed for the re-assembly of the original packet at the destination host level.The same ACL will be able to match initial fragments.An L4 permit / deny without “fragment” keyword will be able to match non-fragmented and initial fragments while an L3 permit / deny without the keyword will be able to match all types of packets (non-fragmented, initial and non-initial fragments).Some more details are available in the “Extended Access Lists with Fragment Control” section of this CCO document#https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/ip-addresses/61x/b-ncs5500-ip-addresses-configuration-guide-61x/b-ncs5500-ip-addresses-configuration-guide-61x_chapter_01.htmlPacket-length matchMatching on the packet length is supported. It could be useful to tackle specific amplification attacks at the border of the internet (an alternative to using BGP Flowspec for example).The notion of packet length is frequently a matter of doubts since it may vary between products and manufacturer. For example, it’s common for test devices to express it at L2.In the NCS5500 ACL context, the packet length is expressed at L3# the total IP packet including the IP header. It doesn’t include any L2 headers (Ethernet or dot1q). Still there are differences between IPv4 and IPv6# IPv4# the “total length” field in the packet includes the IP header as well as the payload IPv6# the “payload length” field in the packet does not include the IP header (40 bytes for IPv6), so it only covers the payload lengthAlso, due to the representation of the packet-length information internally, it should be a multiple of 16. So we support values like 0, 16, 32, 48, 64, … 992, 1008, 1024, … up to 16368.LoggingThe “log” keyword is supported on ingress but not on egress. 
“log-input” in the other hand is not supported on this platform.Memory spaceTraditional / non-hybrid ACLs are stored in the internal TCAM, even on -SE systems.You can check memory utilisation in 6.1.x with#RP/0/RP0/CPU0#NCS5508-1-614#sh contrnpuinternaltcamloc0/7/CPU0 | i ~(size|NPU|==|Id)~NPU 0#==================================================================================BankId Key EntrySize Free InUse Nof DBs Owner DB Id DB InUse Prefix==================================================================================0 size_160_bits 2043 5 8 pmf-01 size_160_bits 2047 1 1 pmf-12\\3 size_320_bits 1972 76 3 pmf-04\\5 size_320_bits 2020 28 1 pmf-012 size_160_bits 126 2 1 pmf-113 size_160_bits 115 13 1 pmf-014 size_160_bits 118 10 1 egress_aclWith IOS XR 6.3 or later, we’ll use#RP/0/RP0/CPU0#NCS5500#sh contr npu internaltcam location 0/7/CPU0Check next sections for more CLI output.Sharing / UniqueShared ACLsIt’s possible to share access-lists in ingress but not in egress.What does it mean exactly? Let’s take 2 interfaces handled by the same NPU# Hu0/7/0/1 and Hu0/7/0/2.Before applying the ACLs#RP/0/RP0/CPU0#5500-6.3.2#sh contr npu internaltcam loc 0/7/CPU0Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0\\1 320b pmf-0 1987 49 7 INGRESS_LPTS_IPV40 0\\1 320b pmf-0 1987 8 12 INGRESS_RX_ISIS0 0\\1 320b pmf-0 1987 2 32 INGRESS_QOS_IPV60 0\\1 320b pmf-0 1987 2 34 INGRESS_QOS_L20 2 160b pmf-0 2044 2 31 INGRESS_QOS_IPV40 2 160b pmf-0 2044 1 33 INGRESS_QOS_MPLS0 2 160b pmf-0 2044 1 42 INGRESS_ACL_L20 3 160b egress_acl 2031 17 4 EGRESS_QOS_MAP0 4\\5 320b pmf-0 2013 35 8 INGRESS_LPTS_IPV60 6 160b Free 2048 0 00 7 160b Free 2048 0 00 8 160b Free 2048 0 00 9 160b Free 2048 0 00 10 160b Free 2048 0 00 11 160b Free 2048 0 00 12 160b pmf-1 30 41 11 INGRESS_RX_L20 12 160b pmf-1 30 13 26 INGRESS_MPLS0 12 160b pmf-1 30 44 79 INGRESS_BFD_IPV4_NO_DESC_TCAM_T0 13 160b pmf-1 124 3 10 INGRESS_DHCP0 13 160b pmf-1 124 1 41 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 14 160b Free 128 0 00 15 160b Free 128 0 0All the banks from 6 to 10 are empty.In ingress, if we apply the ACL “test-1000” (as the name imples, made of 1000 lines) on these two interfaces.RP/0/RP0/CPU0#5500-6.3.2#sh access-list ipv4 usage pfilter location 0/7/$Interface # HundredGigE0/7/0/1 Input ACL # Common-ACL # N/A ACL # test-1000 Output ACL # N/AInterface # HundredGigE0/7/0/2 Input ACL # Common-ACL # N/A ACL # test-1000 Output ACL # N/ARP/0/RP0/CPU0#5500-6.3.2#sh contr npu internaltcam loc 0/7/CPU0Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0\\1 320b pmf-0 1987 49 7 INGRESS_LPTS_IPV40 0\\1 320b pmf-0 1987 8 12 INGRESS_RX_ISIS0 0\\1 320b pmf-0 1987 2 32 INGRESS_QOS_IPV60 0\\1 320b pmf-0 1987 2 34 INGRESS_QOS_L20 2 160b pmf-0 2044 2 31 INGRESS_QOS_IPV40 2 160b pmf-0 2044 1 33 INGRESS_QOS_MPLS0 2 160b pmf-0 2044 1 42 INGRESS_ACL_L20 3 160b egress_acl 2031 17 4 EGRESS_QOS_MAP0 4\\5 320b pmf-0 2013 35 8 INGRESS_LPTS_IPV60 6 160b pmf-0 997 1051 16 INGRESS_ACL_L3_IPV40 7 160b Free 2048 0 00 8 160b Free 2048 0 00 9 160b Free 2048 0 00 10 160b Free 2048 0 00 11 160b Free 2048 0 00 12 160b pmf-1 30 41 11 INGRESS_RX_L20 12 160b pmf-1 30 13 26 INGRESS_MPLS0 12 160b pmf-1 30 44 79 INGRESS_BFD_IPV4_NO_DESC_TCAM_T0 13 160b 
pmf-1 124 3 10 INGRESS_DHCP0 13 160b pmf-1 124 1 41 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 14 160b Free 128 0 00 15 160b Free 128 0 0We note that 1051 entries are consumed for these two access-lists. So the 1000 entries are just counted once even if the ACL is applied on multiple interfaces of the same NPU. That’s what we qualified a “shared ACL”.Note# it’s not showing exactly 1000 but 1051. The difference comes from internal entries automatically allocated by the system. They don’t represent a significant number compared to the overall scale capability.We remove the ingress ACLs and apply the same on egress this time#RP/0/RP0/CPU0#5500-6.3.2#sh access-list ipv4 usage pfilter location 0/7/CPU0Interface # HundredGigE0/7/0/1 Input ACL # N/A Output ACL # test-1000Interface # HundredGigE0/7/0/2 Input ACL # N/A Output ACL # test-1000RP/0/RP0/CPU0#5500-6.3.2#sh contr npu internaltcam loc 0/7/CPU0Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0\\1 320b pmf-0 1987 49 7 INGRESS_LPTS_IPV40 0\\1 320b pmf-0 1987 8 12 INGRESS_RX_ISIS0 0\\1 320b pmf-0 1987 2 32 INGRESS_QOS_IPV60 0\\1 320b pmf-0 1987 2 34 INGRESS_QOS_L20 2 160b pmf-0 2044 2 31 INGRESS_QOS_IPV40 2 160b pmf-0 2044 1 33 INGRESS_QOS_MPLS0 2 160b pmf-0 2044 1 42 INGRESS_ACL_L20 3 160b egress_acl 0 2031 1 EGRESS_ACL_IPV40 3 160b egress_acl 0 17 4 EGRESS_QOS_MAP0 4\\5 320b pmf-0 2013 35 8 INGRESS_LPTS_IPV60 6 160b Free 2048 0 00 7 160b egress_acl 1889 159 1 EGRESS_ACL_IPV40 8 160b Free 2048 0 00 9 160b Free 2048 0 00 10 160b Free 2048 0 00 11 160b Free 2048 0 00 12 160b pmf-1 30 41 11 INGRESS_RX_L20 12 160b pmf-1 30 13 26 INGRESS_MPLS0 12 160b pmf-1 30 44 79 INGRESS_BFD_IPV4_NO_DESC_TCAM_T0 13 160b pmf-1 124 3 10 INGRESS_DHCP0 13 160b pmf-1 124 1 41 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 14 160b Free 128 0 00 15 160b Free 128 0 0We can see that each ACL applied on egress is counted once per application. It exceeds a single bank capability so it spreads between bank #3 and bank #7.In summary, we are sharing the ACL on ingress but not on egress. On egress, the entries used in the iTCAM are the multiplication of the ACE count by the number of times the ACL is applied.Unique interface-based ACLsThe scale mentioned earlier (31 ACLs ingress and 255 ACLs egress) can be seen as too restrictive for some use-cases. We added the capability to extend this scale with a specific hardware profile#RP/0/RP0/CPU0#5500-6.3.2(config)#hw-module profile tcam format access-list ipv4 ? dst-addr destination address, 32 bit qualifier dst-port destination L4 Port, 16 bit qualifier enable-capture Enable ACL based mirroring. 
Disables ACL logging enable-set-ttl Enable Setting TTL field frag-bit fragment-bit, 1 bit qualifier interface-based Enable non-shared interface based ACL location Location of format access-list ipv4 config packet-length packet length, 10 bit qualifier port-range ipv4 port range qualifier, 24 bit qualifier precedence precedence/dscp, 8 bit qualifier proto protocol type, 8 bit qualifier src-addr source address, 32 bit qualifier src-port source L4 port, 16 bit qualifier tcp-flags tcp-flags, 6 bit qualifier ttl-match Enable matching on TTL field udf1 user defined filter udf2 user defined filter udf3 user defined filter udf4 user defined filter udf5 user defined filter udf6 user defined filter udf7 user defined filter udf8 user defined filterRP/0/RP0/CPU0#5500-6.3.2(config)#hw-module profile tcam format access-list ipv4 interface-basedIn order to activate/deactivate this ipv4 profile, you must manually reload the chassis/all line cardsRP/0/RP0/CPU0#5500-6.3.2(config)#You can be more specific on the key format of these ACLs#RP/0/RP0/CPU0#5500-6.3.2(config)#hw-module profile tcam format access-list ipv4 src-addr src-port dst-addr dst-port interface-basedRP/0/RP0/CPU0#5500-6.3.2(config)#hw-module profile tcam format access-list ipv6 src-addr dst-addr dst-port interface-basedWith this approach, the limitations of 31 and 255 respectively are removed. You can configure many more ACLs with smaller ACE size.StatisticsCounters being a precious resource on DNX chipset, the permit entries are not counted by default.It’s possible to change this behavior and enable the statistics on the permit entries in ingress via a specific hw-module profile.This feature is particularly useful if you use ABF and needs to track the flows handled by each ACE.Note# this profile will not activate counters for egress permits.RP/0/RP0/CPU0#5500-6.3.2(config)#hw-module profile stats ? acl-permit Enable ACL permit stats. qos-enhanced Enable enhanced QoS stats.RP/0/RP0/CPU0#NCS5500-6.3.2(config)# hw-module profile stats acl-permitIn order to activate/deactivate this stats profile, you must manually reload the chassis/all line cardsRP/0/RP0/CPU0#NCS5500-6.3.2(config)# commitRP/0/RP0/CPU0#NCS5500-6.3.2(config)# endRP/0/RP0/CPU0#router# reload location allProceed with reload? 
[confirm]After the reload, we can now see matches in the permit statements.RP/0/RP0/CPU0#NCS5500-632#sh access-lists ipv4 PERMIT-TEST hardware ingress location 0/7/CPU0ipv4 access-list PERMIT-TEST 10 permit icmp any host 1.1.1.1 15 permit icmp any host 1.1.1.3 16 permit tcp any any eq telnet (2 matches)17 permit tcp any eq telnet any 20 permit udp any any 30 permit tcp any any 40 deny ipv4 any any (1169 matches)RP/0/RP0/CPU0#NCS5500-632#Let’s take a look at the statistic database allocation before the activation of the profile and what is the difference after the activation and reload#RP/0/RP0/CPU0#NCS55A1-24H-6.3.2#show controllers npu resources stats instance 0 loc 0/0/CPU0System information for NPU 0# Counter processor configuration profile# Default Next available counter processor# 4Counter processor# 0 | Counter processor# 1 State# In use | State# In use | Application# In use Total | Application# In use Total Trap 95 300 | Trap 95 300 Policer (QoS) 0 6976 | Policer (QoS) 0 6976 ACL RX, LPTS 148 915 | ACL RX, LPTS 148 915 | |Counter processor# 2 | Counter processor# 3 State# In use | State# In use | Application# In use Total | Application# In use Total VOQ 29 8191 | VOQ 29 8191 | |Counter processor# 4 | Counter processor# 5 State# Free | State# Free | |Counter processor# 6 | Counter processor# 7 State# Free | State# Free | |Counter processor# 8 | Counter processor# 9 State# Free | State# Free | |Counter processor# 10 | Counter processor# 11 State# In use | State# In use | Application# In use Total | Application# In use Total L3 RX 0 8191 | L3 RX 0 8191 L2 RX 0 8192 | L2 RX 0 8192 | |Counter processor# 12 | Counter processor# 13 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 0 16383 | Interface TX 0 16383 | |Counter processor# 14 | Counter processor# 15 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 0 16384 | Interface TX 0 16384 | |RP/0/RP0/CPU0#NCS55A1-24H-6.3.2#confRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config)#hw-module profile stats acl-permitIn order to activate/deactivate this stats profile, you must manually reload the chassis/all line cardsRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config)#commitRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config)#endRP/0/RP0/CPU0#NCS55A1-24H-6.3.2#adminroot connected from 127.0.0.1 using console on NCS55A1-24H-6.3.2sysadmin-vm#0_RP0# reload rack 0Reload node ? 
[no,yes] yesresult Rack graceful reload request on 0 acknowledged.-=|RELOAD|=-----=|RELOAD|=-----=|RELOAD|=-----=|RELOAD|=-----=|RELOAD|=-----=|RELOAD|=-RP/0/RP0/CPU0#NCS55A1-24H-6.3.2#show controllers npu resources stats instance 0 loc 0/0/CPU0System information for NPU 0# Counter processor configuration profile# ACL Permit Next available counter processor# 4Counter processor# 0 | Counter processor# 1 State# In use | State# In use | Application# In use Total | Application# In use Total Trap 95 300 | Trap 95 300 ACL RX, LPTS 147 7891 | ACL RX, LPTS 147 7891 | |Counter processor# 2 | Counter processor# 3 State# In use | State# In use | Application# In use Total | Application# In use Total VOQ 29 8191 | VOQ 29 8191 | |Counter processor# 4 | Counter processor# 5 State# Free | State# Free | |Counter processor# 6 | Counter processor# 7 State# Free | State# Free | |Counter processor# 8 | Counter processor# 9 State# Free | State# Free | |Counter processor# 10 | Counter processor# 11 State# In use | State# In use | Application# In use Total | Application# In use Total L3 RX 0 8191 | L3 RX 0 8191 L2 RX 0 8192 | L2 RX 0 8192 | |Counter processor# 12 | Counter processor# 13 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 0 16383 | Interface TX 0 16383 | |Counter processor# 14 | Counter processor# 15 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 0 16383 | Interface TX 0 16383 | |RP/0/RP0/CPU0#NCS55A1-24H-6.3.2#Enabling this profile removed the allocation of counters for QoS. It’s not possible to count QoS with it. Hw-profiles are always a trade-off.Object-based ACLEven if not frequently used in non-eTCAM systems, we support the use of object-based ACLs.It simplifies the management of the filter rules# it’s easy to add one entry in a network group and see all the ports related to this role automatically added.Note# in this context of non-SE platforms, we will use a non-compressed mode. All entries will be expanded and programmed in the iTCAM.The principle is simple and clever# define groups of networks and groups of ports, then use them in the ACEs#RP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config)#object-group ? 
network Network object group port Port object groupRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config)#RP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config)#object-group network ipv4 net-obj-srv-1RP/0/RP0/CPU0#NCS55A1-24H-6(config-object-group-ipv4)#host 1.2.3.4RP/0/RP0/CPU0#NCS55A1-24H-6(config-object-group-ipv4)#2.3.4.0/24RP/0/RP0/CPU0#NCS55A1-24H-6(config-object-group-ipv4)#3.4.0.0/16RP/0/RP0/CPU0#NCS55A1-24H-6(config-object-group-ipv4)#4.0.0.0/8RP/0/RP0/CPU0#NCS55A1-24H-6(config-object-group-ipv4)#exitRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config)#RP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config)#object-group port port-obj-srv-1RP/0/RP0/CPU0#NCS55A1-24H-6(config-object-group-port)#description Ports for srv1RP/0/RP0/CPU0#NCS55A1-24H-6(config-object-group-port)#eq 80RP/0/RP0/CPU0#NCS55A1-24H-6(config-object-group-port)#eq 443RP/0/RP0/CPU0#NCS55A1-24H-6(config-object-group-port)#eq 8080RP/0/RP0/CPU0#NCS55A1-24H-6(config-object-group-port)#eq 179RP/0/RP0/CPU0#NCS55A1-24H-6(config-object-group-port)#exitRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config)#RP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config)#ipv4 access-list network-object-acl-1RP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config-ipv4-acl)#10 permit tcp net-group net-obj-srv-1 port-group port-obj-srv-1 anyRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config-ipv4-acl)#int hu 0/0/0/2RP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config-if)#ipv4 access-group network-object-acl-1 ingress compress level 0RP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config-if)#commitIt creates a matrix of the network and port entries.Note# the compress level 0 is default and not necessary here.With the following show commands, we can verify the ACL is actually expanded when programmed in the iTCAM (because we don’t use compression). So these 4x4 matrix will end up as 16 entries (+ the default entry).Note# a default entry is added for each ACLv4 and 3 default entries are added for each ACLv6.RP/0/RP0/CPU0#NCS55A1-24H-6.3.2#sh access-list ipv4 usage pfilter location 0/0/CPU0Interface # HundredGigE0/0/0/2 Input ACL # Common-ACL # N/A ACL # network-object-acl-1 Output ACL # N/ARP/0/RP0/CPU0#NCS55A1-24H-6.3.2#sh access-lists ipv4 network-object-acl-1 expandedipv4 access-list network-object-acl-1 10 permit tcp 4.0.0.0 0.255.255.255 eq www any 10 permit tcp 4.0.0.0 0.255.255.255 eq bgp any 10 permit tcp 4.0.0.0 0.255.255.255 eq 443 any 10 permit tcp 4.0.0.0 0.255.255.255 eq 8080 any 10 permit tcp 3.4.0.0 0.0.255.255 eq www any 10 permit tcp 3.4.0.0 0.0.255.255 eq bgp any 10 permit tcp 3.4.0.0 0.0.255.255 eq 443 any 10 permit tcp 3.4.0.0 0.0.255.255 eq 8080 any 10 permit tcp 2.3.4.0 0.0.0.255 eq www any 10 permit tcp 2.3.4.0 0.0.0.255 eq bgp any 10 permit tcp 2.3.4.0 0.0.0.255 eq 443 any 10 permit tcp 2.3.4.0 0.0.0.255 eq 8080 any 10 permit tcp host 1.2.3.4 eq www any 10 permit tcp host 1.2.3.4 eq bgp any 10 permit tcp host 1.2.3.4 eq 443 any 10 permit tcp host 1.2.3.4 eq 8080 anyRP/0/RP0/CPU0#NCS55A1-24H-6.3.2#And we can verify it’s programmed as such in the hardware / iTCAM#RP/0/RP0/CPU0#NCS55A1-24H-6.3.2#sh access-lists ipv4 network-object-acl-1 hardware ingress interface hu0/0/0/2 verify location 0/0/CPU0Verifying TCAM entries for network-object-acl-1Please wait... 
INTF NPU lookup ACL # intf Total compression Total result failed(Entry) TCAM entries type ID shared ACES prefix-type Entries ACE SEQ # verified ---------- --- ------- --- ------ ------ ----------- ------- ------ ------------- ------------HundredGigE0_0_0_2 (ifhandle# 0xe0) 0 IPV4 1 1 1 NONE 17 passed 17RP/0/RP0/CPU0#NCS55A1-24H-6.3.2#RP/0/RP0/CPU0#NCS55A1-24H-6.3.2#sh access-lists ipv4 network-object-acl-1 hardware ingress interface hu 0/0/0/2 detail location 0/0/CPU0network-object-acl-1 Details#Sequence Number# 10NPU ID# 0Number of DPA Entries# 16ACL ID# 1ACE Action# PERMITACE Logging# DISABLEDABF Action# 0(ABF_NONE)Hit Packet Count# 0Protocol# 0x06 (Mask 0xFF)Source Address# 4.0.0.0 (Mask 0.255.255.255)DPA Entry# 1 Entry Index# 0x0 DPA Handle# 0xC5196098 Source Port# 80 (Mask 65535)DPA Entry# 2 Entry Index# 0x1 DPA Handle# 0xC51963E0 Source Port# 179 (Mask 65535)DPA Entry# 3 Entry Index# 0x2 DPA Handle# 0xC5196728 Source Port# 443 (Mask 65535)DPA Entry# 4 Entry Index# 0x3 DPA Handle# 0xC5196A70 Source Port# 8080 (Mask 65535)DPA Entry# 5 Entry Index# 0x4 DPA Handle# 0xC5196DB8 Source Port# 80 (Mask 65535)DPA Entry# 6 Entry Index# 0x5 DPA Handle# 0xC5197100 Source Port# 179 (Mask 65535)DPA Entry# 7 Entry Index# 0x6 DPA Handle# 0xC5197448 Source Port# 443 (Mask 65535)DPA Entry# 8 Entry Index# 0x7 DPA Handle# 0xC5197790 Source Port# 8080 (Mask 65535)DPA Entry# 9 Entry Index# 0x8 DPA Handle# 0xC5197AD8 Source Port# 80 (Mask 65535)DPA Entry# 10 Entry Index# 0x9 DPA Handle# 0xC5197E20 Source Port# 179 (Mask 65535)DPA Entry# 11 Entry Index# 0xa DPA Handle# 0xC5198168 Source Port# 443 (Mask 65535)DPA Entry# 12 Entry Index# 0xb DPA Handle# 0xC51984B0 Source Port# 8080 (Mask 65535)DPA Entry# 13 Entry Index# 0xc DPA Handle# 0xC51987F8 Source Port# 80 (Mask 65535)DPA Entry# 14 Entry Index# 0xd DPA Handle# 0xC5198B40 Source Port# 179 (Mask 65535)DPA Entry# 15 Entry Index# 0xe DPA Handle# 0xC5198E88 Source Port# 443 (Mask 65535)DPA Entry# 16 Entry Index# 0xf DPA Handle# 0xC51991D0 Source Port# 8080 (Mask 65535)Sequence Number# IMPLICIT DENYNPU ID# 0Number of DPA Entries# 1ACL ID# 1ACE Action# DENYACE Logging# DISABLEDABF Action# 0(ABF_NONE)Hit Packet Count# 0DPA Entry# 1 Entry Index# 0x0 DPA Handle# 0xC5199518RP/0/RP0/CPU0#NCS55A1-24H-6.3.2#sh contr npu internaltcam loc 0/0/CPU0Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0\\1 320b pmf-0 2010 32 7 INGRESS_LPTS_IPV40 0\\1 320b pmf-0 2010 2 12 INGRESS_RX_ISIS0 0\\1 320b pmf-0 2010 2 32 INGRESS_QOS_IPV60 0\\1 320b pmf-0 2010 2 34 INGRESS_QOS_L20 2 160b pmf-0 2044 2 31 INGRESS_QOS_IPV40 2 160b pmf-0 2044 1 33 INGRESS_QOS_MPLS0 2 160b pmf-0 2044 1 42 INGRESS_ACL_L20 3 160b egress_acl 2031 17 4 EGRESS_QOS_MAP0 4\\5 320b pmf-0 2024 24 8 INGRESS_LPTS_IPV60 6 160b pmf-0 2031 17 16 INGRESS_ACL_L3_IPV40 7 160b Free 2048 0 00 8 160b Free 2048 0 00 9 160b Free 2048 0 00 10 160b Free 2048 0 00 11 160b Free 2048 0 00 12 160b pmf-1 30 41 11 INGRESS_RX_L20 12 160b pmf-1 30 13 26 INGRESS_MPLS0 12 160b pmf-1 30 44 79 INGRESS_BFD_IPV4_NO_DESC_TCAM_T0 13 160b pmf-1 124 3 10 INGRESS_DHCP0 13 160b pmf-1 124 1 41 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 14 160b Free 128 0 00 15 160b Free 128 0 0Of course, the result of “number of ports” x “number of networks” should be under the limit of the available space, otherwise the application of the ACL will be refused.MiscNo resequencingIt’s possible to resequence 
ACLs for prefix-lists but not for security ACLs.RP/0/RP0/CPU0#5500-6.3.2#resequence ? prefix-list Prefix listsRP/0/RP0/CPU0#5500-6.3.2#ACL copyWe support copying an ACL into a new one, which can be a very useful feature for the operator.RP/0/RP0/CPU0#5500-6.3.2#copy access-list ? ethernet-service Copy Ethernet Service access list ipv4 Copy IPv4 access list ipv6 Copy IPv6 access listRP/0/RP0/CPU0#5500-6.3.2#copy access-list ipv4 test-range-24 new-aclRP/0/RP0/CPU0#5500-6.3.2#DPAIn all the NCS5500 products, we use an abstraction layer between IOS XR and the hardware. From an operator perspective, this function can be represented by the DPA (Data Plane Abstraction). When a route or next-hop information is added or removed, it goes through the DPA. The same is true for ACLs.The following show commands are used to monitor the number of operations and the current status.RP/0/RP0/CPU0#5500-6.3.2#sh dpa resources ipacl loc 0/7/CPU0~ipacl~ DPA Table (Id# 60, Scope# Non-Global)-------------------------------------------------- NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 In Use# 1051 0 0 0 Create Requests Total# 4761 0 0 0 Success# 4761 0 0 0 Delete Requests Total# 3710 0 0 0 Success# 3710 0 0 0 Update Requests Total# 990 0 0 0 Success# 990 0 0 0 EOD Requests Total# 0 0 0 0 Success# 0 0 0 0 Errors HW Failures# 0 0 0 0 Resolve Failures# 0 0 0 0 No memory in DB# 0 0 0 0 Not found in DB# 0 0 0 0 Exists in DB# 0 0 0 0RP/0/RP0/CPU0#5500-6.3.2#sh dpa resources ip6acl loc 0/7/CPU0~ip6acl~ DPA Table (Id# 61, Scope# Non-Global)-------------------------------------------------- NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 In Use# 0 0 0 0 Create Requests Total# 0 0 0 0 Success# 0 0 0 0 Delete Requests Total# 0 0 0 0 Success# 0 0 0 0 Update Requests Total# 0 0 0 0 Success# 0 0 0 0 EOD Requests Total# 0 0 0 0 Success# 0 0 0 0 Errors HW Failures# 0 0 0 0 Resolve Failures# 0 0 0 0 No memory in DB# 0 0 0 0 Not found in DB# 0 0 0 0 Exists in DB# 0 0 0 0RP/0/RP0/CPU0#5500-6.3.2#The HW Failures counter is important since it represents the number of times the software tried to push more entries than the hardware actually supports.hw-module profilesSeveral hw-profiles exist to enable specific functions around ACLs. After configuration, a reload of the chassis or the line cards is required to activate them.
Enable an IPv4 egress ACL on BVIRP/0/RP0/CPU0#5500-6.3.2(config)# hw-module profile acl egress layer3 interface-based Enable permit statisticsRP/0/RP0/CPU0#5500-6.3.2(config)# hw-module profile stats acl-permit Match on TTL fieldRP/0/RP0/CPU0#5500-6.3.2(config)#hw-module profile tcam format access-list ipv4 src-addr src-port enable-set-ttl ttl-matchRP/0/RP0/CPU0#5500-6.3.2(config)#hw-module profile tcam format access-list ipv4 dst-addr dst-port enable-set-ttl ttl-match Enable the interface-based unique ACL modeRP/0/RP0/CPU0#5500-6.3.2(config)#hw-module profile tcam format access-list ipv4 interface-basedRP/0/RP0/CPU0#5500-6.3.2(config)#hw-module profile tcam format access-list ipv6 src-addr dst-addr dst-port interface-basedFrom us trafficACLs applied on interface can match and handle traffic going through the router or targeted to the router, but it’s important to remind that traffic “from the router” is not matched by egress ACLs.That means, the egress ACL you apply on your interface will not prevent your locally generated traffic to leave the router.RP/0/RP0/CPU0#NCS55A1-24H-6.3.2#sh run int Hu0/0/0/22interface HundredGigE0/0/0/22 ipv4 address 192.168.22.2 255.255.255.0 ipv6 address 2001#22##2/64!RP/0/RP0/CPU0#NCS55A1-24H-6.3.2#ping 192.168.22.1Type escape sequence to abort.Sending 5, 100-byte ICMP Echos to 192.168.22.1, timeout is 2 seconds#!!!!!Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 msRP/0/RP0/CPU0#NCS55A1-24H-6.3.2#confRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config)#ipv4 access-list NO-PASARANRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config-ipv4-acl)#deny anyRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config-ipv4-acl)#exitRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config)#interface HundredGigE0/0/0/22RP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config-if)#ipv4 access-group NO-PASARAN egressRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config-if)#commitRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config-if)#endRP/0/RP0/CPU0#NCS55A1-24H-6.3.2#ping 192.168.22.1Type escape sequence to abort.Sending 5, 100-byte ICMP Echos to 192.168.22.1, timeout is 2 seconds#!!!!!Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 msRP/0/RP0/CPU0#NCS55A1-24H-6.3.2#ResourcesCCO guides# Implementing ACLshttps#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/ip-addresses/63x/b-ip-addresses-configuration-guide-ncs5500-63x/b-ip-addresses-configuration-guide-ncs5500-63x_chapter_010.htmlCCO guide# ACL commandshttps#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/ip-addresses/b-ip-addresses-cr-ncs5500/b-ncs5500-ip-addresses-cli-reference_chapter_01.htmlThanks #)Thanks a lot to Puneet Kalra, Jeff Tayler and Ashok Kumar for their precious help.", "url": "/tutorials/security-acl-on-ncs5500-part1/", "author": "Nicolas Fevrier", "tags": "" } , "tutorials-security-acl-on-ncs5500-part2-hybrid-acl": { "title": "Security ACL on NCS5500 (Part2): Hybrid ACL", "content": " NCS5500 Security Access-lists - Part2# Hybrid ACL Introduction Support Configuration Operation Coexistence with flat ACL eTCAM Carving requirement Jericho+ systems Jericho systems with 6.1.x and 6.3.2 onwards Jericho systems with 6.2.x and 6.3.1/6.3.15 Monitoring Notes Object-group and non-eTCAM? 
Use of ranges Compression ACL content edition Statistics Conclusion You can find more content related to NCS5500 including routing memory management, VRF, URPF, Netflow following this link.IntroductionAfter introducing the traditional ACLs in a recent post, let's focus on an interesting approach offering significantly higher scale and better flexibility# the hybrid ACLs.Let's start with a 5-minute video to illustrate this feature and its benefits#https#//www.youtube.com/watch?v=xIUgbL7d6tkSupportThe hybrid ACL feature uses two databases to store the information# the internal TCAM (present inside the NPU) and the external TCAM.That implies only the systems equipped with an external TCAM can be used here. This is the case for Qumran-MX, Jericho and Jericho+ based routers and line cards.Hybrid ACLs can be used with IPv4 and IPv6 in the ingress and egress directions. Hybrid ACLs in the egress direction are supported from IOS-XR 7.7.1.ConfigurationBefore jumping into the configuration aspects, let's define the concept of object-groups# an object-group is a data structure describing a set of elements, which is later referenced in an access-list entry (permit or deny).We will use two types of object-groups# network object-groups# set of network addresses port object-groups# set of UDP or TCP ports RP/0/RP0/CPU0#TME-5508-1-6.3.2#confRP/0/RP0/CPU0#TME-5508-1-6.3.2(config)#object-group ? network Network object group port Port object groupRP/0/RP0/CPU0#TME-5508-1-6.3.2(config)#object-group network ? ipv4 IPv4 object group ipv6 IPv6 object groupRP/0/RP0/CPU0#TME-5508-1-6.3.2(config)#object-group network ipv4 TEST ? A.B.C.D/length IPv4 address/prefix description Description for the object group host A single host address object-group Nested object group range Range of host addressesRP/0/RP0/CPU0#TME-5508-1-6.3.2(config)#object-group network ipv6 TESTv6 ? X # X # # X/length IPv6 prefix x # x # # x/y description Description for the object group host A single host address object-group nested object group range Range of host addressesRP/0/RP0/CPU0#TME-5508-1-6.3.2(config)#object-group port TEST ?
description description for the object group eq Match packets on ports equal to entered port number gt Match packets on ports greater than entered port number lt Match packets on ports less than entered port number neq Match packets on ports not equal to entered port number object-group nested object group range Match only packets on a given port rangeRP/0/RP0/CPU0#TME-5508-1-6.3.2(config)#As you can see in the help options of the IOS XR configuration# we use specific object-groups per address-family IPv4 and IPv6 you can describe host, networks + prefix length or ranges of addresses port object-groups are not specifying if they are for UDP or TCP, it will be described in the ACE and can potentially be re-used for both you can describe ports with logical “predicates” like greater than, less than, equal or even rangesFor example# object-group network ipv4 OBJ-DNS-network 183.13.48.0/24 host 183.13.64.23 range 183.13.64.123 183.13.64.150!object-group port OBJ-Email-Ports eq smtp eq pop3 eq 143 eq 443These objects will be used in an access-list entry to describe the flow(s).Example1# IPv4 addresses, source and destinationExample2# IPv4 addresses and ports, source and destinationSeparating addresses and ports in two groups and calling these objects in access-list entries line offers an unique flexibility.It’s easy to create a matrix that would take dozens or even hundreds of lines if they were described one by one with traditional ACLs.Let’s imagine a case with email servers# 17 servers or networks and 8 ports.RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh run object-group network ipv4 OBJ-Email-Netsobject-group network ipv4 OBJ-Email-Nets 183.13.64.23/32 183.13.64.38/32 183.13.64.39/32 183.13.64.48/32 183.13.64.133/32 183.13.64.145/32 183.13.64.146/32 183.13.64.155/32 183.13.65.0/24 183.13.65.128/25 183.13.66.15/32 183.13.66.17/32 183.13.66.111/32 183.13.66.112/32 183.13.68.0/23 192.168.1.0/24 195.14.52.0/24!RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh run object-group network ipv4 OBJ-Email-Nets | i ~/~ | utility wc -l17RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh run object-group port OBJ-Email-Portsobject-group port OBJ-Email-Ports eq smtp eq www eq pop3 eq 143 eq 443 eq 445 eq 993 eq 995!RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh run object-group port OBJ-Email-Ports | i ~eq~ | utility wc -l8RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh run ipv4 access-list FILTER-INipv4 access-list FILTER-IN 10 remark Email Servers 20 permit tcp any net-group OBJ-Email-Nets port-group OBJ-Email-Ports!RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh run int hu 0/7/0/2interface HundredGigE0/7/0/2 ipv4 address 27.7.4.1 255.255.255.0 ipv6 address 2001#27#7#4##1/64 load-interval 30 ipv4 access-group FILTER-IN ingress compress level 3!RP/0/RP0/CPU0#TME-5508-1-6.3.2#You could visualize what would be the traditional/flat ACL with this show command#RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh access-lists ipv4 FILTER-IN expandedipv4 access-list FILTER-IN 20 permit tcp any 183.13.68.0 0.0.1.255 eq smtp 20 permit tcp any 183.13.68.0 0.0.1.255 eq www 20 permit tcp any 183.13.68.0 0.0.1.255 eq pop3 20 permit tcp any 183.13.68.0 0.0.1.255 eq 143 20 permit tcp any 183.13.68.0 0.0.1.255 eq 443 20 permit tcp any 183.13.68.0 0.0.1.255 eq 445 20 permit tcp any 183.13.68.0 0.0.1.255 eq 993 20 permit tcp any 183.13.68.0 0.0.1.255 eq 995 20 permit tcp any 192.168.1.0 0.0.0.255 eq smtp 20 permit tcp any 192.168.1.0 0.0.0.255 eq www 20 permit tcp any 192.168.1.0 0.0.0.255 eq pop3 20 permit tcp any 192.168.1.0 0.0.0.255 eq 143 20 permit tcp any 192.168.1.0 0.0.0.255 eq 443 ...RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh 
access-lists ipv4 FILTER-IN expanded | utility wc -lipv4 access-list FILTER-IN136RP/0/RP0/CPU0#TME-5508-1-6.3.2#This line of access-list FILTER-IN equals to a matrix of 136 entries#To add one new mail server, it’s easy#RP/0/RP0/CPU0#TME-5508-1-6.3.2#confRP/0/RP0/CPU0#TME-5508-1-6.3.2(config)#object-group network ipv4 OBJ-Email-NetsRP/0/RP0/CPU0#TME-5508-1-6.(config-object-group-ipv4)#183.13.64.157/32RP/0/RP0/CPU0#TME-5508-1-6.(config-object-group-ipv4)#commitRP/0/RP0/CPU0#TME-5508-1-6.(config-object-group-ipv4)#endRP/0/RP0/CPU0#TME-5508-1-6.3.2#sh access-lists ipv4 FILTER-IN expanded | i .157ipv4 access-list FILTER-IN 20 permit tcp any host 183.13.64.157 eq smtp 20 permit tcp any host 183.13.64.157 eq www 20 permit tcp any host 183.13.64.157 eq pop3 20 permit tcp any host 183.13.64.157 eq 143 20 permit tcp any host 183.13.64.157 eq 443 20 permit tcp any host 183.13.64.157 eq 445 20 permit tcp any host 183.13.64.157 eq 993 20 permit tcp any host 183.13.64.157 eq 995RP/0/RP0/CPU0#TME-5508-1-6.3.2#Adding one line in the object group for networks, it’s like we add 8 lines in a flat ACL.As you can see, it’s a very flexible way to manage your access-lists.OperationThe hybrid ACL implies we defined object-groups members but also that we used compression#RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh run int hu 0/7/0/2interface HundredGigE0/7/0/2 ipv4 address 27.7.4.1 255.255.255.0 ipv6 address 2001#27#7#4##1/64 load-interval 30 ipv4 access-group FILTER-IN ingress compress level 3!RP/0/RP0/CPU0#TME-5508-1-6.3.2#Once the ACL has been applied in ingress and with compression level 3 on the interface, the PMF block of the pipeline will perform two look-ups# the first one in the external TCAM for objects (compressed result of source address, destination address and source port) the second one in the internal TCAM for the ACL with destination port and the compressed result of the first lookupIt’s not necessary to remove the ACL from the interface to edit the content, it can be done “in-place” and without traffic impact.Coexistence with flat ACLIt’s possible to use traditional access-list entries in the same ACL. 
Each line is pretty much independent from the others, with or without object-groups#ipv4 access-list FILTER-IN 10 permit tcp any net-group SRC port-group PRT 20 permit tcp net-group SRC any port-group PRT 30 permit tcp net-group SRC port-group PRT any 40 permit tcp any port-group PRT net-group SRC 50 permit udp any host 1.2.3.3 eq 445 60 permit ipv4 1.3.5.0/24 any 70 permit ipv4 any 1.3.5.0/24!eTCAM Carving requirementA portion of the eTCAM is used to store part of the ACL information (compressed or not).Jericho+ systemsFor systems based on Jericho+, we don't have anything to worry about# the eTCAM can handle this information without any specific configuration or preparation.Jericho systems with 6.1.x and 6.3.2 onwardsFor systems based on Jericho running 6.1.x and 6.3.2 onwards, no carving is done by default# it will be necessary to change the configuration before enabling hybrid ACLsRP/0/RP0/CPU0#NCS5508-6.3.2#sh contr npu ext loc 0/7/CPU0External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 80b FLP 2047993 7 15 IPV4 DC1 0 80b FLP 2047993 7 15 IPV4 DC2 0 80b FLP 2047993 7 15 IPV4 DC3 0 80b FLP 2047993 7 15 IPV4 DCRP/0/RP0/CPU0#NCS5508-6.3.2#If you try to apply a compressed (level 3) ACL on this interface 0/7/0/2 (LC0/7 is a 24x100-SE), the system will refuse the commit and "show configuration failed" will explain it cannot be done on this line card#RP/0/RP0/CPU0#TME-5508-1-6.3.2(config)#int hu 0/7/0/2RP/0/RP0/CPU0#TME-5508-1-6.3.2(config-if)# ipv4 access-group FILTER-IN ingress compress level 3RP/0/RP0/CPU0#TME-5508-1-6.3.2(config-if)#commit% Failed to commit one or more configuration items during a pseudo-atomic operation. All changes made have been reverted. Please issue 'show configuration failed [inheritance]' from this session to view the errorsRP/0/RP0/CPU0#TME-5508-1-6.3.2(config-if)#show configuration failed!! SEMANTIC ERRORS# This configuration was rejected by!! the system due to semantic errors. The individual!! errors with each failed configuration command can be!! found below.interface HundredGigE0/7/0/2 ipv4 access-group FILTER-IN ingress compress level 3!!% 'dnx_feat_mgr' detected the 'resource not available' condition 'ACL compression is not supported on this LC due to configuration or LC type'!endRP/0/RP0/CPU0#TME-5508-1-6.3.2(config-if)#Starting from 6.3.2, we decided to leave the carving decision to the user instead of arbitrarily reserving 20% of the resource whether or not the operator intends to use this feature.A manual carving of the external TCAM is necessary and can be done with the following steps#RP/0/RP0/CPU0#TME-5508-1-6.3.2(config)#hw-module profile tcam acl-prefix percent 20RP/0/RP0/CPU0#TME-5508-1-6.3.2(config)#commitRP/0/RP0/CPU0#TME-5508-1-6.3.2(config)#endRP/0/RP0/CPU0#TME-5508-1-6.3.2#adminroot connected from 127.0.0.1 using console on TME-5508-1-6.3.2sysadmin-vm#0_RP0# reload rack 0Reload node ?
[no,yes] yesresult Rack graceful reload request on 0 acknowledged.sysadmin-vm#0_RP0#Once reload is completed, you can check the external TCAM carving, it’s now ready for hybrid ACLs.RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh contr npu ext loc 0/7/CPU0Sat Jul 21 13#02#35.879 PDTExternal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 80b FLP 886729 751671 15 IPV4 DC0 1 80b FLP 8192 0 81 INGRESS_IPV4_SRC_IP_EXT0 2 80b FLP 8192 0 82 INGRESS_IPV4_DST_IP_EXT0 3 160b FLP 8192 0 83 INGRESS_IPV6_SRC_IP_EXT0 4 160b FLP 8192 0 84 INGRESS_IPV6_DST_IP_EXT0 5 80b FLP 8192 0 85 INGRESS_IP_SRC_PORT_EXT0 6 80b FLP 8192 0 86 INGRESS_IPV6_SRC_PORT_EXT...Jericho systems with 6.2.x and 6.3.1/6.3.15For systems based on Jericho running 6.2.x and 6.3.1/6.3.1, 20% of the eTCAM is pre-allocated, even if you don’t plan to use hybrid ACLs.Nothing should be done if we decide to enable this feature.RP/0/RP0/CPU0#TME-5508-6.2.3#sh contr npu externaltcam loc 0/7/CPU0External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 80b FLP 498950 1139450 15 IPV4 DC0 1 80b FLP 28672 0 76 INGRESS_IPV4_SRC_IP_EXT0 2 80b FLP 28672 0 77 INGRESS_IPV4_DST_IP_EXT0 3 160b FLP 26624 0 78 INGRESS_IPV6_SRC_IP_EXT0 4 160b FLP 26624 0 79 INGRESS_IPV6_DST_IP_EXT0 5 80b FLP 28672 0 80 INGRESS_IP_SRC_PORT_EXT0 6 80b FLP 28672 0 81 INGRESS_IPV6_SRC_PORT_EXT...Hybrid ACL can be applied directly. No specific preparation needed.MonitoringLet’s take a look at several show commands useful to verify the ACL configuration and application#RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh access-lists ipv4 FILTER-IN object-groupsACL Name # FILTER-INNetwork Object-group # OBJ-Email-Nets--------------------------- Total 1Port Object-group # OBJ-Email-Ports--------------------------- Total 1RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh access-lists FILTER-IN usage pfilter loc 0/7/CPU0Interface # HundredGigE0/7/0/2 Input ACL # Common-ACL # N/A ACL # FILTER-IN (comp-lvl 3) Output ACL # N/AThe verify option will permit to check if the compression happened correctly (expect “passed” in these lines).RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh access-lists FILTER-IN hardware ingress interface hundredGigE 0/7/0/2 verify location 0/7/CPU0Verifying TCAM entries for FILTER-INPlease wait... 
INTF NPU lookup ACL # intf Total compression Total result failed(Entry) TCAM entries type ID shared ACES prefix-type Entries ACE SEQ # verified ---------- --- ------- --- ------ ------ ----------- ------- ------ ------------- ------------HundredGigE0_7_0_2 (ifhandle# 0x3800120) 0 IPV4 1 1 1 COMPRESSED 9 passed 9 SRC IP 1 passed 1 DEST IP 19 passed 19 SRC PORT 1 passed 1Since the ACL is stored in both internal and external TCAMs, we can check the memory utilization with the following#RP/0/RP0/CPU0#TME-5508-1-6.3.2#show controllers npu internaltcam loc 0/7/CPU0Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0\\1 320b pmf-0 1979 48 7 INGRESS_LPTS_IPV40 0\\1 320b pmf-0 1979 8 12 INGRESS_RX_ISIS0 0\\1 320b pmf-0 1979 2 32 INGRESS_QOS_IPV60 0\\1 320b pmf-0 1979 2 34 INGRESS_QOS_L20 0\\1 320b pmf-0 1979 9 49 INGRESS_HYBRID_ACL0 2 160b pmf-0 2044 2 31 INGRESS_QOS_IPV40 2 160b pmf-0 2044 1 33 INGRESS_QOS_MPLS0 2 160b pmf-0 2044 1 42 INGRESS_ACL_L20 3 160b egress_acl 2031 17 4 EGRESS_QOS_MAP0 4\\5 320b pmf-0 2013 35 8 INGRESS_LPTS_IPV60 6 160b Free 2048 0 00 7 160b Free 2048 0 00 8 160b Free 2048 0 00 9 160b Free 2048 0 00 10 160b Free 2048 0 00 11 160b Free 2048 0 00 12 160b pmf-1 30 41 11 INGRESS_RX_L20 12 160b pmf-1 30 13 26 INGRESS_MPLS0 12 160b pmf-1 30 44 79 INGRESS_BFD_IPV4_NO_DESC_TCAM_T0 13 160b pmf-1 124 3 10 INGRESS_DHCP0 13 160b pmf-1 124 1 41 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 14 160b Free 128 0 00 15 160b Free 128 0 0...RP/0/RP0/CPU0#TME-5508-1-6.3.2#show controllers npu externaltcam loc 0/7/CPU0External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 80b FLP 886729 751671 15 IPV4 DC0 1 80b FLP 8191 1 81 INGRESS_IPV4_SRC_IP_EXT0 2 80b FLP 8173 19 82 INGRESS_IPV4_DST_IP_EXT0 3 160b FLP 8192 0 83 INGRESS_IPV6_SRC_IP_EXT0 4 160b FLP 8192 0 84 INGRESS_IPV6_DST_IP_EXT0 5 80b FLP 8191 1 85 INGRESS_IP_SRC_PORT_EXT0 6 80b FLP 8192 0 86 INGRESS_IPV6_SRC_PORT_EXT...RP/0/RP0/CPU0#TME-5508-1-6.3.2#The DPA is the abstraction layer used for the programming of the hardware.If the resource is exhausted, you’ll find “HW Failures” count incremented#RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh dpa resources ipaclprefix loc 0/7/CPU0~ipaclprefix~ DPA Table (Id# 111, Scope# Non-Global)-------------------------------------------------- NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 In Use# 20 0 0 0 Create Requests Total# 20 0 0 0 Success# 20 0 0 0 Delete Requests Total# 0 0 0 0 Success# 0 0 0 0 Update Requests Total# 0 0 0 0 Success# 0 0 0 0 EOD Requests Total# 0 0 0 0 Success# 0 0 0 0 Errors HW Failures# 0 0 0 0 Resolve Failures# 0 0 0 0 No memory in DB# 0 0 0 0 Not found in DB# 0 0 0 0 Exists in DB# 0 0 0 0RP/0/RP0/CPU0#TME-5508-1-6.3.2#RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh dpa resources scaleacl loc 0/7/CPU0~scaleacl~ DPA Table (Id# 114, Scope# Non-Global)-------------------------------------------------- NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 In Use# 9 0 0 0 Create Requests Total# 9 0 0 0 Success# 9 0 0 0 Delete Requests Total# 0 0 0 0 Success# 0 0 0 0 Update Requests Total# 0 0 0 0 Success# 0 0 0 0 EOD Requests Total# 0 0 0 0 Success# 0 0 0 0 Errors HW Failures# 0 0 0 0 Resolve Failures# 0 0 0 0 No memory in DB# 0 0 0 0 Not found in DB# 0 0 0 0 Exists in DB# 0 0 0 0RP/0/RP0/CPU0#TME-5508-1-6.3.2#You 
will notice that the size of each object-group is one more than the number of entries. It's always +1 for IPv4 and +3 for IPv6.The statistics can also be monitored. In this example, permit entries are not counted (the profile is not enabled by default). Check the ACL entries in both engines.RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh contr npu resources stats inst 0 loc 0/7/CPU0System information for NPU 0# Counter processor configuration profile# Default Next available counter processor# 4Counter processor# 0 | Counter processor# 1 State# In use | State# In use | Application# In use Total | Application# In use Total Trap 95 300 | Trap 95 300 Policer (QoS) 0 6976 | Policer (QoS) 0 6976 ACL RX, LPTS 182 915 | ACL RX, LPTS 182 915 | |Counter processor# 2 | Counter processor# 3 State# In use | State# In use | Application# In use Total | Application# In use Total VOQ 140 8191 | VOQ 140 8191 | |Counter processor# 4 | Counter processor# 5 State# Free | State# Free | |Counter processor# 6 | Counter processor# 7 State# Free | State# Free | |Counter processor# 8 | Counter processor# 9 State# Free | State# Free | |Counter processor# 10 | Counter processor# 11 State# In use | State# In use | Application# In use Total | Application# In use Total L3 RX 3 8191 | L3 RX 11 8191 L2 RX 1 8192 | L2 RX 1 8192 | |Counter processor# 12 | Counter processor# 13 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 4 16383 | Interface TX 6 16383 | |Counter processor# 14 | Counter processor# 15 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 0 16384 | Interface TX 0 16384 | |RP/0/RP0/CPU0#TME-5508-1-6.3.2#NotesObject-group and non-eTCAM?It's possible to define object-based ACLs and apply them in non-eTCAM systems.They will be expanded and programmed in the internal TCAM. But it will not be possible to use the compression.Use of rangesRanges are supported but within a limit of 24 range-IDs.CompressionWe only support level 3 compression# source address, destination address and source port are compressed and stored in the external TCAM; the destination port is not compressed; level 0 equals not-compressed.ACL content editionIt's possible to edit the object-groups in place, without having to remove the ACL from the interface. But editing a net-group or port-group will force the replacement and reprogramming of the ACL. The counters will be reset.StatisticsPermits are not counted by default. It's necessary to enable another hw-profile to count permits, but it will replace the QoS counters#RP/0/RP0/CPU0#TME-5508-1-6.3.2#confhw-module profile stats acl-permitRP/0/RP0/CPU0#TME-5508-1-6.3.2(config)#hw-module profile stats acl-permitIn order to activate/deactivate this stats profile, you must manually reload the chassis/all line cardsRP/0/RP0/CPU0#TME-5508-1-6.3.2(config)#commitRP/0/RP0/CPU0#TME-5508-1-6.3.2(config)#endRP/0/RP0/CPU0#TME-5508-1-6.3.2#adminroot connected from 127.0.0.1 using console on TME-5508-1-6.3.2sysadmin-vm#0_RP0# reload rack 0Sat Jul 21 22#27#14.156 UTCReload node ?
[no,yes] yesresult Rack graceful reload request on 0 acknowledged.After the reload#RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh contr npu resources stats inst 0 loc 0/7/CPU0System information for NPU 0# Counter processor configuration profile# ACL Permit Next available counter processor# 4Counter processor# 0 | Counter processor# 1 State# In use | State# In use | Application# In use Total | Application# In use Total Trap 95 300 | Trap 95 300 ACL RX, LPTS 182 7891 | ACL RX, LPTS 182 7891 | |Counter processor# 2 | Counter processor# 3 State# In use | State# In use | Application# In use Total | Application# In use Total VOQ 140 8191 | VOQ 140 8191 | |Counter processor# 4 | Counter processor# 5 State# Free | State# Free | |Counter processor# 6 | Counter processor# 7 State# Free | State# Free | |Counter processor# 8 | Counter processor# 9 State# Free | State# Free | |Counter processor# 10 | Counter processor# 11 State# In use | State# In use | Application# In use Total | Application# In use Total L3 RX 3 8191 | L3 RX 9 8191 L2 RX 1 8192 | L2 RX 1 8192 | |Counter processor# 12 | Counter processor# 13 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 4 16383 | Interface TX 6 16383 | |Counter processor# 14 | Counter processor# 15 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 0 16383 | Interface TX 0 16383 | |RP/0/RP0/CPU0#TME-5508-1-6.3.2#Now permit matches will be counted#RP/0/RP0/CPU0#TME-5508-1-6.3.2#sh access-lists ipv4 FILTER-IN hardware ingress interface hundredGigE 0/7/0/2 loc 0/7/CPU0ipv4 access-list FILTER-IN 10 permit tcp any net-group SRC port-group PRT 20 permit tcp net-group SRC any port-group PRT (3 matches) 30 permit tcp net-group SRC port-group PRT any 40 permit tcp any port-group PRT net-group SRCRP/0/RP0/CPU0#TME-5508-1-6.3.2#If packets are matching a permit entry in the ACL and are targeted to the router, they will be punted but not counted in the ACL “matches”.ConclusionWe hope we demonstrated the power of hybrid ACL for infrastructure security. They offer a lot of flexibility and huge scale.Definitely something you should consider for a greenfield deployment.Nevertheless, moving from existing traditional ACLs is not an easy task. It’s common to see networks with very large flat ACLs, poorly documented. The operators are usually very uncomfortable touching it.In these brownfield scenarios, it’s mandatory to start an entire project to redefine the flows that are allowed and forbidden through these routers and it could be a long process.Post Scriptum# the address ranges used in this article (183.13.x.y) are random numbers I picked. We are not sharing any real ACL from any real operator here. 
We could have picked the 192.0.2.0/24 but it would have made the examples less relevant.", "url": "/tutorials/security-acl-on-ncs5500-part2-hybrid-acl/", "author": "Nicolas Fevrier", "tags": "iosxr, acl, ncs5500, hybrid, scale, compressed, xr" } , "tutorials-ncs5500-fib-programming-speed": { "title": "NCS5500 FIB Programming Speed", "content": " NCS5500 FIB Programming Speed Programming Speed Video demo Test methodology Test results Conclusion Acknowledgements You can find more content related to NCS5500 including routing memory management, VRF, URPF, ACLs, Netflow following this link.Programming SpeedIn this post, we will measure the time it takes to learn routes in the RIB and in the FIB of an NCS 5500.The first one exists in the Route Processor and will be provided by a BGP process.The second one exists in multiple places, but to simplify the discussion, we will measure what is actually programmed in the NPU database.We often hear that “Merchant Silicon systems program prefixes slower than other products” but clearly this assertion is not based on facts and we will debunk it with this post and video.Video demoLet’s get started with a video we recorded and published on youtube.In this demo, we advertised 1,200,000 IPv4 routes to our system under test# 300K IPv4/22 300K IPv4/23 600K IPv4/25 router bgp 1000bgp_id 192.168.100.151neighbor 192.168.100.200 remote-as 100neighbor 192.168.100.200 update-source 192.168.100.151capability ipv4 unicastnetwork 1 11.0.0.0/23 300000nexthop 1 192.168.22.1network 2 101.0.0.0/25 600000nexthop 2 192.168.22.1network 3 171.0.0.0/22 300000nexthop 3 192.168.22.1capability refreshThe results of this test were# RIB programming in RP# 133,000 pfx/s eTCAM programming speed# 29,000 pfx/sFor the next test in this blog post, we will use the exact same methodology but this time we will use a real internet view (recorded from a real internet router).Test methodologyThe system (DUT for Device Under Test) we will use for this demo is a chassis with a 36x 100G ports “Scale” line card. That means it’s based on the Jericho+ chipset with a new generation external TCAM. Since we are using IOS XR 6.3.2 or 6.5.1, all routes (IPv4 and IPv6) are stored on the eTCAM, regardless of their prefix length. RP/0/RP0/CPU0#TME-5508-1-6.5.1#sh plat 0/1Node Type State Config state--------------------------------------------------------------------------------0/1/CPU0 NC55-36X100G-A-SE IOS XR RUN NSHUTRP/0/RP0/CPU0#TME-5508-1-6.5.1#sh verCisco IOS XR Software, Version 6.5.1Copyright (c) 2013-2018 by Cisco Systems, Inc.Build Information# Built By # ahoang Built On # Wed Aug 8 17#10#43 PDT 2018 Built Host # iox-ucs-025 Workspace # /auto/srcarchive17/prod/6.5.1/ncs5500/ws Version # 6.5.1 Location # /opt/cisco/XR/packages/cisco NCS-5500 () processorSystem uptime is 1 day 1 hour 5 minutesRP/0/RP0/CPU0#TME-5508-1-6.5.1#The speed at which a router learns BGP routes is directly dependent on the neighbor and how fast it is able to advertise these prefixes. Since BGP is based on TCP, all messages are ack’d and the local process can request to slow down for any reason. That’s why we thought it would not be relevant to use a route generator for this test. Or at least, we didn’t want the device under test to be directly peered to the route generator.We decided to use an intermediate system of the same kind, for instance an NCS55A1-24H. This system will receive the BGP table from our route generator.
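A quick note on how the numbers in the Test results section below are derived# each programming speed is simply a prefix count divided by the elapsed time between two of the recorded timestamps (T1, T2, T3). The short Python sketch below only illustrates that arithmetic; the helper names and the values are hypothetical and are not taken from the actual measurements.

def to_seconds(ts):
    # Parse a "MM:SS.mmm" timestamp into seconds.
    # Assumes both timestamps fall within the same hour.
    minutes, seconds = ts.split(":")
    return int(minutes) * 60 + float(seconds)

def prefixes_per_second(prefix_count, t_start, t_end):
    # Programming speed = number of prefixes / elapsed seconds.
    return prefix_count / (to_seconds(t_end) - to_seconds(t_start))

# Hypothetical example: 750,000 prefixes learnt in 12 seconds -> 62500 pfx/s
print(round(prefixes_per_second(750000, "10:00.000", "10:12.000")))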
When all the routes will be received in this intermediate system, we will enable the BGP session to the system under test.That way, the routes are advertised from a real router BGP stack and the results are representing what you could expect in your production environment.We will monitor the programming speed of the entries in the RIB (in the Route Processor) and in the external TCAM (connected to the Jericho+ ASIC) via Streaming Telemetry.The DUT will stream every second the counters related to the BGP table and the ASIC resource utilization#The related router configuration#RP/0/RP0/CPU0#TME-5508-1-6.5.1#sh run telemetry model-driventelemetry model-driven destination-group DGroup1 address-family ipv4 10.30.110.40 port 5432 encoding self-describing-gpb protocol tcp ! ! sensor-group fib sensor-path Cisco-IOS-XR-fib-common-oper#fib/nodes/node/protocols/protocol/vrfs/vrf/summary ! sensor-group brcm sensor-path Cisco-IOS-XR-fretta-bcm-dpa-hw-resources-oper#dpa/stats/nodes/node/hw-resources-datas/hw-resources-data ! sensor-group routing sensor-path Cisco-IOS-XR-ipv4-bgp-oper#bgp/instances/instance/instance-active/default-vrf/process-info sensor-path Cisco-IOS-XR-ip-rib-ipv4-oper#rib/vrfs/vrf/afs/af/safs/saf/ip-rib-route-table-names/ip-rib-route-table-name/protocol/bgp/as/information sensor-path Cisco-IOS-XR-ip-rib-ipv6-oper#ipv6-rib/vrfs/vrf/afs/af/safs/saf/ip-rib-route-table-names/ip-rib-route-table-name/protocol/bgp/as/information ! subscription fib sensor-group-id fib strict-timer sensor-group-id fib sample-interval 1000 destination-id DGroup1 ! subscription brcm sensor-group-id brcm strict-timer sensor-group-id brcm sample-interval 1000 destination-id DGroup1 ! subscription routing sensor-group-id routing strict-timer sensor-group-id routing sample-interval 1000 destination-id DGroup1 !!RP/0/RP0/CPU0#TME-5508-1-6.5.1#Step 0# “Before the test”In this step, the router generator established an eBGP (AS1000 to AS100) session to the intermediate router and advertised the full internet view# 751,657 IPv4 routes.We can check the routes are indeed received and valid but also their distribution in term of prefix length#RP/0/RP0/CPU0#NCS55A1-24H-6.3.2#sh bgp sumBGP router identifier 1.1.1.22, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0000000 RD version# 23817074BGP main routing table version 23817074BGP NSR Initial initsync version 1200006 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 23817074 23817074 23817074 23817074 23817074 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.22.1 0 100 164033 15246 0 0 0 00#26#05 Idle (Admin)192.168.100.151 0 1000 1241354 49602 23817074 0 0 00#05#14 751657RP/0/RP0/CPU0#NCS55A1-24H-6.3.2#sh dpa resources iproute loc 0/0/CPU0~iproute~ DPA Table (Id# 24, Scope# Global)--------------------------------------------------IPv4 Prefix len distributionPrefix Actual Prefix Actual /0 1 /1 0 /2 0 /3 0 /4 1 /5 0 /6 0 /7 0 /8 15 /9 13 /10 35 /11 106 /12 285 /13 550 /14 1066 /15 1880 /16 13419 /17 7773 /18 13636 /19 25026 /20 38261 /21 43073 /22 80751 /23 67073 /24 376982 /25 567 /26 2032 /27 4863 /28 15599 /29 16868 /30 41735 /31 52 /32 15 NPU ID# NPU-0 NPU-1 In Use# 751677 751677 Create Requests Total# 12246542 12246542 Success# 12246542 12246542 ... 
SNIP ...You notice the session to the device under test (192.168.22.1) is currently in state “Idle (Admin)”.It means the neighbor under the router bgp is configured with “shutdown”.Step 1# Test begins at T1The test begins when we unshut the BGP peer from the intermediate router.RP/0/RP0/CPU0#NCS55A1-24H-6.3.2#confRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config)#RP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config)#router bgp 100RP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config-bgp)# neighbor 192.168.22.1RP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config-bgp-nbr)#no shutRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config-bgp-nbr)#commitRP/0/RP0/CPU0#NCS55A1-24H-6.3.2(config-bgp-nbr)#endRP/0/RP0/CPU0#NCS55A1-24H-6.3.2#As soon as the session is established, the first routes are received and we note down this particular moment as “T1”#Step 2# All routes advertised via BGP at T2We note down the T2 timestamp# it represents when all the BGP routes have been received on the Device Under Test.(T2 - T1) is the time it took to advertise all the BGP routes from intermediate router to DUT.The speed to program the BGP in the RP RIB is 751677 / (T2 - T1) and is expressed in number of prefixes per second.Step 3# All routes are programmed in eTCAM at T3We note down the last timestamp# T3. It represents the moment all the prefixes have been programmed in the hardware.(T3 - T1) is the time it took to program all the routes in the Jericho+ external TCAM.The speed to program the hardware is 751677 / (T3 - T1) and is expressed in number of prefixes per second.Test results(T2 - T1) = 49#12.736 - 48#59.712 = 12sSpeed to program BGP# 751,677 / 12 = 62,639 pfx/s(T3 - T1) = 49#27.739 - 48#59.712 = 27sSpeed to program hardware# 751,677 / 27 = 27,839 pfx/sWith an internet distribution, we note that BGP advertisement is slower than the results we got in the first test with 1.2M routes (all aligned and sorted) but the hardware programming speed is consistent.And we performed the opposite test with the shutdown of the BGP peer#(T2 - T1) = 54#07.791 - 54#02.769 = 5sSpeed to withdraw all BGP routes# 751,677 / 5 = 150,335 pfx/s(T3 - T1) = 54#25.760 - 54#02.769 = 23sSpeed to remove all routes from hardware# 751,677 / 23 = 32,681 pfx/sConclusionThe engineering team implemented multiple innovative ideas to speed up the process of programming entries in the hardware (prefix re-ordering, batching, direct memory access, etc).The result is a programming performance comparable, if not better, to platforms based on custom silicon.One last word, remember that we support multiple fast convergence features like BGP PIC Core. We maintain different databases for prefixes and for next-hop/adjacencies. It’s only necessary to change a pointer to a new next-hop when you lose a BGP peer, and not to reprogram the entire internet table.AcknowledgementsBig shout out to Viktor Osipchuk for his help and availability. 
I invite you to check the excellent posts he published on MDT, Pipeline, etc# https#//xrdocs.io/telemetry/.", "url": "/tutorials/ncs5500-fib-programming-speed/", "author": "Nicolas Fevrier", "tags": "ncs5500, ncs 5500, FIB, eTCAM, Internet" } , "tutorials-bgp-evpn-configuration-ncs-5500-part-1": { "title": "Configure BGP-EVPN Control-Plane & Segment Routing based MPLS Forwarding-Plane", "content": " On This Page Introduction to BGP-EVPN BGP-EVPN Key Route Types for Reference Configuring BGP EVPN control-plane and ISIS Segment Routing forwarding plane Reference Topology# Task 1# Configure the Routing Protocol for Transport# Task 2# Enable ISIS Segment Routing# Task 3# Configure the BGP-EVPN Control-Plane Introduction to BGP-EVPNEVPN is the next generation L2VPN technology, it provides layer-2 as well as layer-3 VPN services in a scalable and simplified manner. The evolution of EVPN started due to the need of a scalable solution to bridge various layer-2 domains and overcome the limitations faced by VPLS such as scalability, multi-homing and per-flow load balancing.EVPN uses MAC addresses as routable addresses and distribute them to all participating PEs via MP-BGP EVPN control-plane. EVPN is used for E-LAN, E-LINE, E-TREE services and provides data-plane and control-plane separation. This allows the use of different encapsulation mechanisms in data plane while maintaining the same control-plane. In addition, EVPN offers many advantages over existing technologies, including more efficient load-balancing of VPN traffic. Some of the prominent advantages are# Multi-homing and redundancy Per flow-based load balancing Scalability Provisioning simplicity Reduced operational complexityIn this and next few posts we will cover BGP-EVPN configuration, implementation and verification on NCS 5500 Platform using IOS-XR. The goal of this tutorial is to provide familiarity to BGP-EVPN from configuration perspective and cover the following use cases. Configuring BGP EVPN control-plane and Segment Routing based forwarding plane Configure EVPN based Multi-homing to the Hosts EVPN based Layer-2 VPN Service EVPN-IRB between Leafs in the networkBGP-EVPN Key Route Types for ReferenceThe EVPN network layer reachability information (NLRI) provides different route types. Following is the summary of the route types and their usage. Route Type Usage 0x1 Ethernet Auto-Discovery (A-D) Route MAC Mass-Withdraw, Aliasing (load balancing) 0x2 MAC Advertisement Route Advertises Host MAC and IP address 0x3 Inclusive Multicast Route Indicates interest of BUM traffic for attached L2 segments 0x4 Ethernet Segment Route Auto discovery of Multi-homed Ethernet Segments and Designated Forwarder (DF) Election 0x5 IP Prefix Route Advertises IP prefix for a subnet via EVPN address family Note# We are using Spine Leaf Fabric example in the configuration but essentially a Leaf is a PE and Spine is a P router as we are implementing MPLS forwarding plane with BGP-EVPN.Configuring BGP EVPN control-plane and ISIS Segment Routing forwarding planeIn this post, we will configure the BGP EVPN control-plane and ISIS Segment Routing based forwarding plane. This will provide the basis to enable us for provisioning of EVPN based services using segment routing transport.Reference Topology#Task 1# Configure the Routing Protocol for Transport#Configure IGP routing protocol between Leafs and Spines. In this tutorial we are using ISIS as the underlay routing protocol. 
Loopback 0 Prefix-SID ISIS Net Spine-1 6.6.6.6/32 16006 49.0001.0000.0000.0006.0 Spine-2 7.7.7.7/32 16007 49.0001.0000.0000.0007.0 Leaf-1 1.1.1.1/32 16001 49.0001.0000.0000.0001.0 Leaf-2 2.2.2.2/32 16002 49.0001.0000.0000.0002.0 Leaf-5 5.5.5.5/32 16005 49.0001.0000.0000.0005.0 Following is a sample config from Leaf-1 to implement ISIS routing protocol in the network. Similar configs with relevant Net address (shown in above table) and interfaces should be used on other devices to bring up the ISIS routing protocol in the network. Don’t configure ISIS on the links from host to leafs, these will be set up later as layer-2 links. router isis 1 is-type level-2-only net 49.0001.0000.0000.0001.00 nsr log adjacency changes address-family ipv4 unicast metric-style wide ! interface Bundle-Ether16 point-to-point address-family ipv4 unicast ! interface Bundle-Ether17 point-to-point address-family ipv4 unicast ! interface Loopback0 passive address-family ipv4 unicast !Verify that the point-to-point interfaces between the spines and leafs and other devices in the network are up and the ISIS routing adjacency is formed between the devices as per the topology. In this setup, ISIS routing protocol is configured on all the devices except the hosts, the host will be connected layer-2 dual-homed to the Leafs.The “show isis neighbor” and “show route isis” commands can be used to verify that the adjacency is formed and the routes of all the Leafs and Spines are learnt via ISIS.Task 2# Enable ISIS Segment Routing#Configure Segment Routing protocol under ISIS routing protocol which enables MPLS on all the non-passive ISIS interfaces. A prefix SID is associated with an IP prefix and is manually configured from the segment routing global block (SRGB) range of labels. It is configured under the loopback interface with the loopback address of the node as the prefix. The prefix SID is globally unique within the segment routing domain.The Prefix-SID can be an absolute value or an indexed value. In this guide, we are configuring Prefix-SID as absolute value. ISIS Segment Routing is configured in the Fabric between Leafs and Spines.Following is a sample config to enable Segment Routing in the network. Similar config with prefix-SID that is unique for each device in the network, should be configured on other devices (as per the above diagram) to enable ISIS Segment Routing. In this config prefix-SID is enabled on the “loopback 0” interface of the devices. Spine-1# router isis 1 address-family ipv4 unicast segment-routing mpls ! interface Loopback0 passive address-family ipv4 unicast prefix-sid absolute 16006 ! Spine-2# router isis 1 address-family ipv4 unicast segment-routing mpls ! interface Loopback0 passive address-family ipv4 unicast prefix-sid absolute 16007 !Verify that all devices that have ISIS Segment Routing configured have advertised their prefix-SIDs. Also verify the prefix-SIDs are learnt and programmed in the forwarding plane on each device. This output is collected from Spines; we can see that the prefix-SID labels (identified by “Pfx”) of all the Leafs and other routers are learnt and programmed in the forwarding plane along with their outgoing interfaces. 
Spine-1# RP/0/RP0/CPU0#Spine-1#show isis segment-routing label table Tue Sep 4 23#35#11.115 UTC IS-IS 1 IS Label Table Label Prefix/Interface ---------- ---------------- 16001 1.1.1.1/32 16002 2.2.2.2/32 16005 5.5.5.5/32 16006 Loopback0 16007 7.7.7.7/32 RP/0/RP0/CPU0#Spine-1# RP/0/RP0/CPU0#Spine-1#show mpls forwarding Local Outgoing Prefix Outgoing Next Hop Bytes Label Label or ID Interface Switched ------ ----------- ------------------ ------------ --------------- ------------ 16001 Pop SR Pfx (idx 1) BE16 192.1.6.2 0 16002 Pop SR Pfx (idx 2) BE26 192.2.6.2 0 16005 Pop SR Pfx (idx 5) BE56 192.5.6.2 0 16007 16007 SR Pfx (idx 7) BE16 192.1.6.2 0 16007 SR Pfx (idx 7) BE26 192.2.6.2 0 16007 SR Pfx (idx 7) BE56 192.5.6.2 0 64000 Pop SR Adj (idx 1) BE16 192.1.6.2 0 64001 Pop SR Adj (idx 3) BE16 192.1.6.2 0 64002 Pop SR Adj (idx 1) BE26 192.2.6.2 0 64003 Pop SR Adj (idx 3) BE26 192.2.6.2 0 64004 Pop SR Adj (idx 1) BE56 192.5.6.2 0 64005 Pop SR Adj (idx 3) BE56 192.5.6.2 0 Spine-2# RP/0/RP0/CPU0#Spine-2#show isis segment-routing label table Tue Sep 4 23#45#48.834 UTC IS-IS 1 IS Label Table Label Prefix/Interface ---------- ---------------- 16001 1.1.1.1/32 16002 2.2.2.2/32 16005 5.5.5.5/32 16006 6.6.6.6/32 16007 Loopback0 RP/0/RP0/CPU0#Spine-2# RP/0/RP0/CPU0#Spine-2#show mpls forwarding Tue Sep 4 23#46#40.028 UTC Local Outgoing Prefix Outgoing Next Hop Bytes Label Label or ID Interface Switched ------ ----------- ------------------ ------------ --------------- ------------ 16001 Pop SR Pfx (idx 1) BE17 192.1.7.2 0 16002 Pop SR Pfx (idx 2) BE27 192.2.7.2 0 16005 Pop SR Pfx (idx 5) BE57 192.5.7.2 0 16006 16006 SR Pfx (idx 6) BE17 192.1.7.2 0 16006 SR Pfx (idx 6) BE27 192.2.7.2 0 16006 SR Pfx (idx 6) BE57 192.5.7.2 0 64000 Pop SR Adj (idx 1) BE17 192.1.7.2 0 64001 Pop SR Adj (idx 3) BE17 192.1.7.2 0 64002 Pop SR Adj (idx 1) BE27 192.2.7.2 0 64003 Pop SR Adj (idx 3) BE27 192.2.7.2 0 64004 Pop SR Adj (idx 1) BE57 192.5.7.2 0 64005 Pop SR Adj (idx 3) BE57 192.5.7.2 0 RP/0/RP0/CPU0#Spine-2#After configuring ISIS segment routing, verify that the underlay is capable of forwarding traffic using labels assigned by segment routing.Below output shows traceroute from Leaf-1 to Leaf-5 using the loopback address. Trace from Leaf-1 reaches Leaf-5 via Spines using label forwarding where Spine is the PHP for Leaf-5. Ping from Leaf-1 to Leaf-5# RP/0/RP0/CPU0#Leaf-1#ping sr-mpls 5.5.5.5/32 Tue Sep 4 23#40#51.032 UTC Sending 5, 100-byte MPLS Echos to 5.5.5.5/32, timeout is 2 seconds, send interval is 0 msec# Codes# '!' - success, 'Q' - request not sent, '.' - timeout, 'L' - labeled output interface, 'B' - unlabeled output interface, 'D' - DS Map mismatch, 'F' - no FEC mapping, 'f' - FEC mismatch, 'M' - malformed request, 'm' - unsupported tlvs, 'N' - no rx label, 'P' - no rx intf label prot, 'p' - premature termination of LSP, 'R' - transit router, 'I' - unknown upstream index, 'X' - unknown return code, 'x' - return code 0 Type escape sequence to abort. !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 3/5/13 ms RP/0/RP0/CPU0#Leaf-1# Trace from Leaf-1 to Leaf-5 RP/0/RP0/CPU0#Leaf-1#trace sr-mpls 5.5.5.5/32 Tue Sep 4 23#42#06.069 UTC Tracing MPLS Label Switched Path to 5.5.5.5/32, timeout is 2 seconds Codes# '!' - success, 'Q' - request not sent, '.' 
- timeout, 'L' - labeled output interface, 'B' - unlabeled output interface, 'D' - DS Map mismatch, 'F' - no FEC mapping, 'f' - FEC mismatch, 'M' - malformed request, 'm' - unsupported tlvs, 'N' - no rx label, 'P' - no rx intf label prot, 'p' - premature termination of LSP, 'R' - transit router, 'I' - unknown upstream index, 'X' - unknown return code, 'x' - return code 0 Type escape sequence to abort. 0 192.1.7.2 MRU 1500 [Labels# 16005 Exp# 0] L 1 192.1.7.1 MRU 1500 [Labels# implicit-null Exp# 0] 121 ms ! 2 192.5.7.2 4 ms RP/0/RP0/CPU0#Leaf-1#Task 3# Configure the BGP-EVPN Control-PlaneMP-BGP with its various address families is used to transport specific reachability information in the network. BGP’s L2VPN-EVPN address family is capable of transporting tenant-aware/VRF-aware IP (Layer-3) and MAC (Layer-2) reachability information in MP-BGP. BGP EVPN provides the learnt information to all the devices within the network through a common control plane. BGP EVPN next-hops are going to be reachable via segment routing paths.In this configuration guide to configure EVPN in the Fabric, we will configure iBGP EVPN, however eBGP EVPN can also be configured and is support on NCS 5500 routers. Spines are configured as the BGP EVPN Route Reflectors. Leaf-1, Leaf-2 and Leaf-5 will all be Route Reflector clients.Configure Spines as RR for BGP EVPN address family. Spine-1# router bgp 65001 bgp router-id 6.6.6.6 ! address-family l2vpn evpn ! neighbor-group RRC remote-as 65001 update-source Loopback0 address-family l2vpn evpn route-reflector-client ! ! neighbor 1.1.1.1 use neighbor-group RRC description BGP session to Leaf-1 ! neighbor 2.2.2.2 use neighbor-group RRC description BGP session to Leaf-2 ! neighbor 5.5.5.5 use neighbor-group RRC description BGP session to Leaf-5 ! Spine-2# router bgp 65001 bgp router-id 7.7.7.7 ! address-family l2vpn evpn ! neighbor-group RRC remote-as 65001 update-source Loopback0 address-family l2vpn evpn route-reflector-client ! ! neighbor 1.1.1.1 use neighbor-group RRC description BGP session to Leaf-1 ! neighbor 2.2.2.2 use neighbor-group RRC description BGP session to Leaf-2 ! neighbor 5.5.5.5 use neighbor-group RRC description BGP session to Leaf-5 ! !Use the following configuration and apply it to configure the Leaf-1 Leaf-2 and Leaf-5 to form the BGP EVPN adjacency between Leafs and Route Reflectors. Leaf-1# router bgp 65001 bgp router-id 1.1.1.1 ! address-family l2vpn evpn ! neighbor 6.6.6.6 remote-as 65001 description ~BGP session to Spine-1~ update-source Loopback0 address-family l2vpn evpn ! ! neighbor 7.7.7.7 remote-as 65001 description ~BGP session to Spine-2~ update-source Loopback0 address-family l2vpn evpn ! ! ! Leaf-2# router bgp 65001 bgp router-id 2.2.2.2 ! address-family l2vpn evpn ! neighbor 6.6.6.6 remote-as 65001 description ~BGP session to Spine-1~ update-source Loopback0 address-family l2vpn evpn ! ! neighbor 7.7.7.7 remote-as 65001 description ~BGP session to Spine-2~ update-source Loopback0 address-family l2vpn evpn ! ! ! Leaf-5# router bgp 65001 bgp router-id 5.5.5.5 ! address-family l2vpn evpn ! neighbor 6.6.6.6 remote-as 65001 description ~BGP session to Spine-1~ update-source Loopback0 address-family l2vpn evpn ! ! neighbor 7.7.7.7 remote-as 65001 description ~BGP session to Spine-2~ update-source Loopback0 address-family l2vpn evpn ! ! !Use “show bgp l2vpn evpn summary” cli command to verify the evpn neighborship between Route Reflectors and Leafs. 
Below output from the Spines show that the BGP EVPN neighborship is formed between the Leafs and the Route Reflectors and the control-plane is up. Spine-1# RP/0/RP0/CPU0#Spine-1#show bgp l2vpn evpn summary BGP router identifier 6.6.6.6, local AS number 65001 Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVer Speaker 1 1 1 1 1 0 Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd 1.1.1.1 0 65001 8 8 1 0 0 00#06#02 0 2.2.2.2 0 65001 7 7 1 0 0 00#04#53 0 5.5.5.5 0 65001 7 7 1 0 0 00#04#14 0 Spine-2# RP/0/RP0/CPU0#Spine-2#show bgp l2vpn evpn summary BGP router identifier 7.7.7.7, local AS number 65001 Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVer Speaker 1 1 1 1 1 0 Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd 1.1.1.1 0 65001 9 10 1 0 0 00#06#50 0 2.2.2.2 0 65001 8 8 1 0 0 00#05#43 0 5.5.5.5 0 65001 7 7 1 0 0 00#05#03 0In this post we covered the configuration and verification of BGP-EVPN control-plane and ISIS-SR based MPLS forwarding plane. In the next post we will leverage the EVPN control-plane and ISIS-SR to provision BGP-EVPN based Multi-Homing of devices.", "url": "/tutorials/bgp-evpn-configuration-ncs-5500-part-1/", "author": "Ahmad Bilal Siddiqui", "tags": "iosxr, EVPN, NCS 5500, ncs5500, evpn" } , "tutorials-bgp-evpn-configuration-ncs-5500-part-2": { "title": "BGP-EVPN based Active-Active Multi-Homing", "content": " On This Page BGP-EVPN based multi-homing benefits over traditional MC-LAG Reference Topology Task 1# Configure LACP bundle on Host-1 Task 2# Configure EVPN based multi-homing for Host-1 This post will cover BGP-EVPN based Multi-Homing of devices. Multi-homing is achieved by EVPN Ethernet Segment feature; it offers redundant connectivity and utilizes all the links for active/active per-flow load balancing. For EVPN Multi-Homing tutorial, we will leverage EVPN control-plane and ISIS Segment Routing based forwarding that we configured in the previous post.EVPN Ethernet segment is a set of Ethernet links that connects a multi-homed device. If a multi-homed device or network is connected to two or more PEs through a set of Ethernet links, then that set of links is referred to as an Ethernet segment. Each device connected in the network is identified by a unique non-zero identifier called Ethernet-Segment Identifier (ESI).On NCS 5500 platform, following modes of operation are supported.-\tSingle-Homing — A device is single-homed when its connected to only one Leaf or PE. There is no \t\tredundancy in this mode of operation and it does not need Ethernet Segment to be configured.-\tActive-Active Multi-Homing — In active-active multi-homing mode, a device is multi-homed to multiple Leafs/PEs and both the links actively forward the traffic on that Ethernet Segment. This mode of operation is bandwidth efficient and provides per-flow active/active forwarding.-\tSingle-Active Multi-Homing — In active-standby multi-homing mode, a device is multi-homed to multiple Leaf/PEs and only one link is in active state to forward the traffic on that Ethernet Segment. In case of failure of the active link the standby link takes over and starts forwarding for that Ethernet Segment.BGP-EVPN based multi-homing benefits over traditional MC-LAGThere are traditional ways to implement MC-LAG and then there is BGP EVPN based multi-homing. Following table lists some of the benefits of BGP-EVPN based multi-homing over traditional MC-LAG. Traditional MC-LAG EVPN Multi-Homing with Ethernet-Segment Dedicated inter-chassis links required. 
This needs to be sized according to access bandwidth Dedicated inter-chassis link not mandatory, but can be used optionally if needed State sync between nodes is via proprietary protocol/mechanism State synchronization between nodes is via BGP Prefix independent convergence on attachment circuit (AC) failure not possible BGP-EVPN provides prefix independent convergence on attachment circuit failure Only 2-way redundancy is practical (due to requirement of inter-chassis link) N-way redundancy is possible. Note# NCS 5500 platform supports only BGP-EVPN based multi-homing with 2-way redundancy.Reference TopologyTask 1# Configure LACP bundle on Host-1As per the reference topology Host-1 is dual-homed to Leaf-1 and Leaf-2. ASR9K is acting as the host with IP address 10.0.0.10/24. Host-1 is configured with LACP bundle containing the interfaces connected to the Leaf-1 and Leaf-2. Following is the configuration of LAG on Host-1. The LAG on Host-1 will come up after we configure the multi-homing using EVPN Ether-Segment on the Leaf-1 and Leaf-2.In this tutorial we are using ASR9K router as the host but we can use any server or other CE device dual-homed connected to the Leaf/PE via BGP-EVPN. Host-1 interface Bundle-Ether 1 description ~Bundle to Leaf-1/2~ ipv4 address 10.0.0.10 255.255.255.0 ! interface TenGigE0/0/2/0 description ~Link to Leaf-1 ten0/0/0/47~ bundle id 1 mode active load-interval 30 ! interface TenGigE0/0/2/1 description ~Link to Leaf-2 ten0/0/0/47~ bundle id 1 mode active load-interval 30 !Task 2# Configure EVPN based multi-homing for Host-1Configure Leaf-1 and Leaf-2 to provision all active multi-homing to host-1. The set of links from Host-1 to the Leafs will be configured as an Ethernet Segment on the Leafs. For each Ethernet-Segment, identical ESI along with identical LACP System MAC address should be configured on the Leaf pair. Every Ethernet-Segment has to be configured with its own uniqure LACP System MAC.NCS 5500 platform supports static LAG as well as LACP, however in this guide we are using LACP for link aggregation.Configure the bundle on the Leaf-1 and Leaf-2. Use the same config for both the Leafs. interface TenGigE0/0/0/47 description ~Link to Host-1~ bundle id 1 mode active load-interval 30 ! interface Bundle-Ether 1 description ~Bundle to Host-1~ lacp system mac 1101.1111.1111 load-interval 30 !Configure Ethernet Segment id (ESI) for the bundle interface to enable multi-homing of the host. Use the identical configuration on both the Leafs. Each device connected in the network should be identified by a unique non-zero Ethernet-Segment Identifier (ESI). We can configure Single-Active load-balancing by CLI command “load-balancing-mode single-active” under “ethernet-segment”.EVPN All Active Multi-Homing Config# (Used in this tutorial) evpn interface Bundle-Ether 1 ethernet-segment identifier type 0 11.11.11.11.11.11.11.11.11 ! !EVPN Single-Active Multi-Homing Config# (For reference only) evpn interface Bundle-Ether 1 ethernet-segment identifier type 0 11.11.11.11.11.11.11.11.11 load-balancing-mode single-active ! !Use “show bundle bundle-ether 1” CLI command to verify the state of the bundle interface on Leafs and Host-1. 
Leaf-1 RP/0/RP0/CPU0#Leaf-1#show bundle bundle-ether 1 Bundle-Ether1 Status# Up Local links <active/standby/configured># 1 / 0 / 1 Local bandwidth <effective/available># 10000000 (10000000) kbps MAC address (source)# 00bc.601c.d0da (Chassis pool) Inter-chassis link# No Minimum active links / bandwidth# 1 / 1 kbps Maximum active links# 64 Wait while timer# 2000 ms Load balancing# Link order signaling# Not configured Hash type# Default Locality threshold# None LACP# Operational Flap suppression timer# Off Cisco extensions# Disabled Non-revertive# Disabled mLACP# Not configured IPv4 BFD# Not configured IPv6 BFD# Not configured Port Device State Port ID B/W, kbps -------------------- --------------- ----------- -------------- ---------- Te0/0/0/47 Local Active 0x8000, 0x0003 10000000 Link is Active Leaf-2 RP/0/RP0/CPU0#Leaf-2#show bundle bundle-ether 1 Sat Sep 1 07#57#39.368 UTC Bundle-Ether1 Status# Up Local links <active/standby/configured># 1 / 0 / 1 Local bandwidth <effective/available># 10000000 (10000000) kbps MAC address (source)# 00bc.600e.40da (Chassis pool) Inter-chassis link# No Minimum active links / bandwidth# 1 / 1 kbps Maximum active links# 64 Wait while timer# 2000 ms Load balancing# Link order signaling# Not configured Hash type# Default Locality threshold# None LACP# Operational Flap suppression timer# Off Cisco extensions# Disabled Non-revertive# Disabled mLACP# Not configured IPv4 BFD# Not configured IPv6 BFD# Not configured Port Device State Port ID B/W, kbps -------------------- --------------- ----------- -------------- ---------- Te0/0/0/47 Local Active 0x8000, 0x0003 10000000 Link is Active RP/0/RP0/CPU0#Leaf-2#Ethernet Segment configuration on both Leaf-1 and Leaf-2 is complete and we can see that the bundle interface is ‘Up’ as per the above output.Verify the Ethernet Segment status by CLI command “show evpn ethernet-segment detail”. RP/0/RP0/CPU0#Leaf-1#show evpn ethernet-segment detail Legend# Ethernet Segment Id Interface Nexthops ------------------------ ---------------------------------- -------------------- 0011.1111.1111.1111.1111 BE1 1.1.1.1 ES to BGP Gates # B,M ES to L2FIB Gates # H Main port # Interface name # Bundle-Ether1 Interface MAC # 00bc.601c.d0da IfHandle # 0x00000000 State # Down Redundancy # Not Defined ESI type # 0 Value # 11.1111.1111.1111.1111 ES Import RT # 1111.1111.1111 (Local) Source MAC # 0000.0000.0000 (Incomplete Configuration) Topology # Operational # SH Configured # All-active (AApF) (default) Service Carving # Auto-selection Peering Details # 1.1.1.1[MOD#P#00] Service Carving Results# Forwarders # 0 Permanent # 0 Elected # 0 Not Elected # 0 MAC Flushing mode # STP-TCN Peering timer # 3 sec [not running] Recovery timer # 30 sec [not running] Carving timer # 0 sec [not running] Local SHG label # None Remote SHG labels # 0 RP/0/RP0/CPU0#Leaf-1# As we verify the Ethernet segment status, it is observed that there is no information of VLAN services and Designated Forwarder election. Also, the below output only shows Leaf-1’s own next-hop IP address for Ethernet segment, although for all-active multi-homing we should also see peer Leaf’s address as next-hop address. This is due to the reason that we have configured Ethernet segment but have not provisioned a VLAN service for it yet.In the next post, we will implement configuration of VLAN and stretching layer-2 bridging for that VLAN between the Leafs. 
Task-2 and Task-3 focuses on VLAN configuration and service carving for Ethernet Segment.", "url": "/tutorials/bgp-evpn-configuration-ncs-5500-part-2/", "author": "Ahmad Bilal Siddiqui", "tags": "iosxr, ncs 5500, NCS 5500, evpn" } , "tutorials-bgp-evpn-configuration-ncs-5500-part-3": { "title": "Configure BGP-EVPN based Layer-2 VPN service", "content": " On This Page Reference Topology# Task 1# Configure Host-1 and Host-5 IP address Task 2# Configure Layer-2 interfaces and Bridge Domain on Leafs Task 3# Configure EVPN EVI on Leaf-1, Leaf-2 for VLAN 10 Task 4# Configure EVPN EVI on Leaf-5 for VLAN 10 Task 5# Verify EVPN EVI and Layer-2 Stretch between the Leaf-1, Leaf-2 and Leaf-5 In the last post, we configured the BGP-EVPN based Multi-homing of host/CE using EVPN Ethernet Segment. In this post, we will provision BGP-EVPN based Layer-2 VPN service between the Leafs. The EVPN Layer-2 service will enable forwarding between host-1 and host-5 which are part of the same subnet.Reference Topology#In this setup, Host-1 and Host-5 belong to the same subnet. Host-1 is dual-homed to Leaf-1 and Leaf-2 while Host-5 is single homed to the Leaf-5. Packets sourced from Host-1 for destination Host-5 will arrive to Leaf-1 or Leaf-2 based on the LAG’s hash calculation. On Leaf the lookup will be performed for destination Host-5 MAC address. Host-5’s MAC address will be learnt on Leaf-1 and Leaf-2 via EVPN control-plane. After the lookup, the traffic will be forwarded to the Host-5 MAC address using EVPN service label and transport label to reach to Leaf-5.Task 1# Configure Host-1 and Host-5 IP addressHost-1 and Host-5 will be part of the same subnet to communicate over layer-2 stretch. Host-1 is connected dual-homed to uplink Leafs via LACP link aggregation and Host-5 is connected single-homed to Leaf-5. Configure IP address on Host-1’s and Host-5 as follows. Host-1 interface Bundle-Ether1 description ~Bundle to Leaf-1/2~ ipv4 address 10.0.0.10 255.255.255.0 ! Host-5 interface TenGigE0/0/2/0 description ~Link to Leaf-5~ ipv4 address 10.0.0.50 255.255.255.0 !Task 2# Configure Layer-2 interfaces and Bridge Domain on LeafsConfigure layer-2 interfaces with dot1q encapsulation for VLAN 10 on Leaf-1 and Leaf-2. Use the following configuration for both Leaf-1, Leaf-2 and Leaf-5. Leaf-1 and Leaf-2 interface Bundle-Ether 1.10 l2transport encapsulation dot1q 10 rewrite ingress tag pop 1 symmetric ! Leaf-5 interface TenGigE0/0/0/47.10 l2transport encapsulation dot1q 10 rewrite ingress tag pop 1 symmetric !Configure Bridge domain for the VLAN and add the VLAN tagged interfaces to the bridge-domain. Configure the following on Leaf-1, Leaf-2 and Leaf-5. Leaf-1 and Leaf-2 l2vpn bridge group bg-1 bridge-domain bd-10 interface Bundle-Ether 1.10 ! ! Leaf-5 l2vpn bridge group bg-1 bridge-domain bd-10 interface TenGigE0/0/0/47.10 ! ! Verify that the bridge-domain and the related attachment circuits are up. Following output shows that the bridge-domain bd-10’s state is ‘up’, its attachment circuit is ‘up’. Leaf-1 RP/0/RP0/CPU0#Leaf-1#show l2vpn bridge-domain bd-name bd-10 Legend# pp = Partially Programmed. 
Bridge group# bg-1, bridge-domain# bd-10, id# 0, state# up, ShgId# 0, MSTi# 0 Aging# 300 s, MAC limit# 64000, Action# none, Notification# syslog Filter MAC addresses# 0 ACs# 1 (1 up), VFIs# 0, PWs# 0 (0 up), PBBs# 0 (0 up), VNIs# 0 (0 up) List of ACs# BE1.10, state# up, Static MAC addresses# 0 List of Access PWs# List of VFIs# List of Access VFIs# Leaf-5 RP/0/RP0/CPU0#Leaf-5#show l2vpn bridge-domain bd-name bd-10 Legend# pp = Partially Programmed. Bridge group# bg-1, bridge-domain# bd-10, id# 0, state# up, ShgId# 0, MSTi# 0 Aging# 300 s, MAC limit# 64000, Action# none, Notification# syslog Filter MAC addresses# 0 ACs# 1 (1 up), VFIs# 0, PWs# 0 (0 up), PBBs# 0 (0 up), VNIs# 0 (0 up) List of ACs# Te0/0/0/47.10, state# up, Static MAC addresses# 0 List of Access PWs# List of VFIs# List of Access VFIs# RP/0/RP0/CPU0#Leaf-5#So far, we have configured local bridging on the Leafs and connected them to the hosts for vlan 10 tagged data. We verified that the local bridging and attachment circuits are ‘up’. In order for Host-1 to communicate to Host-5 via layer-2, we need to configure layer-2 stretch/service between the Leafs to which Hosts are connected.The layer-2 service/stretch across the Leafs is offered by configuring EVPN EVI (EVPN Instance). EVI allows the layer-2 to be stretched via MP-BGP EVPN control-plane across multiple participating Leafs/PEs. An EVI is configured on a per layer-2 bridge basis across Leafs/PEs. Each EVI has a unique route distinguisher and one or more route targets.For Layer-2 VPN use case, we are stretching the layer-2 between Leaf-1, Leaf-2 and Leaf-5. Therefore, we will provision Layer-2 VPN service by configure EVI on all three leafs.Task 3# Configure EVPN EVI on Leaf-1, Leaf-2 for VLAN 10First we will configure the EVI on Leaf-1 and Leaf-2, then we will verify that the Ethernet Segment for vlan 10 tagged data is up.Configure EVI in EVPN config on Leaf-1 and Leaf-2. Also assign the route-target values for the EVI related network to get advertised and received via BGP EVPN control-plane. Advertise-mac keyword is used to advertise the MAC addresses in EVI to other Leafs part of EVI via BGP EVPN. Leaf-1 and Leaf-2 evpn evi 10 bgp route-target import 1001#11 route-target export 1001#11 ! advertise-mac ! !Associate the EVI to bridge-domain for VLAN 10, this is where the attachment-circuit/host is connected to. l2vpn bridge group bg-1 bridge-domain bd-10 evi 10 ! !As we have now configured layer-2 service with EVI for Bridge-domain 10, lets verify the Ethernet Segment status to see that the multi-homing is operational for Bridge-domain 10 forwarding.Observe in the below output that for Ethernet-segment bundle interface ‘BE1’, there are two next-hops. The next-hops represent each Leaf-1 and Leaf-2 forming Leaf pair for Ethernet Segment. Also in below output we can see that Ethernet-segment state is ‘Up’ and all-active multi-homing is operational. We have one forwarder which is VLAN 10 and Leaf-1 is the elected designated forwarded (DF) for it. 
Leaf-1 RP/0/RP0/CPU0#Leaf-1#show evpn ethernet-segment detail Ethernet Segment Id Interface Nexthops ------------------------ ---------------------------------- -------------------- 0011.1111.1111.1111.1111 BE1 1.1.1.1 2.2.2.2 ES to BGP Gates # Ready ES to L2FIB Gates # Ready Main port # Interface name # Bundle-Ether1 Interface MAC # 00bc.601c.d0da IfHandle # 0x08000044 State # Up Redundancy # Not Defined ESI type # 0 Value # 11.1111.1111.1111.1111 ES Import RT # 1111.1111.1111 (Local) Source MAC # 0000.0000.0000 (N/A) Topology # Operational # MH, All-active Configured # All-active (AApF) (default) Service Carving # Auto-selection Peering Details # 1.1.1.1[MOD#P#00] 2.2.2.2[MOD#P#00] Service Carving Results# Forwarders # 1 Permanent # 0 Elected # 1 Not Elected # 0 MAC Flushing mode # STP-TCN Peering timer # 3 sec [not running] Recovery timer # 30 sec [not running] Carving timer # 0 sec [not running] Local SHG label # 24061 Remote SHG labels # 1 24043 # nexthop 2.2.2.2 RP/0/RP0/CPU0#Leaf-1#With the following CLI command we can verify that the MAC address of Host-1 is being learnt on Leaf-1 and Leaf-2. MAC address of Host-5 will be learnt on Leaf-1 and Leaf-2 after we configure EVI on Leaf-5 for VLAN 10 layer-2 stretch.\tLeaf-1 RP/0/RP0/CPU0#Leaf-1#show l2route evpn mac all Topo ID Mac Address Producer Next Hop(s) -------- -------------- ----------- ---------------------------------------- 0 6c9c.ed6d.1d8b LOCAL Bundle-Ether1.10 RP/0/RP0/CPU0#Leaf-1# Leaf-2 RP/0/RP0/CPU0#Leaf-2#show l2route evpn mac all Sat Sep 1 22#49#43.498 UTC Topo ID Mac Address Producer Next Hop(s) -------- -------------- ----------- ---------------------------------------- 0 6c9c.ed6d.1d8b L2VPN Bundle-Ether1.10 RP/0/RP0/CPU0#Leaf-2#Task 4# Configure EVPN EVI on Leaf-5 for VLAN 10 On Leaf-5 evpn evi 10 bgp route-target import 1001#11 route-target export 1001#11 ! advertise-mac ! !Associate the EVI to bridge-domain for VLAN 10, this is where the attachment-circuit/host is connected to. l2vpn bridge group bg-1 bridge-domain bd-10 evi 10 ! !Task 5# Verify EVPN EVI and Layer-2 Stretch between the Leaf-1, Leaf-2 and Leaf-5We have configured the Layer-2 stretch between Leaf-1, Leaf-2 and Leaf-5 using EVPN EVI. In the next steps lets verify the layer-2 connectivity is up and we can reach from one host to another via layer-2. “show evpn evi detail” cli command shows the configured EVI and its associated bridge-domain. It also shows the route-target import and export values as shown in the below output. RP/0/RP0/CPU0#Leaf-1#show evpn evi detail VPN-ID Encap Bridge Domain Type ---------- ------ ---------------------------- ------------------- 10 MPLS bd-10 EVPN Stitching# Regular Unicast Label # 24060 Multicast Label# 24121 Flow Label# N Control-Word# Enabled Forward-class# 0 Advertise MACs# Yes Advertise BVI MACs# No Aliasing# Enabled UUF# Enabled Re-origination# Enabled Multicast source connected# No Statistics# Packets Sent Received Total # 0 0 Unicast # 0 0 BUM # 0 0 Bytes Sent Received Total # 0 0 Unicast # 0 0 BUM # 0 0 RD Config# none RD Auto # (auto) 1.1.1.1#10 RT Auto # 65001#10 Route Targets in Use Type ------------------------------ --------------------- 1001#11 Import 1001#11 Export RP/0/RP0/CPU0#Leaf-1#Ping from Host-1 to Host-5 and verify that the Hosts are reachable. We can see in the below output that that Host-1 can ping Host-5. Also, below output shows that the MAC address for Host-5 is learnt on Leaf-1 and Leaf-2. Similarly, we are learning the MAC address of Host-1 on Leaf-5. 
Host-1 RP/0/RSP0/CPU0#Host-1#ping 10.0.0.50 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 10.0.0.50, timeout is 2 seconds# !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 ms RP/0/RSP0/CPU0#Host-1# Leaf-1 RP/0/RP0/CPU0#Leaf-1#show l2route evpn mac all Sat Sep 1 22#53#57.880 UTC Topo ID Mac Address Producer Next Hop(s) -------- -------------- ----------- ---------------------------------------- 0 6c9c.ed6d.1d8b LOCAL Bundle-Ether1.10 0 a03d.6f3d.5443 L2VPN 5.5.5.5/24002/ME RP/0/RP0/CPU0#Leaf-1# Leaf-2 RP/0/RP0/CPU0#Leaf-2#show l2route evpn mac all Sat Sep 1 23#00#03.487 UTC Topo ID Mac Address Producer Next Hop(s) -------- -------------- ----------- ---------------------------------------- 0 6c9c.ed6d.1d8b L2VPN Bundle-Ether1.10 0 a03d.6f3d.5443 L2VPN 5.5.5.5/24002/ME RP/0/RP0/CPU0#Leaf-2# Leaf-5 RP/0/RP0/CPU0#Leaf-5#show l2route evpn mac all Sat Sep 1 23#00#03.785 UTC Topo ID Mac Address Producer Next Hop(s) -------- -------------- ----------- ---------------------------------------- 0 6c9c.ed6d.1d8b L2VPN 24007/I/ME 0 a03d.6f3d.5443 LOCAL TenGigE0/0/0/47.10 RP/0/RP0/CPU0#Leaf-5#We can verify the BGP EVPN control-plane to verify the various routes and mac addresses are advertised and learnt.In the below output from Leaf-1 we can see the MAC address of Host-1 and Host-5 are being learnt under their respective route distinguishers. MAC addresses are advertised using EVPN Route-Type-2.Example of Host-1 MAC learnt ([2][0][48][6c9c.ed6d.1d8b][0]/104)The route distinguisher value is comprised of router-id#EVI eg. 1.1.1.1#10, 2.2.2.2#10 which are highlighted below. Leaf-5 RP/0/RP0/CPU0#Leaf-5#show bgp l2vpn evpn rd 1.1.1.1#10 Status codes# s suppressed, d damped, h history, * valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discard Origin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight Path Route Distinguisher# 1.1.1.1#10 *>i[1][0011.1111.1111.1111.1111][0]/120 1.1.1.1 100 0 i * i 1.1.1.1 100 0 i *>i[2][0][48][6c9c.ed6d.1d8b][0]/104 1.1.1.1 100 0 i * i 1.1.1.1 100 0 i *>i[3][0][32][1.1.1.1]/80 1.1.1.1 100 0 i * i 1.1.1.1 100 0 i Processed 3 prefixes, 6 paths RP/0/RP0/CPU0#Leaf-5# RP/0/RP0/CPU0#Leaf-5#show bgp l2vpn evpn rd 2.2.2.2#10 Status codes# s suppressed, d damped, h history, * valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discard Origin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight Path Route Distinguisher# 2.2.2.2#10 *>i[1][0011.1111.1111.1111.1111][0]/120 2.2.2.2 100 0 i * i 2.2.2.2 100 0 i *>i[2][0][48][6c9c.ed6d.1d8b][0]/104 2.2.2.2 100 0 i * i 2.2.2.2 100 0 i *>i[3][0][32][2.2.2.2]/80 2.2.2.2 100 0 i * i 2.2.2.2 100 0 i Processed 3 prefixes, 6 paths RP/0/RP0/CPU0#Leaf-5#CLI command “show evpn evi vpn-id 10 mac” can be used to verify the MAC address and Host IP addresses being learnt related to the EVI. In the following output of EVI table from Leaf-5, we can see that we are learning MAC address of Host-1 via EVI 10 on Leaf-5. We can reach to Host-1 MAC address either via next-hop 1.1.1.1 of Leaf-1 or 2.2.2.2 which is Leaf-2. We can run the same command on Leaf-1 and Leaf-2 for verification. 
Leaf-5 RP/0/RP0/CPU0#Leaf-5#show evpn evi vpn-id 10 mac Sat Sep 1 23#24#00.808 UTC VPN-ID Encap MAC address IP address Nexthop Label ---------- ------ -------------- ---------------------------------------- ----------------------------- 10 MPLS 6c9c.ed6d.1d8b ## 1.1.1.1 24060 10 MPLS 6c9c.ed6d.1d8b ## 2.2.2.2 24042 10 MPLS a03d.6f3d.5443 ## TenGigE0/0/0/47.10 24002 RP/0/RP0/CPU0#Leaf-5#We are only seeing MAC address and not IP address of the Host in the above output. This is because we configured only Layer-2 service between the Leafs. Once we configure EVPN IRB, we will start advertising MAC + IP of the host via EVPN Route-Type-2 and will be able to see IP address in the above show command as well as in Leaf’s routing table.Since only MAC address is advertised, the advertisement will only have Bridge-Domain/EVI label and its respective route-target. In below output on Leaf-5 for route type 2 learnt from Leaf-1 (RD 1.1.1.1#10), we can see the highlighted route-target and Bridge-Domain/EVI label value. Leaf-5 RP/0/RP0/CPU0#Leaf-5#sh bgp l2vpn evpn rd 1.1.1.1#10 [2][0][48][6c9c.ed6d.1d8b][0]/104 detail BGP routing table entry for [2][0][48][6c9c.ed6d.1d8b][0]/104, Route Distinguisher# 1.1.1.1#10 Versions# Process bRIB/RIB SendTblVer Speaker 44 44 Flags# 0x00040001+0x00010000; Last Modified# Jul 26 01#34#57.072 for 00#00#03 Paths# (2 available, best #1) Not advertised to any peer Path #1# Received by speaker 0 Flags# 0x4000000025060005, import# 0x1f, EVPN# 0x1 Not advertised to any peer Local 1.1.1.1 (metric 20) from 6.6.6.6 (1.1.1.1) Received Label 24060 Origin IGP, localpref 100, valid, internal, best, group-best, import-candidate, not-in-vrf Received Path ID 0, Local Path ID 1, version 44 Extended community# Flags 0x10# SoO#1.1.1.1#10 RT#1001#11 Originator# 1.1.1.1, Cluster list# 6.6.6.6 EVPN ESI# 0011.1111.1111.1111.1111 Path #2# Received by speaker 0 Flags# 0x4000000020020005, import# 0x20, EVPN# 0x1 Not advertised to any peer Local 1.1.1.1 (metric 20) from 7.7.7.7 (1.1.1.1) Received Label 24060 Origin IGP, localpref 100, valid, internal, not-in-vrf Received Path ID 0, Local Path ID 0, version 0 Extended community# Flags 0x10# SoO#1.1.1.1#10 RT#1001#11 Originator# 1.1.1.1, Cluster list# 7.7.7.7 EVPN ESI# 0011.1111.1111.1111.1111 RP/0/RP0/CPU0#Leaf-5#In the next post, we are covering EVPN Integrated Routing and Bridging (IRB) configuration in detail.", "url": "/tutorials/bgp-evpn-configuration-ncs-5500-part-3/", "author": "Ahmad Bilal Siddiqui", "tags": "iosxr, ncs 5500, evpn, NCS5500" } , "tutorials-ncs5500-qos-part-1-understanding-packet-buffering": { "title": "NCS5500 QoS Part 1 - Understanding Packet Buffering", "content": " NCS5500 Buffering Architecture Part 1 Introduction to Packet BUFFERING Video DNX ASICs in NCS5500 family DNX ASICs architecture Ingress-only buffering Virtual Output Queues DRAM capacity and Bandwidth to DRAM Taildrop HOLB? 
Next episode You can find more content related to NCS5500 including routing memory management, VRF, URPF, ACLs, Netflow following this link. Introduction to Packet BUFFERING This first blog post will help you understand key concepts of the buffering architecture and clarify notions like the VOQ-only, single-lookup and ingress-only buffering designs. It’s necessary to detail all these mechanisms to understand later the subtleties of the QoS implementation, including the scales and limitations. Video The short version of this article is available in the following Youtube video# https#//www.youtube.com/watch?v=_ozPYN6Ej9Y DNX ASICs in NCS5500 family All the NCS5500 routers are powered by Broadcom StrataDNX (or DNX) Network Processor Units (NPUs). These ASICs can be used in standalone mode (Systems on Chip)#or interconnected through one or multiple Fabric Engines#All Jericho and Qumran options follow a very similar architecture# they use resources inside the ASIC (packet memory, routing information memories, next-hop memories, encapsulation memories, statistics memories, etc.) and also external to the chipset (like the DRAM used for packet buffering). Some systems are equipped with external TCAMs and some are not; that’s how we differentiate base and scale systems/LCs. On the Jericho side# Jericho ASICs are used in NCS5502, NCS5502-SE, and in the following line cards# NC55-36X100G, NC55-24H12F-SE, NC55-24X100G-SE, NC55-18H18F, NC55-6X2H-DWDM, NC55-36X100G-S Jericho+ (with “normal LPM”) ASICs are used in NCS55A1-36H-S, NCS55A1-36H-SE-s, all NCS55A2-MOD-xx and in the following line cards# NC55-36X100G-A-SE, NC55-MOD-A-S, NC55-MOD-A-SE-S Jericho+ with large LPM is used in NCS-55A1-24H On the Qumran side# Qumran-MX is used in NCS5501 and NCS5501-SE Qumran-MX with 2nd generation eTCAM (OP) is used in the scale version of RSP4 / NCS560 Qumran-AX is used in the N540-24Z8Q2C-SYS DNX ASICs architecture All the DNX forwarding ASICs used in the NCS5500 portfolio are made of two cores (actually, only the Qumran-AX used in NCS540 is single-core), and they all have an ingress and an egress pipeline. A packet processed in the forwarding ASIC is always received by the ingress pipeline handling the receiving port and transmitted via the egress pipeline of the outgoing port, whether in the same NPU or in different NPUs (connected through a fabric engine). A big difference, compared to our traditional routers based on run-to-completion architectures, is the fact that we perform a single lookup, in ingress only, to figure out the destination of a received packet. So this operation happens in the ingress pipeline only. It reduces latency and globally improves performance. But it has significant implications on packet buffering too. Indeed, the buffering on the egress side is minimal and, in case of queue or port congestion, the packets are stored in ingress. That also implies an advanced mechanism of request/authorization between arbitrators. That’s what we will describe now. Ingress-only buffering Every packet received in the ingress pipeline will go through multiple stages. Each pipeline block has a specific role and a budget of resource access (read and write). The FWD block is responsible for the packet destination lookup. 
That’s where the routing decision is made and it results in pointers to next-hop information and header encapsulation (among other things). Once the destination is identified, the packet is associated with a queue (a VOQ, more precisely), but we will come back to this concept below. The ingress scheduler contacts its egress counterpart and issues a packet transmission request#The egress scheduler will accept or refuse this request depending on the queue availability. If the queue is congested, the egress scheduler will not provide a token for transmission and the packet will be stored in the ingress packet memories. Depending on the level of queue congestion, the packet will be stored inside the NPU or in the external DRAM. Once the congestion state disappears, the egress scheduler will be able to issue the token for the most critical queue. The QoS configuration will dictate this behavior. If QoS is not configured, the First In First Out rule applies. So the packets are buffered in different places along the path# (ingress) On-chip Buffer# 16MB or (ingress) external DRAM# 4GB (egress) Port Buffer# 6MB For a given queue, packets are stored in OCB or DRAM. The decision to keep a queue in OCB or to evict it to the DRAM depends on the number of packets buffered# below 6000 packets or 200k Bytes, the queue stays inside the chipset; above one of these two thresholds, the packets are stored in DRAM (this decision is illustrated in the short sketch below). This management is done at the queue level only. It will not impact the other packets going to other destinations, or even to the same destination but in a different queue#As long as we have packets in this evicted queue, it will stay in the DRAM. Once emptied, the queue is moved back to the internal buffer. On the egress side, we have a 16MB memory used to differentiate unicast and multicast traffic, and among them to separate high and low priority. 
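Before detailing the egress buffer further, here is a small illustration of the eviction logic just described. This is a minimal sketch in plain Python (offline pseudo-logic, not anything running on the NPU)# the threshold constants are simply the approximate figures quoted above (about 6000 packets or 200 kB per VOQ) and the function name is hypothetical; the real ASIC behavior is more elaborate than this simple check.

# Hedged sketch of the per-queue eviction decision described above.
OCB_MAX_PACKETS = 6000        # roughly 6000 packets buffered in one VOQ
OCB_MAX_BYTES = 200 * 1000    # roughly 200 kB buffered in one VOQ

def queue_location(buffered_packets, buffered_bytes):
    """Return where the packets of a VOQ are held, per the rule above."""
    if buffered_packets < OCB_MAX_PACKETS and buffered_bytes < OCB_MAX_BYTES:
        return "OCB"     # queue stays in the on-chip buffer
    return "DRAM"        # queue is evicted to the external buffer

print(queue_location(120, 150_000))       # lightly congested queue -> OCB
print(queue_location(8000, 4_000_000))    # heavily congested queue -> DRAM

Remember that the decision is taken per queue and reverts once the evicted queue empties, as explained above.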
This unicast/multicast and priority separation is the only level of classification available on the output buffer. In a future post we will come back to the main difference in behavior between unicast and multicast in a VOQ-only model. Virtual Output Queues Contrary to traditional platforms in our SP portfolio (ASR9000, CRS, NCS6000, …), the packets are not stored in the output queues in case of congestion. In the NCS5500 world, we create a virtual representation of the egress queues on the ingress side. Indeed, every time a packet needs to be sent to a remote point but is not given the right to be transmitted to an egress pipeline, it is stored in the ingress buffer (whether inside the NPU or in the DRAM). These are our famous “Virtual Output Queues”. For a given egress port or “attachment point”, we have 8 queues in each ingress NPU. In this example, the egress port HunGig0/7/0/1 is materialized by 8 queues in EACH ingress NPU. And for three interfaces#With this show command, we take the perspective of the NPU 0 on Line Card 1; the queue is seen as remote. RP/0/RP0/CPU0#R1#sh contr npu stats voq ingress interface hu 0/7/0/1 instance 0 location 0/1/CPU0 Interface Name = Hu0/7/0/1 Interface Handle = 3800118 Asic Instance = 0 VOQ Base = 1328 Port Speed(kbps) = 100000000 Local Port = remote ReceivedPkts ReceivedBytes DroppedPkts DroppedBytes ------------------------------------------------------------------- COS0 = 0 0 0 0 COS1 = 0 0 0 0 COS2 = 0 0 0 0 COS3 = 0 0 0 0 COS4 = 0 0 0 0 COS5 = 0 0 0 0 COS6 = 0 0 0 0 COS7 = 0 0 0 0 RP/0/RP0/CPU0#R1# It represents the number of packets received on NPU0 LC1 and targeted to HundredGig 0/7/0/1 queues. COS0 represents the default queue. But the queues also exist locally on the same NPU of the same line card, from an ingress pipeline perspective. In the following output, we see that the queues are seen as local from the perspective of NPU 0 on LC 0/7, which indeed handles HunGig0/7/0/1. RP/0/RP0/CPU0#R1#sh contr npu stats voq ingress interface hu 0/7/0/1 instance 0 location 0/7/CPU0 Interface Name = Hu0/7/0/1 Interface Handle = 3800118 Asic Instance = 0 VOQ Base = 1328 Port Speed(kbps) = 100000000 Local Port = local ReceivedPkts ReceivedBytes DroppedPkts DroppedBytes ------------------------------------------------------------------- COS0 = 0 0 0 0 COS1 = 9 1107 0 0 COS2 = 0 0 0 0 COS3 = 0 0 0 0 COS4 = 0 0 0 0 COS5 = 0 0 0 0 COS6 = 0 0 0 0 COS7 = 139289 28863790 0 0 RP/0/RP0/CPU0#R1# It represents the number of packets received on NPU0 LC7 and targeted to HundredGig 0/7/0/1 queues, in the same ASIC. Even if the packet’s destination is located in the same NPU, it follows the same request / grant rules to get the right to transmit. In such a case, the packets will not transit through the fabric but they will still be split into cells. They will take an internal path inside the ASIC. It’s also what happens when using a System on Chip (NCS5501, NCS55A2-MOD, …). Each ingress NPU links the local VOQ to the output queues via a “flow connector”. DRAM capacity and Bandwidth to DRAM Each NPU is associated with a 4GB DRAM. It’s not a removable part, it’s not an option. 
All products in the NCS5500 family are following this rule.This memory is important but is not used most of the time.We will demonstrate in next article that the percentage of packets going to DRAM represents a tiny portion of the overall traffic (around 0.15%).The NPU is connected to the DRAM via a 900Gbps unidirectional link#Unidirectional means we can# write in the DRAM at 900Gbpsor read in the DRAM at 900Gbpsor read at 700Gbps and write at 200GbpsIf we maintain a constant state of queue eviction (all the queues are constantly receiving traffic and can’t be moved back to OCB), we end up with a 450/450 state#To reach such a situation, we need very specifically crafted traffic flows, maintaining a constant state of saturation without being taildropped (check the next section).Aside in a lab, having traffic received on a given NPU and targeted all the queues of all the egress ports while maintaining a congestion state, is not a realistic scenario.It’s important to keep in mind we are increasing the amount of DRAM and the total amount of bandwidth to this DRAM when we insert more line cards (more NPUs) in a system#In this example, we have 3 NPUs, representing a total of 12GB of traffic and an aggregated bandwidth of 2.7Tbps unidirectional to the external buffer.More line cards –> more NPUs –> more DRAM and more bandwidth to DRAM.TaildropBy default a given queue is given 10ms of the overall interface capacity (concept of queue-limit). Please note, it’s not a guaranteed amount of buffer, actually it represents the maximum a queue can be given by default. Above that level, the queue will start dropping new packets.The max size of a queue can be changed by configuration as shown below where qos3 is modified from default to 20ms#RP/0/RP0/CPU0#NCS5508(config)#policy-map egress-policyRP/0/RP0/CPU0#NCS5508(config-pmap)# class qos2RP/0/RP0/CPU0#NCS5508(config-pmap-c)#queue-limit 20 ? bytes Bytes kbytes Kilobytes mbytes Megabytes ms Milliseconds packets Packets (default) us MicrosecondsRP/0/RP0/CPU0#NCS5508(config-pmap-c)#queue-limit 20 msRP/0/RP0/CPU0#NCS5508(config-pmap-c)#commitRP/0/RP0/CPU0#NCS5508(config-pmap-c)#RP/0/RP0/CPU0#NCS5508#sh qos int hu 0/0/0/0 outputNOTE#- Configured values are displayed within parenthesesInterface HundredGigE0/0/0/0 ifh 0x130 -- output policyNPU Id# 0Total number of classes# 4Interface Bandwidth# 100000000 kbpsVOQ Base# 1192Accounting Type# Layer1 (Include Layer 1 encapsulation and above)------------------------------------------------------------------------------Level1 Class (HP1) = qos1Egressq Queue ID = 1193 (HP1 queue)Queue Max. BW. = 0 kbps (20 %)TailDrop Threshold = 125304832 bytes / 10 ms (default)WRED not configured for this classLevel1 Class = qos2Egressq Queue ID = 1194 (LP queue)Queue Max. BW. = 50901747 kbps (50 %)Queue Min. BW. = 0 kbps (default)Inverse Weight / Weight = 1 / (BWR not configured)TailDrop Threshold = 2506752 bytes / 20 ms (20 ms)WRED not configured for this classLevel1 Class = qos3Egressq Queue ID = 1195 (LP queue)Queue Max. BW. = 30247384 kbps (30 %)Queue Min. BW. = 0 kbps (default)Inverse Weight / Weight = 1 / (BWR not configured)TailDrop Threshold = 1253376 bytes / 10 ms (default)WRED not configured for this classLevel1 Class = class-defaultEgressq Queue ID = 1192 (Default LP queue)Queue Max. BW. = 101803495 kbps (default)Queue Min. BW. 
= 0 kbps (default)Inverse Weight / Weight = 1 / (BWR not configured)TailDrop Threshold = 1253376 bytes / 10 ms (default)WRED not configured for this classRP/0/RP0/CPU0#NCS5508#If a queue is completely saturated, the packets are taildropped. That means the NPU will discard them without trying to push them to the DRAM / External memory, freeing the unidirectional bandwidth detailed above.HOLB?The problem of head of line blocking experienced by some routers in the past is not present with current architecture where any unicast packet is scheduled. A transmission can only happen if the egress scheduler/arbitrator provided a token to the ingress side. Therefore, a saturated queued will have no impact on the other queues, even on the same port.In this example, the two queues represented by packets in grey are targeted to interface 1, saturated. It will not prevent the egress scheduler for interface 2 to provide authorization to transmit to the packet in green#The communication between ingress and egress scheduler is going through the fabric via a protected/priviledge path.Next episodeIn part 2, we will demonstrate in a large test bed how the buffer works and we will check in production routers how many packets are handled by OCB or DRAM.", "url": "/tutorials/ncs5500-qos-part-1-understanding-packet-buffering/", "author": "Nicolas Fevrier", "tags": "ncs5500, qos, voq, buffers" } , "tutorials-ncs5500-qos-part-2-verifying-buffering": { "title": "NCS5500 QoS Part 2 - Verifying Buffering in Lab and Live Networks", "content": " NCS5500 Buffering Architecture Part 2 Checking Buffering in action Video Lab test Metrology Auditing real production routers How to read these numbers? Conclusion You can find more content related to NCS5500 including routing memory management, VRF, URPF, ACLs, Netflow following this link.Also you can find the first part of this post here#https#//xrdocs.io/ncs5500/tutorials/ncs5500-qos-part-1-understanding-packet-buffering/Checking Buffering in actionThis second blog post will take concrete examples to illustrate the concepts covered in the first part. The NCS5500 is based on a VOQ-only, single-lookup and ingress-buffering forwarding architecture.We will use a lab example to illustrate how the system handles bursts, then we will present the monitoring tools / counters we can use to measure where packets are buffered, and finally we will present the data collected on 500+ NPUs in production.This should answer frequently asked questions and clarify all potential doubts.VideoWe recommend to start watching this short Youtube video first#https#//www.youtube.com/watch?v=1qXD70_cLK8Lab testFor this first part, and following customer request, we built a large test bed# NCS5508 with two line cards 36x100G-A-SE (each card is made of 4x NPU Jericho+, each one handling 9 ports 100G) 27 tester ports 100GE (Spirent) connected to 27 router ports 9 ports on LC 4 NPU 0 9 ports on LC 4 NPU 1 9 ports on LC 6 NPU 0 We generate a background / constant traffic of 80% line rate (80Gbps on each port) between two NPUs. This bi-directional traffic is displayed in purple in the diagram above.Then, we will use the remaining 9 ports to generate peaks of traffic targeted to the ports Hu0/6/0/0-8 (shown in red in the diagram).These bursts are lasting 100ms, every second.On the tester, we didn’t make any specific PPM adjustment and used internal clock.On the router, no specific configuration either. 
Interfaces are simply configured with IPv4 addresses (no QoS).The tests performed are the following# test 1# 80% background and all the ports bursting at 20%. That means# 900ms at 80% LR 100ms at 100% LRWe verify no packets are dropped on the background or the bursts, and we also make sure with the counters that packets are exclusely handled in the OCB. test 2 # 80% background other ports bursting at 20% one single port bursts at 25%, creating a 5Gbps saturation for 100msHere again, we verify no packets are dropped on the background or the bursts, but also we verify that packets are sent to the DRAM.It’s expected since one queue exceeds the threshold and is evicted to the external buffer. This test is basic but has been requested by several customers to verify we had no drop in such situations. It proves it’s not the case, as designed.MetrologyIn the former test, we used specific counters to verify the buffering behavior.Let’s review them.We will collect the following Broadcom counters# IQM_EnqueuePktCnt# total number of packets handled by the NPU IDR_MMU_CREDITS# total number of packets moved to DRAM IQM_EnqueueDscrdPktCnt# total number of packets dropped because of taildrop IQM_RejectDramIneligiblePktCnt# total number of packets dropped because DRAM was not accessible in read, typically when the bandwidth to DRAM is saturated and potentially also IDR_FullDramRejectPktsCnt and IDR_PartialDramRejectPktsCntForm CLI “show controller npu stats counters-all instance all location all” we can extract# ENQUEUE_PKT_CNT, MMU_IDR_PACKET_COUNTER and ENQ_DISCARDED_PACKET_COUNTERRP/0/RP0/CPU0#ROUTER#show controller npu stats counters-all instance all location allFIA Statistics Rack# 0, Slot# 0, Asic instance# 0Per Block Statistics#Ingress#NBI RX# RX_TOTAL_BYTE_COUNTER = 161392268790033002 RX_TOTAL_PKT_COUNTER = 164628460653364IRE# CPU_PACKET_COUNTER = 0 NIF_PACKET_COUNTER = 164628460651867 OAMP_PACKET_COUNTER = 32771143 OLP_PACKET_COUNTER = 4787508 RCY_PACKET_COUNTER = 67452938 IRE_FDT_INTRFACE_CNT = 192IDR# MMU_IDR_PACKET_COUNTER = 697231761913 IDR_OCB_PACKET_COUNTER = 1IQM# ENQUEUE_PKT_CNT = 164640311902277 DEQUEUE_PKT_CNT = 164640311902198 DELETED_PKT_CNT = 0 ENQ_DISCARDED_PACKET_COUNTER = 90015441To get the DRAM reject counters, we will use# show contr npu stats counters-all detail instance all location allor if the IOS XR version doesn’t support the “detail” option, use the following instead# show controllers fia diagshell 0 “diag counters” loc 0/x/CPU0RP/0/RP0/CPU0#ROUTER#show contr npu stats counters-all detail instance all location all | i Dram IDR FullDramRejectPktsCnt # 0 IDR FullDramRejectBytesCnt # 0 IDR PartialDramRejectPktsCnt # 0 IDR PartialDramRejectBytesCnt # 0 IQM0 RjctDramIneligiblePktCnt # 0 IQM1 RjctDramIneligiblePktCnt # 0 IDR FullDramRejectPktsCnt # 0 IDR FullDramRejectBytesCnt # 0 IDR PartialDramRejectPktsCnt # 0 IDR PartialDramRejectBytesCnt # 0 IQM0 RjctDramIneligiblePktCnt # 0 IQM1 RjctDramIneligiblePktCnt # 0--%--SNIP--%--SNIP--%-- None of these counters are available through SNMP / MIB but instead you can use streaming telemetry#From https#//github.com/YangModels/yang/blob/master/vendor/cisco/xr/653/Cisco-IOS-XR-fretta-bcm-dpa-hw-resources-oper-sub2.yangYou’ll found#ENQUEUE_PKT_CNT# iqm-enqueue-pkt-cnt leaf iqm-enqueue-pkt-cnt { type uint64; description ~Counts enqueued packets~;MMU_IDR_PACKET_COUNTER# idr-mmu-if-cnt leaf idr-mmu-if-cnt { type uint64; description ~Performance counter of the MMU interface~;ENQ_DISCARDED_PACKET_COUNTER# iqm-enq-discarded-pkt-cnt leaf 
iqm-enq-discarded-pkt-cnt { type uint64; description ~Counts all packets discarded at the ENQ pipe~;At the moment (Apr 2019), RjctDramIneligiblePktCnt / FullDramRejectPktsCnt / PartialDramRejectPktsCnt are not available in the data models and therefor, can’t be streamed.Auditing real production routersWe have the counters available and we asked multiple customers (25+) to collect data from their production routers.In total, we had information for 550 NPUs transporting live traffic in multiple network positions# IP core MPLS core (P/LSR) Internet border (transit / peering) CDN (connected to FB, Akamai, Google Cache, Netflix, …) PE (L2VPN and L3VPN) Aggregation SPDC / ToR leafThe data aggregated is helpful since it gives a vision of what is happening in reality.The total amount of traffic measured is tremendous# 24,526,679,839,376,100 packets!!!Not in lab, not in academic models / simulations, but in real routers.With the show commands described in former section, we extracted# ENQUEUE_PKT_CNT# packets transmitted in the NPU MMU_IDR_PACKET_COUNTER# packets passed to DRAM ENQ_DISCARDED_PACKET_COUNTER# packets taildropped RjctDramIneligiblePktCnt# packets drop because of DRAM bandwidthDividing MMU_IDR_PACKET_COUNTER by ENQUEUE_PKT_CNT, we can compute the ratio of packets moved to DRAM.–> 0,151%This number is an average value and should be considered as such. It shows that indeed, the vast majority of the traffic is handled in OCB (inside the NPU).Dividing ENQ_DISCARDED_PACKET_COUNTER by ENQUEUE_PKT_CNT, we can compute the ratio of packets taildropped.–> 0,0358%Having drops is normal in the life of a router. Multiple reasons here, from TCP windowing to temporary congestion situations.Finally, RjctDramIneligiblePktCnt will tell us if the link from the NPU to the DRAM can get saturated and drops packets with production traffic.–> not a single packet discarded in such scenario.LAPTOP# nicolas$ grep RjctDramIneligiblePktCnt * | wc -l 1570LAPTOP# nicolas$ grep RjctDramIneligiblePktCnt * | grep ~ 0~ | wc -l 1570LAPTOP# nicolas$ grep RjctDramIneligiblePktCnt * | grep -v ~ 0~ | wc -l 0LAPTOP# nicolas$In this chart, we sort by numbers of ENQUEUE_PKT_CNT# it represents the most active ASICs in term of packets handled. 
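Before going through the per-NPU chart below, here is a small helper sketch (plain Python, run offline on collected counters, not an IOS XR feature) showing how the two ratios above are derived from the three counters. The function name is hypothetical; the sample values are taken from the first row of the chart that follows.

# Hedged helper sketch# compute the DRAM-eviction and taildrop percentages
# discussed above from the three counters collected on one NPU.
def buffering_ratios(enqueue_pkt_cnt, mmu_idr_packet_counter, enq_discarded_packet_counter):
    """Return (percent of packets evicted to DRAM, percent of packets taildropped)."""
    dram_pct = 100.0 * mmu_idr_packet_counter / enqueue_pkt_cnt
    drop_pct = 100.0 * enq_discarded_packet_counter / enqueue_pkt_cnt
    return dram_pct, drop_pct

# Values from the first row of the chart below (an IP core NPU)
dram_pct, drop_pct = buffering_ratios(527_787_533_239_280, 7_369_339_005, 1_705_600_246)
print("DRAM %.6f%%  drops %.6f%%" % (dram_pct, drop_pct))
# Expected output# DRAM 0.001396%  drops 0.000323%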
Rank ENQUEUE_PKT_CNT MMU_IDR ENQ_DISC RjctDram Ratio DRAM % Ratio drops % Network roles 1 527 787 533 239 280 7 369 339 005 1 705 600 246 0 0,001396 0,000323 IP Core 2 527 731 939 299 538 7 637 629 256 1 692 666 188 0 0,001447 0,000321 IP Core 3 392 487 675 531 358 111 916 953 940 24 771 334 182 0 0,028515 0,006311 Peering 4 348 026 620 119 625 1 610 856 619 781 841 479 0 0,000463 0,000225 IP Core 5 342 309 183 713 774 1 348 042 248 855 820 846 0 0,000394 0,000250 IP Core 6 327 474 089 745 397 906 227 869 871 575 599 0 0,000277 0,000266 IP Core 7 312 691 087 570 935 149 450 319 13 211 540 367 0 0,000048 0,004225 Peering 8 309 754 368 573 783 217 802 498 7 194 870 813 0 0,000070 0,002323 Peering 9 286 534 007 041 471 7 937 852 208 208 639 446 202 0 0,002770 0,072815 IP Core 10 285 891 289 921 804 17 316 635 372 208 339 055 090 0 0,006057 0,072874 IP Core 11 282 857 716 700 675 4 619 576 899 213 929 762 107 0 0,001633 0,075632 IP Core 12 281 159 664 612 018 13 756 617 725 448 617 960 0 0,004893 0,000160 MPLS Core 13 263 035 976 927 915 6 226 947 181 191 620 663 098 0 0,002367 0,072850 IP Core 14 253 469 417 751 706 4 811 103 101 1 078 118 413 0 0,001898 0,000425 IP Core 15 253 388 455 606 099 4 808 169 387 1 080 970 708 0 0,001898 0,000427 IP Core 16 249 198 998 283 888 723 211 883 432 154 974 0 0,000290 0,000173 IP Core 17 245 552 977 886 019 1 672 035 279 1 336 139 650 0 0,000681 0,000544 IP Core 18 244 787 789 764 429 1 422 882 240 1 211 167 690 0 0,000581 0,000495 IP Core 19 244 639 576 497 565 1 347 623 535 1 122 306 072 0 0,000551 0,000459 IP Core 20 244 604 586 809 869 1 935 267 429 1 384 016 976 0 0,000791 0,000566 IP Core 21 243 908 249 497 111 586 491 218 410 239 247 0 0,000240 0,000168 IP Core 22 243 639 692 775 431 17 237 153 490 1 434 003 860 0 0,007075 0,000589 Peering 23 237 152 936 875 785 448 662 754 384 071 603 0 0,000189 0,000162 IP Core 24 224 477 013 789 647 13 369 954 892 1 319 318 749 0 0,005956 0,000588 MPLS Core 25 219 820 911 786 839 1 068 821 932 647 226 846 0 0,000486 0,000294 IP Core 26 205 119 650 216 462 766 453 141 568 411 772 0 0,000374 0,000277 IP Core 27 203 306 915 869 451 61 364 621 713 12 422 238 060 0 0,030183 0,006110 Peering 28 194 981 015 445 738 19 793 539 645 117 282 213 0 0,010152 0,000060 MPLS Core 29 182 104 629 921 870 163 108 290 12 704 233 685 0 0,000090 0,006976 Peering 30 180 871 118 289 426 1 362 976 328 269 38 037 126 715 0 0,753562 0,021030 P+PE 31 173 166 873 157 959 397 471 140 353 417 929 0 0,000230 0,000204 IP Core 32 167 311 796 856 352 1 282 666 496 069 36 120 954 409 0 0,766632 0,021589 P+PE 33 164 640 767 868 235 697 231 782 299 90 015 446 0 0,423487 0,000055 CDN 34 164 640 311 902 277 697 231 761 913 90 015 441 0 0,423488 0,000055 CDN 35 160 506 138 361 929 851 487 161 1 826 760 067 0 0,000531 0,001138 IP Core 36 158 521 030 661 438 1 391 033 571 161 3 987 222 205 0 0,877507 0,002515 CDN 37 157 286 154 450 629 14 699 938 426 44 060 250 0 0,009346 0,000028 Peering 38 154 081 895 058 387 623 038 967 483 267 224 0 0,000404 0,000314 IP Core 39 143 902 175 998 205 205 944 615 947 595 151 004 0 0,143114 0,000414 Peering 40 143 686 937 442 122 12 638 644 130 005 734 0 0,000009 0,000090 Peering 41 142 498 738 296 176 649 883 065 1 404 348 505 0 0,000456 0,000986 IP Core 42 142 426 983 443 239 645 597 568 1 417 441 644 0 0,000453 0,000995 IP Core 43 138 083 165 878 093 2 778 355 335 54 030 686 0 0,002012 0,000039 Peering 44 130 425 299 235 308 235 149 102 989 117 322 562 0 0,180294 0,000090 CDN 45 125 379 522 379 915 219 241 802 484 77 
781 184 0 0,174863 0,000062 CDN 46 122 178 283 814 177 122 250 106 2 168 387 244 0 0,000100 0,001775 Peering 47 121 842 623 410 092 419 677 747 284 419 677 747 284 0 0,344442 0,344442 P+Peering 48 121 842 227 567 846 419 677 746 356 419 677 746 356 0 0,344444 0,344444 P+Peering 49 119 048 492 308 148 19 756 851 882 1 468 594 303 0 0,016596 0,001234 Peering 50 118 902 447 078 432 20 140 676 286 1 437 569 523 0 0,016939 0,001209 Peering For the most busiest NPUs collected, we see the DRAM ratio and taildrop ratio being actually much smaller than aggregated numbers.How to read these numbers?First of all, it demonstrates clearly that most of the packets are handled inside the ASIC, only a very small portion of the traffic being evicted to DRAM.Second, with RjctDramIneligiblePktCnt being zero in EVERY data collection, we prove that bandwidth from NPU to DRAM (900Gbps unidirectional) is correctly dimensionned. It handles the real burstiness of the traffic without a single drop.Last, the data collected represents a snapshot. It is recommended to collect these counters regularly and to analyze them with the network activity during the interval.Having higher numbers in your network may be correlated to a particular outage or specific situation.Having small numbers, in the other hand, is much easier to read (no drops being… no drops).ConclusionIn conclusion, the ingress-buffering / VOQ-only model is well adapted for real networks.We have seen “academic” studies trying to prove the contrary, but the numbers are talking here.A sandbox, or an imaginary model are not relevant approach.Production networks deployed all around the world, in different positions/roles, transporting Petabytes of traffic for multiple years, prove the efficiency of this architecture.", "url": "/tutorials/ncs5500-qos-part-2-verifying-buffering/", "author": "Nicolas Fevrier", "tags": "ncs5500, qos, production, buffers" } , "tutorials-bgp-evpn-irb-configuration": { "title": "BGP-EVPN IRB Configuration for Inter-Subnet Routing", "content": " On This Page Reference Topology# Task 1# Configure the BGP-EVPN Distributed Anycast Gateway on Leaf-1 and Leaf-2 Task 2# Configure the BGP-EVPN Distributed Anycast Gateway on Leaf-5 EVPN Integrated Routing and Bridging (IRB) feature allows end hosts across the overlay to communicate with each other within or across the subnets in the VPN. In this post we will cover the implementation of EVPN IRB to route between Host-1 and Host-9. Distributed Anycast Gateway will be configured on Leaf-1 and Leaf-2 for subnet 10.0.0.0/24 and on Leaf-5 for subnet 20.0.0.0/24. After configuring IRB we will ping between the Host-1 and Host-9 to verify the reachability and observe the routes are learnt vie BGP EVPN.In last post we configured the Layer-2 stretch between Leaf-1, Leaf-2 and Leaf-5 using BGP EVPN EVI 10 for VLAN 10. We don’t need VLAN 10 on Leaf-5 for this post, that is why EVI 10 and related Bridge Domain is removed from Leaf-5.Reference Topology#Task 1# Configure the BGP-EVPN Distributed Anycast Gateway on Leaf-1 and Leaf-2BGP EVPN provides Distributed anycast gateway feature that enables any Leaf in the fabric to serve as the active default gateway for a host in a subnet. Same virtual gateway IP address and virtual MAC address is configured on the BVI interface for each subnet across the Leafs enabling them to act as gateway for their locally connected hosts. 
Distributed anycast gateway brings the advantage of seamless workload mobility.We will configure the GW BVI on the Leafs inside the VRF for this post, however we can also configure the BVI in the global routing table.A virtual routing and forwarding instance VRF, represents a tenant. This VRF will have the routes that belong to the overlay network for that tenant. The route-target values should be configured for the VRF to define which prefixes are exported and imported on the Leafs. As we will configure BVI under VRF, the related show commands and troubleshooting should point to the VRF.Configure VRF 10 on Leaf-1, Leaf-2 and Leaf-5 vrf 10 address-family ipv4 unicast import route-target 10#10 ! export route-target 10#10 !Use “show vrf 10” to verify the config.Configure the VRF in BGP to advertised the routes of the VRF to other Leafs. Initiate the VPNv4 address family to advertise VRF label. RD auto under VRF generates RD value automatically based on [BGP-Router-ID#EVI-ID]. However, configuring RD manually is also supported.We will use “redistribute connected” under VRF to advertise connected routes via BGP. In addition, we are configuring BGP multipathing for load balancing where multiple next-hops are available for a prefix.Configure the following on Leaf-1, Leaf-2 and Leaf-5 router bgp 65001 address-family vpnv4 unicast ! vrf 10 rd auto address-family ipv4 unicast additional-paths receive maximum-paths ibgp 10 redistribute connected !Now, we will configure the BVI-10 on Leaf-1 and Leaf-2 under VRF 10. The BVI will serve as the Distributed Anycast GW for subnet 10.0.0.0/24. Make sure the BVI IP address and MAC address are identical on Leaf-1 and Leaf-2. Configure “host-routing” under BVI interface to advertise route-type 2. interface BVI 10 host-routing vrf 10 ipv4 address 10.0.0.1 255.255.255.0 mac-address 1001.1001.1001 !In order for the BVI interface to come up and serve as the gateway to the host connected to the Leaf, we will have to configure the host connectivity to the Leaf (this is already configured in post-2 and post-3). Also associate the BVI to a Bridge-Domain.Associate the BVI interface to the bridge-domain. Configure the following on Leaf-1 and Leaf-2. l2vpn bridge group bg-1 bridge-domain bd-10 interface Bundle-Ether 1.10 ! routed interface BVI 10 !Verify that the BVI is up on Leaf-1 and Leaf-2. Leaf-1 RP/0/RP0/CPU0#Leaf-1#show ip interface brief Interface IP-Address Status Protocol Vrf-Name BVI10 10.0.0.1 Up Up 10 Leaf-2 RP/0/RP0/CPU0#Leaf-2#show ip interface brief Interface IP-Address Status Protocol Vrf-Name BVI10 10.0.0.1 Up Up 10 Reference config of Host-1 with default route to BVI interface on Leaf-1 and Leaf-2 serving as Gateway# Host-1 interface Bundle-Ether1.10 description ~Dual-homed Bundle to Leaf-1 and Leaf-2~ ipv4 address 10.0.0.10 255.255.255.0 encapsulation dot1q 10 ! router static address-family ipv4 unicast 0.0.0.0/0 10.0.0.1 !Task 2# Configure the BGP-EVPN Distributed Anycast Gateway on Leaf-5Configure the BVI for subnet 20.0.0.0/24 on Leaf-5. interface BVI 20 host-routing vrf 10 ipv4 address 20.0.0.1 255.255.255.0 mac-address 1001.1001.2002 !Associate the BVI to a Bridge-Domain and add the Host’s attachment circuit to the Bridge Domain. The BVI will come up once the host connectivity to the Leaf is configured. For Host’s connectivity, configure layer-2 interface with dot1q encapsulation for VLAN 20 on Leaf-5. 
Leaf-5 interface TenGigE0/0/0/45.20 l2transport encapsulation dot1q 20 rewrite ingress tag pop 1 symmetric !Configure Bridge domain for the VLAN 20 and add the VLAN tagged interface to the bridge-domain. Configure the following on Leaf-5. Leaf-5 l2vpn bridge group bg-1 bridge-domain bd-20 interface TenGigE0/0/0/45.20 ! Associate the BVI interface to the bridge-domain. Configure the following on Leaf-5. Leaf-5 l2vpn bridge group bg-1 bridge-domain bd-20 interface TenGigE0/0/0/45.20 ! routed interface BVI 20 !Verify that the BVI is up on Leaf-5.Leaf-5RP/0/RP0/CPU0#Leaf-5#show ip interface briefInterface IP-Address Status Protocol Vrf-NameBVI20 20.0.0.1 Up Up 10 Reference config of Host-9 with default route to BVI interface on Leaf-5 serving as Gateway# Host-9 interface TenGigE0/0/1/3.20 description ~Link to Leaf-5~ ipv4 address 20.0.0.50 255.255.255.0 encapsulation dot1q 20 ! router static address-family ipv4 unicast 0.0.0.0/0 20.0.0.1 !Configure EVI under EVPN config on Leaf-5 to create EVPN service for VLAN 20. This EVI 20 will then be associated to the Bridge-Domain for VLAN 20. Assign the route-target values for the EVI to import and export prefixes via BGP EVPN control-plane.In the below configuration route-target is manually configured, however route-target can be automatically generated as well, based on [BGP-AS]#[EVI-ID]. Leaf-5 evpn evi 20 bgp route-target import 1001#22 route-target export 1001#22 ! !Associate the EVI 20 to Bridge-Domain for VLAN 20 that has attachment-circuit/host-9 connected. Leaf-5 l2vpn bridge group bg-1 bridge-domain bd-20 interface TenGigE0/0/0/45.20 ! routed interface BVI20 ! evi 20 ! !Lets check the host reachability by pinging from Host-1 (IP 10.0.0.10/32) to Host-9 (IP 20.0.0.50/32).In the below output we can see that we can ping between the Host-1 (IP 10.0.0.10) and Host-9 (IP 20.0.0.50) successfully which are both on different subnets. Host-1# RP/0/RSP0/CPU0#Host-1#ping 20.0.0.50 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 20.0.0.50, timeout is 2 seconds# !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 ms RP/0/RSP0/CPU0#Host-1# Host-9# RP/0/RSP0/CPU0#Host-9#ping 10.0.0.10 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 10.0.0.10, timeout is 2 seconds# !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 ms RP/0/RSP0/CPU0#Host-9#We can verify the routes advertisement using the BGP EVPN control-plane. In the below output from Leaf-5 we can see the MAC and IP address of Host-1 are learnt under their respective route distinguishers via EVPN Route-Type-2.Example of Host-1 MAC+IP learnt via Route-Type-2 ([2][0][48][6c9c.ed6d.1d8b][32][10.0.0.10]/136)The route distinguisher value is comprised of [BGP-Router-ID#EVI-ID] eg. for Leaf-1# 1.1.1.1#10, Leaf-2# 2.2.2.2#10. Leaf-5# RP/0/RP0/CPU0#Leaf-5#show bgp l2vpn evpn BGP router identifier 5.5.5.5, local AS number 65001 BGP generic scan interval 60 secs Non-stop routing is enabled BGP table state# Active Table ID# 0x0 RD version# 0 BGP main routing table version 147 BGP NSR Initial initsync version 10 (Reached) BGP NSR/ISSU Sync-Group versions 0/0 BGP scan interval 60 secs Status codes# s suppressed, d damped, h history, * valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discard Origin codes# i - IGP, e - EGP, ? 
- incomplete Network Next Hop Metric LocPrf Weight Path Route Distinguisher# 1.1.1.1#10 *>i[2][0][48][6c9c.ed6d.1d8b][32][10.0.0.10]/136 1.1.1.1 100 0 i * i 1.1.1.1 100 0 i Route Distinguisher# 2.2.2.2#10 *>i[2][0][48][6c9c.ed6d.1d8b][32][10.0.0.10]/136 2.2.2.2 100 0 i * i 2.2.2.2 100 0 i Route Distinguisher# 5.5.5.5#20 (default for vrf bd-20) *> [2][0][48][a03d.6f3d.5447][32][20.0.0.50]/136 0.0.0.0 0 i *> [3][0][32][5.5.5.5]/80 0.0.0.0 0 i Processed 4 prefixes, 6 paths RP/0/RP0/CPU0#Leaf-5# Similarly, on Leaf-1 and Leaf-2 we can see the prefix learnt that is advertised by Leaf-5. Leaf-1# RP/0/RP0/CPU0#Leaf-1#show bgp l2vpn EVpn rd 5.5.5.5#20 BGP router identifier 1.1.1.1, local AS number 65001 BGP generic scan interval 60 secs Non-stop routing is enabled BGP table state# Active Table ID# 0x0 RD version# 0 BGP main routing table version 3911 BGP NSR Initial initsync version 5 (Reached) BGP NSR/ISSU Sync-Group versions 0/0 BGP scan interval 60 secs Status codes# s suppressed, d damped, h history, * valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discard Origin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight Path Route Distinguisher# 5.5.5.5#20 *>i[2][0][48][a03d.6f3d.5447][32][20.0.0.50]/136 5.5.5.5 100 0 i * i 5.5.5.5 100 0 i Processed 1 prefixes, 2 paths RP/0/RP0/CPU0#Leaf-1# Leaf-2# RP/0/RP0/CPU0#Leaf-2#show bgp l2vpn evpn rd 5.5.5.5#20 BGP router identifier 2.2.2.2, local AS number 65001 BGP generic scan interval 60 secs Non-stop routing is enabled BGP table state# Active Table ID# 0x0 RD version# 0 BGP main routing table version 3947 BGP NSR Initial initsync version 5 (Reached) BGP NSR/ISSU Sync-Group versions 0/0 BGP scan interval 60 secs Status codes# s suppressed, d damped, h history, * valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discard Origin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight Path Route Distinguisher# 5.5.5.5#20 *>i[2][0][48][a03d.6f3d.5447][32][20.0.0.50]/136 5.5.5.5 100 0 i * i 5.5.5.5 100 0 i Processed 1 prefixes, 2 paths RP/0/RP0/CPU0#Leaf-2#When a host is discovered through ARP, the MAC and IP Route Type 2 is advertised with both Bridge-Domain/EVI label and IP VRF label with their respective route-targets. The VRF route-targets and IP VPN labels are associated with Route Type-2 to achieve Leaf-Leaf IP routing similar to traditional L3VPNs. For Layer-2 forwarding between Leaf-Leaf, the Bridge-Domain/EVI route-targets and labels associated with the Route Type 2 are used.In the below output on Leaf-5 for the prefix learnt from Leaf-1 (RD 1.1.1.1#10), we can see the highlighted route-target and label values. 
Leaf-5 RP/0/RP0/CPU0#Leaf-5#show bgp l2vpn evpn rd 1.1.1.1#10 [2][0][48][6c9c.ed6d.1d8b][32][10.0.0.10]/136 detail BGP routing table entry for [2][0][48][6c9c.ed6d.1d8b][32][10.0.0.10]/136, Route Distinguisher# 1.1.1.1#10 Versions# Process bRIB/RIB SendTblVer Speaker 209 209 Flags# 0x00840001+0x00010000; Last Modified# Jul 25 19#37#14.072 for 00#01#17 Paths# (2 available, best #1) Not advertised to any peer Path #1# Received by speaker 0 Flags# 0x4000000025060005, import# 0x1f, EVPN# 0x3 Not advertised to any peer Local 1.1.1.1 (metric 20) from 6.6.6.6 (1.1.1.1) Received Label 24060, Second Label 24004 Origin IGP, localpref 100, valid, internal, best, group-best, import-candidate, not-in-vrf Received Path ID 0, Local Path ID 1, version 209 Extended community# Flags 0x1e# SoO#2.2.2.2#10 RT#10#10 RT#1001#11 Originator# 1.1.1.1, Cluster list# 6.6.6.6 EVPN ESI# 0011.1111.1111.1111.1111 Path #2# Received by speaker 0 Flags# 0x4000000020020005, import# 0x40, EVPN# 0x3 Not advertised to any peer Local 1.1.1.1 (metric 20) from 7.7.7.7 (1.1.1.1) Received Label 24060, Second Label 24004 Origin IGP, localpref 100, valid, internal, not-in-vrf Received Path ID 0, Local Path ID 0, version 0 Extended community# Flags 0x1e# SoO#2.2.2.2#10 RT#10#10 RT#1001#11 Originator# 1.1.1.1, Cluster list# 7.7.7.7 EVPN ESI# 0011.1111.1111.1111.1111 RP/0/RP0/CPU0#Leaf-5#Lets check the routing table of VRF 10 on the Leafs. In below output we can see that 10.0.0.10/32 and 20.0.0.50/32 prefixes are being learnt on the Leafs. Leaf-1# RP/0/RP0/CPU0#Leaf-1#show route vrf 10 Gateway of last resort is not set C 10.0.0.0/24 is directly connected, 02#47#36, BVI10 L 10.0.0.1/32 is directly connected, 02#47#36, BVI10 B 10.0.0.10/32 [200/0] via 2.2.2.2 (nexthop in vrf default), 00#55#24 B 20.0.0.50/32 [200/0] via 5.5.5.5 (nexthop in vrf default), 00#26#54 RP/0/RP0/CPU0#Leaf-1# Leaf-2# RP/0/RP0/CPU0#Leaf-2#show route vrf 10 Gateway of last resort is not set C 10.0.0.0/24 is directly connected, 02#48#31, BVI10 L 10.0.0.1/32 is directly connected, 02#48#31, BVI10 B 10.0.0.10/32 [200/0] via 1.1.1.1 (nexthop in vrf default), 00#56#15 B 20.0.0.50/32 [200/0] via 5.5.5.5 (nexthop in vrf default), 00#27#45 RP/0/RP0/CPU0#Leaf-2# Leaf-5# RP/0/RP0/CPU0#Leaf-5#show route vrf 10 Gateway of last resort is not set B 10.0.0.10/32 [200/0] via 1.1.1.1 (nexthop in vrf default), 00#39#32 [200/0] via 2.2.2.2 (nexthop in vrf default), 00#39#32 C 20.0.0.0/24 is directly connected, 1d19h, BVI20 L 20.0.0.1/32 is directly connected, 1d19h, BVI20 RP/0/RP0/CPU0#Leaf-5#Lastly, we verify the CEF table for Host-1’s prefix (10.0.0.10/32) on Leaf-5. We can see that we have ECMP paths available to reach to Host-1 and BGP multipathing is operational. 
Leaf-5 RP/0/RP0/CPU0#Leaf-5#show cef vrf 10 10.0.0.10/32 10.0.0.10/32, version 16, internal 0x5000001 0x0 (ptr 0x97d2f7fc) [1], 0x0 (0x0), 0x208 (0x98aa3138) Updated Jul 25 17#45#29.253 Prefix Len 32, traffic index 0, precedence n/a, priority 3 via 1.1.1.1/32, 3 dependencies, recursive, bgp-multipath [flags 0x6080] path-idx 0 NHID 0x0 [0x96e3ba50 0x0] recursion-via-/32 next hop VRF - 'default', table - 0xe0000000 next hop 1.1.1.1/32 via 16001/0/21 next hop 192.5.6.1/32 BE56 labels imposed {16001 24004} next hop 192.5.7.1/32 BE57 labels imposed {16001 24004} via 2.2.2.2/32, 3 dependencies, recursive, bgp-multipath [flags 0x6080] path-idx 1 NHID 0x0 [0x96e3bbf0 0x0] recursion-via-/32 next hop VRF - 'default', table - 0xe0000000 next hop 2.2.2.2/32 via 16002/0/21 next hop 192.5.6.1/32 BE56 labels imposed {16002 24004} next hop 192.5.7.1/32 BE57 labels imposed {16002 24004} RP/0/RP0/CPU0#Leaf-5#After the configuration and verification we are able to perform inter-subnet routing between Host-1 and Host-9. This was achieved with the BGP EVPN Integrated Routing and Bridging (IRB) feature along with Distributed Anycast Gateway. For deep-dive details of BGP EVPN, refer to our e-vpn.io webpage; it has a lot of material explaining the core concepts of EVPN, its operations and troubleshooting details.", "url": "/tutorials/bgp-evpn-irb-configuration/", "author": "Ahmad Bilal Siddiqui", "tags": "iosxr, ncs 5500, evpn, NCS5500" } , "tutorials-bgp-flowspec-on-ncs5500": { "title": "BGP FlowSpec on NCS5500: A few tests on scale, rate and memory usage", "content": " BGP FlowSpec on NCS5500# Few Tests Introduction Specific NCS5500 implementation Recirculation IPv6 specific mode Interface support Test setup Scale Tests 3000 rules 4000 rules 6000 rules 9000 rules Latest Scale Session limit configuration Verification of the resource used with complex rules ICMP type / code Packet size Fragmented TCP SYN Arbor auto-mitigation First group# unique source-port Second group# dual source-port Third group# packet length Last group# frag To summarize Programming rate References Conclusion/Acknowledgements Update 1# Correction on the hw-module profile ipv6-flowspec sectionUpdate 2# Netscout simplified the ntp auto-mitigation; we ran the test with this new rule. Also, an error on the Netbios ports has been fixed.You can find more content related to NCS5500 including routing memory management, VRF, URPF, Netflow, QoS, EVPN implementation following this link.IntroductionYosef published a couple of articles related to the BGP FlowSpec implementation on the NCS5500 routers here# SupportForum# BGP Flowspec implementation on NCS5500 platforms# https#//community.cisco.com/t5/service-providers-blogs/bgp-flowspec-implementation-on-ncs5500-platforms/ba-p/3387443 SupportForum# NCS5500 BGP flowspec packet matching criteria# https#//community.cisco.com/t5/service-providers-blogs/bgp-flowspec-implementation-on-ncs5500-platforms/ba-p/3387443Today, we will gather several questions from customers and use this opportunity to dig a bit deeper into the subtleties of this implementation# presenting the memory spaces used to store the rule information and the statistics, and running a couple of tests to identify the limits.As a starter, I suggest three videos on Youtube that could answer most of your questions on the topic.
The first two are relatively short; the last one will require a couple of hours of your time.All the principles and details of the configuration#Cisco NCS5500 Flowspec (Principles and Configuration) Part1A simple demo of interoperability between Netscout / Arbor SP and NCS5500 to auto-mitigate a MemCacheD amplification attack#Cisco NCS5500 Flowspec (Auto-Mitigation of a Memcached Attack) Part2Finally, the CiscoLive session dedicated to BGP FlowSpec. A deep dive into the technology#BRKSPG 3012 - Leveraging BGP Flowspec to protect your infrastructureSpecific NCS5500 implementationFirst reminder# the support is limited today (September 2019) to the platforms based on the Jericho+ NPU and External TCAM (OP# Optimus Prime).BGP FlowSpec being implemented in ingress, the distinction between line cards matters only where the packets are received. The card used to egress the traffic is not relevant.We support BGP FS on the following products# NCS55A1-36H-SE-S NCS55A2-MOD-SE-S (the one we are using for these tests) NC55-36X100G-A-SE line card NC55-MOD-A-SE-S line cardFor the most part, the implementation is identical to what has been done on the ASR9000, CRS and NCS6000 platforms.You can refer to the configuration guide on the ASR9000 and use the examples available from multiple sources.In the next parts, you’ll find the aspects that are specific to the NCS5500#RecirculationWhen packets are matched by a BGP FS rule, they will be recirculated. This is required to permit the accounting of the matched packets.IPv6 specific modeBGP FS for IPv6 requires a specific hardware profile.It will impact the overall performance. That means all packets, handled or not by the BGP FlowSpec rules, will be treated at a maximum of 700MPPS instead of the nominal 835MPPS.You need to enable the following profile as described below#RP/0/RP0/CPU0#Peyto-SE(config)#hw-module profile flowspec ? v6-enable Configure support for v6 flowspecRP/0/RP0/CPU0#Peyto-SE(config)#hw-module profile flowspec v6-enable ? location Location of flowspec configRP/0/RP0/CPU0#Peyto-SE(config)#hw-module profile flowspec v6-enableRP/0/RP0/CPU0#Peyto-SE(config)#commitRP/0/RP0/CPU0#Peyto-SE(config)#To be enabled, the profile needs a reload of the line cards or the entire system.Interface supportYosef covered it in the supportforum blog but it’s important to remember that BGP FlowSpec is activated on L3 interfaces but will NOT process packets received from a GRE tunnel or on a BVI interface.
Also, BGP flowspec is NOT supported with multicast traffic.Test setupConfig Route Generator / Controllerrouter bgp 100bgp_id 192.168.100.151neighbor 192.168.100.217 remote-as 100neighbor 192.168.100.217 update-source 192.168.100.151capability ipv4 flowspecnetwork 1 ipv4 flowspecnetwork 1 dest 2.2.2.0/24 source 3.3.0.0/16 protocol 6 dest-port 8080network 1 count 4000 dest-incrext_community 1 traffic-rate#1#0Config Router / Client #We are using an NCS55A2MOD router with External TCAM#RP/0/RP0/CPU0#Peyto-SE#sh platNode Type State Config state--------------------------------------------------------------------------------0/0/1 NC55-MPA-4H-S OK0/0/2 NC55-MPA-12T-S OK0/RP0/CPU0 NCS-55A2-MOD-SE-S(Active) IOS XR RUN NSHUT0/RP0/NPU0 Slice UP0/FT0 NC55-A2-FAN-FW OPERATIONAL NSHUT0/FT1 NC55-A2-FAN-FW OPERATIONAL NSHUT0/FT2 NC55-A2-FAN-FW OPERATIONAL NSHUT0/FT3 NC55-A2-FAN-FW OPERATIONAL NSHUT0/FT4 NC55-A2-FAN-FW OPERATIONAL NSHUT0/FT5 NC55-A2-FAN-FW OPERATIONAL NSHUT0/FT6 NC55-A2-FAN-FW OPERATIONAL NSHUT0/FT7 NC55-A2-FAN-FW OPERATIONAL NSHUT0/PM0 NC55-1200W-ACFW OPERATIONAL NSHUT0/PM1 NC55-1200W-ACFW FAILED NSHUTRP/0/RP0/CPU0#Peyto-SE#And the configuration#router bgp 100 address-family ipv4 flowspec ! neighbor 192.168.100.151 remote-as 100 update-source MgmtEth0/RP0/CPU0/0 ! address-family ipv4 flowspec route-policy PERMIT-ANY in route-policy PERMIT-ANY out ! !flowspec local-install interface-all!Scale Tests3000 rulesFrom the controller, we advertise 3000 simple rules (which is the level of support on the IOS XR routers) and we will use this opportunity to check the resources consumed. The following commands can be used for normal operation and troubleshooting.We verify the advertisement at the BGP peer level first#RP/0/RP0/CPU0#Peyto-SE#sh bgp ipv4 flowspec sumBGP router identifier 1.1.1.111, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0x0 RD version# 97804BGP main routing table version 97804BGP NSR Initial initsync version 0 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 97804 97804 97804 97804 97804 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.100.151 0 100 802 463 97804 0 0 00#00#11 3000RP/0/RP0/CPU0#Peyto-SE#We also verify that the rules are properly received#RP/0/RP0/CPU0#Peyto-SE#show policy-map transient type pbr pmap-name __bgpfs_default_IPv4policy-map type pbr __bgpfs_default_IPv4 handle#0x36000002 table description# L3 IPv4 and IPv6 class handle#0x76004f03 sequence 1024 match destination-address ipv4 2.2.2.0 255.255.255.0 match source-address ipv4 3.3.0.0 255.255.0.0 match protocol tcp match destination-port 8080 drop ! class handle#0x76004f04 sequence 2048 match destination-address ipv4 2.2.3.0 255.255.255.0 match source-address ipv4 3.3.0.0 255.255.0.0 match protocol tcp match destination-port 8080 drop ! 
...On the flowspec side too#RP/0/RP0/CPU0#Peyto-SE#sh flowspec ipv4 detailAFI# IPv4 Flow #Dest#2.2.2.0/24,Source#3.3.0.0/16,Proto#=6,DPort#=8080 Actions #Traffic-rate# 0 bps (bgp.1) Statistics (packets/bytes) Matched # 0/0 Transmitted # 0/0 Dropped # 0/0 Flow #Dest#2.2.3.0/24,Source#3.3.0.0/16,Proto#=6,DPort#=8080 Actions #Traffic-rate# 0 bps (bgp.1) Statistics (packets/bytes) Matched # 0/0 Transmitted # 0/0 Dropped # 0/0 Flow #Dest#2.2.4.0/24,Source#3.3.0.0/16,Proto#=6,DPort#=8080 Actions #Traffic-rate# 0 bps (bgp.1) Statistics (packets/bytes) Matched # 0/0 Transmitted # 0/0 Dropped # 0/0 Flow #Dest#2.2.5.0/24,Source#3.3.0.0/16,Proto#=6,DPort#=8080 Actions #Traffic-rate# 0 bps (bgp.1) Statistics (packets/bytes) Matched # 0/0 Transmitted # 0/0 Dropped # 0/0...To be passed from IOS XR to the hardware, we are using the DPA/OFA table “ippbr”#RP/0/RP0/CPU0#Peyto-SE#sh dpa resources ippbr loc 0/0/cPU0~ippbr~ OFA Table (Id# 137, Scope# Global)-------------------------------------------------- NPU ID# NPU-0 In Use# 3000 Create Requests Total# 3000 Success# 3000 Delete Requests Total# 1000 Success# 1000 Update Requests Total# 0 Success# 0 EOD Requests Total# 0 Success# 0 Errors HW Failures# 0 Resolve Failures# 0 No memory in DB# 0 Not found in DB# 0 Exists in DB# 0 Reserve Resources Failures# 0 Release Resources Failures# 0 Update Resources Failures# 0RP/0/RP0/CPU0#Peyto-SE#The BGP FlowSpec rules are stored in external TCAM in a specific zone, different from the one used for IPv4 and IPv6 prefixes#RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam location 0/0/CPU0External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 80b FLP 6481603 6 0 IPv4 UC0 1 80b FLP 0 0 1 IPv4 RPF0 2 160b FLP 2389864 3 3 IPv6 UC0 3 160b FLP 0 0 4 IPv6 RPF0 4 320b FLP 4067 29 5 IPv6 MC0 5 80b FLP 4096 0 82 INGRESS_IPV4_SRC_IP_EXT0 6 80b FLP 4096 0 83 INGRESS_IPV4_DST_IP_EXT0 7 160b FLP 4096 0 84 INGRESS_IPV6_SRC_IP_EXT0 8 160b FLP 4096 0 85 INGRESS_IPV6_DST_IP_EXT0 9 80b FLP 4096 0 86 INGRESS_IP_SRC_PORT_EXT0 10 80b FLP 4096 0 87 INGRESS_IPV6_SRC_PORT_EXT0 11 320b FLP 1096 3000 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#Nothing will be used in the other most common resources# LPM, LEM, IPv4/IPv6 eTCAM or iTCAM. 
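Since only this INGRESS_FLOWSPEC_IPV4 bank moves when rules are programmed, it can be handy to track it programmatically during repeated tests instead of eyeballing the CLI. The snippet below is a minimal illustrative sketch, not an official tool: the function name is arbitrary and it simply assumes you feed it the text of "sh contr npu externaltcam" captured from the router.

def flowspec_etcam_usage(show_output):
    # Scan the externaltcam output for the row owning the INGRESS_FLOWSPEC_IPV4
    # database. Column order in the output above:
    # NPU, Bank Id, Entry Size, Owner, Free Entries, Per-DB Entry, DB ID, DB Name.
    for line in show_output.splitlines():
        if "INGRESS_FLOWSPEC_IPV4" in line:
            fields = line.split()
            return int(fields[4]), int(fields[5])   # (free, in use)
    return None

# With the 3000-rule snapshot above, the relevant row is:
row = "0 11 320b FLP 1096 3000 126 INGRESS_FLOWSPEC_IPV4"
print(flowspec_etcam_usage(row))   # -> (1096, 3000)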
You can verify it with “sh contr npu resources all loc 0/0/CPU0”RP/0/RP0/CPU0#Peyto-SE#sh contr npu resources all loc 0/0/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage NPU-0 Total In-Use # 0 (0 %) iproute # 0 (0 %) ip6route # 0 (0 %) mplslabel # 0 (0 %) l2brmac # 0 (0 %)HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 329283 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage NPU-0 Total In-Use # 4 (0 %) iproute # 0 (0 %) ip6route # 0 (0 %) ipmcroute # 1 (0 %) ip6mcroute # 0 (0 %) ip6mc_comp_grp # 0 (0 %)HW Resource Information Name # encapOOR Information NPU-0 Estimated Max Entries # 104000 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage NPU-0 Total In-Use # 0 (0 %) ipnh # 0 (0 %) ip6nh # 0 (0 %) mplsnh # 0 (0 %)HW Resource Information Name # ext_tcam_ipv4OOR Information NPU-0 Estimated Max Entries # 4000000 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage NPU-0 Total In-Use # 6 (0 %) iproute # 9 (0 %)HW Resource Information Name # fecOOR Information NPU-0 Estimated Max Entries # 126976 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage NPU-0 Total In-Use # 15 (0 %) ipnhgroup # 7 (0 %) ip6nhgroup # 2 (0 %) edpl # 0 (0 %) limd # 0 (0 %) punt # 4 (0 %) iptunneldecap # 0 (0 %) ipmcroute # 1 (0 %) ip6mcroute # 0 (0 %) ipnh # 0 (0 %) ip6nh # 0 (0 %) mplsmdtbud # 0 (0 %) ipvrf # 1 (0 %) ippbr # 0 (0 %) redirectvrf # 0 (0 %) erp # 0 (0 %)HW Resource Information Name # ecmp_fecOOR Information NPU-0 Estimated Max Entries # 4096 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage NPU-0 Total In-Use # 0 (0 %) ipnhgroup # 0 (0 %) ip6nhgroup # 0 (0 %)HW Resource Information Name # ext_tcam_ipv6OOR Information NPU-0 Estimated Max Entries # 2000000 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage NPU-0 Total In-Use # 3 (0 %) ip6route # 9 (0 %)RP/0/RP0/CPU0#Peyto-SE#RP/0/RP0/CPU0#Peyto-SE#sh contr npu internaltcam location 0/0/CPU0Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b flp-tcam 2045 0 00 1 160b pmf-0 1959 58 7 INGRESS_LPTS_IPV40 1 160b pmf-0 1959 8 14 INGRESS_RX_ISIS0 1 160b pmf-0 1959 16 27 INGRESS_QOS_IPV40 1 160b pmf-0 1959 6 29 INGRESS_QOS_MPLS0 1 160b pmf-0 1959 1 36 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 2 160b pmf-0 1975 40 17 INGRESS_ACL_L3_IPV40 2 160b pmf-0 1975 33 30 INGRESS_QOS_L20 3 160b egress_acl 2030 18 3 EGRESS_QOS_MAP0 4\\5 320b pmf-0 1984 49 8 INGRESS_LPTS_IPV60 4\\5 320b pmf-0 1984 15 28 INGRESS_QOS_IPV60 6 160b Free 2048 0 00 7 160b Free 2048 0 00 8 160b Free 2048 0 00 9 160b Free 2048 0 00 10 160b Free 2048 0 00 11 160b Free 2048 0 00 12 160b flp-tcam 125 0 00 13 160b pmf-1 9 54 13 INGRESS_RX_L20 13 160b pmf-1 9 13 23 INGRESS_MPLS0 13 160b pmf-1 9 46 74 INGRESS_BFD_IPV4_NO_DESC_TCAM_T0 13 160b pmf-1 9 4 86 SRV6_END0 13 160b pmf-1 9 2 95 INGRESS_IP_DISABLE0 14 160b egress_acl 120 8 6 EGRESS_L3_QOS_MAP0 15 160b Free 128 0 0RP/0/RP0/CPU0#Peyto-SE#The BGP Flowspec rules will consume statistic entries.Before the advertisement of the rules#RP/0/RP0/CPU0#Peyto-SE#sh contr npu resources stats instance 0 loc 0/0/CPU0System information for NPU 0# Counter processor configuration profile# Default Next available counter processor# 6Counter 
processor# 0 | Counter processor# 1 State# In use | State# In use | Application# In use Total | Application# In use Total Trap 113 300 | Trap 110 300 Policer (QoS) 32 6976 | Policer (QoS) 0 6976 ACL RX, LPTS 202 915 | ACL RX, LPTS 202 915 | |Counter processor# 2 | Counter processor# 3 State# In use | State# In use | Application# In use Total | Application# In use Total VOQ 67 8191 | VOQ 67 8191 | |Counter processor# 4 | Counter processor# 5 State# Free | State# Free | |Counter processor# 6 | Counter processor# 7 State# Free | State# Free | |Counter processor# 8 | Counter processor# 9 State# Free | State# Free | |Counter processor# 10 | Counter processor# 11 State# In use | State# In use | Application# In use Total | Application# In use Total L3 RX 0 1638 | L3 RX 0 1638 L2 RX 0 8192 | L2 RX 0 8192 | |Counter processor# 12 | Counter processor# 13 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 0 16383 | Interface TX 0 16383 | |Counter processor# 14 | Counter processor# 15 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 0 16384 | Interface TX 0 16384 | |RP/0/RP0/CPU0#Peyto-SE#We highlighted the “ACL RX, LPTS” which will contain the counters for Flowspec.Before injecting the rules, we are already consuming 202 entries. It will be our reference point.And now after the learning of 3000 rules#RP/0/RP0/CPU0#Peyto-SE#sh contr npu resources stats instance 0 location allHW Stats Information For Location# 0/0/CPU0System information for NPU 0# Counter processor configuration profile# Default Next available counter processor# 6Counter processor# 0 | Counter processor# 1 State# In use | State# In use | Application# In use Total | Application# In use Total Trap 113 300 | Trap 110 300 Policer (QoS) 32 6976 | Policer (QoS) 0 6976 ACL RX, LPTS 914 915 | ACL RX, LPTS 914 915 | |Counter processor# 2 | Counter processor# 3 State# In use | State# In use | Application# In use Total | Application# In use Total VOQ 67 8191 | VOQ 67 8191 | |Counter processor# 4 | Counter processor# 5 State# In use | State# In use | Application# In use Total | Application# In use Total ACL RX, LPTS 2288 8192 | ACL RX, LPTS 2288 8192 | |Counter processor# 6 | Counter processor# 7 State# Free | State# Free | |Counter processor# 8 | Counter processor# 9 State# Free | State# Free | |Counter processor# 10 | Counter processor# 11 State# In use | State# In use | Application# In use Total | Application# In use Total L3 RX 0 1638 | L3 RX 0 1638 L2 RX 0 8192 | L2 RX 0 8192 | |Counter processor# 12 | Counter processor# 13 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 0 16383 | Interface TX 0 16383 | |Counter processor# 14 | Counter processor# 15 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 0 16384 | Interface TX 0 16384 | |RP/0/RP0/CPU0#Peyto-SE#In Counter Processor 0# we used to consume 202 entries before the BGP FS rules and we have now 914, so, 712 entries have allocated to Flowspec.In Counter Processor 4# we allocated 2288 new entries.So, in total, we have 2288 + 712 = 3000 entries which is in-line with the expectation.Note# This number 3000 is the validated scale on all the IOS XR platforms. It does not mean that some systems couldn’t go higher. It will depend on the platforms and the software releases. But 3000 simple rules are guaranteed. 
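The same accounting can be scripted. Below is a minimal sketch, assuming you have already extracted the "ACL RX, LPTS" In-use values per counter processor from two snapshots (before and after advertising the rules) and that only one processor of each mirrored pair is counted (0/1 and 4/5 show the same values), as done in the text above; the function name and dictionary layout are purely illustrative.

def flowspec_stats_entries(before, after):
    # 'before' and 'after' map a counter-processor id to its
    # "ACL RX, LPTS" In-use value; the growth between the two snapshots
    # is the number of statistic entries consumed by the BGP FS rules.
    total = 0
    for cp, used_after in after.items():
        total += max(0, used_after - before.get(cp, 0))
    return total

before = {0: 202, 4: 0}      # values read from the first snapshot
after  = {0: 914, 4: 2288}   # values read after the 3000 rules
print(flowspec_stats_entries(before, after))   # -> 3000, one entry per rule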
The rest of the tests performed below will try to answer specific questions from customers (during CPOC or for production), but it’s only for information. Results may vary depending on platform and software release.So, what happens if we inject 4000, 6000 or 9000 rules?4000 rulesLet’s see what will happen if we push further. We start with 4000 rules of the same kind than used in the former test.RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam location 0/0/CPU0External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 80b FLP 6481603 6 0 IPv4 UC0 1 80b FLP 0 0 1 IPv4 RPF0 2 160b FLP 2389864 3 3 IPv6 UC0 3 160b FLP 0 0 4 IPv6 RPF0 4 320b FLP 4067 29 5 IPv6 MC0 5 80b FLP 4096 0 82 INGRESS_IPV4_SRC_IP_EXT0 6 80b FLP 4096 0 83 INGRESS_IPV4_DST_IP_EXT0 7 160b FLP 4096 0 84 INGRESS_IPV6_SRC_IP_EXT0 8 160b FLP 4096 0 85 INGRESS_IPV6_DST_IP_EXT0 9 80b FLP 4096 0 86 INGRESS_IP_SRC_PORT_EXT0 10 80b FLP 4096 0 87 INGRESS_IPV6_SRC_PORT_EXT0 11 320b FLP 96 4000 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#RP/0/RP0/CPU0#Peyto-SE#sh dpa resources ippbr loc 0/0/CPU0~ippbr~ OFA Table (Id# 137, Scope# Global)-------------------------------------------------- NPU ID# NPU-0 In Use# 4000 Create Requests Total# 7000 Success# 7000 Delete Requests Total# 4118 Success# 4118 Update Requests Total# 0 Success# 0 EOD Requests Total# 0 Success# 0 Errors HW Failures# 0 Resolve Failures# 0 No memory in DB# 0 Not found in DB# 0 Exists in DB# 0 Reserve Resources Failures# 0 Release Resources Failures# 0 Update Resources Failures# 0RP/0/RP0/CPU0#Peyto-SE#RP/0/RP0/CPU0#Peyto-SE#sh contr npu resources stats instance 0 location allHW Stats Information For Location# 0/0/CPU0System information for NPU 0# Counter processor configuration profile# Default Next available counter processor# 6Counter processor# 0 | Counter processor# 1 State# In use | State# In use | Application# In use Total | Application# In use Total Trap 113 300 | Trap 110 300 Policer (QoS) 32 6976 | Policer (QoS) 0 6976 ACL RX, LPTS 912 915 | ACL RX, LPTS 912 915 | |Counter processor# 2 | Counter processor# 3 State# In use | State# In use | Application# In use Total | Application# In use Total VOQ 67 8191 | VOQ 67 8191 | |Counter processor# 4 | Counter processor# 5 State# In use | State# In use | Application# In use Total | Application# In use Total ACL RX, LPTS 3287 8192 | ACL RX, LPTS 3287 8192 | |Counter processor# 6 | Counter processor# 7 State# Free | State# Free | |Counter processor# 8 | Counter processor# 9 State# Free | State# Free | |Counter processor# 10 | Counter processor# 11 State# In use | State# In use | Application# In use Total | Application# In use Total L3 RX 0 1638 | L3 RX 0 1638 L2 RX 0 8192 | L2 RX 0 8192 | |Counter processor# 12 | Counter processor# 13 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 0 16383 | Interface TX 0 16383 | |Counter processor# 14 | Counter processor# 15 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 0 16384 | Interface TX 0 16384 | |RP/0/RP0/CPU0#Peyto-SE#It looks like 4000 entries were received quickly and didn’t trigger any error.6000 rulesMoving the cursor to 6000 rules now, twice the supported level.The BGP part is learnt almost instantly.RP/0/RP0/CPU0#Peyto-SE#sh bgp ipv4 flowspec sumBGP router identifier 1.1.1.111, local AS number 100BGP 
generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0x0 RD version# 132804BGP main routing table version 132804BGP NSR Initial initsync version 0 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 132804 126804 132804 132804 126804 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.100.151 0 100 989 523 126804 0 0 00#00#33 6000RP/0/RP0/CPU0#Peyto-SE#On the hardware side, the first 4200 rules are programmed in a few seconds then it progresses much more slowly#RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam location 0/0/CPU0External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 80b FLP 6481603 6 0 IPv4 UC0 1 80b FLP 0 0 1 IPv4 RPF0 2 160b FLP 2389864 3 3 IPv6 UC0 3 160b FLP 0 0 4 IPv6 RPF0 4 320b FLP 4067 29 5 IPv6 MC0 5 80b FLP 4096 0 82 INGRESS_IPV4_SRC_IP_EXT0 6 80b FLP 4096 0 83 INGRESS_IPV4_DST_IP_EXT0 7 160b FLP 4096 0 84 INGRESS_IPV6_SRC_IP_EXT0 8 160b FLP 4096 0 85 INGRESS_IPV6_DST_IP_EXT0 9 80b FLP 4096 0 86 INGRESS_IP_SRC_PORT_EXT0 10 80b FLP 4096 0 87 INGRESS_IPV6_SRC_PORT_EXT0 11 320b FLP 4940 4276 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#It will take several minutes to program the remaining 2000ish rules.Eventually, rules will be programmed and the DPA part doesn’t show any error despite the very long time it takes.RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam location 0/0/CPU0External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 80b FLP 6481603 6 0 IPv4 UC0 1 80b FLP 0 0 1 IPv4 RPF0 2 160b FLP 2389864 3 3 IPv6 UC0 3 160b FLP 0 0 4 IPv6 RPF0 4 320b FLP 4067 29 5 IPv6 MC0 5 80b FLP 4096 0 82 INGRESS_IPV4_SRC_IP_EXT0 6 80b FLP 4096 0 83 INGRESS_IPV4_DST_IP_EXT0 7 160b FLP 4096 0 84 INGRESS_IPV6_SRC_IP_EXT0 8 160b FLP 4096 0 85 INGRESS_IPV6_DST_IP_EXT0 9 80b FLP 4096 0 86 INGRESS_IP_SRC_PORT_EXT0 10 80b FLP 4096 0 87 INGRESS_IPV6_SRC_PORT_EXT0 11 320b FLP 4240 6000 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#sh contr npu resources stats instance 0 location allHW Stats Information For Location# 0/0/CPU0System information for NPU 0# Counter processor configuration profile# Default Next available counter processor# 4Counter processor# 0 | Counter processor# 1 State# In use | State# In use | Application# In use Total | Application# In use Total Trap 113 300 | Trap 110 300 Policer (QoS) 32 6976 | Policer (QoS) 0 6976 ACL RX, LPTS 915 915 | ACL RX, LPTS 915 915 | |Counter processor# 2 | Counter processor# 3 State# In use | State# In use | Application# In use Total | Application# In use Total VOQ 67 8191 | VOQ 67 8191 | |Counter processor# 4 | Counter processor# 5 State# Free | State# Free | |Counter processor# 6 | Counter processor# 7 State# In use | State# In use | Application# In use Total | Application# In use Total ACL RX, LPTS 5287 8192 | ACL RX, LPTS 5287 8192 | |Counter processor# 8 | Counter processor# 9 State# Free | State# Free | |Counter processor# 10 | Counter processor# 11 State# In use | State# In use | Application# In use Total | Application# In use Total L3 RX 0 1638 | L3 RX 0 1638 L2 RX 0 8192 | L2 RX 0 8192 | |Counter processor# 
12 | Counter processor# 13 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 0 16383 | Interface TX 0 16383 | |Counter processor# 14 | Counter processor# 15 State# In use | State# In use | Application# In use Total | Application# In use Total Interface TX 0 16384 | Interface TX 0 16384 | |RP/0/RP0/CPU0#Peyto-SE#sh dpa resources ippbr loc 0/0/CPU0~ippbr~ OFA Table (Id# 137, Scope# Global)-------------------------------------------------- NPU ID# NPU-0 In Use# 6000 Create Requests Total# 179286 Success# 179286 Delete Requests Total# 173286 Success# 173286 Update Requests Total# 0 Success# 0 EOD Requests Total# 0 Success# 0 Errors HW Failures# 0 Resolve Failures# 0 No memory in DB# 0 Not found in DB# 0 Exists in DB# 0 Reserve Resources Failures# 0 Release Resources Failures# 0 Update Resources Failures# 0RP/0/RP0/CPU0#Peyto-SE#9000 rulesOk, one last try… This time with 9000 rules. Three times the officially supported scale.Like we noticed for the former test with 6000 rules, the BGP part is going pretty fast, the programming goes to 4200 rules quickly and then learns the routes slowly.RP/0/RP0/CPU0#Peyto-SE#sh bgp ipv4 flowspec sumBGP router identifier 1.1.1.111, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0x0 RD version# 163804BGP main routing table version 163804BGP NSR Initial initsync version 0 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 163804 154804 163804 163804 154804 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.100.151 0 100 1174 593 154804 0 0 00#02#45 9000RP/0/RP0/CPU0#Peyto-SE#RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam location 0/0/CPU0External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 80b FLP 6481603 6 0 IPv4 UC0 1 80b FLP 0 0 1 IPv4 RPF0 2 160b FLP 2389864 3 3 IPv6 UC0 3 160b FLP 0 0 4 IPv6 RPF0 4 320b FLP 4067 29 5 IPv6 MC0 5 80b FLP 4096 0 82 INGRESS_IPV4_SRC_IP_EXT0 6 80b FLP 4096 0 83 INGRESS_IPV4_DST_IP_EXT0 7 160b FLP 4096 0 84 INGRESS_IPV6_SRC_IP_EXT0 8 160b FLP 4096 0 85 INGRESS_IPV6_DST_IP_EXT0 9 80b FLP 4096 0 86 INGRESS_IP_SRC_PORT_EXT0 10 80b FLP 4096 0 87 INGRESS_IPV6_SRC_PORT_EXT0 11 320b FLP 4997 4219 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#This time, we pushed too far and exceeded the memory allocations.The DPA/OFA is showing error messages which proves it was not able to program the entry in hardware.RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam location 0/0/CPU0External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 80b FLP 6481603 6 0 IPv4 UC0 1 80b FLP 0 0 1 IPv4 RPF0 2 160b FLP 2389864 3 3 IPv6 UC0 3 160b FLP 0 0 4 IPv6 RPF0 4 320b FLP 4067 29 5 IPv6 MC0 5 80b FLP 4096 0 82 INGRESS_IPV4_SRC_IP_EXT0 6 80b FLP 4096 0 83 INGRESS_IPV4_DST_IP_EXT0 7 160b FLP 4096 0 84 INGRESS_IPV6_SRC_IP_EXT0 8 160b FLP 4096 0 85 INGRESS_IPV6_DST_IP_EXT0 9 80b FLP 4096 0 86 INGRESS_IP_SRC_PORT_EXT0 10 80b FLP 4096 0 87 INGRESS_IPV6_SRC_PORT_EXT0 11 320b FLP 4406 8906 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#sh dpa resources ippbr loc 
0/0/CPU0~ippbr~ OFA Table (Id# 137, Scope# Global)-------------------------------------------------- NPU ID# NPU-0 In Use# 8906 Create Requests Total# 867909 Success# 867374 Delete Requests Total# 858909 Success# 858468 Update Requests Total# 0 Success# 0 EOD Requests Total# 0 Success# 0 Errors HW Failures# 535 Resolve Failures# 0 No memory in DB# 0 Not found in DB# 441 Exists in DB# 0 Reserve Resources Failures# 0 Release Resources Failures# 0 Update Resources Failures# 0RP/0/RP0/CPU0#Peyto-SE#We are seeing the router is not behaving erratically (crash or memory dumps), it just refuses to program more entries in the memory and increments the DPA Hw errors counters.I have to re-iterate# the officially tested, it means, supported scale for BGP Flowspec is 3000 rules.We were able to push to 4000 with this platform with no noticeable problem, to 6000 with a very low programming rate in the last part but not to 9000. But it doesn’t prove anything, just that it doesn’t badly impair the router.The results may be different on a different NCS5500 platform or a different IOS XR version. So, please take all this with a grain of salt.Latest ScaleWith IOS-XR 7.6.1, we can now assign 32K BGP Flowspec entries, thus increasing the number of matches and actions covered. In earlier releases, you could configure 16K BGP Flowspec entries. BGP Flowspec entries up to 32K are supported only on Cisco NCS 5700 series fixed port routers and the Cisco NCS 5500 series routers that have the Cisco NC57 line cards that are installed and operating in the native mode. BGP Flowspec can scale up to 32K entries only when you enable the l3max-se profile.More details on the scale can be found here.Session limit configurationIs it possible to limit the number of rules received per session or globally?We can configure the “maximum-prefix” under the neighbor statement to limit the number of advertised (received) rules for a given session. But it’s not possible to globally limit the number of rules to a specific value.The only workaround will consist in using a single BGP FS session from the client to a route-reflector.The max-prefix feature is directly inherited from the BGP world and benefits to Flowspec without specific adaptation.RP/0/RP0/CPU0#Peyto-SE(config)#router bgp 100RP/0/RP0/CPU0#Peyto-SE(config-bgp)# neighbor 192.168.100.151RP/0/RP0/CPU0#Peyto-SE(config-bgp-nbr)# address-family ipv4 flowspecRP/0/RP0/CPU0#Peyto-SE(config-bgp-nbr-af)#maximum-prefix 1010 75RP/0/RP0/CPU0#Peyto-SE(config-bgp-nbr-af)#commitWe advertise 1000 rules, it only generates a warning message#RP/0/RP0/CPU0#Jul 15 00#56#58.887 UTC# bgp[1084]# %ROUTING-BGP-5-ADJCHANGE # neighbor 192.168.100.151 Up (VRF# default) (AS# 100)RP/0/RP0/CPU0#Jul 15 00#56#58.888 UTC# bgp[1084]# %ROUTING-BGP-5-NSR_STATE_CHANGE # Changed state to Not NSR-ReadyRP/0/RP0/CPU0#Jul 15 00#56#59.147 UTC# bgp[1084]# %ROUTING-BGP-5-MAXPFX # No. of IPv4 Flowspec prefixes received from 192.168.100.151 has reached 758, max 1010If we push to 1020 rules#RP/0/RP0/CPU0#Jul 15 00#59#55.549 UTC# bgp[1084]# %ROUTING-BGP-4-MAXPFXEXCEED # No. 
of IPv4 Flowspec prefixes received from 192.168.100.151# 1011 exceed limit 1010RP/0/RP0/CPU0#Jul 15 00#59#55.549 UTC# bgp[1084]# %ROUTING-BGP-5-ADJCHANGE # neighbor 192.168.100.151 Down - Peer exceeding maximum prefix limit (CEASE notification sent - maximum number of prefixes reached) (VRF# default) (AS# 100)RP/0/RP0/CPU0#Peyto-SE#sh bgp ipv4 flowspec sumBGP router identifier 1.1.1.111, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0x0 RD version# 176824BGP main routing table version 176824BGP NSR Initial initsync version 0 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 176824 176824 176824 176824 176824 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.100.151 0 100 1243 649 0 0 0 00#00#33 Idle (PfxCt)RP/0/RP0/CPU0#Peyto-SE#Note that by default, it will be necessary to clear the bgp session to “unstuck” it from idle state.Also, other options exist to restart it automatically after a few minutes, to ignore the extra rules or to simply generate a warning message#RP/0/RP0/CPU0#Peyto-SE(config-bgp-nbr-af)#maximum-prefix 1010 75 ? discard-extra-paths Discard extra paths when limit is exceeded restart Restart time interval warning-only Only give warning message when limit is exceededRP/0/RP0/CPU0#Peyto-SE(config-bgp-nbr-af)#Verification of the resource used with complex rulesIn the tests above, we used a simple rule made of# source prefix destination prefix protocol UDP port 8080From the generator, it’s represented as#network 1 dest 2.2.2.0/24 source 3.3.0.0/16 protocol 6 dest-port 8080Which is received on the client#AFI# IPv4 Flow #Dest#2.2.2.0/24,Source#3.3.0.0/16,Proto#=6,DPort#=8080 Actions #Traffic-rate# 0 bps (bgp.1)This simple rule will use a single entry in our external TCAM bank 11.Now, let’s try to identify how much space other rules will consume.ICMP type / codeOn the controller, we advertise 100 rules with source, destination, ICMP type and code, and an increase of the destination.On the Controller#network 1 ipv4 flowspecnetwork 1 dest 2.2.2.0/24 source 3.3.0.0/16network 1 icmp-type 3 icmp-code 16network 1 count 100 dest-incrOn the Client/Router#RP/0/RP0/CPU0#Peyto-SE#sh flowspec ipv4AFI# IPv4 Flow #Dest#2.2.2.0/24,Source#3.3.0.0/16,ICMPType#=3,ICMPCode#=16 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#2.2.3.0/24,Source#3.3.0.0/16,ICMPType#=3,ICMPCode#=16 Actions #Traffic-rate# 0 bps (bgp.1) ...RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam loc 0/0/cpu0 | FLOWSPEC0 11 320b FLP 3996 100 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#RP/0/RP0/CPU0#Peyto-SE#sh contr npu resources stats instance all loc 0/0/CPU0 | i ACL ACL RX, LPTS 301 915 | ACL RX, LPTS 301 915RP/0/RP0/CPU0#Peyto-SE#100 rules occupy 100 entries in the eTCAM and in the stats DB.So one for one.Packet sizeWe define on the controller a set of 100 rules with address source and destination, protocol TCP, destination port 123 and larger than 400 bytes#network 1 ipv4 flowspecnetwork 1 dest 2.2.2.0/24 source 3.3.0.0/16 protocol 6 dest-port 123network 1 packet-len >=400network 1 count 100 dest-incrOn the client side#RP/0/RP0/CPU0#Peyto-SE#sh flowspec ipv4AFI# IPv4 Flow #Dest#2.2.2.0/24,Source#3.3.0.0/16,Proto#=6,DPort#=123,Length#>=400 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#2.2.3.0/24,Source#3.3.0.0/16,Proto#=6,DPort#=123,Length#>=400 Actions #Traffic-rate# 0 bps (bgp.1) 
...RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam loc 0/0/cpu0 | i FLOWSPEC0 11 320b FLP 3096 1000 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#RP/0/RP0/CPU0#Peyto-SE#sh contr npu resources stats instance all loc 0/0/CPU0 | i ACL ACL RX, LPTS 300 915 | ACL RX, LPTS 300 915RP/0/RP0/CPU0#Peyto-SE#On the statistic side, one rule occupies one entry. But on the eTCAM, each rule will consume 10 entries.Let’s try to see if different packet sizes will show different memory occupation.**network 1 packet-len >=255**RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam loc 0/0/cpu0 | i FLOW0 11 320b FLP 3196 900 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#**network 1 packet-len >=256**RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam loc 0/0/cpu0 | i FLOW0 11 320b FLP 3296 800 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#**network 1 packet-len >=257**RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam loc 0/0/cpu0 | i FLOW0 11 320b FLP 2596 1500 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#**network 1 packet-len >=512**RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam loc 0/0/cpu0 | i FLOW0 11 320b FLP 3396 700 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#Clearly, (packet) size matters# <= X Bytes eTCAM Entries for one rule <= X Bytes eTCAM Entries for one rule 120 10 245 11 121 12 246 10 122 11 247 10 123 11 248 9 124 10 249 11 125 11 250 10 126 10 251 10 127 10 252 9 128 9 253 10 129 15 254 9 130 14 255 9 131 14 256 8 132 13 257 15 133 14 258 14 134 13 259 14 Based on these couples of examples, to optimize the memory utilization, it’s advised to use power of twos.FragmentedIn this example, we only use source and destination, and the indication the packets are fragmented.network 1 ipv4 flowspecnetwork 1 dest 2.2.2.0/24 source 3.3.0.0/16 protocol 17 fragment (isf)network 1 count 100 dest-incrOn the router#RP/0/RP0/CPU0#Peyto-SE#sh flowspec ipv4AFI# IPv4 Flow #Dest#2.2.2.0/24,Source#3.3.0.0/16,Proto#=17,Frag#~IsF Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#2.2.3.0/24,Source#3.3.0.0/16,Proto#=17,Frag#~IsF Actions #Traffic-rate# 0 bps (bgp.1) ...RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam loc 0/0/cpu0 | i FLOW0 11 320b FLP 3896 200 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#sRP/0/RP0/CPU0#Peyto-SE#sh contr npu resources stats instance all loc 0/0/CPU0 | i ACL ACL RX, LPTS 300 915 | ACL RX, LPTS 300 915RP/0/RP0/CPU0#Peyto-SE#So, a simple rule with source and destination address and fragment flag will use one stats entry and two eTCAM entries.TCP SYNnetwork 1 ipv4 flowspecnetwork 1 dest 2.2.2.0/24 source 3.3.0.0/16 protocol 6 tcp-flags *(syn)network 1 count 100 dest-incrRP/0/RP0/CPU0#Peyto-SE#sh flowspec ipv4AFI# IPv4 Flow #Dest#2.2.2.0/24,Source#3.3.0.0/16,Proto#=6,TCPFlags#=0x02 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#2.2.3.0/24,Source#3.3.0.0/16,Proto#=6,TCPFlags#=0x02 Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam loc 0/0/cpu0 | i FLOW0 11 320b FLP 3996 100 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#sh contr npu resources stats instance all loc 0/0/CPU0 | i ACL ACL RX, LPTS 302 915 | ACL RX, LPTS 302 915RP/0/RP0/CPU0#Peyto-SE#For TCP SYNs, one stats and one eTCAM entry per rule.Arbor auto-mitigationWhen Netscout / Arbor SP is used as a Flowspec controller, it can generate auto-mitigation rules such as#chargen, cldap, mdns, memcached, mssql, ripv1, rpcbind, ssdp, netbios, snmp, dns, l2tp, ntp and frags.First group# unique source-port chargen# dest 7.7.7.7/32 protocol 17 source-port 19 cldap# dest 7.7.7.7/32 protocol 17 source-port 
389 mdns# dest 7.7.7.7/32 protocol 17 source-port 5353 memcached# dest 7.7.7.7/32 protocol 17 source-port 11211 mssql# dest 7.7.7.7/32 protocol 17 source-port 1434 ripv1# dest 7.7.7.7/32 protocol 17 source-port 520 rpcbind# dest 7.7.7.7/32 protocol 17 source-port 111 ssdp# dest 7.7.7.7/32 protocol 17 source-port 1900On the controller#network 1 ipv4 flowspecnetwork 1 dest 7.7.7.7/32 protocol 17 source-port 19network 1 count 100 dest-incrOn the router/client#RP/0/RP0/CPU0#Peyto-SE#sh flowspec ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.8/32,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) ...RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam loc 0/0/CPU0 | i FLOWSPEC0 11 320b FLP 3996 100 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#RP/0/RP0/CPU0#Peyto-SE#sh contr npu resource stats instance all loc 0/0/CPU0 | i ACL ACL RX, LPTS 303 915 | ACL RX, LPTS 303 915RP/0/RP0/CPU0#Peyto-SE#–> For all these cases, it will consume one stats entry and one eTCAM per rule.Second group# dual source-port netbios# dest 7.7.7.7/32 protocol 17 source-port {137 138} snmp# dest 7.7.7.7/32 protocol 17 source-port {161 162}Controller config#network 1 ipv4 flowspecnetwork 1 dest 7.7.7.7/32 protocol 17 source-port {137 138}network 1 count 100 dest-incrOn the router/client#RP/0/RP0/CPU0#Peyto-SE#sh flowspec ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=137|=138 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.8/32,Proto#=17,SPort#=137|=138 Actions #Traffic-rate# 0 bps (bgp.1) ...RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam loc 0/0/CPU0 | i FLOWSPEC0 11 320b FLP 3896 200 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#sh contr npu resource stats instance all loc 0/0/CPU0 | i ACL ACL RX, LPTS 303 915 | ACL RX, LPTS 303 915RP/0/RP0/CPU0#Peyto-SE#–> these cases are consuming one stats entry and two eTCAM entries per rule.Third group# packet length dns# dest 7.7.7.7/32 protocol 17 source-port 53 packet-len {>=768}On the controller side#network 1 ipv4 flowspecnetwork 1 dest 7.7.7.7/32 protocol 17 source-port 53 packet-len {>=768}network 1 count 100 dest-incrOn the router/client#RP/0/RP0/CPU0#Peyto-SE#sh flowspec ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=53,Length#>=768 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.8/32,Proto#=17,SPort#=53,Length#>=768 Actions #Traffic-rate# 0 bps (bgp.1) ...RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam loc 0/0/CPU0 | i FLOWSPEC0 11 320b FLP 3396 700 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#sh contr npu resource stats instance all loc 0/0/CPU0 | i ACL ACL RX, LPTS 302 915 | ACL RX, LPTS 302 915RP/0/RP0/CPU0#Peyto-SE#–> with this range (larger than 768), it consumes one stats entry and 7 eTCAM entries per rule. l2tp# dest 7.7.7.7/32 protocol 17 source-port 1701 packet-len {>=500}We check if the “larger than 500” makes a significant difference#RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam loc 0/0/CPU0 | i FLOWSPEC0 11 320b FLP 3196 900 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#–> yes, each rule will consume 9 eTCAM entries here. Some optimization is possible but it will not change fundamentally the scale. 
ntp# dest 7.7.7.7/32 protocol 17 source-port 123 packet-len {>=1 and<=35 >=37 and<=45 >=47 and<=75 >=77 and<=219 >=221 and<=65535}RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam loc 0/0/CPU0 | i FLOWSPEC0 11 320b FLP 796 3300 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#sh contr npu resource stats instance all loc 0/0/CPU0 | i ACL ACL RX, LPTS 302 915 | ACL RX, LPTS 302 915RP/0/RP0/CPU0#Peyto-SE#–> each rule here will consume one stats entry and 33 eTCAM entries.Update# In latest version, NetScout modified the NTP auto-mitigation rule to use only the ranges 1-75,77-550 ntp# dest 7.7.7.7/32 protocol 17 source-port 123 packet-len {>=1 and<=75 >=77 and<=550}RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam loc 0/0/CPU0 | i FLOWSPEC0 11 320b FLP 3096 1000 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#sh contr npu resource stats instance all loc 0/0/CPU0 |$ ACL RX, LPTS 301 915 | ACL RX, LPTS 301 915RP/0/RP0/CPU0#Peyto-SE#With these two ranges, each rule will consume 10 entries in the eTCAM (and still one in the stats).Last group# frag udp-frag# dest 7.7.7.7/32 protocol 17 fragment (isf)RP/0/RP0/CPU0#Peyto-SE#sh flowspec ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Proto#=17,Frag#~IsF Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.8/32,Proto#=17,Frag#~IsF Actions #Traffic-rate# 0 bps (bgp.1) ...RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam loc 0/0/CPU0 | i FLOWSPEC0 11 320b FLP 3896 200 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#sh contr npu resource stats instance all loc 0/0/CPU0 | i ACL ACL RX, LPTS 302 915 | ACL RX, LPTS 302 915RP/0/RP0/CPU0#Peyto-SE#To summarize Auto-Mitigation eTCAM Entries chargen 1 cldap 1 mdns 1 memcached 1 mssql 1 ripv1 1 rpcbind 1 ssdp 1 netbios 2 snmp 2 dns 7 l2tp 9 ntp 33 UDP frag 2 Programming rateTo measure the number of rules we can program per second, we are using a very rudimentary method based on show command timestamps.After establishing the flowspec session, I will type “sh contr npu externaltcam location 0/0/CPU0” regularly and collect the number of entries in the bank ID 11, I will also note down the timing of the session, and convert it in milliseconds.RP/0/RP0/CPU0#Peyto-SE#sh contr npu externaltcam location 0/0/CPU0Sun Jul 14 23#35#44.252 UTCExternal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 80b FLP 6481603 6 0 IPv4 UC0 1 80b FLP 0 0 1 IPv4 RPF0 2 160b FLP 2389864 3 3 IPv6 UC0 3 160b FLP 0 0 4 IPv6 RPF0 4 320b FLP 4067 29 5 IPv6 MC0 5 80b FLP 4096 0 82 INGRESS_IPV4_SRC_IP_EXT0 6 80b FLP 4096 0 83 INGRESS_IPV4_DST_IP_EXT0 7 160b FLP 4096 0 84 INGRESS_IPV6_SRC_IP_EXT0 8 160b FLP 4096 0 85 INGRESS_IPV6_DST_IP_EXT0 9 80b FLP 4096 0 86 INGRESS_IP_SRC_PORT_EXT0 10 80b FLP 4096 0 87 INGRESS_IPV6_SRC_PORT_EXT0 11 320b FLP 4351 4865 126 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#Peyto-SE#I can extract the following chart and diagram# Timing (ms) eTCAM Entries 38610 549 39551 774 40320 950 41128 1150 41979 1352 42680 1532 43384 1700 44039 1850 44673 2003 45312 2159 45943 2320 46584 2474 47240 2640 47849 2785 48488 2944 49193 3100 49823 3200 50481 3360 51150 3525 51799 3676 52393 3806 52976 3950 53667 4097 The programming rate in this external TCAM bank is around 250 rules per second, at least in the boundaries of the supported scale (up to 3000).ReferencesYoutube video# Cisco NCS5500 Flowspec (Principles and Configuration) Part1https#//www.youtube.com/watch?v=dTgh0p9VynsYoutube video# 
BRKSPG 3012 - Leveraging BGP Flowspec to protect your infrastructurehttps#//www.youtube.com/watch?v=dbsNf8DcNRQYoutube video# Cisco NCS5500 Flowspec (Auto-Mitigation of a Memcached Attack) Part2https#//www.youtube.com/watch?v=iRPob7Ws2v8SupportForum# BGP Flowspec implementation on NCS5500 platformshttps#//community.cisco.com/t5/service-providers-blogs/bgp-flowspec-implementation-on-ncs5500-platforms/ba-p/3387443SupportForum# NCS5500 BGP flowspec packet matching criteriahttps#//community.cisco.com/t5/service-providers-blogs/bgp-flowspec-implementation-on-ncs5500-platforms/ba-p/3387443Conclusion/AcknowledgementsThis post aimed at clarifying some specific aspects of the NCS550 BGP Flowspec implementation. the space used by Flowspec rules is variable and dependent on the complexity ranges can use different memory sizes and it’s usually the best to use power of twos the officially supported scale is 3000 “simple” rules the NCS55A2-MOD-SE-S based on Jericho+ with OP eTCAM can program up to 250 rules per second exceeding the scale won’t have much consequencesWe will update it with new content and corrections in the future if required.As usual, use the comment section below for your questions.Thanks to Kirill Kasavchenko, Didier Urie and Ashok Kumar for their help and feedback.", "url": "/tutorials/bgp-flowspec-on-ncs5500/", "author": "Nicolas Fevrier", "tags": "iosxr, ncs5500, flowspec, bgpfs" } , "tutorials-introducing-400ge-on-ncs5500-series": { "title": "Introducing 400GE on NCS5500 Series", "content": " Introducing 400GE on NCS5500 Introduction Pre-requisites Hardware Software New Line Cards NPU# J2 NC57-24DD NC57-18DD-SE Fabric Speed-Up and Redundancy 400GE and QSFP-DD Mixing Line Card Generations and Features at FCS Conclusion You can find more content related to NCS5500 including routing memory management, VRF, URPF, Netflow, QoS, EVPN, Flowspec implementation following this link.IntroductionThey have been announced in June 2019 on this blog post# https#//blogs.cisco.com/sp/cisco-ncs-5500-welcomes-400ge-and-doubles-installed-base, it’s now time to present in detail the two new line cards with 400 Gigabit Ethernet (400GE) capabilities.And we will start with 8-min of videos to introduce the topic#YouTube# Introducing 400GE on NCS5500.We will cover the following in this blog post# Pre-requisites before inserting the new line cards Description of the NC57-24DD and NC57-18DD-SE Interoperability with former generation New fabric bandwidth, speed-up and redundancy Features at FCS 400GE and QSFP-DD (with breakout to 100G)We plan to add other sections to this article in the future. It will be a living document.Pre-requisitesBefore inserting the new products, we need to verify the system is running the minimal IOS XR release and is equipped with the appropriate “commons”.HardwareIt will be mandatory to upgrade the fabric cards and the fan trays in the chassis before any 400GE line card insertion.We differentiate the new fan trays and fabric cards by the number “2” at the end of the product IDs# NC55-5504-FC2 / NC55-5508-FC2 / NC55-5516-FC2 NC55-5504-FAN2 / NC55-5508-FAN2 / NC55-5516-FAN2A few notes on these parts# 8-slot and 16-slot are supported since version IOS XR 6.6.25 but you will need a newer release for the 400GE line cards. With 6.6.25, you can still insert them and benefit from the lower power consumption with cards based on Jericho and Jericho+ 4-slot support has been introduced in IOS XR 7.2.2 and 7.3.1 we don’t support in-service migration. 
You’ll have to shut down the chassis and replace them. we can’t mix different generations v2 Fabric Cards can only operate with other v2 Fabric Cards v2 Fan Trays can only operate with v2 Fan Trays we can’t mix v1 Fan Trays with v2 Fabric Cards, or vice versa. All must be v2. To avoid incident, we have it clearly labelled on the new parts.Regarding the other chassis parts# Route Processors, System Controllers and Power Supply Modules# RP and RP-E can both operate with the new line cards SCs exist in a single version today which operates with the new line cards number of power supply will depend on the numbers and types of cards, please refer to the Cisco Power Calculator for accurate calculation (today it only contains the v2 FC and FT, the new line cards will be added later on)# https#//cpc.cloudapps.cisco.com/cpc/DS.cpcSoftwareThe new fan trays and fabric cards are supported since IOS XR 6.6.25 but don’t be confused by it. We will need a minimum version of IOS XR 7.0.2 to support the new line cards NC57-24DD and NC57-18DD-SE.Regarding the features supported at inception, please refer to the feature section of this blog post.New Line CardsBefore describing the two new cards and their respective port density, let’s take a look at the ASIC powering them.NPU# J2These new line cards are powered by 2x Jericho2 NPUs from Broadcom.These ASICs are offering 4.8Tbps of interface bandwidth and 5.6Tbps to the fabric.Like its predecessors (used in NCS5500 Series), the J2 NPUs are made of two cores, an ingress pipeline and an egress pipeline. It’s capable of approx 2000 MPPS.The main differences will be# the use of faster SERDES with a different encoding scheme the use of HBM instead of DRAM the use of MDB instead of multiple internal memories the use of a second generation external TCAM (OP2)They are interconnected through Ramon ASICs located in the fabric cards.The SERDES between Ramon and J2 are around 53 Gbps each (between the v2 Fabric cards and the 400G line cards).Since they are backward compatible with former line card generations, they are still able to use 25Gbps SERDES between Ramon and Jericho/Jericho+ ASICs too.The Jericho2 uses a High Bandwidth Memory (HBM) instead of a GDDR5. We have double the amount of packet buffers (8GB) and double the speed to access it (1.8Tbps). One of the benefits of this HBM is it does not consume “links” to connect to the NPU, offering more to the interfaces and fabric.Keep in mind that this HBM is only used to store packets in case of congestion (micro-burst or longer-term link saturation). Most of the time, the packets will transit only through the on-chip buffer (OCB) which is now twice the size of former generation (32MB) and they will not be stored in the HBM.You can check the studies done on J+ on this topic in other xrdocs.io articles#https#//xrdocs.io/ncs5500/tutorials/ncs5500-qos-part-2-verifying-buffering/Also, very different from the former Jericho generations (J/J+), we will use a Modular DataBase (MDB) to store the router information (prefixes, MAC, MPLS, adjacencies, next-hop, …). As the name implies, this database can be carved at the boot up and allocate more or less space depending on the use-case. It permits to give more scale to L3 or to L2, etc.Finally, we will use a newer generation of external TCAM with these NPUs too.We will have two of these Jericho2s in each of the line cards we are introducing.NC57-24DDThis first line card will offer 24 slots QSFPDD-capable. 
It means they will also be able to support QSFP+ (40G) and QSFP28 (100G) optics.The front view of the card#Reminder# the chassis must have been upgraded to IOS XR 7.0.2 minimum and must operate v2 Fabric Cards and Fan Trays before inserting these line cards.The currently estimated power consumption is# 1050W typical and 1350W max.The line card is slightly longer than former generations as shown here#For NEBS compliancy, it will require new doors. They are currently in the roadmap.Internally, the card is made of two J2 NPUs, each servicing half of the ports for a total of 4.8Tbps.NC57-18DD-SEThe second line card is offering more physical ports, higher scale but less overall bandwidth.Despite the name 18DD, we have a total of 30 QSFP-DD ports on the front plate#Among these ports, some should be considered in pairs, some others should be considered individually.The ports from 18 to 23 (highlighted in yellow in the diagram below) are directly attached to Jericho#1 and, individually, they can be used with QSFP56-DD/400G optics.They will also host QSFP56/200G, QSFP28-DD/2x100G, QSFP28/100G or QSFP+/40G (depending on the software support).The other ports (highlighted in yellow in the diagram below) should be treated in pairs. That means what we can insert in port 1 is directly dependent of what has been inserted in port 0. Port N Port N+1 400G Must be emptied 200G 200G Hi-Speed 400G / 200G Can’t be used with Low-Speed 100G / 40G 200G Can’t be 2x 100G 100G 100G / 40G If you use only QSFP56-DD, only 18 ports can be populated, hence the line card name# 18DD-SE. Also, on the front plate, you can see in blue the ports that can be used with 400GE optics.We will add more details on these subtleties in the next months while we are getting closer to IOS XR 7.0.2 release date.The currently estimated power consumption is# 1000W typical and 1400W max.For reference, here is the block diagram on the 18DD-SE line card#Note# None of the NC57-24DD or the NC57-18DD-SE are MACsec capable but the timing features are possible and will be enabled in future software releases (of course, it will require the proper RP-Es).Fabric Speed-Up and RedundancyThe 8-slot fabric are made of 2 Ramon ASICs while the 16-slot fabric cards contains 3 of them.Each Jericho2 connects to each Fabric Module with a total of 18 SERDES at 53.125Gbps.They can be evenly distributed between one, two or three Fabric Engine (Ramon). 8-slot# 9 SERDES connected to 2 Ramon/FC 16-slot# 6 SERDES connected to 3 Ramon/FC 4-slot# 18 SERDES connected to 1 Ramon/FCFor the 8-slot chassis#For the 16-slot chassis#The 53.125Gbps per SERDES represents the raw bandwidth. 
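The next paragraph derives the bandwidth available per Jericho2 from these raw SERDES figures. As a companion, here is a small Python sketch, using only numbers quoted in this post (18 SERDES towards each fabric module, 53.125Gbps raw and ~45.8Gbps usable per SERDES, and 4.8Tbps of front-panel bandwidth per J2 on the NC57-24DD), that reproduces the same arithmetic with 6 and then 5 fabric modules#

```python
# All figures are quoted in this post: 18 SERDES from each Jericho2 towards
# each fabric module, 53.125Gbps raw per SERDES, ~45.8Gbps usable after cell
# tax, encoding and correction, and 4.8Tbps of front-panel bandwidth per J2
# on the NC57-24DD (each J2 services 12x 400GE).
SERDES_PER_FM = 18
RAW_GBPS = 53.125
USABLE_GBPS = 45.8
FRONT_PANEL_GBPS = 4800

for fabric_modules in (6, 5):
    raw = SERDES_PER_FM * RAW_GBPS * fabric_modules
    usable = SERDES_PER_FM * USABLE_GBPS * fabric_modules
    print(f"{fabric_modules} fabric modules: {int(raw)}G raw, {int(usable)}G usable, "
          f"{usable / FRONT_PANEL_GBPS:.1%} of the 4800G front panel")

# With 6 fabric modules the fabric offers more than the 4800G needed;
# with only 5, one NC57-24DD NPU is left with roughly 86% of line rate.
```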
After cell tax, encoding and correction, we can actually transport 45.8Gbps.To calculate the bandwidth available per Jericho2, we will use the following math# 6 Fabric Modules 18x 53.125Gbps x 6FM = 5737Gbps (raw) 18x 45.8Gbps x 6FM = 4946Gbps 5 Fabric Modules 18x 53.125Gbps x 5FM = 4781Gbps (raw) 18x 45.8Gbps x 5FM = 4122Gbps (85.8% of 4800Gbps) If we lose one fabric card, the NC57-24DD can not be line rate (around 86%) while the NC57-18DD-SE will still be.It’s important to state the obvious here, we are talking about lab corner-cases since the situation where 12 ports 400GE of the same card will be used at an average level exceeding 86%, at the exact moment we lose a fabric card is virtually impossible.400GE and QSFP-DDCisco invested in QSFP-DD for all the 400 Gigabit Ethernet plans.For more details on the technology, the best place is certainly this CiscoLive session# https#//www.youtube.com/watch?v=46D4zs_TlrM&list=PLWgSsi4SkfuaJaY6GeCE7wG52dKn-NOirI want to bring to the reader’s attention that breakout of 400GE to 4 ports 100GE will not interoperate with most currently deployed technologies (based on SR4, LR4 and CWDM4).It may sound obvious, but it is still worth mentioning that 400GE is based on 4 lambdas of 100G each encoded in PAM4 while the technologies above are using 4 lanes of 25G encoded in NRZ. That makes the interconnection impossible.Breakout of 400GE is still possible, but it will imply to use 100G “one-lambda PAM4” optics on the other side#In the other hand, QSFP28-DD will be perfect to breakout in two SR4/LR4/CWDM4. A perfect use-case for the NC57-18DD-SE ports operating in pairs, it will allow a smooth transition from existing 100G backbones to 400G.Mixing Line Card Generations and Features at FCSAt FCS, in IOS XR 7.0.2, we will support the “compatibility mode” where Jericho/Jericho+ line cards will co-exist with Jericho2-based line cards.In this specific mode, the features and scales are aligned to# features for peering and core (including multicast and L3VPN) same scale supported in XR 6.5.1The following list is still susceptible to change before the FCS date, but we will not support# ERSPAN Lawful Intercept mLDP/P2MP Edge Timing 400G Auto-Negotiation EVPN L2 / BVI or L2 OAM Features in 6.6.X onwardFeatures will be added to the support list release after release in the usual iterative process.Also, in the future, we will add the “J2 Native Mode” where all the line cards inserted in the chassis are only J2-based. It will “unleash” the full capability of the ASIC. In the compatibility mode the features and scale are often limited by the lower common denominator, being the Jericho/Jericho+ NPUs.ConclusionWith the introduction of these two new line cards, we are boosting the NCS5500 chassis from 3.6Tbps to 9.6Tbps per slot. That’s 153.6Tbps per router !!!More details will be added to this post or in new ones very soon.As usual, use the section below for comments and questions.", "url": "/tutorials/introducing-400ge-on-ncs5500-series/", "author": "Nicolas Fevrier", "tags": "ncs5500, ncs 5500, 400GE, 400G, really fast ethernet" } , "tutorials-ncs5500-lab-series": { "title": "Testing the NCS5500: The Lab Series", "content": " Testing the NCS5500# The Lab Series Introduction Video Lab Tests BGP Flowspec Hardware High Availability and Redundancy Netflow and IPFIX FIB Buffers Performance / Snake Conclusion# What’s next? 
You can find more content related to NCS5500 including routing memory management, VRF, URPF, Netflow, QoS, EVPN, Flowspec implementation following this link.IntroductionWe are trying something new today, and we hope it will really help speeding up the validation process for our customers and partners.In these videos and blog posts, we will present tests performed in our labs in different situations. The purpose is to# present tests requiring extremely large or complex setup. Cisco account teams and customer will not need to rebuild them during Proof of Concept (CPOC) provide recommendations on test methodology comment the results and provide more internal details to explain the behavior experienced during the testsWe open the books.VideoLab TestsHere are the tests already performed and documented, and the ones we will present in the near(ish) future#BGP FlowspecAuto-Mitigation of a Memcached AttackVideo# interoperability demo between NetScout (Arbor) DDoS mitigation system and NCS5500 (using Jericho+ and eTCAM, -SE system)# attack detection via Netflow / IPFIX BGP flowspec rules injection and attack mitigationhttps#//www.youtube.com/watch?v=iRPob7Ws2v8Flowspec scale and resourcesTest of various aspects around NCS5500 Flowspec implementation# scale validation# injection of 3000 simple rules out of resource (oor) validation# check the behavior when injecting, 4000, 6000 and 9000 rules programming rate# validation of the speed to write the rules in the eTCAM max-prefix# validation of the behavior when exceeding authorized number of advertisements per session memory consumption# verification of memory space used by different rules. Tests with the auto-mitigations generated by the NetScout/Arbor flowspec controllerhttps#//xrdocs.io/ncs5500/tutorials/bgp-flowspec-on-ncs5500/Hardware High Availability and RedundancyFabric Redundancyhttps#//xrdocs.io/ncs5500/tutorials/ncs5500-fabric-redundancy-tests/Test performed with a snake topology on a single line card. The purpose is to identify the impact on performance and bandwidth, per NPU, when removing a fabric cards. It will also point the limit of using a snake for performance testing.Route Processor, System Controller and Power Modules RedundancyNo article posted, just a video# https#//www.youtube.com/watch?v=Y_RoK2PsC1k.Demonstration of the impact (or lack of impact for instance) when ejecting Active or Standby RP and System Controller of an NCS5508. 
Then, the test is focusing on the power supply modules.Fabric and Fan Trays upgradeNot exactly a test, but the demo of a migration from v1 to v2 fabric cards and fan trays with the software upgrade to prepare this operation.https#//xrdocs.io/ncs5500/tutorials/ncs-5500-fabric-migration/Netflow and IPFIXPushing Netflow to 11Validation of the netflow implementation on Jericho+ (with eTCAM) line card# 36x100G-SE# impact of the packet size impact of the port load / bandwidth impact of sampling interval impact of the number of flows impact of the active / inactive timers scale test on a full loaded chassis stress test with process crash, config/rollback, …https#//xrdocs.io/ncs5500/tutorials/netflow-ncs5500-test-results/FIBFIB Programming RateDemonstrates how fast routing information is programmed in control plane and data plane.First, the BGP table at the Route Processor level.Second, the FIB table programmed in the External TCAM of a Jericho+ based system.https#//xrdocs.io/ncs5500/tutorials/ncs5500-fib-programming-speed/FIB Writing Rate on Jericho+ w/ eTCAMNo article posted, just a video# https#//www.youtube.com/watch?v=3tIVveCOZHs.FIB Writing Rate on Jericho+ w/o eTCAM but Large LEM / NCS55A1-24HFollow up of the previous test but this time, in the LPM / LEM of an NCS55A1-24H#https#//www.youtube.com/watch?v=nT31rHqFm-o.FIB Scale on Jericho+ w/ eTCAMValidation of the capability to store a full routing table first then to push the cursor to 4M IPv4 entries in the hardware (for instance, the eTCAM of a Jericho+ system).Verification of the impact of enabling URPF on the interfaces and activating 3000 BGP Flowspec rules with the 4M IPv4 routes.https#//xrdocs.io/ncs5500/tutorials/ncs5500-fib-scale-test/https#//www.youtube.com/watch?v=oglYEDpKsLYBuffersBuffer and Burst TestTwo parts in this demo# Data collected from hundreds of production routers to identify the amount of traffic handled in On-Chip Buffer compared to traffic evicted to DRAM. Also the number of packets dropped because of DRAM bandwidth exhaustion (spoiler alert# zero) Burst test in a large lab setup with 27x 100G tester interfaces. We run 80% of background traffic and we generate 20% or more of bursty traffic.https#//xrdocs.io/ncs5500/tutorials/ncs5500-qos-part-2-verifying-buffering/First part in the video# https#//www.youtube.com/watch?v=1qXD70_cLK8Second part, lab demo with Sai Venka# https#//youtu.be/1qXD70_cLK8?t=291Performance / SnakeIPv4 and IPv6 VRF Snakehttps#//xrdocs.io/ncs5500/tutorials/ncs5500-performance-and-load-balancing/Pratyusha Aluri set up a very large testbed with two NCS5508 back to back, interconnected through 288x 100GE interfaces. Tests performed on 36x100G-SE line cards (Jericho+ with eTCAM) snake IPv4 with 129B, 130B and IMIX traffic distribution snake IPv6 with same packet sizes performed with and without configuration on interfaces (ACL ingress+egress and QoS# classification and remarking)ECMP and Link Aggregationhttps#//xrdocs.io/ncs5500/tutorials/ncs5500-performance-and-load-balancing/https#//youtu.be/s6qSt6C2D5U?t=598Same testbed then above, reconfigured to validate ECMP and LAG load balancing with multiple bundles of 64x 100GE interfaces each.Non-Drop Ratehttps#//xrdocs.io/ncs5500/tutorials/testing-ndr-on-ncs5500/Test performed on Jericho+ systems. 
Identification and explanation of the different packet size performances.NDR for NC57-18DD-SE line cardsHari and Sai provide an overview of the tests they performed for their customer's CPOC.https#//www.youtube.com/watch?v=10XBBe_uYKc.Conclusion# What's next?We plan to add more and more test demos to this page, so the first call to action is to come back regularly to stay informed.Also, you can use the comments section in the video or this blog post to tell us what would be of interest to you specifically.Let's be clear ;) We don't guarantee we will do it, but as much as possible we will take your feedback into consideration for the next ones.", "url": "/tutorials/ncs5500-lab-series/", "author": "Nicolas Fevrier", "tags": "ncs5500, lab, test, test cases, validation, ios xr" } , "tutorials-ncs5500-performance-and-load-balancing": { "title": "NCS5500 Performance and Load-Balancing at Scale [Lab Series 01]", "content": " NCS5500 Performance and Load-Balancing Introduction Tests Disclaimers Performance testing Link Aggregation and ECMP You can find more content related to NCS5500 including routing memory management, VRF, URPF, Netflow, QoS, EVPN, Flowspec implementation following this link.IntroductionThis test is the first episode coming with a video on our new Lab Series.You can find a detailed explanation of the purpose, and a link to all other tests, in this xrdocs.io post# https#//xrdocs.io/ncs5500/tutorials/ncs5500-lab-series/We see regular requests from customers to demonstrate the scale we claim, particularly regarding the port density.The NCS5500 chassis exists in 4-slot, 8-slot and 16-slot versions, and it's fairly complicated to create a setup large enough in a lab when we talk about 36x 100GE interfaces per slot.Due to the orthogonal architecture of the chassis, it's not really necessary to have a fully wired chassis, at least to demonstrate that performance aspect, which is directly related to the NPU capabilities. Having the ports of two NPUs wired should be enough. But we understand the customers' concern when investing in such large systems, that's why we had to create testbeds specifically to clarify these doubts.These topologies permit testing fabric load, ASIC limits, and power consumption. Considering the cost of traffic generator ports, the snake architecture is a good approach# it re-injects the traffic hop-by-hop and loads the chassis with minimal (but still significant) investment.For this article, we built a test bed where two NCS5508s equipped with 36x100GE-SE line cards (the ones with Jericho+ NPUs and external TCAM) are fully wired back-to-back. That means we have twice 288 interfaces at 100GE and we will push bi-directional line rate traffic through it.This large test bed will give us the opportunity to verify# line rate traffic for IPv4 and IPv6 what is the minimum packet size how it behaves with IMIX packet distribution what is the impact of features like ACLs and QoS when they are applied on ALL the ports longevity tests power and CPU usageBut also, the setup is perfect to demonstrate link bundling and load balancing at scale# bundles of 64x 100GE interfaces load balancing inside each bundle load balancing between multiple large bundlesPratyusha Aluri, a software engineer in Cisco's Service Provider business unit, built and configured this setup.
She will run all these tests as recorded in the video#TestsDisclaimersIt's important to understand the limits when using snake topologies# the NDR performance reflects the most loaded core in the NPU (particularly when we have an odd number of ports per NPU and therefore an uneven allocation. Ex# Jericho+, where 5 ports are assigned to one core and 4 ports are allocated to the other core) the latency measured can't be trusted configuration tricks are required to overcome the natural limitation of max 255 hops in IP routing once the NDR is identified, tests on performance below that level cannot be trusted to identify the drop rates (a drop on the first link will be cascaded on the following ports, artificially amplifying the overall drop rate)Definition# NDR stands for Non-Drop Rate. It represents the minimum packet size the router can forward on all ports, both directions, 100% line rate, without any drops.Performance testingThe video is 13 minutes long; you can directly reach the different sections with these shortcuts# Lab topology description# https#//youtu.be/s6qSt6C2D5U?t=782 NDR tests at 129B and 130B IPv4 https#//youtu.be/s6qSt6C2D5U?t=164 Longevity testing / Power and CPU# https#//youtu.be/s6qSt6C2D5U?t=315 Performance test with IPv4 IMIX distribution# https#//youtu.be/s6qSt6C2D5U?t=345 NDR tests at 129B and 130B IPv6# https#//youtu.be/s6qSt6C2D5U?t=375 NDR tests with IPv4 130B packets with ACL and QoS applied# https#//youtu.be/s6qSt6C2D5U?t=407For these tests we are using a Layer 3 snake, which means we use basic static routing and VRF-lite (only locally significant VRFs). Since all the ports are directly connected to their counterpart on the facing NCS5500, the configuration is easy to understand. Only ports 0/0/0/0 are used to connect to the traffic generator.The configuration is made such that traffic received on a port is not locally routed or switched but will always "travel" through the fabric (in the form of cells).We have 288 ports, twice, so it's much more than the max TTL count, even if we set up the traffic generator to mark the packets with TTL=255.
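Building such a snake by hand would mean hundreds of nearly identical VRF and static-route stanzas, so this kind of configuration is typically generated by a script. Purely as an illustration (the interface numbering, VRF names and addressing below are hypothetical, not the configuration used in this lab), here is a minimal Python sketch of a VRF-lite snake generator, together with the hop count that shows why a TTL of 255 is not enough#

```python
# Hypothetical illustration only: the VRF names, interface numbering and
# addressing below are invented for the example, not the exact configuration
# used in this lab. It simply shows how the repetitive VRF-lite snake can be
# generated by a script, and why a TTL of 255 is a problem for 2x 288 hops.
PORTS_PER_CHASSIS = 288   # 8 slots x 36x 100GE, wired back-to-back

def snake_hop(i: int) -> str:
    """One locally-significant VRF per hop, so traffic received on a port is
    never short-circuited locally and always goes back through the fabric."""
    slot, port = divmod(i, 36)
    net = f"10.{i // 250}.{i % 250}"
    return "\n".join([
        f"vrf SNAKE{i}",
        f"interface HundredGigE0/{slot}/0/{port}",
        f" vrf SNAKE{i}",
        f" ipv4 address {net}.1 255.255.255.252",
        f"router static vrf SNAKE{i}",
        " address-family ipv4 unicast",
        f"  0.0.0.0/0 {net}.2",
    ])

config = "\n".join(snake_hop(i) for i in range(PORTS_PER_CHASSIS))
print(f"{len(config.splitlines())} configuration lines generated for one chassis")

# Every routed hop decrements the TTL by one, and the snake crosses the two
# chassis roughly 2x 288 times: far more than the maximum TTL of 255.
print(f"{2 * PORTS_PER_CHASSIS} routed hops in the snake, max TTL is 255")
```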
We need to use the following trick#hw-module profile tcam format access-list ipv4 src-addr dst-addr src-port dst-port proto frag-bit enable-set-ttl ttl-matchIn the tests above, we will be able to demonstrate# NDR for Jericho+ systems snake topology is 130 bytes per packet IPv4 and IPv6 performance are identical Features applied on interface are not impacting the PPS performanceAlso we performed a longevity test to verify we don’t lose any packet on a long period (9h+)#And during this test we also measured power consumption and CPU usage#CLI used during the test#monitor interface *show controller fia diagshell 0 ~diag counters g~ location 0/0/CPU0(admin) show controller fabric plane all statisticsshow processes cpushow interfaces hu0/0/0/0 accountingLink Aggregation and ECMPThe second part of the testing starts at# LAG and ECMP tests# https#//youtu.be/s6qSt6C2D5U?t=599We define 4 bundles with 64x 100GE interfaces each and a fifth one made of the remaining 31x 100GE ports.In this test we are able to measure that ECMP is properly load balancing the traffic between the different bundles but also that traffic is evenly spread inside the bundles themselves.CLI used during the test#show bundle brmonitor interface bundle-ether *show interface be1monitor interface hu 0/1/0/*show cef ipv6 131##", "url": "/tutorials/ncs5500-performance-and-load-balancing/", "author": "Nicolas Fevrier", "tags": "ncs5500, lab, testing, ndr, performance, pps, line rate, load balacing, ecmp, link aggregation" } , "tutorials-testing-ndr-on-ncs5500": { "title": "Testing NDR on NCS5500 36x 100GE Line Cards [Lab Series 02]", "content": " Testing NDR on 36x100G-SE Introduction Video What is NDR? Test results What are we measuring actually? Under and above 130B/pkt Between 230B/pkt and 278B/pkt Do we have always drops with packets in this 230B-278B range? Performance per 100G ports Conclusion You can find more content related to NCS5500 including routing memory management, VRF, URPF, Netflow, QoS, EVPN, Flowspec implementation following this link.IntroductionThis test is the second episode of our new Lab Series. You can find a detailed explanation on the purpose and also a link to all other tests in this xrdocs.io post# https#//xrdocs.io/ncs5500/tutorials/ncs5500-lab-series/Last week, we ran multiple tests in very heavily wired systems. Among the topics covered, we measured the NDR with a very long snake.The concept of Non Drop Rate deserves dedicated explanations.In this article and video, we will explain what it represents. We will demonstrate why the snake topology is not the best to reach the full capability of the ASIC and what happens when you push the system to its limit.VideoWhat is NDR?The concept of Non Drop Rate, or NDR, is often used in system validation to qualify the NPU capabilities. It represents the minimum possible packet size you can transmit# on all ports of the NPU with 100% line rate on each port bi-directionallyOften times, it’s something that can be derived with simple math starting from the number of packets per second the forwarding ASIC can support. But some other factors may come into play.We will see in these tests that a good understanding of the internal architecture may be necessary to interpret correctly the numbers measured in lab.As useful as it is to compare different devices, it’s also very important to understand the limit of this number. 
NDR, like many other topics covered in these videos, is mostly a "lab thing".In the example we are using today, the NDR for the Jericho+ used in the 36x100G-SE line card is 130 bytes per packet. For Jericho2, it will be around 230 or 280 bytes per packet depending on the line card. But it's virtually impossible to get 9 ports of 100G (assigned to the same NPU) or, worse, 12 ports of 400G running sustained line-rate bidirectional traffic simultaneously.It's possible to imagine a couple of ports being saturated following a network outage, with a lot of traffic redirected via a specific path. But having all 9 ports in such a situation would reflect a very poorly designed network, or… a lab situation.What about a DDoS attack trying to saturate the Jericho+ NPU?The question would have made sense 5 or 10 years ago, but it's not something that can be expected in current production networks based on such ASICs.We are talking about 900Gbps and 835MPPS of forwarding capacity here.DDoS attacks larger than 900Gbps have been seen in the past, but the very nature of a DDoS is to be distributed (the first D in DDoS)# the attack packets come from everywhere on the internet and it is virtually impossible to concentrate them on 9 specific ports of one NPU. Attack packets will land naturally on many sites, many routers, many NPUs.With the same logic, even if we can imagine attacks exceeding 835MPPS (with SYN floods, it has been seen in the past), coordinating such an attack on one specific NPU is extremely complex. And attackers with such compute power at their disposal will leverage other tools to attack their targets, in a much more efficient way.So, a DDoS attack leveraging this "bottleneck" is extremely unlikely (in this world, never say never, but today it should not be a matter of concern).Can we use a snake test for NDR measurement?Yes and no.Let's say it's not the best tool for such a test, but considering the number of ports on the router, the tester interfaces necessary for a fully wired topology are extremely expensive and difficult to find.So the logical approach to load a line card or a chassis is to wire ports back-to-back and to configure VRFs, bridging or MPLS static swapping. That way, we only need two ports from the traffic generator (actually one can be enough).But this methodology comes with a lot of limitations# TTL remarking may be needed for large setups latency can no longer be measured if the number of ports is not consistent per NPU or per core, it will affect the results a drop on a link or an NPU will be reflected on all the remaining ports# you can measure NDR with such a topology but you cannot trust any measurement below that level, due to multiple cascading effects.Do I need a fully loaded chassis or even a fully loaded line card?Since we are testing the NPU capability, we just need to make sure the traffic is not locally routed/switched. That means we need to use at minimum two different NPUs, all wired. So, with the line card we are using today, 9x2=18 ports would have been enough to run this test.Having more ports is impressive (and I admit, fun), but it doesn't bring anything beyond, maybe, the power consumption measurement.Test resultsIn this test we were able to measure the NDR at 130 bytes per packet. But we also identified drops above this limit in some particular ranges.What are we measuring actually?The first mistake would be to think we can measure the ASIC performance by simply dividing it by the number of ports# 835MPPS / 9x 100GE gives us 92.77MPPS per port. That's not how it works internally.
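The next paragraphs break this down per core. Purely as a companion, here is a small Python sketch using the figures quoted in this post (835MPPS per Jericho+, 417.5MPPS per core, 5 ports on core 0 and 4 on core 1) that reproduces the naive per-port number, the per-core numbers and the 130-byte NDR derivation#

```python
import math

# Figures quoted in this post: 835 MPPS per Jericho+ NPU, i.e. 417.5 MPPS per
# core, with 5x 100GE on core 0 and 4x 100GE on core 1 for the 36x100G-SE.
NPU_MPPS = 835.0
CORE_MPPS = NPU_MPPS / 2
PORTS = {"core0": 5, "core1": 4}

naive = NPU_MPPS / sum(PORTS.values())       # the "wrong" per-port division
worst = CORE_MPPS / PORTS["core0"]           # per-port rate on the loaded core
best = CORE_MPPS / PORTS["core1"]            # per-port rate on the other core
snake_max = worst * sum(PORTS.values())      # best case visible in a snake

# NDR on the most loaded core: 5x 100GE = 500 Gbps must fit in 417.5 MPPS,
# with 20 bytes of per-packet overhead (the "header" mentioned below).
wire_size = math.ceil(500e9 / (CORE_MPPS * 1e6) / 8)   # 150 bytes on the wire
ndr = wire_size - 20                                    # 130 bytes per packet

print(f"naive per-port rate          : {naive:.2f} MPPS")
print(f"core 0 / core 1 per-port rate: {worst} / {best} MPPS")
print(f"maximum visible in a snake   : {snake_max} MPPS per NPU")
print(f"NDR                          : {ndr} bytes per packet")
```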
In the case of the Jericho+ ASIC, the port allocation is unbalanced, simply because we have an odd number of ports# 5x 100GE interfaces on core 0 4x 100GE interfaces on core 1You can verify the port allocation per NPU and core with#show controller npu voq-usage interface all instance all location 0/x/CPU0So what we actually measure with a snake topology is the performance of the most loaded core# 5x100G on core 0.Instead of the 92.77MPPS calculated above, the reality of the test matches this number# 83.5MPPS per 100GE port.Since the packets handled by core 0 also flow through core 1, we take the lowest denominator.Consequence# this kind of snake test can only show a maximum of 751.5MPPS per NPU.Note that if you test only the 4 other ports, allocated to core 1, you get 104.4MPPS per 100GE.Under and above 130B/pktWe have seen above that each core can handle 417.5MPPS.In the worst case (core 0), we have 5 ports of 100GE# 500Gbps in total.To calculate the smallest packet size we can push at line rate, we run this simple math#NDR + Header (20 bytes) = ROUNDUP ( 500,000,000,000 / 417,500,000 / 8 ) = 150 bytesThat gives us an NDR of 130 bytes per packet.Below that size, we exceed the number of PPS the NPU core can handle. At 130 bytes, we finally fit within the PPS budget and we can push packets at line rate on all interfaces.Between 230B/pkt and 278B/pktCustomers frequently ask to test specific packet sizes during the validation / CPOC. They want to see 64 bytes, 128 bytes, 256 bytes, …Not reaching line rate with the first two is expected, as explained above. But it's surprising that we still see packet drops at 256 bytes.It's because we have a specific behavior in the range of 230 bytes to 278 bytes.Internally, packets are split into cells before being sent to the egress pipeline (whether locally routed/switched or transmitted via the fabric). These cells can be of variable length, from 64B to 256B.Also, when a packet is transmitted internally, a couple of headers are appended to it, and 230 bytes is the point where you "jump" from one cell to two cells.At that point, we hit a different bottleneck# the fabric capacity. As the packet size keeps growing, the number of cells per second sent to the Fabric Engines decreases again, and the symptoms disappear after 278 bytes per packet.In the video, we executed the test with different packet sizes to illustrate that point.If, during the test, you maintain the line rate traffic in both drop cases described above, you will see the percentage of drops moving from 8% to 20%. It can be explained by a cascading effect illustrated by the two diagrams below#A token to transmit the packet is granted by the egress scheduler; the fabric is saturated and it issues a backpressure message to the ingress scheduler.Since more and more packets are received for the same VOQ, the queue is evicted and the packets are stored in the GDDR5 DRAM. All packets are now going through the DRAM, which eventually saturates the link to this memory (900Gbps unidirectional that becomes 450Gbps read / 450Gbps write). We have now triggered a third type of bottleneck. It's possible to monitor all these drops with the following CLI.show controller npu diag counters graphical instance 0 location 0/x/CPU0In the ENQ_DISCARDED_PACKET_COUNTER, we will get details on the reasons for the drops.First it will show VOQ_FADT_STATUS. FADT stands for Fair Adaptive Tail Drop. It's an internal mechanism optimizing the buffer management in case of congestion in the NPU core.
The purpose being to reduce the drop threshold for congested queue if the level of congestion increases, in order to leave enough buffer for the non-congested queues.In second step, if we maintain the congestion, we will see other kinds of ENQ_DISCARDED like IDR_DRAM_REJECT_STATUS (when we saturate the bandwidth to the DRAM).Do we have always drops with packets in this 230B-278B range?It’s a legitimate question we got from customers.In the video, we demonstrate that reducing the bandwidth to 90% line rate makes the drop symptoms disappear.Performance per 100G portsAnother frequent question we have during validation and CPOCs is the following# “is it a per port limitation or per NPU limitation”?All these performance tests and NDR measurements are only reflecting the NPU capability (or in the case of snake topology, at least the core capability).If the NPU is not reaching its PPS limit you can push packets, through the fabric or locally routed, at 64 bytes per packet between two ports.That’s what we demonstrate in the video in the last part with ports 0 and 35 configured in the same VRFs and 64B packets transmitted line rate between the two ports.And just for the sake of demo, we prove it with 256B packets too (in the 230B-278B range).ConclusionWe hope this video and few explanations have been useful and will guide you if you need to run these kind of tests yourself. The snake topology is good to reduce the amount of traffic generator ports but it comes with some limitations, so it’s important to understand the internal mechanisms at play to explain all the results.Finally, it’s something we repeated several times in the video# these tests should be taken for what they are, lab demo. And it’s dangerous to compare the results with production since the nature of this last one is very different.", "url": "/tutorials/testing-ndr-on-ncs5500/", "author": "Nicolas Fevrier", "tags": "ncs5500, lab series, testing, ndr" } , "tutorials-ncs5500-fabric-redundancy-tests": { "title": "NCS5500 Fabric Redundancy Tests [Lab Series 03] ", "content": " NCS5500 Fabric Redundancy Tests Introduction Video Architecture Theory 4-slot and 8-slot Chassis with FE3600 fabric and Jericho Line Cards 4-slot and 8-slot Chassis with FE3600 fabric and Jericho+ Line Cards 16-slot Chassis with FE3600 fabric and Jericho Line Cards 16-slot Chassis with FE3600 fabric and Jericho+ Line Cards 8-slot Chassis with Ramon/FE9600 fabric and Jericho Line Cards 8-slot Chassis with Ramon/FE9600 fabric and Jericho+ Line Cards 8-slot Chassis with Ramon/FE9600 fabric and Jericho2 Line Cards 16-slot Chassis with Ramon/FE9600 fabric and Jericho Line Cards 16-slot Chassis with Ramon/FE9600 fabric and Jericho+ Line Cards 16-slot Chassis with Ramon/FE9600 fabric and Jericho2 Line Cards Test results Next one ? You can find more content related to NCS5500 including routing memory management, VRF, URPF, Netflow, QoS, EVPN, Flowspec implementation following this link.IntroductionThird episode of the lab series, today, we will talk about NCS5500 fabric. 
More specifically, we will try to qualify the impact of losing a fabric card in a chassis.The goal of these blog posts is to describe tests performed in lab, detail the methodology and the results, and finally provide additional information on the internals of the NCS5500 platforms.All the former tests are listed here# https#//xrdocs.io/ncs5500/tutorials/ncs5500-lab-series/VideoIn the video below, we are showing a test with a 36x100G-SE line card used with 6 fabric cards then 5.ArchitectureAs you certainly know, the NCS5500 exists in fixed systems and modular chassis. For this second category, we have 4-slot, 8-slot and 16-slot versions.If they can all mix and match different type of line cards, each chassis type is using specific fabric cards. They have different size, which is expected considering the orthogonal design where all line cards are directly plugged into the fabric cards.Also, we have now two generations of fabric cards.The v1 supports line cards equipped with Jericho and Jericho+ NPUs. The v2 are supporting the same line cards but also the new ones powered by Jericho2 ASICs.Depending on the chassis size, each fabric card will be made of one or multiple Fabric Engine#In the first generation fabric cards, we have one or several FE3600 ASICs#In the second generation, we will use a new fabric engine named “Ramon” (or FE9600)In summary, per Fabric Card# Fabric Engines v1 v2 NCS5504 1 1 NCS5508 2 2 NCS5516 6 3 Regardless of the generation, each router is using 6 fabric cards. They can operate with less than 6 but it could be at the expense of the bandwidth of some line cards. That’s what we will detail in this blog post.TheoryNow, let’s present the math used to identify the bandwidth available when losing one fabric card and how it affects each type of line cards in our portfolio.Each Jericho NPU connects to each Fabric Module with 6 SERDES at 25Gbps raw# after cell tax, encoding and correction, we can actually use 20.8Gbps. Each Jericho+ NPU connects to each Fabric Module with 8 SERDES links, with the same useable bandwidth.4-slot and 8-slot Chassis with FE3600 fabric and Jericho Line Cards4-slot# From a Jericho NPU’s perspective, all 6 links are connected to a single FE3600 per fabric card#8-slot# Similarly, the 6 links from each Jericho NPU are split in 3+3 to the two FE3600s#In nominal state, with 6 Fabric Modules# 6x 25Gbps x 6FM = 900Gbps (raw) 6x 20.8Gbps x 6FM = 748GbpsWith only 5 Fabric Modules# 6x 25Gbps x 5FM = 750Gbps (raw) 6x 20.8Gbps x 5FM = 624GbpsNow, if we consider the bandwidth required per NPU for the different Jericho line cards, we can see how much the fabric will be able to accomodate in case of fabric loss# Line Card Ports per Jericho NPU (Gbps) <624G? 36x100G 600 Yes 36x100G-S 600 Yes 24x100G-SE 600 Yes 24H12F 720 No 18H12F 840 (but ASIC allows 720) No 6x200G-COH 600 Yes 4-slot and 8-slot Chassis with FE3600 fabric and Jericho+ Line Cards4-slot# from the Jericho+ NPU’s perspective, all 8 links are connected to a single FE3600 per fabric card#8-slot# the 8 links from each Jericho+ NPU are split in 4+4 to the two FE3600s#In nominal state, with 6 Fabric Modules# 8x 25Gbps x 6FM = 1200Gbps (raw) 8x 20.8Gbps x 6FM = 998GbpsWith only 5 Fabric Modules# 8x 25Gbps x 5FM = 900Gbps (raw) 8x 20.8Gbps x 5FM = 832GbpsHere again, let’s check the bandwidth necessary for line rate with Jericho+ line cards# Line Card Ports per J+ NPU (Gbps) <832G? 
36x100G-SE 900 No MOD-A 1,000 (but ASIC allows 900) No MOD-A-SE 1,000 (but ASIC allows 900) No Now mixing Jericho and Jericho+ in the same chassis#     6 Fabrics 5 Fabrics J@600G J@600G 100% 600G 100% 600G J@720G J@720G 100% 720G 87% 624G J J+ 100% 600G 100% 600G J+ J+ 100% 900G 92% 828G 16-slot Chassis with FE3600 fabric and Jericho Line CardsEach Jericho ASIC has 6 links at 25Gbps, that are equally distributed to the 6 FE3600 of each fabric card#In nominal state, with 6 Fabric Modules# 6x 25Gbps x 6FM = 900Gbps (raw) 6x 20.8Gbps x 6FM = 748GbpsWith only 5 Fabric Modules# 6x 25Gbps x 5FM = 750Gbps (raw) 6x 20.8Gbps x 5FM = 624GbpsSame logic and results than 4-slot and 8-slot chassis.How much the fabric will be able to accomodate in case of fabric loss? Line Card Ports per Jericho NPU (Gbps) <624G? 36x100G 600 Yes 36x100G-S 600 Yes 24x100G-SE 600 Yes 24H12F 720 No 18H12F 840 (but ASIC allows 720) No 6x200G-COH 600 Yes 16-slot Chassis with FE3600 fabric and Jericho+ Line CardsThings are getting a bit more complex when we use Jericho+ NPUs in 16-slot chassis. Indeed, each NPU has 8 SERDES (links at 25Gbps raw bw) and they need to connect to fabric cards made of 6 FE3600, we need to address an unequal distribution#We will have some two FE3600 connected with 2 SERDES links while the rest will have only one connection.So as long as we have a flow transiting from NPU X of line A to NPU X of line B, we don’t have issues (example# Jericho+ instance 0 of LC0 pushing traffic to J+ instance 0 in LC1)#The situation is less ideal when the communication is from NPU X of line A to NPU Y of line B, we have paths where two links can only be used at the bandwidth of one#In this example above (Jericho+ 0 on LC0 pushing traffic to NPU 1 on LC1), we can’t get line rate traffic because we have only 40 SERDES links.Let’s take this example for the math below#In nominal state, with 6 Fabric Modules# 40x 25Gbps = 1000Gbps (raw) 40x 20.8Gbps = 832GbpsThat’s indeed below the 900Gbps of bandwidth capability of a Jericho+ ASIC, in nominal state (note it’s a worst case situation, remember that the traffic targeted to the same ASICs are not transiting through the fabric and that aligned NPUs (NPU X to NPU X in a different line card) have full bandwidth available).With only 5 Fabric Modules# 657GbpsThe bandwidth necessary for line rate with Jericho+ line cards# Line Card Ports per J+ NPU (Gbps) <657G or 832G? 36x100G-SE 900 No MOD-A 1,000 (but ASIC allows 900) No MOD-A-SE 1,000 (but ASIC allows 900) No Note# this bandwidth problem is addressed with the second generation fabric cards on the 16-slot chassis.Finally, mixing Jericho and Jericho+ in the same chassis#     6 Fabrics 5 Fabrics J J 100% 600G 100% 600G J J+ 100% 600G 100% 600G J+ J 82% 738G 69% 621G J+ J+ 92% 828G 73% 657G 8-slot Chassis with Ramon/FE9600 fabric and Jericho Line CardsAt the moment of this blog publication, the 4-slot chassis doesn’t support the next generation fabric cards, it’s still in the roadmap. But all the principles and results shared below can also be used for the 4-slot.All 6 links from the Jericho ASICs are split in 3+3 on the two Ramon Fabric Engines in each fabric card.In nominal state, with 6 Fabric Modules# 6x 25Gbps x 6FM = 900Gbps (raw) 6x 20.8Gbps x 6FM = 748GbpsWith only 5 Fabric Modules# 6x 25Gbps x 5FM = 750Gbps (raw) 6x 20.8Gbps x 5FM = 624GbpsHow much the fabric will be able to accomodate in case of fabric loss? Line Card Ports per Jericho NPU (Gbps) <624G? 
36x100G 600 Yes 36x100G-S 600 Yes 24x100G-SE 600 Yes 24H12F 720 No 18H12F 840 (but ASIC allows 720) No 6x200G-COH 600 Yes Note# these results are the same with the FE3600. It’s expected since new fabric cards don’t change the way the NPUs are attached to them.8-slot Chassis with Ramon/FE9600 fabric and Jericho+ Line CardsThe 8 links from the Jericho+ NPUs are split in 4+4 on the two Ramon Fabric Engines in each fabric card.In nominal state, with 6 Fabric Modules# 8x 25Gbps x 6FM = 1,200Gbps (raw) 8x 20.8Gbps x 6FM = 998GbpsWith only 5 Fabric Modules# 8x 25Gbps x 5FM = 1,000Gbps (raw) 8x 20.8Gbps x 5FM = 832GbpsThe bandwidth necessary for line rate with Jericho+ line cards# Line Card Ports per J+ NPU (Gbps) <657G or 832G? 36x100G-SE 900 No MOD-A 1,000 (but ASIC allows 900) No MOD-A-SE 1,000 (but ASIC allows 900) No Here again, no difference for the 8-slot chassis.8-slot Chassis with Ramon/FE9600 fabric and Jericho2 Line CardsIn nominal state, with 6 Fabric Modules# 18x 53.125Gbps x 6FM = 5,737Gbps (raw) 18x 45.8Gbps x 6FM = 4,946GbpsWith only 5 Fabric Modules# 18x 53.125Gbps x 5FM = 4781Gbps (raw) 18x 45.8Gbps x 5FM = 4122Gbps (85.8% of 4800Gbps)At the moment of this publication, we have two line cards based on Jericho-2#NC55-24DD where each J2 services 4.8Tbpsand NC55-18DD-SE where each J2 services 3.6Tbps. Line Card Ports per J2 NPU (Gbps) <4,122G 24x400G 4,800 No 18x400G-SE 3,600 Yes 16-slot Chassis with Ramon/FE9600 fabric and Jericho Line CardsSame math and behavior than Jericho line cards with 8-slot Line Card Ports per Jericho NPU (Gbps) <624G? 36x100G 600 Yes 36x100G-S 600 Yes 24x100G-SE 600 Yes 24H12F 720 No 18H12F 840 (but ASIC allows 720) No 6x200G-COH 600 Yes 16-slot Chassis with Ramon/FE9600 fabric and Jericho+ Line CardsThe limitation encountered on the FE3600-based fabric cards on the 16-slot disappears with the new generation fabric cards.The support of line cards and impact of a fabric card loss is the same than 8-slot#In nominal state, with 6 Fabric Modules# 8x 25Gbps x 6FM = 1,200Gbps (raw) 8x 20.8Gbps x 6FM = 998GbpsWith only 5 Fabric Modules# 8x 25Gbps x 5FM = 1,000Gbps (raw) 8x 20.8Gbps x 5FM = 832Gbps Line Card Ports per J+ NPU (Gbps) <998G? <832G? 36x100G-SE 900 Yes No MOD-A 1,000 (but ASIC allows 900) Yes No MOD-A-SE 1,000 (but ASIC allows 900) Yes No 16-slot Chassis with Ramon/FE9600 fabric and Jericho2 Line CardsSame behavior and results than 8-slot. Line Card Ports per J2 NPU (Gbps) <4,946G 24x400G 4,800 No 18x400G-SE 3,600 Yes Test resultsNow that we have all the theory clarified, let’s see what has been tested and measured in this video.We are using# an 8-slot chassis FE3600 fabric cards (first generation) 36x100G-SE line cards (based on 4x Jericho+ NPUs)To simplify the process (and not remove the fan trays), we shut down the fabric cards electrically from the admin config level#RP/0/RP0/CPU0#NCS5508-2#adminroot connected from 127.0.0.1 using console on NCS5508-2sysadmin-vm#0_RP0# hw-module location 0/fc0 ?Possible completions#  offline    Take a hardware module offline for diagnostics  online     Take a hardware module online for normal operation  reload     Reload a hardware module  shutdown   Shut down a hardware modulesysadmin-vm#0_RP0# hw-module location 0/fc0 shutShutdown hardware module ? 
[no,yes] yesresult Card graceful shutdown request on 0/FC0 succeeded.sysadmin-vm#0_RP0# sh platfoLocation  Card Type               HW State      SW State      Config State----------------------------------------------------------------------------0/0       NC55-36X100G            OPERATIONAL   OPERATIONAL   NSHUT0/6       NC55-36X100G-A-SE       OPERATIONAL   OPERATIONAL   NSHUT0/RP0     NC55-RP-E               OPERATIONAL   OPERATIONAL   NSHUT0/RP1     NC55-RP-E               OPERATIONAL   OPERATIONAL   NSHUT0/FC0     NC55-5508-FC            POWERED_OFF   SW_INACTIVE   NSHUT0/FC1     NC55-5508-FC            OPERATIONAL   OPERATIONAL   NSHUT0/FC2     NC55-5508-FC            OPERATIONAL   OPERATIONAL   NSHUT0/FC3     NC55-5508-FC            OPERATIONAL   OPERATIONAL   NSHUT0/FC4     NC55-5508-FC            OPERATIONAL   OPERATIONAL   NSHUT0/FC5     NC55-5508-FC            OPERATIONAL   OPERATIONAL   NSHUT0/FT0     NC55-5508-FAN           OPERATIONAL   N/A           NSHUT0/FT1     NC55-5508-FAN           OPERATIONAL   N/A           NSHUTWe use a snake topology (with the limits described in the former posts), but limited to a single line card, 36 ports 100GE.Based on the math above, we should get around 830Gbps of forwarding capacity when running on five fabric cards, that’s 92% of the nominal mode. Logically, we could expect to see 8% loss on the traffic generator.For 1500B#We measure a bit more than 11% loss.For 500B#We measure 8 to 10% loss.For 130B#We measure 12% to 16% loss.How can we explain this deviation from the theory?It’s actually exactly the same behavior than what we explain in the previous blog about NDR.https#//xrdocs.io/ncs5500/tutorials/testing-ndr-on-ncs5500/In this blog post, we described a case of fabric saturation, generating a backpressure message for the particular VOQ, triggering the eviction of the queue to the DRAM and eventually saturating the DRAM bandwidth (because the saturation was maintained along multiple seconds).In the lab, we verify that indeed we have counters pointing in this direction.So we are no longer measuring the impact of the fabric loss, but we are also forcing so much traffic that it exceeds the bandwidth to the deep buffer.It’s illustrated by the “Rejects” reasons in the ENQ_DISCARDED_PACKET_COUNTER above.Next one ?We will demonstrate the FIB scale on the large external TCAM systems. 
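Before closing, here is a small recap sketch of the fabric arithmetic used throughout this post. The per-SERDES usable bandwidth and per-NPU front-panel figures are the ones quoted in the Theory section above and in the 400GE line card introduction article; the sketch recomputes the fabric capacity per NPU with 6 and then 5 fabric modules and flags which line cards remain at line rate#

```python
# Figures from the Theory section of this post (and from the 400GE line card
# introduction for Jericho2): SERDES per fabric module, usable Gbps per SERDES
# after cell tax / encoding / correction, and front-panel bandwidth per NPU.
NPUS = {
    "Jericho   (36x100G)":      (6,  20.8, 600),
    "Jericho+  (36x100G-SE)":   (8,  20.8, 900),
    "Jericho2  (NC57-18DD-SE)": (18, 45.8, 3600),
    "Jericho2  (NC57-24DD)":    (18, 45.8, 4800),
}

for name, (serdes, usable_gbps, front_panel) in NPUS.items():
    for fabric_modules in (6, 5):
        fabric = int(serdes * usable_gbps * fabric_modules)
        status = ("line rate" if fabric >= front_panel
                  else f"{fabric / front_panel:.0%} of line rate")
        print(f"{name:26} {fabric_modules} FM -> {fabric:>5}G fabric, {status}")

# With 5 fabric modules, the Jericho+ 36x100G-SE drops to 832G (~92% of 900G)
# and the NC57-24DD to 4122G (~86% of 4800G), matching the tests above.
```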
If you are interested in any other test case, don’t hesitate to let us know in the blog notes.", "url": "/tutorials/ncs5500-fabric-redundancy-tests/", "author": "Nicolas Fevrier", "tags": "ncs5500, fabric, redundancy, lab, test" } , "#": {} , "#": {} , "tutorials-ncs5500-hw-module-profiles": { "title": "NCS5500 Hw-module Profiles", "content": " NCS5500 Hw Profiles Introduction Hardware Module CLI hierarchy Graphical view of the 6.6.3 structure Graphical view of the 7.0.2 structure fib dlb ipv4 / ipv6 mpls recycle bgp-pic multipath oversubscription port-range profile acl egress acl ingress acl ipv6 bundle-scale bundle-hash bw-threshold flowspec load-balance algo netflow ipfix315-enable netflow fpc-enable offload oam qos enabling native mode segment-routing srv6 sr-policy stats stats acl-permit stats ingress-sr stats enh-sr-policy stats qos-enhanced tx-scale-enhanced stats egress-stats-scale tcam acl-prefix tcam fib tcam format acl mdb quad service tcam route-stats stats-fpga vrrpscale storm-control-combine-policer-bw Conclusion     2020-Feb-14 Document Creation 2020-Mar-12 Correction of the max-classmap-size description 2020-Apr-08 Correction# HQoS not needed for ingress policer on sub-if 2020-Jul-31 Add# hw-mod profile bw-threshold 2020-Aug-10 Add# comment on the need to use UDK profile for packet-length match in ACL 2021-Apr-3 Add# hw-mod profiles till 7.3.1, updated the support for J2 based platforms for exisiting profiles 2021-Sept-9 Add# hw-mod profiles till 7.4.1, updated the support for J2 based platforms for exisiting profiles 2022-Jan-20 Add# hw-mod profiles till 7.5.1 2023-May-16 Add# Updated the support for J2 based platforms for existing profiles 2024-May-13 Add# Modified the loadbalancing algo You can find more content related to NCS5500 including routing memory management, URPF, ACLs, Netflow following this link.This article is meant to be updated regularly, consider it a constant “work in progress”.IntroductionDuring the 4 years of its existence, the NCS5500 has been used in a constantly growing number of network roles. The NCS 5500 resources (databases where we store prefixes, nexthop, counters, …) have been optimized to accommodate the specific requirements of these networking roles.To enable these particular resource optimizations, we carved the memories in specific ways, via “hardware profiles”.This document aims at listing all these options and clarify# what they do and where they can be used which platforms can use them when they have been introduced what are the side effects of enabling them, if anyIf not specifically mentioned, consider that activation of a new hw-module config will require a system or line card reload.AcknowledgementsMany thanks to# Neelu Jethani Jisu Bhattacharya Vincent Ng Anup Kumar Vasudevan Rajeev Mishra Aleksandar Vidakovic Neeraj Garg Riadh Habibi Richard Poll Tejas Lad Angu Chakravarthy Paban Sarma Bala Murali Krishna Sanka Deepak Balasubramanian Gaddam Ravindher Reddy Avinash PrabhuHardware Module CLI hierarchyFor this article, we use IOS XR 6.6.3, 7.0.2, 7.3.1, 7.4.1 . 
The document will be updated regularly.Graphical view of the 6.6.3 structureRP/0/RP0/CPU0#NCS5500-663#sh verCisco IOS XR Software, Version 6.6.3Copyright (c) 2013-2019 by Cisco Systems, Inc.Build Information# Built By # hlo Built On # Fri Dec 13 17#40#12 PST 2019 Built Host # iox-lnx-029 Workspace # /auto/srcarchive15/prod/6.6.3/ncs5500/ws Version # 6.6.3 Location # /opt/cisco/XR/packages/cisco NCS-5500 () processorSystem uptime is 2 weeks 6 days 14 hours 56 minutesRP/0/RP0/CPU0#NCS5500-663#confRP/0/RP0/CPU0#NCS5500-663(config)#hw-module ? fib Forwarding table to configure oversubscription Configure oversubscription profile Configure profile. quad Configure quad. route-stats Configure multicast per-route statistics service Configure service role. subslot Configure subslot h/w module tcam Configure profile for TCAM LC cards vrrpscale to scale VRRP sessionsRP/0/RP0/CPU0#NCS5500-663(config)Graphical view of the 7.0.2 structureRP/0/RP0/CPU0#NCS5500-702#sh verCisco IOS XR Software, Version 7.0.2.18ICopyright (c) 2013-2019 by Cisco Systems, Inc.Build Information# Built By # ahoang Built On # Tue Nov 19 16#44#39 PST 2019 Built Host # iox-ucs-027 Workspace # /auto/iox-ucs-027-san2/prod/7.0.2.18I.SIT_IMAGE/ncs5500/ws Version # 7.0.2.18I Location # /opt/cisco/XR/packages/ Label # 7.0.2.18Icisco NCS-5500 () processorSystem uptime is 4 weeks 6 days 12 hours 8 minutesRP/0/RP0/CPU0#NCS5500-702#confRP/0/RP0/CPU0#NCS5500-702(config)#hw-module ? ains Configure AINS Params fib Forwarding table to configure oversubscription Configure oversubscription port-range Configure port range profile Configure profile. quad Configure quad. route-stats Configure multicast per-route statistics service Configure service role. shut shutdown the hw-module stats-fpga Configure h/w module subslot Configure subslot h/w module tcam Configure profile for TCAM LC cards unshut Unshut the hw-module vrrpscale to scale VRRP sessionsRP/0/RP0/CPU0#NCS5500-702(config)#Now, let’s review these profiles individually. We will define their role, the type of platforms using them, and potentially the conflicts with other profiles.fibdlbRP/0/RP0/CPU0#NCS5500-702(config)#hw-module fib dlb level-1 enable ? -cr-RP/0/RP0/CPU0#NCS5500-702(config)#“Destination-based Load balancing” has been introduced in 6.6.2 and is completed in 7.1.1.It’s another solution to address the 4K ECMP FEC limitation with software based pre-selection of the path. In this approach, one path out of available multipaths is selected based on software hash on destination IP before prefix is programmed in hardware. Data plane is programmed with selected single path. So traffic is virtually distributed on available paths by pre-selection of path based on per-prefix hash.This profile enables DLB for IGP with LDP, IGP with SR, L2VPN PW over IGP, BGP LU over IGP, BGP L3VPN over IGP, TI-LFA/LFA/RLFA at IGP but not for IGP over SRTE nor BGP PIC Edge.External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/routing/71x/b-routing-cg-ncs5500-71x/b-routing-cg-ncs5500-71x_chapter_01000.htmlipv4 / ipv6RP/0/RP0/CPU0#NCS5500-663(config)#hw-module fib ? dlb Destination Based Load balancing ipv4 Configure ipv4 protocol ipv6 Configure ipv6 protocol mpls Configure mpls protocolRP/0/RP0/CPU0#NCS5500-663(config)#hw-module fib ipv4 ? scale Configure scale mode for no-TCAM cardRP/0/RP0/CPU0#NCS5500-663(config)#hw-module fib ipv4 scale ? 
host-optimized-disable Configure Host optimization by default internet-optimized Configure Intetrnet optimizedRP/0/RP0/CPU0#NCS5500-663(config)#hw-module fib ipv4 scale internet-optimized ? -cr-RP/0/RP0/CPU0#NCS5500-663(config)#hw-module fib ipv4 scale host ? -cr-RP/0/RP0/CPU0#NCS5500-663(config)#hw-module fib ipv6 ? scale Configure scale mode for no-TCAM cardRP/0/RP0/CPU0#NCS5500-663(config)#hw-module fib ipv6 scale ? internet-optimized-disable Configure by default Intetrnet optimizedRP/0/RP0/CPU0#NCS5500-663(config)#hw-module fib ipv6 scale internet ? -cr-RP/0/RP0/CPU0#NCS5500-663(config)#These profiles will dictate how we distribute IPv4 and IPv6 prefixes in the different databases (LPM or LEM) depending on their prefix length. Not effective when using external TCAM.They are mandatory if we need to store large routing tables (ie full internet view) and/or if want to configure URPF.Default mode is “host-optimized” for IPv4 and “internet-optimized” for IPv6. This hardware profile is only relevant for systems/LC with the no eTCAM (“base” systems) using Jericho and “Jericho+ with Jericho-scale” (with the 256k-350k large LPM). It’s not recommended for NCS55A1-24H or NCS55A1-48Q-6H, based on a Jericho+ with large LPM (1M-1.5M v4 entries).External documentation# https#//xrdocs.io/ncs5500/tutorials/2017-08-03-understanding-ncs5500-resources-s01e02/ https#//xrdocs.io/ncs5500/tutorials/2017-08-07-understanding-ncs5500-resources-s01e03/ https#//xrdocs.io/ncs5500/tutorials/2017-12-30-full-internet-view-on-base-ncs-5500-systems-s01e04/ https#//xrdocs.io/ncs5500/tutorials/ncs5500-urpf/ https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/security/62x/b-system-security-cg-ncs5500-62x/b-system-security-cg-ncs5500-62x_chapter_01001.htmlNote1# Please pay attention to the form of the command since it could lead to confusions# for IPv4 it’s internet-optimized while it’s internet-optimized-disable for IPv6.Note2# This profiles are not applicable for J2 based systems.mplsRP/0/RP0/CPU0#NCS5500-663(config)#hw-module fib mpls ? label Configure MPLS label convergence optimization for LDP/SR labels ldp Configure signalling protocol for MPLSRP/0/RP0/CPU0#NCS5500-663(config)#hw-module fib mpls ldp lsr-optimizedRP/0/RP0/CPU0#NCS5500-663(config)#hw-module fib mpls ldp lsr-optimized ? -cr-RP/0/RP0/CPU0#NCS5500-663(config)#When using an MPLS network, you bind specific prefixes to labels for the IP-to-MPLS case, and also you bind ingress labels to egress labels for the LSR role (MPLS-to-MPLS case).In the NCS5500, several resources are used to store this information. Among them, the FEC database is solicited. LEM and EEDB are also important in this discussion but we will try to simplify it to focus only on the FEC part.When the destination of the packet is resolved via a single path, the information is stored in this 124k-entry large database. 
But when the destination is resolved via multiple equal-cost paths (ECMP case), the information is stored in a sub-block of the FEC table named ECMP FEC. This zone can accommodate 4k entries. By default, the system allocates 3 entries for each prefix bound to a label: one for the IP-to-MPLS case, one for the MPLS-to-MPLS case, and one for the case where the next hop is made of several paths, some with LDP and some others IP-only (referred to as the EOS0/1 case, usually a transient situation). That reduces the overall number of prefixes associated to labels to 1,300 (you'll find many places mentioning the supported number is 1,000). When we position the NCS5500 in a pure LSR role, the first and third allocations are not necessary. That's the purpose of this hardware profile: extending the support to 3000+ prefixes bound to labels (with up to 16-way ECMP paths). This CLI enables the creation of a "Push/Swap" shared MPLS encap, i.e. the same encap can be used for label push or label swap. EVPN services are not supported with this profile.
Note1: in an IGP domain, only the routers' loopbacks need to be bound to labels. It's a best practice to use filters to reduce the IGP-to-MPLS relationship.
Note2: This profile is not supported on J2-based systems.
External documentation: https://www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/mpls/63x/b-mpls-cg-ncs5500-63x/b-mpls-cg-ncs5500-63x_chapter_0101.html
hw-module fib mpls label lsr-optimized
RP/0/RP0/CPU0:NCS5500-663(config)#hw-module fib mpls ldp ? lsr-optimized Configure optimization for LSR role
RP/0/RP0/CPU0:NCS5500-663(config)#hw-module fib mpls label lsr-optimized ? -cr-
RP/0/RP0/CPU0:NCS5500-663(config)#
This is an ECMP optimization for /32 prefixes: when all paths carry the same label, the common label is stored in the LEM so we don't have to consume ECMP FEC entries. In 6.5.x, no services could work with this profile (no L2VPN/L3VPN, as the "transport" label won't be pushed on the traffic). Starting from 7.1.1, L3VPN is supported but not L2.
External documentation: https://www.cisco.com/c/en/us/td/docs/iosxr/ncs560/segment-routing/71x/b-segment-routing-cg-71x-ncs560/b-segment-routing-cg-71x-ncs560_chapter_011.html
Note1: lsr-optimized mode was introduced in 6.5.x and was initially not supported on J+ systems with eTCAM; this limitation was removed in 7.1.1.
Note2: this profile cannot be used together with internet-optimized.
Note3: This profile is not supported on J2-based systems.
recycle
hw-module fib recycle service-over-rsvpte
RP/0/RP1/CPU0:5508-1-731(config)#hw-module fib recycle ? service-over-rsvpte Recycle traffic for BGP services going over RSVP TE
RP/0/RP1/CPU0:5508-1-731(config)#hw-module fib recycle service-over-rsvpte ? -cr-
RP/0/RP1/CPU0:5508-1-731(config)#hw-module fib recycle service-over-rsvpte
Wed Mar 31 23:10:12.063 PDT
In order to activate/deactivate recycling traffic for BGP services going over RSVP TE, you must manually reload the chassis/all line cards
RP/0/RP1/CPU0:5508-1-731(config)#
To enable L3 BGP services over NCS5500/5700, use the "hw-module fib recycle service-over-rsvpte" profile. Prior to this feature, BGP services (6PE, VPNv4) were not supported over LDPoRSVPTE on NCS5500. With its introduction, in IOS XR 7.3.1 on J/J+ systems and IOS XR 7.4.1 on J2 in compatible mode, this use case becomes possible. It is achieved via a recycle approach: in the first pass, the IGP local label and the BGP label are added and the packet is recycled; the recycled packet is then treated as a regular IGP MPLSoRSVPTE packet in the second pass. In the second pass, the IGP local label is swapped with the IGP out label, the RSVP TE label is pushed, and the packet is sent out. In order to activate/deactivate recycling traffic for BGP services going over RSVP TE, you must manually reload the chassis/all line cards.
Note: with this profile enabled, packets meant for the TE tunnel get recycled. EVPN over LDPoTE is supported only on J/J+.
bgp-pic multipath
hw-module fib bgp-pic multipath-core enable
RP/0/RP1/CPU0:5508-1-731(config)#hw-module fib bgp-pic multipath-core ? enable Enable pic core in forwarding chain
RP/0/RP1/CPU0:5508-1-731(config)#hw-module fib bgp-pic multipath-core enable ? -cr-
RP/0/RP1/CPU0:5508-1-731(config)#hw-module fib bgp-pic multipath-core enable
Thu Apr 1 02:49:14.402 PDT
In order to activate/deactivate bgp multipath pic core, you must manually reload the chassis/all line cards
RP/0/RP1/CPU0:5508-1-731(config)#
To avoid BGP prefix-dependent convergence when multiple ECMP paths are available via the IGP, use "hw-module fib bgp-pic multipath-core enable". This helps achieve sub-second convergence. BGP selects more than one primary path as the primary path set, plus one backup path for that set, and installs the primary/backup path pair in the RIB. FIB replicates the same backup path for all primary paths and programs every path as a protected path. To disable the feature, use "no hw-module fib bgp-pic multipath-core enable"; after removing the configuration, a router reload is recommended.
Note: In order to activate/deactivate bgp multipath pic core, you must manually reload the chassis/all line cards. BGP LU level 1 is not supported. L3VPN/6PE/6VPE PIC core multipath is not supported over the IGP.
oversubscription
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module oversubscription prioritize ? cos CoS values between 0-5 untagged Prioritize untagged packets
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module oversubscription prioritize cos ? 0-5 configure CoS number 0-5
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module oversubscription prioritize cos 0 ? interface Interface name
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module oversubscription prioritize cos 0 interface hu0/0/0/0 ?
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module oversubscription prioritize untagged ? interface Interface name
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module oversubscription prioritize untagged interface hu0/0/0/0 ?
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module oversubscription prioritize untagged interface hu0/0/0/0
port-range
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module port-range ? 0-35 configure start port
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module port-range 0 ? 0-35 configure end port
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module port-range 0 1 ? location fully qualified location specification
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module port-range 0 1 location 0/4/CPU0 ? mode port mode 40-100, 400, 2x100, 4x10-4x25
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module port-range 0 1 location 0/4/CPU0 mode ? WORD port mode 40-100, 400, 2x100, 4x10-4x25
RP/0/RP0/CPU0:NCS5500-702(config)#
This feature was introduced to assign port roles on NC57-18DD-SE line cards (Vigor-SE). On NC57-18DD-SE, the ports 0 to 17 and 24 to 29 must be configured in pairs: they belong to the same Reverse Gear Box / CDR5 cage, which can handle up to 400Gbps. If port n is configured at 400G, port n+1 is disabled.
Default mode is 40-100.RP/0/RP0/CPU0#NCS5500-702(config)#do sh int brief | i ~(0/3/0/28|0/3/0/29)~ Hu0/3/0/28 admin-down admin-down ARPA 1514 100000000 Hu0/3/0/29 admin-down admin-down ARPA 1514 100000000RP/0/RP0/CPU0#NCS5500-702(config)#hw-module port-range 28 29 loc 0/3/CPU0 mode 400RP/0/RP0/CPU0#NCS5500-702(config)#commitLC/0/3/CPU0#Feb 2 22#37#54 # ifmgr[163]# %PKT_INFRA-LINK-3-UPDOWN # Interface HundredGigE0/3/0/29, changed state to DownLC/0/3/CPU0#Feb 2 22#37#54 # ifmgr[163]# %PKT_INFRA-LINEPROTO-5-UPDOWN # Line protocol on Interface HundredGigE0/3/0/29, changed state to DownLC/0/3/CPU0#Feb 2 22#37#54 # ifmgr[163]# %PKT_INFRA-LINK-3-UPDOWN # Interface HundredGigE0/3/0/28, changed state to DownLC/0/3/CPU0#Feb 2 22#37#54 # ifmgr[163]# %PKT_INFRA-LINEPROTO-5-UPDOWN # Line protocol on Interface HundredGigE0/3/0/28, changed state to DownLC/0/3/CPU0#Feb 2 22#37#56 # ifmgr[163]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/3/0/28, changed state to DownLC/0/3/CPU0#Feb 2 22#37#56 # ifmgr[163]# %PKT_INFRA-LINK-3-UPDOWN # Interface FourHundredGigE0/3/0/28, changed state to DownLC/0/3/CPU0#Feb 2 22#37#56 # ifmgr[163]# %PKT_INFRA-LINEPROTO-5-UPDOWN # Line protocol on Interface FourHundredGigE0/3/0/28, changed state to DownRP/0/RP0/CPU0#Feb 2 22#37#57 # config[67778]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'root'. Use 'show configuration commit changes 1000000037' to view the changes.RP/0/RP0/CPU0#NCS5500-702(config)#do sh int brief | i ~(0/3/0/28|0/3/0/29)~ FH0/3/0/28 down down ARPA 1514 400000000RP/0/RP0/CPU0#NCS5500-702(config)#External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs560/interfaces/71x/b-interfaces-hardware-component-cg-71x-ncs560/preconfiguring_physical_interfaces.htmlprofileacl egressRP/0/RP0/CPU0#NCS5500-663(config)#hw-module profile acl ? egress egress acl ipv6 ipv6 protocol specific optionsRP/0/RP0/CPU0#NCS5500-663(config)#hw-module profile acl egress ? layer3 egress layer3 aclRP/0/RP0/CPU0#NCS5500-663(config)#hw-module profile acl egress layer3 ? interface-based egress layer3 interface-based aclRP/0/RP0/CPU0#NCS5500-663(config)#hw-module profile acl egress layer3 interface-based ? -cr-RP/0/RP0/CPU0#NCS5500-663(config)#Introduced in IOS XR 6.1.4, this profile is specifically designed to enable L3 egress ACL over BVI.Restrictions# Once this profile is enabled, it’s not possible to configure egress ACL over non-BVI interfaces (Physical included).External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/ip-addresses/61x/b-ncs5500-ip-addresses-configuration-guide-61x/b-ncs5500-ip-addresses-configuration-guide-61x_chapter_01.htmlacl ingressRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile acl ? egress egress acl ingress ingress acl ipv6 ipv6 protocol specific optionsRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile acl ingress ? compress Specify ACL compression in hardwareRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile acl ingress compress ? enable Enable ACL compression in hardwareRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile acl ingress compress enable ? location Location of acl config -cr-RP/0/RP0/CPU0#NCS5500-702(config)#Note# This profiles is applicable only for systems with etcamacl ipv6RP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile acl ? egress egress acl ingress ingress acl ipv6 ipv6 protocol specific optionsRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile acl ipv6 ? ext-header ipv6 extension header related optionsRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile acl ipv6 ext-header ? 
permit allow permit of extension header packetsRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile acl ipv6 ext-header permit ? -cr-RP/0/RP0/CPU0#NCS5500-702(config)#Feature available on 6.6.3 and 7.0.1.In previous implementation, ipv6 header parser can’t identify next protocol properly when one or more extension headers are present. This means, if ipv6 extension header is present, it will not be able to apply security ACLs properly based on ip protocols and other L4 information like L4 ports, TCP flags etc.When these extension headers are detected, these packets are sent to control plane CPU for further processing and applying the security ACLs. This mean, these packets will not be processed at full rate, which can be anything from 150Mpps to 835Mpps based on ASIC type, but only at 100 packets per sec. Any more packets of this type will be dropped.This behavior of sending packet to CPU is enabled by default, but in case user wants to disable this special handling of extension headers, they can enable the hardware profile.With this, they don’t have to insert permit rules in each ACLs. All the packets with extension headers will bypass security ACLs and will be permitted. This CLI can be configured or de-configured anytime during router operation and does not need any restart of system.Note# Can be enabled and disabled without requiring a system reload.External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/ip-addresses/63x/b-ip-addresses-configuration-guide-ncs5500-63x/b-ip-addresses-configuration-guide-ncs5500-63x_chapter_010.htmlbundle-scaleRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile bundle-scale ? 1024 Max 1024 trunks, Max 16 members 256 Max 256 trunks, Max 64 members 512 Max 512 trunks, Max 32 membersRP/0/RP0/CPU0#NCS5500-702(config)#In IOS XR 6.5.1, we introduced this hardware profile to offer more flexibility in the distribution “number of bundles vs numbers of bundle members”. Some customers wanted more bundle interfaces with few ports or vice versa fewer bundle interfaces with more port members. # Bundles Members / Bundle 256 64 512 32 1024 16 Note# “hw-module profile qos max-trunks <256/512/1024>” is replaced by “hw-module profile bundle-scale <256/512/1024>” from 6.5.1 release onwardsExternal documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/qos/b-ncs5500-qos-cli-reference/b-ncs5500-qos-cli-reference_chapter_0100.html#wp3930832296bundle-hashhw-module profile bundle-hash ignore-ingress-portRP/0/RP1/CPU0#5508-1-731(config)#hw-module profile bundle-hash ignore-ingress-port ? -cr- RP/0/RP1/CPU0#5508-1-731(config)#hw-module profile bundle-hash ignore-ingress-portThis profile was introduced in IOS-XR 7.1.2. After enabling this profile ingress traffic port is removed from the hash-key computation. This results in different hash-key value gets computation. This alters the traffic distribution across the bundle members. This is a global CLI and applies to all Line cards and all bundles on the chassis. This profile doesn’t require a chassis or a LC reload.hw-module profile bundle-hash per-packet-round-robinRP/0/RP1/CPU0#5508-1-731(config)#hw-module profile bundle-hash ? hash-index configure which polynomial to use in bundle-hash ignore-ingress-port Disable ingress port during bundle hash computation per-packet-round-robin Enable per-packet round robin loadbalancing for all bundles in systemRP/0/RP1/CPU0#5508-1-731(config)#hw-module profile bundle-hash RP/0/RP1/CPU0#5508-1-731(config)#hw-module profile bundle-hash per-packet-round-robin ? 
-cr- This profile is introduced in IOS XR 7.3.1. Enabling this profile, bundles will start egressing traffic in a per-packet round robin manner across all members. This suppresses any internal load balancing algorithm and it becomes purely round robin. This is a global CLI and applies to all bundles across all locations/LCs.Note# This profile doesn’t require a chassis or a LC reload.When per packet round robin mode is enabled, all bundle links will be equally used for egress. Hence, the bundle-hash CLI tool or ‘show cef exact-route’ command to predict the egress member link will not be of any use.hw-module profile bundle-hash hash-index <> location <>RP/0/RP1/CPU0#5508-1-731(config)#hw-module profile bundle-hash hash-index ? 1 Use Polynomial value 0x8011 10 Use LB-Key-Pkt-Data directly 11 Use counter incremented every packet 12 Use counter incremented every two clocks 2 Use Polynomial value 0x8423 3 Use Polynomial value 0x8101 4 Use Polynomial value 0x84A1 5 Use Polynomial value 0x9019RP/0/RP1/CPU0#5508-1-731(config)#hw-module profile bundle-hash hash-index 10 ? location Location of bundle-hash polynomial configRP/0/RP1/CPU0#5508-1-731(config)#hw-module profile bundle-hash hash-index 10 location ? 0/0/CPU0 Location of bundle-hash polynomial config 0/3/CPU0 Location of bundle-hash polynomial config 0/7/CPU0 Location of bundle-hash polynomial config WORD Location of bundle-hash polynomial configRP/0/RP1/CPU0#5508-1-731(config)#hw-module profile bundle-hash hash-index 10 locationThis is the config to adjust the LAG load balancing algorithm by changing the hash polynomial used by ASIC. The router continues to use the existing 7-Tuples algorithm, but user can change the polynomial value used internally. This results in different load-balancing which may be better in some cases. This profile is per location or LC based. So the change is applicable only for traffic streams ingressing on that particular Line Card. Hash-key is calculated based on ingress LC. Hash-key value decides which bundle member will be selected for egress, irrespective of whichever LC bundle member belongs to. User can configure same hash-index on all LCs if desired.Note# Any other hash-index is not valid or there are some internal limitations. Hence only CLI should be used to modify the hash-index and not via shell.Hash-indices 11 and 12 result in a per-pkt based distribution across bundle. So should be used with caution/after testing. This profile doesn’t require a chassis or a LC reload. When there are multiple NPU’s/ASIC’s on a LC, and hash index is configured on that location, that configuration gets applied to each NPU.bw-thresholdRP/0/RP0/CPU0#5508-2-702(config)#hw-module profile bw-threshold ? WORD value in percent# 0-100,in increments of 10RP/0/RP0/CPU0#5508-2-702(config)#hw-module profile bw-threshold 20 ? crRP/0/RP0/CPU0#5508-2-702(config)This feature allows the NPU to monitor of the number of fabric interfaces “up”.When the fabric bandwidth falls below the configured bandwidth threshold on any one ASIC, the front panel interfaces on the line card are forced down, triggering a network re-convergence (through routing protocols). 
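For illustration, applying the threshold looks like this (the 20 percent value is only an example, echoing the capture above, and the hostname is generic):
RP/0/RP0/CPU0:router(config)# hw-module profile bw-threshold 20
RP/0/RP0/CPU0:router(config)# commit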
If we cross the threshold in the other direction, the front panel interfaces are brought back up again.The “bring down threshold” is kept 2 fabric links per ASIC below the “bring up threshold”.This algorithm does not apply to fixed systems where the fabric links are not connected.Check “Set Fabric Bandwidth Threshold” in the installation guide#https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/hardware-install/b-ncs5500-hardware-installation-guide/b-ncs5500-hardware-installation-guide_chapter_011.htmlNote# This profile is not needed for J2 based systems.flowspecRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile flowspec ? v6-enable Configure support for v6 flowspecRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile flowspec v6-enable ? location Location of flowspec config -cr-RP/0/RP0/CPU0#NCS5500-702(config)#NCS 5500 supports BGP FlowSpec starting from IOS XR 6.5.1. Only the J+ platforms with eTCAM (-A-SE) are supporting this feature.Details on the BGP FlowSpec implementation are available in this support forum post#https#//community.cisco.com/t5/service-providers-blogs/bgp-flowspec-implementation-on-ncs5500-platforms/ba-p/3387443Once you enable this profile, due to the potentially very long key necessary to perform the matching part of the BGP FSv6 rule, it has been decided to reduce the PPS for the all data paths to 700MPPS (and not only IPv6 packets).External documentation# https#//community.cisco.com/t5/service-providers-blogs/bgp-flowspec-implementation-on-ncs5500-platforms/ba-p/3387443 https#//xrdocs.io/ncs5500/tutorials/bgp-flowspec-on-ncs5500/load-balance algoRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile load-balance algorithm ? L3-only L3 Header only Hash. PPPoE PPPoE session based optimized hash. Reload is required for this option gtp GTP optimized. gtp-mpls GTP over MPLS optimized hash. inner-l2-field Inner MAC based optimised hashing. ip-tunnel IP tunnel optimized. layer2 Layer 2 optimized. mpls-lsr-ler MPLS lsr ler LB profile. mpls-lsr-ler-optimized MPLS lsr ler LB profile optimized mpls-safe-speculative-parsing MPLS safe Speculative parsing.RP/0/RP0/CPU0#NCS5500-702(config)# The headers which make the forwarding decision in a packet are called the forwarding headers. Further the forwarding header along with other headers can be used for the Hashing Decision.For example, in a packet format of ETHoIP# In a L3 Network# content from the IP header is used and hence called the forwarding header. In a L2 Network# content from the ETH header is used and hence called the forwarding header The header following the forwarding header have been indicated as Fwd+1, Fwd+2 and so on in the document below. In Cisco IOSXR Jericho routers by default For ECMP# FWD and FWD+1 headers are taken into consideration. For LAG# FWD, FWD+1 and FWD+2 headers are taken into consideration. In J+ routers for both ECMP hash and LAG hash takes 3 headers i.e. Fwd, Fwd+1 and Fwd+2 for hash calculation. Certain parameters for hash calculation are not dependent on the incoming packet but on the router. For example, Input pp_port which represents the port on which packet is entering the router and Router_id which is the configured IPv4 loopback address of the router (If no IPv4 loopback is configured i.e. no loopback interface or only IPv6 loopback interfaces then chassis ID is used as a Router_id) J/J+ provides the flexibility to choose from multiple profiles to satisfy the requirement of different deployment scenario. 
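As a quick illustration, here is a minimal sketch of selecting one of these profiles; the hostname is generic and the choice of the gtp option (shown in the capture above) is only an example, so refer to the profile list below to pick the right one for a given deployment, keeping in mind that some of the profiles additionally require a reload:
RP/0/RP0/CPU0:router(config)# hw-module profile load-balance algorithm gtp
RP/0/RP0/CPU0:router(config)# commit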
On J2, by contrast, no additional load-balancing hw-module configuration is required. The available profiles are summarized below (Index / Profile Name / Use Case):
- 0 — Default: Performs the default hashing operation as explained above. No hw-module knob is needed to enable it.
- 1 — layer2: Optimized for Ethernet (Layer 2) based forwarding deployments. The profile also allows the hashing algorithm to use the inner IP header information when doing Layer 2 forwarding with an MPLS inner payload.
- 2 — ip-tunnel: Suitable for IP tunnel deployments such as GRE and GUE. It allows the hashing algorithm to use the outer IPv4 GRE header even when doing an IP tunnel decapsulation. For GUE, load balancing happens only on the outer IP and outer UDP headers. The router must be reloaded to enable this profile.
- 3 — mpls-safe-speculative-parsing: Recommended for P (core) nodes as it can look one header inside the forwarding MPLS header. In this case, if the first nibble of the MAC DA address is a 4 or 6, the rest of the packet could be interpreted incorrectly as an IPv4 or IPv6 header. Introduced in 6.5.3.
- 4 — gtp: Optimized for GTP deployments as it allows hashing based on the tunnel endpoint ID in GTP-U packets. The hashing decision is based on IPv4/IPv6, UDP and the Tunnel Endpoint ID (TE ID). Introduced in 6.5.1.
- 5 — L3-only: L3-header-only hash, i.e. only fields from IPv4 and IPv6 are used; no L4 header is used for hashing. Recommended when the majority of traffic is fragmented, to guarantee the fragmented packets take the same path. Introduced in 7.4.1.
- 6 — gtp-mpls: Like gtp, this profile is optimized for GTP deployments, but with an MPLS backbone. The TE ID is taken into consideration instead of L4. Introduced in 7.2.1.
- 7 — mpls-lsr-ler: Recommended for load balancing at Label Edge Routers (LER) and Label Switched Routers (LSR) with MPLS traffic where there could be partial MPLS label termination leading to an errata flow (pop of the topmost label and lookup on the label underneath it). Recommended for: 1) MPLS pop-and-lookup flows (EthoMPLS2/3oIPv4oL4) with L4 as TCP or UDP, 2) MPLS pop-and-lookup flows (EthoMPLS2/3oIPv6oL4) with L4 as TCP or UDP. Introduced in 7.8.1.
- 8 — mpls-lsr-ler-optimized: Adds optimizations on top of the mpls-lsr-ler profile and allows optimized hashing at LER and LSR with MPLS IPv6 traffic. Recommended for: 1) MPLS pop-and-lookup flows (EthoMPLS2/3oIPv4) with no L4, 2) MPLS pop-and-lookup flows (EthoMPLS2/3oIPv6) with no L4, 3) EthoMPLS4+ label packets for topmost label swap/pop. Introduced in 7.10.1.
- 9 — pppoe: Designed for head and tail nodes under ECMP and LAG based hashing scenarios on J/J+, where the inner IPv4/IPv6 header immediately following the PPPoE header is taken into consideration. A unique FAT label is allocated for this load balancing to happen. The router must be reloaded to enable this profile. Introduced in 7.4.1.
- 9.1 — pppoe decap-fatbased-hashing: Sub-profile under pppoe, similar to the fat-based hash profile for PPPoE packets and specifically recommended for the tail node: it enables the FAT label and VC label (which are terminating and hence not normally part of the hash tuple) to be considered for hash calculation. The router must be reloaded to enable this profile.
- 10 — inner-l2-field: Allows the hashing algorithm to take the inner Ethernet fields, such as source MAC (SMAC) and destination MAC (DMAC) addresses, into consideration for EthoMPLSoEth kinds of packets. By default, these fields are not used. This knob is applicable to J2 as well. Introduced in 7.7.1 on J2 and in 24.1.2 for J/J+.
- 11 — fat-based-hash: Recommended for L2VPN deployments with FAT configured on the disposition node. Enabling this profile ensures that the FAT label is also included in the hash computation. The router must be reloaded to enable this profile. Introduced in 24.3.1.
CLIs that can affect the behaviour of load balancing are:
- hw-module profile bundle-hash ignore-ingress-port: Enabling this command removes the ingress traffic port from the hash-key computation. This results in a different hash-key value being generated and alters the traffic distribution across the bundle members. This is a global CLI and applies to all line cards and all bundles on the chassis. This profile doesn't require a chassis or LC reload.
- hw-module profile bundle-hash hash-index: The command supports the following hash-indices: 1 Use Polynomial value 0x8011, 2 Use Polynomial value 0x8423, 3 Use Polynomial value 0x8101, 4 Use Polynomial value 0x84A1, 5 Use Polynomial value 0x9019, 10 Use LB-Key-Pkt-Data directly, 11 Use counter incremented every packet, 12 Use counter incremented every two clocks. This command is applicable only to the location or line card specified while configuring it.
- hw-module profile bundle-hash per-packet-round-robin: Enabling this profile, bundles will start egressing traffic in a per-packet round robin manner across all members. This suppresses any internal load balancing algorithm and it becomes purely round robin. This is a global CLI and applies to all bundles across all locations/LCs.
netflow ipfix315-enable
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module profile netflow ? fpc-enable Netflow full packet capture enable ipfix315-enable IPFIX 315 enable
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module profile netflow ipfix315-enable ? location Location of NETFLOW config -cr-
RP/0/RP0/CPU0:NCS5500-702(config)#
This hardware profile must be enabled before configuring an exporter-map with version ipfix option 315 on a specific line card.
External documentation: https://www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/netflow/63x/b-netflow-cg-ncs5500-63x/b-netflow-cg-ncs5500-63x_chapter_010.html
netflow fpc-enable
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module profile netflow ? fpc-enable Netflow full packet capture enable ipfix315-enable IPFIX 315 enable
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module profile netflow fpc-enable ? location Location of NETFLOW config -cr-
RP/0/RP0/CPU0:NCS5500-702(config)#
This feature, introduced in 7.0.1, enables the full packet capture mode and is documented externally at https://www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/netflow/b-ncs5500-netflow-cli-reference/b-ncs5500-netflow-cli-reference_chapter_01.html#wp4044800456. Full MPLS+IP headers are displayed only for L3VPN. For L2VPN, the netflow code does not support any decoding from the control word onwards (an XR netflow limitation, not specific to NCS5500). Netflow Full Packet Capture is required to capture non-IPoMPLS packets.
External documentation: https://www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/netflow/70x/configuration/guide/b-netflow-cg-ncs5500-70x/b-netflow-cg-ncs5500-70x_chapter_010.html
offload
RP/0/RP0/CPU0:NCS5500-702(config)#hw-module profile offload ?
1 BFDv6 and Bsync 2 BFDv6 and Route download 3 Route download and BsyncRP/0/RP0/CPU0#NCS5500-702(config)#This profile, introduced in 6.6.1, enables the hardware offload of the IPv6 BFD for NCS5501-SE. Bsync here refers to the PTP (timing feature) capability.On NCS5501-SE, we can use two cores and we need to select between 3 options# Bsync, BFDv6 and the acceleration of the route download in the NL12k eTCAM.The problem doesn’t exist on other Jericho + NL12k eTCAM systems since they don’t support timing and doesn’t exist either on the Jericho+ systems since they use an OP eTCAM.Option 1 is used by default. Not relevant for other platforms than NCS5501-SE.Doesn’t work for Segment Routing (same as IPv4) and doesn’t support Explicit NULL. Only fixed values of timers can be used (3.3 msec, 10 msec, 100 msec, 1 sec, 10 sec).External documentation# https#//community.cisco.com/t5/service-providers-blogs/bfd-over-ipv6-implementation-on-ncs5500-platform/ba-p/3771621Note# This profile is not needed for J2 based systemsoamhw-module profile oam 48byte-cfm-maid-enableRP/0/RP0/CPU0#N57B1-1-Vega-II5-57(config)#hw-module profile oam ? 48byte-cfm-maid-enable Enable 48byte cfm maid feature sat-enable enable SAT featureRP/0/RP0/CPU0#N57B1-1-Vega-II5-57(config)#hw-module profile oam 48byte-cfm-maid-enable ? -cr-RP/0/RP0/CPU0#N57B1-1-Vega-II5-57(config)#hw-module profile oam 48byte-cfm-maid-enableThu Jan 20 09#55#51.661 UTCIn order to make the oam profile take effect, the router must be manually reloaded.The NCS 5500 and NCS 5700 system has limited format support for MAID/MDID formats for hw-offloaded CFM session (<1 minutes). With introduction to this new hw-module profile, NCS 5700 fixed systems and modular boxes running in J2 native mode will support flexible format MDID/MAIDs for hardware offloaded MEPs. Activation of this mode needs the router to be reloated.qosMany options behind the qos profiles…hw-module profile qos hqos-enableRP/0/RP0/CPU0#NCS5500-663(config)#hw-module profile qos ? bvi-l2qos-disable Disable L2QOS on BVI interfaces hqos-enable Enable Hierarchical QoS ingress-model QoS model for ingress feature ipv6 Configure ipv6 protocol max-classmap-size max class map sizeRP/0/RP0/CPU0#NCS5500-663(config)#hw-module profile qos hqos-enable ? -cr-RP/0/RP0/CPU0#NCS5500-663(config)#If you need to apply any kind of QoS policy on a sub-interface, it’s mandatory to enable this hardware profile. Even for simple shaper in egress L3 sub-if (note# you don’t need it for ingress shaper).Of course, it’s also necessary if you need to apply hierarchical quality of service with parent/children structure.Restrictions HQoS profile and ingress peering profile do not work simultaneously. And hence, features requiring peering profile also do not work with HQoS profile enabled. Starting from 6.5.1# Lawful Intercept no longer needs the peering profile and therefor can be used with HQoS profile PBTS feature does not work when HQoS profile is enabled. A maximum of 896 bundle sub-interfaces are only supported in the system even if there are no QoS policies applied. This is because of internal LAG_ID resource consumption in HQoS profile mode for bundle sub-interfaces with/without QoS policies being applied. A maximum of 4 priority levels are only supported in HQoS profile mode unlike the default mode where 7-priority levels are supported. 
The restriction also applies to physical and bundle main interface policies where 7-level priorities were previously used in non-HQoS profile mode.External documentation#https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/qos/b-ncs5500-qos-cli-reference/b-ncs5500-qos-cli-reference_chapter_0100.html#wp2922716190hw-module profile qos max-classmap-sizeRP/0/RP0/CPU0#NCS5500-663(config)#hw-module profile qos ? bvi-l2qos-disable Disable L2QOS on BVI interfaces hqos-enable Enable Hierarchical QoS ingress-model QoS model for ingress feature ipv6 Configure ipv6 protocol max-classmap-size max class map sizeRP/0/RP0/CPU0#NCS5500-663(config)#hw-module profile qos max-classmap-size ? 16 Max 16 class-maps per policy 32 Max 32 class-maps per policy 4 Max 4 class-maps per policy 8 Max 8 class-maps per policyRP/0/RP0/CPU0#NCS5500-663(config)#The max-classmap-size represents the maximum number of class-map we can use in policy-map. It has a direct impact on the number of L2 interfaces where you can apply QoS.By default, class-maps are are configured to contain 32 match statements. With the 4096 available counters, it translates into a maximum of 256 L2 interfaces per NPU, 128 if used on bundle interfaces (with the default 2-counter mode we will detail below).This hardware profile defines a different class-map size, offering higher scale for L2 interfaces with QoS# Class-map Size Scale on bundle per NPU Scale per NPU 4 1024 2000 8 512 1024 16 256 512 32 (default) 128 256 External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/qos/b-ncs5500-qos-cli-reference/b-ncs5500-qos-cli-reference_chapter_0101.html#wp9836538690hw-module profile qos ingress-model peeringRP/0/RP0/CPU0#NCS5500-663(config)#hw-module profile qos ? bvi-l2qos-disable Disable L2QOS on BVI interfaces hqos-enable Enable Hierarchical QoS ingress-model QoS model for ingress feature ipv6 Configure ipv6 protocol max-classmap-size max class map sizeRP/0/RP0/CPU0#NCS5500-663(config)#hw-module profile qos ingress-model ? default Default model for ingress QoS peering Peering model for ingress QoSRP/0/RP0/CPU0#NCS5500-663(config)#hw-module profile qos ingress-model peering ? location Location of QoS config -cr-RP/0/RP0/CPU0#NCS5500-663(config)#This profile allows hybrid ACLs to classify traffic and set a specific qos-group to it.But instead of the traditional range supported by qos-groups (0 to 7), we can allocate a much larger range from 0 to 511.This qos-group can be used as a match condition for the traffic in egress port.With this extension from 8 to 512 values, we improve the granularity of the classification.To disable this feature, use “no hw-module profile qos ingress-model peering” and NOT “hw-module profile qos ingress-model default”.Note# HQoS profile and ingress peering profile do not work simultaneously# features requiring peering profile like LI also do not work with HQoS profile enabled.Note2# L2 ACL will not work if qos peering is enabled on the line cards.External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/qos/63x/b-qos-cg-ncs5500-63x/b-qos-cg-ncs5500-63x_chapter_010.html https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/qos/b-ncs5500-qos-cli-reference/b-ncs5500-qos-cli-reference_chapter_0101.html#wp9836538690hw-module profile qos ipv6 shortRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile qos ? 
hqos-enable Enable Hierarchical QoS ingress-model QoS model for ingress feature ipv6 Configure ipv6 protocol max-classmap-size max class map sizeRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile qos ipv6 ? short Configure ipv6 source short address tcam lookup short-l2qos-enable Enable l2qos feature which requires to reduce ipv6 dest mask to 96 bitsRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile qos ipv6 short ? -cr-RP/0/RP0/CPU0#NCS5500-702(config)#Introduced in IOS XR 6.5.1 for the support of the QPPB feature.Only mandatory to apply it (and reload the line card) for IPv6, it’s not needed for IPv4 QPPB activation.External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/qos/66x/b-qos-cg-ncs5500-66x/b-qos-cg-ncs5500-66x_chapter_010.html#id_77442hw-module profile qos ipv6 short-l2qos-enableRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile qos ? hqos-enable Enable Hierarchical QoS ingress-model QoS model for ingress feature ipv6 Configure ipv6 protocol max-classmap-size max class map sizeRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile qos ipv6 ? short Configure ipv6 source short address tcam lookup short-l2qos-enable Enable l2qos feature which requires to reduce ipv6 dest mask to 96 bitsRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile qos ipv6 short-l2qos-enable ? -cr-RP/0/RP0/CPU0#NCS5500-702(config)#From CCO# “To enable classification of IPv6 packets based on (CoS, DEI) on L3 sub-interfaces, run the hw-module profile qos ipv6 short-l2qos-enable command in the XR Config mode.”This profile, introduced in 7.1.1, replaces the “short” one and extends the support of the classification to the COS/DEI fields in the IPv6 headers, reducing the destination address to 96bits.External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/qos/b-ncs5500-qos-cli-reference/b-ncs5500-qos-cli-reference_chapter_0101.html#wp1849861368hw-module profile qos conform-aware-policerRP/0/RP0/CPU0#IOS(config)#hw-module profile qos ? conform-aware-policer Configure Conform Aware Policer mode hqos-enable Enable Hierarchical QoS ingress-model QoS model for ingress feature ipv6 Configure ipv6 protocol max-classmap-size max class map size physical-hqos-enable Enable Hierarchical QoS on physical interfaces qosg-dscp-mark-enable Enable both 'set qos-group' and 'set dscp/precedence' actions in the same ingress QoS policy shared-policer-per-class-stats Enable shared policer (per class stats) modeRP/0/RP0/CPU0#IOS(config)#hw-module profile qos conform-aware-policer ? -cr- RP/0/RP0/CPU0#IOS(config)#hw-module profile qos conform-aware-policerTo enable the conform-aware hierarchical policy feature use the hw-module profile qos conform-aware-policer command. It was introduced in release IOS-XR 7.2.1. Implementing the profile conform aware hierarchical policy, allows conform traffic from the child level policy to get priority over Exceed or Violate traffic at parent level policy. Prior to this profile, there was no way that conform traffic belonging to child policy was getting priority over the parent level policy. When this profile is enabled the entire system is put in color aware mode as compared to color blind mode by default. 
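As an illustration, here is a minimal sketch of the kind of two-level policer hierarchy this profile targets; the policy names, class (assumed to be defined in a class-map), rates and interface are hypothetical, and the exact design depends on the deployment:
policy-map CHILD-POLICER
 class BUSINESS
  police rate 50 mbps
  !
 !
!
policy-map PARENT-POLICER
 class class-default
  service-policy CHILD-POLICER
  police rate 100 mbps
  !
 !
!
interface TenGigE0/0/0/1
 service-policy input PARENT-POLICER
!
With the profile enabled (and after the required chassis reload), traffic that conforms to the child policer is preferred over exceed/violate traffic when the parent policer is evaluated.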
There is no effect on other features or resources.Note# In order to activate this new npu profile, you must manually reload the chassisThe support on J2 based platforms is only in native mode from release IOS XR 7.4.1External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/qos/b-ncs5500-qos-cli-reference/b-ncs5500-qos-cli-reference_chapter_0100.htmlhw-module profile qos shared-policer-per-class-statsRP/0/RP0/CPU0#IOS(config)#hw-module profile qos ? conform-aware-policer Configure Conform Aware Policer mode hqos-enable Enable Hierarchical QoS ingress-model QoS model for ingress feature ipv6 Configure ipv6 protocol max-classmap-size max class map size physical-hqos-enable Enable Hierarchical QoS on physical interfaces qosg-dscp-mark-enable Enable both 'set qos-group' and 'set dscp/precedence' actions in the same ingress QoS policy shared-policer-per-class-stats Enable shared policer (per class stats) modeRP/0/RP0/CPU0#IOS(config)#hw-module profile qos shared-policer-per-class-stats ? -cr- RP/0/RP0/CPU0#IOS(config)#hw-module profile qos shared-policer-per-class-statsThis profile was introduced in IOS-XR 7.2.1. Implementing this profile, will provide an option for token bucket sharing among two or more classes. Prior to this, there was no way to share a policer token bucket among two or more class. This token sharing depends on the incoming traffic rate, so there is no guarantee that both c1 and c2 will get equal share. However, if c2 doesn’t have any traffic, then all the bandwidth will be used by c1. This CLI will need system reload to enable the feature. The stats for each class will be available separately. In Shared Policer feature Per-Class Mode, policer bucket will be shared among two or more classes. It will also allow individual class statistics rather than aggregated statistics. Without enabling this feature as well the shared policer will work with respect to token bucket sharing, however the stats will be available only as aggregate stats, and not per individual class stats.Note# In order to activate this new npu profile, you must manually reload the chassisThe support on J2 based platforms is only in native mode from release IOS XR 7.4.1Policy based tunnel selection (PBTS) is disabled when we enable this profileExternal documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/qos/b-ncs5500-qos-cli-reference/b-ncs5500-qos-cli-reference_chapter_010.html#wp4161007330hw-module profile qos wred-stats-enableRP/0/RP0/CPU0#IOS(config)#hw-module profile qos wred-stats-enable -cr- RP/0/RP0/CPU0#IOS(config)#hw-module profile qos shared-policer-per-class-statsWRED-stats accounting feature on NCS5700 platform will be introduced from IOS-XR 7.4.1. WRED stats will be supported on systems based on J2 with OP2 TCAM only in native mode. Statistics like WRED FWD, WRED Drop count will be available for all the discard-class 0,1,2. WRED stats can be used as an indication of network congestion bottleneck and network planning can be done accordingly.Note#In order to activate this new npu profile, you must manually reload the chassisThe support is only on J2 platforms with e-tcam in native mode from release IOS XR 7.4.1hw-module profile qos qosg-dscp-mark-enableRP/0/RP0/CPU0#IOS(config)#hw-module profile qos ? 
conform-aware-policer Configure Conform Aware Policer mode hqos-enable Enable Hierarchical QoS ingress-model QoS model for ingress feature ipv6 Configure ipv6 protocol max-classmap-size max class map size physical-hqos-enable Enable Hierarchical QoS on physical interfaces qosg-dscp-mark-enable Enable both 'set qos-group' and 'set dscp/precedence' actions in the same ingress QoS policy shared-policer-per-class-stats Enable shared policer (per class stats) modeRP/0/RP0/CPU0#IOS(config)#hw-module profile qos qosg-dscp-mark-enable 10 ? 0-63 Second DSCP/Precedence value -cr- RP/0/RP0/CPU0#IOS(config)#hw-module profile qos qosg-dscp-mark-enable 10To set the qos-group and DSCP values within the same QoS policy that is applied in the ingress direction, use the hw-module profile qos qosg-dscp-mark-enable command. This profile was introduced in IOS-XR 7.1.2. With the use of this profile, we can configure a single ingress policy with both set dscp/precedence and set qos-group at the same time. We can configure set dscp or prec and set qos-group under the same policy-map but not inside the same class map in a policy map. In order to activate this new npu profile, you must manually reload the chassisExternal documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/qos/b-ncs5500-qos-cli-reference/b-ncs5500-qos-cli-reference_chapter_0101.html#wp1468010117hw-module profile qos ecn-marking-statsRP/0/RP0/CPU0#ios#con tFri Mar 26 10#35#10.739 UTCRP/0/RP0/CPU0#ios(config)#hw-module profile qos ? conform-aware-policer Configure Conform Aware Policer mode ecn-marking-stats Enable ECN marking stats mode hqos-enable Enable Hierarchical QoS ingress-model QoS model for ingress feature ipv6 Configure ipv6 protocol max-classmap-size max class map size qosg-dscp-mark-enable Enable both 'set qos-group' and 'set dscp/precedence' actions in the same ingress QoS policy shared-policer-per-class-stats Enable shared policer (per class stats) mode wred-stats-enable Enable Wred egress statsRP/0/RP0/CPU0#ios(config)#hw-module profile qos ecn-marking-stats Fri Mar 26 10#35#20.004 UTCIn order to activate this profile, you must manually reload the chassis/all line cardsThis profile helps to enable the counters for ECN marked packets when the packet is going out of the node. It was introduced in release IOS-XR 7.3.2. ECN feature is enabled by default in the system. But counters for ECN marked packets is not enabled due to stats resource limitation. When this hw-module is enabled, we enable rules in hardware to count ECN marked packets during congestion in the system.In order to activate this new npu profile, you must manually reload the chassisWith this profile enabled, Egress IPv4 and IPv6 ACL will not work due to PMF resource limitation in egress.Note# In order to activate this new npu profile, you must manually reload the chassisWith this profile enabled, Egress IPv4 and IPv6 ACL will not work due to PMF resource limitation in egress.hw-module profile qos l2-match-dest-addr-v4v6RP/0/RP0/CPU0#N57B1-1-Vega-II5-57(config)#hw-module profile qos ? 
arp-isis-priority-enable Prioritize ISIS and ARP packets conform-aware-policer Configure Conform Aware Policer mode ecn-marking-stats Enable ECN marking stats mode hqos-enable Enable Hierarchical QoS on physical and sub-interfaces ingress-model QoS model for ingress feature ipv6 Configure ipv6 protocol l2-match-dest-addr-v4v6 Enable l2qos match on ipv4/ipv6 destination address max-classmap-size max class map size physical-hqos-enable Enable Hierarchical QoS only on physical interfaces, and not on sub-interfaces qosg-dscp-mark-enable Enable both 'set qos-group' and 'set dscp/precedence' actions in the same ingress QoS policy shared-policer-per-class-stats Enable shared policer (per class stats) mode wred-stats-enable Enable Wred egress statsRP/0/RP0/CPU0#N57B1-1-Vega-II5-57(config)#hw-module profile qos l2-match-dest-addr-v4v6Thu Jan 20 09#52#48.156 UTCIn order to activate this profile, you must manually reload the chassis/all line cardsPrior to 7.5.1 there is no option to match IPv4/v6 destination addresses for QoS classification on l2transport interface. With introduction to this profile , QoS classification can be done based on destination address even on l2transport interface/sub interface. activation of this profile needs the router to be reloaded. A new match criterion “match destination-address” is also introduced along with the hw-module profile. This hw-module profile is supported only on NCS 55xx and NCS 540 series routers based on J/J+ generation of ASICS.RP/0/RP0/CPU0#N55xx(config)#RP/0/RP0/CPU0#N55xx(config)#class-map l2-destRP/0/RP0/CPU0#N55xx(config-cmap)#match destination-address ipv4 ? A.B.C.D/prefix IP prefix -network/length-RP/0/RP0/CPU0#N55xx(config-cmap)#match destination-address ipv4 100.1.1.1/24 ? -cr-RP/0/RP0/CPU0#N55xx(config-cmap)#match destination-address ipv4 100.1.1.1/24RP/0/RP0/CPU0#N55xx(config-cmap)#match destination-address ipv6 ? X#X##X/0-128 IPV6 address with prefix and maskRP/0/RP0/CPU0#N55xx(config-cmap)#match destination-address ipv6 2001##1/128RP/0/RP0/CPU0#N55xx(config-cmap)#show configurationThu Jan 20 09#54#11.276 UTCBuilding configuration...!! IOS XR Configuration 7.5.1!class-map match-any l2-dest match destination-address ipv4 100.1.1.1 255.255.255.0 match destination-address ipv6 2001##1/128 end-class-map!endhw-module profile qos arp-isis-priority-enableRP/0/RP0/CPU0#N57B1-1-Vega-II5-57(config)#hw-module profile qos ? arp-isis-priority-enable Prioritize ISIS and ARP packets conform-aware-policer Configure Conform Aware Policer mode ecn-marking-stats Enable ECN marking stats mode hqos-enable Enable Hierarchical QoS on physical and sub-interfaces ingress-model QoS model for ingress feature ipv6 Configure ipv6 protocol l2-match-dest-addr-v4v6 Enable l2qos match on ipv4/ipv6 destination address max-classmap-size max class map size physical-hqos-enable Enable Hierarchical QoS only on physical interfaces, and not on sub-interfaces qosg-dscp-mark-enable Enable both 'set qos-group' and 'set dscp/precedence' actions in the same ingress QoS policy shared-policer-per-class-stats Enable shared policer (per class stats) mode wred-stats-enable Enable Wred egress statsRP/0/RP0/CPU0#N57B1-1-Vega-II5-57(config)#hw-module profile qos arp-isis-priority-enable ? 
-cr-RP/0/RP0/CPU0#N57B1-1-Vega-II5-57(config)#hw-module profile qos arp-isis-priority-enableThu Jan 20 09#51#01.244 UTCIn order to activate this profile, you must manually reload the chassis/all line cardsThis feature gives an option to assign the highest priority (TC7) to Integrated Intermediate System-to-Intermediate System (IS-IS) and Address Resolution Protocol (ARP) packets in transit. This feature is disabled by default.The feature provides more flexibility in transit traffic management on a per-hop basis and also fine-tunes the traffic profile management for transit traffic.enabling native modehw-module profile npu native-mode-enableRP/0/RP0/CPU0#IOS(config)#hw-module profile npu ? native-mode-enable Enable NPUs to operate in native modeRP/0/RP0/CPU0#T-2006(config)#hw-module profile npuRP/0/RP0/CPU0#ios#configRP/0/RP0/CPU0#ios(config)#hw-module profile npu native-mode-enableIn order to activate this new npu profile, you must manually reload the chassisRP/0/RP0/CPU0#ios(config)#commitRP/0/RP0/CPU0#ios(config)J2 based systems can operate in 2 modes# Compatible Mode—Used when the chassis contains combination of Cisco NC57 and older generation line cards. This is the default mode.Native Mode—Used when the chassis contains only Cisco NC57 line cards.This hw-module profile is introduced in IOS-XR 7.2.1. For operating a modular chassis with J2 based Line cards in full capacity we need to enable this profile. Jericho and Jericho+ based LC’s present in the chassis should be shut down from admin mode before reload or removed from the chassis. The forwarding behavior will be broken and can be unpredictable if the system will not be able to shut them down properly. To enable the native mode, use the hw-module profile npu native-mode-enable command in the configuration mode. Reload of the router will be needed to switch to native mode. The fixed chassis based on J2 chipset will operate by default in native mode only.External documentation#-https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/vpn/72x/b-l2vpn-cg-ncs5500-72x/configure-gigabit-ethernet-for-layer-2-VPNs.htmlsegment-routing srv6RP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile segment-routing ? srv6 Configure support for SRv6 and its paramatersRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile segment-routing srv6 ? encapsulation Configure encapsulation parameters -cr-RP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile segment-routing srv6 encapsulation ? traffic-class Control traffic-class field on IPv6 headerRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile segment-routing srv6 encapsulation traffic-class ? -0x0-0xff- Traffic-class value (specified as 2 hexadecimal nibbles) propagate Propagate traffic-class from incoming packet/frameRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile segment-routing srv6 encapsulation traffic-class propagate ? -cr-RP/0/RP0/CPU0#NCS5500-702(config)#This profile is required to enable Segment Routing v6. We can define a static TC value or copy it from the payload.Note# The encapsulation option is deprecated in 7.4.1 as NCS 5500 will support both SRv6 base and SRv6 uSID modes. Therefore SRv6 related hardware command will follow hw-module profile segment-routing srv6 mode <> and are described belowhw-module profile segment-routing srv6 mode baseRP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 ? encapsulation Configure encapsulation parameters (DEPRECATED) mode Mode of operationRP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode ? 
base Base SRv6 (Format-1) support only micro-segment Micro-segment support onlyRP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode base ? encapsulation Configure encapsulation parameters -cr-RP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode baseThu Sep 9 06#23#24.077 UTCIn order to activate/deactivate this srv6 profile, you must manually reload the chassis/all line cardsThis hw-module mode enables Segment Routing v6 (SRv6) on the node using base SRH format.A reload of the chassis is needed to configure/change any SRv6 related hw-module profile.hw-module profile segment-routing srv6 mode base encapsulation RP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode base encapsulation ? traffic-class Control traffic-class field on IPv6 headerRP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode base encapsulation traffic-class ? -0x0-0xff- Traffic-class value (specified as 2 hexadecimal nibbles) propagate Propagate traffic-class from incoming packet/frameRP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode base encapsulation traffic-class 0xe0 RP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode base encapsulation ? traffic-class Control traffic-class field on IPv6 headerRP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode base encapsulation traffic-class ? -0x0-0xff- Traffic-class value (specified as 2 hexadecimal nibbles) propagate Propagate traffic-class from incoming packet/frameRP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode base encapsulation traffic-class propagate The encapsulation option followed by the base SRH format decides the QoS bits for the SRv6 header. The TC value on the IPv6 header can be either set explicitly to any value in the range of 0x00-0xff . This value is global for all the SRv6 services that starts on the box. When configured with propagate, QoS marking of the incoming payload is propagated to the SRv6 TC marking. For L3VPN IP Precedence is copied to the SRv6 header IPv6 precedence. For L2VPN the Layer 2 PCP markings are copied to the SRv6 Precedence.A reload of the chassis is needed to configure/change any SRv6 related hw-module profile.hw-module profile segment-routing srv6 mode micro-segment format f3216RP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode micro-segment ? format Specify carrier formatRP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode micro-segment format ? f3216 32-bit block and 16-bit IDsRP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode micro-segment format f3216 ? encapsulation Configure encapsulation parameters -cr-RP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode micro-segment format f3216Wed Aug 25 22#58#12.659 PDTIn order to activate/deactivate this srv6 profile, you must manually reload the chassis/all line cards Introdcued in IOS XR 7.3.1/7.4.1. with this hw-module profile enables SRv6 uSID instead of base SRH format on the node. A reload of the chassis is needed to configure/change any SRv6 related hw-module profile.hw-module profile segment-routing srv6 mode micro-segment format f3216 encapsulationRP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode micro-segment format f3216 encapsulation ? 
traffic-class Control traffic-class field on IPv6 headerRP/0/RP1/CPU0#5508-1-741C(config)#$ode micro-segment format f3216 encapsulation traffic-class ? -0x0-0xff- Traffic-class value (specified as 2 hexadecimal nibbles) propagate Propagate traffic-class from incoming packet/frameRP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode micro-segment format f3216 encapsulation traffic-class 0xe0Wed Aug 25 23#01#54.477 PDTIn order to activate/deactivate this srv6 profile, you must manually reload the chassis/all line cards RP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile segment-routing srv6 mode micro-segment format f3216 encapsulation traffic-class propagateWed Aug 25 23#01#59.065 PDTIn order to activate/deactivate this srv6 profile, you must manually reload the chassis/all line cards The encapsulation option followed by the micro-segment mode decides the QoS bits for the SRv6 header. The TC value on the IPv6 header can be either set explicitly to any value in the range of 0x00-0xff . This value is global for all the SRv6 services on the box. When configured with propagate, QoS marking of the incoming payload is propagated to the SRv6 TC marking. For L3VPN IP Precedence is copied to the SRv6 header IPv6 precedence. For L2VPN the Layer 2 PCP markings are copied to the SRv6 Precedence.A reload of the chassis is needed to configure/change any SRv6 related hw-module profile.External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/segment-routing/66x/b-segment-routing-cg-ncs5500-66x/b-segment-routing-cg-ncs5500-66x_chapter_011.htmlsr-policyRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile sr-policy ? V6-Null-label-autopush Configure IPV6 NULL label autopush for SR policyRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile sr-policy V6-Null-label-autopush ? -cr-RP/0/RP0/CPU0#NCS5500-702(config)#This profiles enables the V6 null label autopush over SR-policy. DSCP preserve would be disabled.This profile is not supported with 6VPE (v6 null label would be pushed rather than 6vpe label).With this feature, we can use up to 12 labels for v6 too.statsRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile stats ? acl-permit Enable ACL permit stats. enh-sr-policy Enable Enhanced_SR_Policy_Scale stats profile counter. ingress-sr Enable ingress SR stats profile counter. qos-enhanced Enable enhanced QoS stats. tx-scale-enhanced Enable enhanced TX stats scale (Non L2 stats).RP/0/RP0/CPU0#NCS5500-702(config)#Enabling one of these profiles will disable any other previously applied.stats acl-permitRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile stats ? acl-permit Enable ACL permit stats. enh-sr-policy Enable Enhanced_SR_Policy_Scale stats profile counter. ingress-sr Enable ingress SR stats profile counter. qos-enhanced Enable enhanced QoS stats. tx-scale-enhanced Enable enhanced TX stats scale (Non L2 stats).RP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile stats acl-permit ? -cr-RP/0/RP0/CPU0#NCS5500-702(config)#By default, access-lists don’t count permit ACE hits but only the deny ones.It could be very important to track the packets permitted, for example when using ABF (ACL Based Forwarding).This profile allocates statistic entries to permit ACEs.If acl-permit is configured, qos-enhanced or other options are disabled. 
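A minimal sketch of how this is typically used (the ACL name and location are hypothetical; the show command is the standard hardware ACL counters display):
RP/0/RP0/CPU0:router(config)# hw-module profile stats acl-permit
RP/0/RP0/CPU0:router(config)# commit
! after reloading the line cards, permit ACE hits appear alongside the deny counters:
RP/0/RP0/CPU0:router# show access-lists ipv4 ABF-ACL hardware ingress location 0/0/CPU0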
To return to the default mode, use “no hw-module…” and reload.External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/ip-addresses/b-ip-addresses-cr-ncs5500/b-ncs5500-ip-addresses-cli-reference_chapter_01.html#id_82511 https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/ip-addresses/63x/b-ip-addresses-configuration-guide-ncs5500-63x/b-ip-addresses-configuration-guide-ncs5500-63x_chapter_010.html https#//xrdocs.io/ncs5500/tutorials/security-acl-on-ncs5500-part1/Note# This profiles is not needed for J2 based systems.stats ingress-srRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile stats ? acl-permit Enable ACL permit stats. enh-sr-policy Enable Enhanced_SR_Policy_Scale stats profile counter. ingress-sr Enable ingress SR stats profile counter. qos-enhanced Enable enhanced QoS stats. tx-scale-enhanced Enable enhanced TX stats scale (Non L2 stats).RP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile stats ingress-sr ? -cr-RP/0/RP0/CPU0#NCS5500-702(config)#Profile enabling per-label statistics at “ingress” for Segment Routing labels (only for the labels within configured SRGB and SRLB).Once activated, QoS Stats will not work for the same labeled packets.stats enh-sr-policyRP/0/RP0/CPU0#NCS5500-663(config)#hw-module profile stats ? acl-permit Enable ACL permit stats. egress-stats-scale Enable Egress Stats_Scale profile counter. enh-sr-policy Enable Enhanced_SR_Policy_Scale stats profile counter. ingress-sr Enable ingress SR stats profile counter. qos-enhanced Enable enhanced QoS stats. tx-scale-enhanced Enable enhanced TX stats scale (Non L2 stats).RP/0/RP0/CPU0#NCS5500-663(config)#hw-module profile stats enh-sr-policy ? -cr-RP/0/RP0/CPU0#NCS5500-663(config)#Profile increasing the counters available in the egress pipeline from 16K (default) to 24K. These counters are taken from the ingress pipeline, impacting the scale for ACL/QOS/LPTS/etc. This command also enables ingress SR counters. QoS stats are disabled while enabling this particular stats profile. Higher egress stats# upto 16K stats available for MPLS cases like RSVP-TE and SR-TE, 4K for ARP/ND and 4K for L2-AC’s.stats qos-enhancedRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile stats ? acl-permit Enable ACL permit stats. enh-sr-policy Enable Enhanced_SR_Policy_Scale stats profile counter. ingress-sr Enable ingress SR stats profile counter. qos-enhanced Enable enhanced QoS stats. tx-scale-enhanced Enable enhanced TX stats scale (Non L2 stats).RP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile stats qos-enhanced ? 
-cr-RP/0/RP0/CPU0#NCS5500-702(config)#By default, we only use two counters for policers.This profile enables the 4-counter mode# In 2-counter mode, statistics for “confirm” and “violate” packets are collected in hardware and displayed to the user via the “show policy-map” commandClass C1 Classification statistics (packets/bytes) (rate - kbps) Matched # 52198655/6681427840 13 Transmitted # 6138929/785782912 1 Total Dropped # 46059726/5895644928 12 Policing statistics (packets/bytes) (rate - kbps) Policed(conform) # 6138929/785782912 1 Policed(exceed) # 0/0 0 Policed(violate) # 46059726/5895644928 12 Policed and dropped # 46059726/5895644928 In 4-counter mode, additionally “exceed” packets statistics are collected and displayed.Class C1 Classification statistics (packets/bytes) (rate - kbps) Matched # 52198655/6681427840 13 Transmitted # 6138929/785782912 1 Total Dropped # 46059726/5895644928 12 Policing statistics (packets/bytes) (rate - kbps) Policed(conform) # 6138929/785782912 1 Policed(exceed) # 11326/1449728 0 Policed(violate) # 46059726/5895644928 12 Policed and dropped # 46071052/5897095778 The trade-off for such visibility will be half the scale.External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/qos/b-ncs5500-qos-cli-reference/b-ncs5500-qos-cli-reference_chapter_01.html#wp9836538690tx-scale-enhancedRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile stats tx-scale-enhanced ? acl-permit Enable ACL permit stats. ingress-sr Enable ingress SR stats. qos-enhanced Enable enhanced QoS stats. -cr-RP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile stats tx-scale-enhanced acl-permit ? -cr-RP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile stats tx-scale-enhanced ingress-sr ? -cr-RP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile stats tx-scale-enhanced qos-enhanced ? -cr-RP/0/RP0/CPU0#NCS5500-702(config)#This profile changes the “stats” EEDB bank to use “half” entries and thereby increases the EEDB scale available. It has been specifically created for large TE deployments, extending the head-end scale from 4k to 14k. Only on J+ systems.Don’t use in mixed (J/J+) chassis. Also it can’t support L2 features.stats egress-stats-scaleRP/0/RP0/CPU0#NCS5500-663(config)#hw-module profile stats ? acl-permit Enable ACL permit stats. egress-stats-scale Enable Egress Stats_Scale profile counter. enh-sr-policy Enable Enhanced_SR_Policy_Scale stats profile counter. ingress-sr Enable ingress SR stats profile counter. qos-enhanced Enable enhanced QoS stats. tx-scale-enhanced Enable enhanced TX stats scale (Non L2 stats).RP/0/RP0/CPU0#NCS5500-663(config)#hw-module profile stats egress-stats-scale ? -cr-RP/0/RP0/CPU0#NCS5500-663(config)#Very similar the enh profile, extending the MPLS scale from 8k to 20k counters (4k for ARP/ND) and at the expense of the L2 counters (no L2 with this one).Note# This profiles is not needed for J2 based systemstcam acl-prefixRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam ? acl-prefix ACL table to configure fib Forwarding table to configure format format of the tcam entryRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam acl-prefix ? percent percent to configureRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam acl-prefix percent ? -0-100- value in percentRP/0/RP0/CPU0#NCS5500-702(config)#Hybrid (or Scaled) ACLs are stored in two places# iTCAM and eTCAM. 
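As explained just below, on Jericho "-SE" systems the eTCAM must be explicitly carved for ACLs before hybrid ACLs can be committed; a minimal sketch (the 20% value is only an example, size it to your ACL scale) is#
hw-module profile tcam acl-prefix percent 20
followed by a reload of the impacted line cards / chassis for the carving to take effect.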
By default, from 6.3.2 onwards, the eTCAM is not carved to receive ACL information. This hardware profile helps with this carving. First, it’s only relevant for products with eTCAM. Hybrid ACLs are not supported on “base” systems or LCs, only on “scale / -SE” ones. Second, it’s only relevant to products based on Jericho. With Jericho+, the eTCAM is much larger and a portion of the database is allocated to ACLs dynamically, so no configuration is needed. Finally, pay specific attention to the release used on the router. Indeed, between 6.2.x and 6.3.2, the eTCAM was carved by default with 20% for ACL and 80% for IPv4 routes. It’s not the case before 6.2 nor from 6.3.2 onwards, where the default configuration allocates 100% of the eTCAM to IPv4 routes. Note# if you don’t configure space for ACL in eTCAM, the hybrid ACL configuration will be rejected at commit. This profile is not needed for J2 based systems.External documentation#These subtleties and many more details on Hybrid ACLs are covered in this external blog post#https#//xrdocs.io/ncs5500/tutorials/security-acl-on-ncs5500-part2-hybrid-acl/tcam fibRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam ? acl-prefix ACL table to configure fib Forwarding table to configure format format of the tcam entryRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam fib ? ipv4 Configure ipv4 addresses in TCAM ipv6 Configure ipv6 addresses in TCAM v6mcast Multicast addressRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam fib ipv4 ? unicast Unicast addressRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam fib ipv4 unicast ? percent percent to configure prefix ip prefix lengthRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam fib ipv4 unicast percent ? -1-100- value in percentRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam fib ipv4 unicast percent 20 ? -cr-RP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam fib ipv4 unicast prefix ? -0-32- IPv4 prefix length.RP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam fib ipv4 unicast prefix 24 ? percent prefix percentage to configureRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam fib ipv4 unicast prefix 24 percent ? WORD prefix precent 0.0-100.00000RP/0/RP0/CPU0#NCS5500-702(config)#Only valid on the first generation of eTCAM (with Jericho ASIC). This hardware profile CLI permits the allocation / carving of the memory for specific route types. It also influences the way routes are stored in the different databases. It’s not operational on J+ eTCAM since the allocation is dynamic and all routes are stored in external TCAM by default.External documentation# Mentioned in the CiscoLive breakout Session# https#//www.ciscolive.com/c/dam/r/ciscolive/emea/docs/2019/pdf/BRKSPG-2900.pdftcam format aclRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam ? acl-prefix ACL table to configure fib Forwarding table to configure format format of the tcam entryRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam format ? access-list Access List formatRP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam format access-list ? ipv4 IPv4 ipv6 IPv6RP/0/RP0/CPU0#NCS5500-702(config)#hw-module profile tcam format access-list ipv4 ? 
common-acl enable common-acl, 1 bit qualifier dst-addr destination address, 32 bit qualifier dst-port destination L4 Port, 16 bit qualifier enable-capture Enable ACL based mirroring (Included by default) enable-set-ttl Enable Setting TTL field (Included by default) frag-bit fragment-bit, 1 bit qualifier interface-based Enable non-shared interface based ACL location Location of format access-list ipv4 config packet-length packet length, 16 bit qualifier port-range ipv4 port range qualifier, 24 bit qualifier precedence precedence/dscp, 8 bit qualifier proto protocol type, 8 bit qualifier src-addr source address, 32 bit qualifier src-port source L4 port, 16 bit qualifier tcp-flags tcp-flags, 6 bit qualifier ttl-match Enable matching on TTL field udf1 user defined filter udf2 user defined filter udf3 user defined filter udf4 user defined filter udf5 user defined filter udf6 user defined filter udf7 user defined filter udf8 user defined filter -cr-RP/0/RP0/CPU0#NCS5500-702(config)#These hardware profiles are necessary to enable the UDK/UDF (User-Defined TCAM Key and Field) used with ingress ACLs. They permit the definition of the TCAM key. They also allow TTL rewrite based on tunnel policy when used with enable-set-ttl and ttl-match, and they disable ACL sharing (to support many small, unique ACLs) when used with interface-based. Indeed, the default behavior of an ingress access-list is to be re-usable# that means ACE lines of an ingress ACL applied on multiple ports of the same NPU are only counted once (it’s not the case with egress ACLs). This hardware profile, when used with the keyword “interface-based”, will disable the re-usability and move to the unique-ACL mode. Starting from 6.5.2, we don’t support ACL match on packet-length (and ranges) by default; it’s mandatory to use a specific UDK as described in this blog post#https#//xrdocs.io/ncs5500/tutorials/acl-packet-length-matching-ncs55xx-and-ncs5xx/An example for configuring unique ACLs for both IPv4 and IPv6 with the following fields (SRC_IP, DST_IP, SRC_PORT, DST_PORT) available for ACE matching is#hw-module profile tcam format access-list ipv4 src-addr src-port dst-addr dst-port interface-basedhw-module profile tcam format access-list ipv6 src-addr dst-addr dst-port interface-basedNote# when switching between shared and interface-based ACLs, all existing interface attachments need to be removed first.External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/ip-addresses/63x/b-ip-addresses-configuration-guide-ncs5500-63x/b-ip-addresses-configuration-guide-ncs5500-63x_chapter_010.htmlmdbThe Modular Database (MDB) is applicable to all J2 based (NCS 5700) PIDs. This new hardware module profile is introduced to choose the MDB carving used in J2 Native mode, i.e. all fixed PIDs based on the J2 NPU and all modular NCS 5500 chassis operating in Native mode.RP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile mdb ? l3max l3max profile for router containing non-TCAM cards l3max-se l3max-se profile for router containing only TCAM cards hw-module profile mdb l3maxRP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile mdb l3max ? -cr-RP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile mdb l3maxWed Aug 25 23#42#53.961 PDTIn order to activate this new mdb profile, you must manually reload the chassis. Introduced in IOS XR 7.4.1, this profile is applicable to J2 based PIDs (NCS 5700) without any eTCAM (as well as eTCAM based cards when used in mixed mode). It carves out the NPU modular database to maximize the scale for L3-heavy deployments. 
This profile should be applied to all Line Cards in a modular system. The l3max profile is the default MDB profile in a J2 fixed box or an NCS 5500 modular chassis operating in J2 Native mode. To change MDB profiles, the router needs a reload.hw-module profile mdb l3max-seRP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile mdb l3max-se ? -cr-RP/0/RP1/CPU0#5508-1-741C(config)#hw-module profile mdb l3max-seWed Aug 25 23#43#00.050 PDTIn order to activate this new mdb profile, you must manually reload the chassis. Introduced in IOS XR 7.4.1, this profile is applicable to J2 based PIDs with external TCAM. It carves out the NPU modular database to maximize the scale for L3-heavy deployments. For a modular system, all the LCs must be the scale variant (i.e. with eTCAM) for this profile to be applicable. If there is a mix of scale and non-scale LCs, then the system falls back to the l3max profile. To change MDB profiles, the router needs a reload.quadRP/0/RP0/CPU0#55A2-MOD-SE(config)#hw-module quad 0 location 0/0/CPU0 mode ? WORD 10g or 25g, (10g mode also operates 1g transceivers)RP/0/RP0/CPU0#55A2-MOD-SE(config)#Feature used only on 25GE-capable platforms (25GE native, not related to the breakout option here)# all NCS55A2-MOD versions NCS55A1-24Q6H-S NCS55A1-48Q6H. By default, those 1G/10G/25G ports are configured in TF (TwentyFiveGigE, i.e. 25G) mode only. An inserted SFP or SFP+ optic will not come up. The profile allows the configuration of a block of 4 ports (a quad) to be used in 1G/10G mode. Which ports quad 0 represents depends on the platform.External documentation# https#//www.cisco.com/c/en/us/td/docs/routers/ncs5500/software/interfaces/configuration/guide/b-interfaces-hardware-component-cg-ncs5500-66x/b-interfaces-hardware-component-cg-ncs5500-66x_chapter_01100.htmlserviceRP/0/RP0/CPU0#NCS5500-702(config)#hw-module service ? offline Take all services on the card offlineRP/0/RP0/CPU0#NCS5500-702(config)#hw-module service offline ? location Location to configureRP/0/RP0/CPU0#NCS5500-702(config)#hw-module service offline location ? 0/0/CPU0 Fully qualified location specification 0/3/CPU0 Fully qualified location specification 0/4/CPU0 Fully qualified location specification 0/6/2 Fully qualified location specification 0/6/CPU0 Fully qualified location specification 0/7/CPU0 Fully qualified location specification 0/RP0/CPU0 Fully qualified location specification WORD Fully qualified location specificationRP/0/RP0/CPU0#NCS5500-702(config)#Meant to put a line card offline, but according to the following release notes#“The offline diagnostics functionality is not supported in NCS 5500 platform. Therefore, the hw-module service offline location command will not work. However, you can use the (sysadmin)# hw-module shutdown location command to bring down the LC.”External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/general/66x/release/notes/b-release-notes-ncs55k-r663.htmltcamRP/0/RP0/CPU0#NCS5500-702(config)#hw-module tcam ? fib Forwarding table to configureRP/0/RP0/CPU0#NCS5500-702(config)#hw-module tcam fib ? ipv4 Configure ipv4 protocolRP/0/RP0/CPU0#NCS5500-702(config)#hw-module tcam fib ipv4 ? scaledisable Configure scale mode for TCAM cardRP/0/RP0/CPU0#NCS5500-702(config)#hw-module tcam fib ipv4 scaledisable ? 
-cr-RP/0/RP0/CPU0#NCS5500-702(config)#This profile exists specifically for the application of URPF on a Jericho-based eTCAM system or line card.It will disable a mechanism named “double capacity” that allowed to double the size of the 80b key memory, extending the route count from 1M to 2M entries.External documentation#Details are available on the external blog post#https#//xrdocs.io/ncs5500/tutorials/ncs5500-urpf/On CCO#https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/security/62x/b-system-security-cg-ncs5500-62x/b-system-security-cg-ncs5500-62x_chapter_01001.htmlroute-statsIn IOS XR 7.0.2#RP/0/RP0/CPU0#NCS5500-702(config)#hw-module route-stats ? l3mcast Layer 3 MulticastRP/0/RP0/CPU0#NCS5500-702(config)#hw-module route-stats l3mcast ? ipv4 IPv4 Multicast vrf Vrf NameRP/0/RP0/CPU0#NCS5500-702(config)#hw-module route-stats l3mcast ipv4 ? res Access list name - maximum 64 characters WORD Access list name - maximum 64 charactersRP/0/RP0/CPU0#NCS5500-702(config)#hw-module route-stats l3mcast ipv4 TEST ? -cr-RP/0/RP0/CPU0#NCS5500-702(config)#In IOS XR 6.6.3#RP/0/RP0/CPU0#NCS5500-663(config)#hw-module route-stats ? l3mcast Layer 3 MulticastRP/0/RP0/CPU0#NCS5500-663(config)#hw-module route-stats l3mcast ? ipv4 IPv4 Multicast ipv6 IPv6 Multicast vrf Vrf NameRP/0/RP0/CPU0#NCS5500-663(config)#hw-module route-stats l3mcast ipv4 ? egress per route per OIF stats ingress per route statsRP/0/RP0/CPU0#NCS5500-663(config)#hw-module route-stats l3mcast ipv4 ingress ? WORD Access list name - maximum 64 charactersRP/0/RP0/CPU0#NCS5500-663(config)#hw-module route-stats l3mcast ipv4 egress ? WORD Access list name - maximum 64 charactersRP/0/RP0/CPU0#NCS5500-663(config)#hw-module route-stats l3mcast ipv6 ? egress per route per OIF stats ingress per route statsRP/0/RP0/CPU0#NCS5500-663(config)#hw-module route-stats l3mcast ipv6 ingress ? WORD Access list name - maximum 64 charactersRP/0/RP0/CPU0#NCS5500-663(config)#hw-module route-stats l3mcast ipv6 egress ? WORD Access list name - maximum 64 charactersRP/0/RP0/CPU0#NCS5500-663(config)#hw-module route-stats l3mcast vrf ? WORD Name of VRF default Default VRFRP/0/RP0/CPU0#NCS5500-663(config)#hw-module route-stats l3mcast vrf foo ? ipv4 IPv4 Multicast ipv6 IPv6 MulticastRP/0/RP0/CPU0#NCS5500-663(config)#Where mcast-counter is an access-list defined in the configurationRP0/0/RP0/CPU0#NCS5500-663(config)# ipv4 access-list mcast-counterRP0/0/RP0/CPU0#NCS5500-663(config-acl)# 10 permit ipv4 host 10.1.1.2 host 224.2.151.1RP0/0/RP0/CPU0#NCS5500-663(config-acl)# 20 permit ipv4 10.1.1.0/24 232.0.4.0/22RP0/0/RP0/CPU0#NCS5500-663(config-acl)#commitBefore the introduction of this feature, we only have a very brief implementation of an hw-profile only in 6.1.31 (“hw-module profile mfib stats”). Otherwise the count packets and bytes per (S,G) was not available.The ingress stats are always per (S,G). The egress stats are always per OLE.External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/multicast/66x/configuration/guide/b-multicast-cg-ncs55k-66x/b-multicast-cg-ncs55k-66x_chapter_010.htmlstats-fpgahw-module stats-fpga location 0/x/CPU0 enableCurrently only supported on the NCS560. In the roadmap for other platforms equiped with this FGPA# MOD-SE Line Cards NCS55A2-MOD-SEvrrpscaleRP/0/RP0/CPU0#NCS5500-663(config)#hw-module vrrpscale ? enable enable VRRP scalingRP/0/RP0/CPU0#NCS5500-663(config)#hw-module vrrpscale enable ? 
-cr-RP/0/RP0/CPU0#NCS5500-663(config)#Extends the VRRP scale from 16 sessions (or even 13 with BFD) to 255. In 7.1.1, this profile is also required to enable HSRP.External documentation# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/ip-addresses/b-ip-addresses-cr-ncs5500/b-ncs5500-ip-addresses-cli-reference_chapter_01001.html#id_91192storm-control-combine-policer-bwhw-module storm-control-combine-policer-bw enableRP/0/RP1/CPU0#5508-1-741C(config)#hw-module storm-control-combine-policer-bw ? enable Enable storm-control-combine-policer-bw modeRP/0/RP1/CPU0#5508-1-741C(config)#hw-module storm-control-combine-policer-bw enable ? -cr-RP/0/RP1/CPU0#5508-1-741C(config)#hw-module storm-control-combine-policer-bw enableThu Aug 26 22#24#41.022 PDTIn order to activate this storm-control-combine-policer-bw, you must manually reload the chassisThis mode is available from IOS XR 7.4.1 and is applicable to J2 based PIDs with external TCAM. For L2 storm control, by default the three policers (broadcast, multicast and unknown unicast) are independent. This hw-module profile allows combining all of them to achieve a combined control rate for all BUM traffic. For example, if broadcast storm-control is set to 100 PPS, then in default mode broadcast is limited to 100 PPS while unknown unicast and multicast are not subjected to any rate limit. In the combined mode, the same configuration of a broadcast storm-control policer of 100 PPS will rate limit all BUM traffic to 100 PPS. Note that in combined mode the policer units used must be the same for all three policers. If all three policers are configured, then the combined storm control rate is the sum of all three policers.Note# To enable/disable the combined policer for storm control using this profile, the router needs to be reloaded.ConclusionMultiple hardware profiles have been added release after release to meet specific use-cases. This document aims to clarify all the variations available.Note that several profiles are mutually exclusive and cannot be used simultaneously, like# hw-module profile stats acl-permit with hw-module profile stats qos-enhanced; hw-module profile qos ingress-model peering loc with QPPB features; hw-module profile qos ingress-model peering loc with hw-module profile qos hqos-enable; hw-module profile qos hqos-enable with PBTS; hw-module fib mpls label lsr-optimized with hw-module fib ipv4 scale internet-optimized; hw-module fib mpls ldp lsr-optimized with EVPN services. Bookmark this link and check it regularly for updates.", "url": "/tutorials/ncs5500-hw-module-profiles/", "author": "Nicolas Fevrier", "tags": "iosxr, dnx" } , "tutorials-bgp-evpn-based-single-active-multi-homing": { "title": "BGP-EVPN based Single-Active Multi-Homing", "content": " On This Page Implementation of BGP-EVPN based Single-Active Multi-Homing Reference Topology# Task 1# Configure Ethernet bundles on Host-1 for multi-homing Task 2# Configure EVPN based single-active multi-homing Task 3# Configure BGP EVPN based layer-2 multipoint service Task 4# Verify that EVPN based single-active multi-homing is operational Task 5# Configure the BGP-EVPN Distributed Anycast Gateway for inter-subnet routing Implementation of BGP-EVPN based Single-Active Multi-HomingIn single-active multi-homing mode, only a single Leaf among a group of Leafs attached to a Host is allowed to forward the traffic to and from the Host on a given VLAN.In this post we will cover BGP-EVPN based Single-Active Multi-Homing of CE/Hosts. Similar to Active/Active Multi-homing, Single-Active is also achieved with the EVPN Ethernet Segment feature. 
Single-active offers redundant connectivity with forwarding for a VLAN on a single link at a time with failover to the second link in case of active link’s failure. Single-Active load balancing’s strengths arise from directing traffic to a single uplink as opposed to all-active’s approach of ECMP-hashing. This approach is very useful for network scenarios where policing, metering and billing are required.Reference Topology#For this post, we will leverage EVPN control-plane and ISIS Segment Routing based forwarding that we configured in a previous post.As shown in the above topology, Host-1 is multi-homed to Leaf-1 and Leaf-2. For EVPN single-active multi-homing, each link towards the Leaf will be in a unique ethernet bundle interface. VLAN 10 and 20 are allowed on both the ethernet-bundles. As both the links are in separate ethernet bundles, the host H-1 will flood traffic at first to both the Leafs but only the Ethernet-Segment’s Designated Forwarder (DF) Leaf will forward the traffic. As a result, the host will have only one ethernet bundle interface in its forwarding table to forward the traffic and achieve per VLAN single-active multi-homing.Task 1# Configure Ethernet bundles on Host-1 for multi-homingAs per the reference topology Host-1 is multi-homed to Leaf-1 and Leaf-2 via LACP bundle-ethernet 11 going to Leaf-1 and bundle-ethernet 12 going to Leaf-2. ASR9K is acting as the host/CE with IP address 10.0.0.10/24 configured on a BVI. Following is the configuration of LAG on Host-1.The LAG on Host-1 will come up after we configure single-active multi-homing using EVPN Ether-Segment on the Leaf-1 and Leaf-2.Note# In this post we will configure VLAN 10 to show the single-active behavior. Configuration of VLAN 20 is out of scope for this post but follows the same procedure.Host-1#interface Bundle-Ether 11 description ~Bundle to Leaf-1~!interface TenGigE0/0/2/0 description ~Link to Leaf-1 ten0/0/0/47~ bundle id 11 mode active!interface Bundle-Ether11.10 l2transport encapsulation dot1q 10 rewrite ingress tag pop 1 symmetric!interface Bundle-Ether 12 description ~Bundle to Leaf-2~!interface TenGigE0/0/2/1 description ~Link to Leaf-2 ten0/0/0/47~ bundle id 12 mode active!interface Bundle-Ether12.10 l2transport encapsulation dot1q 10 rewrite ingress tag pop 1 symmetric!interface BVI10 description ~Host-1 IP~ ipv4 address 10.0.0.10 255.255.255.0!l2vpn bridge group bg1 bridge-domain bd-10 interface Bundle-Ether11.10 ! interface Bundle-Ether12.10 ! routed interface BVI10 !!Task 2# Configure EVPN based single-active multi-homingConfigure Leaf-1 and Leaf-2 to provision single-active multi-homing to host-1. The set of links from Host-1 to the Leafs will be configured as Ethernet Segment on the Leafs.Configure the bundles on the Leaf-1 and Leaf-2. Use below configuration for the Leafs.Note# For single-active multi-homing, the LACP System MAC address should not be configured on ethernet bundle interface.Leaf-1#interface TenGigE0/0/0/47 description ~Link to Host-1~ bundle id 11 mode active!interface Bundle-Ether 11 description ~Bundle to Host-1~!Leaf-2interface TenGigE0/0/0/47 description ~Link to Host-1~ bundle id 12 mode active!interface Bundle-Ether 12 description ~Bundle to Host-1~!Configure Ethernet Segment id (ESI) for the bundle interface to enable multi-homing of the host. Use the identical ethernet-segment configuration on both the Leafs, though the ethernet-bundle interface is different for both Leafs. 
Configure load-balancing mode to single-active using “single-active” keyword for ethernet-segment.Note# Single-active mode is the default for Physical interfaces and no extra configuration to enable single-active is required.Leaf-1#evpn interface Bundle-Ether 11 ethernet-segment identifier type 0 11.11.11.11.11.11.11.11.11 load-balancing-mode single-active !Leaf-2#evpn interface Bundle-Ether 12 ethernet-segment identifier type 0 11.11.11.11.11.11.11.11.11 load-balancing-mode single-active !Use “show bundle bundle-ether ” CLI command to verify the state of the bundle interface on Leafs and Host-1.Leaf-1#RP/0/RP0/CPU0#Leaf-1#show bundle bundle-ether 11Bundle-Ether11 Status# Up Local links <active/standby/configured># 1 / 0 / 1 Local bandwidth <effective/available># 10000000 (10000000) kbps MAC address (source)# 00bc.601c.d0da (Chassis pool) Inter-chassis link# No Minimum active links / bandwidth# 1 / 1 kbps Maximum active links# 64 Wait while timer# 2000 ms Load balancing# Link order signaling# Not configured Hash type# Default Locality threshold# None LACP# Operational Flap suppression timer# Off Cisco extensions# Disabled Non-revertive# Disabled mLACP# Not configured IPv4 BFD# Not configured IPv6 BFD# Not configured Port Device State Port ID B/W, kbps -------------------- --------------- ----------- -------------- ---------- Te0/0/0/47 Local Active 0x8000, 0x0003 10000000 Link is ActiveRP/0/RP0/CPU0#Leaf-1#Leaf-2RP/0/RP0/CPU0#Leaf-2#sh bundle bundle-ether 12Bundle-Ether12 Status# Up Local links <active/standby/configured># 1 / 0 / 1 Local bandwidth <effective/available># 10000000 (10000000) kbps MAC address (source)# 00bc.600e.40da (Chassis pool) Inter-chassis link# No Minimum active links / bandwidth# 1 / 1 kbps Maximum active links# 64 Wait while timer# 2000 ms Load balancing# Link order signaling# Not configured Hash type# Default Locality threshold# None LACP# Operational Flap suppression timer# Off Cisco extensions# Disabled Non-revertive# Disabled mLACP# Not configured IPv4 BFD# Not configured IPv6 BFD# Not configured Port Device State Port ID B/W, kbps -------------------- --------------- ----------- -------------- ---------- Te0/0/0/47 Local Active 0x8000, 0x0003 10000000 Link is ActiveRP/0/RP0/CPU0#Leaf-2#Above output shows that the bundle interfaces are up. Next, lets provision the EVPN layer-2 service.Task 3# Configure BGP EVPN based layer-2 multipoint serviceConfigure the EVPN layer-2 service between Leaf-1, Leaf-2 and Leaf-5 and then check the status of ethernet segment. For detailed explanation of configuring BGP EVPN based layer-2 service, refer to this post.Leaf-1#interface Bundle-Ether 11.10 l2transport encapsulation dot1q 10 rewrite ingress tag pop 1 symmetric !l2vpn bridge group bg-1 bridge-domain bd-10 interface Bundle-Ether 11.10 evi 10 ! !evpn evi 10 bgp route-target import 1001#11 route-target export 1001#11 ! advertise-mac ! !Leaf-2#interface Bundle-Ether 12.10 l2transport encapsulation dot1q 10 rewrite ingress tag pop 1 symmetric !l2vpn bridge group bg-1 bridge-domain bd-10 interface Bundle-Ether 12.10 evi 10 ! !evpn evi 10 bgp route-target import 1001#11 route-target export 1001#11 ! advertise-mac ! !Leaf-5#interface TenGigE0/0/0/45.10 l2transport encapsulation dot1q 10 rewrite ingress tag pop 1 symmetric!evpn evi 10 bgp route-target import 1001#11 route-target export 1001#11 ! advertise-mac ! !!l2vpn bridge group bg-1 bridge-domain bd-10 interface TenGigE0/0/0/45.10 ! evi 10 ! 
!Host-5 is single-homed to Leaf-5, below is the Host-5 configuration.Host-5#interface TenGigE0/0/1/3.10 description ~Link to Leaf-5~ ipv4 address 10.0.0.50 255.255.255.0 encapsulation dot1q 10Task 4# Verify that EVPN based single-active multi-homing is operationalAs we have configured the BGP EVPN layer-2 service as well as the ethernet segment, lets verify the ethernet segment status by “show evpn ethernet-segment detail”.Leaf-1#RP/0/RP0/CPU0#Leaf-1#show evpn ethernet-segment detailLegend#Ethernet Segment Id Interface Nexthops ------------------------ ---------------------------------- --------------------0011.1111.1111.1111.1111 BE11 1.1.1.1 2.2.2.2 ES to BGP Gates # Ready ES to L2FIB Gates # Ready Main port # Interface name # Bundle-Ether11 Interface MAC # 00bc.601c.d0db IfHandle # 0x08000144 State # Up Redundancy # Not Defined ESI type # 0 Value # 11.1111.1111.1111.1111 ES Import RT # 1111.1111.1111 (Local) Source MAC # 0000.0000.0000 (N/A) Topology # Operational # MH, Single-active Configured # Single-active (AApS) Service Carving # Auto-selection Multicast # Disabled Peering Details # 1.1.1.1 [MOD#P#00] 2.2.2.2 [MOD#P#00] Service Carving Results# Forwarders # 1 Permanent # 0 Elected # 1 Not Elected # 0 MAC Flushing mode # STP-TCN Peering timer # 3 sec [not running] Recovery timer # 30 sec [not running] Carving timer # 0 sec [not running] Local SHG label # 24005 Remote SHG labels # 1 24005 # nexthop 2.2.2.2Leaf-2RP/0/RP0/CPU0#Leaf-2#sh evpn ethernet-segment detail Legend#Ethernet Segment Id Interface Nexthops ------------------------ ---------------------------------- --------------------0011.1111.1111.1111.1111 BE12 1.1.1.1 2.2.2.2 ES to BGP Gates # Ready ES to L2FIB Gates # Ready Main port # Interface name # Bundle-Ether12 Interface MAC # 00bc.600e.40db IfHandle # 0x0800011c State # Up Redundancy # Not Defined ESI type # 0 Value # 11.1111.1111.1111.1111 ES Import RT # 1111.1111.1111 (Local) Source MAC # 0000.0000.0000 (N/A) Topology # Operational # MH, Single-active Configured # Single-active (AApS) Service Carving # Auto-selection Multicast # Disabled Peering Details # 1.1.1.1 [MOD#P#00] 2.2.2.2 [MOD#P#00] Service Carving Results# Forwarders # 1 Permanent # 0 Elected # 0 Not Elected # 1 MAC Flushing mode # STP-TCN Peering timer # 3 sec [not running] Recovery timer # 30 sec [not running] Carving timer # 0 sec [not running] Local SHG label # 24005 Remote SHG labels # 1 24005 # nexthop 1.1.1.1In the above output we can observe that Leaf-1 has bundle-ethernet 11 and Leaf-2 has bundle-ethernet 12 in ‘Up’ state. Both have two next-hops, one being the Leaf itself and the second next-hop is the peer-leaf/PE. Operational state of the ethernet-segment is multi-homed with single-active load-balancing.The output of both the Leafs show that both are forwarders of 1 subnet (10.0.0.x/24 in our case), while Leaf-1 is elected as Designated Forwarder (DF) and Leaf-2 is the non-DF. 
This means that any Uniccast and BUM traffic that comes to Leaf-2 will not be forwarded and only Leaf-1 being the DF will forward it.Ping from Host-1 to Host-5 shows that the hosts are reachable.Host-1#RP/0/RSP0/CPU0#Host-1#ping 10.0.0.50Type escape sequence to abort.Sending 5, 100-byte ICMP Echos to 10.0.0.50, timeout is 2 seconds#!!!!!Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 msRP/0/RSP0/CPU0#Host-1#Let’s take a look at the BGP EVPN control-plane to verify that only Leaf-1 being the designated-forwarder for EVI 10 is advertising itself the next-hop and Leaf-2 is not announcing any MAC addresses related to EVI 10. This is due to the fact that for single-active load-balancing only one Leaf-1 should advertise the reachability.In the below output from Leaf-5 we can see the MAC address of Host-1 is learnt from Leaf-1 (rd 1.1.1.1#10) in a route-type 2 advertisement. As we look at Leaf-2’s route distinguishers (rd 2.2.2.2#10) we see that no MAC address is advertised for EVI 10. This verifies that only Leaf-1 will be programmed in Leaf-5 as the next-hop to reach to Host-1.Leaf-5 – Route advertisement from Leaf-1RP/0/RP0/CPU0#Leaf-5#show bgp l2vpn evpn rd 1.1.1.1#10Status codes# s suppressed, d damped, h history, * valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discardOrigin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight PathRoute Distinguisher# 1.1.1.1#10*>i[1][0011.1111.1111.1111.1111][0]/120 1.1.1.1 100 0 i* i 1.1.1.1 100 0 i*>i[2][0][48][6c9c.ed6d.1d90][0]/104 1.1.1.1 100 0 i* i 1.1.1.1 100 0 i*>i[3][0][32][1.1.1.1]/80 1.1.1.1 100 0 i* i 1.1.1.1 100 0 iProcessed 3 prefixes, 6 pathsRP/0/RP0/CPU0#Leaf-5#Leaf-5 – Route advertisement from Leaf-2RP/0/RP0/CPU0#Leaf-5#show bgp l2vpn evpn rd 2.2.2.2#10Status codes# s suppressed, d damped, h history, * valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discardOrigin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight PathRoute Distinguisher# 2.2.2.2#10*>i[1][0011.1111.1111.1111.1111][0]/120 2.2.2.2 100 0 i* i 2.2.2.2 100 0 i*>i[3][0][32][2.2.2.2]/80 2.2.2.2 100 0 i* i 2.2.2.2 100 0 iProcessed 2 prefixes, 4 pathsRP/0/RP0/CPU0#Leaf-5#Lastly, run “show evpn evi vpn-id 10 mac” command to verify the MAC address learnt for EVI 10. We see that Leaf-1 and Leaf-2 have learnt Host-5’s MAC address with Leaf-5 as the next-hop.Leaf-1RP/0/RP0/CPU0#Leaf-1#show evpn evi vpn-id 10 macVPN-ID Encap MAC address IP address Nexthop Label ---------- ------ -------------- --------------- -------------------------- --------10 MPLS 6c9c.ed6d.1d90 ## Bundle-Ether11.10 24004 10 MPLS a03d.6f3d.5447 ## 5.5.5.5 24010 RP/0/RP0/CPU0#Leaf-1#Leaf-2RP/0/RP0/CPU0#Leaf-2#show evpn evi vpn-id 10 macVPN-ID Encap MAC address IP address Nexthop Label ---------- ------ -------------- --------------- -------------------------- --------10 MPLS 6c9c.ed6d.1d90 ## 1.1.1.1 24004 10 MPLS a03d.6f3d.5447 ## 5.5.5.5 24010 RP/0/RP0/CPU0#Leaf-2#Leaf-5RP/0/RP0/CPU0#Leaf-5#show evpn evi vpn-id 10 macVPN-ID Encap MAC address IP addres Nexthop Label ---------- ------ -------------- --------------- -------------------------- --------10 MPLS 6c9c.ed6d.1d90 ## 1.1.1.1 24004 10 MPLS a03d.6f3d.5447 ## TenGigE0/0/0/45.10 24010 RP/0/RP0/CPU0#Leaf-5#As we observe Leaf-5’s output, we see that the Leaf-5 has programmed Leaf-1 as the only next-hop for Host-1’s MAC address reachability, although Host-1 is multi-homed to both Leaf-1 and Leaf-2. 
This verifies that single-active dual-homing is operational and that at any one time only one Leaf will forward the traffic to and from the Host for a given EVI.Note# As Leaf-2 sees Host-1’s MAC reachable via Leaf-1, if another Host/ESI connected to Leaf-2 wants to reach Host-1, the traffic will have to go over Leaf-1.Task 5# Configure the BGP-EVPN Distributed Anycast Gateway for inter-subnet routingFor the Layer-3 inter-subnet routing use case, similar to Host-1’s layer-2 reachability, Host-1’s IP will also only be reachable with Leaf-1 as the next-hop. After we configure the BGP-EVPN distributed anycast gateway for inter-subnet routing, we will observe the routing table of Leaf-5.Configure the BGP-EVPN Distributed Anycast Gateway on Leaf-1, Leaf-2 and Leaf-5. For a detailed explanation of the distributed anycast gateway, refer to this post.BGP-EVPN distributed anycast gateway configuration.Configure VRFs on Leaf-1, Leaf-2 and Leaf-5. vrf 10 address-family ipv4 unicast import route-target 10#10 ! export route-target 10#10 ! router bgp 65001 address-family vpnv4 unicast ! vrf 10 rd auto address-family ipv4 unicast additional-paths receive maximum-paths ibgp 10 redistribute connected !Configure BVI as distributed anycast gateway interface BVI 10 host-routing vrf 10 ipv4 address 10.0.0.1 255.255.255.0 mac-address 1001.1001.1001 ! l2vpn bridge group bg-1 bridge-domain bd-10 interface Bundle-Ether11.10 ---- configure on Leaf-1 interface Bundle-Ether12.10 ---- configure on Leaf-2 interface TenGigE0/0/0/45.10 ---- configure on Leaf-5 ! routed interface BVI 10 evi 10 !Configure the static route on Host-1 and Host-5 to reach the default gateway on the Leafs. router static address-family ipv4 unicast 0.0.0.0/0 10.0.0.1 !As we have now configured the BGP-EVPN distributed anycast gateway on the Leafs, let’s observe the routing table of Leaf-5. The below output shows that Host-1’s IP 10.0.0.10/32 is reachable only via Leaf-1.Leaf-5#RP/0/RP0/CPU0#Leaf-5#show route vrf 10Gateway of last resort is not setC 10.0.0.0/24 is directly connected, 00#41#23, BVI10L 10.0.0.1/32 is directly connected, 00#41#23, BVI10B 10.0.0.10/32 [200/0] via 1.1.1.1 (nexthop in vrf default), 00#46#13RP/0/RP0/CPU0#Leaf-5#Leaf-1#RP/0/RP0/CPU0#Leaf-1#show arp vrf 10-------------------------------------------------------------------------------0/0/CPU0-------------------------------------------------------------------------------Address Age Hardware Addr State Type Interface10.0.0.1 - 1001.1001.1001 Interface ARPA BVI1010.0.0.10 00#04#16 6c9c.ed6d.1d91 Dynamic ARPA BVI10RP/0/RP0/CPU0#Leaf-1#Leaf-2#RP/0/RP0/CPU0#Leaf-2#show arp vrf 10-------------------------------------------------------------------------------0/0/CPU0-------------------------------------------------------------------------------Address Age Hardware Addr State Type Interface10.0.0.1 - 1001.1001.1001 Interface ARPA BVI10RP/0/RP0/CPU0#Leaf-2#Finally, we can observe in the ARP table output of Leaf-1 and Leaf-2 that the ARP entry for Host-1 is only programmed on Leaf-1. This is because of the single-active behavior of the ethernet-segment and Leaf-1 being the designated-forwarder. 
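A simple way to exercise the single-active failover in the lab (a sketch; the interface shutdown below only simulates a link failure on the current DF) is to bring down Leaf-1's link towards Host-1 and re-run the verification commands used earlier#
RP/0/RP0/CPU0#Leaf-1(config)#interface TenGigE0/0/0/47
RP/0/RP0/CPU0#Leaf-1(config-if)#shutdown
RP/0/RP0/CPU0#Leaf-1(config-if)#commit
Once the Ethernet Segment route withdrawal is processed, "show evpn ethernet-segment detail" on Leaf-2 should show it as the elected DF, and "show evpn evi vpn-id 10 mac" on Leaf-5 should show Host-1's MAC now reachable via 2.2.2.2.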
This concludes the BGP-EVPN single-active implementation. For further technical details, refer to our e-vpn.io webpage that has a lot of material explaining the core concepts of EVPN, its operations and troubleshooting.", "url": "/tutorials/bgp-evpn-based-single-active-multi-homing/", "author": "Ahmad Bilal Siddiqui", "tags": "iosxr, cisco, EVPN, NCS 5500" } , "tutorials-ncs5500-fib-scale-test": { "title": "NCS5500 FIB Scale Test [Lab Series 04] ", "content": " NCS5500 FIB Scale Tests Introduction Video Demo Full internet view 4M IPv4 routes 2M IPv6 routes Other features impact# BGP Flowspec Other features impact# URPF Conclusion What’s next? You can find more content related to NCS5500 including routing memory management, VRF, URPF, Netflow, QoS, EVPN, Flowspec implementation following this link.IntroductionIn episode 04 of the lab series, we will talk about the NCS5500 FIB. More specifically, we will verify the number of routes we can store in systems with Jericho+ and eTCAM.The goal of these blog posts and videos is to describe tests performed in the lab and detail the methodology and the results.They are extracted from customer POCs (proofs of concept). With this information, we hope it will speed up your validation process and provide additional information on the NCS 5500 platforms’ internals.All former tests are listed here# https#//xrdocs.io/ncs5500/tutorials/ncs5500-lab-series/VideoIn the video below, we perform two tests on Jericho+ systems with external memory (eTCAM)# how much space does a full internet view (both v4 and v6) consume? can we push 4M IPv4 prefixes into this memory?DemoSince the video was recorded a few months back, we will actually use a fresher internet view collected from one of our Asian customers. Then we will re-do the 4M IPv4 prefixes test.To complete these tests, we will also activate URPF loose mode on some interfaces and we will also activate BGP Flowspec and push 3000 rules.Full internet viewWe start with a single view made of 790k IPv4 and 72k IPv6 routes.RP/0/RP0/CPU0#OCSE-653#sh bgp sumBGP router identifier 1.3.5.99, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0000000 RD version# 19451056BGP main routing table version 19451056BGP NSR Initial initsync version 240593 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 19451056 19451056 19451056 19451056 19451056 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.100.152 0 100 1061397 24179 19451056 0 0 00#04#18 790769192.168.100.153 0 100 4789584 24072 0 0 0 15#01#03 Idle (Admin)RP/0/RP0/CPU0#OCSE-653#sh bgp ipv6 un sumBGP router identifier 1.3.5.99, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0800000 RD version# 4364742BGP main routing table version 4364742BGP NSR Initial initsync version 72950 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 4364742 4364742 4364742 4364742 4364742 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd2001#111##151 0 100 242899 24068 4364742 0 0 00#01#10 729482001#111##152 0 100 30725 25124 0 0 0 4w2d Idle (Admin)RP/0/RP0/CPU0#OCSE-653#sh dpa resources iproute loc 0/0/CPU0~iproute~ OFA Table (Id# 25, Scope# 
Global)--------------------------------------------------IPv4 Prefix len distributionPrefix Actual Prefix Actual /0 4 /1 0 /2 4 /3 5 /4 4 /5 0 /6 0 /7 0 /8 10 /9 12 /10 36 /11 97 /12 285 /13 571 /14 1143 /15 1913 /16 13186 /17 7901 /18 13534 /19 25210 /20 39182 /21 47039 /22 100821 /23 78898 /24 445789 /25 144 /26 211 /27 383 /28 537 /29 721 /30 3241 /31 440 /32 9547 NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 In Use# 790868 790868 790868 790868 Create Requests Total# 11358014 11358014 11358014 11358014 Success# 11358014 11358014 11358014 11358014 Delete Requests Total# 10567146 10567146 10567146 10567146 Success# 10567146 10567146 10567146 10567146 Update Requests Total# 838044 838044 838044 838044 Success# 837906 837906 837906 837906 EOD Requests Total# 0 0 0 0 Success# 0 0 0 0 Errors HW Failures# 0 0 0 0 Resolve Failures# 0 0 0 0 No memory in DB# 0 0 0 0 Not found in DB# 0 0 0 0 Exists in DB# 0 0 0 0 Reserve Resources Failures# 0 0 0 0 Release Resources Failures# 0 0 0 0 Update Resources Failures# 0 0 0 0RP/0/RP0/CPU0#OCSE-653#sh dpa resources ip6route loc 0/0/CPU0~ip6route~ OFA Table (Id# 26, Scope# Global)--------------------------------------------------IPv6 Prefix len distributionPrefix Actual Capacity Prefix Actual Capacity /0 4 0 /1 0 0 /2 0 0 /3 0 0 /4 0 0 /5 0 0 /6 0 0 /7 0 0 /8 0 0 /9 0 0 /10 4 0 /11 0 0 /12 0 0 /13 0 0 /14 0 0 /15 0 0 /16 13 0 /17 0 0 /18 0 0 /19 2 0 /20 12 0 /21 4 0 /22 7 0 /23 5 0 /24 23 0 /25 7 0 /26 13 0 /27 18 0 /28 94 0 /29 2696 0 /30 418 0 /31 179 0 /32 12686 0 /33 1060 0 /34 814 0 /35 517 0 /36 2887 0 /37 501 0 /38 908 0 /39 282 0 /40 3689 0 /41 544 0 /42 888 0 /43 144 0 /44 4720 0 /45 465 0 /46 2223 0 /47 1352 0 /48 35009 0 /49 0 0 /50 0 0 /51 1 0 /52 0 0 /53 0 0 /54 0 0 /55 0 0 /56 219 0 /57 2 0 /58 0 0 /59 0 0 /60 16 0 /61 0 0 /62 0 0 /63 3 0 /64 67 0 /65 0 0 /66 0 0 /67 0 0 /68 0 0 /69 0 0 /70 0 0 /71 0 0 /72 0 0 /73 0 0 /74 0 0 /75 0 0 /76 0 0 /77 0 0 /78 0 0 /79 0 0 /80 0 0 /81 0 0 /82 0 0 /83 0 0 /84 0 0 /85 0 0 /86 0 0 /87 0 0 /88 0 0 /89 0 0 /90 0 0 /91 0 0 /92 0 0 /93 0 0 /94 0 0 /95 0 0 /96 0 0 /97 0 0 /98 0 0 /99 0 0 /100 0 0 /101 0 0 /102 0 0 /103 0 0 /104 4 0 /105 0 0 /106 0 0 /107 0 0 /108 0 0 /109 0 0 /110 0 0 /111 0 0 /112 0 0 /113 0 0 /114 0 0 /115 0 0 /116 0 0 /117 0 0 /118 0 0 /119 0 0 /120 2 0 /121 0 0 /122 0 0 /123 0 0 /124 1 0 /125 8 0 /126 432 0 /127 16 0 /128 55 0 NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 In Use# 73014 73014 73014 73014 Create Requests Total# 2220292 2220292 2220292 2220292 Success# 2220292 2220292 2220292 2220292 Delete Requests Total# 2147278 2147278 2147278 2147278 Success# 2147278 2147278 2147278 2147278 Update Requests Total# 266 266 266 266 Success# 202 202 202 202 EOD Requests Total# 0 0 0 0 Success# 0 0 0 0 Errors HW Failures# 0 0 0 0 Resolve Failures# 0 0 0 0 No memory in DB# 0 0 0 0 Not found in DB# 0 0 0 0 Exists in DB# 0 0 0 0 Reserve Resources Failures# 0 0 0 0 Release Resources Failures# 0 0 0 0 Update Resources Failures# 0 0 0 0RP/0/RP0/CPU0#OCSE-653#sh contr npu resources exttcamipv4 loc 0/0/CPU0HW Resource Information Name # ext_tcam_ipv4OOR Information NPU-0 Estimated Max Entries # 4000000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green OOR State Change Time # 2020.Apr.12 23#19#27 UTC NPU-1 Estimated Max Entries # 4000000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green OOR State Change Time # 2020.Apr.12 23#19#25 UTC NPU-2 Estimated Max Entries # 4000000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green OOR State Change Time # 2020.Apr.12 23#19#25 UTC NPU-3 Estimated Max Entries # 
4000000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green OOR State Change Time # 2020.Apr.12 23#19#26 UTCCurrent Usage NPU-0 Total In-Use # 790856 (20 %) iproute # 790868 (20 %) NPU-1 Total In-Use # 790856 (20 %) iproute # 790868 (20 %) NPU-2 Total In-Use # 790856 (20 %) iproute # 790868 (20 %) NPU-3 Total In-Use # 790856 (20 %) iproute # 790868 (20 %)RP/0/RP0/CPU0#OCSE-653#sh contr npu resources exttcamipv6 loc 0/0/CPU0HW Resource Information Name # ext_tcam_ipv6OOR Information NPU-0 Estimated Max Entries # 2000000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green OOR State Change Time # 2020.Apr.12 23#19#27 UTC NPU-1 Estimated Max Entries # 2000000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green OOR State Change Time # 2020.Apr.12 23#19#25 UTC NPU-2 Estimated Max Entries # 2000000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green OOR State Change Time # 2020.Apr.12 23#19#25 UTC NPU-3 Estimated Max Entries # 2000000 Red Threshold # 95 Yellow Threshold # 80 OOR State # Green OOR State Change Time # 2020.Apr.12 23#19#26 UTCCurrent Usage NPU-0 Total In-Use # 72990 (4 %) ip6route # 73014 (4 %) NPU-1 Total In-Use # 72990 (4 %) ip6route # 73014 (4 %) NPU-2 Total In-Use # 72990 (4 %) ip6route # 73014 (4 %) NPU-3 Total In-Use # 72990 (4 %) ip6route # 73014 (4 %)RP/0/RP0/CPU0#OCSE-653#It clearly shows the systems based on Jericho+ with eTCAM have a ton of memory space when used for internet peering / border roles.4M IPv4 routesOn this one, we are using a route generator with extremely basic configuration. The goal being to use the route memory to the fullest supported level# 4 millions IPv4 entries.router bgp 100bgp_id 192.168.100.153neighbor 192.168.100.202 remote-as 100neighbor 192.168.100.202 update-source 192.168.100.152capability ipv4 unicastcapability refreshnetwork 1 1.1.1.1/32 4000000aspath 1 random 5locpref 1 120metric 1 5sendallOn the router, we see the routes received and programmed#RP/0/RP0/CPU0#OCSE-653#sh bgp sumBGP router identifier 1.3.5.99, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0000000 RD version# 14558159BGP main routing table version 14558159BGP NSR Initial initsync version 240593 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 14558159 14558159 14558159 14558159 14558159 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.100.152 0 100 31280 24091 14558159 0 0 00#41#12 4000000192.168.100.153 0 100 4789584 24072 0 0 0 00#43#26 Idle (Admin)RP/0/RP0/CPU0#OCSE-653#sh dpa resources iproute loc 0/0/CPU0~iproute~ OFA Table (Id# 25, Scope# Global)--------------------------------------------------IPv4 Prefix len distributionPrefix Actual Prefix Actual /0 4 /1 0 /2 0 /3 0 /4 4 /5 0 /6 0 /7 0 /8 0 /9 0 /10 0 /11 0 /12 0 /13 0 /14 0 /15 0 /16 2 /17 0 /18 0 /19 0 /20 0 /21 0 /22 0 /23 0 /24 20 /25 0 /26 0 /27 0 /28 0 /29 0 /30 0 /31 0 /32 4000068 NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 In Use# 4000098 4000098 4000098 4000098 Create Requests Total# 10567244 10567244 10567244 10567244 Success# 10567244 10567244 10567244 10567244 Delete Requests Total# 6567146 6567146 6567146 6567146 Success# 6567146 6567146 6567146 6567146 Update Requests Total# 838040 838040 838040 838040 Success# 837903 837903 837903 837903 EOD Requests Total# 0 0 0 0 Success# 0 0 0 0 Errors HW Failures# 0 0 0 0 Resolve Failures# 0 0 0 0 No memory in DB# 0 0 0 0 Not 
found in DB# 0 0 0 0 Exists in DB# 0 0 0 0 Reserve Resources Failures# 0 0 0 0 Release Resources Failures# 0 0 0 0 Update Resources Failures# 0 0 0 0RP/0/RP0/CPU0#OCSE-653#sh contr npu resources exttcamipv4 loc 0/0/CPU0HW Resource Information Name # ext_tcam_ipv4OOR Information NPU-0 Estimated Max Entries # 4000086 Red Threshold # 95 Yellow Threshold # 80 OOR State # Red OOR State Change Time # 2020.Apr.12 21#21#33 UTC NPU-1 Estimated Max Entries # 4000086 Red Threshold # 95 Yellow Threshold # 80 OOR State # Red OOR State Change Time # 2020.Apr.12 21#21#33 UTC NPU-2 Estimated Max Entries # 4000086 Red Threshold # 95 Yellow Threshold # 80 OOR State # Red OOR State Change Time # 2020.Apr.12 21#21#33 UTC NPU-3 Estimated Max Entries # 4000086 Red Threshold # 95 Yellow Threshold # 80 OOR State # Red OOR State Change Time # 2020.Apr.12 21#21#33 UTCCurrent Usage NPU-0 Total In-Use # 4000086 (100 %) iproute # 4000098 (100 %) NPU-1 Total In-Use # 4000086 (100 %) iproute # 4000098 (100 %) NPU-2 Total In-Use # 4000086 (100 %) iproute # 4000098 (100 %) NPU-3 Total In-Use # 4000086 (100 %) iproute # 4000098 (100 %)RP/0/RP0/CPU0#OCSE-653#We reached an OOR (out of resource) state “Red” since we exceeded 95% of the “Estimated Max Entries” capacity, but the routes are programmed successfully in hardware.You can verify it with the “HW Failures#” counters in the “show dpa resource” output.2M IPv6 routesLet’s see if we can push a lot of IPv6 routes too.RP/0/RP0/CPU0#OCSE-653#sh bgp ipv6 un sumBGP router identifier 1.3.5.99, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0800000 RD version# 26437690BGP main routing table version 26437690BGP NSR Initial initsync version 72950 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 26437690 26437690 26437690 26437690 26437690 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd2001#111##151 0 100 279980 24234 26437690 0 0 00#03#05 20000002001#111##152 0 100 30725 25124 0 0 0 4w2d Idle (Admin)RP/0/RP0/CPU0#OCSE-653#sh contr npu resources exttcamipv6 loc 0/0/CPU0HW Resource Information Name # ext_tcam_ipv6OOR Information NPU-0 Estimated Max Entries # 2000040 Red Threshold # 95 Yellow Threshold # 80 OOR State # Red OOR State Change Time # 2020.Apr.13 08#35#32 PDT NPU-1 Estimated Max Entries # 2000040 Red Threshold # 95 Yellow Threshold # 80 OOR State # Red OOR State Change Time # 2020.Apr.13 08#35#32 PDT NPU-2 Estimated Max Entries # 2000040 Red Threshold # 95 Yellow Threshold # 80 OOR State # Red OOR State Change Time # 2020.Apr.13 08#35#32 PDT NPU-3 Estimated Max Entries # 2000040 Red Threshold # 95 Yellow Threshold # 80 OOR State # Red OOR State Change Time # 2020.Apr.13 08#35#32 PDTCurrent Usage NPU-0 Total In-Use # 2000040 (100 %) ip6route # 2000064 (100 %) NPU-1 Total In-Use # 2000040 (100 %) ip6route # 2000064 (100 %) NPU-2 Total In-Use # 2000040 (100 %) ip6route # 2000064 (100 %) NPU-3 Total In-Use # 2000040 (100 %) ip6route # 2000064 (100 %)RP/0/RP0/CPU0#OCSE-653#As advertised, tested and officially supported, no problem pushing 2M IPv6 routes in the external TCAM.Of course, we are in Red OOR state but this limit of 2M is actually not hard coded in the system. We can potentially go further if needed. 
But as mentioned, it’s not tested officially.Example with 3M IPv6/64 routes#RP/0/RP0/CPU0#OCSE-653#sh bgp ipv6 un sumBGP router identifier 1.3.5.99, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0800000 RD version# 31437690BGP main routing table version 31437690BGP NSR Initial initsync version 72950 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 31437690 31437690 31437690 31437690 31437690 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd2001#111##151 0 100 290819 24242 31437690 0 0 00#01#37 30000002001#111##152 0 100 30725 25124 0 0 0 4w2d Idle (Admin)RP/0/RP0/CPU0#OCSE-653#sh dpa resources ip6route loc 0/0/CPU0~ip6route~ OFA Table (Id# 26, Scope# Global)--------------------------------------------------IPv6 Prefix len distributionPrefix Actual Capacity Prefix Actual Capacity /0 4 0 /1 0 0 /2 0 0 /3 0 0 /4 0 0 /5 0 0 /6 0 0 /7 0 0 /8 0 0 /9 0 0 /10 4 0 /11 0 0 /12 0 0 /13 0 0 /14 0 0 /15 0 0 /16 12 0 /17 0 0 /18 0 0 /19 0 0 /20 0 0 /21 0 0 /22 0 0 /23 0 0 /24 0 0 /25 0 0 /26 0 0 /27 0 0 /28 0 0 /29 0 0 /30 0 0 /31 0 0 /32 0 0 /33 0 0 /34 0 0 /35 0 0 /36 0 0 /37 0 0 /38 0 0 /39 0 0 /40 0 0 /41 0 0 /42 0 0 /43 0 0 /44 0 0 /45 0 0 /46 0 0 /47 0 0 /48 0 0 /49 0 0 /50 0 0 /51 0 0 /52 0 0 /53 0 0 /54 0 0 /55 0 0 /56 0 0 /57 0 0 /58 0 0 /59 0 0 /60 0 0 /61 0 0 /62 0 0 /63 0 0 /64 3000018 0 /65 0 0 /66 0 0 /67 0 0 /68 0 0 /69 0 0 /70 0 0 /71 0 0 /72 0 0 /73 0 0 /74 0 0 /75 0 0 /76 0 0 /77 0 0 /78 0 0 /79 0 0 /80 0 0 /81 0 0 /82 0 0 /83 0 0 /84 0 0 /85 0 0 /86 0 0 /87 0 0 /88 0 0 /89 0 0 /90 0 0 /91 0 0 /92 0 0 /93 0 0 /94 0 0 /95 0 0 /96 0 0 /97 0 0 /98 0 0 /99 0 0 /100 0 0 /101 0 0 /102 0 0 /103 0 0 /104 4 0 /105 0 0 /106 0 0 /107 0 0 /108 0 0 /109 0 0 /110 0 0 /111 0 0 /112 0 0 /113 0 0 /114 0 0 /115 0 0 /116 0 0 /117 0 0 /118 0 0 /119 0 0 /120 0 0 /121 0 0 /122 0 0 /123 0 0 /124 0 0 /125 0 0 /126 0 0 /127 0 0 /128 22 0 NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 In Use# 3000064 3000064 3000064 3000064 Create Requests Total# 9958976 9958976 9958976 9958976 Success# 9958976 9958976 9958976 9958976 Delete Requests Total# 6958912 6958912 6958912 6958912 Success# 6958912 6958912 6958912 6958912 Update Requests Total# 0 0 0 0 Success# 0 0 0 0 EOD Requests Total# 0 0 0 0 Success# 0 0 0 0 Errors HW Failures# 0 0 0 0 Resolve Failures# 0 0 0 0 No memory in DB# 0 0 0 0 Not found in DB# 0 0 0 0 Exists in DB# 0 0 0 0 Reserve Resources Failures# 0 0 0 0 Release Resources Failures# 0 0 0 0 Update Resources Failures# 0 0 0 0RP/0/RP0/CPU0#OCSE-653#sh contr npu resources exttcamipv6 loc 0/0/CPU0HW Resource Information Name # ext_tcam_ipv6OOR Information NPU-0 Estimated Max Entries # 3000040 Red Threshold # 95 Yellow Threshold # 80 OOR State # Red OOR State Change Time # 2020.Apr.13 08#42#21 PDT NPU-1 Estimated Max Entries # 3000040 Red Threshold # 95 Yellow Threshold # 80 OOR State # Red OOR State Change Time # 2020.Apr.13 08#42#21 PDT NPU-2 Estimated Max Entries # 3000040 Red Threshold # 95 Yellow Threshold # 80 OOR State # Red OOR State Change Time # 2020.Apr.13 08#42#21 PDT NPU-3 Estimated Max Entries # 3000040 Red Threshold # 95 Yellow Threshold # 80 OOR State # Red OOR State Change Time # 2020.Apr.13 08#42#21 PDTCurrent Usage NPU-0 Total In-Use # 3000040 (100 %) ip6route # 3000064 (100 %) NPU-1 Total In-Use # 3000040 (100 %) ip6route # 3000064 (100 %) NPU-2 Total In-Use # 3000040 (100 %) ip6route # 
3000064 (100 %) NPU-3 Total In-Use # 3000040 (100 %) ip6route # 3000064 (100 %)RP/0/RP0/CPU0#OCSE-653#Other features impact# BGP FlowspecLet’s start with the 4M IPv4 routes.Flowspec is configured but we don’t receive any rule at the moment (session is not active).RP/0/RP0/CPU0#OCSE-653#sh run router bgp 100 neighbor 192.168.100.151router bgp 100 neighbor 192.168.100.151 remote-as 100 address-family ipv4 flowspec route-policy PERMIT-ANY in maximum-prefix 8000000 75 route-policy PERMIT-ANY out ! !!RP/0/RP0/CPU0#OCSE-653#sh run flowspecflowspec local-install interface-all address-family ipv4 service-policy type pbr scale_ipv4 !!RP/0/RP0/CPU0#OCSE-653#sh bgp ipv4 flowspec sumBGP router identifier 1.3.5.99, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0x0 RD version# 202BGP main routing table version 202BGP NSR Initial initsync version 0 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 202 202 202 202 202 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.100.151 0 100 5 5 0 0 0 00#11#57 ActiveRP/0/RP0/CPU0#OCSE-653#sh contr npu externaltcam loc 0/0/CPU0External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 80b FLP 2871968 4000086 0 IPv4 UC0 1 80b FLP 0 0 1 IPv4 RPF0 2 160b FLP 713193 39 3 IPv6 UC0 3 160b FLP 0 0 4 IPv6 RPF0 4 80b FLP 4096 0 75 INGRESS_IPV4_SRC_IP_EXT0 5 80b FLP 4096 0 76 INGRESS_IPV4_DST_IP_EXT0 6 160b FLP 4096 0 77 INGRESS_IPV6_SRC_IP_EXT0 7 160b FLP 4096 0 78 INGRESS_IPV6_DST_IP_EXT0 8 80b FLP 4096 0 79 INGRESS_IP_SRC_PORT_EXT0 9 80b FLP 4096 0 80 INGRESS_IPV6_SRC_PORT_EXT0 10 320b FLP 4094 2 118 INGRESS_FLOWSPEC_IPV41 0 80b FLP 2871968 4000086 0 IPv4 UC1 1 80b FLP 0 0 1 IPv4 RPF1 2 160b FLP 713193 39 3 IPv6 UC1 3 160b FLP 0 0 4 IPv6 RPF1 4 80b FLP 4096 0 75 INGRESS_IPV4_SRC_IP_EXT1 5 80b FLP 4096 0 76 INGRESS_IPV4_DST_IP_EXT1 6 160b FLP 4096 0 77 INGRESS_IPV6_SRC_IP_EXT1 7 160b FLP 4096 0 78 INGRESS_IPV6_DST_IP_EXT1 8 80b FLP 4096 0 79 INGRESS_IP_SRC_PORT_EXT1 9 80b FLP 4096 0 80 INGRESS_IPV6_SRC_PORT_EXT1 10 320b FLP 4094 2 118 INGRESS_FLOWSPEC_IPV42 0 80b FLP 2871968 4000086 0 IPv4 UC2 1 80b FLP 0 0 1 IPv4 RPF2 2 160b FLP 713193 39 3 IPv6 UC2 3 160b FLP 0 0 4 IPv6 RPF2 4 80b FLP 4096 0 75 INGRESS_IPV4_SRC_IP_EXT2 5 80b FLP 4096 0 76 INGRESS_IPV4_DST_IP_EXT2 6 160b FLP 4096 0 77 INGRESS_IPV6_SRC_IP_EXT2 7 160b FLP 4096 0 78 INGRESS_IPV6_DST_IP_EXT2 8 80b FLP 4096 0 79 INGRESS_IP_SRC_PORT_EXT2 9 80b FLP 4096 0 80 INGRESS_IPV6_SRC_PORT_EXT2 10 320b FLP 4094 2 118 INGRESS_FLOWSPEC_IPV43 0 80b FLP 2871968 4000086 0 IPv4 UC3 1 80b FLP 0 0 1 IPv4 RPF3 2 160b FLP 713193 39 3 IPv6 UC3 3 160b FLP 0 0 4 IPv6 RPF3 4 80b FLP 4096 0 75 INGRESS_IPV4_SRC_IP_EXT3 5 80b FLP 4096 0 76 INGRESS_IPV4_DST_IP_EXT3 6 160b FLP 4096 0 77 INGRESS_IPV6_SRC_IP_EXT3 7 160b FLP 4096 0 78 INGRESS_IPV6_DST_IP_EXT3 8 80b FLP 4096 0 79 INGRESS_IP_SRC_PORT_EXT3 9 80b FLP 4096 0 80 INGRESS_IPV6_SRC_PORT_EXT3 10 320b FLP 4094 2 118 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#OCSE-653#Now we advertise 3000 simple rules.On the testing tool#router bgp 100bgp_id 192.168.100.151neighbor 192.168.100.202 remote-as 100neighbor 192.168.100.202 update-source 192.168.100.151capability ipv4 flowspecnetwork 1 ipv4 flowspecnetwork 1 dest 
7.7.7.7/32 protocol 17 source-port 123network 1 count 3000 dest-incrOn the router#RP/0/RP0/CPU0#OCSE-653#sh bgp ipv4 flowspec sumBGP router identifier 1.3.5.99, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0x0 RD version# 3402BGP main routing table version 3402BGP NSR Initial initsync version 0 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 3402 402 3402 3402 402 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.100.151 0 100 28 13 402 0 0 00#00#06 3000RP/0/RP0/CPU0#OCSE-653#sh flowspec ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=123 Actions #transmit (bgp.1) Flow #Dest#7.7.7.8/32,Proto#=17,SPort#=123 Actions #transmit (bgp.1) Flow #Dest#7.7.7.9/32,Proto#=17,SPort#=123 Actions #transmit (bgp.1) Flow #Dest#7.7.7.10/32,Proto#=17,SPort#=123 Actions #transmit (bgp.1) Flow #Dest#7.7.7.11/32,Proto#=17,SPort#=123 Actions #transmit (bgp.1) Flow #Dest#7.7.7.12/32,Proto#=17,SPort#=123 Actions #transmit (bgp.1) Flow #Dest#7.7.7.13/32,Proto#=17,SPort#=123 Actions #transmit (bgp.1) Flow #Dest#7.7.7.14/32,Proto#=17,SPort#=123 Actions #transmit (bgp.1) Flow #Dest#7.7.7.15/32,Proto#=17,SPort#=123 Actions #transmit (bgp.1) Flow #Dest#7.7.7.16/32,Proto#=17,SPort#=123 Actions #transmit (bgp.1) Flow #Dest#7.7.7.17/32,Proto#=17,SPort#=123 Actions #transmit (bgp.1)///-- SNIP SNIP SNIP --//RP/0/RP0/CPU0#OCSE-653#sh dpa resources ippbr loc 0/0/CPU0~ippbr~ OFA Table (Id# 137, Scope# Global)-------------------------------------------------- NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 In Use# 3001 3001 3001 3001 Create Requests Total# 3202 3202 3202 3202 Success# 3202 3202 3202 3202 Delete Requests Total# 201 201 201 201 Success# 201 201 201 201 Update Requests Total# 0 0 0 0 Success# 0 0 0 0 EOD Requests Total# 0 0 0 0 Success# 0 0 0 0 Errors HW Failures# 0 0 0 0 Resolve Failures# 0 0 0 0 No memory in DB# 0 0 0 0 Not found in DB# 0 0 0 0 Exists in DB# 0 0 0 0 Reserve Resources Failures# 0 0 0 0 Release Resources Failures# 0 0 0 0 Update Resources Failures# 0 0 0 0RP/0/RP0/CPU0#OCSE-653#sh contr npu externaltcam loc 0/0/CPU0External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 80b FLP 2871968 4000086 0 IPv4 UC0 1 80b FLP 0 0 1 IPv4 RPF0 2 160b FLP 713193 39 3 IPv6 UC0 3 160b FLP 0 0 4 IPv6 RPF0 4 80b FLP 4096 0 75 INGRESS_IPV4_SRC_IP_EXT0 5 80b FLP 4096 0 76 INGRESS_IPV4_DST_IP_EXT0 6 160b FLP 4096 0 77 INGRESS_IPV6_SRC_IP_EXT0 7 160b FLP 4096 0 78 INGRESS_IPV6_DST_IP_EXT0 8 80b FLP 4096 0 79 INGRESS_IP_SRC_PORT_EXT0 9 80b FLP 4096 0 80 INGRESS_IPV6_SRC_PORT_EXT0 10 320b FLP 1094 3002 118 INGRESS_FLOWSPEC_IPV41 0 80b FLP 2871968 4000086 0 IPv4 UC1 1 80b FLP 0 0 1 IPv4 RPF1 2 160b FLP 713193 39 3 IPv6 UC1 3 160b FLP 0 0 4 IPv6 RPF1 4 80b FLP 4096 0 75 INGRESS_IPV4_SRC_IP_EXT1 5 80b FLP 4096 0 76 INGRESS_IPV4_DST_IP_EXT1 6 160b FLP 4096 0 77 INGRESS_IPV6_SRC_IP_EXT1 7 160b FLP 4096 0 78 INGRESS_IPV6_DST_IP_EXT1 8 80b FLP 4096 0 79 INGRESS_IP_SRC_PORT_EXT1 9 80b FLP 4096 0 80 INGRESS_IPV6_SRC_PORT_EXT1 10 320b FLP 1094 3002 118 INGRESS_FLOWSPEC_IPV42 0 80b FLP 2871968 4000086 0 IPv4 UC2 1 80b FLP 0 0 1 IPv4 RPF2 2 160b FLP 713193 39 3 IPv6 UC2 3 160b FLP 0 0 4 IPv6 RPF2 4 80b FLP 4096 0 75 INGRESS_IPV4_SRC_IP_EXT2 5 80b FLP 4096 0 
76 INGRESS_IPV4_DST_IP_EXT2 6 160b FLP 4096 0 77 INGRESS_IPV6_SRC_IP_EXT2 7 160b FLP 4096 0 78 INGRESS_IPV6_DST_IP_EXT2 8 80b FLP 4096 0 79 INGRESS_IP_SRC_PORT_EXT2 9 80b FLP 4096 0 80 INGRESS_IPV6_SRC_PORT_EXT2 10 320b FLP 1094 3002 118 INGRESS_FLOWSPEC_IPV43 0 80b FLP 2871968 4000086 0 IPv4 UC3 1 80b FLP 0 0 1 IPv4 RPF3 2 160b FLP 713193 39 3 IPv6 UC3 3 160b FLP 0 0 4 IPv6 RPF3 4 80b FLP 4096 0 75 INGRESS_IPV4_SRC_IP_EXT3 5 80b FLP 4096 0 76 INGRESS_IPV4_DST_IP_EXT3 6 160b FLP 4096 0 77 INGRESS_IPV6_SRC_IP_EXT3 7 160b FLP 4096 0 78 INGRESS_IPV6_DST_IP_EXT3 8 80b FLP 4096 0 79 INGRESS_IP_SRC_PORT_EXT3 9 80b FLP 4096 0 80 INGRESS_IPV6_SRC_PORT_EXT3 10 320b FLP 1094 3002 118 INGRESS_FLOWSPEC_IPV4RP/0/RP0/CPU0#OCSE-653#BGP Flowspec entries are stored in a different zone in the external TCAM then the part used for routing information.Conclusion# BGP Flowspec doesn’t impact the routing scaleIt would be an interesting study to quantify the impact of the complexity of the flowspec rule on the number of entries we can store in eTCAM. For the example above, we used simple rules, but other parameters like packet length ranges can consume more entries.Other features impact# URPFLet’s try another feature and identify the impact on routing scale# Unicast Reverse Path Forwarding.To configure it on systems with Jericho+ and eTCAM, you don’t need to enable any specific hw-module profile, which is not the case for other types of NCS5500. More details in#https#//xrdocs.io/ncs5500/tutorials/ncs5500-urpf/We apply URPF on two interfaces and it does not have any impact.RP/0/RP0/CPU0#OCSE-653#sh run int hu0/0/0/0interface HundredGigE0/0/0/0 description OCSE H0/0/0/0 to 5508 H0/0/0/1 cdp mtu 9646 ipv4 address 25.1.11.2 255.255.255.0 ipv4 verify unicast source reachable-via any ipv6 address 2001#25#1#11##2/64 load-interval 30!RP/0/RP0/CPU0#OCSE-653#sh run int hu0/0/0/4interface HundredGigE0/0/0/4 description OCSE H0/0/0/4 to 24H H0/0/0/4 cdp ipv4 address 25.1.110.2 255.255.255.0 ipv4 verify unicast source reachable-via any ipv6 address 2001#25#1#110##2/64 load-interval 30!RP/0/RP0/CPU0#OCSE-653#No impact on the 4M routes.ConclusionVery basic tests but regularly requested in the Customer Proof of Concept.We demonstrated that we can store a full internet feed IPv4 and IPv6 with a lot of growth margin, but also that we could store 4M IPv4 entries in the eTCAM while enabling other features like BGP Flowspec and URPF (loose mode) without any issue.What’s next?Next blog post, we still test the performance of the new line cards based on Jericho2 ASICs.If you would like specific tests executed in this series, please let us know in the comments below.", "url": "/tutorials/ncs5500-fib-scale-test/", "author": "Nicolas Fevrier", "tags": "lab series, ncs5500, ios xr" } , "tutorials-bgp-evpn-and-l3vpn-interworking": { "title": "BGP EVPN and L3VPN Interworking", "content": " On This Page BGP EVPN and L3VPN Interworking Support on IOS-XR based Routers Task 1# Configuration of Segment Routing on DCI routers Task 2# Configuration of BGP L3VPN on DCI and PE-1 Task 3# Configuration of BGP-EVPN on Leafs, Spines and DCIs Task 4# Configure BGP EVPN and L3VPN interworking on DCI routers Task 5# Advertise summarized routes and filter host routes on DCI BGP EVPN and L3VPN Interworking Support on IOS-XR based RoutersBGP EVPN and L3VPN interworking is a way to connect EVPN domain such as a DC or CO over an IPVPN Core/WAN network. 
This is a common use-case for end-to-end connectivity of Hosts/CEs in EVPN domain to other domains over an IPVPN network providing inter-subnet routing.Below topology shows an EVPN fabric connecting to L3VPN domain with the help of DCI/Boarder-Leaf routers. The DCI routers perform BGP EVPN to L3VPN interworking to provide the reachability between PE-1’s and Host prefixes. Boarder-Leaf/DCI routers are essential in these types of designs to keep prefixes local to each domain and send summarized advertisement out.In this post we will go over the configuration of EVPN and L3VPN interworking on IOS-XR routers acting as DCI. When we complete the configuration, we will have Host subnet (10.0.0.0/24) learnt on PE-1, and PE-1’s VPNv4 prefix loopback-100 learnt on Leafs. We will verify the reachability between Hosts and PE-1’s prefixes, that are advertised by their respective address-families.The configuration setup is based on single BGP AS 65001. There are two separate ISIS routing domains for EVPN and L3VPN with Segment Routing enabled for MPLS based forwarding. DCI performs BGP EVPN and L3VPN interworking, hence it is participating in both ISIS domains. There is no route redistribute between ISIS domains.Though EVPN and L3VPN interworking is going to be configured on DCI routers only, yet in this post we will go over the configuration of overall setup including EVPN fabric and L3VPN domain. To achieve end-to-end connectivity, below is the list of tasks we will implement. Some of these tasks are already covered in previous posts, their details will not be covered here. Click on the links below to visit previous posts. The remaining items from the list that don’t have links to previous write-ups are covered in this post. Configure BGP-EVPN control-plane & Segment Routing based MPLS forwarding Configure BGP EVPN based Layer-2 VPN Service Configure BGP EVPN IRB for Inter-subnet Routing Configure Segment Routing on DCI routers Configure BGP L3VPN domain Configure DCI routers to perform EVPN and L3VPN interworkingTask 1# Configuration of Segment Routing on DCI routersSegment routing configuration for EVPN fabric is covered in earlier post but DCI routers were not part of that post. That is why we will only cover segment routing configuration for DCI and show it participating in two MPLS forwarding domains. One for providing forwarding to EVPN fabric and other to the L3VPN domain. DCI-1 DCI-2 router isis 1 is-type level-2-only net 49.0001.0000.0000.0008.00 nsr log adjacency changes address-family ipv4 unicast metric-style wide segment-routing mpls ! interface Bundle-Ether68 point-to-point address-family ipv4 unicast ! interface Bundle-Ether78 point-to-point address-family ipv4 unicast ! interface Loopback0 passive address-family ipv4 unicast prefix-sid absolute 16008!router isis 2 is-type level-2-only net 49.0002.0000.0000.0008.00 nsr log adjacency changes address-family ipv4 unicast metric-style wide segment-routing mpls ! interface Bundle-Ether81 point-to-point address-family ipv4 unicast ! interface Loopback0 passive address-family ipv4 unicast prefix-sid absolute 16008! router isis 1 is-type level-2-only net 49.0001.0000.0000.0009.00 nsr log adjacency changes address-family ipv4 unicast metric-style wide segment-routing mpls ! interface Bundle-Ether69 point-to-point address-family ipv4 unicast ! interface Bundle-Ether79 point-to-point address-family ipv4 unicast ! 
interface Loopback0 passive address-family ipv4 unicast prefix-sid absolute 16009!router isis 2 is-type level-2-only net 49.0002.0000.0000.0009.00 nsr log adjacency changes address-family ipv4 unicast metric-style wide segment-routing mpls ! interface Bundle-Ether91 point-to-point address-family ipv4 unicast ! interface Loopback0 passive address-family ipv4 unicast prefix-sid absolute 16009! Below output shows the segment routing label table for both the ISIS processes on DCI routers. Routers in EVPN and L3VPN domains are reachable from DCI routers. DCI-1 DCI-2 DCI-1#show isis segment-routing label table IS-IS 1 IS Label TableLabel Prefix/Interface---------- ----------------16001 1.1.1.1/3216002 2.2.2.2/3216006 6.6.6.6/3216007 7.7.7.7/3216008 Loopback016009 9.9.9.9/32IS-IS 2 IS Label TableLabel Prefix/Interface---------- ----------------16008 Loopback016009 9.9.9.9/3216010 10.10.10.10/32 DCI-2#show isis segment-routing label table IS-IS 1 IS Label TableLabel Prefix/Interface---------- ----------------16001 1.1.1.1/3216002 2.2.2.2/3216006 6.6.6.6/3216007 7.7.7.7/3216008 8.8.8.8/3216009 Loopback0IS-IS 2 IS Label TableLabel Prefix/Interface---------- ----------------16008 8.8.8.8/3216009 Loopback016010 10.10.10.10/32 Task 2# Configuration of BGP L3VPN on DCI and PE-1As per the topology we have L3VPN configured between DCIs and PE-1. VRF 10 is configured on DCIs and PE-1 with route-target 110#110. iBGP neighborship is formed using VPNv4 address-family between DCI routers and PE-1. Though we are not using a Route-Reflector in L3VPN domain for this write-up; a Route-Reflector is supported and can be used for this design. Configure L3VPN VRF on PE-1, DCI-1 and DCI-2.vrf 10 address-family ipv4 unicast import route-target 110#110 ! export route-target 110#110 !Configure BGP L3VPN neighborship via VPNv4 address-family. Also, configure the VRF under BGP to advertised the routes of the VRF to other PE routers. Initiate the VPNv4 address family to advertise VRF label. Route-Distinguisher (RD) auto under VRF generates RD value automatically. However, configuring RD manually is also supported.We will use “redistribute connected” under VRF to advertise connected routes via BGP. In addition, we are configuring BGP multipathing for load balancing where multiple next-hops are available for a prefix. PE-1 DCI-1 DCI-2 router bgp 65001 bgp router-id 10.10.10.10 address-family vpnv4 unicast ! neighbor 8.8.8.8 remote-as 65001 description ~vpnv4 session to DCI-1~ update-source Loopback0 address-family vpnv4 unicast ! neighbor 9.9.9.9 remote-as 65001 description ~vpnv4 session to DCI-2~ update-source Loopback0 address-family vpnv4 unicast ! vrf 10 rd auto address-family ipv4 unicast additional-paths receive maximum-paths ibgp 10 redistribute connected! router bgp 65001 bgp router-id 8.8.8.8 address-family vpnv4 unicast ! neighbor 9.9.9.9 remote-as 65001 description ~vpnv4 session to DCI-2~ update-source Loopback0 address-family vpnv4 unicast next-hop-self ! neighbor 10.10.10.10 remote-as 65001 description ~vpnv4 session to PE-1~ update-source Loopback0 address-family vpnv4 unicast next-hop-self ! vrf 10 rd auto address-family ipv4 unicast additional-paths receive maximum-paths ibgp 10 redistribute connected! router bgp 65001 bgp router-id 9.9.9.9 address-family vpnv4 unicast ! neighbor 8.8.8.8 remote-as 65001 description ~vpnv4 session to DCI-1~ update-source Loopback0 address-family vpnv4 unicast next-hop-self ! 
neighbor 10.10.10.10 remote-as 65001 description ~vpnv4 session to PE-1~ update-source Loopback0 address-family vpnv4 unicast next-hop-self ! vrf 10 rd auto address-family ipv4 unicast additional-paths receive maximum-paths ibgp 10 redistribute connected ! Configure Loopback 100 for VRF 10 on PE-1. This will be advertised as VPNv4 prefix to DCI routers, the DCI routers will re-originate this prefix and advertise to Leafs in EVPN fabric for end-to-end reachability. PE-1#interface Loopback100 vrf 10 ipv4 address 111.1.1.1 255.255.255.255! At this point we are done with BGP L3VPN (VPNv4) configuration on DCI routers and PE-1. We are advertising interface Loopback 100’s prefix from PE-1 towards DCI routers with VPNv4 address-family and route-targets 110#110 for import and export of routes for VRF 10. Check routing table of DCI routers for VRF 10 to verify that PE-1 prefix is learnt. RP/0/RP0/CPU0#DCI-1#show route vrf 10 Gateway of last resort is not setB 111.1.1.1/32 [200/0] via 10.10.10.10 (nexthop in vrf default), 01#07#05RP/0/RP0/CPU0#DCI-1#RP/0/RP0/CPU0#DCI-1#show cef vrf 10 111.1.1.1/32111.1.1.1/32, version 1, internal 0x5000001 0x0 (ptr 0x97c1d714) [1], 0x0 (0x0), 0x208 (0x98422d28) Updated Apr 14 11#39#51.099 Prefix Len 32, traffic index 0, precedence n/a, priority 3 via 10.10.10.10/32, 3 dependencies, recursive [flags 0x6000] path-idx 0 NHID 0x0 [0x972aef08 0x0] recursion-via-/32 next hop VRF - 'default', table - 0xe0000000 next hop 10.10.10.10/32 via 16010/0/21 next hop 192.8.10.2/32 BE81 labels imposed {ImplNull 24017} The above output shows that PE-1’s advertised prefix 111.1.1.1/32 is learnt on DCI in VRF 10. We can also verify the prefix advertisement using the L3VPN BGP control-plane. In the below output from DCI-1 we can see the details of prefix 111.1.1.1/32; that it is received from PE-1 (10.10.10.10), its label value and route-target (RT) information. RP/0/RP0/CPU0#DCI-1#show bgp vpnv4 unicast rd 10.10.10.10#0 111.1.1.1/32 detailBGP routing table entry for 111.1.1.1/32, Route Distinguisher# 10.10.10.10#0Versions# Process bRIB/RIB SendTblVer Speaker 3 3 Flags# 0x00040001+0x00000000; Last Modified# Apr 14 11#39#50.740 for 00#02#05Paths# (1 available, best #1) Not advertised to any peer Path #1# Received by speaker 0 Flags# 0x4000000025060005, import# 0x1f Not advertised to any peer Local 10.10.10.10 (metric 10) from 10.10.10.10 (10.10.10.10) Received Label 24017 Origin incomplete, metric 0, localpref 100, valid, internal, best, group-best, import-candidate, not-in-vrf Received Path ID 0, Local Path ID 1, version 3 Extended community# RT#110#110 RP/0/RP0/CPU0#DCI-1# Based on above output we can see the prefix from PE-1 learnt and programmed in the forwarding table for VRF 10 on DCI routers. This concludes the configuration and verification of BGP L3VPN (VPNv4) domain in the above setup. Next we will cover the remaining tasks of configuring BGP EVPN on DCI routers and implementing EVPN to L3VPN interworking.Task 3# Configuration of BGP-EVPN on Leafs, Spines and DCIsThe EVPN fabric configuration was done in another post but that did not include DCI router’s configuration. We will configure DCI routers now and form BGP EVPN neighborship with Spines serving as Route-Reflectors.Configure BGP-EVPN neighborship with Route Reflectors. DCI-1 DCI-2 router bgp 65001 bgp router-id 8.8.8.8 address-family l2vpn evpn ! neighbor 6.6.6.6 remote-as 65001 description ~BGP-EVPN session to Spine-1~ update-source Loopback0 address-family l2vpn evpn next-hop-self ! ! 
neighbor 7.7.7.7 remote-as 65001 description ~BGP-EVPN session to Spine-2~ update-source Loopback0 address-family l2vpn evpn next-hop-self ! ! vrf 10 --- VRF was already configured in L3VPN config task rd auto address-family ipv4 unicast additional-paths receive maximum-paths ibgp 10 redistribute connected ! ! router bgp 65001 bgp router-id 9.9.9.9 address-family l2vpn evpn ! neighbor 6.6.6.6 remote-as 65001 description ~BGP-EVPN session to Spine-1~ update-source Loopback0 address-family l2vpn evpn next-hop-self ! ! neighbor 7.7.7.7 remote-as 65001 description ~BGP-EVPN session to Spine-2~ update-source Loopback0 address-family l2vpn evpn next-hop-self ! ! vrf 10 --- VRF was already configured in L3VPN config task rd auto address-family ipv4 unicast additional-paths receive maximum-paths ibgp 10 redistribute connected ! ! As BGP-EVPN Layer-2 VPN service and EVPN-IRB on Leafs is already configured in earlier posts (refer to EVPN Layer-2 Service and EVPN-IRB); in next task lets configure route-target for VRF 10 to import and export EVPN routes.Task 4# Configure BGP EVPN and L3VPN interworking on DCI routersConfigure route-target stitching for EVPN routes#We already have configured VRF 10 on DCI when we configured L3VPN on DCI routers in Task 2. Now, since we are extending EVPN to DCI routers, we will configure EVPN RT (10#10) under VRF 10.With this, DCI routers for VRF 10 are configured with two sets of import and export route-targets. One set is associated to L3VPN domain using VPNv4 to advertise layer-3 information; while the other set is for EVPN fabric using EVPN address-family for advertisement of routes. The separation of route-targets enables DCI routers to have two separate domains configured independently. In order for EVPN and L3VPN to interwork, “Stitching” keyword configuration under VRF is required to stitch the two set of route-targets. Below configuration is making EVPN RTs as stitching RTs, while the L3VPN remain normal RTs.Configure VRF 10 on both DCI routers with EVPN route-target stitching.vrf 10 address-family ipv4 unicast import route-target 10#10 stitching 110#110 ! export route-target 10#10 stitching 110#110 !!We will import the EVPN routes with stitching RT and then will re-originate these with VPNv4 towards PE. For this we will need two knobs in BGP; “import stitching-rt re-originate” and “advertise vpnv4 unicast re-originated”.Import EVPN routes using “import stitching-rt re-originate”#In order to import evpn routes, we will have to import routes using “stitching-rt” keyword in EVPN address-family. The “re-originate” keyword will enable the routes to be re-originated with VPNv4 normal RT (110#110).Configure below on DCI-1 and DCI-2# DCI#router bgp 65001 neighbor 6.6.6.6 remote-as 65001 description ~BGP-EVPN session to Spine-1~ address-family l2vpn evpn import stitching-rt re-originate ! neighbor 7.7.7.7 remote-as 65001 description ~BGP-EVPN session to Spine-2~ address-family l2vpn evpn import stitching-rt re-originate! As a result we can see Host-1 and Host-2 routes programmed in the routing table of VRF 10 on DCI. DCI-1#RP/0/RP0/CPU0#DCI-1#show route vrf 10Gateway of last resort is not setB 10.0.0.20/32 [200/0] via 1.1.1.1 (nexthop in vrf default), 00#06#07B 10.0.0.40/32 [200/0] via 2.2.2.2 (nexthop in vrf default), 00#06#07B 111.1.1.1/32 [200/0] via 10.10.10.10 (nexthop in vrf default), 21#31#55RP/0/RP0/CPU0#DCI-1# “show bgp vrf-db table vrf-table-id” cli command can be used to see the list of stitching-RT configured on DCI routers. 
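If you want to run that check from automation rather than by eye, a small Python sketch along the following lines can pull the normal and stitching route-target lists out of the show output (the DCI-1 output itself is shown right after this block). The line layout is assumed from that sample and can vary by release, so treat the parser as an illustration rather than a supported tool.

```python
import re

def parse_vrf_rt_lists(show_output):
    """Extract normal and stitching route-target lists from
    'show bgp vrf-db table <table-id>' output.

    Assumed layout: an 'Import:' block and an 'Export:' block, each with an
    'RT-List:' line and a 'Stitching RT-List:' line carrying RT:x:y values."""
    result = {
        ("import", "normal"): [], ("import", "stitching"): [],
        ("export", "normal"): [], ("export", "stitching"): [],
    }
    direction = None
    for line in show_output.splitlines():
        stripped = line.strip()
        if stripped.startswith("Import"):
            direction = "import"
        elif stripped.startswith("Export"):
            direction = "export"
        if direction is None or "RT-List" not in stripped:
            continue
        kind = "stitching" if "Stitching RT-List" in stripped else "normal"
        result[(direction, kind)] = re.findall(r"RT:\d+:\d+", stripped)
    return result

# Self-contained example using text that mimics the DCI-1 output below.
sample = """
VRF-TBL: 10 (IPv4 Unicast)
  Import:
    RT-List: RT:110:110
    Stitching RT-List: RT:10:10
  Export:
    RT-List: RT:110:110
    Stitching RT-List: RT:10:10
"""
print(parse_vrf_rt_lists(sample)[("import", "stitching")])   # ['RT:10:10']
```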
RP/0/RP0/CPU0#DCI-1#show bgp vrf-db table allID REF AF VRF0xe0000000 2 IPv4 Unicast default0xe000000f 13 IPv4 Unicast 10RP/0/RP0/CPU0#DCI-1#show bgp vrf-db table 0xe000000fVRF-TBL# 10 (IPv4 Unicast) TBL ID# 0xe000000f RSI Handle# 0x43e5ca8 Refcount# 13 Import# RT-List# RT#110#110 Stitching RT-List# RT#10#10 Export# RT-List# RT#110#110 Stitching RT-List# RT#10#10 Re-originate evpn routes with vpnv4 RT “advertise vpnv4 unicast re-originated”#Next we will advertise the routes learnt from EVPN fabric to L3VPN PE. Configure “advertise vpnv4 unicast re-originated” keyword under VPNv4 address family to re-originate the EVPN routes matching stitching RT to vpnv4 using vpnv4 RT (110#110).Since, PE-1 does not have reachability to Leafs in EVPN fabric, DCI will act as inline-RR. DCI will change the next-hop to itself as it re-originates the routes and advertises to PE. We also need to configure “ibgp policy out enforce-modifications” to send the updated BGP route attributes to peers. DCI-1 DCI-2 router bgp 65001 ibgp policy out enforce-modifications neighbor 9.9.9.9 remote-as 65001 description ~vpnv4 session to DCI-1~ update-source Loopback0 address-family vpnv4 unicast route-reflector-client advertise vpnv4 unicast re-originated next-hop-self ! ! neighbor 10.10.10.10 remote-as 65001 description ~vpnv4 session to PE-1~ update-source Loopback0 address-family vpnv4 unicast route-reflector-client advertise vpnv4 unicast re-originated next-hop-self ! router bgp 65001 ibgp policy out enforce-modifications neighbor 8.8.8.8 remote-as 65001 description ~vpnv4 session to DCI-1~ update-source Loopback0 address-family vpnv4 unicast route-reflector-client advertise vpnv4 unicast re-originated next-hop-self ! ! neighbor 10.10.10.10 remote-as 65001 description ~vpnv4 session to PE-1~ update-source Loopback0 address-family vpnv4 unicast route-reflector-client advertise vpnv4 unicast re-originated next-hop-self ! Lets verify the routing table and BGP VPNv4 control-plane on PE-1. 
PE-1#RP/0/RP0/CPU0#PE-1#show route vrf 10Gateway of last resort is not setB 10.0.0.20/32 [200/0] via 8.8.8.8 (nexthop in vrf default), 00#02#50 [200/0] via 9.9.9.9 (nexthop in vrf default), 00#02#50B 10.0.0.40/32 [200/0] via 8.8.8.8 (nexthop in vrf default), 00#02#50 [200/0] via 9.9.9.9 (nexthop in vrf default), 00#02#50L 111.1.1.1/32 is directly connected, 1d00h, Loopback100RP/0/RP0/CPU0#PE-1#RP/0/RP0/CPU0#PE-1#show bgp vpnv4 unicast rd 8.8.8.8#0 10.0.0.20/32 detail BGP routing table entry for 10.0.0.20/32, Route Distinguisher# 8.8.8.8#0Versions# Process bRIB/RIB SendTblVer Speaker 207 207 Flags# 0x00040001+0x00010200; Last Modified# Mar 8 14#39#48.767 for 1d13hPaths# (2 available, best #1) Not advertised to any peer Path #1# Received by speaker 0 Flags# 0x4000000025060005, import# 0x3f Not advertised to any peer Local 8.8.8.8 (metric 10) from 8.8.8.8 (1.1.1.1) Received Label 64000 Origin IGP, localpref 100, valid, internal, best, group-best, import-candidate, not-in-vrf Received Path ID 0, Local Path ID 0, version 207 Extended community# SoO#1.1.1.1#10 0x060e#0000.0000.000a RT#110#110 Originator# 1.1.1.1, Cluster list# 8.8.8.8, 6.6.6.6 Path #2# Received by speaker 0 Flags# 0x4000000024020005, import# 0x16 Not advertised to any peer Local 9.9.9.9 (metric 10) from 9.9.9.9 (1.1.1.1) Received Label 64002 Origin IGP, localpref 100, valid, internal, import-candidate, not-in-vrf Received Path ID 0, Local Path ID 0, version 0 Extended community# SoO#1.1.1.1#10 0x060e#0000.0000.000a RT#110#110 Originator# 1.1.1.1, Cluster list# 9.9.9.9, 8.8.8.8, 6.6.6.6RP/0/RP0/CPU0#PE-1#RP/0/RP0/CPU0#PE-1#show cef vrf 10 10.0.0.20/3210.0.0.20/32, version 228, internal 0x5000001 0x0 (ptr 0x8d1ccacc) [1], 0x0 (0x0), 0x208 (0x8d9fe0e0) Updated Mar 8 14#46#37.085 Prefix Len 32, traffic index 0, precedence n/a, priority 3 via 8.8.8.8/32, 5 dependencies, recursive, bgp-multipath [flags 0x6080] path-idx 0 NHID 0x0 [0x8cce6d08 0x0] recursion-via-/32 next hop VRF - 'default', table - 0xe0000000 next hop 8.8.8.8/32 via 16008/0/21 next hop 192.8.10.1/32 BE81 labels imposed {ImplNull 64000} via 9.9.9.9/32, 5 dependencies, recursive, bgp-multipath [flags 0x6080] path-idx 1 NHID 0x0 [0x8cce8268 0x0] recursion-via-/32 next hop VRF - 'default', table - 0xe0000000 next hop 9.9.9.9/32 via 16009/0/21 next hop 192.9.10.1/32 BE91 labels imposed {ImplNull 64002}RP/0/RP0/CPU0#PE-1# The routing table on PE-1 shows the hosts routes of EVPN fabric are learnt in VRF 10. We have DCI-1 and DCI-2 as the next-hops to get to host prefixes in EVPN fabric. This accomplishes the reachability from PE-1 to host prefixes on Leafs.Next we will configure the DCI routers to re-originate the routes to its BGP EVPN neighbor that are received from PE-1 via VPNv4 address-family. This will need two knobs configured in BGP, “import re-originate stitching-rt” and “advertise vpnv4 unicast re-originated stitching-rt”.Re-originate VPNv4 routes using EVPN stitching RT “import re-originate stitching-rt”#This is configured under VPNv4 address-family to enable import of VPNv4 routes with normal RT 110#110 and re-originate it with EVPN stitching-rt.Advertise re-originated routes to EVPN “advertise vpnv4 unicast re-originated stitching-rt”#Configure “advertise vpnv4 unicast re-originated stitching-rt” keyword under EVPN address family. This will configure advertisement of vpnv4 routes to BGP EVPN neighbors. The route targets will change from vpnv4 RT 110#110 to EVPN stitching route target before advertising to EVPN neighbors. DCI advertises this as EVPN route type 5. 
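To keep the two directions straight before looking at the full DCI configuration, here is a purely conceptual Python sketch of the re-origination step. It is not how BGP is implemented; it only restates the behaviour described above# swap between the EVPN stitching RT (10#10) and the VPNv4 RT (110#110), rewrite the next-hop to the DCI loopback (inline-RR with next-hop-self), and allocate a new local label. The prefix, next-hop and label values are taken from the outputs in this post.

```python
# Conceptual illustration only -- not an actual BGP implementation.
EVPN_STITCHING_RT = "10:10"    # EVPN route-target used inside the fabric
VPNV4_RT = "110:110"           # VPNv4 route-target used towards PE-1

def reoriginate(route, direction, dci_loopback, new_label):
    """Return the route as the DCI re-advertises it.

    direction: 'evpn_to_vpnv4' (fabric routes advertised to PE-1) or
               'vpnv4_to_evpn' (PE-1 prefixes advertised into the fabric
               as EVPN route-type 5)."""
    out = dict(route)
    if direction == "evpn_to_vpnv4":
        out["route_targets"] = [VPNV4_RT]           # stitching RT -> normal VPNv4 RT
    else:
        out["route_targets"] = [EVPN_STITCHING_RT]  # VPNv4 RT -> EVPN stitching RT
        out["evpn_route_type"] = 5                  # advertised as an EVPN type-5 prefix route
    out["next_hop"] = dci_loopback                  # next-hop-self on the DCI
    out["label"] = new_label                        # locally allocated label
    return out

# PE-1's Loopback100 prefix re-originated by DCI-1 towards the EVPN fabric.
pe1_route = {"prefix": "111.1.1.1/32", "route_targets": [VPNV4_RT],
             "next_hop": "10.10.10.10", "label": 24017}
print(reoriginate(pe1_route, "vpnv4_to_evpn", dci_loopback="8.8.8.8", new_label=64000))
```

The corresponding IOS XR configuration for DCI-1 and DCI-2 follows.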
DCI-1 DCI-2 router bgp 65001 neighbor 6.6.6.6 remote-as 65001 description ~BGP-EVPN session to Spine-1~ update-source Loopback0 address-family l2vpn evpn import stitching-rt re-originate advertise vpnv4 unicast re-originated stitching-rt next-hop-self ! ! neighbor 7.7.7.7 remote-as 65001 description ~BGP-EVPN session to Spine-2~ update-source Loopback0 address-family l2vpn evpn import stitching-rt re-originate advertise vpnv4 unicast re-originated stitching-rt next-hop-self ! ! neighbor 9.9.9.9 remote-as 65001 description ~vpnv4 session to DCI-1~ update-source Loopback0 address-family vpnv4 unicast import re-originate stitching-rt route-reflector-client advertise vpnv4 unicast re-originated next-hop-self ! ! neighbor 10.10.10.10 remote-as 65001 description ~vpnv4 session to PE-1~ update-source Loopback0 address-family vpnv4 unicast import re-originate stitching-rt route-reflector-client advertise vpnv4 unicast re-originated next-hop-self ! ! router bgp 65001 neighbor 6.6.6.6 remote-as 65001 description ~BGP session to Spine-1~ update-source Loopback0 address-family l2vpn evpn import stitching-rt re-originate advertise vpnv4 unicast re-originated stitching-rt next-hop-self ! ! neighbor 7.7.7.7 remote-as 65001 description ~BGP session to Spine-2~ update-source Loopback0 address-family l2vpn evpn import stitching-rt re-originate advertise vpnv4 unicast re-originated stitching-rt next-hop-self ! ! neighbor 8.8.8.8 remote-as 65001 description ~vpnv4 session to DCI-1~ update-source Loopback0 address-family vpnv4 unicast import re-originate stitching-rt route-reflector-client advertise vpnv4 unicast re-originated next-hop-self ! ! neighbor 10.10.10.10 remote-as 65001 description ~vpnv4 session to PE-1~ update-source Loopback0 address-family vpnv4 unicast import re-originate stitching-rt route-reflector-client advertise vpnv4 unicast re-originated next-hop-self ! ! Finally lets observe the routing table and BGP-EVPN control-plane on Leafs to verify PE-1 prefix is reachable. 
Leaf-1#RP/0/RP0/CPU0#Leaf-1#show route vrf 10Gateway of last resort is not setC 10.0.0.0/24 is directly connected, 04#17#17, BVI10L 10.0.0.1/32 is directly connected, 04#17#17, BVI10B 10.0.0.40/32 [200/0] via 2.2.2.2 (nexthop in vrf default), 00#55#51B 111.1.1.1/32 [200/0] via 8.8.8.8 (nexthop in vrf default), 00#05#26 [200/0] via 9.9.9.9 (nexthop in vrf default), 00#05#26RP/0/RP0/CPU0#Leaf-1#Leaf-2#RP/0/RP0/CPU0#Leaf-2#show route vrf 10Gateway of last resort is not setC 10.0.0.0/24 is directly connected, 04#26#59, BVI10L 10.0.0.1/32 is directly connected, 04#26#59, BVI10B 10.0.0.20/32 [200/0] via 1.1.1.1 (nexthop in vrf default), 01#05#14B 111.1.1.1/32 [200/0] via 8.8.8.8 (nexthop in vrf default), 00#10#48 [200/0] via 9.9.9.9 (nexthop in vrf default), 00#10#48RP/0/RP0/CPU0#Leaf-2#Leaf-1#RP/0/RP0/CPU0#Leaf-1#show cef vrf 10 111.1.1.1/32111.1.1.1/32, version 17, internal 0x5000001 0x0 (ptr 0x97d58d24) [1], 0x0 (0x0), 0x208 (0x98f38180) Updated Apr 21 05#07#11.003 Prefix Len 32, traffic index 0, precedence n/a, priority 3 via 8.8.8.8/32, 3 dependencies, recursive, bgp-multipath [flags 0x6080] path-idx 0 NHID 0x0 [0x97074eb8 0x0] recursion-via-/32 next hop VRF - 'default', table - 0xe0000000 next hop 8.8.8.8/32 via 16008/0/21 next hop 192.1.6.1/32 BE16 labels imposed {16008 64000} next hop 192.1.7.1/32 BE17 labels imposed {16008 64000} via 9.9.9.9/32, 3 dependencies, recursive, bgp-multipath [flags 0x6080] path-idx 1 NHID 0x0 [0x970746e8 0x0] recursion-via-/32 next hop VRF - 'default', table - 0xe0000000 next hop 9.9.9.9/32 via 16009/0/21 next hop 192.1.6.1/32 BE16 labels imposed {16009 64000} next hop 192.1.7.1/32 BE17 labels imposed {16009 64000}RP/0/RP0/CPU0#Leaf-1#Leaf-1#RP/0/RP0/CPU0#Leaf-1#show bgp l2vpn evpn route-type 5BGP router identifier 1.1.1.1, local AS number 65001BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0x0 RD version# 0BGP main routing table version 194BGP NSR Initial initsync version 8 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsStatus codes# s suppressed, d damped, h history, * valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discardOrigin codes# i - IGP, e - EGP, ? 
- incomplete Network Next Hop Metric LocPrf Weight PathRoute Distinguisher# 8.8.8.8#0*>i[5][0][32][111.1.1.1]/80 8.8.8.8 0 100 0 ?* i 8.8.8.8 0 100 0 ?Route Distinguisher# 9.9.9.9#0*>i[5][0][32][111.1.1.1]/80 9.9.9.9 0 100 0 ?* i 9.9.9.9 0 100 0 ?Processed 2 prefixes, 4 pathsRP/0/RP0/CPU0#Leaf-1#Leaf-1#RP/0/RP0/CPU0#Leaf-1#show bgp l2vpn evpn rd 8.8.8.8#0 [5][0][32][111.1.1.1]/80 detail BGP routing table entry for [5][0][32][111.1.1.1]/80, Route Distinguisher# 8.8.8.8#0Versions# Process bRIB/RIB SendTblVer Speaker 192 192 Flags# 0x00040001+0x00010000; Last Modified# Apr 21 05#07#10.766 for 00#07#07Paths# (2 available, best #1) Not advertised to any peer Path #1# Received by speaker 0 Flags# 0x4000000025060005, import# 0x1f, EVPN# 0x1 Not advertised to any peer Local 8.8.8.8 (metric 20) from 6.6.6.6 (10.10.10.10) Received Label 64000 Origin incomplete, metric 0, localpref 100, valid, internal, best, group-best, import-candidate, not-in-vrf Received Path ID 0, Local Path ID 1, version 192 Extended community# Flags 0x6# RT#10#10 Originator# 10.10.10.10, Cluster list# 6.6.6.6, 8.8.8.8 EVPN ESI# 0000.0000.0000.0000.0000, Gateway Address # 0.0.0.0 Path #2# Received by speaker 0 Flags# 0x4000000020020005, import# 0x20, EVPN# 0x1 Not advertised to any peer Local 8.8.8.8 (metric 20) from 7.7.7.7 (10.10.10.10) Received Label 64000 Origin incomplete, metric 0, localpref 100, valid, internal, not-in-vrf Received Path ID 0, Local Path ID 0, version 0 Extended community# Flags 0x6# RT#10#10 Originator# 10.10.10.10, Cluster list# 7.7.7.7, 8.8.8.8 EVPN ESI# 0000.0000.0000.0000.0000, Gateway Address # 0.0.0.0RP/0/RP0/CPU0#Leaf-1#Reachability to Host prefixes from PE-1#RP/0/RP0/CPU0#PE-1#ping vrf 10 10.0.0.20 source 111.1.1.1Type escape sequence to abort.Sending 5, 100-byte ICMP Echos to 10.0.0.20, timeout is 2 seconds#!!!!!Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 msRP/0/RP0/CPU0#PE-1#ping vrf 10 10.0.0.40 source 111.1.1.1Type escape sequence to abort.Sending 5, 100-byte ICMP Echos to 10.0.0.40, timeout is 2 seconds#!!!!!Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/3 msRP/0/RP0/CPU0#PE-1# The routing table on Leafs show reachability to PE-1’s prefix (111.1.1.1/32) with DCI-1 and DCI-2 as the next-hop. The EVPN control-plane of Leafs show the route is received from DCI-1 (8.8.8.8) and DCI-2 (9.9.9.9) routers with PE-1 (10.10.10.10) as the originator.Successful Ping from PE-1 to Host prefixes verifies that the BGP EVPN and L3VPN interworking is operational and end-to-end reachability from Hosts connected to Leaf-1/Leaf-2 to PE-1 is established.Task 5# Advertise summarized routes and filter host routes on DCIIn this setup, Leafs are advertising host-routes “10.0.0.20/32 and 10.0.0.40/32” towards DCI routers and from DCI routers eventually to PE-1. Generally in a network, there are going to be a large number of host-routes advertised from evpn fabric. From scalability and optimization point of view it is not a good approach to advertise the host-routes outside of the EVPN fabric. Therefore, it is recommended to advertise summarized prefix routes outside of EVPN fabric and filter host-routes at DCI routers. In this post, we will advertise the prefix-route (evpn route-type 5) from Leafs for subnet 10.0.0.0/24 and filter the host-routes (x.x.x.x/32) on DCI routers.Note# EVPN uses route-type 2 to advertise host-routes x.x.x.x/32 and route-type 5 to advertise subnet x.x.x.0/24.Below configuration is needed on the Leafs to advertise EVPN prefix-route (route-type 5). 
router bgp 65001 neighbor 6.6.6.6 remote-as 65001 description ~BGP session to Spine-1~ update-source Loopback0 address-family l2vpn evpn advertise vpnv4 unicast re-originated ! ! neighbor 7.7.7.7 remote-as 65001 description ~BGP session to Spine-2~ update-source Loopback0 address-family l2vpn evpn advertise vpnv4 unicast re-originated ! ! Apply Route-Policies under BGP neighbors to filter routes on DCI routers. We are filtering EVPN host-routes as well as VPNv4 routes to avoid routing loops due to routes re-origination. router bgp 65001 neighbor evpn-neighbor-Spines address-family l2vpn evpn route-policy vpnv4-filter in ---filter routes with VPNv4 community route-policy vpnv4-community-set out ---Set VPNv4 community ! ! neighbor vpnv4-neighbors address-family vpnv4 unicast route-policy evpn-filter in ---filter routes with EVPN community route-policy rt2-filter out ---filter host-routes and set EVPN community ! ! Reference Route-Policy for route filtering on DCI routers. Route-Policy to filter routes#community-set evpn 1#111end-set!community-set vpnv4 1#222end-set!route-policy rt2-filter if destination in (0.0.0.0/0 ge 32) then drop else set community evpn endifend-policy!route-policy evpn-filter if community matches-any evpn then drop else pass endif end-policy!route-policy vpnv4-filter if community matches-any vpnv4 then drop else pass endifend-policy!route-policy vpnv4-community-set set community vpnv4end-policy!end Lets have a look at the BGP EVPN control-plane on DCI router to verify the 10.0.0.0/24 prefix route (route-type 5) is learnt. RP/0/RP0/CPU0#DCI-1#show bgp l2vpn evpn rd 1.1.1.1#0 [5][0][24][10.0.0.0]/80 detailBGP routing table entry for [5][0][24][10.0.0.0]/80, Route Distinguisher# 1.1.1.1#0Versions# Process bRIB/RIB SendTblVer Speaker 48 48 Flags# 0x00040001+0x00010000; Last Modified# Apr 22 20#48#28.740 for 01#37#33Paths# (2 available, best #1) Not advertised to any peer Path #1# Received by speaker 0 Flags# 0x4000600025060005, import# 0x3f Not advertised to any peer Local 1.1.1.1 (metric 20) from 6.6.6.6 (1.1.1.1) Received Label 24014 Origin incomplete, metric 0, localpref 100, valid, internal, best, group-best, import-candidate, reoriginate, not-in-vrf Received Path ID 0, Local Path ID 1, version 48 Extended community# Flags 0x6# RT#10#10 Originator# 1.1.1.1, Cluster list# 6.6.6.6 EVPN ESI# 0000.0000.0000.0000.0000, Gateway Address # 0.0.0.0 Path #2# Received by speaker 0 Flags# 0x4000600020020005, import# 0x20 Not advertised to any peer Local 1.1.1.1 (metric 20) from 7.7.7.7 (1.1.1.1) Received Label 24014 Origin incomplete, metric 0, localpref 100, valid, internal, reoriginate, not-in-vrf Received Path ID 0, Local Path ID 0, version 0 Extended community# Flags 0x6# RT#10#10 Originator# 1.1.1.1, Cluster list# 7.7.7.7 EVPN ESI# 0000.0000.0000.0000.0000, Gateway Address # 0.0.0.0RP/0/RP0/CPU0#DCI-1# The above output from DCI-1 shows that subnet 10.0.0.0/24 is learnt from Leaf-1 (RD#1.1.1.1#0) via route-type 5 with route-target 10#10. Lets, have a look at the PE-1’s routing table to verify the subnet route is learnt. 
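As a side note on the rt2-filter policy shown above, its match condition (anything in 0.0.0.0/0 ge 32, in other words host-routes) is easy to express and test outside the router. The short Python sketch below only illustrates that condition and the community marking; it is not a replacement for RPL. PE-1's routing table, shown right after, confirms the same result on the box.

```python
import ipaddress

EVPN_COMMUNITY = "1:111"   # matches the 'evpn' community-set used in the route-policies above

def rt2_filter(prefix):
    """Python rendition of the rt2-filter route-policy:
    drop host-routes, set the evpn community on everything else."""
    net = ipaddress.ip_network(prefix, strict=False)
    if net.prefixlen >= 32:                     # 'destination in (0.0.0.0/0 ge 32)'
        return {"action": "drop"}
    return {"action": "pass", "set_community": EVPN_COMMUNITY}

for p in ["10.0.0.20/32", "10.0.0.40/32", "10.0.0.0/24"]:
    print(p, rt2_filter(p))
# The two host-routes are dropped; only the subnet route 10.0.0.0/24 is advertised to PE-1.
```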
RP/0/RP0/CPU0#PE-1#show route vrf 10 Gateway of last resort is not setB 10.0.0.0/24 [200/0] via 8.8.8.8 (nexthop in vrf default), 01#08#30 [200/0] via 9.9.9.9 (nexthop in vrf default), 01#08#30L 111.1.1.1/32 is directly connected, 1d21h, Loopback100RP/0/RP0/CPU0#PE-1#RP/0/RP0/CPU0#PE-1#ping 10.0.0.20 vrf 10 Type escape sequence to abort.Sending 5, 100-byte ICMP Echos to 10.0.0.20, timeout is 2 seconds#!!!!!Success rate is 100 percent (5/5), round-trip min/avg/max = 2/2/3 msRP/0/RP0/CPU0#PE-1#ping 10.0.0.40 vrf 10 Type escape sequence to abort.Sending 5, 100-byte ICMP Echos to 10.0.0.40, timeout is 2 seconds#!!!!!Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 msRP/0/RP0/CPU0#PE-1# Verifying the output of PE-1’s routing table shows that the Leafs host-routes (x.x.x.x/32) are filtered out and not learnt anymore. However the subnet route 10.0.0.0/24 is learnt and programmed with DCI routers as the next-hops.Successful Ping from PE-1 to Host prefixes confirms that the BGP EVPN and L3VPN interworking is operational and end-to-end reachability from Hosts connected to Leaf-1/Leaf-2 to PE-1 is established. This concludes the configuration and implementation of BGP EVPN and L3VPN interworking on IOS-XR routers.For deep dive details of BGP EVPN, refer to our e-vpn.io webpage, it has a lot of material explaining the core concepts of EVPN, its operations and troubleshooting details.", "url": "/tutorials/bgp-evpn-and-l3vpn-interworking/", "author": "Ahmad Bilal Siddiqui", "tags": "iosxr, cisco, EVPN" } , "tutorials-macsec-on-ncs-5500-technology-and-platform-overview": { "title": "MACsec on NCS-5500 - Technology and Platform Overview", "content": " On This Page MACsec on NCS 5500 - Technology and Platform Overview Introduction MACsec Technology Overview and Benefits Commonly used MACsec Terminologies MACsec Data Plane Use Cases MACsec Basic Configuration MACsec Platform Support MACsec on NCS 5500 - Technology and Platform OverviewIntroductionThis document is the first part of a series, and provides an overview of MACsec technology, data plane overhead, basic configuration and platform support. MACsec is a line-rate Ethernet encryption and works at Layer 2, with hop-by-hop links. MACsec is based on IEEE standards, and is supported in Cisco’s NCS-5500 and many other Cisco Platforms. MACsec products based on IEEE MKA standards will interop with each other seamlessly.MACsec Technology Overview and BenefitsMACsec is a Layer 2 IEEE 802.1AE standard for encrypting packets between two MACsec-capable routers. It supports IEEE 802.1AEbn 256bit encryption and uses Advanced Encryption Standard (AES) algorithm. MACsec secures data on physical media, making it impossible for data to be compromised at higher layers. 
Security breaches can occur at any layer, and MACsec prevents Layer 2 security breaches, including packet sniffing, packet eavesdropping, DoS attacks, tampering, MAC address spoofing, ARP spoofing, etc.Some of the major MACsec benefits are# Confidentiality# MACsec helps ensure confidentiality by providing strong encryption at Layer 2 Integrity# MACsec provides integrity checking to help ensure that data cannot be modified in transit Flexibility# You can selectively enable MACsec on a per-interface basis by attaching a MACsec policy, which gives the flexibility to have both secured (MACsec-enabled) and non-secured ports operating on the same router Network Intelligence# Unlike end-to-end, Layer 3 encryption techniques that hide the contents of packets from the network devices they cross, MACsec encrypts packets on a hop-by-hop basis at Layer 2, allowing the network to inspect, monitor, mark, and forward traffic according to your existing policiesBecause MACsec is a hop-by-hop encryption technology, frames get encrypted as they go out on the wire (in the PHY or FPGA, after the NPU operation) and get decrypted before they ingress the NPU. The NPU therefore has a complete view of the data and can provide any services required for these packets.MACsec allows you to secure an Ethernet link, including all control plane protocol packets except EAPoL. It uses the IEEE 802.1X MACsec Key Agreement protocol (MKA) to exchange session keys and manage encryption keys.Commonly used MACsec TerminologiesMACsec Key Agreement (MKA), defined in IEEE 802.1X, is a key agreement protocol for discovering MACsec peers and negotiating keys.Secure Channel (SC) is a security relationship used to provide security guarantees for frames transmitted from one member of a CA to the others. An SC is supported by a sequence of SAs, thus allowing the periodic use of fresh keys without terminating the relationship.Secure Channel Identifier (SCI) is a globally unique identifier for a secure channel, comprising a globally unique MAC address and a Port Identifier, unique within the system allocated that address.Connectivity Association Key (CAK) is a long-lived master key used to generate all other keys used for MACsec. In our implementation, it is the Pre-Shared Key (PSK) configured through a key chain. The CAK is a hex string of 16 bytes for the AES 128-bit cipher and 32 bytes for the 256-bit cipher.CAK Key Name (CKN) is used to identify the CAK. It is a hex string of 1 to 32 bytes. The CKN has to be the same on both sides to form a session successfully.Secure Association Key (SAK) is derived by the elected Key Server from the CAK, and the SAK is used by the router/end devices to encrypt and decrypt traffic for a given session.Key Server Priority is an optional value, which can be configured in the MACsec policy.Key Server (KS) is the device that controls key generation and distribution of the SAK to clients (non-KS). The device with the lowest key server priority value is preferred to win the key server election. In case of a tie, the lowest SCI value wins.MACsec Data PlaneOnce we enable MACsec on a link, both the Tx and Rx SCI (Secure Channel Identifier) and the associated Tx & Rx SAs distributed by the Key Server get programmed in hardware. Any traffic leaving the interface gets encrypted using the Tx SA policy programmed in hardware, with the exception of EAPoL packets, which take a different path inside the MACsec core (Clear Path). Traffic entering the interface gets decrypted using the programmed Rx SA policy. 
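One rule from the terminology above that is easy to get backwards is the key server election# the lowest key server priority value wins, and the lowest SCI breaks a tie. A tiny Python sketch (purely illustrative, not the MKA implementation) makes the ordering explicit.

```python
def elect_key_server(peers):
    """Pick the MKA key server from (key_server_priority, sci) tuples.

    Lower priority value wins; on a tie, the lower SCI wins -- the rule
    described in the terminology section above. Purely illustrative."""
    return min(peers, key=lambda peer: (peer[0], peer[1]))

# Two routers with the same priority: the one with the lower SCI becomes key server.
# The SCI values are hypothetical (MAC address / port-id), just for illustration.
peers = [
    (16, "008a.9624.5f2c/0001"),
    (16, "008a.9624.5f00/0001"),
]
print(elect_key_server(peers))   # -> (16, '008a.9624.5f00/0001')
```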
Once SCI gets programmed for that interface, MACsec policy gets pushed to the hardware which enables the interface to apply access-control policy (should / must secure) for all traffics leaving interface till SA gets programmed.MACsec inserts two tags for all data frames, which egress the interface. Which are SecTag and ICV. The value of these additional overheads can be from 16 to 32Byte maximum. Both SecTag and ICV can vary from 8B to 16B depending upon the information it carries and cipher it uses. SecTag carry an 8 byte SCI that is optional. The authentication is provided to the complete frame except CRC and ICV part, which resides at end of the frame. And Encryption is provided starting from VLAN header (if used) till Payload.Cisco’s implementation always uses 16B SecTag and 16B ICV, so the data plane overhead is 32B.Use CasesOne common use case of NCS-5500 can be link MACsec on all regular IP core / MPLS core devices which are generally part of service provider network. MACsec can simply be enabled on all back to back links over IP/MPLS core devices as an underlay protocols. This will still get you the high-speed lean core network with complete security provided by MACsec encryption as MACsec works on wire speed.As you can see in above figure, MACsec is enabled between each links connected between each core devices on your WAN from PE to PE. Take a look at below figure, which gives the comparison of different possible frames over IP/MPLS network in both clear and encrypted format. As you can see, Encryption starts right after Source MAC address and ends just before FCS in all kinds frames gets into IP/MPLS core.MACsec over bundle is supported on NCS55xx family of products. MACsec is enabled on all bundle member interfaces individually and we will have separate sessions for each member as MACsec works on MAC layer. Since MACsec is enabled on per member interface basis, we can have bundle, which contains mixing of MACsec and Non MACsec enabled links as member of same bundle interface.MACsec Basic ConfigurationMACsec can be configured in 3 simple steps# Create Key Chain (to configure the PSK - CKN & CAK) Create MACsec policy (optional, to configure encryption cipher & other policies etc.) Attach created key chain and policy to an interface.key chain psk_name macsec  key ckn-2-to-64-hex-char   key-string cak-32-or-64-hex-char cryptographic-algorithm {aes-128-cmac|aes-256-cmac}    lifetime start-time start-date {end-time end-date|duration seconds|infinite}macsec-policy policy_name [optional-policies]interface Interface_name macsec psk-keychain psk_name [policy policy_name]A basic MACsec configuration, with default policy GCM-AES-XPN-256# key chain psk1 macsec key 01 key-string 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef cryptographic-algorithm aes-256-cmac lifetime 00#00#00 january 01 2020 infinite ! !!interface HundredGigE0/0/2/0 macsec psk-keychain psk1! RP/0/RP0/CPU0#55A2-MOD-SE-6625#show macsec mka summaryNODE# node0_0_CPU0======================================================================================== Interface-Name Status Cipher-Suite KeyChain PSK/EAP CKN======================================================================================== Hu0/0/2/0 Secured GCM-AES-XPN-256 psk1 PRIMARY 01Total MACSec Sessions # 1 Secured Sessions # 1 Pending Sessions # 0 MACsec Platform SupportMACsec is supported on both modular and fixed platforms. However, on some platforms, not all ports will support MACsec. 
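Before the platform summary, a quick back-of-the-envelope view of that 32-byte data-plane overhead (16B SecTag plus 16B ICV in Cisco's implementation)# it is negligible on large frames but noticeable on small ones. A minimal Python sketch, ignoring preamble and inter-frame gap to keep the arithmetic simple#

```python
SECTAG_BYTES = 16   # Cisco's implementation always uses a 16-byte SecTag (SCI included)
ICV_BYTES = 16      # and a 16-byte ICV, i.e. 32 bytes of MACsec overhead per frame

def macsec_overhead_pct(frame_bytes):
    """Share of on-wire bytes consumed by MACsec for a given Ethernet frame size."""
    extra = SECTAG_BYTES + ICV_BYTES
    return 100.0 * extra / (frame_bytes + extra)

for size in (64, 512, 1500, 9000):
    print(f"{size:5d}-byte frame: {macsec_overhead_pct(size):.1f}% MACsec overhead on the wire")
# 64-byte frames  : ~33.3%
# 512-byte frames : ~5.9%
# 1500-byte frames: ~2.1%
# 9000-byte frames: ~0.4%
```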
Below is a summary of the platforms with support ports highlighted.NC55-36x100G-S MACsec Modular Line CardAll 36x100G ports support MACsecNC55-6x200-DWDM-S IPoDWDM Modular Line CardAll 6x100G/200G ports support MACsecNC55-MOD-A(-SE)-S MOD Line Card with MPAAll 12x10G, 2x40G ports and both MPA support MACsecNCS-55A1-36H(-SE)-S Fixed ChassisAll 36x100G ports support MACsecNCS-55A2-MOD(-SE)-S MOD Fixed Chassis with MPAThe 16x25G ports and both MPA support MACsec, while the 24x10G ports do not.NCS-55A1-24Q6H-S Fixed ChassisThe 6x100G and 16 out of 24x25G ports support MACsec, while the 24x10G and 8 out of 24x25G ports do not.NCS-55A1-48Q6H Fixed ChassisThe 6x100G ports support MACsec, while the 48x25G ports do not.NCS-55A1-24Q6H-SS Fixed ChassisAll 6x100G ports, 24x25G ports and 24x10G ports support MACsec.NC55-MPA-2TH-S NC55-MPA-1TH2H-S NC55-MPA-4H-S NC55-MPA-12T-SAll MPA ports, 10G, 100G and CFP2 support MACsec.Platform Matrix for MACsec Support Platform SFP+ SFP28 QSFP+ QSFP28 CFP2 NC55-36x100G-S - - - 10G/40G/4x10G/100G - NC55-6x200-DWDM-S - - - - Nx100G NC55-MOD-A(-SE)-S 10G - 4x10G/40G - - NCS-55A1-36H(-SE)-S - - - 10G/40G/4x10G/4x25G/100G - NCS-55A2-MOD(-SE)-S - 10G/25G - - - 400G MPA’s 10G - - 40G/4x10G/4x25G/100G Nx100G NCS-55A1-48Q6H - - - 40G/4x10G/4x25G/100G - NCS-55A1-24Q6H-S - 10G/25G - 40G/4x10G/4x25G/100G - NCS-55A1-24Q6H-SS 10G 10G/25G - 40G/4x10G/4x25G/100G - Please note there is no 1G or 100M support for MACsec.", "url": "/tutorials/macsec-on-ncs-5500-technology-and-platform-overview/", "author": "Vincent Ng", "tags": "iosxr, MACsec, NCS-5500" } , "#": {} , "tutorials-bgp-flowspec-to-acl-script": { "title": "BGP FlowSpec to ACL Script", "content": " BGPFS2ACL Introduction Video The script Github Description / Match Action Support of packet-length ranges Support of fragments Examples Example1# Destination IP and UDP source port Example2# Destination IP and UDP destination port Example3# Source Prefix, Destination host, Source UDP Port Example4# Destination Host, UDP Source Port Range Example5# Packet Length Range Example6# Multiple Packet-length Ranges Example7# ICMP Type and Code Example8# Fragments Example9# Action Redirect-to-IP Validation / Tests Test 01# Starting the script with existing ACL config Test 02# Ignore rules not supported by the script Test 03# Creation of a new IPv4 interface Test 04# BGP FS session lost Test 05# manipulation of the created ACL Test 06# add up multiple FS rules Misc Conclusion You can find more content related to NCS5500 including routing memory management, VRF, URPF, Netflow, QoS, EVPN, Flowspec implementation following this link.IntroductionBGP Flowspec is a technology we described in multiple posts and videos# https#//xrdocs.io/ncs5500/tutorials/bgp-flowspec-on-ncs5500/ Cisco NCS5500 Flowspec (Principles and Configuration) Part1 Cisco NCS5500 Flowspec (Auto-Mitigation of a Memcached Attack) Part2In July 2020, we support BGP Flowspec only on the NCS5500 products equiped with the following ASICs and memories# Jericho+ with OP eTCAM NCS55A1-36H-SE-S NCS55A2-MOD-SE-S NC55-36X100G-A-SE Jericho2 with OP2 eTCAM NC57-18DD-SE All these systems and line cards can program the flowspec rules in the datapath, matching packets based on a description and applying actions like drop, rate-limit, remark dscp, redirect to IP or to VRF. But the rest of the portfolio can’t support it. 
Nevertheless, it's possible to configure a BGP FS client on all IOS XR platforms, even those that are not able to program the rules at the hardware level.That's a useful feature because we can develop a program, executed on the router itself, to convert the received flowspec rules into configuration lines. That's exactly what the bgpfs2acl script does.Routers powered by the Jericho+ with large LPM (NCS55A1-24H, NCS55A1-48Q6H, NCS55A1-24Q6H-SS) are used in peering positions and are perfect candidates for this script.Thanks to Carl Fredrik Lagerfeldt and Johan Gustawsson, who first brought up the original idea.VideoIn this interview, Mike explained the basic tools used by the script to operate (ZTP config, etc). Since the day of this interview, Dmitrii Rudnev has worked on the script and changed the structure entirely, extending the capabilities significantly.The scriptGithubCode is available on Github# https#//github.com/ios-xr/bgpfs2acl.Description / MatchThe script checks the flowspec rules every 30 seconds.In this release, it covers most description options, with the following exceptions# no match on DSCP field no match on TCP flags match on ICMP type and code works, but no support of lists or ranges in type and code no match on Don't FragmentActionIn terms of actions, it supports drop and redirect-to-IP but not# set dscp rate limit to X bps redirect to VRFSupport of packet-length rangesTo support matching on packet length (including ranges), it's necessary to enable a specific UDK configuration. We recommend configuring it if you plan to use the script, even before 7.0.1.hw-module profile tcam format access-list ipv4 src-addr dst-addr src-port dst-port proto packet-length frag-bit port-rangeWhen starting up, the script will check for the presence of this config line and warn that it will not be able to "translate" rules containing length ranges if it's not enabled.If it receives such a rule and the config is not present, it will trigger a message to inform that the rule has not been handled.Note# enabling this command requires a reload of the line card or chassis.Note2# the NCS5500 is limited in the number of range IDs; the script is not aware of this limitation and will try to configure as many ranges as received in the rules.Support of fragmentsACL match on fragment-type is only supported on systems with external TCAM because it requires enabling the compression feature. 
It goes against the main purpose of the script, which is to mimic the flowspec behavior on non-eTCAM routers.But matching fragments (without going in deeper level of subtleties like First Fragment, Last Fragment, Is Fragmented, etc) is supported, that why all frag-types described in the flowspec rule will be translated into the same ACL line “fragments”.ExamplesIn the next 9 examples, we demonstrate the capabilities of the script in term of match and actions.Cannonball is the BGP Flowspec Client, receiving the rules and where the script is executed.It’s an NCS-5501 (non-SE) running IOS XR 7.0.2#RP/0/RP0/CPU0#Cannonball#sh verCisco IOS XR Software, Version 7.0.2Copyright (c) 2013-2020 by Cisco Systems, Inc.Build Information# Built By # ahoang Built On # Fri Mar 13 22#56#17 PDT 2020 Built Host # iox-ucs-027 Workspace # /auto/srcarchive15/prod/7.0.2/ncs5500/ws Version # 7.0.2 Location # /opt/cisco/XR/packages/ Label # 7.0.2cisco NCS-5500 () processorSystem uptime is 3 days 13 hours 57 minutesRP/0/RP0/CPU0#Cannonball#sh platfNode Type State Config state--------------------------------------------------------------------------------0/RP0/CPU0 NCS-5501(Active) IOS XR RUN NSHUT0/RP0/NPU0 Slice UP0/FT0 NCS-1RU-FAN-FW OPERATIONAL NSHUT0/FT1 NCS-1RU-FAN-FW OPERATIONAL NSHUT0/PM0 NCS-1100W-ACFW OPERATIONAL NSHUT0/PM1 NCS-1100W-ACFW FAILED NSHUTRP/0/RP0/CPU0#Cannonball#Macrocarpa is the BGP Flowspec controller injecting/pushing the rules to Cannonball, it runs IOSXR 6.6.3.Example1# Destination IP and UDP source portOn the controler#class-map type traffic match-all CHARGEN match destination-address ipv4 7.7.7.7 255.255.255.255 match protocol udp match source-port 19policy-map type pbr Example1 class type traffic CHARGEN drop !!flowspec address-family ipv4 service-policy type pbr Example1!Before the receiving the BGP FS rule on the client side. The script is enabled and created a bgpfs2acl-ipv4 and applied it on the interface where no ACL was present (Te0/0/0/2). This ACL is empty at the moment with just a permit any any.Another ACL test2 exists and is applied to Te0/0/0/1.interface TenGigE0/0/0/1 ipv4 address 44.55.66.77 255.255.255.0 ipv4 access-group test2 ingress!interface TenGigE0/0/0/2 ipv4 address 55.66.77.88 255.255.255.0 ipv4 access-group bgpfs2acl-ipv4 ingress!ipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100503 permit ipv4 any any!The BGP FS rule is received on the client. The script kicks in and change the configuration.RP/0/RP0/CPU0#Cannonball# RP/0/RP0/CPU0#Jul 16 13#07#36.861 UTC# config[68949]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000359' to view the changes.RP/0/RP0/CPU0#Cannonball#We can see how the ACLs have been modified by the script#RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-listipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp any eq 19 host 7.7.7.7 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100503 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 
100504 deny udp any eq 19 host 7.7.7.7 100505 remark FLOWSPEC RULES END 100506 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#The ACE line 100501 represents the translation of our rule# match destination-address ipv4 7.7.7.7 255.255.255.255 match protocol udp match source-port 19Note the ACL entries created by the script are always “signaled” by two remarks “FLOWSPEC RULES BEGIN” and “FLOWSPEC RULES END”.Now we verify the behavior of the script when the BGP FS rule is removed#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 16 15#06#37.126 UTC# config[67938]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000360' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-listipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100503 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Example2# Destination IP and UDP destination portOn the controler#class-map type traffic match-all SunRPC match destination-address ipv4 7.7.7.7 255.255.255.255 match protocol udp match destination-port 111 end-class-map!policy-map type pbr Example2 class type traffic SunRPC drop ! class type traffic class-default ! end-policy-map!flowspec address-family ipv4 service-policy type pbr Example2 !!On the client#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 16 15#09#07.034 UTC# config[67321]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000361' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-listipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp any host 7.7.7.7 eq sunrpc 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100503 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100504 deny udp any host 7.7.7.7 eq sunrpc 100505 remark FLOWSPEC RULES END 100506 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Example3# Source Prefix, Destination host, Source UDP PortOn the controller#class-map type traffic match-all CHARGEN2 match destination-address ipv4 7.7.7.7 255.255.255.255 match source-address ipv4 80.2.1.0 255.255.255.0 match protocol udp match source-port 19 end-class-map!policy-map type pbr Example3 class type traffic CHARGEN2 drop ! class type traffic class-default ! end-policy-map!flowspec address-family ipv4 service-policy type pbr Example3 !!On the client#RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-listipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp 80.2.1.0/24 eq 19 host 7.7.7.7 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100503 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 
100504 deny udp 80.2.1.0/24 eq 19 host 7.7.7.7 100505 remark FLOWSPEC RULES END 100506 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Example4# Destination Host, UDP Source Port RangeOn the controller#class-map type traffic match-all NETBIOS match destination-address ipv4 7.7.7.7 255.255.255.255 match protocol udp match source-port 137-138 end-class-map!policy-map type pbr Example4 class type traffic NETBIOS drop ! class type traffic class-default ! end-policy-map!flowspec address-family ipv4 service-policy type pbr Example4 !!On the client#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 16 15#24#07.012 UTC# config[66986]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000365' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-listipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp any range netbios-ns netbios-dgm host 7.7.7.7 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100503 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100504 deny udp any range netbios-ns netbios-dgm host 7.7.7.7 100505 remark FLOWSPEC RULES END 100506 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Example5# Packet Length RangeNote it’s necessary to have the UDK hw-module configured (keep in mind it requires a reload).On the controller#class-map type traffic match-all DNS match destination-address ipv4 7.7.7.7 255.255.255.255 match protocol udp match source-port 53 match packet length 768-65535 end-class-map!policy-map type pbr Example5 class type traffic DNS drop ! class type traffic class-default ! end-policy-map!flowspec address-family ipv4 service-policy type pbr Example5 !!Nothing happens on the client.Expected since the NCS5500 doesn’t support packet length range larger than 16k.We change on the controller a range in the supported scope.class-map type traffic match-all DNS match destination-address ipv4 7.7.7.7 255.255.255.255 match protocol udp match source-port 53 match packet length 768-1600 end-class-map!policy-map type pbr Example5 class type traffic DNS drop ! class type traffic class-default ! end-policy-map!flowspec address-family ipv4 service-policy type pbr Example5 !!On the client#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 16 15#33#07.092 UTC# config[67272]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000367' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-listipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp any eq domain host 7.7.7.7 packet-length range 768 1600 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100503 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 
100504 deny udp any eq domain host 7.7.7.7 packet-length range 768 1600 100505 remark FLOWSPEC RULES END 100506 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#show controllers fia diagshell 0 ~diag field Ranges~ location 0/0/CPU0Node ID# 0/0/CPU0============================================================| Range Dump |============================================================| Qualifier | Range | Flags | Min | Max |============================================================| RangeCheck(26) | 1 | TCP UDP SrcPort | 49152 | 65535 |============================================================RP/0/RP0/CPU0#Cannonball#Example6# Multiple Packet-length RangesOn the controller#class-map type traffic match-all NTP2 match destination-address ipv4 7.7.7.7 255.255.255.255 match protocol udp match source-port 123 match packet length 1-35 37-45 47-75 77-219 221-65535 end-class-map!policy-map type pbr Example6 class type traffic NTP2 drop ! class type traffic class-default ! end-policy-map!flowspec address-family ipv4 service-policy type pbr Example6 !!On the client#RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-listipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp any eq ntp host 7.7.7.7 packet-length range 1 35 100502 deny udp any eq ntp host 7.7.7.7 packet-length range 37 45 100503 deny udp any eq ntp host 7.7.7.7 packet-length range 47 75 100504 deny udp any eq ntp host 7.7.7.7 packet-length range 77 219 100505 remark FLOWSPEC RULES END 100506 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100503 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100504 deny udp any eq ntp host 7.7.7.7 packet-length range 1 35 100505 deny udp any eq ntp host 7.7.7.7 packet-length range 37 45 100506 deny udp any eq ntp host 7.7.7.7 packet-length range 47 75 100507 deny udp any eq ntp host 7.7.7.7 packet-length range 77 219 100508 remark FLOWSPEC RULES END 100509 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Note# the range 221-65535 has been refused but all other ranges have been configured.On the controller again we remove one range#(on the controller)RP/0/RP0/CPU0#Macrocarpa#confRP/0/RP0/CPU0#Macrocarpa(config)#class-map type traffic match-all NTP2RP/0/RP0/CPU0#Macrocarpa(config-cmap)#no match packet length 1-35 37-45 47-75 77-219 221-65535RP/0/RP0/CPU0#Macrocarpa(config-cmap)# match packet length 1-35 37-45 47-75 221-65535RP/0/RP0/CPU0#Macrocarpa(config-cmap)#commitThu Jul 16 15#42#10.899 UTCRP/0/RP0/CPU0#Macrocarpa(config-cmap)#endRP/0/RP0/CPU0#Macrocarpa#On the client#RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-listipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp any eq ntp host 7.7.7.7 packet-length range 1 35 100502 deny udp any eq ntp host 7.7.7.7 packet-length range 37 45 100503 deny udp any eq ntp host 7.7.7.7 packet-length range 47 75 100504 remark FLOWSPEC RULES END 100505 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100503 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 
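The packet-length behaviour seen in Examples 5 and 6 can be sketched the same way (again a hypothetical illustration, not the real code; the 16383 constant is an assumption standing in for the roughly 16k platform limit mentioned above): every supported range becomes its own ACE with packet-length range, while a range whose upper bound is beyond the limit is simply not configured.

# Hypothetical sketch (not the real bgpfs2acl code): packet-length handling.
MAX_PACKET_LENGTH = 16383   # assumed limit, standing in for the ~16k restriction above

def length_ranges_to_aces(base_ace, ranges):
    """base_ace: e.g. 'deny udp any eq 123 host 7.7.7.7'
    ranges: list of (min, max) tuples carried by the flowspec rule."""
    aces = []
    for lo, hi in ranges:
        if hi > MAX_PACKET_LENGTH:
            continue    # not supported on the platform; this range does not reach the ACL
        aces.append("{} packet-length range {} {}".format(base_ace, lo, hi))
    return aces

# Example6: ranges 1-35, 37-45, 47-75, 77-219 are kept, 221-65535 is dropped
for line in length_ranges_to_aces("deny udp any eq ntp host 7.7.7.7",
                                  [(1, 35), (37, 45), (47, 75), (77, 219), (221, 65535)]):
    print(line)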
100504 deny udp any eq ntp host 7.7.7.7 packet-length range 1 35 100505 deny udp any eq ntp host 7.7.7.7 packet-length range 37 45 100506 deny udp any eq ntp host 7.7.7.7 packet-length range 47 75 100507 remark FLOWSPEC RULES END 100508 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Example7# ICMP Type and CodeOn the controller#class-map type traffic match-all ICMPmatch destination-address ipv4 2.2.2.0 255.255.255.0 match ipv4 icmp-code 2 match ipv4 icmp-type 2 end-class-map!policy-map type pbr Example7 class type traffic ICMP drop ! class type traffic class-default ! end-policy-map!flowspec address-family ipv4 service-policy type pbr Example7!!On the client#RP/0/RP0/CPU0#Cannonball#sh flows ipv4AFI# IPv4 Flow #Dest#2.2.2.0/24,ICMPType#=2,ICMPCode#=2 Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-list bgpfs2acl-ipv4Fri Jul 17 15#43#07.863 UTC 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny icmp any 2.2.2.0/24 2 2 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Example8# FragmentsOn the controller#policy-map type pbr Example8a class type traffic FRAG1 drop ! class type traffic class-default ! end-policy-map!policy-map type pbr Example8b class type traffic FRAG2 drop ! class type traffic class-default ! end-policy-map!policy-map type pbr Example8c class type traffic FRAG3 drop ! class type traffic class-default ! end-policy-map!class-map type traffic match-all FRAG1 match destination-address ipv4 70.2.1.1 255.255.255.255 match fragment-type is-fragment end-class-map!class-map type traffic match-all FRAG2 match destination-address ipv4 70.2.1.2 255.255.255.255 match fragment-type first-fragment end-class-map!class-map type traffic match-all FRAG3 match destination-address ipv4 70.2.1.3 255.255.255.255 match fragment-type last-fragment end-class-map!We have 3 different rules for IsFrag, FirstFrag and LastFrag.Example8a# is-fragmentRP/0/RP0/CPU0#Cannonball#sh run ipv4 access-listipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny ipv4 any host 70.2.1.1 fragments 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100503 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100504 deny ipv4 any host 70.2.1.1 fragments 100505 remark FLOWSPEC RULES END 100506 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Example8b# first-fragmentRP/0/RP0/CPU0#Cannonball#sh run ipv4 access-listipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny ipv4 any host 70.2.1.2 fragments 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100503 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100504 deny ipv4 any host 70.2.1.2 fragments 100505 remark FLOWSPEC RULES END 100506 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Example8c# last-fragmentRP/0/RP0/CPU0#Cannonball#sh run ipv4 access-listipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 
100501 deny ipv4 any host 70.2.1.3 fragments 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100503 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100504 deny ipv4 any host 70.2.1.3 fragments 100505 remark FLOWSPEC RULES END 100506 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Example9# Action Redirect-to-IPIn this test, we will test 3 different / overlapping rules with 3 different next-hop addresses, and we will advertise them one by one and see where they appear in the order of operation / ACE numbering.Controller#class-map type traffic match-all test9a match destination-address ipv4 70.2.1.0 255.255.255.0 end-class-map!policy-map type pbr example9a class type traffic test9a redirect ipv4 nexthop 16.16.16.2 ! class type traffic class-default ! end-policy-map!class-map type traffic match-all test9b match destination-address ipv4 70.2.1.1 255.255.255.255 end-class-map!policy-map type pbr example9b class type traffic test9b redirect ipv4 nexthop 16.16.16.3 ! class type traffic class-default ! end-policy-map!class-map type traffic match-all test9c match destination-address ipv4 70.0.0.0 255.0.0.0 end-class-map!policy-map type pbr example9c class type traffic test9c redirect ipv4 nexthop 16.16.16.4 ! class type traffic class-default ! end-policy-map!RP/0/RP0/CPU0#Macrocarpa(config)#flowsRP/0/RP0/CPU0#Macrocarpa(config-flowspec)#address-family ipv4RP/0/RP0/CPU0#Macrocarpa(config-flowspec-af)#service-policy type pbr example9aRP/0/RP0/CPU0#Macrocarpa(config-flowspec-af)#commitRP/0/RP0/CPU0#Macrocarpa(config-flowspec-af)#Client#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 16 15#58#07.139 UTC# config[68582]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000381' to view the changes.RP/0/RP0/CPU0#Cannonball#sh flowspec ipv4AFI# IPv4 Flow #Dest#70.2.1.0/24 Actions #Nexthop# 16.16.16.2 (bgp.1)RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-listipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 permit ipv4 any 70.2.1.0/24 nexthop1 ipv4 16.16.16.2 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100503 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100504 permit ipv4 any 70.2.1.0/24 nexthop1 ipv4 16.16.16.2 100505 remark FLOWSPEC RULES END 100506 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Second rule#RP/0/RP0/CPU0#Macrocarpa(config-flowspec-af)#service-policy type pbr example9bRP/0/RP0/CPU0#Macrocarpa(config-flowspec-af)#commitRP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 16 15#59#07.151 UTC# config[69182]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000382' to view the changes.RP/0/RP0/CPU0#Cannonball#sh flowspec ipv4AFI# IPv4 Flow #Dest#70.2.1.1/32 Actions #Nexthop# 16.16.16.3 (bgp.1) Flow #Dest#70.2.1.0/24 Actions #Nexthop# 16.16.16.2 (bgp.1)RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-listipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 
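Example9 also shows how the flowspec action drives the ACE: a drop (Traffic-rate 0) becomes a deny entry, while a redirect to an IPv4 nexthop becomes a permit entry carrying nexthop1. A hypothetical sketch of that mapping (the function and dictionary layout are invented for the illustration, anything else is rejected as unsupported):

# Hypothetical sketch (not the real bgpfs2acl code): mapping flowspec actions to ACEs.
def apply_action(match_part, action):
    """match_part: e.g. 'ipv4 any 70.2.1.0/24'
    action: dict such as {'type': 'drop'} or {'type': 'redirect', 'nexthop': '16.16.16.2'}."""
    if action["type"] == "drop":
        return "deny " + match_part
    if action["type"] == "redirect":
        return "permit {} nexthop1 ipv4 {}".format(match_part, action["nexthop"])
    raise ValueError("unsupported flowspec action: {}".format(action))

print(apply_action("ipv4 any 70.2.1.0/24", {"type": "redirect", "nexthop": "16.16.16.2"}))
# -> permit ipv4 any 70.2.1.0/24 nexthop1 ipv4 16.16.16.2, as in Example9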
100501 permit ipv4 any host 70.2.1.1 nexthop1 ipv4 16.16.16.3 100502 permit ipv4 any 70.2.1.0/24 nexthop1 ipv4 16.16.16.2 100503 remark FLOWSPEC RULES END 100504 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100503 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100504 permit ipv4 any host 70.2.1.1 nexthop1 ipv4 16.16.16.3 100505 permit ipv4 any 70.2.1.0/24 nexthop1 ipv4 16.16.16.2 100506 remark FLOWSPEC RULES END 100507 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Third rule#RP/0/RP0/CPU0#Macrocarpa(config-flowspec-af)#service-policy type pbr example9cRP/0/RP0/CPU0#Macrocarpa(config-flowspec-af)#commitRP/0/RP0/CPU0#Macrocarpa(config-flowspec-af)#endRP/0/RP0/CPU0#Macrocarpa#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 16 16#00#07.142 UTC# config[66018]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000383' to view the changes.RP/0/RP0/CPU0#Cannonball#sh flowspec ipv4AFI# IPv4 Flow #Dest#70.2.1.1/32 Actions #Nexthop# 16.16.16.3 (bgp.1) Flow #Dest#70.2.1.0/24 Actions #Nexthop# 16.16.16.2 (bgp.1) Flow #Dest#70.0.0.0/8 Actions #Nexthop# 16.16.16.4 (bgp.1)RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-listipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 permit ipv4 any host 70.2.1.1 nexthop1 ipv4 16.16.16.3 100502 permit ipv4 any 70.2.1.0/24 nexthop1 ipv4 16.16.16.2 100503 permit ipv4 any 70.0.0.0/8 nexthop1 ipv4 16.16.16.4 100504 remark FLOWSPEC RULES END 100505 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100503 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100504 permit ipv4 any host 70.2.1.1 nexthop1 ipv4 16.16.16.3 100505 permit ipv4 any 70.2.1.0/24 nexthop1 ipv4 16.16.16.2 100506 permit ipv4 any 70.0.0.0/8 nexthop1 ipv4 16.16.16.4 100507 remark FLOWSPEC RULES END 100508 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Validation / TestsIn this section, we carried out test to validate the script behavior in various conditions.Test 01# Starting the script with existing ACL configStep 0 interface A configured with IPv4 address, shut (down/down), no ACL applied interface B configured with IPv4 address, no shut (up/up), no ACL applied interface C configured with IPv4 address, no shut (up/up), ACL test2 applied in ingress bgp flowspec configured on the client but session down (controller in “shutdown” state)Client#RP/0/RP0/CPU0#Cannonball#sh ru intinterface Loopback0 ipv4 address 11.11.11.11 255.255.255.255!interface MgmtEth0/RP0/CPU0/0 ipv4 address 10.30.111.177 255.255.255.224 lldp enable !!interface TenGigE0/0/0/0 ipv4 address 33.44.77.88 255.255.255.0 shutdown!interface TenGigE0/0/0/1 ipv4 address 44.55.66.77 255.255.255.0!interface TenGigE0/0/0/2 ipv4 address 55.66.77.88 255.255.255.0 ipv4 access-group test2 ingress!interface TenGigE0/0/0/3 shutdown!RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-listipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Controller#RP/0/RP0/CPU0#Macrocarpa#sh run router bgprouter bgp 65000 bgp router-id 18.18.18.30 address-family ipv4 unicast network 192.0.3.0/24 network 192.168.1.2/32 network 200.1.1.1/32 ! address-family ipv4 flowspec ! 
neighbor 16.16.16.20 remote-as 65000 shutdown address-family ipv4 unicast ! address-family ipv4 flowspec route-policy PASS-ALL in route-policy PASS-ALL out ! !RP/0/RP0/CPU0#Macrocarpa#sh run flowsflowspec!RP/0/RP0/CPU0#Macrocarpa#Step 1 start script check interfaces on the client check access-listsRP/0/RP0/CPU0#Cannonball#term monRP/0/RP0/CPU0#Cannonball#bashRP/0/RP0/CPU0#Jul 17 14#24#40.305 UTC# bash_cmd[67549]# %INFRA-INFRA_MSG-5-RUN_LOGIN # User cisco logged into shell from vty0[Cannonball#~]$ docker start bgpfs2aclbgpfs2acl[Cannonball#~]$ exitlogoutRP/0/RP0/CPU0#Jul 17 14#24#49.906 UTC# bash_cmd[67549]# %INFRA-INFRA_MSG-5-RUN_LOGOUT # User cisco logged out of shell from vty0RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#sh run intinterface Loopback0 ipv4 address 11.11.11.11 255.255.255.255!interface MgmtEth0/RP0/CPU0/0 ipv4 address 10.30.111.177 255.255.255.224 lldp enable !!interface TenGigE0/0/0/0 ipv4 address 33.44.77.88 255.255.255.0 shutdown!interface TenGigE0/0/0/1 ipv4 address 44.55.66.77 255.255.255.0!interface TenGigE0/0/0/2 ipv4 address 55.66.77.88 255.255.255.0 ipv4 access-group test2 ingress!RP/0/RP0/CPU0#Cannonball#accRP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#So, clearly nothing happens.Step 2 no shut the flowspec on the controller but no rule advertised check interfaces on the client, check access-listsController#RP/0/RP0/CPU0#Macrocarpa#confRP/0/RP0/CPU0#Macrocarpa(config)#router bgp 65000RP/0/RP0/CPU0#Macrocarpa(config-bgp)# neighbor 16.16.16.20RP/0/RP0/CPU0#Macrocarpa(config-bgp-nbr)#no shutRP/0/RP0/CPU0#Macrocarpa(config-bgp-nbr)#commitRP/0/RP0/CPU0#Macrocarpa(config-bgp-nbr)#endRP/0/RP0/CPU0#Macrocarpa#Client#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 14#28#56.185 UTC# bgp[1080]# %ROUTING-BGP-5-ADJCHANGE # neighbor 16.16.16.30 Up (VRF# default) (AS# 65000)RP/0/RP0/CPU0#Jul 17 14#28#56.186 UTC# bgp[1080]# %ROUTING-BGP-5-NSR_STATE_CHANGE # Changed state to Not NSR-ReadyRP/0/RP0/CPU0#Cannonball#sh bgp ipv4 flows sumBGP router identifier 16.16.16.20, local AS number 65000BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0x0 RD version# 42BGP main routing table version 42BGP NSR Initial initsync version 2 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 42 42 42 42 42 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd16.16.16.30 0 65000 1712 1691 42 0 0 00#00#24 0RP/0/RP0/CPU0#Cannonball#sh run intinterface Loopback0 ipv4 address 11.11.11.11 255.255.255.255!interface MgmtEth0/RP0/CPU0/0 ipv4 address 10.30.111.177 255.255.255.224 lldp enable !!interface TenGigE0/0/0/0 ipv4 address 33.44.77.88 255.255.255.0 shutdown!interface TenGigE0/0/0/1 ipv4 address 44.55.66.77 255.255.255.0!interface TenGigE0/0/0/2 ipv4 address 55.66.77.88 255.255.255.0 ipv4 access-group test2 ingress!RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Here again, the script doesn’t kick in since no BGPFS rule are advertisedStep 3 advertise basic rule on the controller client receives BGP FS rule client# script creates the ACL “bgpfs2acl” and adds ACE(s) for each 
match/action with the appropriate remark client# script update existing ACL “test2” and add a second with the appropriate remarkController#RP/0/RP0/CPU0#Macrocarpa#confRP/0/RP0/CPU0#Macrocarpa(config)#flowspecRP/0/RP0/CPU0#Macrocarpa(config-flowspec)# address-family ipv4RP/0/RP0/CPU0#Macrocarpa(config-flowspec-af)# service-policy type pbr Example1RP/0/RP0/CPU0#Macrocarpa(config-flowspec-af)#commitRP/0/RP0/CPU0#Macrocarpa(config-flowspec-af)#endRP/0/RP0/CPU0#Macrocarpa#Client#RP/0/RP0/CPU0#Cannonball#sh flows ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 14#34#57.948 UTC# config[69179]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000399' to view the changes.RP/0/RP0/CPU0#Cannonball#show configuration commit changes 1000000399Building configuration...!! IOS XR Configuration 7.0.2ipv4 access-list test2 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp any eq 19 host 7.7.7.7 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp any eq 19 host 7.7.7.7 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!interface TenGigE0/0/0/1 ipv4 access-group bgpfs2acl-ipv4 ingress!interface HundredGigE0/0/1/2 ipv4 access-group bgpfs2acl-ipv4 ingress!interface HundredGigE0/0/1/4 ipv4 access-group bgpfs2acl-ipv4 ingress!interface HundredGigE0/0/1/5 ipv4 access-group bgpfs2acl-ipv4 ingress!endRP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#sh run intinterface Loopback0 ipv4 address 11.11.11.11 255.255.255.255!interface MgmtEth0/RP0/CPU0/0 ipv4 address 10.30.111.177 255.255.255.224 lldp enable !!interface TenGigE0/0/0/0 ipv4 address 33.44.77.88 255.255.255.0 shutdown!interface TenGigE0/0/0/1 ipv4 address 44.55.66.77 255.255.255.0 ipv4 access-group bgpfs2acl-ipv4 ingress!interface TenGigE0/0/0/2 ipv4 address 55.66.77.88 255.255.255.0 ipv4 access-group test2 ingress!RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp any eq 19 host 7.7.7.7 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp any eq 19 host 7.7.7.7 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#We see that Management and loopback interfaces are not modified.Also, interfaces in shutdown state are not modified either.ACL “bgpfs2acl-ipv4” is created and applied to interface TenGigE0/0/0/1ACL “test2” is modified since it existed and was already applied to TenGigE0/0/0/2.Step4We stop the rule advertisement from the controller. 
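The interface handling observed in this step can be summarised with a small sketch (hypothetical code, not the actual implementation; the dictionary fields are invented for the example): Loopback, Management and shutdown interfaces are skipped, interfaces that are up with an IPv4 address and no ACL get bgpfs2acl-ipv4 applied in ingress, and an ACL already applied in ingress is updated in place.

# Hypothetical sketch (not the real bgpfs2acl code) of the interface selection logic.
SKIPPED_PREFIXES = ("Loopback", "MgmtEth")

def classify_interfaces(interfaces):
    """Returns (interfaces needing bgpfs2acl-ipv4, ACLs that must be updated)."""
    attach_to, update_acls = [], set()
    for intf in interfaces:
        if intf["name"].startswith(SKIPPED_PREFIXES):
            continue                        # management / loopback: untouched
        if intf.get("shutdown") or not intf.get("ipv4"):
            continue                        # shut or no IPv4 address: untouched
        if intf.get("acl"):
            update_acls.add(intf["acl"])    # existing ingress ACL gets the flowspec block
        else:
            attach_to.append(intf["name"])  # bgpfs2acl-ipv4 will be applied here
    return attach_to, update_acls

attach, update = classify_interfaces([
    {"name": "Loopback0", "ipv4": "11.11.11.11/32", "shutdown": False, "acl": None},
    {"name": "TenGigE0/0/0/0", "ipv4": "33.44.77.88/24", "shutdown": True, "acl": None},
    {"name": "TenGigE0/0/0/1", "ipv4": "44.55.66.77/24", "shutdown": False, "acl": None},
    {"name": "TenGigE0/0/0/2", "ipv4": "55.66.77.88/24", "shutdown": False, "acl": "test2"},
])
print(attach)   # ['TenGigE0/0/0/1']
print(update)   # {'test2'}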
remove the rule advertisement from the controller client# check the impact on the access-listsController#RP/0/RP0/CPU0#Macrocarpa#confRP/0/RP0/CPU0#Macrocarpa(config)#flowspecRP/0/RP0/CPU0#Macrocarpa(config-flowspec)#no address-family ipv4RP/0/RP0/CPU0#Macrocarpa(config-flowspec)#commitRP/0/RP0/CPU0#Macrocarpa(config-flowspec)#Client#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 14#56#27.929 UTC# config[66774]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000400' to view the changes.RP/0/RP0/CPU0#Cannonball#intRP/0/RP0/CPU0#Cannonball#sh run intFri Jul 17 14#58#24.079 UTCinterface Loopback0 ipv4 address 11.11.11.11 255.255.255.255!interface MgmtEth0/RP0/CPU0/0 ipv4 address 10.30.111.177 255.255.255.224 lldp enable !!interface TenGigE0/0/0/0 ipv4 address 33.44.77.88 255.255.255.0 shutdown!interface TenGigE0/0/0/1 ipv4 address 44.55.66.77 255.255.255.0 ipv4 access-group bgpfs2acl-ipv4 ingress!interface TenGigE0/0/0/2 ipv4 address 55.66.77.88 255.255.255.0 ipv4 access-group test2 ingress!RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lFri Jul 17 14#58#26.525 UTCipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100500 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#The ACLs “test2” and “bgpfs2acl-ipv4” have been modified but they are still applied to the interfaces.Test 02# Ignore rules not supported by the scriptStep0 interface A configured with IPv4 address, shut (down/down), no ACL applied interface B configured with IPv4 address, no shut (up/up), bgpfs2acl-ipv4 ACL applied interface C configured with IPv4 address, no shut (up/up), ACL test2 applied in ingress bgp flowspec session configured and established but no rule advertisedClient#RP/0/RP0/CPU0#Cannonball#sh run intinterface Loopback0 ipv4 address 11.11.11.11 255.255.255.255!interface MgmtEth0/RP0/CPU0/0 ipv4 address 10.30.111.177 255.255.255.224 lldp enable !!interface TenGigE0/0/0/0 ipv4 address 33.44.77.88 255.255.255.0 shutdown!interface TenGigE0/0/0/1 ipv4 address 44.55.66.77 255.255.255.0 ipv4 access-group bgpfs2acl-ipv4 ingress!interface TenGigE0/0/0/2 ipv4 address 55.66.77.88 255.255.255.0 ipv4 access-group test2 ingress!interface TenGigE0/0/0/3 shutdownRP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100500 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#sh flows ipv4Fri Jul 17 15#09#48.070 UTCRP/0/RP0/CPU0#Cannonball#Step1 advertise bad rules with multiple unsupported match statements or unsupported actions from the controller check the configuration of the ACL on the clientController# Multiple icmp-code in the same ruleclass-map type traffic match-all bflow match destination-address ipv4 2.2.2.0 255.255.255.0 match ipv4 icmp-code 2 3 match ipv4 icmp-type 2 end-class-map!policy-map type pbr Bad-Ruleclass type traffic bflow drop ! class type traffic class-default ! 
end-policy-map!flowspec address-family ipv4 service-policy type pbr Bad-Rule!Client#RP/0/RP0/CPU0#Cannonball#sh flows ipv4AFI# IPv4 Flow #Dest#2.2.2.0/24,ICMPType#=2,ICMPCode#=2|=3 Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#accRP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lFri Jul 17 16#01#47.933 UTCipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100500 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#bash[Cannonball#~]$ docker logs bgpfs2acl…2020-07-17 15#45#23,135 - INFO - Failed to convert flow# Dest#2.2.2.0/24,ICMPType#=2,ICMPCode#=2|=3. Errors# ICMPCode# bgpfs2acl doesn't support icmp ranges# =2|=3[Cannonball#~]$Conclusion, create multiple rules. In this example, 2/2 and 2/3.Other example# Don’t FragementRP/0/RP0/CPU0#Cannonball#sh flows ipv4AFI# IPv4 Flow #Dest#2.2.2.0/24,Frag#=DF Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#bashRP/0/RP0/CPU0#Jul 17 16#19#40.247 UTC# bash_cmd[67397]# %INFRA-INFRA_MSG-5-RUN_LOGIN # User cisco logged into shell from vty0[Cannonball#~]$ docker logs bgpfs2acl...2020-07-17 16#19#23,125 - INFO - Failed to convert flow# Dest#2.2.2.0/24,Frag#=DF. Errors# Frag# Unsupported fragment type value# =DF[Cannonball#~]$Other example# TCP FlagsRP/0/RP0/CPU0#Cannonball#sh flows ipv4AFI# IPv4 Flow #Dest#2.2.2.0/24,TCPFlags#=0x40 Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#bashRP/0/RP0/CPU0#Jul 17 16#23#35.393 UTC# bash_cmd[69290]# %INFRA-INFRA_MSG-5-RUN_LOGIN # User cisco logged into shell from vty0[Cannonball#~]$[Cannonball#~]$ docker logs bgpfs2acl…2020-07-17 16#23#23,142 - INFO - Failed to convert flow# Dest#2.2.2.0/24,TCPFlags#=0x40. Errors# TCPFlags# Unsupported keyword!Other example# match DSCPRP/0/RP0/CPU0#Cannonball#sh flows ipv4AFI# IPv4 Flow #Dest#2.2.2.0/24,DSCP#=46 Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#bashRP/0/RP0/CPU0#Jul 17 16#26#01.176 UTC# bash_cmd[66788]# %INFRA-INFRA_MSG-5-RUN_LOGIN # User cisco logged into shell from vty0[Cannonball#~]$ docker logs bgpfs2acl…2020-07-17 16#25#23,158 - INFO - Failed to convert flow# Dest#2.2.2.0/24,DSCP#=46. Errors# DSCP# Unsupported keywordOther example# Rate-limiting actionRP/0/RP0/CPU0#Cannonball#sh flows ipv4AFI# IPv4 Flow #Dest#2.2.2.0/24 Actions #Traffic-rate# 1000 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#bashRP/0/RP0/CPU0#Jul 17 16#36#35.751 UTC# bash_cmd[66763]# %INFRA-INFRA_MSG-5-RUN_LOGIN # User cisco logged into shell from vty0[Cannonball#~]$ docker logs bgpfs2acl...2020-07-17 16#35#23,075 - INFO - Failed to convert flow# Dest#2.2.2.0/24. Errors# action# Usupported action# Traffic-rate# 1000 bps (bgp.1)Other example# Set DSCP actionRP/0/RP0/CPU0#Cannonball#sh flows ipv4AFI# IPv4 Flow #Dest#2.2.2.0/24 Actions #DSCP# ef (bgp.1)RP/0/RP0/CPU0#Cannonball#bashRP/0/RP0/CPU0#Jul 17 16#40#01.670 UTC# bash_cmd[68494]# %INFRA-INFRA_MSG-5-RUN_LOGIN # User cisco logged into shell from vty0[Cannonball#~]$ docker logs bgpfs2acl…2020-07-17 16#39#53,141 - INFO - Failed to convert flow# Dest#2.2.2.0/24. 
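The log messages in this test illustrate the general behaviour for anything the script cannot express as an ACE: the flow is reported and skipped, and the ACLs are left unchanged. A simplified, hypothetical sketch of that validate-then-skip logic (the supported-keyword set and log wording are only indicative; the real script also rejects unsupported values, such as DF fragments or ICMP code ranges):

# Hypothetical sketch (not the real bgpfs2acl code): flows that cannot be converted are logged and ignored.
import logging
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("bgpfs2acl-sketch")

SUPPORTED_MATCHES = {"Dest", "Source", "Proto", "SPort", "DPort", "Length", "ICMPType", "ICMPCode", "Frag"}

def validate_flow(flow_string):
    """flow_string: the flow as displayed by 'show flowspec ipv4', e.g. 'Dest:2.2.2.0/24,TCPFlags:=0x40'."""
    errors = []
    for component in flow_string.split(","):
        keyword = component.split(":", 1)[0]
        if keyword not in SUPPORTED_MATCHES:
            errors.append("{}: Unsupported keyword".format(keyword))
    return errors

for flow in ["Dest:2.2.2.0/24,TCPFlags:=0x40", "Dest:2.2.2.0/24,DSCP:=46"]:
    errs = validate_flow(flow)
    if errs:
        log.info("Failed to convert flow: %s. Errors: %s", flow, "; ".join(errs))
        continue    # the flow is skipped, nothing is added to the ACLs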
Errors# action# Usupported action# DSCP# ef (bgp.1)[Cannonball#~]$Test 03# Creation of a new IPv4 interfaceStep0 interface A configured with IPv4 address, shut (down/down), no ACL applied interface B configured with IPv4 address, no shut (up/up), bgpfs2acl-ipv4 ACL applied interface C configured with IPv4 address, no shut (up/up), ACL test2 applied in ingress bgp flowspec session configured and established but a simple rule is advertisedRP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny ipv4 any 2.2.2.0/24 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny ipv4 any 2.2.2.0/24 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#sh run intinterface Loopback0 ipv4 address 11.11.11.11 255.255.255.255!interface MgmtEth0/RP0/CPU0/0 ipv4 address 10.30.111.177 255.255.255.224 lldp enable !!interface TenGigE0/0/0/0 ipv4 address 33.44.77.88 255.255.255.0 shutdown!interface TenGigE0/0/0/1 ipv4 address 44.55.66.77 255.255.255.0 ipv4 access-group bgpfs2acl-ipv4 ingress!interface TenGigE0/0/0/2 ipv4 address 55.66.77.88 255.255.255.0 ipv4 access-group test2 ingress!interface TenGigE0/0/0/3 shutdown!Step1 no shut of interface Te0/0/0/3 but still no IPv4 addressesRP/0/RP0/CPU0#Cannonball(config)#interface TenGigE0/0/0/3RP/0/RP0/CPU0#Cannonball(config-if)#no shutRP/0/RP0/CPU0#Cannonball(config-if)#commitLC/0/0/CPU0#Jul 17 16#46#18.959 UTC# ifmgr[259]# %PKT_INFRA-LINK-3-UPDOWN # Interface TenGigE0/0/0/3, changed state to DownLC/0/0/CPU0#Jul 17 16#46#18.959 UTC# ifmgr[259]# %PKT_INFRA-LINEPROTO-5-UPDOWN # Line protocol on Interface TenGigE0/0/0/3, changed state to DownRP/0/RP0/CPU0#Jul 17 16#46#19.733 UTC# config[67444]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'cisco'. Use 'show configuration commit changes 1000000410' to view the changes.RP/0/RP0/CPU0#Cannonball(config-if)#RP/0/RP0/CPU0#Cannonball(config-if)#endRP/0/RP0/CPU0#Jul 17 16#46#21.568 UTC# config[67444]# %MGBL-SYS-5-CONFIG_I # Configured from console by cisco on vty0 (10.209.200.69)RP/0/RP0/CPU0#Cannonball#sh run int TenGigE0/0/0/3% No such configuration item(s)RP/0/RP0/CPU0#Cannonball#As expected no modification has been applied, the script needs an L3 interface and for instance an IPv4 configured.Step2 we add IPv4 address to this interfaceRP/0/RP0/CPU0#Cannonball#confRP/0/RP0/CPU0#Cannonball(config)#int TenGigE0/0/0/3RP/0/RP0/CPU0#Cannonball(config-if)#ipv4 add 11.44.22.33/24RP/0/RP0/CPU0#Cannonball(config-if)#commitRP/0/RP0/CPU0#Jul 17 16#47#41.512 UTC# config[68122]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'cisco'. Use 'show configuration commit changes 1000000411' to view the changes.RP/0/RP0/CPU0#Cannonball(config-if)#endRP/0/RP0/CPU0#Jul 17 16#47#44.599 UTC# config[68122]# %MGBL-SYS-5-CONFIG_I # Configured from console by cisco on vty0 (10.209.200.69)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#sh run int RP/0/RP0/CPU0#Jul 17 16#47#58.056 UTC# config[68408]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. 
Use 'show configuration commit changes 1000000412' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run int TenGigE0/0/0/3interface TenGigE0/0/0/3 ipv4 address 11.44.22.33 255.255.255.0 ipv4 access-group bgpfs2acl-ipv4 ingress!RP/0/RP0/CPU0#Cannonball#As soon as interface is up/up AND an address is configured, the script will add the bgpfs2acl-ipv4 ACL in ingressStep3 We use an existing ACL “test3” no applied yet We replace the bgpfs2acl-ipv4 ACL on this interface by the “test3” one We verify the modification of the “test3” ACL with the addition of the flowspec entriesRP/0/RP0/CPU0#Cannonball(config-ipv4-acl)#exitRP/0/RP0/CPU0#Cannonball(config)#do sh run ipv4 access-list test3 10 permit ipv4 1.2.3.0 0.0.0.24 any 20 deny icmp any any 30 permit ipv4 host 3.4.5.6 any!RP/0/RP0/CPU0#Cannonball(config)#do sh run int te 0/0/0/3interface TenGigE0/0/0/3 ipv4 address 11.44.22.33 255.255.255.0 ipv4 access-group bgpfs2acl-ipv4 ingress!RP/0/RP0/CPU0#Cannonball(config)#interface TenGigE0/0/0/3RP/0/RP0/CPU0#Cannonball(config-if)# ipv4 access-group test3 ingressRP/0/RP0/CPU0#Cannonball(config-if)#commitRP/0/RP0/CPU0#Jul 17 16#53#35.668 UTC# config[69624]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'cisco'. Use 'show configuration commit changes 1000000414' to view the changes.RP/0/RP0/CPU0#Cannonball(config-if)#endRP/0/RP0/CPU0#Jul 17 16#53#37.711 UTC# config[69624]# %MGBL-SYS-5-CONFIG_I # Configured from console by cisco on vty0 (10.209.200.69)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 16#53#58.083 UTC# config[67380]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000415' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run int te 0/0/0/3interface TenGigE0/0/0/3 ipv4 address 11.44.22.33 255.255.255.0 ipv4 access-group test3 ingress!RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-list test3ipv4 access-list test3 10 permit ipv4 1.2.3.0 0.0.0.24 any 20 deny icmp any any 30 permit ipv4 host 3.4.5.6 any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny ipv4 any 2.2.2.0/24 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#The script detected the association of this ACL test3 to an interface and changed the content, adding the translated BGPFS ruleTest 04# BGP FS session lost from the controller, we shut down the session on the client, all rules are deleted script will remove all ACEsController#RP/0/RP0/CPU0#Macrocarpa(config)#router bgp 65000RP/0/RP0/CPU0#Macrocarpa(config-bgp)#RP/0/RP0/CPU0#Macrocarpa(config-bgp)# neighbor 16.16.16.20RP/0/RP0/CPU0#Macrocarpa(config-bgp-nbr)#shutRP/0/RP0/CPU0#Macrocarpa(config-bgp-nbr)#commitRP/0/RP0/CPU0#Macrocarpa(config-bgp-nbr)#Client#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 17#19#21.750 UTC# bgp[1080]# %ROUTING-BGP-5-ADJCHANGE # neighbor 16.16.16.30 Down - BGP Notification received, administrative shutdown (VRF# default) (AS# 65000)RP/0/RP0/CPU0#Jul 17 17#19#21.750 UTC# bgp[1080]# %ROUTING-BGP-5-NSR_STATE_CHANGE # Changed state to NSR-ReadyRP/0/RP0/CPU0#Jul 17 17#19#58.044 UTC# config[66780]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. 
Use 'show configuration commit changes 1000000416' to view the changes.RP/0/RP0/CPU0#Cannonball#sh flows ipv4RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 permit ipv4 any any!ipv4 access-list test3 10 permit ipv4 1.2.3.0 0.0.0.24 any 20 deny icmp any any 30 permit ipv4 host 3.4.5.6 any 100500 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100500 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#And we restore the session#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 17#22#06.632 UTC# bgp[1080]# %ROUTING-BGP-5-ADJCHANGE # neighbor 16.16.16.30 Up (VRF# default) (AS# 65000)RP/0/RP0/CPU0#Jul 17 17#22#06.632 UTC# bgp[1080]# %ROUTING-BGP-5-NSR_STATE_CHANGE # Changed state to Not NSR-ReadyRP/0/RP0/CPU0#Jul 17 17#22#28.187 UTC# config[68049]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000417' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list test2 10 deny ipv4 any host 1.2.3.4 20 permit ipv4 any host 2.3.4.5 30 remark end test2 40 deny ipv4 any any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny ipv4 any 2.2.2.0/24 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!ipv4 access-list test3 10 permit ipv4 1.2.3.0 0.0.0.24 any 20 deny icmp any any 30 permit ipv4 host 3.4.5.6 any 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny ipv4 any 2.2.2.0/24 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!ipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny ipv4 any 2.2.2.0/24 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Test 05# manipulation of the created ACLThe script will modify existing ACLs if they are already applied but will not change ACLs configured but not associated to interfaces.Also, it will create a specific ACL for interfaces up/up with an IPv4 address configured. This ACL bgpfs2acl-ipv4 can be also edited by user as long as nothing if modified in between the two remarks. a rule is advertised and translated in the client check existing bgpfs2acl-ipv4 access-list add ACE / entries in this ACL withdraw the rule advertisement from the controller check the ACL bgpfs2acl-ipv4RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-list bgpfs2acl-ipv4ipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp any eq domain host 7.7.7.7 packet-length range 768 1601 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#We add a line in this ACLRP/0/RP0/CPU0#Cannonball#confRP/0/RP0/CPU0#Cannonball(config)#ipv4 access-list bgpfs2acl-ipv4RP/0/RP0/CPU0#Cannonball(config-ipv4-acl)#10 permit udp any 1.2.3.0/24 eq 80RP/0/RP0/CPU0#Cannonball(config-ipv4-acl)#commitRP/0/RP0/CPU0#Cannonball(config-ipv4-acl)#endRP/0/RP0/CPU0#Cannonball#sh run ipv4 access-list bgpfs2acl-ipv4ipv4 access-list bgpfs2acl-ipv4 10 permit udp any 1.2.3.0/24 eq 80 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 
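What this test demonstrates can be sketched as a splice operation (hypothetical code, not the real implementation): when the rules change or are withdrawn, only the region between the two remarks is rewritten, so user-added entries and the trailing permit ipv4 any any survive.

# Hypothetical sketch (not the real bgpfs2acl code): rewrite only the remark-delimited region.
BEGIN_TAG = "FLOWSPEC RULES BEGIN"
END_TAG = "FLOWSPEC RULES END"

def splice_flowspec_block(acl_lines, new_block):
    """acl_lines: current entries of the ACL; new_block: freshly generated flowspec
    entries including their remarks (empty list when all rules are withdrawn)."""
    if any(BEGIN_TAG in line for line in acl_lines):
        start = next(i for i, line in enumerate(acl_lines) if BEGIN_TAG in line)
        stop = next(i for i, line in enumerate(acl_lines) if END_TAG in line)
        return acl_lines[:start] + new_block + acl_lines[stop + 1:]
    # no flowspec block yet: insert it just before the final "permit ipv4 any any"
    return acl_lines[:-1] + new_block + acl_lines[-1:]

current = [
    "10 permit udp any 1.2.3.0/24 eq 80",   # user entry, must be kept
    "100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically.",
    "100501 deny udp any eq domain host 7.7.7.7 packet-length range 768 1601",
    "100502 remark FLOWSPEC RULES END",
    "100503 permit ipv4 any any",
]
# Rules withdrawn: only the user entry and the final permit remain
print(splice_flowspec_block(current, []))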
100501 deny udp any eq domain host 7.7.7.7 packet-length range 768 1601 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#We stop the advertisement on the controller#RP/0/RP0/CPU0#Macrocarpa#confRP/0/RP0/CPU0#Macrocarpa(config)#flowspecRP/0/RP0/CPU0#Macrocarpa(config-flowspec)#no address-family ipv4RP/0/RP0/CPU0#Macrocarpa(config-flowspec)#commitMon Jul 20 09#08#18.309 UTCRP/0/RP0/CPU0#Macrocarpa(config-flowspec)#On the client, the ACL bgpfs2acl-ipv4 only contains the entry we manually contain and (it’s important) the permit any any at the end.RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 20 09#08#32.835 UTC# config[68089]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000449' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-list bgpfs2acl-ipv4ipv4 access-list bgpfs2acl-ipv4 10 permit udp any 1.2.3.0/24 eq 80 100500 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Finally, we re-advertise the rule from the controller and check the ACL one last time on the client#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 20 09#14#02.886 UTC# config[66542]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000450' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-list bgpfs2acl-ipv4ipv4 access-list bgpfs2acl-ipv4 10 permit udp any 1.2.3.0/24 eq 80 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp any eq domain host 7.7.7.7 packet-length range 768 1601 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Test 06# add up multiple FS rulesWe will apply examples 1 to 9, one by one.Controller#policy-map type pbr Example1 class type traffic CHARGEN drop ! class type traffic class-default ! end-policy-map!policy-map type pbr Example2 class type traffic SunRPC drop ! class type traffic class-default ! end-policy-map!policy-map type pbr Example3 class type traffic CHARGEN2 drop ! class type traffic class-default ! end-policy-map!policy-map type pbr Example4 class type traffic NETBIOS drop ! class type traffic class-default ! end-policy-map!policy-map type pbr Example5 class type traffic DNS drop ! class type traffic class-default ! end-policy-map!policy-map type pbr Example6 class type traffic NTP2 drop ! class type traffic class-default ! end-policy-map!policy-map type pbr Example7 class type traffic ICMP drop ! class type traffic class-default ! end-policy-map!class type traffic class-default ! end-policy-map!policy-map type pbr Example8a class type traffic FRAG1 drop ! class type traffic class-default ! end-policy-map!policy-map type pbr Example8b class type traffic FRAG2 drop ! class type traffic class-default ! end-policy-map!policy-map type pbr Example8c class type traffic FRAG3 drop ! class type traffic class-default ! end-policy-map!policy-map type pbr example9a class type traffic test9a redirect ipv4 nexthop 16.16.16.2 ! class type traffic class-default ! end-policy-map!policy-map type pbr example9b class type traffic test9b redirect ipv4 nexthop 16.16.16.3 ! class type traffic class-default ! end-policy-map!policy-map type pbr example9c class type traffic test9c redirect ipv4 nexthop 16.16.16.4 ! class type traffic class-default ! 
end-policy-map!class-map type traffic match-all DNS match destination-address ipv4 7.7.7.7 255.255.255.255 match protocol udp match source-port 53 match packet length 768-65535 768-1601 end-class-map!class-map type traffic match-all FRAG match destination-address ipv4 7.7.7.7 255.255.255.255 match protocol udp match fragment-type is-fragment end-class-map!class-map type traffic match-all ICMP match destination-address ipv4 70.2.1.1 255.255.255.255 match ipv4 icmp-type 3 match ipv4 icmp-code 2 end-class-map!class-map type traffic match-all NTP2 match destination-address ipv4 7.7.7.7 255.255.255.255 match protocol udp match source-port 123 match packet length 1-35 37-45 47-75 77-219 221-65535 end-class-map!class-map type traffic match-all FRAG1 match destination-address ipv4 70.2.1.1 255.255.255.255 match fragment-type is-fragment end-class-map!class-map type traffic match-all FRAG2 match destination-address ipv4 70.2.1.2 255.255.255.255 match fragment-type first-fragment end-class-map!class-map type traffic match-all FRAG3 match destination-address ipv4 70.2.1.3 255.255.255.255 match fragment-type last-fragment end-class-map!class-map type traffic match-all SunRPC match destination-address ipv4 7.7.7.7 255.255.255.255 match protocol udp match destination-port 111 end-class-map!class-map type traffic match-all test9a match destination-address ipv4 70.2.1.0 255.255.255.0 end-class-map!class-map type traffic match-all test9b match destination-address ipv4 70.2.1.1 255.255.255.255 end-class-map!class-map type traffic match-all test9c match destination-address ipv4 70.0.0.0 255.0.0.0 end-class-map!class-map type traffic match-all CHARGEN match destination-address ipv4 7.7.7.7 255.255.255.255 match protocol udp match source-port 19 end-class-map!class-map type traffic match-all NETBIOS match destination-address ipv4 7.7.7.7 255.255.255.255 match protocol udp match source-port 137-138 end-class-map!class-map type traffic match-all CHARGEN2 match destination-address ipv4 7.7.7.7 255.255.255.255 match source-address ipv4 80.2.1.0 255.255.255.0 match protocol udp match source-port 19 end-class-map!On the client, we check the received rules one by one and we verify the ACL created.First rule, example1RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 18#02#58.125 UTC# config[67395]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000433' to view the changes.flRP/0/RP0/CPU0#Cannonball#show flows ipv4Fri Jul 17 18#03#00.691 UTCAFI# IPv4 Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp any eq 19 host 7.7.7.7 100502 remark FLOWSPEC RULES END 100503 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Let’s add the second rule#RP/0/RP0/CPU0#Cannonball#show flows ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Proto#=17,DPort#=111 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 18#03#58.243 UTC# config[68062]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000434' to view the changes.RP/0/RP0/CPU0#Cannonball#accRP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. 
Added automatically. 100501 deny udp any host 7.7.7.7 eq sunrpc 100502 deny udp any eq 19 host 7.7.7.7 100503 remark FLOWSPEC RULES END 100504 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#The RFC5575 Section 5.1 defines the “Order of Traffic Filtering Rules” which permits to define an order of precedence between two or more rules.“ If the types differ, the rule with lowest numeric type value has higher precedence (and thus will match before)”. That’s why a “Type 1 - Destination Prefix” is applied before a “Type 2 - Source Prefix” and it’s indeed reflected in the “show flowspec ipv4” output.The script uses this output to define the order of the ACEs it creates.Third example#RP/0/RP0/CPU0#Cannonball#show flows ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Source#80.2.1.0/24,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,DPort#=111 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#fl RP/0/RP0/CPU0#Jul 17 18#04#58.161 UTC# config[68662]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000435' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp 80.2.1.0/24 eq 19 host 7.7.7.7 100502 deny udp any host 7.7.7.7 eq sunrpc 100503 deny udp any eq 19 host 7.7.7.7 100504 remark FLOWSPEC RULES END 100505 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Example 4#RP/0/RP0/CPU0#Cannonball#show flows ipv4Fri Jul 17 18#05#15.319 UTCAFI# IPv4 Flow #Dest#7.7.7.7/32,Source#80.2.1.0/24,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,DPort#=111 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#>=137&<=138 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 18#05#28.192 UTC# config[69101]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000436' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp 80.2.1.0/24 eq 19 host 7.7.7.7 100502 deny udp any host 7.7.7.7 eq sunrpc 100503 deny udp any range netbios-ns netbios-dgm host 7.7.7.7 100504 deny udp any eq 19 host 7.7.7.7 100505 remark FLOWSPEC RULES END 100506 permit ipv4 any any!RP/0/RP0/CPU0#Cannonball#Example 5#RP/0/RP0/CPU0#Cannonball#show flows ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Source#80.2.1.0/24,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,DPort#=111 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#>=137&<=138 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=53,Length#>=768&<=65535|>=768&<=1601 Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 18#06#28.106 UTC# config[65605]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000437' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. 
Do not add statements below this. Added automatically. 100501 deny udp 80.2.1.0/24 eq 19 host 7.7.7.7 100502 deny udp any host 7.7.7.7 eq sunrpc 100503 deny udp any range netbios-ns netbios-dgm host 7.7.7.7 100504 deny udp any eq 19 host 7.7.7.7 100505 deny udp any eq domain host 7.7.7.7 packet-length range 768 1601 100506 remark FLOWSPEC RULES END 100507 permit ipv4 any anyExample 6#RP/0/RP0/CPU0#Cannonball#show flows ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Source#80.2.1.0/24,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,DPort#=111 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#>=137&<=138 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=53,Length#>=768&<=65535|>=768&<=1601 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=123,Length#>=1&<=35|>=37&<=45|>=47&<=75|>=77&<=219|>=221&<=65535 Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 18#07#28.275 UTC# config[66206]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000438' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 100501 deny udp 80.2.1.0/24 eq 19 host 7.7.7.7 100502 deny udp any host 7.7.7.7 eq sunrpc 100503 deny udp any range netbios-ns netbios-dgm host 7.7.7.7 100504 deny udp any eq 19 host 7.7.7.7 100505 deny udp any eq domain host 7.7.7.7 packet-length range 768 1601 100506 deny udp any eq ntp host 7.7.7.7 packet-length range 1 35 100507 deny udp any eq ntp host 7.7.7.7 packet-length range 37 45 100508 deny udp any eq ntp host 7.7.7.7 packet-length range 47 75 100509 deny udp any eq ntp host 7.7.7.7 packet-length range 77 219 100510 remark FLOWSPEC RULES END 100511 permit ipv4 any anyExample 7#RP/0/RP0/CPU0#Cannonball#show flows ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Source#80.2.1.0/24,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,DPort#=111 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#>=137&<=138 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=53,Length#>=768&<=65535|>=768&<=1601 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=123,Length#>=1&<=35|>=37&<=45|>=47&<=75|>=77&<=219|>=221&<=65535 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.1/32,ICMPType#=3,ICMPCode#=2 Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 18#07#58.230 UTC# config[66597]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000439' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 
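A very simplified sketch of the RFC 5575 precedence idea quoted above (hypothetical code; the real script simply follows the order already computed by show flowspec ipv4, and the full RFC comparison also looks at component values when the types are equal):

# Hypothetical sketch: flows carrying lower-numbered component types match earlier.
COMPONENT_TYPE = {          # subset of the RFC 5575 component types
    "Dest": 1, "Source": 2, "Proto": 3, "DPort": 5, "SPort": 6,
    "ICMPType": 7, "ICMPCode": 8, "Length": 10, "Frag": 12,
}

def precedence_key(flow):
    """flow: dict of components, e.g. {'Dest': '7.7.7.7/32', 'Proto': 17, 'SPort': 19}.
    Lower tuples sort first, i.e. match earlier in the generated ACL."""
    return tuple(sorted(COMPONENT_TYPE[name] for name in flow))

flows = [
    {"Dest": "7.7.7.7/32", "Proto": 17, "SPort": 19},
    {"Dest": "7.7.7.7/32", "Source": "80.2.1.0/24", "Proto": 17, "SPort": 19},
    {"Dest": "7.7.7.7/32", "Proto": 17, "DPort": 111},
]
for flow in sorted(flows, key=precedence_key):
    print(flow)
# The flow that also matches on the source prefix (type 2) comes out first,
# consistent with the "show flowspec ipv4" order and the ACE order shown in this test.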
100501 deny udp 80.2.1.0/24 eq 19 host 7.7.7.7 100502 deny udp any host 7.7.7.7 eq sunrpc 100503 deny udp any range netbios-ns netbios-dgm host 7.7.7.7 100504 deny udp any eq 19 host 7.7.7.7 100505 deny udp any eq domain host 7.7.7.7 packet-length range 768 1601 100506 deny udp any eq ntp host 7.7.7.7 packet-length range 1 35 100507 deny udp any eq ntp host 7.7.7.7 packet-length range 37 45 100508 deny udp any eq ntp host 7.7.7.7 packet-length range 47 75 100509 deny udp any eq ntp host 7.7.7.7 packet-length range 77 219 100510 deny icmp any host 70.2.1.1 protocol-unreachable 100511 remark FLOWSPEC RULES END 100512 permit ipv4 any anyExample 8#RP/0/RP0/CPU0#Cannonball#show flows ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Source#80.2.1.0/24,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,DPort#=111 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#>=137&<=138 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=53,Length#>=768&<=65535|>=768&<=1601 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=123,Length#>=1&<=35|>=37&<=45|>=47&<=75|>=77&<=219|>=221&<=65535 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.1/32,ICMPType#=3,ICMPCode#=2 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.1/32,Frag#=IsF Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 18#09#28.187 UTC# config[67422]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000440' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 
100501 deny udp 80.2.1.0/24 eq 19 host 7.7.7.7 100502 deny udp any host 7.7.7.7 eq sunrpc 100503 deny udp any range netbios-ns netbios-dgm host 7.7.7.7 100504 deny udp any eq 19 host 7.7.7.7 100505 deny udp any eq domain host 7.7.7.7 packet-length range 768 1601 100506 deny udp any eq ntp host 7.7.7.7 packet-length range 1 35 100507 deny udp any eq ntp host 7.7.7.7 packet-length range 37 45 100508 deny udp any eq ntp host 7.7.7.7 packet-length range 47 75 100509 deny udp any eq ntp host 7.7.7.7 packet-length range 77 219 100510 deny icmp any host 70.2.1.1 protocol-unreachable 100511 deny ipv4 any host 70.2.1.1 fragments 100512 remark FLOWSPEC RULES END 100513 permit ipv4 any anyRP/0/RP0/CPU0#Cannonball#show flows ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Source#80.2.1.0/24,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,DPort#=111 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#>=137&<=138 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=53,Length#>=768&<=65535|>=768&<=1601 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=123,Length#>=1&<=35|>=37&<=45|>=47&<=75|>=77&<=219|>=221&<=65535 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.1/32,ICMPType#=3,ICMPCode#=2 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.1/32,Frag#=IsF Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.2/32,Frag#=FF Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 18#10#28.259 UTC# config[68054]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000441' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 
100501 deny udp 80.2.1.0/24 eq 19 host 7.7.7.7 100502 deny udp any host 7.7.7.7 eq sunrpc 100503 deny udp any range netbios-ns netbios-dgm host 7.7.7.7 100504 deny udp any eq 19 host 7.7.7.7 100505 deny udp any eq domain host 7.7.7.7 packet-length range 768 1601 100506 deny udp any eq ntp host 7.7.7.7 packet-length range 1 35 100507 deny udp any eq ntp host 7.7.7.7 packet-length range 37 45 100508 deny udp any eq ntp host 7.7.7.7 packet-length range 47 75 100509 deny udp any eq ntp host 7.7.7.7 packet-length range 77 219 100510 deny icmp any host 70.2.1.1 protocol-unreachable 100511 deny ipv4 any host 70.2.1.1 fragments 100512 deny ipv4 any host 70.2.1.2 fragments 100513 remark FLOWSPEC RULES END 100514 permit ipv4 any anyThe Three different rules are translated with “fragments” as explain in the beginning of the post.Example9#RP/0/RP0/CPU0#Cannonball#show flows ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Source#80.2.1.0/24,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,DPort#=111 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#>=137&<=138 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=53,Length#>=768&<=65535|>=768&<=1601 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=123,Length#>=1&<=35|>=37&<=45|>=47&<=75|>=77&<=219|>=221&<=65535 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.1/32,ICMPType#=3,ICMPCode#=2 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.1/32,Frag#=IsF Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.2/32,Frag#=FF Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.3/32,Frag#=LF Actions #Traffic-rate# 0 bps (bgp.1)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 18#11#28.245 UTC# config[68658]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000442' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 
100501 deny udp 80.2.1.0/24 eq 19 host 7.7.7.7 100502 deny udp any host 7.7.7.7 eq sunrpc 100503 deny udp any range netbios-ns netbios-dgm host 7.7.7.7 100504 deny udp any eq 19 host 7.7.7.7 100505 deny udp any eq domain host 7.7.7.7 packet-length range 768 1601 100506 deny udp any eq ntp host 7.7.7.7 packet-length range 1 35 100507 deny udp any eq ntp host 7.7.7.7 packet-length range 37 45 100508 deny udp any eq ntp host 7.7.7.7 packet-length range 47 75 100509 deny udp any eq ntp host 7.7.7.7 packet-length range 77 219 100510 deny icmp any host 70.2.1.1 protocol-unreachable 100511 deny ipv4 any host 70.2.1.1 fragments 100512 deny ipv4 any host 70.2.1.2 fragments 100513 deny ipv4 any host 70.2.1.3 fragments 100514 remark FLOWSPEC RULES END 100515 permit ipv4 any anyRP/0/RP0/CPU0#Cannonball#show flows ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Source#80.2.1.0/24,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,DPort#=111 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#>=137&<=138 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=53,Length#>=768&<=65535|>=768&<=1601 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=123,Length#>=1&<=35|>=37&<=45|>=47&<=75|>=77&<=219|>=221&<=65535 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.1/32,ICMPType#=3,ICMPCode#=2 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.1/32,Frag#=IsF Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.2/32,Frag#=FF Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.3/32,Frag#=LF Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.0/24 Actions #Nexthop# 16.16.16.2 (bgp.1)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 18#12#58.245 UTC# config[69474]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000443' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 
100501 deny udp 80.2.1.0/24 eq 19 host 7.7.7.7 100502 deny udp any host 7.7.7.7 eq sunrpc 100503 deny udp any range netbios-ns netbios-dgm host 7.7.7.7 100504 deny udp any eq 19 host 7.7.7.7 100505 deny udp any eq domain host 7.7.7.7 packet-length range 768 1601 100506 deny udp any eq ntp host 7.7.7.7 packet-length range 1 35 100507 deny udp any eq ntp host 7.7.7.7 packet-length range 37 45 100508 deny udp any eq ntp host 7.7.7.7 packet-length range 47 75 100509 deny udp any eq ntp host 7.7.7.7 packet-length range 77 219 100510 deny icmp any host 70.2.1.1 protocol-unreachable 100511 deny ipv4 any host 70.2.1.1 fragments 100512 deny ipv4 any host 70.2.1.2 fragments 100513 deny ipv4 any host 70.2.1.3 fragments 100514 permit ipv4 any 70.2.1.0/24 nexthop1 ipv4 16.16.16.2 100515 remark FLOWSPEC RULES END 100516 permit ipv4 any anyRP/0/RP0/CPU0#Cannonball#show flows ipv4AFI# IPv4 Flow #Dest#7.7.7.7/32,Source#80.2.1.0/24,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,DPort#=111 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#>=137&<=138 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=19 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=53,Length#>=768&<=65535|>=768&<=1601 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#7.7.7.7/32,Proto#=17,SPort#=123,Length#>=1&<=35|>=37&<=45|>=47&<=75|>=77&<=219|>=221&<=65535 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.1/32,ICMPType#=3,ICMPCode#=2 Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.1/32,Frag#=IsF Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.1/32 Actions #Nexthop# 16.16.16.3 (bgp.1) Flow #Dest#70.2.1.2/32,Frag#=FF Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.3/32,Frag#=LF Actions #Traffic-rate# 0 bps (bgp.1) Flow #Dest#70.2.1.0/24 Actions #Nexthop# 16.16.16.2 (bgp.1)RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 17 18#14#58.169 UTC# config[66405]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'ZTP'. Use 'show configuration commit changes 1000000444' to view the changes.RP/0/RP0/CPU0#Cannonball#sh run ipv4 access-lipv4 access-list bgpfs2acl-ipv4 100500 remark FLOWSPEC RULES BEGIN. Do not add statements below this. Added automatically. 
100501 deny udp 80.2.1.0/24 eq 19 host 7.7.7.7 100502 deny udp any host 7.7.7.7 eq sunrpc 100503 deny udp any range netbios-ns netbios-dgm host 7.7.7.7 100504 deny udp any eq 19 host 7.7.7.7 100505 deny udp any eq domain host 7.7.7.7 packet-length range 768 1601 100506 deny udp any eq ntp host 7.7.7.7 packet-length range 1 35 100507 deny udp any eq ntp host 7.7.7.7 packet-length range 37 45 100508 deny udp any eq ntp host 7.7.7.7 packet-length range 47 75 100509 deny udp any eq ntp host 7.7.7.7 packet-length range 77 219 100510 deny icmp any host 70.2.1.1 protocol-unreachable 100511 deny ipv4 any host 70.2.1.1 fragments 100512 permit ipv4 any host 70.2.1.1 nexthop1 ipv4 16.16.16.3 100513 deny ipv4 any host 70.2.1.2 fragments 100514 deny ipv4 any host 70.2.1.3 fragments 100515 permit ipv4 any 70.2.1.0/24 nexthop1 ipv4 16.16.16.2 100516 remark FLOWSPEC RULES END 100517 permit ipv4 any any!MiscThe script triggers logging messages related to EDM callback function from sysdbRP/0/RP0/CPU0#Cannonball#RP/0/RP0/CPU0#Jul 20 09#38#56.696 UTC# nvgen[69429]# %MGBL-CONFIG_HIST_UPDATE-3-SYSDB_GET # Error 'sysdb' detected the 'warning' condition 'A verifier or EDM callback function returned# 'not found'' getting host address from sysdbRP/0/RP0/CPU0#Jul 20 09#38#57.864 UTC# nvgen[69500]# %MGBL-CONFIG_HIST_UPDATE-3-SYSDB_GET # Error 'sysdb' detected the 'warning' condition 'A verifier or EDM callback function returned# 'not found'' getting host address from sysdbRP/0/RP0/CPU0#Jul 20 09#39#26.646 UTC# nvgen[65542]# %MGBL-CONFIG_HIST_UPDATE-3-SYSDB_GET # Error 'sysdb' detected the 'warning' condition 'A verifier or EDM callback function returned# 'not found'' getting host address from sysdbRP/0/RP0/CPU0#Jul 20 09#39#27.809 UTC# nvgen[65620]# %MGBL-CONFIG_HIST_UPDATE-3-SYSDB_GET # Error 'sysdb' detected the 'warning' condition 'A verifier or EDM callback function returned# 'not found'' getting host address from sysdbUntil this is fixed from the script, you can safely ignore them and the easiest way is to suppress them with the following configuration.logging suppress rule EDM alarm MGBL CONFIG_HIST_UPDATE SYSDB_GET!logging suppress apply rule EDM all-of-router!ConclusionKudos to Mike (github) for the initial framework and to Dmitrii (github / linkedin) for the great work delivered during the last weeks on this program.Other features are “work in progress”, particularly a syslog module providing information at the user level (and not only the script logs) of the different actions, errors, etc.We really hope this script will help the community, give it a try and provide us your feedback.", "url": "/tutorials/bgp-flowspec-to-acl-script/", "author": "Nicolas Fevrier", "tags": "iosxr, script, flowspec, ncs5500" } , "tutorials-ncs5500-routing-resource-with-2020-internet": { "title": "NCS5500 Routing Resource with 2020 Internet (and Future)", "content": " NCS5500 and the 2020 Internet Introduction Let’s try to predict the future (with some really bad science) Products and ASICs Jericho / Qumran-MX with NL eTCAM Jericho+ with OP eTCAM Jericho+ with large LPM (and no eTCAM) Lab and Test Starting point Year 2020 Year 2021 to year 2029 Special case of the Jericho+ with Large LPM Conclusion You can find more content related to NCS5500 including routing memory management, VRF, URPF, Netflow, QoS, EVPN, Flowspec implementation following this link.Important Update# in IOS XR 7.3.1, we will decommission the “internet-optimized” mode, please check this article# 
https#//xrdocs.io/ncs5500/tutorials/decommissioning-internet-optimized-mode/IntroductionBetween September 2017 and March 2018, we published five articles to answer recurring questions about routing memory utilization. The NCS5500 adoption was ramping up, and users were learning about the LEM, LPM and eTCAM used to store the routing information.1 - NCS5500 Resources focusing on IPv4 prefixes#https#//xrdocs.io/ncs5500/tutorials/2017-08-03-understanding-ncs5500-resources-s01e02/2 - NCS5500 Resources focusing on IPv6 prefixes#https#//xrdocs.io/ncs5500/tutorials/2017-08-07-understanding-ncs5500-resources-s01e03/3 - full internet support on non-eTCAM systems#https#//xrdocs.io/ncs5500/tutorials/2017-12-30-full-internet-view-on-base-ncs-5500-systems-s01e04/ 4 - very large routing table in eTCAM systems#https#//xrdocs.io/ncs5500/tutorials/2018-01-25-s01e05-large-routing-tables-on-scale-ncs-5500-systems/5 - NCS5500 Jericho+ Systems and their Scalability#https#//xrdocs.io/ncs5500/tutorials/Understanding-ncs5500-jericho-plus-systems/In the third article, we demonstrated a feature used to optimize the route distribution between LEM and LPM for the internet prefix distribution. It was updated with a disclaimer in September 2019, as we don’t recommend using non-SE systems (Jericho or Jericho+) for a full internet view. It will be different with Jericho2, but we will keep that for a dedicated post.We now have three remaining options# Jericho / Qumran-MX with NL eTCAM Jericho+ with OP eTCAM Jericho+ with large LPMLet’s try to predict the future (with some really bad science)Based on the routing table growth during the last 6 years, we can try to “project” the evolution of the table in the near future (let’s say, 9 years).From a pure scientific perspective, it is not worth much. I admit it. So let’s take it for what it is, a wet finger estimation.Let me explain the “method” used to get the numbers we will eventually use to estimate the resource utilization.Sources# routes as seen in AS6447#https#//bgp.potaroo.net/as6447/andhttps#//bgp.potaroo.net/v6/as6447/Also, we used the following data# full table v4 per day#https#//bgp.potaroo.net/as6447/bgp-active.txt IPv4/24 routes per day#https#//bgp.potaroo.net/as6447/bgp-prefix-24.txt full table v6 per day#https#//bgp.potaroo.net/v6/as6447/bgp-active.txt IPv6/48 routes per day#https#//bgp.potaroo.net/v6/as6447/bgp-prefix-48.txtAnd to convert epoch / UNIX timestamps into human-readable dates# https#//www.epochconverter.com/I started writing this article on the 10th of July, so I collected the figures from the sources above for the 10th of July of each year. Year IPv4 total IPv6 total v4/24 v6/48 2014 522,313 19,025 275,805 7,978 2015 587,105 24,651 310,949 10,607 2016 645,833 33,495 351,527 14,438 2017 710,976 43,249 390,413 18,727 2018 761,170 59,330 422,454 25,383 2019 817,799 74,613 464,433 34,508 2020 865,274 91,133 500,530 44,876 Let’s focus on the IPv4 part first, and more particularly on the /24 population. 
Year IPv4 total Growth v4/24 Growth Growth Increase non v4/24 Growth Growth Increase 2014 522,313 - 275,805 - - 246,508 - - 2015 587,105 64,792 310,949 35,144 - 276,156 29,648 - 2016 645,833 58,728 351,527 40,578 5,434 294,306 18,150 -11,498 2017 710,976 65,143 390,413 38,886 -1,692 320,563 26,257 8,107 2018 761,170 50,194 422,454 32,041 -6,845 338,716 18,153 -8,104 2019 817,799 56,629 464,433 41,979 9,938 353,366 14,650 -3,503 2020 865,274 47,475 500,530 36,097 -5,882 364,744 11,378 -3,272 The “Growth” is showing the difference between two sub-sequent years and the “Growth Increase” shows the difference between two sub-sequent growth rates. The second number should help identifying a linear or an algorithmic progression.IPv4/24 growth trend# it doesn’t seem we have a clear trend here, between the different years we see various numbers scattered from -6,845 to +9,938.Conclusion# we will consider it’s a linear growth and we will estimate the number of new IPv4/24 per year# +42,000 prefixes.Non-IPv4 growth# except in 2017, it shows the growth is progressively slowing down.Conclusion# here it will be a totally arbitrary decision to continue this trend with lower and lower numbers (starting from -2,000 in 2021 to -200 in 2029).The results of this projection are the following# Year IPv4 total Growth v4/24 Growth Growth Increase non v4/24 Growth Growth Increase 2019 817,799 56,629 464,433 41,979 9,938 353,366 14,650 -3,503 2020 865,274 47,475 500,530 36,097 -5,882 364,744 11,378 -3,272 2021 919,274 54,000 542,530 42,000 5,903 376,744 9,378 -2,000 2022 969,152 49,878 584,530 42,000 0 384,622 7,878 -1,500 2023 1,018,030 48,878 626,530 42,000 0 391,500 6,878 -1,000 2024 1,066,108 48,078 668,530 42,000 0 397,578 6,078 -800 2025 1,113,586 47,478 710,530 42,000 0 403,056 5,478 -600 2026 1,160,564 46,978 752,530 42,000 0 408,034 4,978 -500 2027 1,207,142 46,578 794,530 42,000 0 412,612 4,578 -400 2028 1,253,420 46,278 836,530 42,000 0 416,890 4,278 -300 2029 1,299,498 46,078 878,530 42,000 0 420,968 4,078 -200 Checking other sources on the web, like this APNIC article from Geoff Huston#https#//blog.apnic.net/2020/01/14/bgp-in-2019-the-bgp-table/They predict 1,079,000 routes for Jan 2025, so it matches our “model”, predicting something between 1,066,108 and 1,113,586 IPv4 prefixes between July 2024 and July 2025.Let’s study the IPv6 internet table evolution. 
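Before moving to IPv6, here is a minimal sketch of the IPv4 projection arithmetic used above. It is only an illustration of this “really bad science”# it seeds the 2021 row of the table and then applies the retained assumptions (+42,000 IPv4/24 per year, and a non-/24 growth that keeps shrinking by the arbitrary schedule chosen above). The variable names are mine, not part of any tool.

```python
# Sketch of the IPv4 projection described above (illustration only, not a tool).
# Seeded with the 2021 row of the table so the later rows can be checked against it.
v4_slash24, v4_other = 542_530, 376_744   # projected July 2021 figures
other_growth = 9_378                      # non-/24 growth retained for 2021
taper = [1_500, 1_000, 800, 600, 500, 400, 300, 200]   # arbitrary yearly reduction

for year, shrink in zip(range(2022, 2030), taper):
    v4_slash24 += 42_000                  # linear growth of the /24 population
    other_growth -= shrink                # non-/24 growth keeps slowing down
    v4_other += other_growth
    print(year, v4_slash24, v4_other, v4_slash24 + v4_other)
# 2024 prints 668530 397578 1066108, matching the table above.
```

The same mechanical approach is used for IPv6 below, only with different growth figures.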
Due to the smallest size of this table, it’s more hazardous to create a projection, but it won’t prevent me from making arbitrary asumptions to build my model ;) Year IPv6 total Growth Growth Increase v6/48 Growth Growth Increase non v6/48 Growth Growth Increase 2014 19,025 - - 7,978 - - 11,047 - - 2015 24,651 5,626 - 10,607 2,629 - 14,044 2,997 - 2016 33,495 8,844 3,218 14,438 3,831 1,202 19,057 5,013 2,016 2017 43,249 9,754 910 18,727 4,289 458 24,522 5,465 452 2018 59,330 16,081 6,327 25,383 6,656 2,367 33,947 9,425 3,960 2019 74,613 15,283 -798 34,508 9,125 2,469 40,105 6,158 -3,267 2020 91,133 16,520 1,237 4,4876 10,368 1,243 46,257 6,152 -6 More particularly, we will pay attention to the /48 prefixes and their progression.IPv6/48 growth trend# The “growth increase” varies from +458 to +2,469 during the last 5 years.Conclusion# Let’s take the highest number for the projection, rounded to +2500.Non-IPv6/48 growth trend# The “growth” varies from 2,997 to 9,425 but with a majority of the years around 6,000.Conclusion# For this one, we will take a totaly arbitrary growth of 6,000 new IPv6 prefixes (non /48) per year.The results of this projection are the following# Year IPv6 total Growth Growth Increase v6/48 Growth Growth Increase non v6/48 Growth Growth Increase 2019 74,613 15,283 -798 34,508 9,125 2,469 40,105 6,158 -3,267 2020 91,133 16,520 1,237 44,876 10,368 1,243 46,257 6,152 -6 2021 110,001 18,868 2,348 57,744 12,868 2,500 52,257 6,000 -152 2022 131,369 21,368 2,500 73,112 15,368 2,500 58,257 6,000 0 2023 155,237 23,868 2,500 90,980 17,868 2,500 64,257 6,000 0 2024 181,605 26,368 2,500 111,348 20,368 2500 70,257 6,000 0 2025 210,473 28,868 2,500 134,216 22,868 2,500 76,257 6,000 0 2026 241,841 31,368 2 500 159,584 25,368 2,500 82,257 6,000 0 2027 275,709 33,868 2,500 187,452 27,868 2,500 88,257 6,000 0 2028 312,077 36,368 2,500 217,820 30,368 2,500 94,257 6,000 0 2029 350,945 38,868 2,500 250,688 32,868 2,500 100,257 6,000 0 Here again we can compare this guesstimation with Geoff’s projectionhttps#//blog.apnic.net/2020/01/14/bgp-in-2019-the-bgp-table/In January 2025, they present a range starting from 160,000 for the linear model and 318,000 for the algorithmic one, while we plan for something between 181,605 and 210,473.Admittedly, it’s pretty vague ;)Let’s take these numbers nevertheless and see where these routes will be stored depending on the different products.In summary, we project the following distribution# Date (10th of July of each year) IPv4 total v4/24 v4 non/24 IPv6 total v6/48 v6 non/48 2020 865,274 500,530 364,744 91,133 44,876 46,257 2021 919,274 542,530 376,744 110,001 57,744 52,257 2022 969,152 584,530 384,622 131,369 73,112 58,257 2023 1,018,030 626,530 391,500 155,237 90,980 64,257 2024 1,066,108 668,530 397,578 181,605 111,348 70,257 2025 1,113,586 710,530 403,056 210,473 134,216 76,257 2026 1,160,564 752,530 408,034 241,841 159,584 82,257 2027 1,207,142 794,530 412,612 275,709 187,452 88,257 2028 1,253,420 836,530 416,890 312,077 217,820 94,257 2029 1,299,498 878,530 420,968 350,945 250,688 100,257 Products and ASICsIn summary# Jericho / Qumran-MX with NL eTCAM Jericho+ with OP eTCAM Jericho+ with large LPM (and no eTCAM) NCS5501-SE NCS55A1-36H-SE-S NCS55A1-24H NCS5502-SE NCS55A2-MOD-SE-S NCS55A1-48Q6H NC55-24X100G-SE NC55-A-36X100-SE-S NCS55A1-24H6H-SS Jericho / Qumran-MX with NL eTCAMProducts# NCS5501-SE NCS5502-SE NC55-24X100G-SEJericho+ with OP eTCAMProducts# NCS55A1-36H-SE-S NCS55A2-MOD-SE-S NC55-A-36X100-SE-SJericho+ with large LPM (and no 
eTCAM)Products# NCS55A1-24H NCS55A1-48Q6H NCS55A1-24H6H-SSCase 1# Default configuration is Host OptimizedCase 2# user changed to host-optimized-disableLab and TestIn this section, we will inject a real table in the routers and collect utilization statistics for the different resources (LEM, LPM and potentially external TCAM). Then, we will inject v4/24s, v6/48s and other routes following the estimated progression described above, and we will see the impact on resources, year after year.It certainly very fun (is it?) but the real purpose of this exercice is to extract valuable and actionable information out of these tests. For example, what hw-module profile we should use in the future depending on the ASIC type.Starting pointWe use a public v4/v6 view collected in 2019. It shows# 790,780 IPv4 prefixes 445,773 /24s 345,007 non-/24s 72,949 IPv6 prefixes 35,009 /48s 37,940 non-/48s Jericho / Qumran-MX with NL eTCAMRP/0/RP0/CPU0#5508-2-702#sh bgp sumBGP router identifier 1.3.5.8, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0000000 RD version# 4539177BGP main routing table version 4539177BGP NSR Initial initsync version 4 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 4539177 4539177 4539177 4539177 4539177 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.100.153 0 100 1719117 16 4539177 0 0 00#13#25 790771RP/0/RP0/CPU0#5508-2-702#RP/0/RP0/CPU0#5508-2-702#sh bgp ipv6 un sumBGP router identifier 1.3.5.8, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0800000 RD version# 72953BGP main routing table version 72953BGP NSR Initial initsync version 4 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 72953 72953 72953 72953 72953 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd2001#111##151 0 100 72965 16 72953 0 0 00#14#01 72948RP/0/RP0/CPU0#5508-2-702#RP/0/RP0/CPU0#5508-2-702#sh dpa resources iproute loc 0/7/CPU0~iproute~ OFA Table (Id# 41, Scope# Global)--------------------------------------------------IPv4 Prefix len distributionPrefix Actual Capacity Prefix Actual Capacity /0 26 20 /1 0 20 /2 4 20 /3 7 20 /4 26 20 /5 0 20 /6 0 20 /7 0 20 /8 11 20 /9 12 20 /10 36 204 /11 97 409 /12 285 818 /13 571 1636 /14 1143 3275 /15 1913 5732 /16 13184 42381 /17 7901 25387 /18 13534 42585 /19 25210 86603 /20 39182 127348 /21 47039 141679 /22 100821 231968 /23 78898 207173 /24 445838 1105590 /25 144 4299 /26 211 4504 /27 383 3275 /28 537 2866 /29 721 6961 /30 3241 2866 /31 440 204 /32 9657 20OFA Infra Stats Summary Create Requests# 1846483 Delete Requests# 1055411 Update Requests# 241726 Get Requests# 0 Errors Resolve Failures# 0 Not Found in DB# 0 Exists in DB# 0 No Memory in DB# 0 Reserve Resources# 0 Release Resources# 0 Update Resources# 0 NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 Create Server API Err# 0 0 0 0 Update Server API Err# 0 0 0 0 Delete Server API Err# 0 0 0 0RP/0/RP0/CPU0#5508-2-702#sh dpa resources ip6route loc 0/7/CPU0~ip6route~ OFA Table (Id# 42, Scope# Global)--------------------------------------------------IPv6 Prefix len distributionPrefix Actual Prefix Actual /0 25 /1 0 /2 0 /3 0 /4 0 /5 0 /6 0 /7 0 /8 0 /9 0 /10 25 /11 0 /12 0 /13 0 /14 0 /15 0 /16 76 /17 0 
/18 0 /19 2 /20 12 /21 4 /22 7 /23 5 /24 23 /25 7 /26 13 /27 18 /28 94 /29 2696 /30 418 /31 179 /32 12686 /33 1060 /34 814 /35 517 /36 2887 /37 501 /38 908 /39 282 /40 3689 /41 544 /42 888 /43 144 /44 4720 /45 465 /46 2223 /47 1352 /48 35009 /49 0 /50 0 /51 1 /52 0 /53 0 /54 0 /55 0 /56 219 /57 2 /58 0 /59 0 /60 16 /61 0 /62 0 /63 3 /64 87 /65 0 /66 0 /67 0 /68 0 /69 0 /70 0 /71 0 /72 0 /73 0 /74 0 /75 0 /76 0 /77 0 /78 0 /79 0 /80 0 /81 0 /82 0 /83 0 /84 0 /85 0 /86 0 /87 0 /88 0 /89 0 /90 0 /91 0 /92 0 /93 0 /94 0 /95 0 /96 0 /97 0 /98 0 /99 0 /100 0 /101 0 /102 0 /103 0 /104 25 /105 0 /106 0 /107 0 /108 0 /109 0 /110 0 /111 0 /112 0 /113 0 /114 0 /115 0 /116 0 /117 0 /118 0 /119 0 /120 2 /121 0 /122 0 /123 0 /124 1 /125 8 /126 432 /127 16 /128 97OFA Infra Stats Summary Create Requests# 73268 Delete Requests# 66 Update Requests# 0 Get Requests# 0 Errors Resolve Failures# 0 Not Found in DB# 0 Exists in DB# 0 No Memory in DB# 0 Reserve Resources# 0 Release Resources# 0 Update Resources# 0 NPU ID# NPU-0 NPU-1 NPU-2 NPU-3 Create Server API Err# 0 0 0 0 Update Server API Err# 0 0 0 0 Delete Server API Err# 0 0 0 0RP/0/RP0/CPU0#5508-2-702#That places us somewhere between 2018 and 2019 on the estimation we built from the potaroo info. We will start from 2019 and calculate what needs to be advertised on top of our full views.A simple substraction will tell us how many routes “extra” we need to advertise to simulate the growth along the years# Year v4/24 Extra /24 v4 non/24 Extra others v6/48 Extra /48 v6 non/48 Extra others 2020 500,530 54,757 364,744 19,737 44,876 9,867 46,257 8,317 2021 542,530 96,757 376,744 31,737 57,744 22,735 52,257 14,317 2022 584,530 138,757 384,622 39,615 73,112 38,103 58,257 20,317 2023 626,530 180,757 391,500 46,493 90,980 55,971 64,257 26,317 2024 668,530 222,757 397,578 52,571 111,348 76,339 70,257 32,317 2025 710,530 264,757 403,056 58,049 134,216 99,207 76,257 38,317 2026 752,530 306,757 408,034 63,027 159,584 124,575 82,257 44,317 2027 794,530 348,757 412,612 67,605 187,452 152,443 88,257 50,317 2028 836,530 390,757 416,890 71,883 217,820 182,811 94,257 56,317 2029 878,530 432,757 420,968 75,961 250,688 215,679 100,257 62,317 At the starting point we have#Jericho with NL eTCAMRP/0/RP0/CPU0#5501-SE-6625#sh platfNode Type State Config state--------------------------------------------------------------------------------0/RP0/CPU0 NCS-5501-SE(Active) IOS XR RUN NSHUT0/RP0/NPU0 Slice UP0/FT0 NCS-1RU-FAN-FW OPERATIONAL NSHUT0/PM1 NCS-1100W-ACFW OPERATIONAL NSHUTRP/0/RP0/CPU0#5501-SE-6625#sh contr npu resources lem loc 0/0/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage NPU-0 Total In-Use # 45013 (6 %) iproute # 10006 (1 %) ip6route # 35009 (4 %) mplslabel # 0 (0 %) l2brmac # 0 (0 %)RP/0/RP0/CPU0#5501-SE-6625#RP/0/RP0/CPU0#5501-SE-6625#sh contr npu resources lpm loc 0/0/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 534746 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage NPU-0 Total In-Use # 38969 (7 %) iproute # 0 (0 %) ip6route # 39733 (7 %) ipmcroute # 1 (0 %) ip6mcroute # 0 (0 %) ip6mc_comp_grp # 0 (0 %)RP/0/RP0/CPU0#5501-SE-6625#RP/0/RP0/CPU0#5501-SE-6625#sh contr npu resources exttcamipv4 loc 0/0/CPU0HW Resource Information Name # ext_tcam_ipv4OOR Information NPU-0 Estimated Max Entries # 2048000 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage NPU-0 Total In-Use # 
781297 (38 %) iproute # 782062 (38 %)RP/0/RP0/CPU0#5501-SE-6625#Jericho+ with OP eTCAMRP/0/RP0/CPU0#5508-2-702#sh platf 0/0Node Type State Config state--------------------------------------------------------------------------------0/0/CPU0 NC55-36X100G-A-SE IOS XR RUN NSHUTRP/0/RP0/CPU0#5508-2-702#sh contr npu resources exttcamipv4 loc 0/0/CPU0HW Resource Information Name # ext_tcam_ipv4 Asic Type # Jericho PlusNPU-0OOR Summary Estimated Max Entries # 4000000 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage Total In-Use # 790992 (20 %) iproute # 791072 (20 %)...RP/0/RP0/CPU0#5508-2-702#sh contr npu resources exttcamipv6 loc 0/0/CPU0HW Resource Information Name # ext_tcam_ipv6 Asic Type # Jericho PlusNPU-0OOR Summary Estimated Max Entries # 2000000 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage Total In-Use # 73052 (4 %) ip6route # 73202 (4 %)...RP/0/RP0/CPU0#5508-2-702#Jericho+ with Large LPM and no eTCAMRP/0/RP0/CPU0#24H-1-701#sh platfNode Type State Config state--------------------------------------------------------------------------------0/RP0/CPU0 NCS-55A1-24H(Active) IOS XR RUN NSHUT0/RP0/NPU0 Slice UP0/RP0/NPU1 Slice UP0/FT0 NC55-A1-FAN-FW OPERATIONAL NSHUT0/FT1 NC55-A1-FAN-FW OPERATIONAL NSHUT0/PM0 NCS-1100W-ACFW FAILED NSHUT0/PM1 NCS-1100W-ACFW OPERATIONAL NSHUTRP/0/RP0/CPU0#24H-1-701#sh contr npu resources lem loc 0/0/CPU0HW Resource Information Name # lem Asic Type # Jericho PlusNPU-0OOR Summary Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage Total In-Use # 490276 (62 %) iproute # 455268 (58 %) ip6route # 35009 (4 %) mplslabel # 0 (0 %) l2brmac # 0 (0 %)...RP/0/RP0/CPU0#24H-1-701#sh contr npu resources lpm loc 0/0/CPU0HW Resource Information Name # lpm Asic Type # Jericho PlusNPU-0OOR Summary Estimated Max Entries # 1563508 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage Total In-Use # 373486 (24 %) iproute # 335526 (21 %) ip6route # 37956 (2 %) ipmcroute # 1 (0 %) ip6mcroute # 0 (0 %) ip6mc_comp_grp # 0 (0 %)...RP/0/RP0/CPU0#24H-1-701#Year 2020We advertise the extra routes through a new peer (/24 + /23 for IPv4 and /48 + /47 for IPv6).On the route generator#pxe@pxe-ubuntu#~/routem$ more extra-v4-24H.2020router bgp 100bgp_id 192.168.100.153neighbor 192.168.100.200 remote-as 100neighbor 192.168.100.200 update-source 192.168.100.152capability ipv4 unicastcapability refreshnetwork 1 11.1.1.0/24 54757aspath 1 random 5locpref 1 120metric 1 5network 2 51.1.1.0/23 19737aspath 2 random 5locpref 2 110metric 2 10sendallpxe@pxe-ubuntu#~/routem$ more extra-v6-24H.2020router bgp 152bgp_id 192.168.100.11neighbor 2001#111##200 remote-as 100neighbor 2001#111##200 update-source 2001#111##152capability ipv6 unicastcapability refreshnetwork 1 100#1#1#1/48 9867aspath 1 random 5locpref 1 120metric 1 5network 2 102#1#1#1/47 8317aspath 2 random 5locpref 2 110metric 2 15sendallpxe@pxe-ubuntu#~/routemAnd on the IOS XR router#RP/0/RP0/CPU0#24H-1-701#sh bgp sumBGP router identifier 1.3.5.9, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0000000 RD version# 1735636BGP main routing table version 1735636BGP NSR Initial initsync version 6 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 1735636 1735636 1735636 1735636 1735636 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down 
St/PfxRcd192.168.100.151 0 100 1721352 12009 1735636 0 0 1d13h 790771192.168.100.152 0 100 163 12 1735636 0 0 00#01#28 74494RP/0/RP0/CPU0#24H-1-701#sh bgp ipv6 un sumBGP router identifier 1.3.5.9, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0800000 RD version# 273405BGP main routing table version 273405BGP NSR Initial initsync version 6 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 273405 273405 273405 273405 273405 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd2001#111##151 0 100 148151 12245 273405 0 0 21#04#39 729482001#111##152 0 152 110 128 273405 0 0 00#00#07 18184RP/0/RP0/CPU0#24H-1-701#sh dpa resources iproute loc 0/0/CPU0 | i /24 /24 500530 /25 144RP/0/RP0/CPU0#24H-1-701#sh dpa resources ip6route loc 0/0/CPU0 | i /48 /48 44876 /49 0RP/0/RP0/CPU0#24H-1-701#Jericho with NL eTCAMRP/0/RP0/CPU0#5501-SE-6625#sh contr npu resources lem loc 0/0/CPU0HW Resource Information Name # lemOOR Information NPU-0 Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage NPU-0 Total In-Use # 54881 (7 %) iproute # 10007 (1 %) ip6route # 44876 (6 %) mplslabel # 0 (0 %) l2brmac # 0 (0 %)RP/0/RP0/CPU0#5501-SE-6625#sh contr npu resources lpm loc 0/0/CPU0HW Resource Information Name # lpmOOR Information NPU-0 Estimated Max Entries # 549919 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage NPU-0 Total In-Use # 47287 (9 %) iproute # 0 (0 %) ip6route # 48051 (9 %) ipmcroute # 1 (0 %) ip6mcroute # 0 (0 %) ip6mc_comp_grp # 0 (0 %)RP/0/RP0/CPU0#5501-SE-6625#sh contr npu resources exttcamipv4 loc 0/0/CPU0HW Resource Information Name # ext_tcam_ipv4OOR Information NPU-0 Estimated Max Entries # 2048000 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage NPU-0 Total In-Use # 855743 (42 %) iproute # 856508 (42 %)RP/0/RP0/CPU0#5501-SE-6625#Jericho+ with OP eTCAMRP/0/RP0/CPU0#5508-2-702#sh contr npu resources exttcamipv4 loc 0/0/CPU0HW Resource Information Name # ext_tcam_ipv4 Asic Type # Jericho PlusNPU-0OOR Summary Estimated Max Entries # 4000000 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage Total In-Use # 865438 (22 %) iproute # 865518 (22 %)...RP/0/RP0/CPU0#5508-2-702#sh contr npu resources exttcamipv6 loc 0/0/CPU0HW Resource Information Name # ext_tcam_ipv6 Asic Type # Jericho PlusNPU-0OOR Summary Estimated Max Entries # 2000000 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage Total In-Use # 91237 (5 %) ip6route # 91387 (5 %)...Jericho+ with Large LPM and no eTCAMRP/0/RP0/CPU0#24H-1-701#sh contr npu resources lem loc 0/0/CPU0HW Resource Information Name # lem Asic Type # Jericho PlusNPU-0OOR Summary Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage Total In-Use # 554898 (71 %) iproute # 510026 (65 %) ip6route # 44876 (6 %) mplslabel # 0 (0 %) l2brmac # 0 (0 %)...RP/0/RP0/CPU0#24H-1-701#sh contr npu resources lpm loc 0/0/CPU0HW Resource Information Name # lpm Asic Type # Jericho PlusNPU-0OOR Summary Estimated Max Entries # 1559724 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage Total In-Use # 401492 (26 %) iproute # 355214 (23 %) ip6route # 46274 (3 %) ipmcroute # 1 (0 %) ip6mcroute # 0 (0 %) ip6mc_comp_grp # 0 (0 %)...RP/0/RP0/CPU0#24H-1-701#Year 2021 to year 2029We continue to 
increase the amount of “extra” routes step by step and we gather the results in following charts.Jericho w/ NL12K eTCAM Year LEM Max LEM in-use LPM Max LPM in-use eTCAM Max eTCAM in-use Starting Point 786432 45013 534746 38969 2048000 781297 2020 786432 54881 549919 47287 2048000 855743 2021 786432 67749 554073 53287 2048000 908919 2022 786432 83113 558189 59287 2048000 957666 2023 786432 100985 562075 65287 2048000 1006238 2024 786432 121353 566489 71287 2048000 1053379 2025 786432 144221 564116 77287 2048000 1099520 2026 786432 169581 568307 83287 2048000 1146447 2027 786432 197449 565508 89287 2048000 1193007 2028 786432 227817 569891 95287 2048000 1239282 Conclusion# these devices can handle the internet growth with no concern or limitation.Jericho+ w/ OP eTCAM Year eTCAM Max v4 in-use v6 in-use Starting Point 4M+2M 790992 73052 2020 4M+2M 865438 91237 2021 4M+2M 918614 110105 2022 4M+2M 967361 131473 2023 4M+2M 1015933 155341 2024 4M+2M 1063074 181709 2025 4M+2M 1109215 210577 2026 4M+2M 1156142 241945 2027 4M+2M 1202702 275813 2028 4M+2M 1248977 312181 Conclusion# these devices can handle the internet growth with zero concern or limitation, we have a lot of available space in the OP eTCAM.Jericho+ with Large LPM and host-optimized (default mode) Year LEM Max LEM in-use LPM Max LPM in-use Starting Point 786432 490276 1563508 373486 2020 786432 554898 1559724 401492 2021 786432 608960 1575645 419475 2022 786432 665324 1573918 433222 2023 786432 724944 1572299 446038 2024 786432 781632 1570948 458084 2025 - - - - We will cover what’s happening in 2024 in the next section.Jericho+ with Large LPM and host-optimized-disable Year LEM Max LEM in-use LPM Max LPM in-use Starting Point 786432 44509 1349849 819259 2020 786432 54373 1355608 902022 2021 786432 67241 1386730 961198 2022 786432 82609 1418157 1015945 2023 786432 100473 1422956 1070517 2024 786432 120841 1423107 1123656 2025 786432 143705 1412551 1175799 2026 786432 169077 1400564 1228726 2027 786432 196945 1388103 1281286 2028 786432 227313 1373880 1333561 Special case of the Jericho+ with Large LPMWhen simulating year 2024, we hit the first bottleneck with the J+ with large LPM systems#RP/0/RP0/CPU0#24H-1-701#sh bgp sumBGP router identifier 1.3.5.9, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0000000 RD version# 3153690BGP main routing table version 3153690BGP NSR Initial initsync version 6 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 3153690 3153690 3153690 3153690 3153690 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.100.151 0 100 1721534 12191 3153690 0 0 1d16h 790771192.168.100.152 0 100 1118 155 3153690 0 0 00#00#14 275328RP/0/RP0/CPU0#24H-1-701#RP/0/RP0/CPU0#24H-1-701#sh bgp ipv6 un sumBGP router identifier 1.3.5.9, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0800000 RD version# 755765BGP main routing table version 755765BGP NSR Initial initsync version 6 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 755765 755765 755765 755765 755765 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd2001#111##151 0 100 148333 14655 755765 0 0 1d00h 729482001#111##152 0 152 1063 764 755765 0 0 
00#00#33 108656RP/0/RP0/CPU0#24H-1-701#sh contr npu resources lem loc 0/0/CPU0HW Resource Information Name # lem Asic Type # Jericho PlusNPU-0OOR Summary Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Red OOR State Change Time # 2020.Jul.27 08#23#12 PDTCurrent Usage Total In-Use # 781632 (99 %) iproute # 670293 (85 %) ip6route # 111348 (14 %) mplslabel # 0 (0 %) l2brmac # 0 (0 %)NPU-1OOR Summary Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # Red OOR State Change Time # 2020.Jul.27 08#23#12 PDTCurrent Usage Total In-Use # 781632 (99 %) iproute # 670293 (85 %) ip6route # 111348 (14 %) mplslabel # 0 (0 %) l2brmac # 0 (0 %)RP/0/RP0/CPU0#24H-1-701#sh contr npu resources lpm loc 0/0/CPU0HW Resource Information Name # lpm Asic Type # Jericho PlusNPU-0OOR Summary Estimated Max Entries # 1570948 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage Total In-Use # 458084 (29 %) iproute # 387806 (25 %) ip6route # 70274 (4 %) ipmcroute # 1 (0 %) ip6mcroute # 0 (0 %) ip6mc_comp_grp # 0 (0 %)NPU-1OOR Summary Estimated Max Entries # 1570948 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage Total In-Use # 458084 (29 %) iproute # 387806 (25 %) ip6route # 70274 (4 %) ipmcroute # 1 (0 %) ip6mcroute # 0 (0 %) ip6mc_comp_grp # 0 (0 %)RP/0/RP0/CPU0#24H-1-701#RP/0/RP0/CPU0#24H-1-701#sh dpa resources iproute loc 0/0/CPU0~iproute~ OFA Table (Id# 37, Scope# Global)--------------------------------------------------IPv4 Prefix len distributionPrefix Actual Prefix Actual /0 1 /1 0 /2 4 /3 7 /4 1 /5 0 /6 0 /7 0 /8 10 /9 12 /10 36 /11 97 /12 285 /13 571 /14 1143 /15 1913 /16 13184 /17 7901 /18 13534 /19 25210 /20 39182 /21 47039 /22 100821 /23 131178 /24 665574 /25 144 /26 211 /27 383 /28 537 /29 721 /30 3241 /31 440 /32 9496OFA Infra Stats Summary Create Requests# 1740765 Delete Requests# 677889 Update Requests# 13819 Errors Resolve Failures# 0 Not Found in DB# 0 Exists in DB# 0 No Memory in DB# 0 Reserve Resources# 0 Release Resources# 0 Update Resources# 0 NPU ID# NPU-0 NPU-1 Create Server API Err# 4777 4777 Update Server API Err# 0 0 Delete Server API Err# 0 0RP/0/RP0/CPU0#24H-1-701#Starting from this point, the LEM is saturated while LPM is only used 29%.It would be profitable to disable the default “host-optimized” mode. With this configuration (requiring a reload of the product), we store the IPv4/24 routes in LPM, which is the largest memory of the system.RP/0/RP0/CPU0#24H-1-701#confRP/0/RP0/CPU0#24H-1-701(config)#hw-module fib ? dlb Destination Based Load balancing ipv4 Configure ipv4 protocol ipv6 Configure ipv6 protocol mpls Configure mpls protocolRP/0/RP0/CPU0#24H-1-701(config)#hw-module fib ipv4 ? scale Configure scale mode for no-TCAM cardRP/0/RP0/CPU0#24H-1-701(config)#hw-module fib ipv4 scale ? 
host-optimized-disable Configure Host optimization by default internet-optimized Configure Intetrnet optimizedRP/0/RP0/CPU0#24H-1-701(config)#hw-module fib ipv4 scale host-optimized-disableIn order to activate this new scale, you must manually reload the chassis/all line cardsRP/0/RP0/CPU0#24H-1-701(config)#commitRP/0/RP0/CPU0#24H-1-701(config)#endRP/0/RP0/CPU0#24H-1-701#After reloading the router, we verify LEM and LPM#RP/0/RP0/CPU0#24H-1-701#sh contr npu resources lem loc 0/0/CPU0HW Resource Information Name # lem Asic Type # Jericho PlusNPU-0OOR Summary Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage Total In-Use # 120841 (15 %) iproute # 9496 (1 %) ip6route # 111348 (14 %) mplslabel # 0 (0 %) l2brmac # 0 (0 %)...RP/0/RP0/CPU0#24H-1-701#sh contr npu resources lpHW Resource Information Name # lpm Asic Type # Jericho PlusNPU-0OOR Summary Estimated Max Entries # 1423107 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage Total In-Use # 1123656 (79 %) iproute # 1053378 (74 %) ip6route # 70275 (5 %) ipmcroute # 1 (0 %) ip6mcroute # 0 (0 %) ip6mc_comp_grp # 0 (0 %)LEM is now very lightly used (15%) and we will use massively the LPM for most of our prefixes. That’s why it’s now loaded at 79% but keep in mind it’s the largest memory of the system.We continue advertising more and more routes and in 2028, we are getting very close to the limit for this chipset#RP/0/RP0/CPU0#24H-1-701#sh bgp sumBGP router identifier 1.3.5.9, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0000000 RD version# 5960575BGP main routing table version 5960575BGP NSR Initial initsync version 629974 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 5960575 5960575 5960575 5960575 5960575 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd192.168.100.151 0 100 1719200 99 5960575 0 0 01#36#51 790771192.168.100.152 0 100 2525 63 5960575 0 0 00#00#14 462640RP/0/RP0/CPU0#24H-1-701#sh bgp ipv6 un sumBGP router identifier 1.3.5.9, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0800000 RD version# 1939637BGP main routing table version 1939637BGP NSR Initial initsync version 181612 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 1939637 1939637 1939637 1939637 1939637 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd2001#111##151 0 100 73048 8961 1939637 0 0 01#37#08 729492001#111##152 0 152 3031 1169 1939637 0 0 00#00#41 239128RP/0/RP0/CPU0#24H-1-701#sh contr npu resources lem loc 0/0/CPU0HW Resource Information Name # lem Asic Type # Jericho PlusNPU-0OOR Summary Estimated Max Entries # 786432 Red Threshold # 95 Yellow Threshold # 80 OOR State # GreenCurrent Usage Total In-Use # 227313 (29 %) iproute # 9496 (1 %) ip6route # 217820 (28 %) mplslabel # 0 (0 %) l2brmac # 0 (0 %)...RP/0/RP0/CPU0#24H-1-701#sh contr npu resources lpm loc 0/0/CPU0HW Resource Information Name # lpm Asic Type # Jericho PlusNPU-0OOR Summary Estimated Max Entries # 1373880 Red Threshold # 95 Yellow Threshold # 80 OOR State # Red OOR State Change Time # 2020.Jul.27 10#16#25 PDTCurrent Usage Total In-Use # 1333561 (97 %) iproute # 1239283 (90 %) ip6route # 94275 
(7 %) ipmcroute # 1 (0 %) ip6mcroute # 0 (0 %) ip6mc_comp_grp # 0 (0 %)...RP/0/RP0/CPU0#24H-1-701#Conclusion# it’s advisable to disable the default “host-optimized” profile, as it will significantly extend the router capability when used with a full internet view.ConclusionIn this post, we created a prediction model for the growth of the internet table.No doubt it can be refined. For example, we only considered the “non-IPv4/24 routes” and “non-IPv6/48 routes” as a block. Also, we advertised only sequential /23s and /47s to create the “extra prefixes”. Much better can be done here (hopefully by someone else ;)).Take all this with a grain of salt. Also, if you are aware of more precise prediction models (with an evolution for individual prefix lengths), please let us know.In the lab, we simulated internet routing from 2020 to 2028 and we examined the consumption of each memory (LEM, LPM and, when present, eTCAM).The systems with external TCAM clearly show a ton of free space for the internet table, even in 8 years.The systems based on Jericho+ with large LPM can also be used for internet peering, but it may be required to disable the “host-optimized” mode in a couple of years (around 2024) to leverage the large size of the LPM and offer 4+ more years of growth.Next episode# Jericho2. Stay tuned.", "url": "/tutorials/ncs5500-routing-resource-with-2020-internet/", "author": "Nicolas Fevrier", "tags": "iosxr, internet, bgp, ncs5500" } , "tutorials-acl-packet-length-matching-ncs55xx-and-ncs5xx": { "title": "ACL Packet Length Matching - NCS55xx and NCS5xx", "content": " On This Page Introduction Overview Supported ACL Match Criteria Header Definition - IPv4 Packet matching criteria ACL configuration ACL Verification Hardware or TCAM programming of the ACL Traffic Tests Changing the frame size Packet Length range Hardware Programming of the range Header Definition - IPv6 Optimizing Memory usage References Summary IntroductionAccess Control Lists have been implemented for a long time now and have been an integral part of data plane security for almost every organization.Though everyone is aware of what an access control list is, I would like to brush up on some basics before deep diving into more complex functionality.ACLs can be considered an ordered list of conditions used to test the network traffic that traverses the router interfaces. On the basis of the defined lists, the router decides which packets to accept and which to drop. ACLs help in managing traffic and securing access to and from the network.ACLs can make permit/deny decisions based on source/destination address, source/destination ports, L3 protocols, L4 port numbers and many others.OverviewAn introduction to security ACLs, feature support and statistics is covered at a high level in the LinkIn this document, we will deep dive into how the NCS55xx and NCS5xx program the packet length in the TCAM and use it to filter packets. The main use case of this matching criterion is to identify malicious packet-length ranges entering the network and deny them.ACLs on NCS55xx and NCS5xx use the Programmable Mapping and Filtering (PMF) functionality and TCAM (internal/external) in both the Ingress Receive Packet Processing (IRPP) and Egress Receive Packet Processing (ERPP) blocks. The line cards in these platforms are based on the Broadcom family of chipsets. 
These chipsets use a pipeline architecture with dedicated hardware blocks for performing various functions.ACLs contain one or more ACEs which are used to match packets and perform an action on those packets. The typical action of an ACE is to either Permit or Deny. The TCAM is programmed with tables, much like a database, on which the match and action criteria are performed.In hardware, we have databases that are unique to each feature. For ACL, we have further defined unique databases based on these fields# Protocol (IPv4, IPv6, L2) Direction (ingress/egress) Compression (uncompressed/compressed) Default TCAM key or user-defined (UDK) TCAM keySupported ACL Match Criteria Packet Length IP Fragmentation Source and Destination Port User Defined Keys - UDK User Defined Field - UDFNote# We will have dedicated posts explaining each matching criterion.Header Definition - IPv4In NCS55xx and NCS5xx, when we configure an ACE through the CLI, the packet length covers only the IP packet, starting at the IPv4 header. As per the above figure, only the IP packet is taken into consideration when you define the packet length in an ACE. It does not include any L2 headers, such as Ethernet/VLAN. Therefore, when matching the packets on the router, the layer 2 headers need to be taken into consideration and the packet length value should be configured accordingly. We will see this with an example in a later section.Packet matching criteriaWe have different criteria for matchingRP/0/RP0/CPU0#N55-24(config-ipv4-acl)#40 permit ipv4 any any packet-length ? eq Match only packets with a given value gt Match only packet with a greater value lt Match only packets with a lower value neq Match only packets not on a given value range Match only packets in the range of valueRP/0/RP0/CPU0#N55-24(config-ipv4-acl)#40 permit ipv4 any any packet-length ACL configurationLet us configure a simple ACL for matching packet length and attach it to the interfaceRP/0/RP0/CPU0#N55-24#show access-lists ipv4 test-acl-v4-pkt-length Thu Jul 23 06#46#11.884 UTCipv4 access-list test-acl-v4-pkt-length 10 permit ipv4 any any packet-length eq 800 20 permit ipv4 any any packet-length eq 1000 30 permit ipv4 any any packet-length eq 1500RP/0/RP0/CPU0#N55-24#RP/0/RP0/CPU0#N55-24#show running-config interface tenGigE 0/0/0/0.10Thu Jul 23 06#46#34.378 UTCinterface TenGigE0/0/0/0.10 description using it for ACL testing ipv4 address 60.1.1.1 255.255.255.0 load-interval 30 encapsulation dot1q 10 ipv4 access-group test-acl-v4-pkt-length ingress!Note# In IOS XR releases 6.5.2 and later, the packet length is not supported in the default TCAM key and we need to configure a UDK to have it in the key.hw-module profile tcam format access-list ipv4 src-addr dst-addr src-port dst-port proto packet-length frag-bit port-rangeACL VerificationOther show commands are extensively covered in the LinkHardware or TCAM programming of the ACLThe above 2 outputs show us that the IPv4 L3 ACL database is programmed and a Database ID is created for it. The NPU details are extracted for the interface where the ACL is applied. We can see the bank_ID with the entry size as 320 bits. For more information on memory banks please Refer. These values will help us in understanding what is configured in the hardware.The above output shows the TCAM programming of the packet length configured in Hexadecimal Hexadecimal Decimal 320 800 3E8 1000 5DC 1500 Traffic Tests Below is the snapshot of the traffic stream used. It has a packet length of 822 bytes (800 bytes plus 18 bytes of the Ethernet header + 4 bytes of VLAN header). 
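As a quick cross-check of the arithmetic between the frame size configured on the traffic generator and the value used in a packet-length ACE, here is a tiny helper. It is illustrative only# the 22 bytes of overhead assume the 18 bytes of Ethernet header counted by the tester plus one 4-byte dot1q tag, as in this setup, and the function name is mine.

```python
# Illustrative helper only: convert the frame size seen on the traffic generator into
# the value to use in a packet-length ACE. Assumes 18 bytes of Ethernet header (as
# counted by the tester in this setup) plus 4 bytes per dot1q tag.
def ace_packet_length(frame_size, vlan_tags=1):
    l2_overhead = 18 + 4 * vlan_tags
    return frame_size - l2_overhead

print(ace_packet_length(822))   # 800 -> matches ACE 10 of the ACL above
print(ace_packet_length(800))   # 778
```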
The traffic matches the first ACE, with a packet length of 800 bytes. As mentioned earlier, the TCAM doesn’t consider L2 headers, so the traffic stream has to be sized accordingly. In a real production network, the ACE has to be configured accordingly so we can permit or deny legitimate packets. This can be done by configuring the range command, which is explained in a later sectionRP/0/RP0/CPU0#N55-24#show access-lists ipv4 test-acl-v4-pkt-length hardware ingress location 0/0/CPU0Thu Jul 23 08#40#42.081 UTCipv4 access-list test-acl-v4-pkt-length 10 permit ipv4 any any packet-length eq 800 (4124541 matches) 20 permit ipv4 any any packet-length eq 1000 30 permit ipv4 any any packet-length eq 1500RP/0/RP0/CPU0#N55-24#Note# Permit ACL stats are not enabled by default. We need to configure the below hw-module profile to enable themhw-module profile stats acl-permitChanging the frame sizeModifying the packet length to 800# traffic drops as the packet does not match any ACE. For the traffic to match we need to configure an ACE with packet length 800-22 = 778 bytes, due to the reason stated above.RP/0/RP0/CPU0#N55-24#show access-lists ipv4 test-acl-v4-pkt-length hardware ingress location 0/0/CPU0Thu Jul 23 08#49#28.720 UTCipv4 access-list test-acl-v4-pkt-length 10 permit ipv4 any any packet-length eq 800 20 permit ipv4 any any packet-length eq 1000 30 permit ipv4 any any packet-length eq 1500RP/0/RP0/CPU0#N55-24#Packet Length rangeFor scenarios where we are not sure of the absolute packet length, we have the option to configure a range. In the below ACL, we have configured 2 ACEs. Sequence 10 is a range and sequence 20 is an absolute value.ipv4 access-list test-acl-v4-pkt-length 10 permit ipv4 any any packet-length range 800 1000 20 permit ipv4 any any packet-length eq 1500We can see that with the range command only 8 entries (7 + 1 for internal usage) are consumed in the TCAM. If we had configured individual values instead, each would utilize one entry.Hardware Programming of the rangeAs per the above ACL, the TCAM programmed values are Hexadecimal Decimal Value 3E8 1000 Absolute Value programmed 3E0/FFF8 992 to 999 Range programmed 3C0/FFE0 960 to 991 Range programmed 380/FFC0 896 to 959 Range programmed 340/FFC0 832 to 895 Range programmed 320/FFE0 800 to 831 Range programmed 5DC 1500 Absolute Value programmed When configuring the range command, the algorithm takes into account the mask as well as the value. For example, 3E0/FFF8 has 3E0 as the value and FFF8 as the mask. Accordingly, it programs all the values of the given range into the TCAM in batches of entries. Let us understand how the range is configured in the TCAM as value/mask pairs and how to interpret it.3E0/FFF8# 3E0 is the value and FFF8 is the mask. FFF8 in binary is 1111111111111000. The last 3 bits are 0, so we can program 2^3 = 8 values, which means 3E0 to 3E7 (992 to 999). Hence, the key here is the bits in the mask set to 0, which determine how many values a single TCAM entry covers. 
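The decomposition above follows the classic expansion of an arbitrary range into aligned value/mask blocks. The short sketch below is only an illustration of that expansion (not the actual ASIC microcode)# for the 800-1000 range of ACE 10 it produces exactly the six range entries listed in the table.

```python
# Sketch of the classic range-to-prefix expansion (illustration, not the ASIC microcode):
# split an inclusive range into aligned blocks that one TCAM value/mask entry can cover.
def range_to_tcam(lo, hi, width=16):
    entries = []
    while lo <= hi:
        size = (lo & -lo) or (1 << width)     # largest block aligned at lo...
        while size > hi - lo + 1:             # ...that still fits inside the range
            size //= 2
        mask = ((1 << width) - 1) ^ (size - 1)
        entries.append((lo, mask))
        lo += size
    return entries

for value, mask in range_to_tcam(800, 1000):
    print(f"{value:X}/{mask:X}")
# Prints 320/FFE0, 340/FFC0, 380/FFC0, 3C0/FFE0, 3E0/FFF8 and 3E8/FFFF,
# i.e. the same entries shown in the table (the 1000 entry being the exact match).
```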
Header Definition - IPv6For IPv6, the “payload length” field in the packet does not include the IPv6 header. It only covers the payload (the data following the headers).The IPv6 header is assumed to be 40 bytes; so the configured ACE’s packet length is reduced by 40 bytes and this value is configured into the TCAM as the match criteria for the IPv6 header. For example, if the ACE is configured for a packet length of 200, the TCAM will program it as 160. The IPv6 header is really not a fixed size, because there can be one or more extension headers. Currently, the hardware does not take this into consideration and simply assumes the IPv6 header is a fixed 40 bytes.We can similarly apply an IPv6 ACL and check the programming using the same commands.We will dedicate a separate post to the IPv6 Extension HeaderOptimizing Memory usageSometimes in real production networks, we are not sure what packet length we will receive on the interface, whether it will have a VLAN header or not, or what the size of the packet will be. In these scenarios, it is recommended to use the range option. This will help in optimizing resource utilization with fewer TCAM entries. For example, suppose we want to permit only packets with a length of 800 to 810 bytes while denying others.This will consume 13 entries in the TCAM. Instead of using individual ACEs, if we use the range option, we will use only 5 entriesReferences Security ACL’s on NCS5500 CCO documentationSummaryWe hope this document helps in understanding the matching criteria based on packet length. This can be particularly useful in mitigating packets with sizes which are known for malicious behaviour. Those can be detected and prevented from causing data plane security issues.We also saw how to utilize the internal TCAM resources optimally by using the range command. This is particularly useful when we have many ACEs in traditional ACLs. Configuring a higher or lower packet length doesn’t cause the TCAM entries to increase.Stay tuned for the next matching criterion and its interpretation at the hardware level.", "url": "/tutorials/acl-packet-length-matching-ncs55xx-and-ncs5xx/", "author": "Tejas Lad", "tags": "cisco, NCS5500, NCS540, NCS560, ACL, NCS 5500" } , "tutorials-acl-ip-fragments-matching-ncs55xx-and-ncs5xx": { "title": "ACL IP Fragments Matching - NCS55xx and NCS5xx", "content": " On This Page Introduction Fragmentation IPv4 Packet Types w.r.t Fragmentation Understanding the keyword# Fragments ACL Verification Traffic Tests and Validation Sending non fragmented packets with TCP destination port 80 Sending fragmented packets# Non-initial Fragment with Offset > 0 Limitation with Keyword# Fragments Understanding the keyword# Fragment-Type Fragment-Types Configuring ACL with fragment-type Applying the ACL to the interface ACL Verification Traffic Tests and Validation Sending first fragmented packet MF=1 and Offset = 0 Changing the ACE 10 to fragment-type don’t fragment with same packet MF=1 and Offset = 0 References Summary IntroductionIn the previous TechNote, we covered the matching criteria on the basis of Packet Length. In this note, we will discuss how customers can protect the data plane from fragmented packets. On many occasions, fragmented packets are not expected in the network. It becomes very important for network administrators to add that extra layer of filtering. Hardware should be capable of filtering incoming packets on the basis of fragment flags and offset values. In this document, we will explore yet another filtering capability of NCS55xx and NCS5xx.FragmentationLet us have a quick refresher before jumping into the implementation. In simple words, IP fragmentation can be considered a process of breaking down large packets into smaller fragments. 
This happens when an intermediate device has a lower MTU on the interface than the arriving packet.The packet is reassembled later using various fields of the IP header.IP HeaderBelow are the important fields used during fragmentation and reassembly of the original packet. Identification Flags Offset Source and Destination AddressesLet us take a simple example.In the above figure, all the links have the default MTU of 1500 bytes, except the link connected between R1 and H1, which we have configured as 1000 bytes. When a packet of 1500 bytes arrives on the interface, R1 has to fragment the packet. The packet will be fragmented as belowFor more information on IP fragmentation please ReferIPv4 Packet Types w.r.t FragmentationIPv4 packets fall into the below categories. Packet Type More Fragment Fragment Offset L4 Header Non Fragmented 0 0 Yes Initial Fragment 1 0 Yes Non-Initial Fragment 1 Non-Zero No Non-Initial Fragment 0 Non-Zero No As per the following Documentation, non-fragments and the initial fragment of an IP packet can contain both Layer 3 and 4 information that the ACLs can match against, for a permit or deny decision. Non-initial fragments are typically allowed through the ACL, because they can be blocked based only on Layer 3 information in the packets. However, because these packets do not contain Layer 4 information, they do not match the Layer 4 information in the ACL entry, if it exists. Allowing the non-initial fragments of an IP datagram through is acceptable because the host receiving the fragments is not able to reassemble the original IP datagram without receiving all the fragments. These initial or non-initial fragments may not always be legitimate packets.Understanding the keyword# FragmentsFrom the above discussion, let us see the use of the keyword Fragments and how it works on NCS55xx and NCS5xx.Consider the above scenario. A host wants to access a web server inside the network. The network administrator configures an access list which should allow all non-fragmented packets from any user to that server on port 80. This access-list allows any user to reach the destination on TCP port 80 and denies all other services.RP/0/RP0/CPU0#N55-24#show access-lists ipv4 fragment ipv4 access-list fragment 10 permit tcp any host 70.1.1.2 eq www 20 deny ipv4 any anyRP/0/RP0/CPU0#N55-24#ACL VerificationRP/0/RP0/CPU0#N55-24#show access-lists ipv4 usage pfilter location 0/0/CPU0Thu Aug 6 17#27#24.056 UTCInterface # TenGigE0/0/0/0.10 Input ACL # Common-ACL # N/A ACL # fragment Output ACL # N/ARP/0/RP0/CPU0#N55-24#show access-lists ipv4 fragment hardware ingress verify location 0/0/CPU0Thu Aug 6 17#27#47.309 UTCVerifying TCAM entries for fragmentPlease wait... INTF NPU lookup ACL # intf Total compression Total result failed(Entry) TCAM entries type ID shared ACES prefix-type Entries ACE SEQ # verified ---------- --- ------- --- ------ ------ ----------- ------- ------ ------------- ------------ TenGigE0_0_0_0.10 (ifhandle# 0x41b8) 0 IPV4 1 1 2 NONE 4 passed 4RP/0/RP0/CPU0#N55-24#Note# The fragment keyword is available in the default key and can be applied to all the systems, including Q-MX, Jericho and Jericho+, with and without external TCAM.Traffic Tests and ValidationFirst, we send non-fragmented packets with TCP destination port 80 (one way to craft equivalent test packets is sketched below). 
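The tests in this article are run with a hardware traffic generator. For readers without one, roughly equivalent packets can be crafted with Scapy; this is an illustrative sketch only (not part of the original test setup), reusing the server address from the ACL above.

```python
# Illustrative sketch only: crafting roughly equivalent test packets with Scapy
# (the article itself uses a hardware traffic generator).
from scapy.all import IP, TCP, Raw, fragment, send

# Non-fragmented TCP/80 packet towards the web server (should hit the permit ACE)
plain = IP(dst="70.1.1.2") / TCP(dport=80) / Raw(b"x" * 200)

# A large datagram split into fragments; the non-initial pieces carry no TCP header,
# which is exactly what the ACL logic discussed above cannot inspect
frags = fragment(IP(dst="70.1.1.2") / TCP(dport=80) / Raw(b"x" * 3000), fragsize=1480)

send(plain)
send(frags)
```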
ACE 10 is matching the packets allowing the access to the web server.RP/0/RP0/CPU0#N55-24#show access-lists ipv4 fragment hardware ingress location 0/0/CPU0 Fri Aug 7 05#35#14.811 UTCipv4 access-list fragment 10 permit tcp any host 70.1.1.2 eq www (70495 matches) 20 deny ipv4 any anyNow let us modify the packet and make it a fragmented packet and check if the ACL allows or denies the trafficRP/0/RP0/CPU0#N55-24#show access-lists ipv4 fragment hardware ingress location 0/0/CPU0 Fri Aug 7 05#39#50.345 UTCipv4 access-list fragment 10 permit tcp any host 70.1.1.2 eq www (1863448 matches) 20 deny ipv4 any anyFrom the above, we could see the that Fragmented packets are also making their way through the network, which is against what the network administrator had intented. It permits these packets because non-initial fragments do not contain Layer 4 information, and the ACL logic assumes that if the Layer 3 information matches, then the Layer 4 information would also match, if it was available. This could lead to data plane security issues.Now how to stop this ? Lets see the use of the keyword Fragments and how we can use the same to drop the fragmented packets and allow only non-fragmented traffic.Modifying the ACL as belowRP/0/RP0/CPU0#N55-24#show access-lists ipv4 fragment Fri Aug 7 07#07#09.803 UTCipv4 access-list fragment 10 deny ipv4 any host 70.1.1.2 fragments 20 permit tcp any host 70.1.1.2 eq www 30 deny ipv4 any anyRP/0/RP0/CPU0#N55-24#Sending non fragmented packets with TCP destination port 80We can see from the below output that ACE 20 is matching and traffic is allowed.RP/0/RP0/CPU0#N55-24#show access-lists ipv4 fragment hardware ingress location 0/0/CPU0 Fri Aug 7 07#12#37.129 UTCipv4 access-list fragment 10 deny ipv4 any host 70.1.1.2 fragments 20 permit tcp any host 70.1.1.2 eq www (20260 matches) 30 deny ipv4 any anyRP/0/RP0/CPU0#N55-24#Sending fragmented packets# Non-initial Fragment with Offset > 0We can see the packets are matching the ACE 10 now and traffic is deniedRP/0/RP0/CPU0#N55-24#show access-lists ipv4 fragment hardware ingress location 0/0/CPU0 ipv4 access-list fragment 10 deny ipv4 any host 70.1.1.2 fragments (40966 matches) 20 permit tcp any host 70.1.1.2 eq www 30 deny ipv4 any anyRP/0/RP0/CPU0#N55-24#We can use the below command, to check the node counters to see the reason behind the packet drops. Here we can see the counter is getting increased due to deny ACLRP/0/RP0/CPU0#N55-24#show spp node-counters location 0/0/CPU0 | in ACLPUNT ACL_DENY# 220484RP/0/RP0/CPU0#N55-24#Limitation with Keyword# FragmentsIn the above section, we saw how a fragmented packet with a non zero offset value is filtered out. The limitation of the keyword Fragments is, it can be used only for offset values greater than 0. If we want to filter out the initial fragments (FO=0), we will not be able to do by this keyword. For example consider the below packetIt has More Fragment = 1 and Fragment Offset = 0. Therefore this is the initial fragment.RP/0/RP0/CPU0#N55-24#show access-list ipv4 fragment hardware ingress location 0/0/CPU0 Fri Aug 7 08#08#07.663 UTCipv4 access-list fragment 10 deny ipv4 any host 70.1.1.2 fragments 20 permit tcp any host 70.1.1.2 eq www (264506 matches) 30 deny ipv4 any anyWe can see it matches the ACE 20 and traffic is allowed, though it is a fragmented packet. 
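In other words, the fragments keyword is effectively a test on the fragment offset alone. A one-line software model of the check (a simplification for illustration, not the actual TCAM logic) makes the gap obvious:

def matches_fragments_keyword(mf, fo):
    # 'fragments' matches non-initial fragments only, i.e. offset > 0
    return fo > 0

print(matches_fragments_keyword(mf=0, fo=185))   # True  -> denied by ACE 10
print(matches_fragments_keyword(mf=1, fo=0))     # False -> falls through to ACE 20 and is permitted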
To avoid this scenario, we need to use another keyword Fragment-Type.Understanding the keyword# Fragment-TypeLet us try to understand the keyword Fragment-Type in more details and the way we can use it to overcome the limitation of the keyword Fragments.Fragment-TypesBelow are the options available for matching the fragments Fragment Type Description Flags dont-fragment Match don’t fragment flag DF=1 first-fragment Match first fragment flag MF=1 & FO=0 is-fragment Match is fragment flag Any fragments. last-fragment Match last fragment flag MF=0 and FO>0 Configuring ACL with fragment-typeWe will take into consideration packet type which is first-fragment. MF=1 & FO=0 (this is the packet which escaped the keyword - fragments)RP/0/RP0/CPU0#N55-20#show access-lists ipv4 fragment-type Fri Aug 7 12#15#36.349 UTCipv4 access-list fragment-type 10 deny ipv4 any host 60.1.1.2 fragment-type first-fragment 20 permit tcp any host 60.1.1.2 eq www 30 deny ipv4 any anyNote# Below hw-module profile needs to be configured along with a UDKhw-module profile tcam format access-list ipv4 src-addr dst-addr src-port dst-port proto frag-bithw-module profile tcam acl-prefix percent 20Applying the ACL to the interfaceBefore applying the ACL to the interface let us understandhw-module profile tcam format access-list ipv4 src-addr dst-addr src-port dst-port proto frag-bithw-module profile tcam acl-prefix percent 20Keyword fragment-type is not supported with default TCAM keys. We need to define a UDK with frag-bit. When configuring it to an interface it needs to be applied along with compression level.There are multiple levels of ACL compression supported,however the NCS5xx and NCS55xx only supports certain levels, protocols and directions. Uncompressed or compression level 0 ACLs utilize only one TCAM lookup in hardware. Compressed ACLs utilize two TCAM lookups in hardware # Stage 1# External TCAM & Stage 2# Internal TCAM. Only compress level 3 is supported.Because compression requires two TCAM lookups, the keyword fragment-type can only be supported on systems with an external TCAM. Compression is only supported for IPv4/IPv6 in the ingress direction.RP/0/RP0/CPU0#N55-20#show running-config interface gigabitEthernet 0/0/0/2.10Fri Aug 7 12#27#15.320 UTCinterface GigabitEthernet0/0/0/2.10 ipv4 address 70.1.1.1 255.255.255.0 encapsulation dot1q 10 ipv4 access-group fragment-type ingress compress level 3!The above 2 profiles are only applicable for systems having Jericho and Q-MX with NetLogic (NL12k) eTCAM. For systems having Jericho+ and Optimus Prime (OP) eTCAM it works by default due to larger space available. (We will dedicate a separate post for Jericho2 and its properties.) 
Hardware ASIC eTCAM NCS-5501-SE Q-MX NL12k NCS-5502-SE J NL12k NCS55A1-36H-SE-S J+ OP NC55-24H12F-SE J NL12k NC55-24X100G-SE J NL12k NC55-36X100G-A (-SE) J+ OP NCS55A2-MOD-SE-S J+ OP To understand in detail regarding the profile and compression support, please refer couple of excellent articles (Hybrid ACL’s, HW-Module Profiles)ACL VerificationTraffic Tests and ValidationFirst we send non fragmented packets with TCP destination port 80 and could ACE 20 is matching and traffic is permitted.RP/0/RP0/CPU0#N55-20#show access-lists ipv4 fragment-type hardware ingress location 0/0/CPU0 Fri Aug 7 15#30#33.954 UTCipv4 access-list fragment-type 10 deny ipv4 any host 60.1.1.2 fragment-type first-fragment 20 permit tcp any host 60.1.1.2 eq www (513249 matches) 30 deny ipv4 any anyRP/0/RP0/CPU0#N55-20#Sending first fragmented packet MF=1 and Offset = 0Packets are matching the ACE 10 and getting deniedRP/0/RP0/CPU0#N55-20#show access-lists ipv4 fragment-type hardware ingress location 0/0/CPU0Fri Aug 7 17#28#06.005 UTCipv4 access-list fragment-type 10 deny ipv4 any host 60.1.1.2 fragment-type first-fragment (186591 matches) 20 permit tcp any host 60.1.1.2 eq www 30 deny ipv4 any anyRP/0/RP0/CPU0#N55-20#We can also use is-fragment in place of first-fragment.RP/0/RP0/CPU0#N55-20#show spp node-counters location 0/0/CPU0 | in ACLFri Aug 7 17#32#14.929 UTC PUNT ACL_DENY# 23004RP/0/RP0/CPU0#N55-20#show spp node-counters location 0/0/CPU0 | in ACLFri Aug 7 17#32#18.124 UTC PUNT ACL_DENY# 23278RP/0/RP0/CPU0#N55-20#Changing the ACE 10 to fragment-type don’t fragment with same packet MF=1 and Offset = 0RP/0/RP0/CPU0#N55-20#show access-lists ipv4 fragment-type Fri Aug 7 17#37#43.211 UTCipv4 access-list fragment-type 10 deny ipv4 any host 60.1.1.2 fragment-type dont-fragment 20 permit tcp any host 60.1.1.2 eq www 30 deny ipv4 any anyRP/0/RP0/CPU0#N55-20#RP/0/RP0/CPU0#N55-20#show access-lists ipv4 fragment-type hardware ingress location 0/0/CPU0 Fri Aug 7 17#37#00.856 UTCipv4 access-list fragment-type 10 deny ipv4 any host 60.1.1.2 fragment-type dont-fragment 20 permit tcp any host 60.1.1.2 eq www (98063 matches) 30 deny ipv4 any anyRP/0/RP0/CPU0#N55-20#From the above output we can see, it is not matching sequence 10, hence it moves to the next sequence 20. The fragment matches the criteria, hence it is permitted.To summarise the behaviour for different fragment typesConsidering Packet-Fragment has MF=1 and FO=0 Fragment Type Action Reason first-fragment Dropped Expects MF=1 FO=0 dont-fragment Permitted Expects DF=1 is-fragment Dropped Matches Any fragments last-fragment Permitted Expects MF=0 FO>0 You can use the available options for fragment-type, to filter the fragments as per different scenarios. Fragment-type keyword gives user granular level of filtering the packets.References CCO Config Guide Fragmentation White Paper Security ACL Part 2 NCS5500 HW-Module ProfilesSummaryFragmentation is a process of breaking bigger packet into smaller packets and reassembling it. We saw how malicious fragments can make their way in and cause security issues. NCS5xx and NCS55xx is equipped with the capabilities to provide that extra security to the data plane. The platforms support different keywords - Fragments and Fragment-type to filter out fragments which are not expected to enter the network. They can be filtered before reaching the target. 
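As a recap, the matching rules of the two keywords covered in this post can be modelled in a few lines of Python (a simplified software model for illustration, not the hardware implementation):

def matches(keyword, df, mf, fo):
    if keyword == "fragments":       return fo > 0                # non-initial fragments only
    if keyword == "dont-fragment":   return df == 1
    if keyword == "first-fragment":  return mf == 1 and fo == 0
    if keyword == "is-fragment":     return mf == 1 or fo > 0     # any fragment
    if keyword == "last-fragment":   return mf == 0 and fo > 0
    raise ValueError(keyword)

# The first fragment used in the tests above (MF=1, FO=0)
for kw in ("fragments", "first-fragment", "is-fragment", "last-fragment", "dont-fragment"):
    print(f"{kw:15s}: {matches(kw, df=0, mf=1, fo=0)}")
# Only first-fragment and is-fragment match, in line with the summary table above.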
Hope this helps to clear the filtering criteria for fragmented packets.Stay tuned for next article, where we will explore another ACL matching capabilities of the portfolio.", "url": "/tutorials/acl-ip-fragments-matching-ncs55xx-and-ncs5xx/", "author": "Tejas Lad", "tags": "NCS5500, ACL, IP FRAGMENTS, NCS560, NCS540, NCS5xx, NCS55xx" } , "tutorials-user-defined-key-udk-for-ncs55xx-and-ncs5xx": { "title": "User Defined Key - UDK for NCS55xx and NCS5xx", "content": " On This Page Introduction User-Defined Key - UDK Advantages of using UDK UDK Feature Support When to use UDK ? Feature Details UDK Definition UDK TCAM Size UDK and Default-Key # Preference ? How many UDK you can configure ? Configuring two different ACL’s on different interface using same global UDK Global or LC Specific UDK # Preference ? Reference Summary IntroductionIn the previous technotes, (ACL Packet Length Match, ACL Fragment Match) we have used the term User Defined Key - UDK many times and also saw it was compulsory to configure it for certain match criteria. In this technote, we will deep dive into the UDK concept and explore in details regarding the feature support.User-Defined Key - UDK(Reference# NCS5500 deepdive)As we already know, the NCS55xx and NCS5xx use either internal or external TCAM to perform the lookup and take defined action on each packet. Multiple features share the same TCAM resource in the hardware. Hence it needs to be utilized properly or else we are at a risk of running out of TCAM space. As default key definitions does not have enough space to include all qualifier/action fields, User-Defined Key (UDK) is needed. The space (key width) available for these key definitions is also constrained. A key definition specifies which qualifier and action fields are available to the ACL feature when performing the lookup. Not all available qualifier and action fields can be included in each key definition. (Reference)The key definitions depend on the following attributes of the access-list# Attributes Details Direction of attachment Ingress or Egress Protocol type IPv4/IPv6/L2 Compression level Uncompressed/Compressed Advantages of using UDK To include qualifier fields which are not included in the default TCAM key To change the ACL mode from shared to unique to support a greater number of unique ACLs, unique counters, etc. To reduce the size of the TCAM key (number of banks consumed) For further information, please referUDK Feature Support A UDK can be defined globally or line card specific. The line card specific configuration will take precedence over global configuration. Only traditional or uncompressed ACL is supported. Hybrid or Scaled ACL is not supported along with UDK. A UDK definition will override the default key definition. Only IPv4 and IPv6 keys in ingress direction are currently supported. The IPv4 UDK supports a TCAM key size of 160 bits and 320 bits The IPv6 UDK supports the size of 320 bits. If the key defintion goes beyond the supported TCAM size, it will reject the ACL configuration.When to use UDK ?Below table shows the frequently used qualifiers for IPv4 and IPv6. If the default TCAM key is set as Enabled, then the Qualifier field is enabled by default. If the default TCAM key is set as Disabled, then Qualifier field must use UDK. 
(refer) Parameter IPv4 Default Key IPv6 Default Key Source Address Enabled Enabled Destination Address Enabled Enabled Source Port Enabled Enabled Destination Port Enabled Enabled Port Range Enabled Not Supported Protocol/Next Header Enabled Enabled Fragment bit Enabled (fragment-type needs UDK) Not Supported Packet length Disabled Disabled Precedence/DSCP Disabled Enabled TCP Flags Enabled Enabled TTL Match Disabled Disabled Interface-based Disabled Disabled UDF 1-7 Disabled Disabled ACL ID Enabled Enabled Note# This table is applicable across portfolio and also holds true for system with external tcam as well.Feature DetailsLet us explore the UDK support in details.UDK DefinitionDefining IPv4 UDK ACL FormatRP/0/RP0/CPU0#N55-24(config)#hw-module profile tcam format access-list ipv4 ? common-acl enable common-acl, 1 bit qualifier dst-addr destination address, 32 bit qualifier dst-port destination L4 Port, 16 bit qualifier enable-capture Enable ACL based mirroring (Included by default) enable-set-ttl Enable Setting TTL field (Included by default) frag-bit fragment-bit, 1 bit qualifier interface-based Enable non-shared interface based ACL location Location of format access-list ipv4 config packet-length packet length, 16 bit qualifier port-range ipv4 port range qualifier, 24 bit qualifier precedence precedence/dscp, 8 bit qualifier proto protocol type, 8 bit qualifier src-addr source address, 32 bit qualifier src-port source L4 port, 16 bit qualifier tcp-flags tcp-flags, 6 bit qualifier ttl-match Enable matching on TTL field udf1 user defined filter udf2 user defined filter udf3 user defined filter udf4 user defined filter udf5 user defined filter udf6 user defined filter udf7 user defined filter udf8 user defined filterDefining IPv6 UDK ACL FormatRP/0/RP0/CPU0#N55-24(config)#hw-module profile tcam format access-list ipv6 ? 
common-acl enable common-acl, 1 bit qualifier dst-addr destination address, 128 bit qualifier dst-port destination L4 Port, 16 bit qualifier enable-capture Enable ACL based mirroring (Included by default) enable-set-ttl Enable Setting TTL field (Included by default) interface-based Enable non-shared interface based ACL location Location of format access-list ipv6 config next-hdr next header, 8 bit qualifier (manditory field) payload-length payload length, 16 bit qualifier src-addr source address, 128 bit qualifier src-port source L4 Port, 16 bit qualifier (manditory field) tcp-flags tcp-flags, 8 bit qualifier traffic-class Traffic Class, 8 bit qualifier ttl-match Enable matching on TTL field udf1 user defined filter udf2 user defined filter udf3 user defined filter udf4 user defined filter udf5 user defined filter udf6 user defined filter udf7 user defined filter udf8 user defined filterExample hw-module profilehw-module profile tcam format access-list ipv4 src-addr dst-addr src-port dst-port packet-length frag-bit precedence port-rangehw-module profile tcam format access-list ipv6 src-addr src-port dst-addr dst-port next-hdr payload-lengthUDK TCAM SizeLet us configure an IPv4 ACL as belowRP/0/RP0/CPU0#N55-24#show access-lists ipv4 test-acl-v4-pkt-length Sun Aug 16 06#24#34.093 UTCipv4 access-list test-acl-v4-pkt-length 10 permit ipv4 any any packet-length range 800 831RP/0/RP0/CPU0#N55-24#show running-config int tenGigE 0/0/0/0.10Sun Aug 16 06#23#44.599 UTCinterface TenGigE0/0/0/0.10 description using it for ACL testing ipv4 address 60.1.1.1 255.255.255.0 ipv6 address 60##1/64 load-interval 30 encapsulation dot1q 10 ipv4 access-group test-acl-v4-pkt-length ingressRP/0/RP0/CPU0#N55-24#show controllers npu internaltcam location 0/0/CPU0 Sun Aug 16 06#26#57.373 UTCInternal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b pmf-0 1902 97 30 INGRESS_LPTS_IPV40 0 160b pmf-0 1902 10 36 INGRESS_RX_ISIS0 0 160b pmf-0 1902 23 46 INGRESS_QOS_IPV40 0 160b pmf-0 1902 15 48 INGRESS_QOS_MPLS0 0 160b pmf-0 1902 1 54 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 1 160b pmf-0 1996 52 49 INGRESS_QOS_L20 2 160b egress_acl 2031 17 17 EGRESS_QOS_MAP0 3 160b Free 2048 0 0 0 4\\5 320b pmf-0 1999 27 31 INGRESS_LPTS_IPV60 4\\5 320b pmf-0 1999 3 39 INGRESS_ACL_L3_IPV40 4\\5 320b pmf-0 1999 19 47 INGRESS_QOS_IPV6Above output shows a ingress ACL in the TCAM occupying the key space of 320 bits.Let us modify the hw-module profile format.hw-module profile tcam format access-list ipv4 src-addr dst-addr packet-lengthRP/0/RP0/CPU0#N55-24(config)#interface tenGigE 0/0/0/0.10 RP/0/RP0/CPU0#N55-24(config-subif)#ipv4 access-group test-acl-v4-pkt-length ingress RP/0/RP0/CPU0#N55-24(config-subif)#commit RP/0/RP0/CPU0#N55-24(config-subif)#endRP/0/RP0/CPU0#N55-24#show controllers npu internaltcam location 0/0/CPU0 Sun Aug 16 06#54#01.114 UTCInternal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b pmf-0 1902 97 30 INGRESS_LPTS_IPV40 0 160b pmf-0 1902 10 36 INGRESS_RX_ISIS0 0 160b pmf-0 1902 23 46 INGRESS_QOS_IPV40 0 160b pmf-0 1902 15 48 INGRESS_QOS_MPLS0 0 160b pmf-0 1902 1 54 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 1 160b pmf-0 1993 3 39 INGRESS_ACL_L3_IPV4 0 1 160b pmf-0 1993 52 49 INGRESS_QOS_L20 2 160b 
egress_acl 2031 17 17 EGRESS_QOS_MAP0 3 160b Free 2048 0 0 0 4\\5 320b pmf-0 2002 27 31 INGRESS_LPTS_IPV60 4\\5 320b pmf-0 2002 19 47 INGRESS_QOS_IPV6We can see after modifying the profile with a fewer keys the same ACL is occupying only 160 bits in the TCAM. This way users can define keys which can help optimize the TCAM resources.Note# Changing of hw-module profile format will require reload of the router or line card depending on fixed or modular chassis.Let us see an example of IPv6 ACL and TCAM entryhw-module profile tcam format access-list ipv6 src-addr src-port dst-addr next-hdripv6 access-list IPv6_ingress 10 permit ipv6 any anyRP/0/RP0/CPU0#N55-24(config)#interface tenGigE 0/0/0/0.10 RP/0/RP0/CPU0#N55-24(config-subif)#ipv6 access-group IPv6_ingress ingress RP/0/RP0/CPU0#N55-24(config-subif)#commit RP/0/RP0/CPU0#N55-24(config-subif)#endRP/0/RP0/CPU0#N55-24#show controllers npu internaltcam location 0/0/CPU0 Sun Aug 16 08#36#05.722 UTCInternal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b pmf-0 1902 97 30 INGRESS_LPTS_IPV40 0 160b pmf-0 1902 10 36 INGRESS_RX_ISIS0 0 160b pmf-0 1902 23 46 INGRESS_QOS_IPV40 0 160b pmf-0 1902 15 48 INGRESS_QOS_MPLS0 0 160b pmf-0 1902 1 54 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 1 160b pmf-0 1996 52 49 INGRESS_QOS_L20 2 160b egress_acl 2031 17 17 EGRESS_QOS_MAP0 3 160b Free 2048 0 0 0 4\\5 320b pmf-0 2002 27 31 INGRESS_LPTS_IPV60 4\\5 320b pmf-0 2002 19 47 INGRESS_QOS_IPV60 6\\7 320b pmf-0 2043 5 40 INGRESS_ACL_L3_IPV6As mentioned above, IPv6 ACL occupies 320 bits in the TCAM. We will see in later section how the size are calculated and are dependent on the configured UDK.UDK and Default-Key # Preference ?Consider below 2 simple IPv4 and IPv6 ACL’sipv4 access-list test-ipv4 10 permit ipv4 any anyipv6 access-list IPv6_ingress 10 permit ipv6 any anyRP/0/RP0/CPU0#N55-24(config)#interface tenGigE 0/0/0/0.10RP/0/RP0/CPU0#N55-24(config-subif)#ipv4 access-group test-ipv4 ingress RP/0/RP0/CPU0#N55-24(config-subif)#ipv6 access-group IPv6_ingress ingress RP/0/RP0/CPU0#N55-24(config-subif)#commit RP/0/RP0/CPU0#N55-24#show controllers npu internaltcam location 0/0/CPU0 Sun Aug 16 08#59#10.118 UTCInternal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b pmf-0 1902 97 30 INGRESS_LPTS_IPV40 0 160b pmf-0 1902 10 36 INGRESS_RX_ISIS0 0 160b pmf-0 1902 23 46 INGRESS_QOS_IPV40 0 160b pmf-0 1902 15 48 INGRESS_QOS_MPLS0 0 160b pmf-0 1902 1 54 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 1 160b pmf-0 1993 3 39 INGRESS_ACL_L3_IPV40 1 160b pmf-0 1993 52 49 INGRESS_QOS_L20 2 160b egress_acl 2031 17 17 EGRESS_QOS_MAP0 3 160b Free 2048 0 0 0 4\\5 320b pmf-0 2002 27 31 INGRESS_LPTS_IPV60 4\\5 320b pmf-0 2002 19 47 INGRESS_QOS_IPV60 6\\7 320b pmf-0 2035 13 40 INGRESS_ACL_L3_IPV6We could see the IPv4 ACL using 160 bits in the TCAM and IPv6 ACL using 320 bits.At this moment, only default key is being used.Let us add UDKhw-module profile tcam format access-list ipv4 src-addr dst-addr src-port dst-port proto packet-length frag-bit precedence port-rangeRP/0/RP0/CPU0#N55-24#show controllers npu internaltcam location 0/0/CPU0 Sun Aug 16 09#12#51.495 UTCInternal TCAM Resource Information=============================================================NPU Bank Entry Owner 
Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b pmf-0 1902 97 30 INGRESS_LPTS_IPV40 0 160b pmf-0 1902 10 36 INGRESS_RX_ISIS0 0 160b pmf-0 1902 23 46 INGRESS_QOS_IPV40 0 160b pmf-0 1902 15 48 INGRESS_QOS_MPLS0 0 160b pmf-0 1902 1 54 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 1 160b pmf-0 1996 52 49 INGRESS_QOS_L20 2 160b egress_acl 2031 17 17 EGRESS_QOS_MAP0 3 160b Free 2048 0 0 0 4\\5 320b pmf-0 1999 27 31 INGRESS_LPTS_IPV60 4\\5 320b pmf-0 1999 3 39 INGRESS_ACL_L3_IPV4The above output shows, the configured UDK is taking precendence over default TCAM key. We can see the same ACL now uses 320 bits TCAM size. The key size programmed is as per the UDK defined to accomodate the various keys. The UDK has src-addr dst-addr src-port dst-port proto packet-length frag-bit precedence port-range. As mentioned above, each key has a size which get programmed in the TCAM.src-addr 32 bitsdst-addr 32 bitssrc-port 16 bitsdst-port 16 bitsproto 8 bitspacket-length 10 bitsfrag-bit 3 bitsprecedence 8 bitsport-range 24 bitsTotal = 149 + ACL_ID (8 bits) and copy engines in TCAMSo the TCAM size needed to accomodate the UDK with these many keys needs to be more than 160 bits.Similarly let us check IPv6 ACLhw-module profile tcam format access-list ipv6 src-addr src-port dst-addr dst-port next-hdr payload-lengthRP/0/RP0/CPU0#N55-24(config)#interface tenGigE 0/0/0/0.10RP/0/RP0/CPU0#N55-24(config-subif)#ipv6 access-group IPv6_ingress ingress RP/0/RP0/CPU0#N55-24(config-subif)#commit Sun Aug 16 09#16#04.486 UTC% Failed to commit one or more configuration items during a pseudo-atomic operation. All changes made have been reverted. Please issue 'show configuration failed [inheritance]' from this session to view the errorsRP/0/RP0/CPU0#N55-24(config-subif)#show configuration failed Sun Aug 16 09#16#09.777 UTC!! SEMANTIC ERRORS# This configuration was rejected by !! the system due to semantic errors. The individual !! errors with each failed configuration command can be !! found below.interface TenGigE0/0/0/0.10 ipv6 access-group IPv6_ingress ingress!!% 'DPA' detected the 'warning' condition 'SDK - Table full'!endThe above output shows the key size is not able to accomodate in the TCAM and hence the ACL is rejected. The same ACL was getting applied if we didnt use a UDK. Let us see the reason of getting rejected.src-addr 128 bitsdst-addr 128 bitssrc-port 16 bitsdst-port 16 bitsnext-header 8 bitspayload-length 16 bitsTotal = 312 + ACL_ID (8 bits) and copy engines in TCAMAs we can see there is no space left for copy engines and TCAM space is almost full with the defined keys itself. So the users need to define the UDK carefully, one for the ACL to be configurable and second to utilize the TCAM resources wisely.How many UDK you can configure ?You can configure only one UDK per location. 
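Before moving on, the key-width arithmetic walked through in the previous section can be reproduced with a few lines of Python (qualifier widths as quoted in the worked totals above; the copy-engine overhead is not modelled):

IPV4_UDK = {"src-addr": 32, "dst-addr": 32, "src-port": 16, "dst-port": 16,
            "proto": 8, "packet-length": 10, "frag-bit": 3, "precedence": 8,
            "port-range": 24}
IPV6_UDK = {"src-addr": 128, "dst-addr": 128, "src-port": 16, "dst-port": 16,
            "next-hdr": 8, "payload-length": 16}
ACL_ID = 8   # added to every key

for name, udk in (("IPv4 UDK", IPV4_UDK), ("IPv6 UDK", IPV6_UDK)):
    print(f"{name}: {sum(udk.values()) + ACL_ID} bits before copy-engine overhead")

# IPv4 UDK: 157 bits -> once copy engines are added it exceeds 160b, so a 320b key is used
# IPv6 UDK: 320 bits -> no room left for copy engines even in a 320b key, so the ACL is rejected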
If you try to configure another UDK, when one already exist for that location it will be overridden.For example, we have this existing UDKhw-module profile tcam format access-list ipv4 src-addr dst-addr src-port dst-port proto packet-length frag-bit precedence port-rangehw-module profile tcam format access-list ipv6 src-addr src-port dst-addr dst-port next-hdr payload-lengthConfiguring another UDK for the same locationRP/0/RP0/CPU0#N55-24(config)#hw-module profile tcam format access-list ipv4 src-addr dst-addr packet-length Sun Aug 16 09#29#53.771 UTCIn order to activate/deactivate this ipv4 profile, you must manually reload the chassis/all line cardsRP/0/RP0/CPU0#N55-24(config)#hw-module profile tcam format access-list ipv6 src-addr dst-addr Sun Aug 16 09#30#16.648 UTCIn order to activate/deactivate this ipv6 profile, you must manually reload the chassis/all line cardsRP/0/RP0/CPU0#N55-24(config)#commit Sun Aug 16 09#30#22.250 UTCRP/0/RP0/CPU0#N55-24(config)#It overrides the previous UDK after reloadhw-module profile tcam format access-list ipv4 src-addr dst-addr packet-lengthhw-module profile tcam format access-list ipv6 src-addr src-port dst-addr next-hdrConfiguring two different ACL’s on different interface using same global UDKRP/0/RP0/CPU0#N55-24#show access-lists ipv4 usage pfilter location all Sun Aug 16 13#42#36.260 UTCInterface # TenGigE0/0/0/0.10 Input ACL # Common-ACL # N/A ACL # test-ipv4 Output ACL # N/AInterface # TenGigE0/0/0/0.20 Input ACL # Common-ACL # N/A ACL # test-acl-v4-pkt-length Output ACL # N/ARP/0/RP0/CPU0#N55-24#RP/0/RP0/CPU0#N55-24#show controllers npu internaltcam location 0/0/CPU0 Sun Aug 16 13#08#36.481 UTCInternal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b pmf-0 1897 102 30 INGRESS_LPTS_IPV40 0 160b pmf-0 1897 10 36 INGRESS_RX_ISIS0 0 160b pmf-0 1897 23 46 INGRESS_QOS_IPV40 0 160b pmf-0 1897 15 48 INGRESS_QOS_MPLS0 0 160b pmf-0 1897 1 54 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 1 160b pmf-0 1991 5 39 INGRESS_ACL_L3_IPV4From the above output, we can see there are 2 different ACL’s applied and TCAM size occupied @160 bitsGlobal or LC Specific UDK # Preference ?In the above sections, we saw how a global UDK when defined, takes precedence over default key.What happens when we define a Line Card specific UDK along with Global UDK ?Let us see with the help of an example. We have a modular chassis with Line Card present in slot 1RP/0/RP0/CPU0#N55-38#sho platform Mon Aug 17 13#11#50.976 UTCNode Type State Config state--------------------------------------------------------------------------------0/0/1 NC55-MPA-2TH-S DISABLED 0/0/CPU0 NC55-MOD-A-S IOS XR RUN NSHUT0/0/NPU0 Slice UP 0/1/CPU0 NC55-18H18F IOS XR RUN NSHUT0/1/NPU0 Slice UP 0/1/NPU1 Slice UP 0/1/NPU2 Slice UP 0/RP0/CPU0 NC55-RP-E(Active) IOS XR RUN NSHUT0/FC0 NC55-5504-FC OPERATIONAL NSHUT0/FC1 NC55-5504-FC OPERATIONAL NSHUT0/FC2 NC55-5504-FC OPERATIONAL NSHUT0/FC3 NC55-5504-FC OPERATIONAL NSHUT0/FC4 NC55-5504-FC OPERATIONAL NSHUT0/FC5 NC55-5504-FC OPERATIONAL NSHUT0/FT0 NC55-5504-FAN OPERATIONAL NSHUT0/FT1 NC55-5504-FAN OPERATIONAL NSHUT0/FT2 NC55-5504-FAN OPERATIONAL NSHUT0/PM0 NC55-PWR-3KW-AC OPERATIONAL NSHUT0/PM2 NC55-PWR-3KW-AC OPERATIONAL NSHUT0/SC0 NC55-SC OPERATIONAL NSHUT0/SC1 NC55-SC OPERATIONAL NSHUTRP/0/RP0/CPU0#N55-38#UDK configured for 2 different location. 
If we dont specify the location it is considered 0/0/CPU0hw-module profile tcam format access-list ipv4 src-addr dst-addr src-port dst-port frag-bit location 0/0/CPU0hw-module profile tcam format access-list ipv4 src-addr dst-addr src-port dst-port packet-length frag-bit location 0/1/CPU0The UDK for location 0/0/CPU doesnt include the key packet-length. The UDK for location 0/1/CPU0 includes that key. Let us apply the below policy on interfaces corresponding to those locationsipv4 access-list test-acl-v4-pkt-length 10 permit ipv4 any any packet-length range 800 831Applying the ACL on a interface at location 0/0/CPU0, we get the below errorRP/0/RP0/CPU0#N55-38(config)#interface tenGigE 0/0/0/1 RP/0/RP0/CPU0#N55-38(config-if)#ipv4 access-group test-acl-v4-pkt-length ingressRP/0/RP0/CPU0#N55-38(config-if)#commit Mon Aug 17 13#22#35.250 UTCLC/0/0/CPU0#Aug 17 13#22#35.332 UTC# pfilter_ea[146]# %PKT_INFRA-DPA_FM-3-USER_DEF_TCAM_KEY_PARAM_MISSING # ACL test-acl-v4-pkt-length, dir 0, seq 10, IPv4, 'dpa_feat_mgr' detected the 'warning' condition 'Parameter not programmed on ACL TCAM UDK (User Defined Key), check syslog for more details'# Packet Length % Failed to commit one or more configuration items during a pseudo-atomic operation. All changes made have been reverted. Please issue 'show configuration failed [inheritance]' from this session to view the errorsRP/0/RP0/CPU0#N55-38(config-if)#show configuration failed Mon Aug 17 13#22#56.445 UTC!! SEMANTIC ERRORS# This configuration was rejected by !! the system due to semantic errors. The individual !! errors with each failed configuration command can be !! found below.interface TenGigE0/0/0/1 ipv4 access-group test-acl-v4-pkt-length ingress!!% 'dpa_feat_mgr' detected the 'warning' condition 'Parameter not programmed on ACL TCAM UDK (User Defined Key), check syslog for more details'!endApplying the ACL on interface at location 0/1/CPU0.RP/0/RP0/CPU0#N55-38#show running-config interface hundredGigE 0/1/0/6Mon Aug 17 13#29#46.432 UTCinterface HundredGigE0/1/0/6 ipv4 address 106.1.1.1 255.255.255.0 ipv4 access-group test-acl-v4-pkt-length ingress!RP/0/RP0/CPU0#N55-38# show access-lists ipv4 test-acl-v4-pkt-length hardware ingress verify location 0/0/CPO</mark>Mon Aug 17 14#29#49.576 UTCInvalid ACL name or not attached in specified direction/interfaceRP/0/RP0/CPU0#N55-38# show access-lists ipv4 test-acl-v4-pkt-length hardware i$Mon Aug 17 14#29#59.391 UTCVerifying TCAM entries for test-acl-v4-pkt-lengthPlease wait... 
INTF NPU lookup ACL # intf Total compression Total result failed(Entry) TCAM entries type ID shared ACES prefix-type Entries ACE SEQ # verified ---------- --- ------- --- ------ ------ ----------- ------- ------ ------------- ------------ HundredGigE0_1_0_6 (ifhandle# 0x8000a8) 0 IPV4 2 1 1 NONE 2 passed 2RP/0/RP0/CPU0#N55-38#show controllers npu internaltcam location 0/1/CPU0Mon Aug 17 14#28#03.398 UTCInternal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b flp-tcam 2045 0 0 0 1 160b pmf-0 1993 38 30 INGRESS_LPTS_IPV40 1 160b pmf-0 1993 12 36 INGRESS_RX_ISIS0 1 160b pmf-0 1993 2 46 INGRESS_QOS_IPV40 1 160b pmf-0 1993 2 48 INGRESS_QOS_MPLS0 1 160b pmf-0 1993 1 54 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 2 160b pmf-0 2036 3 39 INGRESS_ACL_L3_IPV4Summary of TCAM key Precedence Precedence Order Line Card-specific UDK (if defined) Global UDK (if defined) Default TCAM Key ReferenceCCO Config GuideSummaryIn this document, we covered the details of User Defined Key - UDK for NCS55xx and NCS5xx. We also saw the advantages of using the UDK, particularly optimize the valuable TCAM resources. How the UDK will take precedence when configured, over the default key. One thing to note is UDK can be used with the keys which are already defined. What if user wants to define their own fields and match against that.Stay tuned for the next document on NCS55xx and NCS5xx UDF which will cover, how we can define fields and match the traffic against the same and apply action on it.", "url": "/tutorials/user-defined-key-udk-for-ncs55xx-and-ncs5xx/", "author": "Tejas Lad", "tags": "NCS5500, NCS500, ACL, UDK, User Defined Key, NCS55xx" } , "tutorials-ncs-5700-400g-optics-technology-and-roadmap": { "title": "NCS-5700 400G Optics Technology and Roadmap", "content": " On This Page NCS-5700 400G Optics Technology and Roadmap Introducing QSFP-DD MSA Number of Serdes 50G Serdes Support PAM4 Encoding Support RS-FEC RS(544,514) Support High Power Support Flexible Breakout Support 400G Optics Technology Evolution 50G and 100G Wavelengths Evolution Distance and Fiber Types Bidirectional Optics Support Parallel Fiber and Breakout Support Timeline for 400G Optics Standards Development NCS-5700 400G Optics Support and Roadmap NCS-5700 QSFP-DD 400G Optics with 50G/100G PAM4 Wavelengths Support and Roadmap NCS-5700 QSFP28-DD 2x100G Optics with 25G NRZ Wavelengths Support and Roadmap NCS-5700 QSFP28 100G Optics with New Generation 50G/100G PAM4 Wavelengths Support and Roadmap Summary of Optics PIDs and Roadmap NCS-5700 400G Optics Technology and RoadmapNCS-5700 is the first platform in the NCS-5500 Series that supports 400G optics, which is introduced in a previous tutorial#Introducing 400GE on NCS5500 SeriesThis tutorial will discuss in more details the 400G optics technology and roadmap for the NCS-5700 series.A follow up tutorial will discuss the 400G optics and breakout options supported for each port type on the NCS-5700 series linecards, NC57-24DD and NC57-18DD-SE.Introducing QSFP-DD MSAQSFP-DD is the leading form factor for 400G optics, promoted by QSFP-DD MSA. 
Members include Cisco and other major suppliers.QSFP+ and QSFP28 are the de-facto standard for high density 40G and 100G optics.QSFP-DD is fully compatible with QSFP+ and QSFP28, and so maintains the same port density.The QSFP-DD MSA published its Revision 1.0 specifications in Sep 2016, and it is at its Revision 5.0 as of Jul 2019.Number of SerdesQSFP-DD adds a second row of pins and increases the number of serdes from 4 to 8, hence the name double-density.It therefore supports optics up to 8 electrical lanes.50G Serdes SupportQSFP-DD increases the maximum serdes speed from 25G to 50G, therefore supporting a maximum speed of 8x50G = 400G aggregate.For backward compatibility, it also supports 10G and 25G serdes speed, thus allowing all possible aggregate speeds of 40G, 100G, 200G and 400G.Optics of all form factors, such as QSFP+, QSFP28, QSFP56, QSFP28-DD and QSFP56-DD are supported by QSFP-DD.PAM4 Encoding SupportQSFP-DD supports PAM4 encoding for 50G serdes, while maintaining the use of NRZ encoding for 10G/25G serdes.PAM4 allows 2 bits per baud, so will double the serdes speed from 25G to 50G, while maintaining the same 25 GBaud rate.RS-FEC RS(544,514) SupportQSFP-DD supports RS-FEC (Clause 91) RS(544,514) for 50G serdes, while maintaining the use of RS(528,514) for 25G serdes.RS(528,514) is sometimes called the KR4 FEC, and RS(544,514) KP4 FEC.RS(544,514) will provide a stronger FEC for use with higher speed serdes, and hence require a slightly higher overhead.RS(528,514) is running at 25.78125 GBaud/s (25.78125 Gbit/s), and RS(544,514) is running at 26.5625 GBaud/s (53.125 Gbit/s)High Power SupportQSFP-DD supports high power optics, therefore ready for coherent optics such as 400G-ZR and 400G-ZR+.At OFC 2019, Cisco Demonstrate 20W+ power dissipation of QSFP-DD.https#//blogs.cisco.com/sp/cisco-demonstrates-20w-power-dissipation-of-qsfp-dd-at-ofc-2019https#//www.lightwaveonline.com/optical-tech/transmission/article/14036073/qsfpdd-msa-releases-common-management-interface-40-and-hardware-specification-50Flexible Breakout SupportThe wide range of combination of serdes number and speeds provides very flexible breakout options#     QSFP56-DD         QSFP28-DD         QSFP56         QSFP28         QSFP+     400G 200G 200G 100G 40G 2x200G 2x100G 2x100G 4x25G 4x10G 4x100G 8x25G 4x50G     8x50G         400G Optics Technology EvolutionFor early QSFP28 100G optics, majority are using 4x 25G wavelengths with NRZ encoding.50G and 100G Wavelengths EvolutionWith evolution to 200G/400G optics, there is a need for higher speed wavelengths in the optical domain, such as 50G/100G in order to reduce lasers cost and complexity. 25G wavelengths are supported using NRZ encoding with 25 GBaud and RS-FEC RS(528,514). 16x 25G wavelengths are required for an aggregate 400G speed. 50G wavelengths are supported using PAM4 encoding with 25 GBaud and RS-FEC RS(544,514). 8x 50G wavelengths are required for an aggregate 400G speed. 100G wavelengths are supported using PAM4 encoding with 50 GBaud and RS-FEC RS(544,514). An even lower number of wavelengths, 4x, are required for an aggregate 400G speed. Distance and Fiber TypesGenerally the 400G optics distance support will be dependent on the type of cables or fibers used. For very short distances up to a few meters, usually copper cables or active optical cables will be most cost effective. For short distances up to about 100m, usually multimode fibers (MMF) with 850nm wavelength are deployed for lower cost. 
Furthermore, parallel fibers, each with single wavelength, are used to reduce optics complexity. For medium distances to about 500m, usually parallel single mode fibers (SMF) with 1310nm wavelength are deployed. For longer distances 2km and beyond, usually a single duplex SMF fiber pair is deployed with 1310nm wavelengths to minimize fiber numbers and cost. Multiple wavelengths are multiplexed into a single SMF using WDM technologies, such as CWDM or LWDM. LWDM is higher cost and usually for longer distances. To reach even longer distances like 80km and above, usually Coherent Detection optics with 1550nm wavelength are used, which have their own specific encoding and FEC, and have single tunable wavelength at speed 100G or 400G. These wavelengths may even be transported over long haul DWDM systems and reach 1000’s of km’s. Bidirectional Optics SupportBidirectional optics can save half the number of fibers, as each fiber supports 2 wavelengths, one in each direction.For example, 400GBase-SR8 requires 8 pairs of MMF, total 16 fibers. Each fiber supports one wavelength in one direction, total 16 wavelengths.In case of bidirectional 400GBase-SR4.2, it only requires 4 pairs of MMF, total 8 fibers. Each fiber run 2 wavelengths, one in each direction, total also 16 wavelengths.WDM technology is required for the multiplexing, such as SWDM in the case of 400GBase-SR4.2 and 100G-SWDM2.Parallel Fiber and Breakout SupportGenerally, parallel fibers have additional advantage of supporting breakout from 400G to multiple lower speed optics for more flexible deployment.Any types of parallel cables, MMF or SMF can support breakout, and usually each piece of fiber will support one wavelength.For example, 400GBase-DR4 have 4 MMF fibers, each with one 100G wavelength, therefore it could support 4x100G breakout.Timeline for 400G Optics Standards DevelopmentThis is a brief summary of the latest optics standards with 50G, 100G or 400G wavelengths.Most optics standards are from IEEE 802.3 Ethernet Standards Committee.However, in some areas, various MSA standards will also provide important supplement to the IEEE standards, and we have included some of them below. 
Date Specs Standard Speed W/L Distance Cable Freq Type Breakout WDM 2017 Dec IEEE 802.3bs 400GBase-SR16 400G 25G 100m MMF OM4 850nm Parallel Y       400GBase-DR4 400G 100G 500m SMF 1310nm Parallel Y       400GBase-FR8 400G 50G 2km SMF 1310nm Duplex N LWDM     400GBase-LR8 400G 50G 10km SMF 1310nm Duplex N LWDM     200GBase-DR4 200G 50G 500m SMF 1310nm Parallel Y       200GBase-FR4 200G 50G 2km SMF 1310nm Duplex N CWDM     200GBase-LR4 200G 50G 10km SMF 1310nm Duplex N LWDM 2018 Jun MSA ETC 400GBase-KR8 400G 50G 1m Backplane   Parallel Y       400GBase-CR8 400G 50G 5m Copper   Parallel Y   2018 Sep MSA 100G LD 100G-FR 100G 100G 2km SMF 1310nm Duplex N       100G-LR 100G 100G 10km SMF 1310nm Duplex N       400G-FR4 400G 100G 2km SMF 1310nm Duplex N CWDM     400G-LR4-10 400G 100G 10km SMF 1310nm Duplex N CWDM 2018 Dec IEEE 802.3cd 200GBase-KR4 200G 50G 1m Backplane   Parallel Y       200GBase-CR4 200G 50G 5m Copper   Parallel Y       200GBase-SR4 200G 50G 100m MMF OM4 850nm Parallel Y       100GBase-KR2 100G 50G 1m Backplane   Parallel Y       100GBase-CR2 100G 50G 5m Copper   Parallel Y       100GBase-SR2 100G 50G 100m MMF OM4 850nm Parallel Y       100GBase-DR 100G 100G 500m SMF 1310nm Duplex N   2019 Mar MSA SWDM 100G-SWDM2 100G 50G 100m MMF OM4 850nm Duplex N SWDM Bidi 2019 Nov IEEE 802.3cn 400GBase-ER8 400G 50G 40km SMF 1310nm Duplex N LWDM     200GBase-ER4 200G 50G 40km SMF 1310nm Duplex N LWDM 2020 Jan IEEE 802.3cm 400GBase-SR8 400G 50G 100m MMF OM4 850nm Parallel Y       400GBase-SR4.2 400G 50G 100m MMF OM4 850nm Parallel Y SWDM Bidi 2020 Mar MSA OIF 400ZR 400G 400G <120km SMF 1550nm Duplex Muxponder Coherent 2020 End IEEE P802.3cu 400GBase-FR4 400G 100G 2km SMF 1310nm Duplex N CWDM     400GBase-LR4-6 400G 100G 6km SMF 1310nm Duplex N CWDM 2020 End MSA OpenZR+ 400G-ZR+ 400G 400G >120km SMF 1550nm Duplex Muxponder Coherent 2021 End IEEE P802.3ck 400GBase-CR4 400G 100G 5m Copper   Parallel Y       200GBase-CR2 200G 100G 5m Copper   Parallel Y       100GBase-CR 100G 100G 5m Copper   Duplex N   2021 End IEEE P802.3ct 100GBase-ZR 100G 100G 80km SMF 1550nm Duplex N Coherent 2022 Mid IEEE P802.3cw 400GBase-ZR 400G 400G 80km SMF 1550nm Duplex N Coherent 2022 Mid IEEE P802.3db 400GBase-SR4 400G 100G 50m MMF OM4 850nm Parallel Y       200GBase-SR2 200G 100G 50m MMF OM4 850nm Parallel y       100GBase-SR 100G 100G 50m MMF OM4 850nm Duplex N   NCS-5700 400G Optics Support and RoadmapThe whole new generation of 400G optics support will be introduced in phases for NCS-5700.NCS-5700 QSFP-DD 400G Optics with 50G/100G PAM4 Wavelengths Support and RoadmapThis section will cover all 400G optics with new generation 50G and 100G wavelengths, and using QSFP-DD form factor, that is available or under roadmap. More optics will be planned as and when they become available.For ease of visualizing the use case of each optics, we have organized the optics into three major categories, each with varying distances and breakout capabilities. Cables# Usually comes in passive copper Direct Attach Cables (DAC) for a few meters, or active optical cables (AOC) up to 30m. It could be direct, or breakout like 8x50 or 4x100. Multimode Fibers# Usually Parallel MMF up to about 100m for OM4, and capable of breakout. Single mode Fibers# Could be parallel SMF or duplex SMF. Parallel SMF is capable of breakout, and Duplex SMF will need WDM for multiplexing multiple wavelengths into a single SMF. Please note for the supported 4x100G breakout, the remote end need to be a new generation 100G optics supporting 100G PAM4 wavelength. 
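The lane counts behind these aggregate speeds follow directly from the per-wavelength rates discussed earlier in this tutorial. A quick sketch of the arithmetic (baud rates taken from the FEC discussion above; the 100G PAM4 figure is simply the 50G lane rate doubled):

def lane_rate_gbps(gbaud, bits_per_symbol):
    # Line rate of a single electrical/optical lane
    return gbaud * bits_per_symbol

print(lane_rate_gbps(25.78125, 1))   # 25G NRZ lane   -> 25.78125 Gbit/s (KR4 FEC)
print(lane_rate_gbps(26.5625, 2))    # 50G PAM4 lane  -> 53.125 Gbit/s   (KP4 FEC)
print(lane_rate_gbps(53.125, 2))     # 100G PAM4 lane -> 106.25 Gbit/s

# Wavelengths needed for a 400G aggregate (nominal payload rates)
for lane in (25, 50, 100):
    print(f"400G over {lane}G wavelengths: {400 // lane}")
# -> 16, 8 and 4 wavelengths respectively, matching the evolution described above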
For example, 400GBase-DR4 supports 4x100G breakout to 4 remote 100G-DR/FR/LR.Below charts will show currently available optics as of IOS XR 7.0.2. Releases for optics in roadmap will be announced when they become available.NCS-5700 QSFP28-DD 2x100G Optics with 25G NRZ Wavelengths Support and RoadmapIn current deployments, there are a lot of current generation QSFP28 optics with 4x 25G NRZ wavelengths.There is therefore a need for higher density support of these optics on the new NCS-5700 linecards, in order to be backward compatible with currently deployed QSFP28 optics.Current technology could support packaging 2 current generation QSFP28 optics in a single QSFP-DD form factor, therefore we will support a new generation of high density 2x100G optics with QSFP28-DD on the NCS-5700.For ease of visualizing the use case of each optics, we have organized the optics into two major categories, each with varying distances and breakout capabilities. Multimode Fibers# Usually Parallel MMF up to about 100m for OM4, and capable of breakout. Single mode Fibers# Could be parallel SMF or duplex SMF. Parallel SMF is capable of breakout, and Duplex SMF will need WDM for multiplexing multiple wavelengths into a single SMF. Below charts will show currently available optics as of IOS XR 7.0.2. Releases for optics in roadmap will be announced when they become available.As there is limited face plate space on the QSFP-DD package, the 2x100G dual optics will require higher density connectors, such as MPO-24 for parallel optics, and Dual Duplex CS Connectors for duplex optics.NCS-5700 QSFP28 100G Optics with New Generation 50G/100G PAM4 Wavelengths Support and RoadmapAs we gradually migrate to 400G optics, with new generation 50G/100G wavelengths, we also need the currently deployed 100GE ports to migrate to new generation optics with 50G/100G wavelengths.This section will cover all 100G optics with new generation 50G and 100G wavelengths using QSFP28 form factor.Exception will be 100G-ZR coherent optics, which requires higher power, so will only be supported on QSFP-DD form factor.Below charts will show currently available optics as of IOS XR 7.0.2. Releases for optics in roadmap will be announced when they become available.Summary of Optics PIDs and RoadmapThis is a summary table for 400G and related 100G optics availability and roadmap for NCS-5700 Linecards, NC55-24DD and NC57-18DD-SE for your reference, valid as of currently available IOS XR 7.0.2 release. 
PID Description Distance Target FCS QDD-400-CUxM 400G QSFP-DD to QSFP-DD Passive Copper Cable 1/2 m x m 7.0.2 QDD-400G-DR4-S 400G QSFP-DD Transceiver, 400GBASE-DR4, MPO-12, SMF 500 m 7.0.2   Can be used as 4x100G breakout to QSFP-100G-FR/LR-S 500 m Roadmap QDD-400G-FR4-S 400G QSFP-DD Transceiver, 400GBASE-FR4, Duplex LC, SMF 2 km 7.0.2 QDD-400G-LR8-S 400G QSFP-DD Transceiver, 400GBASE-LR8, Duplex LC, SMF 10 km 7.0.2 QDD-2X100-LR4-S 2x100G QSFP-DD Transceiver, 2x100GBASE-LR4, Dual Duplex CS, SMF 10 km 7.0.2 QDD-2X100-CWDM4-S 2x100G QSFP-DD Transceiver, 2x100G-CWDM4, Dual Duplex CS, SMF 2 km Roadmap QDD-2X100-SR4-S 2x100G QSFP-DD Transceiver, 2x100GBASE-SR4, MPO-24, MMF, OM4 100 m Roadmap QDD-2x100-PSM4-S 2x100G QSFP-DD Transceiver, 2x100G-PSM4, MPO-24, SMF 500 m Roadmap QSFP-40/100-SRBD Dual Rate QSFP28 Transceiver, 100G-SWDM2 Bidi , Duplex LC, MMF, OM4 100 m Roadmap QSFP-100G-DR-S 100G QSFP28 Transceiver, 100GBASAE-DR, SMF, Duplex LC, SMF 500 m Roadmap QSFP-100G-FR-S 100G QSFP28 Transceiver, 100GBASAE-FR, SMF, Duplex LC, SMF 2 km Roadmap QSFP-100G-LR-S 100G QSFP28 Transceiver, 100GBASAE-LR, SMF, Duplex LC, SMF 10 km Roadmap QDD-4x100G-FR 4x100G QSFP-DD Transceiver, 4x100GBASE-FR, MPO-12, SMF 2 km Roadmap   Can be used as 4x100G breakout to QSFP-100G-FR/LR-S 2 km Roadmap QDD-4x100G-LR 4x100G QSFP-DD Transceiver, 4x100GBASE-LR, MPO-12, SMF 10 km Roadmap   Can be used as 4x100G breakout to QSFP-100G-LR-S 10 km Roadmap QDD-400G-SR4-BD 400G QSFP-DD Transceiver, 400GBASE-SR4.2, MPO-12, MMF, OM4 100 m Roadmap   Can be used as 4x100G breakout to 100G-SWDM2 BiDi 100 m Roadmap QDD-400G-SR8-S 400G QSFP-DD Transceiver, 400GBASE-SR8, MPO-16, MMF, OM4 100 m Roadmap   Can be used as 8x50G breakout to 50GBASE-SR 100 m Roadmap QDD-400G-LR4-S 400G QSFP-DD Transceiver, 400G-LR4, SMF, Duplex LC, SMF 10 km Roadmap QDD-400G-ZR-S 400G QSFP-DD Transceiver, OIF 400ZR, Tunable Coherent, Duplex LC, SMF (requires EDFA for links in excess of 40km) 80 km+ (120 km max) Roadmap QDD-400G-ZRP-S 100G/200G/300G/400G QSFP-DD Transceiver, 400G-ZR+, Tunable Coherent Duplex LC, SMF (requires EDFA for links in excess of 40km - 400G) 1200 km+ Roadmap QDD-100G-ZR 100G QSFP-DD Transceiver, Tunable Coherent, Duplex LC, SMF 80 km+ (120 km max) Roadmap ", "url": "/tutorials/ncs-5700-400g-optics-technology-and-roadmap/", "author": "Vincent Ng", "tags": "NCS-5700, 400G, Optics, IOS-XR, NCS-5500, Cisco" } , "tutorials-bgp-evpn-based-port-active-multihoming": { "title": "BGP EVPN based Port Active MultiHoming", "content": " On This Page Implementation of BGP-EVPN based Port-Active Multi-Homing Reference Topology Task 1# Configure Ethernet bundle on Host-1 for multi-homing Task 2# Configure EVPN based port-active multi-homing Task 3# Configure BGP EVPN based layer-2 multipoint service Task 4# Verify that EVPN based Port-active multi-homing is operational Task 5# Configure and Verify BGP-EVPN Distributed Anycast Gateway for IRB service Implementation of BGP-EVPN based Port-Active Multi-HomingIn port-active multi-homing, a host/CE is multihomed to one or more Leaf/PEs and only one of the Leaf is active and forwards the traffic to and from the connected hosts. The rest of the Leaf remain in standby mode. Thus these mode offers an active-standby PE/Leaf redundancy for multihomed host/CE.In this post we will cover the BGP-EVPN based Port-Active Multi-Homing of CE/Hosts. Similar to All active or Single active mode, Ethernet Segment Identifier (ESI) is used to identify the links towards the same multihomed Host. 
Port-active offers active/standby redundant connectivity with forwarding for all traffic on a single link at a time with switchover to the second link in case of active link’s failure. Port-Active load balancing mode keeps only one link towards the host as active and rest of the link stays in LACP standby mode, thus creating a complete active standby multihoming for the connected host/CEs. This is useful when we need protocol simplification from the host network.Reference TopologyFor this post, we will leverage EVPN control-plane and ISIS Segment Routing based forwarding that we configured in a previous post. However, the choice of transport is not mandatorily ISIS+SR and we can have OSPF as IGP and LDP instead of SR as well.As shown in the above topology, Host-1 is multi-homed to Leaf-1 and Leaf-2. For EVPN port multi-homing, the link towards the Leaf will be a single ethernet bundle interface. This bundle may operate with different VLANs for different services. EVPN port-active mode at the leaf1 and leaf2 will elect only one leaf as the active node and the bundle on that leaf will be in active state. The bundle on the other leaf will move to standby state and signal LACP out of service towards the host. As a result all traffic from the host H-1 will be able to forward the traffic only towards the active lacp link to achieve port active redundancy for multihoming. The election of active Leaf is similar operation like all active DF election, however in this case the election happens based on the ethernet segment identifier.Task 1# Configure Ethernet bundle on Host-1 for multi-homingAs per the reference topology Host-1 is multi-homed to Leaf-1 and Leaf-2 via LACP bundle-ethernet 1 going to both Leaf-1 and Leaf-2. The host/CE with IP address 10.0.0.10/24 configured on a vlan sub interface on the bundle. . Following is the configuration of LAG on Host-1.The LAG on Host-1 will come up after we configure lacp and port-active multi-homing using EVPN Ether-Segment on the Leaf-1 and Leaf-2.Host-1#interface Bundle-Ether 1description ~Bundle to Leaf-1~!interface TenGigE0/0/2/0description ~Link to Leaf-1 ten0/0/0/47~bundle id 1 mode active!interface TenGigE0/0/2/1description ~Link to Leaf-2 ten0/0/0/47~bundle id 1 mode active!interface Bundle-Ether1.10encapsulation dot1q 10ipv4 address 10.0.0.10 255.255.255.0!Task 2# Configure EVPN based port-active multi-homingConfigure Leaf-1 and Leaf-2 to provision port-active multi-homing to host-1. The set of links from Host-1 to the Leafs will be configured as the same Ethernet Segment on the Leafs.Configure the LACP bundles on the Leaf-1 and Leaf-2. Use below configuration for the Leafs.Leaf-1#interface TenGigE0/0/0/47description ~Link to Host-1~bundle id 1 mode active!interface Bundle-Ether1 description ~Bundle to Host-1 for port-active~ lacp system mac 1212.1212.1212Leaf-2interface TenGigE0/0/0/47description ~Link to Host-1~bundle id 1 mode active!interface Bundle-Ether1 description ~Bundle to Host-1 for port-active~ lacp system mac 1212.1212.1212!Configure ESI for the bundle interface to enable multi-homing of the host. Use the identical ethernet-segment configuration on both the Leafs. Configure load-balancing mode to port-active using “port-active” keyword for ethernet-segment.Note# The configured ESI will be used for the selection of active port. 
Out of the 10 octet ESI, a modulo operation is performed on octet 3-6 to elect the active leaf/PELeaf-1 and leaf 2evpn interface Bundle-Ether1 ethernet-segment identifier type 0 12.12.12.12.12.12.12.12.12 load-balancing-mode port-activeUse “show bundle bundle-ether” CLI command to verify the state of the bundle interfaces on Leafs and Host-1.RP/0/RP0/CPU0#Leaf-1#show bundle bundle-ether 1Bundle-Ether1 Status# Up Local links &lt active/standby/configured &gt # 1 / 0 / 1 Local bandwidth &lt effective/available&gt # 10000000 (10000000) kbps MAC address (source)# 00bc.601c.d0d9 (Chassis pool) Inter-chassis link# No Minimum active links / bandwidth# 1 / 1 kbps Maximum active links# 64 Wait while timer# 2000 ms Load balancing# Link order signaling# Not configured Hash type# Default Locality threshold# None LACP# Operational Flap suppression timer# Off Cisco extensions# Disabled Non-revertive# Disabled mLACP# Not configured IPv4 BFD# Not configured IPv6 BFD# Not configured Port Device State Port ID B/W, kbps -------------------- --------------- ----------- -------------- ---------- Te0/0/0/47 Local Active 0x8000, 0x0001 10000000 Link is ActiveRP/0/RP0/CPU0#Leaf-2#show bundle bundle-ether 1Bundle-Ether1 Status# LACP OOS (out of service) Local links &lt active/standby/configured &gt # 0 / 1 / 1 Local bandwidth &lt effective/available &gt # 0 (0) kbps MAC address (source)# 00bc.600e.40dc (Chassis pool) Inter-chassis link# No Minimum active links / bandwidth# 1 / 1 kbps Maximum active links# 64 Wait while timer# 2000 ms Load balancing# Link order signaling# Not configured Hash type# Default Locality threshold# None LACP# Operational Flap suppression timer# Off Cisco extensions# Disabled Non-revertive# Disabled mLACP# Not configured IPv4 BFD# Not configured IPv6 BFD# Not configured Port Device State Port ID B/W, kbps -------------------- --------------- ----------- -------------- ---------- Te0/0/0/47 Local Standby 0x8000, 0x0001 10000000 Link is in standby due to bundle out of service stateAlso, verify the port-active operation making one leaf active and one leaf standby by verifying the status of the ethernet segment on each PELEAF1#RP/0/RP0/CPU0#Leaf-1# sh evpn ethernet-segment interface bundle-Ether 1 detailLegend# B - No Forwarders EVPN-enabled, C - Backbone Source MAC missing (PBB-EVPN), RT - ES-Import Route Target missing, E - ESI missing, H - Interface handle missing, I - Name (Interface or Virtual Access) missing, M - Interface in Down state, O - BGP End of Download missing, P - Interface already Access Protected, Pf - Interface forced single-homed, R - BGP RID not received, S - Interface in redundancy standby state, X - ESI-extracted MAC Conflict SHG - No local split-horizon-group label allocatedEthernet Segment Id Interface Nexthops ------------------------ ---------------------------------- --------------------0012.1212.1212.1212.1212 BE1 1.1.1.1 2.2.2.2 ES to BGP Gates # Ready ES to L2FIB Gates # Ready Main port # Interface name # Bundle-Ether1 Interface MAC # 00bc.601c.d0d9 IfHandle # 0x08004034 State # Up Redundancy # Not Defined ESI type # 0 Value # 12.1212.1212.1212.1212 ES Import RT # 1212.1212.1212 (from ESI) Source MAC # 0000.0000.0000 (N/A) Topology # Operational # MH Configured # Port-Active Service Carving # Auto-selection Multicast # Disabled Peering Details # 1.1.1.1 [MOD#P#00] 2.2.2.2 [MOD#P#00] Service Carving Results# Forwarders # 0 Elected # 0 Not Elected # 0 EVPN-VPWS Service Carving Results# Primary # 0 Backup # 0 Non-DF # 0 MAC Flushing mode # STP-TCN Peering timer # 3 
sec [not running] Recovery timer # 30 sec [not running] Carving timer # 0 sec [not running] Local SHG label # None Remote SHG labels # 0 Access signal mode# Bundle OOS (Default)LEAF2# RP/0/RP0/CPU0#Leaf-2# sh evpn ethernet-segment interface bundle-Ether 1 detailLegend# B - No Forwarders EVPN-enabled, C - Backbone Source MAC missing (PBB-EVPN), RT - ES-Import Route Target missing, E - ESI missing, H - Interface handle missing, I - Name (Interface or Virtual Access) missing, M - Interface in Down state, O - BGP End of Download missing, P - Interface already Access Protected, Pf - Interface forced single-homed, R - BGP RID not received, S - Interface in redundancy standby state, X - ESI-extracted MAC Conflict SHG - No local split-horizon-group label allocatedEthernet Segment Id Interface Nexthops ------------------------ ---------------------------------- --------------------0012.1212.1212.1212.1212 BE1 1.1.1.1 2.2.2.2 ES to BGP Gates # Ready ES to L2FIB Gates # Ready Main port # Interface name # Bundle-Ether1 Interface MAC # 00bc.600e.40dc IfHandle # 0x08004014 State # Standby Redundancy # Not Defined ESI type # 0 Value # 12.1212.1212.1212.1212 ES Import RT # 1212.1212.1212 (from ESI) Source MAC # 0000.0000.0000 (N/A) Topology # Operational # MH Configured # Port-Active Service Carving # Auto-selection Multicast # Disabled Peering Details # 1.1.1.1 [MOD#P#00] 2.2.2.2 [MOD#P#00] Service Carving Results# Forwarders # 0 Elected # 0 Not Elected # 0 EVPN-VPWS Service Carving Results# Primary # 0 Backup # 0 Non-DF # 0 MAC Flushing mode # STP-TCN Peering timer # 3 sec [not running] Recovery timer # 30 sec [not running] Carving timer # 0 sec [not running] Local SHG label # None Remote SHG labels # 0 Access signal mode# Bundle OOS (Default)Note In the example shown the ethernet segment Identifier is 00.12.12.12.12.12.12.12.12.12.12 and the portion impacting DF election is 12.12.12.12 as highlighted. For Dual homing an odd-even modulo operation will gives a result of 0. Therefore Leaf1 is our active PE as it has a lower BGP router ID of 1.1.1.1 compared to 2.2.2.2 of Leaf2.Above output shows that the bundle interfaces are up and port active redundancy mode has created an active standby Leaf redundancy for the dual homed Host-1. By default the ethernet segment signals bundle OOS on the non-DF PE. The ES may also be configured with ‘access-signal bundle-down’. This configuration is used to keep ES down instead of OOS when EVPN cost-out/core-isolation and similar triggers are applied. In the Down signalling mode, the CE side is able to switch ES from one to the other when LACP is not supported. The below snippet shows the configuration and CLI output.evpn interface Bundle-Ether2 ethernet-segment identifier type 0 18.44.18.44.18.44.18.44.00 load-balancing-mode port-active ! 
access-signal bundle-downRP/0/RP0/CPU0#LEAF-1#show evpn ethernet-segment interface bundle-Ether 2 detail Thu Nov 12 00#43#18.314 GMT+4Legend# B - No Forwarders EVPN-enabled, C - Backbone Source MAC missing (PBB-EVPN), RT - ES-Import Route Target missing, E - ESI missing, H - Interface handle missing, I - Name (Interface or Virtual Access) missing, M - Interface in Down state, O - BGP End of Download missing, P - Interface already Access Protected, Pf - Interface forced single-homed, R - BGP RID not received, S - Interface in redundancy standby state, X - ESI-extracted MAC Conflict SHG - No local split-horizon-group label allocatedEthernet Segment Id Interface Nexthops ------------------------ ---------------------------------- --------------------0018.4418.4418.4418.4400 BE2 1.1.1.1 2.2.2.2 ES to BGP Gates # Ready ES to L2FIB Gates # Ready Main port # Interface name # Bundle-Ether2 Interface MAC # 0032.1780.98de IfHandle # 0x080040c4 State # Up Redundancy # Not Defined ESI type # 0 Value # 18.4418.4418.4418.4400 ES Import RT # 1844.1844.1844 (from ESI) Source MAC # 0000.0000.0000 (N/A) Topology # Operational # MH Configured # Port-Active Service Carving # Auto-selection Multicast # Disabled Convergence # Mobility-Flush # Count 0, Skip 0, Last n/a Peering Details # 2 Nexthops 1.1.1.1 [MOD#P#00] 2.2.2.2 [MOD#P#00] Service Carving Results# Forwarders # 0 Elected # 0 Not Elected # 0 EVPN-VPWS Service Carving Results# Primary # 0 Backup # 0 Non-DF # 0 MAC Flushing mode # STP-TCN Peering timer # 3 sec [not running] Recovery timer # 30 sec [not running] Carving timer # 0 sec [not running] Local SHG label # None Remote SHG labels # 0 Access signal mode# Bundle Down RP/0/RP0/CPU0#LEAF-2#show evpn ethernet-segment interface bundle-Ether 2 detail Thu Nov 12 04#49#28.018 UTCLegend# B - No Forwarders EVPN-enabled, C - Backbone Source MAC missing (PBB-EVPN), RT - ES-Import Route Target missing, E - ESI missing, H - Interface handle missing, I - Name (Interface or Virtual Access) missing, M - Interface in Down state, O - BGP End of Download missing, P - Interface already Access Protected, Pf - Interface forced single-homed, R - BGP RID not received, S - Interface in redundancy standby state, X - ESI-extracted MAC Conflict SHG - No local split-horizon-group label allocatedEthernet Segment Id Interface Nexthops ------------------------ ---------------------------------- --------------------0018.4418.4418.4418.4400 BE2 1.1.1.1 2.2.2.2 ES to BGP Gates # Ready ES to L2FIB Gates # Ready Main port # Interface name # Bundle-Ether2 Interface MAC # 00bc.6013.44de IfHandle # 0x0800403c State # Standby Redundancy # Not Defined ESI type # 0 Value # 18.4418.4418.4418.4400 ES Import RT # 1844.1844.1844 (from ESI) Source MAC # 0000.0000.0000 (N/A) Topology # Operational # MH Configured # Port-Active Service Carving # Auto-selection Multicast # Disabled Convergence # Mobility-Flush # Count 0, Skip 0, Last n/a Peering Details # 2 Nexthops 1.1.1.1 [MOD#P#00] 2.2.2.2 [MOD#P#00] Service Carving Results# Forwarders # 0 Elected # 0 Not Elected # 0 EVPN-VPWS Service Carving Results# Primary # 0 Backup # 0 Non-DF # 0 MAC Flushing mode # STP-TCN Peering timer # 3 sec [not running] Recovery timer # 30 sec [not running] Carving timer # 0 sec [not running] Local SHG label # None Remote SHG labels # 0 Access signal mode# Bundle DownNext, lets’ provision the EVPN layer-2 service over this redundancy.Task 3# Configure BGP EVPN based layer-2 multipoint serviceHere we will configure a EVPN layer-2 service between Leaf-1, Leaf-2 and 
Leaf-5 to provide a L2VPN between H1 and H5. Post configuration we will check the status of ethernet segment. For detailed explanation of configuring BGP EVPN based layer-2 service, refer to this post.Here , the L2 service is configured on VLAN 10 (sub-interface on the bundle) and only one VPN (EVI) is shown. We may have multiple services running over different sub-interface (VLAN).Leaf-1#interface Bundle-Ether 1.10 l2transportencapsulation dot1q 10rewrite ingress tag pop 1 symmetric!l2vpnbridge group bg-1bridge-domain bd-10interface Bundle-Ether 11.10evi 10!!evpn evi 10 bgp route-target import 1001#11 route-target export 1001#11 ! advertise-mac ! !!Leaf-2#interface Bundle-Ether 1.10 l2transportencapsulation dot1q 10rewrite ingress tag pop 1 symmetric!l2vpnbridge group bg-1bridge-domain bd-10interface Bundle-Ether 1.10evi 10!!evpnevi 10bgproute-target import 1001#11route-target export 1001#11!advertise-mac!!Leaf-5#interface TenGigE0/0/0/45.10 l2transportencapsulation dot1q 10rewrite ingress tag pop 1 symmetric!evpnevi 10bgproute-target import 1001#11route-target export 1001#11!advertise-mac!!!l2vpnbridge group bg-1bridge-domain bd-10interface TenGigE0/0/0/45.10!evi 10!!Host-5 is single-homed to Leaf-5, below is the Host-5 configuration for reference.Host-5#interface TenGigE0/0/1/3.10description ~Link to Leaf-5~ipv4 address 10.0.0.50 255.255.255.0encapsulation dot1q 10Once , the EVPN service is up, H1 will be able to reach H5 and vice-versa.Task 4# Verify that EVPN based Port-active multi-homing is operationalAs we have configured the BGP EVPN layer-2 service as well as the ethernet segment, we have already verified the port active operation. Now using the same command again we can see in the service carving details and confirm that the EVPN service is only active on the active PE.LEAF1#RP/0/RP0/CPU0#Leaf-1#show evpn ethernet-segment interface bundle-Ether 1 detail Thu Aug 13 11#58#07.149 UTCLegend# B - No Forwarders EVPN-enabled, C - Backbone Source MAC missing (PBB-EVPN), RT - ES-Import Route Target missing, E - ESI missing, H - Interface handle missing, I - Name (Interface or Virtual Access) missing, M - Interface in Down state, O - BGP End of Download missing, P - Interface already Access Protected, Pf - Interface forced single-homed, R - BGP RID not received, S - Interface in redundancy standby state, X - ESI-extracted MAC Conflict SHG - No local split-horizon-group label allocatedEthernet Segment Id Interface Nexthops ------------------------ ---------------------------------- --------------------0012.1212.1212.1212.1212 BE1 1.1.1.1 2.2.2.2 ES to BGP Gates # Ready ES to L2FIB Gates # Ready Main port # Interface name # Bundle-Ether1 Interface MAC # 00bc.601c.d0d9 IfHandle # 0x08004034 State # Up Redundancy # Not Defined ESI type # 0 Value # 12.1212.1212.1212.1212 ES Import RT # 1212.1212.1212 (from ESI) Source MAC # 0000.0000.0000 (N/A) Topology # Operational # MH Configured # Port-Active Service Carving # Auto-selection Multicast # Disabled Peering Details # 1.1.1.1 [MOD#P#00] 2.2.2.2 [MOD#P#00] Service Carving Results# Forwarders # 1 Elected # 1 Not Elected # 0 EVPN-VPWS Service Carving Results# Primary # 0 Backup # 0 Non-DF # 0 MAC Flushing mode # STP-TCN Peering timer # 3 sec [not running] Recovery timer # 30 sec [not running] Carving timer # 0 sec [not running] Local SHG label # 24001 Remote SHG labels # 1 24001 # nexthop 2.2.2.2 Access signal mode# Bundle OOS (Default)LEAF2#RP/0/RP0/CPU0#Leaf-2#show evpn ethernet-segment interface bundle-Ether 1 detailThu Aug 13 11#58#50.921 
UTCLegend# B - No Forwarders EVPN-enabled, C - Backbone Source MAC missing (PBB-EVPN), RT - ES-Import Route Target missing, E - ESI missing, H - Interface handle missing, I - Name (Interface or Virtual Access) missing, M - Interface in Down state, O - BGP End of Download missing, P - Interface already Access Protected, Pf - Interface forced single-homed, R - BGP RID not received, S - Interface in redundancy standby state, X - ESI-extracted MAC Conflict SHG - No local split-horizon-group label allocatedEthernet Segment Id Interface Nexthops ------------------------ ---------------------------------- --------------------0012.1212.1212.1212.1212 BE1 1.1.1.1 2.2.2.2 ES to BGP Gates # Ready ES to L2FIB Gates # Ready Main port # Interface name # Bundle-Ether1 Interface MAC # 00bc.600e.40dc IfHandle # 0x08004014 State # Standby Redundancy # Not Defined ESI type # 0 Value # 12.1212.1212.1212.1212 ES Import RT # 1212.1212.1212 (from ESI) Source MAC # 0000.0000.0000 (N/A) Topology # Operational # MH Configured # Port-Active Service Carving # Auto-selection Multicast # Disabled Peering Details # 1.1.1.1 [MOD#P#00] 2.2.2.2 [MOD#P#00] Service Carving Results# Forwarders # 1 Elected # 0 Not Elected # 1 EVPN-VPWS Service Carving Results# Primary # 0 Backup # 0 Non-DF # 0 MAC Flushing mode # STP-TCN Peering timer # 3 sec [not running] Recovery timer # 30 sec [not running] Carving timer # 0 sec [not running] Local SHG label # 24001 Remote SHG labels # 1 24001 # nexthop 1.1.1.1 Access signal mode# Bundle OOS (Default)The above output on both PE shows that elected field is up only for the active PE, although the output of both the Leafs show that both are forwarders of 1 service. Unlike All-active or Single-active, the same PE will be the elected PE for any other vlan configured on these ethernet segment. This is the nature of port active redundancy mode. ToPing from Host-1 to Host-5 shows that the hosts can reach each other.Host-1#RP/0/RSP0/CPU0#Host-1#ping 10.0.0.50Thu Aug 13 11#29#24.024 UTCType escape sequence to abort.Sending 5, 100-byte ICMP Echos to 10.0.0.50, timeout is 2 seconds#!!!!!Let’s now take a look at the BGP EVPN control plane by checking the types of routes received on different leaf’s. We are filtering the route for the specific PE and specific service using rd which is PE#EVI . for example , routes from leaf1 for EVI 10 will come with a RD of 1.1.1.1#10RP/0/RP0/CPU0#Leaf-1#show bgp l2vpn evpn rd 5.5.5.5#10------Status codes# s suppressed, d damped, h history, * valid, &gt best i - internal, r RIB-failure, S stale, N Nexthop-discardOrigin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight PathRoute Distinguisher# 5.5.5.5#10*>i[2][0][48][a03d.6f3d.5447][0]/104 5.5.5.5 100 0 i*>i[3][0][32][5.5.5.5]/80 5.5.5.5 100 0 iProcessed 2 prefixes, 2 pathsRP/0/RP0/CPU0#Leaf-1#show bgp l2vpn evpn rd 2.2.2.2#10-------Status codes# s suppressed, d damped, h history, * valid, &gt best i - internal, r RIB-failure, S stale, N Nexthop-discardOrigin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight PathRoute Distinguisher# 2.2.2.2#10*>i[1][0012.1212.1212.1212.1212][0]/120 2.2.2.2 100 0 i*>i[3][0][32][2.2.2.2]/80 2.2.2.2 100 0 iFrom Above output from Leaf-1 clearly shows it has reached the RT2 (MAC) from Leaf-5. 
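In these BGP outputs, the leading bracketed number in each prefix identifies the EVPN route type. [1] is an Ethernet Auto-Discovery route, [2] is a MAC/IP Advertisement route and [3] is an Inclusive Multicast Ethernet Tag route (the standard RFC 7432 route types), which makes the entries that follow easier to read.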
From Leaf2 it has only received the ESI route.RP/0/RP0/CPU0#Leaf-2#show bgp l2vpn evpn rd 1.1.1.1#10-------------Status codes# s suppressed, d damped, h history, * valid, &gt best i - internal, r RIB-failure, S stale, N Nexthop-discardOrigin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight PathRoute Distinguisher# 1.1.1.1#10*>i[1][0012.1212.1212.1212.1212][0]/120 1.1.1.1 100 0 i*>i[2][0][48][6c9c.ed6d.1d89][0]/104 1.1.1.1 100 0 i*>i[3][0][32][1.1.1.1]/80 1.1.1.1 100 0 iProcessed 3 prefixes, 3 pathsRP/0/RP0/CPU0#Leaf-2#show bgp l2vpn evpn rd 5.5.5.5#10-------------Status codes# s suppressed, d damped, h history, * valid, &gt best i - internal, r RIB-failure, S stale, N Nexthop-discardOrigin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight PathRoute Distinguisher# 5.5.5.5#10*>i[2][0][48][a03d.6f3d.5447][0]/104 5.5.5.5 100 0 i*>i[3][0][32][5.5.5.5]/80 5.5.5.5 100 0 iProcessed 2 prefixes, 2 pathsAbove output shows Leaf-2 has learnt ESI and MAC of host 1 from Leaf1 and from Leaf 5 it has learnt the MAC of host-5.RP/0/RP0/CPU0#Leaf-5#show bgp l2vpn evpn rd 1.1.1.1#10-------Status codes# s suppressed, d damped, h history, * valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discardOrigin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight PathRoute Distinguisher# 1.1.1.1#10*>i[1][0012.1212.1212.1212.1212][0]/120 1.1.1.1 100 0 i*>i[2][0][48][6c9c.ed6d.1d89][0]/104 1.1.1.1 100 0 i*>i[3][0][32][1.1.1.1]/80 1.1.1.1 100 0 iProcessed 3 prefixes, 3 pathsRP/0/RP0/CPU0#Leaf-5#show bgp l2vpn evpn rd 2.2.2.2#10-----------Status codes# s suppressed, d damped, h history, * valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discardOrigin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight PathRoute Distinguisher# 2.2.2.2#10*>i[1][0012.1212.1212.1212.1212][0]/120 2.2.2.2 100 0 i*>i[3][0][32][2.2.2.2]/80 2.2.2.2 100 0 iProcessed 2 prefixes, 2 paths100 0 iLeaf-5 also learns the MAC of host1 only via Leaf1 as it was the only active PE and there is no aliasing in port active multihoming.Lastly, run “show evpn evi vpn-id 10 mac” command to verify the MAC address learnt for EVI 10. We see that Leaf-1 and Leaf-2 have learnt Host-5’s MAC address with Leaf-5 as the next-hop. However , Leaf5 has learnt Host-1’s MAC with only Leaf-1 as nexthop.RP/0/RP0/CPU0#Leaf-1#show evpn evi vpn-id 10 macThu Aug 13 12#12#45.065 UTCVPN-IDEncap\tMAC address\tIP address \tNexthop\t\tLabel---------- ---------- -------------- ----------------------------10 MPLS 6c9c.ed6d.1d89 ## \t\tBundle-Ether1.10\t2400010 MPLS a03d.6f3d.5447 ## \t5.5.5.5 \t 24004RP/0/RP0/CPU0#Leaf-2#show evpn evi vpn-id 10 macThu Aug 13 12#12#45.065 UTCVPN-IDEncap\tMAC address\tIP address \tNexthop\t\tLabel---------- ---------- -------------- ----------------------------10 MPLS 6c9c.ed6d.1d89 ## \t\t1.1.1.1\t\t 2400010 MPLS a03d.6f3d.5447 ## \t5.5.5.5 \t 24004RP/0/RP0/CPU0#Leaf-1#show evpn evi vpn-id 10 macThu Aug 13 12#12#45.065 UTCVPN-IDEncap\tMAC address\tIP address \tNexthop\t\tLabel---------- ---------- -------------- ----------------------------10 MPLS 6c9c.ed6d.1d89 ## \t\t1.1.1.1\t\t 2400010 MPLS a03d.6f3d.5447 ## TenGigE0/0/0/45.10\t 24004The above output verifies the BGP-EVPN control plane for EVPN multipoint service over Port-active multihoming. 
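As a quick cross-check of why Leaf-1, and not Leaf-2, is the elected PE here, the modulo-based DF election described earlier can be worked out by hand (a rough arithmetic sketch of the rule quoted above; the exact election procedure is internal to IOS XR). Octets 3-6 of the ESI are 0x12121212, which is 303174162 in decimal; 303174162 mod 2 (two PEs share the Ethernet segment) is 0, so ordinal 0 is elected, and ordinal 0 maps to the peer with the lowest BGP router ID, i.e. Leaf-1 (1.1.1.1 is lower than 2.2.2.2).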
Note# As Leaf-2 sees Host-1's MAC reachable via Leaf-1, in case of another Host/ESI connected to Leaf-2 wants to reach to Host-1 it will have to go over Leaf-1 to reach to Host-1.Task 5# Configure and Verify BGP-EVPN Distributed Anycast Gateway for IRB serviceIn this section we will demonstrate the Layer-3 inter-subnet routing use case over EVPN port active multihoming. Similar to Host-1’s layer-2 reachability, Host-1’s IP will also only be reachable via Leaf-1 as next-hop. After we configure BGP-EVPN distributed anycast gateway for inter-subnet routing, we will observe the routing table of Leaf-5.Configure the BGP-EVPN Distributed Anycast Gateway on Leaf-1, Leaf-2 and Leaf-5. We will configure the IRB service over a different VLAN and show the coexistence of both service over the port active ESI. For detailed explanation of EVPN distributed anycast gateway, refer to this post. Configure VRFs on Leaf-1, Leaf-2 and Leaf-5. vrf 11 address-family ipv4 unicast import route-target 11#11 ! export route-target 11#11 ! router bgp 65001 address-family vpnv4 unicast ! vrf 11 rd auto address-family ipv4 unicast additional-paths receive maximum-paths ibgp 10 redistribute connected ! Configure BVI as distributed anycast gatewayOn Leaf 1 and Leaf 2# interface BVI11 host-routing vrf 11 ipv4 address 111.0.0.1 255.255.255.0 mac-address 1001.1001.1001 !interface Bundle-Ether1.11 l2transport encapsulation dot1q 11 rewrite ingress tag pop 1 symmetric! evpn evi 11 advertise-mac ! !!l2vpn bridge group bg1 bridge-domain irb1 interface Bundle-Ether1.11 ! routed interface BVI11 ! evi 11Configure BVI as distributed anycast gatewayOn Leaf 5#interface BVI11 host-routing vrf 11 ipv4 address 111.0.1.1 255.255.255.0 mac-address 5001.5001.5001interface TenGigE0/0/0/45.11 l2transport encapsulation dot1q 11 rewrite ingress tag pop 1 symmetric!evpn evi 11 advertise-mac !l2vpn bridge group bg1 bridge-domain irb1 interface TenGigE0/0/0/45.11 ! routed interface BVI11 ! evi 11 We will also configure a two different subnet on the Host’s and respective static routing towards the gateways.HOST1# interface Bundle-Ether1.11 ipv4 address 111.0.0.10 255.255.255.0 encapsulation dot1q 11!router static address-family ipv4 unicast 111.0.0.0/16 111.0.0.1 !!HOST5# interface TenGigE0/0/1/3.11 ipv4 address 111.0.1.50 255.255.255.0 encapsulation dot1q 11!router static address-family ipv4 unicast 111.0.0.0/16 111.0.1.1BGP-EVPN IRB control plane can be verified by observing the route tables on the Leaf node. As we can see the route for remote host’s are learnt on Leaf1 and Leaf-5 via BGP. As Leaf-2 is in standby mode it lean’s route to Host-1 from Leaf-1 via BGP instead of learning directly.RP/0/RP0/CPU0#Leaf-1#sh route vrf 11---------Gateway of last resort is not setC 111.0.0.0/24 is directly connected, 00#32#09, BVI11L 111.0.0.1/32 is directly connected, 00#32#09, BVI11B 111.0.1.50/32 [200/0] via 5.5.5.5 (nexthop in vrf default), 00#22#24RP/0/RP0/CPU0#Leaf-2#sh route vrf 11--------Gateway of last resort is not setB 111.0.0.10/32 [200/0] via 1.1.1.1 (nexthop in vrf default), 00#25#16B 111.0.1.50/32 [200/0] via 5.5.5.5 (nexthop in vrf default), 00#23#21RP/0/RP0/CPU0#Leaf-5#show route vrf 11--------------Gateway of last resort is not setB 111.0.0.10/32 [200/0] via 1.1.1.1 (nexthop in vrf default), 00#26#38C 111.0.1.0/24 is directly connected, 00#29#27, BVI11L 111.0.1.1/32 is directly connected, 00#29#27, BVI11As of now we have configured 2 different services over the EVPN port-active multihoming and we see Leaf-1 as DF for both of this service. 
This is due to the fact that load balancing happens per port/ESI and the bundle on the non DF nodes are in LACP OOS status. If we see the Ethernet segment status on Leaf-1, we will see it as elected forwarder for all the configured services.RP/0/RP0/CPU0#Leaf-1#show evpn ethernet-segment in bundle-Ether 1 carving detail Thu Aug 13 13#16#23.816 UTC---------Ethernet Segment Id Interface Nexthops ------------------------ ---------------------------------- --------------------0012.1212.1212.1212.1212 BE1 1.1.1.1 2.2.2.2 ES to BGP Gates # Ready ES to L2FIB Gates # Ready Main port # Interface name # Bundle-Ether1 Interface MAC # 00bc.601c.d0d9 IfHandle # 0x08004034 State # Up Redundancy # Not Defined ESI type # 0 Value # 12.1212.1212.1212.1212 ES Import RT # 1212.1212.1212 (from ESI) Source MAC # 0000.0000.0000 (N/A) Topology # Operational # MH Configured # Port-Active Service Carving # Auto-selection Multicast # Disabled Peering Details # 1.1.1.1 [MOD#P#00] 2.2.2.2 [MOD#P#00] Service Carving Results# Forwarders # 2 Elected # 2 EVI E # 10, 11 Not Elected # 0 EVPN-VPWS Service Carving Results# Primary # 0 Backup # 0 Non-DF # 0 MAC Flushing mode # STP-TCN Peering timer # 3 sec [not running] Recovery timer # 30 sec [not running] Carving timer # 0 sec [not running] Local SHG label # 24001 Remote SHG labels # 1 24001 # nexthop 2.2.2.2 Access signal mode# Bundle OOS (Default)This concludes the BGP-EVPN based port-active implementation. We have shown example of both Layer2 bridging and IRB services over port-active redundancy. However, this redundancy mode can be used for any other services like layer3 or legacy layer 2. For further technical details refer to our e-vpn.io webpage that has a lot of material explaining the core concepts of EVPN, its operations and troubleshooting.", "url": "/tutorials/bgp-evpn-based-port-active-multihoming/", "author": "Paban Sarma", "tags": "iosxr, EVPN, NCS 5500" } , "tutorials-user-defined-field-ncs55xx-and-ncs5xx": { "title": "User Defined Field NCS55xx and NCS5xx", "content": " On This Page Introduction Overview UDF Feature Support UDF Definition and Configuration UDF Use cases Matching layer 2 entities on a Layer 3 interface Matching Fragments using UDF UDF Filtering for Layer 4 Header UDF based ACL for traffic mirroring UDF Matching on Inner Packet Header. UDF ACL for matching DVMRP packets References Summary IntroductionIn the previous tech-note, we had discussed the concept of User Defined Keys - UDK. We also compared the UDK with Default TCAM keys and the ways to optimize the memory usage. In this tech-note, we will deep dive into advanced filtering capabilities of NCS5xx and NCS55xx with User Defined Fields - UDF.OverviewIn most cases, matching criterias are statically defined. Users do not have control over the bits in the packet, they want to use for matching. However, in some cases, the user may desire to match on a set of bits in the packet which is not associated with a specific, pre-defined header field. This means we need a way to classify on new fields in addition to the existing fields. But there is a catch here. It is not straight forward to add new fields in addition to existing fields, as we have fixed key length in the TCAM. It may not be always possible to have room for all the new fields. Other limitations with standard ACL’s is, it does not provide granularity when dealing with tunneled traffic. 
Due to these limitations, hardware must be capable of matching traffic based on user defined fields.User Defined Fields, or UDFs, can be thought of as an inspection of a packet based on offset values. An ACL can be defined with UDF matching capabilities to give granularity and flexibility when identifying traffic patterns. It is often used for deeper packet analysis. Typical use cases include finding patterns inside the inner header when packets are tunneled. Another use case is identifying traffic to mirror to monitor sessions for analysis. Any offset value within the first 128 bytes of a packet can be matched. We will see different use cases in detail in a later section.UDF Feature Support UDF is supported only in the ingress direction. It is not supported in the egress direction. It is supported on NCS540, NCS560 and NCS5500 (J/J+/J2). The ingress PMF supports packet inspection of the first 128 bytes of a packet. The offset support is limited to 63 bytes from the beginning of a header. It is supported only on Layer 3 interfaces. It is not supported on Layer 2 interfaces. As of 7.2.1, only 7 UDFs are supported. Prior to 7.2.1, 8 UDFs were supported. There is no support for the inner L2 header; only the outer L2 header is supported. The mask length is 4 bytes, and a length of 0 is not supported. A UDF name can be a maximum of 16 characters. UDF is not supported in default TCAM keys. It must be used along with User-Defined TCAM Keys. UDF is supported for both IPv4 and IPv6.UDF Definition and Configuration The user first defines the name of the UDF, the header to anchor on (inner/outer), and the offset and length (in bytes) of the data to extract. Then a UDK is defined referencing the UDFhw-module profile tcam format access-list ipv4 src-addr dst-addr src-port dst-port udf1 udf_testThe hw-module profile verifies that the user has requested a valid TCAM key. For IPv4 this will be either a 160 or 320 bit key. For IPv6 it will be a 320 bit key. The UDF then needs to be referenced in the ACEipv4 access-list udf_acl10 deny ipv4 any any udf udf_test 0x4567 0xffffUDF Use casesIn this section, we will cover a few scenarios that show how custom fields can be used to match or filter traffic. We have taken simple examples to make the concept easier to understand. Users can take tips from these and classify traffic based on their own criteria.Matching layer 2 entities on a Layer 3 interface Consider the below topologyWe have layer 3 connectivity end to end and host 60.1.1.2 wants to reach 70.1.1.2. Normally, we would not be able to match the VLAN ID on a layer 3 interface; that would require a Layer 2 ACL, which cannot be applied to a Layer 3 interface. Now what if a network administrator wants to match or filter a packet on a field of their choosing (in this case, the VLAN ID)? If the platform is not capable of this kind of deep packet inspection, the administrator will not be able to achieve that. Thanks to the UDF capability of NCS55xx and NCS5xx, we can filter packets on user defined fields.Configuring UDFConsider the packet below.
We are sending the packet with source address 60.1.1.2 and destination address 70.1.1.2 with an encapsulation of 10.If we want to filter the packet on the basis of encapsulation 10 we need to define the below UDF globally.udf udf_vlan header outer l2 offset 14 length 2hw-module profile tcam format access-list ipv4 src-addr dst-addr udf1 udf_vlanNote# Reload of the router or line card will be needed after configuring or modifying the hw-module profile.Referring the UDF to the ACL and applying it on the interfaceipv4 access-list UDF_VLAN10 deny ipv4 any any udf udf_vlan 0xa 0xffff 20 permit ipv4 any anyinterface TenGigE0/0/0/0.10 description using it for ACL testing ipv4 address 60.1.1.1 255.255.255.0 ipv6 address 60##1/64 load-interval 30 encapsulation dot1q 10 ipv4 access-group UDF_VLAN ingressVerifying UDFRP/0/RP0/CPU0#N55-24#show access-lists ipv4 usage pfilter location 0/0/CPU0 Mon Aug 31 08#27#27.590 UTCInterface # TenGigE0/0/0/0.10 Input ACL # Common-ACL # N/A ACL # UDF_VLAN Output ACL # N/ARP/0/RP0/CPU0#N55-24#show controllers npu internaltcam location 0/0/CPU0 Mon Aug 31 08#28#34.944 UTCInternal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b pmf-0 1898 91 36 INGRESS_LPTS_IPV40 0 160b pmf-0 1898 19 45 INGRESS_RX_ISIS0 0 160b pmf-0 1898 23 54 INGRESS_QOS_IPV40 0 160b pmf-0 1898 15 56 INGRESS_QOS_MPLS0 0 160b pmf-0 1898 2 60 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 1 160b pmf-0 1992 4 47 INGRESS_ACL_L3_IPV4RP/0/RP0/CPU0#N55-24#show access-lists ipv4 UDF_VLAN hardware ingress verify location 0/0/CPU0 Mon Aug 31 08#27#49.483 UTCVerifying TCAM entries for UDF_VLANPlease wait... INTF NPU lookup ACL # intf Total compression Total result failed(Entry) TCAM entries type ID shared ACES prefix-type Entries ACE SEQ # verified ---------- --- ------- --- ------ ------ ----------- ------- ------ ------------- ------------ TenGigE0_0_0_0.10 (ifhandle# 0x41b8) 0 IPV4 7 1 2 NONE 3 passed 3RP/0/RP0/CPU0#N55-24#show access-lists ipv4 UDF_VLAN hardware ingress location 0/0/CPU0 Mon Aug 31 09#05#35.701 UTCipv4 access-list UDF_VLAN 10 deny ipv4 any any (43041269 matches) 20 permit ipv4 any anyWe can see the end to end traffic is getting dropped.Matching Fragments using UDFIn previous technote for Fragment Matching, we saw there are 2 keywords# Fragments and Fragment-type. We saw that if we want to match initial fragments we need to use keyword Fragment-type, but this was supported on systems with external TCAM only. What if we have a system without external TCAM ? Thanks to UDF, we dont need to upgrade the whole system. Let us see, how we can achieve that. 
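Before looking at the configuration, a short recap of the IPv4 header explains the match value used below. The flags and fragment offset share a single 16-bit word made up of a reserved bit, the DF bit, the MF bit and a 13-bit fragment offset. An initial fragment has MF set to 1 and fragment offset 0, so that word reads 0b0010000000000000, i.e. 0x2000, which is exactly the value (with a full 0xffff mask) that the UDF-based ACE below matches on.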
Consider the same topology as above.We send the below fragmented packet with FO=0.As we have not applied any ACL, end to end traffic is flowing fine.For matching the Fragment Offset = 0, we will configure the below UDFudf udf_frag header outer l3 offset 24 length 2hw-module profile tcam format access-list ipv4 src-addr dst-addr udf1 udf_fragReferring the UDF to the ACL and applying it on the interfaceipv4 access-list UDF_FRAG 10 deny ipv4 any any udf udf_frag 0x2000 0xffff 20 permit ipv4 any anyinterface TenGigE0/0/0/0.10 description using it for ACL testing ipv4 address 60.1.1.1 255.255.255.0 ipv6 address 60##1/64 load-interval 30 encapsulation dot1q 10 ipv4 access-group UDF_FRAG ingressRP/0/RP0/CPU0#N55-24#show access-lists ipv4 UDF_FRAG hardware ingress location 0/0/CPU0 Mon Aug 31 17#21#22.017 UTCipv4 access-list UDF_FRAG10 deny ipv4 any any (5491101 matches) 20 permit ipv4 any anyRP/0/RP0/CPU0#N55-24#UDF Filtering for Layer 4 HeaderThe TCP header contains several one-bit boolean fields known as flags used to influence the flow of data across a TCP connection. In our example, we will try to set the URG flag. It is used to inform a receiving station that certain data within a segment is urgent and should be prioritized. If the URG flag is set, the receiving station evaluates the urgent pointer, a 16-bit field in the TCP header. This pointer indicates how much of the data in the segment, counting from the first byte, is urgent. ReferenceConfiguring the UDFudf udf_l4 header outer l4 offset 13 length 1hw-module profile tcam format access-list ipv4 src-addr dst-addr src-port dst-port proto frag-bit udf1 udf_l4ipv4 access-list UDF_L4 10 permit ipv4 any any udf udf_l4 0x21 0xffIn this example, we want to allow traffic with TCP urgent flag set and rest all traffic should be denied. As per the packet capture, we can see URG flag is set. Traffic is flowing end to end and the packets are matching ACE 10.RP/0/RP0/CPU0#N55-24#show access-lists ipv4 UDF_L4 hardware ingress location 0/0/CPU0 Tue Sep 1 06#51#58.088 UTCipv4 access-list UDF_L4 10 permit ipv4 any any (585576 matches)RP/0/RP0/CPU0#N55-24#UDF_L4 Details#Sequence Number# 10NPU ID# 0Number of DPA Entries# 1ACL ID# 2ACE Action# PERMITACE Logging# DISABLEDABF Action# 0(ABF_NONE)Hit Packet Count# 1101498UDF Entries# 1# udf_l4# 0x21 (mask# 0xff)DPA Entry# 1 Entry Index# 0 DPA Handle# 0x8E97C0A8Sequence Number# IMPLICIT DENYNPU ID# 0Number of DPA Entries# 1ACL ID# 2ACE Action# DENYACE Logging# DISABLEDABF Action# 0(ABF_NONE)Hit Packet Count# 0DPA Entry# 1 Entry Index# 0 DPA Handle# 0x8E97C888Let us change the traffic pattern with URG Flag unset.RP/0/RP0/CPU0#N55-24#show access-lists ipv4 UDF_L4 hardware ingress location 0/0/CPU0 Tue Sep 1 07#04#41.524 UTCipv4 access-list UDF_L4 10 permit ipv4 any anyRP/0/RP0/CPU0#N55-24#UDF_L4 Details#Sequence Number# 10NPU ID# 0Number of DPA Entries# 1ACL ID# 2ACE Action# PERMITACE Logging# DISABLEDABF Action# 0(ABF_NONE)Hit Packet Count# 0UDF Entries# 1# udf_l4# 0x21 (mask# 0xff)DPA Entry# 1 Entry Index# 0 DPA Handle# 0x8E97C0A8Sequence Number# IMPLICIT DENYNPU ID# 0Number of DPA Entries# 1ACL ID# 2ACE Action# DENYACE Logging# DISABLEDABF Action# 0(ABF_NONE)Hit Packet Count# 545482DPA Entry# 1 Entry Index# 0 DPA Handle# 0x8E97C888UDF based ACL for traffic mirroringOne of the important use case for UDF based ACL is along with ERSPAN. ERSPAN feature allows you to monitor traffic on one or more ports. It is primarily for troubleshooting. The router encapsulates the traffic with ERSPAN header inside a GRE packet. 
It then sends this replicated monitor traffic to a destination IP address via a GRE tunnel. We will not go into more details on ERSPAN. For more details please refer.We will focus on how UDF ACL can be used along with ERSPAN to control the traffic which needs to be mirrored. UDF with ACL allows deeper packet analysis. ERSPAN ACL feature is used to mirror only interesting traffic. The main benefit is that it reduces the amount of traffic being mirrored and hence conserve bandwidth. Only ingress ACL is used for ERSPAN. Only Access Control Entries (ACEs) with capture keyword will be considered for mirroring. Both permit and deny packets will be captured if the ACE contains capture keyword. This feature is supported on all the platforms i.e NCS5xx and NCS55xx with J/J+/J2 asics.User-Defined Field allows a customer to define a custom key by specifying the location and size of the field to match on. UDF ACL can be used to find patterns inside the inner header when packets are tunneled like is the case of ERSPAN where mirrored packets are going over a GRE Tunnel, or in the payload if the pattern is within the first 128 bytes of the packet.GRE Tunnel Configsinterface tunnel-ip1 ipv4 address 15.1.1.1 255.255.255.0 tunnel mode gre ipv4 tunnel source 172.16.3.24 tunnel destination 70.1.1.2ERSPAN Configsmonitor-session 1 ethernet destination interface tunnel-ip1 interface TenGigE0/0/0/0 description *** To IXIA 1/3 *** ipv4 address 100.24.0.1 255.255.255.252 monitor-session 1 ethernet direction rx-only port-level This will remotely span the traffic over a GRE tunnelWe can see we are sending 20000 packets (vlan 10 and 20) and receiving 40000 packets. This is a combination of data traffic plus spanned trafficNow lets say for troubleshooting purpose, we only want traffic based on VLAN 10 to be mirrored. Other traffic should not be mirrored. It should be permitted and not denied. This can be achieved by configuring ACL along with ERSPAN. We need to apply the following configurations to achieve the same.hw-module profile tcam format access-list ipv4 src-addr dst-addr src-port dst-port proto frag-bit udf1 udf_erspanudf udf_erspan header outer l2 offset 14 length 2Referencing it to ACL and applying on the interfaceRP/0/RP0/CPU0#N55-24#RP/0/RP0/CPU0#N55-24#show running-config interface tenGigE 0/0/0/0Tue Sep 1 10#24#50.937 UTCinterface TenGigE0/0/0/0 description *** To IXIA 1/3 *** ipv4 address 100.24.0.1 255.255.255.252 monitor-session 1 ethernet direction rx-only port-level aclipv4 access-list UDF_ERSPAN 10 permit ipv4 any any udf udf_erspan 0xa 0xffff capture 20 permit ipv4 any anyFrom the above output, we can see that the traffic is mirrored only for VLAN 10. The traffic for VLAN 20 is permitted but not mirrored.UDF Matching on Inner Packet Header.Let us see an example of how users can match the traffic pattern based on inner packet headers. We have generated a packet with custom payload.User wants to match on the basis of information type in the custom payload. In our example, we have the information type as Information Request in the custome ICMP packet. 
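For reference, an ICMP Information Request carries ICMP type 15 (0x0f) and code 0, so the first two bytes of the ICMP message read 0x0f00. This is the value matched (against a 0xffff mask) by the ACE shown below.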
For matching the traffic on basis of inner header, we can use the below UDF.udf udf_custom header inner l3 offset 0 length 2hw-module profile tcam format access-list ipv4 src-addr dst-addr src-port dst-port proto frag-bit udf1 udf_customReferencing the UDF to ACL and attaching it to the interfaceipv4 access-list UDF_CUSTOM 10 permit ipv4 any any udf udf_custom 0xf00 0xffff 20 deny ipv4 any anyinterface TenGigE0/0/0/0.10 description using it for ACL testing ipv4 address 60.1.1.1 255.255.255.0 ipv6 address 60##1/64 load-interval 30 encapsulation dot1q 10 ipv4 access-group UDF_CUSTOM ingressWe can see the traffic is flowing end to end and packets are matching the ACE.RP/0/RP0/CPU0#N55-24#show access-lists ipv4 UDF_CUSTOM hardware ingress location 0/0/CPU0 Wed Sep 2 08#24#51.944 UTCipv4 access-list UDF_CUSTOM 10 permit ipv4 any any (301342 matches) 20 deny ipv4 any anyRP/0/RP0/CPU0#N55-24#show access-lists ipv4 UDF_CUSTOM hardware ingress detail location 0/0/CPU0 Wed Sep 2 09#05#30.678 UTCUDF_CUSTOM Details#Sequence Number# 10NPU ID# 0Number of DPA Entries# 1ACL ID# 1ACE Action# PERMITACE Logging# DISABLEDABF Action# 0(ABF_NONE)Hit Packet Count# 1188455UDF Entries# 1# udf_custom# 0xf00 (mask# 0xffff)DPA Entry# 1 Entry Index# 0 DPA Handle# 0x8EC490A8Sequence Number# 20NPU ID# 0Number of DPA Entries# 1ACL ID# 1ACE Action# DENYACE Logging# DISABLEDABF Action# 0(ABF_NONE)Hit Packet Count# 0DPA Entry# 1 Entry Index# 0 DPA Handle# 0x8EC49498Sequence Number# IMPLICIT DENYNPU ID# 0Number of DPA Entries# 1ACL ID# 1ACE Action# DENYACE Logging# DISABLEDABF Action# 0(ABF_NONE)Hit Packet Count# 0DPA Entry# 1 Entry Index# 0 DPA Handle# 0x8EC49888If we change the traffic pattern and remove the custom payload the traffic doesnt match the sequence 10 and traffic is denied.RP/0/RP0/CPU0#N55-24#show access-lists ipv4 UDF_CUSTOM hardware ingress location 0/0/CPU0 Wed Sep 2 09#11#51.588 UTCipv4 access-list UDF_CUSTOM 10 permit ipv4 any any 20 deny ipv4 any any (44974 matches)UDF ACL for matching DVMRP packetsThe Distance Vector Multicast Routing Protocol or DVMRP, is a routing protocol used to share information between routers to facilitate the transportation of IP multicast packets among networks refer. DVMRP uses the Internet Group Management Protocol (IGMP) to exchangerouting datagrams. To know further details on DVMRP please refer RFC 1075. DVMRP DDoS can cause IGMP process crash and also potentially bring down the router. Thanks to NCS55xx and NCS5xx capability to filter the DVMRP packets with UDF, we can protect the network with such attacks. 
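The detail that makes this filter possible is that DVMRP messages are carried in IGMP with an IGMP type value of 0x13. Since the type field is the very first byte of the IGMP header, a 1-byte UDF at offset 0 of the L4 header matching 0x13 identifies DVMRP traffic, as the configuration below shows.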
UDF can be used to match the DVMRP packets and drop it at the ingress interface.Packet with DVMRP headerConfiguring UDF and referencing it to the ACLudf udf_dvmrp header outer l4 offset 0 length 1hw-module profile tcam format access-list ipv4 src-addr dst-addr proto frag-bit udf1 udf_dvmrpipv4 access-list UDF_DVMRP10 deny igmp any any udf udf_dvmrp 0x13 0xff20 permit ipv4 any anyinterface Bundle-Ether31000ipv4 address 1.1.1.2 255.255.255.0ipv4 access-group UDF_DVMRP ingressWe can see that ACL is matching the ingress DVMRP packetsRP/0/RP0/CPU0#xrg-307-NCS-5501-SE#show access-lists ipv4 UDF_DVMRP hardware ingress location 0/0/CPU0Thu Sep 3 23#20#03.056 UTCipv4 access-list UDF_DVMRP10 deny igmp any any (1799642834 matches) 20 permit ipv4 any anyRP/0/RP0/CPU0#xrg-307-NCS-5501-SE#show access-lists ipv4 UDF_DVMRP hardware ingress det location 0/0/CPU0UDF_DVMRP Details#Sequence Number# 10NPU ID# 0Number of DPA Entries# 1ACL ID# 1ACE Action# DENYACE Logging# DISABLEDABF Action# 0(ABF_NONE)Hit Packet Count# 1819195451 UDF Entries#1# udf_dvmrp# 0x13 (mask# 0xff)DPA Entry# 1Entry Index# 0DPA Handle# 0x8CD870A8Sequence Number# 20NPU ID# 0Number of DPA Entries# 1ACL ID# 1ACE Action# PERMITACE Logging# DISABLEDABF Action# 0(ABF_NONE)DPA Entry# 1Entry Index# 0DPA Handle# 0x8C4FBAC8Sequence Number# IMPLICIT DENYNPU ID# 0Number of DPA Entries# 1ACL ID# 1ACE Action# DENYACE Logging# DISABLEDABF Action# 0(ABF_NONE)Hit Packet Count# 0DPA Entry# 1Entry Index# 0DPA Handle# 0x8C4FC4E8Thanks to Santosh Sharma for helping out test this scenario.References CCO Config GuideSummaryIn this tech-note, we saw how we can overcome the limits of static matching criterias by using user defined fields. This gives the users capabilities of matching the packets dynamically as per the requirement. This is very useful in troubleshooting network issues and identifying packets. We saw a few simple examples with UDF. These can be customized as per the network requirements. With the availability of UDF, it is possible for the NCS55xx and NCS5xx platform to be more capable and flexible when fulfilling the packet classification functionalities.", "url": "/tutorials/user-defined-field-ncs55xx-and-ncs5xx/", "author": "Tejas Lad", "tags": "NCS5500, NCS500, NCS55xx, UDF, ACL" } , "tutorials-acl-based-forwarding-and-object-tracking-for-ncs5xx-and-ncs55xx": { "title": "ACL Based Forwarding and Object Tracking for NCS5xx and NCS55xx", "content": " On This Page Introduction Different Types of ABF Possible Use cases of ABF ABF Feature Support ABF Platform Implementation ABF Verification For Punt Packets Resource Utilization Important considerations for ABF Next Hop Object Tracking with ABF Reference Summary IntroductionIn today’s converged networks, operators have to implement packet forwarding in a way that goes beyond traditional routing protocol. Sometimes operators want certain traffic to be engineered to a separate path based on certain rules. They do not want it to take the path computated by the dynamic routing protocols. ACL Based Forwarding (ABF) can be used as a technique to achieve the same. ABF is a subset of PBR (Policy Based Routing) infrastructure in the XR platform. ABF allows traffic matching specific ACL rule to be forwarded to user specified nexthop instead of route selected by a routing protocol. ABF does not modify anything in the packet itself and therefore only affects the local routing/forwarding decision. 
The path for reaching the ABF next-hop however, is determined by the normal routing/forwarding table.Different Types of ABFThere are 3 types of ABF supported on NCS55xx and NCS5xx Reference ABF for IPv4/IPv6 Only the next-hop IP address is specified in the ACE rule. The matching traffic is forwarded to the first “up” next-hop, as specified in the ACE. Default VRF is used in this type of ABF. VRF-aware ABF for IPv4/IPv6 Both the next-hop IP address and the next-hop VRF are specified in the ACE rule. The matching traffic is forwarded to the first “up” next-hop, as specified in the ACE. The specified VRF is used instead of the default VRF for determining the path. VRF-select ABF for IPv4/IPv6 Only the next-hop VRF is specified in the ACE rule. The matching traffic is forwarded to the first “UP” VRF as specified in the ACE rule. Supported from IOS-XR 6.5.1 onwards. Possible Use cases of ABF(Reference 1)Source-Based Transit Provider Selection – Internet service providers and other organizations can use ABF to route traffic originating from different sets of users through different internet connections across the policy routers.Cost Savings – Organizations can achieve cost savings by distributing interactive and batch traffic among low-bandwidth, low-cost permanent paths and high-bandwidth, high-cost, switched paths.Load Sharing – In addition to the dynamic load-sharing capabilities offered by destination-based routing that the Cisco IOS-XR software provides network manager can implement policies to distribute traffic among multiple paths based on the traffic characteristics.ABF Feature Support ABF is supported only in Ingress Direction. ABF is not supported in the Egress direction ABF is supported only for IPv4 and IPv6. ABF is not supported for L2 ACL. ABF is supported upto only 3 next-hops. The next-hops are selected on the basis of its configuration in the ACE. It is supported only for permit action. Deny action is not supported with ABF. ABF is supported on Physical Interfaces, Sub-interfaces, Bundle Interfaces and Bundle Sub-Interfaces. ABF is supported for common-ACL’s from IOS-XR 7.6.1 ABF supports dynamic next-hop modifications. ABF default route is not supported. IPv4 ABF next hops routed over GRE interfaces is supported.ABF Platform ImplementationNow that we have understood the feature and its use cases, let us jump into the configuration and implementaion on NCS55xx and NCS5xx. We will be a taking a very simple topology for easier illustration of the feature.In the above topology, we have network 70.1.1.0/24 behind router R5 and network 60.1.1.0/24 behind router R1. We have configured two ISIS neighbors, but the path calculated is just via TenGig 0/0/0/6 as we have higher metric on TenGig 0/0/0/7RP/0/RP0/CPU0#N55-24#show isis neighbors IS-IS 1 neighbors#System Id Interface SNPA State Holdtime Type IETF-NSFN540-49 Te0/0/0/7 *PtoP* Up 26 L2 Capable N540-49 Te0/0/0/6 *PtoP* Up 28 L2 CapableRP/0/RP0/CPU0#N55-24#show route 70.1.1.2Routing entry for 70.1.1.0/24 Known via ~isis 1~, distance 115, metric 20, type level-2 Installed Sep 8 08#19#09.054 for 05#30#18 Routing Descriptor Blocks 65.1.1.2, from 172.16.4.49, via TenGigE0/0/0/6 Route metric is 20 No advertising protos. 
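For completeness, the higher IS-IS metric mentioned above, which keeps TenGigE0/0/0/7 out of the best path, is applied per interface under the IS-IS process. A minimal sketch of such a configuration is shown below; the metric value of 100 is only an assumption for illustration, since the actual value used in this lab is not shown in this post.

router isis 1
 interface TenGigE0/0/0/7
  address-family ipv4 unicast
   metric 100
  !
 !
!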
As we mentioned above, let us apply an ABF to influence the critical traffic to take a path as per our configured next-hop.ipv4 access-list ABF_Test 10 permit ipv4 any any dscp af11 nexthop1 ipv4 65.1.1.2 nexthop2 ipv4 70.1.1.1 20 permit ipv4 any any dscp ef nexthop1 ipv4 66.1.1.2 nexthop2 ipv4 70.1.1.1This ACL will sent the traffic destined for 70.1.1.2 with DSCF AF11 via interface TenGig 0/0/0/6 and traffic with DSCP EF (critical traffic) via TenGig 0/0/0/7. Let us verify the same.ABF VerificationRP/0/RP0/CPU0#N55-24#show access-lists ipv4 usage pfilter location 0/0/CPU0 Interface # TenGigE0/0/0/0.10 Input ACL # Common-ACL # N/A ACL # ABF_Test Output ACL # N/ARP/0/RP0/CPU0#N55-24#show access-lists ipv4 ABF_Test hardware ingress location 0/0/CPU0 ipv4 access-list ABF_Test 10 permit ipv4 any any dscp af11 (next-hop# addr=65.1.1.2, vrf name=default) 20 permit ipv4 any any dscp ef (next-hop# addr=66.1.1.2, vrf name=default)We can see the traffic (10000 packets of AF11 and 20000 packets of EF) is flowing fine.RP/0/RP0/CPU0#N55-24#show access-lists ipv4 ABF_Test hardware ingress location 0/0/CPU0 ipv4 access-list ABF_Test 10 permit ipv4 any any dscp af11 (99377687 matches) (next-hop# addr=65.1.1.2, vrf name=default) 20 permit ipv4 any any dscp ef (198755422 matches) (next-hop# addr=66.1.1.2, vrf name=default)RP/0/RP0/CPU0#N55-24#show access-lists ipv4 ABF_Test hardware ingress detail location 0/0/CPU0 ABF_Test Details#Sequence Number# 10NPU ID# 0Number of DPA Entries# 1ACL ID# 1ACE Action# PERMITACE Logging# DISABLEDABF Action# 1(ABF_NH)ABF IP# 65.1.1.2ABF VRF# defaultABF FEC ID# 0x2001ffb8Hit Packet Count# 99688849DPA Entry# 1 Entry Index# 0 DPA Handle# 0x8ED2D0A8 DSCP# 0x28 (Mask 0xFC)Sequence Number# 20NPU ID# 0Number of DPA Entries# 1ACL ID# 1ACE Action# PERMITACE Logging# DISABLEDABF Action# 1(ABF_NH)ABF IP# 66.1.1.2ABF VRF# defaultABF FEC ID# 0x2001ffb9Hit Packet Count# 199377727DPA Entry# 1 Entry Index# 0 DPA Handle# 0x8ED2D498 DSCP# 0xB8 (Mask 0xFC)Sequence Number# IMPLICIT DENYNPU ID# 0Number of DPA Entries# 1ACL ID# 1ACE Action# DENYACE Logging# DISABLEDABF Action# 0(ABF_NONE)Hit Packet Count# 0DPA Entry# 1 Entry Index# 0 DPA Handle# 0x8ED2D888From the above outputs, we could successfully influence the traffic to take a path which is not installed in the routing table. This can be used by the operators for diverting the traffic to load balance or for purpose of troubleshooting.For Punt Packets Packets punted in the ingress direction from the NPU to the linecard CPU are not subjected to ABF treatment due to lack of ABF support in the slow path. These packets will be forwarded normally based on destination-address lookup by the software dataplane. Some examples of these types of packets are (but are not limited to) packets with IPv4 options, IPv6 extension headers, and packets destined for glean (unresolved/incomplete) adjacencies. Packets destined to the local IP interface (“for-us” packets) are subjected to redirect if they match the rule containing the ABF action. This can be avoided by either designing the rule to be specific enough to avoid matching the “for-us” packets or placing an explicit permit ACE rule (with higher priority) into the ACL before the matching ABF rule.ReferenceResource UtilizationAs we already know the TCAM is very important resource in NCS55xx and NCS5xx. It needs to be optimized. The advantage of ABF, is that it is built within the same feature framework. ABF shares the same lookup field groups as ACL hereby saving the TCAM resources for additional lookups. 
Another advantage is ABF uses the CLI which is based on existing ACL infrastructure making it comfortable for the users to configure and implement. For further information on resource utilization please referImportant considerations for ABF Next HopWhile configuring the ABF next-hop we should consider the below# The next-hop address cannot be of the loopback interface. The next-hop address cannot an address routed/forwarded over the loopback interface. The next-hop address caanot be the address of a local port.Object Tracking with ABFIn some scenarios, the ABF may fail to recognise that the next hop is not reachable and will keep on forwarding the packet to that next hop. This will cause traffic drop end to end. Consider the same topology as above. Traffic is flowing fine as per the configured next-hop.Due to unknown reason, the link between the switch and R5 has gone down. Router R1 has no visibility of this and it will continue forwarding the traffic out of the interface Ten 0/0/0/7 as that link is in UP state. How to deal with this type of failure scenario ?This is where we need Object-Tracking along with ABF. Track option in ABF enables track object to be specified along with nexthop ip address. Let us see this with configuration example.ipv4 access-list ABF_Test 10 permit ipv4 any any dscp ef nexthop1 ipv4 66.1.1.2 nexthop2 ipv4 65.1.1.2RP/0/RP0/CPU0#N55-24#show interfaces tenGigE 0/0/0/7 | in rate Wed Sep 9 05#12#58.069 UTC 30 second input rate 1000 bits/sec, 0 packets/sec 30 second output rate 79361000 bits/sec, 10000 packets/secRP/0/RP0/CPU0#N55-24#Shutting the interface between R5 and SwitchWe can see the traffic has dropped completelyThe traffic is being forwarded over TenGig 0/0/0/7, as R1 has no visibility of the link down.RP/0/RP0/CPU0#N55-24#show interfaces tenGigE 0/0/0/7 | i rate Wed Sep 9 05#18#29.082 UTC 30 second input rate 0 bits/sec, 0 packets/sec 30 second output rate 81244000 bits/sec, 10238 packets/secRP/0/RP0/CPU0#N55-24#Configuring object tracking for ABFipsla operation 1 type icmp echo destination address 66.1.1.2 timeout 5000 frequency 5 ! ! 
schedule operation 1 start-time now !!Modifying the ACL to track the next hopipv4 access-list ABF_Test 10 permit ipv4 any any dscp ef nexthop1 track 1 ipv4 66.1.1.2 nexthop2 ipv4 65.1.1.2RP/0/RP0/CPU0#N55-24#show ipsla statistics Entry number# 1 Modification time# 05#22#50.959 UTC Wed Sep 09 2020 Start time # 14#07#07.439 UTC Tue Sep 08 2020 Number of operations attempted# 708 Number of operations skipped # 12 Current seconds left in Life # 0 Operational state of entry # Inactive Operational frequency(seconds)# 5 Connection loss occurred # FALSE Timeout occurred # FALSE Latest RTT (milliseconds) # 1 Latest operation start time # 15#07#02.440 UTC Tue Sep 08 2020 Next operation start time # Inactive Latest operation return code # OK RTT Values# RTTAvg # 1 RTTMin# 1 RTTMax # 1 NumOfRTT# 1 RTTSum# 1 RTTSum2# 1RP/0/RP0/CPU0#N55-24#show track 1Track 1 Response Time Reporter 1 reachability Reachability is UP 2 changes, last change 14#09#23 UTC Tue Sep 08 2020 Latest operation return code# OK Latest RTT (millisecs) # 1 Shutting the interface between R5 and Switch againRP/0/RP0/CPU0#N55-24#RP/0/RP0/CPU0#Sep 9 05#45#12.000 UTC# object_tracking[360]# %SERVICES-OT-6-TRACK_INFO # track 1 state Track_DownRP/0/RP0/CPU0#N55-24#show ipsla statistics Entry number# 1 Modification time# 05#44#45.008 UTC Wed Sep 09 2020 Start time # 05#44#45.011 UTC Wed Sep 09 2020 Number of operations attempted# 13 Number of operations skipped # 13 Current seconds left in Life # 3470 Operational state of entry # Active Operational frequency(seconds)# 5 Connection loss occurred # FALSE Timeout occurred # TRUE Latest RTT (milliseconds) # Unknown Latest operation start time # 05#46#45.012 UTC Wed Sep 09 2020 Next operation start time # 05#46#50.012 UTC Wed Sep 09 2020 Latest operation return code # TimeOut RTT Values# RTTAvg # 0 RTTMin# 0 RTTMax # 0 NumOfRTT# 0 RTTSum# 0 RTTSum2# 0RP/0/RP0/CPU0#N55-24#show track 1Track 1 Response Time Reporter 1 reachability Reachability is DOWN 3 changes, last change 05#45#12 UTC Wed Sep 09 2020 Latest operation return code# TimeOut Latest RTT (millisecs) # 0 RP/0/RP0/CPU0#N55-24#Traffic is moved to next UP next-hopRP/0/RP0/CPU0#N55-24#show interfaces tenGigE 0/0/0/6 | i rate Wed Sep 9 05#48#03.242 UTC 30 second input rate 1000 bits/sec, 0 packets/sec 30 second output rate 79362000 bits/sec, 10000 packets/secRP/0/RP0/CPU0#N55-24#show interfaces tenGigE 0/0/0/7 | i rate Wed Sep 9 05#47#58.801 UTC 30 second input rate 0 bits/sec, 0 packets/sec 30 second output rate 1000 bits/sec, 0 packets/secThat’s it!This is just one of scenario. There can be multiple sceanrios where network admins can identify the need of using object tracking along with ABF to prevent the loss of traffic.Reference Reference 1# https#//community.cisco.com/t5/service-providers-documents/asr9000-xr-abf-acl-based-forwarding/ta-p/3153403 CCO Config GuideSummaryIn this technote, we tried to cover the ABF feature, possible use cases and how to use ABF along with Object tracking to track the next-hops for failure scenarios. We also saw configuration examples on NCS55xx and NCS5xx. We have taken a simple network topology to explain the concept in easier way, but there is no restriction for ABF to work flawlessly in complex networks. Network admins can use this feature effectively to engineer the traffic along desired paths and perform load-balancing and troubleshooting as and when required. 
However, we should note that since ABF is ACL-based, all packets which do not match an existing rule in the ACL will be subject to the default ACL rule, i.e. drop all. Therefore, it is suggested that the user put an explicit lower-priority rule to “permit” all traffic. This will ensure that all traffic which does not match an ABF rule is permitted and forwarded as normal.", "url": "/tutorials/acl-based-forwarding-and-object-tracking-for-ncs5xx-and-ncs55xx/", "author": "Tejas Lad", "tags": "NCS5500, NCS500, NCS5xx, NCS55xx, ACL, ABF, ABF_OT" } , "tutorials-chained-acl-for-ncs55xx-and-ncs5xx": { "title": "Chained ACL for NCS55xx and NCS5xx", "content": " On This Page Introduction Overview Feature Support What’s Not Supported Common ACL behaviour with hw-module profile Configuring a common ACL Hardware Programming Common ACL Implementation Resource Utilization with Common ACL. Reference Summary IntroductionIn our previous tech-notes, we introduced the ACL support (ACL Introduction), discussed different matching criteria (Matching Fragments, Packet Length Matching), and explored the concepts of different TCAM keys (UDK, UDF) for the NCS55xx and NCS5xx portfolio. In this tech-note, we will talk about Chained ACL and its use cases.OverviewPrior to IOS-XR release 7.2.1, the packet filter (pfilter) supported only one ACL per direction and per protocol on any given interface. In live production networks, operators often have ACL’s applied on multiple interfaces (including physical and sub-interfaces), and many of those ACL’s end up containing similar ACE’s. For example, consider an edge box of an ISP which has 2 sets of ACE’s. One set may be common ISP-specific ACE’s to protect the ISP’s infrastructure as a whole, and the other would be interface-specific ACE’s, for example to block customer-related address blocks. This is done to protect the ISP infrastructure against attacks by allowing only valid address blocks while denying the suspicious ones. To achieve this, we have to configure a unique ACL per interface. By doing this we may end up having multiple ACL’s with common ACE’s across all or most of them.Modifying ACL rules which are common to the provider infrastructure will require changes to be made on each and every interface. This leads to manageability issues if there are any common ACL entries which are needed on every or most interfaces. This ACL provisioning and modification can also be very cumbersome for any operator or enterprise, as any change to the common ACE’s impacts every customer interface. Another important point is that, since we have a unique ACL per interface, valuable hardware resources are wasted because the common ACEs are replicated in all ACL’s.To avoid impacting multiple customer interfaces due to such modifications, there have been multiple requests to support a feature which can accommodate more than one ACL on a single interface. The goal is to separate various types of ACLs for specific reasons, yet apply both of them on the same interface, in a defined order. Therefore, from IOS-XR release 7.2.1, we bring in support for Chained ACL, also known as Common ACL, across the NCS55xx and NCS5xx portfolio. This feature is supported on platforms having Q-MX, Jericho, Jericho+ and Jericho2 (Native and Compatible).Feature Support Only 1 common/chained IPv4 and IPv6 ACL is supported per line card.
The common/chained ACL can be applied to any type of interface which supports interface ACL's (i.e. VRF-enabled interfaces, VLAN interfaces, Bundle interfaces). The common/chained ACL is supported in the ingress direction only. The common/chained ACL is searched first, before the interface ACL. Editing of the common/chained ACL is supported. Statistics for the common/chained ACL are supported. There is no change in scale after applying the common/chained ACL. There is no performance impact with both the common and interface ACL applied. ACL with ABF is supported with common ACL from IOS-XR 7.6.1What’s Not Supported This feature is not supported in the egress direction. This feature is not supported on Layer 2 interfaces. ACL with object groups is not supported with common ACL. It cannot be configured on the same line card which has compression configured. Atomic replace of the common ACL is not supported.Common ACL behaviour with hw-module profile If the hw-module profile tcam format access-list command is used and common-acl is included in it, the common ACL configuration is allowed. If the hw-module profile tcam format access-list command is used but common-acl is not included, the common ACL configuration is not allowed. If the hw-module profile tcam format access-list command is not used at all, the common ACL configuration is allowed.Configuring a common ACLThis is how you configure the common ACL along with an interface-specific ACL. We will see the detailed configurations in a later section.Hardware ProgrammingConsider that you have configured the below ACL's on 2 interfaces along with a common ACL.interface TenGigE0/0/0/0ipv4 access-group common ACL_Comm ACL1 ingressinterface TenGigE0/0/0/1ipv4 access-group common ACL_Comm ACL2 ingressThe above figure shows how the hardware programming will happen in this case. The common ACL is programmed once in the TCAM and is located at the top of the TCAM. Interface ACL's are programmed below the common ACL. The TCAM search order is from top to bottom, which gives the common ACL precedence over the interface ACL. The single instance of the common ACL in the TCAM ensures scalability when thousands of interfaces are enabled on an NP. However, since the hardware resources for the common ACL must be reserved, a static number of TCAM entries is allocated.Note# An interface may contain only the common ACL, only an interface ACL, or both the common and interface ACL.Common ACL ImplementationNow that we have talked about the feature and its use case, let us move on to see how this feature is implemented.
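As the behaviour described above indicates, if a user-defined TCAM key is in use on the line card, the common-acl qualifier must be part of it. A rough sketch of such a profile is shown below; the exact list of qualifiers is deployment specific, and this particular combination is only an assumption for illustration.

hw-module profile tcam format access-list ipv4 src-addr dst-addr src-port dst-port common-acl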
We have below 2 ACL’s configured as an example.ipv4 access-list ACL1_full 10 deny ipv4 any 62.6.69.88 0.0.0.7 15 deny ipv4 62.6.69.88 0.0.0.7 any 20 deny ipv4 any 62.6.69.112 0.0.0.15 25 deny ipv4 62.6.69.112 0.0.0.15 any 30 deny ipv4 any 62.6.69.128 0.0.0.15 35 deny ipv4 62.6.69.128 0.0.0.15 any 40 deny ipv4 any 62.80.66.128 0.0.0.15 45 deny ipv4 62.80.66.128 0.0.0.15 any 50 deny ipv4 any 62.134.38.0 0.0.0.127 60 permit tcp any eq bgp host 1.2.3.1 70 permit tcp any host 1.2.3.1 eq bgp 80 deny ipv4 any host 1.2.3.1 90 deny ipv4 any 212.21.217.0 0.0.0.255 100 permit ipv4 any anyipv4 access-list ACL2_full 10 deny ipv4 any 62.6.69.88 0.0.0.7 15 deny ipv4 62.6.69.88 0.0.0.7 any 20 deny ipv4 any 62.6.69.112 0.0.0.15 25 deny ipv4 62.6.69.112 0.0.0.15 any 30 deny ipv4 any 62.6.69.128 0.0.0.15 35 deny ipv4 62.6.69.128 0.0.0.15 any 40 deny ipv4 any 62.80.66.128 0.0.0.15 45 deny ipv4 62.80.66.128 0.0.0.15 any 50 deny ipv4 any 62.134.38.0 0.0.0.127 60 permit tcp any eq bgp host 7.8.9.6 70 permit tcp any host 7.8.9.6 eq bgp 80 deny ipv4 any host 7.8.9.6 90 permit ipv4 any anyApplying it on 2 separate interfacesRP/0/RP0/CPU0#N55-24#show access-lists ipv4 usage pfilter location 0/0/CPU0 Interface # TenGigE0/0/0/0.10 Input ACL # Common-ACL # N/A ACL # ACL1_full Output ACL # N/AInterface # TenGigE0/0/0/0.20 Input ACL # Common-ACL # N/A ACL # ACL2_full Output ACL # N/ARP/0/RP0/CPU0#N55-24#RP/0/RP0/CPU0#N55-24#show controllers npu internaltcam location 0/0/CPU0 Mon Sep 21 11#22#15.856 UTCInternal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b pmf-0 1883 107 36 INGRESS_LPTS_IPV40 0 160b pmf-0 1883 18 45 INGRESS_RX_ISIS0 0 160b pmf-0 1883 23 54 INGRESS_QOS_IPV40 0 160b pmf-0 1883 15 56 INGRESS_QOS_MPLS0 0 160b pmf-0 1883 2 60 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 1 160b pmf-0 1962 34 47 INGRESS_ACL_L3_IPV4 From the above we can see 34 entries are being programmed in the TCAM.If you closely look at the 2 ACL’s we have most of the ACE’s in commonLet us apply the same ACL’s along with common ACL.ipv4 access-list common-acl 10 deny ipv4 any 62.6.69.88 0.0.0.7 15 deny ipv4 62.6.69.88 0.0.0.7 any 20 deny ipv4 any 62.6.69.112 0.0.0.15 25 deny ipv4 62.6.69.112 0.0.0.15 any 30 deny ipv4 any 62.6.69.128 0.0.0.15 35 deny ipv4 62.6.69.128 0.0.0.15 any 40 deny ipv4 any 62.80.66.128 0.0.0.15 45 deny ipv4 62.80.66.128 0.0.0.15 any 50 deny ipv4 any 62.134.38.0 0.0.0.127ipv4 access-list ACL1_specific 10 permit tcp any eq bgp host 1.2.3.1 20 permit tcp any host 1.2.3.1 eq bgp 30 deny ipv4 any host 1.2.3.1 40 deny ipv4 any 212.21.217.0 0.0.0.255 50 permit ipv4 any any ipv4 access-list ACL2_specific 10 permit tcp any eq bgp host 7.8.9.6 20 permit tcp any host 7.8.9.6 eq bgp 30 deny ipv4 any host 7.8.9.6 40 permit ipv4 any anyResource Utilization with Common ACL.RP/0/RP0/CPU0#N55-24#show controllers npu internaltcam location 0/0/CPU0 Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b pmf-0 1883 107 36 INGRESS_LPTS_IPV40 0 160b pmf-0 1883 18 45 INGRESS_RX_ISIS0 0 160b pmf-0 1883 23 54 INGRESS_QOS_IPV40 0 160b pmf-0 1883 15 56 INGRESS_QOS_MPLS0 0 160b pmf-0 1883 2 60 INGRESS_EVPN_AA_ESI_TO_FBN_DB0 1 160b pmf-0 1970 26 47 INGRESS_ACL_L3_IPV4From the above output we can see that with the 
use of the common ACL along with interface specific ACL’s, we are using only 26 entries in the TCAM as compared to the previous 34 entries. If we need to make changes to ACE’s which are common to interfaces, we just need to change them in the common ACL, with no need to make changes on each and every interface, which makes manageability easier. This is a very simple example with only 2 interfaces. The real benefit becomes visible when we apply it to many interfaces at the same time.Reference CCO Config GuideSummaryIn this tech-note we successfully demonstrated the concept of Chained or Common ACL. We saw how it makes ACL manageability easy and also helps in saving TCAM resources. This is another capability of the NCS55xx and NCS5xx in terms of dataplane security.", "url": "/tutorials/chained-acl-for-ncs55xx-and-ncs5xx/", "author": "Tejas Lad", "tags": "NCS55xx, NCS5xx, NCS5500, ACL, Chained ACL" } , "#": {} , "tutorials-introduction-to-ncs55xx-and-ncs5xx-lpts": { "title": "Introduction to NCS55xx and NCS5xx LPTS", "content": " On This Page Introduction Glossary LPTS Overview Main Components of LPTS LPTS in Pipeline Architecture NPU and Line Card CPU in action Supported Flow Types in Hardware LPTS as per IOS-XR 7.2.1 Handling Exception Packets How to check the default policer value of individual traps Dynamic LPTS Flow Type Feature Support Different Flow Type Categories LPTS Best Practices Reference Thanks Summary IntroductionIn our previous tech-notes, we focussed on exploring the data-plane security capabilities of NCS5xx and NCS55xx. In this tech-note, we will discuss the NCS55xx and NCS5xx capabilities around control plane protection. In today’s networks, control plane security is as important as data plane security. Customers need to focus on protecting the business critical control and management planes from malicious attacks and also from misconfiguration events which may overwhelm the processor and bring down the router, resulting in an unfortunate event. Cisco IOS devices have the concept of control plane policing, and in IOS-XR devices we use a very comprehensive and powerful feature called Local Packet Transport Services. (Reference).In IOS-XR LPTS, as part of the “for-us” packet delivery process, the rate at which packets are delivered is selectively monitored to avoid overwhelming the CPU. LPTS filters and polices the packets based on the defined flow-type rate in hardware before punting them to the software. Punt is the popular term used for sending packets to the control-plane for processing. It deals with packets that require special handling or cannot be switched by CEF (Cisco Express Forwarding). For more details please Visit. LPTS takes the control plane protection (CoPP) concept to a new level by automatically applying rate limiting to all packets that must be handled by any CPU on the device. The benefit is automated control of network health without relying on network operators, as it is difficult to configure these functions manually in large-scale networks.GlossaryBefore moving further, let’s define the terminologies which we will be referring to multiple times in the document.
Terms Description CoPP Control Plane Protection (More Details) “For-us” packets Control/management plane packets destined to or to be processed by a node/element in the IOS-XR system Flow A binding between a tuple (such as protocol, local address, local port, remote address, remote port) and an element FIB Forwarding Information Base – A table which is used to determine where a packet is to be forwarded. iFIB Internal Forwarding Information Base – A table that is used to determine where a “for-us” packet needs to be sent inside a running IOS-XR system when the pIFIB lookup fails LPTS Local Packet Transport Services Netio An IOS-XR process which performs packet forwarding in software, the equivalent of “process switching” pIFIB or pre-FIB Compact version of the iFIB. A filtered version of the pIFIB, the HW pIFIB, is loaded into the LC HW TCAM Tuple An ordered list of integers SDR Secure Domain Router. It provides a means of partitioning a router into multiple, independent routers. (More Details) SPP Software Path Process. It is a multiplexer/demultiplexer component between the NPU and clients on the LC/RP CPU SPiO Streamlined Packet IO, used for processing L2 control packets LPTS OverviewLPTS is an integral component of IOS-XR systems which provides firewall and policing functionality. LPTS maintains a complete per-interface table in the netio chain in the Line card CPU, making sure that packets are delivered to their intended destinations. IOS XR software classifies all ‘For Us’ control packets into 97 different flows. Each flow has its own hardware policer to restrict the punt traffic rate for the flow type. In traditional IOS-XR platforms (CRS/ASR9k) LPTS had different regions which consumed 16k entries each. But we do not have the same luxury in NCS55xx and NCS5xx due to limited TCAM resources. We need to use these resources very wisely to accommodate several features and functionalities. Therefore, to minimize the usage of resources, the LPTS platform dependent (PD) layer programs only the protocol default entries in the hardware. It punts the control packets to the Line card CPU Netio for a full LPTS lookup in the LPTS Decap node. We will see this in detail in later sections. The Network Input/Output (NetIO) process is responsible for forwarding packets in software. (For a deeper understanding of process switching and the slow path please refer).LPTS also plays a vital role in support of an NSR (Non Stop Routing) enabled control plane. LPTS will make use of the fabric infrastructure to deliver “for-us” control packets to both the Active and Standby RP for NSR enabled processes. LPTS also installs dynamic flow entries for certain protocol packets on-demand. An example would be an ICMP echo request sent out from the local router to a peer router. LPTS will create a flow entry in the Pre-IFIB so that the ICMP echo reply received from the peer router will be matched against it.Note# LPTS is applicable only for control and management plane traffic entering the router and destined to the local router.
Packets originated by the router and transit traffic are not processed by LPTS.Main Components of LPTSThe three component which LPTS needs to accomplish its task are# Port Arbitrator Process Aggregates bindings into flows Generates IFIB and Pre-IFIB Allocates Flow Group IDs Does not touch packets Flow Manager Process Copies IFIB slice updates from the Port Arbitrator to netio Does not touch packets Acts as a Sub-Server to the Port Arbitrator Pre-IFIB Manager Process Copies Pre-IFIB updates from the Port Arbitrator to netio Filters and translates Pre-IFIB updates into TCAM updates Does not touch packets LPTS in Pipeline ArchitectureThe Programmable Mapping and Filtering (PMF) engine block in IRPP forwarding asic is the most programmable block in the TCAM. It contains all the information of previous blocks like incoming PP port, packet type, forwarding lookup results. It also classifies packets based on predefined protocol fields all the way from Layer 2 to Layer 7. Based on these classifications, various actions can be applied to the packet. LPTS uses the PMF architecture to classify the packet and other pre-processing attributes to apply action (punt CPU, stats and police).NPU and Line Card CPU in action 1) A frame is received on the ingress interface of NCS55xx or NCS5xx. On receiving the packet necessary layer 1 and 2 checks are performed and layer 3 information is extracted from the packet and passes it to the forwarding ASIC. 2) Packet enters the ingress pipeline and traps are generated based on forwarding lookup. 3) The L3 forwarding engine does a Forwarding Information Base (FIB) lookup and determines whether the packet is a locally destined “for_us” packet. This is the first pass. The two traps generated are RxTrapReceive (for L3 packets) and RxTrapUserDefine_RECEIVE_L2 (for L2 packets). 4) By looking at the traps, the packets are recycled to ingress pipeline by skipping the egress pipeline lookups. The lookup key can contain various values such as# source IP, destination IP, source L4 port, destination L4 port, interface, vrf and compression id. 5) This TCAM lookup happens in PMF. The “for-us” control packets are then punted by encapsulating in the NPU header. In NPU, the “for-us” control packets undergo hardware TCAM lookup which will hit one of the protocol default TCAM entries. NPU header contains the control information such as – ingress port, destination node, trap code, trap qualifier, listener_tag, flow type, stats pointer etc. 6) LPTS entries are installed as 4 main groups i.e. IPv4, IPv6, CLNS and L2. In first pass the traps are different for L2 and L3 because the lookup tables are different which are generating the traps. After such classification, packets go into Software Path Process (SPP), that selects whether to send to L2/SPIO or NetIO process. The packets are received by SPP in IOS-XR by opening a raw socket to listen over the punt interface. SPP is a component that classifies packets in punt path to identify its client. Moreover, SPP does the parsing of npu header to extract the control information filled by NPU. However, it retains the npu header. Based on this parsing results it decides the client. (In our case NetIO or L2/SPIO). In case, there is no proper decision, the packet is forwarded to NetIO as a default client. 7) NetIO handles L3 Protocol Packets like ICMP, OSPF, BGP etc. Based on lookup result, the control packets will get policed and punted to LC CPU. 
8) In case of errors, the packets are also punted for diagnostics (even if the NPU decides to drop them) or for some action (to generate an ICMP error) via traps from the first pass itself, and are policed. 9) When the “For Us” packets are received by the LPTS decap node in NetIO, LPTS does an iFIB lookup, finds the protocol client associated with the packet and delivers the packet to the protocol stack. Most of the time this lookup is skipped, which improves performance, as the work already done in the NPU does not have to be repeated in the CPU. 10) The Streamlined Packet IO - SPIO is used by L2 processes like CFM, STP, ARP etc. When a packet reaches NetIO/SPIO from SPP, the control information is transferred. NetIO/SPIO strips off the NPU header and processes the actual packet. 11) The CPU runs the software processes that decapsulate the packets and deliver them to the correct stack.Note# Modular systems will have 2 Route Processors (RP). For fixed platforms there will be only 1 Route Processor (RP).Sample result of an LPTS entryWe have configured ISIS on the interfaces Ten0/0/0/6 and Ten0/0/0/7. Let us check the LPTS hardware entry.RP/0/RP0/CPU0#N55-24#show isis neighbors IS-IS 1 neighbors#System Id Interface SNPA State Holdtime Type IETF-NSFN540-49 Te0/0/0/7 *PtoP* Up 28 L2 Capable N540-49 Te0/0/0/6 *PtoP* Up 26 L2 Capable RP/0/RP0/CPU0#N55-24#show lpts pifib hardware entry location 0/0/CPU0 | beg TenGigE0/0/0/6Interface # TenGigE0/0/0/6Is Fragment # 0Domain # 0-defaultListener Tag # IPv4_STACKFlow Type # PIM-mcast-knownDestNode # Deliver RP0Dest Type # DlvrPunt Queue Prio # MEDIUMHardware NPU Data ------- NPU # 0TCAM entry # 209Policer_id # 0x7d78Stats_id # 0x800001fdStats_hdl # 0x8ed4ef10Compression_id # 0x1Accepted/Dropped # 0/0 ---------------------------------------------------As we discussed in the previous sections, the hardware lookup result contains various fields like Interface, DestNode, Listener Tag, Flow-type and the hardware NPU data. Similarly we can get the output of the default flow types as well. Policing is offloaded to the NPU before packets hit the CPU to provide control plane security. Each flow policer has a default static policer rate value. Each flow policer value can be configured from 0 (to drop all packets matching the classification criteria) to a maximum of 50K PPS, as compared to 100k in ASR9k. Similar to the ASR9k platform, the LPTS policers work on a per NPU basis. For example, if the LPTS police value is set to 1000pps for a flow, it means that every NPU on the LC can punt with 1000pps to the CPU for that flow.LPTS takes effect on all applications that receive packets from outside the router. LPTS functions without any need for customer configuration. However, the policer values can be customized if required. As seen above, the PIM-mcast-known default police value is 2100.RP/0/RP0/CPU0#N55-24#show lpts pifib hardware police location 0/0/CPU0 | in PIM-mcast-knownPIM-mcast-known 32120 Static 2100 2078 0 0-defaultWe can configure it to a new value and see that it is taking effect. Now, if the control packets go beyond 1000 pps, the packets will be policed.RP/0/RP0/CPU0#N55-24(config)#lpts pifib hardware police flow Pim multicast known rate 1000RP/0/RP0/CPU0#N55-24#show lpts pifib hardware police location 0/0/CPU0 | in PIM-mcast-knownPIM-mcast-known 32120 Global 1000 1000 0 0-defaultNote# You should be very careful while changing these default values.
The generic best practice for LPTS is to use the default configurations and not change anything.LPTS can use the following tuples to identify a packet# Tuples VRF L3-protocol L4-protocol Interface Source Address Source Port Destination Address Destination Port Supported Flow Types in Hardware LPTS as per IOS-XR 7.2.1The below command displays pre-Internal Forwarding Information Base (pIFIB) information for the designated node. It shows all the Flow Types supported and their default policer values.RP/0/RP0/CPU0#N55-24#show lpts pifib hardware police location all Handling Exception PacketsSometimes routers need to manage unexpected packets that are not meant as “for-us” packets, and software intervention is required to handle them. Therefore they need to be sent to the CPU. These packets are also known as exception packets (TTL expired, invalid headers, adjacency not available etc). For example, if a destination address is not present in the FIB and results in a miss, then an ICMP unreachable packet needs to be generated and sent back to the source system of the packet. Thus, it needs to be processed by the RP CPU. Another example may be Glean Adjacency. If the L2 MAC address for a destination IP address or next hop is not present in the FIB, the packet gets sent to the LC CPU to trigger an ARP request destined to the host or next-hop. These types of scenarios are handled by hardware traps. Traps are specific predefined rules in the NPU. All hardware traps are statically programmed with predefined policer rates per NPU. LPTS, on the other hand, is dynamic and entries are created as per the configuration, whereas traps get defined in the router during bootup itself. As with the LPTS punt policers, these trap policers can be configured with policer rate values from 0 pps (for a complete drop) up to a predefined max limit per trap. As mentioned above, we need to take care while changing the default values as that will impact both functionality and CPU performance.The below output shows the full list of supported traps. (Full List of RX Traps)Notice that for the LPTS flows and policer values we used the commandRP/0/RP0/CPU0#N55-24#show lpts pifib hardware police location all It checks the hardware entries in the pIFIB. As mentioned above, since traps are rules in the NPU, we need to check the NPU stats for all the traps.
The below command shows all the supported traps.RP/0/RP0/CPU0#N55-24#show controllers npu stats traps-all instance all location 0/0/CPU0How to check the default policer value of individual trapsRP/0/RP0/CPU0#N55-24#show controllers npu stats traps-all instance all location 0/0/CPU0 | in TtlRxTrapIpv4Ttl0 0 108 0x6c 32010 0 0 RxTrapIpv4Ttl1 0 112 0x70 32010 0 0 RxTrapMplsTtl0 0 141 0x8d 32014 0 0 RxTrapMplsTtl1 0 142 0x8e 32014 0 0 RP/0/RP0/CPU0#N55-24#attach location 0/0/CPU0 Tue Oct 6 16#04#19.751 UTCLast login# Tue Oct 6 15#56#35 2020 from 172.0.16.1export PS1='#'[xr-vm_node0_0_CPU0#~]$export PS1='#'#dpa_show puntpolicer | grep -e Def -e 32010 Def CIR Rate Conf CIR Rate CIR Burst ID 10 - 0# 100 0 100 32010From the above output we can see the trap policer default value is 100 pps.Configuring a new policer rateRP/0/RP0/CPU0#N55-24#show running-config lpts Tue Oct 6 16#05#05.994 UTClpts punt police location 0/0/CPU0 exception ipv4 ttl-error rate 200RP/0/RP0/CPU0#N55-24#attach location 0/0/CPU0 Tue Oct 6 16#05#21.713 UTCLast login# Tue Oct 6 16#04#19 2020 from 172.0.16.1export PS1='#'[xr-vm_node0_0_CPU0#~]$export PS1='#'##dpa_show puntpolicer | grep -e Def -e 32010 Def CIR Rate Conf CIR Rate CIR Burst ID 10 - 0# 100 200 100 32010From the above output we can see that the new policer value has been programmed in the hardware.Similarly, we can change the default trap policer values for exception and protocol packets.Dynamic LPTS Flow TypeOn platforms with limited hardware resources like the NCS55xx and NCS5xx, we cannot support all LPTS flows in the TCAM hardware, where resources are shared across multiple features. Therefore, there is a need to provide a configurable option for customers to decide which flow types they need to program in the hardware and the maximum LPTS entries per flow type. This can help in saving a lot of hardware resources. Customers can dynamically turn on/off a particular flow type and set the maximum entries for that type using the CLI, to decide the LPTS entries to be programmed in the hardware. This feature is supported from IOS-XR 6.2.x onwards.Feature Support All the mandatory entries will be programmed in TCAM irrespective of the configuration. If we do not configure the new CLI, the pifib process will have the default behavior as earlier, without any flow limitation or maximum size set. For unsupported flow types we will get errors on the platform. Flow type maximum values learned from the configuration will take precedence over the default list. Un-configured flows will be set to the default static values set by the platform. Non-Mandatory entries can also be configured using the same CLI. The configuration has local scope - meaning we can set different flows and maximum flows per Line Card. Maximum hardware entries across all LPTS flows is 8k.
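Before the detailed verification and examples below, here is a minimal sketch of the dynamic-flows CLI shape. The flow names and values are only illustrative (the same commands appear in the running configuration shown later in this section), and keep in mind that the sum of all flow maxima for a location must stay within the 8k per line card budget, otherwise the commit is rejected, as demonstrated below.
lpts pifib hardware dynamic-flows location 0/0/CPU0
 flow pim multicast known max 400
 flow igmp max 500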
Let us verify the same with below outputRP/0/RP0/CPU0#N55-24#show lpts pifib dynamic-flows statistics location 0/0/CPU0 Dynamic-flows Statistics# ------------------------- (C - Configurable, T - TRUE, F - FALSE, * - Configured) Def_Max - Default Max Limit Conf_Max - Configured Max Limit HWCnt - Hardware Entries Count ActLimit - Actual Max Limit SWCnt - Software Entries Count P, (+) - Pending Software Entries FLOW-TYPE C Def_Max Conf_Max HWCnt/ActLimit SWCnt P -------------------- -- ------- -------- -------/-------- ------- - Fragment T 4 -- 2/4 2 OSPF-mc-known T 600 -- 6/600 6 OSPF-mc-default T 8 -- 4/8 4 OSPF-uc-known T 300 -- 3/300 3 OSPF-uc-default T 4 -- 2/4 2 ISIS-known T 300 -- 4/300 6 + ISIS-default T 2 -- 1/2 1 BGP-known T 900 -- 2/900 2 BGP-cfg-peer T 900 -- 2/900 2 BGP-default T 8 -- 4/8 4 PIM-mcast-default T 40 -- 0/40 0 PIM-mcast-known T 300 -- 10/300 10 PIM-ucast T 40 -- 2/40 2 IGMP T 1164 -- 31/1164 31 ICMP-local T 4 -- 4/4 4 ICMP-control T 10 -- 5/10 5 ICMP-default T 18 -- 9/18 9 ICMP-app-default T 4 -- 2/4 2 LDP-TCP-known T 300 -- 1/300 1 LDP-TCP-cfg-peer T 300 -- 1/300 1 LDP-TCP-default T 40 -- 0/40 0 LDP-UDP T 300 -- 2/300 2 All-routers T 300 -- 10/300 10 RSVP-default T 4 -- 1/4 1 RSVP-known T 300 -- 2/300 2 SNMP T 300 -- 0/300 0 SSH-known T 150 -- 0/150 0 SSH-default T 40 -- 0/40 0 HTTP-known T 40 -- 0/40 0 HTTP-default T 40 -- 0/40 0 SHTTP-known T 40 -- 0/40 0 SHTTP-default T 40 -- 0/40 0 TELNET-known T 150 -- 0/150 0 TELNET-default T 4 -- 1/4 1 UDP-known T 40 -- 0/40 0 UDP-listen T 40 -- 2/40 2 UDP-default T 4 -- 2/4 2 TCP-known T 40 -- 0/40 0 TCP-listen T 40 -- 0/40 0 TCP-default T 4 -- 2/4 2 Raw-default T 4 -- 2/4 2 ip-sla T 50 -- 0/50 0 EIGRP T 40 -- 0/40 0 PCEP T 20 -- 0/20 0 GRE T 4 -- 2/4 2 VRRP T 150 -- 0/150 0 HSRP T 40 -- 0/40 0 MPLS-oam T 40 -- 2/40 2 DNS T 40 -- 0/40 0 RADIUS T 40 -- 0/40 0 TACACS T 40 -- 0/40 0 NTP-default T 4 -- 0/4 0 NTP-known T 150 -- 0/150 0 DHCPv4 T 40 -- 0/40 0 DHCPv6 T 40 -- 0/40 0 TPA T 100 -- 0/100 0 PM-TWAMP T 36 -- 0/36 0 --------------------------------------------------- Active TCAM Usage # 7960/8000 [Platform MAX# 8000] HWCnt/SWCnt # 123/130---------------------------------------------------From the above output we can see the maximum entries supported in the hardware is 8000. Let us take example of our ISIS configuration. From the output we can see 300 entries alllocated for ISIS- PIM-mcast-known. That means for 300 sessions we will have the hardware programming or entries and we will be able to track that via LPTS and subjected to the properties of hardware LPTS (police/stats etc). The entries above 300 will be programmed in sofware. This can be seen via the column SWCnt. All the entries which are not having entries in hardware will be kept under a common pool in sofware and will be subjected to the properties different than hardware LPTS. Let us take an example of expanding the max entries from 300 to 400.Note# It is recommended to use HW flows and have no software flow.RP/0/RP0/CPU0#N55-24(config)#lpts pifib hardware dynamic-flows location 0/0/CPU0RP/0/RP0/CPU0#N55-24(config-pifib-flows-per-node)#flow isis known max 400RP/0/RP0/CPU0#N55-24(config-pifib-flows-per-node)#commit % Failed to commit one or more configuration items during a pseudo-atomic operation. All changes made have been reverted. Please issue 'show configuration failed [inheritance]' from this session to view the errorsRP/0/RP0/CPU0#N55-24(config-pifib-flows-per-node)#show configuration failedWed Oct 7 08#42#42.338 UTC!! 
SEMANTIC ERRORS# This configuration was rejected by !! the system due to semantic errors. The individual !! errors with each failed configuration command can be !! found below.lpts pifib hardware dynamic-flows location 0/0/CPU0 flow isis known max 400!!% invalid max_limit value input# Platform MAX limit is 8000. But the configured flow max value is 8060. Please decrement config total by 60 OR increase Platform MAX.!end```We can see that we are crossing the max limit due to which our configuration is getting rejected. How to overcome this ? To overcome this limitation we need to compromise on reducing max entries for other flow type. This will be different for every customers. Say for example customer decides reduce the IGMP to accomodate the extra ISIS entries.RP/0/RP0/CPU0#N55-24#show running-config lpts Wed Oct 7 08#49#44.605 UTClpts pifib hardware dynamic-flows location 0/0/CPU0 flow pim multicast known max 400 flow igmp max 500RP/0/RP0/CPU0#N55-24#show lpts pifib dynamic-flows statistics location 0/0/CPU$ Dynamic-flows Statistics# ------------------------- (C - Configurable, T - TRUE, F - FALSE, * - Configured) Def_Max - Default Max Limit Conf_Max - Configured Max Limit HWCnt - Hardware Entries Count ActLimit - Actual Max Limit SWCnt - Software Entries Count P, (+) - Pending Software Entries FLOW-TYPE C Def_Max Conf_Max HWCnt/ActLimit SWCnt P -------------------- -- ------- -------- -------/-------- ------- - Fragment T 4 -- 2/4 2 OSPF-mc-known T 600 -- 6/600 6 OSPF-mc-default T 8 -- 4/8 4 OSPF-uc-known T 300 -- 3/300 3 OSPF-uc-default T 4 -- 2/4 2 ISIS-known T 300 -- 4/300 6 + ISIS-default T 2 -- 1/2 1 BGP-known T 900 -- 2/900 2 BGP-cfg-peer T 900 -- 2/900 2 BGP-default T 8 -- 4/8 4 PIM-mcast-default T 40 -- 0/40 0 PIM-mcast-known T* 300 400 10/400 10 PIM-ucast T 40 -- 2/40 2 IGMP T* 1164 500 31/500 31Different Flow Type Categories Flow Type Details Not supported Flow types not supported by platform. All LPTS PI cli configuration will return error for unsupported flow types. Supported, Default and Mandatory Flow types Supported by platform and not configurable.FragmentRaw-DefaultOSPF Unicast defaultOSPF Multicast default - 224.0.0.5, 224.0.0.6, ff02##5 and ff02##6BGP default - two entry with src and dest port as 179ICMP-localICMP-controlICMP-defaultICMP-app-defaultAll-routersSSH-defaultGRESNMPPIM-mcast-defaultVRRPPIM-ucastIGMPUDP defaultTCP defaultISIS defaultRSVP defaultTelnet defaultDNSNTP defaultLDP-UDP Supported, Default and non-mandatory Default flow types which will be programmed in the hardware at process boot. These flows are also configurable via cli. Configurable Flows that are configurable via cli (non-default). Download to hardware based on cli config or profile.BGP-knownBGP-cfg-peerLDP-TCP-knownLDP-TCP-cfg-peerSSH-knownTelnet KnownNTP knownLDP-UDPOSPF-uc-knownOSPF-mc-knownRSVP knownISIS knownTPA LPTS Best Practices Cisco Control Plane Best Practices guide can be used as a best pratice. We will have a separate document dedicated for LPTS best pratices for NCS5500 and NCS500 portfolio as per use cases.Reference ASR9k LPTS CCO Configuration Guide Forwarding ArchitectureThanksA very big thanks to Balamurugan Subramanian (balasubr), Dipesh Raghuvanshi (diraghuv), Prafull Soni (sprafull) for their critical inputs during the collateral.SummaryHope this content was useful and you understood the importance of LPTS and its implementation on the platform. To summarize LPTS consists of three main functions# Filtering of what can be punted and categorised as a flow. 
Deciding where the flow needs to go. Policing of the flows per NPU. The key message is that LPTS functions without the need for any explicit configuration and is an always-ON feature on all routing platforms that run IOS-XR. An important thing to remember is that LPTS is applicable for control plane traffic (“for-us” only traffic) entering the router and destined to the local router. Packets originated by the router and transit traffic are not processed by LPTS. Besides forwarding “for-us” packets to the right destination, it also polices the incoming “for-us” packets in the LC hardware. Everything we saw in this document is at the global level. This takes effect on the entire router. In the next tech note we will cover how we can fine-tune the hardware policing specific to domains. Stay tuned till then !!!", "url": "/tutorials/introduction-to-ncs55xx-and-ncs5xx-lpts/", "author": "Tejas Lad", "tags": "NCS5500, NCS500, CoPP, control plane, LPTS" } , "tutorials-ncs-5500-fabric-migration": { "title": "NCS-5500 Fabric Migration", "content": " NCS-5500 v1 to v2 Fabric Migration Introduction Video Lab topology Migration steps Before getting started Uploading IOS XR images and verification Software installation Hardware migration Other show commands you could use Configuration used for the test TREX configs Router under test config Acknowledgement You can find more content related to NCS-5500 and NCS-5700 following this link.IntroductionBefore insertion of 400GE line cards, the NCS-5500 needs to be modified to support packet forwarding and cooling. In particular, it’s necessary to use “version 2” (v2) fabric cards and fan trays. During the first 3 years of its existence, the NCS-5500 chassis were shipped with v1 FC/FT.In this blog post, Benoit Mercier Des Rochettes (Manager in the CX organization) will detail the different steps necessary to guarantee a smooth migration from v1 to v2.At the moment of this video and blog publication, the v2 “commons” were only available for the 8-slot and 16-slot chassis, with the 4-slot version being in the roadmap.Note# this is the short version of the MOP prepared by the CX team for this migration.
The exhaustive one being specific to customer’s requirement, we purposefully removed a lot of content for this article.VideoLab topologyThe testbed is made of 3 routers (NCS-5508 with 36x100G-SE-S line card, NCS-5501-SE and ASR9000) and one traffic generator (TRex# more details available here).We will monitor 3 services (and flows) during the test# L3VPN PW PW over RSVP-TEA route generator completes this picture, advertising 996,940 IPv4 and 225,280 IPv6 routes over BGP (the internet routing scale projection for our customer).The purpose of this configuration being to identify if/when each step will be disruptive for the customer’s services.Migration stepsIn this demo, we used an 8-slot chassis with NC55-36X100G-SE-S (powered by Jericho+ ASICs) and starting with IOS XR 6.3.15.Before getting startedWith the following show command, we verify# line cards inserted the chassis type# 5508 the Fan Trays type# NC55-5508-FAN is v1 the Fabric Cards type# NC55-5508-FC is v1RP/0/RP0/CPU0#5508#show platformNode Type State Config state--------------------------------------------------------------------------------0/2/CPU0 NC55-36X100G-A-SE IOS XR RUN NSHUT0/2/NPU0 Slice UP0/2/NPU1 Slice UP0/2/NPU2 Slice UP0/2/NPU3 Slice UP0/RP0/CPU0 NC55-RP-E(Active) IOS XR RUN NSHUT0/RP1/CPU0 NC55-RP-E(Standby) IOS XR RUN NSHUT0/FC0 NC55-5508-FC OPERATIONAL NSHUT0/FC1 NC55-5508-FC OPERATIONAL NSHUT0/FC2 NC55-5508-FC OPERATIONAL NSHUT0/FC3 NC55-5508-FC OPERATIONAL NSHUT0/FC4 NC55-5508-FC OPERATIONAL NSHUT0/FC5 NC55-5508-FC OPERATIONAL NSHUT0/FT0 NC55-5508-FAN OPERATIONAL NSHUT0/FT1 NC55-5508-FAN OPERATIONAL NSHUT0/FT2 NC55-5508-FAN OPERATIONAL NSHUT0/SC0 NC55-SC OPERATIONAL NSHUT0/SC1 NC55-SC OPERATIONAL NSHUTRP/0/RP0/CPU0#5508#We check the currently used IOS XR versionRP/0/RP0/CPU0#5508#show install active summary Active Packages# 18 ncs5500-xr-6.3.15 version=6.3.15 [Boot image] ncs5500-mcast-2.1.0.0-r6315 ncs5500-mpls-2.1.0.0-r6315 ncs5500-mgbl-4.0.0.0-r6315 ncs5500-mpls-te-rsvp-2.2.0.0-r6315 ncs5500-isis-1.3.0.0-r6315 ncs5500-k9sec-4.1.0.0-r6315 ncs5500-common-pd-fib-1.1.0.1-r6315.CSCvi05806 ncs5500-iosxr-fwding-5.0.0.4-r6315.CSCvi05806 ncs5500-fwding-4.0.0.1-r6315.CSCvi25051 ncs5500-dpa-fwding-4.1.0.7-r6315.CSCvh03531 ncs5500-dpa-3.0.0.12-r6315.CSCvh03531 cisco-klm-0.1.p1-r0.0.CSCvn55720.xr kernel-image-3.14.23-wr7.0.0.2-standard-3.14.p1-r0.1.CSCvn55720.xr cisco-klm-zermatt-0.1.p1-r0.0.CSCvn55720.xr ncs5500-os-support-4.0.0.3-r6315.CSCvq17485 ncs5500-infra-4.1.0.11-r6315.CSCvj64412 ncs5500-bgp-1.1.0.3-r6315.CSCvj90955RP/0/RP0/CPU0#5508#show install committed summary Committed Packages# 28 ncs5500-xr-6.3.15 version=6.3.15 [Boot image] ncs5500-mcast-2.1.0.0-r6315 ncs5500-mpls-2.1.0.0-r6315 ncs5500-mgbl-4.0.0.0-r6315 ncs5500-mpls-te-rsvp-2.2.0.0-r6315 ncs5500-isis-1.3.0.0-r6315 ncs5500-k9sec-4.1.0.0-r6315 ncs5500-infra-4.1.0.4-r6315.CSCvi54033 ncs5500-common-pd-fib-1.1.0.1-r6315.CSCvi05806 ncs5500-iosxr-fwding-5.0.0.4-r6315.CSCvi05806 ncs5500-dpa-3.0.0.6-r6315.CSCvi05806 ncs5500-dpa-fwding-4.1.0.4-r6315.CSCvi05806 ncs5500-dpa-3.0.0.8-r6315.CSCvh75088 ncs5500-infra-4.1.0.6-r6315.CSCvh03531 ncs5500-os-support-4.0.0.1-r6315.CSCvi50726 ncs5500-dpa-3.0.0.10-r6315.CSCvi25051 ncs5500-fwding-4.0.0.1-r6315.CSCvi25051 ncs5500-dpa-fwding-4.1.0.7-r6315.CSCvh03531 ncs5500-dpa-3.0.0.12-r6315.CSCvh03531 ncs5500-os-support-4.0.0.2-r6315.CSCvg80365 cisco-klm-0.1.p1-r0.0.CSCvn55720.xr kernel-image-3.14.23-wr7.0.0.2-standard-3.14.p1-r0.1.CSCvn55720.xr cisco-klm-zermatt-0.1.p1-r0.0.CSCvn55720.xr ncs5500-infra-4.1.0.9-r6315.CSCvq17485 
ncs5500-os-support-4.0.0.3-r6315.CSCvq17485 ncs5500-infra-4.1.0.11-r6315.CSCvj64412 ncs5500-bgp-1.1.0.2-r6315.CSCvj15728 ncs5500-bgp-1.1.0.3-r6315.CSCvj90955RP/0/RP0/CPU0#5508#It’s important to verify with this show command output that both route processor processes are in sync. NSR status must be “Ready”.RP/0/RP0/CPU0#5508#show redundancy summary Active Node Standby Node ----------- ------------ 0/RP0/CPU0 0/RP1/CPU0 (Node Ready, NSR#Ready)RP/0/RP0/CPU0#5508#Verify we don’t have any pending firmware upgrade. “CURRENT” is the expected status for all elements.RP/0/RP0/CPU0#5508#show hw-module fpd FPD Versions =================Location Card type HWver FPD device ATR Status Running Programd------------------------------------------------------------------------------0/2 NC55-36X100G-A-SE 0.303 MIFPGA CURRENT 0.03 0.030/2 NC55-36X100G-A-SE 0.303 Bootloader CURRENT 0.13 0.130/2 NC55-36X100G-A-SE 0.303 DBFPGA CURRENT 0.14 0.140/2 NC55-36X100G-A-SE 0.303 IOFPGA CURRENT 0.21 0.210/2 NC55-36X100G-A-SE 0.303 SATA CURRENT 5.00 5.000/RP0 NC55-RP 1.1 Bootloader CURRENT 9.28 9.280/RP0 NC55-RP 1.1 IOFPGA CURRENT 0.09 0.090/RP1 NC55-RP 1.1 Bootloader CURRENT 9.28 9.280/RP1 NC55-RP 1.1 IOFPGA CURRENT 0.09 0.090/FC0 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC0 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC1 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC1 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC2 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC2 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC3 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC3 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC4 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC4 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC5 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC5 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/SC0 NC55-SC 1.4 Bootloader CURRENT 1.74 1.740/SC0 NC55-SC 1.4 IOFPGA CURRENT 0.10 0.100/SC1 NC55-SC 1.4 Bootloader CURRENT 1.74 1.740/SC1 NC55-SC 1.4 IOFPGA CURRENT 0.10 0.10RP/0/RP0/CPU0#5508#adminsysadmin-vm#0_RP0# show hw-module fpd FPD Versions ===============Location Card type HWver FPD device ATR Status Run Programd-------------------------------------------------------------------------------0/2 NC55-36X100G-A-SE 0.303 Bootloader CURRENT 0.13 0.130/2 NC55-36X100G-A-SE 0.303 DBFPGA CURRENT 0.14 0.140/2 NC55-36X100G-A-SE 0.303 IOFPGA CURRENT 0.21 0.210/2 NC55-36X100G-A-SE 0.303 SATA CURRENT 5.00 5.000/RP0 NC55-RP 1.1 Bootloader CURRENT 9.28 9.280/RP0 NC55-RP 1.1 IOFPGA CURRENT 0.09 0.090/RP1 NC55-RP 1.1 Bootloader CURRENT 9.28 9.280/RP1 NC55-RP 1.1 IOFPGA CURRENT 0.09 0.090/FC0 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC0 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC1 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC1 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC2 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC2 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC3 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC3 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC4 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC4 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC5 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC5 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/SC0 NC55-SC 1.4 Bootloader CURRENT 1.74 1.740/SC0 NC55-SC 1.4 IOFPGA CURRENT 0.10 0.100/SC1 NC55-SC 1.4 Bootloader CURRENT 1.74 1.740/SC1 NC55-SC 1.4 IOFPGA CURRENT 0.10 0.10sysadmin-vm#0_RP0#Make sure we have fpd auto-upgrade configured.RP/0/RP0/CPU0#5508#show running-config | i fpdBuilding configuration...fpd auto-upgrade enableRP/0/RP0/CPU0#5508#admincisco connected 
from 127.0.0.1 using console on 5508sysadmin-vm#0_RP0# show running-config | i fpdfpd auto-upgrade enablesysadmin-vm#0_RP0#We back up current configuration (and admin config) in both RP0 and RP1 harddisk.In normal condition, the system will reboot with a RP0 active and RP1 standby, but just in case RP0 is slow to restart and RP1 takes the ownership, it’s better to have the configurations on both cards.copy running-config harddisk#PRE_running_config_122719.txtcopy harddisk#PRE_running_config_122719.txt location 0/RP0/CPU0 harddisk# location 0/RP1/CPU0admincopy running-config harddisk#PRE_admin-running_config_122719.txtcopy harddisk#PRE_admin-running_config_122719.txt location 0/RP0 harddisk# location 0/RP1Uploading IOS XR images and verificationIOS XR images can be downloaded on Software Download site#Extract the tar file and upload the iso and rpm files in a dedicated folder on harddisk#Then verify the integrity of the files with the followingRP/0/RP0/CPU0#5508#cd harddisk#/663RP/0/RP0/CPU0#5508#show md5 file ncs5500-isis-2.2.0.0-r663.x86_64.rpmf48a8fc9bdf03848a8e1a9e02d30a9cdRP/0/RP0/CPU0#5508#show md5 file ncs5500-mcast-3.1.0.0-r663.x86_64.rpmcd587e15d686eec15b75d64f1c08e482RP/0/RP0/CPU0#5508#show md5 file ncs5500-mini-x-6.6.3.iso418e65ff1228b17ec376c768ba305abeRP/0/RP0/CPU0#5508#show md5 file ncs5500-mpls-2.1.0.0-r663.x86_64.rpm4d58ba5c05a677b3e58c65d528344270RP/0/RP0/CPU0#5508#show md5 file ncs5500-k9sec-3.1.0.0-r663.x86_64.rpmc9ea261c1416b529b89b2691a5a22e9fRP/0/RP0/CPU0#5508#show md5 file ncs5500-mpls-te-rsvp-4.1.0.0-r663.x86_64$65799fa3c474ae149bec05c73abfa93dRP/0/RP0/CPU0#5508#show md5 file ncs5500-mgbl-3.0.0.0-r663.x86_64.rpm71049785d117e75545d574a1e874a629RP/0/RP0/CPU0#5508#Software installationAfter verifying the matching of the MD5 hashing, we can proceed to the installation.RP/0/RP0/CPU0#5508#install add source /harddisk#/663 ncs5500-isis-2.2.0.0-r663.x86_64.rpm ncs5500-k9sec-3.1.0.0-r663.x86_64.rpm ncs5500-mcast-3.1.0.0-r663.x86_64.rpm ncs5500-mgbl-3.0.0.0-r663.x86_64.rpm ncs5500-mini-x-6.6.3.iso ncs5500-mpls-2.1.0.0-r663.x86_64.rpm ncs5500-mpls-te-rsvp-4.1.0.0-r663.x86_64.rpm Install operation 6 started by cisco# install add source /harddisk#/663 ncs5500-isis-2.2.0.0-r663.x86_64.rpm ncs5500-k9sec-3.1.0.0-r663.x86_64.rpm ncs5500-mcast-3.1.0.0-r663.x86_64.rpm ncs5500-mgbl-3.0.0.0-r663.x86_64.rpm ncs5500-mini-x-6.6.3.iso ncs5500-mpls-2.1.0.0-r663.x86_64.rpm ncs5500-mpls-te-rsvp-4.1.0.0-r663.x86_64.rpmDec 19 15#55#19 Install operation will continue in the backgroundRP/0/RP0/CPU0#5508#show install requestThe install add operation 6 is 60% completeRP/0/RP0/CPU0#5508#show install requestThe install add operation 6 is 60% completeRP/0/RP0/CPU0#Dec 19 16#08#50.052 # sdr_instmgr[1188]# %INSTALL-INSTMGR-2-OPERATION_SUCCESS # Install operation 6 finished successfully Dec 19 16#08#51 Install operation 6 finished successfullyRP/0/RP0/CPU0#5508#show install inactive summary10 inactive package(s) found# ncs5500-mpls-te-rsvp-4.1.0.0-r663 cisco-klm-zermatt-0.1-r0.0.xr ncs5500-isis-2.2.0.0-r663 ncs5500-mcast-3.1.0.0-r663 ncs5500-mini-x-6.6.3 cisco-klm-0.1-r0.0.xr ncs5500-k9sec-3.1.0.0-r663 ncs5500-mpls-2.1.0.0-r663 kernel-image-3.14.23-wr7.0.0.2-standard-3.14-r0.1.xr ncs5500-mgbl-3.0.0.0-r663RP/0/RP0/CPU0#5508# Then the preparation#RP/0/RP0/CPU0#5508#install prepare id 4Nov 18 08#16#07 Install operation 5 started by cisco# install prepare id 4Nov 18 08#16#07 Package list#Nov 18 08#16#08 ncs5500-mpls-2.1.0.0-r663.x86_64Nov 18 08#16#08 ncs5500-mpls-te-rsvp-4.1.0.0-r663.x86_64Nov 18 08#16#08 
ncs5500-isis-2.2.0.0-r663.x86_64Nov 18 08#16#08 ncs5500-mgbl-3.0.0.0-r663.x86_64Nov 18 08#16#08 ncs5500-mini-x-6.6.3Nov 18 08#16#08 ncs5500-mcast-3.1.0.0-r663.x86_64Nov 18 08#16#08 ncs5500-k9sec-3.1.0.0-r663.x86_64Nov 18 08#16#14 Install operation will continue in the backgroundRP/0/RP0/CPU0#5508#RP/0/RP0/CPU0#Nov 18 08#16#43.510 UTC# sdr_instmgr[1188]# %PKT_INFRA-FM-6-FAULT_INFO # INSTALL-IN-PROGRESS #DECLARE #0/RP0/CPU0# INSTALL_IN_PROGRESS Alarm # being DECLARED for the systemRP/0/RP0/CPU0#Nov 18 08#17#20.002 UTC# resmon[175]# %HA-HA_WD-4-DISK_WARN # A monitored device / ( rootfs#/ ) is above 80% utilization. Current utilization = 83. Please remove unwanted user files and configuration rollback points.RP/0/RP0/CPU0#5508#show install requestThe install prepare operation 5 is 40% completeRP/0/RP0/CPU0#5508#RP/0/RP0/CPU0#Nov 18 08#32#19.970 UTC# resmon[175]# %HA-HA_WD-3-DISK_ALARM_ALERT # A monitored device / ( rootfs#/ ) is above 80% utilization. Current utilization = 83. Please remove unwanted user files and configuration rollback points.RP/0/RP0/CPU0#Nov 18 08#33#01.888 UTC# sdr_instmgr[1188]# %INSTALL-INSTMGR-2-OPERATION_SUCCESS # Install operation 5 finished successfullyNov 18 08#33#04 Install operation 5 finished successfullyRP/0/RP0/CPU0#Nov 18 08#33#16.498 UTC# SSHD_[69004]# %SECURITY-SSHD-6-INFO_REKEY # Server initiated time rekey for session 0 , session_rekey_count = 1RP/0/RP0/CPU0#Nov 18 08#33#19.971 UTC# resmon[175]# %HA-HA_WD-6-DISK_NORMAL # Device / ( rootfs#/ ) usage 43.RP/0/RP0/CPU0#5508#And finally the activation#RP/0/RP0/CPU0#5508#install activate id 6Dec 20 09#37#31 Install operation 9 started by cisco# install activate id 6Dec 20 09#37#31 Package list#Dec 20 09#37#31 ncs5500-is\tis-2.2.0.0-r663.x86_64Dec 20 09#37#31 ncs5500-k9sec-3.1.0.0-r663.x86_64Dec 20 09#37#31 ncs5500-mcast-3.1.0.0-r663.x86_64Dec 20 09#37#31 ncs5500-mgbl-3.0.0.0-r663.x86_64Dec 20 09#37#31 ncs5500-mini-x-6.6.3Dec 20 09#37#31 ncs5500-mpls-2.1.0.0-r663.x86_64Dec 20 09#37#31 ncs5500-mpls-te-rsvp-4.1.0.0-r663.x86_64This install operation will reload the system, continue? 
[yes/no]#[yes] yesRP/0/RP1/CPU0#Nov 14 14#03#30.464 UTC# fpd-serv[193]# %PKT_INFRA-FM-3-FAULT_MAJOR # ALARM_MAJOR #FPD-NEED-UPGRADE #DECLARE #0/2#RP/0/RP1/CPU0#Nov 14 14#03#30.959 UTC# fpd-serv[193]# %PKT_INFRA-FM-3-FAULT_MAJOR # ALARM_MAJOR #FPD-NEED-UPGRADE #DECLARE #0/RP0#RP/0/RP1/CPU0#Nov 14 14#03#31.597 UTC# fpd-serv[193]# %PKT_INFRA-FM-3-FAULT_MAJOR # ALARM_MAJOR #FPD-NEED-UPGRADE #DECLARE #0/RP1#0/2/ADMIN0#Nov 14 14#03#35.485 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 11 percent for fpd Bootloader@location 0/2.0/2/ADMIN0#Nov 14 14#03#40.486 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 12 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#03#40.983 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 1 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#03#41.612 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 1 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#03#45.486 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 13 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#03#45.983 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 2 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#03#46.612 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 2 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#03#50.487 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 15 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#03#50.984 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 3 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#03#51.613 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 4 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#03#55.487 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 16 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#03#55.984 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 4 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#03#56.613 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 5 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#04#00.487 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 18 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#04#00.984 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 5 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#04#01.613 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 6 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#04#05.488 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 19 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#04#05.984 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 7 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#04#06.613 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 7 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#04#10.488 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 22 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#04#10.985 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 8 
percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#04#11.614 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 8 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#04#15.488 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 28 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#04#15.985 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 9 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#04#16.614 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 9 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#04#20.489 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 34 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#04#20.985 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 10 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#04#21.614 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 10 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#04#25.489 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 39 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#04#25.986 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 11 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#04#26.615 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 12 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#04#30.490 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 44 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#04#30.986 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 12 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#04#31.615 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 13 percent for fpd Bootloader@location 0/RP1.0/RP0/ADMIN0#Nov 14 14#04#35.986 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 13 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#04#36.615 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 14 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#04#38.490 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 52 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#04#40.987 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 14 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#04#41.615 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 15 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#04#43.490 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 56 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#04#45.987 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 15 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#04#46.616 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 16 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#04#48.490 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 61 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#04#50.987 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 16 percent for fpd Bootloader@location 
0/RP0.0/RP1/ADMIN0#Nov 14 14#04#51.616 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 17 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#04#53.491 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 65 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#04#55.988 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 17 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#04#56.616 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 18 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#04#58.491 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 71 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#05#00.988 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 18 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#05#01.616 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 19 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#05#03.491 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 75 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#05#05.988 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 19 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#05#06.617 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 27 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#05#10.492 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 81 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#05#10.988 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 25 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#05#11.617 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 36 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#05#15.492 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 83 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#05#15.989 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 33 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#05#16.617 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 44 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#05#20.492 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 86 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#05#20.989 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 41 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#05#23.617 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 56 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#05#25.493 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 88 percent for fpd Bootloader@location 0/2.0/RP1/ADMIN0#Nov 14 14#05#28.618 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 61 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#05#30.493 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 91 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#05#30.989 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 55 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#05#33.618 UTC# 
card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 68 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#05#35.493 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 93 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#05#35.990 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 61 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#05#38.618 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 74 percent for fpd Bootloader@location 0/RP1.0/2/ADMIN0#Nov 14 14#05#40.494 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 96 percent for fpd Bootloader@location 0/2.0/RP0/ADMIN0#Nov 14 14#05#40.990 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 67 percent for fpd Bootloader@location 0/RP0.0/2/ADMIN0#Nov 14 14#05#45.494 UTC# card_mgr[2507]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 98 percent for fpd Bootloader@location 0/2.0/2/ADMIN0#Nov 14 14#05#45.961 UTC# card_mgr[2507]# %INFRA-FPD_Driver-1-UPGRADE_ALERT # FPD Bootloader@0/2 image programming completed with UPGRADE DONE state Info# [Programming Done]0/2/ADMIN0#Nov 14 14#05#45.961 UTC# card_mgr[2507]# %INFRA-FPD_Driver-1-UPGRADE_ALERT # FPD Bootloader @location 0/2 upgrade completed.0/RP1/ADMIN0#Nov 14 14#05#45.962 UTC# shelf_mgr[2783]# %INFRA-SHELF_MGR-6-CARD_SW_OPERATIONAL # Card# 0/2 software state going to Operational0/RP1/ADMIN0#Nov 14 14#05#45.962 UTC# shelf_mgr[2783]# %INFRA-SHELF_MGR-6-CARD_HW_OPERATIONAL # Card# 0/2 hardware state going to Operational0/RP0/ADMIN0#Nov 14 14#05#45.990 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 73 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#05#46.618 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 84 percent for fpd Bootloader@location 0/RP1.0/RP1/ADMIN0#Nov 14 14#05#51.619 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 89 percent for fpd Bootloader@location 0/RP1.0/RP0/ADMIN0#Nov 14 14#05#53.991 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 83 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#05#56.619 UTC# card_mgr[2613]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 94 percent for fpd Bootloader@location 0/RP1.0/RP0/ADMIN0#Nov 14 14#05#58.991 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 88 percent for fpd Bootloader@location 0/RP0.0/RP1/ADMIN0#Nov 14 14#06#00.644 UTC# card_mgr[2613]# %INFRA-FPD_Driver-1-UPGRADE_ALERT # FPD Bootloader@0/RP1 image programming completed with UPGRADE DONE state Info# [Programming Done]0/RP1/ADMIN0#Nov 14 14#06#00.645 UTC# card_mgr[2613]# %INFRA-FPD_Driver-1-UPGRADE_ALERT # FPD Bootloader @location 0/RP1 upgrade completed.0/RP1/ADMIN0#Nov 14 14#06#00.648 UTC# shelf_mgr[2783]# %INFRA-SHELF_MGR-6-CARD_SW_OPERATIONAL # Card# 0/RP1 software state going to Operational0/RP1/ADMIN0#Nov 14 14#06#00.649 UTC# shelf_mgr[2783]# %INFRA-SHELF_MGR-6-CARD_HW_OPERATIONAL # Card# 0/RP1 hardware state going to Operational0/RP0/ADMIN0#Nov 14 14#06#03.991 UTC# card_mgr[2541]# %INFRA-FPD_Driver-6-UPGRADE_RESULT # Upgrade completes 93 percent for fpd Bootloader@location 0/RP0.0/RP0/ADMIN0#Nov 14 14#06#08.922 UTC# card_mgr[2541]# %INFRA-FPD_Driver-1-UPGRADE_ALERT # FPD Bootloader@0/RP0 image programming completed with UPGRADE DONE state Info# [Programming Done]0/RP0/ADMIN0#Nov 14 14#06#08.923 UTC# 
card_mgr[2541]# %INFRA-FPD_Driver-1-UPGRADE_ALERT # FPD Bootloader @location 0/RP0 upgrade completed.0/RP1/ADMIN0#Nov 14 14#06#08.924 UTC# shelf_mgr[2783]# %INFRA-SHELF_MGR-6-CARD_SW_OPERATIONAL # Card# 0/RP0 software state going to Operational0/RP1/ADMIN0#Nov 14 14#06#08.924 UTC# shelf_mgr[2783]# %INFRA-SHELF_MGR-6-CARD_HW_OPERATIONAL # Card# 0/RP0 hardware state going to OperationalRP/0/RP0/CPU0#5508#show install requestThe install service operation 9 is 20% completeRP/0/RP0/CPU0#5508#RP/0/RP0/CPU0#Nov 14 14#07#54.516 UTC# sdr_instmgr[1188]# %MGBL-SCONBKUP-6-INTERNAL_INFO # Reload debug script successfully spawnedRP/0/RP1/CPU0#Nov 14 14#07#54.527 UTC# sdr_instmgr[1188]# %INSTALL-INSTMGR-2-OPERATION_SUCCESS # Install operation 97 finished successfullyNov 14 14#07#56 Install operation 97 finished successfullyRP/0/RP1/CPU0#Nov 14 14#07#56.186 UTC# sdr_instmgr[1188]# %INSTALL-INSTMGR-2-SYSTEM_RELOAD_INFO # The whole system will be reloaded to complete install operation 970/RP1/ADMIN0#Nov 14 14#07#58.320 UTC# inst_mgr[3659]# %INFRA-INSTMGR-5-OPERATION_TO_RELOAD # This rack will now reload as part of the install operation0/RP1/ADMIN0#Nov 14 14#08#47.794 UTC# fsdbagg[3629]# %FABRIC-FSDB_AGG-5-PLANE_UPDOWN # [3629] # Plane 0 state changed to DOWN0/RP1/ADMIN0#Nov 14 14#08#47.794 UTC# fsdbagg[3629]# %FABRIC-FSDB_AGG-5-PLANE_UPDOWN # [3629] # Plane 1 state changed to DOWN0/RP1/ADMIN0#Nov 14 14#08#47.794 UTC# fsdbagg[3629]# %FABRIC-FSDB_AGG-5-PLANE_UPDOWN # [3629] # Plane 2 state changed to DOWN0/RP1/ADMIN0#Nov 14 14#08#47.794 UTC# fsdbagg[3629]# %FABRIC-FSDB_AGG-5-PLANE_UPDOWN # [3629] # Plane 3 state changed to DOWN0/RP1/ADMIN0#Nov 14 14#08#47.794 UTC# fsdbagg[3629]# %FABRIC-FSDB_AGG-5-PLANE_UPDOWN # [3629] # Plane 4 state changed to DOWN0/RP1/ADMIN0#Nov 14 14#08#47.795 UTC# fsdbagg[3629]# %FABRIC-FSDB_AGG-5-PLANE_UPDOWN # [3629] # Plane 5 state changed to DOWN0/FC2/ADMIN0#Nov 14 14#08#47.784 UTC# cm[1862]# %ROUTING-TOPO-5-PROCESS_UPDATE # Got process update# Card shutdown.0/FC0/ADMIN0#Nov 14 14#08#47.781 UTC# cm[1866]# %ROUTING-TOPO-5-PROCESS_UPDATE # Got process update# Card shutdown.0/FC4/ADMIN0#Nov 14 14#08#47.786 UTC# cm[1873]# %ROUTING-TOPO-5-PROCESS_UPDATE # Got process update# Card shutdown.0/2/ADMIN0#Nov 14 14#08#47.800 UTC# cm[2508]# %ROUTING-TOPO-5-PROCESS_UPDATE # Got process update# Card shutdown.0/FC1/ADMIN0#Nov 14 14#08#47.788 UTC# cm[1880]# %ROUTING-TOPO-5-PROCESS_UPDATE # Got process update# Card shutdown.0/FC3/ADMIN0#Nov 14 14#08#47.784 UTC# cm[1859]# %ROUTING-TOPO-5-PROCESS_UPDATE # Got process update# Card shutdown.0/RP0/ADMIN0#Nov 14 14#08#47.805 UTC# cm[2548]# %ROUTING-TOPO-5-PROCESS_UPDATE # Got process update# Card shutdown.0/FC5/ADMIN0#Nov 14 14#08#47.786 UTC# cm[1869]# %ROUTING-TOPO-5-PROCESS_UPDATE # Got process update# Card shutdown.0/SC0/ADMIN0#Nov 14 14#08#47.826 UTC# cm[1870]# %ROUTING-TOPO-5-PROCESS_UPDATE # Got process update# Card shutdown.0/SC1/ADMIN0#Nov 14 14#08#47.833 UTC# cm[1857]# %ROUTING-TOPO-5-PROCESS_UPDATE # Got process update# Card shutdown.The router reloads at this point.Stopping OpenBSD Secure Shell server# sshdinitctl# Unknown instance#Stopping system message bus# dbus.Stopping random number generator daemon.Stopping system log daemon...0Stopping kernel log daemon...0Stopping internet superserver# xinetd.Stopping crond# OKStopping rpcbind daemon...done.Stopping S.M.A.R.T. daemon# smartd.Stopping Lighttpd Web Server# stopped /usr/sbin/lighttpd (pid 2342)lighttpd.Stopping libvirtd daemon# [ OK ]Deconfiguring network interfaces... 
done.Sending all processes the KILL signal...Unmounting remote filesystems...Deactivating swap...Unmounting local filesystems...mount# can't find /mnt/ram in /etc/fstabRebooting... [223559.311086] obfl_rb_notify_cb# reboot notifier calledBIOS Ver# 09.30 Date# 07/11/2019 11#10#50Press DEL or ESC to enter boot manager. GRUB Secure Boot Validation Result# PASSED[223561.069300] Requesting remote cold reset on 0/FC2 ...Requesting remote cold reset on 0/FC3 ...[223561.182530] Requesting remote cold reset on 0/FC4 ...Requesting remote cold reset on 0/FC5 ...GNU GRUB version 2.00sting remote cold reset on 0/SC0 ...Requesting remote cold reset on 0/SC1 ...Press F2 to goto grub Menu..emote cold reset on 0/RP1 ...[223561.478902] Requesting local cold reset on 0/RP0 ...Booting from Disk..cesfully wrote mtdoops at 0 size 32768Loading Kernel..pstore# Successfully logged oops info. Size 32740Kernel Secure Boot Validation Result# PASSEDLoading initrd..pervisor/RP ModuleInitrd Secure Boot Validation Result# PASSED[ 0.234639] Allocating netns hash tableEnable selinux to relabel filesystem from initramfs/usr/bin/chcon# cannot access '/dev/console'# No such file or directoryChecking SELinux security contexts# * First booting, filesystem will be relabeled...[*] Load IMA appraise policy# OKSwitching to new root and running init.Sourcing /etc/sysconfig/udev20160406Starting udev# [ OK ]r# 0x80000000/sbin/restorecon# lstat(/etc/resolv.conf) failed# No such file or directory/sbin/restorecon# lstat(/etc/adjtime) failed# No such file or directoryRunning postinst /etc/rpm-postinsts/100-dnsmasq...Running postinst /etc/rpm-postinsts/101-nfs-utils-client...Running postinst /etc/rpm-postinsts/102-nfs-utils...update-rc.d# /etc/init.d/run-postinsts exists during rc.d purge (continuing) Removing any system startup links for run-postinsts ... /etc/rcS.d/S99run-postinstspd_notify_host_startedSending Host OS Started eventLast Reset Reason = 0x80000000Watchdog Register = 0x000000FESPI Boot Timer Register = 0x000060FFConfiguring network interfaces... done.Starting system message bus# dbus.UBI device number 3, total 144 LEBs (18855936 bytes, 18.0 MiB), available 0 LEBs (0 bytes), LEB size 130944 bytes (127.9 KiB)Punching IOFPGA watchdogStarting OpenBSD Secure Shell server# sshd generating ssh RSA key... generating ssh ECDSA key... generating ssh DSA key... generating ssh ED25519 key...sshd start/running, process 3300Starting rpcbind daemon...done.Starting kdump#[ OK ]Starting random number generator daemonUnable to open file# /dev/tpm0.Starting system log daemon...0Starting kernel log daemon...0Starting HPA's tftpd# in.tftpd-hpa.Starting internet superserver# xinetd.Starting S.M.A.R.T. 
daemon# smartd.Starting Lighttpd Web Server# lighttpd.Starting libvirtd daemon# [ OK ]Starting crond# OKStarting cgroup-initNetwork ieobc_br defined from /etc/init/ieobc_br_network.xmlNetwork local_br defined from /etc/init/local_br_network.xmlNetwork xr_local_br defined from /etc/init/xr_local_br_network.xmlNetwork ieobc_br startedNetwork local_br startedNetwork xr_local_br started[*] ima_policy have loaded, or IMA policy file does not existmcelog start/running, process 5865diskmon start/running, process 5866Creating default host password fileinitctl# UnknownConnecting to 'default-sdr--1' console��������bootlogd# ioctl(/dev/pts/2, TIOCCONS)# Device or resource busy/sbin/restorecon# lstat(/etc/adjtime) failed# No such file or directoryRunning postinst /etc/rpm-postinsts/100-dnsmasq...Running postinst /etc/rpm-postinsts/101-nfs-utils-client...Running postinst /etc/rpm-postinsts/102-nfs-utils...update-rc.d# /etc/init.d/run-postinsts exists during rc.d purge (continuing) Removing any system startup links for run-postinsts ... /etc/rcS.d/S99run-postinstsConfiguring network interfaces... done.Starting system message bus# dbus.Starting OpenBSD Secure Shell server# sshd generating ssh RSA key... generating ssh ECDSA key... generating ssh DSA key... generating ssh ED25519 key...sshd start/running, process 2044Starting rpcbind daemon...done.Starting random number generator daemonUnable to open file# /dev/tpm0.Starting system log daemon...0Starting kernel log daemon...0Starting internet superserver# xinetd.Libvirt not initialized for container instanceStarting crond# OKSIOCADDRT# File exists[*] ima_policy have loaded, or IMA policy file does not existStart serial incoming on , Clearing ..DBG_MSG# platform type is 0RP/0/RP0/CPU0#Nov 18 08#46#47.172 UTC# rmf_svr[324]# %PKT_INFRA-FM-3-FAULT_MAJOR # ALARM_MAJOR #RP-RED-LOST-NNR #DECLARE #0/RP0/CPU0#ios con0/RP0/CPU0 is now availablePress RETURN to get started.This product contains cryptographic features and is subject to UnitedStates and local country laws governing import, export, transfer anduse. Delivery of Cisco cryptographic products does not imply third-partyauthority to import, export, distribute or use encryption. Importers,exporters, distributors and users are responsible for compliance withU.S. and local country laws. By using this product you agree to complywith applicable laws and regulations. If you are unable to comply withU.S. and local laws, return this product immediately.A summary of U.S. 
laws governing Cisco cryptographic products may befound at#http#//www.cisco.com/wwl/export/crypto/tool/stqrg.htmlIf you require further assistance please contact us by sending email toexport@cisco.com.RP/0/RP0/CPU0#Nov 18 08#47#20.681 UTC# rmf_svr[324]# %PKT_INFRA-FM-3-FAULT_MAJOR # ALARM_MAJOR #RP-RED-LOST-NSRNR #DECLARE #0/RP0/CPU0#0/RP1/ADMIN0#Nov 18 08#47#34.670 UTC# envmon[3173]# %PKT_INFRA-FM-3-FAULT_MAJOR # ALARM_MAJOR #Power Group redundancy lost #DECLARE #0#0/RP0/ADMIN0#Nov 18 08#47#35.457 UTC# fsdbagg[3971]# %PKT_INFRA-FM-4-FAULT_MINOR # ALARM_MINOR #FABRIC-PLANE-0 #DECLARE ## Fabric Plane-0 DOWN0/RP0/ADMIN0#Nov 18 08#47#35.458 UTC# fsdbagg[3971]# %PKT_INFRA-FM-4-FAULT_MINOR # ALARM_MINOR #FABRIC-PLANE-1 #DECLARE ## Fabric Plane-1 DOWN0/RP0/ADMIN0#Nov 18 08#47#35.458 UTC# fsdbagg[3971]# %PKT_INFRA-FM-4-FAULT_MINOR # ALARM_MINOR #FABRIC-PLANE-2 #DECLARE ## Fabric Plane-2 DOWN0/RP0/ADMIN0#Nov 18 08#47#35.458 UTC# fsdbagg[3971]# %PKT_INFRA-FM-4-FAULT_MINOR # ALARM_MINOR #FABRIC-PLANE-3 #DECLARE ## Fabric Plane-3 DOWN0/RP0/ADMIN0#Nov 18 08#47#35.458 UTC# fsdbagg[3971]# %PKT_INFRA-FM-4-FAULT_MINOR # ALARM_MINOR #FABRIC-PLANE-4 #DECLARE ## Fabric Plane-4 DOWN0/RP0/ADMIN0#Nov 18 08#47#35.459 UTC# fsdbagg[3971]# %PKT_INFRA-FM-4-FAULT_MINOR # ALARM_MINOR #FABRIC-PLANE-5 #DECLARE ## Fabric Plane-5 DOWNRP/0/RP0/CPU0#Nov 18 08#47#36.013 UTC# ipv4_static[1030]# %ROUTING-IP_STATIC-4-CONFIG_NEXTHOP_ETHER_INTERFACE # Route for 19.0.0.0 is configured via ethernet interface without nexthop, Please check if this is intendedRP/0/RP0/CPU0#Nov 18 08#47#36.028 UTC# isis[1009]# %ROUTING-ISIS-5-NSR_PM_ROLE_CHG # ISIS NSR PM role change, reg-type 0, HA-role# , 1RP/0/RP0/CPU0#Nov 18 08#47#36.439 UTC# isis[1009]# %ROUTING-ISIS-6-INFO_STARTUP_START # Cold controlled start beginningRP/0/RP0/CPU0#Nov 18 08#47#36.505 UTC# bpm[1057]# %ROUTING-BGP-5-ASYNC_IPC_STATUS # bpm-active#(bgp-bpm-active)inst-id 0, Service PublishedRP/0/RP0/CPU0#Nov 18 08#47#36.509 UTC# isis[1009]# %ROUTING-ISIS-5-NSR_CONFIG_ENABLE # ISIS NSR is configuredRP/0/RP0/CPU0#Nov 18 08#47#36.515 UTC# isis[1009]# %ROUTING-ISIS-5-NSR_NOTIFY_READY # ISIS NSR set nsr ISIS NSR not ready yet for instance ISIS_1. rc 00/RP0/ADMIN0#Nov 18 08#47#37.896 UTC# aaad[3158]# %MGBL-AAAD-7-DEBUG # Not allowing to sync from XR VM to Admin VM after first user creation.RP/0/RP0/CPU0#Nov 18 08#47#38.416 UTC# bpm[1057]# %ROUTING-BGP-5-ASYNC_IPC_STATUS # bpm-default#(A)inst-id 0, Connection OpenRP/0/RP0/CPU0#Nov 18 08#47#38.668 UTC# bgp[1042]# %ROUTING-BGP-5-ASYNC_IPC_STATUS # default, process instance 1#(A)inst-id 0, Connection EstablisedRP/0/RP0/CPU0#Nov 18 08#47#39.135 UTC# cinetd[292]# %SECURITY-MPP-6-MSG_INFO # Updated Management Plane configuration for service# telnetRP/0/RP0/CPU0#Nov 18 08#47#39.144 UTC# snmpd[1002]# %SECURITY-MPP-6-MSG_INFO # Updated Management Plane configuration for service# snmpSYSTEM CONFIGURATION IN PROCESSThe startup configuration for this device is presently loading.This may take a few minutes. 
You will be notified upon completion.Please do not attempt to reconfigure the device until this process is complete.Location # LAB ILM 3rd FloorFor operational problems contact #Benoit des Rochettes email# xxxxxxxx@cisco.comRP/0/RP0/CPU0#Nov 18 08#47#39.414 UTC# bgp[1042]# %ROUTING-BGP-5-ASYNC_IPC_STATUS # default#(A)inst-id 0, Initial Config DoneUser Access VerificationUsername# 0/RP0/ADMIN0#Nov 18 08#47#41.516 UTC# shelf_mgr[3203]# %INFRA-SHELF_MGR-6-CARD_HW_OPERATIONAL # Card# 0/FC4 hardware state going to Operational0/RP0/ADMIN0#Nov 18 08#47#41.516 UTC# shelf_mgr[3203]# %INFRA-SHELF_MGR-6-CARD_SW_OPERATIONAL # Card# 0/FC4 software state going to Operational0/RP0/ADMIN0#Nov 18 08#47#41.521 UTC# shelf_mgr[3203]# %INFRA-SHELF_MGR-6-CARD_HW_OPERATIONAL # Card# 0/FC1 hardware state going to Operational0/RP0/ADMIN0#Nov 18 08#47#41.521 UTC# shelf_mgr[3203]# %INFRA-SHELF_MGR-6-CARD_SW_OPERATIONAL # Card# 0/FC1 software state going to Operational0/RP0/ADMIN0#Nov 18 08#47#41.532 UTC# shelf_mgr[3203]# %INFRA-SHELF_MGR-5-CARD_INSERTION # Location# 0/FC4, Serial ## SAL1945ST4F0/RP0/ADMIN0#Nov 18 08#47#41.572 UTC# shelf_mgr[3203]# %INFRA-SHELF_MGR-5-CARD_INSERTION # Location# 0/FC1, Serial ## SAL1945ST3F0/2/ADMIN0#Nov 18 08#47#43.989 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_ADMIN_UP # The admin state of the control ethernet switch port 7 has changed. New Admin state# UP, Link state UP0/2/ADMIN0#Nov 18 08#47#43.989 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_LINK_UP # The physical link state of the control ethernet switch port 7 has changed. New Link state UP, Admin state# UP0/2/ADMIN0#Nov 18 08#47#45.199 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_LINK_DOWN # The physical link state of the control ethernet switch port 7 has changed. New Link state DOWN, Admin state# UP0/2/ADMIN0#Nov 18 08#47#45.662 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_ADMIN_UP # The admin state of the control ethernet switch port 6 has changed. New Admin state# UP, Link state UP0/2/ADMIN0#Nov 18 08#47#45.662 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_LINK_UP # The physical link state of the control ethernet switch port 6 has changed. New Link state UP, Admin state# UP0/2/ADMIN0#Nov 18 08#47#46.862 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_LINK_DOWN # The physical link state of the control ethernet switch port 6 has changed. New Link state DOWN, Admin state# UP0/2/ADMIN0#Nov 18 08#47#47.615 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_ADMIN_UP # The admin state of the control ethernet switch port 5 has changed. New Admin state# UP, Link state UP0/2/ADMIN0#Nov 18 08#47#47.615 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_LINK_UP # The physical link state of the control ethernet switch port 5 has changed. New Link state UP, Admin state# UP0/2/ADMIN0#Nov 18 08#47#48.815 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_LINK_DOWN # The physical link state of the control ethernet switch port 5 has changed. New Link state DOWN, Admin state# UP0/2/ADMIN0#Nov 18 08#47#49.635 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_ADMIN_UP # The admin state of the control ethernet switch port 4 has changed. New Admin state# UP, Link state UP0/2/ADMIN0#Nov 18 08#47#49.635 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_LINK_UP # The physical link state of the control ethernet switch port 4 has changed. New Link state UP, Admin state# UP0/2/ADMIN0#Nov 18 08#47#50.825 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_LINK_DOWN # The physical link state of the control ethernet switch port 4 has changed. 
New Link state DOWN, Admin state# UPUsername# ciscoPassword#RP/0/RP0/CPU0#Nov 18 08#47#55.841 UTC# exec[67856]# %SECURITY-LOGIN-6-AUTHEN_SUCCESS # Successfully authenticated user 'cisco' from 'console' on 'con0_RP0_CPU0'0/FC1/ADMIN0#Nov 18 08#47#57.118 UTC# aaad[2021]# %MGBL-AAAD-7-DEBUG # Disaster-recovery account not configured. Using first user as disaster-recovery accountRP/0/RP0/CPU0#Nov 18 08#47#58.487 UTC# isis[1009]# %ROUTING-ISIS-6-INFO_STARTUP_FINISH # Cold controlled start completed0/FC4/ADMIN0#Nov 18 08#47#58.654 UTC# aaad[2016]# %MGBL-AAAD-7-DEBUG # Disaster-recovery account not configured. Using first user as disaster-recovery account0/2/ADMIN0#Nov 18 08#48#03.670 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_LINK_UP # The physical link state of the control ethernet switch port 7 has changed. New Link state UP, Admin state# UPLC/0/2/CPU0#Nov 18 08#48#07.116 UTC# fia_driver[177]# %PLATFORM-OFA-6-INFO # NPU #3 Initialization Completed0/2/ADMIN0#Nov 18 08#48#07.264 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_LINK_UP # The physical link state of the control ethernet switch port 6 has changed. New Link state UP, Admin state# UPLC/0/2/CPU0#Nov 18 08#48#11.023 UTC# fia_driver[177]# %PLATFORM-OFA-6-INFO # NPU #2 Initialization Completed0/2/ADMIN0#Nov 18 08#48#11.167 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_LINK_UP # The physical link state of the control ethernet switch port 5 has changed. New Link state UP, Admin state# UPLC/0/2/CPU0#Nov 18 08#48#14.605 UTC# fia_driver[177]# %PLATFORM-OFA-6-INFO # NPU #1 Initialization Completed0/2/ADMIN0#Nov 18 08#48#14.754 UTC# esd[3180]# %INFRA-ESD-6-PORT_STATE_CHANGE_LINK_UP # The physical link state of the control ethernet switch port 4 has changed. New Link state UP, Admin state# UPLC/0/2/CPU0#Nov 18 08#48#19.805 UTC# fia_driver[177]# %PLATFORM-OFA-6-INFO # NPU #0 Initialization CompletedRP/0/RP0/CPU0#Nov 18 08#48#25.472 UTC# pim[1223]# %ROUTING-IPV4_PIM-5-INTCHG # PIM interface Lo0 UPRP/0/RP0/CPU0#Nov 18 08#48#25.472 UTC# pim[1223]# %ROUTING-IPV4_PIM-5-NBRCHG # PIM neighbor 75.75.127.1 UP on Lo0SYSTEM CONFIGURATION COMPLETEDRP/0/RP0/CPU0#Nov 18 08#48#26.274 UTC# l2vpn_mgr[1241]# %L2-L2VPN-6-CAPABILITY_CHANGE # Global L2VPN capabilities have been updatedRP/0/RP0/CPU0#Nov 18 08#48#26.388 UTC# ifmgr[427]# %PKT_INFRA-LINK-3-UPDOWN # Interface MgmtEth0/RP0/CPU0/0, changed state to DownRP/0/RP0/CPU0#Nov 18 08#48#26.388 UTC# ifmgr[427]# %PKT_INFRA-LINEPROTO-5-UPDOWN # Line protocol on Interface MgmtEth0/RP0/CPU0/0, changed state to DownRP/0/RP0/CPU0#Nov 18 08#48#26.391 UTC# cfgmgr-rp[359]# %MGBL-CONFIG-6-OIR_RESTORE # Configuration for node '0/RP0/0' has been restored.RP/0/RP0/CPU0#Nov 18 08#48#26.395 UTC# ifmgr[427]# %PKT_INFRA-LINK-3-UPDOWN # Interface MgmtEth0/RP0/CPU0/0, changed state to UpRP/0/RP0/CPU0#Nov 18 08#48#26.397 UTC# ifmgr[427]# %PKT_INFRA-LINEPROTO-5-UPDOWN # Line protocol on Interface MgmtEth0/RP0/CPU0/0, changed state to UpRP/0/RP0/CPU0#Nov 18 08#48#26.651 UTC# tacacsd[1193]# %SECURITY-TACACSD-6-SERVER_DOWN # TACACS+ server 192.0.0.2/49 is DOWN - Socket 116# No route to hostRP/0/RP0/CPU0#Nov 18 08#48#26.764 UTC# ifmgr[427]# %PKT_INFRA-LINK-3-UPDOWN # Interface MgmtEth0/RP0/CPU0/0, changed state to DownRP/0/RP0/CPU0#Nov 18 08#48#26.764 UTC# ifmgr[427]# %PKT_INFRA-LINEPROTO-5-UPDOWN # Line protocol on Interface MgmtEth0/RP0/CPU0/0, changed state to DownRP/0/RP0/CPU0#Nov 18 08#48#27.258 UTC# smartlicserver[144]# %LICENSE-SMART_LIC-5-COMM_RESTORED # Communications with Cisco licensing cloud restoredLC/0/2/CPU0#Nov 18 
08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/0, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/1, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/2, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/3, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/4, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/5, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/6, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/7, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/8, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/9, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/10, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/11, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/12, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/13, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/14, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/15, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/16, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/17, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/18, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/19, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/20, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/21, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/22, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/23, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/24, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/25, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/26, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/27, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/28, changed 
state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/29, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/30, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/31, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/32, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/33, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/34, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.619 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/35, changed state to DownLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/0, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/1, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/2, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/3, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/4, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/5, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/6, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/7, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/14, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/15, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/16, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/17, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/20, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/26, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/27, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/28, changed state to UpLC/0/2/CPU0#Nov 18 08#48#29.630 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/29, changed state to UpRP/0/RP0/CPU0#Nov 18 08#48#29.991 UTC# ifmgr[427]# %PKT_INFRA-LINK-3-UPDOWN # Interface MgmtEth0/RP0/CPU0/0, changed state to UpRP/0/RP0/CPU0#Nov 18 08#48#29.992 UTC# ifmgr[427]# %PKT_INFRA-LINEPROTO-5-UPDOWN # Line protocol on Interface MgmtEth0/RP0/CPU0/0, changed state to UpLC/0/2/CPU0#Nov 18 08#48#33.543 UTC# fia_driver[177]# %PLATFORM-DPA-6-INFO # Fabric BANDWIDTH above configured threshold0/RP0/ADMIN0#Nov 18 08#48#35.460 UTC# fsdbagg[3971]# %PKT_INFRA-FM-4-FAULT_MINOR # ALARM_MINOR #FABRIC-PLANE-0 #CLEAR ## Fabric Plane-0 DOWN0/RP0/ADMIN0#Nov 18 08#48#35.461 UTC# fsdbagg[3971]# 
%FABRIC-FSDB_AGG-5-PLANE_UPDOWN # [3971] # Plane 0 state changed to UP0/RP0/ADMIN0#Nov 18 08#48#35.462 UTC# fsdbagg[3971]# %PKT_INFRA-FM-4-FAULT_MINOR # ALARM_MINOR #FABRIC-PLANE-1 #CLEAR ## Fabric Plane-1 DOWN0/RP0/ADMIN0#Nov 18 08#48#35.462 UTC# fsdbagg[3971]# %FABRIC-FSDB_AGG-5-PLANE_UPDOWN # [3971] # Plane 1 state changed to UP0/RP0/ADMIN0#Nov 18 08#48#35.462 UTC# fsdbagg[3971]# %FABRIC-FSDB_AGG-5-PLANE_UPDOWN # [3971] # Plane 2 state changed to MCAST_DOWN0/RP0/ADMIN0#Nov 18 08#48#35.462 UTC# fsdbagg[3971]# %PKT_INFRA-FM-4-FAULT_MINOR # ALARM_MINOR #FABRIC-PLANE-3 #CLEAR ## Fabric Plane-3 DOWN0/RP0/ADMIN0#Nov 18 08#48#35.463 UTC# fsdbagg[3971]# %FABRIC-FSDB_AGG-5-PLANE_UPDOWN # [3971] # Plane 3 state changed to UP0/RP0/ADMIN0#Nov 18 08#48#35.463 UTC# fsdbagg[3971]# %PKT_INFRA-FM-4-FAULT_MINOR # ALARM_MINOR #FABRIC-PLANE-4 #CLEAR ## Fabric Plane-4 DOWN0/RP0/ADMIN0#Nov 18 08#48#35.463 UTC# fsdbagg[3971]# %FABRIC-FSDB_AGG-5-PLANE_UPDOWN # [3971] # Plane 4 state changed to UP0/RP0/ADMIN0#Nov 18 08#48#35.463 UTC# fsdbagg[3971]# %PKT_INFRA-FM-4-FAULT_MINOR # ALARM_MINOR #FABRIC-PLANE-5 #CLEAR ## Fabric Plane-5 DOWN0/RP0/ADMIN0#Nov 18 08#48#35.464 UTC# fsdbagg[3971]# %FABRIC-FSDB_AGG-5-PLANE_UPDOWN # [3971] # Plane 5 state changed to UP0/RP0/ADMIN0#Nov 18 08#48#38.887 UTC# fsdbagg[3971]# %PKT_INFRA-FM-4-FAULT_MINOR # ALARM_MINOR #FABRIC-PLANE-2 #CLEAR ## Fabric Plane-2 MCAST_DOWN0/RP0/ADMIN0#Nov 18 08#48#38.887 UTC# fsdbagg[3971]# %FABRIC-FSDB_AGG-5-PLANE_UPDOWN # [3971] # Plane 2 state changed to UPRP/0/RP1/CPU0#Nov 18 08#48#39.108 UTC# ifmgr[314]# %PKT_INFRA-LINK-5-CHANGED # Interface MgmtEth0/RP1/CPU0/0, changed state to Administratively DownRP/0/RP1/CPU0#Nov 18 08#48#39.114 UTC# cfgmgr-rp[139]# %MGBL-CONFIG-6-OIR_RESTORE # Configuration for node '0/RP1/0' has been restored.RP/0/RP1/CPU0#Nov 18 08#48#39.316 UTC# cepki[426]# %SECURITY-PKI-6-LOG_INFO_DETAIL # FIPS POST Successful for cepkiRP/0/RP1/CPU0#Nov 18 08#48#39.714 UTC# smartlicserver[392]# %LICENSE-SMART_LIC-6-AGENT_READY # Smart Agent for Licensing is initializedRP/0/RP1/CPU0#Nov 18 08#48#39.888 UTC# smartlicserver[392]# %LICENSE-SMART_LIC-6-AGENT_ENABLED # Smart Agent for Licensing is enabledRP/0/RP1/CPU0#Nov 18 08#48#39.888 UTC# smartlicserver[392]# %LICENSE-SMART_LIC-6-AGENT_READY # Smart Agent for Licensing is initializedRP/0/RP1/CPU0#Nov 18 08#48#39.888 UTC# smartlicserver[392]# %LICENSE-SMART_LIC-6-AGENT_ENABLED # Smart Agent for Licensing is enabledRP/0/RP1/CPU0#Nov 18 08#48#39.910 UTC# smartlicserver[392]# %LICENSE-SMART_LIC-6-HA_ROLE_CHANGED # Smart Agent HA role changed to Standby.RP/0/RP1/CPU0#Nov 18 08#48#40.462 UTC# isis[1009]# %ROUTING-ISIS-5-NSR_PM_ROLE_CHG # ISIS NSR PM role change, reg-type 0, HA-role# , 2RP/0/RP1/CPU0#Nov 18 08#48#41.652 UTC# ipsec_mp[362]# %SECURITY-IMP-6-PROC_READY # Process ipsec_mp is readyRP/0/RP1/CPU0#Nov 18 08#48#42.835 UTC# isis[1009]# %ROUTING-ISIS-5-NSR_INTF_TOPO_UNREADY # ISIS NSR vm 2 intf init 'topo' not ready, retry laterRP/0/RP1/CPU0#Nov 18 08#48#42.835 UTC# isis[1009]# %ROUTING-ISIS-5-NSR_CONFIG_ENABLE # ISIS NSR is configuredRP/0/RP1/CPU0#Nov 18 08#48#42.837 UTC# isis[1009]# %ROUTING-ISIS-5-NSR_CONFIG_ENABLE # ISIS NSR is nsr ltopo ready triggeredRP/0/RP0/CPU0#Nov 18 08#48#42.840 UTC# bpm[1057]# %ROUTING-BGP-5-ASYNC_IPC_STATUS # bpm-default#(S)inst-id 0, Connection OpenRP/0/RP1/CPU0#Nov 18 08#48#42.914 UTC# isis[1009]# %ROUTING-ISIS-5-NSR_CONFIG_ENABLE # ISIS NSR is nsr ltopo ready triggeredRP/0/RP1/CPU0#Nov 18 08#48#43.024 UTC# isis[1009]# %ROUTING-ISIS-4-NSR_NII_BRINGUP # ISIS 
NSR NII intf 'Nii0' bring up timer runningRP/0/RP1/CPU0#Nov 18 08#48#43.024 UTC# isis[1009]# %ROUTING-ISIS-4-NSR_NII_BRINGUP # ISIS NSR NII intf 'Nii0' bring up timer runningRP/0/RP1/CPU0#Nov 18 08#48#43.028 UTC# snmpd[1002]# %SNMP-SNMP-3-INTERNAL # Snmp Trap source issue # Using best possible Src IP address for SNMP Traps. Failed to get the IP address of #(Check the Interface state)#FortyGigE0_2_0_20RP/0/RP1/CPU0#Nov 18 08#48#43.043 UTC# snmpd[1002]# %SNMP-SNMP-3-INTERNAL # Snmp Trap source issue # Using best possible Src IPV6 address for SNMP Traps. Failed to get the IPv6 address of #(Check the Interface state)#FortyGigE0_2_0_20RP/0/RP1/CPU0#Nov 18 08#48#43.447 UTC# bgp[1042]# %ROUTING-BGP-5-ASYNC_IPC_STATUS # default, process instance 1#(S)inst-id 0, Connection EstablisedRP/0/RP1/CPU0#Nov 18 08#48#43.800 UTC# bgp[1042]# %ROUTING-BGP-5-ASYNC_IPC_STATUS # default#(S)inst-id 0, Initial Config DoneRP/0/RP0/CPU0#Nov 18 08#48#45.353 UTC# rmf_svr[324]# %HA-REDCON-1-STANDBY_READY # standby card is readyRP/0/RP1/CPU0#Nov 18 08#48#45.359 UTC# rmf_svr[359]# %HA-REDCON-6-STBY_STANDBY_READY # This card is standby and is readyRP/0/RP0/CPU0#Nov 18 08#48#46.007 UTC# rmf_svr[324]# %HA-REDCON-1-STANDBY_NOT_READY # standby card is NOT readyRP/0/RP1/CPU0#Nov 18 08#48#46.013 UTC# rmf_svr[359]# %HA-REDCON-1-STANDBY_NOT_READY # standby card is NOT readyRP/0/RP0/CPU0#Nov 18 08#48#56.007 UTC# rmf_svr[324]# %HA-REDCON-1-STANDBY_READY # standby card is readyRP/0/RP0/CPU0#Nov 18 08#49#00.158 UTC# rmf_svr[324]# %PKT_INFRA-FM-3-FAULT_MAJOR # ALARM_MAJOR #RP-RED-LOST-NNR #CLEAR #0/RP0/CPU0#LC/0/2/CPU0#Nov 18 08#49#07.457 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/26/0, changed state to DownLC/0/2/CPU0#Nov 18 08#49#07.457 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/26/1, changed state to DownLC/0/2/CPU0#Nov 18 08#49#07.457 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/26/2, changed state to DownLC/0/2/CPU0#Nov 18 08#49#07.457 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/26/3, changed state to DownLC/0/2/CPU0#Nov 18 08#49#07.457 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/27/0, changed state to DownLC/0/2/CPU0#Nov 18 08#49#07.457 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/27/1, changed state to DownLC/0/2/CPU0#Nov 18 08#49#07.457 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/27/2, changed state to DownLC/0/2/CPU0#Nov 18 08#49#07.457 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/27/3, changed state to DownLC/0/2/CPU0#Nov 18 08#49#07.459 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/26/0, changed state to UpLC/0/2/CPU0#Nov 18 08#49#07.459 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/26/1, changed state to UpLC/0/2/CPU0#Nov 18 08#49#07.459 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/26/2, changed state to UpLC/0/2/CPU0#Nov 18 08#49#07.459 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/26/3, changed state to UpLC/0/2/CPU0#Nov 18 08#49#07.459 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/27/0, changed state to UpLC/0/2/CPU0#Nov 18 08#49#07.459 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/27/1, changed state to UpLC/0/2/CPU0#Nov 18 08#49#07.459 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface Optics0/2/0/27/2, changed state to UpLC/0/2/CPU0#Nov 18 08#49#07.459 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface 
Optics0/2/0/27/3, changed state to UpRP/0/RP0/CPU0#Nov 18 08#49#09.891 UTC# isis[1009]# %ROUTING-ISIS-4-NSR_NII_BRINGUP # ISIS NSR NII intf 'Nii0' bring up timer runningRP/0/RP0/CPU0#Nov 18 08#49#09.898 UTC# isis[1009]# %ROUTING-ISIS-5-NSR_CONN # ISIS NSR Connection 'Up' on 'Nii0'RP/0/RP1/CPU0#Nov 18 08#49#09.905 UTC# isis[1009]# %ROUTING-ISIS-5-NSR_CONN # ISIS NSR Connection 'Up' on 'Nii0'RP/0/RP0/CPU0#Nov 18 08#49#16.416 UTC# ztp.sh[65792]# %OS-SYSLOG-6-LOG_INFO # ZTP will abort as username is configured. Do not configure until ZTP aborted message is seen.RP/0/RP0/CPU0#Nov 18 08#49#20.941 UTC# ztp.sh[65825]# %OS-SYSLOG-6-LOG_INFO # ZTP abortedRP/0/RP0/CPU0#Nov 18 08#49#25.008 UTC# kim[278]# %INFRA-KIM-6-LOG_INFO # XR statistics will be pushed into the Linux kernel at 5 second intervalsLC/0/2/CPU0#Nov 18 08#49#27.528 UTC# ifmgr[223]# %PKT_INFRA-LINK-3-UPDOWN # Interface HundredGigE0/2/0/0, changed state to DownLC/0/2/CPU0#Nov 18 08#49#27.528 UTC# ifmgr[223]# %PKT_INFRA-LINEPROTO-5-UPDOWN # Line protocol on Interface HundredGigE0/2/0/0, changed state to DownLC/0/2/CPU0#Nov 18 08#49#27.528 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface HundredGigE0/2/0/35, changed state to Administratively DownLC/0/2/CPU0#Nov 18 08#49#27.528 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface HundredGigE0/2/0/34, changed state to Administratively DownLC/0/2/CPU0#Nov 18 08#49#27.528 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface HundredGigE0/2/0/33, changed state to Administratively DownLC/0/2/CPU0#Nov 18 08#49#27.528 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface HundredGigE0/2/0/32, changed state to Administratively DownLC/0/2/CPU0#Nov 18 08#49#27.528 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface HundredGigE0/2/0/31, changed state to Administratively DownLC/0/2/CPU0#Nov 18 08#49#27.528 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface HundredGigE0/2/0/30, changed state to Administratively DownLC/0/2/CPU0#Nov 18 08#49#27.528 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface HundredGigE0/2/0/29, changed state to Administratively DownLC/0/2/CPU0#Nov 18 08#49#27.528 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface HundredGigE0/2/0/28, changed state to Administratively DownLC/0/2/CPU0#Nov 18 08#49#27.528 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface HundredGigE0/2/0/25, changed state to Administratively DownLC/0/2/CPU0#Nov 18 08#49#27.528 UTC# ifmgr[223]# %PKT_INFRA-LINK-5-CHANGED # Interface HundredGigE0/2/0/24, changed state to Administratively DownAt this step, the traffic of the different services is impacted.We verify the release and we commit it to “freeze” the installation (remember# if you don’t commit the install, the next reload will start the previous release).RP/0/RP0/CPU0#5508#show install active summary Active Packages# 7 ncs5500-xr-6.6.3 version=6.6.3 [Boot image] ncs5500-mpls-2.1.0.0-r663 ncs5500-isis-2.2.0.0-r663 ncs5500-mpls-te-rsvp-4.1.0.0-r663 ncs5500-mcast-3.1.0.0-r663 ncs5500-mgbl-3.0.0.0-r663 ncs5500-k9sec-3.1.0.0-r663RP/0/RP0/CPU0#5508#install commitNov 18 08#56#14 Install operation 7 started by cisco# install commitNov 18 08#56#15 Install operation will continue in the backgroundRP/0/RP0/CPU0#5508#RP/0/RP0/CPU0#Nov 18 08#56#36.025 UTC# sdr_instmgr[1245]# %PKT_INFRA-FM-6-FAULT_INFO # INSTALL-IN-PROGRESS #CLEAR #0/RP0/CPU0# INSTALL_IN_PROGRESS Alarm # being CLEARED for the system0/RP0/ADMIN0#Nov 18 08#56#37.022 UTC# inst_mgr[3978]# %PKT_INFRA-FM-6-FAULT_INFO # INSTALL-IN-PROGRESS #CLEAR #0/RP0# Calvados INSTALL_IN_PROGRESS Alarm 
# being CLEARED for the systemNov 18 08#56#42 Install operation 7 finished successfullyRP/0/RP0/CPU0#Nov 18 08#56#42.528 UTC# sdr_instmgr[1245]# %INSTALL-INSTMGR-2-OPERATION_SUCCESS # Install operation 7 finished successfullyRP/0/RP0/CPU0#Nov 18 08#58#18.895 UTC# bgp[1042]# %ROUTING-BGP-5-NSR_STATE_CHANGE # Changed state to NSR-ReadyRP/0/RP0/CPU0#Nov 18 08#58#35.888 UTC# pim6[1224]# %ROUTING-IPV4_PIM-5-HA_NOTICE # NSR readyRP/0/RP0/CPU0#Nov 18 08#58#38.326 UTC# pim[1223]# %ROUTING-IPV4_PIM-5-HA_NOTICE # NSR not readyRP/0/RP0/CPU0#5508#show install committed summary Committed Packages# 7 ncs5500-xr-6.6.3 version=6.6.3 [Boot image] ncs5500-mpls-2.1.0.0-r663 ncs5500-isis-2.2.0.0-r663 ncs5500-mpls-te-rsvp-4.1.0.0-r663 ncs5500-mcast-3.1.0.0-r663 ncs5500-mgbl-3.0.0.0-r663 ncs5500-k9sec-3.1.0.0-r663RP/0/RP0/CPU0#5508#We verify the firmware#RP/0/RP0/CPU0#5508#show hw-module fpd FPD Versions =================Location Card type HWver FPD device ATR Status Running Programd------------------------------------------------------------------------------0/2 NC55-36X100G-A-SE 0.303 MIFPGA CURRENT 0.03 0.030/2 NC55-36X100G-A-SE 0.303 Bootloader CURRENT 0.14 0.140/2 NC55-36X100G-A-SE 0.303 DBFPGA CURRENT 0.14 0.140/2 NC55-36X100G-A-SE 0.303 IOFPGA CURRENT 0.21 0.210/2 NC55-36X100G-A-SE 0.303 SATA CURRENT 5.00 5.000/RP0 NC55-RP 1.1 Bootloader CURRENT 9.30 9.300/RP0 NC55-RP 1.1 IOFPGA CURRENT 0.09 0.090/RP0 NC55-RP 1.1 SATA CURRENT 6.00 6.000/RP1 NC55-RP 1.1 Bootloader CURRENT 9.30 9.300/RP1 NC55-RP 1.1 IOFPGA CURRENT 0.09 0.090/FC0 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC0 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC1 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC1 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC2 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC2 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC3 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC3 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC4 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC4 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC5 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC5 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/SC0 NC55-SC 1.4 Bootloader CURRENT 1.74 1.740/SC0 NC55-SC 1.4 IOFPGA CURRENT 0.10 0.100/SC1 NC55-SC 1.4 Bootloader CURRENT 1.74 1.740/SC1 NC55-SC 1.4 IOFPGA CURRENT 0.10 0.10 RP/0/RP0/CPU0#5508#adminsysadmin-vm#0_RP0# show hw-module fpd FPD Versions ===============Location Card type HWver FPD device ATR Status Run Programd-------------------------------------------------------------------------------0/2 NC55-36X100G-A-SE 0.303 Bootloader CURRENT 0.14 0.140/2 NC55-36X100G-A-SE 0.303 DBFPGA CURRENT 0.14 0.140/2 NC55-36X100G-A-SE 0.303 IOFPGA CURRENT 0.21 0.210/2 NC55-36X100G-A-SE 0.303 SATA CURRENT 5.00 5.000/RP0 NC55-RP 1.1 Bootloader CURRENT 9.30 9.300/RP0 NC55-RP 1.1 IOFPGA CURRENT 0.09 0.090/RP0 NC55-RP 1.1 SATA CURRENT 6.00 6.000/RP1 NC55-RP 1.1 Bootloader CURRENT 9.30 9.300/RP1 NC55-RP 1.1 IOFPGA CURRENT 0.09 0.090/FC0 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC0 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC1 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC1 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC2 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC2 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC3 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC3 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC4 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC4 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/FC5 NC55-5508-FC 1.0 Bootloader CURRENT 1.74 1.740/FC5 NC55-5508-FC 1.0 IOFPGA CURRENT 0.16 0.160/SC0 NC55-SC 
1.4 Bootloader CURRENT 1.74 1.74
0/SC0 NC55-SC 1.4 IOFPGA CURRENT 0.10 0.10
0/SC1 NC55-SC 1.4 Bootloader CURRENT 1.74 1.74
0/SC1 NC55-SC 1.4 IOFPGA CURRENT 0.10 0.10

To make sure you are ready for the next step, one way is to check that the services are restored. Another is to verify the "NSR ready" state in the show redundancy summary output: it confirms that both RPs are in sync and that the router is in a stable, nominal state.

Hardware migration

We are now ready for the fan tray and fabric card replacement. At this point, it's recommended to perform the migration with the chassis powered off.

1. Unplug all power blocks.
2. Unscrew the fan trays.
3. Remove the fan trays one by one.
4. Unscrew the fabric cards.
5. Eject all fabric cards.
6. Replace them with v2 fabric cards. Make sure you remove the plastic cover before insertion (you have no idea what we've seen…).
7. Insert the three v2 fan trays.

IMPORTANT: Make sure you don't mix v1 fan trays with v2 fabric cards, or vice versa. These parts shouldn't allow a mismatched insertion (a pin guide is there to avoid mistakes), but don't force anything: if it doesn't fit, make sure you didn't mix up cards and fans.

At this point, we can re-plug the power blocks and watch the system boot up. Once completed, we verify the inserted parts:

RP/0/RP0/CPU0:5508#show platform
Node              Type                     State          Config state
--------------------------------------------------------------------------------
0/2/CPU0          NC55-36X100G-A-SE        IOS XR RUN     NSHUT
0/2/NPU0          Slice                    UP
0/2/NPU1          Slice                    UP
0/2/NPU2          Slice                    UP
0/2/NPU3          Slice                    UP
0/RP0/CPU0        NC55-RP-E(Active)        IOS XR RUN     NSHUT
0/RP1/CPU0        NC55-RP-E(Standby)       IOS XR RUN     NSHUT
0/FC0             NC55-5508-FC2            OPERATIONAL    NSHUT
0/FC1             NC55-5508-FC2            OPERATIONAL    NSHUT
0/FC2             NC55-5508-FC2            OPERATIONAL    NSHUT
0/FC3             NC55-5508-FC2            OPERATIONAL    NSHUT
0/FC4             NC55-5508-FC2            OPERATIONAL    NSHUT
0/FC5             NC55-5508-FC2            OPERATIONAL    NSHUT
0/FT0             NC55-5508-FAN2           OPERATIONAL    NSHUT
0/FT1             NC55-5508-FAN2           OPERATIONAL    NSHUT
0/FT2             NC55-5508-FAN2           OPERATIONAL    NSHUT
0/SC0             NC55-SC                  OPERATIONAL    NSHUT
0/SC1             NC55-SC                  OPERATIONAL    NSHUT
RP/0/RP0/CPU0:5508#

And the firmware:

sysadmin-vm:0_RP0# show hw-module fpd

FPD Versions
===============
Location   Card type           HWver   FPD device   ATR  Status    Run    Programd
-------------------------------------------------------------------------------
0/2        NC55-36X100G-A-SE   0.303   Bootloader        CURRENT   0.14   0.14
0/2        NC55-36X100G-A-SE   0.303   DBFPGA            CURRENT   0.14   0.14
0/2        NC55-36X100G-A-SE   0.303   IOFPGA            CURRENT   0.21   0.21
0/2        NC55-36X100G-A-SE   0.303   SATA              CURRENT   5.00   5.00
0/RP0      NC55-RP             1.1     Bootloader        CURRENT   9.30   9.30
0/RP0      NC55-RP             1.1     IOFPGA            CURRENT   0.09   0.09
0/RP0      NC55-RP             1.1     SATA              CURRENT   6.00   6.00
0/RP1      NC55-RP             1.1     Bootloader        CURRENT   9.30   9.30
0/RP1      NC55-RP             1.1     IOFPGA            CURRENT   0.09   0.09
0/FC0      NC55-5508-FC2       1.0     Bootloader        CURRENT   1.80   1.80
0/FC0      NC55-5508-FC2       1.0     IOFPGA            CURRENT   0.12   0.12
0/FC1      NC55-5508-FC2       1.0     Bootloader        CURRENT   1.80   1.80
0/FC1      NC55-5508-FC2       1.0     IOFPGA            CURRENT   0.12   0.12
0/FC2      NC55-5508-FC2       1.0     Bootloader        CURRENT   1.80   1.80
0/FC2      NC55-5508-FC2       1.0     IOFPGA            CURRENT   0.12   0.12
0/FC3      NC55-5508-FC2       1.0     Bootloader        CURRENT   1.80   1.80
0/FC3      NC55-5508-FC2       1.0     IOFPGA            CURRENT   0.12   0.12
0/FC4      NC55-5508-FC2       1.0     Bootloader        CURRENT   1.80   1.80
0/FC4      NC55-5508-FC2       1.0     IOFPGA            CURRENT   0.12   0.12
0/FC5      NC55-5508-FC2       1.0     Bootloader        CURRENT   1.80   1.80
0/FC5      NC55-5508-FC2       1.0     IOFPGA            CURRENT   0.12   0.12
0/SC0      NC55-SC             1.4     Bootloader        CURRENT   1.74   1.74
0/SC0      NC55-SC             1.4     IOFPGA            CURRENT   0.10   0.10
0/SC1      NC55-SC             1.4     Bootloader        CURRENT   1.74   1.74
0/SC1      NC55-SC             1.4     IOFPGA            CURRENT   0.10   0.10

Very likely you'll have to update the firmware for the new v2 parts and reload them to make them active; a quick way to check this from the CLI output is sketched below.
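Since this chassis has fpd auto-upgrade enable configured (see the router configuration further down), outdated FPDs on the new v2 parts should normally be reprogrammed automatically; otherwise they can be upgraded manually from the admin VM. As a quick sanity check, here is a minimal sketch (Python, standard library only, not part of the original procedure) that parses a saved copy of the show hw-module fpd output and flags every row whose status is not CURRENT. The file name and the column layout it expects are assumptions based on the output shown above, so treat it as illustrative rather than a supported tool.

```python
#!/usr/bin/env python3
"""Minimal sketch: flag FPDs that do not report CURRENT in a saved
"show hw-module fpd" output. Standard library only; the default file
name and the column layout are assumptions, not a supported tool."""
import re
import sys

# A row looks like (the ATR column may be empty or a single flag):
# 0/FC0   NC55-5508-FC2   1.0   Bootloader   CURRENT   1.80   1.80
ROW = re.compile(
    r"^(?P<location>\d+/\S+)\s+"   # 0/FC0, 0/RP0, 0/2, ...
    r"(?P<card>\S+)\s+\S+\s+"      # card type, hardware version
    r"(?P<device>\S+)\s+"          # FPD device (Bootloader, IOFPGA, ...)
    r"(?P<status>.+?)\s+"          # ATR + status ("CURRENT", "NEED UPGD", ...)
    r"\S+\s+\S+\s*$"               # running / programmed versions
)

def not_current(lines):
    """Yield (location, device, status) for each FPD row not in CURRENT."""
    for line in lines:
        m = ROW.match(line.strip())
        if m and "CURRENT" not in m.group("status"):
            yield m.group("location"), m.group("device"), m.group("status")

if __name__ == "__main__":
    # Hypothetical file holding the pasted CLI output.
    path = sys.argv[1] if len(sys.argv) > 1 else "show_hw_module_fpd.txt"
    with open(path) as f:
        pending = list(not_current(f))
    if pending:
        print("FPDs to look at (e.g. with an admin 'upgrade hw-module ... fpd'):")
        for loc, dev, status in pending:
            print(f"  {loc:8} {dev:12} {status}")
    else:
        print("All parsed FPD rows report CURRENT.")
```

Under those assumptions, you would paste the CLI output into the file and run the script before and after the migration; any FPD it lists can then be reprogrammed and the corresponding card reloaded.

Other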
show commands you could useIn no specific order, you could have a look at the output of the following additional show command before and after the software upgrade and hardware migration. (admin) show controller card-mgr inventory (admin) show led (admin) show environment all (admin) show alarms show filesystem location 0/rp0/CPU0 show filesystem location 0/rp1/CPU0 show process cpu show processes blocked (several times to avoid transcient blocks) (admin) show controller fabric plane all (admin) show controller fabric health (admin) show controller fabric plane all statistics (admin) show controller fabric plane all detailConfiguration used for the testMostly for referenceTREX configsTREX-Config-port0.yaml- name# L3VPN-1 stream# name# L3VPN-1 advanced_mode# true enabled# true self_start# true next_stream_id# -1 isg# 0 flags# 0 action_count# 0 packet# meta# '' binary# >- AAAAAAAAAAAAAAAAgQAD6AgARQAFxgABAABABl4uCwAAAgwAAAInEE4gAAAAAAAAAABQAiAA/hAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= model# '' rx_stats# enabled# false vm# split_by_var# '' instructions# - next_var# null name# TCP_sport_5 min_value# 10000 init_value# 10000 step# 1 max_value# 11000 type# flow_var size# 2 op# inc split_to_cores# true - next_var# null name# pkt_size_1 min_value# 80 init_value# 80 step# 1 max_value# 1500 type# flow_var size# 2 op# inc split_to_cores# true - name# TCP_sport_5 add_value# 0 pkt_offset# 38 type# write_flow_var is_big_endian# true - type# trim_pkt_size name# pkt_size_1 - name# pkt_size_1 add_value# -18 pkt_offset# 20 type# write_flow_var is_big_endian# true - type# fix_checksum_ipv4 pkt_offset# 18 cache_size# 5000 mode# rate# type# pps value# 1 type# 
continuous count# 1 pkts_per_burst# 1 total_pkts# 1 ibg# 0 flow_stats# enabled# true stream_id# 100 rule_type# stats- name# L2VPN-1 stream# name# L2VPN-1 advanced_mode# true enabled# true self_start# true next_stream_id# -1 isg# 0 flags# 2 action_count# 0 packet# meta# '' binary# >- PP3+pKriAAAAAAAAgQAAZAgARQAFxgABAABABjUwEAAAATAAAAFOIHUwAAAAAAAAAABQAiAAhvIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= model# '' rx_stats# enabled# false vm# split_by_var# '' instructions# - next_var# null name# TCP_sport_5 min_value# 20000 init_value# 20000 step# 1 max_value# 21000 type# flow_var size# 2 op# inc split_to_cores# true - next_var# null name# pkt_size_1 min_value# 80 init_value# 80 step# 1 max_value# 1400 type# flow_var size# 2 op# inc split_to_cores# true - name# TCP_sport_5 add_value# 0 pkt_offset# 38 type# write_flow_var is_big_endian# true - type# trim_pkt_size name# pkt_size_1 - name# pkt_size_1 add_value# -18 pkt_offset# 20 type# write_flow_var is_big_endian# true - type# fix_checksum_ipv4 pkt_offset# 18 cache_size# 5000 mode# rate# type# pps value# 1 type# continuous count# 1 pkts_per_burst# 1 total_pkts# 1 ibg# 0 flow_stats# enabled# true stream_id# 200 rule_type# stats- name# L2VPN-RSVP-1 stream# name# L2VPN-RSVP-1 advanced_mode# true enabled# true self_start# true next_stream_id# -1 isg# 0 flags# 2 action_count# 0 packet# meta# '' binary# >- 
PP3+pKriAAAAAAAAgQAAZQgARQAFYgABAABABjWUEAAAATAAAAFOIHUwAAAAAAAAAABQAiAAh1YAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== model# '' rx_stats# enabled# false vm# split_by_var# '' instructions# - next_var# null name# TCP_sport_5 min_value# 20000 init_value# 20000 step# 1 max_value# 21000 type# flow_var size# 2 op# inc split_to_cores# true - next_var# null name# pkt_size_1 min_value# 80 init_value# 80 step# 1 max_value# 1400 type# flow_var size# 2 op# inc split_to_cores# true - name# TCP_sport_5 add_value# 0 pkt_offset# 38 type# write_flow_var is_big_endian# true - type# trim_pkt_size name# pkt_size_1 - name# pkt_size_1 add_value# -18 pkt_offset# 20 type# write_flow_var is_big_endian# true - type# fix_checksum_ipv4 pkt_offset# 18 cache_size# 5000 mode# rate# type# pps value# 1 type# continuous count# 1 pkts_per_burst# 1 total_pkts# 1 ibg# 0 flow_stats# enabled# true stream_id# 300 rule_type# statsand TREX-Config-port1.yaml- name# L3VPN-2 stream# name# L3VPN-2 advanced_mode# true enabled# true self_start# true next_stream_id# -1 isg# 0 flags# 0 action_count# 0 packet# meta# '' binary# >- 
AAAAAAAAAAAAAAAAgQAH0AgARQAFxgABAABABl4uDAAAAgsAAAInECcQAAAAAAAAAABQAiAAJSEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= model# '' rx_stats# enabled# false vm# split_by_var# '' instructions# - next_var# null name# TCP_sport_5 min_value# 10000 init_value# 10000 step# 1 max_value# 11000 type# flow_var size# 2 op# inc split_to_cores# true - next_var# null name# pkt_size_1 min_value# 80 init_value# 80 step# 1 max_value# 1500 type# flow_var size# 2 op# inc split_to_cores# true - name# TCP_sport_5 add_value# 0 pkt_offset# 38 type# write_flow_var is_big_endian# true - type# trim_pkt_size name# pkt_size_1 - name# pkt_size_1 add_value# -18 pkt_offset# 20 type# write_flow_var is_big_endian# true - type# fix_checksum_ipv4 pkt_offset# 18 cache_size# 5000 mode# rate# type# pps value# 1 type# continuous count# 1 pkts_per_burst# 1 total_pkts# 1 ibg# 0 flow_stats# enabled# true stream_id# 101 rule_type# stats- name# L2VPN-2 stream# name# L2VPN-2 advanced_mode# true enabled# true self_start# true next_stream_id# -1 isg# 0 flags# 2 action_count# 0 packet# meta# '' binary# >- 
PP3+pKrhAAAAAAAAgQAAZQgARQAFxgABAABABjUwMAAAARAAAAFOIJxAAAAAAAAAAABQAiAAX+IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= model# '' rx_stats# enabled# false vm# split_by_var# '' instructions# - next_var# null name# TCP_sport_5 min_value# 20000 init_value# 20000 step# 1 max_value# 21000 type# flow_var size# 2 op# inc split_to_cores# true - next_var# null name# pkt_size_1 min_value# 80 init_value# 80 step# 1 max_value# 1400 type# flow_var size# 2 op# inc split_to_cores# true - name# TCP_sport_5 add_value# 0 pkt_offset# 38 type# write_flow_var is_big_endian# true - type# trim_pkt_size name# pkt_size_1 - name# pkt_size_1 add_value# -18 pkt_offset# 20 type# write_flow_var is_big_endian# true - type# fix_checksum_ipv4 pkt_offset# 18 cache_size# 5000 mode# rate# type# pps value# 1 type# continuous count# 1 pkts_per_burst# 1 total_pkts# 1 ibg# 0 flow_stats# enabled# true stream_id# 201 rule_type# stats- name# L2VPN-RSVP-2 stream# name# L2VPN-RSVP-2 advanced_mode# true enabled# true self_start# true next_stream_id# -1 isg# 0 flags# 2 action_count# 0 packet# meta# '' binary# >- 
PP3+pKrhAAAAAAAAgQAAZggARQAFYgABAABABjWUMAAAARAAAAFOIJxAAAAAAAAAAABQAiAAYEYAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== model# '' rx_stats# enabled# false vm# split_by_var# '' instructions# - next_var# null name# TCP_sport_5 min_value# 20000 init_value# 20000 step# 1 max_value# 21000 type# flow_var size# 2 op# inc split_to_cores# true - next_var# null name# pkt_size_1 min_value# 80 init_value# 80 step# 1 max_value# 1400 type# flow_var size# 2 op# inc split_to_cores# true - name# TCP_sport_5 add_value# 0 pkt_offset# 38 type# write_flow_var is_big_endian# true - type# trim_pkt_size name# pkt_size_1 - name# pkt_size_1 add_value# -18 pkt_offset# 20 type# write_flow_var is_big_endian# true - type# fix_checksum_ipv4 pkt_offset# 18 cache_size# 5000 mode# rate# type# pps value# 1 type# continuous count# 1 pkts_per_burst# 1 total_pkts# 1 ibg# 0 flow_stats# enabled# true stream_id# 301 rule_type# statsRouter under test configRP/0/RP0/CPU0#NCS55K#show runBuilding configuration...!! IOS XR Configuration 6.6.3!hostname NCS55K!domain ipv4 host aslab.local 172.16.0.2domain name aslab.localdomain name-server 172.16.0.2telnet vrf mgmt ipv4 server max-servers 100telnet vrf default ipv4 server max-servers 100explicit-path name To-IOX index 10 next-address strict ipv4 unicast 75.75.127.3 index 20 next-address strict ipv4 unicast 75.75.127.2!vrf mgmt address-family ipv4 unicast ! address-family ipv6 unicast !!vrf EXTRATEST address-family ipv4 unicast import route-target 2000#1000 ! export route-target 2000#1000 ! 
!!fpd auto-upgrade enable!interface Bundle-Ether11 description to ASR9006# 4x 100G mtu 9114 ipv4 address 1.255.111.1 255.255.255.0 ipv4 verify unicast source reachable-via any ipv6 verify unicast source reachable-via any ipv6 address 2001#1#255#111##1/64 load-interval 30 dampening!interface Loopback0 ipv4 address 75.75.127.1 255.255.255.255!interface tunnel-te110 ipv4 unnumbered Loopback0 logging events lsp-status reoptimize logging events lsp-status state priority 3 3 destination 75.75.127.2 path-option 100 explicit name To-IOX!interface MgmtEth0/RP0/CPU0/0 vrf mgmt ipv4 address 192.168.255.124 255.255.254.0!interface HundredGigE0/2/0/14 description ASR9006 Hu0/1/0/2 bundle id 11 mode active cdp lacp period short carrier-delay up 30 down 0 load-interval 30!interface HundredGigE0/2/0/15 description ASR9006 Hu0/1/0/3 bundle id 11 mode active cdp lacp period short carrier-delay up 30 down 0 load-interval 30!interface HundredGigE0/2/0/23 mtu 9000 load-interval 30!interface HundredGigE0/2/0/23.100 l2transport encapsulation dot1q 100 rewrite ingress tag pop 1 symmetric!interface HundredGigE0/2/0/23.101 l2transport encapsulation dot1q 101 rewrite ingress tag pop 1 symmetric!interface HundredGigE0/2/0/23.1000 vrf EXTRATEST ipv4 address 11.0.0.1 255.255.255.0 encapsulation dot1q 1000!interface HundredGigE0/2/0/30 ipv4 address 1.250.1.2 255.255.255.252 load-interval 30!interface preconfigure HundredGigE0/5/1/0 description ASR9006 Hu0/1/0/0 bundle id 11 mode active cdp lacp period short carrier-delay up 30 down 0 load-interval 30!interface preconfigure HundredGigE0/5/1/1 description ASR9006 Hu01/0/1 bundle id 11 mode active cdp lacp period short carrier-delay up 30 down 0 load-interval 30!router static vrf mgmt address-family ipv4 unicast 0.0.0.0/0 172.16.0.1 ! !!router isis 1 set-overload-bit on-startup 240 is-type level-2-only net 49.0001.0000.0222.0032.00 segment-routing global-block 200000 207999 nsr nsf cisco log adjacency changes address-family ipv4 unicast metric-style wide mpls traffic-eng level-2-only mpls traffic-eng router-id 75.75.127.1 segment-routing mpls sr-prefer ! address-family ipv6 unicast metric-style wide ! interface Bundle-Ether11 bfd fast-detect ipv4 bfd fast-detect ipv6 point-to-point address-family ipv4 unicast fast-reroute per-prefix fast-reroute per-prefix ti-lfa metric 1 mpls ldp sync ! address-family ipv6 unicast metric 1 ! ! interface Loopback0 passive address-family ipv4 unicast prefix-sid index 1 ! address-family ipv6 unicast ! ! interface HundredGigE0/2/0/30 point-to-point address-family ipv4 unicast fast-reroute per-prefix fast-reroute per-prefix ti-lfa metric 1 adjacency-sid absolute 15001 ! address-family ipv6 unicast metric 1 ! !!snmp-server traps bgp cbgp2 updownsnmp-server traps bgp updownrouter bgp 2000 nsr bgp router-id 75.75.127.1 bgp graceful-restart address-family ipv4 unicast ! address-family ipv4 multicast ! address-family vpnv4 unicast ! neighbor 75.75.127.2 remote-as 2000 update-source Loopback0 address-family ipv4 unicast ! address-family vpnv4 unicast ! ! vrf EXTRATEST rd 1#1000 address-family ipv4 unicast redistribute connected ! !!l2vpn router-id 75.75.127.1 load-balancing flow src-dst-ip snmp mib pseudowire statistics pw-class PORT_PW encapsulation mpls protocol ldp control-word transport-mode ethernet load-balancing flow-label both ! ! ! pw-class strictpath-to-iox encapsulation mpls protocol ldp control-word transport-mode ethernet preferred-path interface tunnel-te 110 fallback disable ! ! 
xconnect group g2 p2p 100 interface HundredGigE0/2/0/23.100 neighbor ipv4 75.75.127.2 pw-id 100 pw-class PORT_PW ! ! p2p 101 interface HundredGigE0/2/0/23.101 neighbor ipv4 75.75.127.2 pw-id 101 pw-class strictpath-to-iox ! ! !!mpls oam!rsvp interface Bundle-Ether11 ! interface HundredGigE0/2/0/30 !!mpls traffic-eng interface Bundle-Ether11 admin-weight 6 ! interface HundredGigE0/2/0/30 admin-weight 20 !!mpls ldp log neighbor nsr ! nsr session backoff 5 10 router-id 75.75.127.1 ! interface Bundle-Ether11 ! interface HundredGigE0/2/0/30 !!snmp-server traps pim neighbor-changesnmp-server traps pim invalid-message-receivedsnmp-server traps pim rp-mapping-changesnmp-server traps pim interface-state-change!segment-routing local-block 15000 15999 traffic-eng logging policy status ! policy POLICY1 color 20 end-point ipv4 75.75.127.2 candidate-paths preference 50 explicit segment-list SIDLIST1 ! ! ! ! !!xml agent tty!xml agent!snmp-server traps sensorsnmp-server traps fru-ctrlnetconf agent tty!netconf-yang agent ssh!lldp!ssh client source-interface MgmtEth0/RP0/CPU0/0ssh server rate-limit 40ssh server session-limit 30ssh server v2ssh server vrf mgmtssh server vrf defaultssh server netconf vrf defaultsnmp-server traps fabric bundle linksnmp-server traps fabric bundle stateendRP/0/RP0/CPU0#NCS55K#AcknowledgementTons of thanks to Benoit who made himself available during vacations for the video recording and for providing such a detailed MOP.", "url": "/tutorials/ncs-5500-fabric-migration/", "author": "Nicolas Fevrier", "tags": "" } , "tutorials-decommissioning-internet-optimized-mode": { "title": "Decommissioning the Internet-Optimized Mode", "content": " Remove Internet-opt Mode Introduction Feature description Decommissioning default# host-optimized mode configure-able# host-optimized-disable and custom-lem Document updated You can find more content related to NCS5500 including routing memory management, VRF, URPF, Netflow, QoS, EVPN implementation following this link.Update# Contrary to what was initially described, the decommissionning in 7.3.1 only covers the IPv4 CLI. The v6 CLI will be removed in 7.4.1.IntroductionIn early 2021, with the IOS XR release 7.3.1, we will remove the internet-optimized mode.This feature was introduced to optimize the v4/v6 prefix distribution between LEM and LPM for the first generation of NCS5500 products.Feature descriptionMultiple hacks have been used for this optimization, changing the lookup order of operation, spliting the /23 prefixes in two sub-sequent /24s, …During years, it was working fine for internet distribution as demonstrated in December 2017 in this blog post# https#//xrdocs.io/ncs5500/tutorials/2017-12-30-full-internet-view-on-base-ncs-5500-systems-s01e04/.So why do you want to remove this feature?Multiple reasons# full internet view doesn’t fit anymore# we started passing the message two years ago in CiscoLive. With large IGP, MPLS and MAC addresses# it doesn’t work any more.Internet v4 and v6 is growing too vast and the optimization if not enough. We don’t recommend to use systems without eTCAM, or at least without large LPM (like the NCS55A1-24H and the NCS55A1-24Q-6H-SS, but not the -S) if you need a full internet view. it was not widely deployed. We identified just a small amount of customers enabling that feature. 
it’s using space in the code that could be re-allocated to something newDecommissioningStarting from IOS XR 7.3.1 (a release targeted for Feb/Mar 2021), we will remove this hw-module configuration options# hw-module fib ipv4 scale internet-optimizedRP/0/RP0/CPU0#5508-1-721(config)#hw-module fib ipv4 scale ? host-optimized-disable Configure Host optimization by default internet-optimized Configure Intetrnet optimizedRP/0/RP0/CPU0#5508-1-721(config)If you perform an upgrade to 7.3.1 and the config lines are present, it will be displayed as# ~Pre-existing [hw-module fib ipv4 scale internet-optimized] config has ~ ~been found. This feature isn't supported anymore and therefore ignored. ~ ~Please delete this config. ~ But it’s just a warning message, not an error.What will be the remaining options#default# host-optimized modeNo configuration, the IPv4 and IPv6 will be sorted in the internal database following this logic#configure-able# host-optimized-disable and custom-lemThe remaining configuration options will be# v4 host-optimized-disable v6 custom-lem and internet-optimizedDocument updatedWe also added a mention in the following docs and videos to inform that internet-optimized mode will be removed and shouldn’t be used anymore# https#//xrdocs.io/ncs5500/tutorials/2017-08-03-understanding-ncs5500-resources-s01e02/ https#//xrdocs.io/ncs5500/tutorials/2017-08-07-understanding-ncs5500-resources-s01e03/ https#//xrdocs.io/ncs5500/tutorials/2017-12-30-full-internet-view-on-base-ncs-5500-systems-s01e04/ https#//xrdocs.io/ncs5500/tutorials/Understanding-ncs5500-jericho-plus-systems/ https#//xrdocs.io/ncs5500/tutorials/ncs5500-urpf/ https#//xrdocs.io/ncs5500/tutorials/ncs5500-routing-resource-with-2020-internet/ https#//www.youtube.com/watch?v=8Tq4nyP2wuA https#//www.youtube.com/watch?v=nT31rHqFm-o", "url": "/tutorials/decommissioning-internet-optimized-mode/", "author": "Nicolas Fevrier", "tags": "" } , "tutorials-y-1564-sadt-nc5x-part1": { "title": "Y.1564 Service Activation Testing on NCS : Part1", "content": " Paban Sarma, Technical Marketing Engineer (pasarma@cisco.com) Chethan K .S, Software Test Lead (ches@cisco.com) OverviewThe ITU-T recommendation Y.1564 defines an out-of-service test methodology to confirm the proper configuration and performance of an Ethernet service prior to customer delivery and covers the case of both a point-to-point and point-to-multipoint topology. This service activation acceptance testing of Ethernet-based services can be implemented as a test function inside of a network element. This particular functionality was introduced on Cisco NCS 500 and NCS 5500 series routers starting release IOS XR 7.1.x.The objective of this series of articles is to capture the Y.1564 capabilities of NCS 5500 and NCS 500 series routers. We will discuss the implementation and various use-cases and their configurations & verifications. This part includes concepts, supported scenarios & configuration examples to demonstrate how operators can run Y.1564 SADT on the box.Y.1564 Concepts#Traffic Generation and MeasurementThere are various operations as per Y.1564 viz. Traffic Generation Mode# The device under test (DUT) generates the traffic and sends it out on the interface where service has been provisioned. This eliminates the need for an external probe Passive Measurement Mode# Here the DUT measures the traffic received on the service interface in order to verify the proper service configuration. This mode is not available in NCS 500 and NCS 5500 products. 
Two way statistics collection Mode# In this mode, traffic generation and all measurements are done locally on the DUT. Traffic is looped back on the far end after MAC swap. Based on the return traffic this mode calculates various statistics like throughput, loss etc. For this mode, the remote end needs to be properly configured to loopback the traffic sent towards it. The Y.1564 implementation on NCS 500 and NCS 5500 routers implements the two-way mode.The direction of Traffic Generation can be internal or external. In internal mode the traffic is generated at the UNI and forwarded towards the network (via the service). In external mode traffic is sent out of the interface.Target ServicesThe services running on the network can be both Layer2 and Layer3. The Y.1564 feature on NCS 500 and NCS 5500 routers addresses only the layer 2 point to point services, viz. Layer 2 local cross-connects Layer 2 VPWS (T-LDP PW) EVPN-VPWSThese L2 services can be configured on UNI based on main interface or sub-interface (both physical and bundle).Traffic ProfilesWith the current implementation of Y.1564 on NCS 500 and NCS 5500 routers can only generate layer2 traffic. The following different parameter can be specified in the Y.1564 test profile. Outer COS Inner COS DEI (for color aware flows) Packet Size Fixed size (range) EMIX pattern (defined in Y.1564) Destination MAC Information Rate (IR) Committed Information Rate (CIR) Excess Information Rate (EIR) only for color aware generation# (IR-CIR) Duration of the test 1 -1440 (minutes) The source MAC for generated traffic flow is taken from the interface on which Y.1564 profile is attached. Destination MAC address can be specified while starting an Y.1564 test.The following table shows the packet size for Y.1564 SADT on the NCS platforms. a b c d e f g h u 64 128 256 512 1024 1280 1518 MTU user defined Note# 'h' in EMIX pattern refers to MTU size. In current XR releases this refers to max NPU MTU size which is 9646 bytes. Starting XR 24.2.x EMIX pattern 'h' will generate packets acoording to interface MTU sizeMeasurement StatisticsY.1564 defines various parameters that can be measured on the DUT, which in turn related to the performance indicators of the service like throughput and latency. These parameters help in validating the committed SLA for the provisioned service. Cisco NCS 500 and NCS 5500 routers support the measurement of the following statistical parameters# Frame Loss Frame Loss Ratio (FLR) Frame Delay (FD) Min Max Average Measurement of Frame Delay Variation (Jitter) is not supported. The above parameters are computed over a test duration which is configurable under the SAT profile in the range of 1 minutes to 1440 Minutes (24 Hour).Using Y.1564 SATThe Y.1564 testing methodology on NCS 500/5500 routers includes the following steps. Enable Service Permit Y.1564 on the interface Configure and Y.1564 Profile Run an Y.1564 test by attaching the profile to the interface Verifying Y.1564 ResultsEnabling ServiceAs mentioned earlier only point-to-point L2VPN services are supported. Following config snippet shows targeted LDP based p2p VPWS configuration between two PEs Service Config on PE1 Config on PE2 LDP VPWS l2vpn xconnect group vpws p2p 1001 interface TenGigE0/0/0/2.1001 neighbor ipv4 172.16.3.44 pw-id 1001 l2vpnxconnect group vpwsp2p 1001interface TenGigE0/0/0/1.1001neighbor ipv4 172.16.3.18 pw-id 1001 Permit Y.1564 on Target InterfaceThe service UNI or Attachment circuit needs to be enabled to run Y.1564 tests. 
Ideally the target interface must be an L2 transport interface (physical port or sub-interface, bundle or non-bundle). The permitted test can be either in internal or external or both depending on the type of test we want to run. Following config snippet is shown from the node PE1.(refer to logical topology in the concept section).interface TenGigE0/0/0/2.1001 l2transport encapsulation dot1q 1001 ethernet service-activation-test permit [internal | external | all]Note# The Y.1564 mode by default is two-way measurement mode, so the generated traffic is expected to be looped back from remote end. This can be achieved by using Ethernet Data plane loopback feature on NCS 500 and NCS 5500 boxes. Example config to enable loopback below (from Node PE2)#interface TenGigE0/0/0/1.1000 l2transport \tencapsulation dot1q 1000 \tethernet loopback \t permit [internal | external]Configuring Y.1564 traffic profileThis section illustrates the different components of a Y.1564 test profile. As explained in earlier section, the Y.1564 SAT profile can be color aware or color blind and various layer 2 fields can be specified. Each profile is represented by a profile name which may contain alphanumeric and special character. The mode of operation is “two-way” by default and doesn’t need configuration. Following are some examples of Y.1564 profile configuration. Serial# Configuration Remarks 1 ethernet service-activation-test profile profile_#_1 outer-cos 5 duration 10 minutes packet-size emix information-rate 750 mbps Color blind profile with EMIX frame size at 750 Mbps IR for duration of 10 minutes 2 ethernet service-activation-test profile profile_#_2 outer-cos5 duration 5 minutes packet-size 1024 information-rate 1 gbps Color blind profile with fixed frame size of 1024 Bytes at 1Gbps IR for duration of 5 minutes 3 ethernet service-activation-test profile profile_#3 outer-cos4 duration 5 minutes color-aware cir 1500 mbps eir-color COS 3 packet-size 1500 information-rate 2 gbps Color aware profile at CIR 1500 Mbps and EIR 500 Mbps. CIR COS is 4 and EIR COS is 3. DEI for EIR is not set. 4 ethernet service-activation-test profile profile_#4 outer-cos1 duration 5 minutes color-aware cir 700 mbps eir-color set-dei COS 0 packet-size 512 information-rate 1 gbps Color aware profile at CIR 700 Mbps and EIR 300 Mbps. CIR COS is 1 and EIR COS is 0. DEI for EIR is set. 5 ethernet service-activation-test profile profile_#_5 outer-cos 5 duration 10 minutes packet-size emix sequence bchu u-value 800 information-rate 750 mbps Color blind profile with EMIX (equal number of 128,256, MTU and 800 bytes) at 750 Mbps IR for duration of 10 minutes Running Y.1564 Service Activation TestThe final step is starting an Y.1564 testing. In this step the Y.1564 profile is attached to a target interface and destination MAC is configured. MAC of the target interface is used as the source MAC of generated packet. This needs to be enabled on the exec mode.ethernet service-activation-test start interface tenGigE 0/0/0/2.1003 profile profile_#_1 destination 1.2.3 direction internalethernet service-activation-test start interface tenGigE 0/0/0/2.1002 profile profile_#_1 destination 1.2.3 direction externalThe destination MAC adress specified is used in the generated packets. The direction can be either internal or external. 
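If a running test has to be aborted before the configured duration expires, IOS XR also provides a stop form of the same exec command. A minimal sketch is shown below; the exact syntax may vary by release, and the interface name is simply the one used in the example above#
ethernet service-activation-test stop interface tenGigE 0/0/0/2.1003
Statistics gathered up to the stop point should still be viewable with the show commands covered in the next section.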
The important point is that, the interface must have the service-activation-test permitted in the particular direction.Note# Since the mode used is two way mode, the remote end must have the loopback enabled (MAC SWAP) before starting a test. Else, calculated statistics will be inaccurate.Verifying Y.1564 Test ResultsThe results of an ongoing or completed Y.1564 test can be seen using the “show ethernet service-activation-test” cli. The following table summarizes the relevant list of CLIs. The test results are preserved and can be viewed until a new Y.1564 test is started on the same target interface.List of CLIs CLI DEscription show ethernet service-activation-test Shows details of current and past Y.1564 tests. Also includes all the interface where test is permitted but no test is run show ethernet service-activation-test brief Shows list of interface, permission and status of Y.1564 test on the target interface show ethernet service-activation-test in-progress Shows details of the currently running Y.1564 test on all target show ethernet service-activation-test completed Shows details of the past Y.1564 test on all target show ethernet service-activation-test interface Shows details of current or past Y.1564 test on the target interface specified Detailed Output RP/0/RP0/CPU0#PE1#show ethernet service-activation-test interface tenGigE 0/0/0/2.1003Fri Dec 18 01#20#28.503 GMT+4Interface TenGigE0/0/0/2.1003 Service activation tests permitted Test completed# Duration 10 minute(s) Information rate 750 Mbps Color-blind Internal, Two-way, Destination 00#01#00#02#00#03 Packet size EMIX, Sequence 'abceg', Pattern hex 0x00 Outer CoS 5 Results# Step 1, Information Rate 750 Mbps CIR packets# Tx packets# 94286350, bytes# 0 Rx packets# 94286350, bytes# 38476876568 FL# 0, FLR# 0% FD# Min 10.680us, Mean 12.773us, Max 18.548us IFDV# Not supported Out of order packets# 0 (0%) Error packets# 0 (0%) EIR packets# Tx packets# 0, bytes# 0 Rx packets# 0, bytes# 0 FL# 0, FLR# 0% FD# Min 0.000us, Mean 0.000us, Max 0.000us IFDV# Min 0.000us, Mean 0.000us, Max 0.000us Out of order packets# 0 (0%) Error packets# 0 (0%)The above output can be interpreted as below# Test on interface Interface TenGigE0/0/0/2.1003 is completed. Test duration was 10 minutes. It was a color blind profile in two way mode in internal direction. Packet size is EMIX with sequence « abceg » is used, i.e equal ratio of packets sized 64,128,256,1024 and 1518 bytes are generated. The pattern is 0x00 i.e. data generated is 0x00 Outer COS value is 5 for the generated packets. From the results we can see the repecteive counts of Tx and Rx packets in CIR section only as it is a color blidn profile The FL and FLR are 0. Minimum, maximum and average delay values are also shown. The Jitter (IFDV) is not supported as stated earlier. There was no error packets. All statistics related to EIR are shown as 0 because this is a color blind testConclusion#In this article, we have captured the Y.1564 concepts and steps to implement a Y.1564 service activation test on NCS 500 and 5500 routers. 
In next articles we will focus on utilizing the Y.1564 functionalities like color aware generation to validate different service requirement.", "url": "/tutorials/y-1564-sadt-nc5x-part1/", "author": "Paban Sarma", "tags": "iosxr, cisco, NCS 5500, NCS 500" } , "tutorials-ncs55x-and-ncs5xx-domain-based-lpts-policers": { "title": "NCS55x and NCS5xx Domain Based LPTS Policers", "content": " On This Page Introduction Problem Definition Advantages of using per domain LPTS Domain Based LPTS Architecture Sample Use Case Feature Support Memory Impact Conclusion Reference IntroductionIt has been a while since we wrote the first article on LPTS. There we introduced the concept of LPTS and how it is implemented on NCS55xx and NCS5xx family of routers. We also saw with examples how LPTS entries are created in the hardware and how they can be altered as per different requirements. In this document we will explore Domain based LPTS Policers and understand the use case of the feature.Problem DefinitionThe usual LPTS implementation treats the traffic in a single domain and packet rate is controlled via single LPTS policer profile. Policer is implemented per flow-type which is classified by LPTS. But customer needs more granular control for LPTS configuration. They require to have separate policer values for different ports in the router. Some LCs in ASR9K support per NPU LPTS policer profiles, where for a set of ports under 1 NPU, a different policer profile can be defined. But the disadvantage is that again we do not have granularity per port. All the ports in the same NPU have to adhere the same LPTS policer defined. What if we want to have different policer for interfaces in the same NPU. Therefore on NCS55xx and NCS5xx we have the provision to achieve separate policer profiles for a set of ports or domain. Domain would be logical grouping of ports. Domains will provide the capability to select LPTS policer profile independently for the defined domains. This will provide better control of the ingress packets in the router.Advantages of using per domain LPTSLet us take an example of NCS 5501. It has 48 1G/10G ports and 6 40G/100G ports. The first set of ports are located in core 0 and second set in core 1. The advantage of using per domain LPTS are# We can use existing concept of known/default policers Per port resources are available on NCS55xx and NCS5xx It strengthens the existing security plus gives added granularity Domain creates individual isolation of ports (e.g VRF) This can scale even with just having internal TCAMNote# By default all the ports will be classified in default domain if no user defined domain is configured.Domain Based LPTS ArchitectureThe above figure represents the architecture of the domain based LPTS. For understanding of the terminology like port arbitrator, Pre-iFIB, iFIB etc and the basic flow of LPTS, it is highly recommended to read this Article. As per the flow described in that artcile, first the entries are downloaded in LPTS Pre-IFIB. LPTS HW Pre-iFIB creates/updates/delete with domain index in the key of Pre-iFIB entry. LPTS domain and its associated interfaces are added/deleted to update port variable of interfaces. LPTS HW Policer is updated or programmed for a specific flow type under a specific LPTS domain index. LPTS Pre-iFIB PD does hardware programming of Domain based policers and HW Pre-iFIB entries to database table. LPTS Pre-iFIB then programs PMF, policer and port variable. 
Management client access the Pre-iFIB HW entry or policer data via the SysDB.Sample Use CaseFor example take the above network. Customer has logically partioned the network into 2 domains i.e. core and peering. NCS5500 has its interfaces in both the domain’s. The core domain needs the BGP-known control packets to be policed at a different rate than the peering domain. To achieve this, concept of domain space partition in LPTS for the ports is very useful. It helps to utilize the port orientation in the network and will enable separate controllable policer profile per domain.Below output shows the default policer value under default domain for BGP-knownRP/0/RP0/CPU0#R1#show lpts pifib hardware police location all | in BGP Tue Feb 2 18#13#04.100 UTCBGP-known 32116 Static 2500 2975 0 0-defaultBGP-cfg-peer 32117 Static 2000 2000 0 0-defaultBGP-default 32118 Static 100 8 0 0-defaultBelow are some of the hardware entries programmed under default domain.Now let us configure one interface under the domain core and we will leave the peering domain as default domain and see how it impacts the values and programminglpts pifib hardware domain core interface TenGigE0/0/0/0!lpts pifib hardware police domain core flow bgp known rate 2000 !!RP/0/RP0/CPU0#R1#show lpts pifib hardware police location all | in BGP Tue Feb 2 18#44#46.931 UTCBGP-known 32116 Static 2500 2975 0 0-defaultBGP-cfg-peer 32117 Static 2000 2000 0 0-defaultBGP-default 32118 Static 100 8 0 0-defaultBGP-known 32216 Global 2000 2398 0 1-coreRP/0/RP0/CPU0#R1#We can see 2 different entries are created. One in default domain and other in the core domain.Note# The duplicate entries created cause just a marginal increase in the memory consumption. Hence to optimise the memory resources, we are allowing to create only one user configured domain.Feature Support Domain configuration is supported only on physical and bundle main interfaces. The configuration will be rejected if we apply on sub-interfaces. Domain name can be any word but can have up to a maximum of 32 characters Only 1 user configured domain is allowed along with default domain. The policer rates that are configured for ports or line cards will have policer rates of the domain after configuring the ports or line cards as part of a domain. For example, if port hundredGigE 0/0/0/1 and port hundredGigE 0/0/0/2 have policer rate of 3000 for ospf unicast known flow and if the ports are configured as part of domain CORE, then the policer rate of domain CORE for ospf unicast known flow is 3000 unless it is configured otherwise User can configure a particular port, a group of ports, or a line card of a router with LPTS policers of a single domain. It is supported on NCS540/NCS560 and NCS5500(J/J+/J2).Memory ImpactThere would be a slight increase in memory for pifibm_server_rp/lc process due to this functionality. Some heap memory would be utilized in keeping the domain states and for caching the entries within the process which would be dynamically updated into platform as and when needed. For normal programming of entries some extra checks on the entries would be added if this functionality is enabled to ensure domain information population. Overall in normal flow there would be very less impact while programming TCAM entries. With any configuration change there would be control plane churn as TCAM reprogramming is triggered. 
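To quantify this impact in a lab before enabling a user-defined domain, a before/after comparison of the hardware resources can be made; as a rough sketch (the location 0/0/CPU0 is only an example), the following commands should show the programmed LPTS entries and the internal TCAM utilization#
show lpts pifib hardware entry brief location 0/0/CPU0
show controllers npu internaltcam location 0/0/CPU0
Comparing the free and allocated entry counts reported by the second command before and after the domain configuration gives an estimate of the additional entries consumed by the duplication.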
The TCAM entries in hardware would depend on the configuration used for ports and the scale of L3-routable entries (as L3 entries gets duplicated if additional domain is configured).ConclusionHope this technote helped to understand the NCS55xx and NCS5xx capability to configure single port/ports or LC into a single domain and separate it from the default domain. This helps in increased security and granularity. Though the memory occupied is a bit more as duplicate entries are created but that doesnt have much of operating issues.Reference CCO Config Guide", "url": "/tutorials/ncs55x-and-ncs5xx-domain-based-lpts-policers/", "author": "Tejas Lad", "tags": "NCS5500, NCS500, LPTS, Control Plane Protection, LPTS Domain Based Filtering" } , "tutorials-y-1564-sadt-nc5x-part2": { "title": "Y.1564 Service Activation Testing on NCS : Part2", "content": " Paban Sarma, Technical Marketing Engineer (pasarma@cisco.com) Chethan K .S, Software Test Lead (ches@cisco.com) OverviewIn Previous Article we looked at the Y.1564 concepts and example configuration to run a Y.1564 Service Activation Test on NCS 5500 and NCS 500 routers. In this part, we will explore more capabilities such as color aware test and their application.Y.1564 Color ProfileAn ethernet service activation test can be either color aware or color blind. By default, a test is color blind and only a single flow is generated at a rate configured in the information rate. The color aware mode is distinguished by use of a different COS marking. This different COS identifies the Excess Information Rate (EIR) flow. The committed flow uses the COS value provided in the “outer-COS” and “inner-COS” configuration. In a color aware flow, the EIR flow that is generated might have the discard eligibility indicator (DEI) set (dei =1). By default, it is not set (dei=0). The Committed information rate (CIR) needs to be configured for color-aware mode and EIR is obtained from the difference of information rate (IR) and committed information rate (CIR).ethernet service-activation-test profile profile_#_1 outer-cos 1 color-aware cir 700 mbps eir-color cos 0 information-rate 1 gbps packet-size 512The line color-aware cir 700 mbps eir-color cos 0 in the SADT profile denotes that this is a color aware profile. The committed information rate (CIR) is specified in the same configuration line. The excess information rate (EIR) is obtained from the difference of information rate (IR) and the CIR . When traffic is generated, traffic within CIR is marked with the outer-cos value in profile, traffic exceeding CIR profile is marked with the eir-color cos.There are certain restriction on cos values that can be used in an color aware Service Activation Profile shown in the below table# PCP value (COS) Support for Color Blind Support for color aware w/o DEI (0) Support for color aware w/ DEI set (1) 0 ✔ ✔ ✔ 1 ✔ ✔ ✔ 2 ✔ ✔ X 3 ✔ ✔ X 4 ✔ ✔ X 5 ✔ X X 6 ✔ ✔ X 7 ✔ X X Note# Color aware profile needs the cos values to be preserved in the packets. Therefore, color aware Service Activation Test can’t be supported if we have pop operation configured on the attachment circuits (EFPs)Y.1564 Color Aware Profile ConfigurationColor aware profile without DEI bitprofile ca1 outer-cos 4 mode two-way duration 5 minutes color-aware cir 300 mbps eir-color cos 3 description color_aware_sat_profile packet-size 1500 information-rate 500 mbps !The Profile Generates traffic at 500 Mbps and a CIR of 300 Mbps. The CIR traffic is marked with COS value of 4. 
The EIR is 200 Mbps (500-200) and EIR traffic is marked with COS value of 3. The following figures shows the snapshot of generated traffic.Color aware profile with DEI bitprofile ca2 outer-cos 1 mode two-way duration 5 minutes color-aware cir 300 mbps eir-color set-dei cos 0 description color_aware_sat_profile_w_dei packet-size 1500 information-rate 500 mbps !This profile generates traffic at 500 Mbps with a CIR if 300 Mbps. The CIR traffic is marked with COS value of 1. The EIR here is 200 Mbps (500- 300) and EIR traffic is marked with COS 0 and DEI bit for EIR traffic is also set to 1. The following figures shows the snapshot of generated traffic.Application# Validating Color Aware/Multi-CoS Bandwidth ProfileIn this section, we will explore the application of color aware traffic generation to validate a point-to-point L2 service with a multi COS bandwidth profile. The color aware profile uses two different COS values to generate a CIR and EIR flow. It can be used to validate a service where a color aware or a multi-COS policer is enforcedScenario#A 500 Mbps point-to-point circuit needs to be provisioned with a multi COS bandwidth profile. Both the traffic classes are identified by COS value of 4 and 3. COS 4 has a committed rate of 300 Mbps and peak up to 500 Mbps, similarly COS 3 has a committed rate of 200 Mbps and peak up to 500 Mbps. Total traffic is committed at 500 Mbps and peak rate is also 500 Mbps. This is achieved using a conform-aware hierarchical BWP where parent has a 500 Mbps CIR and PIR policer. The child classes have their individual policer with CIR of 300 and 200 Mbps respectively. Peak-rate for both child is 500 Mbps. The objective is to check if Committed rate is met for both child classes when total traffic rate matching both class cross 500 Mbps.Solution#Once the circuit and QoS all are in place, the service can be validated using a color aware SADT profile. We can leverage the CIR and EIR flows to simultaneously check traffic from both class. We will generate same rate of traffic for both flows (1Gbps each) and check if cos4 is limited to 300 and cos3 is limited to 200 Mbps. Individual class peak-rate can be validated by simply using color blind profiles.Configuration and Verificationsconfig on PE1interface TenGigE0/0/0/0.1001 l2transport encapsulation dot1q 1001 mtu 9000 ethernet service-activation-test permit internal ! service-policy input conform-parent!L2vpnxconnect group vpws p2p ll_pw interface TenGigE0/0/0/0.1001 neighbor ipv4 2.2.2.2 pw-id 1001config on PE2interface TenGigE0/0/0/2.1001 l2transport encapsulation dot1q 1001 mtu 9000 ethernet loopback permit internal ! service-policy input conform-parent!L2vpnxconnect group vpws p2p ll_pw interface TenGigE0/0/0/2.1001 neighbor ipv4 1.1.1.1 pw-id 1001Common QoS Configpolicy-map conform-parent class class-default service-policy conform-child police rate 500 mbps peak-rate 500 mbps ! ! end-policy-mapclass-map match-any green match COS 4 end-class-map!class-map match-any yellow match COS 3 end-class-mappolicy-map conform-child class green police rate 300 mbps peak-rate 500 mbps ! ! class yellow police rate 200 mbps peak-rate 500 mbps ! ! class class-default police rate 0 bps ! ! 
end-policy-mapSADT Profile configuration#ethernet service-activation-test profile color-aware outer-cos 4 mode two-way duration 15 minutes color-aware cir 1 gbps eir-color cos 3 packet-size 1500 information-rate 2 gbpsethernet service-activation-test profile color-blind outer-cos 4 mode two-way duration 15 minutes packet-size 1500 information-rate 2 gbpsThe color aware profile used to validate the BWP is generating IR at 2 Gbps (1Gbps CIR & 2-1=1Gbps EIR). The expectation is we would see 70% loss for CIR and 80% loss for the EIR streams. To validate individual class can peak upto 500 Mbps, color blind profile is used at 2 Gbps and expected FLR is 75%.SADT ResultNow, we start the service activation test on the UNI interface. The loopback is enabled on the remote UNI in internal direction. From below results, we see that both during the test and after the test the cumulative traffic loss for class green (CIR, COS 4) is 70% which means 300 Mbps traffic is flowing through the circuit. For EIR it is 80% FLR meaning 200 Mbps is guaranteed.ethernet service-activation-test start interface TenGigE0/0/0/0.1001 profile color-aware destination 1.2.3 direction internalNote# The remote PE must enable a MAC swap loop for correct SADT statistics. EDPL functionality can be used on Cisco NCS devices for this.RP/0/RP0/CPU0#PE1# RP/0/RP0/CPU0#T-2006#show ethernet service-activation-test interface TenGigE0/0/0/0.1001Mon Feb 1 10#50#24.335 UTCInterface TenGigE0/0/0/0.1001 Service activation tests permitted (internal only) Test completed# Duration 15 minute(s) Information rate 2 Gbps Color-aware, CIR# 1 Gbps, EIR# CoS 3 Internal, Two-way, Destination 00#01#00#02#00#03 Packet size 1500, Pattern hex 0x00 Outer CoS 4 Results# Step 1, Information Rate 2 Gbps CIR packets# Tx packets# 74923274, bytes# 112384911000 Rx packets# 22180963, bytes# 33271444500 FL# 52742311, FLR# 70% FD# Min 11.736us, Mean 14.605us, Max 17.532us IFDV# Not supported Out of order packets# 11220557 (15%) Error packets# 0 (0%) EIR packets# Tx packets# 74923274, bytes# 112384911000 Rx packets# 14696226, bytes# 22044339000 FL# 60227048, FLR# 80% FD# Min 12.224us, Mean 14.625us, Max 17.480us IFDV# Not supported Out of order packets# 10596475 (14%) Error packets# 0 (0%)RP/0/RP0/CPU0#T-2006#show policy-map int TenGigE 0/0/0/0.1001 input Mon Feb 1 10#20#32.468 UTCTenGigE0/0/0/0.1001 input# conform-parentClass class-default Classification statistics (packets/bytes) (rate - kbps) Matched # 191993780/288374657560 1999997 Transmitted # 47299928/71044491856 492736 Total Dropped # 144693852/217330165704 1507261 Policing statistics (packets/bytes) (rate - kbps) Policed(conform) # 47299928/71044491856 492763 Policed(exceed) # 0/0 0 Policed(violate) # 144693852/217330165704 1507327 Policed and dropped # 144693852/217330165704 Policy conform-child Class green Classification statistics (packets/bytes) (rate - kbps) Matched # 21072224/31650480448 999951 Transmitted # 6230480/9358180960 295643 Total Dropped # 14841744/22292299488 704308 Policing statistics (packets/bytes) (rate - kbps) Policed(conform) # 6230480/9358180960 295643 Policed(exceed) # 0/0 0 Policed(violate) # 14841744/22292299488 704308 Policed and dropped # 14841744/22292299488 Policed and dropped(parent policer) # 0/0 Policy conform-child Class yellow Classification statistics (packets/bytes) (rate - kbps) Matched # 95996887/144187324274 1000046 Transmitted # 41069448/61686310896 197093 Total Dropped # 54927439/82501013378 802953 Policing statistics (packets/bytes) (rate - kbps) Policed(conform) # 
41069448/61686310896 197093 Policed(exceed) # 0/0 0 Policed(violate) # 54927439/82501013378 802953 Policed and dropped # 54927439/82501013378 Policed and dropped(parent policer) # 0/0As we see from above results Committed rate for both service is met. The FLR of ~70% and 80% for the CIR and EIR flows mean a throughput of 300 Mbps and 200 Mbps for the CoS 4 and CoS 3 respectively. The below output for the color blind profile can validate that individually, the CoS 4 class can reach a peak-rate of 500 Mbps. The test is run on ideal scenario and the FLR of 75% on 2 Gbps IR indicates that a throughput of 500 Mbps is achieved.In Both of the test result there is out of order packets which is expected as OOO is flagged if a recieved packet sequence is different from the expected sequence and it is bound to happen when some of the generated packets are loss due to QoS enforecment.RP/0/RP0/CPU0#T-2006#show ethernet service-activation-test interface TenGigE0/0/0/0.1001Mon Feb 1 16#43#44.589 UTCInterface TenGigE0/0/0/0.1001 Service activation tests permitted (internal only) Test completed# Duration 15 minute(s) Information rate 2 Gbps Color-blind Internal, Two-way, Destination 00#01#00#02#00#03 Packet size 1500, Pattern hex 0x00 Outer CoS 4 Results# Step 1, Information Rate 2 Gbps CIR packets# Tx packets# 149851022, bytes# 224776533000 Rx packets# 36768654, bytes# 55152981000 FL# 113082368, FLR# 75% FD# Min 12.188us, Mean 14.825us, Max 20.252us IFDV# Not supported Out of order packets# 11284818 (8%) Error packets# 0 (0%) EIR packets# Tx packets# 0, bytes# 0 Rx packets# 0, bytes# 0 FL# 0, FLR# 0% FD# Min 0.000us, Mean 0.000us, Max 0.000us IFDV# Min 0.000us, Mean 0.000us, Max 0.000us Out of order packets# 0 (0%) Error packets# 0 (0%)RP/0/RP0/CPU0#T-2006#show policy-map interface tenGigE 0/0/0/0.1001Mon Feb 1 16#50#17.880 UTCTenGigE0/0/0/0.1001 input# conform-parentClass class-default Classification statistics (packets/bytes) (rate - kbps) Matched # 45190770/67876536540 2000402 Transmitted # 11133061/16721857622 492788 Total Dropped # 34057709/51154678918 1507614 Policing statistics (packets/bytes) (rate - kbps) Policed(conform) # 11133061/16721857622 492788 Policed(exceed) # 0/0 0 Policed(violate) # 34057709/51154678918 1507614 Policed and dropped # 34057709/51154678918 Policy conform-child Class green Classification statistics (packets/bytes) (rate - kbps) Matched # 45190770/67876536540 2000402 Transmitted # 11133061/16721857622 492788 Total Dropped # 34057709/51154678918 1507614 Policing statistics (packets/bytes) (rate - kbps) Policed(conform) # 11133061/16721857622 492788 Policed(exceed) # 0/0 0 Policed(violate) # 34057709/51154678918 1507614 Policed and dropped # 34057709/51154678918 Policed and dropped(parent policer) # 0/0 Policy conform-child Class yellow Classification statistics (packets/bytes) (rate - kbps) Matched # 0/0 0 Transmitted # 0/0 0 Total Dropped # 0/0 0 Policing statistics (packets/bytes) (rate - kbps) Policed(conform) # 0/0 0 Policed(exceed) # 0/0 0 Policed(violate) # 0/0 0 Policed and dropped # 0/0 Policed and dropped(parent policer) # 0/0 Policy conform-child Class class-default Classification statistics (packets/bytes) (rate - kbps) Matched # 0/0 0 Transmitted # 0/0 0 Total Dropped # 0/0 0 Policing statistics (packets/bytes) (rate - kbps) Policed(conform) # 0/0 0 Policed(exceed) # 0/0 0 Policed(violate) # 0/0 0 Policed and dropped # 0/0 Policed and dropped(parent policer) # 0/0Policy Bag Stats time# 1612198187756 [Local Time# 02/01/21 16#49#47.756]ConclusionWe covered Y.1564 
color aware test profile and various aspects of the configuration. We have also illustrated one use case scenario, where color aware SADT comes handy in verifying a L2 service with multi-CoS/color aware bandwidth profile.", "url": "/tutorials/y-1564-sadt-nc5x-part2/", "author": "Paban Sarma", "tags": "iosxr, NCS 5500, NCS 500" } , "tutorials-ncs-5500-interface-and-qos-1d-scale": { "title": "NCS-5500 Interface and QoS 1D Scale", "content": " On This Page Introduction Non-QoS L2 and L3 Interface Scale L3 Interface and Subinterface Scale L2 Interface and Subinterface Scale Mixed L2 and L3 Interface and Subinterface Scale Non-QoS Bundle Interface Scale Non-HQOS Mode Bundle Interface Scale HQOS mode Bundle Interface Scale Ingress and Egress QoS Interface Scale Ingress QoS Interface Scale Egress QoS Interface Scale IntroductionThis tutorial will discuss single dimensional interface and QoS scale of the NCS-5500 series.The first section will introduce the basic L2 and L3 interface scale, without enabling QoS.The next section will discuss the impact of HQOS mode on bundle interface scale.Then we will discuss the impact of enabling QoS on interface scale, with respect to both ingress and egress.Non-QoS L2 and L3 Interface ScaleBelow interface scales are effective from IOS XR 6.6.3 and 7.0.1. No QoS and no HQOS mode for these scales. Maximum bundle interface scale is assumed to be the maximum 1024, and more details will be discussed on bundle scale in the next section.L3 Interface and Subinterface ScaleFor L3 main interfaces, it is the number of physical interfaces present on a particular fixed or modular chassis, whatever the speed. Please note for breakout, it is counted as the number of interfaces after the breakout.L3 subinterfaces scale is both per interface and per system, relevant limits are#L3 subinterfaces <= 2000L3 bundle subinterfaces <= 1024L3 subinterfaces + L3 bundle subinterfaces <= 2000L3 bundle main interfaces + L3 bundle subinterfaces <= 1790L3 main interfaces + L3 subinterfaces < 2558L3 (main + subinterfaces) + L3 bundle (main + subinterfaces) < 2558L2 Interface and Subinterface ScaleMax subinterfaces per interface is 4094 (0,4095 reserved), relevant limits are#L2 subinterfaces <= 4094/4095 per interface/systemL2 (main + subinterfaces) <= 4094/4095 per systemL2 bundle subinterfaces <= 4094/4096 per interface/systemL2 (bundle main + bundle subinterfaces) <= 4097 per systemL2 (subinterfaces + bundle subinterfaces) <= 8191 per systemL2 (main + subinterfaces + bundle main + bundle subinterfaces) <=8192 per systemMixed L2 and L3 Interface and Subinterface ScaleFor mixed L2 and L2, relevant limits are#(L3 + L2) subinterfaces <= 6095 system(L3 + L2) main interfaces <= 2304 system(L3 + L2) main + subinterfaces <= 6652 per system(L3 + L2) subinterfaces + bundle subinterfaces <= 8192 per system(L3 + L2) maian and subinterfaces + bundle main and subinterfaces <= 10496 per systemBelow table will present the above scale limits in graphical format for ease of visualization# Non-QoS L2 and L3 Interface Scale Physical Interfaces Bundle Interfaces Total Scale Main Subinterface Main Subinterface L2 L3 L2 L3 L2 L3 L2 L3 Non-HQOS Bundle interfaces only 1024 4096 4096 1024 1024 1024 1790 4097 5120 5120 5120 Physical Interfaces only 2000 4095 6095 2304 2304 2304 2557 4095 6399 6652 6652 Non-HQOS Mixed Physical and Bundle Interfaces 2000 8191 5119 6096 8192 2557 8192 10496 Non-QoS Bundle Interface ScaleBundle interface scale is configurable in IOS XR, with less bundle members support for higher bundle 
interface scale.Let N be the maximum number of bundle interfaces supported. Default N is 256, and can be changed by#RP/0/RP0/CPU0#NCS5500(config)#hw-module profile bundle-scale ? 1024 Max 1024 trunks, Max 16 members (128 main + 896 sub) 512 Max 512 trunks, Max 32 members (128 main + 384 sub) 256 Max 256 trunks, Max 64 members (128 main + 128 sub)Non-HQOS Mode Bundle Interface ScaleThe bundle interface scale is independent of N for non-HQOS mode, and relevant limits are#L3 Bundle subinterfaces <= 1024 per systemL2 Bundle subinterfaces <= 4094/4096 per interface/systemL2 Bundle subinterfaces + L3 Bundle subinterfaces <= 4096 per systemL2 Bundle main interfaces + L3 Bundle main interfaces <=1024L2 Bundle main interfaces + L2 Bundle subinterfaces <=4097L2 Bundle main interfaces + L3 Bundle subinterfaces <=2047(L2 + L3) Bundle main interfaces + (L2 + L3) Bundle subinterfaces <= 5120HQOS mode Bundle Interface ScaleHQOS mode is needed to enable egress hierarchical QoS policies on main interfaces, or to enable egress flat or hierarchical QoS policies on subinterfaces.HQOS mode is disabled by default, and can be enabled by below command#hw-module profile qos hqos-enableHQOS mode will impose additional limits on bundle interface scale, depending on N#L3 Bundle subinterfaces + L2 Bundle subinterfaces < NL3 Bundle main interfaces + L2 Bundle main interfaces <= N(L2 + L3) Bundle main interfaces + (L2 + L3) Bundle subinterfaces <= NBelow table will present the above scale limits in graphical format for ease of visualization# Non-QoS Bundle Scale N (256/512/1024) Main Subinterface Total Scale L2 L3 L2 L3 Non-HQOS Mode 1024 4096 4096 1024 1024 1024 1790 4097 5120 5120 5120 HQOS Mode N-1 N-1 N-1 N N N N Ingress and Egress QoS Interface ScaleEnabling QoS will have impact on interface scale, and the effect will be different for ingress and egress, due to the difference in hardware resources required.Ingress QoS Interface ScaleIngress QoS scale is impacted by the available counters for statistics.In Normal QoS mode, 2 counters are used per ingress policy-map, and support the best interface scale.If Enhanced QoS mode is enabled, 4 counters are used per ingress policy-map, providing better statistics, but also used more hardware resources, resulting in lower interface scale#hw-module profile stats qos-enhancedThe default maximum number of class-maps per ingress policy-map is 32. If you configure a smaller max-classmap-size, it will result in higher interface scale#hw-module profile qos max-classmap-size [4|8|16|32]The maximum number of unique ingress policy-maps per NPU is increased from 30 to 250 from IOS XR 6.6.3/7.0.1. 
However, this does not impact on the ingress QoS interface scale.HQOS mode is not required for ingress qos, even with hierarchical policy-maps, and HQOS mode does not impact ingress QoS interface scale.A main interface consume the same resources as a subinterface with ingress QoS, so we would just refer to number of interfaces for scale purpose.For each bundle member with ingress QoS on a core of a NPU, QoS resources are consumed on both of the 2 cores of that NPU, so per core and per NPU interface scale are the same.Therefore a bundle main/subinterface with M bundle members will consume the equivalent resources of 2xM interfaces with ingress QoS.Below table is the number of interfaces with ingress policy-map attached# QoS Mode Class-Map Size Scale per Core Scale per NPU Bundle Scale per NPU Physical Normal 4 1023 1023 2046 Normal 8 511 511 1022 Normal 16 255 255 510 Normal 32 127 127 254 Enhanced 4 871 871 1742 Enhanced 8 435 435 870 Enhanced 16 217 217 434 Enhanced 32 108 108 216 Egress QoS Interface ScaleEgress QoS scale is impacted by the available queues (VOQ Virtual Output Queues).In NCS-5500 architecture, the egress queues are actually mapped to the buffer on ingress, and organized as virtual output queues. The benefits of this approach are minimized egress buffers, and allows single lookup on ingress for forwarding.In this model, the ingress NPU has to store information for all egress NPU/Linecard, so scale will vary with number of egress NPU/Linecards.Egress qos policy-map supports a maximum of 8 class-maps and each with 1 queue, by default, and no configuration is required.Each physical interface will always be assigned 8 queues, which is fixed.Each physical subinterface will be assigned 8 additional queues when it is created and with egress QoS configured.Bundle main interface with M bundle members does not consume additional queues other than the queues of the M bundle members.Each bundle member of a bundle subinterface will be assigned 8 additional qeueus when the bundle subinterface is created and with egress QoS configured.Therefore a bundle subinterface with M bundle members will consume the equivalent resources of M interfaces with egress QoS.HQOS mode is required if you use egress hierarchical policy-maps on a main/subinterface, and also egress flat policy-map on a subinterface. 
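For illustration, a minimal two-level egress policy of the kind that requires HQOS mode could look like the sketch below; the class names, rates and interface are purely illustrative and not taken from a validated design#
! class names, rates and interface below are illustrative
class-map match-any TC5
 match traffic-class 5
 end-class-map
!
policy-map CHILD-EGRESS
 class TC5
  shape average 2 gbps
 !
 class class-default
  bandwidth remaining percent 50
 !
 end-policy-map
!
policy-map PARENT-EGRESS
 class class-default
  shape average 5 gbps
  service-policy CHILD-EGRESS
 !
 end-policy-map
!
interface HundredGigE0/0/0/24.100
 service-policy output PARENT-EGRESS
!
Attaching a policy like this to a subinterface should be rejected unless hw-module profile qos hqos-enable has been configured first (a reload is required for it to take effect).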
HQOS mode will not impact egress QoS interface scale, but will impact bundle interface scale as in above section.Below table is the number of main/subinterfaces with egress policy-map attached# Scale per Core Fixed 5504 5508 5516 Before IOS XR 7.0.1 512 48 48 48 IOS XR 7.0.1 1024 192 96 48 ", "url": "/tutorials/NCS-5500-interface-and-qos-1d-scale/", "author": "Vincent Ng", "tags": "" } , "tutorials-iosxr-731-innovations": { "title": "IOS XR 7.3.1 Innovations in NCS5500/NCS5700/NCS500 Platforms", "content": " XR 7.3.1 Innovations Introduction Segment Routing IPv6 (SRv6) EVPN Quality of Service Security MPLS Segment Routing IGPs Performance Monitoring and Traffic-Engineering Dynamic SR P2MP Policies Optics New products NCS57B1-6D24 / NCS57B1-5DSE Chassis commons IntroductionIOS XR 7.3.1 has been published in late February 2021 and is an ED version for many XR platforms including# NCS5500 NCS5700 NCS540 NCS560Release notes# NCS540# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5xx/release-notes/73x/b-release-notes-ncs540-r731.html NCS560# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs560/release-notes/73x/b-release-notes-ncs560-r731.html NCS5500# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/general/73x/release/notes/b-release-notes-ncs5500-r731.htmlSoftware download for NCS5500#https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/general/73x/release/notes/b-release-notes-ncs5500-r731.htmlWe asked a few colleagues to help documenting the new improvements brought by the 7.3.1 version. It will be split in software innovations on one side (Segment Routing, SRv6, EVPN, Multicast, QoS, Security, …) and new supported hardware (chassis, power supply, new line cards and new fixed-form-factor products).Segment Routing IPv6 (SRv6).Jakub Horn presents and demonstrates in his lab# SRv6 uSID uSID config uSID ISIS config uSID BGP L3VPN config TI-LFA and uLoop Avoidance config Performance Measurement config FlexAlgo configThe configuration used for the demos are available here#https#//www.segment-routing.net/tutorials/srv6-731-features/EVPN.Jiri Chaloupka and Lampros Gkavogiannis detail the following features# EVPN Head-EndEVPN control plane used on top of the head-end to decide which interface is active or stand-by.evpn interface PW-Ether 1 ethernet-segment identifier type 0 9.8.7.6.5.4.3.2.1l2vpn xconnect group xc100 p2p evpn-headend interface PW-Ether1 neighbor evpn evi 1 target 1 source 1 EVPN Convergence Fast ReRoute Handled by the transport layer (SR TI-LFA for example) MAC Mobility Available since EVPN inception, it uses a sequence number in the advertisement, to track VM move for example# Mass Withdraw Also available since the early EVPN days. In this example, CE1 is multihomed so PE3 sees the mac addresses through the same segment ESI1. PE1 losing the link to CE1, it just sends a RT1 update to PE3, that can reprogram the decision to send the traffic to PE2. But it takes a bit of time to generate, receive and compute this BGP advertisement. Hence the introduction of the next feature. Edge Failure Fast ReRouteSimilar to the L3VPN BGP PIC Edge feature, adapted for EVPN.Using EBGP betwen CE and PE. PE1 will pre-program the back up path over PE2 that will forward via PE1 until the network fully converged. 
During that short period of time, we will have a sub-optimal routing path, but no packet loss.And it works for both All-Active and Single-Active.All-Active configuration#evpn interface Bundle-Ether100  ethernet-segment   identifier type 0 36.37.36.37.36.37.36.37.01   convergence    rerouteSingle-Active configuration#evpn interface Bundle-Ether100  ethernet-segment   identifier type 0 36.37.36.37.36.37.36.37.01   load-balancing-mode single-active   convergence    reroute EVPN Load BalancingSince the beginning we have All-Active mode where PE1 and PE2 are configured to make CE1 believe it’s connected to a single device via link aggregation.Later, we introduced the single-active mode, followed by the port-active mode. SFA Single Flow ActiveTo improve the convergence, particularly when connected to legacy L2 protocols (like spanning-tree, REP-AG.G.8032), we introduce this new load balancing mode.SFA is leveraging pre-programmed information between PE1 and PE2.evpn interface Bundle-Ether100  ethernet-segment   identifier type 0 36.37.36.37.36.37.36.37.01   load-balancing-mode single-flow-active   convergence    mac-mobilityCheck https#//datatracker.ietf.org/doc/html/draft-brissette-bess-evpn-l2gw-proto-06 for more details. Next-Hop Tracking for DF ElectionWe are enabling next-hop tracking for Route-Type 4. When NH disappears from the table, we trigger immediately a DF election.evpn interface Bundle-Ether100 ethernet-segment identifier type 0 36.37.36.37.36.37.36.37.01 load-balancing-mode single-active convergence nexthop-tracking reroute NTP syncWhen you have two nodes on the same ESI segment, and these two routers are NTP synchronized (no need for additional configuration). It will add the timestamps in the RT-4 and the whole convergence process will speed up. Multicast MultiHomingThis new feature enables redundancy for multi-attached multicast receivers. It leverages Route Type 7 and 8 for IGMP Synchronization between PEs. Also, it requires Designated Forwarder (DF) Election between these PE routers.After IGMP snooping has been enabled and this information has been synced with the peer, both the peers need to act like a last hop router and send PIM join upstream. Once traffic arrives on both the peers, only one should forward it to the receiver. Designated Forwarder Election elects one peer to do the forwarding.EVPN IPv4 MC Enhanced DF election for mcast# As per RFC 7432, designated forwarder (DF) election is at the granularity of <ESI,EVI> . However, customers need multi tenancy at finer level. A per multicast stream DF election is required, as they use a single Vlan for IPTV serviceQuality of Service.Paban Sarma provides details on new QoS features# Shared-Policy InstancesThe purpose is to get aggregated QoS value for customers owning multiple sub-interfaces over one given physical port. Example# Customer A paid for 5Gbps on 3 sub-interfaces and Customer B, they paid for 3Gbps over 2 sub-interfaces.Limited to the same parent interface (physical or bundle) and is available for both ingress policers and egress queueing/shaper.policy-map spi-in class class-default police rate 500 mbps ! ! end-policy-map!policy-map spi-out class class-default shape average 500 mbps ! 
end-policy-map!interface TenGigE0/0/0/16.1001 l2transport encapsulation dot1q 1001 service-policy input spi-in shared-policy-instance spi-1-in service-policy output spi-out shared-policy-instance spi-1-out!interface TenGigE0/0/0/16.1002 l2transport encapsulation dot1q 1002 service-policy input spi-in shared-policy-instance spi-1-in service-policy output spi-out shared-policy-instance spi-1-out! Policy-map templates and uniquenessThis new feature is an enhancement of the unique policy-map scale.Before IOS XR 7.3.1# each policy-map has an ID (total unique ID is 250)# same policy map may be attached to different interface but only 250 unique ingress policy can exist on the system.Starting with IOS XR 7.3.1 several policy-maps can share the same ID if they have# Same classification (class-map name independent) Same action policing (police rate independent) Same marking actionExample#policy-map P1-100Mclass class-default  police rate 100 mbps  !  set traffic-class 5!end-policy-map!policy-map P2-1000Mclass class-default  police rate 1000 mbps  !  set traffic-class 5!end-policy-mapP1-100M and P2-1000M have the same classification, same action policing (even if it’s a policing at different rates) and same marking action# they are using the same ID and count for 1.Before 7.3.1, we verify it uses two entries#RP/0/RP0/CPU0#Router-721#show feature-mgr client qos-ea feature-info summary loc 0/0/CPU0 NPU DIR Lookup-type ACL-ID Refcnt Feature-Name--- --- -------------------- ------ ------ ------------0 IN L2_QOS 17 1 P1-100M#00 IN L2_QOS 18 1 P2-1000M#0RP/0/RP0/CPU0#Router-721#With 7.3.1, it’s only a single “feature-name”RP/0/RP0/CPU0#Router-731#show feature-mgr client qos-ea feature-info summary loc 0/0/CPU0 NPU DIR Lookup-type ACL-ID Refcnt Feature-Name PolicyMap-Name--- --- -------------------- ------ ------ ------------------------------- --------------0 IN L2_QOS 10 2 32b51d8e63702738b16423f7e8df7be7 P1-100M P2-1000MRP/0/RP0/CPU0#Router-731#For reference, let’s list a couple of examples that don’t share a common policy ID and count as 3 different entries#policy-map P1-100Mclass class-default  police rate 100 mbps  !  set traffic-class 5!end-policy-map!policy-map P2-1000Mclass class-default  police rate 1000 mbps  !  set traffic-class 3!end-policy-map!policy-map P3-1000Mclass class-default  police rate 1000 mbps  !  
set traffic-class 3 set cos 3!end-policy-mapHere, P1 and P2 are different because they set different traffic-class values (5 and 3).Also, P3 is different from P1 and P2 because it additionally sets cos 3.So, these 3 policy-maps use 3 different IDs and are considered unique.Security.Rakesh Kandula presents the latest improvements in IOS XR security brought by IOS XR 7.3.1# SSD encryption (note# it’s limited to the platforms running XR7)Feature developed for use-cases like# router theft, RMA scenarios, unauthorized disk swaps and router decommissioning.It provides data-at-rest protection for sensitive data like the running configuration. The encryption key is generated in the hardware TAm (Trust Anchor module) chip and is unique per line card in a chassis; it encrypts only a disk partition and not the entire SSD.Configuration#disk encryption activate location 0/x/CPU0 X.509v3 Certificate based SSH authenticationThis new feature removes the need for username/password management (no need for RSA key management for end users)# end users logging into routers are authenticated using X.509v3 certificates (based on RFC 6187).Once local authentication passes, authorization can be local or remote (TACACS)# 1- End user presents their certificate to the router for authentication 2- Router validates the certificate locally for authentication 3- Authorization request is then sent to the TACACS server to fetch the group info of the user 4- User gets access to the router with the appropriate privilege levelConfigurationRP/0/RP0/CPU0#5508-1-731(config)#ssh server algorithms host-key x509v3-ssh-rsaRP/0/RP0/CPU0#5508-1-731(config)#ssh server trustpoint host tp1RP/0/RP0/CPU0#5508-1-731(config)#ssh server certificate username ? common-name user common name(CN) from subject name field user-principle-name user principle name(UPN) from subject alternate nameRP/0/RP0/CPU0#5508-1-731(config)#ssh server certificate usernameConfig guide# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/security/70x/b-system-security-cg-ncs5500-70x/b-system-security-cg-ncs5500-70x_chapter_011.htmlMPLS Segment RoutingThis section is itself split into three parts.IGPs.Jose Liste covers a lot of topics in this 30-minute video# OSPF LFA/TI-LFA for FlexAlgoExtends the benefits of TI-LFA to Flex-Algo# automated, topology-independent and guaranteed sub-50ms per-prefix protection, optimized per Flex-Algo.Flex-Algo will run the shortest path computation for the primary and now also for the back-up path (and with OSPF). The back-up path is computed independently.In this example, the flex-algo 128 definition is “min IGP metric and avoid RED affinity”.The support is now extended for OSPF to the level it was for ISIS# Rounding of Min-Delay Values OSPF Conditional Advertisement ISIS TI-LFA Protection of Unlabeled IPv6 Prefixes Inter-Level SRMS Advertisement PropagationPerformance Monitoring and Traffic-Engineering. Named Profile SR Policy Delay Measurement with Loopback-mode SR Policy Liveness Monitoring SR-TE Cumulative-Metric BoundDynamic SR P2MP Policies. 
Dynamic SR P2MP policies theory and lab demoOptics.New productsNCS57B1-6D24 / NCS57B1-5DSEhttps#//xrdocs.io/ncs5500/tutorials/introducing-ncs57b1-routers/Chassis commons.", "url": "/tutorials/iosxr-731-innovations/", "author": "Nicolas Fevrier", "tags": "iosxr" } , "tutorials-bfd-architecture-on-ncs5500-and-ncs500": { "title": "BFD Architecture on NCS55xx and NCS5xx", "content": " BFD Architecture on NCS55xx and NCS5xx Introduction Quick Refresh BFD Modes Supported on NCS55xx and NCS5xx BFD Single-Path and Multi-Path Sessions BFD Single-Path Support BFD Multi-Path Support BFD Hardware Implementation RX Path TX Path BFD Feature Support BFD Scale BFD Timers Reference Thanks Summary IntroductionAfter successfully introducing the LPTS for control plane protection and the ACL’s for data-plane protection on the NCS55xx and NCS5xx portfolio, we will start this new series on faster convergence. All of us are already aware of the concept Bidirectional Forwarding Detection - BFD. BFD is fairly old technology and is widely deployed across the industry for faster network convergence. In this tech-note, we will discuss the implementation of BFD w.r.t NCS55xx and NCS5xx platforms.Quick RefreshConvergence of business-critical applications is a very important for any network infrastructure, whether it is service provider or any enterprise. Though today’s network have various levels of redundancy, but the convergence is dependant upon the ability of individual network devices to quickly detect failures and reroute traffic to an alternate path. Bidirectional forwarding detection (BFD) provides low-overhead, short-duration detection of failures in the path between adjacent forwarding engines. BFD allows a single mechanism to be used for failure detection over any media and at any protocol layer, with a wide range of detection times and overhead. The fast detection of failures provides immediate reaction to failure in the event of a failed link or neighbor.Another benefit of BFD, is that it provides network administrators with a consistent method of detecting failures. Thus, one availability methodology could be used, irrespective of the Interior Gateway Protocol (IGP) or the topology of the target network. This eases network profiling and planning, because re-convergence time should be consistent and predictable. BFD function is defined in RFC 5880. BFD is essentially a Control plane protocol designed to detect the forwarding path failures. BFD adjacencies do not go down on Control-Plane restarts (e.g. RSP failover) since the goal of BFD is to detect only the forwarding plane failures. [Reference]For understanding the BFD in details please visit the excellent articles. [Reference1 Reference2]. In this document, we will try to focus on BFD architecture and its hardware implementation on NCS55xx and NCS5xx product families.BFD Modes Supported on NCS55xx and NCS5xx Mode Support Asynchronous Yes Echo No For further details and working on the modes of operation, please VisitBFD Single-Path and Multi-Path SessionsBFD IPv4/IPv6 sessions over physical interfaces and sub-interfaces or bundle are referred to as Single-Path sessions. BFD sessions between virtual interfaces (BVI, BE, GRE tunnel) of directly connected peers is referred to as Multi-Path session because of possible asymmetrical routing. 
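To keep the terminology straight before the support tables that follow, here is a tiny illustrative helper restating the classification just described (plain Python, not platform code; the interface labels are only examples)#

```python
# Illustrative mapping of interface types to BFD session categories, restating the
# definitions above: sessions over physical links, sub-interfaces or bundle member
# links (BoB) stay single-path, while sessions over logical/virtual interfaces
# (BLB, BVI, GRE tunnel, BGP multihop) are multi-path because routing can be asymmetric.

SINGLE_PATH = {"physical", "sub-interface", "bundle-member (BoB)"}
MULTI_PATH = {"bundle-logical (BLB)", "BVI", "GRE tunnel", "BGP multihop"}

def session_category(interface_type: str) -> str:
    if interface_type in SINGLE_PATH:
        return "single-path"
    if interface_type in MULTI_PATH:
        return "multi-path"
    return "unknown"

print(session_category("BVI"))            # multi-path
print(session_category("sub-interface"))  # single-path
```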
Both BFD Single-Path and Multipath sessions are supported on NCS55xx and NCS5xx.BFD Single-Path Support BFD Type v4 v6 Physical Interface Yes Yes Sub-Interface Yes Yes BFDoBundle - BoB Yes Yes BFD Multi-Path Support BFD Type v4 v6 BFD over Logical Bundle - BLB Yes Yes BVI Yes Yes BGP MultiHop Yes Yes Note1# BFD Multi-Path (v4/v6) over BVI and BGP Multihop is supported on systems based on J2 from IOS-XR 7.4.1 in native and compatible mode.Note2# BFD Multi-Path (v6) over BVI is not supported on NCS560. It will be supported in future releases.BFD Hardware ImplementationBFD on the NCS55xx and NCS5xx is hardware offloaded. The hardware offload feature enables the offload of a BFD session to the network processing unit - NPU of the line cards on modular chassis or a standalone fixed chassis, in an IPv4 or IPv6 network. BFD hardware offload improves scale and reduces the overall network convergence time by sending rapid failure detection packets to the routing protocols for recalculating the routing table. NCS55xx and NCS5xx uses Pipeline architecture for packet processing. For details on the Pipeline Architecture please visit this excellent article. For further details on BFD hardware offload please visit. Let us understand the packet processing in the pipeline architecture w.r.t BFD.RX PathIngress PP PipelineThe ingress packet processing pipeline provides two main functions# BFD identification# Whether it is a BFD packet (after identifying it as for-us packet). BFD classification# Whether single path or a multi path BFD packet.2-Pass ProcessingAll the BFD packets are processed in 2 cycles. In first cycle, packet is recycled on a well known port (internal) for core0 and core1 before BFD packet processing starts.1st cycle# The first pass is based on L3 Lookup. BFD is treated as an IP packet and check will be made whether it is a ‘for-us’ packet. If the packet is identified as ‘for-us’ packet, it is recycled for second pass. The packet is then sent to the PMF stage. If the packet is not a for-us packet then based on the L3 lookup appropriate action is taken on the packet.2nd cycle# After recycling the packet, parser qualifies the packet as BFD with parser code. Parser is the block which extracts ethertype, MAC addresses and determines offset (where network header starts) for next stages in the pipeline There is a specific parser code for BFD packet which is internal to the hardware. BFD FLP is hit based on the parser context and trap code is generated accordingly. In PMF stage, BFD packet is processed and sets the required traps & destination accordingly. The packets are then sent to OAMP Engine. When OAMP Engine receives the BFD packet, it has 2 options# OAMP Engine consumes the BFD packet OAMP Engine punts the packet to CPU. OAMP Engine punts the BFD packet to CPU When a BFD packet is forwarded to the OAMP engine, it must receive some information on how to process the packet. Various blocks like FLP and PMF update this information in the pipeline and pass it on to the OAM engine. Upon the reception of the BFD packet, the following checks are done. Verify that the packet and the traps are correct or else generate the appropriate error code. Verify that the type taken from the trap is the correct. There are various other checks which are internally done in the pipeline. When any of the required checks fails, the corresponding failure bit is set and destination to the CPU is picked. The packet is then punted to the CPU. 
OAMP Engine consumes the BFD packet If all the checks pass without any failure bit set, the packet is consumed by the OAMP engine and processed. It is not punted to CPU.Highlevel definition of the blocks used in the above discussion Block Description OAM Operation, administration and Management used for administration of the ethernet and BFD OAM OAMP Engine It is a dedicated hardware engine to accelerate the handling of OAM packets. Main functionalities include#-Generates/Receives BFD packets-Generates timeout events for a session if continuous packets are not received.-Punts packet to LC-CPU (BFD stack) in case there is change in BFD session state or flags. Based on which state machine is maintained. FLP Forwarding lookup block in the forwarding engine in the IRPP.-It is a very programmable and flexible block.-It helps in lookup action using different database like LEM, LPM etc.-It has place for OAM classification too. PMF Programmable Mapping and Filtering block is another block present in the pipeline.-It is most programmable block in the pipeline. -It has all the history of the packet from other blocks like incoming ports, lookup results etc.-It takes care of ACL, LPTS, QoS etc. It is there in ingress and egress pipeline. Note#There are multiple blocks in the IRPP and ERPP in the pipeline for packet processing. Parser, FLP, PMF etc are one of those. This is specific to ASIC used, hence not exposing the internal details.TX Path OAMP Engine will generate the BFD packet with complete BFD payload, L3 partial header and UDP header This packet is then injected into the IRPP for further processing. IRPP and ITPP will send the packet to the ETPP. In ETPP and ERPP the complete L3 header is populated and L2 header is also added to the packet Packet is then sent out accordingly from the network interface.Note#OAMP Engine has various blocks which are internal and cannot be published. We have used a single block for simplicationBFD Feature Support BFD is supported only in Asynchronous mode. Demand and Echo mode are not supported. Static, OSPF, BGP and IS-IS applications are supported in IPv4 BFD. Only static routes are supported in IPv6 BFD. BFD over VRF is supported. BFD over BVI is supported only on fixed NCS5500 platforms. BFD support over VRRP interface is supported from IOS-XR 7.2.1 BFD dampening for IPv4 is supported. BFD multihop is supported over an IP and non-IP core. BoB and BLB coexistence is not supported at the moment. It will be supported in the future. BFD with IPv4 supports upto 6 unique timers. With IPv6 we do not have limit of unique timers.Note# We will dedicate separate technotes on some of the features described above for details.BFD ScaleScale on NCS55xx and NCS5xx is always a subject of discussion because of the hardware resources available on the chipsets. The hardware resources are limited and has to be utilised properly to achieve the best results. Let us discuss in more details on how BFD scale is determined for these platforms and how well we have managed to address the available resources.Scale is decided on multiple things. First is obviously the hardware resources. From BFD feature perspective, the OAMP Engine resources play a critical role. The resources are equally divided between CFM and BFD feature needs. Now if we double-click on BFD, then within that resources are again divided for BFD Single-Path and BFD Multi-Path. Again one more consideration, we need to divide the resources amongst IPv4 and IPv6. For IPv4 we only need to worry from the OAMP resources. 
But for IPv6 we also need to take into account other internal resources which limit the scale (internal to the ASIC). IPv6 needs almost three times more resources than IPv4 w.r.t the OAMP engine. Also, when the packets are recycled, we need to take into consideration the recycle queue and its bandwidth shared with other features, as well as the number of recycle passes. All these factors go into scale considerations. Each ASIC type, i.e. Q-MX/J/J+/J2, has different processing, resources and bandwidth capacity, so the scale will vary across chipsets.Considering all the above criteria, we have succeeded in carving the resources optimally to provide the right number of BFD sessions (v4/v6) to suit every use case.Note#If you are looking for details on the scale per product, please contact us or your Cisco representative.BFD TimersThe NCS55xx and NCS5xx BFD implementation supports timers which can be used for faster detection of failures. These are configurable values and users have the flexibility of configuring different timer and multiplier values. BFD v4 sessions do not have any scale limitation w.r.t minimum timer values, but the BFD v6 scale limit does depend on the configured minimum timer. Below is the list of timers and multiplier values supported.Support for IPv4 Timers and Multipliers Type of BFD Session Minimum Timer Supported Default/Minimum Multiplier Physical/Vlan 4ms 3 BOB 4ms 3 BLB 50ms 3 Multi-HOP 50ms 3 Note#Only 6 unique timers are supported with BFDv4.Scale numbers for v4 are not affected by the timer values.Support for IPv6 Timers and Multipliers Type of BFD Minimum Timer Supported Default/Minimum Multiplier Scale Limitation w.r.t min_interval Single Hop 1. 4/5/6/7 ms2. 8ms & above 3 1. 1502. 250 BOB (BFD o/ Bundle Members) 1. 4/5/6/7 ms2. 8ms & above 3 1. 1502. 250 BLB (BFD o/ Logical bundle) 50 ms 3 250 MHOP 50 ms 3 250 Note#There is no restriction on the number of unique timers supported with BFDv6.Scale numbers for v6 are affected by the timer values. The above values are for J/J+/J2 compatible mode; for J2 native mode these numbers will be a bit higher. For other chipsets the numbers will vary. For detailed scale numbers please get in touch with us or your Cisco representative.Reference ASR9k BFD Implementation BFD NCS5500 CCO Config Guide NCS5500 Deepdive Cisco LiveThanksSpecial thanks to Arun Vadamalai for his valuable feedback during the documentation.SummaryThis was an introductory article to get familiar with the BFD feature and the pipeline architecture. In the next article, we will focus more on the technical aspects of how to read the BFD parameters in the outputs and also see some basic debugging. We will touch upon the OAMP engine resource usage. We will discuss the concepts of BLB and BoB and look at their differences and use cases. A small sketch illustrating how the negotiated interval and detection time are derived from these timers follows below.
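As a minimal illustration of the asynchronous-mode timer handling defined in RFC 5880 (an illustrative sketch only, not the platform implementation; the 300 ms / 100 ms values are just an example), each side may not transmit faster than its peer is willing to receive, and the detection time is the remote multiplier times the interval the remote side actually uses#

```python
# Illustrative sketch of RFC 5880 asynchronous-mode timer negotiation.
# Example values only (one side configured with 300 ms, the other with 100 ms,
# both with multiplier 3); this is not the IOS XR implementation, just the arithmetic.

def negotiated_tx_interval(local_desired_min_tx_ms, remote_required_min_rx_ms):
    """A system may not transmit faster than the peer is willing to receive."""
    return max(local_desired_min_tx_ms, remote_required_min_rx_ms)

def detection_time(local_required_min_rx_ms, remote_desired_min_tx_ms, remote_multiplier):
    """Detection time = remote multiplier x the interval the remote will actually use."""
    return remote_multiplier * max(local_required_min_rx_ms, remote_desired_min_tx_ms)

r1 = {"min_tx": 300, "min_rx": 300, "mult": 3}
r2 = {"min_tx": 100, "min_rx": 100, "mult": 3}

print(negotiated_tx_interval(r1["min_tx"], r2["min_rx"]))      # 300 -> R1 transmits every 300 ms
print(negotiated_tx_interval(r2["min_tx"], r1["min_rx"]))      # 300 -> R2 is slowed down to 300 ms
print(detection_time(r1["min_rx"], r2["min_tx"], r2["mult"]))  # 900 ms detection time on R1
print(detection_time(r2["min_rx"], r1["min_tx"], r1["mult"]))  # 900 ms detection time on R2
```

In other words, the higher of the two configured minimum intervals wins, and the detection time is that negotiated interval multiplied by the peer's multiplier.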
Stay tuned !!!!", "url": "/tutorials/bfd-architecture-on-ncs5500-and-ncs500/", "author": "Tejas Lad", "tags": "NCS5500, NSC500, NCS540, NCS 5500, BFD, Protection" } , "tutorials-understanding-the-bfd-hardware-programming-on-ncs55xx-and-ncs5xx": { "title": "Understanding the BFD Hardware Programming on NCS55xx and NCS5xx", "content": " Understanding the BFD Hardware Programming on NCS55xx and NCS5xx Introduction Quick Refresh (RFC 5880) BFD Control Packet Format BFD State Machine Configuring a simple BFD session (NCS55xx and NCS5xx) Verification Packet Captures Timer Negotiations Configuring ISIS as a client for the same interface Summary Reference IntroductionIn our previous article, we discussed the BFD feature in the pipeline architecture (NCS55xx and NCS5xx). We discussed how the packet flow and the hardware resources are utilised. We saw how the scale is considered for the BFD feature and how well the resources have been carved to achieve the desired numbers. In this article, we will go a bit deeper in the BFD. We will see a sample configuration, and see how to read the BFD outputs (as per RFC 5880) and check the hardware programming.Quick Refresh (RFC 5880)As discussed in the previous article, the goal of Bidirectional Forwarding Detection (BFD) is to provide low-overhead, short-duration detection of failures in the path between adjacent forwarding engines, including the interfaces, data link(s), and, to the extent possible, the forwarding engines themselves. An additional goal is to provide a single mechanism that can be used for liveness detection over any media, at any protocol layer, with a wide range of Detection Times and overhead, to avoid a proliferation of different methods.BFD Control Packet FormatLet us look at the BFD control packet format briefly. This will be useful in understanding the CLI outputs and the messages being exchanged between 2 nodes.Apart from the above fields, we also have the optional Authentication section. For details please refer. Field Description Version (Vers) The version number of the protocol. Diagnostic (Diag) A diagnostic code specifying the local system’s reason for the last change in session state. There are various values for the diagnostics which are defined in the RFC State (Sta) This indicates the current BFD session state as seen by the transmitting system. The values are#0 – AdminDown1 – Down2 – Init3 – Up Poll (P) If set, the transmitting system is requesting verification of connectivity, or of a parameter change, and is expecting a packet with the Final (F) bit in reply. If clear, the transmitting system is not requesting verification. Final (F) If set, the transmitting system is responding to a received BFD Control packet that had the Poll (P) bit set. If clear, the transmitting system is not responding to a Poll. Control Plane Independent (C) If set, BFD is implemented in the forwarding plane and can continue to function through disruptions in the control plane). If clear, BFD implementation shares fate with its control plane. Authentication Present (A) If set, the Authentication Section is present and the session is to be authenticated. Demand (D) If set, Demand mode is active.If clear, Demand mode is not active in the transmitting system. Multipoint (M) This bit is reserved for future point-to-multipoint extensions to BFD. Detect Mult Detection time multiplier. The negotiated transmit interval, multiplied by this value, provides the Detection Time for the receiving system in Asynchronous mode. 
Length Length of the BFD Control packet, in bytes. My Discriminator A unique, nonzero discriminator value generated by the transmitting system, used to demultiplex multiple BFD sessions between the same pair of systems. Your Discriminator The discriminator received from the corresponding remote system. This field reflects back the received value of My Discriminator, or is zero if that value is unknown. Desired Min TX Interval This is the minimum interval, in microseconds, that the local system would like to use when transmitting BFD Control packets. Required Min RX Interval This is the minimum interval, in microseconds, between received BFD Control packets that this system is capable of supporting. Required Min Echo RX Interval This is the minimum interval, in microseconds, between received BFD Echo packets that this system is capable of supporting. For detailed explaination of each field please referBFD State MachineAs per the RFC 5880, the BFD state machine is quite straightforward. There are three states through which a session normally proceeds# two for establishing a session # Init and Up. And one for tearing down a session# Down. This allows a three-way handshake for both session establishment and session teardown, assuring that both systems are aware of all session state changes. A fourth state AdminDown exists so that a session can be administratively put down indefinitely. Each system communicates its session state in the State (Sta) field in the BFD Control packet. State Description Down state This state means that the session is down or has just been created. A session remains in Down state until the remote system indicates that it agrees that the session is down by sending a BFD Control packet with the State field set to anything other than Up. Init state This state means that the remote system is communicating, and the local system desires to bring the session up, but the remote system does not yet realize it. A session will remain in Init state until either a BFD Control Packet is received that is signalling Init or Up state in which case the session advances to Up state Up state This state means that the BFD session has successfully been established, and implies that connectivity between the systems is working. The session will remain in the Up state until either connectivity fails or the session is taken down administratively. AdminDown state This state means that the session is being held administratively down. This causes the remote system to enter Down state, and remain there until the local system exits AdminDown state Note# For details on state machine please refer the RFCConfiguring a simple BFD session (NCS55xx and NCS5xx)After a quick refresh of the theory behind the BFD packets, let us get into the routers and check it practically. We will take a simple example and walk through the hardware programming.In this example, we have configured OSPF between R1 and R2 and used BFD on the physical interface. The BFD configurations are configured under the physical interface.R1 (NCS-55A2-MOD-HD-S)router ospf 1 router-id 172.16.3.26 address-family ipv4 unicast area 0 ! interface TenGigE0/0/0/12 bfd minimum-interval 300 bfd fast-detect bfd multiplier 3 network point-to-point !R2 (N540-24Z8Q2C-M)router ospf 1 router-id 172.16.3.18 address-family ipv4 unicast area 0 ! interface TenGigE0/0/0/12 bfd minimum-interval 300 bfd fast-detect bfd multiplier 3 network point-to-point !VerificationLet us verify a few CLI commands and confirm the hardware programming. 
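Before going through the show commands, here is a minimal sketch of the three-way handshake described in the state-machine table above (purely illustrative Python following RFC 5880, with Demand mode, diagnostics and the AdminDown hold-down omitted; this is not the IOS XR implementation)#

```python
# Simplified RFC 5880 BFD session state machine for asynchronous mode.
# 'local' is our session state, 'received' is the State field carried in the
# peer's control packet.

def next_state(local: str, received: str) -> str:
    if local == "Down":
        if received == "Down":
            return "Init"
        if received == "Init":
            return "Up"
        return "Down"                      # peer still signalling Up/AdminDown: stay Down
    if local == "Init":
        if received in ("Init", "Up"):
            return "Up"
        if received == "AdminDown":
            return "Down"
        return "Init"
    if local == "Up":
        if received in ("Down", "AdminDown"):
            return "Down"
        return "Up"
    return local

def on_detection_timeout(local: str) -> str:
    """No packet received within the detection time -> the session is declared Down."""
    return "Down" if local in ("Init", "Up") else local

# Three-way handshake as seen from one side:
state = "Down"
for peer_state in ("Down", "Init", "Up"):
    state = next_state(state, peer_state)
    print(f"received {peer_state} -> local state {state}")
# received Down -> local state Init
# received Init -> local state Up
# received Up   -> local state Up
```

Now, back to the CLI verification on the routers.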
This command gives a quick information of all the sessions configured.The below command gives a detailed output of the different parameters of the BFD control packet which we mentioned in the earlier section. We can see the source and destination values, the version, state, discriminator values and different flags being set or clear. We can also see the hardware offloaded information and values. The state of the session is showing UP which indicates the programming in the hardware is done properly. If there is any programming issue, we will see the state stuck in admin down or init. Another important value to check in the output is Async Session ID and Async Tx Key. In case of hardware programming issues the value of the key would be 0.RP/0/RP0/CPU0#N55-26#show bfd ipv4 session interface tenGigE 0/0/0/12 detail I/f# TenGigE0/0/0/12, Location# 0/0/CPU0Dest# 192.18.26.18Src# 192.18.26.26 State# UP for 0d#2h#6m#39s, number of times UP# 1 Session type# PR/V4/SHReceived parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 2147487749, your discr# 2147491924, state UP, D/F/P/C/A# 0/0/0/1/0Transmitted parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 2147491924, your discr# 2147487749, state UP, D/F/P/C/A# 0/1/0/1/0Timer Values# Local negotiated async tx interval# 300 ms Remote negotiated async tx interval# 300 ms Desired echo tx interval# 0 s, local negotiated echo tx interval# 0 ms Echo detection time# 0 ms(0 ms*3), async detection time# 900 ms(300 ms*3)Local Stats# Intervals between async packets# Tx# Number of intervals=3, min=7 ms, max=2184 ms, avg=829 ms Last packet transmitted 7598 s ago Rx# Number of intervals=6, min=4 ms, max=1700 ms, avg=461 ms Last packet received 7598 s ago Intervals between echo packets# Tx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet transmitted 0 s ago Rx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet received 0 s ago Latency of echo packets (time between tx and rx)# Number of packets# 0, min=0 ms, max=0 ms, avg=0 msSession owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- ospf-1 300 ms 3 300 ms 3 H/W Offload Info# H/W Offload capability # Y, Hosted NPU # 0/0/CPU0 Async Offloaded # Y, Echo Offloaded # N Async rx/tx # 292/45 Platform Info#NPU ID# 0 Async RTC ID # 1 Echo RTC ID # 0Async Feature Mask # 0x0 Echo Feature Mask # 0x0Async Session ID # 0x2054 Echo Session ID # 0x0Async Tx Key # 0x80002054 Echo Tx Key # 0x0Async Tx Stats addr # 0x0 Echo Tx Stats addr # 0x0Async Rx Stats addr # 0x0 Echo Rx Stats addr # 0x0 Flags Session Type PR Pre-Routed Session mostly single path sessions applicable for Physical or Sub-interfaces and BFD over Bundle interfaces SW Switched Session mostly including BFD over Logical Bundle- BLB, BFD over BVI and Multipath session V4 IPv4 Session V6 IPv6 Session SH Single Hop Session MH Multi Hop Session BL BFD over Bundle Ethernet BR BVI The hardware programming in the OAM engine can also be verified with the below command. The Async Session ID can be used to get the output. It clearly mentions that the BFD is processed in the OAM engine and not in the CPU. 
Other hardware values programmed are also matching.RP/0/RP0/CPU0#N55-26#show controllers fia diagshell 0 ~diag oam ep id=0x2054 location 0/0/CPU0 Node ID# 0/0/CPU0=====BFD endpoint ID# 0X2054 ***Properties# Type# BFD over IPV4 BFD session is processed in OAMA (rather than CPU). TX gport# 0X6c000028 Remote gport# 0X160040e3 gport disabled Source address# 0.0.0.0 UDP source port# 49153 Local state# up, Remote state# up Local diagnostic# No Diagnostic, Remote diagnostic# No Diagnostic Remote Flags# 0x8, Local Flags 0x8 Queing priority# 24 BFD rate (ms)# 301 Desired local min TX interval# 300000, Required local RX interval# 300000 Local discriminator# 2147491924, Local detection multiplier# 3 Remote discriminator# 2147487749 Remote detection explicit timeout# 899300RP/0/RP0/CPU0#N55-26#Below is another important verification which shows the history of the BFD state machine. This command gives the history of the BFD session establishment and also gives the messages being exchanged during the handshake between the neighbors. The full state machine can be checked with this command.RP/0/RP0/CPU0#N55-26#show bfd all session status history location 0/0/CPU0 IPv4#-----I/f# TenGigE0/0/0/12, Location# 0/0/CPU0 table_id#0xe0000000State# UP, flags#0x80040Iftype# 0x1e, basecaps# 30Async dest addr# 192.18.26.18Async src addr# 192.18.26.26Echo dest addr# 192.18.26.26Echo src addr# 172.16.3.26Additional info from Flags# FIB is READY Session Active on 0/0/CPU0Platform Info# 0x0, Mac Length# 14Redundancy session info# Created from active BFD serverLast Down Diag# Nbor signalled downLast Rx Pkt Down Diag# Admin downLast Down Time# May 24 15#53#17.434 Last Async Tx Counters and Timestamps# Last Async Rx Counters and Timestamps# count 41 [May 24 15#53#17.434] [May 12 05#50#40.360] [May 12 05#50#40.059]Last Async Rx valid packets delayed or in transit# Last Echo Tx Counters and Timestamps# Last Echo Rx Counters and Timestamps# Last Echo Rx valid packets delayed# Last UP Time# May 24 15#54#23.237 Last IO EVM Scheduled Time# May 24 15#53#17.434 Last IO EVM Schedule Complete Time# May 24 15#53#16.674 Received parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 2147487750, your discr# 2147491924, state UP, D/F/P/C/A# 0/0/0/1/0Transmitted parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 2147491924, your discr# 2147487750, state UP, D/F/P/C/A# 0/1/0/1/0Tx Echo pkt # Version# 0, Local Discr# 2147491924, Sequence No# 0History#[May 24 15#54#23.237] Session (v1) state change, triggered by event 'Remote state up', from INIT to UP with current diag being None[May 24 15#54#19.886] Session (v1) state change, triggered by event 'Remote state down', from DOWN to INIT with current diag being Nbor signalled down[May 24 15#53#18.533] Session Down, session flags (0x80050), Echo Latency Last hwm 0 msrx async last hwm#3510217ms, tx async last hwm#2184ms[May 24 15#53#17.434] Session (v1) state change, triggered by event 'Remote state admindown', from UP to DOWN with current diag being Nbor signalled down[May 24 15#53#17.434] Session Down, session flags (0x80040), Echo Latency Last hwm 0 msrx async last hwm#3510217ms, tx async last hwm#2184ms[May 12 05#50#39.754] Session (v1) state change, triggered by event 'Remote state init', from DOWN to UP with current diag being None[May 12 05#50#39.754] Session out of Dampened State# Backoff Ctr#0, 
Waited#2184, Backoff#2000, Jitter#131[May 12 05#50#37.570] Session (v1) state change, triggered by event 'Session create', from Unknown to DOWN with current diag being None[May 12 05#50#37.570] Session in Dampened State# Backoff Ctr#0, Waited Past#0, Waited#0, Backoff#2000Note# This is trimmed outputThe below command gives the number of clients for the BFDRP/0/RP0/CPU0#N55-26#show bfd client Name Node Num sessions -------------------- ---------- --------------L2VPN_ATOM 0/RP0/CPU0 0 MPLS-TE 0/RP0/CPU0 0 XTC 0/RP0/CPU0 0 bgp-default 0/RP0/CPU0 0 bundlemgr_distrib 0/RP0/CPU0 0 ipv4_static 0/RP0/CPU0 0 isis-acr 0/RP0/CPU0 0 object_tracking 0/RP0/CPU0 0 ospf-1 0/RP0/CPU0 1 pim6 0/RP0/CPU0 0 pim 0/RP0/CPU0 0Packet CapturesLet us examine the packet capture of the control packets being exchanged between the routers. Below is the packet capture of the received parameters from the remote peer. We can verify that the values are matching with the CLI outputs we captured in the earlier commandsTimer NegotiationsLet us change the minimum timer and the multiplier value one end and see how the timer values are negotiated.RP/0/RP0/CPU0#N540-18#show running-config router ospf 1router ospf 1 router-id 172.16.3.18 address-family ipv4 unicast area 0 ! interface TenGigE0/0/0/12 bfd minimum-interval 100 bfd fast-detect bfd multiplier 3 network point-to-point !R1 has a minimum time interval value of 300ms. R2 has minimum time interval value of 100ms. What will be the negotiated timer value ? Will the session go down. Let us verify.RP/0/RP0/CPU0#N55-26#show bfd all session IPv4#-----Interface Dest Addr Local det time(int*mult) State Echo Async H/W NPU ------------------- --------------- ---------------- ---------------- ----------Te0/0/0/12 192.18.26.18 0s(0s*0) 900ms(300ms*3) UP Yes 0/0/CPU0 RP/0/RP0/CPU0#N540-18#show bfd all session Mon May 24 13#37#14.829 GMT+4IPv4#-----Interface Dest Addr Local det time(int*mult) State Echo Async H/W NPU ------------------- --------------- ---------------- ---------------- ----------Te0/0/0/12 192.18.26.26 0s(0s*0) 900ms(300ms*3) UP Yes 0/0/CPU0We can see the higher timer value is negotiatedConfiguring ISIS as a client for the same interfaceAfter configuring the ISIS between the routers and configuring BFD on the interface, we can see that the BFD session will remain 1 but the clients will now show ospf and isisRP/0/RP0/CPU0#N55-26#show bfd all session IPv4#-----Interface Dest Addr Local det time(int*mult) State Echo Async H/W NPU ------------------- --------------- ---------------- ---------------- ----------Te0/0/0/12 192.18.26.18 0s(0s*0) 900ms(300ms*3) UP Yes 0/0/CPU0 RP/0/RP0/CPU0#N55-26#show bfd all session detail IPv4#-----I/f# TenGigE0/0/0/12, Location# 0/0/CPU0Dest# 192.18.26.18Src# 192.18.26.26 State# UP for 0d#0h#9m#39s, number of times UP# 3 Session type# PR/V4/SHReceived parameters# Version# 1, desired tx interval# 100 ms, required rx interval# 100 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 2147487751, your discr# 2147491924, state UP, D/F/P/C/A# 0/0/0/1/0Transmitted parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 2147491924, your discr# 2147487751, state UP, D/F/P/C/A# 0/1/0/1/0Timer Values# Local negotiated async tx interval# 300 ms Remote negotiated async tx interval# 300 ms Desired echo tx interval# 0 s, local negotiated echo tx interval# 0 ms Echo detection time# 0 ms(0 ms*3), async detection time# 900 ms(300 ms*3)Local Stats# 
Intervals between async packets# Tx# Number of intervals=3, min=304 ms, max=21 s, avg=8078 ms Last packet transmitted 579 s ago Rx# Number of intervals=5, min=2 ms, max=1832 ms, avg=683 ms Last packet received 578 s ago Intervals between echo packets# Tx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet transmitted 0 s ago Rx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet received 0 s ago Latency of echo packets (time between tx and rx)# Number of packets# 0, min=0 ms, max=0 ms, avg=0 msSession owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- ospf-1 300 ms 3 300 ms 3 isis-1 300 ms 3 300 ms 3 H/W Offload Info# H/W Offload capability # Y, Hosted NPU # 0/0/CPU0 Async Offloaded # Y, Echo Offloaded # N Async rx/tx # 310/56 Platform Info#NPU ID# 0 Async RTC ID # 1 Echo RTC ID # 0Async Feature Mask # 0x0 Echo Feature Mask # 0x0Async Session ID # 0x2054 Echo Session ID # 0x0Async Tx Key # 0x80002054 Echo Tx Key # 0x0Async Tx Stats addr # 0x0 Echo Tx Stats addr # 0x0Async Rx Stats addr # 0x0 Echo Rx Stats addr # 0x0RP/0/RP0/CPU0#N55-26#show bfd client Mon May 24 17#32#04.194 UTCName Node Num sessions -------------------- ---------- --------------L2VPN_ATOM 0/RP0/CPU0 0 MPLS-TE 0/RP0/CPU0 0 XTC 0/RP0/CPU0 0 bgp-default 0/RP0/CPU0 0 bundlemgr_distrib 0/RP0/CPU0 0 ipv4_static 0/RP0/CPU0 0 isis-1 0/RP0/CPU0 1 isis-acr 0/RP0/CPU0 0 object_tracking 0/RP0/CPU0 0 ospf-1 0/RP0/CPU0 1 pim6 0/RP0/CPU0 0 pim 0/RP0/CPU0 0 So we saw a couple of examples of configuring the OSPF and ISIS as clients of BFD. The clients are not just restricted to OSPF and IS-IS. We support many other clients. For details on how to configure different clients like BGP, pim, BFD on sub interface please refer. In the netx artcile we will also cover the BFD w.r.t bundles and touch upon the concepts of BLB and BoB.SummaryHope this article helped you to understand the hardware programming of the BFD in the pipeline architecture. This can be used for a basic configuration and debugging before reaching out to Cisco TAC #). In the next arcticle we will try to explore the concepts of BLB/BOB/BVI and its hardware implementation.Reference ASR9k BFD Implementation CCO Config Guide", "url": "/tutorials/understanding-the-bfd-hardware-programming-on-ncs55xx-and-ncs5xx/", "author": "Tejas Lad", "tags": "NCS 5500, NCS 500, NCS5500, BFD, convergence" } , "tutorials-introducing-ncs57b1-routers": { "title": "Introducing NCS 57B1 Routers", "content": " NCS 57B1 Routers Introduction Videos Naming logic Product description Hardware NPU Block Diagrams Software Ports configuration breakout cables in top row Mixing 25G breakout and QSFP+ MACsec support Other things to know? Conclusion IntroductionWith IOS XR 7.3.1, we introduced multiple software features (https#//xrdocs.io/ncs5500/tutorials/iosxr-731-innovations/) but new hardware are also launched with this release. 
Among them, the NCS 57B1 series.These two new routers are the NCS57B1-6D24 and NCS57B1-5DSE, both 1RU systems with a mix of 100G and 400G ports and powered by a Broadcom Jericho2 NPU.They can be used in multiple places in the network, like aggregation (100G to 400G), 5G (class-C capable), internet peering, core, …While it can offer 10G breakout through QSA and breakout cables, this router is optimized for 100G and 400G interfaces.Videos.And since you didn’t ask for it, the unboxing #)).Naming logicTo understand where the product names NCS57B1-6D24 and -5DSE come from#Product descriptionYou can find the datasheet on Cisco’s website#https#//www.cisco.com/c/en/us/products/collateral/routers/network-convergence-system-5500-series/datasheet-c78-744698.htmlIt contains a lot of details on the part numbers, licenses, standards compliance, and much more.We differentiate# the “base” system (Jericho2 without eTCAM) offering 24x 100G and 6x 400G ports# NCS57B1-6D24 the “scale” system (Jericho2 with OP2 eTCAM) offering 24x 100G and 5x 400G ports# NCS57B1-5DSEThe difference between the two versions is simply the presence of the eTCAM; on the system equipped with this additional resource, we have one QSFP-DD 400G port less.This eTCAM is connected to the fabric ports for the routes and classifiers, but we are using “NIF” ports for the statistics, hence the impact on the 400G port density.HardwareFrom the back, you find# two power supply modules (AC or DC) 2kW offering 1+1 redundancy (PSU2KW-ACPI or PSU2KW-DCPI). Note we don’t support the mix of AC and DC, and the cooling is only front-to-back today. 6 fan trays (NC57-B1-FAN1-FW) offering 5+1 redundancy.In the front, we have 100G ports (QDD cages, designed for high-power optics) on the left part and 400G (QDD too) on the right. These cages will permit the use of 100G ZR optics in the upper row (the exact number of optics is still under validation and will be updated later) and 400G ZR/ZR+ in all the ports on the right.Finally, on the right end, we have the timing ports and management ports. RJ45 for TOD SMB connectors (1PPS and 10MHz clock) One antenna port for GPSThe NCS 57B1 routers are Class-C timing capable, on all ports.NPUThese routers are “system on a chip” or SoC. All ports are connected to a single chipset (via reverse gear boxes, which we will describe in more detail later with the block diagram). That makes the design very simple and efficient in terms of latency, performance and power consumption.At the heart, we have the Broadcom Jericho 2 NPU. It offers 4.8 Tbps and 2 BPPS of forwarding capacity.Like its predecessors used in NCS 5500 platforms, it’s a VOQ-only forwarding architecture with ingress hybrid packet buffering. Hybrid here means the packets can be stored in a 32MB on-chip memory or in an 8GB external HBM in case of queue congestion.The NPU uses 50Gbps (a bit more than 53 with encoding and headers) SERDES to connect to the Reverse Gear Boxes#Block DiagramsNCS57B1-6D24 and NCS57B1-5DSE#From an architecture point of view, there are very few differences between the two platforms. 
Some interface links are used to connect to the statistics part of the OP2 eTCAM, hence the different 400G port density.The Jericho 2 ASIC is connected to PHY chipsets operated in reverse gear box mode to connect to the 100G interfaces, or in switching mode to connect to the 400G interfaces.SoftwareThe NCS57B1 routers are both introduced with IOS XR 7.3.1, which is the minimum release to support them.Since it’s a standalone platform, it operates by default in “Native Mode”; no configuration is required to enable that mode.Note# this system is using the newest version of IOS XR, named XR7. XR7 is the latest evolution of the operating system and, from a feature and scale perspective, is similar to the XR 64-bit used in other NCS5500 products. The main noticeable difference will be in the upgrade process, based on Golden ISO images and the “install replace” approach instead of the “install add / install activate” methodology.Take a look at the setup guide for more details.Ports configurationIt’s possible to use the router in multiple roles, like aggregation of 100G and 400G ports for example.It offers 24x 100G ports and 5 or 6x 400G ports. So it’s possible to imagine cases with 2.4Tbps to the clients in 100G and 2.4Tbps to the core in 400G. But nothing prevents breaking out some or all of the 400G ports into 4x100G, to build a 32x 100G + 4x 400G system. It’s very flexible.A couple of examples among many other possibilities#Keep in mind a basic concept here# you cannot break out a 400G interface and expect to connect it to existing 100G SR4, LR4 or CWDM4 optics. The 100G port facing the breakout cable must use “1-lambda” optics like DR/FR.The NCS57B1 internal architecture imposes two simple rules to respect when configuring ports# QSFP28 and QSFP+ breakout ports can only be configured on the top row, and doing so disables the facing N+1 port; you cannot mix 4x25G breakout and QSFP+ (40G or 4x10G) in the same quad.Let’s review these two rules in detail#breakout cables in top rowThis first rule only applies to the 100G ports on the left part, not the 400G-capable ports on the right.Only the ports with an even number, located on the top row, can be configured as 4x10G or 4x25G breakout.The configuration is different from the one available on other platforms under “controller …”; it’s now done under “hw-module port-range”#RP/0/RP0/CPU0#NCS5500(config)#hw-module port-range 6 7 ? instance card instance of MPA's location fully qualified location specificationRP/0/RP0/CPU0#NCS5500(config)#hw-module port-range 6 7 instance 0 location 0/RP0/CPU0 mode ? WORD port mode 40-100, 400, 2x100, 4x10, 4x25, 4x10-4x25, 1x100, 2x100-PAM4, 3x100, 4x100RP/0/RP0/CPU0#NCS5500(config)#hw-module port-range 6 7 instance 0 location 0/RP0/CPU0 mode 4x10So the configuration is effective for two consecutive ports; in the example above, ports 6 and 7. 
That means, the configuration of the port of the top row will automatically disable the port on the bottom (or N+1).In the example below, we configured the ports 16-17 in 4x10G breakout.RP/0/RP0/CPU0#ios#show contr npu voq-usage int all inst all loc 0/RP0/CPU0-------------------------------------------------------------------Node ID# 0/RP0/CPU0Intf Intf NPU NPU PP Sys VOQ Flow VOQ Portname handle # core Port Port base base port speed (hex) type ----------------------------------------------------------------------Hu0/0/0/0 3c000048 0 0 9 9 1024 6160 local 100GHu0/0/0/1 3c000058 0 0 11 11 1104 6256 local 100GHu0/0/0/2 3c000068 0 0 13 13 1112 6272 local 100GHu0/0/0/3 3c000078 0 0 15 15 1120 6288 local 100GHu0/0/0/4 3c000088 0 0 17 17 1128 6304 local 100GHu0/0/0/5 3c000098 0 0 19 19 1136 6320 local 100GHu0/0/0/6 3c0000a8 0 0 21 21 1144 6336 local 100GHu0/0/0/7 3c0000b8 0 0 23 23 1152 6352 local 100GFo0/0/0/8 3c0000c8 0 0 25 25 1184 6416 local 40GHu0/0/0/9 3c0000d8 0 0 27 27 1160 6368 local 100GFo0/0/0/10 3c0000e8 0 0 29 29 1192 6432 local 40GFo0/0/0/11 3c0000f8 0 0 31 31 1200 6448 local 40GHu0/0/0/12 3c000108 0 0 33 33 1168 6384 local 100GHu0/0/0/13 3c000118 0 0 35 35 1176 6400 local 100GHu0/0/0/14 3c000128 0 0 37 37 1096 6240 local 100GHu0/0/0/15 3c000138 0 0 39 39 1088 6224 local 100GHu0/0/0/18 3c000168 0 0 45 45 1072 6192 local 100GHu0/0/0/19 3c000178 0 0 47 47 1064 6176 local 100GFH0/0/0/28 3c000188 0 1 49 49 1224 6224 local 400GFH0/0/0/27 3c0001c8 0 1 57 57 1248 6272 local 400GFH0/0/0/26 3c000208 0 1 65 65 1240 6256 local 400GHu0/0/0/25 3c000248 0 1 73 73 1032 6144 local 100GFH0/0/0/24 3c000288 0 1 81 81 1232 6240 local 400GHu0/0/0/20 3c0002c8 0 1 89 89 1056 6192 local 100GHu0/0/0/21 3c0002d8 0 1 91 91 1048 6176 local 100GHu0/0/0/22 3c0002e8 0 1 93 93 1040 6160 local 100GFo0/0/0/23 3c0002f8 0 1 95 95 1216 6208 local 40GTe0/0/0/16/0 3c002148 0 0 41 41 1208 6464 local 10GTe0/0/0/16/1 3c002150 0 0 42 42 1256 6480 local 10GTe0/0/0/16/2 3c002158 0 0 43 43 1264 6496 local 10GTe0/0/0/16/3 3c002160 0 0 44 44 1080 6208 local 10GYou notice we have now 5-tuple 0/0/0/16/x for these interfaces, proof it’s broken out in 4.Also we see that port 17 disappeared of the inventory.Again, these rules don’t apply to the 400G ports on the right, all of them can be configured in break-out mode.Mixing 25G breakout and QSFP+The second rule is technically more complex but can be summarized with the following abstraction#“Each block of four ports (QUAD# 0-3, 4-7, 8-11, 12-15, 16-19 and 20-23) can not accept a mix of 4x25G and QSFP+ port, whether they are used in 40G or 4x10G”. All other options are supported, as long as they also follow the first rule described earlier.A couple of not-supported use-cases.Example 1#Here, we start with QSPF+ 40G in port 0, then we insert a QSFP28 100G in port2.No problem, it works.Then we try to configure the 100G in breakout mode, 4x25G. The system refuses to commit the configuration# we can’t get QSFP+ and 4x25G in the same quad.Example 2#We have a QSFP+ 40G in port 0 and we configured 0-1 in 4x10G breakout mode, so port 1 is disabled.We insert a QSFP28 100G in port2.No problem, this configuration is supported.Then we try to configure the 100G in breakout mode, 4x25G. 
The system refuses to commit the configuration for the same reason# we can’t get QSFP+ and 4x25G in the same quad.Example 3#We start with QSFP28 100G in ports 0 and 2.We configure ports 0-1 in breakout mode, port 1 is disabled and now we have 4x25G in port 0.Now we insert a QSFP+ in port 3 and the port doesn’t go up# we can’t get QSFP+ and 4x25G in the same quad.MACsec supportStarting IOS-XR 7.6.1, MACsec is supported on NCS-57B1-5DSE-SYS and NCS-57B1-6D24-SYS. All ports are connected to PHY supporting the feature. It will be supported on 100G ports, 400G ports and also breakout ports.MACsec configurations on NCS500/5700Other things to know? We will support 10G via QSFP to SFP Adaptor (QSA). Not supported in 7.3.1, in the roadmap. 1G will not be supportedConclusionThis new router is the first Jericho2-based platform in fixed form factor. It offers a very high 100G and 400G port density in a very small size (1RU).In a future article, we will demonstrate the routing scale and prefix programming speed in J2.", "url": "/tutorials/introducing-ncs57b1-routers/", "author": "Nicolas Fevrier", "tags": "" } , "tutorials-ncs5500-chassis-new-generation-commons": { "title": "NCS5500 Chassis New Generation Commons", "content": " New Commons for NCS5500 Chassis Introduction Video New parts for NCS5500 Chassis NC55-RP2-E NC55-5504-FC2 and NC55-5504-FAN2 NC55-PWR-4.4KW-DC Conclusion You can find more content related to NCS5500 including routing memory management, VRF, URPF, Netflow, QoS, EVPN, Flowspec implementation following this link.IntroductionAmong the innovations brought with IOS XR 7.3.1, we introduced a lot of new software features but also new hardware.We now support new fixed-form systems (1xRU pizza box) like# NCS57B1-6D24 / NCS57B1-5DSE. And also new chassis line cards and new commons. By “commons”, we mean elements like power supply, fabric cards, fan trays, route processors or system controllers (we don’t actually have a new SC, but we have new generations for all others).Video.Mea Culpa# At multiple occasions, I said “RP-2” instead of “RP-E”, certainly creating some confusions to the watcher. So the chronology of the route processor options are# RP, RP-E and RP2-E. That’s the last one we are introducing today.New parts for NCS5500 ChassisNC55-RP2-EThis is the third generation of Route Processor, after the RP and RP-E. It’s supported in all types of chassis# 4-slot, 8-slot and 16-slot.The second generation (RP-E and not “RP-2” as mentioned in the video, RP2 doesn’t exist) supported Class-B timing quality. And the only noticeable difference with the new RP2-E will be the support of Class-C. To enable this feature, we will need the new RP but also specific line cards, for example the NC55-32T16Q4H-A (available in IOS XR 7.2.2 and 7.3.1) and the NC57-36H6D-S (coming in IOS XR 7.3.2).Timing is the only difference with the previous RP-E.It does support the same scalability, memory and CPU power. So it will not offer higher route scale or fast convergence time.Note# it’s not possible to mix different generations of route processor in the same chassis. RP2-E can only work with RP2-E.NC55-5504-FC2 and NC55-5504-FAN2The v2 fabric cards and fan trays are available for 8-slot and 16-slot chassis for a while (IOS XR 6.6.25). They are mandatory to support line cards equipped with Jericho2 NPU.In IOS XR 7.2.2 and 7.3.1, we complete the offer with the v2 fabric and fans for the 4-slot chassis. 
The same rules described in the past apply to this 4-slot version too# you need them installed before inserting any J2 line card you can’t mix v1 and v2 fan trays and fabric cards you can’t upgrade from v1 to v2 in-service, you need to shut down the system, replace the parts and reboot Once equipped with v2 commons, you can add J2-based line cards. If the system needs to run both J/J+ and J2 line cards, you will use the default mode called “compatibility”. If the system is populated with J2-cards exclusively, it’s possible to enable the “native” mode via configuration.We invite you to check the following articles and videos to get more details# https#//xrdocs.io/ncs5500/tutorials/ncs-5500-fabric-migration/ https#//www.youtube.com/watch?v=XMQumuTkzmg Comp/Native modes# https#//www.youtube.com/watch?v=oUdIBAghjgkNote# It’s not currently possible to integrate the fabric cards in the Cisco Power Calculator (CPC). If you need an estimate of the power consumption, please reach out to your Cisco Representative, they will be able to use internal tools to run the simulation.The v2 fabric cards are using “Ramon” Fabric Engines from Broadcom# 3 per NC55-5516-FC2 2 per NC55-5508-FC2 1 per NC55-5504-FC2As mentioned above, they can connect to J2 line cards with PAM4 50G SERDES and to J/J+ line cards with NRZ 25G SERDES.NC55-PWR-4.4KW-DCLatest addition to the power module supported in the modular NCS5500 chassis. It’s a DC-only power supply, offering up to 4400W, 3 inputs from -48V to -60V DC, and it’s supported on all types of chassis (4-slot# 4 PSUs, 8-slot# 8 PSUs and 16-slot# 10 PSUs).This level of power (4400W) is a significant improvement compared to the former generations (3000W in DC and 3300W in HVDC), something mandatory to support both N+1 and N+N redundancy with the new generations of line cards based on J2.Introduced with IOS XR 7.3.1, we support the mix with previous generation 3KW DC PSU.These new PSU has 6 lugs connectors for 3 DC inputs#It allows the connection of each PSU to grid A and grid B, and offers a third pair. We suggest to alternate the connection of this third feed between A and B.In such scenario, all PSUs are providing 4.4kW each.That will guarantee full feed redundancy in case of outage like the loss of Grid A in the example below#With this logic, whether we lose grid A or grid B, each pair of PSUs can provide 4400 + 2200 = 6.6kW.ConclusionWe introduced new components of the chassis to support new requirements# class-C timing, 400G requirements in the 4-slot chassis, N+N power supply redundancy with the new power needs.Next article and video will be dedicated to the new line cards introduced in IOS XR 7.3.1.Stay tuned.", "url": "/tutorials/ncs5500-chassis-new-generation-commons/", "author": "Nicolas Fevrier", "tags": "" } , "tutorials-bfd-over-bundle-interfaces-on-ncs5500-and-ncs500": { "title": "BFD over Bundle Interfaces on NCS5500 and NCS500", "content": " BFD over Bundle Interfaces on NCS55xx and NCS5xx Introduction Brief Background Why do we need BFD over Bundle - BoB ? BFD over Bundle - BoB on NCS5500 and NCS500 Configuring BoB Verifying BoB BFD over Bundle with IPv4 Unnumbered Interfaces Configuring the session over IPv4 unnumbered interfaces Verifying the BFD session Platform Support Summary References IntroductionWe started the NCS5500 and NCS500 BFD technotes with BFD architecture. Then we looked a bit deeper with the Hardware Programming. We confirmed the behaviour as per RFC 5880. 
We also checked the OAM engine programming in the hardware and saw a few commands for quick verification. In this tech-note we will discuss BFD over Bundles, aka BoB# we will look at the use case and examine the programming and behaviour w.r.t RFC 7130.Brief BackgroundJust a quick background before we move on. Link Bundles, or Bundle Ethernet, have been in the industry for a long time now. A Link Bundle is simply a group of ports that are bundled together and act as a single link. The Link Bundling feature allows you to group multiple point-to-point links together into one logical link and provide higher bidirectional bandwidth, redundancy, and load balancing between two routers. A virtual interface is assigned to the bundled link. Cisco IOS XR software supports the IEEE 802.3ad standard, which employs the Link Aggregation Control Protocol (LACP) to ensure that all the member links in a bundle are compatible. The advantages of link bundles are as follows# Multiple links can span several line cards to form a single interface. Thus, the failure of a single link does not cause a loss of connectivity. Bundled interfaces increase bandwidth availability, because traffic is forwarded over all available members of the bundle. Therefore, traffic can flow on the remaining links if one of the links within a bundle fails. Bandwidth can be added without interrupting packet flow.Why do we need BFD over Bundle - BoB ?As we know, LACP allows a network device to negotiate an automatic bundling of links by sending LACP packets to its directly connected peer. LACP has a keep-alive mechanism per member link# the default period is 30s and it can be configured down to about 1s. LACP can therefore detect failures on a per-physical-member link. However, the LACP timers do not fulfill the criteria of today’s fast convergence requirements. Therefore, using BFD for failure detection would (RFC 7130)# Provide faster detection Provide detection in the absence of LACP Verify the ability of each member link to forward L3 packets.Running a single BFD session over the aggregation without internal knowledge of the member links would make it impossible for BFD to guarantee detection of physical member link failures. The goal is to verify link continuity for every member link.BFD over Bundle - BoB on NCS5500 and NCS500The BFD over Bundle (BoB) implementation on NCS5500 and NCS500 is a standards-based fast failure detection of link aggregation (LAG) member links that is interoperable between different platforms. There are 2 modes available# Cisco and IETF. NCS5500 and NCS500 only support the IETF mode. For BFD over Bundle, the BFD client is bundlemgr. Hence, if a BFD session goes down, bundlemgr will bring down the bundle if it violates the minimum-links criteria.Configuring BoBConfiguring BoB is very simple. Let us take a simple topology as mentioned above, with 3 links connected between 2 routers. We will bundle them into a single virtual interface. Let us configure BFD on the bundle interface as below#interface Bundle-Ether24 bfd mode ietf bfd address-family ipv4 multiplier 3 bfd address-family ipv4 destination 192.6.17.17 bfd address-family ipv4 fast-detect bfd address-family ipv4 minimum-interval 300That’s it! We are done. We should see the BFD neighbors coming up now; a small illustrative sketch of the per-member session logic follows, before we verify on the routers.
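As a minimal sketch of the RFC 7130 model (illustrative Python only, not the router implementation; the min-links value is an assumption for the example), one micro-BFD session runs per member link, a member is usable only while its own session is Up, and bundlemgr keeps the bundle up only while enough members are usable#

```python
# Illustrative sketch of the RFC 7130 model used by BoB: one BFD session per bundle
# member, with the bundle state derived from the members' session states.
# Member names mirror the Bundle-Ether24 example used below; min-links is assumed.

from dataclasses import dataclass

@dataclass
class Member:
    name: str
    bfd_state: str   # "Up" or "Down", state of the per-member micro-BFD session

def usable_members(members):
    return [m.name for m in members if m.bfd_state == "Up"]

def bundle_up(members, minimum_links=1):
    """bundlemgr keeps the bundle up only if enough members have BFD Up."""
    return len(usable_members(members)) >= minimum_links

be24 = [Member("TF0/0/0/24", "Up"), Member("TF0/0/0/29", "Up"), Member("Hu0/0/1/1", "Up")]
print(bundle_up(be24))                    # True: all three member sessions are Up

be24[0].bfd_state = "Down"                # one member link fails its micro-BFD session
print(usable_members(be24))               # ['TF0/0/0/29', 'Hu0/0/1/1']
print(bundle_up(be24, minimum_links=3))   # False: a min-links of 3 is no longer satisfied
```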
Let us verify the same.Verifying BoBRP/0/RP0/CPU0#T-2006#show bfd all session IPv4#-----Interface Dest Addr Local det time(int*mult) State Echo Async H/W NPU ------------------- --------------- ---------------- ---------------- ----------Hu0/0/1/1 192.6.17.17 0s(0s*0) 900ms(300ms*3) UP Yes 0/0/CPU0 TF0/0/0/24 192.6.17.17 0s(0s*0) 900ms(300ms*3) UP Yes 0/0/CPU0 TF0/0/0/29 192.6.17.17 0s(0s*0) 900ms(300ms*3) UP Yes 0/0/CPU0 BE24 192.6.17.17 n/a n/a UP No n/a RP/0/RP0/CPU0#T-2006#show bfd summary Node All PPS usage MP PPS usage Session number % Used Max % Used Max Total MP Max---------- --------------- --------------- ------------------0/0/CPU0 0 0 1000 0 0 700 3 0 756RP/0/RP0/CPU0#T-2006#From the above output we can confirm that the total number of sessions are 3. The Bundle Interface session is a dummy session and it does not count for the overall supported scale on the platform. The session output shows all the member links which can be used to quickly verify the individual members.Let us look the detailed output to understand other parameters.RP/0/RP0/CPU0#T-2006#show bfd all session interface bundle-ether 24 detail IPv4#-----I/f# Bundle-Ether24, Location# 0/RP0/CPU0Dest# 192.6.17.17Src# 192.6.17.6 State# UP for 0d#1h#21m#53s, number of times UP# 1 Session type# PR/V4/SH/BI/IB Session owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- bundlemgr_distrib 300 ms 3 300 ms 3 Session association information# Interface Dest Addr / Type -------------------- ----------------------------------- TF0/0/0/24 192.6.17.17 BFD_SESSION_SUBTYPE_RTR_BUNDLE_MEMBER TF0/0/0/29 192.6.17.17 BFD_SESSION_SUBTYPE_RTR_BUNDLE_MEMBER Hu0/0/1/1 192.6.17.17 BFD_SESSION_SUBTYPE_RTR_BUNDLE_MEMBER We can see the Bundle Interface 24 as the BFD interface with session owners as the individual member links. The session type is PR/V4/SH/BI/IB. Flags Session Type PR Pre-Routed Session mostly single path sessions applicable for Physical or Sub-interfaces and BFD over Bundle interfaces V4 IPv4 Session SH Single Hop Session BI Bundle Interface IB IETF BoB But the above output does not give us the full details on discriminators, negotiated timer values so-on and so-forth. 
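As a quick aside, since bundlemgr is the BFD client for BoB, the state of the bundle itself can be correlated with the standard bundle command below (output not shown here), which lists the member links and their LACP state#
show bundle bundle-ether 24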
For looking into those values individually, we need to check the below command.RP/0/RP0/CPU0#T-2006#show bfd all session detail interface twentyFiveGigE 0/0/0/24 IPv4#-----I/f# TwentyFiveGigE0/0/0/24, Location# 0/0/CPU0Dest# 192.6.17.17Src# 192.6.17.6 State# UP for 0d#1h#39m#22s, number of times UP# 1 Session type# PR/V4/SH/BM/IBReceived parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 2147491864, your discr# 2147487752, state UP, D/F/P/C/A# 0/0/0/1/0Transmitted parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 2147487752, your discr# 2147491864, state UP, D/F/P/C/A# 0/0/0/1/0Timer Values# Local negotiated async tx interval# 300 ms Remote negotiated async tx interval# 300 ms Desired echo tx interval# 0 s, local negotiated echo tx interval# 0 ms Echo detection time# 0 ms(0 ms*3), async detection time# 900 ms(300 ms*3)Local Stats# Intervals between async packets# Tx# Number of intervals=4, min=5 ms, max=14 s, avg=7164 ms Last packet transmitted 5949 s ago Rx# Number of intervals=13, min=2 ms, max=1700 ms, avg=1195 ms Last packet received 5962 s ago Intervals between echo packets# Tx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet transmitted 0 s ago Rx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet received 0 s ago Latency of echo packets (time between tx and rx)# Number of packets# 0, min=0 ms, max=0 ms, avg=0 msSession owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- bundlemgr_distrib 300 ms 3 300 ms 3 Session association information# Interface Dest Addr / Type -------------------- ----------------------------------- BE24 192.6.17.17 BFD_SESSION_SUBTYPE_RTR_BUNDLE_INTERFACEH/W Offload Info# H/W Offload capability # Y, Hosted NPU # 0/0/CPU0 Async Offloaded # Y, Echo Offloaded # N Async rx/tx # 49/35 Platform Info#NPU ID# 0 Async RTC ID # 1 Echo RTC ID # 0Async Feature Mask # 0x0 Echo Feature Mask # 0x0Async Session ID # 0x1008 Echo Session ID # 0x0Async Tx Key # 0x80001008 Echo Tx Key # 0x0Async Tx Stats addr # 0x0 Echo Tx Stats addr # 0x0Async Rx Stats addr # 0x0 Echo Rx Stats addr # 0x0 Flags Session Type PR Pre-Routed Session mostly single path sessions applicable for Physical or Sub-interfaces and BFD over Bundle interfaces V4 IPv4 Session SH Single Hop Session BM Bundle Member IB IETF BoB Similary we can check the parameters of the other bundle members as well. (show bfd all session detail interface twentyFiveGigE 0/0/0/29, show bfd all session detail interface hun 0/0/1/1)BFD over Bundle with IPv4 Unnumbered InterfacesWith IOSXR-7.3.1, NCS5500 and NCS500 supports BFD over Bundle with IPv4 Unnumbered Interfaces . This feature enables BFD to run on IP unnumbered interfaces, which take the IP address from the loopback address. The same loopback address is used on multiple interfaces. This helps to save IP addresses space or range. BFD creates a session on the unnumbered interface for which the BFD clients provide the source and destination IP address along with the interface index. BFD establishes the session on the Layer 3 unnumbered link to which the interface index corresponds. The source address is derived from the Loopback interface at the source. 
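As a minimal illustration of that statement (the /32 mask is an assumption# only the 172.16.4.6 address itself appears in the outputs that follow), the loopback lending its address to the unnumbered bundle could look like#
interface Loopback0
 ipv4 address 172.16.4.6 255.255.255.255
!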
The destination node also uses IP unnumbered interface with loopback address and that is used as destination IP address. BFD sends control packets to the unnumbered interfaces. These control packets are the regular IP BFD packets. Address Resolution Protocol (ARP) resolves the destination loopback IP address to the destination node’s router MAC address. Let us verify the sameConfiguring the session over IPv4 unnumbered interfacesRP/0/RP0/CPU0#T-2006#show running-config interface bundle-ether 24interface Bundle-Ether24 bfd mode ietf bfd address-family ipv4 multiplier 3 bfd address-family ipv4 destination 172.16.4.41 bfd address-family ipv4 fast-detect bfd address-family ipv4 minimum-interval 300 ipv4 point-to-point ipv4 unnumbered Loopback0 bundle minimum-active links 1!RP/0/RP0/CPU0#T-2006#Verifying the BFD sessionRP/0/RP0/CPU0#T-2006#show bfd all session IPv4#-----Interface Dest Addr Local det time(int*mult) State Echo Async H/W NPU ------------------- --------------- ---------------- ---------------- ----------TF0/0/0/24 172.16.4.41 0s(0s*0) 900ms(300ms*3) UP Yes 0/0/CPU0 TF0/0/0/29 172.16.4.41 0s(0s*0) 900ms(300ms*3) UP Yes 0/0/CPU0 BE24 172.16.4.41 n/a n/a UP No n/a RP/0/RP0/CPU0#T-2006#show bfd all session interface twentyFiveGigE 0/0/0/24 detail IPv4#-----I/f# TwentyFiveGigE0/0/0/24, Location# 0/0/CPU0Dest# 172.16.4.41Src# 172.16.4.6 State# UP for 0d#0h#5m#36s, number of times UP# 1 Session type# PR/V4/SH/BM/IBReceived parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 2147491940, your discr# 2147487763, state UP, D/F/P/C/A# 0/0/0/1/0Transmitted parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 2147487763, your discr# 2147491940, state UP, D/F/P/C/A# 0/1/0/1/0Timer Values# Local negotiated async tx interval# 300 ms Remote negotiated async tx interval# 300 ms Desired echo tx interval# 0 s, local negotiated echo tx interval# 0 ms Echo detection time# 0 ms(0 ms*3), async detection time# 900 ms(300 ms*3)Local Stats# Intervals between async packets# Tx# Number of intervals=6, min=110 ms, max=62 s, avg=19 s Last packet transmitted 241 s ago Rx# Number of intervals=19, min=2 ms, max=65 s, avg=6186 ms Last packet received 240 s ago Intervals between echo packets# Tx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet transmitted 0 s ago Rx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet received 0 s ago Latency of echo packets (time between tx and rx)# Number of packets# 0, min=0 ms, max=0 ms, avg=0 msSession owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- bundlemgr_distrib 300 ms 3 300 ms 3 Session association information# Interface Dest Addr / Type -------------------- ----------------------------------- BE24 172.16.4.41 BFD_SESSION_SUBTYPE_RTR_BUNDLE_INTERFACEH/W Offload Info# H/W Offload capability # Y, Hosted NPU # 0/0/CPU0 Async Offloaded # Y, Echo Offloaded # N Async rx/tx # 130/67 Platform Info#NPU ID# 0 Async RTC ID # 1 Echo RTC ID # 0Async Feature Mask # 0x0 Echo Feature Mask # 0x0Async Session ID # 0x1013 Echo Session ID # 0x0Async Tx Key # 0x80001013 Echo Tx Key # 0x0Async Tx Stats addr # 0x0 Echo Tx Stats addr # 0x0Async Rx Stats addr # 0x0 Echo Rx Stats addr # 0x0From the above output we can see that the source and destination value taken now, is of the 
loopback interface. We have not applied any IPv4 address on the Bundle Interface.Platform Support Platform Support NCS5500 Yes NCS540 Yes NCS560 Yes Note# BFD over Bundle is supported with mixed speed interfaces.SummaryHope this article helped to understand the BoB feature on the NCS5500 and NCS500 and its use case. We have also seen the basic commands for quick verification. We also covered the BoB with IP Unnumbered interface. In the next article, we will cover the multipath sessions# BLB and BVI and understand their use cases.ReferencesCCO Guide ", "url": "/tutorials/bfd-over-bundle-interfaces-on-ncs5500-and-ncs500/", "author": "Tejas Lad", "tags": "iosxr, NCS5500, NCS500, BFD, BoB, Bundle" } , "tutorials-multipath-and-multihop-bfd-sessions-on-ncs5500-and-ncs500": { "title": "Multipath and MultiHop BFD sessions on NCS5500 and NCS500", "content": " Multipath BFD sessions on NCS5500 and NCS500 Introduction BFD over Logical Bundle Configuring BLB Verifying BLB BFD over BVI Interface Configuring BFD over BVI Verifying BFD over BVI BFD MultiHop Session Configuring the multi-hop BFD Verifying the multi-hop BFD Timers and Scale Reference Summary IntroductionIn our previous article we discussed BFD over Bundle Interface or BoB. In this article, we discuss one more concept for bundle# BFD over Logical Bundle or BLB. We will understand the difference between BLB and BoB and see its use cases. We will also see what are Multipath and Multihop Sessions with BFD over BVI interfaces.BFD over Logical BundleWith IOS-XR 7.1.1, we support BLB on NCS5500 and NCS500 platforms. The Bidirectional Forwarding Detection (BFD) over Logical Bundle feature implements and deploys BFD over bundle interfaces based on RFC 5880. This is the fundamental difference between BLB and BoB. In the former the bundle interface is a single interface, whereas in the later we implement BFD per member link. BLB is a multipath (MP) single-hop session. BLB requires limited knowledge of the bundle interfaces on which the sessions run, this is because BFD treats the bundle as one big pipe. To function, BLB requires only information about IP addresses, interface types, and caps on bundle interfaces. Information such as list of bundle members, member states, and configured minimum or maximum bundle links are not required. In case of BLB, BFD client is not Bundle link but protocols running over bundle link. BLB is supported on IPv4 and IPv6 addresses and IPv6 link-local address. BFD Session Type BFD Client BFD over Bundle - BoB bundlemgr BFD over Logical Bundle - BLB protocols running over the bundle Configuring BLBrouter isis 1 is-type level-2-only net 49.0000.0000.0000.0006.00 address-family ipv4 unicast metric-style wide ! interface Bundle-Ether24 bfd minimum-interval 300 bfd multiplier 3 bfd fast-detect ipv4 point-to-point address-family ipv4 unicast ! !The BFD instance will run locally on the line card CPU. Therefore, we must manually designate the line card CPU on which the BFD session will be run. If the bundle members are spread across different locations, we need to specify all of them.bfd multipath include location 0/0/CPU0!Verifying BLBRP/0/RP0/CPU0#T-2006#show bfd all session IPv4#-----Interface Dest Addr Local det time(int*mult) State Echo Async H/W NPU ------------------- --------------- ---------------- ---------------- ----------BE24 192.6.17.17 0s(0s*0) 900ms(300ms*3) UP Yes 0/0/CPU0 From the above output we can see, it shows a single session over the bundle interface. 
If we compare it with BoB, we can see there were individual member links in the session. Let us analyse the detailed output.RP/0/RP0/CPU0#T-2006#show bfd all session interface bundle-ether 24 detail IPv4#-----I/f# Bundle-Ether24, Location# 0/0/CPU0Dest# 192.6.17.17Src# 192.6.17.6 State# UP for 0d#0h#17m#25s, number of times UP# 1 Session type# SW/V4/SH/BLReceived parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 4, your discr# 1, state UP, D/F/P/C/A# 0/0/0/1/0Transmitted parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 1, your discr# 4, state UP, D/F/P/C/A# 0/1/0/1/0Timer Values# Local negotiated async tx interval# 300 ms Remote negotiated async tx interval# 300 ms Desired echo tx interval# 0 s, local negotiated echo tx interval# 0 ms Echo detection time# 0 ms(0 ms*3), async detection time# 900 ms(300 ms*3)Label# Internal label# 24017/0x5dd1Local Stats# Intervals between async packets# Tx# Number of intervals=3, min=14 ms, max=299 ms, avg=115 ms Last packet transmitted 1045 s ago Rx# Number of intervals=4, min=3 ms, max=299 ms, avg=153 ms Last packet received 1045 s ago Intervals between echo packets# Tx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet transmitted 0 s ago Rx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet received 0 s ago Latency of echo packets (time between tx and rx)# Number of packets# 0, min=0 ms, max=0 ms, avg=0 msMP download state# BFD_MP_DOWNLOAD_ACKState change time# Jun 22 09#30#22.849Session owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- isis-1 300 ms 3 300 ms 3 H/W Offload Info# H/W Offload capability # Y, Hosted NPU # 0/0/CPU0 Async Offloaded # Y, Echo Offloaded # N Async rx/tx # 19/4 Platform Info#NPU ID# 0 Async RTC ID # 1 Echo RTC ID # 0Async Feature Mask # 0x0 Echo Feature Mask # 0x0Async Session ID # 0x1 Echo Session ID # 0x0Async Tx Key # 0x1 Echo Tx Key # 0x0Async Tx Stats addr # 0x0 Echo Tx Stats addr # 0x0Async Rx Stats addr # 0x0 Echo Rx Stats addr # 0x0 Flags Session Type SW Switched Session which includes BFD over Logical Bundle- BLB V4 IPv4 Session SH Single Hop Session BL BFD Type is BLB From the detailed output we can see that its a switched session with session type as BLB. The BFD client is IS-IS. We can also see MP download state# BFD_MP_DOWNLOAD_ACK which shows its a Multi-path session. Like IS-IS we can also use the OSPF and Static route as clients. We can also use vlan sub-interfaces along with physical main interface in the routing process. We can see BLB is a single hop multipath session. Source/Destination MAC address for BFD control packet is MAC of Bundle-Ether interface. BLB treats the bundle as one single interface and BFD packet may take any physical link to reach other end device.BFD over BVI InterfaceFrom IOS-XR 7.1.1, BFD is supported over BVI Interface. Let us start with a quick background of BVI Interface. In order for a VLAN to span a router, the router must be capable of forwarding frames from one interface to another, while maintaining the VLAN header. If the router is configured for routing a Layer 3 (network layer) protocol, it will terminate the VLAN and MAC layers at the interface on which a frame arrives. The MAC layer header can be maintained if the router bridges the network layer protocol. 
However, even regular bridging terminates the VLAN header. Using this feature, a router can be configured for routing and bridging the same network layer protocol, on the same interface. This allows the VLAN header to be maintained on a frame while it transits a router from one interface to another. The BVI is a virtual interface within the router that acts like a normal routed interface that does not support bridging, but represents the comparable bridge group to routed interfaces within the router. The interface number of the BVI is the number of the bridge group that the virtual interface represents. This number is the link between the BVI and the bridge group. Because the BVI represents a bridge group as a routed interface, it must be configured only with Layer 3 (L3) characteristics, such as network layer addresses. Similarly, the interfaces configured for bridging a protocol must not be configured with any L3 characteristics.With IOS-XR 7.1.1 and beyond, we can configure IPv4/IPv6 BFD over a BVI interface with OSPF/ISIS/Static/BGP clients. BFD over BVI is a single-hop (SH) multipath (MP) session. The details are very similar to the existing multipath-based sessions. BFD packet reception on a BVI interface is similar to any MP session. In the first pass, the packet is routed because the destination MAC of the received packet matches the BVI interface MAC, and it is considered a “for us” packet if the destination IP address matches the BVI interface IP address. In the second pass, BFD processes the packet and sends it to the OAMP/ARM block. This is the same way a packet received on a physical interface is processed. BFD over BVI is supported with IPv4 addresses, IPv6 global addresses and IPv6 link-local addresses. For details on the BFD architecture and packet flow, please refer to the BFD architecture article.Configuring BFD over BVIinterface Bundle-Ether24 bundle minimum-active links 1 l2transport ! l2vpn bridge group 24 bridge-domain BE24 interface Bundle-Ether24 ! routed interface BVI24 !router isis 1 is-type level-2-only net 49.0000.0000.0000.0006.00 address-family ipv4 unicast metric-style wide ! 
interface BVI24 bfd minimum-interval 300 bfd multiplier 3 bfd fast-detect ipv4 point-to-point address-family ipv4 unicast !bfd multipath include location 0/0/CPU0!Verifying BFD over BVIRP/0/RP0/CPU0#T-2006#show bfd all session Tue Jun 22 10#57#31.985 UTCIPv4#-----Interface Dest Addr Local det time(int*mult) State Echo Async H/W NPU ------------------- --------------- ---------------- ---------------- ----------BV24 192.6.17.17 0s(0s*0) 900ms(300ms*3) UP Yes 0/0/CPU0 RP/0/RP0/CPU0#T-2006#show bfd all session interface bvI 24 detail IPv4#-----I/f# BVI24, Location# 0/0/CPU0Dest# 192.6.17.17Src# 192.6.17.6 State# UP for 0d#0h#3m#39s, number of times UP# 1 Session type# SW/V4/SH/BRReceived parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 12, your discr# 3, state UP, D/F/P/C/A# 0/0/0/1/0Transmitted parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 3, your discr# 12, state UP, D/F/P/C/A# 0/1/0/1/0Timer Values# Local negotiated async tx interval# 300 ms Remote negotiated async tx interval# 300 ms Desired echo tx interval# 0 s, local negotiated echo tx interval# 0 ms Echo detection time# 0 ms(0 ms*3), async detection time# 900 ms(300 ms*3)Label# Internal label# 24015/0x5dcfLocal Stats# Intervals between async packets# Tx# Number of intervals=3, min=5 ms, max=2750 ms, avg=1018 ms Last packet transmitted 218 s ago Rx# Number of intervals=6, min=5 ms, max=1446 ms, avg=384 ms Last packet received 218 s ago Intervals between echo packets# Tx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet transmitted 0 s ago Rx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet received 0 s ago Latency of echo packets (time between tx and rx)# Number of packets# 0, min=0 ms, max=0 ms, avg=0 msMP download state# BFD_MP_DOWNLOAD_ACKState change time# Jun 22 10#54#26.323Session owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- isis-1 300 ms 3 300 ms 3 H/W Offload Info# H/W Offload capability # Y, Hosted NPU # 0/0/CPU0 Async Offloaded # Y, Echo Offloaded # N Async rx/tx # 16/9 Platform Info#NPU ID# 0 Async RTC ID # 1 Echo RTC ID # 0Async Feature Mask # 0x0 Echo Feature Mask # 0x0Async Session ID # 0x3 Echo Session ID # 0x0Async Tx Key # 0x3 Echo Tx Key # 0x0Async Tx Stats addr # 0x0 Echo Tx Stats addr # 0x0Async Rx Stats addr # 0x0 Echo Rx Stats addr # 0x0 Flags Session Type SW Switched Session which includes BVI V4 IPv4 Session SH Single Hop Session BR BFD Type is IRB Again we can see the BFD over BVI interface is a single hop but a multipath session.Note# BFD over BVI for IPv4 and IPv6 is not supported yet on systems based on Jericho2 and NCS560.BFD MultiHop SessionTill now we saw all the sessions be it Single-Path or Multi-Path, were Single-Hop. Now we will see Multi-Hop Multi-Path session. There are scenarios in which BFD needs to be enabled between two end-points which are not directly connected but across one or more Layer3 hops. For example BFD running over logical interfaces like GRE tunnel or protocol BGP. In such scenario path to reach other end-point will not always be same. As long as one path to the destination is active, BFD will come up. Also in such cases, there is not only one LC through which packet would leave the router and enter the router. 
And thus it is not just multi-hop but multi-path as well. The encapsulation of BFD Control packets for multihop application in IPv4 and IPv6 is identical to that defined in BFD single hop, except that the UDP destination port must have a value of 4784. This can aid in the demultiplexing and internal routing of incoming BFD packets.[RFC 5883]We support BFD Multihop sessions for NCS5500 family and with the latest IOS-XR release 7.3.1 we have included other platforms from the NCS500 family as well. Let us verify the multihop BFD.Configuring the multi-hop BFDrouter bgp 100 bgp router-id 6.6.6.6 address-family ipv4 unicastneighbor 172.16.4.41 remote-as 100 bfd fast-detect bfd multiplier 3 bfd minimum-interval 300 update-source Loopback0 address-family ipv4 unicast !Verifying the multi-hop BFDRP/0/RP0/CPU0#T-2006#show bfd all session detail IPv4#-----Location# 0/0/CPU0Dest# 172.16.4.41Src# 172.16.4.6VRF Name/ID# default/0x60000000 State# UP for 0d#0h#18m#4s, number of times UP# 1 Session type# SW/V4/MHReceived parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Multiplier# 3, diag# None My discr# 16, your discr# 4, state UP, D/F/P/C/A# 0/0/0/1/0Transmitted parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Multiplier# 3, diag# None My discr# 4, your discr# 16, state UP, D/F/P/C/A# 0/1/0/1/0Timer Values# Local negotiated async tx interval# 300 ms Remote negotiated async tx interval# 300 msasync detection time# 900 ms(300 ms*3)Local Stats# Intervals between async packets# Tx# Number of intervals=3, min=5 ms, max=1061 s, avg=354 s Last packet transmitted 1084 s ago Rx# Number of intervals=4, min=2 ms, max=302 ms, avg=152 ms Last packet received 1084 s agoMP download state# BFD_MP_DOWNLOAD_ACKState change time# Jun 22 11#58#19.853Session owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- bgp-default 300 ms 3 300 ms 3 H/W Offload Info# H/W Offload capability # Y, Hosted NPU # 0/0/CPU0 Async Offloaded # Y, Echo Offloaded # N Async rx/tx # 0/0 Platform Info#NPU ID# 0 Async RTC ID # 1 Echo RTC ID # 0Async Feature Mask # 0x0 Echo Feature Mask # 0x0Async Session ID # 0x4 Echo Session ID # 0x0Async Tx Key # 0x4 Echo Tx Key # 0x0Async Tx Stats addr # 0x0 Echo Tx Stats addr # 0x0Async Rx Stats addr # 0x0 Echo Rx Stats addr # 0x0 Flags Session Type SW Switched Session which includes BFD over Logical Bundle- BLB V4 IPv4 Session MH Multi Hop Session From the flags we can see the client BGP is having a multi-hop session. MP download state# BFD_MP_DOWNLOAD_ACK indicates that it is multi-path as well.Timers and ScaleFor the minimum timers and scale support, please visit the BFD Architecture Document.ReferenceCCO Config GuideRFC 5883SummaryHope this tech-note helped understanding the concept of BLB and its difference between BoB. 
We also touched upon the concept of Multi-Path and Multi-Hop sessions.", "url": "/tutorials/multipath-and-multihop-bfd-sessions-on-ncs5500-and-ncs500/", "author": "Tejas Lad", "tags": "iosxr, NCS5500, NCS500, NCS 5500, BFD, BLB, BVI" } , "tutorials-full-internet-in-j2": { "title": "Full Internet in Jericho2 NPU and Programming Speed", "content": " Full Internet in J2 NCS5700 and the Jericho2 NPU Internal Resources Full internet view in Jericho2 platforms / LCs J2 no-eTCAM J2 with eTCAM Projected Internet view (2027) in Jericho2 platforms / LCs J2 no-eTCAM J2 with eTCAM Routes Programming Speed Methodology IPv4 in LPM in J2 non-eTCAM IPv6 in LPM in J2 non-eTCAM IPv4 in eTCAM in J2-SE IPv6 in eTCAM in J2-SE Conclusion Annex# Telemetry Config .Articles mentioned in the video# Decommissionning the internet-optimized mode#https#//xrdocs.io/ncs5500/tutorials/decommissioning-internet-optimized-mode/ NCS5500 Routing Resource with 2020 Internet (and Future)https#//xrdocs.io/ncs5500/tutorials/ncs5500-routing-resource-with-2020-internet/ NCS5500 FIB Programming Speedhttps#//xrdocs.io/ncs5500/tutorials/ncs5500-fib-programming-speed/NCS5700 and the Jericho2 NPUThe NCS5500 products series is composed of different routing devices, fixed port “pizza boxes” and modular chassis with line cards. To clearly identify the products based on Broadcom Jericho2 ASICs, we named the fixed platforms “NCS-57xx” and the line cards in the chassis “NC57-xx”. It’s also the case for J2C platforms.The suffix “-SE” is used to describe systems and line cards with external TCAM# PID Port Density NPU eTCAM NCS57B1-6D24 24x 100GE, 6x 400GE 1x J2 No NCS57B1-5DSE 24x 100GE, 5x 400GE 1x J2 Yes NC57-24DD 24x 400GE 2x J2 No NC57-18DD-SE 24x Flex Ports + 6x 400G 2x J2 Yes NC57-36H6D-S 36x Flex Ports 1x J2 No NC57-36H-SE 36x 100GE 1x J2 Yes Note# More platforms coming soon in IOS XR 7.4.1 and in following releasesInternal ResourcesLike their predecessors, the J2 NPUs are leveraging different internal memories to store information, including routing details.If not equipped with eTCAM, we have LEM and LPM#If the NPU is completed by an external TCAM#In next releases (roadmap at the moment of the redaction of this article), different MDB profiles will be available and the size of LEM and LPM may vary, but the logic used to store routing information will very likely stay the same. Of course, we will document all future deviation.And essentially, things are very simple# in J2 without eTCAM, all IPv4 and IPv6 prefixes are stored in LPM in J2 with eTCAM, all IPv4 and IPv6 prefixes are stored in… eTCAMImportant note#A display bug is present in IOS XR 7.3.x and 7.4.x, DDTS CSCvw55441 titled “J2-non SE # “show contr npu resources” discrepancy for iproute with v4/32 routes”This defect only impacts the systems with no eTCAM where the v4/v6 prefixes are all stored in LPM. 
It can create a confusion counting the v4/32 routes as stored in LEM which is not the case in reality, example to prove it with a prefix 1.1.1.191/32RP/0/RP1/CPU0#5508-1-741#sh route 1.1.1.191Routing entry for 1.1.1.191/32 Known via ~local~, distance 0, metric 0 (connected) Installed Jun 14 02#53#43.044 for 01#11#51 Routing Descriptor Blocks directly connected, via Loopback0 Route metric is 0 Redist Advertisers# 9 (protoid=9, clientid=33) 5 (protoid=5, clientid=25)RP/0/RP1/CPU0#5508-1-741#show controllers fia diagshell 0 ~dbal table dump table=IPV4_UNICAST_PRIVATE_LEM_FORWARD~ location 0/7/CPU0 | i 1.1.1.191RP/0/RP1/CPU0#5508-1-741#show controllers fia diagshell 0 ~dbal table dump table=IPV4_UNICAST_PRIVATE_LPM_FORWARD~ location 0/7/CPU0 | i 1.1.1.191| 31 | 0 mask# 0xffff | 1.1.1.191 (0x10101bf) mask# 0xffffffff || KAPS_FLOW_ID 19 (655379) |RP/0/RP1/CPU0#5508-1-741#Don’t use this shell command in production since it dumps the entirety of the table. It’s only meant to be used in the lab with small routing table, to verify where routes are actually going.Until this DDTS is fixed, take the output of “show controller npu resources” with a grain of salt (or do the math yourself, counting the /32s from the DPA and adjusting the counters accordingly).Full internet view in Jericho2 platforms / LCsWe will run the test on a chassis equipped with a J2 line cards# non eTCAM (NC57-24DD) and with eTCAM (NC57-36H-SE)#RP/0/RP1/CPU0#5508-1-731#sh verCisco IOS XR Software, Version 7.3.1Copyright (c) 2013-2021 by Cisco Systems, Inc.Build Information# Built By # ingunawa Built On # Thu Feb 25 19#43#35 PST 2021 Built Host # iox-ucs-023 Workspace # /auto/srcarchive17/prod/7.3.1/ncs5500/ws Version # 7.3.1 Location # /opt/cisco/XR/packages/ Label # 7.3.1cisco NCS-5500 () processorSystem uptime is 16 hours 8 minutesRP/0/RP1/CPU0#5508-1-731#sh platform | i IOS XR0/0/CPU0 NC55-36X100G-A-SE IOS XR RUN NSHUT0/3/CPU0 NC57-36H-SE IOS XR RUN NSHUT0/7/CPU0 NC57-24DD IOS XR RUN NSHUT0/RP1/CPU0 NC55-RP-E(Active) IOS XR RUN NSHUTRP/0/RP1/CPU0#5508-1-731#We receive a full table v4 and v6 from a single peer (each)#RP/0/RP1/CPU0#5508-1-731#sh bgp sumBGP router identifier 1.3.5.9, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0000000 RD version# 6144813BGP main routing table version 6144813BGP NSR Initial initsync version 7 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 6144813 6144813 6144813 6144813 6144813 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd25.2.19.2 0 100 968 282819 6144813 0 0 16#05#07 0192.168.100.151 0 1000 836808 172278 6144813 0 0 00#40#30 836741192.168.100.152 0 1000 691695 626 0 0 0 00#53#00 Active192.168.100.153 0 1000 1494 118138 0 0 0 00#03#04 Active192.168.100.154 0 1000 0 0 0 0 0 00#00#00 Active192.168.100.155 0 1000 753 754 0 0 0 13#36#26 Active192.168.100.156 0 1000 0 0 0 0 0 00#00#00 Active192.168.100.157 0 1000 0 0 0 0 0 00#00#00 Active192.168.100.158 0 1000 0 0 0 0 0 00#00#00 Active192.168.100.159 0 1000 0 0 0 0 0 00#00#00 Active192.168.100.160 0 1000 0 0 0 0 0 00#00#00 Active192.168.100.161 0 100 0 0 0 0 0 00#00#00 ActiveRP/0/RP1/CPU0#5508-1-731#sh bgp ipv6 un sumBGP router identifier 1.3.5.9, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0800000 RD version# 599762BGP main routing table version 599762BGP NSR 
Initial initsync version 7 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 599762 599762 599762 599762 599762 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd2001#25#1#15##2 0 100 0 0 0 0 0 00#00#00 Idle2001#25#2#19##2 0 100 0 0 0 0 0 00#00#00 Active2001#111##151 0 151 734 26620 0 0 0 00#03#17 Active2001#111##152 0 152 121543 29362 599762 0 0 00#40#06 121498RP/0/RP1/CPU0#5508-1-731#And we can verify the prefix distribution#RP/0/RP1/CPU0#5508-1-731#sh dpa resources iproute loc 0/7/CPU0~iproute~ OFA Table (Id# 48, Scope# Global)--------------------------------------------------IPv4 Prefix len distributionPrefix Actual Prefix Actual /0 11 /1 0 /2 0 /3 0 /4 11 /5 0 /6 0 /7 0 /8 18 /9 13 /10 40 /11 100 /12 302 /13 585 /14 1186 /15 2027 /16 13440 /17 8190 /18 13679 /19 24739 /20 40174 /21 48749 /22 104394 /23 89388 /24 489657 /25 91 /26 0 /27 0 /28 0 /29 0 /30 0 /31 0 /32 99OFA Infra Stats Summary Create Requests# 3171038 Delete Requests# 2334145 Update Requests# 8910 Get Requests# 0 Backwalk Stats Update Requests# 0 Update Skipped# 0 Errors Resolve Failures# 0 Not Found in DB# 0 Exists in DB# 0 No Memory in DB# 0 Reserve Resources# 0 Release Resources# 0 Update Resources# 0 Retry Attempts# 0 Recovered from error# 0 Errors from bwalk# 0RP/0/RP1/CPU0#5508-1-731#sh dpa resources ip6route loc 0/7/CPU0~ip6route~ OFA Table (Id# 49, Scope# Global)--------------------------------------------------IPv6 Prefix len distributionPrefix Actual Prefix Actual /0 10 /1 0 /2 0 /3 0 /4 0 /5 0 /6 0 /7 0 /8 0 /9 0 /10 10 /11 0 /12 0 /13 0 /14 0 /15 0 /16 31 /17 0 /18 0 /19 1 /20 13 /21 3 /22 7 /23 7 /24 28 /25 8 /26 15 /27 20 /28 121 /29 3727 /30 521 /31 194 /32 16553 /33 2539 /34 2226 /35 922 /36 4777 /37 796 /38 1460 /39 830 /40 9269 /41 689 /42 2760 /43 700 /44 11231 /45 1077 /46 2337 /47 2102 /48 56544 /49 0 /50 0 /51 0 /52 1 /53 0 /54 0 /55 0 /56 1 /57 0 /58 0 /59 0 /60 0 /61 0 /62 0 /63 0 /64 30 /65 0 /66 0 /67 0 /68 0 /69 0 /70 0 /71 0 /72 0 /73 0 /74 0 /75 0 /76 0 /77 0 /78 0 /79 0 /80 0 /81 0 /82 0 /83 0 /84 0 /85 0 /86 0 /87 0 /88 0 /89 0 /90 0 /91 0 /92 0 /93 0 /94 0 /95 0 /96 0 /97 0 /98 0 /99 0 /100 0 /101 0 /102 0 /103 0 /104 11 /105 0 /106 0 /107 0 /108 0 /109 0 /110 0 /111 0 /112 0 /113 0 /114 0 /115 0 /116 0 /117 0 /118 0 /119 0 /120 1 /121 0 /122 0 /123 1 /124 1 /125 0 /126 3 /127 0 /128 32OFA Infra Stats Summary Create Requests# 360738 Delete Requests# 239129 Update Requests# 2 Get Requests# 0 Backwalk Stats Update Requests# 0 Update Skipped# 0 Errors Resolve Failures# 0 Not Found in DB# 0 Exists in DB# 0 No Memory in DB# 0 Reserve Resources# 0 Release Resources# 0 Update Resources# 0 Retry Attempts# 0 Recovered from error# 0 Errors from bwalk# 0 NPU ID# NPU-0 NPU-1 Create Server API Err# 0 0 Update Server API Err# 0 0 Delete Server API Err# 0 0RP/0/RP1/CPU0#5508-1-731#J2 no-eTCAMUsing streaming telemetry, we have a graphical visualization of the counters (having a very small portion of v4/32s in the lab, since they are not announced over the public internet, it only represents a negligeable number and we can trust what is streamed).RP/0/RP1/CPU0#5508-1-731#sh contr npu resources lpm loc 0/7/CPU0HW Resource Information Name # lpm Asic Type # Jericho 2NPU-0OOR Summary Estimated Max Entries # 2621440 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage Total In-Use # 958517 (37 %) iproute # 836794 (32 %) ip6route # 121577 (5 %) 
ipmcroute # 101 (0 %) ip6mcroute # 0 (0 %) ip6mc_comp_grp # 0 (0 %)NPU-1OOR Summary Estimated Max Entries # 2621440 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage Total In-Use # 958517 (37 %) iproute # 836794 (32 %) ip6route # 121577 (5 %) ipmcroute # 101 (0 %) ip6mcroute # 0 (0 %) ip6mc_comp_grp # 0 (0 %)RP/0/RP1/CPU0#5508-1-731#37% of the capacity with 2021 internet tables, that confirms we don’t need a -SE system or card specifically for internet handling. Of course, the -SE options are very relevant for other use-cases, but it’s no longer driven by the internet size, at least for a dozen of years.J2 with eTCAMAll routes are, as expected, present in eTCAM#RP/0/RP1/CPU0#5508-1-731#sh contr npu resources exttcamipv4 loc 0/3/CPU0HW Resource Information Name # ext_tcam_ipv4 Asic Type # Jericho 2NPU-0OOR Summary Estimated Max Entries # 5000000 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage Total In-Use # 836868 (17 %) iproute # 836893 (17 %)RP/0/RP1/CPU0#5508-1-731#sh contr npu resources exttcamipv6HW Resource Information Name # ext_tcam_ipv6 Asic Type # Jericho 2NPU-0OOR Summary Estimated Max Entries # 2000000 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage Total In-Use # 121559 (6 %) ip6route # 121609 (6 %)RP/0/RP1/CPU0#5508-1-731#Projected Internet view (2027) in Jericho2 platforms / LCsWe will reuse the projection done in this article last year to guesstimate the additional routes in year 2027# https#//xrdocs.io/ncs5500/tutorials/ncs5500-routing-resource-with-2020-internet/That’s 462,640 extra IPv4 and 239,128 extra IPv6 routes.RP/0/RP1/CPU0#5508-1-731#sh bgp sumBGP router identifier 1.3.5.9, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0000000 RD version# 7531536BGP main routing table version 7531536BGP NSR Initial initsync version 7 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 7531536 7531536 7531536 7531536 7531536 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd25.2.19.2 0 100 1016 286333 7069295 0 0 16#53#51 0192.168.100.151 0 1000 836857 175793 7069295 0 0 01#29#15 836741192.168.100.152 0 1000 691695 626 0 0 0 01#41#44 Active192.168.100.153 0 1000 2426 187154 0 0 33564 00#00#09 462640192.168.100.154 0 1000 0 0 0 0 0 00#00#00 Active192.168.100.155 0 1000 753 754 0 0 0 14#25#11 Active192.168.100.156 0 1000 0 0 0 0 0 00#00#00 Active192.168.100.157 0 1000 0 0 0 0 0 00#00#00 Active192.168.100.158 0 1000 0 0 0 0 0 00#00#00 Active192.168.100.159 0 1000 0 0 0 0 0 00#00#00 Active192.168.100.160 0 1000 0 0 0 0 0 00#00#00 Active192.168.100.161 0 100 0 0 0 0 0 00#00#00 ActiveRP/0/RP1/CPU0#5508-1-731#sh bgp ipv6 unBGP router identifier 1.3.5.9, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0xe0800000 RD version# 1317146BGP main routing table version 1317146BGP NSR Initial initsync version 7 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 1317146 1317146 1317146 1317146 1317146 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcd2001#25#1#15##2 0 100 0 0 0 0 0 00#00#00 Idle2001#25#2#19##2 0 100 0 0 0 0 0 00#00#00 Active2001#111##151 0 151 2176 79834 1317146 0 0 00#00#15 
2391282001#111##152 0 152 121592 32694 1317146 0 0 01#28#50 121498RP/0/RP1/CPU0#5508-1-731#sh dpa resources iproute loc 0/7/CPU0~iproute~ OFA Table (Id# 48, Scope# Global)--------------------------------------------------IPv4 Prefix len distributionPrefix Actual Prefix Actual /0 11 /1 0 /2 0 /3 0 /4 11 /5 0 /6 0 /7 0 /8 18 /9 13 /10 40 /11 100 /12 302 /13 585 /14 1186 /15 2027 /16 13440 /17 8190 /18 13679 /19 24739 /20 40174 /21 48749 /22 175966 /23 89388 /24 875877 /25 91 /26 0 /27 0 /28 0 /29 0 /30 0 /31 0 /32 100OFA Infra Stats Summary Create Requests# 4042814 Delete Requests# 2748128 Update Requests# 21951 Get Requests# 0 Backwalk Stats Update Requests# 0 Update Skipped# 0 Errors Resolve Failures# 0 Not Found in DB# 0 Exists in DB# 0 No Memory in DB# 0 Reserve Resources# 0 Release Resources# 0 Update Resources# 0 Retry Attempts# 0 Recovered from error# 0 Errors from bwalk# 0RP/0/RP1/CPU0#5508-1-731#sh dpa resources ip6route loc 0/7/CPU0~ip6route~ OFA Table (Id# 49, Scope# Global)--------------------------------------------------IPv6 Prefix len distributionPrefix Actual Prefix Actual /0 10 /1 0 /2 0 /3 0 /4 0 /5 0 /6 0 /7 0 /8 0 /9 0 /10 10 /11 0 /12 0 /13 0 /14 0 /15 0 /16 31 /17 0 /18 0 /19 1 /20 13 /21 3 /22 7 /23 7 /24 28 /25 8 /26 15 /27 20 /28 121 /29 3727 /30 521 /31 194 /32 16553 /33 2539 /34 2226 /35 922 /36 4777 /37 796 /38 1460 /39 830 /40 9269 /41 689 /42 2760 /43 700 /44 11231 /45 1077 /46 2337 /47 2102 /48 239355 /49 0 /50 0 /51 0 /52 1 /53 0 /54 0 /55 0 /56 1 /57 0 /58 0 /59 0 /60 0 /61 0 /62 0 /63 0 /64 56347 /65 0 /66 0 /67 0 /68 0 /69 0 /70 0 /71 0 /72 0 /73 0 /74 0 /75 0 /76 0 /77 0 /78 0 /79 0RP/0/RP1/CPU0#5508-1-731#J2 no-eTCAMRP/0/RP1/CPU0#5508-1-731#sh contr npu resources lpm loc 0/7/CPU0HW Resource Information Name # lpm Asic Type # Jericho 2NPU-0OOR Summary Estimated Max Entries # 2621440 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage Total In-Use # 1655437 (63 %) iproute # 1294586 (49 %) ip6route # 360705 (14 %) ipmcroute # 101 (0 %) ip6mcroute # 0 (0 %) ip6mc_comp_grp # 0 (0 %)NPU-1OOR Summary Estimated Max Entries # 2621440 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage Total In-Use # 1655437 (63 %) iproute # 1294586 (49 %) ip6route # 360705 (14 %) ipmcroute # 101 (0 %) ip6mcroute # 0 (0 %) ip6mc_comp_grp # 0 (0 %)RP/0/RP1/CPU0#5508-1-731#It proves that we will still have a reasonable amount of empty space (a third more, or less) in 2028 if internet growth keeps the current trends.J2 with eTCAMRP/0/RP1/CPU0#5508-1-731#sh contr npu resources exttcamipv4 loc 0/3/CPU0HW Resource Information Name # ext_tcam_ipv4 Asic Type # Jericho 2NPU-0OOR Summary Estimated Max Entries # 5000000 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage Total In-Use # 1294661 (26 %) iproute # 1294686 (26 %)RP/0/RP1/CPU0#5508-1-731#sh contr npu resources exttcamipv6 loc 0/3/CPU0HW Resource Information Name # ext_tcam_ipv6 Asic Type # Jericho 2NPU-0OOR Summary Estimated Max Entries # 2000000 Red Threshold # 95 % Yellow Threshold # 80 % OOR State # GreenCurrent Usage Total In-Use # 360688 (18 %) ip6route # 360738 (18 %)RP/0/RP1/CPU0#5508-1-731#Certainly this projection of internet growth is showing we have tons of space in this external TCAM, space available for other data.Routes Programming SpeedWe performed these tests and documented them for J+ in this article# “NCS5500 FIB Programming Speed” - https#//xrdocs.io/ncs5500/tutorials/ncs5500-fib-programming-speed/It’s now time to do it again for 
Jericho2.MethodologyWe are running the test with J2 line cards in modular chassis. It’s important to shut down all line card except the one under test since it may impact the performance measurement.Indeed line cards will wait for the slowest element of the chassis, that is visible in the following graph with “plateaux” (the moment a line card waits for the lower one to catch up).Also, to make sure we are not “polluting” the test results with a slow announcement of the BGP routes, we are pushing large blocks of routes with an internal Cisco tool. 1M IPv4 prefixes for example.We will measure the programming speed in LPM in non-eTCAM card and in external TCAM in an -SE card. And the approach will be the same for each test# we will start the advertisement at T0 at T1, the RIB converged at T2, the hardware resource (LPM/eTCAM) is fully programmed at T3, we stop the route advertisement (withdrawal begings) at T4, all routes are flushed from RIB at T5, all routes are flushed from hardware resource (LPM/eTCAM)IPv4 in LPM in J2 non-eTCAMTO# 11#04#18#232, we start advertisementT1# 11#04#27#270 (9 seconds later)First result# RIB is programmed at an average of 1,000,000 / 9 = 111k pfx/sT2# 11#04#55#276 (37 after beginning of advertisement)Second result# LPM is programmed at an average of 1,000,000 / 37 = 27k pfx/sWhich is confirmed by this second diagram#T3# 11#05#32#282, we stop advertisementT4# 11#05#38#260 (6 seconds later)Third result# RIB is flushed at an average of 1,000,000 / 6 = 166k pfx/sT5# 11#06#07#248 (35 seconds after beginning of withdrawal)Fourth result# LPM is flushed at an average of 1,000,000 / 35 = 28.5 pfx/sWhich is also confirmed here#IPv6 in LPM in J2 non-eTCAMLet’s jump directly to the pfx/s graphs we generated#We program at 30K+ IPv6 pfx/s.And we flush this memory at 28K+ pfx/s.IPv4 in eTCAM in J2-SESame methodology here, we directly jump to the prefix / second graphs.We program routes in eTCAM at a speed varying between 31K and 32K pfx/s.Flushing is done at a speed between 31K and 35J pfx/s.IPv6 in eTCAM in J2-SEAgain, same test methodology for IPv6.Varies from 25K to 28K pfx/s.Fluctuates between 24K and 27K pfx/s.ConclusionOn Jericho2, the programming and flushing of the routes is more or less the same for both LPM and eTCAM, around 25-30K prefixes per second.A system based on Jericho2 with no eTCAM can perfectly handle a full internet view (v4 + v6) and has lot of room for the years to come. eTCAM systems will be used to extend the capacity of the J2 chipset (high interface scale, QoS, etc) via the enablement of specific “-SE” MDB profiles, but the internet size is no longer a criteria to select -SE or non-SE.Annex# Telemetry ConfigWe configured the following sensors on the router for streaming telemetry#telemetry model-driven destination-group DEST-GROUP address-family ipv4 192.168.100.142 port 57500 encoding self-describing-gpb protocol grpc no-tls ! ! sensor-group BGP-COUNTERS sensor-path Cisco-IOS-XR-ipv4-bgp-oc-oper#oc-bgp/bgp-rib/afi-safi-table/ipv4-unicast/open-config-neighbors/open-config-neighbor/adj-rib-in-post/num-routes/num-routes sensor-path Cisco-IOS-XR-ipv4-bgp-oc-oper#oc-bgp/bgp-rib/afi-safi-table/ipv6-unicast/open-config-neighbors/open-config-neighbor/adj-rib-in-post/num-routes/num-routes ! sensor-group FIB-COUNTERS sensor-path Cisco-IOS-XR-fib-common-oper#fib/nodes/node/protocols/protocol/vrfs/vrf/summary ! sensor-group OFA-COUNTERS sensor-path Cisco-IOS-XR-platforms-ofa-oper#ofa ! 
sensor-group RIB-COUNTERS sensor-path Cisco-IOS-XR-ip-rib-ipv4-oper#rib/rib-table-ids/rib-table-id/summary-protos/summary-proto/proto-route-count sensor-path Cisco-IOS-XR-ip-rib-ipv6-oper#ipv6-rib/rib-table-ids/rib-table-id/summary-protos/summary-proto/proto-route-count/active-routes-count ! subscription SUB-GROUP sensor-group-id BGP-COUNTERS sample-interval 1000 sensor-group-id OFA-COUNTERS sample-interval 1000 sensor-group-id RIB-COUNTERS sample-interval 1000 destination-id DEST-GROUP source-interface MgmtEth0/RP1/CPU0/0 ! subscription anx-1622637052563 sensor-group-id OFA-COUNTERS sample-interval 15000 ! subscription anx-1622667923101 sensor-group-id OFA-COUNTERS sample-interval 15000 ! subscription anx-1622714667389 sensor-group-id OFA-COUNTERS sample-interval 15000 !!Thanks to# Fred Cuiller for the corrections", "url": "/tutorials/full-internet-in-j2/", "author": "Nicolas Fevrier", "tags": "" } , "tutorials-coexistence-between-bfd-over-bundle-and-bfd-over-logical-bundle": { "title": "Coexistence Between BFD over Bundle and BFD over Logical Bundle", "content": " Coexistence Between BFD over Bundle and BFD over Logical Bundle Introduction Quick recap Limitations BoB and BLB Coexistence Feature Support Inherit and Logical Modes of Operation(Reference) Router Demo Inherit Mode Logical Mode BLB dependency on BoB in Inherit Mode Which mode to choose# Inherit vs Logical ? Memory and Scale Impact Summary IntroductionIn the previous articles, we introduced the concepts of BFD over Bundle and BFD over Logical Bundle. We saw the configurations and the use cases. In this article we will discuss their limitations, when they are used one-at-a-time and see the why we need their coexistence.Quick recapBFD over Bundle(BoB) implementation is a standard based fast failure detection of link aggregation (LAG) member links that is interoperable between different platforms. BFD over Bundle implements BFD per member link. Whereas the Bidirectional Forwarding Detection (BFD) over Logical Bundle feature implements and deploys BFD over bundle interfaces. This is the fundamental difference between BLB and BoB. In BLB the bundle interface is a single interface, whereas in the BoB we implement BFD per member link. BLB is a multipath (MP) single-hop session.If BLB is running on a bundle there is only one BFD session running. This implies that only one bundle-member is being monitored by BFD, at any given time. Whereas in case of BoB, we have BFD sessions equal to the number of member links.Limitations BoB does not provide true L3 check and is not supported on subinterfaces. With BLB, a failure of bundle members, which BFD is not running on is not detected. And a failure of a bundle member, which BFD is running on will cause BFD to declare a session failure on the bundle, even if there are sufficient numbers of other bundle members available and functional.To overcome these limitations, it is possible to run BoB and BLB in parallel on the same bundle interface. This provides the faster bundle convergence from BoB and the true L3 check from BLB (Reference)BoB and BLB Coexistence Feature Support The feature is supported from IOS-XR 741. It is supported on all the platforms including NCS 540, NCS 560 and NCS 5500 (including systems based on J2 Native and Compatible modes). 
Support for two modes# Inherit and Logical Supported for BFD v4 and v6 sessionsInherit and Logical Modes of Operation(Reference)BoB and BLB coexistence is supported with the following 2 modes#Inherit# When the “inherit” coexistence mode is configured then a BLB will always create a virtual session and never a BFD session with real packets. This means BLB will not send packets, but will refer the packets that are sent by BoB.Logical# When the option “logical” is used BLB will always create a real session even when BoB is on. There is one exception if the main bundle interface has an IPv4 address. In this case the session is inherited when BoB is on.We will see this is details when we check it on our routers.Router DemoAfter all the theory behind the need the BoB and BLB coexistence, let us see it in action. We will use 2 back to back connected routers with IOS-XR 741 and verify different scenarios with inherit and logical mode. Below is our lab setup.Inherit ModeLet us first configure the linecards to allow hosting of MP BFD sessions. If no linecards are included, linecards groups are not formed, and consequently no BFD MP sessions are created. And second configuration is related to the BoB-BLB coexistence.bfd multipath include location 0/7/CPU0 bundle coexistence bob-blb inherit!BoB and BLB configsinterface Bundle-Ether30 description connected to NCS5508-2 bfd mode ietf bfd address-family ipv4 multiplier 3 bfd address-family ipv4 destination 30.1.1.2 bfd address-family ipv4 fast-detect bfd address-family ipv4 minimum-interval 300 ipv4 address 30.1.1.1 255.255.255.252!router isis 1 is-type level-2-only net 49.0000.0000.0191.00 log adjacency changes log pdu drops address-family ipv4 unicast metric-style wide segment-routing mpls! interface Bundle-Ether30 bfd minimum-interval 300 bfd multiplier 3 bfd fast-detect ipv4 point-to-point address-family ipv4 unicast !Verifying the BFD sessionsRP/0/RP1/CPU0#5508-1-74142I-C#show bfd all sessionIPv4#-----Interface Dest Addr Local det time(int*mult) State Echo Async H/W NPU ------------------- --------------- ---------------- ---------------- ----------FH0/7/0/23 30.1.1.2 0s(0s*0) 900ms(300ms*3) UP Yes 0/7/CPU0 FH0/7/0/21 30.1.1.2 0s(0s*0) 900ms(300ms*3) UP Yes 0/7/CPU0 BE30 30.1.1.2 n/a n/a UP No n/a RP/0/RP1/CPU0#5508-1-74142I-C#show bfd all session interface bundle-ether 30 detail IPv4#-----I/f# Bundle-Ether30, Location# 0/RP1/CPU0Dest# 30.1.1.2Src# 30.1.1.1 State# UP for 0d#0h#0m#51s, number of times UP# 2 Session type# PR/V4/SH/BI/IBSession owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- bundlemgr_distrib 300 ms 3 300 ms 3 isis-1 300 ms 3 300 ms 3 Session association information# Interface Dest Addr / Type -------------------- ----------------------------------- FH0/7/0/23 30.1.1.2 BFD_SESSION_SUBTYPE_RTR_BUNDLE_MEMBER FH0/7/0/21 30.1.1.2 BFD_SESSION_SUBTYPE_RTR_BUNDLE_MEMBER Flags Session Type PR Pre-Routed Session mostly single path sessions applicable for Physical or Sub-interfaces and BFD over Bundle interfaces V4 IPv4 Session SH Single Hop Session BI Bundle Interface IB IETF BoB Now let us add a sub-interface of the same bundle for the ISIS adjacency. We will keep the mode as Inherit.router isis 1 is-type level-2-only net 49.0000.0000.0191.00 address-family ipv4 unicast metric-style wide ! interface Bundle-Ether30.1 bfd minimum-interval 300 bfd multiplier 3 bfd fast-detect ipv4 point-to-point address-family ipv4 unicast ! 
!RP/0/RP1/CPU0#5508-1-74142I-C#show bfd all session interface bundle-ether 30 detail IPv4#-----I/f# Bundle-Ether30, Location# 0/RP1/CPU0Dest# 30.1.1.2Src# 30.1.1.1 State# UP for 0d#0h#12m#47s, number of times UP# 2 Session type# PR/V4/SH/BI/IBSession owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- bundlemgr_distrib 300 ms 3 300 ms 3 Session association information# Interface Dest Addr / Type -------------------- ----------------------------------- FH0/7/0/23 30.1.1.2 BFD_SESSION_SUBTYPE_RTR_BUNDLE_MEMBER FH0/7/0/21 30.1.1.2 BFD_SESSION_SUBTYPE_RTR_BUNDLE_MEMBER BE30.1 31.1.1.2 BFD_SESSION_SUBTYPE_STATE_INHERIT RP/0/RP1/CPU0#5508-1-74142I-C#show bfd all session interface bundle-ether 30.1 detail IPv4#-----I/f# Bundle-Ether30.1, Location# 0/RP1/CPU0Dest# 31.1.1.2Src# 31.1.1.1 State# UP for 0d#0h#1m#59s, number of times UP# 1 Session type# PR/V4/SH/IHSession owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- isis-1 300 ms 3 300 ms 3 Session association information# Interface Dest Addr / Type -------------------- ----------------------------------- BE30 30.1.1.2 BFD_SESSION_SUBTYPE_RTR_BUNDLE_INTERFACE Flags Session Type PR Pre-Routed Session mostly single path sessions applicable for Physical or Sub-interfaces and BFD over Bundle interfaces V4 IPv4 Session SH Single Hop Session IH State Inherit As we discussed above, we can see that in the Inherit mode, BLB will always create a virtual session and never a BFD session with real packets.Logical ModeLet us verify the behaviour in the logical mode. We will change the configuration of the coexistence mode as logical in place of inherit.bfd multipath include location 0/7/CPU0 bundle coexistence bob-blb logical!RP/0/RP0/CPU0#5508-2-74142I-C#show bfd all session interface bundle-ether 30 detail IPv4#-----I/f# Bundle-Ether30, Location# 0/RP0/CPU0Dest# 30.1.1.1Src# 30.1.1.2 State# UP for 0d#0h#50m#32s, number of times UP# 3 Session type# PR/V4/SH/BI/IBSession owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- bundlemgr_distrib 300 ms 3 300 ms 3 Session association information# Interface Dest Addr / Type -------------------- ----------------------------------- FH0/3/0/23 30.1.1.1 BFD_SESSION_SUBTYPE_RTR_BUNDLE_MEMBER FH0/3/0/21 30.1.1.1 BFD_SESSION_SUBTYPE_RTR_BUNDLE_MEMBER RP/0/RP0/CPU0#5508-2-74142I-C#show bfd all session interface bundle-ether 30.1 detail IPv4#-----I/f# Bundle-Ether30.1, Location# 0/3/CPU0Dest# 31.1.1.1Src# 31.1.1.2 State# UP for 0d#0h#5m#28s, number of times UP# 1 Session type# SW/V4/SH/BLReceived parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 15728648, your discr# 7340048, state UP, D/F/P/C/A# 0/0/0/1/0Transmitted parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 7340048, your discr# 15728648, state UP, D/F/P/C/A# 0/1/0/1/0Timer Values# Local negotiated async tx interval# 300 ms Remote negotiated async tx interval# 300 ms Desired echo tx interval# 0 s, local negotiated echo tx interval# 0 ms Echo detection time# 0 ms(0 ms*3), async detection time# 900 ms(300 ms*3)Label# Internal label# 24036/0x5de4Local Stats# Intervals between async packets# 
Tx# Number of intervals=3, min=7 ms, max=5103 ms, avg=1802 ms Last packet transmitted 327 s ago Rx# Number of intervals=8, min=5 ms, max=1700 ms, avg=712 ms Last packet received 327 s ago Intervals between echo packets# Tx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet transmitted 0 s ago Rx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet received 0 s ago Latency of echo packets (time between tx and rx)# Number of packets# 0, min=0 ms, max=0 ms, avg=0 msMP download state# BFD_MP_DOWNLOAD_ACKState change time# Aug 8 03#11#10.233Session owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- isis-1 300 ms 3 300 ms 3 H/W Offload Info# H/W Offload capability # Y, Hosted NPU # 0/3/CPU0 Async Offloaded # Y, Echo Offloaded # N Async rx/tx # 54/19 Platform Info#NPU ID# 1 Async RTC ID # 1 Echo RTC ID # 0Async Feature Mask # 0x0 Echo Feature Mask # 0x0Async Session ID # 0x10 Echo Session ID # 0x0Async Tx Key # 0x700010 Echo Tx Key # 0x0Async Tx Stats addr # 0x0 Echo Tx Stats addr # 0x0Async Rx Stats addr # 0x0 Echo Rx Stats addr # 0x0From the above output we can see that, when the option “logical” is used BLB will always create a real session. We can see the BFD packets are exchanged between the two neighbors and BLB Flag is set.When we configure the BoB-BLB coexistence in Logical mode, there is one exception. If the main bundle interface has an IPv4 address, the session is inherited when BoB is on. Let us configure a BLB session via a bundle sub-interface.RP/0/RP1/CPU0#5508-1-74142I-C#show bfd all session interface bundle-ether 30 detail IPv4#-----I/f# Bundle-Ether30, Location# 0/RP1/CPU0Dest# 30.1.1.2Src# 30.1.1.1 State# UP for 0d#0h#41m#54s, number of times UP# 2 Session type# PR/V4/SH/BI/IBSession owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- bundlemgr_distrib 300 ms 3 300 ms 3 isis-1 300 ms 3 300 ms 3 Session association information# Interface Dest Addr / Type -------------------- ----------------------------------- FH0/7/0/23 30.1.1.2 BFD_SESSION_SUBTYPE_RTR_BUNDLE_MEMBER FH0/7/0/21 30.1.1.2 BFD_SESSION_SUBTYPE_RTR_BUNDLE_MEMBER BLB dependency on BoB in Inherit ModeAs we saw in the earlier section, when the Inherit mode is configured, the BLB session inherits from BoB session, as the BLB session is created as a virtual session, but never gets downloaded to the LC and so, no real packets are sent out. The “logical” mode creates a BFD session with real packets and BLB requires multi-path (MP) configurations irrespective of the inherited mode. Inherited sessions when BOB is not enabled will be held in DOWN State. 
Let us verify the same.We have BoB configured on the interface and can see the session is UP.RP/0/RP1/CPU0#5508-1-74142I-C#show running-config interface bundle-ether 30interface Bundle-Ether30 description connected to NCS5508-2 bfd mode ietf bfd address-family ipv4 multiplier 3 bfd address-family ipv4 destination 30.1.1.2 bfd address-family ipv4 fast-detect bfd address-family ipv4 minimum-interval 300 ipv4 address 30.1.1.1 255.255.255.252!RP/0/RP1/CPU0#5508-1-74142I-C#show bfd all session interface bundle-ether 30 detail IPv4#-----I/f# Bundle-Ether30, Location# 0/RP1/CPU0Dest# 30.1.1.2Src# 30.1.1.1 State# UP for 1d#18h#48m#46s, number of times UP# 2 Session type# PR/V4/SH/BI/IBSession owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- bundlemgr_distrib 300 ms 3 300 ms 3 Session association information# Interface Dest Addr / Type -------------------- ----------------------------------- FH0/7/0/23 30.1.1.2 BFD_SESSION_SUBTYPE_RTR_BUNDLE_MEMBER FH0/7/0/21 30.1.1.2 BFD_SESSION_SUBTYPE_RTR_BUNDLE_MEMBER BE30.1 31.1.1.2 BFD_SESSION_SUBTYPE_STATE_INHERITRP/0/RP1/CPU0#5508-1-74142I-C#show bfd all session interface bundle-ether 30.1 detail IPv4#-----I/f# Bundle-Ether30.1, Location# 0/RP1/CPU0Dest# 31.1.1.2Src# 31.1.1.1 State# UP for 0d#0h#7m#58s, number of times UP# 1 Session type# PR/V4/SH/IHSession owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- isis-1 300 ms 3 300 ms 3 Session association information# Interface Dest Addr / Type -------------------- ----------------------------------- BE30 30.1.1.2 BFD_SESSION_SUBTYPE_RTR_BUNDLE_INTERFACELet us remove the BoB configsinterface Bundle-Ether30 description connected to NCS5508-2 ipv4 address 30.1.1.1 255.255.255.252!We can see there is no BoB session now.RP/0/RP1/CPU0#5508-1-74142I-C#show bfd all session interface bundle-ether 30 detail IPv4#-----The BLB session has gone down as there is no way to inherit the BFD packets.RP/0/RP1/CPU0#5508-1-74142I-C#show bfd all session interface bundle-ether 30.1 detail IPv4#-----I/f# Bundle-Ether30.1, Location# 0/RP1/CPU0Dest# 31.1.1.2Src# 31.1.1.1 State# DOWN for 0d#0h#4m#46s, number of times UP# 1 Session type# PR/V4/SH/IHSession owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- isis-1 300 ms 3 300 ms 3 RP/0/RP1/CPU0#Aug 9 21#17#09.336 PDT# bfd[1269]# %L2-BFD-6-SESSION_INHERIT_OFF # BFD session to neighbor 31.1.1.2 on interface Bundle-Ether30.1 is not inheriting state from session over Bundle-Ether30 RP/0/RP1/CPU0#Aug 9 21#17#09.336 PDT# bfd[1269]# %L2-BFD-6-SESSION_STATE_DOWN # BFD session to neighbor 31.1.1.2 on interface Bundle-Ether30.1 has gone down. 
Reason# Admin down RP/0/RP1/CPU0#Aug 9 21#17#09.336 PDT# bfd[1269]# %L2-BFD-6-SESSION_BUNDLE_ALL_MEMBER_OFF # BFD session to neighbor 30.1.1.2 on interface Bundle-Ether30 is not running in all member mode RP/0/RP1/CPU0#Aug 9 21#17#09.336 PDT# bfd[1269]# %L2-BFD-6-SESSION_REMOVED # BFD session to neighbor 30.1.1.2 on interface Bundle-Ether30 has been removed LC/0/7/CPU0#Aug 9 21#17#09.337 PDT# bfd_agent[342]# %L2-BFD-6-SESSION_REMOVED # BFD session to neighbor 30.1.1.2 on interface FourHundredGigE0/7/0/23 has been removed LC/0/7/CPU0#Aug 9 21#17#09.337 PDT# bfd_agent[342]# %L2-BFD-6-SESSION_REMOVED # BFD session to neighbor 30.1.1.2 on interface FourHundredGigE0/7/0/21 has been removed RP/0/RP1/CPU0#Aug 9 21#17#10.439 PDT# config[66222]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'root'. Use 'show configuration commit changes 1000000488' to view the changes. RP/0/RP1/CPU0#Aug 9 21#17#52.509 PDT# isis[1009]# %ROUTING-ISIS-5-ADJCHANGE # Adjacency to 5508-2-74142I-C (Bundle-Ether30.1) (L2) Down, Neighbor forgot us RP/0/RP1/CPU0#Aug 9 21#18#35.402 PDT# config[66222]# %MGBL-SYS-5-CONFIG_I # Configured from console by root on vty1 (192.168.100.151) RP/0/RP1/CPU0#5508-1-74142I-C# Changing the coexistence mode back to logical. We can see that the BLB session has come up, because during this mode real packets are exchanged and we do not inherit it from the BoB.RP/0/RP1/CPU0#5508-1-74142I-C#show running-config bfdbfd multipath include location 0/7/CPU0 bundle coexistence bob-blb logical!RP/0/RP1/CPU0#5508-1-74142I-C#show bfd all session interface bundle-ether 30.1 detail IPv4#-----I/f# Bundle-Ether30.1, Location# 0/7/CPU0Dest# 31.1.1.2Src# 31.1.1.1 State# UP for 0d#0h#0m#29s, number of times UP# 1 Session type# SW/V4/SH/BLReceived parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 7340052, your discr# 15728652, state UP, D/F/P/C/A# 0/0/0/1/0Transmitted parameters# Version# 1, desired tx interval# 300 ms, required rx interval# 300 ms Required echo rx interval# 0 ms, multiplier# 3, diag# None My discr# 15728652, your discr# 7340052, state UP, D/F/P/C/A# 0/1/0/1/0Timer Values# Local negotiated async tx interval# 300 ms Remote negotiated async tx interval# 300 ms Desired echo tx interval# 0 s, local negotiated echo tx interval# 0 ms Echo detection time# 0 ms(0 ms*3), async detection time# 900 ms(300 ms*3)Label# Internal label# 24025/0x5dd9Local Stats# Intervals between async packets# Tx# Number of intervals=3, min=6 ms, max=18 s, avg=6805 ms Last packet transmitted 29 s ago Rx# Number of intervals=5, min=3 ms, max=1680 ms, avg=520 ms Last packet received 29 s ago Intervals between echo packets# Tx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet transmitted 0 s ago Rx# Number of intervals=0, min=0 s, max=0 s, avg=0 s Last packet received 0 s ago Latency of echo packets (time between tx and rx)# Number of packets# 0, min=0 ms, max=0 ms, avg=0 msMP download state# BFD_MP_DOWNLOAD_ACKState change time# Aug 9 21#25#57.190Session owner information# Desired Adjusted Client Interval Multiplier Interval Multiplier -------------------- --------------------- --------------------- isis-1 300 ms 3 300 ms 3 H/W Offload Info# H/W Offload capability # Y, Hosted NPU # 0/7/CPU0 Async Offloaded # Y, Echo Offloaded # N Async rx/tx # 41/14 Platform Info#NPU ID# 1 Async RTC ID # 1 Echo RTC ID # 0Async Feature Mask # 0x0 Echo Feature Mask # 0x0Async Session ID # 0xc Echo Session ID # 0x0Async Tx Key 
# 0xf0000c Echo Tx Key # 0x0Async Tx Stats addr # 0x0 Echo Tx Stats addr # 0x0Async Rx Stats addr # 0x0 Echo Rx Stats addr # 0x0LC/0/7/CPU0#Aug 9 21#25#57.189 PDT# bfd_agent[342]# %L2-BFD-6-SESSION_DAMPENING_ON # Session to neighbor 31.1.1.2 on interface Bundle-Ether30.1 entered Dampened state (initial# 2000 ms,secondary# 5000 ms,maximum# 120000 ms). RP/0/RP1/CPU0#Aug 9 21#25#59.137 PDT# config[66604]# %MGBL-CONFIG-6-DB_COMMIT # Configuration committed by user 'root'. Use 'show configuration commit changes 1000000489' to view the changes. LC/0/7/CPU0#Aug 9 21#26#15.307 PDT# bfd_agent[342]# %L2-BFD-6-SESSION_DAMPENING_OFF # Session to neighbor 31.1.1.2 on interface Bundle-Ether30.1 moved out of Dampened state. LC/0/7/CPU0#Aug 9 21#26#17.601 PDT# bfd_agent[342]# %L2-BFD-6-SESSION_STATE_UP # BFD session to neighbor 31.1.1.2 on interface Bundle-Ether30.1 is up RP/0/RP1/CPU0#Aug 9 21#26#18.799 PDT# isis[1009]# %ROUTING-ISIS-5-ADJCHANGE # Adjacency to 5508-2-74142I-C (Bundle-Ether30.1) (L2) Up, Restarted Which mode to choose# Inherit vs Logical ?When choosing between the coexistence modes, we need to understand how each mode works and what we want to achieve. As we saw in the earlier sections and examples, in Inherit mode BLB sessions simply inherit the BoB session state and do not have their own state machine. If BoB is down, BLB will also go down. BLB clients (routing protocols) are tied to the BoB session state, so if the BoB session goes down, the routing protocols get notified along with bundle-mgr. In Logical mode, BLB and BoB are independent sessions and notify their respective clients based on their configured timers. If link failure detection is the only criterion for the customer, BoB alone should be enough in most cases, but convergence may be affected because the routing protocols are only notified indirectly. If forwarding detection is also required for the bundle main/sub-interfaces, then BLB should be used along with BoB. This will help in achieving faster convergence.Memory and Scale ImpactThere is no memory impact expected when configuring BoB and BLB coexistence. When using inherit mode, only the BoB sessions are considered for the scale. With logical mode it is the sum of the BoB and BLB sessions. The overall scale calculation will not change and remains within the existing supported scale. For details on how the scale is calculated please refer.SummaryHope you find this article useful. We covered a quick background on BoB and BLB and what their limitations are when using either of them alone. We saw how BoB and BLB coexistence can help BFD converge faster and give better results. We walked through configuration examples with ISIS as the IGP; this feature is also supported with OSPF, BGP and static routes. We also discussed on what basis to select the coexistence mode. 
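For completeness, here is a minimal sketch of registering an IGP as a BLB client on the bundle sub-interface, using ISIS as in the examples above. The instance name, interface and timers are assumed from this lab's outputs; the same approach applies to OSPF, BGP and static routes.
router isis 1
 interface Bundle-Ether30.1
  bfd minimum-interval 300
  bfd multiplier 3
  bfd fast-detect ipv4
  address-family ipv4 unicast
  !
 !
!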
As a best practice, for faster convergence we should use BoB-BLB coexistence wherever possible.", "url": "/tutorials/coexistence-between-bfd-over-bundle-and-bfd-over-logical-bundle/", "author": "Tejas Lad", "tags": "iosxr, BFD, BLB, BOB, NCS 5500, NCS 500, BLB BoB Coexistence" } , "tutorials-access-list-enhancements-on-ncs5500-j2-based-platforms": { "title": "Access-List Enhancements on NCS5500 J2 based platforms", "content": " Access-List Enhancements on NCS5500 J2 based platforms Introduction Permit Stats Verification Ingress ACL on External TCAM Summary of TCAM Usage Advantage of programming ingress ACLs in external TCAM Ingress and Egress Default TCAM Keys IPV4 Ingress Default TCAM keys IPV6 Ingress Default TCAM keys IPV4 Egress Default TCAM keys IPV6 Egress Default TCAM keys Scale Thank You Summary IntroductionIn our previous articles, we introduced the ACL features for NCS500 and NCS5500 platforms based on the Qumran-MX, Jericho and Jericho+ chipsets. We discussed ACL implementation, Hybrid ACL, and matching criteria like Packet Length and IP Fragments. We also discussed other important features like UDK and UDF. Finally we touched upon the concepts of ABF and Chained ACL. In this article and the ones that follow, we will explore the ACL enhancements on Jericho2 based platforms.There have been quite a few changes in implementation and support for ACLs on Jericho2 based platforms, right from permit stats availability to the programming of ingress ACLs on the eTCAM. We have also introduced support for more Default Keys for both ingress and egress ACLs. We no longer need to recycle packets for IPv6 egress ACLs. We also have more support on BVI interfaces compared to previous platforms. So let us start looking into each of them.Permit StatsLet us first start with the permit statistics. As we know, when it comes to NCS5500 and NCS500, we have limited hardware resources and we need to use them wisely if we need to accommodate different features together. In these platforms, by default ACL permit stats are not accounted for in the ingress direction due to resource sharing. We need to enable hw-module profile stats acl-permit to allocate statistic entries to permit ACEs. But there is a drawback after enabling this profile. If acl-permit is configured, qos-enhanced or other options are disabled. With J2 based platforms, the above permit stats CLI is no longer needed. Statistic entries are allocated by default.Let us check this with an example. We have a NC57-18DD-SE in slot 3. For contrast, the legacy profile required on Jericho/Jericho+ based systems is sketched below; it is not needed on this line card. 
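A minimal sketch of the legacy profile. The command is the one named above; on J/J+ systems it generally has to be followed by a reload for the statistic allocation to take effect, and nothing like it is configured in the J2 examples that follow.
hw-module profile stats acl-permit
!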
We apply the ACL in the ingress direction and send traffic from IXIA.RP/0/RP0/CPU0#5508-2-74142I-C#show platformNode Type State Config state--------------------------------------------------------------------------------0/0/CPU0 NC55-36X100G-A-SE IOS XR RUN NSHUT0/0/NPU0 Slice UP 0/0/NPU1 Slice UP 0/0/NPU2 Slice UP 0/0/NPU3 Slice UP 0/3/CPU0 NC57-18DD-SE IOS XR RUN NSHUTipv4 access-list permit-stats 10 permit ipv4 25.1.7.0 0.0.0.255 any 20 deny ipv4 26.1.7.0 0.0.0.255 any RP/0/RP0/CPU0#5508-2-74142I-C#show running-config interface fourHundredGigE 0/3/0/21interface FourHundredGigE0/3/0/21 cdp ipv4 address 30.1.1.2 255.255.255.0 ipv4 access-group permit-stats ingress!VerificationWe have not configured hw-module profile stats acl-permitRP/0/RP0/CPU0#5508-2-74142I-C#show running-config | in hw-module hw-module profile qos hqos-enableRP/0/RP0/CPU0#5508-2-74142I-C#show access-lists ipv4 permit-stats hardware ingress location 0/3/CPU0 ipv4 access-list permit-stats 10 permit ipv4 25.1.7.0 0.0.0.255 any (2290044 matches) 20 deny ipv4 26.1.7.0 0.0.0.255 anyRP/0/RP0/CPU0#5508-2-74142I-C#RP/0/RP0/CPU0#5508-2-74142I-C#The above output shows that Jericho2 based platforms, no longer need the hw-module profile for ingress stats. The stats are enabled by default.Ingress ACL on External TCAMPrior to IOS-XR release 7.2.1, traditional ingress IPv4/IPv6 ACLs were always programmed on the internal TCAM of a line card or fixed system be it a base or scale version. From IOS-XR 7.2.1, the programming of the ingress ACLs will be done on the external TCAM for the J2 based scale systems. Let us verify the same with an example. We have an access-list configured as below and applied in the ingress direction on interface of NC57-18DD-SEipv4 access-list permit-stats 10 permit ipv4 25.1.7.0 0.0.0.255 any 20 deny ipv4 26.1.7.0 0.0.0.255 any 30 permit ipv4 host 50.1.1.1 any 35 deny ipv4 62.6.69.128 0.0.0.15 any 40 deny ipv4 any 62.80.66.128 0.0.0.15 45 deny ipv4 62.80.66.128 0.0.0.15 any 50 deny ipv4 any 62.134.38.0 0.0.0.127 60 permit tcp any eq bgp host 1.2.3.1 70 permit tcp any host 1.2.3.1 eq bgp 80 deny ipv4 any host 1.2.3.1 90 deny ipv4 any 212.21.217.0 0.0.0.255 100 permit ipv4 any anyinterface FourHundredGigE0/3/0/21 cdp ipv4 address 30.1.1.2 255.255.255.0 ipv4 access-group permit-stats ingress!The above output shows that ACL programming has been done in the external TCAM. The interface belongs to NPU 1. It used the bank ID 15 and the database alloted for the ingress v4 ACL. It shows 15 entries per DB (12 ACEs plus 3 internal entries).Summary of TCAM UsageLet us summarise the TCAM usage for the ingress and egress ACLs w.r.t TCAMs used. System Traditional Ingress ACL Ingress ACL with UDK/UDF Egress ACL Hybrid Ingress ACL J2 with eTCAM External TCAM Internal TCAM Internal TCAM External and Internal TCAM J+ with eTCAM Internal TCAM Internal TCAM Internal TCAM External and Internal TCAM J with eTCAM Internal TCAM Internal TCAM Internal TCAM External and Internal TCAM J2 without eTCAM Internal TCAM Internal TCAM Internal TCAM Not Supported J+ without eTCAM Internal TCAM Internal TCAM Internal TCAM Not Supported J without eTCAM Internal TCAM Internal TCAM Internal TCAM Not Supported Note# This is applicable for both J2 Native and Compatible Mode.Advantage of programming ingress ACLs in external TCAMFrom the above table, we can see that when using a fixed system or Line card with external TCAM, the traditional ingress ACLs are programmed on external TCAM. 
The main advantage is that the resources on the internal TCAM can now be used for other features and statistics.Ingress and Egress Default TCAM KeysIn NCS5500 and NCS500, we have the concept of default TCAM keys and user-defined TCAM keys- UDK. For details on the two different key types please refer. Due to enhance capabilities of J2 chipset we have made changes to the default TCAM key support for IPv4 and IPv6 both in ingress and egress directions.IPV4 Ingress Default TCAM keys IPv4 Match Fields Support Comment Src Address Yes   Dst Address Yes   Source Port or ICMP Code/Type Yes   Destination Port Yes   TOS/DSCP Yes   Packet Length Yes Part of the default TCAM key from IOS-XR 7.2.1. Previously allowed only with UDK TCP Flags Yes   Fragments Yes Part of the default TCAM key from IOS-XR 7.2.1. Previously allowed only with UDK Fragment Offset Yes   Protocol Yes   IPV6 Ingress Default TCAM keys IPv6 Match Fields Support Comment Src Address Yes   Dst Address Yes   Source Port or ICMP Code/Type Yes   Destination Port Yes   Traffic Class (DSCP) Yes   Packet Length Yes Part of the default TCAM key from IOS-XR 7.2.1. Previously allowed only with UDK TCP Flags Yes   Next-header Yes   IPV4 Egress Default TCAM keys IPv4 Match Fields Support Comment Src Address Yes   Dst Address Yes   Source Port or ICMP Code/Type Yes   Destination Port Yes   TOS/DSCP Yes   Packet Length Yes Part of the default TCAM key from IOS-XR 7.4.1. Previously allowed only with UDK TCP Flags Yes   Fragments Yes   Protocol Yes   IPV6 Egress Default TCAM keys IPv6 Match Fields Support Comment Src Address Yes   Dst Address Yes Changed to 96 bits only Source Port or ICMP Code/Type Yes   Destination Port Yes   Traffic Class/DSCP Yes Part of the default TCAM key from IOS-XR 7.4.1. Previously allowed only with UDK Packet Length Yes Part of the default TCAM key from IOS-XR 7.4.1. Previously allowed only with UDK TCP Flags Yes Part of the default TCAM key from IOS-XR 7.4.1. Previously allowed only with UDK Fragments Yes Part of the default TCAM key from IOS-XR 7.4.1. Previously allowed only with UDK Next Header Yes   Note# This is applicable for both J2 Native and Compatible Mode.ScaleAs of IOS-XR 7.4.1, the scale on the J2 based platforms will be same as previous generations. There is a plan to increase the scale in future releases. Stay tuned !!!Thank YouSpecial thanks to Shruthi (shrucs@cisco.com) for her valuable inputs during the article.SummaryHope this article was helpful. We covered the new enhancements/support and the programming difference in the new platforms. In the upcoming articles we will cover the IPv6 egress ACL’s, ACL for IPv6 EH support, ACL support on BVI interface. We will also have a separate article on how to increase the scale. 
Stay tuned for the same !!!", "url": "/tutorials/access-list-enhancements-on-ncs5500-j2-based-platforms/", "author": "Tejas Lad", "tags": "iosxr, cisco, ACL, J2, data plane protection, access-list, NCS 5500, NCS 5700" } , "tutorials-acl-s-on-ncs5500-bvi-interfaces": { "title": "ACL's on NCS5500 BVI Interfaces", "content": " ACL's on NCS5500 and NCS500 BVI Interfaces Introduction Quick Recap# Bridged Virtual Interface - BVI BVI use cases ACLs with BVI interface Ingress V4 ACL BVI configurations Verification TCAM Programming for ingress ACLs on BVI interfaces TCAM entries on other LCs Ingress V6 ACL Verification TCAM entries on all the LCs Egress V4 ACL Verification TCAM Programming for egress ACLs on BVI interfaces TCAM entries on other LCs Failure of egress ACLs on non BVI interfaces Egress V6 ACL TCAM entries with multiple interfaces having ACLs Summary References IntroductionIn the previous article, we introduced the ACL enhancements on NCS5500 based on J2 chipsets. Now we will understand the ACL implementation on NCS5500 w.r.t to BVI interfaces. We will cover all the systems based on J/J+ and J2.Quick Recap# Bridged Virtual Interface - BVIBefore we move to the ACL features, let us do a quick recap of the BVI interface and understand its use cases. The BVI is a virtual interface within the router that acts like a normal routed interface. BVI provides link between the bridging and the routing domains on the router. The BVI does not support bridging itself, but acts as a gateway for the corresponding bridge-domain to a routed interface within the router. Bridge-Domain is a layer 2 broadcast domain. It is associated to a bridge group using the routed interface bvi command. ReferenceBVI use cases Interconnect bridged and routed networks Preserve network addresses Bridge local traffic for efficient network performanceBVI provides a much more flexible solution for bridging and routing Scenarios Supported by BVI Communication of multiple interfaces in same BD Yes Communication of multiple interfaces in different BD Yes Communication between Bridged interface and Routed interface Yes ACLs with BVI interfaceConfiguration and attachment of an ACL over BVI interface is similar to regular ACLs attachment to a physical interface. Let us check out the same.Ingress V4 ACL First we will start with Ingress IPv4 ACL. IPv4 ACL is supported in ingress direction for J/J+ and J2 (Native and Compatible mode).RP/0/RP0/CPU0#5508-2-74142I-C#show platformNode Type State Config state-------------------------------------------------------------------------------- 0/3/CPU0 NC57-18DD-SE IOS XR RUN NSHUT0/3/NPU0 Slice UP 0/3/NPU1 Slice UP 0/4/CPU0 NC55-36X100G-A-SE IOS XR RUN NSHUT0/4/NPU0 Slice UP 0/4/NPU1 Slice UP 0/4/NPU2 Slice UP 0/4/NPU3 Slice UP 0/7/CPU0 NC55-24X100G-SE IOS XR RUN NSHUT0/7/NPU0 Slice UP 0/7/NPU1 Slice UP 0/7/NPU2 Slice UP 0/7/NPU3 Slice UP Note# Output is truncatedWe will consider 3 Line cards. 
Line Card No of NPU’s NC57-18DD-SE 2xJ2 NC55-36X100G-A-SE 4xJ+ NC55-24X100G-SE 4xJ We have ingress IPv4 ACL configured as below and applied to BVI interfaceipv4 access-list permit-stats 10 permit ipv4 25.1.7.0 0.0.0.255 any 20 deny ipv4 26.1.7.0 0.0.0.255 any 30 permit ipv4 host 50.1.1.1 any 35 deny ipv4 62.6.69.128 0.0.0.15 any 40 deny ipv4 any 62.80.66.128 0.0.0.15 45 deny ipv4 62.80.66.128 0.0.0.15 any 50 deny ipv4 any 62.134.38.0 0.0.0.127 60 permit tcp any eq bgp host 1.2.3.1 70 permit tcp any host 1.2.3.1 eq bgp 80 deny ipv4 any host 1.2.3.1 90 deny ipv4 any 212.21.217.0 0.0.0.255 100 permit ipv4 any any RP/0/RP0/CPU0#5508-2-74142I-C#show running-config interface bvI 21 interface BVI21 ipv4 access-group permit-stats ingress!BVI configurationsinterface FourHundredGigE0/3/0/21 l2transport !!l2vpn bridge group BVI bridge-domain 21 interface FourHundredGigE0/3/0/21 ! routed interface BVI21 ! !VerificationRP/0/RP0/CPU0#5508-2-74142I-C#show access-lists ipv4 usage pfilter location allInterface # BVI21Input ACL # Common-ACL # N/A ACL # permit-statsOutput ACL # N/ATCAM Programming for ingress ACLs on BVI interfacesBVI interfaces are designed in such a way that every feature attachment affects all NPUs on the Line Card. TCAM entries are always programmed across all LCs, regardless of interface membership. This is platform independent and should behave the same across all XR platforms. When it comes to the individual line card level, TCAM entries are programmed across all NPUs on the particular line card, regardless of interface membership.To understand this behaviour, we need to recap what we discussed in the earlier section w.r.t BVI. BVI has no mapping to a particular Physical port / LC. BVI is rather an entity of Bridge Domain. In XR, BVI has been defined globally just as a Bridge Domain. Hence it is present on all LCs. Another important consideration for this implementation is modification of the configurations. If we want to add or remove BVI entries in each LC dynamically - based on the presence of a local AC (from that LC) in the given BD it has a lot of overhead. 
Hence we allocate resources on all the NPU’s of the LCs and also from PI perspective we replicate that on all LCs.RP/0/RP0/CPU0#5508-2-74142I-C#show controllers npu externaltcam location 0/3/CPU0 External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 15 320b 4081 15 2725 ext_FG_INGR_V4_ACL1 15 320b 4081 15 2725 ext_FG_INGR_V4_ACLRP/0/RP0/CPU0#5508-2-74142I-C#Note# Output is truncatedTCAM entries on other LCsRP/0/RP0/CPU0#5508-2-74142I-C#show controllers npu internaltcam location 0/4/CPU0 Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV41 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV42 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV43 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV4RP/0/RP0/CPU0#5508-2-74142I-C#Note# Output is truncatedRP/0/RP0/CPU0#5508-2-74142I-C#show controllers npu internaltcam location 0/7/CPU0 Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV41 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV42 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV43 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV4RP/0/RP0/CPU0#5508-2-74142I-C#Note# Output is truncatedFrom the above outputs, we could see the replication is happening on all the LCs in the system and all the NPUs on each LC. Note that the programming of the ingress traditional ACLs is happening on the external TCAM for a LC based on J2 (same as normal L3 interface) and on the internal TCAM for LCs based on J/J+. 
TCAM resources are used even if the LCs are not part of that particular BVI or BD.Ingress V6 ACLLet us verify the behaviour with the IPv6 Ingress ACLs.RP/0/RP0/CPU0#5508-2-74142I-C#show access-lists ipv6 ipv6_1 ipv6 access-list ipv6_1 10 permit tcp 2001#1#2##/64 any eq 1024 20 permit tcp 2002#1#2##/64 any eq 1024 30 permit tcp 2003#1#2##/64 any eq 1024 40 permit tcp 2004#1#2##/64 any eq 1024 50 deny udp 2001#4#5##/64 any lt 1000 60 permit ipv6 any anyVerificationRP/0/RP0/CPU0#5508-2-74142I-C#show running-config interface bvI 21interface BVI21 ipv6 access-group ipv6_1 ingress!TCAM entries on all the LCsRP/0/RP0/CPU0#5508-2-74142I-C#show controllers npu externaltcam location 0/3/CPU0 External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 12 480b 1998 50 2722 ext_FG_INGR_V6_ACL1 12 480b 1998 50 2722 ext_FG_INGR_V6_ACLRP/0/RP0/CPU0#5508-2-74142I-C#RP/0/RP0/CPU0#5508-2-74142I-C#show controllers npu internaltcam location 0/4/CPU0 Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 6\\7 320b pmf-0 2025 23 48 INGRESS_ACL_L3_IPV61 6\\7 320b pmf-0 2025 23 48 INGRESS_ACL_L3_IPV62 6\\7 320b pmf-0 2025 23 48 INGRESS_ACL_L3_IPV63 6\\7 320b pmf-0 2025 23 48 INGRESS_ACL_L3_IPV6 RP/0/RP0/CPU0#5508-2-74142I-C#RP/0/RP0/CPU0#5508-2-74142I-C#show controllers npu internaltcam location 0/7/CPU0 Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 6\\7 320b pmf-0 2025 23 48 INGRESS_ACL_L3_IPV61 6\\7 320b pmf-0 2025 23 48 INGRESS_ACL_L3_IPV62 6\\7 320b pmf-0 2025 23 48 INGRESS_ACL_L3_IPV63 6\\7 320b pmf-0 2025 23 48 INGRESS_ACL_L3_IPV6 RP/0/RP0/CPU0#5508-2-74142I-C#Note# Output is truncatedWe see the same behaviour in terms of programming and TCAM resource utilization when it comes to IPv6 ingress ACLs as well. IPv6 ACL is supported in ingress direction for J/J+ and J2 (Native and Compatible mode)Egress V4 ACLBy default, Egress ACLs over BVI interface is disabled. ACL filtering will not take effect even after it is attached to BVI interfaces. To enable ACL over BVI in the egress direction, hw-module profile acl egress layer3 interface-based should be configured. When we enable the above profile, egress ACLs on any non BVI interface will not work. This limitation is because, we have different qualifiers for physical interfaces and BVI interfaces. To accomodate both qualifiers, we need to increase the key size and we need more memory banks. IPv4 ACL is supported in Egress direction for J/J+ and J2 (Native and Compatible mode).Similar to the ingress ACLs, TCAM entries are always programmed across all LCs, regardless of interface membership. When it comes to the individual line card level, TCAM entries are programmed across all NPUs on the particular line card, regardless of interface membership. 
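Putting the two pieces together, here is a minimal sketch of enabling egress ACL support on a BVI, reusing the profile command and the ACL/interface names already used in this article.
hw-module profile acl egress layer3 interface-based
!
interface BVI21
 ipv4 access-group permit-stats egress
!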
In order to activate/deactivate Egress ACL support over BVI interfaces, you must manually reload the chassis/all line cards.VerificationRP/0/RP0/CPU0#5508-2-741C#show running-config | in hw-module hw-module profile acl egress layer3 interface-basedRP/0/RP0/CPU0#5508-2-741C#show running-config interface bvI 21interface BVI21 ipv4 access-group permit-stats egressRP/0/RP0/CPU0#5508-2-741C#show access-lists ipv4 usage pfilter location all Interface # BVI21 Input ACL # N/A Output ACL # permit-statsTCAM Programming for egress ACLs on BVI interfacesRP/0/RP0/CPU0#5508-2-741C#show controllers npu internaltcam location 0/3/CPU0 Mon Sep 13 05#13#38.674 PDTInternal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 5 160b EPMF 2031 15 42 EGRESS_ACL_IPV41 5 160b EPMF 2031 15 42 EGRESS_ACL_IPV4RP/0/RP0/CPU0#5508-2-741C#Note# Output is truncatedTCAM entries on other LCsRP/0/RP0/CPU0#5508-2-741C#show controllers npu internaltcam location 0/7/CPU0 Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b egress_acl 2016 15 30 EGRESS_ACL_IPV41 0 160b egress_acl 2016 15 30 EGRESS_ACL_IPV42 0 160b egress_acl 2016 15 30 EGRESS_ACL_IPV4 3 0 160b egress_acl 2016 15 30 EGRESS_ACL_IPV4 RP/0/RP0/CPU0#5508-2-741C#Note# Output is truncatedRP/0/RP0/CPU0#5508-2-741C#show controllers npu internaltcam location 0/4/CPU0 Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b egress_acl 2016 15 30 EGRESS_ACL_IPV41 0 160b egress_acl 2016 15 30 EGRESS_ACL_IPV42 0 160b egress_acl 2016 15 30 EGRESS_ACL_IPV4 3 0 160b egress_acl 2016 15 30 EGRESS_ACL_IPV4 RP/0/RP0/CPU0#5508-2-741C#Note# Output is truncatedFailure of egress ACLs on non BVI interfacesRP/0/RP0/CPU0#5508-2-741C(config)#interface bundle-ether 30RP/0/RP0/CPU0#5508-2-741C(config-if)#ipv4 access-group permit-stats egress RP/0/RP0/CPU0#5508-2-741C(config-if)#commit % Failed to commit one or more configuration items during a pseudo-atomic operation. All changes made have been reverted. Please issue 'show configuration failed [inheritance]' from this session to view the errorsRP/0/RP0/CPU0#5508-2-741C(config-if)#show configuration failed Mon Sep 13 04#50#14.667 PDT!! SEMANTIC ERRORS# This configuration was rejected by !! the system due to semantic errors. The individual !! errors with each failed configuration command can be !! found below.interface Bundle-Ether30 ipv4 access-group permit-stats egress!!% 'DPA' detected the 'warning' condition 'SDK - Invalid configuration'!endFrom the above, we can see that non BVI interface will not allow the egress IPv4 ACL when hw-module profile acl egress layer3 interface-based is configured.Egress V6 ACLTill now we could see, the TCAM utilizations of the v4 ACLs in ingress and egress is same for platforms based on J/J+ and J2. It was same for the ingress v6 ACLs as well. The main difference is in the implementation of the Egress V6 ACL. Egress IPv6 ACLs on BVI interfaces is not supported on platforms based on J/J+. This is due to the absence of the qualifiers needed for the egress support. 
It is supported on the J2 based platforms only in the Native mode. When IPv6 Egress ACL is configured, all non-J2 cards reject it. Therefore when implementing the egress IPv6 ACLs on BVI interfaces, the chassis should not be operating in compatible mode.VerificationLet us see what happens when we apply the IPv6 Egress ACL on BVI interface for a J2 based Line card in a chassis operating in compatible mode.RP/0/RP0/CPU0#5508-2-741C(config)#interface bvI 21RP/0/RP0/CPU0#5508-2-741C(config-if)#ipv6 access-group ipv6_1 egress RP/0/RP0/CPU0#5508-2-741C(config-if)#commit % Failed to commit one or more configuration items during a pseudo-atomic operation. All changes made have been reverted. Please issue 'show configuration failed [inheritance]' from this session to view the errorsRP/0/RP0/CPU0#5508-2-741C(config-if)#show configuration failed !! SEMANTIC ERRORS# This configuration was rejected by !! the system due to semantic errors. The individual !! errors with each failed configuration command can be !! found below.interface BVI21 ipv6 access-group ipv6_1 egress!!% 'pfilter-ea' detected the 'warning' condition 'Egress ACL not supported on this interface type.'!endRP/0/RP0/CPU0#5508-2-741C(config-if)#We can see the configuration is rejected. We have system based on J2 Native Mode. Let us try configuring it the IPv6 ACL in the egress direction on the BVI interfaceRP/0/RP0/CPU0#NC57B1-5DSE-1-Vega-II5-5#show platform Node Type State Config state--------------------------------------------------------------------------------0/RP0/CPU0 NCS-57B1-5DSE-SYS(Active) IOS XR RUN NSHUT0/PM0 PSU2KW-ACPI OPERATIONAL NSHUT0/PM1 PSU2KW-ACPI OPERATIONAL NSHUT0/FT0 N5700-FAN OPERATIONAL NSHUT0/FT1 N5700-FAN OPERATIONAL NSHUT0/FT2 N5700-FAN OPERATIONAL NSHUT0/FT3 N5700-FAN OPERATIONAL NSHUT0/FT4 N5700-FAN OPERATIONAL NSHUT0/FT5 N5700-FAN OPERATIONAL NSHUTRP/0/RP0/CPU0#NC57B1-5DSE-1-Vega-II5-5#interface BVI1 ipv6 access-group ipv6_1 egress!RP/0/RP0/CPU0#NC57B1-5DSE-1-Vega-II5-5#show access-lists ipv6 usage pfilter lo$Interface # BVI1 Input ACL # N/A Output ACL # ipv6_1RP/0/RP0/CPU0#NC57B1-5DSE-1-Vega-II5-5#RP/0/RP0/CPU0#NC57B1-5DSE-1-Vega-II5-5#show controllers npu internaltcam locat$Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 4\\5 320b EPMF 2030 18 35 EGRESS_ACL_IPV6RP/0/RP0/CPU0#NC57B1-5DSE-1-Vega-II5-5#From the above output, we can see that Egress IPv6 ACL on BVI interface got accepted on the system with J2 Native Mode.TCAM entries with multiple interfaces having ACLs For ingress ACLs, TCAM entries can be shared between different interfaces in case of same ACL. For egress ACLs, TCAM entries are unique per interface, even for the same ACL. 
Let us verify the same.Ingressinterface BVI21 ipv4 access-group permit-stats ingress!interface BVI36 ipv4 access-group permit-stats ingress!RP/0/RP0/CPU0#5508-2-741C#RP/0/RP0/CPU0#5508-2-741C#show access-lists ipv4 usage pfilter location all Interface # BVI21 Input ACL # Common-ACL # N/A ACL # permit-stats Output ACL # N/AInterface # BVI36 Input ACL # Common-ACL # N/A ACL # permit-stats Output ACL # N/ARP/0/RP0/CPU0#5508-2-741C#RP/0/RP0/CPU0#5508-2-741C#show controllers npu externaltcam location 0/3/CPU0 External TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 15 320b 4081 15 2725 ext_FG_INGR_V4_ACL1 15 320b 4081 15 2725 ext_FG_INGR_V4_ACLRP/0/RP0/CPU0#5508-2-741C#Note# Output is truncatedRP/0/RP0/CPU0#5508-2-741C#show controllers npu internaltcam location 0/4/CPU0 Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV41 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV42 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV43 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV4 RP/0/RP0/CPU0#5508-2-741C#Note# Output is truncatedRP/0/RP0/CPU0#5508-2-741C#show controllers npu internaltcam location 0/7/CPU0 Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV41 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV42 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV43 2 160b pmf-0 2023 16 47 INGRESS_ACL_L3_IPV4 RP/0/RP0/CPU0#5508-2-741C#Note# Output is truncatedEgressinterface BVI21 ipv4 access-group permit-stats egress!interface BVI36 ipv4 access-group permit-stats egress!RP/0/RP0/CPU0#5508-2-741C#show access-lists ipv4 usage pfilter location all Interface # BVI21 Input ACL # N/A Output ACL # permit-stats Interface # BVI36 Input ACL # N/A Output ACL # permit-stats RP/0/RP0/CPU0#5508-2-741C#RP/0/RP0/CPU0#5508-2-741C#show controllers npu internaltcam location 0/3/CPU0 Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 5 160b EPMF 2016 30 42 EGRESS_ACL_IPV41 5 160b EPMF 2016 30 42 EGRESS_ACL_IPV4RP/0/RP0/CPU0#5508-2-741C#RP/0/RP0/CPU0#5508-2-741C#show controllers npu internaltcam location 0/4/CPU0 Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b egress_acl 2001 30 30 EGRESS_ACL_IPV41 0 160b egress_acl 2001 30 30 EGRESS_ACL_IPV42 0 160b egress_acl 2001 30 30 EGRESS_ACL_IPV43 0 160b egress_acl 2001 30 30 EGRESS_ACL_IPV4RP/0/RP0/CPU0#5508-2-741C#RP/0/RP0/CPU0#5508-2-741C#show controllers npu internaltcam location 0/7/CPU0 Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b egress_acl 2001 30 30 EGRESS_ACL_IPV41 0 160b 
egress_acl 2001 30 30 EGRESS_ACL_IPV42 0 160b egress_acl 2001 30 30 EGRESS_ACL_IPV43 0 160b egress_acl 2001 30 30 EGRESS_ACL_IPV4RP/0/RP0/CPU0#5508-2-741C#From the above outputs we can see that for egress ACLs on the BVI interface we do not share the TCAM entries. They are unique per interface. Whereas in the ingress direction, we share the TCAM entries if we apply the same ACL.SummarySummarizing the ACL support on BVI interfaces. ACL Direction J J+ J2 Compatible J2 Native Ingress v4 ACL Yes Yes Yes Yes Ingress v6 ACL Yes Yes Yes Yes Egress v4 ACL Yes Yes Yes Yes Egress v6 ACL No No No Yes We saw the support and programming of the ingress and egress ACLs on the BVI interfaces along with the TCAM resource utilization. In the next article we will explore the v6 ACLs in more details and its implementation across the chipsets. So stay tuned !!!References BVI Details CCO Config Guide", "url": "/tutorials/acl-s-on-ncs5500-bvi-interfaces/", "author": "Tejas Lad", "tags": "iosxr, NCS5500, NCS500, ACL, Access List, BVI" } , "tutorials-iosxr-741-innovations": { "title": "IOS XR 7.4.1 Innovations in NCS5500/NCS5700/NCS500 Platforms", "content": " XR 7.4.1 New Features and Hardware for DNX Platforms Introduction Software Images XR 64-bit XR7 New Features in IOS XR 7.4.1 ACL Security QoS BFD Scale improvements Segment Routing Misc New hardware in IOS XR 7.4.1 Introducing NCS57C3-MOD Deeper dive in Cisco NCS57C3 Introducing the new QSFP-DD MPA on NCS5500 Routers Newest Cisco Router Unboxed (NCS57C3-MOD Installation) New NCS560 IMA and 4x10G support .IntroductionIOS XR 7.4.1 has been published in August 2021 and is an ED version for many XR platforms including# NCS5500 NCS5700 NCS540 NCS560Release notes# NCS540# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5xx/release-notes/74x/b-release-notes-ncs540-r741.html NCS560# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs560/release-notes/74x/b-release-notes-ncs560-r741.html NCS5500# https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/general/74x/release/notes/b-release-notes-ncs5500-r741.htmlSoftware download for NCS5500#https#//software.cisco.com/download/home/286291132/typeNote# two flavors of IOSXR are now available for NCS5500/NCS5700 platformsSoftware ImagesXR 64-bitIn this section of the download center, you’ll find software images for all platforms, fixed or modular running IOS XR 64-bit, with the particular exception of the NCS-57B1 platforms.XR7In current release, one single platform type in the NCS5500/NCS5700 series is running the new version of IOS XR named “XR7” (“LNT” in the show commands). That’s the NCS57B1-6D24 and NCS57B1-5DSE. And image can be found in this section of the download center (under 5700)#Note# Even the latest platform NCS57C3-MOD is running XR 64-bit and not XR7.Please refer to the release notes and installation guide to get details on the upgrade process.New Features in IOS XR 7.4.1We asked a few colleagues to help documenting the new improvements brought by the 7.3.1 version. It will be split in software innovations on one side (Segment Routing, SRv6, EVPN, Multicast, QoS, Security, …) and new supported hardware (chassis, power supply, new line cards and new fixed-form-factor products).ACLTejas Lad demonstrates the improvements brought by J2 ASIC for egress ACL and statistics..With Jericho2 platforms, a series of limitations present in Jericho/Jericho+ have been lifted, for example# We don’t need to use a specific hardware module profile configuration to allocate counters for ACL permit packets. 
“hw-module profile stats acl-permit” is not used anymore. In Jericho and Qumran-MX platforms with NL12k eTCAM, it could be necessary to recarve the resource with “hw-module profile tcam acl-prefix percent <0-100>”. It’s all dynamic on Jericho+ with OP and Jericho2 with OP2 eTCAMs, no config is needed We don’t recycle packets for ACLv6 in egress direction like it was the case in Jericho and Jericho+ platforms, the packet treatment with J2 is done in one pass. We support egress ACLv6 on BVI interfaces, something not supported on J/J+ platforms.Tejas also published another article here with plenty of details#https#//xrdocs.io/ncs5500/tutorials/access-list-enhancements-on-ncs5500-j2-based-platforms/In IOS XR 7.4.1, we complete this list of improvements with more fields. Match Parameter IPv4 support pre-741 IPv4 support in 741 IPv6 support pre-741 IPv6 support in 741 Source Address Yes Yes Yes Yes Destination Address Yes Yes Yes Yes Source Port Yes Yes Yes Yes Destination Port Yes Yes Yes Yes Protocol/ Next Header Yes Yes Yes Yes Precedence/DSCP Yes Yes No Yes Packet Length No Yes No Yes TCP control Flags Yes Yes No Yes Fragment bit Yes Yes No Yes Extended Header N/A N/A No Yes Destination System port Yes Yes Yes Yes SecurityRakesh Kandula presented new security feature# Chip Guard on NCS540..Let’s first describe two essential components into every IOS XR router# TAm# the Trust Anchor module chip integrated in all recent device Cisco Secure Boot# the chain ensuring the integrity of the entire boot processAlso these components are not enough if the hardware itself has been replaced by malware infected parts (CPU or NPU). And that’s the purpose of this new feature# Chip Guard introduced now in our access platforms# NCS540.Chip Guard feature is triggered during the BIOS part of the boot sequence.During manufacturing# the SHA-256 hash of the Electronic Chip ID (ECID) of the CPU and NPU are calculated These hashes are then programmed inside the TAm chip The programmed hash values form the ImprintDB inside the TAm chip The ImprintDB cannot be modified during runtimeWhen the router boots# BIOS reads the ECID of the chips and computes their hashes Each of the hashes is then extended into a PCR inside TAm chip These set of observed hashes forms the ObserveDB BIOS fetches the factory programmed hash values from imprintDB The hash values are compared with the ObserveDB generated in the previous step BIOS continues with boot process if and only if the hashes matchQoSPaban Sarma introduces PPS-based policer for DNX platforms (NCS500 / NCS5500)..Before the introduction of this feature, we used absolute values in bps for policer rate and burst in bytes or unit of time#policy-map police-bps class class-default police rate 10 mpbs burst 12 kbytes ! ! end-policy-map!Or we can use relative units (percent)#policy-map police-percent class class-default police rate percent 1 burst 2 ms ! ! end-policy-map!Starting in IOS XR 7.4.1, we can use absolute units of PPS and burst expressed in number of packets.policy-map police-pps class class-default police rate 1000 pps burst 2000 packets ! ! 
end-policy-map!RP/0/RP0/CPU0#NCS-55A1-24Q6H-2#show qos interface tenGigE 0/0/0/1.1 inputNOTE#- Configured values are displayed within parenthesesInterface TenGigE0/0/0/1.1 ifh 0x8022 -- input policyNPU Id# 0Total number of classes# 1Interface Bandwidth# 10000000 kbpsPolicy Name# police-ppsSPI Id# 0x0Accounting Type# Layer2 (Include Layer 2 encapsulation and above)------------------------------------------------------------------------------Level1 Class = class-defaultPolicer Bucket ID = 0x100Policer Stats Handle = 0x0Policer committed rate = 998 kbps (1000 packets/sec)Policer conform burst = 256000 bytes (2000 packets)With this approach, the policer behaves regardless of the size of the packets handled by the policy.Min and max configurable values are 100 pps and 66,000,000 pps respectively.Burst size to be defined in packets PPS and BPS may coexist in a Flat Policy. PPS and BPS can’t coexist in an Hierarchical policy.BFDTejas Lad presents the BoB-BLB co-existence benefits..In IOS XR 7.4.1, we introduce the action of both BoB and BLB simultaneously on the same bundle for faster convergence. All technical details can be found in this dedicated article#https#//xrdocs.io/ncs5500/tutorials/coexistence-between-bfd-over-bundle-and-bfd-over-logical-bundle/Scale improvementsNicolas Fevrier presents the scale improvements directly available in J2 native mode for BGP Flowspec, ARP scale and ECMP FEC.Coming soonSegment RoutingJose Liste describes in details the new features introduced in SR MPLS world.Coming soonMiscComing soonNew hardware in IOS XR 7.4.1Introducing NCS57C3-MOD.https#//www.youtube.com/watch?v=ARKLok7dj-wDeeper dive in Cisco NCS57C3.https#//www.youtube.com/watch?v=MV2hNv4xn6QIntroducing the new QSFP-DD MPA on NCS5500 Routers.https#//www.youtube.com/watch?v=6Ksv8oqhBk0Newest Cisco Router Unboxed (NCS57C3-MOD Installation).https#//www.youtube.com/watch?v=sma8sQwSbfkNew NCS560 IMA and 4x10G supportPaban Sarma introduces the new IMA and the partial 4x10G mode for NCS560#.IOS XR 7.4.1 is used as the vehicule to introduce the support of a new IMA for NCS560 series (4-slot or 7-slot)# N560-IMA-N560-IMA-8Q/4LCisco NCS 560 Series Routers Interface Modules Data Sheet#https#//www.cisco.com/c/en/us/products/collateral/routers/network-convergence-system-560-series-routers/datasheet-c78-740295.html8 ports split in two “QUADs” (groups of 4 contiguous ports)# QUAD-1# port 0-3 QUAD-2# port 4-7You can configure these QUAD in a specific mode (by default, it’s 25G) and the four ports of the group will be configured in the same mode# all four ports 10GE all four ports 25GE one port on two at 50GSo, it’s possible to use these ports in various combinations# 8x 10G 4x 10G + 4x 25G (or vice versa) 4x 10G + 2x 50G (or vice versa) 4x 25G + 2x 50G (or vice versa) 8x 25G 4x 50GNote1# that configuring a port in 50G mode is only supported in the even-numbered port and it disables the N+1 port.Note2# in 10G mode, we don’t support 1G insertion eitherIn this example below, we have the IMA inserted in slot 7 and slot 9 and no QUAD configuration, all the ports are 25GE by default.RP/0/RP1/CPU0#N560-7#show versionWed Aug 11 05#36#26.011 UTCCisco IOS XR Software, Version 7.4.1Copyright (c) 2013-2021 by Cisco Systems, Inc.Build Information# Built By # ingunawa Built On # Thu Feb 25 19#40#08 PST 2021 Built Host # iox-ucs-024 Workspace # /auto/srcarchive17/prod/7.4.1/ncs560/ws Version # 7.4.1 Location # /opt/cisco/XR/packages/ Label # 7.4.1cisco NCS-560 () processorSystem uptime is 0 weeks 4 days 12 hours 44 
minutesRP/0/RP1/CPU0#N560-7#show platform Wed Aug 11 05#39#29.011 UTCNode              Type                       State             Config state--------------------------------------------------------------------------------0/0/CPU0          A900-IMA8CS1Z-M            OPERATIONAL       NSHUT0/1/CPU0          A900-IMA8CS1Z-M            OPERATIONAL       NSHUT0/2/CPU0          A900-IMA8CS1Z-M            OPERATIONAL       NSHUT0/4/CPU0          A900-IMA8Z                 OPERATIONAL       NSHUT0/5/CPU0          A900-IMA8Z                 OPERATIONAL       NSHUT0/7/CPU0          N560-IMA-8Q/4L             OPERATIONAL       NSHUT0/9/CPU0          N560-IMA-8Q/4L             OPERATIONAL       NSHUT0/10/CPU0         A900-IMA8Z                 OPERATIONAL       NSHUT0/11/CPU0         A900-IMA8Z-L               OPERATIONAL       NSHUT0/12/CPU0         A900-IMA8Z                 OPERATIONAL       NSHUT0/13/CPU0         A900-IMA8Z-L               OPERATIONAL       NSHUT0/RP0/CPU0        N560-RSP4-E(Standby)       IOS XR RUN        NSHUT0/RP1/CPU0        N560-RSP4-E(Active)        IOS XR RUN        NSHUT0/FT0/CPU0        N560-FAN-H                 OPERATIONAL       NSHUT0/PM2/CPU0        A900-PWR1200-A             OPERATIONAL       NSHUTRP/0/RP1/CPU0#N560-7#RP/0/RP1/CPU0#N560-7#show ipv4 int brief | inc 0/9/0/Wed Aug 11 05#42#29.011 UTCTwentyFiveGigE0/9/0/0          unassigned      Up              Up       default TwentyFiveGigE0/9/0/1          120.0.1.1       Up              Up       vpn1    TwentyFiveGigE0/9/0/2          120.0.5.1       Up              Up       vpn5    TwentyFiveGigE0/9/0/3          120.0.5.2       Up              Up       vpn6    TwentyFiveGigE0/9/0/4          120.0.9.2       Up              Up       vpn10   TwentyFiveGigE0/9/0/5          120.0.9.1       Up              Up       vpn9    TwentyFiveGigE0/9/0/6          120.0.1.2       Up              Up       vpn2    TwentyFiveGigE0/9/0/7          120.0.0.1       Up              Up       vpn1If you configure the QUAD 1 in slot 9 to 10GE, we’ll have the first 4 ports in TenGigE and the last 4 ports in TwentyFiveGigE#RP/0/RP1/CPU0#N560-7#confWed Aug 11 05#44#29.011 UTCRP/0/RP0/CPU0#ios(config)#hw-module quad ?                  1-2  configure quad propertiesRP/0/RP0/CPU0#ios(config)#hw-module quad 1 slot ?  0-15  configure slot propertiesRP/0/RP0/CPU0#ios(config)#hw-module quad 1 slot 9 ?  mode  select mode 10g or 25g or 50g for a quad(group of 4 ports).  cr  RP/0/RP0/CPU0#ios(config)#hw-module quad 1 slot 9 mode 10g ?  cr  RP/0/RP0/CPU0#ios(config)#hw-module quad 1 slot 9 mode 10g RP/0/RP0/CPU0#ios(config)#commitRP/0/RP0/CPU0#ios(config)#exitRP/0/RP1/CPU0#N560-7#RP/0/RP1/CPU0#N560-7#show ipv4 int brief | inc 0/9/0/ Wed Aug 11 05#49#32.015 UTC TenGigE0/9/0/0                 unassigned      Down            Down     default  TenGigE0/9/0/1                 unassigned      Down            Down     default  TenGigE0/9/0/2                 unassigned      Down            Down     default  TenGigE0/9/0/3                 unassigned      Down            Down     default  TwentyFiveGigE0/9/0/4          120.0.9.2       Up              Up       vpn10    TwentyFiveGigE0/9/0/5          120.0.9.1       Up              Up       vpn9     TwentyFiveGigE0/9/0/6          120.0.1.2       Down            Down     vpn2     TwentyFiveGigE0/9/0/7          120.0.0.1       Up              Up       vpn1   RP/0/RP1/CPU0#N560-7#Now, we configure slot 7 QUAD-1 in 10G mode and QUAD-2 in 50G mode. 
Logically, we will see 4 ports TenGigE in 0-3 and two ports FiftyGigE in 4 and 6#RP/0/RP1/CPU0#N560-7#show run | inc ~hw-module|mode~                  Wed Aug 11 05#52#32.023 UTCBuilding configuration...hw-module quad 1 slot 7 mode 10ghw-module quad 2 slot 7 mode 50hw-module quad 1 slot 9  mode 10gRP/0/RP1/CPU0#N560-7#RP/0/RP1/CPU0#N560-7#show ipv4 int brief | inc 0/7/0/                 Wed Aug 11 05#54#12.009 UTCTenGigE0/7/0/0                 unassigned      Down            Down     default TenGigE0/7/0/1                 120.0.7.2       Up              Up       vpn8    TenGigE0/7/0/2                 120.0.3.2       Up              Up       vpn4    TenGigE0/7/0/3                 120.0.4.1       Up              Up       vpn4    FiftyGigE0/7/0/4               unassigned      down            down     default   FiftyGigE0/7/0/6               unassigned      down            down     default   RP/0/RP1/CPU0#N560-7#RP/0/RP1/CPU0#N560-7#show inventory location 0/7NAME# ~0/7~, DESCR# ~Cisco NCS 560 8-port 25G Interface Module, SFP+/SFP28 optics~ PID# N560-IMA-8Q/4L    , VID# V00, SN# xxxxxxxxxxx   NAME# ~TenGigE0/7/0/1~, DESCR# ~Cisco SFP+ 10G SR Pluggable Optics Module~ PID# SFP-10G-SR-S      , VID# V01, SN# xxxxxxxxxxx   NAME# ~TenGigE0/7/0/2~, DESCR# ~Cisco SFP+ 10G CWDM 1610nm Pluggable Optics Module~ PID# CWDM-SFP10G-1610  , VID# V01, SN# xxxxxxxxxxx   NAME# ~TenGigE0/7/0/3~, DESCR# ~Cisco SFP+ 10G BXU-I Pluggable Optics Module~ PID# SFP-10G-BXU-I     , VID# V01, SN# xxxxxxxxxxx   NAME# ~FiftyGigE0/7/0/4~, DESCR# ~Unknown Pluggable Optics Module~ PID# N/A               , VID# N/A, SN# xxxxxxx   NAME# ~FiftyGigE0/7/0/6~, DESCR# ~Unknown Pluggable Optics Module~ PID# N/A               , VID# N/A, SN# xxxxxxx RP/0/RP1/CPU0#N560-7#Also introduced in IOS XR 7.4.1, the support of “Partial” 4x10G to support the insertion of N560-IMA-8Z(-L) in 40Gbps slots.Before 7.4.1, these IMA were only supported in the 80+Gbps slots# 4, 5, 7, 9, 10 and 11#Now, it’s possible to insert the IMA in 40Gbps slots (2, 3, 12 and 13) and support half of the ports (and which ports will be disabled is dependent on the type of IMA)#In this example below, we inserted an IMA-8Z in slot 12 and an IMA-8Z-L in slot 13 (both 40Gbps). 
You can see the first ports are enabled for the 8Z and the last ports are enabled for the 8Z-L#RP/0/RP1/CPU0#N560-7#show platform Wed Aug 11 05#39#29.011 UTCNode              Type                       State             Config state--------------------------------------------------------------------------------0/0/CPU0          A900-IMA8CS1Z-M            OPERATIONAL       NSHUT0/1/CPU0          A900-IMA8CS1Z-M            OPERATIONAL       NSHUT0/2/CPU0          A900-IMA8CS1Z-M            OPERATIONAL       NSHUT0/4/CPU0          A900-IMA8Z                 OPERATIONAL       NSHUT0/5/CPU0          A900-IMA8Z                 OPERATIONAL       NSHUT0/7/CPU0          N560-IMA-8Q/4L             OPERATIONAL       NSHUT0/9/CPU0          N560-IMA-8Q/4L             OPERATIONAL       NSHUT0/10/CPU0         A900-IMA8Z                 OPERATIONAL       NSHUT0/11/CPU0         A900-IMA8Z-L               OPERATIONAL       NSHUT0/12/CPU0         A900-IMA8Z                 OPERATIONAL       NSHUT0/13/CPU0         A900-IMA8Z-L               OPERATIONAL       NSHUT0/RP0/CPU0        N560-RSP4-E(Standby)       IOS XR RUN        NSHUT0/RP1/CPU0        N560-RSP4-E(Active)        IOS XR RUN        NSHUT0/FT0/CPU0        N560-FAN-H                 OPERATIONAL       NSHUT0/PM2/CPU0        A900-PWR1200-A             OPERATIONAL       NSHUTRP/0/RP1/CPU0#N560-7#RP/0/RP0/CPU0#ios#show ipv4 int brief | inc 0/12/0Wed Aug 11 05#42#26.011 UTCInterface                      IP-Address      Status          Protocol Vrf-NameTenGigE0/12/0/0                unassigned      Shutdown        Down     default TenGigE0/12/0/1                unassigned      Down            Down     default TenGigE0/12/0/2                unassigned      Down            Down     default TenGigE0/12/0/3                unassigned      Down            Down     default RP/0/RP0/CPU0#ios#show ipv4 int brief | inc 0/13/0Wed Aug 11 05#43#12.011 UTC Interface                      IP-Address      Status          Protocol Vrf-NameTenGigE0/13/0/4                unassigned      Shutdown        Down     default TenGigE0/13/0/5                unassigned      Shutdown        Down     default TenGigE0/13/0/6                unassigned      Shutdown        Down     default TenGigE0/13/0/7                unassigned      Shutdown        Down     default RP/0/RP1/CPU0#N560-7#", "url": "/tutorials/iosxr-741-innovations/", "author": "Nicolas Fevrier", "tags": "" } , "tutorials-egress-ipv6-acls-on-ncs5500": { "title": "Egress IPv6 ACLs on NCS5500", "content": " Egress IPv6 ACLs on NCS5500 Introduction Overview Understanding the Two Pass Approach 1st Pass Ingress 1st Pass Egress 2nd Pass Ingress 2nd Pass Egress TCAM Entries J2 Enhancement Summary IntroductionIn the previous article, we discussed the ACLs on NCS5500 BVI interfaces. We covered the ACL implementation and support for IPv4 and IPv6 ACLs in both ingress and egress directions. We also discussed the enhancements w.r.t IPv6 Egress ACLs. In this article we discuss the overall implementation of IPv6 Egress ACLs across NCS5500 product family based on J/J+ and see how it differs with the platforms based on J2.OverviewLet us start with the understanding the PMF block. It is one of the blocks in the ingress and egress pipeline aka Programmable Mapping and Filtering. It is the most programmable and the last programmable block in the pipeline. It has all the history of the packet from other blocks (incoming port, lookup results, etc). We can override here every decision taken along the pipeline. 
Here we do ACL, QoS, LPTS classification and set actions (counters, policers, Traffic Class). Egress PMF is capable of doing internal TCAM lookup for egress ACLsIn order to support the match criteria that is needed for egress IPv6 ACL, resources in the Ingress PMF block will be used. Therefore, this requires us to recycle the IPv6 packets so that the ACL can be performed by the Ingress PMF. Sometimes it might happen that certain features will not be processed in a single pass. They will require a second pass in the pipeline for further processing. The recycle interface offers the capability to “re-inject” the packet from the end of the egress pipeline into the ingress pipeline for a “second pass” and may be for a third or fourth if needed. There are 2 types of recycling cases. One is ingress recycle and second is egress recycle. Egress IPv6 ACL is the case of egress recycling.Understanding the Two Pass ApproachEgress IPv6 ACL requires two passes of each IPv6 packet that is subject to ACL. The recycling of the IPv6 packets are controlled by entries in the Egress PMF. Each packet will go through the following stages# 1st pass ingress 1st pass egress (recycle packet) 2nd pass ingress 2nd pass egress1st Pass IngressDuring the first pass the normal ingress processing will be performed. This includes ingress PMF, QoS, and any other configured features. The forwarding decision, is determined based on the Destination System Port (DSP). Each DSP in the system has a corresponding set of VoQs. Since a FEC (Forwarding Equivalence Class) is pointing to the DSP, packets are sent to destination port on the destination NPU. The header contains DSP as system port. The packets will be put in the DSP’s VoQ and will be scheduled by the DSP’s End-to-End (E2E) scheduler1st Pass EgressWhen an IPv6 ACL is attached to an egress interface, a set of egress PMF entries will created to redirect the IPv6 packet out the recycle port. Based the recycle port configuration, a new program will be selected which will take care of internal processing. At high level it will ensure a fixed packet offset after recycling. It will also build a dummy Ethernet header with Ethertype=IPv6 to prepare for 2nd pass ingress parsing2nd Pass IngressAfter the IPv6 packet has been recycled, it will be received on the same NPU/Core as the egress DSP. A field group in PMF will retrieve DSP, Traffic class and drop precedence from the recycled packet’s system header. DSP is retrieved so that the recycled packet will go out the same Egress port. We don’t want to go through forwarding again and possibly choose a different egress port. The Traffic Class might have been set by QoS in the 1st pass Ingress processing. We want to preserve this TC. The Drop Precedence might have been set by QoS in the 1st pass ingress processing. We want to preserve this Drop Precedence. Ingress PMF will now perform the ACL. If matches on a Deny ACE, the normal processing will occur and the packet will be dropped or forwarded to the Control Plane for ICMPv6 handling. If matches on a Permit ACE, the configured action will occur plus the additional actions to strip the recycled system header. Forwarding will use the retrieved DSP from the recycled system header. 
The packets will be put in the DSP’s VoQ and will be scheduled by the DSP’s end to end scheduler a second time.2nd Pass EgressThe original system headers will be used for normal egress processingTCAM EntriesEgress IPv6 ACL has 2 databases# EGRESS_ACL_IPV6 RCY_ACL_L3_IPV6Because the packets are recycled, the EGRESS_ACL_IPV6 database has entries that will facilitate the recycling mechanism. The actual match entries are added at RCY_ACL_L3_IPV6. Let us verify the same on the routers. We will use 2 different Line cards as belowRP/0/RP0/CPU0#5508-2-74142I-C#show platformNode Type State Config state-------------------------------------------------------------------------------- 0/3/CPU0 NC57-18DD-SE IOS XR RUN NSHUT0/3/NPU0 Slice UP 0/3/NPU1 Slice UP 0/4/CPU0 NC55-36X100G-A-SE IOS XR RUN NSHUT0/4/NPU0 Slice UP 0/4/NPU1 Slice UP 0/4/NPU2 Slice UP 0/4/NPU3 Slice UP Note# Output is truncatedWe will consider 3 Line cards. Line Card No of NPU’s NC57-18DD-SE 2xJ2 NC55-36X100G-A-SE 4xJ+ Below ACL is applied on the interface in the egress direction in slot 4.ipv6 access-list ipv6_1 10 permit tcp 2001#1#2##/64 any eq 1024 20 permit tcp 2002#1#2##/64 any eq 1024 30 permit tcp 2003#1#2##/64 any eq 1024 40 permit tcp 2004#1#2##/64 any eq 1024 50 deny udp 2001#4#5##/64 any lt 1000 60 permit ipv6 any anyRP/0/RP0/CPU0#5508-2-741Cs#show running-config interface hundredGigE 0/4/0/2Wed Sep 29 22#51#52.131 PDTinterface HundredGigE0/4/0/2 description Local H0/4/0/2 to H0/4/0/10 ipv6 access-group ipv6_1 egressAs pointed out earlier, we can see 2 databases are created.RP/0/RP0/CPU0#5508-2-741Cs#show controllers npu internaltcam location 0/4/CPU0 Internal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b egress_acl 2026 5 31 EGRESS_ACL_IPV60 6\\7 320b pmf-0 2034 14 93 RCY_ACL_L3_IPV6RP/0/RP0/CPU0#5508-2-741Cs#Note# Output is truncatedLet us understand the entries in details#For database EGRESS_ACL_IPV6 we have# 1 static entry per NPU per core for the recycle-channel 3 entries for setting up the different kinds of TPIDs supported (IPv6 (0x86dd), VLAN (0x8100), or MPLS (0x8847)) We can see 5 entries in the database. (default internal entry + the ones explained above)For database RCY_ACL_L3_IPV6 we have# The match entries computed from the ACL. We can see 14 entries (default entries plus ACEs configured)Let us increase the number of ACEs and see how the database entries are changedipv6 access-list ipv6_1 10 permit tcp 2001#1#2##/64 any eq 1024 20 permit tcp 2002#1#2##/64 any eq 1024 30 permit tcp 2003#1#2##/64 any eq 1024 40 permit tcp 2004#1#2##/64 any eq 1024 50 deny udp 2001#4#5##/64 any lt 1000 60 permit tcp 2005#1#2##/64 any eq 1024 70 permit tcp 2006#1#2##/64 any eq 1024 80 permit tcp 2007#1#2##/64 any eq 1024 90 permit tcp 2008#1#2##/64 any eq 1024 100 permit ipv6 any anyRP/0/RP0/CPU0#5508-2-741Cs#show controllers npu internaltcam location 0/4/CPU0 Wed Sep 29 23#12#18.921 PDTInternal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================0 0 160b egress_acl 2026 5 31 EGRESS_ACL_IPV60 6\\7 320b pmf-0 2030 18 93 RCY_ACL_L3_IPV6RP/0/RP0/CPU0#5508-2-741Cs#We can see for database EGRESS_ACL_IPV6 the entries are not increasing. The entries are only increasing for the database RCY_ACL_L3_IPV6. 
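Beyond counting raw TCAM entries, it can also help to confirm which ACEs were actually programmed in hardware for the egress direction. A minimal verification sketch, reusing the same ACL name (ipv6_1) and the J+ line card in slot 4 from the example above; note that per-ACE hit counters may or may not be populated for this recycled two-pass implementation, depending on the release, so treat this as a programming check rather than a guaranteed statistics view:

show access-lists ipv6 ipv6_1 hardware egress location 0/4/CPU0

Only the match entries derived from the configured ACEs grow as the ACL grows; the recycle-related entries stay flat.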
This is because actual match entries are added only at RCY_ACL_L3_IPV6. This is very important to understand during debugging or verifying the outputs. Other thing to remember is, if the same ACL is applied to multiple egress interfaces, match entries and different kinds of TPIDs will increase as per number of interfaces. While the entries for the recycle channel will only increase when the interfaces are in different NPUs.J2 EnhancementNCS5500 based on J2 ASICs do not face the issue of packet recycling when it comes to IPv6 Egress ACLs. They can be processed in single pass like IPv4 Egress ACLs. This is possible because of the presence of more resources in the Egress PMF. Let us verify the same on the Line card based on J2.interface FourHundredGigE0/3/0/21 description 5508-2 FH0/3/0/21 to 5508-1 FH0/7/0/21 cdp ipv6 access-group ipv6_1 egressRP/0/RP0/CPU0#5508-2-741Cs#show controllers npu internaltcam location 0/3/CPU0 Wed Sep 29 23#42#35.179 PDTInternal TCAM Resource Information=============================================================NPU Bank Entry Owner Free Per-DB DB DB Id Size Entries Entry ID Name=============================================================1 6\\7 320b EPMF 2013 26 44 EGRESS_ACL_IPV6RP/0/RP0/CPU0#5508-2-741Cs#RP/0/RP0/CPU0#5508-2-741Cs#show controllers npu internaltcam location 0/3/CPU0 | in RCY_ACL_L3_IPV6RP/0/RP0/CPU0#5508-2-741Cs#From the above outputs, its clearly seen that we no more create a database for recycle port RCY_ACL_L3_IPV6. We can also see that owner of the database is EPMF which is Egress PMF. The Egress PMF is having enough resources to take care of the ACL processing.SummaryHope this article was helpful in understanding the IPv6 Egress ACL implementation on NCS5500 and the enhancement on the newer generation platforms. In the next article we will understand the various modes of ACLs and see how to use specific profile to increase the ACL scale. Stay tuned !!!", "url": "/tutorials/egress-ipv6-acls-on-ncs5500/", "author": "Tejas Lad", "tags": "iosxr, NCS5500, ncs5500, J2, Jericho2, ACL, IPv6, Egress ACL" } , "tutorials-introducing-ncs57c3-mod": { "title": "Introducing NCS57C3-MOD Routers", "content": " Cisco NCS 57C3 MOD Routers Introduction Videos Understanding the naming logic Product description Positioning CCO Documentation Software Hardware LEDs and Displays Route Processors Power supply Fan trays Ports identification MPA ports Fixed SFP ports Fixed QSFP ports Ports scale and support 1GE 10GE 25GE 40GE 100GE 400GE 400GE ZR Forwarding ASIC (J2C NPU) Differences with Jericho2 MDB Profiles Port assignment to ASIC Core Block Diagrams SFP28 “direct” ports SFP28 PHY ports QSFP28 ports EOBC and EPC internal networks MACsec Timing External timing ports Fixed ports MPA ports This article has been written and reviewed by (in alphabetical order)# Akshaya Kumar Sankaran, TME Amit Kumar Dey, PM Nicolas Fevrier, TME Paban Sarma, TME Tejas Lad, TME Vincent Ng, TMEUpdate 1 (12 Oct 2021)# fixed an error in the 1G scenario, MPA slot 2 and 3 don’t support 1G ports on the 12T MPA.IntroductionWith IOS XR 7.4.1, we introduced multiple software features (https#//xrdocs.io/ncs5500/tutorials/iosxr-741-innovations/) but new hardware are also launched with this release. We are very happy to introduce a new member to the NCS5500 family, the NCS57C3-MOD series.These two new routers are the NCS-57C3-MOD-SYS and NCS-57C3-MODS-SYS, that can be considered the successors of NCS55A2-MOD. 
They are built following the same philosophy# compact form-factor (less than 300mm deep and 3RU here) offering the highest level of flexibility with both fixed SFP and QSFP ports (1G, 10G, 25G, 40G, 100G) modular port adaptorsbut also new goodies are specific to this NCS57C3-MOD# much higher forwarding capability (2.4Tbps compared to 900G on NCS55A2-MOD) dual RP for control plane redundancy 3x MPA (two at 800G and one at 400G)They can be used in multiple places in the network# aggregation, pre-agg, 5G (class-C capable), internet peering, core, enterprise…Videos.Understanding the naming logicThe name of the product is different depending on the licensing model used.With Flexible Consumption Model# NCS-57C3-MOD-SYS is the “base” version NCS-57C3-MODS-SYS is the “scale” version (ie# equipped with External TCAM and half the numbers of 100G fixed ports)With Perpetual / Business As Usual model# NCS-57C3-MOD-S is the “base” version NCS-57C3-MOD-SE-S is the “scale” version (eTCAM)Note that a “show platform” will display the FCM naming#RP/0/RP0/CPU0#ios#show platNode Type State Config state--------------------------------------------------------------------------------0/0/CPU0 NCS-57C3-MODS-SYS IOS XR RUN NSHUT0/0/NPU0 Slice UP 0/RP0/CPU0 NC57-MOD-RP2-E(Active) IOS XR RUN NSHUT0/RP1/CPU0 NC57-MOD-RP2-E(Standby) IOS XR RUN NSHUT0/FT0 NC57-C3-FAN2-FW OPERATIONAL NSHUT0/FT1 NC57-C3-FAN2-FW OPERATIONAL NSHUT0/FT2 NC57-C3-FAN1-FW OPERATIONAL NSHUT0/FT3 NC57-C3-FAN1-FW OPERATIONAL NSHUT0/FT4 NC57-C3-FAN1-FW OPERATIONAL NSHUT0/FT5 NC57-C3-FAN1-FW OPERATIONAL NSHUT0/PM0 NC57-1600W-ACFW OPERATIONAL NSHUTRP/0/RP0/CPU0#ios#Or in admin mode#sysadmin-vm#0_RP0# show controller card-mgr inventory summaryCard Manager Inventory Summary # BP HWLocation Card Type ID Serial Number Ver Card State------------------------------------------------------------------------------0/0 NCS-57C3-MODS-SYS 1 FOCxxxxxxxx 0.1 CARD_READY0/RP0 NC57-MOD-RP2-E (Master) 27 FOCxxxxxxxx 1.0 CARD_READY0/RP1 NC57-MOD-RP2-E (Slave) 28 FOCxxxxxxxx 1.0 CARD_READYProduct descriptionPositioningNCS57C3-MOD can be positioned in a very large variety of roles in the network due to its flexibility, compact form factor and high-level forwarding capacity. Non exhaustively# 5G Mobile Backhaul (Class C, 1G to 400G) Core & Peering (MACSEC,100G, 400G, High Routing Scale) Enterprise & Residential Aggregation (MACSEC, 10G, 25G, 100G, Higher Aggregation Scale) Cloud Native Broadband Network Gateway (in roadmap) Routed Optical Networks (400G ZR/ZRP, PLE) (ZRP and PLE in roadmap)CCO DocumentationThe product documentation is available here# Datasheet#https#//www.cisco.com/c/en/us/products/collateral/routers/network-convergence-system-5500-series/ncs-57C3-fixed-chassis-ds.html dimensions power usage PIDs ports supported standards pretty much everything you need to know ;) Fixed systems white paper#https#//www.cisco.com/c/dam/en/us/products/collateral/routers/network-convergence-system-5500-series/ncs5500-fixed-platform-architecture-white-paper.pdf Installation guide#https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/hardware-install/b-ncs5700-hardware-installation-guide-fixed-port/m-ncs-5700-router-overview.htmlSoftwareThe product is launched with IOS XR 7.4.1.Note# the system is running XR 64bit and not “XR7”. 
So it shares the same image than other modular and fixed chassis (except the NCS57B1 variants).In the download center, the search engine will not help with NCS 57C3 keywords#But you can pick any NCS5500 image to upgrade your NCS57C3-MOD routers (except the one for NCS57B1, of course since it’s XR7)#The system is based on a single J2C, the operating system will be activated by default (no config required) in “Native mode”.From a security / integrity perspective, the NCS57C3-MOD implements the latest secureboot features with TAm (Trust Anchor module) FPGA.HardwareWe launch two versions of this new 3-RU modular system# base and scale.They only differ in two aspects# external TCAM to complete the J2C NPU and the number of QSFP ports.   NCS-57C3-MOD-S / NCS-57C3-MOD-SYS NCS-57C3-MOD-SE-S / NCS-57C3-MODS-SYS Fixed SFP 1G/10G/25G 48 48 Fixed QSFP 40G/100G 8 4 MPA bays 2x 800G + 1x 400G 2x 800G + 1x 400G Dual RP Yes Yes Forwarding ASIC J2C J2C eTCAM No OP2 Total interfaces 4Tbps 3.6Tbps NPU Forwarding 2.4Tbps 2.4 Tbps Fixed ports (interfaces), Modular Port Adaptors (MPA) slots, Route Processors (RPs) and Power Supply Units (PSUs) are reachable from the front#Only Fan trays are reachable from the back#LEDs and DisplaysThe front plane is very packed, therefore we don’t have dedicated status LED for each port. Instead, 4 LEDs and a switch button are available on the bottom right.Route ProcessorsThe NCS57C3-MOD is the first of the “fixed platforms” portfolio to offer control plane redundancy via the presence of these two NC57-MOD-RP2-E inserted in the front of the router (top right two slots).Each route processor offers an USB port (for log/dump storage or “USB boot”), a console port and an management ethernet port.Note# the system can operate in nominal manner with only one RP. The dual RP is optional.As mentioned above, it offers control plane redundancy and not forwarding plane redundancy. It means the protocols and processes will be checkpointed between the two route processors, in the same way it’s done on the chassis 5504/5508/5516, enabling the NSR/NSF/GR features between protocols. But at the difference of the chassis, we don’t have multiple fabric cards (no fabric card at all).The RP doesn’t contain the Jericho2C NPU, this ASIC is located in an “internal line card”.You’ll find details on the CPU type, memories, etc in the datasheet linked above.Notes# the NC57-MOD-RP2-E is specific to the NCS57C3 and can’t be used in the modular chassis (or vice versa), it’s a totally different form factor we don’t have plans to implement ISSU on this system. The RP doesn’t contain the forwarding device, so at best we should be able to speed up the reload process but not reach true ISSU capabilities. with this dual RP architecture, the system is rated for a 5x9 availability (99.999%) at 30C.Power supplyIn the bottom left side of the front, we can insert two PSUs.Two flavors exist# 1600W AC or 1600W DC. Both AC or DC options offer 1+1 redundancy (ie the system can operate on one module only).Mixed (AC+DC) is possible but only tested for short period of time (during a live migration, for example).Fan traysThe cooling of the chassis is guaranteed by a system of 6 fan trays inserted in the back of the box. It’s a front-to-back design with two types of FTs#NC57-C3-FAN1-FW (40mm)#NC57-C3-FAN2-FW (60mm)#From a back perspective, the fan slots are numbered from left to right, 0 to 5.   
NC57-C3-FAN1-FW NC57-C3-FAN2-FW Size 40mm 60mm Position/Slot 2/3/4/5 0/1 In normal conditions (all 6 fan trays active), the system can operate at# 50C (at 1800m) 45C when using QSFP-DD MPA with low powered opticsWith a single fan failure (whether it is NC57-C3-FAN1-FW or NC57-C3-FAN2-FW), the system can operate at 40C.Ports identificationMPA portsPort numbering for MPA are the same for both base and scale versions of the router. The MPA type will of course influence the port name and numbers.NCS57C3-MOD offers three MPA bays# 2 for 800Gbps MPA (or 400Gbps)# slot 2/3 1 for 400Gbps MPA only# slot 1All slots support existing MPAs# NC55-MPA-2TH-S# NCS 5500 2X200G CFP2 MPA NC55-MPA-1TH2H-S# NCS 5500 1X200G CFP2 + 2X100G QSFP28 MPA NC55-MPA-12T-S# NCS 5500 12X10G MPA NC55-MPA-4H-S# NCS 5500 4X100G QSFP28 MPAAnd slots 1 and 2 support new generation MPAs at 800G# NC57-MPA-2D4H-S (New)# NCS 5700 4X QSFP-DD MPAThis new MPA is also supported in slot 1 in 400Gbps mode, and can’t offer 400GE connection over a single port (but 4x100GE instead).More details on the supported ports / optics in this article# https#//xrdocs.io/ncs5500/tutorials/introducing-nc57-mpa-2d4h-s/Fixed SFP portsNCS57C3-MOD systems are offering 1G, 10G, 25G native ports in the central raw of the system. This is actually representing an internal line card “0”. You can NOT eject this card of course. The ports will be numbered 0/0/0/x.On base system we have 48 SFP ports split in two blocks, separated by the 8 high speed (QSFP) ports in the middle.On scale system we have 48 ports split in two blocks, separated by the 4 high speed ports in the middle.Fixed QSFP portsThat’s one of the most apparent difference between base and scale NCS57C3-MOD systems, the scale one offers half the amount of 40G/100G ports, due to the required internal connection to the external TCAM.The base variant has 8 ports#The scale variant offers 4 ports#Ports scale and supportTo identify the optic types supported on the NCS57C3-MOD routers, please check the TMG matrix#https#//tmgmatrix.cisco.com/?npid=4662&npid=4661It contains details on the connector types, the reach, the minimum release required, etc.1GEThe following diagram represents the maximum number of 1G optics the NCS57C3-MOD routers can offer (same numbers for base and scale system).In this configuration, we are inserting NC55-MPA-12T-S in slot 1, 2 and 3 but only slot 1 (400G) can handle this MPA for 1G (it’s fine for 10G). Also, it can only support 8 ports 1GE in slot 1.All SFP fixed ports support 1G, and at the moment, we don’t plan to support QSA in the QSFP ports.We have a total of 48 + 8 = 56 ports 1GE.Note# only “optical” 1G ports are supported and not “copper” (since we don’t support auto-negociation at 1G).10GEWe repeat the same exercise to identify the max number of 10GE ports we can accomodate. We will re-use the same NC55-MPA-12T-S in all MPA slots. But this time, they will we can use all the MPA ports at 10GE.The QSFP slots in the center can all use QSFP+ in 4x10GE breakout mode (with no restriction).Base#Total# 48 + (8 x 4) + 12 + 12 + 12 = 116 ports 10GE.Scale#Total# 48 + (4 x 4) + 12 + 12 + 12 = 100 ports 10GE.Note# to reach even higher 10GE scale, we could have used NC55-MPA-4H-S with QSFP+ 4x10GE in each port. 
With this breakout approach, we can get 12 additional ports.25GETo reach the higher possible scale, we will use NC55-MPA-4H-S or NC57-MPA-2D4H-S, so we can break out the four 100GE ports in 4x 25GE.We will use the same break out option in the QSFP fixed ports in the center.Finally all SFP fixed ports support SFP28 25GE.Total# 48 + (8 x 4) + 16 + 16 + 16 = 128 ports 2GE.Total# 48 + (4 x 4) + 16 + 16 + 16 = 112 ports 25GE.40GEFor QSFP+ 40GE ports, we will only be able to use the fixed QSFP ports in the center, and three times NC55-MPA-4H-S or NC57-MPA-2D4H-S.Total# 8 + 4 + 4 + 4 = 20 ports 40GE.Total# 4 + 4 + 4 + 4 = 16 ports 40GE.100GEThe highest scale we can reach for 100GE connectivity will require NC57-MPA-2D4H-S in slot 2 and 3. In slot 1, we can use NC55-MPA-4H-S or NC57-MPA-2D4H-S. We are showing two different configuration of the NC57-MPA-2D4H-S. In slot 2, we have two ports 4x100GE while in slot 3, we have 2x100 in all four ports. Both offer 8 times 100GE.The QSFP fixed ports support natively QSFP28 optics.Total# 8 + (4+4) + (2+2+2+2) + 4 = 28 ports 100GETotal# 4 + (4+4) + (2+2+2+2) + 4 = 24 ports 100GENote# it’s also possible to use a NC57-MPA-2D4H-S in slot 1 with a 4x100G breakout, but we must use port 0 specifically in the MPA for that, other three ports will be disabled.400GEWe only support 400GE Grey optics in NC57-MPA-2D4H-S in slot 2 and 3# 2 ports per MPA, in position 0 and 2Note# this MPA in slot 1 can’t offer 400GE on a single port.Total # 2 + 2 = 4 ports 400GE400GE ZRFor future ZR/ZR+ use, with NC57-MPA-2D4H-S in all three MPA slots# in slot 2 and 3, we will support two ports in 400G Transponder mode or 4x100G Muxponder mode. in slot 1, we will only support one port in 4x100G Muxponder modeThe 400G-ZR/ZR+ will NOT be supported in the fixed QSFP since they require QSFP-DD cages. Same applies for the NC55-MPA-xxx.Note# the 100G/400G ZR/ZRP are not supported in IOS XR 7.4.1, but they are in the roadmap. Contact your Cisco representative for more details.Forwarding ASIC (J2C NPU)NCS57C3-MOD routers are powered by a single NPU# the Broadcom Jericho2C. It’s the first platform of its kind in the Cisco MIG portfolio to use this chipset (Jericho2 being used in multiple line cards and stand-alone platforms already).Differences with Jericho2At very high level, the J2C ASIC is a J2 with just one core instead of two. Therefore, it will have half the bandwidth (2.4Tbps) and forwarding capabilities (1BPPS) more resources since they don’t need to be shared between two cores   Jericho2 Jericho2C Bandwidth (Gbps) 4,800 2,400 PPS 2B 1B Network IF 96x50G 32x50G+96x25G Fabric IF 112x50G 48x50G On-Chip Buffer 32MB 16MB Off-Chip Buffer (HBM) 8GB 4GB Virtual Output Queues per Core 64K 128K Counters 192K 384K Timing Class B Class C MDB ProfilesThe same innovations are present in the J2C and J2. Among them, the capability to carve a large block of memory into “database”, will permit the creation of specific profiles# more L3-oriented for peering roles more L2-oriented for aggregation rolesBy default, in IOS XR 7.4.1, the base system will enable the L3max profile and the scale version will activate the L3max-SE profile. 
In future releases, we will allow the configuration of diverse profiles depending on the use-case (L2max, L2max-SE for example).RP/0/RP0/CPU0#57C3#show controllers fia diagshell 0 ~mdb info~ location 0/0/CPU0 Node ID# 0/0/CPU0R/S/I# 0/0/0 =============================| MDB Profile || MDB profile# l3max || MDB profile KAPS cfg# 2 |=============================--%--SNIP-SNIP-SNIP--%--RP/0/RP0/CPU0#57C3-SE#sh controllers fia diagshell 0 ~mdb info~ location 0/0/CPU0 Node ID# 0/0/CPU0R/S/I# 0/0/0 =============================| MDB Profile || MDB profile# l3max-se || MDB profile KAPS cfg# 9 |=============================--%--SNIP-SNIP-SNIP--%--Port assignment to ASIC CoreFor once, it will be very simple# it’s a single-ASIC system and it’s a single-core NPU. So all ports are connected to NPU 0 Core 0.RP/0/RP0/CPU0#Router# sh controllers npu voq-usage interface all instance$-------------------------------------------------------------------Node ID# 0/0/CPU0Intf Intf NPU NPU PP Sys VOQ Flow VOQ Portname handle # core Port Port base base port speed(hex) type----------------------------------------------------------------------Hu0/0/3/2 8 0 0 1 1 1520 6648 local 100GHu0/0/3/3 28 0 0 5 5 1528 6656 local 100GHu0/0/3/1 68 0 0 13 13 1512 6640 local 100GHu0/0/2/2 88 0 0 17 17 1488 6616 local 100GHu0/0/2/3 a8 0 0 21 21 1496 6624 local 100GHu0/0/2/0 c8 0 0 25 25 1472 6600 local 100GHu0/0/2/1 e8 0 0 29 29 1480 6608 local 100GTe0/0/0/8 188 0 0 49 49 1088 6216 local 10GTe0/0/0/9 190 0 0 50 50 1096 6224 local 10GTF0/0/0/10 198 0 0 51 51 1320 6448 local 25GTF0/0/0/11 1a0 0 0 52 52 1432 6560 local 25GTe0/0/0/12 1a8 0 0 53 53 1104 6232 local 10GTe0/0/0/13 1b0 0 0 54 54 1112 6240 local 10GTe0/0/0/14 1b8 0 0 55 55 1120 6248 local 10GTe0/0/0/15 1c0 0 0 56 56 1128 6256 local 10GTe0/0/0/16 1c8 0 0 57 57 1136 6264 local 10GTe0/0/0/17 1d0 0 0 58 58 1144 6272 local 10GTe0/0/0/18 1d8 0 0 59 59 1152 6280 local 10GTe0/0/0/19 1e0 0 0 60 60 1160 6288 local 10GTe0/0/0/20 1e8 0 0 61 61 1168 6296 local 10GTe0/0/0/21 1f0 0 0 62 62 1176 6304 local 10GTe0/0/0/22 1f8 0 0 63 63 1280 6408 local 10GTe0/0/0/23 200 0 0 64 64 1272 6400 local 10GHu0/0/0/24 208 0 0 65 65 1312 6440 local 100GHu0/0/0/25 228 0 0 69 69 1304 6432 local 100GHu0/0/0/26 248 0 0 73 73 1296 6424 local 100GHu0/0/0/27 268 0 0 77 77 1288 6416 local 100GTF0/0/0/28 288 0 0 81 81 1424 6552 local 25GTF0/0/0/29 290 0 0 82 82 1416 6544 local 25GTF0/0/0/30 298 0 0 83 83 1328 6456 local 25GTF0/0/0/31 2a0 0 0 84 84 1336 6464 local 25GTF0/0/0/32 2a8 0 0 85 85 1344 6472 local 25GTF0/0/0/33 2b0 0 0 86 86 1352 6480 local 25GTF0/0/0/34 2b8 0 0 87 87 1360 6488 local 25GTF0/0/0/35 2c0 0 0 88 88 1368 6496 local 25GTF0/0/0/36 2c8 0 0 89 89 1376 6504 local 25GTF0/0/0/39 2d0 0 0 90 90 1400 6528 local 25GTF0/0/0/38 2d8 0 0 91 91 1392 6520 local 25GTF0/0/0/37 2e0 0 0 92 92 1384 6512 local 25GTe0/0/0/40 2e8 0 0 93 93 1264 6392 local 10GTe0/0/0/41 2f0 0 0 94 94 1256 6384 local 10GTe0/0/0/42 2f8 0 0 95 95 1248 6376 local 10GTe0/0/0/43 300 0 0 96 96 1240 6368 local 10GTe0/0/0/4 308 0 0 97 97 1056 6184 local 10GTe0/0/0/5 310 0 0 98 98 1064 6192 local 10GTe0/0/0/6 318 0 0 99 99 1072 6200 local 10GTe0/0/0/7 320 0 0 100 100 1080 6208 local 10GTe0/0/0/0 328 0 0 101 101 1024 6152 local 10GTe0/0/0/1 330 0 0 102 102 1032 6160 local 10GTe0/0/0/2 338 0 0 103 103 1040 6168 local 10GTe0/0/0/3 340 0 0 104 104 1048 6176 local 10GHu0/0/1/0 348 0 0 105 105 1440 6568 local 100GHu0/0/1/1 368 0 0 109 109 1448 6576 local 100GHu0/0/1/2 388 0 0 113 113 1456 6584 local 100GHu0/0/1/3 3a8 0 0 117 117 1464 6592 local 
100GTe0/0/0/48 3c8 0 0 121 121 1200 6328 local 10GTe0/0/0/49 3d0 0 0 122 122 1192 6320 local 10GTe0/0/0/50 3d8 0 0 123 123 1184 6312 local 10GTF0/0/0/51 3e0 0 0 124 124 1408 6536 local 25GTe0/0/0/45 3e8 0 0 125 125 1224 6352 local 10GTe0/0/0/44 3f0 0 0 126 126 1232 6360 local 10GTe0/0/0/46 3f8 0 0 127 127 1216 6344 local 10GTe0/0/0/47 438 0 0 135 135 1208 6336 local 10GTe0/0/3/0/0 2048 0 0 9 9 1504 6632 local 10GTe0/0/3/0/1 2050 0 0 10 10 1536 6664 local 10GTe0/0/3/0/2 2058 0 0 11 11 1544 6672 local 10GTe0/0/3/0/3 2060 0 0 12 12 1552 6680 local 10GRP/0/RP0/CPU0#Router#Block DiagramsThe systems are logically split in an (dual) RP part and a LC part, each powered by Intel 8-core CPUs.Interesting to note the NCS57C3-MOD-SYS and NCS57C3-MODS-SYS are SoC (system on the chip). That means all the ports are directly connected to a single forwarding ASIC.But not all fixed ports are directly connected to the NPU, some SFP ports are connected through an intermediate PHY chipset.It’s very important to identify clearly the SFP28 ports connected directly to the NPU or via this PHY, since it will impact the features you can expect to activate on them (MACsec support and Timing performance/quality).SFP28 “direct” portsIn the base version, the direct ports are 0/0/0/<8-23><32-39>In the scale/-SE version, the direct ports are 0/0/0/<8-23><28-35>SFP28 PHY portsIn the base version, the PHY ports are 0/0/0/<0-7><40-55>In the scale/-SE version, the PHY ports are 0/0/0/<0-7><36-51>QSFP28 portsAll these fixed 8 or 4 ports (on base and scale respectively), are directly connected to NPU.EOBC and EPC internal networksInternally, the different parts of the system are interconnected through an ethernet switch that will “service” both the EPC and EOBC networks# the EPC for Ethernet Protocol Channel for the punted traffic (“for us” packets or netflow samples for example). The LC CPU and LC NPU are connected through a PCIe connection. 
the EOBC# Ethernet Out-of Band Channel for system managementMACsecMACsec is supported on the PHY SFP28 port (check section SFP28 PHY ports above).On fixed ports, it works for 10G and 25G optics but not 1G ports# base system# ports 0/0/0/<0-7> and <40-55> scale system# ports 0/0/0/<0-7> and <36-51>MACsec is not supported on the direct SFP ports or the QSFP28 ports.All MPA support MACsec, regardless of the slot of insertion NC55-MPA-2TH-S# supported on coherent CFP2 ports NC55-MPA-1TH2H-S# on both CFP2 ports and all grey ports 40G, 100G, 4x10G and 4x25G but no QSA NC55-MPA-12T-S# on all ports with 10G optics but not 1G NC55-MPA-4H-S# 40G, 100G, 4x10G and 4x25G but no QSA NC57-MPA-2D4H-S MACsec planned for next release and is not supported in 7.4.1.TimingExternal timing portsIn the right part of the central row of the system, you’ll find all timing ports# 1PPS and 10MHz (DIN 1.0/2.3 50 Ohm Coax) GNSS antenna (SMA 50 Ohm Coax) Time Of Day (RJ45/RS-422, Cisco and NTPv4 TOD format support)No BITS support.Fixed portsCisco NCS57C3-MOD routers support Sync-E and PTP (no difference between base and scale systems).The systems support PTP T-GM (Grand Master) clock with PRTC-A performance (G.8272).PTP scale 128 sessions with max 64/16 pps Sync/DelayReq.All fixed ports support Sync-E# QSFP ports with QSFP28 and QSFP+ optics SFP ports with SFP28 and SFP+ optics SFP ports 1G are in the roadmap and will be on PHY ports only (not supported in 7.4.1)All fixed ports support T-BC (Boundary Clock) for both telecom profiles# G.8275.1/G.8273.2 G.8275.2 (IPv4)The clock quality (Class-B or Class-C) is dependant of the type of port, whether they are directly connected to the NPU or they are PHY ports# direct ports support Class-C (ports 0/0/0/<8-39> on base and ports 0/0/0/<8-35> on scale variant) PHY ports support Class-B (ports 0/0/0/<0-7><40-55> on base and ports 0/0/0/<0-7><36-51> on scale variant)MPA ports NC55-MPA-2TH-S# Class-A timing over coherent links is in the roadmap, not supported in 7.4.1 NC55-MPA-1TH2H-S# Class-A timing over coherent and Class-B over grey ports is in the roadmap NC55-MPA-12T-S# Class-B supported over 10G links in 7.4.1, but no support with 1G optics NC55-MPA-4H-S# Class-B is in the roadmap NC57-MPA-2D4H-S at FCS in 7.4.1, we support Class C performance on 40G/100GE and 400GE grey ports Timing over 2x100G and 4x100G grey breakout is in roadmap (Class C) Timing over ZR/ZRP, with Class A quality, is in the roadmap ", "url": "/tutorials/introducing-ncs57c3-mod/", "author": "Nicolas Fevrier", "tags": "" } , "tutorials-introducing-nc57-mpa-2d4h-s": { "title": "Introducing the new QSFP-DD MPA: NC57-MPA-2D4H-S", "content": " New QSFP-DD MPA# NC57-MPA-2D4H-S Introduction Video CCO documentation Product Description In NCS57C3-MOD Routers In 800Gbps Mode In 400Gbps Mode MACsec Support Timing Support Configuring Interface Speed 400G on the MPA Configuring one breakout option on the MPA In NCS55A2-MOD Routers Use case? In NC55-MOD-A Line Cards This article has been written and reviewed by (in alphabetical order)# Akshaya Kumar Sankaran, TME Amit Kumar Dey, PM Nicolas Fevrier, TME Paban Sarma, TME Tejas Lad, TME Vincent Ng, TMEIntroductionWith IOS XR 7.4.1, we introduced a lot of new features and new routers like the Cisco NCS57C3-MOD, but that’s not the only new hardware launched with this new release. 
We are very pleased to announce the availability of a new generation of Modular Port Adapter, with QSFP-DD support# the NC57-MPA-2D4H-S.This new MPA will be supported in the different slots of the NCS57C3-MOD routers, but also in the very popular NCS55A2-MOD series and the modular line cards NC55-MOD-A-S and NC55-MOD-A-SE-S.So it will be supported in the existing 400Gbps MPA slots, protecting your investment, but also in the new 800Gbps slots available in the NCS57C3-MOD routers, unleashing its fully capability.VideoVincent Ng presents the NC57-MPA-2D4H-S module#CCO documentation Datasheet#https#//www.cisco.com/c/en/us/products/collateral/routers/network-convergence-system-5500-series/ncs-5700-series-mpa-ds.html TMG matrix for supported optics#https#//tmgmatrix.cisco.com/?npid=4661 Visio stencils#https#//www.cisco.com/c/en/us/products/visio-stencil-listing.htmlProduct DescriptionThe form factor is comparable to the 400Gbps version available today (NC55-MPA-2TH-S, NC55-MPA-1TH2H-S, NC55-MPA-12T-S and NC55-MPA-4H-S).This MPA is 800Gbps capable and can accomodate a variety of port mixing combinations.The 4 ports are QSFP-DD cages, making it ready for ZR/ZRP optics (100G and 400G). It’s the perfect fit for the Routed Optical Network architectures, you can find more details on this design here.This MPA is useable in multiple NCS5500 and NCS5700 products and line cards, we will detail them now, one by one.In NCS57C3-MOD RoutersPlease check this artcle dedicated to the NCS57C3-MOD routers first.This fixed/modular router is the first of its kind in the portfolio, and its flexibility comes primarily from the three MPA slots. One at 400Gbps (slot 1) and two at 800Gbps (slot 2 and 3).The module presents 4 QDD ports in the front, connected to a PHY chipset, capable of different behaviors (reverse gear box, forward gear box, retimer, …) and handling features like timing and MACsec encryption.Depending on the slot we will insert it into, the NC57-MPA-2D4H-S will offer different capabilities and support different port combination.In 800Gbps ModeWhen inserted in slot 2 and 3, the MPA offers 800Gbps of throughput, 400G for each pair of ports (0-1 and 2-3). Port 0/2 Port 1/3 400G or 4x100G Empty QDD-2x100G QDD-2x100G 100G or 40G or 4x25G or 4x10G 100G or 40G or 4x25G or 4x10G QDD-2x100G 100G or 4x25G 100G or 4x25G QDD-2x100G 100G/400G-ZR/ZR+ Transponder and Nx100G Muxponder modes are in the roadmap. Contact your Cisco representative if you need more details.In 400Gbps ModeWhen inserted in slot 1, the MPA offers 400Gbps of throughput total.We can’t offer a single 400GE port in this mode, only 4x100G in port 0 or multiple ports of diverse speed as shown in the chart# Port 0 Port 1 Port 2 Port 3 4x100G breakout Disabled Disabled Disabled QDD-2x100G Disabled QDD-2x100G Disabled 100G/40G/4x25G/4x10G 100G/40G/4x25G/4x10G 100G/40G/4x25G/4x10G 100G/40G/4x25G/4x10G QDD2x100G Disabled 100G/40G/4x25G/4x10G 100G/40G/4x25G/4x10G 100G/40G/4x25G/4x10G 100G/40G/4x25G/4x10G QDD2x100G Disabled 100G/400G-ZR/ZR+ Nx100G Muxponder modes are in the roadmap. Contact your Cisco representative if you need more details.MACsec SupportAll speeds 400G, 100G, 40G supported such as MACsec over breakout (4x100G, 4x25G and 4x10G).Timing SupportAll speeds 400G, 100G, 40G support SyncE and PTP# G.8275.1 Telecom Profile supported with Class C Performance G.8275.2 Telecom Profile also supportedConfiguring Interface Speed 400G on the MPAAs discussed in the above section we can use the MPA is either slot 1,2,3. 
It will operate in either 400G or 800G mode. Now we will see the default mode of the ports when the MPA boots up and how to configure a port for 400G speed.We have a router with the MPA in slot 1 and slot 2. We will concentrate on slot 2 for this exampleRP/0/RP0/CPU0#NC57C3-Vega-II5-53#show platformThu Mar 3 11#21#37.216 UTCNode Type State Config state--------------------------------------------------------------------------------0/0/1 NC57-MPA-2D4H-S OK 0/0/2 NC57-MPA-2D4H-S OK 0/0/3 NC57-MPA-12L-S OK 0/0/CPU0 NCS-57C3-MODS-SYS IOS XR RUN NSHUT0/0/NPU0 Slice UP 0/RP0/CPU0 NC57-MOD-RP2-E(Active) IOS XR RUN NSHUT0/FT0 NC57-C3-FAN2-FW OPERATIONAL NSHUT0/FT1 NC57-C3-FAN2-FW OPERATIONAL NSHUT0/FT2 NC57-C3-FAN1-FW OPERATIONAL NSHUT0/FT3 NC57-C3-FAN1-FW OPERATIONAL NSHUT0/FT4 NC57-C3-FAN1-FW OPERATIONAL NSHUT0/FT5 NC57-C3-FAN1-FW OPERATIONAL NSHUT0/PM0 NC57-1600W-ACFW OPERATIONAL NSHUT0/PM1 NC57-1600W-ACFW FAILED NSHUTRP/0/RP0/CPU0#NC57C3-Vega-II5-53#We have inserted 400G optics in slot 0 and slot 2RP/0/RP0/CPU0#NC57C3-Vega-II5-53#show inventory NAME# ~0/0~, DESCR# ~NCS 5700 Eyrie Line Card~PID# NCS-57C3-MODS-SYS , VID# V01, SN# FOC25296E5ZNAME# ~0/0/1~, DESCR# ~2X400G or 4X200/100G QSFP-DD MPA~PID# NC57-MPA-2D4H-S , VID# V01, SN# FOC252925T7NAME# ~0/0/2~, DESCR# ~2X400G or 4X200/100G QSFP-DD MPA~PID# NC57-MPA-2D4H-S , VID# V01, SN# FOC252925T8NAME# ~HundredGigE0/0/2/0~, DESCR# ~Cisco QSFPDD 400G AOC Pluggable Optics Module~PID# QDD-400-AOC3M , VID# V01 , SN# INL2528A9ZV-BNAME# ~HundredGigE0/0/2/2~, DESCR# ~Cisco QSFPDD 400G AOC Pluggable Optics Module~PID# QDD-400-AOC3M , VID# V01 , SN# INL2528A9XN-AWe can see that when the MPA boots up the interface comes up as 100G by default. It does not automatically take into consideration the inserted optics.RP/0/RP0/CPU0#NC57C3-Vega-II5-53#show ipv4 interface brief | in 0/0/2HundredGigE0/0/2/0 unassigned Down Down default HundredGigE0/0/2/1 unassigned Down Down default HundredGigE0/0/2/2 unassigned Down Down default HundredGigE0/0/2/3 unassigned Down Down default RP/0/RP0/CPU0#NC57C3-Vega-II5-53#For configuring the port as 400G, we need to execute the below commandshw-module port-range 0 1 instance 2 location 0/0/CPU0 mode 400hw-module port-range 2 3 instance 2 location 0/0/CPU0 mode 400 CLI portion Meaning port-range either 0-1 or 2-3 instance card instance of MPA’s location fully qualified location specification mode port mode After configuring the above profile the interface comes up in 400G mode.RP/0/RP0/CPU0#NC57C3-Vega-II5-53#show ipv4 interface brief | in FourHundredGigE0/0/2FourHundredGigE0/0/2/0 unassigned Up Up default FourHundredGigE0/0/2/2 unassigned Up Up default RP/0/RP0/CPU0#NC57C3-Vega-II5-53#Configuring one breakout option on the MPAAs seen in the above example, if we want to configure a port for a breakout option, please use the below CLIhw-module port-range 0 1 instance 2 location 0/0/CPU0 mode 4x100In this example, the 400G port in slot 0 is used in 4x100G mode.RP/0/RP0/CPU0#NC57C3-Vega-II5-53#show ip int brief | in 0/0/2/0/HundredGigE0/0/2/0/0 unassigned Down Down default HundredGigE0/0/2/0/1 unassigned Down Down default HundredGigE0/0/2/0/2 unassigned Down Down default HundredGigE0/0/2/0/3 unassigned Down Down default RP/0/RP0/CPU0#NC57C3-Vega-II5-53#Similarly you can use different ports in different speeds and breakout options. The table described above gives the combination details.In NCS55A2-MOD RoutersThe two MPA slots being 400G, it’s the same port capability than 400Gbps mode described above. 
We don’t support 1x400G Grey but will support 4x100G breakout. For 400G-ZR/ZR+ Coherent in the future plan, we will support Nx100G Muxponder modes. Current optics speeds and breakout supported are as follows# Port 0 Port 1 Port 2 Port 3 4x100G breakout Disabled Disabled Disabled QDD-2x100G Disabled QDD-2x100G Disabled 100G/40G/4x25G/4x10G 100G/40G/4x25G/4x10G 100G/40G/4x25G/4x10G 100G/40G/4x25G/4x10G QDD2x100G Disabled 100G/40G/4x25G/4x10G 100G/40G/4x25G/4x10G 100G/40G/4x25G/4x10G 100G/40G/4x25G/4x10G QDD2x100G Disabled Timing and MACsec support are not available at 7.4.1 and are currently tracked in the roadmap, contact your Cisco representative for more details.Use case?Most obvious question here will be# why not using the good ol’ NC55-MPA-4H-S since you can’t afford more than 4 ports 100G here?The main use case for using an NC57-MPA-2D4H-S in NCS55A2-MOD will be the Route Optical (RON) architectures. Indeed, for ZR and ZRP optics support at 100G, you need a QSFP-DD cage. So, using coherent ZR/ZR+ optics in 100G, 4x100G or 2x100G will require the new MPA.Note# support of ZR/ZRP is in the roadmap and currently not available in IOS XR 7.4.1 at FCS.In NC55-MOD-A Line CardsCurrently we don’t support the NC57-MPA-2D4H-S in the NC55-MOD-A line cards (base or scale).", "url": "/tutorials/introducing-nc57-mpa-2d4h-s/", "author": "Nicolas Fevrier", "tags": "" } , "tutorials-lpts-enhancements-on-ncs5500-ncs5700": { "title": "LPTS Enhancements on NCS5500/NCS5700 ", "content": " LPTS Enhancements on NCS5500/NCS5700 Introduction Brief Background Problem Statement Solution Platform Support Implementation 1st Pass 2nd Pass IOS-XR 7.6.1 Support Matrix Configurations Memory and Performance References Summary IntroductionIn our previous article, we had introduced the LPTS architecture on NCS5500 and NCS500 product family. There we discussed, concept of LPTS and its internal architecture. We also saw with examples how LPTS entries are created in the hardware and how they can be altered as per different requirements. We then followed it up with introduction to Domain based LPTS Policers and understanding its use cases. In this article, we will discuss the LPTS latest enhancements on the newer generation products.Brief BackgroundBefore we move on to this topic, it would be recommended to visit our LPTS architecture document for understanding the implementation on the platform. As discussed in the document, LPTS is an integral component of IOS-XR systems which provides firewall and policing functionality. LPTS maintains per interface complete table in netio chain in Line card CPU, making sure that packets are delivered to their intended destinations. IOS XR software classifies all ‘For Us’ control packets into 97 different flows. Each flow has it own hardware policer to restrict the punt traffic rate for the flow type. We also discussed how the LPTS processes the for-us packets in the two pass in the hardware pipeline. For-us packets will go through the ASIC twice before getting punted to the CPU. In the current implementation this happens in iTCAM.Problem StatementLocal Packet Transfer Services (LPTS) maintains tables that redirect packets to a logical router or the Secure Domain Router (SDR) to make sure that packets are delivered to their intended destination on the Routing Processor(RP). These packets are termed as “for-us” packets. Examples include PIM, IGMP, ICMP, RSVP other protocol packets like OSPFv2/v3 hello packets, ISIS packets, BGP packets etc. 
As mentioned above, the on chip TCAM or the iTCAM on previous generation NCS5500 can only support a maximum of 8K LPTS table entries along with other features. Entries exceeding the allowed numbers is processed under a common pool of software entries.SolutionFrom IOS-XR 7.6.1, the scale of the LPTS hardware entries has been increased. To achieve the same, the second pass will happen in the eTCAM instead of iTCAM. This will help increase the LPTS hardware entries to 12000 (from current support of 8k). This helps in scaling the other protocol entries up to 1.5 times the current scale. This gives more flexibility to the customers to choose the number of hardware entries for their protocols.Platform SupportThis enhancement is only supported on platforms based on Jericho2 and Jericho2C with external TCAM (NC57-18DD-SE, NC57-36H-SE, NCS-57C3-MOD-(SE)-S, NCS-57B1-5DSE). Platforms that do not have external TCAM does not support this enhancement. This is supported only in native mode. It is not supported in compatible mode. Earlier generation platforms based on Jericho/Jericho+ do not support this enhancement even if they have external TCAM.For understanding Native vs Compatible mode please watch the following video.ImplementationLet us have a high level understanding of how this works internally#1st Pass The packet ingresses on the network interface and a forwarding destination lookup. This gives the packet a valid compression id value or a FEC value. Also the forwarding trap value for the packet may be set depending on the type of packet. In this stage, the trap value for the packet is modified to a user defined recycle trap id. This happens for packets that have a valid compression ID value. For packets with TTL=1, the TTL1 trap is set via INGRESS_IPV4/6 instead. Both traps ensures that the packet is recycled back to the IRPP.2nd Pass The recycled packet lookup for 2nd pass happens in the external tcam is done via a recycle context selection criteria. This applies to all IPv4/v6 unicast and multicast packets and packets with the options attribute set. The hardware will have 2 lookups internally. 
The first will be for the forwarding destination or compression ID the other will be for LPTS.Note# This happens transparently in the platforms once it is upgraded to IOS-XR 7.6.1 and operating in native modeIOS-XR 7.6.1 Support Matrix Host Router Remote Router LPTS Scale Platforms with J2/J2C with eTCAM Native Mode Platforms with J2/J2C with eTCAM Native Mode 12k Platforms with J2/J2C with eTCAM Native Mode Platforms with J2/J2C with eTCAM Compatible Mode 8k Platforms with J2/J2C with eTCAM Compatible Mode Platforms with J2/J2C with eTCAM Native Mode 8k Platforms with J2/J2C without eTCAM Native/Compatible Mode Platforms with J2/J2C without eTCAM Native/Compatible Mode 8k Platforms with J2/J2C without eTCAM Native/Compatible Mode Platforms with J2/J2C with eTCAM Native/Compatible Mode 8k Platforms with J/J+ with eTCAM Platforms with J/J+ with eTCAM 8k Platforms with J/J+ without eTCAM Platforms with J/J+ without eTCAM 8k Platforms with J2/J2C with eTCAM Native/Compatible Mode Platforms with J/J+ with eTCAM 8k ConfigurationsFixed platforms with J2/J2C ASIC with eTCAM will automatically support 12k LPTS entries when operating with IOS-XR 7.6.1.RP/0/RP0/CPU0#N57B1-1-Vega-II5-57#show platform Node Type State Config state--------------------------------------------------------------------------------0/RP0/CPU0 NCS-57B1-5DSE-SYS(Active) IOS XR RUN NSHUT0/PM0 PSU2KW-ACPI OFFLINE NSHUT0/PM1 PSU2KW-ACPI OPERATIONAL NSHUT0/FT0 N5700-FAN OPERATIONAL NSHUT0/FT1 N5700-FAN OPERATIONAL NSHUT0/FT2 N5700-FAN OPERATIONAL NSHUT0/FT3 N5700-FAN OPERATIONAL NSHUT0/FT4 N5700-FAN OPERATIONAL NSHUT0/FT5 N5700-FAN OPERATIONAL NSHUTRP/0/RP0/CPU0#N57B1-2-Vega-II5-58#show version Mon Apr 11 09#17#51.091 UTCCisco IOS XR Software, Version 7.6.1 LNTCopyright (c) 2013-2022 by Cisco Systems, Inc.Build Information# Built By # ingunawa Built On # Sun Mar 27 01#23#01 UTC 2022 Build Host # iox-ucs-051 Workspace # /auto/srcarchive17/prod/7.6.1/ncs5700/ws Version # 7.6.1 Label # 7.6.1cisco NCS5700 (D-1563N @ 2.00GHz)cisco NCS-57B1-5DSE-SYS (D-1563N @ 2.00GHz) processor with 32GB of memoryN57B1-2-Vega-II5-58 uptime is 3 hours, 6 minutesNCS55B1 Fixed Scale HW Flexible Consumption Need Smart LicRP/0/RP0/CPU0#N57B1-2-Vega-II5-58#show lpts pifib dynamic-flows statistics location 0/RP0/CPU0 Dynamic-flows Statistics# ------------------------- (C - Configurable, T - TRUE, F - FALSE, * - Configured) Def_Max - Default Max Limit Conf_Max - Configured Max Limit HWCnt - Hardware Entries Count ActLimit - Actual Max Limit SWCnt - Software Entries Count P, (+) - Pending Software Entries FLOW-TYPE C Def_Max Conf_Max HWCnt/ActLimit SWCnt P -------------------- -- ------- -------- -------/-------- ------- - Fragment T 4 -- 2/4 2 OSPF-mc-known T 900 -- 0/900 0 OSPF-mc-default T 8 -- 4/8 4 OSPF-uc-known T 450 -- 0/450 0 OSPF-uc-default T 4 -- 2/4 2 ISIS-known T 300 -- 0/300 0 ISIS-default T 2 -- 1/2 1 BGP-known T 1800 -- 0/1800 0 BGP-cfg-peer T 1800 -- 0/1800 0 BGP-default T 8 -- 4/8 4 PIM-mcast-default T 40 -- 0/40 0 PIM-mcast-known T 450 -- 0/450 0 PIM-ucast T 40 -- 2/40 2 IGMP T 1464 -- 0/1464 0 ICMP-local T 4 -- 4/4 4 ICMP-control T 10 -- 5/10 5 ICMP-default T 18 -- 9/18 9 ICMP-app-default T 4 -- 2/4 2 LDP-TCP-known T 450 -- 0/450 0 LDP-TCP-cfg-peer T 450 -- 0/450 0 LDP-TCP-default T 40 -- 0/40 0 LDP-UDP T 450 -- 0/450 0 All-routers T 450 -- 0/450 0 RSVP-default T 4 -- 0/4 0 RSVP-known T 450 -- 0/450 0 IPSEC-known T 150 -- 0/150 0 SNMP T 150 -- 0/150 0 SSH-known T 150 -- 0/150 0 SSH-default T 40 -- 2/40 2 HTTP-known T 40 -- 0/40 0 HTTP-default 
T 40 -- 0/40 0 SHTTP-known T 40 -- 0/40 0 SHTTP-default T 40 -- 0/40 0 TELNET-known T 150 -- 0/150 0 TELNET-default T 4 -- 0/4 0 UDP-known T 40 -- 0/40 0 UDP-listen T 40 -- 0/40 0 UDP-default T 4 -- 2/4 2 TCP-known T 40 -- 0/40 0 TCP-listen T 40 -- 0/40 0 TCP-default T 4 -- 2/4 2 Raw-default T 4 -- 2/4 2 ip-sla T 50 -- 0/50 0 EIGRP T 40 -- 0/40 0 RIP T 40 -- 0/40 0 PCEP T 20 -- 0/20 0 GRE T 4 -- 0/4 0 VRRP T 150 -- 0/150 0 HSRP T 40 -- 0/40 0 MPLS-oam T 40 -- 0/40 0 DNS T 40 -- 0/40 0 RADIUS T 40 -- 0/40 0 TACACS T 40 -- 0/40 0 NTP-default T 4 -- 0/4 0 NTP-known T 150 -- 0/150 0 DHCPv4 T 40 -- 0/40 0 DHCPv6 T 40 -- 0/40 0 TPA T 100 -- 0/100 0 PM-TWAMP T 36 -- 0/36 0 --------------------------------------------------- Active TCAM Usage # 11450/12000 [Platform MAX# 12000] HWCnt/SWCnt # 43/47---------------------------------------------------From the above output, we can see that when the router boots up with IOS-XR 761, we have only 11450 TCAM entries occupied. Whereas the maximum supported in 12000. This means that we have room to increase a particular flow without affecting the others. Let us check with an example.RP/0/RP0/CPU0#N57B1-2-Vega-II5-58(config)#lpts pifib hardware dynamic-flows location 0/RP0/CPU0 flow bgp configured max 2000RP/0/RP0/CPU0#N57B1-2-Vega-II5-58(config)#commit RP/0/RP0/CPU0#N57B1-2-Vega-II5-58#show lpts pifib dynamic-flows statistics location 0/RP0/CPU0 Dynamic-flows Statistics# ------------------------- (C - Configurable, T - TRUE, F - FALSE, * - Configured) Def_Max - Default Max Limit Conf_Max - Configured Max Limit HWCnt - Hardware Entries Count ActLimit - Actual Max Limit SWCnt - Software Entries Count P, (+) - Pending Software Entries FLOW-TYPE C Def_Max Conf_Max HWCnt/ActLimit SWCnt P -------------------- -- ------- -------- -------/-------- ------- - Fragment T 4 -- 2/4 2 OSPF-mc-known T 900 -- 0/900 0 OSPF-mc-default T 8 -- 4/8 4 OSPF-uc-known T 450 -- 0/450 0 OSPF-uc-default T 4 -- 2/4 2 ISIS-known T 300 -- 0/300 0 ISIS-default T 2 -- 1/2 1 BGP-known T 1800 -- 0/1800 0 BGP-cfg-peer T* 1800 2000 0/2000 0 BGP-default T 8 -- 4/8 4 PIM-mcast-default T 40 -- 0/40 0 PIM-mcast-known T 450 -- 0/450 0 PIM-ucast T 40 -- 2/40 2 IGMP T 1464 -- 0/1464 0 ICMP-local T 4 -- 4/4 4 ICMP-control T 10 -- 5/10 5 ICMP-default T 18 -- 9/18 9 ICMP-app-default T 4 -- 2/4 2 LDP-TCP-known T 450 -- 0/450 0 LDP-TCP-cfg-peer T 450 -- 0/450 0 LDP-TCP-default T 40 -- 0/40 0 LDP-UDP T 450 -- 0/450 0 All-routers T 450 -- 0/450 0 RSVP-default T 4 -- 0/4 0 RSVP-known T 450 -- 0/450 0 IPSEC-known T 150 -- 0/150 0 SNMP T 150 -- 0/150 0 SSH-known T 150 -- 0/150 0 SSH-default T 40 -- 2/40 2 HTTP-known T 40 -- 0/40 0 HTTP-default T 40 -- 0/40 0 SHTTP-known T 40 -- 0/40 0 SHTTP-default T 40 -- 0/40 0 TELNET-known T 150 -- 0/150 0 TELNET-default T 4 -- 0/4 0 UDP-known T 40 -- 0/40 0 UDP-listen T 40 -- 0/40 0 UDP-default T 4 -- 2/4 2 TCP-known T 40 -- 0/40 0 TCP-listen T 40 -- 0/40 0 TCP-default T 4 -- 2/4 2 Raw-default T 4 -- 2/4 2 ip-sla T 50 -- 0/50 0 EIGRP T 40 -- 0/40 0 RIP T 40 -- 0/40 0 PCEP T 20 -- 0/20 0 GRE T 4 -- 0/4 0 VRRP T 150 -- 0/150 0 HSRP T 40 -- 0/40 0 MPLS-oam T 40 -- 0/40 0 DNS T 40 -- 0/40 0 RADIUS T 40 -- 0/40 0 TACACS T 40 -- 0/40 0 NTP-default T 4 -- 0/4 0 NTP-known T 150 -- 0/150 0 DHCPv4 T 40 -- 0/40 0 DHCPv6 T 40 -- 0/40 0 TPA T 100 -- 0/100 0 PM-TWAMP T 36 -- 0/36 0 --------------------------------------------------- Active TCAM Usage # 11650/12000 [Platform MAX# 12000] HWCnt/SWCnt # 43/47---------------------------------------------------We can see that the hardware 
entries for BGP configured flow has been increased to 2000 from 1800. In the previous release this was not possible. All the 8k entries were utilised at the boot time itself. If we wanted to increase any particular flow, it would be at the cost of other flows.For modular chassis we need to first bring the chassis to native mode using the below command. Then only this enhancement will be effective.hw-module profile npu native-mode-enableThis will need a router reload. Once the chassis is operating in native mode and has line cards with external TCAM then we should be able to get the 12k LPTS entries.If you want to toggle back to the previous behaviour for 8k entries then use the below profile and issue a router reloadhw-module profile tcam lpts-internalMemory and PerformanceCreating new field groups will take up hardware resources on the ASIC. But with the current implementation we will not face any memory issue. Though it uses 2 pass implementation but will not have any issues of latency in the platforms. This will be taken care at the boot time itself and will be transparent to the end users.ReferencesIntroduction to LPTS on NCS5500CCO Configuration GuideShort VideoSummaryLPTS provides a strong and compressive feature for control-plane protection. All the NCS5500 platforms are equipped with this feature, for helping the customers to achieve their SLAs and get better network stability. As discussed, this enhancement will help increase the LPTS scale and increase the number of hardware entries of the protocols along with other features. This will give flexibility to the customers in choosing hardware entries for the protocols. This is supported from IOS-XR 7.6.1 and only on the newer generation platforms with external TCAM. We have a roadmap to further increase this values from 12k. Stay tuned for the same.", "url": "/tutorials/lpts-enhancements-on-ncs5500-ncs5700/", "author": "Tejas Lad", "tags": "iosxr, cisco, LPTS, NCS5500, NCS5700, control plane, CoPP" } , "tutorials-latest-acl-enhancements-on-ncs5500-ios-xr-7-6-1": { "title": "Latest ACL Enhancements on NCS5500/NCS5700 - IOS-XR 7.6.1", "content": " Latest ACL Enhancements on NCS5500 - IOS-XR 7.6.1 Introduction IOS-XR 7.6.1 Enhancements Increased Ingress ACLs Support Matrix ACL Chaining with ACL Based Forwarding (ABF) Support Matrix Enable Ingress Interface Logging on ACE Support Matrix ACL-Based Policing Feature support Configurations Support Matrix Summary Reference IntroductionIn our previous article, we discussed the ACL enhancements on the newer generation NCS5500 plaforms. In this article we will discuss some more enhancements w.r.t ACLs, which we have done in IOS-XR 7.6.1IOS-XR 7.6.1 EnhancementsIn IOS-XR 7.6.1, we have brought in a number of new enhancements when it comes to ACLs. Increased Ingress ACLs ACL Chaining with ACL Based Forwarding (ABF) Enable Ingress Interface Logging on ACE ACL-Based PolicingWe will discuss about each enhancement in detailsIncreased Ingress ACLsIn earlier releases, we could configure maximum upto 127 different traditional ingress ACLs and 255 different hybrid ingress ACLs in shared ACL mode per line card. From IOS-XR 7.6.1 we can now configure an increased number of either traditional (non-compression) or hybrid (compression) ingress ACLs in shared ACL mode, as listed below# A maximum of 512 different traditional ingress ACLs per line card. 
A maximum of 1000 different hybrid ingress ACLs per line card.Increased ACLs provide you with enhanced traffic filtering capabilities to control how traffic packets move through the network and restrict the access of users and devices to the network. For further deepdive please follow the link.Support Matrix Platforms Support NCS5500 without eTCAM (J/J+) No NCS5500 with eTCAM (J/J+) No NCS5700 without eTCAM (J2/J2C) Yes NCS5700 with eTCAM (J2/J2C) Yes ACL Chaining with ACL Based Forwarding (ABF)Prior to IOS-XR 7.6.1, ABF and ACL chaining with Common ACL were mutually exclusive features. From 7.6.1 onwards, we can enable ABF in conjunction with ACL chaining in Common ACL. With this feature, the router can inspect and forward the packets based on the ABF rule in Common ACL. For further deepdive, please follow the links for Chained ACL and ABF.Support Matrix Platforms Support NCS5500 without eTCAM (J/J+) Yes NCS5500 with eTCAM (J/J+) Yes NCS5700 without eTCAM (J2/J2C) Yes NCS5700 with eTCAM (J2/J2C) Yes Enable Ingress Interface Logging on ACENCS5500 product family already supports logging messages about packets permitted or denied by IP access list. That is, any packet that matches the access list causes an informational logging message about the packet. The level of messages logged to the console is controlled by the logging console command in global configuration mode. The first packet that triggers the access list causes an immediate logging message, and subsequent packets are collected over 5-minute intervals before they are displayed or logged. (Reference). Let us verify the behaviour on the router.We have a router with ACL configured with logging enabled.RP/0/RP0/CPU0#N57B1-1-Vega-II5-57#show access-lists ipv4 acl_log ipv4 access-list acl_log 10 permit icmp host 172.16.0.57 host 172.16.0.53 log 20 permit icmp host 172.16.0.53 host 172.16.0.57 logRP/0/RP0/CPU0#N57B1-1-Vega-II5-57#When we hit a ACE, we see the below in the syslogsRP/0/RP0/CPU0#N57B1-1-Vega-II5-57#RP/0/RP0/CPU0#Apr 19 09#00#41.213 UTC# ipv4_acl_mgr[162]# %ACL-IPV4_ACL-6-IPACCESSLOGDP # access-list acl_log (20) permit icmp 172.16.0.53 -> 172.16.0.57 (8/0), 4406 packets If we do not specify the log keyword we would not have received this particular syslogs in the console. This gives us better readability of the syslogs as compared to no logging option.You can control the number of packets that, when they match an access list (and are permitted or denied), cause the system to generate a log message. You might do this to receive log messages more frequently than at 5-minute intervals. The below command helps to achieve the sameRP/0/RP0/CPU0#N57B1-1-Vega-II5-57(config)#ipv4 access-list log-update threshold ? <1-2147483647> Log update threshold (number of hits)RP/0/RP0/CPU0#N57B1-1-Vega-II5-57(config)#ipv4 access-list log-update thresholdNote# The similar is applicable for IPv6 Access-list as well.From IOS-XR 7.6.1, we have enhanced the logging feature for the ACL to give more readability to the users. We have introduced the keyword log-input. This is an optional keyword and it provides the same functionality as the log keyword, as described above, except that the log-message also includes the ingress interface on which the router receives the packet. 
The router supports this feature for both IPv4 and IPv6 ingress ACLs on# Physical Interfaces Sub-interfaces and Bridged-virtual interfaces (BVI) Bundle InterfacesWhenever a permit/deny ACE is hit we get a syslog which also mentions the interface detailsRP/0/RP0/CPU0#N57B1-1-Vega-II5-57#show access-lists ipv4 acl_log ipv4 access-list acl_log 10 permit icmp host 172.16.0.57 host 172.16.0.53 log-input 20 permit icmp host 172.16.0.53 host 172.16.0.57 log-inputRP/0/RP0/CPU0#N57B1-1-Vega-II5-57# RP/0/RP0/CPU0#N57B1-1-Vega-II5-57#RP/0/RP0/CPU0#Apr 19 09#32#29.700 UTC# ipv4_acl_mgr[162]# %ACL-IPV4_ACL-6-IPACCESSLOGDP # access-list acl_log (20) permit icmp 172.16.0.53 HundredGigE0/0/0/8-> 172.16.0.57 (8/0), 1 packet Support Matrix Platforms Support NCS5500 without eTCAM (J/J+) Yes NCS5500 with eTCAM (J/J+) Yes NCS5700 without eTCAM (J2/J2C) Yes NCS5700 with eTCAM (J2/J2C) Yes ACL-Based PolicingPrior to IOS-XR 7.6.1, ACLs could only permit or deny packets based on the matching criteria. From IOS-XR 7.6.1, users can control the traffic that an access control entry (ACE) allows in the ingress direction by configuring the policing rate for the ACE in an IPv4 or IPv6 Hybrid ACL. This functionality limits packet rates and takes different actions for different packets. This feature brings in simplicity for traffic policing as users do not have to configure a QoS policy for the same.Feature support It is supported only in the ingress direction It is supported only with hybrid ACL It is supported only on J2/J2C based NCS5700 with external TCAM It is supported only on chassis operating in native mode Both IPv4 and IPv6 ACLs are supported with policing It i supported only with permit criteria Policing rate in PPS is not supported L2 ACL is not supported for policingConfigurationsLet us verify the feature functionality. We have a NCS5700 router with IOS-XR 7.6.1 with next with external TCAM operating in native mode. We have configured a simple ACL which matches traffic from a host destined to another host. If the criteria matches we have policed the traffic to 500 Mbps.ipv4 access-list acl_policing 10 permit ipv4 host 100.57.2.2 host 100.53.2.2 police 500 mbpsWe need to enable the below hw-module profile for the compressed ACL to be configured in the ingress direction on the interface.hw-module profile acl ingress compress enableinterface HundredGigE0/0/0/2 description IXIA_2/2_Non_ETM_port mtu 9000 ipv4 address 100.57.2.1 255.255.255.0 load-interval 30 ipv4 access-group acl_policing ingress compress level 3Let us verify the traffic and the router stats. 
From the below output we can see the received rate is 500 Mbps on the IXIAIngress interface statsRP/0/RP0/CPU0#N57B1-1-Vega-II5-57#show interfaces hundredGigE 0/0/0/2 | in rate 30 second input rate 9973195000 bits/sec, 833322 packets/sec 30 second output rate 0 bits/sec, 0 packets/secEgress interface statsRP/0/RP0/CPU0#N57B1-1-Vega-II5-57#show interfaces hundredGigE 0/0/0/8 | in rate 30 second input rate 0 bits/sec, 0 packets/sec 30 second output rate 498657000 bits/sec, 41666 packets/secWe also verify the ACL stats to verify that packets are getting dropped due to the policer.RP/0/RP0/CPU0#N57B1-1-Vega-II5-57#show access-lists ipv4 acl_policing hardware ingress location 0/RP0/CPU0 ipv4 access-list acl_policing 10 permit ipv4 host 100.57.2.2 host 100.53.2.2 police 500 mbps (Accepted# 210369109 packets, Dropped# 3997142972 packets)Support Matrix Platforms Support NCS5500 without eTCAM (J/J+) No NCS5500 with eTCAM (J/J+) No NCS5700 without eTCAM (J2/J2C) No NCS5700 with eTCAM (J2/J2C) Yes SummarySo with IOS-XR 7.6.1, we bring in new enhancements w.r.t data-plane security, as a part of our continous improvement. Each release we keep on enhancing our sofwtare and hardware capabilities. These enhancements help strengthen our portfolio and helps in catering customer requirements. Stay tuned for new updates in future releases !!!ReferenceCCO Config Guide", "url": "/tutorials/latest-acl-enhancements-on-ncs5500-ios-xr-7-6-1/", "author": "Tejas Lad", "tags": "iosxr, cisco, ACL, NCS5500, access-list, access control list, data plane" } , "tutorials-introducing-ncs-57c1-48q6d-s-router": { "title": "Introducing NCS-57C1-48Q6D-S Router", "content": " Introducing NCS-57C1-48Q6D-S Router Authors Introduction Video Naming Logic Future ready use cases CCO documentation Software Hardware Details Port Details Ports 0-5 Ports 6-21 Ports 22-53 Available Ports Summary Block Diagram Port Assigment to ASIC core MDB profile MACSEC TIMING Power Consumption References AuthorsThis article has been written and reviewed by MIG PM/TME teamIntroductionWith IOS XR 7.5.2, we introduce one rack unit (1RU) fixed port routers in the Cisco NCS 5700 series. It is a high-capacity, low power consuming router providing the following support and capabilities# Up to 4T total port bandwidth (oversubscribed) 2.4T forwarding capacity. Total of 54 ports - 4 ports of 400G QSFP-DD, 2 ports of 4x100G QSFP-DD, 16 ports of 50G SFP+ (also support traffic speed of 10G, 25G, and 1G), 32 ports of 25G SFP+ (also support traffic speed of 10G and 1G) Support for SFP, SFP+, SFP28, and QSFP28 optics Synchronous Ethernet (SyncE) and PTP Power supply redundancy (AC/DC) MACSECVideoNaming LogicNCS-57C1-48Q6D-S is the first product based on Qumran-2C chipset. The PIDs will vary depending on the licencing model used. 
PID Licensing Model NCS-57C1-48Q6-SYS Flexible Consumption Model (FCM) NCS-57C1-48Q6D-S Perpetual/BAE Note# The show platform output will display the FCM ModelRP/0/RP0/CPU0#ios#show platform Node Type State Config state--------------------------------------------------------------------------------0/RP0/CPU0 NCS-57C1-48Q6-SYS(Active) IOS XR RUN NSHUT0/FT0 FAN-1RU-PI-V2 OPERATIONAL NSHUT0/FT1 FAN-1RU-PI-V2 OPERATIONAL NSHUT0/FT2 FAN-1RU-PI-V2 OPERATIONAL NSHUT0/FT3 FAN-1RU-PI-V2 OPERATIONAL NSHUT0/FT4 FAN-1RU-PI-V2 OPERATIONAL NSHUTRP/0/RP0/CPU0#ios#Future ready use casesNCS-57C1-48Q6-SYS/NCS-57C1-48Q6D-S can be positioned in a very large variety of roles in the network due to its flexibility, compact form factor, combination of low and high rate ports and high-level forwarding capacity.CCO documentation Datasheet. You can find the dimensions, power usage, PIDsports, supported standards in the same. Fixed systems White Paper Hardware Installation GuideSoftwareThe product is launched with IOS-XR 7.5.2 and it will run XR7 and not XR 64bit.RP/0/RP0/CPU0#ios#show version Cisco IOS XR Software, Version 7.5.2.12I LNTCopyright (c) 2013-2021 by Cisco Systems, Inc.Build Information# Built By # ingunawa Built On # Mon Dec 20 20#45#07 UTC 2021 Build Host # iox-lnx-012 Workspace # /auto/iox-lnx-012-san2/prod/7.5.2.12I.DT_IMAGE/ncs5700/ws Version # 7.5.2.12I Label # 7.5.2.12Icisco NCS5700 (D-1633N @ 2.50GHz)cisco NCS-57C1-48Q6-SYS (D-1633N @ 2.50GHz) processor with 16GB of memoryios uptime is 1 day, 23 hours, 12 minutesNCS 57C1 Base Chassis, Flexible Consumption Need Smart LicThe system is based on a single Q2C, the operating system will be activated by default (no config required) in “Native mode”. From a security / integrity perspective, the NCS57C1 implements the latest secureboot features with TAM (Trust Anchor module) FPGA.Hardware DetailsNCS57C1 is available in 1 RU Form Factor. It is 19’’ rack mountable and fits in standard 600 mm depth rack. It has 54 ports in the front panel. Out of which we have 4 400G multirate ports. 2 4x100G multirate ports. 16 50GE multirate ports and 32 25G multirate ports. It has an integrated RSP, timing and synchronization and LC complex in one box. It has one console and management port in the front panel along with all the data ports. All the timing ports are in the rear of the system. It has one USB 2.0 host port access, 1 RJ45 TOD port, 2 SMB connectors 1 1PPS and 1 10MHZ clock and one port for the GPS receiver. It has 2 Power supply slots. They are field replaceable with both AC and DC options. It operates in 1+1 active-active redundant mode. Each power supply has a built-in fan to cool the power supply. It supports front to back airflow. It has 5 Fan Units for cooling and proper airflow for the system with N+1 redundancy.Port DetailsLet us see in details each block of ports#Ports 0-5The platform supports 400G ZR/ZRP Optics supported only on top row i.e., ports 0,2,4. Ports 0,2,4,5 will support 400G Grey, 4x100G, 2x100G mode. Ports 1 and 3 will support only 4x100G and 2x100G Breakout. Ports 0-5 will support all the breakout options. Ports 0-5 will also support native 100G and 40G.Ports 6-21Ports 6-21 will natively support 50G/25G/10/1G. No breakout is supported on these ports. No support for auto-negotiation as well.Ports 22-53Ports 22-53 will natively support 25G/10/1G. 
No breakout is supported on these ports.No 1G auto-negotiation supported at FCS.Available Ports Summary   Native Breakout Max Ports Comments 400G ZR/ZRP (native) 3 - 3 Ports 0/2/4 400G Grey 6 - 6 Port 0-5 4x400G + 2(4x100G) 100G 6 24 24 Ports 0-5 Native Support Ports 0-5 (4x100G) BO Support 50G 16 - 16 Ports 6-21 40G 6 - 6 Ports 0-5 25G 48 24 72 Ports 6-53 Native Support Ports 0-5 (4x25G) BO Support 10G 48 24 72 Ports 6-53 Native Support Ports 0-5 (4x10G) BO Support 1G 48 - 48 Ports 6-53 Block DiagramThe entire port configurations is supported with a single Q2C ASIC. 400G ports i.e ports 0/2 and is connected to the ASIC via MACSEC PHY . Another 400G port is directly connected to the ASIC. We have another intermediate PHY which is connected in gearbox mode to two QSFPDD ports i.e ports 1 and 3 which will support 4x100G mode in each port. 16x50G ports are connected from optics to intermediate PHY to the ASIC. For these intermediate PHY will be configured in gearbox mode. These ports can also support 25G and 10G. In this mode, intermediate PHY will be configured in retimer mode.32x25G ports are connected from optics to PHY to ASIC. ASIC will be configured in 1x25G mode for each port and PHY will be configured in retimer mode. These ports can also support 10G and 1G. Control path is connected to the CPU. Timing chips are available to support SYNCE and PTP for class C performance.Port Assigment to ASIC coreThe port mapping is pretty simple for this platform. As we have single NPU and single core all the interfaces will be mapped to a single core on the NPU. The important thing to highlight from the above output is the default speed of the interfaces when the platform boots up. As you can see ports 0/2/4 and 5 comes up as 400G whereas ports 1 and 3 comes up as 100G. So we need CLI command to change the speed of these ports. Ports 6 to 21 will come up as 50 gig and the rest 32 ports will come up as 25G when platform the first boots up. In these ports we do not need any CLI command to change the speed. It will automatically detect the optics and accordingly bring up the port.MDB profileNCS57C1 will also support the MDB profiles. At FCS we will support profile L3MAX and in upcoming releases we will support L2MAX along with L3MAX. The platform will not support the scale MDB profiles as we do not have eTCAM in this platform. Below is the MDB profile output on the platformRP/0/RP0/CPU0#ios#show controllers fia diagshell 0 ~mdb info~ location 0/RP0/CPU0Node ID# 0/RP0/CPU0R/S/I# 0/0/0 =============================| MDB Profile || MDB profile# l3max || MDB profile KAPS cfg# 2 |=============================MACSECAs mentioned earlier ports 0/2/4 will be supporting MACSEC through PHY. Below table summarises different modes and combinations which can be used for MACSEC support. Port Speed/Breakout Mode MACSEC Support Native 400G Yes Native 100G Yes Native 40G Yes 400G/4x100G Yes 400G/2x100G Yes 100G/4x25G Yes 40G/4x10G Yes Note# Ports 1,3,5 and 6-53 does not support MACSEC.TIMING From timing perspective, the platform is capable of supporting class C timing SyncE and PTP is also supported on the breakout options. 1G mode support Class A. AT FCS we will be supporting all the PTP profiles with IPv4. With IPv6 we will support in future releases. 
BITS is not supported at FCS.Power ConsumptionBelow is the power consumption of the router   NCS-57C1-48Q6D-S Power without Optics Typical#340W (25C) Max# 488W (40C) Power with Optics Typical#550W (25C) Max#690W (40C) Comments 48x SFP-25G-SR, 3x QDD-400G-ZRP-S, 3x QDD-400G-DR4-S ReferencesFor other recently launched products and understanding their architecture please check the below# Introducing the NCS57C3 Routers Introducing the NCS57B1 Routers", "url": "/tutorials/introducing-ncs-57c1-48q6d-s-router/", "author": "Tejas Lad", "tags": "iosxr, cisco, NCS5700, NCS5500" } , "tutorials-qos-enhancements-ncs5700-ios-xr-7-6-1": { "title": "QoS Enhancements on NCS5700 - IOS-XR 7.6.1", "content": " On This Page Introduction ETM Video on Youtube Quick Recap of VoQ Model Egress Traffic Manager (ETM) Architecture and Data Path VoQs with ETM Life of a Packet with ETM ETM Configuration Enabling ETM Defining ETM policy classification Actions in the policy-map Attaching Policy to Interface ETM Related Facts ETM and Queuing Scale New QoS functionality with ETM ETM vs throughput & latency ETM and shaper granularity ETM with Bundle Conclusion Revision History#v1# Updated as of IOS XR 7.6.1IntroductionAs explained in our previous articles, the queuing model on NCS 5500 is Virtual output Queues (VoQ) based and it happens on the ingress NPU in the packet path. With IOS XR 7.6.1, there is a new queuing mode intordocued on NCS 5700 system where queuing is done on the NPU where the egress port belong. This improves the overall system scale in terms of QoS scale by restricting VoQ distribution and also allows better flexibility in terms of QoS functionality. This feature is applicable to NCS 5700 system with external TCAM.This new mode is called Egress Traffic Manager (ETM), and can be enabled on port basis while non ETM port behaves the previous way. This article will cover in depth explanation on the implementation and configuration aspects of the newly introduced ETM mode for QoS.ETM Video on Youtubehttps#//youtu.be/Jfa6eWvyOlwQuick Recap of VoQ ModelAs per the below diagram, The NCS 5500 (or NCS 5700) system, there are 8 VoQs per attachment point. However, they are present at the ingress pipleine of the data path. Now, for a particular egress port/interface, traffic may ingess at any other port in the system. Therefore, the VoQ for the particular egress port is replicated on each ingress pipeline (NPU/LC) present in the system. The packets are forwarded to the egress port with exchange of credit messege from egress to ingress VoQ schedulars. Egress Traffic Manager (ETM) Architecture and Data PathVoQs with ETMThe new ETM mode, when enabled, restricts the replication of VoQ across the system. Rather packets are queued only on the egress NPU i.e VoQ on the ingress pipeline of the egress NPU. For non ETM port, the previous architercure holds good. For the ETM enabled ports, queues are created only on the local NPU. The recycle port VoQ on each NPU is replicated across the system where packets are queued first for ETM enabled ports. Therefore the VoQ replication with ETM can happen three ways, ETM enabled port VoQ replicated only on the local NPU non ETM port VoQs replicated across the system NPU recycle port VoQs replicated across the systemLife of a Packet with ETMThe following diagram explains the data path for packets destined to a port enabled with ETM. In a modular system it can be briefly explained as six step process. 
when packet enters the ingress interface the lookup at ingress points to remote RCY port Packet forwarding to destination NPU (If Queuing needed it is on RCY port VoQ) Packet reached to destination NPU egress pipeline (RCY port) Packet is recycled back to ingress pipeline of destination NPU. Lookup at this stage points to actual port Packet is forwarded to Egress Pipeline (actual port). Queuing at this stage is done on the actual port VoQ Packet Goes out of egress PortETM ConfigurationEnabling and configuring QoS policies in ETM mode is a three step process. Enabling ETM on the port Defining ETM Policy map Applying policy to intrefaceEnabling ETMETM needs to be enabled on the main port using controller optics configuration. Once enabled it erases the existing interface configuration and the same is shown as a warning during the configuration process. In case of breakout, ETM needs to be enabled under the controller optics for the newly created ports.RP/0/RP0/CPU0#NC57B1-57-II-5#configure terminal RP/0/RP0/CPU0#NC57B1-57-II-5(config)#controller optics 0/0/0/0 RP/0/RP0/CPU0#NC57B1-57-II-5(config-Optics)#mode etm Wed May 18 09#05#48.532 UTC!! Warning ! This will remove the existing interface configurationRP/0/RP0/CPU0#NC57B1-57-II-5(config-Optics)#commit controller Optics0/0/0/0 mode etm!Once ETM is enabled, we can verify the same by checking the VoQ allocation. As we can see in the below output, there are two VoQ bases allotted to the ETM enabled port. the first one corresponds to the VoQ base (1792) for the egress port whereas the second base corresponds to VoQ base (1024) of the recycle port where packets will be queued in the first passRP/0/RP0/CPU0#NC57B1-57-II-5#show controllers npu voq-usage interface all instance all location all Wed May 18 09#07#39.531 UTC-------------------------------------------------------------------Node ID# 0/RP0/CPU0Intf Intf NPU NPU PP Sys VOQ Flow VOQ Port name handle # core Port Port base base port speed (hex) type ----------------------------------------------------------------------Hu0/0/0/0 3c000048 0 0 9 521 1792 6912 local 100GHu0/0/0/0 3c000048 0 0 156 9 1024 6160 local 100GHu0/0/0/1 3c000058 0 0 11 11 1072 6192 local 100GHu0/0/0/2 3c000068 0 0 13 13 1080 6208 local 100GHu0/0/0/3 3c000078 0 0 15 15 1088 6224 local 100GHu0/0/0/4 3c000088 0 0 17 17 1096 6240 local 100GHu0/0/0/5 3c000098 0 0 19 19 1104 6256 local 100GDefining ETM policyQoS policy Map for ETM ports is just like a regular policy with few exceptions.classificationIdeally, queuing policy uses traffic-class for classifying traffic into different queues, these traffic-class values are set on the ingress. With ETM, since we get the feature rich ingress pipeline again, the classification can be done just like an ingress policy map. i.e we can classify and queue based on QoS fields present in the packet header. L2 # cos, dei L3 # precedence/dscp/ACLs/fragments MPLS# EXPThere is an way to match based on traffic-class as well which need a special hw-mdoule CLI needs to be enabled.hw-module profile qos ipv6 short-etm The unmatched traffic class in this case goes to class-default.Actions in the policy-mapFor and ETM policy-map we can have queing actions like shaping, queue-limit, priority, BWR for WFQ and RED/WRED. Upto 4 priority levels are supported in an ETM policy. There is no support for policing and bandwidth command.There must be a marking action with “set traffic class “ on each user defined class apart from class default. This is to choose the VoQ where the traffic will be queued. 
for class default TC value is 0, rest of the class can be allotted TC values between 1-7.class-map match-any prec4 match precedence 4 end-class-map! class-map match-any prec5 match precedence 5 end-class-map! class-map match-any prec6 match precedence 6 end-class-map! !policy-map etm-policy class prec6 shape average percent 2 priority level 1 set traffic-class 6 ! class prec5 shape average percent 38 priority level 2 set traffic-class 5 ! class prec4 bandwidth remaining percent 65 set traffic-class 4 ! class class-default bandwidth remaining percent 35 ! end-policy-map! Attaching Policy to InterfaceAn ETM policy can be applied to the main or the subinterface of the port enabled with ETM mode. Unlike previous releases with normal mode, we don’t need to enable hw-module profile qos hqos-enable to add policy on subinterafce for ETM ports. In fact both ETM & hqos mode can’t coexist together in the system.interface HundredGigE0/0/0/0.1 service-policy output etm-policy vrf test ipv4 address 57.1.0.1 255.255.255.0 encapsulation dot1q 1!interface HundredGigE0/0/0/0.2 l2transport encapsulation dot1q 2 service-policy output etm-policyETM Related FactsETM and Queuing Scalewhen ETM is enabled VoQ resources across the system is saved as there is no need to replicate the same across the system. Thus, queuing scale increases signicantly for the system.New QoS functionality with ETMETM makes the feature rich ingress pipeline available for the egress QoS function. Thus we are able to do classification based on parameters like cos/dscp/exp for egress. This adds support for QoS short pipe mode.with ETM, multicast traffic is also scheduled and can be shaped along with unicast which is not the case for normal/non-ETM mode.ETM vs throughput & latencyETM involves two pass where packet is recylced back on the egress NPU. This reduces the NPU level throughput and it may go down to 50% when ETM is enabled on all the ports.The second pass will also add few microseconds of added latency for the traffic destined towards an ETM enabled port.ETM and shaper granularityby default, shaper granularity on NCS 5700 system is ~4 mbps. With ETM, there is a low speed mode where more granular shaper ~122 kbps can be configured. This low rate mode is activated when any shaper present in the policy-map is less than 5 Mbps. The below outpit shows programming of a low rate shaper and a normal shaper.RP/0/RP0/CPU0#NC57B1-57-II-5#show qos int hundredGigE 0/0/0/0.3 output NOTE#- Configured values are displayed within parenthesesInterface HundredGigE0/0/0/0.3 ifh 0x3c00800a -- output policyNPU Id# 0Total number of classes# 1Interface Bandwidth# 100000000 kbpsPolicy Name# 4mbpsSPI Id# 0x0VOQ Base# 1816PFC enabled# 0Accounting Type# Layer1 (Include Layer 1 encapsulation and above)------------------------------------------------------------------------------Level1 Class = class-defaultEgressq Queue ID = 1816 (Default LP queue)Queue Max. BW. = <mark?4028 kbps (4 mbits/sec)</mark>Queue Min. BW. 
= 0 kbps (default)Inverse Weight / Weight = 1 / (BWR not configured)Guaranteed service rate = 4000 kbpsPeak burst = 32832 bytes (default)TailDrop Threshold = 4864 bytes / 10 ms (default)LOW SHAPER = EnabledWRED not configured for this classRP/0/RP0/CPU0#NC57B1-57-II-5#show qos int hundredGigE 0/0/0/0.4 output NOTE#- Configured values are displayed within parenthesesInterface HundredGigE0/0/0/0.4 ifh 0x3c008012 -- output policyNPU Id# 0Total number of classes# 1Interface Bandwidth# 100000000 kbpsPolicy Name# 5mbpsSPI Id# 0x0VOQ Base# 1824PFC enabled# 0Accounting Type# Layer1 (Include Layer 1 encapsulation and above)------------------------------------------------------------------------------Level1 Class = class-defaultEgressq Queue ID = 1824 (Default LP queue)Queue Max. BW. = 7812 kbps (5 mbits/sec)Queue Min. BW. = 0 kbps (default)Inverse Weight / Weight = 1 / (BWR not configured)Guaranteed service rate = 5000 kbpsPeak burst = 36864 bytes (default)TailDrop Threshold = 6144 bytes / 10 ms (default)WRED not configured for this classETM with BundleOn the NCS 5700 system, Egress policy on bundle is replicated per member interface. Therefore, all members of a bundle has to be either ETM or non ETM. we can’t have bundle with mix of ETM and non ETM ports.ConclusionSo with IOS-XR 7.6.1, we brought this in new enhancements in QoS segment, as a part of our continous improvement and innovations. ETM as a function will adress QoS scalability and functionality in the NCS 5700 scaled system. Each release we keep on enhancing our sofwtare and hardware capabilities. These enhancements help strengthen our portfolio and helps in catering customer requirements. Stay tuned for new updates in future releases !!!", "url": "/tutorials/qos-enhancements-ncs5700-ios-xr-7-6-1/", "author": "Paban Sarma", "tags": "iosxr, cisco, NCS 5700" } , "#": {} , "tutorials-srv6-transport-on-ncs-part-1": { "title": "Segment Routing v6 (SRv6) Transport on NCS5500/NCS500 Platforms - Part 1", "content": " Table of Contents Introduction Brief Background SRv6 Terminology Reference Topology Configuration Steps Configuring and Verifying ISIS for reachability Enabling SRv6 over IGP Platform hw-module profile Configuring SRv6 locators Enabling SRv6 over ISIS Verification of SRv6 transport Additional Configuration Step # Enabling and Verify TI-LFA Summary Paban Sarma, Technical Marketing Engineer (pasarma@cisco.com) Tejas Lad, Technical Marketing Engineer (telad@cisco.com) IntroductionThis is the first tutorial of the series focusing on SRv6 transport. In this document we will focus on SRv6 basics and understand how to bring up SRv6 transport on Cisco NCS 500 and NCS 5500 platforms. In the subsequent tutorials, we will cover more topics related to SRv6 transport showing implementation of layer2 and layer3 services, QoS behaviour, Traffic-Engineering etc.Brief BackgroundThe Service Provider transport network has evolved to provide converged transport for various 5G applications and use cases. As we already know, Segment Routing (SR) brings in programmability to the transport network using the MPLS data plane, whereas Segment Routing v6 (SRv6) uses IPv6 instead of MPLS. This offers a simpler way to build a programmable transport where we can easily do network slicing and traffic engineering for various services. This advanced series of tutorials will demonstrate new SRv6 transport on the IOS XR based NCS 5500/500 routers and explain how to configure and implement various overlay services like L3VPN/BGP-EVPN based E-Line service. 
In this document series, we will be using SRv6 Transport with ISIS as the IGP and provision end-to-end L3/L2 services over the transport.Note# This document is to familiarise users with the technology. The configurations and lab setup can be taken as a reference, and this by no means represent a real production network.SRv6 TerminologyBefore starting with the topology and the configurations, let us brush up some of the important terminologies w.r.t SRv6. In this series, we will use the SRv6 uSID implementation. The SRv6 micro-segment (uSID) is an extension of the SRv6 architecture. It leverages the SRv6 Network Programming architecture to encode several SRv6 Micro-SID (uSID) instructions within a single 128-bit SID address. (In SRv6, a SID represents a 128-bit value) Such a SID address is called a uSID Carrier. For further information on SRv6 usid please visit. SID components Details Locator This is the first part of the SID with most significant bits and represents an address of a specific SRv6 node Function This is the portion of the SID that is local to the owner node and designates a specific SRv6 function (network instruction) that is executed locally on a particular node, specified by the locator bits. Args This field is optional and represents optional arguments to the function. The locator part can be further divided into two parts# Locator Components Details SID Block This field is the SRv6 network designator and is a fixed or known address space for an SRv6 domain. This is the most significant bit (MSB) portion of a locator subnet. Node Id This field is the node designator in an SRv6 network and is the least significant bit (LSB) portion of a locator subnet. For understanding the technology in details and the latest enhancements, please visit the following pageReference TopologyThe topology used is a simple four node network comprising of Cisco NCS 540 and NCS 5500 series platforms. There are two CE nodes connected to PE1 and PE4 respectively to simulate customer networks. Details of each node along with Loopback IPs are mentioned in the below table. Nodes Device Type Software Version Loopback0 PE1 NCS 540 IOS XR 7.5.2 fcbb#bb00#1##1/128 P2 NCS 5500 IOS XR 7.5.2 fcbb#bb00#2##1/128 P3 NCS 5500 IOS XR 7.5.2 fcbb#bb00#3##1/128 PE4 NCS 5500 IOS XR 7.5.2 fcbb#bb00#4##1/128 The loopback0 IP are choosen as per the SRv6 addressing best practice (check out segment-routing.net for more details).Configuration StepsTo bring up SRv6 transport, the very first task to be performed is to make the underlay IGP ready. We will be using ISIS as the underlay IGP protocol to bring up IPv6 connecticity across the nodes. Once, the network is ready with IGP, there are three steps to enable SRv6, i.e. enabling platform hw-module profile configuring SRv6 locator enabling SRv6 over ISISConfiguring and Verifying ISIS for reachabilityISIS is used as IGP for the sample topology. The following table summarizes the ISIS NET and member interfaces for all the nodes Node net id member interfaces PE1 49.0000.0000.0001.00 BE 12, BE 13 P2 49.0000.0000.0002.00 BE 12, BE 23, BE 24 P3 49.0000.0000.0003.00 BE 13, BE 23, BE 34 PE4 49.0000.0000.0004.00 BE 24, BE 34 The following snippet is for configuration on router PE1. Similarly configure the IGP ISIS on all the other routers P2, P3 and PE4 and enable all the respective interfaces.router isis 1 is-type level-2-only net 49.0000.0000.0001.00 address-family ipv6 unicast metric-style wide ! interface Bundle-Ether12 point-to-point address-family ipv6 unicast !! 
interface Bundle-Ether13 point-to-point address-family ipv6 unicast !! interface Loopback0 address-family ipv6 unicast ! !!Once all nodes are configured, we can verify the IPv6 routes learned via ISIS and the reachability from one node to another over ISIS.RP/0/RP0/CPU0#LABSP-3393-PE1#sh route ipv6Codes# C - connected, S - static, R - RIP, B - BGP, (>) - Diversion path D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2 E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP i - ISIS, L1 - IS-IS level-1, L2 - IS-IS level-2 ia - IS-IS inter area, su - IS-IS summary null, * - candidate default U - per-user static route, o - ODR, L - local, G - DAGR, l - LISP A - access/subscriber, a - Application route M - mobile route, r - RPL, t - Traffic Engineering, (!) - FRR Backup pathGateway of last resort is not setL fcbb#bb00#1##1/128 is directly connected, 10w0d, Loopback0i L2 fcbb#bb00#2##1/128 [115/20] via fe80##28a#96ff#fe2d#18dd, 00#01#04, Bundle-Ether12i L2 fcbb#bb00#3##1/128 [115/20] via fe80##28a#96ff#fe2c#58dd, 00#01#04, Bundle-Ether13i L2 fcbb#bb00#4##1/128 [115/30] via fe80##28a#96ff#fe2d#18dd, 00#01#04, Bundle-Ether12 [115/30] via fe80##28a#96ff#fe2c#58dd, 00#01#04, Bundle-Ether13C 2001#0#0#12##/64 is directly connected, 10w0d, Bundle-Ether12L 2001#0#0#12##1/128 is directly connected, 10w0d, Bundle-Ether12C 2001#0#0#13##/64 is directly connected, 10w0d, Bundle-Ether13L 2001#0#0#13##1/128 is directly connected, 10w0d, Bundle-Ether13i L2 2001#0#0#23##/64 [115/20] via fe80##28a#96ff#fe2d#18dd, 00#01#04, Bundle-Ether12 [115/20] via fe80##28a#96ff#fe2c#58dd, 00#01#04, Bundle-Ether13i L2 2001#0#0#24##/64 [115/20] via fe80##28a#96ff#fe2d#18dd, 00#01#04, Bundle-Ether12i L2 2001#0#0#34##/64 [115/20] via fe80##28a#96ff#fe2c#58dd, 00#01#04, Bundle-Ether13i L2 fcbb#bb00#2##/48 [115/11] via fe80##28a#96ff#fe2d#18dd, 00#01#04, Bundle-Ether12i L2 fcbb#bb00#3##/48 [115/11] via fe80##28a#96ff#fe2c#58dd, 00#01#04, Bundle-Ether13i L2 fcbb#bb00#4##/48 [115/21] via fe80##28a#96ff#fe2d#18dd, 00#01#04, Bundle-Ether12 [115/21] via fe80##28a#96ff#fe2c#58dd, 00#01#04, Bundle-Ether13 RP/0/RP0/CPU0#LABSP-3393-PE1#ping fcbb#bb00#4##1 source loopback 0Type escape sequence to abort.Sending 5, 100-byte ICMP Echos to fcbb#bb00#4##1, timeout is 2 seconds#!!!!!Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 msEnabling SRv6 over IGPPlatform hw-module profileTo start with, we will configure the hw-module profile on all the routers. RP/0/RP0/CPU0#PE1#show running-config | in hw-module Building configuration...hw-module profile segment-routing srv6 mode micro-segment format f3216Note# Configure the same on all the routers.The reason behind configuring the above hw-module profile is to explicitly enable data plane in NCS500 and NCS5500 for SRv6 with specific mode i.e. either SRv6 Base or SRv6 micro-segment (uSID). We are using uSID based SRv6 transport and this is done by configuring the hardware module profiles#“hw-module profile segment-routing srv6 mode micro-segment format f3216”. The hw-module profile is self-explanatory, we have enabled segment routing v6 (srv6) and the mode used is micro-segment. (another mode for SRv6 is the base mode). The uSID carrier format used is f3216, i.e 32 bit block size and 16bit uSID. 
Thus a single DA can carry upto 6 micro-instructions or uSID.Note# This hardware-module profile configuration needs reload of the router.Configuring SRv6 locatorsAs discussed above, we need to configure f3216 format locator. So for each node we will configure /48 locator of which first 32 bits will be SID block and remaining 16 bits will be Node ID. PE1 P2 P3 PE4 segment-routing srv6 encapsulation source-address fcbb#bb00#1##1 ! locators locator POD0 micro-segment behavior unode psp-usd prefix fcbb#bb00#1##/48 ! !! segment-routing srv6 encapsulation source-address fcbb#bb00#2##1 ! locators locator POD0 micro-segment behavior unode psp-usd prefix fcbb#bb00#2##/48 ! !! segment-routing srv6 encapsulation source-address fcbb#bb00#3##1 ! locators locator POD0 micro-segment behavior unode psp-usd prefix fcbb#bb00#3##/48 ! !! segment-routing srv6 encapsulation source-address fcbb#bb00#4##1 ! locators locator POD0 micro-segment behavior unode psp-usd prefix fcbb#bb00#4##/48 ! !! Enabling SRv6 over ISISrouter isis 1 address-family ipv6 unicast segment-routing srv6 locator POD0 ! ! !!Configure the above on all the routers and Thats it !!!!! You are done with the SRv6 Underlay TransportVerification of SRv6 transportYou can use below verification commands to check the SRv6 transport.RP/0/RP0/CPU0#PE1#show segment-routing srv6 locator Name ID Algo Prefix Status Flags -------------------- ------- ---- ------------------------ ------- --------POD0 2 0 fcbb#bb00#1##/48 Up U RP/0/RP0/CPU0#PE1#show segment-routing srv6 locator POD0 detail Name ID Algo Prefix Status Flags -------------------- ------- ---- ------------------------ ------- --------POD0 2 0 fcbb#bb00#1##/48 Up U (U)# Micro-segment (behavior# uN (PSP/USD)) Interface# Name# srv6-POD0 IFH # 0x2000800c IPv6 address# fcbb#bb00#1##/48 Number of SIDs# 4 Created# Apr 21 08#34#45.886 (1w1d ago)RP/0/RP0/CPU0#PE1#show isis segment-routing srv6 locators detail IS-IS 1 SRv6 LocatorsName ID Algo Prefix Status------ ---- ---- ------ ------POD0 3 0 fcbb#bb00#1##/48 Active SID behavior# uN (PSP/USD) SID value# fcbb#bb00#1## Block Length# 32, Node Length# 16, Func Length# 0, Args Length# 0RP/0/RP0/CPU0#PE1#show segment-routing srv6 sid *** Locator# 'POD0' *** SID Behavior Context Owner State RW-------------------------- ---------------- ------------------------------ ------------------ ----- --fcbb#bb00#1## uN (PSP/USD) 'default'#1 sidmgr InUse Y fcbb#bb00#1#e001## uA (PSP/USD) [BE12, Link-Local]#0 isis-1 InUse Y fcbb#bb00#1#e002## uA (PSP/USD) [BE13, Link-Local]#0 isis-1 InUse Y The ipv6 route table will also be updated with the locators from the other nodes and can be verified using the routing table. 
RP/0/RP0/CPU0#LABSP-3393-PE1#show route ipv6 isis i L2 2001#0#0#23##/64 [115/20] via fe80##28a#96ff#fe2c#58dd, 00#10#31, Bundle-Ether13 [115/20] via fe80##28a#96ff#fe2d#18dd, 00#10#31, Bundle-Ether12i L2 2001#0#0#24##/64 [115/20] via fe80##28a#96ff#fe2d#18dd, 00#11#12, Bundle-Ether12i L2 2001#0#0#34##/64 [115/20] via fe80##28a#96ff#fe2c#58dd, 00#10#31, Bundle-Ether13i L2 fcbb#bb00#2##/48 [115/11] via fe80##28a#96ff#fe2d#18dd, 00#11#12, Bundle-Ether12i L2 fcbb#bb00#2##1/128 [115/20] via fe80##28a#96ff#fe2d#18dd, 00#11#12, Bundle-Ether12i L2 fcbb#bb00#3##/48 [115/11] via fe80##28a#96ff#fe2c#58dd, 00#10#31, Bundle-Ether13i L2 fcbb#bb00#3##1/128 [115/20] via fe80##28a#96ff#fe2c#58dd, 00#10#31, Bundle-Ether13i L2 fcbb#bb00#4##/48 [115/21] via fe80##28a#96ff#fe2c#58dd, 00#10#00, Bundle-Ether13 [115/21] via fe80##28a#96ff#fe2d#18dd, 00#10#00, Bundle-Ether12i L2 fcbb#bb00#4##1/128 [115/30] via fe80##28a#96ff#fe2c#58dd, 00#10#00, Bundle-Ether13 [115/30] via fe80##28a#96ff#fe2d#18dd, 00#10#00, Bundle-Ether12Additional Configuration Step # Enabling and Verify TI-LFATopology Independent Loop Free Alternate (TI-LFA) is a method of fast convergence in SR networks; the principles are identical for SR MPLS and SRv6. The backup path has to be always pre-programmed on the router which detects failure. The backup path always has to be Loop-Free. The IGP protocol has detailed knowledge about entire topology, so the IGP is always able to calculate where a packet has to be sent in case of a particular failure (without any convergence in the network). In this step we will strengthen the SRv6 transport built by enabling TI-LFA on each node. The following snippet is for the additional configuration done on PE1, the same needs to be done on each node for each IGP member link.router isis 1 interface Bundle-Ether12 address-family ipv6 unicast fast-reroute per-prefix fast-reroute per-prefix ti-lfa ! ! interface Bundle-Ether13 address-family ipv6 unicast fast-reroute per-prefix fast-reroute per-prefix ti-lfa ! !!Now, when we verify the ipv6 routing entries, the pre-programmed backup paths will be seen. The following output shows the FRR backup paths for the respective prefix.RP/0/RP0/CPU0#LABSP-3393-PE1#show route ipv6Codes# C - connected, S - static, R - RIP, B - BGP, (>) - Diversion path D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2 E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP i - ISIS, L1 - IS-IS level-1, L2 - IS-IS level-2 ia - IS-IS inter area, su - IS-IS summary null, * - candidate default U - per-user static route, o - ODR, L - local, G - DAGR, l - LISP A - access/subscriber, a - Application route M - mobile route, r - RPL, t - Traffic Engineering, (!) - FRR Backup pathGateway of last resort is not seti L2 fcbb#bb00#2##/48 [115/11] via fe80##28a#96ff#fe2d#18dd, 00#15#57, Bundle-Ether12 [115/21] via fe80##28a#96ff#fe2c#58dd, 00#15#57, Bundle-Ether13 (!)i L2 fcbb#bb00#2##1/128 [115/20] via fe80##28a#96ff#fe2d#18dd, 00#15#57, Bundle-Ether12 [115/30] via fe80##28a#96ff#fe2c#58dd, 00#15#57, Bundle-Ether13 (!)i L2 fcbb#bb00#3##/48 [115/21] via fe80##28a#96ff#fe2d#18dd, 00#15#15, Bundle-Ether12 (!) [115/11] via fe80##28a#96ff#fe2c#58dd, 00#15#15, Bundle-Ether13i L2 fcbb#bb00#3##1/128 [115/30] via fe80##28a#96ff#fe2d#18dd, 00#15#15, Bundle-Ether12 (!) [115/20] via fe80##28a#96ff#fe2c#58dd, 00#15#15, Bundle-Ether13Note# Above output is truncketed for brevity.SummaryThis concludes the Part 1 of this tutorial series. 
Stay tuned for the next article.", "url": "/tutorials/srv6-transport-on-ncs-part-1/", "author": "Tejas Lad", "tags": "iosxr, NCS5500, SRv6, Segment Routing v6, NCS500, NCS5700" } , "tutorials-mdb-ncs5700": { "title": "Understanding MDB (Modular Databases) on NCS5700 systems", "content": "PrefaceIn Today’s Network deployments we position our routers in various roles pertaining to different use cases. And it comes with it’s own interesting set of features and importantly different scale requirements.So how to deploy the same product addressing these different scale requirements?We bring in that flexibility in the form of MDB (Modular Database) profiles in our NCS5700 Fixed systems and NC57 Linecards on the modular NCS5500 router operating in Native mode.Introduction to MDBIn the NCS5500/NCS5700 routers we have various on-chip databases for different purposes like storing the IP prefix table, MAC information, Labels, Access-list rules and many more.On the scale variant we have an external-TCAM (OptimusPrime2 in NCS5700 family) to offload storage of prefixes, ACL rules, QOS, etc for a higher scale compared to the base systems.We also have the on-chip buffers and off-chip HBM buffers used for queuing the packets. They are out of the scope for the MDB discussion.Our focus of discussion will be on the memory databases available on the Jericho2 family of ASICS.Jericho On-Chip DatabasesA quick refresher of Jericho ASIC’s on-chip resources seen in the below table. This shows the applications/features mapped to the important databases in the ASIC.Table #1 Jericho on-chip databasesWe use different ASICS from the same ASIC family on our NCS5500 and NCS5700 routers which are Qumran-MX, Jericho, Jericho+ and the latest Jericho2, Jericho2C, Qumran2C and Qumran2A ASICS.Our latest platform in works will come with new Jericho2C+ ASIC from Broadcom.The on-chip databases mentioned in the table #1 are common across all these different ASICS. But the flexibility to carve resources for these databases are only supported with the NCS5700 systems or line-cards using Jericho2 family of ASICS (Jericho2, Jericho2C, Qumran2C and Qumran2A) NCS5500/5700 Products MDB supported Fixed Systems NCS-57B1-6D24H-S NCS-57B1-5D24H-SE NCS-57C3-MOD-S NCS-57C3-MOD-SE-S NCS-57C1-48Q6D-S NCS-57D2-18DD-S* Line cards Supported on NCS5504, NCS5508, NCS5516 (Native Mode only) NC57-18DD-SE NC57-24DD NC57-36H-SE NC57-36H6D-S NC57-MOD-S NC57-48Q2D-S* NC57-48Q2D-SE-S* Table #2 NCS5700 PIDs support for MDB (* PIDs in works for future releases)Let’s look into the forwarding ASIC layout. The blue boxes (which are circled) are some of the important on-chip memories which are used for storing information like IP prefixes, labels, MAC tables, next-hop information and more.While we have other components like buffers and blocks which are used for packets buffering and processing won’t be impacted by MDB carving.Picture #1 ASIC layoutIn Jericho2 based platforms we give user the flexibility to carve resources for the circled on-chip databases in picture #1 based on the MDB profiles which are configured during the system bootup.In the below pictorial representation (picture #2), we can see how the static carving of resources for on-chip databases have been made modular._Picture #2 MDB compatible Databases _On the left we have Jericho1 based systems where the database carving is always static which is now made modular in the Jericho2 based platforms. 
If we have a scaled system with external-TCAM , the on-chip resource carving for MDB is designed in way considering the feature’s usage of the resources in the OP2 external TCAM.Benefits of MDBIn today’s network deployments we position our routers in various usecases ranging from metro access, aggregation, core, Peering, DCI, Cloud Interconnect and more.Each usecase comes with it’s own interesting set of features and importantly the scale requirements.Like in the Peering use case, where we position our high-density aggregation devices in the edge of the network with high eBGP sessions scale for metro, Datacenters & Internet peering. We need features rich in Layer3 like high-capacity Routing scale, security features VRFs, ACL, FS and more.While for Business Aggregation use cases based on carrier-ethernet deployments are Layer2 heavy with requirements of higher MAC scale, L2VPN or Bridge-Domain scalePicture #3 Network deployment use casesSo, it’s obvious that the requirements are not same for these different use cases.Rather than just having a fixed profile why not give users some flexibility in carving resources to the databases which fits for their scale requirements. That flexibility is available with the MDB feature!Path to MDBDuring the initial release of NCS5700 platform, we started with shipping NCS57 based line cards on the NCS5500 system running in Compatible mode along with previous generation line cards based on Jericho1. We had default system profile to tune the scale and restrict the scale of system resources based on Jericho1 scale parameters.Then in the subsequent release we started supporting native mode with all LCs on a modular NCS5500 being Jericho2(NCS57) for higher scale than the compatible mode. We supported both base and scale variants of Jericho2 LCs with custom scale profiles.And in the next release(IOS-XR 7.3.1) we had the MDB infra developed in the IOS-XR software and introduced default profiles with higher scale. They were balanced (for base systems) and balanced-SE (for scale systems). And we made these as default profiles on the new SoC (system on chip) routers which were released in XR 7.3.1.Please note the balanced/balanced-SE profiles were enabled by default and were not user configurablePicture #4 Default MDB Profile - 7.3.1Above picture #4 depicts the behavior during 7.3.1 release time on J2 based modular and fixed systems operating in native mode.In XR 7.4.1, the balanced profiles were reincarnated as L3MAX and L3MAX-SE profiles with better scale optimizations. On NCS5500 modular systems on native mode, default was L3MAX and we introduced a “hw-module profile mdb l3max-se” to enable the L3MAX-SE if all the line cards on the system are scale (-SE) cards. 
https#//www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/system-setup/76x/b-system-setup-cg-ncs5500-76x/m-system-hw-profiles.htmlOn the SoC Jericho2 boxes, based on the availability of eTCAM we enable the right profile (base or SE) and they always operate in native mode.Picture #5 Modular ncs5500# MDB modes during 7.4.1Picture #6 SoC systems default modesPicture #7 Q2C based fixed system default mode in 7.5.2Then in release XR 7.6.1 we came up with layer-2 feature centric L2MAX and L2MAX-SE profiles for the base and scale variants of NCS5700 routers and line-cards.The default mode of operation will be L3MAX (-SE) and if a user wishes to do L2 rich resource carving they are provided options to configure the L2MAX(-SE) profiles.Please note, these MDB profiles are supported on our systems with J2, J2C, Q2C & Q2A ASICS.Also, MDB will be supported on the J2C+ based system being planned for XR 7.8.1In latest releases (7.6.1 onwards) all the SoC and modular systems (in native mode) supports all 4 MDB profiles No MDB Systems config external TCAM presense 1 L3MAX Base LCs & Fixed Systems No external TCAM 2 L3MAX-SE Scale LCs & Fixed Systems With external TCAM 3 L2MAX Base LCs & Fixed Systems No external TCAM 4 L2MAX-SE Scale LCs & Fixed Systems With external TCAM Table #3 MDB profilesMDB on Modular systemsThis is a bit tricky. By default, the NCS5500 systems boots up in compatible mode.We will need explicit “hw-module profile npu native-mode-enable” configuration to enable native mode provided we have all the line-cards being nc57 based.With native mode configured (post reload,) by default system will operate in L3MAX mode in latest releases (7.4.1 and beyond).Users are given options to configure any of the MDB profiles based on their card configurations and requirements. (where L2 profiles are included in 7.6.1)Step by Step transition towards NCS5700 with MDB#Let’s start with NCS5500 system having mix of Jericho1 and Jericho2 LCs. It will by default operate in compatible mode having a default scale profile. First step is to remove Jericho/Jericho+ LCs and convert it into Jericho2 only system. Next step is to configure “native mode” and reload the system for the mode to take effect and the MDB profile will set to L3max (This is the right profile if we have a mix of scale and base J2 LCs)Picture #9 Compatible to native mode migration In the above state #2 (with mixed scale and base J2 LCs), user can configure L2max profile based on their requirementsPicture #10 L2max MDB configuration If the user wants to operate only with the J2 scale line-cards to use the full potential of the scale and the extra features (ex# Flowspec) it offers, the l3max-se/l2max-se profiles can be enabled.If we have the base non-SE cards in the system, they won’t bootup as shown in below picturePicture #11 L3max-SE/L2max-SE MDB configurationWe can also club the native mode conversation and new MDB profile configuration in a single reload. Ex- Step #1 to #3 or #4 can be achieved with a single reload.MDB on Fixed systemsThis is straightforward. As the fixed NC5700 SoC systems always operate in native mode and comes up by default with L3max on base systems and L3max-SE on scale systems. Users are given options to configure L2max(-SE) profiles with a system reload. 
Platform Default Configurable Options(Recommended**) NCS57B1-24H6D L3MAX L2MAX NCS57B1-24HSE L3MAX-SE L2MAX-SE NCS57C3-MOD-S L3MAX L2MAX NCS57C3-MOD-SE-S L3MAX-SE L2MAX-SE NCS57C1-48Q6D-S L3MAX L2MAX N540-24Q8L2DD-SYS* L3MAX* L2MAX* Table #4 MDB options on fixed systems*N540-24Q8L2DD-SYS (Q2A based) only ncs540 system to support MDB at present.*Resource carving on Q2A is different from J2/J2C/Q2C based on resource availability** On SE systems base profiles can be configured but not recommended to use low scale profileConfiguration & VerificationConfigure Native Modehw-module profile npu native-mode-enableVerify J2 Native mode - Only for Modular (NCS5504/08/16)show hw-module profile npu-operating-modeNative ModeConfigure MDB Profilehw-module profile mdb l3max | l3max-se | l2max | l2max-seVerify MDB Profile show hw-module profile mdb-scale MDB scale profile# l3max-seVerify Resource Utilizationshow controllers npu resources lem location 0/LC/CPU0show controllers npu resources lpm location 0/LC/CPU0show controllers npu resources exttcamipv4 | exttcamipv6 location 0/LC/CPU0 (for -SE)show controllers npu resources fec location 0/LC/CPU0show controllers npu resources encap location 0/LC/CPU0show controllers npu external location 0/LC/CPU0 (for -SE)ConclusionWe conclude here understanding the flexibility of MDB in the NCS5700 systems which provides the user with options to choose the resource carving based on their requirements.Picture #12 Flexibility with MDB carvingAs depicted in Picture#12, based on the profile carving we get more on resources carved for specific databases to support to certain higher scale requirements. On a broader level with L2MAX profiles we get more resources for applications mapped to L2 features like MAC scale, L2VPN etc.While with L3MAX profiles we get higher resource carving for applications mapped to L3 features like routes, L3VPN etc.Also these MDB profiles goes through the continuous process of fine-tuning to adapt to new technology areas like SRv6 and to accommodate the new critical use cases.Please stay tuned for more updates!", "url": "/tutorials/mdb-ncs5700/", "author": "Deepak Balasubramanian", "tags": "iosxr" } , "tutorials-srv6-transport-on-ncs-part-2": { "title": "Implementing Layer3 VPN over SRv6 Transport on NCS 5500/500", "content": " Table of Contents Overview Topology Configuration & Verification for VPNv4 Configuring BGP Control Plane BGP configuration on PE1 BGP configuration on PE4 Configuring VRF and PE-CE links PE1 PE4 CE1 CE2 Configuring VRF under BGP Verification of VPNv4 Summary Paban Sarma, Technical Marketing Engineer (pasarma@cisco.com) Tejas Lad, Technical Marketing Engineer (telad@cisco.com) OverviewIn Previous Article, we discussed how to set up segment routing v6 (SRv6) transport on the NCS 500 and NCS 5500 platforms. In this article, we will explore setting up Layer3 vpn services over SRv6 transport.TopologyThe topology used is a simple four node network comprising of Cisco NCS 540 and NCS 5500 series platforms. There are two CE nodes connected to PE1 and PE4 respectively to simulate customer networks. Details of each node along with Loopback IPs are mentioned in the below table. 
Nodes Device Type Software Version Loopback0 PE1 NCS 540 IOS XR 7.5.2 fcbb#bb00#1##1/128 P2 NCS 5500 IOS XR 7.5.2 fcbb#bb00#2##1/128 P3 NCS 5500 IOS XR 7.5.2 fcbb#bb00#3##1/128 PE4 NCS 5500 IOS XR 7.5.2 fcbb#bb00#4##1/128 The loopback0 IPs are chosen as per the SRv6 addressing best practice (check out segment-routing.net for more details).In this tutorial, we will establish a L3VPN (VPNv4) connecting two subnets across CE1 and CE2.Configuration & Verification for VPNv4Configuring BGP Control PlaneAt first, we will set up the BGP control plane with VPNv4 address family between PE1 and PE4. We are directly peering between PE1 and PE4 in this example, however in a real network there can be route reflectors used for BGP to improve simplicity and scalability.BGP configuration on PE1router bgp 100 bgp router-id 1.1.1.1 address-family vpnv4 unicast ! neighbor fcbb#bb00#4##1 remote-as 100 update-source Loopback0 address-family vpnv4 unicast ! !!BGP configuration on PE4router bgp 100 bgp router-id 4.4.4.4 address-family vpnv4 unicast ! neighbor fcbb#bb00#1##1 remote-as 100 update-source Loopback0 address-family vpnv4 unicast ! !!We can verify the BGP neighbourship with VPNv4 AFI using show bgp vpnv4 unicast summaryRP/0/RP0/CPU0#LABSP-3393-PE1#show bgp vpnv4 unicast summary BGP router identifier 1.1.1.1, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0x0BGP main routing table version 23BGP NSR Initial initsync version 2 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsBGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 23 23 23 23 23 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcdfcbb#bb00#4##1 0 100 10 12 23 0 0 00#00#26 0Configuring VRF and PE-CE linksThe next step is to confgure the virtual forwarding instances (VRFs) on each PE. We are using vrf 1 on both PE1 and PE4. The following needs to be configured on both the nodes. Note that we are using the same route-targets import/export on both ends.vrf 1 address-family ipv4 unicast import route-target 100#1 ! export route-target 100#1 ! !!For simplicty, we will connect subnet 192.168.1.0/24 present in CE1 to 192.168.2.0/24 present in CE2. We are using a PE-CE subinterface (with VLAN 1) under VRF 1 on PE1 and PE4. On both of the CE nodes we are using static routing to reach gateway PE. For a scaled network, there can be eBGP or other routing protocols for exchange of route information between PE and CE which is out of scope for this tutorial. The respective configurations on the PE/CE nodes are listed below#PE1interface TenGigE0/0/0/0.1 vrf 1 ipv4 address 192.168.1.1 255.255.255.0 encapsulation dot1q 1PE4interface TenGigE0/0/0/0.1 vrf 1 ipv4 address 192.168.2.1 255.255.255.0 encapsulation dot1q 1CE1interface TenGigE0/0/0/0.1 ipv4 address 192.168.1.10 255.255.255.0 encapsulation dot1q 1!router static address-family ipv4 unicast 192.168.2.0/24 192.168.1.1 !!CE2interface TenGigE0/0/0/0.1 ipv4 address 192.168.2.10 255.255.255.0 encapsulation dot1q 1!router static address-family ipv4 unicast 192.168.1.0/24 192.168.2.1 !!Configuring VRF under BGPNow to establish the L3VPN, the final step is to advertise the VRF routes via BGP. This is established by configuring the VRF under BGP on each PE. For simplicity we are using auto rd under the VRF and redistributing the connected routes. 
For SRv6 we will specify the locator to be used and the label mode as per VRF.The following configurations needs to be added on both PE1 and PE4.router bgp 100 vrf 1 rd auto address-family ipv4 unicast segment-routing srv6 locator POD0 alloc mode per-vrf ! redistribute connected Verification of VPNv4The control plane for the layer3 VPN established can be verified using different CLI commands related to SRv6 SIDs and BGP. We can see the respective uSIDs (uDT4) on each PE for the VRF using show segment routing srv6 sidRP/0/RP0/CPU0#PE1#show segment-routing srv6 locator POD0 sidSID Behavior Context Owner State RW-------------------------- ---------------- ------------------------------ ------------------ ----- --fcbb#bb00#1## uN (PSP/USD) 'default'#1 sidmgr InUse Y fcbb#bb00#1#e004## uDT4 '1' bgp-100 InUse YRP/0/RP0/CPU0#PE4#show segment-routing srv6 locator POD0 sid SID Behavior Context Owner State RW-------------------------- ---------------- ------------------------------ ------------------ ----- --fcbb#bb00#4## uN (PSP/USD) 'default'#4 sidmgr InUse Y fcbb#bb00#4#e000## uDT4 '1' bgp-100 InUse Y The Prefix received from BGP can also be verified using some of the commands/outputs listed below#RP/0/RP0/CPU0#LABSP-3393-PE1#show bgp vpnv4 unicast summary BGP is operating in STANDALONE mode.Process RcvTblVer bRIB/RIB LabelVer ImportVer SendTblVer StandbyVerSpeaker 23 23 23 23 23 0Neighbor Spk AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down St/PfxRcdfcbb#bb00#4##1 0 100 12 14 23 0 0 00#02#08 1RP/0/RP0/CPU0#PE1#show bgp vpnv4 unicast received-sids Status codes# s suppressed, d damped, h history, * valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discardOrigin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Received SidRoute Distinguisher# 1.1.1.1#0 (default for vrf 1)*> 192.168.1.0/24 0.0.0.0 NO SRv6 Sid*>i192.168.2.0/24 fcbb#bb00#4##1 fcbb#bb00#4#e000##Route Distinguisher# 4.4.4.4#0*>i192.168.2.0/24 fcbb#bb00#4##1 fcbb#bb00#4#e000##Processed 3 prefixes, 3 pathsRP/0/RP0/CPU0#PE#show route vrf 1Gateway of last resort is not setC 192.168.1.0/24 is directly connected, 2w6d, TenGigE0/0/0/0.1L 192.168.1.1/32 is directly connected, 2w6d, TenGigE0/0/0/0.1B 192.168.2.0/24 [200/0] via fcbb#bb00#4##1 (nexthop in vrf default), 00#03#46The above outputs are taken from PE1 for reference. We can also verify the equivalent outputs on the other end (i.e PE4). 
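For example, on PE4 the equivalent checks (expected values inferred from the PE1 outputs above rather than captured verbatim) would be#

show bgp vpnv4 unicast received-sids (192.168.1.0/24 should show the received SID fcbb#bb00#1#e004## from PE1)
show route vrf 1 (192.168.1.0/24 should appear as a BGP route via fcbb#bb00#1##1)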
Now if we check the CEF entry in PE4, we can see that it points to the respective uDT4 SID on PE1.RP/0/RP0/CPU0#LABSP-3393-PE4#show cef vrf 1 192.168.1.0/24192.168.1.0/24, version 11, SRv6 Headend, internal 0x5000001 0x30 (ptr 0x8afe0198) [1], 0x0 (0x0), 0x0 (0x8bf261e8) Updated Sep 22 05#09#52.385 Prefix Len 24, traffic index 0, precedence n/a, priority 3 gateway array (0x8c49f0a8) reference count 1, flags 0x2010, source rib (7), 0 backups [1 type 3 flags 0x48441 (0x8a097128) ext 0x0 (0x0)] LW-LDI[type=0, refc=0, ptr=0x0, sh-ldi=0x0] gateway array update type-time 1 Sep 22 05#09#52.385 LDI Update time Sep 22 05#09#52.407 Level 1 - Load distribution# 0 [0] via fcbb#bb00#1##/128, recursive via fcbb#bb00#1##/128, 3 dependencies, recursive [flags 0x6000] path-idx 0 NHID 0x0 [0x8b091778 0x0] next hop VRF - 'default', table - 0xe0800000 next hop fcbb#bb00#1##/128 via fcbb#bb00#1##/48 SRv6 H.Encaps.Red SID-list {fcbb#bb00#1#e004##} Load distribution# 0 1 (refcount 1) Hash OK Interface Address 0 Y Bundle-Ether24 fe80##28a#96ff#fe2d#18db 1 Y Bundle-Ether34 fe80##28a#96ff#fe2c#58dbWe can also verify that the destination SID is a combination of the remote node SID (uN) and the received label using the below command#RP/0/RP0/CPU0#LABSP-3393-PE4#show bgp vrf 1 192.168.1.0/24BGP routing table entry for 192.168.1.0/24, Route Distinguisher# 4.4.4.4#0Versions# Process bRIB/RIB SendTblVer Speaker 21 21Last Modified# Sep 22 05#09#51.939 for 00#04#52Paths# (1 available, best #1) Not advertised to any peer Path #1# Received by speaker 0 Not advertised to any peer Local fcbb#bb00#1##1 (metric 30) from fcbb#bb00#1##1 (1.1.1.1) Received Label 0xe0040 Origin incomplete, metric 0, localpref 100, valid, internal, best, group-best, import-candidate, imported Received Path ID 0, Local Path ID 1, version 21 Extended community# RT#100#1 PSID-Type#L3, SubTLV Count#1 SubTLV# T#1(Sid information), Sid#fcbb#bb00#1##, Behavior#63, SS-TLV Count#1 SubSubTLV# T#1(Sid structure)# Source AFI# VPNv4 Unicast, Source VRF# default, Source Route Distinguisher# 1.1.1.1#0Finally, the data plane can be verified by simply pinging CE2 from CE1RP/0/RP0/CPU0#LABSP-3393-CE1#ping 192.168.2.10 repeat 10Thu Sep 1 09#14#37.319 UTCType escape sequence to abort.Sending 10, 100-byte ICMP Echos to 192.168.2.10, timeout is 2 seconds#!!!!!!!!!!Success rate is 100 percent (10/10), round-trip min/avg/max = 1/5/21 msSummaryThis concludes the tutorial on provisioning Layer3 VPN services over SRv6 transport on NCS 500 and 5500 platforms. We covered sample example of VPNv4 service. Similarly VPNv6 can also be configured (uDT6). Stay tuned for upcoming tutorial covering layer2 services over SRv6 transport.", "url": "/tutorials/srv6-transport-on-ncs-part-2/", "author": "Paban Sarma", "tags": "iosxr, cisco, SRv6, NCS 5500, NCS 500, NCS 5700" } , "tutorials-srv6-transport-on-ncs-part-3": { "title": "Implementing Layer2 VPN Over SRv6 Transport on NCS 5500/500", "content": " Table of Contents Topology Configuration Steps BGP Control Plane Configuring Layer2 Attachment Circuits Configuring EVPN and L2VPN Service Verifiation Steps Conclusion Paban Sarma, Technical Marketing Engineer (pasarma@cisco.com) Tejas Lad, Technical Marketing Engineer (telad@cisco.com) Overview Until now we covered setting up, SRv6 Transport and bringing up Layer3 VPN using that on NCS 5500 and NCS 500 platforms. 
In this tutorial, we will cover the impelementaion of EVPN based point-to-point (E-Line) L2 service (EVPN-VPWS) over SRv6.TopologyThe topology used is a simple four node network comprising of Cisco NCS 540 and NCS 5500 series platforms. There are two CE nodes connected to PE1 and PE4 respectively to simulate customer networks. Details of each node along with Loopback IPs are mentioned in the below table. Nodes Device Type Software Version Loopback0 PE1 NCS 540 IOS XR 7.5.2 fcbb#bb00#1##1/128 P2 NCS 5500 IOS XR 7.5.2 fcbb#bb00#2##1/128 P3 NCS 5500 IOS XR 7.5.2 fcbb#bb00#3##1/128 PE4 NCS 5500 IOS XR 7.5.2 fcbb#bb00#4##1/128 The loopback0 IPs are chosen as per the SRv6 addressing best practice (check out segment-routing.net for more details).In this tutorial, we will establish a L2VPN (EVPN-VPWS) connecting CE1 and CE2. the example will demonstrate VLAN based E-Line (EVPL) service and establish L2 stretch across CE1 and CE2 for VLAN 100.Configuration StepsEVPN based P2P service over SRv6 transport will involve 3 steps, viz. Establishing EVPN control plane over BGP Configuring l2transport between CE-PE links Configuring EVPN EVI and L2VPN ServiceBGP Control PlaneTraditional L2 services uses LDP for signalling, which is simplified by EVPN with the use of BGP for control plane operation. In our previous tutorial, we established BGP neighborship between PE1 and PE4 with VPNv4 AFI. Now we need to enable EVPN AFI over BGP. Below snippet shows full BGP configuration needed for layer2 service over SRv6.PE1router bgp 100 bgp router-id 1.1.1.1 address-family l2vpn evpn ! neighbor fcbb#bb00#4##1 remote-as 100 update-source Loopback0 address-family l2vpn evpn ! !!PE4router bgp 100 bgp router-id 4.4.4.4 address-family l2vpn evpn ! neighbor fcbb#bb00#1##1 remote-as 100 update-source Loopback0 address-family l2vpn evpn ! !!Configuring Layer2 Attachment CircuitsWe need to configure l2transport sub-interface (on the PE-CE link) with appropriate VLAN encapsulations. This tutorial is showing VLAN based service with VLAN ID 100. We are not showing any VLAN translation operation (rewrite commands) as the are out of scope of this tutorial.PE1 and PE4interface TenGigE0/0/0/0.2 l2transport encapsulation dot1q 2Configuring EVPN and L2VPN ServiceNext step is to configure EVPN and L2VPN service construct on both the PE. since we have a symmetric topology, our configuration on both node will be similar. Configure the below on PE1 and PE4.evpn interface TenGigE0/0/0/0 ! segment-routing srv6 locator POD0 !!l2vpn xconnect group 2 p2p 2 interface TenGigE0/0/0/0.2 neighbor evpn evi 2 service 2 segment-routing srv6 ! ! !!The interface under EVPN configuration doesn’t have any ESI configured, this is because of single homed service and default ESI being used. For detailed understanding on evpn configuration and modes refer e-evpn.io.We have globally enabled srv6 locator POD0 under evpn, this means l2vpn SIDs (UDX2) will be allocated from the same locator. 
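As a hypothetical sketch (assuming a second locator named POD1 has also been defined under segment-routing srv6), the locator can be overridden for an individual service under the xconnect itself#
l2vpn
 xconnect group 2
  p2p 2
   neighbor evpn evi 2 service 2
    segment-routing srv6
     locator POD1
    !
   !
  !
 !
!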
The srv6 configuration under l2vpn xconnect group service construct can be used to override the global evpn configuration and assign new locator.Verifiation StepsAt first we will verify that the layer2 P2P service is up,RP/0/RP0/CPU0#LABSP-3393-PE1#show l2vpn xconnect Legend# ST = State, UP = Up, DN = Down, AD = Admin Down, UR = Unresolved, SB = Standby, SR = Standby Ready, (PP) = Partially Programmed, LU = Local Up, RU = Remote Up, CO = Connected, (SI) = Seamless InactiveXConnect Segment 1 Segment 2 Group Name ST Description ST Description ST ------------------------ ----------------------------- -----------------------------2 2 UP Te0/0/0/0.2 UP EVPN 2,2,##ffff#10.0.0.2 UP ----------------------------------------------------------------------------------------The local SID information for the configured service is updated in the SRv6 SID table as well.RP/0/RP0/CPU0#LABSP-3393-PE1#show segment-routing srv6 locator POD0 sid SID Behavior Context Owner State RW-------------------------- ---------------- ------------------------------ ------------------ ----- --fcbb#bb00#1## uN (PSP/USD) 'default'#1 sidmgr InUse Y fcbb#bb00#1#e001## uA (PSP/USD) [BE12, Link-Local]#0#P isis-1 InUse Y fcbb#bb00#1#e002## uA (PSP/USD) [BE12, Link-Local]#0 isis-1 InUse Y fcbb#bb00#1#e003## uA (PSP/USD) [BE13, Link-Local]#0#P isis-1 InUse Y fcbb#bb00#1#e004## uA (PSP/USD) [BE13, Link-Local]#0 isis-1 InUse Y fcbb#bb00#1#e005## uDT4 '1' bgp-100 InUse Y fcbb#bb00#1#e006## uDX2 2#2 l2vpn_srv6 InUse Y The SID details and functions can also be verified using SID details CLI as shown below. It shows that the SID function is 0xe0006 and it is in the context of EVPN EVI 2 with AC IDs 2 (eth-tag=2).RP/0/RP0/CPU0#LABSP-3393-PE1#show segment-routing srv6 sid fcbb#bb00#1#e006## detail *** Locator# 'POD0' *** SID Behavior Context Owner State RW-------------------------- ---------------- ------------------------------ ------------------ ----- --fcbb#bb00#1#e006## uDX2 2#2 l2vpn_srv6 InUse Y SID Function# 0xe006 SID context# { evi=2, eth-tag=2 } Locator# 'POD0' Allocation type# Dynamic Created# Nov 14 04#49#43.505 (00#08#54 ago) We can also view, the SRv6 uDX2 SID assigned to each segment of the service in the detailed show command below#RP/0/RP0/CPU0#LABSP-3393-PE1#show l2vpn xconnect group 2 detail Group 2, XC 2, state is up; Interworking none AC# TenGigE0/0/0/0.2, state is up Type VLAN; Num Ranges# 1 Rewrite Tags# [] VLAN ranges# [2, 2] MTU 1504; XC ID 0x2; interworking none Statistics# packets# received 0, sent 0 bytes# received 0, sent 0 drops# illegal VLAN 0, illegal length 0 EVPN# neighbor ##ffff#10.0.0.2, PW ID# evi 2, ac-id 2, state is up ( established ) XC ID 0xc0000002 Encapsulation SRv6 Encap type Ethernet Ignore MTU mismatch# Enabled Transmit MTU zero# Enabled Reachability# Up SRv6 Local Remote ---------------- ---------------------------- -------------------------- uDX2 fcbb#bb00#1#e006## fcbb#bb00#4#e006## AC ID 2 2 MTU 1518 0 Locator POD0 N/A Locator Resolved Yes N/A SRv6 Headend H.Encaps.L2.Red N/A Statistics# packets# received 0, sent 0 bytes# received 0, sent 0 We can verify the Remote uSID advertised via BGP with the help of the below CLI outputs.RP/0/RP0/CPU0#PE1#show bgp l2vpn evpn BGP router identifier 1.1.1.1, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0x0BGP main routing table version 20BGP NSR Initial initsync version 1 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsStatus codes# s suppressed, d damped, h history, 
* valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discardOrigin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight PathRoute Distinguisher# 1.1.1.1#2 (default for vrf VPWS#2)*> [1][0000.0000.0000.0000.0000][2]/120 0.0.0.0 0 i* i fcbb#bb00#4##1 100 0 iRoute Distinguisher# 4.4.4.4#2*>i[1][0000.0000.0000.0000.0000][2]/120 fcbb#bb00#4##1 100 0 iProcessed 2 prefixes, 3 pathsRP/0/RP0/CPU0#LABSP-3393-PE1#show bgp l2vpn evpn rd 4.4.4.4#2 [1][0000.0000.0000.0000.0000][2]/120BGP routing table entry for [1][0000.0000.0000.0000.0000][2]/120, Route Distinguisher# 4.4.4.4#2Versions# Process bRIB/RIB SendTblVer Speaker 19 19Last Modified# Nov 14 04#50#18.448 for 2d00hPaths# (1 available, best #1) Not advertised to any peer Path #1# Received by speaker 0 Not advertised to any peer Local fcbb#bb00#4##1 (metric 30) from fcbb#bb00#4##1 (4.4.4.4) Received Label 0xe00600 Origin IGP, localpref 100, valid, internal, best, group-best, import-candidate, not-in-vrf Received Path ID 0, Local Path ID 1, version 19 Extended community# EVPN L2 ATTRS#0x02#0 RT#100#2 PSID-Type#L2, SubTLV Count#1 SubTLV# T#1(Sid information), Sid#fcbb#bb00#4##, Behavior#65, SS-TLV Count#1 SubSubTLV# T#1(Sid structure)# There is single RT1 coming from PE4 with all zero ESI as we have only used single-homing. A detailed look into the advertised route shows that the remote uDX2 sid comprise of the two parts, the Sid information fcbb#bb00#4## with Behavior#65 meaning this is uDX2. Also the received label is 0xe00600 . Thus, we can see that the remote uDX2 SID fcbb#bb00#4#e006## comprises of SID and the Received_Label.Finally, to verify the data plane operation we will initiate ICMP ping from CE1 to CE2. we already have configured CE1 & CE2 in the same subnet and established the L2 stretch between the two nodes with EVPN-VPWS over SRv6 transort.RP/0/RP0/CPU0#CE1#show run int tenGigE 0/0/0/0.2interface TenGigE0/0/0/0.2 ipv4 address 192.2.0.1 255.255.255.0 encapsulation dot1q 2!RP/0/RP0/CPU0#CE2#show run int tenGigE 0/0/0/0.2interface TenGigE0/0/0/0.2 ipv4 address 192.2.0.2 255.255.255.0 encapsulation dot1q 2!RP/0/RP0/CPU0#CE1#ping 192.2.0.2 repeat 20Type escape sequence to abort.Sending 20, 100-byte ICMP Echos to 192.2.0.2, timeout is 2 seconds#!!!!!!!!!!!!!!!!!!!!Success rate is 100 percent (20/20), round-trip min/avg/max = 1/3/45 msRP/0/RP0/CPU0#CE1# ConclusionThis concludes Part 3 of our tutorial explaing point-to-point l2 serviec over SRv6 transport. Stay tuned for our upcoming tutorials.", "url": "/tutorials/srv6-transport-on-ncs-part-3/", "author": "Paban Sarma", "tags": "" } , "tutorials-simple-scalable-programmable-sustainable-metro-with-cisco-ncs5500-nc5700": { "title": "Simple, Scalable, Programmable & Sustainable Metro with Cisco NCS5500/NC5700", "content": " Simple,Scalable, Programmable & Sustainable Metro with Cisco NCS5500/5700 Introduction Metro Architecture Shift Need for architecture transistion Evolved Metro Use cases NCS5500/5700 Portfolio Evolution for supporting the converged metro Portfolio details Fixed Modular Line Cards NCS5500/NCS5700 Capabilities to design the next generation Metro Conclusion References IntroductionAccording to Cisco’s Annual Internet Report, the number of devices connected to IP networks will be more than three times the global population by 2023. There will be 3.6 networked devices per capita by 2023, up from 2.4 networked devices per capita in 2018. There will be 29.3 billion networked devices by 2023, up from 18.4 billion in 2018. 
According to Gartner, by 2025 more than 75% of the computing data will need to be analyzed, processed, and stored closer to the end users. New paradigms such as the introduction of 5G, rapid growth in video traffic, and the proliferation of IoT and cloud service models require unprecedented flexibility, elasticity and scale from the network. Increasing bandwidth demands and decreasing ARPU are putting pressure on network cost. At the same time, services need to be deployed faster and more cost-effectively to stay competitive. The metro network becomes the key to this transformation, where we see services getting terminated and end users looking for better experiences. In this article, we will discuss the latest trends in the metro architecture and how communication service providers (CSPs) can design their network for an optimized value proposition.Metro Architecture ShiftWe are seeing a major architecture shift in the way the new metro fabric is evolving. Applications (8K Video, Gaming, AR/VR) are becoming more dynamic, requiring differentiated user experiences and optimized use of distributed compute and decomposed network resources in places where power and space are constrained. These trends have driven the evolution of our product portfolio with enhanced software capabilities and compact form-factors. As we look into the future, these trends will require even more accelerated innovation.Need for architecture transitionIn the past, CSPs used to build siloed networks for their mobile, enterprise and residential services. This makes it challenging to maintain their networks efficiently. They need to move to a single transport network, which enables faster service delivery. The Converged Transport architecture uses advanced features and technology to help CSPs design and migrate to a network prepared to scale to meet the stringent bandwidth and performance demands of their customers. At the fundamental level, it drives simplification into the CSP's network and operations.Implementing the Converged Transport architecture will allow CSPs to realize the following business benefits# Reduced operational complexity for network management Increased revenue with a service-focused, service-centric network Improved time to market for new value-added services Optimized utilization of fiber capacity Decreased operational and capital expenditures associated with the networkEvolved Metro Use casesBelow are some of the use cases which we see will drive the evolution of the metro architecture as well as the products.NCS5500/5700 Portfolio Evolution for supporting the converged metroThe Cisco NCS 5500/5700 Series is an industry-leading portfolio developed to handle massive traffic growth. It is designed for operational simplicity and for efficiently meeting the scaling needs of large enterprises, web, utilities and service providers. 
It comprises a range of products in fixed (1RU,2RU,3RU) and Modular (4,8,16 slots) form factors.Portfolio detailsFixed Product ID Form Factor Description NCS-5501(SE) 1 Base# 48x 1/10G SFP + 6x 40/100G QSFP Scale # 40x 1/10G SFP + 4x 40/100G QSFP NCS-55A1-36H(-SE)-S 1 36x 40/100G QSFP NCS-55A1-24H 1 24x 40/100G QSFP NCS-55A1-24Q6H-S 1 24x 1/10G SFP + 24x 1/10/25G SFP+ 6x 40/100G QSFP NCS-55A1-48Q6H 1 48x 1/10/25G SFP + 6x 40/100G QSFP NCS-55A2-MOD 2 24x 1/10G SFP + 16x 1/10/25G SFP+ 2x 400G MPAs MPA 4x 100G QSFPMPA 2x 100/200G DWDM CFP2MPA 2x 100G QSFP + 1x 100/200G DWDM CFP2MPA 12x 1/10G SFP NCS-57B1-6D24/5DSE 1 Base# 24x 100G QSFPDD + 6x400G QSFPDD Scale#24x 100G QSFPDD + 5x400G QSFPDD NCS-57C1-48Q6D-S 1 32x SFP28 + 16x SFP56 +2x QSFP-DD(4x100G) + 4x QSFP-DD (400G) NCS-57C3-MOD(S)-SYS 3 48x 1/10/25G + 8x or 4x port 40/100G + MPAs#2x400G or 4x 100G MPAMPA 4x 100G QSFPMPA 2x 100/200G DWDM CFP2MPA 2x 100G QSFP + 1x 100/200G DWDM CFP2MPA 12x 1/10G SFP NCS-57D2-18DD-S 2 66 Ports# 2x 400G QSFP-DD + 16x 400GQSFP-DD / 64x 100G QSFP-DD Modular Line Cards Product ID Description NC55-32T16Q4H-A 32x1/10GE SFP/SFP+ 16x10/25GE SFP+/SFP28 4x40/100GE QSFP+/QSFP28 (oversubscribed) NC55-MOD-A(-SE)-S Fixed#12x1/10G SFP/SFP+2x40G QSFP+ 2X MPA#2x400G or 12x50G NC55-36X100-A-SE 36x100/40G NC57-24DD 24x400G QSFPDD NC57-18DD-SE 18x400G QSFPDD or 30x200/100G QSFPDD NC57-36H6D-S 24x100G + 12 flex ports (6x400GE or 12x200GE/100GE) NC57-36H-SE 36x100G QSFPDD NC57-MOD-S Fixed#2x400G QSFPDD+ 8x10/25/50G SFP56 2X MPA#2x400G or 12x50G NCS5500/NCS5700 Capabilities to design the next generation MetroPowered by Cisco IOS-XR, NCS5500/5700 has all the capabilities to design the next-gen metro. They provide cost structures and power efficiency in a fixed systems as well as flexibility of a modular system in terms of redundancy options, upgradability, higher radix, and interface mixing. Below are the top capabilities we think are needed to build a successful network. 
Metro Network Capabilities Support High Speed Transport Supports 400G ZR/ZRP Capability of dense 100G/400G in small 1RU form factors Ability to support low speeds from 1G/10G/25G/40G along with 100G/200G/400G Packet and Optical Convergence Unleash transformative operational efficiency with Cisco Routed optical Network -aka RON Achieve Network efficiency and Service Profitability Lower TCO with Simple Architecture Network Slicing using Segment Routing Full support for Segment Routing/SRv6 Features Capability to impose 26 usids in a single pass Support for Flex Algo/SRTE/Ti-LFAMicroLoop Avoidance and SR-PM plus many more features Service deployment using EVPN Overlay Support for next-gen EVPN Features EVPN-VPWS,EVPN-ELAN, EVPN Etree EVPN IRB Anycast Gateway EVPN Single home and Multihome EVPN Single Active, Active-Active, Port-Active Capabilities Network Security with Trustworth Platforms Built-in hardware security along with software security in IOS-XR Secure-boot with Cisco TAM chipDDoS protectionMalware protectionAnti-Theft protectionSecure app-hosting environment IP-SLA Network performance monitoring with rich set of IP-SLA features TWAMP/TWAMP-Lite Ability to terminate non-Ethernet ports Transparently transfer ODU/SONET/SDH/Eth/FC packets over PSN using PLE Flexible Resource Carving Support for different MDB profiles to cater multiple use cases Quality of Service Rich ingress and egress QoS features Support for Policing/Shaping/WFQ/WRED etc Enhance QoS scale with Egress Traffic Manager implementation MACsec Layer 2 security with full MACsec support Timing Class C timing support for low latency applications Telemetry Support for Model Driven as well as AI driven Telemetry Automation Flexible Automation with Cisco Crossworks Network Automation Programmability Drive Network Automation with Using Programmable models Support for Yang Models Support for netconf and grpc protocols Automation scripts Sustainability Reduce environmental impacts with systems delivering lower power consumption and by integrating automation tools to reduce manual and onsite operation Cisco Sustainability Report ConclusionGoing forward the challenge for CSPs will not just be to deliver services, but also to deliver experience. With metro network as the center of gravity along edge/cloud convergence combined with the broadband, it is crucial for the CSPs to seriously invest in upgrading their network. CSPs need to think not only about delivering the experience but also about ways to monetize it. 
Cisco is committed to be a trusted partner for the same and is equipped with the capabilities to help CSPs build a simple, scalable, programmable and sustainable network.References NCS5500/5700 Resources Metro Fabric High Level Design Implementation Guide Inflection Points of a Converged Metro", "url": "/tutorials/simple-scalable-programmable-sustainable-metro-with-cisco-ncs5500-nc5700/", "author": "Tejas Lad", "tags": "iosxr, cisco, NCS5500, NCS5700, Metro, Metro Fabric" } , "tutorials-introducing-ncs-57d2-18dd-sys": { "title": "Introducing NCS-57D2-18DD-SYS", "content": " Introducing NCS-57D2-18DD-SYS Introduction NCS5500/5700 Fixed Portfolio NCS-57D2 Product Specifications Product Design Physical Interfaces Physical Specifications Naming Logic NCS-57D2 Use cases High End Aggregation Rethink the architecture End to End Network Slicing using Segment Routing/Segment Routing v6 Scale better with 3-layer spine-leaf architecture Trustworthy Infrastructure Other feature support NCS-57D2 Port Details Port Numbering 400G 100G 40G Breakout combination 25G Breakout combination 10G Breakout combination 25G and 10G Port Assignment to ASIC core MACSEC and IPSEC support MDB Profiles NCS-57D2 Video References IntroductionLatest trends in the metro architecture have driven the evolution of our product portfolio with enhanced software capabilities and compact form-factors. The Cisco NCS 5500/5700 Series offers industry-leading 100 GbE and 400 GbE port density to handle massive traffic growth. It is designed for operational simplicity and to efficiently meet the scaling needs of large enterprises, web, and service providers. With IOS-XR 7.8.1, we have introduced the latest addition to our portfolio# NCS-57D2-18DD-SYS. In this article, we will deepdive into the features and understand its use cases.NCS5500/5700 Fixed PortfolioBefore deep-diving into the NCS-57D2, let us have a quick refresher of the NCS5500/5700 fixed portfolio. Product ID Form Factor Description NCS-5501(SE) 1 Base# 48x 1/10G SFP + 6x 40/100G QSFP Scale # 40x 1/10G SFP + 4x 40/100G QSFP NCS-55A1-36H(-SE)-S 1 36x 40/100G QSFP NCS-55A1-24H 1 24x 40/100G QSFP NCS-55A1-24Q6H-S 1 24x 1/10G SFP + 24x 1/10/25G SFP+ 6x 40/100G QSFP NCS-55A1-48Q6H 1 48x 1/10/25G SFP + 6x 40/100G QSFP NCS-55A2-MOD 2 24x 1/10G SFP + 16x 1/10/25G SFP+ 2x 400G MPAs MPA 4x 100G QSFPMPA 2x 100/200G DWDM CFP2MPA 2x 100G QSFP + 1x 100/200G DWDM CFP2MPA 12x 1/10G SFP NCS-57B1-6D24/5DSE 1 Base# 24x 100G QSFPDD + 6x400G QSFPDD Scale#24x 100G QSFPDD + 5x400G QSFPDD NCS-57C1-48Q6D-S 1 32x SFP28 + 16x SFP56 +2x QSFP-DD(4x100G) + 4x QSFP-DD (400G) NCS-57C3-MOD(S)-SYS 3 48x 1/10/25G + 8x or 4x port 40/100G + MPAs#2x400G or 4x 100G MPAMPA 4x 100G QSFPMPA 2x 100/200G DWDM CFP2MPA 2x 100G QSFP + 1x 100/200G DWDM CFP2MPA 12x 1/10G SFP NCS-57D2-18DD-S 2 66 Ports# 2x 400G QSFP-DD + 16x 400GQSFP-DD / 64x 100G QSFP-DD NCS-57D2 Product SpecificationsProduct DesignFor NCS-57D2 we have come up with Belly-to-Belly design concept. This helps in better air-flow and cooling and providing the environment to pack in more ZR/ZRP optics in a single chassis. It is designed to have air flow on both sides of PCB for uniform cooling of upper and lower rows to allow for coherent support. Total 16 Quads with 32 cages and 64 QSFP-DD ports.Front viewRear viewNCS-57D2 is a compact 2RU chassis purpose built for dense 400G aggregation. 
It offers 7.2 Terabits of 400GE/100GE optimized forwarding capacity, high power efficiency, QSFP-DD optics, deep packet buffering combined with next-generation security, automation, telemetry, segment routing, EVPN, and Equal-Cost Multipathing (ECMP) capabilities making it suitable for high capacity metro applications. The industry-leading Cisco IOS XR pushes the scalability of the hardware even further through the use of flexible MDB profiles (modular database) that allows ASIC and TCAM resources to be reconfigured as per the network use cases.Physical InterfacesFixed ports Integrated 40/100/400 Gigabit Ethernet support 18x 400G ports with QSFP-DD 66x 100G ports with QSFP-DD All ports support QSFP-28 and QSFP+ Breakout options supported for 10G/25G/100G Support for ZR/ZR+ Coherent opticsOther ports RJ-45 ports for console and management 1 Pulse Per Second (PPS) in/out, 10 MHz in/out, Time of Day (ToD) Internal Global Navigation Satellite System (GNSS) with antenna port 1 USB portPhysical SpecificationsDimensions Height# 2RU 3.45” in. (8.76 cm) Width# 17.60 in. (43.94 cm) Depth# 23.62 in. (59.99 cm)Weight 53 lb (24 kg) including fans and PSUsPower Typical# 500W without optics Power handling of each quad = 43WNote# Quad is a group of 4 ports. For example ports 0,1,2,3 makes one quad.Naming LogicNCS-57D2 is built with J2C+ chipset. The PIDs will vary depending on the licencing model used. PID Licensing Model NCS-57D2-18DD-SYS Flexible Consumption Model (FCM) NCS-57D2-18DD-S Perpetual/BAE Note# The show platform output will display the FCM ModelRP/0/RP0/CPU0#BGL-CB#show platform Node              Type                     State                    Config state--------------------------------------------------------------------------------0/RP0/CPU0        NCS-57D2-18DD-SYS(Active) IOS XR RUN               NSHUT0/PM0             PSU2KW-ACPI              OPERATIONAL              NSHUT0/PM1             PSU2KW-ACPI              OPERATIONAL              NSHUT0/FT0             NC57-D2-FAN-FW           OPERATIONAL              NSHUT0/FT1             NC57-D2-FAN-FW           OPERATIONAL              NSHUT0/FT2             NC57-D2-FAN-FW           OPERATIONAL              NSHUT0/FT3             NC57-D2-FAN-FW           OPERATIONAL              NSHUT0/FB0             NC57-D2-FAN-FW           OPERATIONAL              NSHUTRP/0/RP0/CPU0#BGL-CB#NCS-57D2 Use casesHigh End AggregationAs the user applications are becoming more dynamic, requiring differentiated user experiences network resources needs to be optimised in terms of power and real estate. Catering bandwidth demand by deploying huge equipments is really not a smart business model both in terms of OPEX and environment. NCS-57D2 has been custom designed to cater high end aggregation use cases in a 2RU compact form factor. It supports 7.2Tbps of forwarding capacity. It provides 18 ports of high speed 400G or 66 ports of 100G at extremely low power.Rethink the architectureCisco’s Converged SDN Transport Solution, is an architecture that delivers improved operational efficiencies and simplicity. The solution works by merging IP and Optical onto a single layer where all the switching is done at Layer 3. Routers are connected with standardized 400G ZR/ZR+ coherent pluggable optics. 
With a single service layer based upon IP, flexible management tools can leverage telemetry and model-driven programmability to streamline lifecycle operations.End to End Network Slicing using Segment Routing/Segment Routing v6Latency is one of most important KPIs when it comes to service provider networks. Applications must be capable of reaching the end-user quickly enough to prevent the degradation of the experience. Network slicing and segment routing provide intelligent routing and traffic differentiation required to efficiently support this distributed architecture. With Network Slicing we can have independent networks on the same physical infrastructure. NCS5500/5700 portfolio supports full set of Segment routing/SRV6 features to help the operators design an efficient and future ready architecture.Scale better with 3-layer spine-leaf architectureThe above design example is a three-tiered spine-leaf topology. Leverage the power of MP-BGP EVPN to distributes the Layer 2 and Layer 3 reachability information for the overlay network. NCS5500/5700 portfolio supports full set of EVPN features to scale the datacenter fabric. The design allows flexibility to run MP-BGP in a multi-POD environment across different AS or within a single AS. EVPN routes can be distributed among PODs through MP-eBGP peering without the need for additional configuration.Trustworthy InfrastructureBuild a trustworthy infrastructure with in-built software security of Cisco IOS-XR along with hardware security features of NCS5500/5700 portfolio. The NCS5500/5700 supports# Secure boot with Cisco TAM chip Anti-counterfeit and Trust Anchor Infrastructure Image Signing Run-time Defense Encrypted Transport DDoS Protection Integrity Measurement and Verification InfrastructureOther feature supportAlong with above major use cases, the NCS-57D2 (and entire NCS5500/5700 portfolio) support full features of Overlay Layer2 services using EVPN and VPLS IPSLA support including TWAMP and TWAMP Lite Multicast BGP features Class C timing QoS TelemetryNCS-57D2 Port DetailsPort NumberingAbove figure shows the port numbering of all the 66 ports of NCS-57D2400GNCS-57D2 support 1 400G ZR/ZRP/Grey optics per quad. Only P0 in each quad can be configured with 400G i.e ports 0,4,8 .. 60. When ports P0 are configured with 400G, other 3 ports in the quad are disabled. Ports 65 and 66 can be individually configured for 400G100GAll the 66 ports can be configured with 100G native speeds40GAll the 66 ports can be configured with 40G native speedsBreakout combination 25G 4x25G breakout option is supported only in ports P0 and P3 in each quad. If P0 and P3 is configured with the above breakout option, ports P1 and P2 are disabled. 4x25G Breakout with native 100G in a port group is supported. 4x25G Breakout with native 40G in a port group is not supported. Port 64 and 65 can both be configured with 4x25G breakout option. Above breakout is not supported with 400G in any port.Breakout combination 10G 4x10G breakout option is supported only in ports P0 and P3 in each quad. If P0 and P3 is configured with the above breakout option, ports P1 and P2 are disabled. 4x10G breakout option plus native 100G/40G in a port group is supported. Port 64 and 65 can both be configured with 4x10G breakout option. 
Above breakout is not supported with 400G in any portBreakout combination 25G and 10G 4x25G and 4x10G Breakout cannot co-exist together in a single quad 4x25G and 4x10G Breakout can be configured individually on port 64 and 65Port Assignment to ASIC coreThe ASIC has 2 cores. The above figure shows the port mapping to all the interfaces. The important thing to highlight from the above output is the default speed of the interfaces when the platform boots up is 100G. We can change the port speed of each port as per the requirements.MACSEC and IPSEC supportThe hardware is fully capable of MACSEC and IPSEC. The features will be support post FCS. These features are not supported IOS-XR 7.8.1MDB ProfilesThe platform will support the below profiles# L2MAX L3MAXNCS-57D2 VideoReferences NCS5500/5700 XRdocs NCS5500/5700 Modular WhitePaper NCS5500/5700 Fixed WhitePaper", "url": "/tutorials/introducing-ncs-57d2-18dd-sys/", "author": "Tejas Lad", "tags": "iosxr, cisco, NCS5500, NCS5700, metro, NCS57-D2-18DD" } , "tutorials-srv6-transport-on-ncs-part-4": { "title": "Segment Routing v6 (SRv6) Transport on NCS 5700", "content": " Table of Contents Platform hw-module command for format NCS 5500 Configuration via hw-module NCS 5700 Configuration Defining traffic-class encapsulations NCS 5500 Configuration via hw-module NCS 5700 Configuration Defining source encapsulation SRv6 Locator blocks Service Configurations Paban Sarma, Technical Marketing Engineer (pasarma@cisco.com) Deepak Balasubramanian, Technical Leader, Technical Marketing (deebalas@cisco.com) Overview Cisco MIG Access & Aggregation platforms, i.e NCS 500 and NCS 5500 series have variants built with BCM J1 and J2 ASICs. In our previous articles on SRv6 transport and services, we covered platforms built with first generation of BCM ASIC. While service configuration and behaviour are same on both the generations of platforms, there are specific config knobs needed on the first generation of platforms. We discussed the same in detail in our first article on SRv6 Transport.This tutorial will focus on various config knob differences to set up the SRv6 transport on the new generation of products i.e. the NCS 5700 and particular variants in NCS 540 series. Applicable Platform Models NCS 500 Series – N540-24Q8L2DD-SYS NCS 5500 Series– All NCS 57xx Fixed Platforms– NCS 5500 modular routers with NCS 57xx Line Cards in Native Mode Differences in ConfigurationPlatform hw-module command for formatTo enable SRv6 transport on the NCS 5500 series (1st gen) we need to enable hw-module profile. This is not needed on the NCS 5700 series. SRv6 Mode base or uSID is configured directly under segment routing global configuration.NCS 5500 Configuration via hw-modulehw-module profile segment-routing srv6 mode micro-segment format f3216NCS 5700 Configurationsegment-routing srv6 formats format usid-f3216Defining traffic-class encapsulationsAnother important factor in SRv6 is the traffic-class field in the encapsulated SRv6 header. The option is to either propagate from the payload or define a global value for all services. With NCS 5500 this is enabled along with the hw-module profile. On NCS 5700 this is configured under SRv6 encapsulation. The hw-module profile knob allows separate treatment for L2 and L3 encapsulation.NCS 5500 Configuration via hw-modulehw-module profile segment-routing srv6 mode micro-segment format f3216 encapsulation l2-traffic traffic-class propagate ! 
l3-traffic traffic-class propagateNCS 5700 Configurationsegment-routing srv6 encapsulation traffic-class propagate ! !!Defining source encapsulationThe SRv6 header source address definition is done the same way on both the NCS 5500 and NCS 5700 platforms. This configuration is present under the segment-routing global configuration block.segment-routing srv6 encapsulation source-address 2001##1 ! !!SRv6 Locator blocksThe SRv6 locator is an important configuration parameter for SRv6. The locator is a combination of a Base and a Node ID. For SRv6 uSID (f3216 format), the 32-bit base is divided into a 24-bit BASE and an 8-bit block ID. The following example shows a sample SRv6 locator configuration for uSID.segment-routing srv6 locators locator LOC micro-segment behavior unode psp-usd prefix fcbb#bb00#0001##/48 ! ! !!In the example above, the last two nibbles of the block (the 00 in fcbb#bb00) can be in the range 00-ff on NCS 5700 platforms. For the first generation NCS 5500/500, this block ID range is 00-3f.Service ConfigurationsAs discussed in the Overview section, any type of service creation and the show commands related to transport and service infra are common to all the NCS 500 and NCS 5500/5700 PIDs. The basic Layer3 and Layer2 services over SRv6 transport are already covered in our previous tutorials. Summary In this short article, we covered the fundamental difference in configuration approach for SRv6 transport between the NCS 5700 and NCS 5500 platforms. Stay tuned for more SRv6 transport related content.", "url": "/tutorials/srv6-transport-on-ncs-part-4/", "author": "Paban Sarma", "tags": "iosxr, SRv6, Segment-Routing, NCS 5700" } , "tutorials-srv6-transport-on-ncs-part-5": { "title": "SRv6 Transport on NCS5500/5700/500 : Capabilities & Resource Monitoring", "content": " Table of Contents Overview Platform Capabilities SRv6 Manager CLI for details SRv6 MAX SIDs OOR (Out Of Resource) monitoring for MAX SIDs SRv6 Encap resource usage SRv6 FEC resource usage SRv6 ECMP_FEC resource usage SRv6 Ultra USID scale with NCS5700 Showtechs required to troubleshoot SRv6 issues Summary Deepak Balasubramanian, Technical Leader, Technical Marketing (deebalas@cisco.com) Paban Sarma, Technical Marketing Engineer (pasarma@cisco.com) OverviewThis is the 5th tutorial in the SRv6 series on NCS5500/NCS500 platforms. In this article we will focus on the platform capabilities and resource usage when we provision SRv6 services on the NCS5500/5700/500 product family, which is built with BCM DNX Jericho/Jericho+/Jericho2 NPUs.Platform CapabilitiesLet's get started with a summary of the capabilities of the platforms under discussion with respect to SRv6 features & scale.Details are also captured in correspondence with the different MDB profiles which are supported in NCS5700 platforms. For more understanding on MDB refer to NCS700-MDB-InfoTable #1 Platform Capabilities Please note# The above scale numbers are subject to change in future XR releases along with more feature enhancements in the pipelineMDB to PID mapping for referenceTable #2 Platform MDB mappingSRv6 Manager CLI for detailsWe have an IOS-XR CLI to check the SRv6 functional and scale capabilities on any of the Cisco XR platforms. 
CLI # “show segment-routing srv6 manager”Illustration below“sh segment-routing srv6 manager”Parameters#  SRv6 Enabled# Yes  SRv6 Operational Mode#     Micro-segment#      SID Base Block# fc05##/24       /*Configured base block*/  Encapsulation#    Source Address#      Configured# ##      Default# 100#1#1##1                        /*Loopback IP*/    Hop-Limit# Propagate                   /*Configured Values*/        Traffic-class# Propagate               /*Configured Values*/Summary#  Number of Locators# 15 (15 operational)     /*Current scale*/    Number of SIDs# 6749 (0 stale)              /*Current scale*/    Max SID resources# 12256           /Scale Limit J2 L3MAX-SE/  Number of free SID resources# 4487  OOR#    Thresholds (resources)# Green 613, Warning 368     Status# Resource Available            /*Current OOR state*/        History# (0 cleared, 0 warnings, 0 full) Block fc05##/32#        Number of free SID resources# 7607        Max SIDs# 7680        Thresholds# Green 384, Warning 231        Status# Resource Available            History# (0 cleared, 0 warnings, 0 full)_/snipped/_    Block fc05#4##/32#        Number of free SID resources# 4946 /*Current Available resources*/       Max SIDs# 7680            /Max scale limit per block/        Thresholds# Green 384, Warning 231        Status# Resource Available            History# (0 cleared, 0 warnings, 0 full)_/snipped/_Platform Capabilities#  SRv6# Yes  TILFA# Yes  Microloop-Avoidance# Yes  Endpoint behaviors#     End.DX6                                 /*F1 per-CE v6*/    End.DX4       /*F1 per-CE v4*/    End.DT6                                /*F1 per-VRF v6*/    End.DT4           /*F1 per-VRF v4*/    End.DX2                                  /*F1 P2P VPWS*/    End (PSP/USD)                     End.X (PSP/USD)    uN (PSP/USD)                              /*node locator*/    uA (PSP/USD)                            /*Adjacency usid*/     uDX6                                    /*usid per-CE v6*/    uDX4                                    /*usid per-CE v4*/    uDT6                                   /*usid per-VRF v6*/    uDT4                                   /*usid per-VRF v4*/    uDX2                                     /*usid P2P VPWS*/    uB6 (Insert.Red)                       /*BSID for SRv6TE*/ Headend behaviors#     T    H.Insert.Red                        /*SID Insert - LFA,TE*/    H.Encaps.Red                            /*L3 encap*/                      H.Encaps.L2.Red                         /*L2 encap*/Security rules#     SEC-1    SEC-2    SEC-3  Counters#     CNT-3 Signaled parameters#    Max-SL          # 11                                              Max-End-Pop-SRH # 5Max-H-Insert    # 4 sids       / 4 Max carriers Inserted/    Max-H-Encap     # 5 sids    / 5 MAX carriers while encap/    Max-End-D       # 8  Configurable parameters (under srv6)#     Encapsulation#       Source Address# Yes      Hop-Limit     # value=Yes, propagate=Yes      Traffic-class # value=Yes, propagate=Yes  Default parameters (under srv6)#     Encapsulation#       Hop-Limit     # value=0, propagate=No      Traffic-class # value=0, propagate=NoMax Locators# 16                 /Max scale for locators/  Max SIDs# 12256                  /Max scale for L3MAX-SE/  SID Holdtime# 3 mins /*Configurable value for SID clean=up*/ Highlights for critical params in above outputSRv6 MAX SIDsEvery platform with the combination of MDB profiles (NCS5700) will have maximum SID allocation limit. 
Numbers mentioned in Table#1We can use the below CLI to check the max SIDs (local) supported on the router,sh segment-routing srv6 manager | i Max SID r  Max SID resources# 12256In SRv6, we can support multiple locators for various reasons like slicing/flex-algo. And we enforce max CAP per locator block which can be checked with,sh segment-routing srv6 manager | i ~Block|Max SIDs~    Block fc05##/32#        Max SIDs# 7680Please note, Sum of the SIDs in different locator blocks should be <= Max SID resources supportedWe will see in detail about the local SID allocation for different SID functionalities. Lets keep the focus restricted to uSID and not Format1These are some of the important uSID functionalities we have today uN # Locator SID uA # Adjacency SID uDT4/6 # Per-VRF SID for GRT/VPN (ipv4/ipv6) uDX4/6 # Per-CE SID for GRT/VPN (ipv4/ipv6) uDX2 # L2VPN SID for EVPN VPWS uB6 # BSID for SRv6 Policy uDT2U# EVPN ELAN SID for known-unicast uDT2M # EVPN ELAN SID for BUM trafficCalculaton goes like this,MAX SIDs allocated =  sum (uN#uLocal) + 2*uA 16 uN SIDs are reserved by default in hardware to avoid issues with locator configs during OOR conditions uLocal here is the local SID on the box which can fall into categories like (uA, uDT4/6, uDX4/6/2 .. etc ) uA is counted twice, once as a part of uN#uLocal (uLocal = uA),and twice for local-only uAExample#sh segment-routing srv6 manager Summary#  Number of Locators# 15 (15 operational)                            /*uN or Locators*/  Number of SIDs# 6749 (0 stale)                                     /* SID usage*/  Max SID resources# 12256  Number of free SID resources# 4487                                 /* 12256 - 6749 != 4487 #) */  OOR#    Thresholds (resources)# Green 613, Warning 368                   /* Free resources > 5% of MAX SID for Green*/    Status# Resource Available                                /*To recover from OOR state, need to be in GREEN threshold*/ sh segment-routing srv6 sid | i uA | ut wc    510    4080   56100RP/0/RP0/CPU0#J2-PE1#sh segment-routing srv6 sid | i ~uDT|uDX|uB~ | ut wc   6239/   37482  686534 Number of SIDs# Equals to sum of uLocals (uA,uDT,uDX,uB) = 6749Actual SID resource usage# 6749 + 2*510 (uA) = 7769Number of free SID resources# Max SID (12256) - SID in use (7769) = 4487  OOR (Out Of Resource) monitoring for MAX SIDsIllustration#sh segment-routing srv6 manager Summary#  Number of Locators# 15 (15 operational)                           Number of SIDs# 6749 (0 stale)                                    Max SID resources# 12256  Number of free SID resources# 4487                              OOR#    Thresholds (resources)# Green 613, Warning 368      /*Global Free resources > 5% of MAX SID for Green*/    Status# Resource Available                                   /*To recover from OOR state, need to be in GREEN threshold*/     Block fc05#3e##/32#        Number of free SID resources# 4946                       /* Block level MAX free SIDs = 7680 - current allocation from the block*/        Max SIDs# 7680        Thresholds# Green 384, Warning 231                       / > 5% green threshold for Block level max SID/        Status# Resource Available Global level OOR Threshold set based on SID usage across all the locator blocks (system level) Green threshold if (global level free resources > 5% of MAX SID on system level) Warning if (global level free resources < 3% of MAX SID on system level)Block level OOR Threshold set based on SID usage on a specific block Green threshold if (Block level free resources > 
5% of MAX SID for locator block) Warning if (Block level free resources < 3% of MAX SID for locator block)If we reach the state of OOR “Out of Resources” we will stop programming the SIDs till we go back to GREEN ThresholdSRv6 Encap resource usageSRv6 uSIDs (remote) consumes encap resources (in EEDB-Egress Encap DataBase) like the MPLS labels. In NC57 (J2) systems we have encap bank carving in the form of cluster bank pairs which are dedicated to different network applications like SRv6, L3VPN, BGP LU ..etcSRv6 encap usage can be monitered as srv6nh category which is similar to mplsnh for labelsRP/0/RP0/CPU0#J2-PE1#sh controllers npu resources encap location 0/4/CPU0 HW Resource Information    Name                            # encap    Asic Type                       # Jericho TwoOFA Table Information(May not match HW usage)        ipnh                        # 1038             ip6nh                       # 2042             mplsnh                      # 79                    srv6nh                      # 5185 SRv6NH scale depends on the remote SIDs which we use for the H/T.Encap (like encapsulating with DT4/6 or DX4/6/2 SIDs) and T.Insert (which we use for TILFA or SRv6TE scenarios)We also use the encap resources for SRv6-Policy which is not tied to the H.encap numbers in Table#1SRv6 FEC resource usageFEC table is present in the ingress pipleline of the NPU forwarding block. Basically a prefix/label/SID lookup can point to a FEC entry which will have information about the VOQ of the egress interface and also the encap pointers where the remote labels/SIDs are stored.In NCS5700 we have 3 levels of FEC hierarchy vs 2 levels in NCS5500/540 systemsBelow is an ouput from NCS5700 system where we can see the services SID in H2 FEC and locator SIDs in H3 FEC.show controllers npu resources fec location 0/1/CPU0     Name# hier_0           Estimated Max Entries       # 52416              Total In-Use                # 8        (0 %)           OOR State                   # Green           Bank Info                   # H1 FEC              Name# hier_1           Estimated Max Entries       # 209664             Total In-Use                # 45       (0 %)           OOR State                   # Green           Bank Info                   # H2 FEC  >> Services SID       Name# hier_2           Estimated Max Entries       # 78592              Total In-Use                # 9    (0 %)           OOR State                   # Green           Bank Info                   # H3 FEC >> uN locator SIDSRv6 ECMP_FEC resource usageWe use ECMP_FEC for multipath/ECMP entries pointing to a list of FEC array. This database is also used by SRv6 when we have to deal with multipaths. 
In NCS5700, the ecmp_fec mapping follows an application mapping hierarchy similar to that of FEC.This can be monitored with,show controllers npu resources ecmpfec location 0/RP0/CPU0Current Hardware Usage    Name# ecmp_fec        Estimated Max Entries       # 32768           Total In-Use                # 6        (0 %)        OOR State                   # Green        Bank Info                   # ECMP        Name# hier_0           Estimated Max Entries       # 30720              Total In-Use                # 0                   OOR State                   # Green           Bank Info                   # H1 ECMP        Name# hier_1           Estimated Max Entries       # 30720              Total In-Use                # 1                   OOR State                   # Green           Bank Info                   # H2 ECMP  >> Services SID       Name# hier_2           Estimated Max Entries       # 32763              Total In-Use                # 5                   OOR State                   # Green           Bank Info                   # H3 ECMPSRv6 Ultra USID scale with NCS5700With the encap budget in NCS5700 we can do "Ultra-Scale packing for SRv6 uSIDs" Max-H-Insert    # 4 sids       / 4 Max carriers Inserted/    Max-H-Encap     # 5 sids    / 5 MAX carriers while encap/Using this encap budget we have successfully validated packing of 26 uSIDs in a single pass (24 transport uSIDs + 2 service uSIDs)!Showtechs required to troubleshoot SRv6 issues show tech cefshow tech cef platformshow tech l2vpnshow tech l2vpn platformshow tech ofashow tech segment-routing traffic-engshow tech routing isisshow tech routing bgpshow tech ipv6 nd SummaryThis article has given an overview of the SRv6 capabilities of the NCS platforms based on the differences in the forwarding ASICs and the MDB profiles.We also touched upon the important XR commands and outputs to check the capabilities and scale of the platform with respect to SRv6, and also covered the resource usage of important on-chip databases.This concludes SRv6 series Part-5. Please stay tuned for more!", "url": "/tutorials/srv6-transport-on-ncs-part-5/", "author": "Deepak Balasubramanian", "tags": "iosxr, SRv6, NCS5700, NCS5500, Segment Routing" } , "tutorials-introducing-nc57-mod-s-lc": { "title": "Introducing NC57-MOD-S: NCS 5700 Modular Line Card", "content": "IntroductionThe Cisco NCS 5500 Series Modular Platforms offer industry-leading 100 GbE and 400 GbE port density to handle massive traffic growth. Latest trends in the metro architecture have driven the evolution of the product portfolio. Starting with XR release 7.0.2, the NC57 line cards were introduced in the platform to support higher speeds and flexible scalability. With IOS XR 7.6.1, we have introduced the modular flavour line card NC57-MOD-S, which supports great flexibility in terms of supported speeds and optics form factors.Figure 1# The New Cisco NC57-MOD-S Modular Line CardNC57 Series Modular Line Cards for NCS 55xx ChassisThe NC57 Series line cards brought 400G support with enhanced scalability to the NCS 5500 Modular Chassis Family. 
The table below summarises the NC57 series line card family as of today.Table 1# List of NC57 Line Cards for NCS 55xx Modular Chassis Line Card PID Line Card Description FCS Release NC57-18DD-SE 18x400G or 30x200/100G Scale LC IOS XR 7.0.2 NC57-24DD 24x400G Base LC IOS XR 7.0.2 NC57-36H-SE 36x100G Scale Line Card IOS XR 7.3.1 NC57-36H6D-S 24x100G + 12 Flex Port (6x400GE or 12x200GE/100GE) Base LC IOS XR 7.4.1 NC57-MOD-S 2x400G + 8x50G + 2xMPA Base LC IOS XR 7.6.1 NC57-MOD-S Video IntroductionNC57-MOD-S Line Card SpecificationLine Card ArchitectureNC57-MOD-S is a modular line card available only in a Base version. It is based on a single Jericho2 ASIC and is the successor of the NCS 5500 modular line card (NC55-MOD-S). It offers a mix of fixed ports and two slots for modular port adapters (MPAs). The fixed ports include 2xQSFP56-DD 100/200/400G ports and 8xSFP56 ports offering 10/25/50G. The MPA slots have a maximum throughput of 800G each. In terms of throughput and packet processing capability, the ASIC can process a massive 4.8 Tbps/2BPPS. The front panel ports utilize only 2.6 Tbps when all ports and MPAs are used at full line rate.Note# 1G port speeds are not supported on the NC57-MOD line card on fixed or MPA-based SFP portsFigure 2# NC57-MOD-S Modular Line Card Fixed Ports & MPA slotsFigure 3# NC57-MOD-S Modular Line Card ArchitectureA single J2 ASIC is the primary building block of the line card. There is also an LC CPU complex present in the line card. The line card supports deep buffering with the help of integrated High Bandwidth Memory (8GB). The NIF interfaces are connected to the fixed front panel ports via a PHY that enables MACsec capability on those ports. The two MPA slots are directly connected via a group of NIFs, allowing a maximum throughput of 800Gbps per MPA slot. Most of the MPAs contain a PHY internally, enabling support of MACsec encryption.Modular Port AdaptersAs mentioned earlier, the two MPA slots on the NC57-MOD-S line card have a bandwidth of up to 800 Gbps per slot. The line card supports the new generation of MPAs (up to 800G/MPA) with form factors like QSFP56-DD, 400G CFP2DCO, SFP56, etc., and is also backward compatible with the previous generation of MPAs (up to 400G/MPA). The below table lists all the MPA modules supported on the line card.Table 2# List of Supported MPAs on NC57-MOD Line Card MPA PID MPA Port Configuration MACsec Support NC57-MPA-2D4H 2x400G QDD or 4x100/200G QDD Yes NC57-MPA-1FH1D 1x400G CFP2 + 1x400G QDD Yes NC57-MPA-12L 12xSFP56 (10/25/50G) Yes NC55-MPA-4H-S 4xQSFP28 (40/100G) Yes NC55-MPA-12T-S 12xSFP+ (10G) Yes NC55-MPA-2TH-S 2xCFP2DCO (200G) Yes NC55-MPA-1TH2H-S 1xCFP2DCO (100G) + 2xQSFP28 (40/100G) Yes MACsec & TimingMACsec is supported on all the fixed ports of the NC57-MOD line card. As shown in Figure 3 above, MACsec support is achieved via the MACsec-capable PHY in the system. For the MPA ports, MACsec is supported on all the MACsec-capable MPAs listed in Table 2 above.The NC57-MOD line card supports all the timing functionality supported by the NCS 5500 modular platforms. The line card is capable of supporting PTP Class-C timing accuracy when the chassis is equipped with NC55-RP2-E as the route processor.Scalability & Use CasesAs the NC57-MOD LC is built on the J2 generation of BCM ASIC, it also supports the Modular Database (MDB) profiles for scale. We have already covered the MDB profiles in our previous article. 
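For quick reference, the MDB profile is selected with a chassis-wide hw-module command and takes effect only after a router reload; a minimal sketch (the profile name here is just an illustration)#
hw-module profile mdb l3max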
Since the line card comes only in a Base version, it can be configured with both the L2MAX and L3MAX MDB profiles when operating in J2 native mode.The NC57-MOD-S line card can be inserted in all three variants of the NCS 5500 modular chassis, viz. 5504, 5508 and 5516, with v2 fan and fabric cards. Modular chassis equipped with this line card can be positioned in various roles such as metro aggregation, core and peering. Also, the support of ZR/ZR+ in the fixed QDD ports and the MPA-2D4H, together with 400G CFP2DCO optics via the MPA-1FH1D, makes this line card an important piece in Cisco's Routed Optical Networking architecture.Summary & ReferencesThe Cisco NCS 5700 modular line card NC57-MOD-S is a great choice for a versatile, high-performance networking solution. More information on the line card and the NCS 5500/5700 product family can be found in the below links. NCS 5500/5700 @ Cisco.com NCS5500/5700 Modular WhitePaper", "url": "/tutorials/introducing-nc57-mod-s-lc/", "author": "Paban Sarma", "tags": "iosxr, NCS 5700, NCS 5500" } , "tutorials-srv6-transport-on-ncs-part-6": { "title": "Implementing EVPN ELAN over SRv6 Transport on NCS 500/5500", "content": " Table of Contents Overview Topology Configuration Steps BGP configuration for EVPN EVPN ES and EVI configuration Configuring Layer2 Attachment Circuits & Bridge-Domain Verification Steps Verifying EVPN ELAN control Plane Verifying EVPN ELAN Data Plane and MAC Learnings Summary OverviewIn our previous tutorials, we covered SRv6 Transport with uSID on the NCS 500 and 5500 platforms, and L3/L2 P2P services on top of it. This tutorial will cover the implementation of an Ethernet VPN (EVPN) based multipoint Layer 2 service (ELAN) over SRv6 uSID transport. As of today, only single-homed EVPN ELAN is supported on these platforms. EVPN ELAN is not supported on the NCS 5700 series platforms as of the latest XR release.Topology Nodes Device Type Software Version Loopback0 PE1 NCS 540 IOS XR 7.5.2 fcbb#bb00#1##1/128 P2 NCS 5500 IOS XR 7.5.2 fcbb#bb00#2##1/128 P3 NCS 5500 IOS XR 7.5.2 fcbb#bb00#3##1/128 PE4 NCS 5500 IOS XR 7.5.2 fcbb#bb00#4##1/128 PE5 NCS 5500 IOS XR 7.5.2 fcbb#bb00#5##1/128 The loopback0 IPs are chosen as per the SRv6 addressing best practice (check out segment-routing.net for more details).In this tutorial, we will establish a multipoint L2VPN (EVPN-ELAN) connecting CE1, CE2 and CE3. The example will demonstrate a VLAN based ELAN (EVPLAN) service and establish an L2 stretch across CE1, CE2 and CE3 for VLAN 200.Configuration StepsWe already covered the configuration steps for the transport in our previous tutorial. The below table summarizes the SRv6 uSID locator used (named POD0) on each node for reference. Nodes SRv6 Locator PE1 fcbb#bb00#1##/48 P2 fcbb#bb00#2##/48 P3 fcbb#bb00#3##/48 PE4 fcbb#bb00#4##/48 PE5 fcbb#bb00#5##/48 The configuration steps included in this tutorial focus only on the service-specific tasks, including# BGP EVPN control plane EVPN ES and EVI configuration Layer 2 UNI and L2VPN configurationBGP configuration for EVPNThe BGP configuration is similar to what we did in our previous tutorial. However, since we have multiple PE nodes here, we need to establish full-mesh BGP with the EVPN AFI. For simplicity, we are using P2 as a route reflector (RR). In a real deployment, it is recommended to use dedicated route reflectors in the network. The following config snippet shows the BGP configuration on all the PEs and the RR node.PE1router bgp 100 bgp router-id 1.1.1.1 address-family l2vpn evpn ! 
neighbor fcbb#bb00#2##1 remote-as 100 update-source Loopback0 address-family l2vpn evpn ! !!PE4router bgp 100 bgp router-id 4.4.4.4 address-family l2vpn evpn ! neighbor fcbb#bb00#2##1 remote-as 100 update-source Loopback0 address-family l2vpn evpn ! !!PE5router bgp 100 bgp router-id 5.5.5.5 address-family l2vpn evpn ! neighbor fcbb#bb00#2##1 remote-as 100 update-source Loopback0 address-family l2vpn evpn ! !!P2 as RRrouter bgp 100 bgp router-id 2.2.2.2 address-family vpnv4 unicast ! address-family l2vpn evpn ! neighbor fcbb#bb00#1##1 remote-as 100 update-source Loopback0 address-family l2vpn evpn route-reflector-client ! ! neighbor fcbb#bb00#4##1 remote-as 100 update-source Loopback0 address-family l2vpn evpn route-reflector-client ! ! neighbor fcbb#bb00#5##1 remote-as 100 update-source Loopback0 address-family l2vpn evpn route-reflector-client ! !!EVPN ES and EVI configurationThe next step is to configure the EVPN. It includes three steps# ES configuration# Since we are not using a multihomed CE, we won’t explicitly configure any ESI but will simply enable the physical PE-CE link under EVPNevpn interface TenGigE0/0/0/0 ethernet-segment identifier type 0 1.1.1.1.1.1.0 ! !! Enabling SRv6# we need to globally enable SRv6 for EVPN under the EVPN global configuration. This step optionally includes specifying the SRv6 locator to be used for EVPN services.evpn segment-routing srv6 locator POD0 !! EVI configuration# The next important step is to configure the EVPN identifier (EVI) and enable MAC advertisement and SRv6 for the EVI. We can also specify the locator per EVI at this stage. We are using a per-EVI locator in this tutorial.evpn evi 200 segment-routing srv6 advertise-mac ! locator POD0 !!Below is the config snippet from all the PE nodes. (We have the exact same configuration on all three PEs as the topology is symmetric, i.e. the same interfaces are used on each PE.)PE1, PE4 and PE5evpn evi 200 segment-routing srv6 advertise-mac ! locator POD0 ! interface TenGigE0/0/0/0 ! segment-routing srv6 !!Configuring Layer2 Attachment Circuits & Bridge-DomainWe need to configure an l2transport sub-interface (on the PE-CE link) with the appropriate VLAN encapsulation. This tutorial shows a VLAN-based service with VLAN ID 200. We are not showing any VLAN translation operations (rewrite commands) as they are out of the scope of this tutorial.PE1, PE4 and PE5interface TenGigE0/0/0/0.2 l2transport encapsulation dot1q 200The Layer 2 sub-interface created above now needs to be stitched to the EVI by using the l2vpn bridge-domain service construct as shown below. Note that the segment-routing srv6 keyword is a must here. We can also specify a locator if we wish to use a different locator for the bridge-domain.PE1, PE4 and PE5l2vpn bridge group POD0 bridge-domain POD0 interface TenGigE0/0/0/0.2 ! evi 200 segment-routing srv6 ! ! !!Verification StepsVerifying EVPN ELAN control PlaneThe very first step is to verify whether the configured bridge-domain is in the Up state on all the PE nodes. For brevity we have included the verification outputs only from PE5.RP/0/RP0/CPU0#LABSP-3393-PE5#show l2vpn bridge-domain brief Fri May 12 04#35#22.647 UTCLegend# pp = Partially Programmed.Bridge Group#Bridge-Domain Name ID State Num ACs/up Num PWs/up Num PBBs/up Num VNIs/up-------------------------------- ----- -------------- ------------ ------------- ----------- -----------POD0#POD0 1 up 1/1 0/0 0/0 0/0 The next step is to see the programmed SRv6 SIDs for the service we configured. 
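On a node carrying many locators and SIDs, the same output can be narrowed down to just the service SIDs with a standard include filter, for example show segment-routing srv6 sid | include uDT2 (a small convenience only; the full output from PE5 is shown below).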
For each EVI , there are two SRv6 SID programmed, uDT2U and uDT2M for Unicast and BUM traffic respectively.RP/0/RP0/CPU0#LABSP-3393-PE5#show segment-routing srv6 sid Fri May 12 04#38#57.345 UTC*** Locator# 'POD0' *** SID Behavior Context Owner State RW-------------------------- ---------------- -------------------------------- ------------------ ----- --fcbb#bb00#5## uN (PSP/USD) 'default'#5 sidmgr InUse Y fcbb#bb00#5#e000## uA (PSP/USD) [BE35, Link-Local]#0#P isis-1 InUse Y fcbb#bb00#5#e001## uA (PSP/USD) [BE35, Link-Local]#0 isis-1 InUse Y fcbb#bb00#5#e002## uA (PSP/USD) [BE25, Link-Local]#0#P isis-1 InUse Y fcbb#bb00#5#e003## uA (PSP/USD) [BE25, Link-Local]#0 isis-1 InUse Y fcbb#bb00#5#e004## uA (PSP/USD) [BE45, Link-Local]#0#P isis-1 InUse Y fcbb#bb00#5#e005## uA (PSP/USD) [BE45, Link-Local]#0 isis-1 InUse Y fcbb#bb00#5#e006## uDT2U 200#0 l2vpn_srv6 InUse Yfcbb#bb00#5#e007## uDT2M 200#0 l2vpn_srv6 InUse YThe same SIDs can also be verified using the EVI detail CLI.RP/0/RP0/CPU0#LABSP-3393-PE5#show evpn evi vpn-id 200 detail Fri May 12 06#04#53.970 UTCVPN-ID Encap Bridge Domain Type ---------- ---------- ---------------------------- ------------------- 200 SRv6 POD0 EVPN Stitching# Regular Unicast SID# fcbb#bb00#5#e006## Multicast SID# fcbb#bb00#5#e007## E-Tree# Root Forward-class# 0 Advertise MACs# Yes Advertise BVI MACs# No Aliasing# Enabled UUF# Enabled Re-origination# Enabled Multicast# Source connected # No IGMP-Snooping Proxy# No MLD-Snooping Proxy # No BGP Implicit Import# Enabled VRF Name# SRv6 Locator Name# POD0 Preferred Nexthop Mode# Off BVI Coupled Mode# No BVI Subnet Withheld# ipv4 No, ipv6 No RD Config# none RD Auto # (auto) 5.5.5.5#200 RT Auto # 100#200 Route Targets in Use Type ------------------------------ --------------------- 100#200 Import 100#200 Export The above output shows the two SIDs for two types of traffic for the EVI. We can check details of the SIDs using show segment-routing srv6 sid <> detail commands as below#RP/0/RP0/CPU0#LABSP-3393-PE5#show segment-routing srv6 sid fcbb#bb00#5#e006## detail Fri May 12 06#11#15.177 UTC*** Locator# 'POD0' *** SID Behavior Context Owner State RW-------------------------- ---------------- -------------------------------- ------------------ ----- --fcbb#bb00#5#e006## uDT2U 200#0 l2vpn_srv6 InUse Y SID Function# 0xe006 SID context# { evi=200, opaque-id=0 } Locator# 'POD0' Allocation type# Dynamic Created# May 11 06#29#44.913 (23#41#30 ago) Verifying EVPN ELAN Data Plane and MAC LearningsThe CE nodes are configured in the same L2 subnets which we want to stitch using the EVPN service. Below are the IP configurations on each CE. 
CE1 CE2 CE3 \t\t\t\t\t\t\tinterface TenGigE0/0/0/0.2 \tipv4 address 200.0.0.1 255.255.255.0 \tencapsulation dot1q 200\t!\t\t\t\t\t\t \t\t\t\t\t\t\tinterface TenGigE0/0/0/0.2 \tipv4 address 200.0.0.2 255.255.255.0 \tencapsulation dot1q 200\t!\t\t\t\t\t\t \t\t\t\t\t\t\tinterface TenGigE0/0/0/0.2 \tipv4 address 200.0.0.3 255.255.255.0 \tencapsulation dot1q 200\t!\t\t\t\t\t\t We can now try verifying end-to-end ping from CE1 to the other CE nodes to confirm the working of the EVPN service.\t\t\t\t\tRP/0/RP0/CPU0#LABSP-3393-CE1#ping 200.0.0.1Fri May 12 05#22#43.472 UTCType escape sequence to abort.Sending 5, 100-byte ICMP Echos to 200.0.0.1, timeout is 2 seconds#!!!!!Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 msRP/0/RP0/CPU0#LABSP-3393-CE1#ping 200.0.0.2Fri May 12 05#22#44.911 UTCType escape sequence to abort.Sending 5, 100-byte ICMP Echos to 200.0.0.2, timeout is 2 seconds#!!!!!Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 msRP/0/RP0/CPU0#LABSP-3393-CE1#ping 200.0.0.3Fri May 12 05#22#46.593 UTCType escape sequence to abort.Sending 5, 100-byte ICMP Echos to 200.0.0.3, timeout is 2 seconds#!!!!!Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 ms\t\t\t\t\t\tAs the packets went through the EVPN PEs, we will see the respective MAC addresses learnt locally and via EVPN. The below CLI snippets shows the learnings on PE5.\t\t\t\t\tRP/0/RP0/CPU0#LABSP-3393-PE5#show evpn evi vpn-id 200 mac Fri May 12 06#32#40.216 UTCVPN-ID Encap MAC address IP address Nexthop Label SID ---------- ---------- -------------- ---------------------------------------- --------------------------------------- -------- ---------------------------------------200 SRv6 00bc.6016.5800 ## fcbb#bb00#1##1 IMP-NULL fcbb#bb00#1#e000## 200 SRv6 00bc.6024.c400 ## fcbb#bb00#4##1 IMP-NULL fcbb#bb00#4#e000## 200 SRv6 00bc.6027.6400 ## TenGigE0/0/0/0.2 0 fcbb#bb00#5#e006## \t\t\t\t\tWe can see one locally learnt MAC on the UNI side and two remote MACs learnt from the peer PEs. The same can also be seen in the EVPN RT2 received.\t\t\t\t\tRP/0/RP0/CPU0#LABSP-3393-PE5# show bgp l2vpn evpn rd 1.1.1.1#200Fri May 12 06#36#10.445 UTCBGP router identifier 5.5.5.5, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0x0BGP main routing table version 218BGP NSR Initial initsync version 1 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsStatus codes# s suppressed, d damped, h history, * valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discardOrigin codes# i - IGP, e - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight PathRoute Distinguisher# 1.1.1.1#200*>i[2][0][48][00bc.6016.5800][0]/104 fcbb#bb00#1##1 100 0 i*>i[3][0][32][1.1.1.1]/80 fcbb#bb00#1##1 100 0 iProcessed 2 prefixes, 2 pathsRP/0/RP0/CPU0#LABSP-3393-PE5#show bgp l2vpn evpn rd 4.4.4.4#200Fri May 12 06#36#28.768 UTCBGP router identifier 5.5.5.5, local AS number 100BGP generic scan interval 60 secsNon-stop routing is enabledBGP table state# ActiveTable ID# 0x0BGP main routing table version 218BGP NSR Initial initsync version 1 (Reached)BGP NSR/ISSU Sync-Group versions 0/0BGP scan interval 60 secsStatus codes# s suppressed, d damped, h history, * valid, > best i - internal, r RIB-failure, S stale, N Nexthop-discardOrigin codes# i - IGP, e - EGP, ? 
- incomplete Network Next Hop Metric LocPrf Weight PathRoute Distinguisher# 4.4.4.4#200*>i[2][0][48][00bc.6024.c400][0]/104 fcbb#bb00#4##1 100 0 i*>i[3][0][32][4.4.4.4]/80 fcbb#bb00#4##1 100 0 iProcessed 2 prefixes, 2 pathsSummaryThis concludes Part 6 of our SRv6 tutorial series explaining multipoint L2 service over SRv6 transport with EVPN control plane (EVPN ELAN). Stay tuned for our upcoming tutorials.", "url": "/tutorials/srv6-transport-on-ncs-part-6/", "author": "Paban Sarma", "tags": "" } , "#": {} , "#": {} , "tutorials-introducing-nc57-48q2d-lc": { "title": "Introducing NC57-48Q2D : Low Speed NCS 5700 Line Card", "content": " Paban Sarma, Technical Marketing Engineer, Cisco (pasarma@cisco.com) Bala Murali Krishna Sanka, Technical Marketing Engineer, Cisco (bsanka@cisco.com) IntroductionThe Cisco NCS 5500 Series Modular Platforms offer industry-leading 100 GbE and 400 GbE port density to handle massive traffic growth. Latest trends in the Metro Architecture have driven the evolution of the product portfolio. Starting with XR release 7.0.2, the NC57 line cards were introduced in the platform to support higher speeds and flexible scalability. With IOS-XR 7.10.1, two new line card variants are introduced, viz. NC57-48Q2D-S and NC57-48Q2D-SE-S, bringing a highly scalable, dense, low-speed line card to the NCS 5700 modular line card family.Figure 1# NC57-48Q2D-S/NC57-48Q2D-SE-S Front ViewNC57 Series Modular Line Cards for NCS 55xx ChassisThe NC57 Series Line Cards brought support for 400G with enhanced scalability to the NCS 5500 Modular Chassis Family. The family of line cards now offers speeds ranging from 1G to 400G at unmatched scalability and functionality. The table below summarises the NC57 series line card family as of today.Table 1# List of NC57 Line Cards for NCS 55xx Modular Chassis Line Card PID Line Card Description FCS Release NC57-18DD-SE 18x400G or 30x200/100G Scale LC IOS XR 7.0.2 NC57-24DD 24x400G Base LC IOS XR 7.0.2 NC57-36H-SE 36x100G Scale Line Card IOS XR 7.3.1 NC57-36H6D-S 24x400G + 12 Flex Port (6x400GE or 12x200GE/100GE) Base LC IOS XR 7.4.1 NC57-MOD-S 2x400G + 8x50G + 2xMPA Base LC IOS XR 7.6.1 NC57-48Q2D-S 32x25G+16x50G+2x400G Base LC IOS XR 7.10.1 NC57-48Q2D-SE-S 32x25G+16x50G+2x400G Scale LC IOS XR 7.10.1 NC57-48Q2D Video IntroductionNC57-48Q2D Line Card SpecificationThe new NCS 5700 line card variants are built to bring a dense low-speed option to the fold. The NC57-48Q2D-(SE)-S line card comprises multirate SFP28, SFP56 & QSFP56-DD ports. The 32xSFP28 ports can work at 1/10/25G, the 16xSFP56 ports can work at 1/10/25/50G and the 2xQDD ports work at 40/100/200/400G with breakout support.Figure 2# NC57-48Q2D-(SE)-S Line Card Ports viewNote# 1G support on the SFP56 ports will be available in a later releaseFront Panel Ports and SpeedThe NC57-48Q2D line card is built to serve as a dense low-speed aggregation line card and is the very first line card in the NCS 5700 modular family to support native 1G. All the front panel ports are multirate in nature and the speed ranges from 1G to 400G.SFP28 PortsAs described in Figure 2, the first 32 ports (P0 to P31) are of SFP28 form factor and work at 1/10/25G speed. By default, the ports come up at 25G and the speed can be changed per port with the controller optics command.SFP56 PortsThe next 16 ports (P32 to P47) can support a speed of up to 50G. 
These are true multirate ports capable of supporting 1/10/25/50G.Note# 1G support on the SFP56 ports will be available in a later releaseQSFP56DD PortsApart from the low-speed SFP form factors, the line card variant is also equipped with 2 QSFP56DD ports supporting up to 400G. The ports are backward compatible with QSFP+/QSFP28 and support 40G/100G, 200G and breakout options.Internal ArchitectureFigure 3# NC57-48Q2D-S/NC57-48Q2D-SE-S Building BlocksThe NC57-48Q2D-(SE)-S line card is built with a single J2C ASIC and offers a massive throughput of 2.4 Tbps/1BPPS. The scaled variant of the line card, NC57-48Q2D-SE-S, comes with an OP2 external TCAM that assists in achieving higher prefix and service scale. All the front panel ports of the line cards are connected to network IFs of the NPU via a PHY element which works as a retimer to set interface speeds and also enables MACsec capability on all the ports.The first set of 32 SFP28 ports comes from a set of 32x25GE NIFs on the ASIC via 2x PHY elements. The next set of 16xSFP56 ports is also connected via 2x PHY elements, each taking 16x25GE NIF lanes and connecting 8xSFP56 ports at the faceplate. Thus each individual port works at 1/10/25/50G multirate.The QSFP-DD ports take 16x50GE NIFs, and each group of 8x50GE NIFs is converted to one QDD 400G port at the faceplate. This is a true multirate port and natively works at 40/100/200/400G speeds. It can also be used in various breakout mode combinations.Figure 4# NC57-48Q2D-S/NC57-48Q2D-SE-S Supported Port Speed & Breakout OptionsPort Speeds & Breakout OptionsBy default, the NC57-48Q2D-(SE)-S line card comes up as a 32x25G+16x50G+2x400G line card. The following snippet shows the default port speeds. RP/0/RP0/CPU0#NCS5508-II9-43#show ipv4 int br | in E0/5Mon Aug 29 19#57#58.648 UTCTwentyFiveGigE0/5/0/0 unassigned Shutdown Down default TwentyFiveGigE0/5/0/1 unassigned Shutdown Down default TwentyFiveGigE0/5/0/2 unassigned Shutdown Down default TwentyFiveGigE0/5/0/3 unassigned Shutdown Down default TwentyFiveGigE0/5/0/4 unassigned Shutdown Down default TwentyFiveGigE0/5/0/5 unassigned Shutdown Down default TwentyFiveGigE0/5/0/6 unassigned Shutdown Down default TwentyFiveGigE0/5/0/7 unassigned Shutdown Down default TwentyFiveGigE0/5/0/8 unassigned Shutdown Down default TwentyFiveGigE0/5/0/9 unassigned Shutdown Down default TwentyFiveGigE0/5/0/10 unassigned Shutdown Down default TwentyFiveGigE0/5/0/11 unassigned Shutdown Down default TwentyFiveGigE0/5/0/12 unassigned Shutdown Down default TwentyFiveGigE0/5/0/13 unassigned Shutdown Down default TwentyFiveGigE0/5/0/14 unassigned Shutdown Down default TwentyFiveGigE0/5/0/15 unassigned Shutdown Down default TwentyFiveGigE0/5/0/16 unassigned Shutdown Down default TwentyFiveGigE0/5/0/17 unassigned Shutdown Down default TwentyFiveGigE0/5/0/18 unassigned Shutdown Down default TwentyFiveGigE0/5/0/19 unassigned Shutdown Down default TwentyFiveGigE0/5/0/20 unassigned Shutdown Down default TwentyFiveGigE0/5/0/21 unassigned Shutdown Down default TwentyFiveGigE0/5/0/22 unassigned Shutdown Down default TwentyFiveGigE0/5/0/23 unassigned Shutdown Down default TwentyFiveGigE0/5/0/24 unassigned Shutdown Down default TwentyFiveGigE0/5/0/25 unassigned Shutdown Down default TwentyFiveGigE0/5/0/26 unassigned Shutdown Down default TwentyFiveGigE0/5/0/27 unassigned Shutdown Down default TwentyFiveGigE0/5/0/28 unassigned Shutdown Down default TwentyFiveGigE0/5/0/29 unassigned Shutdown Down default TwentyFiveGigE0/5/0/30 unassigned Shutdown Down default TwentyFiveGigE0/5/0/31 unassigned 
Shutdown Down default FiftyGigE0/5/0/32 unassigned Shutdown Down default FiftyGigE0/5/0/33 unassigned Shutdown Down default FiftyGigE0/5/0/34 unassigned Shutdown Down default FiftyGigE0/5/0/35 unassigned Shutdown Down default FiftyGigE0/5/0/36 unassigned Shutdown Down default FiftyGigE0/5/0/37 unassigned Shutdown Down default FiftyGigE0/5/0/38 unassigned Shutdown Down default FiftyGigE0/5/0/39 unassigned Shutdown Down default FiftyGigE0/5/0/40 unassigned Shutdown Down default FiftyGigE0/5/0/41 unassigned Shutdown Down default FiftyGigE0/5/0/42 unassigned Shutdown Down default FiftyGigE0/5/0/43 unassigned Shutdown Down default FiftyGigE0/5/0/44 unassigned Shutdown Down default FiftyGigE0/5/0/45 unassigned Shutdown Down default FiftyGigE0/5/0/46 unassigned Shutdown Down default FiftyGigE0/5/0/47 unassigned Shutdown Down default FourHundredGigE0/5/0/48 unassigned Shutdown Down default FourHundredGigE0/5/0/49 unassigned Shutdown Down default The speeds of each port can be set individully using the controller optics configuration. Different supported breakout modes are also configured under the controller optics configuration for the QDD ports.Snippet showing 100g speed on QDD port controller Optics0/5/0/48\t\t speed 100g\t\t! RP/0/RP0/CPU0#NCS5508-II9-43#sho ipv4 int br | i 0/5/0/48\t Mon Aug 29 20#08#46.900 UTC\t HundredGigE0/5/0/48 unassigned Down Down default Snippet showing 4x100G breakout on QDD port controller Optics0/5/0/48 breakout 4x100RP/0/RP0/CPU0#NCS5508-II9-43#sho ipv4 int br | i 0/5/0/48Mon Aug 29 20#07#42.253 UTC HundredGigE0/5/0/48/0 unassigned Down Down default HundredGigE0/5/0/48/1 unassigned Down Down default HundredGigE0/5/0/48/2 unassigned Down Down default HundredGigE0/5/0/48/3 unassigned Down Down default Snippet showing 4x25G breakout on QDD port controller Optics0/5/0/48 speed 100g breakout 4x25RP/0/RP0/CPU0#NCS5508-II9-43#show ipv4 int br | in 0/5/0/48Tue Aug 30 17#36#51.439 UTCTwentyFiveGigE0/5/0/48/0 unassigned Down Down default TwentyFiveGigE0/5/0/48/1 unassigned Down Down default TwentyFiveGigE0/5/0/48/2 unassigned Down Down default TwentyFiveGigE0/5/0/48/3 unassigned Down Down default Snippet showing changing speed for SFP56 ports controller Optics0/5/0/32 speed 25g!controller Optics0/5/0/33 speed 10g!RP/0/RP0/CPU0#NCS5508-II9-43#show ipv4 int brief | in 0/5/0/32Tue Aug 30 18#25#21.057 UTCTwentyFiveGigE0/5/0/32 unassigned Down Down default RP/0/RP0/CPU0#NCS5508-II9-43#show ipv4 int brief | in 0/5/0/33Tue Aug 30 18#25#22.564 UTCTenGigE0/5/0/33 unassigned Down Down default On the NC5748Q-2D series line cards, port speeds are reset based on inserted optics as well. For example, when 1G SFP is inserted on the SFP28 ports or a 10G or 25G optics is inserted in the SFP 56 ports, the corresponding ports will be set to a GigabitE or TenGE or TwentyFiveGE port respectively. However, it is best practice to configure the speed manually, as the reset doesn’t happen after removing the optics from the port. 
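For instance, an SFP56 port can be pinned explicitly with the same controller optics syntax shown above (the port number here is purely illustrative)# controller Optics0/5/0/36 speed 25g. The snippets below show what happens when no such configuration is present and the port speed is instead derived from the inserted optics.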
RP/0/RP0/CPU0#NCS5508-II9-43#show run controller optics 0/5/0/34Tue Aug 30 21#22#51.676 UTC% No such configuration item(s)RP/0/RP0/CPU0#NCS5508-II9-43#show inventory | in 25G Tue Aug 30 21#23#02.160 UTCNAME# ~0/5~, DESCR# ~NCS 5700 32x1/10/25G + 16x1/10/25/50G + 2x400G Line Card BASE~NAME# ~TwentyFiveGigE0/5/0/34~, DESCR# ~Cisco SFP28 25G SR Pluggable Optics Module~PID# SFP-25G-SR-S , VID# V01, SN# INL2453AF2MRP/0/RP0/CPU0#NCS5508-II9-43#show ipv4 int br | in 0/5/0/34Tue Aug 30 21#23#20.934 UTCTwentyFiveGigE0/5/0/34 unassigned Shutdown Down default RP/0/RP0/CPU0#NCS5508-II9-43#show inventory | in 1G Tue Aug 30 21#25#08.065 UTCNAME# ~GigabitEthernet0/5/0/20~, DESCR# ~Cisco SFP 1G 1000BASE-SX Pluggable Optics Module RP/0/RP0/CPU0#NCS5508-II9-43#show ipv4 int br | in 0/5/0/20Tue Aug 30 21#25#20.803 UTCGigabitEthernet0/5/0/20 unassigned Shutdown Down default RP/0/RP0/CPU0#NCS5508-II9-43#show run controller optics 0/5/0/20Tue Aug 30 21#25#29.688 UTC% No such configuration item(s)RP/0/RP0/CPU0#NCS5508-II9-43#show run controller optics 0/5/0/48Tue Aug 30 21#28#35.462 UTC% No such configuration item(s)RP/0/RP0/CPU0#NCS5508-II9-43#show ipv4 int br | in 0/5/0/48Tue Aug 30 21#28#46.336 UTCFortyGigE0/5/0/48 unassigned Shutdown Down default RP/0/RP0/CPU0#NCS5508-II9-43#show inventory | in FortyTue Aug 30 21#28#52.485 UTCNAME# ~FortyGigE0/5/0/48~, DESCR# ~Cisco QSFP+ 40G SR4 Pluggable Optics Module~RP/0/RP0/CPU0#NCS5508-II9-43# MACsec & TimingThe NC57-48Q2D series line card supports MACsec encryption on all the ports. This is achieved by the use of the PHY element. The following table summarizes MACsec support at different port speeds for each type of port. SFP28 ports 0-31 SFP56 ports 32-47 QDD56 Ports 48,49 1G, 10G, 25G 10G, 25G, 50G 4x10G/40G/4x25G/100G/2x100G/4x100G/400G Note# 1G MACsec support on the SFP56 ports (32-47) is not available in the initial release.These line card variants can support industry standard Class-C timing on all the ports when the modular chassis is equipped with the NC55-RP2-E card.Note that Class-C timing support in J2 native mode is on the software roadmapScalability & Use CasesThe new dense low-speed line card comes with high scalability and is powered by the flexibility of MDB profiles. By default, in J2 native mode the base line card runs the L3MAX profile and the scale line card runs the L3MAX-SE profile. Since this is a modular platform, an -SE profile is useful only when all the line cards in the system are eTCAM variants. The table below summarizes the supported MDB profiles for the line card variants. LC variant L3MAX L3MAX-SE L2MAX L2MAX-SE NC57-48Q2D-S Yes No Yes No NC57-48Q2D-SE-S Yes Yes Yes Yes With the L3MAX-SE profile, the NC57-48Q2D-SE-S can hold up to 5M IPv4 or 3M IPv6 prefixes. Also, the presence of eTCAM on this card enables additional features, e.g. BGP FlowSpec, strict uRPF etc.Summary & ReferencesWith the addition of the NC57-48Q2D line cards to the existing portfolio of 400G, 100G and modular line cards, the NCS 5500 modular systems can be deployed in a diverse set of use cases and unleash the full power of Jericho 2 native mode. More information on the line card and the NCS 5500/5700 product family can be found in the below links. 
NCS 5500/5700 @ Cisco.com NCS5500/5700 Modular WhitePaper", "url": "/tutorials/introducing-nc57-48q2d-lc/", "author": "Paban Sarma", "tags": "iosxr, NCS 5500, NCS 5700" } , "tutorials-2024-04-24-next-gen-utility-wan-arch": { "title": "Next Generation Utility WAN Architecture", "content": "OverviewIn an era where digital transformation is paramount, utilities are actively modernizing their network infrastructure to harness the evolving advantages of grid digitization and substation automation, with a focus on operational simplicity, scalability and efficient power management. At the forefront of this modernization effort is the next-generation utility wide area network (WAN) architecture, leveraging the new-age transport and services. The simplification provided by this new-age networking paradigm incorporating Cisco’s state-of-the-art technologies is tailored to fulfill the growing demands of substation automation. By undertaking this journey in digitization, utilities not only enhance their operational efficiency through improved connectivity and performance but also achieve cost savings, thereby establishing a new benchmark for resilient and robust networking within the industry.The transition from IP MPLS transport to Segment Routing (SR), based on the source routing paradigm, evolves the transport by collapsing multiple layers of technology into the simplified SR with IGP extensions. SR can operate with either an MPLS or an IPv6 forwarding data plane. Currently, utilities are shifting from IP MPLS to SR-MPLS.In the realm of services, a significant shift is underway from traditional L2VPN services, which rely on LDP data plane based signaling, to EVPN that leverages MP-BGP control plane signaling. The adoption of EVPN by today’s utilities eliminates the redundancy of a separate LDP data plane for layer 2 services and reinforces the move towards simplification. EVPN offers substantial benefits to operators, including a wide range of multihoming and load balancing capabilities, superior scalability, and robust loop avoidance mechanisms.Broadly, utilities’ services are classified into three categories# Layer 3 Substation to Datacenter# IP based supervisory control and data acquisition (SCADA) data, IP based CCTV, enterprise data and IP telephony Layer 2 Substation to Substation for Ethernet based Teleprotection# Layer 2 multicast protocols (IEC61850 GOOSE & SV), Virtual machine migrations (for virtualized applications) and third-party SCADA traffic Layer 2 Substation to Substation for Traditional Teleprotection# Power Protection point-to-point services using specific utility protocols over non-Ethernet connections As the first phase of network transformation, L3 services are now delivered over SR-MPLS based transport, connecting the substation to the Datacenter. eBGP peering is implemented between Cisco’s IR8340 router as CE and Cisco’s NCS540 router as PE at either site. Considering that the core capacity requirement of utilities is not expected to surpass 50G in the near future, the low-density compact NCS540 from the Cisco NCS portfolio is the transport platform of choice for the core infrastructure. For utilities demanding sub-50ms convergence for any Layer 3 service, SR’s Topology-Independent Loop-Free Alternate Fast Reroute (TI-LFA FRR) capabilities can be activated.Layer 2 TeleprotectionA key requirement of utility WAN transport networks is path predictability for latency sensitive applications such as Teleprotection. 
To this end, utilities require the ability to co-route traffic over a bidirectional and defined path in the network, minimizing the asymmetry delay for both the active and protection paths. This requirement mirrors the behavior of traditional synchronous optical network (SONET) and synchronous digital hierarchy (SDH) networks that operated with unidirectional path switched ring (UPSR) and subnetwork connection protection (SNC-P).In traditional IP MPLS based networks, Flex LSP is leveraged to meet utilities’ Teleprotection requirements. Flex LSP facilitates the establishment of bidirectional label-switched paths (LSPs) dynamically through the Resource Reservation Protocol–Traffic Engineering (RSVP-TE). Call Admission Control (CAC) ensures that the LSP has sufficient bandwidth to accommodate the Layer 2 circuit.With the evolution in transport, SR now offers path predictability required for utility WAN Layer 2 Teleprotection services by leveraging the latest Circuit-style Segment Routing Traffic Engineering (CS SR-TE) capabilities. CS SR-TE offers bidirectional co-routed path behavior, end-to-end path protection + restoration and strict bandwidth guarantees. CS SR-TE therefore provides deterministic path behavior aligning with utility WAN network requirements.Latest CS SR-TE enhancements guarantee sub-50 ms switching times in the event of failures, a crucial requirement for utility WAN Teleprotection. Segment Routing Performance Measurement (SR-PM) offers the functionality of path protection via liveness detection for the working and protect paths. Recent innovations in SR-PM enables offload of this path liveness-check functionality from software to hardware, thereby enforcing the strict convergence times, as required for utilities’ Teleprotection applications.L2 Teleprotection use cases are delivered over EVPN VPWS point-to-point service between the substation PEs stitched to the underlying CS SR-TE policy, thereby offering circuit-like performance. Herein CE IE9300 is single homed to PE NCS540 at each substation end. Path predictability is therefore guaranteed with a deterministic next hop PE at the remote end, underpinned by CS SR-TE’s co-routed bidirectional path behavior.Layer 2 Teleprotection service with Circuit-style SR-TECisco is partnering with Schweitzer Engineering Laboratories and Valiant Communications to deliver traditional Teleprotection services between substations involving non-Ethernet interfaces.SEL’s ICON platform provides support for non-Ethernet interfaces e.g. T1/E1, C37.94, RS-232 Async, E&M signalling, etc., required for utilities’ specific substation protection components to deliver Teleprotection services. ICON simultaneously provides Ethernet uplink interfacing to Cisco’s transport platform NCS540. NCS540 PEs form a ring interconnecting substations across the core, with an EVPN-VPWS services steered to an SR-TE policy between back-to-back PEs. The critical operational technology (OT) traffic for Teleprotection is then carried over the dedicated point-to-point paths in a single VLAN. The ring architecture ensures network resilience with fast convergence to the backup path on the ring between any two substations in the event of a failure, provided by a proprietary mechanism on SEL ICON.NCS540 + SEL ICON solution for non-Ethernet based TeleprotectionCisco has also validated solutions to deliver Teleprotection in collaboration with Valiant Communications, specifically IEEE C37.94 Line Differential Protection and Distance Protection solutions over SR-MPLS Transport. 
Here again, the vendor device supports non-Ethernet connections within the substation and interfaces to NCS540 over Ethernet. VCL-2711 is the IEEE C37.94 over IP/MPLS transmission equipment that provides Line Differential Protection, and VCL-TP is the Distance Protection over IP/MPLS transmission equipment that provides support for 8 Binary Commands. The architecture relies on the fast convergence offered by the underlying SR Transport. For example, CS SR-TE can be implemented to enforce bidirectional co-routed path behaviour, with an EVPN-VPWS service stitched to the SR-TE policy.NCS540 + VCL-2711 C37.94 Line Differential solution over SR-MPLSNCS540 + VCL-TP Distance Protection solution over SR-MPLSSEL/Valiant’s expertise in utilities’ protection and automation equipment, combined with Cisco’s technology leadership in transport offers a comprehensive solution to address non-Ethernet based Teleprotection.References Cisco Validated Design Guide Implementation Guide Next generation digital substation WAN blog Solution BriefProduct Details Cisco Network Convergence System (NCS) 540 Series Cisco Catalyst IR8300 Rugged Series Router Cisco Catalyst IE9300 Rugged Series Switches SEL ICON Integrated Communications Optical Network Platform Valiant Comm. IEEE C37.94 over IP/MPLS Platform Valiant Comm. Teleprotection over IP/MPLS Platform", "url": "/tutorials/2024-04-24-next-gen-utility-wan-arch/", "author": "Ananya Bose", "tags": "" } , "tutorials-srv6-transport-on-ncs-part-7": { "title": "SRv6 on NCS 500/5500: SRv6 QoS", "content": "OverviewIn our previous tutorials in this series, we covered various aspects on SRv6 transport implementation on the NCS 500/5500/5700 series platforms. In this tutorial, we will cover another important aspect, i.e. QoS propagation for SRv6 transport on the NCS 500 and NCS 5500 series platforms. The following figure shows a typical SRv6 encapsulated traffic and from the same it is evident that managing core quality of service for an SRv6 transport network is as simple as managing IPv6 QoS. The simplest way would be to manage the same, by use of the IPv6 DSCP or Precedence field in the SRv6 encapsulation field.Figure 1# Explaining QoS Bits in the SRv6 (IPv6) HeaderSRv6 QoS Options/ModesThe following table summerizes the qos modes available. Mode # Ingress Policy-map L2VPN VPNv4 VPNv6 1# Default NA TC ==0 TC == 0 TC ==0 2# Propagation NA IPv6 Prec == PCP IPv6 DSCP = IPv4 DSCP IPv6 TC == IPv6 TC 3# Precedence set qos-group X IPv6 Prec == X IPv6 Prec == X IPv6 Prec == X 4# DSCP set ip encapsulation class-of-service X IPv6 DSCP == X IPv6 DSCP == X IPv6 DSCP == X 1# DefaultComing to platform implementation on the NCS 500/5500 and 5700 series routers, by default the QoS fields in the SRv6 header are not set.Figure 2# Explaining QoS Default Mode2# Propagation ModeIn this mode QoS bits from the actual payload is propagated to the imposed SRv6 header QoS field in the following manner# L2VPN # PCP bits of the l2 vlan header are copied to IPv6 Precedence field of the SRv6 header VPNv4 # DSCP bits of the IPv4 header are copied to IPv6 DSCP field of SRv6 header VPNv6 # DSCP (TC) bits of the IPv6 header copied to IPv6 DSCP (TC) field of SRv6 headerFigure 3# Explaining QoS propagation Mode3# Ingress Policy Map for IPv6 precedenceIn this mode we can apply an ingress policy-map on the UNI i.e. the interface where customer traffic is entering the ingress PE. 
The use of set qos-group <0-7> within the classes of the policy-map sets the IPv6 Precedence corresponding to the qos-group value.This Mode is available from IOS XR 7.7.x. The below policy-map for example will set IPv6 precedence values as 7, 5 and 1 respectively for the traffic matching PRIO, DATA and default classes while egressing out of the Core interface.policy-map srv6-qos-group class PRIO set qos-group 7 ! class DATA set qos-group 5 ! class class-default set qos-group 1 ! end-policy-map! endFigure 4# Explaining Policy-Map based DSCP with qos-group4# Ingress Policy Map for IPv6 DSCPIn this mode we can apply an ingress policy-map on the UNI i.e the interface where customer traffic is entering the ingress PE. There is a new modular qos CLI (MQC) introduced to use set ip encapsulation class-of-service <0-63> within the classes of the policy-map. This marking action sets the IPv6 DSCP values corresponding to the class-of-service value. This mode brings in more granularity to the QoS options within the SRv6 Core.This Mode is available from IOS XR 24.2.x . The below policy-map for example will set IPv6 DSCP values as 56 (cs7), 40 (cs5) and 8(cs1) respectively for the traffic matching PRIO, DATA and default classes while egressing out of the Core interface.policy-map srv6-ip-encap class PRIO set ip encapsulation class-of-service 56 ! class DATA set ip encapsulation class-of-service 40 ! class class-default set ip encapsulation class-of-service 8 ! end-policy-mapFigure 5# Explaining Policy-Map based DSCP with ip encapsulation class-of-serviceConfigurations on NCS 5500/500 SystemsConfigurations on NCS 5500/500 system as discussed in our previous articles are done via hw-module profiles.Default ModeBy default, there is no traffic-class related configuration needed. Only the basic hw-module profile to enable SRv6 needs to configured.hw-module profile segment-routing srv6 mode micro-segment format f3216Propagation ModeThe hardware module profile is appended with traffic-class propagate option.hw-module profile segment-routing srv6 mode micro-segment format f3216 encapsulation traffic-class propagateIngress Policy Map for IPv6 precedencehw-module profile segment-routing srv6 mode micro-segment format f3216 encapsulation l2-traffic traffic-class propagate ! l3-traffic traffic-class policy-map !Along with the hardware module profile an ingress policy-map marking qos-group values must be applied to the UNI interface. In this article we will refer to the sample policy-map documented in previous section.interface TenG 0/0/0/1 service-policy input srv6-qos-group . . . . . .Ingress Policy Map for IPv6 DSCPhw-module profile segment-routing srv6 mode micro-segment format f3216 encapsulation traffic-class policy-map-extend Along with the hardware module profile an ingress policy-map marking qos-group values must be applied to the UNI interface. In this article we will refer to the sample policy-map documented in previous section.interface TenG 0/0/0/1 service-policy input srv6-ip-encap . . . . . .Configurations for NCS 5700 SystemsNCS 5700 platforms don’t need hw-module profile. QoS encapsulations options are configured under global “segment-routing srv6” configurations. 
The NCS 5700 also gives more flexibility, allowing the policy-map based modes and the propagation mode to be enabled selectively per service.Default ModeNo additional config is needed.Propagation ModePropagation mode on these platforms is configured under the global segment-routing configuration.segment-routing srv6 encapsulation traffic-class propagate !Ingress Policy Map for IPv6 precedenceFor this mode, we must have the propagation mode enabled under the global segment-routing configuration. Once that is in place, we can apply the policy-map on ingress to set the IPv6 precedence in the SRv6 header.segment-routing srv6 encapsulation traffic-class propagate !interface TenG 0/0/0/1 service-policy input srv6-qos-group . . . . . .Ingress Policy Map for IPv6 DSCPFor this mode, we must have the propagation mode enabled under the global segment-routing configuration. Once that is in place, we can apply the policy-map on ingress to set the IPv6 DSCP in the SRv6 header.segment-routing srv6 encapsulation traffic-class propagate !interface TenG 0/0/0/1 service-policy input srv6-ip-encap . . . . . .Restrictions & Best PracticesA few points to note# Setting of qos-group and ip encapsulation class-of-service is not supported together on the NCS 5500/500 and 5700 platforms. On the NCS 5500 & 500 platforms, set ip encapsulation class-of-service is allowed only when the specific hardware module profile is applied. A change in the SRv6 hardware module profile needs a reload of the router.ConclusionIn this article we discussed the different QoS modes available for SRv6 transport on the NCS 5500/5700 platforms as of the latest IOS XR 24.2.1 release. While propagation mode is the simplest way to maintain QoS with respect to the payload QoS, the use of IPv6 precedence or DSCP via the policy-map options (either qos-group or ip encapsulation class-of-service) gives better control and granularity over the QoS options for different services within the same PE.", "url": "/tutorials/srv6-transport-on-ncs-part-7/", "author": "Paban Sarma", "tags": "iosxr, SRv6, NCS 5500, NCS 5700" } , "#": {} , "tutorials-use-tcpdump-on-exr-for-control-plane-troubleshooting-part1": { "title": "Use tcpdump on eXR for control plane troubleshooting - part 1", "content": " Use tcpdump on eXR for control plane troubleshooting - part 1 Introduction Process debugs Interpreting process debugs Interface packet capture Tcpdump Control plane vocabulary Using tcpdump on RP punt interfaces Demo Video Conclusion IntroductionIn today’s fast-paced world, efficient and quick troubleshooting is a key skill for any network engineer. This article will shed some light on a few available tools used for investigating control plane protocol issues on IOS-XR devices. Each tool has its own strengths and limitations. Understanding them will help you select the best tool for a given task in order to get the most valuable data and save time while avoiding pitfalls.The most common tools are the process debugs, so let’s start by looking at how they work.Process debugsIn a nutshell, debugs are simply a series of messages that a process may display as a response to certain events. Debugs are user-friendly but can consume CPU, and despite being quite verbose, it is often necessary to run several debugs at once to get all the relevant data. Last but not least, to print debugs, a process would always require some kind of trigger, such as the reception of a control plane packet at the process level. Key takeaways Run by a CPU process at the application level. Easy to use with a similar syntax across different platforms. 
Often very verbose, and sometimes not very granular. Display data very specific to a given process. Require a valid trigger to display a debug message. Interpreting process debugsFor example, the debug output from a Border Gateway Protocol process for an incoming BGP packet suggests two things# First, the packet has been successfully received by our network interface. Second, the packet has reached the BGP process level.If debugs fail to show traces of our packets, it would prompt several questions# Did the BGP packets reach our process level? If not, where and why were they dropped? Most importantly, did these drops occur inside the device, or before reaching our network interface?Interface packet captureTo answer that last question, we can try capturing traffic directly from the network interface, to get the evidence of what is happening on the wire. Unfortunately, this approach isn’t always feasible due to a potentially high interface traffic volume or other factors such as the availability of packet capturing equipment. Key takeaways Collects packets from the device network interface. Acts as evidence of what is happening on the wire. May be challenging due to# High interface traffic. Availability or hardware capability of packet capturing equipment. So, what other options do we have when both process debugs and interface capture tools can’t provide us with the answers we’re looking for?I’m glad you asked!And in this first article, I’ll show you a lesser-known technique for collecting control plane packets with the built-in tcpdump tool. TcpdumpTcpdump is a common Unix utility for capturing network traffic, which can also be used internally on IOS-XR devices. Tcpdump is available on the 64-bit IOS-XR software, or eXR, running on Cisco platforms such as the ASR9000 or NCS5500.Before diving in, let me first clarify a few keywords we’ll be using going forward.Control plane vocabularyTo start with, any packet handled by the device local CPU is called “for us”. Next, let’s take a look at how these “for us” packets reach our process level. The incoming network interface traffic is always a combination of transit packets and some locally destined control plane packets. Incoming packets on the linecard or LC network interface will reach a Network Processor Unit or NPU, where “for us” packets are redirected towards the local CPU, an action called PUNT. To reach the linecard CPU, punted packets will follow a green pathway via a Control Ethernet Switch, acting as a bridge between CPU and NPU. The transit traffic will use the orange path towards the fabric. Control Ethernet Switches from each linecard are interconnected as an internal ethernet control plane network to which the Route Processor or RP has access. A punt interface is a generic term describing a control ethernet interface used to deliver control plane traffic to either the RP or the LC CPU.Using tcpdump on RP punt interfacesSo with all those definitions out of the way, the main idea now is to use tcpdump on the punt interfaces available from the RP or the LC shell.While access to the RP shell is straightforward, connecting to the LC shell is more platform-dependent. As such, this topic will require a separate article in the future to cover all aspects, so stay tuned.The most common use case for this technique is to capture routing protocol packets punted to the RP CPU. 
However, it can also be used for any other packets the CPU would receive, such as SNMP, NTP, LDP, etc.DemoLet me show you the next steps by using our lab NCS5500 router.RP/0/RP0/CPU0#FRETA3#show platformSun Sep 15 20#56#49.809 UTCNode Type State Config state--------------------------------------------------------------------------------0/0/CPU0 NC57-18DD-SE IOS XR RUN NSHUT0/0/NPU0 Slice UP 0/0/NPU1 Slice UP 0/1/CPU0 NC55-32T16Q4H-AT IOS XR RUN NSHUT0/1/NPU0 Slice UP 0/RP0/CPU0 NC55-RP2-E(Active) IOS XR RUN NSHUT0/FC1 NC55-5508-FC2 OPERATIONAL NSHUT0/FC3 NC55-5508-FC2 OPERATIONAL NSHUT0/FC5 NC55-5508-FC2 OPERATIONAL NSHUT0/FT0 NC55-5508-FAN2 OPERATIONAL NSHUT0/FT1 NC55-5508-FAN2 OPERATIONAL NSHUT0/FT2 NC55-5508-FAN2 OPERATIONAL NSHUT0/PM0 N9K-PAC-3000W-B OPERATIONAL NSHUT0/PM1 N9K-PAC-3000W-B OPERATIONAL NSHUT0/PM2 N9K-PAC-3000W-B OPERATIONAL NSHUT0/SC0 NC55-SC OPERATIONAL NSHUT0/SC1 NC55-SC OPERATIONAL NSHUT RP/0/RP0/CPU0#FRETA3#show install active summarySun Sep 15 20#56#52.036 UTCLabel # 7.10.2 Active Packages# 4 ncs5500-xr-7.10.2 version=7.10.2 [Boot image] ncs5500-isis-1.0.0.0-r7102 ncs5500-mpls-1.0.0.0-r7102 ncs5500-mcast-1.0.0.0-r7102For this simple test, we’ll use locally generated traffic, such as ICMP requests sourced from the RP, towards our own interface IP address 192.168.99.1. By using the loopback internal configuration on that interface, we can loop packets in such a way that they will appear to the NP as incoming from the physical interface itself.RP/0/RP0/CPU0#FRETA3#sh run int Te0/1/0/31Wed Oct 2 20#35#47.213 UTCinterface TenGigE0/1/0/31 ipv4 address 192.168.99.1 255.255.255.0 loopback internal!First, I’ll connect to the RP shell, using a run command from the XR command line.RP/0/RP0/CPU0#FRETA3#runSun Sep 15 20#56#57.093 UTC[xr-vm_node0_RP0_CPU0#~]$Second, I’ll confirm the punt interface for our capture, that interface is platform specific as you can see in this table. 
Platform Punt interface asr9000 eth-punt.1282 ncs5500 ps-inb.1538 NCS55A2 eth-vf2 NCS540 eth-vf2 NCS560 spp_br, but we’ll use ps-inb.1538, instead Instead of trying to remember it, the punt interface can be found by reading the bootstrap config file named calvados_bootstrap.cfg and searching for the GUEST1_PUNT_ETH string using Unix cat and grep commands.[xr-vm_node0_RP0_CPU0#~]$cat /etc/init.d/calvados_bootstrap.cfg | grep GUEST1_PUNT_ETHGUEST1_PUNT_ETH=ps-inb.1538The output shows the punt interface as ps-inb.1538 which we will use with -i flag to set the tcpdump capturing source interface, as our final step#[xr-vm_node0_RP0_CPU0#~]$tcpdump -i ps-inb.1538tcpdump# verbose output suppressed, use -v or -vv for full protocol decodelistening on ps-inb.1538, link-type EN10MB (Ethernet), capture size 262144 bytes20#58#06.243219 4e#41#50#00#10#01 (oui Unknown) > 4e#41#50#00#01#01 (oui Unknown), ethertype Unknown (0x876e), length 342# \t0x0000# ffff ffff 4350 5548 5800 0000 0000 0000 ....CPUHX.......\t0x0010# 0000 0000 0200 0060 006f 0000 0000 0000 .......`.o......\t0x0020# 0000 0000 0000 0000 e000 0000 0000 0000 ................\t0x0030# 0000 0000 0000 0000 0000 0000 0000 0000 ................\t0x0040# 0000 0070 001a 0001 0000 0000 4500 0064 ...p........E..d\t0x0050# 009b 0000 ff01 73aa c0a8 6301 c0a8 6301 ......s...c...c.\t0x0060# 0800 07c0 c6b8 009b abcd abcd abcd abcd ................\t0x0070# abcd abcd abcd abcd abcd abcd abcd abcd ................\t0x0080# abcd abcd abcd abcd abcd abcd abcd abcd ................\t0x0090# abcd abcd abcd abcd abcd abcd abcd abcd ................\t0x00a0# abcd abcd abcd abcd abcd abcd abcd abcd ................\t0x00b0# fd01 10df 0000 0038 0000 0098 0000 006c .......8.......l\t0x00c0# 0000 000c 0000 4000 e0f9 020a 647f 0000 ......@.....d...\t0x00d0# 0000 0001 0000 0000 0000 0000 0000 0000 ................\t0x00e0# 0000 0000 0000 0000 0000 0000 006b 0000 .............k..\t0x00f0# 0000 0000 0200 0060 6000 0000 e000 0000 .......``.......\t0x0100# 0000 0000 0000 0008 0000 001c 0000 001c ................\t0x0110# c0a8 6301 0000 0010 0000 0000 00ff 0100 ..c.............\t0x0120# 0000 0000 0000 0000 0000 0000 0000 0000 ................\t0x0130# 0000 0000 0000 0000 0000 0000 0000 0000 ................\t0x0140# 0000 0000 0000 0000 ........20#58#06.244265 4e#41#50#00#01#01 (oui Unknown) > 4e#41#50#00#10#01 (oui Unknown), ethertype Unknown (0x876e), length 178# \t0x0000# ffff ffff 4350 5548 3000 0000 0000 0000 ....CPUH0.......\t0x0010# 0001 0000 0000 0000 006f 0000 0200 0060 .........o.....`\t0x0020# 0000 0000 0000 0000 e000 0000 0000 0000 ................\t0x0030# 0000 0000 0000 0000 0000 0000 0000 0000 ................\t0x0040# 4500 0064 009b 0000 ff01 73aa c0a8 6301 E..d......s...c.\t0x0050# c0a8 6301 0000 0fc0 c6b8 009b abcd abcd ..c.............\t0x0060# abcd abcd abcd abcd abcd abcd abcd abcd ................\t0x0070# abcd abcd abcd abcd abcd abcd abcd abcd ................\t0x0080# abcd abcd abcd abcd abcd abcd abcd abcd ................\t0x0090# abcd abcd abcd abcd abcd abcd abcd abcd ................\t0x00a0# abcd abcd ....Unfortunately tcpdump couldn’t display those punted packets in an easy-to-read format, due to the presence of additional IOS-XR system headers.To overcome this, we will dump full packet, including all headers, in hexadecimal format, by using -xx flag, and then use hex patterns to search through the result. 
The full packet dump will include Ethernet headers, and therefore the 0x0000 address will represent the beginning of the frame.[xr-vm_node0_RP0_CPU0#~]$tcpdump -xxi ps-inb.1538tcpdump# verbose output suppressed, use -v or -vv for full protocol decodelistening on ps-inb.1538, link-type EN10MB (Ethernet), capture size 262144 bytes20#58#32.248474 4e#41#50#00#10#01 (oui Unknown) > 4e#41#50#00#01#01 (oui Unknown), ethertype Unknown (0x876e), length 342# \t0x0000# 4e41 5000 0101 4e41 5000 1001 876e ffff\t0x0010# ffff 4350 5548 5800 0000 0000 0000 0000\t0x0020# 0000 0200 0060 006f 0000 0000 0000 0000\t0x0030# 0000 0000 0000 e000 0000 0000 0000 0000\t0x0040# 0000 0000 0000 0000 0000 0000 0000 0000\t0x0050# 0070 001a 0001 0000 0000 4500 0064 00a8\t0x0060# 0000 ff01 739d c0a8 6301 c0a8 6301 0800\t0x0070# 07b3 c6b8 00a8 abcd abcd abcd abcd abcd\t0x0080# abcd abcd abcd abcd abcd abcd abcd abcd\t0x0090# abcd abcd abcd abcd abcd abcd abcd abcd\t0x00a0# abcd abcd abcd abcd abcd abcd abcd abcd\t0x00b0# abcd abcd abcd abcd abcd abcd abcd fd01\t0x00c0# 10df 0000 0038 0000 0098 0000 006c 0000\t0x00d0# 000c 0000 4000 e0f9 020a 647f 0000 0000\t0x00e0# 0001 0000 0000 0000 0000 0000 0000 0000\t0x00f0# 0000 0000 0000 0000 0000 006b 0000 0000\t0x0100# 0000 0200 0060 6000 0000 e000 0000 0000\t0x0110# 0000 0000 0008 0000 001c 0000 001c c0a8\t0x0120# 6301 0000 0010 0000 0000 00ff 0100 0000\t0x0130# 0000 0000 0000 0000 0000 0000 0000 0000\t0x0140# 0000 0000 0000 0000 0000 0000 0000 0000\t0x0150# 0000 0000 000020#58#32.249461 4e#41#50#00#01#01 (oui Unknown) > 4e#41#50#00#10#01 (oui Unknown), ethertype Unknown (0x876e), length 178# \t0x0000# 4e41 5000 1001 4e41 5000 0101 876e ffff\t0x0010# ffff 4350 5548 3000 0000 0000 0000 0001\t0x0020# 0000 0000 0000 006f 0000 0200 0060 0000\t0x0030# 0000 0000 0000 e000 0000 0000 0000 0000\t0x0040# 0000 0000 0000 0000 0000 0000 0000 4500\t0x0050# 0064 00a8 0000 ff01 739d c0a8 6301 c0a8\t0x0060# 6301 0000 0fb3 c6b8 00a8 abcd abcd abcd\t0x0070# abcd abcd abcd abcd abcd abcd abcd abcd\t0x0080# abcd abcd abcd abcd abcd abcd abcd abcd\t0x0090# abcd abcd abcd abcd abcd abcd abcd abcd\t0x00a0# abcd abcd abcd abcd abcd abcd abcd abcd\t0x00b0# abcdVideoThis entire presentation is also available as part of my video series published in the Cisco TAC YouTube playlist.ConclusionThis wraps up part 1. Its goal was to provide an introduction to control plane troubleshooting techniques. In the part 2, I’ll show you a few simple and advanced packet filtering techniques and how to export packets in pcap format for later analysis.", "url": "/tutorials/use-tcpdump-on-exr-for-control-plane-troubleshooting-part1/", "author": "Dimitri Mikhailov", "tags": "iosxr, cisco, Control-Plane, Packet-Capture, Tcpdump" } , "#": {} }