You can find more content related to the NCS5500, including routing memory management, VRF, URPF, Netflow, QoS, EVPN and Flowspec implementation, by following this link.
We are trying something new today, and we hope it will really help speed up the validation process for our customers and partners.
In these videos and blog posts, we will present tests performed in our labs in different situations. The purpose is to:
- present tests requiring extremely large or complex setups, so Cisco account teams and customers will not need to rebuild them during Proofs of Concept (CPOC)
- provide recommendations on test methodology
- comment on the results and provide more internal details to explain the behavior experienced during the tests
We open the books.
Here are the tests already performed and documented, and the ones we will present in the near(ish) future:
Auto-Mitigation of a Memcached Attack
Video: interoperability demo between the NetScout (Arbor) DDoS mitigation system and an NCS5500 (a Jericho+ system with eTCAM, i.e. an -SE system):
- attack detection via Netflow / IPFIX
- BGP flowspec rules injection and attack mitigation
Flowspec scale and resources
Test of various aspects around NCS5500 Flowspec implementation:
- scale validation: injection of 3000 simple rules
- out of resource (OOR) validation: check the behavior when injecting 4,000, 6,000 and 9,000 rules
- programming rate: validation of the speed to write the rules in the eTCAM
- max-prefix: validation of the behavior when exceeding the authorized number of advertisements per session
- memory consumption: verification of memory space used by different rules. Tests with the auto-mitigations generated by the NetScout/Arbor flowspec controller
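For readers who want to reproduce a small-scale version of this setup, here is a minimal IOS XR configuration sketch for enabling BGP flowspec on an NCS5500 acting as a flowspec client. The AS number and neighbor address are hypothetical; in the lab, the rules themselves are advertised by the controller (e.g. the NetScout/Arbor system) over the flowspec address-family:

```
flowspec
 address-family ipv4
  local-install interface-all
 !
!
router bgp 65000
 address-family ipv4 flowspec
 !
 neighbor 192.0.2.1
  remote-as 65000
  description Flowspec controller (hypothetical address)
  address-family ipv4 flowspec
  !
 !
!
```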
Hardware High Availability and Redundancy
Test performed with a snake topology on a single line card. The purpose is to identify the impact on performance and bandwidth, per NPU, when removing a fabric card. It also points out the limits of using a snake topology for performance testing.
Route Processor, System Controller and Power Modules Redundancy (coming soon)
Demonstration of the impact (or, in this case, lack of impact) when ejecting the Active or Standby RP and System Controller of an NCS5508. The test then focuses on the power supply modules.
Netflow and IPFIX
Pushing Netflow to 11
Validation of the Netflow implementation on the 36x100G-SE line card (Jericho+ with eTCAM):
- impact of the packet size
- impact of the port load / bandwidth
- impact of sampling interval
- impact of the number of flows
- impact of the active / inactive timers
- scale test on a fully loaded chassis
- stress test with process crash, config/rollback, …
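As a reference point, here is a minimal IOS XR Netflow/IPFIX configuration sketch of the kind exercised in these tests. The map names, collector address and 1:1000 sampling rate are illustrative, not the exact values used in the lab:

```
sampler-map SAMPLER-1K
 random 1 out-of 1000
!
flow exporter-map EXPORTER-1
 version v9
 !
 transport udp 2055
 destination 192.0.2.10
!
flow monitor-map MONITOR-IPV4
 record ipv4
 exporter EXPORTER-1
 cache timeout active 60
 cache timeout inactive 15
!
interface HundredGigE0/0/0/0
 flow ipv4 monitor MONITOR-IPV4 sampler SAMPLER-1K ingress
!
```

The active/inactive cache timers shown here are the parameters varied in the "impact of the active / inactive timers" test above.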
FIB Programming Rate
Demonstrates how fast routing information is programmed in the control plane and data plane.
First, the BGP table at the Route Processor level.
Second, the FIB table programmed in the External TCAM of a Jericho+ based system.
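One simple way to observe this progression during such a test is to poll the RIB and FIB counters while the routes are being learned; for instance (illustrative IOS XR show commands, hostname hypothetical):

```
RP/0/RP0/CPU0:router# show bgp ipv4 unicast summary
RP/0/RP0/CPU0:router# show route summary
RP/0/RP0/CPU0:router# show cef summary
```

Comparing the timestamps at which each counter reaches the target prefix count gives the control-plane vs data-plane programming rates.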
FIB Writing Rate on Jericho+ w/ eTCAM
Video recording of the demo described in the blog post above:
FIB Writing Rate on Jericho+ w/o eTCAM but Large LEM / NCS55A1-24H
Follow-up of the previous test, but this time in the LPM / LEM of an NCS55A1-24H:
FIB Scale on Jericho+ w/ eTCAM
Validation of the capability to store a full routing table first, then to push the cursor to 4M IPv4 entries in the hardware (in this case, the eTCAM of a Jericho+ system).
Verification of the impact of enabling URPF on the interfaces and activating 3000 BGP Flowspec rules with the 4M IPv4 routes.
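For reference, enabling loose-mode uRPF on an NCS5500 interface is a one-line change per interface; a sketch (interface name is illustrative):

```
interface HundredGigE0/0/0/0
 ipv4 verify unicast source reachable-via any
!
```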
Buffer and Burst Test
Two parts in this demo:
- Data collected from hundreds of production routers to identify the amount of traffic handled in the On-Chip Buffer compared to traffic evicted to DRAM, and the number of packets dropped because of DRAM bandwidth exhaustion (spoiler alert: zero)
- Burst test in a large lab setup with 27x 100G tester interfaces. We run 80% background traffic and generate bursts of 20% or more on top.
First part in the video: https://www.youtube.com/watch?v=1qXD70_cLK8
Second part, lab demo with Sai Venka: https://youtu.be/1qXD70_cLK8?t=291
Performance / Snake
IPv4 and IPv6 VRF Snake
Pratyusha Aluri set up a very large testbed with two NCS5508 back to back, interconnected through 288x 100GE interfaces. Tests were performed on 36x100G-SE line cards (Jericho+ with eTCAM):
- snake IPv4 with 129B, 130B and IMIX traffic distribution
- snake IPv6 with same packet sizes
- performed with and without configuration on interfaces (ACL ingress+egress and QoS: classification and remarking)
ECMP and Link Aggregation
Same testbed as above, reconfigured to validate ECMP and LAG load balancing with multiple bundles of 64x 100GE interfaces each.
Tests performed on Jericho+ systems, with identification and explanation of the performance differences across packet sizes.
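A minimal IOS XR sketch of the bundle configuration used for this kind of LAG testing; the interface names, bundle ID and addressing are hypothetical, and in the actual test each bundle aggregated 64 member ports:

```
interface Bundle-Ether1
 ipv4 address 192.0.2.1 255.255.255.252
!
interface HundredGigE0/0/0/0
 bundle id 1 mode active
!
interface HundredGigE0/0/0/1
 bundle id 1 mode active
!
```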
Conclusion: What’s next?
We plan to add more and more test demos to this page, so the first call to action is to come back regularly to stay informed.
Also, you can use the comments section in the video or this blog post to tell us what would be of interest for you specifically. Let’s be clear ;) We don’t guarantee we will do it, but as much as possible we will take your feedback into consideration for the next ones.