Testing the NCS5500: The Lab Series

4 minute read


You can find more content related to the NCS5500, including routing memory management, VRF, uRPF, Netflow, QoS, EVPN, and Flowspec implementation, by following this link.

Introduction

We are trying something new today, and we hope it will really help speed up the validation process for our customers and partners.

In these videos and blog posts, we will present tests performed in our labs in different situations. The purpose is to:

  • present tests requiring extremely large or complex setups, so that Cisco account teams and customers will not need to rebuild them during a Customer Proof of Concept (CPOC)
  • provide recommendations on test methodology
  • comment on the results and provide internal details explaining the behavior observed during the tests

We open the books.

Video

Lab Tests

Here are the tests already performed and documented, and the ones we will present in the near(ish) future:

BGP Flowspec

Auto-Mitigation of a Memcached Attack
Video: interoperability demo between the NetScout (Arbor) DDoS mitigation system and the NCS5500 (a Jericho+ with eTCAM, -SE system).

Flowspec scale and resources
Tests of various aspects of the NCS5500 Flowspec implementation:

  • scale validation: injection of 3000 simple rules
  • out of resource (OOR) validation: checking the behavior when injecting 4000, 6000 and 9000 rules
  • programming rate: validation of the speed to write the rules in the eTCAM
  • max-prefix: validation of the behavior when exceeding authorized number of advertisements per session
  • memory consumption: verification of the memory space used by different rules, including the auto-mitigations generated by the NetScout/Arbor Flowspec controller
    https://xrdocs.io/ncs5500/tutorials/bgp-flowspec-on-ncs5500/
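As background for these tests, a Flowspec rule is essentially a match filter plus an action programmed into the eTCAM. Locally, a comparable rule could be expressed with IOS XR traffic class-maps and a PBR policy, as in the sketch below (the class and policy names are invented for illustration; exact syntax varies by release):

```
! Hypothetical example: drop memcached reflection traffic (UDP source port 11211).
class-map type traffic match-all MEMCACHED-REFLECTION
 match protocol udp
 match source-port 11211
 end-class-map
!
policy-map type pbr DDOS-MITIGATION
 class type traffic MEMCACHED-REFLECTION
  drop
 !
 end-policy-map
!
flowspec
 address-family ipv4
  service-policy type pbr DDOS-MITIGATION
```

In the tests above, the equivalent rules are not configured locally but received dynamically over BGP from the Flowspec controller.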

Hardware High Availability and Redundancy

Fabric Redundancy
https://xrdocs.io/ncs5500/tutorials/ncs5500-fabric-redundancy-tests/
Test performed with a snake topology on a single line card. The purpose is to identify the impact on performance and bandwidth, per NPU, when removing fabric cards. It also highlights the limits of using a snake topology for performance testing.
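The intuition behind that limit can be sketched in a few lines: in a snake, the same traffic re-crosses the fabric at every hop of the chain, so the whole chain is capped by the per-NPU fabric bandwidth the remaining cards can provide. All figures below are hypothetical, not official NCS5500 specifications:

```python
# Illustrative sketch of why a snake test amplifies fabric degradation.
# Numbers are made up for illustration, not NCS5500 specifications.

def snake_throughput(offered_gbps: float, fabric_gbps_per_card: float,
                     fabric_cards: int) -> float:
    """Per-NPU throughput in a snake: capped by the per-NPU fabric
    bandwidth available from the remaining fabric cards."""
    fabric_capacity = fabric_gbps_per_card * fabric_cards
    return min(offered_gbps, fabric_capacity)

# With all fabric cards in, the hypothetical fabric bandwidth exceeds the
# offered load; pulling cards eventually makes the fabric the bottleneck.
print(snake_throughput(800, 150, 6))  # 800.0 (no impact)
print(snake_throughput(800, 150, 5))  # 750.0 (fabric-limited)
```

Because every flow in the snake shares the same fabric path, a drop on any single hop shows up as a drop for the entire chain, which overstates the impact compared to a normal traffic mix.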

Route Processor, System Controller and Power Modules Redundancy
No article posted, just a video: https://www.youtube.com/watch?v=Y_RoK2PsC1k

Demonstration of the impact (or lack thereof) when ejecting the Active or Standby RP and System Controller of an NCS5508. Then, the test focuses on the power supply modules.

Fabric and Fan Trays upgrade
Not exactly a test, but a demo of a migration from v1 to v2 fabric cards and fan trays, including the software upgrade needed to prepare for this operation.
https://xrdocs.io/ncs5500/tutorials/ncs-5500-fabric-migration/

Netflow and IPFIX

Pushing Netflow to 11
Validation of the Netflow implementation on the 36x100G-SE line card (Jericho+ with eTCAM).

FIB

FIB Programming Rate
Demonstrates how fast routing information is programmed in the control plane and data plane.
First, the BGP table at the Route Processor level.
Second, the FIB table programmed in the External TCAM of a Jericho+ based system.
https://xrdocs.io/ncs5500/tutorials/ncs5500-fib-programming-speed/
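The measurement method behind these demos boils down to comparing route counts over time while the table is being written. A minimal sketch, with made-up snapshot figures (only the method is representative):

```python
# Sketch: derive a programming rate from two route-count snapshots taken
# while the table is converging (counts and interval are illustrative).

def programming_rate(count_t0: int, count_t1: int, elapsed_s: float) -> float:
    """Routes programmed per second between two snapshots."""
    return (count_t1 - count_t0) / elapsed_s

# e.g. two "show route summary" snapshots taken 10 s apart:
rate = programming_rate(250_000, 750_000, 10.0)
print(f"{rate:.0f} routes/s")  # 50000 routes/s
```

The same approach applies at each stage: first against the BGP table on the Route Processor, then against the FIB counters reflecting what is actually written in the eTCAM.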

FIB Writing Rate on Jericho+ w/ eTCAM
No article posted, just a video: https://www.youtube.com/watch?v=3tIVveCOZHs


FIB Writing Rate on Jericho+ w/o eTCAM but Large LEM / NCS55A1-24H
Follow-up of the previous test, but this time in the LPM / LEM of an NCS55A1-24H:
https://www.youtube.com/watch?v=nT31rHqFm-o


FIB Scale on Jericho+ w/ eTCAM
Validation of the capability to store a full routing table first, then to push the cursor to 4M IPv4 entries in the hardware (in this case, the eTCAM of a Jericho+ system).
Verification of the impact of enabling URPF on the interfaces and activating 3000 BGP Flowspec rules with the 4M IPv4 routes.
https://xrdocs.io/ncs5500/tutorials/ncs5500-fib-scale-test/
https://www.youtube.com/watch?v=oglYEDpKsLY

Buffers

Buffer and Burst Test
Two parts in this demo:

Performance / Snake

IPv4 and IPv6 VRF Snake
https://xrdocs.io/ncs5500/tutorials/ncs5500-performance-and-load-balancing/
Pratyusha Aluri set up a very large testbed with two NCS5508 back to back, interconnected through 288x 100GE interfaces. Tests were performed on 36x100G-SE line cards (Jericho+ with eTCAM):

  • snake IPv4 with 129B, 130B and IMIX traffic distribution
  • snake IPv6 with same packet sizes
  • performed with and without configuration on interfaces (ACL ingress+egress and QoS: classification and remarking)

ECMP and Link Aggregation
https://xrdocs.io/ncs5500/tutorials/ncs5500-performance-and-load-balancing/
https://youtu.be/s6qSt6C2D5U?t=598
Same testbed as above, reconfigured to validate ECMP and LAG load balancing with multiple bundles of 64x 100GE interfaces each.
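The principle under test here is flow-based hashing: every packet of a given flow must land on the same bundle member (no reordering), while many flows spread evenly across members. The real NCS5500 hashing is done in hardware with its own fields and polynomial; the CRC32-over-5-tuple below is only a stand-in to show the idea:

```python
import zlib

# Illustrative flow-hash member selection for a LAG/ECMP group.
# Not the actual NCS5500 hardware hash -- a stand-in for the principle.

def select_member(src_ip: str, dst_ip: str, src_port: int,
                  dst_port: int, proto: int, members: int) -> int:
    """Map a 5-tuple to one member of the group deterministically."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % members

# Same flow, same member every time -- packets stay in order:
m = select_member("192.0.2.1", "198.51.100.7", 40000, 443, 6, 64)
assert m == select_member("192.0.2.1", "198.51.100.7", 40000, 443, 6, 64)
```

This is also why load-balancing tests need many distinct flows: with only a handful of flows, the hash can land several of them on the same member and the distribution looks skewed even when the hash itself is fine.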

Non-Drop Rate
https://xrdocs.io/ncs5500/tutorials/testing-ndr-on-ncs5500/
Test performed on Jericho+ systems. Identification and explanation of performance at different packet sizes.
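Why packet size matters so much in NDR results comes straight from line-rate arithmetic: the smaller the frame, the more packets per second the NPU must process for the same bandwidth. A quick sketch of the math (the 20 B accounts for the Ethernet preamble, 8 B, plus the minimum inter-frame gap, 12 B, on the wire):

```python
# Line-rate packets-per-second for a given frame size: the reason NDR is
# reported per packet size. 20 B = Ethernet preamble (8 B) + minimum
# inter-frame gap (12 B) added to every frame on the wire.

def line_rate_pps(link_gbps: float, frame_bytes: int) -> float:
    wire_bits = (frame_bytes + 20) * 8
    return link_gbps * 1e9 / wire_bits

print(f"{line_rate_pps(100, 129):.0f} pps")  # ~83.9 Mpps per 100GE port
```

So a port that is comfortable forwarding large frames at 100G may become PPS-bound at small frame sizes, which is exactly the knee these NDR tests identify and explain.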

NDR for NC57-18DD-SE line cards
Hari and Sai provide an overview of the tests they performed for their customer's CPOC.
https://www.youtube.com/watch?v=10XBBe_uYKc


Conclusion: What’s next?

We plan to add more and more test demos to this page, so the first call to action is to come back regularly to stay informed.
Also, you can use the comments section in the video or this blog post to tell us what would be of interest to you specifically. Let's be clear ;) we don't guarantee we will do it, but we will take your feedback into consideration for future tests as much as possible.
