{ "blogs-2016-06-11-nanog-67-meet-us-there": { "title": "NANOG 67: Meet us there!", "content": "We’re up at NANOG 67 in Chicago, IL, next week!Over the course of more than 2 decades, the North American Network Operators’ Group (NANOG) has been at the epicenter of the net-ops transformation. This unique congregation of vendors, service-provider, Data-Center and Enterprise SMEs and network engineers provides a perfect opportunity to showcase some of the new enhancements in IOS-XR.As always, there is a fair bit of Cisco participation, with content focused on Streaming Telemetry, connectivity evaluation techniques and remediation techniques using Linux applications.Of course, if you’re looking to talk about IOS-XR and other Cisco solutions for the Web, Data Center and Service Provider domain, you’ll find a lot of us roaming about!If you’re heading over to NANOG 67, then check out the following sessions to get a gist of the upcoming enhancements in IOS-XR#Ten Lessons From Telemetry Shelly Cadora, Cisco SystemsSuffering Withdrawal; an automated approach to connectivity evaluationMicah Croff, GitHubTim Hoffman, TwitterBruce McDougall, CiscoNick Slabakov, Juniper NetworksBeer-n-Gear Demo Booth# iperf driven OSPF path remediation demoTuesday, June 14 2016# 6#00 - 8#00 pm.Akshat Sharma, Cisco SystemsThat’s a lot to look out for! But as part of the application-hosting team in IOS-XR, we’re particularly excited about showing you the underlying linux infrastructure in IOS-XR and its integration with typical linux applications.At the Beer-n-Gear demo booth, We will have our own booth showcasing some cool new tricks with IOS-XR.The demo would be based off the following application on Github#https#//github.com/ios-xr/ospf-iperf-ncclient#The figure below should pretty much explain what we’re going for#We’ll show how you can bring up Containers on XR to run iperf with XR interfaces, and leverage YDK to affect OSPF path cost.Further, We bring you these demos using the new IOS-XR Vagrant box that is currently in private-beta.If you haven’t heard about it yet, take a look at the following quick-start guide to get you going# IOS-XR Vagrant Quick-StartSee you in Chicago!", "url": "/blogs/2016-06-11-nanog-67-meet-us-there/", "author": "Akshat Sharma", "tags": "iosxr, cisco, linux" } , "blogs-2016-06-28-xr-app-hosting-architecture-quick-look": { "title": "XR App-hosting architecture: Quick Look!", "content": "If you’ve been following the set of tutorials in the XR toolbox series# XR Toolbox SeriesYou might have noticed that we haven’t actually delved into the internal architecture of IOS-XR. While there are several upcoming documents that will shed light on the deep internal workings of IOS-XR, I thought I’ll take a quick stab at the internals for the uninitiated.This is what the internal software architecture and plumbing, replete with the containers, network namespaces and XR interfaces, looks like#Alright, back up. The above figure seems pretty daunting to understand, so let’s try to deconstruct it# At the bottom of the figure, in gray, we have the host (hypervisor) linux environment. This is a 64-bit linux kernel running the Windriver linux 7 (WRL7) distribution. The rest of the components run as containers (LXCs) on top of the host. In green, we see the container called the XR Control plane LXC (or XR LXC). 
This runs a Windriver Linux 7 (WRL7) environment as well and contains the XR control plane and the XR linux environment# Inside the XR control plane LXC, if we zoom in further, the XR control plane processes are represented distinctly in blue as shown below. This is where the XR routing protocols like BGP, OSPF etc. run. The XR CLI presented to the user is also one of the processes. See the gray box inside the XR control plane LXC ? This is the XR linux shell. P.S. This is what you drop into when you issue a vagrant ssh [*].Another way to get into the XR linux shell is by issuing a bash command in XR CLI. The XR linux shell that the user interacts with is really the global-vrf network namespace inside the control plane container. This corresponds to the global/default-vrf in IOS-XR. Only the interfaces in global/default vrf in XR appear in the XR linux shell today when you issue an ifconfig# RP/0/RP0/CPU0#rtr1#RP/0/RP0/CPU0#rtr1#RP/0/RP0/CPU0#rtr1#show ip int brSun Jul 17 11#52#15.049 UTC Interface IP-Address Status Protocol Vrf-NameLoopback0 1.1.1.1 Up Up default GigabitEthernet0/0/0/0 10.1.1.10 Up Up default GigabitEthernet0/0/0/1 11.1.1.10 Up Up default MgmtEth0/RP0/CPU0/0 10.0.2.15 Up Up default RP/0/RP0/CPU0#rtr1#RP/0/RP0/CPU0#rtr1#RP/0/RP0/CPU0#rtr1#bash Sun Jul 17 11#52#22.904 UTC[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$ifconfigGi0_0_0_0 Link encap#Ethernet HWaddr 08#00#27#e0#7f#bb inet addr#10.1.1.10 Mask#255.255.255.0 inet6 addr# fe80##a00#27ff#fee0#7fbb/64 Scope#Link UP RUNNING NOARP MULTICAST MTU#1514 Metric#1 RX packets#0 errors#0 dropped#0 overruns#0 frame#0 TX packets#546 errors#0 dropped#3 overruns#0 carrier#1 collisions#0 txqueuelen#1000 RX bytes#0 (0.0 B) TX bytes#49092 (47.9 KiB)Gi0_0_0_1 Link encap#Ethernet HWaddr 08#00#27#26#ca#9c inet addr#11.1.1.10 Mask#255.255.255.0 inet6 addr# fe80##a00#27ff#fe26#ca9c/64 Scope#Link UP RUNNING NOARP MULTICAST MTU#1514 Metric#1 RX packets#0 errors#0 dropped#0 overruns#0 frame#0 TX packets#547 errors#0 dropped#3 overruns#0 carrier#1 collisions#0 txqueuelen#1000 RX bytes#0 (0.0 B) TX bytes#49182 (48.0 KiB)Mg0_RP0_CPU0_0 Link encap#Ethernet HWaddr 08#00#27#ab#bf#0d inet addr#10.0.2.15 Mask#255.255.255.0 inet6 addr# fe80##a00#27ff#feab#bf0d/64 Scope#Link UP RUNNING NOARP MULTICAST MTU#1514 Metric#1 RX packets#210942 errors#0 dropped#0 overruns#0 frame#0 TX packets#84664 errors#0 dropped#0 overruns#0 carrier#1 collisions#0 txqueuelen#1000 RX bytes#313575212 (299.0 MiB) TX bytes#4784245 (4.5 MiB)---------------------------------- snip output ----------------------------------------- Any Linux application hosted in this environment shares the process space with XR, and we refer to it as a native application. The FIB is programmed by the XR control plane exclusively. The global-vrf network namespace only sees a couple of routes by default# A default route pointing to XR FIB. This way any packet with an unknown destination is handed-over by a linux application to XR for routing. This is achieved through a special interface called fwdintf as shown in the figure above. Routes in the subnet of the Management Interface# Mgmt0/RP0/CPU0. The management subnet is local to the global-vrf network namespace. 
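This is why a Linux application running in the global-vrf namespace needs no routing knowledge of its own# the kernel resolves any non-management destination through the fwdintf default route and hands the packet to XR. A minimal sketch (hypothetical destination address, run from the XR linux shell) that shows which source address the kernel picks for such traffic#

import socket

# Connect a UDP socket to an arbitrary (hypothetical) destination; no packet is
# actually sent, but the kernel selects a route - here the fwdintf default route -
# and the source address that goes with it (the src-hint discussed further below).
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("192.168.122.1", 53))
print(s.getsockname()[0])
s.close()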
To view these routes, simply issue an ip route in the XR linux shell# AKSHSHAR-M-K0DS#native-app-bootstrap akshshar$ vagrant ssh rtr xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ ip route default dev fwdintf scope link src 10.0.2.15 10.0.2.0/24 dev Mg0_RP0_CPU0_0 proto kernel scope link src 10.0.2.15 xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ However if we configure loopback 1 in XR, a new route appears in the XR linux environment# RP/0/RP0/CPU0#rtr1#RP/0/RP0/CPU0#rtr1#conf tSun Jul 17 11#59#33.014 UTCRP/0/RP0/CPU0#rtr1(config)#RP/0/RP0/CPU0#rtr1(config)#int loopback 1RP/0/RP0/CPU0#rtr1(config-if)#ip addr 6.6.6.6/32RP/0/RP0/CPU0#rtr1(config-if)#commitSun Jul 17 11#59#49.970 UTCRP/0/RP0/CPU0#rtr1(config-if)#RP/0/RP0/CPU0#rtr1#RP/0/RP0/CPU0#rtr1#bash Sun Jul 17 11#59#58.941 UTC[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$ip routedefault dev fwdintf scope link src 10.0.2.15 6.6.6.6 dev fwd_ew scope link src 10.0.2.1510.0.2.0/24 dev Mg0_RP0_CPU0_0 proto kernel scope link src 10.0.2.15 [xr-vm_node0_RP0_CPU0#~]$ This is what we call the east-west route. Loopback1 is treated as a special remote interface from the perspective of the XR linux shell. It does not appear in ifconfig like the other interfaces. This way an application sitting inside the global-vrf network namespace can talk to XR on the same box by simply pointing to loopback1. Finally, if you followed the Bring your own Container (LXC) App, you’ll notice that in the XML file meant to launch the lxc, we share the global-vrf network namespace with the container; specifically, in this section# Create LXC SPEC XML File This makes the architecture work seamlessly for native and container applications. An LXC app has the same view of the world, the same routes and the same XR interfaces to take advantage of, as any native application with the shared global-vrf namespace. You’ll also notice my awkward rendering for a linux app# Notice the TPA IP ? This stands for Third Party App IP address. The purpose of the TPA IP is simple. Set a src-hint for linux applications, so that originating traffic from the applications (native or LXC) could be tied to the loopback IP or any reachable IP of XR. This approach mimics how routing protocols like to identify routers in complex topologies# through router-IDs. With the TPA IP, application traffic can be consumed, for example, across an OSPF topology just by relying on XR’s capability to distribute the loopback IP address selected as the src-hint. We go into further detail here# Set the src-hint for Application traffic That pretty much wraps it up. Remember, XR handles the routing and applications use only a subset of the routing table to piggy-back on XR for reachability!", "url": "/blogs/2016-06-28-xr-app-hosting-architecture-quick-look/", "author": "Akshat Sharma", "tags": "iosxr, cisco, architecture, xr toolbox" } , "blogs-2016-07-12-building-an-ios-xrv-vagrant-virtualbox": { "title": "Building your own IOS XRv Vagrant box", "content": "A new way to try out IOS-XR..An ‘IOS XRv (64-bit)’ image will be available for users from IOS XR 6.1.1 onwards. This is the successor to the previous IOS XRv (32-bit) QNX based virtual platform. 
It is based on the latest IOS XR OS, which is built on 64-bit Wind River Linux and has, among many other changes, a separate Admin plane and complete access to the underlying linux environment. You will likely see newer and better variants of this image as we continue to work on tools for developers. Our primary focus will continue to be on consistent tooling and workflows such as Vagrant boxes, build tools and open-source sample applications to get developers and users started quickly and easily. We hope that these tools help developers consume and learn IOS-XR, build applications for it and participate in our community on Github. Check us out on Github# Sample Applications and Build tools # https#//github.com/ios-xr Open Source Documentation# https#//github.com/xrdocs hosted here# https#//xrdocs.github.io Can’t wait to get the ISO? Jump here# Getting your hands on the ISO Linux and XR 6.0.0+ This version of IOS XR has an infrastructure that allows people to develop and run their own applications in Linux containers on the router itself. Network and system automation can be accomplished using shell scripts, Puppet, Chef, Ansible, etc. Customers can tap into telemetry that provides improved visibility into a network at a far more granular level and through a much more scalable approach than SNMP. XR configuration itself can be automated using model-driven APIs, with native, common and OpenConfig YANG models supported. IOS XRv (64-bit) Naming and release The image itself is named# name-features-architecture.format, e.g# iosxrv-fullk9-x64.iso Producing a box and releasing it through devhub.cisco.com allows us to get code to the developer far quicker than the standard process. This platform is provided free of charge but, as free software, it comes with no support - please read the licenses very carefully. Vagrant VirtualBox Cisco is providing customers with a Vagrant VirtualBox offering. Vagrant is a superb tool for application development. Among other things, you can use this box to# Test native and container applications on IOS-XR Use configuration management tools like Chef/Puppet/Ansible/Shell as Vagrant provisioners Create complicated topologies and a variety of other use cases This box is designed to come up fully operational, with an embedded Vagrantfile that does all of the work to give users and tools access to the box. With a simple ‘vagrant add’ and ‘vagrant up’ you will have an IOS XR virtual router to play with. ‘vagrant ssh’ drops the user directly into the XR Linux namespace as user ‘vagrant’. Using vagrant port, you can see which forwarded port maps to guest port 22 (usually 2223 with a single node) and ssh to it to reach the IOS XR Console/CLI. Users can design their own Vagrantfiles for more complex bring-ups, including multiple nodes and bootstrap configuration.
There are examples below. How to get hold of the box, bring it up, etc.# https#//xrdocs.github.io/application-hosting/tutorials/iosxr-vagrant-quickstart You will need an active Cisco CCO id. For tutorials on some of the cool things you can do with this box, see# https#//xrdocs.github.io/application-hosting/tutorials/ Cisco has open-sourced the tooling Finally, as was the purpose of this blog, we have open-sourced the code to build the Vagrant VirtualBox from an IOS XR ISO# https#//github.com/ios-xr/iosxrv-x64-vbox Getting your hands on the ISO To download the ISO, you will need an API-KEY and a CCO-ID. To get the API-KEY and a CCO-ID, browse to the following link and follow the steps# Steps to Generate API-KEY Once done, download the ISO as shown# $ ISOURL=~https#//devhub.cisco.com/artifactory/XRv64-snapshot/latest/iosxrv-fullk9-x64.latest.iso~ $ curl -u your-cco-id#API-KEY $ISOURL --output ~/iosxrv-fullk9-x64.iso I hope you enjoyed this quick blog. The links above provide far more information. As one of the technical leads behind the new platform and the author of the vagrant tooling, I’m very motivated to make this a great platform for Cisco customers. Please contact me at rwellum@cisco.com for any questions or concerns.", "url": "/blogs/2016-07-12-building-an-ios-xrv-vagrant-virtualbox/", "author": "Richard Wellum", "tags": "vagrant, iosxr, cisco, linux" } , "blogs-2018-05-01-anatomy-of-a-network-app-xr-auditor": { "title": "Anatomy of a Network App: "xr-auditor"", "content": " On This Page Introduction User Story IOS-XR architecture Enter xr-auditor The Build Environment Building the Application Transferring the app to the router Running the auditor app Dump Auditor Version Install the App List generated XML files Deconstructing the XML content Uninstall the app Verbose Debugging Troubleshooting# Gathering logs Support for Active/Standby RP systems Introduction This application enables periodic auditing of the linux shells in the IOS-XR container-based architecture by running individual python applications in each environment in IOS-XR (across Active/Standby HA systems), i.e.#   XR-LXC ADMIN-LXC HOST   Functionally, the individual python applications#   Collect local data based on a YAML-based user config provided during the build process. Store the accumulated data in the form of XML that is strictly validated against a user-defined XML schema. Send the accumulated XML data periodically to an external server over SSH, where it may be easily processed and visualized using any tools that can consume the XML schema and the data.   Further, the application supports#   Installation# on a High-Availability (Active/Standby RP) system through a single command. A clean uninstallation across the entire system through a single command. Troubleshooting# Dump filesystem view - the ability to view the affected filesystems (user-defined in the YAML file) across active/standby RPs using a single command. Troubleshooting# Gather debug data - the ability to collect generated logs from all the environments (Active/Standby XR LXC, Admin LXC, HOST) and create a single tarball using a single command.   No SMUs are needed# the application leverages the native app-hosting architecture in IOS-XR and the internal SSH-based access between the different parts of the IOS-XR architecture - namely the XR-LXC, Admin-LXC and HOST of the active and/or standby RPs - to easily manage movement of data, logs and apps across the system.
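As an aside, here is a minimal sketch (not part of the app itself) of how a consumer on the receiving server might validate and read one of these XML dumps with lxml, using the compliance.xsd schema shipped in userfiles/ and the field names shown later in this post#

from lxml import etree

# Illustrative file names# the schema ships in userfiles/ and the XML file name
# matches the compliance dump generated later in this post.
schema = etree.XMLSchema(etree.parse("compliance.xsd"))
doc = etree.parse("compliance_audit_rtr_11_1_1_10.xml")

if schema.validate(doc):
    general = doc.find("GENERAL")
    print("%s running %s, audited on %s" % (general.findtext("HOST"),
                                            general.findtext("VERSION"),
                                            general.findtext("DATE")))
else:
    print(schema.error_log)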
The complete code can be found here# https#//github.com/akshshar/xr-auditor User Story (Click to Expand) IOS-XR architecture For a quick refresher on the IOS-XR container-based architecture, see the figure below# IOS-XR AAA support vs Linux As shown above, access to the linux shells (in blue inside the containers) and the underlying shells is protected through XR AAA authentication and authorization. IOS-XR AAA also supports accounting, which sends logs to a remote TACACS/RADIUS server to record what an authenticated and authorized user is up to on the XR interface. While IOS-XR supports the 3 A’s of AAA (Authentication, Authorization and Accounting), Linux supports only two of them# authentication and authorization. Usually accounting is handled through separate tools such as auditd, snoopy, etc. We showcase the usage of snoopy with IOS-XR here# https#//github.com/akshshar/snoopy-xr IOS-XR Telemetry support vs Linux Similarly, IOS-XR also supports sending structured operational data (modeled using YANG models) over transports such as gRPC, TCP and UDP to external receivers that can process the data - you can learn more about IOS-XR telemetry here# https#//xrdocs.github.io/telemetry/ Further, Linux doesn’t really have a telemetry system by default - there are a variety of solutions that can provide structured data for individual applications and files on the system, but none of them offers clean one-step installation, collection and troubleshooting across a container-based architecture like the one shown above. Enter xr-auditor This is where xr-auditor shines. It allows a user to specify their collection requirements through YAML files, build the application into a single binary, and deploy the auditors in each domain (container) of the system in a couple of steps. xr-auditor is installed using a single binary generated from the code in this git repo using pyinstaller. More details below. The installation involves running the binary in the XR-LXC shell of the Active RP# Once the install is triggered, individual cron jobs and apps are set up in the different domains, as shown below, to start sending collected data periodically to a remote server (identified in the SERVER_CONFIG in userfiles/auditor.cfg.yml) securely over SSH# The Build Environment All you need to build the application is a linux environment with python 2.7 installed. To make things simpler, there is a vagrant setup already included with the code. We will use the vagrant setup to build and test our application against IOS-XRv64 on our laptops before we run it on physical hardware (NCS5500). The vagrant setup looks something like this# If you’re not familiar with vagrant and associated workflows, I would suggest first going through the following tutorials on xrdocs before continuing (these tutorials will also show how to gain access to the IOS-XR vagrant box if you don’t already have it)# XR toolbox, Part 1 # IOS-XR Vagrant Quick Start XR Toolbox, Part 2 # Bootstrap XR configuration with Vagrant XR Toolbox, Part 3 # App Development Topology
Building the Application    Step 1# Clone the xr-auditor git repo# AKSHSHAR-M-33WP# akshshar$ git clone https#//github.com/akshshar/xr-auditor.gitCloning into 'xr-auditor'...remote# Counting objects# 502, done.remote# Compressing objects# 100% (23/23), done.remote# Total 502 (delta 12), reused 4 (delta 1), pack-reused 478Receiving objects# 100% (502/502), 8.92 MiB | 4.19 MiB/s, done.Resolving deltas# 100% (317/317), done.AKSHSHAR-M-33WP# akshshar$ AKSHSHAR-M-33WP# akshshar$ cd xr-auditor/AKSHSHAR-M-33WP#xr-auditor akshshar$ lsREADME.md\t\tcleanup.sh\t\tcron\t\t\trequirements.txt\tuserfilesbuild_app.sh\t\tcore\t\t\timages\t\t\tspecs\t\t\tvagrantAKSHSHAR-M-33WP#xr-auditor akshshar$     Step 2# Drop into the vagrant directory and spin up the vagrant topology (shown above)# Note# Make sure you’ve gone through the tutorial# XR toolbox, Part 1 # IOS-XR Vagrant Quick Start and already have the IOS-XRv vagrant box on your system# AKSHSHAR-M-33WP#~ akshshar$ vagrant box listIOS-XRv (virtualbox, 0)AKSHSHAR-M-33WP#~ akshshar$ Now, in the vagrant directory, issue a vagrant up# AKSHSHAR-M-33WP#vagrant akshshar$ vagrant up Bringing machine 'rtr' up with 'virtualbox' provider... Bringing machine 'devbox' up with 'virtualbox' provider... ==> rtr# Importing base box 'IOS-XRv'... ==> rtr# Matching MAC address for NAT networking... ==> rtr# Setting the name of the VM# vagrant_rtr_1525415374584_85170 ==> rtr# Clearing any previously set network interfaces... ==> rtr# Preparing network interfaces based on configuration... rtr# Adapter 1# nat rtr# Adapter 2# intnet ==> rtr# Forwarding ports... rtr# 57722 (guest) => 2222 (host) (adapter 1) rtr# 22 (guest) => 2223 (host) (adapter 1) ==> rtr# Running 'pre-boot' VM customizations... ==> rtr# Booting VM... ....... devbox# Removing insecure key from the guest if it's present... devbox# Key inserted! Disconnecting and reconnecting using new SSH key... ==> devbox# Machine booted and ready! ==> devbox# Checking for guest additions in VM... ==> devbox# Configuring and enabling network interfaces... ==> devbox# Mounting shared folders... devbox# /vagrant => /Users/akshshar/xr-auditor/vagrant ==> rtr# Machine 'rtr' has a post `vagrant up` message. This is a message ==> rtr# from the creator of the Vagrantfile, and not from Vagrant itself# ==> rtr# ==> rtr# ==> rtr# Welcome to the IOS XRv (64-bit) VirtualBox. ==> rtr# To connect to the XR Linux shell, use# 'vagrant ssh'. ==> rtr# To ssh to the XR Console, use# 'vagrant port' (vagrant version > 1.8) ==> rtr# to determine the port that maps to guestport 22, ==> rtr# then# 'ssh vagrant@localhost -p <forwarded port>' ==> rtr# ==> rtr# IMPORTANT# READ CAREFULLY ==> rtr# The Software is subject to and governed by the terms and conditions ==> rtr# of the End User License Agreement and the Supplemental End User ==> rtr# License Agreement accompanying the product, made available at the ==> rtr# time of your order, or posted on the Cisco website at ==> rtr# www.cisco.com/go/terms (collectively, the 'Agreement'). ==> rtr# As set forth more fully in the Agreement, use of the Software is ==> rtr# strictly limited to internal use in a non-production environment ==> rtr# solely for demonstration and evaluation purposes. Downloading, ==> rtr# installing, or using the Software constitutes acceptance of the ==> rtr# Agreement, and you are binding yourself and the business entity ==> rtr# that you represent to the Agreement. 
If you do not agree to all ==> rtr# of the terms of the Agreement, then Cisco is unwilling to license ==> rtr# the Software to you and (a) you may not download, install or use the ==> rtr# Software, and (b) you may return the Software as more fully set forth ==> rtr# in the Agreement. AKSHSHAR-M-33WP#vagrant akshshar$ Once you see the above message, the devices should have booted up. You can check the status using vagrant status AKSHSHAR-M-33WP#vagrant akshshar$ vagrant status Current machine states# rtr running (virtualbox) devbox running (virtualbox) This environment represents multiple VMs. The VMs are all listed above with their current state. For more information about a specific VM, run `vagrant status NAME`. AKSHSHAR-M-33WP#vagrant akshshar$     Step 3# Note down the ports used for SSH (port 22) by the rtr and by devbox# AKSHSHAR-M-33WP#vagrant akshshar$ vagrant port rtrThe forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution. 22 (guest) => 2223 (host) 57722 (guest) => 2222 (host)AKSHSHAR-M-33WP#vagrant akshshar$ AKSHSHAR-M-33WP#vagrant akshshar$ AKSHSHAR-M-33WP#vagrant akshshar$ vagrant port devboxThe forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution. 22 (guest) => 2200 (host) AKSHSHAR-M-33WP#vagrant akshshar$ AKSHSHAR-M-33WP#vagrant akshshar$     Step 4# SSH into the vagrant box (either by using vagrant ssh devbox or by using the port discovered above (2200 for devbox)# ssh -p 2200 vagrant@localhost)# Password is vagrant AKSHSHAR-M-33WP#vagrant akshshar$ ssh -p 2200 vagrant@localhostvagrant@localhost's password# Welcome to Ubuntu 16.04.4 LTS (GNU/Linux 4.4.0-87-generic x86_64)* Documentation# https#//help.ubuntu.com* Management# https#//landscape.canonical.com* Support# https#//ubuntu.com/advantage0 packages can be updated.0 updates are security updates. Last login# Fri May 4 10#41#50 2018 from 10.0.2.2vagrant@vagrant#~$ vagrant@vagrant#~$   Now again clone the xr-auditor app so that you have the application code available for build inside the devbox environment# vagrant@vagrant#~$ git clone https#//github.com/akshshar/xr-auditor.gitCloning into 'xr-auditor'...remote# Counting objects# 390, done.remote# Compressing objects# 100% (185/185), done.remote# Total 390 (delta 252), reused 333 (delta 195), pack-reused 0Receiving objects# 100% (390/390), 7.56 MiB | 3.51 MiB/s, done.Resolving deltas# 100% (252/252), done.Checking connectivity... done.vagrant@vagrant#~$ cd xr-auditor/vagrant@vagrant#~/xr-auditor$     Step 5# Create a new ssh-key pair for your devbox environment (if you see see the earlier image), the devbox will serve as the remote server to which the router sends the collected XML data. For password-less operation, the way we make this work is# Create an ssh-key pair on the server (devbox) . Add the public key of the pair to the devbox (server)’s ~/.ssh/authorized_keys file . Package the private key as part of the app during the build process and transfer to the router . The app on the router then uses the private key to ssh and transfer files to the server (devbox) without requiring a password. 
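To make that last step concrete, here is a rough sketch of what such a password-less transfer looks like from the router side, written with paramiko purely for illustration# the actual collector’s implementation and paths may differ, the server address below is hypothetical, and the real values come from SERVER_CONFIG in userfiles/auditor.cfg.yml#

import paramiko

SERVER = "11.1.1.20"                 # hypothetical devbox address
USER = "vagrant"
KEY = "/misc/scratch/id_rsa"         # private key packaged with the app
XML = "/misc/app_host/compliance_audit_rtr_11_1_1_10.xml"

# Authenticate with the packaged private key (its public half sits in the
# server's ~/.ssh/authorized_keys), then copy the XML over SFTP.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(SERVER, username=USER, key_filename=KEY)
sftp = client.open_sftp()
sftp.put(XML, "/home/vagrant/compliance_audit_rtr_11_1_1_10.xml")
sftp.close()
client.close()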
Following the above steps on devbox# Create the ssh-key pair# vagrant@vagrant#~/xr-auditor$ ssh-keygen -t rsaGenerating public/private rsa key pair.Enter file in which to save the key (/home/vagrant/.ssh/id_rsa)# Enter passphrase (empty for no passphrase)# Enter same passphrase again# Your identification has been saved in /home/vagrant/.ssh/id_rsa.Your public key has been saved in /home/vagrant/.ssh/id_rsa.pub.The key fingerprint is#SHA256#nUQqNANDpVUjwJLZ+7LrFY4go/y+yBcc+ProRqYejF8 vagrant@vagrantThe key's randomart image is#+---[RSA 2048]----+| *=+B.o . || + o= + + || ..... . . || . .. . o . ||o + ... S o ||== =.o.. ||*+. Eoo ||o+=o.. ||+*=*+. |+----[SHA256]-----+vagrant@vagrant#~/xr-auditor$ Add the public key to authorized_keys# vagrant@vagrant#~/xr-auditor$ vagrant@vagrant#~/xr-auditor$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys vagrant@vagrant#~/xr-auditor$ Copy the private key to the folder userfiles/ in the xr-auditor directory# vagrant@vagrant#~/xr-auditor$ vagrant@vagrant#~/xr-auditor$ cp ~/.ssh/id_rsa userfiles/id_rsa_server vagrant@vagrant#~/xr-auditor$     Step 6# Edit the appropriate settings in the userfiles/auditor.cfg.yml file to match the environment you are building for. This file encapsulates information about the router, the server to which the data will be sent, the installation directories for the app and the compliance data that the app must collect. Follow the instructions specified in the yml file to fill everything out# Step 7# Now you’re all ready to build the app. Eventually a single binary will be created as part of the build process and this app will be called auditor. This app internally will consist of the following file structure# auditor | |--- userfiles | | | |--- audit.cfg.yml | | | |--- compliance.xsd | | | |--- id_rsa_server | | |--- xr | | | |--- audit_xr.bin | | | | | |--- userfiles | | | | | | | |--- audit.cfg.yml | | | | | | | |--- compliance.xsd | | | | | | | |--- id_rsa_server | | | | | | | | |--- audit_xr.py | | | |--- audit_xr.cron | | | |--- admin | | | |--- audit_admin.bin | | | | | |--- userfiles | | | | | | | |--- audit.cfg.yml | | | | | | | |--- compliance.xsd | | | | | | | |--- id_rsa_server | | | | | | | | |--- audit_admin.py | | | |--- audit_admin.cron | | | |--- host | | | |--- audit_host.bin | | | | | |--- userfiles | | | | | | | |--- audit.cfg.yml | | | | | | | |--- compliance.xsd | | | | | | | |--- id_rsa_server | | | | | | | | |--- audit_host.py | | | |--- audit_host.cron | |--- collector | | | |---collector.bin | | | | | |--- userfiles | | | | | | | |--- audit.cfg.yml | | | | | | | |--- compliance.xsd | | | | | | | |--- id_rsa_server | | | | | | | | |--- collector.py | | | |--- collector.cron | To build the app, at the root of the git repo, issue the following command# (The build_app.sh shell script will automatically install the required dependencies, including pyinstaller, inside the devbox) vagrant@vagrant#~/xr-auditor$ vagrant@vagrant#~/xr-auditor$ sudo -E ./build_app.sh +++ which ./build_app.sh ++ dirname ./build_app.sh + SCRIPT_PATH=. + apt-get install -y git python-pip Reading package lists... Done Building dependency tree Reading state information... Done git is already the newest version (1#2.7.4-0ubuntu1.3). python-pip is already the newest version (8.1.1-2ubuntu0.4). 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. + cd . + echo + [[ '' == '' ]] + pip install --upgrade pip==9.0.3 .... 
+ pyinstaller ./specs/xr.spec 20 INFO# PyInstaller# 3.3.1 20 INFO# Python# 2.7.12 21 INFO# Platform# Linux-4.4.0-87-generic-x86_64-with-Ubuntu-16.04-xenial 25 INFO# UPX is not available. 26 INFO# Extending PYTHONPATH with paths ['/home/vagrant/xr-auditor/core', '/home/cisco/audit_xr_linux/specs'] 26 INFO# checking Analysis 30 INFO# Appending 'datas' from .spec 31 INFO# checking PYZ 34 INFO# checking PKG 34 INFO# Bootloader /usr/local/lib/python2.7/dist-packages/PyInstaller/bootloader/Linux-64bit/run 35 INFO# checking EXE + pyinstaller ./specs/admin.spec 20 INFO# PyInstaller# 3.3.1 21 INFO# Python# 2.7.12 21 INFO# Platform# Linux-4.4.0-87-generic-x86_64-with-Ubuntu-16.04-xenial 24 INFO# UPX is not available. 26 INFO# Extending PYTHONPATH with paths ['/home/vagrant/xr-auditor/core', '/home/cisco/audit_xr_linux/specs'] 26 INFO# checking Analysis 30 INFO# Appending 'datas' from .spec 30 INFO# checking PYZ 33 INFO# checking PKG 33 INFO# Bootloader /usr/local/lib/python2.7/dist-packages/PyInstaller/bootloader/Linux-64bit/run 34 INFO# checking EXE + pyinstaller ./specs/host.spec 20 INFO# PyInstaller# 3.3.1 20 INFO# Python# 2.7.12 21 INFO# Platform# Linux-4.4.0-87-generic-x86_64-with-Ubuntu-16.04-xenial 24 INFO# UPX is not available. 25 INFO# Extending PYTHONPATH with paths ['/home/vagrant/xr-auditor/core', '/home/cisco/audit_xr_linux/specs'] 25 INFO# checking Analysis ..... 67 INFO# running Analysis out00-Analysis.toc 81 INFO# Caching module hooks... 83 INFO# Analyzing core/auditor.py 1855 INFO# Processing pre-safe import module hook _xmlplus 1996 INFO# Processing pre-find module path hook distutils 2168 INFO# Loading module hooks... 2169 INFO# Loading module hook ~hook-distutils.py~... 2170 INFO# Loading module hook ~hook-xml.py~... 2171 INFO# Loading module hook ~hook-lxml.etree.py~... 2178 INFO# Loading module hook ~hook-httplib.py~... 2179 INFO# Loading module hook ~hook-encodings.py~... 2500 INFO# Looking for ctypes DLLs 2557 INFO# Analyzing run-time hooks ... 2563 INFO# Looking for dynamic libraries 2702 INFO# Looking for eggs 2702 INFO# Python library not in binary dependencies. Doing additional searching... 2722 INFO# Using Python library /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 2724 INFO# Warnings written to /home/vagrant/xr-auditor/build/auditor/warnauditor.txt 2736 INFO# Graph cross-reference written to /home/vagrant/xr-auditor/build/auditor/xref-auditor.html 2771 INFO# Appending 'datas' from .spec 2773 INFO# checking PYZ 2776 INFO# checking PKG 2776 INFO# Building because /home/vagrant/xr-auditor/core/auditor.py changed 2777 INFO# Building PKG (CArchive) out00-PKG.pkg 6099 INFO# Building PKG (CArchive) out00-PKG.pkg completed successfully. 6110 INFO# Bootloader /usr/local/lib/python2.7/dist-packages/PyInstaller/bootloader/Linux-64bit/run 6111 INFO# checking EXE 6113 INFO# Rebuilding out00-EXE.toc because pkg is more recent 6114 INFO# Building EXE from out00-EXE.toc 6119 INFO# Appending archive to ELF section in EXE /home/vagrant/xr-auditor/dist/auditor 6172 INFO# Building EXE from out00-EXE.toc completed successfully. 
vagrant@vagrant#~/xr-auditor$ At the end of the build, you will see the auditor binary appear inside a dist/ directory at the root of the git repo# vagrant@vagrant#~/xr-auditor$ ls -lrt dist/total 61672-rwxr-xr-x 1 root root 7046744 May 4 10#43 audit_xr.bin-rwxr-xr-x 1 root root 7046848 May 4 10#43 audit_admin.bin-rwxr-xr-x 1 root root 7046616 May 4 10#43 audit_host.bin-rwxr-xr-x 1 root root 7049952 May 4 10#43 collector.bin-rwxr-xr-x 1 root root 34949880 May 4 10#49 auditorvagrant@vagrant#~/xr-auditor$ Transferring the app to the routerYou will need the ssh credentials for your IOS-XR router to transfer the generated app to its /misc/scratch directory (also called disk0#. In our vagrant setup, the credentials are vagrant/vagrant.Note, 2223 is the port used by the vagrant IOS-XRv instance for its SSH session (See vagrant port output from earlier)vagrant@vagrant#~/xr-auditor$ scp -P 2223 dist/auditor vagrant@10.0.2.2#/misc/scratch/vagrant@10.0.2.2's password# auditor Running the auditor appYou can easily run the following steps over SSH itself (in fact Ansible Playbooks will be used for this purpose, explained in the /ansible directory README.For now, let’s jump into the router and manually try out the options available# ssh is triggered from your laptop or the host on which vagrant is runningAKSHSHAR-M-33WP#vagrant akshshar$ AKSHSHAR-M-33WP#vagrant akshshar$ ssh -p 2223 vagrant@localhostvagrant@localhost's password# RP/0/RP0/CPU0#rtr#RP/0/RP0/CPU0#rtr#RP/0/RP0/CPU0#rtr#RP/0/RP0/CPU0#rtr#View the options availableJump into the bash shell in the IOS-XRv instance and use the -h option for the auditor app#RP/0/RP0/CPU0#rtr#RP/0/RP0/CPU0#rtr#RP/0/RP0/CPU0#rtr#bashFri May 4 15#28#45.654 UTC[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$xr-vm_node0_RP0_CPU0#~]$/misc/scratch/auditor -husage# auditor [-h] [-v] [-i] [-u] [-c] [-l] [-o TARFILE_OUTPUT_DIR] [-d]optional arguments# -h, --help show this help message and exit -v, --version Display Current version of the Auditor app and exit -i, --install Install the required artifacts (audit apps, collectors and cron jobs) to default locations or to those specified in auditor.cfg.yml -u, --uninstall Uninstall all the artifacts from the system based on auditor.cfg.yml settings -c, --clean-xml Remove old XML files from the system -l, --list-files List all the audit related files (apps, cron jobs, xml files) currently on the system -o TARFILE_OUTPUT_DIR, --output-logs-to-dir TARFILE_OUTPUT_DIR Specify the directory to use to collect the collated logs from all nodes on the system -d, --debug Enable verbose logging[xr-vm_node0_RP0_CPU0#~]$Dump Auditor VersionUse the -v option#RP/0/RP0/CPU0#rtr#RP/0/RP0/CPU0#rtr#bashFri May 4 15#25#52.977 UTC[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$/misc/scratch/auditor -vv1.0.0[xr-vm_node0_RP0_CPU0#~]$Install the AppUse the -i option to install the apps and cron jobs#[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$/misc/scratch/auditor -i2018-05-04 15#26#37,536 - DebugZTPLogger - INFO - Using root-lr user specified in auditor.cfg.yml, Username# vagrant2018-05-04 15#26#37,545 - DebugZTPLogger - INFO - XR LXC audit app successfully copied2018-05-04 15#26#37,550 - DebugZTPLogger - INFO - XR LXC audit cron job successfully set up2018-05-04 15#26#40,012 - DebugZTPLogger - INFO - Admin LXC audit app successfully copied2018-05-04 15#26#42,791 - DebugZTPLogger - INFO - Admin LXC audit cron file successfully copied and activated2018-05-04 15#26#46,506 - DebugZTPLogger - INFO - HOST audit app 
successfully copied2018-05-04 15#26#50,851 - DebugZTPLogger - INFO - Host audit cron file successfully copied and activated2018-05-04 15#26#50,863 - DebugZTPLogger - INFO - Collector app successfully copied2018-05-04 15#26#50,868 - DebugZTPLogger - INFO - Collector cron job successfully set up in XR LXC2018-05-04 15#26#50,868 - DebugZTPLogger - INFO - Successfully set up artifacts, IOS-XR Linux auditing is now ON[xr-vm_node0_RP0_CPU0#~]$The locations where the apps are installed and where the XML files get dumped are all defined in userfiles/auditor.cfg.yml. View the INSTALL_CONFIG section of the yml file#These locations are used by the auditor app to install the audit_xr.bin, collector.bin, audit_host.bin and audit_admin.bin apps in the right directory and by the individual apps to determine where to generate and store the XML outputs.The cron jobs get installed in /etc/cron.d of each location (XR, admin, host).Use the -l option with the auditor app to dump the current state of all the relevant filesystems across active and standby RPs, as defined by the userfiles/auditor.cfg.yml file #[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$/misc/scratch/auditor -l2018-05-04 16#03#28,841 - DebugZTPLogger - INFO - Using root-lr user specified in auditor.cfg.yml, Username# vagrant2018-05-04 16#03#28,841 - DebugZTPLogger - INFO - #################################################### ACTIVE-RP XR #####################################################2018-05-04 16#03#28,841 - DebugZTPLogger - INFO - ###### App Directory ######2018-05-04 16#03#28,846 - DebugZTPLogger - INFO - /misc/scratch#total 48172drwxr-xr-x 2 root root 4096 Apr 24 2017 corelrwxrwxrwx 1 root root 12 Apr 24 2017 config -> /misc/configdrwx------ 2 root root 4096 Apr 24 2017 clihistorydrwxr-xr-x 2 root root 4096 Apr 24 2017 crypto-rw-r--r-- 1 root root 1549 May 4 06#32 status_filedrwxr-xr-x 8 root root 4096 May 4 06#32 ztpdrwxr-xr-x 2 root root 4096 May 4 07#27 nvgen_traces-rwxr-xr-x 1 root root 34949880 May 4 10#49 auditor-rw-r--r-- 1 root root 798 May 4 10#50 auditor_collated_logs.tar.gz-rwx------ 1 root root 7049952 May 4 15#26 collector.bin-rwx------ 1 root root 7046744 May 4 15#26 audit_xr.bin-rwx------ 1 root root 1675 May 4 16#03 id_rsa-rw-r--r-- 1 root root 240786 May 4 16#03 tpa.log2018-05-04 16#03#28,846 - DebugZTPLogger - INFO - ###### Cron directory ######2018-05-04 16#03#28,851 - DebugZTPLogger - INFO - /etc/cron.d#total 12-rw-r--r-- 1 root root 73 Apr 24 2017 logrotate.conf-rw-r--r-- 1 root root 86 May 4 15#26 audit_cron_xr_2018-05-04_15-26-37-rw-r--r-- 1 root root 87 May 4 15#26 audit_cron_collector_2018-05-04_15-26-502018-05-04 16#03#28,851 - DebugZTPLogger - INFO - ###### XML Output Directory ######2018-05-04 16#03#28,855 - DebugZTPLogger - INFO - /misc/app_host#total 84drwx------ 2 root root 16384 Apr 24 2017 lost+founddrwxr-xr-x 5 root root 4096 Apr 24 2017 etcdrwxrwxr-x 2 root sudo 4096 Apr 24 2017 scratchdrwx-----x 9 root root 4096 Apr 24 2017 dockerdrwxr-xr-x 5 root root 4096 Apr 24 2017 app_reposrw-rw---- 1 root root 0 May 4 06#31 docker.sock-rw-r--r-- 1 root root 8111 May 4 16#03 HOST.xml-rw-r--r-- 1 root root 7908 May 4 16#03 ADMIN-LXC.xml-rw-r--r-- 1 root root 23798 May 4 16#03 compliance_audit_rtr_11_1_1_10.xml-rw-r--r-- 1 root root 8264 May 4 16#03 XR-LXC.xml2018-05-04 16#03#28,855 - DebugZTPLogger - INFO - #################################################### ACTIVE-RP COLLECTOR #####################################################2018-05-04 16#03#28,856 - DebugZTPLogger - INFO - ###### App Directory 
######2018-05-04 16#03#28,860 - DebugZTPLogger - INFO - /misc/scratch#total 48172drwxr-xr-x 2 root root 4096 Apr 24 2017 corelrwxrwxrwx 1 root root 12 Apr 24 2017 config -> /misc/configdrwx------ 2 root root 4096 Apr 24 2017 clihistorydrwxr-xr-x 2 root root 4096 Apr 24 2017 crypto-rw-r--r-- 1 root root 1549 May 4 06#32 status_filedrwxr-xr-x 8 root root 4096 May 4 06#32 ztpdrwxr-xr-x 2 root root 4096 May 4 07#27 nvgen_traces-rwxr-xr-x 1 root root 34949880 May 4 10#49 auditor-rw-r--r-- 1 root root 798 May 4 10#50 auditor_collated_logs.tar.gz-rwx------ 1 root root 7049952 May 4 15#26 collector.bin-rwx------ 1 root root 7046744 May 4 15#26 audit_xr.bin-rwx------ 1 root root 1675 May 4 16#03 id_rsa-rw-r--r-- 1 root root 240786 May 4 16#03 tpa.log2018-05-04 16#03#28,860 - DebugZTPLogger - INFO - ###### Cron directory ######2018-05-04 16#03#28,866 - DebugZTPLogger - INFO - /etc/cron.d#total 12-rw-r--r-- 1 root root 73 Apr 24 2017 logrotate.conf-rw-r--r-- 1 root root 86 May 4 15#26 audit_cron_xr_2018-05-04_15-26-37-rw-r--r-- 1 root root 87 May 4 15#26 audit_cron_collector_2018-05-04_15-26-502018-05-04 16#03#28,866 - DebugZTPLogger - INFO - ###### XML Output Directory ######2018-05-04 16#03#28,872 - DebugZTPLogger - INFO - /misc/app_host#total 84drwx------ 2 root root 16384 Apr 24 2017 lost+founddrwxr-xr-x 5 root root 4096 Apr 24 2017 etcdrwxrwxr-x 2 root sudo 4096 Apr 24 2017 scratchdrwx-----x 9 root root 4096 Apr 24 2017 dockerdrwxr-xr-x 5 root root 4096 Apr 24 2017 app_reposrw-rw---- 1 root root 0 May 4 06#31 docker.sock-rw-r--r-- 1 root root 8111 May 4 16#03 HOST.xml-rw-r--r-- 1 root root 7908 May 4 16#03 ADMIN-LXC.xml-rw-r--r-- 1 root root 23798 May 4 16#03 compliance_audit_rtr_11_1_1_10.xml-rw-r--r-- 1 root root 8264 May 4 16#03 XR-LXC.xml2018-05-04 16#03#28,872 - DebugZTPLogger - INFO - #################################################### ACTIVE-RP ADMIN #####################################################2018-05-04 16#03#28,872 - DebugZTPLogger - INFO - ###### App Directory ######2018-05-04 16#03#29,460 - DebugZTPLogger - INFO - /misc/scratch#total 7164drwxr-xr-x 2 root root 4096 Apr 24 2017 coredrwxr-xr-x 2 root root 4096 Apr 24 2017 shelf_mgr_pds-rw-r--r-- 1 root root 579 May 4 06#31 card_specific_install--wxr-s--- 1 root root 11974 May 4 06#31 calvados_log_tacacsd_0_0.out--wxr-sr-- 1 root root 1822 May 4 06#31 calvados_log_instagt_log_0_0.out--wxr-Sr-- 1 root root 3388 May 4 06#31 calvados_log_vmm_0_0.out--wxr-sr-x 1 root root 28112 May 4 06#33 calvados_log_confd_helper_0_0.out-rwx------ 1 root root 7046848 May 4 15#26 audit_admin.bin-rw-r--r-- 1 root root 7908 May 4 16#03 ADMIN-LXC.xml--wxr-Sr-x 1 root root 211763 May 4 16#03 calvados_log_aaad_0_0.out2018-05-04 16#03#29,460 - DebugZTPLogger - INFO - ###### Cron directory ######2018-05-04 16#03#30,075 - DebugZTPLogger - INFO - /etc/cron.d#total 8-rw-r--r-- 1 root root 73 Apr 24 2017 logrotate.conf-rw-r--r-- 1 root root 89 May 4 15#26 audit_cron_admin_2018-05-04_15-26-402018-05-04 16#03#30,076 - DebugZTPLogger - INFO - ###### XML Output Directory ######2018-05-04 16#03#30,639 - DebugZTPLogger - INFO - /misc/scratch#total 7168drwxr-xr-x 2 root root 4096 Apr 24 2017 coredrwxr-xr-x 2 root root 4096 Apr 24 2017 shelf_mgr_pds-rw-r--r-- 1 root root 579 May 4 06#31 card_specific_install--wxr-s--- 1 root root 11974 May 4 06#31 calvados_log_tacacsd_0_0.out--wxr-sr-- 1 root root 1822 May 4 06#31 calvados_log_instagt_log_0_0.out--wxr-Sr-- 1 root root 3388 May 4 06#31 calvados_log_vmm_0_0.out--wxr-sr-x 1 root root 28112 May 4 06#33 
calvados_log_confd_helper_0_0.out-rwx------ 1 root root 7046848 May 4 15#26 audit_admin.bin-rw-r--r-- 1 root root 7908 May 4 16#03 ADMIN-LXC.xml--wxr-Sr-x 1 root root 215191 May 4 16#03 calvados_log_aaad_0_0.out2018-05-04 16#03#30,640 - DebugZTPLogger - INFO - #################################################### ACTIVE-RP HOST #####################################################2018-05-04 16#03#30,640 - DebugZTPLogger - INFO - ###### App Directory ######2018-05-04 16#03#31,334 - DebugZTPLogger - INFO - /misc/scratch#total 6888drwxr-xr-x 2 root root 4096 Apr 24 2017 core-rwx------ 1 root root 7046616 May 4 15#26 audit_host.bin2018-05-04 16#03#31,334 - DebugZTPLogger - INFO - ###### Cron directory ######2018-05-04 16#03#32,063 - DebugZTPLogger - INFO - /etc/cron.d#total 8-rw-r--r-- 1 root root 73 Apr 24 2017 logrotate.conf-rw-r--r-- 1 root root 88 May 4 15#26 audit_cron_host_2018-05-04_15-26-462018-05-04 16#03#32,064 - DebugZTPLogger - INFO - ###### XML Output Directory ######2018-05-04 16#03#32,788 - DebugZTPLogger - INFO - /misc/app_host#total 84drwx------ 2 root root 16384 Apr 24 2017 lost+founddrwxr-xr-x 5 root root 4096 Apr 24 2017 etcdrwxrwxr-x 2 root sudo 4096 Apr 24 2017 scratchdrwx-----x 9 root root 4096 Apr 24 2017 dockerdrwxr-xr-x 5 root root 4096 Apr 24 2017 app_reposrw-rw---- 1 root root 0 May 4 06#31 docker.sock-rw-r--r-- 1 root root 8111 May 4 16#03 HOST.xml-rw-r--r-- 1 root root 7908 May 4 16#03 ADMIN-LXC.xml-rw-r--r-- 1 root root 23798 May 4 16#03 compliance_audit_rtr_11_1_1_10.xml-rw-r--r-- 1 root root 8264 May 4 16#03 XR-LXC.xml[xr-vm_node0_RP0_CPU0#~]$List generated XML filesYou will see the generated XML files in the directories specified in the userfiles/auditor.cfg.yml files as explained in the previous section. The recommendation is to set the output_xml_dir for both XR and collector to /misc/app_host to view all the XML files in one location, but it is not mandatory.[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$ls -lrt /misc/app_host/total 84drwx------ 2 root root 16384 Apr 24 2017 lost+founddrwxr-xr-x 5 root root 4096 Apr 24 2017 etcdrwxrwxr-x 2 root sudo 4096 Apr 24 2017 scratchdrwx-----x 9 root root 4096 Apr 24 2017 dockerdrwxr-xr-x 5 root root 4096 Apr 24 2017 app_reposrw-rw---- 1 root root 0 May 4 06#31 docker.sock-rw-r--r-- 1 root root 7908 May 4 16#05 ADMIN-LXC.xml-rw-r--r-- 1 root root 8111 May 4 16#05 HOST.xml-rw-r--r-- 1 root root 23799 May 4 16#05 compliance_audit_rtr_11_1_1_10.xml-rw-r--r-- 1 root root 8264 May 4 16#05 XR-LXC.xml[xr-vm_node0_RP0_CPU0#~]$Here,-rw-r--r-- 1 root root 7908 May 4 16#05 ADMIN-LXC.xml is generated by audit_admin.bin running in the Admin LXC-rw-r--r-- 1 root root 8111 May 4 16#05 HOST.xml is generated by audit_host.bin running in the Host shell-rw-r--r-- 1 root root 8264 May 4 16#05 XR-LXC.xml is generated by audit_xr.bin running in the XR shell-rw-r--r-- 1 root root 23799 May 4 16#05 compliance_audit_rtr_11_1_1_10.xml is generated by the collector app running in the XR shell.Deconstructing the XML contentAs specified earlier, the XML content generated by the collector app is transferred over SSH to the remote server, based on the SERVER_CONFIG settings in userfiles/auditor.cfg.yml#So log back into devbox(server) and drop into the directory that you specified as REMOTE_DIRECTORY in userfiles/auditor.cfg.yml. 
# Specify the remote directory on the server # where the compliance XML file should be copied REMOTE_DIRECTORY# ~/home/vagrant~In this case, it is set to /home/vagrant so checking there#AKSHSHAR-M-33WP#vagrant akshshar$ AKSHSHAR-M-33WP#vagrant akshshar$ vagrant ssh devboxvagrant@127.0.0.1's password# Welcome to Ubuntu 16.04.4 LTS (GNU/Linux 4.4.0-87-generic x86_64) * Documentation# https#//help.ubuntu.com * Management# https#//landscape.canonical.com * Support# https#//ubuntu.com/advantage0 packages can be updated.0 updates are security updates.Last login# Fri May 4 16#13#51 2018 from 10.0.2.2vagrant@vagrant#~$ vagrant@vagrant#~$ vagrant@vagrant#~$ ls -lrt /misc/appls# cannot access '/misc/app'# No such file or directoryvagrant@vagrant#~$ ls -lrt /home/vagrant/total 28drwxrwxr-x 10 vagrant vagrant 4096 May 4 10#42 xr-auditor-rw-rw-r-- 1 vagrant vagrant 23799 May 4 16#14 compliance_audit_rtr_11_1_1_10.xmlvagrant@vagrant#~$ vagrant@vagrant#~$ Great! We see the xml file appear on the server, transmitted by the collector app.Let’s dump the content#vagrant@vagrant#~$ cat compliance_audit_rtr_11_1_1_10.xml <?xml version=~1.0~ encoding=~utf-8~?><COMPLIANCE-DUMP xmlns#xsi=~http#//www.w3.org/2001/XMLSchema-instance~ version=~1.0.0~ xsi#noNamespaceSchemaLocation=~compliance.xsd~>\t<GENERAL>\t\t<PRODUCT>XRV-P-L--CH</PRODUCT>\t\t<VENDOR>Cisco</VENDOR>\t\t<IPADDR>11.1.1.10/24</IPADDR>\t\t<HOST>rtr</HOST>\t\t<VERSION>6.1.2</VERSION>\t\t<DATE>20180504-16#14 UTC</DATE>\t\t<OS>IOS-XR</OS>\t</GENERAL>\t<INTEGRITY-SET>\t\t<INTEGRITY domain=~XR-LXC~>\t\t\t<FILES>\t\t\t\t<FILE>\t\t\t\t\t<CONTENT>[~#\\t$OpenBSD# sshd_config,v 1.80 2008/07/02 02#24#18 djm Exp $~, ~# This is the sshd server system-wide configuration file. See~, ~# sshd_config(5) for more information.~, ~# This sshd was compiled with PATH=/usr/bin#/bin#/usr/sbin#/sbin~, ~# The strategy used for options in the default sshd_config shipped with~, ~# OpenSSH is to specify options with their default value where~, ~# possible, but leave them commented. Uncommented options change a~, ~# default value.~, ~#Port 22~, ~AddressFamily inet~, ~#ListenAddress 0.0.0.0~, ~#ListenAddress ##~, ~# Disable legacy (protocol version 1) support in the server for new~, ~# installations. 
In future the default will change to require explicit~, ~# activation of protocol 1~, ~Protocol 2~, ~# HostKey for protocol version 1~, ~#HostKey /etc/ssh/ssh_host_key~, ~# HostKeys for protocol version 2~, ~#HostKey /etc/ssh/ssh_host_rsa_key~, ~#HostKey /etc/ssh/ssh_host_dsa_key~, ~# Lifetime and size of ephemeral version 1 server key~, ~#KeyRegenerationInterval 1h~, ~#ServerKeyBits 1024~, ~# Logging~, ~# obsoletes QuietMode and FascistLogging~, ~#SyslogFacility AUTH~, ~#LogLevel INFO~, ~# Authentication#~, ~#LoginGraceTime 2m~, ~PermitRootLogin yes~, ~#StrictModes yes~, ~#MaxAuthTries 6~, ~#MaxSessions 10~, ~#RSAAuthentication yes~, ~#PubkeyAuthentication yes~, ~#AuthorizedKeysFile\\t.ssh/authorized_keys~, ~# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts~, ~#RhostsRSAAuthentication no~, ~# similar for protocol version 2~, ~#HostbasedAuthentication no~, ~# Change to yes if you don't trust ~/.ssh/known_hosts for~, ~# RhostsRSAAuthentication and HostbasedAuthentication~, ~#IgnoreUserKnownHosts no~, ~# Don't read the user's ~/.rhosts and ~/.shosts files~, ~#IgnoreRhosts yes~, ~# To disable tunneled clear text passwords, change to no here!~, ~#PasswordAuthentication yes~, ~PermitEmptyPasswords yes~, ~# Change to no to disable s/key passwords~, ~#ChallengeResponseAuthentication yes~, ~# Kerberos options~, ~#KerberosAuthentication no~, ~#KerberosOrLocalPasswd yes~, ~#KerberosTicketCleanup yes~, ~#KerberosGetAFSToken no~, ~# GSSAPI options~, ~#GSSAPIAuthentication no~, ~#GSSAPICleanupCredentials yes~, ~# Set this to 'yes' to enable PAM authentication, account processing,~, ~# and session processing. If this is enabled, PAM authentication will~, ~# be allowed through the ChallengeResponseAuthentication and~, ~# PasswordAuthentication. 
Depending on your PAM configuration,~, ~# PAM authentication via ChallengeResponseAuthentication may bypass~, ~# the setting of \\~PermitRootLogin without-password\\~.~, ~# If you just want the PAM account and session checks to run without~, ~# PAM authentication, then enable this but set PasswordAuthentication~, ~# and ChallengeResponseAuthentication to 'no'.~, ~#UsePAM no~, ~#AllowAgentForwarding yes~, ~#AllowTcpForwarding yes~, ~#GatewayPorts no~, ~#X11Forwarding no~, ~#X11DisplayOffset 10~, ~#X11UseLocalhost yes~, ~#PrintMotd yes~, ~#PrintLastLog yes~, ~#TCPKeepAlive yes~, ~#UseLogin no~, ~UsePrivilegeSeparation no~, ~#PermitUserEnvironment no~, ~Compression no~, ~ClientAliveInterval 15~, ~ClientAliveCountMax 4~, ~UseDNS no~, ~#PidFile /var/run/sshd.pid~, ~#MaxStartups 10~, ~#PermitTunnel no~, ~#ChrootDirectory none~, ~# no default banner path~, ~#Banner none~, ~# override default of no subsystems~, ~Subsystem\\tsftp\\t/usr/lib64/openssh/sftp-server~, ~# Example of overriding settings on a per-user basis~, ~#Match User anoncvs~, ~#\\tX11Forwarding no~, ~#\\tAllowTcpForwarding no~, ~#\\tForceCommand cvs server~]</CONTENT>\t\t\t\t\t<CHECKSUM>97884b5c2cb2b75022c4b440ddc4245a</CHECKSUM>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -la</REQUEST>\t\t\t\t\t\t\t<RESPONSE>-rwxr-xr-x 1 root root 3275 Apr 24 2017 /etc/ssh/sshd_config</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/etc/ssh/sshd_config</NAME>\t\t\t\t</FILE>\t\t\t\t<FILE>\t\t\t\t\t<CONTENT>[~root#x#0#0#root#/root#/bin/sh~, ~daemon#x#1#1#daemon#/usr/sbin#/bin/sh~, ~bin#x#2#2#bin#/bin#/bin/sh~, ~sys#x#3#3#sys#/dev#/bin/sh~, ~sync#x#4#65534#sync#/bin#/bin/sync~, ~games#x#5#60#games#/usr/games#/bin/sh~, ~man#x#6#12#man#/var/cache/man#/bin/sh~, ~lp#x#7#7#lp#/var/spool/lpd#/bin/sh~, ~mail#x#8#8#mail#/var/mail#/bin/sh~, ~news#x#9#9#news#/var/spool/news#/bin/sh~, ~uucp#x#10#10#uucp#/var/spool/uucp#/bin/sh~, ~proxy#x#13#13#proxy#/bin#/bin/sh~, ~www-data#x#33#33#www-data#/var/www#/bin/sh~, ~backup#x#34#34#backup#/var/backups#/bin/sh~, ~list#x#38#38#Mailing List Manager#/var/list#/bin/sh~, ~irc#x#39#39#ircd#/var/run/ircd#/bin/sh~, ~gnats#x#41#41#Gnats Bug-Reporting System (admin)#/var/lib/gnats#/bin/sh~, ~nobody#x#65534#65534#nobody#/nonexistent#/bin/sh~, ~messagebus#x#999#998##/var/lib/dbus#/bin/false~, ~rpc#x#998#996##/#/bin/false~, ~sshd#x#997#995##/var/run/sshd#/bin/false~, ~vagrant#x#1000#1009##/home/vagrant#/bin/sh~]</CONTENT>\t\t\t\t\t<CHECKSUM>0cabf9f93101d6876bba590e48bfda5e</CHECKSUM>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -la</REQUEST>\t\t\t\t\t\t\t<RESPONSE>-rw-r--r-- 1 root root 874 Apr 24 2017 /etc/passwd</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/etc/passwd</NAME>\t\t\t\t</FILE>\t\t\t\t<FILE>\t\t\t\t\t<CHECKSUM>7ea587858977ef205c6a7419463359f7</CHECKSUM>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -la</REQUEST>\t\t\t\t\t\t\t<RESPONSE>lrwxrwxrwx 1 root root 7 Apr 24 2017 /usr/bin/python -&gt; python2</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/usr/bin/python</NAME>\t\t\t\t</FILE>\t\t\t\t<FILE>\t\t\t\t\t<CONTENT>[~# Defaults for dhcp initscript~, ~# sourced by /etc/init.d/dhcp-server~, ~# installed at /etc/default/dhcp-server by the maintainer scripts~, ~# On what interfaces should the DHCP server (dhcpd) serve DHCP requests?~, ~# Separate multiple interfaces with spaces, e.g. 
\\~eth0 eth1\\~.~, ~INTERFACES=\\~\\~~]</CONTENT>\t\t\t\t\t<CHECKSUM>1c905007d96a8b16c58454b6da8cfd86</CHECKSUM>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -lhrt</REQUEST>\t\t\t\t\t\t\t<RESPONSE>-rw-r--r-- 1 root root 290 Apr 24 2017 /etc/default/dhcp-server</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/etc/default/dhcp-server</NAME>\t\t\t\t</FILE>\t\t\t</FILES>\t\t\t<DIRECTORIES>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE>drwxr-xr-x 3 root root 20480 Apr 24 2017 /usr/bin</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/usr/bin</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -lrt</REQUEST>\t\t\t\t\t\t\t<RESPONSE>total 8-rwx------ 1 root root 0 Apr 24 2017 log.txt-rwx------ 1 root root 13 Apr 24 2017 card_instances.txt-rw-r--r-- 1 root root 218 Apr 24 2017 cmdline-rw-r--r-- 1 root root 0 May 4 16#13 test.txt</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>touch test.txt</REQUEST>\t\t\t\t\t\t\t<RESPONSE></RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/root</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE>drwxr-xr-x 7 root root 4096 May 4 15#26 /misc/scratch</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/misc/scratch</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE>drwxr-xr-x 7 root root 4096 May 4 16#03 /misc/app_host</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/misc/app_host</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE>drwxr-xr-x 57 root root 4096 May 4 06#32 /etc</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/etc</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE>drwxr-xr-x 34 root root 4096 May 4 10#52 /</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/</NAME>\t\t\t\t</DIRECTORY>\t\t\t</DIRECTORIES>\t\t</INTEGRITY>\t\t<INTEGRITY domain=~ADMIN-LXC~>\t\t\t<FILES>\t\t\t\t<FILE>\t\t\t\t\t<CONTENT>[~#\\t$OpenBSD# sshd_config,v 1.80 2008/07/02 02#24#18 djm Exp $~, ~# This is the sshd server system-wide configuration file. See~, ~# sshd_config(5) for more information.~, ~# This sshd was compiled with PATH=/usr/bin#/bin#/usr/sbin#/sbin~, ~# The strategy used for options in the default sshd_config shipped with~, ~# OpenSSH is to specify options with their default value where~, ~# possible, but leave them commented. Uncommented options change a~, ~# default value.~, ~#Port 22~, ~AddressFamily inet~, ~#ListenAddress 0.0.0.0~, ~#ListenAddress ##~, ~# Disable legacy (protocol version 1) support in the server for new~, ~# installations. 
In future the default will change to require explicit~, ~# activation of protocol 1~, ~Protocol 2~, ~# HostKey for protocol version 1~, ~#HostKey /etc/ssh/ssh_host_key~, ~# HostKeys for protocol version 2~, ~#HostKey /etc/ssh/ssh_host_rsa_key~, ~#HostKey /etc/ssh/ssh_host_dsa_key~, ~# Lifetime and size of ephemeral version 1 server key~, ~#KeyRegenerationInterval 1h~, ~#ServerKeyBits 1024~, ~# Logging~, ~# obsoletes QuietMode and FascistLogging~, ~#SyslogFacility AUTH~, ~#LogLevel INFO~, ~# Authentication#~, ~#LoginGraceTime 2m~, ~PermitRootLogin yes~, ~#StrictModes yes~, ~#MaxAuthTries 6~, ~#MaxSessions 10~, ~#RSAAuthentication yes~, ~#PubkeyAuthentication yes~, ~#AuthorizedKeysFile\\t.ssh/authorized_keys~, ~# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts~, ~#RhostsRSAAuthentication no~, ~# similar for protocol version 2~, ~#HostbasedAuthentication no~, ~# Change to yes if you don't trust ~/.ssh/known_hosts for~, ~# RhostsRSAAuthentication and HostbasedAuthentication~, ~#IgnoreUserKnownHosts no~, ~# Don't read the user's ~/.rhosts and ~/.shosts files~, ~#IgnoreRhosts yes~, ~# To disable tunneled clear text passwords, change to no here!~, ~#PasswordAuthentication yes~, ~PermitEmptyPasswords yes~, ~# Change to no to disable s/key passwords~, ~#ChallengeResponseAuthentication yes~, ~# Kerberos options~, ~#KerberosAuthentication no~, ~#KerberosOrLocalPasswd yes~, ~#KerberosTicketCleanup yes~, ~#KerberosGetAFSToken no~, ~# GSSAPI options~, ~#GSSAPIAuthentication no~, ~#GSSAPICleanupCredentials yes~, ~# Set this to 'yes' to enable PAM authentication, account processing,~, ~# and session processing. If this is enabled, PAM authentication will~, ~# be allowed through the ChallengeResponseAuthentication and~, ~# PasswordAuthentication. 
Depending on your PAM configuration,~, ~# PAM authentication via ChallengeResponseAuthentication may bypass~, ~# the setting of \\~PermitRootLogin without-password\\~.~, ~# If you just want the PAM account and session checks to run without~, ~# PAM authentication, then enable this but set PasswordAuthentication~, ~# and ChallengeResponseAuthentication to 'no'.~, ~#UsePAM no~, ~#AllowAgentForwarding yes~, ~#AllowTcpForwarding yes~, ~#GatewayPorts no~, ~#X11Forwarding no~, ~#X11DisplayOffset 10~, ~#X11UseLocalhost yes~, ~#PrintMotd yes~, ~#PrintLastLog yes~, ~#TCPKeepAlive yes~, ~#UseLogin no~, ~UsePrivilegeSeparation no~, ~#PermitUserEnvironment no~, ~Compression no~, ~ClientAliveInterval 15~, ~ClientAliveCountMax 4~, ~UseDNS no~, ~#PidFile /var/run/sshd.pid~, ~#MaxStartups 10~, ~#PermitTunnel no~, ~#ChrootDirectory none~, ~# no default banner path~, ~#Banner none~, ~# override default of no subsystems~, ~Subsystem\\tsftp\\t/usr/lib64/openssh/sftp-server~, ~# Example of overriding settings on a per-user basis~, ~#Match User anoncvs~, ~#\\tX11Forwarding no~, ~#\\tAllowTcpForwarding no~, ~#\\tForceCommand cvs server~]</CONTENT>\t\t\t\t\t<CHECKSUM>97884b5c2cb2b75022c4b440ddc4245a</CHECKSUM>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -la</REQUEST>\t\t\t\t\t\t\t<RESPONSE>-rwxr-xr-x 1 root root 3275 Apr 24 2017 /etc/ssh/sshd_config</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/etc/ssh/sshd_config</NAME>\t\t\t\t</FILE>\t\t\t\t<FILE>\t\t\t\t\t<CONTENT>[~root#x#0#0#root#/root#/bin/sh~, ~daemon#x#1#1#daemon#/usr/sbin#/bin/sh~, ~bin#x#2#2#bin#/bin#/bin/sh~, ~sys#x#3#3#sys#/dev#/bin/sh~, ~sync#x#4#65534#sync#/bin#/bin/sync~, ~games#x#5#60#games#/usr/games#/bin/sh~, ~man#x#6#12#man#/var/cache/man#/bin/sh~, ~lp#x#7#7#lp#/var/spool/lpd#/bin/sh~, ~mail#x#8#8#mail#/var/mail#/bin/sh~, ~news#x#9#9#news#/var/spool/news#/bin/sh~, ~uucp#x#10#10#uucp#/var/spool/uucp#/bin/sh~, ~proxy#x#13#13#proxy#/bin#/bin/sh~, ~www-data#x#33#33#www-data#/var/www#/bin/sh~, ~backup#x#34#34#backup#/var/backups#/bin/sh~, ~list#x#38#38#Mailing List Manager#/var/list#/bin/sh~, ~irc#x#39#39#ircd#/var/run/ircd#/bin/sh~, ~gnats#x#41#41#Gnats Bug-Reporting System (admin)#/var/lib/gnats#/bin/sh~, ~nobody#x#65534#65534#nobody#/nonexistent#/bin/sh~, ~messagebus#x#999#998##/var/lib/dbus#/bin/false~, ~rpc#x#998#996##/#/bin/false~, ~sshd#x#997#995##/var/run/sshd#/bin/false~]</CONTENT>\t\t\t\t\t<CHECKSUM>591fb16f798d29aa9dab2db5557ff4f8</CHECKSUM>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -la</REQUEST>\t\t\t\t\t\t\t<RESPONSE>-rw-r--r-- 1 root root 831 Apr 24 2017 /etc/passwd</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/etc/passwd</NAME>\t\t\t\t</FILE>\t\t\t\t<FILE>\t\t\t\t\t<CHECKSUM>7ea587858977ef205c6a7419463359f7</CHECKSUM>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -la</REQUEST>\t\t\t\t\t\t\t<RESPONSE>lrwxrwxrwx 1 root root 7 Apr 24 2017 /usr/bin/python -&gt; python2</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/usr/bin/python</NAME>\t\t\t\t</FILE>\t\t\t\t<FILE>\t\t\t\t\t<CONTENT>[~# Defaults for dhcp initscript~, ~# sourced by /etc/init.d/dhcp-server~, ~# installed at /etc/default/dhcp-server by the maintainer scripts~, ~# On what interfaces should the DHCP server (dhcpd) serve DHCP requests?~, ~# Separate multiple interfaces with spaces, e.g. 
\\~eth0 eth1\\~.~, ~INTERFACES=\\~\\~~]</CONTENT>\t\t\t\t\t<CHECKSUM>1c905007d96a8b16c58454b6da8cfd86</CHECKSUM>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -lhrt</REQUEST>\t\t\t\t\t\t\t<RESPONSE>-rw-r--r-- 1 root root 290 Apr 24 2017 /etc/default/dhcp-server</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/etc/default/dhcp-server</NAME>\t\t\t\t</FILE>\t\t\t</FILES>\t\t\t<DIRECTORIES>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE>drwxr-sr-x 3 root root 20480 Apr 24 2017 /usr/bin</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/usr/bin</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -lrt</REQUEST>\t\t\t\t\t\t\t<RESPONSE>total 4-rwx------ 1 root root 0 Apr 24 2017 calv_setup_ldpath.log-rw-r--r-- 1 root root 227 Apr 24 2017 cmdline-rw-r--r-- 1 root root 0 May 4 16#14 test.txt</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>touch test.txt</REQUEST>\t\t\t\t\t\t\t<RESPONSE></RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/root</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE>drwxr-xr-x 4 root root 4096 May 4 15#27 /misc/scratch</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/misc/scratch</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE></RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/misc/app_host</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE>drwxr-sr-x 55 root root 4096 May 4 06#32 /etc</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/etc</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE>drwxr-sr-x 28 root root 4096 May 4 06#30 /</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/</NAME>\t\t\t\t</DIRECTORY>\t\t\t</DIRECTORIES>\t\t</INTEGRITY>\t\t<INTEGRITY domain=~HOST~>\t\t\t<FILES>\t\t\t\t<FILE>\t\t\t\t\t<CONTENT>[~#\\t$OpenBSD# sshd_config,v 1.80 2008/07/02 02#24#18 djm Exp $~, ~# This is the sshd server system-wide configuration file. See~, ~# sshd_config(5) for more information.~, ~# This sshd was compiled with PATH=/usr/bin#/bin#/usr/sbin#/sbin~, ~# The strategy used for options in the default sshd_config shipped with~, ~# OpenSSH is to specify options with their default value where~, ~# possible, but leave them commented. Uncommented options change a~, ~# default value.~, ~#Port 22~, ~AddressFamily inet~, ~#ListenAddress 0.0.0.0~, ~#ListenAddress ##~, ~# Disable legacy (protocol version 1) support in the server for new~, ~# installations. 
In future the default will change to require explicit~, ~# activation of protocol 1~, ~Protocol 2~, ~# HostKey for protocol version 1~, ~#HostKey /etc/ssh/ssh_host_key~, ~# HostKeys for protocol version 2~, ~#HostKey /etc/ssh/ssh_host_rsa_key~, ~#HostKey /etc/ssh/ssh_host_dsa_key~, ~# Lifetime and size of ephemeral version 1 server key~, ~#KeyRegenerationInterval 1h~, ~#ServerKeyBits 1024~, ~# Logging~, ~# obsoletes QuietMode and FascistLogging~, ~#SyslogFacility AUTH~, ~#LogLevel INFO~, ~# Authentication#~, ~#LoginGraceTime 2m~, ~PermitRootLogin yes~, ~#StrictModes yes~, ~#MaxAuthTries 6~, ~#MaxSessions 10~, ~#RSAAuthentication yes~, ~#PubkeyAuthentication yes~, ~#AuthorizedKeysFile\\t.ssh/authorized_keys~, ~# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts~, ~#RhostsRSAAuthentication no~, ~# similar for protocol version 2~, ~#HostbasedAuthentication no~, ~# Change to yes if you don't trust ~/.ssh/known_hosts for~, ~# RhostsRSAAuthentication and HostbasedAuthentication~, ~#IgnoreUserKnownHosts no~, ~# Don't read the user's ~/.rhosts and ~/.shosts files~, ~#IgnoreRhosts yes~, ~# To disable tunneled clear text passwords, change to no here!~, ~#PasswordAuthentication yes~, ~PermitEmptyPasswords yes~, ~# Change to no to disable s/key passwords~, ~#ChallengeResponseAuthentication yes~, ~# Kerberos options~, ~#KerberosAuthentication no~, ~#KerberosOrLocalPasswd yes~, ~#KerberosTicketCleanup yes~, ~#KerberosGetAFSToken no~, ~# GSSAPI options~, ~#GSSAPIAuthentication no~, ~#GSSAPICleanupCredentials yes~, ~# Set this to 'yes' to enable PAM authentication, account processing,~, ~# and session processing. If this is enabled, PAM authentication will~, ~# be allowed through the ChallengeResponseAuthentication and~, ~# PasswordAuthentication. 
Depending on your PAM configuration,~, ~# PAM authentication via ChallengeResponseAuthentication may bypass~, ~# the setting of \\~PermitRootLogin without-password\\~.~, ~# If you just want the PAM account and session checks to run without~, ~# PAM authentication, then enable this but set PasswordAuthentication~, ~# and ChallengeResponseAuthentication to 'no'.~, ~#UsePAM no~, ~#AllowAgentForwarding yes~, ~#AllowTcpForwarding yes~, ~#GatewayPorts no~, ~#X11Forwarding no~, ~#X11DisplayOffset 10~, ~#X11UseLocalhost yes~, ~#PrintMotd yes~, ~#PrintLastLog yes~, ~#TCPKeepAlive yes~, ~#UseLogin no~, ~UsePrivilegeSeparation no~, ~#PermitUserEnvironment no~, ~Compression no~, ~ClientAliveInterval 15~, ~ClientAliveCountMax 4~, ~UseDNS no~, ~#PidFile /var/run/sshd.pid~, ~#MaxStartups 10~, ~#PermitTunnel no~, ~#ChrootDirectory none~, ~# no default banner path~, ~#Banner none~, ~# override default of no subsystems~, ~Subsystem\\tsftp\\t/usr/lib64/openssh/sftp-server~, ~# Example of overriding settings on a per-user basis~, ~#Match User anoncvs~, ~#\\tX11Forwarding no~, ~#\\tAllowTcpForwarding no~, ~#\\tForceCommand cvs server~, ~#~, ~# Permit access from calvados and XR to host~, ~#~, ~Match Address 10.11.12.*~, ~PermitRootLogin yes~, ~#~, ~# Permit access from host to calvados and XR~, ~#~, ~Match Address 10.0.2.*~, ~PermitRootLogin yes~]</CONTENT>\t\t\t\t\t<CHECKSUM>5b4a5d15629e9a81e16d64f8a7f2e873</CHECKSUM>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -la</REQUEST>\t\t\t\t\t\t\t<RESPONSE>-rwxr-xr-x 1 root root 3466 Apr 24 2017 /etc/ssh/sshd_config</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/etc/ssh/sshd_config</NAME>\t\t\t\t</FILE>\t\t\t\t<FILE>\t\t\t\t\t<CONTENT>[~root#x#0#0#root#/root#/bin/sh~, ~daemon#x#1#1#daemon#/usr/sbin#/bin/sh~, ~bin#x#2#2#bin#/bin#/bin/sh~, ~sys#x#3#3#sys#/dev#/bin/sh~, ~sync#x#4#65534#sync#/bin#/bin/sync~, ~games#x#5#60#games#/usr/games#/bin/sh~, ~man#x#6#12#man#/var/cache/man#/bin/sh~, ~lp#x#7#7#lp#/var/spool/lpd#/bin/sh~, ~mail#x#8#8#mail#/var/mail#/bin/sh~, ~news#x#9#9#news#/var/spool/news#/bin/sh~, ~uucp#x#10#10#uucp#/var/spool/uucp#/bin/sh~, ~proxy#x#13#13#proxy#/bin#/bin/sh~, ~www-data#x#33#33#www-data#/var/www#/bin/sh~, ~backup#x#34#34#backup#/var/backups#/bin/sh~, ~list#x#38#38#Mailing List Manager#/var/list#/bin/sh~, ~irc#x#39#39#ircd#/var/run/ircd#/bin/sh~, ~gnats#x#41#41#Gnats Bug-Reporting System (admin)#/var/lib/gnats#/bin/sh~, ~nobody#x#65534#65534#nobody#/nonexistent#/bin/sh~, ~messagebus#x#999#998##/var/lib/dbus#/bin/false~, ~rpc#x#998#996##/#/bin/false~, ~sshd#x#997#995##/var/run/sshd#/bin/false~]</CONTENT>\t\t\t\t\t<CHECKSUM>591fb16f798d29aa9dab2db5557ff4f8</CHECKSUM>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -la</REQUEST>\t\t\t\t\t\t\t<RESPONSE>-rw-r--r-- 1 root root 831 Apr 24 2017 /etc/passwd</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/etc/passwd</NAME>\t\t\t\t</FILE>\t\t\t\t<FILE>\t\t\t\t\t<CHECKSUM>7ea587858977ef205c6a7419463359f7</CHECKSUM>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -la</REQUEST>\t\t\t\t\t\t\t<RESPONSE>lrwxrwxrwx 1 root root 7 Apr 24 2017 /usr/bin/python -&gt; python2</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/usr/bin/python</NAME>\t\t\t\t</FILE>\t\t\t\t<FILE>\t\t\t\t\t<CONTENT>[~# Defaults for dhcp initscript~, ~# sourced by /etc/init.d/dhcp-server~, ~# installed at /etc/default/dhcp-server by the maintainer scripts~, ~# On what interfaces should the DHCP server (dhcpd) serve DHCP requests?~, ~# Separate 
multiple interfaces with spaces, e.g. \\~eth0 eth1\\~.~, ~INTERFACES=\\~\\~~]</CONTENT>\t\t\t\t\t<CHECKSUM>1c905007d96a8b16c58454b6da8cfd86</CHECKSUM>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -lhrt</REQUEST>\t\t\t\t\t\t\t<RESPONSE>-rw-r--r-- 1 root root 290 Apr 24 2017 /etc/default/dhcp-server</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/etc/default/dhcp-server</NAME>\t\t\t\t</FILE>\t\t\t</FILES>\t\t\t<DIRECTORIES>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE>drwxr-sr-x 3 root root 20480 Apr 24 2017 /usr/bin</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/usr/bin</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -lrt</REQUEST>\t\t\t\t\t\t\t<RESPONSE>total 4-rw-r--r-- 1 root root 97 Apr 24 2017 cmdline-rw-r--r-- 1 root root 0 May 4 16#14 test.txt</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>touch test.txt</REQUEST>\t\t\t\t\t\t\t<RESPONSE></RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/root</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE>drwxr-xr-x 3 root root 4096 May 4 15#26 /misc/scratch</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/misc/scratch</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE>drwxr-xr-x 7 root root 4096 May 4 16#03 /misc/app_host</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/misc/app_host</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE>drwxr-sr-x 56 root root 4096 May 4 06#29 /etc</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/etc</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE>drwxr-sr-x 27 root root 4096 May 4 06#29 /</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/</NAME>\t\t\t\t</DIRECTORY>\t\t\t</DIRECTORIES>\t\t</INTEGRITY>\t</INTEGRITY-SET></COMPLIANCE-DUMP>vagrant@vagrant#~$ Excellent, how does one read this data?The Basic structure is defined based on the XML schema userfiles/compliance.xsd in the git repo. NOTE# The XML file will only be generated by the apps if the xml content validates successfully against the compliance.xsd file. So if you’re receiveing XML content from the collector app, then you can be rest assured that it is already validated based on the schema.If we deconstruct parts of the XML data, we can see the basic structure starts with the tag and the version of the auditor app (remember we used the `-v` option with the app earlier?) 
as an attribute#<COMPLIANCE-DUMP xmlns#xsi=~http#//www.w3.org/2001/XMLSchema-instance~ version=~1.0.0~ xsi#noNamespaceSchemaLocation=~compliance.xsd~>....The next set of higher level tags are# <GENERAL&gt; <COMPLIANCE-DUMP xmlns#xsi=~http#//www.w3.org/2001/XMLSchema-instance~ version=~1.0.0~ xsi#noNamespaceSchemaLocation=~compliance.xsd~> <GENERAL>\t\t<PRODUCT>XRV-P-L--CH</PRODUCT>\t\t<VENDOR>Cisco</VENDOR>\t\t<IPADDR>11.1.1.10/24</IPADDR>\t\t<HOST>rtr</HOST>\t\t<VERSION>6.1.2</VERSION>\t\t<DATE>20180504-16#14 UTC</DATE>\t\t<OS>IOS-XR</OS>\t</GENERAL>The <GENERAL> data is used to collect relevant information from the router config and oper state in order to uniquely identify the router that produced this XML content. <INTEGRITY-SET&gt;This is the actual compliance/audit data being collected by the apps from the individual Linux shells. It can be seen from the snippets below, that the integrity set consists of three sections identified by the domain which can be XR-LXC, ADMIN-LXC or HOST.Within each domain, there are a list of <FILES> each of which can be subjected to a list of commands along with content, checksum outputs. Further, there is a section called <DIRECTORIES> which is very similar to the <FILES> section and also contains a list of directories with the outputs of a list of commands on each directory.\t<INTEGRITY-SET>\t\t<INTEGRITY domain=~XR-LXC~>\t\t\t<FILES>\t\t\t\t<FILE>\t\t\t\t\t<CONTENT>[~#\\t$OpenBSD# sshd_config,v 1.80 2008/07/02 02#24#18 djm Exp $~, ~# This is the sshd server system-wide configuration file. See~, ~# sshd_config(5) for more information.~, ~# This sshd was compiled with ...... ~UsePrivilegeSeparation no~, ~#PermitUserEnvironment no~, ~Compression no~, ~ClientAliveInterval 15~, ~ClientAliveCountMax 4~, ~UseDNS no~, ~#PidFile /var/run/sshd.pid~, ~#MaxStartups 10~, ~#PermitTunnel no~, ~#ChrootDirectory none~, ~# no default banner path~, ~#Banner none~, ~# override default of no subsystems~, ~Subsystem\\tsftp\\t/usr/lib64/openssh/sftp-server~, ~# Example of overriding settings on a per-user basis~, ~#Match User anoncvs~, ~#\\tX11Forwarding no~, ~#\\tAllowTcpForwarding no~, ~#\\tForceCommand cvs server~]</CONTENT>\t\t\t\t\t<CHECKSUM>97884b5c2cb2b75022c4b440ddc4245a</CHECKSUM>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -la</REQUEST>\t\t\t\t\t\t\t<RESPONSE>-rwxr-xr-x 1 root root 3275 Apr 24 2017 /etc/ssh/sshd_config</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/etc/ssh/sshd_config</NAME>\t\t\t\t</FILE> ..... </FILES> <DIRECTORIES>\t\t\t\t<DIRECTORY>\t\t\t\t\t<CMD-LIST>\t\t\t\t\t\t<CMD>\t\t\t\t\t\t\t<REQUEST>ls -ld</REQUEST>\t\t\t\t\t\t\t<RESPONSE>drwxr-xr-x 3 root root 20480 Apr 24 2017 /usr/bin</RESPONSE>\t\t\t\t\t\t</CMD>\t\t\t\t\t</CMD-LIST>\t\t\t\t\t<NAME>/usr/bin</NAME>\t\t\t\t</DIRECTORY>\t\t\t\t<DIRECTORY> .... \t\t\t\t</DIRECTORY>\t\t\t</DIRECTORIES>\t\t</INTEGRITY>\t\t<INTEGRITY domain=~ADMIN-LXC~>\t\t\t<FILES>\t\t\t\t<FILE> .... \t\t\t</DIRECTORIES>\t\t</INTEGRITY>\t\t<INTEGRITY domain=~HOST~>\t\t\t<FILES>\t\t\t\t<FILE>\t\t     So, where are the commands and the list of files and directories defined?? This is part of the userfiles/auditor.cfg.yml file as well. Jump to the COMPLIANCE_CONFIG section and you will see a YAML specification as shown below# .   
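As a side note, if you are writing your own server-side tooling to consume these XML dumps, a few lines of Python are enough to pull out the fields of interest. The snippet below is only an illustrative sketch based on the structure shown above (the filename and the choice of printed fields are assumptions for the example, not part of the xr-auditor code itself); it uses the standard-library ElementTree module to print the device details from the <GENERAL> section and the per-domain file checksums from the <INTEGRITY-SET>#
import xml.etree.ElementTree as ET

def summarize(xml_path):
    # Parse a compliance dump that the collector app has already validated and shipped to the server
    tree = ET.parse(xml_path)
    root = tree.getroot()                            # <COMPLIANCE-DUMP>
    print("Auditor version:", root.get("version"))   # the version attribute seen with the -v option

    general = root.find("GENERAL")
    print("Host:", general.findtext("HOST"),
          "| XR version:", general.findtext("VERSION"),
          "| Collected:", general.findtext("DATE"))

    for integrity in root.find("INTEGRITY-SET").findall("INTEGRITY"):
        print("\nDomain:", integrity.get("domain"))  # XR-LXC, ADMIN-LXC or HOST
        for f in integrity.findall("./FILES/FILE"):
            # In a real deployment these checksums would be compared against a known-good baseline
            print("  {:40s} {}".format(f.findtext("NAME"), f.findtext("CHECKSUM")))

if __name__ == "__main__":
    # Hypothetical filename matching the pattern produced by the collector app
    summarize("compliance_audit_rtr_11_11_11_42.xml")
A real consumer would obviously diff these checksums and file contents against a golden baseline and raise an alert on drift, rather than just printing them.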
Uninstall the appTo uninstall everything that the auditor app installs and to return back to the clean original state, use -u option.To clean up the generated XML files along with the apps and cronjobs, add the `-c’ option to the command#RP/0/RP0/CPU0#rtr#bashFri May 4 16#43#25.437 UTC[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$/misc/scratch/auditor -u -c2018-05-04 16#43#39,234 - DebugZTPLogger - INFO - Using root-lr user specified in auditor.cfg.yml, Username# vagrant2018-05-04 16#43#40,388 - DebugZTPLogger - INFO - Successfully removed xr audit app from XR LXC# audit_xr.bin2018-05-04 16#43#40,389 - DebugZTPLogger - INFO - Successfully cleaned up XR audit cron jobs2018-05-04 16#43#42,714 - DebugZTPLogger - INFO - Successfully removed audit app from Admin LXC# audit_admin.bin2018-05-04 16#43#43,868 - DebugZTPLogger - INFO - Successfully cleaned up admin audit cron jobs2018-05-04 16#43#47,888 - DebugZTPLogger - INFO - Successfully removed audit app from HOST# audit_host.bin2018-05-04 16#43#49,271 - DebugZTPLogger - INFO - Successfully cleaned up host audit cron jobs2018-05-04 16#43#50,388 - DebugZTPLogger - INFO - Successfully removed Collector audit app from XR LXC# collector.bin2018-05-04 16#43#50,388 - DebugZTPLogger - INFO - Successfully cleaned up collector audit cron jobs2018-05-04 16#43#50,388 - DebugZTPLogger - INFO - Starting cleanup of accumulated xml files as requested on Active-RP2018-05-04 16#44#20,471 - DebugZTPLogger - INFO - Cleaned up xml files on Active-RP XR LXC2018-05-04 16#44#26,199 - DebugZTPLogger - INFO - Cleaned up xml files on Active-RP Admin LXC2018-05-04 16#44#26,200 - DebugZTPLogger - INFO - Successfully uninstalled artifacts, IOS-XR Linux auditing is now OFF[xr-vm_node0_RP0_CPU0#~]$You can issue a /misc/scratch/auditor -l again to check that all the relevant directories got cleaned up.Verbose DebuggingAll the options support verbose debugging, use the -d flag if you’d like to peak into what’s happening behind the scenes when the auditor app installs or uninstalls individual apps and cron jobs.For example, if we use the -d flag during the installation process, we get#[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$/misc/scratch/auditor -i -d2018-05-04 16#49#08,511 - DebugZTPLogger - INFO - Using root-lr user specified in auditor.cfg.yml, Username# vagrant2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/xr2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/userfiles2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/lib2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/include2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/host2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/collector2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/admin2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/termios.so2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/resource.so2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/readline.so2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/pyexpat.so2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/lxml.etree.so2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/libz.so.12018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/libz-a147dcb0.so.1.2.32018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/libtinfo.so.52018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - 
/tmp/_MEIMqQ1ge/libssl.so.1.0.02018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/libreadline.so.62018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/libpython2.7.so.1.02018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/libffi.so.62018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/libexpat.so.12018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/libcrypto.so.1.0.02018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/libbz2.so.1.02018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/bz2.so2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/_ssl.so2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/_multibytecodec.so2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/_json.so2018-05-04 16#49#08,512 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/_hashlib.so2018-05-04 16#49#08,513 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/_ctypes.so2018-05-04 16#49#08,513 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/_codecs_tw.so2018-05-04 16#49#08,513 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/_codecs_kr.so2018-05-04 16#49#08,513 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/_codecs_jp.so2018-05-04 16#49#08,513 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/_codecs_iso2022.so2018-05-04 16#49#08,513 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/_codecs_hk.so2018-05-04 16#49#08,513 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/_codecs_cn.so2018-05-04 16#49#08,513 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/xr/audit_xr.cron2018-05-04 16#49#08,513 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/xr/audit_xr.bin2018-05-04 16#49#08,513 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/userfiles/id_rsa_server2018-05-04 16#49#08,514 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/userfiles/compliance.xsd2018-05-04 16#49#08,514 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/userfiles/auditor.cfg.yml2018-05-04 16#49#08,514 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/lib/python2.72018-05-04 16#49#08,514 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/lib/python2.7/config-x86_64-linux-gnu2018-05-04 16#49#08,514 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/lib/python2.7/config-x86_64-linux-gnu/Makefile2018-05-04 16#49#08,514 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/include/python2.72018-05-04 16#49#08,514 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/include/python2.7/pyconfig.h2018-05-04 16#49#08,514 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/host/audit_host.cron2018-05-04 16#49#08,514 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/host/audit_host.bin2018-05-04 16#49#08,514 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/collector/collector.cron2018-05-04 16#49#08,514 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/collector/collector.bin2018-05-04 16#49#08,514 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/admin/audit_admin.cron2018-05-04 16#49#08,514 - DebugZTPLogger - DEBUG - /tmp/_MEIMqQ1ge/admin/audit_admin.bin2018-05-04 16#49#08,514 - DebugZTPLogger - DEBUG - bash cmd being run# ls /misc/scratch/2018-05-04 16#49#08,521 - DebugZTPLogger - DEBUG - output# auditorauditor_collated_logs.tar.gzclihistoryconfigcorecryptoid_rsanvgen_tracesstatus_filetpa.logztp2018-05-04 16#49#08,521 - DebugZTPLogger - DEBUG - error# 2018-05-04 16#49#08,529 - DebugZTPLogger - INFO - XR LXC audit app successfully copied2018-05-04 16#49#08,531 - DebugZTPLogger - DEBUG - bash cmd being run# chmod 0644 /etc/cron.d/audit_cron_xr_2018-05-04_16-49-082018-05-04 16#49#08,537 - DebugZTPLogger - DEBUG - output# 2018-05-04 16#49#08,537 - DebugZTPLogger - DEBUG - error# 
2018-05-04 16#49#08,537 - DebugZTPLogger - INFO - XR LXC audit cron job successfully set up2018-05-04 16#49#08,537 - DebugZTPLogger - DEBUG - Received bash cmd# ls /misc/scratch to run in shell of active RP's admin LXC2018-05-04 16#49#08,537 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run ssh root@192.0.0.1 ls /misc/scratch~2018-05-04 16#49#09,093 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run ssh root@192.0.0.1 ls /misc/scratch', 'Fri May 4 16#49#08.992 UTC', 'calvados_log_aaad_0_0.out', 'calvados_log_confd_helper_0_0.out', 'calvados_log_instagt_log_0_0.out', 'calvados_log_tacacsd_0_0.out', 'calvados_log_vmm_0_0.out', 'card_specific_install', 'core', 'shelf_mgr_pds', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#09,094 - DebugZTPLogger - DEBUG - Inside active_adminscp2018-05-04 16#49#09,094 - DebugZTPLogger - DEBUG - Received scp request to transfer file from XR LXC to admin LXC2018-05-04 16#49#09,094 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run scp root@192.0.0.4#/tmp/_MEIMqQ1ge/./admin/audit_admin.bin /misc/scratch/audit_aadscp_audit_admin.bin~2018-05-04 16#49#09,727 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run scp root@192.0.0.4#/tmp/_MEIMqQ1ge/./admin/audit_admin.bin /misc/scratch/audit_aadscp_audit_admin.bin', 'Fri May 4 16#49#09.545 UTC', 'audit_admin.bin 0% 0 0.0KB/s --#-- ETA', 'audit_admin.bin 100% 6882KB 6.7MB/s 00#00', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#09,727 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run scp /misc/scratch/audit_aadscp_audit_admin.bin root@192.0.0.1#/misc/scratch/audit_admin.bin~2018-05-04 16#49#10,379 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run scp /misc/scratch/audit_aadscp_audit_admin.bin root@192.0.0.1#/misc/scratch/audit_admin.bin', 'Fri May 4 16#49#10.200 UTC', 'audit_aadscp_audit_admin.bin 0% 0 0.0KB/s --#-- ETA', 'audit_aadscp_audit_admin.bin 100% 6882KB 6.7MB/s 00#00', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#10,380 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run rm -f /misc/scratch/audit_aadscp_audit_admin.bin~2018-05-04 16#49#11,056 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run rm -f /misc/scratch/audit_aadscp_audit_admin.bin', 'Fri May 4 16#49#10.991 UTC', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#11,056 - DebugZTPLogger - INFO - Admin LXC audit app successfully copied2018-05-04 16#49#11,057 - DebugZTPLogger - DEBUG - ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run scp /misc/scratch/audit_aadscp_audit_admin.bin root@192.0.0.1#/misc/scratch/audit_admin.bin', 'Fri May 4 16#49#10.200 UTC', 'audit_aadscp_audit_admin.bin 0% 0 0.0KB/s --#-- ETA', 'audit_aadscp_audit_admin.bin 100% 6882KB 6.7MB/s 00#00', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#11,057 - DebugZTPLogger - DEBUG - Received bash cmd# rm -f /etc/cron.d/audit_cron_admin_* to run in shell of active RP's admin LXC2018-05-04 16#49#11,057 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run ssh root@192.0.0.1 rm -f /etc/cron.d/audit_cron_admin_*~2018-05-04 16#49#11,624 - DebugZTPLogger - DEBUG - Exec command 
output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run ssh root@192.0.0.1 rm -f /etc/cron.d/audit_cron_admin_*', 'Fri May 4 16#49#11.509 UTC', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#11,625 - DebugZTPLogger - DEBUG - Received bash cmd# ls /etc/cron.d/ to run in shell of active RP's admin LXC2018-05-04 16#49#11,625 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run ssh root@192.0.0.1 ls /etc/cron.d/~2018-05-04 16#49#12,199 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run ssh root@192.0.0.1 ls /etc/cron.d/', 'Fri May 4 16#49#12.950 UTC', 'logrotate.conf', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#12,200 - DebugZTPLogger - DEBUG - bash cmd being run# chmod 0644 /misc/app_host/audit_cron_admin_2018-05-04_16-49-112018-05-04 16#49#12,205 - DebugZTPLogger - DEBUG - output# 2018-05-04 16#49#12,205 - DebugZTPLogger - DEBUG - error# 2018-05-04 16#49#12,205 - DebugZTPLogger - DEBUG - Inside active_adminscp2018-05-04 16#49#12,205 - DebugZTPLogger - DEBUG - Received scp request to transfer file from XR LXC to admin LXC2018-05-04 16#49#12,205 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run scp root@192.0.0.4#/misc/app_host/audit_cron_admin_2018-05-04_16-49-11 /misc/scratch/audit_aadscp_audit_cron_admin_2018-05-04_16-49-11~2018-05-04 16#49#12,822 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run scp root@192.0.0.4#/misc/app_host/audit_cron_admin_2018-05-04_16-49-11 /misc/scratch/audit_aadscp_audit_cron_admin_2018-05-04_16-49-11', 'Fri May 4 16#49#12.660 UTC', 'audit_cron_admin_2018-05-04_16-49-11 0% 0 0.0KB/s --#-- ETA', 'audit_cron_admin_2018-05-04_16-49-11 100% 89 0.1KB/s 00#00', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#12,822 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run scp /misc/scratch/audit_aadscp_audit_cron_admin_2018-05-04_16-49-11 root@192.0.0.1#/etc/cron.d/audit_cron_admin_2018-05-04_16-49-11~2018-05-04 16#49#13,381 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run scp /misc/scratch/audit_aadscp_audit_cron_admin_2018-05-04_16-49-11 root@192.0.0.1#/etc/cron.d/audit_cron_admin_2018-05-04_16-49-11', 'Fri May 4 16#49#13.275 UTC', 'audit_aadscp_audit_cron_admin_2018-05-04_16-4 0% 0 0.0KB/s --#-- ETA', 'audit_aadscp_audit_cron_admin_2018-05-04_16-4 100% 89 0.1KB/s 00#00', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#13,381 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run rm -f /misc/scratch/audit_aadscp_audit_cron_admin_2018-05-04_16-49-11~2018-05-04 16#49#13,935 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run rm -f /misc/scratch/audit_aadscp_audit_cron_admin_2018-05-04_16-49-11', 'Fri May 4 16#49#13.887 UTC', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#13,935 - DebugZTPLogger - INFO - Admin LXC audit cron file successfully copied and activated2018-05-04 16#49#13,935 - DebugZTPLogger - DEBUG - ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run scp /misc/scratch/audit_aadscp_audit_cron_admin_2018-05-04_16-49-11 root@192.0.0.1#/etc/cron.d/audit_cron_admin_2018-05-04_16-49-11', 'Fri May 4 
16#49#13.275 UTC', 'audit_aadscp_audit_cron_admin_2018-05-04_16-4 0% 0 0.0KB/s --#-- ETA', 'audit_aadscp_audit_cron_admin_2018-05-04_16-4 100% 89 0.1KB/s 00#00', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#13,935 - DebugZTPLogger - DEBUG - Received host command request# ~ls /misc/scratch~2018-05-04 16#49#13,936 - DebugZTPLogger - DEBUG - Received bash cmd# ssh root@10.0.2.16 ls /misc/scratch to run in shell of active RP's admin LXC2018-05-04 16#49#13,936 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run ssh root@192.0.0.1 ssh root@10.0.2.16 ls /misc/scratch~2018-05-04 16#49#14,610 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run ssh root@192.0.0.1 ssh root@10.0.2.16 ls /misc/scratch', 'Fri May 4 16#49#14.390 UTC', 'core', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#14,610 - DebugZTPLogger - DEBUG - Received scp request to transfer file from XR LXC to host shell2018-05-04 16#49#14,611 - DebugZTPLogger - DEBUG - Inside active_adminscp2018-05-04 16#49#14,611 - DebugZTPLogger - DEBUG - Received scp request to transfer file from XR LXC to admin LXC2018-05-04 16#49#14,611 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run scp root@192.0.0.4#/tmp/_MEIMqQ1ge/./host//audit_host.bin /misc/scratch/audit_aadscp_audit_host.bin~2018-05-04 16#49#15,388 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run scp root@192.0.0.4#/tmp/_MEIMqQ1ge/./host//audit_host.bin /misc/scratch/audit_aadscp_audit_host.bin', 'Fri May 4 16#49#15.990 UTC', 'audit_host.bin 0% 0 0.0KB/s --#-- ETA', 'audit_host.bin 100% 6881KB 6.7MB/s 00#00', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#15,389 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run scp /misc/scratch/audit_aadscp_audit_host.bin root@192.0.0.1#/misc/scratch/audit_ahscp_audit_host.bin~2018-05-04 16#49#16,006 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run scp /misc/scratch/audit_aadscp_audit_host.bin root@192.0.0.1#/misc/scratch/audit_ahscp_audit_host.bin', 'Fri May 4 16#49#15.846 UTC', 'audit_aadscp_audit_host.bin 0% 0 0.0KB/s --#-- ETA', 'audit_aadscp_audit_host.bin 100% 6881KB 6.7MB/s 00#00', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#16,006 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run rm -f /misc/scratch/audit_aadscp_audit_host.bin~2018-05-04 16#49#16,596 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run rm -f /misc/scratch/audit_aadscp_audit_host.bin', 'Fri May 4 16#49#16.538 UTC', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#16,596 - DebugZTPLogger - DEBUG - Received bash cmd# scp /misc/scratch/audit_ahscp_audit_host.bin root@10.0.2.16#/misc/scratch/audit_host.bin to run in shell of active RP's admin LXC2018-05-04 16#49#16,597 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run ssh root@192.0.0.1 scp /misc/scratch/audit_ahscp_audit_host.bin root@10.0.2.16#/misc/scratch/audit_host.bin~2018-05-04 16#49#17,321 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run ssh root@192.0.0.1 scp /misc/scratch/audit_ahscp_audit_host.bin root@10.0.2.16#/misc/scratch/audit_host.bin', 'Fri 
May 4 16#49#17.810 UTC', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#17,321 - DebugZTPLogger - DEBUG - Received bash cmd# rm -f /misc/scratch/audit_ahscp_audit_host.bin to run in shell of active RP's admin LXC2018-05-04 16#49#17,322 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run ssh root@192.0.0.1 rm -f /misc/scratch/audit_ahscp_audit_host.bin~2018-05-04 16#49#17,949 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run ssh root@192.0.0.1 rm -f /misc/scratch/audit_ahscp_audit_host.bin', 'Fri May 4 16#49#17.841 UTC', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#17,949 - DebugZTPLogger - INFO - HOST audit app successfully copied2018-05-04 16#49#17,949 - DebugZTPLogger - DEBUG - ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run ssh root@192.0.0.1 scp /misc/scratch/audit_ahscp_audit_host.bin root@10.0.2.16#/misc/scratch/audit_host.bin', 'Fri May 4 16#49#17.810 UTC', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#17,949 - DebugZTPLogger - DEBUG - Received host command request# ~rm -f /etc/cron.d/audit_cron_host_*~2018-05-04 16#49#17,949 - DebugZTPLogger - DEBUG - Received bash cmd# ssh root@10.0.2.16 rm -f /etc/cron.d/audit_cron_host_* to run in shell of active RP's admin LXC2018-05-04 16#49#17,949 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run ssh root@192.0.0.1 ssh root@10.0.2.16 rm -f /etc/cron.d/audit_cron_host_*~2018-05-04 16#49#18,628 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run ssh root@192.0.0.1 ssh root@10.0.2.16 rm -f /etc/cron.d/audit_cron_host_*', 'Fri May 4 16#49#18.414 UTC', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#18,628 - DebugZTPLogger - DEBUG - Received host command request# ~ls /etc/cron.d/~2018-05-04 16#49#18,628 - DebugZTPLogger - DEBUG - Received bash cmd# ssh root@10.0.2.16 ls /etc/cron.d/ to run in shell of active RP's admin LXC2018-05-04 16#49#18,629 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run ssh root@192.0.0.1 ssh root@10.0.2.16 ls /etc/cron.d/~2018-05-04 16#49#19,290 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run ssh root@192.0.0.1 ssh root@10.0.2.16 ls /etc/cron.d/', 'Fri May 4 16#49#19.900 UTC', 'logrotate.conf', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#19,290 - DebugZTPLogger - DEBUG - bash cmd being run# chmod 0644 /misc/app_host/audit_cron_host_2018-05-04_16-49-172018-05-04 16#49#19,295 - DebugZTPLogger - DEBUG - output# 2018-05-04 16#49#19,295 - DebugZTPLogger - DEBUG - error# 2018-05-04 16#49#19,295 - DebugZTPLogger - DEBUG - Received scp request to transfer file from XR LXC to host shell2018-05-04 16#49#19,295 - DebugZTPLogger - DEBUG - Inside active_adminscp2018-05-04 16#49#19,295 - DebugZTPLogger - DEBUG - Received scp request to transfer file from XR LXC to admin LXC2018-05-04 16#49#19,295 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run scp root@192.0.0.4#/misc/app_host/audit_cron_host_2018-05-04_16-49-17 /misc/scratch/audit_aadscp_audit_cron_host_2018-05-04_16-49-17~2018-05-04 16#49#19,902 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run scp root@192.0.0.4#/misc/app_host/audit_cron_host_2018-05-04_16-49-17 
/misc/scratch/audit_aadscp_audit_cron_host_2018-05-04_16-49-17', 'Fri May 4 16#49#19.753 UTC', 'audit_cron_host_2018-05-04_16-49-17 0% 0 0.0KB/s --#-- ETA', 'audit_cron_host_2018-05-04_16-49-17 100% 88 0.1KB/s 00#00', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#19,903 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run scp /misc/scratch/audit_aadscp_audit_cron_host_2018-05-04_16-49-17 root@192.0.0.1#/misc/scratch/audit_ahscp_audit_cron_host_2018-05-04_16-49-17~2018-05-04 16#49#20,500 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run scp /misc/scratch/audit_aadscp_audit_cron_host_2018-05-04_16-49-17 root@192.0.0.1#/misc/scratch/audit_ahscp_audit_cron_host_2018-05-04_16-49-17', 'Fri May 4 16#49#20.389 UTC', 'audit_aadscp_audit_cron_host_2018-05-04_16-49 0% 0 0.0KB/s --#-- ETA', 'audit_aadscp_audit_cron_host_2018-05-04_16-49 100% 88 0.1KB/s 00#00', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#20,500 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run rm -f /misc/scratch/audit_aadscp_audit_cron_host_2018-05-04_16-49-17~2018-05-04 16#49#21,032 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run rm -f /misc/scratch/audit_aadscp_audit_cron_host_2018-05-04_16-49-17', 'Fri May 4 16#49#20.991 UTC', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#21,033 - DebugZTPLogger - DEBUG - Received bash cmd# scp /misc/scratch/audit_ahscp_audit_cron_host_2018-05-04_16-49-17 root@10.0.2.16#/etc/cron.d/audit_cron_host_2018-05-04_16-49-17 to run in shell of active RP's admin LXC2018-05-04 16#49#21,033 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run ssh root@192.0.0.1 scp /misc/scratch/audit_ahscp_audit_cron_host_2018-05-04_16-49-17 root@10.0.2.16#/etc/cron.d/audit_cron_host_2018-05-04_16-49-17~2018-05-04 16#49#21,693 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run ssh root@192.0.0.1 scp /misc/scratch/audit_ahscp_audit_cron_host_2018-05-04_16-49-17 root@10.0.2.16#/etc/cron.d/audit_cron_host_2018-05-04_16-49-17', 'Fri May 4 16#49#21.488 UTC', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#21,694 - DebugZTPLogger - DEBUG - Received bash cmd# rm -f /misc/scratch/audit_ahscp_audit_cron_host_2018-05-04_16-49-17 to run in shell of active RP's admin LXC2018-05-04 16#49#21,694 - DebugZTPLogger - DEBUG - Received admin exec command request# ~run ssh root@192.0.0.1 rm -f /misc/scratch/audit_ahscp_audit_cron_host_2018-05-04_16-49-17~2018-05-04 16#49#22,283 - DebugZTPLogger - DEBUG - Exec command output is ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run ssh root@192.0.0.1 rm -f /misc/scratch/audit_ahscp_audit_cron_host_2018-05-04_16-49-17', 'Fri May 4 16#49#22.174 UTC', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#22,283 - DebugZTPLogger - INFO - Host audit cron file successfully copied and activated2018-05-04 16#49#22,284 - DebugZTPLogger - DEBUG - ['vagrant connected from 127.0.0.1 using console on xr-vm_node0_RP0_CPU0', '\\x1b[?7hsysadmin-vm#0_RP0# run ssh root@192.0.0.1 scp /misc/scratch/audit_ahscp_audit_cron_host_2018-05-04_16-49-17 root@10.0.2.16#/etc/cron.d/audit_cron_host_2018-05-04_16-49-17', 'Fri May 4 16#49#21.488 UTC', 'sysadmin-vm#0_RP0#']2018-05-04 16#49#22,285 - DebugZTPLogger - DEBUG - bash cmd being run# ls 
/misc/scratch2018-05-04 16#49#22,304 - DebugZTPLogger - DEBUG - output# audit_xr.binauditorauditor_collated_logs.tar.gzclihistoryconfigcorecryptoid_rsanvgen_tracesstatus_filetpa.logztp2018-05-04 16#49#22,304 - DebugZTPLogger - DEBUG - error# 2018-05-04 16#49#22,312 - DebugZTPLogger - INFO - Collector app successfully copied2018-05-04 16#49#22,313 - DebugZTPLogger - DEBUG - bash cmd being run# chmod 0644 /etc/cron.d/audit_cron_collector_2018-05-04_16-49-222018-05-04 16#49#22,318 - DebugZTPLogger - DEBUG - output# 2018-05-04 16#49#22,318 - DebugZTPLogger - DEBUG - error# 2018-05-04 16#49#22,319 - DebugZTPLogger - INFO - Collector cron job successfully set up in XR LXC2018-05-04 16#49#22,319 - DebugZTPLogger - INFO - Successfully set up artifacts, IOS-XR Linux auditing is now ON[xr-vm_node0_RP0_CPU0#~]$Troubleshooting# Gathering logsIn case something goes wrong and a particular app or cron job does not behave properly, it is advisable to collect logs from all the domains into a single tar ball.This is made easy by the -o <tarfile_output_dir> option which allows a user to quickly gather the logs from all domains (Active and Standby RP) and create a tarfile called auditor_collated_logs.tar.gz for you.[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$/misc/scratch/auditor -o /misc/scratch/2018-05-04 16#53#34,073 - DebugZTPLogger - INFO - Using root-lr user specified in auditor.cfg.yml, Username# vagrant2018-05-04 16#53#34,079 - DebugZTPLogger - INFO - Successfully saved audit logs for Active XR LXC to /misc/scratch/auditor_collected_logs/ACTIVE-XR-LXC.audit.log2018-05-04 16#53#35,906 - DebugZTPLogger - INFO - Successfully copied audit logs from Active Admin LXC to Active XR LXC at /misc/scratch/auditor_collected_logs/ACTIVE-ADMIN-LXC.audit.log2018-05-04 16#53#38,967 - DebugZTPLogger - INFO - Successfully copied audit logs from Active HOST to Active XR LXC at /misc/scratch/auditor_collected_logs/ACTIVE-HOST.audit.log2018-05-04 16#53#39,001 - DebugZTPLogger - INFO - Audit logs tarfile created at# /misc/scratch//auditor_collated_logs.tar.gz[xr-vm_node0_RP0_CPU0#~]$The log tar ball will then be available to copy from the router directory to another location for inspection and troubleshooting.[xr-vm_node0_RP0_CPU0#~]$ls -lrt /misc/scratch/auditor_collated_logs.tar.gz -rw-r--r-- 1 root root 28019 May 4 16#53 /misc/scratch/auditor_collated_logs.tar.gz[xr-vm_node0_RP0_CPU0#~]$Support for Active/Standby RP systemsAs mentioned earlier, the app supports active/standby systems as well. 
To demonstate see the outputs from an NCS5508 device with an active/standby RP to see how the application installs components on both the RPs#NCS-5500 router running IOS-XR 6.1.31#RP/0/RP0/CPU0#rtr#show version Sat May 5 00#43#20.533 UTCCisco IOS XR Software, Version 6.1.31Copyright (c) 2013-2016 by Cisco Systems, Inc.Build Information# Built By # radharan Built On # Wed May 24 02#15#20 PDT 2017 Build Host # iox-lnx-049 Workspace # /san2/production/6.1.31/ncs5500/workspace Version # 6.1.31 Location # /opt/cisco/XR/packages/cisco NCS-5500 () processor System uptime is 2 days, 17 hours, 47 minutesRP/0/RP0/CPU0#rtr#Active and Standby RPs are present and in the High-Availability State#RP/0/RP0/CPU0#rtr#show redundancy summary Sat May 5 00#43#28.737 UTC Active Node Standby Node ----------- ------------ 0/RP0/CPU0 0/RP1/CPU0 (Node Ready, NSR#Not Configured)RP/0/RP0/CPU0#rtr#RP/0/RP0/CPU0#rtr#Follow the same process as shown earlier# Set up userfiles/auditor.cfg.yml appropriately with ROUTER_CONFIG and SERVER_CONFIG sections and transfer the app to the router. In this case, I have set up the auditor.cfg.yml ROUTER_CONFIG such that the OUTGOING_INTERFACE is unset. This forces the app to use the mgmt port ip by default. I do this so that when I force a switchover to happen, it becomes easy to distinguish on the server that is receiving the XML data that there is switch from the active to standby based on the name of the compliance file generated.When you run the installation of the app, you will see logs indicating that components were installed on the standby RP as well#[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$/misc/scratch/auditor -i2018-05-05 00#45#58,971 - DebugZTPLogger - INFO - Using root-lr user specified in auditor.cfg.yml, Username# vagrant2018-05-05 00#46#00,588 - DebugZTPLogger - INFO - XR LXC audit app successfully copied2018-05-05 00#46#00,595 - DebugZTPLogger - INFO - XR LXC audit cron job successfully set up2018-05-05 00#46#14,238 - DebugZTPLogger - INFO - Admin LXC audit app successfully copied2018-05-05 00#46#23,931 - DebugZTPLogger - INFO - Admin LXC audit cron file successfully copied and activated2018-05-05 00#46#39,428 - DebugZTPLogger - INFO - HOST audit app successfully copied2018-05-05 00#46#52,978 - DebugZTPLogger - INFO - Host audit cron file successfully copied and activated2018-05-05 00#46#54,534 - DebugZTPLogger - INFO - Collector app successfully copied2018-05-05 00#46#54,540 - DebugZTPLogger - INFO - Collector cron job successfully set up in XR LXC2018-05-05 00#46#54,964 - DebugZTPLogger - INFO - Standby XR LXC auditor app successfully copied2018-05-05 00#46#56,634 - DebugZTPLogger - INFO - Standby XR LXC audit app successfully copied2018-05-05 00#46#56,958 - DebugZTPLogger - INFO - Standby XR LXC audit cron file successfully copied and activated2018-05-05 00#47#09,404 - DebugZTPLogger - INFO - Standby Admin LXC audit app successfully copied2018-05-05 00#47#20,155 - DebugZTPLogger - INFO - Standby Admin LXC audit cron file successfully copied and activated2018-05-05 00#47#36,755 - DebugZTPLogger - INFO - Standby HOST audit app successfully copied2018-05-05 00#47#50,351 - DebugZTPLogger - INFO - Standby host audit cron file successfully copied and activated2018-05-05 00#47#52,011 - DebugZTPLogger - INFO - Standby XR LXC Collector app successfully copied2018-05-05 00#47#52,335 - DebugZTPLogger - INFO - Standby XR LXC collector cron file successfully copied and activated2018-05-05 00#47#52,336 - DebugZTPLogger - INFO - Successfully set up artifacts, IOS-XR Linux 
auditing is now ON[xr-vm_node0_RP0_CPU0#~]$On my connected server, I see the compliance file appear just like the vagrant scenario#cisco@dhcpserver#~$ cisco@dhcpserver#~$ ls -lrt ~/compliance_audit*-rw-rw-r-- 1 cisco cisco 24629 May 4 00#56 /home/cisco/compliance_audit_rtr_11_11_11_42.xmlcisco@dhcpserver#~$ Now let’s force a switchover to happen on the router#RP/0/RP0/CPU0#rtr#redundancy switchover Sat May 5 00#53#38.742 UTCProceed with switchover 0/RP0/CPU0 -> 0/RP1/CPU0? [confirm]RP/0/RP1/CPU0#May 5 00#53#39.486 # rmf_svr[328]# %HA-REDCON-4-FAILOVER_REQUESTED # failover has been requested by operator, waiting to initiate Initiating switch-over.RP/0/RP0/CPU0#rtr#[00#53#44.034] Sending KILL signal to ds..[00#53#44.034] Sending KILL signal to processmgr..PM disconnect successStopping OpenBSD Secure Shell server# sshdinitctl# Unknown instance# Stopping system message bus# dbus.Libvirt not initialized for container instanceStopping system log daemon...0Stopping internet superserver# xinetd.Now let’s wait on the server for about 2-3 minutes, and we should see a new compliance file show up#cisco@dhcpserver#~$ cisco@dhcpserver#~$ cisco@dhcpserver#~$ ls -lrt ~/compliance_audit*-rw-rw-r-- 1 cisco cisco 24629 May 4 00#57 /home/cisco/compliance_audit_rtr_11_11_11_42.xmlcisco@dhcpserver#~$ cisco@dhcpserver#~$ cisco@dhcpserver#~$ ls -lrt ~/compliance_audit*-rw-rw-r-- 1 cisco cisco 24629 May 4 00#57 /home/cisco/compliance_audit_rtr_11_11_11_42.xml-rw-rw-r-- 1 cisco cisco 24373 May 4 00#59 /home/cisco/compliance_audit_rtr_11_11_11_41.xmlcisco@dhcpserver#~$ Perfect! Within 2 minutes, we have the auditor apps on the standby RP sending us the required compliance data!", "url": "/blogs/2018-05-01-anatomy-of-a-network-app-xr-auditor/", "author": "Akshat Sharma", "tags": "vagrant, iosxr, cisco, linux, security, audit, cron, python, pyinstaller, application" } , "blogs-2019-08-19-application-hosting-and-packet-io-on-ios-xr-a-deep-dive": { "title": "Application-hosting and Packet-IO on IOS-XR : A Deep Dive", "content": " On This Page Application-hosting and Packet-IO on IOS-XR 6.X Prerequisites Reserve the IOS-XR Programmability sandbox Understanding App-Hosting on IOS-XR Breaking down the IOS-XR Software Architecture Types of Linux Applications on IOS-XR Understanding Linux Pkt/IO on IOS-XR Native Applications on IOS-XR Container applications on IOS-XR LXC Containers Docker Containers Pull the ubuntu_iproute2 docker image Application-hosting and Packet-IO on IOS-XR 6.XFor readers looking to get hands-on with application-hosting on IOS-XR - bringing up native, lxc or docker applications on IOS-XR plaforms - check out the xr-toolbox series. 
The goal of this series is to get an uninitiated user up and running with simple Linux applications on IOS-XR in a step-by-step manner, enabling them to bring and integrate their own tools and applications.While the existing tutorials describe the operational aspects of application-hosting, it is often necessary to elucidate the architecture behind these capabilities so that users can better troubleshoot their applications and plan the resources their applications should acquire post deployment.We delve into these matters and more in this blog, providing an in-depth overview of all the elements of the application-hosting and packet-io architecture that enables Linux applications to run on IOS-XR, route packets to-and-fro and interact with APIs on-box.PrerequisitesWe intend to dive into the specifics of the app-hosting infrastructure on IOS-XR using the on-demand IOS-XR programmability Sandbox on Devnet. This sandbox is easily reservable and allows the reader to walk through the steps in the blog. You can also choose to skip the reservation and simply read through the steps to gain a better understanding of the app-hosting and packet-io architecture on IOS-XR.Reserve the IOS-XR Programmability sandboxTake some time to reserve and familiarize yourself with the IOS-XR programmability Sandbox on Devnet.Getting started is pretty straightforward - once you hit the above URL, click on the Reserve button on the top right#As part of the reservation, select the duration for which you’d like to reserve the sandbox (maximum duration = 1 week).To view the dropdown menu with variable reservation options, hit the edit button (pencil icon) next to schedule. Once your reservation is active, you can keep extending the duration if you start running out of time (with the maximum limit set to a total time of 1 week).Once reserved, expect an initial email (associated with your login) indicating that your sandbox environment is being set up. 
Within about 10 minutes, the entire sandbox environment should be ready and you’ll get another email detailing the Anyconnect server and credential information you need to connect to the same network as the sandbox environment.These instructions and more are detailed here# Reserving and Connecting to a Devnet Sandbox.Connect to the SandboxOnce you’re connected to the Anyconnect server#You should be able to ping the address# 10.10.20.170 which represents the External NAT address of the virtualization host on which the IOS-XRv9000 instances and the development environment (devbox) are running.More details can be found at the IOS-XR programmability Sandbox link.The topology that you will have access to is shown below#You have SSH access to each of the virtual machines - the two IOS-XRv9000 instances (r1 and r2) and the devbox (an Ubuntu 16.04 instance for access to a development environment).Further, some special ports (Netconf port# 830, gRPC port# 57777, and XR-Bash-SSH port# 57722) for each of the routers r1 and r2 have been uniquely forwarded to the external NAT IP# 10.10.20.170 as shown in the figure above.To be clear, the connection details are listed below# Developer Box (devbox)   IP 10.10.20.170 (Post VPN connection) SSH Port 2211 Username admin Password admin IOS-XRv9000 R1   IP 10.10.20.170 (Post VPN connection) XR-SSH Port 2221 NETCONF Port 8321 gRPC Port 57021 XR-Bash SSH Port 2222 Username admin Password admin IOS-XRv9000 R2   IP 10.10.20.170 (Post VPN connection) XR-SSH Port 2231 NETCONF Port 8331 gRPC Port 57031 XR-Bash SSH Port 2232 Username admin Password admin Connect to the nodes in the TopologyTo connect to the nodes in the topology, you have 3 options#Browser Based#If you don’t have an SSH Client or Terminal available on your Laptop/Machine that you’re using to walk through this lab, then use the UI that Devnet Sandbox provides to connect to the instances within your browser (Chrome or Firefox).Just hover over a node in the topology and hit SSH from the dropdown menu. This is shown below for the devbox#Pro Tip# This browser based session uses Guacamole on the server side to serve up the SSH connection. If you’d like to enable easy copy-paste from your laptop/machine into the session in the browser, then use Chrome as your browser and install the following plugin. Once installed, then within the browser tab that has the SSH session open, enable clipboard copying by clicking the plugin icon on the top right and allowing clipboard permissions for the particular host/IP as shown below# SSH CLient#If you have a 3rd party SSH client, use the SSH ports as described in the previous section to connect to the node of your choice. 
The IP address is the same for all the nodes# 10.10.20.170 Terminal#If you have a Terminal to work with (with an SSH client utility), then to connect to the devbox, run#Username# adminPassword# adminSSH port# 2211Laptop-terminal#$ ssh -p 2211 admin@10.10.20.170admin@10.10.20.170's password#Last login# Sat Aug 18 23#12#52 2018 from 192.168.122.1admin@devbox#~$admin@devbox#~$Or to connect to router r1#Username# adminPassword# adminSSH port# 2221Laptop-terminal#$ ssh -p 2221 admin@10.10.20.170-------------------------------------------------------------------------- Router 1 (Cisco IOS XR Sandbox)--------------------------------------------------------------------------Password#RP/0/RP0/CPU0#r1#RP/0/RP0/CPU0#r1#RP/0/RP0/CPU0#r1#RP/0/RP0/CPU0#r1#show versionSun Aug 19 07#10#06.826 UTCCisco IOS XR Software, Version 6.4.1Copyright (c) 2013-2017 by Cisco Systems, Inc.Build Information# Built By # nkhai Built On # Wed Mar 28 19#20#20 PDT 2018 Build Host # iox-lnx-090 Workspace # /auto/srcarchive14/prod/6.4.1/xrv9k/ws Version # 6.4.1 Location # /opt/cisco/XR/packages/cisco IOS-XRv 9000 () processorSystem uptime is 1 day, 13 hours, 30 minutesRP/0/RP0/CPU0#r1#Perfect! You are now all set to proceed with the rest of this blog.Understanding App-Hosting on IOS-XRPost Release 6.0.0, IOS-XR made a jump from a 32-bit QNX operating system to a 64-Bit Linux operating system. This 64-bit Linux environment with a modern Kernel and the ability to run Linux processes and applications from the larger Linux ecosystem enables a whole variety of new use cases.Breaking down the IOS-XR Software ArchitectureThe Deployment Architecture of IOS-XR on compatible platforms can be classified into two types#LXC-Based Deployment (Contemporary platforms like NCS55xx, NCS5xxx platforms)VM-Based Deployment (ASR9xxx platforms)Irrespective of the LXC or VM based architecture, the common components of the architecture are defined below# Host (Hypervisor)# This is the underlying 64-bit Operating system that acts as the hypervisor on top of which the XR LXC/VM and the Admin LXC/VM are spawned. For LXC based platforms, it provides the shared kernel for the system. It also runs the container/VM daemons like libvirt and docker to spawn the XR and Calvados instances or even user spawned LXC/Docker instances (on LXC based platforms). XR LXC/VM# The IOS-XR control plane processes, as shown in green above, run within an isolated LXC (on most contemporary IOS-XR platforms such as the NCS55xx and NCS5xxx platforms) or within an isolated VM (on the ASR9k platform). This LXC/VM contains all of the IOS-XR control plane processes (Protocol Stacks such as BGP, ISIS, OSPF, Internal Database - SYSDB, APIs etc.). For VM-Based platforms, the XR VM brings its own kernel and runs the libvirt daemon and the docker daemon inside the XR VM. Consequently, the User LXC/Docker containers are spawned inside the XR VM, unlike LXC-Based platforms where the user containers are spawned on the Host kernel. Admin LXC/VM# Also called Calvados, the Admin LXC/VM is the first instance that comes up once the Host layer is up. The Admin LXC/VM then helps handle the lifecycle of the XR LXC/VM. 
The primary purpose of Calvados was to enable multi-tenancy on the same router by spawning multiple IOS-XR instances that act as logically separate routers (Secure domain routers or SDRs).For most operations, you will deal with just the XR LXC/VM on an IOS-XR platform.Throughout the rest of the lab our focus will be on more contemporary platforms such as NCS55xx and NCS5xxx that are LXC-Based systems.Types of Linux Applications on IOS-XRLinux applications supported by IOS-XR can be classified into two types# Native Applications# An application that runs within the same process space as XR control-plane processes (BGP, OSPF, SYSDB etc.) is considered a native application. In either type of deployment as shown above, the native application runs inside the XR LXC/VM. The obvious red flag with native applications is that there is no well-defined method to constrain the amount of resources (CPU, Memory, Disk) that a native application utilizes on the system. It is up to the native application to police its own use of CPU, memory, disk and other resources on the system. However, the advantage is the amount of visibility (file system access, XR process information etc.) that a native application has on the system. This is why most configuration management tool clients (like puppet-client, chef-client and salt-minion) end up being native applications. Another example is auditing apps, for example the xr-auditor, which actually runs natively within the XR, Admin as well as the Host layers# https#//xrdocs.io/application-hosting/blogs/2018-05-01-anatomy-of-a-network-app-xr-auditor/ Container applications# Container applications require a container daemon available to launch the container on an available shared kernel on the system. The container daemon could be libvirt (LXCs) or the Docker daemon (Docker containers). As shown in the figures above, the user-managed container would be launched on the Host layer in the case of LXC-Based platforms and inside the XR VM in the case of VM-Based platforms.The advantage of container based applications is the resource isolation that is achieved by containerizing the application. The user is able to pre-allocate the CPU share for the application, set maximum memory limits and even launch the container within a pre-allocated mount volume to limit the disk usage of the container. Further, being able to select a distribution of your choice as the container rootfs significantly eases the requirements on the development of the application - if your application works on Ubuntu#16.04, then simply select Ubuntu#16.04 as the rootfs for the container image/tarball.The disadvantage of container based applications is the lack of visibility compared to native applications, but owing to the rich RPC based APIs available with IOS-XR there isn’t much that container applications cannot access on the system.There are several examples of container applications that have been built to operate on IOS-XR# a more recent one is when we ran Facebook’s Open/R protocol as a docker application on IOS-XR. 
You can read more about it here# https#//xrdocs.io/cisco-service-layer/blogs/2018-02-16-xr-s-journey-to-the-we-b-st-open-r-integration-with-ios-xr/Understanding Linux Pkt/IO on IOS-XRThe most important component of the application hosting infrastructure that enables applications to open up TCP/UDP sockets, and send or receive traffic across Management and production data interfaces of IOS-XR is the Pkt/IO infrastructure also commonly known as the KIM-Netstack infrastructure.The core idea behind the Pkt/IO infrastructure in IOS-XR is to make the linux environment on an IOS-XR system appear identical to a typical Linux box.Now this would be simpler on a fixed (1RU or 2RU) system without any linecards by leveraging the inherent Linux network stack.But today, there is no infrastructure available in the Linux ecosystem to manage Distributed systems with multiple linecards.This is where the IOS-XR network stack excels with its long history of running across fixed systems with no linecards as well as modular systems with multiple linecards.Knowing this, we implemented the Pkt/IO system such that the Linux kernel network stack on the RP in an IOS-XR platform can piggy-back on the IOS-XR network stack to allow the RP linux environment to have access to all the interfaces and packet paths on the distributed system for free, without any additional investment on the Linux kernel network stack.The way we accomplish this is shown through the series of images below.Even though a distributed modular chassis consists of 1 or 2 RPs and multiple Linecards, each of which runs a Linux kernel and a Linux userspace environment on its own, a network operator or any end user typically has access only to the active RP of the system where all the interaction interfaces (CLI, APIs, SSH, Telnet) are available.So it makes sense to have a system that mimics a fixed (pizza-box) system for the Active RP’s Linux kernel even on a distributed modular chassis. This way a Linux application running on the Active RP’s linux environment will assume that it is on a fixed Linux box with all the (actually distributed) Linecard interfaces connected locally. Expose Interfaces to the RP kernel# As shown below, all of the interfaces that you typically see in the show ipv4 interface brief command are mirrored into the RP’s Linux kernel. The difference in the Name is that only a shorter form of the name is selected (HundredGig becomes Hg, GigabitEthernet becomes Gi) and the numbering is preserved with / replaced with _. So for example HundredGig0/0/0/0 becomes Hg0_0_0_0 in the Active RP’s kernel. All L3 interfaces (Physical interfaces, L3 Subinterfaces, Bundle Interfaces and Loopback interfaces) are supported and synced into the kernel. While physical interfaces are always created in the kernel (can be checked with ifconfig -a, all other interfaces will only appear in the kernel if they have up and have an ip address configured on them). Support for routed BVI interfaces (routable virtual interfaces in an integrated Routing and Bridging (IRB) setup) was brought in with IOS-XR Release 6.3.1. This implies one can configure Layer-2 Subinterfaces, place them in a bridge-domain configuration and configure ip addresses on a routable BVI interface in the same bridge domain and the BVI interface will be synced into the kernel. Of course, like other virtual interfaces, an ip address configuration on the BVI interface is necessary for the kernel sync. Routes in the kernel# This was a decision with multiple implications. 
One way to solve the problem is to sync all routes from the IOS-XR RIB into the Active RP’s kernel. This is possible but would be highly resource intensive to have two parallel RIBs on the system (one in IOS-XR, and the other in the kernel on the same node - RP), especially if the Router is being utilized to download internet Routing tables over BGP. So, in order to save resources, we program only 3 potential routes into the kernel# Default route# The default route is used to handover the packets from the RP’s kernel to IOS-XR so that the lookup for the packet is done by the IOS-XR stack. There are two potential nexthop interfaces that the default route might point to# fwdintf# This is the default forwarding port to send packets from the RP over the fabric to the Linecards with the required platform headers to classify the packets as tpa packets (short for third party app). This interface has no ip address on it. So the packets exiting this interface cannot have their routing decision (outgoing linecard, interface, nexthop etc.) taken on the RP itself. In this situation, the lookup for the packet’s destination happens on the designated Linecard (every modular chassis has a designated linecard, used for packets for which the routing is not complete.). The designated Linecard then does a FIB lookup for the packet, sends the packet back over the fabric to the correct Linecard where it can finally exit the box. fwd_ew# This interface is also called the netio interface and its job is send the packet to the slow packet path and force IOS-XR software running on the Active RP to do a routing lookup before sending the packet out. This nexthop interface is used when packets need to be sent over the Mgmt port on the RP. Another use case where this interface may be used is when a Linux process running in the RP’s kernel network stack needs to communicate with a process running in the IOS-XR network stack. We call this east-west communication and hence the name fwd_ew for the interface. This interface too does not have an IP address and the Pkt/IO infrastructure opens up a raw socket on the interface to absorb traffic going out of it. So, irrespective of the nexthop interface for the default route, there MUST be a route to the required destination in IOS-XR RIB, since the final lookup (whether on RP or LC) will utilize the IOS-XR RIB/FIB routes. Local Management Subnet# This is obvious. The Management port is local to the RP, hence the Management port subnet is programmed into the kernel by default. This implies that irrespective of the default route in the kernel, you will always have reachability from the kernel to destinations on the Management LAN. These routes are shown in the figure below, with the example showcasing a default route through fwdintf.These routes are controlled through the tpa configuration CLI in IOS-XR# ! tpa ! vrf default ! address-family ipv4 default-route mgmt update-source MgmtEth0/RP0/CPU0/0 ! address-family ipv6 default-route mgmt update-source MgmtEth0/RP0/CPU0/0 ! ! !Where# vrf <> defines the vrf (and hence the network namespace in the kernel) for which the route configurations are valid. address-family <> determine the address family within the vrf/netns for which the route configurations are valid. default-route mgmt# This is a “Flag” CLI. Its presence forces the default route in kernel to use fwd_ew as the nexthop interface and thereby cause lookups to happen on RP through the slow packet path for application traffic originating from the Kernel. 
Its absence will change the default route to point to fwdintf interface instead. update-source <># This CLI is used to set the src-hint field in the linux route. Remember that Linux requires the outgoing interface to have an IP address to determine the source IP to be used for the packet destined to a particular destination in case the application does not set the source IP itself and depends on the kernel to do so. Since both fwd_ew and fwdintf are interfaces without any IP address, a src-hint must be set to ensure correct source-IP being set on the outgoing packet.If we apply the above configuration on an IOS-XR router (Release 6.3.1+), the routes in the kernel end up being# [r1#~]$ ip route default dev fwd_ew scope link src 192.168.122.21 192.168.122.0/24 dev Mg0_RP0_CPU0_0 scope link src 192.168.122.21 [r1#~]$where 192.168.122.21 was the IP address of the MgmtEth0/RP0/CPU0/0 interface that got configured as src-hint because of the update-source CLI under tpa. Network Namespace# If a vrf (complete BGP vrf or just vrf-lite) is configured in IOS-XR CLI and the same vrf (same name) is configured under the tpa configuration, then a corresponding network namespace is allocated in the kernel with the same name as the VRF. This is very useful when trying to isolate your applications into separate VRFs and running across specific interfaces. If a vrf is mapped to a network namespace of the same name using the tpa CLI then Interfaces configured under the vrf in XR CLI will automatically be isolated into the corresponding namespace in the RP’s kernel. Let’s try this out on route r2 in the sandbox# Username# adminPassword# adminSSH port# 2231 Laptop-terminal#$ ssh -p 2231 admin@10.10.20.170-------------------------------------------------------------------------- Router 2 (Cisco IOS XR Sandbox)--------------------------------------------------------------------------Password#RP/0/RP0/CPU0#r2#RP/0/RP0/CPU0#r2# Initially, the interface GigabitEthernet0/0/0/2 on router r2 is shutdown and is by default in the global-vrf network namespace which corrsponds to vrf default in IOS-XR. RP/0/RP0/CPU0#r2#show running-config int gigabitEthernet 0/0/0/2Mon Sep 10 05#13#15.645 UTCinterface GigabitEthernet0/0/0/2 shutdown! Check that this interface is visible in the global-vrf netns. We use ifconfig -a instead of just ifconfig since the interface is currently shutdown. 
We use the netns_identify utility in XR bash with the $$ argument (represents the process ID of the current XR bash shell) to determine the netns we are dropped into when we issue the bash CLI in XR# RP/0/RP0/CPU0#r2#RP/0/RP0/CPU0#r2#conf tMon Sep 10 05#13#23.081 UTCRP/0/RP0/CPU0#r2(config)# RP/0/RP0/CPU0#r2#RP/0/RP0/CPU0#r2#RP/0/RP0/CPU0#r2#bashMon Sep 10 05#13#30.038 UTC[r2#~]$[r2#~]$ netns_identify $$tpnnsglobal-vrf[r2#~]$[r2#~]$ ifconfig -a Gi0_0_0_2Gi0_0_0_2 Link encap#Ethernet HWaddr 52#54#00#93#8a#b2 [NO FLAGS] MTU#1514 Metric#1 RX packets#0 errors#0 dropped#0 overruns#0 frame#0 TX packets#0 errors#0 dropped#0 overruns#0 carrier#0 collisions#0 txqueuelen#1000 RX bytes#0 (0.0 B) TX bytes#0 (0.0 B)[r2#~]$ Next, configure vrf blue at the global configuration level as well as under tpa to ensure that the netns called blue gets created in the kernel# RP/0/RP0/CPU0#r2#conf t Mon Sep 10 05#14#16.867 UTC RP/0/RP0/CPU0#r2(config)#vrf blue RP/0/RP0/CPU0#r2(config-vrf)#tpa RP/0/RP0/CPU0#r2(config-tpa)#vrf blue RP/0/RP0/CPU0#r2(config-tpa-vrf)#exit RP/0/RP0/CPU0#r2(config-tpa)# RP/0/RP0/CPU0#r2(config-tpa)# RP/0/RP0/CPU0#r2(config-tpa)#commit RP/0/RP0/CPU0#r2(config)# RP/0/RP0/CPU0#r2# RP/0/RP0/CPU0#r2#bash Mon Sep 10 05#14#39.438 UTC [r2#~]$ ls /var/run/netns blue global-vrf tpnns xrnns [r2#~]$ Perfect! Now configure interface GigabitEthernet0/0/0/2 under vrf blue to trigger its migration in the kernel netns as well# RP/0/RP0/CPU0#r2# RP/0/RP0/CPU0#r2# RP/0/RP0/CPU0#r2#conf t Mon Sep 10 05#14#54.079 UTC RP/0/RP0/CPU0#r2(config)#interface gigabitEthernet 0/0/0/2 RP/0/RP0/CPU0#r2(config-if)#no shutdown RP/0/RP0/CPU0#r2(config-if)#ip addr 101.1.1.20/24 RP/0/RP0/CPU0#r2(config-if)#vrf blue RP/0/RP0/CPU0#r2(config-if)#commit Mon Sep 10 05#15#15.502 UTC RP/0/RP0/CPU0#r2(config-if)# RP/0/RP0/CPU0#r2#Drop into bash and check if Gi0_0_0_2 is still present in global-vrf netns# RP/0/RP0/CPU0#r2#bash Mon Sep 10 05#15#19.643 UTC [r2#~]$ [r2#~]$ [r2#~]$ ifconfig -a Gi0_0_0_2 Gi0_0_0_2# error fetching interface information# Device not found [r2#~]$Nope! now let’s drop into netns blue and check the same# [r2#~]$ ip netns exec blue bash [r2#~]$ [r2#~]$ source /etc/init.d/operns-functions [r2#~]$ netns_identify $$ blue [r2#~]$ [r2#~]$ ifconfig -a Gi0_0_0_2 Gi0_0_0_2 Link encap#Ethernet HWaddr 52#54#00#93#8a#b2 inet addr#101.1.1.20 Mask#255.255.255.0 inet6 addr# fe80##5054#ff#fe93#8ab2/64 Scope#Link UP RUNNING NOARP MULTICAST MTU#1500 Metric#1 RX packets#0 errors#0 dropped#0 overruns#0 frame#0 TX packets#0 errors#0 dropped#0 overruns#0 carrier#0 collisions#0 txqueuelen#1000 RX bytes#0 (0.0 B) TX bytes#0 (0.0 B) [r2#~]$Exactly what we expected. The interface Gi0_0_0_2 has now migrated to netns blue. KIM (kernel Interface Module) The KIM module is an important part of the current IOS-XR architecture. It is an XR process that serves as an interface to Kernel (hence the name) and is used to trigger route creation, interface creation, vrf creation in the kernel in response to variable set of inputs# User configured CLI (under tpa), Interface events from the interface manager and even RIB events for static routes through the Management port that are automatically synced into the kernel. KIM also handles the programming of LPTS (local packet transport services) in XR in response to netlink events that apps use to open sockets (TCP/UDP) in the kernel. 
Interface Sync# As shown below, KIM utilizes interface manager events (shut/no-shut) to trigger a kernel module called LCND to create/delete/migrate mirrored interfaces corresponding to all the RP and LC interfaces in the kernel. Route Sync# Similarly, as shown below, KIM will accept the tpa CLI as described earlier to determine the default routes, src-hints, next-hop interface etc. and program the kernel to reflect the requirement. It also works with RIB events to configure static routes with the Mgmt port as the next hop into the kernel (since the Mgmt port is local to the RP). TCP/UDP socket sync# When applications running (either natively or inside containers) attempt to open up TCP/UDP sockets, then it is essential for the IOS-XR LPTS framework (which is a distributed database of ports open on the system and exists across the RP and LCs) to be programmed to reflect these open ports so that any received packets can be subject to an LPTS lookup by an LC or RP and correctly forward the incoming traffic to the application opening up the socket.For this purpose, the netlink messages generated by the application when it tries to open sockets (TCP/UDP) in the kernel are captured by a kernel module and the events are forwarded to KIM, which in turn programs LPTS to reflect the newly opened sockets. A similar process also happens when the application closes the socket causing KIM to remove the entry from LPTS. Traffic flow for Linux/Application Traffic# Let’s take a quick look at how the traffic flow and lookups happen for application traffic with the Pkt/IO infrastructure in place for some unique scenarios# Transmit TCP/UDP traffic over Data ports# In this scenario, typically, the default route is set up to point to fwdintf as the next hop interface. The packet generated from the userspace process/application, exits fwdintf, is captured by a kernel module listening on a raw socket on fwdintf and the packet is then injected into the fabric towards the designated Linecard. The Designated linecard then does a FIB lookup, determines if it can directly send it out of the interface on the same linecard, else it sends it back over the fabric to the correct linecard based on the FIB lookup result. Receive TCP/UDP traffic over Data port# Here the packets arrive over the data port into the Linecard where an LPTS lookup takes place. Thanks to KIM which populates the LPTS entries based on TCP/UDP sockets opened by applications on the RP, the packets are then forwarded over the fabric towards the RP kernel and given to the application that opened up the matching socket. Receive Exception Traffic(ping) Any traffic received that does not contain a layer 4 header specifying a TCP/UDP port is treated as exception traffic since it will not match any entry during LPTS lookup (which only handles TCP/UDP sockets). For such a packet, there will be a punt towards the RP’s slow packet path where a software lookup will happen and the packet will be forwarded to the kernel as well as XR (replication). This is how a ping initiated in the Linux kernel of the RP will be able to receive a reply back. Management traffic Transmit/Receive# This is fairly straightforward. All the traffic transmitted and received via the Local Management port on the RP is handled by the slow packet path as shown below. 
Transmit Traffic through Mgmt Port# Receive Traffic via Mgmt Port# Native Applications on IOS-XRIf you’ve gone through some of the CLI automation labs already, then it must be fairly clear that being able to run bash and python scripts natively in the XR shell is fairly common. Extend that to cron jobs and you get all the typical scripting capabilities that you expect on a Linux environment.Further, IOS-XR uses the Windriver Linux 7 (WRL7) distribution which is RPM based and also support yum natively for WRL7 applications.You can certainly build your own WRL7 RPMs and install them on the system. This scenario is covered in great detail on xrdocs, here# https#//xrdocs.io/application-hosting/tutorials/2016-06-17-xr-toolbox-part-5-running-a-native-wrl7-app/Further, the IOS-XR team also hosts some pre-built libraries for common Linux applications on the Yum repository here# https#//devhub.cisco.com/artifactory/xr600/3rdparty/x86_64/To show how a native application RPM could be installed from this yum repo, let’s connect to router r1 in the sandbox#Username# adminPassword# adminSSH port# 2221Laptop-terminal#$ ssh -p 2221 admin@10.10.20.170-------------------------------------------------------------------------- Router 1 (Cisco IOS XR Sandbox)--------------------------------------------------------------------------Password#RP/0/RP0/CPU0#r1#RP/0/RP0/CPU0#r1#Drop into the bash shell and run ip route#RP/0/RP0/CPU0#r1#bashMon Sep 10 01#56#15.560 UTC[r1#~]$[r1#~]$ ip routedefault dev fwdintf scope link192.168.122.0/24 dev Mg0_RP0_CPU0_0 scope link src 192.168.122.21[r1#~]$It can be seen that the current default route for the global-vrf network namespace in the shell is set to go over the fabric and expects the default route in IOS-XR RIB to contain a route to the internet over the data ports.But the way the sandbox VMs are connected, internet access is available over the Management port sitting behind the gateway 192.168.122.1 and natted to the external world. Therefore to establish connectivity to the yum repo, we need to configure both the default route and the update-source config (setting the source IP of outgoing packets) to point to the Management port.Configuring thus#RP/0/RP0/CPU0#r1(config)#tpaRP/0/RP0/CPU0#r1(config-tpa)#vrf defaultRP/0/RP0/CPU0#r1(config-tpa-vrf)#address-family ipv4RP/0/RP0/CPU0#r1(config-tpa-vrf-afi)#update-source MgmtEth 0/RP0/CPU0/0RP/0/RP0/CPU0#r1(config-tpa-vrf-afi)#default-route mgmtRP/0/RP0/CPU0#r1(config-tpa-vrf-afi)#address-family ipv6 RP/0/RP0/CPU0#r1(config-tpa-vrf-afi)#update-source MgmtEth 0/RP0/CPU0/0RP/0/RP0/CPU0#r1(config-tpa-vrf-afi)#default-route mgmt RP/0/RP0/CPU0#r1(config-tpa-vrf-afi)#commitMon Sep 10 01#57#49.243 UTCRP/0/RP0/CPU0#r1(config-tpa-vrf-afi)#RP/0/RP0/CPU0#r1(config-tpa-vrf-afi)#RP/0/RP0/CPU0#r1#show configuration commit changes last 1Mon Sep 10 02#08#56.266 UTCBuilding configuration...!! IOS XR Configuration version = 6.4.1tpa vrf default address-family ipv4 default-route mgmt update-source dataports MgmtEth0/RP0/CPU0/0 ! address-family ipv6 default-route mgmt update-source dataports MgmtEth0/RP0/CPU0/0 ! 
!!endRP/0/RP0/CPU0#r1#We can now check the output of ip route again#RP/0/RP0/CPU0#r1#bashMon Sep 10 02#09#25.572 UTC[r1#~]$[r1#~]$ ip routedefault dev fwd_ew scope link src 192.168.122.21192.168.122.0/24 dev Mg0_RP0_CPU0_0 scope link src 192.168.122.21[r1#~]$Great, notice how the default route changed to point to the fwd_ew interface so that the packet will be sent to the slow packet path where IOS-XR can also forward packets out to the Management port.Finally, set up the dns resolution by adding a nameserver to /etc/resolv.conf through the bash cli#RP/0/RP0/CPU0#r1#bashMon Sep 10 01#56#15.560 UTC[r1#~]$[r1#~]$ ping google.comping# unknown host google.com[r1#~]$[r1#~]$ echo ~nameserver 8.8.8.8~ > /etc/resolv.conf[r1#~]$[r1#~]$ ping google.comPING google.com (216.58.217.78) 56(84) bytes of data.64 bytes from iad23s41-in-f78.1e100.net (216.58.217.78)# icmp_seq=1 ttl=49 time=113 ms64 bytes from iad23s41-in-f78.1e100.net (216.58.217.78)# icmp_seq=2 ttl=49 time=113 ms^C--- google.com ping statistics ---2 packets transmitted, 2 received, 0% packet loss, time 1001msrtt min/avg/max/mdev = 113.174/113.573/113.973/0.522 ms[r1#~]$[r1#~]$Perfect, we now have DNS resolving towards the internet, so let’s set up access to the yum repo, Again in the bash shell #RP/0/RP0/CPU0#r1#bashMon Sep 10 01#56#15.560 UTC[r1#~]$[r1#~]$ yum-config-manager --add-repo https#//devhub.cisco.com/artifactory/xr600/3rdparty/x86_64/adding repo from# https#//devhub.cisco.com/artifactory/xr600/3rdparty/x86_64/[devhub.cisco.com_artifactory_xr600_3rdparty_x86_64_]name=added from# https#//devhub.cisco.com/artifactory/xr600/3rdparty/x86_64/baseurl=https#//devhub.cisco.com/artifactory/xr600/3rdparty/x86_64/enabled=1[r1#~]$Let’s install the available iperf RPM from the yum repo#[r1#~]$[r1#~]$ yum install iperfLoaded plugins# downloadonly, protect-packages, rpm-persistencedevhub.cisco.com_artifactory_xr600_3rdparty_x86_64_ | 1.3 kB 00#00 devhub.cisco.com_artifactory_xr600_3rdparty_x86_64_/primary | 1.1 MB 00#00 devhub.cisco.com_artifactory_xr600_3rdparty_x86_64_ 5912/5912Setting up Install ProcessResolving Dependencies--> Running transaction check---> Package iperf.core2_64 0#2.0.5-r0.0 will be installed--> Finished Dependency ResolutionDependencies Resolved=================================================================================================================================================================================================================================== Package Arch Version Repository Size===================================================================================================================================================================================================================================Installing# iperf core2_64 2.0.5-r0.0 devhub.cisco.com_artifactory_xr600_3rdparty_x86_64_ 34 kTransaction Summary===================================================================================================================================================================================================================================Install 1 PackageTotal download size# 34 kInstalled size# 67 kIs this ok [y/N]# yDownloading Packages#iperf-2.0.5-r0.0.core2_64.rpm | 34 kB 00#00 Running Transaction CheckRunning Transaction TestTransaction Test SucceededRunning Transaction Installing # iperf-2.0.5-r0.0.core2_64 1/1Installed# iperf.core2_64 0#2.0.5-r0.0 Complete![r1#~]$[r1#~]$[r1#~]$[r1#~]$ iperf -viperf version 2.0.5 (08 Jul 2010) pthreads[r1#~]$There you go, iperf version 2.0.5 has been installed 
and is ready to run as a native application.Container applications on IOS-XRSome common setup details for both LXC containers and Docker containers on the IOS-XR system are# Disk Volume# By default the mount volume /misc/app_host on the underlying host layer is mounted into the XR LXC by default. The amount of space allocated is about 3.9G on an xrv9k and about 3.7G on an NCS5500. It varies based on the resource availability of the platform, but is within this range.This can be checked by running df -h in the bash shell# [r1#~]$ df -h Filesystem Size Used Avail Use% Mounted on /dev/loop5 2.9G 1.4G 1.4G 49% / run 7.3G 476K 7.3G 1% /bindmnt_netns devfs 1.0M 16K 1008K 2% /dev tmpfs 64K 0 64K 0% /dev/cgroup /dev/mapper/panini_vol_grp-host_lv0 969M 412M 492M 46% /dev/sde none 7.3G 787M 6.5G 11% /dev/shm none 7.3G 787M 6.5G 11% /dev/shm /dev/mapper/app_vol_grp-app_lv0 3.9G 106M 3.6G 3% /misc/app_host tmpfs 7.3G 4.0K 7.3G 1% /var/volatile tmpfs 7.3G 68K 7.3G 1% /run tmpfs 7.3G 0 7.3G 0% /media/ram tmpfs 64M 208K 64M 1% /tmp tmpfs 64M 208K 64M 1% /tmp /dev/mapper/panini_vol_grp-ssd_disk1_xr_1 1.5G 233M 1.1G 18% /misc/disk1 /dev/mapper/xr-vm_encrypted_log 475M 16M 430M 4% /var/log /dev/mapper/xr-vm_encrypted_config 475M 3.3M 443M 1% /misc/config /dev/mapper/xr-vm_encrypted_scratch 989M 21M 902M 3% /misc/scratch none 512K 0 512K 0% /mnt [r1#~]$ CPU shares# Both LXC containers and Docker containers on IOS-XR have their resources (CPU/Memory) governed by cgroups settings. To figure out the limits on these container applications by default, we need to drop into the Host shell and check the cgroups settings# To drop into the host shell, follow the exact sequence of commands shown below, starting from the XR CLI# RP/0/RP0/CPU0#r1# RP/0/RP0/CPU0#r1#admin Mon Sep 10 02#33#43.456 UTC admin connected from 127.0.0.1 using console on r1 sysadmin-vm#0_RP0#run Mon Sep 10 02#33#45.243 UTC [sysadmin-vm#0_RP0#~]$ [sysadmin-vm#0_RP0#~]$ssh 10.0.2.16 [host#~]$ Let’s break down the cpu-shares allocated to different parts of the system# [host]$ [host]$ cd /dev/cgroup/ [host#/dev/cgroup]$ virsh -c lxc#/// list Id Name State ---------------------------------------------------- 6396 sysadmin running 14448 default-sdr--1 running 22946 default-sdr--2 running [host#/dev/cgroup]$ [host#/dev/cgroup]$ cat cpu/cpu.shares >>>> Host CPU shares 1024 [host#/dev/cgroup]$ cat cpu/machine/cpu.shares >>>> machine subgroup CPU shares 1024 [host#/dev/cgroup]$ [host#/dev/cgroup]$ cat cpu/machine/default-sdr--1.libvirt-lxc/cpu.shares >>>>> XR Control Plane 1024 [host#/dev/cgroup]$ cat cpu/machine/default-sdr--2.libvirt-lxc/cpu.shares >>>> Data Plane LC container 1024 [host#/dev/cgroup]$ cat cpu/machine/sysadmin.libvirt-lxc/cpu.shares >>> Sysadmin Container 1024 [host#/dev/cgroup]$ [host#/dev/cgroup]$ cat cpu/machine/tp_app.partition/cpu.shares >>>>> Allocation for the tp_app.partition subgroup 1024 [host#/dev/cgroup]$ [host#/dev/cgroup]$ cat cpu/machine/tp_app.partition/docker/cpu.shares >>>> Allocation for third party docker container subgroup under the tp_app.partition subgroup 1024 [host#/dev/cgroup]$ [host#/dev/cgroup]$ cat cpu/machine/tp_app.partition/lxc.partition/cpu.shares >>>> Allocation for third party LXC container subgroup under the tp_app.partition subgroup 1024 [host#/dev/cgroup]$ What do these cpu share allocations mean? CPU shares help determine the relative allocation of CPU resources when processes are running in all groups and subgroups. Now at the highest level (Host), there is no competing group defined. 
So Host gets 1024 CPU shares. This is the root cgroup. All processes at this level get to utilize 1024 shares One level down, the “machine” sub group is defined and is given 1024 CPU shares. Again, no competing subgroup at this level, so all cpu resources get passed down. Next level, machine is divided into 4 groups# default-sdr—1 , default-sdr—2, sysadmin and tp_app.partition with cpu shares at 1024, 1024, 1024 and 1024 respectively. On the NCS55xx devices, tp_app.partition is allocated 256 cpu shares. CPU shares do NOT easily map to percentages of the CPU that will get used up because the percentage of CPU utilized is a function of the distribution of CURRENTLY running processes across different cgroups (root, machine, tp_app.partition etc.). The cpu shares are not hard limits, but rather guide how the CPU gets utilized across different process groups. I found a nice explanation of this breakdown in the RedHat documentation here# https#//access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/process_behavior. So, in our case, assuming the same number of processes are running in the Host (root) and in each of the “machine” subgroups# default-sdr–1, default-sdr–2, syasdmin, tp_app.partition (This is not the case, but we use this to simplify the calculation)# When no 3rd party application (docker or LXC, I’m not talking about native) is running on the system, then the allocation of CPU for the 3 system subgroups are (Remember 1024 cpu shares are reserved for the Host(root) layer)# machine subgroup share = 1024/(1024+1024) = 50%Host (root) share = 1024/(1024+1024) = 50%default-sdr—1 share = (1024/(1024+1024+1024)) * 50% = 16.67% default-sdr—2 share = (1024/(1024+1024+1024)) * 50% = 16.67% sysadmin share = (1024/(1024+1024+1024)) * 50% = 16.67% When an application is running on the system as part of the tp_app.partition subgroup (either docker or LXC or both), then the remaining 3 subgroups are already active. Now, the allocations for the system subgroups are reduced to# machine subgroup share = 1024/(1024+1024) = 50%Host share = 1024/(1024+1024) = 50%tp_app.partition process = 1024/(1024+1024+1024+1024) * 50% = 12.5%default-sdr—1 share = 1024/(1024+1024+1024+1024) * 50% = 12.5% default-sdr—2 share = 1024/(1024+1024+1024+1024) * 50% = 12.5%sysadmin share = 1024/(1024+1024+1024+1024) * 50% = 12.5% On the NCS55xx platforms the tp_app.partition is actually allocated 256 CPU shares. So, for these platforms, the calculations would change to# For NCS55xx platforms# machine subgroup share = 1024/(1024+1024) = 50%Host share = 1024/(1024+1024) = 50%tp_app.partition process = 256/(256+1024+1024+1024) * 50% = 3.84%default-sdr—1 share = 1024/(256+1024+1024+1024) * 50% = 15.38% default-sdr—2 share = 1024/(256+1024+1024+1024) * 50% = 15.38%sysadmin share = 1024/(256+1024+1024+1024) * 50% = 15.38% Further, under the tp_app.partition subgroup, Docker and LXC get 1024 and 1024 shares respectively. So, in case you’re running an LXC app and a Docker app at the same time, they will get 12.5/2 = 6.25% of the CPU each. Again, this is for the IOS-XRv9000 platform in this devnet sandbox.For NCS55XX platforms, running both an LXC and Docker app would reduce the CPU share to 3.84%/2 = 1.92% based on our calculations above. If you run any one of them (typically the case), then they get to use all of 12.5% (3.84% in case of NCS55xx platforms). Not all platforms have the default-sdr–2 container running on the RP. 
This is true for IOS-XRv9000 and the fixed boxes like the NCS540 or NCS5501/5502. But, for modular chassis like NCS5508/5516 etc., there is NO default-sdr--2 container. In such a case, the tp_app.partition subgroup gets to utilize 5.56% (256 shares) of the cpu with the remaining 94.44% used by default-sdr--1 (22.22%), the sysadmin container (22.22%) and the host layer (50%). Memory Limits# Similarly, cgroups settings on the host shell can also be used to determine the maximum limit on the amount of memory that a container app can utilize on the system# [host#~]$ cat /dev/cgroup/memory/machine/tp_app.partition/lxc.partition/memory.limit_in_bytes 536870912 [host#~]$ [host#~]$ cat /dev/cgroup/memory/machine/tp_app.partition/docker/memory.limit_in_bytes 536870912 [host#~]$ The above values are in bytes, which translates to a 512 MB memory limit on container apps. For the NCS55XX platforms, the same limits are increased to 1G# [host#0_RP0#~]$ cat /dev/cgroup/memory/machine/tp_app.partition/lxc.partition/memory.limit_in_bytes 1073741824 [host#0_RP0#~]$ [host#0_RP0#~]$ cat /dev/cgroup/memory/machine/tp_app.partition/docker/memory.limit_in_bytes 1073741824 [host#0_RP0#~]$ Bear in mind that the above values are for an IOS-XRv9000 platform. Each platform would handle these limits based on the resources available. But you can use the same techniques described above across all IOS-XR platforms to determine the limits imposed by the one selected.LXC ContainersThe libvirt daemon runs on the host layer of the system. However, the interaction client for libvirt, i.e. virsh, has been made available for use within the XR LXC for end users.This implies that a user in the XR bash shell can run the virsh command to display the running containers or even launch new custom containers#Trying this out on router r1#RP/0/RP0/CPU0#r1#RP/0/RP0/CPU0#r1#bashMon Sep 10 02#20#21.486 UTC[r1#~]$[r1#~]$ virsh list Id Name State---------------------------------------------------- 7554 sysadmin running 17021 default-sdr--1 running 27481 default-sdr--2 running[r1#~]$Here sysadmin corresponds to the admin LXC and default-sdr--1 corresponds to the XR LXC. Further, since r1 is a virtual XRv9000 platform, it doesn’t contain physical linecards. Hence the XR LXC that would typically run on a linecard runs as a separate LXC on the RP host itself to mimic the behaviour of a linecard on an XRv9000.Bringing up your own LXC container on IOS-XR is fairly simple, and this has been covered in great detail on xrdocs, here# https#//xrdocs.io/application-hosting/tutorials/2016-06-16-xr-toolbox-part-4-bring-your-own-container-lxc-app/We will skip bringing up an LXC container and will leave it as an exercise for the reader based on the blog above.Docker ContainersThe Docker daemon runs on the host layer as well, much like the libvirt daemon. 
Again, to simplify operations, the docker client has been made available in the XR bash shell.Drop into the bash shell of router r1, so we can play around with the docker client#RP/0/RP0/CPU0#r1#bashMon Sep 10 02#20#21.486 UTC[r1#~]$[r1#~]$[r1#~]$ docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES[r1#~]$[r1#~]$[r1#~]$ docker imagesREPOSITORY TAG IMAGE ID CREATED SIZE[r1#~]$There are multiple different techniques that can be used to bring up a docker container on IOS-XR and all of them are discussed in great detail on xrdocs, here# https#//xrdocs.io/application-hosting/tutorials/2017-02-26-running-docker-containers-on-ios-xr-6-1-2/Since docker supports pulling docker images directly from dockerhub over the internet, we can try a quick demonstration. If you already ran through the steps to set up the yum repository and install iperf in the native application example above, then your bash in the XR LXC must have the following routes#RP/0/RP0/CPU0#r1#bashMon Sep 10 03#02#56.144 UTC[r1#~]$[r1#~]$ ip routedefault dev fwd_ew scope link src 192.168.122.21192.168.122.0/24 dev Mg0_RP0_CPU0_0 scope link src 192.168.122.21[r1#~]$To demonstrate how the IOS-XR Pkt/IO environment can be inherited by the docker container, we pull in a pre-built ubuntu container with iproute2 pre-installed from dockerhub.The advantage of iproute2 being available is that we can execute the ip netns exec commands inside the container to change into the the required mounted netns where the container will have full access to the exposed IOS-XR interfaces and routes in the kernel.Pull the ubuntu_iproute2 docker image[r1#~]$[r1#~]$[r1#~]$ docker pull akshshar/ubuntu_iproute2_dockerUsing default tag# latestlatest# Pulling from akshshar/ubuntu_iproute2_docker124c757242f8# Pull complete9d866f8bde2a# Pull completefa3f2f277e67# Pull complete398d32b153e8# Pull completeafde35469481# Pull completeacff1696516b# Pull completeDigest# sha256#e701a744a300effd821b1237d0a37cb942f67d102c9b9752e869caa6cb91e5faStatus# Downloaded newer image for akshshar/ubuntu_iproute2_docker#latest[r1#~]$You should see the pulled docker image using the docker images command#[r1#~]$[r1#~]$ docker imagesREPOSITORY TAG IMAGE ID CREATED SIZEakshshar/ubuntu_iproute2_docker latest e0dd0f444715 About a minute ago 128.5 MB[r1#~]$Next, launch the docker image with the following essential parameters# --cap-add SYS_ADMIN# This is capability added to the container to give it enough privileges to run the ip netns exec <> command. -v /var/run/netns#/var/run/netns# The -v flag is a mount flag that will mount the /var/run/netns volume from the host into the docker container. This ensures that all the network namespaces created in the kernel by IOS-XR are available for use within the container’s filesystem. 
[r1#~]$ [r1#~]$ docker run -itd --name ubuntu_iproute2 --cap-add SYS_ADMIN -v /var/run/netns#/var/run/netns akshshar/ubuntu_iproute2_docker bash 871fe3fd745e903b3652ad89e013e83b64fa476ca79731037f9542c1fbca8b7f [r1#~]$Check that the container is now running#[r1#~]$[r1#~]$ docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES871fe3fd745e akshshar/ubuntu_iproute2_docker ~bash~ 25 minutes ago Up 25 minutes ubuntu_iproute2[r1#~]$Exec into the container#[r1#~]$[r1#~]$ docker exec -it ubuntu_iproute2 bashroot@871fe3fd745e#/#Exec into the global-vrf network namespace to gain access to all the interfaces and routes#root@871fe3fd745e#/# ip netns exec global-vrf bashroot@871fe3fd745e#/#root@871fe3fd745e#/# ip link show1# lo# <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default link/loopback 00#00#00#00#00#00 brd 00#00#00#00#00#006# fwdintf# <MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 1000 link/ether 00#00#00#00#00#0a brd ff#ff#ff#ff#ff#ff7# fwd_ew# <MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 1000 link/ether 00#00#00#00#00#0b brd ff#ff#ff#ff#ff#ff8# Mg0_RP0_CPU0_0# <MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 1000 link/ether 52#54#00#28#c2#94 brd ff#ff#ff#ff#ff#ff9# Gi0_0_0_0# <> mtu 1514 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000 link/ether 52#54#00#1c#5e#e0 brd ff#ff#ff#ff#ff#ff10# Gi0_0_0_1# <> mtu 1514 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000 link/ether 52#54#00#1c#5e#e1 brd ff#ff#ff#ff#ff#ff11# Gi0_0_0_2# <> mtu 1514 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000 link/ether 52#54#00#1c#5e#e2 brd ff#ff#ff#ff#ff#ff12# Gi0_0_0_3# <> mtu 1514 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000 link/ether 52#54#00#1c#5e#e3 brd ff#ff#ff#ff#ff#ff13# Gi0_0_0_4# <> mtu 1514 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 52#54#00#1c#5e#e4 brd ff#ff#ff#ff#ff#ffroot@871fe3fd745e#/#root@871fe3fd745e#/# ip routedefault dev fwd_ew scope link src 192.168.122.21192.168.122.0/24 dev Mg0_RP0_CPU0_0 scope link src 192.168.122.21root@871fe3fd745e#/#root@871fe3fd745e#/# exitroot@871fe3fd745e#/# exit[r1#~]$[r1#~]$Great, now let’s clean up.Stop the Docker container#[r1#~]$[r1#~]$ docker stop ubuntu_iproute2ubuntu_iproute2[r1#~]$[r1#~]$ docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES[r1#~]$Remove the Docker container#Stopping the docker container is equivalent to “pausing” it. It does not free up the disk space used by the container. 
To free up the disk space, remove the container#[r1#~]$[r1#~]$ docker rm ubuntu_iproute2ubuntu_iproute2[r1#~]$Finally, remove the docker image[r1#~]$[r1#~]$ docker rmi akshshar/ubuntu_iproute2_dockerUntagged# akshshar/ubuntu_iproute2_docker#latestUntagged# akshshar/ubuntu_iproute2_docker@sha256#e701a744a300effd821b1237d0a37cb942f67d102c9b9752e869caa6cb91e5faDeleted# sha256#e0dd0f44471590a8327802dc447606a9dc36dc8ce0f3df2aee6c75b6e9190eb0Deleted# sha256#a12f84e26b7ab31a36f53feb6cca22222f4ce211a4a7cdc449e0e5a179b0ec23Deleted# sha256#2416e906f135eea2d08b4a8a8ae539328482eacb6cf39100f7c8f99e98a78d84Deleted# sha256#7f8291c73f3ecc4dc9317076ad01a567dd44510e789242368cd061c709e0e36dDeleted# sha256#4b3d88bd6e729deea28b2390d1ddfdbfa3db603160a1129f06f85f26e7bcf4a2Deleted# sha256#f51700a4e396a235cee37249ffc260cdbeb33268225eb8f7345970f5ae309312Deleted# sha256#a30b835850bfd4c7e9495edf7085cedfad918219227c7157ff71e8afe2661f63[r1#~]$And that completes the crash-course on the application hosting infrastructure on IOS-XR! Use this blog along with the other blogs in the xr-toolbox series to build and host your own applications and tools on routers running IOS-XR.", "url": "/blogs/2019-08-19-application-hosting-and-packet-io-on-ios-xr-a-deep-dive/", "author": "Akshat Sharma", "tags": "iosxr, cisco, linux, application-hosting, xr6, packet-io, xr toolbox, xr-toolbox" } , "#": {} , "#": {} , "blogs-2024-08-26-ai-ml-applications-for-optical-networking": { "title": "Six Key AI/ML Applications for Optical Networking", "content": "AI applications in optical networks are becoming increasingly important for enhancing the performance and reliability of data transport.By leveraging AI/ML in optical networks, network operators can achieve higher data rates, improved reliability, and lower operational costs. AI allows for the management of complex networks at a scale and speed that would be unattainable with traditional methods. As optical network technology evolves and data demands grow, the role of AI is expected to expand even further, driving innovation in network design, operation, and maintenance.What are the possible AI/ML applications for Optical Networking ? Network Design, Planning and Optimization# •\tTraffic Prediction# AI can predict traffic patterns and adjust bandwidth allocation proactively to meet demand, thus optimizing the use of network resources. •\tRoute Optimization# Machine learning algorithms analyze network data to determine the most efficient paths for data packets, reducing latency and congestion driving to the concept of Self-Healing Networks •\tSelf-Configuring Networks# AI/ML enables optical networks to \t configure themselves automatically when new devices are added or when changes in traffic are detected. •\tResource Allocation# AI/ML dynamically allocates network resources such as wavelengths and bandwidth, optimizing for current network conditions and demand. Failure Prediction# •\tBy analyzing network data (hystorical and current), AI can predict when components are likely to fail and schedule maintenance before issues occur, improving network reliability. Anomaly Detection for Proactive Restoration# AI/ML systems can monitor the network for anomalies that may indicate an impending failure, allowing for preemptive restoration of the services Adaptive Transmission Systems# •\tModulation Format Adjustment# AI/ML can select the optimal modulation format for data transmission based on real-time network conditions, such as signal quality and channel impairments. 
•\tPower Level Optimization# AI/ML algorithms adjust the power levels of optical signals to ensure efficient transmission while minimizing interference and cross-talk. Learn from Real network# •\tNetwork Data Interpretation# AI/ML techniques allows to provide constructive data interpreation from Optical Time Domain Reflectometer (OTDR) and ONM raw data Quality of Transmission (QoT) Estimation# •\tQoT Prediction# AI models predict the quality of transmission for new connections based on various network parameters, helping to ensure that SLAs (Service Level Agreements) are met. Learn from Real Network # Automatic OTDR events recognitionLet’s take a closer look at the learn from real network application. Optical experts analyze OTDR traces to identify faults in fiber links and guarantee the quality of transmissions. This is achieved by examining event signatures, which denote the location in the traces of the malfunctioning of a specific device or a fault, such as a broken fiber, a bad connector, or a bent fiber. OTDR systems operate by injecting a short laser pulse at one end of the fiber and measuring the backscattered and reflected light with a photodiode at the same location. The result of this process is termed OTDR trace, i.e., a graphical representation of the optical power as a function of the distance along the fiber. A typical example is reported in the below pictureIllustration of an OTDR trace with multiple events. The text annotations describe the root causes of these events.It is now possible to use the recent automatic event detection AI/ML algorithms to bypass time-consuming and tedious human inspections. The application is “trained” to understand and recognize the different event patterns like the one below.Possible patterns used to “train” the alghorithm.AI/ML events recognition is a visual recognition process# the AI/ML can see events that mathematical OTDR analysis cannot find.This results in a very powerful analysis for the user to extrapolate where the optical fiber had an issue in order to be able to fix it.Example of an AI/ML describe the “events” to the user.Streamline and Simplify Managing Optical NetworksCognitive networks are a subset of AI applications tailored specifically for network management, capable of gathering data, learning from it, devising strategies, making decisions, and executing appropriate actions. Machine learning algorithms are the cornerstone of this approach, offering in-depth insights into network behavior, which, in turn, enable operators to make informed and efficient decisions for network optimization.These principles are equally relevant to optical networks, where they unlock a multitude of use cases, including network optimization, proactive network recovery, and enhanced analysis of network conditions. Although we are in the early stages of integrating AI and ML into network management, the potential is undeniable. AI and ML tools present a valuable asset for network operators, promising significant advancements in the efficiency and reliabilityFor more information, please refer to NCS 1000 product page", "url": "/blogs/2024-08-26-ai-ml-applications-for-optical-networking/", "author": "Maurizio Gazzola", "tags": "" } , "#": {} , "#": {} , "techdocs-app-hosting-on-iosxr-xr-linux-shell": { "title": "XR Linux Shell", "content": "Minimal Mistakes has been developed to be 100% compatible with hosting a site on GitHub Pages. 
To get up and running with a new GitHub repository quickly, follow these steps or jump ahead to the full installation guide.Fork the ThemeFork the Minimal Mistakes theme, then rename the repo to USERNAME.github.io — replacing USERNAME with your GitHub username. Note# Your Jekyll site should be viewable immediately at http#//USERNAME.github.io. If it’s not, you can force a rebuild by Customizing Your Site (see below for more details).If you’re hosting several Jekyll based sites under the same GitHub username you will have to use Project Pages instead of User Pages. Essentially you rename the repo to something other than USERNAME.github.io and create a gh-pages branch off of master. For more details on how to set things up check GitHub’s documentation. ProTip# Be sure to delete the gh-pages branch if you forked Minimal Mistakes. This branch contains the documentation and demo site for the theme and you probably don’t want that showing up in your repo.Customize Your SiteOpen up _config.yml found in the root of the repo and edit anything under Site Settings. For a full explanation of every setting be sure to read the Configuration section, but for now let’s just change the site’s title. Edit text files without leaving GitHub.comCommitting a change to _config.yml (or any file in your repository) will force GitHub Pages to rebuild your site with Jekyll. It should then be viewable a few seconds later at https#//USERNAME.github.io.Congratulations! You’ve successfully forked the theme and are up an running with GitHub Pages. Now you’re ready to add content and customize the site further.", "url": "/techdocs/app_hosting_on_iosxr/xr_linux_shell", "tags": "" } , "tutorials-iosxr-vagrant-quickstart": { "title": "XR toolbox, Part 1 : IOS-XR Vagrant Quick Start", "content": " IOS-XR Vagrant# Quick Start Introduction Pre-requisites# Single Node Bringup Download and Add the IOS-XRv vagrant box Pick the last stable version Pick the latest (run with scissors) Initialize a Vagrantfile Bring up the Vagrant Instance Access the XR Linux shell Access XR Console Multi Node Bringup Set up the Vagrantfile Bring up the topology Access the nodes IntroductionThis tutorial is meant to be a quick-start guide to get you up and running with an IOS-XRv Vagrant box.If you’re unfamiliar with Vagrant as a tool for development, testing and design, then here’s a quick look at why Vagrant is useful, directly from the folks at Hashicorp# To learn more about how to use IOS-XR + Vagrant to Test native and container applications on IOS-XR Use configuration management tools like Chef/Puppet/Ansible/Shell as Vagrant provisioners Create complicated topologies and a variety of other use cases, take a look at the rest of the “XR toolbox” series.Note to Users# We try our best to offer the community a development platform for IOS-XR that brings the latest features in XR to your laptop or development environment. Having said that, any issues found on IOS-XRv64 are fixed on a best-effort basis and we do not guarantee an SLA on fixes. We strive to provide a platform that the community can work with and help stabilize through their use. Any feedback is appreciated and you can always leave your comments below.Pre-requisites# Vagrant for your Operating System. 1.8+There is currently a bug in vagrant version 1.8.7 causing a failure in vagrant box add on Mac OSX. Either follow the workaround as specified here# https#//github.com/mitchellh/vagrant/issues/7997 or downgrade to Vagrant version 1.8.6 Virtualbox for your Operating System. 
5.1+ A laptop with atleast 4-5G free RAM. (Each XR vagrant instance uses upto 4G RAM, so plan ahead based on the number of XR nodes you want to run)Tha above items are applicable to all operating systems - Mac OSX, Linux or Windows.If you’re using Windows, we would urge you to download a utility like Git Bash so that all the commands provided below work as advertised.Single Node BringupDownload and Add the IOS-XRv vagrant box IOS-XR Vagrant is currently in Private Beta To download the box, you will need an API-KEY and a CCO-ID To get the API-KEY and a CCO-ID, browse to the following link and follow the steps# Steps to Generate API-KEYPick the last stable versionThe last stable version of XR vagrant was 6.1.2.These images have been out for a while, and should work well. Pick this if you want something that works for sure.$ BOXURL=~https#//devhub.cisco.com/artifactory/appdevci-release/XRv64/6.1.2/iosxrv-fullk9-x64.box~$ curl -u your-cco-id#API-KEY $BOXURL --output ~/iosxrv-fullk9-x64.box$ vagrant box add --name IOS-XRv ~/iosxrv-fullk9-x64.boxPick the latest (run with scissors)If you’re feeling adventurous, pick the latest version of the XR vagrant box as shown below.Bear in mind, there may be bugs and you are free to ask us questions and/or raise issues on our github repo# https#//github.com/xrdocs/application-hosting/issues$ BOXURL=~https#//devhub.cisco.com/artifactory/XRv64-snapshot/latest/iosxrv-fullk9-x64.latest.box~$ curl -u your-cco-id#API-KEY $BOXURL --output ~/iosxrv-fullk9-x64.box$ vagrant box add --name IOS-XRv ~/iosxrv-fullk9-x64.boxOf course, you should replace your-cco-id with your actual Cisco.com ID and API-KEY with the key you generated and copied using the above link.The curl command will take around 10-15 mins as it downloads the box for you. If it happens pretty quickly then it probably means you still don’t have access and you can check the downloaded box file to see if it is a vagrant box (about 1.8G) or a simple “unauthorized” html document.Once it completes, you should be able to see the box added as “IOS-XRv” in your local vagrant box list#AKSHSHAR-M-K0DS#~ akshshar$ vagrant box listIOS-XRv (virtualbox, 0)AKSHSHAR-M-K0DS#~ akshshar$ Initialize a VagrantfileLet’s create a working directory (any name would do) for our next set of tasks#mkdir ~/iosxrv; cd ~/iosxrvNow, in this directory, let’s initialize a Vagrantfile with the name of the box we added.AKSHSHAR-M-K0DS#iosxrv akshshar$ vagrant init IOS-XRv A `Vagrantfile` has been placed in this directory. You are nowready to `vagrant up` your first virtual environment! Please readthe comments in the Vagrantfile as well as documentation on`vagrantup.com` for more information on using Vagrant.AKSHSHAR-M-K0DS#iosxrv akshshar$Bring up the Vagrant InstanceA simple vagrant up will bring up the XR instancevagrant up This bootup process will take some time, (close to 5 minutes).You might see some ` Warning# Remote connection disconnect. Retrying…` messages. Ignore them. These messages appear because the box takes longer than a normal linux machine to boot.Look for the green “vagrant up” welcome message to confirm the machine has booted#Now we have two options to access the Vagrant instance#Access the XR Linux shellVagrant takes care of key exchange automatically. 
We’ve set things up to make sure that the XR linux shell (running SSH on port 57722) is the environment a user gets dropped into when using vagrant sshAKSHSHAR-M-K0DS#simple-mixed-topo akshshar$ vagrant ssh xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ The reason we select the XR linux shell as the default environment and not XR CLI, should be obvious to seasoned users of Vagrant. In the future,Vagrantfiles that integrate chef/puppet/Ansible/Shell as Vagrant provisioners would benefit from linux as the default environment.Access XR ConsoleXR SSH runs on port 22 of the guest IOS-XR instance.First, determine the port to which the XR SSH port (port 22) is forwarded by vagrant by using the vagrant port command#AKSHSHAR-M-K0DS#simple-mixed-topo akshshar$ vagrant port The forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution. 22 (guest) => 2223 (host) 57722 (guest) => 2222 (host)As shown above, port 22 of XR is fowarded to port 2223#Use port 2223 to now ssh into XR CLIThe password is “vagrant”AKSHSHAR-M-K0DS#simple-mixed-topo akshshar$ ssh -p 2223 vagrant@localhostThe authenticity of host '[localhost]#2223 ([127.0.0.1]#2223)' can't be established.RSA key fingerprint is 65#d1#b8#f6#68#9c#04#a2#d5#db#17#d8#de#04#cb#22.Are you sure you want to continue connecting (yes/no)? yesWarning# Permanently added '[localhost]#2223' (RSA) to the list of known hosts.vagrant@localhost's password# RP/0/RP0/CPU0#ios#RP/0/RP0/CPU0#ios#RP/0/RP0/CPU0#ios#Multi Node BringupBear in mind the RAM and CPU resource requirements per IOS-XR vagrant instance before you proceed with this section. A 3 node topology as shown below will require 8-9G RAM and can be shared on a 4-core CPU with your laptop’s OS.Let’s try to bring up a multi-node topology as shown below#Set up the VagrantfileFor this purpose, Let’s use a Sample vagrantfile located here#https#//github.com/ios-xr/vagrant-xrdocs/blob/master/simple-mixed-topo/Vagrantfilegit clone https#//github.com/ios-xr/vagrant-xrdocs.gitcd vagrant-xrdocs/simple-mixed-topoShown below is a snippet of the Vagrantfile#Vagrant.configure(2) do |config| config.vm.define ~rtr1~ do |node| node.vm.box = ~IOS-XRv~ # gig0/0/0/0 connected to link2, # gig0/0/0/1 connected to link1, # gig0/0/0/2 connected to link3, # auto-config not supported. 
node.vm.network #private_network, virtualbox__intnet# ~link2~, auto_config# false node.vm.network #private_network, virtualbox__intnet# ~link1~, auto_config# false node.vm.network #private_network, virtualbox__intnet# ~link3~, auto_config# false end config.vm.define ~rtr2~ do |node| node.vm.box = ~IOS-XRv~ # gig0/0/0/0 connected to link1, # gig0/0/0/1 connected to link3, # auto-config not supported node.vm.network #private_network, virtualbox__intnet# ~link1~, auto_config# false node.vm.network #private_network, virtualbox__intnet# ~link3~, auto_config# false endIf you compare this with the topology above it becomes pretty clear how the interfaces of the XR instances are mapped to individual links.The order of the “private_networks” is important.For each XR node, the first “private_network” corresponds to gig0/0/0/0, the second “private_network” to gig0/0/0/1 and so on.Bring up the topologyAs before, we’ll issue a vagrant up to bring up the topology.vagrant upThis will take some time, possibly over 10 minutes.Look for the green “vagrant up” welcome message to confirm the three machines have booted#Access the nodesThe only point to remember is that in a multinode setup, we “name” each node in the topology.For example, let’s access “rtr2” Access the XR Linux shell#AKSHSHAR-M-K0DS#simple-mixed-topo akshshar$ vagrant ssh rtr2 Last login# Tue May 31 05#43#44 2016 from 10.0.2.2xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ Access XR Console#Determine the forwarded port for port 22 (XR SSH) for rtr2#AKSHSHAR-M-K0DS#simple-mixed-topo akshshar$ vagrant port rtr2 The forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution. 22 (guest) => 2201 (host) 57722 (guest) => 2200 (host)AKSHSHAR-M-K0DS#simple-mixed-topo akshshar$ For rtr2 port 22, the forwarded port is 2201. So, to get into the XR CLI of rtr2, we use#AKSHSHAR-M-K0DS#simple-mixed-topo akshshar$ ssh -p 2201 vagrant@localhost The authenticity of host '[localhost]#2201 ([127.0.0.1]#2201)' can't be established.RSA key fingerprint is 65#d1#b8#f6#68#9c#04#a2#d5#db#17#d8#de#04#cb#22.Are you sure you want to continue connecting (yes/no)? yesWarning# Permanently added '[localhost]#2201' (RSA) to the list of known hosts.vagrant@localhost's password# RP/0/RP0/CPU0#ios#RP/0/RP0/CPU0#ios#That’s it for the quick-start guide on XR vagrant. Launch your very own XR instance using vagrant and let us know your feedback in the comments below!Head over to Part 2 of the XR Toolbox series where we look at bootstrapping a Vagrant XR instance with a user-defined configuration on boot —> Bootstrap XR configuration with Vagrant.", "url": "/tutorials/iosxr-vagrant-quickstart", "author": "Akshat Sharma", "tags": "vagrant, iosxr, cisco, xr toolbox, apphosting" } , "tutorials-iosxr-vagrant-bootstrap-config": { "title": "XR Toolbox, Part 2 : Bootstrap XR configuration with Vagrant", "content": " IOS-XR Vagrant# Bootstrap Config Introduction Pre-requisite Bootstrap Configuration# Shell Provisioner Transfer a Configuration file to XR bash Use a Shell script to Apply XR Config Single node bootstrap Configuration File Bootstrap script Vagrantfile Bootstrap in action! 
Check out Part 1 of the XR toolbox series# IOS-XR Vagrant quick-start.Introduction The IOS-XR Vagrant Quick Start guide showcases how a user can get started with an IOS-XR vagrant box. This tutorial extends the quick-start guide to show how one can apply a node-specific configuration to an XR vagrant instance during boot-up itself. Make sure you take a look at the quick-start guide before proceeding. Bear in mind that the IOS-XR vagrant box is published without the need for any custom plugins. We thought about it and felt that masking the core functionality of the router behind custom Vagrant workflows could prevent us from showcasing some core capabilities of IOS-XR, namely# Day 0# ZTP helpers and shell/bash based automation Day 1# automation techniques based on YANG models Day 2# Streaming Telemetry and application-hosting This tutorial invariably ends up using the new shell/bash based automation techniques that have been introduced as part of the Zero Touch Provisioning (ZTP) functionality in IOS-XR.Pre-requisite Meet the pre-requisites specified in the IOS-XR Vagrant Quick Start guide# Pre-requisites Clone the following repository# https#//github.com/ios-xr/vagrant-xrdocs, before we start. cd ~/git clone https#//github.com/ios-xr/vagrant-xrdocs.gitcd vagrant-xrdocs/You will notice a couple of directories. We will utilize the single_node_bootstrap directory in this tutorial.AKSHSHAR-M-K0DS#vagrant-xrdocs akshshar$ pwd/Users/akshshar/vagrant-xrdocsAKSHSHAR-M-K0DS#vagrant-xrdocs akshshar$ ls single_node_bootstrap/Vagrantfile\tconfigs\t\tscriptsAKSHSHAR-M-K0DS#vagrant-xrdocs akshshar$ Bootstrap Configuration# Shell Provisioner The concept is simple# we’ll use the Vagrant shell provisioner to apply a bootstrap configuration to an XR instance when we issue a vagrant up. All we need is a shell provisioner section in the Vagrantfile for each node# #Source a config file and apply it to XR config.vm.provision ~file~, source# ~configs/rtr_config~, destination# ~/home/vagrant/rtr_config~ config.vm.provision ~shell~ do |s| s.path = ~scripts/apply_config.sh~ s.args = [~/home/vagrant/rtr_config~] endWe will look at a complete Vagrantfile in a bit, but first let’s deconstruct the above piece of code.Transfer a Configuration file to XR bash config.vm.provision ~file~, source# ~configs/rtr_config~, destination# ~/home/vagrant/rtr_config~The above line uses the Vagrant “file” provisioner to transfer a file from the host (your laptop) to the XR linux shell (bash). The source path is relative to the working directory of your vagrant instance (the directory containing the Vagrantfile); hence, the rtr_config file is located in the configs directory.Use a Shell script to Apply XR Config config.vm.provision ~shell~ do |s| s.path = ~scripts/apply_config.sh~ s.args = [~/home/vagrant/rtr_config~] endThe shell script will eventually be run in the XR bash shell of the vagrant instance. The script is placed in the scripts directory and is named apply_config.sh.Further, the script needs the location of the router config file as an argument. 
This is the destination parameter in the “file” provisioner above.So, in short, Vagrant copies a config file to the router bash, and then runs a shell script on the router bash to apply the config file that was copied!Single node bootstrapTo meet the above requirements, you will need a directory structure as laid out under ~/vagrant-xrdocs/single_node_bootstrap#AKSHSHAR-M-K0DS#single_node_bootstrap akshshar$ pwd/Users/akshshar/vagrant-xrdocs/single_node_bootstrapAKSHSHAR-M-K0DS#single_node_bootstrap akshshar$ tree ././├── Vagrantfile├── configs│   └── rtr_config└── scripts └── apply_config.sh2 directories, 3 filesWe will stick to the single_node_bootstrap directory throughout this section.Configuration FileLet’s assume we’re applying a simple XR config that configures the grpc server on port 57891.This will be the contents of our configs/rtr_config fileThis configuration will be an addendum to the pre-existing configuration on the vagrant instance.AKSHSHAR-M-K0DS#iosxrv akshshar$ cat configs/rtr_config !! XR configuration!grpc port 57891!endBootstrap scriptThe shell script to apply the configuration will run on XR bash. The following new shell commands are made available to enable this# xrcmd# This command allows a user to run “exec” commands on XR CLI from the shell. For eg. “show run”, “show version” etc. xrapply# This command allows a user to apply (append) a config file to the existing configuration. xrapply_string# This command can be used to apply a config directly using a single inline string. For eg. xrapply_string ~interface Gig0/0/0/0\\n ip address 1.1.1.2/24 \\n no shutdown~Only the root user is allowed to run the above commands as a good security practice. Unless specified, Vagrant will always escalate the privilege to run the shell provisioner script as root.Our shell script will look something like this#AKSHSHAR-M-K0DS#iosxrv akshshar$ cat scripts/apply_config.sh #!/bin/bash## Source ztp_helper.sh to get the xrapply and xrcmd functions.source /pkg/bin/ztp_helper.shfunction configure_xr() { ## Apply a blind config xrapply $1 if [ $? -ne 0 ]; then echo ~xrapply failed to run~ fi xrcmd ~show config failed~ > /home/vagrant/config_failed_check}## The location of the config file is an argument to the scriptconfig_file=$1## Call the configure_xr() function to use xrapply and xrcmd in parallelconfigure_xr $config_file## Check if there was an error during config applicationgrep -q ~ERROR~ /home/vagrant/config_failed_check## Condition based on the result of grep ($?)if [ $? -ne 0 ]; then echo ~Configuration was successful!~ echo ~Last applied configuration was#~ xrcmd ~show configuration commit changes last 1~else echo ~Configuration Failed. Check /home/vagrant/config_failed on the router for logs~ xrcmd ~show configuration failed~ > /home/vagrant/config_failed exit 1fi Few things to note in the above script# source /pkg/bin/ztp_helper.sh is necessary for the xrapply, xrcmd commands to be available. There are comments in the script to help understand the steps taken. Essentially, the shell script blindly applies the config file specified as an argument ($1) and then checks to see if there was an error during config application. VagrantfileTake a look at the Vagrantfile in the same directory. The shell provisioner code has been added## -*- mode# ruby -*-# vi# set ft=ruby ## All Vagrant configuration is done below. The ~2~ in Vagrant.configure# configures the configuration version (we support older styles for# backwards compatibility). 
Please don't change it unless you know what# you're doing.Vagrant.configure(2) do |config| config.vm.box = ~IOS-XRv~ #Source a config file and apply it to XR config.vm.provision ~file~, source# ~configs/rtr_config~, destination# ~/home/vagrant/rtr_config~ config.vm.provision ~shell~ do |s| s.path = ~scripts/apply_config.sh~ s.args = [~/home/vagrant/rtr_config~] endendBootstrap in action!Assuming that the box (IOS-XRv) is already in the vagrant box list as shown in the IOS-XR Vagrant Quick Start guide, just issue a vagrant up to see the magic happen#Let’s get into the XR CLI to check that it worked#AKSHSHAR-M-K0DS#single_node_bootstrap akshshar$ vagrant port The forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution. 22 (guest) => 2223 (host) 57722 (guest) => 2222 (host)AKSHSHAR-M-K0DS#single_node_bootstrap akshshar$ AKSHSHAR-M-K0DS#single_node_bootstrap akshshar$ ssh -p 2223 vagrant@localhost vagrant@localhost's password# RP/0/RP0/CPU0#ios#RP/0/RP0/CPU0#ios#show running-config grpcTue May 31 16#59#44.581 UTCgrpc port 57891!RP/0/RP0/CPU0#ios#show configuration commit changes last 1Tue May 31 17#02#45.770 UTCBuilding configuration...!! IOS XR Configuration version = 6.1.1.17Igrpc port 57891!endRP/0/RP0/CPU0#ios#It worked! The config was applied as part of the vagrant up process.Head over to Part 3 of the XR Toolbox series where we bring up a typical app-development topology —> App Development Topology.", "url": "/tutorials/iosxr-vagrant-bootstrap-config", "author": "Akshat Sharma", "tags": "vagrant, iosxr, cisco, xr toolbox, configuration" } , "tutorials-iosxr-ansible": { "title": "Using Ansible with IOS-XR 6.1.1", "content": " IOS-XR# Ansible and Vagrant Introduction Prerequisites Vagrant pre-setup devbox box pre-configuration IOS-XRv box pre-configuration Configure Passwordless Access into XR Linux shell Configure Passwordless Access into XR CLI Using Ansible Playbooks Ansible Pre-requisites Running Playbooks IntroductionThe goal of this tutorial is to set up an environment that is identical for Windows, Linux or Mac-OSX users. So instead of setting up Ansible directly on the User’s Desktop/Host, we simply spin up an Ubuntu vagrant instance to host our Ansible playbooks and environment. Let’s call it devbox. We’ll do a separate tutorial on using Ansible directly on Mac-OSX/Windows.Prerequisites Computer with 4-5GB free RAM; Vagrant; Ansible; IOS-XRv Vagrant Box Vagrantfile and scripts for provisioning IOS-XR Vagrant is currently in Private Beta We explain the steps to in the section below#Vagrant pre-setupClone the repo with Vagrantfile and assisting files#$ git clone https#//github.com/ios-xr/vagrant-xrdocs.git$ cd vagrant-xrdocs/ansible-tutorials/$ lsubuntu.sh* Vagrantfile xr-configSetup was tested on Windows, but the workflow is the same for other environments. To add an IOS-XR box, you must first download it. 
IOS-XR Vagrant is currently in Private Beta To download the box, you will need an API-KEY and a CCO-ID To get the API-KEY and a CCO-ID, browse to the following link and follow the steps# Steps to Generate API-KEY$ BOXURL=~http#//devhub.cisco.com/artifactory/appdevci-release/XRv64/latest/iosxrv-fullk9-x64.box~$ curl -u your-cco-id#API-KEY $BOXURL --output ~/iosxrv-fullk9-x64.box$ vagrant box add --name IOS-XRv ~/iosxrv-fullk9-x64.boxOf course, you should replace your-cco-id with your actual Cisco.com ID and API-KEY with the key you generated and copied using the above link.Image for devbox will be downloaded from official source#$ vagrant box add ubuntu/trusty64We should now have both the boxes available, Use the vagrant box list command to display the current set of boxes on your system as shown below#The Vagrantfile contains 2 Vagrant boxes and looks like#Vagrant.configure(2) do |config| config.vm.provision ~shell~, inline# ~echo Hello User~ config.vm.define ~devbox~ do |devbox| devbox.vm.box = ~ubuntu/trusty64~ devbox.vm.network #private_network, virtualbox__intnet# ~link1~, ip# ~10.1.1.10~ devbox.vm.provision #shell, path# ~ubuntu.sh~, privileged# false end config.vm.define ~xr~ do |xr| xr.vm.box = ~xrv64~ xr.vm.network #private_network, virtualbox__intnet# ~link1~, ip# ~10.1.1.20~ end endNow we are ready to boot up the boxes#mkorshun@MKORSHUN-2JPYH MINGW64 ~/Documents/workCisco/tutorial$ lsubuntu.sh* Vagrantfile xr-configmkorshun@MKORSHUN-2JPYH MINGW64 ~/Documents/workCisco/tutorial$ vagrant updevbox box pre-configurationTo access the devbox box just issue the command (no password required)#vagrant ssh devboxThe devbox instance is already configured via file “ubuntu.sh”. This section is only for the user’s information. Let’s review the content of the script “ubuntu.sh”The first four lines are responsible for downloading required packages for Ansible and updating the system. sudo apt-get updatesudo apt-get install -y python-setuptools python-dev build-essential git libssl-dev libffi-dev sshpasssudo easy_install pip wget https#//bootstrap.pypa.io/ez_setup.py -O - | sudo python Next, the script clones the Ansible and the IOSXR-Ansible repos# git clone https#//github.com/ios-xr/iosxr-ansible.gitgit clone git#//github.com/ansible/ansible.git --recursive It then installs Ansible and applies the variables from “ansible_env” to the system. cd ansible/ && sudo python setup.py installecho ~source /home/vagrant/iosxr-ansible/remote/ansible_env~ >> /home/vagrant/.profile The last section is responsible for generating a public key for paswordless authorization (for XR linux) and a base 64 version of it (for XR CLI)# ssh-keygen -t rsa -f /home/vagrant/.ssh/id_rsa -q -P ~~cut -d~ ~ -f2 ~/.ssh/id_rsa.pub | base64 -d > ~/.ssh/id_rsa_pub.b64 IOS-XRv box pre-configurationTo access XR Linux Shell#$ vagrant ssh rtrTo access XR console it takes one additional step to figure out port (credentials for ssh# vagrant/vagrant)#mkorshun@MKORSHUN-2JPYH MINGW64 ~/Documents/workCisco/tutorial$ vagrant port rtrThe forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution. 22 (guest) = 2223 (host) 57722 (guest) = 2200 (host) mkorshun@MKORSHUN-2JPYH MINGW64 ~/Documents/workCisco/tutorial$ ssh -p 2223 vagrant@localhostvagrant@localhost's password#RP/0/RP0/CPU0#ios#Now, let’s configure an IP address on the IOS-XRv instance. 
Issue the following command on XR cli#conf thostname xrinterface GigabitEthernet0/0/0/0 ipv4 address 10.1.1.20 255.255.255.0 no shutdown!commitendChecking connectivity between boxes#RP/0/RP0/CPU0#ios#ping 10.1.1.10Mon May 9 08#36#33.071 UTCType escape sequence to abort.Sending 5, 100-byte ICMP Echos to 10.1.1.10, timeout is 2 seconds#!!!!!Success rate is 100 percent (5/5), round-trip min/avg/max = 1/5/20 msRP/0/RP0/CPU0#ios#Configure Passwordless Access into XR Linux shellLet’s copy public part of key from devbox box and allow access without password. First, connect to the devbox instance and copy file to XR via SCP#vagrant ssh devbox scp -P 57722 /home/vagrant/.ssh/id_rsa.pub vagrant@10.1.1.20#/home/vagrant/id_rsa_ubuntu.pubNow add the copied keys to authorized_keys in XR linuxvagrant ssh rtr cat /home/vagrant/id_rsa_ubuntu.pub >> /home/vagrant/.ssh/authorized_keysConfigure Passwordless Access into XR CLIIf we want passwordless SSH from devbox to XR CLI, issue the following commands in XR CLI#The first command uses scp to copy the public key (base 64 encoded) to XR. Once we have the key locally, we import it using XR CLI’s crypto key import command.Execute in XR CLIscp vagrant@10.1.1.10#/home/vagrant/.ssh/id_rsa_pub.b64 /disk0#/id_rsa_pub.b64crypto key import authentication rsa disk0#/id_rsa_pub.b64File “id_rsa_pub.b64” was created by provisioning script “Ubuntu.sh”, during Vagrant provisioning.Using Ansible PlaybooksAnsible Pre-requisitesOn the devbox box let’s configure Ansible prerequisites. We need to configure 2 files# File “ansible_hosts”# It contains the ip address of the XR instance.We also specify a user to connect to the machine# “ansible_ssh_user=vagrant” File “ansible_env”# Used to set up the environment for Ansible. We do not delve into YDK for now, it’s a topic for another tutorial. Note that the files ansible_hosts and ansible_env are preconfigured for our needs.cd iosxr-ansible/cd remote/vagrant@vagrant-ubuntu-trusty-64#~/iosxr-ansible/remote$ cat ansible_hosts[ss-xr]10.1.1.20 ansible_ssh_user=vagrantvagrant@vagrant-ubuntu-trusty-64#~/iosxr-ansible/remote$ cat ansible_envexport BASEDIR=/home/vagrantexport IOSXRDIR=$BASEDIR/iosxr-ansibleexport ANSIBLE_HOME=$BASEDIR/ansibleexport ANSIBLE_INVENTORY=$IOSXRDIR/remote/ansible_hostsexport ANSIBLE_LIBRARY=$IOSXRDIR/remote/libraryexport ANSIBLE_CONFIG=$IOSXRDIR/remote/ansible_cfgexport YDK_DIR=$BASEDIR/ydk/ydk-pyexport PYTHONPATH=$YDK_DIRRunning Playbookscd ~/iosxr-ansible/remote/ ansible-playbook samples/iosxr_get_facts.yml ansible-playbook iosxr_cli.yml -e 'cmd=~show interface brief~' Usual playbook would look like#Output from our XR instance#Samples folder contains various playbooks, files started with “show_” using iosxr_cli playbook and passing cmd to XR as parameter. 
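The two playbook invocations above are enough to sanity-check the setup, but while experimenting it helps to see the raw result each module returns. ansible-playbook’s standard verbosity flag does exactly that; a small sketch (same playbooks as above, the XR command string is just an example):

cd ~/iosxr-ansible/remote/
# -v prints the parsed result of every task; use -vvv for SSH/connection-level debugging
ansible-playbook -v samples/iosxr_get_facts.yml
ansible-playbook -v iosxr_cli.yml -e 'cmd="show ipv4 interface brief"'
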
To run playbook as “vagrant” user, playbook should contain string# “become# yes”Feel free to play with any playbook!", "url": "/tutorials/IOSXR-Ansible", "author": "Mike Korshunov", "tags": "vagrant, iosxr, cisco, linux, Ansible, xr toolbox" } , "tutorials-2016-06-06-xr-toolbox-app-development-topology": { "title": "XR Toolbox, Part 3 : App Development Topology ", "content": " App Development Topology Introduction Pre-requisites Understand the topology Bring up the topology Download and Add the XR Vagrant box Launch the nodes Check Reachability Check out Part 2 of the XR toolbox series# Bootstrap XR configuration with Vagrant.IntroductionWithout diving too deep into the IOS-XR architecture, it might be useful to state that applications on IOS-XR may be deployed in two different ways# natively (inside the XR process space) OR as a container (LXC)In this quick start guide we introduce a typical vagrant topology that we intend to use in other quick start guides in the XR Toolbox series. This topology will be used to build and deploy container (LXC) as well as native XR applications and test them on Vagrant IOS-XR.Pre-requisites Meet the pre-requisites specified in the IOS-XR Vagrant Quick Start guide# Pre-requisites. The topology here will require about 5G RAM and 2 cores on the user’s laptop. Clone the following repository# https#//github.com/ios-xr/vagrant-xrdocs, before we start.cd ~/git clone https#//github.com/ios-xr/vagrant-xrdocs.gitcd vagrant-xrdocs/You will notice a few directories. We will utilize the lxc-app-topo-bootstrap directory in this tutorial.AKSHSHAR-M-K0DS#vagrant-xrdocs akshshar$ pwd/Users/akshshar/vagrant-xrdocsAKSHSHAR-M-K0DS#vagrant-xrdocs akshshar$ ls lxc-app-topo-bootstrap/Vagrantfile\tconfigs\t\tscriptsAKSHSHAR-M-K0DS#vagrant-xrdocs akshshar$ Understand the topologyFor this tutorial, we’ll use a two-node topology# An XR vagrant instance connected to Linux instance (devbox). For illustrative purposes, we use Ubuntu as our devbox OS#The Vagrantfile to bring up this topology is already in your cloned directory#vagrant-xrdocs/lxc-app-topo-bootstrap/VagrantfileVagrant.configure(2) do |config| config.vm.define ~rtr~ do |node| node.vm.box = ~IOS-XRv~ # gig0/0/0 connected to ~link1~ # auto_config is not supported for XR, set to false node.vm.network #private_network, virtualbox__intnet# ~link1~, auto_config# false #Source a config file and apply it to XR node.vm.provision ~file~, source# ~configs/rtr_config~, destination# ~/home/vagrant/rtr_config~ node.vm.provision ~shell~ do |s| s.path = ~scripts/apply_config.sh~ s.args = [~/home/vagrant/rtr_config~] end end config.vm.define ~devbox~ do |node| node.vm.box = ~ubuntu/trusty64~ # eth1 connected to link1 # auto_config is supported for an ubuntu instance node.vm.network #private_network, virtualbox__intnet# ~link1~, ip# ~11.1.1.20~ endendNotice the #Source a config file and apply it to XR section of the Vagrantfile? This is derived from the Bootstrap XR configuration with Vagrant tutorial. Check it out if you want to know more about how shell provisioning with XR worksThe configuration we wish to apply to XR on boot is pretty simple. You can find it in the lxc-app-topo-bootstrap/configs directory.We want to configure the XR interface# GigabitEthernet0/0/0/0 with the ip-address# 11.1.1.10AKSHSHAR-M-K0DS#vagrant-xrdocs akshshar$ cat lxc-app-topo-bootstrap/configs/rtr_config !! 
XR configuration!interface GigabitEthernet0/0/0/0 ip address 11.1.1.10/24 no shutdown!endAKSHSHAR-M-K0DS#vagrant-xrdocs akshshar$ Take a look at the Vagrantfile above, again. We use the Vagrant auto_config capabilities to make sure “eth1” interface of the Ubuntu VM (called devbox) is configured in the same subnet (11.1.1.20) as XR gig0/0/0/0.Bring up the topologyDownload and Add the XR Vagrant box IOS-XR Vagrant is currently in Private Beta To download the box, you will need an API-KEY and a CCO-ID To get the API-KEY and a CCO-ID, browse to the following link and follow the steps# Steps to Generate API-KEY$ BOXURL=~http#//devhub.cisco.com/artifactory/appdevci-release/XRv64/latest/iosxrv-fullk9-x64.box~$ curl -u your-cco-id#API-KEY $BOXURL --output ~/iosxrv-fullk9-x64.box$ vagrant box add --name IOS-XRv ~/iosxrv-fullk9-x64.boxOf course, you should replace your-cco-id with your actual Cisco.com ID and API-KEY with the key you generated and copied using the above link.The vagrant box add command will take around 10-15 mins as it downloads the box for you.Once it completes, you should be able to see the box added as “IOS-XRv” in your local vagrant box list#AKSHSHAR-M-K0DS#~ akshshar$ vagrant box listIOS-XRv (virtualbox, 0)AKSHSHAR-M-K0DS#~ akshshar$ Launch the nodesMake sure you’re in the lxc-app-topo-bootstrap/ directory and issue a vagrant upAKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$pwd/Users/akshshar/vagrant-xrdocs/lxc-app-topo-bootstrapAKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ vagrant up Bringing machine 'rtr' up with 'virtualbox' provider...Bringing machine 'devbox' up with 'virtualbox' provider...==> rtr# Importing base box 'IOS-XRv'...==> rtr# Matching MAC address for NAT networking...==> rtr# Setting the name of the VM# lxc-app-topo-bootstrap_rtr_1465208784531_75603==> rtr# Clearing any previously set network interfaces...==> rtr# Preparing network interfaces based on configuration... rtr# Adapter 1# nat rtr# Adapter 2# intnet==> rtr# Forwarding ports... rtr# 57722 (guest) => 2222 (host) (adapter 1) rtr# 22 (guest) => 2223 (host) (adapter 1)==> rtr# Running 'pre-boot' VM customizations...==> rtr# Booting VM...==> rtr# Waiting for machine to boot. This may take a few minutes... rtr# SSH address# 127.0.0.1#2222 rtr# SSH username# vagrant rtr# SSH auth method# private key rtr# Warning# Remote connection disconnect. Retrying... rtr# Warning# Remote connection disconnect. Retrying... 
Once it completes, you should be able to see both the VMs running by using the vagrant status command inside the lxc-app-topo-bootstrap/ directory#Check ReachabilityTo get into the Ubuntu “devbox”, issue a vagrant ssh devbox#AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ vagrant ssh devboxWelcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-87-generic x86_64) * Documentation# https#//help.ubuntu.com/ System information as of Mon Jun 6 11#20#37 UTC 2016 System load# 0.0 Processes# 74 Usage of /# 3.5% of 39.34GB Users logged in# 0 Memory usage# 25% IP address for eth0# 10.0.2.15 Swap usage# 0% IP address for eth1# 11.1.1.20 Graph this data and manage this system at# https#//landscape.canonical.com/ Get cloud support with Ubuntu Advantage Cloud Guest# http#//www.ubuntu.com/business/services/cloud0 packages can be updated.0 updates are security updates.vagrant@vagrant-ubuntu-trusty-64#~$ From “devbox”, you should be able to ping the XR Gig0/0/0/0 interface#vagrant@vagrant-ubuntu-trusty-64#~$ ping 11.1.1.10 -c 2PING 11.1.1.10 (11.1.1.10) 56(84) bytes of data.64 bytes from 11.1.1.10# icmp_seq=1 ttl=255 time=1.56 ms64 bytes from 11.1.1.10# icmp_seq=2 ttl=255 time=1.44 ms--- 11.1.1.10 ping statistics ---2 packets transmitted, 2 received, 0% packet loss, time 1003msrtt min/avg/max/mdev = 1.447/1.504/1.562/0.069 msvagrant@vagrant-ubuntu-trusty-64#~$ To get into XR linux shell, issue vagrant ssh rtrAKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ vagrant ssh rtrLast login# Mon Jun 6 11#20#58 2016 from 10.0.2.2xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ifconfig Gi0_0_0_0Gi0_0_0_0 Link encap#Ethernet HWaddr 08#00#27#46#1f#b2 inet addr#11.1.1.10 Mask#255.255.255.0 inet6 addr# fe80##a00#27ff#fe46#1fb2/64 Scope#Link UP RUNNING NOARP MULTICAST MTU#1514 Metric#1 RX packets#0 errors#0 dropped#0 overruns#0 frame#0 TX packets#1 errors#0 dropped#3 overruns#0 carrier#1 collisions#0 txqueuelen#1000 RX bytes#0 (0.0 B) TX bytes#42 (42.0 B)xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ To get into XR CLI, remember that XR SSH runs on port 22 of the guest IOS-XR instance.First, determine the port to which the XR SSH port (port 22) is forwarded by vagrant by using the vagrant port command#AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ vagrant port rtr The forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution. 22 (guest) => 2223 (host) 57722 (guest) => 2222 (host)As shown above, port 22 of XR is fowarded to port 2223#Use port 2223 to now ssh into XR CLIThe password is “vagrant”AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ ssh -p 2223 vagrant@localhostThe authenticity of host '[localhost]#2223 ([127.0.0.1]#2223)' can't be established.RSA key fingerprint is 7f#1a#56#e1#3c#7f#cf#a4#ee#ac#20#3a#e6#cf#ad#f5.Are you sure you want to continue connecting (yes/no)? yesWarning# Permanently added '[localhost]#2223' (RSA) to the list of known hosts.vagrant@localhost's password# RP/0/RP0/CPU0#ios#RP/0/RP0/CPU0#ios#show ipv4 interface gigabitEthernet 0/0/0/0 brief Tue Jun 7 03#23#31.324 UTCInterface IP-Address Status ProtocolGigabitEthernet0/0/0/0 11.1.1.10 Up Up RP/0/RP0/CPU0#ios#You’re all set! You can now use this topology to build applications (native-WRL7 or LXC containers) on the “devbox” and test them out on the IOS-XR vagrant node. 
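If you ever want a quick, scripted sanity check of this topology before starting a new tutorial, the following two commands (run from the lxc-app-topo-bootstrap directory; the addresses are the ones configured above) cover the essentials:

vagrant status                                # both rtr and devbox should show "running"
vagrant ssh devbox -c "ping -c 2 11.1.1.10"   # devbox eth1 --> XR Gig0/0/0/0 over link1
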
We will explore these scenarios in the next set of tutorials in the XR Toolbox series.Head over to Part 4 of the XR Toolbox series where we create and bring up a container (LXC) app on IOS-XR —> Bring your own Container (LXC) App.", "url": "/tutorials/2016-06-06-xr-toolbox-app-development-topology", "author": "Akshat Sharma", "tags": "vagrant, iosxr, cisco, linux, xr toolbox, apphosting, topology" } , "tutorials-2016-06-16-xr-toolbox-part-4-bring-your-own-container-lxc-app": { "title": "XR toolbox, Part 4: Bring your own Container (LXC) App", "content": " Launching a Container App Introduction Pre-requisites Create a container rootfs Install lxc tools on devbox Launch an LXC container on the devbox Create/Install your app Change SSH port inside your container Shutdown and package your container Create LXC SPEC XML File Transfer rootfs and XML file to XR Untar rootfs under /misc/app_host/ Use virsh to launch container Test your app! Set the src-hint for Application traffic See if things work! Check out Part 3 of the XR toolbox series# App Development Topology.IntroductionIf you haven’t checked out the earlier parts to the XR toolbox Series, then you can do so here# XR Toolbox SeriesThe purpose of this series is simple. Get users started with an IOS-XR setup on their laptop and incrementally enable them to try out the application-hosting infrastructure on IOS-XR.In this part, we explore how a user can build and deploy their own container (LXC) based applications on IOS-XR.Pre-requisitesBefore we begin, let’s make sure you’ve set up your development environment.If you haven’t checked it out, go through the “App-Development Topology” tutorial here# XR Toolbox, Part 3# App Development TopologyFollow the instructions to get your topology up and running as shown below#If you’ve reached the end of the above tutorial, you should be able to issue a vagrant status in the vagrant-xrdocs/lxc-app-topo-bootstrap directory to see a rtr (IOS-XR) and a devbox (Ubuntu/trusty) instance running.AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ pwd/Users/akshshar/vagrant-xrdocs/lxc-app-topo-bootstrap AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ vagrant status Current machine states#rtr running (virtualbox)devbox running (virtualbox)This environment represents multiple VMs. The VMs are all listedabove with their current state. For more information about a specificVM, run `vagrant status NAME`.AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ All good? Perfect. Let’s start building our container application tar ball. The figure on the right illustrates the basic steps to undertake to launch an lxc container on IOS-XR 6.0+# We will build the container rootfs tar ball on our devbox (see topology above) The rootfs tar ball will then be transferred to IOS-XR The rootfs will be launched on the underlying hypervisor using the virsh command in XR shell. 
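Compressed into shell form, the whole workflow we are about to walk through looks roughly like this (the names, addresses and ports match the ones used later in this tutorial; treat it as a roadmap rather than something to paste in right now):

# On devbox: create an ubuntu container and package its rootfs
sudo lxc-create -t ubuntu --name xr-lxc-app
cd /var/lib/lxc/xr-lxc-app/rootfs && sudo tar -czf /home/vagrant/xr-lxc-app-rootfs.tar.gz *

# Copy the rootfs tar ball and the libvirt XML spec to XR (XR linux SSH listens on 57722)
scp -P 57722 /home/vagrant/xr-lxc-app-rootfs.tar.gz vagrant@11.1.1.10:/misc/app_host/scratch/
scp -P 57722 /home/vagrant/xr-lxc-app.xml vagrant@11.1.1.10:/misc/app_host/scratch/

# On XR (linux shell): untar under /misc/app_host and launch the container with virsh
sudo mkdir -p /misc/app_host/xr-lxc-app
cd /misc/app_host/xr-lxc-app && sudo tar -zxf ../scratch/xr-lxc-app-rootfs.tar.gz
sudo -i virsh create /misc/app_host/scratch/xr-lxc-app.xml

Each of these steps is covered in detail below.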
Create a container rootfs Using a custom rootfs tar ballThe technique presented here focuses on the creation of a container from scratch (using a base ubuntu template) followed by the installation of an application for first-time users.A user can easily use their own pre-built rootfs tar ball and ignore this section altogether.The only point to remember is that if you expect to use SSH access into the container after deployment to XR, then change the default SSH port in /etc/ssh/sshd_config in your rootfs to something other than 22/57722 (or any other port you expect XR to use based on your config).This is showcased in the following section below#Change SSH port inside your containerTo launch an LXC container we need two things# A container rootfs tar ball An XML file to launch the container using virsh/libvirtTo create them, we’ll hop onto our devbox (Ubuntu/trusty) VM in the topology and install lxc-tools. lxc-tools will be used to create a container rootfs tar ball.Install lxc tools on devboxSSH into the devbox#AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ vagrant ssh devboxWelcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-87-generic x86_64) * Documentation# https#//help.ubuntu.com/ System information as of Thu Jun 16 14#27#47 UTC 2016 System load# 0.0 Processes# 74 Usage of /# 3.5% of 39.34GB Users logged in# 0 Memory usage# 25% IP address for eth0# 10.0.2.15 Swap usage# 0% IP address for eth1# 11.1.1.20 Graph this data and manage this system at# https#//landscape.canonical.com/ Get cloud support with Ubuntu Advantage Cloud Guest# http#//www.ubuntu.com/business/services/cloud0 packages can be updated.0 updates are security updates.Last login# Thu Jun 16 14#27#47 2016 from 10.0.2.2vagrant@vagrant-ubuntu-trusty-64#~$ Install lxc tools inside the devboxsudo apt-get updatesudo apt-get -y install lxcCheck that lxc was properly installed#vagrant@vagrant-ubuntu-trusty-64#~$ sudo lxc-start --version1.0.8vagrant@vagrant-ubuntu-trusty-64#~$ Launch an LXC container on the devboxUsing the standard ubuntu template available with lxc, let’s create and start the ubuntu container inside devbox#vagrant@vagrant-ubuntu-trusty-64#~$ sudo lxc-create -t ubuntu --name xr-lxc-appChecking cache download in /var/cache/lxc/trusty/rootfs-amd64 ... 
Installing packages in template# ssh,vim,language-pack-enDownloading ubuntu trusty minimal ...I# Retrieving Release I# Retrieving Release.gpg I# Checking Release signatureI# Valid Release signature (key id 790BC7277767219C42C86F933B4FE6ACC0B21F32)I# Retrieving Packages ------------------------------ snip output ------------------------------------This process will take some time as the ubuntu rootfs template is downloaded for you by the lxc tools.Once the container template is installed successfully, it should show up in the lxc-ls output#vagrant@vagrant-ubuntu-trusty-64#~$ sudo lxc-ls --fancy NAME STATE IPV4 IPV6 AUTOSTART ------------------------------------------xr-lxc-app STOPPED - - NO vagrant@vagrant-ubuntu-trusty-64#~$ Now let’s start the container#vagrant@vagrant-ubuntu-trusty-64#~$ sudo lxc-start --name xr-lxc-app <4>init# plymouth-upstart-bridge main process (5) terminated with status 1<4>init# plymouth-upstart-bridge main process ended, respawning<4>>init# hwclock main process (7) terminated with status 77<4>>init# plymouth-upstart-bridge main process (15) terminated with status 1<4>>init# plymouth-upstart-bridge main process ended, respawning------------------------------ snip output ------------------------------------You will be taken to the login prompt.The Default credentials are#Username# ubuntuPassword# ubuntuUbuntu 14.04.4 LTS xr-lxc-app consolexr-lxc-app login# <4>init# setvtrgb main process (428) terminated with status 1<4>init# plymouth-upstart-bridge main process (23) killed by TERM signalUbuntu 14.04.4 LTS xr-lxc-app consolexr-lxc-app login# ubuntuPassword# Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-87-generic x86_64) * Documentation# https#//help.ubuntu.com/The programs included with the Ubuntu system are free software;the exact distribution terms for each program are described in theindividual files in /usr/share/doc/*/copyright.Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted byapplicable law.ubuntu@xr-lxc-app#~$ Perfect! You’ve launched an ubuntu container on your devbox.Create/Install your appIn this example we’ll install iperf as a sample application.You may choose to skip this step if you have another app in mind.sudo password# ubuntuubuntu@xr-lxc-app#~$ sudo apt-get -y install iperf[sudo] password for ubuntu# Reading package lists... DoneBuilding dependency tree Reading state information... DoneThe following NEW packages will be installed# iperf0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.Need to get 56.3 kB of archives.After this operation, 174 kB of additional disk space will be used.Get#1 http#//archive.ubuntu.com/ubuntu/ trusty/universe iperf amd64 2.0.5-3 [56.3 kB]Fetched 56.3 kB in 2s (23.5 kB/s)Selecting previously unselected package iperf.(Reading database ... 14629 files and directories currently installed.)Preparing to unpack .../iperf_2.0.5-3_amd64.deb ...Unpacking iperf (2.0.5-3) ...Setting up iperf (2.0.5-3) ...ubuntu@xr-lxc-app#~$ ubuntu@xr-lxc-app#~$ ubuntu@xr-lxc-app#~$ ubuntu@xr-lxc-app#~$ iperf -viperf version 2.0.5 (08 Jul 2010) pthreadsubuntu@xr-lxc-app#~$ Change SSH port inside your containerWhen we deploy the container to IOS-XR, we will share XR’s network namespace. Since IOS-XR already uses up port 22 and port 57722 for its own purposes, we need to pick some other port for our container.Our recommendation? 
- Pick some port in the 58xxx range.Let’s change the SSH port to 58822#ubuntu@xr-lxc-app#~$ sudo sed -i s/Port\\ 22/Port\\ 58822/ /etc/ssh/sshd_config ubuntu@xr-lxc-app#~$ Check that your port was updated successfully#ubuntu@xr-lxc-app#~$ cat /etc/ssh/sshd_config | grep PortPort 58822ubuntu@xr-lxc-app#~$ We’re good!Shutdown and package your containerIssue a shutdown to escapeubuntu@xr-lxc-app#~$ sudo shutdown -h nowubuntu@xr-lxc-app#~$ Broadcast message from ubuntu@xr-lxc-app\t(/dev/lxc/console) at 19#37 ...The system is going down for halt NOW!------------------------------ snip output ------------------------------------mount# cannot mount block device /dev/sda1 read-only * Will now haltvagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ We’re back on our devbox.Now hop over to the directory /var/lib/lxc/xr-lxc-app and package the rootfs into a tar ball.In the end we transfer the tar ball to the home directory (~/ or /home/vagrant)You will need to be root for this operationvagrant@vagrant-ubuntu-trusty-64#~$ sudo -s root@vagrant-ubuntu-trusty-64#~# root@vagrant-ubuntu-trusty-64#~# whoami rootroot@vagrant-ubuntu-trusty-64#~# cd /var/lib/lxc/xr-lxc-app/ root@vagrant-ubuntu-trusty-64#/var/lib/lxc/xr-lxc-app# lsconfig fstab rootfsroot@vagrant-ubuntu-trusty-64#/var/lib/lxc/xr-lxc-app# cd rootfs root@vagrant-ubuntu-trusty-64#/var/lib/lxc/xr-lxc-app/rootfs# root@vagrant-ubuntu-trusty-64#/var/lib/lxc/xr-lxc-app/rootfs# tar -czf xr-lxc-app-rootfs.tar.gz * tar# dev/log# socket ignoredroot@vagrant-ubuntu-trusty-64#/var/lib/lxc/xr-lxc-app/rootfs#root@vagrant-ubuntu-trusty-64#/var/lib/lxc/xr-lxc-app/rootfs#mv *.tar.gz /home/vagrantroot@vagrant-ubuntu-trusty-64#/var/lib/lxc/xr-lxc-app/rootfs#ls -l /home/vagranttotal 119984-rw-r--r-- 1 root root 122863332 Jun 16 19#41 xr-lxc-app-rootfs.tar.gzCreate LXC SPEC XML FileWe need to create an XML file that will define different parameters (cpu, mem, rootfs location etc.) for the container launch on IOS-XR (which uses libvirt).On the devbox, use your favorite editor (vi, nano, pico etc.) to create a new file called xr-lxc-app.xml under /home/vagrant of the devbox with the following content#<domain type='lxc' xmlns#lxc='http#//libvirt.org/schemas/domain/lxc/1.0' ><name>xr-lxc-app</name><memory>327680</memory><os><type>exe</type><init>/sbin/init</init></os><lxc#namespace><sharenet type='netns' value='global-vrf'/></lxc#namespace><vcpu>1</vcpu><clock offset='utc'/><on_poweroff>destroy</on_poweroff><on_reboot>restart</on_reboot><on_crash>destroy</on_crash><devices><emulator>/usr/lib64/libvirt/libvirt_lxc</emulator><filesystem type='mount'><source dir='/misc/app_host/xr-lxc-app/'/><target dir='/'/></filesystem><console type='pty'/></devices></domain> A couple of configuration knobs seem interesting in the above XML file# The netns (network namespace) setting# `<sharenet type='netns' value='global-vrf'/>`; In IOS-XR the ‘global-vrf’ network namespace houses all the XR Gig/Mgmt interfaces that are in the global/default VRF. The sharenet setting above makes sure that the container on launch will also have access to all of XR’s interfaces natively The rootfs mount volume# `<source dir='/misc/app_host/xr-lxc-app/'/>`; /misc/app_host/ in IOS-XR is a special mount volume that is designed to provide nearly 3.9G of Disk space on IOS-XRv and varying amounts on other platforms (NCS5508, ASR9k) etc. This mount volume may be used to host custom container rootfs and other large files without using up XR’s disk space. 
In this case we expect the rootfs to be untarred in the /misc/app_host/xr-lxc-app/ directory Your LXC app is now ready to be deployed! You should have the following two components in the home directory of the devbox#root@vagrant-ubuntu-trusty-64#~# pwd/home/vagrantroot@vagrant-ubuntu-trusty-64#~# ls -ltotal 119988-rw-r--r-- 1 root root 122863332 Jun 16 19#41 xr-lxc-app-rootfs.tar.gz-rw-r--r-- 1 root root 590 Jun 16 23#29 xr-lxc-app.xmlroot@vagrant-ubuntu-trusty-64#~# Transfer rootfs and XML file to XRWe can either use the XR Gig or Mgmt interface to transfer the files.IOS-XR runs openssh in the linux environment on port 57722. We need to transfer the files to the /misc/app_host volume on IOS-XR.However, /misc/app_host is owned by root and root access over SSH is not allowed, for obvious security reasons. Hence, to enable the transfer of custom files to IOS-XR, we provide a /misc/app_host/scratch directory which is owned by the app_host group. Any user transferring files over SSH to this directory must be part of the app_host group to have access.The user vagrant is already part of the app_host group.Transfer using the Gig interface#The password for the vagrant user is vagrantscp -P 57722 /home/vagrant/xr-lxc-app-rootfs.tar.gz vagrant@11.1.1.10#/misc/app_host/scratch/scp -P 57722 /home/vagrant/xr-lxc-app.xml vagrant@11.1.1.10#/misc/app_host/scratch/Where 11.1.1.10 is the directly connected Gig0/0/0/0 interface of IOS-XR instance (this config was explained in the XR Toolbox, Part 3# App Development Topology tutorial).But this process might be slow since Gig interfaces in the Vagrant IOS-XR image are rate-limited.Transfer using the Mgmt interfaceVagrant forwards the port 57722 to some host port for IOS-XR over the management port. In Virtualbox, the IP address of the host (your laptop) is always 10.0.2.2 for the NAT’ed port.So determine the forwarded port for port 57722 for XR on your laptop shell (in a separate window)#AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ vagrant port rtrThe forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution. 22 (guest) => 2223 (host) 57722 (guest) => 2222 (host)AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ Now use port 2222 to transfer the files over the management port using the host IP = 10.0.2.2 from your devboxvagrant@vagrant-ubuntu-trusty-64#~$ scp -P 2222 /home/vagrant/*.* vagrant@10.0.2.2#/misc/app_host/scratchThe authenticity of host '[10.0.2.2]#2222 ([10.0.2.2]#2222)' can't be established.ECDSA key fingerprint is db#25#e2#27#49#2a#7b#27#e1#76#a6#7a#e4#70#f5#f7.Are you sure you want to continue connecting (yes/no)? yesWarning# Permanently added '[10.0.2.2]#2222' (ECDSA) to the list of known hosts.vagrant@10.0.2.2's password# xr-lxc-app-rootfs.tar.gz 100% 117MB 16.7MB/s 00#07 xr-lxc-app.xml 100% 590 0.6KB/s 00#00 vagrant@vagrant-ubuntu-trusty-64#~$ Untar rootfs under /misc/app_host/Let’s hop onto the IOS-XR instance.AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ vagrant ssh rtrLast login# Thu Jun 16 19#45#33 2016 from 10.0.2.2xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ Create a directory xr-lxc-app/(remember the source dir in the XML file?) 
under /misc/app_host#You need to be sudo to perform the next set of tasks.sudo mkdir /misc/app_host/xr-lxc-app/Now untar the rootfs tar-ball that we transferred to the /misc/app_host/scratch directory into the newly created /misc/app_host/xr-lxc-app/ directory.xr-vm_node0_RP0_CPU0#~$cd /misc/app_host/xr-lxc-app/ xr-vm_node0_RP0_CPU0#/misc/app_host/xr-lxc-app$ xr-vm_node0_RP0_CPU0#/misc/app_host/xr-lxc-app$sudo tar -zxf ../scratch/xr-lxc-app-rootfs.tar.gztar# dev/mpu401data# Cannot mknod# Operation not permittedtar# dev/rmidi3# Cannot mknod# Operation not permittedtar# dev/rmidi2# Cannot mknod# Operation not permittedtar# dev/smpte1# Cannot mknod# Operation not permittedtar# dev/audio1# Cannot mknod# Operation not permittedtar# dev/smpte0# Cannot mknod# Operation not permittedtar# dev/midi0# Cannot mknod# Operation not permittedtar# dev/mixer1# Cannot mknod# Operation not permittedtar# dev/smpte3# Cannot mknod# Operation not permitted--------------------------- snip output --------------------------Ignore the “Operation not permitted” messages when you untar. These are harmless.Use virsh to launch containerNow we use the XML file that we transferred to /misc/app_host/scratch to launch our container.libvirtd is the daemon running on IOS-XR to help launch LXC containers. The client for libvirtd (virsh) is made available in the XR linux shell to interact with the libvirtd daemon.To perform the virsh client commands, you will need to be root. In order to properly source the right environment variables for the virsh commands to connect to the libvirtd daemon, use the “-i” flag with “sudo” when becoming root.Become root#xr-vm_node0_RP0_CPU0#~$ sudo -ixr-vm_node0_RP0_CPU0#~$The “vagrant” user is already a part of the sudoers group, so you won’t be asked for the sudo password. 
But when you create your own users, expect the password prompt to show up.To list the current running containers#xr-vm_node0_RP0_CPU0#~$ virsh list Id Name State---------------------------------------------------- 4922 sysadmin running 12010 default-sdr--1 runningxr-vm_node0_RP0_CPU0#~$ Now launch the container using virsh create and the XML file we transferred earlier#xr-vm_node0_RP0_CPU0#~$ virsh create /misc/app_host/scratch/xr-lxc-app.xml Domain xr-lxc-app created from /misc/app_host/scratch/xr-lxc-app.xmlxr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ virsh list Id Name State---------------------------------------------------- 4922 sysadmin running 7315 xr-lxc-app running 12010 default-sdr--1 runningxr-vm_node0_RP0_CPU0#~$ To get into the container, you have two options#Our credentials for the container were#Username# ubuntuPassword# ubuntu Use virsh console# xr-vm_node0_RP0_CPU0#~$ virsh console xr-lxc-appConnected to domain xr-lxc-appEscape character is ^]init# Unable to create device# /dev/kmsg* Stopping Send an event to indicate plymouth is up [ OK ]* Starting Mount filesystems on boot [ OK ]* Starting Signal sysvinit that the rootfs is mounted [ OK ]* Starting Fix-up sensitive /proc filesystem entries [ OK ]-------------------------------- snip output --------------------------------- Ubuntu 14.04.4 LTS xr-lxc-app tty1xr-lxc-app login# ubuntuPassword# Last login# Thu Jun 16 19#23#10 UTC 2016 on lxc/consoleWelcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.14.23-WR7.0.0.2_standard x86_64)* Documentation# https#//help.ubuntu.com/ubuntu@xr-lxc-app#~$ ubuntu@xr-lxc-app#~$ ubuntu@xr-lxc-app#~$ To get out of the container console, issue Ctrl+] Use SSH to get into the container# We set the SSH port to 58822 earlier, we can use any of XR’s interface addresses to log in# xr-vm_node0_RP0_CPU0#~$ ssh -p 58822 ubuntu@11.1.1.10Warning# Permanently added '[11.1.1.10]#58822' (ECDSA) to the list of known hosts.ubuntu@11.1.1.10's password# Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.14.23-WR7.0.0.2_standard x86_64)* Documentation# https#//help.ubuntu.com/Last login# Fri Jun 17 16#42#13 2016ubuntu@xr-lxc-app#~$ If you’d like to be able to access the container directly from your laptop, then make sure youforward the intended port (in this case 58822) to your laptop (any port of your choice), in theVagrantfile# node.vm.network ~forwarded_port~, guest# 58822, host# 58822 With the above setting in the Vagrantfile, you can ssh to the container directly from your laptop using# ssh -p 58822 vagrant@localhost Perfect! Our container is up and running!Test your app!Now that we have our container up and running, let’s see how we run our app (iperf in our case).Think of the LXC container as your own linux server on the router. Because we share the network namespace between the LXC and XR, all of XR's interfaces (Gig, Mgmt etc.) are available to bind to and run your applications. 
We can see this by issuing an ifconfig inside the running container#xr-vm_node0_RP0_CPU0#~$ssh -p 58822 ubuntu@11.1.1.10 Warning# Permanently added '[11.1.1.10]#58822' (ECDSA) to the list of known hosts.ubuntu@11.1.1.10's password# Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.14.23-WR7.0.0.2_standard x86_64) * Documentation# https#//help.ubuntu.com/Last login# Fri Jun 17 16#42#13 2016ubuntu@xr-lxc-app#~$ ubuntu@xr-lxc-app#~$ ubuntu@xr-lxc-app#~$ ubuntu@xr-lxc-app#~$ ifconfig Gi0_0_0_0 Link encap#Ethernet HWaddr 08#00#27#17#f9#a8 inet addr#11.1.1.10 Mask#255.255.255.0 inet6 addr# fe80##a00#27ff#fe17#f9a8/64 Scope#Link UP RUNNING NOARP MULTICAST MTU#1514 Metric#1 RX packets#0 errors#0 dropped#0 overruns#0 frame#0 TX packets#1 errors#0 dropped#3 overruns#0 carrier#1 collisions#0 txqueuelen#1000 RX bytes#0 (0.0 B) TX bytes#42 (42.0 B)Mg0_RP0_CPU0_0 Link encap#Ethernet HWaddr 08#00#27#13#ad#eb inet addr#10.0.2.15 Mask#255.255.255.0 inet6 addr# fe80##a00#27ff#fe13#adeb/64 Scope#Link UP RUNNING NOARP MULTICAST MTU#1514 Metric#1 RX packets#89658 errors#0 dropped#0 overruns#0 frame#0 TX packets#34130 errors#0 dropped#0 overruns#0 carrier#1 collisions#0 txqueuelen#1000 RX bytes#127933763 (127.9 MB) TX bytes#2135907 (2.1 MB)------------------------------- snip output -----------------------------------Set the src-hint for Application trafficBy default, your XR Vagrant box is set up to talk to the internet using a default route through your management port.If you want the router to use XR’s routing table and talk to other nodes in the topology, then you need to set the “tpa address” in XR’s configuration. This becomes the “src-hint” for all linux application traffic. The reason we use something like “loopback 0” is to make sure that the IP for any originating traffic for applications on the router is a reachable IP address across your topology.AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ vagrant port rtr | grep 22 22 (guest) => 2223 (host) 57722 (guest) => 2222 (host)AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ ssh -p 2223 vagrant@localhost vagrant@localhost's password# RP/0/RP0/CPU0#ios#RP/0/RP0/CPU0#ios#conf tFri Jun 17 17#34#45.707 UTCRP/0/RP0/CPU0#ios(config)#int loopback 0RP/0/RP0/CPU0#ios(config-if)#ip address 1.1.1.1/32RP/0/RP0/CPU0#ios(config-if)#exit RP/0/RP0/CPU0#ios(config)#tpa address-family ipv4 update-source loopback 0RP/0/RP0/CPU0#ios(config)#commitFri Jun 17 17#35#19.815 UTCRP/0/RP0/CPU0#ios(config)#RP/0/RP0/CPU0#ios(config)#exitRP/0/RP0/CPU0#ios#Let’s say we’ve set up the TPA address as shown above, you should see the following route in XR’s linux shell#RP/0/RP0/CPU0#ios#bashFri Jun 17 17#39#37.771 UTC[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$ip routedefault dev fwdintf scope link src 1.1.1.1 10.0.2.0/24 dev Mg0_RP0_CPU0_0 proto kernel scope link src 10.0.2.15 [xr-vm_node0_RP0_CPU0#~]$So all you’ve really done using the tpa address-family... config is to set src address for all application traffic to XR’s loopback0 address.The advantage of this approach is that when you use larger topologies that may include routing protocols like OSPF,BGP or even static routes, all you have to do is make loopback0 reachable and the application will be able to communicate across the entire topology. 
Also, this significantly reduces the routing table size in the linux environment as you can see in the output above.See if things work!We’re going to use an iperf-server inside our container on XR and an iperf-client running on devbox. You could reverse the client-server setup if you want.Start the iperf server inside the Container on XR#xr-vm_node0_RP0_CPU0#~$ ssh -p 58822 ubuntu@11.1.1.10 ubuntu@11.1.1.10's password# Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.14.23-WR7.0.0.2_standard x86_64) * Documentation# https#//help.ubuntu.com/Last login# Fri Jun 17 18#09#50 2016 from 11.1.1.10ubuntu@xr-lxc-app#~$ ubuntu@xr-lxc-app#~$ iperf -s -u ------------------------------------------------------------Server listening on UDP port 5001Receiving 1470 byte datagramsUDP buffer size# 64.0 MByte (default)------------------------------------------------------------Keep the iperf server (started above) running, as you proceed to initiate the iperf client on the devbox.Let’s make sure XR’s loopback0 is reachable from the devbox (since we’re not running routing protocols in this topology, this isn’t automatic)#AKSHSHAR-M-K0DS#lxc-app-topo-bootstrap akshshar$ vagrant ssh devboxWelcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-87-generic x86_64) * Documentation# https#//help.ubuntu.com/---------------------------- snip output -------------------------------vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ sudo ip route add 1.1.1.1/32 via 11.1.1.10 vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ ping 1.1.1.1PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.64 bytes from 1.1.1.1# icmp_seq=1 ttl=255 time=6.53 ms64 bytes from 1.1.1.1# icmp_seq=2 ttl=255 time=1.77 msInstall iperf on devbox and start the iperf client there (to point to XR loopback=1.1.1.1)#vagrant@vagrant-ubuntu-trusty-64#~$ sudo apt-get install iperfReading package lists... DoneBuilding dependency tree Reading state information... DoneThe following NEW packages will be installed# iperf---------------------------- snip output -------------------------------vagrant@vagrant-ubuntu-trusty-64#~$ iperf -u -c 1.1.1.1 ------------------------------------------------------------Client connecting to 1.1.1.1, UDP port 5001Sending 1470 byte datagramsUDP buffer size# 208 KByte (default)------------------------------------------------------------[ 3] local 11.1.1.20 port 54284 connected with 1.1.1.1 port 5001[ ID] Interval Transfer Bandwidth[ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec[ 3] Sent 893 datagrams[ 3] Server Report#[ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec 0.275 ms 0/ 893 (0%)vagrant@vagrant-ubuntu-trusty-64#~$ There you have it! iperf running inside an Ubuntu Container on IOS-XR. Too many steps to look up? In our next tutorial, we look at automating all of the steps needed to bring up a container using an Ansible Playbook# IOS-XR# Ansible based LXC deployment", "url": "/tutorials/2016-06-16-xr-toolbox-part-4-bring-your-own-container-lxc-app/", "author": "Akshat Sharma", "tags": "vagrant, iosxr, cisco, linux, lxc, containers, xr toolbox" } , "tutorials-2016-06-08-ios-xr-ansible-container-deployment": { "title": "IOS-XR: Ansible based LXC deployment", "content": " Ansible LXC deployment Introduction Pre-requisite Boot up the environment Configure Passwordless Access into XR Linux shell Create LXC Tar ball in devbox Create XML file in devbox Ansible Playbook Run playbook to deploy LXC Slow playbook run? XR Gig interfaces are rate limited! 
Introduction The goal of this tutorial is to deploy a container (LXC) on XR using an Ansible playbook.In this tutorial we will use techniques from two other tutorials# IOS-XR# Ansible and Vagrant, to enable connectivity between the machines and pre-install Ansible on the devbox instance. XR Toolbox, Part 2# Bootstrap XR configuration with Vagrant, for the new shell/bash based automation techniques. The figure below illustrates the basic steps required to launch an lxc container on IOS-XR 6.0+#If you’ve gone through the tutorial# XR toolbox, Part 4# Bring your own Container (LXC) App, you will have a fair idea of how to accomplish the manual steps illustrated above. In this tutorial, we want to automate all of these steps using an Ansible playbook.Pre-requisite Vagrant box added for IOS-XRv# If you don’t have it, get it using the steps specified here# XR Toolbox, Part 1# IOS XR Vagrant quick start Clone the following repository before we start# cd ~/git clone https#//github.com/ios-xr/vagrant-xrdocs.gitcd vagrant-xrdocs/ansible-tutorials/app_hosting/Boot up the environment We are ready to start# boot the boxes by issuing the vagrant up command$ vagrant up Bringing machine 'devbox' up with 'virtualbox' provider...Bringing machine 'rtr' up with 'virtualbox' provider...==> devbox# Importing base box 'ubuntu/trusty64'...---------------------------snip output -----------------------Configure Passwordless Access into XR Linux shell Let’s copy the public key from the devbox instance to XR so that Ansible can log in without a password.First, connect to the devbox instance and copy its public key to XR via SCP#vagrant ssh devbox scp -P 57722 /home/vagrant/.ssh/id_rsa.pub vagrant@10.1.1.20#/home/vagrant/id_rsa_ubuntu.pubNow add the copied key to authorized_keys in XR linux#vagrant ssh rtr cat /home/vagrant/id_rsa_ubuntu.pub >> /home/vagrant/.ssh/authorized_keysAnsible can now reach the XR linux shell without a password.Create LXC Tar ball in devbox The user is free to bring their own lxc rootfs tar ball for deployment on IOS-XR; this section is meant to help a user create a rootfs tar ball from scratch. All the steps required to create a container rootfs are already covered in detail in the tutorial# XR toolbox, Part 4# Bring your own Container (LXC) App. Specifically, head over to the following section of the tutorial# XR toolbox, Part 4…/create-a-container-rootfs At the end of that section, you should have your very own rootfs (xr-lxc-app-rootfs.tar.gz), ready for deployment.Copy and keep the rootfs tar ball in the /home/vagrant/ directory of your devbox. The Ansible playbook will expect the tar ball in this directory, so make sure an ls -l for the tar ball in /home/vagrant returns something like#vagrant@vagrant-ubuntu-trusty-64#~$ ls -l /home/vagrant/xr-lxc-app-rootfs.tar.gz-rw-r--r-- 1 root root 101246853 Jun 20 02#34 /home/vagrant/xr-lxc-app-rootfs.tar.gzvagrant@vagrant-ubuntu-trusty-64#~$ Great! Ansible will copy this tar ball to XR for you.Create XML file in devbox To create a container, we need an xml file with the specifications for the container. 
Create the following file in the /home/vagrant directory of your devbox #cat /home/vagrant/xr-lxc-app.xml<domain type='lxc' xmlns#lxc='http#//libvirt.org/schemas/domain/lxc/1.0' > <name>xr-lxc-app</name> <memory>327680</memory> <os> <type>exe</type> <init>/sbin/init</init> </os> <lxc#namespace> <sharenet type='netns' value='global-vrf'/> </lxc#namespace> <vcpu>1</vcpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <devices> <emulator>/usr/lib64/libvirt/libvirt_lxc</emulator> <filesystem type='mount'> <source dir='/misc/app_host/xr-lxc-app/'/> <target dir='/'/> </filesystem> <console type='pty'/> </devices></domain>Ansible PlaybookAnsible playbook contains 7 tasks#cat deploy_container.yml---- hosts# ss-xr tasks# - name# Copy XML file copy# src=/home/vagrant/xr-lxc-app.xml dest=/home/vagrant/xr-lxc-app.xml owner=vagrant force=no - name# Copy rootfs tar ball copy# src=/home/vagrant/xr-lxc-app-rootfs.tar.gz dest=/misc/app_host/scratch/xr-lxc-app-rootfs.tar.gz owner=vagrant force=no - name# Create rootfs directory file# path=/misc/app_host/xr-lxc-app/rootfs state=directory become# yes - command# tar -zxf /misc/app_host/scratch/xr-lxc-app-rootfs.tar.gz -C /misc/app_host/xr-lxc-app/rootfs become# yes register# output ignore_errors# yes - debug# var=output.stdout_lines - name# grep shell# sudo -i virsh list | grep xr-lxc-app args# warn# no register# container_exist ignore_errors# yes - debug# var=output.stdout_lines - name# virsh create shell# sudo -i virsh create /home/vagrant/xr-lxc-app.xml args# warn# no register# output when# ~ container_exist | failed ~ - debug# var=output.stdout_lines - shell# sudo -i virsh list args# warn# no register# output - debug# var=output.stdout_linesTasks overview# Task 1 copies xr-lxc-app.xml to XR ; Task 2 copies the xr-lxc-app-rootfs.tar.gz tar ball to XR; Task 3 creates folder “xr-lxc-app” at XR, if it does not exist; Task 4 unpacks archive with container filesystem (Notice the ignore_errors?- we’re simply avoiding the mknod warnings); Task 5 getting list of container and checking if ‘xr-lxc-app’ in grep output. In case of success variable would be changed and Task 6 would be skipped; Task 6 creates the container itself using the virsh alias in the XR linux shell (issuecommand “type virsh” on XR Linux to check. “sudo -i” is important, to load up Aliases for the root user). Triggered only if container not exist; Task 7 dumps the virsh list output to show that container is up and running.Run playbook to deploy LXCAnsible playbook is ready to use. Issue command in devbox#ansible-playbook deploy_container.yml Slow playbook run? XR Gig interfaces are rate limited! The default ansible setup uses the Gig0/0/0/0 XR interface (connected to eth1 of devbox) to transfer the files over port 57722 (ssh to XR linux). This playbook could be directly used for physical devices as well. But, bear in mind that the IOS-XR Vagrant instance is rate-limited on its Gig interfaces. So the copy process might be quite slow. To speed up the process we could use the Management interface instead. To do this, determine to which port vagrant forwards port 57722 from XR#bash-3.2$ vagrant port rtrThe forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution. 
58822 (guest) => 58822 (host) 22 (guest) => 2223 (host) 57722 (guest) => 2200 (host)bash-3.2$ Based on the above output# Change the ansible host IP address from 10.1.1.20 (Gig0/0/0/0) address to 10.0.2.2 (gateway/laptop address) in /home/vagrant/iosxr-ansible/remote/ansible_hosts on devbox. Change the remote_port from 57722 to 2200 (forwarded port determined above, in your case it may be different) in /home/vagrant/iosxr-ansible/remote/ansible_cfg on devbox Thus by using 10.0.2.2#2200 to run the playbook over the management port we significantly reduce the runtime of the playbook.Verify container is up from XR Linux shell#xr-vm_node0_RP0_CPU0#/misc/app_host/rootfs$ virsh list Id Name State---------------------------------------------------- 4907 sysadmin running 8087 xr-lxc-app running 12057 default-sdr--1 running Container is up and running. It might take some time to be fully up. Give it about 20-30 seconds and you should be able to SSH to it from your laptop# ssh -p 58822 ubuntu@127.0.0.1 Congratulations!", "url": "/tutorials/2016-06-08-ios-xr-ansible-container-deployment/", "author": "Mike Korshunov", "tags": "vagrant, iosxr, cisco" } , "tutorials-2016-06-17-xr-toolbox-part-5-running-a-native-wrl7-app": { "title": "XR toolbox, Part 5: Running a native WRL7 app", "content": " Running a Native (WRL7) App Introduction What’s a native app? Spin up the build environment Clone the git repo Build iperf from source on WRL7 Build Server Fetch iperf source code Set up the SPEC file for rpmbuild Build RPM Transfer the iperf RPM to router Install iperf as native WRL7 app Test the Native app Set TPA IP (Src-hint) for App Traffic Start iperf server on router Install iperf in devbox (ubuntu server) Set a route to TPA IP on devbox Run iperf! Check out Part 4 of the XR toolbox series# Bring your own Container (LXC) App.IntroductionIf you haven’t checked out the earlier parts to the XR toolbox Series, then you can do so here# XR Toolbox SeriesThe purpose of this series is simple. Get users started with an IOS-XR setup on their laptop and incrementally enable them to try out the application-hosting infrastructure on IOS-XR.In this part, we explore how a user can build and deploy native WRL7 RPMs that they may host in the same process space as XR.What’s a native app?I go into some detail with respect to the IOS-XR application hosting architecture in the following blog# XR app-hosting infrastructure# Quick LookFor reference, a part of the architecture is shown below. We focus on the green container in the figure from the original blog#This is the XR control plane LXC. XR processes (routing protocols, XR CLI etc.) are all housed in the blue region. We represent XR FIB within the same region to indicate that the XR control plane exclusively handles the data-plane programming and access to the real XR interfaces (Gig, Mgmt etc.)The gray region inside the control plane LXC represents the global-vrf network namespace in the XR linux environment. Today, IOS-XR only supports the mapping of global/default VRF in IOS-XR to the global-vrf network namespace in XR linux. To get into the XR linux shell (global-vrf network namespace), we have two possible techniques# From XR CLI# Issue the bash command to drop into the XR linux shell from the CLI. Over SSH using port 57722# Port 22 is used by XR SSH. To enable a user/tool to drop directly into the XR linux shell, we enable SSH over port 57722. Any reachable IP address of XR could be used for this purpose. 
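Because any reachable XR address works for port 57722, the global-vrf shell can also be driven programmatically from an external host rather than interactively. A minimal sketch follows, assuming the paramiko package is installed on the client machine and using the vagrant/vagrant credentials and the Gig0/0/0/0 address from this setup.

```python
#!/usr/bin/env python
# Minimal sketch (not part of the tutorial repo) that runs a command in
# the XR linux shell (global-vrf network namespace) over SSH port 57722.
# Assumes paramiko is installed on the client machine.
import paramiko

XR_HOST = "10.1.1.10"   # any reachable XR address works, e.g. Gig0/0/0/0
XR_LINUX_PORT = 57722   # XR linux shell SSH (port 22 is the XR CLI SSH)

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(XR_HOST, port=XR_LINUX_PORT,
               username="vagrant", password="vagrant",
               allow_agent=False, look_for_keys=False)

# Show the interfaces mapped into the global-vrf namespace
stdin, stdout, stderr = client.exec_command("ifconfig")
print(stdout.read().decode())
client.close()
```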
Once in the XR linux shell, if we issue an ifconfig we should see all the interfaces (that are up/unshut) in the global/default VRF# RP/0/RP0/CPU0#rtr1# RP/0/RP0/CPU0#rtr1# RP/0/RP0/CPU0#rtr1#show ip int br Sun Jul 17 11#52#15.049 UTC Interface IP-Address Status Protocol Vrf-Name Loopback0 1.1.1.1 Up Up default GigabitEthernet0/0/0/0 10.1.1.10 Up Up default GigabitEthernet0/0/0/1 11.1.1.10 Up Up default GigabitEthernet0/0/0/2 unassigned Shutdown Down default MgmtEth0/RP0/CPU0/0 10.0.2.15 Up Up default RP/0/RP0/CPU0#rtr1# RP/0/RP0/CPU0#rtr1# RP/0/RP0/CPU0#rtr1#bash Sun Jul 17 11#52#22.904 UTC [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ifconfig Gi0_0_0_0 Link encap#Ethernet HWaddr 08#00#27#e0#7f#bb inet addr#10.1.1.10 Mask#255.255.255.0 inet6 addr# fe80##a00#27ff#fee0#7fbb/64 Scope#Link UP RUNNING NOARP MULTICAST MTU#1514 Metric#1 RX packets#0 errors#0 dropped#0 overruns#0 frame#0 TX packets#546 errors#0 dropped#3 overruns#0 carrier#1 collisions#0 txqueuelen#1000 RX bytes#0 (0.0 B) TX bytes#49092 (47.9 KiB) Gi0_0_0_1 Link encap#Ethernet HWaddr 08#00#27#26#ca#9c inet addr#11.1.1.10 Mask#255.255.255.0 inet6 addr# fe80##a00#27ff#fe26#ca9c/64 Scope#Link UP RUNNING NOARP MULTICAST MTU#1514 Metric#1 RX packets#0 errors#0 dropped#0 overruns#0 frame#0 TX packets#547 errors#0 dropped#3 overruns#0 carrier#1 collisions#0 txqueuelen#1000 RX bytes#0 (0.0 B) TX bytes#49182 (48.0 KiB) Mg0_RP0_CPU0_0 Link encap#Ethernet HWaddr 08#00#27#ab#bf#0d inet addr#10.0.2.15 Mask#255.255.255.0 inet6 addr# fe80##a00#27ff#feab#bf0d/64 Scope#Link UP RUNNING NOARP MULTICAST MTU#1514 Metric#1 RX packets#210942 errors#0 dropped#0 overruns#0 frame#0 TX packets#84664 errors#0 dropped#0 overruns#0 carrier#1 collisions#0 txqueuelen#1000 RX bytes#313575212 (299.0 MiB) TX bytes#4784245 (4.5 MiB) ---------------------------------- snip output -----------------------------------------Any Linux application hosted in this environment shares the process space with XR, and we refer to it as a native application.Spin up the build environmentWe’re going to spin up a topology with 3 vagrant instances as shown below# WRL7 Build# Since IOS-XR uses a streamlined custom WRL7 distribution, we need to make sure we have the latest WRL7 environment available to build “native” apps. For this reason we have released the vagrant box to match IOS-XR release 6.1.1. You will simply need to reference “ciscoxr/appdev-xr6.1.1” in your Vagrantfile to spin it up. IOS-XR# This is the 6.1.1 IOS-XR vagrant instance you would have already downloaded and installed as explained in the vagrant quick-start tutorial# IOS-XR vagrant box download In the end, vagrant box list must list your IOS-XRv vagrant box# AKSHSHAR-M-K0DS#~ akshshar$ vagrant box listIOS-XRv (virtualbox, 0)AKSHSHAR-M-K0DS#~ akshshar$ devbox# This is the ubuntu/trusty64 image we have been using in the other tutorials for LXC creation and generic application testing. IOS-XR and devbox instances talk to each other over Gig0/0/0/0 and eth1 interfaces respectively.Clone the git repoClone the following git repo# https#//github.com/ios-xr/vagrant-xrdocs.git AKSHSHAR-M-K0DS#~ akshshar$ git clone https#//github.com/ios-xr/vagrant-xrdocs.git Cloning into 'vagrant-xrdocs'...remote# Counting objects# 204, done.remote# Compressing objects# 100% (17/17), done.remote# Total 204 (delta 4), reused 0 (delta 0), pack-reused 187Receiving objects# 100% (204/204), 27.84 KiB | 0 bytes/s, done.Resolving deltas# 100% (74/74), done.Checking connectivity... 
done.AKSHSHAR-M-K0DS#~ akshshar$ AKSHSHAR-M-K0DS#~ akshshar$ AKSHSHAR-M-K0DS#~ akshshar$ cd vagrant-xrdocs/native-app-topo-bootstrap/AKSHSHAR-M-K0DS#native-app-topo-bootstrap akshshar$ pwd/Users/akshshar/vagrant-xrdocs/native-app-topo-bootstrapAKSHSHAR-M-K0DS#native-app-topo-bootstrap akshshar$ lsVagrantfile\tconfigs\t\tscriptsAKSHSHAR-M-K0DS#native-app-topo-bootstrap akshshar$ Once you’re in the right directory, simply issue a vagrant up# AKSHSHAR-M-K0DS#native-app-topo-bootstrap akshshar$ vagrant up Bringing machine 'rtr' up with 'virtualbox' provider...Bringing machine 'devbox' up with 'virtualbox' provider...Bringing machine 'wrl7_build' up with 'virtualbox' provider...--------------------------- snip output ----------------------------Build iperf from source on WRL7 Build ServerAssuming everything came up fine, let’s ssh into the wrl7_build instance#AKSHSHAR-M-K0DS#native-app-topo-bootstrap akshshar$ vagrant ssh wrl7_buildlocalhost#~$ localhost#~$ localhost#~$ localhost#~$ lsb_release -aLSB Version#\tcore-4.1-noarch#core-4.1-x86_64Distributor ID#\twrlinuxDescription#\tWind River Linux 7.0.0.2Release#\t7.0.0.2Codename#\tn/alocalhost#~$ Fetch iperf source codeGreat! Let’s fetch the source code of iperf (iperf2) from its official location#Current latest version is# iperf-2.0.9Download this tar ball into the wrl7_build vagrant instance# localhost#~$ localhost#~$ wget https#//iperf.fr/download/source/iperf-2.0.9-source.tar.gz --2016-07-17 14#57#13-- https#//iperf.fr/download/source/iperf-2.0.9-source.tar.gzResolving iperf.fr... 194.158.119.186, 2001#860#f70a##2Connecting to iperf.fr|194.158.119.186|#443... connected.HTTP request sent, awaiting response... 200 OKLength# 277702 (271K) [application/x-gzip]Saving to# 'iperf-2.0.9-source.tar.gz'100%[===================================================================================>] 277,702 345KB/s in 0.8s 2016-07-17 14#57#14 (345 KB/s) - 'iperf-2.0.9-source.tar.gz' saved [277702/277702]localhost#~$ localhost#~$ localhost#~$ lsiperf-2.0.9-source.tar.gz localhost#~$Copy the source code tar ball into the expected location for rpmbuild# /usr/src/rpm/SOURCES/localhost#~$ sudo cp /home/vagrant/iperf-2.0.9-source.tar.gz /usr/src/rpm/SOURCES/localhost#~$ Set up the SPEC file for rpmbuildWe will need a spec file to build the RPM. The spec file we intend to use is shown below. The highlighted sections are important.This file is already available in /home/vagrant of wrl7_build server, thanks to the “file” provisioner that run as part of “vagrant up”. Name# iperf Version# 2.0.9Release# XR_6.1.1License# Copyright (c) 2015 Cisco Systems Inc. 
All rights reserved.Packager# ciscoSOURCE0 # %{name}-%{version}-source.tar.gzGroup# 3rd party applicationSummary# iperf compiled for WRL7# XR 6.1.1%descriptionThis is a compiled version of iperf-2.0.9 for WRL7# XR 6.1.1%prep%setup -q -n %{name}-%{version}%build./configuremake%installmkdir -p %{buildroot}%{_sbindir}install -m755 src/iperf %{buildroot}%{_sbindir}%files%defattr(-,root,root)%{_sbindir}/iperf%cleanrm -rf %{buildroot}Build RPMIssue the rpmbuild command# localhost#~$ sudo rpmbuild -ba iperf.spec Executing(%prep)# /bin/sh -e /var/tmp/rpm-tmp.59743+ umask 022+ cd /usr/lib64/rpm/../../src/rpm/BUILD+ cd /usr/src/rpm/BUILD+ rm -rf iperf-2.0.9+ /bin/tar -xf ------------------------------ snip output -------------------------------Requires# libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.3)(64bit) libc.so.6(GLIBC_2.7)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libpthread.so.0()(64bit) libpthread.so.0(GLIBC_2.2.5)(64bit) libpthread.so.0(GLIBC_2.3.2)(64bit) librt.so.1()(64bit) librt.so.1(GLIBC_2.2.5)(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) rtld(GNU_HASH)Checking for unpackaged file(s)# /usr/lib64/rpm/check-files /usr/lib64/rpm/../../../var/tmp/iperf-rootWrote# /usr/src/rpm/SRPMS/iperf-2.0.9-XR_6.1.1.src.rpmWrote# /usr/src/rpm/RPMS/x86_64/iperf-2.0.9-XR_6.1.1.x86_64.rpmlocalhost#~$ The final RPM should be available in /usr/src/rpm/RPMS/x86_64#localhost#~$ ls -l /usr/src/rpm/RPMS/x86_64/total 48-rw-r--r-- 1 root root 48119 Jul 17 16#46 iperf-2.0.9-XR_6.1.1.x86_64.rpmlocalhost#~$ Transfer the iperf RPM to routerWe can transfer the iperf RPM to the router directly over the management network.First determine the forwarded port for XR linux shell (port 57722) for the running router#This command must of course be issued from your laptop running the vagrant environment AKSHSHAR-M-K0DS#native-app-topo-bootstrap akshshar$ vagrant port rtr The forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution. 
22 (guest) => 2223 (host)57722 (guest) => 2222 (host) AKSHSHAR-M-K0DS#native-app-topo-bootstrap akshshar$ Get back into wrl7_build and use HOST ip address = 10.0.2.2 with port 2222 to transfer the RPM to the router over the management network#The password for user vagrant on the router is “vagrant”.localhost#~$ localhost#~$ scp -P 2222 /usr/src/rpm/RPMS/x86_64/iperf-2.0.9-XR_6.1.1.x86_64.rpm vagrant@10.0.2.2#/home/vagrant/ vagrant@10.0.2.2's password# iperf-2.0.9-XR_6.1.1.x86_64.rpm 100% 47KB 47.0KB/s 00#00 localhost#~$ Install iperf as native WRL7 appLogin to the router and install the iperf RPM transferred in the previous step using yum#xr-vm_node0_RP0_CPU0#~$ pwd/home/vagrant xr-vm_node0_RP0_CPU0#~$ ls -l iperf-2.0.9-XR_6.1.1.x86_64.rpm -rw-r--r-- 1 vagrant vagrant 48011 Jul 17 21#11 iperf-2.0.9-XR_6.1.1.x86_64.rpm xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ sudo yum install -y iperf-2.0.9-XR_6.1.1.x86_64.rpm Loaded plugins# downloadonly, protect-packages, rpm-persistenceSetting up Install ProcessExamining iperf-2.0.9-XR_6.1.1.x86_64.rpm# iperf-2.0.9-XR_6.1.1.x86_64Marking iperf-2.0.9-XR_6.1.1.x86_64.rpm to be installedResolving Dependencies--> Running transaction check---> Package iperf.x86_64 0#2.0.9-XR_6.1.1 will be installed--> Finished Dependency ResolutionDependencies Resolved================================================================================================================================= Package Arch Version Repository Size=================================================================================================================================Installing# iperf x86_64 2.0.9-XR_6.1.1 /iperf-2.0.9-XR_6.1.1.x86_64 103 kTransaction Summary=================================================================================================================================Install 1 PackageTotal size# 103 kInstalled size# 103 kDownloading Packages#Running Transaction CheckRunning Transaction TestTransaction Test SucceededRunning Transaction Installing # iperf-2.0.9-XR_6.1.1.x86_64 1/1 Installed# iperf.x86_64 0#2.0.9-XR_6.1.1 Complete!xr-vm_node0_RP0_CPU0#~$ Check the installation#xr-vm_node0_RP0_CPU0#~$ iperf -viperf version 2.0.9 (1 June 2016) pthreadsxr-vm_node0_RP0_CPU0#~$ We’re all set!Test the Native appAs we have seen in greater detail in the LXC container app tutorial#Setting the src-hint for application trafficwe need to set the src-hint for applications to ensure reachability in routed networks.Set TPA IP (Src-hint) for App TrafficAKSHSHAR-M-K0DS#native-app-topo-bootstrap akshshar$ vagrant port rtr The forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution. 
22 (guest) => 2223 (host)57722 (guest) => 2222 (host) AKSHSHAR-M-K0DS#native-app-topo-bootstrap akshshar$ ssh -p 2223 vagrant@localhost vagrant@localhost's password# RP/0/RP0/CPU0#ios#RP/0/RP0/CPU0#ios#conf tSun Jul 17 21#23#04.140 UTCRP/0/RP0/CPU0#ios(config)#tpa address-family ipv4 update-source loopback 0RP/0/RP0/CPU0#ios(config)#commitSun Jul 17 21#23#23.464 UTCRP/0/RP0/CPU0#ios(config)#endRP/0/RP0/CPU0#ios#RP/0/RP0/CPU0#ios#bash -c ip routeSun Jul 17 21#23#35.008 UTCdefault dev fwdintf scope link src 1.1.1.1 10.0.2.0/24 dev Mg0_RP0_CPU0_0 proto kernel scope link src 10.0.2.15 RP/0/RP0/CPU0#ios#Start iperf server on routerAKSHSHAR-M-K0DS#native-app-topo-bootstrap akshshar$vagrant ssh rtrLast login# Sun Jul 17 21#11#44 2016 from 10.0.2.2xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ iperf -s -u ------------------------------------------------------------Server listening on UDP port 5001Receiving 1470 byte datagramsUDP buffer size# 64.0 MByte (default)------------------------------------------------------------Yay! iperf server is running natively in IOS-XR.Install iperf in devbox (ubuntu server)We will use devbox (ubuntu server) in the topology as an iperf clientAKSHSHAR-M-K0DS#native-app-topo-bootstrap akshshar$ vagrant ssh devbox Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-87-generic x86_64) * Documentation# https#//help.ubuntu.com/ System information as of Sun Jul 17 20#19#54 UTC 2016 System load# 0.0 Processes# 74 Usage of /# 3.5% of 39.34GB Users logged in# 0 Memory usage# 25% IP address for eth0# 10.0.2.15 Swap usage# 0% IP address for eth1# 11.1.1.20 Graph this data and manage this system at# https#//landscape.canonical.com/ Get cloud support with Ubuntu Advantage Cloud Guest# http#//www.ubuntu.com/business/services/cloud0 packages can be updated.0 updates are security updates.Last login# Sun Jul 17 20#19#54 2016 from 10.0.2.2vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ sudo apt-get -y install iperfReading package lists... DoneBuilding dependency tree Reading state information... DoneThe following NEW packages will be installed# iperf0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.Need to get 0 B/56.3 kB of archives.After this operation, 174 kB of additional disk space will be used.Selecting previously unselected package iperf.(Reading database ... 
62989 files and directories currently installed.)Preparing to unpack .../iperf_2.0.5-3_amd64.deb ...Unpacking iperf (2.0.5-3) ...Processing triggers for man-db (2.6.7.1-1ubuntu1) ...Setting up iperf (2.0.5-3) ...vagrant@vagrant-ubuntu-trusty-64#~$ Set a route to TPA IP on devboxLet’s make sure XR’s loopback0 (used as TPA IP) is reachable from the devbox (since we’re not running routing protocols in this topology, this isn’t automatic)#vagrant@vagrant-ubuntu-trusty-64#~$ sudo ip route add 1.1.1.1/32 via 11.1.1.10 vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ ping 1.1.1.1 PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.64 bytes from 1.1.1.1# icmp_seq=1 ttl=255 time=1.52 ms64 bytes from 1.1.1.1# icmp_seq=2 ttl=255 time=1.94 ms^C--- 1.1.1.1 ping statistics ---2 packets transmitted, 2 received, 0% packet loss, time 1001msrtt min/avg/max/mdev = 1.526/1.734/1.943/0.212 msRun iperf!Initiate the iperf client on the devbox pointing to the router’s loopback0 (TPA IP)#vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ iperf -c 1.1.1.1 -u ------------------------------------------------------------Client connecting to 1.1.1.1, UDP port 5001Sending 1470 byte datagramsUDP buffer size# 208 KByte (default)------------------------------------------------------------[ 3] local 11.1.1.20 port 34348 connected with 1.1.1.1 port 5001[ ID] Interval Transfer Bandwidth[ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec[ 3] Sent 893 datagrams[ 3] Server Report#[ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec 0.256 ms 0/ 893 (0%)vagrant@vagrant-ubuntu-trusty-64#~$ We’ve successfully built iperf as a WRL7 RPM, installed it natively inside XR and tested iperf operation over XR’s data port (Gig0/0/0/0 connected to devbox eth1).", "url": "/tutorials/2016-06-17-xr-toolbox-part-5-running-a-native-wrl7-app/", "author": "Akshat Sharma", "tags": "vagrant, iosxr, cisco, linux, wrl7, rpm, xr toolbox" } , "tutorials-2016-07-09-pathchecker-iperf-netconf-for-ospf-path-failover": { "title": "Pathchecker: iperf + netconf for OSPF path failover", "content": " Launching a Container App Introduction Understand the topology Pre-requisites Clone the git repo Spin up the devbox Create the Pathchecker LXC tar ball Launch an Ubuntu LXC inside devbox Install Application dependencies inside LXC Fetch the application code from Github Change SSH port inside the container Package up the LXC Launch Router Topology Test out pathchecker! Check current OSPF cost/path state Start iperf server on rtr2 Start pathchecker on rtr1 (LXC) Create impairment on Active path Verify the Failover was successful IntroductionIf you haven’t checked out the XR toolbox Series, then you can do so here# XR Toolbox SeriesThis series is meant to help a beginner get started with application-hosting on IOS-XR.In this tutorial we intend to utilize almost all the techniques learnt in the above series to solve a path remediation problem# Set up a couple of paths between two routers. Bring up OSPF neighborship on both links. One link is forced to be the reference link by increasing the ospf cost of the other link. Use a monitoring technique to determine the bandwidth, jitter, packet loss etc. parameters along the active traffic path. In this example, we utilize a python app called pathchecker that in turn uses iperf to measure link health. Simulate network degradation to force pathchecker (running inside an LXC) to initiate failover by changing the OSPF path cost over a netconf session. 
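The last step listed above, nudging the OSPF path cost over a NETCONF session when the measurements degrade, is the heart of the app. A heavily simplified sketch of just that step is shown here. It is not the actual pathchecker code (see the pathchecker repository referenced below for the real implementation); it assumes ncclient and jinja2 are installed and a hypothetical template file named ospf_cost.xml that holds the Cisco IOS-XR OSPF YANG XML with interface and cost placeholders.

```python
#!/usr/bin/env python
# Heavily simplified sketch of the failover step, NOT the actual
# pathchecker implementation (https://github.com/ios-xr/pathchecker).
# It renders a config template and pushes it over NETCONF with ncclient,
# raising the OSPF cost of the reference link so traffic moves to the
# backup link. "ospf_cost.xml" is a hypothetical Jinja2 template holding
# the Cisco IOS-XR OSPF YANG XML with {{ interface }}/{{ cost }} fields.
from jinja2 import Template
from ncclient import manager

def set_ospf_cost(host, user, password, interface, cost):
    with open("ospf_cost.xml") as f:
        config = Template(f.read()).render(interface=interface, cost=cost)

    with manager.connect(host=host, port=830, username=user,
                         password=password, hostkey_verify=False,
                         allow_agent=False, look_for_keys=False) as m:
        m.edit_config(config=config, target="candidate")
        m.commit()

if __name__ == "__main__":
    # Values taken from this tutorial (pc_run.sh and the failover output)
    set_ospf_cost("6.6.6.6", "vagrant", "vagrant",
                  interface="GigabitEthernet0/0/0/0", cost=30)
```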
This is illustrated below#Understand the topologyAs illustrated above, there are 3 nodes in the topology# rtr1 # The router on the left. This is the origin of the traffic. We run the pathchecker code inside an ubuntu container on this router. The path failover happens rtr1 interfaces as needed. devbox # This node serves two purposes. We use it to create our ubuntu LXC tar ball with the pathchecker code before deploying it to the router. It also houses two bridge networks (one for each path) so that we can create very granular impairment on each path to test our app. rtr2 # This is the destination router. pathchecker uses an iperf client on rtr1 to get a health estimate of the active path. You need an iperf server running on rtr2 for the pathchecker app to talk to. Pre-requisites Make sure you have Vagrant and Virtualbox installed on your system. The system must have 9-10G RAM available. Go through the Vagrant quick-start tutorial, if you haven’t already, to learn how to use Vagrant with IOS-XR# IOS-XR vagrant quick-start It would be beneficial for the user to go through the XR Toolbox Series. But it is not a hard requirement. Following the steps in this tutorial should work out just fine for this demo. Once you have everything set up, you should be able to see the IOS-XRv vagrant box in the vagrant box list command# AKSHSHAR-M-K0DS#~ akshshar$ vagrant box list IOS-XRv (virtualbox, 0) AKSHSHAR-M-K0DS#~ akshshar$ Clone the git repoThe entire environment can be replicated on any environment running vagrant provided around 9-10G RAM is available. The topology will include 2 IOS-XR routers (8G RAM) and an ubuntu instance (around 512 MB RAM).Clone the pathchecker code from here# https#//github.com/ios-xr/pathcheckerAKSHSHAR-M-K0DS#~ akshshar$ git clone https#//github.com/ios-xr/pathchecker.git Cloning into 'pathchecker'...remote# Counting objects# 46, done.remote# Compressing objects# 100% (28/28), done.remote# Total 46 (delta 8), reused 0 (delta 0), pack-reused 18Unpacking objects# 100% (46/46), done.Checking connectivity... done.AKSHSHAR-M-K0DS#~ akshshar$ Spin up the devboxBefore we spin up the routers, we need to create the container tar ball for the pathchecker code. The way I’ve set up the launch scripts for rtr1, the bringup will fail without the container tar ball in the directory.Move to the Vagrant directory and launch only the devbox node#AKSHSHAR-M-K0DS#~ akshshar$ cd pathchecker/AKSHSHAR-M-K0DS#pathchecker akshshar$ cd vagrant/AKSHSHAR-M-K0DS#vagrant akshshar$ pwd/Users/akshshar/pathchecker/vagrantAKSHSHAR-M-K0DS#vagrant akshshar$ vagrant up devbox Bringing machine 'devbox' up with 'virtualbox' provider...==> devbox# Importing base box 'ubuntu/trusty64'...---------------------------- snip output ---------------------------------==> devbox# Running provisioner# file...AKSHSHAR-M-K0DS#vagrant akshshar$ AKSHSHAR-M-K0DS#vagrant akshshar$ AKSHSHAR-M-K0DS#vagrant akshshar$ vagrant status Current machine states#rtr1 not created (virtualbox)devbox running (virtualbox)rtr2 not created (virtualbox)This environment represents multiple VMs. The VMs are all listedabove with their current state. For more information about a specificVM, run `vagrant status NAME`.AKSHSHAR-M-K0DS#vagrant akshshar$ Create the Pathchecker LXC tar ballLaunch an Ubuntu LXC inside devboxSSH into “devbox”#vagrant ssh devboxCreate the pathchecker lxc template#vagrant@vagrant-ubuntu-trusty-64#~$ sudo lxc-create -t ubuntu --name pathchecker Checking cache download in /var/cache/lxc/trusty/rootfs-amd64 ... 
Installing packages in template# ssh,vim,language-pack-enDownloading ubuntu trusty minimal ...I# Retrieving Release I# Retrieving Release.gpg I# Checking Release signature------------------------------ snip output ------------------------------------Start the container. You will be dropped into the console once boot is complete.Username# ubuntuPassword# ubuntuvagrant@vagrant-ubuntu-trusty-64#~$ sudo lxc-start --name pathchecker <4>init# hostname main process (3) terminated with status 1<4>init# plymouth-upstart-bridge main process (5) terminated with status 1<4>init# plymouth-upstart-bridge main process ended, respawningUbuntu 14.04.4 LTS nc_iperf consolepathchecker login# ubuntuPassword# Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-87-generic x86_64) * Documentation# https#//help.ubuntu.com/The programs included with the Ubuntu system are free software;the exact distribution terms for each program are described in theindividual files in /usr/share/doc/*/copyright.Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted byapplicable law.ubuntu@pathchecker#~$ Install Application dependencies inside LXCInstall iperf and all the dependencies required to install ncclient inside the container. We’ll also install git, will need it to fetch our app.sudo apt-get -y install python-pip python-lxml python-dev libffi-dev libssl-dev iperf gitInstall the latest ncclient code and jinja2 code using pip (required for our app). We also downgrade the cryptography package to 1.2.1 to circumvent a current bug in the package.sudo pip install ncclient jinja2 cryptography==1.2.1Perfect, all the dependencies for our app are now installed.Fetch the application code from GithubFetch our app from Github#ubuntu@pathchecker#~$ git clone https#//github.com/ios-xr/pathchecker.gitCloning into 'pathchecker'...remote# Counting objects# 46, done.remote# Compressing objects# 100% (28/28), done.remote# Total 46 (delta 8), reused 0 (delta 0), pack-reused 18Unpacking objects# 100% (46/46), done.Checking connectivity... done.ubuntu@pathchecker#~$ Change SSH port inside the containerWhen we deploy the container to IOS-XR, we will share XR’s network namespace. Since IOS-XR already uses up port 22 and port 57722 for its own purposes, we need to pick some other port for our container.P.S. If you check the Vagrantfile, we intend to expose port 58822 to the user’s laptop directly, on rtr1.Let’s change the SSH port to 58822#ubuntu@pathchecker#~$ sudo sed -i s/Port\\ 22/Port\\ 58822/ /etc/ssh/sshd_config ubuntu@pathchecker#~$ Check that your port was updated successfully#ubuntu@pathchecker#~$ cat /etc/ssh/sshd_config | grep PortPort 58822ubuntu@pathchecker#~$ We’re good!Package up the LXCNow, shutdown the container#ubuntu@pathchecker#~$ sudo shutdown -h now ubuntu@pathchecker#~$ Broadcast message from ubuntu@pathchecker\t(/dev/lxc/console) at 10#24 ...The system is going down for halt NOW!------------------------------ snip output ------------------------------------You’re back on devbox.Become root and package up your container tar ballsudo -scd /var/lib/lxc/pathchecker/rootfs/tar -czvf /vagrant/pathchecker_rootfs.tar.gz * See what we did there? We packaged up the container tar ball as pathchecker_rootfs.tar.gz under /vagrant directory. Why is this important?Well, Vagrant also automatically shares a certain directory with your laptop (for most types of guest operating systems). So the /vagrant is automatically mapped to the directory in which you launched your vagrant instance. 
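As an aside, the packaging step can also be scripted instead of typed by hand. A rough Python equivalent of the tar command above is sketched here, using the paths from this tutorial; run it as root on the devbox.

```python
#!/usr/bin/env python
# Rough, optional equivalent of the manual packaging step above.
# Run as root on the devbox; paths are the ones used in this tutorial.
import os
import tarfile

ROOTFS_DIR = "/var/lib/lxc/pathchecker/rootfs"
OUTPUT = "/vagrant/pathchecker_rootfs.tar.gz"  # /vagrant is synced to the host

with tarfile.open(OUTPUT, "w:gz") as tar:
    # Archive the *contents* of the rootfs (like running "tar -czvf ... *"
    # from inside the directory), not the rootfs directory itself.
    for entry in os.listdir(ROOTFS_DIR):
        tar.add(os.path.join(ROOTFS_DIR, entry), arcname=entry)

print("Wrote %s (%d bytes)" % (OUTPUT, os.path.getsize(OUTPUT)))
```

Either way, the tar ball written to /vagrant shows up in the directory on your laptop from which the topology was launched.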
To check this, let’s get out of our vagrant instance and issue an ls in your launch directory#vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ exitlogoutConnection to 127.0.0.1 closed.AKSHSHAR-M-K0DS#vagrant akshshar$ AKSHSHAR-M-K0DS#vagrant akshshar$ pwd /Users/akshshar/pathchecker/vagrantAKSHSHAR-M-K0DS#vagrant akshshar$ ls -l pathchecker_rootfs.tar.gz -rw-r--r-- 1 akshshar staff 301262995 Jul 18 07#57 pathchecker_rootfs.tar.gzAKSHSHAR-M-K0DS#vagrant akshshar$ Launch Router TopologyTo launch the two routers in the topology, make sure you are in the vagrant directory under pathchecker and issue a vagrant upAKSHSHAR-M-K0DS#vagrant akshshar$ pwd/Users/akshshar/pathchecker/vagrantAKSHSHAR-M-K0DS#vagrant akshshar$ AKSHSHAR-M-K0DS#vagrant akshshar$ vagrant upBringing machine 'rtr1' up with 'virtualbox' provider...Bringing machine 'devbox' up with 'virtualbox' provider...Bringing machine 'rtr2' up with 'virtualbox' provider...-------------------------------- snip output --------------------------------------Once everything is up, you should see the three nodes running# AKSHSHAR-M-K0DS#vagrant akshshar$ vagrant statusCurrent machine states#rtr1 running (virtualbox)devbox running (virtualbox)rtr2 running (virtualbox)This environment represents multiple VMs. The VMs are all listedabove with their current state. For more information about a specificVM, run `vagrant status NAME`.We’re all set! Let’s test out our application.Test out pathchecker!Before we begin, let’s dump some configuration outputs on rtr1#Check current OSPF cost/path stateAKSHSHAR-M-K0DS#vagrant akshshar$ vagrant port rtr1The forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution.22 (guest) => 2223 (host) 57722 (guest) => 2200 (host) 58822 (guest) => 58822 (host)AKSHSHAR-M-K0DS#vagrant akshshar$ ssh -p 2223 vagrant@localhost The authenticity of host '[localhost]#2223 ([127.0.0.1]#2223)' can't be established.RSA key fingerprint is b1#c1#5e#a5#7e#e7#c0#4f#32#ef#85#f9#3d#27#36#0f.Are you sure you want to continue connecting (yes/no)? yesWarning# Permanently added '[localhost]#2223' (RSA) to the list of known hosts.vagrant@localhost's password# RP/0/RP0/CPU0#rtr1#RP/0/RP0/CPU0#rtr1#show running-config router ospf Mon Jul 18 15#25#53.875 UTCrouter ospf apphost area 0 interface Loopback0 ! interface GigabitEthernet0/0/0/0 ! interface GigabitEthernet0/0/0/1 cost 20 ! !!RP/0/RP0/CPU0#rtr1#show route 2.2.2.2 Mon Jul 18 15#26#03.576 UTCRouting entry for 2.2.2.2/32 Known via ~ospf apphost~, distance 110, metric 2, type intra area Installed Jul 18 15#18#28.218 for 00#07#35 Routing Descriptor Blocks 10.1.1.20, from 2.2.2.2, via GigabitEthernet0/0/0/0 Route metric is 2 No advertising protos. RP/0/RP0/CPU0#rtr1#We can see that the current OSPF cost on Gig0/0/0/1 is 20, higher than Gig0/0/0/0. 
Hence as the route to 2.2.2.2 (loopback 0 of rtr2) shows, the current path selected is through Gig0/0/0/0Start iperf server on rtr2iperf was already installed on rtr2 as a native application (more on native apps here# XR toolbox part 5# Running a native WRL7 App) during the vagrant up process.Start iperf server on rtr2 and set it up to accept UDP packets#AKSHSHAR-M-K0DS#vagrant akshshar$ vagrant ssh rtr2 Last login# Mon Jul 18 15#57#05 2016 from 10.0.2.2xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ iperf -s -u ------------------------------------------------------------Server listening on UDP port 5001Receiving 1470 byte datagramsUDP buffer size# 64.0 MByte (default)------------------------------------------------------------Start pathchecker on rtr1 (LXC)SSH into the pathchecker ubuntu container (already brought up as part of vagrant up process) by using port 58822 on your laptop#Password for user “ubuntu” # ubuntuAKSHSHAR-M-K0DS#vagrant akshshar$ AKSHSHAR-M-K0DS#vagrant akshshar$ ssh -p 58822 ubuntu@localhostThe authenticity of host '[localhost]#58822 ([127.0.0.1]#58822)' can't be established.RSA key fingerprint is 19#54#83#a9#7a#9f#0a#18#62#d1#f3#91#87#3c#e9#0b.Are you sure you want to continue connecting (yes/no)? yesWarning# Permanently added '[localhost]#58822' (RSA) to the list of known hosts.ubuntu@localhost's password# Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.14.23-WR7.0.0.2_standard x86_64) * Documentation# https#//help.ubuntu.com/Last login# Mon Jul 18 15#19#45 2016 from 10.0.2.2ubuntu@pathchecker#~$ ubuntu@pathchecker#~$ ubuntu@pathchecker#~$ The pc_run.sh script simply runs the pathchecker.py application with a few sample parameters#ubuntu@pathchecker#~$ ubuntu@pathchecker#~$ cat ./pathchecker/pc_run.sh #!/bin/bash./pathchecker.py --host 6.6.6.6 -u vagrant -p vagrant --port 830 -c 10 -o apphost -a 0 -i GigabitEthernet0/0/0/0 -s 2.2.2.2 -j 4 -l 5 -f -t 10ubuntu@pathchecker#~$ Based on above output, the “-l” option represents the threshold for packet loss and has been set to 5% for this run. Similarly, jitter has a threshold value of 4.Start the pathchecker app by running the pc_run.sh script in the pathchecker repository#ubuntu@pathchecker#~$ cd pathchecker/ ubuntu@pathchecker#~/pathchecker$ ./pc_run.sh Error while opening state file, let's assume low cost stateCurrently, on reference link GigabitEthernet0/0/0/0 Starting an iperf run.....20160718162513,1.1.1.1,62786,2.2.2.2,5001,6,0.0-10.0,1311240,104899220160718162513,1.1.1.1,62786,2.2.2.2,5001,6,0.0-10.0,1312710,104847420160718162513,2.2.2.2,5001,1.1.1.1,62786,6,0.0-10.0,1312710,1048679,2.453,0,892,0.000,1bw is1025.5546875jitter is2.453pkt_loss is0.000verdict isFalseCurrently, on reference link GigabitEthernet0/0/0/0Starting an iperf run.....Perfect! 
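A quick note on the verdict printed above. pathchecker derives it from the iperf server report, comparing the measured jitter and packet loss against the thresholds passed in pc_run.sh (-j 4 and -l 5). The snippet below is a simplified illustration of that check, not the actual pathchecker code; it assumes iperf 2 CSV reporting, which is the format of the comma-separated lines in the output above.

```python
#!/usr/bin/env python
# Simplified illustration of the "verdict" logic, NOT the actual
# pathchecker implementation. Thresholds mirror pc_run.sh
# (-j 4 for jitter in ms, -l 5 for packet loss in percent).
JITTER_THRESHOLD = 4.0   # ms
LOSS_THRESHOLD = 5.0     # percent

def degraded(server_report_line):
    fields = server_report_line.strip().split(",")
    jitter_ms = float(fields[9])   # jitter reported by the iperf server
    loss_pct = float(fields[12])   # percentage of lost datagrams
    return jitter_ms > JITTER_THRESHOLD or loss_pct > LOSS_THRESHOLD

# Server report line copied from the run above
line = ("20160718162513,2.2.2.2,5001,1.1.1.1,62786,6,"
        "0.0-10.0,1312710,1048679,2.453,0,892,0.000,1")
print(degraded(line))   # False, so pathchecker stays on the reference link
```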
The App seems to be running fine on the reference link Gig0/0/0/0.Create impairment on Active pathWith the app running, let’s scoot over to “devbox” which will also act as our impairment node.AKSHSHAR-M-K0DS#vagrant akshshar$ vagrant ssh devbox Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-87-generic x86_64) * Documentation# https#//help.ubuntu.com/ System information as of Mon Jul 18 16#38#49 UTC 2016 System load# 0.0 Processes# 76 Usage of /# 6.3% of 39.34GB Users logged in# 0 Memory usage# 32% IP address for eth0# 10.0.2.15 Swap usage# 0% IP address for lxcbr0# 10.0.3.1 Graph this data and manage this system at# https#//landscape.canonical.com/ Get cloud support with Ubuntu Advantage Cloud Guest# http#//www.ubuntu.com/business/services/cloudLast login# Mon Jul 18 16#38#50 2016 from 10.0.2.2vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ lsimpair_backup.sh impair_reference.sh stop_impair.shvagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ cat impair_reference.sh #!/bin/bashecho ~Stopping all current impairments~sudo tc qdisc del dev eth3 root &> /dev/nullsudo tc qdisc del dev eth4 root &> /dev/nullecho ~Starting packet loss on reference link~sudo tc qdisc add dev eth3 root netem loss 7% vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ ./impair_reference.shStopping all current impairmentsStarting packet loss on reference linkvagrant@vagrant-ubuntu-trusty-64#~$ As we can see, the reference impairment script creates a packet loss of 7% on the reference linkTake a look at the running pathchecker application on rtr1. It should switch to the backup link once it detects an increase in packet loss beyond 5% (as specified in the pc_run.sh file)#Currently, on reference link GigabitEthernet0/0/0/0Starting an iperf run.....20160718164745,1.1.1.1,60318,2.2.2.2,5001,6,0.0-10.0,1311240,104899220160718164745,1.1.1.1,60318,2.2.2.2,5001,6,0.0-10.0,1312710,104851620160718164745,2.2.2.2,5001,1.1.1.1,60318,6,0.0-573.0,1312710,18328,5.215,0,892,0.000,1bw is1025.5546875jitter is5.215pkt_loss is0.000verdict isTrueWoah! iperf run reported discrepancy, increase cost of reference link !Increasing cost of the reference link GigabitEthernet0/0/0/0Currently, on backup link Starting an iperf run.....20160718164755,1.1.1.1,61649,2.2.2.2,5001,6,0.0-10.0,1311240,104899220160718164755,1.1.1.1,61649,2.2.2.2,5001,6,0.0-10.0,1312710,104857720160718164755,2.2.2.2,5001,1.1.1.1,61649,6,0.0-583.3,1312710,18002,1.627,0,893,0.000,0bw is1025.5546875jitter is1.627pkt_loss is0.000verdict isFalseCurrently, on backup linkStarting an iperf run.....20160718164805,1.1.1.1,59343,2.2.2.2,5001,6,0.0-10.0,1311240,104899220160718164805,1.1.1.1,59343,2.2.2.2,5001,6,0.0-10.0,1312710,104852020160718164805,2.2.2.2,5001,1.1.1.1,59343,6,0.0-593.4,1312710,17697,2.038,0,893,0.000,0The app initiated the failover! Let’s see how the router responded.Verify the Failover was successfulAKSHSHAR-M-K0DS#vagrant akshshar$ ssh -p 2223 vagrant@localhostvagrant@localhost's password# RP/0/RP0/CPU0#rtr1#RP/0/RP0/CPU0#rtr1#RP/0/RP0/CPU0#rtr1#show running-config router ospfMon Jul 18 17#50#47.851 UTCrouter ospf apphost area 0 interface Loopback0 ! interface GigabitEthernet0/0/0/0 cost 30 ! interface GigabitEthernet0/0/0/1 cost 20 ! !!RP/0/RP0/CPU0#rtr1#Great! The Cost of the Gig0/0/0/0 (reference) interface has been increased to 30, greater than the cost of Gig0/0/0/1. 
This forces the failover to happen to the Gig0/0/0/1 for the iperf traffic (or any traffic destined to rtr2).RP/0/RP0/CPU0#rtr1#show route 2.2.2.2Mon Jul 18 18#01#49.297 UTCRouting entry for 2.2.2.2/32 Known via ~ospf apphost~, distance 110, metric 21, type intra area Installed Jul 18 16#47#45.705 for 01#14#03 Routing Descriptor Blocks 11.1.1.20, from 2.2.2.2, via GigabitEthernet0/0/0/1 Route metric is 21 No advertising protos. RP/0/RP0/CPU0#rtr1#It works! The failover happened and the next hop for 2.2.2.2 (loopback0 of rtr2) is now 11.1.1.20 through Gig0/0/0/1 (the backup link).We leave it upto the reader to try and impair the backup link now and see the App switch the path back to the reference interface.", "url": "/tutorials/2016-07-09-pathchecker-iperf-netconf-for-ospf-path-failover/", "author": "Akshat Sharma", "tags": "vagrant, iosxr, cisco, linux, iperf, ospf, netconf, pathchecker" } , "tutorials-2016-08-15-netmiko-and-napalm-with-ios-xr-quick-look": { "title": "Netmiko and Napalm with IOS-XR: Quick Look", "content": " IOS-XR# Ansible and Vagrant Introduction Module installation Example using Netmiko Example using NAPALM IntroductionTo begin with, let’s take a look the tools we intend to use#Netmiko - multi-vendor ssh tool for device configurationNAPALM- python based automation tool, which provides a common API for different vendor platforms.We will use old setup, which consist of devbox (Ubuntu instance) and rtr (IOS-XRv).We are interested in our interaction with IOS-XRv in particular.Module installationLet’s install python modules on devbox#sudo pip install netmikosudo pip install napalmTo start off, we should verify that the setup was successful. Run python interpreter#vagrant@vagrant-ubuntu-trusty-64#~$ pythonPython 2.7.6 (default, Jun 22 2015, 17#58#13)[GCC 4.8.2] on linux2Type ~help~, ~copyright~, ~credits~ or ~license~ for more information.>>> import netmiko>>> import napalm>>>Example using NetmikoCreate the first file and try to connect to the device.from netmiko import ConnectHandlercisco_ios_xrv = { 'device_type'# 'cisco_xr', 'ip'# '10.1.1.20', 'username'# 'vagrant', 'password'# 'vagrant', 'port' # 22, # optional, defaults to 22 'secret'# 'secret', # optional, defaults to '' 'verbose'# False, # optional, defaults to False}net_connect = ConnectHandler(**cisco_ios_xrv)output = net_connect.send_command('show ip int brief')print(output)output = net_connect.send_config_set(['hostname my_sweet_rtr', 'commit'])print(output)output = net_connect.send_command('show run | b hostname')print(output)Script output#vagrant@vagrant-ubuntu-trusty-64#~$ python netmiko_tut.pyFri Jul 15 12#29#07.691 UTCInterface IP-Address Status Protocol Vrf-NameGigabitEthernet0/0/0/0 10.1.1.20 Up Up defaultMgmtEth0/RP0/CPU0/0 10.0.2.15 Up Up defaultconfig termFri Jul 15 12#29#09.739 UTCRP/0/RP0/CPU0#my_sweetest_rtr(config)#hostname my_sweetest_rtrRP/0/RP0/CPU0#my_sweetest_rtr(config)#commitFri Jul 15 12#29#10.332 UTCendconfig termFri Jul 15 12#29#12.475 UTCRP/0/RP0/CPU0#my_sweetest_rtr(config)#show run | include hostnameFri Jul 15 12#29#13.052 UTCBuilding configuration...hostname my_sweetest_rtrRP/0/RP0/CPU0#my_sweetest_rtr(config)#Now we can proceed with a more serious example. We will use a separate file with the router configuration. Open this file in python, read the lines individually and make a list of commands from it.vagrant@vagrant-ubuntu-trusty-64#~$ cat tel_conftelemetry encoder json policy group FirstGroup policy test transport tcp ! 
destination ipv4 10.1.1.10 port 2103commitPython piece of code to split lines into list with commands#with open('tel_conf') as f# lines = f.read().splitlines()print linestel_out = net_connect.send_config_set(lines)print tel_outRun python file#vagrant@vagrant-ubuntu-trusty-64#~$ python netmiko_tut.pyconfig termThu Jul 14 23#49#25.447 UTCRP/0/RP0/CPU0#xr(config)#telemetryRP/0/RP0/CPU0#xr(config-telemetry)# encoder jsonRP/0/RP0/CPU0#xr(config-telemetry-json)# policy group FirstGroupRP/0/RP0/CPU0#xr(config-policy-group)# policy testRP/0/RP0/CPU0#xr(config-policy-group)# transport tcpRP/0/RP0/CPU0#xr(config-telemetry-json)# !RP/0/RP0/CPU0#xr(config-telemetry-json)# destination ipv4 10.1.1.10 port 2103RP/0/RP0/CPU0#xr(config-policy-group)#commitThu Jul 14 23#49#26.400 UTCRP/0/RP0/CPU0#xr(config-policy-group)#Verify that config is here#RP/0/RP0/CPU0#my_sweet_rtr#show run | begin telemetryThu Jul 14 20#58#19.116 UTCBuilding configuration...xml agent ssl!xml agent tty!telemetry encoder json policy group FirstGroup policy test transport tcp ! destination ipv4 10.1.1.10 port 2103 ! !!endUseful commands#device.send_command('show ip int brief')Example using NAPALMFile for telemetry configuration will be used. We can change destination port from 2103 to 2109 to try diff command.At current moment commands related to configuration management doesn’t respond correctly, but informative commands work fine#vagrant@vagrant-ubuntu-trusty-64#~$ cat napalus.pyfrom napalm import get_network_driverdriver = get_network_driver('iosxr')device = driver('10.1.1.20', 'vagrant', 'vagrant')device.open()# print device.get_facts() ## doesn't workprint device.get_interfaces()print ''print device.get_interfaces_counters()print ''print device.get_users()device.close()Output will look like#{ 'GigabitEthernet0/0/0/0'# { 'is_enabled'# True, 'description'# u '', 'last_flapped'# -1.0, 'is_up'# True, 'mac_address'# u '0800.27b2.5406', 'speed'# 1000 }}{ 'GigabitEthernet0/0/0/0'# { 'tx_multicast_packets'# 0, 'tx_discards'# 0, 'tx_octets'# 6929839, 'tx_errors'# 0, 'rx_octets'# 586788, 'tx_unicast_packets'# 10799, 'rx_errors'# 0, 'tx_broadcast_packets'# 0, 'rx_multicast_packets'# 0, 'rx_broadcast_packets'# 3, 'rx_discards'# 0, 'rx_unicast_packets'# 9421 }}{ u 'vagrant'# { 'password'# '', 'sshkeys'# [], 'level'# 15 }}For more details and list of available methods#Netmiko githubNAPALM libraryNAPALM list of commandsGood luck with further automation!", "url": "/tutorials/2016-08-15-netmiko-and-napalm-with-ios-xr-quick-look/", "author": "Mike Korshunov", "tags": "vagrant, iosxr, Python" } , "tutorials-2016-08-22-using-puppet-with-iosxr-6-1-1": { "title": "Using Puppet with IOS-XR 6.1.1", "content": " IOS-XR# Puppet Introduction Prerequisites The ciscoyang Puppet Module Description Setup Pre-setup Puppet Master Puppet Agent / IOS-XRv Usage Puppet Manifest The cisco_yang Puppet Type The cisco_yang_netconf Puppet Type Apply Sample Puppet Manifest IntroductionThe goal of this tutorial is to set up Puppet Master and Puppet Agent on an Ubuntu and IOS-XRv vagrant instances respectively. This setup was tested on OSX, but the workflow is the same for other environments.Prerequisites Vagrant 1.8.4 for your Operating System. Virtualbox 5.0.x for your Operating System. A computer with atleast 8G free memory. Vagrantfile and scripts for provisioningVagrant 1.8.5 sets the permissions on ~vagrant/.ssh/authorized_keys to 0644 (world-readable) when replacing the insecure public key with a newly generated one. 
Since sshd will only accept keys readable just by their owner, vagrant up returns an error, since it cannot connect with the new key and the insecure key has already been removed. This is Vagrant bug #7610, which affects the CentOS Puppet-Master. You can either downgrade to Vagrant 1.8.4 or add config.ssh.username = ~vagrant~ and config.ssh.password = ~vagrant~ lines to the Vagrantfile. More information here.The ciscoyang Puppet ModuleThe ciscoyang module allows configuration of IOS-XR through Cisco-supported YANG data models in JSON/XML format. This module bundles the cisco_yang and cisco_yang_netconf Puppet types, providers, Beaker tests, and sample manifests to enable users to configure and manage IOS-XR.This GitHub repository contains the latest version of the ciscoyang module source code. Supported versions of the ciscoyang module are available at Puppet Forge.DescriptionThis module enables management of supported Cisco Network Elements through the cisco_yang and cisco_yang_netconf Puppet types and providers.A typical role-based architecture scenario might involve a network administrator who uses a version control system to manage various YANG-based configuration files. An IT administrator who is responsible for the puppet infrastructure can simply reference the YANG files from a puppet manifest in order to deploy the configuration.SetupPre-setupClone the vagrant-xrdocs repository with the puppet tutorials#$ cd ~$ git clone https#//github.com/ios-xr/vagrant-xrdocs.git$ cd ~/vagrant-xrdocs/puppet-tutorials/app_hosting/centos-pm/$ lsVagrantfile iosxrv.sh scripts xr_config configs puppetmaster.shTo add an IOS-XR box, you need to download it. IOS-XR Vagrant is currently in Private Beta. To download the box, you will need an API-KEY and a CCO-ID. To get the API-KEY and a CCO-ID, browse to the following link and follow the steps# Steps to Generate API-KEY$ BOXURL=~http#//devhub.cisco.com/artifactory/appdevci-release/XRv64/latest/iosxrv-fullk9-x64.box~$ curl -u CCO-ID#API-KEY $BOXURL --output ~/iosxrv-fullk9-x64.box$ vagrant box add --name IOS-XRv ~/iosxrv-fullk9-x64.boxOf course, you should replace CCO-ID with your cisco.com ID and API-KEY with the key you generated and copied using the above link.We should now have the IOS-XR box available. Use the vagrant box list command to display the current set of boxes on your system as shown below#$ vagrant box listIOS-XRv (virtualbox, 0)The Vagrantfile contains 2 Vagrant boxes, PuppetMaster and IOS-XRv.If you go to the app_hosting directory, you will find that we have two different setups of puppetmaster.$ cd ~/iosxr/vagrant-xrdocs/puppet-tutorials/app_hosting/$ lscentos-pm ubuntu-pmcentos-pm and ubuntu-pm have puppetserver installed on CentOS and Ubuntu respectively. The CentOS workflow also installs the beaker package to run Beaker tests, so consider centos-pm for development purposes.Boot up the IOS-XR and Puppet-Master boxes#$ cd ~/vagrant-xrdocs/puppet-tutorials/app_hosting/centos-pm/$ lsVagrantfile iosxrv.sh scripts xr_config configs puppetmaster.sh$ vagrant upBringing machine 'puppetmaster' up with 'virtualbox' provider...Bringing machine 'iosxrv' up with 'virtualbox' provider...This will take some time. If the guest OS logs a message to stderr, you might see a few red lines. Ignore them.Look for the “vagrant up” welcome message to confirm the machine has booted#==> iosxrv# Machine 'iosxrv' has a post `vagrant up` message.
This is a message==> iosxrv# from the creator of the Vagrantfile, and not from Vagrant itself#==> iosxrv#==> iosxrv#==> iosxrv# Welcome to the IOS XRv (64-bit) VirtualBox.==> iosxrv# To connect to the XR Linux shell, use# 'vagrant ssh'.==> iosxrv# To ssh to the XR Console, use# 'vagrant port' (vagrant version > 1.8)==> iosxrv# to determine the port that maps to guestport 22,==> iosxrv# then# 'ssh vagrant@localhost -p <forwarded port>'==> iosxrv#==> iosxrv# IMPORTANT# READ CAREFULLY==> iosxrv# The Software is subject to and governed by the terms and conditions==> iosxrv# of the End User License Agreement and the Supplemental End User==> iosxrv# License Agreement accompanying the product, made available at the==> iosxrv# time of your order, or posted on the Cisco website at==> iosxrv# www.cisco.com/go/terms (collectively, the 'Agreement').==> iosxrv# As set forth more fully in the Agreement, use of the Software is==> iosxrv# strictly limited to internal use in a non-production environment==> iosxrv# solely for demonstration and evaluation purposes. Downloading,==> iosxrv# installing, or using the Software constitutes acceptance of the==> iosxrv# Agreement, and you are binding yourself and the business entity==> iosxrv# that you represent to the Agreement. If you do not agree to all==> iosxrv# of the terms of the Agreement, then Cisco is unwilling to license==> iosxrv# the Software to you and (a) you may not download, install or use the==> iosxrv# Software, and (b) you may return the Software as more fully set forth==> iosxrv# in the Agreement.Puppet MasterTo access the Puppet Master box just issue the vagrant ssh command (no password required)#$ vagrant ssh puppetmasterThe Puppet Master instance is already configured via file “puppetmaster.sh”. This section is only for the user’s information. Let’s review the “puppetmaster.sh” script.The first line adds Puppet Master and IOS-XRv host information in /etc/hosts file. yes | sudo cp /home/ubuntu/hosts /etc/hosts > /dev/null 2>&1 Next, downloads required packages for Puppet Master and updates the system. wget -q https#//apt.puppetlabs.com/puppetlabs-release-pc1-xenial.debsudo dpkg -i puppetlabs-release-pc1-xenial.deb > /dev/null 2>&1sudo apt update -qq > /dev/null 2>&1sudo apt-get install puppetserver -qq > /dev/null Next, script clones the Puppet-Yang github repository and installs ciscoyang puppet module# git clone https#//github.com/cisco/cisco-yang-puppet-module.git -qcd cisco-yang-puppet-module/opt/puppetlabs/puppet/bin/puppet module build > /dev/nullsudo /opt/puppetlabs/puppet/bin/puppet module install pkg/*.tar.gz The last section creates a puppet configuration file and ensures that puppetserver service is running on the Puppet Master yes | sudo cp /home/ubuntu/puppet.conf /etc/puppetlabs/puppet/puppet.confsudo /opt/puppetlabs/bin/puppet resource service puppetserver ensure=running enable=true > /dev/null Puppet Agent / IOS-XRvTo access the IOS-XRv bash shell just issue the vagrant ssh command (no password required)#$ vagrant ssh iosxrvTo access the XR console on IOS-XRv requires an additional step to figure out the ssh port#$ vagrant port iosxrvThe forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution. 
22 (guest) => 2223 (host) 57722 (guest) => 2200 (host) $ ssh -p 2223 vagrant@localhost # password# vagrantvagrant@localhost's password#RP/0/RP0/CPU0#xrv9k#The IOS-XRv instance is already configured via “iosxrv.sh”. This section is only for the user’s information. Let’s review the “iosxrv.sh” script.The first section installs puppet agent on IOS-XRv. sudo rpm --import http#//yum.puppetlabs.com/RPM-GPG-KEY-puppetlabssudo rpm --import http#//yum.puppetlabs.com/RPM-GPG-KEY-reductivewget -q https#//yum.puppetlabs.com/puppetlabs-release-pc1-cisco-wrlinux-7.noarch.rpmsudo yum install -y puppetlabs-release-pc1-cisco-wrlinux-7.noarch.rpm > /dev/nullsudo yum update -y > /dev/nullsudo yum install -y puppet > /dev/null Next, downloads and installs grpcs gem. export PATH=/opt/puppetlabs/puppet/bin#$PATHwget -q https#//rubygems.org/downloads/grpc-0.15.0-x86_64-linux.gemsudo /opt/puppetlabs/puppet/bin/gem install --no-rdoc --no-ri grpc > /dev/null Next, copies configuration files# yes | sudo cp /home/vagrant/puppet.conf /etc/puppetlabs/puppet/puppet.confyes | sudo cp /home/vagrant/hosts /etc/hostsyes | sudo cp /home/vagrant/cisco_yang.yaml /etc/cisco_yang.yaml UsagePuppet ManifestThis section explains puppet manifest. This section is only for the user’s information. To apply manifest, jump to apply sample manifest section.The following example manifest shows how to use ciscoyang to configure two VRF instances on a Cisco IOS-XR device.node 'default' { cisco_yang { 'my-config'# ensure => present, target => '{~Cisco-IOS-XR-infra-rsi-cfg#vrfs~# [null]}', source => '{~Cisco-IOS-XR-infra-rsi-cfg#vrfs~# { ~vrf~#[ { ~vrf-name~#~VOIP~, ~description~#~Voice over IP~, ~vpn-id~#{~vpn-oui~#875, ~vpn-index~#3}, ~create~#[null] }, { ~vrf-name~#~INTERNET~, ~description~#~Generic external traffic~, ~vpn-id~#{~vpn-oui~#875,~vpn-index~#22}, ~create~#[null] }] } }', }}The following example manifest shows how to copy a file from the Puppet master to the agent and then reference it from the manifest. file { '/root/bgp.json'# source => 'puppet#///modules/ciscoyang/models/bgp.json' } cisco_yang { '{~Cisco-IOS-XR-ipv4-bgp-cfg#bgp~# [null]}'# ensure => present, mode => replace, source => '/root/bgp.json', }}The following example manifest shows how to use ciscoyang to configure two VRF instances on a Cisco IOS-XR device using the Yang NETCONF type.node 'default' { cisco_yang_netconf { 'my-config'# target => '<vrfs xmlns=~http#//cisco.com/ns/yang/Cisco-IOS-XR-infra-rsi-cfg~/>', source => '<vrfs xmlns=~http#//cisco.com/ns/yang/Cisco-IOS-XR-infra-rsi-cfg~> <vrf> <vrf-name>VOIP</vrf-name> <create/> <description>Voice over IP</description> <vpn-id> <vpn-oui>875</vpn-oui> <vpn-index>3</vpn-index> </vpn-id> </vrf> <vrf> <vrf-name>INTERNET</vrf-name> <create/> <description>Generic external traffic</description> <vpn-id> <vpn-oui>875</vpn-oui> <vpn-index>22</vpn-index> </vpn-id> </vrf> </vrfs>', mode => replace, force => false, }}The cisco_yang Puppet TypeAllows IOS-XR to be configured using YANG models in JSON format via gRPC.Parameters targetThe model path of the target node in YANG JSON format, or a reference to a local file containing the model path. For example, to configure the list of vrfs in IOS-XR, you could specify a target of '{~Cisco-IOS-XR-infra-rsi-cfg#vrfs~# [null]}' or reference a file which contained the same JSON string. modeDetermines which mode is used when setting configuration via ensure=>present. Valid values are replace and merge (which is the default). 
If replace is specified, the current configuration will be replaced by the configuration in the source property (corresponding to the ReplaceConfig gRPC operation). If merge is specified, the configuration in the source property will be merged into the current configuration (corresponding to the MergeConfig gRPC operation). forceValid values are true and false (which is the default). If true is specified, then the config in the source property is set on the device regardless of the current value. If false is specified (or no value is specified), the default behavior is to set the configuration only if it is different from the running configuration.Properties ensureDetermines whether a certain configuration should be present or not on the device. Valid values are present and absent. sourceThe model data in YANG JSON format, or a reference to a local file containing the model data. This property is only used when ensure=>present is specified. In addition, if source is not specified when ensure=>present is used, source will default to the value of the target parameter. This removes some amount of redundancy when the source and target values are the same (or very similar).The cisco_yang_netconf Puppet TypeAllows IOS-XR to be configured using YANG models in XML format via NETCONF.Parameters targetThe Yang Netconf XML formatted string or file location containing the filter used to query the existing configuration. For example, to configure the list of vrfs in IOS-XR, you could specify a target of ‘’ or reference a file which contained the equivalent Netconf XML string. modeDetermines which mode is used when setting configuration. Valid values are replace and merge (which is the default). If replace is specified, the current configuration will be replaced by the configuration in the source property. If merge is specified, the configuration in the source property will be merged into the current configuration. forceValid values are true and false (which is the default). If true is specified, then the config in the source property is set on the device regardless of the current value. If false is specified (or no value is specified), the default behavior is to set the configuration only if it is different from the running configuration.Properties sourceThe model data in YANG XML Netconf format, or a reference to a local file containing the model data. The Netconf protocol does not allow deletion of configuration subtrees, but instead requires addition of ‘operation=”delete”’ attributes in the YANG XML specified in the source property.
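Before moving on to the sample manifest, here is a hedged sketch (written for illustration here, not taken from the cisco-yang-puppet-module repository) of how the cisco_yang parameters documented above could be combined in one manifest. The target path mirrors the VRF examples earlier in this tutorial; the source file path is a placeholder.

```
# Illustrative manifest only: replace (rather than merge) the vrfs subtree and
# push it even if it already matches the running configuration.
cat <<'EOF' > /tmp/vrf_replace.pp
node 'default' {
  cisco_yang { '{"Cisco-IOS-XR-infra-rsi-cfg:vrfs": [null]}':
    ensure => present,
    mode   => replace,                # ReplaceConfig gRPC operation instead of the default merge
    force  => true,                   # apply even when the source matches the running config
    source => '/root/temp/vrfs.json',
  }
}
EOF
```

Setting ensure => absent on the same target would instead remove the vrfs subtree from the device. With that in mind, let's apply the sample manifest that ships with the module.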
Apply Sample Puppet ManifestCreate Sample ManifestA sample manifest file is included in the Puppet-Yang git repository. Copy the sample manifest file to the right location on the Puppet Master.$ vagrant ssh puppetmaster$ find . -name site.pp./cisco-yang-puppet-module/examples/site.pp$ sudo cp ./cisco-yang-puppet-module/examples/site.pp /etc/puppetlabs/code/environments/production/manifests/$ exitThe sample puppet manifest looks like#node 'default' { file { ~/root/temp/vrfs.json~# source => ~puppet#///modules/ciscoyang/models/defaults/vrfs.json~} # Configure two vrfs (VOIP & INTERNET) cisco_yang { '{~Cisco-IOS-XR-infra-rsi-cfg#vrfs~# [null]}'# ensure => present, source => '/root/temp/vrfs.json', }}Apply Sample ManifestThe sample manifest above requires a /root/temp directory on the Puppet agent, into which the XR configuration file vrfs.json is copied.$ vagrant ssh iosxrv$ sudo mkdir /root/temp/$ exitThe vrfs.json file#{ ~Cisco-IOS-XR-infra-rsi-cfg#vrfs~#{ ~vrf~#[{ ~vrf-name~#~VOIP~, ~description~#~Voice over IP~, ~vpn-id~#{~vpn-oui~#87, ~vpn-index~#3}, ~create~#[null] }, { ~vrf-name~#~INTERNET~, ~description~#~Generic external traffic~, ~vpn-id~#{~vpn-oui~#85, ~vpn-index~#22}, ~create~#[null] }] }}Run the Puppet agent (puppet agent -t) to apply the configuration on IOS-XRv.$ vagrant ssh iosxrv$ sudo puppet agent -t$ exitVerify the applied configuration#$ ssh -p 2223 vagrant@localhost # password# vagrantvagrant@localhost's password#RP/0/RP0/CPU0#xrv9k#show running-config vrfFri Aug 19 00#02#40.505 UTCvrf VOIP description Voice over IP vpn id 57#3!vrf INTERNET description Generic external traffic vpn id 55#16!$ exit", "url": "/tutorials/2016-08-22-using-puppet-with-iosxr-6-1-1", "author": "Sushrut Shirole", "tags": "vagrant, iosxr, cisco, linux, Puppet, xr toolbox" } , "tutorials-2016-09-28-solenoid-inject-routes-into-cisco-s-rib-table-using-grpc": { "title": "Solenoid: inject routes into Cisco's RIB table using gRPC", "content": " On This Page Introduction How Solenoid Works Pre-requisites Understand the Topology Clone the git repo Spin up the Ubuntu devbox Create Solenoid LXC tarball Install Application dependencies inside LXC Fetch the Application code from github Configure Solenoid and exaBGP Change the SSH port inside the container Package up the LXC Launch router topology Test out Solenoid Solenoid GUI Solenoid on the Backend IntroductionIf you haven’t checked out the XR toolbox Series, then you can do so here#XR Toolbox SeriesThis series is meant to help a beginner get started with application-hosting on IOS-XR.In this tutorial we intend to utilize almost all the techniques learned in the above series to inject third-party BGP routes into Cisco’s RIB table.How Solenoid WorksThis tutorial focuses on hosting the Solenoid application on IOS-XR, but the following is a brief description of how Solenoid works.For the demos, Solenoid uses exaBGP as its third-party BGP software. exaBGP will be running on an Ubuntu vagrant box as well as in a third-party container on the IOS-XR (see Understand the Topology for more information). The two boxes form a BGP neighbor relationship.When exaBGP in the IOS-XR container hears a neighborhood update (either an announcement of a new route or a withdrawal of an old route), Solenoid works as the glue between exaBGP and the Cisco RIB table. Solenoid hears the exaBGP update, pulls out the relevant data from the exaBGP update, and models it using the Cisco YANG model for static routes. Then it uses gRPC to send the data to the RIB table.
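To make that concrete, the data Solenoid sends over gRPC is JSON shaped by the Cisco-IOS-XR-ip-static-cfg YANG model. The sketch below is illustrative only; it is not taken from the Solenoid source, the exact container and leaf names should be verified against the YANG model, and the prefix and next-hop values are placeholders matching this tutorial's addressing.

```
# Illustrative only: approximate shape of a static-route payload built from the
# Cisco-IOS-XR-ip-static-cfg model. Verify node names against the model itself.
cat <<'EOF' > /tmp/static_route.json
{
 "Cisco-IOS-XR-ip-static-cfg:router-static": {
  "default-vrf": {
   "address-family": {
    "vrfipv4": {
     "vrf-unicast": {
      "vrf-prefixes": {
       "vrf-prefix": [
        {
         "prefix": "2.2.2.0",
         "prefix-length": 24,
         "vrf-route": {
          "vrf-next-hop-table": {
           "vrf-next-hop-next-hop-address": [
            { "next-hop-address": "11.1.1.20" }
           ]
          }
         }
        }
       ]
      }
     }
    }
   }
  }
 }
}
EOF
```

A payload of this shape is merged into the configuration for an announcement, and the corresponding subtree is removed for a withdrawal.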
Pre-requisitesMake sure you have Vagrant and VirtualBox installed on your system.The system must have 4.5GB of space available. The topology includes an IOS-XRv router (3.5G RAM) and an Ubuntu instance (501MB RAM).Go through the Vagrant quick-start tutorial, if you haven’t already, to learn how to use Vagrant with IOS-XR# IOS-XR vagrant quick-startIt would be beneficial for the user to go through the XR Toolbox Series. But it is not a hard requirement. Following the steps in this tutorial should work out just fine for this demo.Once you have everything set up, you should be able to see the IOS-XRv vagrant box in the vagrant box list command#lisroach@LISROACH-M-J0AY ~/W/X/S/vagrant> vagrant box listIOS XRv (virtualbox, 0)Understand the Topology devbox# The Ubuntu Vagrant box on the right. This is running exaBGP and is peered with the xrv router to its left. exaBGP is sending out 3 BGP neighbor announcements and withdrawals about every 2 seconds. xrv# The router on the left. This router is running a gRPC server, and is not running any version of Cisco’s BGP. It has an Ubuntu LXC as its third-party container instead, which is running exaBGP and the Solenoid application. solenoid container# The Ubuntu LXC that is running on the xrv. exaBGP is peered with the devbox and hears all of its neighbor’s announcements and withdrawals. Upon receiving a neighborhood update, exaBGP runs Solenoid, which uses a gRPC client and YANG models to send the new route (or withdrawn route) to the RIB table in the IOS-XR. Clone the git repoThe entire environment can be replicated on any machine running vagrant, provided there is at least 4.5GB of space available.Clone the Solenoid code from here# https#//github.com/ios-xr/Solenoid.gitlisroach@LISROACH-M-J0AY ~/Workspace> git clone https#//github.com/ios-xr/Solenoid.gitCloning into 'Solenoid'...remote# Counting objects# 1539, done.remote# Compressing objects# 100% (623/623), done.remote# Total 1539 (delta 884), reused 1508 (delta 866), pack-reused 0Receiving objects# 100% (1539/1539), 713.76 KiB | 317.00 KiB/s, done.Resolving deltas# 100% (884/884), done.Checking connectivity... done.lisroach@LISROACH-M-J0AY ~/Workspace>Spin up the Ubuntu devboxBefore we spin up the routers, we can create the container tarball for the Solenoid code. The way the launch scripts are set up for xrv, you can launch the vagrant boxes without creating a new Solenoid tarball (since one with the latest release will be downloaded for you automatically). But if you are interested in the absolute latest code, or are interested in the process for your own education, follow the steps below to create your own Solenoid tarball. If you are not interested, skip to Launch router topology.Move into the vagrant directory and launch only the devbox node#lisroach@LISROACH-M-J0AY ~/Workspace> cd Solenoid/vagrantlisroach@LISROACH-M-J0AY ~/W/S/vagrant> vagrant up devboxexaBGP is already installed and running on your devbox.
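For reference, here is a hedged sketch of the kind of exaBGP configuration and route-injection loop that could produce those periodic announcements and withdrawals on the devbox. The file names and the shell loop are illustrative; the actual provisioning scripts in the Solenoid repository may differ.

```
# Illustrative devbox-side exaBGP config: peer with XR (11.1.1.10) and run a
# helper process whose stdout feeds announce/withdraw commands to exaBGP.
cat <<'EOF' > /home/vagrant/devbox-router.ini
group demo {
    router-id 11.1.1.20;
    process add-routes {
        run /bin/bash /home/vagrant/add_routes.sh;
    }
    neighbor 11.1.1.10 {
        local-address 11.1.1.20;
        local-as 65000;
        peer-as 65000;
    }
}
EOF

# Illustrative injection loop: announce and then withdraw a few prefixes,
# pausing roughly two seconds between updates.
cat <<'EOF' > /home/vagrant/add_routes.sh
#!/bin/bash
while true; do
    for prefix in 1.1.1.0/24 2.2.2.0/24 3.3.3.0/24; do
        echo "announce route ${prefix} next-hop self"
        sleep 2
    done
    for prefix in 1.1.1.0/24 2.2.2.0/24 3.3.3.0/24; do
        echo "withdraw route ${prefix} next-hop self"
        sleep 2
    done
done
EOF
chmod +x /home/vagrant/add_routes.sh

# Run exaBGP with the config (on the devbox it is already running inside a
# screen session named "exabgp").
exabgp /home/vagrant/devbox-router.ini
```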
If you want to see it running, you can jump into the exabgp screen.vagrant@vagrant-ubuntu-trusty-64#~$ sudo screen -lsThere is a screen on# \t1762.exabgp \t(09/27/2016 10#43#34 PM) \t(Detached)1 Socket in /var/run/screen/S-root.vagrant@vagrant-ubuntu-trusty-64#~$ sudo screen -r exabgpTue, 27 Sep 2016 23#43#25 | INFO | 1764 | processes | Command from process add-routes # announce route 2.2.2.0/24 next-hop selfTue, 27 Sep 2016 23#43#25 | INFO | 1764 | reactor | Route added to neighbor 11.1.1.10 local-ip 11.1.1.20 local-as 65000 peer-as 65000 router-id 11.1.1.20 family-allowed in-open # 2.2.2.0/24 next-hop 11.1.1.20To detach from the screen, do the following#CTRL+a, dYou do not want to kill the process in the screen or destroy the screen, so be sure you detach properly. You will see the following output#vagrant@vagrant-ubuntu-trusty-64#~$ sudo screen -r exabgp[detached from 1762.exabgp]Create Solenoid LXC tarballEnter the devbox#lisroach@LISROACH-M-J0AY ~/W/S/vagrant> vagrant ssh devboxWelcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-92-generic x86_64) * Documentation# https#//help.ubuntu.com/ System information as of Tue Sep 27 23#20#46 UTC 2016 System load# 0.06 Users logged in# 0 Usage of /# 5.4% of 39.34GB IP address for eth0# 10.0.2.15 Memory usage# 36% IP address for eth1# 11.1.1.20 Swap usage# 0% IP address for lxcbr0# 10.0.3.1 Processes# 80 Graph this data and manage this system at# https#//landscape.canonical.com/ Get cloud support with Ubuntu Advantage Cloud Guest# http#//www.ubuntu.com/business/services/cloudNew release '16.04.1 LTS' available.Run 'do-release-upgrade' to upgrade to it.Last login# Tue Sep 27 23#20#46 2016 from 10.0.2.2vagrant@vagrant-ubuntu-trusty-64#~$Install LXC#vagrant@vagrant-ubuntu-trusty-64#~$ sudo apt-get updatevagrant@vagrant-ubuntu-trusty-64#~$ sudo apt -y install lxcCreate the Solenoid LXC template#vagrant@vagrant-ubuntu-trusty-64#~$ sudo lxc-create -t ubuntu --name solenoid\t Start the container. You will be dropped into the console once boot is complete.vagrant@vagrant-ubuntu-trusty-64#~$ sudo lxc-start --name solenoidsolenoid login# init# setvtrgb main process (428) terminated with status 1init# plymouth-upstart-bridge main process ended, respawningubuntuPassword#Username# ubuntuPassword# ubuntuInstall Application dependencies inside LXCInstall Solenoid, exaBGP and all of their dependencies inside the container. Initiate the following commands#ubuntu@solenoid#~$ sudo apt-get -y install git curl screen python-dev python-setuptools[sudo] password for ubuntu# ubuntuubuntu@solenoid#~$ sudo easy_install pipubuntu@solenoid#~$ sudo pip install virtualenv exabgpThese dependencies make it possible for us to install the important components of our applications.Fetch the Application code from githubNow, download Solenoid from github. Using the Solenoid directory, we can install most of the remaining dependencies with the setup.py installation script.ubuntu@solenoid#~$ git clone https#//github.com/ios-xr/Solenoid.gitLet’s install the dependencies in a virtualenv. First, navigate into the Solenoid directory and activate the virtualenv.ubuntu@solenoid#~$ cd Solenoidubuntu@solenoid#~$ virtualenv venvubuntu@solenoid#~$ source venv/bin/activateThe (venv) indicates that you have entered your virtualenv. Now you can install the dependencies, and they will only be available in your virtualenv. This means you will have to activate your virtualenv in order to run Solenoid.(venv) ubuntu@solenoid#~$ pip install grpcio(venv) ubuntu@solenoid#~$ python setup.py install Perfect! 
Now all of our dependencies have been installed.Configure Solenoid and exaBGPSolenoid requires a configuration file to indicate some important metadata. Create a file named solenoid.config with the following data (in the Solenoid/ top-level directory)#[default]\t# Name you choose for the nodetransport# gRPC # Either gRPC or RESTconfip# 11.1.1.10 # IP address of the destination RIB table (the XR device you intend to control)port# 57777 \t # Depends on what is configured for your gRPC or RESTconf serversusername# vagrant # Username for the XR devicepassword# vagrant # Password for the XR deviceThat is all we need to configure Solenoid for your system. Now we need to add a configuration file for exaBGP. Navigate to your home directory, and add a file named router.ini#group demo { router-id 11.1.1.10; process monitor-neighbors { encoder json; receive { parsed; updates; neighbor-changes; } run /usr/bin/env python /home/ubuntu/Solenoid/solenoid/edit_rib.py -f '/home/ubuntu/Solenoid/filter.txt'; } neighbor 11.1.1.20 { local-address 11.1.1.10; local-as 65000; peer-as 65000; }}The most important part of this code is#run /usr/bin/env python /home/ubuntu/Solenoid/solenoid/edit_rib.py -f '/home/ubuntu/Solenoid/filter.txt';This line runs a custom script. The /usr/bin/env python is the path to your python instance. Specifically, it is the path to the first python instance in your PATH, which is important because we are using a virtualenv where the python path might be different than the normal /usr/bin/python./home/ubuntu/Solenoid/solenoid/edit_rib.py is the path to the file that launches Solenoid.The second half of the line, -f '/home/ubuntu/Solenoid/filter.txt' is an optional file argument pointing to the file used for filtering .For more information about the router.ini file, please consult Solenoid’s Wiki and review exaBGP’s documentation.Change the SSH port inside the containerWhen we deploy the container to IOS-XR, we will share XR’s network namespace. Since IOS-XR already uses up port 22 and port 57722 for its own purposes, we need to pick some other port for our container.P.S. If you check the Vagrantfile, we intend to expose port 58822 to the user’s laptop directly, on IOS-XRv.Let’s change the SSH port to 58822#(venv) ubuntu@solenoid#~$ sudo sed -i s/Port\\ 22/Port\\ 58822/ /etc/ssh/sshd_configCheck that your port was updated successfully#(venv) ubuntu@solenoid#~$ cat /etc/ssh/sshd_config | grep PortPort 58822We’re good!Package up the LXCShutdown the container#(venv) ubuntu@solenoid#~$ sudo shutdown -h now(venv) ubuntu@solenoid#~$Broadcast message from ubuntu@solenoid \t(/dev/lxc/console) at 23#00 ...The system is going down for halt NOW!init# tty4 main process (369) killed by TERM signalinit# tty2 main process (372) killed by TERM signalinit# tty3 main process (373) killed by TERM signalinit# cron main process (383) killed by TERM signal...You’re back on the devbox.Become root and package up your tarball#vagrant@vagrant-ubuntu-trusty-64#~$ sudo -sroot@vagrant-ubuntu-trusty-64#~# cd /var/lib/lxc/solenoid/rootfs/root@vagrant-ubuntu-trusty-64#~# tar -czvf /vagrant/solenoid.tgz *See what we did there? We packaged up the container tarball as solenoid.tgz under /vagrant directory. Why is this important?Well, Vagrant also automatically shares a certain directory with your laptop (for most types of guest operating systems). So the /vagrant is automatically mapped to the directory in which you launched your vagrant instance. 
To check this, let’s get out of our vagrant instance and issue an ls in your launch directory#root@vagrant-ubuntu-trusty-64#~# exitexitvagrant@vagrant-ubuntu-trusty-64#~$ exitlogoutConnection to 127.0.0.1 closed.lisroach@LISROACH-M-J0AY ~/W/S/vagrant> pwd/Users/lisroach/Workspace/Solenoid/vagrantlisroach@LISROACH-M-J0AY ~/W/S/vagrant> ls -la solenoid.tgz-rw-r--r-- 1 lisroach staff 252417007 Aug 2 11#27 solenoid.tgz>Now you have your solenoid tarball! This will be used to launch the container on your IOS-XRv. If you did not create this tarball, the Vagrantfile is smart enough to grab the container from the internet.Launch router topologyLaunching the router topology is incredibly simple. Just do a vagrant up in the Solenoid/vagrant/ directory.lisroach@LISROACH-M-J0AY ~/W/S/vagrant> pwd/Users/lisroach/Workspace/Solenoid/vagrantlisroach@LISROACH-M-J0AY ~/W/S/vagrant> vagrant upBringing machine 'xrv' up with 'virtualbox' provider...Bringing machine 'devbox' up with 'virtualbox' provider...==> xrv# Importing base box 'IOS XRv'...It will take a few minutes, and you will see a number of ugly looking messages like these#==> xrv# tar# dev/audio2# Cannot mknod# Operation not permitted==> xrv# tar# dev/sequencer# Cannot mknod# Operation not permitted==> xrv# tar# dev/midi3# Cannot mknod# Operation not permitted==> xrv# tar# dev/mixer3# Cannot mknod# Operation not permitted==> xrv# tar# dev/smpte3# Cannot mknod# Operation not permitted==> xrv# tar# dev/mpu401data# Cannot mknod# Operation not permittedBut don’t worry, your vagrant boxes are working perfectly. Once you see the following message you will know you are done#==> xrv# Machine 'xrv' has a post `vagrant up` message. This is a message==> xrv# from the creator of the Vagrantfile, and not from Vagrant itself#==> xrv#==> xrv#==> xrv# Welcome to the IOS XRv (64-bit) VirtualBox....You can also check the status of your vagrant boxes#lisroach@LISROACH-M-J0AY ~/W/S/vagrant> vagrant statusCurrent machine states#xrv running (virtualbox)devbox running (virtualbox)This environment represents multiple VMs. The VMs are all listedabove with their current state. For more information about a specificVM, run `vagrant status NAME`.Great! Time to start playing with Solenoid.Test out SolenoidSolenoid GUIAfter completing the initial vagrant up, the application is already up and running. In your browser, navigate to#localhost#57780Here you will see the routes being added and withdrawn from the IOS-XRv’s RIB table.These routes are the routes that are being automatically sent and withdrawn from the exaBGP instance running in your devbox.Currently there is no filtering enabled, but feel free to add prefixes or prefix ranges to the filtering file. This file acts as a whitelist, so by adding a prefix or prefix range, all other prefixes will be dropped. For example, add the prefix range 1.1.1.0/24-2.2.2.0/24 to the filtering. Now watch as the 3.3.3.0/24 network never gets added to the RIB table, because it has been filtered out.To view the application running on the box, reference the instructions below on how to navigate the vagrant environment.Solenoid on the BackendLet’s see what Solenoid looks like on the box. First we’ll check our RIB table on the xrv. In order to do this, we need to SSH into the xrv. First, find out the port that has been forwarded for port 22. Then ssh into that port, and you will find yourself in the CLI. 
From there, view your RIB table.Password# vagrantlisroach@LISROACH-M-J0AY ~/W/S/vagrant> vagrant port xrvThe forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution.22 (guest) => 2223 (host) 57722 (guest) => 2222 (host) 57780 (guest) => 57780 (host) 58822 (guest) => 58822 (host) (venv) lisroach@LISROACH-M-J0AY ~/W/S/vagrant> ssh -p 2223 vagrant@localhostvagrant@localhost's password#RP/0/RP0/CPU0#ios#RP/0/RP0/CPU0#ios#show ip routeWed Sep 28 18#33#18.266 UTCCodes# C - connected, S - static, R - RIP, B - BGP, (>) - Diversion path D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2 E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP i - ISIS, L1 - IS-IS level-1, L2 - IS-IS level-2 ia - IS-IS inter area, su - IS-IS summary null, * - candidate default U - per-user static route, o - ODR, L - local, G - DAGR, l - LISP A - access/subscriber, a - Application route M - mobile route, r - RPL, (!) - FRR Backup pathGateway of last resort is 10.0.2.2 to network 0.0.0.0S* 0.0.0.0/0 [1/0] via 10.0.2.2, 01#01#34C 10.0.2.0/24 is directly connected, 01#03#27, MgmtEth0/RP0/CPU0/0L 10.0.2.15/32 is directly connected, 01#03#27, MgmtEth0/RP0/CPU0/0L 10.1.1.5/32 is directly connected, 01#01#34, Loopback1C 11.1.1.0/24 is directly connected, 01#01#34, GigabitEthernet0/0/0/0L 11.1.1.10/32 is directly connected, 01#01#34, GigabitEthernet0/0/0/0RP/0/RP0/CPU0#ios#We can see here there are currently no static routes except for 0.0.0.0/0. You may see some routes other than this, as Solenoid is running and adding routes constantly to the RIB.Now leave this screen up, open a new tab in your terminal and jump into the Solenoid container. Remember when we changed the ssh port of the container? Now we will use that port to SSH directly from our CLI into the Solenoid container.Password # ubuntulisroach@LISROACH-M-J0AY ~/W/S/vagrant> vagrant port xrvThe forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution. 22 (guest) => 2223 (host) 57722 (guest) => 2222 (host) 57780 (guest) => 57780 (host) 58822 (guest) => 58822 (host)lisroach@LISROACH-M-J0AY ~/W/S/vagrant> ssh -p 58822 ubuntu@localhost The authenticity of host '[localhost]#58822 ([127.0.0.1]#58822)' can't be established.ECDSA key fingerprint is SHA256#Swie3V2VIYDNCACaRLbSjQa7417yIM6hpbeimNwZr1o.Are you sure you want to continue connecting (yes/no)? yesWarning# Permanently added '[localhost]#58822' (ECDSA) to the list of known hosts.ubuntu@localhost's password#Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.14.23-WR7.0.0.2_standard x86_64) * Documentation# https#//help.ubuntu.com/Last login# Thu Sep 22 21#31#13 2016We are now on the Solenoid container that is running on the xrv. Solenoid is currently running in a screen named exaBGP. 
Resume the screen to see Solenoid running.ubuntu@solenoid#~$ubuntu@solenoid#~$ screen -lsThere are screens on# \t1423.website \t(09/28/2016 05#38#22 PM) \t(Detached) \t1421.exabgp \t(09/28/2016 05#38#22 PM) \t(Detached)2 Sockets in /var/run/screen/S-ubuntu.ubuntu@solenoid#~$ubuntu@solenoid#~$ screen -r exabgpWed, 28 Sep 2016 18#35#04 | INFO | 1436 | solenoid | WITHDRAW | OKWed, 28 Sep 2016 18#35#06 | INFO | 1436 | solenoid | WITHDRAW | OKWed, 28 Sep 2016 18#35#11 | INFO | 1436 | solenoid | ANNOUNCE | OKWed, 28 Sep 2016 18#35#13 | INFO | 1436 | solenoid | ANNOUNCE | OKWed, 28 Sep 2016 18#35#17 | INFO | 1436 | solenoid | WITHDRAW | OKWed, 28 Sep 2016 18#35#19 | INFO | 1436 | solenoid | WITHDRAW | OKWed, 28 Sep 2016 18#35#25 | INFO | 1436 | solenoid | ANNOUNCE | OKWed, 28 Sep 2016 18#35#27 | INFO | 1436 | solenoid | ANNOUNCE | OKWed, 28 Sep 2016 18#35#37 | INFO | 1436 | solenoid | WITHDRAW | OKWed, 28 Sep 2016 18#35#37 | INFO | 1436 | solenoid | WITHDRAW | OKWed, 28 Sep 2016 18#35#38 | INFO | 1436 | solenoid | ANNOUNCE | OKWed, 28 Sep 2016 18#35#40 | INFO | 1436 | solenoid | ANNOUNCE | OKWed, 28 Sep 2016 18#35#44 | INFO | 1436 | solenoid | WITHDRAW | OKWed, 28 Sep 2016 18#35#46 | INFO | 1436 | solenoid | WITHDRAW | OKThese messages show the output of Solenoid running. All of the OKs show us that it is running properly. If you hop back to your tab running the CLI and run show ip route a few times, you will see the RIB table changing with the messages that Solenoid is sending.RP/0/RP0/CPU0#ios#show ip routeWed Sep 28 18#49#22.165 UTCCodes# C - connected, S - static, R - RIP, B - BGP, (>) - Diversion path D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2 E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP i - ISIS, L1 - IS-IS level-1, L2 - IS-IS level-2 ia - IS-IS inter area, su - IS-IS summary null, * - candidate default U - per-user static route, o - ODR, L - local, G - DAGR, l - LISP A - access/subscriber, a - Application route M - mobile route, r - RPL, (!) - FRR Backup pathGateway of last resort is 10.0.2.2 to network 0.0.0.0S* 0.0.0.0/0 [1/0] via 10.0.2.2, 01#17#38S 1.1.1.0/24 [1/0] via 11.1.1.20, 00#00#00C 10.0.2.0/24 is directly connected, 01#19#31, MgmtEth0/RP0/CPU0/0L 10.0.2.15/32 is directly connected, 01#19#31, MgmtEth0/RP0/CPU0/0L 10.1.1.5/32 is directly connected, 01#17#38, Loopback1C 11.1.1.0/24 is directly connected, 01#17#38, GigabitEthernet0/0/0/0L 11.1.1.10/32 is directly connected, 01#17#38, GigabitEthernet0/0/0/0RP/0/RP0/CPU0#ios#show ip routeWed Sep 28 18#49#25.660 UTCCodes# C - connected, S - static, R - RIP, B - BGP, (>) - Diversion path D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2 E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP i - ISIS, L1 - IS-IS level-1, L2 - IS-IS level-2 ia - IS-IS inter area, su - IS-IS summary null, * - candidate default U - per-user static route, o - ODR, L - local, G - DAGR, l - LISP A - access/subscriber, a - Application route M - mobile route, r - RPL, (!) 
- FRR Backup pathGateway of last resort is 10.0.2.2 to network 0.0.0.0S* 0.0.0.0/0 [1/0] via 10.0.2.2, 01#17#42S 1.1.1.0/24 [1/0] via 11.1.1.20, 00#00#03S 2.2.2.0/24 [1/0] via 11.1.1.20, 00#00#01C 10.0.2.0/24 is directly connected, 01#19#35, MgmtEth0/RP0/CPU0/0L 10.0.2.15/32 is directly connected, 01#19#35, MgmtEth0/RP0/CPU0/0L 10.1.1.5/32 is directly connected, 01#17#42, Loopback1C 11.1.1.0/24 is directly connected, 01#17#42, GigabitEthernet0/0/0/0L 11.1.1.10/32 is directly connected, 01#17#42, GigabitEthernet0/0/0/0RP/0/RP0/CPU0#ios#From this example you can see that we first added 1.1.1.0/24, then in a moment 2.2.2.0/24 was added. 3.3.3.0/24 will never be added, since we added the filtering on the GUI.Hopfully this tutorial was helpful! If you have issues or questions running Solenoid, please visit Solenoid’s Issues page and submit your question.", "url": "/tutorials/2016-09-28-solenoid-inject-routes-into-cisco-s-rib-table-using-grpc/", "author": "Lisa Roach", "tags": "vagrant, iosxr" } , "tutorials-2017-02-26-running-docker-containers-on-ios-xr-6-1-2": { "title": "XR toolbox, Part 6: Running Docker Containers on IOS-XR (6.1.2+)", "content": " Running Docker Containers on IOS-XR Introduction Pre-requisites Vagrant IOS-XR box Physical (NCS5500 and ASR9k) Understand the topology Vagrant Setup NCS5500 and ASR9k Setup Install docker-engine on the devbox Vagrant setup NCS5500 and ASR9k setup Docker Daemon support on IOS-XR Vagrant and NCS5500 architecture ASR9k architecture Vagrant setup Docker Client Access NCS5500 and ASR9k Docker Client Access Launch a Docker Container Public Dockerhub registry Vagrant Setup NCS5500 and ASR9k Setup Private “insecure” registry Setting up the insecure registry Vagrant Setup NCS5500 setup ASR9k setup Private Self-Signed Registry Setting up a self-signed Docker Registry Vagrant Setup NCS5500 and ASR9k Setup Docker Save/Load Technique Create a docker image tarball Vagrant Setup NCS5500 and ASR9k setup. Docker export/import Technique Create a custom docker Container tarball/snapshot Vagrant Setup NCS5500 and ASR9k setup. What can I do with the Docker container? Testing out a Web Server IntroductionIf you haven’t checked out the earlier parts to the XR toolbox Series, then you can do so here# XR Toolbox SeriesThe purpose of this series is simple. Get users started with an IOS-XR setup on their laptop and incrementally enable them to try out the application-hosting infrastructure on IOS-XR.In this part, we explore how a user can spin up Docker containers on IOS-XR. There are multiple ways to do this and we’ll explore each one# Public Dockerhub Registry# This is the simplest setup that most docker users would be well aware of. All you need to do is set up reachability to dockerhub with the correct dns resolution. Private “insecure” registry# Some users may choose to do this, specially if they’re running a local docker registry inside a secured part of their network. Private “self-signed” registry# This is more secure than the “insecure” setup, and allows a user to enable TLS. Private “secure” registry# Set up reachability to your private registry, created using a certificate obtained from a CA. The steps used to set this up are identical to a private self-signed registry except for the creation of the certificate. We won’t really tackle this scenario separately in this tutorial due to the absence of said certificate #). Tarball image/container# This is the simplest setup - very similar to LXC deployments. 
In this case, a user may create and set up a container completely off-box, package it up as an image or a container tar ball, transfer it to the router and then load/import it, before running. For each case, we will compare IOS-XR running as a Vagrant box with IOS-XR running on a physical box (NCS5500 and ASR9k). They should be identical, except for reachability through the Management ports.Pre-requisitesVagrant IOS-XR boxIf you’re bringing up the topology on your laptop using the IOS-XR vagrant box, then# Meet the pre-requisites specified in the IOS-XR Vagrant Quick Start guide# Pre-requisites. The topology here will require about 5G RAM and 2 cores on the user’s laptop. Clone the following repository# https#//github.com/ios-xr/vagrant-xrdocs, before we start.cd ~/git clone https#//github.com/ios-xr/vagrant-xrdocs.gitcd vagrant-xrdocs/You will notice a few directories. We will utilize the docker-app-topo-bootstrap directory in this tutorial.AKSHSHAR-M-K0DS#vagrant-xrdocs akshshar$ pwd/Users/akshshar/vagrant-xrdocsAKSHSHAR-M-K0DS#vagrant-xrdocs akshshar$ ls docker-app-topo-bootstrap/Vagrantfile\tconfigs\t\tscriptsAKSHSHAR-M-K0DS#vagrant-xrdocs akshshar$ Physical (NCS5500 and ASR9k)On the other hand, if you have an NCS5500 or ASR9k lying around (don’t we all?), then load up a 6.1.2+ image on the router and connect an Ubuntu server (for the purpose of this tutorial), to the Management network of the router.The server needs to be reachable from the router over the Management network.Further, we’re going to enable SSH access in XR CLI and in XR linux shell to achieve an equivalence between the NCS5500/ASR9k and Vagrant setup.Note# NCS5500 steps are described, but ASR9k works in exactly the same way.Enable SSH access in the XR CLIOn my NCS5500 setup, I can enable SSH in XR in the default (global) vrf with the following steps and CLI#RP/0/RP0/CPU0#ncs5508#crypto key generate rsaMon Mar 6 05#28#57.184 UTCThe name for the keys will be# the_default Choose the size of the key modulus in the range of 512 to 4096 for your General Purpose Keypair. Choosing a key modulus greater than 512 may take a few minutes.How many bits in the modulus [2048]# Generating RSA keys ...Done w/ crypto generate keypair[OK]RP/0/RP0/CPU0#ncs5508#RP/0/RP0/CPU0#ncs5508#show running-config sshMon Mar 6 05#29#51.819 UTCssh server v2ssh server vrf defaultRP/0/RP0/CPU0#ncs5508#Enable SSH access to XR linux shellThis is openssh running in the XR linux environment. Users may choose to keep this disabled based on the kind of operations they intend to have. Enabling it in a given network namespace (equivalent to XR vrf) opens up port 57722 on all the IP addresses reachable in that VRF.In 6.1.2, only global-vrf (default vrf) is supported in the linux environment for SSH and apps. Post 6.3.1, support for Mgmt vrfs in the linux shell will be brought in.To enable SSH access in the XR linux shell for a sudo user, we’ll take 3 steps# Enable the “sudo” group permissions in /etc/sudoers Open up /etc/sudoers using vi in the XR bash shell and uncomment the following line# # %sudo ALL=(ALL) ALL Save and exit (#wq in vi). Create a non-root user. This is important. For security reasons, root user access over SSH (SSH in the linux shell) is disabled. 
Only the root XR user can create new (sudo or non-sudo) users, so use the “bash” cli to get into the shell# RP/0/RP0/CPU0#ncs5508#RP/0/RP0/CPU0#ncs5508#bashMon Mar 6 06#16#01.391 UTC [ncs5508#~]$[ncs5508#~]$adduser ciscoLogin name for new user []#ciscoUser id for cisco [ defaults to next available]#Initial group for cisco [users]#Additional groups for cisco []#sudocisco's home directory [/home/cisco]#cisco's shell [/bin/bash]#cisco's account expiry date (MM/DD/YY) []#OK, Im about to make a new account. Heres what you entered so far#New login name# ciscoNew UID# [Next available]Initial group# users/usr/sbin/adduser# line 68# [# -G# binary operator expectedAdditional groups# sudoHome directory# /home/ciscoShell# /bin/bashExpiry date# [no expiration]This is it... if you want to bail out, you'd better do it now.Making new account...useradd# user 'cisco' already existsChanging the user information for ciscoEnter the new value, or press ENTER for the default Full Name []# Room Number []# Work Phone []# Home Phone []# Other []# Enter new UNIX password# Retype new UNIX password# passwd# password updated successfullyDone...[ncs5508#~]$ Finally enable SSH access by starting the sshd_operns service# [ncs5508#~]$service sshd_operns startMon Mar 6 06#21#53 UTC 2017 /etc/init.d/sshd_operns# Waiting for OPERNS interface creation...Mon Mar 6 06#21#53 UTC 2017 /etc/init.d/sshd_operns# Press ^C to stop if needed.Mon Mar 6 06#21#54 UTC 2017 /etc/init.d/sshd_operns# Found nic, Mg0_RP0_CPU0_0Mon Mar 6 06#21#54 UTC 2017 /etc/init.d/sshd_operns# Waiting for OPERNS management interface creation...Mon Mar 6 06#21#54 UTC 2017 /etc/init.d/sshd_operns# Found nic, Mg0_RP0_CPU0_0Mon Mar 6 06#21#54 UTC 2017 /etc/init.d/sshd_operns# OPERNS is readyMon Mar 6 06#21#54 UTC 2017 /etc/init.d/sshd_operns# Start sshd_opernsStarting OpenBSD Secure Shell server# sshd generating ssh RSA key... generating ssh ECDSA key... generating ssh DSA key... generating ssh ED25519 key...[ncs5508#~]$ Check that the sshd_operns service is now listening on port 57722 in the global-vrf network namespace# netns_identify utility is to check which network namespace a process is in. $$ gets the pid of the current shell. In the output below, tpnns is a symbolic link of global-vrf. So they both mean the same thing - XR default VRF mapped to a network namespace in linux. All XR interfaces in the default(global) vrf will appear in the linux shell in this network namespace. Issuing an ifconfig will show up these interfaces. 
[ncs5508#~]$netns_identify $$tpnnsglobal-vrf[ncs5508#~]$netstat -nlp | grep 57722tcp 0 0 0.0.0.0#57722 0.0.0.0#* LISTEN 622/sshd tcp6 0 0 ###57722 ###* LISTEN 622/sshd [ncs5508#~]$[ncs5508#~]$ifconfigMg0_RP0_CPU0_0 Link encap#Ethernet HWaddr 80#e0#1d#00#fc#ea inet addr#11.11.11.59 Mask#255.255.255.0 inet6 addr# fe80##82e0#1dff#fe00#fcea/64 Scope#Link UP RUNNING NOARP MULTICAST MTU#1514 Metric#1 RX packets#3830 errors#0 dropped#0 overruns#0 frame#0 TX packets#4 errors#0 dropped#0 overruns#0 carrier#3 collisions#0 txqueuelen#1000 RX bytes#1288428 (1.2 MiB) TX bytes#280 (280.0 B)fwd_ew Link encap#Ethernet HWaddr 00#00#00#00#00#0b inet6 addr# fe80##200#ff#fe00#b/64 Scope#Link UP RUNNING NOARP MULTICAST MTU#1500 Metric#1 RX packets#18 errors#0 dropped#10 overruns#0 frame#0 TX packets#2 errors#0 dropped#1 overruns#0 carrier#0 collisions#0 txqueuelen#1000 RX bytes#486 (486.0 B) TX bytes#140 (140.0 B) fwdintf Link encap#Ethernet HWaddr 00#00#00#00#00#0a inet6 addr# fe80##200#ff#fe00#a/64 Scope#Link UP RUNNING NOARP MULTICAST MTU#1500 Metric#1 RX packets#0 errors#0 dropped#0 overruns#0 frame#0 TX packets#2 errors#0 dropped#1 overruns#0 carrier#0 collisions#0 txqueuelen#1000 RX bytes#0 (0.0 B) TX bytes#140 (140.0 B)lo Link encap#Local Loopback inet addr#127.0.0.1 Mask#255.0.0.0 inet6 addr# ##1/128 Scope#Host UP LOOPBACK RUNNING NOARP MULTICAST MTU#65536 Metric#1 RX packets#0 errors#0 dropped#0 overruns#0 frame#0 TX packets#0 errors#0 dropped#0 overruns#0 carrier#0 collisions#0 txqueuelen#0 RX bytes#0 (0.0 B) TX bytes#0 (0.0 B)lo#0 Link encap#Local Loopback inet addr#1.1.1.1 Mask#255.255.255.255 UP LOOPBACK RUNNING NOARP MULTICAST MTU#65536 Metric#1[ncs5508#~]$ Awesome! Now let’s test SSH access directly into the linux shell#As seen from the above output, the Mgmt port (Mg0_RP0_CPU0_0) has an IP 11.11.11.59 and the port 57722 is open all the IP addresses in the corresponding network namespace.From the directly connected “devbox” or jumpserver I can then issue an ssh as follows#cisco@dhcpserver#~$ ssh cisco@11.11.11.59 -p 57722cisco@11.11.11.59's password# -sh# /var/log/boot.log# Permission deniedncs5508#~$ ncs5508#~$ sudo -iPassword# [ncs5508#~]$ [ncs5508#~]$ whoamiroot[ncs5508#~]$ Works like a charm!Understand the topologyThe topology I’m using differs slightly between the vagrant setup and the NCS5500 setup.This is owing to the fact that the Management port of the vagrant IOS-XR box is used up in the NAT network. So to show equivalence between the two setups, I directly connect the Gig0/0/0/0 interface of Vagrant ios-xrv64 with eth1 of the devbox as shown in the figure below.The two topologies in use are#Vagrant SetupNCS5500 and ASR9k SetupInstall docker-engine on the devboxVagrant setupFor the Vagrant setup, you will see a script called docker_install.sh under the scripts folder#AKSHSHAR-M-K0DS#docker-app-topo-bootstrap akshshar$ pwd/Users/akshshar/vagrant-xrdocs/docker-app-topo-bootstrapAKSHSHAR-M-K0DS#docker-app-topo-bootstrap akshshar$ lsVagrantfile\tconfigs\t\tscriptsAKSHSHAR-M-K0DS#docker-app-topo-bootstrap akshshar$ ls scripts/apply_config.sh\t\tdocker_install.shAKSHSHAR-M-K0DS#docker-app-topo-bootstrap akshshar$ This is the vagrant provisioner for the devbox and will install docker-engine on boot (vagrant up).NCS5500 and ASR9k setupIn this case, the devbox must be provisioned by the user. On an ubuntu devbox, docker-engine can be installed by following the instructions at# https#//docs.docker.com/engine/installation/linux/ubuntu/Perfect! Now we’re all set with the topology and SSH access. 
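As a reference for the manual NCS5500/ASR9k devbox case, here is a minimal, hedged sketch of installing docker-engine on an Ubuntu devbox, assuming it has internet access. The official Docker documentation linked above remains the authoritative procedure, and on the Vagrant devbox the docker_install.sh provisioner takes care of this automatically.

```
# One common approach: Docker's convenience script (check the official docs first).
curl -fsSL https://get.docker.com/ | sh

# Optionally let the current user run docker without sudo (log out and back in afterwards).
sudo usermod -aG docker $USER

# Sanity checks: the daemon should respond and be able to run a container.
sudo docker info
sudo docker run --rm hello-world
```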
Before we begin, let’s understand the docker daemon/client setup inside IOS-XR.Docker Daemon support on IOS-XRVagrant and NCS5500 architectureIf you haven’t already gone through the basic overview on the application hosting infrastructure on XR, I would urge you to have a quick read# https#//xrdocs.io/application-hosting/blogs/2016-06-28-xr-app-hosting-architecture-quick-look/ Relevant Platforms# The LXC architecture described above and expanded on below is relevant to the following platforms# NCS5500 (NCS5501, NCS5501-SE, NCS5502, NCS5502-SE, NCS5508, NCS5516) NCS5000 NCS5011 XRv9k IOS-XRv64 (Vagrant box and ISO) From the above article it becomes fairly clear that internally the IOS-XR architecture involves a Host layer running the libvirtd daemon and IOS-XR runs as an LXC spawned using the daemon.Further, the “virsh” client is provided within the XR LXC, so that a user may have client level access to the daemon while sitting inside the XR LXC itself.The setup for launching LXCs in IOS-XR is shown below#The Docker client/daemon setup follows the exact same principle as shown below. Docker Daemon runs on the host and Docker client is made available inside the XR LXC for easy operationalization#ASR9k architectureThe ASR9k architecture is slightly different. In ASR9k, IOS-XR runs inside its own VM on the 64-bit Linux host to be able to support ISSU requirements relevant to traditional Service Provider deployments.In this case, the libvirtd and docker daemons are available inside the XR control plane VM itself.This does not change the user experience from a docker client or virsh client perspective. The difference is mainly how one may interact with the docker daemon as we’ll touch upon in subsequent sections.This is what the architecture looks like for ASR9k#ASR9k LXC/libvirt Setup#Libvirt daemon is local to the XR control plane VM.ASR9k Docker Setup#Docker daemon is local to the XR control plane VM.Alright, so can we verify this?Vagrant setup Docker Client AccessOn your vagrant box, there are two ways to get access to the docker client# Drop into the “bash” shell from XR CLI# Using “bash” ensures that the correct environment variables are sourced to gain access to the Docker Daemon on the host# Password for the XR CLI# vagrant AKSHSHAR-M-K0DS#docker-app-topo-bootstrap akshshar$ vagrant port rtr The forwarded ports for the machine are listed below. Please note that these values may differ from values configured in the Vagrantfile if the provider supports automatic port collision detection and resolution. 22 (guest) => 2223 (host) 57722 (guest) => 2222 (host) AKSHSHAR-M-K0DS#docker-app-topo-bootstrap akshshar$ AKSHSHAR-M-K0DS#docker-app-topo-bootstrap akshshar$ AKSHSHAR-M-K0DS#docker-app-topo-bootstrap akshshar$ AKSHSHAR-M-K0DS#docker-app-topo-bootstrap akshshar$ ssh -p 2223 vagrant@localhost The authenticity of host '[localhost]#2223 ([127.0.0.1]#2223)' can't be established. RSA key fingerprint is SHA256#uHev9uiAa0LM36RnnxDYuRyKywra8Oe/G5Gt34OiBqk. Are you sure you want to continue connecting (yes/no)? yes Warning# Permanently added '[localhost]#2223' (RSA) to the list of known hosts. 
vagrant@localhost's password# RP/0/RP0/CPU0#ios# RP/0/RP0/CPU0#ios# RP/0/RP0/CPU0#ios#bash Sun Mar 5 18#17#18.380 UTC [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$whoami root [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES [xr-vm_node0_RP0_CPU0#~]$ Bear in mind that when you drop into the XR linux shell using the “bash” CLI, you are droppped in as root. This is why you can access the docker client without any hassle. For any other user, you will need to first become root (using sudo). Drop directly into the Linux shell over SSH (port 57722)# From the above output for vagrant port rtr, the port 57722 on XR (running openssh in the XR linux shell) is accessible via port 2222 on the host machine (laptop)# Use either vagrant ssh rtr or ssh -p 2222 vagrant@localhost to drop into the XR linux shell Username# vagrantPassword# vagrant AKSHSHAR-M-K0DS#docker-app-topo-bootstrap akshshar$ vagrant ssh rtr Last login# Sun Mar 5 18#55#20 2017 from 10.0.2.2 xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$whoami vagrant xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ sudo -i [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ whoami root [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES [xr-vm_node0_RP0_CPU0#~]$ As shown above, we become root by using -i flag for sudo to make sure the correct environment variables are sourced. NCS5500 and ASR9k Docker Client AccessIf you followed the steps in the pre-requisites section above # Pre-requisites, you would already have access to your NCS5500/ASR9k device over XR SSH (CLI, port 22) as well as sshd_operns (XR linux shell, port 57722)Following the Vagrant model, over XR SSH, we use the “bash” CLI to access the docker client on the NCS5500/ASR9k#Note# The steps for ASR9k are identical. NCS5500 steps are shown below.cisco@dhcpserver#~$ cisco@dhcpserver#~$ ssh root@11.11.11.59The authenticity of host '11.11.11.59 (11.11.11.59)' can't be established.RSA key fingerprint is 8a#42#49#bf#4c#cd#f9#3c#e1#19#f9#02#b6#3a#ad#01.Are you sure you want to continue connecting (yes/no)? yesWarning# Permanently added '11.11.11.59' (RSA) to the list of known hosts.Password# RP/0/RP0/CPU0#ncs5508#RP/0/RP0/CPU0#ncs5508#RP/0/RP0/CPU0#ncs5508#bashMon Mar 6 09#36#37.221 UTC[ncs5508#~]$whoamiroot[ncs5508#~]$docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES[ncs5508#~]$[ncs5508#~]$Similarly, for direct access to the linux shell, we ssh over 57722, become sudo and then access the docker client#SSH password and sudo password for user cisco will be whatever you’ve set up during the Pre-requisites stage.cisco@dhcpserver#~$ ssh cisco@11.11.11.59 -p 57722cisco@11.11.11.59's password# Permission denied, please try again.cisco@11.11.11.59's password# Last login# Mon Mar 6 06#30#47 2017 from 11.11.11.2-sh# /var/log/boot.log# Permission deniedncs5508#~$ ncs5508#~$ ncs5508#~$ sudo -iPassword# [ncs5508#~]$ [ncs5508#~]$ docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES[ncs5508#~]$ Launch a Docker ContainerAs discussed earlier, we’ll showcase a few different techniques through which a user may spin up a docker container on IOS-XR.Public Dockerhub registryThis is the simplest setup that most docker users would know already. 
The obvious configuration necessary would be to make sure connectivity to the internet is available from the router.This may not be the preferred setup for production deployments, understandably, since direct connectivity to the internet from a production router is not typical. The next few techniques with private registries or tarball based docker container bringup might be more your cup of tea, in that case.Vagrant SetupThe vagrant IOS-XR box comes with connectivity to the internet already. All you need to do is set up the domain name-server in the global-vrf (before 6.3.1, we only support the global/default vrf for the docker daemon image downloads).Remember that we’re setting up this domain name on per vrf basis. In the future, we intend to sync this through XR CLI for all vrfs to the corresponding network namespaces. Before 6.3.1, of course only global-vrf may be used.Update /etc/netns/global-vrf/resolv.conf to point to a reachable nameserver, in this case 8.8.8.8#[xr-vm_node0_RP0_CPU0#~]$cat /etc/netns/global-vrf/resolv.confnameserver 8.8.8.8[xr-vm_node0_RP0_CPU0#~]$Again, become root with the correct environment (sudo -i) to execute the relevant docker commands to spin up the container.[xr-vm_node0_RP0_CPU0#~]$sudo -i[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ whoami root[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES[xr-vm_node0_RP0_CPU0#~]$docker imagesREPOSITORY TAG IMAGE ID CREATED SIZE[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$ docker run -itd --name ubuntu -v /var/run/netns/global-vrf#/var/run/netns/global-vrf --cap-add=SYS_ADMIN ubuntu bashUnable to find image 'ubuntu#latest' locallylatest# Pulling from library/ubuntud54efb8db41d# Pull complete f8b845f45a87# Pull complete e8db7bf7c39f# Pull complete 9654c40e9079# Pull complete 6d9ef359eaaa# Pull complete Digest# sha256#dd7808d8792c9841d0b460122f1acf0a2dd1f56404f8d1e56298048885e45535Status# Downloaded newer image for ubuntu#latest495ec2ab0b201418999e159b81a934072be504b05cc278192d8152efd4965635[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES495ec2ab0b20 ubuntu ~bash~ 7 minutes ago Up 7 minutes ubuntu[xr-vm_node0_RP0_CPU0#~]$ ``` You will notice two peculiar things in the command we run# Mounting of /var/run/netns/<vrf-name># We mount /var/run/netns/<vrf-name> into the docker container. This is an option we use to mount the appropriate network namespace(s) (one or more -v options may be used) into the container. These network namespaces (XR release 6.3.1+) are created on the host and then bind-mounted into the XR LXC for user convenience. In case of ASR9k, these network namespaces are local. The docker container, running on the host (inside XR VM in case of ASR9k), will simply inherit these network namespaces through the /var/run/netns/<vrf-name> mount. Each Network namespace may correspond to a VRF in XR (CLI option to achieve this will be available post 6.3.1. Bear in mind that before 6.3.1 release only the global-vrf is supported in the XR linux shell. 
–cap-add=SYS_ADMIN flag# We’re using the --cap-add=SYS_ADMIN flag because even when network namespaces are mounted from the “host” (or XR VM in case of ASR9k) into the docker container, a user can change into a particular network namespace or execute commands in a particular namespace, only if the container is launched with privileged capabilties. Yay! The container’s running. We can get into the container by starting bash through a docker exec. If you’re running container images that do not support a shell, try docker attach instead.[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$docker exec -it ubuntu bashroot@bf408eb70f88#/# root@bf408eb70f88#/# cat /etc/*-release DISTRIB_ID=UbuntuDISTRIB_RELEASE=16.04DISTRIB_CODENAME=xenialDISTRIB_DESCRIPTION=~Ubuntu 16.04.2 LTS~NAME=~Ubuntu~VERSION=~16.04.2 LTS (Xenial Xerus)~ID=ubuntuID_LIKE=debianPRETTY_NAME=~Ubuntu 16.04.2 LTS~VERSION_ID=~16.04~HOME_URL=~http#//www.ubuntu.com/~SUPPORT_URL=~http#//help.ubuntu.com/~BUG_REPORT_URL=~http#//bugs.launchpad.net/ubuntu/~VERSION_CODENAME=xenialUBUNTU_CODENAME=xenialroot@bf408eb70f88#/# NCS5500 and ASR9k SetupRemember the topology for the NCS5508/ASR9k setup?# NCS5500 and ASR9k Setup TopologyIn order to reach the internet, the NCS5508/ASR9k needs to be configured with a default route through the Management port which is NAT-ted (using iptables Masquerade rules, not shown here) to the outside world through devbox.Note# Steps below are applicable to ASR9k as well.Read the note below if you need a refresher on the routing in XR’s linux kernel# Setting up Default routes in the Linux Kernel# For those who understand the basic principle behind the IOS-XR Packet I/O architecture for Linux application traffic (see here# Application hosting Infrastructure in IOS-XR ), it might be clear that routes in the linux kernel are controlled through the “tpa” CLI. This leads to 3 types of routes# Default route through “fwdintf” # To allow packets through the front panel ports by default. Herein the update-source CLI is used to set the source IP address of the packets. East-West route through “fwd_ew” # This enables packets to flow between XR and a linux app running in a given vrf (network namespace - only global-vrf supported before 6.3.1 release). Management Subnet# The directly connected subnet for the Management port as well non-default routes in the RIB through the Management port. To set up a default route through the Management port#Prior to 6.3.1 releasePrior to 6.3.1, there is no direct knob in the tpa CLI to help set this up. So we drop into the linux shell directly and set the default route ourselves#RP/0/RP0/CPU0#ncs5508#bashWed Mar 8 02#06#54.590 UTC[ncs5508#~]$[ncs5508#~]$[ncs5508#~]$ip routedefault dev fwdintf scope link src 1.1.1.1 10.10.10.10 dev fwd_ew scope link src 1.1.1.1 11.11.11.0/24 dev Mg0_RP0_CPU0_0 proto kernel scope link src 11.11.11.59 [ncs5508#~]$[ncs5508#~]$[ncs5508#~]$ip route del default[ncs5508#~]$ip route add default via 11.11.11.2 dev Mg0_RP0_CPU0_0[ncs5508#~]$[ncs5508#~]$ip routedefault via 11.11.11.2 dev Mg0_RP0_CPU0_0 10.10.10.10 dev fwd_ew scope link src 1.1.1.1 11.11.11.0/24 dev Mg0_RP0_CPU0_0 proto kernel scope link src 11.11.11.59 [ncs5508#~]$Having done the above change, set up the DNS server in global-vrf network namespace, much like in the Vagrant setup#[ncs5508#~]$cat /etc/netns/global-vrf/resolv.confnameserver ######[ncs5508#~]$Of course, use an actual IP address of the DNS server in your network, and not #####. 
I use it to simply hide the private DNS IP in my setup #)[ncs5508#~]$[ncs5508#~]$[ncs5508#~]$docker run -itd --name ubuntu --cap-add=SYS_ADMIN -v /var/run/netns#/var/run/netns ubuntu bashUnable to find image 'ubuntu#latest' locallylatest# Pulling from library/ubuntud54efb8db41d# Pull complete f8b845f45a87# Pull complete e8db7bf7c39f# Pull complete 9654c40e9079# Pull complete 6d9ef359eaaa# Pull complete Digest# sha256#dd7808d8792c9841d0b460122f1acf0a2dd1f56404f8d1e56298048885e45535Status# Downloaded newer image for ubuntu#latest67b781a19b5a164d77ee7ed95201c422e70be57c9ee6547a7e8e9457f8db514b[ncs5508#~]$[ncs5508#~]$[ncs5508#~]$docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES67b781a19b5a ubuntu ~bash~ 3 minutes ago Up 3 minutes ubuntu[ncs5508#~]$Post 6.3.1 releasePost 6.3.1, the default route wouldn’t have to be set using the linux command (ip route default…). We have introduced a default-route CLI under tpa (along with vrfs, but more on that in another blog).The CLI will look something like #tpa vrf <vrf-name> address-family ipv4[ipv6] default-route east-westThe advantage of introducing a CLI is that it helps handle the routes in the linux kernel across reloads and switchovers as well.Private “insecure” registryThis is a straightforward technique when a user expects to bring up private registries for their docker images in a secure part of the network (so that connection between the registry and the router doesn’t necessarily need to be secured) # We spin up an insecure docker registry(which is itself a docker container pulled down from dockerhub) on our devbox. We then modify /etc/sysconfig/docker in XR linux to add the insecure registry information Set up the route to the registry Populate the registry with some docker images from dockerhub Pull the relevant images from the insecure registry down to XR’s docker daemon and spin up containers Setting up the insecure registryLet’s begin by spinning up a registry on the devbox in our Vagrant setup. The same exact steps are relevant to the devbox environment on the NCS5500/ASR9k setup as well. 
We follow the steps described here# https#//docs.docker.com/registry/deploying/AKSHSHAR-M-K0DS#docker-app-topo-bootstrap akshshar$vagrant ssh devbox Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-95-generic x86_64) * Documentation# https#//help.ubuntu.com/ System information disabled due to load higher than 1.0 Get cloud support with Ubuntu Advantage Cloud Guest# http#//www.ubuntu.com/business/services/cloud0 packages can be updated.0 updates are security updates.New release '16.04.2 LTS' available.Run 'do-release-upgrade' to upgrade to it.vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ sudo -s root@vagrant-ubuntu-trusty-64#~# root@vagrant-ubuntu-trusty-64#~# docker run -d -p 5000#5000 --restart=always --name registry registry#2Unable to find image 'registry#2' locally2# Pulling from library/registry709515475419# Pull complete df6e278d8f96# Pull complete 16218e264e88# Pull complete 16748da81f63# Pull complete 8d73e673c34c# Pull complete Digest# sha256#28be0609f90ef53e86e1872a11d672434ce1361711760cf1fe059efd222f8d37Status# Downloaded newer image for registry#2b6a2a5fef7b7c201ee4d162b56f1e35054e25225ad27ad3fbf3a267d2ef9fb7aroot@vagrant-ubuntu-trusty-64#~# root@vagrant-ubuntu-trusty-64#~# docker pull ubuntu && docker tag ubuntu localhost#5000/ubuntuUsing default tag# latestlatest# Pulling from library/ubuntud54efb8db41d# Pull complete f8b845f45a87# Pull complete e8db7bf7c39f# Pull complete 9654c40e9079# Pull complete 6d9ef359eaaa# Pull complete Digest# sha256#dd7808d8792c9841d0b460122f1acf0a2dd1f56404f8d1e56298048885e45535Status# Downloaded newer image for ubuntu#latestroot@vagrant-ubuntu-trusty-64#~# root@vagrant-ubuntu-trusty-64#~# docker push localhost#5000/ubuntu The push refers to a repository [localhost#5000/ubuntu]56827159aa8b# Pushed 440e02c3dcde# Pushed 29660d0e5bb2# Pushed 85782553e37a# Pushed 745f5be9952c# Pushed latest# digest# sha256#6b079ae764a6affcb632231349d4a5e1b084bece8c46883c099863ee2aeb5cf8 size# 1357root@vagrant-ubuntu-trusty-64#~# root@vagrant-ubuntu-trusty-64#~# In the above steps, we’ve simply set up the registry on the devbox, pulled down an ubuntu docker image from dockerhub and pushed the image to the local registry.Vagrant SetupBefore we start let’s come back to square-one on our Vagrant setup. Delete the previously running container and downloaded image#[xr-vm_node0_RP0_CPU0#~]$ docker stop ubuntu && docker rm ubuntuubuntuubuntu[xr-vm_node0_RP0_CPU0#~]$ docker rmi ubuntuUntagged# ubuntu#latestDeleted# sha256#0ef2e08ed3fabfc44002ccb846c4f2416a2135affc3ce39538834059606f32ddDeleted# sha256#0d58a35162057295d273c5fb8b7e26124a31588cdadad125f4bce63b638dddb5Deleted# sha256#cb7f997e049c07cdd872b8354052c808499937645f6164912c4126015df036ccDeleted# sha256#fcb4581c4f016b2e9761f8f69239433e1e123d6f5234ca9c30c33eba698487ccDeleted# sha256#b53cd3273b78f7f9e7059231fe0a7ed52e0f8e3657363eb015c61b2a6942af87Deleted# sha256#745f5be9952c1a22dd4225ed6c8d7b760fe0d3583efd52f91992463b53f7aea3[xr-vm_node0_RP0_CPU0#~]$ Now let’s set up XR’s docker daemon to accept the insecure registry located on the directly connected network on Gig0/0/0/0.Based off the config applied via the Vagrantfile, the reachable IP address of the registry running on devbox = 11.1.1.20, port 5000.Log into XR CLI. We will first make sure that the request from XR’s docker daemon originates with a source IP that is reachable from the docker registry. 
So set the TPA ip address = Gig0/0/0/0 ip address (directly connected subnet)#RP/0/RP0/CPU0#ios(config)#tpaRP/0/RP0/CPU0#ios(config-tpa)#address-family ipv4 ? update-source Update the Source for Third Party <cr> RP/0/RP0/CPU0#ios(config-tpa)#address-family ipv4 RP/0/RP0/CPU0#ios(config-tpa-afi)#update-source gigabitEthernet 0/0/0/0 RP/0/RP0/CPU0#ios(config-tpa-afi)#commitMon Mar 6 05#08#32.436 UTCRP/0/RP0/CPU0#ios(config-tpa-afi)#This should lead to the following routes in the linux kernel#RP/0/RP0/CPU0#ios#RP/0/RP0/CPU0#ios#bashMon Mar 6 05#35#49.459 UTC[xr-vm_node0_RP0_CPU0#~]$ip routedefault dev fwdintf scope link src 11.1.1.10 10.0.2.0/24 dev Mg0_RP0_CPU0_0 proto kernel scope link src 10.0.2.15 [xr-vm_node0_RP0_CPU0#~]$Before we launch the container, we need to configure the XR docker daemon to disregard security for our registry. This is done by modifying /etc/sysconfig/docker inside the XR LXC. My eventual configuration looks something like#[xr-vm_node0_RP0_CPU0#~]$cat /etc/sysconfig/docker# DOCKER_OPTS can be used to add insecure private registries to be supported # by the docker daemon# eg # DOCKER_OPTS=~--insecure-registry foo --insecure-registry bar~# Following are the valid configs# DOCKER_OPTS=~<space>--insecure-registry<space>foo~# DOCKER_OPTS+=~<space>--insecure-registry<space>bar~DOCKER_OPTS=~ --insecure-registry 11.1.1.20#5000~[xr-vm_node0_RP0_CPU0#~]$As the instructions/comments inside the file indicate, make sure there is a space before –insecure-registry flag. Further, in a normal docker daemon setup, a user is supposed to restart the docker daemon when changes to /etc/sysconfig/docker are made. In case of XR, this is not needed. We handle automatic restarts of the docker daemon when a user makes changes to /etc/sysconfig/docker and saves it. Further, since the docker daemon will be automatically restarted, wait for about 10-15 seconds before issuing any docker commands.Now issue the docker run command to launch the container on XR.RP/0/RP0/CPU0#ios#RP/0/RP0/CPU0#ios#bashMon Mar 6 05#51#14.341 UTC[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$docker run -itd --name ubuntu -v /var/run/netns --cap-add=SYS_ADMIN 11.1.1.20#5000/ubuntu bashUnable to find image '11.1.1.20#5000/ubuntu#latest' locallylatest# Pulling from ubuntufec6b243e075# Pull complete 190e0e9a3e79# Pull complete 0d79cf192e4c# Pull complete 38398c307b51# Pull complete 356665655a72# Pull complete Digest# sha256#6b079ae764a6affcb632231349d4a5e1b084bece8c46883c099863ee2aeb5cf8Status# Downloaded newer image for 11.1.1.20#5000/ubuntu#latestbf408eb70f88c8050c29fb46610d354a113a46edbece105acc68507e71442d38[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMESbf408eb70f88 11.1.1.20#5000/ubuntu ~bash~ 8 seconds ago Up 8 seconds ubuntu[xr-vm_node0_RP0_CPU0#~]$There, you’ve launched a docker container on XR using a private “insecure” registry.NCS5500 setupThe workflow is more or less identical to the Vagrant setup.In this case we’re setting up the registry to be reachable over the Management network (and over the same subnet). For this, you don’t need to set the TPA IP.If you’ve followed the steps above in the Setting up the Insecure Registry section, then you should have an insecure registry already running on the devbox environment, along with a “pushed” ubuntu image.Now hop over to the NCS5500 and issue the “bash” CLI. 
Your “ip route” setup should look something like this#RP/0/RP0/CPU0#ncs5508#bashTue Mar 7 00#29#56.416 UTC[ncs5508#~]$ip routedefault dev fwdintf scope link src 1.1.1.110.10.10.10 dev fwd_ew scope link src 1.1.1.1 11.11.11.0/24 dev Mg0_RP0_CPU0_0 proto kernel scope link src 11.11.11.59[ncs5508#~]$[ncs5508#~]$[ncs5508#~]$We won’t be leveraging the tpa setup for the fwdintf interface (meant for reachability over front panel/data ports) and instead just use the local management network subnet (11.11.11.0/24) for reachability to the docker registry.Further, much like before, set up /etc/sysconfig/docker to disregard security for our registry.[ncs5508#~]$cat /etc/sysconfig/docker# DOCKER_OPTS can be used to add insecure private registries to be supported # by the docker daemon# eg # DOCKER_OPTS=~--insecure-registry foo --insecure-registry bar~# Following are the valid configs# DOCKER_OPTS=~<space>--insecure-registry<space>foo~# DOCKER_OPTS+=~<space>--insecure-registry<space>bar~DOCKER_OPTS=~ --insecure-registry 11.11.11.2#5000~[ncs5508#~]$When you make the above change,the docker daemon will be automatically restarted. Wait for about 10-15 seconds before issuing any docker commands.Now we can issue a docker run (or docker pull followed by a docker run) to download and launch the docker ubuntu image from the registry.[ncs5508#~]$docker run -itd --name ubuntu -v /var/run/netns --cap-add=SYS_ADMIN 11.11.11.2#5000/ubuntuUnable to find image '11.11.11.2#5000/ubuntu#latest' locallylatest# Pulling from ubuntud54efb8db41d# Pull complete f8b845f45a87# Pull complete e8db7bf7c39f# Pull complete 9654c40e9079# Pull complete 6d9ef359eaaa# Pull complete Digest# sha256#dd7808d8792c9841d0b460122f1acf0a2dd1f56404f8d1e56298048885e45535Status# Downloaded newer image for 11.11.11.2#5000/ubuntu#latestaa73f6a81b9346131118b84f30ddfc2d3bd981a4a54ea21ba2e2bc5c3d18d348[ncs5508#~]$[ncs5508#~]$docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMESaa73f6a81b93 11.11.11.2#5000/ubuntu ~/bin/bash~ 4 hours ago Up 4 hours ubuntu[ncs5508#~]$ASR9k setupThe ASR9k setup for an insecure docker registry is slightly different from Vagrant IOS-XR or NCS platforms. There is no automatic mechanism to restart the docker daemon.The user must restart the docker daemon once they modify the /etc/sysconfig/docker file.Again, we’re setting up the registry to be reachable over the Management network (and over the same subnet). For this, you don’t need to set the TPA IP.Now hop over to the ASR9k and issue the “bash” CLI. 
Your “ip route” setup should look something like this#RP/0/RSP1/CPU0#asr9k#bashTue Mar 7 00#29#56.416 UTC[asr9k#~]$ip routedefault dev fwdintf scope link src 1.1.1.110.10.10.10 dev fwd_ew scope link src 1.1.1.1 11.11.11.0/24 dev Mg0_RP0_CPU0_0 proto kernel scope link src 11.11.11.59[asr9k#~]$[asr9k#~]$[asr9k#~]$We won’t be leveraging the tpa setup for the fwdintf interface (meant for reachability over front panel/data ports) and instead just use the local management network subnet (11.11.11.0/24) for reachability to the docker registry.Further, much like before, set up /etc/sysconfig/docker to disregard security for our registry.[asr9k#~]$cat /etc/sysconfig/docker# DOCKER_OPTS can be used to add insecure private registries to be supported # by the docker daemon# eg # DOCKER_OPTS=~--insecure-registry foo --insecure-registry bar~# Following are the valid configs# DOCKER_OPTS=~<space>--insecure-registry<space>foo~# DOCKER_OPTS+=~<space>--insecure-registry<space>bar~DOCKER_OPTS=~ --insecure-registry 11.11.11.2#5000~[asr9k#~]$Important# For the ASR9k, you need to restart the docker daemon for the above config change to take effect.[asr9k#~]$service docker restartdocker stop/waitingdocker start/running, process 12276[asr9k#~]$Now we can issue a docker run (or docker pull followed by a docker run) to download and launch the docker ubuntu image from the registry.[asr9k#~]$docker run -itd --name ubuntu -v /var/run/netns --cap-add=SYS_ADMIN 11.11.11.2#5000/ubuntuUnable to find image '11.11.11.2#5000/ubuntu#latest' locallylatest# Pulling from ubuntud54efb8db41d# Pull complete f8b845f45a87# Pull complete e8db7bf7c39f# Pull complete 9654c40e9079# Pull complete 6d9ef359eaaa# Pull complete Digest# sha256#dd7808d8792c9841d0b460122f1acf0a2dd1f56404f8d1e56298048885e45535Status# Downloaded newer image for 11.11.11.2#5000/ubuntu#latestaa73f6a81b9346131118b84f30ddfc2d3bd981a4a54ea21ba2e2bc5c3d18d348[ncs5508#~]$[ncs5508#~]$docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMESaa73f6a81b93 11.11.11.2#5000/ubuntu ~/bin/bash~ 4 hours ago Up 4 hours ubuntu[asr9k#~]$Private Self-Signed RegistryThis technique is a bit more secure than the insecure registry setup and may be used to more or less secure the connection between the router’s docker daemon and the docker registry running externally. The basic steps involved are# Generate your own certificate on the devbox Use the result to start your docker registry with TLS enabled Copy the certificates to the /etc/docker/certs.d/ folder on the router Don’t forget to restart the Docker daemon for the ASR9k. 
In case of other platforms, the restart is automatic Set up the route to the registry Populate the registry with some docker images from dockerhub Pull the relevant images from the registry down to XR’s docker daemon and spin up containers Setting up a self-signed Docker RegistryAKSHSHAR-M-K0DS#docker-app-topo-bootstrap akshshar$ vagrant ssh devboxWelcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-95-generic x86_64) * Documentation# https#//help.ubuntu.com/ System information disabled due to load higher than 1.0 Get cloud support with Ubuntu Advantage Cloud Guest# http#//www.ubuntu.com/business/services/cloud0 packages can be updated.0 updates are security updates.New release '16.04.2 LTS' available.Run 'do-release-upgrade' to upgrade to it.vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ mkdir -p certs && openssl req -newkey rsa#4096 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt Generating a 4096 bit RSA private key.......................++..........................++writing new private key to 'certs/domain.key'-----You are about to be asked to enter information that will be incorporatedinto your certificate request.What you are about to enter is what is called a Distinguished Name or a DN.There are quite a few fields but you can leave some blankFor some fields there will be a default value,If you enter '.', the field will be left blank.-----Country Name (2 letter code) [AU]#State or Province Name (full name) [Some-State]#Locality Name (eg, city) []#Organization Name (eg, company) [Internet Widgits Pty Ltd]#Organizational Unit Name (eg, section) []#Common Name (e.g. server FQDN or YOUR name) []#devbox.comEmail Address []#vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ cd certs/vagrant@vagrant-ubuntu-trusty-64#~/certs$ lsdomain.crt domain.keyvagrant@vagrant-ubuntu-trusty-64#~/certs$ vagrant@vagrant-ubuntu-trusty-64#~/certs$ vagrant@vagrant-ubuntu-trusty-64#~/certs$ vagrant@vagrant-ubuntu-trusty-64#~/certs$ cd ..vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ sudo docker run -d -p 5000#5000 --restart=always --name registry -v `pwd`/certs#/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key registry#2 Unable to find image 'registry#2' locally2# Pulling from library/registry709515475419# Pull complete df6e278d8f96# Pull complete 16218e264e88# Pull complete 16748da81f63# Pull complete 8d73e673c34c# Pull complete Digest# sha256#28be0609f90ef53e86e1872a11d672434ce1361711760cf1fe059efd222f8d37Status# Downloaded newer image for registry#2c423ae398af2ec05fabd9c1efc29b846b21c63af71ed0b59ba6ec7f4d13a6762vagrant@vagrant-ubuntu-trusty-64#~$ sudo docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMESc423ae398af2 registry#2 ~/entrypoint.sh /e...~ 5 seconds ago Up 4 seconds 0.0.0.0#5000->5000/tcp registryvagrant@vagrant-ubuntu-trusty-64#~$ Now pull an ubuntu image (just an example) from dockerhub and push it to the local registry#vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ sudo -sroot@vagrant-ubuntu-trusty-64#~# root@vagrant-ubuntu-trusty-64#~# docker pull ubuntu && docker tag ubuntu localhost#5000/ubuntu Using default tag# latestlatest# Pulling from library/ubuntud54efb8db41d# Pull complete f8b845f45a87# Pull 
complete e8db7bf7c39f# Pull complete 9654c40e9079# Pull complete 6d9ef359eaaa# Pull complete Digest# sha256#dd7808d8792c9841d0b460122f1acf0a2dd1f56404f8d1e56298048885e45535Status# Downloaded newer image for ubuntu#latestroot@vagrant-ubuntu-trusty-64#~# root@vagrant-ubuntu-trusty-64#~# root@vagrant-ubuntu-trusty-64#~# root@vagrant-ubuntu-trusty-64#~# docker push localhost#5000/ubuntu The push refers to a repository [localhost#5000/ubuntu]56827159aa8b# Layer already exists 440e02c3dcde# Layer already exists 29660d0e5bb2# Layer already exists 85782553e37a# Layer already exists 745f5be9952c# Layer already exists latest# digest# sha256#6b079ae764a6affcb632231349d4a5e1b084bece8c46883c099863ee2aeb5cf8 size# 1357root@vagrant-ubuntu-trusty-64#~# root@vagrant-ubuntu-trusty-64#~# docker images REPOSITORY TAG IMAGE ID CREATED SIZEregistry 2 047218491f8c 5 weeks ago 33.2 MBubuntu latest 0ef2e08ed3fa 5 weeks ago 130 MBlocalhost#5000/ubuntu latest 0ef2e08ed3fa 5 weeks ago 130 MBroot@vagrant-ubuntu-trusty-64#~# Vagrant SetupAll we have to do get out docker daemon on the router working with the self-signed docker registry is to make sure the certificate is available in the right directory# /etc/docker/certs.d/ in the XR shell.Hop over to the router and create folder with name = “<Common Name of the certificate>#5000” in the folder /etc/docker/certs.d/ as shown below#Hop into the router shell from your host/laptop#AKSHSHAR-M-K0DS#docker-app-topo-bootstrap akshshar$ vagrant ssh rtrLast login# Sun Apr 2 13#45#29 2017 from 10.0.2.2xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ xr-vm_node0_RP0_CPU0#~$ sudo -i[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ Create a folder named devbox.com#5000 under /etc/docker/certs.d.The folder name = &lt;Common Name of the certificate&gt;#&lt;Port opened by the registry&gt;[xr-vm_node0_RP0_CPU0#~]$ mkdir /etc/docker/certs.d/devbox.com#5000[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ Add the dns entry for devbox.com in /etc/hosts of the vrf you’re working in. Since before 6.3.1, we only support global-vrf in the linux kernel, we set up /etc/hosts of the global-vrf network namespace to create a pointer to devbox.com. To do this change into the correct network namespace (global-vrf) and edit /etc/hosts as shown below#Another way to do this would be to edit /etc/netns/global-vrf/hosts file and then change into the network namespace for the subsequent scp to immediately work.[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ ip netns exec global-vrf bash[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$ cat /etc/hosts 127.0.0.1\tlocalhost.localdomain\t\tlocalhost11.1.1.20 devbox.com [xr-vm_node0_RP0_CPU0#~]$ Here, 11.1.1.20 is the IP address of the directly connected interface of the devbox on the port Gi0/0/0/0 of the IOS-XR instance.[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ scp vagrant@devbox.com#~/certs/domain.crt /etc/docker/certs.d/devbox.com\\#5000/ca.crtvagrant@devbox.com's password# domain.crt 100% 1976 1.9KB/s 00#00 [xr-vm_node0_RP0_CPU0#~]$ Perfect. 
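As an optional sanity check (assuming the openssl client is available in the XR linux shell, which it generally is on these images), you can confirm that the certificate you just copied carries the Common Name used for the folder name and the /etc/hosts entry, along with its validity dates#[xr-vm_node0_RP0_CPU0#~]$ openssl x509 -in /etc/docker/certs.d/devbox.com#5000/ca.crt -noout -subject -datesThe subject should show CN=devbox.com; if it doesn't match the registry hostname, the docker pull will fail with a certificate validation error.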
Now wait about 5-10 seconds as the certificate gets automatically sync-ed to the underlying host layer (remember, the docker daemon is running on the host).Pull the docker image from the registry#[xr-vm_node0_RP0_CPU0#~]$ docker pull devbox.com#5000/ubuntuUsing default tag# latestlatest# Pulling from ubuntufec6b243e075# Pull complete 190e0e9a3e79# Pull complete 0d79cf192e4c# Pull complete 38398c307b51# Pull complete 356665655a72# Pull complete Digest# sha256#6b079ae764a6affcb632231349d4a5e1b084bece8c46883c099863ee2aeb5cf8Status# Downloaded newer image for devbox.com#5000/ubuntu#latest[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ docker images REPOSITORY TAG IMAGE ID CREATED SIZEdevbox.com#5000/ubuntu latest 0ef2e08ed3fa 4 weeks ago 130 MB[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ Spin it up! #[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$ docker run -itd --name ubuntu -v /var/run/netns/global-vrf#/var/run/netns/global-vrf --cap-add=SYS_ADMIN devbox.com#5000/ubuntu bashb50424bbe195fd4b79c0d375dcc081228395da467d1c0d5367897180c421b41d[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMESb50424bbe195 devbox.com#5000/ubuntu ~bash~ 4 seconds ago Up 3 seconds ubuntu[xr-vm_node0_RP0_CPU0#~]$ NCS5500 and ASR9k SetupThe setup of the self-signed registry is already covered above in the Setting up a Self-Signed Docker Registry section.The steps for NCS5500 and ASR9k are identical from hereon and match what we did for the Vagrant setup. To be thorough, here are the steps on an NCS5500 setup#Hop over to the router and issue the “bash” CLI.Now change into the network namespace (explicitly) and set up /etc/hosts (In my setup, the devbox is reachable over the management port on IP=11.11.11.2) #[ncs5508#~]$ip netns exec global-vrf bash [ncs5508#~]$cat /etc/hosts127.0.0.1\tlocalhost.localdomain\t\tlocalhost127.0.1.1 ncs5508.cisco.com ncs550811.11.11.2 devbox.com[ncs5508#~]$[ncs5508#~]$[ncs5508#~]$Set up the directory to store the certificates created for the docker registry#[ncs5508#~]$[ncs5508#~]$mkdir /etc/docker/certs.d/devbox.com#5000[ncs5508#~]$[ncs5508#~]$scp over the self-signed certificate from the devbox into the above directory#[ncs5508#~]$scp cisco@devbox.com#~/certs/domain.crt /etc/docker/certs.d/devbox.com\\#5000/ca.crtWarning# Permanently added 'devbox.com,11.11.11.2' (ECDSA) to the list of known hosts.cisco@devbox.com's password# domain.crt 100% 1976 1.9KB/s 00#00 [ncs5508#~]$Now pull the docker image from the registry and spin it up#[ncs5508#~]$docker pull devbox.com#5000/ubuntuUsing default tag# latestlatest# Pulling from ubuntud54efb8db41d# Pull complete f8b845f45a87# Pull complete e8db7bf7c39f# Pull complete 9654c40e9079# Pull complete 6d9ef359eaaa# Pull complete Digest# sha256#dd7808d8792c9841d0b460122f1acf0a2dd1f56404f8d1e56298048885e45535Status# Downloaded newer image for devbox.com#5000/ubuntu#latest[ncs5508#~]$[ncs5508#~]$ docker run -itd --name ubuntu -v /var/run/netns/global-vrf#/var/run/netns/global-vrf --cap-add=SYS_ADMIN devbox.com#5000/ubuntu bash3b4721fa053a97325ccaa2ac98b3dc3fd9fb224543e0ed699be597f773ab875d[ncs5508#~]$[ncs5508#~]$[ncs5508#~]$docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES3b4721fa053a devbox.com#5000/ubuntu ~bash~ 5 seconds ago Up 4 seconds ubuntu[ncs5508#~]$Docker Save/Load TechniqueThis is the potentially the easiest secure technique if you don’t want to meddle around with certificates on a docker registry and 
potentially don’t want a registry at all.Create a docker image tarballAs a first step, on your devbox create a docker image tar ball. You can either pull the relevant docker image into your devbox (From dockerhub or some other private registry) or build it on your own on the devbox (we will not delve into this here# for details# https#//docs.docker.com/engine/getstarted/step_four/).Once you have the image locally, issue a docker save to save the image into a loadable tar-ball.This is shown below#vagrant@vagrant-ubuntu-trusty-64#~$ sudo docker images REPOSITORY TAG IMAGE ID CREATED SIZEregistry 2 047218491f8c 5 weeks ago 33.2 MBlocalhost#5000/ubuntu latest 0ef2e08ed3fa 5 weeks ago 130 MBubuntu latest 0ef2e08ed3fa 5 weeks ago 130 MBvagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ sudo docker save ubuntu > ubuntu.tar vagrant@vagrant-ubuntu-trusty-64#~$ Vagrant SetupLogin to your Router (directly into the shell or by issuing the bash command in XR CLI). We first scp the docker image tar ball into an available volume on the router and then load it up.[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ df -h /misc/app_host/Filesystem Size Used Avail Use% Mounted on/dev/mapper/app_vol_grp-app_lv0 3.9G 260M 3.5G 7% /misc/app_host[xr-vm_node0_RP0_CPU0#~]$ scp vagrant@11.1.1.20#~/ubuntu.tar /misc/app_host/vagrant@11.1.1.20's password# ubuntu.tar 100% 129MB 107.7KB/s 20#31 [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ docker load < /misc/app_host/ubuntu.tar [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ docker imagesREPOSITORY TAG IMAGE ID CREATED SIZEubuntu latest 0ef2e08ed3fa 4 weeks ago 130 MB[xr-vm_node0_RP0_CPU0#~]$ Now go ahead and spin it up as shown earlier#[xr-vm_node0_RP0_CPU0#~]$[xr-vm_node0_RP0_CPU0#~]$ docker run -itd --name ubuntu -v /var/run/netns/global-vrf#/var/run/netns/global-vrf --cap-add=SYS_ADMIN ubuntu bashb50424bbe195fd4b79c0d375dcc081228395da467d1c0d5367897180c421b41d[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES108a5ad711ca ubuntu ~bash~ 3 seconds ago Up 2 seconds ubuntu[xr-vm_node0_RP0_CPU0#~]$ NCS5500 and ASR9k setup.NCS5500 and ASR9k follow the exact same steps as the Vagrant box above. For completeness, though#[ncs5508#~]$[ncs5508#~]$scp cisco@11.11.11.2#~/ubuntu.tar /misc/app_host/cisco@11.11.11.2's password# ubuntu.tar 100% 317MB 10.2MB/s 00#31 [ncs5508#~]$[ncs5508#~]$docker load < /misc/app_host/ubuntu.tar [ncs5508#~]$[ncs5508#~]$[ncs5508#~]$[ncs5508#~]$docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES[ncs5508#~]$[ncs5508#~]$ docker run -itd --name ubuntu -v /var/run/netns/global-vrf#/var/run/netns/global-vrf --cap-add=SYS_ADMIN ubuntu bashffc95e05e05c6e2e6b8e4aa05b299f513fd5df6d1ca8fe641cfa7f44671e6f07[ncs5508#~]$[ncs5508#~]$[ncs5508#~]$docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMESffc95e05e05c ubuntu ~bash~ About a minute ago Up About a minute ubuntu[ncs5508#~]$Docker export/import TechniqueA lot of times you might create a tar ball from a custom Docker container on your server (devbox) and would like to run the custom container directly on the router. 
This technique explores that option.Create a custom docker Container tarball/snapshotAs a first step, on your devbox spin up a docker container from an image you’d like to customize.Assuming you’ve already learnt how to pull docker images into your devbox environment, let’s spin up an ubuntu container and install iproute2 on it#root@vagrant-ubuntu-trusty-64#~# docker images ubuntuREPOSITORY TAG IMAGE ID CREATED SIZEubuntu latest 0ef2e08ed3fa 5 weeks ago 130 MBroot@vagrant-ubuntu-trusty-64#~# docker run -itd --name ubuntu ubuntu basha544ddc41b1fd92cf6b7a751dcafaf63de36f6499f59c256918ca23c32645159Now exec into the created container and start installing iproute2 and python(we’ll use this later)#root@vagrant-ubuntu-trusty-64#~# docker exec -it ubuntu bash root@a544ddc41b1f#/# root@3cc4d9dd0056#/# apt-get update && apt-get install -y iproute2 pythonGet#1 http#//archive.ubuntu.com/ubuntu xenial InRelease [247 kB]Get#2 http#//archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]Get#3 http#//archive.ubuntu.com/ubuntu xenial-security InRelease [102 kB]Get#4 http#//archive.ubuntu.com/ubuntu xenial/main Sources [1103 kB]Get#5 http#//archive.ubuntu.com/ubuntu xenial/restricted Sources [5179 B]############################ SNIP Output ######################################## Get#18 http#//archive.ubuntu.com/ubuntu xenial-security/universe Sources [30.0 kB]Get#19 http#//archive.ubuntu.com/ubuntu xenial-security/main amd64 Packages [303 kB]Get#20 http#//archive.ubuntu.com/ubuntu xenial-security/restricted amd64 Packages [12.8 kB]Get#21 http#//archive.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [132 kB]############################ SNIP Output ######################################## The following NEW packages will be installed# file iproute2 libatm1 libexpat1 libffi6 libmagic1 libmnl0 libpython-stdlib libpython2.7-minimal libpython2.7-stdlib libsqlite3-0 libssl1.0.0 libxtables11 mime-support python python-minimal python2.7 python2.7-minimal0 upgraded, 4 newly installed, 0 to remove and 8 not upgraded.Need to get 586 kB of archives.After this operation, 1808 kB of additional disk space will be used.Get#1 http#//archive.ubuntu.com/ubuntu xenial/main amd64 libatm1 amd64 1#2.5.1-1.5 [24.2 kB]Get#2 http#//archive.ubuntu.com/ubuntu xenial/main amd64 libmnl0 amd64 1.0.3-5 [12.0 kB]Get#3 http#//archive.ubuntu.com/ubuntu xenial/main amd64 iproute2 amd64 4.3.0-1ubuntu3 [522 kB]Get#4 http#//archive.ubuntu.com/ubuntu xenial/main amd64 libxtables11 amd64 1.6.0-2ubuntu3 [27.2 kB]Fetched 586 kB in 0s (11.1 MB/s) debconf# delaying package configuration, since apt-utils is not installedSelecting previously unselected package libatm1#amd64.(Reading database ... 
7256 files and directories currently installed.)Preparing to unpack .../libatm1_1%3a2.5.1-1.5_amd64.deb ...############################ SNIP Output ######################################## Setting up libmnl0#amd64 (1.0.3-5) ...Setting up iproute2 (4.3.0-1ubuntu3) ...Setting up libxtables11#amd64 (1.6.0-2ubuntu3) ...Processing triggers for libc-bin (2.23-0ubuntu5) ...root@3cc4d9dd0056#/# exitexitroot@vagrant-ubuntu-trusty-64#~# Finally, use the docker export command to save your custom container tar ball#root@vagrant-ubuntu-trusty-64#~# root@vagrant-ubuntu-trusty-64#~# docker export ubuntu_iproute2 > ubuntu_iproute2.tar root@vagrant-ubuntu-trusty-64#~# ls -l ubuntu_iproute2.tar -rw-r--r-- 1 root root 147474432 Apr 8 11#31 ubuntu_iproute2.tarroot@vagrant-ubuntu-trusty-64#~# Vagrant SetupJust like the previous technique, scp the docker container tar ball into the router, but this time import it#scp the tarball onto the router#AKSHSHAR-M-K0DS#docker-app-topo-bootstrap akshshar$ vagrant ssh rtrxr-vm_node0_RP0_CPU0#~$sudo -i[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ scp vagrant@11.1.1.20#~/ubuntu_iproute2.tar /misc/app_host/vagrant@10.0.2.2's password# ubuntu_iproute2.tar 100% 141MB 17.6MB/s 00#08 [xr-vm_node0_RP0_CPU0#~]$ Now import the tar ball and spin up the docker container#[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ docker import /misc/app_host/ubuntu_iproute2.tar ubuntu_iproute2 sha256#26265a51af3e826b92130ef6bc8a1ead85988908b836c2659164d482e0a73248[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ docker images ubuntu_iproute2REPOSITORY TAG IMAGE ID CREATED SIZEubuntu_iproute2 latest 26265a51af3e 38 seconds ago 141.7 MB[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ docker run -itd --name ubuntu_iproute2 -v /var/run/netns/global-vrf#/var/run/netns/global-vrf --cap-add=SYS_ADMIN ubuntu_iproute2 bash3736cb8350e324636ebad4822bcd4437451c5ba59b9b5d025c7ba9914afd4379[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES3736cb8350e3 ubuntu_iproute2 ~bash~ 29 seconds ago Up 28 seconds ubuntu_iproute2NCS5500 and ASR9k setup.NCS5500 and ASR9k follow the exact same steps as the Vagrant box above. For completeness, though#RP/0/RP0/CPU0#ncs5508#bashSun Apr 9 11#29#09.531 UTC[ncs5508#~]$[ncs5508#~]$[ncs5508#~]$[ncs5508#~]$[ncs5508#~]$[ncs5508#~]$scp cisco@11.11.11.2#~/ubuntu_iproute2.tar /misc/app_host/cisco@11.11.11.2's password# ubuntu_iproute2.tar 100% 141MB 10.1MB/s 00#14 [ncs5508#~]$[ncs5508#~]$[ncs5508#~]$docker import /misc/app_host/ubuntu_iproute2.tar ubuntu_iproute2 sha256#170f8ce009cc920160e47b3e4e7dae1a0711ae4542c9ef0dcfcca4007741a13f[ncs5508#~]$[ncs5508#~]$docker images ubuntu_iproute2REPOSITORY TAG IMAGE ID CREATED SIZEubuntu_iproute2 latest 170f8ce009cc 25 seconds ago 141.7 MB[ncs5508#~]$[ncs5508#~]$docker run -itd --name ubuntu_iproute2 -v /var/run/netns/global-vrf#/var/run/netns/global-vrf --cap-add=SYS_ADMIN ubuntu_iproute2 bash 36f8ae4cad2c575885f2c1243a042972dc74e7dd541e270c06628fe141a5f63a[ncs5508#~]$[ncs5508#~]$docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES36f8ae4cad2c ubuntu_iproute2 ~bash~ 4 seconds ago Up 4 seconds ubuntu_iproute2And there you have it! 
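One housekeeping note before we wrap up# the /misc/app_host volume used for staging is fairly small (about 3.9G on the setups shown above) and the saved/exported tarballs weigh in at roughly 130-320MB each, so once docker load or docker import has succeeded and docker images lists the image, it's a good idea to reclaim the space#[ncs5508#~]$rm /misc/app_host/ubuntu_iproute2.tar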
We’ve successfully tried all the possible techniques through which a docker image can be pulled into the router before we spin up the container.What can I do with the Docker container?As a user you might be wondering# What can processes inside the spun-up Docker container really do?The answer# everything that a native app/agent (running inside the XR process space) can do from the perspective of reachability and binding to XR interface IP addresses.You basically have a distribution of your choice with complete access to XR RIB/FIB (through routes in the kernel) and interfaces (data and management) to bind to.Docker images by default are extremely basic and do not include most utilities. To be able to showcase the kind of access that a container has, I pull in a special ubuntu docker image with pre-installed iproute2. To understand how to do this follow the previous section# Importing a Custom Docker container tar ballAt the end of the previous section you would have the ubuntu_iproute2 container up and running#We’re running the steps below on an NCS5500. But the steps are the same for a vagrant setup or for ASR9k.[ncs5508#~]$docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES36f8ae4cad2c ubuntu_iproute2 ~bash~ 9 minutes ago Up 9 minutes ubuntu_iproute2[ncs5508#~]$Now exec into the running container using docker exec#[ncs5508#~]$[ncs5508#~]$docker exec -it ubuntu_iproute2 bashroot@36f8ae4cad2c#/# root@36f8ae4cad2c#/# To view the IOS-XR network interfaces and the relevant routes in the kernel, exec into the global-vrf network namespace#If you remember, every docker run command we have run till now involves mounting the relevant network namespace into the container under /var/run/netns.root@36f8ae4cad2c#/# ip netns exec global-vrf bash root@36f8ae4cad2c#/# root@36f8ae4cad2c#/# ip routedefault dev fwdintf scope link src 11.11.11.59 10.10.10.10 dev fwd_ew scope link src 11.11.11.59 11.11.11.0/24 dev Mg0_RP0_CPU0_0 proto kernel scope link src 11.11.11.59 root@36f8ae4cad2c#/# root@36f8ae4cad2c#/# root@36f8ae4cad2c#/# ip link show1# lo# <LOOPBACK,MULTICAST,NOARP,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default link/loopback 00#00#00#00#00#00 brd 00#00#00#00#00#003# fwdintf# <MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 1000 link/ether 00#00#00#00#00#0a brd ff#ff#ff#ff#ff#ff4# fwd_ew# <MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 1000 link/ether 00#00#00#00#00#0b brd ff#ff#ff#ff#ff#ff7# Hg0_0_0_0# <> mtu 1514 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 0c#11#67#46#10#00 brd ff#ff#ff#ff#ff#ff8# Hg0_0_0_35# <> mtu 1514 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 0c#11#67#46#10#8c brd ff#ff#ff#ff#ff#ff9# Hg0_0_0_34# <> mtu 1514 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 0c#11#67#46#10#88 brd ff#ff#ff#ff#ff#ff10# Hg0_0_0_33# <> mtu 1514 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 0c#11#67#46#10#84 brd ff#ff#ff#ff#ff#ff############################ SNIP Output ######################################## 47# Hg0_0_0_1# <> mtu 1514 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 0c#11#67#46#10#04 brd ff#ff#ff#ff#ff#ff48# Fg0_0_0_32# <> mtu 1514 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 0c#11#67#46#10#80 brd ff#ff#ff#ff#ff#ff49# Fg0_0_0_28# <> mtu 1514 qdisc noop state DOWN mode DEFAULT group default qlen 
1000 link/ether 0c#11#67#46#10#70 brd ff#ff#ff#ff#ff#ff53# Mg0_RP0_CPU0_0# <MULTICAST,NOARP,UP,LOWER_UP> mtu 1514 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 1000 link/ether 80#e0#1d#00#fc#ea brd ff#ff#ff#ff#ff#ffroot@36f8ae4cad2c#/# Awesome! The entire XR routing stack is your oyster #).Testing out a Web ServerLet’s test this setup out quickly. If you remember, we installed python as part of the ubuntu_iproute2 custom container creation. We’ll spin up a python HTTP web server inside the docker container and see if we can reach it from the outside.root@642894d230a8#/# ip netns exec global-vrf bashroot@642894d230a8#/# root@642894d230a8#/# root@642894d230a8#/# ip addr show Mg0_RP0_CPU0_053# Mg0_RP0_CPU0_0# <MULTICAST,NOARP,UP,LOWER_UP> mtu 1514 qdisc pfifo_fast state UNKNOWN group default qlen 1000 link/ether 80#e0#1d#00#fc#ea brd ff#ff#ff#ff#ff#ff inet 11.11.11.59/24 scope global Mg0_RP0_CPU0_0 valid_lft forever preferred_lft forever inet6 fe80##82e0#1dff#fe00#fcea/64 scope link valid_lft forever preferred_lft foreverroot@642894d230a8#/# root@642894d230a8#/# root@642894d230a8#/# python -m SimpleHTTPServer 8080root@642894d230a8#/# root@642894d230a8#/# root@642894d230a8#/# echo ~Hello World~ > /test.txtroot@642894d230a8#/# root@642894d230a8#/# python -m SimpleHTTPServer 8080Serving HTTP on 0.0.0.0 port 8080 ...Hop onto the connected devbox and issue a wget for the test.txt file we created above#root@dhcpserver#~# wget http#//11.11.11.59#8080/test.txt--2017-04-08 12#46#50-- http#//11.11.11.59#8080/test.txtConnecting to 11.11.11.59#8080... connected.HTTP request sent, awaiting response... 200 OKLength# 12 [text/plain]Saving to# ‘test.txt’100%[========================================================================================================================================>] 12 --.-K/s in 0s 2017-04-08 12#46#50 (2.13 MB/s) - ‘test.txt’ saved [12/12]root@dhcpserver#~# The request coming in to the docker container#root@642894d230a8#/# python -m SimpleHTTPServer 8080Serving HTTP on 0.0.0.0 port 8080 ...11.11.11.2 - - [09/Apr/2017 12#09#07] ~GET /test.txt HTTP/1.1~ 200 -Success! It all works as expected.", "url": "/tutorials/2017-02-26-running-docker-containers-on-ios-xr-6-1-2/", "author": "Akshat Sharma", "tags": "vagrant, iosxr, NCS5500, docker, xr toolbox" } , "tutorials-2017-04-12-on-box-telemetry-running-pipeline-and-kafka-on-ios-xr": { "title": "On-box Telemetry: Running Pipeline and Kafka on IOS-XR (6.2.1+)", "content": " Running Pipeline and Kafka on IOS-XR Streaming Telemetry What is on-box Telemetry? 
Docker container to host Pipeline + Kafka NCS5500/Vagrant On-Box Telemetry Setup ASR9k On-Box Telemetry Setup Docker image for Pipeline+Kafka Building a Pipeline-kafka Docker image for IOS-XR Clone Github repo Understand the Dockerfile Build the Docker image Pull Docker image on the router Launch the Docker container Create a custom pipeline.conf Testing the on-box Telemetry receiver Query the local Kafka instance Streaming TelemetryIf you haven’t checked out the great set of tutorials from Shelly Cadora and team on the Telemetry page of xrdocs# https#//xrdocs.github.io/telemetry/tutorials, it’s time to dive in.Streaming Telemetry in principle is tied to our need to evolve network device monitoring, above and beyond the capabilities that SNMP can provide.To get started, check out the following blogs# The Limits of SNMP Why you should care about Model Driven Telemetry Introduction to pipeline The running theme through the above set of blogs is clear# We need a consistent, model-driven method of exposing operational data from Network devices (read Yang Models# Openconfig, Vendor specific, and IETF) and PUSH the data over industry accepted transport protocols like GRPC or plain TCP/UDP to external Telemetry receivers. This is where IOS-XR really excels.The move from pull (SNMP style) to a push-based model for gathering Telemetry data is crucial to understand. It allows operational data to be collected at higher rates and higher scale (been shown and tested to be nearly 100x more effective than SNMP).Consequently, there is greater focus on tools that can help consume this large amount of data off-box. There are a variety of tools (opensource and otherwise) available for big data consumption# Apache Kafka, Prometheus, influxDB stack, SignalFX etc.A tool we recently open-sourced in this space with complete support for Model Driven Telemetry on IOS-XR (6.0.1+) that, as the name suggests, serves as a pipe/conduit between IOS-XR (over TCP, UDP or GRPC) on the input side and a whole host of tools (Kafka, influxDB, prometheus etc.) on the output side is called Pipeline. You can find it on Github. Find more about it here and here.What is on-box Telemetry?There is no one-size-fits-all technique for monitoring and managing network devices. There are a lot of network operators that will follow the typical approach# Set up the Model Driven Telemetry on IOS-XR and stream operational data to an external receiver or set of receivers. This is shown below. Pipeline as mentioned earlier, is used as a conduit to a set of tools like Kafka,prometheus etc.However, quite a few of our users have come and asked us if it’s possible to have a telemetry receiver run on the router inside a container (lxc or docker) so that applications running locally inside the container can take actions based on Telemetry data.This may be done for different reasons# Users may choose to simplify their server environment and not run external services (like Kafka, influxDB stack or prometheus/Grafana etc.). Typically, somebackground in devops engineering is often important to understand how to scale out these services and process large amounts of data coming from all routers at the same time. The alerts or the remediation actions that a user intends to perform based on the Telemetry data received may be fairly simplistic and can be done on box. Bear in mind that running onbox does come with its own concerns. Network devices typically have limited compute capacity (CPU/RAM) and limited disk capacity. 
While CPU/RAM isolation can be achieved using Containers on box, managing the disk space on each individual router does require special care when dealing with Streaming Telemetry applications.Docker container to host Pipeline + KafkaIn this tutorial, we look at using a Docker container to host Pipeline and Kafka (with zookeper) as a Telemetry receiver. Further a simple Kafka consumer is written in python to interact with Kafka and take some sample action on a Telemetry data point.If you haven’t had a chance to learn how we enable hosting for Docker containers on IOS-XR platforms and how we set up routing capabilities within the container, take a look at the following section of our detailed Docker guide for IOS-XR# Understanding Docker Setup on IOS-XR platformsAs shown in the platform specific sections below, the pipeline-kafka combination runs as a Docker container onbox. Some specifics on the setup# In IOS-XR 6.2.1 (before 6.3.1) only global-vrf is supported in the linux kernel. The docker container is launched with the global-vrf network namespace mounted inside the container. The pipeline and kafka instances are launched inside the global-vrf network namespace and listen on all visible XR IP addresses in the kernel (Data ports in global-vrf, Management port in Global-vrf, loopback interfaces in global-vrf). The ports and listening IP selected by pipeline can be changed by the user during docker bringup itself by mounting a custom pipeline.conf (shown in subsequent sections). The XR telemetry process is configured to send Telemetry data to pipeline over Transport = UDP (only UDP is supported for onbox telemetry) and Destination address = listening IP address (some local XR IP) for pipeline. NCS5500/Vagrant On-Box Telemetry SetupThe docker daemon on NCS5500, NCS5000, XRv9k and Vagrant XR (IOS-XRv64) platforms runs on the Host layer at the bottom. The onbox telemetry setup will thus look something like#ASR9k On-Box Telemetry SetupOn ASR9k, the setup is the same from the user perspective. But for accuracy, the Docker daemon runs inside the XR VM in this case, as is shown below.It is recommended to host onbox daemons (in this case Kafka, pipeline, zookeeper) on either the all IP address (0.0.0.0) or on XR loopback IP addresses. This makes sure that these daemons stay up and available even when a physical interface goes down.Docker image for Pipeline+KafkaWhile a user is welcome to build their own custom Docker images, we have a base image that can take care of installation of pipeline and Kafka+zookeeper for you and is already available on Docker hub# https#//hub.docker.com/r/akshshar/pipeline-kafka/This image is built out automatically from the following github repo# https#//github.com/ios-xr/pipeline-kafkaWe will utilize this image and build our own custom variant to run on an IOS-XR box for onbox telemetry.Building a Pipeline-kafka Docker image for IOS-XRTo build our own Docker image, you need a development environment with Docker engine installed.This is basically the devbox environment that we have setup in earlier tutorials. To understand how to do this, follow the steps below (in order) from the Docker guide for IOS-XR# Pre-requisites# Setup your Vagrant environment and/or physical boxes (ASR9k, NCS5500 etc.) **Important#** If you're using the Vagrant setup for this tutorial, bear in mind that the default Vagrant image runs in 4G RAM. 
Since the docker image we host on the router is relatively resource intensive, we will need to increase the memory for our Vagrant IOS-XR instance to atleast 5G (5120 MB). This can be done easily by modifying the `Vagrantfile` in your directory and adding the following# config.vm.define ~rtr~ do |node| ############## SNIP ############# node.vm.provider ~virtualbox~ do |v| v.memory = 5120 end end config.vm.define ~devbox~ do |node| node.vm.box = ~ubuntu/trusty64~ ############## SNIP ############## Set up your topology# Understand the Topology Set up the Devbox environment# Install docker-engine on the devbox Clone Github repoNow that you have a running devbox environment, let’s clone the github-repo for the pipeline-kafka project#we use –recursive to make sure all the submodules get pulled as well. The submodules are actual github repos for the standalone pipeline and docker-kafka projects.vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ git clone --recursive https#//github.com/ios-xr/pipeline-kafkaCloning into 'pipeline-kafka'...remote# Counting objects# 38, done.remote# Compressing objects# 100% (30/30), done.remote# Total 38 (delta 15), reused 20 (delta 4), pack-reused 0Unpacking objects# 100% (38/38), done.Checking connectivity... done.Submodule 'bigmuddy-network-telemetry-pipeline' (https#//github.com/cisco/bigmuddy-network-telemetry-pipeline) registered for path 'bigmuddy-network-telemetry-pipeline'Submodule 'docker-kafka' (https#//github.com/spotify/docker-kafka) registered for path 'docker-kafka'Cloning into 'bigmuddy-network-telemetry-pipeline'...remote# Counting objects# 14615, done.remote# Compressing objects# 100% (8021/8021), done.remote# Total 14615 (delta 3586), reused 0 (delta 0), pack-reused 3349Receiving objects# 100% (14615/14615), 43.97 MiB | 2.02 MiB/s, done.Resolving deltas# 100% (4012/4012), done.Checking connectivity... done.Submodule path 'bigmuddy-network-telemetry-pipeline'# checked out 'a57e87c59ac220ad7725b6b74c3570243e1a4ac3'Cloning into 'docker-kafka'...remote# Counting objects# 98, done.remote# Total 98 (delta 0), reused 0 (delta 0), pack-reused 98Unpacking objects# 100% (98/98), done.Checking connectivity... done.Submodule path 'docker-kafka'# checked out 'fc8cdbd2e23a5cac21e7138d07ea884b4309c59a'vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ vagrant@vagrant-ubuntu-trusty-64#~$ cd pipeline-kafka/iosxr_dockerfile/vagrant@vagrant-ubuntu-trusty-64#~/pipeline-kafka/iosxr_dockerfile$ lsDockerfile kafka_consumer.pyvagrant@vagrant-ubuntu-trusty-64#~/pipeline-kafka/iosxr_dockerfile$ Understand the DockerfileLet’s take a look at the Dockerfile under the iosxr_dockerfile folder#vagrant@vagrant-ubuntu-trusty-64#~/pipeline-kafka/iosxr_dockerfile$ cat Dockerfile FROM akshshar/pipeline-kafka#latestMaintainer akshshar# Specify the ~vrf~ you want to run daemons in during build time# By default, it is global-vrfARG vrf=global-vrf# Set up the ARG for use by Entrypoint or CMD scriptsENV vrf_exec ~ip netns exec $vrf~# Add a sample kafka_consumer.py script. User can provide their ownADD kafka_consumer.py /kafka_consumer.pyCMD $vrf_exec echo ~127.0.0.1 localhost~ >> /etc/hosts && $vrf_exec supervisord -nLet’s break it down#All the references below to Dockerfile instructions are derived from official Dockerfile Documentation#https#//docs.docker.com/engine/reference/builder/#known-issues-runARG vrf=global-vrfWe setup the script to accept arguments from the user during build time. 
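For reference, the value of a build-time ARG like this can be overridden with docker's --build-arg flag when the image is built, for example (purely illustrative here, since global-vrf is already the default)# sudo docker build --build-arg vrf=global-vrf -t pipeline-kafka-xr .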
This will allow us to be flexible in specifying the vrf (network namespace) to spin up the daemons in, in the future. Today in 6.2.1 (before 6.3.1), only global-vrf is supported.ENV vrf_exec ~ip netns exec $vrf~In Dockerfiles, the ARG variables are rejected in the ENTRYPOINT or CMD instructions. So we set up an ENV variable (which is honored) to create a command prefix necessary to execute a command in a given network namespace (vrf).ADD kafka_consumer.py /kafka_consumer.pyWe place the sample application (in this case written in python) inside the image to act as a consumer of the Telemetry data pushed to Kafka. This application can contain custom triggers to initiate alerts or other actions. For this tutorial, we will initiate the script manually post launch of the container. The user can choose to start the application by default as part of the ENTRYPOINT or CMD instructions in the dockerfile.CMD $vrf_exec echo ~127.0.0.1 localhost~ >> /etc/hosts && $vrf_exec supervisord -nThis specifies the command that will be run inside the container post boot. The first part of the command $vrf_exec echo ~127.0.0.1 localhost~ >> /etc/hosts sets up /etc/hosts with an entry for localhost making it easier for kafka and applications to talk to each other locally. The second part of the command $vrf_exec supervisord -n is used to start all the services in the correct vrf (hence the $vrf_exec). We use supervisord to easily specify multiple daemons that need to be launched (pipeline, kafka, zookeeper). You can get more details on supervisord here# http#//supervisord.org/Build the Docker imageIssue a docker build in the same folder and let’s tag it as pipeline-kafka-xr#latestvagrant@vagrant-ubuntu-trusty-64#~/pipeline-kafka/iosxr_dockerfile$ sudo docker build -t pipeline-kafka-xr . 
Sending build context to Docker daemon 3.584kBStep 1/6 # FROM akshshar/pipeline-kafka#latestlatest# Pulling from akshshar/pipeline-kafka5040bd298390# Pull complete fce5728aad85# Pull complete c42794440453# Pull complete 0c0da797ba48# Pull complete 7c9b17433752# Pull complete 114e02586e63# Pull complete e4c663802e9a# Pull complete efafcf20d522# Pull complete b5a0de42a291# Pull complete e36cca8778db# Pull complete c3626ac93375# Pull complete 3b079f5713c1# Pull complete 2ac62e83a2a3# Pull complete 5fe3b4ab290e# Pull complete 08b6bc2f514b# Pull complete b86ae3d2d58d# Pull complete Digest# sha256#164adfb0da7f5a74d3309ddec4bc7078a81dcd32591cdb72410eccaf1448d88cStatus# Downloaded newer image for akshshar/pipeline-kafka#latest ---> 0f131a6f1d8cStep 2/6 # MAINTAINER akshshar ---> Running in 4da444d1b027 ---> e21b468c12b5Removing intermediate container 4da444d1b027Step 3/6 # ARG vrf=global-vrf ---> Running in 5cdb3d4eecdf ---> e347fe8cd7d9Removing intermediate container 5cdb3d4eecdfStep 4/6 # ENV vrf_exec ~ip netns exec $vrf~ ---> Running in 6601c66ff5fb ---> 6104847fbe17Removing intermediate container 6601c66ff5fbStep 5/6 # ADD kafka_consumer.py /kafka_consumer.py ---> 6cf31ccbf679Removing intermediate container 72d2b0320cf2Step 6/6 # CMD $vrf_exec echo ~127.0.0.1 localhost~ >> /etc/hosts && $vrf_exec supervisord -n ---> Running in 8c44808a44e6 ---> d9c6ec3671c0Removing intermediate container 8c44808a44e6Successfully built d9c6ec3671c0vagrant@vagrant-ubuntu-trusty-64#~/pipeline-kafka/iosxr_dockerfile$ You should now have the docker image available on the devbox#vagrant@vagrant-ubuntu-trusty-64#~/pipeline-kafka/iosxr_dockerfile$ sudo docker imagesREPOSITORY TAG IMAGE ID CREATED SIZEpipeline-kafka-xr latest d9c6ec3671c0 About a minute ago 676MBakshshar/pipeline-kafka latest 0f131a6f1d8c 5 hours ago 676MBvagrant@vagrant-ubuntu-trusty-64#~/pipeline-kafka/iosxr_dockerfile$ Pull Docker image on the routerThere are multiple ways in which the freshly created Docker image could be transferred to the IOS-XR router. These methods are discussed in detail in the Docker Guide for IOS-XR. Choose your poison #) # Using an insecure registry Using a self-signed registry Using Docker save/load Launch the Docker containerLet’s assume you chose one of the above methods and pulled the docker container onto the router. In the end, you should see on the router’s linux shell#[xr-vm_node0_RP0_CPU0#~]$ sudo -i[xr-vm_node0_RP0_CPU0#~]$ docker imagesREPOSITORY TAG IMAGE ID CREATED SIZEpipeline-kafka-xr latest d9c6ec3671c0 34 minutes ago 676.4 MB[xr-vm_node0_RP0_CPU0#~]$ The name of the image may be different based on the “docker pull” technique you use.Create a custom pipeline.confBefore we spin up the container, let’s create a custom pipeline.conf file.A sample pipeline.conf can be found here# https#//github.com/cisco/bigmuddy-network-telemetry-pipeline/blob/master/pipeline.confOn-box telemetry in 6.2.1 only works over UDP as transport. Support for TCP and GRPC dial-in/dial-out will come soonConsidering the above limitation, we modify pipeline.conf to only enable UDP as an input transport. Further, we’ll point pipeline to Kafka as an output stage. 
In the end, the relevant lines in my custom pipeline.conf are shown below#[xr-vm_node0_RP0_CPU0#~]$ grep -v ~^#~ /misc/app_host/pipeline.conf [default]id = pipeline[mykafka]stage = xport_outputtype = kafkaencoding = jsonbrokers = localhost#9092topic = telemetrydatachanneldepth = 1000[udpin]type = udp stage = xport_inputlisten = 1.1.1.1#5958 encap = stlogdata = off[xr-vm_node0_RP0_CPU0#~]$ Let me break down the above output# [udpin] specifies UDP as the input transport for pipeline and forces pipeline to listen on 1.1.1.1#5958. What is 1.1.1.1 # Address of one of the loopbacks in IOS-XR config as shown below# RP/0/RSP1/CPU0#asr9k#show running-config int loopback 0Thu Apr 13 16#21#57.749 UTCinterface Loopback0 ipv4 address 1.1.1.1 255.255.255.255!RP/0/RSP1/CPU0#asr9k# Be a little careful here. Do not select loopback1 IP address or any explicitly configured east-west interface for TPA. To understand more on TPA east-west IP addresses, see here#https#//xrdocs.github.io/application-hosting/blogs/2016-06-28-xr-app-hosting-architecture-quick-look/ [mykafka] stage describes the output stage of pipeline pointing to Kafka running inside the container. Pipeline is instructed to deliver data in a josn format to Kafka running at localhost#9092 Notice the location in which we create the customer pipeline.conf file#/misc/app_host/pipeline.confThis is important because the docker daemon runs on the underlying host layer in case of NCS5500/NCS5000/XRv9k and Vagrant IOS-XR platforms. /misc/app_host is a shared volume between the host layer and the XR LXC in these platforms.As for ASR9k, there is no issue placing the file anywhere since docker daemon runs inside the XR VM itself. But for consistency we’ll stick to the /misc/app_host location.Finally, launch the docker container by mounting /misc/app_host/pipeline.conf to /data/pipeline.conf inside the container where it will be picked up by the pipeline process.[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ docker run -itd --name pipeline-kafka -v /var/run/netns/global-vrf#/var/run/netns/global-vrf -v /misc/app_host/pipeline.conf#/data/pipeline.conf --hostname localhost --cap-add=SYS_ADMIN pipeline-kafka-xr#latest e42e7e2526253e37b28362bf70c98550ca9ac108dc2aaa667da1290e44c2a035[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMESe42e7e252625 pipeline-kafka-xr#latest ~/bin/sh -c '$vrf_exe~ 2 minutes ago Up 2 minutes pipeline-kafka[xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ [xr-vm_node0_RP0_CPU0#~]$ docker exec -it pipeline-kafka bashroot@localhost#/# root@localhost#/# root@localhost#/# ip netns exec global-vrf bash root@localhost#/# root@localhost#/# root@localhost#/# ps -ef | grep -E ~pipeline|kafka|zookeeper~ root 9 6 0 02#05 ? 00#00#00 /pipeline --config=/data/pipeline.conf --log=/data/pipeline.logroot 10 6 0 02#05 ? 00#00#00 /usr/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,ROLLINGFILE -cp /etc/zookeeper/conf#/usr/share/java/jline.jar#/usr/share/java/log4j-1.2.jar#/usr/share/java/xercesImpl.jar#/usr/share/java/xmlParserAPIs.jar#/usr/share/java/netty.jar#/usr/share/java/slf4j-api.jar#/usr/share/java/slf4j-log4j12.jar#/usr/share/java/zookeeper.jar org.apache.zookeeper.server.quorum.QuorumPeerMain /etc/zookeeper/conf/zoo.cfgroot 11 6 0 02#05 ? 00#00#00 /bin/sh /usr/bin/start-kafka.shroot 12 11 3 02#05 ? 
00#00#02 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xmx256M -Xms256M -server -XX#+UseG1GC -XX#MaxGCPauseMillis=20 -XX#InitiatingHeapOccupancyPercent=35 -XX#+DisableExplicitGC -Djava.awt.headless=true -Xloggc#/opt/kafka_2.11-0.10.1.0/bin/../logs/kafkaServer-gc.log -verbose#gc -XX#+PrintGCDetails -XX#+PrintGCDateStamps -XX#+PrintGCTimeStamps -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/opt/kafka_2.11-0.10.1.0/bin/../logs -Dlog4j.configuration=file#/opt/kafka_2.11-0.10.1.0/bin/../config/log4j.properties -cp #/opt/kafka_2.11-0.10.1.0/bin/../libs/aopalliance-repackaged-2.4.0-b34.jar#/opt/kafka_2.11-0.10.1.0/bin/../libs/argparse4j-0.5.0.jar#/opt/kafka_2.11-0.10.1.0/bin/../libs/connect-api-0.10.1.0.jar#/opt/kafka_2.11-0.10.1.0/bin/../libs/connect-file-0.10.1.0.jar#/opt/kafka_2.11-0.10.1.0/bin/../libs/connect-json-0.10.1.0.jar#/opt/kafka_2.11-0.10.1.0/bin/../libs/connect-runtime-0.10.1.0.jar#/opt/kafka_2.11-0-######################## SNIP Output #########################################9.2.15.v20160210.jar#/opt/kafka_2.11-0.10.1.0/bin/../libs/jetty-server-9.2.15.v20160210.jar#/opt/kafka_2.11-0.10.1.0/bin/../libs/jetty-servlet-9.2.15.v20160210.jar#/opt/kafka_2.11-0.10.1.0/bin/../libs/jetty-servlets-9.2.15.v20160210.jar#/opt/kafka_2.11-0.10.1.0/bin/../libs/jetty-util-9.2.15.v20160210.jar#/opt/kafka_2.11-0.10.1.0/-locator-1.0.1.jar#/opt/kafka_2.11-0.10.1.0/bin/../libs/reflections-0.9.10.jar#/opt/kafka_2.11-0.10.1.0/bin/../libs/rocksdbjni-4.9.0.jar#/opt/kafka_2.11-0.10.1.0/bin/../libs/scala-library-2.11.8.jar#/opt/kafka_2.11-0.10.1.0/bin/../libs/scala-root 314 312 0 02#06 ? 00#00#00 grep -E pipeline|kafka|zookeeperroot@localhost#/# Perfect! As we can see the required services# Pipeline, Kafka and Zookeeper were started in the correct network namespace ( notice we did an exec into the global-vrf network namespace) before checking if the processes are running.Testing the on-box Telemetry receiverTo test out the setup, let us first configure IOS-XR to send model-driven Telemetry data to the local pipeline receiver.Remember, in our custom pipeline.conf we set up pipeline to listen on UDP port 5958 on IP=1.1.1.1The configuration required on IOS-XR is#RP/0/RSP1/CPU0#asr9k#RP/0/RSP1/CPU0#asr9k#show running-config interface loopback 0Thu Apr 13 18#11#33.729 UTCinterface Loopback0 ipv4 address 1.1.1.1 255.255.255.255!RP/0/RSP1/CPU0#asr9k#show running-config telemetry model-driven Thu Apr 13 18#11#39.862 UTCtelemetry model-driven destination-group DGroup1 address family ipv4 1.1.1.1 port 5958 encoding self-describing-gpb protocol udp ! ! sensor-group SGroup1 sensor-path Cisco-IOS-XR-infra-statsd-oper#infra-statistics/interfaces/interface/latest/generic-counters ! subscription Sub1 sensor-group-id SGroup1 sample-interval 30000 destination-id DGroup1 !!RP/0/RSP1/CPU0#asr9k#Notice the highlighted configurations# We configure the destination to be 1.1.1.1#5958 over UDP, where 1.1.1.1 = loopback0 ip address of XR. Could be any Loopback or interface IP (Except any east-west interface IP address under tpa) We select the following sensor path# Cisco-IOS-XR-infra-statsd-oper#infra-statistics/interfaces/interface/latest/generic-counters. This sensor path is used to export interface stats for all interfaces on the box using the Cisco IOS-XR infra-statsd-oper YANG model. 
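Before moving on, it helps to sanity-check both ends of the pipe. On the XR side, show telemetry model-driven subscription Sub1 should report the state of the subscription and of the destination group DGroup1. On the receiver side, a quick no-code way to confirm that messages are actually landing in Kafka is the console consumer that ships with the Kafka install inside the container (path as per the image built earlier; adjust it if your Kafka version differs)#[asr9k#~]$ docker exec -it pipeline-kafka bash root@localhost#/# ip netns exec global-vrf bash root@localhost#/# /opt/kafka_2.11-0.10.1.0/bin/kafka-console-consumer.sh --bootstrap-server localhost#9092 --topic telemetryPress Ctrl-C to stop it; you should see the same json documents that the python consumer shown in the Query the local Kafka instance section parses.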
To learn more about how to configure model-driven telemetry, check out this great tutorial by Shelly# https#//xrdocs.github.io/telemetry/tutorials/2016-07-21-configuring-model-driven-telemetry-mdt/Query the local Kafka instanceAs soon as you configure Model-Driven Telemetry as shown above, the router will start streaming statistics to the local pipeline instance.Pipeline will then push the stats to Kafka running locally to the topic = ‘telemetry’ ( We configured this in our custom pipeline.conf file).Finally purely for test purposes, the docker build process includes a sample python script that uses the python-kafka library to act as a kafka consumer.You can find this inside the running docker container under / #[asr9k#~]$ docker exec -it pipeline-kafka bash root@localhost#/# ip netns exec global-vrf bash root@localhost#/# root@localhost#/# pwd/root@localhost#/# ls kafka_consumer.py kafka_consumer.pyThis is what the sample query script looks like#from kafka import KafkaConsumerimport jsonif __name__ == ~__main__~# consumer = KafkaConsumer('telemetry', bootstrap_servers=[~1.1.1.1#9092~]) for msg in consumer# telemetry_msg = msg.value telemetry_msg_json = json.loads(telemetry_msg) print ~\\nTelemetry data Received#\\n ~ print json.dumps(telemetry_msg_json, indent=4, sort_keys=True) if ~Rows~ in telemetry_msg_json# content_rows = telemetry_msg_json[~Rows~] for row in content_rows# if row[~Keys~][~interface-name~] == 'MgmtEth0/RSP1/CPU0/0'# pkt_rcvd = row[~Content~][~packets-received~] input_drops = row[~Content~][~input-drops~] print(~\\nParsed fields for interface MgmtEth0/RSP1/CPU0/0#\\ \\n Packets Received = %s,\\ \\n Input Drops = %s~ %(pkt_rcvd, input_drops)) As you can guess from the output above we’re executing the commands on an ASR9k. The script above has been built to dump the Telemetry stats in json format in realtime and also parse them to based on the interface key = ~MgmtEth0/RSP1/CPU0/0~. 
If you want this piece of code to work for the Vagrant setup, you will have to use an interface key based on the Vagrant IOS-XR interface naming convention (MgmtEth0/RP0/CPU0/0, GigabitEthernet0/0/0/0 etc.)When we run the script, we get#root@localhost#/# python kafka_consumer.py Telemetry data Received# { ~Rows~# [ { ~Content~# { ~applique~# 0, ~availability-flag~# 0, ~broadcast-packets-received~# 0, ~broadcast-packets-sent~# 0, ~bytes-received~# 0, ~bytes-sent~# 0, ~carrier-transitions~# 0, ~crc-errors~# 0, ~framing-errors-received~# 0, ~giant-packets-received~# 0, ~input-aborts~# 0, ~input-drops~# 0, ~input-errors~# 0, ~input-ignored-packets~# 0, ~input-overruns~# 0, ~input-queue-drops~# 0, ~last-data-time~# 1492110984, ~last-discontinuity-time~# 1484314261, ~multicast-packets-received~# 0, ~multicast-packets-sent~# 0, ~output-buffer-failures~# 0, ~output-buffers-swapped-out~# 0, ~output-drops~# 0, ~output-errors~# 0, ~output-queue-drops~# 0, ~output-underruns~# 0, ~packets-received~# 0, ~packets-sent~# 0, ~parity-packets-received~# 0, ~resets~# 0, ~runt-packets-received~# 0, ~seconds-since-last-clear-counters~# 0, ~seconds-since-packet-received~# 4294967295, ~seconds-since-packet-sent~# 4294967295, ~throttled-packets-received~# 0, ~unknown-protocol-packets-received~# 0 }, ~Keys~# { ~interface-name~# ~Null0~ }, ~Timestamp~# 1492110987184 }, { ~Content~# { ~applique~# 0, ~availability-flag~# 0, ~broadcast-packets-received~# 5894231, ~broadcast-packets-sent~# 0, ~bytes-received~# 2413968971, ~bytes-sent~# 830100769, ~carrier-transitions~# 15, ~crc-errors~# 0, ~framing-errors-received~# 0, ~giant-packets-received~# 0, ~input-aborts~# 0, ~input-drops~# 0, ~input-errors~# 0, ~input-ignored-packets~# 0, ~input-overruns~# 0, ~input-queue-drops~# 0, ~last-data-time~# 1492110987, ~last-discontinuity-time~# 1484314243, ~multicast-packets-received~# 24, ~multicast-packets-sent~# 0, ~output-buffer-failures~# 0, ~output-buffers-swapped-out~# 0, ~output-drops~# 0, ~output-errors~# 0, ~output-queue-drops~# 0, ~output-underruns~# 0, ~packets-received~# 8712938, ~packets-sent~# 2328185, ~parity-packets-received~# 0, ~resets~# 0, ~runt-packets-received~# 0, ~seconds-since-last-clear-counters~# 0, ~seconds-since-packet-received~# 0, ~seconds-since-packet-sent~# 3, ~throttled-packets-received~# 0, ~unknown-protocol-packets-received~# 0 }, ~Keys~# { ~interface-name~# ~MgmtEth0/RSP1/CPU0/0~ }, ~Timestamp~# 1492110987184 } ], ~Source~# ~1.1.1.1#18046~, ~Telemetry~# { ~collection_end_time~# 0, ~collection_id~# 12254, ~collection_start_time~# 1492110987176, ~encoding_path~# ~Cisco-IOS-XR-infra-statsd-oper#infra-statistics/interfaces/interface/latest/generic-counters~, ~msg_timestamp~# 1492110987176, ~node_id_str~# ~asr9k~, ~subscription_id_str~# ~Sub1~ }}Parsed fields for interface MgmtEth0/RSP1/CPU0/0# Packets Received = 8712938, Input Drops = 0Telemetry data Received# { ~Source~# ~1.1.1.1#18046~, ~Telemetry~# { ~collection_end_time~# 1492110987186, ~collection_id~# 12254, ~collection_start_time~# 0, ~encoding_path~# ~Cisco-IOS-XR-infra-statsd-oper#infra-statistics/interfaces/interface/latest/generic-counters~, ~msg_timestamp~# 1492110987186, ~node_id_str~# ~asr9k~, ~subscription_id_str~# ~Sub1~ }}Works Great! Now that you’re able to capture Telemetry messages in realtime through a python script and are able to parse through the fields, you should be able to create your own conditions and actions based on the value of the fields. There you have it! 
Your own standalone pipeline and Kafka based Telemetry receiver running on the box.", "url": "/tutorials/2017-04-12-on-box-telemetry-running-pipeline-and-kafka-on-ios-xr/", "author": "Akshat Sharma", "tags": "vagrant, iosxr, cisco, docker, pipeline, telemetry" } , "tutorials-2018-05-25-logging-on-ios-xr-guest-os-with-rsyslog-and-elastic-stack": { "title": "Logging on IOS-XR guest OS with rsyslog and Elastic stack ", "content": " Logging on IOS-XR guest OS with rsyslog and Elastic stack Intro Requirements Steps to spin up Vagrant setup rsyslog Package installation on IOS-XR SCP option YUM installation from public repo Filters Configuring server side syslog-ng Elastic stack logrotate Conclusion IntroHave you ever worried about logging in the IOS-XR Linux environment? With Linux adoption, we can align our techniques and operational flows with the server world. There should be as little as possible difference in operations.In this tutorial, we are going to cover routers configuration to stream Syslog message and accept those messages on an Ubuntu machine with Syslog-ng and Elastic stack.Figure 1 - Logging conceptsRequirementsYou can run this tutorial on your computer using Vagrant and virtualization technologies. Topology consists of Ubuntu VM and IOS-XRv instance.You will need following resources# 5 GB of RAM; 2 vCPU; Virtualbox, Vagrant and git installed; IOS-XRv instance. Follow this tutorial to request it.Steps to spin up Vagrant setup$git clone https#//github.com/Maikor/IOS-XR-logging-tutorial.git Cloning into 'IOS-XR-logging-tutorial'...remote# Counting objects# 14, done.remote# Compressing objects# 100% (12/12), done.remote# Total 14 (delta 1), reused 8 (delta 0), pack-reused 0Unpacking objects# 100% (14/14), done.Checking connectivity... done.$ cd IOS-XR-logging-tutorial$ vagrant up$ vagrant port xrThe forwarded ports for the machine are listed below. Please note thatthese values may differ from values configured in the Vagrantfile if theprovider supports automatic port collision detection and resolution. 22 (guest) => 2223 (host) 57722 (guest) => 2200 (host) # to access IOS-XR (password for access vagrant)#$ ssh -p 2223 vagrant@127.0.0.1 Password#RP/0/RP0/CPU0#xr## to access ubuntuvagrant ssh ubuntuWelcome to Ubuntu 16.04.4 LTS (GNU/Linux 4.4.0-127-generic x86_64)vagrant@ubuntu-xenial#~$rsyslogWhat is rsyslog and why should we care?rsyslog is the Rocket-fast SYStem for LOG processing.It offers high-performance, excellent security features, and a modular design. While it started as regular syslogd, rsyslog has evolved into a kind of swiss army knife of logging, being able to accept inputs from a wide variety of sources, transform them, and output the results to a variety of destinations.RSYSLOG can deliver over one million messages per second to local destinations when limited processing is applied (based on v7, December 2013). Even with remote destinations and more elaborate processing the performance is usually considered impressive. The crucial advantage over traditional syslog – the capability to send messages to remote receivers.There are of course, several other advantages# Multi-threading TCP, SSL, TLS, RELP MySQL, PostgreSQL, Oracle and more Filter any part of syslog message Fully configurable output formatPackage installation on IOS-XRBy default, syslog is utilized in the IOS-XR Linux environment. We will replace it with rsyslog. 
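Before swapping it out, it is worth a quick look from the XR Linux shell at what is handling logging today. These are read-only checks, and the exact process name can differ by release:

```bash
ps -ef | grep -i syslog      # which syslog daemon is currently running?
ls -l /etc/syslog.conf       # the config file the stock daemon is using
```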
To do it, we will need to get the package installed on the router.There are two options to get rpm on the box# SCP Installation via YUMSCP optionIf you are going to use SCP, you can grab package directly and copy it to the router. Use yum with option localonly to proceed with the installation#[Canonball#~]$ yum install localonly -y rsyslog-7.4.4-r0.0.core2_64.rpmYUM installation from public repoIf you want to use YUM and your router has external connectivity, you may setup a yum repository and install the package via yum. Based on your setup, few extra steps may be required, such as DNS configuration and setting proxy environment.Let’s configure DNS servers on the router.RP/0/RP0/CPU0#Canonball#conf tMon May 28 00#56#06.322 UTCRP/0/RP0/CPU0#Canonball(config)#domain name-server 1.1.1.1RP/0/RP0/CPU0#Canonball(config)#domain name-server 8.8.8.8RP/0/RP0/CPU0#Canonball(config)#commitMon May 28 00#56#25.687 UTCOnce name-server applied in XR CLI config, it will be represented in XR Linux shell#RP/0/RP0/CPU0#Canonball#bashMon May 28 00#57#19.519 UTC[Canonball#~]$ cat /etc/resolv.confdomain localsearch localnameserver 1.1.1.1nameserver 8.8.8.8If your device is behind a proxy, configure it in XR Linux shell#[Canonball#~]$ export http_proxy=http#//proxy.custom.com#80/[Canonball#~]$ export https_proxy=http#//proxy.custom.com#80/Now your external connectivity should be good, proceed with YUM for package installation.In the beginning we need to add the repo via config manager#[Canonball#~]$ yum-config-manager --add-repo https#//devhub.cisco.com/artifactory/xr600/3rdparty/x86_64/adding repo from# https#//devhub.cisco.com/artifactory/xr600/3rdparty/x86_64/[devhub.cisco.com_artifactory_xr600_3rdparty_x86_64_]name=added from# https#//devhub.cisco.com/artifactory/xr600/3rdparty/x86_64/baseurl=https#//devhub.cisco.com/artifactory/xr600/3rdparty/x86_64/enabled=1Enable new repo#[Canonball#~]$ yum-config-manager --enable https#//devhub.cisco.com/artifactory/xr600/3rdparty/x86_64/[Canonball#~]$ yum check-updateLoaded plugins# downloadonly, protect-packages, rpm-persistencelocaldb | 951 B 00#00 ...devhub.cisco.com_artifactory_xr600_3rdparty_x86_64_ | 1.3 kB 00#00devhub.cisco.com_artifactory_xr600_3rdparty_x86_64_/primary | 1.1 MB 00#01devhub.cisco.com_artifactory_xr600_3rdparty_x86_64_ 5912/5912Proceed with installation#[Canonball#~]$[Canonball#~]$ yum install rsyslogLoaded plugins# downloadonly, protect-packages, rpm-persistenceSetting up Remove ProcessResolving Dependencies--> Running transaction check***Omitted output***Installed# rsyslog.core2_64 0#7.4.4-r0.0Complete!If you are facing issues with specific build versions or installation doesn’t go smoothly for you, rpm could be utilized directly, like in snippet below (rpm is already located on device hard drive).[Macrocarpa#~]$ rpm -ivh rsyslog-7.4.4-r0.0.core2_64.rpmPreparing... ########################################### [100%]Stopping system log daemon...0update-rc.d# /etc/init.d/syslog exists during rc.d purge (continuing) Removing any system startup links for syslog ... 
/etc/rc0.d/K20syslog /etc/rc1.d/K20syslog /etc/rc2.d/S20syslog /etc/rc3.d/S20syslog /etc/rc4.d/S20syslog /etc/rc5.d/S20syslog /etc/rc6.d/K20syslog 1#rsyslog ########################################### [100%]update-alternatives# Linking //sbin/syslogd to /usr/sbin/rsyslogdupdate-alternatives# Linking //etc/syslog.conf to /etc/rsyslog.confupdate-alternatives# Linking //etc/logrotate.d/syslog to /etc/logrotate.rsyslogupdate-alternatives# Linking //etc/init.d/syslog to /etc/init.d/syslog.rsyslog Adding system startup for /etc/init.d/syslog.starting rsyslogd ... doneTo send messages to the remote server, we will need to configure rsyslog, in particular, modify its configuration file /etc/rsyslog.confFor TCP we will use @@For UDP we will use @String below will allow us to send all messages via TCP#*.* @@<remote_ip>#514If there is an intention to offload messages to Elasticsearch, we will need to convert them to JSON format. There are two ways how to do that# Format incoming messages received on the server side; Send messages directly in JSON format.We will elaborate on example with the second option. To convert a message to JSON format we will add the template to rsyslog configuration file.template(name=~JsonFormat~ type=~list~) { constant(value=~{~) constant(value=~\\~@timestamp\\~#\\~~) property(name=~timereported~ dateFormat=~rfc3339~) constant(value=~\\~,\\~@version\\~#\\~1~) constant(value=~\\~,\\~message\\~#\\~~) property(name=~msg~ format=~json~) constant(value=~\\~,\\~sysloghost\\~#\\~~) property(name=~hostname~) constant(value=~\\~,\\~severity\\~#\\~~) property(name=~syslogseverity-text~) constant(value=~\\~,\\~facility\\~#\\~~) property(name=~syslogfacility-text~) constant(value=~\\~,\\~programname\\~#\\~~) property(name=~programname~) constant(value=~\\~,\\~procid\\~#\\~~) property(name=~procid~) constant(value=~\\~}\\n~)}The template is defined, next step to apply it#*.* @<remote_ip>#10514;JsonFormatWith such configuration, the router will send messages to 2 destinations, one in the plain text format to port 514 and another in JSON format to port 10514.Out of box Logstash supports UDP, TCP available via “TCP input plugin”. Not to ample our guide, we will stick to UDP for now.Validate the rsyslog config. No errors? All set!rsyslogd -N1Some extra commands, which will help you to do the health check on your setup. To check rsyslog version running#rsyslogd -versionCheck the Linux system log for rsyslog errors. You should see an event that it it has started with no errors. Some logs may also be in /var/log/syslog.sudo cat /var/log/messages | grep rsyslogRestart rsyslog to enable the changes applied to the configuration#[Macrocarpa#~]$ service syslog restartstopping rsyslogd ... donestarting rsyslogd ... doneIf you decide to utilize rsyslog version 8+, make sure you have library libestr version 1.9+Filtersrsyslog provides the comprehensive set of filters and rules. By specifying dot (“.”) in the config file, we determine that all information from all facilities would be sent to the remote location.What if we want to exclude some facilities from sending process? Adding ; should solve this case.*.*;mail # will match all facilities, except mailauth.warn # will match facility auth, with security level 'warning' or higherWhat could be a filter example? Let’s say we want to search msg property of the incoming Syslog message, for a specific string and pipe it to the custom file. 
All messages with substring pam_unix(crond#session)# now logged to ncs-custom.log#msg, contains, ~pam_unix(crond#session)~ -/var/log/syslog-ng/ncs-custom.logKeep in mind, that filters are case-sensitive. ‘Pam’ or ‘PAM’ will not be filtered.Official documentation with notification levels and more on filters here.Configuring server sideOn a server side (aka receiver) we will run syslog-ng to get messages from routers in plain text. Syslog-ng would be run natively (the docker image also available) and Elastic (Logstash + Elasticsearch) would start in docker container for easier deployment.syslog-ngIf you have Ubuntu, apt-get should be used to install syslog-ng.sudo apt-get install syslog-ng -yCheck that installation was successful. With apt, you will not get the latest version of software, but it will serve our purpose without limitations.% syslog-ng --versionsyslog-ng 3.5.6Installer-Version# 3.5.6Revision# 3.5.6-2.1 [@416d315] (Ubuntu/16.04)Compile-Date# Oct 24 2015 03#49#19Next step would be to modify Syslog configuration file /etc/syslog-ng/syslog-ng.conf. Following config lines will open port for listening and write all messages to file ncs.logsource s_net { tcp(ip(0.0.0.0), port(514)); udp(ip(0.0.0.0), port(514));};destination logfiles { file(~/var/log/syslog-ng/ncs.log~);};log { source(s_net); destination(logfiles);};The syslog-ng service should listen on all IP addresses, because we specify 0.0.0.0 and TCP/UDP port 514. If you want to listen to specific IP, just replace 0.0.0.0 with your required ip address.$ netstat -tulpn | grep 514tcp 0 0 0.0.0.0#514 0.0.0.0#* LISTEN -udp 0 0 0.0.0.0#514 0.0.0.0#* -Elastic stackOpen source software, such as Elasticsearch and Logstash provide you the tools to transform and store our log data.We can verify the messages in JSON format are received on the server side, before proceeding with the installation. netcat is used for that.nc -ul server_ip 10514{~@timestamp~#~2018-06-03T19#10#01.524244+00#00~,~@version~#~1~,~message~#~ pam_unix(crond#session)# session opened for user root by (uid=0)~,~sysloghost~#~Red_Pine~,~severity~#~info~,~facility~#~authpriv~,~programname~#~crond~,~procid~#~21501~}{~@timestamp~#~2018-06-03T19#10#01.524813+00#00~,~@version~#~1~,~message~#~ (root) CMD (/usr/bin/logrotate /etc/logrotate.conf >/dev/null 2>&1)~,~sysloghost~#~Red_Pine~,~severity~#~info~,~facility~#~cron~,~programname~#~CROND~,~procid~#~21502~}To proceed with the installation, pull the images#docker pull docker.elastic.co/logstash/logstash#6.2.4docker pull docker.elastic.co/elasticsearch/elasticsearch#6.2.4To let Logstash be aware of incoming rsyslog messages, we should provide configuration file logstash.conf## Logstash will wait for input on port 10514;# server_ip -> ip of our machine;# codec, specifies ~json~ receiving format# ~rsyslog~ type used for identification messaging streams in the pipeline.input { udp { host => ~server_ip~ port => 10514 codec => ~json~ type => ~rsyslog~ }}# Filter block is empty, could be used in futurefilter { }output { if [type] == ~rsyslog~ { elasticsearch { hosts => [ ~server_ip#9200~ ] } }}Run the containers! 
For the Logstash container you can mount the whole folder with the config files, or specify the files directly.docker run -itd --rm --name=logstash -v ~/Documents/elastic/#/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash#6.2.4 docker run -itd -p 9200#9200 -p 9300#9300 --name=elasticsearch -e ~discovery.type=single-node~ docker.elastic.co/elasticsearch/elasticsearch#6.2.4To verify, that Logstash is sending data to Elasticsearch, open a browser http#//server_ip#9200/_all/_search?q=*&pretty.You should see the output from rsyslog#{ ~_index~ # ~.monitoring-logstash~, ~_type~ # ~rsyslog~, ~_id~ # ~d9tyx2MBc17MjdSrF223~, ~_score~ # 1.0, ~_source~#{~@timestamp~#~2018-06-03T21#00#44.739673+00#00~,~@version~#~1~,~message~#~ + /dev/pts/8 root#root~,~sysloghost~#~Red_Pine~,~severity~#~info~,~facility~#~authpriv~,~programname~#~su~,~procid~#~24267~} },Congratulations, all your software is up and running!logrotateIt’s worth to mention one critical component for logging solution. The space on your hard drive is finite. It’s worth investing some amount of time to configure proper rules for your machine to free up space. Logrorate is a perfect solution for that. Let’s take a look at configuration excerpts for ncs.log#cisco@walnut ~ % cat /etc/logrotate.d/apt/var/log/syslog-ng/ncs.log { rotate 24 monthly compress missingok notifempty} rotate 24 - 24 old log files saved; monthly - rotation frequency; compress - gzip used by default, to compress old log files; missingok - no errors if log file is missing; notifempty - skip rotate, if log file is empty.With such a simple addition, your hard drive capacity will last longer.ConclusionYour logs streamed via rsyslog to multiple destinations# syslog-ng and Logstash. Consider visualization with some tools like Kibana or another application and elevate your logging techniques.", "url": "/tutorials/2018-05-25-logging-on-ios-xr-guest-os-with-rsyslog-and-elastic-stack/", "author": "Mike Korshunov", "tags": "iosxr, Logging" } , "tutorials-2018-08-06-comprehensive-guide-to-ansible-on-ios-xr": { "title": "Comprehensive guide to Ansible on IOS-XR", "content": " Comprehensive guide to Ansible on IOS-XR Prerequisites k9 security package comitted on the device. RSA key pair is present Config for SSH server Network management specific moments Communication mechanisms Directory structure Getting into modules Hosts file NETCONF and Banner modules Config, Logging and System modules Interfaces module User module Get Facts module Command module Dealing with passwords Ansible Vault Key-based authentication to IOS-XR device. Performance tips Change strategy for execution Change forks number Conclusion Network Automation for the network is crucial nowadays. It changes how network feels and look. This tutorial is going to cover Ansible modules for IOS-XR, some tips and tricks and how to increase performance for Ansible playbooks.As per Ansible version 2.6.2, we have following 9 modules for IOS-XR. 
An excerpt from Ansible site below# iosxr_banner - Manage multiline banners on Cisco IOS XR devices iosxr_command - Run commands on remote devices running Cisco IOS XR iosxr_config - Manage Cisco IOS XR configuration sections iosxr_facts - Collect facts from remote devices running IOS XR iosxr_interface - Manage Interface on Cisco IOS XR network devices iosxr_logging - Configuration management of system logging services on network devices iosxr_netconf - Configures NetConf sub-system service on Cisco IOS-XR devices iosxr_system - Manage the system attributes on Cisco IOS XR devices iosxr_user - Manage the aggregate of local users on Cisco IOS XR devicePrerequisitesSince Ansible relies on SSH connection to the device, few requirements need to be met.k9 security package comitted on the device.RP/0/RP0/CPU0#flamboyant#show install activeWed Aug 1 18#00#24.162 UTCNode 0/RP0/CPU0 [RP] Boot Partition# xr_lv0 Active Packages# 9 ncs5500-xr-6.3.2 version=6.3.2 [Boot image] ncs5500-mcast-2.1.0.0-r632 ncs5500-mpls-2.1.0.0-r632 ncs5500-mgbl-4.0.0.0-r632 ncs5500-mpls-te-rsvp-2.2.0.0-r632 ncs5500-ospf-2.0.0.0-r632 ncs5500-isis-1.3.0.0-r632 ncs5500-li-1.0.0.0-r632 ncs5500-k9sec-4.1.0.0-r632Node 0/0/CPU0 [LC] Boot Partition# xr_lcp_lv0 Active Packages# 9 ncs5500-xr-6.3.2 version=6.3.2 [Boot image] ncs5500-mcast-2.1.0.0-r632 ncs5500-mpls-2.1.0.0-r632 ncs5500-mgbl-4.0.0.0-r632 ncs5500-mpls-te-rsvp-2.2.0.0-r632 ncs5500-ospf-2.0.0.0-r632 ncs5500-isis-1.3.0.0-r632 ncs5500-li-1.0.0.0-r632 ncs5500-k9sec-4.1.0.0-r632RSA key pair is presentRSA key pair needs to be generated. Use the crypto key generate rsa command to generate it. You must configure a hostname for the router using the hostname global configuration command.RP/0/RP0/CPU0#flamboyant# crypto key generate rsa keypair-ciscoWed Aug 1 18#09#23.836 UTCThe name for the keys will be# keypair-cisco Choose the size of the key modulus in the range of 512 to 4096 for your General Purpose Keypair. Choosing a key modulus greater than 512 may take a few minutes.How many bits in the modulus [2048]#Generating RSA keys ...Done w/ crypto generate keypair[OK]RP/0/RP0/CPU0#flamboyant#RP/0/RP0/CPU0#flamboyant#! If you want to remove keys from device# RP/0/RP0/CPU0#flamboyant#crypto key zeroize rsaWed Aug 1 18#08#37.471 UTC% Keys to be removed are named keypair-ciscoDo you really want to remove these keys ?? [yes/no]# yesRP/0/RP0/CPU0#flamboyant#If you want to generate keys without prompt, use following command#RP/0/RP0/CPU0#flamboyant#crypto key generate rsa general-keys modulusWed Aug 1 19#31#01.233 UTCThe name for the keys will be# modulus Choose the size of the key modulus in the range of 512 to 4096 for your General Purpose Keypair. Choosing a key modulus greater than 512 may take a few minutes.How many bits in the modulus [2048]# Generating RSA keys ...Done w/ crypto generate keypair[OK]RP/0/RP0/CPU0#flamboyant#Config for SSH serverApply the following config on the target device to enable SSH and NETCONF.netconf agent tty!netconf-yang agent ssh!ssh server session-limit 10ssh server v2ssh server netconf vrf defaultNetwork management specific momentsAnsible workflow for managing network nodes has its nuances.Usually, Ansible runs on managed nodes, however, it’s not the case for the network modules. Everything stays the same from the user perspective and for accustomed keywords. All the magic happens in the background. Typically network devices lack of Python support (IOS-XR support application hosting concepts. Read more about it. 
Because of that network module executes locally and CLI/XML instruction sent over to the device.One more important aspect for Linux/Unix, configuration files for the system exist in files on hard disks, so the backup for them created in the same directory. It’s not the case for the network modules (configuration not stored on disk) and we will see an example in the iosxr_config module, where backup folder created on the machine, from which we are running playbooks.Additional introduction to Network module - conditions. They allow you to work with output and make value comparison. In conjunction with match parameter, a more sophisticated state can be checked.Communication mechanismsFor network modules, there are 2 main connections# network_cli and netconfPrevious way to connect to targets was local, you can still use old style of declaration in the playbook, however, it’s deprecated and will be removed eventually. In official Ansible documentation, you will notice parameter provider as an indication that parameter local was used.---- name# Configure IOS-XR devices hosts# routers gather_facts# no # connection local instead of network_cli connection# local tasks# - name# collect facts from IOS-XR routers iosxr_facts# gather_subset# - config provider# ~~ register# config vars# cli# host# ~~ username# ~~ password# ~~If you will run the playbook above with verbose key, in the output you will see$ ansible-playbook head-playbook.yml -i ansible-hosts.ini --forks 10 -vvvPLAYBOOK# head-playbook.yml *********************************************************************************omitted output***TASK [include task] **************************************************************************************task path# /home/cisco/Documents/ansible/head-playbook.yml#16<172.30.13.70> using connection plugin network_cli (was local)<172.30.13.71> using connection plugin network_cli (was local)***omitted output***The choice between network_cli and netconf connection should be made based on documentation for modules Not all modules support both connection, so first check before usage.Directory structureAll playbooks mentioned in tutorial, available on Github. In Github repo Vagrant folder included, feel free to practice against IOS-XRv image Request it here.Here is the layout for ansible-playbooks.$ tree ansible .ansible├── ansible-hosts.ini├── head-playbook.yml├── README.md├── roles│ ├── get_facts│ │ └── tasks│ │ └── main.yml│ ├── xr_commands│ │ └── tasks│ │ └── main.yml│ └── xr_config│ ├── backup│ │ ├── canonball_config.2018-08-06@14#22#09│ │ └── flamboyant_config.2018-08-06@14#22#09│ ├── common│ │ └── router.conf│ └── tasks│ └── main.yml└── xr-passes.yml9 directories, 12 filesFolder content# ansible-hosts.ini contains information about target devices. More on inventory; head-playbook.yml - main playbook, from which most we will execute tasks; roles - Ansible concept to separate list of provisioned apps/configs on the device. In terms of servers think of webserver, database, etc. In networking it’s more about services which could be provided via config changes# BGP, VPN, ISIS. In current case, we just demo the ability to separate playbooks into roles; backup directory appears after iosxr_config module triggered with backup# yes parameter. xr-passes.yml - our Vault, described in tutorial.Getting into modulesHosts fileWe need to define the hosts first. There is a separation of variables from host definition in ini file. In provided example, passwords stored in plain text. 
To avoid it, Ansible Vault should be used and will be covered later in this tutorial. Another mechanism - passwordless authentication, based on keys.$ cat ansible-hosts.ini [routers]flamboyant ansible_host=172.30.13.70 ansible_user=rootcanonball ansible_host=172.30.13.71 ansible_user=cisco[routers#vars]ansible_ssh_pass=ciscoansible_network_os=iosxransible_port=22NETCONF and Banner modulesAs a first example we will run two modules# enable NETCONF on device, set the banner via NETCONF (we just enabled it, why not to utilize it?).Playbook itself demonstrated below. We will just utilize 2 tasks for now. Name of modules is self-describing.For NETCONF module we can provide additionally VRF value, via netconf_vrf parameter. Default VRF, if the parameter is omitted - default. State parameter is responsible for adding or removing NETCONF related knob on the device. Present value will enable NETCONF configuration, Absent will withdraw config.Banner module uses same concept for State parameter (present/absent).Text parameter could start with “|” or “>” as identificator for the multiline string. As per banner itself configuration, you can select would it be login or motd message. Currently, there is a minor bug with caret return symbol treatment, so please use “>” as a multiline identifier.---- name# Configure IOS-XR devices hosts# routers gather_facts# no connection# network_cli tasks# - name# enable netconf service on port 830 iosxr_netconf# listens_on# 830 state# present - name# set welcome banner to device! iosxr_banner# banner# login text# > ! Unauthorized access to device# restricted. # text# > # ~~ state# present connection# netconfConsole output after playbook execution. For first time we run in verbose mode, since -vvv specified. As careful reader may notice, temporary files created directly on the host machine, not on target nodes. 
This is specific of the network module.$ ansible-playbook head-playbook.yml -i ansible-hosts.ini --forks 10 -vvvansible-playbook 2.6.2 config file = /home/cisco/.ansible.cfg configured module search path = [u'/home/cisco/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible executable location = /usr/local/bin/ansible-playbook python version = 2.7.12 (default, Nov 20 2017, 18#23#56) [GCC 5.4.0 20160609]Using /home/cisco/.ansible.cfg as config fileParsed /home/cisco/Documents/ansible/ansible-hosts.ini inventory source with ini pluginPLAYBOOK# head-playbook.yml ******************************************************************************1 plays in head-playbook.yml [WARNING]# Found variable using reserved name# vars_filesPLAY [Configure IOS-XR devices] ****************************************************************************META# ran handlersTASK [enable netconf service on port 830] ****************************************************************task path# /home/cisco/Documents/ansible/head-playbook.yml#9<172.30.13.70> ESTABLISH LOCAL CONNECTION FOR USER# cisco<172.30.13.70> EXEC /bin/sh -c '( umask 77 && mkdir -p ~` echo /home/cisco/.ansible/tmp/ansible-local-238349mD4bn/ansible-tmp-1533330985.59-76269032799671 `~ && echo ansible-tmp-1533330985.59-76269032799671=~` echo /home/cisco/.ansible/tmp/ansible-local-238349mD4bn/ansible-tmp-1533330985.59-76269032799671 `~ ) && sleep 0'Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/network/iosxr/iosxr_netconf.pyUsing module file /usr/local/lib/python2.7/dist-packages/ansible/modules/network/iosxr/iosxr_netconf.py<172.30.13.70> PUT /home/cisco/.ansible/tmp/ansible-local-238349mD4bn/tmpiboupH TO /home/cisco/.ansible/tmp/ansible-local-238349mD4bn/ansible-tmp-1533330985.59-76269032799671/iosxr_netconf.py<172.30.13.70> EXEC /bin/sh -c 'chmod u+x /home/cisco/.ansible/tmp/ansible-local-238349mD4bn/ansible-tmp-1533330985.59-76269032799671/ /home/cisco/.ansible/tmp/ansible-local-238349mD4bn/ansible-tmp-1533330985.59-76269032799671/iosxr_netconf.py && sleep 0'<172.30.13.70> EXEC /bin/sh -c 'rm -f -r /home/cisco/.ansible/tmp/ansible-local-238349mD4bn/ansible-tmp-1533330985.59-76269032799671/ > /dev/null 2>&1 && sleep 0'changed# [flamboyant] => { ~changed~# true, ~commands~# [ ~netconf-yang agent ssh~, ~ssh server netconf port 830~ ], ~invocation~# { ~module_args~# { ~host~# null, ~listens_on~# 830, ~netconf_port~# 830, ~netconf_vrf~# ~default~, ~password~# null, ~port~# null, ~provider~# null, ~ssh_keyfile~# null, ~state~# ~present~, ~timeout~# null, ~username~# null } }}TASK [set welcome banner to device!] 
*********************************************************************task path# /home/cisco/Documents/ansible/head-playbook.yml#14<172.30.13.70> ESTABLISH LOCAL CONNECTION FOR USER# cisco<172.30.13.70> EXEC /bin/sh -c '( umask 77 && mkdir -p ~` echo /home/cisco/.ansible/tmp/ansible-local-238349mD4bn/ansible-tmp-1533330997.74-174548583873229 `~ && echo ansible-tmp-1533330997.74-174548583873229=~` echo /home/cisco/.ansible/tmp/ansible-local-238349mD4bn/ansible-tmp-1533330997.74-174548583873229 `~ ) && sleep 0'Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/network/iosxr/iosxr_banner.py<172.30.13.70> PUT /home/cisco/.ansible/tmp/ansible-local-238349mD4bn/tmpECuOQc TO /home/cisco/.ansible/tmp/ansible-local-238349mD4bn/ansible-tmp-1533330997.74-174548583873229/iosxr_banner.py<172.30.13.70> EXEC /bin/sh -c 'chmod u+x /home/cisco/.ansible/tmp/ansible-local-238349mD4bn/ansible-tmp-1533330997.74-174548583873229/ /home/cisco/.ansible/tmp/ansible-local-238349mD4bn/ansible-tmp-1533330997.74-174548583873229/iosxr_banner.py && sleep 0'<172.30.13.70> EXEC /bin/sh -c 'https_proxy='~'~''~'~' http_proxy='~'~''~'~' /usr/bin/python /home/cisco/.ansible/tmp/ansible-local-238349mD4bn/ansible-tmp-1533330997.74-174548583873229/iosxr_banner.py && sleep 0'<172.30.13.70> EXEC /bin/sh -c 'rm -f -r /home/cisco/.ansible/tmp/ansible-local-238349mD4bn/ansible-tmp-1533330997.74-174548583873229/ > /dev/null 2>&1 && sleep 0'changed# [flamboyant] => { ~changed~# true, ~invocation~# { ~module_args~# { ~banner~# ~login~, ~host~# null, ~password~# null, ~port~# null, ~provider~# null, ~ssh_keyfile~# null, ~state~# ~present~, ~text~# ~Unauthorized access to device# flamboyant restricted.\\n~, ~timeout~# null, ~username~# null } }, ~xml~# ~<config xmlns#xc=\\~urn#ietf#params#xml#ns#netconf#base#1.0\\~><banners xmlns=\\~http#//cisco.com/ns/yang/Cisco-IOS-XR-infra-infra-cfg\\~><banner xc#operation=\\~merge\\~>login! Unauthorized access to device# flamboyant restricted.\\n</banner></banners></config>~}META# ran handlersPLAY RECAP ***********************************************************************************************flamboyant # ok=2 changed=2 unreachable=0 failed=0From the output, we can observe that both changes applied on the device. There are two ways how to check if config properly applied.The first approach is a conservative one. Connect to the device and check latest commit changes. Since each task is separate commit, a command issued for last two configuration changes.RP/0/RP0/CPU0#flamboyant#show configuration commit changes last 2Fri Aug 3 21#04#29.739 UTCBuilding configuration...!! IOS XR Configuration version = 6.3.2banner login Unauthorized access to device# flamboyant restricted.netconf-yang agent ssh!endMore interesting approach to stay in same operations paradigm and use Ansible for config validation## 1 new task introduced# - name# check if two first tasks 'enable netconf service on port 830' and 'set welcome banner to device!' successfully applied on the device iosxr_command# commands# - show run | begin netconf - show run | include banner wait_for# - result[0] contains 'netconf-yang agent' - result[1] contains 'banner login !'# - result[1] contains 'banner login My Not Expected Message'Such playbook will pass, and we will be sure, that config presented on the device. If we uncomment last string, the playbook will fail. We can use match parameter with value any and execution will succeed. The default value is all. Think about them as logical operators AND & OR. 
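As a quick illustration, here is the same kind of validation task written with match: any. It passes as soon as either condition holds; the conditions themselves are only examples:

```yaml
- name: pass if either banner variant is present
  iosxr_command:
    commands:
      - show run | include banner
    wait_for:
      - result[0] contains 'banner login !'
      - result[0] contains 'banner login My Not Expected Message'
    match: any
    retries: 3
    interval: 2
```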
If wait_for argument included in the task, an output will not be returned until success or number of retries exceeded.As you may notice, wait_for used in the last task. More documentation on this module ~stdout_lines~# [ [ ~Building configuration...~, ~netconf-yang agent~, ~ ssh~, ~!~, ~ssh server session-limit 10~, ~ssh server v2~, ~ssh server netconf vrf default~, ~end~ ], [ ~Building configuration...~, ~banner login ! Unauthorized access to device# flamboyant restricted.~ ] ]PLAY RECAP ***********************************************************************************************flamboyant # ok=3 changed=0 unreachable=0 failed=0 Config, Logging and System modulesNext modules to be utilized are iosxr_config, iosxr_logging and iosxr_system.iosxr_config doesn’t support netconf connection. ⚠️To show a more sophisticated flow, we will create a role in our initial playbook.File main.yml will include tasks, which need to be executed. router.conf - common configuration for devices.|____xr-passes.yml|____head-playbook.yml|____ansible-hosts.ini|____roles| |____xr_config| | |____common| | | |____router.conf| | |____tasks| | | |____main.yml$ cat roles/xr_config/common/router.confrouter ospf 1 area 0 interface HundredGigE0/0/1/0 Playbook consists of three tasks. On first task, Ansible will change the device hostname. During second task Ansible applies configuration file on all devices. Third task checks that OSPF configuration is properly applied and OSPF process is up and running.cat roles/xr_config/tasks/main.yml---- name# Change hostname iosxr_config# lines# - hostname - name# Apply config from file iosxr_config# lines# - ~~ backup# yes - name# check if OSPF is configured iosxr_command# commands# - show run | i ~router ospf~ - show processes ospf | i ~Process state~ wait_for# - result[0] contains ~router ospf 1~ - result[1] contains ~Run~Main parameters for the config module are the following# lines & parents. Lines - ordered set of configs. Parents parameter uniquely identifies, under which block lines should be configured. replace values are line (default), block, config. Defines the behavior for task. If set to block and difference exist in lines, whole block will be pushed. match values are line (default), strict, exact, none. Similar parameter to the previous one in terms of comparison. Defines matching algorithm. backup will create the full running configuration in a backup subfolder, before applying new config; before & after - will append commands respectively; comment - text added to commit description. Default# configured by iosxr_config;Ready to get some logs out of the device? Logging module is purposed for that. If you are interested in streaming operation data, check our thorough Telemetry tutorials---- name# Configure IOS-XR devices hosts# routers gather_facts# no connection# network_cli tasks# - name# configure console logging level iosxr_logging# dest# console level# debugging state# present - name# configure logging for syslog server host iosxr_logging# dest# host name# 172.30.13.2 level# critical state# present For logging in IOS-XR Guest OS, check this tutorialiosxr_logging doesn’t allow you to specify the port, so you may use config module as a workaround. ⚠️The third module in the section is system configuration# configure DNS, domain-search, and lookup. 
State present/absent used for enable/disable configuration piece, like we saw earlier in the tutorial.---- name# Configure IOS-XR devices hosts# routers gather_facts# no connection# network_cli tasks# - name# configure DNS and domain-name (default vrf=default) iosxr_system# state# present domain_name# local.cisco.com domain-search# - cisco.com name_servers# # new DNS from CloudFlare, easy to remember ;) - 1.1.1.1 - 8.8.8.8 - 8.8.4.4 Interfaces moduleWonder how to manage interfaces? There is the specific module for interface management. There are 4 states for interface# present(default), absent, up & down. Up equal to present + operationally up. Down - present + operationally down.The aggregate parameter used to configure multiple interfaces in one task. New interface - new line. The first task will enable just one interface, second will enable two interfaces and will set MTU value.- name# Unshut interface iosxr_interface# description# link to RouterXX TenGigE0/0/0/11 name# TenGigE0/0/0/28 state# present - name# Configure interfaces using aggregate iosxr_interface# aggregate# - name# TenGigE0/0/0/30 - name# TenGigE0/0/0/31 mtu# 512 state# presentUser moduleOne more module, this time for user management. Password for user provided in the clear text. Public key could be used, but in this case, Python module base64 required (usually it’s included into Python distributions). If public_key or public_key_contents used and multiple users created, the same key used for every user.If you use parameter purge, which is boolean, all other users going to be removed from the device, except admin and newly created within the task.- name# set multiple users to group sys-admin iosxr_user# name# user1 group# sysadmin state# present public_key_contents# ~~- name# Create multiple users to multiple groups iosxr_user# aggregate# - name# user2 - name# user3 configured_password# cisco groups# - sysadmin - root-system state# present# Remove users- name# user deletion iosxr_user# aggregate# - name# user1 - name# user2 state# absentGet Facts moduleFacts is one of the most straightforward module, but also very resource intense because it will prompt device for running configuration. If gather_subset supplied, possible values are all, hardware, config, and interfaces. Try to limit usage of show running-config in playbooks to not sacrifice playbook performance.- name# get config facts and hardware iosxr_facts# gather_subset# - config - hardware register# hardware- name# get all facts, except information regarding interfaces, we have the special module for them! iosxr_facts# gather_subset# - all - ~!interfaces~Command moduleFew examples for this module already provided above. Important parameters for this module# interval - default 1 second, sets the timer between retries; retries - how many attempts Ansible will do before failing on task. Default - 10; wait_for - example was provided when we enable NETCONF on the system and checked response from the device. Can be used in conjunction with match parameter.Dealing with passwordsAnsible VaultAnsible Vault is a feature to store your secrets and sensitive information in encrypted files, instead of plain text.First step to create vault file#$ ansible-vault create xr-passes.ymlNew Vault password#Confirm New Vault password#$Populate it with values aka variables, same key-value scheme used#my_sensitive_pass_vault# cisco_SecUre_P@ssSave file. 
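Two more ansible-vault sub-commands are handy here, and once a playbook references the vaulted variable you also need to hand Ansible the Vault password at run time:

```bash
ansible-vault edit xr-passes.yml    # decrypts into an editor, re-encrypts on save
ansible-vault view xr-passes.yml    # read-only look at the decrypted values

# supply the Vault password when running the playbook
ansible-playbook head-playbook.yml -i ansible-hosts.ini --ask-vault-pass
# or keep the password in a protected file
ansible-playbook head-playbook.yml -i ansible-hosts.ini --vault-password-file ~/.vault_pass
```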
We can check the encrypted file content#$ cat xr-passes.yml $ANSIBLE_VAULT;1.1;AES256303035653335383834653939333636366535656364656564383333313032613338663763633739326539303964623031383065366135663166356263393937310a316436663938346165376636666631376239383832643964313264643236323466326366343339663766663537316235303436343732636235353763653338370a64373765373539626134626463313238353764313738316534643330393735386364366566643065306633636361353739363865396166623830643630356535Don’t like your Vault password? You can change it.$ ansible-vault rekey xr-passes.ymlAdvanced Encryption Standard (AES) is used for default encryption (which is shared-secret based).---- name# Configure IOS-XR devices hosts# routers gather_facts# no connection# local roles# - get_facts - banner_setup - xr_config vars# # Include vault vars_files# xr-passes.yml cli# host# ~{{ inventory_hostname }}~ username# ~{{ ansible_user }}~ # Password used from Vault password# ~{{ my_sensitive_pass_vault }}~To make it possible, you need to include vars_files key into the playbook with vault file value.Key-based authentication to IOS-XR device.To establish passwordless authentication on IOS-XR we need to go through multiple steps. The public key should be encoded in base64 format. You can use utility base64, part of Linux and macOS distribution, or try to use online tool for encoding, such as base64decode.org. After copy key to IOS-XR based device and import it.Make sure that key-pair generated on the device and create b64 file.$ ls ~/.ssh/ | grep id_rsa.pubid_rsa.pub$ cut -d~ ~ -f2 ~/.ssh/id_rsa.pub | base64 -d > ~/.ssh/id_rsa_pub.b64$ ls -la ~/.ssh/ | grep b64-rw-rw-r-- 1 cisco cisco 535 Aug 6 10#43 id_rsa_pub.b64Key transfer and import operation#RP/0/RP0/CPU0#flamboyant#scp cisco@172.30.13.2#/home/cisco/.ssh/id_rsa_pub.b64 /disk0#/id_rsa_pub.b64Mon Aug 6 17#40#58.958 UTCConnecting to 172.30.13.2...Password# Transferred 535 Bytes 535 bytes copied in 0 sec (535000)bytes/secRP/0/RP0/CPU0#flamboyant#dir disk0# | i rsaMon Aug 6 10#50#14.476 PDT 56 -rw-r--r-- 1 535 Aug 6 10#41 id_rsa_pub.b64RP/0/RP0/CPU0#flamboyant#crypto key import authentication rsa disk0#/id_rsa_pub.b64Mon Aug 6 10#41#03.651 UTCRP/0/RP0/CPU0#flamboyant#Verification, that key is imported#RP/0/RP0/CPU0#flamboyant#show crypto key authentication rsaMon Aug 6 10#51#24.635 PDTKey label# rootType # RSA public key authenticationSize # 4096Imported # 10#41#03 PDT Mon Aug 06 2018Data # 30820222 300D0609 2A864886 F70D0101 01050003 82020F00 3082020A 02820201 00D031B9 C0CD4838 C031D9E8 390C51ED 8B77D3F8 F0637BE3 CB4631C5 5D84A294 BE475637 8F7CC395 3E4AD022 ABBE538A 5304CD3A EC9F0B19 0876132F 7675B36C 46ED953D B870F3FB 2EDB9B50 E6C29278 5A48C0B5 66B09AC3 D03A54FB E7F8DE78 A7733571 660DFED5 FB6D0599 54227601 08924FFD CBB890F7 93DCE02C 13F4FFA2 E15FF061 9C64E0BF B62CF8B0 C6305613 D714F84F 7DBA3B1D ED93609B 8E8384A8 EC259CDA EEBBD07E 5931F467 4D86D59A 24B596C7 4AEDE957 FA8866C1 ED2988F5 7B9945F9 CC308EA3 532A2470 75C8CE23 49C0AA75 A1F03538 BC3DD4DE EACC8150 6640B368 7D5696A7 15C6D1BA D6534F34 3CD4ED92 A313A8D0 0480A169 4BF9575C 6BCE836E D72F4E01 E76C94A1 3B35C430 FB6A471B 453B0DE3 ACD28034 2632E111 192A9CA0 3DBF3410 0E9580C7 E0DE4968 01DB0C43 98254390 FDB43E3E 39429EA2 9CFA40A5 2D8A89EC 1DA9ED1D 494306D2 96936B1D ABDA1F7C 513B9E89 4E45F1FA 50B1DB14 A00D4A83 2B72C5EC 4557A975 A76D49D8 AC184BBE 3C75E292 CFE0F032 2DAE7154 83AE0A21 D4177524 11F33960 56732666 84619C01 BA36E257 93DE4A8B B8E1E7F7 67A80F9A 265320F4 949F6151 D67B1B2E BF3F6C61 C98C45CF EE3F2D87 EE7031D9 AD27C89A 20087789 F711FD69 
0957C424 E216E439 51B95831 DCE9008A 7F02D500 802AEADB 4C7469B9 04E98E1A 4BDC6BC1 C36C191F 31747564 5BC178F6 CD020301 0001 Try to connect, to make sure, no password required#ssh root@172.30.13.70 Unauthorized access to device# flamboyant restricted.RP/0/RP0/CPU0#flamboyant#Voilà, passwordless authentication works.Performance tipsChange strategy for executionPlaybook performance can be increased. In order to achieve this, we need to use strategy plugin and change default value linear to free. By default, Ansible task execution wait for completion on the host, then goes to another host and after execution on all hosts for the current task complete, a new task started. With free strategy, Ansible will create a fork of the process. It wouldn’t wait for execution of task to be completed on all nodes. When task is completed on the node, next task is started on it, without delay.It’s easy to change strategy in playbook#---- name# Configure IOS-XR devices hosts# routers strategy# free gather_facts# no connection# network_cli roles# - get_facts Change forks numberForks - parallel processes spawned by Ansible for communication with remote hosts and task execution. The default amount of processes - 5. Add the following string to ansible.cfg to increase number of forks.# ansible.cfg forks = 10Another way to change amount of Forks to specify a value, when you run a playbook#ansible-playbook head-playbook.yml -i ansible-hosts.ini --forks 10 ConclusionWe cover Ansible modules available today for IOS-XR (particular attention to iosxr_command & iosxr_config modules), prerequisites required for that, created role examples and tweak Ansible performance for faster playbook completion. Start slowly with your automation and increase the complexity as you grow.Good luck with further automation 🔧", "url": "/tutorials/2018-08-06-comprehensive-guide-to-ansible-on-ios-xr/", "author": "Mike Korshunov", "tags": "iosxr, Ansible, Automation" } , "#": {} , "tutorials-application-hosting-with-appmgr": { "title": "Application Hosting With Appmgr", "content": " application-hosting-with-appmgr Application Install Application Activation Application Action Application Monitoring Building your application Installing and running your application Third Party Application Hosting on IOS XRApplication Hosting on IOS-XR allows users to run third-party applications on Cisco routers running IOS-XR software. This article will introduce the Application Hosting features and provide a step-by-step guide to running your own application on XR.Why use third-party applications? You can use a third-party application to extend router capabilities to complement IOS-XR features. TPAs help in optimizing the compute resources required in a deployment. TPAs can provide operational simplicity by allowing you to use the same CLI tools and programmability tools for managing both IOS-XR and the application. By leveraging well-defined programmable interfaces on the router such as gRPC, gNMI, and SL-APIs, TPAs can readily exchange data with the router over secure channels. Certain use-cases such as service monitoring and service assurance benefit from having an application that can send and receive traffic on services from the router itself. This provides deeper visibility into network.App Hosting Components on IOS XRNow that we have established some use-cases for hosting applications on routers, let’s dive into IOS XR features that can help us in doing so. Docker on IOS XR# The Docker daemon is packaged with IOS XR software on the base Linux OS. 
This provides native support for running applications inside docker containers on IOS XR. Docker is the preferred way to run TPAs on IOS XR. Appmgr: While the docker daemon comes packaged with IOS XR, docker applications can only be managed using appmgr. Appmgr allows users to install applications packaged as rpms and then manage their lifecycle using IOS XR CLI and programmable models. We will discuss appmgr features in depth in later sections.A public repository (https://github.com/ios-xr/xr-appmgr-build) contains scripts that package docker images into appmgr-supported rpms. We will use these scripts to build appmgr rpms in a later section of this article. PacketIO: This is the router infrastructure that implements the packet path between TPAs and IOS XR running on the same router. It allows TPAs to leverage XR forwarding to send and receive traffic. A future article in this series will discuss the architecture and features of PacketIO in-depth. TPA SecurityIOS XR comes with built-in guardrails to prevent Third Party Applications from interfering with its functions as a Network OS. While IOS XR does not limit the number of TPAs that can be run simultaneously, it does restrict the resources available to the docker daemon for the following parameters: CPU: ¼ CPU per core available in the platform. RAM: 1G maximum. Disk is limited by the partition size, which varies by platform; it can be checked by executing “run df -h” and looking at the size of the /misc/app_host or /var/lib/docker mounts. All packets to the application are policed by the XR control-plane protection, LPTS. Read this blog to learn more about XR’s LPTS: Introduction to NCS 55XX and NCS 5xx LPTS. Signed applications are supported on IOS XR. Users can sign their own applications by onboarding an Owner Certificate (OC) using the Ownership Voucher based workflows described in RFC 8366. After onboarding an Owner Certificate, users can sign applications with GPG keys based on the Owner Certificate, which can then be verified while installing the application on the router.IOS XR appmgrCommand reference:Application Installappmgr|- package | |- install <rpm>| |- uninstall <rpm> <rpm> is a path to the application rpm. Install: Installs the rpm; the application is then ready to be configured. Uninstall: Uninstalls an application that has been un-configured. From a docker standpoint, executing appmgr install will load the docker image from the rpm into the local repository.
Executing appmgr uninstall will remove the docker image from the local repository.Application Activationconfigureappmgr|- application <name>| |- type <type>| |- source <source>| |- run-opts <run-options>| |- run-cmd <cmd>commit Type# Type of app [docker, native] Source# app source file [docker image name] Run-opts# docker run options [docker only]– Not all docker run opts are supported Docker-run-cmd – Command to run in container [docker only] Once committed the container will be started by the App ManagerApplication Actionappmgr|- application| |- copy| |- start| |- stop| |- kill| |- exec Start# Start an app [Docker equivalent# docker start] Stop# Stop an app [Docker equivalent# docker stop] Kill# Kill an app [Docker equivalent# docker rm] Exec# run command in container [Docker equivalent# docker exec] Copy# copy files to/from containerApplication Monitoring show appmgr |- application-table |- application name <name> | |- info [summary|detail] | |- stats | |- logs |- source-table |- source name <name> Application-table# Shows a consolidated view of all applications and their status Application info# Show application status and information Application stats# Show app stats [Docker equivalent# docker stats/top] Application logs# Show app logs [Docker equivalent# docker logs/journalctl] Source table# Show information about available sources.Building your applicationLet us try building a docker application as a XR appmgr support rpm package.To get started, clone the appmgr-build repository on your development environment.git clone https#//github.com/ios-xr/xr-appmgr-buildThe cloned directory structure should look like#xr-appmgr-build├── appmgr_build├── clean.sh├── docker│ ├── build_rpm.sh│ ├── Centos.Dockerfile│ └── WRL7.Dockerfile├── examples│ └── alpine│ ├── alpine.tar.gz│ ├── build.yaml│ ├── config│ │ └── config.json│ └── data│ └── certs│ ├── cert.pem│ ├── client.key│ ├── server.crt│ └── server.key├── LICENSE├── Makefile├── README.md└── release_configs ├── eXR_7.3.1.ini └── ThinXR_7.3.15.iniInside the xr-appmgr-build directory, let us create a directory specific to our docker application (my-app). In the my-app directory, we need to have the following# build.yaml file that will specify the build parameters for our application. A tarball of the docker image [Optional] Config directory containing any configuration files for our application. [Optional] Data directory containing any other files for our application.For our example, we will not set any config or data directories.cisco@cisco#~/demo$ cd xr-appmgr-build/cisco@cisco#~/demo/xr-appmgr-build$ mkdir my-appcisco@cisco#~/demo/xr-appmgr-build$ cd my-app/cisco@cisco#~/demo/xr-appmgr-build/my-app$ touch build.yamlcisco@cisco#~/demo/xr-appmgr-build/my-app$ docker pull my-appUsing default tag# latestlatest# Pulling from library/my-appDigest# sha256#aa0cc8055b82dc2509bed2e19b275c8f463506616377219d9642221ab53cf9feStatus# Image is up to date for my-app#latestcisco@cisco#~/demo/xr-appmgr-build/my-app$ docker save my-app#latest > my-app.tarcisco@cisco#~/demo/xr-appmgr-build/my-app$ lsbuild.yaml my-app.tarcisco@cisco#~/demo/xr-appmgr-build/my-app$The build.yaml contains parameters that specify how to build the rpm. name and version are used to tag the RPM we are building. Release should correspond to an entry in the release_configs directory. Currently, ThinXR_7.3.15 corresponds to LNT platforms and eXR_7.3.1 corresponds to eXR platforms. We must specify the name and path to the docker tarball under sources. 
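The walkthrough below simply pulls a pre-built image named my-app. If you are building the image yourself, a minimal Dockerfile is all you need before exporting it to a tarball in the next step; everything in this sketch (base image, script name) is a placeholder for your own content:

```dockerfile
# hypothetical application image; replace the payload with your own
FROM alpine:3
COPY my_app.sh /usr/local/bin/my_app.sh
RUN chmod +x /usr/local/bin/my_app.sh
CMD ["/usr/local/bin/my_app.sh"]
```

Build and tag it with docker build -t my-app . and then continue with docker save as shown below.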
[Optional] We specify the name and path to the config directory and data directory under config-dir and data-dir respectively. We are not using data or config directories in this example. Example build.yaml#packages# - name# ~my-app~ release# ~ThinXR_7.3.15~ version# ~0.1.0~ sources# - name# my-app file# my-app/my-app.tarOnce the steps above are completed, we can use the appmgr_build script to build the rpm. We run the script using the following command#./appmgr_build -b my-app/build.yamlIn general#./appmgr_build -b <path to target build.yaml>Once the build process is complete, the rpm should be present in the /RPMS/x86_64/ directory. After building the application, we can copy it to the router using the copy command. Copy supports the following external source# ftp# Copy from ftp# file system http# Copy from http# file system https# Copy from https# file system scp# Copy from scp# file system sftp# Copy from sftp# file system tftp# Copy from tftp# file systemFor example# copy https#//<web-server>/<path-to-rpm>/ /misc/disk1/Installing and running your applicationNow that we have learned how to package docker applications as appmgr rpms, let us try installing and running rpm packages using appmgr.After copying the rpm onto the router, we can install it using appmgr CLI commands.appmgr package install rpm /misc/disk1/my-app-0.1.0-ThinXR_7.3.15.x86_64.rpmWe can verify if the packaged was installed using#RP/0/RP0/CPU0#8201#show appmgr packages installed Thu Jan 19 15#59#17.243 PSTPackage ------------------------------------------------------------my-app-0.1.0-ThinXR_7.3.15.x86_64.rpmOnce installed, the application can be activated usingappmgr application my-app activate type docker source hello-world docker-run-opts “<YOUR DOCKER RUN OPTS>”Running applications can be viewed in the application table#RP/0/RP0/CPU0#8201#show appmgr application-table Thu Jan 19 16#00#29.878 PSTName Type Config State Status ----------- ------ ------------ --------------------------------my-app Docker Activated Running bgpfs2acl_1 Docker Activated Exited (137) 4 months ago bgpfs2acl_2 Docker Activated Exited (2) 2 months ago ", "url": "/tutorials/application-hosting-with-appmgr", "author": "Suhaib Ahmad", "tags": "iosxr, cisco, linux, appmgr, Application Hosting" } , "#": {} }