The Era of Network-Modeling

Posted On Mar-12

It’s 2016. In contrast to the networks of yesteryear, an enterprise network can no longer be summarized as just a couple of rows of cabinets in a datacenter.

Today, when we say large-scale enterprises, we’re referring to a few hundred rows, multiple datacenters, several clouds, routers providing connectivity between different VRFs, and WAN connections to a few different sites. At these scales, the need to represent the network in its entirety as a data model quickly becomes obvious.

State vs. Intent

In its simplest form, a model is nothing more than a unified representation of the state of all protocols and neighborships in the network. The idea is to keep this ‘network state’ updated at all times.

You’d then subscribe to events that fire whenever the state model above updates. To each event, you’d react by verifying that the latest network state aligns with the expected state. This necessitates maintaining a separate ‘expected-state’ topology. Vendors, in 2016, have settled on calling the expected state the ‘Intent’, and the active state the ‘State’.

The ‘intent’ and the ‘state’ together make up the core of Network-Modeling.

Monitor the State, Provision the Intent

In a stable network, the state and intent are converged. From the perspective of monitoring, the State is always the source of truth. Anything worth monitoring about the network is made part of the state collection – be it OSPF adjacencies, BGP summaries, LLDP neighbors, multicast state, just about anything.

Monitoring scripts then work on this state, diffing it against the Intent. Any divergence between the two indicates instability in the network.
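Such a monitoring pass can be sketched in a few lines of Python. Everything here – the dictionary shapes, the device and neighbor names – is a hypothetical illustration of the diffing idea, not any vendor’s model format:

```python
def diff_models(intent, state):
    """Return the entries where collected state diverges from intent."""
    divergent = {}
    for key, expected in intent.items():
        actual = state.get(key)
        if actual != expected:
            divergent[key] = {"expected": expected, "actual": actual}
    return divergent

# Hypothetical snapshot: one OSPF adjacency is stuck and never reached FULL.
intent = {
    ("spine1", "ospf", "10.0.0.2"): "FULL",
    ("spine1", "ospf", "10.0.0.3"): "FULL",
}
state = {
    ("spine1", "ospf", "10.0.0.2"): "FULL",
    ("spine1", "ospf", "10.0.0.3"): "EXSTART",
}

# A non-empty diff is the signal that the network has diverged from intent.
drift = diff_models(intent, state)
```

A converged network simply produces an empty diff, which is exactly what makes this check cheap to run on every state update.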

Similarly, provisioning tools make changes only to the Intent; they never touch network devices directly. The modified Intent then propagates out to the network – and while doing so, automatically picks the best available transport mechanism, falling back from the most favorable to the least favorable transport (think of it as cascading down from NETCONF XML RPCs, through YANG/Tail-f models, to plain old SSH CLI commands). The idea is that all of that dirty transport work is abstracted away from the engineer, who only writes tools that interact with the Intent.
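The fallback cascade itself reduces to a simple “try in order of preference” loop. The transport functions below are placeholder stubs standing in for real NETCONF/YANG/CLI plumbing – none of them is an actual vendor API:

```python
class TransportError(Exception):
    """Raised when a given transport is unavailable for a device."""


def push_netconf(device, config):
    # Placeholder: would send a NETCONF <edit-config> RPC here.
    raise TransportError("device does not speak NETCONF")


def push_yang(device, config):
    # Placeholder: would push through a YANG-modeled interface here.
    raise TransportError("no YANG models available")


def push_cli(device, config):
    # Placeholder: the last-resort fallback, plain SSH CLI commands.
    return "ok"


def propagate(device, config):
    """Try transports from most favorable to least favorable."""
    for transport in (push_netconf, push_yang, push_cli):
        try:
            return transport(device, config)
        except TransportError:
            continue
    raise RuntimeError("no usable transport for %s" % device)
```

The engineer calling `propagate()` never sees which rung of the ladder actually carried the config – which is the whole point of the abstraction.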

What does abstraction provide?

Until recently, provisioning tools interacted directly with the network. There are several gaps in that approach. For starters, without any abstraction between provisioning and the actual propagation of candidate config out to the network, it is nearly impossible to predict what impact a config change might have on the network topology as a whole. Maintaining a separate ‘Intent’ lets you run algorithms that try to predict this.

Also, any modification made to the Intent is reflected in the following state collection. This forms the basis of a feedback system that checks whether changes have propagated out to the network successfully, with the intended results. All of this is critical to automating the functions involved in managing an enterprise network.

The future of Network Models

Provisioning used to mean the CLI, and monitoring was synonymous with SNMP. Welcome to 2016: the CLI is plain painful, SNMP’s gotta go, and we’re starting to run agents directly on network equipment to facilitate both provisioning and monitoring in smart, efficient ways.

Cisco has made onePK publicly available, and NX-API improves the accessibility and provisioning of Nexus devices. Arista’s EOS ships with excellent EEM functionality, along with an eAPI that can do wonderful things. With the newest EOS release, they even stream such events to remote workstations, which can then react to SysDB changes. Juniper lets you run agents on Junos as well, and subscribe to events.

Think about it for a second: you don’t have to poll for MAC table changes, LANZ messages, or counters anymore. You just subscribe to these events, and react only when you receive one. Provisioning takes place through RPCs with well-defined YANG models. Perhaps OpenConfig will gain more steam. Eventually, networks will provision themselves, monitor themselves, and hopefully heal themselves (at least in part). This is the way we all envisioned network automation to be!
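In code, the shift from polling to subscribing is just the shift from a loop to a callback registration. The toy event bus below illustrates the pattern only – the topic name and event payload are made up, not a real device’s streaming API:

```python
from collections import defaultdict


class EventBus:
    """Toy pub/sub: handlers run only when an event actually arrives."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)


bus = EventBus()
seen = []

# Instead of polling the MAC table, register a reaction and go idle.
bus.subscribe("mac-table", seen.append)

# When the device streams a change, only then does the handler fire.
bus.publish("mac-table", {"mac": "00:1c:73:aa:bb:cc", "port": "Et1"})
```

Between events, nothing runs at all – which is exactly the efficiency win over a polling loop that wakes up every few seconds to find nothing changed.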