So, you may know why network observability is becoming increasingly important for enterprises, but what could it look like within a large enterprise toolscape?
Could it be one tool, or is it a strategic combination? How are these tools integrated, and what could enhance an effective network observability practice? While the exact answer will differ across organizations, what's certain is that no matter what your observability platform looks like, it will benefit from automated network assurance data.
Whether you're trying to move beyond monitoring, or you already have a more mature observability practice in place, assurance is the final puzzle piece that injects trust and validation into network operations.
What you need for a successful observability practice may seem subjective - but given how central the network is to the success of modern enterprises, we posit that some elements are non-negotiable.
A primary cornerstone of observability is an understanding of the actual, observed state of the network at a particular point in time. This knowledge is vital for observability, as you cannot continuously observe what you don't know exists.
Proper discovery, inventory-taking, and collection of device state data, as well as mapping this out in topologies and visualizing the data, is almost impossible to achieve manually in dynamic enterprise networks, as the result would continuously be out of date.
Regular snapshots of the network give operators a means for comparison from one point in time to another, a way to answer "what's in my network" and "what's changed?"
We've mentioned before that observability is less about the network on a device level (we can leave that for monitoring) and far more about the end user. With this front of mind, easy access to how an application behaves through your network from endpoint to endpoint is invaluable.
Without end-to-end visibility, troubleshooting requires network teams to spend valuable time on repetitive tasks, while said issue could be affecting the end user.
With this in place, teams can act proactively, lowering mean time to resolution thanks to an observable network.
Read: End-to-end path simulation with API
To easily leverage network data in a useful manner for observability, it must be normalized across vendors and environments and made consumable in technology tables, or rich and flexible network models.
It's not uncommon for teams to find a way to access the network data they need but then be stuck on how to actually get value out of the data. Should they invest the time to interpret the network data they have? How can they effectively use it for reporting? Can other teams, beyond the networking team, understand it easily?
Too much indigestible data can in fact hamper your observability, muddying the view of the network with unwanted information. For this reason, any assurance solution providing network data must prioritize flexibility if it is to be useful for observability: allowing the user to choose what they see, a simple and intuitive GUI, and clear presentation of this wealth of data are all imperative.
Automated network assurance provides this network inventory, configuration, and state information, visualized and normalized across vendors and environments, via simple API integration.
Contextualized data that is normally either very difficult or impossible to gather from traditional monitoring systems, such as the end-to-end path of a packet through your network, is automatically mapped and modeled. It's ready to be used wherever you need it most (and by whichever team needs it most!).
IP Fabric's integration ecosystem has already established some pairings that elevate network observability efforts.
Use insights from IP Fabric to make monitoring more contextualized and useful for your team. Avoid alert fatigue by focusing on what's important to your teams. IP Fabric can easily provide the PRTG monitoring platform with network topology analyses for a more comprehensive view of the network.
Download: Paessler PRTG Solution Brief
Splunk is a versatile tool that helps put network data into action; IP Fabric can bolster its usefulness by providing actual network state data easily via API.
Read: How to integrate IP Fabric with Splunk
Another monitoring and assurance match made in heaven, Centreon and IP Fabric work together to take advantage of IP Fabric's advanced discovery process, ensuring all the information you could possibly need is being monitored and nothing is overlooked.
See Documentation: IP Fabric and Centreon or Read Centreon's Blog Post: Integrating Network Assurance and IT Infrastructure Monitoring for stronger networks
It's clear that whatever your observability strategy, whether still relying on traditional monitoring or already moving toward a more mature implementation, it needs network assurance to address those blind spots and give you confidence through continuous validation of your actual network state.
My fellow Solution Architect, Justin, stated in his blog post about API Programmability that
One of the most difficult parts in a Network Automation journey is collecting the data you need.
and that
IP Fabric extracts your important network data directly from the devices and places it in our vendor-neutral data models which remove this parsing burden from your development teams.
We are now pleased to announce the release of our IP Fabric Ansible collection to allow network professionals to get the information they need into one of the most popular network automation frameworks available.
So, what is an Ansible collection? Ansible collections are a distribution format for Ansible content - a way to package and distribute Ansible roles, modules, and plugins as a single archive file. A collection can include multiple roles, modules, and plugins, and can be used to organize and distribute related content together. One of the main benefits of collections is their ability to be shared and reused across multiple projects.
Our collection is currently hosted on our GitLab and distributed through Ansible Galaxy; it contains a dynamic inventory plugin, a set of modules, and a lookup plugin.
Find our full collection documentation here.
Before the Ansible collection can be used, there are some prerequisites to get going:
pip3 install ipfabric
pip3 install ansible
Once the prerequisites are installed, the Ansible collection itself can be installed from the command line: ansible-galaxy collection install community_fabric.ansible.
Ansible dynamic inventory is a feature that allows Ansible to automatically discover and manage hosts in a dynamic environment. Instead of specifying the hosts to be managed in a static inventory file, a dynamic inventory retrieves the host information from an external source, in our case, it's IP Fabric.
Dynamic inventories offer several benefits over static inventories: the host list always reflects the actual network, and there is no static file to maintain by hand. The plugin configuration below uses IP Fabric as the source:
plugin: community_fabric.ansible.inventory
provider:
  base_url: https://<url>
  token: <token>
compose:
  ansible_network_os: family
keyed_groups:
  - key: family
    prefix: ""
    separator: ""
If the file above is saved as ipf_inventory.yml, executing it will query IP Fabric and return a list of hosts along with information IP Fabric has provided, such as site name and uptime. The compose section creates a dynamic variable for each host called ansible_network_os, set to the value of the family key returned from IP Fabric. The keyed_groups section groups devices based on their device families, such as eos, ios, and junos. More parameters are available and can be found in the documentation.
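To make this more concrete, here is roughly what the inventory plugin does behind the scenes, sketched with the python-ipfabric SDK that is already installed as a prerequisite (the exact columns queried here are our assumption, mirroring the host information mentioned above):

# inventory_sketch.py - illustrative only; the real logic lives in the collection's inventory plugin
from ipfabric import IPFClient

ipf = IPFClient(base_url='https://<url>', token='<token>')
devices = ipf.inventory.devices.all(columns=['hostname', 'siteName', 'family', 'uptime'])

# Group hosts by device family, mirroring the keyed_groups setting above
groups = {}
for device in devices:
    groups.setdefault(device['family'], []).append(device['hostname'])
print(groups)  # e.g. {'eos': [...], 'ios': [...], 'junos': [...]}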
Ansible modules are pre-written scripts that can be used to perform specific tasks on managed hosts. They are written in Python and can be used to perform tasks such as installing software and configuring network devices. Ansible modules can be used in playbooks, which are written in YAML and describe a set of tasks to be executed on managed hosts.
In the initial release of the Ansible collection, there are three modules: snapshot_info, snapshot, and table_info. Let's take a look at each of these modules and see what they do.
snapshot_info
This module is intended to gather snapshot information from IP Fabric such as the name, ID, how many devices are in the snapshot and much more.
- name: Snapshot Info
  community_fabric.ansible.snapshot_info:
    provider:
      base_url: https://<url>
      token: <token>
    snapshot_id:
The above task shows how to use this module within a playbook. The provider parameter defines how to connect to the IP Fabric API, though this can also be achieved with environment variables; going forward, I will exclude the provider parameter from the examples. The last parameter, snapshot_id, is optional: if omitted, the module returns a list of loaded and unloaded snapshots; if specified, only that one snapshot is returned.
snapshot
The snapshot module allows snapshots to be manipulated from within Ansible. Everything from starting a discovery to deleting a snapshot can be done with this module.
- name: Start Snapshot (state=present)
  community_fabric.ansible.snapshot:

- name: Delete Snapshot
  community_fabric.ansible.snapshot:
    snapshot_id: 12dd8c61-129c-431a-b98b-4c9211571f89
    state: absent

- name: Unload Snapshot
  community_fabric.ansible.snapshot:
    snapshot_id: 12dd8c61-129c-431a-b98b-4c9211571f89
    state: unload

- name: Clone Snapshot
  community_fabric.ansible.snapshot:
    snapshot_id: 12dd8c61-129c-431a-b98b-4c9211571f89
    state: clone

- name: Rediscover Devices in Snapshot
  community_fabric.ansible.snapshot:
    snapshot_id: 12dd8c61-129c-431a-b98b-4c9211571f89
    devices:
      - 9AMSST2E75V
    state: rediscover
There are currently eight states for the snapshot module, each performing a different function:

State | Description |
present | The default state; starts a new discovery with the global settings. If snapshot_id is supplied together with snapshot_name and snapshot_note, it instead edits that snapshot's name and note. |
absent | Deletes the snapshot identified by snapshot_id. |
load / unload | Loads or unloads the snapshot identified by snapshot_id. |
lock / unlock | Locks or unlocks the snapshot identified by snapshot_id. |
clone | Clones the snapshot identified by snapshot_id and loads the clone. |
rediscover | Rediscovers the device serial numbers listed in the devices parameter within the snapshot identified by snapshot_id. |
table_info
The table_info module allows Ansible users to gather data from all IP Fabric tables, such as NTP Summary, VRF Interfaces, Port Channels, and many more. We want our IP Fabric users to get as much of the information we provide into the tools that they love, and this module is great for that.
- name: Table Info
  community_fabric.ansible.table_info:
    # snapshot_id:
    technology: inventory
    table: devices
This module has three key parameters: technology, table, and snapshot_id. The technology parameter lets a user specify which area of IP Fabric to gather information from; each technology has corresponding tables. The example above uses the inventory technology with the devices table, which corresponds to the Devices table under the Inventory menu in our UI. If snapshot_id is not specified, the latest loaded snapshot is used.
Let's take another example: say we want to return the ARP table - we can use the technology 'addressing' and the table 'arp_table'. A full list of available technologies and tables can be found in the module documentation.
- name: Find IP address belonging to a MAC address
  community_fabric.ansible.table_info:
    # snapshot_id:
    technology: addressing
    table: arp_table
    filter:
      mac:
        - like
        - 5254.00d3.45c5
    columns:
      - hostname
      - intName
      - ip
      - mac
As we can see, we are gathering the ARP table from IP Fabric, this time with some new parameters. The filter parameter allows users to add as many filters to the API query as they need; this example only returns ARP entries that have a specific MAC address. The columns parameter allows the user to specify which columns the module returns, making the response more concise.
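Because the collection builds on the python-ipfabric SDK, the same query can be issued directly in Python. This sketch assumes the SDK's fetch_all call and the tables/addressing/arp path, so double-check both against the API documentation:

# The SDK equivalent of the table_info task above (a sketch)
from ipfabric import IPFClient

ipf = IPFClient(base_url='https://<url>', token='<token>')
arp_entries = ipf.fetch_all(
    'tables/addressing/arp',
    filters={'mac': ['like', '5254.00d3.45c5']},
    columns=['hostname', 'intName', 'ip', 'mac'],
)
print(arp_entries)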
The final example shows how to use the table_info module to return information from IP Fabric that has failed an Intent Verification Rule.
- name: Filter noncompliant NTP configurations
  community_fabric.ansible.table_info:
    # snapshot_id:
    technology: management
    table: ntp_summary
    filter: {sources: ["color", "eq", "30"]}
    report: /technology/management/ntp/summary
  register: ntp_summary
Using the filter and report parameters together returns table rows evaluated against a specific Intent Verification Rule. This can be very useful for Ansible users, as we can use this information to auto-remediate any configuration discrepancies (as I will demonstrate shortly).
Ansible lookup plugins allow Ansible to access data from external sources, such as data stored in a file, a database, or a web service. These plugins are called during task execution to retrieve data that can be used to dynamically construct tasks, such as generating a list of hosts to target for a specific operation. Lookup plugins can be used in conjunction with other Ansible modules to retrieve and manipulate data as part of a playbook.
table_info
The table_info lookup plugin performs the same query as the table_info module shown above, but lets you run it in other areas of a playbook, such as in a template or a string within a module. The following code runs the same query as the last example, inside the debug module. See the documentation for more information.
- name: Check non-compliant devices
  debug:
    msg: "Number of non-compliant devices: {{ lookup('community_fabric.ansible.table_info', 'management', 'ntp_summary', filter={'sources': ['color', 'eq', '30']}, report='/technology/management/ntp/summary', base_url=provider.base_url, token=provider.token, verify=False, snapshot_id=new_snapshot.data.id) | length }}"
  delegate_to: localhost
  run_once: true
## output example
localhost: Number of non-compliant devices: 24
Now that we have had a brief overview of all the components, plugins, and modules available in the Ansible Collection, let's see how we can use them in practice.
---
- hosts: all
  gather_facts: False
  tasks:
    - name: Filter and select columns on technology table
      community_fabric.ansible.table_info:
        provider: "{{ provider }}"
        # snapshot_id: 07b338d0-4cc1-48e9-a99d-12ce100b0bb8
        technology: management
        table: ntp_summary
        filter: {sources: ["color", "eq", "30"]}
        report: /technology/management/ntp/summary
      delegate_to: localhost
      run_once: true
      register: NTP_DATA

    - debug:
        msg: "Number of non-compliant devices: {{ NTP_DATA.data | length }}"
      delegate_to: localhost
      run_once: true

    - name: Configure Junos NTP
      junipernetworks.junos.junos_ntp_global:
        config:
          servers: "{{ ntp.servers }}"
        state: overridden
      when: ansible_network_os == 'junos'

    - name: Configure EOS NTP
      arista.eos.eos_ntp_global:
        config:
          servers:
            - server: "{{ ntp_server }}"
        state: overridden
      when: (ansible_network_os == 'eos') and (item.hostname == hostvars[inventory_hostname]['hostname'])
      loop: "{{ NTP_DATA.data }}"

    - name: Configure IOS NTP
      cisco.ios.ios_ntp_global:
        config:
          servers:
            - server: "{{ ntp_server }}"
              vrf: MGMT
        state: overridden
      when: (ansible_network_os == 'ios') and (item.hostname == hostvars[inventory_hostname]['hostname'])
      loop: "{{ NTP_DATA.data }}"

    - name: Start Snapshot
      community_fabric.ansible.snapshot:
        provider: "{{ provider }}"
      delegate_to: localhost
      run_once: true
      register: new_snapshot

    - name: check snapshot
      community_fabric.ansible.snapshot_info:
        provider: "{{ provider }}"
        snapshot_id: "{{ new_snapshot.data.id }}"
      register: result
      until: result.data.status == 'done'
      retries: 20
      delay: 30
      delegate_to: localhost
      run_once: true

    - name: Check non-compliant devices
      debug:
        msg: "Number of non-compliant devices: {{ lookup('community_fabric.ansible.table_info', 'management', 'ntp_summary', filter={'sources': ['color', 'eq', '30']}, report='/technology/management/ntp/summary', base_url=provider.base_url, token=provider.token, verify=False, snapshot_id=new_snapshot.data.id) | length }}"
      delegate_to: localhost
      run_once: true
The code above is very primitive, but it allows for the auto-remediation of NTP via IP Fabric and Ansible. First, the playbook uses the dynamic inventory when running against all hosts. The first task, which we have seen before, collects the devices with NTP incorrectly configured and saves the result to the NTP_DATA variable. The second task is a debug that shows the user the number of non-compliant devices.
The configure tasks for junos, ios, and eos loop over the data in the NTP response and only configure the correct NTP server if the conditionals are met. These conditionals first check that the current host is a junos, ios, or eos device, and second that the hostname of the current device matches a hostname in the NTP output from IP Fabric; if both are true, the device is configured with the correct NTP server. Once the configuration is complete, Ansible starts a new snapshot of the network and waits for it to complete before we check the non-compliant devices again using the lookup plugin.
The command to execute this playbook would look like the following:
ansible-playbook -i ipf_inventory.yml pb.4.fix-ntp.yml
Observability - the ability to measure the internal state of a system using its outputs - has long been a goal for application and DevOps teams. It's a necessary pursuit to control any complex system. You can't truly know what you can't observe.
At present, this concern is rapidly spreading to enterprise network teams. There's an uptick in interest in network observability more specifically, which involves a lot more than the logs, metrics, and traces generally considered the pillars of observability.
This seems a natural trajectory for modern enterprises in 2023, whose networks are sprawling ecosystems. These networks of networks, seemingly with a life of their own as they dynamically change, make knowing your network from one day to the next all the more difficult.
The more complex your network, the more critical it is that you have tools and practices in place to understand its behavior. Without them, unknown misconfigurations or unintended post-change consequences may have a disastrous impact on the network. With benefits ranging from improved network security to lower MTTR and more proactive troubleshooting, it's clear that an effective network observability practice is becoming imperative for enterprises.
An observability practice ensures that network operators have clear insight into network health and behavior, and understand how the current, actual state of the network affects the end user. This understanding of network behavior means that teams can take active measures against unwanted effects of change in the network. It must span all environments and vendors, and bring information from a multi-domain network together in a consumable manner.
To achieve this, network data delivered via an observability practice must be 1) contextualized, 2) consumable, and 3) centralized.
The first instinct one may have when considering how to attain network observability is to rely on traditional monitoring tools available on the market - they're designed to tell you exactly what's happening in your network, right? Well, while these are surely vital for real-time alerting of issues on the network, it's becoming obvious that typical network monitoring is insufficient for true, holistic network observability, and may actually hinder network operations in some regard. What's holding monitoring tools back from servicing enterprise observability needs?
According to an Enterprise Management Associates survey of over 400 enterprise stakeholders, only 47% of alerts from monitoring tools are actionable, or represent an actual problem in the network. However, network teams still have to take the time to investigate the other 53% - a massive waste of resources and contributor to alert fatigue. Customizing monitoring tools to avoid this noise requires an investment of time - more overhead.
It's quite usual for modern enterprise networks to be multi-domain and multi-vendor, and this is only becoming more of the norm. If this complexity prevents a monitoring tool from monitoring parts of your network (e.g. cloud instances) then you're in the dark about vital parts of your network that could have an effect on the network as a whole. Complexity should not mean sacrificing visibility.
Once again, the overhead necessary to sift through, interpret, and analyze network data produced by traditional monitoring platforms may prove more of a burden than an asset to network teams looking to strengthen their observability practice. If the data produced is normalized, and visualized in a manner that complements the goals of network engineers, the value immediately skyrockets.
Monitoring tools are generally designed to flag that something is wrong, but rarely give the context of the issue upfront, i.e. How is this problem affecting the rest of your network as a whole? Where can I start looking for the source of this issue? This means more time spent on every alert investigated.
This might be the way it's always been, but we know that better is possible. More than possible, better is necessary for enterprise network teams to be effective. Whether it be an evolution of monitoring tools or a combination of tooling that achieves full-stack observability, the point is that network operators need solutions that eliminate these inefficiencies and blind spots to properly manage dynamic networks.
Stay tuned for our next exploration of network observability here on the IP Fabric blog, where we'll look at different options of actual tools that will help set up enterprises for observability success.
Follow us on LinkedIn, and on our blog, where we regularly publish new content, such as our recent Community Fabric podcast on understanding the buzz around network observability:
The world of network automation is far from immune to buzzwords designed to lure you in with promises of the perfect solution to your network woes.
Some are very appropriate metaphors or powerful terms that effectively crystallize the offerings or services that would seem otherwise abstract. Others are all frosting and no cupcake. Often, these dazzling terms are presented as the function of an offered tool or service, rather than what they are – a goal, or ideal, that tools can help you get closer toward.
Let’s dig into the meaning behind the marketing – what are industry voices saying, and does it line up with what they deliver? Is it a buzzword, or is it brilliant – or both? Here’s our take.
Data democratization refers to making network data, or information about your network, freely accessible to anyone in an organization who might need it beyond just the team working directly with the network (e.g., security teams, cloud teams, C-suite). The benefits include reduction of bottlenecks in workflows through self-service processes, enabling asynchronous work, harmony across teams, and reduced MTTR.
We use this term proudly at IP Fabric – not only because alliteration is irresistible, but because it succinctly sums up the above-mentioned benefits without overstating what the concept refers to practically.
The ambition of a digital twin – that is, an exact virtual replica of your network you can use to simulate and test changes – is sound, in that having a true digital twin would be extremely useful. Real-time updating of this network representation to reflect your actual network state should mean it’s always accurate and behaving as your real-life network does. However, we know that reality is not so.
The issue here is not with the concept – if you can find a true digital twin, sign us up - but the term is often confidently applied to products and platforms that are not a digital twin at all. Generously, some may be a digital cousin, in the sense they share some DNA with your network but fundamentally, they won’t behave exactly the same under the same conditions (which is the whole point).
That’s what the term implies, right? You would expect a digital twin of your network to be a precise DNA match.
It's not really a digital twin if it doesn't behave exactly as your network does under the same conditions.
Claiming that a digital twin provides end-to-end security posture visibility creates a risk for network operators who believe that they are working with a true digital twin. They may be making decisions based on ultimately useless simulations that can lead to unintended consequences in the actual network.
A so-called digital twin is great when used as a tool for guidance – remember that real-world conditions can always introduce variables that your digital twin can’t account for. Nothing substitutes the insights of the actual network engineer, whose job can be made a lot easier with a digital twin. Their knowledge and observations are hugely augmented by tooling, but not always replaced.
Our network model achieves similar goals – test changes, see how data flows through your network – but is proudly of its own DNA; its point-in-time representation of your network normalizes output from different vendors to supply a flexible, sharable understanding of your network behavior. Assurance then ensures that you know exactly what your actual network state is.
This refers to decisions about - and changes to - the network being led by intent, or a defined set of business objectives that represent how you desire your network to operate.
By starting with intent, usually stored in a Source of Truth repository like the open-source NetBox, and having every network operation serve to align with that intent in an automated fashion, you come ever closer to having your actual network state match your dream network state.
Intent-based networking is largely attractive to enterprises because it can help manage the complexity inherent in a modern, dynamic network.
Anyone in the network automation space has likely seen this term a thousand times over – maybe the first few times, it elicited a vision of utopia – everything you could ever need to operate your network visible in one place. The ultimate consolidation of important information.
After the 999th time, however, it’s clear with so many products and platforms claiming to offer this, they can’t all be the single pane of glass that your organization needs.
If they were, you would only ever have one place to go to do anything in the network infrastructure.
Additionally, it would have to serve a myriad of different lenses that operators approach the network with – cloud teams, security teams, and leadership. The single pane of glass is an ideal to strive for, not a silver bullet that you can buy.
It tips into buzzword territory when platforms claim that they are "it", rather than showing how they can accelerate you toward this goal.
Tools that gather data from disparate places and present it in a single, consumable form, in an accessible manner, help move the needle toward a unified network view for all teams; as mentioned, it’s a useful metaphor for a goal to work toward.
A Single Source of Truth – one that all teams can trust – is touted as a key element of network automation projects, especially so for intent-based networking.
By nature, IBN requires that you express a single, consistent intent against which you build, test, and validate your network state.
Your source of truth is the ultimate repository of your network desires that are determined by clear business goals, which your actual network state should be continuously validated against.
It becomes a buzzword when any one tool claims to contain the entirety of your intent. Expecting enterprises with dynamically growing networks to contain their entire intent in a single system is unrealistic. That said, there are tools that consolidate the information from these systems – your sources of truth – to make them useful, consistent, and updated for network automation projects.
IP Fabric does the same for your actual network state so that you can validate it against your source of truth.
You likely have, as mentioned above, many sources of truth containing information about elements of your network.
A single source of truth can be a helpful data cleansing element to consolidate these repositories managed by different teams and smooth out duplicates, inconsistencies, and interdependencies to ensure that your “sources of truth” are as accurate and valid as possible.
WATCH: For the Journey 2: From Design to Source of Truth with Network to Code & BlueCat
In an industry that is constantly innovating, buzzwords ebb and flow in the zeitgeist. We’re certainly not above overusing some of our favorites to describe our offerings concisely and concretely when applicable – as discussed, buzzwords can be brilliant, if used honestly.
However, overuse can muddy the waters with regard to how these terms are applied. Before your eyes light up at the next promise to solve all your problems, tactically assess – is it a buzzword or brilliant (or both)?
Typically, an organization's network isn't a single thing. It's a collection, a network of networks if you will, which work together to deliver the connectivity from user to app, from sensor to data repository, which underpins application service for an organization.
There are networks of different types, using different technologies, connecting different domains, using multiple vendors; each must be interconnected and interoperable in order to deliver the packets which carry application data from application workload to user. The number and depth of these interactions bring complexity to the network of networks and with it being dynamic and alive, this complexity grows daily.
The biggest challenge that modern network teams face is managing that complexity, along with the scale that adoption of connected applications has brought to the modern IT landscape. And as network engineers, not only are we constantly reminded that the best way to cope is to automate, but we recognize the necessity.
The idea is to maintain a centralized management point for the network which can provision service and deploy change using as few touchpoints as possible - typically by introducing scripting, automation frameworks, or controller-based management.
These approaches all have pros and cons, of course, but they are typically very focused on delivering an outcome for a specific task, for a specific vendor's equipment, or in a specific network domain. As such, testing the success of automation tends to be focused and task-based too. And while this has a certain level of value in ensuring that tasks themselves aren't broken, it's hard to verify that the impact of a change on the network isn't farther-reaching, or that further change is required to enable the capability we're trying to introduce.
Consider the case where you create a new subnet in your private Cloud instance – it's easy to verify through your Cloud provider's API that the subnet has been created. But does that mean it is available and usable? Not necessarily – we might need to make sure it is advertised into our on-prem network, redistributed over our SD-WAN into our campus, and that policy is updated to allow traffic to pass to it.
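A path simulation can close exactly this gap. As a minimal sketch using IP Fabric's python-ipfabric-diagrams SDK (the addresses, ports, and credentials here are illustrative, not prescriptive):

# Verify the new subnet is reachable end-to-end, security policy included
from ipfabric_diagrams import IPFDiagram, Unicast

diagram = IPFDiagram(base_url='https://<url>', token='<token>')
lookup = Unicast(
    startingPoint='10.10.10.0/24',    # campus users
    destinationPoint='10.99.0.0/24',  # the new cloud subnet
    protocol='tcp',
    dstPorts='443',
    securedPath=True,  # stop the path where security policy would drop it
)
graph = diagram.diagram_json(lookup)  # inspect the result to confirm the path completes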
Network Assurance has the goal of validating that the network is operating the way you intend it to and enabling corrective action when your dynamically changing network drifts too far from your intended state. Importantly, the scope for network assurance is the whole network end-to-end, not limited to a specific vendor or domain.
By using IP Fabric's automated network assurance platform, it's possible to validate inventory, configuration, and state across the entire network - every vendor and every domain - against the behavior you intend.
IP Fabric uses snapshots of this model to build up a picture of changes across the network over time. Those snapshots can be of the complete network, scheduled regularly, or they can be ad hoc or partial views, depending on the desired effect (particularly useful before and after change implementation).
And this is the key. When changes are made in the network, it is not likely to be enough to simply test that the desired configuration has been pushed to the device. The impact of that change is likely to be felt further afield and so it is necessary to look more holistically at the outcome, as looking at the change in isolation can be misleading. Is a successful config push successful if it’s impacted your network elsewhere, and therefore your end-to-end service?
You can examine the state of the affected device and that may help but in reality, the best outcome is to validate that once tasks are completed, the overall change has had the desired impact on end-to-end service. And naturally, the only way to accurately verify that end-to-end behavior will be as expected is to not limit the scope but test against a model of the whole network.
And as IP Fabric's API allows snapshot creation and refresh, along with querying of those tests, it is the perfect tool to incorporate into an automated workflow to carry out that big picture validation.
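As a rough sketch of that workflow with the python-ipfabric SDK (the snapshot-start endpoint and the polling here are simplified assumptions; the Ansible snapshot module packages the same steps):

# Post-change validation: refresh network state, then re-check intent
import time
from ipfabric import IPFClient

ipf = IPFClient(base_url='https://<url>', token='<token>')
ipf.post('snapshots')  # start a fresh discovery (assumed endpoint; poll snapshot status rather than sleeping in practice)
time.sleep(600)

ipf.intent.load_intent()  # intent results for the latest loaded snapshot
rule = ipf.intent.intent_by_name['NTP Reachable Sources']
print(f"Re-checking '{rule.name}' against the refreshed snapshot: {rule.intent_id}")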
Recently, the IP Fabric team was in Las Vegas, where we shared the stage with Itential at Tech Field Day Extra at Cisco Live 2022. We showcased what it means to integrate network assurance into real network automation processes, and how that turns Network Automation from a point solution for a small problem into a key component of the complete Self-Driving Network.
Watch the Tech Field Day video below to see exactly how smart integrations can accelerate your network automation:
WATCH: Scaling Network Automation (with Itential)
WATCH: Closing the Loop with Network Assurance (with IP Fabric)
WATCH: Integrated Network Automation and Assurance Demo with Itential & IP Fabric
The IP Fabric team had a whirlwind week in Las Vegas getting together for one of the biggest events of the year – Cisco Live!
We were buzzing at the thought of getting back to face-to-face conversations about how network assurance can revolutionize network operations, accelerate automation efforts, and make you rethink your network tooling ecosystem. With this in mind, we planned a schedule aimed at educating, sharing, and connecting with organizations handling increasingly complex and dynamic networks.
This came to fruition just as we anticipated, and our booth was a hive of activity; from presentations to demos of the product tailored around specific use-cases, we were spoiled with opportunities to showcase IP Fabric’s powerful network assurance platform.
Some presentations from the team included Network Assurance 101, with Daren Fulwell covering the basics of this relatively newer concept that plays a pivotal role in any enterprise network.
Joe Kershaw, Global Channel Sales Lead, joined the party to unpack Network Automation Strategies & Network Assurance, centering our role in helping you automate confidently.
Next up, Justin Jeffery took the stage with his talk, Network as a Database, explaining all the ways you can use network data – automatically gathered and visualized with IP Fabric - to elevate your operations.
Pete Crocker led a relatable talk, which elicited knowing laughs as soon as the first slide entitled “Why has my app team stopped calling?” popped up. Pete outlined some day-to-day frustrations of overworked network engineers that are easily solved with IP Fabric.
An abundance of special guests joined us too! Luke Richardson, from IP Fabric customer WeWork, was present to add his thoughts to a talk at the Content Corner entitled Making Network Automation Relevant. Daren and Luke drew quite the crowd as they explained why automation is valuable for everyone in an organization, not just the network engineer.
Christian Adell from Network to Code gave a practical demonstration of how to synchronize your Source of Truth using IP Fabric and Nautobot ChatOps, and later joined Justin for a chat about Contributing to the Open Source Community.
Karan Munalingal was at the IP Fabric booth just a skip away from his own Itential booth, showing our audience a recently announced integration – Itential & IP Fabric: Automation & Assurance.
Finally, we were thrilled to participate in Tech Field Day Extra – this formed the perfect stage to demo our integration with Itential to the TFDx delegates away from the loud and lively World of Solutions.
Watch it here: TFDx at Cisco Live
In the collaborative whiteboard session led by Itential's Chris Wade and Karan Munalingal and IP Fabric's Daren Fulwell, we saw how each platform augments network operations solo, and then how Itential and IP Fabric fit together within your tooling ecosystem to form a powerful network automation engine with assurance at the core, propelling the network engineer along the Road to the Self-Driving Network.
IP Fabric left Las Vegas exhausted, but after some recovery, we’re absolutely energized and inspired by the connections we made at Cisco Live! If you met us at the event and want to learn more, or if you weren't there and want to learn about what we can do for your network, request a demo with our team!
In Part 3, we discussed using the python-ipfabric SDK to interact with IP Fabric's API - did you know that you can also create diagrams using the API? Today we will be using the python-ipfabric-diagrams SDK. Since diagramming is a coding-heavy topic, I am also working on a webinar to show live examples and more advanced features, such as turning layers off and ungrouping links.
Find a coding example on GitLab at 2022-05-20-api-programmability-part-4-diagramming.
There are four options for returning data in the IPFDiagram class. Each of these methods has five input parameters, and only the first one is required.
This is the most basic diagram as it takes a single IP address. The imports will differ depending on the type of graph.
# 1_host2gateway.py
from ipfabric_diagrams import IPFDiagram, Host2GW

if __name__ == '__main__':
    diagram = IPFDiagram(base_url='https://demo3.ipfabric.io/', token='token', verify=False)
    with open('1_host2gateway.png', 'wb') as f:
        f.write(diagram.diagram_png(Host2GW(startingPoint='10.241.1.108')))
The Network class accepts 3 input parameters. If no parameters are defined, this will create a graph similar to going to the UI and Diagrams > Network.
The sites parameter accepts a single site name or a list of site names; the all_network and layouts parameters are shown in the example below.

# 2_network.py
from ipfabric_diagrams import IPFDiagram, Network, Layout

if __name__ == '__main__':
    diagram = IPFDiagram(base_url='https://demo3.ipfabric.io/', token='token', verify=False)
    with open('2_1_network.png', 'wb') as f:
        f.write(diagram.diagram_png(Network(sites='MPLS', all_network=True)))
    with open('2_2_network.png', 'wb') as f:
        f.write(diagram.diagram_png(Network(sites=['LAB01', 'HWLAB'], all_network=False)))
    with open('2_3_network.png', 'wb') as f:
        f.write(diagram.diagram_png(
            Network(sites='L71', all_network=False, layouts=[Layout(path='L71', layout='upwardTree')])
        ))
Before moving on to Unicast and Multicast let's take a look at how to overlay a snapshot comparison or specific intent rule onto your graph. You can apply this to any type of graph.
# 3_network_overlay.py
from ipfabric_diagrams import IPFDiagram, Network, Overlay

if __name__ == '__main__':
    diagram = IPFDiagram(base_url='https://demo3.ipfabric.io/', token='token', verify=False)
    with open('3_1_network_snap_overlay.png', 'wb') as f:
        f.write(diagram.diagram_png(Network(sites='MPLS', all_network=False),
                                    overlay=Overlay(snapshotToCompare='$prev')))
To overlay an Intent Rule you must first get the ID of the rule to submit. In this example, we are using the ipfabric package to load the intents and get a rule by name. Find more examples of extracting intent rule IDs here.
# 3_network_overlay.py
from ipfabric import IPFClient
from ipfabric_diagrams import IPFDiagram, Network, Overlay

if __name__ == '__main__':
    diagram = IPFDiagram(base_url='https://demo3.ipfabric.io/', token='token', verify=False)
    # Get intent rule ID
    ipf = IPFClient(base_url='https://demo3.ipfabric.io/', token='token', verify=False)
    ipf.intent.load_intent()
    intent_rule_id = ipf.intent.intent_by_name['NTP Reachable Sources'].intent_id
    with open('3_2_network_intent_overlay.png', 'wb') as f:
        f.write(diagram.diagram_png(Network(sites=['L71'], all_network=False),
                                    overlay=Overlay(intentRuleId=intent_rule_id)))
The next two examples make it a bit clearer why we first create a Python object and then pass it into the diagramming functions. The number of required options can be quite lengthy, and this approach keeps your code cleaner and provides great type hints (see below). Additionally, it has many built-in checks to ensure you provide the correct data before the payload is submitted to IP Fabric, returning an error locally if you don't.
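Those checks come from the underlying pydantic models (the same library that produces the JSON schema shown later), so malformed input fails fast and locally. A quick illustration, assuming the protocol field is validated as in the examples that follow:

# Invalid input is rejected before any API call is made
from pydantic import ValidationError
from ipfabric_diagrams import Unicast

try:
    Unicast(startingPoint='10.47.117.112', destinationPoint='10.66.123.117', protocol='bogus')
except ValidationError as err:
    print(err)  # the error names the offending field instead of a failed request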
For all valid ICMP types please refer to icmp.py.
# 5_unicast_path_lookup.py
from ipfabric_diagrams import IPFDiagram, Unicast
from ipfabric_diagrams import icmp

if __name__ == '__main__':
    diagram = IPFDiagram(base_url='https://demo3.ipfabric.io/', token='token', verify=False)
    unicast_icmp = Unicast(
        startingPoint='10.47.117.112',
        destinationPoint='10.66.123.117',
        protocol='icmp',
        icmp=icmp.ECHO_REQUEST,  # Dict is also valid: {'type': 0, 'code': 0}
        ttl=64,
        securedPath=False  # UI Option 'Security Rules'; True == 'Drop'; False == 'Continue'
    )
    with open('5_1_unicast_icmp.png', 'wb') as f:
        f.write(diagram.diagram_png(unicast_icmp))
TCP and UDP accept srcPorts and dstPorts, which can be a single port number, a comma-separated list, a range of ports separated by a -, or any combination of these. The applications, srcRegions, and dstRegions arguments are used for Zone Firewall rule checks, and these default to any (.*).
# 5_unicast_path_lookup.py
from os import path

from ipfabric_diagrams import IPFDiagram, Unicast, OtherOptions

if __name__ == '__main__':
    diagram = IPFDiagram(base_url='https://demo3.ipfabric.io/', token='token', verify=False)
    unicast_tcp = Unicast(
        startingPoint='10.47.117.112',
        destinationPoint='10.66.123.117',
        protocol='tcp',
        srcPorts='1024,2048-4096',
        dstPorts='80,443',
        otherOptions=OtherOptions(applications='(web|http|https)', tracked=False),
        srcRegions='US',
        dstRegions='CZ',
        ttl=64,
        securedPath=False
    )
    with open(path.join('path_lookup', '5_2_unicast_tcp.png'), 'wb') as f:
        f.write(diagram.diagram_png(unicast_tcp))
    with open(path.join('path_lookup', '5_3_unicast_tcp_swap_src_dst.png'), 'wb') as f:
        f.write(diagram.diagram_png(unicast_tcp, unicast_swap_src_dst=True))

    # Subnet Example
    unicast_subnet = Unicast(
        startingPoint='10.38.115.0/24',
        destinationPoint='10.66.126.0/24',
        protocol='tcp',
        srcPorts='1025',
        dstPorts='22',
        securedPath=False
    )
    with open(path.join('path_lookup', '5_4_unicast_subnet.png'), 'wb') as f:
        f.write(diagram.diagram_png(unicast_subnet))
This is a new graphing feature in version 4.3 and above that allows you to specify the device and interface where a packet enters your network. Perhaps you have a firewall rule to allow a certain IP address or subnet and want to verify that it is functioning correctly. The sn value is the IP Fabric unique serial number, iface is the intName or Interface column (not to be confused with Original Name), and the hostname is also required. The easiest way to collect this information is from the Inventory > Interfaces table. The sn is not a visible column in the UI, but it is available from the API.
# Example pulling Interface Inventory table
from ipfabric import IPFClient

ipf = IPFClient(base_url='https://demo3.ipfabric.io/', token='token', verify=False)
interfaces = ipf.inventory.interfaces.all(columns=['sn', 'hostname', 'intName'])
# 5_unicast_path_lookup.py
from ipfabric_diagrams import IPFDiagram, Unicast, Algorithm, EntryPoint

if __name__ == '__main__':
    diagram = IPFDiagram(base_url='https://demo3.ipfabric.io/', token='token', verify=False)
    # User Defined Entry Point Example
    unicast_entry_point = Unicast(
        startingPoint='1.0.0.1',
        destinationPoint='10.66.126.0/24',
        protocol='tcp',
        srcPorts='1025',
        dstPorts='22',
        securedPath=True,
        firstHopAlgorithm=Algorithm(entryPoints=[
            EntryPoint(sn='test', iface='eth0', hostname='test'),
            dict(sn='test', iface='eth0', hostname='test')  # You can also use a dictionary
        ])
    )
    with open('5_5_unicast_entry_point.png', 'wb') as f:  # render like the earlier examples (filename illustrative)
        f.write(diagram.diagram_png(unicast_entry_point))
Multicast is very similar to Unicast, except some of the parameter names have changed. You can also specify a receiver IP address, but this is optional.
# 7_multicast.py
from ipfabric_diagrams import IPFDiagram, Multicast

if __name__ == '__main__':
    diagram = IPFDiagram(base_url='https://demo3.ipfabric.io/', token='token', verify=False)
    multicast = Multicast(
        source='10.33.230.2',
        group='233.1.1.1',
        receiver='10.33.244.200',  # Optional
        protocol='tcp',
        srcPorts='1024,2048-4096',
        dstPorts='80,443',
    )
    with open('7_multicast.png', 'wb') as f:
        f.write(diagram.diagram_png(multicast))
One of the great advantages of using this package is that it returns a Python object instead of the raw JSON. This makes it easier to understand the complex textual data returned by IP Fabric, which represents how the edges (links) connect to the nodes (devices, clouds, etc.) and the decisions a packet may take. You can accomplish the same via the JSON output, but returning an object provides type hints along with the ability to export the model as a JSON schema. Please note that the model is not exactly the same as the JSON output; some structure has been changed for ease of use. It also dynamically links some internal objects to eliminate the need for extra lookups and references.
# 6_json_vs_model.py
from ipfabric_diagrams.output_models.graph_result import GraphResult

if __name__ == '__main__':
    print(GraphResult.schema_json(indent=2))

"""
{
  "title": "GraphResult",
  "type": "object",
  "properties": {
    "nodes": {
      "title": "Nodes",
      "type": "object",
      "additionalProperties": {
        "$ref": "#/definitions/Node"
      }
    },
    "edges": {
      "title": "Edges",
      "type": "object",
      "additionalProperties": {
        "anyOf": [
          {
            "$ref": "#/definitions/NetworkEdge"
          },
          {
            "$ref": "#/definitions/PathLookupEdge"
          }
        ]
      }
    },
    ...
"""
The ability to create diagrams using the API allows for greater automation and integration into other applications. Many of our customers use this feature to create chatbots to speed up troubleshooting, as shown below. This example is from the Network to Code nautobot-plugin-chatops-ipfabric plugin.
Another useful feature is performing a Path Lookup and parsing the JSON output to ensure traffic is flowing, as part of a review process. We have partnered with Itential to demonstrate how their low-code automation platform can automate ServiceNow requests and ensure that a newly deployed IP has access to the correct network services during a Change Control review. Keep an eye out for a video of this exciting demonstration!
If you have any questions, comments, or bug requests please send an email to [email protected] or open an issue request on the GitLab repository.
Update 02/03/2023: Keep your eyes peeled for our formal NetBox integration coming soon! We're always creating resources to save you time and effort in getting the most from IP Fabric. We'd recommend reading on to understand the power of NetBox and IP Fabric working in tandem. If you're planning on implementing this in your network, however, reach out to us first so we can let you know when to expect a faster method.
NetBox is an infrastructure resource modeling (IRM) application designed to empower network automation. Initially conceived by the network engineering team at DigitalOcean, NetBox was developed specifically to address the needs of network and infrastructure engineers.
https://docs.netbox.dev/en/stable/
Notably, NetBox is an open-source tool: everyone has free access to the code, and many simple deployment options are offered. For my testing purposes, I decided to use its Docker image. It took me about 5 minutes to deploy and get NetBox ready.
IP Fabric is an Automated Network Assurance Platform that helps enterprises empower their network and security teams to discover, model, verify, and visualize large-scale networks within minutes. Its main goal for any network infrastructure is to regularly capture the current network state. It provides another layer of abstraction for its users to access the network state data - and it is vendor-agnostic!
In plain words, IP Fabric will provide the model for you! And one of its biggest advantages is its standardized and well-documented API. Imagine capturing inventory, routing tables, or security policies from all discovered firewalls in a single request. That's how useful its API can be.
Understanding IP Fabric's API is fairly simple. Every provided data set (inventory, routing information, multicast data, security policies, part numbers, management protocols, and many many more) has its own dynamic documentation that provides everything necessary to build your API request, including full payload information, a description of all its properties, and more - all you need to start your automation journey.
Let's divert the flow a bit and think about the importance of a reliable data model. When we talk about the data model, we mean the dataset that represents the network state.
The need for an accurate data model increases with complexity. Let's consider managing 100 network devices, with a couple of minor changes per month. We keep all the states (routing, switching, policies, ..) in our head, and from time to time we update the spreadsheet and our Visio documentation. Great!
Then what about 1000 network devices with hundreds of minor changes a month managed by a larger team? The previous concept doesn't scale and when an engineer leaves, the knowledge follows. That's why the standardized data model (or it may be called the Source of Truth) should be part of every network team, including operations, development, or architecture.
Now we agree that having an accurate network model is a must to navigate complexity. Then we have two main options to manage it.
1 - Standardize and strictly follow processes around any network change, and make sure that everyone updates the CMDB after every change. It can work and it may even scale, but we are still assigning routine work to humans - and we are not good at it!
2 - Automate data management. We know that NetBox is an infrastructure resource modeling (IRM) application, but it doesn't have a discovery mechanism of its own; we still need to provide the data. In the following part, we will use IP Fabric's discovery mechanism and its API to read the network data and move it over to NetBox with a sample script.
IP Fabric’s lightning-quick intelligent network discovery process empowers you with deep insight into the workings of your network. Baseline every device and path, configuration, and security policy automatically, equipping you and your team with the knowledge and insight to support, maintain, and develop the most complex of networks.
https://ipfabric.io/solution/network-visibility-and-assurance/
As we mentioned before, with IP Fabric you can fully automate the discovery process and have all essential network state data on a silver plate. There are about 2000+ parameters for a single device we can get including the inventory data, IP addresses, VRFs, VLANs, interfaces (standardized/original), routing data, policies, and many more. IP Fabric maintains regular network state updates on its own.
So if there's one API endpoint in IP Fabric to get the device inventory data, and a second API endpoint in NetBox to accept data for its inventory, then it's just two simple requests and the job is done, correct? Technically yes; practically, it's not that simple - we need to add transformation logic.
The schema below depicts the Network infrastructure on the left. IP Fabric captures the data from the existing network over SSH or API (AWS, Azure, NSX, ..) and provides the data over API. Then:
Reading IP Fabric's data is simple, but to create inventory in NetBox, we first need to think about the data model differences, and second, create a proper mapping structure in our code: IP Fabric's properties must be mapped correctly to produce the desired results in NetBox. There are plenty of issues to run into during the implementation phase - mismatched naming, missing objects, and differing field types among them - and there can be more, which is the main reason to spend time on planning before execution.
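For a flavor of what that transformation step can look like, here is a hedged sketch using python-ipfabric and pynetbox (the columns, the slug convention, and the create() fields are assumptions - adapt them to your NetBox version and data model):

# Sync IP Fabric sites into NetBox before creating devices (a sketch)
import pynetbox
from ipfabric import IPFClient

ipf = IPFClient(base_url='https://<ipfabric-url>', token='<ipf-token>')
nb = pynetbox.api('https://<netbox-url>', token='<netbox-token>')

for dev in ipf.inventory.devices.all(columns=['hostname', 'siteName', 'vendor', 'model', 'sn']):
    # Map IP Fabric's siteName to a NetBox site, creating it if missing
    site = nb.dcim.sites.get(name=dev['siteName'])
    if site is None:
        site = nb.dcim.sites.create(name=dev['siteName'], slug=dev['siteName'].lower())
    # Creating the device itself additionally needs device_type and role objects
    # resolved from vendor/model - exactly the mapping work discussed above.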
For everyone who is looking for an easy way to start, I have created a sample code to update NetBox with IP Fabric's inventory data. You can find and freely use the repository on GitHub. It consists of a handful of functions to add/remove standard inventory data or to populate the NetBox inventory completely; the prerequisites are listed in the repository.
Good luck and enjoy!
If you have found this article helpful, please follow our company’s LinkedIn or Blog, where more content will be emerging. If you would like to test our solution to see for yourself how IP Fabric can help you manage your network more effectively, please schedule a demo with our team: Request a Demo.
In API Programmability - Part 2 we showed you the basics of how to use IP Fabric’s Python SDK. In this post, we will create a web server using FastAPI to receive webhooks from IP Fabric. After certain events occur, we will use python-ipfabric to extract and manipulate data to enhance your automation efforts.
Find today's code example on GitLab at 2022-05-06-api-programmability-part-3-webhooks.
Today we will be importing IP Fabric data into a PostgreSQL database after a Snapshot Discovery is completed. This is beneficial for visualizing trends in important information such as the number of devices, End of Life migrations, or Intent Verifications. Due to the IP Fabric limit of five loaded snapshots, it is very difficult to see last week's KPIs, let alone those from 6 months ago. Long historical analysis can be accomplished by extracting a subset of the data using the API, transforming it into a Python data model, and loading it into a PostgreSQL database. Connecting this database to a visualization tool like Grafana or Tableau will allow your teams to create interactive dashboards.
This example takes all the Intent Rules in an Intent Group and adds severities together to summarize the entire group.
It is also possible to graph the individual intent rules for further analysis.
Here are some basic requirements for working on this project. Please note that this is a base example for developmental purposes and extra caution should be taken into account prior to running this project in a production environment (enabling HTTPS, healthchecks, etc).
python3 -m pip install -U pip
poetry
This project uses Poetry to manage Python dependencies and allows you to merge multiple examples together. Today we will be focusing on the postgres example, which takes the snapshot data and inserts inventory and intent data into a PostgreSQL database for long-term historical trending. This requires access to either a local or remote PostgreSQL database, as well as the other requirements listed above.
The easiest way to download the project is to use git for cloning.
SSH: git clone [email protected]:ip-fabric/integrations/marketing-examples.git
HTTPS: git clone https://gitlab.com/ip-fabric/integrations/marketing-examples.git
Another option would be going to GitLab and downloading the zip file.
Installing the Python-specific requirements for this project is a simple poetry install in the directory containing the pyproject.toml file. Please take a look at the example-specific README files for dependencies outside Python (i.e. a PostgreSQL database).
To create a new Webhook navigate to Settings > Webhooks in IP Fabric and select Add Webhook:
Here you will create a name, enter your URL, and select the events to listen for. The postgres example requires both Snapshot and Intent verification events to load all the required data. Prior to saving, please copy the Secret key, as this will be used in the configuration. If you forget this step, it can be viewed after saving, unlike API tokens.
In your project, copy the sample.env file to .env and enter your environment's specific data:
IPF_SECRET is the secret key copied above; it validates that each message came from your IP Fabric instance and not another server (see the listener sketch below).
IPF_URL must be in this format: https://demo3.ipfabric.io/
IPF_TOKEN is created in Settings > API Token.
Set IPF_VERIFY to false if your IP Fabric certificate is not trusted.
Set IPF_TEST to true for initial testing and then change it to false.
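To show where IPF_SECRET fits, here is a minimal sketch of a listener that validates the webhook before processing it (the signature header name and HMAC scheme are assumptions; the GitLab project contains the real implementation):

# webhook_sketch.py - illustrative only
import hashlib
import hmac
import os

from fastapi import FastAPI, Header, HTTPException, Request

app = FastAPI()
SECRET = os.environ['IPF_SECRET']

@app.post('/ipfabric')
async def ipfabric_webhook(request: Request, x_ipf_signature: str = Header(None)):
    body = await request.body()
    expected = hmac.new(SECRET.encode(), body, hashlib.sha256).hexdigest()
    if not x_ipf_signature or not hmac.compare_digest(expected, x_ipf_signature):
        raise HTTPException(status_code=401, detail='Invalid signature')
    event = await request.json()
    # hand the event off to the snapshot/intent processing here
    return {'detail': 'accepted'}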
The webhook listener can be run through Poetry or Docker. It needs to communicate with the PostgreSQL database before starting the webserver, to ensure that the schema is installed and the tables are set up.
poetry run api
docker-compose up
C:\Code\part-3-webhooks\postgres>poetry run api
INFO: Started server process [12740]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
This output provides the information we need for the Webhook's Settings URL. The 0.0.0.0 signifies the server is listening on all IP addresses of your system (run ipconfig or ip addr to get the correct IP to replace it with). It is also configured to run on port 8000, so the URL I need to enter in IP Fabric will look like http://192.168.1.100:8000/ipfabric.
When the IPF_TEST variable is set to true, the server will process a test message as a normal webhook, verifying that it is working. Select the lightning bolt icon in the Webhooks settings and then choose which rule to send.
The postgres example will run the automation against the $last snapshot when a test webhook event is sent (make sure to test both Snapshot - discover and Intent verification - calculate to load all the data for that snapshot). When a test webhook runs, it creates a random snapshot ID that does not conflict with others in the system.
Once the test is successful, it is advisable to set IPF_TEST back to false and restart the server. If you try to run the test again, it will fail because the unique snapshot_id has already been inserted into the database to prevent duplicate entries.
This branch will also only process snapshot events that have been run through the scheduler (user: cron). If a user manually creates a new snapshot or updates an existing one, the webhook messages will be processed and ignored.
Using IP Fabric Webhooks will further your team on their Network Automation journey and provide the ability to integrate into any external system you can imagine. Today we focused on importing the data into an external database, but this can be extended to import into a Configuration Management Database (CMDB), Network Management System (NMS), or Monitoring Tools to ensure that these critical infrastructure components have full visibility of your network.
If you have found this article helpful, please follow our company’s LinkedIn or check out our other blog posts. If you would like to test our solution to see for yourself how IP Fabric can help you manage your network more effectively, please contact us through www.ipfabric.io.
The Grafana JSON models are located in the Grafana directory on GitHub. You will need to configure your Grafana instance to connect to your PostgreSQL database and find the generated UID for that connection. Then, in the JSON files, replace all instances of <REPLACE WITH UID OF YOUR CONNECTED POSTGRES DB> with the correct value. Finally, you should be able to import the new dashboards.
If you are interested in extracting more data than the example provides (perhaps the BGP Summary table), this can be accomplished by adding on to the existing Python code, as sketched below. If you need assistance with this or have an idea for a new integration, please open a GitLab issue.
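A starting point, assuming the BGP neighbors table lives at the path below (verify it, and the column names, against the API documentation):

# Extend the extract step with another table (a sketch)
from ipfabric import IPFClient

ipf = IPFClient(base_url='https://<url>', token='<token>')
bgp = ipf.fetch_all(
    'tables/routing/protocols/bgp/neighbors',
    columns=['hostname', 'neiAddress', 'state'],
)
# insert these rows into PostgreSQL alongside the existing inventory and intent data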
We're proud to share that IP Fabric earned top billing in Gartner's Cool Vendors in Network Automation list for 2022. Gartner members can access the report here.
While network automation garners a lot of interest from large enterprises with complex networks, its successful implementation lags behind. Fear of change, resource strain, and operational and organizational labyrinths to navigate are all factors that hold organizations back from fully embracing the inevitable.
That's where cool vendors come in. We're here to bridge that gap between understanding that network automation will bring agility, efficiency, and simplicity to your network, and actually making it happen by directly addressing the roadblocks in your way. Cool vendors aren't afraid of breaking new ground - we march forth, illuminating the road to innovation in your network.
We use automated network assurance to revolutionize how enterprises manage increasingly complex networks by bringing robust visibility and insight into network behavior.
We're not just giving back time and resources to our customers, but also changing how an entire industry approaches networking - taking it from reactive to proactive, and eventually, predictive - a self-driving network. On the road to this goal, we offer solutions to problems faced by network engineers in the spaces of network visibility, automation, security assurance, trouble resolution, and multi-cloud networking.
Our straightforward approach makes viewing inventory and state information, ensuring configuration compliance, navigating topology views, and analyzing end-to-end forwarding behavior staples in a network engineer's toolbox. Consolidating these functionalities in a single multivendor product that extends to the public cloud is a bit of magic we've spun up to help clear your path to innovation.
So, as the cool kid on the block, how have we mapped out this road to network automation adoption?
We've outlined the adoption of IP Fabric (and therefore, your confident approach to automating your network) in three clear phases:
Shine light into the dark corners of the multi-vendor network for a full picture of the inventory, configuration, topology, and state with automated discovery and documentation. Only when you have a trustworthy baseline can you truly know your network, which is an essential start to any automation project.
Rich data, path simulation, change validation, intent verification, data democratization - IP Fabric uses data from across your entire network to enhance your operational process. As your team experiences how intelligently sharing and leveraging data across your operational ecosystem elevates systems, workflows, and processes, the value becomes undeniable and resistance to change is quelled.
Start integrating with other technologies to put the data intelligence from IP Fabric to work, expanding the reach of your newfound operational bliss. For example, start asking your network questions and getting immediate answers (directly in Slack or Teams if you'd like), opening up a world of possibilities.
The insight IP Fabric provides empowers teams to think in cool ways about how to innovate your network while keeping the infrastructure secure and changes aligned with your intent.
Check out how IP Fabric & Nautobot ChatOps bring innovation to your network at Network Field Day 27.
To our delight, our friends over at Itential join us on the list of Cool Vendors. Together, IP Fabric and Itential work to design and deploy network automation workflows that are validated by measuring the end-to-end behavior of the resulting network, comparing it with its previous state and the desired outcome.
Another great example of how we enhance your network toolset.
If you want more insight into IP Fabric, or would like to see how it can revolutionize your network, get in touch with our team and request a demo.
Follow us over on LinkedIn for more updates.