Could Full-Stack Observability Be The Answer To Growing Enterprise Complexity?

Daniel Lakier and George Carter of Verinext join Joe Kershaw and Daren Fulwell to discuss full-stack observability! Is it the answer to growing network complexity? Tune in to find out their take!

Transcript

So thank you for joining us here on the podcast, the first Strategy Fabric podcast, where we take some of the recurrent themes from the Community Fabric and Tech Fabric podcasts and webinars, and get into much more technical product integration, use case, and data types of themes. We're bringing in some of our leading global partners, senior practitioners in the space who have seen multiple themes and multiple approaches, and who can bring an angle that looks much more at enterprise-wide tooling strategy and the strategic initiatives we're seeing come up within the enterprises where we're working together, in some cases on completely unique deployments. The idea is that we've brought our friends from Verinext to join us on the topic of observability, and whether observability already is, or is to become, the answer to growing enterprise complexity. I'm one of the hosts today, Joe Kershaw, global lead for sales and channels at IP Fabric. In the last 4 years or so working with IP Fabric's global partners and some of our strategic customers, we've been seeing a recent movement towards the topic of observability.

We've seen some customers move from very tactical use cases 2 to 3 years ago, through years of adoption and spreading consumption of the data from network assurance, to start teasing into monitoring as a bigger topic: application performance, infrastructure performance, and looking at holistic datasets and how you can derive actionable insights. And all of these topics are leading towards the buzzword, or whether or not it's a buzzword, of observability, which has obviously teased me into being the host for this session. Someone who's helped me throughout a lot of those deployments, and has helped grow out a bunch of our global partnerships, is Daren. Daren, do you want to take the next introduction? Sure.

Yeah. My name is Daren Fulwell. I'm a product evangelist for IP Fabric, which is still the best job title in the world. I get to work with customers and partners, and anyone who's interested really, to really understand the business value of deploying network assurance in their operations. And a lot of that has to do with these kinds of observability topics.

So we're keen to contribute today. George, can I hand over to you for an intro? Absolutely. Thank you, Daren. My name is George Carter.

I'm the director of automation services here at Verinext. From a business unit perspective, we principally focus on 3 different key areas of automation: infrastructure automation, business process automation, and robotic process automation. But one thing that we've seen after doing this for many, many decades is that automation and observability go hand in hand. You can't have one without the other. I certainly look forward to our discussion here this morning.

I'll turn it over to Daniel. Hey, guys. I'm Daniel Lakier. I run the network and security practice at Verinext in North America. Some of you may know I am actually an IP Fabric alumnus, having been very interested in observability and assurance in all things for many years, because especially as we move into a modern era, nothing can stand in isolation.

Everything has to work together. And so when I was asked to join this podcast and talk on observability and its importance in any modern infrastructure, I was very excited to join. So thank you. Brilliant. Thank you, Daniel.

So I've got a first couple of questions, and I am gonna pull in some of the insights that you guys shared in 1 or 2 of our planning calls for this. To set us off with, hopefully, a shared understanding, if that's possible in the world of observability: I mentioned that we've arrived, whether by accident or by design, at the observability discussion with some of our biggest enterprise customers. Not by means of an ambitious road map set 3 or 4 years in the past, where we've managed to stick to all the milestones and plans, but more through an accumulation of tactical projects that have delivered maybe isolated value, but have amalgamated, or kinda grouped together, into a much bigger discussion. So we've gone from tactical requirements across the business for insights and visibility into a world where we've managed to piece them all together and start stepping on the toes of this observability discussion. George, in your experience, do you tend to find that you guys at Verinext are invited into the enterprise discussions based on ambitious road maps, or based on tactical needs that feed this observability discussion?

Yeah. It's a great question. It's been our experience that, yes, those 2 are significant drivers to observability: being able to have a road map and plan exactly what's next, as well as those tactical needs. But it seems to be much, much broader than just those 2. We know that, for a lot of organizations, the COVID lockdown and then life after COVID, or post-COVID, has really driven a paradigm shift, if you will, in the IT industry.

The need to gain efficiency, the need to capitalize on investments, is at an all-time high. But there are several other drivers to automation and, more importantly, observability. Employee burnout, employee turnover: a lot of organizations are finding it difficult to keep that talent. And certainly having that observability, as well as being able to drive automation, can reduce that.

Driving innovation, or being able to have that differentiated experience, to stand apart from your peers and from your competitors, is also driving a lot of this. Reduced time to resolution, eliminating downtime, being able to identify problems in the ecosystem faster, and being able to have those returns, is also a significant driver. And then, last but not least, just really having more insights. There's an old phrase, you don't know what you don't know, which is typically exactly where you need to focus. So having that observation, being able to have insights into those dark areas of your ecosystem, really reduces anxiety and allows more organizations to adopt greater technologies faster.

I'm frantically scribbling down notes to come back to a couple of these themes. We've started looking into the impact of network assurance maybe one step up from individual staff morale and individual contributors' enjoyment of the work they're doing, and we will loop around to this theme. But we've been looking at how you could potentially, even at interview stages when you're hiring a new architect, someone who's gonna sit at the table and be responsible for leading the battle on whether it's the network's fault, whether it's infra, whether it's applications, show them at that stage an abstracted, simplified view of the network, or maybe even a full-stack observability platform approach, where you can show the projects they're gonna be working on. And I'm also not lying: here's how we're gonna enable you to get stuck into those projects, because you're not gonna be bogged down hunting for information. So I'm gonna just kinda park that topic for fear of diving down a cul-de-sac too fast, and take one step back out. A couple of things you've mentioned are about getting more insights, more visibility, and alleviating or liberating people from the mundane tasks.

What is it that makes a project or a deployment observability, or even the platform and the tooling stack observability, as opposed to the more common terms for similar themes from years before: monitoring, data capture, and representation? What is it that differentiates an observability discussion, in your view, George? Okay. Great question. Yeah.

I mean, from what I've seen, let's just look at the core word. Right? Observation. Typically, it's associated with a passive activity. But the IT observability platforms that we're talking about tend to go much deeper.

We're talking about active observation with a view to identifying and mitigating threats inside of an organization. And there we get into the area of AI, or predictive analytics. So, being able to identify certain variables, certain markers, if you will, inside the ecosystem, whether they be network related, storage, compute, it doesn't matter. Being able to identify those patterns and then taking it to the next level. Alright?

So not only are you able to predict what's going on, but you can automatically identify the problem down to a granular level. I believe all the historical monitoring systems were nice. They could show you what's going on inside of your infrastructure, but it was always at a distance. It was really a passive activity. But an observability platform, as I mentioned, shows you down to the component level where the problem is.

So it's not just a performance problem in this part of your data center, or even in this particular satellite office. It's the individual disk. It's the CPU. It's the network port. It's the SFP, whatever the case may be.

Being able to identify it down to that level, and then taking it a step further: automatically notifying the stakeholders that there is a problem with this particular network endpoint, raising the ticket, and then mitigating or even triaging the problem. Moving the workloads from this particular network endpoint to a different one, or off of that particular storage platform to a backup. And then even going so far as ordering a replacement for that particular element, all through automation. Right? Updating the ticket.

And here is the most attractive part for a lot of organizations: it runs 24/7. It doesn't need sleep. It doesn't take vacation. You know?

So the call of the day is to have this always-on infrastructure. And if you're going to have the always-on infrastructure, then you also need an observability platform that's always on, always responsive. So it goes a little bit deeper than just the old traditional legacy model of just monitoring what's going on. Now we've evolved, if you will, to the point where we're taking active observation to the next level. Joe, if I may add something: I think that one of the things about observability is the way it ties everything together.

We have had instances of this in the past. Where's the technology in it? Let's be honest: I normally deal with technologists. But observability lets me deal with business problems. And sometimes, you know, 5 years ago, if I was speaking to an attorney inside a firm, the attorney would say, we are struggling with hiring attorneys, and we're falling behind in work.

Okay. That's a business problem. But today, because of observability and what that brings to the table, I can dig further. I can ask a couple of questions. Look at what's happening with the economy.

And I know that the reason they're having trouble is because for the last 2 years, up until probably a few months ago, everyone was struggling with hiring resources. There just weren't enough. I looked at what industry they were in. It's a very litigious industry that became more litigious since COVID. And we said, you know, let us have a look and use an observability platform, especially around something called business process automation.

That was what we were thinking. But let us understand why you need all these extra attorneys right now. Let us measure the process, what an attorney does in a given day. And from that, we were able to come up with a solution using technology. They had a certain number of attorneys who, in that case, had to take forms coming in and route them to the right people. That took a lot of time.

What happens if we could do that automatically? Because forms tend to come in a specific format, easy to deliver. Well, then what do they do? There's a specific process they have to follow. There's 5 or 6 things.

We can tie those together. And so, because we were able to measure the people aspect, what they do on a daily basis, and the process of everything, because we had an observability platform that could observe a person's functions, how the environment was working, how the systems interacted, we were able to take all of that and compress it down through automation to solve a business problem. But we couldn't have before. Even though the automation existed, it would have been really, really difficult without an observability platform to figure out how to tie it all together. That's in BPA.

In networking, it's no different. We had monitoring systems that could monitor the network. We had systems that could monitor applications. We had systems that could monitor security. But in order to really get the value out of it, we needed something that would tie all of those together and then add predictive analytics on top of it.

When I see CRC errors happening on a switch port, nothing seems to be wrong. Well, historically speaking, we've seen the same thing happen on 10,000 switches, and 4 months later that switch port will likely fail. Why wait for it to fail? If we have the observability platform, and we have the intelligence on top of it, we can make a decision to bypass it or replace it before it breaks. Nice.
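To make Daniel's CRC example concrete, here is a minimal sketch in Python of the kind of trend check a platform might run over port telemetry. The field names and the threshold are illustrative assumptions, not any vendor's actual implementation.

```python
# Sketch: flag switch ports whose CRC error counters are climbing at a rate
# that, historically, preceded failure within months. All names and the
# threshold are illustrative assumptions.

def crc_error_rate(samples):
    """Average increase in the CRC error counter per polling interval."""
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    return sum(deltas) / len(deltas) if deltas else 0.0

def ports_to_preempt(port_samples, threshold=5.0):
    """Return ports whose error growth exceeds the historical danger rate."""
    return [port for port, samples in port_samples.items()
            if crc_error_rate(samples) > threshold]

telemetry = {
    "Ethernet1/1": [0, 0, 1, 1, 2],        # stable: occasional errors
    "Ethernet1/7": [0, 40, 95, 160, 240],  # climbing: likely to fail
}
print(ports_to_preempt(telemetry))  # ['Ethernet1/7']
```

A real platform would learn the threshold from fleet history rather than hard-coding it, but the shape of the decision (observe a counter, compare the trend to known failure precursors, act early) is the same.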

No, I was glad you transitioned into your own translation there from your legal use case, because I was thinking: is it a likeness? You've just gone from mapping a personal process, through being able to observe it, to being able to make informed decisions, and then looking at where that can be automated. Or is it genuinely, in your view, when you're speaking to some of these major enterprises, a discussion of observability that also steps into people processes as well as technical processes and application processes?

Do you guys genuinely take, not a portfolio approach, but such a wide angle on the observability discussion that it could become any of these plays? Or were you using that just as a story to highlight the likeness of business challenges to technical solutions? I think that one of the things that differentiates us from a lot of our competitors is that, although we have experts in each area, observability is ubiquitous. It's across the entire environment. It is today. There's process observability, there's data observability, there's infrastructure observability.

Inside infrastructure observability, you can break that down into different components. But in all things, in order to fix most things, observability is the starting point. And it's not new. It's just new that we have technology to help us. You've had operational process specialists for the last 100 years in business.

We're now figuring out how to do it in all other things. And, you know, in infrastructure specifically, we all talk about being architecturally driven, making decisions based upon what we think is the best design. But in reality, most of us made a decision on a manufacturer, then worked out architectures based upon that. In some areas, it's got better.

You know, it's in all things. It's not just in networking, where it's very, very prevalent. Even if you think of, you know, servers, it's a lot more compressed than it used to be. But just 10 years ago, you had probably 10 flavors of UNIX, never mind Windows and Linux and everything else.

And you could make any decision you wanted about your operating system, and then everything else sat on top of it. Now we're at a point where things are becoming much more ubiquitous as we move to the cloud. We need to be able to be more agile. In order to be more agile, we need a system that allows us to see the entire environment and make operational decisions. You know, when we were starting out, we had the luxury of being slow in making a lot of those decisions.

But in a lot of cases now, especially with the slowdown and the inability to get gear because of supply chain, a lot of people don't have that same luxury. You can't wait on a specific manufacturer that you like in order to run your business. You have to have a global view, a global understanding of what making a change does, to make architectural decisions that are product independent or manufacturer independent. Does that make sense? From my perspective, this works a few ways.

Right? So what you've talked about there is everything from having data points from a far more complex infrastructure than you would ever have had before, for reasons of using more IT, using more vendors, and running more complex infrastructure. Being able to get insight from that, I guess, is then able to be mapped onto a process, whether that's an application delivery or whether that's deeper into a business process. And then being able to act on the insights. So this comes back to the automation piece, I suppose, that George mentioned before, doesn't it?

So it's a fascinating sort of melee of chaos, really, I suppose, in IT operations, because what you're trying to do here is grapple with something that's grown in complexity so much, and understand the interactions within it, in order to actually do anything about it. And I think this is what a lot of people struggle with at the moment: this idea that there's so much going on, so much complexity and so much tech debt in their environment, so much old stuff that's not been taken out as systems are developed, that they don't know where to start. They're almost stuck with this thing. So I guess having the observability kind of frees them up from this being-frozen-with-fear thing of not knowing where to begin. Having the right platform, the right data, and the right insight into that data means they're able to take a step back, understand what needs to be done, and move things on with more of their strategic initiatives, I guess, and move on with the development of the environment to cater for the business requirements.

Yeah. It sounds interesting. I'll just add a very quick point, George, and then I'll pass it over to you. Mhmm. We were speaking with one of our customers, and I guess you guys are talking of coming in from the business challenge and an observability-led discussion, where the real value and differentiation of the observability approach is being able to whittle right the way down to the final element that is driving an impact or a behavior across the infrastructure that is underpinning the applications that are underpinning the business processes.

We've approached it from the opposite angle, where we have just recently sold to a new customer. They were focused on their own projects, their own operational requirements in the network, and they've deployed this technology with a view to tactically getting a handle on the technical debt within their environment. The quote that came back from the director, the one sitting underneath the kind of money discussion and these business process, business outcome types of discussions, was that, yes, there was a requirement for a technical debt insight view. And then the quote was that we've managed to flip all of the rocks at the same time, which may sound in kinda conflict with your point about employee morale and allowing people to focus on the job at hand. You would have thought flipping all the rocks at the same time would be a bad thing, but they're now in a position where they understand what needs to be approached tactically, specifically within the network space, so they can almost park the technical debt discussion, because they've got that data.

They've captured it. They've standardized it. They've abstracted it. So they can now have a discussion about the architecture of that environment, not the vendors within that environment. So it touches on a number of our points, but what it's immediately allowed them to do, because of how we then allow them to gain insights into the data, is to start thinking about an enterprise-wide strategic initiative around compliance and standardization, to further progress their approach towards abstraction, and to look at standardized datasets for enabling their automation as well.

So we're approaching it the other way, and where we're now getting with them is they're saying: right, now we've got the network represented as an API with a standardized dataset. We have a plan of operational projects where we can start tackling this complexity and enabling more automation, and we can start actually getting involved in the discussion with the guys at the top who are talking strategic initiatives. They're actually talking about a particular approach of data formation and data exploration, and looking into ML and AI themes within this data lake, within this kind of mass data, big data approach.

And so we've gone from a very tactical requirement of tackling technical debt within the network of a data center environment to, within the space of just 2 months since deployment, having a discussion about ML and AI. For them, it's a natural choice of technology to go for, because of the company they are, having developed and provided this themselves. But what do you see with your customers? Is this a vendor discussion when we get to this AI layer, this insight layer? Are we talking, here's your magic quadrant, here's your consumption wave, or whatever analyst representation we'd be looking at?

Where do you see that topic going, George? Are people looking at specific vendors? Or... Yeah. That's another great question. And I think, well, a lot has changed in the enterprise, obviously.

But the old legacy one-vendor-for-everything approach simply does not match the enterprise footprints that most clients have today. Most organizations will adopt a best-in-breed approach. Alright? They're gonna select a best in breed for their networking gear, a best in breed for their storage frames, a best in breed for their server platforms. And because of all those disparate systems, whatever type of observability platform you put in place must be vendor agnostic.

It must be able to layer on top of any vendor, regardless of its age or its type, and be able to extract that information away from that particular endpoint. I wanna comment back on Daren's point about the complexity, because he is completely accurate. When you look at doing something like this from an observability platform, it can be extremely complex looking at it from one vantage point. But look down to the lowest layer, and the lowest layer would be the individual IT admin looking at his particular segment, looking at networking, looking at storage, whatever the case may be. The way to resolve an issue with that particular platform has not changed.

Even though, yes, we're advanced, it goes back to the old phrase: how do you eat an elephant? It's always one bite at a time. Alright? So when they see a problem, they're gonna do the exact same thing. They're going to analyze.

They're gonna extract logs. They're gonna capture KPI data. They're gonna capture any kind of trace data. And then, from that data, they're going to look and see if they have a pattern, make some intelligent decisions based off of their findings, and then execute a particular action. Now you take that same legacy process that has been in operation almost ever since IT has been in place.
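The loop George describes (collect logs, KPIs, and traces; look for a known pattern; decide; act) can be sketched as a small function. The signatures, signals, and remediation actions below are hypothetical illustrations of the shape, not any real platform's API.

```python
# Sketch of the classic admin troubleshooting loop, expressed as code so it
# can be applied uniformly across an ecosystem. Patterns and actions are
# illustrative assumptions.

PATTERNS = {
    # known signature -> remediation action
    "link-flap": "reroute-traffic",
    "disk-full": "expand-volume",
}

def detect_pattern(signals):
    """Return the first known signature found in the collected log lines."""
    for line in signals["logs"]:
        for signature in PATTERNS:
            if signature in line:
                return signature
    return None

def triage(element, signals):
    """One admin's routine (analyze, match, act) as a reusable function."""
    pattern = detect_pattern(signals)
    if pattern is None:
        return (element, "escalate-to-human")
    return (element, PATTERNS[pattern])

signals = {"logs": ["10:01 eth0 link-flap detected"], "kpis": {}, "traces": []}
print(triage("switch-04", signals))  # ('switch-04', 'reroute-traffic')
```

Lifting the same loop from one technician's tribal knowledge to a platform is exactly the broadening George goes on to describe: the logic doesn't change, only the scope of what it observes.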

Alright? And now take that, abstract from it, and bring it up to the platform, the holistic observability platform overview, and apply that same principle to the entire ecosystem. So nothing really has changed. It's just the broadness of what's actually being observed now. Rather than having that tribal knowledge with that individual technician at that layer, I wanna take that exact same concept.

I wanna take that same principle and apply it across my entire IT infrastructure, all elements. So, because of that, it must be agnostic. It must be flexible, in order to interface with any type of system out there. And because it is flexible and agnostic, now we have the possibility of that particular organization getting the most ROI. I was gonna just chip in, in response, and I guess look at the fact that the way services are deployed across the IT infrastructure means there are so many touch points and so many interdependencies, I suppose.

And this is where the trick comes in this whole approach, isn't it? You've got all that data. You now have that data from all of those different areas of infrastructure, but understanding the touch points between them, understanding where you cross from one part to the other, is the bit that then becomes really difficult to handle because of the size of the infrastructure. Right? So I guess this is where ML really starts to play a part.

Is it? If I go back to what you asked, Joe, and to come back to Daren: you asked about whether we will have specialist technologies and specialist areas, Magic Quadrant type stuff. And I think you won't have an overall platform that does everything, because even for the largest companies, it's just too broad. And if you look at where you're going, if we're saying that eventually, in all things, we're gonna have an application make a call that changes the entire infrastructure code, or how it's configured, in order to accommodate that, then nobody will have one system.

What we will have is technologies, or observability platforms and AI platforms, that are best at certain things. And because of open APIs, they are tied together. I think the trick is gonna be having the specialists who understand how to tie all the technology together in a way that makes it seamless. Hopefully, you can get down to a couple of providers instead of 100, but that's gonna be the trick. And so, how do you help?

I'm just trying to take the perspective, or put on the hat, of an ambitious VP of infrastructure, the overall owner of an infrastructure strategy. Obviously, they've got the architecture teams. They've got the practitioners in each group. The theme that's bubbled up to the surface is looking at interoperability, looking at API first, giving yourself the flexibility. Don't become burdened by your network providers, server, and infrastructure providers.

Make sure that you're enabling, or pushing, your teams to ensure this interoperability approach, in order to ensure a better return on investment, rather than stumble into a place where you've built an amazing strategy and then it's a rip-and-replace discussion. If we're taking the perspective of that person, and observability is starting to bubble up, they go to a few conference discussions, they get themselves involved in, you know, listening to some of these podcasts. Other than these requirements, what are some of the critical things that they should be aware of? Because what I'm hearing is that some enterprises will be lucky, because their operatives, or individual practitioners within the different areas, will almost accidentally, through solving their tactical requirements, stumble upon this group of interoperable approaches, and then you just apply an insights layer over the top. But that's not gonna occur in a lot of cases.

So if we've got this ambitious top-level manager who's looking at a tooling strategy, what would you say are some of the critical overarching themes that they need to be pushing down into their teams, or some of the major considerations and stumbling blocks, if we've not already covered all of them in the discussion? I can start. I think there's one place where we see a lot of organizations fail. The end result that we want with observability is the opportunity to do automation in an ideal fashion. But in order to do that, you need observability. You need observability so you know what's in your environment.

You need a source of truth. And I think this is where almost everybody stumbles. Most large organizations have a source of truth, be it the CMDB, or oftentimes more than one, a CMDB and an IPAM. But ask them: how do they keep it up to date? Because any operational automation starts with knowing. You can only automate what you know you have, and you can only change something if you know what it looks like, in terms of configuration, before you try and change it.

And everybody seems to stumble there. This is where a true observability platform comes in: something that doesn't just assume, that doesn't take your presuppositions of what your network looks like and then layer on the components it knows about, but on a regular basis goes out, sees how the exact environment is running, and then updates the CMDB, so that any operational automation you try and layer on top of it, whatever it may be, has a basis for making any decision. Because if you think about your own life, if you couldn't check whether you had fuel, you couldn't plan to drive down the street to the store. You have to know what's there.
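Daniel's point can be sketched in a few lines, assuming a CMDB held as a simple mapping: discovery refreshes the record, and any automated change is gated on the record matching expected state. The names here are illustrative, not any product's actual API.

```python
# Sketch: keep the source of truth honest with discovery, and gate automation
# on it. The CMDB is modelled as a plain dict; all names are illustrative.

def reconcile(cmdb, discovered):
    """Update CMDB records from a live discovery snapshot; report any drift."""
    drift = {dev: state for dev, state in discovered.items()
             if cmdb.get(dev) != state}
    cmdb.update(discovered)
    return drift

def safe_to_change(cmdb, device, expected_state):
    """Gate a change: proceed only if the record matches what we expect."""
    return cmdb.get(device) == expected_state

cmdb = {"core-sw1": {"os": "9.2", "role": "core"}}
discovered = {"core-sw1": {"os": "9.3", "role": "core"}}  # someone upgraded
drift = reconcile(cmdb, discovered)
print(drift)  # the stale record shows up as drift before any automation runs
print(safe_to_change(cmdb, "core-sw1", {"os": "9.3", "role": "core"}))  # True
```

Without the reconcile step, the change gate would have passed against a stale record, which is exactly the "single source of liability" failure mode George names later.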

We can't make a change in any other part of our lives without knowing. The same stands true with operational automation. Setup automation is really prevalent in most things that we deal with today, in almost all IT environments. It's when you try and move to operational automation that we have a problem.

And I think a lot of that starts with: we buy all these platforms, and we can't get them working ideally, because we don't have something as the basis for them to work off of. That, and then having a system which allows our team to not have to know all the syntax, so we can move away from having one manufacturer. So we can have this low-code automation platform that takes away the objection of, my team doesn't know it.

If we have an accurate source of truth, and we have a system that allows our team to know outcomes instead of syntax, then I think we're in a really good place. George, Daren, what do you guys think? Yeah. Absolutely. I think I would add to that, Daniel.

You kinda stole some of my thunder too, by the way. But the only thing I would add is that, for a lot of organizations, one of the pitfalls, one of the gotchas, if you will, is trying to boil the ocean: trying to start too big, trying to implement an observability platform and roll out automation and roll out AI/ML all at once. The ones who actually perform this successfully never start that large. They start small. And it's usually by, you know, reaching out to a trusted partner, a trusted adviser, someone who does this day in, day out. But they actually start the project small: perhaps an observability platform for a single use case, even a single business unit.

And they start there, work out the kinks, get the workflows down, and, as Daniel mentions, ensure that the CMDB everything ties to, the one the AI engine is going to use, is truly a single source of truth and not a single source of liability. Right? So they start there. They start small. And once they have the foundation laid, then they grow the platform.

Then they begin to expand out to other lines of business, other segments of the IT infrastructure. Those who take that approach are always the most successful. Darren? No, I think you've hit the nail on the head there.

I think the fact is that no IT organization is going to be greenfield, so you can't just start from scratch with anything, can you? You have to develop out from whatever you've got. And that means building knowledge and, dare I say it, documentation around what's there, understanding it, and then building on it. It has to start small, but then it grows out until you have enough knowledge to support first a manual process, and then to develop on that, building trust in it, I suppose: having the ability to confirm, yes, this is a good, solid picture of what my environment looks like, and then creating the automation on top of that, once you've built that level of trust.

Brilliant. So if I was to quickly recap what we've said: we started off on the theme that, in the discussion we're having, observability without automation is not observability, so you need to take the two in unison, in the same approach. And we know that automation is required alongside the insights in order to create observability through the intelligence layer, so that you can actually use the insights you draw. There are some key themes underneath that.

Those are, first, abstraction: not being dependent on a single supplier, but also not being dependent on "my team doesn't know that" or "my team does know that." Then having a source of truth which is validated, so you're not left with a single source of liability, as you put it there, George. Then having interoperability and flexibility, an API-first approach, or however that needs to manifest, as a recurrent theme throughout every team's approach to tooling purchases, scripting frameworks, and anything else they're looking at. And finally, not boiling the ocean. We've seen customers try the big-bang effect of flicking the switch and tackling a number of interlinked projects at the same time, and it only takes one or two failed projects at the beginning to massively degrade trust in the overall approach and in the owners of that approach.

So I think that's a really good insight you raised there at the end, George: don't boil the ocean. From that recap, is there anything any of you would add? No. Daniel? Yeah.

I think you got it covered very nicely there, Joe. Yeah. Very concise.

Brilliant. So let's briefly wrap up. Obviously, this is the network assurance element tying into the observability discussion through Strategy Fabric, the theme we're looking to run with on this podcast. If I take the elements discussed here, the source of truth and the ability to validate it, and also the ability to abstract the interaction across the network, with all of the different technologies and domains we're talking about within the network, I'd ask maybe between yourselves, Darren and Daniel, to lead us through a brief summary of network assurance and the application of this approach to data for the network, how we can obviously help, and what the key themes are. I suppose from my point of view, being an old network guy myself, the problem with networks was always understanding how they were constructed, why they were constructed the way they were, and how they serve applications.

And I think some of the themes we've touched on today have been really useful for digging into that insight of how networks are built to provide and supply service for applications. That's the key to it for me. From a network assurance perspective, it's there to create that insight: to capture the information that's required from the network, to model it out, and to understand how that is then used to supply the network service; to be able to measure its capability of doing that; and then to be able to point out areas that need remediation and activity to fix them. So I think that's my mile-high view of how assurance plays into the space. Daniel, any more insight?

For me, network assurance is something where, if network assurance platforms had existed before we were building networks, we would have started with one. But they didn't, so we built networks. And now you're trying to convince people that this exists, because people just don't know; it's too hard to believe that something like this didn't exist before and exists now. But it's the ability to see your network and understand whether it's configured as you expected. We do network audits all the time.

And people say: we think this is the version of code we run, this is the version of SNMP, this is the NTP server, these are the protocols. And the reality is, even if I have a system where they can see all the switches, which I haven't seen yet, even if I have that, the same standards need to run across more than just that.

You want the same version of SNMP across all of your environments. And then I need to take this data, and this system allows me to update my source of truth so that I can move towards operational automation. Understanding your network, not just whether it's meeting your standards but whether it's running as expected, is, I think, critical. I can't tell you how many times we've seen people who say: I have a centralized firewall management system, I know it's configured correctly.

And then we find that, yes, it's still configured correctly, but the traffic doesn't flow through it anymore, because somebody made a mistake and changed something on the network. An assurance platform allows you to see that your network is configured as intended and running as intended, and then to update your source of truth on a regular basis, so you can move towards operational automation and don't have to have these problems again. Brilliant. Thank you, Daniel.
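The kind of network audit Daniel describes, checking that every device actually matches the organization's intended standards (SNMP version, NTP server, and so on), can be sketched in a few lines. The standards and inventory below are invented for illustration; in practice the inventory would come from a discovery or assurance platform rather than being hard-coded.

```python
# Hedged sketch of a standards audit: for each discovered device, flag
# every setting that deviates from the intended standard. All devices
# and values here are made up for the example.

standards = {"snmp_version": "v3", "ntp_server": "10.0.0.1"}

inventory = [
    {"hostname": "fw-01", "snmp_version": "v3",  "ntp_server": "10.0.0.1"},
    {"hostname": "sw-07", "snmp_version": "v2c", "ntp_server": "10.0.0.1"},
    {"hostname": "rt-03", "snmp_version": "v3",  "ntp_server": "192.0.2.5"},
]

violations = []
for device in inventory:
    for setting, expected in standards.items():
        actual = device[setting]
        if actual != expected:
            violations.append((device["hostname"], setting, expected, actual))

for hostname, setting, expected, actual in violations:
    print(f"{hostname}: {setting} is {actual}, expected {expected}")
```

The output of a check like this is exactly what Daniel suggests feeding back into the source of truth: a concrete list of where intent and reality disagree, so remediation can happen before automation is layered on top.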

No, that's super. Before I quickly wrap up: George, anything you'd like to add to the discussion today? No, not at all. This has been a great discussion.

You know, when we look at everything we've talked about, we know that for a lot of organizations who have embraced this, or are thinking about embracing it, it can be scary. It can be a challenge. But just know that there are trusted partners out there to help you along this journey. So thank you. Brilliant.

Well, I'll take a moment to briefly wrap up. Thank you for listening, and more to the point, thank you, Daniel and George from VeriNext, and thank you, Darren. If you have any questions following this podcast, don't hesitate to visit the VeriNext website or, of course, ipfabric.io.

Visit our website, or engage the team or any of our partners globally, to begin this discussion and to understand what it should specifically mean for your organization and for your own stage of maturity on this journey. Thank you, guys.

Podcast notes

Episode Title:

Could Full-Stack Observability Be The Answer To Growing Enterprise Complexity?

Hosts & Guests:

Daniel Lakier, George Carter, Joe Kershaw & Daren Fulwell

Topics:

  • Full-stack Observability
  • Network Complexity
  • Network Assurance
  • Network Automation

Our hosts