Ensuring your environment is as secure as you intended – part 2
In a world where security is getting more and more complex, how do you know that what you're doing is accomplishing what you want? Today we will discuss another IP Fabric Network Security Assurance use case, one that can not only make you more secure but also save you a lot of time and money.
On the surface, setting up log collection and SIEM services may seem simple, but those of us who have actually done it know better. Not only do we have to make the right technical decisions about which devices to collect logs from, but we must also make economic decisions about how many devices we can afford to collect from and how long to store the data. SIEMs are by design high-touch systems: they require continuous monitoring as well as full-time care and feeding. (We have to remember to add new devices, ensure those devices are configured correctly to send their logs, and so on.)
In the past, at least as it relates to network and security devices, SIEM sizing has been a combination of three things: guesswork, common sense, and some manual academic review. When I was involved in SIEM design, I would always suggest starting with core switches, border routers, firewalls, and load balancers, then looking at other obviously central or important devices. Most of the time I think that was the correct advice, and at least a good starting point. You may have noticed that my suggestions were quite generic. Why so generic? Because I didn't have a single overarching view that would have allowed me to see the network and its associated traffic flows. If I had, I would have been able to see all the network choke points, key distribution points, and so on at a glance. That would have helped me be more specific and accomplish two things: 1) establish a priority list of devices that required log collection and correlation, and 2) quickly identify which devices would not offer enough value to spend our limited logging budget on. (When economics are considered, deciding what should not be collected is almost as important as deciding what should be.)
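To make the "choke point" idea concrete, here is a minimal sketch of how a topology-aware priority list could be derived. Everything here is an assumption for illustration: the device names and links are invented, and the score is a crude betweenness measure (how many shortest paths cross each device), not any particular product's algorithm.

```python
from collections import deque
from itertools import combinations

# Hypothetical topology: device names and links are invented for illustration.
links = [
    ("branch-sw-1", "core-sw-1"),
    ("branch-sw-2", "core-sw-1"),
    ("core-sw-1", "fw-1"),
    ("fw-1", "border-rtr-1"),
    ("border-rtr-1", "isp-a"),
]

adj = {}
for a, b in links:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def shortest_path(src, dst):
    """Return one shortest path from src to dst via breadth-first search."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for neighbor in adj[node]:
            if neighbor not in prev:
                prev[neighbor] = node
                queue.append(neighbor)
    return []

# Score each device by how many shortest paths pass through it; high scores
# indicate choke points worth prioritizing for log collection.
score = {device: 0 for device in adj}
for src, dst in combinations(adj, 2):
    for hop in shortest_path(src, dst)[1:-1]:
        score[hop] += 1

priority = sorted(score, key=score.get, reverse=True)
print(priority[0])  # the biggest choke point ranks first
```

In this toy topology the core switch ranks first and the leaf devices score zero, which mirrors the advice above: collect from the central devices first, and treat low-scoring devices as candidates to skip when the budget runs out.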
Most large companies that are hacked today have SIEMs in place, yet many of them still don't know they have been breached. Given this, we can argue about whether most companies ever reach an acceptable SIEM design state; I think many do, at least for a point in time. But how often is that design properly reviewed, and how does that review take place? It should be obvious to all of us that networks are not static. This has always been true, but the advent of cloud and DevOps has accelerated the speed at which networks change. Although cloud and DevOps have certainly increased our companies' agility and in many ways led to simplified network deployments, they have also led to management complexity because there are now so many more "cooks in the kitchen," so to speak. This raises the question of how anyone, or any organization, can ensure that a logging solution, be it Splunk, Elastic, IBM, or Sumo Logic, remains effective in a modern networking environment beyond the initial install.
Before I get to what I believe is the answer, let's touch on the other problems we frequently see with SIEM deployments. All the above aside, let's assume we have all the right devices covered and a way to notify the appropriate people to add the right devices at the right times. How do we ensure that all of those devices are always configured properly to forward their logs? Perhaps your answer is, "We have a tool for rolling out our golden image." Fair enough, but what happens when a device gets upgraded or changed, or the manufacturer changes something? How can we see, without a manual review of the config, that the device is no longer acting as we intended? How can we be alerted to the change?
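The kind of drift described above can be caught by comparing each device's config against the expected logging statement. The sketch below is a hedged illustration, not a real audit tool: the `logging host` line is Cisco-IOS-style syntax, and every hostname, address, and config snippet is invented.

```python
# Hypothetical sketch: flag devices whose running config no longer points at
# the SIEM collector. Syntax varies by vendor; this assumes Cisco-IOS-style
# "logging host" lines, and all data here is invented for illustration.
SIEM_COLLECTOR = "10.0.0.50"

configs = {
    "core-sw-1": "hostname core-sw-1\nlogging host 10.0.0.50\n",
    "fw-1": "hostname fw-1\nlogging host 10.9.9.9\n",  # drifted after an upgrade
    "border-rtr-1": "hostname border-rtr-1\n",          # logging line lost entirely
}

def missing_siem_logging(configs, collector):
    """Return devices whose config lacks a logging line aimed at the SIEM."""
    expected = f"logging host {collector}"
    return sorted(dev for dev, cfg in configs.items()
                  if expected not in cfg.splitlines())

print(missing_siem_logging(configs, SIEM_COLLECTOR))
```

Run on every fresh config snapshot, a check like this turns "someone should manually review the configs" into an automated alert whenever a device stops forwarding to the collector.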
The answer to both questions is that you need a tool that performs automated network discovery and can create updated snapshots multiple times a day without impacting the network, then translates that discovery and snapshot information into a visual reference (a logical network map overlaid on a physical network map). The same tool then needs to be able to run assurance rules such as: does this device that is acting like a core router (a device type in a location we have pre-determined needs to log) send its logs to the SIEM? If not, raise an alert for review.
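An assurance rule like that can be expressed as a simple check over snapshot data. The sketch below is a minimal illustration only: the field names, role labels, and addresses are invented, and a real assurance platform would expose its own data model and API rather than this shape.

```python
# Hypothetical assurance rule over a discovery snapshot: every device whose
# role requires logging must report the SIEM as a syslog destination.
# Roles, fields, and addresses are invented for illustration.
MUST_LOG_ROLES = {"core-router", "border-router", "firewall"}
SIEM = "10.0.0.50"

snapshot = [
    {"hostname": "core-rtr-1", "role": "core-router", "syslog": ["10.0.0.50"]},
    {"hostname": "border-rtr-1", "role": "border-router", "syslog": []},
    {"hostname": "access-sw-9", "role": "access-switch", "syslog": []},
]

def logging_violations(snapshot):
    """Return hostnames that should log to the SIEM but currently don't."""
    return [device["hostname"] for device in snapshot
            if device["role"] in MUST_LOG_ROLES and SIEM not in device["syslog"]]

print(logging_violations(snapshot))  # alerts on the border router only
```

The key design point is that the rule keys off the device's role in the discovered topology, not a hand-maintained inventory list, so a newly added core router fails the check automatically until someone configures its logging.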
I have worked in several large environments where we spent millions of dollars on our SIEMs, and one of the key things we were missing was an assurance platform to help with the initial setup and continued management.
Well, that's all for this edition. I'll be back in a few weeks with my next installment on how to use a Network Security Assurance platform to solve real-world problems.