In my last article, I climbed on my soapbox about patching, backup and access control as key ingredients of a resilient IT architecture. This time I take a look at security analytics and the role it can play in keeping the network available and safe to use.
One of the big issues with analytics is the sheer volume of data available – vastly more than a mere human can wade through. The starting point for analytics is the log and audit files of the various devices in a company’s network, starting with a server’s hypervisor and moving up through the operating system and the various applications. But don’t forget that appliances such as firewalls, intrusion detection systems and email scanners also create log files, as can Ethernet switches, routers and load balancers.
You need to consider all these files to conduct useful analytics. But poorly configured log and audit files can be very large, so the aim is not to capture every event available for capture, but to select only the key events for the device or application of interest.
For example, capture session activity by user, workstation or application (successful logon, logoff, logon failure, attempted access to unauthorised files or systems, activity outside normal hours), or unexpected data or activity crossing a security boundary, such as port scanning, unexpectedly large volumes of data, or data moving at unexpected times.
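To make the idea of selecting key events concrete, here is a minimal sketch of that kind of filtering. It assumes a hypothetical logon event format and a notional 08:00–17:59 working day – real log formats and policies will differ – and picks out just the logon failures and out-of-hours logons described above.

```python
# Sketch: filter a (hypothetical) logon event log down to key events only --
# logon failures and successful logons outside normal working hours.
import re
from datetime import datetime

# Assumed line format: "YYYY-MM-DD HH:MM:SS user EVENT"
LINE_RE = re.compile(r"^(?P<ts>\S+ \S+) (?P<user>\S+) (?P<event>LOGON_OK|LOGON_FAIL|LOGOFF)$")
WORK_HOURS = range(8, 18)  # assumed "normal hours": 08:00-17:59

def events_of_interest(lines):
    """Yield (timestamp, user, reason) for the events worth analysing."""
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue  # ignore lines that don't match the assumed format
        ts = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S")
        if m["event"] == "LOGON_FAIL":
            yield ts, m["user"], "logon failure"
        elif m["event"] == "LOGON_OK" and ts.hour not in WORK_HOURS:
            yield ts, m["user"], "out-of-hours logon"

sample = [
    "2024-03-01 09:15:00 alice LOGON_OK",   # in hours: not of interest
    "2024-03-01 23:40:00 bob LOGON_OK",     # out of hours: flagged
    "2024-03-01 10:02:11 eve LOGON_FAIL",   # failure: flagged
]
for ts, user, reason in events_of_interest(sample):
    print(user, reason)
```

The point is that the routine discards the bulk of routine activity at source, which is exactly what well-chosen logging parameters do for you further upstream.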
Usefully, many analytical tools can be configured to issue alerts when security-critical events occur, such as password failures on high-privileged user accounts, port scanning, or unusually high volumes of traffic emanating from, or going to, a server or application.
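An alert rule of this kind is, at heart, a threshold over a count of matching events. The sketch below assumes a hypothetical list of privileged accounts and an illustrative three-failure threshold; it is not the configuration syntax of any particular tool.

```python
# Sketch of a simple alert rule: flag any privileged account that
# accrues too many password/logon failures.
from collections import Counter

PRIVILEGED = {"root", "admin"}   # assumed list of high-privileged accounts
FAILURE_THRESHOLD = 3            # assumed policy: alert on 3+ failures

def failed_logon_alerts(events):
    """events: iterable of (user, event) pairs; return accounts to alert on."""
    failures = Counter(
        user for user, event in events
        if event == "LOGON_FAIL" and user in PRIVILEGED
    )
    return [user for user, n in failures.items() if n >= FAILURE_THRESHOLD]

events = [("admin", "LOGON_FAIL")] * 3 + [("alice", "LOGON_FAIL")] * 5
print(failed_logon_alerts(events))  # alice is not privileged, so only admin
```

In a real tool the same logic would normally run over a sliding time window rather than the whole file, so that old failures age out.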
Put together a list of the activities that would be useful to capture for analysis and/or to issue alerts against, then compare it with the logging/auditing parameters available for each device or application to define the parameters to be set. Run and analyse the log/audit files for a month, adjust the logging/auditing parameters as necessary, then review again a month later. Repeat until you have a set of log/audit parameters that gives your organisation useful analytics and alerts.
Part of this definition stage should identify what output is required from the analytic tool(s): which alerts need immediate attention and how they are issued (email, SMS, VDU), and what needs reporting, at what frequency and to whom (daily, weekly or monthly; security specialists, security/IT managers, headlines for senior managers/board).
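The mapping from alert severity to channel and audience described above can be captured as a simple routing table. The severities, channels and audiences below are illustrative assumptions, not a prescription.

```python
# Sketch of an output-routing table: which channel and audience
# each class of analytic output goes to (all names are assumptions).
ROUTES = {
    "critical": ("sms", "security specialists"),            # immediate attention
    "warning":  ("email", "security/IT managers"),          # daily/weekly review
    "summary":  ("monthly report", "senior managers/board"),  # headlines only
}

def route(severity):
    """Return (channel, audience) for a given output class."""
    # Default: unknown severities go to the specialists by email.
    return ROUTES.get(severity, ("email", "security specialists"))

print(route("critical"))
```

Writing the routing down like this, even informally, forces the definition stage to answer the "who sees what, and when" question explicitly.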
Given that the log and audit files will be large, what do you use to analyse them? There is a range of commercial and free log analysers, including LogRhythm (paid for), Splunk (free and paid-for versions), Microsoft Log Parser 2.2 (free), ADAudit Plus (paid for), SolarWinds Event Log Analyser (paid for), and many others.
Do you need one of these tools? Without one, you stand little chance of achieving sound analytics – but which one suits your organisation? Some are enterprise level, such as LogRhythm and Splunk, with pricing to match; some bridge enterprise and SME/SMB; and some are better suited to SME/SMBs. The size and complexity of an organisation’s IT infrastructure and its corporate appetite for risk will play a role in tool selection, as will the required tool output.