Users generally just want to get on with their jobs, and although they need to be aware of the cyber threat and know how to behave in terms of cyber hygiene and incident reporting, they cannot be expected to be the first line of defence. In fact, if we need to rely on the user to prevent an attack, then our technical defences have failed.
So what are the best, most cost-effective measures that allow users to continue working in a safe environment?
The recent WannaCry and Petya attacks remind us of the importance of basic cyber hygiene, such as patching, as an effective way to maintain a safe working environment, but there are many different aspects to this.
Another cost-effective set of measures is the creation of a hardened default build for user hosts. This includes: applying operating system generic exploit mitigation, such as data execution prevention; applying application hardening, such as ensuring that browsers do not automatically execute scripting, Flash or Java from an untrusted source; and applying operating system hardening, such as disabling unneeded functionality like remote desktop and SMB (server message block) protocols.
Some of these measures are unpopular with users because of the limitations involved, but they are an effective way of limiting the attack surface and are cost-effective once built.
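One practical way to keep a hardened build hardened is to audit hosts against the agreed baseline. The sketch below illustrates the idea; the setting names and values are hypothetical examples, not drawn from any particular operating system or product.

```python
# Illustrative audit of a host configuration against a hardening baseline.
# The setting names below are hypothetical, chosen to mirror the measures
# discussed in the text (DEP, browser script blocking, RDP and SMB).
BASELINE = {
    "data_execution_prevention": True,   # generic exploit mitigation
    "browser_autorun_scripts": False,    # application hardening
    "remote_desktop_enabled": False,     # OS hardening: disable RDP
    "smb_v1_enabled": False,             # OS hardening: disable legacy SMB
}

def audit(host_config):
    """Return the names of settings that deviate from the hardened baseline."""
    return [name for name, required in BASELINE.items()
            if host_config.get(name) != required]
```

A host that has drifted, say by re-enabling remote desktop, would show up immediately in the audit output, which is what makes the hardened build cheap to maintain once it exists.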
There are many more ways to keep users safe by preventing malware execution, including the use of host antivirus to block known malware, and host intrusion detection systems (HIDS) to detect malware exploiting zero-day vulnerabilities. There is also a range of approaches to control rather than stop malware execution, such as application sandboxing and micro-virtualisation. But for this article, let’s focus on how to prevent malware entering the system in the first place.
The simplest way to keep users away from potentially dangerous websites is through web or domain name system (DNS) filtering. DNS filtering is the easiest to implement: it blocks DNS requests for known bad servers and for undesirable content, such as pornographic or gambling sites.
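At its core, DNS filtering is a lookup against a blocklist before a query is resolved. A minimal sketch, assuming a simple in-memory blocklist (the domains and categories below are illustrative, not a real threat feed):

```python
from typing import Optional

# Hypothetical blocklist mapping domains to the reason they are blocked.
BLOCKLIST = {
    "malware-c2.example": "known bad server",
    "casino.example": "gambling",
}

def dns_filter(query_domain: str) -> Optional[str]:
    """Return a block reason if the domain or any parent domain is listed."""
    labels = query_domain.lower().rstrip(".").split(".")
    # Check the full name and every parent, so a listing for
    # malware-c2.example also blocks sub.malware-c2.example.
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in BLOCKLIST:
            return BLOCKLIST[candidate]
    return None
```

A filtering resolver would return a blocked or sinkhole response whenever this lookup matches, and resolve the query normally otherwise.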
Web filtering provides similar protection, but an on-site proxy server can provide better log information for incident investigation and can filter by content type (for example, executable files). This is fairly cost-effective to set up, but requires a level of maintenance to adapt the filtering, and can lead to user resistance if people cannot access certain websites or download certain content.
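The content-type filtering a proxy adds can be sketched as a policy check on each response. This is a simplified illustration; the blocked types and extensions are example values, not a recommended policy.

```python
# Illustrative proxy-style download policy: block by MIME type or file
# extension. The sets below are hypothetical examples of executable content.
BLOCKED_TYPES = {"application/x-msdownload", "application/x-dosexec"}
BLOCKED_EXTENSIONS = (".exe", ".scr", ".vbs")

def allow_download(url_path: str, content_type: str) -> bool:
    """Decide whether a proxied download should be allowed through."""
    # Normalise e.g. "application/x-msdownload; charset=binary".
    mime = content_type.split(";")[0].strip().lower()
    if mime in BLOCKED_TYPES:
        return False
    if url_path.lower().endswith(BLOCKED_EXTENSIONS):
        return False
    return True
```

Because the proxy sees both the request and the response, each decision can also be logged with the user, URL and verdict, which is the log detail that later supports incident investigation.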
Moving on to more complex solutions, intrusion detection systems (IDSs) and firewalls can provide some protection against known threats when configured appropriately. At the higher end, automated dynamic analysis of email and web content run in a sandbox can provide a high level of protection against malicious content. Typically, this can be purchased as a service, or as part of a package of network appliances for email and/or web content, but there will be ongoing costs to cover maintenance and updates to the analysis techniques as the threats change.
The strength of these solutions is monitoring potential malware as it executes in a virtual environment, to identify malicious activity and capture any attempts to connect back to the internet. Attackers have been developing techniques to avoid being detected in virtual environments, so there is an evolving arms race between attackers and suppliers.
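In outline, such a system detonates the content in a sandbox, records the behaviours it observes, and scores them to reach a verdict. A toy sketch of that last step, with hypothetical behaviour names, weights and threshold that no real product should be assumed to use:

```python
# Toy verdict logic for automated dynamic analysis. The sandbox is assumed
# to report a list of observed behaviours per sample; names and weights
# here are illustrative only.
WEIGHTS = {
    "outbound_connection_to_unknown_host": 40,  # attempted callback
    "registry_persistence": 30,
    "mass_file_encryption": 50,                 # ransomware-like activity
    "reads_config_file": 5,                     # benign on its own
}
THRESHOLD = 50

def verdict(observed_behaviours):
    """Score the observed behaviours and flag the sample if they cross the threshold."""
    score = sum(WEIGHTS.get(b, 0) for b in observed_behaviours)
    return ("malicious" if score >= THRESHOLD else "clean", score)
```

The evasion arms race mentioned above plays out in the first stage, not this one: malware that detects the virtual environment simply goes quiet, so no suspicious behaviours are ever recorded for the scoring step to act on.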
A key aspect in selecting technology is therefore the supplier’s ability to keep ahead of the malware developers. As a result, these solutions can be expensive, but they are very effective, particularly against zero-day exploits.
A combination of the techniques outlined above, to suit the specific requirements of your organisation, should help to provide a safe working environment without putting pressure on users as the first line of defence.