Technological Measures
In this article we are going to take a look at some of the technical measures that can be put in place to help secure a network.
If you think back to our lecture on the Three Pillars of Cyber Security, you will remember that Technology is one of the Pillars.
It is important to remember that technological methods are not a "Cure" for all security issues, but instead can assist the security process. Threats are constantly evolving, and the best antivirus in the world cannot protect us against threats it doesn't yet know about. Similarly, firewalls follow a set of rules, which can be circumvented.
However, that is not to say that you shouldn't use these tools. They are an important part of a "Defence in depth", and if properly configured and managed can make your systems a hard target for attackers.
Firewalls
Firewalls are expected to be part of the security of every computer system and network.
A firewall monitors the incoming and/or outgoing traffic and, based on a predefined set of rules, decides whether to block a particular packet or let it through.
In its simplest implementation a firewall will have a set of rules determining whether particular traffic should be allowed through, based on criteria like protocol, port and source/destination IP address.
A common use for a firewall is to limit access between a trusted internal network and an untrusted external network. This allows us to make the services needed to perform business roles available on the internal network, without exposing them to possible attack.
Example
For example, consider a company's web hosting infrastructure. A firewall could be configured to only allow connections to web related services (such as HTTP / HTTPS) and block connections on every other port.
This means that other services that may be running on the system (such as a database) are only available locally. This can avoid problems with mis-configuration issues making these services visible to attackers.
Firewall Rules
The majority of firewalls will also have relatively strict rules blocking untrusted incoming traffic and relatively relaxed rules for outgoing traffic.
This is logical: our audit process will let us know what services are available on the trusted network, and we can make the decision to block or allow access to them, thus keeping our own infrastructure secure.
However, we have no way of knowing what services a user may need to connect to (for example, different VOIP clients use different ports). This means we have to either:

- Block ALL traffic, except for services we actively allow (an Allow-List)

  This gives us the highest level of security, as we know exactly what outgoing connections can be made. However, we lose flexibility. If that important client uses a different system to the one we are expecting (which we usually find out at the last minute), then someone has to configure a new exception.

  (Note: You may know this approach as a whitelist; as usual I follow the NCSC's1 advice)

- Let all traffic through, except for services we actively deny (a Deny-List)

  This approach gives us the most flexibility, at the expense of security. However, as we are still protecting our internal systems, this is the trade-off that is usually chosen.
Note
Although this gives flexibility for users in the network, it also gives hackers some breathing space. If we have access to the internal network, it is possible to use techniques such as port forwarding to make services available outside of the firewall :p
Using such rule sets we can restrict the exposure/visibility of services available on a particular computer to an untrusted network, e.g. the Internet.
For example, suppose we have a computer server that is running a web server with a back-end database and is also acting as a file server for the local desktop machines.
What we would want in this case is to make sure that the web server is visible to the Internet, while the file server is visible only on the local network, and the database is configured to be visible only locally on the machine itself.
| Service | Rules |
|---|---|
| HTTP | Visible to All |
| FTP | Visible to Devs |
| Database | Visible Locally |
In addition to the configuration of these services, a firewall will also be part of the solution. We will need to prepare a set of firewall rules based on protocol, port and IP address.
- We can write one rule which allows all traffic from any network directed to the web server;
- a second rule allowing only traffic from the local network IP range to the file server;
- a third rule blocking all incoming external traffic to the database (or, as is more common, blocking traffic to everything else).
Note that in this example, we don't block any outgoing traffic.
| | Network | Port | Direction | Rule |
|---|---|---|---|---|
| 1 | All | 80 | Incoming | Allow |
| 2 | Developer | 21 | Incoming | Allow |
| 3 | All | Anything Else | Incoming | Deny |
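To make the rule set concrete, here is a minimal sketch in Python of how a first-match rule table like the one above might be evaluated. This is purely illustrative (real firewalls do this in the kernel or on dedicated hardware), and the IP ranges are made-up placeholders:

```python
from ipaddress import ip_address, ip_network

# Illustrative networks; the actual ranges depend on your infrastructure.
DEV_NET = ip_network("192.168.0.128/25")  # assumed developer subnet

# Rules are checked top to bottom; the first match wins.
# (network, port, action) - None means "match anything".
RULES = [
    (None, 80, "allow"),     # Rule 1: web traffic from anywhere
    (DEV_NET, 21, "allow"),  # Rule 2: FTP from developers only
    (None, None, "deny"),    # Rule 3: everything else is denied
]

def check_packet(src_ip: str, dst_port: int) -> str:
    """Return the action for an incoming packet; first match wins."""
    src = ip_address(src_ip)
    for network, port, action in RULES:
        if network is not None and src not in network:
            continue
        if port is not None and dst_port != port:
            continue
        return action
    return "deny"  # default deny if nothing matches

print(check_packet("203.0.113.7", 80))    # allow - web traffic
print(check_packet("203.0.113.7", 21))    # deny  - FTP from outside
print(check_packet("192.168.0.200", 21))  # allow - FTP from the dev subnet
```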
Using the firewall to detect services
Depending on the firewall's implementation we can use it to guess at the services running on a system.
Usually, as part of the TCP handshake a connection request to a port will return either:
- Open: a service is listening on the port and is ready for connection
- Closed: no service is listening on the port
However, if a firewall blocks communication to a port the TCP handshake may not complete (i.e. the client sends a SYN packet but receives nothing back).
In this case we can infer that something (probably a firewall) is filtering communication to that service.
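A minimal sketch of this inference using Python's standard socket library is shown below. The target address is a placeholder, and this is a crude illustration of the idea rather than a real scanner; only probe hosts you have permission to scan:

```python
import socket

def probe(host: str, port: int, timeout: float = 2.0) -> str:
    """Crudely classify a TCP port as open, closed, or filtered."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))   # attempt the full TCP handshake
        return "open"             # SYN/ACK received, connection made
    except ConnectionRefusedError:
        return "closed"           # RST received: nothing listening
    except socket.timeout:
        return "filtered?"        # no reply at all: likely a firewall
    finally:
        s.close()

# Placeholder target - point this at a host you are allowed to scan.
for port in (22, 80, 3306):
    print(port, probe("192.0.2.10", port))
```

This is essentially the logic behind the open/closed/filtered states reported by port scanners such as nmap.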
Firewall Summary
A firewall is a first step in the defence of our computer networks, and allows us to control the types of traffic that can be sent between devices in the network.
While a firewall can help stop external entities from accessing services, it shouldn't be relied on to provide complete protection, and we should bear in mind that they can always be bypassed.
Intrusion Detection Systems and SIEM
Intrusion Detection Systems (IDS) can help automate some elements of security, providing a means to audit and respond to events within our networks. While a firewall is there to block malicious attacks, an IDS attempts to detect unusual behaviour.
Note
I like to think of IDS as Antivirus for network traffic.
An IDS is a device, or piece of software that monitors a network for malicious activity. Any malicious activity is reported, and some systems are capable of responding automatically to events upon discovery, for example modifying firewall rules to block traffic that may be malicious.
There are two areas where IDS are typically used:
- Network Intrusion Detection Systems (NIDS): Analyse network traffic, by examining network packets for threats.
- Host-based Intrusion Detection Systems (HIDS): Monitor files on the operating system for changes or patterns of behaviour that may indicate compromise.
There are two main ways an NIDS can detect intrusion. An IDS may use a single approach or a combination of the two (a toy sketch of both follows the list).
- Signature Based

  Where the system detects threats by looking for specific patterns in network traffic, for example packet sequences sent by malware. This is much like a signature-based antivirus system, and has similar benefits and drawbacks.

  Comparing against a set of defined rules is relatively fast, and techniques like hashing can be used to reduce the time taken for lookups.

  There is no "initialisation time" that more complex systems may have: we check against a set of rules, rather than a model of the network's normal behaviour.

  However, a significant downside is that we can only respond to threats we know about. If the attack is not in the signature database, then the system will not detect it. This adds maintenance overhead, as we need to keep the rules database up to date to respond to new threats.
- Anomaly Based

  Driven by the increase in new types of malware, these systems are designed to adapt to unknown attacks. Anomaly based IDS use machine learning to build a picture of "normal" network behaviour. Traffic is then compared against this baseline to detect events.

  This approach has the benefit of being able to detect unknown attacks.

  However, the system first has to learn "what is normal", which can take some time to establish. Additionally, while the model is being built, there can be a high number of "False Positives", where legitimate behaviour that has not been seen before is detected and classified as malicious.
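To illustrate the difference, here is a toy sketch of both approaches. The signatures and baseline figures are invented for the example; a real NIDS matches patterns at line rate and uses far richer features than a single traffic average:

```python
import hashlib
import statistics

# --- Signature based: a (hypothetical) set of known-bad payload hashes ---
KNOWN_BAD_HASHES = {
    # Hashes of known malicious payloads, distributed as rule updates.
    hashlib.sha256(b"example-malware-payload").hexdigest(),
}

def signature_check(payload: bytes) -> bool:
    """Fast lookup: does this payload match a known signature?"""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

# --- Anomaly based: compare traffic against a learned baseline ---
# Assume we have already observed "normal" bytes-per-minute for a while.
baseline = [1200, 1350, 1100, 1250, 1400, 1300, 1150]  # training data

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def anomaly_check(bytes_this_minute: int, threshold: float = 3.0) -> bool:
    """Flag traffic more than `threshold` standard deviations from normal."""
    return abs(bytes_this_minute - mean) > threshold * stdev

print(signature_check(b"example-malware-payload"))  # True: known signature
print(anomaly_check(1280))    # False: within the normal range
print(anomaly_check(250000))  # True: wildly above the baseline
```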
Deep Packet Inspection.
We need some way of looking at network traffic for analysis.
Some systems will use the packet headers, which contain things like source, destination and protocol details to make this decision. This gives a quick and simple way of examining the types of traffic that are being sent, and it is possible to handle large amounts of data without requiring complex hardware.
However, it is possible to spoof header information to fool analysis, and this method gives no indication of a packet's contents.
You can't judge a book by its cover
While in many cases it is sufficient to catch well-known attacks, just examining headers can be likened to analysing the contents of a book just by reading its title. It's hard to know the true content without looking inside.
Deep packet inspection looks at both the header and body of a packet. This means that analysis is more thorough, and can detect malicious traffic with spoofed packet headers. By inspecting the contents of a packet it is possible to detect malware within data, or even detect and block a potential data breach if the data is flagged as PII. Another advantage of deep packet inspection is improving the efficiency of the network, by prioritising certain data types.
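As a rough illustration, the sketch below contrasts a header-only check with a payload check. The patterns are hypothetical examples; real DPI engines reassemble full streams and use large, curated rule sets:

```python
import re

# Hypothetical patterns a DPI engine might look for inside payloads.
SUSPICIOUS_PATTERNS = [
    re.compile(rb"(?i)union\s+select"),         # SQL injection attempt
    re.compile(rb"\d{2}-\d{2}-\d{2}\s+\d{8}"),  # crude sort code + account number
]

def inspect(header: dict, payload: bytes) -> str:
    # Header-only inspection: quick, but easy to fool with spoofed fields.
    if header.get("dst_port") not in (80, 443):
        return "blocked: unexpected port"
    # Deep packet inspection: look inside the payload as well.
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(payload):
            return "flagged: suspicious payload"
    return "allowed"

print(inspect({"dst_port": 80}, b"GET /index.html HTTP/1.1"))
print(inspect({"dst_port": 80}, b"id=1 UNION SELECT password FROM users"))
```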
However, deep packet inspection has the downside of requiring significantly more processing, and can affect the throughput of your network.
There are also privacy concerns. While the system may be automated, it is possible that personal information may be flagged by the system, which then becomes available to those managing the IDS. For example, if HR ask you to update your bank details and the packet gets flagged, then your bank details could be available to the IT team managing the IDS.
Important
Another technological drawback to deep packet inspection is working with encryption. Making sure that data transmitted over the network is encrypted is good practice. However, if the data is encrypted how does the IDS examine it?
There are workarounds (for example using the IDS as a mid-point in the encryption process) but these can again raise privacy concerns.
SIEM
Intrusion detection systems are often part of a wider Security Information and Event Management (SIEM) system, which will aggregate data from multiple sources (such as log files, network traffic information, and IDS alerts).
A SIEM will use multiple sources of information (such as system logs, IDS, and data from antivirus systems) to provide a comprehensive picture of the state of your infrastructure. This information is then analysed to establish relationships between the data collected and potential threats.
Example
Let's say your network monitoring logs show new outbound traffic on port 1337. Without extra context it is not possible to tell whether that traffic is legitimate or not; it could be someone using a web app, or it could be something else. (Although looking at the port, I would block it.)
Let's now say your web server has also detected that, after a couple of failed attempts, there has been a user login shortly before the new traffic started. Again, on its own there is nothing to indicate this could be a problem; people forget or mistype their passwords all the time, so this could be normal behaviour.
Finally, process monitoring shows a new process starting on the web server. Again, this kind of stuff happens all the time. It could be an admin starting a text editor to make some changes...
However, these events could also be related. The combination of failed logins, a new process and new network traffic suggests that something more interesting is happening, and should be investigated.
By centralising this data a SIEM allows us to see the correlations between the events, and generate alerts when it detects potential security issues. These alerts can be generated through a set of rules, and/or through machine learning.
To avoid overloading your security analysts, a SIEM will also prioritise potential issues. 5 failed login attempts in 5 minutes could be flagged as suspicious, but low priority, as it is probably someone who has forgotten their password. A couple of thousand failed logins in 5 minutes would be flagged as high priority, as it is likely a brute force attack.
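A heavily simplified sketch of this kind of correlation rule is shown below. The event records and the rule itself are invented for this example; real SIEMs correlate across far more sources and rule types:

```python
from datetime import datetime, timedelta

# Hypothetical, simplified event records as a SIEM might aggregate them.
events = [
    {"time": datetime(2024, 1, 10, 9, 0), "source": "webserver", "type": "failed_login"},
    {"time": datetime(2024, 1, 10, 9, 1), "source": "webserver", "type": "failed_login"},
    {"time": datetime(2024, 1, 10, 9, 2), "source": "webserver", "type": "login"},
    {"time": datetime(2024, 1, 10, 9, 3), "source": "webserver", "type": "new_process"},
    {"time": datetime(2024, 1, 10, 9, 4), "source": "netmon", "type": "new_outbound"},
]

# Correlation rule: failed logins, then a login, a new process and new
# outbound traffic, all in a short window, earns a high-priority alert.
WINDOW = timedelta(minutes=10)
PATTERN = {"failed_login", "login", "new_process", "new_outbound"}

def correlate(events):
    for anchor in events:
        window = [e for e in events
                  if timedelta(0) <= e["time"] - anchor["time"] <= WINDOW]
        if PATTERN <= {e["type"] for e in window}:
            return ("HIGH", window)
    return ("LOW", [])

priority, related = correlate(events)
print(priority, [e["type"] for e in related])
# HIGH ['failed_login', 'failed_login', 'login', 'new_process', 'new_outbound']
```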
SIEM makes it easier to manage security as it can filter and classify huge amounts of data, meaning we can detect attacks that may otherwise go unnoticed.
A major benefit of SIEM is in the analysis and audit stages. After an attack it can help recreate the timeline, and provide an understanding of what occurred and when. This helps with the audit process, and will help meet obligations under GDPR.
However, SIEM can have some downsides. If AI is used, it can take a long time for the picture of normal network behaviour to be built.
IDS Summary
IDS give us another tool to help secure our networks and devices, by monitoring services for malicious behaviour and responding to it.
IDS (and SIEM) can play a part in preventing attacks, and are a fantastic tool for incident response, as they can provide a clear audit trail of the behaviour that has led to a compromise.
As with all technological measures, they are not a "Cure" for hacking. It is possible to work around an IDS, by disguising traffic as legitimate, or working slowly to avoid triggering any filters.
Antivirus
Another technological measure that helps improve the security of your systems and network is Anti Virus (AV). This protects your network or systems against malicious software.
Like all of the other technical approaches to security, you should not rely solely on antivirus for protection. Not only is it not guaranteed to detect all attacks (or there would be no such thing as malware), but it is possible (though getting harder) to bypass it. Instead, use AV to add another layer to the defence in depth.
Effectiveness of AV
The rate of detection for AV software is a complex issue, with quoted average detection rates falling somewhere between the high 90s and around 25%. (Part of this is methodology; the lower detection rates tend to include a higher number of zero-days.)[^decreasingAV]
The reasons for this are complex, but include a shift in focus among malware developers from "Amateur" to "Professional". The trend in ransomware shows how monetisation of malware has become more common.
Additionally, malware developers are turning the tools designed for defence against us. Sites like Virus Total allow you to check suspicious files against 70 AV systems. While this is great for personal security, it also gives developers an easy way to check the effectiveness of their malware at avoiding detection.
Methods of Detection.
Traditional AV works using a signature based approach. Byte sequences of code within the malware are used to identify a potential threat. As with all signature based approaches, this has the drawback of only being able to detect known threats. Additionally, malware developers have developed techniques to bypass signature detection, such as changing small parts of the code to modify the byte sequence, or encrypting the malware.
Heuristic based detection improves on signatures by attempting to detect "Families" of malware, through a generic signature or a close match to an existing signature. This means that different versions of a virus family can be detected. This allows the AV to detect malware where the byte sequence has been modified, but still provides no protection against obfuscation or encryption.
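The sketch below illustrates the difference between an exact byte-sequence signature and a crude "generic" family signature with a wildcard gap. The signatures here are invented for the example; real AV databases hold millions of far more sophisticated rules:

```python
import re

# Hypothetical exact byte-sequence signature for one specific sample.
EXACT_SIGNATURES = {
    "Example.Trojan.A": bytes.fromhex("deadbeef41414141"),
}

# A crude "generic" signature for a family: fixed bytes with a wildcard
# gap, so small modifications between variants still match.
FAMILY_SIGNATURES = {
    "Example.Trojan.Gen": re.compile(rb"\xde\xad\xbe\xef.{0,16}\x41\x41",
                                     re.DOTALL),
}

def scan(data: bytes) -> list[str]:
    hits = [name for name, sig in EXACT_SIGNATURES.items() if sig in data]
    hits += [name for name, rx in FAMILY_SIGNATURES.items() if rx.search(data)]
    return hits

sample_a = b"junk" + bytes.fromhex("deadbeef41414141") + b"junk"
sample_b = b"junk" + bytes.fromhex("deadbeef0000004141") + b"junk"  # variant

print(scan(sample_a))  # both the exact and the generic signature match
print(scan(sample_b))  # only the generic "family" signature matches
```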
Scanning for Malware
As well as scheduled scans, or scanning files for malware when they are downloaded, real-time protection is now commonly used (for example Windows Defender). When a program executes, it is scanned for malware using one of the techniques described above. Additionally, the behaviour of a program can be monitored, for example looking for new outgoing network connections that may indicate a backdoor.
Aside from meaning that we don't have to rely on regular scanning for malware, this also defeats a lot of the techniques used to disguise malware.
Example
A common way of bypassing AV is to encrypt the "evil" portion of the code. When the program executes, the malicious code is decrypted and the payload launched.
The encryption process means it is unlikely to be detected by a scanning method, as the signature is hidden. However, with real-time protection, the decrypted code is available to the scanner, and the malware can be detected.
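Here is a toy illustration of why this works, using a harmless stand-in for a malware signature and simple XOR in place of real encryption:

```python
SIGNATURE = b"EVIL_PAYLOAD"  # stand-in for a real malware byte signature

def xor(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

# The attacker ships the payload XOR-"encrypted" on disk.
on_disk = xor(SIGNATURE, key=0x5A)

# Static (on-disk) scan: the signature is hidden by the encoding.
print(SIGNATURE in on_disk)    # False - the scan misses it

# At run time the program must decode the payload to execute it.
in_memory = xor(on_disk, key=0x5A)

# Real-time protection scans the decoded bytes in memory.
print(SIGNATURE in in_memory)  # True - now it is detected
```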
Problems with AV.
While AV is an important part of protecting our systems, it also has some downsides.
- They can provide the user with a false sense of security, meaning they are more likely to "click the link", thinking the AV will save them.
- False positives: where the AV software identifies a non-malicious file as malware. Depending on the way the AV is configured, this can cause system-breaking issues: for example, if the AV is configured to delete any infected files, and these files are critical to the operation of the system. One amusing example of this is Sophos detecting its own update process as malware3.
- Performance issues: running live scans can affect the performance of a system, due to the extra overhead of the scanning process.
Note
A really interesting exploit was CVE-2018-0986, where a flaw in Windows Defender's real-time scanning made it vulnerable to remote code execution4.
I think it's one of those great hacking stories, where a small code flaw could have had serious consequences.
Summary
In this article we have looked at common technical measures that can be put in place to protect your systems. While none of these measures are 100% effective by themselves, they are an essential part of your defence in depth against attacks.
In the lab session, we will look at how these devices operate in practice.
#technicalMeasures
Do you have any views on the technical approaches to security? Do you think there are other appropriate technical measures we could use?
Discuss your thoughts in the feed with the tag #technicalMeasures