Establishing the Zero-Trust Cybersecurity Framework

With the network perimeter disappearing and data either at rest or in flight, inside the data centre or on a user’s mobile device, companies are finding it increasingly difficult to secure their critical assets. Cyber attacks are not only more sophisticated - harder to detect and mitigate - they have also evolved from standalone incidents into organised, thorough campaigns. Attack-as-a-service offerings, complete with post-sales support and a money-back guarantee of success, present a daunting prospect for organisations preparing to defend against them. The life of a CISO has never been as challenging as it is now.

Is a best-in-breed, next-generation security solution the answer to this challenge?

Gartner research suggests that, through 2020, 99% of firewall breaches will be caused by simple firewall misconfigurations, not flaws. There is an overriding need not only for a framework to implement best-in-class security solutions that detect and mitigate attacks and breaches more quickly, but also for the right expertise to design, implement, test and periodically gauge the efficacy of such a solution. What would you choose: remediation after a security breach has taken place, or continual inspection to ascertain the resilience of your company’s cyber defence? Let's take a closer look at these aspects to help you find the answers.

Establishing the Zero Digital Trust Framework

The principle of 'Zero Digital Trust' is one of the most influential security frameworks of recent times. Its crux lies in simplicity: a default deny for all flows and the concept of minimal access. Here is what it takes to realise 'Zero Digital Trust' effectively in your ecosystem.
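
As a rough illustration, here is what a default-deny, least-privilege policy check might look like in Python; the rule fields and example groups are invented for illustration rather than taken from any particular product:

```python
# Minimal sketch of a default-deny, least-privilege policy check.
# The rule fields (user_group, application, resource) and sample rules
# are assumptions for illustration, not any vendor's schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    user_group: str   # who needs access
    application: str  # using what
    resource: str     # to what resource

ALLOW_RULES = [
    Rule("finance-users", "sap-gui", "erp.internal"),
    Rule("dba-team", "ssh", "db-cluster.internal"),
]

def is_allowed(user_group: str, application: str, resource: str) -> bool:
    """Default deny: a flow passes only if an explicit rule matches."""
    return Rule(user_group, application, resource) in ALLOW_RULES

print(is_allowed("finance-users", "sap-gui", "erp.internal"))      # True
print(is_allowed("finance-users", "rdp", "db-cluster.internal"))   # False: no rule, so deny
```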

Complete Visibility - across the perimeter, network, endpoint, data centre, cloud and SaaS environments

More than 60% of the traffic in companies of all sizes is web-based and encrypted. That same traffic also carries anomalies of varied kinds - malware, ransomware, trojans, spyware and so on.

Implementing SSL inspection on outbound internet traffic (from users to the Internet) and on the inbound direction for servers hosting critical application services is the key to gaining full control over application flows, because it gives a direct view of the application traffic beneath the encrypted layer.
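
A minimal sketch of that decision logic, assuming hypothetical zone names and a hypothetical list of critical servers:

```python
# Hedged sketch of a TLS decryption policy decision: decrypt outbound user
# traffic and inbound traffic to critical servers, leave other flows alone.
# Zone names and CRITICAL_SERVERS are assumptions, not a product configuration.
CRITICAL_SERVERS = {"payments.example.internal", "hr-portal.example.internal"}

def should_decrypt(src_zone: str, dst_zone: str, dst_host: str) -> bool:
    outbound_user_traffic = src_zone == "users" and dst_zone == "internet"
    inbound_to_critical = dst_zone == "dmz" and dst_host in CRITICAL_SERVERS
    return outbound_user_traffic or inbound_to_critical

print(should_decrypt("users", "internet", "news.example.com"))         # True
print(should_decrypt("internet", "dmz", "payments.example.internal"))  # True
print(should_decrypt("branch", "datacentre", "file-share.internal"))   # False
```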

Reduce the Attack Surface - it has the widest scope and warrants strict security discipline

Reduce the attack surface with an affirmative security model: who needs access, to what resource, how, and using what? Block all known malicious activity. Disabling IPS signatures to gain firewall performance may prove costly further down the line. Avoid implicit trust between hosts in a TRUST or DMZ zone; just because a logical zone is named TRUST does not mean every host in it is trustworthy. One compromised host can infect the rest of the network in no time.

Internal segmentation - without network segmentation, lateral movement is straightforward for anomalies: unhindered and undetected, both for attackers with remote access to your network and for automated attacks (e.g. NotPetya, BadRabbit). Combine firewall policy enforcement with identity-based firewalling, Multi-Factor Authentication, File Blocking and URL categorisation using a cloud-based engine - all of these contribute to effectively reducing the attack surface. Automation is the only viable way to keep up with the volume and complexity of attacks and to overcome the skills shortage.
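
As a small illustration of the segmentation idea, the sketch below denies east-west traffic by default and only allows explicitly listed segment pairs; the segment names are hypothetical:

```python
# Illustrative sketch of internal segmentation: even inside the TRUST zone,
# host-to-host (east-west) traffic is denied unless a segment pair is
# explicitly allowed. Segment names are invented for this example.
ALLOWED_SEGMENT_PAIRS = {
    ("workstations", "print-servers"),
    ("workstations", "file-servers"),
}

def lateral_traffic_allowed(src_segment: str, dst_segment: str) -> bool:
    """Default deny for east-west traffic; only listed pairs may communicate."""
    return (src_segment, dst_segment) in ALLOWED_SEGMENT_PAIRS

print(lateral_traffic_allowed("workstations", "file-servers"))  # True
print(lateral_traffic_allowed("workstations", "workstations"))  # False: peers cannot reach each other
```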

Extensive logging, not only blocking. All logs need to be retained: this is crucial for incident response, threat hunting, threat intelligence, machine learning and other activities. Data is widely considered one of the most valuable assets, and large amounts of it are required to build machine learning and artificial intelligence models that detect malicious activity. Storing and analysing data in-house is not enough on its own; with large amounts of high-quality data and machine learning applied to behavioural analysis, we can keep up with our adversaries.
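
A toy sketch of what behavioural analysis over retained logs can look like: flag hosts whose latest outbound volume deviates sharply from their own baseline. The records and threshold are invented for illustration:

```python
# Toy behavioural-analysis sketch over stored logs: flag hosts whose latest
# outbound byte volume deviates strongly from their own history.
from statistics import mean, stdev

def flag_anomalous_hosts(daily_bytes_by_host: dict, z_threshold: float = 3.0) -> list:
    flagged = []
    for host, history in daily_bytes_by_host.items():
        if len(history) < 3:
            continue  # not enough history to build a baseline
        baseline, spread = mean(history[:-1]), stdev(history[:-1])
        if spread and (history[-1] - baseline) / spread > z_threshold:
            flagged.append(host)
    return flagged

logs = {
    "ws-042": [1_200, 1_150, 1_300, 1_250, 48_000],  # sudden exfiltration-like spike
    "ws-017": [900, 950, 1_000, 980, 1_020],          # stable behaviour
}
print(flag_anomalous_hosts(logs))  # ['ws-042']
```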

Prevent Known Attacks

Almost all next-generation security vendors maintain a huge, shared pool of threat intelligence. It includes hashes and samples from attacks seen in the wild and reported by other customers. This equips the vendors to consolidate their defences and create timely signatures for detecting and blocking such anomalies soon after ‘patient zero’ is reported.

Seamless connectivity of all the sensors in your ecosystem - network firewalls, endpoint protection clients, SaaS and cloud security sensors - to the signature distribution points is the key to detecting and responding to these known anomalies. Attackers often target known vulnerabilities in endpoint systems, so having a detailed posture report from the user's machine is absolutely imperative. It is an essential practice to allow access to critical assets only for users whose endpoints meet the organisation's security guidelines. Of course, there needs to be a strict IT security framework and cyber security awareness in the first place!
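
A minimal sketch of gating access on endpoint posture, assuming a hypothetical set of posture attributes and required baseline:

```python
# Minimal sketch of gating access to critical assets on endpoint posture.
# The posture attributes and required baseline are assumptions; real host
# information profiles are far richer.
REQUIRED_POSTURE = {"disk_encrypted": True, "edr_running": True, "os_patch_level_ok": True}

def endpoint_meets_baseline(posture: dict) -> bool:
    return all(posture.get(check) == expected for check, expected in REQUIRED_POSTURE.items())

def grant_access(user: str, posture: dict) -> str:
    if endpoint_meets_baseline(posture):
        return f"allow {user}: posture compliant"
    return f"deny {user}: endpoint fails security baseline"

print(grant_access("alice", {"disk_encrypted": True, "edr_running": True, "os_patch_level_ok": True}))
print(grant_access("bob", {"disk_encrypted": True, "edr_running": False, "os_patch_level_ok": True}))
```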

Prevent Unknown Attacks

Preventing unknown attacks means detecting, mitigating and distributing zero-day protection for completely new anomalies - malware and other forms of threats. Sandbox environments in different form factors - on-premise, cloud-based or hosted as a SaaS service - provide a solution to this challenging problem by tackling zero-day exploits.

Advanced persistent threats warrant a behavioural detection methodology: instead of identifying malware based on what it is (signature-based), behavioural malware detection relies on what the malware is prone to do. Sandboxes execute these anomalous programs, observe their behaviour and then analyse it in an automated fashion.
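
To make the distinction concrete, the sketch below scores a sample on what it was observed doing during detonation rather than on a hash match; the indicator names, weights and threshold are purely illustrative:

```python
# Hedged sketch of behaviour-based scoring: weigh what the sample *did*
# during detonation instead of matching a file hash. Indicators and weights
# are invented for this example, not a real sandbox's rule set.
INDICATOR_WEIGHTS = {
    "disables_security_services": 40,
    "encrypts_user_documents": 50,
    "contacts_known_c2_domain": 35,
    "persists_via_registry_run_key": 20,
    "reads_browser_credential_store": 25,
}

def behavioural_verdict(observed_behaviours: set, malicious_threshold: int = 60) -> str:
    score = sum(INDICATOR_WEIGHTS.get(b, 0) for b in observed_behaviours)
    return "malicious" if score >= malicious_threshold else "benign"

print(behavioural_verdict({"encrypts_user_documents", "disables_security_services"}))  # malicious
print(behavioural_verdict({"persists_via_registry_run_key"}))                          # benign
```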

Sandboxes and machine learning-powered models

The sandbox blocks malicious samples based on their behaviour before they run on the endpoints. These days most sandboxes include a virtual execution environment for in-depth inspection of running samples, using multiple detection methods including behaviour-based detection, in-memory introspection and extrapolation models powered by machine learning.

This approach is more conclusive and efficient than comparing file signatures: because sandboxing closely inspects what a file actually does, it is far better at ascertaining whether the file is malicious than signature-based detection. However, well-crafted malware is often resistant to detonation in a virtual sandbox environment.

Hence a sandboxing environment is required that also uses dynamic behavioural analysis and dynamic unpacking to detonate suspect files in a variety of software and bare-metal environments. Once the malware is detected, a signature is created and distributed to all registered points in the customer’s ecosystem - firewalls and endpoint systems.
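
A rough sketch of that last step, closing the loop from a malicious verdict to signature distribution; the sensor registry and the hash-based signature are assumptions for illustration:

```python
# Illustrative sketch: once a sandbox verdict is malicious, push a hash-based
# signature to every registered enforcement point (firewalls, endpoint agents).
# The sensor registry is hypothetical.
import hashlib

REGISTERED_SENSORS = ["fw-edge-01", "fw-dc-02", "endpoint-mgmt"]

def distribute_signature(sample_bytes: bytes, verdict: str) -> list:
    if verdict != "malicious":
        return []
    signature = hashlib.sha256(sample_bytes).hexdigest()
    # In practice this would be an API call to each sensor's management plane.
    return [f"pushed {signature[:12]}... to {sensor}" for sensor in REGISTERED_SENSORS]

for line in distribute_signature(b"suspicious payload", "malicious"):
    print(line)
```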

Choosing next-generation cybersecurity products

Buying best-of-breed, off-the-shelf, standalone security products might prove counterproductive, contrary to expectation. The focus should instead be directed towards a solution-based approach in which multiple security sensors across the network, endpoint, cloud and SaaS environments, along with SIEM and machine learning, are stitched together to build a complete security posture for your organisation.
