The IT ecosystem has undergone a transformation: first, the move from on-premises, hardware-based systems to distributed, software-based systems in the cloud; then the shift from a single cloud to multiple cloud service providers – along with other far-reaching changes like software-defined everything and distributed analytics.
How do these disruptive changes in the IT environment affect the architecture of next-generation SOC services? And more broadly, what impact do they have on cyber security operations? Let's look at five trends in IT and how they are shaping the future of cyber security.

1. Traditional SIEMs vs. Native Cloud Security Analytics
The current shift away from traditional SIEMs has a broad range of consequences for cyber security services. Let's start with the good news: the move to a native cloud SIEM solution offers easier onboarding, faster deployment, and lower operational costs – and it can improve the efficiency of security operations in several important ways. For example, enterprises no longer need to update versions on-premises, and resources are elastic, with a greater ability to scale out or up and with increased availability.
From the perspective of security analytics, native cloud SIEM solutions can aggregate data from all sources – users, applications, servers, and devices running on-premises or in any cloud – allowing the review of millions of records in a few seconds. They also provide added value: AI-enhanced automation for playbook management, a more intuitive user experience, and query languages that allow teams to create rules for hunting, analytics, dashboards, and reporting – offering new insights to internal and multi-layer customer stakeholders.
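The rule-building workflow those query languages support follows a filter-aggregate-threshold pattern. A minimal imperative sketch of that pattern (the field names and records are hypothetical, and this is not any vendor's query syntax):

```python
from collections import Counter

# Hypothetical parsed log records; a SIEM query language would express the
# same filter-aggregate-threshold logic declaratively over live data.
records = [
    {"src_ip": "203.0.113.9", "event": "login_failed"},
    {"src_ip": "203.0.113.9", "event": "login_failed"},
    {"src_ip": "203.0.113.9", "event": "login_failed"},
    {"src_ip": "198.51.100.4", "event": "login_ok"},
]

def hunt_brute_force(records, threshold=3):
    """A hunting 'rule': flag source IPs with >= threshold failed logins."""
    failures = Counter(r["src_ip"] for r in records
                       if r["event"] == "login_failed")
    return {ip for ip, n in failures.items() if n >= threshold}

print(hunt_brute_force(records))  # {'203.0.113.9'}
```

The same shape – filter events, group by an entity, alert past a threshold – underlies most detection rules, whatever language expresses them.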
In terms of cyber security operations, the downside lies in the system's greater complexity and the potential for new cloud-security vulnerabilities. The inherently distributed nature of cloud-based IT ecosystems means that traditional tools, such as firewalls, are no longer the answer.

2. Single Cloud vs. Multi-Cloud
As if the shift to the cloud weren't complicated enough, enterprise topologies have also shifted from a single cloud to multiple CSPs (cloud service providers). This additional layer of complexity raises further security challenges, requiring teams to work with multiple organizations and domains. Issues include:
- Interfacing with numerous providers: Each cloud provider has its own proprietary, internal SIEM and its own means of monitoring data, collecting logs, creating alerts, and managing incidents.
- Data structure: The question is whether to treat the multiple clouds in a distributed or centralized fashion – i.e., invest time and resources in bringing all of the data to a single location (where that's possible), or keep the data distributed and overcome the separation through orchestration and automation as well as custom workflows.
- Technological maturity: Cloud technologies are less mature than legacy technologies. Little is standardized, and this opens up new security challenges.
- Skill set: Moving to the cloud requires building a new network from scratch. So when an IT team doesn't have experience building a network in the cloud, a whole new slew of (potentially avoidable) security issues surfaces for the organization.
With all of these complexities, how do we maintain a strong security posture? As Ben Canner points out in Information Security Solutions Review, reducing the mean time to detect (MTTD) and mean time to respond (MTTR) to cyber security threats requires the advanced use of AI techniques such as machine learning and user and entity behavior analytics (UEBA), which make it possible to sift through gigabytes of enterprise data in real time.

3. Centralized vs. Distributed Analytics
The move to distributed architectures raises additional questions around analytics.
When working with multi-CSPs, a key issue is where to store the vast quantities of data and how to efficiently transfer it between systems, when needed. Organizations need to choose whether effective data analysis requires maintaining a centralized repository of data for analytics, or whether to spread workloads across clustered systems.
On the one hand, if data is distributed across multiple locations, analysis is more complex. Correlation is harder, because logs are stored at each location rather than in a single log store that covers all data sources.
Sometimes it is not possible to move all data to a single place for analysis. Sometimes there are issues of regulation; for example, if the data is in the U.S., you can’t always export it to Europe. Sometimes the issue is cost, or perhaps there is simply too much data to store in one location.
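When the data cannot be centralized, one way to overcome the separation is a federated query: ship the same question to each location and merge only the matches, so raw logs never leave their region. A minimal sketch, with hypothetical in-memory stores standing in for each provider's log API:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-region log stores; in practice these would be API clients
# for each cloud provider's logging service.
REGION_LOGS = {
    "us-east": [{"user": "alice", "event": "login_failed"},
                {"user": "bob", "event": "login_ok"}],
    "eu-west": [{"user": "alice", "event": "login_failed"}],
}

def query_region(region, predicate):
    """Run the filter where the data lives; only matches leave the region."""
    return [rec | {"region": region}
            for rec in REGION_LOGS[region] if predicate(rec)]

def federated_query(predicate):
    """Fan the same query out to every region in parallel, then merge."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda r: query_region(r, predicate), REGION_LOGS)
    return [rec for region_hits in results for rec in region_hits]

hits = federated_query(lambda rec: rec["event"] == "login_failed")
print(len(hits))  # 2 failed logins, across two regions
```

The trade-off is the one described above: the merge step must re-create the cross-region correlation that a single central store would give for free.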
On the other hand, distributed analytics does have advantages. As Chas Clawson points out in InfoSecurity, distributed correlation lets organizations scale out a SIEM in a number of different ways, allowing it to meet the most complicated use cases. Distributed correlation offers the opportunity to throw significantly more data at the correlation engine. And distributed systems offer improved fault tolerance and availability that increase as the cluster grows.

4. SDE (Software-Defined Everything)
Software-Defined Everything is an umbrella term that refers to many things: software-defined networking (SDN), software-defined computing, software-defined data centers (SDDC), software-defined storage (SDS), and software-defined storage networks.
SDE is another way of saying that infrastructure is virtual, delivered as a service, and automated by intelligent software. And while SDE is a huge cost saver for enterprises, it involves a certain loss of control, which presents additional security challenges.
Back when IT worked with legacy hardware systems, you'd just plug something in – a physical piece of hardware – or take something out. But when an IT team downloads software, it must use whatever it's given "as is". From a security perspective, this opens up a range of new challenges.

5. Cyber Security Response
Security operations – including threat hunting, compliance reports, behavioral analytics, log & event management, data correlation, aggregation and analysis – are infinitely harder to implement when working with the multiple technologies and distributed architectures rolling out today.
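Behavioral analytics, one of the operations listed above, boils down to building a per-entity baseline and flagging sharp deviations from it. A minimal sketch of that idea – the users, counts, and threshold are hypothetical, and real UEBA products model far richer features:

```python
from statistics import mean, stdev

def flag_anomalies(baseline, today, z_threshold=3.0):
    """Flag users whose activity today deviates sharply from their baseline.

    baseline: dict of user -> list of past daily event counts
    today: dict of user -> today's event count
    Returns users whose count exceeds mean + z_threshold * stdev.
    """
    flagged = set()
    for user, history in baseline.items():
        if len(history) < 2:
            continue  # not enough history to build a baseline
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1.0  # avoid division by zero for constant histories
        if (today.get(user, 0) - mu) / sigma > z_threshold:
            flagged.add(user)
    return flagged

# Hypothetical example: 'mallory' suddenly generates 40x her usual activity.
baseline = {"alice": [10, 12, 11, 9], "mallory": [2, 3, 2, 3]}
today = {"alice": 11, "mallory": 120}
print(flag_anomalies(baseline, today))  # {'mallory'}
```

Doing even this simple computation well is what becomes hard at multi-cloud scale: the per-entity history has to be assembled across every location where that entity leaves traces.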
But as Nir Zuk, Founder and Chief Technology Officer of Palo Alto Networks, and ethical hacker Keren Elazari pointed out in a presentation at Ignite 2018, the real problem we face with cyber attackers today is that they are not human. They are automated adversaries leveraging a variety of tools, including AI, to attack other computers – without human intervention.
Attackers are evolving – going big and moving upstream, using software update mechanisms to subvert systems, and exploiting all types of old and new OS (operating system) vulnerabilities. They also use fileless attacks, and they abuse legitimate tools like Shodan, Mimikatz, and Metasploit, along with other administration tools. And they utilize attack automation at scale – something that hasn't been seen before, and that is going to be our future.
In this reality, the cutting-edge orchestration and automation capabilities, as well as the expert human workflows, offered by managed security service providers (MSSPs) provide a way forward – allowing enterprises to maintain a healthy security posture despite the unrelenting changes and challenges.
For an in-depth look at new tactics being used to contribute to cyber security, download our whitepaper, Why Virtual HUMINT is Vital to Effective Threat Intelligence.