Many things in the world of cyber security and IT sound simple in concept but can be difficult to implement. Encrypt data to protect it from prying eyes. Trust, but verify. Use random salt values with a one-way hash function to enhance the security of stored password values.

OK, so maybe not everything even sounds simple, but when considering security models to embrace, one simple question suffices: does it enhance security?

Let's apply that question to two cyber security concepts: "Trust, but Verify," currently a common approach to enterprise cyber security, and "Zero Trust," which is growing in popularity.

Trust, but Verify

The phrase “Trust, but verify” entered popular American usage in the mid-1980s, when Ronald Reagan used it often during treaty negotiations with the Soviet Union. It is a Russian proverb reminding us that procedures are needed to ensure both sides are complying with the stated terms – such as those in a detailed treaty.

The phrase has since been extended into the cyber domain, where it often refers to cross-network domain access in which trusted credentials, as presented, are assumed to be accurate. In these situations, certain networks are trusted while others are not. Attackers who compromise an internal trusted network and obtain admin credentials can then ‘pivot’ to other networks and present user account information that is trusted by design.

Zero Trust Model

Seeking a more robust security framework, Forrester Research created an alternative approach called Zero Trust. (Research paper available here.) In this model, all network traffic is untrusted.

Major benefits of this approach include:

  • Zero Trust can be applied to all organizations and industries
  • Zero Trust is vendor neutral
  • Zero Trust is scalable

The intent is to ensure all resources are accessed securely. This starts with adopting a least privilege strategy and strictly enforcing access control.
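As a minimal sketch of how strictly enforced, least-privilege access control can work in practice, the check below is deny-by-default: a request is permitted only if the role's whitelist explicitly includes the resource. The role and resource names are hypothetical, chosen only for illustration.

```python
# Deny-by-default access check illustrating the least-privilege principle.
# Roles and resources are hypothetical, not a production ACL system.

ROLE_PERMISSIONS = {
    "marketing": {"crm", "web_analytics"},
    "payroll_clerk": {"payroll_records"},
    "it_admin": {"crm", "web_analytics", "payroll_records", "server_config"},
}

def is_allowed(role: str, resource: str) -> bool:
    """Permit access only when the role's whitelist names the resource."""
    return resource in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("marketing", "payroll_records"))   # False: not whitelisted
print(is_allowed("payroll_clerk", "payroll_records"))  # True
```

Note that an unknown role yields an empty permission set, so anything not explicitly granted is refused.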

The operational impact is that all traffic gets logged and inspected, which creates a substantial increase in data volume.

Moving to a Zero Trust model is a change in approach and requires a well-planned transition. The trusted status of each network domain needs to be removed, and data logging needs to be implemented.

Relying on strong network perimeter defense to keep attackers out is now considered by most cyber security professionals to be fundamentally flawed: once appropriate credentials are provided, identities are asserted and privileges granted without any additional account verification. Instead, the principle of ‘defense in depth’ puts policy restrictions and tools in place to prevent an intruder from compromising valid account credentials and impersonating a trusted insider.

In this way, cyber security is practiced and enabled throughout the enterprise IT domain, not simply at boundary locations such as border routers and firewalls. New virtual machine deployment capabilities allow a firewall to be constructed at the same time a new VM instance is created, ensuring adequate security is wrapped around every VM, no matter which network domain it operates within.

Fundamentals

  • All resources are accessed securely regardless of location within the enterprise
  • A "least privilege" strategy is implemented
  • Access control is strictly enforced
  • All network traffic is logged and inspected

Least privilege is a core cyber security principle: enable only those privileges a user needs for their specific role. It often runs into resistance in operations because of the difficulty of deciding which restrictions are reasonable for a given role.

It may be obvious that someone from the marketing department should not be able to access payroll records, but where does the line get drawn on IT tasks such as installing apps on a local computer? Even for legitimate purposes, restrictions can slow business operations and degrade workflow efficiencies. Hence, policies can often be in a state of flux as the boundary conditions get adjusted.

Most often, a normal user will use their ‘standard’ credentials for the bulk of their tasks and switch to a privileged, password-protected (‘super user’) account only when the work warrants it. Issues arise when users extend their privileges and perform tasks inconsistent with their designed role.

Implementing a least-privilege strategy means fewer user accounts performing protected activities, which should enhance system security and enable more robust monitoring of those admin tasks that do require super-user credentials.
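One way to make privileged activity easier to monitor, sketched below with hypothetical account names, is to funnel every super-user action through a single entry point that logs each attempt, allowed or denied, before anything executes.

```python
# Sketch of an audited privileged-action gate (hypothetical accounts/actions).
# Every attempt is logged, so privileged activity has a single audit trail.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")

PRIVILEGED_USERS = {"admin_jane"}  # hypothetical approved super-user accounts

def run_privileged(user: str, action: str) -> str:
    """Run a privileged action only for approved accounts, logging every attempt."""
    allowed = user in PRIVILEGED_USERS
    logging.info("%s user=%s action=%s allowed=%s",
                 datetime.now(timezone.utc).isoformat(), user, action, allowed)
    if not allowed:
        raise PermissionError(f"{user} may not perform {action}")
    # ... the actual privileged work would happen here ...
    return "ok"
```

Because denied attempts are logged as well as successful ones, the audit trail itself becomes input for the monitoring described below.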

Inspecting and logging all network traffic has been enabled by the emergence of standalone appliances with dedicated processors that can perform a limited set of security checks at line speed, minimizing the performance impact visible to the enterprise's end users. Scalability was a challenge in the past, but it has eased with improved processor clock speeds, available memory, and Network Attached Storage (NAS). Having this data available means some of the verification that a user role is operating within its restricted credential space can be automated.
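That automated verification can be approximated very simply: compare each account's observed activity in the logs against the resources its role is expected to touch, and flag anything outside that baseline for an analyst. A toy sketch, with all names hypothetical:

```python
# Toy verification pass: flag log entries where an account touches a
# resource outside its expected baseline (all names hypothetical).

BASELINE = {
    "alice": {"crm", "email"},
    "bob": {"payroll_records"},
}

log_entries = [
    ("alice", "crm"),
    ("alice", "server_config"),   # outside alice's baseline: flag for review
    ("bob", "payroll_records"),
]

def flag_suspicious(entries, baseline):
    """Return the (user, resource) pairs that fall outside each user's baseline."""
    return [(user, res) for user, res in entries
            if res not in baseline.get(user, set())]

print(flag_suspicious(log_entries, BASELINE))  # [('alice', 'server_config')]
```

A real deployment would build the baseline from historical traffic rather than a hand-written table, but the comparison step is the same idea.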

In this manner, a largely passive defense can take on tones of real-time protection and more quickly flag suspicious user behavior for a forensic analyst to examine. This class of analysis tool, often called Network Analysis and Visibility (NAV), analyzes network traffic patterns to differentiate normal modes of operation from suspicious activity.

Zero Trust Architecture

Another fundamental principle of the Zero Trust model is to build security layers from the inside out instead of the other way around: start with critical system assets, enable local protections, and then extend network connectivity in a controlled manner, instead of building separate network domains and simply interconnecting them.

Attackers use techniques such as phishing to compromise a user account and then pivot into other network domains. If networks are truly segmented, and privileged activity is rigorously monitored and checked, this attack vector becomes much more difficult.

Another key aspect is to enable a centralized management console that has perspectives across the enterprise and can support real-time alerting to decision makers. Specialized ‘segmentation gateways’ are available to serve this purpose.

As with many models, portions of the objectives and architecture are aspirational, as no single network vendor offers the complete range of products and services necessary for implementation. With the continuing assault on IT infrastructures, this type of security model has the potential to enhance enterprise security and thwart both insider and outsider attacks.

Trust us. (Or don't.)

About the Author

Robert J. Michalsky has served government and commercial customers for more than 30 years. As NJVC Principal, Cyber Security, he quantifies and pursues new business opportunities in cyber security. Mr. Michalsky spent more than 15 years providing cyber security-related IT engineering services for classified Intelligence Community and Department of Defense customers.