It started with a worm, a graduate student, a little curiosity and an enormous lesson in the law of unintended consequences.

Twenty-five years ago, on Nov. 2, 1988, Cornell graduate student Robert Morris unleashed a worm onto the nascent internet, infecting an estimated 10 percent of all systems then connected - approximately 6,000 machines. (By comparison, according to Cisco estimates, 10 percent of the current internet would include more than one billion devices.) The Morris event was a light bulb moment delivered by Molotov cocktail: a homegrown bit of code hurled out onto the Net to massive effect, allowed to replicate without any semblance of cyber security.

A quarter century later, cyber security is as inseparable from information technology as windshields are from cars. And as more and more business and personal activity moves online, the nation's attack surface grows, leading companies like NJVC to bring cyber practices honed on the front lines with defense and intelligence customers to the homefront of the healthcare, finance and commercial sectors.

To talk about the effect of the Morris Worm and the growth of cyber security, the NJVCommentary blog sat down with Cyber Security Principal, frequent blogger and three-decade IT veteran R.J. Michalsky to discuss the evolution of cyber security before and after the worm.

Q: The Morris Worm has a reputation as the opening sentence of the cyber security narrative. Is that a fair way to think about the growth of cyber security? How did it come to have so much greater an impact than anything before it?

A: The Morris Worm was certainly the first malware to become well known to the general news media. It spread by replicating itself and exploiting three distinct vulnerabilities that existed at the time in target systems. The supposed intent was to determine how large the internet was. A design flaw, however, caused the program to replicate itself even when a logic check, intended to limit the impact on any one system, indicated that a machine was already infected. This flaw led to heavy network loads and consumption of system resources, which caught the attention of system administrators. Morris himself and other university researchers created a fix to limit the spread, but as an unintended result, the worm had overloaded the very networks that might have been used to communicate solutions via email. In this manner, it was effectively the first denial-of-service (DoS) attack, since it made various network resources unavailable for use.
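Accounts of the worm describe that flaw roughly as follows: the worm asked whether a copy was already running on a host, but, to defeat administrators who might fake an "already infected" reply, it ignored the answer a fraction of the time. The Python sketch below is a simplified reconstruction of that widely reported behavior, not the worm's actual code; the one-in-seven odds are the figure commonly cited in retrospectives.

```python
import random

REINFECT_ODDS = 7  # the roughly one-in-seven figure cited in retrospectives

def should_install_copy(already_infected: bool) -> bool:
    """Simplified reconstruction of the worm's flawed infection check.

    The check was meant to stop duplicate copies on a host, but to defeat
    fake "already infected" replies the worm sometimes ignored the answer,
    so copies piled up and exhausted CPU, memory and network resources.
    """
    if not already_infected:
        return True
    # The flaw: install another copy anyway a fraction of the time.
    return random.randrange(REINFECT_ODDS) == 0
```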

Most worms routinely scan internet-connected ports looking for remote systems to infect. Modern defenses include monitoring the number of scans a host initiates. Unless it is under administrative control, any host sending out an inordinate number of scans is probably itself infected, and it can be pulled offline for cleansing. This capability, of course, did not exist in 1988.
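As a rough illustration of that kind of monitoring, the sketch below counts outbound connection attempts per source host over a sliding window and flags any host that exceeds a threshold. The window length and threshold are arbitrary assumptions for the example; real network sensors use far more sophisticated baselining.

```python
import time
from collections import Counter, deque

WINDOW_SECONDS = 60    # assumed sliding window for the example
SCAN_THRESHOLD = 100   # assumed limit on outbound attempts per window

events = deque()  # (timestamp, source_ip) for each observed outbound attempt

def record_connection_attempt(source_ip, now=None):
    """Record an outbound attempt; return True if source_ip looks like a scanner."""
    now = time.time() if now is None else now
    events.append((now, source_ip))
    # Drop events that have aged out of the sliding window.
    while events and now - events[0][0] > WINDOW_SECONDS:
        events.popleft()
    counts = Counter(ip for _, ip in events)
    return counts[source_ip] > SCAN_THRESHOLD
```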

Morris himself was the first person convicted under the 1986 Computer Fraud and Abuse Act. He did his 400 hours of community service, paid his $10,000 fine and today is a tenured professor in the Computer Science department at MIT.

The Computer Fraud and Abuse Act was enacted by Congress in 1986 to extend existing computer fraud law to cover the use of computing platforms to commit crimes across state lines. The act also criminalized certain computer-related acts, including the distribution of malicious code and denial-of-service attacks. The Act has been amended six times since then to cover new technologies and emerging situations, such as in response to the attacks of September 11, 2001, and the growing prevalence of identity theft.

Q:  Was the Morris Worm really the big bang moment of cyber, or was the development of cyber security more of a natural evolution?

A: I think it's fair to say the unleashing of the Morris worm was a precipitating event for computer science and the internet. Researchers had previously tried to use worm-like capabilities to push automated operating system patches across multiple networks, but as the Morris incident showed, software running outside user control can have unintended consequences, in this case the unauthorized consumption of network bandwidth and system resources.

This event, in essence, led to the entire category of worm software being considered malware. Now, patching systems is typically done only with user or administrator approval, and patch mechanisms are not allowed to propagate themselves across the network.

As a result of the widespread alarm and the notoriety of the case, the CERT (Computer Emergency Response Team) Program was created to allow government, industry, law enforcement and academia to share methods and technologies focused on preventing large-scale cyber attacks from propagating. DARPA stood up CERT in Pittsburgh, and it is now part of the Software Engineering Institute (SEI), a Federally Funded Research and Development Center (FFRDC) run by Carnegie Mellon.

Q: In the beginning, decades before Morris, there were servers, devices, networks and all the bones of IT infrastructure. At what point did the industry first think about security?

A: The concept of a computer virus has actually been around since the very beginning of computer science as an academic discipline. In 1949, John von Neumann wrote an essay describing how a computer program could be designed to replicate itself. Later, when ARPANET was being developed in the early 1970s, even before the internet was defined and designed, early viruses were discovered that were essentially technical pranks: software that displayed messages bragging that it was able to reproduce and gain access to remote network components.

It is the same as with a physical asset such as currency: as soon as the concept of money was created, someone thought of stealing it, which created the need for security. The internet was not built as a secure fortress; it was built to facilitate the exchange of information, emerging out of research environments where only rudimentary security was ever put in place.

Since early computer networks held no assets of monetary value, little time and attention was paid to securing the information they contained. That changed as financial systems moved to electronic means, necessitating computer security. As criminal motives emerged, countermeasures were developed.

Q: When did cyber security first become its own codified discipline as an enterprise-level effort, in addition to device-by-device or application-by-application security?

A: The terms computer security, Information Assurance (IA) and IT security were all in use during the genesis of the computer as a means of computation, the maturation of the PC industry and the proliferation of the internet.

The first known use of the term ‘cybersecurity’ is purported to date to 1994. It originated from the term ‘cybernetics’, the study of communication and control in living organisms and machines. ‘Cyber’ came to be used as a prefix meaning ‘modern’ and was attached to many words: cyberwar, cyberattack, cyberpunk, cyberspace and so on. And so, in the computer security realm, ‘cybersecurity’ came to mean the new and modern way to secure electronic computer systems.

Cyber security began to be recognized as a distinct discipline in the late 1980s (right around the time of the Morris worm), when the Special Interest Group for Computer Security (SIG-CS) brought several organizations together to create the International Information Systems Security Certification Consortium, affectionately referred to as (ISC)2. (ISC)2 created a Common Body of Knowledge (CBK) and established various professional credentials, helping establish cyber security as a distinct profession.

Q:  What’s the biggest difference between computer security in 1988 and cyber security now?

A: Modern cyber security controls have matured and typically deploy a number of defense mechanisms to ensure there are no outbreaks of anything similar to the Morris worm. Anti-virus software checks files against signatures of known viruses to block previously identified forms of malware. Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS) and firewalls seek to block unauthorized software entry via specific ports and can directly enforce organizational security policy.
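In its simplest form, signature checking is a lookup of a file's fingerprint against a database of known-bad fingerprints. The Python sketch below illustrates the idea with SHA-256 hashes; the hash set is a placeholder, and real anti-virus engines layer byte-pattern signatures, heuristics and behavioral analysis on top of this.

```python
import hashlib

# Placeholder signature database: SHA-256 hashes of known-bad files.
KNOWN_BAD_HASHES = {
    "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
}

def is_known_malware(path):
    """Hash the file in chunks and look the digest up in the signature set."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() in KNOWN_BAD_HASHES
```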

E-mail systems (which already existed prior to the establishment of the internet) now have sophisticated message scanning that enables spam detection and strips out images and other attachments that may be malicious software. These protections are built into any modern (and usually free) e-mail system.
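A toy version of one such protection, flagging attachments with risky file extensions, can be written with Python's standard email library, as in the sketch below. The blocked-extension list is an arbitrary assumption for illustration; real mail gateways also inspect content, sender reputation and authentication.

```python
import email
from email import policy

# Assumed list of extensions to treat as risky for this illustration.
BLOCKED_EXTENSIONS = (".exe", ".scr", ".js", ".bat", ".vbs")

def flag_risky_attachments(raw_message_bytes):
    """Return the filenames of attachments a gateway might strip or quarantine."""
    msg = email.message_from_bytes(raw_message_bytes, policy=policy.default)
    flagged = []
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if name.endswith(BLOCKED_EXTENSIONS):
            flagged.append(name)
    return flagged
```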

In 1988 the CERT Coordination Center (CERT/CC) was stood up at Carnegie Mellon to coordinate information with CIRT teams nationwide and house the CERT Knowledgebase. The Knowledgebase has both public and private (restricted-access) information on identified and cataloged computer system vulnerabilities, structured so it can be easily searched, analyzed and shared.

In 2003, the Department of Homeland Security (DHS) entered into an agreement with Carnegie Mellon to create US-CERT, which has become the national computer security incident response team for the United States. US-CERT issues regular software update advisories.

Q:  What do we do better in cyber security now than we did in the early days after the Morris worm?

A: Most end-user devices now come with some type of anti-virus software, and perhaps a personal firewall, installed out of the box. Subscriptions to security vendor services keep virus signature files constantly updated, ensuring previously identified viruses are prevented from infecting desktop machines.

Microsoft has instituted ‘Patch Tuesday’ as a monthly event in which fixes for vulnerabilities identified since the previous release are distributed through automated operating system updates. With most end devices having internet connectivity, the vast majority of patches get wide dissemination, reducing the landscape of susceptible machines.

Then there's general awareness. A large number of users have heard of social engineering attacks such as phishing schemes and are aware of the danger of clicking on links from unknown senders. These security mechanisms can still be breached by a determined adversary, but most automated attacks can be repelled.

Hence, there is a much higher degree of awareness and built-in security than in the 1980s.

Q:  What does cyber security at large still not do well?

A: Malware payloads are still able to penetrate system defenses and establish "zombie" machines that can be consolidated into large botnets, which can be sold as a service to spammers and others looking for widespread, cheap computing power to conduct their activities. These infected machines are also the source of many Distributed Denial of Service (DDoS) attacks, which consume copious amounts of bandwidth and impact IT systems around the globe.

Scanning networks and computer systems for vulnerabilities has historically been a resource-intensive process that could only be performed occasionally. As software tools improve, computing horsepower increases and storage costs for items such as system logs continue to decrease, these scans can be run more frequently, shrinking the time between system compromise and alert and improving overall security.

DHS is establishing guidance for Continuous Diagnostics and Mitigation (CDM) to provide a higher degree of protection for key information assets. As CDM matures and gets rolled out across enterprises, the overall security posture of organizations will improve as automated mechanisms are alerted to, and can quickly respond to, new and evolving attack vectors.

Q: What are the biggest challenges facing the industry right now?

A: At its core, cyber security is a cat-and-mouse game that may never end. New software functionality enables new user capabilities, which results in inevitable new security issues, which then get exploited by bad actors, identified by users and researchers, and finally mitigated by computer security companies. The patches or fixes get distributed and installed, and the cycle starts again with new software, hardware or network capabilities.