This article illustrates how there is virtually no way to stop a DDoS (Distributed Denial of Service) attack from affecting even the busiest websites in the world. I believe this article is a great example of how vulnerable and weak the internet security infrastructure really is across the board. The below diagram shows the anatomy of the attack: how it was executed and how it left users' requests dead in the water. The attack was carried out by overloading the web server with a flood of fake requests, rendering the website useless. I think this article is important because of the widespread use of websites like Twitter and Facebook, and it clearly shows the public and the user population that even social websites and online networking communities (where the attacker may have little to nothing to gain) can be easily exploited and taken down.
Chapter 2 of NIST Special Publication 800-53 explains the elementary concepts behind the selection and specification of security controls for a given IT system. Topics covered in this chapter include: the organization and structure of security controls, security control baselines, the identification and use of common security controls, security controls in external environments, security control assurance, and revisions and extensions to security controls. The below table from Chapter 2 lists the different families of security controls along with their unique two-character identifiers (which correspond to the security control catalog located in Appendix F of NIST 800-53). The class column refers to the three security control classes: management, operational, and technical.
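The table itself is not reproduced here, but its shape is easy to sketch in code. Below is a partial, illustrative listing of a few of the seventeen families with their two-character identifiers and classes; consult Appendix F of NIST 800-53 for the full catalog.

```c
/* Partial sketch of the family/identifier/class structure summarized
   by the Chapter 2 table. Only a handful of the seventeen families
   are listed here for illustration. */
#include <stdio.h>

struct family { const char *id; const char *name; const char *class_; };

static const struct family families[] = {
    { "AC", "Access Control",                    "Technical"   },
    { "AT", "Awareness and Training",            "Operational" },
    { "AU", "Audit and Accountability",          "Technical"   },
    { "CP", "Contingency Planning",              "Operational" },
    { "IA", "Identification and Authentication", "Technical"   },
    { "PL", "Planning",                          "Management"  },
    { "RA", "Risk Assessment",                   "Management"  },
};

int main(void) {
    /* print the catalog rows we sketched above */
    for (size_t i = 0; i < sizeof families / sizeof families[0]; i++)
        printf("%s  %-35s %s\n", families[i].id,
               families[i].name, families[i].class_);
    return 0;
}
```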
Chapter 3 of NIST Special Publication 800-53 takes it a step further and explains the specific processes involved in choosing and specifying security controls. Specific topics covered include: managing risk, security categorization, selecting and tailoring the initial baseline controls, supplementing the tailored baseline controls, and updating the security controls. Table 3-1 (shown below) is a graphical representation of the risk management framework security life cycle. It illustrates the flow of processes within the system security life cycle, along with the relevant NIST documents (and other standards, regulatory, or policy documents) that help guide the risk management process.
In the early 90's, the National Performance Review, as part of the National Information Infrastructure initiative, requested that NIST (the National Institute of Standards and Technology) develop a set of generally accepted system security principles and practices for the United States government. These principles and practices were created primarily with the government's information and data systems in mind. In 1991, these rules and procedures were outlined in the National Research Council document titled Computers at Risk. By 1992, several national and international entities had started implementing that document's recommendations. NIST 800-14 (GASSP) is based primarily on a document titled OECD's Guidelines for the Security of Information Systems, created by a team of international experts in 1992. NIST built on and added to the OECD Guidelines in order to provide a more refined and detailed set of Generally Accepted System Security Principles. The below table lists the principles and practices described in NIST Special Publication 800-14.
1) What was the cause of the first Internet Worm? Specifically, what vulnerabilities did the worm take advantage of in order to spread through the Internet?
The cause of the first worm was “known security loopholes in applications closely related with the operating system” (1989, pg.1), specifically on VAX computers and Sun-3 workstations running the 4.2 and 4.3 Berkeley UNIX code. The two main vulnerabilities the worm exploited were in basic network services: sendmail and fingerd. In the case of the sendmail service, the worm utilized a “non-standard debug command” (1989, pg.2) to propagate itself onto other remote hosts, starting the self-replicating process once again. With the fingerd service, the worm triggered a buffer overflow, sending more characters than the service could handle and thereby getting an arbitrary program executed. Other vulnerabilities the worm took advantage of included password guessing and trusted-host features.
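The fingerd flaw is worth a closer look, since it became the textbook buffer overflow: the daemon read its one-line request with the C library's unbounded gets() routine, so a longer-than-expected request overwrote adjacent memory. Below is a minimal sketch of the pattern, not the actual fingerd source; the buffer size is illustrative, and the vulnerable call is left commented out because modern C libraries have removed gets().

```c
/* Minimal sketch of the unbounded-read flaw the worm exploited in
   fingerd: a fixed buffer filled by a routine with no bounds check. */
#include <stdio.h>

int main(void) {
    char line[512];                 /* fixed-size request buffer */

    /* VULNERABLE: gets() keeps writing past line[511] if the remote
       client sends a longer request -- this is the overflow the worm
       used to inject and run its own code. */
    /* gets(line); */

    /* FIXED: fgets() stops after sizeof(line) - 1 characters. */
    if (fgets(line, sizeof(line), stdin) != NULL)
        printf("request: %s", line);
    return 0;
}
```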
2) Are those vulnerabilities still present?
These particular vulnerabilities were addressed, and preventative measures were implemented in order to prevent future exploitation of these services. However, that is not to say that another attack or similar vulnerabilities cannot be exploited in the future. In this particular situation, the vulnerabilities might never have been noticed had it not been for the worm. While this worm was extremely destructive, it was also extremely eye-opening for the computing community about weaknesses that needed to be addressed.
This article illustrates the complexity of a well-developed malware application installed on ATMs around the world. I thought this article was significant because of the duration and maturity of this program. The level of sophistication this program was able to achieve is remarkable, but at the same time very chilling. The article shows how people need to be careful not only while shopping online and using online banking, but also mindful of where they use their debit or ATM cards. Staying diligent about reconciling bank accounts on a regular basis has obviously become even more essential. I chose this article because nearly everyone uses an ATM every now and then, and in doing so we expose ourselves to yet more risk.
1) What is the purpose of NIST Special Publication 800-30?
The purpose of NIST 800-30 is to facilitate the risk management process. NIST 800-30 is a guide that assists organizations in making better decisions about addressing vulnerabilities in their IT systems.
2) What is the principal goal of an organization’s risk management process?
The principal goal of an organization’s risk management process, according to NIST 800-30, “is to enable the organization to accomplish its missions” (NIST, 2002). Page 2, section 1.3 of NIST 800-30 describes three ways risk management helps achieve those missions:
* Better securing the IT systems that store, process, or transmit organizational information
* Enabling management to make well-informed risk management decisions to justify the expenditures that are part of an IT budget
* Assisting management in authorizing (or accrediting) the IT systems on the basis of the supporting documentation resulting from the performance of risk management
3) According to NIST, what three processes compose risk management?
According to NIST 800-30, risk management is composed of three processes: risk assessment, risk mitigation, and evaluation and assessment (NIST, 2002).
4) How does risk management relate to the System Development Life Cycle (SDLC)?
Risk management and the SDLC work in continuous cooperation; they run side by side throughout a system's life. “Effective risk management must be totally integrated into the SDLC” (2002, p.4). In addition, according to figure 2-1, each phase of the SDLC (initiation, development or acquisition, implementation, operation or maintenance, and disposal) should address risk management. Each phase of the SDLC comes with its own supporting risk management activities.
5) NIST 800-30 defines seven Information Assurance “key roles”. Name and briefly describe each of them.
Senior Management – These are the people responsible for making sure the organization meets its goals. They make sure that the project has the necessary resources and that those resources are properly utilized. Senior management is also responsible for evaluating and incorporating results from risk assessment practices.
Chief Information Officer (CIO) – The CIO takes the input and results from the risk assessment process and is responsible for incorporating them into IT planning, budgeting, and performance decisions.
System and Information Owners – This is the group responsible for ensuring the integrity, confidentiality, and availability of the project's systems and data.
Business and Functional Managers – These managers are fundamentally responsible for the “operations and IT procurement processes” (2002, p.6). Business managers have the authority to make “trade-off” decisions that are imperative to the goals of the project.
Information System Security Officers (ISSO) – These managers are responsible for executing their “organizations’ security programs, including risk management” (2002, p.6). These professionals are the direct and main support of senior and executive management. They lead the project in terms of identifying, evaluating, and mitigating risks to their IT systems.
IT Security Practitioners – These are the people actually performing the major systems jobs, including: “network, system, application, and database administrators; computer specialists; security analysts; security consultants” (2002, p.6).
Security Awareness Trainers (Security/Subject Matter Professionals) – This group of individuals is responsible for providing security awareness training to the IT systems user population.
6) How does NIST 800-30 define risk?
According to NIST 800-30, “Risk is the net negative impact of the exercise of a vulnerability, considering both the probability and the impact of occurrence” (2002, p.2). More specifically, “Risk is a function of the likelihood of a given threat-source’s exercising a particular potential vulnerability, and the resulting impact of that adverse event on the organization” (2002, p.8)
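NIST 800-30 turns these two quoted factors into a risk level with a risk-level matrix: likelihood is rated 1.0, 0.5, or 0.1, impact is rated 100, 50, or 10, and their product maps to High, Medium, or Low. The sketch below encodes that matrix; the function name is mine, and the weights and thresholds follow the guide's matrix as I read it.

```c
/* Sketch of the NIST 800-30 risk-level matrix: risk score is
   likelihood (1.0 / 0.5 / 0.1) times impact (100 / 50 / 10),
   banded as High (>50), Medium (>10 to 50), or Low (1 to 10). */
#include <stdio.h>

static const char *risk_level(double likelihood, double impact) {
    double score = likelihood * impact;   /* e.g., 0.5 * 100 = 50 */
    if (score > 50) return "High";
    if (score > 10) return "Medium";
    return "Low";
}

int main(void) {
    /* medium likelihood (0.5) of a high-impact (100) event */
    printf("risk: %s\n", risk_level(0.5, 100));   /* -> Medium */
    return 0;
}
```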
7) How does NIST 800-30 define a threat?
NIST 800-30 defines a threat as, “the potential for a particular threat-source to successfully exercise a particular vulnerability” (2002, p.12).
8) How is a threat source defined? Name three common threat sources.
A threat source is, “any circumstance or event with the potential to cause harm to an IT system” (2002, p.13). Three common threat sources include:
* Human threats – both unintentional acts (e.g., data entry errors, spilled coffee) and deliberate acts (e.g., network attacks, malicious software, unauthorized access)
* Environmental threats – long-term power failure, pollution, chemicals, liquid leakage, and HVAC failures
* Natural threats – floods, earthquakes, tornadoes, landslides, and electrical storms
9) How does NIST 800-30 define vulnerability?
NIST 800-30 defines a vulnerability as, “A flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised (accidentally triggered or intentionally exploited) and result in a security breach or a violation of the system’s security policy” (2002, p.15).
10) According to NIST, whose responsibility is IT Security? (technical or management)
IT security is both a technical and a management responsibility. It falls on nearly every major functional department in an organization; mainly, responsibility lies with the management, operational, and technical areas.
11) What is a security control?
A security control is a mechanism that is put in place to mitigate risk. Security controls can be broken into two categories: technical controls and nontechnical controls, the latter covering management and operational controls.
12) Define: technical controls, management controls, and operational controls.
Technical controls consist of “safeguards that are incorporated into computer hardware, software, or firmware (e.g., access control mechanisms, identification and authentication mechanisms, encryption methods, intrusion detection software)” (2002, p.20). Nontechnical controls “are management and operational controls, such as security policies; operational procedures; and personnel, physical, and environmental security” (2002, p.20).

“Management security controls, in conjunction with technical and operational controls, are implemented to manage and reduce the risk of loss and to protect an organization’s mission. Management controls focus on the stipulation of information protection policy, guidelines, and standards, which are carried out through operational procedures to fulfill the organization’s goals and missions” (2002, p.35).

Operational controls are “a set of controls and guidelines to ensure that security procedures governing the use of the organization’s IT assets and resources are properly enforced and implemented in accordance with the organization’s goals and mission” (2002, p.36).
13) How should the adverse impact of a security event be described?
“The adverse impact of a security event can be described in terms of loss or degradation of any, or a combination of any, of the following three security goals: integrity, availability, and confidentiality” (2002, p.22)
14) Describe the difference between quantitative and qualitative assessment?
A qualitative assessment prioritizes risks and identifies areas for immediate attention, but it does not provide specific, quantifiable measurements of impact. A quantitative assessment provides a measurement of an impact's magnitude (often in monetary terms) that can feed a cost-benefit analysis of controls, although the numbers are only as good as the estimates behind them.
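One way to make the quantitative side concrete is the classic annualized loss expectancy calculation, ALE = SLE × ARO. This formula comes from standard quantitative risk analysis rather than from NIST 800-30 itself, and the figures below are invented for illustration.

```c
/* Worked illustration of a quantitative assessment: annualized loss
   expectancy (ALE) = single loss expectancy (SLE) in dollars times
   annualized rate of occurrence (ARO). Numbers are made up. */
#include <stdio.h>

int main(void) {
    double sle = 26000.0;  /* estimated loss from one incident */
    double aro = 0.25;     /* expected once every four years */
    double ale = sle * aro;

    /* ALE gives a dollar ceiling for what an annual control is worth */
    printf("ALE = $%.2f per year\n", ale);  /* $6500.00 */
    return 0;
}
```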
15) Name and describe six risk mitigation options.
According to section 4.1, pg. 27, six risk mitigation options include:
Risk Assumption - to accept the potential risk and continue operating the IT system or to implement controls to lower the risk to an acceptable level
Risk Avoidance - to avoid the risk by eliminating the risk cause and/or consequence (e.g., forgo certain functions of the system or shut down the system when risks are identified)
Risk Limitation - to limit the risk by implementing controls that minimize the adverse impact of a threat’s exercising a vulnerability (e.g., use of supporting, preventive, detective controls)
Risk Planning - to manage risk by developing a risk mitigation plan that prioritizes, implements, and maintains controls
Research and Acknowledgment - to lower the risk of loss by acknowledging the vulnerability or flaw and researching controls to correct the vulnerability
Risk Transference - to transfer the risk by using other options to compensate for the loss, such as purchasing insurance.
16) What is residual risk?
Residual risk is simply the risk that remains after new or enhanced controls have been implemented through the risk management process. Since it is virtually impossible to eliminate all risk, some risk is always present; whatever is left over after controls have taken effect is considered residual.
The IA model is a tool that shows the working dynamics of how an Information Assurance program should be approached. It encapsulates the different security functions that any organization should possess. The left face of the cube shows the Information States: the states information can be in, whether transmission, storage, or processing. These states mark the different times the data may be vulnerable. The top layer of the cube contains the Security Services that an effective IA program should provide: availability (making information promptly and reliably accessible to users), integrity (data, hardware, and security mechanism integrity), authentication (the right users accessing the right data), confidentiality (the assurance that personal and sensitive data, financial, medical, etc., is kept safe), and non-repudiation (the confirmation that the parties involved in a data transmission [sender, receiver] cannot deny their part in the transaction). Lastly, the right face of the cube displays the Security Countermeasures that should be in place in an efficient IA program: training and education of people (employees or other parties involved), policies and procedures, and technology such as surveillance, communications, hardware, and software.
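To make the cube concrete, here is a small sketch (my own illustration, not part of the model's literature) that enumerates every cell of the cube described above; each cell is one information-state, security-service, and countermeasure combination an IA program should account for.

```c
/* Enumerate the cube's cells: 3 information states x 5 security
   services x 3 countermeasures. Axis labels come from the IA model
   described above. */
#include <stdio.h>

static const char *states[]   = { "transmission", "storage", "processing" };
static const char *services[] = { "availability", "integrity", "authentication",
                                  "confidentiality", "non-repudiation" };
static const char *counters[] = { "people/training", "policy/procedures",
                                  "technology" };

int main(void) {
    int cells = 0;
    for (int s = 0; s < 3; s++)
        for (int v = 0; v < 5; v++)
            for (int c = 0; c < 3; c++, cells++)
                printf("%-12s | %-15s | %s\n",
                       states[s], services[v], counters[c]);
    printf("%d cells to cover\n", cells);  /* 3 x 5 x 3 = 45 */
    return 0;
}
```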
“Court Allows Woman to Sue Bank for Lax Security After $26,000 Stolen by Hacker” shows how something as common as a bank account can turn into a major disaster in an instant. I chose this article for two reasons. First, because it closely parallels an experience I faced myself, involving a much smaller, though still substantial, amount. Second, because something like this could affect anyone with an online banking account at any time. The article also exposes obvious flaws in the security practices surrounding online banking. Regardless of the semantics, there was an apparent failure in security.
I think the lesson in this article is that the more consumers rely on technology for online financial convenience, the more prone they are to becoming victims of identity theft. This is a prime example of the never-ending battle between good and evil, hackers versus security professionals. The internet has become the world's preferred method of communication, and with that comes risk. Online banking has become a necessary evil.
Have you ever found yourself sitting in front of your PC staring at the notorious blue screen? Maybe you have an extra system around the house that you want visiting guests to use rather than your personal PC. What if, over a long holiday weekend, you forget your password after a recent reset? All of these scenarios can be addressed with a live CD. A live CD is a convenient way to run an operating system on a computer without using the hard disk drive. A vast array of operating systems is available as live disks, including Ubuntu, Back Track, Knoppix, Windows PE, Fedora, Archie, Klax, and Clusterix, just to name a few. Most of the live disks you find are some variant of, but not limited to, a Linux/Unix distribution. Depending on the end usage, they have evolved over time to ship with applications spanning a wide spectrum of disciplines and areas of interest. Live disks can serve many other functions; some of the security-specific ones include network sniffing, file integrity checking, security testing, network discovery, network port and service identification, vulnerability scanning, wireless scanning, password cracking, remote access testing, and penetration testing.
Live CDs are extremely important from a security perspective for a number of reasons. The first scenario might be a primary educational institution or municipal organization where the administration wants to limit certain abilities of the user, such as installation privileges or even write access to the internal hard disk, given that a designated, monitored drive is available for saving data. Utilizing live CD technology in this situation would not only prevent virus and malware infections but would also give users a better-performing PC.

Another scenario would be a corporate security professional at a Fortune 500 company using live disks to analyze the company's network in order to mitigate risks and vulnerabilities and their exploitation. Running security tools from live disk distributions like Back Track and Knoppix STD lets a security professional run different tests against various components and hardware on nearly any platform, and make informed decisions based on the output of those utilities.

Lastly, there is a scenario more pertinent to the everyday PC user: normal household network security. With technology and computational performance changing on what seems to be a daily basis, rather than the 18-month pace associated with Moore's law, users need to be aware of potential weaknesses in their own home networks and take appropriate action, including taking advantage of live disk technology. A lack of knowledge of security risks, combined with careless behavior while surfing the web, can be to your detriment.

The beauty of live disks is simplicity. The types of media they come on (CD, DVD, or USB) are ubiquitous, cheap, and easily replaced. You can take your live disk to nearly any machine and run your preferred OS and utilities with no worry of corrupting existing files or applications, and with the peace of mind that you are safe from the viruses and malicious software that would normally plague a conventional desktop OS. To take it a step further, some live disk distributions allow you to customize the installed utilities and add your own scripts to meet the needs of your specific application.
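One habit that pairs naturally with live disks is verifying the media you just created: confirming that the image written to the CD or USB stick matches the ISO you downloaded, the same job usually done with cmp or a checksum tool. The sketch below, with illustrative file names, compares the target device against the ISO byte for byte up to the ISO's length.

```c
/* Verify that burned live-disk media matches the source ISO by
   comparing the two byte for byte, stopping at the end of the ISO
   (the device may be larger than the image). */
#include <stdio.h>

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s image.iso /dev/target\n", argv[0]);
        return 2;
    }
    FILE *iso = fopen(argv[1], "rb");
    FILE *dev = fopen(argv[2], "rb");
    if (!iso || !dev) { perror("fopen"); return 2; }

    long offset = 0;
    int a;
    while ((a = fgetc(iso)) != EOF) {   /* read until end of the ISO */
        if (a != fgetc(dev)) {
            printf("mismatch at byte %ld\n", offset);
            return 1;
        }
        offset++;
    }
    printf("media matches image (%ld bytes)\n", offset);
    fclose(iso);
    fclose(dev);
    return 0;
}
```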