Cloud Controls Framework for Accelerating SaaS Platform Security
BITS ZG628T: Dissertation
Dissertation work carried out at
Triquesta Pte Ltd, Singapore
BIRLA INSTITUTE OF TECHNOLOGY & SCIENCE
Submitted in partial fulfillment of the M.Tech. Software Systems degree programme

Under the Supervision of
Mr. Johanes Iliadi, Head – Development,
Triquesta Pte Ltd, Singapore
Cloud computing has transformed the way the world conducts business. With cloud computing, information security is different from what it used to be in traditional on-premises, server-based applications. With ever newer cloud services becoming available, and the emerging technologies surrounding the cloud, even a small security flaw, when not controlled, is amplified and exposes a larger attack surface. Cloud security challenges include shared infrastructure (i.e. compute, network, storage), serverless functions, containers and software-defined everything.
This paper approaches the use case of SaaS platform delivery under the premise of end-to-end security controls (i.e. from product development to cloud delivery). It is imperative to understand the collection of security controls that must be implemented to ensure that the SaaS model can be delivered with confidence.
This paper presents the security requirements of such a software platform development and the subsequent deployment in the SaaS model of cloud computing. It also addresses
various security issues in SaaS cloud computing and controls that organizations ought to
implement to mitigate the risks.
This paper focuses on the following areas:
Security requirements in cloud computing
Emerging security domains (DevSecOps, Containers, etc.) that are pivotal to the accelerated movement to cloud
Critical cloud platform security issues
Comprehensive library of security controls
This paper provides a viewpoint on cloud-native development when formulating cloud computing strategies and identifies the key security capabilities required to execute such a strategy successfully.
Broad academic area of work: Cloud Computing
Keywords: Risk Management, Cloud Security, SaaS, DevSecOps, IAM, Containers, Secure SDLC
I take this opportunity to convey my deep appreciation and regards to Mr. Anton Van Etten, Managing Director, Triquesta Pte Ltd, Singapore for allowing me the academic time to pursue this goal.
I thank my supervisor Mr. Johanes Iliadi, Head – Development, Triquesta Pte Ltd, Singapore for providing excellent guidance, encouragement and oversight throughout this dissertation work.

I express my sincere thanks to my additional examiner Mr. Sreenivasan Balasubramanian, Head – Cyber Security Consulting (APJ), Rapid7 Pte Ltd, Singapore for giving helpful tips and direction throughout the course of this work.
I am thankful to my colleagues and team members who presented real-life use cases and helped in many professional ways to make this academic journey smooth and successful.
Table of Contents

1 Introduction  1
1.1 About Triquesta Pte Ltd  1
1.2 Objective  1
1.3 Cloud Service Model  2
1.4 Reference Architecture  3
2 Security Requirements  4
2.1 Risk assessment  4
2.2 Key Risks in SaaS  5
2.3 Establish Security Requirements  7
2.4 Assets  7
2.5 Requirements Engineering  8
2.6 Identity & Access  9
2.7 Determining Data Sensitivity and Importance  10
2.8 Common Pitfalls of Cloud Security  11
2.9 Development & Testing  13
2.10 Assessing Common Vulnerabilities  15
2.11 Cloud-Specific Risks  16
2.12 Physical Infrastructure in Cloud  18
2.13 Threat Modeling  19
2.14 Securing Open Source Software  20
2.15 Identity and Access Management  21
2.16 Deployment & Operations  24
2.17 Data Governance  25
2.18 Business Continuity Management  26
2.19 IT Service Management  27
2.20 Considerations for Shadow IT  28
2.21 Operations Management  29
2.22 Information Security Management  29
2.23 Configuration Management  29
3 Preventing the unauthorized installation of software and hardware  30
3.1 Incident  30
3.2 Third Party Services  34
3.3 Regulatory Compliance  35
3.4 Protecting Personal Information  37
3.5 Secure SDLC  40
3.6 Audit and Logging  42
3.7 Database security  42
4 Cloud Security Framework  43
4.1 Information Risk Management  43
4.2 Outsourced Risk Management (Contracts)  43
4.3 Security Project Management  44
4.4 Network Infrastructure  45
4.5 SDLC Requirements  47
4.6 Design  48
4.7 Identity Governance  50
4.8 Segregation of Duties (SoD)  54
4.9 Cryptography  54
4.10 Access Control  56
4.11 The Business Continuity & Disaster Recovery plan  56
4.12 Server Security  57
4.13 Network Security  57
4.14 Data Classification  57
4.15 Remote Access Security  64
4.16 Operations  64
4.17 Performing Patch Management  64
4.18 The Patch Management Process  64
4.19 Implementing Network Security Controls  65
4.20 Defense in Depth  65
4.21 Layered Security  65
4.22 Conducting Vulnerability Assessments  68
4.23 Log Capture and Log Management  68
4.24 Using Security Information and Event Management  70
4.25 Managing the Logical Infrastructure for Cloud Environments  70
4.26 Implementing Policies  71
4.27 Cloud Computing Policies  72
4.28 Understanding the Implications of the Cloud to Enterprise Risk Management  73
5 Security Controls Verification  77
5.1 Auditing in the Cloud  77
5.2 Internal and External Audits  77
5.3 Types of Audit Reports  78
6 Summary  81
7 Conclusions and Recommendations  82
8 Bibliography  83
9 References  84
10 Appendices  85
10.1 Appendix 1: Risk Management Frameworks  85
10.2 Appendix 2: Data Classification Standards  86
10.3 Appendix 3: Application Security Guidelines  87
10.4 Appendix 4: Glossary of Terms  88
11 Checklist for the items in the report  89
List of Figures

Figure 1: SaaS Reference Architecture  3
Figure 2: SaaS risks  5
Figure 3: Security Requirements  7
Figure 4: Pitfalls of Cloud Security  11
Figure 5: Basic Security Design  40
Figure 6: Security for Design  45
Figure 7: Risk Management Frameworks  85
List of Tables

Table 1: Service Model Responsibilities  2
Table 2: Service Model and Asset Mapping  3
Table 3: Data Classification Standards  86
Table 4: Checklist items for the report  89
Introduction

About Triquesta Pte Ltd

Triquesta is a FinTech company with a sole focus on collateral management, risk management and compliance solutions for financial clients. Through deep industry knowledge, extensive experience and leading-edge technologies, Triquesta provides solutions for collateral management operations. Triquesta Enterprise Risk Management Solution (TERMS), the flagship product, pro-actively supports the management of collateral and offers a platform for growth. TERMS is a new breed of software platform supplying the next generation of products and services for the risk and collateral management area. With the SaaS solution planned to be offered in EU regions, the EU General Data Protection Regulation (GDPR) has a large impact on how personally identifiable information (PII) is collected, stored, used, archived and destroyed.
Objective

The main objective of this dissertation is to develop a security controls framework that acts as the security reference architecture for the development of a cloud-native application and its deployment on a SaaS delivery platform. This paper is a refinement of the security controls from various information security frameworks, and of how they shall be applied to the use case of SaaS delivery. It also provides an alternate means to capture, process and view the security-related controls to be used in the Systems Development Life Cycle (SDLC), Deployment & Operations.
A shift-left security mindset has been applied to proactively manage information security risks during each stage of the systems development life cycle. While the scenarios might be unique for each cloud environment, there exist certain patterns and common functions which can be attributed to standard security controls. Such common risk factors are linked to the cloud infrastructure, application development and operations security that apply to all types of cloud ecosystems.
This paper discusses how to formulate a security framework that puts forth a comprehensive and continuous assurance model for understanding the secure SDLC, multi-tenant data risk, and potential risks in SaaS delivery. After the framework is in place, it can be directly incorporated into an ongoing continuous assurance program that ensures that data are optimally protected against changing threat conditions and any other future changes.
The essential design principles adopted while developing this security controls framework were:
The framework shall not be specific to any cloud provider (i.e. Azure, AWS, GCP)
The framework shall provide “Secure by Design” and “Secure by Default” approach to achieve regulatory compliance and governance
The framework shall provide technical approaches and best practices, in a much more secure, open, flexible and efficient way
Cloud Service Model

Cloud service categories come under three main groups: IaaS, PaaS, and SaaS. Information security management is an essential part of risk management and critical for regulatory compliance needs.
In cloud, security is neither fully owned by the service provider nor the consumer. Shared responsibility plays an important role because a portion of security responsibility belongs to the service provider while some belongs to the customer. Security requirements also vary depending on different types of workloads and the service model chosen.
The cloud service provider does not monitor for or respond to security incidents within the customer's area of responsibility. Hence it is essential that the security controls be evaluated and assessed as to whether they adequately mitigate risks.
Service Model   Cloud Service Provider Responsibility                     Hosted (Broker) Provider Responsibility   Cloud Customer Responsibility
PaaS            Physical Server
SaaS            Physical Server, Physical Network, Virtualized Servers

Table 1: Service Model Responsibilities

The hosted (broker) provider is an entity (i.e. a software product company) that subscribes to infrastructure (IaaS) from the cloud service provider and then delivers the service (SaaS) to the cloud customer in a SaaS model.
Data access within service models

Access to data will be decided by the following:

The service model
The legal system of the country where the data is legally stored
In this paper, the SaaS model and its associated security risks are discussed from the perspective of the hosted service provider.
To demonstrate the need for cloud security controls and their applicability in the SaaS setup, the following reference architecture is used throughout this paper.
Figure 1: SaaS Reference Architecture

Location         Asset Details
On-Premises      End user computing; Local server infrastructure
Primary Cloud    Test infrastructure; Identity & Access Management
Secondary Cloud  DR Infrastructure; Identity & Access Management

Table 2: Service Model and Asset Mapping
Security Requirements

Risk assessment

Information security risks arise from the loss of Confidentiality, Integrity and Availability (CIA) of information or information systems. Enterprise risks should be understood so that they can be managed to an acceptable level. An appropriate risk assessment process shall be established to identify and govern the risks faced by the enterprise.
The Risk-Management Process
The risk-management process has four components:

Frame risk
Assess risk
Respond to risk
Monitor risk
Assessing risk requires the careful analysis of threat and vulnerability information to determine the extent to which circumstances or events can adversely affect an organization and the likelihood that such circumstances or events will occur.
The risk assessment shall include:
Information security program strategy and documentation
Information security policies, procedures, guidelines, and baselines
Information security assessments and audits
Application documentation, secure coding standards, code promotion procedures, test plans, and other documentation as needed
Security incident response plan and corresponding documentation
Data classification schemes and information handling and disposal policies and procedures
NIST Special Publication 800-30 defines a vulnerability as “an inherent weakness in an information system, security procedures, internal controls, or implementation that could be exploited by a threat source.” It is common to identify vulnerabilities as they relate to people, processes, data and technology. Data gathering for vulnerability assessments typically includes the type of each vulnerability, its location, and its severity (typically based on a scale of high, medium, and low).

NIST Special Publication 800-30 defines threats as “any circumstance or event with the potential to adversely impact organizational operations and assets, individuals, other organizations, or the Nation through an information system via unauthorized access, destruction, disclosure, or modification of information, and/or denial-of-service.”
Threat-sources can be grouped into a few categories.
Technical: Hardware failure, software failure, malicious code and unauthorized use
Physical: Failure due to faulty components or perimeter defense failure
Operational: A process (manual or automated) that affects the CIA
Determination of Likelihood
Likelihood is a component of a qualitative risk assessment. Likelihood, along with impact, determines risk. Likelihood can be measured by the capabilities of the threat and the presence or absence of countermeasures. Initially, organizations that do not have trending data available may use an ordinal scale, labeled high, medium, and low, to score likelihood rankings.
Determination of Impact
Impact can be ranked much the same way as likelihood. The main difference is that the impact scale is expanded and depends on definitions rather than ordinal selections. Definitions of impact to an organization often include monetary loss, loss of market share, and other facets. Enough emphasis shall be put to define and assign impact definitions for high, medium, low, or any other scale terms that are chosen.
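As an illustration of combining likelihood and impact, a qualitative risk rating can be implemented as a simple matrix lookup. The three-level scale and the matrix values below are illustrative assumptions, not a prescribed standard:

```python
# Qualitative risk rating as a lookup over ordinal likelihood and impact
# scales. The matrix values are illustrative, not normative.
LEVELS = ["low", "medium", "high"]

RISK_MATRIX = {
    # (likelihood, impact): risk rating
    ("low", "low"): "low",       ("low", "medium"): "low",     ("low", "high"): "medium",
    ("medium", "low"): "low",    ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",   ("high", "medium"): "high",   ("high", "high"): "high",
}

def risk_rating(likelihood: str, impact: str) -> str:
    """Return the qualitative risk rating for a likelihood/impact pair."""
    if likelihood not in LEVELS or impact not in LEVELS:
        raise ValueError("likelihood and impact must be low, medium or high")
    return RISK_MATRIX[(likelihood, impact)]
```

An organization that later gathers trending data can replace the ordinal matrix with quantitative scoring without changing callers of `risk_rating`.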
Key Risks in SaaS

A comprehensive assessment of the key risks related to the Software-as-a-Service (SaaS) model is depicted below:

Figure 2: SaaS risks

Regulatory Compliance
Regulatory compliance is vital to business operations, as non-compliance can lead to legal proceedings, heavy penalties and even business closure. While cloud customers are ultimately responsible for the security and integrity of their own data, the SaaS provider should provide the necessary confidence by:
Understanding the regulatory and legal requirements of the location of processing and the location of storage that impact the customer's business
Developing necessary controls to comply with legal and regulatory needs
Undergoing external audits and achieving security certifications for the facility and processing

Privileged User Access
Any sensitive data that gets processed outside the organization brings with it an inherent level of risk. The SaaS provider should employ:
Necessary physical, logical and personnel controls
Mechanisms to authorize, monitor and review privilege access
Personnel policies that involve stringent background checks
Organizations want to minimize the number of people who have access to secure information or resources, because that reduces the chance of a malicious user gaining that access, or of an authorized user inadvertently impacting a sensitive resource. However, users still need to carry out privileged operations. Organizations can give users privileged access, but there is a need for oversight of what those users are doing with their admin privileges. Privileged access management helps to mitigate the risk of excessive, unnecessary or misused access rights.
In the cloud, it is difficult to know exactly where customer data is stored, because the data moves between various locations based on the backup, disaster recovery and redundancy architecture. The SaaS provider should therefore:
Have controls in place to commit to storing and processing data in specific jurisdictions (i.e. allowed countries)
Contractual commitment to obey local privacy requirements on behalf of cloud customer
Cross border data transfers should be agreed and monitored for compliance purpose
The cloud attracts customers through the nature of its shared infrastructure (i.e. multi-tenancy), and hence its lower cost. With the shared environment comes the risk of co-locating data with other customers. This is a valid risk in terms of data storage and processing. The SaaS provider should establish necessary controls such as:
Encrypting each customer data with unique per customer key
Customer holds the encryption key, which provides confidence of control over the data
Logical separation of each customer resources by establishing virtual private cloud (VPC), Virtual Network (VNet), etc.
For cloud customers who are sensitive to multi-tenant environments, a private hosting solution shall be considered for data storage
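To illustrate the per-customer-key control above, the sketch below keeps a separate key per tenant so that ciphertext written for one tenant cannot be recovered under another tenant's key. The `TenantVault` class and the SHA-256-based toy stream cipher are purely illustrative assumptions; a production system would use a vetted authenticated cipher (e.g. AES-GCM) with keys held in a KMS or by the customer:

```python
import secrets
import hashlib

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 in counter mode), used ONLY to illustrate
    per-tenant key separation. Do not use for real encryption."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class TenantVault:
    """Encrypts each tenant's data under that tenant's own unique key, so data
    co-located on shared storage is not readable across tenants."""
    def __init__(self):
        self._keys = {}  # tenant_id -> key; in practice held in a KMS/HSM

    def register_tenant(self, tenant_id: str) -> None:
        self._keys[tenant_id] = secrets.token_bytes(32)

    def encrypt(self, tenant_id: str, plaintext: bytes) -> bytes:
        return _keystream_xor(self._keys[tenant_id], plaintext)

    def decrypt(self, tenant_id: str, ciphertext: bytes) -> bytes:
        return _keystream_xor(self._keys[tenant_id], ciphertext)
```

Because keys never cross tenant boundaries, attempting to decrypt one tenant's ciphertext with another tenant's key yields garbage rather than the plaintext.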
Data recovery is a key concern among cloud customers due to the nature of the multi-tenant, co-shared environment. The SaaS provider shall establish confidence in:
Recovering customer specific data within the agreed SLA
Priority of recovering customer data in case of a widespread disaster
Not impacting the processing activities of other co-located tenants
Establish Security Requirements

The security requirements and analysis of the SaaS platform can be derived from a formal risk assessment of the solution components that comprise the SaaS platform. A systematic approach to developing risk management for the SaaS platform leads to the key areas that need to be considered, listed below:
Figure 3: Security Requirements

NIST SP 800-30, the Risk Management Guide for Information Technology Systems, is a good resource for the risk management exercise.
Standard operating environment with security hardening shall be established
System and application files need to be monitored for file integrity
Access to the server shall be monitored, managed and audited based on the roles
Applications and services running on the server needs to be whitelisted
Any intrusion to the host system should be detected and prevented
Production server access and deployment must be segregated from non-production servers
Access from untrusted zone (i.e. internet) shall terminate at the servers deployed in the de-militarized zone (DMZ)
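The file-integrity monitoring requirement above can be sketched as a baseline-and-compare over cryptographic hashes. The helper functions below are illustrative; production deployments would typically rely on a dedicated integrity-monitoring agent or tool rather than an ad-hoc script:

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 digest of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record known-good hashes for the monitored system/application files."""
    return {str(p): hash_file(p) for p in paths}

def detect_changes(baseline):
    """Return files whose current hash no longer matches the baseline,
    including files that have been removed since baselining."""
    changed = []
    for path, expected in baseline.items():
        p = Path(path)
        if not p.exists() or hash_file(p) != expected:
            changed.append(path)
    return changed
```

The baseline would be rebuilt only through an authorized change process, so any unexplained difference flags a potential integrity violation.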
Network security requirements shall be looked at both underlying physical environment and the logical security
The physical environment security requirements state the need to ensure that on-premises security where the software development happens should be equally protected
From the on-premises location, any access to the cloud need to ensure that enough protection is maintained for access as well as securing the data while in transit
Production network access and deployment must be segregated from the non-production network
Any interaction between a trusted zone (i.e. internal network) and untrusted zone (i.e. internet) should happen through a de-militarized zone (DMZ)
Any network intrusion should be detected and prevented
Resources deployed in the cloud shall be protected by deploying in segregated virtual network
Traffic between network segments shall be allowed based on business need and logged for auditing purposes
Network configuration changes shall be authorized and logged for auditing purposes
Storage security requirements shall be defined as listed below:
Data must be classified when created to ensure enough protection
Data at Rest (DAR) needs to be protected from unauthorized access
Data in Use (DIU) needs to be protected from unauthorized processing and data leakage
Data in Transit (DIT) needs to be protected from unauthorized eavesdropping
Access to the cloud storage should be provisioned based on roles
Storage devices should have enough redundancy to ensure availability
All storage access should be monitored and recorded for auditing purpose
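The storage requirements above tie handling controls to data classification. A minimal sketch of such a mapping follows; the classification levels and rules are assumptions for illustration, not the framework's actual scheme:

```python
# Illustrative mapping from data classification to minimum storage controls.
# Levels and rules here are assumptions, not a normative scheme.
HANDLING_RULES = {
    "public":       {"encrypt_at_rest": False, "encrypt_in_transit": True,
                     "audit_log": False},
    "internal":     {"encrypt_at_rest": True,  "encrypt_in_transit": True,
                     "audit_log": True},
    "confidential": {"encrypt_at_rest": True,  "encrypt_in_transit": True,
                     "audit_log": True, "access": "role-based"},
}

def required_controls(classification: str) -> dict:
    """Minimum controls for data of a given classification; data that was
    never classified falls back to the strictest handling by default."""
    try:
        return HANDLING_RULES[classification]
    except KeyError:
        return HANDLING_RULES["confidential"]
```

Defaulting unknown classifications to the strictest tier enforces the rule that data must be classified at creation: unclassified data is never handled more loosely than confidential data.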
End User Computing
Computing equipment (i.e. Desktop, Laptop, etc.) used by the development and testing team should be protected from known and unknown threats.
All running processes and memory-based anomalies should be managed by continuous monitoring
Any changes to the baselined file, registry and memory needs to be monitored
Any unauthorized privilege escalations need to be alerted
All network communications (inbound and outbound) should be monitored and suspicious flags should trigger an alert
Proper anti-malware solution should be available to manage the prevention of threats
External devices (i.e. USB) activity should be monitored to prevent any data loss
Requirements Engineering

Security considerations are vital and should be thought through from the early stages of the development cycle, thereby ensuring that threats, requirements, and potential constraints are taken care of. During this stage, security is looked at in terms of business risks, with input from the information security team. Important points to consider are:
Identifying sources of security requirements, such as regulatory and legal requirements and industry-specific standards
Ensuring all key stakeholders have a common understanding, including security implications, considerations, and requirements
Understanding business requirements and data in terms of confidentiality, integrity, and availability
Identification and categorization of information, and the need for special handling requirements to transmit, store, or create information
Determination of any privacy requirements, such as the storing of personally identifiable information
The security classification of information being processed by the SaaS provider shall be based on the potential impact on an organization. Security categories are to be used in conjunction with vulnerability and threat information in assessing the risk to an organization of operating an information system.
NIST SP 800-60 is a Guide for mapping types of Information and Information systems to Security Categories and is a good resource for information categorization exercise.
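Security categorization in the NIST SP 800-60 / FIPS 199 style takes the high-water mark across the confidentiality, integrity and availability impact levels of a system. A minimal sketch:

```python
# FIPS 199-style impact ordering: the system's category is the highest
# impact level assigned to any of the three CIA security objectives.
IMPACT_ORDER = {"low": 0, "moderate": 1, "high": 2}

def security_category(confidentiality: str, integrity: str,
                      availability: str) -> str:
    """High-water mark across the three CIA impact levels."""
    return max(confidentiality, integrity, availability,
               key=lambda lvl: IMPACT_ORDER[lvl])
```

For example, a system whose data is low-impact for confidentiality but moderate-impact for availability is categorized as moderate overall.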
Business impact shall be assessed based on the
Core system components needed to maintain minimal functionality
Length of time the system can be down before the business is impacted (Recovery Time Objective)
Business tolerance for loss of data (Recovery Point Objective)
NIST SP 800-34, the Contingency Planning Guide for Information Technology Systems, is a good resource for the business impact assessment and contingency planning exercise.
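The RTO/RPO tolerances above can be checked mechanically. For example, the sketch below (with illustrative numbers) verifies whether a backup schedule can meet a stated Recovery Point Objective, assuming the worst case of a failure just before a backup completes:

```python
from datetime import timedelta

def max_data_loss(backup_interval: timedelta,
                  backup_duration: timedelta) -> timedelta:
    """Worst-case data loss window: a failure just before a backup completes
    loses everything written since the previous completed backup."""
    return backup_interval + backup_duration

def meets_rpo(backup_interval: timedelta, backup_duration: timedelta,
              rpo: timedelta) -> bool:
    """True if the schedule's worst-case loss stays within the RPO."""
    return max_data_loss(backup_interval, backup_duration) <= rpo
```

For instance, backups every 4 hours that take 30 minutes satisfy a 6-hour RPO (worst case 4.5 hours of loss), while backups every 8 hours do not.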
Assess Privacy Impact
Privacy impact assessment shall be conducted that provides details on where and to what degree privacy information is collected, stored, or created within the system. It is vital to understand the information that may be considered as privacy information. The security categorization process shall identify information type.
Any privacy information should be handled in accordance with the privacy laws and regulations of the region of operation or the regulations under which the cloud customer is contracted to
Necessary safeguards and security controls shall be incorporated
Processes shall be identified to address privacy information incident handling and reporting requirements
Identity & Access

Planning teams need to carefully consider their business requirements relative to the different integration options that are available, and then select the requirements that are most appropriate.
IAM requirements are organized into four categories: Account Provisioning & De-provisioning, Authentication, Authorization & Role Management, and Session Management.
Account provisioning and de-provisioning are the processes that create an account in an application, maintain the accuracy of account information over time, and delete the account when it is no longer needed in the application.
The application has an access control model that can be extended to accommodate custom roles.
The application can map groups or individuals into roles.
The application allows customization of the permissions applied to roles.
The application supports multiple roles per user without requiring multiple accounts.
The application has a configurable user session inactivity timeout.
The application has a configurable user session maximum lifetime.
The application can trigger authentication for an existing anonymous session when a user requests protected application content.
The application supports session context step-up from password authentication to 2-factor token authentication when more sensitive data or functions are requested by a user.
The application can force re-authentication of a user that was previously authenticated in the single sign-on (SSO) environment
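The session-management requirements above (configurable inactivity timeout, maximum session lifetime, and step-up from password to two-factor authentication before sensitive functions) can be sketched as follows; the `Session` class, its field names and the time values are illustrative assumptions:

```python
import time

class Session:
    """Sketch of the session controls listed above: a configurable inactivity
    timeout, a maximum lifetime, and step-up to a stronger auth level before
    sensitive data or functions are allowed."""
    def __init__(self, idle_timeout=900, max_lifetime=28800, now=time.time):
        self._now = now                       # injectable clock for testing
        self.idle_timeout = idle_timeout      # e.g. 15 minutes, configurable
        self.max_lifetime = max_lifetime      # e.g. 8 hours, configurable
        self.created = self.last_seen = now()
        self.auth_level = "password"          # becomes "2fa" after step-up

    def is_valid(self) -> bool:
        t = self._now()
        return (t - self.last_seen <= self.idle_timeout
                and t - self.created <= self.max_lifetime)

    def touch(self) -> None:
        """Refresh the inactivity timer on user activity."""
        if self.is_valid():
            self.last_seen = self._now()

    def step_up(self, token_ok: bool) -> None:
        """Elevate to 2-factor auth when sensitive functions are requested."""
        if token_ok:
            self.auth_level = "2fa"

    def can_access_sensitive(self) -> bool:
        return self.is_valid() and self.auth_level == "2fa"
```

Forced re-authentication in an SSO environment would simply discard the session object and restart the flow.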
These combined factors are propelling the adoption of privileged-access lifecycle management (PALM) solutions across all industries. PALM is a technology architecture framework consisting of four stages running continuously under a centralized automated platform:
Access to privileged resources
Control of privileged resources
Monitoring actions taken on privileged resources
Remediation to revert changes made on privileged IT resources to a known good state
Determining Data Sensitivity and Importance

Applications that may be implemented in a cloud environment should undergo an assessment of their sensitivity and importance. The following six key questions can be used to open a discussion of the application to determine its cloud-friendliness.
Impact in the following situations determine the sensitivity:
The data became widely public and widely distributed (including crossing geographic boundaries)
An employee of the cloud service provider (CSP) accessed the application
The process or function was manipulated by an outsider
The process or function failed to provide expected results
The data was unexpectedly changed
The application was unavailable for a period of time

These questions form the basis of an information-gathering exercise to identify and understand the requirements for the availability, integrity, and confidentiality (AIC) of an application and its associated information assets. They can be discussed with a system owner to begin a collaborative security discussion.
Common Pitfalls of Cloud Security

It is important to identify, communicate, and plan for potential cloud-based application challenges. Failure to do so can result in additional costs, failed projects, and duplication of effort, along with loss of efficiencies and executive sponsorship. Although many projects and cloud journeys may have an element of unique or nonstandard approaches, the pitfalls discussed in this section should always be understood and planned for.

Figure 4: Pitfalls of Cloud Security

On-Premises Apps Do Not Always Transfer
Current configurations and applications may be hard to replicate on or through cloud services because they were not developed with cloud-based services in mind. Another issue is that not all applications can be forklifted to the cloud.
Not All Apps Are Cloud Ready
Where high-value data and hardened security controls are applied, cloud development and testing can be more challenging. The reason for this is typically compounded by the requirement for such systems to be developed, tested, and assessed in on-premises or traditional environments to a level where confidentiality and integrity have been verified and assured.
Lack of Training and Awareness

New development techniques and approaches require training and a willingness to utilize new services. Typically, developers have become accustomed to working with Microsoft .NET, SQL Server, Java, and other traditional development techniques. When cloud-based environments are required or are requested by the organization, this may introduce challenges, particularly if it is a platform or system with which developers are unfamiliar.
Lack of Documentation and Guidelines

Best practice requires developers to follow relevant documentation, guidelines, methodologies, processes, and lifecycles to reduce opportunities for unnecessary or heightened risk to be introduced. Given the rapid adoption of evolving cloud services, this has led to a disconnect between some providers and developers on how to utilize, integrate, or meet vendor requirements for development. Although many providers are continuing to enhance levels of available documentation, the most up-to-date guidance may not always be available, particularly for new releases and updates.
For these reasons, the CCSP needs to understand the basic concept of a cloud software development lifecycle and what it can do for the organization. A software development lifecycle is essentially a series of steps, or phases, that provide a model for the development and lifecycle management of an application or piece of software. The methodology within the software development lifecycle process can vary across industries and organizations, but standards such as ISO/IEC 12207 represent processes that establish a lifecycle for software and provide a model for the development, acquisition, and configuration of software systems.

The intent of a software development lifecycle process is to help produce a product that is cost-efficient, effective, and high quality. The software development lifecycle methodology usually contains the following stages: analysis (requirements and design), construction, testing, release, and maintenance (response).
Complexities of Integration

Integrating new applications with existing ones can be a key part of the development process. When developers and operational resources do not have open or unrestricted access to supporting components and services, integration can be complicated, particularly where the CSP manages infrastructure, applications, and integration platforms. From a troubleshooting perspective, it can prove difficult to track or collect events and transactions across interdependent or underlying components. In an effort to reduce these complexities, where possible (and available), the CSP's API should be used.
At all times, developers must keep in mind two key risks associated with applications that
run in the cloud:
Multitenancy
Third-party administrators
It is also critical that developers understand the security requirements based on the following:
Deployment model (public, private, community, or hybrid) in which the application will run
Service model (infrastructure as a service [IaaS], platform as a service [PaaS], or
software as a service [SaaS])
These two models will assist in determining what security your provider will offer and
what your organization is responsible for implementing and maintaining.
It is critical to evaluate who is responsible for security controls across the deployment
and service models. Consider creating a sample responsibility matrix (Figure 4.3).
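Such a responsibility matrix can be sketched as a simple lookup table. The control areas and ownership values below are illustrative assumptions, not an authoritative mapping; the actual split depends on the provider and contract.

```python
# Illustrative shared-responsibility matrix: for each control area, record
# who owns it (customer or CSP) under each service model. Entries here are
# example values, not a definitive allocation.
RESPONSIBILITY_MATRIX = {
    #  control area          IaaS        PaaS        SaaS
    "physical security":    ("csp",      "csp",      "csp"),
    "host patching":        ("customer", "csp",      "csp"),
    "application code":     ("customer", "customer", "csp"),
    "data classification":  ("customer", "customer", "customer"),
}

MODELS = ("iaas", "paas", "saas")

def owner(control: str, model: str) -> str:
    """Return who is responsible for a control under a given service model."""
    return RESPONSIBILITY_MATRIX[control][MODELS.index(model)]

print(owner("host patching", "iaas"))   # customer
print(owner("host patching", "saas"))   # csp
```

Walking each control area through such a table makes gaps visible before contracts are signed, rather than after an incident.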
Additionally, developers must be aware that metrics will always be required and that cloud-based
applications may have a higher reliance on metrics than internal applications to
supply visibility into who is accessing the application and the actions they are performing.
This may require substantial development time to integrate said functionality and may
rule out a simple forklift approach.
Application containers are used to isolate applications from each other within the context of a running operating system instance.
Understanding the Software Development
Lifecycle Process for a Cloud Environment
The cloud further heightens the need for applications to go through a software development
lifecycle process. Following are the phases common to all software development lifecycle models:
1. Planning and requirements analysis: Business and security requirements and
standards are determined. This phase is the main focus of the project
managers and stakeholders. Meetings with managers, stakeholders, and users are
held to determine requirements. The software development lifecycle calls for all
business requirements (functional and nonfunctional) to be defined even before
initial design begins. Planning for the quality-assurance requirements and identification
of the risks associated with the project are also conducted in the planning
stage. The requirements are then analyzed for their validity and the possibility of
incorporating them into the system to be developed.
2. Defining: The defining phase is meant to clearly define and document the product
requirements so that they can be placed in front of the customers and approved.
This is done through a requirement specification document, which consists of
all the product requirements to be designed and developed during the project lifecycle.
3. Designing: System design helps in specifying hardware and system requirements
and helps in defining overall system architecture. The system design specifications
serve as input for the next phase of the model. Threat modeling and secure design
elements should be undertaken and discussed here.
4. Developing: Upon receiving the system design documents, work is divided into
modules or units and actual coding starts. This is typically the longest phase of the
software development lifecycle. Activities include code review, unit testing, and static analysis.
5. Testing: After the code is developed, it is tested against the requirements to make
sure that the product is actually solving the needs gathered during the requirements
phase. During this phase, unit testing, integration testing, system testing,
and acceptance testing are conducted.
Most software development lifecycle models include a maintenance phase as their endpoint.
Operations and disposal are included in some models as a way of further subdividing
the activities that traditionally take place in the maintenance phase, as noted in
the next sections.
Secure Operations Phase
From a security perspective, once the application has been implemented using software
development lifecycle principles, the application enters a secure operations phase. Proper
software configuration management and versioning are essential to application security.
There are some tools that can be used to ensure that the software is configured according
to specified requirements. Following are two such tools:
Puppet: According to Puppet Labs, Puppet is a configuration management system
that allows you to define the state of your IT infrastructure and then automatically
enforces the correct state.4
Chef: With Chef, you can automate how you build, deploy, and manage your
infrastructure. The Chef server stores your recipes as well as other configuration
data. The Chef client is installed on each server, virtual machine, container, or
networking device you manage (called nodes). The client periodically polls the
Chef server for the latest policy and the state of your network. If anything on the
node is out of date, the client brings it up to date.5
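The core idea behind both tools is idempotent convergence toward a declared desired state. The toy loop below sketches that idea in Python; the dict-based "resources" are illustrative only and are not the real resource model or API of either Puppet or Chef.

```python
# Toy sketch of desired-state convergence, the core idea behind
# configuration management tools such as Puppet and Chef.
desired_state = {"ntp.conf": "server pool.ntp.org", "sshd.port": "22"}

def converge(current: dict, desired: dict) -> list:
    """Bring `current` in line with `desired`; return what was corrected."""
    corrected = []
    for resource, value in desired.items():
        if current.get(resource) != value:
            current[resource] = value      # enforce the declared state
            corrected.append(resource)
    return corrected

node = {"ntp.conf": "server old.example.com"}   # a node that has drifted
print(converge(node, desired_state))  # ['ntp.conf', 'sshd.port']
print(converge(node, desired_state))  # [] -- second run is idempotent
```

The second run corrects nothing, which is the property that makes periodic polling (as in the Chef client) safe to repeat.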
The goal of these applications is to ensure that configurations are updated as needed
and that there is consistency in versioning. This phase calls for the following activities to be carried out:
Dynamic analysis
Vulnerability assessments and penetration testing (as part of a continuous monitoring program)
Activity monitoring
Layer-7 firewalls (such as web application firewalls)
When an application has run its course and is no longer required, it is disposed of. From
a cloud perspective, it is challenging to ensure that data is properly disposed of because
you have no way to physically remove the drives. To this end, there is the notion of
crypto-shredding. Crypto-shredding is effectively summed up as the deletion of the key
used to encrypt data that’s stored in the cloud.
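Crypto-shredding can be illustrated in a few lines: data is stored only in encrypted form, so destroying the key makes the ciphertext unrecoverable. The XOR keystream below is a toy cipher for illustration only; real deployments use vetted ciphers (e.g., AES-GCM) and a managed key service.

```python
import hashlib
import secrets

# Toy illustration of crypto-shredding. The keystream cipher below is NOT
# real cryptography -- it only demonstrates that without the key, the
# stored ciphertext cannot be turned back into plaintext.

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR `data` against a SHA-256-derived keystream (illustrative only)."""
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key = secrets.token_bytes(32)
stored = keystream_xor(key, b"customer record")   # only ciphertext at rest

print(keystream_xor(key, stored))  # b'customer record' -- key still held

key = None  # "crypto-shred": delete the key; the ciphertext is now unreadable
```

Since the cloud customer typically cannot wipe physical drives, deleting the key is the practical equivalent of destroying the data.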
Assessing Common Vulnerabilities
Applications run in the cloud should conform to best practice guidance and guidelines
for the assessment and ongoing management of vulnerabilities. As mentioned earlier,
implementation of an application risk-management program addresses not only vulnerabilities
but also all risks associated with applications.
The most common software vulnerabilities are found in the Open Web Application
Security Project (OWASP) Top 10. Here are the OWASP Top 10 entries for 2013 as well
as a description of each entry:
“Injection: Includes injection flaws such as SQL, OS, LDAP, and other injections.
These occur when untrusted data is sent to an interpreter as part of a
command or query. If the interpreter is successfully tricked, it will execute the
unintended commands or access data without proper authorization.
“Broken authentication and session management: Application functions related
to authentication and session management are often not implemented correctly,
allowing attackers to compromise passwords, keys, or session tokens or to
exploit other implementation flaws to assume other users’ identities.
“Cross-site scripting (XSS): XSS flaws occur whenever an application takes untrusted
data and sends it to a web browser without proper validation or escaping.
XSS allows attackers to execute scripts in the victim’s browser, which can hijack
user sessions, deface websites, or redirect the user to malicious sites.
“Insecure direct object references: A direct object reference occurs when a
developer exposes a reference to an internal implementation object, such as a file,
directory, or database key. Without an access control check or other protection,
attackers can manipulate these references to access unauthorized data.
“Security misconfiguration: Good security requires having a secure configuration
defined and deployed for the application, frameworks, application server, web
server, database server, and platform. Secure settings should be defined, implemented,
and maintained, as defaults are often insecure. Additionally, software
should be kept up to date.
“Sensitive data exposure: Many web applications do not properly protect sensitive
data, such as credit cards, tax IDs, and authentication credentials. Attackers
may steal or modify such weakly protected data to conduct credit card fraud,
identity theft, or other crimes. Sensitive data deserves extra protection, such as
encryption at rest or in transit, as well as special precautions when exchanged with the browser.
“Missing function-level access control: Most web applications verify function-level access rights before making that functionality visible in the UI. However,
applications need to perform the same access control checks on the server when
each function is accessed. If requests are not verified, attackers will be able to
forge requests in order to access functionality without proper authorization.
“Cross-site request forgery (CSRF): A CSRF attack forces a logged-on victim’s
browser to send a forged HTTP request, including the victim’s session cookie and
any other automatically included authentication information, to a vulnerable
web application. This allows the attacker to force the victim’s browser to generate
requests that the vulnerable application thinks are legitimate requests from the victim.
“Using components with known vulnerabilities: Components, such as libraries,
frameworks, and other software modules, almost always run with full privileges.
If a vulnerable component is exploited, such an attack can facilitate serious data
loss or server takeover. Applications using components with known vulnerabilities
may undermine application defenses and enable a range of possible attacks and impacts.
“Unvalidated redirects and forwards: Web applications frequently redirect and
forward users to other pages and websites, and use untrusted data to determine
the destination pages. Without proper validation, attackers can redirect victims to
phishing or malware sites or use forwards to access unauthorized pages.”6
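The injection entry at the top of the list can be made concrete with a short sketch: the first query below builds SQL by concatenating untrusted input into the command, so the interpreter is tricked into returning every row; the parameterized form passes the same input purely as data.

```python
import sqlite3

# Demonstrates SQL injection and its standard mitigation (parameterized
# queries) using Python's built-in sqlite3 module.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

untrusted = "x' OR '1'='1"   # attacker-supplied input

# VULNERABLE: input is concatenated into the command the interpreter runs.
rows = conn.execute(
    "SELECT name FROM users WHERE name = '" + untrusted + "'").fetchall()
print(rows)   # every user leaks: [('alice',), ('bob',)]

# SAFE: a parameterized query -- the driver treats the input purely as data.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (untrusted,)).fetchall()
print(rows)   # [] -- no user is literally named "x' OR '1'='1"
```

The same data-versus-command separation is the mitigation for OS and LDAP injection as well: never let untrusted input become part of the command text.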
To address these vulnerabilities, organizations must have an application risk-management
program in place, which should be part of an ongoing managed process.
One possible approach to building such a risk-management process can be derived from
the NIST “Framework for Improving Critical Infrastructure Cybersecurity.”7 Initially
released in February 2014 as version 1.0, the framework started out as Executive Order
13636, issued in February 2013.8
The framework is composed of three parts:
Framework Core: Cybersecurity activities and outcomes divided into five functions:
identify, protect, detect, respond, and recover
Framework Profile: To help the company align activities with business requirements,
risk tolerance, and resources
Framework Implementation Tiers: To help organizations categorize where they
are with their approach
Building from those standards, guidelines, and practices, the framework provides a
common taxonomy and mechanism for organizations to do the following:
Describe their current cybersecurity posture
Describe their target state for cybersecurity
Identify and prioritize opportunities for improvement within the context of a
continuous and repeatable process
Assess progress toward the target state
Communicate among internal and external stakeholders about cybersecurity risk
A good first step in understanding how the framework can help inform and improve
your existing application security program is to go through it with an application security focus.
You will now examine the first function in the Framework Core, Identify (ID), and its
categories—Asset Management (ID.AM) and Risk Assessment (ID.RA).
ID.AM contains the following subcategories:
ID.AM-2: Software platforms and applications within the organization are inventoried.
ID.AM-3: Organizational communication and data flows are mapped.
ID.AM-5: Resources (such as hardware, devices, data, and software) are prioritized
based on their classification, criticality, and business value.
ID.RA contains the following subcategories:
ID.RA-1: Asset vulnerabilities are identified and documented.
ID.RA-5: Threats, vulnerabilities, likelihoods, and impacts are used to determine risk.
According to Diana Kelley, executive security advisor at IBM Security, “There is a lot
in the Framework that would map nicely to a risk-based software security program. Classifying
applications on criticality and business value can be brought to a deeper and more
precise level when the threat model and vulnerability profile of that application is understood
and validated with testing.”9
Cloud-Specific Risks
Whether run in a platform as a service (PaaS) or infrastructure as a service (IaaS) deployment
model, applications running in a cloud environment may not enjoy the same security
controls surrounding them as applications that run in a traditional data center environment.
This makes the need for an application risk-management program all the more critical.
Applications that run in a PaaS environment may need security controls baked into
them. For example, encryption may need to be programmed into applications, and
logging may be difficult depending on what the cloud service provider can offer your organization.
Application isolation is another component that must be addressed in a cloud environment.
You must take steps to ensure that one application cannot access other applications
on the platform unless it’s allowed access through a control.
The Cloud Security Alliance’s Top Threats Working Group has published The Notorious
Nine: Cloud Computing Top Threats in 2013.10 Following are the nine top threats listed in the report:
Data breaches: If a multitenant cloud service database is not properly designed,
a flaw in one client’s application can allow an attacker access not only to that
client’s data but to every other client’s data as well.
TLS is a cryptographic protocol designed to provide communication security over a network. It uses X.509 certificates to authenticate a connection and to exchange a symmetric key. This key is then used to encrypt any data sent over the connection. The TLS protocol allows client/server applications to communicate across a network in a way designed to ensure confidentiality.
TLS is made up of two layers:
TLS record protocol: Provides connection security and ensures that the connection is private and reliable. Used to encapsulate higher-level protocols, among them the TLS handshake protocol.
TLS handshake protocol: Allows the client and the server to authenticate each other and to negotiate an encryption algorithm and cryptographic keys before data is sent or received.
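In Python, the two layers above are driven through the standard-library ssl module. The sketch below builds a client-side context with the default protections (certificate verification and hostname checking, i.e., the authentication step of the handshake protocol); the hostname in the commented connection code is a placeholder.

```python
import socket
import ssl

# Minimal client-side TLS sketch. The default context verifies the server
# certificate chain and checks the hostname against it.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)   # True
print(context.check_hostname)                     # True

# Making a connection (network access assumed; host is a placeholder):
# with socket.create_connection(("example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.version())   # negotiated protocol, e.g. "TLSv1.3"
```

Disabling either check (verify_mode or check_hostname) removes the authentication guarantee of the handshake and opens the connection to man-in-the-middle attacks.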
Data loss: Any accidental deletion by the CSP, or worse, a physical catastrophe
such as a fire or earthquake, can lead to the permanent loss of customers’ data unless
the provider takes adequate measures to back it up. Furthermore, the burden
of avoiding data loss does not fall solely on the provider’s shoulders. If a customer
encrypts his data before uploading it to the cloud but loses the encryption key, the
data is still lost.
Account hijacking: If attackers gain access to your credentials, they can eavesdrop
on your activities and transactions, manipulate data, return falsified information,
and redirect your clients to illegitimate sites. Your account or service instances
may become a new base for the attacker.
Insecure APIs: Cloud computing providers expose a set of software interfaces or
APIs that customers use to manage and interact with cloud services. Provisioning,
management, orchestration, and monitoring are all performed using these interfaces.
The security and availability of general cloud services is dependent on the
security of these basic APIs. From authentication and access control to encryption
and activity monitoring, these interfaces must be designed to protect against both
accidental and malicious attempts to circumvent policy.
Denial of service (DoS): By forcing the victim cloud service to consume inordinate
amounts of finite system resources such as processor power, memory,
disk space, and network bandwidth, the attacker causes an intolerable system slowdown.
Malicious insiders: European Organization for Nuclear Research (CERN)
defines an insider threat as “A current or former employee, contractor, or other
business partner who has or had authorized access to an organization’s network,
system, or data and intentionally exceeded or misused that access in a manner
that negatively affected the confidentiality, integrity, or availability of the organization’s
information or information systems.”11
Abuse of cloud services: It might take an attacker years to crack an encryption key
using his own limited hardware, but using an array of cloud servers, he might be
able to crack it in minutes. Alternatively, he might use that array of cloud servers
to stage a distributed denial-of-service (DDoS) attack, serve malware, or distribute pirated software.
Insufficient due diligence: Too many enterprises jump into the cloud without
understanding the full scope of the undertaking. Without a complete understanding
of the CSP environment, applications, or services being pushed to the cloud,
and operational responsibilities such as incident response, encryption, and security
monitoring, organizations are taking on unknown levels of risk in ways they
may not even comprehend but that are a far departure from their current risks.
Shared technology issues: Whether it’s the underlying components that make
up this infrastructure (central processing unit (CPU) caches, graphics processing
units (GPUs), and so on) that were not designed to offer strong isolation
properties for a multitenant architecture (IaaS), redeployable platforms (PaaS),
or multicustomer applications (SaaS), the threat of shared vulnerabilities exists
in all delivery models. A defense-in-depth strategy is recommended and should
include compute, storage, network, application and user security enforcement,
and monitoring, whether the service model is IaaS, PaaS, or SaaS. The key is that
a single vulnerability or misconfiguration can lead to a compromise across an
entire provider’s cloud.
Physical Infrastructure in Cloud
With SaaS deployment, control for certain assets in the cloud might be lost, and the security model must account for that. Following are some important considerations when sharing physical resources:
Legal: Simply by sharing the environment in the cloud, you may put your data at risk of seizure. Exposing your data in an environment shared with other companies can give the government “reasonable cause” to seize your assets because another company has violated the law.
Compatibility: Storage services provided by one cloud vendor may be incompatible with another vendor’s services should you decide to move from one to the other.
Control: If information is encrypted while passing through the cloud, does the customer or the cloud vendor control the encryption and decryption keys? Most customers probably want their data encrypted both ways across the Internet using the secure sockets layer (SSL) protocol. They also most likely want their data encrypted while it is at rest in the cloud vendor’s storage pool. Make sure you control the encryption and decryption keys, just as if the data were still resident on the enterprise’s own servers.
Log data: As more and more mission-critical processes are moved to the cloud, SaaS suppliers have to provide log data in a real-time, straightforward manner, probably for their administrators as well as their customers’ personnel. Will customers trust the CSP enough to push their mission-critical applications out to the cloud? Because the SaaS provider’s logs are internal and not necessarily accessible externally or by clients or investigators, monitoring is difficult.
PCI DSS access: Because access to logs is required for PCI DSS compliance and may be requested by auditors and regulators, security managers need to make sure to negotiate access to the provider’s logs as part of any service agreement.
Upgrades and changes: Cloud applications undergo constant feature additions. Users must keep up to date with application improvements to be sure they are protected. The speed at which applications change in the cloud affects both the software development lifecycle and security. A secure software development lifecycle may not be able to provide a security cycle that keeps up with changes that occur so quickly. This means that users must constantly upgrade because an older version may not function or protect the data.
Failover technology: Having proper failover technology is a component of securing the cloud that is often overlooked. The company can survive if a non-mission-critical application goes offline, but this may not be true for mission-critical applications. Security needs to move to the data level so that enterprises can be sure their data is protected wherever it goes. Sensitive data is the domain of the enterprise, not of the cloud computing provider. One of the key challenges in cloud computing is data-level security.
Compliance: Many compliance regulations require that data not be intermixed with other data, such as on shared servers or databases. Some countries have strict limits on what data about its citizens can be stored and for how long, and some banking regulators require that customers’ financial data remain in their home country.
Placement of security: Cloud-based services result in many mobile IT users accessing business data and services without traversing the corporate network. This increases the need for enterprises to place security controls between mobile users and cloud-based services. Placing large amounts of sensitive data in a globally accessible cloud leaves organizations open to large, distributed threats.
Data fluidity: Data is fluid in cloud computing and may reside in on-premises physical servers, on-premises VMs, or off-premises VMs running on cloud computing resources. This requires some rethinking to manage the auditing and reporting.
Threat Modeling
The goal of threat modeling is to determine any weaknesses in the application and the potential ingress, egress, and actors involved before the weakness is introduced to production. The overall attack surface is amplified by the cloud, and the threat model has to take that into account. One such model is STRIDE, developed by Microsoft for assessing the security posture of an application.
STRIDE Threat Model
STRIDE is a system for classifying known threats according to the kinds of exploits that
are used or the motivation of the attacker. In the STRIDE threat model, the following six
threats are considered, and controls are used to address the threats:
Spoofing: Attacker assumes identity of subject
Tampering: Data or messages altered by an attacker
Repudiation: Illegitimate denial of an event
Information disclosure: Information obtained without authorization
Denial of service: Attacker overloads system to deny legitimate access
Elevation of privilege: Attacker gains a privilege level above what is permitted
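Each STRIDE threat pairs naturally with a control family, which makes the model easy to encode as a lookup during design reviews. The mapping below is a commonly used pairing, not an official Microsoft table.

```python
# Illustrative STRIDE-to-control mapping (a common pairing, not an
# official table): each threat class is countered by a security property.
STRIDE = {
    "Spoofing":               "authentication",
    "Tampering":              "integrity (signatures, MACs)",
    "Repudiation":            "non-repudiation (audit logging, signing)",
    "Information disclosure": "confidentiality (encryption, access control)",
    "Denial of service":      "availability (rate limiting, redundancy)",
    "Elevation of privilege": "authorization (least privilege)",
}

def control_for(threat: str) -> str:
    """Return the control family that addresses a STRIDE threat class."""
    return STRIDE[threat]

print(control_for("Spoofing"))   # authentication
```

During a design review, each data flow in the system can be walked through all six keys of this table to check that the corresponding control exists.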
Today’s software applications are built by leveraging other software components as
building blocks to create a unique software offering. The leveraged software is often seen as a “black box” by developers, who might not have the ability, or might not think, to verify the
security of the applications and code. However, it remains the responsibility of the organization
to assess code for proper, secure function no matter where the code is sourced.
This section discusses some of the security aspects involved with the selection of software
components that are leveraged by your organization’s developers.
Approved Application Programming Interfaces
Application programming interfaces (APIs) are a means for enterprises to expose functionality
to applications. Following are three benefits of APIs:
Programmatic control and access
Automation
Integration with third-party tools
Consumption of APIs can lead to your firm leveraging insecure products. As discussed
in the next section, organizations must also consider the security of software (and
APIs) outside of their corporate boundaries. Consumption of external APIs should go
through the same approval process that’s used for all other software being consumed by
the organization. The CCSP needs to ensure that there is a formal approval process in
place for all APIs. If there is a change in an API or an issue due to an unforeseen threat,
a vendor update, or any other reason, the API in question should not be allowed until a
thorough review has been undertaken to assess the integrity of the API in light of the new
When leveraging APIs, the CCSP should take steps to ensure that API access is secured.
This requires the use of secure sockets layer (SSL) for REST, or message-level cryptography for SOAP, along with
authentication and logging of API usage. In addition, the use of a tool such
as OWASP’s Dependency-Check—a utility that identifies project dependencies
and checks whether there are any known, publicly disclosed vulnerabilities—is recommended.13 This tool currently supports Java and .NET dependencies.14
Software Supply Chain (API) Management
It is critical for organizations to consider the implications of nonsecure software beyond
their corporate boundaries. The ease with which software components with unknown
pedigrees or with uncertain development processes can be combined to produce new
applications has created a complex and highly dynamic software supply chain (API management).
In effect, people are consuming more and more software that is being developed
by a third party or accessed with or through third-party libraries to create or enable
functionality, without having a clear understanding of the origins of the software and
code in question. This often leads to a situation in which a complex and highly dynamic
software interaction is taking place between and among one or more services and systems
within the organization and between organizations via the cloud.
This supply chain supplies agility in the rapid development of applications to meet
consumer demand. However, software components produced without secure software
development guidance similar to that defined by ISO/IEC 27034-1 can create security
risks throughout the supply chain.15 Therefore, it is important to assess all code and services
for proper and secure functioning no matter where they are sourced.
Securing Open Source Software
Software that the community at large has openly tested and reviewed is considered by
many security professionals to be more secure than software that has not undergone such
a process. This can include open source software.
By moving toward leveraging standards such as ISO 27034-1, companies can be confident
that partners have the same understanding of application security. This increases security
as organizations, regulatory bodies, and the IT audit community learn the importance
of embedding security throughout the processes required to build and consume software.
Identity and Access Management
Identity and access management (IAM) includes people, processes, and systems that
manage access to enterprise resources by ensuring that the identity of an entity is verified
and then granting the correct level of access based on the protected resource, this assured
identity, and other contextual information (Figure 4.4).
IAM capabilities include the following:
Identity management
Access management
Identity repository and directory services
Identity management is a broad administrative area that deals with identifying individuals
in a system and controlling their access to resources within that system by associating user
rights and restrictions with the established identity.
Access management deals with managing an individual’s access to resources and is based
on the answers to “Who are you?” and “What do you have access to?”
Authentication identifies the individual and ensures that he is who he claims to
be. It establishes identity by asking, “Who are you?” and “How do I know I can trust you?”
Authorization evaluates “What do you have access to?” after authentication occurs.
Policy management establishes the security and access policies based on business
needs and the degree of acceptable risk.
Federation is an association of organizations that come together to exchange
information, as appropriate, about their users and resources to enable collaborations and transactions.
Identity repository includes the directory services for the administration of user accounts and their attributes.
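The authentication/authorization split can be sketched as two separate checks against an identity store. The user names, permissions, and store below are hypothetical, and the bare SHA-256 password hash is for brevity only; real systems use salted, slow hashes (e.g., PBKDF2 or scrypt).

```python
import hashlib
import hmac

# Hypothetical identity store. Authentication answers "who are you?";
# authorization answers "what do you have access to?".
USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
GRANTS = {"alice": {"reports:read"}}

def authenticate(user: str, password: str) -> bool:
    """Verify identity: does the presented credential match the store?"""
    digest = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(USERS.get(user, ""), digest)

def authorize(user: str, permission: str) -> bool:
    """Check access: does this (already authenticated) user hold the grant?"""
    return permission in GRANTS.get(user, set())

print(authenticate("alice", "s3cret"))        # True
print(authorize("alice", "reports:read"))     # True
print(authorize("alice", "reports:delete"))   # False
```

Keeping the two functions separate mirrors the IAM model above: proving identity never implies access, and access checks always assume identity has already been established.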
Identity Repository and Directory Services
Identity repositories provide directory services for the administration of user accounts and
their attributes. Directory services are customizable information stores that offer a single
point of administration and user access to resources and services used to manage, locate,
and organize objects. Common directory services include these:
X.500 and LDAP
Microsoft Active Directory
Novell eDirectory
Metadata replication and synchronization
Directory as a service
Federated Identity Management
Federated identity management (FIM) provides the policies, processes, and mechanisms
that manage identity and trusted access to systems across organizations.
The technology of federation is much like that of Kerberos within an Active Directory
domain: a user logs on once to a domain controller, is ultimately granted an access token,
and uses that token to gain access to systems for which the user has authorization. The
difference is that whereas Kerberos works well in a single domain, federated identities
allow for the generation of tokens (authentication) in one domain and the consumption
of these tokens (authorization) in another domain.
Although many federation standards exist, Security Assertion Markup Language (SAML)
2.0 is by far the most commonly accepted standard used in the industry today. According
to Oasis, SAML 2.0 is an “XML-based framework for communicating user authentication,
entitlement, and attribute information. As its name suggests, SAML allows business
entities to make assertions regarding the identity, attributes, and entitlements of a subject
(an entity that is often a human user) to other entities, such as a partner company or
another enterprise application.”17
Other standards in the federation space exist:
WS-Federation: According to the WS-Federation Version 1.2 OASIS standard,
“this specification defines mechanisms to allow different security realms to
federate, such that authorized access to resources managed in one realm can be
provided to security principals whose identities are managed in other realms.”18
OpenID Connect: According to the OpenID Connect FAQ, this is an interoperable
authentication protocol based on the OAuth 2.0 family of specifications.
According to OpenID, “Connect lets developers authenticate their users across
websites and apps without having to own and manage password files. For the
app builder, it provides a secure verifiable answer to the question: ‘What is the
identity of the person currently using the browser or native app that is connected to me?’”19
OAuth: OAuth is widely used for authorization services in web and mobile
applications. According to RFC 6749, “The OAuth 2.0 authorization framework
enables a third-party application to obtain limited access to an HTTP service,
either on behalf of a resource owner by orchestrating an approval interaction
between the resource owner and the HTTP service, or by allowing the third-party
application to obtain access on its own behalf.”20
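RFC 6749's client credentials grant (the "on its own behalf" case quoted above) reduces to a single form-encoded POST to the token endpoint. The sketch below builds such a request with the standard library; the endpoint URL and credentials are placeholders, and the request is constructed but not actually sent.

```python
import urllib.parse
import urllib.request

# Sketch of an OAuth 2.0 client-credentials token request (RFC 6749, 4.4).
# Endpoint and credentials are hypothetical; nothing is sent over the wire.
token_url = "https://auth.example.com/oauth/token"   # placeholder endpoint
body = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": "my-app",           # placeholder client identifier
    "client_secret": "my-secret",    # placeholder secret
    "scope": "reports.read",
}).encode()

request = urllib.request.Request(
    token_url, data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(request.get_method())                        # POST
print(b"grant_type=client_credentials" in body)    # True
# urllib.request.urlopen(request) would return a JSON response containing
# the access token, which is then presented on subsequent API calls.
```

The returned bearer token, not the client secret, is what accompanies each API call, which is why token lifetime and scope are the main levers for limiting exposure.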
In some cases, the standard that is used may be dictated based on the use cases to be
supported. Take, for example, the Shibboleth standard. This federation standard is heavily
used in the education space. If your organization is in this space, you may very well
have a requirement to support the Shibboleth standard in addition to SAML. According
to the Shibboleth Consortium, “A user authenticates with his or her organizational credentials,
and the organization (or identity provider) passes the minimal identity information
necessary to the service provider to enable an authorization decision. Shibboleth also
provides extended privacy functionality allowing a user and their home site to control the
attributes released to each application.”21
Federated Identity Providers
In a federated environment, there is an identity provider and a relying party. The identity
provider holds all the identities and generates a token for known users. The relying party
is the service provider and consumes these tokens.
In a cloud environment, it is desirable that the organization itself continues to maintain
all identities and act as the identity provider.
Federated SSO is typically used for facilitating interorganizational and inter-security-domain access to resources leveraging federated identity management.
SSO should not be confused with reduced sign-on (RSO). RSO generally operates
through some form of credential synchronization. Implementation of an RSO solution
introduces security issues not experienced by SSO because the nature of SSO eliminates
usernames and other sensitive data from traversing the network. The foundation of federation
relies on the existence of an identity provider; therefore, RSO has no place in a
federated identity system.
Multifactor Authentication
Multifactor authentication goes by many names, including two-factor authentication and
strong authentication. The general principle behind multifactor authentication is to add
an extra level of protection to verify the legitimacy of a transaction. To be a multifactor
system, users must be able to provide at least two of the following requirements:
• What they know (such as a password)
• What they have (such as a display token with random numbers displayed)
• What they are (such as biometrics)
One-time passwords also fall under the banner of multifactor authentication. The use
of one-time passwords is strongly encouraged during provisioning and communicating of
first-login passwords to users.
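The one-time passwords used as a second factor are commonly generated with the HOTP (RFC 4226) and TOTP (RFC 6238) algorithms. The sketch below implements both using only the Python standard library; it is a minimal illustration of the algorithms, not a hardened authenticator.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """HOTP (RFC 4226): HMAC-SHA1 over an 8-byte big-endian counter,
    dynamic truncation, then reduction modulo 10^digits."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret, at=None, step=30, digits=6):
    """TOTP (RFC 6238): HOTP with a counter derived from Unix time."""
    now = time.time() if at is None else at
    return hotp(secret, int(now // step), digits)
```

With the RFC 4226 test secret (the ASCII string "12345678901234567890") the first codes are 755224, 287082, and 359152, which matches the vectors published in the RFC.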
Step-up authentication is an additional factor or procedure that validates a user’s identity,
normally prompted by high-risk transactions or violations according to policy rules.
Three methods are commonly used:
• Challenge questions
• Out-of-band authentication (a call or Short Message Service [SMS] text message to the end user)
• Dynamic knowledge-based authentication (questions unique to the end user)
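The policy-driven trigger for step-up authentication can be sketched as a small rules function. The risk signals and the monetary threshold below are purely illustrative assumptions, not taken from any standard or product.

```python
def requires_step_up(amount, new_device, impossible_travel, threshold=1000.0):
    """Return True when any illustrative high-risk rule fires and the
    user should be prompted for an additional authentication factor."""
    rules = [
        amount >= threshold,    # high-value transaction
        new_device,             # login from an unrecognized device
        impossible_travel,      # geo-velocity policy violation
    ]
    return any(rules)
```

A real risk engine would weight and combine many more signals; the point here is only that step-up is gated by policy rules rather than applied to every transaction.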
Application security must be thought through during the early stages of the development lifecycle. The development team should understand the application and be able to identify security defects in functional behaviour and business-process logic. Developers are the first line of defense and the first opportunity to build in security.
Operations document for secure development should be established for the environment
Contingency plan shall be in place for the code repository
Security processes for the assurance level required by the system should be determined and documented
Training shall be provided for key developers to understand the current threats and potential exploitations of the developed system
Training shall be provided for secure design and coding techniques
Role-based access should apply to accessing the code repository, and logs should be reviewed
Security vulnerabilities that appear during development can be categorized as follows:
Custom code defects
Open source and third-party components
Many development components comprise open source code. This comes with known and unknown vulnerabilities as well as certain licensing liabilities. Sufficient control is required to manage the open source components that get into the production code. Software composition analysis provides visibility into the open source components that are part of the product.
Open source usage must follow the organization policy for open source management
Tools shall be deployed that can integrate with the development environment and alert early in the development lifecycle
List of open source components and their licensing information shall be identified
Open source license risks shall be identified based on the declared vs scanned results
All open source components shall be monitored and tracked for new vulnerabilities and risks
All policy exceptions shall be reviewed, approved and recorded for auditing purpose
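A software composition analysis gate can be reduced to checking each declared component against a license policy and a vulnerability feed. The component name, version, CVE identifier, and denied-license list below are hypothetical; a real pipeline would populate this data from a scanner and an advisory database.

```python
# Hypothetical policy data; a real SCA tool would populate these
# from license scans and vulnerability advisories.
DENIED_LICENSES = {"AGPL-3.0"}
KNOWN_VULNS = {("examplelib", "1.2.0"): ["CVE-2024-0001"]}

def check_component(name, version, license_id):
    """Return a list of policy findings for one open source component."""
    findings = []
    if license_id in DENIED_LICENSES:
        findings.append(f"license {license_id} violates policy")
    for cve in KNOWN_VULNS.get((name, version), []):
        findings.append(f"known vulnerability {cve}")
    return findings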
Deployment & Operations
Supplemental Security Devices
Supplemental security devices add additional elements and layers to a defense-in-depth
architecture. The general approach for a defense-in-depth architecture is to design using
multiple overlapping and mutually reinforcing elements and controls that allow for the
establishment of a robust security architecture. By using a selection of the supplemental
security devices discussed next, the CCSP can augment the security architecture of the
organization by strengthening the border defenses.
Supplemental security devices include the following:
• Web application firewall (WAF)
• A WAF is a layer-7 firewall that can understand HTTP traffic.
• A cloud WAF can be extremely effective in the case of a DoS attack; in several cases, a cloud WAF was used to successfully thwart DoS attacks of 350 Gbps and 450 Gbps.
• Database activity monitoring (DAM)
• DAM is a layer-7 monitoring device that understands SQL commands.
• DAM can be agent-based (ADAM) or network-based (NDAM).
• A DAM can detect and stop malicious commands from executing on an SQL server.
• XML gateway
• XML gateways transform the way services and sensitive data are exposed as APIs to developers, mobile users, and cloud users.
• XML gateways can be either hardware or software.
• XML gateways can implement security controls such as data loss prevention (DLP), antivirus, and antimalware services.
• Firewall
• Firewalls can be distributed or configured across the SaaS, PaaS, and IaaS landscapes; these can be owned and operated by the provider or can be outsourced to a third party for ongoing management and maintenance.
• Firewalls in the cloud need to be installed as software components (such as host-based firewalls).
• API gateway
• An API gateway is a device that filters API traffic; it can be installed as a proxy or as a specific part of your application stack before data is processed.
• An API gateway can implement access control, rate limiting, logging, metrics, and security filtering.
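Rate limiting, one of the API gateway functions listed above, is frequently implemented as a token bucket per client. The following sketch is a minimal single-process version under that assumption; production gateways keep these counters in shared storage so limits hold across instances.

```python
import time

class TokenBucket:
    """Per-client token bucket: refills at `rate` tokens per second,
    allowing bursts of up to `capacity` requests."""

    def __init__(self, rate, capacity, now=None):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        """Return True if a request may pass, consuming one token."""
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The explicit `now` parameter exists only to make the refill logic easy to reason about and test; callers normally rely on the monotonic clock default.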
<Work in Progress>
Data Governance
Awareness of Encryption Dependencies
Development staff must take into account the environment their applications will be running
in and the possible encryption dependencies in the following modes:
• Encryption of data at rest: Addresses encrypting data as it is stored within the CSP network (such as hard disk drives [HDD], storage area networks [SAN], network-attached storage [NAS], and solid-state drives [SSD])
• Encryption of data in transit: Addresses the security of data while it traverses the network (such as the CSP network or the Internet)
Additionally, the following method may be applied to data to prevent unauthorized
viewing or accessing of sensitive information:
• Data masking (or data obfuscation): The process of hiding original data with random characters or data
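A simple form of the masking described above replaces all but the trailing characters of a sensitive value, as in the familiar rendering of card numbers. This sketch is illustrative only; real masking regimes must also consider format preservation and where the unmasked data is stored.

```python
def mask(value, visible=4, mask_char="*"):
    """Replace all but the last `visible` characters with `mask_char`.
    Values shorter than `visible` are masked entirely."""
    if len(value) <= visible:
        return mask_char * len(value)
    return mask_char * (len(value) - visible) + value[-visible:]
```

For example, a 16-digit card number masks down to twelve asterisks followed by its last four digits, which is typically enough for a support agent to confirm the card without seeing it.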
When encryption is provided or supported by the CSP, an understanding of the
encryption types, strength, algorithms, key management, and any associated responsibilities
of other parties should be documented and understood. Additionally, depending
on the industry type, relevant certifications or criteria may be required for the relevant
encryption being used.
Beyond encryption aspects of security, threat modeling (discussed later in this
domain) must address attacks from other cloud tenants as well as attacks in which one
organization’s application is used as a mechanism to perform attacks on other corporate
applications in the same or other systems.
2.10.1 Data in Use
<Work in Progress>
2.10.2 Data in Motion
<Work in Progress>
2.10.3 Data at Rest
<Work in Progress>
2.10.4 Cross Border transfers
<Work in Progress>
Business Continuity Management
Business continuity management (BCM) is focused on the planning steps that businesses
engage in to ensure that their mission-critical systems are able to be restored to service
following a disaster or service interruption event. To focus the BCM activities correctly,
a prioritized ranking or listing of systems and services must be created and maintained.
This is accomplished through the use of a business impact analysis (BIA) process. The
BIA is designed to identify and produce a prioritized listing of systems and services critical
to the normal functioning of the business. Once the BIA has been completed, the
CCSP can go about devising plans and strategies that will enable the continuation of
business operations and the quick recovery from any type of disruption.
Comparing BC and BCM
It is important to understand the difference between BC and BCM:
• BC is defined as the capability of the organization to continue delivery of products or services at acceptable predefined levels following a disruptive incident. (Source: ISO 22301:2012)22
• BCM is defined as a holistic management process that identifies potential threats to an organization and the impacts to business operations those threats, if realized, might cause. It provides a framework for building organizational resilience with the capability of an effective response that safeguards the interests of its key stakeholders, reputation, brand, and value-creating activities. (Source: ISO 22301:2012)
Continuity Management Plan
A detailed continuity management plan should include the following:
• Required capability and capacity of backup systems
• Trigger events to implement the plan
• Clearly defined roles and responsibilities by name and title
• Clearly defined continuity and recovery procedures
• Notification requirements
The plan should be tested at regular intervals.
Continual Service Improvement Management
Metrics on all services and processes should be collected and analyzed to find areas of
improvement using a formal process. You can use various tools and standards to monitor
performance. One example is the ITIL framework. The organization should adopt and
utilize one or more of these tools.
IT Service Management
Implementation of Network Security Controls
The implementation of network security controls was discussed extensively earlier in this
document. You need to be able to follow and implement best practices for all security controls.
With regard to network-based controls, consider the following general guidelines:
• Defense in depth
• Access controls
• Secure protocol usage (that is, IPsec and TLS)
• IDS/IPS system deployments
• Separation of traffic flows within the host from the guests via use of separate virtual switches dedicated to specific traffic
• Zoning and masking of storage traffic
• Deployment of virtual security infrastructure specifically designed to secure and monitor virtual networks (that is, VMware’s vCloud Networking and Security [vCNS] or NSX products)
Log Capture and Analysis
Log data needs to be collected and analyzed both for the hosts as well as for the guest
running on top of the hosts. Various tools allow you to collect and consolidate log data.
Centralization and offsite storage of log data can prevent tampering provided the
appropriate access controls and monitoring systems are put in place.
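One way to make centralized logs tamper-evident, complementing the access controls mentioned above, is to hash-chain entries so that altering any record invalidates every subsequent hash. The structure below is a minimal sketch of that idea, not a description of any particular logging product.

```python
import hashlib
import json

def append_entry(chain, record):
    """Append a log record linked to the previous entry's hash, so that
    retroactive tampering is detectable when the chain is re-verified."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev, "hash": digest})
    return chain

def verify_chain(chain):
    """Re-walk the chain and recompute every hash; False on any mismatch."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Shipping the latest chain hash to offsite storage at intervals gives an external anchor: an attacker who edits earlier records cannot also fix the anchored hash.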
You are responsible for understanding the needs of the organization with regard to log
capture and analysis. You are also responsible for ensuring that the necessary toolsets and
solutions are implemented so that log information can be managed using best practices.
Management Plan Implementation Through the Management Plane
You must develop a detailed management plan for the cloud environment. You are ultimately
accountable for the security architecture and resiliency of the systems you design,
implement, and manage.
Ensure due diligence and due care are exercised in the design and implementation of
all aspects of the enterprise cloud security architecture.
Further, keep abreast of changes in the vendor’s offerings that can influence
the choices being made or considered with regard to management capabilities and
approaches for the cloud.
Stay informed about issues and threats that could impact the secure operation and
management of the cloud infrastructure. Also be aware of mitigation techniques and
vendor recommendations that may need to be applied or implemented within the cloud
environment.
Ensuring Compliance with Regulations and Controls
Effective contracting for cloud services reduces the risk of vendor lock-in, improves
portability, and encourages competition. Establishing explicit, comprehensive SLAs for
security, continuity of operations, and service quality is key for any organization.
There are a variety of compliance regimes, and the provider should clearly delineate
which it supports and which it does not. Compliance responsibilities of the provider and
the customer should be clearly delineated in contracts and SLAs. The Cloud Security
Alliance Cloud Controls Matrix (CSA CCM) provides a good list of controls required
by different compliance bodies. In many cases, controls from one regime carry over to
those of another. To ensure all compliance and regulatory requirements can be met, consider
the provider’s and customers’ geographic locations. Involving the organization’s legal team
from the beginning when designing the cloud environment keeps the project on track
and focused on the necessary compliance concerns at the appropriate times in the
project lifecycle.
Keep in mind that there is probably a long history of project-driven compliance in
one form or another within the enterprise. The challenge is often not the need to create
an awareness around the importance of compliance overall, or even compliance specific
to a certain business need, customer segment, or service offering. Rather, the challenge
is to translate that awareness and historical knowledge to the cloud with the appropriate
controls in place.
Often, certain agreements focusing on on-premises service provisioning may be in place
but not structured appropriately to encompass a full cloud services solution. The same
may be true with some of the existing outsource agreements that may be in place. In general,
these agreements may be providing an acceptable level of service to internal customers
or allow for the acquisition of a service from an external third party but may not be
structured appropriately for a full-blown cloud service to be immediately spun up on top
of them.
It is imperative that you clearly identify your customer’s needs and ensure that IT and
the business are aligned to support the provisioning of services and products that provide
value to the customer in a secure and compliant manner.
Using an ITSM Solution
The use of an ITSM solution to drive and coordinate communication may be useful.
ITSM is needed for the cloud because the cloud is a remote environment that requires
management and oversight to ensure alignment between IT and business. An ITSM solution
makes it possible to do the following:
• Ensure portfolio management, demand management, and financial management are all working together for efficient service delivery to customers and effective charging for services if appropriate
• Involve all the people and systems necessary to create alignment and, ultimately, deliver value
Look to the organization’s policies and procedures for specific guidance on the mechanisms
and methodologies for communication that are acceptable. More broadly, there
are many additional resources to leverage as needed, depending on circumstance.
Considerations for Shadow IT
Shadow IT is often defined as money spent on technology to acquire services without the
IT department’s dollars or knowledge. On March 26, 2015, a survey based on research
from Canopy, the Atos cloud, was released, revealing that 60 percent of chief information
officers (CIOs) said that shadow IT spending was an estimated €13 million in their organizations
in 2014, and that figure was expected to grow in subsequent years. This trend
highlights the need for greater IT governance to be deployed in organizations to support
digital transformation initiatives.
A review of organizations’ shadow IT expenditures showed that backup needs were
the primary driver, with 44 percent of respondents stating their department had invested
in backup in the previous year. Other main areas of shadow IT spending included file
sharing software (36 percent) and archiving data (33 percent).
“Surprisingly, shadow IT is being spent on back-office functions—areas which for
most businesses should be centralized and carefully managed by the IT department,” said
Philippe Llorens, CEO of Canopy. “As businesses embrace digital, it is essential that the
IT department not only provides the IT infrastructure and services to enable and support
the digital transformation but also the governance model to maximize cost efficiencies,
manage risk, and provide the business with secure IT services.”20
According to the survey, the biggest shadow IT spenders were U.S. companies, outlaying
a huge €26 million per company as a proportion of their 2014 global IT budget—
more than double that of companies in the UK and France that admitted to spending €11
million and €10 million, respectively. Firms in Germany estimated spending over four
times less on shadow IT than U.S. companies. The findings demonstrate international
firms’ challenge to manage employees’ varied attitudes to shadow IT spending across
regions.
Operations Management
There are many aspects and processes of operations that need to be managed, and they
often relate to each other. Some of these include the following:
• Information security management
• Configuration management
• Change management
• Incident management
• Problem management
• Release and deployment management
• Service-level management
• Availability management
• Capacity management
• Business continuity management (BCM)
• Continual service improvement management
The following sections explore each of these types of management and then look
more closely at how they relate to each other.
Information Security Management
Organizations should have a documented and operational information security management
plan that generally covers the following areas:
• Security management
• Security policy
• Information security organization
• Asset management
• Human resources security
• Physical and environmental security
• Communications and operations management
• Access control
• Information systems acquisition, development, and maintenance
• Provider and customer responsibilities
Configuration Management
Configuration management aims to maintain information about configuration items (CIs) required to deliver
an IT service, including their relationships. As mentioned in the “Release and Deployment
Management” section, there are lateral ties between many of the management
areas discussed in this section. All these lateral connections are extremely important
because they form the basis for the mutually reinforcing web that is created to support the
proper documentation and operation of the cloud infrastructure.
In the case of configuration management, the specific ties to change management
and availability management are important to mention.
You should develop a configuration-management process for the cloud infrastructure.
The process should include policies and procedures for each of the following:
• The development and implementation of new configurations that should apply to the hardware and software configurations of the cloud environment
• Quality evaluation of configuration changes and compliance with established security baselines
• Changing systems, including testing and deployment procedures, that should include adequate oversight of all configuration changes
• The prevention of any unauthorized changes in system configurations
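Detecting unauthorized configuration changes, the last policy item above, can be approached by fingerprinting each system’s configuration against an approved baseline. The sketch below assumes flat key-value configurations for simplicity; real CMDB tooling tracks far richer CI relationships.

```python
import hashlib
import json

def fingerprint(config):
    """Stable SHA-256 fingerprint of a configuration dictionary."""
    return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

def detect_drift(baseline, current):
    """Return the set of keys whose values differ from the approved baseline,
    including keys added to or removed from the current configuration."""
    keys = set(baseline) | set(current)
    return {k for k in keys if baseline.get(k) != current.get(k)}
```

A scheduled job can compare fingerprints cheaply and only compute the per-key diff when they differ, raising a change-management exception for any drift that lacks an approved change record.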
Change Management
Change management is an approach that allows organizations to manage and control the
impact of change through a structured process. The primary goal of change management
within a project-management context is to create and implement a series of processes that
allow changes to the scope of a project to be formally introduced and approved.
Change management has several objectives:
• Respond to a customer’s changing business requirements while maximizing value and reducing incidents, disruption, and rework.
• Respond to business and IT requests for change that align services with business needs.
• Ensure that changes are recorded and evaluated.
• Ensure that authorized changes are prioritized, planned, tested, implemented, documented, and reviewed in a controlled manner.
• Ensure that all changes to CIs are recorded in the configuration management system.
• Optimize overall business risk. It is often correct to minimize business risk, but sometimes it is appropriate to knowingly accept a risk because of the potential benefit.
You should develop or augment a change-management process for the cloud infrastructure
to address any cloud-specific components or components that may not have been
captured under historical processes. You may not be a change-management expert, but
you do still bear responsibility for change and its impact in the organization. To ensure
the best possible use of change management within the organization, attempt to partner
with the project management professionals (PMPs) who exist in the enterprise to incorporate
the cloud infrastructure and service offerings into an existing change-management
program if possible. The existence of a project management office (PMO) is usually a
strong indication of an organization’s commitment to a formal change-management process
that is fully developed and broadly communicated and adopted.
A change-management process focused on the cloud should include policies and procedures
for each of the following:
• The development and acquisition of new infrastructure and software
• Quality evaluation of new software and compliance with established security baselines
• Changing systems, including testing and deployment procedures; they should include adequate oversight of all changes
• Preventing the unauthorized installation of software and hardware
Incident Management
Incident management describes the activities of an organization to identify, analyze, and
correct hazards to prevent a future reoccurrence. Within a structured organization, an
incident response team (IRT) or an incident management team (IMT) typically addresses
these types of incidents. These are often designated beforehand or during the event and
are placed in control of the organization while the incident is dealt with to restore normal
operations.
Events Versus Incidents
According to the ITIL framework, an event is defined as a change of state that has significance
for the management of an IT service or other CI. The term can also be used
to mean an alert or notification created by an IT service, CI, or monitoring tool. Events
often require IT operations staff to take actions and lead to incidents being logged.
According to the ITIL framework, an incident is defined as an unplanned interruption
to an IT service or a reduction in the quality of an IT service.
Purpose of Incident Management
Incident management has three purposes:
• Restore normal service operation as quickly as possible
• Minimize the adverse impact on business operations
• Ensure service quality and availability are maintained
Objectives of Incident Management
Incident management has five objectives:
• Ensure that standardized methods and procedures are used for efficient and prompt response, analysis, documentation, ongoing management, and reporting of incidents
• Increase visibility and communication of incidents to business and IT support staff
• Enhance business perception of IT by using a professional approach in quickly resolving and communicating incidents when they occur
• Align incident management activities with those of the business
• Maintain user satisfaction
Incident Management Plan
You should have a detailed incident management plan that includes the following:
• Definitions of an incident by service type or offering
• Customer and provider roles and responsibilities for an incident
• Incident management process from detection to resolution
• Response requirements
• Media coordination
• Legal and regulatory requirements such as data breach notification
You may also want to consider the use of an incident management tool. The incident
management plan should be routinely tested and updated based on lessons learned from
real and practice events.
Incident Classification
Incidents can be classified as either minor or major depending on several criteria. Work
with the organization and customers to ensure that the correct criteria are used for incident
identification and classification and that these criteria are well documented and
understood by all parties to the system.
Incident prioritization is made up of the following items:
• Impact = Effect upon the business
• Urgency = Extent to which the resolution can bear delay
• Priority = Urgency × Impact
When these items are combined into a matrix, you have a powerful tool to help the
business understand incidents and prioritize their management (Figure 5.8).
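The Urgency × Impact calculation can be encoded directly. In the sketch below both inputs use a 1 (high) to 3 (low) scale, and the mapping of products to P1–P5 bands is an illustrative convention rather than a fixed standard; organizations tune both the scales and the bands.

```python
def priority(impact, urgency):
    """Map Urgency x Impact (each 1 = high .. 3 = low) to a priority band.
    A lower product means a more severe incident."""
    score = impact * urgency                  # possible values: 1, 2, 3, 4, 6, 9
    bands = {1: "P1", 2: "P2", 3: "P3", 4: "P3", 6: "P4", 9: "P5"}
    return bands[score]
```

For example, a high-impact, high-urgency incident scores 1 and lands in the critical P1 band, while a low-impact, low-urgency one scores 9 and can be scheduled as routine work.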
Example of an Incident Management Process
Incident management should be focused on the identification, classification, investigation,
and resolution of an incident, with the ultimate goal of returning the affected
systems to normal as soon as possible. To manage incidents effectively, a formal incident
management process should be defined and used. In Figure 5.9, a traditional incident
management process is shown.
<Work in Progress>
Security Incident Response
The goal for both types is to restore normal service security and operations as quickly as possible after an issue is detected and an investigation is started.
Security vulnerabilities are reported to the Microsoft Security Response Center via [email protected]. The MSRC works with partners and security researchers around the world to help prevent security incidents and to advance Microsoft product security.
Detections of suspicious activities by internal monitoring and diagnostic systems within the Azure service. These alerts could come in the way of signature-based alarms such as antimalware, intrusion detection or via algorithms designed to profile expected activity and alert upon anomalies.
Escalations by operators of Azure services. Microsoft employees are trained to identify and escalate potential security issues.
Problem Management
The objective of problem management is to minimize the impact of problems on the
organization by identifying the root cause of the problem at hand. Problem management
plays an important role in the detection of and providing of solutions to problems (workarounds
and known errors) and prevents their reoccurrence.
• A problem is the unknown cause of one or more incidents, often identified as a result of multiple similar incidents.
• A known error is an identified root cause of a problem.
• A workaround is a temporary way of overcoming technical difficulties (that is, incidents or problems).
It’s important to understand the linkage between incident and problem management.
In addition, you need to ensure there is a tracking system established to track and monitor
all system-related problems. The system should gather metrics to identify possible trends.
Problems can be classified as minor or major depending on several criteria. Work
with the organization and the customers to ensure that the correct criteria are used for
problem identification and classification and that these criteria are well documented and
understood by parties to the system.
Release and Deployment Management
Release and deployment management aims to plan, schedule, and control the movement
of releases to test and live environments. The primary goal of release and deployment
management is to ensure that the integrity of the live environment is protected and that
the correct components are released.
Following are the objectives of release and deployment management:
• Define and agree upon deployment plans
• Create and test release packages
• Ensure the integrity of release packages
• Record and track all release packages in the Definitive Media Library (DML)
• Manage stakeholders
• Check delivery of utility and warranty (utility + warranty = value in the mind of the customer)
• Utility is the functionality offered by a product or service to meet a specific need; it’s what the service does.
• Warranty is the assurance that a product or service will meet agreed-upon requirements (SLA); it’s how the service is delivered.
• Manage risks
• Ensure knowledge transfer
New software releases should be done in accordance with the configuration management
plan. You should conduct security testing on all new releases prior to deployment.
Release management is especially important for SaaS and PaaS providers.
You may not be directly responsible for release and deployment management
and may be involved only tangentially in the process. Regardless of who is in charge,
it is important that the process is tightly coupled to change management, incident
and problem management, and configuration and availability management processes.
<Work in Progress>
Change & Configuration Management
A robust change management process provides management with the assurance that only authorized and tested changes to systems and infrastructures are implemented.
The change management process is subject to management oversight to ensure the consistent and timely processing of changes.
Only changes that are authorized, evaluated, prioritized, and properly resourced should enter the change process.
Production libraries should be secure, allowing only authorized personnel to access the production libraries. Management must provide oversight of access to libraries, good-practice separation of duties, and synchronization of source and executable libraries.
The production executable libraries contain the computer code to process the programs. Access needs to be limited and monitoring processes need to be in place to effectively oversee change activities affecting the executable libraries.
The move to the production process should be controlled and documented. Access should be limited to authorized change management personnel. Only authorized changes should have been made to production programs, and the move process should ensure synchronization of the source and executable libraries.
Emergency changes should be controlled, documented and initiated only in true emergencies.
The enterprise relies on the integrity of systems to operate its applications and to remain in alignment with business goals and stakeholder expectations.
Failure to implement and follow good change management practices may result in:
Unauthorized business process changes being introduced into operations
Unintended side effects
Changes not being recorded and tracked
Emergency changes being implemented without adequate oversight, resulting in the introduction of erroneous processes, unauthorized business processes and inefficiencies
Lack of priority management of changes
Unauthorized changes being applied, resulting in compromised security and unauthorized access to corporate information
Failure to comply with compliance requirements
System or application failure, resulting in lack of availability
<Work in Progress>
Third Party Services
Cloud Hosting
<Work in Progress>
<Work in Progress>
Managed Detection & Response (MDR)
Endpoint vulnerability detection and response is provided as a service by many third-party vendors, who typically provide the following services (not an exhaustive list):
Availability of skilled workforce for threat hunting and analysis
24x7x365 monitoring, detection and response
Incident response support
Recovery services and recommendation
Access to technologically advanced tool set
Regulatory Compliance
As the global nature of technology continues to evolve, simplifying and enabling conveniences once thought impossible, the challenge and complexity of meeting international legislation, regulations, and laws grows all the time. Ensuring adherence, compliance, or conformity with these can be challenging within traditional on-premises environments or even on third-party and hosted environments. Add cloud computing, and the complexity increases significantly.
Understand how to identify the various legal requirements and unique risks associated with the cloud environment with regard to legislation and conflicting legislation, legal risks, controls, and forensic requirements
Describe the potential personal and data privacy issues specific to personal identifiable information within the cloud environment
Define the process, methods, and required adaptions necessary for an audit within the cloud environment
Describe the different types of cloud-based audit reports
Identify the impact of diverse geographical locations and legal jurisdictions
Understand implications of cloud-to-enterprise risk management
Explain the importance of cloud contract design and management for outsourcing a cloud environment
Identify appropriate supply-chain management processes
In other words, it becomes important to understand how laws apply to the different parties involved and how compliance will ultimately be addressed.
Regardless of which models you are using, you need to consider the legal issues that
apply to how you collect, store, process, and, ultimately, destroy data. There are likely important
national and international laws that you, with your legal functions, need to consider to
ensure you are in legal compliance. There may be numerous compliance requirements,
such as Safe Harbor, HIPAA, PCI DSS, and other technology and information privacy laws
and regulations. Failure to comply may mean heavy punishments and liability issues.
If you are using a cloud infrastructure that is sourced from a CSP, you must impose
on the CSP all legal and regulatory requirements that apply to you. Accountability
remains with you, and making sure you are complying is your responsibility. Usually this
can be addressed through clauses in the contract that specify that the CSP will use effective
security controls and comply with any data privacy provisions. You are accountable
for the actions of any of your subcontractors, including CSPs.
For those familiar with digital evidence’s relevance and overall value in the event of an
incident or suspected instance of cybercrime, e-discovery has long formed part of relevant
investigative processes. E-discovery refers to any process in which electronic data is sought, located, secured,
and searched with the intent of using it as evidence in a civil or criminal legal case.
E-discovery can be carried out online and offline (for static systems or within particular
network segments). In the case of cloud computing, almost all e-discovery cases are done
in online environments with resources remaining online.
The challenges for the security professional here are complex and need to be fully understood.
Picture this scene. You receive a call from your company’s legal advisors or from
a third party advising of potentially unlawful or illegal activities across the infrastructure
and resources that employees access.
Given that your systems are no longer on-premises (or only a portion of your systems
are), what are the first steps you are going to follow? Start acquiring local devices and
obtaining portions or components from your data center? Surely, you can just get the
data and information required from the CSP. This may or may not be the case, however.
And if it is possible, it may be complicated to extract the relevant information required.
If you look at this from a U.S. perspective, under the Federal Rules of Civil Procedure,
a party to litigation is expected to preserve and be able to produce electronically
stored information that is in its possession, custody, or control. Sounds straightforward,
right? Is the cloud under your control? Who is controlling or hosting the relevant data?
Does this mean that it is under the provider’s control?
Considerations and Responsibilities of e-Discovery
How good is your relationship with your cloud vendor? Good, bad, or fine? Have you
ever spoken with your CSPs’ technical teams? Imagine picking up the phone to speak
with the CSP for the first time when trying to understand how to conduct an e-discovery
investigation involving its systems.
At this point, do you know exactly where your data is housed within your CSP? If
you do, you have a slight head start on many others. If you do not, it is time you find out.
Imagine trying to collect and carry out e-discovery investigations in Europe, Asia, South
America, the United States, or elsewhere when the location of your data is found to be in
a different hemisphere or geography than you are.
Any seasoned investigator will tell you that carrying out investigations or acquisitions
within locations or states that you are not familiar with in terms of laws, regulations, or
other statutory requirements can be tricky and risky. Understanding and appreciating
local laws and their implications is a must for the security professional prior to initiating
or carrying out any such reviews or investigations.
Laws in one state may well clash with or contravene laws in another. It is the Certified
Cloud Security Professional’s (CCSP’s) responsibility under due care and due diligence
to validate that all the relevant laws and statutes that pertain to their investigation are documented
and understood to the best of their ability prior to the start of the investigation.
Given that the cloud is an evolving technology, companies and security professionals can
be caught short when dealing with e-discovery. There is a distinct danger that companies
can lose control over access to their data due to investigations or legal actions being
carried out against them. A key step to reducing the potential implications, costs, and
business disruptions caused by loss of access to data is to ensure your cloud service contract
takes into account such events. As a first requirement, your contract with the CSP
should state that it is to inform you of any such events and enable you to control or make decisions regarding any resulting e-discovery activity.
Protecting Personal Information
This section describes the potential personal and data privacy issues specific to personally
identifiable information (PII) within the cloud environment. Borderless computing is the
fundamental concept that results in a globalized service, being widely accessible with no borders or boundaries.
With the cloud, the resources that are used for processing and storing user data and
network infrastructure can be located anywhere on the globe, constrained only by where
the capacities are available. The offering of listed availability zones by cloud service
providers (CSPs) does not necessarily result in exclusivity within these zones, due to
resilience, failover, redundancy, and other factors. Additionally, many other providers
state that resources and information will be used within their primary location (that is,
European Zone/North America, and so on); however, they will be backed up in at least
two additional locations to enable recoverability and redundancy. In the absence of transparency
related to exact data at rest locations, this leads to challenges from a customer
perspective to ensure that relevant requirements for data security are being satisfied.
Differentiating Between Contractual and Regulated PII
In cloud computing, the legal responsibility for data processing is borne by the user who
enlists the services of a CSP. As in all other cases in which a third party is given the task
of processing personal data, the user, or data controller, is responsible for ensuring that
the relevant requirements for the protection and compliance with requirements for PII
are satisfied or met.
The term PII is widely recognized across the area of information security and under
U.S. privacy law. PII relates to information or data components that can be utilized by
themselves or along with other information to identify, contact, or locate a living individual.
PII is a legal term recognized under various laws and regulations across the United States.
The National Institute of Standards and Technology (NIST), in Special Publication (SP)
800-122, defines PII as any information about an individual “that can be used to distinguish
or trace an individual’s identity, such as name, Social Security Number, date and
place of birth, mother’s maiden name, or biometric records; and any other information
that is linked or linkable to an individual, such as medical, educational, financial, and employment information.”
Fundamentally, there are two main types of PII associated with cloud and noncloud environments: contractual PII and regulated PII.
Where an organization or entity processes, transmits, or stores PII as part of its business
or services, this information is required to be adequately protected in line with relevant
local, state, national, regional, federal, or other laws. Where any outsourcing of services,
roles, or functions occurs (involving cloud-based technologies, or manual processes such as
call centers), the relevant contract should list the applicable rules and requirements
from the organization that owns the data and the applicable laws to which the provider must adhere.
Additionally, the contractual elements related to PII should list requirements and
appropriate levels of confidentiality, along with security provisions and requirements
necessary. As part of the contract, the provider is bound by privacy, confidentiality, or
information security requirements established by the organization or entity to which
it provides services. The contracting body may be required to document adherence or
compliance with the contract at set intervals and in line with any audit and governance
requirements from its customers.
Regarding data protection and relevant privacy frameworks, standards, and legal
requirements, cloud computing raises a number of interesting issues. In essence, data protection law is based on the premise that it is always clear where personal data is
located, by whom it is processed, and who is responsible for data processing. At all times,
the data subject (that is, the person to whom the information relates, such as John Smith)
should have an understanding of these issues. Cloud computing appears to fundamentally
conflict with these requirements and listed obligations.
Failure to meet or satisfy contractual requirements may lead to penalties (financial or
service compensation) through to termination of contract at the discretion of the organization
to which services are provided.
The key distinction is that regulated PII must adhere to criteria required
under law and statutory requirements, as opposed to contractual criteria that may be
based on best practices or organizational security policies.
Key differentiators from a regulated perspective are the must-haves to satisfy regulatory
requirements (such as HIPAA and GLBA). Failure to supply these can result in sizable
and significant financial penalties and restrictions around processing, storing, and providing of services.
Regulations are put in place to reduce exposure and to ultimately protect entities and
individuals from a number of risks. They also force providers and processers alike to take
certain responsibilities and actions.
The reasons for regulations include but are not limited to the following:
Take due care.
Apply adequate protections.
Protect customers and consumers.
Ensure appropriate mechanisms and controls are implemented.
Reduce the likelihood of malformed or fractured practices.
Establish a baseline level of controls and processes.
Create a repeatable and measurable approach to regulated data and systems.
Continue to align with statutory bodies and fulfill professional conduct requirements.
Provide transparency among customers, partners, and related industries.
Mandatory Breach Reporting
Another key component and differentiator related to regulated PII is mandatory breach
reporting requirements. At present, 47 states and territories within the United States,
including the District of Columbia, Puerto Rico, and the Virgin Islands, have legislation
in place that requires both private and government entities to notify and inform individuals
of any security breaches involving PII.
Many affected organizations lack the understanding or find it a challenge to define
what constitutes a breach, along with defining incidents versus events, and so on. More
recently, the relevant security breach laws include clear and concise requirements related
to who must comply with the law (businesses, information brokers, government entities,
agencies, regulatory bodies, and so on) and defining what personally identifiable information
means (name combined with Social Security number, driver’s license, state identification
documents, and relevant account numbers).
Finally, included in the laws are definitions and examples of what defines and constitutes
a security or data breach (such as unauthorized access, acquisition, or sharing
of data), how the affected parties and individuals are to be notified and informed of any
breaches involving PII, and any exceptions (such as masked, scrambled, anonymized, or encrypted data).
The NIST Guide (SP 800-122) called “Protecting the Confidentiality of Personally
Identifiable Information” should serve as a useful resource when identifying and ensuring
requirements for contractual and regulated PII are established, understood, and enforced.
A breakdown of incident response (IR) and its required stages is also captured in SP 800-122.
NIST Guide SP 800-122 was developed to assist agencies and state bodies in meeting
PII requirements. Depending on your industry and geographic location, NIST guides
may not ensure compliance. Always check if there are additional or differing controls that
are applicable to your environment and based on local legislation.
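The mandatory breach reporting pattern described above (notification on unauthorized access to PII, with common exemptions for encrypted or otherwise unreadable data) can be captured in a simple triage helper. This is a hypothetical sketch; actual statutes differ by jurisdiction and must drive the real logic:

```python
# Sketch of a notification-triage helper reflecting the typical statutory
# pattern: unauthorized access to PII generally triggers notification, with
# a common exemption for data rendered unreadable (e.g., encrypted).
# Field names are illustrative assumptions only.

def notification_required(incident: dict) -> bool:
    if not incident.get("pii_involved"):
        return False      # no PII involved, no PII-breach notification duty
    if not incident.get("unauthorized_access"):
        return False      # an event, not a breach
    if incident.get("data_encrypted") and not incident.get("key_compromised"):
        return False      # common safe-harbor style exemption
    return True
```

Even a helper this simple forces an organization to define a breach versus an event, which the text notes many organizations struggle with.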
From a contractual, regulated, and PII perspective, the following should be reviewed and
fully understood by the CCSP with regard to any hosting contracts (along with other
overarching components within an SLA):
Scope of processing: The CCSP needs a clear understanding of the permissible
types of data processing. The specifications should also list the purpose for which
the data can be processed or utilized.
Use of subcontractors: The CCSP must understand where any processing,
transmission, storage, or use of information will occur. A complete list should be
drawn up, including the entity, location, rationale, and form of data use (processing,
transmission, and storage), along with any limitations or nonpermitted uses.
Contractually, the requirement for the procuring organization to be informed as
to where data has been provided or will be utilized by a subcontractor is essential.
Deletion of data: Where the business operations no longer require information
to be retained for a specific purpose (that is, not retaining for convenience or
potential future uses), the deletion of information should occur in line with the
organization’s data retention policies and standards. Data deletion is also of critical
importance when contractors and subcontractors no longer provide services or
when a contract is terminated.
Appropriate or required data security controls: Where processing, transmission,
or storage of data and resources is outsourced, the same level of security controls
should be required for any entity’s contracting or subcontracting services. Ideally,
security controls should be of a higher level (which is the case for a large number
of cloud computing services) than the existing levels of controls; however, this
is never to be taken as a given in the absence of confirmation or verification.
Additionally, technical security controls should be unequivocally called out and
stipulated in the contract; they are applicable to subcontractors as well. Where
such controls are unable to be met by either the contractor or the subcontractor,
these need to be communicated, documented, understood, and have mitigating
controls in place that enhance and satisfy the data owners’ requirements. Common
methods to ensure the ongoing confidentiality of the data include encryption
of data during transmission or storage (ideally both), along with defense in
depth and layered approaches to data and systems security.
Locations of data: To ensure compliance with regulatory and legal requirements,
the CCSP needs to understand the location of contractors and subcontractors.
They must pay particular attention to where the organization is located and where
operations, data centers, and headquarters are located. The CCSP needs to know
where information is being stored, processed, and transmitted. (Many business
units are outsourced or located in geographic locations where storage, resourcing,
and skills may be more economically advantageous for the CSP, contractor, or
subcontractor.) Finally, any contingency or continuity requirements may require
failover to different geographic locations, which can affect or violate regulatory or
contractual requirements. The CCSP should fully understand these and accept
them prior to engagement of services with any contractor, subcontractor, or CSP.
Return or restitution of data: For both contractors and subcontractors where a
contract is terminated, the timely and orderly return of data has to be required
both contractually and within the SLA. Appropriate notice should be provided,
as well as the ongoing requirement to ensure the availability of the data is maintained
between relevant parties, with an emphasis on live data being required.
Format and structure of data should be clearly documented, with an emphasis on
structured and agreed-upon formats being clearly understood by all parties. Data
retention periods should be explicitly understood, with the return of data to the
organization that owns the data resulting in the removal or secure deletion on any
contractors’ or subcontractors’ systems or storage.
Right to audit subcontractors: In line with the agreement between the organization
utilizing services and the contracting entity where subcontractors are being
utilized, the subcontracting entity should be in agreement and be bound by any
right to audit clauses and requirements. Right to audit clauses should allow for the
organization owning the data (not possessing) to audit or engage the services of
an independent party to ensure that contractual and regulatory requirements are
being satisfied by either the contractor or the subcontractor.
Country-Specific Legislation and Regulations Related
to PII, Data Privacy, and Data Protection
It is vital to recognize the legislation and regulations of various countries related to
personal information, data privacy, and data protection. The varying data protection legislation among jurisdictions inevitably makes using global cloud computing challenging. This can be further complicated by the fact that regulations and laws can sometimes differ between a larger jurisdiction and its members, as in the case of the European Union and its member countries. Beyond laws, there are broader guidelines as discussed in the “Frameworks and Guidelines Relevant to Cloud Computing” section.
Secure SDLC
Secure software development is a requirement for building a secure service, architecture, software and system. The Secure Software Development Lifecycle (S-SDLC) methodology encompasses all activities required to develop an application system and put it into production, including requirements gathering, analysis, design, construction, implementation, and the various maintenance stages.
Application Security Requirements
Figure 5: Basic Security Design
Application Security Testing
Security testing of web applications through the use of testing software is generally broken
into two distinct types of automated testing tools. This section looks at these tools
and discusses the importance of penetration testing, which generally includes the use of
human expertise and automated tools. The section also looks at secure code reviews and
OWASP recommendations for security testing.
Static Application Security Testing
Static application security testing (SAST) is generally considered a white-box test, where
the application test performs an analysis of the application source code, byte code, and
binaries without executing the application code. SAST is used to determine coding errors
and omissions that are indicative of security vulnerabilities. SAST is often used as a test
method while the application is under development (early in the development lifecycle).
SAST can be used to find XSS errors, SQL injection, buffer overflows, unhandled
error conditions, and potential backdoors.
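As a toy illustration of the kind of coding error SAST looks for, the following pattern match flags source lines that appear to build SQL by string concatenation. Real SAST tools analyze the parsed code rather than applying regular expressions; this only sketches the idea:

```python
import re

# Toy static check: flag source lines that appear to build SQL via string
# concatenation or formatting, a common SQL-injection indicator.
SQL_CONCAT = re.compile(r"""(?ix)
    \b(select|insert|update|delete)\b   # SQL verb inside a string literal
    [^"']*["']\s*(\+|%|\.format)        # closing quote followed by concat/format
""")

def scan_source(text: str) -> list[int]:
    """Return 1-based line numbers that match the unsafe-SQL pattern."""
    return [i for i, line in enumerate(text.splitlines(), 1)
            if SQL_CONCAT.search(line)]
```

Because it runs against code at rest, a check like this can be wired into a commit hook long before the application is executable.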
Because SAST is a white-box test tool, it typically delivers more comprehensive
results than those found using the test described in the next section.
Dynamic Application Security Testing
Dynamic application security testing (DAST) is generally considered a black-box test,
where the tool must discover individual execution paths in the application being analyzed.
Unlike SAST, which analyzes code offline (when the code is not running), DAST
is used against applications in their running state. DAST is mainly considered effective
when testing exposed HTTP and HTML interfaces of web applications.
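A DAST-style reflection check can be illustrated as follows: inject a unique marker into a request parameter and inspect the HTTP response for an unescaped echo, a common indicator of cross-site scripting. The helpers below are a simplified sketch; the crawling and request machinery of a real DAST tool is omitted:

```python
import html
import uuid

def make_probe() -> str:
    """Unique marker wrapped in characters that must be escaped on output."""
    return f'"><i>{uuid.uuid4().hex}</i>'

def reflected_unescaped(response_body: str, probe: str) -> bool:
    """True if the probe came back verbatim rather than HTML-escaped."""
    return probe in response_body and html.escape(probe) not in response_body
```

Because the check needs the application in its running state to produce a response, it exercises execution paths that static analysis never sees.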
It is important to understand that SAST and DAST play different roles and that one is
not better than the other. Static and dynamic application tests work together to enhance
the reliability of organizations creating and using secure applications.
Runtime Application Self-Protection
Runtime application self-protection (RASP) is generally considered to focus on applications
that possess self-protection capabilities built into their runtime environments, which
have full insight into application logic, configuration, and data and event flows. RASP
prevents attacks by self-protecting or reconfiguring automatically without human intervention
in response to certain conditions (threats, faults, and so on).
Vulnerability Assessments and Penetration Testing
Both vulnerability assessment and penetration testing play a significant role and support
security of applications and systems prior to an application going into and while in a production environment.
Vulnerability assessments or vulnerability scanning look to identify and report on
known vulnerabilities in a system. Depending on the approach you take, such as automated
scanning or a combination of techniques, the identification and reporting of a vulnerability
should be accompanied by a risk rating, along with potential exposures.
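The risk rating that should accompany each reported vulnerability can be derived, for example, from a CVSS-like base score adjusted for exposure. The thresholds below are illustrative assumptions, not taken from any particular standard:

```python
# Hypothetical risk-rating step for a vulnerability report: combine a
# CVSS-like base score with whether the host is internet-exposed to give
# the qualitative rating accompanying each finding. Thresholds are
# illustrative only.

def risk_rating(base_score: float, internet_exposed: bool) -> str:
    if internet_exposed:
        base_score = min(10.0, base_score + 1.5)   # exposure raises priority
    if base_score >= 9.0:
        return "critical"
    if base_score >= 7.0:
        return "high"
    if base_score >= 4.0:
        return "medium"
    return "low"
```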
Most often, vulnerability assessments are performed as white-box tests, where the
assessor knows the application and the environment the application runs in.
Penetration testing is a process used to collect information related to system vulnerabilities
and exposures, with the view to actively exploit the vulnerabilities in the system.
Penetration testing is often a black-box test, in which the tester carries out the test as
an attacker, has no knowledge of the application, and must discover any security issues
within the application or system being tested. To assist with targeting and focusing the
scope of testing, independent parties also often perform gray-box testing with some level
of information provided.
Note As with any form of security testing, permission must always be obtained prior to testing.
This is to ensure that all parties have consented to testing, as well as to ensure that no malicious
activity is performed without the acknowledgment and consent of the system owners.
Within cloud environments, most vendors allow for vulnerability assessments or penetration
tests to be executed. Quite often, this depends on the service model (SaaS, PaaS,
IaaS) and the target of the scan (application versus platform). Given the nature of SaaS,
where the service consists of an application consumed by all consumers, SaaS providers
are most likely not to grant permission for penetration tests to occur by clients. Generally,
only a SaaS provider’s resources are permitted to perform penetration tests on the SaaS offering.
OWASP has created a testing guide (presently v4.0) that recommends nine types of active
security testing categories as follows:
Identity management testing
Authentication testing
Authorization testing
Session management testing
Input validation testing
Testing for error handling
Testing for weak cryptography
Business logic testing
Client-side testing
These OWASP categories apply as well in a cloud environment as they do in a traditional
infrastructure. However, additional threat models associated with the deployment
model you choose (such as public versus private) may introduce new threat vectors that require further analysis.
Cloud Application Security
Cloud application security focuses the CCSP on identifying the necessary training and
awareness activities required to ensure that cloud applications are deployed only when
they are as secure as possible. This means that the CCSP has to run vulnerability assessments
and use a software development lifecycle to ensure that secure development
and coding practices are used at every stage of software development. In addition, the
CCSP has to be involved in identifying the requirements necessary for creating secure
identity and access management solutions for the cloud. The CCSP should be able to
describe cloud application architecture as well as the steps that provide assurance and
validation for cloud applications used in the enterprise. The CCSP must also be able to
identify the functional and security testing needed to provide software assurance. Finally,
the CCSP should be able to summarize the processes for verifying that secure software
is being deployed. This includes the use of APIs and any supply chain management
processes that may be in place.
<Work in Progress>
Continuous Integration/Continuous Deployment
<Work in Progress>
<Insert Secure SDLC Requirements Picture>
Audit and Logging
<Work in Progress>
Database Security
<Work in Progress>
Cloud Security Framework
Information Risk Management
Roles and responsibilities in managing technology risks in the cloud ecosystem shall be established
Information system assets in the cloud and on-premises shall be identified and prioritised
Identification and assessment of impact and likelihood of current and emerging threats, risks and vulnerabilities in the cloud
Implementation of appropriate practices and controls to mitigate risks
Periodic update and monitoring of risk assessment to include changes in systems, environmental or operating conditions that would affect risk analysis.
A clear policy on information system asset protection is established
The potential impact and consequences of risks on overall business and operations are analysed and quantified.
A threat and vulnerability matrix is developed to assess the impact of threats to the IT environment and to prioritise IT risks
A risk register is maintained to facilitate monitoring and reporting of risks.
IT risk metrics are developed to highlight the systems, processes or infrastructure that have the highest risk exposure
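The risk register and risk metrics above can be sketched as a simple impact x likelihood exposure score used to surface the highest-risk entries. The 1-5 scales and field names are assumptions for illustration:

```python
# Sketch of a risk register with impact x likelihood scoring, supporting
# the metric above: surface the entries with the highest risk exposure.

def exposure(entry: dict) -> int:
    """Risk exposure = impact (1-5) x likelihood (1-5)."""
    return entry["impact"] * entry["likelihood"]

def top_risks(register: list[dict], n: int = 3) -> list[str]:
    """Names of the n highest-exposure risks, for monitoring and reporting."""
    ranked = sorted(register, key=exposure, reverse=True)
    return [e["name"] for e in ranked[:n]]
```

Recomputing the ranking after each periodic reassessment keeps the register aligned with changing systems and operating conditions.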
Outsourced Risk Management (Contracts)
Contractual terms and conditions governing the roles, relationships, obligations and responsibilities of all contracting parties are set out fully in written agreements.
Due diligence is carried out to determine the capability, reliability, track record and financial position of the service provider.
Access to the service provider’s systems, operations, documentation and facilities is granted to carry out review or assessment for regulatory, audit or compliance purposes.
The engagement of the service provider does not hinder the ability of regulatory authorities to assess the organisation’s IT risks which would include inspecting, supervising or examining the service provider’s roles, responsibilities, obligations, functions, systems and facilities.
Contractual agreements with the service provider recognise the authority of regulators to perform an assessment on the service provider.
The service provider is required to employ a high standard of care and diligence in its security policies, procedures and controls to protect the confidentiality and security of the organisation’s sensitive or confidential information, such as customer data, computer files, records, object programs and source codes.
The service provider is required to implement security policies, procedures and controls that are at least as stringent as the organisation’s.
The security policies, procedures and controls of the service provider are monitored and reviewed on a regular basis by the organisation, including commissioning or obtaining periodic expert reports on security adequacy and compliance in respect of the operations and services provided.
The service provider is required to develop and establish a disaster recovery contingency framework which defines its roles and responsibilities for documenting, maintaining and testing its contingency plans and recovery procedures.
All parties concerned, including staff from the service provider, receive regular training in activating the contingency plan and executing recovery procedures.
The disaster recovery plan is reviewed, updated and tested regularly in accordance with changing technology conditions and operational requirements.
A contingency plan based on credible worst-case scenarios for service disruptions is established to prepare for the possibility that the current service provider may not be able to continue operations or render the services required.
The contingency plan incorporates identification of viable alternatives for resuming IT operations elsewhere.
The organisation is aware of cloud computing’s unique attributes and risks especially in areas of data integrity, sovereignty, commingling, platform multi-tenancy, recoverability and confidentiality, regulatory compliance, auditing and data offshoring.
The service provider is able to isolate and clearly identify the organisation’s customer data and other information system assets for protection.
The organisation has the contractual power and means to promptly remove or destroy data stored at the service provider’s systems and backups in the event of contract termination with the service provider.
The service provider’s ability to recover outsourced systems and IT services within the stipulated recovery time objective (RTO) is verified prior to contracting with the service provider.
Security Project Management
System deficiencies and defects shall be identified at the system design, development and testing phases
A steering committee, consisting of business owners, the development team and other stakeholders is established to provide oversight and monitoring of the progress of the project
The roles and responsibilities of staff involved in the project are clearly defined in the project management framework
User functional requirements, business cases, cost-benefit analysis, systems design, technical specifications, test plans and service performance expectation are approved by the relevant business and IT management.
Issues or problems which could not be resolved at the project committee level are escalated to senior management for attention and intervention
Security requirements relating to system access control, authentication, transaction authorisation, data integrity, system activity logging, audit trail, security event tracking and exception handling are clearly specified in the early phase of system development or acquisition
Compliance checks on the organisation’s security standards against relevant statutory requirements are performed
The scope of tests covers business logic, security controls and system performance under various stress-load scenarios and recovery conditions
Penetration testing is conducted prior to the commissioning of a new system which offers internet accessibility and open network interfaces
Vulnerability scanning is performed on external and internal network components that support the new system
Due diligence is exercised to ensure applications have appropriate security controls, taking into consideration the type and complexity of services these applications provide
End user developed program codes, scripts and macros are reviewed and tested before they are used to ensure the integrity and reliability of the applications
Key concepts that shall be considered during the design and build of applications:
Figure 6: Security for Design
Network Infrastructure
Network controls
Controls must be in place to ensure the security of information in networks and protect connected services from unauthorized access. The following must be considered:
Procedures for the management of networking equipment must be in place
Responsibilities for the management of networking equipment must be made clear
Whenever possible, operational responsibility for networks must be separated from computer operations
Controls must be in place to help safeguard the confidentiality and integrity of data passing through networks. These controls must address the data on both public networks and wireless networks, and protect the connected systems and applications.
Controls must be in place to maintain the availability of the network services and computers connected
Logging and monitoring tools must be in place to enable recording and detection of actions that may affect, or are relevant to, information security
Management activities must ensure that controls are consistently applied across the information processing infrastructure
All systems on the network must be subject to authentication
Connection between the system and a network must be restricted
Network service provider
The services provided by the network service provider are very important to any company. As such, the following shall be considered:
The ability of the network service provider to provide the agreed upon services must be established
The services of the network service provider must be regularly monitored
Triquesta must agree upon the right to an audit with the network service provider
When it comes to securing the network configuration, there is a lot to be concerned with. Several technologies, protocols, and services are necessary to ensure a secure and reliable network is provided to the end user of the cloud-based services (Figure 5.3). For example,
Transport layer security (TLS) and IPSec can be used for securing communications to prevent eavesdropping. Domain name system security extensions (DNSSEC) should be used to prevent domain name system (DNS) poisoning. DNSSEC is a suite of Internet Engineering Task Force (IETF) specifications for securing certain kinds of information provided by DNS as used on Internet protocol (IP) networks.
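As an illustrative sketch, the TLS requirements above can be captured in a client-side configuration using only Python’s standard `ssl` module (the function name is ours, not from any framework):

```python
import ssl

def make_tls_context():
    """Build a client-side TLS context that verifies certificates
    and refuses legacy protocol versions."""
    ctx = ssl.create_default_context()            # loads the system CA store
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
    ctx.verify_mode = ssl.CERT_REQUIRED           # reject unverifiable peers
    ctx.check_hostname = True                     # enforce hostname matching
    return ctx
```

A context built this way can then be handed to any socket or HTTP client that accepts an `SSLContext`, so every connection inherits the same eavesdropping protections.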
Before discussing the services, it’s important to understand the role of isolation. Isolation is a critical design concept for a secure network configuration in a cloud environment. All management of the data center systems should be done on isolated networks. These management networks should be monitored and audited regularly to ensure that confidentiality and integrity are maintained.
Access to the storage controllers should also be granted over isolated network components that are non-routable to prevent the direct download of stored data and to restrict the likelihood of unauthorized access or accidental discovery. Customer access should be provisioned on isolated networks. This isolation can be implemented through the use of physically separate networks or via VLANs.
All networks should be monitored and audited to validate separation. Access to the management network should be strictly limited to those that require access. Strong authentication methods should be used on the management network to validate identity and authorize usage.
The network can be one of the most vulnerable parts of any system. The VM network requires as much protection as the physical one. Using VLANs can improve networking security in your environment. In simple terms, a VLAN is a set of workstations within a LAN that can communicate with each other as though they were on a single, isolated LAN. They are an Institute of Electrical and Electronics Engineers (IEEE) standard networking scheme with specific tagging methods that allow routing of packets to only those ports that are part of the VLAN.
When properly configured, VLANs provide a dependable means to protect a set of machines from accidental or malicious intrusions. VLANs let you segment a physical network so that two machines in the network cannot transmit packets back and forth unless they are part of the same VLAN.
What does it mean to say that the VLAN workstations “communicate with each other as though they were on a single, isolated LAN”? Among other things, it means the following:
Broadcast packets sent by one of the workstations can reach all the other workstations in the VLAN.
Broadcasts sent by one of the workstations in the VLAN cannot reach any workstations that are not in the VLAN.
Broadcasts sent by workstations that are not in the VLAN can never reach workstations that are in the VLAN.
All the workstations can communicate with each other without needing to go through a gateway.
The ability to isolate network traffic to certain machines or groups of machines via association with the VLAN allows for the opportunity to create secured pathing of data between endpoints.
Although the use of VLANs by themselves does not guarantee that data will be transmitted securely and that it will not be tampered with or intercepted while on the wire, it is a building block that, when combined with other protection mechanisms, allows for data confidentiality to be achieved.
IPSec uses cryptographic security to protect communications over IP networks. IPSec includes protocols for establishing mutual authentication at the beginning of the session and negotiating cryptographic keys to be used during the session. IPSec supports network- level peer authentication, data origin authentication, data integrity, encryption, and replay protection.
IPSec is a valuable addition to the network configuration that requires end-to-end security for data while transiting a network.
Transport Layer Protection with SSL/TLS
TLS services that are provided by FIPS 140-2 validated cryptomodules shall be used
The login page and all subsequent authenticated pages must be exclusively accessed over TLS
All pages which are available over TLS must not be available over a non-TLS connection.
A page served over TLS must not contain any content that is transmitted over unencrypted HTTP
The “Secure” flag must be set for all user cookies
Sensitive data must not be transmitted via URL arguments
Intermediate nodes such as proxies and caches may retain copies of traffic; it is frequently prudent to instruct these nodes not to cache or persist sensitive data
Use HTTP Strict Transport Security
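A minimal sketch of how a web tier might emit the cookie flags and HSTS and anti-caching headers required above (the helper names are hypothetical, not from any specific framework):

```python
def secure_response_headers(max_age=31536000):
    """Headers every TLS-only page should carry (illustrative sketch)."""
    return {
        # Tell browsers to use HTTPS only, for one year, on all subdomains
        "Strict-Transport-Security": f"max-age={max_age}; includeSubDomains",
        # Ask the browser and intermediaries not to cache sensitive pages
        "Cache-Control": "no-store",
        "Pragma": "no-cache",
    }

def secure_session_cookie(name, value):
    """Set-Cookie value with the flags required above: Secure keeps the
    cookie off plain HTTP, HttpOnly keeps it away from scripts."""
    return f"{name}={value}; Secure; HttpOnly; SameSite=Strict; Path=/"
```

In practice these would be applied by middleware on every authenticated response rather than set page by page.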
SDLC Requirements
Information security requirements must be included during the requirements gathering stage
Information security requirements must be derived from Triquesta’s information security policies and from regulatory authorities (e.g. MAS, HKMA)
At the outset, the following requirements must be defined to ensure a secure software system:
Core Security Requirements
Session Management requirements
Errors & Exceptions Management requirements
Configuration Parameters Management requirements
Deployment Environment requirements
Sequencing and Timing requirements
Security of the source code and application release repositories
Design
Security must balance the need for information security with the need for accessibility. New technology must be analysed for security risks and the design must be reviewed against known attack patterns
The established development procedures must be regularly reviewed to ensure that they effectively contribute to enhanced standards of security and that the software remains up to date in combating any new potential threats
To ensure vendor-provided software remains up to date in combating any new potential threats, contracts and other binding agreements between the enterprise and the supplier of the software must include a section that deals with this issue
Secure Code Reviews
Conducting a secure code review, whether informally or formally, is another approach to assessing code for appropriate security controls. An informal code review may involve one or more individuals examining sections of the code, looking for vulnerabilities. A formal code review may involve the use of trained teams of reviewers that are assigned specific roles as part of the review process, as well as the use of a tracking system to report on vulnerabilities found. The integration of a code review process into the system development lifecycle can improve the quality and security of the code being developed.
Several items must be considered for secure development:
Principles for engineering secure systems must be established, documented, maintained and applied to any information system implementation efforts
Policies for development of software and systems must be established and applied to developments within the organization
Policies for secure development must be established and implemented
Independent code reviews must be conducted covering general coding guidelines as well as secure coding guidelines
When operating platforms are changed, business critical applications must be reviewed and tested to ensure there is no adverse impact on organizational operations or security. This process must cover:
The review of application control and integrity procedures to ensure that they have not been compromised by the operating platform changes
Ensuring that notification of operating platform changes is provided in time to allow appropriate tests and reviews to take place before implementation
Ensure that appropriate changes are made to the business continuity plans
Environments for Development, Testing and Production must be segregated and appropriately protected
Duties must be segregated to detect errors and to detect and prevent malicious activities wherever appropriate
Testing of the software must be conducted in a separate test environment
Production data must not be used for testing purposes
Test data must be sanitized to remove any production references that might impact the security and privacy compliance
Test data must be selected carefully, protected and controlled
Acceptance testing programs and related criteria must be established for new information systems, upgrades and new versions
The testing of security functionality must be carried out during development
All test data and test accounts must be removed before releasing an application in production
Application testing must be performed by a different person than the developer responsible for the development or change to the application
Information involved in application services passing over public networks must be protected from fraudulent activity, contract dispute and unauthorized disclosure and modification by applying appropriate controls
Information involved in application service transactions must be protected to prevent incomplete transmission, mis-routing, unauthorized message alteration, unauthorized disclosure, unauthorized message duplication or replay by applying appropriate controls
When new products are acquired, a formal acquisition process must always be followed.
Functional as well as non-functional (i.e. security) requirements must be determined for any software to be acquired and appropriate approvals must be taken
Product being acquired must meet functional and security requirements
Any product being acquired must not violate organizational policies
Any product must only be acquired from a trusted source
Vendor must provide regular updates for the product acquired and these updates are to be implemented
No modifications are to be made to vendor supplied software packages unless necessary
Any changes must be limited to necessary changes and all changes must be strictly controlled
If changes to vendor supplied software packages are to be made, a copy of the original software must be created, and the required changes applied to the copy
It is this copy that is to be used by Triquesta while the original software package is to be retained
Services must provide HTTPS endpoint that allows clients to authenticate the service and guarantees integrity of the transmitted data
Mutually authenticated client-side certificates shall be used to provide additional protection for highly privileged web services
Non-public web services must perform access control at each API endpoint
Cryptographic signature or message authentication code (MAC) shall be used to protect the integrity of the security token (i.e. JWT), used for access control decisions
API keys shall be used to reduce the impact of denial-of-service attacks
Permitted HTTP Methods (i.e. GET, POST, PUT, etc.) shall be whitelisted
As always, all input data must be validated
Request or response body should match the intended content type in the header
Management endpoints, accessible via the Internet must use a strong authentication mechanism, e.g. multi-factor
Management endpoints shall be exposed via different HTTP ports or preferably a restricted subnet.
Access to management endpoints shall be restricted by firewall rules or use of access control lists
Disable Cross-Origin Resource Sharing (CORS) headers if cross-domain calls are not supported
Passwords, security tokens, and API keys should not appear in the URL so as to prevent leaking credentials
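The control above on protecting token integrity with a cryptographic signature or MAC can be sketched with the Python standard library. This is a simplified stand-in for a real JWT library, not a production implementation:

```python
import base64, hashlib, hmac, json

def sign_token(payload, secret):
    """Serialize the payload and attach an HMAC-SHA256 MAC so any
    tampering with the token is detectable."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    mac = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{mac}"

def verify_token(token, secret):
    """Return the payload only if the MAC verifies; otherwise None."""
    body, _, mac = token.rpartition(".")
    expected = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):  # constant-time comparison
        return None
    return json.loads(base64.urlsafe_b64decode(body))
```

Because the MAC is recomputed server-side with a secret the client never sees, an API endpoint can trust the claims in a verified token for its access control decisions.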
Training programs appropriate to the role of software developers and testers must be designed.
Application security must be an integral part of training to software developers and testers.
Software development and testing teams must attend application security training in addition to general security awareness at least once in every 12 months.
Additionally, developers are to be trained in the use of the secure coding standards, the testing policy and the code review policy of Triquesta
Identity governance capabilities should address compliance risk. Look for the following capabilities:
Automated processes for user onboarding, transferring, promoting and terminating, in addition to customizable approval workflows.
An Identity and Entitlement Catalog that automatically maintains a core set of identities and their relationships to your organization. It should eliminate improper updates from non-authoritative sources and update in near-real time.
An authorization model that leverages both roles and broader attributes to automate basic approvals for access requests and certifications and allows a focus on exceptions.
Automated controls for reducing risk, combined with advanced risk analytics to measure baseline risk and the effectiveness of automated controls in reducing risk
Controls to detect and handle violations and exceptions, especially SOD conflicts, expired exceptions and orphan or shared accounts.
An intuitive user experience with mechanisms to close the loop, so access requests and revocations get fulfilled accurately
Out-of-the box and ad-hoc compliance reports to demonstrate the program to auditors with minimal effort.
The most common alternative to the term “access certification” or “access review” is “attestation”. Attestation is an ongoing review and confirmation process that will help enterprises to reduce risk by:
Correlating users with their access to systems and applications
Evaluating the risk associated with that access
Reviewing access deemed as risky or inappropriate
In practice, the enterprise distributes lists of people, their accounts, and the entitlements of those accounts (also known as ‘access’), to different constituents (often line-of-business managers and application owners) for review. The participants in this process decide whether access is appropriate and thus should be retained or inappropriate and thus must be removed.
Access certification is a powerful process where the primary goal is the reduction of risk. This goal can be accomplished in direct and indirect ways. Access certification directly reduces risk by addressing threats associated with over privileged and toxic combinations in excessive access. Revoking inappropriate access removes potential threats to the organizations. The indirect way where access certification reduces risk is by transferring some responsibility to the individual. Participants in the access certification process are charged with evaluating the risks associated with the access they review. They are held responsible for their evaluations.
Deep Identity offers access certification not only of users, but also certification of roles, both business and technical. While user access certification limits who has access to what, role attestation refers to aggregation of access independent of any particular user. Both these attestations have their own advantages and uses in an enterprise.
Privileged Identity Management
See which users are assigned privileged roles to manage Azure resources, as well as which users are assigned administrative roles in Azure AD
Enable on-demand, “just in time” administrative access to Microsoft Online Services like Office 365 and Intune, and to Azure resources of subscriptions, resource groups, and individual resources such as Virtual Machines
See a history of administrator activation, including what changes administrators made to Azure resources
Get alerts about changes in administrator assignments
Require approval to activate Azure AD privileged admin roles
Review membership of administrative roles and require users to provide a justification for continued membership
In Azure AD, Azure AD Privileged Identity Management can manage the users assigned to the built-in Azure AD organizational roles, such as Global Administrator. In Azure, Azure AD Privileged Identity Management can manage the users and groups assigned via Azure RBAC roles, including Owner or Contributor.
Turn on Azure AD Privileged Identity Management
If you have not already turned on Azure AD Privileged Identity Management (PIM), do so in your production tenant. After you turn on Privileged Identity Management, you’ll receive notification email messages for privileged access role changes. These notifications provide early warning when additional users are added to highly-privileged roles in your directory.
Identify and categorize accounts that are in highly privileged roles
Remove any accounts that are no longer needed in those roles, and categorize the remaining accounts that are assigned to admin roles:
Individually assigned to administrative users, and can also be used for non-administrative purposes (for example, personal email)
Individually assigned to administrative users and designated for administrative purposes only
Shared across multiple users
For break-glass emergency access scenarios
For automated scripts
For external users
Define at least two emergency access accounts
Ensure that you do not get into a situation where you could be inadvertently locked out of the administration of your Azure AD tenant due to an inability to sign in or activate an existing individual user’s account as an administrator. For example, if the organization is federated to an on-premises identity provider, that identity provider may be unavailable so users cannot sign in on-premises. You can mitigate the impact of accidental lack of administrative access by storing two or more emergency access accounts in your tenant.
Emergency access accounts help organizations restrict privileged access within an existing Azure Active Directory environment. These accounts are highly privileged and are not assigned to specific individuals. Emergency access accounts are limited to ‘break glass’ scenarios where normal administrative accounts cannot be used. Organizations must control and restrict each emergency account’s usage to only the time for which it is necessary.
Turn on multi-factor authentication and register all other highly-privileged single-user non-federated admin accounts
Require Azure Multi-Factor Authentication (MFA) at sign-in for all individual users who are permanently assigned to one or more of the Azure AD admin roles: Global administrator, Privileged Role administrator, Exchange Online administrator, and SharePoint Online administrator. Use the guide to enable Multi-factor Authentication (MFA) for your admin accounts and ensure that all those users have registered at https://aka.ms/mfasetup. More information can be found under step 2 and step 3 of the guide Protect access to data and services in Office 365.
Conduct an inventory of services, owners, and admins
With the increase in bring-your-own-device (BYOD) and work-from-home policies and the growth of wireless connectivity in businesses, it is critical that you monitor who is connecting to your network. An effective security audit often reveals devices, applications, and programs running on your network that are not supported by IT, and therefore potentially not secure. For more information, see Azure security management and monitoring overview. Ensure that you include all of the following tasks in your inventory process.
Identify the users who have administrative roles and the services where they can manage.
Use Azure AD PIM to find out which users in your organization have admin access to Azure AD, including additional roles beyond those listed in Stage 1.
Beyond the roles defined in Azure AD, Office 365 comes with a set of admin roles that you can assign to users in your organization. Each admin role maps to common business functions, and gives people in your organization permissions to do specific tasks in the Office 365 admin center. Use the Office Admin Center to find out which users in your organization have admin access to Office 365, including via roles not managed in Azure AD. For more information, see About Office 365 admin roles and Security best practices for Office 365.
Perform the inventory in other services your organization relies on, such as Azure, Intune, or Dynamics 365.
Ensure that your admin accounts (accounts that are used for administration purposes, not just users’ day-to-day accounts) have working email addresses attached to them and have registered for Azure MFA or use MFA on-premises.
Ask users for their business justification for administrative access.
Remove admin access for those individuals and services that don’t need it.
When key events occur in Azure AD Privileged Identity Management (PIM), email notifications are sent. For example, PIM sends emails for the following events:
When a privileged role activation is pending approval
When a privileged role activation request is completed
When a privileged role is activated
When a privileged role is assigned
When Azure AD PIM is enabled
Privileged User Monitoring
Certain users require privileged access to do their day-to-day activities. Most often, privileged activity is performed directly on data systems, so it is not visible outside of the system itself. Without effective privileged user monitoring, these users could cause immense damage without ever being detected. In addition, industry and compliance regulations, including PCI DSS, SOX and others, require that privileged users be closely monitored and their activities authorized. As such, Triquesta must have monitoring in place for privileged users.
a. Triquesta must monitor all privileged access to files and databases including local system access, audit user creation and newly granted privileges and restrict usage of shared privileged accounts.
b. User behaviour that deviates from normal access patterns must be identified and alerts set up when such behaviour is witnessed, as it may indicate privilege abuse. Audit reports and analytical tools are needed to support electronic forensic investigations.
c. Changes to data objects and data system users must be properly authorized. Unauthorized activities must be thoroughly investigated, and controls should be implemented to prevent future incidents.
d. Following the principle of “Separation Of Duties” (SOD), the monitoring capability must not be managed or operated by privileged users as they are in a position to alter controls that can conceal irregular activities.
e. Hardening systems by granting access on a business need-to-know basis is an essential step in data breach prevention.
f. Triquesta must do the following to ensure that user privileges are up to date across the company:
1. review user privileges
2. identify highly privileged users and verify that the privileges are necessary for the user’s role and duties
3. revoke excessive user rights
4. remove dormant users.
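Steps 1–4 above can be sketched as a simple review routine. The dict shapes and thresholds here are illustrative, not a real IAM schema:

```python
from datetime import datetime, timedelta

def review_user_privileges(users, role_entitlements, now, dormant_days=90):
    """Flag rights beyond what a user's role justifies (step 3) and
    accounts with no recent login (step 4)."""
    findings = []
    cutoff = now - timedelta(days=dormant_days)
    for user in users:
        allowed = role_entitlements.get(user["role"], set())
        excessive = set(user["rights"]) - allowed   # rights the role doesn't justify
        if excessive:
            findings.append((user["name"], "revoke", sorted(excessive)))
        if user["last_login"] < cutoff:             # no login inside the window
            findings.append((user["name"], "dormant", []))
    return findings
```

A real implementation would pull users and entitlements from the identity store and feed the findings into the attestation workflow rather than return a list.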
Role assignments become “stale” when users retain privileged access that they no longer need. To reduce the risk associated with these stale role assignments, privileged role administrators should regularly review roles, for example by starting an access review in Privileged Identity Management (PIM) for Azure resources.
Segregation of Duties (SoD)
Segregation of Duties (SoD) is an important part of identity governance. Risk analysis for every access conflict is required, and remediation needs to be implemented to ensure user access risks are eliminated or reduced. The SoD checks and remediation shall be implemented as below:
Periodic exercise as part of attestation or SoD review process
Access request and approval process
Provisioning of new access to users
Granting of privilege access to super-user or administrator (Privilege SoD checks)
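The SoD checks above can be sketched as a lookup against a conflict matrix. The conflict pairs here are illustrative; a real matrix comes from the business:

```python
# Pairs of entitlements that one person must not hold together (illustrative)
SOD_CONFLICTS = [
    ({"create_payment", "approve_payment"}, "maker-checker violation"),
    ({"develop_code", "deploy_to_production"}, "change-control violation"),
]

def sod_violations(user_entitlements):
    """Return the description of every SoD conflict a user's combined
    access triggers; an empty list means no conflict."""
    held = set(user_entitlements)
    return [reason for pair, reason in SOD_CONFLICTS if pair <= held]
```

The same check can run at access-request time (to block a new grant), during provisioning, and periodically as part of the attestation exercise.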
Cloud-based systems operate within and across trusted and untrusted networks. As such, data held within and communicated to and between systems and services operating in the cloud should be encrypted.
Following are some controls for data-in-transit encryption:
Transport layer security (TLS): A protocol that ensures privacy between communicating applications and their users on the Internet. When a server and client communicate, TLS ensures that no third party may eavesdrop or tamper with a message. TLS is the successor to SSL.
SSL: The standard security technology for establishing an encrypted link between a web server and a browser. This link ensures that all data passed between the web server and browsers remains private and integral.
Virtual private network (VPN, such as IPSec gateway): A network that is constructed by using public wires—usually the Internet—to connect to a private network, such as a company’s internal network. A number of systems enable you to create networks using the Internet as the medium for transporting data.
All these technologies encrypt data to and from your data center and system communications within the cloud environment.
Here are examples of data-at-rest encryption used in cloud systems:
Whole instance encryption: A method for encrypting all the data associated with the operation and use of a virtual machine, such as the data stored at rest on the volume, disk input/output (I/O), all snapshots created from the volume, as well as all data in transit moving between the virtual machine and the storage volume.
Volume encryption: A method for encrypting a single volume on a drive. Parts of the hard drive are left unencrypted when using this method. (Full disk encryption should be used to encrypt the entire contents of the drive, if that is what is desired.)
File or directory encryption: A method for encrypting a single file or directory on a drive.
Technologies and approaches such as tokenization, data masking are valuable to augment the implementation of a cryptographic solution.
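Tokenization and masking can be sketched as follows. The in-memory dict is a stand-in for a properly secured token vault, and the helper names are ours:

```python
import secrets

_token_vault = {}   # stand-in for a secured, access-controlled token vault

def tokenize(value):
    """Replace a sensitive value with a random surrogate; the mapping
    lives only in the vault, so the token itself reveals nothing."""
    token = secrets.token_hex(8)
    _token_vault[token] = value
    return token

def detokenize(token):
    """Only systems authorized to query the vault recover the original."""
    return _token_vault[token]

def mask_pan(pan):
    """Static masking: keep only the last four digits visible."""
    return "*" * (len(pan) - 4) + pan[-4:]
```

Tokenization preserves referential integrity (the same token can be joined across systems) while masking is one-way and suits test data or display contexts.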
Roles and responsibilities for management of complete lifecycle of cryptographic keys must be established
Cryptographic keys must be selected according to industry best practices
All cryptographic keys must be permanently protected against unauthorized disclosure, modification and loss
Equipment used to generate and store cryptographic keys must be physically protected and covered by appropriate contractual and legal safeguards
Any potential compromise of cryptographic keys must immediately be reported to the incident response team, which must handle the potential breach in accordance with the incident management procedure
Compromised keys must immediately be revoked or changed
Key lifetime must be defined and keys must be changed accordingly
Based on the value of the information asset protected by cryptographic techniques, the cryptographic key information must be split and distributed among at least two personnel to prevent misuse of authority / access to keys
Key Agreement and Authentication
Key exchanges must use one of the following cryptographic protocols: Diffie-Hellman, IKE, or Elliptic curve Diffie-Hellman (ECDH).
End points must be authenticated prior to the exchange or derivation of session keys.
Public keys used to establish trust must be authenticated prior to use. Examples of authentication include transmission via cryptographically signed message or manual verification of the public key hash.
All servers used for authentication (for example, RADIUS or TACACS) must have installed a valid certificate signed by a known trusted provider.
All servers and applications using SSL or TLS must have the certificates signed by a known, trusted provider.
Cryptographic keys must be generated and stored in a secure manner that prevents loss, theft, or compromise.
Key generation must be seeded from an industry standard random number generator (RNG). For examples, see NIST Annex C: Approved Random Number Generators for FIPS PUB 140-2.
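Key generation seeded from an approved generator can be sketched with Python’s `secrets` module, which draws from the operating system’s CSPRNG (the kind of industry-standard RNG this control calls for):

```python
import secrets

def generate_key(bits=256):
    """Generate symmetric key material from the OS CSPRNG; never use a
    general-purpose PRNG such as random.random() for keys."""
    if bits % 8:
        raise ValueError("key size must be a whole number of bytes")
    return secrets.token_bytes(bits // 8)
```

In a production system the key would be generated inside, and never leave, an HSM or key management service; this sketch only illustrates the seeding requirement.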
Access Control
Grant of Access
The asset owner must determine the appropriate access rights and restrictions for specific user roles toward their assets
Users must only be provided with access to the information and information processing facilities that they have been specifically authorized to use
Access must be provided based on the principle of least privilege, i.e. users are granted only the minimum access rights needed to perform their duties
The asset owner must periodically review the access rights and restrictions granted to various users and user group for the assets in their care
User Access Management
A formal user registration and de-registration process must be implemented to enable assignment of access rights
A formal user access provisioning process must be implemented to assign or revoke access rights for all user types to all systems and services
The allocation and use of privileged access rights must be restricted and controlled
The allocation of secret authentication information (e.g. passwords, PINs and cryptographic keys) must be controlled through a formal management process
All employee and external-party users’ access rights to information and information processing facilities must be rescinded upon termination of their employment, contract or agreement
User Responsibilities / System and Application control
Users are required to follow the organization’s practices in the use of authentication information, i.e. anything used to prove identity, such as passwords, PINs, tokens, smart cards, and biometric data
Access to information and application system functions must be restricted in accordance with the access control policy
Where required by the access control policy, access to systems and application must be controlled by a secure log-on procedure
Password management systems must be interactive and must enforce quality passwords, i.e. passwords meeting the organization’s minimum length, complexity, and reuse requirements
The use of utility programs that can override system and application controls must be restricted to authorized administrators and tightly controlled, with each use logged and reviewed
Access to program source code must be restricted to specific users based on documented business requirements, with access grants reviewed periodically
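A password-quality check of the kind required above might look like the following sketch; the thresholds and the common-password list are illustrative assumptions that an organization would set in its own standard:

```python
import re

# Illustrative policy thresholds; an organization would define these
# in its own password standard.
MIN_LENGTH = 12
REQUIRED_CLASSES = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def check_password_quality(password: str) -> list[str]:
    """Return a list of policy violations; an empty list means the password passes."""
    problems = []
    if len(password) < MIN_LENGTH:
        problems.append(f"shorter than {MIN_LENGTH} characters")
    for pattern in REQUIRED_CLASSES:
        if not re.search(pattern, password):
            problems.append(f"missing character class {pattern}")
    if password.lower() in COMMON_PASSWORDS:
        problems.append("found in common-password list")
    return problems
```

An interactive password management system would run such checks at the moment the user chooses a password and report the violations back immediately.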
The Business Continuity & Disaster Recovery Plan
A Business Continuity & Disaster Recovery plan will be created that includes procedures and support agreements to ensure on-time availability and delivery of required business functions and services. The following applies:
The Business Impact Analysis and the Risk Assessment must be reviewed annually by the Business Continuity & Disaster Recovery Planning Team
The Business Impact Analyses and Risk Assessments must be reviewed ahead of the annual cycle if there are major changes in the business environment that affect the data they contain
The Business Continuity & Disaster Recovery Planning Team representatives are responsible for notifying the Business Continuity & Disaster Recovery Manager of any major changes that would require amendments to the Business Impact Analysis. The modified Business Impact Analysis and/or Risk Assessment must be approved by the Business Continuity & Disaster Recovery Steering Committee
The Business Continuity & Disaster Recovery plan must be certified annually by the Business Continuity & Disaster Recovery Manager and approved by the Business Continuity & Disaster Recovery Steering Committee
Business Continuity & Disaster Recovery compliance verification is managed by the Business Continuity & Disaster Recovery Manager with support from the Planning Team. Each Business Unit must take appropriate action to address gaps identified in the Business Impact Analysis
Server Security
Implement the following controls as a minimum to secure host servers within cloud environments:
A standard baseline image shall be created and deployed to existing servers
Secure initial configuration shall be documented and controlled
All nonessential services and software shall be removed from the host
All required patches provided by the vendors shall be monitored and applied
Non-root access to the host shall be blocked under most circumstances (i.e. local console access only via a root account)
Only secure communication protocols and tools shall be allowed for remote access to the host
A host-based firewall shall be configured and used to examine and monitor all communications
Role-based access controls (RBAC) shall be used to limit which users can access a host and what permissions they have
Periodic vulnerability assessment scans shall be performed on hosts, guest OSs, and application workloads running on hosts
Periodic penetration testing shall be performed on hosts and the guest OSs running on them
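The "remove all nonessential services" control lends itself to automation as a simple baseline comparison. The baseline set below is a hypothetical example, not a recommended hardening profile:

```python
# Hypothetical baseline: the only services permitted on a hardened host.
BASELINE_SERVICES = {"sshd", "auditd", "chronyd", "firewalld"}

def audit_services(running: set[str]) -> dict[str, set[str]]:
    """Flag nonessential services to remove and baseline services that are absent."""
    return {
        "remove":  running - BASELINE_SERVICES,   # not in the baseline: remove
        "missing": BASELINE_SERVICES - running,   # expected but not running
    }
```

Run against each host's actual service list, this turns the documented secure configuration into a repeatable compliance check.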
Network Security
The key to virtual network security is isolation. Every host has a management network through which it communicates with other hosts and management systems. In a virtual infrastructure, the management network should be isolated both physically and virtually: connect all hosts, clients, and management systems to a separate physical network to secure the traffic, create isolated virtual switches for the host management network, and never mix management virtual-switch traffic with normal VM network traffic.
Data Classification
Data exists in one of three basic states: at rest, in process, and in transit. All three states require unique technical solutions for data classification, but the applied principles of data classification should be the same for each. Data that is classified as confidential needs to stay confidential when at rest, in process, and in transit.
Although customers are responsible for classifying their data, cloud providers should make written commitments to customers about how they will secure and maintain the privacy of the customer data stored within their cloud. These commitments should include information about privacy and security practices, data use limitations, and regulatory compliance. In addition, cloud providers should make certifications and audit reports that demonstrate compliance with standards such as the International Organization for Standardization (ISO) and controls such as the American Institute of CPAs Service Organization Controls (SOC1 and SOC2) available so customers can verify the effectiveness of their cloud provider’s practices. Having this information will help customers understand whether the cloud provider supports the data protection requirements mandated by their data classification. Customers should not migrate data to a cloud provider that cannot address their data protection needs.
In addition, organizations that are considering cloud solutions and need to comply with regulatory requirements can benefit by working with cloud providers that comply with regulations such as FedRAMP, U.S. HIPAA, EU Data Protection Directive, and others listed in Appendix 1. However, to achieve compliance, such organizations need to remain aware of their classification obligations and be able to manage the classification of data that they store in the cloud.
Note: When classifying a file or resource that combines data that would typically be classified at differing levels, the highest level of classification present should establish the overall classification. For example, a file containing sensitive and restricted data should be classified as restricted
Sensitivity   Terminology model 1     Terminology model 2
High          Confidential            Restricted
Medium        For internal use only   Sensitive
Low           Public                  Unrestricted
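The "highest classification wins" rule from the note above can be expressed directly in code; this sketch uses the terminology-model-2 labels from the table:

```python
# Sensitivity levels ordered from least to most sensitive
# (terminology model 2: unrestricted, sensitive, restricted).
LEVELS = ["unrestricted", "sensitive", "restricted"]

def overall_classification(parts: list[str]) -> str:
    """A combined resource inherits the highest classification of any part."""
    return max(parts, key=LEVELS.index)
```

For example, a file combining sensitive and restricted data is classified restricted overall.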
• Confidential (restricted). Information that is classified as confidential or restricted includes data that can be catastrophic to one or more individuals and/or organizations if compromised or lost. Such information is frequently provided on a “need to know” basis and might include:
o Personal data, including personally identifiable information such as Social Security or national identification numbers, passport numbers, credit card numbers, driver’s license numbers, medical records, and health insurance policy ID numbers.
o Financial records, including financial account numbers such as checking or investment account numbers.
o Business material, such as documents or data that is unique or specific intellectual property.
o Legal data, including potential attorney-privileged material.
o Authentication data, including private cryptography keys, username password pairs, or other identification sequences such as private biometric key files.
Data that is classified as confidential frequently has regulatory and compliance requirements for data handling. Specifics of some of these requirements are listed in Appendix 1.
• For internal use only (sensitive). Information that is classified as being of medium sensitivity includes files and data that would not have a severe impact on an individual and/or organization if lost or destroyed. Such information might include:
Trustworthy Computing | Data classification for cloud readiness 8
o Email, most of which can be deleted or distributed without causing a crisis (excluding mailboxes or email from individuals who are identified in the confidential classification).
o Documents and files that do not include confidential data.
Generally, this classification includes anything that is not confidential. This classification can include most business data, because most files that are managed or used day-to-day can be classified as sensitive. With the exception of data that is made public or is confidential, all data within a business organization can be classified as sensitive by default.
• Public (unrestricted). Information that is classified as public includes data and files that are not critical to business needs or operations. This classification can also include data that has deliberately been released to the public for their use, such as marketing material or press announcements. In addition, this classification can include data such as spam email messages stored by an email service.
Define data ownership
It’s important to establish a clear custodial chain of ownership for all data assets. The following table identifies different data ownership roles in data classification efforts and their respective rights.
The data asset owner is the original creator of the data, who can delegate ownership and assign a custodian. When a file is created, the owner should be able to assign a classification, which means that they have a responsibility to understand what needs to be classified as confidential based on their organization’s policies. All of a data asset owner’s data can be auto-classified as for internal use only (sensitive) unless they are responsible for owning or creating confidential (restricted) data types. Frequently, the owner’s role will change after the data is classified. For example, the owner might create a database of classified information and relinquish their rights to the data custodian.
Note regarding personal data: Data asset owners often use a mixture of services, devices, and media, some of which are personal and some of which belong to the organization. A clear organizational policy can help ensure that usage of devices such as laptops and smart devices is in accordance with data classification guidelines.
• The data asset custodian is assigned by the asset owner (or their delegate) to manage the asset according to agreements with the asset owner or in accordance with applicable policy requirements. Ideally, the custodian role can be implemented in an automated system. An asset custodian ensures that necessary access controls are provided and is responsible for managing and protecting assets delegated to their care. The responsibilities of the asset custodian could include:
o Protecting the asset in accordance with the asset owner’s direction or in agreement with the asset owner
o Ensuring that classification policies are complied with
o Informing asset owners of any changes to agreed-upon controls and/or protection procedures prior to those changes taking effect
o Reporting to the asset owner about changes to or removal of the asset custodian’s responsibilities
• An administrator represents a user who is responsible for ensuring that integrity is maintained, but they are not a data asset owner, custodian, or user. In fact, many administrator roles provide data container management services without having access to the data. The administrator role includes backup and restoration of the data, maintaining records of the assets, and choosing, acquiring, and operating the devices and storage that house the assets.
• The asset user includes anyone who is granted access to data or a file. Access assignment is often delegated by the owner to the asset custodian.
Data retention, recovery, and disposal
Data recovery and disposal, like data reclassification, is an essential aspect of managing data assets. The principles for data recovery and disposal would be defined by a data retention policy and enforced in the same manner as data reclassification; such an effort would be performed by the custodian and administrator roles as a collaborative task.
Failure to have a data retention policy could mean data loss or failure to comply with regulatory and legal discovery requirements. Most organizations that do not have a clearly defined data retention policy tend to use a default “keep everything” retention policy. However, such a retention policy has additional risks in cloud services scenarios. For example, a data retention policy for cloud service providers can be considered as “for the duration of the subscription” (as long as the service is paid for, the data is retained). Such a pay-for-retention agreement may not address corporate or regulatory retention policies. Defining a policy for confidential data can ensure that data is stored and removed based on best practices. In addition, an archival policy can be created to formalize an understanding about what data should be disposed of and when.
Data retention policy should address the required regulatory and compliance requirements, as well as corporate legal retention requirements. Classified data might provoke questions about retention duration and exceptions for data that has been stored with a provider; such questions are more likely for data that has not been classified correctly.
Protecting Confidential Data
After data is classified, finding and implementing ways to protect confidential data becomes an integral part of any data protection deployment strategy. Protecting confidential data requires additional attention to how data is stored and transmitted in conventional architectures as well as in the cloud.
This section provides basic information about some technologies that can automate enforcement efforts to help protect data that has been classified as confidential.
As the following figure shows, these technologies can be deployed as on-premises or cloud-based solutions—or in a hybrid fashion, with some of them deployed on-premises and some in the cloud. (Some technologies, such as encryption and rights management, also extend to user devices.)
Rights management software
One solution for preventing data loss is rights management software. Unlike approaches that attempt to interrupt the flow of information at exit points in an organization, rights management software works at deep levels within data storage technologies. Documents are encrypted, and control over who can decrypt them is enforced through access controls defined in an authentication solution such as a directory service.
Some of the benefits of rights management software include:
• Safeguarded sensitive information. Users can protect their data directly using rights management-enabled applications. No additional steps are required—authoring documents, sending email, and publishing data offer a consistent data protection experience.
• Protection travels with the data. Customers remain in control of who has access to their data, whether in the cloud, existing IT infrastructure, or at the user’s desktop. Organizations can choose to encrypt their data and restrict access according to their business requirements.
• Default information protection policies. Administrators and users can use standard policies for many common business scenarios, such as “Company Confidential–Read Only” and “Do Not Forward.” A rich set of usage rights are supported such as read, copy, print, save, edit, and forward to allow flexibility in defining custom usage rights.
Encryption gateways
Encryption gateways operate in their own layers to provide encryption services by rerouting all access to cloud-based data. This approach should not be confused with that of a virtual private network (VPN); encryption gateways are designed to provide a transparent layer to cloud-based solutions.
Encryption gateways can provide a means to manage and secure data that has been classified as confidential by encrypting the data in transit as well as data at rest.
Encryption gateways are placed into the data flow between user devices and application data centers to provide encryption/decryption services. These solutions, like VPNs, are predominantly on-premises solutions. They are designed to provide a third party with control over encryption keys, which helps reduce the risk of placing both the data and key management with one provider. Such solutions are designed, much like encryption, to work seamlessly and transparently between users and the service.
Data loss prevention
Data loss (sometimes referred to as data leakage) is an important consideration, and the prevention of external data loss via malicious and accidental insiders is paramount for many organizations.
Data loss prevention (DLP) technologies can help ensure that solutions such as email services do not transmit data that has been classified as confidential. Organizations can take advantage of DLP features in existing products to help prevent data loss. Such features use policies that can be easily created from scratch or by using a template supplied by the software provider.
DLP technologies can perform deep content analysis through keyword matches, dictionary matches, regular expression evaluation, and other content examination to detect content that violates organizational DLP policies. For example, DLP can help prevent the loss of the following types of data:
• Social Security and national identification numbers
• Banking information
• Credit card numbers
• IP addresses
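A minimal sketch of the content analysis described above. These patterns are deliberately simplified assumptions; production DLP engines add checksum validation (e.g. the Luhn test for card numbers), contextual keywords, and dictionary matching:

```python
import re

# Simplified illustrative patterns; production DLP uses validated detectors,
# not bare regular expressions.
DLP_PATTERNS = {
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "ipv4":        re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def scan_for_dlp_violations(text: str) -> list[str]:
    """Return the names of every pattern detected in the outbound content."""
    return [name for name, rx in DLP_PATTERNS.items() if rx.search(text)]
```

A mail gateway applying such a scan could block, quarantine, or flag any message for which the returned list is non-empty.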
Some DLP technologies also provide the ability to override the DLP configuration (for example, if an organization needs to transmit Social Security number information to a payroll processor). In addition, it’s possible to configure DLP so that users are notified when they attempt to send sensitive information that should not be transmitted.
A technical overview of the DLP features in Microsoft Exchange Server 2013 and Exchange Online is available on the Data Loss Prevention page on Microsoft TechNet.
Generally, the topic of data classification does not generate as much interest as other, more exciting technology topics. However, data classification can yield significant benefits, such as compliance efficiencies, improved ways to manage the organization’s resources, and facilitation of migration to the cloud. Although data classification efforts can be complex undertakings and require risk assessment for successful implementation, quicker and simpler efforts can also yield benefits. Any data classification effort should endeavor to understand the needs of the organization and be aware of how data is stored, how it is processed, and how it is transmitted throughout the organization.
It’s important for management to support data classification efforts, and for IT to be involved as well. The concept of classification may seem primarily to be an auditing function, but many technology solutions are available that can reduce the amount of effort that is required to successfully implement a data classification model.
It’s also worth noting that data classification rules that pertain to data retention must be addressed when moving to the cloud, and that cloud solutions can help mitigate risk. Some data protection technologies such as encryption, rights management, and data loss prevention solutions have moved to the cloud and can help mitigate cloud risks.
Although this paper did not specifically discuss hybrid environments, a mixture of on-premises and cloud-based data classification technologies can help effectively reduce risk for organizations of any size by providing more control about where data is stored, which gives customers the option to keep highly sensitive data on-premises and under a different set of controls than data stored in the cloud. Indeed, hybrid environments are likely to be the way of the future, and the key to effective data management may well depend on effective data classification.
Remote Access Security
Regardless of the model, deployment method, and scope of the cloud system in use, the need to allow customers to securely access data and resources is consistent. The organization must ensure that all authenticated and authorized users of a cloud resource can access that resource securely, with confidentiality and integrity maintained.
Some of the controls that shall be established to handle cloud-based threats are as follows:
Remote access shall be allowed via VPN tunnelling – IPSec or SSL
Access to cloud resources via a secure terminal
Deployment of a DMZ that hosts a jump /bastion host
Encrypted transmission of all communications between the remote user and the cloud host
Secure login with complex passwords or certificate-based login shall be designed
Two-factor authentication shall be implemented providing enhanced security
A log and audit of all connections shall be implemented
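The two-factor control above is commonly implemented with time-based one-time passwords. The following is a compact RFC 6238 sketch using only the Python standard library:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HOTP applied to a time counter)."""
    counter = struct.pack(">Q", unix_time // step)     # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

For the RFC 6238 test secret b"12345678901234567890" at Unix time 59, this yields the published six-digit value 287082. In deployment, the shared secret is provisioned to the user's authenticator device, and the server accepts a small window of adjacent time steps to tolerate clock drift.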
Operations
Performing Patch Management
Patch management is essential in the cloud environment. From a security perspective, patches are most often of interest because they mitigate software flaw vulnerabilities. A patch management plan shall be developed as part of configuration management.
NIST SP 800-40 Revision 3, “Guide to Enterprise Patch Management Technologies,” is a good point of reference for developing the patch management process.
The Patch Management Process
The patch management process shall address the following items:
Vulnerability detection and evaluation by the vendor
Subscription mechanism to vendor patch notifications
Severity assessment of the patch by the receiving enterprise using that software
Applicability assessment of the patch on target systems
Opening of tracking records in case of patch applicability
Customer notification of applicable patches, if required
Successful patch application verification
Issue and risk management in case of unexpected problems or conflicting actions
Closure of tracking records with all auditable artefacts
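The tracking-record steps above lend themselves to a simple state machine; the states, fields, and record identifiers below are illustrative assumptions:

```python
from dataclasses import dataclass, field

# Illustrative workflow states mirroring the process steps above.
STATES = ["open", "assessed", "applied", "verified", "closed"]

@dataclass
class PatchRecord:
    patch_id: str
    severity: str                       # e.g. taken from the vendor advisory
    state: str = "open"
    audit_trail: list = field(default_factory=list)

    def advance(self, note: str) -> None:
        """Move to the next workflow state, keeping an auditable trail."""
        nxt = STATES[STATES.index(self.state) + 1]
        self.audit_trail.append(f"{self.state} -> {nxt}: {note}")
        self.state = nxt
```

The audit trail accumulated by `advance` provides the auditable artefacts required at record closure.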
Most of the outlined steps are suited for automation in cloud and traditional IT environment implementations, but some may require human interaction for successful execution.
Implementing Network Security Controls: Defense in Depth
The traditional model of defense in depth, which requires a design thought process that seeks to build mutually reinforcing layers of protective systems and policies to manage them, should be considered as a baseline. Using a defense-in-depth strategy to drive the design of the security architecture of cloud-based systems makes it necessary to examine each layer’s objectives and to understand the impact of the choices being made as the model is assembled.
Firewalls
A firewall is a software- or hardware-based network security system that controls incoming and outgoing network traffic based on an applied rule set. A firewall establishes a barrier between a trusted, secure internal network and another network (such as the Internet) that is not assumed to be secure and trusted. The ability to use a host-based firewall is not unique to a cloud environment. Every major OS ships with some form of host-based firewall natively available, or with the capability to add one if needed. The issue is not whether to use a firewall, but where to use it.
Host-Based Software Firewalls
Traditional host-based software firewalls exist for all the major virtualization platforms. These firewalls can be configured through either a command line or a graphical interface
and are designed to be used to protect the host directly and the VMs running on the hosts
indirectly. This approach may work well for a small network with few hosts and VMs configured to run in a private cloud, but it is not as effective for a large enterprise network
with hundreds of hosts and thousands of VMs running in a hybrid cloud. The use of additional hardware-based firewalls, external to the cloud infrastructure but designed to
provide protection for it, needs to be considered for deployment in this case. The use of
cloud-based firewalls to provide enterprise-grade protection may also be considered.
In addition to the standard TCP and user datagram protocol (UDP) ports typically opened on a firewall, you can configure other ports depending on your needs. Supported services and management agents that are required to operate the host are described in a rule set configuration file. The file contains firewall rules and lists each rule’s relationship with ports and protocols.
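A rule set of the kind described can be modeled as an ordered list evaluated first-match-wins, with a default-deny fallback; a minimal sketch in which the field names are assumptions, not the format of any particular vendor's configuration file:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    action: str      # "allow" or "deny"
    protocol: str    # "tcp" or "udp"
    port: int

def evaluate(rules: list[Rule], protocol: str, port: int) -> str:
    """First matching rule wins; anything unmatched falls through to deny."""
    for rule in rules:
        if rule.protocol == protocol and rule.port == port:
            return rule.action
    return "deny"
```

The default-deny fallback reflects the hardening principle that only explicitly required services and management agents should be reachable.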
Layered Security
Layered security is the key to protecting any size network, and for most companies that means deploying IDSs and IPSs. When it comes to IPS and IDS, it’s not a question of which technology to add to your security infrastructure; both are required for maximum protection against malicious traffic.
An IDS device is passive, watching packets of data traverse the network from a monitoring port, comparing the traffic to configured rules, and setting off an alarm if it detects anything suspicious. An IDS can detect several types of malicious traffic that would slip by a typical firewall, including network attacks against services, data-driven attacks on applications, host-based attacks such as unauthorized logins, and malware such as viruses, Trojan horses, and worms. Most IDS products use several methods to detect threats, usually signature-based detection, anomaly-based detection, and stateful protocol analysis.
The IDS engine records the incidents that are logged by the IDS sensors in a database and generates alerts to send to the network administrator. Because the IDS gives deep visibility into network activity, it can also be used to pinpoint problems with an organization’s security policy, document existing threats, and discourage users from violating an organization’s security policy.
The primary complaint with IDS is the number of false positives the technology is prone to spitting out—some legitimate traffic is inevitably tagged as bad. The trick is tuning the device to maximize its accuracy in recognizing true threats while minimizing the number of false positives. These devices should be regularly tuned as new threats are discovered and the network structure is altered. As the technology has matured in the past several years, it has gotten better at weeding out false positives.
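Signature-based detection, the first of the methods mentioned above, reduces to matching traffic against a library of known-bad patterns. A toy sketch follows; real IDS rules (e.g. in Snort or Suricata) carry far richer matching conditions and metadata:

```python
import re

# Toy signature set; each entry names an attack class and its byte pattern.
SIGNATURES = {
    "sql_injection":  re.compile(rb"(?i)union\s+select"),
    "path_traversal": re.compile(rb"\.\./\.\./"),
}

def inspect_payload(payload: bytes) -> list[str]:
    """Passively compare a packet payload against known-attack signatures."""
    return [name for name, rx in SIGNATURES.items() if rx.search(payload)]
```

Tuning, in this model, means adjusting the signature set so that true threats still match while benign traffic that resembles them no longer does.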
An IDS can be host based or network based.
Network Intrusion Detection Systems
Network intrusion detection systems (NIDSs) are placed at a strategic point within the network to monitor traffic to and from all devices on the network. They perform analysis for traffic passing across the entire subnet, work in a promiscuous mode, and match the traffic that is passed on the subnets to the library of known attacks. Once the attack is identified or abnormal behavior is sensed, an alert can be sent to the administrator.
One example of the use of a NIDS would be installing it on the subnet where firewalls are located to see if someone is trying to break into the firewall (Figure 5.4). Ideally, you would scan all inbound and outbound traffic; however, doing so might create a bottleneck that impairs the overall speed of the network.
Host Intrusion Detection Systems
Host intrusion detection systems (HIDSs) run on individual hosts or devices on the network. A HIDS monitors the inbound and outbound packets from the device only and alerts the user or administrator if suspicious activity is detected. It takes a snapshot of existing system files and matches it to the previous snapshot. If critical system files were modified or deleted, an alert is sent to the administrator to investigate. An example of HIDS usage can be seen on mission-critical machines, which are not expected to change their configuration.
Intrusion Prevention Systems
An IPS has all the features of a good IDS but can also stop malicious traffic from invading the enterprise. Unlike an IDS, an IPS sits inline with traffic flows on a network, actively shutting down attempted attacks as they are sent over the wire. It can stop the attack by terminating the network connection or user session originating the attack; by blocking access to the target from the user account, IP address, or other attribute associated with that attacker; or by blocking all access to the targeted host, service, or application (Figure 5.5).
In addition, an IPS can respond to a detected threat in two other ways:
• It can reconfigure other security controls, such as a firewall or router, to block an attack; some IPS devices can even apply patches if the host has particular vulnerabilities.
• Some IPSs can remove the malicious contents of an attack to mitigate the packets, perhaps deleting an infected attachment from an email before forwarding the email to the user.
Combined IDS and IPS
You need to be familiar with IDSs and IPSs to ensure you use the best technology to secure the cloud environment. Be sure to consider combining the IDS and IPS into a single architecture (Figure 5.6).
Different virtualization platforms offer different levels of visibility of intra-VM communications.
In some cases, there may be little or no visibility of the network communications of VMs on the same host.
You should fully understand the capabilities of the virtualization platform to validate that all monitoring requirements are met.
Virtual Machine Introspection
Virtual Machine Introspection (VMI) allows for agentless retrieval of the guest OS state, such as the list of running processes, active network connections, and open files.
VMI can be used to perform external monitoring of the VM by security applications such
as an IDS or an IPS. This monitoring is widely used for malware analysis, memory forensics, and process monitoring.
Honeypots
A honeypot is used to detect, deflect, or in some manner counteract attempts at unauthorized use of information systems. Generally, a honeypot consists of a computer, data, or a network site that appears to be part of a network but is actually isolated and monitored, and that seems to contain information or a resource of value to attackers (Figure 5.7).
There are some risks associated with deploying honeypots in the enterprise. The CCSP needs to ensure that they understand the legal and compliance issues that may be associated with the use of a honeypot. Honeypots should be segmented from the production network to ensure that any potential activity they generate cannot affect other systems.
Honeynets are an extension of the honeypot, grouping multiple honeypot systems to form a network that is used in the same manner as the honeypot, but with more scalability and functionality.
Sandboxing
A sandbox isolates and utilizes only the intended components while maintaining appropriate separation from the remaining components (for example, storing personal information in one sandbox and corporate information in another). Within cloud environments, sandboxing is typically used to run untested or untrusted code in a tightly controlled environment. Several vendors have begun to offer cloud-based sandbox environments that organizations can leverage to fully test applications. Organizations can use a sandbox environment to better understand how an application actually works, executing it and observing its file behavior for indications of malicious activity.
Conducting Vulnerability Assessments
During a vulnerability assessment, the cloud environment is tested for known vulnerabilities.
Detected vulnerabilities are not exploited during a vulnerability assessment (nondestructive testing) and may require further validation to detect false positives. Conduct routine vulnerability assessments and have a process to track, resolve, or remediate detected vulnerabilities. The specifics of the processes should be governed by
the nature of the regulatory requirements and compliance issues to be addressed.
Different levels of testing need to be conducted based on the type of data stored. For example, if medical information is stored, you should conduct checks for compliance with HIPAA. All vulnerability data should be securely stored, with appropriate access controls applied and version and change control tracking turned on. The vulnerability data should be limited in circulation to only those authorized parties requiring access. Customers may request proof of vulnerability scanning and may also request the results. The CSP should define a clear policy on the disclosure of vulnerabilities, along with remediation stages or timelines.
Work with the CSP to ensure that all relevant policies and agreements are in place and
clearly documented as part of the decision to host with the provider. You should also conduct external vulnerability assessments to validate internal assessments.
There are various vulnerability assessment tools, including cloud-based tools that require no additional software installation to deploy and use. CCSPs should ensure that they are familiar with whatever tools they are going to use and manage, as well as the tools that the CSP may be using. If a third-party vendor will be used to validate internal assessment findings through an independent assessment and audit, the CCSP needs to understand the tools used by the vendor as well.
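As a minimal, hypothetical illustration of nondestructive testing, the following Python sketch checks whether a TCP service accepts connections without sending any payload; real assessments should rely on dedicated, authorized scanning tools:

```python
import socket

def check_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Non-destructive reachability check: attempt a TCP connect and
    report the result. No payload is sent, so the target service is
    neither exercised nor exploited."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A scan like this only confirms exposure of a listening service; mapping findings to known vulnerabilities and tracking remediation still requires the processes described above.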
Log Capture and Log Management
According to NIST SP 800-92, a log is a record of the events occurring within an organization’s systems and networks. Logs are composed of log entries; each entry contains information related to a specific event that has occurred in a system or network. Many logs in an organization contain records related to computer security. These computer security logs are generated by many sources, including security software, such as antivirus
software, firewalls, and intrusion detection and prevention systems; OSs on servers, workstations, and networking equipment; and applications.
Log data should be protected, with consideration given to the external storage of log data. It should also be part of the backup recovery plan and DRP of the organization. As a CCSP, it is your responsibility to ensure that proper log management takes place. The type of log data collected depends on the type of service provided. For example, with IaaS, the CCSP does not typically collect or have access to the log data of the VMs; the collection of log data is the customer’s responsibility. In a PaaS or SaaS environment, the CCSP may collect application- or OS-level log data.
NIST SP 800-92 offers the following recommendations that should help you facilitate
more efficient and effective log management for the enterprise:
- Organizations should establish policies and procedures for log management. To establish and maintain successful log management activities, an organization should take these actions:
  - Develop standard processes for performing log management.
  - Define its logging requirements and goals as part of the planning process.
  - Develop policies that clearly define mandatory requirements and suggested recommendations for log management activities, including log generation, transmission, storage, analysis, and disposal.
  - Ensure that related policies and procedures incorporate and support the log management requirements and recommendations.
The organization’s management should provide the necessary support for the efforts involving log management planning, policy, and procedures development. The organization’s policies and procedures should also address the preservation of original logs. Many organizations send copies of network traffic logs to centralized devices. They also use tools that analyze and interpret network traffic. When logs may be needed as evidence, organizations may want to acquire copies of the original log files, the centralized log files, and interpreted log data in case there are questions about the fidelity of the copying and interpretation processes. Retaining logs for evidence may involve the use of different forms of storage and different processes, such as additional restrictions on access to the records.
- Organizations should prioritize log management appropriately throughout the organization. After an organization defines its requirements and goals for the log management process, it should prioritize the requirements and goals based on the perceived reduction of risk and the expected time and resources needed to perform log management functions.
- Organizations should create and maintain a log management infrastructure. A log management infrastructure consists of the hardware, software, networks, and media used to generate, transmit, store, analyze, and dispose of log data. After establishing an initial log management policy and identifying roles and responsibilities, an organization should develop one or more log management infrastructures that effectively support the policy and roles. Following are major factors to consider in the design:
  - Volume of log data to be processed
  - Network bandwidth
  - Online and offline data storage
  - Security requirements for the data
  - Time and resources needed for staff to analyze the logs
- Organizations should provide proper support for all staff with log management responsibilities.
- Organizations should establish standard log management operational processes. The major log management operational processes typically include configuring log sources, performing log analysis, initiating responses to identified events, and managing long-term storage. Administrators have other responsibilities as well, such as the following:
  - Monitoring the logging status of all log sources
  - Monitoring log rotation and archival processes
  - Checking for upgrades and patches to logging software and acquiring, testing, and deploying them
  - Ensuring that each logging host’s clock is synced to a common time source
  - Reconfiguring logging as needed based on policy changes, technology changes, and other factors
  - Documenting and reporting anomalies in log settings, configurations, and processes
Using Security Information and Event Management
Security information and event management (SIEM) is the centralized collection and monitoring of security and event logs from different systems. SIEM allows for the correlation of different events and early detection of attacks.
A SIEM system can be set up locally or hosted in an external cloud-based environment, and it can support early detection of security events.
- A locally hosted SIEM system offers easy access and a lower risk of external disclosure.
- An externally hosted SIEM system may prevent tampering with log data by an attacker who has compromised the local environment.
SIEM systems are also beneficial because they map to and support the implementation of the Critical Controls for Effective Cyber-Defense. The Critical Controls for Effective Cyber-Defense (the Controls) are a recommended set of actions for cyber-defense that provide specific and actionable ways to stop today’s most pervasive attacks. They were developed
and are maintained by a consortium of hundreds of security experts from across the public and private sectors under the guidance of the Center for Internet Security (CIS) and SANS. An underlying theme of the controls is support for large-scale, standards- based security
automation for the management of cyber defenses. See Table 5.5.
Managing the Logical Infrastructure for Cloud Environments
The logical design of the cloud infrastructure should include measures to limit remote access to only those authorized to access resources, provide the capability to monitor the
cloud infrastructure and allow for the remediation of systems in the cloud environment, as well as the backup and restoring of a guest OS.
Access Control for Remote Access
To support globally distributed data centers and secure cloud computing environments,
enterprises must provide remote access to employees and third-party personnel with
whom they have contracted. This includes field technicians, IT and help desk support,
and many others.
Following are key questions that enterprises should be asking themselves:
- Do you trust the person connecting to provide access into your core systems?
- Are you replacing credentials immediately after a remote vendor has logged in?
A cloud remote access solution should be capable of providing secure anywhere
access and extranet capabilities for authorized remote users. The service should utilize
SSL/TLS as a secure transport mechanism and require no software clients to be deployed
on mobile and remote users’ Internet-enabled devices.
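To illustrate the SSL/TLS transport requirement, the following Python sketch (standard library only; the minimum protocol version is an assumed hardening choice) builds a client context with certificate and hostname verification enforced:

```python
import socket
import ssl

def make_tls_context() -> ssl.SSLContext:
    # The default context enforces certificate validation and hostname
    # checking, which a clientless remote-access gateway should never relax.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy protocols
    return ctx

def connect_tls(host: str, port: int = 443) -> str:
    """Open a verified TLS session and report the negotiated version."""
    ctx = make_tls_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()
```

Because verification is on by default in `create_default_context`, the key discipline is simply never to disable it for convenience when integrating with a CSP endpoint.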
One of the fundamental benefits of this cloud delivery model is the reduction of the attack surface: there are no open inbound ports on the host. As an example, Citrix Online runs the popular GoToMyPC.com service,
a remote-access service that uses frequent polling to the company’s cloud servers
as a means to pass data back to a host computer. There are no inbound connections to
the host computer; instead, GoToMyPC pulls data from the cloud. The result is that the
attackable parts of the service—any open ports—are eliminated, and the attack surface is
reduced to a centrally managed hub that can be more easily secured and monitored.
Key benefits of a remote access solution for the cloud are many:
- Secure access without exposing the privileged credential to the end user, eliminating the risk of credential exploitation or key logging.
- Accountability of who is accessing the data center remotely, with a tamper-proof audit trail.
- Session control over who can access, enforcement of workflows such as managerial approval, ticketing integration, session duration limitation, and automatic termination when idle.
- Real-time monitoring to view privileged activities as they are happening or as a recorded playback for forensic analysis. Sessions can be remotely terminated or intervened in when necessary for more efficient and secure IT compliance and cybersecurity operations.
- Secure isolation between the remote user’s desktop and the target system they are connecting to, so that any potential malware does not spread to the target system.

Implementing Policies
Policies are crucial to implementing an effective data security strategy. They typically act
as the connectors that hold many aspects of data security together across both technical
and nontechnical components. The failure to implement and utilize policies in cloud-based (or non-cloud-based) environments would likely result in disparate, isolated activities, effectively operating as standalone one-offs and leading to duplication and limited standardization.
From an organizational perspective, policies are nothing new. In fact, they have long provided the guiding principles that ensure actions and decisions achieve desired and rational outcomes.
From a cloud computing angle, the use of policies can go a long way toward determining
the security posture of cloud services, as can standardizing practices to guide implementation.
Organizational policies form the basis of functional policies that can reduce the likelihood
of the following:
- Financial loss
- Irretrievable loss of data
- Reputational damage
- Regulatory and legal consequences
- Misuse and abuse of systems and resources
As highlighted in prior sections, particularly for organizations that have a well-ingrained and fully operational ISMS, the following functional policies are typically utilized (the list is not all-encompassing):
- Information security policy
- Information technology policy
- Data classification policy
- Acceptable usage policy
- Network security policy
- Internet use policy
- Email use policy
- Password policy
- Virus and spam policy
- Software security policy
- Data backup policy
- Disaster recovery (DR) policy
- Remote access policy
- Segregation of duties policy
- Third-party access policy
- Incident response and management policy
- Human resources security policy
- Employee background checks
- Legal compliance guidelines
Cloud Computing Policies
The listed organizational policies define acceptable, desired, and required criteria for users to follow and adhere to. Throughout a number of these, specified criteria or actions must be drawn out, with reference to any associated standards and processes, which typically provide finer levels of detail.
As part of the review of cloud services, either during the development of the cloud
strategy or during vendor reviews and discussions, the details and requirements should
be expanded to compare or assess the required criteria (as per existing policies). This also
helps determine the provider’s ability to meet or exceed relevant requirements.
Following are some policy examples:
- Password policies: If the organization’s policy requires an eight-character password composed of numbers, uppercase and lowercase characters, and special characters, is this true for the CSP?
- Remote access: Where two-factor authentication may be required for access to network resources by users and third parties, is this true for the CSP?
- Encryption: If minimum encryption strength and relevant algorithms are required (such as a minimum of AES 256-bit), is this met by the CSP or potential solution? Where keys are required to be changed every three months, is this true for the CSP?
- Third-party access: Can all third-party access (including the CSP) be logged and traced for the use of cloud-based services or resources?
- Segregation of duties: Where appropriate, are controls required for the segregation of key roles and functions, and can these be enforced and maintained on cloud-based services?
- Incident management: Where required actions and steps are undertaken, particularly regarding communications and relevant decision makers, how can these be fulfilled when cloud-based services are in scope?
- Data backup: Is data backup included and in line with backup requirements listed in relevant policies? When data integrity is affected or becomes corrupt, will the information be available and in a position to be restored, particularly on shared platforms, storage, and infrastructure?
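The password policy comparison above can be automated when assessing a tenant's settings; this Python sketch (the policy pattern is an assumed example mirroring the eight-character complexity rule) checks a candidate password against the stated rules:

```python
import re

# Assumed policy: at least 8 characters including a digit, an uppercase
# letter, a lowercase letter, and a special character.
POLICY = re.compile(
    r"^(?=.*\d)(?=.*[A-Z])(?=.*[a-z])(?=.*[^A-Za-z0-9]).{8,}$"
)

def meets_policy(password: str) -> bool:
    """Return True if the password satisfies the assumed complexity policy."""
    return POLICY.match(password) is not None
```

Codifying the policy this way makes the organizational requirement directly comparable against whatever complexity settings the CSP exposes.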
Bridging the Policy Gaps
When cloud-based services cannot fulfill the elements listed in the previous section, there needs to be an agreed-upon set of mitigating controls or techniques. If at all possible, you should not revise the policies to reduce or lower the requirements. All changes and variations to policy should be explicitly listed and accepted by all relevant risk and business stakeholders.
Understanding the Implications of the Cloud to Enterprise Risk Management
The cloud represents a fundamental shift in the way technology is offered. The shift is toward the consumerization of IT services and convenience. In addition to the countless benefits outlined in this paper and those you may identify yourself, the cloud creates an organizational change (Figure 6.3).
Figure 6.3 How the cloud affects the enterprise.
It is important for both the CSP and the cloud customer to be focused on risk. The
manner in which typical risk management activities, behaviors, processes, and related
procedures are performed may require significant revisions and redesign. After all, the
way services are delivered changes delivery mechanisms, locations, and providers, all of which result in governance and risk-management changes.
These changes need to be identified from the scoping and strategy phases through the
ongoing and recurring tasks, both ad hoc and periodically scheduled. Addressing these
risks requires that the CSP and cloud customer’s policies and procedures be aligned as
closely as possible, because risk management must be a shared activity if it is to be implemented successfully.
The risk profile is determined by an organization’s willingness to take risks as well as
the threats to which it is exposed. The risk profile should identify the level of risk to be
accepted, the way risks are taken, and the way risk-based decision making is performed.
Additionally, the risk profile should take into account potential costs and disruptions
should one or more risks be exploited.
To this end, it is imperative that an organization fully engage in a risk-based assessment and review of cloud-computing services, service providers, and the overall effects on the organization should it utilize cloud-based services.
Swift decision making can lead to significant advantages for the organization, but when
assessing and measuring the relevant risks in cloud-service offerings, it’s best to have a
systematic, measurable, and pragmatic approach. Undertaking these steps effectively
enables the business to balance the risks and offset any excessive risk components, all
while satisfying listed requirements and objectives for security and growth.
Emerging or rapid-growth companies will be more likely to take significant risks when
utilizing cloud-computing services so they can be first to market.
Difference Between the Data Owner and Controller and
the Data Custodian and Processor
Treating information as an asset requires a number of roles and distinctions to be clearly
identified and defined. The following are key roles associated with data management:
- The data subject is an individual who is the focus of personal data.
- The data controller is a person who either alone or jointly with other persons determines the purposes for which and the manner in which any personal data is processed.
- The data processor, in relation to personal data, is any person (other than an employee of the data controller) who processes the data on behalf of the data controller.
- Data stewards are commonly responsible for data content, context, and associated business rules.
- Data custodians are responsible for the safe custody, transport, and storage of data and the implementation of business rules.
- Data owners hold the legal rights and complete control over a single piece or set of data elements. Data owners also possess the ability to define distribution and associated privileges.
Similar to a contract signed between a customer and a CSP, the SLA forms the most crucial
and fundamental component of how security and operations will be undertaken. The
SLA should also capture requirements related to compliance, best practice, and general
operational activities to satisfy each of these.
Within an SLA, the following contents and topics should be covered at a minimum:
- Availability (for example, 99.99 percent availability of services and data)
- Performance (for example, expected response times versus maximum response times)
- Security and privacy of the data (for example, encrypting all stored and transmitted data)
- Logging and reporting (for example, audit trails of all access and the ability to report on key requirements and indicators)
- DR expectations (for example, worst-case recovery commitment, recovery time objectives (RTOs), maximum period of tolerable disruption (MPTD))
- Location of the data (for example, the ability to meet requirements consistent with local legislation)
- Data format and structure (for example, data retrievable from the provider in a readable and intelligible format)
- Portability of the data (for example, the ability to move data to a different provider or to multiple providers)
- Identification and problem resolution (for example, help desk/service desk, call center, or ticketing system)
- Change-management process (for example, updates or new services)
- Dispute-mediation process (for example, escalation process and consequences)
- Exit strategy with expectations on the provider to ensure a smooth transition
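The availability figure in the list above translates directly into a permitted-downtime budget; a short Python sketch makes the arithmetic explicit:

```python
def allowed_downtime_minutes(availability_pct: float, days: int = 30) -> float:
    """Convert an SLA availability percentage into the maximum downtime
    (in minutes) permitted over the given period."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)
```

For instance, a 99.99 percent commitment over a 30-day month permits roughly 4.3 minutes of downtime, whereas 99.9 percent permits about 43 minutes; working this out is a useful sanity check when negotiating uptime guarantees.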
Although SLAs tend to vary significantly depending on the provider, more often than not they are structured in favor of the provider, ultimately exposing it to the least amount of risk. Note the examples of how elements of the SLA can be weighed against the customer’s
requirements (Figure 6.4).
Figure 6.4 SLA elements weighed against customer requirements.
- Uptime Guarantees
  - Service levels regarding performance and uptime are usually featured in outsourcing contracts but not in software contracts, despite the significant business-criticality of certain cloud applications.
  - Numerous contracts have no uptime or performance service-level guarantees, or such guarantees are provided only as changeable URL links.
  - SLAs, if they are defined in the contract at all, are rarely guaranteed to stay the same upon renewal or not to significantly diminish.
  - A material diminishment of the SLA upon a renewal term may necessitate a rapid switch to another provider at significant cost and business risk.
- SLA Penalties
  - For SLAs to be used to steer the behavior of a cloud services provider, they need to be accompanied by financial penalties.
  - Contract penalties provide an economic incentive for providers to meet stated SLAs. This is an important risk-mitigation mechanism, but such penalties rarely, if ever, provide adequate compensation to a customer for related business losses.
  - Penalty clauses are not a form of risk transfer.
  - Penalties, if they are offered, usually take the form of credits rather than refunds. But who wants an extension of a service that does not meet requirements for quality? Some contracts offer to give back penalties if the provider consistently exceeds the SLA for the remainder of the contract period.
- SLA Penalty Exclusions
  - Limitation on when downtime calculations start: Some CSPs require that the application be down for a period of time (for example, 5 to 15 minutes) before any counting toward an SLA penalty starts.
  - Scheduled downtime: Several CSPs claim that if they give you warning, an interruption in service does not count as unplanned downtime but rather as scheduled downtime and, therefore, is not counted when calculating penalties. In some cases, the warning can be as little as eight hours.
- Suspension of Service
  - Some cloud contracts state that if payment is more than 30 days overdue (including any disputed payments), the provider can suspend the service. This gives the CSP considerable negotiation leverage in the event of any dispute.
- Provider Liability
  - Most cloud contracts restrict liability (apart from infringement claims relating to intellectual property) to a maximum of the value of the fees over the past 12 months. Some contracts even state as little as six months.
  - If the CSP were to lose the customer’s data, for example, the financial impact would likely be much greater than 12 months of fees.
- Data-Protection Requirements
  - Most cloud contracts make the customer ultimately responsible for security, data protection, and compliance with local laws. If the CSP is complying with privacy regulations for personal data on your behalf, you need to be explicit about what the provider is doing and understand any gaps.
  - Cloud contracts rarely contain provisions about DR or provide financially backed RTOs. Some IaaS providers do not even take responsibility for backing up customer data.
- Security Recommendations
  - Gartner recommends negotiating SLAs for security, especially for security breaches, and has seen some CSPs agree to this. Immediate notification of any security or privacy breach as soon as the provider is aware is highly recommended.
  - Because the cloud customer remains ultimately responsible for the organization’s data and for alerting its customers, partners, or employees of any breach, it is particularly critical for companies to determine what mechanisms are in place to alert customers if any security breaches do occur and to establish SLAs determining the time frame the CSP has to alert you of any breach.
  - The time frames you have to respond within will vary by jurisdiction but may be as little as 48 hours. Be aware that if law enforcement becomes involved in a provider security incident, it may supersede any contractual requirement to notify you or to keep you informed.
These examples highlight the dangers of not paying sufficient attention and due diligence when engaging with a CSP around the SLA. Because these examples list only a general sample of potential pitfalls related to the SLA, the key elements listed next can serve as useful reference points when ensuring that SLAs are in line with business requirements. They can also balance risks that may previously have been unforeseen.
Key SLA Elements
The following key elements should be assessed when reviewing and agreeing to the SLA:
- Assessment of risk environment: What types of risks does the organization face?
- Risk profile: What is the number of risks, and what is the potential effect of each?
- Risk appetite: What level of risk is acceptable?
- Responsibilities: Who will do what?
- Regulatory requirements: Will these be met under the SLA?
- Risk mitigation: Which mitigation techniques and controls can reduce risks?
- Risk frameworks: What frameworks are to be used to assess ongoing effectiveness? How will the provider manage risks?
Security Controls Verification (Work in Progress)
Another way to classify controls is by the way they address a risk exposure.
Preventive controls should stop an event from happening.
Detective controls should identify an event when it is happening and generate an alert that prompts a corrective control to act.
Corrective controls should limit the impact of an event and help resume normal operations within a reasonable time frame.
Compensating controls are alternate controls designed to accomplish the intent of the original controls as closely as possible when the originally designed controls cannot be used due to limitations of the environment.
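The four classifications above can also be captured in code, for instance when tagging entries in a controls library; the catalog entries below are illustrative assumptions:

```python
from enum import Enum

class ControlType(Enum):
    # The four classifications by the way a control addresses a risk exposure
    PREVENTIVE = "stops an event from happening"
    DETECTIVE = "identifies an event and raises an alert"
    CORRECTIVE = "limits impact and restores normal operations"
    COMPENSATING = "substitutes for a control the environment cannot support"

# Assumed example mapping of common safeguards to control types
CONTROL_CATALOG = {
    "firewall rule": ControlType.PREVENTIVE,
    "IDS alert": ControlType.DETECTIVE,
    "backup restore": ControlType.CORRECTIVE,
    "manual review": ControlType.COMPENSATING,
}
```

Tagging each control with its type makes it straightforward to verify that every identified risk has at least one preventive or detective control and a corrective fallback.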
Auditing in the Cloud
This section defines the process, methods, and required adaptations necessary for an audit
within the cloud environment.
As discussed throughout this paper, the journey to cloud-based computing requires significant investments throughout the organization, with an emphasis on business components such as finance, legal, compliance, technology, risk, strategy, executive sponsors, and so on.
Given the large number of elements and components to consider, a substantial amount of work is required before utilizing cloud services. The Cloud Security
Alliance (CSA) has developed the Cloud Controls Matrix (CCM), which looks to list
and categorize the domains and controls, along with which elements and components
are relevant according to the controls. The CCM provides an invaluable resource when
identifying and listing each action and what impacts these may have. Additionally, within
the spreadsheet, a best practice guide is given for each control, along with mapping the
CCM against frameworks and standards such as ISO 27001:2013, Federal Information
Processing Standard (FIPS), NIST, Control Objectives for Information and Related
Technology (COBIT), CSA Trusted Cloud Initiative, European Union Agency for Network
and Information Security (ENISA), Federal Risk and Authorization Management
Program (FedRAMP), Generally Accepted Privacy Principles (GAPP), HIPAA, North
American Electric Reliability Corporation (NERC), Jericho Forum, and others. This
should form the foundation for any cloud strategy, risk reviews, or provider-based risk assessments.

Internal and External Audits
As organizations begin to transition services to the cloud, there is a need for ongoing
assurances from both cloud customers and providers that controls are put in place or are
in the process of being identified.
An organization’s internal audit function acts as a third line of defense, after the business or information technology (IT) functions and the risk management functions, through the following:
- Independent verification of the cloud program’s effectiveness
- Providing assurance to the board and risk management functions of the organization with regard to the cloud risk exposure
The internal audit function can also act as a trusted advisor and be proactively involved by working with IT and the business to identify and address the risk associated with the various cloud services and deployment models. In this capacity, the organization is actively taking a risk-based approach on its journey to the cloud. The internal audit function can engage with stakeholders, review the current risk framework with a cloud lens, assist with the risk-mitigation strategies, and perform a number of cloud audits, such as:
- The organization’s current cloud governance program
- Data classification governance
- Shadow IT
CSPs should include internal audit in their discussions about new services and deployment models to obtain feedback on the planned design of the cloud controls their customers will need, as well as to mitigate risk. The internal audit function will still need to consider how to maintain independence from the overall process because, eventually, it must actually perform the audit on these controls.
The internal audit function will also continue to perform audits in the traditional sense. These are directly dependent on the outputs of the organization’s risk-assessment process.
Cloud customers will want not only to engage in discussions with CSPs’ security professionals but also to consider meeting with the CSP’s internal audit team.
audits performed by external auditors. An external auditor’s scope varies greatly from an
internal audit, whereas the external audit usually focuses on the internal controls over
financial reporting. Therefore, the scope of services is usually limited to the IT and business
environments that support the financial health of an organization and in most cases
doesn’t provide specific assurance on cloud risks other than vendor risk considerations on
the financial health of the CSP.
Types of Audit Reports
The internal and external audits assess cloud risks and relationships internally within the
organization between IT and the business and externally between the organization and
cloud vendors. These audits typically focus on the organization. In cloud relationships,
where the ownership of the control that addresses the cloud risks resides within the CSP, organizations need to assess the CSP’s controls to understand whether there are gaps within the expected cloud control framework that is overlaid between the cloud customer and the CSP.
Cloud customers can utilize other reports, such as the American Institute of CPAs
(AICPA) Service Organization Control (SOC) reports (the SOC 1, SOC 2, and SOC 3
reports—see Table 6.2). These examination reports can assist cloud customers in understanding
the controls in place at a CSP.
Table 6.2 AICPA SOC Reports

- SOC 1
  - Users: User entities and their financial statement auditors.
  - Concern: The effect of the service organization’s controls on the user entities’ financial statements.
  - Detail required: Detail on the system, controls, tests performed by the service auditor, and the results of those tests.
- SOC 2
  - Users: User entities, regulators, and others with sufficient knowledge to appropriately use the report.
  - Concern: Effectiveness of controls at the service organization related to security, availability, processing integrity, confidentiality, or privacy.
  - Detail required: Detail on the system, controls, tests performed by the service auditor, and the results of those tests.
- SOC 3
  - Users: Any users with a need for confidence in the service organization’s controls.
  - Concern: Effectiveness of controls at the service organization related to security, availability, processing integrity, confidentiality, or privacy.
  - Detail required: Limited information focused on the boundaries of the system and the achievement of the applicable trust services criteria for security, availability, processing integrity, confidentiality, or privacy.
- SOC 1: Reports on controls at service organizations relevant to user entities’ internal control over financial reporting. This examination is conducted in accordance with the Statement on Standards for Attestation Engagements No. 16 (SSAE 16). This report is the replacement of the Statement on Auditing Standards No. 70 (SAS 70). The international equivalent to the AICPA SOC 1 is ISAE 3402, issued and approved by the International Auditing and Assurance Standards Board (IAASB).
- SOC 2: Reports on controls at a service organization relevant to the Trust Services principles: security, availability, processing integrity, confidentiality, and privacy.
Similar to the SOC 1 in the evaluation of controls, the SOC 2 report is an
examination that expands the evaluation of controls to the criteria set forth by the
AICPA Trust Services principles and is a generally restricted report. The SOC 2
is an examination of the design and operating effectiveness of controls that meet the criteria for principles set forth in the AICPA's Trust Services principles. This
report provides additional transparency into the enterprise’s security based on a
defined industry standard and further demonstrates the enterprise’s commitment
to protecting customer data. SOC 2 reports can be issued on one or more of the
Trust Services principles.
There are two types of SOC 2 reports:
• Type 1: A report on management's description of the service organization's system and the suitability of the design of the controls
• Type 2: A report on management's description of the service organization's system and the suitability of the design and operating effectiveness of the controls
• SOC 3: Similar to the SOC 2, the SOC 3 report is an examination that expands the evaluation of controls to the criteria set forth by the AICPA Trust Services principles. The major difference between SOC 2 and SOC 3 reports is that SOC 3 reports are general use.
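The distinctions above can be captured in a small data structure. The following Python sketch encodes Table 6.2; the dictionary layout, field names, and helper function are illustrative only, not part of any AICPA-defined schema:

```python
# Summary of AICPA SOC report types per Table 6.2.
# Structure and names are illustrative, not a standard API.
SOC_REPORTS = {
    "SOC 1": {
        "audience": "user entities and their financial statement auditors",
        "focus": "internal control over financial reporting (SSAE 16 / ISAE 3402)",
        "general_use": False,  # restricted distribution
    },
    "SOC 2": {
        "audience": "user entities, regulators, and knowledgeable parties",
        "focus": "Trust Services principles (security, availability, "
                 "processing integrity, confidentiality, privacy)",
        "general_use": False,  # generally restricted report
    },
    "SOC 3": {
        "audience": "any user needing confidence in the service organization",
        "focus": "Trust Services principles, limited detail",
        "general_use": True,   # may be distributed freely
    },
}

def reports_for_general_distribution():
    """Return the SOC reports a CSP can share without restriction."""
    return [name for name, info in SOC_REPORTS.items() if info["general_use"]]
```

A customer-facing portal, for instance, could publish only the reports returned by `reports_for_general_distribution()` (here, SOC 3) while gating the others behind a non-disclosure agreement.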
As the cloud matures, so do the varying types of accreditation reporting. As a provider
or customer of cloud services, you need to stay in tune with the changing landscape.
Other types of audit reports and accreditations you can consider are agreed-upon procedures
(AUP) and cloud certifications.
Summary

Cloud computing offers excellent prospects to enable increased quality and greater access at a lower cost of services. When enterprises consider moving a portion of their infrastructure to the cloud, they must evaluate their overall privacy, security, and regulatory compliance posture. This paper approaches such a cloud deployment with the use of international standards, compliance requirements, shared responsibilities, and rationalized risk mapping to address the necessary controls.
Conclusions and Recommendations

Bibliography
References

1. Security and Privacy Controls – National Institute of Standards and Technology (NIST)
2. Controls and Assurance in the Cloud – Using COBIT 5
3. Cloud Controls Matrix – Cloud Security Alliance (CSA)
4. ISO/IEC 27000 family – Information security management systems – International Organization for Standardization (ISO)
5. Guidelines on the use of cloud computing services – International Association of Privacy Professionals (IAPP)
6. Cloud Computing Risk Assessment – European Union Agency for Network and Information Security (ENISA)
7. SaaS Security Best Practices – Intel
8. Controls and Security when moving to SaaS – Deloitte
9. OWASP REST Security Cheat Sheet – https://www.owasp.org/index.php/REST_Security_Cheat_Sheet
10. OWASP Transport Layer Protection Cheat Sheet – https://www.owasp.org/index.php/Transport_Layer_Protection_Cheat_Sheet
11. 13 Effective Security Controls for ISO 27001 Compliance – http://download.microsoft.com/download/1/2/9/12943B91-BBE8-415C-9E0A-4844407E4377/13%20Effective%20Security%20Controls%20for%20ISO%2027001%20Compliance.pdf
12. STRIDE Threat Model – https://docs.microsoft.com/en-us/azure/security/azure-security-threat-modeling-tool-getting-started
Appendix 1: Risk Management Frameworks

The risk frameworks include ISO 31000:2009, ENISA, and NIST—Cloud Computing Synopsis and Recommendations.
Figure 7: Risk Management Frameworks

ISO 31000:2009

ISO 31000:2009 is a guidance standard that is not intended for certification purposes. It sets out terms and definitions, principles, a framework, and a process for managing risk. Like other ISO standards, it lists 11 key principles as a guiding set of rules to enable senior decision makers and organizations to manage risk.
The foundation components of ISO 31000:2009 focus on designing, implementing, and reviewing risk management. From a completeness perspective, ISO 31000:2009 focuses on risk identification, analysis, and evaluation through risk treatment. By performing the stages of the lifecycle, a proactive and measured approach to risk management should be the result, enabling management and business decision makers to make informed and educated decisions.
ENISA

ENISA produced "Cloud Computing: Benefits, Risks, and Recommendations for Information Security," which can be utilized as an effective foundation for risk management. The document identifies 35 types of risks for organizations to consider, coupled with the top 8 security risks ranked by likelihood and impact.
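Ranking risks by likelihood and impact, as ENISA does, can be illustrated with a simple scoring sketch. The 1–5 scales, the multiplicative formula, and the example ratings below are hypothetical; ENISA does not prescribe this exact computation:

```python
# Illustrative likelihood x impact scoring on 1..5 scales.
# Formula and ratings are assumptions, not taken from ENISA.
def risk_score(likelihood, impact):
    """Score a risk; higher scores warrant earlier treatment."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be in 1..5")
    return likelihood * impact

# Example ratings (invented for illustration) for three cloud risks.
risks = {
    "loss of governance": (4, 5),
    "vendor lock-in": (4, 3),
    "isolation failure": (2, 5),
}

# Rank risks from highest to lowest score for triage.
ranked = sorted(risks, key=lambda r: risk_score(*risks[r]), reverse=True)
```

With these sample ratings, "loss of governance" (score 20) would be treated first, ahead of "vendor lock-in" (12) and "isolation failure" (10).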
NIST—Cloud Computing Synopsis and Recommendations
Following the release of the ENISA document, in May 2011 NIST released Special Publication 800-146, which focuses on risk components and the appropriate analysis of such risks. Although NIST serves as an international reference for many of the world’s leading entities, it continues to be strongly adopted by the U.S. government and related agency sectors.
Appendix 2: Data Classification Standards

The following table identifies sample control objective definitions. This list is not complete or authoritative and should be used only as a discussion point when moving services to a cloud solution.
Regulation, requirement, or standard – Control details

NIST SP800-53 R3 (National Institute of Standards and Technology)
RA-2 Security Categorization
AC-4 Information Flow Enforcement

PCI DSS v2.0 (Payment Card Industry Data Security Standard)
9.7.1 Classify media so the sensitivity of the data can be determined.
9.10 Destroy media when it is no longer needed for business or legal reasons.
12.3 Develop usage policies for critical technologies (for example, remote-access technologies, wireless technologies, removable electronic media, laptops, tablets, personal data/digital assistants (PDAs), e-mail usage and Internet usage) and define proper use of these technologies.

American Institute of CPAs Service Organization Controls
(S3.8.0) Procedures exist to classify data in accordance with classification policies and periodically monitor and update such classifications as necessary.
(C3.14.0) Procedures exist to provide that system data are classified in accordance with the defined confidentiality and related security policies.

International Organization for Standardization / International Electrotechnical Commission
A.7.2.1 Classification guidelines

European Union Agency for Network and Information Security – Information Assurance Framework
6.05.(c) Asset management – classification, segmentation: employees are obliged to adhere to regulations on information security, data protection, and adequate handling of customer data.
Table 3: Data Classification Standards
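The classification procedures these controls call for (e.g. NIST RA-2, AICPA S3.8.0) might be sketched as a simple rule-based labeler. The labels, keyword rules, and function below are invented for illustration and are not taken from any of the cited standards:

```python
# Hypothetical classification policy: most sensitive label first, so the
# first matching rule wins. Labels and keywords are illustrative only.
CLASSIFICATION_RULES = [
    ("Restricted",   {"card_number", "password", "ssn"}),
    ("Confidential", {"customer", "contract", "salary"}),
    ("Internal",     {"design", "roadmap"}),
]

def classify(tags):
    """Return the most sensitive label whose keywords match the asset's tags."""
    tags = set(tags)
    for label, keywords in CLASSIFICATION_RULES:
        if tags & keywords:  # any keyword overlap triggers the label
            return label
    return "Public"  # default when no rule matches
```

An asset tagged `{"card_number", "customer"}` would be labeled "Restricted" because the more sensitive rule is evaluated first; periodic re-classification, as S3.8.0 requires, would simply re-run the labeler as tags change.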
Appendix 3: Application Security Guidelines
Appendix 4: Glossary of Terms

Authentication. A process that confirms that a user (identified by a username or user ID) is valid through use of a token or password. This process verifies that users are who they say they are.
Authorization. A process that provides an authenticated user with the ability to access an application, data set, data file, or some other object
Separation of duty. As discussed in this paper, the division of responsibilities in an IT environment that helps ensure that no one person can use IT resources for personal benefit or cause IT-related outcomes that are detrimental to the organization. One of the most common ways to achieve separation of duty is to use a role-based access control system for authorization.
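A role-based approach to separation of duty can be sketched as follows; the roles, permission names, and helper functions are hypothetical, chosen only to show the mechanism:

```python
# Minimal RBAC sketch: roles map to permission sets.
# All names below are invented for illustration.
ROLE_PERMISSIONS = {
    "developer":       {"commit_code", "read_logs"},
    "release_manager": {"approve_release", "deploy"},
    "auditor":         {"read_logs", "read_audit_trail"},
}

def is_authorized(roles, permission):
    """True if any of an authenticated user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)

def violates_separation(roles):
    """Separation-of-duty check: no one user should both write code
    and approve its release."""
    return (is_authorized(roles, "commit_code")
            and is_authorized(roles, "approve_release"))
```

Here a user holding only the developer role passes the check, while a user assigned both developer and release_manager would be flagged before the conflicting role combination is granted.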
Token. An item that is used to authenticate a username or user ID. A token can be something a user possesses, such as a card key, something that is biometrics-based, such as a fingerprint, retinal scan, or voice print, or something that is known, such as a password.
Structured data. Data that is typically human readable and able to be indexed by machine. This data type includes databases and spreadsheets.
Unstructured data. Data that is not human readable and is difficult to index. This data type includes source code, binaries, and documents, and can include such things as email because the data is typically randomly managed.
Archive and recovery. As discussed in this paper, the long-term storage of data and its retrieval when it needs to be returned to service. Archival and recovery methods must conform to the retention model that is used.
Checklist for the items in the report

Is the final report neatly formatted with all the elements required for a technical report? Yes / No
Is the Cover page in proper format as given in Annexure A? Yes / No
Is the Title page (inner cover page) in proper format? Yes / No
(a) Is the Certificate from the Supervisor in proper format? Yes / No
(b) Has it been signed by the Supervisor? Yes / No
Is the Abstract included in the report properly written within one page? Have the technical keywords been specified properly? Yes / No
Is the title of your report appropriate? The title should be adequately descriptive, precise, and must reflect the scope of the actual work done. Uncommon abbreviations / acronyms should not be used in the title. Yes / No
Have you included the List of abbreviations / acronyms? Yes / No
Does the report contain a summary of the literature survey? Yes / No
Does the Table of Contents include page numbers? Yes / No
Are the pages numbered properly? (Ch. 1 should start on Page # 1) Yes / No
Are the figures numbered properly? (Figure numbers and figure titles should be at the bottom of the figures) Yes / No
Are the tables numbered properly? (Table numbers and table titles should be at the top of the tables) Yes / No
Are the captions for the figures and tables proper? Yes / No
Are the appendices numbered properly? Are their titles appropriate? Yes / No
Is the conclusion of the report based on discussion of the work? Yes / No
Are References or Bibliography given at the end of the report? Yes / No
Have the References been cited properly inside the text of the report? Yes / No
Are all the references cited in the body of the report? Yes / No
Is the report format and content according to the guidelines? The report should not be a mere printout of a PowerPoint presentation, or a user manual. Source code of software need not be included in the report. Yes / No
Table 4: Checklist items for the report

Declaration by Student:
I certify that I have properly verified all the items in this checklist and ensure that the report is in proper format as specified in the course handout.
Place: Singapore Signature of the Student
Date: DD MMM YYYY Name: Balasubramaniyan P