A security company is developing a new cloud-based log analytics platform. Its purpose is to allow:
- Customers to upload their log files to the "big data" platform
- Customers to perform remote log search
- Customers to integrate into the platform using an API so that third-party business intelligence tools can be used for trending, insights, and/or discovery
Which of the following are the BEST security considerations to protect data from one customer being disclosed to other customers? (Select THREE).
A. Secure storage and transmission of API keys
B. Secure protocols for transmission of log files and search results
C. At least two years retention of log files in case of e-discovery requests
D. Multi-tenancy with RBAC support
E. Sanitizing filters to prevent upload of sensitive log file contents
F. Encryption of logical volumes on which the customers' log files reside
Correct Answer: ABD
The cloud-based log analytics platform will be used by multiple customers, so a multi-tenancy solution is required. Multi-tenancy isolates each tenant's (customer's) services, jobs, and virtual machines from those of other tenants. RBAC (Role-Based Access Control) is used to assign permissions to each user: roles are defined with specific sets of permissions, and users are assigned one or more roles according to the tasks they need to perform.
Secure protocols for the transmission of log files and search results are also essential. A secure protocol such as SSL/TLS should be used for the transmission of any sensitive data to prevent the data from being captured by packet-sniffing attacks.
Finally, the API keys should be stored and transmitted securely. If a user were able to obtain another customer's key, that user could access the other customer's data.
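The combination of tenant isolation and RBAC can be sketched in a few lines. This is a minimal illustration, not a description of any real platform's API; all names (`ROLE_PERMISSIONS`, `search_logs`, the tenant IDs) are hypothetical.

```python
# Minimal sketch of multi-tenancy with RBAC: every query is scoped to the
# caller's tenant, and the caller's roles determine permitted operations.

ROLE_PERMISSIONS = {
    "analyst": {"search_logs"},
    "admin": {"search_logs", "upload_logs", "manage_users"},
}

class User:
    def __init__(self, name, tenant_id, roles):
        self.name = name
        self.tenant_id = tenant_id
        self.roles = roles

    def can(self, permission):
        return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in self.roles)

def search_logs(user, log_store, query):
    if not user.can("search_logs"):
        raise PermissionError(f"{user.name} lacks 'search_logs'")
    # Tenant isolation: only entries belonging to the caller's tenant are visible.
    return [entry for entry in log_store
            if entry["tenant_id"] == user.tenant_id and query in entry["message"]]

logs = [
    {"tenant_id": "acme", "message": "login failed"},
    {"tenant_id": "globex", "message": "login failed"},
]
alice = User("alice", "acme", ["analyst"])
print(search_logs(alice, logs, "login"))  # only the 'acme' entry
```

Even though both tenants have matching log entries, each user only ever sees rows tagged with their own tenant ID, which is the isolation property the question asks about.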
Question 32:
Ann, a software developer, wants to publish her newly developed software to an online store. Ann wants to ensure that the software will not be modified by a third party or end users before being installed on mobile devices. Which of the following should Ann implement to stop modified copies of her software from running on mobile devices?
A. Single sign-on
B. Identity propagation
C. Remote attestation
D. Secure code review
Correct Answer: C
Trusted Computing (TC) is a technology developed and promoted by the Trusted Computing Group. With Trusted Computing, the computer will consistently behave in expected ways, and those behaviors will be enforced by computer hardware and software. Enforcing this behavior is achieved by loading the hardware with a unique encryption key inaccessible to the rest of the system.
Remote attestation allows changes to the user's computer to be detected by authorized parties. For example, software companies can identify unauthorized changes to software, including users tampering with their software to circumvent technological protection measures. It works by having the hardware generate a certificate stating what software is currently running. The computer can then present this certificate to a remote party to show that unaltered software is currently executing.
Remote attestation is usually combined with public-key encryption so that the information sent can only be read by the programs that presented and requested the attestation, and not by an eavesdropper.
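The measure-then-sign idea behind attestation can be illustrated in miniature. This is a greatly simplified sketch: a real TPM uses asymmetric keys, PCR registers, and a certificate chain, whereas here an HMAC with a hypothetical hardware-held key stands in for the signing step.

```python
import hashlib
import hmac

# Simplified attestation: the device "measures" its software (a hash) and
# signs the measurement with a key that untrusted code cannot read.
DEVICE_KEY = b"burned-into-hardware"  # hypothetical, inaccessible to the OS

def attest(software_bytes):
    measurement = hashlib.sha256(software_bytes).hexdigest()
    signature = hmac.new(DEVICE_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return measurement, signature

def verify(measurement, signature, expected_measurement):
    # The remote party checks both the signature and the known-good measurement.
    expected_sig = hmac.new(DEVICE_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected_sig) and measurement == expected_measurement

known_good, sig = attest(b"app v1.0")
print(verify(known_good, sig, known_good))        # True: unmodified software

tampered_meas, tampered_sig = attest(b"app v1.0 + malware")
print(verify(tampered_meas, tampered_sig, known_good))  # False: measurement differs
```

A modified copy of the software produces a different measurement, so its attestation no longer matches the value the publisher expects, which is how tampered builds are refused.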
Question 33:
The risk manager is reviewing a report which identifies a requirement to keep a business critical legacy system operational for the next two years. The legacy system is out of support by the vendor, and security patches are no longer released. Additionally, this is a proprietary embedded system, and little is documented and known about it. Which of the following should the Information Technology department implement to reduce the security risk from a compromise of this system?
A. Virtualize the system and migrate it to a cloud provider.
B. Segment the device on its own secure network.
C. Install an antivirus and HIDS on the system.
D. Hire developers to reduce vulnerabilities in the code.
Correct Answer: B
The question states that the application is a proprietary embedded system and little is documented and known about it. If we don't know much about the application or system, we should not make any changes to the system. The best solution would be to isolate the system by segmenting the device on its own secure network. This will reduce the risk of a compromise of the system without making changes to the system itself.
Question 34:
A university requires a significant increase in web and database server resources for one week, twice a year, to handle student registration. The web servers remain idle for the rest of the year. Which of the following is the MOST cost effective way for the university to securely handle student registration?
A. Virtualize the web servers locally to add capacity during registration.
B. Move the database servers to an elastic private cloud while keeping the web servers local.
C. Move the database servers and web servers to an elastic private cloud.
D. Move the web servers to an elastic public cloud while keeping the database servers local.
Correct Answer: D
In cloud computing, elasticity is defined as the degree to which a system (or a particular cloud layer) autonomously adapts its capacity to workload over time. The dynamic adaptation of capacity, e.g., by altering the use of computing resources to meet a varying workload, is called "elastic computing". In general, an elastic cloud application or process has three elasticity dimensions (Cost, Quality, and Resources), enabling it to increase and decrease its cost, quality, or available resources to accommodate specific requirements.
In this question, the web servers remain idle when they are not used for the rest of the year. Therefore, we should host the web servers in the elastic public cloud. This will be cost effective because we will not be charged for them while they are not in use. The database servers are not idle so they should be kept local.
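The cost argument can be made concrete with a toy model: with elastic capacity you pay per instance-hour only while instances run, whereas fixed on-premises capacity must be sized for the peak year-round. The hourly rate, per-server capacity, and load figures below are made-up numbers for illustration only.

```python
HOURLY_RATE = 0.10  # hypothetical cost per web server per hour

def servers_needed(requests_per_hour, capacity_per_server=1000):
    # Ceiling division, with a minimum of one server kept online.
    return max(1, -(-requests_per_hour // capacity_per_server))

def elastic_cost(hourly_load):
    # Pay only for the servers actually running each hour.
    return sum(servers_needed(load) * HOURLY_RATE for load in hourly_load)

def fixed_cost(hourly_load):
    # Fixed capacity must be provisioned for the peak, all year.
    peak = max(servers_needed(load) for load in hourly_load)
    return peak * HOURLY_RATE * len(hourly_load)

# Two registration weeks of heavy load, near-idle for the other 50 weeks.
year = [5000] * (2 * 7 * 24) + [100] * (50 * 7 * 24)
print(f"elastic: ${elastic_cost(year):,.2f}  fixed: ${fixed_cost(year):,.2f}")
```

Because the burst lasts only two weeks, the elastic total comes out far below the cost of keeping peak capacity provisioned year-round, which is exactly the trade-off the question describes.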
Question 35:
An organization has several production critical SCADA supervisory systems that cannot follow the normal 30-day patching policy. Which of the following BEST maximizes the protection of these systems from malicious software?
A. Configure a firewall with deep packet inspection that restricts traffic to the systems
B. Configure a separate zone for the systems and restrict access to known ports
C. Configure the systems to ensure only necessary applications are able to run
D. Configure the host firewall to ensure only the necessary applications have listening ports
Correct Answer: C
SCADA stands for supervisory control and data acquisition, a computer system for gathering and analyzing real-time data. SCADA systems are used to monitor and control a plant or equipment in industries such as telecommunications, water and waste control, energy, oil and gas refining, and transportation.
If we cannot take the SCADA systems offline for patching, then the best way to protect these systems from malicious software is to reduce the attack surface by configuring the systems so that only necessary applications are able to run.
The basic strategies of attack surface reduction are to reduce the amount of code running, reduce entry points available to untrusted users, and eliminate services requested by relatively few users. One approach to improving information security is to reduce the attack surface of a system or software. By turning off unnecessary functionality, there are fewer security risks, and by having less code available to unauthorized actors, there will tend to be fewer failures. Although attack surface reduction helps prevent security failures, it does not mitigate the amount of damage an attacker could inflict once a vulnerability is found.
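One common way to enforce "only necessary applications are able to run" is application allowlisting, where execution is permitted only when a binary's hash matches a pre-approved entry. The sketch below illustrates the idea; the application names and binary contents are hypothetical, and a real deployment would use an OS-level mechanism (e.g. a kernel enforcement policy) rather than a user-space check.

```python
import hashlib

# Allowlist of approved applications, keyed by name, valued by the
# SHA-256 hash of the known-good binary.
APPROVED = {
    "scada_hmi": hashlib.sha256(b"trusted scada binary").hexdigest(),
}

def may_execute(name, binary_bytes):
    # Default-deny: anything not on the list, or with a mismatched hash,
    # is refused. This blocks both unknown programs and tampered copies.
    expected = APPROVED.get(name)
    return expected is not None and hashlib.sha256(binary_bytes).hexdigest() == expected

print(may_execute("scada_hmi", b"trusted scada binary"))  # True
print(may_execute("scada_hmi", b"trojaned binary"))       # False: hash mismatch
print(may_execute("game.exe", b"anything"))               # False: not on the list
```

The default-deny posture is the key point: malware dropped onto the system simply is not on the list, so it never runs, even though the underlying OS remains unpatched.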
Question 36:
A security administrator is tasked with increasing the availability of the storage networks while enhancing the performance of existing applications. Which of the following technologies should the administrator implement to meet these goals? (Select TWO).
A. LUN masking
B. Snapshots
C. vSAN
D. Dynamic disk pools
E. Multipath
F. Deduplication
Correct Answer: DE
We can use dynamic disk pools (DDP) to increase availability and improve performance compared to traditional RAID. Multipathing also improves availability by creating multiple paths to the storage (in case one path fails), and it improves performance by aggregating the performance of the multiple paths.
DDP dynamically distributes all data, spare capacity, and protection information across a pool of drives. Effectively, DDP is a new type of RAID level, built on RAID 6. It uses an intelligent algorithm to define where each chunk of data should reside. In traditional RAID, drives are organized into arrays, and logical drives are written across stripes on the physical drives in the array. Hot spares contain no data until a drive fails, leaving that spare capacity stranded and without a purpose. In the event of a drive failure, the data is recreated on the hot spare, significantly impacting the performance of all drives in the array during the rebuild process.
With DDP, each logical drive's data and spare capacity is distributed across all drives in the pool, so all drives contribute to the aggregate I/O of the logical drive, and the spare capacity is available to all logical drives. In the event of a physical drive failure, data is reconstructed throughout the disk pool. Basically, the data that had previously resided on the failed drive is redistributed across all drives in the pool. Recovery from a failed drive may be up to ten times faster than a rebuild in a traditional RAID set, and the performance degradation is much less during the rebuild.
In computer storage, multipath I/O is a fault-tolerance and performance-enhancement technique that defines more than one physical path between the CPU in a computer system and its mass-storage devices through the buses, controllers, switches, and bridge devices connecting them.
As an example, a SCSI hard disk drive may connect to two SCSI controllers on the same computer, or a disk may connect to two Fibre Channel ports. Should one controller, port or switch fail, the operating system can route the I/O through the remaining controller, port or switch transparently and with no changes visible to the applications.
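The transparent-failover behavior of multipath I/O can be sketched in a few lines. This is purely illustrative (the controller names and `Path`/`multipath_write` abstractions are invented); real multipathing lives in the OS storage stack, not application code.

```python
class Path:
    """One physical path (controller/port/switch chain) to the storage."""
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

    def send(self, block):
        if not self.healthy:
            raise IOError(f"path {self.name} is down")
        return f"wrote {block} via {self.name}"

def multipath_write(paths, block):
    # Try each path in order; failover is invisible to the caller.
    for path in paths:
        try:
            return path.send(block)
        except IOError:
            continue
    raise IOError("all paths failed")

paths = [Path("controller-A"), Path("controller-B")]
paths[0].healthy = False  # simulate a controller failure
print(multipath_write(paths, "block-42"))  # wrote block-42 via controller-B
```

The caller of `multipath_write` never learns that controller-A failed; the I/O simply completes over the surviving path, which is the availability property the question asks for.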
Question 37:
A recently hired security administrator is advising developers about the secure integration of a legacy in-house application with a new cloud based processing system. The systems must exchange large amounts of fixed format data such as names, addresses, and phone numbers, as well as occasional chunks of data in unpredictable formats. The developers want to construct a new data format and create custom tools to parse and process the data. The security administrator instead suggests that the developers:
A. Create a custom standard to define the data.
B. Use well formed standard compliant XML and strict schemas.
C. Only document the data format in the parsing application code.
D. Implement a de facto corporate standard for all analyzed data.
Correct Answer: B
To ensure the successful parsing of the data, the XML containing the data should be well formed, and we can use strict schemas to ensure the correct formatting of the data.
XML has two main advantages: first, it offers a standard way of structuring data, and, second, we can specify the vocabulary the data uses. We can define the vocabulary (what elements and attributes an XML document can use) using either a document type definition (DTD) or the XML Schema language.
Schemas provide the ability to define an element's type (string, integer, etc.) and much finer constraints (a positive integer, a string starting with an uppercase letter, etc.). DTDs enforce a strict ordering of elements; schemas have a more flexible range of options. Finally, schemas are written in XML, whereas DTDs have their own syntax. For an application to accept an XML document, it must be both well formed and valid. A document that is not well formed is not really XML and doesn't conform to the W3C's stipulations for an XML document. A parser will fail when given that document, even if validation is turned off.
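The well-formedness check described above can be demonstrated with the Python standard library's XML parser. Note this checks well-formedness only; full XSD schema validation would require a schema-aware library such as lxml, which is not shown here.

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    # A conforming parser rejects any document that is not well-formed XML,
    # regardless of whether schema validation is enabled.
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

good = "<customer><name>Ann</name><phone>555-0100</phone></customer>"
bad = "<customer><name>Ann</customer>"  # mismatched closing tag

print(is_well_formed(good))  # True
print(is_well_formed(bad))   # False
```

This is why standard-compliant XML beats a custom format: the parser, the well-formedness rules, and the schema language already exist and are battle-tested, so the developers do not have to write and debug their own parsing tools.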
Question 38:
A trucking company delivers products all over the country. The executives at the company would like to have better insight into the location of their drivers to ensure the shipments are following secure routes. Which of the following would BEST help the executives meet this goal?
A. Install GSM tracking on each product for end-to-end delivery visibility.
B. Implement geo-fencing to track products.
C. Require drivers to geo-tag documentation at each delivery location.
D. Equip each truck with an RFID tag for location services.
Correct Answer: B
A geo-fencing solution would use GPS to track the vehicles and could be configured to inform the executives where the vehicles are.
Geo-fencing is a feature in a software program that uses the global positioning system (GPS) or radio frequency identification (RFID) to define geographical boundaries. A geo-fence is a virtual barrier.
Programs that incorporate geo-fencing allow an administrator to set up triggers so that when a device enters (or exits) the boundaries defined by the administrator, a text message or email alert is sent. Many geo-fencing applications incorporate Google Earth, allowing administrators to define boundaries on top of a satellite view of a specific geographical area. Other applications define boundaries by longitude and latitude or through user-created and web-based maps.
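At its core, a circular geo-fence is a point-in-radius test on GPS coordinates, which the great-circle (haversine) distance formula makes straightforward. The waypoint coordinates below are arbitrary examples, and real products typically support polygon fences as well as simple circles.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometers.
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_fence(pos, center, radius_km):
    # Trigger logic: alert when a truck's reported position leaves the fence.
    return haversine_km(pos[0], pos[1], center[0], center[1]) <= radius_km

depot = (40.7128, -74.0060)  # hypothetical route waypoint
print(inside_fence((40.7200, -74.0000), depot, 5.0))  # True: about 1 km away
print(inside_fence((41.5000, -74.0000), depot, 5.0))  # False: far off route
```

A tracking backend would run this check against each truck's periodic GPS report and raise an alert when `inside_fence` flips to `False`, giving the executives the route-deviation visibility they want.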
Question 39:
The IT director has charged the company helpdesk with sanitizing fixed and removable media. The helpdesk manager has written a new procedure to be followed by the helpdesk staff. This procedure includes the current standard to be used for data sanitization, as well as the location of physical degaussing tools. In which of the following cases should the helpdesk staff use the new procedure? (Select THREE).
A. During asset disposal
B. While reviewing the risk assessment
C. While deploying new assets
D. Before asset repurposing
E. After the media has been disposed of
F. During the data classification process
G. When installing new printers
H. When media fails or is unusable
Correct Answer: ADH
Data sanitization using physical degaussing tools is the use of powerful magnets to completely destroy data on a storage device. This is performed to ensure the confidentiality of data, that is, to ensure that the data stored on the device cannot be recovered by unauthorized users. Sanitization should be performed when disposing of a storage device and when repurposing one. When media fails or is unusable, it will be disposed of, and thus it should also be sanitized first.
Question 40:
A member of the software development team has requested advice from the security team to implement a new secure lab for testing malware. Which of the following is the NEXT step that the security team should take?
A. Purchase new hardware to keep the malware isolated.
B. Develop a policy to outline what will be required in the secure lab.
C. Construct a series of VMs to host the malware environment.
D. Create a proposal and present it to management for approval.
Correct Answer: D
Before we can create a solution, we need to justify why it is needed and plan how it will best fit within the company's business operations. We therefore need to create a proposal that explains the intended implementation and allows the company to budget for it.