Exam Details

  • Exam Code: CCA-500
  • Exam Name: Cloudera Certified Administrator for Apache Hadoop (CCAH)
  • Certification: CCAH
  • Vendor: Cloudera
  • Total Questions: 60 Q&As
  • Last Updated: May 14, 2024

Cloudera CCAH CCA-500 Questions & Answers

  • Question 21:

    You are migrating a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) on YARN. You want to maintain your MRv1 TaskTracker slot capacities when you migrate. What should you do?

    A. Configure yarn.applicationmaster.resource.memory-mb and yarn.applicationmaster.resource.cpu-vcores so that ApplicationMaster container allocations match the capacity you require.

    B. You don't need to configure or balance these properties in YARN as YARN dynamically balances resource management capabilities on your cluster

    C. Configure mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum in yarn-site.xml to match your cluster's capacity set by the yarn-scheduler.minimum-allocation

    D. Configure yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores to match the capacity you require under YARN for each NodeManager
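
    For reference, NodeManager capacity is set per node in yarn-site.xml; a minimal sketch (the values here are illustrative assumptions, not recommendations):

    <property><name>yarn.nodemanager.resource.memory-mb</name><value>8192</value></property>

    <property><name>yarn.nodemanager.resource.cpu-vcores</name><value>8</value></property>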

  • Question 22:

    You are configuring a server running HDFS and MapReduce version 2 (MRv2) on YARN on Linux. How must you format the underlying file system of each DataNode?

    A. They must be formatted as HDFS

    B. They must be formatted as either ext3 or ext4

    C. They may be formatted in any Linux file system

    D. They must not be formatted; HDFS will format the file system automatically
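
    As background, DataNode storage directories live on an ordinary Linux file system; a typical (illustrative) preparation of a data disk, assuming the hypothetical device /dev/sdb1 and mount point /data/1/dfs/dn, looks like:

    mkfs.ext4 /dev/sdb1

    mount -o noatime /dev/sdb1 /data/1/dfs/dn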

  • Question 23:

    Given:

    You want to clean up this list by removing jobs where the State is KILLED. Which command do you enter?

    A. yarn application -refreshJobHistory

    B. yarn application -kill application_1374638600275_0109

    C. yarn rmadmin -refreshQueue

    D. yarn rmadmin -kill application_1374638600275_0109
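
    For reference, the stock YARN CLI uses hyphenated subcommands; a sketch using the application ID from the options:

    yarn application -list -appStates KILLED

    yarn application -kill application_1374638600275_0109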

  • Question 24:

    Assume you have a file named foo.txt in your local directory. You issue the following three commands:

    hadoop fs -mkdir input

    hadoop fs -put foo.txt input/foo.txt

    hadoop fs -put foo.txt input

    What happens when you issue the third command?

    A. The write succeeds, overwriting foo.txt in HDFS with no warning

    B. The file is uploaded and stored as a plain file named input

    C. You get a warning that foo.txt is being overwritten

    D. You get an error message telling you that foo.txt already exists, and asking you if you would like to overwrite it.

    E. You get an error message telling you that foo.txt already exists. The file is not written to HDFS

    F. You get an error message telling you that input is not a directory

    G. The write silently fails
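
    You can reproduce this scenario yourself; a sketch (file and directory names taken from the question), noting that hadoop fs -put refuses an existing destination unless -f is supplied:

    hadoop fs -mkdir input

    hadoop fs -put foo.txt input/foo.txt

    hadoop fs -put foo.txt input

    hadoop fs -put -f foo.txt input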

  • Question 25:

    You use the hadoop fs -put command to add a file "sales.txt" to HDFS. This file is small enough that it fits into a single block, which is replicated to three nodes in your cluster (with a replication factor of 3). One of the nodes holding this file (a single block) fails. How will the cluster handle the replication of the file in this situation?

    A. The file will remain under-replicated until the administrator brings that node back online

    B. The cluster will re-replicate the file the next time the system administrator reboots the NameNode daemon (as long as the file's replication factor doesn't fall below)

    C. The file will be immediately re-replicated, and all other HDFS operations on the cluster will halt until the cluster's replication values are restored

    D. The file will be re-replicated automatically after the NameNode determines it is under-replicated based on the block reports it receives from the DataNodes
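
    As a reference point, replication health can be checked at any time; a sketch (the HDFS path is an illustrative assumption):

    hdfs fsck /user/me/sales.txt -files -blocks -locations

    hdfs dfsadmin -report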

  • Question 26:

    You have installed a cluster running HDFS and MapReduce version 2 (MRv2) on YARN. You have no dfs.hosts entry(ies) in your hdfs-site.xml configuration file. You configure a new worker node by setting fs.default.name in its configuration files to point to the NameNode on your cluster, and you start the DataNode daemon on that worker node. What do you have to do on the cluster to allow the worker node to join, and start storing HDFS blocks?

    A. Without creating a dfs.hosts file or making any entries, run the command hadoop dfsadmin -refreshModes on the NameNode

    B. Restart the NameNode

    C. Create a dfs.hosts file on the NameNode, add the worker node's name to it, then issue the command hadoop dfsadmin -refreshNodes on the NameNode

    D. Nothing; the worker node will automatically join the cluster when the NameNode daemon is started
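
    For background, the include-file mechanism ties together an hdfs-site.xml property and a refresh command; a minimal sketch (the file path is an illustrative assumption):

    <property><name>dfs.hosts</name><value>/etc/hadoop/conf/dfs.hosts</value></property>

    hdfs dfsadmin -refreshNodes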

  • Question 27:

    Which two actions must you take if you are running a Hadoop cluster with a single NameNode and six DataNodes, and you want to change a configuration parameter so that it affects all six DataNodes? (Choose two)

    A. You must modify the configuration files on the NameNode only. DataNodes read their configuration from the master nodes

    B. You must modify the configuration files on each of the DataNodes machines

    C. You don't need to restart any daemon, as they will pick up changes automatically

    D. You must restart the NameNode daemon to apply the changes to the cluster

    E. You must restart all six DataNode daemons to apply the changes to the cluster
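
    As an illustration, pushing a changed file to every worker and bouncing the daemons can be scripted; a sketch assuming hypothetical hostnames dn1 through dn6 and the stock hadoop-daemon.sh helper:

    for host in dn1 dn2 dn3 dn4 dn5 dn6; do

      scp hdfs-site.xml $host:/etc/hadoop/conf/

      ssh $host 'hadoop-daemon.sh stop datanode && hadoop-daemon.sh start datanode'

    done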

  • Question 28:

    A slave node in your cluster has four 2 TB hard drives installed (4 x 2 TB). The DataNode is configured to store HDFS blocks on all disks. You set the value of the dfs.datanode.du.reserved parameter to 100 GB. How does this alter HDFS block storage?

    A. 25 GB on each hard drive may not be used to store HDFS blocks

    B. 100 GB on each hard drive may not be used to store HDFS blocks

    C. All hard drives may be used to store HDFS blocks as long as at least 100 GB in total is available on the node

    D. A maximum of 100 GB on each hard drive may be used to store HDFS blocks
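
    Note that dfs.datanode.du.reserved is applied per volume, so with four data disks this node sets aside 4 x 100 GB in total; the value goes into hdfs-site.xml in bytes (100 GB = 107374182400):

    <property><name>dfs.datanode.du.reserved</name><value>107374182400</value></property>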

  • Question 29:

    Your cluster is running MapReduce version 2 (MRv2) on YARN. Your ResourceManager is configured to use the FairScheduler. Now you want to configure your scheduler such that a new user on the cluster can submit jobs into their own queue at application submission. Which configuration should you set?

    A. You can specify a new queue name when the user submits a job, and the new queue can be created dynamically if the property yarn.scheduler.fair.allow-undeclared-pools = true

    B. yarn.scheduler.fair.user-as-default-queue = false and yarn.scheduler.fair.allow-undeclared-pools = true

    C. You can specify a new queue name when the user submits a job, and the new queue can be created dynamically if yarn.scheduler.fair.user-as-default-queue = false

    D. You can specify a new queue name per application in the allocations.xml file and have new jobs automatically assigned to the application queue
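
    For reference, the two FairScheduler properties touched on here go in yarn-site.xml; an illustrative combination:

    <property><name>yarn.scheduler.fair.user-as-default-queue</name><value>true</value></property>

    <property><name>yarn.scheduler.fair.allow-undeclared-pools</name><value>true</value></property>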

  • Question 30:

    Which command does Hadoop offer to discover missing or corrupt HDFS data?

    A. hdfs fs -du

    B. hdfs fsck

    C. dskchk

    D. The map-only checksum

    E. Hadoop does not provide any tools to discover missing or corrupt data; there is no need because three replicas are kept for each data block
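
    For reference, fsck runs against an HDFS path and can narrow its output to damaged data; a sketch:

    hdfs fsck / -list-corruptfileblocks

    hdfs fsck / -files -blocks -locations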

Tips on How to Prepare for the Exams

Nowadays, certification exams are becoming more and more important, and more and more enterprises require them when you apply for a job. But how do you prepare for an exam effectively? How do you prepare in a short time with less effort? How do you get an ideal result, and how do you find the most reliable resources? Here on Vcedump.com, you will find all the answers. Vcedump.com provides not only Cloudera exam questions, answers, and explanations, but also complete assistance with your exam preparation and certification application. If you are confused about your CCA-500 exam preparation or your Cloudera certification application, do not hesitate to visit Vcedump.com to find your solutions.