Exam Details

  • Exam Code: CCA-505
  • Exam Name: Cloudera Certified Administrator for Apache Hadoop (CCAH) CDH5 Upgrade Exam
  • Certification: CCAH
  • Vendor: Cloudera
  • Total Questions: 45 Q&As
  • Last Updated: May 16, 2024

Cloudera CCAH CCA-505 Questions & Answers

  • Question 31:

    You are upgrading a Hadoop cluster from HDFS and MapReduce version 1 (MRv1) to one running HDFS and MapReduce version 2 (MRv2) on YARN. You want to set and enforce a block size of 128 MB for all new files written to the cluster after the upgrade. What should you do?

    A. Set dfs.block.size to 128M on all the worker nodes, on all client machines, and on the NameNode, and set the parameter to final.

    B. Set dfs.block.size to 134217728 on all the worker nodes, on all client machines, and on the NameNode, and set the parameter to final.

    C. Set dfs.block.size to 134217728 on all the worker nodes and client machines, and set the parameter to final. You do not need to set this value on the NameNode.

    D. Set dfs.block.size to 128M on all the worker nodes and client machines, and set the parameter to final. You do not need to set this value on the NameNode.

    E. You cannot enforce this, since client code can always override this value.
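For context on the units the choices contrast, dfs.block.size is specified in bytes, so 128 MB must be written out as 134217728. A minimal shell sketch of the arithmetic, with the matching hdfs-site.xml fragment shown as comments:

```shell
# 128 MB expressed in bytes, the unit dfs.block.size expects
bytes=$((128 * 1024 * 1024))
echo "$bytes"   # prints 134217728

# Corresponding hdfs-site.xml fragment; marking the property final
# prevents client-side configuration from overriding it:
#   <property>
#     <name>dfs.block.size</name>
#     <value>134217728</value>
#     <final>true</final>
#   </property>
```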

  • Question 32:

    You have converted your Hadoop cluster from a MapReduce 1 (MRv1) architecture to a MapReduce 2 (MRv2) on YARN architecture. Your developers are accustomed to specifying the number of map and reduce tasks (resource allocation) when they run jobs. A developer wants to know how to specify the number of reduce tasks when a specific job runs. Which method should you tell that developer to implement?

    A. Developers specify reduce tasks in the exact same way for both MapReduce version 1 (MRv1) and MapReduce version 2 (MRv2) on YARN. Thus, executing -D mapreduce.job.reduces=2 will specify 2 reduce tasks.

    B. In YARN, the ApplicationMaster is responsible for requesting the resources required for a specific job. Thus, executing -D yarn.applicationmaster.reduce.tasks=2 will specify that the ApplicationMaster launch two task containers on the worker nodes.

    C. In YARN, resource allocation is a function of megabytes of memory in multiples of 1024 MB. Thus, they should specify the amount of memory they need by executing -D mapreduce.reduce.memory-mb=2048.

    D. In YARN, resource allocation is a function of virtual cores specified by the ApplicationMaster making requests to the NodeManager, where a reduce task is handled by a single container (and thus a single virtual core). Thus, the developer needs to specify the number of virtual cores to the NodeManager by executing -D yarn.nodemanager.cpu-vcores=2.

    E. MapReduce version 2 (MRv2) on YARN abstracts resource allocation away from the idea of "tasks" into memory and virtual cores, thus eliminating the need for a developer to specify the number of reduce tasks, and indeed preventing the developer from specifying the number of reduce tasks.
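As background on the -D generic-option syntax the choices reference: Hadoop programs that use the standard ToolRunner accept per-job properties on the command line. A hedged sketch, in which the jar name, driver class, and paths are placeholders:

```shell
# Placeholder jar, class, and paths; the -D pair applies only to this
# job submission, not to the cluster configuration.
hadoop jar myjob.jar MyJobDriver \
  -D mapreduce.job.reduces=2 \
  /input /output
```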

  • Question 33:

    You have a Hadoop cluster running HDFS, and a gateway machine external to the cluster from which clients submit jobs. What do you need to do in order to run Impala on the cluster and submit jobs from the command line of the gateway machine?

    A. Install the impalad daemon, statestored daemon, and catalogd daemon on each machine in the cluster and on the gateway node

    B. Install the impalad daemon on each machine in the cluster, the statestored daemon and catalogd daemon on one machine in the cluster, and the impala shell on your gateway machine

    C. Install the impalad daemon and the impala shell on your gateway machine, and the statestored daemon and catalogd daemon on one of the nodes in the cluster

    D. Install the impalad daemon, the statestored daemon, the catalogd daemon, and the impala shell on your gateway machine

    E. Install the impalad daemon, statestored daemon, and catalogd daemon on each machine in the cluster, and the impala shell on your gateway machine

  • Question 34:

    Assume you have a file named foo.txt in your local directory. You issue the following three commands:

    hadoop fs -mkdir input
    hadoop fs -put foo.txt input/foo.txt
    hadoop fs -put foo.txt input

    What happens when you issue that third command?

    A. The write succeeds, overwriting foo.txt in HDFS with no warning

    B. The write silently fails

    C. The file is uploaded and stored as a plain named input

    D. You get an error message telling you that input is not a directory

    E. You get an error message telling you that foo.txt already exists. The file is not written to HDFS

    F. You get an error message telling you that foo.txt already exists, and asking you if you would like to overwrite

    G. You get a warning that foo.txt is being overwritten
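For reference, the stem's commands correspond to the following dash-prefixed forms; the comments describe stock hadoop fs -put behavior when the destination file already exists (no overwrite by default):

```shell
hadoop fs -mkdir input                 # create the directory in HDFS
hadoop fs -put foo.txt input/foo.txt   # first copy: succeeds
hadoop fs -put foo.txt input           # destination resolves to input/foo.txt,
                                       # which already exists, so -put reports
                                       # an error instead of overwriting
```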

  • Question 35:

    You are configuring your cluster to run HDFS and MapReduce v2 (MRv2) on YARN. Which daemons need to be installed on your cluster's master nodes? (Choose two)

    A. ResourceManager

    B. DataNode

    C. NameNode

    D. JobTracker

    E. TaskTracker

    F. HMaster
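One way to verify which daemons a given node is actually running is jps, which lists Java processes by class name; a sketch (what appears depends on the node's role, and the process ids shown are made up for illustration):

```shell
# On a master node of an HDFS + MRv2/YARN cluster, jps output would
# typically include daemon names alongside their pids, for example:
jps
#   2481 NameNode
#   2733 ResourceManager
#   2950 Jps
```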

  • Question 36:

    You use the hadoop fs -put command to add a file "sales.txt" to HDFS. This file is small enough to fit into a single block, which is replicated to three nodes in your cluster (with a replication factor of 3). One of the nodes holding this file (a single block) fails. How will the cluster handle the replication of this file in this situation?

    A. The cluster will re-replicate the file the next time the system administrator reboots the NameNode daemon (as long as the file's replication factor doesn't fall below two)

    B. This file will be immediately re-replicated and all other HDFS operations on the cluster will halt until the cluster's replication values are restored

    C. The file will remain under-replicated until the administrator brings that node back online

    D. The file will be re-replicated automatically after the NameNode determines it is under-replicated based on the block reports it receives from the DataNodes
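For background, an administrator can inspect replication state with hdfs fsck; a sketch (the file path is an example):

```shell
# Per-file block and replica report (path is an example):
hdfs fsck /user/alice/sales.txt -files -blocks -locations

# Cluster-wide health summary, including any under-replicated blocks
# the NameNode has inferred from DataNode block reports:
hdfs fsck /
```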

  • Question 37:

    For each YARN job, the Hadoop framework generates task log files. Where are these task log files stored?

    A. In HDFS, in the directory of the user who generated the job

    B. On the local disk of the slave node running the task

    C. Cached in the YARN container running the task, then copied into HDFS on job completion

    D. Cached by the NodeManager managing the job containers, then written to a log directory on the NameNode
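For reference, the NodeManager's local container-log location is controlled by yarn.nodemanager.log-dirs in yarn-site.xml, and with log aggregation enabled the logs are later collected into HDFS; a sketch (the values and application id are illustrative):

```shell
# yarn-site.xml fragment, shown as comments (values illustrative):
#   <property>
#     <name>yarn.nodemanager.log-dirs</name>
#     <value>/var/log/hadoop-yarn/containers</value>
#   </property>
#   <property>
#     <name>yarn.log-aggregation-enable</name>
#     <value>true</value>
#   </property>

# After a job finishes with aggregation enabled, its logs can be read with:
yarn logs -applicationId application_1400000000000_0001   # illustrative id
```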

  • Question 38:

    Which YARN daemon or service monitors a Container's per-application resource usage (e.g., memory, CPU)?

    A. NodeManager

    B. ApplicationMaster

    C. ApplicationManagerService

    D. ResourceManager

  • Question 39:

    You are planning a Hadoop cluster and considering implementing 10 Gigabit Ethernet as the network fabric. Which workloads benefit the most from a faster network fabric?

    A. When your workload generates a large amount of output data, significantly larger than the amount of intermediate data

    B. When your workload generates a large amount of intermediate data, on the order of the input data itself

    C. When your workload consumes a large amount of input data, relative to the entire capacity of HDFS

    D. When your workload consists of processor-intensive tasks

  • Question 40:

    During the execution of a MapReduce v2 (MRv2) job on YARN, where does the Mapper place the intermediate data of each Map task?

    A. The Mapper stores the intermediate data on the node running the job's ApplicationMaster so that it is available to YARN's ShuffleService before the data is presented to the Reducer

    B. The Mapper stores the intermediate data in HDFS on the node where the Map task ran, in the HDFS /usercache/[user]/appcache/application_[appid] directory for the user who ran the job

    C. YARN holds the intermediate data in the NodeManager's memory (a container) until it is transferred to the Reducers

    D. The Mapper stores the intermediate data on the underlying filesystem of the local disk, in the directories specified by yarn.nodemanager.local-dirs

    E. The Mapper transfers the intermediate data immediately to the Reducers as it is generated by the Map task
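For reference, the local spill directories mentioned in the choices are configured per worker in yarn-site.xml; a sketch (the paths are illustrative), plus a way to inspect a live configuration:

```shell
# yarn-site.xml fragment, shown as comments (paths illustrative):
#   <property>
#     <name>yarn.nodemanager.local-dirs</name>
#     <value>/data/1/yarn/local,/data/2/yarn/local</value>
#   </property>
# Map output spills land under these directories on each worker's
# local filesystem, not in HDFS.

# Inspect the setting on a node; HADOOP_CONF_DIR is assumed to point
# at the cluster's configuration directory:
grep -A 2 'yarn.nodemanager.local-dirs' "$HADOOP_CONF_DIR/yarn-site.xml"
```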

Tips on How to Prepare for the Exams

Nowadays, certification exams are becoming more and more important and are required by more and more enterprises when applying for a job. But how do you prepare for an exam effectively? How do you prepare in a short time with less effort? How do you get an ideal result, and how do you find the most reliable resources? Here on Vcedump.com, you will find all the answers. Vcedump.com provides not only Cloudera exam questions, answers, and explanations but also complete assistance with your exam preparation and certification application. If you are confused about your CCA-505 exam preparation and Cloudera certification application, do not hesitate to visit Vcedump.com to find your solutions.