Exam Details

  • Exam Code: CCA175
  • Exam Name: CCA Spark and Hadoop Developer Exam
  • Certification: Cloudera Certified Associate (CCA)
  • Vendor: Cloudera
  • Total Questions: 95 Q&As
  • Last Updated: May 12, 2024

Cloudera Certified Associate (CCA) CCA175 Questions & Answers

  • Question 41:

    Problem Scenario 26: You need to implement a near real time solution for collecting information as it is submitted in files, with the details below. You have been given the directory location /tmp/nrtcontent (create it if it does not exist). Assume your department's upstream service continuously commits data to this directory as new files (not as a stream of data, because this is a near real time solution). As soon as a file is committed to this directory, it must be available in HDFS under /tmp/flume.

    Data:

      echo "I am preparing for CCA175 from ABCTECH.com" > /tmp/nrtcontent/.he1.txt
      mv /tmp/nrtcontent/.he1.txt /tmp/nrtcontent/he1.txt

    After a few minutes:

      echo "I am preparing for CCA175 from TopTech.com" > /tmp/nrtcontent/.qt1.txt
      mv /tmp/nrtcontent/.qt1.txt /tmp/nrtcontent/qt1.txt

    Write a Flume configuration file named flumes.conf and use it to load the data into HDFS with the following additional properties:

    1. Spool /tmp/nrtcontent
    2. The file prefix in HDFS should be events
    3. The file suffix should be .log
    4. If a file is not yet committed and still in use, it should have _ as a prefix.
    5. Data should be written as text to HDFS
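
    One plausible flumes.conf for this scenario is sketched below; the agent and component names (agent1, source1, sink1, channel1) are illustrative choices, not part of the question:

      # spooling directory source watching /tmp/nrtcontent
      agent1.sources = source1
      agent1.sinks = sink1
      agent1.channels = channel1

      agent1.sources.source1.type = spooldir
      agent1.sources.source1.spoolDir = /tmp/nrtcontent
      agent1.sources.source1.channels = channel1

      # HDFS sink with the required prefix, suffix, in-use prefix, and text output
      agent1.sinks.sink1.type = hdfs
      agent1.sinks.sink1.channel = channel1
      agent1.sinks.sink1.hdfs.path = /tmp/flume
      agent1.sinks.sink1.hdfs.filePrefix = events
      agent1.sinks.sink1.hdfs.fileSuffix = .log
      agent1.sinks.sink1.hdfs.inUsePrefix = _
      agent1.sinks.sink1.hdfs.fileType = DataStream

      agent1.channels.channel1.type = file

    The agent could then be started with something like: flume-ng agent --conf-file flumes.conf --name agent1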

  • Question 42:

    Problem Scenario 27: You need to implement a near real time solution for collecting information as it is submitted in files, with the details below.

    Data:

      echo "IBM,100,20160104" >> /tmp/spooldir/bb/.bb.txt
      echo "IBM,103,20160105" >> /tmp/spooldir/bb/.bb.txt
      mv /tmp/spooldir/bb/.bb.txt /tmp/spooldir/bb/bb.txt

    After a few minutes:

      echo "IBM,100.2,20160104" >> /tmp/spooldir/dr/.dr.txt
      echo "IBM,103.1,20160105" >> /tmp/spooldir/dr/.dr.txt
      mv /tmp/spooldir/dr/.dr.txt /tmp/spooldir/dr/dr.txt

    Requirements:

    You have been given the directory location /tmp/spooldir (create it if it does not exist). You have a financial subscription for stock prices from Bloomberg as well as Reuters, and every hour you download new files over FTP from their respective sites into the directories /tmp/spooldir/bb and /tmp/spooldir/dr respectively. As soon as a file is committed to either directory, it must be available in HDFS in a single directory, /tmp/flume/finance.

    Write a Flume configuration file named flume7.conf and use it to load the data into HDFS with the following additional properties:

    1. Spool /tmp/spooldir/bb and /tmp/spooldir/dr
    2. The file prefix in HDFS should be events
    3. The file suffix should be .log
    4. If a file is not yet committed and still in use, it should have _ as a prefix.
    5. Data should be written as text to HDFS
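
    A natural layout is a single agent with two spooldir sources feeding one channel and one HDFS sink, so both feeds land in the same HDFS directory; the component names below are again illustrative:

      agent1.sources = source1 source2
      agent1.sinks = sink1
      agent1.channels = channel1

      # one spooling directory source per feed, both writing to the same channel
      agent1.sources.source1.type = spooldir
      agent1.sources.source1.spoolDir = /tmp/spooldir/bb
      agent1.sources.source1.channels = channel1

      agent1.sources.source2.type = spooldir
      agent1.sources.source2.spoolDir = /tmp/spooldir/dr
      agent1.sources.source2.channels = channel1

      # single HDFS sink so everything lands in /tmp/flume/finance
      agent1.sinks.sink1.type = hdfs
      agent1.sinks.sink1.channel = channel1
      agent1.sinks.sink1.hdfs.path = /tmp/flume/finance
      agent1.sinks.sink1.hdfs.filePrefix = events
      agent1.sinks.sink1.hdfs.fileSuffix = .log
      agent1.sinks.sink1.hdfs.inUsePrefix = _
      agent1.sinks.sink1.hdfs.fileType = DataStream

      agent1.channels.channel1.type = file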

  • Question 43:

    Problem Scenario 2:

    There is a parent organization called "ABC Group Inc", which has two child companies named Tech Inc and MPTech. Each company's employee information is given in a separate text file, as below. Please do the following activities with the employee details.

    Tech Inc.txt
      1,Alok,Hyderabad
      2,Krish,Hongkong
      3,Jyoti,Mumbai
      4,Atul,Banglore
      5,Ishan,Gurgaon

    MPTech.txt
      6,John,Newyork
      7,alp2004,California
      8,tellme,Mumbai
      9,Gagan21,Pune
      10,Mukesh,Chennai

    1. Which command will you use to check all the available command line options on HDFS, and how will you get help for an individual command?
    2. Create a new empty directory named Employee using the command line, and also create an empty file named Techinc.txt in it.
    3. Load both companies' employee data into the Employee directory (how do you override an existing file in HDFS?).
    4. Merge both companies' employee data into a single file called MergedEmployee.txt; the merged file should have a newline character at the end of each file's content.
    5. Upload the merged file to HDFS and change its permissions on HDFS so that the owner and group members can read and write, and other users can read the file.
    6. Write a command to export an individual file as well as the entire directory from HDFS to the local file system.
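
    A possible command sequence, assuming the two input files sit in the local working directory (the local paths are placeholders):

      hdfs dfs -help                                        # 1. all HDFS command line options
      hdfs dfs -help mkdir                                  # 1. help for an individual command
      hdfs dfs -mkdir Employee                              # 2. create the empty directory
      hdfs dfs -touchz Employee/Techinc.txt                 # 2. create an empty file in it
      hdfs dfs -put -f "Tech Inc.txt" Employee/             # 3. -f overrides an existing file
      hdfs dfs -put -f MPTech.txt Employee/
      hdfs dfs -getmerge -nl Employee MergedEmployee.txt    # 4. -nl adds a newline per file
      hdfs dfs -put MergedEmployee.txt Employee/            # 5. upload the merged file
      hdfs dfs -chmod 664 Employee/MergedEmployee.txt       # 5. rw owner/group, r others
      hdfs dfs -get Employee/MergedEmployee.txt ~/          # 6. export a single file
      hdfs dfs -get Employee ~/                             # 6. export the whole directory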

  • Question 44:

    Problem Scenario 81: You have been given a MySQL DB with the following details. You have been given the following product.csv file.

    product.csv:

      productID,productCode,name,quantity,price
      1001,PEN,Pen Red,5000,1.23
      1002,PEN,Pen Blue,8000,1.25
      1003,PEN,Pen Black,2000,1.25
      1004,PEC,Pencil 2B,10000,0.48
      1005,PEC,Pencil 2H,8000,0.49
      1006,PEC,Pencil HB,0,9999.99

    Now accomplish the following activities.

    1. Create a Hive ORC table using SparkSQL.
    2. Load this data into the Hive table.
    3. Create a Hive Parquet table using SparkSQL and load data into it.
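
    A sketch from the spark-shell on a Spark 1.x quickstart environment, where sqlContext is a HiveContext; the HDFS path and the table names product_orc and product_parquet are assumptions:

      // read the CSV, drop the header row, and map the columns to typed fields
      val raw = sc.textFile("/user/cloudera/product.csv")
      val header = raw.first()
      val products = raw.filter(_ != header)
        .map(_.split(","))
        .map(p => (p(0).toInt, p(1), p(2), p(3).toInt, p(4).toDouble))

      import sqlContext.implicits._
      val df = products.toDF("productID", "productCode", "name", "quantity", "price")
      df.registerTempTable("products_temp")

      // 1. create a Hive ORC table
      sqlContext.sql("CREATE TABLE product_orc (productID int, productCode string, " +
        "name string, quantity int, price double) STORED AS ORC")
      // 2. load the data into it from the temp table
      sqlContext.sql("INSERT INTO TABLE product_orc SELECT * FROM products_temp")
      // 3. same again as a Parquet table
      sqlContext.sql("CREATE TABLE product_parquet (productID int, productCode string, " +
        "name string, quantity int, price double) STORED AS PARQUET")
      sqlContext.sql("INSERT INTO TABLE product_parquet SELECT * FROM products_temp")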

  • Question 45:

    Problem Scenario 83: In continuation of the previous question, please accomplish the following activities.

    1. Select all the records with quantity >= 5000 and name starting with 'Pen'.
    2. Select all the records with quantity >= 5000, price less than 1.24, and name starting with 'Pen'.
    3. Select all the records which do not have quantity >= 5000 and name starting with 'Pen'.
    4. Select all the products whose name is 'Pen Red' or 'Pen Black'.
    5. Select all the products which have price BETWEEN 1.0 AND 2.0 AND quantity BETWEEN 1000 AND 2000.
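
    Sketches of the corresponding SparkSQL queries, assuming the data from the previous scenario is registered as a table named products (an assumed name carried over, not stated here):

      // 1. quantity >= 5000 and name starts with 'Pen'
      sqlContext.sql("SELECT * FROM products WHERE quantity >= 5000 AND name LIKE 'Pen%'").show()
      // 2. additionally price < 1.24
      sqlContext.sql("SELECT * FROM products WHERE quantity >= 5000 AND price < 1.24 AND name LIKE 'Pen%'").show()
      // 3. the negation of query 1
      sqlContext.sql("SELECT * FROM products WHERE NOT (quantity >= 5000 AND name LIKE 'Pen%')").show()
      // 4. name is one of two exact values
      sqlContext.sql("SELECT * FROM products WHERE name IN ('Pen Red', 'Pen Black')").show()
      // 5. both BETWEEN conditions
      sqlContext.sql("SELECT * FROM products WHERE price BETWEEN 1.0 AND 2.0 AND quantity BETWEEN 1000 AND 2000").show()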

  • Question 46:

    Problem Scenario 62: You have been given the below code snippet.

      val a = sc.parallelize(List("dog", "tiger", "lion", "cat", "panther", "eagle"), 2)
      val b = a.map(x => (x.length, x))
      operation1

    Write a correct code snippet for operation1 which will produce the desired output, shown below.

      Array[(Int, String)] = Array((3,xdogx), (5,xtigerx), (4,xlionx), (3,xcatx), (7,xpantherx), (5,xeaglex))
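
    The expected output keeps each key and wraps each value in the letter x, which is exactly what mapValues gives; one correct operation1:

      b.mapValues("x" + _ + "x").collect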

  • Question 47:

    Problem Scenario 28: You need to implement a near real time solution for collecting information as it is submitted in files, with the data below.

    Data:

      echo "IBM,100,20160104" >> /tmp/spooldir2/.bb.txt
      echo "IBM,103,20160105" >> /tmp/spooldir2/.bb.txt
      mv /tmp/spooldir2/.bb.txt /tmp/spooldir2/bb.txt

    After a few minutes:

      echo "IBM,100.2,20160104" >> /tmp/spooldir2/.dr.txt
      echo "IBM,103.1,20160105" >> /tmp/spooldir2/.dr.txt
      mv /tmp/spooldir2/.dr.txt /tmp/spooldir2/dr.txt

    You have been given the directory location /tmp/spooldir2 (create it if it does not exist). As soon as a file is committed to this directory, it must be available in HDFS in both the /tmp/flume/primary and /tmp/flume/secondary locations. However, note that /tmp/flume/secondary is optional: if a transaction that writes to this directory fails, it need not be rolled back.

    Write a Flume configuration file named flume8.conf and use it to load the data into HDFS with the following additional properties:

    1. Spool the /tmp/spooldir2 directory
    2. The file prefix in HDFS should be events
    3. The file suffix should be .log
    4. If a file is not yet committed and still in use, it should have _ as a prefix.
    5. Data should be written as text to HDFS
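
    This maps naturally onto Flume's replicating channel selector, with the secondary channel marked optional so a failed write to the secondary path is not rolled back; the component names below are illustrative:

      agent1.sources = source1
      agent1.sinks = sink1 sink2
      agent1.channels = channel1 channel2

      # replicate each event to both channels; channel2 failures are tolerated
      agent1.sources.source1.type = spooldir
      agent1.sources.source1.spoolDir = /tmp/spooldir2
      agent1.sources.source1.channels = channel1 channel2
      agent1.sources.source1.selector.type = replicating
      agent1.sources.source1.selector.optional = channel2

      # primary HDFS sink
      agent1.sinks.sink1.type = hdfs
      agent1.sinks.sink1.channel = channel1
      agent1.sinks.sink1.hdfs.path = /tmp/flume/primary
      agent1.sinks.sink1.hdfs.filePrefix = events
      agent1.sinks.sink1.hdfs.fileSuffix = .log
      agent1.sinks.sink1.hdfs.inUsePrefix = _
      agent1.sinks.sink1.hdfs.fileType = DataStream

      # secondary (optional) HDFS sink
      agent1.sinks.sink2.type = hdfs
      agent1.sinks.sink2.channel = channel2
      agent1.sinks.sink2.hdfs.path = /tmp/flume/secondary
      agent1.sinks.sink2.hdfs.filePrefix = events
      agent1.sinks.sink2.hdfs.fileSuffix = .log
      agent1.sinks.sink2.hdfs.inUsePrefix = _
      agent1.sinks.sink2.hdfs.fileType = DataStream

      agent1.channels.channel1.type = file
      agent1.channels.channel2.type = file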

  • Question 48:

    Problem Scenario 9: You have been given the following MySQL database details as well as other info.

      user=retail_dba
      password=cloudera
      database=retail_db
      jdbc URL = jdbc:mysql://quickstart:3306/retail_db

    Please accomplish the following.

    1. Import the departments table into a directory.
    2. Import the departments table into the same directory again (the directory already exists, so it should not override it but append the results).
    3. Also make sure the result fields are terminated by '|' and lines are terminated by '\n'.
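
    A sketch of the two Sqoop invocations; the target directory /user/cloudera/departments is an assumed name:

      # 1. initial import into a directory
      sqoop import \
        --connect jdbc:mysql://quickstart:3306/retail_db \
        --username retail_dba \
        --password cloudera \
        --table departments \
        --target-dir /user/cloudera/departments

      # 2 and 3. import again into the same directory, appending rather than
      # overriding, with the requested field and line terminators
      sqoop import \
        --connect jdbc:mysql://quickstart:3306/retail_db \
        --username retail_dba \
        --password cloudera \
        --table departments \
        --target-dir /user/cloudera/departments \
        --append \
        --fields-terminated-by '|' \
        --lines-terminated-by '\n'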

  • Question 49:

    Problem Scenario 20: You have been given a MySQL DB with the following details.

      user=retail_dba
      password=cloudera
      database=retail_db
      table=retail_db.categories
      jdbc URL = jdbc:mysql://quickstart:3306/retail_db

    Please accomplish the following activities.

    1. Write a Sqoop job which will import the "retail_db.categories" table to HDFS, into a directory named "categories_targetJob".
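
    A sketch using a saved Sqoop job (the job name sqoopjob is an assumption; note the mandatory space between -- and import):

      sqoop job --create sqoopjob \
        -- import \
        --connect jdbc:mysql://quickstart:3306/retail_db \
        --username retail_dba \
        --password cloudera \
        --table categories \
        --target-dir categories_targetJob

      # run the saved job
      sqoop job --exec sqoopjob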

  • Question 50:

    Problem Scenario 50: You have been given the below code snippet (calculating an average score), with intermediate output.

      type ScoreCollector = (Int, Double)
      type PersonScores = (String, (Int, Double))

      val initialScores = Array(("Fred", 88.0), ("Fred", 95.0), ("Fred", 91.0),
        ("Wilma", 93.0), ("Wilma", 95.0), ("Wilma", 98.0))

      val wilmaAndFredScores = sc.parallelize(initialScores).cache()

      val scores = wilmaAndFredScores.combineByKey(createScoreCombiner, scoreCombiner, scoreMerger)

      val averagingFunction = (personScore: PersonScores) => {
        val (name, (numberScores, totalScore)) = personScore
        (name, totalScore / numberScores)
      }

      val averageScores = scores.collectAsMap().map(averagingFunction)

    Expected output: averageScores: scala.collection.Map[String,Double] = Map(Fred -> 91.33333333333333, Wilma -> 95.33333333333333)

    Define all three required functions, which are inputs for the combineByKey method (createScoreCombiner, scoreCombiner, scoreMerger), and help us produce the required results.
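
    combineByKey needs three functions: one to create the per-key accumulator from the first value, one to fold further values into an accumulator within a partition, and one to merge accumulators across partitions. Definitions consistent with the ScoreCollector type above:

      // first value for a key: start the (count, total) accumulator
      val createScoreCombiner = (score: Double) => (1, score)

      // subsequent value in the same partition: bump count, add to total
      val scoreCombiner = (collector: ScoreCollector, score: Double) => {
        val (numberScores, totalScore) = collector
        (numberScores + 1, totalScore + score)
      }

      // merge accumulators for the same key across partitions
      val scoreMerger = (collector1: ScoreCollector, collector2: ScoreCollector) => {
        val (numScores1, totalScore1) = collector1
        val (numScores2, totalScore2) = collector2
        (numScores1 + numScores2, totalScore1 + totalScore2)
      }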

Tips on How to Prepare for the Exams

Nowadays, certification exams are becoming more and more important, and more and more enterprises require them when you apply for a job. But how do you prepare for an exam effectively? How do you prepare in a short time with less effort? How do you get an ideal result, and how do you find the most reliable resources? Here on Vcedump.com, you will find all the answers. Vcedump.com provides not only Cloudera exam questions, answers and explanations but also complete assistance with your exam preparation and certification application. If you are confused about your CCA175 exam preparation or your Cloudera certification application, do not hesitate to visit Vcedump.com to find your solutions.