Exam Details

  • Exam Code: ARA-C01
  • Exam Name: SnowPro Advanced: Architect Certification (ARA-C01)
  • Certification: Snowflake Certifications
  • Vendor: Snowflake
  • Total Questions: 65 Q&As
  • Last Updated: May 22, 2025

Snowflake Certifications ARA-C01 Questions & Answers

  • Question 51:

    An Architect uses COPY INTO with the ON_ERROR=SKIP_FILE option to bulk load CSV files into a table called TABLEA, using its table stage. One file named file5.csv fails to load. The Architect fixes the file and re-uploads it to the stage with the exact same file name it had previously.

    Which commands should the Architect use to load only the file5.csv file from the stage? (Choose two.)

    A. COPY INTO tablea FROM @%tablea RETURN_FAILED_ONLY = TRUE;

    B. COPY INTO tablea FROM @%tablea;

    C. COPY INTO tablea FROM @%tablea FILES = ('file5.csv');

    D. COPY INTO tablea FROM @%tablea FORCE = TRUE;

    E. COPY INTO tablea FROM @%tablea NEW_FILES_ONLY = TRUE;

    F. COPY INTO tablea FROM @%tablea MERGE = TRUE;
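
    For syntax reference, a COPY INTO statement that targets a single named file in a table stage is written as in the sketch below. It reuses the table and file names from the question; the combination of options shown is illustrative only.

    -- Load only the named file from TABLEA's table stage.
    COPY INTO tablea
      FROM @%tablea
      FILES = ('file5.csv')  -- restrict the load to this file
      FORCE = TRUE;          -- load it even if load metadata records a prior load of this file name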

  • Question 52:

    A media company needs a data pipeline that will ingest customer review data into a Snowflake table and apply some transformations. The company also needs to use Amazon Comprehend to do sentiment analysis and make the de-identified final data set available publicly for advertising companies that use different cloud providers in different regions.

    The data pipeline needs to run continuously and efficiently as new records arrive in the object storage, leveraging event notifications. Also, the operational complexity, maintenance of the infrastructure (including platform upgrades and security), and the development effort should be minimal.

    Which design will meet these requirements?

    A. Ingest the data using COPY INTO and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.

    B. Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Create an external function to do model inference with Amazon Comprehend and write the final records to a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.

    C. Ingest the data into Snowflake using Amazon EMR and PySpark using the Snowflake Spark connector. Apply transformations using another Spark job. Develop a python program to do model inference by leveraging the Amazon Comprehend text analysis API. Then write the results to a Snowflake table and create a listing in the Snowflake Marketplace to make the data available to other companies.

    D. Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.
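
    As background for the Snowpipe, stream, and task pattern referenced in the options, a minimal sketch follows. It assumes the Snowpipe-fed landing table raw_reviews, the target table scored_reviews, the warehouse pipeline_wh, and a Comprehend-backed external function get_sentiment (with its API integration) already exist; all of these names are assumptions.

    -- Capture new rows landed in the table by Snowpipe.
    CREATE STREAM raw_reviews_stream ON TABLE raw_reviews;

    -- Transform and score new rows only when the stream has data.
    CREATE TASK score_reviews
      WAREHOUSE = pipeline_wh
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('RAW_REVIEWS_STREAM')
    AS
      INSERT INTO scored_reviews (review_id, review_text, sentiment)
      SELECT review_id, review_text, get_sentiment(review_text)
      FROM raw_reviews_stream;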

  • Question 53:

    What are the purposes of creating a storage integration? (Choose three.)

    A. Control access to Snowflake data using a master encryption key that is maintained in the cloud provider's key management service.

    B. Store a generated identity and access management (IAM) entity for an external cloud provider regardless of the cloud provider that hosts the Snowflake account.

    C. Support multiple external stages using one single Snowflake object.

    D. Avoid supplying credentials when creating a stage or when loading or unloading data.

    E. Create private VPC endpoints that allow direct, secure connectivity between VPCs without traversing the public internet.

    F. Manage credentials from multiple cloud providers in one single Snowflake object.
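
    For context, a storage integration and the stages that reference it are created roughly as shown in this sketch. An AWS-hosted bucket is assumed, and the object names, bucket paths, and role ARN are placeholders.

    CREATE STORAGE INTEGRATION s3_int
      TYPE = EXTERNAL_STAGE
      STORAGE_PROVIDER = 'S3'
      ENABLED = TRUE
      STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake_access'
      STORAGE_ALLOWED_LOCATIONS = ('s3://my-bucket/raw/', 's3://my-bucket/curated/');

    -- Multiple external stages can reference the one integration,
    -- and no credentials are supplied when the stages are created.
    CREATE STAGE raw_stage     URL = 's3://my-bucket/raw/'     STORAGE_INTEGRATION = s3_int;
    CREATE STAGE curated_stage URL = 's3://my-bucket/curated/' STORAGE_INTEGRATION = s3_int;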

  • Question 54:

    Which steps are recommended best practices for prioritizing cluster keys in Snowflake? (Choose two.)

    A. Choose columns that are frequently used in join predicates.

    B. Choose lower cardinality columns to support clustering keys and cost effectiveness.

    C. Choose TIMESTAMP columns with nanoseconds for the highest number of unique rows.

    D. Choose cluster columns that are most actively used in selective filters.

    E. Choose cluster columns that are actively used in the GROUP BY clauses.
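
    As a quick syntax sketch, a clustering key is defined and then evaluated as shown below; the table and column names are hypothetical.

    -- Cluster on columns that appear in selective filters and join predicates.
    ALTER TABLE sales CLUSTER BY (sale_date, region);

    -- Inspect how well the table is clustered on those columns.
    SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(sale_date, region)');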

  • Question 55:

    An Architect would like to save quarter-end financial results for the previous six years.

    Which Snowflake feature can the Architect use to accomplish this?

    A. Search optimization service

    B. Materialized view

    C. Time Travel

    D. Zero-copy cloning

    E. Secure views
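
    For reference, zero-copy cloning, one of the listed features, snapshots a table in a single statement, as in this sketch with hypothetical object names.

    -- Run at quarter end to preserve the results as an independent, queryable object.
    CREATE TABLE financials_2024_q2 CLONE financials;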

  • Question 56:

    A Snowflake Architect is designing an application and tenancy strategy for an organization where strong legal isolation rules as well as multi-tenancy are requirements.

    Which approach will meet these requirements if role-based access control (RBAC) is a viable option for isolating tenants?

    A. Create accounts for each tenant in the Snowflake organization.

    B. Create an object-per-tenant strategy if row level security is viable for isolating tenants.

    C. Create an object-per-tenant strategy if row level security is not viable for isolating tenants.

    D. Create a multi-tenant table strategy if row level security is not viable for isolating tenants.
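
    To illustrate the row level security mentioned in the options, Snowflake implements it with row access policies, roughly as in this sketch; the mapping table, column, and object names are assumptions.

    -- Allow a row to be seen only by roles mapped to that row's tenant.
    CREATE ROW ACCESS POLICY tenant_policy AS (tid STRING) RETURNS BOOLEAN ->
      EXISTS (
        SELECT 1
        FROM tenant_role_map m
        WHERE m.tenant_id = tid
          AND m.role_name = CURRENT_ROLE()
      );

    ALTER TABLE customer_data ADD ROW ACCESS POLICY tenant_policy ON (tenant_id);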

  • Question 57:

    Which feature provides the capability to define an alternate cluster key for a table with an existing cluster key?

    A. External table

    B. Materialized view

    C. Search optimization

    D. Result cache
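
    As a point of reference, a materialized view, one of the listed features, can declare its own clustering key over a table that is already clustered differently; the sketch below uses hypothetical names.

    -- The base table keeps its existing cluster key; the materialized view
    -- stores the same data clustered by a different column.
    CREATE MATERIALIZED VIEW orders_by_customer
      CLUSTER BY (customer_id)
    AS
      SELECT order_id, customer_id, order_date, amount
      FROM orders;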

  • Question 58:

    A company is storing large numbers of small JSON files (ranging from 1-4 bytes) that are received from IoT devices and sent to a cloud provider. In any given hour, 100,000 files are added to the cloud provider's storage.

    What is the MOST cost-effective way to bring this data into a Snowflake table?

    A. An external table

    B. A pipe

    C. A stream

    D. A copy command at regular intervals
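
    For orientation, continuous loading driven by cloud event notifications is configured with a pipe, roughly as in this sketch; the stage, table, and pipe names are assumptions.

    -- AUTO_INGEST = TRUE lets the cloud provider's event notifications trigger each micro-batch load.
    CREATE PIPE iot_json_pipe AUTO_INGEST = TRUE AS
      COPY INTO iot_events
      FROM @iot_stage
      FILE_FORMAT = (TYPE = JSON);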