Databricks Certified Associate Developer for Apache Spark Exam
Total 206 questions
Question #6 (Topic: Exam A)
Which of the following operations is most likely to result in a shuffle?
A. DataFrame.join()
B. DataFrame.filter()
C. DataFrame.union()
D. DataFrame.where()
E. DataFrame.drop()
Answer: A
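A join shuffles because matching rows from both sides must be co-located by key before they can be paired, while filter, union, where, and drop can each operate on partitions in place. The routing step can be sketched in plain Python (this is an illustrative analogy, not Spark's implementation; all names here are hypothetical):

```python
from collections import defaultdict

def shuffle_by_key(rows, num_partitions):
    """Route each (key, value) row to a partition by hashing its key."""
    partitions = [defaultdict(list) for _ in range(num_partitions)]
    for key, value in rows:
        partitions[hash(key) % num_partitions][key].append(value)
    return partitions

def hash_join(left, right, num_partitions=4):
    """Shuffle both inputs, then match rows within each partition."""
    left_parts = shuffle_by_key(left, num_partitions)
    right_parts = shuffle_by_key(right, num_partitions)
    out = []
    for lp, rp in zip(left_parts, right_parts):
        for key, left_values in lp.items():
            for lv in left_values:
                for rv in rp.get(key, []):
                    out.append((key, lv, rv))
    return out
```

Because the same hash function routes both sides, rows sharing a key always meet in the same partition; that cross-partition data movement is the shuffle.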
Question #7 (Topic: Exam A)
The default value of spark.sql.shuffle.partitions is 200. Which of the following describes what that means?
A. By default, all DataFrames in Spark will be split to perfectly fill the memory of 200 executors.
B. By default, new DataFrames created by Spark will be split to perfectly fill the memory of 200 executors.
C. By default, Spark will only read the first 200 partitions of DataFrames to improve speed.
D. By default, all DataFrames in Spark, including existing DataFrames, will be split into 200 unique segments for parallelization.
E. By default, DataFrames will be split into 200 unique partitions when data is being shuffled.
Answer: E
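The setting only takes effect when a shuffle runs: the shuffle's output is written into that many partitions, regardless of the input's partition count. A minimal plain-Python sketch of hash partitioning into a fixed number of shuffle partitions (an analogy, not Spark's code; names are illustrative):

```python
SHUFFLE_PARTITIONS = 200  # analogous to the default of spark.sql.shuffle.partitions

def partition_for(key, num_partitions=SHUFFLE_PARTITIONS):
    """Pick the shuffle output partition for a key (hash partitioning)."""
    return hash(key) % num_partitions

# Every shuffled key lands in one of the 200 output partitions; DataFrames
# that never shuffle keep whatever partitioning they already had.
buckets = {partition_for(k) for k in range(1000)}
```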
Question #8 (Topic: Exam A)
Which of the following is the most complete description of lazy evaluation?
A. None of these options describe lazy evaluation
B. A process is lazily evaluated if its execution does not start until it is put into action by some type of trigger
C. A process is lazily evaluated if its execution does not start until it is forced to display a result to the user
D. A process is lazily evaluated if its execution does not start until it reaches a specified date and time
E. A process is lazily evaluated if its execution does not start until it is finished compiling
Answer: B
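The trigger in Spark is an action; transformations merely record a plan. Python generators give a compact, runnable analogy for this deferred-until-triggered behavior (an analogy only, not Spark's API):

```python
log = []

def transform(values):
    for v in values:
        log.append(v)          # side effect proves when work actually happens
        yield v * 2

pipeline = transform(range(5))  # building the pipeline runs nothing
assert log == []                # no work yet: evaluation is deferred
result = list(pipeline)         # the "action" (list) triggers execution
assert log == [0, 1, 2, 3, 4]
assert result == [0, 2, 4, 6, 8]
```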
Question #9 (Topic: Exam A)
Which of the following DataFrame operations is classified as an action?
A. DataFrame.drop()
B. DataFrame.coalesce()
C. DataFrame.take()
D. DataFrame.join()
E. DataFrame.filter()
Answer: C
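take() is an action because it actually returns rows to the driver, and it pulls only as many as it needs; drop, coalesce, join, and filter are transformations that just extend the plan. A plain-Python analogy using islice (illustrative, not the Spark API):

```python
from itertools import islice

evaluated = []

def mapped(values):
    for v in values:
        evaluated.append(v)   # records which elements were really computed
        yield v + 1

lazy = mapped(range(1_000_000))      # "transformation": nothing evaluated yet
first_three = list(islice(lazy, 3))  # "action" like take(3): forces only 3
```

Only three source elements are ever touched, which mirrors why take(n) can be cheap even on a huge dataset.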
Question #10 (Topic: Exam A)
Which of the following DataFrame operations is classified as a wide transformation?
A. DataFrame.filter()
B. DataFrame.join()
C. DataFrame.select()
D. DataFrame.drop()
E. DataFrame.union()
Answer: B
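The distinction can be sketched in plain Python (hypothetical helper names, not Spark internals): a narrow transformation like filter produces each output partition from exactly one input partition, while a wide transformation like join must regroup rows across all input partitions.

```python
def narrow_filter(partitions, predicate):
    """Narrow: each output partition depends on one input partition."""
    return [[row for row in part if predicate(row)] for part in partitions]

def wide_group_by_key(partitions, num_out=2):
    """Wide: each output partition may draw rows from every input partition."""
    out = [[] for _ in range(num_out)]
    for part in partitions:
        for key, value in part:
            out[hash(key) % num_out].append((key, value))
    return out
```

In the wide case, rows with the same key that start in different partitions end up together, and that cross-partition movement is what makes join a shuffle-inducing, wide transformation.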